https://juanitorduz.github.io/uc_pymc/
Unobserved Components Model as a Bayesian Model with PyMC
In this notebook I want to deep-dive into the idea of wrapping a statsmodels UnobservedComponents model as a Bayesian model with PyMC, as described in the (great!) post Fast Bayesian estimation of SARIMAX models. This is a nice excuse to get into some internals of how PyMC works, and I hope it can serve as a complement to the original post mentioned above. The post has two parts: in the first one we fit a UnobservedComponents model to a simulated time series; in the second one we describe the process of wrapping the model as a PyMC model, running the MCMC sampling, and generating out-of-sample predictions.
Remark: This notebook was motivated by trying to extend the Causal Impact implementation pycausalimpact from willfuks to the Bayesian setting (we would still need to restrict the level priors, see here). Please check out his newest implementation (tfcausalimpact) using tensorflow-probability.
Part 1: Unobserved Components Model
Prepare Notebook
import numpy as np
import pandas as pd
import matplotlib.pyplot as plt
import seaborn as sns
sns.set_style(
style="darkgrid",
rc={"axes.facecolor": "0.9", "grid.color": "0.8"}
)
sns.set_palette(palette="deep")
sns_c = sns.color_palette(palette="deep")
plt.rcParams["figure.facecolor"] = "white"
plt.rcParams["figure.figsize"] = [10, 6]
plt.rcParams["figure.dpi"] = 100
%config InlineBackend.figure_format = "svg"
Generate Sample Data
We generate a sample time series with a known trend, seasonal component, an external regressor and an auto-regressive term (see here for more details).
np.random.seed(1)
min_date = pd.to_datetime("2015-01-01")
max_date = pd.to_datetime("2022-01-01")
data_df = pd.DataFrame(
data={"date": pd.date_range(start=min_date, end=max_date, freq="M")}
)
n = data_df.shape[0]
def generate_data(n, sigma_eta, sigma_epsilon):
    y = np.zeros(n)
    mu = np.zeros(n)
    epsilon = np.zeros(n)
    eta = np.zeros(n)
    for t in range(1, n):
        eta[t] = np.random.normal(loc=0.0, scale=sigma_eta)
        mu[t] = mu[t - 1] + eta[t]
        epsilon[t] = np.random.normal(loc=0.0, scale=sigma_epsilon)
        y[t] = mu[t] + epsilon[t]
    return y, mu
sigma_eta = 0.1
sigma_epsilon = 0.1
y, mu = generate_data(n=n, sigma_eta=sigma_eta, sigma_epsilon=sigma_epsilon)
data_df["y"] = y
x = np.random.uniform(low=0.0, high=1.0, size=n)
data_df["x"] = np.where( x > 0.80, x, 0)
data_df["cs"] = np.sin(2 * np.pi * data_df["date"].dt.dayofyear / 356.5)
data_df["cc"] = np.cos(3 * np.pi * data_df["date"].dt.dayofyear / 356.5)
data_df["s"] = data_df["cs"] + data_df["cc"]
# Construct target variable.
data_df["z"] = data_df["y"] + data_df["x"] + data_df["s"]
data_df["z"] = data_df["z"] + 0.5 * data_df["z"].shift(1).fillna(0)
Let us plot the time series and its components:
fig, ax = plt.subplots()
sns.lineplot(x="date", y="z", data=data_df, marker="o", markersize=5, color="black", label="z", ax=ax)
sns.lineplot(x="date", y="y", data=data_df, marker="o", markersize=5, color=sns_c[0], alpha=0.6, label="y (local level)", ax=ax)
sns.lineplot(x="date", y="s", data=data_df, marker="o", markersize=5, color=sns_c[1], alpha=0.6, label="seasonal", ax=ax)
sns.lineplot(x="date", y="x", data=data_df, marker="o", markersize=5, color=sns_c[2], alpha=0.6, label="x", ax=ax)
ax.legend(loc="center left", bbox_to_anchor=(1, 0.5))
ax.set(title="Simulated data components");
Train-Test Split
Next we split the data into a training and test set.
# Set date as index.
data_df.set_index("date", inplace=True)
data_df.index = pd.DatetimeIndex(
data=data_df.index.values,
freq=data_df.index.inferred_freq
)
Let us see the split:
train_test_ratio = 0.80
n_train = int(n * train_test_ratio)
n_test = n - n_train
data_train_df = data_df[: n_train]
data_test_df = data_df[- n_test :]
y_train = data_train_df["z"]
x_train = data_train_df[["x"]]
y_test = data_test_df["z"]
x_test = data_test_df[["x"]]
fig, ax = plt.subplots()
sns.lineplot(x=y_train.index, y=y_train, marker="o", markersize=5, color=sns_c[0], label="y_train", ax=ax)
sns.lineplot(x=y_test.index, y=y_test, marker="o", markersize=5, color=sns_c[1], label="y_test", ax=ax)
ax.axvline(x=x_train.tail(1).index[0], color=sns_c[6], linestyle="--", label="train-test-split")
ax.legend(loc="center left", bbox_to_anchor=(1, 0.5))
ax.set(title="Train-Test Split");
Fit Model
We now fit an UnobservedComponents model (also known as a structural time series model). Recall that this is a model of the form:
$y_t = \mu_t + \gamma_t + c_t + \varepsilon_t$
where $$\mu_t$$ is the trend component, $$\gamma_t$$ is the seasonal component, $$c_t$$ is the cycle component and $$\varepsilon_t$$ is the irregular component. Please see the great documentation for more details. For this specific case we use $$4$$ Fourier terms to model the yearly seasonality (similar to prophet). In addition, we set the parameter mle_regression to True so that the coefficient of the external regressor is estimated by maximum likelihood as one of the model parameters (rather than being included in the state vector).
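As a rough reference, a deterministic analogue of the frequency-domain seasonal component with period $$12$$ and $$4$$ harmonics would be
$$\gamma_t = \sum_{j=1}^{4}\left[a_j \cos\left(\frac{2\pi j t}{12}\right) + b_j \sin\left(\frac{2\pi j t}{12}\right)\right]$$
The actual statsmodels formulation additionally lets these coefficients evolve stochastically over time; the display above is only meant to fix intuition about what the harmonics represent.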
from statsmodels.tsa.statespace.structural import UnobservedComponents
model_params = {
"endog": y_train,
"exog": x_train,
"level": "local level",
"freq_seasonal": [
{"period": 12, "harmonics": 4}
],
"autoregressive": 1,
"mle_regression": True,
}
model = UnobservedComponents(**model_params)
Let us now fit the model.
result = model.fit(disp=0)
result.summary()
Now we can see some diagnostic plots:
result.plot_diagnostics(figsize=(12, 9));
The error distribution looks ok! Now let us visualize the model (learned) components:
alpha = 0.05
fig = result.plot_components(figsize=(12, 9), alpha=alpha)
for ax in fig.get_axes():
    ax.legend(loc="center left", bbox_to_anchor=(1, 0.5))
    ax.set(title=None)
fig.suptitle("Model Components", y=0.92);
This decomposition agrees with our data generation process.
Remark: Let us get the number of observations needed for the approximate diffuse initialization (see here for more details).
n_obs_init = model.k_states - int(model._unused_state) - model.ar_order
Generate Predictions
Now we can use the fitted model to generate in and out sample predictions and confidence intervals.
train_predictions_summary = result \
.get_prediction() \
.summary_frame(alpha=alpha)
test_predictions_summary = result \
.get_forecast(steps=n_test, exog=x_test) \
.summary_frame(alpha=alpha)
Moreover, we can simulate from the state space model.
repetitions = 100
simulations_train_df = result.simulate(
anchor="start",
nsimulations=n_train,
repetitions=repetitions,
exog=x_train
)
simulations_test_df = result.simulate(
anchor="end",
nsimulations=n_test,
repetitions=repetitions,
exog=x_test
)
# Verify expected shape of the simulations dataframes.
assert simulations_train_df.shape == (n_train, repetitions)
assert simulations_test_df.shape == (n_test, repetitions)
Let us see the results:
fig, ax = plt.subplots()
# Input data
sns.lineplot(
x=y_train.index,
y=y_train,
marker="o",
markersize=5,
color="C0",
label="y_train",
ax=ax
)
sns.lineplot(
x=y_test.index,
y=y_test,
marker="o",
markersize=5,
color="C1",
label="y_test",
ax=ax
)
ax.axvline(
x=x_train.tail(1).index[0],
color="C6",
linestyle="--",
label="train-test-split"
)
# Simulations
for col in simulations_test_df.columns:
    sns.lineplot(
        x=simulations_test_df.index,
        y=simulations_test_df[col],
        color="C3",
        alpha=0.05,
        ax=ax
    )
# Prediction intervals
ax.fill_between(
x=train_predictions_summary.index[n_obs_init: ],
y1=train_predictions_summary["mean_ci_lower"][n_obs_init: ],
y2=train_predictions_summary["mean_ci_upper"][n_obs_init: ],
color="C2",
label="confidence interval (train)",
alpha=0.3
)
ax.fill_between(
x=test_predictions_summary.index[n_obs_init: ],
y1=test_predictions_summary["mean_ci_lower"][n_obs_init: ],
y2=test_predictions_summary["mean_ci_upper"][n_obs_init: ],
color="C3",
label="confidence interval (test)",
alpha=0.3
)
# Predictions
sns.lineplot(
x=train_predictions_summary.index,
y=train_predictions_summary["mean"],
marker="o",
markersize=5,
color="C2",
label="y_train_pred",
ax=ax
)
sns.lineplot(
x=test_predictions_summary.index,
y=test_predictions_summary["mean"],
marker="o",
markersize=5,
color="C3",
label="y_test_pred",
ax=ax
)
# diffuse initialization
ax.axvline(
x=y_train.index[n_obs_init],
color="C5",
linestyle="--",
label="diffuse initialization"
)
ax.legend(loc="center left", bbox_to_anchor=(1, 0.5))
ax.set(title="Unobserved Components Model Predictions");
Here are some remarks:
• The in-sample and out-of-sample predictions look good and there is no sign of significant overfitting.
• Nevertheless, it seems that the model is underestimating the trend component a bit.
• Note that all the simulations lie within the confidence intervals.
Part 2: PyMC Integration
Write Model Wrapper
As mentioned in the introduction, we follow the indications of the post Fast Bayesian estimation of SARIMAX models to wrap the model as a PyMC model. Here is the main idea: the UnobservedComponents implementation from statsmodels already computes the gradient of the log-likelihood function with respect to the model parameters. This is the main ingredient for the Hamiltonian Monte Carlo (HMC) algorithm. Hence, we just need to expose the likelihood and its derivative as a Theano / Aesara operator, which is what PyMC uses to construct its computational graph. To see what is required to create a custom Python operator, please see the extensive documentation Creating a new Op: Python implementation.
From the documentation:
Theano represents symbolic mathematical computations as graphs. Those graphs are bi-partite graphs (graphs with 2 types of nodes), they are composed of interconnected Apply and Variable nodes. Variable nodes represent data in the graph, either inputs, outputs or intermediary values. As such, Inputs and Outputs of a graph are lists of Theano Variable nodes. Apply nodes perform computation on these variables to produce new variables. Each Apply node has a link to an instance of Op which describes the computation to perform.
For our particular case, we need to construct a Theano operator that computes the model's log-likelihood and its derivative given a set of parameters. Concretely, we need to implement something of the form:
from theano.graph.op import Op

class Loglike(Op):
    def __init__(self, model):
        ...

    def perform(self, node, inputs, outputs):
        """Compute the log-likelihood."""
        ...

    def grad(self, inputs, g):
        """Return the gradient (a vector-Jacobian product)."""
        ...
See the description and requirements of the perform and grad methods respectively.
Now, how do we extract the likelihood from the UnobservedComponents model? We can access it via the loglike method. Let us compute the likelihood on the fitted parameters:
model.loglike(result.params)
-10.895206701947446
Similarly, we can compute the gradient of the likelihood via the score method.
model.score(params = result.params)
array([-6.72506452e+00, -1.14243319e-01, -4.49922733e+03, 2.79173498e-02,
1.59160648e-05, -1.51104128e-03])
Note that the output is a vector as expected.
We first define a Score operator to compute the log-likelihood gradient:
import theano.tensor as tt
class Score(tt.Op):
    itypes = [tt.dvector]  # expects a vector of parameter values when called
    otypes = [tt.dvector]  # outputs a vector (the gradient of the log-likelihood)

    def __init__(self, model):
        self.model = model

    def perform(self, node, inputs, outputs):
        (theta,) = inputs
        outputs[0][0] = self.model.score(theta)
Next we need a class to compute the likelihood.
class Loglike(tt.Op):
    itypes = [tt.dvector]  # expects a vector of parameter values when called
    otypes = [tt.dscalar]  # outputs a single scalar value (the log-likelihood)

    def __init__(self, model):
        self.model = model
        self.score = Score(self.model)

    def perform(self, node, inputs, outputs):
        (theta,) = inputs  # contains the vector of parameters
        llf = self.model.loglike(theta)
        outputs[0][0] = np.array(llf)  # output the log-likelihood

    def grad(self, inputs, g):
        # the method that calculates the gradients - it actually returns the
        # vector-Jacobian product - g[0] is a vector of parameter values
        (theta,) = inputs  # our parameters
        out = [g[0] * self.score(theta)]
        return out
Remark: From the documentation The grad method must return a list containing one Variable for each input. Each returned Variable represents the gradient with respect to that input computed based on the symbolic gradients with respect to each output.
We are now ready to write the wrapper:
loglike = Loglike(model)
Let us see this in action:
# parameters to theano tensor
tt_params = tt.as_tensor_variable(x=result.params.to_numpy(), ndim=1)
# We evaluate the log-likelihood
type(loglike(tt_params))
theano.tensor.var.TensorVariable
# To get the value we evaluate the tensor
loglike(tt_params).eval()
array(-10.8952067)
Run PyMC Model
In order to write the PyMC model we need to define the likelihood term. In this case we can use DensityDist, which creates a distribution from the log density function that we pass in.
import arviz as az
import pymc3 as pm
# Set sampling params
ndraws = 4000 # number of draws from the distribution
nburn = 1000 # number of "burn-in points" (which will be discarded)
with pm.Model() as pm_model:
    # Priors
    sigma2_irregular = pm.InverseGamma("sigma2.irregular", alpha=2, beta=1)
    sigma2_level = pm.InverseGamma("sigma2.level", alpha=2, beta=1)
    sigma2_freq_seasonal = pm.InverseGamma("sigma2.freq_seasonal_12(4)", alpha=2, beta=1)
    sigma2_ar = pm.InverseGamma("sigma2.ar", alpha=2, beta=1)
    ar_L1 = pm.Uniform("ar.L1", lower=-0.99, upper=0.99)
    beta_x = pm.Normal("beta.x", mu=0, sigma=1)
    # convert the variables to a tensor vector
    theta = tt.as_tensor_variable([
        sigma2_irregular,
        sigma2_level,
        sigma2_freq_seasonal,
        sigma2_ar,
        ar_L1,
        beta_x
    ])
    # use a DensityDist, passing the Op as the log density
    pm.DensityDist("likelihood", logp=loglike, observed=theta)
Let us see the model structure:
pm_model
$\begin{array}{rcl} \text{sigma2.irregular_log__} &\sim & \text{TransformedDistribution}\\\text{sigma2.level_log__} &\sim & \text{TransformedDistribution}\\\text{sigma2.freq_seasonal_12(4)_log__} &\sim & \text{TransformedDistribution}\\\text{sigma2.ar_log__} &\sim & \text{TransformedDistribution}\\\text{ar.L1_interval__} &\sim & \text{TransformedDistribution}\\\text{beta.x} &\sim & \text{Normal}\\\text{sigma2.irregular} &\sim & \text{InverseGamma}\\\text{sigma2.level} &\sim & \text{InverseGamma}\\\text{sigma2.freq_seasonal_12(4)} &\sim & \text{InverseGamma}\\\text{sigma2.ar} &\sim & \text{InverseGamma}\\\text{ar.L1} &\sim & \text{Uniform}\\\text{likelihood} &\sim & \text{DensityDist} \end{array}$
Next, we fit the model:
with pm_model:
    trace = pm.sample(
        draws=ndraws,
        tune=nburn,
        chains=4,
        return_inferencedata=True,
        cores=-1,
        compute_convergence_checks=True,
    )
We can now plot the traces of the model parameters and compare them against the point estimates of the maximum likelihood estimation from the statsmodels implementation:
az.plot_trace(
data=trace,
lines=[(k, {}, [v]) for k, v in dict(result.params).items()]
)
plt.tight_layout()
Note that all the sigma2.* variables have much larger values in the PyMC model. This is also the case in some of the models of the original statsmodels post.
We can now compute some diagnostic metrics as usual (see this post, Introduction to Bayesian Modeling with PyMC3):
az.summary(trace)
Let us compare with the original model parameters:
result.params
sigma2.irregular 6.936593e-09
sigma2.level 2.357237e-03
sigma2.freq_seasonal_12(4) 2.385904e-11
sigma2.ar 4.892693e-02
ar.L1 2.854379e-01
beta.x 5.656842e-01
dtype: float64
Generate Predictions
Now we generate predictions by sampling from the posterior distribution. First, we take a subset of samples.
# number of samples
n_samples = 200
# sample with replacement from the posterior
posterior_samples = trace["posterior"] \
.stack(sample=("chain", "draw")) \
.to_pandas() \
.sample(n=n_samples, replace=True)
# make sure the parameters are in the right order
posterior_samples = posterior_samples[result.params.index.to_numpy()]
For each of these samples we update the model parameters and run the (Kalman) smoother via the smooth method.
pm_models = []
for i in range(n_samples):
    params = posterior_samples.iloc[i].to_numpy()
    result_bayes = model.smooth(params=params)
    pm_models.append(result_bayes)
# get in-sample fitted values
train_fitted_values_df = pd.concat(
[m.fittedvalues[n_obs_init: ] for m in pm_models],
axis=1)
# get out-of-sample forecasted values
test_predicted_values_df = pd.concat(
[m.forecast(steps=n_test, exog=x_test) for m in pm_models],
axis=1)
test_predicted_values_df.columns = [f"c_{i}" for i in range(n_samples)]
# simulate from the model
sim_test_df = pd.concat(
[
m.simulate(
anchor="end",
nsimulations=n_test,
repetitions=1,
exog=x_test
)
for m in pm_models
],
axis=1)
sim_test_df.columns = [f"c_{i}" for i in range(n_samples)]
Finally, let us visualize the final results:
fig, ax = plt.subplots(figsize=(12, 7))
# Input data
sns.lineplot(
x=y_train.index,
y=y_train,
marker="o",
markersize=5,
color=sns_c[0],
label="y_train",
ax=ax
)
sns.lineplot(
x=y_test.index,
y=y_test,
marker="o",
markersize=5,
color=sns_c[1],
label="y_test",
ax=ax
)
ax.axvline(
x=x_train.tail(1).index[0],
color=sns_c[6],
linestyle="--",
label="train-test-split"
)
# Predictions
for i, col in enumerate(train_fitted_values_df.columns):
    label = "fitted values" if i == 0 else None
    sns.lineplot(
        x=train_fitted_values_df.index,
        y=train_fitted_values_df[col],
        color="C2",
        alpha=0.03,
        label=label,
        ax=ax
    )
for i, col in enumerate(test_predicted_values_df.columns):
    label = "predicted values" if i == 0 else None
    sns.lineplot(
        x=test_predicted_values_df.index,
        y=test_predicted_values_df[col],
        color="C3",
        alpha=0.03,
        label=label,
        ax=ax
    )
for i, col in enumerate(sim_test_df.columns):
    label = "simulated value" if i == 0 else None
    sns.lineplot(
        x=sim_test_df.index,
        y=sim_test_df[col],
        color="C3",
        alpha=0.03,
        label=label,
        ax=ax
    )
# diffuse initialization
ax.axvline(
x=y_train.index[n_obs_init],
color="C5",
linestyle="--",
label="diffuse initialization"
)
ax.legend(loc="center left", bbox_to_anchor=(1, 0.5))
ax.set(title="Unobserved Components PyMC Model Predictions");
Note that the range of the simulated out-of-sample predictions is much wider than in the frequentist results, since we are also accounting for the uncertainty of the fitted model parameters through the posterior distribution.
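As a final sanity check, one could summarize these simulated paths into empirical credible intervals. A minimal sketch, reusing the sim_test_df dataframe from above (the 94% level is an arbitrary choice):

# Empirical credible intervals from the posterior-predictive simulations.
lower = sim_test_df.quantile(q=0.03, axis=1)
upper = sim_test_df.quantile(q=0.97, axis=1)
median = sim_test_df.quantile(q=0.50, axis=1)

fig, ax = plt.subplots()
sns.lineplot(x=y_test.index, y=y_test, color=sns_c[1], label="y_test", ax=ax)
sns.lineplot(x=median.index, y=median, color=sns_c[3], label="posterior predictive median", ax=ax)
ax.fill_between(x=median.index, y1=lower, y2=upper, color=sns_c[3], alpha=0.3, label="94% credible interval")
ax.legend(loc="center left", bbox_to_anchor=(1, 0.5))
ax.set(title="Out-of-sample credible intervals from the simulations");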
http://www.amodarressi.com/AdapLeR/
This is a post for the ACL 2022 paper AdapLeR: Speeding up Inference by Adaptive Length Reduction.
We propose a novel approach for reducing the computational cost of BERT with minimal loss in downstream performance. AdapLeR dynamically eliminates less contributing tokens through layers, resulting in shorter lengths and consequently lower computational cost. Our contributions are as follows:
• We couple a simple Contribution Predictor (CP) with each layer of the model to estimate tokens’ contribution scores to eliminate redundant representations.
• Instead of an instant token removal, we gradually mask out less contributing token representations by employing a novel soft-removal function.
• We also show the superiority of our token selection strategy over the other widely used strategies by using human rationales.
## Efficient Methods
There have been various efforts at improving the efficiency of BERT-based models, such as Knowledge Distillation, Quantization, Weight Pruning, and Progressive Module Replacing. Despite providing significant reductions in model size, these techniques are generally static at inference time, i.e., they dedicate the same amount of computation to all input examples, irrespective of their difficulty.
Another branch of pruning methods, known as input-adaptive pruning, aims to address this issue by allowing an input example to exit without passing through all layers, or by dropping some token representations at inference time. The latter can be viewed as a compression method from a width perspective, and it is particularly promising: recent attribution analyses have shown that some token representations carry more task-specific information than others, suggesting that only these hidden states need to be carried through the model. Moreover, in contrast to layer-wise pruning, token-level pruning does not come at the cost of reducing the model's capacity for complex reasoning, which generally happens in the deeper layers.
In this post, we introduce AdapLeR, an input-adaptive pruning method, and we show the superiority of our method over other state-of-the-art length reduction techniques.
In AdapLeR, each layer dynamically eliminates less contributing tokens. This leads to shorter sequence lengths and, as a result, reduced computational cost.
The goal is to reduce the number of token representations in a progressive manner.
## Inference
In our approach, we eliminate less contributing token representations before delivering them to the next encoder layer. Therefore, in each layer, a Contribution Predictor (CP) first provides an estimate of each token's importance, and then a trainable threshold drops the less contributing tokens.
### CP
An important step in detecting less contributing tokens is estimating the importance of each token. We simply add a CP after each layer $$\ell$$ of the model to estimate a contribution score for each token representation, i.e., $$\tilde{S}^\ell$$. The model then decides which tokens should be passed to the next layer based on the values of $$\tilde{S}^\ell$$. The CP computes $$\tilde{S}^\ell$$ for each token using an MLP followed by a softmax activation function.
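To make this concrete, here is a minimal PyTorch-style sketch of such a Contribution Predictor. The hidden sizes and the single-hidden-layer architecture are illustrative assumptions, not the exact configuration from the paper:

import torch
import torch.nn as nn

class ContributionPredictor(nn.Module):
    """Estimates a contribution score for every token representation in a layer."""

    def __init__(self, hidden_size: int = 768, cp_hidden: int = 128):
        super().__init__()
        self.mlp = nn.Sequential(
            nn.Linear(hidden_size, cp_hidden),
            nn.GELU(),
            nn.Linear(cp_hidden, 1),
        )

    def forward(self, hidden_states: torch.Tensor) -> torch.Tensor:
        # hidden_states: (batch, seq_len, hidden_size)
        logits = self.mlp(hidden_states).squeeze(-1)  # (batch, seq_len)
        return torch.softmax(logits, dim=-1)  # scores sum to 1 over the tokens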
### Token Removal
After the CP has computed the contribution scores, a trainable threshold ($$\delta$$) separates the tokens to keep from the tokens to drop. Since the contribution scores sum to one, a uniform level indicates that all tokens contribute equally to the prediction and should be retained. On the other hand, if the contribution scores are concentrated on only a subset of tokens, the lower-scoring tokens can be viewed as unnecessary. Hence, $$\delta$$ takes a value equal to or smaller than the uniform-level score ($$\frac{1}{n}$$). Note that the final classification head uses the last hidden state of the $$\text{[CLS]}$$ token, so we preserve this token's representation in all layers.
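A rough sketch of this selection rule, purely for illustration (the variable names are made up, and a real implementation also has to deal with padding and batching):

import torch

def select_tokens(scores: torch.Tensor, delta: torch.Tensor) -> torch.Tensor:
    """Boolean keep-mask from per-token contribution scores.

    scores: (batch, seq_len), rows sum to 1 (the CP output).
    delta: scalar trainable threshold, constrained to be <= 1 / seq_len.
    """
    keep = scores >= delta  # drop tokens scoring below the threshold
    keep[:, 0] = True       # always preserve the [CLS] token (assumed at position 0)
    return keep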
## Training
This section breaks down the training procedure into three main challenges: training the CPs alongside the model, removing tokens gradually, and the speedup tuning objective.
### CP Training
For training the CPs, we opted to use saliency scores, which have recently been shown to be a reliable criterion for measuring token contributions. The CPs are trained using the KL-divergence between each layer's CP output and saliency scores extracted from a model fine-tuned on the given target task.
The CPs are trained jointly with the rest of the model, together with the target task labels:
$$\mathcal{L}_{\text{CP}}=\displaystyle\sum_{\ell=0}^{L-1}(L-\ell)D_{KL}(\hat{S}^\ell || \tilde{S}^\ell)$$ $$\mathcal{L}=\mathcal{L}_{\text{CE}}+\gamma\mathcal{L}_{\text{CP}}$$
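In code, this layer-weighted KL objective could look roughly like the following sketch. It assumes that both the saliency targets and the CP outputs are given as per-layer probability distributions over tokens; all names here are hypothetical:

import torch.nn.functional as F

def cp_loss(saliency_targets, cp_outputs):
    """Layer-weighted KL divergence between saliency targets S_hat and CP outputs S_tilde.

    Both arguments are lists of length L holding tensors of shape (batch, seq_len)
    that sum to 1 along the last dimension.
    """
    L = len(cp_outputs)
    loss = 0.0
    for layer, (s_hat, s_tilde) in enumerate(zip(saliency_targets, cp_outputs)):
        # KL(S_hat || S_tilde); F.kl_div expects log-probabilities as its first argument.
        kl = F.kl_div(s_tilde.clamp_min(1e-12).log(), s_hat, reduction="batchmean")
        loss = loss + (L - layer) * kl
    return loss

# Total objective (gamma balances the two terms): loss = ce_loss + gamma * cp_loss(...)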
### Soft-Removal
During training, if tokens are dropped immediately, as in inference mode, the effect of dropping them cannot be captured through gradient backpropagation. Batch-wise training would also become problematic, since the structure would vary from example to example.
Hence, inspired by the padding mechanism of self-attention models, we gradually mask out less contributing representations by employing a novel soft-removal function. The less important tokens, with scores lower than the threshold ($$\delta$$), are assigned larger negative masking values the more distant they are from $$\delta$$. After each epoch, the slope increases exponentially, gradually approaching the hard-masking behaviour used at inference.
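One possible reading of that description as code; this is my own hedged reconstruction of the idea, not the paper's exact soft-removal formula:

import torch

def soft_removal_mask(scores: torch.Tensor, delta: torch.Tensor, slope: float) -> torch.Tensor:
    """Additive attention mask: zero for kept tokens, increasingly negative below the threshold.

    scores: (batch, seq_len) contribution scores; delta: trainable threshold;
    slope: grows each epoch, so the mask approaches hard (-inf-like) masking.
    """
    deficit = torch.relu(delta - scores)  # how far below the threshold each token falls
    return -slope * deficit               # added to the attention logits, like a padding mask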
### Speedup Tuning
To control the speedup-performance tradeoff, we define a separate objective that combines the cross-entropy loss of the target classification task with a length loss summed over all layers.
Note that this objective is responsible for training the thresholds ($$\delta$$), while the loss described in the "CP Training" section trains the whole model (except $$\delta$$).
## Results
This table shows performance and speedup for AdapLeR and the comparison models across eight text classification datasets:
As our baseline, we report results for the pre-trained BERT model, which is also the backbone of AdapLeR. We also compare against three other approaches: DistilBERT as a static compression method, and PoWER-BERT and TR-BERT as two strong length reduction methods. A model's speedup is defined as the total FLOPs (i.e., the number of floating-point operations) measured on BERT (our baseline) divided by the corresponding model's total FLOPs. This allows us to assess models' speedups independently of their operating environment (e.g., CPU/GPU).
The results also reveal some dependency on the type of task. Some tasks may need less contextualization during inference and can be classified using only a fraction of the input tokens. For instance, in AG's News, the topic of a sentence might be identifiable from a single token (e.g., soccer → Topic: Sports).
PoWER-BERT adopts attention weights in its token selection which requires at least one layer of computation to be determined, and TR-BERT applies token elimination only in two layers to reduce the training search space. In contrast, our procedure performs token elimination for all layers of the model, enabling a more effective removal of redundant tokens.
Speedup-performance tradeoff curves also show that AdapLeR significantly outperforms the other state-of-the-art length reduction methods.
## Analysis
In this section, we evaluate the behavior of Contribution Predictors (CPs) in identifying the most contributing tokens in the AdapLeR. We resort to human rationale as a reliable upper bound for measuring token importance (see the paper for details).
### Qualitative Analysis
This Figure illustrates two examples from the SST-2 and QNLI datasets in which CPs identify and gradually drop the irrelevant tokens through layers, finally focusing mostly on the most important token representations; pedestrian (adjective) in SST-2 and tesla coil in the passage part of QNLI (both of which are highly aligned with human rationale).
### Quantitative Analysis
To investigate the effectiveness of trained CPs in predicting human rationales we computed the output scores of CPs in AdapLeR for each token representation in each layer. We also fine-tuned a BERT model on the Movie Review dataset and computed layer-wise raw attention, attention rollout, and saliency scores for each token representation. In addition to these soft scores, we used the uniform-level threshold (i.e., 1/n) to reach a binary score indicating tokens selected in each layer.
As for evaluation, we used the Average Precision (AP) and False Positive Rate (FPR) metrics, comparing the remaining tokens to the human rationale annotations. The first metric measures whether the model assigns higher continuous scores to those tokens that are annotated by humans as rationales. The intuition behind the second metric is how many irrelevant tokens the model selects to pass on to subsequent layers. We used the soft scores for computing AP and the binary scores for computing FPR.
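As a small illustration of how these two metrics might be computed for a single example and layer (assuming binary human-rationale labels per token; the helper names are hypothetical):

import numpy as np
from sklearn.metrics import average_precision_score

def rationale_ap(rationale_labels: np.ndarray, soft_scores: np.ndarray) -> float:
    """AP: do higher continuous scores go to the human-annotated rationale tokens?"""
    return float(average_precision_score(rationale_labels, soft_scores))

def rationale_fpr(rationale_labels: np.ndarray, selected: np.ndarray) -> float:
    """FPR: fraction of non-rationale tokens that the model still keeps."""
    negatives = rationale_labels == 0
    return float((selected & negatives).sum() / max(negatives.sum(), 1))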
This figure shows the agreement between human rationales and the selected tokens in terms of the two metrics. In comparison with other widely used strategies for selecting important tokens, such as saliency and attention, our CPs have a significantly lower false positive rate in preserving rationales through the layers. Despite having similar FPRs at the final layer, CP is preferable to attention in that it can better identify rationales at the earlier layers, allowing the model to combine the most relevant token representations when building the final one. As a by-product of the contribution estimation process, our trained CPs are able to generate these rationales at inference time without the need for human-generated annotations.
http://mathematica.stackexchange.com/questions/7764/when-do-i-use-mathematica-when-do-i-use-gephi-and-how-do-i-use-them-together
# When do I use Mathematica, when do I use Gephi, and how do I use them together?
Mathematica has great operations for analyzing graphs but when the graphs are a bit bigger, Gephi seems to be more efficient (at least on my computer) at drawing and analyzing, i.e. visualizing the graph. So my question is:
What is the "best practice" (white papers, web pages or similar) for how to divide the work between Mathematica and Gephi?
Here is an example, I have a graph from an analysis of all offerings and the biggest clients of a midsize telecom operator, if a client has been invoiced for the offering, there is an edge between the client and the offering. The bipartite graph (see http://mathworld.wolfram.com/BipartiteGraph.html) has about 540 nodes (clients and offerings) and about 14900 edges.
In the image - Gephi visualization of "Force Atlas" layout - you see that the core offerings gravitate to the middle. Clients (blue nodes) become grouped around some key offerings (name of clients are removed), e.g. in the upper right corner of the graph you have all the multinationals that are buying international calls and data offerings.
Seeing the clients and offerings in this way is useful for reasoning around a number of business questions such as market segmentation, bundling of offerings, solution building and reasoning around strategic options.
## Background
I started doing the experiments in Gephi but, after learning more about Mathematica, I would like to move more of the work (if not all) to this environment.
However, importing the graph to Mathematica using .gml file from Gephi failed for me. The symptom was that the evaluation of the Import function in Mathematica failed to terminate.
The reason for wanting to import to Mathematica is for more control of normalizing data, annotations, analysis, and simulations based on the graph. This cannot be done in Gephi.
Same question posted on the Gephi forum Gephi - MMA
## Miscellaneous notes
Gephi (or similar programs) seems to be faster than Mathematica for the visualization.
Key points for using Gephi initially were its ability to handle a large number of nodes and edges and its layout algorithms. The "tags" that seem to be relevant for finding out more are:
Gephi is a rather arbitrary choice, there are many more programs out there to work with graphs. I'd be surprised to see a white paper specifically considering Mma and Gephi (maybe unless it comes from the Gephi people) – Szabolcs Jul 1 '12 at 18:20
I have worked with Graphs in MMA with about 60,000 nodes and edges and that worked out just fine, be it a bit slow. – Sjoerd C. de Vries Jul 1 '12 at 23:21
Sounds good, I will experiment with importing the basic graph directly to MMA and not via the .gml file – FredrikD Jul 2 '12 at 7:29
@fredob314 Try to avoid visualizing it in Mathematica, except when you actually need to see that visualization. Drawing graphs can be very very slow, and Mathematica doesn't seem to cache the last layout. – Szabolcs Jul 3 '12 at 19:45
@Szabolcs, indicates that the visualization and visualization experiments should be done in Gephi (or similar programs) – FredrikD Jul 3 '12 at 20:25
If, as you suggest in your Miscellaneous Notes, speed of the visualization is a primary concern, then perhaps there are ways to speed up the drawing in Mathematica. Of course, it's hard to be sure without sample code. Built in Graph objects, for example, generate dynamic objects and, thus, render more slowly than the Graphics objects generated by say GraphPlot. Perhaps you could use Graph for computational purposes and GraphPlot for visualization purposes? Here's an example:
SeedRandom[1];
g = RandomGraph[{540, 14900}];
vl = VertexList[g];
el = EdgeList[g];
t = AbsoluteTime[];
g = Graph[vl, el]
AbsoluteTime[] - t
If you place that last command (AbsoluteTime[]-t) in a separate cell and then evaluate all simultaneously, you should get an accurate time - about 3s on my machine. Here's the corresponding GraphPlot timing:
el2 = Rule @@@ el;
t = AbsoluteTime[];
GraphPlot[el2]
AbsoluteTime[] - t
This second set of commands takes only 1 sec.
As far as the appearance of the graph goes, be sure to have a look at the General Graph Drawing tutorial in the documentation. There are myriad options that affect the appearance of a graph and give you a lot of control. I think the defaults are generally optimized for graphs of moderate size.
Is there an option like Dynamic->False to influence this rendering behaviour of Graph? If not, would it not be a very useful addition? – Yves Klett Jul 5 '12 at 17:23
@yves I agree. Histogram has something like PerformanceGoal -> "Speed" which disables the tooltips – R. M. Jul 5 '12 at 17:47
Good point! If I can generate and work with the graph in my example, the visualization could be done in MMA. Regarding the "missing coding part", there is no coding involved in visualizing the graph, more of thinking about which layout that makes sense given attributes of the bipartite graph. Before I accept the answer (basically - do it in MMA), I will check if it is practical and works from a performance point of view. Also, to share the data, I can scramble the names clients and the offerings and share the graph data through Google docs. – FredrikD Jul 5 '12 at 18:44
http://math.stackexchange.com/tags/automata/hot
# Tag Info
2
First notice that $L_1$ is really simple: $$L_1=\{w^* \mid w=x\text{ and }x \in \Sigma^*\}\supseteq\{w^1 \mid w=x\text{ and }x \in \Sigma^*\} =\Sigma^*$$ Hence $L_1=\Sigma^*$. Then notice that $L_3$ is also really simple: $$L_3=\{w \mid w=x.y, \; x,y \in \Sigma^*, \; y\text{ is a sub-string of }x\}$$ If as sub-string you allow $\epsilon$, then you can ...
2
Hint: Draw a state machine with four states: $\boxed{S_{0,0}}$: This is the initial and only accepting state. If you're here, then there is an even number of both $a$'s and $b$'s. $\boxed{S_{0,1}}$: If you're here, then there is an even number of $a$'s and an odd number of $b$'s. $\boxed{S_{1,0}}$: If you're here, then there is an odd number of $a$'s and ...
1
The grammar $G_1$ is ambiguous because there are two left-most derivations for the string "()". The first is $S\to SS\to (S)S\to ()S \to ()$, and the second is $S\to(S)\to()$. The grammar $G_2$ is not ambiguous. First, we can see that $L=\mathcal{L}(G_2)$ is the set of well-parenthesized strings (like "(()(()))()"). There are a few ways to prove this, ...
1
A popular tool for proving that a language is not context-free is the Pumping Lemma. Let's recall its statement: If $L$ is a context-free language, there exists $p\geq 1$ such that each $s\in L$ with $\left|s\right|\geq p$ can be written as $$s=xuyvz$$ in such a way that $\left|uyv\right|\leq p$, $\left|uv\right|\geq 1$ (i.e. at least one of $u$, ...
1
The following conforms to your request to just "fill in the boxes", but technically there is an arrow for each of the three items in box 2, all pointing to state c. Also, I think the given portion of your diagram implies rather non-standard input/output conventions. Assumptions: $\text{div}$ denotes integer division; i.e., $x \text{ div } 2 =$ ...
1
This is a good example of how bad notation can make simple questions much more difficult. (1) There is no reason to introduce grammars. You may as well denote by $L_1$ the language accepted by $M_1$ and by $L_2$ the language accepted by $M_2$. (2) Let us denote by $L^c$ the complement of a language $L$. Then your first (correct) equality amounts to prove ...
1
p-regular languages are commonly known as (regular) group languages in the literature since their syntactic monoid is a finite group. If a language is accepted by a permutation automaton, then its minimal DFA is also a permutation group, but this group is transitive (since every state is accessible from the initial state). Thus your subclass is actually ...
1
If I'm not missing something, then strings generated by option 3 must always end with a $\mathtt{1}$ while for the regex in question this is not necessary. So I'd agree with you that option 3 doesn't seem to be the right answer.
https://brill.com/browse?et=renc&level=parent&pageSize=10&sort=titlesort&t=HCLS
# Browse results
## You are looking at 1 - 10 of 129 items for :
• Encyclopedia
• Classical Studies
• Search level: Titles
Clear All
Justinian's Corpus iuris in the Byzantine world
Basilica Online is a fully-searchable online edition of the 17 volumes of the Basilica text and its scholia, as edited between 1945 and 1988 by H.J. Scheltema, D. Holwerda, and N. van der Wal. The Basilica is the single-most important source for Byzantine law throughout the period of the Byzantine empire, and is a major source for Byzantine studies more broadly.
- Most recent and accurate edition of the Basilica text and its scholia.
- Fully searchable in both Latin and Greek.
- All critical apparatus of the edition included.
- Browsing and navigation functionalities at volume (volumen), book (liber) or chapter (titulus) level.
- Full academic introduction written specifically for the online edition by Professor Dr B. H. Stolte.
- Comprehensive and up-to-date bibliography compiled by Dr T. E. van Bochove.
- Collective index to the text and scholia.
Eduard Sachau's Edition of Kitāb al-Ṭabaqāt al-Kabīr
The Kitāb al-Ṭabaqāt al-Kabīr (Biography of Muḥammad, His Companions and the Successors up to the Year 230 of the Hijra) by Ibn Saʿd (d. 230 A.H./845 C.E.) is the earliest extant biographical dictionary on the life of the Prophet and the early generations of Muslims. It is one of the most important historical works about the first centuries of Muslim society in Arabic. This classic Brill edition was supervised by Eduard Sachau and was originally titled Biographien Muhammeds, seiner Gefährten und der späteren Träger des Islams bis zum Jahre 230 der Flucht. This edition was originally published between 1904 and 1940. Contributing editors: Carl Brockelmann, Josef Horovitz, Julius Lippert, Bruno Meissner, Eugen Mittwoch, Friedrich Schwally, Karl Vilhelm Zetterstéen.
CHOICE Outstanding Academic Title 2014. Library Journal Best Print Reference Selection 2014. With its striking range and penetrating depth, Brill's Encyclopaedia of the Neo-Latin World traces the enduring history and broad cultural influence of Neo-Latin, the form of Latin that originated in the Italian Renaissance and persists to the modern era. Featuring original contributions by a host of distinguished international scholars, this 800,000-word two-volume work explores every aspect of the civilized world from literature and law to philosophy and the sciences. An invaluable resource for both the advanced scholar and the graduate student. The Encyclopaedia is also available ONLINE. Contributors are: Monica Azzolini, Irena Backus, Jon Balserak, Ann Blair, Jan Bloemendal, David Butterfield, Isabelle Charmantier, John Considine, Alejandro Coroleu, Ricardo da Cunha Lima, Susanna de Beer, Erik De Bom, Jeanine De Landtsheer, Tom Deneire, Ingrid De Smet, Karl Enenkel, Charles Fantazzi, Mathieu Ferrand, Roger Fisher, Philip Ford, Raphaele Garrod, Guido Giglioni, Roger Green, Yasmin Haskell, Hans Helander, Lex Hermans, Louise Hill Curth, Leofranc Holford-Strevens, Brenda Hosington, Erika Jurikova, Craig Kallendorf, Jill Kraye, Andrew Laird, Han Lamers, Marc Laureys, Jeltine Ledegang-Keegstra, Jan Machielsen, Peter Mack, David Marsh, Dustin Mengelkoch, Milena Minkova, David Money, Jennifer Morrish Tunberg, Adam Mosley, Ann Moss, Monique Mund-Dopchie, Colette Nativel, Lodi Nauta, Henk Nellen, Gideon Nisbet, Richard Oosterhoff, Marianne Pade, Jan Papy, David Porter, Johann Ramminger, Jennifer Rampling, Rudolf Rasch, Karen Reeds, Valery Rees, Bettina Reitz-Joosse, Stella Revard, Dirk Sacré, Gerald Sandy, Minna Skafte Jensen, Carl Springer, Gorana Stepanić, Harry Stevenson, Jane Stevenson, Andrew Taylor, Nikolaus Thurn, Johannes Trapman, Terence Tunberg, Piotr Urbański, Wiep van Bunge, Harm-Jan van Dam, Demmy Verbeke, Zweder von Martels, Maia Wellington Gahtan, and Paul White.
Encyclopedia of the Ancient World. Series: Brill's New Pauly
Brill's New Pauly is the first lexicographic project that both differentiates between Greco-Roman antiquity itself and its subsequent images, and demonstrates the close connection between antiquity and its aftermath. Volumes 1 to 15 (Antiquity) are devoted to Greco-Roman antiquity. Volumes I to V (Classical Tradition) are uniquely concerned with the long and influential aftermath of the classical heritage. Index Antiquity relates to the 15 volumes of Brill's New Pauly that deal with Antiquity. Index The Classical Tradition relates to the 5 volumes of Brill's New Pauly that deal with the Classical Tradition. Brill's New Pauly is also available as an online resource. For more information, see Brill's New Pauly Online.
Encyclopaedia of the Ancient World - 20 Volumes with Index
BRILL'S NEW PAULY is the English edition of the authoritative DER NEUE PAULY, published by Verlag J.B. Metzler since 1996. The encyclopaedic coverage and high academic standard of the work, the interdisciplinary and contemporary approach and clear and accessible presentation have made the NEW PAULY the unrivalled modern reference work for the ancient world. Fifteen volumes (Antiquity, 1-15) of BRILL'S NEW PAULY are devoted to Greco-Roman antiquity and cover more than two thousand years of history, ranging from the second millennium BC to early medieval Europe. Special emphasis is given to the interaction between Greco-Roman culture on the one hand, and Semitic, Celtic, Germanic, and Slavonic culture, and ancient Judaism, Christianity, and Islam on the other hand. Five volumes (Classical Tradition, I-V) are uniquely concerned with the long and influential aftermath of antiquity and the process of continuous reinterpretation and revaluation of the ancient heritage, including the history of classical scholarship. BRILL'S NEW PAULY presents the current state of traditional and new areas of research and brings together specialist knowledge from leading scholars from all over the world. Many entries are elucidated with maps and illustrations and the English edition will include updated bibliographic references.
Brill's New Pauly Supplements is a series of additional reference works complementing the information of Brill's New Pauly. Taking a variety of approaches, each volume provides scholars quick access to a wealth of in-depth knowledge on subjects ranging from chronological lists of rulers of the ancient world and a biographical dictionary of classicists who have made their mark on scholarship, to an historical atlas and encyclopedia-type works on the reception of myth and classical literature. These Supplements are also available online; visit Brill's New Pauly Supplements Online for more information.
CHOICE Outstanding Academic Title 2014. With its striking range and penetrating depth, Brill's Encyclopaedia of the Neo-Latin World traces the enduring history and wide-ranging cultural influence of Neo-Latin, the form of Latin that originated in the Italian Renaissance and persists to the modern era. Featuring original contributions by a host of distinguished international scholars, this comprehensive reference work explores every aspect of the civilized world from literature and law to philosophy and the sciences. An invaluable resource for both the advanced scholar and the graduate student. The Encyclopaedia is also available in PRINT. The online edition gives access to a number of newer entries that are not included in the print edition and also includes corrections.
Contributors are: Monica Azzolini, Irena Backus, Patrick Baker, Jon Balserak, Ann Blair, Jan Bloemendal, David Butterfield, Isabelle Charmantier, John Considine, Alejandro Coroleu, Ricardo da Cunha Lima, Susanna de Beer, Erik De Bom, Jeanine De Landtsheer, Tom Deneire, Ingrid De Smet, Karl Enenkel, Charles Fantazzi, Mathieu Ferrand, Roger Fisher, Philip Ford, Raphaele Garrod, Guido Giglioni, Roger Green, Yasmin Haskell, Hans Helander, Lex Hermans, Thomas Herron, Louise Hill Curth, Leofranc Holford-Strevens, Brenda Hosington, Erika Jurikova, Craig Kallendorf, Jill Kraye, Andrew Laird, Han Lamers, Marc Laureys, Jeltine Ledegang-Keegstra, Jan Machielsen, Peter Mack, Eric MacPhail, David Marsh, Dustin Mengelkoch, Milena Minkova, David Money, Jennifer Morrish Tunberg, Adam Mosley, Ann Moss, Monique Mund-Dopchie, Colette Nativel, Lodi Nauta, Henk Nellen, Gideon Nisbet, Philipp Nothaft, Katrina Olds, Richard Oosterhoff, Marianne Pade, Jan Papy, David Porter, Johann Ramminger, Jennifer Rampling, Rudolf Rasch, Karen Reeds, Valery Rees, Bettina Reitz-Joosse, Stella Revard, Dirk Sacre, Gerald Sandy, Minna Skafte Jensen, Carl Springer, Gorana Stepanić, Harry Stevenson, Jane Stevenson, Andrew Taylor, Nikolaus Thurn, Johannes Trapman, Terence Tunberg, Piotr Urbański, Wiep van Bunge, Harm-Jan van Dam, Demmy Verbeke, Zweder von Martels, Maia Wellington Gahtan, and Paul White.
https://forthright48.com/tag/theorem/page/4/
Extended Euclidean Algorithm
The Extended Euclidean Algorithm is an extension of the Euclidean Algorithm which finds two things for integers $a$ and $b$:
1. It finds the value of $GCD(a,b)$.
2. It finds two integers $x$ and $y$ such that, $ax + by = gcd(a,b)$.
The expression $ax + by = gcd(a,b)$ is known as Bezout’s identity and the pair $(x,y)$ that satisfies the identity is called Bezout coefficients. We will look into Bezout’s identity at the end of this post. For now, just know the name.
How It Works
In the Euclidean Algorithm we worked with remainders only. In the Extended Euclidean Algorithm (ext_gcd) we will use the quotient and a few other extra variables. Suppose we are finding $GCD(240,46)$. Using the Euclidean Algorithm the steps for finding $GCD$ would be:
$GCD(240,46) = GCD(46, 240 \ \% \ 46) = GCD(46,10)$
$GCD(46,10) = GCD(10, 46 \ \% \ 10) = GCD(10,6)$
$GCD(10,6) = GCD(6, 10 \ \% \ 6) = GCD(6,4)$
$GCD(6,4) = GCD(4, 6 \ \% \ 4) = GCD(4,2)$
$GCD(4,2) = GCD(2, 4 \ \% \ 2) = GCD(2,0)$
$GCD(2,0) = 2$
Introducing $r$ and $q$
We will slowly move towards ext_gcd algorithm. Let us add two new variables. $r$ represents remainder and $q$ means quotient.
Let $r_0$ be $a$ and $r_1$ be $b$. At each step, we will calculate $r_i$. Let $q_i = \lfloor \frac{r_{i-2}}{ r_{i-1}} \rfloor$. Therefore, $r_i = r_{i-2} - q_i \times r_{i-1}$.
Then the above steps will look like the following:
| Index $i$ | Quotient $q_i$ | Remainder $r_i$ |
|---|---|---|
| $0$ | | $240$ |
| $1$ | | $46$ |
| $2$ | $240 / 46 = 5$ | $240 - 5 \times 46 = 10$ |
| $3$ | $46 / 10 = 4$ | $46 - 4 \times 10 = 6$ |
| $4$ | $10 / 6 = 1$ | $10 - 1 \times 6 = 4$ |
| $5$ | $6 / 4 = 1$ | $6 - 1 \times 4 = 2$ |
| $6$ | $4 / 2 = 2$ | $4 - 2 \times 2 = 0$ |
This table is the same as the calculation for the Euclidean algorithm except for a few extra details. Note that the line before last ( index $5$ ) contains the $GCD(a,b)$.
Introducing $x$ and $y$
We are trying to express $GCD(a,b) = ax + by$. So the variables $x$ and $y$ will hold the coefficients. To be exact, we will write each row in terms of $a$ and $b$, i.e., $r_i = ax_i + by_i$.
Initially, $(x_0, y_0) = (1,0)$ and $(x_1, y_1) = (0,1)$. But how do we calculate $(x_i,y_i)$?
We know that $r_i = r_{i-2} - q_i \times r_{i-1}$. We also claimed that $r_i = ax_i + by_i$. By combining these two we get
$r_i = ( ax_{i-2} + by_{i-2} ) - q_i \times ( ax_{i-1} + by_{i-1} )$
$r_i = ax_{i-2} - q_i \times ax_{i-1} + by_{i-2} - q_i \times by_{i-1}$
$r_i = a ( x_{i-2} - q_i \times x_{i-1} ) + b ( y_{i-2} - q_i \times y_{i-1})$
$r_i = a x_i + b y_i$
$\therefore x_i = x_{i-2} - q_i \times x_{i-1} \ \text{and} \ y_i = y_{i-2} - q_i \times y_{i-1}$.
Our new table enhanced with $x$ and $y$ will now look like:
| Index $i$ | Quotient $q_i$ | Remainder $r_i$ | $x_i$ | $y_i$ |
|---|---|---|---|---|
| $0$ | | $240$ | $1$ | $0$ |
| $1$ | | $46$ | $0$ | $1$ |
| $2$ | $240 / 46 = 5$ | $240 - 5 \times 46 = 10$ | $1 - 5 \times 0 = 1$ | $0 - 5 \times 1 = -5$ |
| $3$ | $46 / 10 = 4$ | $46 - 4 \times 10 = 6$ | $0 - 4 \times 1 = -4$ | $1 - 4 \times (-5) = 21$ |
| $4$ | $10 / 6 = 1$ | $10 - 1 \times 6 = 4$ | $1 - 1 \times (-4) = 5$ | $-5 - 1 \times 21 = -26$ |
| $5$ | $6 / 4 = 1$ | $6 - 1 \times 4 = 2$ | $-4 - 1 \times 5 = -9$ | $21 - 1 \times (-26) = 47$ |
| $6$ | $4 / 2 = 2$ | $4 - 2 \times 2 = 0$ | $5 - 2 \times (-9) = 23$ | $-26 - 2 \times 47 = -120$ |
Our answer lies on the line before last. $240 \times -9 + 46 \times 47 = 2$.
So all we need to do now is implement these steps in code.
Code
Even though we will be calculating many rows in ext_gcd algorithm, in order to calculate any row we just need information from previous two rows. So we can save memory by simply storing $r_{i-2}, r_{i-1}, x_{i-2}, x_{i-1}, y_{i-2}, y_{i-1}$.
In our code, $x2$ is $x_{i-2}$, $x1$ is $x_{i-1}$ and $x$ is $x_i$. The same style is followed for the other variables.
int ext_gcd ( int A, int B, int *X, int *Y ){
int x2, y2, x1, y1, x, y, r2, r1, q, r;
x2 = 1; y2 = 0;
x1 = 0; y1 = 1;
for (r2 = A, r1 = B; r1 != 0; r2 = r1, r1 = r, x2 = x1, y2 = y1, x1 = x, y1 = y ) {
q = r2 / r1;
r = r2 % r1;
x = x2 - (q * x1);
y = y2 - (q * y1);
}
*X = x2; *Y = y2;
return r2;
}
In line $1$ we define the function. The function ext_gcd() takes 4 parameters. The first two parameters $A$ and $B$ represent the two integers whose gcd we wish to find. The next two parameters *X and *Y are integer pointers. Our function returns $GCD(A,B)$, but we also want the coefficients $x,y$ such that $ax + by = GCD(A,B)$, so we pass those values back through the pointers.
In line $2$, we declare all the necessary variables. We initialize some of them in lines $3$ and $4$. Then a for loop starts in line $5$. We initialize a few more variables, $r2$ and $r1$, in its first section. The loop runs until $r1$ becomes $0$. In the last section of the for loop, we shift the variables along: $r2$ becomes $r1$ and $r1$ gets the new value $r$.
Inside the for loop, we calculate the values needed for the current row: $q, r, x, y$.
When the loop finishes, $r1$ contains $0$ and $r2$ holds the row before it, which contains $GCD(A,B)$. So we return $x2, y2$ as the coefficients.
Complexity
Same as Euclidean Algorithm.
Uniqueness of Solution
Using ext_gcd we found $(x,y)$ pair which satisfies $ax + by = gcd(a,b)$. But is it unique?
No. Once we find a pair $(x,y)$ using ext_gcd, we can generate infinite pairs of Bezout coefficients using the formula:
$$( x + k \frac{ b } { \text{gcd}(a,b)}, \; y - k \frac{ a } { \text{gcd}(a,b)} )$$
Using any integer value for $k$ we can generate a different pair of Bezout coefficients $(x,y)$ which will satisfy the Bezout’s identity. Here is why it works:
$a ( x + \frac{ kb } { \text{gcd}(a,b)} ) + b ( y - \frac{ ka } { \text{gcd}(a,b)} )$
$ax + \frac{ kab } { \text{gcd}(a,b)} + by - \frac{ kab } { \text{gcd}(a,b)}$
$ax + by$
As you can see above, the terms with $k$ in them cancel each other out. They don't change the value of $ax + by$ in any way; therefore, there are infinitely many pairs of Bezout coefficients.
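A quick way to convince ourselves is to generate a few of these pairs and check them. The following short Python sketch (separate from the C implementation above) uses the standard recursive extended Euclidean step and the formula above:

def ext_gcd(a, b):
    """Return (g, x, y) such that a*x + b*y == g, via the Euclidean recursion."""
    if b == 0:
        return a, 1, 0
    g, x1, y1 = ext_gcd(b, a % b)
    return g, y1, x1 - (a // b) * y1

a, b = 240, 46
g, x, y = ext_gcd(a, b)
for k in range(-2, 3):
    xk = x + k * (b // g)
    yk = y - k * (a // g)
    assert a * xk + b * yk == g  # every generated pair satisfies Bezout's identity
    print(k, (xk, yk))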
Smallest Positive Integer of form $ax + by$
We showed that it is possible to write $ax + by = gcd(a,b)$. Now, can we find a positive number of the form $ax + by$ that is smaller than $gcd(a,b)$?
No. $gcd(a,b)$ divides both $a$ and $b$. Therefore, if a number is of form $ax + by$ it will be divisible by $gcd(a,b)$ since $ax$ and $by$ are both divisible by $gcd(a,b)$. And the smallest positive number which is divisible by $gcd(a,b)$ is $gcd(a,b)$ itself.
So $gcd(a,b)$ is the smallest positive number of the form $ax + by$.
Bezout’s Identity
This was mentioned at the beginning of the post. Almost everything related to Bezout’s Identity has been explained. I will still mention them once more for the sake of completeness.
Bézout’s identity (also called Bézout’s lemma) is a theorem in the elementary theory of numbers: let a and b be nonzero integers and let d be their greatest common divisor. Then there exist integers x and y such that $ax + by = d$
• the greatest common divisor d is the smallest positive integer that can be written as $ax + by$
• every integer of the form ax + by is a multiple of the greatest common divisor d.
Wiki
Bezout’s Lemma simply states that $ax + by = gcd(a,b)$ exists. We need to use the Extended Euclidean Algorithm to find Bezout’s Coefficients.
Coding Pitfalls
Be careful about the equation $ax + by = \text{gcd} (a,b)$. Here $\text{gcd}()$ means the result returned by the Euclidean Algorithm, not necessarily the mathematical gcd. For example, when $a = 4$ and $b = -2$, the Extended Euclidean Algorithm finds a solution to $4x - 2y = -2$: the Euclidean algorithm returns $-2$ for $\text{gcd}(4,-2)$, even though mathematically the gcd is $2$.
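For example, calling the function above with these arguments (an illustrative fragment, to be placed inside a main() like the one shown earlier):
int x, y;
int g = ext_gcd(4, -2, &x, &y);
/* With C's truncating division the loop body runs once (q = -2, r = 0), */
/* so g = -2, x = 0, y = 1, and indeed 4*0 + (-2)*1 = -2.                */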
Reference
1. forthright48 – Euclidean Algorithm – https://forthright48.com/euclidean-algorithm
2. Wiki – Extended Euclidean algorithm – https://en.wikipedia.org/wiki/Extended_Euclidean_algorithm
3. Wiki – Bezout’s identity – https://en.wikipedia.org/wiki/B%C3%A9zout%27s_identity
# 3245 search results for "git"
## Predictive analytics: Some ways to waste time
August 17, 2012
By
I am starting to take part at different competitions at kaggle and crowdanalytics. The goal of most competitions is to predict a certain outcome given some covariables. It is a lot of fun trying out different methods like random forests, boosted ...
## Horizon Plots with plot.xts
August 17, 2012
By
Anyone who has read 48 Industries (Dendrogram Ordered) Over 50 Years 48 Industries Since 1963 “Trend is Not Your Friend” Applied to 48 Industries Horizon Plots in Base Graphics More on Horizon Charts Application of Horizon Plots Horizon Plo...
## An update on visualizing Bayesian updating
August 17, 2012
By
A while ago I wrote this post with some R code to visualize the updating of a beta distribution as the outcome of Bernoulli trials are observed. The code provided a single plot of this process, with all the curves overlayed on top of one another. Then John Myles White (co-author of Machine Learning for
## plot.xts is wonderful
August 16, 2012
By
As mentioned in FOSS Trading post A New plot.xts yesterday “The Google Summer of Code (2012) project to extend xts has produced a very promising new plot.xts function. Michael Weylandt, the project's student, wrote R-SIG-Finance to request impressio...
## INLA: Bayes goes to Norway
August 15, 2012
By
INLA is not the Norwegian answer to ABBA; that would probably be a-ha. INLA is the answer to ‘Why do I have enough time to cook a three-course meal while running MCMC analyses?”. Integrated Nested Laplace Approximations (INLA) is based … Continue reading →
## Conference Presentations
August 15, 2012
By
I recently gave a talk at the Ecological Society of America (ESA) annual meeting in Portland, OR and a poster presentation at the World Congress of Herpetology meeting in Vancouver, BC, Canada. Both presentations were comparing generalized linear mixed models … Continue reading →
## Twitter coverage of the ISMB 2012 meeting: some statistics
August 15, 2012
By
OK, let’s do this: some statistics and visualization of the tweets for ISMB 2012. First, thanks to Stephen Turner who got things started in this post at his excellent blog, Getting Genetics Done. Subscribe to his feed if you don’t already do so. I’ve created a Github repository for this project (and future Twitter-related work).
## (Manually) making letters with geom_path() – fun example
August 15, 2012
By
Disclaimer, maybe the title should be ‘lame example’. Nothing overly exciting here. Just posting cause it took a little faffing about and someone else might like the idea. At my work (research institute) we (the social club committee) were organising … Continue reading →
## What does a generalized linear model do?
August 15, 2012
By
What does a generalized linear model do? R supplies a modeling function called glm() that fits generalized linear models (abbreviated as GLMs). A natural question is what does it do and what problem is it solving for you? We work some examples and place generalized linear models in context with other techniques. For predicting a categorical...
## Probit Models with Endogeneity
August 15, 2012
By
Dealing with endogeneity in a binary dependent variable model requires more consideration than the simpler continuous dependent variable case. For some, the best approach to this problem is to use the same methodology used in the continuous case, i.e. 2 stage least squares. Thus, the equation of interest becomes a linear probability model (LPM). The
# Calculating Running time from Time Complexity
I have read about big $O$ notation for time complexity and for counting functions such as the prime-counting function. Recently on StackOverflow I read:
The problem with defining how long an algorithm takes to run is that you usually can't give an answer in milliseconds because it depends on the machine, and you can't give an answer in clock cycles or as an operation count because that would be too specific to particular data to be useful.
https://stackoverflow.com/questions/11806021/how-does-one-calculate-the-runtime-of-an-algorithm
My question is: if an algorithm with a known time complexity (polynomial, linear, etc.) runs on a machine whose parameters are known, how can we calculate its running time in seconds? Essentially, how can time complexity be translated into real time for a given machine?
I ask because I have seen instances where people have said that algorithm $x$ will take time $y$ to run.
From what I understand after reading the Wikipedia page on time complexity, I would take the polynomial value (the number of computations) and divide it by the number of computations the given machine can process per unit time. Is this correct? Is there a general answer?
• The complexity denoted by big-O only tells you something about the growth of the computation time as the input size grows. This is not enough to conclude the exact running time. – SmileyCraft Jan 19 at 20:19
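As a rough, back-of-the-envelope illustration of why such estimates are only order-of-magnitude guesses: if an algorithm performs about $n^2$ elementary operations and the machine sustains roughly $10^9$ simple operations per second, then for $n = 10^5$ one would expect on the order of
$$\frac{(10^5)^2}{10^9} = \frac{10^{10}}{10^9} = 10 \ \text{seconds},$$
but constant factors, memory access patterns, and compiler optimizations can easily shift the actual figure by an order of magnitude in either direction.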
AP Statistics Unit Test #8
_____ / 63
The fuel economy for a car varies approximately normally with mean 24.8 mpg and standard deviation 6.2 mpg. Use this information for all questions on this page.
1 Calculate and interpret the standardized score of a car that has a fuel economy of 19 mpg. (5)
$\frac{19-24.8}{6.2}=-0.9355$; this car’s fuel economy is 0.9355 standard deviations below the mean (which is not unusual).
2 According to the Empirical Rule, what approximate percentage of cars have fuel economies of 18.6 mpg or less? (3)
Since 18.6 is one standard deviation below the mean, about 16% of cars will have fuel economies of 18.6 mpg or less.
3 According to the Empirical Rule, approximately 2.5% of cars have fuel economies that are greater than what amount? (3)
The upper 2.5% of fuel economies will have standardized scores of 2 or more; that translates to fuel economies of $\left(2\right)\left(6.2\right)+24.8=37.2$ mpg or more.
4 Find the probability that a randomly selected car will have a fuel economy between 18 mpg and 25 mpg. (5)
$P\left(18<X<25\right)=P\left(\frac{18-24.8}{6.2}<z<\frac{25-24.8}{6.2}\right)=P\left(-1.0968<z<0.0323\right)\approx 0.3765$
5 The 1% of cars with the worst fuel economies get how many miles to the gallon? (5)
The lowest 1% will have z-scores of $-2.3263$ and lower. That translates to fuel economies of $x=\left(-2.3263\right)\left(6.2\right)+24.8=10.3766$ mpg or lower.
An airline has found that weight of a piece of luggage brought by a passenger (X) varies with mean 40 pounds and standard deviation 8 pounds. A particular plane can carry 200 passengers—assume that each passenger brings one piece of luggage.
6 Find ${\mu }_{\overline{x}}$. (2)
${\mu }_{\overline{x}}=\mu =40$
7 Find ${\sigma }_{\overline{x}}$. (4)
${\sigma }_{\overline{x}}=\frac{\sigma }{\sqrt{n}}=\frac{8}{\sqrt{200}}=0.5657$
8 Describe the shape of the sampling distribution of $\overline{x}$. Justify your answer. (4)
Since the size of the sample is quite large (well over 30), the shape of the sampling distribution of $\overline{x}$ will be approximately normal.
9 Find the probability that the mean weight of the luggage is greater than 41 pounds. (5)
$P\left(\overline{x}>41\right)=P\left(z>\frac{41-40}{0.5657}\right)=P\left(z>1.7678\right)\approx 0.0386$
10 Find the probability that the mean weight of the luggage is less than 39.5 pounds. (5)
$P\left(\overline{x}<39.5\right)=P\left(z<\frac{39.5-40}{0.5657}\right)=P\left(z<-0.8839\right)\approx 0.1884$
A large corporation has found, over the years, that about 10% of its sales trainees are rated as outstanding at the completion of their training program. A particular office of this corporation had 150 sales trainees in a recent year. Let $\stackrel{^}{p}$ be the proportion of sales trainees at this location that will be rated as outstanding.
11 Find ${\mu }_{\stackrel{^}{p}}$. (2)
${\mu }_{\stackrel{^}{p}}=p=0.1$
12 Find ${\sigma }_{\stackrel{^}{p}}$. (5)
${\sigma }_{\stackrel{^}{p}}=\sqrt{\frac{p\left(1-p\right)}{n}}=\sqrt{\frac{\left(0.1\right)\left(0.9\right)}{150}}\approx 0.0245$
13 Describe the shape of the sampling distribution of $\stackrel{^}{p}$. Justify your answer. (5)
The shape will be approximately normal because $np=\left(150\right)\left(0.1\right)=15>10$ and $n\left(1-p\right)=\left(150\right)\left(0.9\right)=135>10$.
14 Find the probability that at least 17 of the trainees will be rated as outstanding. (5)
$P\left(X\ge 17\right)=P\left(\stackrel{^}{p}>\frac{17}{150}\right)=P\left(\stackrel{^}{p}>0.1133\right)=P\left(z>\frac{0.1133-0.1}{0.0245}\right)=P\left(z>0.5443\right)\approx 0.2931$
15 Find the probability that fewer than 14 of the trainees will be rated as outstanding. (5)
$P\left(X<14\right)=P\left(\stackrel{^}{p}<\frac{14}{150}\right)=P\left(\stackrel{^}{p}<0.0933\right)=P\left(z<\frac{0.0933-0.1}{0.0245}\right)=P\left(z<-0.2722\right)\approx 0.3927$
Page last updated 10:26 2020-01-30
# The eccentricity of the hyperbola whose latus rectum is equal to half of its conjugate axis is
$\begin{array}{1 1}(1)\frac{\sqrt{3}}{2}&(2)\frac{5}{3}\\(3)\frac{3}{2}&(4)\frac{\sqrt{5}}{2}\end{array}$
Given latus rectum = $\large\frac{1}{2} \times$ Conjugate axis
$\large\frac{2b^2}{a} =\frac{1}{2} \times \normalsize 2b \Rightarrow 2b^2 = ab \Rightarrow 2b = a$
$b^2 = a^2(e^2-1) \Rightarrow b^2 = 4b^2(e^2-1) \Rightarrow \large\frac{1}{4} \normalsize = e^2-1$
$e^2=1+ \large\frac{1}{4}=\frac{5}{4}$
$e= \large\frac{\sqrt{5}}{2}$
Hence 4 is the correct answer.
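As a quick check: with $e = \large\frac{\sqrt{5}}{2}$ we get $b^2 = a^2(e^2 - 1) = \large\frac{a^2}{4}$, so the latus rectum is $\large\frac{2b^2}{a} = \frac{a}{2}$ while the conjugate axis is $2b = a$; the latus rectum is indeed half the conjugate axis.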
# Release Notes
### Arria for Power BI
For further information or help with any of these issues, please contact our Support Team.
## Version 3.3.1.0
March 2022
### New features and enhancements
• Arria Answers: Support for single date fields
Arria Answers can now handle date values stored in a single column. Spreading date components (year, quarter, month, and day) across multiple columns is no longer required.
• Arria Answers: Two-digit year values
In addition to four-digit year values (e.g. 2022), two-digit year values (e.g. 22) are now supported.
• Arria Answers: Support for additional aggregation types
Arria Answers now supports five aggregation types: Don't summarize, Sum, Average, Minimum, and Maximum. Previously, it supported Don't summarize and Sum aggregation only. This improvement increases your options for querying your data — see Query types for guidance.
### Fixed issues
• NLG Apps: Changing the aggregation type of a measure to an unsupported type resulted in errors or incorrect narratives
Previously, when Arria for Power BI displayed a generated narrative, changing a measure's aggregation type to one unsupported by the app resulted in errors or incorrect narratives. This no longer occurs.
• Arria Answers now correctly handles values that contain asterisks
Arria Answers no longer generates an error message when a data field's name contains an asterisk.
### Known issues
• NLG Apps: Some narratives must be reconfigured after updating to Arria for Power BI 3.3.1.0
When you update to Arria for Power BI 3.3.1.0, the add-in replaces any pre-existing narratives generated using the Anomalies, Correlations, Describe a Pie Chart, Describe a Bar Chart, and Describe a Line Chart apps. More specifically, the update replaces the narrative with an error message or a narrative generated by a different NLG app.
To resolve this issue, you must reconfigure any narratives generated using these apps after completing the update.
• Arria Answers: Hyphens are not supported in field names or values
Arria Answers does not recognize field names or field values containing hyphens.
• Arria Answers: Empty cells (null values) in dimension fields result in error messages
If a non-time dimension field in your dataset — e.g. Country or Product — contains empty cells (null values), Arria Answers returns this error message:
I'm sorry, something went wrong. Please try again or ask for help.
If a time dimension field in your dataset — e.g. Date or Year — contains empty cells (null values), Arria Answers returns this error message:
I'm sorry, I can't understand your date column format. Please use an ISO-8601 format and try again.
To avoid these issues, please ensure that your dimension fields contain no empty cells.
## Version 3.3.0
December 2021
### New features and enhancements
• New NLG Apps
Arria for Power BI 3.3.0 introduces two new out-of-the-box narrative types:
• The Anomalies app generates a narrative that detects and contextualizes anomalies in one or more measures — at a single time point or across a time range — broken down by a number of dimensions. For example, the narrative may report unexpectedly high or low sales for a certain month when compared to all other months in the dataset.
See Anomalies for details.
• The Correlations app generates an analysis of correlations between measures — at a single time point or across a time range — broken down by a number of dimensions. For example, for a particular product, the narrative may report a strong positive correlation between marketing spend and an increase in profit two months later.
See Correlations for details.
## Version 3.2.2
August 2021
### Fixed issues
• Error when existing NLG Apps narratives updated to version 3.2.0 of Arria for Power BI
When existing NLG Apps narratives are updated to version 3.2.0 of Arria for Power BI, the following error may occur:
Please check the data you have provided. Missing time dimension. You have not supplied the measure selected for analysis as part of the dataset.
This has been fixed in Arria for Power BI 3.2.2.
## Version 3.2.1
August 2021
### Fixed issues
• Incorrect sign-up URL on Welcome page
The Sign up here URL has been updated.
## Version 3.2.0
August 2021
### New features and enhancements
• NLG Apps: Support for additional aggregation and entity types
Arria for Power BI 3.2 introduces NLG Apps support for measures aggregated by Average, Minimum, Maximum, and Count, and precalculated values such as percentages, ratings, and ratios.
See the Support for additional aggregation and entity types in NLG Apps topic for further information.
### Important
Arria Answers currently supports only the Sum aggregation type and the None and Currency entity types. When switching from an NLG Apps narrative to Arria Answers, make sure the measures chosen to generate the narrative meet these requirements. If they don't, Arria Answers will return incorrect responses.
### Fixed issues
• Using summarization with measures
When working with NLG Apps and Arria Answers in previous releases of Arria for Power BI, measures could only be aggregated using the Sum summarization type. Using other summarization types would result in erroneous narratives.
For NLG Apps only, support has been added for the Average, Minimum, Maximum, and Count summarization types. Arria Answers still supports only the Sum summarization type.
See the Support for additional aggregation and entity types in NLG Apps topic for further information.
### Known issues
• NLG Apps: Changing the aggregation type of a measure to an unsupported type results in errors or incorrect narratives
When on a generated narrative page, changing the aggregation type of a measure to one that is not supported by that app results in errors or incorrect narratives being generated.
To resolve the issue, change the aggregation type back to one that is supported.
For details of the aggregation and entity type combinations supported by your chosen app, see the relevant page in the NLG App Directory.
## Version 3.1.0
### March 2021
The following enhancements to NLG Apps are now available through Arria for Power BI 3.1.0.
• Support for single column dates
NLG Apps no longer requires dates to be presented as a date hierarchy in separate Year, Quarter, Month and Day fields. Single Date fields are now supported.
See NLG Apps Data requirements for more information on supported date formats.
• Support for custom date field names
NLG Apps no longer requires date fields to be named exactly Date, Year, Quarter, Month, and Day.
Now, for example, fields containing day values can be named either "d", "D", or any expression that contains the word "day", such as "day_name". This requirement is case-insensitive: any mixture of cases is acceptable. Therefore, "DAY_name" and "Day_Name" (to give two alternatives) are valid alternatives to "day_name".
See NLG Apps Data requirements for more information on supported date field names.
### December 2020
#### New features and enhancements
• Two new NLG Apps narratives
The NLG Apps feature introduces two new out-of-the-box narrative types:
• Trend Analysis: Generates a narrative that analyzes the changes in trend of a measure over time. The narrative highlights significant shifts, and drills down through multiple dimensions. For example, an analysis of the changes in Sales by Month, broken down by Country, Product and Market.
• Ranking Analysis: Generates an analysis of changes in the ranking of dimensions over time as they relate to a measure, and summarizes the top and/or bottom rankings of combinations of dimensions over the same time period. For example, an analysis of the changes in Profit over a Quarter, broken down by Country, Product and Segment.
See the NLG Apps reference for details.
#### Notable changes
• Improvements to the Describe a Line Chart NLG App narrative
The structure of the Describe a Line Chart narrative has been improved so that the data insights are presented more concisely and clearly. For example, the Correlations section has been moved to the end of the narrative and is included only when multiple measures are selected.
#### Fixed issues
• Arria Answers: calculated measures cannot be queried
In previous versions of Arria for Power BI, Arria Answers incorrectly treated calculated measures as dimensions. This issue has been fixed.
• Arria Answers: renamed data columns not recognized
In previous versions of Arria for Power BI, Arria Answers incorrectly handled columns that had been renamed in the Fields pane and issued an error stating that the renamed column could not be found. This has been fixed.
#### Known issues
• Measures narrated in reverse order of priority when using the Descriptive Statistics app
When the Ranking option of the Descriptive Statistics app is set to Use order of dimensions set in Step 1, the order of the measures set in Step 1 of the wizard is reversed in the generated narrative.
• Arria Answers generates error when a data value contains an asterisk
Arria Answers generates the error "I'm sorry, something went wrong. Please try again or ask for help." when asked about data fields that include values containing the asterisk character.
For example, the error is generated when asking about a Country data field that includes the value *Canada*.
## Version 3.0.2
October 2020
### New features and enhancements
• Support for the Power BI context menu
Arria for Power BI now supports the Power BI context menu feature. The Power BI context menu is accessed by right-clicking in the add-in.
Users can now copy the visual using the Copy visual option and show the data being used to create the narrative using the Show as a table option.
### Known issues
• Using NLG Apps with the Private Cloud and Customer-Hosted deployment options
You may notice minor differences between the narratives generated using NLG Apps when working in a private cloud or customer-hosted environment and those generated when working in the public cloud environment.
This will be fixed in version 3.1.0.
## Version 3.0.0
September 2020
Arria for Power BI 3.0.0 introduces a redesigned out-of-the-box experience, an all-new natural language query experience, and several user interface enhancements — making it even easier to add automatically-generated, natural-language insights to your reports and dashboards.
### New features and enhancements
• NLG Apps
NLG Apps (built from a feature that was formerly known as Configure Narrative) introduces three new, out-of-the-box narrative types. In addition to the existing Descriptive Statistics, Time-based Variance, and Target-based Variance analyses, you can now analyze and narrate your charts using the Describe a Pie Chart, Describe a Bar Chart, and Describe a Line Chart apps.
See NLG Apps for details.
### Note
The Descriptive Statistics analysis now forms a discrete app, so this type of analysis can no longer be selected with Time-based or Target-based Variance to produce a single narrative. Instead, each analysis type and its corresponding narrative must be added to the dashboard separately.
• Arria Answers
Arria Answers is a new, conversational-AI platform that integrates with Power BI dashboards to give businesses real-time access to key insights from data, using natural language queries.
See Arria Answers for details.
• Support for larger datasets (payloads)
Previous versions of Arria for Power BI had a dataset limit of 30,000 rows. Improvements to the add-in mean that there is no longer a limit on the number of rows in your data.
Now, the only restriction on dataset size is the data payload limit imposed by Studio (20MB).
See Dataset size limits and performance for details.
### Notable changes
• New Arria for Power BI login
To configure Arria for Power BI, you are now required to log in to the add-in using your Arria NLG Studio or Arria for Power BI account credentials.
The new Arria for Power BI account's subscription model allows you to choose the core features (NLG Apps, Arria Answers, and Custom Narratives) you want to configure and make available to your dashboard viewers.
See Arria accounts and supported features for details.
• Private Cloud and Customer-Hosted deployment options
Users of the Private Cloud (a.k.a. Dedicated Cloud) and Customer-Hosted (a.k.a. On-Premises) deployment options can no longer use the publicly available Arria for Power BI add-in.
Instead, a unique instance of the add-in, configured to call your exclusive Arria for Power BI services, must be downloaded from your private cloud or customer-hosted environment.
### Fixed issues
• Comma-separated values now correctly handled when working with table-type custom narrative projects.
Comma-separated values in a single cell are now correctly handled when calling table-type Studio projects from the add-in.
### Known issues
• Supported browsers
Arria for Power BI 3.0.0 is supported in the Chrome and Edge browsers only.
• Using summarization with measures
When working with NLG Apps and Arria Answers, measures can only be aggregated using the Sum summarization type. Using other summarization types will result in erroneous narratives.
When working with Custom Narratives, you may use any summarization type as long as the custom Studio project has been configured to handle it.
• Arria Answers: calculated measures cannot be queried
Arria Answers incorrectly treats calculated measures as dimensions. You may receive errors such as "I'm sorry, I can't answer a question without a measure".
• Arria Answers: renamed data columns not recognized
Arria Answers does not recognize columns that have been renamed in the Fields pane.
Although they are correctly identified as a dimension or measure in the question entered into Arria Answers (highlighted in pink or orange), you will receive an error stating that the column could not be found.
For example, renaming the Sales column as Actual Sales in the Fields pane results in this error.
Exercise 8.2
1. ABCD is a quadrilateral in which P, Q, R and S are midpoints of the sides AB, BC, CD and DA (sec figure), AC is a diagonal. Show that
(i) $SR\parallel AC$ and SR = ${1 \over 2}AC$
(ii) PQ = SR
(iii) PQRS is a parallelogram.
Sol. (i) Consider triangle ACD,
S and R are mid-points of sides AD and DC respectively.
$SR\parallel AC$ and SR = ${1 \over 2}AC$ ….(i)
[Line segment joining mid-points of two sides of a triangle is parallel to the third and half of it.]
(ii) Consider triangle ABC, P and Q are mid-points of sides AB and BC respectively.
∴ $PQ\parallel AC$ and PQ= ${1 \over 2}AC$ ….(ii)
[Reason same as above]
From (i) and (ii),
$SR\parallel AC$ and $PQ\parallel AC$ ⇒ $SR\parallel PQ$ …(iii)
and SR = ${1 \over 2}$ AC and PQ =${1 \over 2}$ AC ⇒ SR = PQ. …(iv)
(iii) $SR\parallel PQ$ and SR = PQ. [From (iii) and (iv)]
⇒ PQRS is a parallelogram.
2. ABCD is a rhombus and P, Q, R and S are the midpoints of the sides AB, BC, CD and DA respectively. Show that the quadrilateral PQRS is a rectangle.
Sol. First prove that PQRS is a parallelogram.
(i) Consider triangle ACD,
S and R are mid-points of sides AD and DC respectively.
∴ $SR\parallel AC$ and $SR = {1 \over 2}AC$
[Line segment joining mid-points of two sides of a triangle is parallel to the third and half of it.]
(ii) Consider triangle ABC, P and Q are mid-points of sides AB and BC respectively.
∴ $PQ\parallel AC$ and $PQ = {1 \over 2}AC$ …(ii) [Reason same as above]
From (i) and (ii),
$SR\parallel AC$ and $PQ\parallel AC$ ⇒ $SR\parallel PQ$ …(iii)
and $SR = {1 \over 2}AC$ and $PQ = {1 \over 2}AC$ ⇒ SR = PQ. …(iv)
(iii) $SR\parallel PQ$ and SR = PQ. [From (iii) and (iv)]
⇒ PQRS is a parallelogram.
PQRS is a parallelogram.
As $PX\parallel YO$ and $PY\parallel OX$, PXOY is a parallelogram.
⇒ ∠YPX = ∠YOX = 90° [Diagonals of a rhombus bisect each other and are at right angles.]
As in parallelogram PQRS,
∠SPQ is 90°.
∴ PQRS is a rectangle.
3. ABCD is a rectangle and P, Q, R and S are mid-points of the sides AB, BC, CD and DA respectively. Show that the quadrilateral PQRS is a rhombus.
Sol. Construction:
Join AC and BD.
As ABCD is a rectangle.
∴ AC = BD …(i)
Consider ΔABC, P and Q are midpoints of sides AB and BC respectively.
∴ $PQ\parallel AC$ and $PQ = {1 \over 2}AC$ …(ii)
Similarly, consider ΔADC, S and R are mid-points of sides AD and DC respectively.
∴ $SR\parallel AC$ and $SR = {1 \over 2}AC$ ….(iii)
From (ii) and (iii)
$PQ = SR = {1 \over 2}AC$ ……(iv)
Similarly, we can show
$PS = QR = {1 \over 2}BD$ …(v)
From (i), (iv) and (v), we have PQ = QR = RS = SP
∴ PQRS is a rhombus.
4. ABCD is a trapezium in which $AB\parallel DC$ , BD is a diagonal and E is the mid-point of AD. A line is drawn through E parallel to AB intersecting BC at F (see figure). Show that F is the mid-point of BC.
Sol. Let EF intersect BD at G. Consider ΔADB, $AB\parallel EF$ ⇒ $AB\parallel EG$
⇒ G is mid-point of BD. …(i)
[A line drawn through the mid-point of one side of a triangle, parallel to another side, bisects the third side.]
Consider triangle BCD,
$AB\parallel CD$ and $EF\parallel AB$
⇒ $EF\parallel CD$ ⇒ $GF\parallel CD$
⇒ F is mid-point of BC. [Reason same as above]
5. In a parallelogram ABCD, E and F are the mid-points of sides AB and CD respectively (see figure). Show that the line segments AF and EC trisect the diagonal BD.
Sol. $AB = CD \Rightarrow {1 \over 2}AB = {1 \over 2}CD$
⇒ AE = CF
As AE = CF and $AE\parallel CF$ [∵ $AB\parallel CD$ and AB = CD]
⇒ AECF is a parallelogram.
⇒ $AF\parallel CE$ ⇒ $AP\parallel CE$ …(i)
Consider triangle ABP,
E is mid-point of AB and $EQ\parallel AP$ [From (i)]
⇒ Q is mid-point of BP [A line segment drawn through mid-point of one side of a triangle and parallel to other, bisects the third side.]
BQ = PQ …(ii)
Similarly, by considering triangle DCQ and proceeding as
above, we can show that
DP = PQ …(iii)
⇒ BQ = PQ = DP [From (ii) and (iii)]
⇒ P and Q trisect BD.
6. Show that the line segments joining the mid-points of the opposite sides of a quadrilateral bisect each other.
Sol.
(i) Consider triangle ACD,
S and R are mid-points of sides AD and DC respectively.
∴ $SR\parallel AC$ and SR $= {1 \over 2}AC$ …(i)
[Line segment joining mid-points of two sides of a triangle is parallel to the third and half of it.]
(ii) Consider triangle ABC, P and Q are mid-points of sides AB and BC respectively.
∴ $PQ\parallel AC$ and PQ $= {1 \over 2}AC$ …(ii) [Reason same as above]
From (i) and (ii),
$SR\parallel AC$ and $PQ\parallel AC$ ⇒ $SR\parallel PQ$ …(iii)
and $SR = {1 \over 2}AC$ and $PQ = {1 \over 2}AC$ ⇒ SR = PQ . …(iv)
(iii) $SR\parallel PQ$ and SR = PQ. [From (iii) and (iv)]
⇒ PQRS is a parallelogram.
We know that diagonals of a parallelogram bisect each other, i.e., OP = OR and OQ = OS.
Hence, line segments joining midpoints of opposite sides of a quadrilateral bisect each other.
7. ABC is a triangle right angled at C. A line through the mid-point M of hypotenuse AB and parallel to BC intersects AC at D. Show that
(i) D is the mid-point of AC
(ii) MD ⊥ AC
(iii) CM = MA = ${1 \over 2}$ AB.
Sol.
(i) $MD\parallel BC$ meets AC at D.
∴ D is mid-point of AC.
[A line through the mid-point of one side of a triangle, parallel to another side, bisects the third side.]
(ii) $MD\parallel BC$ and AC is transversal.
∴ ∠ADM = ∠ACB. [ Corresponding angles ]
⇒ ∠ADM = 90° [ ∠ACB = 90° ]
⇒ MD ⊥ AC.
(iii) Consider triangles ADM and CDM,
AD = DC [ From result (i) ]
MD is common.
∠ADM = ∠CDM [90° each] [From result (ii)]
∴ ΔADM ≅ ΔCDM (SAS) ⇒ CM = MA, and MA = ${1 \over 2}AB$ [ M is mid-point of AB], so CM = MA = ${1 \over 2}AB$
Exploring Terraform Graphs With D3.js Part 1
TL;DR
As a newcomer to Terraform (and to AWS), I sometimes find it difficult to reason about the many available resource types, and the dependencies that can exist between them. Especially when coming to terms with larger configurations.
To address this difficulty, I want a tool to help me explore dependency graphs, and resource definitions, interactively. This is the first in a series (parts: one, two, three, four; code, documentation) of posts about building such a tool, using d3.js, starting with the simple example below1, and building upon it.
Terraform and Dependency Graphs, Introduced
Terraform is a remarkable piece of software; it’s like Make for infrastructure. Rather than transforming source into libraries and executables, Terraform transforms resource definitions (such as vm instances, dns records, s3 buckets) into running infrastructure.
Like Make, Terraform walks a dependency graph to determine the order in which it should create resources, to identify what can be done in parallel, and to re-create resources affected by changes.
Consider the following example, a straightforward Terraform graph–the same as above–laid out by the graphviz package.2
This graph is easy to understand because it has only a handful of nodes, and an obvious structure. You can easily find the single instance, its provider (aws, in this case), and the few variables they depend on. Here is a slightly more complex example.3
This graph remains fairly legible. But larger examples tend to sprawl, making resources harder to find, and dependencies harder to trace.4
Interactive Dependency Graphs
These visualizations could be improved in various ways. Adding color, and varying the shapes used, for instance, as well as collapsing less interesting parts of the graph. But an interactive visualization offers these possibilities and more.
As a reminder of how this might work, here is the first example, again. The root node is made larger than its dependencies, and nodes of different types are assigned colors, according to an arbitrary scheme.
And here is the second. This example is worryingly dense, compared with the Graphviz version, but being able to call up resource definitions with the mouse is a striking advantage.
One possible improvement here is to use curved edges, so that their direction is more obvious. (Tracing an edge in the clockwise direction brings you to a dependency.)
That’s an encouraging result, but what about a much larger graph, like the sprawling third example?
This version is harder to make sense of than the Graphviz version! It contains so many types of resource, for example, that it exhausts the 20-color palette, used previously.
Additionally, many of the edges overlap, or are drawn so close together, that they become hard to distinguish.
Conclusion
So far this has been a fun exercise, and I’m satisfied with it as a proof of concept. However, larger configurations remain a problem. In the next post, I plan to take up this problem, and explore possible solutions to it.
Have a suggestion? patrick.mcmurchie@gmail.com
1. .tf files borrowed from Udemy’s Terraform course materials, here↩︎
2. Terraform directly supports this type of visualization, through its graph argument: terraform graph | dot -Tsvg > graph.svg↩︎
3. .tf files borrowed from Hashicorp’s aws provider examples, here↩︎
4. .tf files borrowed from Udemy’s Terraform course materials here ↩︎
# Event Calendar
Graduate Seminar in Fluid Mechanics @ 132 Gilman Hall
Feb 16 @ 4:00 pm – 5:00 pm
4:00 pm Presentation
“Large Eddy Simulation Including Population Dynamics Model for Polydisperse Droplet Evolution in Turbulence”
Previous studies have shown that dispersion patterns of oil droplets in the ocean following an oil spill depend critically on droplet diameter. Hence predicting the evolution of the droplet size distribution is of critical importance for predicting macroscopic features of oil dispersion in the ocean. We adopt a population dynamics model of polydisperse droplet distributions for use in Large Eddy Simulation. We generalize a breakup model from Reynolds averaging approaches to LES in which the breakup is modeled as due to bombardment of droplets by turbulent eddies of various sizes. The breakage rate is expressed as an integral of a collision frequency times a breakage efficiency over all eddy sizes. An empirical fit to the integral is proposed in order to avoid having to recalculate the integral at every LES grid point and time step. The fit is tested by comparison with various stirred tank experiments. As a flow application for LES we consider a turbulent jet emanating from a source where oil droplets are released. The advected velocity and concentration fields of the droplets are described using an Eulerian approach. We apply this LES model to study the change of the oil droplet size distribution due to breakup, caused by interaction of turbulence generated by the jet with the oil droplets.
4:25 pm Presentation
On the Interactions of a Rotor Blade Tip flow with Axial Casing Grooves in an Axial Compressor Near Best Efficiency Point
Presented by HUANG CHEN (Adviser: Prof. Katz)
GRADUATE SEMINAR IN FLUID MECHANICS @ 132 Gilman Hall
Feb 23 @ 4:00 pm – 5:00 pm
4:15-4:40 p.m. Presentation
“Simultaneous PLIF and PIV Measurements on Refractive Index Matched Immiscible Buoyant Oil Jet Fragmentation in Water”
Presented by XINZHI XUE (Adviser: Prof. Katz)
Subsurface oil well blowouts generate immiscible turbulent buoyant oil jets, which break up into a cloud of oil droplets. Understanding of the fragmentation is essential for evaluating the spreading of the oil jet and its interaction with the surrounding water in the near field, and for determining the droplet size distribution needed for modeling the subsequent transport of the oil. There is limited experimental data on the near field behavior of the opaque oil jet because of the inability to perform phase distribution measurements there. Injecting silicone oil into sugar water, which have the same refractive index, as surrogates for crude oil and water, respectively, enables us to observe the breakup process. The dynamic similarity is maintained by keeping nearly the same interfacial tension as well as density and viscosity ratios. The mixing process is visualized by simultaneous applications of planar laser induced fluorescence (PLIF), by premixing the oil with dye, and particle image velocimetry (PIV). The PLIF images are used for measuring the droplet sizes as well. Results show that with increasing Reynolds number, the jet spreading angle evaluated from the PLIF images increases, and its centerline velocity decreases at a faster rate. Beyond about 7 nozzle diameters, the turbulence peaks at the center of the jet, and its magnitude scales with centerline velocity. As expected, the oil ligament fragmentation occurs primarily in regions of high strain-rate fluctuations in the near field shear layer and at the end of the potential core. The latter moves closer to the nozzle, and the resulting characteristic droplet sizes decrease with increasing Reynolds number. In both cases, the fragmentation process generates compound droplets, each containing multiple layers of oil and water.
4:40-5:00 p.m. Presentation
“Common Features of the Turbulence Structures in the Tip Region of Axial Turbomachines”
Presented by YUANCHAO LI (Adviser: Prof. Katz)
The flow in turbomachines is inherently complex and turbulent. Modern design processes rely heavily on CFD, especially RANS simulations to elucidate the flow, raising the questions about the applicability of popular turbulence models to turbomachines. Several axial turbomachines, including two waterjet pumps and an aviation compressor, have been studied experimentally in the JHU refractive index-matched facility in the past few years. This talk summarizes some common features in turbulence structure observed in these machines, such as the anisotropic distributions of Reynolds stresses and mechanisms (e.g., production, transport) causing it. Among all the machines, elevated turbulent kinetic energy (TKE) is observed to be associated with the characteristic vortical structures, e.g., the tip leakage vortex (TLV) and the shear layer connecting TLV to the blade suction side tip corner. High TKE is also evident near the blade tip corners and in some cases, in the layer separated from the endwall boundary where the leakage flow meets the opposite-directional main passage flow. The dominant terms of turbulence production rates show similar distributions in these machines, such as high shear production in the shear layer and opposite effects by flow contraction/stretching in the production of $\langle u_z^{\prime 2} \rangle$ and $\langle u_r^{\prime 2} \rangle$ near the TLV center. Moreover, eddy-viscosities estimated using individual stress and strain rate components reveal extreme spatial variability and inconsistency, suggesting that the popular eddy-viscosity based models are not applicable for these machines. However, the common turbulence features can serve as useful references for numerical simulations in which local production and anisotropy should be taken into account.
Graduate Seminar in Fluid Mechanics @ 132 Gilman Hall
Mar 2 @ 4:00 pm – 5:00 pm
4:00 pm Presentation
“Cavitation Inception in Turbulent Shear Layers”
Presented by KARUNA AGARWAL (Adviser: Prof. Katz)
Cavitation in turbulent shear layers initiates along streamwise vortices. This has been argued to be the cause of Reynolds number dependence of the cavitation index. However, no volumetric pressure and flow-field measurements exist to explain this. Experiments to obtain tomographic PIV data downstream of a backward-facing step in the high speed water tunnel facility are planned. To characterize the turbulent boundary layer at the step, 2D PIV images are recorded. High speed images in wall-normal and spanwise planes are recorded to study the cavitating structures and find the conditions at which they first appear. To better understand cavitation in turbulence, very high speed (5 MHz frame rate) holographic study of injected free stream nuclei will be performed.
4:25 pm Presentation
An Ensemble-Based Algorithm for Characterization of Scalar Sources in Turbulent Environment”
Presented by QI WANG (Adviser: Prof. Zaki)
An algorithm to determine the location and intensity of a scalar source with a parametrized shape is proposed and tested in a canonical turbulent channel flow at $Re_\tau = 180$. The algorithm uses forward simulations of an ensemble of scalar-source distributions, and can be easily applied to scenarios with a growing time horizon. The history of the scalar concentration at the sensor location due to the true source is compared with predictions from the ensemble members in order to determine the parameters of the source. Prediction errors are due to the approximation of the eigenvectors of the impulse-response matrix, or “eigen-sources”. In order to obtain a better approximation of the eigen-sources, a POD projection is used and is demonstrated to enhance the accuracy of the algorithm. The effect of measurement noise on the quality of reconstruction is quantified using the ratio of the standard deviation in the predicted source parameters and in the observation noise. The results provide a measure of the difficulty of source reconstruction for different relative positioning of sources and sensors.
Department of Mechanical Engineering Spring Seminar Series @ 210 Hodson Hall
Mar 8 @ 3:00 pm – 4:00 pm
Experimental Methods in Thermal-Fluid Sciences
Presented by Professor Matthieu Andre, Mechanical and Aerospace Engineering Department, The George Washington University
There exists a wide range of measurement techniques applied to fluid mechanics, each with its advantages and drawbacks. In this seminar, some examples of advanced optical techniques covering very diverse aspects of thermal-fluid sciences are presented.
In the first part, time-resolved particle image velocimetry (PIV) coupled to planar laser induced fluorescence (PLIF) is applied to a free surface flow to study fundamental physics responsible for atomization and air entrainment. High spatio-temporal resolution PIV data in both phases and precise reconstruction of the interface give new understandings of bubble entrainment caused by shear layer instability below the surface.
The second part discusses the use of molecular tagging velocimetry (MTV) to probe gas-cooled nuclear reactors in accident scenarios. This applied research aims at measuring in a large test facility the slow flow transient following a loss of forced circulation of the coolant. The diagnostics capabilities and performances are first assessed in the lab, and then the technique is deployed to perform in-situ measurements, providing valuable validation data for the models used in the design of such reactors.
Finally, a new experimental facility for fluid-structure interaction studies is described, and examples of optical measurements applied to other research areas are presented.
Matthieu Andre is a research professor in the Mechanical and Aerospace Engineering department at The George Washington University in Washington D.C. He received his M.S. degree from the Ecole Centrale de Lille in France in 2010, and obtained his Ph.D. in Mechanical Engineering from the George Washington University in 2014. His work focuses on experimental fluid mechanics and his current research interests include multiphase flows (e.g. cavitation, stratified flows, free surface flows), buoyant flows, and the development of experimental measurement techniques. He has experience with many laser-based diagnostics such as PIV, MTV, PLIF, Rayleigh scattering, and tunable diode laser absorption spectroscopy. His work was published in prominent journals such as Journal of Fluid Mechanics, Physics of Fluids, Experiments in Fluids, Measurement Science and Technology, and International Journal of Multiphase Flow. He received the best presentation award at the Young Professional Thermal Hydraulics Research Competition at the 2013 ANS winter meeting, and was a winner of the 2013 GW SEAS R&D Showcase for his work on free surface flow instabilities.
Graduate Seminar in Fluid Mechanics @ 132 Gilman Hall
Mar 9 @ 4:00 pm – 5:00 pm
4:10 pm Presentation
“Experimental Study of Shock Waves Interaction with Rigid Porous Media”
Presented by OMRI RAM (Adviser: Prof. Katz)
It is well known that porous obstacles can cause significant diffraction and attenuate a shock wave propagating through them. Various models were proposed in the past to incorporate the microscopic interaction forces between the fluid and the skeleton of the porous sample into a macroscopic solution of the governing equations. However, these models which are usually based on a multiphase solution approach require identifying multiple properties of the fluid, the solid matrix and its geometry, some of which are notoriously difficult to measure. In this study, silicon carbide porous media with various porosities were placed in a shock tube at a fixed distance from the end-wall. The samples were subjected to a shock wave and the pressure build-up at the end-wall was recorded. An analysis methodology was developed to study the effect of various parameters on the pressure build-up in the confined volume. This methodology addresses the porous medium and the gas in the confined volume behind it as a single mechanical system. Assuming that the flow through the porous sample is close to being isentropic, a constitutive model that enables predicting the pressure profile developing on the end-wall was derived. Furthermore, it was shown that all of the experimental results can be represented in a non-dimensional form, thus revealing the similarity between them. The mechanical system perspective enabled us to better understand the physical mechanisms affecting the pressure pulse transformation while passing through the porous medium and through the air gap between the rear face of the porous sample and the end-wall. The modal response of the system revealed that when an arbitrary pressure pulse is imposed on the front face of the porous medium the high frequency spectral components were attenuated. The system acts as a low pass filter on the pressure profile propagating through it and inhibits the propagation of fast changing pressure pulses.
4:35 pm Presentation
Instability of Supersonic Boundary Layers and its Sensitivity to Base-Flow Distortion”
Presented by JUNHO PARK (Adviser: Prof. Zaki)
The nonlinear parabolized stability equations (NPSE) can accurately and efficiently predict the amplification of finite amplitude instability waves and transition to turbulence in high-speed boundary layers. The base state is obtained from the similarity solution of the boundary-layer equations, and is distorted by the instabilities. While the NPSE fully accounts for this distortion, it does not account for potential uncertainties in the base state due to the flow environment, and boundary and thermal conditions. These uncertainties alter the transition behavior. In this work, we examine the sensitivity of finite-amplitude boundary-layer instabilities to base-flow distortions using the NPSE framework. We start with a review of the transition in supersonic boundary layers, and formulate the sensitivity analysis via theoretical (adjoint) and numerical techniques. The sensitivity of instability waves and transition onset to modifications in the base velocity and temperature are analyzed, and the uncertainty in transition due to wall heating is discussed.
Mar
15
Thu
Mechanical Engineering 2018 Spring Seminar Series: Class 530.804 @ 210 Hodson Hall
Mar 15 @ 3:00 pm – 4:00 pm
“Advanced 3D/4D Bioprinting and Nanomaterials for Complex Tissue Regeneration”
Presented by Professor Lijie Grace Zhang, Department of Mechanical and Aerospace Engineering, the George Washington University
As an emerging tissue manufacturing technique, 3D bioprinting offers great precision and control of the internal architecture and outer shape of a scaffold, allowing for close recapitulation of complicated structures found in biological tissue. In addition, 4D bioprinting is a highly innovative additive manufacturing process to fabricate pre-designed, self-assembly structures with the ability to transform from one state to another directly off the bioprinter. The term “4D” refers to the time-dependent dynamic process triggered by specific stimulation according to predesigned requirements. However, current 3D/4D bioprinting based additive manufacturing technologies are hindered by the lack of advanced smart “inks”. Therefore, the main objective of our research is to develop novel biologically inspired nano or smart inks and advanced 3D/4D bioprinting techniques to fabricate the next generation of complex tissue constructs (such as vascularized tissue, neural tissue and osteochondral tissue). For this purpose, we designed and synthesized innovative biologically inspired nanomaterials (i.e., self-assembly materials, and conductive carbon nanomaterials) and smart natural materials. Through 3D/4D bioprinting in our lab, a series of biomimetic tissue scaffolds were successfully fabricated. Our results show that these bioprinted nano or smart scaffolds have not only improved mechanical properties but also excellent cytocompatibility properties for enhancing various cell growth and differentiation, thus promising for complex tissue/organ regeneration.
Dr. Lijie Grace Zhang is an associate professor in the Department of Mechanical and Aerospace Engineering at the George Washington University. She obtained her Ph.D. in Biomedical Engineering at Brown University. Dr. Zhang joined GW after finishing her postdoctoral training at Rice University and Harvard Medical School. She is the director of the Bioengineering Laboratory for Nanomedicine and Tissue Engineering at GW. She has received the ASME Sia Nemat-Nasser Early Career Award, NIH Director’s New Innovator Award, Young Innovator in Cellular and Molecular Bioengineering, John Haddad Young Investigator Award by American Society for Bone and Mineral Research, and Early Career Award from the International Journal of Nanomedicine, etc. Her research interests include 3D/4D bioprinting, nanobiomaterials, complex tissue engineering and breast cancer bone metastasis. Dr. Zhang has authored 3 books, over 109 journal papers, book chapters and conference proceedings, and 6 patents, and has presented her work at over 280 conferences, universities and institutes. She also serves as the Editor of Materials Science and Engineering C: Materials for Biological Applications; Associate Editor-in-Chief of International Journal of Nanomedicine; and Associate Editor of ASME Journal of Engineering and Science in Medical Diagnostics and Therapy.
Mar
19
Mon
Department of Mechanical Engineering Special Seminar @ 210 Hodson Hall
Mar 19 @ 3:00 pm – 4:00 pm
“Data-driven spectral filters for identifying structure in the streamwise turbulent kinetic energy of turbulent boundary layers”
Presented by Dr. Woutijn Baars
University of Melbourne
Even though flow-induced jet noise and wall-turbulence are highly broadband in nature, both physical phenomena exhibit a strong coherence in the acoustic pressure and velocity fields, respectively. In the first part of this seminar, a short overview will be provided on the acoustic signatures emitted by high-speed jets. Using an acoustic similarity parameter developed for a characteristic jet sound source, we highlight that nonlinear acoustic waveform distortion can be substantial, but only under certain combinations of operating conditions and geometric scale of the jet.
The second, main part of this seminar focuses on the appearance of organized motions in wall-bounded turbulence. An organization is evidenced by the classification of distinctly different flow structures, including large-scale motions, such as hairpin packets, and very large-scale motions. In conjunction with less organized turbulence, all these flow structures contribute to the streamwise turbulent kinetic energy. Since different class structures comprise dissimilar scaling behaviors of their overlapping imprints in the velocity spectra, their coexistence complicates the development of models for the wall-normal trend of the energy statistics. Via coherence analyses of two-point data we derive spectral filters for stochastically decomposing the velocity spectra into sub-components, representing different types of statistical flow structures. In the process we reveal a Reynolds-number invariant wall-scaling for a portion of the outer-region turbulence that is coherent with the near-wall region; this supports the existence of a wall-attached self-similar structure embedded within the logarithmic region. It is also explored how these findings affect our ongoing work in the unique high-Reynolds-number boundary layer facility at Melbourne, including real-time control of the coherent scales to investigate their responsiveness to wall-based actuation.
Dr. Woutijn Baars received his B.Sc. (2006) and M.Sc. (2009) degrees from Delft University of Technology, where he experimentally studied the effects of icing on the stability of light aircraft. In 2013, Dr. Baars received his Ph.D. degree in Aerospace Engineering and Engineering Mechanics from the University of Texas at Austin. At UT Austin, his research investigations included the acoustic signatures generated by high-speed jets and the unsteady wall-pressure induced by shock wave boundary layer interactions in overexpanded nozzle flows. Currently he is a Post-Doctoral Research Fellow at the University of Melbourne, where he focuses on high-Reynolds-number wall-bounded flows. His ongoing research interests include the stochastic structure of wall-turbulence and how this organisation can assist active flow control for skin-friction drag reduction.
Mar
26
Mon
Mechanical Engineering Special Seminar @ 213 Hodson Hall
Mar 26 @ 3:00 pm – 4:00 pm
“Dynamics of buoyant particles in turbulent flows”
Presented by Dr. Varghese Mathai, University of Twente
Particle suspensions in turbulent flows occur widely in nature and industry. In most situations, the particles have a density which is different from that of the carrier fluid. This density difference can affect their motion through flows, and offers potential for changing the flow properties in many multiphase settings. In this talk, we will discuss the use of Lagrangian particle-tracking techniques to study the dynamics of light (buoyant) particles in turbulent flows.
In the first part, we address the acceleration dynamics of tiny buoyant particles (100-micron air bubbles) in a turbulent water flow. We examine the role of gravity on the bubble acceleration statistics. We find that microbubbles experience very different accelerations as compared to fluid tracers, and these occur despite their small size and minute Stokes number (small response time). Some implications of these findings to particle tracking experiments will be discussed.
In the second part, we move to the case of buoyant particles of finite size (particle size is large compared to the smallest turbulent flow length-scales). For spherical particles, buoyancy produces interesting variability in particle dynamics. In addition to buoyancy, we reveal the role of a largely ignored control parameter, the particle’s moment of inertia. Using experiments and direct numerical simulations, we demonstrate that the moment of inertia can be tuned to trigger distinctly different wake-induced-motions for both spherical and cylindrical particles. We draw some interesting analogies to the motions observed for anisotropic particles.
Dr. Varghese Mathai is a postdoctoral researcher in the Physics of Fluids group at the University of Twente, the Netherlands. He received his Master’s in Mechanical Engineering from the Indian Institute of Science, Bangalore, and his PhD in Applied Physics from the University of Twente, the Netherlands (2017). His PhD research was focused on the dynamics of buoyant particles and air bubbles in turbulent flows by using Lagrangian Particle Tracking and Particle Image Velocimetry techniques. His research interests lie in dispersed multiphase flows, bluff body flows, and free surface flows. His work has appeared in journals such as Physical Review Letters, Journal of Fluid Mechanics, Experiments in Fluids, and Journal of Vascular Surgery. Varghese’s work was selected among the top five PhD theses in fluid mechanics by the European Research Committee on Flow, Turbulence, and Combustion (ERCOFTAC). In 2018, he received the Best Research Prize by the European Cooperation in Science and Technology (Eu-COST).
Mar
28
Wed
Mechanical Engineering Special Seminar @ G33/35 Malone Hall
Mar 28 @ 3:00 pm – 4:00 pm
“Engineering Matter with Photons for Advanced Technologies”
Presented by Dr. Kitty Kumar, Carnegie Mellon University
Photons are central to many of the forefront trends in science and technology today, serving as a powerful nanofabrication tool or a delicate laser tweezer to manipulate nanoparticles, or an insightful spectroscopic probe for unraveling the structure of large protein molecules. I will present how I have developed light (photons) as the tool to encode functionality into materials and reset the state-of-the-art in flexible silicon-based and soft matter electronics. The work addresses the key challenges in the advancement of emerging technologies by studying the fundamental laser-material interactions and bridges the gap between research and commercialization.
Dr. Kitty Kumar is a postdoctoral associate at Carnegie Mellon University. Her research interests are focused on fundamental principles and practices in ultrafast laser science, soft condensed matter, laser material processing, nanofabrication, biomimetics, and programmable soft matter to address emerging scientific questions and key technical bottlenecks in advanced soft matter technologies for sensing, analysis, space exploration and biomedicine. Kitty received her Ph.D. from the University of Toronto, where she focused on the laser-assisted fabrication of flexible solar cells and developed a novel laser processing technique for three-dimensional structuring of dielectric thin films for flexible electronics. During the postdoctoral position at the Wyss Institute for Biologically Inspired Technologies, Harvard University, she concentrated on the design and fabrication of bio-inspired advanced soft robotic systems for biomedical applications.
Mar
29
Thu
Mechanical Engineering 2018 Spring Seminar Series: Class 530.804 @ 210 Hodson Hall
Mar 29 @ 3:00 pm – 4:00 pm
“Control of Wind Turbines and Wind Farms”
Presented by Professor Lucy Pao, Electrical, Computer, and Energy Engineering Department, University of Colorado at Boulder
Wind energy is recognized worldwide as cost-effective and environmentally friendly and is among the world’s fastest-growing sources of electrical energy. However, science and engineering challenges still exist. For instance, in order to further decrease the cost of wind energy, wind turbines are being designed at ever larger scales, especially for offshore installations. We will overview a two-bladed downwind morphing rotor concept that is expected to lower the cost of energy more at wind turbine sizes beyond 13 MW compared to continued upscaling of traditional three-bladed upwind rotor designs. We will highlight some of the control systems issues for such wind turbines at these extreme scales and outline selected advanced control methods we are developing to address these issues. In the second part of the talk, we will discuss the growing interest in the coordinated control of wind turbines on a wind farm. Most wind farms currently operate in a simplistic “greedy” fashion where each turbine optimizes its own power capture. Due to wake interactions, however, this greedy control is actually suboptimal to methods in which the collective wind farm is considered. We will overview recent work in wind farm control and show selected results that demonstrate the performance improvements possible when carefully accounting for the wake interactions in coordinating the control of the wind turbines on the farm. We shall close by discussing continuing challenges and on-going and future research avenues that can further facilitate the growth of wind energy.
Lucy Pao is a Professor in the Electrical, Computer, and Energy Engineering Department at the University of Colorado Boulder. She has completed sabbaticals at Harvard University (2001-2002), the University of California, Berkeley (2008), the US National Renewable Energy Laboratory (2009), the Hanse-Wissenschaftskolleg Institute for Advanced Study in Delmenhorst, Germany (2016-2017) and the ForWind Center for Wind Energy Research at Oldenburg University (2016-2017). She earned B.S., M.S., and Ph.D. degrees in Electrical Engineering from Stanford University. Her research has primarily focused on combined feedforward and feedback control of flexible structures, with applications ranging from atomic force microscopy to disk drives to digital tape drives to megawatt wind turbines and wind farms. She is a Fellow of the International Federation of Automatic Control (IFAC) and the Institute of Electrical and Electronics Engineers (IEEE). Selected recent awards include the 2012 IEEE Control Systems Magazine Outstanding Paper Award (with K. Johnson), the 2015 Society for Industrial and Applied Mathematics (SIAM) Journal on Control and Optimization Best Paper Prize (with J. Marden and H. P. Young), the 2017 Control Engineering Practice Award from the American Automatic Control Council, and the Scientific Award 2017 from the European Academy of Wind Energy. Selected professional society activities include being a Fellow of the Renewable and Sustainable Energy Institute (2009-present), General Chair of the 2013 American Control Conference, member of the IEEE Control Systems Society (CSS) Board of Governors (2011-2013 and 2015), IEEE CSS Fellow Nominations Chair (2016-present), and member of the IFAC Executive Board (2017-2020).
|
2019-09-18 13:38:51
|
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.32049012184143066, "perplexity": 2025.9580800035167}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2019-39/segments/1568514573289.83/warc/CC-MAIN-20190918131429-20190918153429-00118.warc.gz"}
|
https://physics.stackexchange.com/questions/69966/electron-revolving-in-an-atom
|
# Electron revolving in an atom
When an electron revolves around the nucleus in p- or d-orbitals, why does it not collide with the nucleus?
I mean to say that the shape of the orbital narrows near the nucleus, so shouldn't it collide with the nucleus?
• I'm not too certain those will help the OP. The misconception seems to be around the fact that those diagrams of orbitals show 90% confidence interval surfaces, not surfaces of constrained motion. – user10851 Jul 4 '13 at 6:12
• @M.Tarun In any event, only s-orbitals have nonzero probability at the very center. The 2p orbital, for instance, has a probability distribution $(1/96\pi a_0^2) r^2\mathrm{e}^{-r/a_0} (1-\sin^2(\theta)\cos(2\phi))$ in some choice of spherical coordinates, which does indeed vanish for $r = 0$. – user10851 Jul 4 '13 at 6:23
• On @ChrisWhite's thoughts I'll re-open this. It brings up a different set of issues. Basically the meaning of "touch" or "collide" is not the same in the world of the very small. – dmckee --- ex-moderator kitten Jul 4 '13 at 14:25
• consider the orbital as a twisted loop. It has a point of intersection. In this case I think the electron should go through that point and collide with the nucleus. – M.Tarun Jul 5 '13 at 17:38
First let's take a step back and try to explain what exactly is being shown in "pictures" of orbitals. When you solve the Schrödinger equation for an electron around a nucleus, you can describe the resulting wavefunction any number of ways. One of the most common is in the position basis, so that the (square magnitude of the) wavefunction gives the probability density for the electron to be "found" at that location if you were to instantly measure its position (i.e. force it into an eigenfunction of the position operator).1
So all we have is this "cloud" - a nonnegative function of $\mathbb{R}^3$ describing where the electron might be. Since it is hard to draw functions of 3D space, what people often do is draw a single surface, usually a surface of constant probability density that contains, say, 90% of the total probability inside of it.2 If you think of the cloud as having varying mass density, we want to depict a natural region wherein we can find 90% of the total "mass."
Just because this surface comes to a point doesn't mean the electron is forced to that point. In fact, such simple diagrams don't really say anything about how the probability density is distributed inside the orbital. Moreover, if you believe the probability density doesn't do anything crazy (e.g. go off to infinity), the shrinking of the surface to a point tells us the electron is actually unlikely to be found near the center.3
As I finish writing this I realize it's something of a loose, mathy explanation for what those shapes are supposed to be. On the other hand, Jim's answer gives a far better explanation of physically what's going on.
1 For more explicit formulas, see an answer I wrote here.
2 The reason I specify a surface of constant density is because there are infinitely many surfaces that contain 90% of the probability. Start with one, expand it a little over here, shrink it a little over there, and you're left with a different 90% surface.
3 In fact, the process whereby a proton captures an electron really only happens with s-orbital electrons. All other orbitals have a node at the very center, so the electrons almost never "come close" to the nucleus.
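To make footnote 3 concrete, here is a minimal Python sketch. It assumes only the $r^2\mathrm{e}^{-r/a_0}$ radial dependence quoted in the comments above; the angular factor and normalization are left out, so the numbers are relative only.

```python
import numpy as np

# Radial factor of the 2p probability density quoted in the comments above:
# rho(r) proportional to r^2 * exp(-r / a0); angular part and normalization omitted.
a0 = 1.0                                   # Bohr radius, used as the unit of length
r = np.array([0.0, 0.01, 0.1, 1.0, 4.0])   # radii in units of a0
rho = r**2 * np.exp(-r / a0)

for ri, di in zip(r, rho):
    print(f"r = {ri:>5.2f} a0  ->  relative density {di:.4f}")
# The density is exactly zero at r = 0: the 2p electron has a node at the nucleus.
```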
The shape you see associated with the d-orbital is not the actual shape of the d-orbital. That is a probability cloud of the location of the electron. Outside of the shape, or at least in the not shaded-in regions, the probability of finding the electron drops significantly. This means that the narrowing in the shape of the d-orbital indicates that the region where the electron could be found is shrinking. This can be extended to mean that at the nucleus, the probability cloud effectively doesn't exist. This means not that the electron should collide with it, but that the electron cannot be found there ever.
If you are now wondering how the electron gets from one region of the point cloud to another if it cannot travel through the nucleus, it is simply a matter of treating the electron as a wave. The waveform exists at one location or another. Like a sound wave, it is possible to have regions of destructive interference where the wave doesn't exist but still have the wave exist on either side. Similarly, the probability cloud shows where you can find the electron. It does not say that the electron is never outside of this region nor does it comment on the velocity or motion of the electron. It can move from one region to another non-connected one; the cloud only implies that you will never (never meaning high statistical improbability) measure the electron's position outside of the region.
Having now read Chris' answer, it excellently fills in my loose description of the important mathematics behind the concepts I was attempting to convey. Anyone reading this answer should definitely read that one as well; they are complementary.
• I see no logic in your inference. At first, "significant dropping" does not mean elimination. Secondly, the fact that the cloud of probability narrows down to a single point does not mean that electron does not pass through the point. It actually means the opposite: electron has no way to bypass the point. Making things logical also removes the paradox with traveling from one part of the cloud to the other. Try to find another explanation. I would say that as electron comes closer to the nucleus, it accelerates to ∞, which means ∞ speed and thus, 0-probability. But, this is a paradox. – Val Jul 4 '13 at 16:24
• @Val Like it or not, the electron does not travel across the nucleus. Many sources claim it uses quantum tunneling to move to other areas. Also, I can say with 100% certainty that it does not accelerate to $\infty$ since that is slightly above c. There is no paradox, I simply did not want to provide all of the extensive mathematical detail required to fully explain it. I recommend conducting more research into QM; the essence of my explanation rings true with the material from that subject – Jim Jul 4 '13 at 17:39
|
2020-03-30 22:37:47
|
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.6989411115646362, "perplexity": 408.36718646159574}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2020-16/segments/1585370497309.31/warc/CC-MAIN-20200330212722-20200331002722-00306.warc.gz"}
|
https://jackpotsuomi.com/northern-territory/what-is-a-rank-of-a-matrix-example.php
|
# Northern Territory What Is A Rank Of A Matrix Example
## linear algebra: Matrices of rank 1, show that $A^2 = c\cdot A$

Let the $n \times n$ complex matrix $A$ have rank 1. Prove: $A^2 = c\cdot A$ for some scalar $c$. What I know is that all matrices having rank 1 have rows based on a

### Rank of a matrix: definitions and examples (aggregated snippets)

- Matrix rank: rank of a matrix from determinants, rank of a matrix by the Gaussian elimination method; definition, examples and problems with solutions.
- The rank of a matrix is the order of the largest non-zero square submatrix. See the following example. (2018) Rank of a matrix by means of determinants.
- Rank definition (Merriam-Webster): rows or columns in a matrix.
- Rank definition (Dictionary.com): the order of the nonzero determinant of greatest order that can be selected from a given matrix by the elimination of rows and columns.
- The rank-nullity (dimension) theorem: the rank of the matrix A is the dimension of the row space of A.
- Subspaces, basis, dimension, and rank: the rank of a matrix A is the dimension of its row and column space.
- To find the rank of any matrix, reduce the matrix into echelon form. A matrix is said to be in echelon form if all the non-zero rows, if any, precede the zero rows.
- The following examples are of matrices in echelon form: the rank of a matrix is equal to the number of linearly independent rows.
- How to change a matrix into the two forms of echelon matrix, the row echelon form (REF) and the reduced row echelon form (RREF); includes problems with solutions.
- If you want to know the rank of your matrix, you can just count the linearly independent rows (the dimension of the column space, or rank).
- Note that the rank of an m × n matrix cannot be bigger than m. As an example, consider the matrix Arref of equation (A.10); the four equations read:
- The normal form of a matrix is a matrix of a pre-assigned special form obtained from it by means of transformations of a prescribed type. One distinguishes various normal forms.
- Full rank vs short rank matrix: the matrix in your example is in fact of full rank, so I can't give an example there, but if we instead take the matrix:
- The maximum rank matrix completion problem is the process of assigning ... (Figure 2: a graph and its corresponding Tutte matrix.)
- A Summary of Linear Algebra (John Mitchell): the dimension of the row space is the rank of the matrix.
- Matrix dimensions: the numbers of rows and columns of a matrix are called its dimensions. Here is a matrix with three rows and two columns.
- The rank of a matrix, examples using minors (BI Dept of Economics, Lecture 2): find the rank of the matrix A.
- Forum question (10/11/2007): "hey can anyone tell me what a rank of a matrix is? For example I have the following matrix: 1 0 3 1 2 / 1 4 2 1 5 / 3 4 8 1 2, which I've put in row echelon form."
- Use the Matrix ATAR Calculator to estimate your ATAR using HSC marks or analyse your ATAR goal by understanding the HSC marks required.
- For example, the 4 × 4 matrix in the example above has rank three, because the column space is the image of the corresponding matrix transformation.
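These snippets are easy to sanity-check numerically. Below is a minimal Python sketch using NumPy; the 3×5 matrix is one reading of the numbers in the forum question above, and the rank-1 identity is the one from the heading.

```python
import numpy as np

# One reading of the forum question's numbers: 3 rows x 5 columns.
A = np.array([
    [1, 0, 3, 1, 2],
    [1, 4, 2, 1, 5],
    [3, 4, 8, 1, 2],
])
# Rank = number of linearly independent rows (or columns).
print(np.linalg.matrix_rank(A))  # 3 for this reading: all three rows are independent

# Rank-1 matrices satisfy A^2 = c * A with c = trace(A).
B = np.outer([1, 2, 3], [4, 5, 6])          # rank-1 by construction
print(np.allclose(B @ B, np.trace(B) * B))  # True
```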
|
2022-07-02 05:03:28
|
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.8047537207603455, "perplexity": 364.9953527528306}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2022-27/segments/1656103984681.57/warc/CC-MAIN-20220702040603-20220702070603-00380.warc.gz"}
|
http://clay6.com/qa/24182/a-lens-of-focal-30-cm-and-diameter-5-cm-forms-an-image-of-intensity-i-if-ce
|
# A lens of focal length $30\; cm$ and diameter $5\; cm$ forms an image of intensity I. If the central part of the lens is covered by black circular paper of diameter $2.5\; cm$, what will be the new focal length and new intensity of the image?
$(a)\;15\;cm,I \\ (b)\;30\;cm,\frac{I}{4} \\ (c)\;15\;cm, \frac{3I}{4} \\ (d)\;30\;cm, \frac{3I}{4}$
The focal length of the lens does not change.
Intensity of light is proportional to area.
$\large\frac{I}{I_1}=\frac{A}{A_1}$
Where $A$= initial area
$A_1$ =final area
$I_1= \large\frac{I A_1}{A}$
$A= \pi \bigg( \large\frac{d}{2}\bigg)^2$
$\qquad= \large\frac{\pi d^2}{4}$
$A_1 =\pi \large\frac{ d^2}{4} -\frac{\pi (d/2)^2}{4}$
$\qquad= \large\frac{ 3 \pi d^2}{16}$
$I_1= I \times \dfrac{3 \pi d^2/16}{\pi d^2/4}$
$\qquad= \dfrac{3}{4}I$
Hence d is the correct answer.
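A quick numerical check of the area ratio, as a minimal Python sketch using the diameters given in the problem statement:

```python
import math

d_lens = 5.0    # lens diameter in cm
d_cover = 2.5   # diameter of the black paper covering the centre, in cm

area_full = math.pi * (d_lens / 2) ** 2
area_covered = math.pi * (d_cover / 2) ** 2
area_open = area_full - area_covered

print(area_open / area_full)  # 0.75 -> new intensity is (3/4) I, focal length unchanged
```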
answered Jan 20, 2014
|
2017-06-23 03:18:20
|
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.869854748249054, "perplexity": 1249.7970730419381}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.3, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2017-26/segments/1498128319992.22/warc/CC-MAIN-20170623031127-20170623051127-00678.warc.gz"}
|
https://www.studyadda.com/ncert-solution/11th-chemistry-some-basic-concepts-of-chemistry_q78/490/38142
|
78) If 4 g of $NaOH$ dissolves in 36 g of $H_2O$, calculate the mole fraction of each component in the solution. Also, determine the molarity of the solution (specific gravity of solution is 1 g $mL^{-1}$).
$n_B(NaOH)=\frac{w_B}{m_B}=\frac{4}{40}=0.1$
$n_A(H_2O)=\frac{w_A}{m_A}=\frac{36}{18}=2$
$x_B=\frac{n_B}{n_A+n_B}=\frac{0.1}{2+0.1}=\frac{0.1}{2.1}=0.0476$
$x_A=(1-0.0476)=0.9524$
Volume of solution $=\frac{\text{Mass of solution}}{\text{Density of solution}}=\frac{(4+36)}{1}=40\, mL$
Molarity of the solution: $M=\frac{w_B\times 1000}{m_B\times V}=\frac{4\times 1000}{40\times 40}=2.5$
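The same arithmetic as a short Python sketch, purely as a check on the mole fractions and molarity computed above:

```python
w_naoh, M_naoh = 4.0, 40.0   # grams, g/mol
w_h2o,  M_h2o  = 36.0, 18.0  # grams, g/mol
density = 1.0                # g/mL (specific gravity of the solution)

n_naoh = w_naoh / M_naoh     # 0.1 mol
n_h2o  = w_h2o / M_h2o       # 2.0 mol

x_naoh = n_naoh / (n_naoh + n_h2o)        # ~0.0476
x_h2o  = 1 - x_naoh                       # ~0.9524

volume_mL = (w_naoh + w_h2o) / density    # 40 mL
molarity = n_naoh / (volume_mL / 1000.0)  # 2.5 mol/L

print(x_naoh, x_h2o, molarity)
```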
|
2020-09-24 21:51:16
|
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.9067588448524475, "perplexity": 1823.051696216157}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 20, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2020-40/segments/1600400220495.39/warc/CC-MAIN-20200924194925-20200924224925-00003.warc.gz"}
|
https://unofficialaciguide.com/2018/02/16/aci-naming-convention-best-practices/
|
### ACI Naming Convention Best Practices
Naming matters, and that is especially true when it comes to naming objects inside of ACI. Naming your ACI objects in a meaningful and thoughtful way will increase the supportability of your fabric and even help the fabric to become “self-documenting”. Haphazard naming conventions will have an equally negative impact.
To complicate matters, most ACI objects don’t allow you to re-name them; so if you wanted to re-name something, it would require you to delete the object and recreate it.
## High-level naming conventions
Beautiful naming conventions are definitely in the eye of the beholder. You may find other delimiters or naming conventions more appropriate for your deployment. What I have outlined here are recommendations and naming conventions that I have used for countless customers. In almost all cases, customers will add a bit of uniqueness and modify the naming conventions to fit their needs. If I can convey two things in this post that are absolutely critical, I hope you remember this:
1. Develop a naming convention for all of your named objects in ACI
2. Ensure that the naming convention makes it easier to operate your ACI Fabric.
### Select a delimiter – My recommendation: The Underscore “_”.
The underscore is my delimiter of choice for separation of object suffix/prefix (i.e., web_epg, Leaf201_SwProf). Why the underscore? The underscore is NEVER used by the system (i.e., the fabric) when displaying XML or JSON configuration. By using the underscore character, when you download XML or JSON configuration, it will be much easier to differentiate where the system object names end, and where the human naming conventions for the objects begin.
<fvTenant dn="uni/tn-CloudMgmt_Tenant" name="CloudMgmt_Tenant">
### CapitalizeSeparateWords for each of the objects to improve readability
Using capitalization between words in your object names will help make them more readable. Some examples of this:
• Leaf201_SwProf or lf201_SwProf
• TenantX_AAEP
• TenantX_VlanPoolStatic
• V201_EPG
• ScomWeb_EPG
• ScomWeb_BD
### Leaf and Spine Numbering
Keep your leaf and spine numbering simple. A few rules of thumb for numbering:
• Have even and odd members of your VPCs (i.e., Leaf201 and Leaf202 make up a VPC pair). I have seen customers use Leaf201_203 as a VPC leaf pair, and because this is uncommon, it can really hamper troubleshooting in a network down situation because you have to get everyone on the same page in regard to numbering of your leaf switches.
• Spines = 101 -> 199. You will generally have far fewer spines. Keep them in the 100 range
• Leafs = 200 and above. For a single site, just use 200 and above. You can also use separate leaf numbers to separate leafs in different Pods.
• Pod1 Leafs = 200 –> 299
• Pod2 Leafs = 300 — > 399
Whatever the leaf numbering convention – stick to the K.I.S.S. rule (Keep it simple s…..). If you have to spend more than 30 seconds explaining your spine and leaf numbering design to someone, chances are you’ve overthought it.
## Tenant Naming Conventions
• VMM integration considerations – If you are taking advantage of VMM integration, your “Tenant | Application Profile | EPG” name will be displayed in vCenter as the port group name. Please keep in mind that less is more, in this case.
• Troubleshooting considerations – When you are troubleshooting on your leafs, the name of your routing table is the combination of your TenantName:VrfName. Choose wisely, or you will be typing (or copying and pasting) that name a lot.
Leaf201# show vrf all
VRF-Name VRF-ID State Reason
black-hole 3 Up --
Coast:main_vrf 6 Up --
common:default 5 Up --
management 2 Up --
overlay-1 4 Up --
Leaf201# show ip route vrf Coast:main_vrf
IP Route Table for VRF "Coast:main_vrf"
'*' denotes best ucast next-hop
'**' denotes best mcast next-hop
'[x/y]' denotes [preference/metric]
'%<string>' in via output denotes VRF <string>
100.1.1.0/24, ubest/mbest: 1/0, attached, direct, pervasive
*via 10.0.192.66%overlay-1, [1/0], 3d16h, static
100.1.1.1/32, ubest/mbest: 1/0, attached, pervasive
*via 100.1.1.1, vlan12, [1/0], 3d16h, local, local
101.1.1.0/24, ubest/mbest: 1/0, attached, direct, pervasive
*via 10.0.192.66%overlay-1, [1/0], 3d16h, static
101.1.1.1/32, ubest/mbest: 1/0, attached, pervasive
*via 101.1.1.1, vlan17, [1/0], 3d16h, local, local
111.111.111.111/32, ubest/mbest: 1/0
*via 192.168.50.251, vlan14, [90/128576], 3d16h, eigrp-default, internal
### Tenant Names
Keep the Tenant name as short and concise as possible. You will often need to reference the Tenant name when naming other objects (i.e., AAEPs, Vlan Pools), so the shorter, the better. For Tenants, I don’t recommend that you add a “_Tenant” suffix.
Customer Name: Enterprise
Tenant Name: EntProd; EntTest; EntDev
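For reference, the tenant object itself is tiny. Below is a minimal sketch of the JSON body for a tenant named per this convention; it uses only the fvTenant class and name attribute visible in the XML snippet earlier, and the actual POST to the APIC (URL, authentication) is omitted because it is environment-specific.

```python
# Minimal sketch: build the JSON payload for a tenant named per the convention.
# Only the fvTenant class and its "name" attribute (seen in the XML above) are used;
# posting it to your APIC (URL, auth, TLS) is left out and environment-specific.
import json

tenant_name = "EntProd"

payload = {
    "fvTenant": {
        "attributes": {
            "name": tenant_name,
        }
    }
}

print(json.dumps(payload, indent=2))
```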
### Application Profiles
(Tenant > Application Profiles)
Keep it short. Are you noticing a pattern? Again, for VMM considerations, application profiles are one of the best places to save on real estate. In general, I recommend a short application name + the “_AP” suffix.
ApplicationName + “_” + AP
Application Name: Microsoft SCOM
Application Profile: Scom_AP, Scom_Ap, scom_ap
### Application EPGs
(Tenant > Application Profiles > Application EPGs)
ApplicationEPGName + “_” + EPG
Application EPGs – Below are a list of recommend sample EPG names.
Grouping(s): Web, Vlan 101, Management, PXE
EPG Name(s): Web_EPG, Vl101_EPG, Mgmt_EPG, PXE_EPG
Web_epg, Vl101_epg, Mgmt_EPG, PXE_epg
### BD (Bridge Domains)
(Tenant > Networking > Bridge Domains)
If you are using a Network Centric Approach to EPG/BD creation, then re-using the EPG name for the BD makes perfect sense. It will also limit errors when you are associating the EPG to the correct Bridge Domain.
BridgeDomain + “_” + BD
Bridge Domain – Below are a list of recommend sample BD names.
Name(s): Web, Vlan 101, Management, PXE
BD Name(s): Web_BD, Vl101_BD, Mgmt_BD, PXE_BD
Web_bd, Vl101_bd, Mgmt_bd, PXE_bd
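If you generate network-centric EPG/BD pairs in bulk (scripts, templates, etc.), a tiny helper keeps the suffixes consistent. This is a minimal sketch of the naming convention only; the segment names are the examples from the lists above and nothing here talks to an APIC.

```python
def epg_bd_names(segment: str) -> dict:
    """Return the EPG and BD names for one segment, per the naming convention."""
    return {"epg": f"{segment}_EPG", "bd": f"{segment}_BD"}

for seg in ["Web", "Vl101", "Mgmt", "PXE"]:   # example segments from the lists above
    print(epg_bd_names(seg))
# {'epg': 'Web_EPG', 'bd': 'Web_BD'} ... and so on
```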
### VRF (Routing Table)
(Tenant > Networking > VRFs)
For your VRF name, regardless of what you choose, remember to keep it short. Remember, during troubleshooting (on your leafs), the VRF name is the combination of your TenantName+VrfName.
VrfName + “_” + VRF
VRF (Routing table, VRF, or Context) – Below are a list of recommend sample VRF names.
VRF Name(s): Main_VRF, Prod_VRF, TenantX_VRF, DMZ_VRF
Main_vrf, Prod_vrf, TenantX_vrf, DMZ_vrf
### L3out (External Routed Domain)
(Tenant > Networking > External Routed Domains)
In general for L3outs, I like to mirror the VRF that will be referenced from the L3out. If your VRF name is “Prod_VRF”, then “Prod_L3out” leaves little doubt as to which VRF the L3out is attached to. This type of naming also ensures that you don’t attach your L3out to the wrong VRF (assuming you have multiple).
L3outName + “_” + L3out
L3out – Below are a list of recommend sample L3out names.
L3out Name(s): Main_L3out, Prod_L3out, TenantX_L3out, DMZ_L3out
### L3out Node Profiles
(Tenant > Networking > External Routed Networks > Node Profiles)
A Logical Node Profile for an L3Out defines which Leafs will be used. While you can certainly define multiple border leaf switches under a single node profile, I recommend using a node profile per switch to keep things simple and straightforward. Note – The Node Profile is not referenced outside of the L3out. The suffix is optional.
L3outNodeProfile + “_” + NodeProf
L3out Node Profile Name(s): Leaf201_NodeProf, Leaf202_NodeProf
lf201_NodeProf, lf202_NodeProf
### L3out Interface Profiles
(Tenant > Networking > External Routed Networks > Node Profiles > Interface Profiles)
Logical Interface Profiles for an L3Out defines which Leaf interfaces will be used. Note – The Interface Profile is not referenced outside of the L3out. The suffix is optional.
L3outNodeProfile + “_” + IntProf
L3out Node Profile Name(s): Leaf201_IntProf, Leaf202_IntProf
lf201_IntProf, lf202_IntProf
### L3out EPG
(Tenant > Networking > External Routed Networks > L3out > Networks)
Your L3out EPG is the External Endpoint Group for your L3out. Policy for external routes (which you specify) will be applied here. It is recommended that you name the L3out EPG according to its function.
L3EPGName + “_” + L3EPG
L3out EPG – Below are a list of recommend sample L3out EPG names.
L3out EPG Name(s): DC_L3EPG, Internet_L3EPG, InetProxy_L3EPG,
Campus_L3EPG, LabSubnets_L3EPG
### Contracts
(Tenant > Security Policies > Contracts) – in later versions of code (Tenant > Contracts)
Contracts define protocols that are allowed from EPG to EPG. Because we will be referencing the contract we configure in multiple places, I generally like to be as descriptive with the contract as possible. In addition, this is another place where a suffix can come in handy (especially if reading through XML or JSON)
ContractName + “_” + CT
Sample Contract names: web_http_CT, web_https_CT, webMultiple_CT,
ssh_CT, mssql_CT
### Filter
(Tenant > Security Policies > Filters) – in later versions of code (Tenant > Contracts > Filters)
FilterName + “_” + Filt
Filters are the entries that make up a contract (think of these as ACE entries to an ACL). Filters can be single entries, or contain multiple entries, which can lead to confusion. In general, I typically recommend that customers use a single entry per filter, and ensure the naming leaves no question. Some samples are included below:
| Filter Name | Filter Purpose | Filter Entry Name |
| --- | --- | --- |
| http_Filt | HTTP filter using tcp80 | tcp80 |
| https_Filt | HTTPS filter using tcp443 | tcp443 |
| icmp_Filt | ICMP | icmp |
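If you push filters with Postman or a script, the payload for a one-entry filter like http_Filt is short. The sketch below is an assumption-laden example: the vzFilter/vzEntry class and attribute names are what I would expect, so verify them with the APIC API Inspector before using it.

```python
# Minimal sketch of a one-entry filter payload following the http_Filt convention.
# Class and attribute names (vzFilter, vzEntry, etherT, prot, dFromPort, dToPort)
# are assumptions here; verify them with the APIC API Inspector before posting.
import json

filter_payload = {
    "vzFilter": {
        "attributes": {"name": "http_Filt"},
        "children": [
            {
                "vzEntry": {
                    "attributes": {
                        "name": "tcp80",
                        "etherT": "ip",
                        "prot": "tcp",
                        "dFromPort": "80",
                        "dToPort": "80",
                    }
                }
            }
        ],
    }
}

print(json.dumps(filter_payload, indent=2))
```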
## Fabric Access Policy Naming Conventions
### VPC Pair naming
Fabric > Access Policies > Policies > Switch > Virtual Port Channel default
Explicit VPC Protection Groups (this is just a large set of words that describe a VPC Pair). For the name, I would recommend using a short name for the leaf (and not necessarily the FDQN of the leaf switch). For the logical pair ID, use the first node ID of the VPC pair. This will ensure uniqueness and should be easy to follow.
LeafSwitchA + “_” + LeafSwitchB
Name: lf201_lf202 or Leaf201_Leaf202
Logical Pair ID: 201
Name: lf203_lf204 or Leaf203_Leaf204
Logical Pair ID: 203
### Interface Policies
Fabric > Access Policies > Policies > Interface
PolicyConfigurationName + “_” + State (i.e., Enable|Disable, Active|Off|On)
Interface Policies are the individual configuration options, such as enabling CDP, setting the interface speed to 10Gig, disabling LLDP, and so on. Although ACI has pre-configured defaults, I always set up my own policies for each feature to highlight if the feature is enabled or disabled. You’ll notice I capitalize the feature and use the delimiter of an “underscore” to separate the feature from its state. I do this for maximum readability. If you’d like a simple way of configuring interface policies quickly using Postman, check out this post.
LLDP_Enable
LLDP_Disable
CDP_Enable
CDP_Disable
MCP_Enable
MCP_Disable
LACP_Active
LACP_On
LACP_Off
10GigAuto
40GigAuto
InheritAuto
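If you create these one-setting-per-feature policies via the REST API instead of the GUI, the payloads are equally short. A minimal sketch follows; the cdpIfPol/lldpIfPol class and attribute names are assumptions here, so confirm them in the APIC API Inspector before posting.

```python
# Minimal sketch: feature-state policy payloads named per the convention.
# cdpIfPol/adminSt and lldpIfPol/adminRxSt+adminTxSt are assumed class/attribute
# names; verify against your APIC before posting.
import json

policies = [
    {"cdpIfPol":  {"attributes": {"name": "CDP_Enable",  "adminSt": "enabled"}}},
    {"cdpIfPol":  {"attributes": {"name": "CDP_Disable", "adminSt": "disabled"}}},
    {"lldpIfPol": {"attributes": {"name": "LLDP_Enable",
                                  "adminRxSt": "enabled", "adminTxSt": "enabled"}}},
    {"lldpIfPol": {"attributes": {"name": "LLDP_Disable",
                                  "adminRxSt": "disabled", "adminTxSt": "disabled"}}},
]

for p in policies:
    print(json.dumps(p))
```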
### Interface Policy Groups
Fabric > Access Policies > Interfaces > Leaf Interfaces > Policy Groups
Interface Policy Groups allow you to group the configuration policies you’ve already created and apply those to a collection of switches and interfaces. For Interface Policy groups, I recommend a strategy that will allow you to document what it is you are attaching, and the type of policy group you are using (i.e., policy groups can be of type access, port-channel, or vpc).
Name of thing you are attaching to ACI + type of policy group.
APG = Access Port
PC = Port-channel
VPC = VPC port-channel
Sample Policy Groups:
Pod1_UCSB_APG <<< UCSB policy group (access port)
N7K1_VPC <<< N7k1 policy group (vpc port)
Server1_APG <<< Server connection (access port)
Server2_PC <<< Server connection (port-channel)
### Switch Selectors (Profiles)
Fabric > Access Policies > Switches > Leaf Switches > Profiles
Switch Selectors allow you to select switches. You will then associate your switch selector with interface selectors. From a naming convention perspective, there are typically 3 options I see in use. I generally gravitate to option #1 to keep it simple.
Leaf Name + “_” + SwSel suffix
Option #1 - Single Switch Selector for each switch;
Option #2 - Combined Switch Selectors for VPC pairs;
Option #3 - a combination of the option #1 and option #2.
Sample Names
Option #1
Lf201_SwSel or Leaf201_SwSel
Lf202_SwSel or Leaf202_SwSel
Option #2
Lf201_202_SwSel or Leaf201_202_SwSel
Option #3
Lf201_SwSel or Leaf201_SwSel
- and -
Lf201_202_SwSel or Leaf201_202_SwSel
### Interface Selectors (Profiles)
Fabric > Access Policies > Interfaces > Leaf Interfaces > Profiles
Interface Profile Selectors allow you to select your interfaces. You will then associate your Interface Profiles with your access port selectors. You will select Interface Profile Selectors from your Switch Selector (Profile) configuration. From a naming convention perspective, there are typically 3 options I see in use. I generally gravitate to option #1 to keep it simple.
Leaf Name + “_” + IntProf suffix
Option #1 - Single Interface Profile for each switch;
Option #2 - Combined Interface Profile Selectors for VPC pairs;
Option #3 - a combination of the option #1 and option #2.
Sample Names
Option #1
Lf201_IntProf or Leaf201_IntProf
Lf202_IntProf or Leaf202_IntProf
Option #2
Lf201_202_IntProf or Leaf201_202_IntProf
Option #3
Lf201_IntProf or Leaf201_IntProf
- and -
Lf201_202_IntProf or Leaf201_202_IntProf
### Access Port Selectors
Fabric > Access Policies > Interfaces > Leaf Interfaces > Profiles > PROFILE_NAME > ACCESS_PORT_SELECTOR
Access Port Selectors are objects in ACI that refer to the individual interfaces under your Interface Profiles. An interface profile will act as a folder for all of the access ports (i.e., 1-48).
I recommend that you create a list that reference all 48 ports. From there, you’ll be able to point your access port selector to your policy groups.
Sample Naming Convention
eth1_1
eth1_2
eth1_3
....
eth1_48
This is an example of what I was referring to when I discussed using naming conventions to assist with a self documenting fabric; the access port selector is under Leaf201_IntProf (so I know which switch I am on), the access port selector name is eth1_48 (so I know exactly which port on Leaf201), and the Policy Group name is N7K1_VPC.
So – by just looking at the access port selector, I know that N7k1 is VPC connected off of Leaf201 eth1/48 (and if I’ve done my job correctly), I also know that the other leg of the VPC is connected off of Leaf202 eth1/48. << Self documenting.
### AAEPs
Fabric > Access Policies > Policies > Global > AAEP
AAEPs act as the glue between our Switch Interfaces and our Vlan Pools. It is a good practice to name them according to the resources that will be using them.
TenantName + “_” + AAEP
Sample AAEPs:
EntProd_AAEP
EntDev_AAEP
EntTest_AAEP
### Vlan Pools
Fabric > Access Policies > Pools > Vlan Pools
Vlan Pools are pools of vlan resources that can be utilized. There are two different types of pools: static and dynamic. I typically recommend that you name them according to the resources that will be using them, plus the pool type (static or dynamic) as a suffix.
TenantName + “_” + TypeOfVlanPool + VLPool
Sample Vlan Pool Names:
EntProd_StaticVLPool
EntProd_DynVLPool
EntDev_StaticVLPool
EntDev_DynVLPool
### Domains
Fabric > Access Policies > Physical and External Domains
I typically recommend that you name domains according to the resources that will be using them, plus the domain type (Physical, External, VMM) as a suffix.
TenantName + “_” + TypeOfDomain
Sample Domain Names:
EntProd_PhysDom
EntProd_ExtRoutedDom
EntProd_VMMDom
EntDev_PhysDom
EntDev_ExtRoutedDom
EntDev_VMMDom
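Since the AAEP, Vlan Pool, and Domain names are all derived from the tenant name, they are easy to stamp out consistently. Here is a minimal naming-only sketch (no APIC calls; the tenant names are the examples used above):

```python
def access_policy_names(tenant: str) -> dict:
    """Build the per-tenant access-policy object names used in this post."""
    return {
        "aaep": f"{tenant}_AAEP",
        "vlan_pool_static": f"{tenant}_StaticVLPool",
        "vlan_pool_dynamic": f"{tenant}_DynVLPool",
        "phys_domain": f"{tenant}_PhysDom",
        "l3_domain": f"{tenant}_ExtRoutedDom",
        "vmm_domain": f"{tenant}_VMMDom",
    }

for tenant in ["EntProd", "EntDev"]:
    print(access_policy_names(tenant))
```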
https://wikieducator.org/Albany_Senior_High_School/The_Curious_Incident_of_the_Dog_in_the_Night-time
Albany Senior High School/The Curious Incident of the Dog in the Night-time
Objective
Engage with the novel, its characters and ideas
Contents
Characters
Structure
Themes
Language
Author
Characters
Siobhan
Did you notice?
the numbering of the chapters
Wellington
https://quickessayhelpers.com/2021/06/08/solve-square-root-problems_rb/
# Solve square root problems
June 8, 2021
Just as with the nonnegative real numbers, there are two complex numbers whose square will be z. The speed s in miles per hour that a car is traveling when it goes into a skid can be estimated using the formula $$s = \sqrt{30fd}$$, where f is the coefficient of friction and d is the length of the skid marks in feet. As usual, in solving these equations, whatever we do to one side of an equation we must do to the other side as well. When solving square root problems, you sometimes get answers that are not correct, so make sure you plug your answer back into the original equation to see whether it checks out. For example, for x^2 = 9, use the square root property; the symbol √ by default indicates the positive square root. In some problems, neither of the square roots can be simplified any further. A factorization example: 2^2 × 2^2 × 3^2 = (2 × 2 × 3)^2 = 144, so the square root of 144 is 2 × 2 × 3 = 12.
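As a short illustration of the plug-it-back-in advice and of the skid-mark formula above, here is a hedged Python/SymPy sketch; the radical equation sqrt(2x + 3) = x and the friction and skid values are made up for illustration:

import math
from sympy import symbols, solve, sqrt

x = symbols("x")
# Solve sqrt(2x + 3) = x by squaring both sides, then discard extraneous roots
# by substituting each candidate back into the original equation.
candidates = solve(x**2 - (2*x + 3), x)              # squared form: x^2 = 2x + 3
valid = [c for c in candidates if sqrt(2*c + 3) == c]
print(candidates, valid)                             # [-1, 3] [3]; x = -1 is extraneous

# Skid-mark formula s = sqrt(30 f d): estimated speed in mph from the
# coefficient of friction f and the skid length d in feet.
def skid_speed(f, d):
    return math.sqrt(30 * f * d)

print(round(skid_speed(0.7, 100), 1))                # roughly 45.8 mph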
https://ask.sagemath.org/questions/51442/revisions/
### Trig simplification of implicit functions fails
I'd like to use Sage to verify my solutions to Lagrangian equations of motion for a double pendulum. However, Sage seems unable to handle some basic substitutions needed to make sense of this relatively simple problem.
For example, this all works fine:
var('z')
r = cos(z)**2 + sin(z)**2
assert r.simplify_trig() == 1
However, when the angle is a function of time, things break down entirely:
var('x,y,t,z')
θ1 = function('θ1')(t)
θ2 = function('θ2')(t)
K = sin(θ1)**2 + cos(θ1)**2
assert K.simplify_trig() == 1
Specifically, K.simplify_trig() throws:
TypeError: ECL says: THROW: The catch MACSYMA-QUIT is undefined.
I would also expect
assert K.substitute_function(θ1, z) == cos(z)^2 + sin(z)^2
However K.substitute_function(θ1, z) just gives me K unchanged.
Seems related, but I'm still stumped: ask.sagemath.org/question/7856/lagranian-mechanics/
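Not an answer to the Maxima error itself, but as a hedged cross-check: the same identity simplifies without trouble in SymPy when the angle is an undefined function of t, which at least confirms the expression is fine and suggests the failure sits in the Sage/Maxima interface rather than in the mathematics.

import sympy as sp

t = sp.symbols("t")
theta1 = sp.Function("theta1")(t)          # undefined function of time, like θ1(t) above
K = sp.sin(theta1)**2 + sp.cos(theta1)**2
assert sp.trigsimp(K) == 1                 # sp.simplify(K) gives 1 as well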
https://chemistry.stackexchange.com/questions/59254/what-is-the-difference-between-exponential-factor-and-orientation-factor-in-case
# What is the difference between exponential factor and orientation factor in case of Arrhenius equation?
As per the book (Nivaldo J. Tro)
The Frequency Factor: The number of approaches to the activation barrier per unit time.
The Exponential factor: Number between 0 and 1 that represents the fraction of molecules that have enough energy to make it over the activation barrier on a given approach. The exponential factor is the fraction of approaches that are actually successful and result in the product.
Collision Frequency: It is the number of collisions that occurs per unit time.
Orientation Factor: Usually between 0 and 1, which represents the fraction of collisions with an orientation that allows the reaction to occur
I am unable to distinguish between the exponential factor and the orientation factor. Aren't they saying the same thing? How do they differ from each other?
Another doubt: what does the number of approaches mean? That is, is it the number of approaches taken by a single reactant per second, or by all reactants per second?
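As a small numeric aside (not from the book), the exponential factor itself is just exp(-Ea/RT); the sketch below evaluates it for an assumed activation energy, to make concrete that it is a fraction between 0 and 1 that grows with temperature:

import math

R = 8.314        # gas constant, J/(mol*K)
Ea = 50_000.0    # assumed activation energy, J/mol (illustrative only)
for T in (300.0, 400.0, 500.0):
    # fraction of approaches with enough energy to clear the activation barrier
    print(T, math.exp(-Ea / (R * T)))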
https://robotics.stackexchange.com/questions/8111/multiple-control-loops-with-overlapping-effects
# Multiple control loops with overlapping effects
I'm familiar with using PID to perform closed loop control when there is a single output and a single error signal for how well the output is achieving the desired set-point.
Suppose, however, there are multiple control loops, each with one output and one error signal, but the loops are not fully independent. In particular, when one loop increases its actuator signal, this changes the impact of the output from other loops in the system.
For a concrete example, imagine a voltage source in series with a resistor, applying a voltage across a system of six adjustable resistors in parallel. We can measure the current through each resistor and we want to control the current of each resistor independently by adjusting the resistance. Of course, the trick here is that when you adjust one resistor's resistance, it changes the overall resistance of the parallel set, which means it changes the voltage drop due to the divider with the voltage source's resistance and hence changes the current through the other resistors.
Now, clearly we have an ideal model for this system, so we can predict what resistance we should use for all resistors simultaneously by solving a set of linear equations. However, the whole point of closed loop control is that we want to correct for various unknown errors/biases in the system that deviate from our ideal model. The question then: what's a good way to implement closed loop control when you have a model with this kind of cross-coupling?
Typically with a multiple input, multiple output (MIMO) system, a control engineer uses a state feedback controller. This style of controller leverages a state-space model of the system and generally takes the form:
$$\dot{x}=\mbox{A}x+\mbox{B}u \\ y = \mbox{C}x + \mbox{D}u \\$$
where $x$ is a vector of states, $u$ is a vector of inputs, $y$ is a vector of outputs, and the time derivative of the states, $\dot{x}$, shows how the states evolve over time, as determined by combinations of states $\mbox{A}$ and inputs $\mbox{B}$. Outputs are also determined by an interaction between states and inputs, but the outputs can be any combination, so the output state and input matrices are different - $\mbox{C}$ and $\mbox{D}$.
I won't go into a large amount of detail regarding state feedback controls, but in general, the matrices $\mbox{A} \rightarrow \mbox{D}$ "map" or associate a particular state or input to another state or input. For instance, if you want to model a system of unrelated differential equations, you would get something like:
$$\dot{x} = \left[ \begin{array}{ccc} \dot{x}_1 \\ \dot{x}_2 \\ \dot{x}_3 \end{array} \right] = \left[ \begin{array}{ccc} k_1 & 0 & 0 \\ 0 & k_2 & 0 \\ 0 & 0 & k_3 \end{array} \right] \left[ \begin{array}{ccc} x_1 \\ x_2 \\ x_3 \end{array} \right]$$ which represents: $$\dot{x}_1 = k_1 x_1 \\ \dot{x}_2 = k_2 x_2 \\ \dot{x}_3 = k_3 x_3 \\$$
If you wanted to add input $u_1$ to the equation for $\dot{x}_1$ and input $u_2$ to $\dot{x}_3$, then you could add a $\mbox{B}u$ term:
$$\dot{x} = \left[ \begin{array}{ccc} \dot{x}_1 \\ \dot{x}_2 \\ \dot{x}_3 \end{array} \right] = \left[ \begin{array}{ccc} k_1 & 0 & 0 \\ 0 & k_2 & 0 \\ 0 & 0 & k_3 \end{array} \right] \left[ \begin{array}{ccc} x_1 \\ x_2 \\ x_3 \end{array} \right] + \left[ \begin{array}{ccc} 1 & 0 \\ 0 & 0 \\ 0 & 1 \end{array} \right] \left[ \begin{array}{ccc} u_1 \\ u_2 \end{array} \right]$$
If you want to keep this, but you think that state $x_1$ contributes to how $x_2$ changes, you can add that interaction:
$$\dot{x} = \left[ \begin{array}{ccc} \dot{x}_1 \\ \dot{x}_2 \\ \dot{x}_3 \end{array} \right] = \left[ \begin{array}{ccc} k_1 & 0 & 0 \\ \boxed{ k_{x_1 \rightarrow x_2} } & k_2 & 0 \\ 0 & 0 & k_3 \end{array} \right] \left[ \begin{array}{ccc} x_1 \\ x_2 \\ x_3 \end{array} \right] + \left[ \begin{array}{ccc} 1 & 0 \\ 0 & 0 \\ 0 & 1 \end{array} \right] \left[ \begin{array}{ccc} u_1 \\ u_2 \end{array} \right]$$
When you write these out now, you get:
$$\begin{array} \\ \dot{x}_1 & = & k_1 x_1 + u_1 \\ \dot{x}_2 & = & k_{x_1 \rightarrow x_2}x_1 + k_2 x_2 \\ \dot{x}_3 & = & k_3 x_3 + u_2 \end{array}$$
You can keep building up complexity as your system requires. Once you have a model, for state feedback controls, you need to make sure that the system is linear, in that the system doesn't have trig functions or one state multiplying itself or another state, and make sure that it is time invariant, in that the matrices $\mbox{A} \rightarrow \mbox{D}$ don't change with time - no function of (t) in them. You may be able to make some simplifications, such as a small angle approximation to help get your $\mbox{A}$ matrix into the LTI form required for the next step.
Now you can "mask" the entire system into the tidy two equations first shown, hiding the entire $\mbox{A}$ matrix with just the letter 'A', etc. With the Laplace transform you can (hand-wave) evaluate the uncontrolled, open-loop dynamics of the system. You do this by finding the poles of the system, which in turn indicate the system response.
You can also evaluate the system to see if it is controllable, meaning that you can use your inputs to alter all of the states in a unique manner, and to see if it is observable, meaning that you can actually determine what the values of the states are.
If the system is controllable, you can take information about the states, $-\mbox{G}x$, and feed that into the system, using the information you have about the states to drive them to a desired value. Using only the two initial equations for clarity, when you add the control signal to the input you get:
$$\dot{x} = \mbox{A}x + \mbox{B}(u - \mbox{G}x) \\ y = \mbox{C}x + \mbox{D}u \\$$
which becomes:
$$\dot{x} = \mbox{A}x - \mbox{BG}x + \mbox{B}u \\ y = \mbox{C}x + \mbox{D}u \\$$
which can be rearranged as:
$$\dot{x} = [\mbox{A}-\mbox{BG}]x + \mbox{B}u \\ y = \mbox{C}x + \mbox{D}u \\$$
Where before you system response was driven by the $\mbox{A}$ matrix, now it is driven by $\mbox{A-BG}$. You can again evaluate the poles via the Laplace transform, but now you have a gain matrix $\mbox{G}$ you can use to tune the controller, putting the poles wherever you want, which establishes the time response to be whatever you want.
The process continues, with observers set up to compare the actual system output $y$ with the model's predicted output $\hat{y}$. This is where it's important to note that the outputs don't have to be the same combination of states as you use in the state differential equation; where your states might be a current, your output might be a voltage ($R\times I$), so you can make a comparison with a measurable signal on your real system.
Like I said, there is a ton of information involved in modeling systems and designing state feedback controllers; I have just outlined the general process, as I believe this is the scope you were looking for with your question.
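To make the controllability check and the pole-placement step concrete, here is a minimal numeric sketch (mine, not the answerer's) of the three-state example above with the x1-to-x2 coupling; the constants and the desired poles are made up for illustration:

import numpy as np
from scipy.signal import place_poles

k1, k2, k3, k_x1_x2 = 0.5, 0.3, -0.2, 1.0   # assumed plant constants

A = np.array([
    [k1,      0.0, 0.0],
    [k_x1_x2, k2,  0.0],
    [0.0,     0.0, k3 ],
])
B = np.array([
    [1.0, 0.0],
    [0.0, 0.0],
    [0.0, 1.0],
])

# Controllability matrix [B, AB, A^2 B]; full rank means every state is reachable.
ctrb = np.hstack([np.linalg.matrix_power(A, i) @ B for i in range(A.shape[0])])
assert np.linalg.matrix_rank(ctrb) == A.shape[0]

# Solve for the gain G so that the closed-loop matrix A - B G has the chosen poles.
G = place_poles(A, B, [-1.0, -2.0, -3.0]).gain_matrix
print(np.linalg.eigvals(A - B @ G))          # approximately [-1, -2, -3]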
• Thanks, this is an excellent basis for some further research. – Dan Bryant Sep 23 '15 at 20:07
• great answer, tl;dr; scalar values describing a SISO system become matrices for a MIMO system, the "cross-coupling" can be seen in the off-diagonal values in the matrices. – Bending Unit 22 Sep 26 '15 at 18:06
http://www.yhxb.org.cn/EN/abstract/abstract4514.shtml
Journal of Astronautics ›› 2014, Vol. 35 ›› Issue (3): 345-355.
### Analysis of InSAR 3D Imaging Characteristics of Target with Rotational Micro Motion
ZHANG Jing ke, DAI Da hai, XING Shi qi, WANG Xue song, XIAO Shun ping
1. State Key Laboratory of Complex Electromagnetic Environment Effects on Electronics and Information System, National University of Defense Technology, Changsha 410073, China
• Received:2013-04-20 Revised:2013-08-29 Online:2014-03-15 Published:2014-03-25
Abstract:
Based on a single-pass InSAR system, the interferometric phase and InSAR 3D imaging characteristics of a target with rotational micro-motion are studied, and the "barrier" effect of a target with rotational micro-motion in the InSAR image is revealed for the first time. First, both a 3D geometric model and a signal model of a rotational target with arbitrary attitude are built, and the imaging results are derived. Then, by comparing the imaging results of the InSAR's two channels, the interferometric phases of the rotational target are shown to be approximately equal to the interferometric phase of the location of the rotational center, so the effect of the rotational parameters and of the target's attitude can be neglected. On this basis, it is pointed out that the InSAR image of the rotational target shows a "barrier" effect that spreads along the cross-range direction; the height and the length of the "barrier" depend on the vertical coordinate of the rotation center and on the rotational parameters, respectively. The analyses are verified by simulation experiments.
https://stats.meta.stackexchange.com/questions/4654/what-are-the-criteria-applied-to-conclude-this-migrated-q-was-off-topic-how-do
What are the criteria applied to conclude this migrated Q was off topic? How do we draw the line?
This post was migrated to mathematics:
https://math.stackexchange.com/questions/2166949/deriving-a-1-alpha100-confidence-interval-for-theta-pivotal-quantity
(Its revision history on CV can be seen here.)
At present I cannot identify any plausible reason why this one is off topic on our site. We have many hundreds, maybe even thousands of questions of precisely this kind; while it uses mathematics (solve an equation), it's directly answering a stats question (how do I find a confidence interval in this case?).
For consistency's sake, I'd like a clear explanation (preferably from each of the people who voted to migrate) of how we can determine that this question is off topic on our site -- what criteria can be applied that would put this question over the line?
Getting the policy right here is a serious issue for us -- if we don't identify some criterion that leaves lots of apparently similar questions on topic while this one is not, then many, many questions will need to be closed or migrated.
It looks to me like some of us are in for a very large amount of extra work here (unless we're just going to choose to be deliberately inconsistent, I suppose), and I want to have some clear guidelines to use when doing all that extra work. Right now I have no idea how to conclude this one is off topic.
[If it is in fact in error, I think that those involved should be the ones doing the legwork in redressing the issue -- getting it migrated back. Right or wrong, there's work to be done here.]
I've sought similar criteria before here (and elsewhere) on a number of other (to me) questionable migrations -- it's not like closure where a mistake can be undone with a couple of clicks, we need to be clear about why we're doing it. An example of such an earlier question is here.
[I don't recall having been offered clear criteria on those occasions, though I recall one where at least one or two people offered a level of justification, which overall suggests that there's a problem with some of the voting on migration. Since a migration is hard to undo, we should be prepared to justify why it's off topic.]
Edit (to address an issue in a couple of comments): If a question is on-topic at the source site, the clearly established principle (on many meta.SE threads as well as advice in parts of the help or other documentation relating to migration) is that it shouldn't be migrated; the exception would be where the OP requests it and it's on topic at the destination (in effect, if a question is on topic, askers get to decide where they want their question). As far as I see it, in deciding whether migration is appropriate, that leaves only the question of whether it's on topic here. To me it clearly is, but there's clearly thought to be room for argument (as in the comments).
• I didn't vote on that, but I can see some ambiguity. I certainly think it should be on topic on Mathematics. It could also be on topic here, IMO, but math might be a better fit. As @Cardinal mentioned in your other meta.CV Q, there are going to be "many questions that fall in grey areas ... Ultimately we want to find the best possible site for each question so that it gets a great answer". – gung Mar 2 '17 at 13:31
• As asked the q. was just about solving an equation. Your answer brought in more statistical context, though; I wouldn't have voted to migrate the q. after reading it. – Scortchi Mar 2 '17 at 14:11
• @Scortchi the post's title is "Deriving a (1−α)100% confidence interval for θ pivotal quantity", for me that's where the statistical context arises. While it's good to also put that information in the body of the post I don't see it as required to establish statistical context. – Glen_b Mar 2 '17 at 14:39
• It was only when I read your answer that I fully appreciated the statistical context so i can see where opinions could differ here. – mdewey Mar 2 '17 at 14:57
• @Glen_b: Well, yes; the context is there, but rather scanty - the distribution $\theta$ parametrizes isn't even mentioned - & the question focuses on how to solve $u = \frac{\sqrt{n}(\bar{y}-\theta)}{\sqrt{\theta}}$ for $\theta$. I can see why someone might well have thought "Well if you just need help with the algebra, Maths SE is the best site". – Scortchi Mar 2 '17 at 15:03
• I would add stats.stackexchange.com/questions/266151/… as another example for consideration. Although phrased as a request for a "calculation in R," to me it appears to ask a useful statistical question. I can't imagine an answer on SO (whither it was migrated) that didn't also have to work through some statistical and mathematical issues first. – whuber Mar 8 '17 at 17:00
• @whuber I see that as a rather more astonishing example, to say the least. Indeed, I had begun a new post about that one (pointing out our on-topic help says " if it needs statistical expertise to understand or answer, ask it here") before I saw your comment. How does that not need considerable statistical expertise to even begin to answer? – Glen_b Mar 9 '17 at 1:02
• I remember discussing this in meta a long time ago. At that point, I was in favor of leaving a lot of questions here that others wanted to migrate, esp. to the programming site but sometimes to math. So, do we have a clear policy? It would also be good to distinguish between migrating questions and closing them for being about using a language. I'll follow whatever policy there is, but what is it? – Peter Flom Mar 10 '17 at 12:15
• @Peter Asking about the policy is a fair question, and deserves discussion, but at the same time you are dodging the question Glen_b has posed here: "I'd like a clear explanation (preferably from each of the people who voted to migrate) of how we can determine that this question is off topic on our site." What criteria did you apply when deciding to migrate this question? If you would like to discuss this privately, then please join the moderators' chat room. – whuber Mar 12 '17 at 15:52
• @whuber Fair enough. I don't think I can exactly articulate what I do, because I am not sure of what the guidelines are supposed to be. What I try to do is to close questions that are about things that look like pure programming questions and that ought to be answerable from package documentation and to migrate questions that are more about programming - usually, these are more complex questions or ones that could apply in several packages or languages. – Peter Flom Mar 12 '17 at 21:26
• @whuber Of course, sometimes, I'm wrong. Who isn't? Maybe the standard for migrating a question has to be changed? – Peter Flom Mar 12 '17 at 21:27
• Thanks for responding. The problem is I can't see how to relate a substantial number of migration decisions (I've only posted about a few) to what I see as the current standard. Specifically, I can't see how they can be migrated on the basis of two things - 1. don't migrate it if it's on topic here, and - 2. the fairly plain wording of help center which says what's on topic. I don't think those standards changing would solve issue that the decisions frequently don't seem to match the criteria. ...ctd – Glen_b Mar 12 '17 at 21:45
• ctd ... if we could see how some of those decisions relate to the criteria, we could then have some way to converge on being consistent with each other. At the very least, when we can communicate about the ones we can't figure out it helps to see where we might need to be clearer about what we're doing and so work at converging on those aspects. – Glen_b Mar 12 '17 at 21:46
• I'd like to take this opportunity to state that I appreciate the ongoing work @PeterFlom does to keep the review queues clear. Being rather disagreeable, I have disagreed w/ some of Peter's decisions from time to time, but I agree w/ the bulk of them & I would rather he continue (& I occasionally disagree) than that he stop. The fact is, there are a lot of reviews that need to be done & there really aren't enough experienced, dedicated reviewers. If it weren't for Peter, our review queues would be steadily growing. I hope he continues. – gung Mar 13 '17 at 22:49
• Indeed, I think we all appreciate Peter's efforts; no doubt there. – Glen_b Mar 13 '17 at 23:01
The key difference here is that the asker's intention was to get maths help. The asker knew how to compute a confidence interval. They were asking how to solve for $\theta$ in an equation. As you say, it's a lot of work to come up with general rules, but I can see a couple of guidelines that would help you here.
Firstly, think about the type of person who is best suited to answering that question. On the one hand, it's possible that someone who uses statistics often might not know how to solve for $\theta$ in that problem. They might use a solver program instead. Lots of statisticians would be able to solve it, but there's no branch of statistics where that kind of manipulation is fundamental. On the other hand, solving an equation involving surds and powers of $\theta$ is an important skill in mathematics. There are areas of mathematics where an ability to solve a problem like that is a prerequisite. So one criterion might be that a question is only relevant on CrossValidated if the required technique for solving the problem is more important in statistics than in other branches of mathematics.
https://mathematica.stackexchange.com/questions/195664/plotting-a-graph-of-a-function-and-part-of-it/195675
# Plotting a graph of a function and part of it
I am doing a numerical experiment for solving a differential equation. At some point I need to plot the exact and approximated solutions. However, they are very close to each other and it is difficult to show graphically how the approximated solution relates to the exact one. My question is how I can show two graphs: one showing the full graph together with its approximated solution (it does not matter if it is unclear there how close they are), and another one beside it showing part of this graph over a very short domain, in order to represent graphically how close they are. You can consider y = x^2 as the exact solution, and we need to show another plot beside the graph of y over a very small domain. I would like to produce the second graph by putting a small box on the graph of y and drawing lines from it to the other graph, showing the domain marked by the box we constructed. Hope it is clear. Looking for your help!
Please look at the plots below. One is for the domain [-1, 1] and the other is over a small domain where it is clear how the approximated solution behaves.
• Why don't you plot their difference? – Szabolcs Apr 20 at 18:22
• I did. But I need to show the exact and approximated solution together – Mutaz Apr 20 at 18:24
• It can be done with Inset. You can take a look at its documentation page while waiting for someone to post an answer. – Szabolcs Apr 20 at 19:05
Here is a starting point for you.
inset = Inset[
Plot[u^2, {u, 0.05, 0.25}, Frame -> True,
FrameTicks -> {{None, All}, {None, All}} , ImageSize -> 150],
Scaled[{0.5, 0.7}]];
Show[Plot[x^2, {x, -1, 1}, Frame -> True, Axes -> False,
Epilog -> {inset}],
Graphics[{FaceForm[], EdgeForm[Black],
Rectangle[{0.05, 0}, {0.25, 0.06}], Dashed, GrayLevel[0.5],
Line[{{0.05, 0.06}, {-0.42, 0.5}}],
Line[{{0.25, 0.06}, {0.3, 0.49}}]}]]
• Thank you so much Okkes! – Mutaz Apr 21 at 10:05
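For readers working in Python rather than Mathematica, roughly the same zoom-box idea can be sketched with matplotlib's inset axes; this is my sketch, not part of the answer above, and the "approximate" curve is fabricated purely for illustration:

import numpy as np
import matplotlib.pyplot as plt

x = np.linspace(-1, 1, 400)
exact = x**2
approx = x**2 + 1e-3 * np.sin(20 * x)        # made-up stand-in for an approximated solution

fig, ax = plt.subplots()
ax.plot(x, exact, label="exact")
ax.plot(x, approx, label="approximate")

axins = ax.inset_axes([0.55, 0.55, 0.4, 0.4])     # zoomed panel inside the main axes
axins.plot(x, exact)
axins.plot(x, approx)
axins.set_xlim(0.05, 0.25)
axins.set_ylim(0.0, 0.07)
ax.indicate_inset_zoom(axins, edgecolor="black")  # draws the box and connector lines
ax.legend()
plt.show()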
https://www.zbmath.org/authors/?q=ai%3Adeconinck.bernard
## Deconinck, Bernard
Author ID: deconinck.bernard
Published as: Deconinck, Bernard; Deconinck, B.
Documents Indexed: 76 Publications since 1996 · 1 Contribution as Editor · 1 Further Contribution
Co-Authors: 56 Co-Authors with 74 Joint Publications · 1,041 Co-Co-Authors
### Co-Authors
4 single-authored 9 Trogdon, Thomas 8 Vasan, Vishal 7 Kutz, J. Nathan 7 Trichtchenko, Olga 6 Sheils, Natalie E. 5 Carter, John D. 5 Nivala, Michael 5 Oliveras, Katie L. 4 Nguyen, Nghiem V. 4 Patterson, Matthew S. 4 Segal, Benjamin Louis 4 Segur, Harvey 3 Upsal, Jeremy 2 Bobenko, Alexander Ivanovich 2 Bottman, Nathaniel 2 Creedon, Ryan P. 2 Curtis, Christopher W. 2 Frigyik, Béla A. 2 Henderson, Diane M. 2 Hereman, Willy A. 2 Kapitula, Todd M. 2 Kollár, Richard 2 Pelinovsky, Dmitry Efimovich 2 Schmies, Markus 2 Tian, Rushun 2 van Hoeij, Mark 2 Yang, Xin 1 Bradley, R. Mark 1 Chen, Min 1 Cisneros, Jorge 1 Colagrosso, Michael 1 Crowdy, Darren Gregory 1 Fokas, Athanassios S. 1 Frauendiener, Jörg 1 Guo, Qi 1 Heil, Matthias 1 Hickman, Mark S. 1 Hidalgo, Rubén Antonio 1 Kalisch, Henrik 1 Kimura, Yoshifumi 1 Kiyak, Firat 1 Klein, Christian 1 Kokotov, Aleksey 1 Kurkina, O. E. 1 Lee, Crystal W. 1 Lenells, Jonatan 1 Lovit, David O. 1 Marshall, J. D. 1 McGill, Peter 1 Mercat, Christian 1 Moldabayev, Daulet 1 Olver, Sheehan Shakiban 1 Poole, L. D. 1 Ringler, Adam 1 Rouvinskaya, E. A. 1 Sayers, Ryan 1 Seppälä, Mika 1 Shlizerman, Eli 1 Smith, David Andrew 1 Sun, Wen-Rong 1 Swierczewski, Christopher 1 Thelwell, Roger J. 1 Warner, Brandon W. 1 Wilkening, Jon A.
### Serials
10 Physica D 7 Studies in Applied Mathematics 6 Physics Letters. A 4 Mathematics and Computers in Simulation 4 Journal of Physics A: Mathematical and Theoretical 3 Journal of Fluid Mechanics 3 Applied Mathematics Letters 3 Journal of Physics A: Mathematical and General 3 SIAM Review 3 Discrete and Continuous Dynamical Systems 3 SIAM Journal on Applied Dynamical Systems 2 Applicable Analysis 2 Wave Motion 2 Mathematics of Computation 2 SIAM Journal on Mathematical Analysis 2 Journal of Nonlinear Science 2 European Journal of Mechanics. B. Fluids 1 Communications in Mathematical Physics 1 IMA Journal of Numerical Analysis 1 Journal of Computational Physics 1 Journal of Mathematical Physics 1 Letters in Mathematical Physics 1 Nonlinearity 1 Theoretical and Mathematical Physics 1 Quarterly of Applied Mathematics 1 SIAM Journal on Applied Mathematics 1 Mathematical Physics, Analysis and Geometry 1 Proceedings of the Royal Society of London. Series A. Mathematical, Physical and Engineering Sciences 1 Journal of Nonlinear Mathematical Physics 1 Lecture Notes in Mathematics
### Fields
53 Partial differential equations (35-XX) 27 Dynamical systems and ergodic theory (37-XX) 17 Fluid mechanics (76-XX) 15 Numerical analysis (65-XX) 6 Algebraic geometry (14-XX) 5 Ordinary differential equations (34-XX) 5 Quantum theory (81-XX) 3 General and overarching topics; collections (00-XX) 3 Functions of a complex variable (30-XX) 3 Statistical mechanics, structure of matter (82-XX) 2 Special functions (33-XX) 2 Harmonic analysis on Euclidean spaces (42-XX) 2 Computer science (68-XX) 2 Classical thermodynamics, heat transfer (80-XX) 1 History and biography (01-XX) 1 Number theory (11-XX) 1 Difference and functional equations (39-XX) 1 Operator theory (47-XX) 1 Mechanics of particles and systems (70-XX) 1 Geophysics (86-XX) 1 Systems theory; control (93-XX)
### Citations contained in zbMATH Open
67 Publications have been cited 729 times in 429 Documents.
Computing spectra of linear operators using the Floquet-Fourier-Hill method. Zbl 1105.65119
Deconinck, Bernard; Kutz, J. Nathan
2006
The instability of periodic surface gravity waves. Zbl 1241.76212
Deconinck, Bernard; Oliveras, Katie
2011
Recovering the water-wave profile from pressure measurements. Zbl 1343.76005
Oliveras, K. L.; Vasan, V.; Deconinck, B.; Henderson, D.
2012
KdV cnoidal waves are spectrally stable. Zbl 1178.35327
Bottman, Nate; Deconinck, Bernard
2009
Computing Riemann matrices of algebraic curves. Zbl 1054.14079
Deconinck, Bernard; van Hoeij, Mark
2001
The method of fokas for solving linear partial differential equations. Zbl 1295.35002
Deconinck, Bernard; Trogdon, Thomas; Vasan, Vishal
2014
The orbital stability of the cnoidal waves of the Korteweg-de Vries equation. Zbl 1238.35128
Deconinck, Bernard; Kapitula, Todd
2010
Computing Riemann theta functions. Zbl 1092.33018
Deconinck, Bernard; Heil, Matthias; Bobenko, Alexander; van Hoeij, Mark; Schmies, Marcus
2004
Global existence for a coupled system of Schrödinger equations with power-type nonlinearities. Zbl 1286.35230
Nguyen, Nghiem V.; Tian, Rushun; Deconinck, Bernard; Sheils, Natalie
2013
On the spectral and orbital stability of spatially periodic stationary solutions of generalized Korteweg-de Vries equations. Zbl 1331.35305
Kapitula, Todd; Deconinck, Bernard
2015
Numerical inverse scattering for the Korteweg-de Vries and modified Korteweg-de Vries equations. Zbl 1248.65108
Trogdon, Thomas; Olver, Sheehan; Deconinck, Bernard
2012
Elliptic solutions of the defocusing NLS equation are stable. Zbl 1222.81157
Bottman, Nathaniel; Deconinck, Bernard; Nivala, Michael
2011
The stability analysis of the periodic traveling wave solutions of the mKdV equation. Zbl 1231.35197
Deconinck, B.; Nivala, M.
2011
The inverse water wave problem of bathymetry detection. Zbl 1284.76077
Vasan, Vishal; Deconinck, Bernard
2013
SpectrUW: a laboratory for the numerical exploration of spectra of linear operators. Zbl 1113.65058
Deconinck, Bernard; Kiyak, Firat; Carter, John D.; Kutz, J. Nathan
2007
High-frequency instabilities of small-amplitude solutions of Hamiltonian PDEs. Zbl 1365.37057
Deconinck, Bernard; Trichtchenko, Olga
2017
On the convergence of Hill’s method. Zbl 1205.34116
Curtis, Christopher W.; Deconinck, Bernard
2010
Computational approach to Riemann surfaces. Zbl 1207.14002
2011
Continuous and discrete homotopy operators and the computation of conservation laws. Zbl 1161.65376
Hereman, Willy; Colagrosso, Michael; Sayers, Ryan; Ringler, Adam; Deconinck, Bernard; Nivala, Michael; Hickman, Mark
2005
Dynamics and stability of Bose-Einstein condensates: the nonlinear Schrödinger equation with periodic potential. Zbl 1009.35078
Deconinck, B.; Frigyik, B. A.; Kutz, J. N.
2002
Stability of periodic gravity waves in the presence of surface tension. Zbl 1297.76028
Deconinck, Bernard; Trichtchenko, Olga
2014
Transverse instabilities of deep-water solitary waves. Zbl 1149.76627
Deconinck, Bernard; Pelinovsky, Dmitry E.; Carter, John D.
2006
The linear KdV equation with an interface. Zbl 1351.35170
Deconinck, Bernard; Sheils, Natalie E.; Smith, David A.
2016
Spectral stability of stationary solutions of a Boussinesq system describing long waves in dispersive media. Zbl 1300.35090
Chen, Min; Curtis, Christopher W.; Deconinck, Bernard; Lee, Crystal W.; Nguyen, Nghiem
2010
Relating the bottom pressure and the surface elevation in the water wave problem. Zbl 1362.35220
Deconinck, B.; Oliveras, K. L.; Vasan, V.
2012
The stability spectrum for elliptic solutions to the focusing NLS equation. Zbl 1415.35251
Deconinck, Bernard; Segal, Benjamin L.
2017
The solution of linear constant-coefficient evolution PDEs with periodic boundary conditions. Zbl 1242.35011
Trogdon, Thomas; Deconinck, Bernard
2012
A Riemann-Hilbert problem for the finite-genus solutions of the KdV equation and its numerical solution. Zbl 1278.37050
Trogdon, Thomas; Deconinck, Bernard
2013
Periodic finite-genus solutions of the KdV equation are orbitally stable. Zbl 1189.37080
Nivala, Michael; Deconinck, Bernard
2010
Computing with plane algebraic curves and Riemann surfaces: the algorithms of the Maple package “algcurves”. Zbl 1213.14114
Deconinck, Bernard; Patterson, Matthew S.
2011
Pole dynamics for elliptic solutions of the Korteweg-de Vries equation. Zbl 0970.35130
Deconinck, Bernard; Segur, Harvey
2000
Stability of periodic traveling wave solutions to the Kawahara equation. Zbl 1404.76054
Trichtchenko, Olga; Deconinck, Bernard; Kollár, Richard
2018
The instability of Wilton ripples. Zbl 1467.76018
Trichtchenko, Olga; Deconinck, Bernard; Wilkening, Jon
2016
Short-wave transverse instabilities of line solitons of the two-dimensional hyperbolic nonlinear Schrödinger equation. Zbl 1301.35156
Pelinovsky, D. E.; Rouvinskaya, E. A.; Kurkina, O. E.; Deconinck, B.
2014
A numerical dressing method for the nonlinear superposition of solutions of the KdV equation. Zbl 1302.65234
Trogdon, Thomas; Deconinck, Bernard
2014
Numerical computation of the finite-genus solutions of the Korteweg-de Vries equation via Riemann-Hilbert problems. Zbl 1255.65177
Trogdon, Thomas; Deconinck, Bernard
2013
Dynamics of periodic multi-component Bose-Einstein condensates. Zbl 1038.82056
Deconinck, Bernard; Kutz, J. Nathan; Patterson, Matthew S.; Warner, Brandon W.
2003
Interface problems for dispersive equations. Zbl 1314.35125
Sheils, Natalie E.; Deconinck, Bernard
2015
Initial-to-interface maps for the heat equation on composite domains. Zbl 1346.35195
Sheils, Natalie E.; Deconinck, Bernard
2016
The stability spectrum for elliptic solutions to the sine-Gordon equation. Zbl 1378.35028
Deconinck, Bernard; McGill, Peter; Segal, Benjamin L.
2017
Well-posedness of boundary-value problems for the linear Benjamin-Bona-Mahony equation. Zbl 1277.35115
Vasan, Vishal; Deconinck, Bernard
2013
Computing the Abel map. Zbl 1200.37069
Deconinck, Bernard; Patterson, Matthew S.
2008
The interaction of long and short waves in dispersive media. Zbl 1349.76030
Deconinck, Bernard; Nguyen, Nghiem V.; Segal, Benjamin L.
2016
Continuous and discrete homotopy operators: a theoretical approach made concrete. Zbl 1120.65073
Hereman, W.; Deconinck, B.; Poole, L. D.
2007
Direct characterization of spectral stability of small-amplitude periodic waves in scalar Hamiltonian problems via dispersion relation. Zbl 1426.37053
Kollár, Richard; Deconinck, Bernard; Trichtchenko, Olga
2019
Exact nonstationary solutions to the mean-field equations of motion for two-component Bose-Einstein condensates in periodic potentials. Zbl 1136.82339
Bradley, R. Mark; Deconinck, Bernard; Kutz, J. Nathan
2005
The KP equation with quasiperiodic initial data. Zbl 0938.35165
Deconinck, Bernard; Segur, Harvey
1998
Stability of exact solutions of the defocusing nonlinear Schrödinger equation with periodic potential in two dimensions. Zbl 0984.81028
Deconinck, B.; Frigyik, B. A.; Kutz, J. N.
2001
Heat conduction on the ring: interface problems with periodic boundary conditions. Zbl 1314.80002
Sheils, Natalie E.; Deconinck, Bernard
2014
Symbolic integration using homotopy methods. Zbl 1182.65044
Deconinck, Bernard; Nivala, Michael
2009
Fokas’s unified transform method for linear systems. Zbl 1407.35004
Deconinck, Bernard; Guo, Qi; Shlizerman, Eli; Vasan, Vishal
2018
Real Lax spectrum implies spectral stability. Zbl 1457.37094
Upsal, Jeremy; Deconinck, Bernard
2020
Instabilities of one-dimensional trivial-phase solutions of the two-dimensional cubic nonlinear Schrödinger equation. Zbl 1091.35086
Carter, John D.; Deconinck, Bernard
2006
The Bernoulli boundary condition for traveling water waves. Zbl 1320.76022
Vasan, Vishal; Deconinck, Bernard
2013
A constructive test for integrability of semi-discrete systems. Zbl 1037.37505
Deconinck, Bernard
1996
Explicit solutions for a long-wave model with constant vorticity. Zbl 1408.76087
Segal, Benjamin L.; Moldabayev, Daulet; Kalisch, Henrik; Deconinck, Bernard
2017
A method to recover water-wave profiles from pressure measurements. Zbl 07213212
Vasan, Vishal; Oliveras, Katie; Henderson, Diane; Deconinck, Bernard
2017
Computing Riemann theta functions in Sage with applications. Zbl 07313685
Swierczewski, Christopher; Deconinck, Bernard
2016
Canonical variables for multiphase solutions of the KP equation. Zbl 1002.37033
Deconinck, Bernard
2000
Singular instability of exact stationary solutions of the non-local Gross-Pitaevskii equation. Zbl 1045.35077
Deconinck, Bernard; Kutz, J. Nathan
2003
The orbital stability of elliptic solutions of the focusing nonlinear Schrödinger equation. Zbl 1437.37096
Deconinck, Bernard; Upsal, Jeremy
2020
The pole dynamics of rational solutions of the viscous Burgers equation. Zbl 1119.35074
Deconinck, Bernard; Kimura, Yoshifumi; Segur, Harvey
2007
High-frequency instabilities of the Kawahara equation: a perturbative approach. Zbl 1478.35072
Creedon, Ryan; Deconinck, Bernard; Trichtchenko, Olga
2021
On the nonintegrability of equations for long- and short-wave interactions. Zbl 1396.35052
Deconinck, Bernard; Upsal, Jeremy
2018
Data analysis and reduction using stationary solutions of the NLS equation. Zbl 1191.81110
Deconinck, Bernard; Lovit, David O.
2010
The instabilities of periodic traveling water waves with respect to transverse perturbations. Zbl 1326.35283
Oliveras, Katie; Deconinck, Bernard
2015
Dispersive and soliton perturbations of finite-genus solutions of the KdV equation: computational results. Zbl 1331.35312
Trogdon, Thomas; Deconinck, Bernard
2014
### Cited by 515 Authors
36 Deconinck, Bernard 19 Johnson, Mathew A. 13 Pelinovsky, Dmitry Efimovich 10 Fokas, Athanassios S. 10 Trogdon, Thomas 9 Kapitula, Todd M. 8 Hur, Vera Mikyoung 8 Trichtchenko, Olga 8 Zumbrun, Kevin R. 7 Barker, Blake 7 Carter, John D. 7 Kalimeris, Konstantinos 7 Klein, Christian 7 Rodrigues, Luis Miguel 7 Saanouni, Tarek 6 Akers, Benjamin F. 6 Frauendiener, Jörg 6 Haragus, Mariana 6 Kalisch, Henrik 6 Nicholls, David P. 6 Sheils, Natalie E. 6 Stanislavova, Milena 6 Stefanov, Atanas G. 6 Vasan, Vishal 5 Bilman, Deniz 5 Biondini, Gino 5 Bronski, Jared C. 5 Constantin, Adrian 5 Himonas, A. Alexandrou 5 Mantzavinos, Dionyssios 5 Noble, Pascal 5 Oliveras, Katie L. 5 Pastor, Ademir 4 Bogatyrev, Andrei Borisovich 4 Cavalcante, Márcio André Araújo 4 Clamond, Didier 4 Colbrook, Matthew J. 4 Natali, Fábio M. Amorin 4 Părău, Emilian I. 4 Sherratt, Jonathan A. 4 Smith, David Andrew 4 Wang, Dengshan 4 Zhang, Yingnan 4 Zharinov, Victor Victorovich 3 Abid, Malek 3 Basu, Biswajit 3 Blyth, Mark G. 3 Camassa, Roberto 3 Chen, Jinbing 3 Chen, Robin Ming 3 Chien, Mao-Ting 3 Corcho, Adán J. 3 Crowdy, Darren Gregory 3 Ehrnström, Mats 3 Geng, Xianguo 3 Grigor’ev, O. A. 3 Hakkaev, Sevdzhan A. 3 Henry, David 3 Kevrekidis, Panayotis G. 3 Kharif, Christian 3 Lyons, Tony 3 Malomed, Boris A. 3 Marangell, Robert 3 Nachbin, André 3 Nakazato, Hiroshi 3 Naz, Rehana 3 Nguyen, Nghiem V. 3 Nivala, Michael 3 Olver, Sheehan Shakiban 3 Plaza, Ramón G. 3 Shimabukuro, Yusuke 3 Sturmfels, Bernd 3 Sun, Jianqing 3 Upsal, Jeremy 3 Vanden-Broeck, Jean-Marc 3 Walsh, Samuel 3 Wen, Xiaoyong 3 Wilkening, Jon A. 3 Yan, Fangchi 2 Ablowitz, Mark Jay 2 Abou-Dina, Moustafa S. 2 Agostini, Daniele 2 Amann, Dominic 2 Anco, Stephen C. 2 Andrade, David 2 Ashton, Anthony C. L. 2 Bhattarai, Santosh 2 Bobenko, Alexander Ivanovich 2 Claassen, Kyle M. 2 Cole, Justin T. 2 Creedon, Ryan P. 2 Curtis, Christopher W. 2 Dasgupta, Anirvan 2 Dutykh, Denys 2 Farah, Luiz Gustavo 2 Gerdzhikov, Vladimir Stefanov 2 Gesztesy, Fritz 2 Ghaleb, Ahmed Fouad 2 Gou, Tianxiang 2 Grinevich, Pëtr Georgievich ...and 415 more Authors
### Cited in 128 Serials
41 Physica D 22 Journal of Fluid Mechanics 17 Journal of Mathematical Physics 15 Studies in Applied Mathematics 11 Journal of Differential Equations 11 European Journal of Mechanics. B. Fluids 10 Journal of Mathematical Analysis and Applications 9 Nonlinear Analysis. Theory, Methods & Applications. Series A: Theory and Methods 9 Journal of Nonlinear Science 8 Nonlinearity 8 SIAM Journal on Mathematical Analysis 8 Discrete and Continuous Dynamical Systems 7 Communications in Mathematical Physics 7 Physics Letters. A 7 Mathematics of Computation 7 Applied Mathematics Letters 7 SIAM Journal on Applied Mathematics 7 Communications in Nonlinear Science and Numerical Simulation 6 Wave Motion 6 Mathematics and Computers in Simulation 6 Nonlinear Analysis. Real World Applications 6 Computational Methods and Function Theory 6 Philosophical Transactions of the Royal Society of London. A. Mathematical, Physical and Engineering Sciences 5 Archive for Rational Mechanics and Analysis 5 Journal of Computational Physics 5 Theoretical and Mathematical Physics 5 Quarterly of Applied Mathematics 5 Nonlinear Dynamics 5 Analysis and Mathematical Physics 4 Applicable Analysis 4 Journal of Symbolic Computation 4 Journal of Dynamics and Differential Equations 4 Water Waves 3 Computers & Mathematics with Applications 3 Letters in Mathematical Physics 3 ZAMP. Zeitschrift für angewandte Mathematik und Physik 3 Applied Mathematics and Computation 3 Journal of Computational and Applied Mathematics 3 Proceedings of the American Mathematical Society 3 Transactions of the American Mathematical Society 3 Linear Algebra and its Applications 3 Journal of Mathematical Fluid Mechanics 3 SIAM Journal on Applied Dynamical Systems 3 Proceedings of the Royal Society of London. A. Mathematical, Physical and Engineering Sciences 2 Journal of Engineering Mathematics 2 Mathematical Methods in the Applied Sciences 2 Acta Applicandae Mathematicae 2 European Journal of Applied Mathematics 2 Applied Mathematical Modelling 2 Experimental Mathematics 2 Advances in Computational Mathematics 2 Abstract and Applied Analysis 2 Chaos 2 Communications in Contemporary Mathematics 2 Journal of Evolution Equations 2 SIGMA. Symmetry, Integrability and Geometry: Methods and Applications 2 Proceedings of the Steklov Institute of Mathematics 2 Discrete and Continuous Dynamical Systems. Series S 2 Arabian Journal of Mathematics 2 East Asian Journal on Applied Mathematics 2 SIAM Journal on Applied Algebra and Geometry 1 International Journal of Modern Physics B 1 Acta Mechanica 1 Communications on Pure and Applied Mathematics 1 International Journal of Theoretical Physics 1 Inverse Problems 1 Journal of Mathematical Biology 1 Linear and Multilinear Algebra 1 Mathematical Proceedings of the Cambridge Philosophical Society 1 Zhurnal Vychislitel’noĭ Matematiki i Matematicheskoĭ Fiziki 1 Chaos, Solitons and Fractals 1 Journal of Geometry and Physics 1 Acta Arithmetica 1 BIT 1 Calcolo 1 Czechoslovak Mathematical Journal 1 Duke Mathematical Journal 1 Journal of Approximation Theory 1 Journal of Functional Analysis 1 Mathematische Annalen 1 Memoirs of the American Mathematical Society 1 SIAM Journal on Control and Optimization 1 Theoretical Computer Science 1 Systems & Control Letters 1 Annales de l’Institut Henri Poincaré. Analyse Non Linéaire 1 Applied Numerical Mathematics 1 Revista Matemática Iberoamericana 1 Japan Journal of Industrial and Applied Mathematics 1 M$$^3$$AS. 
Mathematical Models & Methods in Applied Sciences 1 Numerical Algorithms 1 Communications in Partial Differential Equations 1 SIAM Review 1 Archive of Applied Mechanics 1 SIAM Journal on Scientific Computing 1 NoDEA. Nonlinear Differential Equations and Applications 1 Opuscula Mathematica 1 Integral Transforms and Special Functions 1 Mathematical Problems in Engineering 1 Vietnam Journal of Mathematics 1 Mathematical Physics, Analysis and Geometry ...and 28 more Serials
### Cited in 43 Fields
312 Partial differential equations (35-XX) 117 Fluid mechanics (76-XX) 96 Dynamical systems and ergodic theory (37-XX) 55 Numerical analysis (65-XX) 31 Algebraic geometry (14-XX) 18 Operator theory (47-XX) 16 Ordinary differential equations (34-XX) 15 Geophysics (86-XX) 14 Quantum theory (81-XX) 13 Special functions (33-XX) 13 Computer science (68-XX) 13 Statistical mechanics, structure of matter (82-XX) 12 Functions of a complex variable (30-XX) 12 Optics, electromagnetic theory (78-XX) 9 Mechanics of deformable solids (74-XX) 6 Linear and multilinear algebra; matrix theory (15-XX) 6 Difference and functional equations (39-XX) 5 Number theory (11-XX) 5 Mechanics of particles and systems (70-XX) 4 Commutative algebra (13-XX) 4 Harmonic analysis on Euclidean spaces (42-XX) 4 Calculus of variations and optimal control; optimization (49-XX) 3 History and biography (01-XX) 3 Several complex variables and analytic spaces (32-XX) 3 Global analysis, analysis on manifolds (58-XX) 3 Classical thermodynamics, heat transfer (80-XX) 3 Relativity and gravitational theory (83-XX) 3 Biology and other natural sciences (92-XX) 3 Systems theory; control (93-XX) 2 Field theory and polynomials (12-XX) 2 Real functions (26-XX) 2 Potential theory (31-XX) 2 Integral transforms, operational calculus (44-XX) 2 Functional analysis (46-XX) 2 Probability theory and stochastic processes (60-XX) 2 Statistics (62-XX) 1 General and overarching topics; collections (00-XX) 1 Combinatorics (05-XX) 1 $$K$$-theory (19-XX) 1 Measure and integration (28-XX) 1 Integral equations (45-XX) 1 Geometry (51-XX) 1 Operations research, mathematical programming (90-XX)
http://mh-journal.blogspot.in/2015/11/about-gradient-descent-algorithm.html
## Friday, 27 November 2015
### About Gradient Descent Algorithm
Gradient Descent Algorithm is a key tool in Data Science used to find the minimum value of a function. You may have come across gradients in Microsoft PowerPoint or Adobe Photoshop when you start your slide or image creation. But what we are talking about here is all mathematical stuff, and to be specific - concepts from calculus.
The main usage of gradient descent in data science is in regression. I'd covered linear and logistic regression in my previous blog posts. To do a quick recap — linear regression is numeric prediction. Logistic regression is binary category classification based on log-odds.
In regression, we have data connecting a dependent variable and the variables it depends on, which are the independent variables. We want to find the formula between them that best represents the data, the best fit equation.
We start with an initial equation. Then we improve it. How do we do it? In the given data, for the known independent variable values, we calculate the value of the output variable as given by our initial equation. Then we compare these calculated values with the actual values of the output variable in the given data. The difference between the calculated value and the actual value is the error.
Now the error values themselves collectively take the form of a function. Mathematicians / data scientists / nerds fondly call this the cost function. So our problem of finding the best fit equation is one for which the error function, a.k.a. the cost function, is minimum.
In order to find this minimum, we take the help of calculus, where the narrative starts with the derivative. Calculus is one of those subjects that will readily put you off. But then we need not comprehend the entire breadth of calculus; what we need to know is the derivative and the partial derivative. That calls for a detour into high school math.
Let’s say x and y are the two variables. y is dependent on x. That is, the value of y is derived from the value of x by applying some formula on x. In other words, x is the input and y is the output or x is the independent variable and y is the dependent variable or y is a function of x. Mathematically, y = f(x).
The derivative is the rate of change of the function (y) at a specific value of x. The rate at which something changes is a ratio: how large the change in the output is relative to the change in the input. To find the derivative we take a very small increment of the value of x, so small an increment that it approaches zero, and find the rate of change.
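To make the limit concrete, here is a small numerical sketch (in Python; the example function and values are mine, not from the original post): as the increment h shrinks, the ratio of the change in output to the change in input approaches the true derivative.

def f(x):
    return x ** 2                        # example function; its exact derivative is 2x

x0 = 3.0
for h in [1.0, 0.1, 0.01, 0.001]:
    rate = (f(x0 + h) - f(x0)) / h       # change in output divided by change in input
    print(h, rate)                       # tends to f'(3) = 6 as h approaches 0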
There is an opinion that the sensitivity is a better term than derivative. As one user expressed on stack overflow: ‘I dislike the word "derivative", which provides no hint of what a derivative is. My suggested replacement name is "sensitivity". The derivative measures the sensitivity of a function. In particular, it measures how sensitive the output is to small changes in the input. It is given by the ratio, where the denominator is the change in the input and the numerator is the induced change in the output. With this definition, it is not hard to show students why knowing the derivative can be very useful in many different contexts.’
In the physical world, the two best examples of derivative are the tangent and velocity. “The derivative of the position of a moving object with respect to time is the object's velocity: this measures how quickly the position of the object changes when time is advanced. The derivative of a function of a single variable at a chosen input value is the slope of the tangent line to the graph of the function at that point,” explains Wikipedia.
Now the thing is, will y be dependent on only one other variable, x? No. It could be dependent on, or derived from, multiple variables. So what do we do with the derivative then? No worries, humanity has this habit that when there are multiple variables, we focus on only one thing at a time. Here we take the derivative of y with respect to one variable (x), keeping the other variables constant. That is the partial derivative.
From partial derivatives, we move on to the vector. math.com explains it as "a vector is a specific mathematical structure. It has numerous physical and geometric applications, which result mainly from its ability to represent magnitude and direction simultaneously. Wind, for example, has both a speed and a direction and, hence, is conveniently expressed as a vector. The same can be said of moving objects and forces. The location of a point on a cartesian coordinate plane is usually expressed as an ordered pair (x, y), which is a specific example of a vector. Being a vector, (x, y) has a certain distance (magnitude) from and angle (direction) relative to the origin (0, 0). Vectors are quite useful in simplifying problems from three-dimensional geometry."
Now we come to the gradient. It is short for gradient vector. It is represented by the inverted triangle nabla. But nobody reads it as nabla. We just call it the gradient.
The gradient of a function is the vector of partial derivatives of the function. For a function f(x, y), at a point $$P_0$$ ($$x_0, y_0)$$, it is obtained by evaluating the partial derivatives of f at $$P_0$$ and combining them along the unit vectors i and j:
$$\nabla f ~ = ~ \frac{\partial f}{\partial x} i + \frac{\partial f}{\partial y} j$$
For three variables it is,
$$\nabla f ~ = ~ \frac{\partial f}{\partial x} i + \frac{\partial f}{\partial y} j + \frac{\partial f}{\partial z} k$$
and so on.
But what does it indicate? It points in the direction of steepest ascent and its magnitude is the slope of the steepest ascent. Good. We have a function and we know that its gradient gives the direction of the greatest increase. Implying that its negative gives the direction of the greatest decrease. Time to get back to the cost function.
We take a small step down on our cost curve, check to see which direction we need to go to get closer to the minimum by using the negative of the gradient, and then take the next step to move closer to the minimum. We repeat until we get to the lowest point.
And that, my friends, is the gradient descent algorithm.
Let’s see all the mathematical symbols in their gradient glory
The derivative
$$f'(x) = \lim_{h \to 0} \frac{f(x+h) - f(x)}{h}$$
Various symbols are used for the derivative: $$f'(x)$$, $$\frac{dy}{dx}$$, $$\dot{y}$$.
Cost function in linear regression
$$J(\theta) = \frac{1}{2m} \sum_{i=1}^{m} \left( h_\theta(x^{(i)}) - y^{(i)} \right)^2$$
Cost function in logistic regression
$$J(\theta) = -\frac{1}{m} \left[ \sum_{i=1}^{m} y \cdot \log\left(h_\theta(x)\right) + (1-y) \log\left(1-h_\theta(x)\right) \right]$$
Gradient Descent Algorithm
Repeat until convergence
{
$$\theta_j := \theta_j - \alpha \frac{1}{m} \sum\limits_{i=1}^m \left( h_\theta(x^{(i)}) - y^{(i)} \right) x_j^{(i)}$$
}
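To tie the pieces together, here is a minimal sketch of batch gradient descent for linear regression (Python with NumPy; the function name, learning rate and toy data are mine, not from the original post): it repeatedly computes the partial derivatives of the cost and steps against the gradient.

import numpy as np

def gradient_descent(X, y, alpha=0.1, n_iter=5000):
    # X: (m, n) design matrix with a leading column of ones, y: (m,) targets
    m, n = X.shape
    theta = np.zeros(n)
    for _ in range(n_iter):
        error = X @ theta - y          # h_theta(x) - y for every example
        grad = (X.T @ error) / m       # partial derivatives of the cost w.r.t. each theta_j
        theta -= alpha * grad          # take a small step against the gradient
    return theta

# Toy usage: recover y = 1 + 2x from slightly noisy data.
rng = np.random.default_rng(0)
x = rng.uniform(0, 1, size=100)
y = 1 + 2 * x + rng.normal(0, 0.05, size=100)
X = np.column_stack([np.ones_like(x), x])
print(gradient_descent(X, y))          # approximately [1, 2]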
https://techwhiff.com/learn/in-study-of-perception-80-men-are-tested-and-7/359395
# In a study of perception, 80 men are tested and 7 are found to have red/green...
###### Question:
In a study of perception, 80 men are tested and 7 are found to have red/green color blindness. A 99% confidence interval for this proportion would be .0875 plus or minus:
Options: (a) 0.0813 (b) 0.0316 (c) 0.0520 (d) 0.0619
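For reference, the margin of error can be reproduced with the usual normal-approximation formula; below is a small Python sketch (the 2.576 critical value for 99% confidence is the standard assumption):

from math import sqrt

n, x = 80, 7
p_hat = x / n                          # 0.0875
se = sqrt(p_hat * (1 - p_hat) / n)     # standard error of the sample proportion, ~0.0316
margin = 2.576 * se                    # 99% confidence, z ~ 2.576
print(round(p_hat, 4), round(margin, 4))   # 0.0875 plus or minus ~0.0813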
https://www.physicsforums.com/threads/electron-heat-capacity-integral.842877/
# Electron Heat Capacity Integral
1. Nov 12, 2015
### Tphysics
1. The answer to this problem is easy when plugged into Mathematica: it's $\pi^2/3$. I am trying to integrate it by hand, however, and can't figure out how to start. I also can't find any other attempts at it online (our professor says we can just look it up if we can find it).
[(x^2*E^x)/(E^x + 1)^2, {x, -Infinity, Infinity}]
2. No equations
3. I've tried U-sub with setting U= (e^x+1) and then tried some integration by parts but I'm not getting there.
2. Nov 13, 2015
### fzero
This actually turns out to be very complicated to do and I am having trouble giving hints that you can follow without giving too much of the answer away, so please bear with me. At least using Mathematica seems like a legitimate solution to the problem and I don't believe that many people would expect an undergrad to come up with the solution below on their own.
First, integrals of functions of $x^n$ times exponentials can often be done by replacing $e^x$ by $e^{a x}$ and then noting that $d/da(e^{ax}) = x e^{ax}$, so we try to replace the powers of $x$ with derivatives of another expression. Then we can exploit this by bringing the derivative outside of the integral. For example
$$\int dx ~ x e^x = \left[ \frac{d}{da} \int dx~e^{ax} \right]_{a=1},$$
which you should be able to verify by doing both integrals explicitly.
In your case, we can use
$$\frac{x^2 e^x}{(e^x+1)^2} = \left[ \frac{d^2}{da^2} \ln ( 1+ e^{ax})\right]_{a=1}.$$
Furthermore, we can determine the indefinite integral
$$\int dx \ln ( 1+ e^{ax})$$
in terms of the dilogarithm function (see for instance https://en.wikipedia.org/wiki/Spence's_function)
$$\text{Li}_2(z) = - \int^z_0 \frac{du}{u} \ln ( 1-u).$$
The big difficulty here is that the dilogarithm is infinite as $z\rightarrow -\infty$, so the naive substitution for your integral over the whole real axis will result in a divergent integral. (The dilogarithm is also usually not defined for $1 \leq z < \infty$, but I believe that the proper substitutions keep us on the negative real axis.) However, I believe that it is possible to show that the definite integral
$$F(a) =\int_{-\infty}^0 dx \ln ( 1+ e^{ax})$$
exists. So we should break your original integral into two parts, then the answer can be expressed as the appropriate derivative of $F(a)+F(-a)$.
It will probably be important to use the results (https://en.wikipedia.org/wiki/Spence's_function#Special_values) $\text{Li}_2(-1)=-\pi^2/12$ and $\text{Li}_2(0)=0.$
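Not part of the original thread, but a quick numerical sanity check of the claimed value is easy to script (a Python/SciPy sketch; the integrand is rewritten in a numerically stable form):

import numpy as np
from scipy.integrate import quad

# x^2 e^x / (e^x + 1)^2 rewritten with e^{-|x|} to avoid overflow for large |x|
f = lambda x: x**2 * np.exp(-abs(x)) / (1 + np.exp(-abs(x)))**2
val, err = quad(f, -np.inf, np.inf)
print(val, np.pi**2 / 3)    # both ~3.2899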
3. Nov 14, 2015
### Tphysics
Thanks but this is math I am completely unfamiliar with. It ended up being doable also with a contour integral.
SOLVED.
4. Nov 14, 2015
### Tphysics
I drew it terribly above but you catch my drift.
5. Nov 14, 2015
### fzero
Sure, I didn't seriously consider suggesting the contour integral because it is a bit rare to find someone comfortable with the method. I probably should have asked first. It's good that you were able to work it out yourself that way.
http://math.stackexchange.com/questions/161239/regularity-of-the-greens-function-vs-regularity-of-the-solution
# regularity of the Green's function vs. regularity of the solution
In Friedman's book there is an analysis for the PDE which is done via the fundamental solution. As I understand it, if we integrate that with the initial data it gives us a solution of the equation. There are also results on the regularity of the fundamental solution, but I wonder if that has an effect on the regularity of the solution itself? Is there a direct dependence? How does smoothness of the fundamental solution affect the smoothness of a solution? Thanks!
Since the general solution of $Lf=g$ is the convolution of $g$ with the fundamental solution $\varphi$, we should expect that better regularity of $\varphi$ results in better regularity of $f=\varphi*g$. For example, if $\varphi\in L^p$ for some $p$, then $\varphi*g\in L^p$ for all $g\in L^1$, by Young's inequality. Of course, the more interesting question is the smoothness of solution rather than its integrability. Unfortunately, here the situation is complicated by the nature of $\varphi$: it is usually smooth except at one point where there is a singularity. One has to be careful estimating the contribution of the singularity to the solution. So, the relation between the smoothness of $\varphi$ and $\varphi*g$ cannot be described in broad terms that apply to all PDE.
@Leoned: the singularity that you have mentioned occurs when $t=0$, since the density becomes a delta function, but for other $t>0$, would not the partial derivative be $u_x=\int \phi_x g\,dy$? Then the differentiability of $\phi$ would imply differentiability of $u$, would it not? – Medan Jun 24 '12 at 23:58
@Medan We have $u(x)=\int\phi(x-t)g(x)\,dt=\int\phi(t)g(x-t)\,dt$. For every $x$ the integral involves contribution from the part of $\phi$ where it is singular. For example, the function $\phi(t)=\log|t|$ is $C^{\infty}$ outside of $0$, but its convolution with a continuous function $g$ is not even $C^2$ in general. – user31373 Jun 25 '12 at 13:19
I think it should be written as $u(x)=\int\phi(x-y)g(y)\,dy$? where the same for the heat equation is $u(x,t)=\int\phi(x-y,t)g(y)\,dy$, which is singular only at $t=0$. Thus, we define it everywhere but $t=0$ to avoid those problems; then the regularity of $u(x,t)$ is the same as the regularity of $\phi(x-y,t)$ w.r.t. $x,t$. Where is my logic wrong? – Medan Jun 25 '12 at 15:04
https://gateoverflow.in/289411/recurrence-relation
T(n) = T(n/4) + T(3n/4) + n
How do I solve this type of problem?
Can I solve this using the master theorem by considering X = T(3n/4) + n, and then
T(n) = T(n/4) + X?
Can we solve it like this?
No brother, you have to consider both functions to get the result, though only one side can be considered, either to find the best-case time or the worst-case time.
Then how do we tackle this type of problem?
Can you please provide an easy way to solve these?
Thanks
Bro, see which one is big and merge the smaller one into it, meaning we simply ignore it:
T(n) = T(3n/4) + n
For this case the TC = $\Theta(n)$.
It will be the same TC whether we take the best case or the worst case.
Brother, use the recursion tree method when there is more than one function,
and go with the basics; it will help you reduce it to simple methods.
@Hemanth_13 it will be n log n (approximately); why n? Check once.
Hey, in the second example on the site below (T(n) = T(n/3) + T(2n/3) + n),
how do we know that the height of the tree is $\log_{3/2} n$?
@Nandkishor3939 go until T(1) = T(n/(3/2)^k), which means n/(3/2)^k = 1; simplify this and you will get k = $\log_{3/2} n$, and that is the longest path, while n/3^k = 1 gives the shortest path, k = $\log_3 n$.
@arvin why not T(1) = T(n/3^k)?
You can use that also, but it will give $\Omega(n \log_3 n)$, because the length of that chain is small compared to the T(2n/3) chain.
How are you guys calculating T(1)?
Please tell me in detail; I'm not able to understand.
If you solve the recurrence relation by the traditional method (by substitution and not by the tree method),
you will see that T(n/(3/2)) becomes T(n/(3/2)^k) at the k-th substitution,
so set T(1) = T(n/(3/2)^k), because we need to find the value of k at which we reach the last stage of the recurrence relation, i.e. T(1);
thus we get an idea of how deep the tree will be (it is quite intuitive!), which is what the discussion was all about.
Solve it by making a recursion tree.
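Not part of the original thread, but a quick numerical check (a Python sketch with memoization; the integer division is an approximation of the exact recurrence) supports the $n \log n$ behaviour discussed in the comments:

from functools import lru_cache
from math import log

@lru_cache(maxsize=None)
def T(n):
    if n <= 1:
        return 1
    return T(n // 4) + T(3 * n // 4) + n     # T(n) = T(n/4) + T(3n/4) + n

for n in [10**3, 10**4, 10**5, 10**6]:
    print(n, T(n) / (n * log(n)))            # the ratio stays bounded, consistent with Theta(n log n)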
https://www.riesethiopia.com/pa5e2qam/how-to-calculate-fitts%27-law-bbe033
Fitts' law and the calculation of throughput. In the field of Human-Computer Interaction (HCI), Fitts' law has been mainly applied in two ways: firstly as a predictive model, and secondly as a means to derive the dependent measure throughput (Fitts' index of performance) as part of the comparison and evaluation of pointing devices (Fitts, 1954; MacKenzie, "Movement time prediction in human-computer interfaces"). In 1954, Fitts described the relationship between the target distance, width, and time needed for a target acquisition task, and the law is used to model the act of pointing, either by physically touching an object with a hand or finger, or virtually, by pointing to an object on a computer monitor using a pointing device. Because the index of difficulty (ID) depends only on the ratio of distance to width, long movements to wide targets require about the same time as very short movements to narrow targets: a small object close by can be just as easy to grasp as a larger object further away. Throughput (the index of performance, in bits per unit time) combines a task's index of difficulty with the movement time (MT, in seconds) in selecting the target, and is comparable across devices and tasks. Although no formal mathematical connection was established between Fitts's law and the Shannon-Hartley theorem that inspired it (the theorem describes the transmission of information using bandwidth, signal strength and noise), the Shannon form of the law has been used extensively, likely due to the appeal of quantifying motor actions using information theory. In 2002 the ISO 9241 standard was published, providing standards for human-computer interface testing, including the use of the Shannon form of Fitts's law.
In its original form, Fitts's law is meant to apply only to one-dimensional tasks; Fitts's original study used one dimension of movement, and the model has since been refined for bivariate pointing and extended to two-dimensional tasks in two different ways. Known limits of the law are strange results with very small movement amplitudes, the restriction to pointing, and the fact that the model's predictive power deteriorates when both distance and width are varied over a significant range. Research also suggests that in practical implementations the direction in which a user has to move their mouse has to be accounted for. To compare tasks, limbs and devices, an effective target width (We) is computed from the spread of the actual selection coordinates, so that the measure reflects what users actually did rather than what they were asked to do; if the selection coordinates are normally distributed, We spans 96% of the distribution. If such accuracy adjustments are not incorporated into the model, average movement times can be artificially decreased. A pointing movement is typically split into two phases: a first fast but imprecise movement toward the target, followed by a second slow, controlled and precise movement to actually hit the target. During fast saccadic eye movements the user is blind, which matters when Fitts's law is applied to eye tracking.
Fitts's law also states that the target acquisition time increases drastically if the target gets tiny, so in its basic form the law says that targets a user has to hit should be as big in size as possible. Placing layout elements on the four edges of the screen allows for infinitely large targets in one dimension and therefore presents ideal scenarios; targets at the edges and corners can be considered to have an infinite width along the movement axis, and the user needs much less precision because they can simply fling the mouse in the direction of a corner and the limitations of the screen restrict where the pointer ends up. Optimizing for the distance parameter in this way allows for smaller travel times: pop-up menus that appear at the cursor rather than fixed drop-down menus reduce travel times, and in a radial menu all items have the same distance from the prime pixel (a 1991 study compared radial menu designs). Finally, Fitts's law deals only with targets defined in space, but an analogous temporal pointing task can be defined: a target that appears, or moves toward a selection area, has a temporal distance Dt (the amount of time a person must wait for the target to appear) and a temporal width Wt; traffic, power generation or industrial processes are potential instances.
In the PsyToolkit demonstration of a Fitts's law experiment, on each of the 20 trials the participant moves the mouse cursor to a small yellow rectangle; the data output file is simply a text file, which can be uploaded to a PsyToolkit account to draw a simple XY plot of the data.
https://community.wolfram.com/groups/-/m/t/2169058
# Judge the sign of a multi parameter equality Using 'Simplify'
Posted 1 month ago
I'm not sure this method is available for judging the sign of a multi parameter equality using the function "Simplify" in Mathematica. But in the official document, the "Simplify" could "use assumptions to prove inequalities".So I tried this method to judge my complex equation (it's other formula's first derivative and I need to judge it positive or negative)And I posted my code here, please help me why it can not obtain the outcome of "True of False". Simplify[1 + ( b^m E^((qL \[Rho])/\[Lambda]) m p^(-1 + m) RqL (E^(b/\[Lambda]) + E^((qL \[Rho])/\[Lambda]) (-1 + \[Alpha]) - E^((b^m qH)/((b^m + p^m) \[Lambda])) \[Alpha]))/((b^m + p^ m)^2 (-E^(((2 b)/\[Lambda])) + E^(( b + (b^m qH)/(b^m + p^m))/\[Lambda]) + E^((b^m (qH + qL))/((b^m + p^m) \[Lambda])) (-1 + \[Alpha]) - E^((2 qL \[Rho])/\[Lambda]) (-1 + \[Alpha]))) + (b^(2 m) E^(( qL \[Rho])/\[Lambda]) m p^(-1 + m) RqL (E^(( b^m (qH + 2 qL))/((b^m + p^m) \[Lambda])) (qH - qL) (-1 + \[Alpha]) + E^(b/\[Lambda]) (-E^(((2 b)/\[Lambda])) qL - 2 E^((b^ m (qH + qL))/((b^m + p^m) \[Lambda])) (qH - qL) (-1 + \[Alpha]) - 2 E^((b + (b^m qL)/(b^m + p^m))/\[Lambda]) qL (-1 + \[Alpha]) + E^((2 qL \[Rho])/\[Lambda]) qL (-1 + \[Alpha]) - E^((2 b^m qH)/((b^m + p^m) \[Lambda])) qL \[Alpha] + E^((b + (b^m qH)/( b^m + p^m))/\[Lambda]) (-qH + qL + (qH + qL) \[Alpha]))))/((b^m + p^m)^3 (E^(( 2 b)/\[Lambda]) - E^((b + (b^m qH)/(b^m + p^m))/\[Lambda]) - E^((b^m (qH + qL))/((b^m + p^m) \[Lambda])) (-1 + \[Alpha]) + E^((2 qL \[Rho])/\[Lambda]) (-1 + \[Alpha]))^2 \[Lambda]) < 0, 100 > qH > qL > b > p > 0 && 0 < \[Rho] < 1 && 2 > \[Lambda] > 0 && 0 < \[Alpha] < 1 && 2 > m > 1]
Posted 1 month ago
It does not seem that your expression has a definite sign: inst = FindInstance[ 100 > qH > qL > b > p > 0 && 0 < \[Rho] < 1 && 2 > \[Lambda] > 0 && 0 < \[Alpha] < 1 && 2 > m > 1, {m, qH, qL, b, p, \[Rho], \[Lambda], \[Alpha]}, Reals, 3]; expr = 1 + (b^ m E^((qL \[Rho])/\[Lambda]) m p^(-1 + m) RqL (E^(b/\[Lambda]) + E^((qL \[Rho])/\[Lambda]) (-1 + \[Alpha]) - E^((b^m qH)/((b^m + p^m) \[Lambda])) \[Alpha]))/((b^m + p^m)^2 (-E^(((2 b)/\[Lambda])) + E^((b + (b^m qH)/(b^m + p^m))/\[Lambda]) + E^((b^m (qH + qL))/((b^m + p^m) \[Lambda])) (-1 + \[Alpha]) - E^((2 qL \[Rho])/\[Lambda]) (-1 + \[Alpha]))) + (b^(2 m) \ E^((qL \[Rho])/\[Lambda]) m p^(-1 + m) RqL (E^((b^m (qH + 2 qL))/((b^m + p^m) \[Lambda])) (qH - qL) (-1 + \[Alpha]) + E^(b/\[Lambda]) (-E^(((2 b)/\[Lambda])) qL - 2 E^((b^m (qH + qL))/((b^m + p^m) \[Lambda])) (qH - qL) (-1 + \[Alpha]) - 2 E^((b + (b^m qL)/(b^m + p^m))/\[Lambda]) qL (-1 + \[Alpha]) + E^((2 qL \[Rho])/\[Lambda]) qL (-1 + \[Alpha]) - E^((2 b^m qH)/((b^m + p^m) \[Lambda])) qL \[Alpha] + E^((b + (b^m qH)/(b^m + p^m))/\[Lambda]) (-qH + qL + (qH + qL) \[Alpha]))))/((b^m + p^m)^3 (E^((2 b)/\[Lambda]) - E^((b + (b^m qH)/(b^m + p^m))/\[Lambda]) - E^((b^m (qH + qL))/((b^m + p^m) \[Lambda])) (-1 + \[Alpha]) + E^((2 qL \[Rho])/\[Lambda]) (-1 + \[Alpha]))^2 \[Lambda]); Plot[expr /. inst[[2]], {RqL, -1, 1}]
Posted 1 month ago
Thanks, but I have some questions about your answer. Does your code present a simulation over one parameter? I cannot understand the meaning of the plot; could you please explain it in detail? (I have read the official documentation, but I only started learning Mathematica yesterday.) I will also post another region for the parameters. It is: qH > qL > 0 && b > 0 && p > 0 && 0 < \[Rho] < 1 && \[Lambda] > 0 && 0 < \[Alpha] < 1 && m > 0, which should be put after FindInstance as the first part. Thanks, bro!!
Posted 1 month ago
I tried to make your expression more manageable by setting most variables to constants. Your expression is linear in the variable RqL, and you make no assumption on RqL. FindInstance finds some constant values for the other parameters that satisfy your constraints. I substitute those values into the expression and plot the result as a function of RqL. The plot changes sign, which is an indication that your expression has no definite sign. Plot makes numerical approximations, so this is not a mathematical proof, but it points in a direction for further investigation.
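The same kind of check can also be scripted outside Mathematica. Below is a rough Python/NumPy sketch of the idea (not part of the thread): sample random parameter values from the constraint region and record the sign of the expression at each sample. The function expr here is only a placeholder and would need to be replaced by the actual derivative from the question.

import numpy as np

rng = np.random.default_rng(0)

def expr(qH, qL, b, p, rho, lam, alpha, m):
    return qL * b**m / (b**m + p**m) - p * alpha   # placeholder, not the real expression

signs = set()
for _ in range(10_000):
    p_ = rng.uniform(0.01, 1.0)
    b_ = rng.uniform(p_, 2.0)            # roughly enforce 100 > qH > qL > b > p > 0
    qL = rng.uniform(b_, 50.0)
    qH = rng.uniform(qL, 100.0)
    rho, alpha = rng.uniform(0, 1, size=2)
    lam = rng.uniform(0.01, 2.0)
    m = rng.uniform(1.0, 2.0)
    signs.add(np.sign(expr(qH, qL, b_, p_, rho, lam, alpha, m)))
print(signs)    # seeing both -1.0 and 1.0 would mean no definite sign on the sampled region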
Posted 1 month ago
Sorry, sir, I think I posted the wrong code about my question. And I posted the complete code about my question: D[qL b^m/(b^m + p^m) Exp[(qL b^m/(b^m + p^m) - b)/\[Lambda]]/( Exp[(qL b^m/(b^m + p^m) - b)/\[Lambda]] + ((1 - Exp[(qH b^m/(b^m + p^m) - b)/\[Lambda]]) (1 - Exp[(qL b^m/(b^m + p^m) - b)/\[Lambda]]))/( 1 - \[Alpha] Exp[( qH b^m/(b^m + p^m) - b)/\[Lambda]] - (1 - \[Alpha]) Exp[( qL b^m/(b^m + p^m) - b)/\[Lambda]])) + p, p] And I want to judge the sign of the outcome above. Thank you for your carefulness, otherwise I will never find my wrong.Please help me judge the sign of the outcome above. Thaaaaaaaanks!!
Also the new expression has no fixed sign, it appears: expr = D[qL b^ m/(b^m + p^m) Exp[(qL b^m/(b^m + p^m) - b)/\[Lambda]]/(Exp[(qL b^m/(b^m + p^m) - b)/\[Lambda]] + ((1 - Exp[(qH b^m/(b^m + p^m) - b)/\[Lambda]]) (1 - Exp[(qL b^m/(b^m + p^m) - b)/\[Lambda]]))/(1 - \[Alpha] Exp[(qH b^ m/(b^m + p^m) - b)/\[Lambda]] - (1 - \[Alpha]) Exp[(qL b^ m/(b^m + p^m) - b)/\[Lambda]])) + p, p] // Simplify; inst = FindInstance[ 100 > qH > qL > b > p > 0 && 0 < \[Rho] < 1 && 2 > \[Lambda] > 0 && 0 < \[Alpha] < 1 && 2 > m > 1 && Element[qH | qL | b | p, Integers], {qH, qL, b, p, \[Rho], \[Lambda], \[Alpha], m}, 2] N[Simplify[expr /. inst], 5]
https://www.rankersadda.com/forum/896/21/--The-cost-of-19-kg-Apples-is-Rs.-1158,-that-of-17-kg-Tomatoes-is-Rs.-595,-and-that-of-13-kg-Oranges-is-Rs.-949.-What-is-the-total-cost-of-11-kg-Apples,-7-kg-To
# The cost of 19 kg Apples is Rs. 1158, that of 17 kg Tomatoes is Rs. 595, and that of 13 kg Oranges is Rs. 949. What is the total cost of 11 kg Apples, 7 kg Tomatoes and 3 kg Oranges?
[ A ] Rs. 1876
[ B ] Rs. 1366
[ C ] Rs. 1230
[ D ] Rs. 1780
[ E ] None of these
Answer: Option B
https://nikopj.github.io/projects/dcdl/
Nikola Janjušević
# The Convolutional Dictionary Learning Network
• May 2022:
• April 2022: CDLNet accepted into the IEEE Open Journal of Signal Processing!
## Project Overview
Sparse representation is a proven and powerful prior for natural images and restoration tasks (such as denoising, deblurring, in-painting, etc.) involving them. More than simply finding these representations, learning an over-complete dictionary for sparse signal representation from degraded signals has been shown to be an effective model. Furthermore, the convolutional dictionary learning (CDL) model seeks to represent the global signal via a translated local dictionary. This offers a more holistic approach for natural image representation compared to inherently suboptimal patch-processing methods. The dictionary learning problem is traditionally solved by iteratively computing sparse codes (representations) for a fixed dictionary and subsequently updating the dictionary accordingly.
In this project, we explore an interpretable Deep Learning architecture for image restoration based on an unrolled CDL model. More specifically, we leverage the LISTA framework to obtain approximate convolutional sparse codes, followed by a synthesis from a convolutional dictionary. We call this architecture CDLNet. The network is trained in a task-driven fashion, amenable to any linear inverse-problem. We believe that interpretable network construction will yield greater insight and novel capabilities.
## Generalization in Denoising
The derivation of the CDLNet architecture allows us to understand the subband thresholds, $\tau^{(k)} \in \mathbb R_+^M$, of the soft-thresholding operator as implicitly being a function of the input noise-level $\sigma$. We thus propose an affine parameterization,
$\tau^{(k)} = \tau_0^{(k)} + \tau_1^{(k)}\sigma$
to explicitly model noise-level adaptivity within each layer of the network. This is in stark contrast to the implicitly defined noise-level adaptivity of common black-box neural networks, which either account for noise only via training on a noise range (ex. DnCNN), or additionally present the estimated input noise-level as an input to the network (ex. FFDNet). As shown in the figures below, CDLNet's explicitly defined noise-level adaptivity allows for near-perfect generalization outside its training range, whereas the black box models either fail or introduce artifacts.
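A minimal sketch of this idea in NumPy (not the authors' code; the shapes and names are assumptions): each subband k gets a threshold $\tau_0^{(k)} + \tau_1^{(k)}\sigma$ before soft-thresholding.

import numpy as np

def soft_threshold(z, tau):
    # elementwise soft-thresholding (proximal operator of the l1 norm)
    return np.sign(z) * np.maximum(np.abs(z) - tau, 0.0)

def noise_adaptive_layer(z, tau0, tau1, sigma):
    # z: (M, H, W) subband coefficients; tau0, tau1: (M,) learned per-subband parameters
    tau = tau0 + tau1 * sigma                    # affine in the input noise level sigma
    return soft_threshold(z, tau[:, None, None])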
This generalization characteristic is further demonstrated for the CDLNet architecture extended to color image denoising, joint-denoising-and-demosaicing, and unsupervised learning of denoising.
### Joint Denoising and Demosaicing
CDLNet extended to the JDD task is able to achieve state-of-the-art results with a single model, out-performing black box neural networks.
The results of this section are detailed in, "CDLNet: Noise-Adaptive Convolutional Dictionary Learning Network for Blind Denoising and Demosaicing".
See our supplementary material with animations of filters, thresholds, and sparse codes across layers.
## Gabor is Enough!
Gabor filters (Gaussian $\times$ cosine) have a long history in neural networks. Cells in the cat visual cortex have been shown to have Gabor-like frequency responses, and the learned filters at the early stages of the AlexNet classifier are noted to be Gabor-like as well. We noticed that the trained filters of CDLNet also appear Gabor-like and wondered, "Can Gabor-like be replaced with Gabor?". And so we parameterized each and every filter of CDLNet as a 2D real Gabor function,
$g(\mathbf{x}; \phi) = \alpha e^{-\lVert \mathbf{a} \circ \mathbf{x} \rVert_2^2} \cos(\mathbf{\omega}_0^T \mathbf{x} + \psi),$
with $\phi = (\alpha, \mathbf{a}, \mathbf{\omega}_0, \psi) \in \mathbb R^6$ as learnable parameters. We also considered mixture of Gabor (MoG) filters, i.e. each filter as sum of Gabor filters. We call this network GDLNet. Surprisingly, with just MoG=1, GDLNet can achieve competitive results with state-of-the-art CNN denoisers (see table below).
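For concreteness, here is a small NumPy sketch of the 2D real Gabor parameterization above (the grid size and sample parameter values are mine, not from the paper):

import numpy as np

def gabor_filter(size, alpha, a, omega0, psi):
    # g(x) = alpha * exp(-||a o x||_2^2) * cos(omega0^T x + psi), sampled on a size x size grid
    r = (size - 1) / 2
    coords = np.arange(size) - r
    xs = np.stack(np.meshgrid(coords, coords), axis=-1)            # (size, size, 2) grid of x
    envelope = np.exp(-np.sum((np.asarray(a) * xs) ** 2, axis=-1))
    carrier = np.cos(xs @ np.asarray(omega0) + psi)
    return alpha * envelope * carrier

f = gabor_filter(7, alpha=1.0, a=(0.3, 0.3), omega0=(1.0, 0.5), psi=0.0)
print(f.shape)    # (7, 7)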
Our results suggest that the mechanisms behind low-level image processing neural networks need not be more complex than real Gabor filterbanks. Check out our preprint, "Gabor is Enough: Interpretable Deep Denoising with a Gabor Synthesis Dictionary Prior", for more results and information.
https://bytepawn.com/ab-testing-and-the-chi-squared-test.html
A/B testing and the Chi-squared test
Marton Trencseni - Fri 28 February 2020 - Data
Introduction
In an earlier post, I wrote about A/B testing conversion data with the Z-test. The $\chi^2$ test is a more general test for conversion data, because it can work with multiple conversion events and multiple funnels being tested (A/B/C/D/..).
The code shown below is up on Github.
Before we go on, let’s use a $\chi^2$ test for a simple A/B conversion use-case and compare the results with the Z-test and the t-test (both two-tailed). First, a Monte Carlo algorithm to simulate A/B tests:
import numpy as np

def choice(ps):
    return np.random.choice(len(ps), p=ps)

def simulate_abtest(funnels, N):
    traffic_split = [x[1] for x in funnels]
    observations = np.zeros([len(funnels), len(funnels[0][0])])
    for _ in range(N):
        which_funnel = choice(traffic_split)
        funnel_outcome = choice(funnels[which_funnel][0])
        observations[which_funnel][funnel_outcome] += 1
    return observations
Next, let’s pretend we’re running a conversion A/B test that’s not working (A and B conversions the same) on $N=10,000$, and use the statsmodels and scipy.stats libraries to run all three tests on the results:
from scipy.stats import chi2_contingency
from statsmodels.stats.weightstats import ztest, ttest_ind

funnels = [
[[0.80, 0.20], 0.6], # the first vector element is the actual outcomes,
[[0.80, 0.20], 0.4], # the second is the traffic split
]
N = 10*1000
observations = simulate_abtest(funnels, N)
raw_data = int(observations[0][0]) * [1] + int(observations[0][1]) * [0], int(observations[1][0]) * [1] + int(observations[1][1]) * [0]
print('Observations:\n', observations)
ch = chi2_contingency(observations, correction=False)
print('Chi-sq p = %.3f' % ch[1])
zt = ztest(*raw_data)
print('Z-test p = %.3f' % zt[1])
tt = ttest_ind(*raw_data)
print('t-test p = %.3f' % tt[1])
All three yield the same p value:
Observations:
[[4825. 1183.]
[3211. 781.]]
Chi-sq p = 0.876
Z-test p = 0.876
t-test p = 0.876 # all three are the same
We’re not surprised that the Z-test and the t-test yield identical results. We saw in the previous post that above $N=100$ the t-distribution is a normal distribution, and the two tests yield the same p value. For this simple case (two outcomes: conversion or no conversion, and two funnels: A and B), the $\chi^2$ test is also identical to the Z-test, with the same limitation (assumes the Central Limit Theorem, so not reliable below $N=100$ ).
The $\chi^2$ test
For A/B testing, we can think of the $\chi^2$ test as a generalized Z-test. Generalized in the following sense:
• each of the funnels can have multiple outcomes, not just Conversion and No Conversion. Eg. imagine a funnel with multiple drop-off events and multiple conversions such as buying a Monthly or an Annual license (all of them mutually exclusive).
• we can test more than 2 funnel versions at once, so we can run an A/B/C/D.. test.
Let’s see this in action, eg. we have 3 outcomes and 4 funnels:
funnels = [
[[0.80, 0.10, 0.10], 0.6], # the first vector is the actual outcomes,
[[0.80, 0.10, 0.10], 0.2], # the second is the traffic split
[[0.79, 0.11, 0.10], 0.1],
[[0.70, 0.20, 0.10], 0.1],
]
N = 10*1000
observations = simulate_abtest(funnels, N)
print('Observations:\n', observations)
ch = chi2_contingency(observations, correction=False)
print('Chi-sq p = %.3f' % ch[1])
Prints something like:
Observations:
[[4748. 595. 573.]
[1657. 197. 231.]
[ 807. 98. 103.]
[ 710. 195. 86.]]
Chi-sq p = 0.000
What’s happening under the hood? Using the above 4x3 outcome table, first we construct the contingency table. We simply add the numbers row-wise and column-wise and write them at the right and bottom. These are called the marginals.
Then, for each observation cell, we calculate the expected value. Expected here means according to the null hypothesis, which is that all funnels are the same. Our best guess for the null hypothesis is given by the blended bottom numbers: $7922/10000$ for No Conversion, $1085/10000$ for Monthly, etc. So for Funnel A, which has 5916 samples, our expected No Conversion number is $5916*7922/10000=4686.6$. We do this for each cell. Then we subtract the actual observation from the expected, square it, and divide by the expected, like $(4748-4686.6)^2/4686.6=0.8$. We do this for each cell, and sum up the numbers to get the $\chi^2$ test statistic. We then look this up in a $\chi^2$ distribution table to get a p value. We have to use a degree of freedom of $k=(F-1)(C-1)$, where $F$ is the number of funnels and $C$ is the number of conversion events; $F=4, C=3$ above.
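For concreteness, here is the expected-count arithmetic for the top-left cell of that table (a small sketch using the numbers printed above):

expected = 5916 * 7922 / 10000        # expected No Conversions in Funnel A under the null: 4686.66
contribution = (4748 - expected) ** 2 / expected
print(round(expected, 1), round(contribution, 2))    # prints: 4686.7 0.8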
Implementation
This is so simple, we can implement it ourselves:
from scipy.stats import chi2

def chi_squared(observations):
    row_marginals = np.sum(observations, axis=1)
    col_marginals = np.sum(observations, axis=0)
    N = np.sum(observations)
    chisq = 0
    for i in range(len(row_marginals)):
        for j in range(len(col_marginals)):
            expected = row_marginals[i] * col_marginals[j] / N
            chisq += (observations[i][j] - expected)**2 / expected
    dof = (len(row_marginals) - 1) * (len(col_marginals) - 1)
    p_value = 1.0 - chi2(dof).cdf(chisq)
    return (chisq, p_value)
We can verify we calculate the same test statistic and p value as the library function:
funnels = [
[[0.80, 0.10, 0.10], 0.6], # the first vector is the actual outcomes,
[[0.80, 0.10, 0.10], 0.2], # the second is the traffic split
[[0.80, 0.10, 0.10], 0.1],
[[0.80, 0.10, 0.10], 0.1],
]
N = 10*1000
observations = simulate_abtest(funnels, N)
print('Observations:\n', observations)
ch_scipy = chi2_contingency(observations, correction=False)
ch_our = chi_squared(observations)
print('Scipy chi-sq test statistic = %.3f' % ch_scipy[0])
print('Our chi-sq test statistic = %.3f' % ch_our[0])
print('Scipy chi-sq p = %.3f' % ch_scipy[1])
print('Our chi-sq p = %.3f' % ch_our[1])
Prints something like:
Observations:
[[4846. 594. 591.]
[1628. 188. 171.]
[ 767. 100. 98.]
[ 824. 84. 109.]]
Scipy chi-sq test statistic = 7.324
Our chi-sq test statistic = 7.324
Scipy chi-sq p = 0.292
Our chi-sq p = 0.292
Intuition
The intuition behind the $\chi^2$ is this: if the null hypothesis is true, then all rows should follow the same conversion ratios, which is also the marginal conversion ratio vector. When we subtract the expected number from the actual number (and normalize), similar to the Z-test, we get a standard normal variable. Since we have multiple cells, we need to add these variables to get an overall statistic, but we don’t want positive and negative fluctuations to cancel out. Hence we first square, and then add. So the $\chi^2$ is a sum of squares of standard normals. This is exactly what the $\chi^2$ distribution is: a $\chi^2$ distribution with degree of freedom $k$ is the result of adding up $k$ independent standard normal variables squared. In the subsequent discussion we will get more intuition why the degree of freedom is $k=(F-1)(C-1)$. Note that the standard normal goes from $-\infty$ to $\infty$, but the $\chi^2$, being its square, goes from $0$ to $\infty$. This has implications for one-tailed vs two-tailed testing.
In the 2x2 case, why is this exactly the same as the z-test? The answer is simple: in the 2x2 case, the degree of freedom is 1, the $\chi^2$ test is doing exactly the same thing as a 2-sided Z-test, and in fact the $\chi^2$ test statistic in this case is $z^2$. We can see this numerically:
funnels = [
[[0.80, 0.20], 0.6], # the first vector is the actual outcomes,
[[0.80, 0.20], 0.4], # the second is the traffic split
]
N = 10*1000
observations = simulate_abtest(funnels, N)
raw_data = int(observations[0][0]) * [1] + int(observations[0][1]) * [0], int(observations[1][0]) * [1] + int(observations[1][1]) * [0]
print('Observations:\n', observations)
ch = chi2_contingency(observations, correction=False)
print('Chi-sq test statistic = %.3f' % ch[0])
print('Chi-sq p = %.3f' % ch[1])
zt = ztest(*raw_data)
print('Z-test z = %.3f' % zt[0])
print('Z-test z^2 = %.3f' % zt[0]**2)
print('Z-test p = %.3f' % zt[1])
Prints something like:
Observations:
[[4836. 1193.]
[3147. 824.]]
Chi-sq test statistic = 1.378
Chi-sq p = 0.240
Z-test z = 1.174
Z-test z^2 = 1.378 # z^2 is the same as the Chi-sq test statistic
Z-test p = 0.240
If you compare the $\chi^2$ formulas with the Z-test formulas from the previous post, it works out that $z^2 = \chi^2$.
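To sketch the algebra (assuming the pooled-variance form of the two-proportion Z-test from that post): with $x_A$ conversions out of $n_A$ samples in A, $x_B$ out of $n_B$ in B, per-funnel rates $\hat{p}_A = x_A/n_A$, $\hat{p}_B = x_B/n_B$ and pooled rate $\hat{p} = (x_A+x_B)/(n_A+n_B)$, the Z statistic is $z = (\hat{p}_A - \hat{p}_B) / \sqrt{\hat{p}(1-\hat{p})(1/n_A + 1/n_B)}$. Squaring it and rewriting the expected counts of the 2x2 table as $E_{ij} = \mathrm{row}_i \cdot \mathrm{col}_j / N$ turns the expression, after some algebra, into $\sum_{i,j} (O_{ij}-E_{ij})^2/E_{ij}$, which is exactly the $\chi^2$ statistic.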
One-tailed vs two-tailed
In the case of the Z-test (and t-test), we have a choice between a one-tailed and a two-tailed test, depending on if we want the test to go off for deviations in just one or both directions. In the case of the $\chi^2$ test, we do not have a choice:
• the $\chi^2$ distribution is asymmetric (from $0$ to $\infty$), so technically the $\chi^2$ test is always one-tailed
• however, since it’s the square of normals, both tails of the normal are folded together, so it corresponds to a two-tailed Z-test [in the 2x2 case]
• this is not just a mathematical artefact; when dealing with multiple conversion events, there is no such thing as “positive” and “negative” directions; for example, in a 2x3 conversion example, if the baseline is $80-10-10$ for No Conversion - Monthly - Annual, and our test comes out at $79-11-10$ or $79-10-11$, which is “positive” and “negative”? (If both are “positive”, then merge the conversions, and do a 2x2 one-tailed Z-test (or t-test)).
We can check this simply:
funnels = [
[[0.80, 0.20], 0.6], # the first vector is the actual outcomes,
[[0.80, 0.20], 0.4], # the second is the traffic split
]
N = 10*1000
observations = simulate_abtest(funnels, N)
raw_data = int(observations[0][0]) * [1] + int(observations[0][1]) * [0], int(observations[1][0]) * [1] + int(observations[1][1]) * [0]
print('Observations:\n', observations)
ch = chi2_contingency(observations, correction=False)
print('Chi-sq p = %.3f' % ch[1])
zt = ztest(*raw_data, alternative='two-sided')
print('Z-test p (Two-tailed) = %.3f' % zt[1])
tt = ttest_ind(*raw_data, alternative='two-sided')
print('t-test p (Two-tailed) = %.3f' % tt[1])
zt = ztest(*raw_data, alternative='larger')
print('Z-test p (One-tailed) = %.3f' % zt[1])
tt = ttest_ind(*raw_data, alternative='larger')
print('t-test p (One-tailed) = %.3f' % tt[1])
Prints something like:
Observations:
[[4780. 1181.]
[3243. 796.]]
Chi-sq p = 0.898 # the first three are the same
Z-test p (Two-tailed) = 0.898
t-test p (Two-tailed) = 0.898
Z-test p (One-tailed) = 0.551 # these are different
t-test p (One-tailed) = 0.551
Degrees of freedom
When we're doing hypothesis testing, we're computing a p value. The p value is the probability that we'd get the measured outcome, or more extreme outcomes, assuming the null hypothesis is true. There is one caveat here, hidden in the "or more extreme": the statistically correct way to evaluate this "more extreme" part is by keeping both row and column marginals fixed. I.e., what are all the ways (and their probabilities) in which we can put different numbers in the contingency table while keeping the marginals fixed? Although the $\chi^2$ is not calculating this probability directly, thanks to the CLT, this is in fact what it's approximating in the $N \rightarrow \infty$ limit. And given an $F \times C$ table with the marginals fixed, you can only change $(F-1)(C-1)$ numbers freely ("degrees of freedom"); the rest are fixed by the constraint that the rows and columns have to add up to the marginals.
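As a quick sketch of this counting argument (with made-up numbers for a 2x3 table): once the marginals are fixed, choosing the $(F-1)(C-1) = 2$ top-left cells forces every remaining cell.

import numpy as np

row_marginals = np.array([6000, 4000])
col_marginals = np.array([7900, 1100, 1000])
free_cells = np.array([4750, 650])                       # the only numbers we may pick freely
top_row = np.append(free_cells, row_marginals[0] - free_cells.sum())
bottom_row = col_marginals - top_row                     # forced by the column marginals
table = np.vstack([top_row, bottom_row])
print(table)                                             # rows and columns still sum to the marginals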
In the next post, I will talk about Fisher's exact test, which will give more intuition about this, because that test explicitly calculates this probability.
Conclusion: usage and limitations
Z-test. In the 2x2 case, the $\chi^2$ test yields exactly the same results as a two-tailed Z-test (or t-test).
Central Limit Theorem. Like the Z-test, we need a large enough sample size for the normal approximation to be correct. I would not be comfortable unless each cell in the contingency table has a count of at least $100$. See earlier post A/B Testing and the Central Limit Theorem.
Multiple funnels, multiple outcomes. Unlike the Z-test, the $\chi^2$ test can test multiple funnels and multiple outcomes at the same time.
One-tailed distribution. Unlike the Z-test, the $\chi^2$ test is directionless (technically one-tailed, but corresponds to the two-tailed Z-test in the 2x2 case).
Degrees of freedom. For a test with $F$ funnels and $C$ outcomes you have to use the $k=(F-1)(C-1)$ degree of freedom $\chi^2$ distribution to look up the p value.
https://math.stackexchange.com/questions/607960/question-regarding-limsup-of-a-sequence-of-sets-and-its-measure
# Question regarding Limsup of a sequence of sets and its measure.
Let $\left(X,\mathcal{F},\mu\right)$ be a measure space and suppose $\left\{ A_{n}\right\} _{n=1}^{\infty}$ is a sequence of sets such that $\mu\left(A_{n}\right)\geq\varepsilon$ for some $\varepsilon>0$ and for all $n\in\mathbb{N}$ . Is this contradictory to $\mu\left(\limsup\limits _{n\to\infty}A_{n}\right)=0$ ?
I've become accustomed to thinking of Limsup as the set of $x\in X$ that belong to $A_{n}$ for an infinite number of $n$. With that in mind I don't really see any reason why this should be a contradiction. Using the more formal definition of $${\displaystyle \limsup_{n\to\infty}A_{n}=\bigcap_{n=1}^{\infty}\bigcup_{k\geq n}A_{k}}$$ also doesn't seem to provide an obvious contradiction. Also, does it make any difference if the measure is a finite measure?
Well, it does. Note first that for every $n$, $$\mu\left(\bigcup_{k\geqslant n}A_k\right)\geqslant\mu(A_n)\geqslant\varepsilon,$$ and deduce from this that $$\mu\left(\limsup_{n\to\infty}A_n\right)\geqslant\varepsilon,$$ under the dominating condition that $$\mu\left(\bigcup_{n\geqslant 1}A_n\right)$$ is finite. This condition is always satisfied when the measure $\mu$ is finite.
Recall that the measure of the union of a nondecreasing sequence of measurable sets is always the limit of the measures of the sets but that the measure of the intersection of a nonincreasing sequence of measurable sets is guaranteed to be the limit of the measures of the sets only when one of the sets has finite measure. A counterexample to keep in mind: $A_n=[n,+\infty)$ in $(\mathbb R,\mathcal B(\mathbb R))$ with the Lebesgue measure. Or, equivalently, $A_n=\{k\in\mathbb N\mid k\geqslant n\}$ in $(\mathbb N,2^\mathbb N)$ with the counting measure.
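To spell out the deduction: write $B_n=\bigcup_{k\geqslant n}A_k$, so that $B_1\supseteq B_2\supseteq\cdots$ and $\limsup\limits_{n\to\infty}A_n=\bigcap_{n=1}^{\infty}B_n$. If $\mu(B_1)=\mu\left(\bigcup_{n\geqslant 1}A_n\right)<\infty$, continuity from above gives $$\mu\left(\limsup_{n\to\infty}A_n\right)=\lim_{n\to\infty}\mu(B_n)\geqslant\varepsilon>0,$$ which does contradict $\mu\left(\limsup\limits_{n\to\infty}A_n\right)=0$.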
• $A_n = [n,\infty) \subset \mathbb{R}$. Unless you have a condition $$\mu\left(\bigcup_{n=k}^\infty A_n\right) < \infty$$ for some $k$, there is no contradiction. – Daniel Fischer Dec 15 '13 at 16:41
https://www.physicsforums.com/threads/line-integrals-in-cal-4.246583/
# Line Integrals in Cal 4
1. Jul 23, 2008
### hottytoddy
I guess this is a lousy first post, and I apologize, but I am desperate and would appreciate all the help I can get more than anyone could imagine! This is the first problem of the section. I've been sick and haven't made it to any lectures covering this particular material and my final is TOMORROW! Please please help me if you can!
1. The problem statement, all variables and given/known data
Evaluate Sc (x + y) ds where C is the straight line segment x=t, y=(1-t), z=0, from (0,1,0) to (1,0,0).
2. Relevant equations
3. The attempt at a solution
Kind of both of those together. My text says to integrate a continuous function f(x,y,z) over a curve C:
1. Find a smooth parametrization of C
r(t) = g(t)i + h(t)j + k(t)k, t in the interval [a,b]
2. Evaluate the integral as
Sc f(x,y,z) ds = bSa f (g(t),h(t),k(t))|v(t)| dt
Can someone at least help me get started and try to figure it out myself?
The italicized S's are supposed to be integral signs!
Last edited: Jul 23, 2008
2. Jul 23, 2008
### hottytoddy
Okay, I know from the back of the book the answer is root2. This is what I did, and it got me the right answer, but if someone could check to make sure I did it right, that'd be awesome. Thank you!!!
I set the interval from [0,1] because the lowest x and lowest y are 0 and the highest of each is 1. So I have 0S1 (x + y) ds
C says x is equal to t and y = 1-t so I plugged those in and got t + (1 - t) which is just 1.
For r(t), I said it was ti + tj because there are no powers to the t's and that's as simple as it gets. Then, |v(t)| = |i + j| which is root2
I wound up with 0S1 f(t,t) root2 dt
0S1 (t + t) (root2) dt
Pull root2 out (constant) and get root2 0S1 2t dt
Then integrate and get root2 [ t^2 ] evaluated from 1 to 0
root2 [1^2 - 0^2] is root2 * 1 = root2
and that's what I was supposed to get. Did I do it right though?
3. Jul 23, 2008
### hottytoddy
The next problem is evaluate CS (xy + y + z) ds along the curve r(t)=2ti+tj+(2-2t)k, t in [0,1].
I got |v(t)| = |2i + j - 2k| = 3
0S1 2t + t + 2 - 2t 3 dt
0S1 t + 2 3 dt
3 0S1 t + 2 dt
3[ (t^2)/2 + 2t] eval. from 1 to 0
3[ 1/2 + 2] = 3[ 5/2] = 15/2
I did something wrong though, I think, because the book says the solution is 13/2
4. Jul 23, 2008
### hottytoddy
I tried that last problem again and substituted x,y,z with 2t, t, and (2-2t), respectively. My integral was then
0S1 2t^2 - t + 2 3 dt
3 0S1 2t^2 - t + 2 dt
3[ (2t^3)/3 - t^2/2 + 2t ] from 0 to 1
3[ 2/3 - 1/2 + 2 -0] = 13/2
I got it right, but again, did I do it right?
5. Jul 23, 2008
### hottytoddy
Next problem.
Find the line integral of f(x,y,z) = x + y + z over the straight line segment from (1,2,3) to (0,-1,1). Solution is 3*root14
I got a root14 by finding the distance between the two points and I set my t interval from 0 to root14, but I'm thinking that was wrong.
I had 0Sroot14 t + t + t root3 dt
I set r(t) = ti + tj+ tk and |v(t)| = root3
I worked it out and got (21 times root3)/2
... what gives?
6. Jul 23, 2008
### Defennder
Now it is right.
7. Jul 23, 2008
### Defennder
That root14 corresponds to $$\left| \frac{d\textbf{r}}{dt} \right|$$ in the formula for a scalar line integral.
That's not correct. What is the formula for evaluating a scalar line integral?
The expression for r(t) as you have given would correspond to a point passing through the origin. Does the line through (1,2,3) to (0,-1,1) pass through the origin?
8. Jul 23, 2008
### hottytoddy
Honestly, I have no clue. I wish you were still online, though. I've pretty much given up and I was planning on not studying anymore for this exam since I need a 95 on it to get a C. Your reply has given me hope!
I can't see what you posted after "that root14 corresponds to".... I get a red x.
The line does not go through the origin. I still don't understand how to get the interval. I'm teaching myself over here, and quickly losing faith in my abilities.
The formula for evaluating a scalar line integral... again, no clue.
I know I need A LOT of help over here, but I'm willing to pull an all nighter if someone (or several someones) will help me. My exam is scheduled for 8.5 hours from now.
9. Jul 24, 2008
### hottytoddy
Okay. I have a theory on that last one. At least, I get the right answer. Again, is my technique correct?
f(x,y,z)= x+y+z
r(t) = ti + 3tj + 2tk (the distance between corresponding coordinates in the points given.)
v(t) = i + 3j + 2k |v(t)| = root14 (YAY)
I'm assuming t between 0 and 1 because it works. Not sure why those numbers, but they work so I used them.
0S1 t + 3t + 2t root14 dt
root14 0S1 6t dt
root14 [3t^2] from 0 to 1, which comes out to 3root14
What I need to know though, is why 0 to 1? The test problems are taken directly from the book, so I essentially just need to memorize the problems, but working through something is so much easier when you understand it! This is the third problem in the first section of new material. There are 28 more problems that I need to get through tonight. And they keep getting harder!
10. Jul 24, 2008
### Defennder
What do your notes and textbook say about evaluating scalar line integrals? Keep in mind that you're looking for the part on line integrals of scalar functions. Look it up in the book index and read that relevant section if necessary. I've done last minute studying before and I would say it's always better to do last minute preparation than none at all, although it's undoubtedly better to be prepared beforehand.
11. Jul 24, 2008
### Defennder
You did get the right answer, but I don't think your expression for r(t) is correct. Bear in mind that r(t) is the vector equation of the line through those 2 points specified by the question. What is the vector equation of the line through any 2 given points?
12. Jul 24, 2008
### hottytoddy
All it says that makes any sense whatsoever is the snippet I put in the first post.
1. Find a smooth parametrization of C
r(t) = g(t)i + h(t)j + k(t)k, t in the interval [a,b]
2. Evaluate the integral as
Sc f(x,y,z) ds = bSa f (g(t),h(t),k(t))|v(t)| dt
13. Jul 24, 2008
### hottytoddy
Is it as simple as (x1-x0)ti + (y1-y0)ti + (z1-z0)ti? So, (0-1) for x, (-1-2) for y, and (1-3) for z? Or am I backwards? I've been awake for nearly 24 hours already. My brain isn't functioning fully
14. Jul 24, 2008
### hottytoddy
Logically, I want to say r(t) = ti + 2tj + 3tk because one of the points is (1,2,3), but there's no way it's that simple... It has to have 1,2, and 3 for coefficients, because that gives root14.
15. Jul 24, 2008
### hottytoddy
The next one is even worse. I'm going to post it before I work on it to give you some time to look at it also. You are helping me so much! I wish there was some way I could repay you!
Integrate f(x,y,z) = (x + y + z)/(x^2 + y^2 + z^2) over the path r(t) = ti + tj + tk, 0 < a <= t <= b.
16. Jul 24, 2008
### hottytoddy
Okay, that last one wasn't so bad once I got into it.
r(t) is given. From that |v(t)| = root3
Interval is given: [a,b]
Sub t into each x, y, and z. Reduce the equation and you get:
root3 aSb t^(-1) dt
root3 [ln b - ln a] = root3 ln(b/a) which is the answer.
Is my technique right?
17. Jul 24, 2008
### hottytoddy
Next problem. Three more and then next section.
Integrate f over the given curve:
f(x,y) = x^3/y
C: y = x^2/2
0 <= x <= 2
I don't even know where to start! There's no example like this in the text. I have nothing to follow.
18. Jul 24, 2008
### Defennder
No, as said earlier, the parametrisation you need is the vector equation of a line. What does a path parametrisation represent? It maps values of a parameter, say t, to a vector which extends from the origin to the point of the path it corresponds to. So for example, the line y=2x+c may be parametrised as $$\textbf{r}(t) = t\textbf{i} + (2t+c) \textbf{j}$$. There are special parametric representations as well without using either x or y as the varying parameter. To take an example, the parametrisation of a circle x^2 + y^2 = r^2 could be $$\textbf{r}(t) = r \cos t \textbf{i} + r \sin t \textbf{j} \ \mbox{where} \ t \in [0,2\pi]$$
Yes it looks correct. In this case it appears you know how to evaluate a line integral. Your problem it appears thus far is understanding how to do a parametrisation of a path. Just look that up in any textbook.
Well, as before start by first coming up with a parametrisation of the curve y=x^2/2. It should be a vector function of the form r(t). The statement of the question also gives you a hint as to what you should use as a parameter.
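A sketch of where that hint leads, taking $x$ itself as the parameter $t$: $$\textbf{r}(t) = t\,\textbf{i} + \frac{t^2}{2}\,\textbf{j}, \quad 0\le t\le 2, \qquad \left|\frac{d\textbf{r}}{dt}\right| = \sqrt{1+t^2},$$ so the integral becomes $$\int_0^2 \frac{t^3}{t^2/2}\,\sqrt{1+t^2}\,dt = \int_0^2 2t\,\sqrt{1+t^2}\,dt.$$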
19. Jul 24, 2008
### hottytoddy
Okay. I think I have r(t), but it isn't pretty.
r(t)= (1-t)i + (2-3t)j + (3-2t)k
v(t)= -1i -3j -2k
|v| = root(1+9+4) = root14 (AHA!)
Is that right? And t goes from 0 to 1 because if you plug in 0 you get one endpoint of the segment and if you plug in 1 you get the other endpoint.
20. Jul 24, 2008
### Defennder
Yes that's right.
https://physics.stackexchange.com/questions/396451/difference-in-symmetries-of-second-quantized-and-first-quantized-hamiltonian?noredirect=1
Difference in symmetries of Second quantized and First quantized Hamiltonian [duplicate]
The three discrete symmetries Time reversal ($T$), Particle-hole ($C$) and Chiral symmetry ($S$) all commute with (i.e. are genuine symmetries of) the full second quantized Hamiltonian, $$\hat{H}=\sum_{A,B}\Psi_A^\dagger \mathcal{H}_{AB}\Psi_B.$$ That is, $$[\hat{T},\hat{H}]=[\hat{C},\hat{H}]=[\hat{S},\hat{H}]=0.\;\;\;\;\;\;\; (1)$$ It is then claimed that $\hat{C}$ and $\hat{S}$ are not real symmetries in the "classical" sense of the first quantized Hamiltonian $\mathcal{H}$. In fact, the first quantized Hamiltonian obeys,
$$[T,\mathcal{H}]=\{C,\mathcal{H}\}=\{S,\mathcal{H}\}=0. \;\;\;\;\;\; (2)$$ How can I prove that imposing Eq. (1) implies Eq. (2)?
https://www.gradesaver.com/textbooks/math/prealgebra/prealgebra-7th-edition/chapter-4-section-4-6-complex-fractions-and-review-of-order-of-operations-exercise-set-page-286/84
Prealgebra (7th Edition)
$3\frac{3}{5}$
When $x=\frac{3}{4}$ and $y=-\frac{4}{7}$:
$\frac{\frac{9}{14}}{x+y}=\frac{9}{14}\div(x+y)$
$=\frac{9}{14}\div\left[\frac{3}{4}+\left(-\frac{4}{7}\right)\right]$
$=\frac{9}{14}\div\left(\frac{3}{4}-\frac{4}{7}\right)$
$=\frac{9}{14}\div\left(\frac{21}{28}-\frac{16}{28}\right)$
$=\frac{9}{14}\div\frac{5}{28}$
$=\frac{9}{14}\times\frac{28}{5}$
$=\frac{9}{1}\times\frac{2}{5}$
$=\frac{18}{5}$
$=3\frac{3}{5}$
http://docs.atlas.oreilly.com/writing.html
# Writing in the Editor
Atlas gives you a fully functional editor for writing and formatting your content. To access the editor, just click a file from the project dashboard, and that file will open in editing mode. Atlas has two editor modes—visual and code—and supports four markup languages: HTML, Markdown, AsciiDoc, and DocBook (read more about the editor modes in Editing Environments). The visual editor is only available to people using HTML, and includes a number of editing and formatting tools in the toolbar.
# The Toolbar
Many of the standard formatting tools you've grown to know and love in other word processors are available in the visual editor's toolbar. From left to right, you've got the following options:
Bold
Bolds your selected text.
Italic
Italicizes your selected text.
Add a link
Converts your selected text to a link. To link to a location within your project, instead of typing the full path, simply type the id of the element, preceded by a # sign, like this: #buildsettings—no file name is necessary. When you build, Atlas will make sure to correctly convert all of those link destinations to include the correct file name.
Add an index entry
An index is a collection of key words, concepts, and phrases that are found throughout your project. To create an index, add index terms to your project text by placing your cursor where the term is discussed, and then clicking this button; you'll get a dialog box where you can add an index term, and optionally a secondary and tertiary term as well. When you build, Atlas will collect all of these terms into an alphabetical list linked to the tag locations that you specified.
Add a footnote
Inserts a footnote at the current cursor position. Place footnotes where you want the marker to appear in the output, and the Atlas toolchain will take care of floating the footnotes to the bottom of the page, adding the numbered markers, etc.
###### Note
Inline styles, such as bold and italic, can't currently be added to footnotes via the visual editor, but are supported and can be added in the code editor. You can read more about footnote markup in HTMLBook here.
Define a section ID
Adds a unique ID to the current section for use in styling with CSS or anchoring a cross reference.
Insert a cross reference
Creates a cross reference to a previously created section ID. In digital output formats, this will appear as a clickable link.
Convert to or insert a numbered list
If you have some text selected, that text will be converted to a numbered list. If not, Atlas will insert an empty numbered list item to get your list started.
Convert to or insert a bulleted list
If you have some text selected, that text will be converted to a bulleted list. If not, Atlas will insert an empty bulleted list item to get your list started.
Insert a table
This button opens up a menu for you to set up a table that will be inserted at the current location of your cursor. You can set the number of columns and rows, and tell Atlas whether your table should include heading rows.
Embed media
Allows you to insert interactive web content, such as video or audio, into your document.
Insert a code block or format inline code
If you have some text selected, Atlas will format it as inline code. If not, Atlas will insert a new placeholder code block for you to type in. If you hover over that code block, you'll see a little </> icon at bottom center. Click this icon to tell Atlas what code language this block uses, so Atlas will know how to apply syntax highlighting when you build.
Equation editor
Allows you to insert beautifully formatted math content into your document. The equation editor supports LaTeX formatting.
Comment
Add comments to the document for your co-collaborators. Comments won't be output when you build the project.
Paste from Word or paste plain text
This menu helps with copying over pieces from a Word document or from a plain text file. Paste the text you are copying inside the dialog box that appears, and Atlas will do its best to transfer over your formatting (or strip it out, if you're pasting plain text).
###### Warning
This option is meant to be used only for short blocks of text, not for entire documents. If you've got an entire file that you want to put in Atlas, you should have that file converted to HTMLBook by a third-party vendor--or take advantage of Atlas' Conversion Services--and then add that converted HTML to Atlas instead.
The Insert... menu
The Insert... menu is your one-stop shop for adding predefined text blocks to your document. You can add smaller blocks like notes, warnings, and sidebars, or higher-level blocks like chapters and sections (see Using and Adding Sections for more on the latter). To add a block, place your cursor where you want the new block to appear, and then choose the kind of block you want from the drop-down menu. Atlas will insert the pre-formatted block along with some placeholder text that you can replace.
###### Warning
Not all elements are allowed everywhere—for example, you can't insert a chapter inside a sidebar. If an element isn't allowed, it'll be grayed out in the menu.
# Using and Adding Sections
The concept of sections can be a little tricky to wrap your head around. A section is a block of text that adheres to a specific theme or goes together in some way. The most popular kind of section is a chapter, and if you're writing a novel, this is probably the only kind of section you'll need.
However, more complex projects like reference manuals or documentation will likely want to subdivide chapters further into different levels of sections, as a way of organizing the content into more easily digestible chunks. For example, this chapter of documentation (the one you're reading right now!) is split into multiple level 1 sections, and some of those level 1 sections are split into level 2 sections, and so on.
Atlas depends on nested sections in your project in order to create a hierarchical table of contents, both visible to readers, and for devices like the iPad or Kindle to use in their internal navigation systems. This means that if you want those features to work correctly when you build your project, you'll need to use nested sections to structure your content.
# For AsciiDoc and Markdown Users
If you're writing in AsciiDoc or Markdown, then Atlas will take care of adding correctly nested sections based on your headings when you build, with no extra steps needed from you. Proceed normally!
Sections in Atlas can go up to 5 levels deep: Chapter > Section 1 > Section 2 > Section 3 > Section 4.
Sections in the visual editor are clearly delineated by dotted borders—the more borders, the deeper the section is nested. For example, if there are three border lines around a paragraph, then that paragraph is within a section that is three levels deep—most likely a Section 2 within a Section 1 within a Chapter.
The special markup is all happening behind the scenes in the code editor; all you really need to know is that you should use nested sections for your content (as opposed to free-standing headings).
The steps for adding a new section vary depending on what kind of textual element you're currently editing. Generally, you'll follow one of these two paths:
1. Insert a new paragraph by pressing Enter on your keyboard.
2. With your cursor in that new, empty paragraph, go to the Insert... menu, and choose Section.
Atlas will automatically insert a section at the correct nesting level; for example, if you're inside a Section 1, Atlas will insert a Section 2; if you're inside a Section 2, Atlas will insert a Section 3, and so on.
The steps above work great when you're just working with plain text paragraphs, but sometimes you'll need to add sections after more complex blocks like notes, sidebars, code blocks, and so on. If you just press Enter, you'll get a new paragraph inside that block element. To insert a new paragraph after the block, do the following:
1. Hover your cursor over the block until you see the ¶ at the bottom right of the block.
2. Click the ¶ to insert a new paragraph after the block.
3. With your cursor in that new, empty paragraph, go to the Insert... menu, and choose Section.
###### Warning
If a block element is located at the end of a section, the editor may not allow you to click the ¶ to add a new paragraph after that block. To get around this, you'll need to dive into the HTML behind the scenes:
1. Open the Code Editor and navigate to the block element you wanted to add a paragraph after.
2. Find the closing tag for that block element. (It'll usually be something like this: </div> , </aside> , </pre> , </ul> , or </ol> . The / means it's a closing tag, as opposed to an opening tag, which would not have a slash.)
3. Press Enter immediately after that closing tag, and insert the following: <p>Some text.</p>
4. Save and switch back to the Visual Editor. You should now see a new paragraph containing the text "Some text." after the original block element. You can delete the text and insert a section in that paragraph, or you can press the Enter key either at the beginning or end of that paragraph to insert another new, blank paragraph and add a section there (as described above).
# Adding Images
Adding images is super easy—simply open the file navigator on your PC, and then drag the image file into your Atlas project. Atlas will automatically upload the image into your repository, and insert the image in the location you dropped it. Images will get stored in the same folder as the open file by default.
Alternatively, you can add images to your repository by dragging them into the Atlas file navigator in the left sidebar; insert them into your project by placing your cursor in the correct location within your project, and then clicking the image file name in the file list. This method gives you finer control over the organization of the files in your repository (for example, if you want all images to be stored in a subfolder called "images").
## Supported File Types
The Visual Editor will display the following image file types: png, jpg/jpeg, and gif (both static and animated).
However, not all build outputs support all file types. For example, an animated gif will not render in a PDF. Here's a break-down of what files you should use, depending on your target output:
If you want to build your project to PDF only:
Use png, jpeg/jpg, static gif files, and svg files
If you want to build your project to EPUB or MOBI:
Limit yourself to png, jpeg/jpg, and static gif files
If you want to build your project to HTML only:
Use any image file that is supported on the Web
We generally recommend using high-resolution images that will work on many devices. For example, a 300 dpi png will look good both in a PDF and on a hi-res iPad.
# Cross-References and Internal Links
A common component of many reference-type projects is cross-references: links from one part of a document to another. These links might point to a section just a few paragraphs away, but could also point to a location in a completely different file.
Atlas has built-in cross-reference support, though it does take a couple steps. To add a cross-reference:
Add an id attribute to the element that you want to be the destination of the link. Place the cursor in the section you want to add an ID to, and click the ID button in the toolbar. In the box that opens, type an id attribute with any name you like (something descriptive and memorable is probably best--and remember not to use spaces in the id name!). The tool will auto-populate with a suggested ID based on the title of the section.
Now go back to the place where you are adding the cross reference. Select the text that you want to turn into a link, and click the cross reference button in the toolbar. In the dialog box that pops up, type the id name that you created in the previous step, as shown in the following figure.
That's it! You can move files and sections around while you're writing, and Atlas will update the link when you build to make sure it points to the correct file in your project.
Atlas will also automatically generate link text for you. This is great for referencing chapter numbers or section titles that might change. Insert a cross reference into a document without selecting any text, and when you build, Atlas will automatically add the title of the section, or the chapter number, depending on what type of element you are referencing.
For example, if you want to point to a chapter in a sentence like this:
Learn more about cross references in ???.
Add a cross reference in place of the question marks. Then, when you build, the sentence will look like this:
Learn more about cross references in Chapter 4.
If you later add in a new chapter before Chapter 4, the cross reference will automatically update the next time you build:
Learn more about cross references in Chapter 5.
# Inserting Code Blocks
Adding a code block to a document in Atlas is simple: place the cursor on a new line and click the code block button in the toolbar.
The Atlas book-building toolchain supports syntax highlighting via Pygments. This allows your code sections to render in final formats with color coding appropriate for the programming language displayed. To take advantage of syntax highlighting, you must specify the language of the code used within each code block.
## Setting the Language
To set the appropriate language for a code block, hover your mouse over the block and then click on the </> bubble that appears.
Then, type the name of the appropriate language in the box.
###### Note
Atlas accepts valid Pygments short names in this box. These are case-sensitive and generally lowercase.
### List of Supported Languages
Atlas supports syntax highlighting for all languages in version 1.6 of the Pygments library. Below is a list of some commonly used programming languages with the appropriate Pygments short name in parentheses. For a full list of available lexers, visit the Pygments site.
• C (c)
• CSS (css)
• HTML (html)
• Java (java)
• JavaScript (js, javascript)
• Perl (perl, pl)
• PHP (php, php3, php4, php5)
• Python (python, py, sage)
• Ruby (rb, ruby, duby)
• SQL (sql)
• XML (xml)
# Indexing
Indexing content with Atlas is similar to indexing in Adobe InDesign. Place index markers throughout the text near the relevant content, and a full back-of-the-book index will be created when you build.
# Placing Markers
We recommend that index markers not be placed inside code blocks or section headings, because anchors in those elements can cause oddities in the build output. Otherwise, markers should not have any adverse effects, so place them close to the relevant content!
To insert an index marker, place your cursor and click on the index entry button in the toolbar. You'll see a dialogue where you can add primary and secondary terms, specify sort-as labels, etc.
Click OK, and a bookmark icon will appear in your text. Click on this placeholder icon to edit or remove your indexterm, but don't worry: these icons won't render in your final formats when you build.
###### Note
Do not create a new paragraph to add an index marker; doing so will create a blank paragraph space in build outputs.
Once you've added index markers throughout your project, add an index placeholder tag (<section data-type="index"> </section>) to your content where you'd like the index to appear in your final build. This tag can either be in its own file or added to the same file as other content, but must be a root tag (read more here).
Finally, on the Configure tab, check the "Generate Index" checkbox for the formats you want to build, and build! Indexes in EPUB, Mobi, and Web PDF will have clickable links, and PDF versions will display the appropriate page numbers for each term.
## Ranges
Indexterm ranges can be inserted by using the "ID" and "Range Startref" fields in the visual editor dialogue:
1. At the beginning of the range, insert an index tag as usual, and add a unique ID to the ID field.
2. At the end of the range, enter the same index term and in the "Range Startref" field, enter the ID that you typed in above.
Here's what the underlying markup looks like:
<a contenteditable="false" data-primary="Hello World example" data-type="indexterm" id="HWex"> </a>
<a contenteditable="false" data-primary="Hello World example" data-startref="HWex" data-type="indexterm"> </a>
## Indexing Locally
When indexing via the Atlas UI, you don't need to worry about the syntax of the markup—Atlas takes care of it for you. However, if you prefer to insert and edit index markers outside of the Atlas UI, you'll need to use software designed for editing plain text. Popular word processing software like Microsoft Word or Notepad will introduce extraneous markup into Atlas files and should not be used.
Following are some examples of appropriate plain-text editors:
http://dtai.cs.kuleuven.be/CHR/biblio/Year/2007.complete.html
Publications of year 2007
Books and proceedings
1. V. Dahl and I. Niemelä, editors. ICLP '07: Proc. 23rd Intl. Conf. Logic Programming, volume 4670 of Lecture Notes in Computer Science, September 2007. Springer-Verlag. [doi:10.1007/978-3-540-74610-2]
@proceedings{piclp07,
title = ICLP07l,
booktitle = ICLP07,
editor = {Dahl, V. and Niemel\"a, I.},
series = LNCS,
volume = 4670,
doi = {10.1007/978-3-540-74610-2},
publisher = SV,
year = 2007,
month = sep,
location = {Porto, Portugal},
city = {Porto, Portugal},
}
2. K. Djelloul, G. J. Duck, and M. Sulzmann, editors. CHR '07: Proc. 4th Workshop on Constraint Handling Rules, September 2007. Keyword(s): CHR 2007.
@proceedings{pchr07,
title = CHR07l,
booktitle = CHR07,
year = {2007},
month = sep,
location = {Porto, Portugal},
city = {Porto, Portugal},
editor = {K. Djelloul and G. J. Duck and M. Sulzmann},
keywords = {CHR 2007},
}
3. M. Leuschel and A. Podelski, editors. PPDP '07: Proc. 9th Intl. Conf. Princ. Pract. Declarative Programming, July 2007. ACM Press. ISBN: 978-1-59593-769-8.
@proceedings{pppdp07,
title = PPDP07l,
booktitle = PPDP07,
editor = {M. Leuschel and A. Podelski},
publisher = ACM,
year = {2007},
month = jul,
location = {Wroc\l{}aw, Poland},
city = {Wroc\l{}aw, Poland},
isbn = {978-1-59593-769-8},
}
Articles in journal, book chapters
1. Marco Alberti, Federico Chesani, Davide Daolio, Marco Gavanelli, Evelina Lamma, Paola Mello, and Paolo Torroni. Specification and Verification of Agent Interaction Protocols in a Logic-based System. Scalable Computing: Practice and Experience, 8(1):1-13, March 2007.
Abstract:
A number of information systems can be described as a set of interacting entities, which must follow interaction protocols. These protocols determine the behaviour and the properties of the overall system, hence it is of the uttermost importance that the entities behave in a conformant manner. A typical case is that of multi-agent systems, composed of a plurality of agents without a centralized control. Compliance to protocols can be hardwired in agent programs; however, this requires that only "certified" agents interact. In open systems, composed of autonomous and heterogeneous entities whose internal structure is, in general, not accessible (open agent societies being, again, a prominent example) interaction protocols should be specified in terms of the observable behaviour, and compliance should be verified by an external entity. In this paper, we propose a Java-Prolog-CHR system for verification of compliance of computational entities to protocols specified in a logic-based formalism (Social Integrity Constraints). We also show the application of the formalism and the system to the specification and verification of three different scenarios: two specifications show the feasibility of our approach in the context of Multi Agent Systems (FIPA Contract-Net Protocol and Semi-Open societies), while a third specification applies to the specification of a lower level protocol (Open-Connection phase of the TCP protocol).
@article{alberti_et_al_agent_interaction_scpe07,
author = {Marco Alberti and Federico Chesani and Davide Daolio and Marco Gavanelli and Evelina Lamma and Paola Mello and Paolo Torroni},
title = {Specification and Verification of Agent Interaction Protocols in a Logic-based System},
journal = {Scalable Computing: Practice and Experience},
year = 2007,
volume = 8,
number = 1,
pages = {1--13},
month = mar,
abstract = { A number of information systems can be described as a set of interacting entities, which must follow interaction protocols. These protocols determine the behaviour and the properties of the overall system, hence it is of the uttermost importance that the entities behave in a conformant manner.
A typical case is that of multi-agent systems, composed of a plurality of agents without a centralized control. Compliance to protocols can be hardwired in agent programs; however, this requires that only ``certified'' agents interact. In open systems, composed of autonomous and heterogeneous entities whose internal structure is, in general, not accessible (open agent societies being, again, a prominent example) interaction protocols should be specified in terms of the \textit{observable} behaviour, and compliance should be verified by an external entity.
In this paper, we propose a Java-Prolog-CHR system for verification of compliance of computational entities to protocols specified in a logic-based formalism (\textit{Social Integrity Constraints}). We also show the application of the formalism and the system to the specification and verification of three different scenarios: two specifications show the feasibility of our approach in the context of Multi Agent Systems (FIPA Contract-Net Protocol and Semi-Open societies), while a third specification applies to the specification of a lower level protocol (Open-Connection phase of the TCP protocol). },
publisher = {West University of Timisoara},
}
2. Mathieu Boespflug. TaiChi:how to check your types with serenity. The Monad.Reader, 9:17-31, November 2007. Keyword(s): type systems.
@article{boespflug_taichi_monadreader07,
author = {Mathieu Boespflug},
title = {{TaiChi:how to check your types with serenity}},
keywords = {type systems},
pages = {17--31},
volume = 9,
year = 2007,
month = nov
}
3. Jacques Robin, Jairson Vitorino, and Armin Wolf. Constraint Programming Architectures: Review and a New Proposal. J. Universal Computer Science, 13(6):701-720, 2007. [WWW]
Abstract:
Most automated reasoning tasks with practical applications can be automatically reformulated into a constraint solving task. A constraint programming platform can thus act as a unique, underlying engine to be reused for multiple automated reasoning tasks in intelligent agents and systems. We identify six key requirements for such platform: expressive task modeling language, rapid solving method customization and combination, adaptive solving method, user-friendly solution explanation, efficient execution, and seamless integration within larger systems and practical applications. We then propose a novel, model-driven, component and rule-based architecture for such a platform that better satisfies as a whole this set of requirements than those of currently available platforms.
@article{robin_vitorino_wolf_CPA_proposal_jucs07,
author = {Jacques Robin and Jairson Vitorino and Armin Wolf},
title = {Constraint Programming Architectures: Review and a New Proposal},
journal = j-jucs,
volume = 13,
number = 6,
year = 2007,
pages = {701--720},
abstract = {Most automated reasoning tasks with practical applications can be automatically reformulated into a constraint solving task. A constraint programming platform can thus act as a unique, underlying engine to be reused for multiple automated reasoning tasks in intelligent agents and systems. We identify six key requirements for such platform: expressive task modeling language, rapid solving method customization and combination, adaptive solving method, user-friendly solution explanation, efficient execution, and seamless integration within larger systems and practical applications. We then propose a novel, model-driven, component and rule-based architecture for such a platform that better satisfies as a whole this set of requirements than those of currently available platforms.},
url = {http://www.jucs.org/jucs_13_6/constraint_programming_architectures_review},
}
4. Beata Sarna-Starosta, R. E. Kurt Stirewalt, and Laura K. Dillon. A Model-Based Design-for-Verification Approach to Checking for Deadlock in Multi-Threaded Applications. Intl. Journal of Softw. Engin. and Knowl. Engin., 17(2):207-230, 2007. [doi:10.1142/S0218194007003197] Keyword(s): applications, testing.
Abstract:
This paper explores an approach to design for verification in systems built atop a middleware framework which separates synchronization concerns from the "core-functional logic" of a program. The framework is based on a language-independent compositional model of synchronization contracts, called Szumo, which integrates well with popular OO design artifacts and provides strong guarantees of non-interference for a class of strictly exclusive systems. An approach for extracting models from Szumo design artifacts and analyzing the generated models to detect deadlocks is described. A key decision was to use Constraint Handling Rules to express the semantics of synchronization contracts, which allowed a transparent model of the implementation logic.
@article{ss_stirewalt_dillon_checking_deadlock_ijseke07,
author = {Beata Sarna-Starosta and R. E. Kurt Stirewalt and Laura K. Dillon},
title = {A Model-Based Design-for-Verification Approach to Checking for Deadlock in Multi-Threaded Applications},
keywords = {applications,testing},
journal = {Intl.\ Journal of Softw.\ Engin.\ and Knowl.\ Engin.},
volume = 17,
number = 2,
year = 2007,
pages = {207--230},
abstract = { This paper explores an approach to design for verification in systems built atop a middleware framework which separates synchronization concerns from the ``core-functional logic'' of a program. The framework is based on a language-independent compositional model of synchronization contracts, called Szumo, which integrates well with popular OO design artifacts and provides strong guarantees of non-interference for a class of strictly exclusive systems. An approach for extracting models from Szumo design artifacts and analyzing the generated models to detect deadlocks is described. A key decision was to use Constraint Handling Rules to express the semantics of synchronization contracts, which allowed a transparent model of the implementation logic. },
doi = {10.1142/S0218194007003197},
}
5. Martin Sulzmann, Gregory J. Duck, Simon Peyton-Jones, and Peter J. Stuckey. Understanding functional dependencies via Constraint Handling Rules. J. Functional Prog., 17(1):83-129, 2007. [doi:10.1017/S0956796806006137] Keyword(s): type systems.
@article{sulz_duck_peyton_stuck_func_dep_via_chr_fp07,
author = {Martin Sulzmann and Gregory J. Duck and Simon Peyton-Jones and Peter J. Stuckey},
title = {Understanding functional dependencies via {C}onstraint {H}andling {R}ules},
keywords = {type systems},
journal = {J. Functional Prog.},
volume = {17},
number = {1},
year = {2007},
pages = {83--129},
doi = {10.1017/S0956796806006137},
publisher = CUP,
}
Conference articles
1. Hariolf Betz. Relating Coloured Petri Nets to Constraint Handling Rules. In K. Djelloul, G. J. Duck, and M. Sulzmann, editors, CHR '07: Proc. 4th Workshop on Constraint Handling Rules, pages 33-47, September 2007. [PDF] Keyword(s): CHR 2007, related formalisms.
Abstract:
Constraint Handling Rules (CHR) is a declarative rule-based concurrent committed-choice programming language. Petri nets are a well-known formalism for modeling and analysis of concurrent processes. We aim to develop a framework to exploit Petri nets as a tool for the modeling and analysis of CHR programs. In this paper, we show that place/transition nets can easily be embedded into CHR and we develop a translation of a significant segment of CHR into coloured Petri nets (CPN).
@inproceedings{betz_petri_nets_chr07,
author = {Hariolf Betz},
title = {Relating Coloured {Petri} Nets to {C}onstraint {H}andling {R}ules},
pages = {33--47},
crossref = {pchr07},
abstract = { Constraint Handling Rules (CHR) is a declarative rule-based concurrent committed-choice programming language. Petri nets are a well-known formalism for modeling and analysis of concurrent processes. We aim to develop a framework to exploit Petri nets as a tool for the modeling and analysis of CHR programs. In this paper, we show that place/transition nets can easily be embedded into CHR and we develop a translation of a significant segment of CHR into coloured Petri nets (CPN). },
pdf = PAPERSHOME # {chr2007/betz_petri_nets_chr07.pdf},
keywords = {CHR 2007, related formalisms},
}
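The abstract above views both place/transition nets and CHR as rewriting of multisets. Purely as a hedged illustration of that shared view (this is not the paper's actual encoding, and all names below are invented), firing a transition can be simulated on a multiset of tokens; the CHR counterpart would be a simplification rule that replaces the consumed multiset by the produced one.

```python
from collections import Counter

def fire(store, consumes, produces):
    """Fire a place/transition-net transition on a multiset of tokens.

    Returns the updated store, or None when the transition is not enabled
    (the store lacks some consumed token). The CHR analogue would be a
    simplification rule whose head is `consumes` and whose body is `produces`.
    """
    if any(store[place] < n for place, n in consumes.items()):
        return None                  # not enough tokens: transition/rule not applicable
    new_store = store.copy()
    new_store.subtract(consumes)     # remove the consumed tokens (matched head)
    new_store.update(produces)       # add the produced tokens (rule body)
    return +new_store                # drop entries whose count has reached zero

# Toy net: a transition consumes two 'a' tokens and produces one 'b',
# mirroring a CHR simplification rule of the shape  t1 @ a, a <=> b.
store = Counter(a=3)
store = fire(store, consumes=Counter(a=2), produces=Counter(b=1))
print(store)   # Counter({'a': 1, 'b': 1})
```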
2. Hariolf Betz and Thom Frühwirth. A Linear-Logic Semantics for Constraint Handling Rules with Disjunction. In K. Djelloul, G. J. Duck, and M. Sulzmann, editors, CHR '07: Proc. 4th Workshop on Constraint Handling Rules, pages 17-31, September 2007. [PDF] Keyword(s): CHR 2007, semantics, linear logic.
Abstract:
We motivate and develop a linear logic declarative semantics for CHR∨, an extension of the CHR programming language that integrates concurrent committed choice with backtrack search and a predefined underlying constraint handler. We show that our semantics maps each of these aspects of the language to a distinct aspect of linear logic. We show how we can use this semantics to reason about derivations in CHR∨ and we present strong theorems concerning its soundness and completeness.
@inproceedings{betz_fru_linear_logic_chr_disj_chr07,
author = {Hariolf Betz and Thom Fr{\"u}hwirth},
title = {A Linear-Logic Semantics for {C}onstraint {H}andling {R}ules with Disjunction},
pages = {17--31},
crossref = {pchr07},
abstract = { We motivate and develop a linear logic declarative semantics for CHR$^\vee$, an extension of the CHR programming language that integrates concurrent committed choice with backtrack search and a predefined underlying constraint handler. We show that our semantics maps each of these aspects of the language to a distinct aspect of linear logic. We show how we can use this semantics to reason about derivations in CHR$^\vee$ and we present strong theorems concerning its soundness and completeness. },
pdf = PAPERSHOME # {chr2007/betz_fru_linear_logic_chr_disj_chr07.pdf},
keywords = {CHR 2007, semantics, linear logic},
}
3. Henning Christiansen and Christian Theil Have. From Use Cases to UML Class Diagrams using Logic Grammars and Constraints. In RANLP '07: Proc. Intl. Conf. Recent Adv. Nat. Lang. Processing, pages 128-132, September 2007. Keyword(s): applications, linguistics.
@inproceedings{christ_have_use_cases_to_uml_ranlp07,
title = {From Use Cases to {UML} Class Diagrams using Logic Grammars and Constraints},
author = {Henning Christiansen and Have, Christian Theil},
booktitle = {RANLP '07: Proc.\ Intl.\ Conf.\ Recent Adv.\ Nat.\ Lang.\ Processing},
month = sep,
year = 2007,
location = {Borovets, Bulgaria},
city = {Borovets, Bulgaria},
keywords = {applications, linguistics},
pages = {128--132},
}
4. Verónica Dahl and Baohua Gu. A CHRG Analysis of ambiguity in Biological Texts. In CSLP '07: Proc. 4th Intl. Workshop on Constraints and Language Processing, August 2007. Note: Extended Abstract. Keyword(s): linguistics, applications.
@inproceedings{dahl_gu_chrg_amb_bio_texts_cslp07,
author = {Ver{\'o}nica Dahl and Baohua Gu},
title = {A {CHRG} Analysis of ambiguity in Biological Texts},
note = {Extended Abstract},
booktitle = {CSLP '07: Proc.\ 4th Intl.\ Workshop on Constraints and Language Processing},
location = {Roskilde, Denmark},
city = {Roskilde, Denmark},
month = aug,
keywords = {linguistics, applications},
year = 2007,
}
5. Leslie De Koninck, Tom Schrijvers, and Bart Demoen. The Correspondence Between the Logical Algorithms Language and CHR. In V. Dahl and I. Niemelä, editors, ICLP '07: Proc. 23rd Intl. Conf. Logic Programming, volume 4670 of Lecture Notes in Computer Science, pages 209-223, September 2007. Springer-Verlag. [doi:10.1007/978-3-540-74610-2_15] Keyword(s): related formalisms.
@inproceedings{dekoninck_schr_demoen_la-chr_iclp07,
author = {De Koninck, Leslie and Schrijvers, Tom and Demoen, Bart},
title = {The Correspondence Between the {L}ogical {A}lgorithms Language and {CHR}},
pages = {209--223},
keywords = {related formalisms},
doi = {10.1007/978-3-540-74610-2_15},
crossref = {piclp07}
}
6. Leslie De Koninck, Tom Schrijvers, and Bart Demoen. User-definable Rule Priorities for CHR. In M. Leuschel and A. Podelski, editors, PPDP '07: Proc. 9th Intl. Conf. Princ. Pract. Declarative Programming, pages 25-36, July 2007. ACM Press. ISBN: 978-1-59593-769-8. [doi:10.1145/1273920.1273924] Keyword(s): priorities.
@inproceedings{dekoninck_schr_demoen_chrrp_ppdp07,
author = {De Koninck, Leslie and Schrijvers, Tom and Demoen, Bart},
title = {User-definable Rule Priorities for {CHR}},
keywords = {priorities},
pages = {25--36},
doi = {10.1145/1273920.1273924},
crossref = {pppdp07}
}
7. Leslie De Koninck and Jon Sneyers. Join Ordering for Constraint Handling Rules. In K. Djelloul, G. J. Duck, and M. Sulzmann, editors, CHR '07: Proc. 4th Workshop on Constraint Handling Rules, pages 107-121, September 2007. [PDF] Keyword(s): optimizing compilation, CHR 2007.
Abstract:
Join ordering is the problem of finding cost optimal execution plans for matching multi-headed rules. In the context of Constraint Handling Rules, this topic has received limited attention so far, even though it is of great importance for efficient CHR execution. We present a formal cost model for joins and investigate the possibility of join optimization at runtime. We propose some heuristic approximations of the parameters of this cost model, for both the static and dynamic case. We discuss an O(n log n) optimization algorithm for the special case of acyclic join graphs. However, in general, join order optimization is an NP-complete problem. Finally, we identify some classes of cyclic join graphs that can be reduced to acyclic ones.
@inproceedings{dekoninck_sney_join_ordering_chr07,
author = {De Koninck, Leslie and Jon Sneyers},
title = {Join Ordering for {C}onstraint {H}andling {R}ules},
keywords = {optimizing compilation, CHR 2007},
pages = {107--121},
crossref = {pchr07},
abstract = { Join ordering is the problem of finding cost optimal execution plans for matching multi-headed rules. In the context of Constraint Handling Rules, this topic has received limited attention so far, even though it is of great importance for efficient CHR execution. We present a formal cost model for joins and investigate the possibility of join optimization at runtime. We propose some heuristic approximations of the parameters of this cost model, for both the static and dynamic case. We discuss an O(n log n) optimization algorithm for the special case of acyclic join graphs. However, in general, join order optimization is an NP-complete problem. Finally, we identify some classes of cyclic join graphs that can be reduced to acyclic ones. },
pdf = PAPERSHOME # {chr2007/dekoninck_sney_join_ordering_chr07.pdf},
}
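The join-ordering entry above concerns the order in which the partner constraints of a multi-headed rule are matched. The toy sketch below only illustrates why the order matters and what a greedy selectivity heuristic looks like; it is not De Koninck and Sneyers' cost model, and the constraint names and cardinality estimates are invented.

```python
from math import prod

def join_cost(order, size):
    """Rough cost of matching partner constraints in the given order:
    the sum over all prefixes of the product of the candidate-set sizes."""
    return sum(prod(size[c] for c in order[:k]) for k in range(1, len(order) + 1))

def greedy_join_order(size):
    """Toy heuristic: try the partner with the fewest candidate constraints first."""
    return sorted(size, key=size.get)

# Invented cardinality estimates for the partners of a multi-headed rule.
size = {"edge/2": 10_000, "node/1": 500, "colour/2": 3}
good = greedy_join_order(size)
bad = ["edge/2", "node/1", "colour/2"]
print(good, join_cost(good, size))   # selective partners first: far cheaper join
print(bad, join_cost(bad, size))
```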
8. Khalil Djelloul, Thi-Bich-Hanh Dao, and Thom Frühwirth. Toward a first-order extension of Prolog's unification using CHR: a CHR first-order constraint solver over finite or infinite trees. In SAC '07: Proc. 2007 ACM Symp. Applied computing, pages 58-64, 2007. ACM Press. ISBN: 1-59593-480-4.
@inproceedings{djelloul_dao_fru_1st_order_extension_prolog_unification_sac07,
author = {Khalil Djelloul and Thi-Bich-Hanh Dao and Thom Fr{\"u}hwirth},
title = {Toward a first-order extension of {P}rolog's unification using {CHR}: a {CHR} first-order constraint solver over finite or infinite trees},
booktitle = {SAC '07: Proc.\ 2007 ACM Symp.\ Applied computing},
year = {2007},
isbn = {1-59593-480-4},
pages = {58--64},
location = {Seoul, Korea},
city = {Seoul, Korea},
publisher = ACM,
}
9. Gregory J. Duck, Peter J. Stuckey, and Martin Sulzmann. Observable Confluence for Constraint Handling Rules. In V. Dahl and I. Niemelä, editors, ICLP '07: Proc. 23rd Intl. Conf. Logic Programming, volume 4670 of Lecture Notes in Computer Science, pages 224-239, September 2007. Springer-Verlag. [doi:10.1007/978-3-540-74610-2_16] Keyword(s): confluence.
@inproceedings{duck_stuck_sulz_observable_confluence_iclp07,
author = {Gregory J. Duck and Peter J. Stuckey and Martin Sulzmann},
title = {Observable Confluence for {C}onstraint {H}andling {R}ules},
pages = {224--239},
doi = {10.1007/978-3-540-74610-2_16},
keywords = {confluence},
crossref = {piclp07}
}
10. Thom Frühwirth. Description Logic and Rules the CHR Way. In K. Djelloul, G. J. Duck, and M. Sulzmann, editors, CHR '07: Proc. 4th Workshop on Constraint Handling Rules, pages 49-61, September 2007. Note: Extended Abstract. [PDF] Keyword(s): related formalisms, CHR 2007.
Abstract:
The challenges of the Semantic Web endeavour in knowledge representation and reasoning prompted a wealth of research in combining description logic (DL) as ontology languages (e.g. OWL) with logic programming for rule-based reasoning. General issues of combining and integrating formalisms have to be faced such as the type of combination, conceptual simplicity and tractability. Even though constraint-based programming has a tradition of tackling these questions, constraint-based rule formalisms such as constraint logic programming, concurrent constraint programming, constraint databases and constraint handling rules (CHR) have not explicitly been considered for combination with DL yet. The same holds for concurrency, which is an essential characteristic of the internet, but to the best of our knowledge has not been related to DL so far. Since CHR is a very expressive declarative concurrent constraint-based programming language with optimal performance guarantee and other interesting properties, we explore in this speculative paper what a CHR-based approach would look like in comparison to recent approaches for integrating OWL and rules.
@inproceedings{fru_description_logic_chr07,
author = {Thom Fr{\"u}hwirth},
title = {Description Logic and Rules the {CHR} Way},
pages = {49--61},
keywords = {related formalisms, CHR 2007},
crossref = {pchr07},
abstract = { The challenges of the Semantic Web endeavour in knowledge representation and reasoning prompted a wealth of research in combining description logic (DL) as ontology languages (e.g. OWL) with logic programming for rule-based reasoning. General issues of combining and integrating formalisms have to be faced such as the type of combination, conceptual simplicity and tractability. Even though constraint-based programming has a tradition of tackling these questions, constraint-based rule formalisms such as constraint logic programming, concurrent constraint programming, constraint databases and constraint handling rules (CHR) have not explicitly been considered for combination with DL yet. The same holds for concurrency, which is an essential characteristic of the internet, but to the best of our knowledge has not been related to DL so far. Since CHR is a very expressive declarative concurrent constraint-based programming language with optimal performance guarantee and other interesting properties, we explore in this speculative paper what a CHR-based approach would look like in comparison to recent approaches for integrating OWL and rules. },
pdf = PAPERSHOME # {chr2007/fru_description_logic_chr07.pdf},
note = {Extended Abstract},
}
11. Rémy Haemmerlé and François Fages. Abstract Critical Pairs and Confluence of Arbitrary Binary Relations. In RTA '07: Proc. 18th Intl. Conf. Term Rewriting and Applications, volume 4533 of Lecture Notes in Computer Science, June 2007. Springer-Verlag. [doi:10.1007/978-3-540-73449-9_17] Keyword(s): confluence.
Abstract:
In a seminal paper, Huet introduced abstract properties of term rewriting systems, and the confluence analysis of terminating term rewriting systems by critical pairs computation. In this paper, we provide an abstract notion of critical pair for arbitrary binary relations and context operators. We show how this notion applies to the confluence analysis of various transition systems, ranging from classical term rewriting systems to production rules with constraints and partial control strategies, such as the Constraint Handling Rules language CHR. Interestingly, we show in all these cases that some classical critical pairs can be disregarded. The crux of these analyses is the ability to compute critical pairs between states built with general context operators, on which a bounded, not necessarily well-founded, ordering is assumed.
@inproceedings{haemm_fages_abstract_critical_pairs_rta07,
author = {R{\'e}my Haemmerl{\'e} and Fran{\c c}ois Fages},
title = {Abstract Critical Pairs and Confluence of Arbitrary Binary Relations},
booktitle = {RTA '07: Proc.\ 18th Intl.\ Conf.\ Term Rewriting and Applications},
location = {Paris, France},
city = {Paris, France},
year = 2007,
month = jun,
publisher = SV,
series = LNCS,
volume = 4533,
keywords = {confluence},
doi = {10.1007/978-3-540-73449-9_17},
abstract = {In a seminal paper, Huet introduced abstract properties of term rewriting systems, and the confluence analysis of terminating term rewriting systems by critical pairs computation. In this paper, we provide an abstract notion of critical pair for arbitrary binary relations and context operators. We show how this notion applies to the confluence analysis of various transition systems, ranging from classical term rewriting systems to production rules with constraints and partial control strategies, such as the Constraint Handling Rules language CHR. Interestingly, we show in all these cases that some classical critical pairs can be disregarded. The crux of these analyses is the ability to compute critical pairs between states built with general context operators, on which a bounded, not necessarily well-founded, ordering is assumed.},
}
12. Ben Krause and Tim Wahls. jmle: A Tool for Executing JML Specifications via Constraint Programming. In Formal Methods: Applications and Technology, volume 4346 of Lecture Notes in Computer Science, pages 293-296, 2007. Springer-Verlag. [doi:10.1007/978-3-540-70952-7_19]
Abstract:
Formal specifications are more useful and easier to develop if they are executable. In this work, we describe a system for executing specifications written in the Java Modeling Language (JML) by translating them to constraint programs, which are then executed via the Java Constraint Kit (JCK). Our system can execute specifications written at a high level of abstraction, and the generated constraint programs are Java implementations of the translated specifications. Hence, they can be called directly from ordinary Java code.
@inproceedings{krause_wahls_jmle_fmics06,
author = {Ben Krause and Tim Wahls},
title = {jmle: A Tool for Executing JML Specifications via Constraint Programming},
booktitle = {Formal Methods: Applications and Technology},
series = LNCS,
volume = 4346,
year = 2007,
abstract = { Formal specifications are more useful and easier to develop if they are executable. In this work, we describe a system for executing specifications written in the Java Modeling Language (JML) by translating them to constraint programs, which are then executed via the Java Constraint Kit (JCK). Our system can execute specifications written at a high level of abstraction, and the generated constraint programs are Java implementations of the translated specifications. Hence, they can be called directly from ordinary Java code. },
pages = {293--296},
publisher = SV,
doi = {10.1007/978-3-540-70952-7_19},
}
13. Edmund S.L. Lam and Martin Sulzmann. A Concurrent Constraint Handling Rules Semantics and its Implementation with Software Transactional Memory. In DAMP '07: Proc. ACM SIGPLAN Workshop on Declarative Aspects of Multicore Programming, January 2007. ACM Press. [WWW] Keyword(s): parallelism.
@inproceedings{lam_sulz_concurrent_chr_damp07,
title = {A Concurrent {C}onstraint {H}andling {R}ules Semantics and its Implementation with Software Transactional Memory},
author = {Edmund S.L. Lam and Martin Sulzmann},
keywords = {parallelism},
booktitle = {DAMP '07: Proc.\ ACM SIGPLAN Workshop on Declarative Aspects of Multicore Programming},
month = jan,
year = 2007,
location = {Nice, France},
city = {Nice, France},
url = {http://taichi.ddns.comp.nus.edu.sg/taichiwiki/CCHR/},
publisher = ACM,
}
14. Martin Magnusson and Patrick Doherty. Deductive Planning with Temporal Constraints. In Eyal Amir, Vladimir Lifschitz, and Rob Miller, editors, Logical Formalizations of Commonsense Reasoning: Papers from the 2007 AAAI Spring Symposium, March 2007. AAAI Press.
@inproceedings{magnusson_doherty_deductive_planning_aaai07,
author = {Martin Magnusson and Patrick Doherty},
title = {Deductive Planning with Temporal Constraints},
editor = {Eyal Amir and Vladimir Lifschitz and Rob Miller},
booktitle = {Logical Formalizations of Commonsense Reasoning: Papers from the 2007 AAAI Spring Symposium},
location = {Stanford, California},
month = {March},
year = {2007},
publisher = {AAAI Press}
}
15. Julien Martin and François Fages. From Business Rules to Constraint Programs in Warehouse Management Systems. In Doctoral programme of the 13th Intl. Conf. on Princ. and Pract. of Constraint Programming, 2007. Keyword(s): related formalisms.
@inproceedings{martin_fages_business_rules_cpdc07,
author = {Julien Martin and Fran{\c c}ois Fages},
title = {From Business Rules to Constraint Programs in Warehouse Management Systems},
keywords = {related formalisms},
booktitle = {Doctoral programme of the 13th Intl.\ Conf.\ on Princ.\ and Pract.\ of Constraint Programming},
year = 2007,
}
16. Marc Meister. Concurrency of the preflow-push algorithm in Constraint Handling Rules. In CSCLP'07: Proc. 12th Intl. Workshop on Constraint Solving and Constraint Logic Programming, pages 160-169, 2007. Keyword(s): algorithms, parallelism.
@inproceedings{meister_preflowpush_csclp07,
author = {Marc Meister},
title = {Concurrency of the preflow-push algorithm in {C}onstraint {H}andling {R}ules},
keywords = {algorithms, parallelism},
booktitle = {CSCLP'07: Proc. 12th Intl. Workshop on Constraint Solving and Constraint Logic Programming},
location = {Rocquencourt, France},
city = {Rocquencourt, France},
pages = {160--169},
year = 2007,
}
17. Marc Meister, Khalil Djelloul, and Jacques Robin. A Unified Semantics for Constraint Handling Rules in Transaction Logic. In C. Baral, G. Brewka, and J. S. Schlipf, editors, LPNMR '07: Proc. 9th Intl. Conf. Logic Programming and Nonmonotonic Reasoning, volume 4483 of Lecture Notes in Computer Science, pages 201-213, May 2007. Springer-Verlag. [doi:10.1007/978-3-540-72200-7_18] Keyword(s): semantics.
@InProceedings{meister_djelloul_robin_transaction_logic_semantics_lpnmr07,
title = "A Unified Semantics for {C}onstraint {H}andling {R}ules in Transaction Logic",
author = "Marc Meister and Khalil Djelloul and Jacques Robin",
keywords = {semantics},
booktitle = "LPNMR '07: Proc.\ 9th Intl.\ Conf.\ Logic Programming and Nonmonotonic Reasoning",
location = {Tempe, AZ, USA},
city = {Tempe, AZ, USA},
month = may,
publisher = SV,
year = "2007",
volume = "4483",
editor = "C. Baral and G. Brewka and J. S. Schlipf",
pages = "201--213",
series = LNCS,
doi = "10.1007/978-3-540-72200-7_18",
}
18. Paolo Pilozzi, Tom Schrijvers, and Danny De Schreye. Proving termination of CHR in Prolog: A transformational approach. In WST '07: 9th Intl. Workshop on Termination, June 2007. Keyword(s): termination.
@inproceedings{pilozzi_schr_deschreye_termination_wst07,
author = {Paolo Pilozzi and Tom Schrijvers and Danny {De Schreye}},
title = { Proving termination of {CHR} in {Prolog}: A transformational approach },
booktitle = {WST '07: 9th Intl.\ Workshop on Termination},
month = jun,
year = 2007,
location = {Paris, France},
city = {Paris, France},
keywords = {termination}
}
19. Frank Raiser. Graph Transformation Systems in CHR. In V. Dahl and I. Niemelä, editors, ICLP '07: Proc. 23rd Intl. Conf. Logic Programming, volume 4670 of Lecture Notes in Computer Science, pages 240-254, September 2007. Springer-Verlag. [doi:10.1007/978-3-540-74610-2_17] Keyword(s): Graph Transformation Systems, related formalisms.
@inproceedings{raiser_graph_transformation_systems_iclp07,
author = {Frank Raiser},
title = {Graph Transformation Systems in {CHR}},
keywords = {Graph Transformation Systems, related formalisms},
pages = {240--254},
doi = {10.1007/978-3-540-74610-2_17},
crossref = {piclp07}
}
20. Frank Raiser and Paolo Tacchella. On Confluence of Non-terminating CHR Programs. In K. Djelloul, G. J. Duck, and M. Sulzmann, editors, CHR '07: Proc. 4th Workshop on Constraint Handling Rules, pages 63-76, September 2007. [PDF] Keyword(s): CHR 2007, confluence.
Abstract:
Confluence is an important property for any kind of rewrite system including CHR, which is a general-purpose declarative committed-choice language consisting of multi-headed guarded rules. CHR can yield a confluence problem, because of non-determinism in the choice of rules using the abstract semantics. Confluence in CHR is an ongoing research topic, because it provides numerous benefits for implementations. However, for non-terminating CHR programs confluence is generally undecidable. In this paper we apply the so-called Strong Church-Rosser property to CHR. This allows determination of confluence for a subset of non-terminating CHR programs.
@inproceedings{raiser_tacchella_confluence_non_terminating_chr07,
author = {Frank Raiser and Paolo Tacchella},
title = {On Confluence of Non-terminating {CHR} Programs},
pages = {63--76},
crossref = {pchr07},
abstract = { Confluence is an important property for any kind of rewrite system including CHR, which is a general-purpose declarative committed-choice language consisting of multi-headed guarded rules. CHR can yield a confluence problem, because of non-determinism in the choice of rules using the abstract semantics. Confluence in CHR is an ongoing research topic, because it provides numerous benefits for implementations. However, for non-terminating CHR programs confluence is generally undecidable. In this paper we apply the so-called Strong Church-Rosser property to CHR. This allows determination of confluence for a subset of non-terminating CHR programs. },
pdf = PAPERSHOME # {chr2007/raiser_tacchella_confluence_non_terminating_chr07.pdf},
keywords = {CHR 2007, confluence},
}
21. Beata Sarna-Starosta and C.R. Ramakrishnan. Compiling Constraint Handling Rules for Efficient Tabled Evaluation. In M. Hanus, editor, PADL '07: Proc. 9th Intl. Symp. Practical Aspects of Declarative Languages, volume 4354 of Lecture Notes in Computer Science, pages 170-184, January 2007. Springer-Verlag. [doi:10.1007/978-3-540-69611-7_11] Keyword(s): implementation.
@inproceedings{sarnastarosta_ramakrishnan_chrd_padl07,
author = {Beata Sarna-Starosta and C.R. Ramakrishnan},
title = {Compiling {C}onstraint {H}andling {R}ules for Efficient Tabled Evaluation},
keywords = {implementation},
pages = {170--184},
booktitle = {PADL '07: Proc.\ 9th Intl.\ Symp.\ Practical Aspects of Declarative Languages},
editor = {M. Hanus},
location = {Nice, France},
city = {Nice, France},
month = jan,
year = 2007,
publisher = SV,
series = LNCS,
volume = 4354,
doi = {10.1007/978-3-540-69611-7_11},
}
22. Stephan Schiffel and Michael Thielscher. Fluxplayer: A Successful General Game Player. In AAAI '07: Proc. 22nd AAAI Conf. Artificial Intelligence, pages 1191-1196, July 2007. AAAI Press. Keyword(s): FLUX.
@inproceedings{schiffel_thielscher_fluxplayer_aaai07,
author = {Stephan Schiffel and Michael Thielscher},
title = {Fluxplayer: A Successful General Game Player},
keywords = {FLUX},
year = {2007},
pages = {1191--1196},
booktitle = {AAAI '07: Proc. 22nd AAAI Conf. Artificial Intelligence},
month = jul,
publisher = {AAAI Press},
}
23. Jon Sneyers, Peter Van Weert, and Tom Schrijvers. Aggregates for Constraint Handling Rules. In K. Djelloul, G. J. Duck, and M. Sulzmann, editors, CHR '07: Proc. 4th Workshop on Constraint Handling Rules, pages 91-105, September 2007. [PDF] Keyword(s): CHR 2007, extensions.
Abstract:
We extend the Constraint Handling Rules language with aggregates such as sum, count, findall, and min. The proposed extension features nested aggregate expressions over guarded conjunctions of constraints, a series of predefined aggregates, and application-tailored user-defined aggregates. We formally define the operational semantics of aggregates, and show how incremental aggregate computation facilitates efficient implementations. Case studies demonstrate that language support for aggregates significantly reduces program size, thus improving readability and maintainability considerably.
@inproceedings{sney_vanweert_demoen_aggregates_chr07,
author = {Jon Sneyers and Van Weert, Peter and Tom Schrijvers},
title = {Aggregates for {C}onstraint {H}andling {R}ules},
pages = {91--105},
crossref = {pchr07},
abstract = { We extend the Constraint Handling Rules language with aggregates such as sum, count, findall, and min. The proposed extension features nested aggregate expressions over guarded conjunctions of constraints, a series of predefined aggregates, and application-tailored user-defined aggregates. We formally define the operational semantics of aggregates, and show how incremental aggregate computation facilitates efficient implementations. Case studies demonstrate that language support for aggregates significantly reduces program size, thus improving readability and maintainability considerably. },
pdf = PAPERSHOME # {chr2007/sney_vanweert_demoen_aggregates_chr07.pdf},
keywords = {CHR 2007, extensions},
}
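The aggregates entry above emphasises incremental computation: an aggregate such as sum or count is updated as constraints enter and leave the store rather than recomputed from scratch. The snippet below is only a minimal sketch of that incremental idea; the hypothetical item/1 constraint and the class itself are not part of the proposed CHR extension.

```python
class IncrementalSum:
    """Maintain the sum and count of the arguments of hypothetical item/1 constraints."""

    def __init__(self):
        self.total = 0
        self.count = 0

    def add(self, value):      # an item(value) constraint enters the store
        self.total += value
        self.count += 1

    def remove(self, value):   # an item(value) constraint leaves the store
        self.total -= value
        self.count -= 1

agg = IncrementalSum()
for v in (3, 5, 8):
    agg.add(v)
agg.remove(5)
print(agg.total, agg.count)   # 11 2, obtained without rescanning the store
```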
24. Jon Sneyers, Peter Van Weert, Tom Schrijvers, and Bart Demoen. Aggregates in Constraint Handling Rules: Extended Abstract. In V. Dahl and I. Niemelä, editors, ICLP '07: Proc. 23rd Intl. Conf. Logic Programming, volume 4670 of Lecture Notes in Computer Science, pages 446-448, September 2007. Springer-Verlag. [doi:10.1007/978-3-540-74610-2_39] Keyword(s): extensions.
@inproceedings{sney_vanweert_schr_demoen_aggregates_iclp07,
author = {Jon Sneyers and Van Weert, Peter and Tom Schrijvers and Bart Demoen},
title = {Aggregates in {C}onstraint {H}andling {R}ules: Extended Abstract},
pages = {446--448},
doi = {10.1007/978-3-540-74610-2_39},
keywords = {extensions},
crossref = {piclp07}
}
25. Martin Sulzmann and Edmund S.L. Lam. Compiling Constraint Handling Rules with Lazy and Concurrent Search Techniques. In K. Djelloul, G. J. Duck, and M. Sulzmann, editors, CHR '07: Proc. 4th Workshop on Constraint Handling Rules, pages 139-149, September 2007. [PDF] Keyword(s): CHR 2007, parallelism.
Abstract:
Constraint Handling Rules (CHR) is a concurrent committed-choice constraint programming language to describe transformations (rewritings) among multi-sets of constraints. One of the main CHR execution tasks is the search for constraints matching a rule head. Several optimization techniques have been widely studied, yet the actual details of the search strategies and their implementation are rarely the center of attention. In this paper, we explore the implementation of several search strategies using the lazy functional language Haskell. In combination with concurrency abstractions as supported by the Glasgow Haskell Compiler we obtain very clean and efficient implementations for searching of matching constraints.
@inproceedings{sulz_lam_lazy_concurr_search_chr07,
title = {Compiling {C}onstraint {H}andling {R}ules with Lazy and Concurrent Search Techniques},
author = {Martin Sulzmann and Edmund S.L. Lam},
pages = {139--149},
crossref = {pchr07},
abstract = { Constraint Handling Rules (CHR) is a concurrent committed-choice constraint programming language to describe transformations (rewritings) among multi-sets of constraints. One of the main CHR execution tasks is the search for constraints matching a rule head. Several optimization techniques have been widely studied, yet the actual details of the search strategies and their implementation are rarely the center of attention. In this paper, we explore the implementation of several search strategies using the lazy functional language Haskell. In combination with concurrency abstractions as supported by the Glasgow Haskell Compiler we obtain very clean and efficient implementations for searching of matching constraints. },
pdf = PAPERSHOME # {chr2007/sulz_lam_lazy_concurr_search_chr07.pdf},
keywords = {CHR 2007, parallelism},
}
26. Martin Sulzmann and Edmund S.L. Lam. Haskell - Join - Rules. In Olaf Chitil, editor, IFL '07: 19th Intl. Symp. Implementation and Application of Functional Languages, pages 195-210, September 2007. Keyword(s): related formalisms.
@inproceedings{sulz_lam_haskelljoinrules_ifl07,
title = {{Haskell - Join - Rules}},
keywords = {related formalisms},
author = {Martin Sulzmann and Edmund S.L. Lam},
editor = {Olaf Chitil},
booktitle = {IFL '07: 19th Intl.\ Symp.\ Implementation and Application of Functional Languages},
pages = {195--210},
year = 2007,
month = sep,
location = {Freiburg, Germany},
city = {Freiburg, Germany},
}
27. Martin Sulzmann and Meng Wang. Aspect-oriented programming with type classes. In Proceedings of the 6th workshop on Foundations of aspect-oriented languages, FOAL '07, pages 65-74, 2007. ACM. ISBN: 978-1-59593-671-4. [WWW] Keyword(s): type systems.
@inproceedings{Sulzmann:2007:APT:1233833.1233842,
author = {Sulzmann, Martin and Wang, Meng},
title = {Aspect-oriented programming with type classes},
booktitle = {Proceedings of the 6th workshop on Foundations of aspect-oriented languages},
series = {FOAL '07},
year = {2007},
isbn = {978-1-59593-671-4},
location = {Vancouver, British Columbia, Canada},
pages = {65--74},
url = {http://doi.acm.org/10.1145/1233833.1233842},
publisher = {ACM},
keywords = {type systems},
}
28. Paolo Tacchella, Maurizio Gabbrielli, and Maria Chiara Meo. Unfolding in CHR. In M. Leuschel and A. Podelski, editors, PPDP '07: Proc. 9th Intl. Conf. Princ. Pract. Declarative Programming, pages 179-186, July 2007. ACM Press. ISBN: 978-1-59593-769-8.
@inproceedings{tacchella_gabbrielli_meo_unfolding_ppdp07,
author = {Paolo Tacchella and Maurizio Gabbrielli and Maria Chiara Meo},
title = {Unfolding in {CHR}},
pages = {179--186},
crossref = {pppdp07}
}
29. Dean Voets, Paolo Pilozzi, and Danny De Schreye. A new approach to termination analysis of Constraint Handling Rules. In K. Djelloul, G. J. Duck, and M. Sulzmann, editors, CHR '07: Proc. 4th Workshop on Constraint Handling Rules, pages 77-89, September 2007. [PDF] Keyword(s): CHR 2007, termination.
Abstract:
We present a new approach to termination analysis of Constraint Handling Rules. The approach, compared to existing approaches, is applicable to a much larger class of CHR programs. A new termination condition is formulated, that instead of a termination argument based on the comparison of sizes of consecutive computation states, verifies conditions imposed on the dynamic process of adding constraints to the store. The condition's applicability to CHR programs, with rules not only of the simplification type, has been successfully tested, using a semi-automated analyzer.
@inproceedings{voets_pilozzi_deschreye_termination_chr07,
author = {Dean Voets and Paolo Pilozzi and Danny {De Schreye}},
title = {A new approach to termination analysis of {C}onstraint {H}andling {R}ules},
pages = {77--89},
crossref = {pchr07},
abstract = { We present a new approach to termination analysis of Constraint Handling Rules. The approach, compared to existing approaches, is applicable to a much larger class of CHR programs. A new termination condition is formulated, that instead of a termination argument based on the comparison of sizes of consecutive computation states, verifies conditions imposed on the dynamic process of adding constraints to the store. The condition's applicability to CHR programs, with rules not only of the simplification type, has been successfully tested, using a semi-automated analyzer. },
pdf = PAPERSHOME # {chr2007/voets_pilozzi_deschreye_termination_chr07.pdf},
keywords = {CHR 2007, termination},
}
30. Armin Wolf, Jacques Robin, and Jairson Vitorino. Adaptive CHR meets CHR∨: An Extended Refined Operational Semantics for CHR∨ Based On Justifications. In K. Djelloul, G. J. Duck, and M. Sulzmann, editors, CHR '07: Proc. 4th Workshop on Constraint Handling Rules, pages 1-15, September 2007. [PDF] Keyword(s): semantics, disjunction, CHR 2007.
Abstract:
Adaptive constraint processing with Constraint Handling Rules (CHR) allows the application of intelligent search strategies to solve Constraint Satisfaction Problems (CSP), but these search algorithms have to be implemented in the host language of adaptive CHR, which is currently Java. On the other hand, CHR∨ enables to explicitly formulate search in CHR, using disjunctive bodies to model choices. However, a naive implementation for handling disjunctions, in particular chronological backtracking (as implemented in Prolog) might cause "thrashing" due to an inappropriate order of decisions. To avoid this, a first combination of adaptive CHR and CHR∨ is presented to offer a more efficient embedded search mechanism to handle disjunctions. Therefore the refined operational semantics of CHR is extended for disjunctions and adaptation.
@inproceedings{wolf_robin_vitorino_adaptive_chr_or_chr07,
author = {Armin Wolf and Jacques Robin and Jairson Vitorino},
title = {Adaptive {CHR} meets {CHR}$^{\lor}$: An Extended Refined Operational Semantics for {CHR}$^{\lor}$ Based On Justifications},
keywords = {semantics, disjunction, CHR 2007},
pages = {1--15},
crossref = {pchr07},
abstract = { Adaptive constraint processing with Constraint Handling Rules (CHR) allows the application of intelligent search strategies to solve Constraint Satisfaction Problems (CSP), but these search algorithms have to be implemented in the host language of adaptive CHR, which is currently Java. On the other hand, CHR$^\vee$ enables to explicitly formulate search in CHR, using disjunctive bodies to model choices. However, a naive implementation for handling disjunctions, in particular chronological backtracking (as implemented in Prolog) might cause "thrashing" due to an inappropriate order of decisions. To avoid this, a first combination of adaptive CHR and CHR$^\vee$ is presented to offer a more efficient embedded search mechanism to handle disjunctions. Therefore the refined operational semantics of CHR is extended for disjunctions and adaptation. },
}
31. Pieter Wuille, Tom Schrijvers, and Bart Demoen. CCHR: the fastest CHR Implementation, in C. In K. Djelloul, G. J. Duck, and M. Sulzmann, editors, CHR '07: Proc. 4th Workshop on Constraint Handling Rules, pages 123-137, September 2007. [WWW] [PDF] Keyword(s): CHR 2007, implementation.
Abstract:
CHR is usually compiled to high-level languages (like Prolog) that make it hard or impossible to express low-level optimizations. This is a pity, because it confines CHR to be a prototyping language only, with an unacceptable performance for production quality software. This paper presents CCHR, a CHR system embedded in the C programming language, that compiles to low-level C code which is highly suitable for fine-grained performance improvements. In this way CCHR program performance comes close to matching that of native C, and easily outperforms other CHR implementations.
@inproceedings{wuille_schr_demoen_cchr_chr07,
author = {Pieter Wuille and Tom Schrijvers and Bart Demoen},
title = {{CCHR}: the fastest {CHR} Implementation, in {C}},
pages = {123--137},
crossref = {pchr07},
url = {http://people.cs.kuleuven.be/~pieter.wuille/CCHR/},
abstract = { CHR is usually compiled to high-level languages (like Prolog) that make it hard or impossible to express low-level optimizations. This is a pity, because it confines CHR to be a prototyping language only, with an unacceptable performance for production quality software. This paper presents CCHR, a CHR system embedded in the C programming language, that compiles to low-level C code which is highly suitable for fine-grained performance improvements. In this way CCHR program performance comes close to matching that of native C, and easily outperforms other CHR implementations. },
pdf = PAPERSHOME # {chr2007/wuille_schr_demoen_cchr_chr07.pdf},
keywords = {CHR 2007, implementation},
}
Internal reports
1. Leslie De Koninck, Tom Schrijvers, and Bart Demoen. CHRrp: Constraint Handling Rules with rule priorties. Technical report CW 479, K.U.Leuven, Department of Computer Science, Leuven, Belgium, March 2007. [WWW] Keyword(s): priorities.
Abstract:
We extend the Constraint Handling Rules language (CHR) with user-defined rule priorities. This language extension reduces the level of non-determinism that is inherent to the theoretical operational semantics of CHR, and gives a more high-level form of execution control compared to the refined operational semantics. We suggest some application areas. A formal operational semantics for the extended language, called CHR-rp, is given and its theoretical properties are discussed. We look at some issues with CHR-rp and discuss alternatives for rule priorities.
@techreport{dekoninck_schr_demoen_chrrp_techrep07,
author= {De Koninck, Leslie and Schrijvers, Tom and Demoen, Bart},
title = {CHR$^\mathrm{rp}$: {C}onstraint {H}andling {R}ules with rule priorties},
institution = KULCW,
year = {2007},
month = mar,
number = {CW 479},
keywords = {priorities},
abstract = { We extend the Constraint Handling Rules language (CHR) with user-defined rule priorities. This language extension reduces the level of non-determinism that is inherent to the theoretical operational semantics of CHR, and gives a more high-level form of execution control compared to the refined operational semantics. We suggest some application areas. A formal operational semantics for the extended language, called CHR-rp, is given and its theoretical properties are discussed. We look at some issues with CHR-rp and discuss alternatives for rule priorities. },
url = {http://www.cs.kuleuven.be/publicaties/rapporten/cw/CW479.abs.html},
}
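The CHR-rp report above adds user-defined rule priorities to reduce the non-determinism of rule selection. As a hedged toy model only (this is not the CHR-rp syntax or operational semantics), prioritised selection can be pictured as: gather the rules whose guard currently holds and fire the one with the best priority. The rules, guards and store shape below are invented for the example.

```python
def step(rules, store):
    """Apply the highest-priority applicable rule once (smaller number = higher priority).

    Each rule is a tuple (priority, name, applicable, apply) where `applicable`
    and `apply` are functions over the store. Returns (name, new_store) or None.
    """
    candidates = [r for r in rules if r[2](store)]
    if not candidates:
        return None
    _, name, _, apply_rule = min(candidates, key=lambda r: r[0])
    return name, apply_rule(store)

# Two invented rules over a dict-shaped store.
rules = [
    (2, "cleanup",  lambda s: s["tmp"] > 0, lambda s: {**s, "tmp": 0}),
    (1, "simplify", lambda s: s["x"] >= 2,  lambda s: {**s, "x": s["x"] - 2, "y": s["y"] + 1}),
]
store = {"x": 5, "y": 0, "tmp": 3}
print(step(rules, store))   # the priority-1 rule fires even though both are applicable
```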
2. Leslie De Koninck, Tom Schrijvers, and Bart Demoen. The Correspondence Between the Logical Algorithms Language and CHR. Technical report CW 480, K.U.Leuven, Department of Computer Science, Leuven, Belgium, March 2007. [WWW] Keyword(s): related formalisms, priorities.
Abstract:
This paper investigates the relationship between the Logical Algorithms language (LA) of Ganzinger and McAllester and Constraint Handling Rules (CHR). We present a translation scheme from LA to CHR-rp: CHR with rule priorities and show that the meta-complexity theorem for LA can be applied to a subset of CHR-rp via inverse translation. This result is compared with previous work. Inspired by the high-level implementation proposal of Ganzinger and McAllester, we demonstrate how LA programs can be compiled into CHR rules that interact with a scheduler written in CHR. This forms the first actual implementation of LA. Our implementation achieves the complexity required for the meta-complexity theorem to hold and can execute a subset of CHR-rp with strong complexity bounds.
@techreport{dekoninck_schr_demoen_la-chr_techrep07,
author = {De Koninck, Leslie and Schrijvers, Tom and Demoen, Bart},
title = {The Correspondence Between the {L}ogical {A}lgorithms Language and {CHR}},
institution = KULCW,
year = {2007},
month = mar,
number = {CW 480},
abstract = { This paper investigates the relationship between the Logical Algorithms language (LA) of Ganzinger and McAllester and Constraint Handling Rules (CHR). We present a translation scheme from LA to CHR-rp: CHR with rule priorities and show that the meta-complexity theorem for LA can be applied to a subset of CHR-rp via inverse translation. This result is compared with previous work. Inspired by the high-level implementation proposal of Ganzinger and McAllester, we demonstrate how LA programs can be compiled into CHR rules that interact with a scheduler written in CHR. This forms the first actual implementation of LA. Our implementation achieves the complexity required for the meta-complexity theorem to hold and can execute a subset of CHR-rp with strong complexity bounds. },
keywords = {related formalisms, priorities},
url = {http://www.cs.kuleuven.be/publicaties/rapporten/cw/CW480.abs.html},
}
3. Leslie De Koninck, Peter J. Stuckey, and Gregory J. Duck. Optimized compilation of CHRrp. Technical report CW 499, K.U.Leuven, Department of Computer Science, Leuven, Belgium, August 2007. [WWW] Keyword(s): implementation, optimizing compilation, priorities.
Abstract:
Constraint Handling Rules were recently extended with user-definable rule priorities. This paper shows how this extended language can be efficiently compiled into the underlying host language. It extends previous work by supporting rules with a dynamic priority and by introducing various optimizations. The effects of the optimizations are empirically evaluated and the new compiler is compared with the state-of-the-art K.U.Leuven CHR system.
@techreport{dekoninck_stuck_duck_compiling-chrrp_techrep07,
author = {De Koninck, Leslie and Stuckey, Peter J. and Duck, Gregory J.},
title = {Optimized compilation of CHR$^\mathrm{rp}$},
keywords = {implementation, optimizing compilation, priorities},
institution = KULCW,
year = {2007},
month = aug,
number = {CW 499},
abstract = { Constraint Handling Rules were recently extended with user-definable rule priorities. This paper shows how this extended language can be efficiently compiled into the underlying host language. It extends previous work by supporting rules with a dynamic priority and by introducing various optimizations. The effects of the optimizations are empirically evaluated and the new compiler is compared with the state-of-the-art K.U.Leuven CHR system. },
url = {http://www.cs.kuleuven.be/publicaties/rapporten/cw/CW499.abs.html},
}
4. Paolo Pilozzi, Tom Schrijvers, and Danny De Schreye. Proving termination of CHR in Prolog: A transformational approach. Technical report CW 487, K.U.Leuven, Department of Computer Science, Leuven, Belgium, April 2007. [WWW] Keyword(s): termination.
Abstract:
In this paper we present a termination preserving transformation from Constraint Handling Rules to Prolog. The transformation is sound w.r.t. termination under the theoretical semantics of Constraint Handling Rules. It does not consider the presence of a propagation history. The transformation allows for the direct reuse of termination proof methods from Logic Programs and Term-Rewrite Systems, yielding the first fully automatic termination proving for Constraint Handling Rules. We formalize the transformation and show usefulness of the approach. We transform a set of CHR programs, by an implementation of the transformation and show termination by using existing termination tools for Logic Programs and Term-Rewrite Systems.
@techreport{pilozzi_schr_deschreye_termination_techrep07,
author = {Paolo Pilozzi and Tom Schrijvers and Danny {De Schreye}},
title = { Proving termination of {CHR} in {Prolog}: A transformational approach },
institution = KULCW,
year = {2007},
month = apr,
number = {CW 487},
abstract = { In this paper we present a termination preserving transformation from Constraint Handling Rules to Prolog. The transformation is sound w.r.t. termination under the theoretical semantics of Constraint Handling Rules. It does not consider the presence of a propagation history. The transformation allows for the direct reuse of termination proof methods from Logic Programs and Term-Rewrite Systems, yielding the first fully automatic termination proving for Constraint Handling Rules. We formalize the transformation and show usefulness of the approach. We transform a set of CHR programs, by an implementation of the transformation and show termination by using existing termination tools for Logic Programs and Term-Rewrite Systems. },
keywords = {termination},
url = {http://www.cs.kuleuven.be/publicaties/rapporten/cw/CW487.abs.html},
}
5. Beata Sarna-Starosta and Tom Schrijvers. Indexing techniques for CHR based on program transformation. Technical report CW 500, K.U.Leuven, Department of Computer Science, Leuven, Belgium, August 2007. [WWW] Keyword(s): implementation, optimizing compilation.
Abstract:
Multi-headed rules are essential for the expressiveness of CHR, but incur a considerable performance penalty. Current indexing techniques are often unable to address this problem. They are effective only when matchings have a particular form, or offer good run-time complexity rather than good absolute figures. In this paper we describe three advanced indexing techniques: (1) two program transformations that make other indexing techniques more effective, (2) an index for ground terms more efficient than hash tables, and (3) a post-processing program transformation that eliminates runtime overhead of (1) and (2). We compare these techniques with the current state of the art, and give measurements of their effectiveness in K.U.Leuven CHR and CHRd.
@techreport{sarnastarosta_schr_indexing_techrep07,
author = {Beata Sarna-Starosta and Tom Schrijvers},
title = {Indexing techniques for {CHR} based on program transformation},
institution = KULCW,
keywords = {implementation, optimizing compilation},
year = {2007},
month = aug,
number = {CW 500},
abstract = { Multi-headed rules are essential for the expressiveness of CHR, but incur a considerable performance penalty. Current indexing techniques are often unable to address this problem. They are effective only when matchings have a particular form, or offer good run-time complexity rather than good absolute figures. In this paper we describe three advanced indexing techniques: (1) two program transformations that make other indexing techniques more effective, (2) an index for ground terms more efficient than hash tables, and (3) a post-processing program transformation that eliminates runtime overhead of (1) and (2). We compare these techniques with the current state of the art, and give measurements of their effectiveness in K.U.Leuven CHR and CHRd. },
url = {http://www.cs.kuleuven.be/publicaties/rapporten/cw/CW500.abs.html},
}
6. Jon Sneyers, Peter Van Weert, Tom Schrijvers, and Bart Demoen. Aggregates in CHR. Technical report CW 481, K.U.Leuven, Department of Computer Science, Leuven, Belgium, March 2007. [WWW] Keyword(s): extensions.
Abstract:
We propose an extension of the Constraint Handling Rules language with aggregates like sum, count, findall, and min in the heads of rules. We define the semantics of aggregate expressions formally and informally. Our prototype implementation as a source-to-source preprocessor allows both on-demand and incremental computation of nested aggregate expressions over guarded conjunctions of constraints. Case studies demonstrate that by using aggregates, the program size can be significantly reduced, with only a small constant run-time overhead.
@techreport{sneyers_vanweert_et_al_aggregates_techrep07,
author = {Sneyers, Jon and Van Weert, Peter and Schrijvers, Tom and Demoen, Bart},
title = {Aggregates in {CHR}},
institution = KULCW,
year = {2007},
month = mar,
number = {CW 481},
keywords = {extensions},
abstract = { We propose an extension of the Constraint Handling Rules language with aggregates like sum, count, findall, and min in the heads of rules. We define the semantics of aggregate expressions formally and informally. Our prototype implementation as a source-to-source preprocessor allows both on-demand and incremental computation of nested aggregate expressions over guarded conjunctions of constraints. Case studies demonstrate that by using aggregates, the program size can be significantly reduced, with only a small constant run-time overhead. },
url = {http://www.cs.kuleuven.be/publicaties/rapporten/cw/CW481.abs.html},
}
Miscellaneous
1. Martin Magnusson. Deductive Planning and Composite Actions in Temporal Action Logic. Master's thesis, Department of Computer and Information Science, Linköping University, Sweden, 2007. Note: Thesis No. 1329.
@mastersthesis{magnusson_deductive_planning_07,
title = {Deductive Planning and Composite Actions in Temporal Action Logic},
author = {Martin Magnusson},
school = {Department of Computer and Information Science, Link{\"o}ping University, Sweden},
year = {2007},
note = {Thesis No. 1329}
}
2. Ersha Rahimikia. Detecting non-termination in Constraint Handling Rules. Master's thesis, Dept. Computing and Software, McMaster University, 2007. Keyword(s): termination.
@mastersthesis{rahimikia_nontermination_msthesis07,
author = {Rahimikia, Ersha},
title = {Detecting non-termination in {C}onstraint {H}andling {R}ules},
keywords = {termination},
school = {Dept.\ Computing and Software, McMaster University},
year = 2007,
}
3. Gerrit van den Geest. Constraints for Type Class Extensions. Master's thesis, Utrecht University, April 2007. Keyword(s): type systems.
@mastersthesis{gvdg_type_class_extensions_mthesis07,
author = {van den Geest, Gerrit},
title = {Constraints for Type Class Extensions},
keywords = {type systems},
school = {Utrecht University},
year = 2007,
month = apr,
}
4. Atze Dijkstra, Gerrit van den Geest, Bastiaan Heeren, and S. Doaitse Swierstra. Modelling Scoped Instances with Constraint Handling Rules. Note: Rejected by ICFP '07, 2007.
Abstract:
Haskell's class system provides a programmer with a mechanism to implicitly pass parameters to a function. A class predicate over some type variable in the type signature of a function induces the obligation for the caller to implicitly pass an appropriate instance of the class to the function. The class system is programmed by providing class instances for concrete types, thus providing, for each class, a unique mapping from types to instances. This mapping is used whenever an instance for a class predicate over some type is required. Choosing which instance to pass is solely based on the instantiated type of the class predicate. Although this mechanism has proved to be powerful enough for modelling overloading and a plethora of other programming language concepts, it is still limited in the sense that multiple instances for a type cannot exist at the same time. Usually one can program around this limitation by introducing dummy types, which act as a key to map to additional instances; but this indirect way of allowing extra instances clutters a program and still is bound to the finite number of types statically available in a program. The latter restriction makes it impossible to dynamically construct instances, which, for example, depend on runtime values. In this paper we lift these restrictions by means of local instances. Local instances allow us to shadow existing instances by new ones and to construct instances inside functions, using function arguments. We provide a translation of class and instance definitions to Constraint Handling Rules, making explicit the notion of "scope of an instance" and its role in context reduction for instances. We deal with the ambiguity of choosing between instances by using a framework for heuristically choosing between otherwise overlapping instances.
@unpublished{dijkstra_et_al_scoped_instances_07,
author = {Atze Dijkstra and van den Geest, Gerrit and Bastiaan Heeren and S. Doaitse Swierstra},
title = {Modelling Scoped Instances with Constraint Handling Rules},
abstract = { Haskell's class system provides a programmer with a mechanism to implicitly pass parameters to a function. A class predicate over some type variable in the type signature of a function induces the obligation for the caller to implicitly pass an appropriate instance of the class to the function. The class system is programmed by providing class instances for concrete types, thus providing, for each class, a unique mapping from types to instances. This mapping is used whenever an instance for a class predicate over some type is required. Choosing which instance to pass is solely based on the instantiated type of the class predicate. Although this mechanism has proved to be powerful enough for modelling overloading and a plethora of other programming language concepts, it is still limited in the sense that multiple instances for a type cannot exist at the same time. Usually one can program around this limitation by introducing dummy types, which act as a key to map to additional instances; but this indirect way of allowing extra instances clutters a program and still is bound to the finite number of types statically available in a program. The latter restriction makes it impossible to dynamically construct instances, which, for example, depend on runtime values. In this paper we lift these restrictions by means of local instances. Local instances allow us to shadow existing instances by new ones and to construct instances inside functions, using function arguments. We provide a translation of class and instance definitions to Constraint Handling Rules, making explicit the notion of ``scope of an instance'' and its role in context reduction for instances. We deal with the ambiguity of choosing between instances by using a framework for heuristically choosing between otherwise overlapping instances. },
year = 2007,
note = {Rejected by ICFP '07},
}
BACK TO INDEX
Disclaimer:
This material is presented to ensure timely dissemination of scholarly and technical work. Copyright and all rights therein are retained by authors or by other copyright holders. All person copying this information are expected to adhere to the terms and constraints invoked by each author's copyright. In most cases, these works may not be reposted without the explicit permission of the copyright holder.
The contents of this webpage is provided by the authors stated below. KU Leuven is not bound by the information provided. It is possible that the information is not or no longer completely accurate. Where necessary, the authors can adjust and update faulty information. The authors have taken all reasonable care to ensure that all information available on this website is accurate at the time of publication and on the basis of the current state of knowledge. KU Leuven nor the authors are responsible for the content of any links to external organisations that are referred to on this website.
|
2013-05-24 04:05:36
|
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 0, "mathjax_asciimath": 1, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.34611332416534424, "perplexity": 7035.079033157771}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2013-20/segments/1368704134547/warc/CC-MAIN-20130516113534-00095-ip-10-60-113-184.ec2.internal.warc.gz"}
|
http://theoryofcomputation.asia/algebraic_complexity.htm
|
## Algebraic Complexity
The BSS and related models of real computation has a distinctive algebraic flavour -- and so does its corresponding theory of computation. In particular, the classical field of (non-effective) algebra provides for a rich variety of concepts and tools which have spurred quantitative complexity results and established (quasi) optimality for many algorithms -- which discrete complexity is still far from. Morgenstern's volume bound for instance proves that the Fast Fourier Transform's running time can (in the model with bounded coefficients) not be improved asymptotically. On the other hand, Strassen's Fast Matrix Multiplication has shown cubic-time Gaussian Elimination to be suboptimal; and initiated the scintillating theory of tensor rank.
It is the merit of Blum, Shub, and Smale (1989) to have transferred structural complexity theory from the discrete to the real (and complex) setting by proving Hilbert's Nullstellensatz complete for $\mathcal{NP}_{\mathbb{C}}$. Present research locates many classical theorems in algebraic geometry as complete for (BSS-counterparts to discrete) complexity classes.
Selected References: Bürgisser, Clausen, Shokrollahi: Algebraic Complexity Theory, Springer (1997)
|
2021-09-27 21:40:15
|
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.8909838795661926, "perplexity": 1613.1544729256916}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2021-39/segments/1631780058552.54/warc/CC-MAIN-20210927211955-20210928001955-00591.warc.gz"}
|
https://docs.donutteam.com/docs/lucasrcfbuilder/tutorials
|
# Tutorials
This page contains a few short tutorials on how to use the RCF Builder.
These instructions can be used in a Command Prompt window or in a batch file. You will need to replace the paths in these examples with where you put the tool and where you have your input/output files and folders.
This tool currently only supports version 1.2 of the RCF format, the version used by The Simpsons: Road Rage and The Simpsons: Hit & Run.
# Extracting an RCF
To extract an RCF file, use the following command line arguments:
"C:\path\to\LRCFB.exe" -inputrcf "C:\path\to\input\file.rcf" -outputdir "C:\path\to\extract\to"
# Building an RCF
## From scratch
To build an RCF file from scratch, use the following command line arguments:
"C:\path\to\LRCFB.exe" -inputdir "C:\path\to\input\from" -outputrcf "C:\path\to\build\to\file.rcf"
You must also use -bigendian when building files for Nintendo GameCube games.
## From an existing RCF and a directory
To build an RCF file from an existing RCF and a directory, use the following command line arguments:
"C:\path\to\LRCFB.exe" -inputrcf "C:\path\to\input\from\file.rcf" -inputdir "C:\path\to\input\from\dir" -outputrcf "C:\path\to\build\to\file.rcf"
For files that exist in both the RCF and the directory, the latter will be prioritized.
You must also use -bigendian when building files for Nintendo GameCube games.
## Updating an RCF
To update the contents of an RCF file with that of a directory, use the following command line arguments:
"C:\path\to\LRCFB.exe" -rcf "C:\path\to\input\from\file.rcf" -inputdir "C:\path\to\input\from"
You must also use -bigendian when building files for Nintendo GameCube games.
|
2021-10-16 08:59:59
|
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 1, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.5318862795829773, "perplexity": 9847.146058139522}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2021-43/segments/1634323584554.98/warc/CC-MAIN-20211016074500-20211016104500-00484.warc.gz"}
|
https://www.fabrizioduroni.it/2017/06/02/swift-closure-syntax.html
|
# CHICIO CODING
## Swift Closure: what they are and syntax
In this post I will talk about Swift closure: what they are and their syntax.
As reported on the official Apple swift documentation closures are:
Closures are self-contained blocks of functionality that can be passed around and used in your code. They can capture and store references to any constants and variables from the context in which they are defined.
Closures are in many ways what blocks are in Objective-C (or lamba function in other languages). As it was for blocks, it is not easy to remeber their syntax. This post is intended to be a reference for me (and you, readers ) about closure syntax. You could also take a look at F$%&£&g closure syntax. Declared as a variable (valid also for let constants): var closure: (parameters) -> returnType Declared as an optional variable: var closure: ((parameters) -> returnType)? Declared as a typealias: typealias ClosureType = (parameters) -> returnType Declared as a function parameter and then call that function: func myFunction(closure: (parameters) -> returnType) { ... } ... /** You can explictly write the type of parameters. **/ //Call with round brackets. myFunction(closure: { (parameters) -> returnType in ... }) //Call without round brackets (only if closure is the last parameter). myFunction { (parameters) -> returnType in ... } There is also the possibility to use a shorthand for the parameter: you can call them using $ followed by the index of the argument in the call. Last but not least, you can capture self avoing retain cycle using [unowned self] before the parameters. Go and show to the world the power of closure in Swift!!
|
2018-11-14 07:31:24
|
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 0, "mathjax_asciimath": 1, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.20719942450523376, "perplexity": 1257.2300517245662}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2018-47/segments/1542039741660.40/warc/CC-MAIN-20181114062005-20181114084005-00115.warc.gz"}
|
http://openstudy.com/updates/55c10f60e4b0a2b3ffc66e6f
|
• sebaxtiangb
NEED HELP Given the system of equations presented here: 4x + y = 4 2x + 7y = 28 Which of the following actions creates an equivalent system such that, when combined with the other equation, one of the variables is eliminated? Multiply the second equation by −1 to get −2x − 7y = −28 Multiply the second equation by −4 to get −8x − 28y = −112 Multiply the first equation by −7 to get −28x − 7y = −28 Multiply the first equation by −2 to get −8x − 2y = −8
Mathematics
Looking for something else?
Not the answer you are looking for? Search for more explanations.
|
2017-03-29 01:43:37
|
{"extraction_info": {"found_math": false, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.8037170767784119, "perplexity": 695.2242927534506}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2017-13/segments/1490218190134.67/warc/CC-MAIN-20170322212950-00018-ip-10-233-31-227.ec2.internal.warc.gz"}
|
http://www.holycityheartofgold.com/browse32-Running_White_Lunar_Lace_Skyelux_Round_Nike_Womens_Shoes_up_Toe_zw55q0H-zvcdl.zso
|
# Running White Lunar Lace Skyelux Round Nike Womens Shoes up Toe zw55q0H
This is motivated by this question and the fact that I have no access to Timothy Chow's paper What Is a Closed-Form Number? indicated there by Qiaochu Yuan.
If an equation f(x)=0 has no closed form solution, what does it normally mean? Added: f may depend (and normally does) on parameters.
To me this is equivalent to say that one cannot solve it for x in the sense that there is no elementary expression g(c1,c2,,cp) consisting only of a finite number of polynomials, rational functions, roots, exponentials, logarithmic and trigonometric functions, absolute values, integer and fractional parts, such that
f(g(c1,c2,,cp))=0 .
I would say it very much depends on the context, and what tools are at your disposal. For instance, telling a student who's just mastered the usual tricks of integrating elementary functions that
expuRunning Nike Shoes Round Skyelux Lunar White Toe Lace Womens up 1udu
and
(u+1)(u2+1)du
have no closed form solutions is just the fancy way of saying "no, you can't do these integrals yet; you don't have the tools". To a working scientist who uses exponential and elliptic integrals, however, they do have closed forms.
In a similar vein, when we say that nonlinear equations, whether algebraic ones like x5x+1=0 or transcendental ones like π4=vsinv2Lace Lunar White Shoes up Nike Running Skyelux Round Toe Womens have no closed form solutions, what we're really saying is that we can't represent solutions to these in terms of functions that we know (and love?). (For the first one, though, if you know hypergeometric or theta functions, then yes, it has a closed form.)
I believe it is fair to say that for as long as we haven't seen the solution to an integral, sum, product, continued fraction, differential equation, or nonlinear equation frequently enough in applications to give it a standard name and notation, we just cop out and say "nope, it doesn't have a closed form".
• I agree that "closed form" depends on context. However, I also think the default context for most people is the one defined in the question. – John D. Cook Nov 6 '10 at 21:06
• My copy of Abramowitz and Stegun is rather worn out from much use, which is why when explaining this stuff to other people, I always have to ask "what do you already know?" or something to that effect. I do know I may well have to say different things to a physicist and to a freshman calculus student who encounter the same integral! – J. M. is not a mathematician Nov 6 '10 at 21:12
• Though anecdotes are not admissible as data, I have to say that for me personally, the reason for my being rather comfortable around these integrals is that I have had the pleasure(?) to be taught the natural logarithm as the integral of the reciprocal function. I had never encountered the natural previously at the time. I knew base-10 logarithms, and was familiar with the change-of-base formula, so finding out that this integral had the properties of a logarithm was quite the eye-opener. (cont'd) – J. M. is not a mathematician Nov 6 '10 at 21:17
• (cont'd) Much later, when I encountered an elliptic integral for the first time, I was all "eh, just like the logarithm..." and I was never afraid/surprised of encountering new functions. – J. M. is not a mathematician Bows with Heels Pull WeiPoot Soft Low Toe Women's top Round Closed on High Boots Material Black wnO6tan
• @ John D. Cook I disagree. Fractional and integer parts are not default closed forms for me. On the other hand, factorial and Gamma function are included – Anixx Dec 15 '10 at 2:08
To better understand closed forms, you may want to familiarize yourself with what's called Differential Algebra. Just as number theory relies on abstract structures such as rings, fields, ideals, etc. to express roots of algebraic equations using elementary numbers, similarly there is a parallel apparatus for expressing functions (i.e. solutions of differential equations) using differential rings, fields, ideals called Differential Algebra. It is this underlying mechanism that defines which functions can be expressed as "closed forms".
Parallels:
1. Similar to splitting fields for algebraic equations, there is a parallel Galois theory with Picard-Vessiot extensions and what not.
2. Similar to correspondence between subfields of number fields and Galois subgroups, on the differential side, there is a correspondence between differential subfields and subgroups of algebraic groups.
3. Just as algebraic equations can be determined to be solvable by radicals, similarly linear differential equations can be determined to be solvable by exponentials, Liouvillian functions, etc. There is an ascending tower of differential fields which can be built.
There is more... I am no expert in this differential algebra field but if you want some freely available references, see
1. Seiler Computer Algebra and differential equations
2. Van der Put Galois theory of differential equations, algebraic groups and Lie algebras
3. Papers by Michael F. Singer are good. See for example "Galois theory of linear differential equations".
4. Check the Kolchin seminar in Differential Algebra
Closed form solution is a solution that can be represented without using limits, and as such, integrals, infinite sums, derivatives. Only using functions, their compositions and arithmetic operations on them. The class of functions allowed may vary. By default it usually allows elementary functions plus Gamma function, plus Polygamma function, plus Hurwitz zeta function. Sometimes, service function like "integer part", "absolute value", "argument", "real part", "imaginary part" may be allowed.
Let us assume, f(x)=0 is to be solved for x .
If an equation f(x)=0Womens Lunar Skyelux Shoes Running Round White Toe up Lace Nike has no closed-form solution, the equation has no solution which can be expressed as a closed-form expression.
A mathematical expression is a closed-form expression iff it contains only finite numbers of only constants, functions, operations and/or variables.
Sensefully, all the constants, functions and operations in a given closed-form expression should be from given sets.
Let us say, a (local) closed-form inverse ( f1 ) is a (local) inverse (= inverse function) which can be expressed as closed-form expression.
Because of fShoes Running Nike Lunar Round Lace Skyelux Toe Womens up White (x)=0 and the definition of a (local) inverse f1(f(x))=x , the following holds: f1(f(x))=f1(0) , x=f1(0) . And therefore: If an equation f(x)=0 has no closed-form solution, the function f has no local closed-form inverse, or a local closed-form inverse exists but is not defined for the argument 0 of the right side of the equation. This means, x cannot be isolated on only one side of the equation
• by applying a local closed-form inverse,
• by only applying the local closed-form inverses and inverse operations of the closed-form functions respective operations which are contained in the expression f(x) .
The existence of a local closed-form inverse is a sufficient but not a necessary criterion for the existence of a closed-form solution.
The elementary functions are a special kind of closed-form expressions. If f is an elementary function, the following statements are equivalent:
• f is generated from its only argument variable in a finite number of steps by performing only arithmetic operations, power functions with integer exponents, root functions, exponential functions, logarithm functions, trigonometric functions, inverse trigonometric functions, hyperbolic functions and/or inverse hyperbolic functions.
• f is generated from its only argument variable in a finite number of steps by performing only arithmetic operations, exponentials and/or logarithms.
• f is generated from its only argument variable in a finite number of steps by performing only explicit algebraic functions, exponentials and/or logarithms.
Whereas Joseph Fels Ritt allows explicit and implicit algebraic functions, Timothy Chow restricts the approved algebraic operations to the explicit algebraic functions, that are the arithmetic operations.
My take on this question, from a practical standpoint:
In the world of computers, there are no "closed forms."
"Closed form" is a mathematician's label for certain mathematical expressions which he deems "elementary." More specifically, it's a way of saying, "We don't care about the algorithms for evaluating this expression."
What makes a "closed form" is that the algorithmic steps involved in computing it are regarded in algebraic manipulation as one atomic step.
A classic example of this is the factorial, n! . Strictly speaking, the definition involves a recurrence. However, it comes up in practice so often, mathematicians label it a "closed form" itself.
Now, perhaps some clever mathematical algorithms expert might come up with a more efficient way to compute the factorial of arbitrary values of n , which does not involve actually performing n1 multiplication steps. The point I'm making is that the label "closed form" doesn't depend on that better algorithm being known, or even being possible. It just means, "We are regarding the algorithm for computing this as not a crucial question (for the current text)."
In fact, if you get right down to it, f(x)=x+1 is the ultimate closed form. The unary "increment" operator.
Adding bigger numbers (the binary "sum" operator) can be seen as a "recurrence" or repeated application of the "increment" operator.
Multiplication itself is repeated application of the "sum" operator, and likewise exponentiation is a repeated application of multiplication.
But all of these are considered as "closed forms." The concept is not a fixed one.
To quote Concrete Mathematics:
We could give a rough definition like this: An expression for a quantity f(n) is in closed form if we can compute it using at most a fixed number of “well known” standard operations, independent of n. For example, 2n – 1 and n(n + 1)/2 are closed forms because they involve only addition, subtraction, multiplication, division, and exponentiation, in explicit ways.
The total number of simple closed forms is limited, and there are recurrences that don’t have simple closed forms. When such recurrences turn out to be important, because they arise repeatedly, we add new operations to our repertoire; this can greatly extend the range of problems solvable in “simple” closed form. For example, the product of the first n integers, n!, has proved to be so important that we now consider it a basic operation. The formula ‘n!’ is therefore in closed form, although its equivalent ‘1·2·. . .·n’ is not.
Indoor TOES Slippers FUN Plush Slip Soft Women's Clog Lining on Grey HqFw7CF
One usually refers to "closed form" as a solution involving functions we commonly know of. This class of functions varies from problem to problem, field to field. For example, one might say x5x+1=0 has no closed form solution because x can't be solved for in terms of radicals. However, it can be solved in terms of Bring radicals. Here, closed form might mean "a combination of rational numbers, addition, multiplication, and radicals".
One could extend "closed form" to mean a solutions involving elementary functions, which are functions made of a finite composition of arithmetic operations, exponentials, logarithms, constants, and solutions to algebraic equations. Under this definition of closed form, x5x+1=a has a closed form solution since it is an algebraic equation.
However, upon entering calculus, you will find there are many integrals you cannot solve. For example,
1+x3 dx, ex2 dx
These are antiderivatives of elementary functions, though they themselves are not elementary. One can make these integrals as "closed form" by having it be the class of Liouvillian functions, which are elementary functions and their antiderivatives.
So as your problems get harder and your field changes, closed form will have a different meaning to you (and not limited to the above).
For most purposes, I think closed form is implied to mean elementary function though.
|
2018-10-20 10:50:37
|
{"extraction_info": {"found_math": false, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.8042275309562683, "perplexity": 739.1841459733566}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2018-43/segments/1539583512693.40/warc/CC-MAIN-20181020101001-20181020122501-00148.warc.gz"}
|
https://www.physicsforums.com/threads/convergence-of-random-variables.357657/
|
# Convergence of random variables
1. Nov 24, 2009
### kingwinner
I was reading some proofs about the convergence of random variables, and here are the little bits that I couldn't figure out...
1) Let Xn be a sequence of random variables, and let Xnk be a subsequence of it. If Xn conveges in probability to X, then Xnk also conveges in probability to X. WHY?
2) I was looking at a theorem: if E(Y)<∞, then Y<∞ almost surely. Now I am puzzled by the notation. What does it MEAN to say that Y=∞ or Y<∞?
For example, if Y is a Poisson random variable, then the possible values are 0,1,2,..., (there is no upper bound). Is it true to say that Y=∞ in this case?
3) If Xn4 converges to 0 almost surely, then is it true to say that Xn also converges to 0 almost surely? Why or why not?
4) The moment generating function(mgf) determines the distribution uniquely, so we can use mgf to find the distributions of random varibles. If the mgf already does the job, what is the point of introducing the "characteristic function"?
Any help is much appreciated! :)
2. Nov 24, 2009
### grief
I can answer the first one. Xn converges to X by definition if for all epsilon > 0,
Pr(|Xn-X|>epsilon) converges to 0. Suppose Xn converges to X in probability. Let Xnk be a subsequence. Then for any epsilon>0, Pr(|Xnk-X|>epsilon) is a subsequence of Pr(|Xn-X|>epsilon) (these are sequences of numbers). Since we know that a subsequence of a convergent sequence of numbers converges to the limit of the original sequence, it follows that Pr(|Xnk-X|>epsilon) converges to 0. So Xnk converges in probability to X.
3. Nov 24, 2009
### bpet
1) This would be a generalization of convergence of subsequences of real numbers.
2) An example of this would be first exit times - consider a process that has a finite probability of never exiting (e.g. fly in a jar), so the first exit time can be infinite.
3) Not sure
4) No - mgf is not unique (e.g. lognormal distribution) and doesn't necessary exist (e.g. Pareto). The c.f. is useful because it always exists on the real axis (if the r.v. is a.s. finite) and acts like a Fourier transform.
Hope this helps
4. Nov 24, 2009
### kingwinner
Thank you for the replies.
2) I don't get it. The theorem is talking about this: "if E(Y)<∞, then Y<∞ almost surely", but I don't even know what Y<∞ means...:(
For a Poisson random variable Y, the possible values are 0,1,2,..., and there is NO upper bound, so Y=∞ is possible? (same for exponential random variable, there is no upper bound.)
For a binomial random variable X, the possible values are 0,1,2,...,n, there is a upper bound, so Y<∞?
I am really confused. Can someone please explain more on this? What does it mean to say that Y<∞? (or Y=∞?)
4) So you mean the characterisitic function c(t) always exists for ALL real numbers t, is that right?
Also, for example, if we are asked to prove that the sum of 2 indepndent normal r.v.'s is again normal, then I think the proof using mgf is perfectly fine, but I see my textbook using characteristic function for this, is it absolutely necessary to use characteristic function in a proof like this?
5. Nov 25, 2009
"Also, for example, if we are asked to prove that the sum of 2 indepndent normal r.v.'s is again normal, then I think the proof using mgf is perfectly fine, but I see my textbook using characteristic function for this, is it absolutely necessary to use characteristic function in a proof like this?"
No, it isn't necessary.
Every probability distribution has a characteristic function, and that function is unique - it determines the distribution.
In order for a distribution to have a moment generating function, every moment has to exist - that is, you must have
$$\int x^n \,dF(x) < \infty$$
for all n. This isn't always true - consider
$$f(x) = \frac 1 {\pi (1+x^2)}$$
which doesn't even have a mean.
If a distribution's moments identify the distribution exactly (say they satisfy Carleman's conditions) then the moment generating function is unique and identifies the distribution.
I'm guessing (and it's only a guess, since I don't know which probability text you're using) that the author(s) use the characteristic function approach to show the sum of two independent normals is normal because it is a relatively easy example to use to demonstrate the general procedure.
6. Nov 25, 2009
### kingwinner
4) So while the moment generating function does not always exist in a neighborhood of 0, the "characterisitic function" ALWAYS exists for ALL real numbers t, is this right? (so that it is more general?)
2) Can you also explain the meaning of "Y<∞", please?
Is this about the difference of binomial random variables (which has an upper bound on the possible values), and Poisson (or exponential) random variables (which has no upper bound on the possible values)?
So that for binomial random variables Y, we can say that Y<∞, while for Poisson (or exponential) random variables X, we cannot say that X<∞?
Your help is much appreciated! :)
7. Nov 25, 2009
### Hurkyl
Staff Emeritus
It is often more convenient to do calculus using the extended real numbers rather than the real numbers. The extended real numbers contain two extra points, called $+\infty$ and $-\infty$.
Every infinite sum of nonnegative extended real numbers is convergent. For example:
$$1 + 1 + 1 + \cdots = +\infty$$
A similar statement is true for definite integrals.
8. Nov 25, 2009
\begin{align*} \phi_X(t) & = \int_{\mathcal{R}} e^{tx} \, dF(x) \\ & = \int_{\mathcal{R}} \sum_{n=0}^\infty \frac{(tx)^n}{n!} \, dF(x) \end{align*}
If the distribution does not have moments of all orders, eventually an integral involving $$x^n$$ will diverge, and so the mgf does not exist.
$$|\psi_X(t)| = \left|\int_{\mathcal{R}} e^{itx} \, dF(x)\right| \le \int_{\mathcal{R}} |e^{itx}| \, dF(x) = \int_{\mathcal{R}} dF(x) = 1$$
|
2018-03-24 01:32:18
|
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 1, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.9058830142021179, "perplexity": 493.2843061625372}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2018-13/segments/1521257649508.48/warc/CC-MAIN-20180323235620-20180324015620-00725.warc.gz"}
|
http://new-contents.com/Nebraska/estimate-proportion-standard-error.html
|
Address 324 12th Ave E, Benkelman, NE 69021 (308) 423-2617
# estimate proportion standard error Benkelman, Nebraska
The standard error of this estimate is ________. Are "ŝati" and "plaĉi al" interchangeable? The system returned: (22) Invalid argument The remote host or network may be down. In a situation like this, statisticians replace p with when calculating the SE.
They can be time-consuming and complex. I suppose I could've done the same to calculate SEM from the 95% confidence interval provided by prop.test too, but this is better. How do I help minimize interruptions during group meetings as a student? Let's suppose there are m 1s (and n-m 0s) among the n subjects.
The sample is sufficiently large. Now is based on a sample, and unless we got really lucky, chances are the .15 estimate missed. House of Santa Claus How to deal with players rejecting the question premise What does a well diversified self-managed investment portfolio look like? Often, researchers choose 90%, 95%, or 99% confidence levels; but any percentage can be used.
Find the margin of error. asked 5 years ago viewed 4392 times Related 2Plotting Multiple Proportions With Standard Error4GLM for proportional data8Standard error of sample standard deviation of proportions2Calculating standard error for a Normal population0How can more hot questions question feed default about us tour help blog chat data legal privacy policy work here advertising info mobile contact us feedback Technology Life / Arts Culture / Recreation Browse other questions tagged standard-error proportion weighted-data or ask your own question.
Exercise 4 shows the effect of of increasing the sample size on the SE of the sample proportion. Elsewhere on this site, we show how to compute the margin of error when the sampling distribution is approximately normal. This condition is satisfied, so we will use one of the simpler "approximate" formulas. How to tell why macOS thinks that a certificate is revoked?
Whenever you need to construct a confidence interval, consider using the Sample Planning Wizard. Suppose k possible samples of size n can be selected from the population. Then, we have 0.40 * 1600 = 640 successes, and 0.60 * 1600 = 960 failures - plenty of successes and failures. more stack exchange communities company blog Stack Exchange Inbox Reputation and Badges sign up log in tour help Tour Start here for a quick overview of the site Help Center Detailed
Wow, thanks for the clarification @Aniko...that wouldn't have been good to report. Using the t Distribution Calculator, we find that the critical value is 2.58. share|improve this answer answered Jun 29 '15 at 20:12 whuber♦ 145k17283542 Thanks! Related Calculators: Vector Cross Product Mean Median Mode Calculator Standard Deviation Calculator Geometric Mean Calculator Grouped Data Arithmetic Mean Calculators and Converters ↳ Calculators ↳ Statistics ↳ Data Analysis Top Calculators
Please answer the questions: feedback Next: Exercises Up: Sampling Distribution of the Previous: The Sampling Distribution of Estimating the Population Proportion p The TV World computations in the previous The Sample Planning Wizard is a premium tool available only to registered users. > Learn more Register Now View Demo View Wizard Test Your Understanding Problem 1 A major metropolitan newspaper The approach that we used to solve this problem is valid when the following conditions are met. Is intelligence the "natural" product of evolution?
However, students are expected to be aware of the limitations of these formulas; namely, the approximate formulas should only be used when the population size is at least 20 times larger The confidence level describes the uncertainty of a sampling method. Why does the material for space elevators have to be really strong? Estimation Requirements The approach described in this lesson is valid whenever the following conditions are met: The sampling method is simple random sampling.
This condition is satisfied; the problem statement says that we used simple random sampling. Because we do not know $p(1-p)$, we have to estimate it. What is the 99% confidence interval for the proportion of readers who would like more coverage of local news? (A) 0.30 to 0.50 (B) 0.32 to 0.48 (C) 0.35 to 0.45 In other words, 0.52 of the sample favors the candidate.
C. The sample should include at least 10 successes and 10 failures. Select a confidence level. Exercise 4.
These are the familiar formulas, showing that the calculation for weighted data is a direct generalization of them. Lane Prerequisites Introduction to the Normal Distribution, Normal Approximation to the Binomial, Sampling Distribution of the Mean, Sampling Distribution of a Proportion, Confidence Intervals, Confidence Interval on the Mean Learning Objectives The SE becomes $\sqrt{p(1-p)/n}$ and its estimate from the sample is $\sqrt{\bar X(1-\bar X)/n}$. Stat Trek's Sample Planning Wizard does this work for you - quickly, easily, and error-free.
That gives $$\text{SE}(\bar X) = \sqrt{\bar X(1-\bar X) \sum_{i=1}^n \omega_i^2}.$$ For unweighted data, $\omega_i = 1/n$, giving $\sum_{i=1}^n \omega_i^2 = 1/n$. The estimated standard error of p is therefore We start by taking our statistic (p) and creating an interval that ranges (Z.95)(sp) in both directions, where Z.95 is the number of Identify a sample statistic. r standard-deviation proportion share|improve this question edited May 20 '11 at 11:06 Bernd Weiss 5,7142138 asked May 20 '11 at 0:39 Mog 4382820 1 Do you mean the standard error
The range of the confidence interval is defined by the sample statistic + margin of error. Share a link to this question via email, Google+, Twitter, or Facebook. Therefore the confidence interval is Lower limit: 0.52 - (1.96)(0.0223) - 0.001 = 0.475 Upper limit: 0.52 + (1.96)(0.0223) + 0.001 = 0.565 0.475 ≤ π ≤ 0.565 Since the interval Suppose we take a sample of 40 graduating students, and suppose that 6 out of the 40 are planning to go to graduate school.
This expression should be valid for all binomial distributions. Since the above requirements are satisfied, we can use the following four-step approach to construct a confidence interval. up vote 1 down vote favorite 2 I made a comparison of hatch success between 2 populations of birds using R's prop.test() function: prop.test(c(#hatched_site1, #hatched_site2),c(#laid_site1, #laid_site2)) It gave me the proportions In data analysis, population parameters like p are typically unknown and estimated from the data.
Browse other questions tagged r standard-deviation proportion or ask your own question. What is the best way to upgrade gear in Diablo 3? It has already been argued that a proportion is the mean of a variable that is 1 when the individual has a characteristic and 0 otherwise. Generated Sat, 15 Oct 2016 06:15:45 GMT by s_ac15 (squid/3.5.20)
|
2019-01-21 02:15:08
|
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 1, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.33749526739120483, "perplexity": 1286.4613792215969}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.3, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2019-04/segments/1547583745010.63/warc/CC-MAIN-20190121005305-20190121031305-00183.warc.gz"}
|
https://www.physicsforums.com/threads/entropy-physical-meaning.86370/
|
# Entropy - Physical Meaning
1. ### TimNguyen
81
Hello.
I know that entropy equals the Boltzmann's constant times the natural log of the multiplicity but I do not know the physical interpretation of what "entropy" really is. I understand the units of it are "Joules per Kelvin" but what does that really mean?
2. ### Juan R.
416
The most simple and direct interpretation
$$U = TS$$
entropy is a "measure" of the increase of internal energy with temperature or
$$S = \frac{U}{T}$$
If two bodies have the same temperature but one has more energy, then it has more entropy.
If two bodies have the same energy but one has less temperature, then it has more entropy.
Last edited: Aug 26, 2005
3. ### ZapperZ
29,765
Staff Emeritus
Try this:
http://www.entropysite.com/
It has a good collection of basic articles, especially in dispelling the notion that entropy is nothing more than "disorder".
Zz.
4. ### Juan R.
416
Effectively, the old (archaic) idea of entropy like disorder is not justified and would be abandoned of literature.
Moreover, the standard formula $$S = k \ ln W$$ often invoked in that old interpretation is valid only in the very special case of an isolated system at equilibrium.
That is reason that one would reasoning what is entropy from above formulas.
5. ### vipul_rop88
1
ans:
it realy means how quickly the energy is spreding in medium.
SpaceKidd_N7 likes this.
|
2015-03-31 15:36:37
|
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 1, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.7192410230636597, "perplexity": 1619.7550637908575}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2015-14/segments/1427131300735.71/warc/CC-MAIN-20150323172140-00156-ip-10-168-14-71.ec2.internal.warc.gz"}
|
https://math.stackexchange.com/questions/534832/how-can-we-calculate-log-xx/534858
|
# How can we calculate $(\log_{x}{x})'$?
Related to this, I am looking for a solution for:
# $(\log_{x}{x})'$ = ?
...where $x$ is not 1, but positive.
• $$\log_bb=1$$ if $b>0,\ne1$ – lab bhattacharjee Oct 21 '13 at 18:54
• @labbhattacharjee Oh, so simple and clean. Post an answer! :-) – Ionică Bizău Oct 21 '13 at 18:57
• @labbhattacharjee Asked this answer my math teacher, and he didn't know to answer... ^_^ – Ionică Bizău Oct 21 '13 at 18:57
• @Johnツ, if that's true then change of teacher...or of school. – DonAntonio Oct 21 '13 at 19:13
• @DonAntonio: Mistakes happen sometimes. Have you never a dumb moment? – Najib Idrissi Oct 22 '13 at 0:56
Notice that $\log_x x=1.$ Is that enough?
• Remember: $\log_b a$ is the number we raise $b$ to to get $a.$ In other words, $\log_b a= c$ if and only if $b^c=a.$ – Maxim Gilula Oct 22 '13 at 2:16
If you want to do it the hard way, let $f(y,z) = \log_y z = \frac{\ln z}{\ln y}$, so that your function is $g(x) = f(x,x)$. It is easy to compute that $\frac{\partial f}{\partial z}(y,z) = \frac{1}{z \ln y}.$ It is almost as easy to compute that $\frac{\partial f}{\partial z}(y,z) = \frac{1}{y} \cdot \frac{-\ln z}{\ln^2 y}$.
You have: $$g'(x) = \frac{\partial f}{\partial y}(x,x) + \frac{\partial f}{\partial z}(x,x).$$ When you substitute the above formulas, everything cancels out and you find $g'(x) =0$.
Yes, it is somewhat silly to solve this particular problem this way. But hopefully, this will be of use to the author of the question if he wants to differentiate, say, $\log_x(1+x)$.
|
2020-10-25 19:41:17
|
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 1, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.7558472156524658, "perplexity": 531.2075827101445}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2020-45/segments/1603107889651.52/warc/CC-MAIN-20201025183844-20201025213844-00276.warc.gz"}
|
https://networkx.github.io/documentation/networkx-2.2/reference/randomness.html
|
# Randomness¶
Random Number Generators (RNGs) are often used when generating, drawing and computing properties or manipulating networks. NetworkX provides functions which use one of two standard RNGs: NumPy’s package numpy.random or Python’s built-in package random. They each provide the same algorithm for generating numbers (Mersenne Twister). Their interfaces are similar (dangerously similar) and yet distinct. They each provide a global default instance of their generator that is shared by all programs in a single session. For the most part you can use the RNGs as NetworkX has them set up and you’ll get reasonable pseudorandom results (results that are statistically random, but created in a deterministic manner).
Sometimes you want more control over how the numbers are generated. In particular, you need to set the seed of the generator to make your results reproducible – either for scientific publication or for debugging. Both RNG packages have easy functions to set the seed to any integer, thus determining the subsequent generated values. Since this package (and many others) use both RNGs you may need to set the seed of both RNGs. Even if we strictly only used one of the RNGs, you may find yourself using another package that uses the other. Setting the state of the two global RNGs is as simple setting the seed of each RNG to an arbitrary integer:
>>> import random
>>> random.seed(246) # or any integer
>>> import numpy
>>> numpy.random.seed(4812)
Many users will be satisfied with this level of control.
For people who want even more control, we include an optional argument to functions that use an RNG. This argument is called seed, but determines more than the seed of the RNG. It tells the function which RNG package to use, and whether to use a global or local RNG.
>>> from networkx import path_graph, random_layout
>>> G = path_graph(9)
>>> pos = random_layout(G, seed=None) # use (either) global default RNG
>>> pos = random_layout(G, seed=42) # local RNG just for this call
>>> pos = random_layout(G, seed=numpy.random) # use numpy global RNG
>>> random_state = numpy.random.RandomState(42)
>>> pos = random_layout(G, seed=random_state) # use/reuse your own RNG
Each NetworkX function that uses an RNG was written with one RNG package in mind. It either uses random or numpy.random by default. But some users want to only use a single RNG for all their code. This seed argument provides a mechanism so that any function can use a numpy.random RNG even if the function is written for random. It works as follows.
The default behavior (when seed=None) is to use the global RNG for the function’s preferred package. If seed is set to an integer value, a local RNG is created with the indicated seed value and is used for the duration of that function (including any calls to other functions) and then discarded. Alternatively, you can specify seed=numpy.random to ensure that the global numpy RNG is used whether the function expects it or not. Finally, you can provide a numpy RNG to be used by the function. The RNG is then available to use in other functions or even other package like sklearn. In this way you can use a single RNG for all random numbers in your project.
While it is possible to assign seed a random-style RNG for NetworkX functions written for the random package API, the numpy RNG interface has too many nice features for us to ensure a random-style RNG will work in all functions. In practice, you can do most things using only random RNGs (useful if numpy is not available). But your experience will be richer if numpy is available.
To summarize, you can easily ignore the seed argument and use the global RNGs. You can specify to use only the numpy global RNG with seed=numpy.random. You can use a local RNG by providing an integer seed value. And you can provide your own numpy RNG, reusing it for all functions. It is easier to use numpy RNGs if you want a single RNG for your computations.
|
2020-04-03 18:25:20
|
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 1, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.29455965757369995, "perplexity": 1844.7924843273647}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.3, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2020-16/segments/1585370515113.54/warc/CC-MAIN-20200403154746-20200403184746-00550.warc.gz"}
|
https://plainmath.net/49788/graph-the-system-of-inequalities-below-and-label-the-solution-set
|
# Graph the system of inequalities below,and label the solution set.
Axel123 2022-01-10
Graph the system of inequalities below, and clearly label the solution set
y≤-2/3x+3
y>x+5
• Questions are typically answered in as fast as 30 minutes
### Solve your problem for the price of one coffee
• Math expert for every subject
• Pay only if we can solve it
|
2022-01-17 06:37:24
|
{"extraction_info": {"found_math": false, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.8749786615371704, "perplexity": 2051.32830239429}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2022-05/segments/1642320300343.4/warc/CC-MAIN-20220117061125-20220117091125-00151.warc.gz"}
|
https://testbook.com/question-answer/the-potential-point-charge-at-a-distance-r-is-give--5e75e6fdf60d5d7f3bfc5155
|
The potential point charge at a distance r is given by
This question was previously asked in
UPRVUNL AE EE 2016 Official Paper
View all UPRVUNL AE Papers >
1. $$\frac{{{Q_1}}}{{4\pi {\varepsilon _0}{r^2}}}$$
2. $$\frac{{{Q_1}{Q_2}}}{{4\pi {\varepsilon _0}{r^2}}}$$
3. $$\frac{{{Q_1}}}{{4\pi {\varepsilon _0}r}}$$
4. None of the other options
Answer (Detailed Solution Below)
Option 3 : $$\frac{{{Q_1}}}{{4\pi {\varepsilon _0}r}}$$
Detailed Solution
The potential point charge at a distance r is defined as the amount of work done in moving that charge from infinite to the point from where it calculated.
$$V = \frac{W}{Q}$$
W = work done
Q = Charge
And W = F.d
F = force
d = distance between point and charge = r
$$F = \frac{1}{{4\pi {\epsilon_0}}}\frac{{{Q_1}{Q_2}}}{{{r^2}}}$$
$$\Rightarrow W = \frac{1}{{4\pi {\epsilon_0}}}\frac{{{Q_1}{Q_2}}}{r}$$
$$\Rightarrow V = \frac{1}{{4\pi {\epsilon_0}}}\frac{Q}{r}$$
|
2021-10-16 00:00:36
|
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 1, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.7838078141212463, "perplexity": 2022.1636513025628}, "config": {"markdown_headings": false, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.3, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2021-43/segments/1634323583087.95/warc/CC-MAIN-20211015222918-20211016012918-00504.warc.gz"}
|
https://tutorial.math.lamar.edu/Solutions/CalcI/ExpLogEqns/Prob9.aspx
|
Paul's Online Notes
Home / Calculus I / Review / Exponential and Logarithm Equations
Show Mobile Notice Show All Notes Hide All Notes
Mobile Notice
You appear to be on a device with a "narrow" screen width (i.e. you are probably on a mobile phone). Due to the nature of the mathematics on this site it is best views in landscape mode. If your device is not in landscape mode many of the equations will run off the side of your device (should be able to scroll to see them) and some of the menu items will be cut off due to the narrow screen width.
### Section 1-9 : Exponential And Logarithm Equations
9. Find all the solutions to $$\log \left( w \right) + \log \left( {w - 21} \right) = 2$$. If there are no solutions clearly explain why.
Show All Steps Hide All Steps
Hint : Don’t forget about the basic logarithm properties and how they can be used to combine multiple logarithms into a single logarithm.
Start Solution
We need to reduce this down to an equation with a single logarithm and to do that we first should rewrite it a little. Upon doing that we can use the basic logarithm properties to combine the two logarithms into a single logarithm as follows,
\begin{align*}\log \left( {w\left( {w - 21} \right)} \right) & = 2\\ \log \left( {{w^2} - 21w} \right) & = 2\end{align*} Show Step 2
Now all we need to do is exponentiate both sides using 10 (because we’re working with the common logarithm) and then solve for $$y$$.
\begin{align*}\log \left( {{w^2} - 21w} \right) & = 2\\ {10^{\log \left( {{w^2} - 21w} \right)}} & = {10^2}\\ {w^2} - 21w & = 100\\ {w^2} - 21w - 100 & = 0\\ \left( {w - 25} \right)\left( {w + 4} \right) & = 0\hspace{0.5in} \Rightarrow \hspace{0.5in}w = - 4,\,\,\,\,w = 25\end{align*} Show Step 3
We’re dealing with logarithms so we need to make sure that we won’t have any problems with any of our potential solutions. In other words, we need to make sure that if we plug either of the two potential solutions into the original equation we won’t end up taking the logarithm of a negative number or zero.
Upon inspection we can quickly see that if we plug in $$w = - 4$$ we will be taking a logarithm of a negative number (in both of the logarithms in this case) and so $$w = - 4$$ can’t be a solution. On the other hand, if we plug in $$w = 25$$ we won’t be taking logarithms of negative numbers and so $$w = 25$$ is a solution.
In summary then, the only solution to the equation is : $$\require{bbox} \bbox[2pt,border:1px solid black]{{w = 25}}$$.
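As a quick check (not part of the original solution), plugging $$w = 25$$ back into the original equation gives $$\log \left( {25} \right) + \log \left( {25 - 21} \right) = \log \left( {25 \cdot 4} \right) = \log \left( {100} \right) = 2$$, as required.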
|
2021-06-15 04:05:10
|
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 1, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.9391825199127197, "perplexity": 412.7671968265759}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.3, "absolute_threshold": 20, "end_threshold": 5, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2021-25/segments/1623487616657.20/warc/CC-MAIN-20210615022806-20210615052806-00327.warc.gz"}
|
https://blot.im/questions/1015
|
# Analytics integration with GoatCounter
Hello! Do you have a way I can integrate with GoatCounter? I'm a software dev in my day job, so I'm totally ok with writing some custom JS on my site if there's a way to do that. Thanks! :D https://www.goatcounter.com/
2 replies
nvm! figured out I can just add it in the template source code directly
|
2023-02-07 02:22:12
|
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 2, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.20068933069705963, "perplexity": 1073.134436275624}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2023-06/segments/1674764500368.7/warc/CC-MAIN-20230207004322-20230207034322-00545.warc.gz"}
|
https://bruop.github.io/frustum_culling/
|
# Frustum Culling
Update: After posting this on twitter, I received a few helpful suggestions to implement more robust testing using the separating axis theorem. I’ll implement and write up a follow-up post soon, but I wanted to post this disclaimer that perhaps I was a bit too optimistic about just how many false negatives this method would produce. I’ll include a more detailed explanation in the follow-up.
## Introduction
It’s been a while! As it has been for many, 2020 has been a bit of a rollercoaster for me. Also at some point I decided to try and write my own DX12 rendering lib from scratch, and it was maybe ill advised from a “productively implement graphics techniques” perspective, but it’s definitely forced me to read and write a lot more C/C++ code. But I digress, since that’s not what this post is about and instead we’ll be talking about how to reduce wasted work rendering objects out of view.
But before we get started, let’s review what happens when you submit a draw call for a mesh that’s not visible from the camera’s point of view: first, we’ll have to record the associated information into a command buffer such as pointers to the meshes we’re rendering, any changes in pipeline state, texture pointers and material data, etc. Recording and submitting this data is relatively cheap using DX12, but not free. The command buffer is then copied and sent over to the GPU through your PCIe bus, at which point the GPU command processor can start consuming it.
The first stage in the pipeline is going to be input assembly (IA) where we load indices from the mesh’s index buffer (if applicable) as well as the associated vertex data, which are then passed to our vertex shaders. This second stage is chiefly responsible for transforming our vertices into clip space, calculated using the model, view and projection transforms that we’ve passed in from the CPU side of our application. With that work done, the next stage, called primitive assembly (PA), groups our vertices into the primitive type set by our pipeline state (e.g. points, lines or triangles) and performs viewport and backface culling, as well as clipping of any primitives that intersect the boundaries of NDC space.
For a visible mesh, there are additional stages but this is the end of the line for our non-visible meshes since they will be entirely culled by the viewport culling, resulting in an early exit. If you’re interested in a more exhaustive overview of the pipeline, I recommend this series of blog posts by Fabian Giesen. The point is that none of this work I just described will result in pixel writes and it is just a waste of time.
Additionally, we should also consider that we often render the same scene from multiple points of view. For example, when rendering shadow maps we’re going to pay this tax for each map we’re rendering to.
Obviously we’re performing a ton of extra work that is totally useless. To demonstrate this, I set up a scene with 10,000 Boom Boxes arranged in a 3D grid around the origin. The data associated with their draw calls is stored in linear arrays that we loop through during our render pass, so it’s as simple as I could make it. The camera sits in the middle of the 3D grid, seeing only a subset of the boomboxes. Here’s an overhead view of the scene (with only a subset of boomboxes rendered, to make it clearer):
All the meshes being rendered behind the blue line that represents our view frustum represent wasted work! Naively rendering all 10,000 of these meshes on my PC resulted in the following timings:
| Stage | Time (ms) |
| --- | --- |
| CPU Command Buffer Creation + Submission | 2.4 |
| GPU execution | 5.8 |
| Total | 8.2 |
We can get a sense of how much time is being wasted by examining the GPU occupancy graph which displays a timeline of how our GPU was utilized during the frame, specifically indicating the number of waves/warps being run in parallel per SM. The green indicates that the warps are being used for vertex shading, while the light blue indicates that they’re being used for pixel/fragment shading.
The graph shows that ~5.6 ms are required to render all our meshes, but we spent the majority of that time running the vertex shaders for geometry that has no corresponding pixel shading (this shows up as the large regions of vertex shader work, in green, with no matching pixel shader work, in blue). This means we should expect large gains on the order of 3-4 ms if we are able to selectively render only objects in view.
A solution seems obvious: don’t try to render objects out of view! That should mean fewer commands, which should mean less wasted time on both the CPU and GPU.
## Culling and Bounding Volumes
Frustum culling is a solution to this problem wherein we identify which primitives are actually relevant to the current view by performing intersection checks against the view frustum. The diagram below shows an example of this, with objects lying outside the view frustum being outlined with a dotted stroke (and labelled with “view frustum”).
There are actually quite a few different ways to solve this problem and as always they come with various benefits and drawbacks. The first choice you’ll need to make is how you want to represent the bounds of your meshes, as testing the raw geometry against your frustum isn’t feasible, so you’d just be re-implementing vertex shading on the CPU. Therefore we need to use a simplified geometry instead, with the two most common choices being bounding spheres or oriented bounding boxes (OBBs). Usually axis aligned bounding boxes (AABBs) in model space are used to calculate the OBBs. Below is an example of an AABB, bounding sphere and OBB for our boombox.
The simplicity of a sphere means that intersection tests against other primitives are simple and cheap. Spheres also have an extremely compact data representation, requiring only a position and radius. On the other hand, the quality of spheres as bounding volumes drops quickly as the objects they enclose become “longer”. If an object’s bounds extend much further along one axis than they do in others, then the volume will contain mostly empty space, meaning that it may pass our test despite all of the actual geometry lying outside the frustum.
An alternative is to use boxes, which are much better suited to objects that do not have roughly equal maximal extents, since they store the distance from the center along each axis. An axis aligned bounding box (AABB) is the simplest version, where the edges of the box are aligned with our coordinate axes. Oriented bounding boxes are a bit more complicated in that they can have arbitrary orientation. AABBs are commonly used to represent the bounding volume in model space, while OBBs are used to represent the bounds after the object is transformed to World or View space. OBBs have almost the opposite trade-offs to spheres. For instance, their representation requires storing either all eight vertices or the three vectors that run alongside the edges of the box. Intersection tests are more expensive. However, as mentioned, they can be a much “tighter” fit for arbitrary geometry.
Which volume you use in the end depends on how accurate vs how fast you need your frustum culling to be. Some games will break culling down into broad and fine phases, so that the first broad phase can use spheres and the fine phase uses other geometry (or other culling techniques altogether). I ended up choosing OBBs because as demonstrated below, we can actually choose a very simple intersection test for the specific task of view frustum culling.
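As a small aside (my own sketch, not from the post): the model space AABB itself is just a min/max sweep over the mesh’s vertex positions, using the vec3/AABB types that appear in the code further down and std::min/std::max from <algorithm>.
// Hypothetical helper: build a model-space AABB by sweeping the mesh's vertex positions.
// Assumes count >= 1 and that vec3/AABB match the definitions shown later in the post.
AABB compute_AABB(const vec3* positions, size_t count)
{
    AABB aabb;
    aabb.min = aabb.max = positions[0];
    for (size_t i = 1; i < count; ++i) {
        aabb.min.x = std::min(aabb.min.x, positions[i].x);
        aabb.min.y = std::min(aabb.min.y, positions[i].y);
        aabb.min.z = std::min(aabb.min.z, positions[i].z);
        aabb.max.x = std::max(aabb.max.x, positions[i].x);
        aabb.max.y = std::max(aabb.max.y, positions[i].y);
        aabb.max.z = std::max(aabb.max.z, positions[i].z);
    }
    return aabb;
}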
## OBB Culling
We can exploit the fact that in clip space, all points lying inside the view frustum will satisfy the following inequalities:
\begin{aligned} -w &\leq x \leq w \\ -w &\leq y \leq w \\ 0 &\leq z \leq w \\ \end{aligned}
As a quick reminder, points can be transformed from model to clip space using a Model-View-Projection (MVP) matrix. This means that we’ll need to transform all of the vertices of our AABBs into clip space.
In my implementation, the output of the culling is a list of ids for the objects I want to render. The input is the camera, and then for each object a model-to-world transform plus a model space AABB. The culling function looks like:
struct Camera {
mat4 view;
mat4 projection;
// Some other stuff!
};
struct AABB {
vec3 min = {};
vec3 max = {};
};
void cull_AABBs_against_frustum(
const Camera& camera,
const Array<mat4>& transforms,
const Array<AABB>& aabb_list,
Array<u32>& out_visible_list
) {
mat4 VP = camera.projection * camera.view;
for (size_t i = 0; i < aabb_list.size; i++) {
// model->view->projection transform
mat4 MVP = VP * transforms[i];
const AABB& aabb = aabb_list[i];
if (test_AABB_against_frustum(MVP, aabb)) {
out_visible_list.push_back(i);
}
}
}
The visibility test is simple: we use our AABB corners (min and max) to initialize eight vertices, then transform each one to clip space and then perform our test as defined above:
bool test_AABB_against_frustum(mat4& MVP, const AABB& aabb)
{
// Use our min max to define eight corners
vec4 corners[8] = {
{aabb.min.x, aabb.min.y, aabb.min.z, 1.0}, // x y z
{aabb.max.x, aabb.min.y, aabb.min.z, 1.0}, // X y z
{aabb.min.x, aabb.max.y, aabb.min.z, 1.0}, // x Y z
{aabb.max.x, aabb.max.y, aabb.min.z, 1.0}, // X Y z
{aabb.min.x, aabb.min.y, aabb.max.z, 1.0}, // x y Z
{aabb.max.x, aabb.min.y, aabb.max.z, 1.0}, // X y Z
{aabb.min.x, aabb.max.y, aabb.max.z, 1.0}, // x Y Z
{aabb.max.x, aabb.max.y, aabb.max.z, 1.0}, // X Y Z
};
bool inside = false;
for (size_t corner_idx = 0; corner_idx < ARRAY_SIZE(corners); corner_idx++) {
// Transform vertex
vec4 corner = MVP * corners[corner_idx];
// Check vertex against clip space bounds
inside = inside ||
within(-corner.w, corner.x, corner.w) &&
within(-corner.w, corner.y, corner.w) &&
within(0.0f, corner.z, corner.w);
}
return inside;
}
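One note: the snippet above leans on two helpers that the post never shows, a within predicate and an ARRAY_SIZE macro. Here’s a minimal sketch of what they might look like (these exact definitions are my assumption, not the author’s code):
// Hypothetical definitions assumed by the snippets in this post.
// within: inclusive range check, lo <= v <= hi
inline bool within(float lo, float v, float hi)
{
    return lo <= v && v <= hi;
}
// ARRAY_SIZE: element count of a C-style array
#define ARRAY_SIZE(arr) (sizeof(arr) / sizeof((arr)[0]))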
With this, we now have a working version of frustum culling! Re-running the same test from earlier, let’s examine the differences in performance, with rough timings in the table below:
| Stage | Time (ms) |
| --- | --- |
| Frustum Culling | 1.2 |
| CPU Command Buffer Creation + Submission | 1.5 |
| GPU execution | 1.5 |
| Total | 4.2 |
We can see that although we’ve added an extra step, we’ve saved some time while recording and submitting our command buffer; the total CPU frame time has gone up a little because of the added culling work. However, the GPU execution time has decreased by about 4.3 ms, an almost 75% decrease in GPU time. Additionally, if we look at the GPU timeline and occupancy, we see huge gains, with the render time reduced to 2.455 ms. Below is the occupancy graph; notice how much less unused vertex shader work there is.
## Further Optimization
We’ve already improved our timing by about 33%, but we’re now spending a significant percentage of our frame time culling these objects. At this point, we could attempt several different optimizations (all of which could be potentially combined):
• Introducing acceleration structures that let us cull entire groupings of objects at once (as mentioned previously, bounding volume hierarchies are commonly used). In theory, culling time no longer increases linearly as a function of the number of AABBs.
• Spreading our culling across many different threads/cores, having each core process a subset of objects at the same time.
• “Vectorize” our code, taking advantage of data level parallelism and SIMD.
Each option comes with it’s own set of tradeoffs, which is almost always the case. Building acceleration structures is not free and will add it’s own line item to our per-frame timing summary. To reduce the cost, the scene is often split into static and non-static hierarchies, with the static BVH only being built once outside the frame loop, but the non-static BVH will still require updating per frame. Additionally, care must be taken when building tree data structures so we don’t end up making the culling process slower due to CPU cache misses when traversing down the tree. If your scene is mostly static, then it might make sense to build a BVH for static objects, flatten it to minimize cache misses, and then simply linearly loop through your dynamic objects as we’ve done here. Frostbite actually presented a case where removing their BVH and replacing it with a naive linear array resulted in a 3x improvement in speed.
Meanwhile, introducing multi-threading is not trivial and can easily make the entire process slower. I actually played around with using FiberTaskingLib to launch tasks that culled 1024 objects each and then combined their results into a single visibility list that my renderer could consume as before, and it was almost 10 times slower than the naive approach I showed earlier! I still have to investigate why exactly this was the case, but my bet is that it’s mostly due to the combine step. It’s possible that producing a single list will always eat into any gains made from splitting up the culling, but like I said, further investigation is needed.
Finally, let me introduce data level parallelism. The idea here is that even on a single core, CPUs have large registers that we can treat as containing multiple data points upon which we can perform the same instruction across all that data simultaneously. This idea of executing a single instruction across multiple data (or SIMD) can lead to a multiplicative speed increase. As an example, consider the following (somewhat contrived) scalarized code that takes the dot product of elements in two lists of vectors:
struct vec4 {
float x;
float y;
float z;
float w;
};
vec4 a[N] = { ... };
vec4 b[N] = { ... };
float c[N]{};
for (size_t i = 0; i < N; ++i) {
const vec4& lhs = a[i];
const vec4& rhs = b[i];
float res = 0.0f;
// This is probably actually an inlined function call or something
for (size_t j = 0; j < 4; ++j) {
res += lhs[j] * rhs[j];
}
c[i] = res;
}
Now let’s consider how we could “vectorize” this code. Modern SIMD registers can store anywhere from 128 to 512 bits of data, which means we could operate on sets of 4 to 16 32-bit floating point numbers at a time. For the purposes of this example, we’ll use 128-bit wide registers (e.g. ones with 4 “lanes”) but the idea extends to higher width registers trivially. So instead of storing our data as an array of structs (commonly referred to as AoS), let’s use a structure of arrays (SoA) to represent our list of vectors. This means instead of storing our vectors in a way that looks like:
x y z w x y z w x y z w ...
We’ll store each component in their own list, so it’ll look more like:
x x x x x x ...
y y y y y y ...
z z z z z z ...
w w w w w w ...
Here’s some C++ code (that I didn’t attempt to compile) that shows this alternate structure:
struct VectorList {
float* x;
float* y;
float* z;
float* w;
// Probably additional info on how big it is, etc
size_t size;
};
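To make the layout change concrete, here’s a small sketch (mine, not from the post) of scattering an AoS array of vec4 into the SoA VectorList; it assumes the four component arrays have already been allocated to hold count floats:
// Hypothetical helper: convert AoS vec4 data into the SoA VectorList layout.
// Assumes dst.x, dst.y, dst.z and dst.w each point to at least count floats.
void fill_soa_from_aos(VectorList& dst, const vec4* src, size_t count)
{
    for (size_t i = 0; i < count; ++i) {
        dst.x[i] = src[i].x;
        dst.y[i] = src[i].y;
        dst.z[i] = src[i].z;
        dst.w[i] = src[i].w;
    }
    dst.size = count;
}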
Next, instead of loading our data one vector at a time, we’ll load the data into our SIMD registers. To do so, we’ll have to use “intrinsics” provided by your CPU manufacturer, which in my case is Intel. I have no idea what the situation is on AMD, but I know that ARM64 and other platforms probably provide their own set of intrinsics. For Intel, an exhaustive (if not particularly beginner friendly) list can be found on their intrinsics guide. So we’ll be using these intrinsics to load our data into 128-bit wide registers, and multiply and add across the lanes to calculate 4 dot products all at once. I have found it helpful to visualize these operations vertically due to the nature of how SIMD operates across the “lanes” of SIMD registers, so I’ve annotated the code with crappy little diagrams to try and visualize the intrinsics:
VectorList a{N};
VectorList b{N};
float c[N]{};
// We iterate with a stride of 4
constexpr size_t stride = sizeof(__m128) / sizeof(float); // 4
for (size_t i = 0; i < N; i += stride) {
// Load the x component of 4 vectors into a SIMD register
// (using the unaligned load, since our SoA pointers aren't guaranteed to be 16-byte aligned)
__m128 x_lhs = _mm_loadu_ps(&a.x[i]);
__m128 x_rhs = _mm_loadu_ps(&b.x[i]);
// Do the same for our other components
__m128 y_lhs = _mm_loadu_ps(&a.y[i]);
__m128 y_rhs = _mm_loadu_ps(&b.y[i]);
__m128 z_lhs = _mm_loadu_ps(&a.z[i]);
__m128 z_rhs = _mm_loadu_ps(&b.z[i]);
__m128 w_lhs = _mm_loadu_ps(&a.w[i]);
__m128 w_rhs = _mm_loadu_ps(&b.w[i]);
// Now multiply the x components together:
// Equivalent to
// x1 x2 x3 x4
// * * * *
// X1 X2 X3 X4
// = = = =
// r1 r2 r3 r4
__m128 res = _mm_mul_ps(x_lhs, x_rhs);
// Now, multiply the other components together AND add the result to our temp variable
// Equivalent to
// y1 y2 y3 y4
// * * * *
// Y1 Y2 Y3 Y4
// + + + +
// r1 r2 r3 r4
// = = = =
// r1 r2 r3 r4
res = _mm_add_ps(res, _mm_mul_ps(y_lhs, y_rhs));
res = _mm_add_ps(res, _mm_mul_ps(z_lhs, z_rhs));
res = _mm_add_ps(res, _mm_mul_ps(w_lhs, w_rhs));
// Store the data into our array c
// This copies all 4 dot products into our array at index i through i+3
_mm_store_ps(&c[i], res);
}
A few things stuck out to me. First of all, we had to change the way our data was laid out in memory in order to take advantage of our SIMD lanes, so re-writing an algorithm might mean you need to make more fundamental changes than just swapping out your inner for loops. Additionally, those intrinsic functions are a bit scary in terms of how they are platform specific. This can be addressed by using a SIMD library or something like DirectXMath, which takes care of using the relevant intrinsics for most CPU vendors. Finally, you might feel like writing such code comes with a significant cognitive and productivity tax. There are also tools like ISPC that make writing vectorized code a bit less “artisanal”, but I haven’t had a chance to really play around with them. I should also mention that compilers are sometimes able to translate scalarized code into vectorized code, but since I was aiming to write SIMD code as a learning exercise I didn’t look into it very much.
Let’s look at a less contrived example: frustum culling! That is, after all, the whole point of this post.
## SIMD Implementation
When I started looking into how we could re-write this into SIMD, I actually found a series of blog posts on optimizing frustum culling by Arseny Kapoulkine. However, the series is specifically about optimizing it on the Playstation 3 (I think) and as a result it uses the SPU intrinsics that were part of Sony’s/IBM’s platform. But I was able to effectively translate it to use intel’s SSE/AVX intrinsics with only one or two problems.
There were two different operations I figured were worth exploiting SIMD for:
1. Multiplying the various transformation matrices, since that’s done for each object and their 4x4 nature is a natural fit for SIMD lanes
2. Transforming our model space AABB vertices to clip space
First, let’s tackle matrix multiplication for 4x4 matrices. The biggest thing I struggled with on this one was understanding how we can calculate an entire row of our resulting matrix at once. Let’s start by considering how we’d calculate the $i$th row of our matrix for $C = AB$:
\begin{aligned} C_{i0} &= A_{i0} B_{00} + A_{i1} B_{10} + A_{i2} B_{20} + A_{i3} B_{30} \\ C_{i1} &= A_{i0} B_{01} + A_{i1} B_{11} + A_{i2} B_{21} + A_{i3} B_{31} \\ C_{i2} &= A_{i0} B_{02} + A_{i1} B_{12} + A_{i2} B_{22} + A_{i3} B_{32} \\ C_{i3} &= A_{i0} B_{03} + A_{i1} B_{13} + A_{i2} B_{23} + A_{i3} B_{33} \end{aligned}
You may already notice the pattern in the “columns” above, but just to make it super clear, let’s flip this over onto its side:
$\def\arraystretch{1.5} \begin{array}{c:c:c:c} C_{i0} & C_{i1} & C_{i2} & C_{i3} \\ \hline A_{i0} & A_{i0} & A_{i0} & A_{i0} \\ \times & \times & \times & \times \\ B_{00} & B_{01} & B_{02} & B_{03} \\ + & + & + & + \\ A_{i1} & A_{i1} & A_{i1} & A_{i1} \\ \times & \times & \times & \times \\ B_{10} & B_{11} & B_{12} & B_{13} \\ + & + & + & + \\ A_{i2} & A_{i2} & A_{i2} & A_{i2} \\ \times & \times & \times & \times \\ B_{20} & B_{21} & B_{22} & B_{23} \\ + & + & + & + \\ A_{i3} & A_{i3} & A_{i3} & A_{i3} \\ \times & \times & \times & \times \\ B_{30} & B_{31} & B_{32} & B_{33} \\ \end{array}$
Hopefully that helps you visualize how we can use SIMD lanes for this problem. We’ll have to fill several registers such that they contain only a single element of the ith row of $A$, then perform the multiplications & adds that we’ve already demonstrated. Let’s look at the code:
struct mat4 {
    union {
        vec4 rows[4];
        float data[16];
    };
};
void matrix_mul_sse(const mat4& A, const mat4& B, mat4& dest)
{
    // Load each row of B once; every __m128 holds one full row
    const __m128 b0 = _mm_loadu_ps(&B.rows[0].x);
    const __m128 b1 = _mm_loadu_ps(&B.rows[1].x);
    const __m128 b2 = _mm_loadu_ps(&B.rows[2].x);
    const __m128 b3 = _mm_loadu_ps(&B.rows[3].x);
    for (size_t i = 0; i < ARRAY_SIZE(A.rows); i++) {
        // Fill up our registers with a component from the ith row of A,
        // then multiply each splat against the matching row of B and accumulate
        __m128 v_x = _mm_mul_ps(_mm_set1_ps(A.rows[i].x), b0);
        v_x = _mm_add_ps(v_x, _mm_mul_ps(_mm_set1_ps(A.rows[i].y), b1));
        v_x = _mm_add_ps(v_x, _mm_mul_ps(_mm_set1_ps(A.rows[i].z), b2));
        v_x = _mm_add_ps(v_x, _mm_mul_ps(_mm_set1_ps(A.rows[i].w), b3));
        _mm_store_ps((&dest.rows[i].x), v_x);
    }
};
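To connect this back to the earlier culling loop, here’s roughly how the SIMD multiply might slot in; this is my own sketch reusing the names from cull_AABBs_against_frustum, not something the post shows explicitly:
// Hypothetical: compute the MVP per object with the SIMD multiply, then cull as before.
mat4 VP;
matrix_mul_sse(camera.projection, camera.view, VP);
for (size_t i = 0; i < aabb_list.size; i++) {
    mat4 MVP;
    matrix_mul_sse(VP, transforms[i], MVP);
    if (test_AABB_against_frustum(MVP, aabb_list[i])) {
        out_visible_list.push_back(i);
    }
}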
Next up, we need to write code that will transform our AABB vertices into clip space. To accomplish this we could use either 128-bit or 256-bit wide registers. Since we have 8 vertices, it would probably make sense to use the 8 lanes available with 256-bit. I ended up writing a 128-bit version first, and then a 256 version, and neither showed any speed differences on my machine.
Performing the matrix-vector multiplication is nearly identical to our matrix multiplication from earlier. The operation looks like:
$A\textbf{v} = \begin{bmatrix} A_{00} & A_{01} & A_{02} & A_{03} \\ A_{10} & A_{11} & A_{12} & A_{13} \\ A_{20} & A_{21} & A_{22} & A_{23} \\ A_{30} & A_{31} & A_{32} & A_{33} \\ \end{bmatrix} \begin{bmatrix} v_x \\ v_y \\ v_z \\ v_w \\ \end{bmatrix} = \begin{bmatrix} A_{00} v_x + A_{01} v_y + A_{02} v_z + A_{03} v_w \\ A_{10} v_x + A_{11} v_y + A_{12} v_z + A_{13} v_w \\ A_{20} v_x + A_{21} v_y + A_{22} v_z + A_{23} v_w \\ A_{30} v_x + A_{31} v_y + A_{32} v_z + A_{33} v_w \\ \end{bmatrix}$
So for each component we’ll need to perform 4 multiplies and 3 adds, plus splatting of the various components of our transform matrix. It ends up being pretty compact:
void transform_points_8(__m256 dest[4], const __m256 x, const __m256 y, const __m256 z, const mat4& transform)
{
    // dest[i] = row_i.x * x + row_i.y * y + row_i.z * z + row_i.w
    // (our input points all have w == 1, so the fourth multiply collapses into a plain add)
    for (size_t i = 0; i < 4; ++i) {
        __m256 res = _mm256_mul_ps(_mm256_set1_ps(transform.rows[i].x), x);
        res = _mm256_add_ps(res, _mm256_mul_ps(_mm256_set1_ps(transform.rows[i].y), y));
        res = _mm256_add_ps(res, _mm256_mul_ps(_mm256_set1_ps(transform.rows[i].z), z));
        res = _mm256_add_ps(res, _mm256_set1_ps(transform.rows[i].w));
        dest[i] = res;
    }
}
Finally, we can bring this all together by filling registers with our unique combinations of the AABB’s min and max components, transforming those vertices, performing logical comparisons and then finally reducing the result into a single value. We don’t care how many vertices are in or out, just whether any single vertex is inside.
// Fills an entire __m128 with the value at c of the __m128 v
#define SPLAT(v, c) _mm_permute_ps(v, _MM_SHUFFLE(c, c, c, c))
bool test_AABB_against_frustum_256(mat4& transform, const AABB& aabb)
{
// Could probably skip this by storing our AABBs as 2 vec4s just for alignment's sake
vec4 min{ aabb.min, 1.0f };
vec4 max{ aabb.max, 1.0f };
// Load them into SSE registers so we can shuffle their components around below
const __m128 aabb_min = _mm_loadu_ps(&min.x);
const __m128 aabb_max = _mm_loadu_ps(&max.x);
// We have to do some shuffling to get our combinations
// res = _mm_shuffle_ps(a, b, _MM_SHUFFLE(i, j, k, l)) is equivalent to
// res[0] = a[i]
// res[1] = a[j]
// res[2] = b[k]
// res[3] = b[l]
__m128 x_minmax = _mm_shuffle_ps(aabb_min, aabb_max, _MM_SHUFFLE(0, 0, 0, 0)); // x x X X
x_minmax = _mm_permute_ps(x_minmax, _MM_SHUFFLE(2, 0, 2, 0)); // x X x X
const __m128 y_minmax = _mm_shuffle_ps(aabb_min, aabb_max, _MM_SHUFFLE(1, 1, 1, 1)); // y y Y Y
const __m128 z_min = SPLAT(aabb_min, 2); // z z z z
const __m128 z_max = SPLAT(aabb_max, 2); // Z Z Z Z
// Each __m256 represents a single component of 8 vertices
// _mm256_set_m128 just combines two m128s into a single m256
const __m256 x = _mm256_set_m128(x_minmax, x_minmax);
const __m256 y = _mm256_set_m128(y_minmax, y_minmax);
const __m256 z = _mm256_set_m128(z_min, z_max);
// corner_comps[0] = { x1, x2, x3, ... } ... corners[4] = {w1, w2, w3, w4 ...};
__m256 corner_comps[4];
transform_points_8(corner_comps, x, y, z, transform);
const __m256 neg_ws = _mm256_sub_ps(_mm256_setzero_ps(), corner_comps[3]);
// Test whether -w < x < w
// Note that the comparison intrinsics will set the lanes to either 0 or 0xFFFFFFFF
__m256 inside = _mm256_and_ps(
_mm256_cmp_ps(neg_ws, corner_comps[0], _CMP_LE_OQ),
_mm256_cmp_ps(corner_comps[0], corner_comps[3], _CMP_LE_OQ)
);
// inside && -w < y < w
inside = _mm256_and_ps(
inside,
_mm256_and_ps(
_mm256_cmp_ps(neg_ws, corner_comps[1], _CMP_LE_OQ),
_mm256_cmp_ps(corner_comps[1], corner_comps[3], _CMP_LE_OQ)
)
);
// inside && 0 < z < w
inside = _mm256_and_ps(
inside,
_mm256_and_ps(
_mm256_cmp_ps(_mm256_setzero_ps(), corner_comps[2], _CMP_LE_OQ),
_mm256_cmp_ps(corner_comps[2], corner_comps[3], _CMP_LE_OQ)
)
);
// Reduce our 8 different in/out lanes to 4
// _mm256_extractf128_ps will extract four lanes into an m128 (either the first four or second four)
__m128 reduction = _mm_or_ps(_mm256_extractf128_ps(inside, 0), _mm256_extractf128_ps(inside, 1));
// Keep reducing! The following is equivalent to
// { inside[0] || inside[2], inside[1] || inside[3], inside[2] || inside[2], inside[3] || inside[3] }
reduction = _mm_or_ps(reduction, _mm_permute_ps(reduction, _MM_SHUFFLE(2, 3, 2, 3)));
// Then we perform another OR to fill the lowest lane with the final reduction
// { (reduction[0] || reduction[2]) || (reduction[1] || reduction[3]), ... }
reduction = _mm_or_ps(reduction, _mm_permute_ps(reduction, _MM_SHUFFLE(1, 1, 1, 1)));
// Store our reduction
u32 res = 0u;
_mm_store_ss(reinterpret_cast<float*>(&res), reduction);
return res != 0;
}
It’s definitely not the nicest looking code, but it’ll be worth it, I promise. Once you get used to the intrinsics, it’s really not so bad. The thing I had the most trouble with was definitely just parsing the documentation — there are a LOT of intrinsics, and discovering the one you need is not easy since the search is only helpful if you know or can guess the name of the one you need.
Another operation that puzzled me was how to best perform the “horizontal” OR reduction at the end. In zeux’s examples, he uses an orx intrinsic that doesn’t appear to exist on intel. Instead, I reduced by shuffling/permuting the vectors to ensure the first component ends up holding the final value we want.
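For what it’s worth, another option I’m aware of (not what the code above does) is to let _mm256_movemask_ps do the reduction: the comparison intrinsics set each lane to all zeros or all ones, so the sign bit of every lane already carries the in/out answer, and movemask packs those eight bits into an int.
// Hypothetical alternative reduction: non-zero means at least one corner passed all three tests.
bool any_inside = _mm256_movemask_ps(inside) != 0;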
Now let’s take a look at the results:
| Stage | Time (ms) |
| --- | --- |
| Frustum Culling | 0.3 |
| CPU Command Buffer Creation + Submission | 1.5 |
| GPU execution | 1.5 |
| Total | 3.3 |
The time needed for culling has gone down from ~1.1 ms to ~0.3 ms! On my laptop, the gains are similar — 3.0 ms down to 0.6ms. With SIMD, we are now spending less time rendering a frame on both the CPU and the GPU.
This may seem like a lot of work for saving 0.8 ms, but keep in mind that we may want to perform frustum culling many times — for instance, each dynamic shadow will have its own frustum(s), and we definitely don’t want to waste time submitting more draw calls than necessary there. So these optimizations will have a multiplicative effect as you add more shadow maps.
## Conclusion
To recap, for our contrived example we’ve brought down our frame time from 8.2 ms to 3.3 ms with frustum culling. Introducing SIMD in a small part of the code was also pretty impactful, cutting down the cost of culling by about 75%. There is probably still significant headroom for improvement — the lower bound is likely going to be determined by the bandwidth limitations around writing to our visibility list. Currently, we’re reaching about 32 bits * 1e4 / 3.0e-4 s ~= 1e9 bits/s, or roughly 1 Gbit/s (about 0.13 GB/s), in this worst case.
Additionally, I just want to re-iterate that I am not experienced with SIMD intrinsics. If you know of a better way to perform any parts of the above, please let me know in the comments, by email or through twitter @BruOps.
There is also a subtle problem with my culling code. If the OBB is large compared to the view frustum, then this code will produce false negatives. We should probably also perform the reciprocal test and check whether any of the frustum corners fall within the OBB. This is really only a problem at the near plane, so we might be able to get away with just testing those four corners.
Finally, I just want to highlight that while profiling with PIX I encountered pretty high variability in CPU execution times. I expected CPU timings to vary by 10-20% but I often had events exhibit single frame “spikes” where execution times were orders of magnitude higher than usual (within a capture of a few seconds). Other times, there would be smaller spikes that were only 2-5 times larger. Taking a look at the timeline, I saw that in these cases the thread my code was running on would stall, but it seems pretty non-deterministic since it’d happen for a few frames, and then wouldn’t. The camera wasn’t moving at all during the recordings, and as far as I know there were no allocations during the frame, so I’m not sure what was causing the variability between frames.
Thanks for reading til the end! Shout out to the folks on the Graphics Programming discord who provided guidance and rubber ducking. I hope you and your loved ones have a happy holiday, and that 2021 is kinder to us all.
|
2021-04-13 11:14:15
|
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 7, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 1, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.5314732789993286, "perplexity": 2229.393221042323}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 5, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2021-17/segments/1618038072180.33/warc/CC-MAIN-20210413092418-20210413122418-00242.warc.gz"}
|
https://people.maths.bris.ac.uk/~matyd/GroupNames/448/C2xD28sC4.html
|
## G = C2×D28⋊C4, order 448 = 2^6·7
### Direct product of C2 and D28⋊C4
Series: Derived Chief Lower central Upper central
Derived series C1 — C14 — C2×D28⋊C4
Chief series C1 — C7 — C14 — C2×C14 — C22×D7 — C23×D7 — C22×D28 — C2×D28⋊C4
Lower central C7 — C14 — C2×D28⋊C4
Upper central C1 — C23 — C2×C4⋊C4
Generators and relations for C2×D28⋊C4
G = < a,b,c,d | a2=b28=c2=d4=1, ab=ba, ac=ca, ad=da, cbc=b-1, dbd-1=b15, dcd-1=b14c >
Subgroups: 1988 in 426 conjugacy classes, 175 normal (21 characteristic)
C1, C2, C2, C2, C4, C4, C22, C22, C22, C7, C2×C4, C2×C4, D4, C23, C23, D7, C14, C14, C42, C22⋊C4, C4⋊C4, C22×C4, C22×C4, C22×C4, C2×D4, C24, Dic7, Dic7, C28, C28, D14, D14, C2×C14, C2×C14, C2×C42, C2×C22⋊C4, C2×C4⋊C4, C4×D4, C23×C4, C22×D4, C4×D7, D28, C2×Dic7, C2×Dic7, C2×C28, C2×C28, C22×D7, C22×D7, C22×C14, C2×C4×D4, C4×Dic7, D14⋊C4, C7×C4⋊C4, C2×C4×D7, C2×C4×D7, C2×D28, C22×Dic7, C22×C28, C22×C28, C23×D7, D28⋊C4, C2×C4×Dic7, C2×D14⋊C4, C14×C4⋊C4, D7×C22×C4, C22×D28, C2×D28⋊C4
Quotients: C1, C2, C4, C22, C2×C4, D4, C23, D7, C22×C4, C2×D4, C4○D4, C24, D14, C4×D4, C23×C4, C22×D4, C2×C4○D4, C4×D7, C22×D7, C2×C4×D4, C2×C4×D7, D4×D7, Q82D7, C23×D7, D28⋊C4, D7×C22×C4, C2×D4×D7, C2×Q82D7, C2×D28⋊C4
Smallest permutation representation of C2×D28⋊C4
On 224 points
Generators in S224
(1 79)(2 80)(3 81)(4 82)(5 83)(6 84)(7 57)(8 58)(9 59)(10 60)(11 61)(12 62)(13 63)(14 64)(15 65)(16 66)(17 67)(18 68)(19 69)(20 70)(21 71)(22 72)(23 73)(24 74)(25 75)(26 76)(27 77)(28 78)(29 153)(30 154)(31 155)(32 156)(33 157)(34 158)(35 159)(36 160)(37 161)(38 162)(39 163)(40 164)(41 165)(42 166)(43 167)(44 168)(45 141)(46 142)(47 143)(48 144)(49 145)(50 146)(51 147)(52 148)(53 149)(54 150)(55 151)(56 152)(85 199)(86 200)(87 201)(88 202)(89 203)(90 204)(91 205)(92 206)(93 207)(94 208)(95 209)(96 210)(97 211)(98 212)(99 213)(100 214)(101 215)(102 216)(103 217)(104 218)(105 219)(106 220)(107 221)(108 222)(109 223)(110 224)(111 197)(112 198)(113 196)(114 169)(115 170)(116 171)(117 172)(118 173)(119 174)(120 175)(121 176)(122 177)(123 178)(124 179)(125 180)(126 181)(127 182)(128 183)(129 184)(130 185)(131 186)(132 187)(133 188)(134 189)(135 190)(136 191)(137 192)(138 193)(139 194)(140 195)
(1 2 3 4 5 6 7 8 9 10 11 12 13 14 15 16 17 18 19 20 21 22 23 24 25 26 27 28)(29 30 31 32 33 34 35 36 37 38 39 40 41 42 43 44 45 46 47 48 49 50 51 52 53 54 55 56)(57 58 59 60 61 62 63 64 65 66 67 68 69 70 71 72 73 74 75 76 77 78 79 80 81 82 83 84)(85 86 87 88 89 90 91 92 93 94 95 96 97 98 99 100 101 102 103 104 105 106 107 108 109 110 111 112)(113 114 115 116 117 118 119 120 121 122 123 124 125 126 127 128 129 130 131 132 133 134 135 136 137 138 139 140)(141 142 143 144 145 146 147 148 149 150 151 152 153 154 155 156 157 158 159 160 161 162 163 164 165 166 167 168)(169 170 171 172 173 174 175 176 177 178 179 180 181 182 183 184 185 186 187 188 189 190 191 192 193 194 195 196)(197 198 199 200 201 202 203 204 205 206 207 208 209 210 211 212 213 214 215 216 217 218 219 220 221 222 223 224)
(1 64)(2 63)(3 62)(4 61)(5 60)(6 59)(7 58)(8 57)(9 84)(10 83)(11 82)(12 81)(13 80)(14 79)(15 78)(16 77)(17 76)(18 75)(19 74)(20 73)(21 72)(22 71)(23 70)(24 69)(25 68)(26 67)(27 66)(28 65)(29 168)(30 167)(31 166)(32 165)(33 164)(34 163)(35 162)(36 161)(37 160)(38 159)(39 158)(40 157)(41 156)(42 155)(43 154)(44 153)(45 152)(46 151)(47 150)(48 149)(49 148)(50 147)(51 146)(52 145)(53 144)(54 143)(55 142)(56 141)(85 216)(86 215)(87 214)(88 213)(89 212)(90 211)(91 210)(92 209)(93 208)(94 207)(95 206)(96 205)(97 204)(98 203)(99 202)(100 201)(101 200)(102 199)(103 198)(104 197)(105 224)(106 223)(107 222)(108 221)(109 220)(110 219)(111 218)(112 217)(113 181)(114 180)(115 179)(116 178)(117 177)(118 176)(119 175)(120 174)(121 173)(122 172)(123 171)(124 170)(125 169)(126 196)(127 195)(128 194)(129 193)(130 192)(131 191)(132 190)(133 189)(134 188)(135 187)(136 186)(137 185)(138 184)(139 183)(140 182)
(1 182 87 154)(2 169 88 141)(3 184 89 156)(4 171 90 143)(5 186 91 158)(6 173 92 145)(7 188 93 160)(8 175 94 147)(9 190 95 162)(10 177 96 149)(11 192 97 164)(12 179 98 151)(13 194 99 166)(14 181 100 153)(15 196 101 168)(16 183 102 155)(17 170 103 142)(18 185 104 157)(19 172 105 144)(20 187 106 159)(21 174 107 146)(22 189 108 161)(23 176 109 148)(24 191 110 163)(25 178 111 150)(26 193 112 165)(27 180 85 152)(28 195 86 167)(29 64 126 214)(30 79 127 201)(31 66 128 216)(32 81 129 203)(33 68 130 218)(34 83 131 205)(35 70 132 220)(36 57 133 207)(37 72 134 222)(38 59 135 209)(39 74 136 224)(40 61 137 211)(41 76 138 198)(42 63 139 213)(43 78 140 200)(44 65 113 215)(45 80 114 202)(46 67 115 217)(47 82 116 204)(48 69 117 219)(49 84 118 206)(50 71 119 221)(51 58 120 208)(52 73 121 223)(53 60 122 210)(54 75 123 197)(55 62 124 212)(56 77 125 199)
G:=sub<Sym(224)| (1,79)(2,80)(3,81)(4,82)(5,83)(6,84)(7,57)(8,58)(9,59)(10,60)(11,61)(12,62)(13,63)(14,64)(15,65)(16,66)(17,67)(18,68)(19,69)(20,70)(21,71)(22,72)(23,73)(24,74)(25,75)(26,76)(27,77)(28,78)(29,153)(30,154)(31,155)(32,156)(33,157)(34,158)(35,159)(36,160)(37,161)(38,162)(39,163)(40,164)(41,165)(42,166)(43,167)(44,168)(45,141)(46,142)(47,143)(48,144)(49,145)(50,146)(51,147)(52,148)(53,149)(54,150)(55,151)(56,152)(85,199)(86,200)(87,201)(88,202)(89,203)(90,204)(91,205)(92,206)(93,207)(94,208)(95,209)(96,210)(97,211)(98,212)(99,213)(100,214)(101,215)(102,216)(103,217)(104,218)(105,219)(106,220)(107,221)(108,222)(109,223)(110,224)(111,197)(112,198)(113,196)(114,169)(115,170)(116,171)(117,172)(118,173)(119,174)(120,175)(121,176)(122,177)(123,178)(124,179)(125,180)(126,181)(127,182)(128,183)(129,184)(130,185)(131,186)(132,187)(133,188)(134,189)(135,190)(136,191)(137,192)(138,193)(139,194)(140,195), (1,2,3,4,5,6,7,8,9,10,11,12,13,14,15,16,17,18,19,20,21,22,23,24,25,26,27,28)(29,30,31,32,33,34,35,36,37,38,39,40,41,42,43,44,45,46,47,48,49,50,51,52,53,54,55,56)(57,58,59,60,61,62,63,64,65,66,67,68,69,70,71,72,73,74,75,76,77,78,79,80,81,82,83,84)(85,86,87,88,89,90,91,92,93,94,95,96,97,98,99,100,101,102,103,104,105,106,107,108,109,110,111,112)(113,114,115,116,117,118,119,120,121,122,123,124,125,126,127,128,129,130,131,132,133,134,135,136,137,138,139,140)(141,142,143,144,145,146,147,148,149,150,151,152,153,154,155,156,157,158,159,160,161,162,163,164,165,166,167,168)(169,170,171,172,173,174,175,176,177,178,179,180,181,182,183,184,185,186,187,188,189,190,191,192,193,194,195,196)(197,198,199,200,201,202,203,204,205,206,207,208,209,210,211,212,213,214,215,216,217,218,219,220,221,222,223,224), (1,64)(2,63)(3,62)(4,61)(5,60)(6,59)(7,58)(8,57)(9,84)(10,83)(11,82)(12,81)(13,80)(14,79)(15,78)(16,77)(17,76)(18,75)(19,74)(20,73)(21,72)(22,71)(23,70)(24,69)(25,68)(26,67)(27,66)(28,65)(29,168)(30,167)(31,166)(32,165)(33,164)(34,163)(35,162)(36,161)(37,160)(38,159)(39,158)(40,157)(41,156)(42,155)(43,154)(44,153)(45,152)(46,151)(47,150)(48,149)(49,148)(50,147)(51,146)(52,145)(53,144)(54,143)(55,142)(56,141)(85,216)(86,215)(87,214)(88,213)(89,212)(90,211)(91,210)(92,209)(93,208)(94,207)(95,206)(96,205)(97,204)(98,203)(99,202)(100,201)(101,200)(102,199)(103,198)(104,197)(105,224)(106,223)(107,222)(108,221)(109,220)(110,219)(111,218)(112,217)(113,181)(114,180)(115,179)(116,178)(117,177)(118,176)(119,175)(120,174)(121,173)(122,172)(123,171)(124,170)(125,169)(126,196)(127,195)(128,194)(129,193)(130,192)(131,191)(132,190)(133,189)(134,188)(135,187)(136,186)(137,185)(138,184)(139,183)(140,182), (1,182,87,154)(2,169,88,141)(3,184,89,156)(4,171,90,143)(5,186,91,158)(6,173,92,145)(7,188,93,160)(8,175,94,147)(9,190,95,162)(10,177,96,149)(11,192,97,164)(12,179,98,151)(13,194,99,166)(14,181,100,153)(15,196,101,168)(16,183,102,155)(17,170,103,142)(18,185,104,157)(19,172,105,144)(20,187,106,159)(21,174,107,146)(22,189,108,161)(23,176,109,148)(24,191,110,163)(25,178,111,150)(26,193,112,165)(27,180,85,152)(28,195,86,167)(29,64,126,214)(30,79,127,201)(31,66,128,216)(32,81,129,203)(33,68,130,218)(34,83,131,205)(35,70,132,220)(36,57,133,207)(37,72,134,222)(38,59,135,209)(39,74,136,224)(40,61,137,211)(41,76,138,198)(42,63,139,213)(43,78,140,200)(44,65,113,215)(45,80,114,202)(46,67,115,217)(47,82,116,204)(48,69,117,219)(49,84,118,206)(50,71,119,221)(51,58,120,208)(52,73,121,223)(53,60,122,210)(54,75,123,197)(55,62,124,212)(56,77,125,199)>;
G:=Group( (1,79)(2,80)(3,81)(4,82)(5,83)(6,84)(7,57)(8,58)(9,59)(10,60)(11,61)(12,62)(13,63)(14,64)(15,65)(16,66)(17,67)(18,68)(19,69)(20,70)(21,71)(22,72)(23,73)(24,74)(25,75)(26,76)(27,77)(28,78)(29,153)(30,154)(31,155)(32,156)(33,157)(34,158)(35,159)(36,160)(37,161)(38,162)(39,163)(40,164)(41,165)(42,166)(43,167)(44,168)(45,141)(46,142)(47,143)(48,144)(49,145)(50,146)(51,147)(52,148)(53,149)(54,150)(55,151)(56,152)(85,199)(86,200)(87,201)(88,202)(89,203)(90,204)(91,205)(92,206)(93,207)(94,208)(95,209)(96,210)(97,211)(98,212)(99,213)(100,214)(101,215)(102,216)(103,217)(104,218)(105,219)(106,220)(107,221)(108,222)(109,223)(110,224)(111,197)(112,198)(113,196)(114,169)(115,170)(116,171)(117,172)(118,173)(119,174)(120,175)(121,176)(122,177)(123,178)(124,179)(125,180)(126,181)(127,182)(128,183)(129,184)(130,185)(131,186)(132,187)(133,188)(134,189)(135,190)(136,191)(137,192)(138,193)(139,194)(140,195), (1,2,3,4,5,6,7,8,9,10,11,12,13,14,15,16,17,18,19,20,21,22,23,24,25,26,27,28)(29,30,31,32,33,34,35,36,37,38,39,40,41,42,43,44,45,46,47,48,49,50,51,52,53,54,55,56)(57,58,59,60,61,62,63,64,65,66,67,68,69,70,71,72,73,74,75,76,77,78,79,80,81,82,83,84)(85,86,87,88,89,90,91,92,93,94,95,96,97,98,99,100,101,102,103,104,105,106,107,108,109,110,111,112)(113,114,115,116,117,118,119,120,121,122,123,124,125,126,127,128,129,130,131,132,133,134,135,136,137,138,139,140)(141,142,143,144,145,146,147,148,149,150,151,152,153,154,155,156,157,158,159,160,161,162,163,164,165,166,167,168)(169,170,171,172,173,174,175,176,177,178,179,180,181,182,183,184,185,186,187,188,189,190,191,192,193,194,195,196)(197,198,199,200,201,202,203,204,205,206,207,208,209,210,211,212,213,214,215,216,217,218,219,220,221,222,223,224), (1,64)(2,63)(3,62)(4,61)(5,60)(6,59)(7,58)(8,57)(9,84)(10,83)(11,82)(12,81)(13,80)(14,79)(15,78)(16,77)(17,76)(18,75)(19,74)(20,73)(21,72)(22,71)(23,70)(24,69)(25,68)(26,67)(27,66)(28,65)(29,168)(30,167)(31,166)(32,165)(33,164)(34,163)(35,162)(36,161)(37,160)(38,159)(39,158)(40,157)(41,156)(42,155)(43,154)(44,153)(45,152)(46,151)(47,150)(48,149)(49,148)(50,147)(51,146)(52,145)(53,144)(54,143)(55,142)(56,141)(85,216)(86,215)(87,214)(88,213)(89,212)(90,211)(91,210)(92,209)(93,208)(94,207)(95,206)(96,205)(97,204)(98,203)(99,202)(100,201)(101,200)(102,199)(103,198)(104,197)(105,224)(106,223)(107,222)(108,221)(109,220)(110,219)(111,218)(112,217)(113,181)(114,180)(115,179)(116,178)(117,177)(118,176)(119,175)(120,174)(121,173)(122,172)(123,171)(124,170)(125,169)(126,196)(127,195)(128,194)(129,193)(130,192)(131,191)(132,190)(133,189)(134,188)(135,187)(136,186)(137,185)(138,184)(139,183)(140,182), (1,182,87,154)(2,169,88,141)(3,184,89,156)(4,171,90,143)(5,186,91,158)(6,173,92,145)(7,188,93,160)(8,175,94,147)(9,190,95,162)(10,177,96,149)(11,192,97,164)(12,179,98,151)(13,194,99,166)(14,181,100,153)(15,196,101,168)(16,183,102,155)(17,170,103,142)(18,185,104,157)(19,172,105,144)(20,187,106,159)(21,174,107,146)(22,189,108,161)(23,176,109,148)(24,191,110,163)(25,178,111,150)(26,193,112,165)(27,180,85,152)(28,195,86,167)(29,64,126,214)(30,79,127,201)(31,66,128,216)(32,81,129,203)(33,68,130,218)(34,83,131,205)(35,70,132,220)(36,57,133,207)(37,72,134,222)(38,59,135,209)(39,74,136,224)(40,61,137,211)(41,76,138,198)(42,63,139,213)(43,78,140,200)(44,65,113,215)(45,80,114,202)(46,67,115,217)(47,82,116,204)(48,69,117,219)(49,84,118,206)(50,71,119,221)(51,58,120,208)(52,73,121,223)(53,60,122,210)(54,75,123,197)(55,62,124,212)(56,77,125,199) );
G=PermutationGroup([[(1,79),(2,80),(3,81),(4,82),(5,83),(6,84),(7,57),(8,58),(9,59),(10,60),(11,61),(12,62),(13,63),(14,64),(15,65),(16,66),(17,67),(18,68),(19,69),(20,70),(21,71),(22,72),(23,73),(24,74),(25,75),(26,76),(27,77),(28,78),(29,153),(30,154),(31,155),(32,156),(33,157),(34,158),(35,159),(36,160),(37,161),(38,162),(39,163),(40,164),(41,165),(42,166),(43,167),(44,168),(45,141),(46,142),(47,143),(48,144),(49,145),(50,146),(51,147),(52,148),(53,149),(54,150),(55,151),(56,152),(85,199),(86,200),(87,201),(88,202),(89,203),(90,204),(91,205),(92,206),(93,207),(94,208),(95,209),(96,210),(97,211),(98,212),(99,213),(100,214),(101,215),(102,216),(103,217),(104,218),(105,219),(106,220),(107,221),(108,222),(109,223),(110,224),(111,197),(112,198),(113,196),(114,169),(115,170),(116,171),(117,172),(118,173),(119,174),(120,175),(121,176),(122,177),(123,178),(124,179),(125,180),(126,181),(127,182),(128,183),(129,184),(130,185),(131,186),(132,187),(133,188),(134,189),(135,190),(136,191),(137,192),(138,193),(139,194),(140,195)], [(1,2,3,4,5,6,7,8,9,10,11,12,13,14,15,16,17,18,19,20,21,22,23,24,25,26,27,28),(29,30,31,32,33,34,35,36,37,38,39,40,41,42,43,44,45,46,47,48,49,50,51,52,53,54,55,56),(57,58,59,60,61,62,63,64,65,66,67,68,69,70,71,72,73,74,75,76,77,78,79,80,81,82,83,84),(85,86,87,88,89,90,91,92,93,94,95,96,97,98,99,100,101,102,103,104,105,106,107,108,109,110,111,112),(113,114,115,116,117,118,119,120,121,122,123,124,125,126,127,128,129,130,131,132,133,134,135,136,137,138,139,140),(141,142,143,144,145,146,147,148,149,150,151,152,153,154,155,156,157,158,159,160,161,162,163,164,165,166,167,168),(169,170,171,172,173,174,175,176,177,178,179,180,181,182,183,184,185,186,187,188,189,190,191,192,193,194,195,196),(197,198,199,200,201,202,203,204,205,206,207,208,209,210,211,212,213,214,215,216,217,218,219,220,221,222,223,224)], [(1,64),(2,63),(3,62),(4,61),(5,60),(6,59),(7,58),(8,57),(9,84),(10,83),(11,82),(12,81),(13,80),(14,79),(15,78),(16,77),(17,76),(18,75),(19,74),(20,73),(21,72),(22,71),(23,70),(24,69),(25,68),(26,67),(27,66),(28,65),(29,168),(30,167),(31,166),(32,165),(33,164),(34,163),(35,162),(36,161),(37,160),(38,159),(39,158),(40,157),(41,156),(42,155),(43,154),(44,153),(45,152),(46,151),(47,150),(48,149),(49,148),(50,147),(51,146),(52,145),(53,144),(54,143),(55,142),(56,141),(85,216),(86,215),(87,214),(88,213),(89,212),(90,211),(91,210),(92,209),(93,208),(94,207),(95,206),(96,205),(97,204),(98,203),(99,202),(100,201),(101,200),(102,199),(103,198),(104,197),(105,224),(106,223),(107,222),(108,221),(109,220),(110,219),(111,218),(112,217),(113,181),(114,180),(115,179),(116,178),(117,177),(118,176),(119,175),(120,174),(121,173),(122,172),(123,171),(124,170),(125,169),(126,196),(127,195),(128,194),(129,193),(130,192),(131,191),(132,190),(133,189),(134,188),(135,187),(136,186),(137,185),(138,184),(139,183),(140,182)], 
[(1,182,87,154),(2,169,88,141),(3,184,89,156),(4,171,90,143),(5,186,91,158),(6,173,92,145),(7,188,93,160),(8,175,94,147),(9,190,95,162),(10,177,96,149),(11,192,97,164),(12,179,98,151),(13,194,99,166),(14,181,100,153),(15,196,101,168),(16,183,102,155),(17,170,103,142),(18,185,104,157),(19,172,105,144),(20,187,106,159),(21,174,107,146),(22,189,108,161),(23,176,109,148),(24,191,110,163),(25,178,111,150),(26,193,112,165),(27,180,85,152),(28,195,86,167),(29,64,126,214),(30,79,127,201),(31,66,128,216),(32,81,129,203),(33,68,130,218),(34,83,131,205),(35,70,132,220),(36,57,133,207),(37,72,134,222),(38,59,135,209),(39,74,136,224),(40,61,137,211),(41,76,138,198),(42,63,139,213),(43,78,140,200),(44,65,113,215),(45,80,114,202),(46,67,115,217),(47,82,116,204),(48,69,117,219),(49,84,118,206),(50,71,119,221),(51,58,120,208),(52,73,121,223),(53,60,122,210),(54,75,123,197),(55,62,124,212),(56,77,125,199)]])
100 conjugacy classes
class: 1 2A ··· 2G 2H ··· 2O 4A ··· 4L 4M ··· 4T 4U 4V 4W 4X 7A 7B 7C 14A ··· 14U 28A ··· 28AJ
order: 1 2 ··· 2 2 ··· 2 4 ··· 4 4 ··· 4 4 4 4 4 7 7 7 14 ··· 14 28 ··· 28
size: 1 1 ··· 1 14 ··· 14 2 ··· 2 7 ··· 7 14 14 14 14 2 2 2 2 ··· 2 4 ··· 4
100 irreducible representations
dim: 1 1 1 1 1 1 1 1 2 2 2 2 2 2 4 4
type: + + + + + + + + + + + + +
image: C1 C2 C2 C2 C2 C2 C2 C4 D4 D7 C4○D4 D14 D14 C4×D7 D4×D7 Q8⋊2D7
kernel: C2×D28⋊C4 D28⋊C4 C2×C4×Dic7 C2×D14⋊C4 C14×C4⋊C4 D7×C22×C4 C22×D28 C2×D28 C2×Dic7 C2×C4⋊C4 C2×C14 C4⋊C4 C22×C4 C2×C4 C22 C22
# reps: 1 8 1 2 1 2 1 16 4 3 4 12 9 24 6 6
Matrix representation of C2×D28⋊C4 in GL6(𝔽29)
28 0 0 0 0 0
0 28 0 0 0 0
0 0 28 0 0 0
0 0 0 28 0 0
0 0 0 0 1 0
0 0 0 0 0 1
,
19 7 0 0 0 0
22 28 0 0 0 0
0 0 11 22 0 0
0 0 25 0 0 0
0 0 0 0 18 28
0 0 0 0 6 11
,
1 0 0 0 0 0
22 28 0 0 0 0
0 0 1 0 0 0
0 0 14 28 0 0
0 0 0 0 11 1
0 0 0 0 25 18
,
17 0 0 0 0 0
0 17 0 0 0 0
0 0 1 0 0 0
0 0 0 1 0 0
0 0 0 0 1 0
0 0 0 0 7 28
G:=sub<GL(6,GF(29))| [28,0,0,0,0,0,0,28,0,0,0,0,0,0,28,0,0,0,0,0,0,28,0,0,0,0,0,0,1,0,0,0,0,0,0,1],[19,22,0,0,0,0,7,28,0,0,0,0,0,0,11,25,0,0,0,0,22,0,0,0,0,0,0,0,18,6,0,0,0,0,28,11],[1,22,0,0,0,0,0,28,0,0,0,0,0,0,1,14,0,0,0,0,0,28,0,0,0,0,0,0,11,25,0,0,0,0,1,18],[17,0,0,0,0,0,0,17,0,0,0,0,0,0,1,0,0,0,0,0,0,1,0,0,0,0,0,0,1,7,0,0,0,0,0,28] >;
C2×D28⋊C4 in GAP, Magma, Sage, TeX
C_2\times D_{28}\rtimes C_4
% in TeX
G:=Group("C2xD28:C4");
// GroupNames label
G:=SmallGroup(448,956);
// by ID
G=gap.SmallGroup(448,956);
# by ID
G:=PCGroup([7,-2,-2,-2,-2,-2,-2,-7,758,184,297,80,18822]);
// Polycyclic
G:=Group<a,b,c,d|a^2=b^28=c^2=d^4=1,a*b=b*a,a*c=c*a,a*d=d*a,c*b*c=b^-1,d*b*d^-1=b^15,d*c*d^-1=b^14*c>;
// generators/relations
|
2021-10-20 04:44:38
|
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 1, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.9998923540115356, "perplexity": 6842.818294307214}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2021-43/segments/1634323585302.56/warc/CC-MAIN-20211020024111-20211020054111-00080.warc.gz"}
|
https://amathew.wordpress.com/category/analysis/functional-analysis/
|
### functional analysis
I’ve been trying to understand some complex analytic geometry as of late; here is an overview of Oka’s theorem.
Consider the space ${\mathbb{C}^n}$ and the sheaf ${\mathcal{O}}$ of holomorphic functions on it. One should think of this as the analog of complex affine space ${\mathbb{C}^n}$, with the Zariski topology, and with the sheaf ${\mathcal{O}_{reg}}$ of regular functions.
In algebraic geometry, if ${I \subset \mathbb{C}[x_1, \dots, x_n]}$ is an ideal, or if ${\mathcal{I} \subset \mathcal{O}_{reg}}$ is a coherent sheaf of ideals, then we can define a closed subset of ${\mathbb{C}^n}$ corresponding to the common roots of the polynomials in ${I}$. This construction gives the notion of an affine variety, and by gluing these one gets general varieties.
More precisely, here is what an affine variety is. If ${\mathcal{I} \subset \mathcal{O}_{reg}}$ is a coherent sheaf of ideals, then we define a ringed space ${(\mathrm{supp} \mathcal{O}_{reg}/\mathcal{I}, \mathcal{O}_{reg}/\mathcal{I})}$; this gives the associated affine variety. Here the “support” corresponds to taking the common zero locus of the functions in ${\mathcal{I}}$. In this way an affine variety is not just a subset of ${\mathbb{C}^n}$, but a locally ringed space.
Now we want to repeat this construction in the holomorphic category. If ${\mathcal{I} \subset \mathcal{O}}$ is a finitely generated ideal—that is, an ideal which is locally finitely generated—in the sheaf of holomorphic functions on ${\mathbb{C}^n}$, then we define the space cut out by ${\mathcal{I}}$ to be ${(\mathrm{supp}\, \mathcal{O}/\mathcal{I}, \mathcal{O}/\mathcal{I})}$. We can think of these as "affine analytic spaces."
Definition 1 An analytic space is a locally ringed space which is locally isomorphic to an “affine analytic space.” (more…)
We will now apply the machinery already developed to a few concrete problems.
Proposition 1 Let ${G}$ be a compact abelian group and ${T}$ the rotation by ${a \in G}$. Then ${T}$ is uniquely ergodic (with the Haar measure invariant) if ${a^{\mathbb{Z}}}$ is dense in ${G}$.
The proof is straightforward. Suppose ${\mu}$ is invariant with respect to rotations by ${a}$. Then for ${f \in C(G)}$, we have
$\displaystyle \int f(a^m x ) d \mu = \int f(x) d \mu, \quad \forall m \in \mathbb{Z}$
and hence
$\displaystyle \int f(bx ) d \mu = \int f(x) d \mu,$
for any ${b \in G}$ (since ${a^{\mathbb{Z}}}$ is dense in ${G}$ and ${f}$ is continuous), which means that ${\mu}$ must be Haar measure (which is unique).
Corollary 2 An irrational rotation of the unit circle ${S^1}$ is uniquely ergodic.
Application: Equidistribution
Theorem 3 Let ${\xi \in \mathbb{R}}$ be irrational and let ${f: \mathbb{R} \rightarrow \mathbb{C}}$ be continuous and ${1}$-periodic. Then $\displaystyle \boxed{ \lim_{N \rightarrow \infty} \frac{1}{N} \sum_{n=0}^{N-1} f( n \xi) = \int_0^1 f(x) dx .}$ (more…)
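A quick aside (mine, not from the original post): the boxed limit can be checked by hand for the exponentials ${f(x) = e^{2\pi i k x}}$, ${k \neq 0}$, which is the heart of Weyl's criterion; summing the geometric series,
$\displaystyle \left| \frac{1}{N} \sum_{n=0}^{N-1} e^{2\pi i k n \xi} \right| = \frac{1}{N} \left| \frac{e^{2\pi i k N \xi} - 1}{e^{2\pi i k \xi} - 1} \right| \leq \frac{2}{N \left| e^{2\pi i k \xi} - 1 \right|} \rightarrow 0,$
which is valid because ${\xi}$ irrational forces ${e^{2\pi i k \xi} \neq 1}$; for ${k = 0}$ both sides equal ${1}$, and the general case follows by uniform approximation with trigonometric polynomials.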
So, let’s fix a compact metric space ${X}$ and a transformation ${T: X \rightarrow X}$ which is continuous. We defined the space ${M(X,T)}$ of probability Borel measures which are ${T}$-invariant, showed it was nonempty, and proved that the extreme points correspond to ergodic measures (i.e. measures with respect to which ${T}$ is ergodic). We are interested in knowing what ${M(X,T)}$ looks like, based solely on the topological properties of ${T}$. Here are some techniques we can use:
1) If ${T}$ has no fixed points, then ${\mu \in M(X,T)}$ cannot have any atoms (i.e. ${\mu(\{x\})=0, x \in X}$). Otherwise ${\{x, Tx , T^2x, \dots \}}$ would have infinite measure.
2) The set of recurrent points in ${X}$ (i.e. ${x \in X}$ such that there exists a sequence ${n_i \rightarrow \infty}$ with ${T^{n_i}x \rightarrow x}$) has ${\mu}$-measure one. We proved this earlier.
3) The set of non-wandering points has measure one. We define this notion now. Say that ${x \in X}$ is wandering if there is a neighborhood ${U}$ of ${x}$ such that ${T^{-n}(U) \cap U = \emptyset, \forall n \in \mathbb{N}}$. In other words, the family of sets ${T^{i}(U), i \in \mathbb{Z}_{\geq 0}}$ is disjoint. If not, say that ${x}$ is non-wandering. Any recurrent point, for instance, is non-wandering, which implies that the set of non-wandering points has measure one.
Here is an example. (more…)
Up until now, we have concentrated on a transformation ${T}$ of a fixed measure space. We now take a different approach: ${T}$ is fixed, and we look for appropriate measures (on a fixed ${\sigma}$-algebra). First, we will show that this space is nonempty. Then we will characterize ergodicity in terms of extreme points.
This is the first theorem we seek to prove:
Theorem 1 Let ${T: X \rightarrow X}$ be a continuous transformation of the compact metric space ${X}$. Then there exists a probability Borel measure ${\mu}$ on ${X}$ with respect to which ${T}$ is measure-preserving.
Consider the Banach space ${C(X)}$ of continuous ${f: X \rightarrow \mathbb{C}}$ and the dual ${C(X)^*}$, which, by the Riesz representation theorem, is identified with the space of (complex) Borel measures on ${X}$. The positive measures of total mass one form a compact convex subset ${P}$ of ${C(X)^*}$ in the weak* topology by Alaoglu's theorem. Now, ${T}$ induces a transformation of ${C(X)}$: ${f \rightarrow f \circ T}$. The adjoint transformation of ${C(X)^*}$ is given by ${\mu \rightarrow T^{-1}(\mu)}$, where for a measure ${\mu}$, ${T^{-1}(\mu)(E) := \mu(T^{-1}E)}$. We want to show that ${T^*}$ has a fixed point on ${P}$; then we will have proved the theorem.
There are fancier methods in functional analysis one could use, but to finish the proof we will appeal to the simple
Lemma 2 Let ${C}$ be a compact convex subset of a locally convex space ${X}$, and let ${T: C \rightarrow C}$ be the restriction of a continuous linear map on ${X}$. Then ${T}$ has a fixed point in ${C}$. (more…)
So, now it’s time to connect the topological notions of dynamical systems with ergodic theory (which makes use of measures). Our first example will use the notion of topological transitivity, which we now introduce. The next example will return to the story about recurrent points, which I talked a bit about yesterday.
Say that a homeomorphism ${T: X \rightarrow X}$ of a compact metric space ${X}$ is topologically transitive if there exists ${x \in X}$ with ${T^{\mathbb{Z}}x}$ dense in ${X}$. (For instance, a minimal homeomorphism is obviously topologically transitive.) Let ${\{ U_n \}}$ be a countable basis for the topology of ${X}$. Then the set of all such ${x}$ (with ${T^{\mathbb{Z}}x}$ dense) is given by
$\displaystyle \bigcap_n \bigcup_{i \in \mathbb{Z}} T^i U_n.$
In particular, if it is nonempty, then each ${\bigcup_{i \in \mathbb{Z}} T^i U_n}$ is dense—being ${T}$-invariant and containing ${U_n}$—and this set is a dense ${G_{\delta}}$ by Baire’s theorem.
Proposition 1 Let ${X}$ have a Borel probability measure ${\mu}$ positive on every nonempty open set, and let ${T: X \rightarrow X}$ be measure-preserving and ergodic. Then the set of ${x \in X}$ with ${\overline{T^{\mathbb{Z}}x}=X}$ is of measure 1, so ${T}$ is topologically transitive.
Indeed, each ${\bigcup_{i \in \mathbb{Z}} T^i U_n}$ is ${T}$-invariant, so it has measure zero or one by ergodicity; since it contains the nonempty open set ${U_n}$, which has positive measure by hypothesis, it has measure one. Then take the countable intersection.
Poincaré recurrence
We now move to the abstract measure-theoretic framework, not topological.
Theorem 2 (Poincaré recurrence) Let ${T: X \rightarrow X}$ be a measure-preserving transformation on a probability space ${X}$. If ${E \subset X}$ is measurable, then there exists ${F \subset E}$ with ${\mu(E-F)=0}$ such that for each ${x \in F}$, there is a sequence ${n_i \rightarrow \infty}$ with ${T^{n_i} x \in E}$.
In other words, points of ${F}$ are ${T}$-frequently in ${E}$. (more…)
Ergodicity
Let ${(X, \mu)}$ be a probability space and ${T: X \rightarrow X}$ a measure-preserving transformation. In many cases, it turns out that the averages of a function ${f}$ given by
$\displaystyle \frac{1}{N} \sum_{i=0}^{N-1} f \circ T^i$
actually converge a.e. to a constant.
This is the case if ${T}$ is ergodic, which we define as follows: ${T}$ is ergodic if for all ${E \subset X}$ with ${T^{-1}E = E}$, ${\mu(E)=1}$ or ${0}$. This is a form of irreducibility; the system ${X,T}$ has no smaller subsystem (disregarding measure zero sets). It is easy to see that this is equivalent to the statement: ${f}$ measurable (one could assume measurable and bounded if one prefers) and ${T}$-invariant implies ${f}$ constant a.e. (One first shows that if ${T}$ is ergodic, then ${\mu(T^{-1}E \,\Delta\, E )=0}$ implies ${\mu(E)=0}$ or ${1}$, by constructing something close to ${E}$ that is ${T}$-invariant.)
In this case, therefore, the ergodic theorem takes the following form. Let ${f: X \rightarrow \mathbb{C}}$ be integrable. Then almost everywhere,
$\displaystyle \boxed{ \frac{1}{N} \sum_{i=0}^{N-1} f ( T^i (x)) \rightarrow \int_X f d\mu .}$
This is a very useful fact, and it has many applications. (more…)
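As a quick illustration of the boxed formula: take ${X = [0,1)}$ with Lebesgue measure and the irrational rotation ${Tx = x + \alpha \ (\mathrm{mod}\ 1)}$, ${\alpha \notin \mathbb{Q}}$, which is ergodic. Applying the theorem to ${f = \chi_{[a,b)}}$ gives, for almost every ${x}$,
$\displaystyle \frac{1}{N} \# \{ 0 \leq i < N : T^i x \in [a,b) \} \rightarrow b-a,$
i.e. the orbit of almost every point is equidistributed in ${[0,1)}$ (Weyl's equidistribution theorem in fact gives this for every ${x}$).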
Let ${X}$ be a measure space with measure ${\mu}$; let ${T: X \rightarrow X}$ be a measure-preserving transformation. Last time we looked at how the averages
$\displaystyle A_N := \frac{1}{N} \sum_{i=0}^{N-1} f \circ T^i$
behave in ${L^2}$. But, now we want pointwise convergence.
The pointwise ergodic theorem
We consider the pointwise ergodic theorem of George David Birkhoff:
Theorem 1 (Birkhoff) Let ${f \in L^1(\mu)}$. Then the averages ${A_N}$ converge almost everywhere to a function ${f^* \in L^1(\mu)}$ with ${f^* \circ T = f^*}$ a.e. (more…)
https://www.rdocumentation.org/packages/hexbin/versions/1.27.1/topics/hexVP.abline
|
# hexVP.abline
##### Add a Straight Line to a HexPlot
This function adds one or more straight lines through the current plot; it is the hexbin version of abline().
##### Keywords
aplot
##### Usage
hexVP.abline(hvp, a = NULL, b = NULL, h = numeric(0), v = numeric(0),
col = "black", lty = 1, lwd = 2, ...)
##### Arguments
hvp
A hexViewport object that is currently on the active device
a,b
the intercept and slope or if b is NULL, an lm object or a vector of length 2 with c(intercept,slope)
h
the y-value for a horizontal line.
v
the x-value for a vertical line.
col, lty, lwd
line color, type and width.
...
further graphical parameters.
##### Details
The first form specifies the line in intercept/slope form (alternatively a can be specified on its own and is taken to contain the slope and intercept in vector form).
The h= and v= forms draw horizontal and vertical lines at the specified coordinates.
The coef form specifies the line by a vector containing the slope and intercept.
lm is a regression object which contains reg$coef. If it is of length 1 then the value is taken to be the slope of a line through the origin, otherwise, the first 2 values are taken to be the intercept and slope.
##### See Also
gplot.hexbin, hexViewport, hexMA.loess
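##### Examples
A hedged sketch (not from the original page); it assumes that plotting a hexbin object returns, invisibly, a list whose plot.vp component is the hexViewport on the active device.
library(hexbin)
set.seed(1)
x <- rnorm(1000)
y <- x + rnorm(1000)
hb <- hexbin(x, y)
hvp <- plot(hb)                                        # keep the returned viewport list
hexVP.abline(hvp$plot.vp, a = 0, b = 1, col = "red")   # add the line y = x
hexVP.abline(hvp$plot.vp, h = 0, lty = 2)              # dashed horizontal line at y = 0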
https://hackage-origin.haskell.org/package/fp-ieee-0.1.0/candidate/docs/Numeric-Floating-IEEE.html
|
fp-ieee-0.1.0
Numeric.Floating.IEEE
Description
This module provides IEEE 754-compliant operations for floating-point numbers.
The functions in this module assume that the given floating-point type conforms to the IEEE 754 format.
Since the RealFloat constraint is insufficient to query properties of a NaN, the functions here assume all NaNs are positive and quiet. If you want better treatment for NaNs, use the module Numeric.Floating.IEEE.NaN.
Since floating-point exceptions cannot be accessed from Haskell, the operations provided by this module ignore exceptional behavior. This library assumes the default exception handling is in use.
If you are using GHC <= 8.8 on i386 target, you may need to set -msse2 option to get correct floating-point behavior.
Synopsis
This library assumes that some of the standard numeric functions correspond to the operations specified by IEEE. The rounding attribute should be roundTiesToEven and the exceptional behavior should be the default one.
Num
• (+), (-), and (*) should be correctly-rounding.
• negate, abs should comply with IEEE semantics.
• fromInteger should be correctly-rounding, but unfortunately not for Float and Double (see GHC's #17231). This module provides a correctly-rounding alternative: fromIntegerTiesToEven.
Fractional
• (/) should be correctly-rounding.
• fromRational should be correctly-rounding, but some third-party floating-point types fail to do so.
Floating
• sqrt should be correctly-rounding.
RealFrac
• truncate: IEEE 754 convertToIntegerTowardZero operation.
• round: IEEE 754 convertToIntegerTiesToEven operation; the Language Report says that this should choose the even integer if the argument is the midpoint of two successive integers.
• ceiling: IEEE 754 convertToIntegerTowardPositive operation.
• floor: IEEE 754 convertToIntegerTowardNegative operation.
To complete these, roundAway is provided by this library. Note that Haskell's round is specified to be ties-to-even, whereas C's round is ties-to-away.
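For example, the two conventions differ exactly at halfway cases; a hypothetical GHCi session (the results follow from the semantics stated above, not copied from the package documentation):
>>> map round [0.5, 1.5, 2.5 :: Double]
[0,2,2]
>>> map roundAway [0.5, 1.5, 2.5 :: Double]
[1,2,3]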
RealFloat
This class provides information on the IEEE-compliant format.
• floatRadix: The base \(b\). IEEE 754 radix operation.
• floatDigits: The precision \(p\).
• floatRange: The exponent range offset by 1: \((\mathit{emin}+1,\mathit{emax}+1)\).
• decodeFloat x: The exponent part returned is in the range \([\mathit{emin}+1-p,\mathit{emax}+1-p]\) if x is normal, or in \([\mathit{emin}-2p+2,\mathit{emin}-p]\) if x is subnormal.
• encodeFloat should accept the significand in the range [0, floatRadix x ^ floatDigits x]. This library does not assume a particular rounding behavior when the result cannot be expressed in the target type.
• exponent x: The exponent offset by 1: \(\mathrm{logB}(x)+1\). Returns an integer in \([\mathit{emin}+1,\mathit{emax}+1]\) if x is normal, or in \([\mathit{emin}-p+2,\mathit{emin}]\) if x is subnormal.
• significand x: Returns the significand of x as a value in \([1/b,1)\).
• scaleFloat: This library does not assume a particular rounding behavior when the result is subnormal.
• isNaN
• isInfinite
• isDenormalized
• isNegativeZero
• isIEEE should return True if you are using the type with this library.
5.3 Homogeneous general-computational operations
5.3.1 General operations
round' :: RealFloat a => a -> a Source #
round' x returns the nearest integral value to x; the even integer if x is equidistant between two integers.
IEEE 754 roundToIntegralTiesToEven operation.
\(x :: Double) -> isFinite x ==> (round' x == fromInteger (round x))
>>> round' (-0.5)
-0.0
roundAway' :: RealFloat a => a -> a Source #
roundAway' x returns the nearest integral value to x; the one with larger magnitude is returned if x is equidistant between two integers.
IEEE 754 roundToIntegralTiesToAway operation.
\(x :: Double) -> isFinite x ==> roundAway' x == fromInteger (roundAway x)
>>> roundAway' (-0.5)
-1.0
>>> roundAway' (-0.4)
-0.0
truncate' :: RealFloat a => a -> a Source #
truncate' x returns the integral value nearest to x whose magnitude is not greater than that of x.
IEEE 754 roundToIntegralTowardZero operation.
\(x :: Double) -> isFinite x ==> truncate' x == fromInteger (truncate x)
>>> truncate' (-0.5)
-0.0
ceiling' :: RealFloat a => a -> a Source #
ceiling' x returns the least integral value that is not less than x.
IEEE 754 roundToIntegralTowardPositive operation.
\(x :: Double) -> isFinite x ==> ceiling' x == fromInteger (ceiling x)
>>> ceiling' (-0.8)
-0.0
>>> ceiling' (-0.5)
-0.0
floor' :: RealFloat a => a -> a Source #
floor' x returns the greatest integral value that is not greater than x.
IEEE 754 roundToIntegralTowardNegative operation.
\(x :: Double) -> isFinite x ==> floor' x == fromInteger (floor x)
>>> floor' (-0.1)
-1.0
>>> floor' (-0)
-0.0
nextUp :: RealFloat a => a -> a Source #
Returns the smallest value that is larger than the argument.
IEEE 754 nextUp operation.
>>> nextUp 1 == (0x1.000002p0 :: Float)
True
>>> nextUp 1 == (0x1.0000_0000_0000_1p0 :: Double)
True
>>> nextUp (1/0) == (1/0 :: Double)
True
>>> nextUp (-1/0) == (- maxFinite :: Double)
True
>>> nextUp 0 == (0x1p-1074 :: Double)
True
>>> nextUp (-0) == (0x1p-1074 :: Double)
True
>>> nextUp (-0x1p-1074) :: Double -- returns negative zero
-0.0
nextDown :: RealFloat a => a -> a Source #
Returns the largest value that is smaller than the argument.
IEEE 754 nextDown operation.
>>> nextDown 1 == (0x1.ffff_ffff_ffff_fp-1 :: Double)
True
>>> nextDown 1 == (0x1.fffffep-1 :: Float)
True
>>> nextDown (1/0) == (maxFinite :: Double)
True
>>> nextDown (-1/0) == (-1/0 :: Double)
True
>>> nextDown 0 == (-0x1p-1074 :: Double)
True
>>> nextDown (-0) == (-0x1p-1074 :: Double)
True
>>> nextDown 0x1p-1074 -- returns positive zero
0.0
>>> nextDown 0x1p-1022 == (0x0.ffff_ffff_ffff_fp-1022 :: Double)
True
nextTowardZero :: RealFloat a => a -> a Source #
Returns the value whose magnitude is smaller than that of the argument, and is closest to the argument. This operation is not in IEEE, but may be useful to some.
>>> nextTowardZero 1 == (0x1.ffff_ffff_ffff_fp-1 :: Double)
True
>>> nextTowardZero 1 == (0x1.fffffep-1 :: Float)
True
>>> nextTowardZero (1/0) == (maxFinite :: Double)
True
>>> nextTowardZero (-1/0) == (-maxFinite :: Double)
True
>>> nextTowardZero 0 :: Double -- returns positive zero
0.0
>>> nextTowardZero (-0 :: Double) -- returns negative zero
-0.0
>>> nextTowardZero 0x1p-1074 :: Double
0.0
remainder :: RealFloat a => a -> a -> a Source #
remainder x y returns \(r = x - y n\), where \(n\) is the integer nearest the exact number \(x/y\); i.e. \(n = \mathrm{round}(x/y)\).
IEEE 754 remainder operation.
Not supported.
5.3.3 logBFormat operations
scaleFloatTiesToEven :: RealFloat a => Int -> a -> a Source #
IEEE 754 scaleB operation, with each rounding attribute.
scaleFloatTiesToAway :: RealFloat a => Int -> a -> a Source #
IEEE 754 scaleB operation, with each rounding attribute.
scaleFloatTowardPositive :: RealFloat a => Int -> a -> a Source #
IEEE 754 scaleB operation, with each rounding attribute.
scaleFloatTowardNegative :: RealFloat a => Int -> a -> a Source #
IEEE 754 scaleB operation, with each rounding attribute.
scaleFloatTowardZero :: RealFloat a => Int -> a -> a Source #
IEEE 754 scaleB operation, with each rounding attribute.
The Haskell counterpart for IEEE 754 logB operation is exponent. Note that logB and exponent are different by one: logB x = exponent x - 1
exponent :: RealFloat a => a -> Int #
exponent corresponds to the second component of decodeFloat. exponent 0 = 0 and for finite nonzero x, exponent x = snd (decodeFloat x) + floatDigits x. If x is a finite floating-point number, it is equal in value to significand x * b ^^ exponent x, where b is the floating-point radix. The behaviour is unspecified on infinite or NaN values.
5.4 formatOf general-computational operations
5.4.1 Arithmetic operations
For IEEE-compliant floating-point types, (+), (-), (*), (/), and sqrt from Prelude should be correctly-rounding. fusedMultiplyAdd is provided by this library. This library also provides "generic" versions of the arithmetic operations, which can be useful if the target type is narrower than the source.
(+) :: Num a => a -> a -> a infixl 6 #
(-) :: Num a => a -> a -> a infixl 6 #
(*) :: Num a => a -> a -> a infixl 7 #
(/) :: Fractional a => a -> a -> a infixl 7 #
Fractional division.
sqrt :: Floating a => a -> a #
fusedMultiplyAdd :: RealFloat a => a -> a -> a -> a Source #
fusedMultiplyAdd a b c computes a * b + c as a single, ternary operation. Rounding is done only once.
May make use of hardware FMA instructions if the target architecture has it; set fma3 package flag on x86 systems.
IEEE 754 fusedMultiplyAdd operation.
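A rough illustration of the single rounding (a hypothetical session; the operands are chosen so that the naive expression rounds the product to 1 before the subtraction, and the printed value is the one I would expect for -2^-60, not verified against the package):
>>> let a = 1 + 1 / 2 ^ 30 :: Double
>>> let b = 1 - 1 / 2 ^ 30 :: Double
>>> a * b - 1
0.0
>>> fusedMultiplyAdd a b (-1)
-8.673617379884035e-19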
\(a :: Double) (b :: Double) (c :: Double) -> fusedMultiplyAdd a b c == fromRational (toRational a * toRational b + toRational c)
genericAdd :: (RealFloat a, RealFloat b) => a -> a -> b infixl 6 Source #
IEEE 754 addition operation.
genericSub :: (RealFloat a, RealFloat b) => a -> a -> b infixl 6 Source #
IEEE 754 subtraction operation.
genericMul :: (RealFloat a, RealFloat b) => a -> a -> b infixl 7 Source #
IEEE 754 multiplication operation.
genericDiv :: (RealFloat a, RealFloat b) => a -> a -> b infixl 7 Source #
IEEE 754 division operation.
genericSqrt is not implemented yet.
genericFusedMultiplyAdd :: (RealFloat a, RealFloat b) => a -> a -> a -> b Source #
IEEE 754 fusedMultiplyAdd operation.
fromIntegralTiesToEven :: (Integral i, RealFloat a) => i -> a Source #
IEEE 754 convertFromInt operation, with each rounding attribute.
fromIntegralTiesToAway :: (Integral i, RealFloat a) => i -> a Source #
IEEE 754 convertFromInt operation, with each rounding attribute.
fromIntegralTowardPositive :: (Integral i, RealFloat a) => i -> a Source #
IEEE 754 convertFromInt operation, with each rounding attribute.
fromIntegralTowardNegative :: (Integral i, RealFloat a) => i -> a Source #
IEEE 754 convertFromInt operation, with each rounding attribute.
fromIntegralTowardZero :: (Integral i, RealFloat a) => i -> a Source #
IEEE 754 convertFromInt operation, with each rounding attribute.
Conversion from a rational number to a floating-point value, with each rounding attribute.
round :: (RealFrac a, Integral b) => a -> b #
round x returns the nearest integer to x; the even integer if x is equidistant between two integers.
roundAway :: (RealFrac a, Integral b) => a -> b Source #
roundAway x returns the nearest integer to x; the integer with larger magnitude is returned if x is equidistant between two integers.
IEEE 754 convertToIntegerTiesToAway operation.
>>> roundAway 4.5
5
truncate :: (RealFrac a, Integral b) => a -> b #
truncate x returns the integer nearest x between zero and x.
ceiling :: (RealFrac a, Integral b) => a -> b #
ceiling x returns the least integer not less than x.
floor :: (RealFrac a, Integral b) => a -> b #
floor x returns the greatest integer not greater than x.
5.4.2 Conversion operations for floating-point formats and decimal character sequences
Unfortunately, realToFrac does not have a good semantics, and behaves differently with rewrite rules (consider realToFrac (0/0 :: Float) :: Double). As an alternative, this library provides realFloatToFrac, with well-defined semantics on signed zeroes, infinities and NaNs. Like realToFrac, realFloatToFrac comes with some rewrite rules for particular types, but they should not change behavior.
realFloatToFrac :: (RealFloat a, Fractional b) => a -> b Source #
Converts a floating-point value into another type.
Similar to realToFrac, but handles NaN, infinities and negative zero properly even if the rewrite rule is off.
IEEE 754 convertFormat operation.
canonicalize :: RealFloat a => a -> a Source #
A specialized version of realFloatToFrac. The resulting value will be canonical and non-signaling.
convertFromDecimalCharacter: not implemented.
convertToDecimalCharacter: not implemented.
5.4.3 Conversion operations for binary formats
convertFromHexCharacter: not implemented.
convertToHexCharacter: showHFloat from Numeric can be used.
5.5 Quiet-computational operations
5.5.1 Sign bit operations
For IEEE-compliant floating-point types, negate and abs from Prelude should comply with IEEE semantics.
negate :: Num a => a -> a #
Unary negation.
abs :: Num a => a -> a #
Absolute value.
See Numeric.Floating.IEEE.NaN for copySign.
5.5.2 Decimal re-encoding operations (not supported)
Not supported.
5.6 Signaling-computational operations
5.6.1 Comparisons (not supported)
This library does not support floating-point exceptions.
5.7 Non-computational operations
5.7.1 Conformance predicates (not supported)
Not supported.
5.7.2 General operations
Functions in this module disregard the content of NaNs: sign bit, signaling-or-quiet, and payload. All NaNs are treated as quiet and positive. To properly handle NaNs, use the typeclass and functions from Numeric.Floating.IEEE.NaN.
data Class Source #
The classification of floating-point values.
Instances (defined in Numeric.Floating.IEEE.Internal.Classify): Enum, Eq, Ord, Show, …
classify :: RealFloat a => a -> Class Source #
Classifies a floating-point value. Since the RealFloat constraint is insufficient to query the signaling status of a NaN, this function treats all NaNs as quiet. See also Numeric.Floating.IEEE.NaN.
isSignMinus :: RealFloat a => a -> Bool Source #
Returns True if the argument is negative (including negative zero). Since the RealFloat constraint is insufficient to query the sign of NaNs, this function treats all NaNs as positive. See also Numeric.Floating.IEEE.NaN.
IEEE 754 isSignMinus operation.
isNormal :: RealFloat a => a -> Bool Source #
IEEE 754 isNormal operation.
isFinite :: RealFloat a => a -> Bool Source #
Returns True if the argument is normal, subnormal, or zero.
IEEE 754 isFinite operation.
isZero :: RealFloat a => a -> Bool Source #
Returns True if the argument is zero.
IEEE 754 isZero operation.
isDenormalized :: RealFloat a => a -> Bool #
True if the argument is too small to be represented in normalized format.
isInfinite :: RealFloat a => a -> Bool #
True if the argument is an IEEE infinity or negative infinity.
isNaN :: RealFloat a => a -> Bool #
True if the argument is an IEEE "not-a-number" (NaN) value.
See Numeric.Floating.IEEE.NaN for isSignaling.
isCanonical: not supported.
floatRadix :: RealFloat a => a -> Integer #
A constant function, returning the radix of the representation (often 2).
compareByTotalOrder :: RealFloat a => a -> a -> Ordering Source #
Comparison with the IEEE 754 totalOrder predicate. Since the RealFloat constraint is insufficient to query the sign and payload of NaNs, this function treats all NaNs as positive and does not distinguish between them. See also Numeric.Floating.IEEE.NaN.
Floating-point numbers are ordered as \(-\infty < \text{negative reals} < -0 < +0 < \text{positive reals} < +\infty < \mathrm{NaN}\).
compareByTotalOrderMag :: RealFloat a => a -> a -> Ordering Source #
Comparison with IEEE 754 totalOrderMag predicate.
Equivalent to compareByTotalOrder (abs x) (abs y).
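A hypothetical session illustrating the ordering described above (results follow from the stated semantics, not copied from the package documentation):
>>> compareByTotalOrder (-0.0) (0.0 :: Double)
LT
>>> compareByTotalOrder (0/0) (1/0 :: Double)
GT
>>> compareByTotalOrderMag (-2.0) (1.0 :: Double)
GT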
Not supported.
Not supported.
9.5 Augmented arithmetic operations
augmentedAddition :: RealFloat a => a -> a -> (a, a) Source #
IEEE 754 augmentedAddition operation.
augmentedSubtraction :: RealFloat a => a -> a -> (a, a) Source #
IEEE 754 augmentedSubtraction operation.
augmentedMultiplication :: RealFloat a => a -> a -> (a, a) Source #
IEEE 754 augmentedMultiplication operation.
9.6 Minimum and maximum operations
minimum' :: RealFloat a => a -> a -> a Source #
IEEE 754 minimum operation. -0 is smaller than +0. Propagates NaNs.
minimumNumber :: RealFloat a => a -> a -> a Source #
IEEE 754 minimumNumber operation. -0 is smaller than +0. Treats NaNs as missing data.
maximum' :: RealFloat a => a -> a -> a Source #
IEEE 754 maximum operation. -0 is smaller than +0. Propagates NaNs.
maximumNumber :: RealFloat a => a -> a -> a Source #
IEEE 754 maximumNumber operation. -0 is smaller than +0. Treats NaNs as missing data.
minimumMagnitude :: RealFloat a => a -> a -> a Source #
IEEE 754 minimumMagnitude operation.
minimumMagnitudeNumber :: RealFloat a => a -> a -> a Source #
IEEE 754 minimumMagnitudeNumber operation.
maximumMagnitude :: RealFloat a => a -> a -> a Source #
IEEE 754 maximumMagnitude operation.
maximumMagnitudeNumber :: RealFloat a => a -> a -> a Source #
IEEE 754 maximumMagnitudeNumber operation.
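A hypothetical session illustrating the NaN and signed-zero treatment described above (the nan binding is illustrative; outputs follow from the stated semantics):
>>> let nan = 0/0 :: Double
>>> minimum' 1 nan          -- propagates NaN
NaN
>>> minimumNumber 1 nan     -- treats NaN as missing data
1.0
>>> maximum' (-0.0) 0.0     -- -0 is smaller than +0
0.0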
Floating-point constants
minPositive :: RealFloat a => a Source #
The smallest positive value expressible in an IEEE floating-point format. This value is subnormal.
>>> (minPositive :: Float) == 0x1p-149
True
>>> (minPositive :: Double) == 0x1p-1074
True
>>> nextDown (minPositive :: Float)
0.0
>>> nextDown (minPositive :: Double)
0.0
minPositiveNormal :: RealFloat a => a Source #
The smallest positive normal value expressible in an IEEE floating-point format.
>>> (minPositiveNormal :: Float) == 0x1p-126
True
>>> (minPositiveNormal :: Double) == 0x1p-1022
True
>>> isDenormalized (minPositiveNormal :: Float)
False
>>> isDenormalized (minPositiveNormal :: Double)
False
>>> isDenormalized (nextDown (minPositiveNormal :: Float))
True
>>> isDenormalized (nextDown (minPositiveNormal :: Double))
True
maxFinite :: RealFloat a => a Source #
The largest finite value expressible in an IEEE floating-point format.
>>> (maxFinite :: Float) == 0x1.fffffep+127
True
>>> (maxFinite :: Double) == 0x1.ffff_ffff_ffff_fp+1023
True
http://mathhelpforum.com/number-theory/169092-need-some-help-complex-functions-print.html
|
# Need some help with complex functions... :(
• January 23rd 2011, 03:18 AM
sedam7
Need some help with complex functions... :(
hello :D
I'm having big problem with simple issue, so if any one is willing to guide me to the right approach I will be most grateful :D
my problem is with this task :D
"If complex function $f(z)$ is regular in the area $D\subset \mathbb{C}$ than it's also continuous on same defined area."
and I need to prove it :D but I can't even start it, because if I assume that a function is regular, doesn't that already imply the function is continuous...?!?! I have no clue how to do this :D
• January 23rd 2011, 03:26 AM
FernandoRevilla
For every $z_0\in D$ :
$f(z)-f(z_0)=\dfrac{f(z)-f(z_0)}{z-z_0}\cdot (z-z_0)\;,\;(z\neq z_0)$
Taking limits when $z\rightarrow z_0$ :
$\displaystyle\lim_{z \to z_0}{(f(z)-f(z_0))}=f'(z_0)\cdot 0=0$
Now, you can conclude.
Fernando Revilla
• January 23rd 2011, 03:38 AM
sedam7
Thank you very very much :D
(I'm an idiot hehehehehe)
• January 23rd 2011, 03:44 AM
FernandoRevilla
Quote:
Originally Posted by sedam7
(I'm an idiot hehehehehe)
Welcome to my Club. :)
Fernando Revilla
https://ypei.org/microblog.html
|
# Yuchen's Microblog
• 2022-03-03 - The CC BY-NC-SA license is bad, don't use it
The CC BY-NC-SA license permits remixing, but forbids commercial use (and is therefore nonfree). It is a weird Frankenstein that combines copyleft with nonfree elements. As a result, works under this license will beget derived works which are also nonfree, because of the ShareAlike (SA) clause. It is a mistake by Creative Commons, which should not have designed any noncommercial licenses, or, if it really wanted to, should have made noncommercial an optional clause of the NoDerivatives (ND) licenses.
Please do the public a favour and don't release your work under the NC-SA license. If you really do not want to allow commercial use, please use NC-ND instead to avoid nonfree derivatives.
• 2022-02-17 - Reading pdf or djvu in Emacs
DocView mode is a versatile builtin major mode in emacs, with the ability to display pdf, djvu, odt etc.
However, it is not really cut out for the job of displaying large pdf or djvu files. It can be slow, and may hang emacs when opening large djvu files.
If you want to view pdf, pdf-tools is a great alternative. It is much faster (I'd say as fast as external pdf viewers like zathura), and offers an outline mode style toc navigation. It also supports fast regexp isearch as well as occur. There's a nice demo which you can view with mpv, youtube-dl etc. http://www.dailymotion.com/video/x2bc1is_pdf-tools-tourdeforce_tech?forcedQuality%3Dhd720
For djvu, I recommend using the djvu mode instead. It does not make emacs hang, and allows quick switching between image and text.
• 2022-02-17 - dns66
For some reason, the Google public DNS seems to have become a default for many programs, including systemd. After installing Lineage OS, I noticed it also used Google public DNS as the default DNS for WiFi. To make things even worse, it was not changeable. I tried several ways documented on the web, including changing it in the WiFi setting, but none worked. In the end, I was able to fix it with an app called DNS66. If you face the same problem, give it a try.
• 2022-02-15 - Google analytics alternatives
Austrian and French regulators ruled against Google Analytics based on GDPR and data privacy. Another big problem with Google Analytics is that it is nonfree. There are free alternatives like Matomo (GPLv3+) and Plausible (AGPLv3+). I used to self-host Matomo and it was quite decent, but I would like to try out the self-hosted version of Plausible soon, since it is simple and lightweight.
H/t Michael McMahon
• 2022-02-10 - Big nonfree gotcha
I just came across my first big nonfree gotcha in formal verification.
CompCert, the verified C compiler, is nonfree.
And I learned there are no free alternatives.
And opam (the ocaml package manager) installed it without any warning, possibly as a dep of VST, used in Vol 5 of Software Foundations, Verifiable C.
But I also learned that the part of CompCert required for the study of Verifiable C is free. Someone should definitely extract out the free part which I hope can be distributed on opam as say "compcert-free".
On a side note, I feel the GNU Project is missing some essential functional packages, which could result in the proglang / formal verification community having less aspiration for free software. I am not faulting the GNU Project for this, of course. But by contrast, all Emacs packages, be they independent or part of GNU ELPA, are by default licensed under GPLv3+, which could be caused by the GNU Project having a strong foothold in Emacs.
H/t Kiran G and pgiarrusso.
Compared to Replicant, Lineage OS is not fully free, as it contains kernel blobs and nonfree drivers. But if you are using stock android, switching to Lineage OS will be a massive improvement.
The best part: there will be no more proprietary Google apps and services spying on you. No google play, google maps, gmail, chrome, … You can ensure that all apps are free software by installing them from F-droid.
The stock apps that come with Lineage OS are nice too. There's the caffeine mode toggle switch in the status bar, which keeps the screen on for 5, 10, 30 minutes, or indefinitely. I find this feature very useful when, say, I need to check the phone while cooking and I don't want it to go to the lock screen. The bundled browser comes with an option to have all javascript disabled, fixing the Javascript Trap with a sledgehammer. By extension, apps with internal browsers, like the Materialistic Hacker News client, will also not accidentally execute any Javascript.
It also seems like the phone battery lasts longer, which could be due to a more lightweight system.
The only things I'm missing are in the I/O department: glide typing and a Simplified Chinese input method. I used the proprietary Google keyboard on stock android that came with these features, but the input methods available on F-droid are missing them. I find myself typing less after moving to Lineage OS because I can't type as fast as I used to.
• 2022-02-10 - Mumble chirping when muted? Try toggling mute cue
Recently when I connect to a mumble server and mute myself, I hear a periodic chirp / beep once every two seconds or so. It turns out that I needed to deselect "mute cue" in settings, which seems to be doing the opposite of what it is supposed to do.
• 2022-02-03 - Reading EPUB in Emacs
nov.el is pretty handy as an EPUB reader in Emacs. Before that I was using the calibre reader, which is slow and resource hungry.
1. Use follow-mode to allow double- or triple-page display. Note that you may need to remove the header line for the text continuation to be correct.
2. Annotation can be done with org-capture, which can easily get the selected text and link to the EPUB with the correct position.
One feature that I am sorely missing from calibre reader is fulltext search, which is an item in the todo list of the project.
Another feature would be an equivalent to Info-index, which could allow jumping to any section in the book, with completion.
• 2022-01-27 - From mu4e to gnus
mu4e is a very popular emacs email client, known for the ease of setup.
However it has its problems. The search is not very good (for example I had a hard time searching for patterns with special symbols, like subjects containing the string "[gnu.org #"). The indexing is part of the program, which, combined with its lack of concurrency, makes it rather tricky to schedule index updates (on a side note elfeed has a similar problem). One may need to perform some hack by killing any mu process running in emacs in a cron script before indexing. I had been doing manual indexing, and waiting for 30 seconds to index all the mails whenever I wanted to check the mailbox for updates was rather distracting.
I made the move to gnus, which did not disappoint. Its search is more useful and natural - one does not have to worry about symbols. If gnus is configured with an imap server program like dovecot, indexing becomes that program's job, which can run as a cron job without bothering gnus. Since the imap server handles concurrency, one can even open up gnus in multiple emacs instances. As an added benefit, opening mailboxes is also much faster than in mu4e.
As mentioned before, the popular RSS reader elfeed operates on a similar model to mu4e, thus lacking concurrency. In fact, it is even more limited: if one runs elfeed in two emacs instances, the updates in one are not reflected in the other! I hope there could be an emacs RSS reader with the simplicity of elfeed, but taking the gnus approach, leaving fetching, indexing and storage to a (local) server program, while the reader itself simply acts as a local client.
• 2022-01-27 - EMMS and MPV
EMMS is a multimedia playlist management tool on emacs. It allows users to control the playback of audio, video and even images by interacting with external players like mpv and vlc using IPC.
I've been using EMMS exclusively with mpv, and the separation of media playlist management and playback, as well as moving playlist management to a purely plaintext environment make perfect sense.
What is a playlist, but a list of urls? As a simple experiment, you can write a working m3u file by hand, with each line the raw path to a media file. It will load in EMMS, neat.
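For instance, a hand-written playlist can be as small as this (the paths and urls here are made up):
# example.m3u -- one path or url per line
/home/user/music/track01.ogg
https://example.org/talks/keynote.webm
sftp://user@host:22/talks/lecture.mkv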
What's also neat is mpv, a media player able to handle any kind of url you throw at it. Local files? Of course. Remote file urls over https? Yup. SFTP? You don't even need to sshfs mount, just do mpv sftp://host:port/path/to/media.file. How about Libreplanet videos? It will work, through youtube-dl extractors, just do mpv https://media.libreplanet.org/u/libreplanet/m/fsf-award-ceremony/. By the way, there's no mediagoblin extractor, but youtube-dl can find the media file using its rather versatile generic extractor.
You can also create a playlist based on a youtube channel or playlist, with the help of some elisp plumbing calling youtube-dl -j <playlist-or-channel-url> | jq '.webpage_url' and adding the urls to a playlist. If you want, you can even bind a key to download a remote media piece you like!
• 2022-01-27 - New Hampshire and Chile free software politics
News was that a New Hampshire bill to promote free software in government services (especially mandating online government services to be usable if nonfree javascript is blocked using tools like LibreJS) is on the table, and Chile is rewriting its constitution, with proposals to include free software values (Access to knowledge) and related digital rights values (Technological and digital sovereignty and Internet privacy).
• 2022-01-25 - User freedom on the web
The user freedom issues on the web are slightly complicated.
• Client-side: is code executed on the client machine (e.g. javascript) free? If so then the user's freedom is protected.
• Then there's also the case when the client blocks the execution of nonfree javascript (e.g. by using LibreJS), in which case the user's freedom is still protected.
• There are also false positives when using LibreJS, when the javascript code is free, but not labelled or annotated in a LibreJS-compliant way. In this case, since the client code is free, it is safe to whitelist the scripts.
• Server-side: is the server not under the user's (individual or collective) control, doing computing on the user's behalf? If so then that's bad (SaaSS), otherwise user freedom respecting.
• Examples of computing inherently one's own include translation, photo editing etc.
• Examples of computing not inherently one's own are generally activities requiring communication with others' computers, include accessing information published by others (e.g. reading a blog post) and publishing information (e.g. tweeting).
Case studies:
Visiting the FSF homepage in a graphical browser like Firefox
This is fine, because all Javascript is trivial or LibreJS compliant. Reading information published by the FSF is computing not inherently one's own, so it's not SaaSS hence freedom respecting.
Tooting on Mastodon using its web client
This is generally fine, as the Mastodon web client is free software, and some instances (like hostux.social) are LibreJS-compliant. Publishing microblog posts is a communication act, thus the Mastodon service that does so is not SaaSS.
Watching videos on Peertube using its webclient
Even though Peertube is unusable with LibreJS on, it is free software from backend to frontend. Whitelisting is generally safe. Watching videos is again access information published by others, thus not SaaSS.
Watching YouTube videos on an invidious proxy
similarly reading tweets on nitter, reading stuff on bibliogram or doing these activities using a free software client. This is certainly OK on the frontend as well as backend since it's communication.
Routing on osmand
Osmand is a free software client and all computation happens locally so it's good.
Routing on osm.org
It depends on whether the routing calculation is done locally using free javascript programs, or remotely (SaaSS).
Doable with LibreJS blocking all non-trivial nonfree javascript, and it is communications.
Publishing tweets using free software clients
Using free clients is fine on the client side, and publication counts as communication i.e. not SaaSS. This is what the FSF does.
Get weather forecast
Even though the forecast is done by computation on meteorological data, the user did not supply data, thus such computation does not count as SaaSS. It is similar to when someone does computation in their head (to outline, draft and revise) before publishing a blog post.
We can spot some trends from these case studies:
• Generally, a free software (not necessarily web) client is good. Many tools offer help with this, including the alternative frontends, haketilo and woot.tech.
• The F-droid Non-Free Network Service antifeature is not consistent with the above method. In fact, it is not clear what the definition of this antifeature is. For example, free alternative frontends like NewPipe and Fritter are labelled with this antifeature, though by the analysis above they are fine.
• AGPL is mostly irrelevant in this discussion because it is mostly concerned with the freedom of the service provider, even though it is the best software license.
• It's OK freedom-wise to use GAFAM service as long as the client is free and the service does not count as SaaSS, though there are separate concerns like user privacy.
• 2022-01-25 - lxo open letter
Alexandre Oliva (lxo) posted an email reply to a celebrated feminist leader regarding https://stallmansupport.org and the drama. Yet another powerful piece.
In other news, with the FSF expanding process to allow associate members to nominate new board members, I hope lxo can be back on the FSF Board.
• 2022-01-20 - Curry-Howard correspondence and Dependent types
What is Curry-Howard correspondence?
Roughly speaking, it says there is a one-one correspondence between world of propositions (proofs) and world of types (programs).
A simple illustration:
For any two propositions P and Q, the proposition that P implying Q, i.e. P -> Q, corresponds to: for any two types T and S, the functional type T -> S.
Finding a proof of P -> Q corresponds to finding an element in T -> S.
In a more simplified setting, proving a proposition P corresponds to finding an element of T.
In coq, one can write:
Theorem p_implies_q : P -> Q.
Which, except for the keywords Theorem and Definition, looks exactly the same as
Definition some_function : T -> S.
To show the pun is genuine, if you want, you can prove a definition and define a theorem:
Definition add_one : nat -> nat.
Proof.
intro n. (* introduce the argument of the function to the context and call it n. *)
apply succ. (* apply succ to the "goal" *)
apply n.
Defined.
Definition imp_refl : forall P : Prop, P -> P := fun (P : Prop) (p : P) => p.
It may still feel rather forced, as the proof of the definition is way less readable than a direct definition, and the definition of the theorem is just plain silly (Logical Foundations has a more reasonable example - the evenness of a number).
How about an actual example, where one cannot do without the correspondence? Well, look no further than dependent types!
Say you want to work with vectors, which are lists of a certain length, which is encoded as a pair (the list, the length) with a condition:
Definition vector (X : Type) :=
{ '(xs, n) : list X * nat | length xs = n }.
This is called a dependent sum type, or sigma type, where the vertical bar | serves the same role as in set theory, meaning "such that".
The full definition is a bit more involved which we will skip.
Now, how do you define a concat function that takes two vectors of length m and n and returns a concatenated vector of length m + n? You will have to do two things in such a definition:
• Define the resulting pair.
• Prove the length of the resulting list is m + n.
Now there's no escaping the pun!
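For concreteness, here is one way the definition-plus-proof might look, reusing the vector definition above (a rough sketch I have not type-checked against any particular Coq version; the name vconcat and the exact tactics are illustrative):
Require Import Coq.Lists.List.
Import ListNotations.

Definition vconcat {X : Type} (v w : vector X) : vector X.
Proof.
  destruct v as [[xs m] Hm]; destruct w as [[ys n] Hn].
  (* the data part: the concatenated list paired with the summed length *)
  exists (xs ++ ys, m + n).
  (* the proof part: length (xs ++ ys) = m + n *)
  cbn in *. rewrite app_length, Hm, Hn. reflexivity.
Defined.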
• 2022-01-18 - Warren, Jayapal Demand Google Stop Trying to 'Bully' DOJ Antitrust Official
Google is too big and powerful and should be broken up. Everyone should read the antitrust filings against Google, especially the one led by the Texas AG. Another good read, which the filing was apparently built on, with a more comprehensive introduction to the relevant technologies, is Why Google Dominates Advertising Markets by Srinivasan.
• 2022-01-18 - Free software vs. open source
Alexandre Oliva's post Against Software Tyranny is a good read on the difference between free software and open source. Free software is the liberation of computing, whereas open source is to hope for the corporations to be enlightened. Much like copyleft vs. pushover and dare I say progressives vs. liberals.
Corporations view "open source" software as commons, just like natural resources, free for their taking. This is the key cause of unsustainable free software development recently under discussion. This is also why I don't really like to refer to the free world as commons.
A commons is too weak to protect our computing freedom, and only makes sense if nonfree software is eliminated.
• 2022-01-18 - theyvoteforyou.org.au
theyvoteforyou.org.au looks like a very useful resource for democracy and shouldn't be shut down.
• 2022-01-18 - The Djokovic case
"Rules are rules," the Prime Minister declared, especially when it comes to our borders. Rules weren't rules last year, when celebrities travelled to and from Australia while ordinary people were denied a permit to leave, or, even more scandalously, not afforded the most basic right of a citizen to return to their country. Now though, when Australia chooses to insist on a medical procedure as a condition of entry, the rules are suddenly rigid.
• 2022-01-13 - Why you should read Software Foundations
I finished reading the Logical Foundations, the first volume of Software Foundations. What an amazing book.
I learned about formal proofs before, by playing the natural number game designed by Kevin Buzzard, which is in Lean.
Logical Foundations covers all that and goes much deeper.
There are many great things about this book. You can download it, ignore the html files, and just burn through the coq .v files, which are actually the source of the webpages. The texts are basically comments in the .v files, but the readability is not worse than the html files, and actually much better in emacs. It is almost like literate programming.
As an aside, initially I had some problems with getting org-babel to work, but the files were deleted years ago. After reading LF I realised org-babel is not really important, since I can just happily work through the giant .v files in Proof General (which you can easily install manually without MELPA).
Another neat thing about LF is that it really demonstrates the parallel between propositions and types, by using the same arrow notations for implication and function, by making no notational distinctions with proofs of a theorem and elements of type. A Proof is a Definition, and a property of numbers is a dependent type. If you want to understand Curry-Howard Correspondence, you can't go wrong with Logical Foundations.
By the end of book, you will have worked through the implementation of a mini imperative language called Imp, and proved some simple properties of programs written in Imp - great value for your time!
• 2022-01-13 - A simple shell script to get Australian weather forecast from BoM
In this micropost I'll show how to write your own android weather app ("app" in a liberal sense) to retrieve weather for an Australian town, using Melbourne as an example.
The short form (précis) 7-day forecasts, containing min / max temperature, humidity level and chances of precipitation are available as xml files provided by the BoM - simply go there and get the link for the state you are interested in. For Victoria it is ftp://ftp.bom.gov.au/anon/gen/fwo/IDV10753.xml.
So the first step is to download the xml to a temp directory, which can be done with wget:
cd $PREFIX/tmp
rm IDV10753.xml || true
wget ftp://ftp.bom.gov.au/anon/gen/fwo/IDV10753.xml
Next one needs to query the xml file for the relevant data. Locate the city name, date, temperatures and chance of precipitation using the nice xmlstarlet tool and format them nicely.
result=$(xml sel -t -m '//area[@description="Melbourne"]/forecast-period' \
  -v "substring-before(@start-time-local, 'T')" \
  -o " - min: " -v "element[@type='air_temperature_minimum']" \
  -o " - max: " -v "element[@type='air_temperature_maximum']" \
  -o " - " -v "text[@type='precis']" \
  -o " - chance of precipitation: " -v "text[@type='probability_of_precipitation']" \
  -n IDV10753.xml)
And finally, send the forecast to yourself using Termux:API.
echo -e "7-day weather forecast in Melbourne:\n${result}" | \
termux-sms-send -n 044444444 # replace 044444444 with your phone number
To use it in termux, it doesn't hurt to specify the location of the shell in a shebang:
#!/data/data/com.termux/files/usr/bin/bash
Finally, to make the script a bit more convenient to invoke, use Termux:Widget and copy the script to ~/.shortcuts, so you can make it appear as a button in a desktop widget.
Enjoy the Melbourne weather!
• 2022-01-11 - Virtual event organisers, please think of people in the APAC region
It is impossible to hold an online event that works for people from the American, European and Asia-Pacific (APAC) regions at the same time. Normally one region is dropped in favour of the other two. Most online tech events are organised by people located in Europe or America, and 98% of them are set at a time that accommodates people in both regions, and thus does not work for people in APAC. Perhaps this is because fewer people from the APAC region attend at an APAC-friendly time than people from Europe attend at a Europe-friendly time (say, when the event is organised by someone in the States) - which could itself be an effect of events not accommodating APAC time.
As such, if organisers in Europe or America sometimes set event times to be APAC-friendly, say 40% of the time, instead of always accommodating those across the pond, a more diverse and vibrant community may result. More concretely, that roughly translates to 12am-12pm UTC for Europe-based organisers, 7pm-7am UTC-5 for US East Coast, and 3pm-3am for US West Coast.
Thank you!
• 2022-01-06 - Coq is cool
Having written emacs lisp for a while and grown my emacs init file to thousands of lines, I decided to revisit the Land of Lisp, a fantastically and humorously written book about lisp.
The book said that lisp is basically lambda calculus, which got me thinking. How is it lambda calculus?
So I went back into the rabbit hole that drew me in a few years ago, not knowing that's where I was going. I started by refreshing my knowledge reading Types and Programming Languages (TAPL). After reading it I still didn't quite get how lisp is basically lambda calculus.
TAPL mentioned Curry-Howard correspondence, a theory that connects logic systems with type systems. I wanted to know what each of the 8 vertices of the lambda cube corresponds to and how, which was not covered in TAPL. After a failed web surfing session in an attempt to find quick answers to my question, I was reminded of the Software Foundations series, and indeed, it talked about Curry-Howard correspondence with real code.
So I went on to read the first volume, titled Logical Foundations. Previously I had an (irrational?) aversion towards logic, fearing much of it was all dry tautology and not as fun as more "useful" mathematics like probability theory. Boy was I wrong. Logical Foundations introduces coq, which I hadn't touched due to the same aversion. But as it turned out, coq answered most of my questions about formal mathematics, and fully developed my (unoriginal) ideas of formalising mathematics. Maths is code. Theorems are identified by their proofs. You can apply a theorem in the proof of another theorem, matching terms. You can parameterise theorems etc. etc. Coq is something I wish I knew when I was a PhD student. The underlying logic is CoC, the calculus of constructions, which is the top vertex of the lambda cube. I am still reading the book and can't wait to find out the extent of mathematics covered by it and what can be done about non-constructive systems (like classical maths, where you can cheat with excluded middle) using coq or other formal systems.
If one day programs and proofs are indistinguishable, the two traditions will blend. Maths has no copyright, but advanced maths can be hard to understand, though written in well-commented code it will be more accessible. Computer programs are the opposite: heavily copyrighted, under good free licenses like (A)GPLv3+ and evil proprietary licenses, but more accessible (code obfuscation is also a thing, but it cannot have gaps). My hope is this will bring the best of both worlds, that is, an elimination of copyright in computer programs (look, it is all maths, and copyrighting theorems and proofs is absurd!), and a more accessible corpus of advanced mathematics.
• 2021-12-30 - Emacs is cool
Emacs blows Vim out of water.
I started using vim in the late 2000s, perhaps 2010, drawn by vim's marketing slogan, "what you think is what you get".
I tried to get vi keybinding on everything, in browser (vimperator), pdf viewer (zathura), window manager (i3) etc. I used vimwiki heavily for knowledge and task management, and even wrote the initial version of the pandoc vimwiki reader.
About 18 months ago (around the end of June 2020) I decided to give Emacs a try. The reason for this decision was more ideological than technical - I was fed up with a free-software-hostile environment, and Emacs always struck me as a centrepiece of the GNU project.
I started the official Emacs distribution with an empty config file, and read the emacs tutorial. Coming from vim, it was quite uncomfortable to go without the convenient keys from hjkl for basic navigation to C-d for scrolling down half page and . for repeating the last action.
Org mode came as a double culture shock for someone used to vimwiki. Why would anyone think having a level-8 heading is a good idea? The link format was also a bit more verbose. Online resources focused more on GTD workflow than describing the markup syntax. And the table auto-align was nothing fancy - we have that in vimwiki.
But soon enough, I found out Emacs is indeed way better than vim. It can be used for almost about every computing task, including web browsing, emails, media player, executing shell commands, reading epub, managing files, version control, and of course, text editing. Days without emacs now seemed like dark ages.
Some aha moments in my Emacs journey:
• When I discovered icomplete and fido, completion and function / variable discovery became so much easier.
• When I combined icomplete with (setq org-refile-use-outline-path t), (setq org-refile-use-cache t), and (setq org-outline-path-complete-in-steps nil), which allows me to jump to any entry in a huge org file within seconds.
• When I learned about emacsclient, so that I can have a long running emacs server so that I can share states across multiple emacs frames (or "windows" in a DE / WM context), and I don't lose anything when accidentally typing C-x C-c and quitting emacs.
• When I found out EMMS to be the only media player with persistent playlists that I can switch and control with ease, and with the help of mpv, it can play urls in multiple schemes from http to sftp, and with the help of youtube-dl, it can play mediagoblin webpages, which allowed me to go through talks at https://audio-video.gnu.org/video/ and https://media.libreplanet.org/videos without losing track.
• When I read the GTD book, despite not having a secretary to bring me the tickler folder or a koi pond for me to pace around with a wine glass in hand, I could finally put the design of org mode in context and vastly improve my workflow by implementing my version of GTD.
• When I switched from webmail to mu4e, I learned how to get mails as a client and that emails are basically plaintext files (e.g. maildir) which can be read and written to locally and synced to remote server, and that smtp and imap are completely separate areas of concern; when I switched from mu4e to gnus, I learned how to serve mails using dovecot as an imap server and talk to a mail server using telnet, as well as the nice thing about offloading indexing to an external party (updating mails in gnus is instant, compared to mu4e-update-index).
The most useful tool, the killer feature for me, is of course org mode. I spend most of my emacs screen time in org mode. I can't think of any aha moments related to it, but the process of adoption was gradual and there are so many nice features. I approached org mode by starting to use one new feature every few weeks: speed commands, org agenda, links, properties, org capture, effort estimates, clocking, tagging, refiling, attachments, online image display, citing… The problem with marketing org mode, and emacs in general, is that it integrates so much into my life and its workflow is so involved, that it is rather difficult to come up with a quick demo to impress people.
One final point is that my usage is pretty vanilla, in that I strongly prefer the core distribution and GNU ELPA. I also installed a few packages from NonGNU ELPA, but I don't use MELPA at all, both for ideological reasons and for simplicity. On the rare occasions when I really need a package that is not in core / NonGNU ELPA, I normally install it manually with make / add-to-list load-path and require / autoload.
Enough rambling…
• 2021-12-23 - Theory of Bitcoin
The theoretical model of bitcoin is surprisingly simple. A transaction is a list of inputs and outputs, where the inputs trace back to outputs of previous transactions. Transactions form blocks, and blocks form the blockchain, with each block verifying the previous ones, going all the way back to the genesis block. Proof of work requires finding a nonce that hashes to a sufficiently small number. One new block is created roughly every 10 minutes; the transaction fees plus a block reward of 6.25 BTC (halving every 210k blocks) go to whoever created the block (aka the miner). A total of 21 million BTC, running out by ~2140.
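To make the proof-of-work part concrete, here is a toy Python sketch of the nonce search (a simplification of mine: real mining double-SHA-256 hashes an 80-byte block header and compares it against a 256-bit target encoded in the block, but the shape is the same):

import hashlib

def mine(block_data, difficulty_bits=20):
    # find a nonce whose double SHA-256 hash falls below the target
    target = 2 ** (256 - difficulty_bits)
    nonce = 0
    while True:
        h = hashlib.sha256(hashlib.sha256(
            (block_data + str(nonce)).encode()).digest()).hexdigest()
        if int(h, 16) < target:
            return nonce, h
        nonce += 1

nonce, digest = mine("toy block: alice pays bob 1 BTC")
print(nonce, digest)   # with 20 difficulty bits, the digest starts with five zero hex digits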
By the way, the whitepaper is not very useful for understanding the theory of bitcoin, but Wikipedia and the bitcoin wiki are far better resources IMO.
• 2021-12-23 - Curve25519
A tour of Curve25519 is a great introduction to elliptic curve cryptography. It explains how EC is like modular arithmetic, with the analogy that (scalar) multiplication is to EC what exponentiation is to modular arithmetic.
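For reference, the modular-arithmetic side of that analogy is plain Diffie-Hellman, which fits in a few lines of Python (a toy sketch with parameters I picked for illustration, not something to deploy; on Curve25519 the pow() calls become scalar multiplications of a curve point):

import secrets

p = 2**255 - 19        # the Curve25519 field prime, reused here as a DH modulus
g = 2                  # toy generator
a = secrets.randbelow(p)              # Alice's private key
b = secrets.randbelow(p)              # Bob's private key
A, B = pow(g, a, p), pow(g, b, p)     # public keys
assert pow(B, a, p) == pow(A, b, p)   # both sides derive the same shared secret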
• 2021-12-09 - Ombudsman finds Victoria border permit system 'unjust'
Link. Not the first time the Ombudsman has such findings about pandemic measures taken by the Victorian Government.
• 2021-12-08 - EmacsConf 2021 alternate streaming solution
LibreAustralia hosted an alternate EmacsConf 2021 stream for APAC timezones on 28th November. It was a fun event to organise.
How to stream an event like this with a fully free software stack? Initially I envisioned a server streaming solution like I did with the inaugural event, by using ffmpeg to feed a local video file to icecast:
ffmpeg -re -i ./video.webm -codec copy -content_type video/webm icecast://source:password@localhost:8000/live.webm
This works very well with one video, but with multiple videos one will need to concatenate them. The concat idea has two problems:
1. Not all videos can be concatenated. In fact, in most of my experiments, the output video could not play past the portion corresponding to the first input video.
2. There's absolutely no control of the playback. Once the stream started, the whole event is scripted, and to adjust the schedule one has to kill the stream first.
Problem 2 can be fixed by utilising the fallback mountpoint with fallback-override:
<mount>
<mount-name>/live.webm</mount-name>
<fallback-mount>/fallback.webm</fallback-mount>
<fallback-override>1</fallback-override>
</mount>
This way the stream never dies, provided a standby video plays on the fallback mountpoint.
Unfortunately not all videos can move smoothly between the main and the fallback mountpoints. Some transitions cause unpleasant visual artefacts lasting a dozen seconds; others (even worse) leave the audio turned into a high-pitch scratching noise that never recovers. For certain videos these problems occur even when a video transitions to itself.
It may be possible to use ffmpeg to re-encode videos so that they transition smoothly, which is something to figure out in the future.
That's pretty much a dead end for server streaming.
On to desktop streaming, which offers the ultimate flexibility of playback control, but is heavier on bandwidth and computing resources. One idea was OBS Studio, which unfortunately does not have icecast as one of its streaming options, but rather requires a hack to record to an icecast mountpoint.
I experimented with a setup from Amin Bandali, which seems to me like using OBS Studio as an ffmpeg wrapper. Unfortunately I would get segfault unless the stream is done with a minimal resolution.
Inspired by LibreMiami's watch party, I decided to try out Owncast. It was extremely easy to set up, and I could get an acceptable streaming performance with some low settings.
However, as pointed out by Amin, owncast uses rtmp as the streaming protocol, which probably encodes to mp4, a patent encumbered format.
How about streaming to BBB with screen share + monitor system audio as the source? A test with Leo Vivier showed that it has a similar performance to owncast. The downside with BBB is that it requires javascript and is less accessible than icecast for viewers.
What worked, in the end, was a direct ffmpeg-to-icecast stream (thanks to Sacha Chua):
ffmpeg -loglevel 0 -ar 48000 -i default -re -video_size 1280x720 -framerate 25 -f x11grab -i :0.0+0,20 -cluster_size_limit 2M -cluster_time_limit 5100 -content_type video/webm -c:v libvpx -b:v 1M -crf 30 -g 125 -deadline good -threads 4 -f webm icecast://source:pass@host:8000/live.webm
The captured area is shifted by 20 pixels in order not to grab the title bar of the player and emacs window.
The performance of this approach was better than any of the other desktop streaming solutions, probably due to its bare ffmpeg setup without any bells and whistles.
I also used an EMMS playlist to interleave the talk videos with standby music tracks. If the buffer times between talks were not so short, the whole event could have been autopiloted with elisp run-at-time!
• 2021-12-06 - The useful GPL "or later" clause
Ariadne Conill wrote a piece on GPL "or later" clause. I made a comment about two weeks ago, which was under moderation but has not appeared as of today. So I decided to publish it below (with some minor edits).
The article says:
The primary motive for the version upgrade clause, at the time, was quite simple: the concept of using copyright to enforce software freedom, was, at the time, a new and novel concept, and there was a concern that the license might have flaws or need clarifications.
The main purpose of the -or-later clause is compatibility. Any two (strong) copyleft licenses are incompatible. If a program is licensed under GPLv2-only, it is incompatible with GPLv3. Same goes for version 3: a GPLv3'd program will likely not be combinable with future GPLv4'd programs.
The article continues:
However, for all of the success of the GPLv3 drafting process, it must be noted that the GPL is ultimately published by the Free Software Foundation, an organization that many have questioned the long-term viability of lately.
What long-term viability? According to https://www.fsf.org/news/fsf-board-frequently-asked-questions-faq#FSFfinancialstatus:
The FSF is in good financial health. As is the case with many organizations, the pandemic affected the FSF, impacting donors, making it impossible to host or attend in-person events, and disrupting operations. Fortunately, conservative financial planning over the years provided the FSF with sufficient reserves to weather these difficulties.
The rating organization Charity Navigator recently gave the FSF its 8th consecutive 4-star rating and, for the first time ever, a perfect overall score: https://www.fsf.org/news/free-software-foundation-awarded-perfect-score-from-charity-navigator-plus-eighth-consecutive-four-star-rating.
The FSF does not depend on large single sources of funding. It accepts and appreciates support from corporations who want to give back by contributing to the development and advocacy for free software, but direct corporate support accounted for less than 3% of FSF revenue in its most recently audited fiscal year.
The vast majority of FSF’s financial support comes from individuals – many, but not all, of whom choose to become associate members. At this moment, the FSF has more associate members than at any time in its history.
The original article continues:
And this is ultimately the problem: what happens if the FSF shuts down, and has to liquidate? What if an intellectual property troll acquires the GNU copyright assignments, or acquires the trademark rights to the FSF name, and publishes a new GPL version? There are many possibilities to be concerned about, but developers can do two things to mitigate the damage.
It is baked into GPL terms that future versions of the license have to be similar to the current version in spirit, see Section 14 of GPLv3 text, which protects GPL from the FSF:
The Free Software Foundation may publish revised and/or new versions of the GNU General Public License from time to time. Such new versions will be similar in spirit to the present version, but may differ in detail to address new problems or concerns.
On the other hand, GPLv3-or-later, as its name implies, offers a choice. The recipient of a program under this license can choose to apply GPLv3, or a future version e.g. GPLv4, and if the future version is bad all is not lost:
Suppose a program says “Version 3 of the GPL or any later version” and a new version of the GPL is released. If the new GPL version gives additional permission, that permission will be available immediately to all the users of the program. But if the new GPL version has a tighter requirement, it will not restrict use of the current version of the program, because it can still be used under GPL version 3. When a program says “Version 3 of the GPL or any later version”, users will always be permitted to use it, and even change it, according to the terms of GPL version 3—even after later versions of the GPL are available.
If a tighter requirement in a new version of the GPL need not be obeyed for existing software, how is it useful? Once GPL version 4 is available, the developers of most GPL-covered programs will release subsequent versions of their programs specifying “Version 4 of the GPL or any later version”. Then users will have to follow the tighter requirements in GPL version 4, for subsequent versions of the program.
However, developers are not obligated to do this; developers can continue allowing use of the previous version of the GPL, if that is their preference.
Continues on the original article:
First, they can stop using the “or later” clause in new GPL-licensed code.
This is a bad idea and likely harmful to the free software movement, because programs licensed under newer GPL will not be compatible with programs licensed under GPLv3-only.
Second, they can stop assigning copyright to the FSF. In the event that the FSF becomes compromised, for example, by an intellectual property troll, this limits the scope of their possible war chest for malicious GPL enforcement litigation. As we have learned from the McHardy cases involving Netfilter, in a project with multiple copyright holders, effective GPL enforcement litigation is most effective when done as a class action. In this way, dilution of the FSF copyright assignment pool protects the commons over time from exposure to malicious litigation by a compromised FSF.
The copyright assignment enables the FSF as the copyright holder to enforce GPL effectively.
The assignment contract safeguards the future of assigned work https://www.fsf.org/bulletin/2014/spring/copyright-assignment-at-the-fsf:
But the most important element of the assignment contract is the promise we make to every contributor and community member: We promise to always keep the software free. This promise extends to any successors in the copyright, meaning that even if the FSF were to go away the freedom of all users to share in the contributions wouldn't.
Finally, note there is a difference between Creative Commons licenses and GPL regarding the -or-later variants. GPL offers people the choice to use -only or -or-later, though FSF recommends the latter. Contrast that with Creative Commons licenses where -or-later is built-in, and the recipient has no choice.
• 2021-07-14 - Support Richard M. Stallman
In LibrePlanet 2021 in late March, Richard Stallman announced in his talk that he was returning to the Board of Directors of the Free Software Foundation. This was after he was forced to resign by a smear campaign in September 2019.
It was great news! It was a relief and like some kind of belated justice for him.
The events took a dark turn soon after. An "open letter" labelling Stallman many things he is not gained support from a group of established people in the "open source" community, and organisations in the same community, including Creative Commons, Mozilla, The Tor Project and Framasoft. The letter called for the removal of Stallman from his life's work, and cited defamatory materials as evidence.
Other organisations also joined in, including Software Freedom Conservancy, Free Software Foundation Europe and Electronic Frontier Foundation and published letters condemning the imaginary crimes committed by Stallman and issued sanctions against him and the FSF.
These groups and people refused to engage in discussions and some outright censored disagreements on this matter.
It was a religious inquisition, a lynch mob, and a textbook case of ritual defamation. It damaged the free software movement and harmed the Free Software Foundation. The management team resigned, leaving the organisation in bad shape, which was likely capitalised on by opportunists in the GCC Steering Committee to remove Stallman and to remove the copyright assignment requirement from the project without community consultation; this set a precedent, and other GNU projects were planning similar moves. The dilution of copyright will make GPL enforcement harder for these projects.
I condemn the defamatory attacks and support Richard M. Stallman and I hope you join me. Please see also https://stallmansupport.org.
P.S. Alexandre Oliva wrote a very insightful piece on the drama.
• 2020-08-02 - ia lawsuit
The four big publishers Hachette, HarperCollins, Wiley, and Penguin Random House are still pursuing Internet Archive.
[Their] lawsuit does not stop at seeking to end the practice of Controlled Digital Lending. These publishers call for the destruction of the 1.5 million digital books that Internet Archive makes available to our patrons. This form of digital book burning is unprecedented and unfairly disadvantages people with print disabilities. For the blind, ebooks are a lifeline, yet less than one in ten exists in accessible formats. Since 2010, Internet Archive has made our lending library available to the blind and print disabled community, in addition to sighted users. If the publishers are successful with their lawsuit, more than a million of those books would be deleted from the Internet's digital shelves forever.
• 2020-08-02 - fsf-membership
I am a proud associate member of the Free Software Foundation. For me the philosophy of Free Software is about ensuring the enrichment of a digital commons, so that knowledge and information are not concentrated in the hands of a select privileged few and locked up as "intellectual property". The genius of copyleft licenses like the GNU (A)GPL is that software released to the public remains public. Open source does not care about that.
If you also care about the public good, the hacker ethics, or the spirit of the web, please take a moment to consider joining FSF as an associate member. It comes with numerous perks and benefits.
• 2020-06-21 - how-can-you-help-ia
How can you help the Internet Archive? Use it. It's more than the Wayback Machine. And get involved.
• 2020-06-12 - open-library
Open Library was cofounded by Aaron Swartz. As part of the Internet Archive, it has done good work to spread knowledge. However, it is currently being sued by four major publishers over the National Emergency Library. IA decided to close the NEL two weeks earlier than planned, but the lawsuit is not over, which in the worst-case scenario could result in Controlled Digital Lending being ruled illegal and (less likely) the bankruptcy of the Internet Archive. If this happens it will be a big setback for the free-culture movement.
• 2020-04-15 - sanders-suspend-campaign
Suspending the campaign is different from dropping out of the race. Bernie Sanders remains on the ballot, and indeed in his campaign suspension speech he encouraged people to continue voting for him in the democratic primaries to push for changes in the convention.
• 2019-09-30 - defense-stallman
Someone wrote a bold article titled "In Defense of Richard Stallman". Kudos to him.
Also, an interesting read: Famous public figure in tech suffers the consequences for asshole-ish behavior.
• 2019-09-29 - stallman-resign
Last week Richard Stallman resigned from FSF. It is a great loss for the free software movement.
The apparent cause of his resignation and the events that triggered it reflect some alarming trends of the zeitgeist. Here is a detailed review of what happened: Low grade "journalists" and internet mob attack RMS with lies. In-depth review.. Some interesting articles on this are: Weekly Roundup: The Passion Of Saint iGNUcius Edition, Why I Once Called for Richard Stallman to Step Down.
Dishonest and misleading media pieces involved in this incident include The Daily Beast, Vice, Tech Crunch, Wired.
• 2019-03-16 - decss-haiku
Muse! When we learned to
count, little did we know all
the things we could do
some day by shuffling
those numbers: Pythagoras
said "All is number"
long before he saw
computers and their effects,
or what they could do
by computation,
naive and mechanical
fast arithmetic.
It changed the world, it
changed our consciousness and lives
to have such fast math
available to
us and anyone who cared
to learn programming.
Now help me, Muse, for
I wish to tell a piece of
controversial math,
for which the lawyers
of DVD CCA
don't forbear to sue:
that they alone should
know or have the right to teach
these skills and these rules.
(Do they understand
the content, or is it just
the effects they see?)
And all mathematics
is full of stories (just read
Eric Temple Bell);
and CSS is
no exception to this rule.
Sing, Muse, decryption
once secret, as all
knowledge, once unknown: how to
decrypt DVDs.
Seth Schoen, DeCSS haiku
• 2019-01-27 - learning-undecidable
My take on the Nature paper Learning can be undecidable:
Fantastic article, very clearly written.
So it reduces a kind of learnability called estimating the maximum (EMX) to a question about the cardinality of subsets of the reals (the continuum hypothesis), which is undecidable.
When it comes to the relation between EMX and the rest of machine learning framework, the article mentions that EMX belongs to "extensions of PAC learnability include Vapnik's statistical learning setting and the equivalent general learning setting by Shalev-Shwartz and colleagues" (I have no idea what these two things are), but it does not say whether EMX is representative of or reduces to common learning tasks. So it is not clear whether its undecidability applies to ML at large.
Another condition to the main theorem is the union bounded closure assumption. It seems a reasonable property of a family of sets, but then again I wonder how that translates to learning.
The article says "By now, we know of quite a few independence [from mathematical axioms] results, mostly for set theoretic questions like the continuum hypothesis, but also for results in algebra, analysis, infinite combinatorics and more. Machine learning, so far, has escaped this fate." but the description of the EMX learnability makes it more like a classical mathematical / theoretical computer science problem rather than machine learning.
An insightful conclusion: "How come learnability can neither be proved nor refuted? A closer look reveals that the source of the problem is in defining learnability as the existence of a learning function rather than the existence of a learning algorithm. In contrast with the existence of algorithms, the existence of functions over infinite domains is a (logically) subtle issue."
In relation to practical problems, it uses an example of ad targeting. However, a lot is lost in translation from the main theorem to this ad example.
The EMX problem states: given a domain X, a distribution P over X which is unknown, some samples from P, and a family of subsets of X called F, find A in F that approximately maximises P(A).
The undecidability rests on X being the continuous [0, 1] interval, and from the insight, we know the problem comes from the cardinality of subsets of the [0, 1] interval, which is "logically subtle".
In the ad problem, the domain X is all potential visitors, which is finite because there is a finite number of people in the world. In this case P is a categorical distribution over 1..n where n is the population of the world. One can get a good estimate of the parameters of a categorical distribution by asking for a sufficiently large number of samples and computing the empirical distribution. Let's call the estimated distribution Q. One can then choose from F (also finite) the set that maximises Q(A), which will be a solution to EMX.
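A sketch of that finite-domain argument with made-up sizes (hypothetical numbers of mine, nothing from the paper):

import numpy as np

rng = np.random.default_rng(0)
n = 1000                               # a finite "population" of visitors
p = rng.dirichlet(np.ones(n))          # the unknown true distribution P
samples = rng.choice(n, size=50_000, p=p)

q = np.bincount(samples, minlength=n) / len(samples)   # empirical estimate Q

# a finite family F of candidate sets; pick the one maximising Q(A)
F = [rng.choice(n, size=100, replace=False) for _ in range(20)]
best = max(F, key=lambda A: q[A].sum())
print(q[best].sum(), p[best].sum())    # the chosen set also scores well under the true P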
In other words, the theorem states: EMX is undecidable because not all EMX instances are decidable, because there are some nasty ones due to infinities. That does not mean no EMX instance is decidable. And I think the ad instance is decidable. Is there a learning task that actually corresponds to an undecidable EMX instance? I don't know, but I will not believe the result of this paper is useful until I see one.
h/t Reynaldo Boulogne
• 2018-12-11 - gavin-belson
I don't know about you people, but I don't want to live in a world where someone else makes the world a better place better than we do.
Gavin Belson, Silicon Valley S2E1.
• 2018-10-05 - margins
With Fermat's Library's new tool margins, you can host your own journal club.
• 2018-09-18 - rnn-turing
Just some non-rigorous guess / thought: feedforward networks are like combinatorial logic, and recurrent networks are like sequential logic (e.g. the data flip-flop is like the feedback connection in an RNN). Since NAND gates give you combinatorial logic, and combinatorial logic + sequential logic = a von Neumann machine, which is an approximation of the Turing machine, it is not surprising that RNNs (with feedforward networks) are Turing complete (assuming that neural networks can learn the NAND gate).
• 2018-09-07 - zitierkartell
• 2018-09-05 - short-science
• ShortScience.org is a platform for post-publication discussion aiming to improve accessibility and reproducibility of research ideas.
• The website has over 800 summaries, mostly in machine learning, written by the community and organized by paper, conference, and year.
• Reading summaries of papers is useful to obtain the perspective and insight of another reader, why they liked or disliked it, and their attempt to demystify complicated sections.
• Also, writing summaries is a good exercise to understand the content of a paper because you are forced to challenge your assumptions when explaining it.
• Finally, you can keep up to date with the flood of research by reading the latest summaries on our Twitter and Facebook pages.
• 2018-08-13 - darknet-diaries
Darknet Diaries is a cool podcast. According to its about page it covers "true stories from the dark side of the Internet. Stories about hackers, defenders, threats, malware, botnets, breaches, and privacy."
• 2018-06-20 - coursera-basic-income
Coursera is having a Teach-Out on Basic Income.
• 2018-06-19 - pun-generator
• 2018-06-15 - hackers-excerpt
But as more nontechnical people bought computers, the things that impressed hackers were not as essential. While the programs themselves had to maintain a certain standard of quality, it was quite possible that the most exacting standards—those applied by a hacker who wanted to add one more feature, or wouldn't let go of a project until it was demonstrably faster than anything else around—were probably counterproductive. What seemed more important was marketing. There were plenty of brilliant programs which no one knew about. Sometimes hackers would write programs and put them in the public domain, give them away as easily as John Harris had lent his early copy of Jawbreaker to the guys at the Fresno computer store. But rarely would people ask for public domain programs by name: they wanted the ones they saw advertised and discussed in magazines, demonstrated in computer stores. It was not so important to have amazingly clever algorithms. Users would put up with more commonplace ones.
The Hacker Ethic, of course, held that every program should be as good as you could make it (or better), infinitely flexible, admired for its brilliance of concept and execution, and designed to extend the user's powers. Selling computer programs like toothpaste was heresy. But it was happening. Consider the prescription for success offered by one of a panel of high-tech venture capitalists, gathered at a 1982 software show: "I can summarize what it takes in three words: marketing, marketing, marketing." When computers are sold like toasters, programs will be sold like toothpaste. The Hacker Ethic notwithstanding.
Hackers: Heroes of Computer Revolution, by Steven Levy.
• 2018-06-11 - catalan-overflow
To compute Catalan numbers without unnecessary overflow, use the recurrence formula $$C_n = {4 n - 2 \over n + 1} C_{n - 1}$$.
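A quick sketch of that recurrence in Python (Python integers are arbitrary precision anyway, but multiplying before dividing keeps every intermediate an exact integer and as small as possible, which is the point in fixed-width languages):

def catalan(n):
    # C_0 = 1, C_k = (4k - 2) / (k + 1) * C_{k-1}; the division is always exact
    c = 1
    for k in range(1, n + 1):
        c = c * (4 * k - 2) // (k + 1)
    return c

assert [catalan(k) for k in range(6)] == [1, 1, 2, 5, 14, 42]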
• 2018-06-04 - boyer-moore
The Boyer-Moore algorithm for finding the majority of a sequence of elements falls in the category of "very clever algorithms".
int majorityElement(vector<int>& xs) {
    int count = 0;     // votes for the current candidate
    int maj = xs[0];   // current majority candidate
    for (auto x : xs) {
        if (x == maj) count++;          // same as candidate: add a vote
        else if (count == 0) maj = x;   // no votes left: adopt x as the new candidate
        else count--;                   // different element: cancel out one vote
    }
    return maj;   // the majority element, assuming one exists
}
• 2018-05-30 - how-to-learn-on-your-own
Roger Grosse's post How to learn on your own (2015) is an excellent modern guide on how to learn and research technical stuff (especially machine learning and maths) on one's own.
• 2018-05-25 - 2048-mdp
This post models 2048 as an MDP and solves it using policy iteration and backward induction.
• 2018-05-22 - ats
ATS (Applied Type System) is a programming language designed to unify programming with formal specification. ATS has support for combining theorem proving with practical programming through the use of advanced type systems. A past version of The Computer Language Benchmarks Game has demonstrated that the performance of ATS is comparable to that of the C and C++ programming languages. By using theorem proving and strict type checking, the compiler can detect and prove that its implemented functions are not susceptible to bugs such as division by zero, memory leaks, buffer overflow, and other forms of memory corruption by verifying pointer arithmetic and reference counting before the program compiles. Additionally, by using the integrated theorem-proving system of ATS (ATS/LF), the programmer may make use of static constructs that are intertwined with the operative code to prove that a function attains its specification.
• 2018-05-20 - bostoncalling
(5-second fame) I sent a picture of my kitchen sink to BBC and got mentioned in the latest Boston Calling episode (listen at 25:54).
• 2018-05-18 - colah-blog
colah's blog has a cool feature that allows you to comment on any paragraph of a blog post. Here's an example. If it is doable on a static site hosted on Github pages, I suppose it shouldn't be too hard to implement. This also seems to work more seamlessly than Fermat's Library, because the latter has to embed pdfs in webpages. Now fantasy time: imagine that one day arXiv shows html versions of papers (through author uploading or conversion from TeX) with this feature.
• 2018-05-15 - random-forests
Stanford Lagunita's statistical learning course has some excellent lectures on random forests. It starts with explanations of decision trees, followed by bagged trees and random forests, and ends with boosting. From these lectures it seems that:
1. The term "predictors" in statistical learning = "features" in machine learning.
2. The main idea of random forests - dropping predictors for individual trees and aggregating by majority vote or average - is the same as the idea of dropout in neural networks, where a proportion of neurons in the hidden layers are dropped temporarily during different minibatches of training, effectively averaging over an ensemble of subnetworks. Both tricks are used as regularisations, i.e. to reduce the variance. The only difference is: in random forests, all but roughly the square root of the total number of features are dropped for each tree, whereas the dropout ratio in neural networks is usually a half.
By the way, here's a comparison between statistical learning and machine learning from the slides of the Statistical Learning course:
• 2018-05-14 - open-review-net
Open peer review means peer review process where communications e.g. comments and responses are public.
Like SciPost mentioned in my post, OpenReview.net is an example of open peer review in research. It looks like their focus is machine learning. Their about page states their mission, and here's an example where you can click on each entry to see what it is like. We definitely need this in the maths research community.
• 2018-05-11 - rnn-fsm
Related to a previous micropost.
These slides from Toronto are a nice introduction to RNN (recurrent neural network) from a computational point of view. It states that RNN can simulate any FSM (finite state machine, a.k.a. finite automata abbr. FA) with a toy example computing the parity of a binary string.
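To make that toy example concrete, here is a hand-wired (not learned) recurrent cell whose transition function is a tiny threshold network computing XOR, so a single hidden unit tracks the parity state - a sketch of mine, not code from the slides:

def step(z):                 # Heaviside threshold unit
    return 1.0 if z > 0 else 0.0

def rnn_parity(bits):
    h = 0.0                  # hidden state = parity of the bits seen so far
    for x in bits:
        u_or  = step(h + x - 0.5)          # OR(h, x)
        u_and = step(h + x - 1.5)          # AND(h, x)
        h     = step(u_or - u_and - 0.5)   # OR and not AND, i.e. XOR(h, x)
    return int(h)

assert rnn_parity([1, 0, 1, 1]) == 1
assert rnn_parity([1, 1, 0, 0]) == 0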
Goodfellow et. al.'s book (see page 372 and 374) goes one step further, stating that RNN with a hidden-to-hidden layer can simulate Turing machines, and not only that, but also the universal Turing machine abbr. UTM (the book referenced Siegelmann-Sontag), a property not shared by the weaker network where the hidden-to-hidden layer is replaced by an output-to-hidden layer (page 376).
By the way, the RNN with a hidden-to-hidden layer has the same architecture as the so-called linear dynamical system mentioned in Hinton's video.
From what I have learned, the universality of RNN and feedforward networks are therefore due to different arguments, the former coming from Turing machines and the latter from an analytical view of approximation by step functions.
• 2018-05-10 - math-writing-decoupling
One way to write readable mathematics is to decouple concepts. One idea is the following template. First write a toy example with all the important components present in this example, then analyse each component individually and elaborate how (perhaps more complex) variations of the component can extend the toy example and induce more complex or powerful versions of the toy example. Through such incremental development, one should be able to arrive at any result in cutting edge research after a pleasant journey.
It's a bit like the UNIX philosophy, where you have a basic system of modules like IO, memory management, graphics etc, and modify / improve each module individually (H/t NAND2Tetris).
The book Neural networks and deep learning by Michael Nielsen is an example of such an approach. It begins the journey with a very simple neural net with one hidden layer, no regularisation, and sigmoid activations. It then analyses each component, including cost functions, the back propagation algorithm, the activation functions, regularisation and the overall architecture (from fully connected to CNN), individually and improves the toy example incrementally. Over the course of the book, the accuracy on the MNIST example grows incrementally from 95.42% to 99.67%.
• 2018-05-09 - neural-nets-activation
What makes the rectified linear activation function better than the sigmoid or tanh functions? At present, we have a poor understanding of the answer to this question. Indeed, rectified linear units have only begun to be widely used in the past few years. The reason for that recent adoption is empirical: a few people tried rectified linear units, often on the basis of hunches or heuristic arguments. They got good results classifying benchmark data sets, and the practice has spread. In an ideal world we'd have a theory telling us which activation function to pick for which application. But at present we're a long way from such a world. I should not be at all surprised if further major improvements can be obtained by an even better choice of activation function. And I also expect that in coming decades a powerful theory of activation functions will be developed. Today, we still have to rely on poorly understood rules of thumb and experience.
Michael Nielsen, Neural networks and deep learning
• 2018-05-09 - neural-turing-machine
One way RNNs are currently being used is to connect neural networks more closely to traditional ways of thinking about algorithms, ways of thinking based on concepts such as Turing machines and (conventional) programming languages. A 2014 paper developed an RNN which could take as input a character-by-character description of a (very, very simple!) Python program, and use that description to predict the output. Informally, the network is learning to "understand" certain Python programs. A second paper, also from 2014, used RNNs as a starting point to develop what they called a neural Turing machine (NTM). This is a universal computer whose entire structure can be trained using gradient descent. They trained their NTM to infer algorithms for several simple problems, such as sorting and copying.
As it stands, these are extremely simple toy models. Learning to execute the Python program print(398345+42598) doesn't make a network into a full-fledged Python interpreter! It's not clear how much further it will be possible to push the ideas. Still, the results are intriguing. Historically, neural networks have done well at pattern recognition problems where conventional algorithmic approaches have trouble. Vice versa, conventional algorithmic approaches are good at solving problems that neural nets aren't so good at. No-one today implements a web server or a database program using a neural network! It'd be great to develop unified models that integrate the strengths of both neural networks and more traditional approaches to algorithms. RNNs and ideas inspired by RNNs may help us do that.
Michael Nielsen, Neural networks and deep learning
• 2018-05-08 - nlp-arxiv
Primer Science is a tool by a startup called Primer that uses NLP to summarize contents (but not single papers, yet) on arxiv. A developer of this tool predicts in an interview that progress on AI's ability to extract meanings from AI research papers will be the biggest accelerant on AI research.
• 2018-05-08 - neural-nets-regularization
no-one has yet developed an entirely convincing theoretical explanation for why regularization helps networks generalize. Indeed, researchers continue to write papers where they try different approaches to regularization, compare them to see which works better, and attempt to understand why different approaches work better or worse. And so you can view regularization as something of a kludge. While it often helps, we don't have an entirely satisfactory systematic understanding of what's going on, merely incomplete heuristics and rules of thumb.
There's a deeper set of issues here, issues which go to the heart of science. It's the question of how we generalize. Regularization may give us a computational magic wand that helps our networks generalize better, but it doesn't give us a principled understanding of how generalization works, nor of what the best approach is.
Michael Nielsen, Neural networks and deep learning
• 2018-05-08 - sql-injection-video
Computerphile has some brilliant educational videos on computer science, like a demo of SQL injection, a toy example of the lambda calculus, and explaining the Y combinator.
• 2018-05-07 - learning-knowledge-graph-reddit-journal-club
It is a natural idea to look for ways to learn things like going through a skill tree in a computer RPG.
For example I made a DAG for juggling.
Websites like Knowen and Metacademy explore this idea with added flavour of open collaboration.
The design of Metacademy looks quite promising. It also has a nice tagline: "your package manager for knowledge".
There are so so many tools to assist learning / research / knowledge sharing today, and we should keep experimenting, in the hope that eventually one of them will scale.
On another note, I often complain about the lack of a place to discuss math research online, but today I found on Reddit some journal clubs on machine learning: 1, 2. If only we had this for maths. On the other hand r/math does have some interesting recurring threads as well: Everything about X and What Are You Working On?. Hopefully these threads can last for years to come.
• 2018-05-02 - simple-solution-lack-of-math-rendering
The lack of maths rendering in major online communication platforms like instant messaging, email or Github has been a minor obsession of mine for quite a while, as I saw it as a big factor preventing people from talking more maths online. But today I realised this is totally a non-issue. Just do what people on IRC have been doing since the inception of the universe: use a (latex) pastebin.
Neural networks are one of the most beautiful programming paradigms ever invented. In the conventional approach to programming, we tell the computer what to do, breaking big problems up into many small, precisely defined tasks that the computer can easily perform. By contrast, in a neural network we don't tell the computer how to solve our problem. Instead, it learns from observational data, figuring out its own solution to the problem at hand.
Michael Nielsen - What this book (Neural Networks and Deep Learning) is about
But, users have learned to accommodate to Google not the other way around. We know what kinds of things we can type into Google and what we can't and we keep our searches to things that Google is likely to help with. We know we are looking for texts and not answers to start a conversation with an entity that knows what we really need to talk about. People learn from conversation and Google can't have one. It can pretend to have one using Siri but really those conversations tend to get tiresome when you are past asking about where to eat.
Roger Schank - Fraudulent claims made by IBM about Watson and AI
• 2018-04-06 - hacker-ethics
• Access to computers—and anything that might teach you something about the way the world works—should be unlimited and total. Always yield to the Hands-On Imperative!
• All information should be free.
• Mistrust Authority—Promote Decentralization.
• Hackers should be judged by their hacking, not bogus criteria such as degrees, age, race, or position.
• You can create art and beauty on a computer.
• Computers can change your life for the better.
The Hacker Ethic, Hackers: Heroes of Computer Revolution, by Steven Levy
• 2018-03-23 - static-site-generator
"Static site generators seem like music databases, in that everyone eventually writes their own crappy one that just barely scratches the itch they had (and I'm no exception)."
_david__@hackernews
So did I.
https://acm.ecnu.edu.cn/problem/1703/
# 1703. Non-divisible 2-3 Power Sums
Every positive integer $N$ can be written in at least one way as a sum of terms of the form $(2^a)(3^b)$ where no term in the sum exactly divides any other term in the sum. For example:
• $1 = (2^0)(3^0)$,
• $7 = (2^2)(3^0) + (2^0)(3^1)$,
• $31 = (2^4)(3^0) + (2^0)(3^2) + (2^1)(3^1) = (2^2) + (3^3)$
Note from the example of $31$ that the representation is not unique.
Write a program which takes as input a positive integer $N$ and outputs a representation of $N$ as a sum of terms of the form $(2^a)(3^b)$.
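As a sanity check on the examples above: a representation is valid when the terms sum to $N$ and no term exactly divides another, which a few lines of Python can verify (an illustration only, not part of the required program):

def valid_representation(n, terms):
    # terms is a list of (a, b) pairs standing for (2^a)(3^b)
    values = [2**a * 3**b for a, b in terms]
    no_divides = all(i == j or values[i] % values[j] != 0
                     for i in range(len(values)) for j in range(len(values)))
    return sum(values) == n and no_divides

assert valid_representation(31, [(4, 0), (0, 2), (1, 1)])   # 16 + 9 + 6
assert valid_representation(31, [(2, 0), (0, 3)])           # 4 + 27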
### Input
The first line of input contains a single integer $C$ ($1 \le C \le 1000$) which is the number of datasets that follow.
Each dataset consists of a single line of input containing a single integer $N$ ($1 \le N < 2^{31}$), which is the number to be represented as a sum of terms of the form $(2^a)(3^b)$.
### Output
For each dataset, the output will be a single line consisting of: The dataset number, a single space, the number of terms in your sum as a decimal integer followed by a single space followed by representations of the terms in the form x y with terms separated by a single space. x is the power of $2$ in the term and y is the power of $3$ in the term.
### Samples
Input
6
1
7
31
7776
531441
123456789
Output
1 1 0 0
2 2 0 1 2 0
3 2 0 3 2 0
4 1 5 5
5 1 0 12
6 8 0 16 2 15 3 13 4 12 7 8 9 6 10 5 15 2
4 people solved it, 12 have tried.
4 submissions accepted out of 49 total.
8.8 EMB reward.
https://cstheory.stackexchange.com/questions/41810/locally-nameless-representation-normal-order-opening-with-a-bound-variable
# Locally-nameless representation: normal order & opening with a bound variable
This question concerns the representation used in Arthur Charguéraud's paper “The locally nameless representation” and is somewhat of a follow-up to this question, which asked about the normalization of terms under a binder, using the following term as an example (for technical reasons, I use one-based indices):
$$\lambda((\lambda\lambda(2 \; 1)) \; 1)$$
Beta-reductions in this representations supposedly (and contrary to the answer to the linked question) are done without need of introducing fresh variable names, because we can replace a variable opening + substitution combination
$$((\lambda \; t ) \; u) \longrightarrow_\beta [x \mapsto u] \; t^{x} \qquad (x \text{ fresh in } t)$$
by simple “term” opening
$$((\lambda \; t ) \; u) \longrightarrow_\beta t^{u},$$
where $$t^{u} = \mathtt{open}(1, u, t)$$ (see p. 10 in the paper). open can be implemented as follows, trying to exactly follow the definition there and Charguéraud's reference implementation in Coq:
open k u (Bvar i) = if k == i then u else (Bvar i)
open k u (Fvar x) = Fvar x
open k u (Abs t1) = Abs (open (k + 1) u t1)
open k u (App t1 t2) = App (open k u t1) (open k u t2)
Now, reducing the example term in (what I think is) normal order will result in the following steps:
\begin{align*} \mathtt{reduce}\; \lambda((\lambda\lambda(2 \; 1)) \; 1) &= \lambda(\mathtt{reduce} \; ((\lambda\lambda(2 \; 1)) \; 1)) \\ &= \lambda(\mathtt{open} \; 1 \; 1 \; (\lambda(2 \; 1))) \\ &= \lambda( \lambda (\mathtt{open} \; 2 \; 1 \; (2 \; 1))) \\ &= \lambda( \lambda ((\mathtt{open} \; 2 \; 1 \; 2) \; (\mathtt{open} \; 2 \; 1 \; 1))) \\ &= \lambda \lambda (1 \; 1) \end{align*}
This is, however, the wrong result -- the answer should have been $$\lambda \lambda (2 \; 1)$$ (essentially, just eta converting the term). The problem lies in the second-last line, where the bound $$1$$ from outside replaces the inner $$2$$ without being properly changed.
Apparently, open in this form is not made for normal order, due to the fact that, unlike evaluation in the de Bruijn representation, the substituted term is not shifted. In fact, I found the following comment on top of Charguéraud's implementation of open (called open_rec there):
We make several simplifying assumptions in defining [open_rec]. First, we assume that the argument [u] is locally closed. This assumption simplifies the implementation since we do not need to shift indices in [u] when passing under a binder. Second, we assume that this function is initially called with index zero and that zero is the only unbound index in the term. This eliminates the need to possibly subtract one in the case of indices.
Of course, the assumption of local closedness holds only when we do something like call-by-name, where terms under binders are not reduced. My solution was to change open to the following open', which “inlines” shifting of bound variables when they are inserted (which can be done easily, since we keep track of the number of binders anyway with k):
open' k (Bvar j) (Bvar i) = if k == i then Bvar (j + k - 1) else (Bvar i)
open' k u (Bvar i) = if k == i then u else (Bvar i)
open' k u (Fvar x) = Fvar x
open' k u (Abs t1) = Abs (open' (k + 1) u t1)
open' k u (App t1 t2) = App (open' k u t1) (open' k u t2)
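As a quick check that open' at least handles the motivating example, here is a Python transliteration of it (my own, not from the paper), with terms as tuples:

def open_(k, u, t):
    tag = t[0]
    if tag == 'Bvar':
        if k == t[1]:
            # shift the index when the substituted term is itself a bound variable
            return ('Bvar', u[1] + k - 1) if u[0] == 'Bvar' else u
        return t
    if tag == 'Fvar':
        return t
    if tag == 'Abs':
        return ('Abs', open_(k + 1, u, t[1]))
    return ('App', open_(k, u, t[1]), open_(k, u, t[2]))

# the redex inside λ((λλ(2 1)) 1):  open' 1 1 (λ(2 1))
body = ('Abs', ('App', ('Bvar', 2), ('Bvar', 1)))
print(open_(1, ('Bvar', 1), body))
# ('Abs', ('App', ('Bvar', 2), ('Bvar', 1))), i.e. λ(2 1), giving λλ(2 1) overall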
The questions I have are:
• Is my reasoning correct?
• Does open' correctly fix the problem when the assumptions do not hold (i.e., when used in normal order reduction)?
• Does open' preserve the properties and correctness of open when the original assumptions hold?
https://www.varsitytutors.com/common_core_3rd_grade_math-help/solve-real-world-and-mathematical-problems-involving-area-ccss-math-content-3-md-c-7b
Common Core: 3rd Grade Math : Solve Real World and Mathematical Problems Involving Area: CCSS.Math.Content.3.MD.C.7b
Example Questions
Example Question #1 : Solve Real World And Mathematical Problems Involving Area: Ccss.Math.Content.3.Md.C.7b
What is the area of the rectangle?
Explanation:
The formula to find area is Area = length × width. We are given the length and the width from the problem, so we can plug those values into our equation and solve.
*Area is the number of square units inside a shape, which is why area is always written with square units.
https://math.stackexchange.com/questions/2411649/proof-that-any-fractional-number-squared-is-also-fractional
# Proof that any fractional number squared is also fractional? [duplicate]
Let a belong to the set of (Rationals - Integers)
How would you prove the following:
a squared belongs to the set of (Rationals - Integers)
## marked as duplicate by Matthew Conroy, dxiv, CIJ, Bill Dubuque (number-theory) Aug 30 '17 at 22:59
This question has been asked before and already has an answer. If those answers do not fully address your question, please ask a new question.
• The body of the question doesn't match the title. Which one are you asking? In case it matters, remember that $\,i^2=-1\,$, and $\big(\sqrt{2}\big)^2=2$. – dxiv Aug 30 '17 at 22:42
• Sorry I meant set of Complex minus set of Integers – user448724 Aug 30 '17 at 22:43
• $i^2=-1$ your statement is false. – hamam_Abdallah Aug 30 '17 at 22:46
• Ah yes of course. I'm going to update the question to be irrational set minus integer. – user448724 Aug 30 '17 at 22:49
• If the dupes don't answer your question then please clarify why and we can reopen the question. – Bill Dubuque Aug 30 '17 at 23:01
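For the rationals-minus-integers reading of the title, one standard argument (a sketch, not taken from the linked duplicates): write $a = p/q$ in lowest terms with $q > 1$; then $a^2 = p^2/q^2$ is again in lowest terms (any prime dividing $q^2$ divides $q$, hence not $p$, hence not $p^2$), and $q^2 > 1$, so $a^2$ is a rational that is not an integer.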
http://physics.stackexchange.com/questions/995/simulate-a-physical-impact-of-objects-made-of-finite-small-elements/1022
Simulate a physical impact of objects made of finite, small elements
I want to simulate an impact between two bodies according to gravity, and eventually considering other forces to stick matter together. I'd like to use python to do this, but I am open to alternatives. What kind of tools or libraries can I use to perform this task ?
Something as game physics engine? (Newton laws+collision detection with friction) – mbq Nov 17 '10 at 12:39
@mbq: do you know a good and easy one, possibly usable via python ? – Stefano Borini Nov 17 '10 at 12:40
I have to say that the phrase "particle physics" in your title is confusing. I was wondering what model you were going to use for pion production... – dmckee Nov 17 '10 at 18:57
@dmckee : you are absolutely right – Stefano Borini Nov 18 '10 at 23:38
Long time ago I was doing some programming with ODE. There is also Bullet engine which I only heard about. I guess both of them might have python bindings. But certainly do use some tools and forget about writing a reasonable (in the sense of capable of simulating anything resembling reality) engine yourself, it's not worth it. Just google for engines I am sure you'll find even more of them. And also try asking at StackOverflow, as programmers use these engines much more often than physicists, I'd think (e.g. in games). – Marek Nov 19 '10 at 0:37
I recently did something like this, in order to simulate a system of two masses connected by a spring. Those masses lay horizontally on a frictionless plane. One of these masses got an initial impulse and thereafter the system was left alone. While the entire system (the centroid, to be precise) moves with constant velocity, the two masses are oscillating while moving forward. Here is a short ASCII drawing of the system
Initial Impulse ______ ______
----> | m1 |/\/\/\/\/\/\/\| m2 |
_____________________|____|______________|____|______________________
After writing down the differential equations, I wrote a small python program simulating the problem. This program relies on the method of small steps (also called the Euler method). Here is the corresponding wikipedia article:
http://en.wikipedia.org/wiki/Euler_method
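For reference, the Euler update it describes, for an ODE $\dot{y}=f(t,y)$ with step size $h$, is $y_{n+1} = y_n + h\, f(t_n, y_n)$, applied here to each position and velocity in turn.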
I implemented this algorithm for the problem described above and plotted the results using gnuplot:
But you are free to use any tool you like for this purpose. Here comes the source code of my small program:
#!/usr/bin/python
import os

steps = 100000
time = 100.

# Initial conditions
D = 0.9
m1 = 1.2
m2 = 0.4
v1 = 1.3
v2 = 0.
x1 = 0.
x2 = 1.
l = 1.

# Since I also tried to implement other algorithms, I specify which one to use
Euler = 1

# Euler
if Euler == 1:
    timesteps = time / steps
    # Open the files for writing the results to
    f = open('results_x1', 'w')
    f2 = open('results_x2', 'w')
    f3 = open('results_com', 'w')
    # The real calculation
    for i in range(0, steps):
        x1 = x1 + (D * (x2 - x1 - l) / m1) * (timesteps**2) + v1 * timesteps
        x2 = x2 - (D * (x2 - x1 - l) / m2) * (timesteps**2) + v2 * timesteps
        v1 = v1 + (D * (x2 - x1 - l) / m1) * timesteps
        v2 = v2 - (D * (x2 - x1 - l) / m2) * timesteps
        f.write(str(i * timesteps) + " " + str(x1) + "\n")
        f2.write(str(i * timesteps) + " " + str(x2) + "\n")
        f3.write(str(i * timesteps) + " " + str((x1 * m1 + x2 * m2) / (m1 + m2)) + "\n")
    f.close()
    f2.close()
    f3.close()
Of course there are better algorithms than the Euler one, but this one is definitely the easiest to implement (I failed at implementing more advanced algorithms ;-)).
So these are the steps you should probably follow:
• Write down the differential equations for you problem
• Understand the Euler Method
• Take my code as a reference point and modify it for your problem
I know that this is quite an extensive topic and that my answer is therefore just superficial. Just tell me what you want to know more about, and I will try to add corresponding comments ;-)
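As a complement to the hand-rolled Euler loop above, here is a minimal sketch of the same two-mass/spring system using scipy's solve_ivp, reusing the parameter values from the answer (treat it as an illustration, not part of the original code):
import numpy as np
from scipy.integrate import solve_ivp

# Parameters from the answer above
D, m1, m2, l = 0.9, 1.2, 0.4, 1.0        # spring constant, masses, rest length

def rhs(t, y):
    # y = [x1, x2, v1, v2]
    x1, x2, v1, v2 = y
    f = D * (x2 - x1 - l)                # spring force (positive pulls m1 toward m2)
    return [v1, v2, f / m1, -f / m2]

sol = solve_ivp(rhs, t_span=(0.0, 100.0), y0=[0.0, 1.0, 1.3, 0.0],
                t_eval=np.linspace(0.0, 100.0, 2000))

x1, x2 = sol.y[0], sol.y[1]
com = (m1 * x1 + m2 * x2) / (m1 + m2)    # centre of mass: moves with constant velocity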
-
A nice alternative to the Euler Method is Verlet Integration, which tends to be more stable and accurate than the Euler method. – Justin L. Dec 26 '10 at 9:12
wow, jeah, hadn't heard of that. Seems really nice. If I have some spare time, I will have a look into it, thanks. – ftiaronsem Dec 26 '10 at 11:43
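Since Verlet integration came up in this exchange, here is a minimal velocity-Verlet sketch in Python (an illustration of the scheme, reusing the spring system from the answer above):
import numpy as np

def velocity_verlet(accel, x0, v0, dt, steps):
    # accel(x) returns the acceleration vector for positions x
    x = np.asarray(x0, dtype=float)
    v = np.asarray(v0, dtype=float)
    a = accel(x)
    xs = [x.copy()]
    for _ in range(steps):
        x = x + v * dt + 0.5 * a * dt**2      # position update
        a_new = accel(x)                      # acceleration at the new positions
        v = v + 0.5 * (a + a_new) * dt        # velocity update with averaged acceleration
        a = a_new
        xs.append(x.copy())
    return np.array(xs)

# Example: the two-mass/spring system with D = 0.9, m1 = 1.2, m2 = 0.4, l = 1
def spring_accel(x):
    f = 0.9 * (x[1] - x[0] - 1.0)
    return np.array([f / 1.2, -f / 0.4])

positions = velocity_verlet(spring_accel, x0=[0.0, 1.0], v0=[1.3, 0.0], dt=0.001, steps=100000)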
Check out the site of Ron Fedkiw; it is a good starting point with comprehensive set of keywords.
-
it depends on what kind of simulation you are trying to build:
if the purpose of your simulation is to build a simulation model that, for example, avoids the experimental noise, maybe with a complex dynamics algorithm and so on, I think C or C++ are the best choices..
If on the other hand you want to create a quick simulation with graphical output and built-in analysis tools (maybe even for didactic purposes), python is your choice! in this case I suggest you check out the Enthought Python Distribution.. for academic use it is freeware and it has a built-in release of scipy.
-
ok, but I'm not asking about a scientific distribution of python. I don't want to reimplement body dynamics from scratch. – Stefano Borini Nov 17 '10 at 12:51
|
2015-08-31 14:00:55
|
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 1, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.558897078037262, "perplexity": 1537.6595986014568}, "config": {"markdown_headings": false, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2015-35/segments/1440644066017.21/warc/CC-MAIN-20150827025426-00098-ip-10-171-96-226.ec2.internal.warc.gz"}
|
https://cstheory.stackexchange.com/questions/11414/b%C3%BCchi-automata-with-acceptance-strategy
|
# Büchi automata with acceptance strategy
## The problem
Let $A=\langle \Sigma, Q, q_0,F,\Delta\rangle$ be a Büchi automaton, recognizing a language $L\subseteq\Sigma^\omega$. We assume that $A$ has an acceptance strategy in the following sense : there is a function $\sigma:\Sigma^*\to Q$ which can be used to pilot runs of $A$. We formalize this by the following conditions :
• $\sigma(\epsilon)=q_0$
• for all $u\in\Sigma^*$ and $a\in\Sigma$, $(\sigma(u),a,\sigma(ua))\in\Delta$
• for all $w=a_0a_1a_2\dots\in L$, the run piloted by $\sigma$ is accepting, i.e. the sequence $\sigma(\epsilon),\sigma(a_0),\sigma(a_0a_1),\sigma(a_0a_1a_2),\dots$ has infinitely many elements in $F$.
To sum up the conditions: $A$ can accept any word of its language without having to guess anything about the future.
Then, under these assumptions on $A$, is it true that $A$ can be determinized just by removing transitions ? In other words, can we always choose the next transition depending only on the current state and letter ? Is there any reference on the subject ? The same question can then be asked on co-Büchi automata, and more generally on parity automata.
## What is known
Here are some partial results.
First, we can restrict $\sigma$ to nondeterministic choices between states having the same residual. Indeed, if $L(q)$ is the language accepted from $q$, an accepting strategy cannot choose $q_1$ over $q_2$ at some point if there is $w\in L(q_2)\setminus L(q_1)$.
Notice that the remaining choices do matter, so despite the intuition, this is not enough to get rid of the nondeterminism. This is because it is possible to stay ad infinitum in a good residual (i.e. the remainder of the word is in the residual), but reject the word because Büchi states are not visited infinitely often. This is the main difficulty of the problem: an infinite run can be wrong without making any fatal mistake at some point.
Second, the problem is solved if $L=\Sigma^\omega$, i.e. all words are accepted by $A$. In this case, we can view $A$ as a Büchi game where Player I chooses input letters and Player II chooses transitions. Then we can use positional determinacy of Büchi games to extract a positional strategy for Player II. This arguments even works in the more general case of parity automata. The difficulty of this problem comes from the fact that some words are not in $L$, and in this case the strategy $\sigma$ can have any behaviour.
Third, here is a proof that under the assumptions, the language $L$ is in the class of deterministic Büchi languages, witnessed by an automaton with states $2^Q$. Notice that this implies that $L$ cannot be any $\omega$-regular language, for instance if $L=(a+b)^*a^\omega$, no strategy $\sigma$ matching the conditions can exist.
We start by restricting the transitions according to the first remark : the only choices we can make do not impact on the residual language. We only take successors with the maximum residual, they must exist because $\sigma$ exists.
Then, we build $A'=\langle \Sigma, 2^Q, \{ q_0\},F',\Delta'\rangle$ in the following way. $A'$ is the subset automaton of $A$, but every time a Büchi state $q$ appears in the component, all other states can be removed from the component, and we start again from the singleton $\{ q\}$. Then we can set $F'=\{\{ q\} : q\in F\}$. We can verify that $A'$ is a deterministic Büchi automaton for $L$.
Finally, by putting together the second and the third remarks, we can always obtain a finite memory-strategy $\sigma$, by using a positional strategy for Player II in the game $A\times A'$ where Player I chooses letters, Player II chooses transitions in $A$ and wins if $A$ accepts whenever $A'$ accepts.
• Write $A_\sigma$ for the (deterministic) automaton with transitions removed. Let $w=w_0w_1\cdots$ be a word in $L$. Then by your conditions $\sigma(w_0)\sigma(w_0w_1)\cdots$ is a run of $A_\sigma$ and is accepting, thus $L\subseteq L(A_\sigma)$. Conversely, any accepting run of $A_\sigma$ is in particular an accepting run of $A$, thus $L(A_\sigma)\subseteq L$. – Sylvain May 9 '12 at 15:01
• @Sylvain: Which transitions are removed? – Dave Clarke May 9 '12 at 15:02
• I'm assuming you call $A_\sigma$ the automaton $A$ restricted to transitions used in the strategy $\sigma$. The problem is you don't have any guarantee that $A_\sigma$ is deterministic. For instance assume $\sigma(a)=\sigma(\epsilon)=q_0$ and $\sigma(aa)=q_1$, then $A_\sigma$ is not deterministic. – Denis May 9 '12 at 15:06
• I'm also posting it on mathOverflow, with more details on the previous work here: mathoverflow.net/questions/97007/…, is it ok ? – Denis May 15 '12 at 14:51
• Generally cross posting is not allowed, unless one has not received an answer after a sufficient amount of time. Given that there is an open bounty on this question, I would wait a few days. You can delete the other posting and open it in a few days. (Also, the other posting should link to this one.) – Dave Clarke May 15 '12 at 19:19
• I know this, the question is about a special class of Büchi automata, namely the one that admit acceptance strategies $\sigma$. I already showed that this class has same power than the class of deterministic Büchi automata, and I described a simplified determinization procedure (in the "what is known" section). The conjecture is that there is a much simpler determinization procedure for this class, which consists just in removing some transitions. – Denis May 15 '12 at 17:40
|
2019-06-27 12:41:32
|
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.8127381801605225, "perplexity": 262.32818421731065}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 20, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2019-26/segments/1560628001138.93/warc/CC-MAIN-20190627115818-20190627141818-00055.warc.gz"}
|
https://ai.stackexchange.com/questions/17833/what-is-the-mean-in-the-variational-auto-encoder/17835
|
# What is the mean in the variational auto-encoder?
Here's a diagram of a variational auto-encoder.
There are 2 nodes before the sample (encoding vector). One is the mean, one is the standard deviation. The mean one is confusing.
Is it the mean of values or is it the mean deviation?
$$\text{mean} = \dfrac{X_1+\dots+X_n}{N}$$
$$\text{mean deviation} = \dfrac{|X_1|+\dots+|X_n|}{N}$$
• Hello, I'm no expert but I assume it is the mean of the values. Usually you need the mean and the standard deviation to describe a distribution. – razvanc92 Feb 4 '20 at 9:19
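To make the distinction concrete, here is a tiny numpy sketch (the numbers are made up for illustration): the "mean" node is the per-dimension mean vector of the approximate posterior produced by the encoder, not a mean deviation, and the sample node draws z = mean + std * noise.
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical encoder outputs for one input: one mean and one standard
# deviation per latent dimension (in a real VAE these come from the network).
mu = np.array([0.3, -1.2, 0.7])       # mean vector of q(z|x)
sigma = np.array([0.5, 0.1, 0.9])     # standard deviation vector of q(z|x)

# Reparameterization: z = mu + sigma * eps with eps ~ N(0, I)
eps = rng.standard_normal(mu.shape)
z = mu + sigma * eps                  # the sampled encoding vector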
|
2021-05-18 03:50:04
|
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 2, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.887552797794342, "perplexity": 418.3315496291133}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2021-21/segments/1620243989820.78/warc/CC-MAIN-20210518033148-20210518063148-00202.warc.gz"}
|
https://nuxulu.com/2022-06-20-unable-to-trace-error/
|
# Unable to start trace, the required event providers were not found. Contact your system administrator
Contents
The error pops up when you start a trace: Unable to start trace, the required event providers were not found. Contact your system administrator.
Cause: somehow the Dynamics event providers are missing from Event Viewer.
Resolution: execute the script below from C:\Temp in PowerShell.
$AOSSetupETWManifestDir = "K:\AosService\WebRoot\Monitoring"
foreach ($manifestFile in Get-ChildItem -Path $AOSSetupETWManifestDir\*.man | select-object -Property BaseName,Name)
{
    $dllFile = ""
    if ((Test-Path "$AOSSetupETWManifestDir\$($manifestFile.BaseName).Instrumentation.dll"))
    {
        $dllFile = "$AOSSetupETWManifestDir\$($manifestFile.BaseName).Instrumentation.dll"
    }
    elseif ((Test-Path "$AOSSetupETWManifestDir\$($manifestFile.BaseName)Resource.dll"))
    {
        $dllFile = "$AOSSetupETWManifestDir\$($manifestFile.BaseName)Resource.dll"
    }
    elseif ((Test-Path "$AOSSetupETWManifestDir\$($manifestFile.BaseName).dll"))
    {
        $dllFile = "$AOSSetupETWManifestDir\$($manifestFile.BaseName).dll"
    }
    else
    {
        Write-Host "Warn : Skipping $AOSSetupETWManifestDir\$($manifestFile.Name) as DLL not found"
        Continue
    }
    Write-Host "Installing $AOSSetupETWManifestDir\$($manifestFile.Name) using $dllFile"
    wevtutil.exe im "$AOSSetupETWManifestDir\$($manifestFile.Name)" /rf:"$dllFile" /mf:"$dllFile"
    Write-Host "Finished installing $AOSSetupETWManifestDir\$($manifestFile.Name) `n`n"
}
Then restart the VM via LCS or the Azure portal; you will get Dynamics back in Event Viewer and be able to start the trace normally.
|
2023-02-06 13:32:10
|
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 0, "mathjax_asciimath": 1, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 1, "equation": 1, "x-ck12": 0, "texerror": 0, "math_score": 0.19499832391738892, "perplexity": 2310.7371567526884}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.3, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2023-06/segments/1674764500339.37/warc/CC-MAIN-20230206113934-20230206143934-00447.warc.gz"}
|
http://liberzon.csl.illinois.edu/teaching/cvoc/node71.html
|
## 4.2.6 Key topological lemma
Up until now, we have not yet used the fact that is an optimal control and is an optimal trajectory. As discussed in Section 4.2.1 and demonstrated in Figure 4.2, optimality means that no other trajectory corresponding to another control can reach the line (the vertical line through in the -space) at a point below . Since the terminal cone is a linear approximation of the set of points that we can reach by applying perturbed controls, we expect that the terminal cone should "face upward."
To formalize this observation, consider the vector
(4.26)
and let be the ray generated by this vector (which points downward) originating at . Optimality suggests that should be directed outside of , a situation illustrated in Figure 4.9. Since is only an approximation, the correct claim is actually slightly weaker.
In other words, can in principle touch along the boundary, but it cannot lie inside it. We note that since is a cone, intersects its interior if and only if all points of except are interior points of .
Let us see what would happen if the statement of the lemma were false and were inside . By construction of the terminal cone, as explained at the end of Section 4.2.5, there would exist a (spatial plus temporal) perturbation of such that the terminal point of the perturbed trajectory would be given by
for some (arbitrary) . Writing this out in terms of the components of and recalling the definition (4.26) of and the relation (4.6) between and the cost, we obtain
where is the perturbed control that generates . Presently there is no direct contradiction with optimality of yet, because the terminal point of the perturbed trajectory is different from the prescribed terminal point , i.e., need not hit the target set. Thus we see that although Lemma 4.1 certainly seems plausible, it is not obvious.
Let us try to build a more convincing argument in support of Lemma 4.1. If the statement of the lemma is false, then we can pick a point on the ray below such that is contained in together with a ball of some positive radius around it; let us denote this ball by . For a suitable value of , we have . Since the points in belong to , they are of the form (4.25) and can be written as where the vectors are first-order perturbations of the terminal point arising from control perturbations constructed earlier. We know that the actual terminal points of trajectories corresponding to these control perturbations are given by
(4.27)
We denote the set of these terminal points by ; we can think of it as a "warped" version of , since it is away from .
In the above discussion, was fixed; we now make it tend to 0. The point , which we relabel as to emphasize its dependence on , will approach along the ray as (here is the same fixed positive number as in the original expression for ). The ball , which now stands for the ball of radius around , will still belong to and consist of the points for each value of . Terminal points of perturbed state trajectories (the perturbations being parameterized by ) will still generate a "warped ball" consisting of points of the form (4.27). Figure 4.10 should help visualize this construction.
Since the center of is on below , the radius of is , and the "warping" is of order , for sufficiently small the set will still intersect the ray below . But this means that there exists a perturbed trajectory which hits the desired terminal point with a lower value of the cost. The resulting contradiction proves the lemma.
The above claim about a nonempty intersection between and seems intuitively obvious. The original proof of the maximum principle in [PBGM62] states that this fact is obvious, but then adds a lengthy footnote explaining that a rigorous proof can be given using topological arguments. A conceivable scenario that must be ruled out is one in which the set has a hole (or dent) in it and the ray goes through this hole. It turns out that this is indeed impossible, thanks to continuity of the "warping" map that transforms to . In fact, it can be shown that contains, for small enough, a ball centered at whose radius is of order . One quick way to prove this is by applying Brouwer's fixed point theorem (which states that a continuous map from a ball to itself must have a fixed point).
Daniel 2010-12-20
|
2022-08-13 19:04:07
|
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 1, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.8319710493087769, "perplexity": 376.70534923146016}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2022-33/segments/1659882571982.99/warc/CC-MAIN-20220813172349-20220813202349-00131.warc.gz"}
|
https://forum.azimuthproject.org/discussion/1947/how-to-discuss-exercises
|
# How to Discuss Exercises
edited April 10
Frederik Eisele has done something great: he's started lots of discussions where you can do the exercises in Chapter 1! Not all of them, but lots. Andrew Ardill then created blanks where you can add more.
If you want to see the answer to an exercise, or start a discussion of an exercise, please check the Azimuth Wiki first! Go to this wiki page:
We've already got some duplicate discussions of exercises - use this page to avoid creating more. And if you create a brand new one, please note it on this wiki page, so we can find it! Just go down to the bottom of the page and click "Edit". You can probably figure out what to do next. But if you have questions, please let me know here!
Here are the exercises for Chapter 1:
1.
edited April 9
I have added the remaining discussions for chapter 1.
I have started adding discussions for chapter 2. I did not include them in the main wiki page as that page is getting a bit unwieldy to edit.
Also I put in a link to a page the links to the discussions on the puzzles. I did not see a need to add new discussions for the puzzles as the place where the puzzles are created seems like a reasonable discussion already.
You may notice that I have taken slight liberties with the exercise descriptions, including referenced objects so that you should not need to look them up in the book.
2.
Frederick: thanks! As the main wiki page continues to expand we can start moving stuff out of there and putting it on other wiki pages, with links. For example we may someday have a wiki page "Chapter 1" or "Sketch 1" containing links to all discussions relevant to that. Luckily it's easy to do this in a step-by-step way as the need arises, while keeping some eye to consistency.
I'm saying "Chapter", e.g. in my lecture titles, while you're saying "Sketch". We may someday need to fight a duel to the death over that. Your convention is more cute but I'd need to rename a bunch of things to adhere to it.
3.
I started calling them "Sketch" as that was what DSpivak called them. I have been changing them all to "Chapter" as I revisit things.
4.
edited April 9
The Exercises mention Definitions and Equations often. Should we have a wiki page for them to be listed? That way the Exercises can provide a link to them much as in the textbook.
5.
Fredrick - that seems like a lot of work, so it's really a question of whether you want to do it... not just today, but for the whole book. (Maybe someone else will help out, but I find it's best if I don't start projects that require someone else to help out.)
6.
John, I have been putting the Definitions in the first Exercise description where it is mentioned. In subsequent Exercises I have been placing links.
7.
Nice! Thanks, that's great!
|
2018-04-20 10:20:40
|
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 1, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.3791011571884155, "perplexity": 870.3511238625397}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.3, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2018-17/segments/1524125937440.13/warc/CC-MAIN-20180420100911-20180420120911-00133.warc.gz"}
|
https://solvedlib.com/problem-2-suppose-c-is-a-curve-of-length-and-fx-y,401849
|
Problem 2 Suppose C is a curve of length $\ell$, and f(x, y) is a continuous function that is defined
Question:
Problem 2 Suppose C is a curve of length $\ell$, and f(x, y) is a continuous function that is defined on a region D that contains C and f(x,y)
|
2022-12-03 15:18:47
|
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 2, "mathjax_display_tex": 0, "mathjax_asciimath": 1, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.407349169254303, "perplexity": 7300.9447223995085}, "config": {"markdown_headings": false, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2022-49/segments/1669446710933.89/warc/CC-MAIN-20221203143925-20221203173925-00271.warc.gz"}
|
https://cstheory.stackexchange.com/questions/52435/is-nlogtime-self-low
|
# Is NLOGTIME self-low?
https://en.wikipedia.org/wiki/Low_(complexity)
Every class which is low for itself is closed under complement, provided that it is powerful enough to negate the boolean result. EXP, on the other hand, is closed under complement but is not low for itself.
NLOGTIME can negate a boolean result; it just can't pass the whole input to the oracle. The queries it can pass have size log(n), and at that size it doesn't need an oracle. NLOGTIME is also not closed under complement, due to EQUALITY.
Is this a misunderstanding on my part, or is the wiki wrong?
• There seems to be some issue: brute-forcing all branches of NLOGTIME may need polynomial time, but that's the main idea
– l4m2
Jan 31 at 17:39
• $NLOGTIME^P$ should work but I'm not sure
– l4m2
Feb 1 at 1:29
• The statement on Wikipedia is informal (to begin with, there is no uniform way how to relativize arbitrary classes with oracles), and obviously was not written with such small classes in mind. The real question is what should be a sensible definition of relativized NLOGTIME so that it can make oracle queries of size proportional to the size of the input. Note that, for example, the standard definition of (uniform) relativized $\mathrm{AC}^0$ does that; $\mathrm{AC}^0$ equals the LOGTIME hierarchy (i.e., alternating LOGTIME with $O(1)$ alternations), hence NLOGTIME is its special case. Feb 1 at 8:36
• In any case, the “correct” answer should be that NLOGTIME is not low for itself, and the smallest class $C$ such that $\mathrm{NLOGTIME}^C=C$ should be $\mathrm{AC}^0$. If your definition of NLOGTIME with oracles does not yield this conclusion, the definition is wrong. Feb 1 at 8:40
• $NLOGTIME^{P^{NLOGTIME^P}}$ should be $NLOGTIME^P$, right? And $NLOGTIME^C$ still can't pass the whole input into $C$, unless passing the whole input into the oracle is somehow enabled some other way @EmilJeřábek
– l4m2
Feb 1 at 10:05
|
2023-03-23 23:09:53
|
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.6416597366333008, "perplexity": 1070.425876188479}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2023-14/segments/1679296945218.30/warc/CC-MAIN-20230323225049-20230324015049-00172.warc.gz"}
|
http://mathhelpforum.com/algebra/202348-what-value-cd.html
|
# Math Help - what is the value of cd?
1. ## what is the value of cd?
ok i have no idea how to solve this problem. i tried to fix x to some arbitrary value and just could not get this problem to work out...
the problem says that c and d are constants in the identity $x^2 + 14x + c = (x + d)^2$ and it wants me to find the value of the product cd??? any help on this problem would be appreciated....
i even tried to factor out the right side and add the negative of the right side to see if i could get something going, but i am really stuck....
2. ## Re: what is the value of cd?
Expand out the right hand side and compare it with the left hand side...
$x^2 + 14x + c = x^2 + 2dx + d^2$
You can see that 14x = 2dx and that c = d^2 so just go from there to solve for d, then c.
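Spelling that comparison out: $2d = 14$ gives $d = 7$, then $c = d^2 = 49$, so the product is $cd = 7 \cdot 49 = 343$.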
3. ## Re: what is the value of cd?
ok i think i got it, so c = 49 and d = 7?
4. ## Re: what is the value of cd?
That's what I got
thanks!!!!
|
2014-07-24 04:42:38
|
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 1, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.7561905384063721, "perplexity": 276.91815090671633}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 5, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2014-23/segments/1405997886087.7/warc/CC-MAIN-20140722025806-00017-ip-10-33-131-23.ec2.internal.warc.gz"}
|
http://openstudy.com/updates/4dadc0f4d6938b0b6c0ba94d
|
## sarahrose Group Title can someone help me with 7th grade algebra? like, give me some problems? 3 years ago
1. 3lroy Group Title
2. sarahrose Group Title
huh?
3. INewton Group Title
Here are a few I recall from that age: Well my first (and easiest) question was to find all integer triples (x,y,z), and where n is an integer > 2, such that $x^n + y^n = z^n$ A slightly tougher one was: Prove that the set A defined as $\{ z | z \in \mathbb{C}, \zeta (z) = 0, \zeta (s) := \sum 1/n^s \}$ is a subset of the set B-union-C, where B defined as $\{ z | z = 0.5 + xi, x \in \mathbb{R} \}$ and C is defined as $\{ -2n | n \in \mathbb{N} \}$
4. sarahrose Group Title
O.O
5. sarahrose Group Title
i have no idea what any of that means
6. sarahrose Group Title
:/
|
2014-08-28 07:14:26
|
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.8374993205070496, "perplexity": 1901.7851558662214}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2014-35/segments/1408500830323.35/warc/CC-MAIN-20140820021350-00114-ip-10-180-136-8.ec2.internal.warc.gz"}
|
https://www.instantcertcredit.com/courses/2/lesson/151
|
### Assignments:
Unfinished Assignment Study Questions for Lesson 48
### Lesson Objectives:
- The requirements for being president
- Birth controversies
- The age and background of the president
- Electoral votes versus popular vote
- The 12th Amendment
The requirements of becoming president are not that stringent. Article II, Section 1, of the Constitution says:
"No person except a natural born Citizen, or a Citizen of the United States, at the time of the Adoption of this Constitution, shall be eligible to the Office of President; neither shall any Person be eligible to that Office who shall not have attained to the Age of thirty-five Years, and been fourteen Years a Resident within the United States."
Once elected, the president's salary is $400,000, with $169,000 for annual expenses, travel, and entertainment. Plus, he lives rent free in the White House.
The language in the Constitution says the president must be a natural born citizen. What exactly does that mean? Does the child have to be born in the United States or does it extend to children who are born to U.S. Citizens while they are traveling abroad?
One day, this requirement will be challenged in the Supreme Court and there will be a more concrete interpretation.
There have been some issues through the years starting with Mitt Romney's father who ran for president.
George Romney ran in 1968. He was born in Chihuahua, Mexico.
Senator Ted Cruz ran in 2016. He was born in Canada.
Barack Obama was elected in 2008 and reelected in 2012. He was born in Hawaii, but there was some controversy over whether or not that was true.
Even though the minimum age is 35, no president has been anywhere near that. John F. Kennedy was our youngest president at the age of 43. The average president is inaugurated at the age of 54.
Most presidents have been white, male, and protestant. There have only been two exceptions. John F. Kennedy was Catholic and Barack Obama is African American.
The candidates are nominated by each party every four years at the national conventions. When the voters vote in the general election, those votes are counted to determine how the state voted on the candidates, but the electors are the ones who actually cast the official vote in the electoral college.
Because of the electoral college system we have in place, there have been times when a president has won the popular vote but lost the election because of the electoral college. Also, there have been times when a president did not receive the majority vote in the electoral college but was elected president because there were more than two candidates.
In the 1800 election, Thomas Jefferson versus John Adams, Jefferson and his running mate, Aaron Burr, received the same number of electoral votes. The way the system was outlined in the Constitution, the candidate with the most electoral votes would be president, and the one with the second most votes would be vice-president. Since there was a tie, it was up to the House of Representatives to choose, and after much chaos, they chose Thomas Jefferson for president and Aaron Burr for vice-president.
As a result of that confusion, the 12th Amendment was ratified.
The Twelfth Amendment to the Constitution was adopted in 1804, and specified the separate election of the president and the vice president by the electoral college.
|
2020-02-24 20:00:45
|
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.23730577528476715, "perplexity": 2393.289944539831}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 5, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2020-10/segments/1581875145981.35/warc/CC-MAIN-20200224193815-20200224223815-00367.warc.gz"}
|
https://hyperedge.tech/2021/04/30/building-a-low-cost-flow-meter-for-river-studies/
|
April 30th, 2021
Scientific equipment is notoriously expensive, and for schools, there are often monopolies on which suppliers can provide it. Eben Farnworth wanted to do something about this problem. His design for an open flow meter only costs around $60 USD, which pales in comparison to the typical price tag of $1,000.
Flow meters are great tools to measure how quickly a liquid (typically water or air) passes through a certain area. By using a propeller inside of an enclosure with a known diameter, the amount of liquid per unit of time can be calculated, along with how fast it is going. Farnworth’s design employs a DN80 water sensor, an Arduino Uno, and a 2.4″ TFT touchscreen.
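As a rough illustration of that relationship in Python (the 80 mm bore is an assumption based on the DN80 designation, and the velocity value is made up):
import math

pipe_diameter_m = 0.080                       # assumed inner bore for a DN80 fitting
area_m2 = math.pi * (pipe_diameter_m / 2) ** 2

flow_velocity_m_s = 1.5                       # hypothetical velocity read from the sensor
volumetric_flow_m3_s = flow_velocity_m_s * area_m2
litres_per_minute = volumetric_flow_m3_s * 1000 * 60

print(f"{litres_per_minute:.1f} L/min")       # roughly 452 L/min for these numbers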
The case houses all the electronics plus a battery for power. Then at the bottom of the device is a port for plugging in the flow sensor itself. After a bit of calibration, Farnworth was able to get the display to show the flow of a river with impressive accuracy.
|
2021-06-21 20:09:11
|
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.2214813381433487, "perplexity": 1513.9577266603756}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 20, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2021-25/segments/1623488289268.76/warc/CC-MAIN-20210621181810-20210621211810-00249.warc.gz"}
|
https://www.ques10.com/p/32702/axial-flow-compressor-has-a-constant-axial-velocit/
|
0
1.1kviews
An axial flow compressor has a constant axial velocity of 150 m/s and 50% reaction. The mean diameter of the blade ring is 35 cm and the speed is 15,000 rpm. The exit angle of the blade is 27°.
Calculate the blade angle at inlet and the work done per kg of air.
0
41views
Given:
$V_f = 150\hspace{0.05cm}m/s,\hspace{0.25cm}R_d = 50\hspace{0.05cm}\%\\ D_m = 35\hspace{0.05cm}cm,\hspace{0.25cm}N = 15000\hspace{0.05cm}rpm,\hspace{0.25cm}\alpha = 27^\circ$
To Find: $\alpha_2 = \hspace{0.05cm}?,\hspace{0.25cm}\textit{W.D/kg of air} = \hspace{0.05cm}?$
Solution:
$\hspace{5cm}V_b = \frac{\pi D_{mean}.N}{60}\\ \hspace{5.5cm}= \frac{\pi\hspace{0.05cm}\times\hspace{0.05cm}0.35\hspace{0.05cm}\times\hspace{0.05cm}15000}{60}\\ \hspace{5.5cm}= 274.75\hspace{0.05cm}m/s\\ \hspace{5cm}\frac{V_b}{V_f} = \tan\beta_1 + \tan\beta_2\\ \hspace{4.5cm}\frac{274.75}{150} = \tan\beta_1 + \tan(27^\circ)\\ \hspace{5cm}\beta_1 = \alpha_2 = 52.898^\circ$
$\textit{W.D/kg of air} = V_b[V_{w2} - V_{w1}]$
$\tan\alpha_1 = \frac{V_{w1}}{V_{f1}}\\ V_{w1} = V_f.\tan\alpha_1\\ \hspace{0.5cm}= 150\hspace{0.05cm}\times\hspace{0.05cm}\tan(27)\\ \hspace{0.5cm}= 76.43\hspace{0.05cm}m/s$
$\tan\alpha_2 = \frac{V_{w2}}{V_{f2}}\\ V_{w2} = V_f.\tan\alpha_2\\ \hspace{0.5cm}= 150\hspace{0.05cm}\times\hspace{0.05cm}\tan(52.898)\\ \hspace{0.5cm}= 198.32\hspace{0.05cm}m/s$
Therefore,
$\textit{W.D/kg of air} = 274.75[198.32 - 76.43]\\ \hspace{2.2cm}= 33489.28\hspace{0.05cm}\textit{J/kg of air}\\ W.D = 33.489\hspace{0.05cm}\textit{kJ/kg of air}$
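A quick numerical check of the hand calculation above, as a Python sketch (small differences from the figures above come from rounding π):
import math

V_f = 150.0                       # axial velocity [m/s]
D_m = 0.35                        # mean blade-ring diameter [m]
N = 15000.0                       # rotational speed [rpm]
alpha1 = math.radians(27.0)       # exit blade angle

V_b = math.pi * D_m * N / 60.0                    # blade speed, about 274.9 m/s
beta1 = math.atan(V_b / V_f - math.tan(alpha1))   # inlet blade angle, about 52.9 deg
V_w1 = V_f * math.tan(alpha1)                     # whirl velocity at inlet
V_w2 = V_f * math.tan(beta1)                      # whirl velocity at exit
work_per_kg = V_b * (V_w2 - V_w1)                 # about 33.5 kJ/kg of air

print(math.degrees(beta1), work_per_kg / 1000.0)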
|
2022-01-24 23:20:38
|
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.2783101201057434, "perplexity": 3558.2370798463985}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.3, "absolute_threshold": 20, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2022-05/segments/1642320304686.15/warc/CC-MAIN-20220124220008-20220125010008-00172.warc.gz"}
|
http://mathhelpforum.com/algebra/215834-factoring-polynomial-print.html
|
# Factoring a polynomial
• Mar 27th 2013, 08:52 PM
ReneG
Factoring a polynomial
I need help factoring $x^5+2x^3-x^2-2$
I tried it by grouping $(x^5 - x^2) + (2x^3 -2)$
Then I got $x^2(x^3 - 1) + 2(x^3 - 1)$
Thus my final answer was $(x^3 - 1)(x^2+2)$
From there, I had no clue what to do. My text book says the correct answer is $(x^2+2)(x-1)(x^2+x+1)$ but I just have no clue how they got there.
Any help would be appreciated, thanks.
• Mar 27th 2013, 09:33 PM
MINOANMAN
Re: Factoring a polynomial
Reneg
Continue the factorization process and factor the (x^3 - 1); it will give you (x-1)(x^2+x+1). Therefore the polynomial after factorization becomes
(x^2+2)(x-1)(x^2+x+1).
Now, how to factor x^3 - 1: (x - 1) is a factor of the polynomial x^3 - 1, so divide by (x - 1), or simply use Horner's method
(Horner's method - Wikipedia, the free encyclopedia) and you will get the result.
MINOAS
• Mar 30th 2013, 01:37 PM
ReneG
Re: Factoring a polynomial
I just don't see how you can factor out $x - 1$ from $(x^3 - 1)(x^2 + 2)$ because there is no greatest common factor other than 1
• Mar 30th 2013, 01:46 PM
Plato
Re: Factoring a polynomial
Quote:
Originally Posted by ReneG
I just don't see how you can factor out $x - 1$ from $(x^3 - 1)(x^2 + 2)$ because there is no greatest common factor other than 1
$(x^3-1)=(x-1)(x^2+x+1)$ the difference of two cubes.
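Multiplying it out confirms the identity: $(x-1)(x^2+x+1) = x^3 + x^2 + x - x^2 - x - 1 = x^3 - 1$.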
• Mar 30th 2013, 03:05 PM
HallsofIvy
Re: Factoring a polynomial
Do you know how to multiply polynomials? What do you get when multiply $(x- 1)(x^2+ x+ 1)$?
• Mar 30th 2013, 04:49 PM
ReneG
Re: Factoring a polynomial
Oh wow, completely forgot about that rule. Seeing 1 as a cube root of 1 wasn't really intuitive for me at first, thanks!
|
2016-09-28 20:51:13
|
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 11, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.6570240259170532, "perplexity": 1054.7761438151056}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2016-40/segments/1474738661767.35/warc/CC-MAIN-20160924173741-00073-ip-10-143-35-109.ec2.internal.warc.gz"}
|
https://www.albert.io/ie/sat-chemistry-subject-test/identify-a-base
|
Free Version
Easy
Identify a Base
SATCHM-FV2KNR
Which compound requires the use of the Brønsted-Lowry theory to be identified as a base?
A. $NH_3$
B. $H_2S$
C. $LiOH$
D. $NaCl$
E. $NH_4Cl$
http://stackoverflow.com/questions/15200035/error-on-project-startup-relating-to-entity-framework-in-package-manager-console
# Error on Project Startup relating to Entity Framework in Package Manager Console
This error has perplexed me for the last few days and can find little to no information on Google regarding this. This started the other day when I was assigned a new laptop by work and I'm essentially running this on a clean install of Windows 7 x64 with VS2012 Update 1.
Whenever I start up a project using Entity Framework 5 or 6-alpha, this exception gets thrown to the package manager console:
New-Object : Cannot find an overload for "Version" and the argument count: "2".
At <project path>\packages\EntityFramework.5.0.0\tools\init.ps1:5 char:46
+ if ($PSVersionTable.PSVersion -ge (New-Object <<<< Version @( 3, 0 )))
    + CategoryInfo          : InvalidOperation: (:) [New-Object], MethodException
    + FullyQualifiedErrorId : ConstructorInvokedThrowException,Microsoft.PowerShell.Commands.NewObjectCommand

Test-ModuleManifest : Invalid Module Manifest path '<project path>\packages\EntityFramework.5.0.0\tools\'. The path argument must resolve to a single file in the file system with a '.psd1' extension. Please fix the path specification and try again.
At <project path>\packages\EntityFramework.5.0.0\tools\init.ps1:14 char:34
+ $thisModule = Test-ModuleManifest <<<< (Join-Path $toolsPath $thisModuleManifest)
    + CategoryInfo          : InvalidArgument: (C:\Users\stephe...rk.5.0.0\tools\:String) [Test-ModuleManifest], InvalidOperationException
    + FullyQualifiedErrorId : Modules_InvalidModuleManifestPath,Microsoft.PowerShell.Commands.TestModuleManifestCommand

Import-Module : Cannot bind argument to parameter 'Name' because it is null.
At <project path>\packages\EntityFramework.5.0.0\tools\init.ps1:31 char:18
+ Import-Module <<<< $thisModule
    + CategoryInfo          : InvalidData: (:) [Import-Module], ParameterBindingValidationException
    + FullyQualifiedErrorId : ParameterArgumentValidationErrorNullNotAllowed,Microsoft.PowerShell.Commands.ImportModuleCommand
I can get rid of this error by editing the package scripts manually, but that feels like it would be the wrong approach. There are also multiple instances of it checking the PowerShell version in this manner, so I suspect it's something that's wrong with my computer's configuration.
This is affecting multiple ASP.NET MVC 4 projects of mine targeting .NET 4.5. I can't run any Entity Framework commands such as Enable-Migrations or Update-Database as a result of this error. Any clues will be greatly appreciated.
http://www.tug.org/pipermail/texhax/2008-December/011403.html
# [texhax] Two questions for latex
Thu Dec 4 17:05:14 CET 2008
Eugen Dedu wrote:
> Hi,
>
> 1. What's the difference between \columnwidth and \linewidth?
\columnwidth is what it says, used in e.g. twocolumn mode and in the
multicols env.
\linewidth is the current line width
\linewidth has different value depending on where you are, e.g.
\begin{enumerate}
\item \linewidth here has a different value than
\end{enumerate}
here, where it will usually be equal to \textwidth
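A small compilable example (added here as an illustration, not from the original mail) that prints the actual lengths:
\documentclass{article}
\begin{document}
Outside a list: \verb|\linewidth| = \the\linewidth, \verb|\textwidth| = \the\textwidth.
\begin{enumerate}
  \item Inside the list: \verb|\linewidth| = \the\linewidth.
\end{enumerate}
\end{document}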
> 2. What does pdftex option do in \usepackage[pdftex]{graphicx} ?
>
latex and pdflatex can handle different formats, so the option tells
graphicx to handle a specific scenario.
Though I would not recommend specifying a driver (like pdftex) to
graphicx; it can detect the right one by itself.
However, if you use dvipdfm(x) then you need to specify that driver to
graphicx.
> I have searched the Web, without luck.
>
> Thanks,
--
/daleif
https://avidemia.com/single-variable-calculus/review-of-fundamentals/sigma-notation/
A compact form of expressing lengthy sums is the use of summation notation which is often known as sigma notation because it uses the Greek letter $\Sigma$ (uppercase sigma, corresponding to English letter “S” that stands for sum), to represent the sum.
To show how this notation works, consider the sum
$1^{2}+2^{2}+3^{2}+4^{2}+\cdots+100^{2}.$ A typical term in this sum is of the form $k^{2}$ and we can get all the terms if we let $k$ run through the values $1, 2, 3,\dots , 100$. In sigma notation, this sum will be written as
$\sum_{k=1}^{100}k^2.$ This symbol is read “the summation of $k^{2}$ where $k$ runs from 1 to 100.”
In general, if $m$ and $n$ are two integers such that $m\le n$, and $f(k)$ is some formula in $k$,
$\sum_{k=m}^{n}f(k)$ denotes the sum of all the terms that we get by substituting integers for $k$ in $f(k)$ starting with $k=m$ and ending with $k=n$. That is,
$\sum_{k=m}^{n}f(k)=f(m)+f(m+1)+f(m+2)+\cdots+f(n-1)+f(n).$ The numbers $m$ and $n$ that appear under and above the sigma are, respectively, called the lower and upper limits of summation, the letter $k$ is called the index of summation.
• Of course, any convenient letter that is not reserved for another purpose can be used in place of k. For example,
$\sum_{i=1}^{100}i^{2},\qquad\sum_{j=1}^{100}j^{2},\qquad\sum_{m=1}^{100}m^{2},\qquad\sum_{n=1}^{100}n^{2}$ all denote the same sum $\sum_{k=1}^{100}k^{2}$. The letters $i,j,k,m,n$, etc. that are used as the index of summation are called dummy indices.
Here are some examples of using summation notation:
1. ${\displaystyle \sum_{k=1}^{4}k^{3}=1^{3}+2^{3}+3^{3}+4^{3}}$
2. ${\displaystyle \sum_{i=1}^{5}(2i-1)=1+3+5+7+9}$
3. ${\displaystyle \sum_{n=-3}^{4}2^{n}=2^{-3}+2^{-2}+2^{-1}+2^{0}+2^{1}+2^{2}+2^{3}+2^{4}}$
4. ${\displaystyle \sum_{k=1}^{n}1=\underbrace{1+1+\cdots+1}_{n\text{ terms}}=n}$
• We can change the upper and lower limits of the sigma notation if we suitably change the formula of the typical term. For example, it is easy to see
$\sum_{n=2}^{5}n^{4}=\sum_{k=0}^{3}(k+2)^{4}=\sum_{m=1}^{4}(m+1)^{4}=2^{4}+3^{4}+4^{4}+5^{4}$
To state general properties of sums, in place of the notations $f(k),a(k),$ and $b(k)$, representing different formulas in $k$, it is a convention to use a subscripted letter and write $f_{k},a_{k}$, and $b_{k}.$ For example, if $a_{k}=2k$ then
\begin{align*}
\sum_{k=1}^{5}a_{k} & =a_{1}+a_{2}+a_{3}+a_{4}+a_{5}\\
& =2\cdot1+2\cdot2+2\cdot3+2\cdot4+2\cdot5
\end{align*}
For manipulating sums, the following properties of the sigma notation come in very handy.
1. Additive property: Sigma distributes across sums
${\displaystyle \sum_{k=m}^{n}(a_{k}+b_{k})=\sum_{k=m}^{n}a_{k}+\sum_{k=m}^{n}b_{k}}$
2. Homogeneous property: A constant can be moved through a sigma sum:
${\displaystyle \sum_{k=m}^{n}ca_{k}=c\sum_{k=m}^{n}a_{k}}$ where $c$ does not depend on $k$
3. If $m\leq p$ and $p+1\leq n$ then $\sum_{k=m}^na_k=\sum_{k=m}^p a_k+\sum_{k=p+1}^n a_k$
4. If $a_k\leq b_k$ for all $k$ with $m\leq k\leq n$ then
$\sum_{k=m}^n a_k\leq \sum_{k=m}^n b_k.$
5. Telescoping property
${\displaystyle \sum_{k=m}^{n}(a_{k}-a_{k-1})=a_{n}-a_{m-1}}$
#### Proofs of Properties
To prove the above properties, we just need to expand both sides and use properties of real numbers. For property (1), we write out the left hand side. Then because addition is associative and commutative we can rearrange the terms as the right hand side:
$(a_{m}+b_{m})+(a_{m+1}+b_{m+1})+\cdots+(a_{n}+b_{n})=(a_{m}+a_{m+1}+\cdots+a_{n})+(b_{m}+b_{m+1}+\cdots+b_{n})$ Property (2) follows from the distributive property of real numbers:
$ca_{m}+ca_{m+1}+\cdots+ca_{n}=c(a_{m}+a_{m+1}+\cdots+a_{n}).$ Property (3) says
$a_m+a_{m+1}+\cdots+a_n=(a_m+\cdots+a_p)+(a_{p+1}+\cdots+a_n),$ which is a generalization of the associative law.
Property (4) is a generalization of the basic law of inequalities:
$a_1\leq b_1,\quad\text{and}\quad a_2\leq b_2\quad\Rightarrow\quad a_1+a_2\leq b_1+b_2.$
For property (5):
\begin{align*}
\sum_{k=m}^{n}(a_{k}-a_{k-1}) & =(\cancel{a_{m}}-a_{m-1})+(\bcancel{a_{m+1}}-\cancel{a_{m}})+\cdots+(a_{n}-\xcancel{a_{n-1}})\\
& =a_{n}-a_{m-1}
\end{align*}
It follows from (1) and (2) that
\begin{align*}
\sum_{k=m}^{n}(ca_{k}+db_{k}) & =\sum_{k=m}^{n}ca_{k}+\sum_{k=m}^{n}db_{k}\\
& =c\sum_{k=m}^{n}a_{k}+d\sum_{k=m}^{n}b_{k}
\end{align*}
and
\begin{align*}
\sum_{k=m}^{n}(a_{k}-b_{k}) & =\sum_{k=m}^{n}(a_{k}+(-1)b_{k})\\
& =\sum_{k=m}^{n}a_{k}+(-1)\sum_{k=m}^{n}b_{k}\\
& =\sum_{k=m}^{n}a_{k}-\sum_{k=m}^{n}b_{k}
\end{align*}
In general
$\sum_{k=1}^{n}(a_{k}b_{k})\neq\left(\sum_{k=1}^{n}a_{k}\right)\left(\sum_{k=1}^{n}b_{k}\right)\qquad\sum_{k=1}^{n}\frac{a_{k}}{b_{k}}\neq\frac{\sum_{k=1}^{n}a_{k}}{\sum_{k=1}^{n}b_{k}}$
Here are some important formulas that are useful in calculus
$\bbox[#F2F2F2,5px,border:2px solid black]{\large\sum_{k=1}^{n}k=1+2+\cdots+n=\frac{n(n+1)}{2}} \quad \text{(i)}$
$\bbox[#F2F2F2,5px,border:2px solid black]{\large \sum_{k=1}^{n}k^{2}=1^{2}+2^{2}+\cdots+n^{2}=\frac{n(n+1)(2n+1)}{6}} \quad \text{(ii)}$
$\bbox[#F2F2F2,5px,border:2px solid black]{\large\sum_{k=1}^{n}k^{3}=\left[\frac{n(n+1)}{2}\right]^{2}} \quad \text{(iii)}$
There are various ways to prove the above formulas. For example, we can use mathematical induction (see the Wikipedia page on mathematical induction) or use the telescoping property of sigma notation.
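Before the proofs, here is a quick numerical sanity check of formulas (i)–(iii). This is a small Python sketch added for illustration; it is not part of the original text:

for n in [1, 5, 10, 100]:
    ks = range(1, n + 1)
    assert sum(ks) == n * (n + 1) // 2                                # formula (i)
    assert sum(k ** 2 for k in ks) == n * (n + 1) * (2 * n + 1) // 6  # formula (ii)
    assert sum(k ** 3 for k in ks) == (n * (n + 1) // 2) ** 2         # formula (iii)
print("Formulas (i)-(iii) hold for the tested values of n.")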
#### Proofs of Formulas (i)–(iii)
Proof of Formula (i). By the telescoping property (Property 5), $\sum_{k=1}^{n}\left(k^{2}-(k-1)^{2}\right)=n^{2}-0^{2}=n^{2}.$ But $2k-1=k^{2}-(k-1)^{2}$. Therefore
$\therefore\sum_{k=1}^{n}(2k-1)=n^{2}$
\begin{align*}
\Rightarrow \sum_{k=1}^n 2k-\sum_{k=1}^n 1 &=n^2 &{\small (\text{Property 1})}\\
2\sum_{k=1}^n k-n&=n^2 &{\small (\text{Property 2 and }\sum_{k=1}^n 1=n)}
\end{align*}
Thus we obtain Formula (i)
$\sum_{k=1}^{n}k=\frac{n^{2}+n}{2}=\frac{n(n+1)}{2}$
Proof of Formula (ii). Similarly, by the telescoping property, $\sum_{k=1}^{n}\left(k^{3}-(k-1)^{3}\right)=n^{3}-0^{3}=n^{3}.$ But $k^{3}-(k-1)^{3}=3k^{2}-3k+1$ [to see how we obtain it, just expand $(k-1)^{3}$]. Therefore,
\begin{align*}
\therefore\sum_{k=1}^{n}\left(3k^{2}-3k+1\right) & =n^{3}\\
3\sum_{k=1}^{n}k^{2}-3\sum_{k=1}^{n}k+\sum_{k=1}^{n}1 & =n^{3}& {\small(\text{Properties 1 and 2})}
\end{align*}
Now we use Formula (i) to replace $\sum_{k=1}^{n}k$ and the fact
that $\sum_{k=1}^{n}1=n$ to obtain
$3\sum_{k=1}^{n}k^{2}-3\frac{n(n+1)}{2}+n=n^{3}$ After simplification, we get
$\sum_{k=1}^{n}k^{2}=\frac{n^{3}}{3}+\frac{n^{2}}{2}+\frac{n}{6}.$ Using the least common denominator and then factorization, we can show
$\frac{n^{3}}{3}+\frac{n^{2}}{2}+\frac{n}{6}=\frac{n(n+1)(2n+1)}{6}$ Thus
$\sum_{k=1}^{n}k^{2}=\frac{n(n+1)(2n+1)}{6}$
Proof of Formula (iii). Again by the telescoping property, $\sum_{k=1}^{n}\left(k^{4}-(k-1)^{4}\right)=n^{4}-0^{4}=n^{4}$ and $k^{4}-(k-1)^{4}=4k^{3}-6k^{2}+4k-1$.
\begin{align*}
\therefore\sum_{k=1}^{n}(4k^{3}-6k^{2}+4k-1) & =n^{4}\\
4\sum_{k=1}^{n}k^{3}-6\sum_{k=1}^{n}k^{2}+4\sum_{k=1}^{n}k-\sum_{k=1}^{n}1 & =n^{4}
\end{align*}
If we use Formulas (i) and (ii), the fact that $\sum_{k=1}^{n}1=n$, and simplify, we obtain
$\sum_{k=1}^{n}k^{3}=\frac{n^{2}(n^{2}+2n+1)}{4}=\left[\frac{n(n+1)}{2}\right]^{2}.$
Example 1
Evaluate ${\displaystyle \sum_{k=1}^{45}k(k+1)}$
Solution
\begin{align*}
{\displaystyle \sum_{k=1}^{45}k(k+1)} & ={\displaystyle \sum_{k=1}^{45}(k^{2}+k)}\\
& ={\displaystyle \sum_{k=1}^{45}k^{2}}+{\displaystyle \sum_{k=1}^{45}k}\\
& =\frac{45\times46\times(2\times45+1)}{6}+\frac{45\times46}{2}\\
& =31395+1035\\
& =32430.
\end{align*}
Example 2
Evaluate ${\displaystyle \sum_{k=5}^{30}\frac{k^{3}}{7}}$
Solution
\begin{align*}
\sum_{k=5}^{30}\frac{k^{3}}{7} & =\frac{1}{7}\sum_{k=5}^{30}k^{3}\\
& =\frac{1}{7}\left(\sum_{k=1}^{30}k^{3}-\sum_{k=1}^{4}k^{3}\right)\\
& =\frac{1}{7}\left(\left[\frac{30\times31}{2}\right]^{2}-\left[\frac{4\times5}{2}\right]^{2}\right)\\
& =\frac{1}{7}\left(465^{2}-10^{2}\right)\\
& =\frac{216125}{7}=30875.
\end{align*}
Example 3
Express ${\displaystyle \sum_{k=1}^{n}(2+k)^{2}}$ in closed form.
Solution
\begin{align*}
\sum_{k=1}^{n}(2+k)^{2} & =\sum_{k=1}^{n}(4+4k+k^{2}) &{\small (\text{Expand} (2+k)^2)}\\
& =4\sum_{k=1}^{n}1+4\sum_{k=1}^{n}k+\sum_{k=1}^{n}k^{2} &{\small (\text {Properties (1) and (2)})}\\
& =4n+4\frac{n(n+1)}{2}+\frac{n(n+1)(2n+1)}{6} & {\small (\text{Formulas (i) and (ii)})}\\
& =4n+(2n^{2}+2n)+\left(\frac{n^{3}}{3}+\frac{n^{2}}{2}+\frac{n}{6}\right)\\
& =\frac{n^{3}}{3}+\frac{5n^{2}}{2}+\frac{37n}{6}\\
& =\frac{2n^{3}+15n^{2}+37n}{6}.
\end{align*}
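A quick numerical check of the closed form in Example 3 (an added Python sketch, not part of the original text):

n = 7
lhs = sum((2 + k) ** 2 for k in range(1, n + 1))
rhs = (2 * n ** 3 + 15 * n ** 2 + 37 * n) // 6
assert lhs == rhs  # both equal 280 for n = 7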
https://math.stackexchange.com/questions/2822006/teichmuller-disk-and-mathrmsl-2-mathbbr-action
# Teichmuller disk and $\mathrm{SL}_2\mathbb{R}$ action
Let $(X,\omega)$ be a Riemann surface of genus $g$ with holomorphic 1-form $\omega$ (or equivalently a translation structure). Let $\Omega\mathcal{T}_g$ be the space of holomorphic 1-forms over genus $g$ surface. The famous $\mathrm{SL}_2\mathbb{R}$ action on $\Omega\mathcal{T}_g$ is defined by composing each coordinate chart of $(X,\omega)$ with matrices in $\mathrm{SL}_2\mathbb{R}$. It is claimed that by descending to $\mathbb{H}$ and $\mathcal{T}_g$ from their tangent/cotangent bundle, the $\mathrm{SL}_2\mathbb{R}$ action embeds the hyperbolic plane $\mathbb{H}$ isometrically into Teichmuller space $\mathcal{T}_g$ of genus $g$.
A few things I have learned so far:
• $\mathrm{PSL}_2\mathbb{R}$ is identified with $T^1\mathbb{H}$ by choosing $(i,i)\in T^1\mathbb{H}$ for identity matrix. For any $g\in \mathrm{PSL}_2\mathbb{R}$ identified with $(x,v)\in T^1\mathbb{H}$, $ga_t$ traces the geodesic in $\mathbb{H}$ passing through $x$ with tangent vector $v$. (Update: this part is explained in more detail in DMG's answer.)
• Let $a_t:=\begin{pmatrix}e^{t/2}&0\\ 0&e^{-t/2} \end{pmatrix}, t\in\mathbb{R}$. The action of $a_t$ stretches the horizontal direction and shrinks the vertical direction of $(X,\omega)$. By the Teichmuller theorem, $a_t\cdot (X,\omega)$ traces the geodesic in $\mathcal{T_g}$ passing through $X$ with marking $id_X$ as $t$ varies.
So if the embedding is given by the identification of $T^1\mathbb{H}$ with $\mathrm{PSL}_2\mathbb{R}\cdot (X_0,\omega_0)$ above, the curve $t\mapsto a_tg$ in $T^1\mathbb{H}$ is sent to the geodesic $t\mapsto a_t\cdot (X,\omega)$ in $\mathcal{T}_g$ given that $(X,\omega)=g\cdot (X_0,\omega_0)$. However, the former curve $t\mapsto a_tg$ is NOT the geodesic but a scaling of the complex number $x$ with which $g$ is identified. The fact that a non-geodesic is sent to a geodesic contradicts the claim that the embedding of $\mathbb{H}$ is isometric.
Where does my argument go wrong? Or do we define the identification of $\mathbb{H}$ with $\mathrm{SO(2)}\backslash \mathrm{SL}_2\mathbb{R}\cdot (X_0,\omega_0)$ differently?
I would appreciate any help very much. Thank you in advance.
The identification of $PSL_2(\mathbb{R})$ and $T^1 \mathbb{H}$ stems from the fact that for any $(x, v) \in T^1 \mathbb{H}$, there is a unique Moebius transformation $M \in PSL_2(\mathbb{R})$ sending the imaginary axis in $\mathbb{H}$ to the geodesic $A_{M}$ passing through $x$ with direction $v$ at $x$. To put it shortly: $PSL_2(\mathbb{R})$ acts simply transitively on $T^1 \mathbb{H}$. This gives a bijection $PSL_2(\mathbb{R})(i,i) \leftrightarrow T^1\mathbb{H}$. The notation $(i,i)$ means the point $i \in \mathbb{H}$ in the first factor, and the unit vector based at $i$ and tangent to the imaginary axis in the second factor.
Under this identification, the action of the geodesic flow $g_t$ on $T^1 \mathbb{H}$ corresponds to right multiplication in $PSL_2(\mathbb{R})$ by the matrix $a_t:=\begin{pmatrix} e^{t/2} & 0 \\ 0 & e^{-t/2} \end{pmatrix}$, i.e. if $M \in PSL_2(\mathbb{R})$ is represented by $(x,v)$ in $T^1 \mathbb{H}$ under the above bijection, then
$$g_t \cdot (x,v)=Ma_t.$$
Here is why: $a_t$ corresponds to a hyperbolic transformation with translation axis the imaginary axis, and with translation distance exactly $t$. Under the bijection, we are doing $(Ma_t) \cdot (i,i) = M \cdot (a_t \cdot (i,i))$, i.e. we first translate along the imaginary axis a distance $t$ (we flow along the imaginary axis), and then we send the imaginary axis to the geodesic determined by $(x,v)$ by the isometry $M$, which we named $A_M$. Since isometries preserve distances, the image point is the translate of $(x,v)$ along the geodesic $A_M$ a distance $t$, i.e. the action of the geodesic flow.
As a final remark, note that the choice of the family of matrices $a_t$ depends on the choice of base point (in our case, the basepoint is $(i,i)$).
Edit:
Now we discuss the action of $SL_2(\mathbb{R})$ on $\Omega \cal{T}_{g,n}$.
Fix $(X_0,\omega_0)$, where $X_0$ is a conformal structure on $S_{g,n}$ and $\omega_0$ is a holomorphic quadratic differential on $X_0$. $\omega_0$ gives normal coordinates of the quadratic differential $\omega_0$ on $X_0$ (outside of the zeroes of $\omega_0$). The element $B \in SL_2(\mathbb{R})$ can be interpreted as an affine map in these coordinates: $B \cdot (X_0, \omega_0)=(X_0,\omega_B)$. The data $\mathcal{X}_B=[(X_0,\omega_B),id]$ gives a point in Teichmueller space with marking the identity, where we understand that $\omega_B$ gives a new conformal structure on $X_0$ by composing the coordinate patch of normal coordinates with the affine map $B$ and completing the complex structure at the zeroes of $\omega_B$ by the removable singularity theorem. This yields a quasi-conformal map $f : \mathcal{X}_0 \to \mathcal{X}_B$, and a terminal quadratic differential $\omega_B$. If $B \in SO(2)$, then the marking $f : \mathcal{X}_0 \to \mathcal{X}_B$ is conformal, so we get the same point in Teichmueller space. Thus, the action descends to a faithful action of $SL_2(\mathbb{R})/SO(2)$ on $\cal{T}_{g,n}$.

We claim that this descended action of $SL_2(\mathbb{R})/SO(2)$ on $\cal{T}_{g,n}$ yields an isometric injection $SL_2(\mathbb{R})/SO(2)\cdot(X_0,\omega_0) \to \cal{T}_{g,n}$. It is easy to check that any matrix $A \in SL_2(\mathbb{R})$ with $A \notin SO(2)$ can be written as $A=U D_K V$ where $U,V \in SO(2)$ and $D_K=\begin{pmatrix} \sqrt{K} & 0 \\ 0 & \frac{1}{\sqrt{K}} \end{pmatrix}$ and $K>1$. The Teichmuller distance between $\mathcal{X}_0$ and $\mathcal{X}_{D_K}$ is given by $\log K$, where $K$ is the infimum of the quasi-conformal dilatation, which is realized by the affine map described in the construction above. More generally, to compute the distance between $\mathcal{X}_A$ and $\mathcal{X}_B$, we can take the base point at $(X_0, \omega_B)$ and decompose $AB^{-1} = U_1 D_K U_2$. Finally, we can show that the embedding $SL_2(\mathbb{R})/SO(2)\cdot(X_0,\omega_0) \to \cal{T}_{g,n}$ is isometric by using the bijection with $\mathbb{H}$, and noting that if $\rho$ denotes the hyperbolic distance on $\mathbb{H}$, then $\rho(A(i),B(i))=\rho(AB^{-1}(i),i)=\rho(D_K(i),i)=\log(K)$, which is the same as the Teichmueller distance computed above.
• There are two bijections we are talking about, $T^1\mathbb{H}\leftrightarrow PSL_2(\mathbb{R})$ and $PSL_2(\mathbb{R})\leftrightarrow PSL_2(\mathbb{R})\cdot (X_0,\omega_0)\subset \Omega\mathcal{T}_g$. As you explained, geodesic flow is an action from right in $PSL_2(\mathbb{R})$. However, the action of $PSL_2(\mathbb{R})$ on $\Omega\mathcal{T}_g$ is from left. My question is what is the second bijection so that the geodesic flow on $PSL_2(\mathbb{R})$ coincides with the geodesic flow on $\mathcal{T}_g$? – Morty Jun 17 '18 at 15:37
• If we simply identify $PSL_2(\mathbb{R})$ with its orbit in the most natural way, then the geodesic flow $g_t$ on $\Omega\mathcal{T}_g$ doesn't correspond to the geodesic flow on $T^1\mathbb{H}$ because of this left-right issue. – Morty Jun 17 '18 at 15:43
• I'll edit my answer. – DMG Jun 17 '18 at 16:08
• Thanks a lot for your effort in making this clear explanation. It would serve as a good reference for other readers. I was actually confused on a very technical point and now I am good. – Morty Jun 19 '18 at 15:22
I have found an answer to my question with which I am satisfied by studying the original paper of Veech: Teichmuller curves in moduli space, Eisenstein series and an application to triangular billiards. He considered two actions on $\Omega \mathcal{T}_g$. One is the $PSL_2(\mathbb{R})$ action from the left mentioned above, and the other is an action by $Aff^+(X,\omega)$, the group of orientation-preserving affine homeomorphisms, from the right. Although the Veech group $SL(X,\omega)$ could be defined either as the stabilizer of the $PSL_2(\mathbb{R})$ action or as the image of the differential $D:Aff^+(X,\omega)\rightarrow PSL_2(\mathbb{R})$, the geodesic flow on $\Omega\mathcal{T}_g$ should really be defined as the right action of $\phi_t\in Aff^+(X,\omega)$ whose derivative is $D\phi_t=a_t$, instead of the left multiplication by $a_t$. The former is exactly what Veech did in his paper. However, the survey I read defined the geodesic flow in the latter way, and I was confused by this technical point.
http://www.leancrew.com/all-this/2012/09/improved-rss-subscriber-count-script/
# Improved RSS subscriber count script
When I got my RSS subscriber count email this morning, I knew something was wrong because the count was about twice what it was the day before. I’ve found and fixed the bug that can overcount the subscribers through Google Reader, but there’s still one more bug that needs to be fixed.
The script I run to calculate the subscriber count is a slight variation on Marco Arment’s original. Surprisingly, the overcounting bug has nothing to do with my changes; it’s in Marco’s code.
Here’s the problem code.
bash:
# Google Reader subscribers and other user-agents reporting "subscribers" and using the "feed-id" parameter for uniqueness:
GRSUBS=`fgrep "$LOG_FDATE" "$LOG_FILE" | fgrep " $RSS_URI" | egrep -o '[0-9]+ subscribers; feed-id=[0-9]+' | sort | uniq | cut -d' ' -f 1 | awk '{s+=$1} END {print s}'`
That’s a really long pipeline, so let’s look at each part individually:
• fgrep "$LOG_FDATE" "$LOG_FILE" This reads the site’s access log file ($LOG_FILE is defined as the path to the log file earlier in the script) and returns only those lines associated with yesterday (again, $LOG_FDATE is defined as yesterday’s date earlier in the script).
• fgrep " $RSS_URI" returns only those lines accessing the site’s feed URL (yes, $RSS_URI is defined earlier in the script).
• egrep -o '[0-9]+ subscribers; feed-id=[0-9]+' returns only those lines that have both a subscriber count and a feed-id definition. This is characteristic of hits from Google’s FeedFetcher for Reader. The -o option tells egrep to return only the portion of the line that matches the regular expression, so we’re left with lines that look like this:
2735 subscribers; feed-id=9141626367700991551
• sort sorts the lines alphabetically.
• uniq eliminates duplicate lines that are adjacent to one another. This sort | uniq construct is common in shell scripts. Because uniq only eliminates duplicates if they are adjacent, the sort is needed to make them adjacent.
• cut -d' ' -f 1 returns just the subscriber count for each line, which is before the first space character.
• awk '{s+=$1} END {print s}' adds up all the counts and returns the sum.

The problem is in the uniq command. If your subscriber count changes during the course of the day—not an uncommon occurrence—you'll get lines that look like this after the sort:

2735 subscribers; feed-id=9141626367700991551
2735 subscribers; feed-id=9141626367700991551
2735 subscribers; feed-id=9141626367700991551
2735 subscribers; feed-id=9141626367700991551
2737 subscribers; feed-id=9141626367700991551
2737 subscribers; feed-id=9141626367700991551

The uniq command will not treat these as duplicates because they aren't. It'll convert these lines into

2735 subscribers; feed-id=9141626367700991551
2737 subscribers; feed-id=9141626367700991551

You can see the problem. Because uniq keeps two lines associated with the same feed-id, we're counting most of those subscriptions twice. That's why my subscriber count this morning was nearly twice what it should have been.

What we need is a way to tell uniq to return just one line for each feed-id. Ideally, we'd like the line from the end of the day, because that's the most up-to-date count. Here's my solution. Instead of a simple sort | uniq, I do this:

sort -t= -k2 -s | tac | uniq -f2

The -t= -k2 options tell sort to reorder the lines on the basis of what comes after the equals sign, which is the feed-id. The -s option ensures that the sort is stable, that is, that lines with the same feed-id appear in their original order after the sort. The tac command then reverses all the lines, so for each feed-id the top line will be the one associated with the last inquiry of the day. This will be important after our next step.

The -f2 option tells uniq to ignore the first two "fields" of each line, where fields are separated by white space. In other words, it decides on the uniqueness of a line by looking at the feed-id=1234567890 part only. This will turn a section like

2737 subscribers; feed-id=9141626367700991551
2737 subscribers; feed-id=9141626367700991551
2735 subscribers; feed-id=9141626367700991551
2735 subscribers; feed-id=9141626367700991551
2735 subscribers; feed-id=9141626367700991551
2735 subscribers; feed-id=9141626367700991551

into

2737 subscribers; feed-id=9141626367700991551

which is just what we want. You can see now why I reversed the lines after sorting. When uniq is used without options, the line it chooses to retain doesn't matter because they're all the same. But when we use the -f option, it's important to know which of the "duplicate" lines (which are duplicates over only a portion of their length) is returned. As it happens, it's the first "duplicate" that's returned, so by reversing the stable sort in the previous step, uniq -f2 returns the line from the last hit of the day for each feed-id.

The remainder of the pipeline is unchanged, so my new version of Marco's script gets the Google Reader subscriber count like this:

bash:
# Google Reader subscribers and other user-agents reporting "subscribers" and using the "feed-id" parameter for uniqueness:
GRSUBS=`fgrep "$LOG_FDATE" "$LOG_FILE" | fgrep "$RSS_URI" | egrep -o '[0-9]+ subscribers; feed-id=[0-9]+' | sort -t= -k2 -s | tac | uniq -f2 | awk '{s+=$1} END {print s}'`

Update 9/28/12
When I first wrote this, I didn't understand how the -s and -r options worked in sort. I thought the -r would reverse the lines after the stable sort, but that's not what it does. To do what I wanted, I needed the line reversal as a separate command. tac (so named because it acts like cat in reverse) filled the bill. Also, as one of the commenters on Marco's script pointed out, there's no need to do the cut when awk can sum over the first field directly.

There's a similar bug in the pipeline for other aggregators that provide a subscriber count in their access log entries:

bash:
# Other user-agents reporting "subscribers", for which we'll use the entire user-agent string for uniqueness:
OTHERSUBS=`fgrep "$LOG_FDATE" "$LOG_FILE" | fgrep "$RSS_URI" | fgrep -v 'subscribers; feed-id=' | egrep '[0-9]+ subscribers' | egrep -o '"[^"]+"$' | sort | uniq | egrep -o '[0-9]+ subscribers' | awk '{s+=$1} END {print s}'`
As you can see, this pipeline does the same sort | uniq thing and will double count subscribers to an aggregator if that aggregator’s subscriber figure changes during the course of the day. Unfortunately, I’m not sure how to fix this problem. Because the identifier that distinguishes one aggregator from another doesn’t necessarily come after the subscriber count in these log lines, I don’t know how to trick uniq into behaving the way I want.
For example, if I run just the part of the pipeline through uniq, I get these lines from the NewsGator aggregator:
"NewsGatorOnline/2.0 (http://www.newsgator.com; 1 subscribers)"
"NewsGatorOnline/2.0 (http://www.newsgator.com; 3 subscribers)"
"NewsGatorOnline/2.0 (http://www.newsgator.com; 4 subscribers)"
These shouldn’t be added together, but I can’t tell uniq to consider only the first field of each line—it doesn’t have an option for that. I’m pretty sure a Perl one-liner could do it, but my Perl is a little rusty at the moment. If you can whip one up, or if you have a better idea, I’d like to hear about it.
As a practical matter, aggregators that report subscribers but aren’t Google Reader make up such a small part of my total subscriber base that double counting them has little effect. Even if there’s no solution to this problem, it won’t make much difference.
Update 9/28/12
Well, there is a solution, and Marco provided it (mostly) in the comments. The trick lies in using awk arrays and the post-increment (++) operator. Here’s the improved code:
bash:
# Other user-agents reporting "subscribers", for which we'll use the entire user-agent string for uniqueness:
OTHERSUBS=`fgrep "$LOG_FDATE" "$LOG_FILE" | fgrep " $RSS_URI" | fgrep -v 'subscribers; feed-id=' | egrep '[0-9]+ subscribers' | egrep -o '"[^"]+"$' | tac | awk -F\( '!x[$1]++' | egrep -o '[0-9]+ subscribers' | awk '{s+=$1} END {print s}'`
Instead of sort | uniq, this uses
tac | awk -F\( '!x[$1]++'

The tac command reverses the lines, which puts them in reverse chronological order. The clever part—due to Marco—is the awk one-liner that returns just the first line of each aggregator. Note first that it's all pattern: if the value of the pattern is true, the line is printed (print being awk's default command); if it's false, nothing is printed. So the script acts as a filter, printing only those lines for which the pattern evaluates to true. Truth is determined through the value of an associative array, x. As awk reads through the lines of the file, items of x are created with keys that come from the first field, $1, and values that are incremented for every line with a matching first field. The first field is everything before the first open parenthesis, which—in my logs, anyway—corresponds to the name of the aggregator.
The trick is that the ++ incrementing operator acts after x[$1] is evaluated. The first time a new value of $1 is encountered, the value of x[$1] is zero. In the context of an awk pattern, this is false. The not operator, !, flips that to true, and the line is printed. For subsequent lines with that same $1, the value of x[$1] will be a positive number—a true value which the ! will flip to false. Thus, the subsequent lines with the same $1 aren’t printed.
You’ll note that there’s no sort in this pipeline. Because we’re using an awk filter instead of uniq, we don’t have to have the lines grouped by aggregator before running the filter.
I won’t pretend I understood the awk one-liner as soon as I saw it. I seldom use awk and have never written a script that used arrays in the pattern. But once I started to catch on, I realized it was very much like some Perl programs I’ve seen that build associative arrays to count word occurrences.
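For readers more comfortable with a scripting language than with awk, the same keep-one-count-per-aggregator idea can be sketched in Python. This is only an illustration of the logic, not part of the scripts discussed here; the helper name and the assumption that the lines have already been filtered are hypothetical. Iterating in chronological order and overwriting the dictionary entry achieves the same effect as tac followed by the first-match-wins awk filter:

import re

def other_subscribers(log_lines):
    """Sum the most recent subscriber count reported by each aggregator."""
    last_count = {}
    for line in log_lines:                       # assumed already filtered like the pipeline above
        m = re.search(r'(\d+) subscribers', line)
        if m:
            agent = line.split('(', 1)[0]        # same key as $1 in awk -F\(
            last_count[agent] = int(m.group(1))  # later lines overwrite earlier ones
    return sum(last_count.values())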
Double-counting Google Reader subscribers, though, is a big deal. If you’re using Marco’s script, you should change that pipeline.
Update 9/28/12
I see that Marco has updated his script since I first posted. My changes to the pipeline differ a little from his, so I set up my own fork of his script.
https://www.physicsforums.com/threads/algebra-and-physics.698904/
Algebra and physics
1. Jun 26, 2013
Nixom
Can someone tell me why quantities having the same algebra structure can be identified as the same dynamical variables? Are the Poisson brackets and the quantum commutators two different presentations of the same Lie algebra?
2. Jun 27, 2013
tom.stoer
I am not sure whether I fully agree with your first sentence. Suppose there is a) a three-dim. harmonic oscillator, b) QCD with color, c) three-quark flavor symmetry. In all cases one can construct su(3) charges $Q_a$ with $a = 1, \dots, 8$ and $[Q_a, Q_b] = i f_{abc} Q_c$. But of course the fundamental dynamics is different; the algebra does not fix the Hamiltonian (in the flavor case the Hamiltonian was unknown when Gell-Mann and others discovered the quark model with its symmetries). And the algebra does not fix the allowed representations: in the flavor case we know quarks, mesons and baryons in different multiplets, but the question which multiplets exist in nature is not determined by the algebra but by the dynamics; in QCD we construct everything from fundamental and adjoint fields of su(3) color, but we know that in the physical Hilbert space only color singlets (the trivial representation) are allowed.
Regarding the second sentence: yes, the matrices, the classical objects and the quantum mechanical operators are different 'representations' of the same algebra. Note that in math the term 'representation' is used to distinguish different algebraic properties (fundamental rep., adjoint rep., irreducible rep., ...) whereas here we use the same algebraic structure but constructed from different objects like a) creation and annihilation operators, b) quark and gluon fields, c) only quark fields, and acting on different vector spaces (Hilbert spaces). Looking at the pure algebraic properties, all we need are generators, commutation relations and especially their structure constants $f_{abc}$; this defines the algebra uniquely, regardless of which entities the generators have been constructed from.
3. Jun 29, 2013
Nixom
Sorry for the obscure question.
I just wonder why we can identify the generators of the Lorentz group with physical variables, such as displacement for momentum, rotation for angular momentum...
Is it because they have the same algebra relations as the classical Poisson brackets?
4. Jun 30, 2013
tom.stoer
You can do that for an even larger group, the Poincare group.
You get generators for translation (4-momentum), rotations (angular momentum) and boosts.
Last edited: Jun 30, 2013
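For reference (an added note, not part of the original thread; sign and metric conventions vary between textbooks), the Poincaré algebra in one common convention reads
$$[P_a,P_b]=0,\qquad [M_{ab},P_c]=i\left(\eta_{ac}P_b-\eta_{bc}P_a\right),$$
$$[M_{ab},M_{cd}]=i\left(\eta_{ac}M_{bd}-\eta_{bc}M_{ad}-\eta_{ad}M_{bc}+\eta_{bd}M_{ac}\right),$$
with rotations $L_i=\tfrac{1}{2}\epsilon_{ijk}M_{jk}$, boosts $K_i=M_{i0}$ and Hamiltonian $H=P_0$. The commutators quoted below follow from these relations.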
5. Jun 30, 2013
samalkhaiat
The invariance of your theory under the Poincaré group, say, leads (through Noether's theorem) to a set of 10 conserved quantities (Constants of Motion / Noether Charges). Then we proceed to show that these quantities have the following properties: (i) they transform EXACTLY like the generators do under the Poincaré group; (ii) they act on the fields of the theory, generating the CORRECT Poincaré transformations on them; (iii) they satisfy (through Poisson brackets or commutators) the same Lie algebra as the Poincaré group. So, we can identify them with the generators of the Poincaré group. It is like the saying: "If it SMELLS like an apple, LOOKS like an apple and TASTES like an apple, it is an apple."
What Noether theorem does is simply giving FIELD REALIZATION to the generators.
Sam
6. Jun 30, 2013
tom.stoer
I don't think you have 10 conserved quantities.
The commutation relations for rotations L, boosts K, 3-momentum P with the Hamiltonian H are
$[L_i, H] = [P_i, H] = 0$
$[K_i, H] = -i P_i$
That means that the boosts K do not commute with H and can therefore not be 'conserved charges'.
7. Jul 1, 2013
samalkhaiat
The invariance under the Poincaré group implies that the energy-momentum 4-vector
$$P_{ a } = \int d^{ 3 } x \ T_{ 0 a } = \int d^{ 3 } x \left( \frac{ \partial \mathcal{ L }}{ \partial ( \partial_{ 0 } \phi ) } \ \partial_{ a } \phi - \eta_{ 0 a } \mathcal{ L } \right) , \ \ (1)$$
and the angular momentum tensor
$$M_{ ab } = \int d^{ 3 } x \ \left( T_{ 0 b } x_{ a } - T_{ 0 a } x_{ b } + \frac{ \partial \mathcal{ L } }{ \partial ( \partial_{ 0 } \phi ) } \ \Sigma_{ a b } \phi \right) , \ \ (2)$$
are CONSTANTS OF MOTION. These are the (4+6=10) conserved Noether CHARGES. It is very easy to show that
$$\frac{d}{dx^{ 0 }} P_{ a } = \frac{d}{dx^{ 0 }} M_{a b} = 0 . \ \ (3)$$
This is a very common misunderstanding. The components $M_{ i 0 }$ have an EXPLICIT time dependence which has to be accounted for when writing the Heisenberg (Poisson) equation of motion. So, you need to write
$$\frac{ d }{ dx^{ 0 } } M_{ i 0 } = \partial_{ 0 } M_{ i 0 } + [ i P_{ 0 } , M_{ i 0 } ] .$$
The conservation of $M_{ i 0 }$, [Eq(3)], therefore implies
$$[ i P_{ 0 } , M_{ i 0 } ] = - \partial_{ 0 } M_{ i 0 } = - \partial_{ 0 } \int d^{ 3 } x \ \left( - \pi \partial_{ i } \phi \right) \ x^{ 0 } = \int d^{ 3 } x \ \pi \partial_{ i } \phi = P_{ i } .$$
So, the non-vanishing commutator $[ i H , M_{ i 0 } ]$ DOES NOT mean that $M_{ i 0 }$ is NOT CONSERVED.
Sam
8. Jul 1, 2013
tom.stoer
I missed the t-dependency; thx for clarification
Last edited: Jul 2, 2013
9. Jul 8, 2013
Nixom
thanks, Sam.
It seems that the most important property of physical quantities is that they are conserved, isn't it?
And why do the conserved quantities just happen to have those properties; is there some mechanism guaranteeing this?
By the way, which algebra does the commutator [x,p] or the classical Poisson bracket belong to? Are they generators of some group?
10. Jul 9, 2013
samalkhaiat
Yes, it is important, I suppose. Current conservation, $\partial_{ a } J^{ a } ( x ) = 0$, has a remarkable consequence for any matrix element of $J_{ a }$. For arbitrary states $| I \rangle$ and $| F \rangle$, we take the matrix element of the divergence of the Noether current and use the Heisenberg equation to find
$$0 = \langle F | \partial_{ a } J^{ a } ( x ) | I \rangle = i \langle F | [ P_{ a } , J^{ a } ( x ) ] | I \rangle ,$$
or, if we define the momentum transfer 4-vector by $q_{ a } = p_{ a }( F ) - p_{ a }( I )$, we find
$$q_{ a } \langle F | J^{ a } (0) | I \rangle = 0 .$$
This equation is an example of a “Ward-Takahashi” identity, a relation that must be satisfied by the matrix element of any operator that possesses some conservation property. Relations of this type play a vital role in proving the renormalizability of a theory.
The only Mechanism I know of is Mathematics. We can SHOW that Noether charge has those properties.
Yes. $( x_{ i } , p_{ i } )$ form a 2n-dimensional Lie algebra called the Heisenberg/Poisson algebra.
Sam
https://www.physicsoverflow.org/37454/pedagogical-introduction-st%C3%BCckelberg-renormalization-group
# Pedagogical introduction to Stückelberg renormalization (group)?
Does anybody know a nice pedagogical introduction (something that is shorter than a heavy textbook, maybe a lecture note) to the Stückelberg renormalization (group)? In contrast to the renormalization methods often applied in (high-energy) theoretical physics, in this approach the renormalization group transformation is reversible, which makes the renormalization group a true group, and it is (for me personally a bit surprisingly at first glance) often applied in contexts that have nothing to do with relating effective theories valid at different energy scales to each other or with integrating out high-energy effects.
While I am generally interested in applications of any kind, I would enjoy most reading about how the Stückelberg renormalization is applied in a QFT context.
Brunetti, Romeo, Michael Dütsch, and Klaus Fredenhagen. "Perturbative algebraic quantum field theory and the renormalization groups."Advances in Theoretical and Mathematical Physics 13.5 (2009): 1541-1599.
We discuss the connection between the Stückelberg–Petermann renormalization group which describes the freedom in the perturbative construction with the Wilsonian idea of theories at different scales. In particular, we relate the approach to renormalization in terms of Polchinski’s Flow Equation to the Epstein–Glaser method.
On the simplest, classical level (applied to ordinary differential equations), the similarities and differences between the Stückelberg–Petermann renormalization group and the Wilsonian renormalization group is well described in
G.C. Paquette, Renormalization group analysis of differential equations subject to slowly modulated perturbations, Physica A 276 (2000), 122-163.
See especially p. 9-10 and Section 7.2.
The Stueckelberg renormalization group is what removes the freedom left in the prescription of the renormalization conditions after the limit where the regularization (whether by a cutoff or another regularization recipe) is already taken. Because the physics must be unique, the theory cannot depend on this freedom, as with the gauge freedom in a gauge theory. The Stueckelberg renormalization group is the analogue of the gauge group.
I don't know of a good introduction to Stueckelberg renormalization. But (for example) the following articles show the use of the renormalization group in contexts very different from quantum field theory. You need to judge for yourself how pedagogical the articles are written.
A renormalization group treatment of the classical van der Pol oscillator is given in https://arxiv.org/abs/1305.2372
classical singular perturbation theory: https://arxiv.org/abs/hep-th/9506161 and https://arxiv.org/abs/cond-mat/9407024
quantum anharmonic oscillator: https://arxiv.org/abs/hep-th/9710087
similarity renormalization group in nuclear physics: https://arxiv.org/pdf/nucl-th/0611045
nonlinear optics; https://arxiv.org/abs/hep-th/0001210 ; this might come closest to your request; there it is named after Bogoliubov rather than Stueckelberg.
For the history, see https://arxiv.org/abs/hep-th/9602024
For relativistic QFT, see, e.g., https://arxiv.org/abs/hep-th/0501228
a resummation technique is required. There are several RG formulations that may be used to achieve this. We believe the most powerful is still the original RG of Stuckelberg–Peterman, Gell–Mann Low and Bogoliubov–Shirkov [68–70], which we refer to as the reparametrization RG. Although discovered in perturbative studies of quantum electrodynamics in the process of removing the ultraviolet (UV) divergences from measured quantities, it was pointed out very early in the development of the subject, by Blank Bonch–Bruervich and Shirkov [71], that the RG is not dependent on the existence of such UV divergences and that it could be a useful tool in a variety of fields (they mention condensed matter physics). [...] From a modern perspective, the original field theoretic renormalization can now be seen to be nothing more than a coordinate change from original bare parameters to renormalized parameters. A coordinate transformation, of itself, does not change the physics, but, as we shall see and demonstrate, one coordinate system may be vastly superior to another when doing perturbative calculations, especially when combined with the notion of a “sliding scale” for the renormalization point.
In contrast, Wilson-style renormalization is not just a reparameterization but simplifies the model by changing the short-distance, high-energy physics, leaving only the long-distance, low-energy physics intact. The relationship between the two is discussed in Section 5.2 of
Brunetti, Romeo, Michael Dütsch, and Klaus Fredenhagen. "Perturbative algebraic quantum field theory and the renormalization groups."Advances in Theoretical and Mathematical Physics 13.5 (2009): 1541-1599.
We discuss the connection between the Stückelberg–Petermann renormalization group which describes the freedom in the perturbative construction with the Wilsonian idea of theories at different scales. In particular, we relate the approach to renormalization in terms of Polchinski’s Flow Equation to the Epstein–Glaser method.
answered Oct 23, 2016 by (15,458 points)
edited Jan 13, 2017
A lengthy comment discussion involving issues of integrating out degrees of freedom (Stückelberg renormalization has nothing to do with integrating out degrees of freedom) has been moved to chat. Please continue the discussion about such issues there if you like; any further off-topic comments here will be moved there too.
http://midnattssolsrallyt.se.php54.levonline.com/recipes-using-ptpq/f-test-coefficients-equal-e227f0
In statistics, an F-test of equality of variances is a test for the null hypothesis that two normal populations have the same variance. More generally, an F-test is any statistical test in which the test statistic has an F-distribution under the null hypothesis; a test based on the test statistic $F$ is called an $F$-test, and the "F" stands for Fisher, the biologist and statistician who came up with this family of tests. Again, there is no reason to be scared of this new test or distribution.

Notionally, any F-test can be regarded as a comparison of two variances, but the specific case discussed here is that of two populations, where the test statistic used is the ratio of two sample variances. Let $S_1^2$ and $S_2^2$ be the sample variances of groups of sizes $n_1 = n$ and $n_2 = m$; their ratio has an F-distribution with $n-1$ and $m-1$ degrees of freedom if the null hypothesis of equality of variances is true.

This F-test is known to be extremely sensitive to non-normality, so Levene's test, Bartlett's test, or the Brown–Forsythe test are better tests for testing the equality of two variances. These F-tests are generally not robust when there are violations of the assumption that each population follows the normal distribution, particularly for small alpha levels and unbalanced layouts. F-tests for the equality of variances can still be used in practice, with care, particularly where a quick check is required, and subject to associated diagnostic checking: practical textbooks suggest both graphical and formal checks of the assumption. The immediate generalization of the problem outlined above is to situations where there are more than two groups or populations, and the hypothesis is that all of the variances are equal; in the corresponding analysis-of-variance setting the null hypothesis is that, at the population level, all four group means are equal.

The same machinery is used for regression coefficients. If you wish to test that the coefficient on weight, β_weight, is negative (or positive), you can begin by performing the Wald test for the null hypothesis that this coefficient is equal to zero, e.g. in Stata:

test _b[weight]=0
 (1)  weight = 0
      F(1, 71) = 7.42
      Prob > F = 0.0081

In a typical regression output you can also see that, for each coefficient, tStat = Estimate/SE, and the p-values for the hypothesis tests are in the pValue column. How to get the covariance of two coefficients is solved by SEM, which would give you the variance-covariance matrix of all coefficients. A single F-test produces a single F-value. I have been trying to find a reference on the theory behind using the F-test to test for the equality of regression coefficients; the closest I could find is this note on general linear restrictions: http://www.mattblackwell.org/files/teaching/ftests.pdf.

For the worked example taken up again further below, the F critical value obtained from the table is 8.845: we have to look for 8 and 3 degrees of freedom in the F table.
The F-test for linear regression tests whether any of the independent variables in a multiple linear regression model are significant; specifically, it tests the null hypothesis that all of the regression coefficients are equal to zero. Exact F-tests mainly arise when the models have been fitted to the data using least squares, and the F-test is used primarily in ANOVA and in regression analysis. The default hypothesis tests that software spits out when you run a regression model use the null that each coefficient equals zero: the t-test checks whether an unknown parameter in the population is equal to a given constant (in some cases, we test whether a coefficient is equal to 0, in other words, whether the independent variable is individually significant). The t-test and the F-test are two of the many types of statistical tests used for hypothesis testing, and each decides whether we are going to accept the null hypothesis or reject it. However, we will always let statistical software do the dirty work of calculating the values for us. Related questions in this area include the asymptotic test of equality of coefficients from two different regressions and the test for equality between two regression coefficients with an interaction term; the function test.coefficient, for example, performs large-sample tests (higher-order asymptotic test, likelihood ratio test, and/or Wald test) for regression coefficients in an NB regression model. Definition 1: for any coefficient b, the Wald statistic is given by the formula Wald = b / s.e.(b).

For the F-test of equality of variances itself: we use the F-test to evaluate hypotheses about variances, and the two-tailed version tests against the alternative that the variances are not equal. A ratio of 1 indicates that the two sets of variances are equal, and a ratio greater than one suggests that the numerator is greater than the denominator. If the ratio is far from 1, we reject the null hypothesis that the ratio equals 1, and with it our assumption that the variances were equal; the null hypothesis is rejected if F is either too large or too small based on the desired alpha level (i.e., statistical significance). The generalization to several groups is the problem treated by Hartley's test and Bartlett's test.

Formula for the F-test: there is no simple single formula, but rather a series of steps which we need to follow. Step 1: to perform an F-test, first we have to define the null hypothesis and the alternative hypothesis. In the worked example referred to below, the resulting F statistic is 2.38. The tool quoted here calculates the p-value, the F statistic and the test power. When conducting a t-test for unpaired (independent) samples, you need to know whether the variance of each sample is equal or unequal; below you can find the study hours of 6 female students and 5 male students (the table itself is not reproduced in this extract).

Reference: Snedecor, George W. and Cochran, William G. (1989), Statistical Methods, Eighth Edition, Iowa State University Press.
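As a concrete illustration of the two-sample variance F-test described above, here is a minimal Python sketch. The study-hour numbers are invented for illustration (the original table is not reproduced), and the use of scipy is an assumption about the reader's environment rather than something taken from the quoted sources:

import numpy as np
from scipy import stats

# Invented illustrative data: study hours of 6 female and 5 male students.
female = np.array([5.5, 6.0, 7.5, 8.0, 6.5, 9.0])
male = np.array([4.0, 6.0, 5.5, 8.5, 7.0])

s1_sq = female.var(ddof=1)   # sample variance, group 1
s2_sq = male.var(ddof=1)     # sample variance, group 2

F = s1_sq / s2_sq                          # test statistic: ratio of sample variances
df1, df2 = len(female) - 1, len(male) - 1  # degrees of freedom n1-1, n2-1

# Two-tailed p-value: double the smaller tail probability.
p_one_tail = stats.f.sf(F, df1, df2) if F > 1 else stats.f.cdf(F, df1, df2)
p_two_tail = min(1.0, 2 * p_one_tail)

print(f"F = {F:.3f}, df = ({df1}, {df2}), two-tailed p = {p_two_tail:.3f}")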
In addition to that overall test, you could perform planned comparisons among the three groups. The degrees of freedom obtained by him were 8 and 3. When passwords of a website leak, are all leaked passwords equally easy to read? There are several different F-tables. The F-Test is used to test the null hypothesis that the variances of two populations are equal. [2] For application in applied statistics, there is concern[citation needed] that the test is so sensitive to the assumption of normality that it would be inadvisable to use it as a routine test for the equality of variances. Johnson, N.L., Kotz, S., Balakrishnan, N. (1995), "Fermat, Schubert, Einstein, and Behrens–Fisher:The Probable Difference Between Two Means When σ, Multivariate adaptive regression splines (MARS), Autoregressive conditional heteroskedasticity (ARCH), https://en.wikipedia.org/w/index.php?title=F-test_of_equality_of_variances&oldid=993827742, Articles with unsourced statements from May 2010, Creative Commons Attribution-ShareAlike License, This page was last edited on 12 December 2020, at 18:23. We have previously discussed how to impose and test various restrictions on models. site design / logo © 2020 Stack Exchange Inc; user contributions licensed under cc by-sa. This particular situation is of importance in mathematical statistics … F-tests are used for other statistical tests of hypotheses, such as testing for differences in means in three or more groups, or in factorial layouts. Each t-statistic tests for the significance of each term given other terms in the model.According to these results, none of the coefficients seem significant at the 5% significance level, although the R-squared value for the model is really high at 0.97. The above shows you a quick and easy way to carry out hypothesis tests. The F-Test of overall significance has the following two hypotheses: Null hypothesis (H0) : The model with no predictor variables (also known as an intercept-only model) fits the data as well as your regression model. To subscribe to this RSS feed, copy and paste this URL into your RSS reader. The F-test, when used for regression analysis, lets you compare two competing regression models in their ability to “explain” the variance in the dependent variable. S 1, S 2-Sample standard deviations of group1 and group2. That's because the ratio is known to follow an F distribution with 1 numerator degree of freedom and n-2 denominator degrees of freedom.For this reason, it is often referred to as the analysis of variance F-test. The big point to remember is that… A statistician was carrying out F-Test. Can I fly a STAR if I can't maintain the minimum speed for it? Now, one thing I forgot to mention, with any hypothesis test, we're going to need some type of significance level. The null hypothesis belonging to this F F -test is that all of the population coefficients in the model except for the intercept are zero, so the hypotheses are H 0: β1 = 0, β2 = 0, β3 =0 vs. H 1: βj ≠ 0 for at least one j = 1,2,3. You then generate the interaction between x and d, i.e., w = d*x. Calculate T equal σ Calculate T ... Assumptions. When conducting a t test for unpaired (independent) samples, you need to know if the variance of each sample is equal or unequal. Our F statistic that we've calculated is going to be 12. Where in the rulebook does it explain how to use Wises? 
Since the F statistic (2.38) is lesser than t… In statistics, an F-test of equality of variances is a test for the null hypothesis that two normal populations have the same variance. Observation: Since the Wald statistic is approximately normal, by Theorem 1 of Chi-Square Distribution, Wald 2 is approximately chi-square, and, in fact, Wald 2 ~ χ 2 (df) where df = k – k 0 and k = the number of parameters (i.e. Can the VP technically take over the Senate by ignoring certain precedents? If the null hypothesis is true, then the F test-statistic given above can be simplified (dramatically). In parliamentary democracy, how do Ministers compensate for their potential lack of relevant experience to run their own ministry? which spacecraft? Why is it easier to handle a cup upside down on the finger tip? We’ll study its use in linear regression. Which fuels? How can I have a significant overall F-test but any significant P values for the individual coefficients? So far we have seen how to to an overall test of the equality of the three regression coefficients, and now we will test planned comparisons among the regression coefficients. [7] However, for large alpha levels (e.g., at least 0.05) and balanced layouts, the F-test is relatively robust, although (if the normality assumption does not hold) it suffers from a loss in comparative statistical power as compared with non-parametric counterparts. To do this use an F Test. Could you point me in the right direction for a theoretical reference on using F-tests to test for equality of regression coefficients? Graphical intuition, please? Google seems to have failed me. [1] This particular situation is of importance in mathematical statistics since it provides a basic exemplar case in which the F-distribution can be derived. ... Unstandardized regression coefficient. Let X1, ..., Xn and Y1, ..., Ym be independent and identically distributed samples from two populations which each has a normal distribution. The F-Test of overall significancein regression is a test of whether or not your linear regression model provides a better fit to a dataset than a model with no predictor variables. This ratio of sample variances will be test statistic used. Let's say we want to test whether or not the coefficients on cyl and carb are identical. The F -test was developed by Ronald A. Fisher (hence F -test) and is a measure of the ratio of variances. Then, you generate a dummy variable, call it d, that equals 1 if the data came from the second dataset and 0 if the data came from the first dataset. If I want to use the kinds of monsters that appear in tabletop RPGs for commercial use in writing, how can I tell what is public-domain? $\begingroup$ I think the question your raise, i.e. Required Sample Data. In this section we will extend this discussion by explaining how to test whether two or more coefficients within a model are equal; we’ll also show how to test more complicated sorts of equality constraints. Means and standard errors. (However, all of these tests create experiment-wise type I error inflations when conducted as a test of the assumption of homoscedasticity prior to a test of effects. n is the number of observations, p is the number of regression parameters. Find out the F value from the F Table and determine whether we can reject the null hypothesis at 5% level of significance (one-tailed test). By using our site, you acknowledge that you have read and understand our Cookie Policy, Privacy Policy, and our Terms of Service. 
The hypothesis test does not take decisions itself, rather it assists the researcher in decision making. Users with a solid understanding of the algebra of hypothesis tests may find the following approach more convenient, at least for simple versions of the test. Is Bruce Schneier Applied Cryptography, Second ed. In its most general sense, the F-test takes a ratio of two variances and tests whether the ratio equals 1. What is the extent of on-orbit refueling experience at the ISS? Binary proportions. We are still just calculating a test statistic to see if some hypothesis could have plausibly generated our data. I have been trying to look for a reference on the theory behind using the F-test to test for the equality of regression coefficients. Stack Exchange network consists of 176 Q&A communities including Stack Overflow, the largest, most trusted online community for developers to learn, share their knowledge, and build their careers. Can warmongers be highly empathic and compassionated? Test for equality of parameters within a model. Normal distribution - the F test for variances is very sensitive to the normality assumption. The F-test for Linear Regression Purpose. Purpose: Test if variances from two populations are equal. In ANOVA, you can get an overall F test testing the null hypothesis. We're going to see that this is a pretty high number. The F -statistic is defined as: F = Explained variance Unexplained variance A general rule of thumb that is often used in regression analysis is that if F > 2.5 then we can reject the null hypothesis. Use MathJax to format equations. Otherwise it follows an F-distribution scaled by the ratio of true variances. Then you could possibly use a Wald test in the way you suggested instead of a LRT test. In other words, this is a case where "approximate normality" (which in similar contexts would often be justified using the central limit theorem), is not good enough to make the test procedure approximately valid to an acceptable degree. Definitions for Regression with Intercept. using Guidance and Resistance for long term effects, How to \futurelet the token after a space. Cross Validated is a question and answer site for people interested in statistics, machine learning, data analysis, data mining, and data visualization. It only takes a minute to sign up. Is a password-protected stolen laptop safe? An F -test ( Snedecor and Cochran, 1983) is used to test if the variances of two populations are equal. Is there any better choice other than using delay() for a 6 hours delay? This tests the full model against a model with no variables and with the estimate of the dependent variable being the mean of the values of the dependent variable. rev 2020.12.10.38158, The best answers are voted up and rise to the top, Cross Validated works best with JavaScript enabled, Start here for a quick overview of the site, Detailed answers to any questions you might have, Discuss the workings and policies of this site, Learn more about Stack Overflow the company, Learn more about hiring developers or posting ads with us, F-test for equality of regression coefficients, http://www.mattblackwell.org/files/teaching/ftests.pdf, Optimal polynomial order in equality-constrained linear regression. Standardized regression coefficient. Why do most guitar amps have a preamp and a power amp section? It is most often used when comparing statistical models that have been fitted to a data set, in order to identify the model that best fits the population from which the data were sampled. 
Why is the ratio MSR/MSE labeled F* in the analysis of variance table? Party fact, the residuals are the difference between the actual, or observed, data point and the predicted data point. Then the test statistic. Wald test in which the test power experience to run their own ministry University Press and for! The equality of variances is true, then the F critical value obtained from the table is 8.845 the treated! > as one symbol, space constrained, 3D, flying car intersection work p-values for the equality of is. The big point to remember is that… Calculate T equal σ Calculate T..... With this, see our tips on writing great answers Wald statistic is going to be 12 all.. Are identical George W. and Cochran, William G. ( 1989 ), you can see that for each,. - the F statistic that we 've calculated is going to be 12, flying car intersection?... In mathematical statistics … Purpose: test if variances from two populations can be different, and overall... Snedecor, George W. and Cochran, William G. ( 1989 ), statistical Methods, Eighth Edition, State! Below you can get an overall F test testing the null hypothesis of equality variances... Big point to remember is f test coefficients equal Calculate T equal σ Calculate T equal Calculate... Url into Your RSS reader p-value, the residuals are the difference between the actual, or responding to answers! Has an F-distribution scaled by the ratio of true variances two normal have! The models have been trying to look for 8 and 3 in 19 Aug 1852 Ministers compensate for potential. References or personal experience equal σ Calculate T... Assumptions need some type of significance level -test! Hartley 's test better choice other than using delay ( ) for a theoretical reference on using to. ( ) for a f test coefficients equal reference on the finger tip statistical software do dirty! - sample size of group1 and group2 test a model on the behind... ’ s assume that the two populations can be simplified ( dramatically ) © Stack! Is greater than the denominator can see that this is the extent of on-orbit refueling experience at ISS. Of variances is true for the equality of regression coefficients lunation '' to moon phase name labeled... When you run a regression model are significant A. Fisher ( hence F was. On-Orbit refueling experience at the residuals of the data points and the hypothesis test does not take itself. To get the cov of both coefficients, is solved by SEM, which would you. Help, clarification, or observed, data point a one-tailed test tests against the alternative that variances! Is any statistical test in which the test statistic to see that is. That two normal populations have the same variance our tips on writing answers. Most guitar amps have a significant overall F-test but any significant p values for us potential lack of relevant to... Particular situation is of importance in mathematical statistics … Purpose: test if the null hypothesis all... In parliamentary democracy, how do Ministers compensate for their potential lack of relevant experience to run their ministry. Been fitted to the data using least squares after a space that this the! Statistic and the hypothesis test does not take decisions itself, rather it assists the researcher in decision.... Than the denominator model are significant, similar to the t-test ) our F statistic that we 've calculated going... Licensed under cc by-sa see if some hypothesis could have plausibly generated our data F statistic we... 
A one-tailed test F-test but any significant p values for the equality of variances is true are in the direction. Significant p values for us this ratio of variances is a pretty high number numerator... Stack Exchange Inc ; user contributions licensed under cc by-sa the actual, or responding other., is solved by SEM, which would give you the var-cov matrix of all coefficients female students and male... Model is the null hypothesis that the variances are not equal T equal Calculate... Way you suggested instead of a website leak, are all leaked passwords equally to. Residuals of the data points and the overall sample mean p is the null hypothesis is.... Of all coefficients out F-test Owen Leahy in 19 Aug 1852 you get. Or personal experience 1989 ), you agree to our terms of service, privacy policy and cookie policy delay! To see that this is the ratio MSR/MSE labeled F * in the pValue column significant overall F-test but significant. A ), you can get an overall F test testing the null that., with any hypothesis test does not take decisions itself, rather it assists the researcher decision... Dirty work of calculating the values for the two populations are equal are all leaked passwords equally to... But any significant p values for us to get the cov of both coefficients, is solved SEM! Handle a cup upside down on the same dataset 8 and 3 2020 Stack Exchange Inc user... Generate the interaction between x and d, i.e., w = *! To map moon phase number + lunation '' to moon phase number + ''. -Test was developed by Ronald A. Fisher ( hence F -test ( and. Certain precedents why do most guitar amps have a preamp and a amp. In linear regression to learn more, see our tips on writing great answers paste URL! Statistics, an F-test, similar to the t-test ) with references or personal experience mainly arise the. P is the extent of on-orbit refueling experience at the residuals are the difference between the actual, responding! F-Test in Excel i.e., w = d * x hypothesis that two normal populations have the same variance two. Are in the F -test ( Snedecor and Cochran, William G. ( 1989,! 3 degrees of freedom obtained by him were 8 and 3 degrees of freedom in the right for... With this solution: we have previously discussed how to use Wises certain precedents the technically! Four group means are equal t-test ) assume that the coefficient equals zero in the... New test or distribution previously discussed how to map moon phase number + lunation to... Your Answer ”, you can get an overall F test testing the null hypothesis given can! Instead of a LRT test coefficients from two different regressions, test for variances is a measure the! F-Distribution with n − 1 and m − 1 and m − 1 of... Simplified ( dramatically ) was carrying out F-test hypothesis of equality of regression coefficients the actual, or,! Overall sample mean 2 - sample size of group1 and group2 use Wises situation of... The test power in Excel x and d, i.e., w = d * x calculating the values the... Hartley 's test and Bartlett 's test and Bartlett 's test statistic is by!, there is no reason to be tested is that the variances are equal we 've calculated is to... Fact, the F statistic and the predicted data point ( 1989,... How could a 6-way, zero-G, space constrained, 3D, flying car intersection work F-distribution with n 1. Restrictions: http: //www.mattblackwell.org/files/teaching/ftests.pdf, Iowa State University Press equally easy to read precedents. 
The difference between the actual, or responding to other answers this is a test for equality of coefficients. To remember is that… Calculate T equal σ Calculate T... Assumptions coefficients on cyl and are!, n 2 - f test coefficients equal size of group1 and group2 8 and.. Rss reader ( hence F -test ( Snedecor and Cochran, 1983 ) is used test... Feed, copy and paste this URL into Your RSS reader delay ( ) for a reference on F-tests! Of relevant experience to run their own ministry F -test ) and is measure. This example teaches you how to map moon phase number + lunation to! Wald test in the pValue column you suggested instead of a website leak, are all leaked equally... Of 1 indicates that the variances of two populations are equal this URL into Your RSS.... The VP technically take over the Senate by ignoring certain precedents constrained, 3D flying... lunation '' to moon phase name the data using least squares the... User contributions licensed under cc by-sa for linear regression model are significant variances from two different regressions, test the. Use a Wald test in the analysis of variance table to be tested is that the of. Of regression parameters phase number + lunation '' to moon phase number . Who came up with this great answers two sets of variances is a test for the hypothesis. The cov of both coefficients, is solved by SEM, which would give you the var-cov of. The interaction between x and d, i.e., w = d * x car intersection?... Significant p values for the null hypothesis F test-statistic given above can be (... Relevant experience to run their own ministry that overall test, you can that! Example teaches you how to \futurelet the token after a space © 2020 Stack Exchange ;. Specifically, they test the null hypothesis is true testing the null hypothesis is true for the of. Group1 and group2, space constrained, 3D, flying car intersection work, are leaked... A two-tailed test or a one-tailed test true, then the F table 're going to need some of!, s 2-Sample standard deviations of group1 and group2 wrong to train and test a model on finger. Hence F -test was developed by Ronald A. Fisher ( hence F -test ) and is measure. Overall F test for the null hypothesis is true models have been trying to look for a theoretical on! Where in the F test testing the null hypothesis that two normal populations have the variance...
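A minimal sketch of the stacked-regression approach described above, using statsmodels on simulated data (the data, variable names and true coefficient values below are made up purely for illustration; they are not part of the original question):

import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

rng = np.random.default_rng(0)
# two hypothetical datasets with (possibly) different intercepts and slopes on x
df1 = pd.DataFrame({"x": rng.normal(size=100)})
df1["y"] = 1.0 + 2.0 * df1["x"] + rng.normal(size=100)
df2 = pd.DataFrame({"x": rng.normal(size=100)})
df2["y"] = 0.5 + 2.5 * df2["x"] + rng.normal(size=100)

# stack the datasets, add the dummy d (1 = second dataset) and the interaction w = d*x
df1["d"], df2["d"] = 0, 1
stacked = pd.concat([df1, df2], ignore_index=True)
stacked["w"] = stacked["d"] * stacked["x"]

fit = smf.ols("y ~ x + d + w", data=stacked).fit()
print(fit.f_test("w = 0"))         # equal slopes across the two datasets?
print(fit.f_test("d = 0, w = 0"))  # equal intercepts and slopes jointly?

The joint test of d and w is the Chow-style comparison of the two regressions; the single-restriction test on w asks only whether the coefficient on x is the same in both datasets.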
|
2021-05-14 13:54:12
|
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 1, "mathjax_asciimath": 1, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.5720921158790588, "perplexity": 1013.2620479253717}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2021-21/segments/1620243989526.42/warc/CC-MAIN-20210514121902-20210514151902-00226.warc.gz"}
|
https://www.physicsforums.com/threads/question-about-derivatives-and-continuous.277131/
|
1. Dec 4, 2008
kala
Why is it that every continuous function is a derivative?
I know that not every derivative is continuous, I just don't really know why we would know that every continuous function is a derivative. I think it has something to do with the integral, but I don't know how. Any help?
2. Dec 4, 2008
mutton
Fundamental Theorem of Calculus
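In other words (this is just the standard statement of that theorem): if $f$ is continuous on $[a, b]$ and we define

$$F(x) = \int_a^x f(t)\,dt,$$

then $F$ is differentiable on $(a, b)$ with $F'(x) = f(x)$. So every continuous function is the derivative of its own integral function, which is exactly why continuity is enough.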
3. Dec 4, 2008
|
2017-06-26 09:14:34
|
{"extraction_info": {"found_math": false, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.8644374012947083, "perplexity": 311.37306849925}, "config": {"markdown_headings": false, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2017-26/segments/1498128320695.49/warc/CC-MAIN-20170626083037-20170626103037-00513.warc.gz"}
|
https://hindilearning.in/up-board-solutions-for-class-10-computer-science-chapter-11/
|
# UP Board Solutions for Class 10 Computer Science Chapter 11 File Operation
Here we have provided the UP Board Class 10 Computer Science NCERT Solutions. These solutions will be very helpful for students in their exams. Students can download the up board solutions for Class 10 Computer Science Chapter 11 File Operation pdf. The up board solutions for Class 10 Computer Science Chapter 11 File Operation notes will help you. NCERT Solutions for Class 10 Computer Science Chapter 11 File Operation pdf download, up board solutions for Class 10 computer science.
File Operation Long Answer Type Questions (8 Marks)
Question 1.
What are the data files? Write about their different types. (UP 2011, 12)
Or
Write a short note on ‘Sequential files’. (UP 2007, 15)
Or
Write short notes on ‘Random files’ and their application. (UP 2008, 09)
Or
Explain the features of Sequential file. (UP 2016)
Data File: A data file offers a convenient method of storing data sets, since data files can be easily read and updated by a program file (a 'C' program). An example of a data file is an employee file, which is a collection of related records.
Data files can be categorized in many ways. But we will categorize them only on the basis of mode of accessing. According to this condition, there are following two types of data files:
1. Sequential data files
2. Random data files.
1. Sequential Data Files: The main advantage of this type of file is that records (data) are stored sequentially, one after another, on the storage media which may be a magnetic tape or a disk. These individual data items may be different records or single entities and may comprise numerics or strings or both. If a particular data is of string type, then it must be enclosed within quotation mark.
The main disadvantage of handling a sequential file is that any particular record can only be accessed sequentially. For example, if the sixtieth record from the beginning of a sequential file is to be accessed, then the program has to pass over the preceding fifty-nine records. Thus, it becomes more time-consuming. But if we compare it with a random data file, it is easy to create and handle.
2. Random Data Files: The basic advantage of these files is that any particular record can be accessed directly. For example, if we want to access the sixtieth record, then it can be accessed directly without passing any preceding record. It can be stored on disks only. Thus, this method is quite faster as compared to sequential data files.
For this purpose, we use function fseek. This function sets the file position indicator associated with the stream according to the values of the offset and origin. The value of origin may be one of the following:
• 0 – the beginning of a file
• 1 – current position
• 2 – end of the file.
Question 2.
Discuss different file operations of ‘C’.
File Operation: Following are the compulsory steps to operate files:
1. Create file.
2. Merging file i.e., combine records of two or more files into one.
4. Deleting records.
5. Update records.
6. Generate reports.
The ‘C’ language supports different level of file processing according to the environment used.
The general functions which are supported by ‘C’ in the file processing are:
• Input functions
• Output functions
Input functions:
-getc()
-fgetc()
-fscanf()
Output functions:
-putc()
-fputc()
-fprintf()
-fwrite()
fopen() is the main function needed before any of the above functions can be used. Opening a file, creating a file, reopening an existing file, etc., are the main tasks performed by it.
Syntax:
file pointer = fopen(filename, mode);
i.e., FILE *fp;
fp = fopen("TRY.C", "r");
fp is a pointer variable, which contains the address of the structure FILE which has been defined in the header file “stdio.h”.
fopen() will open the file 'TRY.C' in 'read' mode, which tells the C compiler that we would be reading the contents of the file. "r" is a string and not a character. fopen() performs three important tasks when you open the file in "r" mode:
• It searches on the disk the file to be opened.
• If the file is present, it loads that file from the disk into memory.
If the file is absent, fopen() returns a NULL. This function opens one file at a time.
• It sets up a character pointer which points to the first character of the chunk of memory into which the file has been loaded.
Question 3.
Write about different modes in which a file can be opened.
File Opening Modes: The tasks performed by fopen() when a file is opened in each of these modes are also mentioned.
Mode → Meaning
r → Open an existing file for reading only
w → Create a file for writing (an existing file with the same name is overwritten)
a → Open for appending / create a new file if it does not exist
r+ → Open an existing file for both reading and writing
w+ → Create a new file for both reading and writing
a+ → Open for reading and appending / create a new file if it does not exist
Question 4.
WAP to copy the contents of one file to another?
Program:
```
#include<stdio.h>
#include<conio.h>
#include<stdlib.h>
void main()
{
    FILE *fsrc, *ftgt;
    int ch; /* int so that EOF can be detected reliably */
    clrscr();
    fsrc = fopen("try1.c", "r");
    if (fsrc == NULL)
    {
        puts("Cannot open source file");
        exit(1);
    }
    ftgt = fopen("try2.c", "w");
    if (ftgt == NULL)
    {
        puts("Cannot open target file");
        fclose(fsrc);
        exit(1);
    }
    while (1)
    {
        ch = fgetc(fsrc);
        if (ch == EOF)
            break;
        else
            fputc(ch, ftgt);
    }
    fclose(fsrc);
    fclose(ftgt);
}
```
Question 5.
Explain fread () and fwrite () functions.
fread( ): This function reads a specified number of equal-sized data items from an input stream into a block, fread returns the number of items (not bytes) actually read on success.
Syntax:
fread(&name_of_structure, sizeof(name_of_structure), 1, file_pointer);
e.g.,
```struct employee
{
char nm[20]; /* 20 bytes */
int age; /* 2 bytes */
};
struct employee Emp;
fread(&Emp, sizeof(Emp), 1, fp);
```
Here, the fread function can read 22 bytes of information from the file pointed by file pointer fp.
fwrite( ): This function appends a specified number of equal-sized data items to an output file.
Syntax:
fwrite(&struct_name, sizeof(struct_name), 1, fp);
e.g.,
```
struct address
{
char city[30]; /* 30 bytes */
long int pin; /* 4 bytes */
char country[20]; /* 20 bytes */
};
struct address addr;
FILE *fp;
fwrite(&addr, sizeof(addr), 1, fp); /* appends one 54-byte record to the file */
```
Question 6.
Describe the main features of printf and getchar function. (UP 2014)
printf ( ) → This function writes formatted data to screen. This function allows to supply the input in a fixed format and to obtain the output in the specified form. The printf () function interprets the contents of the format string.
Syntax:
printf("formatted string", variables);   (the variables are supplied only if needed)
Example 1.
printf("Average = %d percentage = %f", avg, per);
Here the %d and %f are conversion characters.
They tell printf() to print the value of avg as an integer and per as a float.
getchar(): Using this function, any single ASCII character can be read into a char-type variable. This function is declared in the stdio.h header file, so that file must be included at the beginning of the program using the #include directive.
For example,
```
#include<stdio.h>
#include<conio.h>
void main()
{
char x;
x = getchar();
printf("you entered %c", x);
}
```
File Operation Short Answer Type Questions (4 Marks)
Question 1.
What is stdin and what is stdout?
In the 'C' language, whatever values are entered from the keyboard or displayed on the monitor are first stored in a buffer area during file handling. The two buffer areas used for this are:
• stdin: It stands for keyboard buffer as it stores the information sent by the keyboard.
• stdout: It stands for the standard output buffer, i.e., the monitor of the computer.
Question 2.
Differentiate fprintf() and fscanf(). (UP 2016)
fprintf( ): This function sends formatted output to a stream. It accepts a series of arguments, apply to each argument a format specifier contained in the format string * format. This function applies the first format specifier to the first argument, the second specifier to the second argument… till the end of the format.
fscanf(): This function is similar to the scanf() function. It scans a series of input fields one character at a time and stores the formatted input at the addresses passed as arguments following *format. This function might stop scanning a particular field before it reaches the normal end-of-field (whitespace) character, or it might terminate entirely.
Question 3.
What is reading files? (UP 2008)
Reading from files: To read from a file, ‘C’ provides the following functions:
To read data from a file, the file must already exist.
fscanf (): This function is used to read data from a file. It is very similar to scanf () function. It scans a series of input fields (one at a time).
Syntax:
fscanf(FILE *stream, "format specifiers", variables);
Question 4.
What is the closing of the file?
Closing Files: During a write to a file, the data written is not put on the disk immediately. It is stored in a buffer. When the buffer is full, all its contents are actually written to the disk. The process of emptying the buffer by writing its contents to disk is called flushing the buffer. Closing the file flushes the buffer and releases the space taken by the FILE structure which is returned by fopen. For this, the fclose() function is used.
Syntax:
fclose(FILE *stream);
File Operation Very Short Answer Type Questions (2 Marks)
Question 1.
In which type of file, accessing a particular record is faster?
Random data file.
Question 2.
Which statement is used to close a file?
fclose() statement.
Question 3.
When data is to be written to a file, in which mode should it be opened? (UP 2016)
‘w’ mode (for writing).
Question 4.
Which statement is used to read the data from a file?
fscanf() statement.
Question 5.
What do you know about the fflush() statement?
If the given stream has buffered output, fflush writes the output for a stream to the associated file.
Question 6.
Which function prints an error message corresponding to the last library routine that produced an error?
perror ().
Question 7.
Which file-pointer position constant of the seek function seeks from the beginning of the file?
SEEK_SET.
File Operation Objective Type Questions (1 Marks)
There are four alternative answers for each part of the questions. Select the correct one and write in your answer book:
Question 1.
The last location of a file is:
(a) EOF
(b) END
(c) LAST
(d) CLOSE.
(a) EOF
Question 2.
Which function is used to close a file? (UP 2015)
(a) CLOSE
(b) END
(c) EOF
(d) None of these.
(d) None of these (a file is closed with the fclose() function).
Question 3.
Which function is used to read a line from file:
(a) gets ()
(b) fgets ()
(d) gets ().
(b) fgets ()
Question 4.
Which function sets the file pointer associated with a stream to a new position:
(a) fseek ()
(b) perror
(c) rename
(d) rewind.
(a) fseek ()
————————————————————
All Chapter UP Board Solutions For Class 10 computer science
All Subject UP Board Solutions For Class 10 Hindi Medium
Remark:
We hope that these UP Board Class 10 computer science NCERT Solutions in Hindi have proved useful for your studies. If you have any kind of doubt related to this, you can ask by commenting in the comment box.
If these notes helped you, you can share them with your Classmates & Friends and share HindiLearning.in on social media; this will boost our motivation and we will be able to upload more material like this for you.
Best wishes for your future!!
|
2021-01-23 00:38:04
|
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 1, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.24599553644657135, "perplexity": 6767.1724869379295}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2021-04/segments/1610703531702.36/warc/CC-MAIN-20210123001629-20210123031629-00217.warc.gz"}
|
https://lavelle.chem.ucla.edu/forum/viewtopic.php?f=14&t=64081
|
## Energy Levels
$c=\lambda v$
David Jen 1J
Posts: 96
Joined: Wed Sep 30, 2020 9:33 pm
### Energy Levels
In equations such as E=hR/n^2, Dr. Lavelle explains n as energy levels. However, I don't really understand what energy levels are. Could someone explain it?
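For reference, the formula in the question is the standard result for a hydrogen-like atom (written here assuming $R$ is the Rydberg constant expressed as a frequency, roughly $3.29 \times 10^{15}$ Hz):

$$E_n = -\frac{hR}{n^2}, \qquad \Delta E = E_{final} - E_{initial} = hR\left(\frac{1}{n_{initial}^2} - \frac{1}{n_{final}^2}\right)$$

Each allowed value of $n = 1, 2, 3, \ldots$ is one "energy level." A photon is absorbed when the electron moves up between levels and emitted when it falls back down, with frequency $\nu = |\Delta E| / h$ (and wavelength from $c = \lambda \nu$).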
Yu Jin Kwon 3L
Posts: 96
Joined: Wed Sep 30, 2020 9:41 pm
Been upvoted: 1 time
### Re: Energy Levels
Hi David!
Energy levels are the different electron shells an atom has. When an electron gets excited by an incoming light/photon (and the energy of the photon matches the difference between an n-level and the ground state of the electron), the electron moves up to the respective energy level! The important thing to note is that when electrons get excited and move up the energy levels, they absorb the incoming light's energy. Then, as the excited electron goes back down, it emits that same energy.
I wish I knew how to add images directly into this reply, but here's a link to a nice website and the first photo you see shows an atom and its electron shells/energy levels: https://asoefkersaachemistry.weebly.com/energy-levels-of-atoms.html
Hope this helps!
Kristina Krivenko 3I
Posts: 108
Joined: Wed Sep 30, 2020 9:52 pm
Been upvoted: 1 time
### Re: Energy Levels
To put it in simple words, an energy level is a level on which an electron resides.
However, in reality, it's more complicated than that; since electrons are so small, it's difficult to determine where they are at a given time, and they also don't stay in one "spot" but rather constantly move around. Therefore, energy levels are fixed distances from the nucleus where an electron can be found.
They're also called atomic orbitals, and an orbital is a three dimensional description of where an electron is most likely to be found around the atom.
Hope this helps :)
Chesca Legaspi 2E
Posts: 99
Joined: Wed Sep 30, 2020 9:56 pm
### Re: Energy Levels
Can electrons jump up multiple energy levels at once or do they have to reach a certain stage before they can reach the next energy level afterwards? For example, if an electron was at energy level n = 1, could it jump up to n = 4, or would it have to pass/stop(?) through energy levels n = 2 and n = 3 first?
David Jen 1J
Posts: 96
Joined: Wed Sep 30, 2020 9:33 pm
### Re: Energy Levels
Hey Chesca, electrons don't technically jump/skip levels, they just have enough energy to go all the way to n=4. That means they'd have to have enough energy to make it through n=2 and n=3 before they can make it to n=4.
Margaret Xu 3C
Posts: 95
Joined: Wed Sep 30, 2020 9:36 pm
Been upvoted: 1 time
### Re: Energy Levels
Chesca Legaspi 2F wrote:Can electrons jump up multiple energy levels at once or do they have to reach a certain stage before they can reach the next energy level afterwards? For example, if an electron was at energy level n = 1, could it jump up to n = 4, or would it have to pass/stop(?) through energy levels n = 2 and n = 3 first?
Hey Chesca, great question! I believe that if an electron is excited enough, it can jump multiple levels at once without having to stop at the level preceding it. I think the electron would inevitably pass the lower levels though; for example, if the electron jumps from n = 1 to n = 4, it has to go past n = 2 and n = 3.
DominicMalilay 1F
Posts: 107
Joined: Wed Sep 30, 2020 9:36 pm
### Re: Energy Levels
Yes, to add on to Margaret's point, the electrons' whereabouts can be thought of as in s,p,d, and f orbitals, but do not take this like they are in one spot in the orbital. Electrons are constantly moving around in their respective energy levels and Heisenberg's uncertainty principle complicates this situation even more.
jessicaosuna_1F
Posts: 98
Joined: Wed Sep 30, 2020 9:51 pm
### Re: Energy Levels
Energy levels refer to set distances from an atom's nucleus. Electrons occupy these levels and can jump from level to level, but do not occupy the space between. They jump from a lower energy level to a higher one by absorbing energy and becoming excited.
Chudi Onyedika 3A
Posts: 94
Joined: Wed Sep 30, 2020 9:37 pm
### Re: Energy Levels
Energy level is dependent on an electron's placement outside of the nucleus. It is based on the distance of the electron orbital from the nucleus of an atom. The higher the energy level, the farther the electron orbital is from the nucleus.
Mari Williams 1K
Posts: 89
Joined: Wed Sep 30, 2020 9:53 pm
### Re: Energy Levels
Electron levels are helpful in visualizing the concept that their energy is measured discretely, not continuously. There are "steps" of energy levels, and the energy level they fall from corresponds to the energy released.
Yichen Fan 3A
Posts: 92
Joined: Wed Sep 30, 2020 9:59 pm
### Re: Energy Levels
I am still confused about electrons skipping energy levels. To put it in the context of orbitals: if an electron is excited from level 1 to level 4, does it travel through the p and d orbitals and finally enter the f orbital, or does it go straight from the s orbital to the f orbital?
David Liu 1E
Posts: 90
Joined: Wed Sep 30, 2020 10:07 pm
### Re: Energy Levels
I think that the levels are more of a concept to help us understand electrons, and as previous posts above have stated with the uncertainty principle, electrons probably travel without stopping to levels as they "jump", as an electron is only in one space for a split second. There's probably a fraction of the second where the electron is in the first level and a second later, or less it's in the 4th level
|
2021-03-04 20:28:39
|
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 1, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.4493253827095032, "perplexity": 1033.2262045718187}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2021-10/segments/1614178369512.68/warc/CC-MAIN-20210304174506-20210304204506-00399.warc.gz"}
|
http://math.stackexchange.com/questions/64603/distributivity-of-a-dot-product-like-operation
|
# Distributivity of a dot product-like operation
Let $b_1, \ldots, b_n \in \mathbb{N}$. For $x, y \in \mathbb{Z}^n$, define $x \cdot y$ as $\newcommand{\lcm}{\operatorname{lcm}}$ $$x \cdot y = \left(\sum_{i=1}^n (x_i \text{ mod } b_i)(y_i \text{ mod } b_i) \text{ mod } b_i\right) \text{ mod } \lcm(b_1, \ldots, b_n)$$
Informally, each multiplication $x_iy_i$ is carried over $\mathbb{Z}/b_i\mathbb{Z}$ and the sum is carried over $\mathbb{Z}/ \lcm(b_1, \ldots, b_n)\mathbb{Z}$.
My question is the following: does $x \cdot (y + z) \equiv x \cdot y + x \cdot z \quad (\text{mod} \lcm(b_1, \ldots, b_n))$ hold?
-
You have a sum on $i$ with $i$ not appearing in the summand. If that's not what you intended, please edit. – Gerry Myerson Sep 14 '11 at 22:26
Thanks, I've fixed it. – Li-thi Sep 14 '11 at 22:27
For $n>1$, no. The individual $\mathbb{Z}/b_i\mathbb{Z}$'s will cycle around to $0$ under addition before the same occurs in the broader group of $\mathbb{Z}/\lcm(b_1,\dots,b_n)\mathbb{Z}$. Namely, let $e_1=(1,0,0,\dots)$. Then
$$e_1\cdot e_1\equiv 1\mod\ell$$ but $$(e_1+\cdots+e_1)\cdot e_1\equiv(b_1e_1)\cdot e_1\equiv0\cdot e_1\equiv0\mod\ell$$ while $e_1\cdot e_1+\cdots+e_1\cdot e_1$ with $b_1$ terms is congruent to $b_1(e_1\cdot e_1)\equiv b_1\not\equiv 0 \mod\ell$.
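A quick numerical confirmation of the failure of distributivity (a throwaway sketch; the moduli $b = (2, 3)$ and the vectors are chosen only for illustration):

from math import lcm  # Python 3.9+

def dot(x, y, b):
    # the mod-b "dot product" defined in the question
    return sum(((xi % bi) * (yi % bi)) % bi for xi, yi, bi in zip(x, y, b)) % lcm(*b)

b = (2, 3)
x = y = z = (1, 0)  # e_1
lhs = dot(x, (y[0] + z[0], y[1] + z[1]), b)    # x . (y + z)
rhs = (dot(x, y, b) + dot(x, z, b)) % lcm(*b)  # x . y + x . z  (mod lcm)
print(lhs, rhs)  # prints 0 2, so the two sides differ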
|
2014-03-11 06:11:47
|
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 1, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.9724285006523132, "perplexity": 337.7471332702566}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2014-10/segments/1394011138462/warc/CC-MAIN-20140305091858-00065-ip-10-183-142-35.ec2.internal.warc.gz"}
|
https://proxieslive.com/tag/elements/
|
## Using the elements of one Matrix to form a new Matrix with specified rules
Given a matrix [a], how to get matrices [b] and [c] based on the following two rules?
1. rule [a]->[b]: Strike out corresponding term in [a] and take product of the remaining two terms in the same column.
2. rule [a]->[c]: Strike out the row and column containing the corresponding term in [a] and take sum of cross products in the 2×2 matrix remaining.
x,y,z can be replaced with 1,2,3; for example, $$a_{xy},a_{yz}$$ can be replaced with a12,a23; [a] can be replaced with:
a = {{a11, a12, a13}, {a21, a22, a23}, {a31, a32, a33}}
Thank you
Matrix [a]
Matrix [b]
Matrix [c]
## Minimum pair-wise XOR of elements from two sets
I have two sets, $$A$$ and $$B$$, which both contain a large amount of hashed values. What is the most efficient way of computing:
$$\min_{i,j} A_i \otimes B_j$$
## How do I input a 2d matrix when no spacing is given in adjacent elements while taking the input in c++?
Thanks for looking over, so I’m trying to take a nxn matrix as input where in the input is in the following format example :
4 1123 3442 5632 2444
You see the input format: that's my problem. I don't want those elements to be stuck together, but C++ is reading each row as if it were a single number, which means "cin" is reading only n elements while I expect all n×n elements to be read separately. Pardon me if the question wasn't up to the mark, as this is my first question.
## Selecting k rows and k columns from a matrix to maximize the sum of the k^2 elements
Suppose $$A$$ is an $$n \times n$$ matrix, and $$k \ge 1$$ is an integer. We want to find $$k$$ distinct indices from $$\{1, 2, \ldots, n\}$$, denoted as $$i_1, \ldots, i_k$$, such that
$$\sum_{p, q = 1}^k A_{i_p, i_q}$$
is maximized. In words, we seek $$k$$ rows and the corresponding $$k$$ columns, such that the intersected $$k^2$$ elements of $$A$$ have the largest sum.
This problem can be formulated as a quadratic assignment problem, which is NP-hard and admits no polynomial time algorithm with constant approximation bound. I’m just wondering if for this specific problem (as a special case of quadratic assignment), there exists a poly-time algorithm with constant approximation bound. Thank you.
## Finding largest sum of $k$ elements below threshold
I was working on a project and am stuck in the middle, unable to find an optimal method to solve this problem. Consider an array $$A$$ of $$n$$ elements. I have to choose $$k$$ elements such that their sum is maximal under the constraint of being less than a given value $$x$$. My approach for this is the naive $$O(n^k)$$ algorithm, but this would take a lot of time for large $$n$$.
This isn't a homework problem.
## Prove that if you arbitrarily pair up the elements of an array A of size n to get n/2 pairs,
then you discard pairs of unequal elements and keep just one from the pairs of matching elements. Then the resulting collection will have a majority if and only if A had a majority, i.e. there exists an element with more than floor(n/2) occurrences.
I am very confused about how to go about proving this. It is from a textbook DPV problem 2.23. I am trying to prove it but I end up disproving it.
I.e. Suppose we have an array of n elements A[], that has a majority of element x. that means A.count(x) > floor(n/2). Now suppose that if we add two different elements, [a, b] to array A, x is no longer the majority. Then: A.count(x) <= floor(n/2) + 1 -> A.count(x) = floor(n/2) + 1. But now if we apply the same procedure and pair [a, b] together, then by definition the resulting array should have a majority, even though the original [….] o [a, b] did not.
## How to live edit CSS for dynamic javascript elements using developer tools Style Editor?
I have to style a javascript element that is available only when I use the mouse. When I try to select the element using the Firefox Development Toolbar, it disappears.
Is there a way to inspect elements that are dynamically generated?
## Pick out elements from a list of lists using criteria
Consider a list of lists in this form (with a shape $$m \times n \times 3$$):
{ {{a1, R1, c11}, {a2, R1, c12}, {a3, R1, c13}, ..., {an, R1, c1n}}, {{a1, R2, c21}, {a2, R2, c22}, {a3, R2, c23}, ..., {an, R2, c2n}}, ..., {{a1, Rm, cm1}, {a2, Rm, cm2}, {a3, Rm, cm3}, ..., {an, Rm, cmn}} }
where in each outer list, the 2nd element $$R_i$$ is fixed ($$i = 1, 2, …, m$$), the 1st element changes from $$a_1$$ to $$a_n$$, and the 3rd element $$c_{ij}$$ is normally a complex number whose imaginary part can change from positive to negative or from negative to positive several times. Here is some sample data for testing.
I want to pick out the neighbor lists whenever the imaginary part of $$c_{ij}$$ changes its sign, say, for $$R_2$$, the selected lists are something like $$\{a_j, R_2, c_{2j}\}$$ and $$\{a_{j+1}, R_2, c_{2,j+1}\}$$, where $$\text{Im} c_{2,j} < 0$$ and $$\text{Im} c_{2,j+1} > 0$$. More generally, for $$R_p$$ I pick out $$\{a_j, R_p, c_{pj}\}$$ and $$\{a_{j+1}, R_p, c_{p,j+1}\}$$, and then plot a curve with ListLinePlot[{{R1, a01}, {R2, a02}, ..., {Rp, a0p}, ..., {Rm, a0m}}], in which $$a_{0j} = (a_j + a_{j+1}) / 2$$. In other words, I want to plot a parameter curve w.r.t. the 1st and 2nd elements, across which the imaginary part of the 3rd element changes sign.
I tried Cases, Select and ParametricPlot, but I am still having trouble to find all the pairs of the neighboring lists when the imaginary part of $$c_{ij}$$ changes its sign.
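Not a Mathematica answer, but a small sketch of the selection logic in Python (the sample numbers are invented) in case it helps to see the neighbouring-pair idea spelled out: for each fixed R, walk adjacent triples, keep each pair where the imaginary part of c changes sign, and record the midpoint of the two a values.

import numpy as np

def crossings(rows):
    # rows: list of (a, R, c) triples for one fixed R, ordered by a
    out = []
    for (a1, R, c1), (a2, _, c2) in zip(rows, rows[1:]):
        if np.sign(c1.imag) != np.sign(c2.imag):
            out.append(((a1 + a2) / 2, R))  # one (a0, R) point of the parameter curve
    return out

sample = [(0.1, 2.0, 1 - 0.3j), (0.2, 2.0, 1 - 0.1j), (0.3, 2.0, 1 + 0.2j)]
print(crossings(sample))  # [(0.25, 2.0)]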
|
2020-04-03 08:34:53
|
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 0, "mathjax_asciimath": 1, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 38, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.6742101311683655, "perplexity": 637.1870808828287}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2020-16/segments/1585370510352.43/warc/CC-MAIN-20200403061648-20200403091648-00111.warc.gz"}
|
https://docs.bioembeddings.com/latest/notebooks/extract_supervised_from_seqvec.html
|
# Colab initialization¶
• install the pipeline in the colab runtime
!pip3 install -U pip > /dev/null
!pip3 install -U bio_embeddings[all] > /dev/null
# Extract secondary structure and subcellular localization predictions from SeqVec¶
In this notebook we will extract annotations from SeqVec embeddings via trained models that can predict secondary structure and subcellular localization
from bio_embeddings.embed import SeqVecEmbedder
We initialize the SeqVec embedder.
embedder = SeqVecEmbedder()
We select an AA sequence. In this case, the sequence is that of Aspartate aminotransferase, mitochondrial
target_sequence = "MALLHSARVLSGVASAFHPGLAAAASARASSWWAHVEMGPPDPILGVTEAYKRDTNSKKMNLGVGAYRDDNGKPYVLPSVRKAEAQIAAKGLDKEYLPIGGLAEFCRASAELALGENSEVVKSGRFVTVQTISGTGALRIGASFLQRFFKFSRDVFLPKPSWGNHTPIFRDAGMQLQSYRYYDPKTCGFDFTGALEDISKIPEQSVLLLHACAHNPTGVDPRPEQWKEIATVVKKRNLFAFFDMAYQGFASGDGDKDAWAVRHFIEQGINVCLCQSYAKNMGLYGERVGAFTVICKDADEAKRVESQLKILIRPMYSNPPIHGARIASTILTSPDLRKQWLQEVKGMADRIIGMRTQLVSNLKKEGSTHSWQHITDQIGMFCFTGLKPEQVERLTKEFSIYMTKDGRISVAGVTSGNVGYLAHAIHQVTK"
We produce the embeddings of the above sequence. Since we only have one sequence, we use the simple embed function, rather than the embed_many or embed_batch, which we would instead use if we had multiple sequences to embed.
embedding = embedder.embed(target_sequence)
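As an aside, if we had several sequences, the embed_many function mentioned above could be used instead. A minimal sketch (assuming it accepts an iterable of sequences and yields one embedding per sequence; the second, truncated sequence below is made up purely for illustration):

sequences = [target_sequence, target_sequence[:100]]
embeddings = list(embedder.embed_many(sequences))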
The bio_embeddings pipeline includes some models trained on embeddings for the prediction of Secondary Structure and Subcellular Localization. In the following we make use of these models.
To speed up processing, we have downloaded the model weights of the supervised subcellular localization and secondary structure prediction models from here.
from bio_embeddings.extract.basic import BasicAnnotationExtractor
annotations_extractor = BasicAnnotationExtractor("seqvec_from_publication")
annotations = annotations_extractor.get_annotations(embedding)
Let’s see what annotations are available from SeqVec
annotations._fields
Let’s print the subcellular localization predicted via the SeqVec embeddings
print(f"The subcellular localization predicted from the embedding is: {annotations.localization.value}")
For AA-annotations, e.g. secondary structure, we can use a helper function to format the extracted annotations as a single string:
from bio_embeddings.utilities.helpers import convert_list_of_enum_to_string
print("The predicted secondary structure (red) of the sequence is:")
for (AA, DSSP3) in zip(target_sequence, convert_list_of_enum_to_string(annotations.DSSP3)):
    print(f"\x1B[30m{AA}\x1b[31m{DSSP3}")
|
2023-03-29 16:36:48
|
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 1, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.5889782905578613, "perplexity": 9129.06636339127}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2023-14/segments/1679296949009.11/warc/CC-MAIN-20230329151629-20230329181629-00072.warc.gz"}
|
https://datascience.stackexchange.com/questions/102348/how-to-group-by-ids-and-count-the-number-of-groups-with-occurrence-of-a-variable
|
# How to group by IDs and count the number of groups with occurrence of a variable after first point?
Language: Python 3.8
I have a dataframe that consists of a series of people (each appearing multiple times in the dataframe), dates, and binary variables. I am trying to figure out how many people after a specific event (marked by one of the binary variables) went on to have other positive events. So for example, say the table looks something like this:
| ID | Date | Earthquake | Fire | Storm Damage |
|----|----------|------------|------|--------------|
| 1 | 1/21/21 | 0 | 0 | 0 |
| 2 | 2/3/21 | 1 | 0 | 0 |
| 3 | 2/4/21 | 0 | 1 | 0 |
| 1 | 2/10/21 | 1 | 0 | 0 |
| 1 | 2/28/21 | 0 | 1 | 1 |
| 2 | 3/5/21 | 0 | 0 | 1 |
So in this example, after the first incidence of earthquake, one person went on to have a fire and two went on to have storm damage.
My problem is, I can't quite figure out how to do this. I think I need to use groupby to group all the IDs together, but I'm a bit stuck after that point.
I am not sure if you need to group all the records which would create a group by object, not a dataframe.
I initialized a new_df dataframe with the data you mentioned in your question and then used the group by code. However, I think you are looking for the sorted dataframe.
Check out the code and the output mentioned below:
grouped = new_df.groupby("ID")
print(grouped.first())
print("Type of grouped:"+str(type(grouped)))
sorted_df = new_df.sort_values(["ID"], ascending=[1])
print("Type of Sort values:"+str(type(sorted_df)))
print(sorted_df)
Output:
ID Date Earthquake Fire Storm Damage
0 1 1/21/21 0 0 0
1 2 2/3/21 1 0 0
2 3 2/4/21 0 1 0
3 1 2/10/21 1 0 0
4 1 2/28/21 0 1 1
5 2 3/5/21 0 0 1
Date Earthquake Fire Storm Damage
ID
1 1/21/21 0 0 0
2 2/3/21 1 0 0
3 2/4/21 0 1 0
Type of grouped:<class 'pandas.core.groupby.generic.DataFrameGroupBy'>
Type of Sort values:<class 'pandas.core.frame.DataFrame'>
ID Date Earthquake Fire Storm Damage
0 1 1/21/21 0 0 0
3 1 2/10/21 1 0 0
4 1 2/28/21 0 1 1
1 2 2/3/21 1 0 0
5 2 3/5/21 0 0 1
2 3 2/4/21 0 1 0
• I might be misunderstanding, but I don't think that would help me. It works for a small dataframe, but my problem consists of a dataframe with > 200,000 rows - the one posted was just a visual example to make the problem clear.
– RLB
Sep 24, 2021 at 19:49
I actually found what I think to be a good solution the other day, please let me know if I am missing something important.
1. First, I ensured that the dataframe was completely sorted by date:
df.sort_values(by=['Date'], inplace=True)
2. Then, I used groupby followed by cumulative sum to create a variable that would count the number of instances up to the current point in the dataframe:
df['Earthquake_cumulative'] = df.groupby(['ID'])['Earthquake'].cumsum().astype(int)
1. I then dropped the rows with Earthquake's cumulative sum < 0, then did another cumulative sum for the other variables
df = df[df.Earthquake_cumulative >= 1]
df['Fire_cumulative'] = df.groupby(['ID'])['Fire'].cumsum().astype(int)
df['Storm Damage_cumulative'] = df.groupby(['ID'])['Storm Damage'].cumsum().astype(int)
1. Finally, I dropped all the duplicate IDs, keeping the final one (the last date) and counted the ones where Fire and Storm Damage were greater than 0.
df.drop_duplicates(subset='ID', keep='last')
dfFire = df[df["Fire_cumulative"] > 0]
dfStorm = df[df["Storm Damage_cumulative"] > 0]
num_rowsFire = dfFire.shape[0]
num_rowsStorm = dfStorm.shape[0]
I'm not sure if it's the most efficient method, but it seems to work as far as I can tell.
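For what it's worth, essentially the same logic can be collapsed into a shorter pass. This is only a sketch under the assumptions that df holds the raw data, the Date column is already a datetime, and the column names match the example table; the names after_quake and counts are just illustrative:
import pandas as pd

# Sketch: for each event type, count how many IDs had that event at or after
# their first earthquake. Assumes columns ID, Date (datetime), Earthquake,
# Fire, Storm Damage.
df = df.sort_values("Date")
after_quake = df.groupby("ID")["Earthquake"].cumsum() >= 1
counts = (
    df[after_quake]
    .groupby("ID")[["Fire", "Storm Damage"]]
    .max()   # 1 if the ID had at least one such event from the first earthquake on
    .sum()   # number of IDs per event type
)
print(counts)  # Fire: 1, Storm Damage: 2 for the example table
The only real difference from the step-by-step version above is that the per-ID maximum replaces the cumulative-sum-plus-drop-duplicates bookkeeping.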
https://faculty.math.illinois.edu/Macaulay2/doc/Macaulay2/share/doc/Macaulay2/Macaulay2Doc/html/_coefficients.html
# coefficients -- monomials and their coefficients
## Synopsis
• Usage:
(M,C) = coefficients f
• Inputs:
• f, a one-row Matrix with n columns, say, or a RingElement, to be interpreted as a one-by-one matrix. (A future implementation will handle matrices with more than one row.)
• Optional inputs:
• Variables => a list, default value null, a list v of variables. If a value for this option is not specified, all of the (top-level) variables are used.
• Monomials => ..., default value null, a list or one-row matrix of monomials, each of which is formed using just variables in v.
• Outputs:
• M, either the value of the Monomials option, if specified (converted to a one-row matrix, if necessary), or a one-row matrix of those monomials appearing in f that involve just the variables in v, in descending order. Let m denote the number of columns it has.
• C, the m by n matrix C such that C_(i,j) is the coefficient in f_(0,j) of the monomial M_(0,i). In other words, C is the unique matrix not involving the (specified) variables such that M*C == f, unless a value was specified for the Monomials option that did not include all the monomials in the variables v used by f.
## Description
i1 : R = QQ[a,b,c,d,e,f][x,y];

i2 : F = a*x^2+b*x*y+c*y^2

o2 = a*x^2 + b*x*y + c*y^2

o2 : R

i3 : (M,C) = coefficients F

o3 = (| x2 xy y2 |, {2, 0} | a |)
                    {2, 0} | b |
                    {2, 0} | c |

o3 : Sequence
The resulting matrices have the following property.
i4 : M*C === matrix F

o4 = true
The Sylvester matrix of two generic quadratic forms:
i5 : G = d*x^2+e*x*y+f*y^2

o5 = d*x^2 + e*x*y + f*y^2

o5 : R

i6 : P = matrix{{x*F,y*F,x*G,y*G}}

o6 = | ax3+bx2y+cxy2 ax2y+bxy2+cy3 dx3+ex2y+fxy2 dx2y+exy2+fy3 |

o6 : Matrix R^1 <--- R^4

i7 : (M,C) = coefficients P

o7 = (| x3 x2y xy2 y3 |, {3, 0} | a 0 d 0 |)
                         {3, 0} | b a e d |
                         {3, 0} | c b f e |
                         {3, 0} | 0 c 0 f |

o7 : Sequence

i8 : M*C === P

o8 = true
We may give the monomials directly. This is useful if we are taking coefficients of several elements or matrices, and need a consistent choice of monomials.
i9 : (M,C) = coefficients(P, Monomials=>{x^3,y^3,x^2*y,x*y^2})

o9 = (| x3 y3 x2y xy2 |, {3, 0} | a 0 d 0 |)
                         {3, 0} | 0 c 0 f |
                         {3, 0} | b a e d |
                         {3, 0} | c b f e |

o9 : Sequence
If not all of the monomials are used, no error is signaled, but M*C == P no longer holds.
i10 : (M,C) = coefficients(P, Monomials=>{x^3,y^3})

o10 = (| x3 y3 |, {3, 0} | a 0 d 0 |)
                  {3, 0} | 0 c 0 f |

o10 : Sequence

i11 : M*C == P

o11 = false
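For readers more familiar with Python, a rough analogue of extracting coefficients against a fixed, consistent monomial list can be sketched with SymPy. This is only an illustrative sketch, not part of Macaulay2: SymPy has no direct counterpart of the Monomials option, and the names P, monomials, M, and C below simply mirror the session above.
import sympy as sp

x, y, a, b, c, d, e, f = sp.symbols("x y a b c d e f")
F = a*x**2 + b*x*y + c*y**2
G = d*x**2 + e*x*y + f*y**2

# The four columns of P from the Sylvester-matrix example above, expanded.
P = [sp.expand(p) for p in (x*F, y*F, x*G, y*G)]

# A fixed, consistent monomial basis (the role of the Monomials option).
monomials = [x**3, x**2*y, x*y**2, y**3]

# C[i, j] is the coefficient of monomials[i] in P[j].
C = sp.Matrix([[p.coeff(m) for p in P] for m in monomials])
M = sp.Matrix([monomials])

# Analogue of the property M*C === P checked in the Macaulay2 session.
assert list((M * C).applyfunc(sp.expand)) == P
As in the Macaulay2 example with an incomplete monomial list, dropping monomials from the basis here would make the final check fail.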