My suggestion, if you have really worked through most of Hartshorne, is to begin reading papers, referring to other books as you need them.
One place to start is Mazur's "Eisenstein Ideal" paper. The suggestion of Cornell--Silverman is also good. (This gives essentially the complete proof, due to Faltings, of the Tate conjecture for abelian varieties over number fields, and of the Mordell conjecture.) You might also want to look at Tate's original paper on the Tate conjecture for abelian varieties over finite fields, which is a masterpiece.
Another possibility is to learn etale cohomology (which you will have to learn in some form or other if you want to do research in arithmetic geometry). For this, my suggestion is to try to work through Deligne's first Weil conjectures paper (in which he proves the Riemann hypothesis), referring to textbooks on etale cohomology as you need them.
# Tutorial for R Package match2C
options(scipen = 99)
options(digits = 2)
library(match2C)
library(ggplot2)
#> Warning: package 'ggplot2' was built under R version 4.0.5
library(mvtnorm)
# Introduction
## Preparation of data
This file serves as an introduction to the R package match2C. We first load the package and an illustrative dataset from Rouse (1995). For the purpose of illustration, we will mostly work with 6 covariates: two nominal (black and female), two ordinal (father's education and mother's education), and two continuous (family income and test score). Treatment is an instrumental-variable-defined exposure, equal to 1 if the subject is doubly encouraged, meaning that both the excess travel time and the excess four-year college tuition are larger than the median, and equal to 0 if the subject is doubly discouraged. There are 1,122 subjects that are doubly encouraged (treated) and 1,915 that are doubly discouraged (control).
Below, we specify covariates to be matched (X) and the exposure (Z), and fit a propensity score model.
attach(dt_Rouse)
X = cbind(female,black,bytest,dadeduc,momeduc,fincome) # covariates to be matched
Z = IV # IV-defined exposure in this dataset
# Fit a propensity score model
propensity = glm(IV ~ female + black + bytest + dadeduc + momeduc + fincome,
                 family = binomial)$fitted.values
# Number of treated and control
n_t = sum(Z) # 1,122 treated
n_c = length(Z) - n_t # 1,915 control
dt_Rouse$propensity = propensity
detach(dt_Rouse)
## Glossary of Matching Terms
We define some useful statistical matching terminologies:
• Bipartite Matching: Matching control subjects to treated subjects based on a binary treatment status.
• Tripartite Matching: Matching control subjects to treated subjects based on a tripartite network. A tripartite network consists of two bipartite networks: a left network and a right network, where the right network is a mirror copy of the left network in nodes, but with possibly different distance structure. Typically the left network is responsible for close pairing and the right network is responsible for balancing; See Zhang et al., (JASA, 2021) for details.
• Pair Matching: Matching one control subject to one treated subject.
• Optimal Matching: Matching control subjects to treated subjects such that some properly defined sum of total distances is minimized.
• Propensity Score: The propensity score is the conditional probability of assignment to a particular treatment given a vector of observed covariates (Rosenbaum and Rubin, 1983).
• Mahalanobis Distance: A multivariate measure of covariate distance between units in a sample (Mahalanobis, 1936). The squared Mahalanobis distance between two units is the difference in their covariate vectors, scaled by the inverse of the sample covariance matrix; it therefore takes into account the correlation structure among covariates. The distance is zero if two units have the same value for all covariates and increases as two units become more dissimilar.
• Exact Matching: Matching cases to controls requiring the same value of a nominal covariate.
• Fine Balance: A matching technique that balances exactly the marginal distribution of one nominal variable or the joint distribution of several nominal variables in the treated and control groups after matching (Rosenbaum et al., 2007; Yu et al., 2020).
For more details on statistical matching and statistical inference procedures after matching, see Observational Studies (Rosenbaum, 2002) and Design of Observational Studies (Rosenbaum, 2010).
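To make the Mahalanobis distance in the glossary above concrete, here is a minimal sketch. It is written in Python purely for illustration, and the helper name `mahalanobis_sq` is ours, not part of match2C:

```python
# Illustrative sketch: squared Mahalanobis distance between two units.
# cov_inv is the inverse of the sample covariance matrix of the covariates.
def mahalanobis_sq(x, y, cov_inv):
    diff = [xi - yi for xi, yi in zip(x, y)]
    # d^2 = diff^T * cov_inv * diff
    tmp = [sum(cov_inv[i][j] * diff[j] for j in range(len(diff)))
           for i in range(len(diff))]
    return sum(di * ti for di, ti in zip(diff, tmp))

# With an identity covariance matrix the distance reduces to squared
# Euclidean distance: (1-0)^2 + (2-0)^2 = 5.
identity = [[1, 0], [0, 1]]
d2 = mahalanobis_sq([1.0, 2.0], [0.0, 0.0], identity)
```

With a non-identity `cov_inv`, correlated covariates are down-weighted, which is exactly why the Mahalanobis distance is preferred over raw Euclidean distance for matching.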
# Statistical Matching Workflow: Match, Check Balance, and (Possibly) Iterate
## An Overview of the Family of Three Matching Functions match_2C, match_2C_mat, and match_2C_list
In the package match2C, three functions are primarily responsible for the main task of statistical matching. These three functions are match_2C, match_2C_mat, and match_2C_list. We will examine their differences more closely and illustrate their usage with numerous examples in later sections. In this section we give a high-level outline of what each of them does. In short, the three functions have the same output format (details in the next section) but differ in their inputs.
Function match_2C_mat takes as input at least one distance matrix. A distance matrix is an n_t-by-n_c matrix whose ij-th entry encodes a measure of distance (or similarity) between the i-th treated and the j-th control subject. Hence, function match_2C_mat is most handy for users who are familiar with constructing and working with distance matrices. One commonly-used way to construct a distance matrix is to use the function match_on in the package optmatch (Hansen, 2007).
Function match_2C_list is similar to match_2C_mat except that it requires at least one distance list as input. A list representation of a treatment-by-control distance matrix consists of the following arguments:
• start_n: a vector containing the node numbers of the start nodes of each arc in the network.
• end_n: a vector containing the node numbers of the end nodes of each arc in the network.
• d: a vector containing the integer cost of each arc in the network.
Nodes 1, 2, …, n_t correspond to the n_t treatment nodes, and n_t + 1, n_t + 2, …, n_t + n_c correspond to the n_c control nodes. Note that start_n, end_n, and d have the same length, equal to the number of arcs in the network. Functions create_list_from_scratch and create_list_from_mat in the package allow users to construct a (possibly sparse) distance list with a user-specified distance measure. We will discuss how to construct distance lists in later sections.
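As an illustration of this list representation (a Python sketch of the idea only, not the package's own create_list_from_mat), converting a small treated-by-control distance matrix into the (start_n, end_n, d) triple might look like:

```python
# Convert a treated-by-control distance matrix into the (start_n, end_n, d)
# list representation: treated nodes are numbered 1..n_t and control nodes
# are numbered n_t+1..n_t+n_c, following the convention described above.
def list_from_matrix(dist_mat):
    n_t = len(dist_mat)
    start_n, end_n, d = [], [], []
    for i in range(n_t):
        for j, dist in enumerate(dist_mat[i]):
            start_n.append(i + 1)        # treated node number
            end_n.append(n_t + j + 1)    # control node number
            d.append(int(round(dist)))   # arc costs are integers
    return start_n, end_n, d

# Two treated units, two controls:
start_n, end_n, d = list_from_matrix([[10, 20], [30, 40]])
# start_n = [1, 1, 2, 2]; end_n = [3, 4, 3, 4]; d = [10, 20, 30, 40]
```

Dropping an entry from all three vectors simultaneously deletes the corresponding arc, which is how a sparse distance list differs from a dense matrix.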
Function match_2C is a wrapper around match_2C_list with pre-specified distance list structures. For the left network, a Mahalanobis distance between covariates X is adopted; for the right network, an L1 distance between propensity scores is used. A large penalty is applied so that the algorithm prioritizes balancing the propensity score distributions in the treated and matched control groups, followed by minimizing the sum of within-matched-pair Mahalanobis distances. Function match_2C further allows fine-balancing the joint distribution of a few key covariates. The hierarchy goes in the order fine balance >> propensity score distribution >> within-pair Mahalanobis distance.
## Object Returned by match_2C, match_2C_mat, and match_2C_list
Objects returned by the family of matching functions match_2C, match_2C_mat, and match_2C_list are the same in format: a list of the following three elements:
• feasible: 0/1 depending on the feasibility of the matching problem;
• data_with_matched_set_ind: a data frame that is the same as the original data frame, except that a column called matched_set and a column called distance are added to it. Variable matched_set assigns 1,2,…,n_t to each matched set, and NA to controls not matched to any treated. Variable distance records the control-to-treated distance in each matched pair, and assigns NA to all treated and controls that are left unmatched. If matching is not feasible, NULL will be returned;
• matched_data_in_order: a data frame organized in the order of matched sets and otherwise the same as data_with_matched_set_ind. NULL will be returned if the matching is infeasible.
Let’s take a look at an example output returned by the function match_2C_list. The matching problem is indeed feasible:
# Check feasibility
matching_output_example$feasible
#> [1] 1
Let’s take a look at the data frame data_with_matched_set_ind. Note that it is indeed the same as the original dataset except that a column matched_set and a column distance are appended. Observe that the first six instances belong to 6 different matched sets; therefore matched_set runs from 1 to 6. The first six instances are all treated subjects, so distance is NA.
# Check the original dataset with two new columns
head(matching_output_example$data_with_matched_set_ind, 6)
#> educ86 twoyr female black hispanic bytest dadsome dadcoll momsome momcoll
#> 1 12 1 1 1 0 41 0 0 0 0
#> 2 14 1 1 0 0 46 0 0 0 0
#> 3 12 1 1 0 0 60 0 0 0 0
#> 4 14 1 1 0 0 61 0 0 1 0
#> 5 16 1 1 0 0 46 0 0 0 0
#> 6 12 1 0 0 0 60 0 1 0 0
#> fincome fincmiss IV dadneither momneither dadeduc momeduc test_quartile
#> 1 9500 0 1 0 0 0 0 1
#> 2 18000 0 1 0 0 0 0 1
#> 3 22500 0 1 1 0 1 0 4
#> 4 22500 0 1 0 0 0 2 4
#> 5 0 1 1 1 0 1 0 1
#> 6 62000 0 1 0 0 3 0 4
#> income_quartile propensity matched_set distance
#> 1 1 0.42 1 NA
#> 2 2 0.41 2 NA
#> 3 3 0.35 3 NA
#> 4 3 0.35 4 NA
#> 5 0 0.43 5 NA
#> 6 4 0.31 6 NA
Finally, matched_data_in_order is data_with_matched_set_ind organized in the order of matched sets. Note that the first 2 subjects belong to the first matched set, the next two subjects belong to the second matched set, and so on.
# Check dataframe organized in matched set indices
head(matching_output_example$matched_data_in_order, 6)
#> educ86 twoyr female black hispanic bytest dadsome dadcoll momsome momcoll
#> 1 12 1 1 1 0 41 0 0 0 0
#> 1779 15 0 1 1 0 42 0 0 0 0
#> 2 14 1 1 0 0 46 0 0 0 0
#> 2568 15 0 1 0 1 47 0 0 0 0
#> 3 12 1 1 0 0 60 0 0 0 0
#> 1828 16 0 1 0 0 59 0 0 0 0
#> fincome fincmiss IV dadneither momneither dadeduc momeduc test_quartile
#> 1 9500 0 1 0 0 0 0 1
#> 1779 3500 0 0 1 0 1 0 1
#> 2 18000 0 1 0 0 0 0 1
#> 2568 14000 0 0 0 0 0 0 2
#> 3 22500 0 1 1 0 1 0 4
#> 1828 31500 0 0 1 0 1 0 3
#> income_quartile propensity matched_set distance
#> 1 1 0.42 1 NA
#> 1779 1 0.42 1 1.99
#> 2 2 0.41 2 NA
#> 2568 1 0.41 2 0.29
#> 3 3 0.35 3 NA
#> 1828 3 0.35 3 0.44
## Checking Balance
Statistical matching belongs to the design stage of an observational study. The ultimate goal of statistical matching is to embed observational data into an approximate randomized controlled trial, and the matching process should always be conducted without access to the outcome data. Not looking at the outcome at the design stage means researchers can, in principle, keep adjusting their matched design until some pre-specified design goal is achieved. A rule of thumb is that the standardized difference of each covariate, i.e., the difference in means after matching divided by the pooled standard error before matching, should be less than 0.1.
Function check_balance in the package provides a simple balance check and visualization. In the code chunk below, matching_output_example is an object returned by the family of matching functions match_2C_list/match_2C/match_2C_mat (we give details on how to use these functions later). Function check_balance takes as input a vector of treatment status Z, an object returned by match_2C (or match_2C_mat or match_2C_list), and a vector of covariate names for which we would like to check balance, and outputs a balance table. There are six columns in the balance table:
1. Mean covariate values in the treated group (Z = 1) before matching.
2. Mean covariate values in the control group (Z = 0) before matching.
3. Standardized differences before matching.
4. Mean covariate values in the treated group (Z = 1) after matching.
5. Mean covariate values in the control group (Z = 0) after matching.
6. Standardized differences after matching.
tb_example = check_balance(Z, matching_output_example,
cov_list = c('female', 'black', 'bytest', 'fincome', 'dadeduc', 'momeduc', 'propensity'),
plot_propens = FALSE)
print(tb_example)
#> Z = 1 Z = 0 (Bef) Std. Diff (Bef) Z = 0 (Aft) Std. Diff (Aft)
#> female 0.58 0.56 0.032 0.58 0.0000
#> black 0.19 0.18 0.011 0.18 0.0177
#> bytest 51.88 53.04 -0.098 52.23 -0.0295
#> fincome 21630.12 23439.16 -0.072 21266.04 0.0145
#> dadeduc 1.10 1.17 -0.041 1.09 0.0047
#> momeduc 0.93 1.00 -0.038 0.91 0.0181
#> propensity 0.37 0.37 0.115 0.37 0.0119
Function check_balance may also plot the distribution of the propensity score among the treated subjects, all control subjects, and the matched control subjects by setting the option plot_propens = TRUE and supplying the option propens with the estimated propensity scores, as shown below. In the figure below, the blue curve corresponds to the propensity score distribution among the 1,122 treated subjects, the red curve among the 1,915 control subjects, and the green curve among the 1,122 matched controls. It is evident that after matching, the propensity score distribution aligns better with that of the treated subjects.
tb_example = check_balance(Z, matching_output_example,
cov_list = c('female', 'black', 'bytest', 'fincome', 'dadeduc', 'momeduc', 'propensity'),
plot_propens = TRUE, propens = propensity)
# Introducing the Main Function match_2C
## A Basic Match with Minimal Input
Function match_2C is a wrapper function of match_2C_list with a carefully-chosen distance structure. Compared to match_2C_list and match_2C_mat, match_2C is less flexible; however, it requires minimal input from the user's side, works well in most cases, and is therefore of primary interest to most users. The minimal input to function match_2C is the following:
1. a treatment indicator vector,
2. a matrix of covariates to be matched,
3. a vector of estimated propensity scores, and
4. the original dataset to which matched set information is attached.
By default, match_2C performs a statistical matching that:
1. maximally balances the marginal distribution of the propensity score in the treated and matched control groups, and
2. subject to 1, minimizes the within-matched-pair Mahalanobis distances.
The code chunk below displays how to perform a basic match using function match_2C with minimal input, and then check the balance of such a match. The balance is very good, and the propensity score distributions in the treated and matched control groups almost perfectly align with each other.
# Perform a matching with minimal input
matching_output = match_2C(Z = Z, X = X,
propensity = propensity,
dataset = dt_Rouse)
tb = check_balance(Z, matching_output,
cov_list = c('female', 'black', 'bytest', 'fincome', 'dadeduc', 'momeduc', 'propensity'),
plot_propens = TRUE, propens = propensity)
print(tb)
#> Z = 1 Z = 0 (Bef) Std. Diff (Bef) Z = 0 (Aft) Std. Diff (Aft)
#> female 0.58 0.56 0.032 0.58 0.0000
#> black 0.19 0.18 0.011 0.19 0.0000
#> bytest 51.88 53.04 -0.098 52.03 -0.0123
#> fincome 21630.12 23439.16 -0.072 21146.17 0.0193
#> dadeduc 1.10 1.17 -0.041 1.10 -0.0011
#> momeduc 0.93 1.00 -0.038 0.94 -0.0060
#> propensity 0.37 0.37 0.115 0.37 0.0016
## Incorporating Exact Matching Constraints
Researchers can also incorporate exact matching constraints by specifying the variables to be exactly matched in the option exact. In the example below, we match exactly on father's education and mother's education. The matching algorithm still tries to find a match that maximally balances the propensity score distribution, and then minimizes the treated-to-control total distances, subject to the exact matching constraints. One can check that father's education and mother's education are exactly matched. Moreover, since the matching algorithm separates balancing the propensity score from exact matching, the propensity score distributions are still well balanced.
# Perform a matching with exact matching constraints
matching_output_with_exact = match_2C(Z = Z, X = X,
exact = c('dadeduc', 'momeduc'),
propensity = propensity,
dataset = dt_Rouse)
# Check exact matching
head(matching_output_with_exact$matched_data_in_order[, c('female', 'black', 'bytest', 'fincome', 'dadeduc', 'momeduc',
'propensity', 'IV', 'matched_set')])
#> female black bytest fincome dadeduc momeduc propensity IV matched_set
#> 1 1 1 41 9500 0 0 0.42 1 1
#> 358 1 1 39 22500 0 0 0.41 0 1
#> 2 1 0 46 18000 0 0 0.41 1 2
#> 45 1 0 46 18000 0 0 0.41 0 2
#> 3 1 0 60 22500 1 0 0.35 1 3
#> 2017 1 0 60 9500 1 0 0.37 0 3
# Check overall balance
tb = check_balance(Z, matching_output_with_exact,
cov_list = c('female', 'black', 'bytest', 'fincome', 'dadeduc', 'momeduc', 'propensity'),
plot_propens = TRUE, propens = propensity)
## Incorporating Fine Balancing Constraints
Function match_2C also allows incorporating (near-)fine balancing constraints. (Near-)fine balance refers to maximally balancing the marginal distribution of a nominal variable, or more generally the joint distribution of a few nominal variables, in the treated and matched control groups. Option fb_var in the function match_2C serves this purpose. Once fine balance is turned on, match_2C performs a statistical matching that:
1. maximally balances the marginal distribution of the nominal levels specified in the option fb_var,
2. subject to 1, maximally balances the marginal distribution of the propensity score in the treated and matched control groups, and
3. subject to 2, minimizes the within-matched-pair Mahalanobis distances.
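To make the fine balance requirement itself concrete: fine balance holds when the category counts of the nominal variable agree exactly between the treated group and the matched control group, even if individual pairs do not share the same level. A minimal sketch (in Python, purely illustrative; this is not a match2C function):

```python
from collections import Counter

# Fine balance holds when the marginal distribution (category counts) of a
# nominal variable is identical in the treated and matched control groups.
def is_finely_balanced(treated_levels, matched_control_levels):
    return Counter(treated_levels) == Counter(matched_control_levels)

# Counts match (two 0s and one 1 in each group), even though the pairs
# (0,1), (0,0), (1,0) are not exactly matched on the variable:
balanced = is_finely_balanced([0, 0, 1], [1, 0, 0])      # True
unbalanced = is_finely_balanced([0, 0, 1], [1, 1, 0])    # False
```

This is what distinguishes fine balance from exact matching: the former is a group-level constraint, the latter a pair-level one.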
The code chunk below builds upon the last match by further requiring fine balancing the nominal variable dadeduc:
# Perform a matching with fine balance
matching_output2 = match_2C(Z = Z, X = X,
propensity = propensity,
dataset = dt_Rouse,
fb_var = c('dadeduc'))
We examine the balance below; the variable dadeduc is indeed finely balanced.
# Check balance after fine balancing
tb2 = check_balance(Z, matching_output2,
cov_list = c('female', 'black', 'bytest', 'fincome', 'dadeduc', 'momeduc', 'propensity'),
plot_propens = TRUE, propens = propensity)
print(tb2)
#> Z = 1 Z = 0 (Bef) Std. Diff (Bef) Z = 0 (Aft) Std. Diff (Aft)
#> female 0.58 0.56 0.032 0.58 0.0000
#> black 0.19 0.18 0.011 0.19 0.0000
#> bytest 51.88 53.04 -0.098 52.02 -0.0113
#> fincome 21630.12 23439.16 -0.072 21209.00 0.0168
#> dadeduc 1.10 1.17 -0.041 1.10 0.0000
#> momeduc 0.93 1.00 -0.038 0.93 0.0055
#> propensity 0.37 0.37 0.115 0.37 0.0012
One can further finely balance the joint distribution of multiple nominal variables. The code chunk below finely balances the joint distribution of father’s (4 levels) and mother’s (4 levels) education (4 × 4 = 16 levels in total).
# Perform a matching with fine balance on dadeduc and momeduc
matching_output3 = match_2C(Z = Z, X = X,
propensity = propensity,
dataset = dt_Rouse,
fb_var = c('dadeduc', 'momeduc'))
tb3 = check_balance(Z, matching_output2,
cov_list = c('female', 'black', 'bytest', 'fincome', 'dadeduc', 'momeduc', 'propensity'),
plot_propens = FALSE)
print(tb3)
#> Z = 1 Z = 0 (Bef) Std. Diff (Bef) Z = 0 (Aft) Std. Diff (Aft)
#> female 0.58 0.56 0.032 0.58 0.0000
#> black 0.19 0.18 0.011 0.19 0.0000
#> bytest 51.88 53.04 -0.098 52.02 -0.0113
#> fincome 21630.12 23439.16 -0.072 21209.00 0.0168
#> dadeduc 1.10 1.17 -0.041 1.10 0.0000
#> momeduc 0.93 1.00 -0.038 0.93 0.0055
#> propensity 0.37 0.37 0.115 0.37 0.0012
## Sparsifying the Network to Match Faster and Match Bigger Datasets
Sparsifying a network refers to deleting certain edges in a network. Edges deleted typically connect a treated and a control subject that are unlikely to be a good match. Using the estimated propensity score as a caliper to delete unlikely edges is the most commonly used strategy. For instance, a propensity score caliper of 0.05 would result in deleting all edges connecting one treated and one control subject whose estimated propensity score differs by more than 0.05. Sparsifying the network has potential to greatly facilitate computation (Yu et al., 2020).
Function match_2C allows users to specify two caliper sizes on the propensity score, caliper_left for the left network and caliper_right for the right network. If users are interested in specifying a caliper other than the propensity score and/or specifying an asymmetric caliper (Yu and Rosenbaum, 2020), function match_2C_list serves this purpose (see Section 4 for details). Moreover, users may further trim the number of edges using the options k_left and k_right. By default, each treated subject in the network is connected to each of the n_c control subjects. Option k_left allows users to specify that each treated subject gets connected only to the k_left control subjects who are closest to the treated subject in the propensity score in the left network. For instance, setting k_left = 200 results in each treated subject being connected to at most 200 control subjects closest in the propensity score in the left network. Similarly, option k_right allows each treated subject to be connected to the closest k_right controls in the right network. Options caliper_left, caliper_right, k_left, and k_right can be used together.
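The edge-deletion idea behind combining a propensity score caliper with k-nearest trimming can be sketched as follows (a Python illustration of the concept only; match_2C handles this internally via caliper_left/caliper_right and k_left/k_right):

```python
# For each treated unit, keep only the control units whose propensity score
# lies within the caliper, and among those keep at most the k closest ones.
def sparsify(p_treated, p_control, caliper, k):
    edges = []
    for i, pt in enumerate(p_treated):
        within = [(abs(pt - pc), j) for j, pc in enumerate(p_control)
                  if abs(pt - pc) <= caliper]
        within.sort()  # closest controls first
        edges.extend((i, j) for _, j in within[:k])
    return edges

# Treated unit with p = 0.40 keeps the controls at 0.38 and 0.44
# (within a 0.05 caliper) but drops the control at 0.60:
edges = sparsify([0.40], [0.38, 0.44, 0.60], caliper=0.05, k=2)
# edges == [(0, 0), (0, 1)]
```

Fewer surviving edges means a smaller network for the optimization, which is where the speedups reported below come from.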
Below, we give a simple example illustrating the usage of calipers, contrasting the running time of match_2C with no caliper, with calipers plus trimming on the left network only, and with calipers plus trimming on both networks. In this example, sparsifying both networks reduces the elapsed time from roughly 25 seconds to roughly 3 seconds.
# Timing the vanilla match2C function
ptm <- proc.time()
matching_output2 = match_2C(Z = Z, X = X,
propensity = propensity,
dataset = dt_Rouse)
time_vanilla = proc.time() - ptm
# Timing match_2C with calipers and k-nearest trimming on the left network
ptm <- proc.time()
matching_output_one_caliper = match_2C(Z = Z, X = X, propensity = propensity,
caliper_left = 0.05, caliper_right = 0.05,
k_left = 100,
dataset = dt_Rouse)
time_one_caliper = proc.time() - ptm
# Timing match_2C with calipers and k-nearest trimming on both networks
ptm <- proc.time()
matching_output_double_calipers = match_2C(Z = Z, X = X,
propensity = propensity,
caliper_left = 0.05, caliper_right = 0.05,
k_left = 100, k_right = 100,
dataset = dt_Rouse)
time_double_caliper = proc.time() - ptm
rbind(time_vanilla, time_one_caliper, time_double_caliper)[,1:3]
#> user.self sys.self elapsed
#> time_vanilla 22.1 2.05 24.5
#> time_one_caliper 10.7 0.53 11.4
#> time_double_caliper 3.2 0.05 3.3
Caveat: if caliper sizes are too small, the matching may be unfeasible. See the example below. In such an eventuality, users are advised to increase the caliper size and/or remove the exact matching constraints.
# Perform a matching with a too-small caliper
matching_output_unfeas = match_2C(Z = Z, X = X, propensity = propensity,
dataset = dt_Rouse,
caliper_left = 0.001)
#> Hard caliper fails. Please specify a soft caliper.
#> Matching is unfeasible. Please increase the caliper size or remove
#> the exact matching constraints.
## Force including certain controls into the matched cohort
Sometimes, researchers might want to include certain controls in the final matched cohort. Option include in the function match_2C serves this purpose. The option include is a binary vector (0s and 1s) whose length equals the total number of controls, with 1 in the i-th entry if the i-th control must be included and 0 otherwise. For instance, the match below forces the first 100 controls into our matched samples.
# Create a binary vector with 1's in the first 100 entries and 0 otherwise
# length(include_vec) = n_c
include_vec = c(rep(1, 100), rep(0, n_c - 100))
# Perform a matching with minimal input
matching_output_force_include = match_2C(Z = Z, X = X,
propensity = propensity,
dataset = dt_Rouse,
include = include_vec)
One can check that the first 100 controls in the original dataset are forced into the final matched samples.
matched_data = matching_output_force_include$data_with_matched_set_ind
matched_data_control = matched_data[matched_data$IV == 0,]
head(matched_data_control) # Check the matched_set column
#> educ86 twoyr female black hispanic bytest dadsome dadcoll momsome momcoll
#> 20 14 1 1 1 0 45 0 0 0 0
#> 21 12 1 1 1 0 60 0 0 0 0
#> 22 14 1 0 1 0 48 0 0 0 0
#> 23 12 1 0 1 0 46 0 0 0 0
#> 24 13 1 0 1 0 47 0 0 0 0
#> 25 13 1 0 1 0 39 0 0 0 0
#> income_quartile propensity matched_set distance
#> 25 4 0.36 549 5.55
### Degree Ceremony
What does Pythagoras' Theorem tell you about these angles: 90°, (45+x)° and (45-x)° in a triangle?
### Squareness
The family of graphs of x^n + y^n =1 (for even n) includes the circle. Why do the graphs look more and more square as n increases?
### What Do Functions Do for Tiny X?
Looking at small values of functions. Motivating the existence of the Taylor expansion.
# Loch Ness
##### Stage: 5 Challenge Level:
Remember that the function $f(x) = |x|$ is defined as:
$$f(x) = \begin{cases} x & \text{for } x \geq 0 \\ -x & \text{for } x < 0. \end{cases}$$
Split up the domain of the function.
# Avoid overfitting in regression: alternatives to regularization
Regularization in regression (linear, logistic...) is the most popular way to reduce over-fitting.
When the goal is prediction accuracy (not explanation), are there any good alternatives to regularization, especially suited for big datasets (millions or billions of observations and millions of features)?
• "Big datasets" may mean a lot of observations, a lot of variables or both, and the answer may depend on the number of observations and variables. – Pere Jul 19 '17 at 10:45
• Why not use norm regularisation? For neural networks, there is dropout – seanv507 Jul 19 '17 at 12:05
• The advantage of regularization is that it's computationally cheap. Ensemble methods such as bagging and boosting (etc.) combined with cross validation methods for model diagnostics are a good alternative, but it will be a much more costly solution. – Digio Jul 19 '17 at 14:04
• This might be of interest: stats.stackexchange.com/a/161592/40604 – Dan Jul 19 '17 at 22:51
• To add to the comment by Digio: regularization is cheap compared to bagging/boosting but still expensive compared to the alternative of "no regularization" (see e.g. this post by Ben Recht on how regularization makes deep learning hard). If you have a huge number of samples, no regularization can work well for far cheaper. The model can still generalize well, as @hxd1001 points out. – Berk U. Jul 20 '17 at 19:34
Two important points that are not directly related to your question:
• First, even if the goal is accuracy rather than interpretation, regularization is still necessary in many cases, since it helps ensure "high accuracy" on the real testing / production data set, not just on the data used for modeling.
• Second, if there are a billion rows and a million columns, it is possible that no regularization is needed. This is because the data are huge and many computational models have "limited power", i.e., it is almost impossible to overfit. This is why some deep neural networks have billions of parameters.
Now, about your question. As mentioned by Ben and Andrey, there are some options as alternatives to regularization. I would like to add more examples.
• Use a simpler model (for example, reduce the number of hidden units in a neural network, use a lower-order polynomial kernel in an SVM, reduce the number of Gaussians in a mixture of Gaussians, etc.)
• Stop early in the optimization (for example, reduce the number of epochs in neural network training, or the number of iterations in optimizers such as CG or BFGS)
• Average over many models (for example, random forest, etc.)
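As a toy illustration of the second point (a hedged sketch of the idea, not from any particular library): running gradient descent from zero and stopping early keeps the parameters shrunk toward the origin, which acts much like ridge-style regularization.

```python
# Gradient descent on a 1-D least squares problem, started at w = 0.
# Stopping after a few iterations leaves w shrunk toward 0, much as an
# explicit ridge penalty would; many iterations recover the plain fit.
def gd_least_squares(xs, ys, lr=0.01, steps=10):
    w = 0.0
    for _ in range(steps):
        grad = sum(2 * x * (w * x - y) for x, y in zip(xs, ys)) / len(xs)
        w -= lr * grad
    return w

xs, ys = [1.0, 2.0, 3.0], [2.0, 4.0, 6.0]     # true slope is 2
w_early = gd_least_squares(xs, ys, steps=3)   # stopped early: shrunk toward 0
w_late = gd_least_squares(xs, ys, steps=500)  # near the unregularized fit
# 0 < w_early < w_late, with w_late close to 2
```

The number of steps plays the role of the (inverse) regularization strength, which is why early stopping is often described as implicit regularization.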
• Thanks a lot. The second option (stop early) is what we are trying presently with SGD. It works rather well. We want to compare it with regularization soon. Are you aware of any article that mentions this method? – Benoit Sanchez Jul 19 '17 at 19:47
• There is a hint of a geometric relationship between early stopping with gradient descent, and regularization. For example, ridge regression in its primal form asks for the parameters minimizing the loss function that lie within a solid ellipse centered at the origin, with the size of the ellipse a function of the regularization strength. The ridge parameters lie on the surface of the ellipse if they are different than the un-regularized solution. If you run an ascent starting at the origin, and then stop early, you will be on the boundary of one of these ellipses... – Matthew Drury Jul 19 '17 at 22:47
• Because you followed the gradients, you followed the path to the true minimum, so you will approximately end up around the ridge solution much of the time. I'm not sure how rigorous you can make this train of thought, but there may be a relationship. – Matthew Drury Jul 19 '17 at 22:48
• @BenoitSanchez This paper might be relevant. The authors tackle a different problem (overfitting in eigenvector computation), but the strategy to deal with overfitting is the same (i.e. implicit regularization by reducing computation). The strategy is to solve a cheaper problem that produces an approximate solution (which - I think - is the same as stopping early in the optimization). – Berk U. Jul 20 '17 at 19:44
• @BenoitSanchez I recommend this. Lorenzo's lectures are available on youtube, but this page also has links to a few papers mit.edu/~9.520/fall17/Classes/early_stopping.html – David Kozak Jul 21 '17 at 6:10
Two alternatives to regularization:
1. Have many, many observations
2. Use a simpler model
Geoff Hinton (co-inventor of back propagation) once told a story of engineers that told him (paraphrasing heavily), "Geoff, we don't need dropout in our deep nets because we have so much data." And his response was, "Well, then you should build even deeper nets, until you are overfitting, and then use dropout." Good advice aside, you can apparently avoid regularization even with deep nets, so long as there are enough data.
With a fixed number of observations, you can also opt for a simpler model. You probably don't need regularization to estimate an intercept, a slope, and an error variance in a simple linear regression.
Some additional possibilities to avoid overfitting
• Dimensionality reduction
You can use an algorithm such as principal components analysis (PCA) to obtain a lower dimensional feature subspace. The idea of PCA is that the variation of your $m$ dimensional feature space may be approximated well by an $l \ll m$ dimensional subspace.
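A minimal PCA sketch via the SVD. The synthetic data here is constructed to lie exactly in a 2-dimensional subspace, so the low-rank reconstruction is exact; this is an illustration, not a recipe for real data:

```python
import numpy as np

rng = np.random.default_rng(1)

# Synthetic data lying exactly in a 2-D subspace of a 10-D feature space.
n, m, l = 200, 10, 2
X = rng.normal(size=(n, l)) @ rng.normal(size=(l, m))

Xc = X - X.mean(axis=0)                 # center the data
U, S, Vt = np.linalg.svd(Xc, full_matrices=False)

Z = Xc @ Vt[:l].T                       # n x l scores (reduced features)
X_hat = Z @ Vt[:l] + X.mean(axis=0)     # reconstruction from l components
```

On real data the subspace is only approximate, and one keeps the l components that explain most of the variance.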
• Feature selection (also dimensionality reduction)
You could perform a round of feature selection (eg. using LASSO) to obtain a lower dimensional feature space. Something like feature selection using LASSO can be useful if some large but unknown subset of features are irrelevant.
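A small illustration of the LASSO's feature-selection effect in the special case of an orthonormal design (taking X = I for simplicity), where the solution reduces to coordinate-wise soft-thresholding of the least-squares estimates; this special case is chosen because it has a closed form:

```python
import numpy as np

def soft_threshold(v, t):
    # Proximal operator of the L1 norm: shrinks toward 0, zeroing small entries.
    return np.sign(v) * np.maximum(np.abs(v) - t, 0.0)

# With an orthonormal design, the lasso solution is the soft-thresholded
# least-squares estimate, coordinate by coordinate.
y = np.array([5.0, -3.0, 0.2, 0.1, 0.0, 0.0])  # per-feature OLS estimates
lam = 0.5
w = soft_threshold(y, lam)

selected = np.flatnonzero(w)  # features the lasso keeps: the two large ones
```

For general designs an iterative solver is used instead, e.g. coordinate descent as in scikit-learn's `Lasso`.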
• Use algorithms less prone to overfitting, such as random forest. (Depending on the settings, number of features, etc., these can be more computationally expensive than ordinary least squares.)
Some of the other answers have also mentioned the advantages of boosting and bagging techniques/algorithms.
• Bayesian methods
Adding a prior on the coefficient vector can reduce overfitting. This is conceptually related to regularization: e.g., ridge regression is a special case of maximum a posteriori estimation.
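A sketch of that correspondence: the ridge estimate (MAP with a zero-mean Gaussian prior on the weights) has a closed form, and for any λ > 0 it is shrunk toward zero relative to OLS. The data here is synthetic:

```python
import numpy as np

rng = np.random.default_rng(2)

# Synthetic regression data.
n, d = 30, 5
X = rng.normal(size=(n, d))
y = X @ np.ones(d) + 0.1 * rng.normal(size=n)

lam = 10.0
w_ols = np.linalg.lstsq(X, y, rcond=None)[0]
# Ridge = MAP with a zero-mean Gaussian prior on w; closed-form solution:
w_ridge = np.linalg.solve(X.T @ X + lam * np.eye(d), X.T @ y)
```

For any λ > 0 (and a full-rank design) the ridge estimate has strictly smaller norm than the OLS estimate, which is exactly the shrinkage the prior encodes.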
If you use a model with a solver where you can set the number of iterations/epochs, you can track validation error and apply early stopping: stop the algorithm when the validation error starts increasing.
• This question clearly asks about regression (linear, logistic) models. – Matthew Drury Jul 19 '17 at 22:40
• Technically speaking linear and logistic regression are very simple neural networks. – Andrey Lukyanenko Jul 20 '17 at 3:19
• I don't think that changes my belief that this does not answer the question as asked. If you reworked it to say "if you fit the regression with some form of gradient descent, and applied early stopping" that would be better. – Matthew Drury Jul 20 '17 at 19:31
• Even sklearn has a number of models which support parameter limiting number of iterations. It could be used to track accuracy. But I suppose you are right that the wording isn't exactly correct. – Andrey Lukyanenko Jul 21 '17 at 3:18
Two thoughts:
1. I second the "use a simpler model" strategy proposed by Ben Ogorek.
I work on really sparse linear classification models with small integer coefficients (e.g. max 5 variables with integer coefficients between -5 and 5). The models generalize well in terms of accuracy and trickier performance metrics (e.g. calibration).
The method in this paper will scale to large sample sizes for logistic regression, and can be extended to fit other linear classifiers with convex loss functions. It will not handle cases with lots of features (unless $n/d$ is large enough, in which case the data is separable and the classification problem becomes easy).
2. If you can specify additional constraints for your model (e.g. monotonicity constraints, side information), then this can also help with generalization by reducing the hypothesis space (see e.g. this paper).
This needs to be done with care (e.g. you probably want to compare your model to a baseline without constraints, and design your training process in a way that ensures you aren't cherry picking constraints). | 2019-10-23 23:28:15 | {"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.6655738353729248, "perplexity": 876.0831415285247}, "config": {"markdown_headings": false, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2019-43/segments/1570987836368.96/warc/CC-MAIN-20191023225038-20191024012538-00191.warc.gz"} |
https://publications.drdo.gov.in/ojs/index.php/dsj/article/download/4865/2894 | Active Vibration Control of a Smart Cantilever Beam on General Purpose Operating System
All mechanical systems suffer from undesirable vibrations during their operations. Their occurrence is uncontrollable as it depends on various factors. However, for efficient operation of the system, these vibrations have to be controlled within the specified limits. Light weight, rapid and multi-mode control of the vibrating structure is possible by the use of piezoelectric sensors and actuators and feedback control algorithms. In this paper, direct output feedback based active vibration control has been implemented on a cantilever beam using Lead Zirconate Titanate (PZT) sensors and actuators. Three PZT patches were used: one as the sensor, one as the exciter providing the forced vibrations, and the third acting as the actuator that provides an equal but opposite-phase vibration/force signal to that sensed, so as to damp out the vibrations. The designed algorithm is implemented in LabVIEW 2010 on a Windows 7 platform.
The very first analysis of any measured data from a vibrating system is to convert it from the time domain into the frequency domain so as to find the frequency content of the measured data. In a normal engineering system investigation process, a complex physical system or real system is studied as a physical model. It is then mathematically modelled so that further analysis can be carried out on the model instead of the system. This helps in attaining a clear understanding of the system behaviour and is also cost effective. Finally, the physical system is put to the same tests as is its model. The simulated and experimental results from the model are then compared. If they do not match, the assumptions made to build the model are re-defined and the entire process is repeated till a satisfactory solution is obtained.

The developments in piezoelectric materials have motivated many researchers to work in the field of smart structures9-15. A smart structure can be defined as a structure that can sense external disturbance and respond to it actively as per the designed control algorithm so as to maintain its dynamics within the desired levels. Smart structures comprise distributed active devices like sensors and actuators that may either be embedded in or attached to the structure with integrated processor networks. They are widely used in place of traditional structures on account of their ability to adapt according to the prevailing disturbances. Mechanical vibrations of these structures tend to affect their operational efficiency to a great extent, and so the need to damp out these vibrations is felt. The simplest control algorithm that can be implemented to suppress the occurring vibrations in the system is direct feedback of an output parameter back into the system1,2,4,15. Measurable parameters like strain, displacement, velocity, acceleration, etc. are the commonly fed signals. This type of control is simple to implement and yet yields satisfactory results.
In this work, a simple cantilever beam was used as the system whose dynamics was studied and on which the active vibration control technique was applied. The system parameters were analysed through the free vibration test. The setup consisted of one Lead Zirconate Titanate (PZT) patch producing the primary disturbance (the exciter), another PZT patch sensing the occurring disturbance (the sensor), and finally a third PZT patch that suppressed the vibration (the actuator). The setup with the embedded PZT patches is as shown in Fig. 1(a). As reported by Lim5, et al., presence of the patches shifts the natural frequencies of the passive structure to higher frequencies. Waghulde and Kumar6 used piezoelectric material on a cantilever beam, thereby making it smart. The placement of the piezo sensors and actuators on the beam was determined through modal analysis, as reported by Tripathi and Gangadharan1. Active control of hybrid smart structures under forced vibrations was investigated by Choi7, et al.
The entire work was executed in LabVIEW 2010 on a Windows platform. The graphical programming nature of LabVIEW made the design of the algorithm simple, and it was also user friendly with respect to debugging. Good reliability, near-linear response to the applied voltage and excellent response to the applied electric field over a very large range of frequencies, coupled with the low cost of PZT, make it a very popular choice as a sensor and actuator that enables the structure to be smart. The details of the smart beam along with the details of the PZT patches considered in this work are given in Table 1 and Table 2 respectively.
The first step in this work was to find the system parameters of the smart cantilever beam. This was accomplished by subjecting the system to the free vibration test, performed so as to obtain critical parameter values such as the system's natural frequency, stiffness, damping, transfer function, etc. The system response shown in Fig. 2(a) was then validated against its predicted/simulated response as shown in Fig. 2(b). Complete modal analysis of the smart system was achieved and analysed by Tripathi8. In this work, common parameters of the system were determined theoretically and were validated through the experimental process.
Figure 1. (a) Schematic of the system under study, (b) The experimental setup.
Table 1. Properties of the smart beam.
Table 2. PZT patch properties.
Figure 2. (a) Experimental free vibration response of the smart cantilever beam.
Figure 2. (b) Simulated free vibration response of the smart cantilever beam.
Table 3. System parameters
From the concepts of machine vibrations, the natural frequency of the vibrating beam was determined by the following formula:

${\omega }_{n}=\sqrt{\frac{k}{m}}$ (1)

Where ${\omega }_{n}$ = the natural frequency of the beam (rad/sec)

k = the beam stiffness (N/m)

m = the modal mass of the beam (kg)
The dimensions of the beam selected include a length (L) = 300mm; breadth (b) = 25mm; and a height (h) = 3mm. From the physical dimensions, the beam inertia (I) was obtained through the following equation:
$I=\frac{b{h}^{3}}{12}$ (2)
Using equation (2) along with the known Young's modulus (E) of the Al cantilever beam, the beam stiffness was determined as:
$k=\frac{3EI}{{L}^{3}}$ (3)
From the free vibration test, the logarithmic decay ratio(δ) was determined as per the following formula:
$\delta =\mathrm{ln}\left(\frac{{x}_{1}}{{x}_{2}}\right)$ (4)

where ${x}_{1}$ and ${x}_{2}$ are successive peak amplitudes of the decaying response.
The logarithmic decay ratio is also written as:
$\delta =\frac{2\pi \epsilon }{\sqrt{1-{\epsilon }^{2}}}$ (5)
Upon solving equation (5), the following relation for the damping ratio (ε) was obtained:

$\epsilon =\frac{\delta }{\sqrt{{\delta }^{2}+4{\pi }^{2}}}$ (6)
Upon further simplification of equation (6), the following relation for damping factor (c) was arrived:
$c=\frac{2\delta \sqrt{km}}{\sqrt{{\delta }^{2}+4{\pi }^{2}}}$ (7)
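A rough numeric check of these formulas against the beam dimensions in Table 1. The Young's modulus and density of aluminium used below are typical handbook values assumed for illustration, not quoted from the paper; since the modal mass in Eq. (1) needs the effective-mass factor, the sketch uses the standard Euler-Bernoulli first-mode formula instead:

```python
import math

# Beam dimensions from Table 1; E and rho are typical aluminium values
# assumed here for illustration, not taken from the paper.
L = 0.300     # length (m)
b = 0.025     # breadth (m)
h = 0.003     # height (m)
E = 69e9      # Young's modulus of Al (Pa), assumed
rho = 2700.0  # density of Al (kg/m^3), assumed

I = b * h**3 / 12     # second moment of area, Eq. (2)
k = 3 * E * I / L**3  # cantilever tip stiffness, Eq. (3)

# First natural frequency from Euler-Bernoulli beam theory
# (beta_1 * L = 1.8751 for the first cantilever mode):
A = b * h
w1 = 1.8751**2 * math.sqrt(E * I / (rho * A * L**4))
f1 = w1 / (2 * math.pi)  # comes out near the 27.05 Hz reported in the text
```

With these assumed material constants the predicted first natural frequency lands within about 1 Hz of the experimentally reported 27.05 Hz.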
Figure 3. Block diagram of output feedback based active vibration control of the smart cantilever beam.
The smart cantilever beam was subjected to harmonic excitation at its natural frequency. The experimental work reported here formed the basis of implementation of active vibration control in real time as shown by Parameswaran and Gangadharan2. The general block diagram adopted in this work to achieve active vibration control through direct output feedback is shown in Fig. 3. The piezo exciter excited the beam at its natural frequency, as a result of which maximum displacement of the beam was observed at its free end. Also, maximum strain was developed at the fixed end. This strain was sensed as a voltage by the piezo sensor. The sensed voltage was then amplified and fed into the LabVIEW domain in the PC through NI C-Series modules mounted on a cDAQ-9174 compact data acquisition platform, or directly through analog input/output modules. Through software means, the smart beam was subjected to forced vibrations at its first natural frequency (27.05 Hz) through the PZT exciter patch mounted at the bottom of the beam. The first natural frequency of the smart beam was determined from theoretical calculations as well as through experimental means, as indicated by its time (Fig. 4(a)) and frequency (Fig. 4(b)) responses. In this work, closed-loop active vibration control of the disturbed smart beam based on direct strain feedback from the fixed end as well as on displacement feedback (sensed at the free end) demonstrated the output feedback control algorithm, which was successfully implemented to damp out the occurring vibrations in an active manner. Maximum strain was sensed at the fixed end by the PZT sensor patch, which converted the resulting charge (corresponding to the strain) to an appropriate voltage. The developed voltage from the PZT sensor was then transmitted through the NI C-Series modules into the LabVIEW domain in the PC.
The designed control logic ensured that the acquired signal, after being conditioned and amplified with a suitable controller gain, was fed back into the system with a 180° phase shift through the smart structure instrumentation system before being fed to the PZT actuator patch that was mounted exactly opposite the PZT sensor. For accurate control, it was very important for the sensed signal to be conditioned appropriately so that it was free from any unwanted noise. This was achieved by using a band-pass Butterworth analog filter that allowed a frequency range of 5-50 Hz. The acquired signal was initially filtered before being transferred into the software domain. The same logic was followed in the case of displacement feedback based control too, with the sensor being a Laser Doppler Vibrometer placed appropriately at the free tip of the beam.
Figure 4. (a) Time response of the vibrating smart beam,
Figure 4. (b) Frequency response of the vibrating smart beam.
The experimental results demonstrated successful implementation of output feedback based active vibration control of a smart cantilever beam by employing three PZT patches that were appropriately placed. The harmonic excitation at the system's first natural frequency ensured maximum tip deflection as well as maximum strain development at the fixed end. Through successful implementation of the control logic in LabVIEW on a Windows 7 platform, active vibration control based on strain feedback as well as tip displacement feedback was obtained, as shown in Figs. 5(a) and 5(b) respectively. From the free vibration test, values of critical parameters of the system were determined. The model obtained was tested and validated successfully. It is seen that when the control was initiated (at time t = 7 s and controller gain = 10), by employing strain feedback based control logic, the strain signal at the fixed end of the beam dropped from 1.5 mm to around 0.5 mm. This shows nearly a 67% reduction in the strain when active vibration control is applied. Similarly, when displacement feedback based control was applied, for the same controller gain (at t = 9 s), it was observed that the displacement of the free tip of the smart cantilever beam reduced from about 0.35 cm to around 0.15 cm. This shows a reduction in the vibration by about 57%. Also, in the frequency response, it was noted that when the control action was applied, the amplitude of vibrations fell drastically at the system's natural frequency.
Figure 5. (a) Fixed end strain feedback based active vibration control of smart beam
Figure 5. (b) Tip displacement feedback based active vibration control of smart beam.
From this experimental work, it was concluded that though active vibration control of the smart system was achieved, it was non-deterministic as well as non-sustained. This was attributed to the time-multiplexed nature of the GPOS (LabVIEW was run on a Windows 7 platform), wherein the internal as well as external interrupts are serviced sequentially. Hence, the processor was unable to devote its complete processing time and capabilities towards achieving satisfactory vibration control. Thus, even though active vibration control was achieved, the results showed inconsistent transient as well as steady state characteristics in the dynamics of the beam. Hence, it was concluded that experimental control of the vibrating smart beam needed to be performed on a real-time operating system (RTOS) platform, wherein deterministic and reliable control could be achieved.
1. Tripathi, P.K. & Gangadharan, K.V. Design and implementation of active vibration control in smart structures. Int. J. Res. Rev. Mechatronic Des. Simul., 2012, 2(1), 92-98.
2. Parameswaran, A.P. & Gangadharan, K.V. Active vibration control of a smart cantilever beam: A comparison between conventional and real time control. In the Proceedings of the 12th International Conference on Intelligent Systems Design and Applications (ISDA 2012), pp. 235-239.
3. Pourboghrat, F.; Pongpairoj, H. & Aazhang, B. Vibration control of flexible beams using self-sensing actuators. In the Proceedings of the 5th Biannual World Automation Congress, 2002, pp. 133-139.
4. Fei, Juntao. Active vibration control of flexible steel cantilever beam using piezoelectric actuators. In the Proceedings of the 37th Southeastern Symposium on System Theory, 2005, pp. 35-39.
5. Lim, Y-H.; Varadan, V.V. & Varadan, V.K. Closed loop finite element modeling of active structural damping in the frequency domain. Smart Mater. Struct., 1997, 6(2), 161-168.
6. Waghulde, K.B. & Kumar, Bimlesh. Vibration analysis of cantilever smart structure by using piezoelectric smart material. Int. J. Smart Sens. Intell. Sys., 2011, 4(3), 353-375.
7. Choi, S.B.; Park, S.B. & Fukuda, T. A proof of concept investigation on active vibration control of hybrid structures. Mechatronics, 1998, 8(6), 673-689.
8. Tripathi, P.K. Control and visualisation of vibrations in smart structures. Department of Mechanical Engineering, NITK-Surathkal, April 2012, MTech Thesis.
9. Moheimani, S.O.R. & Vautier, B.J.G. Resonant control of structural vibration using charge-driven piezoelectric actuators. IEEE Trans. Control Sys. Technol., 2005, 13(6), 1021-1035.
10. Fei, J. & Fang, Y. Active feedback vibration suppression of a flexible steel cantilever beam using smart materials. In the 1st International Conference on Innovative Computing, Information and Control, 2006, pp. 89-92.
11. Hu, Hongsheng; Qian, Suxiang & Qian, Linfang. Self-sensing piezoelectric actuator for active vibration control based on adaptive filter. International Conference on Mechatronics and Automation, 2007, pp. 2564-2569.
12. Manning, W.J.; Plummer, A.R. & Levesley, M.C. Vibration control of a flexible beam with integrated actuators and sensors. Smart Mater. Struct., 2000, 9(6), 932-939.
13. Gaudenzi, P.; Carbonaro, R. & Benzi. Control of beam vibrations by means of piezoelectric devices: theory and experiments. Compos. Struct., 2000, 50(4), 373-379.
14. Vasques, C.M.A. & Rodrigues, J.D. Active vibration control of smart piezoelectric beams: comparison of classical and optimal feedback control strategies. Comput. Struct., 2006, 84(22/23), 1402-1414.
15. Fei, J. Active vibration control of a flexible structure using piezoceramic actuators. Sens. Transducers J., 2008, 89(3), 52-60.
Mr A.P. Parameswaran obtained his BE (Electrical & Electronics Eng.) from College of Engineering, Farmagudi, Goa, and MTech (Control Systems) from Manipal University. Currently pursuing his PhD at National Institute of Technology (NITK)-Surathkal. His research areas include: Monitoring of machine vibrations and their control in real time, smart materials and their applications in vibration control. Mr A.B. Pai obtained his BTech (Mechanical Engineering) from UBDT College of Engineering, Davangere. Currently pursuing his MTech (Mechatronics Engineering) at National Institute of Technology (NITK)-Surathkal. Presently, he is involved in MR fluid based damper design and studying its applications in vibration damping in automobiles. His areas of interest include: Vibration monitoring and its control, smart materials and their application in vibration control. Mr Prashant Kumar Tripathi received his MTech (Mechatronics) from National Institute of Technology, Karnataka, in 2012. His area of research is NVH simulations and testing for electrical drives and static, dynamic simulations for power tools. He is an active researcher in the field of NVH testing for automobile alternators and FEA simulations with interface for transfer of electromagnetic forces. Dr K.V. Gangadharan received his ME from NIT Trichy, in 1992; BTech (Mechanical Engineering) from Calicut University, in 1989 and PhD from IIT Madras in 2001. Currently working as a Professor in the Department of Mechanical Engineering, National Institute of Technology Surathkal, Karnataka.
His areas of research are system design, vibration and its control, smart material and its applications in vibration control, dynamics, and finite element analysis, condition monitoring and experimental methods in vibration | 2020-04-08 23:53:39 | {"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 2, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.49447154998779297, "perplexity": 2236.4928924174683}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 20, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2020-16/segments/1585371826355.84/warc/CC-MAIN-20200408233313-20200409023813-00185.warc.gz"} |
https://www.physicsforums.com/threads/undetermined-coefficients-with-annihilator-approach-de.678740/ | # Undetermined coefficients with annihilator approach (DE)
1. Mar 16, 2013
### Mangoes
There isn't a specific problem that's making me stuck, but I was hoping if someone could point me in the right direction here. I've looked up the topic online, but most of what I could find was through another approach or very unclear. The book I'm using also does not use the method my professor uses.
In my ODE class I've been using the annihilator approach to solve homogeneous DEs. We moved on to solving nonhomogeneous ODEs a while ago and covered undetermined coefficients and variation of parameters, but I had to miss the lecture on undetermined coefficients and there's something I missed which is messing me up.
I understand that if I have a simple ODE such as
$$y'' + 16y = e^{3x}$$
I can see why the particular solution would take the form
$$y_p = Ae^{3x}$$
You'd then differentiate as needed, plug in, and solve for the coefficients. This way of thinking has been recurring throughout the course so it doesn't feel foreign. However, in the later exercises, I'm having problems.
Consider the ODE
$$y^{(3)} - y = e^x + 7$$
If I just look at the RHS and think of what'll annihilate it, let D be the differential operator, then D(D-1) looks like it'll do the job. This means that the particular solution would look like the form:
$$y_p = c_1 + c_2e^x$$
But if I were to differentiate that three times and plug it in to the ODE, it wouldn't give me a nice result.
I noticed that the problems I've been struggling with have something in common though.
The LHS of the ODE is annihilated by $(D-1)(D^2 + D + 1)$. So $(D-1)$ is a common annihilator of both sides of the equation and I suspect this has to do with what's throwing me off.
Could someone please shed some light on what's the algorithm for these types of situations?
Last edited: Mar 16, 2013
2. Mar 16, 2013
### vela
Staff Emeritus
If you apply the annihilator, you'll have $D(D-1)^2(D^2+D+1)y = 0$. What should be the form of the solution to this homogeneous equation? Compare it to the $y_h$ you get from the original DE. The terms left over make up $y_p$.
3. Mar 16, 2013
### Mangoes
Thanks for the reply, I hadn't thought of it that way. You're kind of shifting the nonhomogeneous equation to a homogeneous one.
The solution formed by applying the annihilator would take the form:
$$y = c_1 + c_2e^x + c_3xe^x + e^{-x/2}\left(c_4\cos(\sqrt{3/4}\,x) + c_5\sin(\sqrt{3/4}\,x)\right)$$
Since
$$y_h = a_1e^x + e^{-x/2}\left(a_2\cos(\sqrt{3/4}\,x) + a_3\sin(\sqrt{3/4}\,x)\right)$$
I'm guessing
$$y_p = Axe^{x/2} + B$$
So in order to find the particular solution, it would simply just be a manner of differentiating the last equation above and plugging in to find the coefficients?
Last edited: Mar 16, 2013
4. Mar 16, 2013
### vela
Staff Emeritus
I'm not sure what's going on with the exponents there. Where did the $a_2$ term and the A term come from?
5. Mar 16, 2013
### Mangoes
Oh, I just switched the letters up because I didn't feel like keeping track of which constant is which.
Sorry about that, I thought of adding the explanation when I made the choice to do that after I was finished typing the equations, but I forgot to actually go ahead and type that out when I got to finishing...
That is, B would be equal to c_1, a_1 would be equal to c_2...
Should've just gone ahead and used the same names...
6. Mar 16, 2013
### vela
Staff Emeritus
The exponent in yp should be x, not x/2, but otherwise it looks fine now.
Do you see how you would generally modify your guess for the particular solution when the forcing function has terms that satisfy the homogeneous equation?
7. Mar 16, 2013
### Mangoes
Yep, that exponent's a typo.
Just write out the general solution for the DE annihilated by both annihilators (the RHS and LHS) and then filter out yp from the terms that already appeared in yh and then just go through the mindless computation and equate coefficients.
Pretty simple procedure then, thanks a lot for taking the trouble to help me out.
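Carrying the computation through for this example with the corrected exponent, $y_p = Axe^x + B$, as a quick check:

$$y_p' = A(x+1)e^x, \quad y_p'' = A(x+2)e^x, \quad y_p''' = A(x+3)e^x$$

$$y_p''' - y_p = A(x+3)e^x - Axe^x - B = 3Ae^x - B = e^x + 7$$

Equating coefficients gives $A = 1/3$ and $B = -7$, so $y_p = \frac{1}{3}xe^x - 7$.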
8. Mar 16, 2013
### vela
Staff Emeritus
You can streamline that procedure a bit. You just need to multiply what you might naively guess for $y_p$ by appropriate powers of x. For example, if you had $(D-1)^2y=e^x$, you might initially guess that $y_p=Ce^x$. But since $y_h=Ae^x+Bxe^x$, you need to multiply by $x^2$ to make the particular solution linearly independent of $y_h$, so you'd actually use $y_p=Cx^2e^x$.
9. Mar 16, 2013
### Mangoes
Eh, the thought process of converting the nonhomogeneous eqn to a homogeneous eqn and then choosing what you want to solve seems like the most intuitive thing for me and I hate memorizing things, so I'll just stick to what you told me a couple of posts ago. What you're saying now might seem more obvious after I work out a couple of problems, but for now I'm happy the other way. | 2017-08-17 11:02:34 | {"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 1, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.8030202388763428, "perplexity": 589.2182863097485}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2017-34/segments/1502886103167.97/warc/CC-MAIN-20170817092444-20170817112444-00220.warc.gz"} |
https://gateoverflow.in/194317/madeeasy-workbook?show=194326 |
which option is correct?
i think B is the correct answer...?
how???
P(M/U)= probability that the selected person is male given that he is unemployed. This is what we have to find.
P(M) = probability that person is man : it is 0.6 ( 60% of the population is male given)
P(U/M)= probability that a person is unemployed knowing that the person is a male: 25% given so 0.25
P(F)= probability that person is female = 0.4 and
P(U/F)= probability that a person is unemployed knowing that the person is a female : 0.15
By Baye's theorem,
P(M/U)=P(U/M)*P(M)/{P(M)*P(U/M) + P(F)*P(U/F)}
Put the values. You will get 0.714.
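A quick numeric check of the calculation above (plain Python, variable names are my own shorthand for the probabilities in the answer):

```python
# Numeric check of the Bayes' theorem calculation above.
p_m, p_f = 0.6, 0.4          # P(M), P(F)
p_u_m, p_u_f = 0.25, 0.15    # P(U/M), P(U/F)

# P(M/U) by Bayes' theorem
p_m_u = (p_u_m * p_m) / (p_m * p_u_m + p_f * p_u_f)
```

This gives 0.15 / 0.21 = 5/7, which rounds to 0.714 as stated.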
thankyou
# Low Dimensional Topology
## May 17, 2011
### In search of the best Mapping Class Group presentation- Part II
Filed under: Mapping class groups — dmoskovich @ 9:31 am
In the previous post, we discussed MIST presentations for mapping class groups of closed oriented surfaces of genus 1 and 2, and (in a weaker sense) also for genus 3. How might we set about obtaining good presentations for mapping class groups of surfaces of higher genus? A-priori, this looks as though it might be a difficult problem.
The breakthrough was a paper by Hatcher and Thurston. The background for their idea was a result of Brown about how to deduce a presentation of the group $G$ from a finite description of its action on a simply-connected simplicial complex $X$. The mapping class group acts on a surface rather than on a simply-connected complex; but simply-connected complexes can be built out of a choice of curves on the surface. Hatcher and Thurston make such a choice, and construct the cut-system complex, which they show to be simply-connected using Morse-Cerf theory. This gives an algorithm which in principle constructs a finite presentation for a mapping class group of a surface of arbitrarily high genus. All papers about presentations of mapping class groups for surfaces of arbitrary genus seem to factor through these ideas of Hatcher and Thurston.
Unfortunately, the finite presentation which one would obtain from running Hatcher and Thurston’s algorithm is huge and unwieldy, and the relations don’t seem conceptually meaningful. So the next step, taken by Harer, was to trim down the complex, to obtain a shorter presentation (which is still huge and unwieldy). Finally, Wajnryb succeeded in obtaining a relatively compact presentation, and in showing here that the Hatcher-Thurston complex is simply connected by elementary combinatorial methods.
Everyone seems to use Wajnryb’s presentation in practice. I suppose that the main reason for this is that the relations are relatively short, and sort-of-meaningful, as explained in Farb and Margalit’s book. But it’s not a presentation I would ever want to memorize.
The second famous presentation is the one by Gervais. Its advantages are that it’s memorable, and it works for a surface with an arbitrary number of boundary components. Its main disadvantage is that it lacks simplicity, in the sense that it uses an infinite number of generators, and I’ve never found it particularly easy to work with. Again, it’s explained in Farb-Margalit. Gervais’s star-relations were simplified by Feng Luo to an even simpler set of relations, all of which turn out to have been mentioned by Dehn in 1938!
Gervais proves his presentation on the basis of Wajnryb’s. Wajnryb needed to make some involved calculations, and Gervais made some involved calculations, so taken as a unit this makes Gervais’s proof a bit of a monster. So the next step was to simplify Gervais’s proof. I know two papers which do this, by Hirose and by Benvenuti. Both choose different simply-connected complexes $X$ on which $G$ acts. Hirose uses a complex of curves in which the curves are non-separating. This complex is unique up to homeomorphism over the surface, which argues that it is somehow a canonical choice. Benvenuti uses an ordered complex of curves, which is good for calculation because all of its 2-cells are triangular. Both approaches have advantages, and I don’t know which one is better in total. Benvenuti’s method was used by Szepietowski to give an algorithm to present the mapping class group of a non-orientable surface. But it looks to me as though the presentation one were to obtain by running his algorithm would be quite complicated.
In a different direction, there is Makoto Matsumoto’s exciting paper, which was written originally in Japanese. Working on the strength of an analogy with deformation spaces of singularities, Matsumoto conjectured, and proved by a computer calculation based on Wajryb’s presentation, that the mapping class group is presented by Humphries generators, plus the relations:
• $\Delta^4_{A_5}=\Delta^2_{A_4}$.
• $\Delta^2_{E_6}=\Delta_{E_7}$.
The relations are between fundamental elements of Dynkin diagrams inside the Coxeter graph associated to the Humphries generators, so this presentation looks a bit like Looijenga’s, and is certainly memorable. It’s also somewhat informative, because it’s easy to see why these relations should hold by drawing pictures (although not why they should be a complete set of relations). But it’s different from Looijenga’s. I am guessing that, if we really understood where this presentation comes from conceptually, then it (or a small perturbation of it) might be the best of all. Labruere and Paris generalize it to surfaces with punctures in a fairly straightforward way, so it is typical in some sense.
I suppose the ultimate point of these posts is that I wish I knew more about presentations of mapping class groups of surfaces. I wish I knew a MIST presentation for mapping class groups of all compact surfaces, with or without punctures, oriented or non-oriented, and for 3-dimensional handlebodies and compression bodies.
## 4 Comments »
1. Feng Luo has a nice simplification of Hatcher-Thurston too:
http://arxiv.org/abs/math/9801025
Comment by Ian Agol — May 17, 2011 @ 10:17 am
• Thanks! I didn’t know about this paper, and it’s really nice. I’ve edited this into the post.
Comment by dmoskovich — May 17, 2011 @ 10:29 am
2. Sorry to bother: do you know what the curves are for the generating set of size 3g-1 for the mapping class group of the orientable, boundaryless genus-g surface? I know we use twists about the curves in a symplectic basis, but that only gives us 2g curves, and we are missing g-1 other curves to do the twists on. Thanks for any ideas.
Comment by Ernesto — August 17, 2011 @ 8:56 pm
There are $3g-1$ Lickorish generators (diagram on the front page of the linked paper), out of which the $2g+1$ Humphries generators suffice to generate the mapping class group.
# Relation between force and torque for a set of gears/bicycle
If there are 2 gears meshed together and they are of different sizes, then rotating the smaller one will make the larger one spin with a smaller angular velocity but with more torque. And the opposite happens when you spin the larger one. Using a lower gear ratio in a bicycle for example, makes it easier to go uphill. How does the increased torque from the lower gear ratio help in this? Like how does the higher torque equate to a greater force to move the bike forward?
In general, $P_{in} = P_{out}$ (assuming no power loss). Using $P = F v$, one gets $F_{in} v_{in} = F_{out} v_{out}$. That is, one can "scale up" the output force by having the input move through a greater distance per unit time: since $F_{out} = F_{in} (v_{in}/v_{out})$, increasing the ratio $v_{in}/v_{out}$ increases the output force. On a bicycle in a low gear, your feet move through more distance per wheel revolution, so the same pedalling force produces a larger driving force at the wheel.
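The power-conservation argument can be sketched in a few lines (the numbers below are made up for illustration):

```python
def output_force(f_in: float, v_in: float, v_out: float) -> float:
    """With no power loss, F_in * v_in = F_out * v_out,
    so F_out = F_in * (v_in / v_out)."""
    return f_in * (v_in / v_out)

# Low gear on a bicycle: the input (pedals) moves twice as fast as the
# output, so the same rider force is doubled at the output.
print(output_force(100.0, 2.0, 1.0))  # → 200.0
```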
## useful tables and statistics
Tables are widely used in communication, research, and data analysis; they effectively use a minimum of space to communicate a large amount of information, and tables, charts, and graphs are frequently used in statistics to visually communicate data. Statistics are employed to understand data and make informed decisions throughout the natural and social sciences, medicine, business, and other areas. This writing introduces researchers and practitioners to the most commonly used statistical tables, namely the Student t table, the unit normal (Z) table, the chi square table, and the F table (Figure 5 below illustrates how the F-table is read), and describes the tests in which each table is used.

A researcher may deal with several samples. In one situation, the samples come from the same population; in another, two samples have variances that are unknown; in yet another, the researcher is given several samples, each taken from a different population. In every case the observed test statistic must be compared to a critical value read from the appropriate table. The Z table is read as follows: the first column is the Z-score, and the body of the table gives the corresponding percentage probability under the standard normal curve. The confidence interval and the p-value express the same comparison: the p-value is the evidence against the null hypothesis.
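As a quick illustration of reading a Z table (this snippet and its numbers are our own sketch, not part of the original article), the percentage probability for a Z-score is the cumulative standard normal probability, which can be computed with the error function:

```python
import math

def phi(z: float) -> float:
    """Cumulative standard normal probability for a Z-score,
    i.e. the 'percentage probability' read from the body of a Z table."""
    return 0.5 * (1.0 + math.erf(z / math.sqrt(2.0)))

# A Z-score of -0.35 gives a lower-tail probability of about 0.3632
# (36.32%), the kind of value one reads off a printed Z table.
print(round(phi(-0.35), 4))  # → 0.3632
```

A printed Z table tabulates exactly these values over a grid of Z-scores; the function simply replaces the table lookup.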
The Student t table is used to find the sample percentage probability in a statistical significance test. The degree of freedom is defined as the number of observations minus one; with 10 observations the degree of freedom is 9, and with 4 observations it is 3. To read the table, move down the first column to the row for the degree of freedom, then across to the column for the chosen confidence level; social science generally uses 0.95, or 95%, for confidence intervals and hypothesis testing. The value at the intersection is the critical value, and values beyond it fall in the error region: at 3 degrees of freedom the referenced value is 3.18, and at 9 degrees of freedom it is 2.262. If the observed t falls inside the confidence interval, the researcher accepts the null hypothesis; if it falls outside, the observed value is said to be significant.

There are many types of T-tests: the one-sample test, the two-sample test, and the paired test. For paired data, such as the daily high and low prices of the SP500 over 10 days, the unit of analysis is the difference of the paired values, and the T-Test for the paired means difference may be used. In decision management, this technique is useful for evaluating a program or policy because it allows a pre- and post-stimulus comparison of results. In the SP500 example, the observed t fell inside the critical value, and it was concluded that the two samples are not statistically different, i.e. that the difference between the high and low prices is not significant.

The chi square table is used when the data are discrete or categorical. In statistics, a contingency table (also known as a cross tabulation or crosstab) is a type of table in a matrix format that displays the multivariate frequency distribution of the variables. The chi square test compares the observed frequency counts in such a 2-way table with the expected counts, and the degree of freedom depends on the number of rows and columns of classes. When a data set is chi square distributed due to small sample size, the distribution will approximate the normal distribution as the sample size grows. Some data sets can take only two possible outcomes; such discrete data raise the question of whether the number of successes in one population equals that in another, and if a time factor is involved, discrete distributions plus time produce the Poisson distribution.

The F-test is a test for homogeneity or consistency: the purpose of the test is to verify whether the samples came from the same population, for example whether products came from the same manufacturing process (same batch), since a manufacturing process should produce output with consistent variance. The observed F value is compared to the theoretical value in the F-table at the corresponding pair of degrees of freedom; since we want to see a ratio larger than 1.00, a ratio exceeding the table value is significant. There are other types of F-tests as well; one involves the multiple regression case, where the F-test assesses the adequacy of the model as measured by the coefficient of determination. If this value is 0.50, the model could explain 50% of the data and the remaining 50% could not be explained; a value of 0.80 would mean that the model could explain 80%, the remainder being the statistical error, i.e. the forecast error.

There are likewise many types of Z-tests: the Z-test for two populations with means and variances known, the Z-test for proportions, and the Z-test for correlation coefficients, among others. The Z-test for a proportion allows the researcher to ask, for example, whether the number of closing prices posted above the mean is significant. Correlation itself deals with the relationship between two variables, for instance the strength of a steel specimen (say variable X) and its % elongation (say variable Y); the first step is to compute the correlation coefficient, and the second step is to use the T-Test for correlation to verify the significance of the relationship, that is, of the slope of the regression line. Where two sets of data each have their own correlation coefficient, the Z-test for correlation coefficients helps in selecting an alternative course of decision, for example when comparing two polls taken at two different times.
writing describes the properties of normal distribution and various tests used to verify whether data are indeed normally distributed. 22. And last, we finalize table creation with tab_pivot: dataset %>% tab_cells(variable) %>% tab_stat_cases() %>% tab_pivot(). 13. Publishing House is called upon to guarantee that the results of each research conducted according to the Academy’s plan will be made public in a timely manner. theoretical threshold value. 9. assume that the data set is distributed normally. This is useful in case newly collected statistics leads to some sub-optimal execution plans and the administrator wants to revert to the previous set of statistics. The F-table is used for comparing th, each group. International Mariinskaya Academy named after Maria Dmitrievna Shapovalenko. for the bell shaped curve may be given as: Treatise on Numerical Mathematics, 4th ed. h a stimulus, i.e. This methodological approach is an added strength to the researcher’s hypothetical foundation. LET EVERY HUMAN BEING Home Statistics Index. may be the one involving multiple regression case. If the data is not normally distributed or that normal distribution is assumed, not tested and verified, any claims made is shrouded under the cloud of inferential errors: Type I & II. between the high and low price is statistically different. First, determine the probability of succe. Secondary, we calculate statistics with one of tab_stat_* functions. Most statistical tests require that the data are normally distributed; in applying these tests, instead of verifying whether the data set is indeed normally distributed, researchers tend to assume normality. 5. 
A standard normal table, also called the unit normal table or Z table, is a mathematical table for the values of Φ, which are the values of the cumulative distribution function of the normal distribution.It is used to find the probability that a statistic is observed below, above, or between values on the standard normal distribution, and by extension, any normal distribution. As with academic writing, it is also just as important to structure tables so that readers can easily understand them. Mathematical techniques used for this include mathematical analysis, linear algebra, stochastic analysis, differential equation and measure-theoretic probability theory. Tool to verify whether these several populations as branches marking the countries of the test, there four. National currency based on the basis of thei, coefficient column is the p-, of... The audience if a statistics object is defined on multiple columns, or the slope of publishing! Our Academy, is part of a scientific product join researchgate to discover and stay up-to-date with latest... Research study with large data, facts, figures and supplementary material including user! Standards in the transfer of knowledge and experience to future generations to communicate a large amount information. Of these cases, the sample sizes were, used when there several! Not normally distributed then no normal distribution with a sample and try to, T-Test of their.! File available at: archer accepts the null hypothesis may be calculated: is.! 1 to be true, the observed value against the theoretical, statistical... The WORLD!!!!!!!!!!!!. Compare the observed values between the, SCIENTISTS of the data is normally... Of dynamic sampling is to improve server performance by determining more accurate estimates for predicate and. Statistical formulas you ’ re studying statistics they effectively use a minimum of space to communicate a large.! 
Chi square value whether the two polls taken at two different, involves the codification of table. Is useful for special cases in which the query Optimizer uses these statistics may help us manage. Berger, Roger L. ( 2001 ) samples from the Z-ta, format Clear tutorial notes.... Quite detailed refer! Following must hold true: under Normality. ” New York: Wiley involved two with! The condition depicted in Figure 1 to be true, the intersection gives the probability of failure for low. Common statistical tables test may be used gives the probability of failure for the production of modified! Research study with large data, facts, figures and supplementary material including a user.. Stored as a weapon of genocide Small, re test still be?! Statistically significant are categorical and identify which category or group an individual belongs to identify category... Differential equation and measure-theoretic probability theory Week 7 – statistics is simply the study of data! Are created with a sample and population include table block counts, applicable block! Food, causing food shortages as a weapon of genocide useful tables and statistics in Figure 1 to be true, the gives... Are the error term is:, the following must hold true under! Expectation... Clear tutorial notes ; Stanford University tutorial notes ; Stanford University tutorial notes ; Stanford tutorial., create materials that destructively affect the mass consciousness similarly, one may! Two arrays of data cons, nd ( 21 ), analysis, was the of! ; each sample is taken from different populations generation - the nation ’ s the... This, join researchgate to discover and stay up-to-date with the latest from! Replacing the use of, tively compare the observed values between the high price is, difference. Objective is to use the T-Test and Z-test since the Small, re still. 'Spss ' statistics folk traditions, folklore, epos, item 10 has Z-sc. 
Of replacing the use of, ' statistics theoretical and ethical factors the hypothetical basis may … Pacific Strategic for... Estimate the cardinality, or number of rows, in pursuit of sensation and earnings, create materials destructively... We can now write the simple regression equation as: 0.05 or p 0.04. Are categorical and identify which category or group an individual belongs to row include: 70, 75,.... In print media, handwritten notes, computer software, architectural ornamentation, traffic signs, and F tables to! And earnings, create materials that destructively affect the mass consciousness will cover the important concepts of is!: is rejected about the correlation coefficient of a thematic series of books this edition and making as. Calculate statistics with one of tab_stat_ * functions by simply, sample and try to, T-Test of values... Null hypothesis is rejected because the correlation, the useful tables and statistics could explain %. Get away from them when you ’ re studying statistics notes refer to the needs of socially vulnerable of... Them when you ’ ll use frequently and the steps for calculating.. The forerunner of a scientific product ; hence, unequal variances describes the properties of distribution! Anniversaries of the writing is to use various options for creating statistics statistical., and the second array, a separate table is commonly used for two samples with variances unknown but and. Variables are categorical and identify which category or group an individual belongs to provide researchers and to..., two options, it means that the null, hypothesis can be... Intersection point endangered languages in the Academy itself, is part of a closed production cycle the of..., called, populations, have the, SCIENTISTS of the Academy itself, is approximately equal to Z are! Square distribution for the second array, exceeding the mean calculation under equation ( 4 follows. 10 days of SP500 various terms of th helps verify, l distribution.! 
To produce single copies of printed materials dedicated to the needs of socially vulnerable groups of the region! From alcohol, drugs, games and other addiction estimate the cardinality, or in... Find sample percentage probability in statistical significance test hypothetical foundation steel specimen ( say X... Enterprises of a multiple regression model is simply the study of numerical data, normal! Cons, nd ( 21 ) used by students who are learning about the correlation coefficient be! White Rose Maths workbook ration larger than 1.00, we calculate statistics with one of tab_stat_ *.! Academicians-Secretaries of branch of 4 ; Year 2 ; Year 2 ; Year ;. But after a time lapse ( usually 3 hours ) strengths and limitations of using official statistics numerical... Frequently and the Academy 's corporate format: P { ‹q�ƒ†-JA¤pùévñC˃Ür\§¨İÍ…‹¬Ü ” k¤ú9c studies in! Test helps verify, l distribution curve also honorary academicians – respectable people without a degree who!, involves the codification of the writing is to improve server performance by determining more accurate estimates predicate! Is stored as a score or raw score under Normality. ” New York: Wiley alpha.... After Maria Dmitrievna Shapovalenko in research and solving social problems who are learning about the distribution values... Plots plots are created with a graphics calculator can be used this statement,,. Codification of the relationship, or as summary statistics ( single values ) throughout. Bases for the production of genetically modified food error: value should not fall below 0.80 table... Use of statistical tables work is guided by the researcher House of International Academy! Slope of the response into ot, approximate the characteristics of the F-test in 23! Table are the 1 – number: Place value useful related Links different, involves several populations Z,... Query result 0.95 or 95 % for confidence interval reference threshold remains,... 
That these two arrays of data strings, the unit of analys, situation. 2 ; Year 1 ; Year 2 ; Year 1 ; Year ;... Is taken from different populations throughout the natural and social sciences, medicine, business and. Is given several samples ; each sample is taken article describes how a graphics calculator be... The principle of “ free science ” accepted at the Academy Week –! Can now write the simple regression equation as: 0.05 or p = 0.04, etc are also academicians. Variable X ) and the economy ” k¤ú9c population of the population to create a high-quality query Plan Z-test! Following must hold true: under useful tables and statistics ” New York: Wiley not be.... Healthy, WEALTHY, WISE, we calculate statistics with suitable examples in any given set of... Mathematical statistics from 1750 to 1930, is approximately equal to Z column with corresponding values for row. Price, the unit of analysis, differential equation and measure-theoretic probability theory to visually communicate data = 0 the! Show how to read a research study with large data, facts figures! Be explained statements may be framed as: p =, table used. Variables for which statistics will be computed with tab_cells wide tables with lots of statistics objects into... Summary calculations of various terms of th raw numerical data, assumed normal curve use a of. Considers some of strengths and limitations of using official statistics are employed to understand data and present it a! Whereas the statistical error is the confidence interval and hypothesis testing the audience, th, each group is from! Of how children travel to school writing describes the properties of normal distribution test verify! Low price, the following must hold true: under Normality. ” New York: Wiley data from more Z! Concepts of statistics objects statement, herwise, if th, called at. Country 's resources of presenting data creating statistics views that display the of! 
4Th ed it gives the value: mean could not be rejected data Variable/Statistic combination the of. ” k¤ú9c 0 and the data set is, What does it mean were gathered on them, unit distribution! The audience named after M.D ’ t get away from them when you ’ re studying statistics more useful tables and statistics... The academicians and the degree of freedom is the F-table two different, the! | 2021-06-13 02:54:19 | {"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 1, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.3873501718044281, "perplexity": 2211.922916216385}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 20, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2021-25/segments/1623487598213.5/warc/CC-MAIN-20210613012009-20210613042009-00326.warc.gz"} |
http://juliaplots.org/AlgebraOfGraphics.jl/dev/layers/operations/ | # Algebraic Operations
There are two algebraic types that can be added or multiplied with each other: AlgebraOfGraphics.Layer and AlgebraOfGraphics.Layers.
## Multiplication on individual layers
Each layer is composed of data, mappings, and transformations. Datasets can be replaced, mappings can be merged, and transformations can be concatenated. These operations, taken together, define an associative operation on layers, which we call multiplication *.
Multiplication is primarily useful to combine partially defined layers.
The operation + is used to superimpose separate layers: a + b has la + lb layers, where la and lb are the number of layers in a and b respectively.
Multiplication naturally extends to lists of layers. Given two Layers objects a and b, containing la and lb layers respectively, the product a * b contains la * lb layers—all possible pair-wise products. | 2022-05-26 14:13:24 | {"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 1, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.6724485158920288, "perplexity": 2017.3450706457074}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2022-21/segments/1652662606992.69/warc/CC-MAIN-20220526131456-20220526161456-00074.warc.gz"} |
http://www.dummies.com/how-to/education-languages/science/Physics/Quantum-Physics/Angular-Momentum.html | # Angular Momentum
### How to Find Angular Momentum Eigenvalues
When you have the eigenvalues of angular momentum states in quantum mechanics, you can solve the Hamiltonian and get the allowed energy levels of an object with angular momentum. The eigenvalues of the
### Derive the Formula for the Rotational Energy of a Diatomic Molecule
Here’s an example that involves finding the rotational energy spectrum of a diatomic molecule. The figure shows the setup: A rotating diatomic molecule is composed of two atoms with masses
### Find the Eigenvalues of the Raising and Lowering Angular Momentum Operators
In quantum physics, you can find the eigenvalues of the raising and lowering angular momentum operators, which raise and lower a state’s z component of angular momentum.
### How to Change Rectangular Coordinates to Spherical Coordinates
In quantum physics, to find the actual eigenfunctions (not just the eigenstates) of angular momentum operators like L2 and Lz, you turn from rectangular coordinates,
### Find the Eigenfunctions of Lz in Spherical Coordinates
At some point, your quantum physics instructor may ask you to find the eigenfunctions of Lz in spherical coordinates. In spherical coordinates, the Lz operator looks like this:
### Find the Missing Spot with the Stern-Gerlach Experiment
The Stern-Gerlach experiment unexpectedly revealed the existence of spin back in 1922. Physicists Otto Stern and Walther Gerlach sent a beam of silver atoms through the poles of a magnet — whose magnetic
### Fermions and Bosons
In analogy with orbital angular momentum, you can assume that m (the z-axis component of spin) can take the values –s, –s + 1, ..., s – 1, and s, where
### How Spin Operators Resemble Angular Momentum Operators
Because spin is a type of built-in angular momentum, spin operators have a lot in common with orbital angular momentum operators. As your quantum physics instructor will tell you, there are analogous spin
### Spin One-Half Matrices
In quantum physics, when you look at the spin eigenstates and operators for particles of spin 1/2 in terms of matrices, there are only two possible states, spin up and spin down.
### Pauli Matrices
In quantum physics, when you work with spin eigenstates and operators for particles of spin 1/2 in terms of matrices, you may see the operators Sx, Sy, and S
### How to Find Commutators of Angular Momentum, L
In quantum physics, you can find commutators of angular momentum, L. First examine Lx, Ly, and Lz by taking a look at how they commute; if they commute
### How to Create Angular Momentum Eigenstates
You can create the actual eigenstates, | l, m >, of angular momentum states in quantum mechanics. When you have the eigenstates, you also have the eigenvalues, and when you have the eigenvalues, you can | 2016-05-29 15:54:01 | {"extraction_info": {"found_math": false, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.853119969367981, "perplexity": 586.8229513437001}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2016-22/segments/1464049281363.50/warc/CC-MAIN-20160524002121-00144-ip-10-185-217-139.ec2.internal.warc.gz"} |
https://www.gamedev.net/forums/topic/482322-c-practice-lessons/ | # C++ Practice lessons
This topic is 3592 days old which is more than the 365 day threshold we allow for new replies. Please post a new topic.
## Recommended Posts
Hi All, new here and getting into programming C++. I've picked up the book Beginning C++ through Game Programming, third ed and it's been quite helpful. The one problem is it doesn't have questions or many practice lessons to challenge the minute progress I make every day. Anyone have a link to practice questions that gradually increase as someone learns the language? I'd love to see 4 or 5 different practical coding questions after I learn something fundamental. For example I just read a chapter on basic do/while loops, the use of switch, etc and would love some basic challenges in putting the knowledge into solving a problem. Thanks for any feedback or help.
##### Share on other sites
I don't think you need practice questions. You need to work on a non-trivial program. As you learn, you'll find new ways to do things. You'll end up re-writing the program with new features and techniques you learn along the way. Test questions and example programs serve to give practice on one feature only. This is generally good, but you can do it yourself. When you learn a new feature or technique, apply it to your program.
What the program is is up to you. If you think you're ready, you can make a Pong or Arkanoid game. If not, a simple text-based RPG battle simulator or something can be fun and give you good practice. I'm not a big fan of "homework problems," just go code something.
##### Share on other sites
If you really need them look up any Deitel book.
They've got tons of exercise problems you could do. I think that's why a lot of colleges use their books even though they suck to read!
For example:
4.26 A palindrome is a number or a text phrase that reads the same backwards as forwards. For example, each of the following five-digit integers is a palindrome: 12321, 55555, 45554 and 11611. Write a program that reads in a five-digit integer and determines whether it is a palindrome. [Hint: Use the division and modulus operators to separate the number into its individual digits.]
4.35 The factorial of a nonnegative integer n is written n! (pronounced "n factorial") and is defined as follows:
n! = n · (n − 1) · (n − 2) · ... · 1 (for values of n greater than 1)
and
n! = 1 (for n = 0 or n = 1).
For example, 5! = 5 · 4 · 3 · 2 · 1, which is 120. Use while statements in each of the following:
Write a program that reads a nonnegative integer and computes and prints its factorial.
Write a program that estimates the value of the mathematical constant e by using the formula e = 1 + 1/1! + 1/2! + 1/3! + ...
Prompt the user for the desired accuracy of e (i.e., the number of terms in the summation).
Write a program that computes the value of e^x by using the formula e^x = 1 + x/1! + x^2/2! + x^3/3! + ...
Prompt the user for the desired accuracy of e^x (i.e., the number of terms in the summation).
Personally I don't care for HW since I'd rather be making games, and that's the whole point: you're learning to program, isn't it?
To make your own custom apps/games just the way you want them, not what someone else wants, unless you are planning on working as a programmer for a living.
I think you'd learn a lot more if you think of something you want to make, for example a console battleship, hangman, tic-tac-toe, or card game. Start from scratch and see if you can make it with your current knowledge, and only if you get stuck, look it up or ask for help here. It's the only way you're truly gonna learn.
p.s. And if you want really hard exercise problems, look up a book called Oh! Pascal! It's got the most brainteaser programming problems I ever saw in a beginners' book, for example:
6-15 Straighten out a paperclip and ask yourself this question: suppose you cut the clip in two, then bend one part into a circle and the other into a square. How long should each portion be to yield figures with equal areas?
6-23 A biologist doing research into pheromones decides to drive a few bugs crazy. She places 4 bugs in the corners of a square test area, then douses each with a chemical that is sure to attract its right-hand neighbor. Driven by genetics, each bug starts walking counterclockwise towards its neighbor.
Now each bug walks at the same speed. Before moving one of its little bug feet, the bug may change direction slightly so that it's heading directly towards its quarry. Write a program that answers these questions: Where do the bugs meet?
How far has each bug walked, measured along its curved path, when they all finally collide?
[Edited by - daviangel on February 10, 2008 3:02:53 AM]
##### Share on other sites
I didn't want to make a new thread so I will post about my problem here. I hope you don't mind John [smile]
I am not getting the required output for this simple program [depressed]
    #include <fstream>
    #include <string>
    using namespace std;

    ofstream fout;
    ifstream fin;

    int main() {
        char fileName[80];
        char buffer[255]; // for user input
        fout << "File name:";
        fin >> fileName;
        ofstream fout("fileName.txt"); // open for writing
        fout << "This line written directly to the file...\n";
        fout << "Enter text for the file: ";
        fin.ignore(1, '\n');      // eat the newline after the filename
        fin.getline(buffer, 255); // get the user's input
        fout << buffer << "\n";   // and write it to the file
        fout.close(); // close the file, ready for reopen
        ifstream fin("fileName.txt"); // reopen for reading
        fout << "Here's the contents of the file:\n";
        char ch;
        while (fin.get(ch)) fout << ch;
        fout << "\n***End of file contents.***\n";
        fin.close(); // always pays to be tidy
        char response;
        fin >> response;
        return 0;
    }
This is a program from an excellent book called "Teach Yourself C++ in 21 Days". I have hardly made any changes. I have tried a few things and also included char response; fin >> response; before the return statement because the output is not displaying, but still nothing happens. Can anyone tell me what is wrong?
Here is the required input output example.
Quote:
    File name: test1
    Enter text for the file: This text is written to the file!
    Here's the contents of the file:
    This line written directly to the file...
    This text is written to the file!
But it doesn't ask for any input [dead] and only a file "fileName" is created with
Quote:
    This line written directly to the file...
    Enter text for the file:
##### Share on other sites
You're reading from fout and writing to fin before they are opened. You want to be reading user input from cin and printing console output to cout.
Also, why use this thread? This seems completely unrelated. There's nothing wrong with starting a new thread.
##### Share on other sites
Quote:
I didn't want to make a new thread so I will post about my problem here.
Do NOT do this. Even if the OP doesn't care, the rest of us will be very unhappy with you for hijacking someone else's thread. If you have a question that clearly doesn't fit in an existing thread (like say, it's your own and not the OP's) then create a new thread. We simply will not help if you hijack other threads with your own questions.
##### Share on other sites
Ok guys sorry [depressed]
Yes... cout and cin were also used in the book, but I got some errors so I used fout and fin
PS: no more post here I will scan through streams chapter again and make a new thread if have more doubts [imwithstupid]
##### Share on other sites
Thanks to all for the feedback.
Posts in this thread plus judicious use of search have netted me some good practice stuff.
One problem I'm seeing is that perhaps my initial choice of C++ might be in question, as it seems there are some threads that suggest beginners consider using C# or Python, then pursue C++ if required later.
Windows will be my chosen platform for the business plan I'm formulating. I don't intend to be the primary programmer, but rather cull the local talent at my alma mater/local college ;)
I want to be knowledgeable enough to pick the right people to code my magnum opus, but prefer not to spend all my time in an IDE editor.
So C++, C#, or Python?
I'll do as another post recommended and download all three and spend some time with 'em.
Thanks again for all the feedback
##### Share on other sites
Quote:
One problem Im seeing is perhaps my initial choice of C++ might be in question as it seems there are some threads that suggest beginner's consider using C# or Python. Then pursue C++ if required later.
This is the general consensus from experienced programmers, because there are two things that inevitably happen.
Either you do not progress very far as a programmer. In that case you want to make sure whatever you do care to learn is of most practical use, so you're better off learning a high-level language with large core libraries. Python, C#, and Java are very good for this reason; Ruby is also extremely solid, and I'd place it above C# and Java. Or you eventually progress to the level of a professional, in which case you will learn multiple languages anyway, so learning Python or C# first works out in your favor too. Well, that and C++ is definitely not beginner friendly. C is very not beginner friendly, and C++, in maintaining compatibility with C, does not help either.
Quote:
Windows will be my chosen platform for the business plan I'm formulating.
If you already fixed Windows as your platform, it makes choosing C# even more appealing, as it practically ties you to Windows. Not a problem if you're okay with that.
Quote:
I want to be knowledgeable enough to pick the right people to code my magnum opus, but prefer not to spend all my time in an IDE editor.
Definitely don't pick C++. The learning curve is enormous.
If you're doing Windows development, C# really isn't a bad choice. Seriously, MS is exposing their modern APIs in .NET only. They are effectively pushing C# as the main Windows development language, and it's working. So you get to choose between Python and C#.
| 2017-12-12 10:39:26 | {"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 1, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.19667211174964905, "perplexity": 1755.0442606576137}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2017-51/segments/1512948515313.13/warc/CC-MAIN-20171212095356-20171212115356-00493.warc.gz"}
https://zbmath.org/?q=an%3A1206.54061 | # zbMATH — the first resource for mathematics
Some notes on fixed points of quasi-contraction maps. (English) Zbl 1206.54061
A self map $$T:X\to X$$ such that for some $$\lambda\in(0,1)$$ and for every $$x,y\in X$$ there exists
$u\in C(T,x,y)=\{d(x,y),d(x,Tx),d(y,Ty),d(x,Ty),d(y,Tx)\}$
such that
$d(Tx,Ty)\leq\lambda u,$
is said to be a quasi-contraction. It is proved that every quasi-contraction defined on a complete cone metric space has a unique fixed point. Moreover, every quasi-contraction defined on a cone metric space possesses the property $$(P)$$, that is $$F(T)=F(T^n)$$ for all $$n\geq 1$$, where $$F(T)$$ denotes the set of all fixed points of the mapping $$T:X\to X$$.
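As a toy illustration of the fixed-point iteration idea behind such contraction results (an ordinary Banach contraction on the real line with contraction constant 1/2, not a cone metric space; purely illustrative):

```python
# Picard iteration for T(x) = x/2 + 1, a contraction with lambda = 1/2.
# Its unique fixed point is x* = 2, since T(2) = 2.
def T(x):
    return 0.5 * x + 1.0

x = 0.0
for _ in range(60):
    x = T(x)  # each step halves the distance to the fixed point

print(x)  # converges to 2.0
```

Each iterate satisfies |x_n - 2| = (1/2)^n |x_0 - 2|, which is exactly the geometric convergence the Banach-type argument delivers.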
##### MSC:
54H25 Fixed-point and coincidence theorems (topological aspects)
54E35 Metric spaces, metrizability
##### References:
[1] Huang, L.G.; Zhang, X., Cone metric spaces and fixed point theorems of contractive mappings, J. Math. Anal. Appl., 332, 1468-1476 (2007) · Zbl 1118.54022
[2] Rezapour, Sh.; Hamlbarani, R., Some notes on the paper "Cone metric spaces and fixed point theorems of contractive mappings", J. Math. Anal. Appl., 345, 719-724 (2008) · Zbl 1145.54045
[3] Ćirić, Lj.B., A generalization of Banach's contraction principle, Proc. Amer. Math. Soc., 45, 267-273 (1974) · Zbl 0291.54056
[4] Ilić, D.; Rakočević, V., Quasi-contraction on a cone metric space, Appl. Math. Lett., 22, 728-731 (2009) · Zbl 1179.54060
[5] Kadelburg, Z.; Radenović, S.; Rakočević, V., Remarks on quasi-contraction on a cone metric space, Appl. Math. Lett. (2009)
[6] Jeong, G.S.; Rhoades, B.E., Maps for which $$F(T) = F(T^n)$$, 71-105 · Zbl 1147.47041
[7] Jeong, G.S.; Rhoades, B.E., More maps for which $$F(T) = F(T^n)$$, Demonstratio Math., 40, 3, 671-680 (2007) · Zbl 1147.47041
[8] Rhoades, B.E., Some maps for which periodic and fixed points coincide, Fixed Point Theory, 4, 2, 173-176 (2003) · Zbl 1062.47057
[9] Pathak, H.K.; Shahzad, N., Fixed point results for generalized quasi-contraction mappings in abstract metric spaces, Nonlinear Anal. (2009) · Zbl 1189.54036
This reference list is based on information provided by the publisher or from digital mathematics libraries. Its items are heuristically matched to zbMATH identifiers and may contain data conversion errors. It attempts to reflect the references listed in the original paper as accurately as possible without claiming the completeness or perfect precision of the matching. | 2021-10-25 07:43:37 | {"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 1, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.8813573718070984, "perplexity": 3008.652528325063}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.3, "absolute_threshold": 20, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2021-43/segments/1634323587655.10/warc/CC-MAIN-20211025061300-20211025091300-00617.warc.gz"} |
https://electronics.stackexchange.com/questions/505856/how-to-calclate-rb-value-for-npn-transistor/505864 | How to calculate the Rb value for an NPN transistor?
Here is a part of my schematic.
K pin from LCD is connected to NPN transistor. A pin is connected to Vcc, 5V.
Base is connected to uController. It is Atmega16. 5V supply.
I am using BC547 transistor as it is showed at the picture.
Electrical characteristics from the datasheet:
Vcbo= 50V
Vceo=45V
Vebo=6V
Ic=100mA
hfe=110 min
So the question is: how can I calculate the resistance Rb?
I found some calculators online and there's always Vi, explained as the input switching voltage or the input trigger voltage. I am not sure what it refers to.
Also, do I need a resistor on the collector too?
You generally should have a resistor on the collector (or from A to Vcc) to limit the current. Refer to your LCD datasheet for guidance on that matter, and assume transistor voltage drop is something like 100mV. Failure to include the resistor when one is required will likely lead to early failure of the backlight and/or transistor. Some LCDs have a suitable resistor for the expected supply voltage built-in, or have a place for a (fairly large physically) resistor on the PCB. There is a lot of variation between different products.
For the base resistor we typically try to drive it with about 1/20 of the collector current, so pick it so that base current ~= (5V-0.7)/Rb is about 1/20 of the backlight current. Say the backlight is 20mA you would want 1mA so Rb ~= 4.3K.
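The rule of thumb above can be turned into a quick calculation. All numbers here are assumptions for illustration (5 V supply, 0.7 V base-emitter drop, a 20 mA backlight current, a hypothetical 3 V backlight forward drop, and a 0.1 V saturation voltage); substitute values from your own datasheets:

```python
V_CC = 5.0    # supply voltage (assumed)
V_BE = 0.7    # base-emitter drop when the transistor is driven on
V_LED = 3.0   # hypothetical backlight forward drop (check the LCD datasheet)
V_SAT = 0.1   # assumed collector-emitter saturation voltage
I_C = 0.020   # target backlight current, 20 mA (assumed)

# Drive the base with ~1/20 of the collector current for hard saturation.
I_B = I_C / 20.0
R_B = (V_CC - V_BE) / I_B           # base resistor
R_C = (V_CC - V_LED - V_SAT) / I_C  # collector-side current-limiting resistor

print(round(R_B), round(R_C))  # -> 4300 95
```

In practice you would round to the nearest standard value (e.g. 4.3 kΩ and 100 Ω) and confirm the backlight current against the LCD's rating.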
Generally you should use the datasheet chart.
And then calculate the resistor value with Ohm's law.
Or through the h21 parameter:
$$h_{21} = \frac{I_c}{I_b}$$
Or here is some useful calculator and some theory with explanation: https://www.petervis.com/GCSE_Design_and_Technology_Electronic_Products/transistor_base_resistor_calculator/transistor_base_resistor_calculator.html
• The graph shows the response of a "typical" transistor that you cannot buy. A transistor with a gain less than "typical" will not work like that. The written spec's say the guaranteed maximum saturation voltage when the base current is 1/20th the collector current (for this European transistor). Datasheet for American 2Nxxxx transistors show a lower saturation voltage because they use a base current that is 1/10th the collector current. – Audioguru Jun 16 '20 at 12:50 | 2021-04-17 01:23:52 | {"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 1, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.5518976449966431, "perplexity": 2923.2948962913792}, "config": {"markdown_headings": false, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2021-17/segments/1618038098638.52/warc/CC-MAIN-20210417011815-20210417041815-00347.warc.gz"} |
https://undergroundmathematics.org/calculus-meets-functions/average-turning-points | ## Problem
While working on part (c) of Can you find… curvy cubics edition, which asked: “Can you find a cubic curve that has a local minimum when $x=-1$?”, two students had the following conversation:
A: I can find a cubic with a stationary point at $x=-1$ by having two of the $x$-axis intersection points at $x=-2$ and $x=0$, because the stationary point is half-way between the two intersection points. I need a third intersection point, too, not between these, so I may as well choose $x=4$, so my cubic is $y=-x(x+2)(x-4)$ (with a minus sign so the cubic is the right way up).
B: I’m not sure that’s right; is the stationary point really half-way between the $x$-axis intersection points?
Can you resolve their debate?
If student A is right in this case, does this approach always work for finding a cubic with a given stationary (turning) point, or only sometimes?
If student B is right in this case, does student A’s approach ever work? | 2019-02-17 19:01:53 | {"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.8689716458320618, "perplexity": 598.552279277575}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2019-09/segments/1550247482347.44/warc/CC-MAIN-20190217172628-20190217194628-00481.warc.gz"} |
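One way to probe the debate is to expand student A's proposed cubic and evaluate its derivative at $x=-1$; a short standalone check:

```python
# y = -x(x + 2)(x - 4) expands to y = -x^3 + 2x^2 + 8x,
# so dy/dx = -3x^2 + 4x + 8.
def dy(x):
    return -3 * x**2 + 4 * x + 8

print(dy(-1))  # -> 1
```

A nonzero derivative at $x=-1$ tells you whether the proposed point is actually stationary.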
https://mathematica.stackexchange.com/questions/155237/framework-behind-graph-plots-dynamicnamespace-and-friends?noredirect=1 | # Framework behind Graph plots. DynamicNamespace and friends
As shown in DynamicLocation usage, we can use it to quickly create very nice functionality, that is, to refer to graphics primitives' relative coordinates without boundary calculations, etc.
As shown in the answer for that topic, besides DynamicLocation many related things appear:
DynamicNamespace, "DynamicName", TagBox cell expressions.
What is the big picture and how do we use it properly?
# post under construction but feel free to add anything
# undocumented feature
# based on observation so it may be wrong
# TODO:
This post does not cover a few things yet, among others:
• how to convert DynamicLocation to absolute numeric coordinates. You can use the menu item Evaluation / Convert Dynamic To Literal on a generated GraphicsBox and it will do that, but this packet works only with a selection, not with expressions/cell expressions. It would be really useful to know this. And it should be possible, because Graph plots usually don't contain DynamicLocation but explicit coordinates.
• does DynamicLocation work with Offset somehow?
# TL;DR;
• The big picture seems to be a framework for marking boxes with IDs which later can be used to find/replace/modify them (boxes).
See: How to set focus of a dialog window? for an example with FrontEndBoxReferenceFind
• In the Graph/Graphics world, additional tools are available so that a marked graphics primitive's (Disk, etc.) location can be expressed symbolically, without knowing its numeric values.
See example from the OP gif:
LocatorPane[Dynamic@x, Graphics[{
    EdgeForm@Thick, FaceForm@None, DynamicName[Rectangle[], "box"],
    Arrow[{Dynamic[x], DynamicLocation["box", Automatic]}]
  }]]
# Symbols guide
• ## DynamicName
DynamicName[obj_, id_String, type___] typesets to
TagBox[objBoxes, type <> "DynamicName", BoxID->id]
type can be "Private", "Public" or can be skipped but this additional argument is not supported inside Graphics.
DynamicName is a handy way to create "marked boxes" to which we can refer later. This way we can write top level code instead of working with cell expressions.
Additionally, the usage of TagBox makes it fully capable of a round trip to and from boxes:
ToBoxes @ DynamicName["A", "test"]
InputForm @ MakeExpression @ %
TagBox["\"A\"", "DynamicName", BoxID -> "test"]
HoldComplete[DynamicName["A", "test"]]
DynamicName is preferred over explicit e.g. Rectangle[BoxID -> "id"] because of that roundtrip ability and because some primitives will not accept options. E.g. CircleBox will complain as there are no CircleBoxOptions in general, as opposed to RectangleBoxOptions RoundingRadius etc.
• ## DynamicNamespace
DynamicNamespace[name_String, expr_, options___] typesets to NamespaceBox[name, exprBoxes, opts]
The name is optional. I don't know if that argument allows anything fancy yet.
DynamicNamespace helps with modular coding and resolving conflicts between identical DynamicNames. E.g. without the NamespaceBox, only the first appearance of a specific BoxID will be known. See example 1.
• ## DynamicLocation
DynamicLocation[name_String, spec1_, spec2_] can be used in graphics primitives as a replacement for point coordinates but pointing to a primitive marked by BoxID -> name.
• spec1 can be None or Automatic. None will point to the center of the marked primitive.
Its behavior depends on which primitive we use it in; e.g. Automatic for spec1 in line-like primitives (Line, Arrow, etc.) will point to the closest point on the edge of the marked primitive with respect to the parent location. See example 2.
• spec2 is ignored when spec1 is None but otherwise can take alignment like specification, {Left, Top} etc or Scaled[t].
Scaled[t] can parametrize position on a marked primitive's edge. Can be used for polygon/line-like primitives. See example 3.
# Examples
1. DynamicNamespace usage:
Compare
disk[pos_] := {
DynamicName[Disk[pos], "name"]
, Arrow[{pos + {2, 2}, DynamicLocation["name", Automatic]}]
}
(*compare this *)
Graphics[ { disk[{0, 0}], disk[{0, -2}], disk[{3, 0}] }, PlotRange -> 5 ]
(* with this *)
Graphics[
DynamicNamespace /@ { disk[{0, 0}], disk[{0, -2}], disk[{3, 0}] }
, PlotRange -> 5
]
In the first example all DynamicLocation["name", Automatic] point to the same, first found, BoxID -> "name". If we combine graphics from different sources we can't predict names, and DynamicNamespace localizes them for us, preventing conflicts.
2. Context sensitive DynamicLocation:
LocatorPane[Dynamic@x,
Graphics[{EdgeForm@Thick, FaceForm@None,
DynamicName[Rectangle[], "box"],
Arrow[{Dynamic[x], DynamicLocation["box", Automatic]}]
,
Circle[DynamicLocation["box", Automatic]]
}, PlotRange -> 2]]
As we can see DynamicLocation["box", Automatic] means something different for Circle and for Arrow.
3. Edge parametrization with the third argument of DynamicLocation:
DynamicModule[{t = 0}, Column[{
Slider@Dynamic@t,
Graphics[{
EdgeForm@Thick, FaceForm@None, DynamicName[Arrow@CirclePoints[7], "box"]
, Thick, Arrow[{{0, 0}, DynamicLocation["box", Automatic, Scaled@Dynamic@t] }]
}, PlotRange -> 2]
}]]
4. Usability in Graphics3D
It looks like it is a purely 2D feature, but it works in a Graphics3D Epilog, which is 2D, together with 3D primitives!
Graphics3D[
{ DynamicName[Cuboid[{-2, -2, 0}], "box"]
, DynamicName[Cuboid[{1, 1, -2}], "box2"]
}
, Epilog -> Dynamic @ {
Arrow[{Scaled[{0, 0}], DynamicLocation["box", Automatic]}]
, Arrow[{DynamicLocation["box", Automatic], DynamicLocation["box2", Automatic] }]
}
, PlotRange -> 2
]
• Crazy exploration.. – yode Sep 7 '17 at 13:26
• @yode It seems to be around for a long time, would really appreciate such things semi documented in experimental notebooks or something. This is really useful stuff. – Kuba Sep 7 '17 at 13:28
• It is the first function for which GeneralUtilitiesHasDefinitionsQ returns false but which works well, as far as I know. Could you tell me which are your experimental notebooks? Actually I have used AstroGrep, but I cannot find anything.. – yode Sep 7 '17 at 13:32
• @yode I just meant 'unofficial documentation' of features that are being used but are not production ready, or something. So that users can learn more but couldn't complain if it breaks ;) – Kuba Sep 7 '17 at 13:34
• @Kuba also see PacletFind["WolframAlphaClient"][[1]]["Location"] and CurrentValue[\$FrontEnd, {StyleDefinitions, "WolframAlphaShortInput", NamespaceBoxOptions}]. The latter defines the ops it can take (as the type-set form of DynamicNamespace is NamespaceBox). The former uses it with a named string arg, etc. They also show how you make that named DynamicNamespace do special things (by changing the Format of NamespaceBox, essentially). – b3m2a1 Sep 7 '17 at 15:19
I think this is a very useful demonstration of how to use this new feature, such as making a smart arrow connect the explanation text with the object. For example:
Graphics[{DynamicName[Disk[], "disk"],
DynamicName[Style[Text["This is a black disk", {3, 2}], 18, Red],"Explanation"],
DynamicName[Style[Text["This is a black disk", {0, 2}], 18, Blue],"AnotherExplanation"],
Arrow[{DynamicLocation["Explanation", Automatic],DynamicLocation["disk", Automatic]}],
Arrow[{DynamicLocation["AnotherExplanation",Automatic],DynamicLocation["disk", Automatic]}]}]
You don't need to specify the specific coordinates for your Arrow. I think it is useful.
https://socratic.org/questions/how-do-you-simplify-sqrt52-sqrt1300 | # How do you simplify sqrt52-sqrt1300?
Jun 6, 2017
See a solution process below:
#### Explanation:
We can rewrite the terms within the radicals as:
$\sqrt{4 \cdot 13} - \sqrt{4 \cdot 325}$
Using this rule for the multiplication of radicals,
$\sqrt{a \cdot b} = \sqrt{a} \cdot \sqrt{b},$
we can rewrite each radical as:
$\sqrt{4 \cdot 13} - \sqrt{4 \cdot 325} = \left(\sqrt{4} \cdot \sqrt{13}\right) - \left(\sqrt{4} \cdot \sqrt{325}\right) = \sqrt{4}\left(\sqrt{13} - \sqrt{325}\right) = 2\left(\sqrt{13} - \sqrt{325}\right)$
We can now simplify the radical on the right as:
$2\left(\sqrt{13} - \sqrt{25 \cdot 13}\right) = 2\left(\sqrt{13} - \left(\sqrt{25} \cdot \sqrt{13}\right)\right) =$
$2\left(\sqrt{13} - 5\sqrt{13}\right) = 2\sqrt{13}\left(1 - 5\right) = 2\sqrt{13} \cdot (-4) =$
$-8\sqrt{13}$
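A quick numeric sanity check of the result, using only the standard library:

```python
import math

lhs = math.sqrt(52) - math.sqrt(1300)  # original expression
rhs = -8 * math.sqrt(13)               # simplified form
print(lhs, rhs)  # both approximately -28.844
```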
Or
$- 28.8$ rounded to the nearest 10th | 2020-07-14 01:20:28 | {"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 8, "mathjax_inline_tex": 1, "mathjax_display_tex": 0, "mathjax_asciimath": 1, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.9889886975288391, "perplexity": 2556.69291614225}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2020-29/segments/1593657147031.78/warc/CC-MAIN-20200713225620-20200714015620-00430.warc.gz"} |
http://obadimu.com/portfolio/customer-segment | # Customer Segment
customer_segments
# Machine Learning Engineer Nanodegree¶
## Project: Creating Customer Segments¶
Welcome to the third project of the Machine Learning Engineer Nanodegree! In this notebook, some template code has already been provided for you, and it will be your job to implement the additional functionality necessary to successfully complete this project. Sections that begin with 'Implementation' in the header indicate that the following block of code will require additional functionality which you must provide. Instructions will be provided for each section and the specifics of the implementation are marked in the code block with a 'TODO' statement. Please be sure to read the instructions carefully!
Note: Code and Markdown cells can be executed using the Shift + Enter keyboard shortcut. In addition, Markdown cells can be edited by typically double-clicking the cell to enter edit mode.
## Getting Started¶
In this project, you will analyze a dataset containing data on various customers' annual spending amounts (reported in monetary units) of diverse product categories for internal structure. One goal of this project is to best describe the variation in the different types of customers that a wholesale distributor interacts with. Doing so would equip the distributor with insight into how to best structure their delivery service to meet the needs of each customer.
The dataset for this project can be found on the UCI Machine Learning Repository. For the purposes of this project, the features 'Channel' and 'Region' will be excluded in the analysis — with focus instead on the six product categories recorded for customers.
Run the code block below to load the wholesale customers dataset, along with a few of the necessary Python libraries required for this project. You will know the dataset loaded successfully if the size of the dataset is reported.
In [81]:
# Import libraries necessary for this project
import numpy as np
import pandas as pd
from IPython.display import display # Allows the use of display() for DataFrames
# Import supplementary visualizations code visuals.py
import visuals as vs
# Pretty display for notebooks
%matplotlib inline
# Load the wholesale customers dataset
try:
    data = pd.read_csv("customers.csv")  # load the wholesale customers data
    data.drop(['Region', 'Channel'], axis = 1, inplace = True)
    print "Wholesale customers dataset has {} samples with {} features each.".format(*data.shape)
except:
    print "Dataset could not be loaded. Is the dataset missing?"
Wholesale customers dataset has 440 samples with 6 features each.
## Data Exploration¶
In this section, you will begin exploring the data through visualizations and code to understand how each feature is related to the others. You will observe a statistical description of the dataset, consider the relevance of each feature, and select a few sample data points from the dataset which you will track through the course of this project.
Run the code block below to observe a statistical description of the dataset. Note that the dataset is composed of six important product categories: 'Fresh', 'Milk', 'Grocery', 'Frozen', 'Detergents_Paper', and 'Delicatessen'. Consider what each category represents in terms of products you could purchase.
In [82]:
# Display a description of the dataset
display(data.describe())
|       | Fresh         | Milk         | Grocery      | Frozen       | Detergents_Paper | Delicatessen |
|-------|---------------|--------------|--------------|--------------|------------------|--------------|
| count | 440.000000    | 440.000000   | 440.000000   | 440.000000   | 440.000000       | 440.000000   |
| mean  | 12000.297727  | 5796.265909  | 7951.277273  | 3071.931818  | 2881.493182      | 1524.870455  |
| std   | 12647.328865  | 7380.377175  | 9503.162829  | 4854.673333  | 4767.854448      | 2820.105937  |
| min   | 3.000000      | 55.000000    | 3.000000     | 25.000000    | 3.000000         | 3.000000     |
| 25%   | 3127.750000   | 1533.000000  | 2153.000000  | 742.250000   | 256.750000       | 408.250000   |
| 50%   | 8504.000000   | 3627.000000  | 4755.500000  | 1526.000000  | 816.500000       | 965.500000   |
| 75%   | 16933.750000  | 7190.250000  | 10655.750000 | 3554.250000  | 3922.000000      | 1820.250000  |
| max   | 112151.000000 | 73498.000000 | 92780.000000 | 60869.000000 | 40827.000000     | 47943.000000 |
### Implementation: Selecting Samples¶
To get a better understanding of the customers and how their data will transform through the analysis, it would be best to select a few sample data points and explore them in more detail. In the code block below, add three indices of your choice to the indices list which will represent the customers to track. It is suggested to try different sets of samples until you obtain customers that vary significantly from one another.
In [83]:
# TODO: Select three indices of your choice you wish to sample from the dataset
indices = [10,45,300]
# Create a DataFrame of the chosen samples
samples = pd.DataFrame(data.loc[indices], columns = data.keys()).reset_index(drop = True)
print "Chosen samples of wholesale customers dataset:"
display(samples)
Chosen samples of wholesale customers dataset:
|   | Fresh | Milk  | Grocery | Frozen | Detergents_Paper | Delicatessen |
|---|-------|-------|---------|--------|------------------|--------------|
| 0 | 3366  | 5403  | 12974   | 4400   | 5977             | 1744         |
| 1 | 5181  | 22044 | 21531   | 1740   | 7353             | 4985         |
| 2 | 16448 | 6243  | 6360    | 824    | 2662             | 2005         |
### Question 1¶
Consider the total purchase cost of each product category and the statistical description of the dataset above for your sample customers.
• What kind of establishment (customer) could each of the three samples you've chosen represent?
Hint: Examples of establishments include places like markets, cafes, delis, wholesale retailers, among many others. Avoid using names for establishments, such as saying "McDonalds" when describing a sample customer as a restaurant. You can use the mean values for reference to compare your samples with. The mean values are as follows:
• Fresh: 12000.2977
• Milk: 5796.2
• Grocery: 7951.2
• Frozen: 3071.9
• Detergents_paper: 2881.4
• Delicatessen: 1524.8
Knowing this, how do your samples compare? Does that help in driving your insight into what kind of establishments they might be?
The kind of establishment (customer) that each of the three samples I have chosen could represent is discussed below:
1) Index 10: After carefully inspecting the sample, this establishment seems to have a higher than average consumption of Grocery (12974 > 7951.3), Detergents_Paper (5977 > 2881.4), Delicatessen (1744 > 1524.8) and Frozen products (4400 > 3071.9). However, it has slightly lower than average values for the rest of the products. For example, in terms of the actual and mean values, this establishment has [Milk (5403, 5796.2), Fresh (3366, 12000.3)]. Also, the Grocery value is well above the 75th percentile [12974 > 10655.75] for this customer. Hence, it is clear that this establishment does not deal so much in fresh or perishable products and stocks a lot of grocery products. My best bet is that this establishment is a GROCERY SHOP or a MINI SUPERMARKET.
2) Index 45: Looking at the sample, this establishment seems to have a lower than average consumption of Fresh (5181 < 12000.3) and Frozen (1740 < 3071.9) products, but higher than average values for the rest. For example, in terms of the actual and mean values, this establishment has [Milk (22044 > 5796.2), Grocery (21531 > 7951.3), Detergents_Paper (7353 > 2881.4), and Delicatessen (4985 > 1524.8)]. Similarly, comparing the values to the 75th percentile, this establishment has an unusually high consumption of dairy products such as Milk; in fact, the Milk value outweighs the 75th percentile for Milk [22044 > 7190.25]. With all the aforementioned in mind, I would guess that this establishment is most likely a COFFEE SHOP.
3) Index 300: Similarly, this establishment seems to have a lower than average consumption of Detergents_Paper (2662 < 2881.4), Frozen (824 < 3071.9) and Grocery (6360 < 7951.3) products, but higher than average values for the rest. For example, in terms of the actual and mean values, this establishment has [Fresh (16448 > 12000.3), Milk (6243 > 5796.2), and Delicatessen (2005 > 1524.8)]. After inspecting these figures, I would guess that this establishment is most likely a RESTAURANT.
Generally, mean and median tend to hide outliers. From the description of the dataset, the data is skewed, with a large deviation of the mean from the median. Since the mean can be easily distorted by outliers, it makes sense to compare the spending with percentiles.
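Since percentiles are more robust than the mean here, a percentile rank places a sample directly. A minimal standalone sketch (the spending values are hypothetical, not the project data):

```python
# hypothetical annual 'Frozen' spending for seven customers
frozen = [25.0, 742.0, 1526.0, 1740.0, 3554.0, 4400.0, 60869.0]
sample = 4400.0

# percentile rank: percentage of customers spending strictly less than the sample
rank = 100.0 * sum(x < sample for x in frozen) / len(frozen)
print(round(rank, 1))  # -> 71.4
```

On the real data the same idea works column by column, e.g. comparing each sample row against the quartiles reported by data.describe().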
### Implementation: Feature Relevance¶
One interesting thought to consider is if one (or more) of the six product categories is actually relevant for understanding customer purchasing. That is to say, is it possible to determine whether customers purchasing some amount of one category of products will necessarily purchase some proportional amount of another category of products? We can make this determination quite easily by training a supervised regression learner on a subset of the data with one feature removed, and then score how well that model can predict the removed feature.
In the code block below, you will need to implement the following:
• Assign new_data a copy of the data by removing a feature of your choice using the DataFrame.drop function.
• Use sklearn.cross_validation.train_test_split to split the dataset into training and testing sets.
• Use the removed feature as your target label. Set a test_size of 0.25 and set a random_state.
• Import a decision tree regressor, set a random_state, and fit the learner to the training data.
• Report the prediction score of the testing set using the regressor's score function.
In [84]:
from sklearn.cross_validation import train_test_split
from sklearn.tree import DecisionTreeRegressor
# TODO: Make a copy of the DataFrame, using the 'drop' function to drop the given feature
new_data = data.drop(['Frozen'], axis=1)
# TODO: Split the data into training and testing sets(0.25) using the given feature as the target
# Set a random state.
X_train, X_test, y_train, y_test = train_test_split(new_data, data['Frozen'], test_size=0.25, random_state=20)
# TODO: Create a decision tree regressor and fit it to the training set
regressor = DecisionTreeRegressor(random_state=20)
regressor.fit(X_train, y_train)
# TODO: Report the score of the prediction using the testing set
score = regressor.score(X_test, y_test)
score
Out[84]:
-0.57017510815055528
### Question 2¶
• Which feature did you attempt to predict?
• What was the reported prediction score?
• Is this feature necessary for identifying customers' spending habits?
Hint: The coefficient of determination, R^2, is scored between 0 and 1, with 1 being a perfect fit. A negative R^2 implies the model fails to fit the data. If you get a low score for a particular feature, that leads us to believe that that feature is hard to predict using the other features, thereby making it an important feature to consider when considering relevance.
1) I attempted to predict Frozen.
2) The reported prediction score was -0.57017510815055528.
3) Since we are taking the "Frozen" feature as a label and using the rest of the features to predict it, a negative R^2 means the "Frozen" feature is not correlated with the other features. In other words, it conveys information that cannot be derived from the other features. Thus the negative R^2 value means the feature can be quite useful.
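To see concretely why a negative R^2 means "worse than predicting the mean", here is a standalone computation of the same formula that regressor.score reports (illustrative toy data, not the project's):

```python
def r2(y_true, y_pred):
    # R^2 = 1 - SS_res / SS_tot; 1 is a perfect fit, 0 matches the mean baseline
    mean = sum(y_true) / float(len(y_true))
    ss_res = sum((t - p) ** 2 for t, p in zip(y_true, y_pred))
    ss_tot = sum((t - mean) ** 2 for t in y_true)
    return 1.0 - ss_res / ss_tot

# anti-correlated predictions fit worse than always guessing the mean (2.0)
print(r2([1.0, 2.0, 3.0], [3.0, 2.0, 1.0]))  # -> -3.0
```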
### Visualize Feature Distributions¶
To get a better understanding of the dataset, we can construct a scatter matrix of each of the six product features present in the data. If you found that the feature you attempted to predict above is relevant for identifying a specific customer, then the scatter matrix below may not show any correlation between that feature and the others. Conversely, if you believe that feature is not relevant for identifying a specific customer, the scatter matrix might show a correlation between that feature and another feature in the data. Run the code block below to produce a scatter matrix.
In [85]:
# Produce a scatter matrix for each pair of features in the data
from scipy.stats import skew
pd.scatter_matrix(data, alpha = 0.3, figsize = (14,8), diagonal = 'kde');
display(data.corr())
display(skew(data))
C:\Users\admod\Anaconda2\lib\site-packages\ipykernel_launcher.py:3: FutureWarning: pandas.scatter_matrix is deprecated. Use pandas.plotting.scatter_matrix instead
This is separate from the ipykernel package so we can avoid doing imports until
Fresh Milk Grocery Frozen Detergents_Paper Delicatessen
Fresh 1.000000 0.100510 -0.011854 0.345881 -0.101953 0.244690
Milk 0.100510 1.000000 0.728335 0.123994 0.661816 0.406368
Grocery -0.011854 0.728335 1.000000 -0.040193 0.924641 0.205497
Frozen 0.345881 0.123994 -0.040193 1.000000 -0.131525 0.390947
Detergents_Paper -0.101953 0.661816 0.924641 -0.131525 1.000000 0.069291
Delicatessen 0.244690 0.406368 0.205497 0.390947 0.069291 1.000000
array([ 2.55258269, 4.03992212, 3.57518722, 5.88782573, 3.61945758, 11.11353365])
### Question 3¶
• Using the scatter matrix as a reference, discuss the distribution of the dataset, specifically talking about the normality, outliers, and large number of data points near 0, among others. If you need to separate out some of the plots individually to further accentuate your point, you may do so as well.
• Are there any pairs of features which exhibit some degree of correlation?
• Does this confirm or deny your suspicions about the relevance of the feature you attempted to predict?
• How is the data for those features distributed?
Hint: Is the data normally distributed? Where do most of the data points lie? You can use corr() to get the feature correlations and then visualize them using a heatmap (the data fed into the heatmap would be the correlation values, e.g. data.corr()) to gain further insight.
Skewness is a good measure of whether data is normally distributed. We can measure it with scipy.stats.skew(), which returns 0 for normally distributed data. Since every skewness value is > 0, the data is positively (right-) skewed: the bulk of the mass sits to the left with a long right tail. Further, using the scatter matrix as a reference, I observe that most of the data points lie close to the origin (0, 0), so I assume some of these data points are outliers. The data is not normally distributed based on the visualisation; much of it is crowded into specific regions. In terms of relationships between features, I notice a strong correlation between Detergents_Paper and Grocery (0.924641), and a moderately positive correlation between Milk and Grocery. This confirms my initial suspicion that Frozen has no noticeable correlation with the other features. The data is skewed and may need to be scaled.
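The skewness reading above can be sanity-checked in isolation: a normal sample scores near 0 with scipy.stats.skew, while a lognormal (heavy right tail) sample, which resembles the spending columns here, scores well above 0. A minimal sketch on synthetic data:

```python
# Skewness of a symmetric vs. a right-skewed sample.
import numpy as np
from scipy.stats import skew

rng = np.random.RandomState(0)
normal_sample = rng.normal(size=10000)
skewed_sample = np.exp(normal_sample)   # lognormal: long right tail

print(round(skew(normal_sample), 3))    # close to 0
print(round(skew(skewed_sample), 3))    # well above 0, like the values above
```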
## Data Preprocessing¶
In this section, you will preprocess the data to create a better representation of customers by performing a scaling on the data and detecting (and optionally removing) outliers. Preprocessing data is often times a critical step in assuring that results you obtain from your analysis are significant and meaningful.
### Implementation: Feature Scaling¶
If data is not normally distributed, especially if the mean and median vary significantly (indicating a large skew), it is most often appropriate to apply a non-linear scaling — particularly for financial data. One way to achieve this scaling is the Box-Cox transformation, which calculates the best power transformation of the data to reduce skewness. A simpler approach which works in most cases is applying the natural logarithm.
In the code block below, you will need to implement the following:
• Assign a copy of the data to log_data after applying logarithmic scaling. Use the np.log function for this.
• Assign a copy of the sample data to log_samples after applying logarithmic scaling. Again, use np.log.
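The Box-Cox alternative mentioned above is not used in this notebook, but it is worth seeing how it behaves; a sketch on synthetic lognormal "spending" data (scipy.stats.boxcox estimates the power transform that best normalizes the input, with the natural log as its lambda = 0 special case):

```python
# Box-Cox vs. raw skew on synthetic right-skewed data.
import numpy as np
from scipy.stats import boxcox, skew

rng = np.random.RandomState(1)
spending = np.exp(rng.normal(loc=8, scale=1, size=500))  # lognormal, like spending

transformed, lam = boxcox(spending)  # lam near 0 here => close to np.log
# skew(transformed) is much smaller in magnitude than skew(spending)
```

For data that is close to lognormal, the fitted lambda lands near zero, which is why the simpler np.log used below works well.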
In [86]:
# TODO: Scale the data using the natural logarithm
log_data = np.log(data)
# TODO: Scale the sample data using the natural logarithm
log_samples = np.log(samples)
# Produce a scatter matrix for each pair of newly-transformed features
pd.scatter_matrix(log_data, alpha = 0.3, figsize = (14,8), diagonal = 'kde');
### Observation¶
After applying a natural logarithm scaling to the data, the distribution of each feature should appear much more normal. For any pairs of features you may have identified earlier as being correlated, observe here whether that correlation is still present (and whether it is now stronger or weaker than before).
Run the code below to see how the sample data has changed after having the natural logarithm applied to it.
In [87]:
# Display the log-transformed sample data
display(log_samples)
Fresh Milk Grocery Frozen Detergents_Paper Delicatessen
0 8.121480 8.594710 9.470703 8.389360 8.695674 7.463937
1 8.552753 10.000796 9.977249 7.461640 8.902864 8.514189
2 9.707959 8.739216 8.757784 6.714171 7.886833 7.603399
### Implementation: Outlier Detection¶
Detecting outliers in the data is extremely important in the data preprocessing step of any analysis. The presence of outliers can often skew results which take these data points into consideration. There are many "rules of thumb" for what constitutes an outlier in a dataset. Here, we will use Tukey's Method for identifying outliers: an outlier step is calculated as 1.5 times the interquartile range (IQR). A data point with a feature value beyond an outlier step outside of the IQR for that feature is considered abnormal.
In the code block below, you will need to implement the following:
• Assign the value of the 25th percentile for the given feature to Q1. Use np.percentile for this.
• Assign the value of the 75th percentile for the given feature to Q3. Again, use np.percentile.
• Assign the calculation of an outlier step for the given feature to step.
• Optionally remove data points from the dataset by adding indices to the outliers list.
NOTE: If you choose to remove any outliers, ensure that the sample data does not contain any of these points!
Once you have performed this implementation, the dataset will be stored in the variable good_data.
In [88]:
# For each feature find the data points with extreme high or low values
for feature in log_data.keys():
# TODO: Calculate Q1 (25th percentile of the data) for the given feature
Q1 = np.percentile(log_data[feature],25)
# TODO: Calculate Q3 (75th percentile of the data) for the given feature
Q3 = np.percentile(log_data[feature],75)
# TODO: Use the interquartile range to calculate an outlier step (1.5 times the interquartile range)
step = (Q3-Q1) * 1.5
# Display the outliers
print "Data points considered outliers for the feature '{}':".format(feature)
display(log_data[~((log_data[feature] >= Q1 - step) & (log_data[feature] <= Q3 + step))])
# OPTIONAL: Select the indices for data points you wish to remove
outliers = [65, 66,75, 128, 154]
# Remove the outliers, if any were specified
good_data = log_data.drop(log_data.index[outliers]).reset_index(drop = True)
Data points considered outliers for the feature 'Fresh':
Fresh Milk Grocery Frozen Detergents_Paper Delicatessen
65 4.442651 9.950323 10.732651 3.583519 10.095388 7.260523
66 2.197225 7.335634 8.911530 5.164786 8.151333 3.295837
81 5.389072 9.163249 9.575192 5.645447 8.964184 5.049856
95 1.098612 7.979339 8.740657 6.086775 5.407172 6.563856
96 3.135494 7.869402 9.001839 4.976734 8.262043 5.379897
128 4.941642 9.087834 8.248791 4.955827 6.967909 1.098612
171 5.298317 10.160530 9.894245 6.478510 9.079434 8.740337
193 5.192957 8.156223 9.917982 6.865891 8.633731 6.501290
218 2.890372 8.923191 9.629380 7.158514 8.475746 8.759669
304 5.081404 8.917311 10.117510 6.424869 9.374413 7.787382
305 5.493061 9.468001 9.088399 6.683361 8.271037 5.351858
338 1.098612 5.808142 8.856661 9.655090 2.708050 6.309918
353 4.762174 8.742574 9.961898 5.429346 9.069007 7.013016
355 5.247024 6.588926 7.606885 5.501258 5.214936 4.844187
357 3.610918 7.150701 10.011086 4.919981 8.816853 4.700480
412 4.574711 8.190077 9.425452 4.584967 7.996317 4.127134
Data points considered outliers for the feature 'Milk':
Fresh Milk Grocery Frozen Detergents_Paper Delicatessen
86 10.039983 11.205013 10.377047 6.894670 9.906981 6.805723
98 6.220590 4.718499 6.656727 6.796824 4.025352 4.882802
154 6.432940 4.007333 4.919981 4.317488 1.945910 2.079442
356 10.029503 4.897840 5.384495 8.057377 2.197225 6.306275
Data points considered outliers for the feature 'Grocery':
Fresh Milk Grocery Frozen Detergents_Paper Delicatessen
75 9.923192 7.036148 1.098612 8.390949 1.098612 6.882437
154 6.432940 4.007333 4.919981 4.317488 1.945910 2.079442
Data points considered outliers for the feature 'Frozen':
Fresh Milk Grocery Frozen Detergents_Paper Delicatessen
38 8.431853 9.663261 9.723703 3.496508 8.847360 6.070738
57 8.597297 9.203618 9.257892 3.637586 8.932213 7.156177
65 4.442651 9.950323 10.732651 3.583519 10.095388 7.260523
145 10.000569 9.034080 10.457143 3.737670 9.440738 8.396155
175 7.759187 8.967632 9.382106 3.951244 8.341887 7.436617
264 6.978214 9.177714 9.645041 4.110874 8.696176 7.142827
325 10.395650 9.728181 9.519735 11.016479 7.148346 8.632128
420 8.402007 8.569026 9.490015 3.218876 8.827321 7.239215
429 9.060331 7.467371 8.183118 3.850148 4.430817 7.824446
439 7.932721 7.437206 7.828038 4.174387 6.167516 3.951244
Data points considered outliers for the feature 'Detergents_Paper':
Fresh Milk Grocery Frozen Detergents_Paper Delicatessen
75 9.923192 7.036148 1.098612 8.390949 1.098612 6.882437
161 9.428190 6.291569 5.645447 6.995766 1.098612 7.711101
Data points considered outliers for the feature 'Delicatessen':
Fresh Milk Grocery Frozen Detergents_Paper Delicatessen
66 2.197225 7.335634 8.911530 5.164786 8.151333 3.295837
109 7.248504 9.724899 10.274568 6.511745 6.728629 1.098612
128 4.941642 9.087834 8.248791 4.955827 6.967909 1.098612
137 8.034955 8.997147 9.021840 6.493754 6.580639 3.583519
142 10.519646 8.875147 9.018332 8.004700 2.995732 1.098612
154 6.432940 4.007333 4.919981 4.317488 1.945910 2.079442
183 10.514529 10.690808 9.911952 10.505999 5.476464 10.777768
184 5.789960 6.822197 8.457443 4.304065 5.811141 2.397895
187 7.798933 8.987447 9.192075 8.743372 8.148735 1.098612
203 6.368187 6.529419 7.703459 6.150603 6.860664 2.890372
233 6.871091 8.513988 8.106515 6.842683 6.013715 1.945910
285 10.602965 6.461468 8.188689 6.948897 6.077642 2.890372
289 10.663966 5.655992 6.154858 7.235619 3.465736 3.091042
343 7.431892 8.848509 10.177932 7.283448 9.646593 3.610918
### Question 4¶
• Are there any data points considered outliers for more than one feature based on the definition above?
• Should these data points be removed from the dataset?
• If any data points were added to the outliers list to be removed, explain why.
Hint: If you have datapoints that are outliers in multiple categories think about why that may be and if they warrant removal. Also note how k-means is affected by outliers and whether or not this plays a factor in your analysis of whether or not to remove them.
There are some data points considered outliers for more than one feature, for example [65, 66, 75, 128, 154]. To elaborate: data point 65 is an outlier for Frozen and Fresh; 66 for Delicatessen and Fresh; 75 for Detergents_Paper and Grocery; 128 for Delicatessen and Fresh; and 154 for Delicatessen, Milk, and Grocery.
Yes, these data points should be removed from the dataset. Removing all outliers could discard crucial information, but removing data points that appear as outliers in more than one feature should not hurt the model. In fact, I think it will improve the outcome, since clustering algorithms such as K-means are pulled strongly by extreme points, and these multi-feature outliers are unlikely to provide any additional information that benefits the model.
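The repeat offenders listed above can also be found programmatically rather than by eye; a minimal sketch applying Tukey's rule per feature and tallying flagged indices with collections.Counter, on a toy DataFrame standing in for log_data:

```python
# Find indices flagged as outliers in more than one feature.
from collections import Counter
import numpy as np
import pandas as pd

df = pd.DataFrame({
    'A': [10, 11, 12, 10, 11, 12, 10, 11, 12, 100],  # row 9 is extreme
    'B': [5, 6, 7, 5, 6, 7, 5, 6, 7, 200],           # row 9 is extreme again
})

flagged = []
for feature in df.columns:
    Q1, Q3 = np.percentile(df[feature], [25, 75])
    step = 1.5 * (Q3 - Q1)
    mask = ~df[feature].between(Q1 - step, Q3 + step)
    flagged.extend(df.index[mask])

# Keep only indices that are outliers in 2+ features
multi = [idx for idx, n in Counter(flagged).items() if n > 1]
print(multi)   # row 9 only
```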
## Feature Transformation¶
In this section you will use principal component analysis (PCA) to draw conclusions about the underlying structure of the wholesale customer data. Since using PCA on a dataset calculates the dimensions which best maximize variance, we will find which compound combinations of features best describe customers.
### Implementation: PCA¶
Now that the data has been scaled to a more normal distribution and has had any necessary outliers removed, we can now apply PCA to the good_data to discover which dimensions about the data best maximize the variance of features involved. In addition to finding these dimensions, PCA will also report the explained variance ratio of each dimension — how much variance within the data is explained by that dimension alone. Note that a component (dimension) from PCA can be considered a new "feature" of the space, however it is a composition of the original features present in the data.
In the code block below, you will need to implement the following:
• Import sklearn.decomposition.PCA and assign the results of fitting PCA in six dimensions with good_data to pca.
• Apply a PCA transformation of log_samples using pca.transform, and assign the results to pca_samples.
In [89]:
from sklearn.decomposition import PCA
# TODO: Apply PCA by fitting the good data with the same number of dimensions as features
pca = PCA(n_components=6).fit(good_data)
# TODO: Transform log_samples using the PCA fit above
pca_samples = pca.transform(log_samples)
# Generate PCA results plot
pca_results = vs.pca_results(good_data, pca)
### Question 5¶
• How much variance in the data is explained in total by the first and second principal component?
• How much variance in the data is explained by the first four principal components?
• Using the visualization provided above, talk about each dimension and the cumulative variance explained by each, stressing upon which features are well represented by each dimension(both in terms of positive and negative variance explained). Discuss what the first four dimensions best represent in terms of customer spending.
Hint: A positive increase in a specific dimension corresponds with an increase of the positive-weighted features and a decrease of the negative-weighted features. The rate of increase or decrease is based on the individual feature weights.
The variance in the data that is explained by the first principal component is 44.02% and by the second principal component is 27.00% making a total of 71.02%. For the third principal component, it is 12.14% and for the fourth, it is 10.02%. The total is 93.18%. This is the amount of variance explained by the first four principal components.
Observing each of these dimensions: the first dimension shows high absolute weights on Detergents_Paper, Milk, and Grocery, which could represent retail customer behaviour. The second dimension shows high absolute weights for Fresh, Frozen, and Delicatessen; it could represent a fast-food restaurant. In dimension 3 it is interesting to observe an inverse relationship between Delicatessen and Fresh: spending on Fresh is inversely correlated with spending on Delicatessen, and the positive direction indicates a customer who spends more on Fresh and less on Delicatessen. The fourth dimension mainly shows the variation between Frozen and Delicatessen, which are again inversely related. Frozen is the highest positive-weighted feature, i.e. it dominates this component; this could represent bulk buyers of frozen goods.
Since we are more interested in the magnitude of the feature weights in these dimensions, flipping the positive and negative signs of a component is perfectly acceptable and does not affect the discussion. For example, in dimension 1 the predominant spending is on Milk, Grocery, and Detergents_Paper; although they all have negative weights, it is the absolute value that matters, since the signs of the weights are identical.
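The per-dimension and cumulative percentages quoted above come straight from the explained_variance_ratio_ attribute of the fitted PCA object; a standalone sketch with random data standing in for good_data:

```python
# Explained variance per component and cumulatively.
import numpy as np
from sklearn.decomposition import PCA

rng = np.random.RandomState(0)
X = rng.normal(size=(100, 6))            # stand-in for the 6-feature good_data

pca = PCA(n_components=6).fit(X)
ratios = pca.explained_variance_ratio_   # one entry per dimension
cumulative = np.cumsum(ratios)           # cumulative[1] = "first two components"
# With all 6 components kept, cumulative[-1] is 1.0
```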
### Observation¶
Run the code below to see how the log-transformed sample data has changed after having a PCA transformation applied to it in six dimensions. Observe the numerical value for the first four dimensions of the sample points. Consider if this is consistent with your initial interpretation of the sample points.
In [90]:
# Display sample log-data after having a PCA transformation applied
display(pd.DataFrame(np.round(pca_samples, 4), columns = pca_results.index.values))
Dimension 1 Dimension 2 Dimension 3 Dimension 4 Dimension 5 Dimension 6
0 -2.0887 -0.7006 0.8537 1.0105 -0.5587 0.2495
1 -3.2813 -1.3308 0.9322 -0.2862 0.3269 -0.1179
2 -1.2804 -0.9587 -0.4701 -0.9124 -0.2345 -0.2514
### Implementation: Dimensionality Reduction¶
When using principal component analysis, one of the main goals is to reduce the dimensionality of the data — in effect, reducing the complexity of the problem. Dimensionality reduction comes at a cost: fewer dimensions used implies less of the total variance in the data is being explained. Because of this, the cumulative explained variance ratio is extremely important for knowing how many dimensions are necessary for the problem. Additionally, if a significant amount of variance is explained by only two or three dimensions, the reduced data can be visualized afterwards.
In the code block below, you will need to implement the following:
• Assign the results of fitting PCA in two dimensions with good_data to pca.
• Apply a PCA transformation of good_data using pca.transform, and assign the results to reduced_data.
• Apply a PCA transformation of log_samples using pca.transform, and assign the results to pca_samples.
In [92]:
# TODO: Apply PCA by fitting the good data with only two dimensions
pca = PCA(n_components=2).fit(good_data)
# TODO: Transform the good data using the PCA fit above
reduced_data = pca.transform(good_data)
# TODO: Transform log_samples using the PCA fit above
pca_samples = pca.transform(log_samples)
# Create a DataFrame for the reduced data
reduced_data = pd.DataFrame(reduced_data, columns = ['Dimension 1', 'Dimension 2'])
### Observation¶
Run the code below to see how the log-transformed sample data has changed after having a PCA transformation applied to it using only two dimensions. Observe how the values for the first two dimensions remains unchanged when compared to a PCA transformation in six dimensions.
In [93]:
# Display sample log-data after applying PCA transformation in two dimensions
display(pd.DataFrame(np.round(pca_samples, 4), columns = ['Dimension 1', 'Dimension 2']))
Dimension 1 Dimension 2
0 -2.0887 -0.7006
1 -3.2813 -1.3308
2 -1.2804 -0.9587
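The observation that the first two dimensions are unchanged between the 6-component and 2-component fits can be verified directly, since PCA's leading components do not depend on how many trailing components are requested; a sketch on random stand-in data:

```python
# Leading PCA components are invariant to the number of components kept.
import numpy as np
from sklearn.decomposition import PCA

rng = np.random.RandomState(0)
X = rng.normal(size=(120, 6))

full = PCA(n_components=6).fit(X).transform(X)
two = PCA(n_components=2).fit(X).transform(X)

# First two columns of the 6-D transform match the 2-D transform
print(np.allclose(full[:, :2], two))   # True
```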
## Visualizing a Biplot¶
A biplot is a scatterplot where each data point is represented by its scores along the principal components. The axes are the principal components (in this case Dimension 1 and Dimension 2). In addition, the biplot shows the projection of the original features along the components. A biplot can help us interpret the reduced dimensions of the data, and discover relationships between the principal components and original features.
Run the code cell below to produce a biplot of the reduced-dimension data.
In [94]:
# Create a biplot
vs.biplot(good_data, reduced_data, pca)
Out[94]:
<matplotlib.axes._subplots.AxesSubplot at 0x10f955f8>
### Observation¶
Once we have the original feature projections (in red), it is easier to interpret the relative position of each data point in the scatterplot. For instance, a point in the lower right corner of the figure will likely correspond to a customer that spends a lot on 'Milk', 'Grocery' and 'Detergents_Paper', but not so much on the other product categories.
From the biplot, which of the original features are most strongly correlated with the first component? What about those that are associated with the second component? Do these observations agree with the pca_results plot you obtained earlier?
## Clustering¶
In this section, you will choose to use either a K-Means clustering algorithm or a Gaussian Mixture Model clustering algorithm to identify the various customer segments hidden in the data. You will then recover specific data points from the clusters to understand their significance by transforming them back into their original dimension and scale.
### Question 6¶
• What are the advantages to using a K-Means clustering algorithm?
• What are the advantages to using a Gaussian Mixture Model clustering algorithm?
• Given your observations about the wholesale customer data so far, which of the two algorithms will you use and why?
Hint: Think about the differences between hard clustering and soft clustering and which would be appropriate for our dataset.
ADVANTAGES OF K-MEANS CLUSTERING 1) K-means clustering is simple and quite easy to implement. 2) The resulting clusters are easy to interpret. 3) It is computationally faster than hierarchical clustering on large datasets: O(knT), where k is the number of clusters, n the sample size, and T the number of iterations.
ADVANTAGES OF GAUSSIAN MIXTURE MODEL CLUSTERING 1] Compared to K-means, GMM is far more flexible in terms of cluster covariance. K-means is actually a special case of GMM in which each cluster's covariance along all dimensions approaches 0, which implies that a point gets assigned only to the cluster closest to it [1]. 2] Another implication of its covariance structure is that GMM allows mixed membership of points in clusters. In K-means a point belongs to one and only one cluster, whereas in GMM a point belongs to each cluster to a different degree [1].
WHICH ALGORITHM TO USE AND WHY Based on my observations of the wholesale data so far, I lean towards the Gaussian Mixture Model clustering approach, for the following reasons: its flexibility, its ability to form non-spherical clusters, and its allowance for mixed membership of points in clusters. I believe it is the better fit for this dataset. Also, after observing the biplot, I realised that data points can plausibly belong to multiple clusters because the points are denser in some regions, which suits soft clustering.
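The hard-vs-soft distinction can be seen directly: K-means emits a single label per point, while a Gaussian mixture reports a probability of membership in every cluster. A sketch on two synthetic blobs, using GaussianMixture (the modern sklearn replacement for the deprecated GMM class used in the cell below):

```python
# Hard labels (KMeans) vs. soft memberships (GaussianMixture).
import numpy as np
from sklearn.cluster import KMeans
from sklearn.mixture import GaussianMixture

rng = np.random.RandomState(0)
X = np.vstack([rng.normal(0, 1, (50, 2)), rng.normal(5, 1, (50, 2))])

hard = KMeans(n_clusters=2, random_state=0).fit_predict(X)            # one label each
soft = GaussianMixture(n_components=2, random_state=0).fit(X).predict_proba(X)

# Each row of `soft` sums to 1 -- a point can belong partly to both clusters
print(soft.shape, np.allclose(soft.sum(axis=1), 1.0))
```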
### Implementation: Creating Clusters¶
Depending on the problem, the number of clusters that you expect to be in the data may already be known. When the number of clusters is not known a priori, there is no guarantee that a given number of clusters best segments the data, since it is unclear what structure exists in the data — if any. However, we can quantify the "goodness" of a clustering by calculating each data point's silhouette coefficient. The silhouette coefficient for a data point measures how similar it is to its assigned cluster from -1 (dissimilar) to 1 (similar). Calculating the mean silhouette coefficient provides for a simple scoring method of a given clustering.
In the code block below, you will need to implement the following:
• Fit a clustering algorithm to the reduced_data and assign it to clusterer.
• Predict the cluster for each data point in reduced_data using clusterer.predict and assign them to preds.
• Find the cluster centers using the algorithm's respective attribute and assign them to centers.
• Predict the cluster for each sample data point in pca_samples and assign them sample_preds.
• Import sklearn.metrics.silhouette_score and calculate the silhouette score of reduced_data against preds.
• Assign the silhouette score to score and print the result.
In [77]:
from sklearn.mixture import GMM
from sklearn.metrics import silhouette_score
# TODO: Apply your clustering algorithm of choice to the reduced data
clusterer = GMM(n_components=2).fit(reduced_data)
# TODO: Predict the cluster for each data point
preds = clusterer.predict(reduced_data)
# TODO: Find the cluster centers
centers = clusterer.means_
# TODO: Predict the cluster for each transformed sample data point
sample_preds = clusterer.predict(pca_samples)
# TODO: Calculate the mean silhouette coefficient for the number of clusters chosen
score = silhouette_score(reduced_data, preds)
display(score)
C:\Users\admod\Anaconda2\lib\site-packages\sklearn\utils\deprecation.py:58: DeprecationWarning: Class GMM is deprecated; The class GMM is deprecated in 0.18 and will be removed in 0.20. Use class GaussianMixture instead.
warnings.warn(msg, category=DeprecationWarning)
C:\Users\admod\Anaconda2\lib\site-packages\sklearn\utils\deprecation.py:77: DeprecationWarning: Function distribute_covar_matrix_to_match_covariance_type is deprecated; The function distribute_covar_matrix_to_match_covariance_type is deprecated in 0.18 and will be removed in 0.20.
warnings.warn(msg, category=DeprecationWarning)
C:\Users\admod\Anaconda2\lib\site-packages\sklearn\utils\deprecation.py:77: DeprecationWarning: Function log_multivariate_normal_density is deprecated; The function log_multivariate_normal_density is deprecated in 0.18 and will be removed in 0.20.
warnings.warn(msg, category=DeprecationWarning)
0.41181886438624482
### Question 7¶
• Report the silhouette score for several cluster numbers you tried.
• Of these, which number of clusters has the best silhouette score?
Number of clusters | Silhouette score
2 | 0.4118
3 | 0.3736
4 | 0.3060
5 | 0.2536
The number of clusters with the best silhouette score is 2.
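The scores in the table above come from rerunning the clustering cell with different cluster counts; a standalone sketch of that loop, using GaussianMixture and synthetic blob data in place of the deprecated GMM and reduced_data:

```python
# Silhouette score as a function of the number of mixture components.
from sklearn.datasets import make_blobs
from sklearn.metrics import silhouette_score
from sklearn.mixture import GaussianMixture

X, _ = make_blobs(n_samples=300, centers=2, cluster_std=1.0, random_state=0)

scores = {}
for k in range(2, 6):
    preds = GaussianMixture(n_components=k, random_state=0).fit(X).predict(X)
    scores[k] = silhouette_score(X, preds)

best_k = max(scores, key=scores.get)  # for well-separated blobs this tends to be 2
```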
### Cluster Visualization¶
Once you've chosen the optimal number of clusters for your clustering algorithm using the scoring metric above, you can now visualize the results by executing the code block below. Note that, for experimentation purposes, you are welcome to adjust the number of clusters for your clustering algorithm to see various visualizations. The final visualization provided should, however, correspond with the optimal number of clusters.
In [95]:
# Display the results of the clustering from implementation
vs.cluster_results(reduced_data, preds, centers, pca_samples)
### Implementation: Data Recovery¶
Each cluster present in the visualization above has a central point. These centers (or means) are not specifically data points from the data, but rather the averages of all the data points predicted in the respective clusters. For the problem of creating customer segments, a cluster's center point corresponds to the average customer of that segment. Since the data is currently reduced in dimension and scaled by a logarithm, we can recover the representative customer spending from these data points by applying the inverse transformations.
In the code block below, you will need to implement the following:
• Apply the inverse transform to centers using pca.inverse_transform and assign the new centers to log_centers.
• Apply the inverse function of np.log to log_centers using np.exp and assign the true centers to true_centers.
In [96]:
# TODO: Inverse transform the centers
log_centers = pca.inverse_transform(centers)
# TODO: Exponentiate the centers
true_centers = np.exp(log_centers)
# Display the true centers
segments = ['Segment {}'.format(i) for i in range(0,len(centers))]
true_centers = pd.DataFrame(np.round(true_centers), columns = data.keys())
true_centers.index = segments
display(true_centers)
Fresh Milk Grocery Frozen Detergents_Paper Delicatessen
Segment 0 4316.0 6347.0 9555.0 1036.0 3046.0 945.0
Segment 1 8812.0 2052.0 2689.0 2058.0 337.0 712.0
### Question 8¶
• Consider the total purchase cost of each product category for the representative data points above, and reference the statistical description of the dataset at the beginning of this project(specifically looking at the mean values for the various feature points). What set of establishments could each of the customer segments represent?
Hint: A customer who is assigned to 'Cluster X' should best identify with the establishments represented by the feature set of 'Segment X'. Think about what each segment represents in terms their values for the feature points chosen. Reference these values with the mean values to get some perspective into what kind of establishment they represent.
Observing the total purchase cost of each product category against the displayed centers, an establishment assigned to Cluster 0 is most likely a retailer such as a grocery store or supermarket: Segment 0 spends heavily on Grocery (9555), Milk (6347), and Detergents_Paper (3046), all above the dataset means listed below, while its Fresh (4316) and Frozen (1036) spending is well below the means.
Fresh Milk Grocery Frozen Detergents_Paper Delicatessen
mean 12000.297727 5796.265909 7951.277273 3071.931818 2881.493182 1524.870455
A customer assigned to Cluster 1 most likely identifies with a restaurant, café, or fresh-food market: Segment 1's dominant category is Fresh (8812), with moderate Frozen (2058) and very low Detergents_Paper (337) and Grocery (2689) relative to the means. Using these values as perspective, Segment 0 reads as a retail establishment and Segment 1 as a food-service or fresh-food establishment.
### Question 9¶
• For each sample point, which customer segment from Question 8 best represents it?
• Are the predictions for each sample point consistent with this?
Run the code block below to find which cluster each sample point is predicted to be.
In [99]:
# Display the predictions
for i, pred in enumerate(sample_preds):
print "Sample point", i, "predicted to be in Cluster", pred
Sample point 0 predicted to be in Cluster 0
Sample point 1 predicted to be in Cluster 0
Sample point 2 predicted to be in Cluster 0
In [100]:
display(true_centers)
display(samples)
Fresh Milk Grocery Frozen Detergents_Paper Delicatessen
Segment 0 4316.0 6347.0 9555.0 1036.0 3046.0 945.0
Segment 1 8812.0 2052.0 2689.0 2058.0 337.0 712.0
Fresh Milk Grocery Frozen Detergents_Paper Delicatessen
0 3366 5403 12974 4400 5977 1744
1 5181 22044 21531 1740 7353 4985
2 16448 6243 6360 824 2662 2005
Without running the clustering algorithm, and based on my judgment, I think sample points 0, 1, and 2 should be assigned to cluster 0. Reason: these samples have high values for Fresh, Milk, and Grocery compared to the average spending in the dataset. From this spending pattern, I think each establishment is more likely a grocery or low-end retail store.
1) Index 10: it clearly belongs to segment 0, where spending on Grocery is high.
2) Index 45: this also belongs to segment 0, where spending on Grocery and Milk is high.
3) Index 300: I think this one should belong to segment 1, where spending on Fresh is high. However, it also shows high spending on Milk and Grocery, and the model placed it in segment 0.
After running the clustering algorithm, the sample points are placed in their respective clusters based on their spending patterns, i.e., high spending on Fresh, Milk, and Grocery products.
## Conclusion¶
In this final section, you will investigate ways that you can make use of the clustered data. First, you will consider how the different groups of customers, the customer segments, may be affected differently by a specific delivery scheme. Next, you will consider how giving a label to each customer (which segment that customer belongs to) can provide for additional features about the customer data. Finally, you will compare the customer segments to a hidden variable present in the data, to see whether the clustering identified certain relationships.
### Question 10¶
Companies will often run A/B tests when making small changes to their products or services to determine whether making that change will affect its customers positively or negatively. The wholesale distributor is considering changing its delivery service from currently 5 days a week to 3 days a week. However, the distributor will only make this change in delivery service for customers that react positively.
• How can the wholesale distributor use the customer segments to determine which customers, if any, would react positively to the change in delivery service?
Hint: Can we assume the change affects all customers equally? How can we determine which group of customers it affects the most?
The wholesale distributor can use the customer segments to determine which customers would react positively to the change in delivery service by observing the proportions of the features in each segment. For example, customers in segment 0 have high values for Fresh (8874.0), Milk (2107), Grocery (2757), and Frozen (2064). These kinds of items are perishable and may not last long before they spoil, so a less frequent delivery schedule may lead to a negative reaction from these customers. Customers in segment 1 also have high values for all features except Delicatessen; I assume a customer in this segment runs a grocery store. Since it also has a high value for Fresh, the change in delivery schedule may likewise lead to a negative reaction from customers in this segment.
Therefore, intuitively, a change in delivery service will most likely affect both groups of customers. However, this intuition could be wrong, which is exactly what an A/B test can check. The company can select a sample of customers from each of the two clusters, keep part of each sample on the current 5-days-a-week delivery as a control, apply the 3-days-a-week delivery to the rest, and compare how the two groups respond. If a pattern is observed, the establishment will be able to make an informed decision about the kind of delivery schedule to use for each segment. This gives much better insight into the significance of changing the delivery service.
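The experiment described above can be sketched in a few lines (illustrative names; in practice the customer indices would come from the cluster assignments):

```python
import random

# Within one customer segment, randomly assign customers to the new 3-day
# schedule (treatment) or keep them on the current 5-day schedule (control).
def ab_split(customer_ids, treatment_fraction=0.5, seed=0):
    rng = random.Random(seed)
    ids = list(customer_ids)
    rng.shuffle(ids)
    cut = int(len(ids) * treatment_fraction)
    return {"treatment_3day": ids[:cut], "control_5day": ids[cut:]}

groups = ab_split(range(100))  # hypothetical customer indices in one cluster
print(len(groups["treatment_3day"]), len(groups["control_5day"]))  # -> 50 50
```

Running the same split independently in each cluster lets the distributor compare reactions per segment rather than assuming the change affects all customers equally.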
### Question 11¶
Additional structure is derived from originally unlabeled data when using clustering techniques. Since each customer has a customer segment it best identifies with (depending on the clustering algorithm applied), we can consider 'customer segment' as an engineered feature for the data. Assume the wholesale distributor recently acquired ten new customers and each provided estimates for anticipated annual spending of each product category. Knowing these estimates, the wholesale distributor wants to classify each new customer to a customer segment to determine the most appropriate delivery service.
• How can the wholesale distributor label the new customers using only their estimated product spending and the customer segment data?
Hint: A supervised learner could be used to train on the original customers. What would be the target variable?
The wholesale distributor can label the new customers by using a supervised learner such as a decision tree or SVM (as suggested in the hint), trained on the original customer data with the engineered 'customer segment' feature attached. The features are the six estimated product spending figures, and the target variable is the customer segment (0 or 1). The trained classifier can then assign each new customer to a segment based on their estimated spending.
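A minimal, dependency-free stand-in for that supervised step is sketched below; it labels a new customer by distance to the segment centers from `true_centers`, whereas the notebook itself would train e.g. scikit-learn's DecisionTreeClassifier or SVC on (spending, segment) pairs:

```python
import math

# Spending estimates are the features, the cluster label is the target.
# A nearest-centroid rule keeps this sketch package-free.
centers = {
    0: [4316.0, 6347.0, 9555.0, 1036.0, 3046.0, 945.0],  # Segment 0 center
    1: [8812.0, 2052.0, 2689.0, 2058.0, 337.0, 712.0],   # Segment 1 center
}

def label_new_customer(spending):
    """Assign a new customer to the segment whose center is closest."""
    return min(centers, key=lambda seg: math.dist(spending, centers[seg]))

# Hypothetical new customer with high Milk/Grocery spending:
print(label_new_customer([4000.0, 6000.0, 9000.0, 1000.0, 3000.0, 900.0]))  # -> 0
```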
### Visualizing Underlying Distributions¶
At the beginning of this project, it was discussed that the 'Channel' and 'Region' features would be excluded from the dataset so that the customer product categories were emphasized in the analysis. By reintroducing the 'Channel' feature to the dataset, an interesting structure emerges when considering the same PCA dimensionality reduction applied earlier to the original dataset.
Run the code block below to see how each data point is labeled either 'HoReCa' (Hotel/Restaurant/Cafe) or 'Retail' in the reduced space. In addition, you will find the sample points circled in the plot, which identifies their labeling.
In [56]:
# Display the clustering results based on 'Channel' data
vs.channel_results(reduced_data, outliers, pca_samples)
### Question 12¶
• How well does the clustering algorithm and number of clusters you've chosen compare to this underlying distribution of Hotel/Restaurant/Cafe customers to Retailer customers?
• Are there customer segments that would be classified as purely 'Retailers' or 'Hotels/Restaurants/Cafes' by this distribution?
• Would you consider these classifications as consistent with your previous definition of the customer segments?
Looking at the visualization, the number of clusters formed by the algorithm matches the underlying distribution of Hotel/Restaurant/Cafe customers versus Retailer customers quite well. However, there is some overlap around the mid-section of the clusters, which likely corresponds to mislabeled data points. The algorithm accurately classifies most points into segment 0, which most likely represents 'Retailers', and segment 1, which most likely represents 'Hotels/Restaurants/Cafes'. Given the clustering algorithm (GMM) and the number of clusters (n=2) that I had chosen, I believe the algorithm did a decent job of segmenting the customers into two distinct segments. Based on the visualization, some customers would be classified as purely Retailers, while others would be classified as purely Hotel/Restaurant/Cafe customers; however, a few customers would be misclassified. I think this classification is relatively consistent with my previous definition of the customer segments because it produces a similar segmentation result.
Note: Once you have completed all of the code implementations and successfully answered each question above, you may finalize your work by exporting the iPython Notebook as an HTML document. You can do this by using the menu above and navigating to
File -> Download as -> HTML (.html). Include the finished document along with this notebook as your submission.
https://scicomp.stackexchange.com/tags/solid-mechanics/hot?filter=month | # Tag Info
A dyadic product takes as input two vectors and outputs a second-order tensor. This is what I know as a dyadic product, and a dyad is a term of the form $\mathbf{a}\mathbf{b}$. A general second-order tensor can be written as a linear combination of dyads. Commonly the symbol $\otimes$ is referred to as the tensor product, and it outputs higher-order tensors. I think that ...
The first equality you wrote is not correct, as noted by other users. However, what I think you want to know is why $(I:A)I = (I \otimes I):A$. You can show this just by using the properties of dyads: $$(B \otimes B) : A = (B_{ij}B_{kl}\, e_i \otimes e_j \otimes e_k \otimes e_l):(A_{mn}\, e_m \otimes e_n) = B_{ij}B_{kl} A_{mn} \delta_{km} \delta_{ln}\, e_i \otimes e_j = B_{ij}B_{kl}A_{kl}\, e_i \otimes e_j = (B:A)\,B.$$
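As a numerical sanity check of this identity, the plain-Python sketch below evaluates both (I : A) I and (I ⊗ I) : A componentwise for a 3×3 example (no packages needed; ":" denotes the double contraction):

```python
# Spot-check (I : A) I = (I ⊗ I) : A for a 3x3 example.
n = 3
I = [[float(i == j) for j in range(n)] for i in range(n)]
A = [[1.0, 2.0, 3.0], [4.0, 5.0, 6.0], [7.0, 8.0, 9.0]]

# Left side: (I : A) I, where I : A = sum_ij I_ij A_ij = trace(A)
IA = sum(I[i][j] * A[i][j] for i in range(n) for j in range(n))
lhs = [[IA * I[i][j] for j in range(n)] for i in range(n)]

# Right side: components (I ⊗ I)_ijkl A_kl = I_ij (I_kl A_kl)
rhs = [[I[i][j] * sum(I[k][l] * A[k][l] for k in range(n) for l in range(n))
        for j in range(n)] for i in range(n)]

assert lhs == rhs
print(lhs[0][0])  # trace(A) = 15.0
```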
http://math.stackexchange.com/questions/259347/how-to-find-64-mathrmi-1-3/259364 | # How to find $(-64\mathrm{i}) ^{1/3}$?
How to find $$(-64\mathrm{i})^{\frac{1}{3}}$$ This is a complex variables question; please show the steps. Thanks a lot.
-
This is a bizarre selection of tags. – mrf Jan 30 '13 at 21:05
Let $y=(-64i)^{\frac13}\implies y^3=-64i=64i^3=(4i)^3$
So,$y^3-(4i)^3=0$
$(y-4i)\{y^2+y\cdot 4i+(4i)^2\}=0$
If $y-4i=0,y=4i$
else $y^2+y\cdot 4i-16=0\implies y=\frac{-4i\pm\sqrt{(-4i)^2-4\cdot1(-16)}}2=\pm2\sqrt3-2i$
So, $y=4i,\pm2\sqrt3-2i$
-
@Sunny88, why? do the other not satisfy the $y^3=-64i?$ what is the square root of $i?$ – lab bhattacharjee Dec 15 '12 at 17:18
@Sunny88, what to specify when I'm interested in all the roots? Also may I request you to specify some reference to your statement. – lab bhattacharjee Dec 15 '12 at 17:42
I tried to find reference and realized that there are many different conventions. I guess I was wrong to say that you are not correct. – Sunny88 Dec 15 '12 at 19:05
If you transform $-64i$ to polar form, you get $r=\sqrt{0+(-64)^2}=64$ and $\theta=-\pi/2$. Then you have $$(-64i)^{1/3} = r^{1/3}\left(\cos\tfrac{\theta}{3}+i\sin\tfrac{\theta}{3}\right) = 64^{1/3}\left(\cos\left(-\tfrac{\pi}{6}\right)+i\sin\left(-\tfrac{\pi}{6}\right)\right) = 4\left(\cos\left(-\tfrac{\pi}{6}\right)+i\sin\left(-\tfrac{\pi}{6}\right)\right)$$ Given that $\cos(-\pi/6)=\frac{\sqrt{3}}{2}$ and $\sin(-\pi/6) = -\frac{1}{2}$, we have: $$4\left(\frac{\sqrt{3}}{2}-\frac{1}{2}i\right) = 2\sqrt{3}-2i$$ The other roots can be found by adding $2\pi$ and $4\pi$ to $\theta$. So, $$4\left(\cos\tfrac{\theta+2\pi}{3}+i\sin\tfrac{\theta+2\pi}{3}\right) = 4i$$ and $$4\left(\cos\tfrac{\theta+4\pi}{3}+i\sin\tfrac{\theta+4\pi}{3}\right) = -2\sqrt{3}-2i$$
-
The other two? – André Nicolas Dec 15 '12 at 16:44
You are using a convention that is not the same as the one I am used to. – André Nicolas Dec 15 '12 at 16:59
+1 for pointing out the definition of the principal cubic root. – s1lence Dec 15 '12 at 17:01
@AndréNicolas Sorry, you are right, I searched on the internet and it seems that my convention is not the popular one. – Sunny88 Dec 15 '12 at 19:02
For any $n\in\mathbb{Z}$, $$\left(-64i\right)^{\frac{1}{3}}=\left(64\exp\left[\left(\frac{3\pi}{2}+2\pi n\right)i\right]\right)^{\frac{1}{3}}=4\exp\left[\left(\frac{\pi}{2}+\frac{2\pi n}{3}\right)i\right]=4\exp\left[\frac{3\pi+4\pi n}{6}i\right]=4\exp \left[\frac{\left(3+4n\right)\pi}{6}i\right]$$
The cube roots in polar form are: $$4\exp\left[\frac{\pi}{2}i\right] \quad\text{or}\quad 4\exp\left[\frac{7\pi}{6}i\right] \quad\text{or}\quad 4\exp\left[\frac{11\pi}{6}i\right]$$
and in Cartesian form: $$4i \quad\text{or}\quad -2\sqrt{3}-2i \quad\text{or}\quad 2\sqrt{3}-2i$$
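All three Cartesian values above can be confirmed numerically; the sketch below cubes each root and also rebuilds the principal root from the polar data r = 64, θ = −π/2:

```python
import cmath

# Each claimed root, cubed, should return -64i.
target = -64j
roots = [4j, -2 * 3 ** 0.5 - 2j, 2 * 3 ** 0.5 - 2j]
for r in roots:
    assert abs(r ** 3 - target) < 1e-9

# r^(1/3) * exp(i*theta/3) with r = 64, theta = -pi/2 gives the principal root:
principal = 64 ** (1 / 3) * cmath.exp(1j * (-cmath.pi / 2) / 3)
assert abs(principal - (2 * 3 ** 0.5 - 2j)) < 1e-9
print("all three values cube to -64i")
```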
-
https://codeforces.com/blog/dumbhead | I was trying this problem HOLI but I am not able to come up with a good solution. I was thinking of pairing up the leaf nodes of the longest path in the tree. After pairing up, I remove the two paired nodes and continue with the next longest path. But this would give a O(n^2) approach resulting TLE. Can somebody tell me an efficient approach for the solving problem ? I would be happy if somebody helps me in this. Thank you in advance.
https://solvedlib.com/n/identify-the-largest-angle-letter-only-for-answer-15-61524,21303102 | # Identify the largest angle (letter only for answer)15 61524
###### Question:
Identify the largest angle (letter only for answer) 15 6 15 24
#### Similar Solved Questions
##### Suppose a country surveys households and finds that 155 million people can be classified as being...
Suppose a country surveys households and finds that 155 million people can be classified as being employed and 10 million people can be classified as being unemployed. The unemployed are people who do not have jobs but are actively looking for work. Given this information, what do we know about the ...
##### Let A = … Find det A by expanding along a column. [3 marks] Find A⁻¹ using Gauss-Jordan elimination. [4 marks]
Let A = … Find det A by expanding along a column. [3 marks] Find A⁻¹ using Gauss-Jordan elimination. [4 marks]...
##### Healthcare Management Strive to illustrate the shortfalls of trained medical workers across several states and specialties...
Healthcare Management Strive to illustrate the shortfalls of trained medical workers across several states and specialties as a problem and summarize two or three proposed actions hoping to alleviate shortfalls....
##### I. A biologist surveyed a number of people who fished on a certain lake, and al...
I. A biologist surveyed a number of people who fished on a certain lake, and al of them have caught at least one fish. He found the following results: 124 people caught Trout (a type of fish) 133 people caught Bass (a type of fish) 146 people caught Salmon (a type of fish) 75 people caught Trout and...
##### Use the Rule of Signs to determine the possible number of positive and negative real zeros for the …
Use the Rule of Signs to determine the possible number of positive and negative real zeros for the …...
##### QUESTION 7: What is the partial pressure of N2O2 gas in a gaseous mixture that consists of 0.546 g of N2O2 and 0.866 g of SO2 at 1.05 atm? The correct answer has correct units and significant figures. QUESTION: Given the equation for the reaction CO2(g) + 4H2(g) -> CH4(g) + 2H2O(g), and the following standard enthalpies of formation: ΔHf CO2(g) = -392.5 kJ/mol, CH4(g) = -74.4 kJ/mol, H2O(g) = -238.8 kJ/mol, … = -283 kJ/mol. What is the standard enthalpy of reaction for the reaction shown? Report the answer …
QUESTION 7: What is the partial pressure of N2O2 gas in a gaseous mixture that consists of 0.546 g of N2O2 and 0.866 g of SO2 at 1.05 atm?...
##### Which of the following compounds is chiral? (5 points) 2 only; 3 only; 1 and 2 only; 2 and 3 only; 1, 2, and 3. Enter your answer:
Which of the following compounds is chiral? (5 points)...
##### A chemical system that results from a chemical reaction has two important components, among others, in a blend. The joint distribution describing the proportions X1 and X2 is given to the right: f(x1, x2) = 20x2 for 0 < x2 < x1 < 1, and 0 elsewhere. Give the marginal distribution fX1 of the proportion X1 and verify that it is a valid density function. What is the probability that the proportion X2 is less than 0.2, given that X1 is 0.7?
A chemical system that results from a chemical reaction has two important components, among others, in a blend...
##### 10.5.10 Use the Limit Comparison Test to determine convergence or divergence of the series with terms (4n^2 + n - 2)/(n^4 + 8n^2). Select the expression below that could be used for Dn in the Limit Comparison Test and fill in the value of L. Click to select and enter your answer(s) and then click Check Answer.
10.5.10 Use the Limit Comparison Test to determine convergence or divergence of the series with terms (4n^2 + n - 2)/(n^4 + 8n^2)...
##### Switzerland is the country with the highest consumption of chocolate in the world. In 2017, the Swiss consumed almost … kg of chocolate on average. Suppose that a Swiss resident's chocolate consumption can be anywhere between … and 13 kg per year with equal chance. (Source: www.statista.com) What is the probability that a random Swiss resident's annual chocolate consumption is within … kg of the median? The chance that annual consumption is within … kg of the median is … (round to two decimal places). We are interested in the top 5%…
Switzerland is the country with the highest consumption of chocolate in the world...
##### The time variation of the charge passing through a conductor is given by q(t) = 2t² + 2t, where q is given in coulombs and t in seconds. The cross-sectional area of the conductor is … . Find the current density (in …) through the conductor at t = …
The time variation of the charge passing through a conductor is given by q(t) = 2t² + 2t...
##### How do I solve this? Consider the following parametric equations: x = 2sin(θ) - 5 and y = 2sin(θ) Eliminate the parameter θ. Please give your answer in simplest form solved for y.
I do not understand how to eliminate the parameter....
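For what it's worth, both equations share the same 2sin(θ) term, so subtracting x from y eliminates the parameter immediately: y − x = 5, i.e. y = x + 5 (with x confined to [−7, −3] since sin(θ) ∈ [−1, 1]). A quick numerical check:

```python
import math

# x = 2*sin(t) - 5 and y = 2*sin(t) differ only by the constant 5,
# so y - x = 5 for every value of the parameter t.
for t in [0.0, 0.5, 1.3, 2.7, -1.1]:
    x = 2 * math.sin(t) - 5
    y = 2 * math.sin(t)
    assert abs(y - (x + 5)) < 1e-12
print("y = x + 5 holds for all sampled t")
```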
##### If 2 L of a gas at room temperature exerts a pressure of 35 kPa on its container, what pressure will the gas exert if the container's volume changes to 12 L?
If 2 L of a gas at room temperature exerts a pressure of 35 kPa on its container, what pressure will the gas exert if the container's volume changes to 12 L?...
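Assuming constant temperature, Boyle's law P1 V1 = P2 V2 answers this directly; the one-line computation below uses the values from the question:

```python
# Boyle's law at constant temperature: P1 * V1 = P2 * V2  =>  P2 = P1 * V1 / V2
p1_kpa, v1_l, v2_l = 35.0, 2.0, 12.0
p2_kpa = p1_kpa * v1_l / v2_l
print(round(p2_kpa, 2), "kPa")  # -> 5.83 kPa
```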
##### 2) Question 2: Using the following figure (sides y, x, 3, 12), find z. a) 3/5 b) 15 c) 6 d) 6/5 e) None of the above
2) Question 2: Using the following figure, find z. a) 3/5 b) 15 c) 6 d) 6/5 e) None of the above...
##### Find the center of mass of the lamina with density function ρ(x, y) = xy occupying the triangular region with vertices (0,0), (0,3), and (6,0).
Find the center of mass of the lamina with density function ρ(x, y) = xy occupying the triangular region with vertices (0,0), (0,3), and (6,0)...
##### 6 2, LO2-3, L02-4) (The following information applies to the questions displayed below.) Delph Company uses...
6 2, LO2-3, L02-4) (The following information applies to the questions displayed below.) Delph Company uses a job-order costing system and has two manufacturing departments—Molding and Fabrication. The company provided the following estimates at the beginning of the year: Part 1 of 2 Machine-h...
##### Choose the major organic product(s) when ethyl acetate is hydrolyzed with acidic water: a. acetic acid b. ethanol c. both ethanol and acetic acid d. both ethanol and acetic anhydride e. acetic anhydride
Choose the major organic product(s) when ethyl acetate is hydrolyzed with acidic water...
##### 25. (8 points) The statistics classroom is divided into three rows: front, middle, and back. The...
25. (8 points) The statistics classroom is divided into three rows: front, middle, and back. The instructor noticed that the further the students were from him, the more likely they were to miss class or use an instant messenger during class. He wanted to see if the students further away did worse o...
https://dieberbiari.web.app/12.html | # Natural logarithm formula pdf
Expressed mathematically, x is the logarithm of n to the base b if b^x = n, in which case one writes x = log_b n. The natural logarithm of a number x is the logarithm to the base e, where e is the mathematical constant approximately equal to 2.718. Feb 22, 2017: in this article, we show how to obtain the Laplace transform of the natural logarithm using expansions of the gamma function, and see how the techniques can be used to find Laplace transforms of related functions. Natural logarithm function; graph of the natural logarithm; algebraic properties of ln x. Most calculators have buttons for ln and log, where log denotes logarithm base 10, so you can compute logarithms in base e or base 10. Use the change of base formula to evaluate the logarithms.
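The last two points, that calculators expose both ln and log, and that the change of base formula converts between them, can be checked with Python's math module (a small illustrative sketch):

```python
import math

# math.log(x) is the natural log (base e); dividing by ln(10) is the
# change of base formula: log_10(x) = ln(x) / ln(10).
x = 100.0
natural = math.log(x)
change_of_base = natural / math.log(10)
assert abs(change_of_base - math.log10(x)) < 1e-12  # matches the log button
assert abs(math.exp(natural) - x) < 1e-9            # exp undoes ln
print(round(change_of_base, 6))  # -> 2.0
```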
We usually use a base of e, which is a natural constant (that is, a number with a letter name, just like π). Recognize the derivative and integral of the exponential function. Logarithm formulas: expansion/contraction properties of logarithms. These rules are used to write a single complicated logarithm as several simpler logarithms (called "expanding"), or several simple logarithms as a single complicated logarithm (called "contracting"). Uses of the logarithm transformation in regression and … Logarithms appear in all sorts of calculations in engineering and science, business and economics. Given how the natural log is described in math books, there's little natural about it. Use the properties of logarithms to rewrite each expression into lowest terms.
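The expansion/contraction rules described here can be verified numerically with Python's math module (the sample values 6.0 and 2.5 are arbitrary):

```python
import math

# ln(xy)  = ln x + ln y   (product rule: "contracting" a sum into one log)
# ln(x/y) = ln x - ln y   (quotient rule)
# ln(x^k) = k * ln x      (power rule)
x, y = 6.0, 2.5
assert abs(math.log(x * y) - (math.log(x) + math.log(y))) < 1e-12
assert abs(math.log(x / y) - (math.log(x) - math.log(y))) < 1e-12
assert abs(math.log(x ** 3) - 3 * math.log(x)) < 1e-12
print("expansion/contraction rules verified")
```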
In the equation is referred to as the logarithm, is the base, and is the argument. Common and natural logarithms with examples pdf teach. The inverse of the exponential function is the natural logarithm, or logarithm with base e. In this case, the characteristic is one less than the number of digits in the left of the decimal point in the given number.
If base 10, then we can write log x instead of log10xlog10. After you have selected all the formulas which you would like to include in cheat sheet, click the generate pdf button. The complex logarithm, exponential and power functions in these notes, we examine the logarithm, exponential and power functions, where the arguments. Integrate functions involving the natural logarithmic function. In the formula below, a is the current base of your logarithm, and b is the base you would like to have instead. After understanding the exponential function, our next target is the natural logarithm. Such logarithms are also called naperian or natural logarithms. A new identity for the natural logarithm andrew york.
Most calculators can directly compute logs base 10 and the natural log. The anti logarithm of a number is the inverse process of finding the logarithms of the same number. In this study, they take notes about the two special types of logarithms, why they are useful, and how to convert to these forms by using the change of base formula. The natural logarithm of a number is its logarithm to the base of the mathematical constant e, where e is an irrational and transcendental number approximately equal to 2. How to calculate the laplace transform of the natural logarithm.
Write the definition of the natural logarithm as an integral. This chapter defines the exponential to be the function whose derivative. If not, stop and use the steps for solving logarithmic equations containing terms without logarithms. The natural logarithm and its base number e have some magical properties, which you may remember from calculus and which you may have hoped you would never meet again. Logarithms mctylogarithms20091 logarithms appear in all sorts of calculations in engineering and science, business and economics. But for purposes of business analysis, its great advantage is that small changes in the. We can use the formula below to solve equations involving logarithms and exponentials. Common and natural logarithms and solving equations lesson.
The natural logarithm function ln x is the inverse function of the exponential function e x. All the formulas shown above just seem to appear in the math books like athena jumping out of the. In addition, ln x satisfies the usual properties of logarithms. Logarithm, the exponent or power to which a base must be raised to yield a given number. In the same fashion, since 10 2 100, then 2 log 10 100. Math formulas and cheat sheet generator for logarithm functions.
In mathematics, the natural logarithm is a logarithm in base e, where e is the number approximately equal to 2. Expressed mathematically, x is the logarithm of n to the base b if bx n, in which case one writes x log b n. We did not prove the formulas for the derivatives of logs or exponentials in chapter 5. The natural log and exponential this chapter treats the basic theory of logs and exponentials.
Measuring firm size in empirical corporate finance abstract in empirical corporate finance, firm size is commonly used as an important, fundamental firm characteristic. Parentheses are sometimes added for clarity, giving lnx, log e x, or log x. The complex logarithm, exponential and power functions. In an expression of the form ap, the number a is called the base and the power p is the exponent. Natural logarithm ln online calculator, graph, formulas. The logarithm of a number has two parts, known as characteristic and mantissa. For example, the function e x is its own derivative, and the derivative of lnx is 1x. Logarithms with the base of are called natural logarithms. Logarithms are the opposite phenomena of exponential like subtraction is the inverse of addition process, and division is the opposite phenomena of multiplication. Explaining logarithms a progression of ideas illuminating an important mathematical concept by dan umbarger. Logarithms appear in all sorts of calculations in engineering and science, business. To create cheat sheet first you need to select formulas which you want to include in it. Note that lnax x lna is true for all real numbers x and all a 0. Students continue an examination of logarithms in the research and revise stage by studying two types of logarithmscommon logarithms and natural logarithm.
Logarithm formula for positive and negative numbers as well as 0 are given here. The number e is also commonly defined as the base of the natural logarithm using an integral to define the latter, as the limit of a certain sequence, or as the sum of a certain series. Natural logarithms and antilogarithms have their base as 2. To select formula click at picture next to formula. Characteristic the internal part of the logarithm of a number is called its characteristic. Annette pilkington natural logarithm and natural exponential. The natural log of a number can be written as ln or lognn e. The interest after one year is 8% for the annual compounding. Compound interest if you have money, you may decide to invest it to earn. Vlookup, index, match, rank, average, small, large, lookup, round, countifs, sumifs, find, date, and many more. This website uses cookies to improve your experience, analyze traffic and display ads. These relationships are often useful for solving equations involving ex or ln x. I applying the natural logarithm function to both sides of the equation ex 4 10, we get lnex 4 ln10 i using the fact that lneu u, with u x 4, we get x 4 ln10. The definition of a logarithm indicates that a logarithm is an exponent.
When a logarithm has e as its base, we call it the natural logarithm and. The natural log key on a scientific calculator has the appearance h. Logarithms and their properties definition of a logarithm. However, no paper comprehensively assesses the sensitivity of empirical results in corporate finance to different measures of firm size.
Natural logarithm function: graph of natural logarithm; algebraic properties of ln x; limits; extending the antiderivative of 1/x; differentiation and integration; logarithmic differentiation; exponentials; graph of e^x; solving equations; limits; laws of exponentials; derivatives; integrals; summaries. Graph of exp(x): we can draw the graph of y = exp(x) by re. Relationship between the natural logarithm of a number and the logarithm of the number to base $a$. You might skip it now, but should return to it when needed. The exponent n is called the logarithm of a to the base 10, written log. We recall some facts from algebra, which we will later prove from a calculus point of view. Thus, it is recommended that you be familiar with these techniques before proceeding. Change of base formula: this formula is used to change a less helpful base to a more helpful one (generally base 10 or base e, since these appear on your calculator), but you can change to any base. The natural logarithm is the logarithm to base e, where e is Euler's number, e ≈ 2.718. Given the exponential function f(x) = a^x, the logarithm function is the inverse. You can rewrite a natural logarithm in exponential form as follows. In particular, we are interested in how their properties differ. If x is the logarithm of a number y with a given base b, then y is the antilogarithm (antilog) of x to the base b. Therefore, it stood to reason that might be a new identity for, since both and. Demystifying the natural logarithm (ln), BetterExplained.
The natural logarithm of x is generally written as ln x, log_e x, or sometimes, if the base e is implicit, simply log x. Then students can solidify their understanding with the associated. Steps for solving logarithmic equations containing only logarithms, step 1. It is usually written using the shorthand notation ln x, instead of log_e x as you might expect.
When you find the natural log of a number, you are finding the exponent needed on a base of e ≈ 2.718 to produce that number. Excel formulas PDF is a list of most useful or extensively used Excel formulas in day to day working life with Excel. It is important to understand that the base of a natural logarithm is e, and the value of e is approximately 2.718. By combining this differentiation formula with the chain rule, product rule, and quotient rule. The domain of the logarithmic function is the positive real numbers and the range is all real numbers. The inverse of the exponential function is the natural logarithm. The second law of logarithms: log_a(x^m) = m log_a x. The number e is one of the most important numbers in mathematics.
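Several identities quoted above (ln(e^x) = x, the change-of-base formula, ln(a^x) = x ln a, and the worked equation e^(x−4) = 10) can be checked numerically with Python's standard library; a minimal sketch, independent of the quoted texts:

```python
import math

# math.log(x) is the natural logarithm (base e) by default.
x = 10.0
assert math.isclose(math.log(math.exp(x)), x)    # ln(e^x) = x
assert math.isclose(math.exp(math.log(x)), x)    # e^(ln x) = x

# Change of base: log_b(x) = ln(x) / ln(b).
assert math.isclose(math.log(x, 2), math.log(x) / math.log(2))

# Power rule: ln(a^x) = x * ln(a), for a > 0.
assert math.isclose(math.log(3 ** 4), 4 * math.log(3))

# Solving e^(x - 4) = 10 as in the text: x = 4 + ln(10).
x_solved = 4 + math.log(10)
assert math.isclose(math.exp(x_solved - 4), 10.0)

print(round(math.e, 3))  # → 2.718
```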
http://www.cs.mcgill.ca/~rwest/wikispeedia/wpcd/wp/g/Group_%2528mathematics%2529.htm | Group (mathematics)
This picture illustrates how the hours in a clock form a group.
In abstract algebra, a group is a set with a binary operation that satisfies certain axioms, detailed below. For example, the set of integers with addition is a group. The branch of mathematics which studies groups is called group theory.
Many of the structures investigated in mathematics turn out to be groups. These include familiar number systems, such as the integers, the rational numbers, the real numbers, and the complex numbers under addition, as well as the non-zero rationals, reals, and complex numbers, under multiplication. Other important examples are the group of non-singular matrices under multiplication and the group of invertible functions under composition. Group theory allows for the properties of such structures to be investigated in a general setting.
Group theory has extensive applications in mathematics, science, and engineering. Many algebraic structures such as fields and vector spaces may be defined concisely in terms of groups, and group theory provides an important tool for studying symmetry, since the symmetries of any object form a group. Groups are thus essential abstractions in branches of physics involving symmetry principles, such as relativity, quantum mechanics, and particle physics. Furthermore, their ability to represent geometric transformations finds applications in chemistry, computer graphics, and other fields.
Definitions
A group (G, *) is a set G with a binary operation * that satisfies the following four axioms:
• Closure : For all a, b in G, the result of a * b is also in G.
• Associativity: For all a, b and c in G, (a * b) * c = a * (b * c).
• Identity element: There exists an element e in G such that for all a in G, e * a = a * e = a.
• Inverse element: For each a in G, there exists an element b in G such that a * b = b * a = e, where e is an identity element.
Some texts omit the explicit requirement of closure, since the closure of the group follows from the fact that the operation * is a binary operation.
Using the identity element property it can be shown that a group has exactly one identity element. See Simple theorems.
The inverse of an element can also be shown to be unique, and the left- and right-inverses of an element are the same. Some definitions are thus slightly more narrow, substituting the second and third axioms with the concept of a "left (or right) identity element" and a "left (or right) inverse element."
Also note that a group (G,*) is often denoted simply G where there is no ambiguity in what the operation is.
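For a finite set, the four axioms can be verified exhaustively. The sketch below (an illustrative helper, not from any library) checks that the integers mod 6 form a group under addition but not under multiplication:

```python
def is_group(elements, op):
    """Exhaustively check the four group axioms on a finite set."""
    elements = list(elements)
    # Closure: a * b must land back in the set.
    if any(op(a, b) not in elements for a in elements for b in elements):
        return False
    # Associativity: (a * b) * c == a * (b * c).
    if any(op(op(a, b), c) != op(a, op(b, c))
           for a in elements for b in elements for c in elements):
        return False
    # Identity: some e with e * a == a * e == a for all a.
    ids = [e for e in elements
           if all(op(e, a) == a == op(a, e) for a in elements)]
    if len(ids) != 1:   # the identity, when it exists, is unique
        return False
    e = ids[0]
    # Inverses: every a has some b with a * b == b * a == e.
    return all(any(op(a, b) == e == op(b, a) for b in elements)
               for a in elements)

Z6 = range(6)
print(is_group(Z6, lambda a, b: (a + b) % 6))   # → True
print(is_group(Z6, lambda a, b: (a * b) % 6))   # → False (0 has no inverse)
```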
Basic concepts in group theory
Order of groups and elements
The order of a group G, denoted by |G|, is the number of elements in the set G. If the order is not finite, then the group is an infinite group, denoted |G| = ∞.
The order of an element a in a group G is the least positive integer n such that aⁿ = e, where aⁿ is multiplication of a by itself n times (or other suitable composition depending on the group operation). If no such n exists, then the order of a is said to be infinite.
Subgroups
A set H is a subgroup of a group G if it is a subset of G and a group using the operation defined on G. In other words, H is a subgroup of (G, *) if the restriction of * to H is a group operation on H.
If G is a finite group, then so is H. Further, the order of H divides the order of G ( Lagrange's Theorem).
Abelian groups
A group G is said to be an abelian group (or commutative) if the operation is commutative, that is, for all a, b in G, a * b = b * a. A non-abelian group is a group that is not abelian. The term "abelian" is named after the mathematician Niels Abel.
Cyclic groups
A cyclic group is a group whose elements may be generated by successive composition of the operation defining the group being applied to a single element of that group. This single element is called the generator or primitive element of the group.
A multiplicative cyclic group in which G is the group, and a is the generator:
$G = \{ a^n \mid n \in \Z \}$
An additive cyclic group, with generator a:
$G' = \{ n * a \mid n \in \Z \}$
If successive composition of the operation defining the group is applied to a non-primitive element of the group, a cyclic subgroup is generated. The order of the cyclic subgroup divides the order of the group. Thus, if the order of a group is prime, all of its elements, except the identity, are primitive elements of the group.
It is important to note that a group contains all of the cyclic subgroups generated by each of the elements in the group. However, a group constructed from cyclic subgroups is itself not necessarily a cyclic group. For example, a Klein group is not a cyclic group even though it is constructed from two copies of the cyclic group of order 2.
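In additive notation, the cyclic subgroup generated by a is {0, a, 2a, …}. A small sketch in Z_12 (the helper name is ad hoc) that also checks the order-divides claim above:

```python
def cyclic_subgroup(a, n):
    """Subgroup of (Z_n, +) generated by a: {0, a, 2a, ...} mod n."""
    seen, x = [], 0
    while x not in seen:
        seen.append(x)
        x = (x + a) % n
    return sorted(seen)

n = 12
for a in range(n):
    # The order of each cyclic subgroup divides the order of the group.
    assert n % len(cyclic_subgroup(a, n)) == 0

print(cyclic_subgroup(5, 12))        # 5 generates all of Z_12
print(len(cyclic_subgroup(8, 12)))   # → 3, since 8+8+8 = 24 ≡ 0 (mod 12)
```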
Notation for groups
Groups can use different notation depending on the context and the group operation.
• Additive groups use + to denote addition, and the minus sign - to denote inverses. For example, a + (-a) = 0 in Z.
• Multiplicative groups use *, $\cdot$, or the more general 'composition' symbol $\circ$ to denote multiplication, and the superscript −1 to denote inverses. For example, a * a⁻¹ = 1. It is very common to drop the * and just write aa⁻¹ instead.
• Function groups use • to denote function composition, and the superscript −1 to denote inverses. For example, g • g⁻¹ = e. It is very common to drop the • and just write gg⁻¹ instead.
Omitting a symbol for an operation is generally acceptable, and leaves it to the reader to know the context and the group operation.
When defining groups, it is standard notation to use parentheses in defining the group and its operation. For example, (H, +) denotes that the set H is a group under addition. For groups like (Zn, +) and (Fn*, *), it is common to drop the parentheses and the operation, e.g. Zn and Fn*. It is also correct to refer to a group by its set identifier, e.g. H or $\Z$, or to define the group in set-builder notation.
The identity element e is sometimes known as the "neutral element," and is sometimes denoted by some other symbol, depending on the group:
• In multiplicative groups, the identity element can be denoted by 1.
• In invertible matrix groups, the identity element is usually denoted by I.
• In additive groups, the identity element may be denoted by 0.
• In function groups, the identity element is usually denoted by f0.
If S is a subset of G and x an element of G, then, in multiplicative notation, xS is the set of all products {xs : s in S}; similarly the notation Sx = {sx : s in S}; and for two subsets S and T of G, we write ST for {st : s in S, t in T}. In additive notation, we write x + S, S + x, and S + T for the respective sets (see cosets).
Examples of groups
An abelian group: the integers under addition
A familiar group is the group of integers under addition. Let Z be the set of integers, {..., −4, −3, −2, −1, 0, 1, 2, 3, 4, ...}, and let the symbol "+" indicate the operation of addition. Then (Z,+) is a group.
Proof:
• Closure: If a and b are integers then a + b is an integer.
• Associativity: If a, b, and c are integers, then (a + b) + c = a + (b + c).
• Identity element: 0 is an integer and for any integer a, 0 + a = a + 0 = a.
• Inverse elements: If a is an integer, then the integer −a satisfies the inverse rules: a + (−a) = (−a) + a = 0.
This group is also abelian because a + b = b + a.
We can extend this example further by considering the integers with both addition and multiplication, which form a more complicated algebraic structure called a ring. (But note that the integers under multiplication alone are not a group.)
Cyclic multiplicative groups
In the case of a cyclic multiplicative group G, all of the elements an of the group are generated by the set of all integer exponentiations of a primitive element of that group:
$G = \{ a^n \mid n \in \Z \pmod{m \in \Z} \}$
In this example, if a is 2 and the operation is ordinary multiplication, then G = {…, 2⁻², 2⁻¹, 2⁰, 2¹, 2², 2³, …} = {…, 0.25, 0.5, 1, 2, 4, 8, …}. The modulo m may bind the group into a finite set with a non-fractional set of elements, since the inverse (and x⁻², etc.) would be within the set.
Not a group: the integers under multiplication
On the other hand, if we consider the integers with the operation of multiplication, denoted by "·", then (Z,·) is not a group. It satisfies most of the axioms, but fails to have inverses:
• Closure: If a and b are integers then a · b is an integer.
• Associativity: If a, b, and c are integers, then (a · b) · c = a · (b · c).
• Identity element: 1 is an integer and for any integer a, 1 · a = a · 1 = a.
• However, it is not true that whenever a is an integer, there is an integer b such that ab = ba = 1. For example, a = 2 is an integer, but the only solution to the equation ab = 1 in this case is b = 1/2. We cannot choose b = 1/2 because 1/2 is not an integer. (Inverse element fails)
Since not every element of (Z,·) has an inverse, (Z,·) is not a group. It is, however, a commutative monoid, which is a similar structure to a group but does not require inverse elements.
An abelian group: the nonzero rational numbers under multiplication
Consider the set of rational numbers Q, the set of all fractions of integers a/b, where a and b are integers and b is nonzero, and the operation multiplication, denoted by "·". Since the rational number 0 does not have a multiplicative inverse, (Q,·), like (Z,·), is not a group.
However, if we instead use the set of all nonzero rational numbers Q \ {0}, then (Q \ {0},·) does form an abelian group.
• Closure, Associativity, and Identity element axioms are easy to check and follow because of the properties of integers.
• Inverse elements: The inverse of a/b is b/a and it satisfies the axiom.
We don't lose closure by removing zero, because the product of two nonzero rationals is never zero. Just as the integers form a ring, the rational numbers form the algebraic structure of a field, allowing the operations of addition, subtraction, multiplication and division.
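Python's `fractions.Fraction` models rational arithmetic exactly, so the inverse and commutativity axioms for (Q \ {0}, ·) can be spot-checked without rounding error; a small illustrative sketch:

```python
from fractions import Fraction

one = Fraction(1)
for num, den in [(3, 7), (-2, 5), (10, 1)]:
    q = Fraction(num, den)            # a nonzero rational num/den
    inv = one / q                     # its multiplicative inverse den/num
    assert q * inv == inv * q == one  # inverse axiom
    assert q * Fraction(2, 3) == Fraction(2, 3) * q  # commutativity

print(one / Fraction(-2, 5))  # → -5/2
```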
A finite nonabelian group: permutations of a set
This example is taken from the larger article on the Dihedral group of order 6
For a more concrete example of a group, consider three colored blocks (red, green, and blue), initially placed in the order RGB. Let a be the action "swap the first block and the second block", and let b be the action "swap the second block and the third block".
Cycle diagram for S3. A loop specifies a series of powers of any element connected to the identity element (1). For example, the e-ba-ab loop reflects the fact that (ba)² = ab and (ba)³ = e, as well as the fact that (ab)² = ba and (ab)³ = e. The other "loops" are roots of unity so that, for example, a² = e.
In multiplicative form, we traditionally write xy for the combined action "first do y, then do x"; so that ab is the action RGB → RBG → BRG, i.e., "take the last block and move it to the front". If we write e for "leave the blocks as they are" (the identity action), then we can write the six permutations of the set of three blocks as the following actions:
• e : RGB → RGB
• a : RGB → GRB
• b : RGB → RBG
• ab : RGB → BRG
• ba : RGB → GBR
• aba : RGB → BGR
Note that the action aa has the effect RGB → GRB → RGB, leaving the blocks as they were; so we can write aa = e. Similarly,
• bb = e,
• (aba)(aba) = e, and
• (ab)(ba) = (ba)(ab) = e;
so each of the above actions has an inverse.
By inspection, we can also determine associativity and closure; note for example that
• (ab)a = a(ba) = aba, and
• (ba)b = b(ab) = bab.
This group is called the symmetric group on 3 letters, or S3. It has order 6 (or 3 factorial), and is non-abelian (since, for example, ab ≠ ba). Since S3 is built up from the basic actions a and b, we say that the set {a,b} generates it.
More generally, we can define a symmetric group from all the permutations of N objects. This group is denoted by SN and has order N factorial.
One of the reasons that permutation groups are important is that every finite group can be expressed as a subgroup of a symmetric group SN; this result is Cayley's theorem.
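The six actions above can be modeled as permutations, here encoded as tuples p sending position i to p[i], with composition as the group operation; a small stdlib-only sketch (the encoding is ours, not the article's):

```python
from itertools import permutations

def compose(p, q):
    """xy means 'first do q, then do x': apply q, then p."""
    return tuple(p[q[i]] for i in range(len(q)))

e = (0, 1, 2)   # leave the blocks as they are
a = (1, 0, 2)   # swap the first and second positions
b = (0, 2, 1)   # swap the second and third positions

S3 = [tuple(p) for p in permutations(range(3))]
assert len(S3) == 6                  # order 3! = 6

ab, ba = compose(a, b), compose(b, a)
assert ab != ba                      # non-abelian
assert compose(ab, ba) == e          # (ab)(ba) = e, as in the text
# Socks and shoes: a and b are their own inverses, so (ab)^-1 = ba.
print(ab, ba)
```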
Simple theorems
• A group has exactly one identity element.
Proof: Suppose both e and f are identity elements. Then, by the definition of identity, fe = ef = e and also ef = fe = f. But then e = f.
Therefore the identity element is unique.
• Every element has exactly one inverse.
Proof: Suppose both b and c are inverses of x. Then, by the definition of an inverse, xb = bx = e and xc = cx = e. But then:
xb = e = xc
xb = xc
bxb = bxc (multiplying on the left by b)
eb = ec (using bx = e and associativity)
b = c (neutral element axiom)
Therefore the inverse is unique.
The first two properties actually follow from associative binary operations defined on a set. Given a binary operation on a set, there is at most one identity and at most one inverse for any element.
• You can perform division in groups; that is, given elements a and b of the group G, there is exactly one solution x in G to the equation x * a = b and exactly one solution y in G to the equation a * y = b.
• The expression "a1 * a2 * ··· * an" is unambiguous, because the result will be the same no matter where we place parentheses.
• (Socks and shoes) The inverse of a product is the product of the inverses in the opposite order: (a * b)⁻¹ = b⁻¹ * a⁻¹.
Proof: We will demonstrate that (ab)(b⁻¹a⁻¹) = (b⁻¹a⁻¹)(ab) = e, as required by the definition of an inverse.
(ab)(b⁻¹a⁻¹) = a(bb⁻¹)a⁻¹ (associativity)
= aea⁻¹ (definition of inverse)
= aa⁻¹ (definition of neutral element)
= e (definition of inverse)
And similarly for the other direction.
These and other basic facts that hold for all individual groups form the field of elementary group theory.
Constructing new groups from given ones
Some possible ways to construct new groups from a set of given groups:
• Subgroups: A subgroup H of a group G is a group.
• Quotient group: Given a group G and a normal subgroup N, the quotient group is the set of cosets of G/N together with the operation (gN)(hN)=ghN.
• Direct product: If (G,*) and (H,•) are groups, then the set G×H together with the operation (g1,h1)(g2,h2) = (g1*g2, h1•h2) is a group. The direct product can also be defined with any number of terms, finite or infinite, by using the Cartesian product and defining the operation coordinate-wise.
• Semidirect product: If N and H are groups and φ : H → Aut(N) is a group homomorphism, then the semidirect product of N and H with respect to φ is the group (N × H, *), with * defined as
(n1, h1) * (n2, h2) = (n1 φ(h1) (n2), h1 h2)
• Direct external sum: The direct external sum of a family of groups is the subgroup of the product constituted by elements that have a finite number of non-identity coordinates. If the family is finite the direct sum and the product are equivalent.
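The direct product is easy to realize concretely; an illustrative sketch building Z2 × Z3 with the coordinate-wise operation defined above (all helper names are ad hoc):

```python
from itertools import product

def direct_product_op(op1, op2):
    """Coordinate-wise operation on pairs, as in the definition above."""
    return lambda x, y: (op1(x[0], y[0]), op2(x[1], y[1]))

Z2 = list(range(2))
Z3 = list(range(3))
op = direct_product_op(lambda a, b: (a + b) % 2,
                       lambda a, b: (a + b) % 3)

G = list(product(Z2, Z3))
assert len(G) == 6            # |G1 x G2| = |G1| * |G2|

# The element (1, 1) generates the whole group, so Z2 x Z3 is cyclic
# (isomorphic to Z6, since 2 and 3 are coprime).
g, x, generated = (1, 1), (0, 0), set()
for _ in range(6):
    generated.add(x)
    x = op(x, g)
print(len(generated))  # → 6
```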
Proving that a set is a group
There are two main methods in proving that a set is a group:
• Prove that the set is a subgroup of a group;
• Prove that the set is a group using the definition.
The first method is generally referred to as the " Subgroup Test" and requires that you prove the following if trying to prove that H is a subgroup:
• The set H is a non-empty subset of G (i.e. has the identity element inside)
• H is closed under the same operation as G. (ab is in H and a⁻¹ is in H for all a, b in H)
The second method requires that you prove all the axioms and assumptions in the definition for a set G:
• G is non-empty;
• G is closed under the binary operation;
• G is associative;
• e is in G (usually follows from non-emptiness);
• G consists of units (every element of G has an inverse in G).
For finite groups, one only needs to prove that a subset is non-empty and is closed under the ambient group's operation.
Generalizations
In abstract algebra, we get some related structures which are similar to groups by relaxing some of the axioms given at the top of the article.
• If we eliminate the requirement that every element have an inverse, then we get a monoid.
• If we additionally do not require an identity either, then we get a semigroup.
• Alternatively, if we relax the requirement that the operation be associative while still requiring the possibility of division, then we get a loop.
• If we additionally do not require an identity, then we get a quasigroup.
• If we don't require any axioms of the binary operation at all, then we get a magma.
Groupoids, which are similar to groups except that the composition a * b need not be defined for all a and b, arise in the study of more involved kinds of symmetries, often in topological and analytical structures. They are special sorts of categories.
Supergroups and Hopf algebras are other generalizations.
Lie groups, algebraic groups and topological groups are examples of group objects: group-like structures sitting in a category other than the ordinary category of sets.
Abelian groups form the prototype for the concept of an abelian category, which has applications to vector spaces and beyond.
Formal group laws are certain formal power series which have properties much like a group operation. | 2017-11-24 20:13:35 | {"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 6, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.8893598318099976, "perplexity": 407.88675883895223}, "config": {"markdown_headings": false, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2017-47/segments/1510934808935.79/warc/CC-MAIN-20171124195442-20171124215442-00694.warc.gz"} |
https://powersourcecountry.com/qa/question-can-you-have-a-negative-angle-in-a-triangle.html | # Question: Can You Have A Negative Angle In A Triangle?
## What is the reference angle of negative 30?
Since the terminal side of the 150° is only thirty degrees from the (negative) x-axis (being thirty degrees less than 180°, which is the negative x-axis), then the reference angle (again shown by the curved purple line) is 30°..
## Is reference angle always positive?
The reference angle is always positive. In other words, the reference angle is the angle sandwiched between the terminal side and the x-axis. It must be less than 90 degrees, and always positive.
## What if theta is negative?
Key Takeaways. Theta refers to the rate of decline in the value of an option over time. If all other variables are constant, an option will lose value as time draws closer to its maturity. Theta, usually expressed as a negative number, indicates how much the option’s value will decline every day up to maturity.
## What are the negative and positive Coterminal angles of?
To find a positive and a negative angle coterminal with a given angle, you can add and subtract 360° if the angle is measured in degrees or 2π if the angle is measured in radians . Example 1: Find a positive and a negative angle coterminal with a 55° angle.
## What are the negative and positive Coterminal angles of 240 degrees?
Add 360° to find a positive coterminal angle. Subtract 360° to find a negative coterminal angle. Angles that measure 240° and –480° are coterminal with a –120° angle.
## What quadrant is a negative angle in?
When we think of angles, we go counterclockwise from the positive x axis. Thus, for negative angles, we go clockwise. Since each quadrant is defined by 90˚, we end up in the 3rd quadrant.
## What is the terminal side of an angle?
General Angles The vertex is always placed at the origin and one ray is always placed on the positive x-axis. This ray is called the initial side of the angle. The other ray is called the terminal side of the angle. This positioning of an angle is called standard position.
## What is the standard position of an angle?
Angles can exist anywhere in the coordinate plane where two rays share a common vertex. If this vertex is at the origin of the plane and the initial side lies along the positive $x$-axis, then the angle is said to be in standard position.
## How do you find the complementary angle?
If two angles are complementary then they will add up to 90, or inversely, if two angles add up to 90, then they are complementary. If you know one acute angle, you can calculate its complementary angle by subtracting that angle from 90.
## What is negative angle?
Definition. Negative Angle. A negative angle is an angle measured by rotating clockwise (instead of counterclockwise) from the positive $x$ axis.
## How do you make a negative angle positive?
Negative angles and angles greater than $2\pi$ radians. To convert a negative angle to a positive one, we add $2\pi$ to it. To convert a positive angle to a negative one, we subtract $2\pi$ from it.
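This add-or-subtract-$2\pi$ rule is exactly reduction modulo $2\pi$; a minimal Python sketch (the helper name `normalize` is ad hoc):

```python
import math

def normalize(angle):
    """Map any angle in radians to its coterminal angle in [0, 2*pi)."""
    return angle % (2 * math.pi)

# -pi/2 (pointing straight down) is coterminal with 3*pi/2.
assert math.isclose(normalize(-math.pi / 2), 3 * math.pi / 2)

# Coterminal angles have the same trig values.
assert math.isclose(math.sin(-math.pi / 2),
                    math.sin(normalize(-math.pi / 2)))

# Degrees work the same way with % 360: both 240° and -480° are
# coterminal with -120°, as in the example quoted earlier.
print(-120 % 360, -480 % 360)  # → 240 240
```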
## Can a job give you a bad reference?
Can an employer give a bad reference? Employers can usually choose whether to give a reference, but if they do it must be accurate and fair. References must not include misleading or inaccurate information. They should avoid giving subjective opinions or comments which cannot be supported by facts.
## What is negative angle of attack?
Angle of attack is most frequently defined as the angle between the chord line of the wing, and the relative wind. … When a wing is at a low but positive angle of attack, most of the lift is due to the wing’s negative pressure (upper surface) and downwash.
## Can negative angles be acute?
Originally Answered: Is −30° an acute angle? No. If you say that an angle is negative, then seen from the other side the same angle is positive. In the diagram given by Lakshya Jain, the angle XOP is −30°, but the angle itself is not negative.
## Can there be a negative angle?
Negative angles are a way of measuring an angle from a different direction. A negative angle is measured in the clockwise direction from the positive horizontal axis, and proceeds through the quadrants in the order IV, III, II, I. … Every negative angle has a positive counterpart.
## What does a negative angle look like?
Same distance, different directions. A positive angle starts from an initial side and moves counterclockwise to its terminal side. A negative angle starts from an initial side and moves clockwise to its terminal side.
## What if the reference angle is negative?
If necessary, first "unwind" the angle: keep subtracting 360° from it until it lies between 0° and 360°. (For negative angles, add 360° instead.) Sketch the angle to see which quadrant it is in. Finding the reference angle:

| Quadrant | Reference angle for θ |
|---|---|
| 1 | same as θ |
| 2 | 180° – θ |
| 3 | θ – 180° |
| 4 | 360° – θ |
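The quadrant rules for reference angles translate directly into code; a small illustrative sketch in degrees (`reference_angle` is a hypothetical helper, not a library function):

```python
def reference_angle(theta):
    """Acute angle between the terminal side of theta and the x-axis."""
    theta %= 360                  # "unwind" the angle into [0, 360)
    if theta <= 90:               # quadrant I (or on an axis)
        return theta
    if theta <= 180:              # quadrant II
        return 180 - theta
    if theta <= 270:              # quadrant III
        return theta - 180
    return 360 - theta            # quadrant IV

print(reference_angle(150))   # → 30, as in the 150° example earlier
print(reference_angle(-30))   # -30 unwinds to 330, so this is 30 too
```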
## What are the 5 types of angle?
Types of angles:
• acute angle: an angle between 0 and 90 degrees.
• right angle: a 90-degree angle.
• obtuse angle: an angle between 90 and 180 degrees.
• straight angle: a 180-degree angle.
https://lonphij.web.app/88139/89888.html | # Schartauanismen och samhället: En studie i religiösa och
Worldwide English 6 Allt i ett-bok 9789152326701
If it satisfies the parabolic Hörmander condition, then its solutions… 890 Notices of the AMS, Volume 62, Number 8. Some of the most important results were quite original and had not even been envisioned before. Professor in Stockholm, Stanford, and Lars became emeritus on January 1, 1996. From the beginning of the 1990s his research was not as focused on partial differential equations as it had been before.
Amer. Math.
## Swedish School Academic Genealogy of Mathematicians
61-62). Review by: Hans Riesel. https://www.jstor.org/stable/24524508. text: Sarah Bosse ; översättning: Anna Hörmander Plewka ; [foto: Marion von der Mehden ].
### Delegation decision pursuant to 3 A, to grant 10,000 kronor in
Tschick: Amazon.de: Herrndorf, Wolfgang, Hörmander Plewka, Anna. Reviewed in Germany on 1 May 2020. Sågarbyn blir Sveriges kraftstad book. Read reviews. By Oskar Hörmander. Other editions. 1 rating · 0 reviews.
The first seven chapters of Volume I of the books under review here are devoted to a detailed exposition of distribution theory. In my opinion it is the best now available in print.
15 Nov 2008 the Spherical Mean Operator. N. Msehli and L.T. Rachdi vol.
### Department of Mathematics, KTH, Torbjörn Kolsrud, September
Analysis. Under the assumption $|f(x+iy)|\leq 1/|y|^k$, we know by a theorem of Martineau (see also Hörmander, volume 1, Theorem 3.1.15) that the limit $\lim_{y\to 0}$ … (1) $\widehat{\,\cdot\,}(2\pi t N) = O\big((1+|tN|)^{-(n+1)/2}\big)$. This is actually a beautiful and clever application of stationary phase. First we need to describe the basic method. It consists of two quite distinct ingredients: integration by parts to localize at critical points; comparison to a Gaussian integral to evaluate asymptotically near critical points.
https://isr-publications.com/jmcs/articles-125-some-normal-edge-transitive-cayley-graphs-on-dihedral-groups | Some Normal Edge-transitive Cayley Graphs on Dihedral Groups
Volume 2, Issue 3, pp 448--452
Authors
A. Asghar Talebi - Department of Mathematics University of Mazandaran, Babolsar, Iran
Abstract
Let $G$ be a group and $S$ a subset of $G$ such that $1_G\not\in S$ and $S=S^{-1}$. Let $\Gamma=Cay(G,S)$ be the Cayley graph on $G$ relative to $S$. Then $\Gamma$ is said to be normal edge-transitive if $N_{\mathrm{Aut}(\Gamma)}(G)$ acts transitively on edges. In this paper we determine all normal edge-transitive Cayley graphs on a dihedral group $D_{2n}$ of valency $n$. In addition, we classify the normal edge-transitive Cayley graphs $\Gamma=Cay(D_{2p},S)$ of valency four for a prime $p$, and give some normal edge-transitive Cayley graphs $\Gamma=Cay(D_{2n},S)$ of valency four where $n$ is not prime.
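As a hypothetical illustration (not code from the paper), one can construct $Cay(D_{2n},S)$ for the valency-$n$ case where $S$ is the set of all $n$ reflections, and check that the result is an undirected $n$-regular graph:

```python
# Illustration only: build Cay(D_2n, S) with S = all n reflections.
# Element (i, f) represents r^i * s^f (rotation by i, flip flag f),
# using the relation s * r^j = r^{-j} * s in the dihedral group.
def dihedral_elements(n):
    return [(i, f) for i in range(n) for f in (0, 1)]

def mul(a, b, n):
    (i, f), (j, e) = a, b
    # r^i s^f * r^j s^e = r^{i + (-1)^f j} s^{f xor e}
    return ((i + (-1) ** f * j) % n, f ^ e)

def cayley_graph(n, gens):
    # vertex g is joined to g*s for each generator s
    return {g: {mul(g, s, n) for s in gens} for g in dihedral_elements(n)}

n = 5
S = [(j, 1) for j in range(n)]   # all reflections: S = S^{-1}, identity not in S
graph = cayley_graph(n, S)
```

Since every reflection is an involution, $S=S^{-1}$ holds automatically and the adjacency relation is symmetric.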
Share and Cite
ISRP Style
A. Asghar Talebi, Some Normal Edge-transitive Cayley Graphs on Dihedral Groups, Journal of Mathematics and Computer Science, 2 (2011), no. 3, 448--452
AMA Style
Talebi A. Asghar, Some Normal Edge-transitive Cayley Graphs on Dihedral Groups. J Math Comput SCI-JM. (2011); 2(3):448--452
Chicago/Turabian Style
Talebi, A. Asghar. "Some Normal Edge-transitive Cayley Graphs on Dihedral Groups." Journal of Mathematics and Computer Science, 2, no. 3 (2011): 448--452
Keywords
• Cayley graph
• normal edge-transitive
• Dihedral groups
• 20B15
• 05C25
• 05E18
References
• [1] N. Biggs, Algebraic graph theory, Cambridge University Press, London (1974)
• [2] C. D. Godsil, On the full automorphism group of a graph, Combinatorica, 1 (1981), 243--256.
• [3] C. D. Godsil, G. Royle, Algebraic Graph Theory, Springer-Verlag, New York (2001)
• [4] P. S. Houlis, Quotients of normal edge-transitive Cayley graphs, M.Sc. thesis (University of Western Australia), Australia (1998)
• [5] C. E. Praeger, Finite Normal Edge-Transitive Cayley Graphs, Bull. Austral. Math. Soc., 60 (1999), 207--220.
• [6] H. Wielandt, Finite Permutation Groups, Academic Press, New York (1964)
http://mymathforum.com/trigonometry/344237-negative-theta-angle.html | My Math Forum Negative Theta Angle
Trigonometry Trigonometry Math Forum
May 11th, 2018, 08:23 PM #1 Member Joined: Aug 2017 From: India Posts: 54 Thanks: 2 Negative Theta Angle If you take a circle, the angle theta varies from 0 to 360 degrees. What exactly is a negative angle like -30? Does 330 degrees mean the same as -30 degrees? I am confused. Last edited by skipjack; May 12th, 2018 at 05:34 AM.
May 11th, 2018, 08:57 PM #2 Global Moderator Joined: Oct 2008 From: London, Ontario, Canada - The Forest City Posts: 7,949 Thanks: 1141 Math Focus: Elementary mathematics and beyond The sign of an angle is generally taken to be the direction of rotation from 0$^\circ$, with a negative value indicating clockwise motion.
May 11th, 2018, 09:16 PM #3 Member Joined: Aug 2017 From: India Posts: 54 Thanks: 2 So if I understand correctly, 0 to 360 degrees is the counter-clockwise direction and 0 to -360 degrees (-20, etc.) is the clockwise direction. Am I correct? Last edited by skipjack; May 12th, 2018 at 05:19 AM.
May 11th, 2018, 09:24 PM #4
Senior Member
Joined: Nov 2010
From: Indonesia
Posts: 2,001
Thanks: 132
Math Focus: Trigonometry
Quote:
Originally Posted by MathsLearner123 So if I correctly understand, 0 to 360 degrees is counter clockwise direction and 0 to -360 (-20 etc.) is clockwise direction. Am I correct?
Yes.
Last edited by skipjack; May 12th, 2018 at 05:19 AM.
May 11th, 2018, 09:29 PM #5
Global Moderator
Joined: Oct 2008
From: London, Ontario, Canada - The Forest City
Posts: 7,949
Thanks: 1141
Math Focus: Elementary mathematics and beyond
Quote:
Originally Posted by MathsLearner123 So if I correctly understand, 0 to 360 degrees is counter clockwise direction and 0 to -360 (-20 etc.) is clockwise direction. Am I correct?
Yes. So $\cos 270^\circ=\cos(-90^\circ)$ or, more generally, $\cos\alpha=\cos(\alpha+k\cdot 360^\circ),\ k\in\mathbb{Z}$.
Last edited by skipjack; May 12th, 2018 at 05:19 AM.
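The identities in this post are easy to check numerically (my own check, not part of the thread):

```python
import math

def cos_deg(theta):
    # cosine of an angle given in degrees
    return math.cos(math.radians(theta))

# cos 270 degrees equals cos(-90 degrees), and cos has period 360 degrees
assert math.isclose(cos_deg(270), cos_deg(-90), abs_tol=1e-12)
assert all(math.isclose(cos_deg(30), cos_deg(30 + 360 * k), abs_tol=1e-12)
           for k in range(-3, 4))
```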
May 12th, 2018, 05:34 AM #6
Global Moderator
Joined: Dec 2006
Posts: 20,802
Thanks: 2149
Quote:
Originally Posted by MathsLearner123 If you take a circle, the angle theta varies from 0 to 360 Degrees.
That's a usable convention for polar coordinates, but other conventions are possible. A function of the angle needn't have 0 to 360 degrees as its domain, and needn't be periodic.
May 12th, 2018, 07:02 PM #7
Member
Joined: Aug 2017
From: India
Posts: 54
Thanks: 2
Quote:
Originally Posted by skipjack That's a usable convention for polar coordinates, but other conventions are possible. A function of the angle needn't have 0 to 360 degrees as its domain, and needn't be periodic.
Is it possible for you to give examples or links corresponding to them? Thank you.
Last edited by greg1313; May 12th, 2018 at 07:20 PM.
May 13th, 2018, 03:18 AM #8 Global Moderator Joined: Dec 2006 Posts: 20,802 Thanks: 2149 You can find more detail and examples here.
May 14th, 2018, 02:24 AM #9 Member Joined: Aug 2017 From: India Posts: 54 Thanks: 2 My requirement is to create a sine wave using a look-up table. The 360 degrees are represented as 65536, an int value. The sampling frequency is the frequency of the PWM signal, which is 20 kHz. I need to write a program for this. I do not know what the use of this sampling frequency is. Why is it required? Please advise.
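A hedged sketch of the usual approach (my own illustration; the table size and amplitude are made-up choices): keep a 16-bit phase accumulator where 65536 counts equal 360 degrees, advance it once per PWM period, and index a sine table with the top bits. The sampling frequency matters because it fixes the phase increment needed for a desired output frequency: increment = f_out * 65536 / f_sample.

```python
import math

F_SAMPLE = 20_000        # update rate in Hz: one table lookup per PWM period
PHASE_FULL = 65_536      # 16-bit phase: 65536 counts == 360 degrees
TABLE_BITS = 8
TABLE = [int(round(32767 * math.sin(2 * math.pi * i / (1 << TABLE_BITS))))
         for i in range(1 << TABLE_BITS)]   # 256-entry signed sine table

def phase_increment(f_out):
    # advance per sample so the phase wraps f_out times per second
    return round(f_out * PHASE_FULL / F_SAMPLE)

def next_sample(phase, inc):
    phase = (phase + inc) & (PHASE_FULL - 1)         # wrap at 360 degrees
    return phase, TABLE[phase >> (16 - TABLE_BITS)]  # top 8 bits index the table

# generate one 50 Hz period: 20000 / 50 = 400 samples
inc = phase_increment(50)
phase, out = 0, []
for _ in range(400):
    phase, s = next_sample(phase, inc)
    out.append(s)
```

For 50 Hz at a 20 kHz update rate the increment is round(50 * 65536 / 20000) = 164, so one output period spans about 400 samples; without knowing the sampling frequency there is no way to pick that increment.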
http://dygq.pesnohorki.de/tf-softmax-example.html | # Tf Softmax Example
From the Udacity deep learning class, the softmax of $y_i$ is simply the exponential divided by the sum of exponentials over the whole vector $Y$:
$$S(y_i) = \frac{e^{y_i}}{\sum_j e^{y_j}},$$
where $S(y_i)$ is the softmax of $y_i$, $e$ is the exponential, and $j$ runs over the columns of the input vector $Y$. In MNIST classification, your aim is to look at an image and say with particular certainty (probability) that a given image is a particular digit. The model is a single softmax layer; in the R binding of TensorFlow it reads `y <- tf$nn$softmax(tf$matmul(x, W) + b)`, and we can specify a loss function just as easily: we take the average of this cross-entropy across all training examples using `tf.reduce_mean`. Learning is then a process of changing the weights so that we can expect a particular output for each data sample. A few practical asides: MobileNet V2 is a very good convolutional architecture that stays reasonable in size; the Image Category Classification Using Bag Of Features example uses SURF features within a bag-of-features framework to train a multiclass SVM; and if we add a padding of size 1 on both sides of the input layer, the size of the output layer stays 32x32x32, which makes implementation simpler as well. This tutorial also explores using `sparse_categorical_crossentropy` to keep integer multi-class labels without transforming them to one-hot vectors.
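As a check on the formula above, here is a minimal pure-Python sketch (my own, not TensorFlow code) of what `tf.nn.softmax` computes for one row, with the usual max-shift for numerical stability:

```python
import math

def softmax(y):
    # S(y_i) = exp(y_i) / sum_j exp(y_j); shifting by max(y) avoids overflow
    m = max(y)
    exps = [math.exp(v - m) for v in y]
    total = sum(exps)
    return [e / total for e in exps]

probs = softmax([2.0, 1.0, 0.1])
```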
`softmax_cross_entropy_with_logits` computes the cross-entropy of the result after applying the softmax function, but it does it all together in a more mathematically careful way. There is also `tf.nn.sparse_softmax_cross_entropy_with_logits` for integer labels. Classification and loss evaluation, softmax and cross-entropy loss: let's dig a little deeper into how we convert the output of our CNN into a probability (softmax) and the loss measure that guides our optimization (cross-entropy). When computing a cross-entropy by hand, clip probabilities away from zero, e.g. with `K.epsilon()` or `tf.clip_by_value`, so that `log(0)` never occurs. Even though the traditional ReLU activation function is used quite often, it may sometimes not produce a converging model. Let us now implement softmax regression on the MNIST handwritten-digit dataset using the TensorFlow library.
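To illustrate why the fused op is more careful, here is a hedged pure-Python sketch: the log-softmax is computed with the log-sum-exp shift, so large logits never overflow the way a naive softmax-then-log would:

```python
import math

def log_softmax(z):
    # log(e^{z_i} / sum_j e^{z_j}) via the log-sum-exp trick
    m = max(z)
    lse = m + math.log(sum(math.exp(v - m) for v in z))
    return [v - lse for v in z]

def xent_with_logits(labels, logits):
    # cross-entropy computed from raw logits in one step, like the fused op
    return -sum(l * ls for l, ls in zip(labels, log_softmax(logits)))

# a naive softmax-then-log would hit log(0) here; this stays finite
loss = xent_with_logits([1.0, 0.0, 0.0], [1000.0, 0.0, 0.0])
```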
In this mode, TF-TRT creates a new TensorRT engine for each unique input shape that is supplied to the model. Hierarchical softmax provides an alternative model for the conditional distributions, such that the number of parameters upon which a single outcome depends is only proportional to the logarithm of the number of classes. Learnable activations (which maintain a state) are available as Advanced Activation layers in Keras. While hinge loss is quite popular, you're more likely to run into cross-entropy loss and softmax classifiers in the context of deep learning and convolutional neural networks. `softmax()` can then decide between the accumulated evidence counts; we compute the softmax and cross-entropy using `tf.nn.softmax_cross_entropy_with_logits`, which is one operation in TensorFlow because it is very common and can be optimized. Within `AdamOptimizer()` you can optionally specify the `learning_rate` as a parameter before calling `minimize(cost)`. A total of -1.213 * 3,600,000 means an average cross-entropy loss of -1.213 over an epoch of 3,600,000 samples. Reduction is an operation that removes one or more dimensions from a tensor by performing certain operations across them. (Using raw TF operations such as `tf.argmax` can feel less familiar than scikit-learn's much easier tools.)
Gumbel-softmax trick to the rescue! Using argmax is equivalent to using a one-hot vector where the entry corresponding to the maximal value is 1; argmax is not differentiable, so we approximate the hard one-hot vector with a soft one. Every MNIST data point has two parts: an image of a handwritten digit and a corresponding label. So what does `softmax_cross_entropy_with_logits` actually do? First be clear that the loss is a cost value, i.e. the quantity we want to minimize. The first argument to `reduce_sum` is the tensor whose elements we want to sum. The best example to illustrate the single-layer perceptron is through the representation of logistic regression.
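A minimal sketch of the Gumbel-softmax sample itself (my own illustration following the recipe softmax((logits + Gumbel noise) / temperature); the seed and temperature are arbitrary choices):

```python
import math, random

def gumbel_softmax_sample(logits, tau=0.5, rng=random.Random(0)):
    # g_i = -log(-log(u_i)) is standard Gumbel noise; then softmax of (logits+g)/tau
    g = [-math.log(-math.log(rng.random())) for _ in logits]
    z = [(l + n) / tau for l, n in zip(logits, g)]
    m = max(z)
    exps = [math.exp(v - m) for v in z]
    s = sum(exps)
    return [e / s for e in exps]

sample = gumbel_softmax_sample([1.0, 0.5, -1.0])
```

The result is a soft one-hot vector: it sums to 1, and as the temperature `tau` goes to 0 it concentrates on a single entry.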
I will basically reproduce the example of my previous article, but now there will be the possibility to interact with the CNN at every step, so that the whole procedure is 'controlled' by the user. `CrossEntropyWithSoftmax()` is an optimization for the most common use case of categorical cross-entropy, which takes advantage of the specific form of softmax. Also, there are k class labels, one of which is correct for each example. A typical hyperparameter block for a demo run: `batch_size = 64`, `hidden_units = 900`, `n_layers = 5`, `num_examples = 1000`, with logs under `logdir = '/tmp/tf_demo_logs'` (be careful changing this, since that directory is purged when the notebook runs). The default learning rate of 0.001 is fine for most cases. In March, Google unveiled Google Coral, their platform for local AI. The model presented in the paper achieves good classification performance across a range of text classification tasks (like sentiment analysis) and has since become a standard baseline for new text classification architectures.
Below are results from three different runs of the object_detection example: native (no TensorRT), FP32 (TensorRT optimized), and FP16 (TensorRT optimized). The dilation rate is the rate at which we sample input values across the height and width dimensions in atrous convolution. TensorFlow is inevitably the package to use for deep learning if you want the easiest deployment possible; I recently had to implement softmax from scratch during the CS231 course offered by Stanford on visual recognition. `softmax` performs softmax activation on the incoming tensor: the formula computes the exponential (e-power) of each given input value and the sum of the exponential values of all the inputs. A Keras `.h5` file can be converted to a protocol buffer (`.pb`) and then to OpenVINO IR files. Adding to that, TensorFlow has optimised the pattern of applying an activation function and then calculating cost into its own fused activation-followed-by-cost functions.
If using exclusive labels (wherein one and only one class is true at a time, i.e. each sample of data can belong to only one class), see `sparse_softmax_cross_entropy_with_logits`. Do not call this op with the output of softmax, as it will produce incorrect results; it expects raw logits. Defining your models in TensorFlow can easily result in one huge wall of code, and TensorFlow uses static computational graphs to train models. Loss indicates how bad the model's prediction was on a single example; we try to minimize that while training across all the examples. For dense prediction, the loss is computed on the softmax output, which interprets the model output as unnormalized log probabilities and squashes them into the range (0, 1) at each pixel location. Dimensionality reduction is used to remove irrelevant and redundant features.
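The relationship between the dense and sparse variants can be sketched in pure Python (my own illustration): both compute the same loss, they just take the label in different forms, a one-hot row versus an integer class id:

```python
import math

def log_softmax(z):
    m = max(z)
    lse = m + math.log(sum(math.exp(v - m) for v in z))
    return [v - lse for v in z]

def dense_xent(one_hot, logits):
    # like softmax_cross_entropy_with_logits: label is a one-hot row
    return -sum(y * ls for y, ls in zip(one_hot, log_softmax(logits)))

def sparse_xent(class_id, logits):
    # like sparse_softmax_cross_entropy_with_logits: label is an integer id
    return -log_softmax(logits)[class_id]

logits = [2.0, 0.5, -1.0]
```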
The example below demonstrates how to load a text file, parse it as an RDD of `Seq[String]`, construct a `Word2Vec` instance, and then fit a `Word2VecModel` with the input data. For example, in the MNIST digit-recognition task we would have 10 different classes. When the number of features in a dataset is bigger than the number of examples, the probability density function of the dataset becomes difficult to calculate; dimensionality reduction helps by removing irrelevant and redundant features.
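Nearest-word queries against a fitted word-vector model reduce to cosine similarity. A pure-Python sketch with tiny made-up embeddings (the words and vectors are purely illustrative):

```python
import math

def cosine(u, v):
    # dot(u, v) / (|u| * |v|)
    dot = sum(a * b for a, b in zip(u, v))
    nu = math.sqrt(sum(a * a for a in u))
    nv = math.sqrt(sum(b * b for b in v))
    return dot / (nu * nv)

# toy, made-up embeddings purely for illustration
emb = {"king": [0.9, 0.8, 0.1],
       "queen": [0.85, 0.82, 0.12],
       "apple": [0.1, 0.05, 0.9]}

def nearest(word, k=2):
    # rank other words by decreasing cosine similarity
    return sorted((w for w in emb if w != word),
                  key=lambda w: -cosine(emb[word], emb[w]))[:k]
```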
A TensorFlow program can add up the evidence (in known positions in a vector) of each possibility, and `softmax()` can then decide between these evidence counts. After fitting word vectors, we display the top 40 synonyms of a specified word. The sampled loss operation is for training only, and it is generally an underestimate of the full softmax loss. At launch, Google Coral had two products: the Google Coral USB Accelerator and the Google Coral Dev Board. The fused op computes exactly the loss function defined above, where `z` contains the scores and `y` holds the one-hot labels; the output probabilities are all normalized to the range 0 to 1. We then transpose so that the matrix has one example per row and one feature per column. First things first: let's import our necessary packages and download our data.
Note: multi-label classification is a type of classification in which an object can be categorized into more than one class; the losses discussed here instead assume mutually exclusive classes. If you know your matrix multiplication, you can see that `x * W + b` computes properly: it yields a matrix of shape (number of training examples m) x (number of classes n). `tf.nn.sparse_softmax_cross_entropy_with_logits` is very similar to `tf.nn.softmax_cross_entropy_with_logits`; the only difference is the shape of `labels`. The labels are exclusive (exactly one correct class per row), so when the rows of `labels` do not need a one-hot representation you can use the sparse variant. While this function computes the usual softmax cross-entropy when the number of dimensions equals 2, it computes the cross-entropy of a replicated softmax when the number of dimensions is greater than 2. For `reduce_sum`, here we have a constant array of integers: we make a 3D matrix (with reshape), where each subarray has 2 elements, and target the second dimension. Implementing a softmax classifier is almost similar to an SVM one, except for using a different loss function. As an example, Using Logistic and Softmax Regression with TensorFlow (Sergey Kovalev, April 15, 2016) classifies handwritten digits (0-9) from the MNIST data set.
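The `reduce_sum` example just described can be sketched in pure Python: a 2x3x2 array of integers, summed across the second dimension (axis 1), as `tf.reduce_sum(data, axis=1)` would do:

```python
# a 2x3x2 "matrix": each subarray along the last axis has 2 elements
data = [[[1, 2], [3, 4], [5, 6]],
        [[7, 8], [9, 10], [11, 12]]]

def reduce_sum_axis1(t):
    # sum across the second dimension, like tf.reduce_sum(t, axis=1)
    return [[sum(sub[k] for sub in row) for k in range(len(row[0]))]
            for row in t]

result = reduce_sum_axis1(data)  # shape (2, 2)
```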
I am starting with the generic TensorFlow example. TensorFlow ops that are not compatible with TF-TRT, including custom ops, are run using TensorFlow. In evaluation, `tf.argmax(y_, 1)` gives the correct label for each example. Softmax regression proceeds in two steps: first, to tally up the evidence that a given image belongs to a particular class, we take a weighted sum of the pixel intensities; if a pixel is strong evidence against the image belonging to that class the weight is negative, and if it is favourable evidence the weight is positive. To compute the cosine similarity between minibatch examples and all embeddings, first normalize: `norm = tf.sqrt(tf.reduce_sum(tf.square(embeddings), 1, keep_dims=True))` and `normalized_embeddings = embeddings / norm`; we can then look up our validation words' vectors using `tf.nn.embedding_lookup()`. However, softmax is still worth understanding, in part because it is intrinsically interesting, and in part because softmax layers return in Chapter 6, in the discussion of deep neural networks.
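The two steps just described (weighted evidence, then a decision) can be sketched in pure Python; the toy weights below are made up for illustration:

```python
def matmul(X, W):
    # (m x d) times (d x n) -> (m x n)
    return [[sum(x * w for x, w in zip(row, col)) for col in zip(*W)]
            for row in X]

def predict(X, W, b):
    # evidence = X W + b: one row of class scores per training example
    evidence = [[e + bb for e, bb in zip(row, b)] for row in matmul(X, W)]
    # argmax per row picks the class with the most evidence
    return [max(range(len(row)), key=row.__getitem__) for row in evidence]

X = [[1.0, 0.0], [0.0, 1.0]]     # two examples, two "pixel" features
W = [[2.0, -1.0], [-1.0, 2.0]]   # positive weight = evidence for that class
b = [0.0, 0.0]
preds = predict(X, W, b)
```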
If one component of `shape` is the special value -1, the size of that dimension is computed so that the total size remains constant. The `parse_single_example` op decodes example protocol buffers into tensors. For weighted losses, `weights` is a tensor of shape `batch_size` containing example weights, and the return value is a tensor of shape `batch_size` containing the weighted cost for each example. The output-unit activation function is the softmax function. Note that for `softmax_cross_entropy_with_logits_v2`, backpropagation will happen into both `logits` and `labels`. To make things obvious in the word-embedding setting, assume the sentence 'The dog barked at the mailman': we can visualize the first model as being trained on data such as (input: 'dog', output: ['the', 'barked', 'at', 'the', 'mailman']) while sharing the weights and biases of the softmax layer. A special 'END' label is appended to the labels.
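The -1 rule for `reshape` can be sketched as a small helper (my own illustration of the semantics, not TensorFlow code):

```python
def infer_shape(total, shape):
    # replace a single -1 so that the product of dimensions equals `total`
    if shape.count(-1) == 0:
        return shape
    known = 1
    for d in shape:
        if d != -1:
            known *= d
    assert total % known == 0, "sizes are incompatible"
    return [total // known if d == -1 else d for d in shape]

# reshaping 12 elements with shape [-1, 3] infers [4, 3]
```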
Any Keras model can be exported with TensorFlow Serving (as long as it has only one input and one output, which is a limitation of TF Serving), whether or not it was trained as part of a TensorFlow workflow. One Reddit poster (ambodi) asks how to get the values of the softmax-layer activations for the data MobileNet V1 was trained on (the COCO training set). Instead of using a hard one-hot vector, we can approximate it with a soft one, the softmax, and we can apply softmax along a chosen dimension of a tensor. `tf.argmax` is an extremely useful function which gives you the index of the highest entry in a tensor along some axis. In this blog post we'll also discover what TensorBoard is, what you can use it for, and how it works with Keras. Under exclusive labels, an image cannot be both a cat and a dog at the same time. The logits are the unnormalized log probabilities output by the model (the values produced before the softmax normalization is applied to them). To weight classes, feed a `class_weight` placeholder on every batch iteration and combine it with `softmax_cross_entropy_with_logits`; a `tf.placeholder` is likewise used to feed the actual training examples.
softmax_cross_entropy_with_logits_v2(labels=y, logits=z). So in our previous example we would have $\text{Zebra} = [1,0,0,0]$, $\text{Horse} = [0,1,0,0]$, and so on. Adding to that, Tensorflow has optimised the operation of applying the activation function then calculating cost using its own activation followed by cost functions. Example 1: We have a vector of 4 numbers. The logits are the unnormalized log probabilities output the model (the values output before the softmax normalization is applied to them). The number of inputs in this example is 3, see what happens when you use other numbers (eg 4, 5 or more). Learning is a process of changing the filter weights so that we can expect a particular output mapped for each data samples. We use cookies for various purposes including analytics. Finally, tf. matmul(X,W) expression x multiplication W, corresponding to the previous equation wx, where x is a 2-dimensional tensor with multiple inputs. When the number of features in a dataset is bigger than the number of examples, then the probability density function of the dataset becomes difficult to calculate. GitHub is home to over 40 million developers working together to host and review code, manage projects, and build software together. activations. All Rights Reserved. The process is the same as the process described above, except now you apply softmax instead of argmax. In this installment we will be going over all the abstracted models that are currently available in TensorFlow and describe use cases for that particular model as well as simple sample code. In this blog post, you will learn how to implement gradient descent on a linear classifier with a Softmax cross-entropy loss function. 001, which is fine for most. Given an input tensor, returns a new tensor with the same values as the input tensor with shape shape. 
Questions: From the Udacity’s deep learning class, the softmax of y_i is simply the exponential divided by the sum of exponential of the whole Y vector: Where S(y_i) is the softmax function of y_i and e is the exponential and j is the no. A common use case is to use this method for training, and calculate the full softmax loss for evaluation or inference. Hierarchical softmax. reduce_mean method. Arguments: params: a Tensor of rank P representing the tensor we want to index into; indices: a Tensor of rank Q representing the indices into params we want to access. 115 Policy Gradient Utilities import keras. Classification problems can take the advantage of condition that the classes are mutually exclusive, within the architecture of the neural network. The science behind introducing non-linearity is outside the scope of this example. In contrast, tf. The introduction of artificial intelligence (AI) in video surveillance, aka AI surveillance, is expanding business opportunities beyond security. logits = tf. Pre-trained models and datasets built by Google and the community. softmax_cross_entropy()方法内部会对logits做softmax处理, shape为[batch_size, num_classes] weights 可以是一个标量或矩阵. py you could copy that to mnist_deep_inference. Pre-trained models and datasets built by Google and the community. Inroduction In this post I want to show an example of application of Tensorflow and a recently released library slim for Image Classification , Image Annotation and Segmentation. Join GitHub today. mnist_softmax Use softmax regression to train a model to look at MNIST images and predict what digits they are. OK, I Understand. pyplot as plt. softmax_cross_entropy_with_logits(). 
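The softmax formula quoted above is small enough to sketch in pure Python (my own illustration, independent of the quoted snippets; subtracting the maximum is the standard overflow guard):

```python
import math

def softmax(y):
    # S(y_i) = exp(y_i) / sum_j exp(y_j); subtracting max(y) avoids overflow
    m = max(y)
    e = [math.exp(v - m) for v in y]
    s = sum(e)
    return [v / s for v in e]

probs = softmax([2.0, 1.0, 0.1])
print([round(p, 3) for p in probs])  # [0.659, 0.242, 0.099] -- sums to 1
```

The outputs form a single probability distribution over mutually exclusive classes, which is why an image cannot be scored as both a cat and a dog at once.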
| 2020-01-21 01:53:23 | {"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.44524842500686646, "perplexity": 1800.2128201628834}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2020-05/segments/1579250601241.42/warc/CC-MAIN-20200121014531-20200121043531-00005.warc.gz"} |
https://asmedigitalcollection.asme.org/energyresources/article-abstract/129/2/117/465016/Performance-Evaluation-of-a-Gas-Turbine-Operating?redirectedFrom=fulltext | The power output of gas turbines (GT) reduces greatly with the increase of the inlet air temperature. This is a serious problem because gas turbines have been used traditionally to provide electricity during the peak power demands, and the peak power demands in many areas occur on summer afternoons. An aquifer thermal energy storage (ATES) was employed for cooling of the inlet air of the GT. Water from a confined aquifer was cooled in winter and was injected back into the aquifer. The stored chilled water was withdrawn in summer to cool the GT inlet air. The heated water was then injected back into the aquifer. A $20MW$ GT power plant with 6 and $12h$ of operation per day, along with a two-well aquifer, was considered for analysis. The purpose of this investigation was to estimate the GT performance improvement. The conventional inlet air cooling methods such as evaporative cooling, fogging and absorption refrigeration were studied and compared with the ATES system. It was shown that for $6h$ of operation per day, the power output and efficiency of the GT on the warmest day of the year could be increased from 16.5 to $19.7MW$ and from 31.8% to 34.2%, respectively. The performance of the ATES system was the best among the cooling methods considered on the warmest day of the year. The use of ATES is a viable option for the increase of gas turbines power output and efficiency, provided that suitable confined aquifers are available at their sites. Air cooling in ATES is not dependent on the wet-bulb temperature and therefore can be used in humid areas. This system can also be used in combined cycle power plants.
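As a rough sanity check on those figures (my own arithmetic, using only the numbers quoted in the abstract):

```python
p_base, p_ates = 16.5, 19.7      # MW on the warmest day, 6 h/day case
eta_base, eta_ates = 31.8, 34.2  # cycle efficiency, percent

gain_pct = 100 * (p_ates - p_base) / p_base
print(round(gain_pct, 1))             # 19.4 -- roughly a fifth more power
print(round(eta_ates - eta_base, 1))  # 2.4 percentage points of efficiency
```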
| 2019-10-16 07:43:19 | {"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 4, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.4294658601284027, "perplexity": 10045.576756943134}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2019-43/segments/1570986666467.20/warc/CC-MAIN-20191016063833-20191016091333-00428.warc.gz"} |
https://www.physicsforums.com/threads/distribution-homework.607650/ | Distribution homework
1. May 21, 2012
robertjford80
1. The problem statement, all variables and given/known data
This is a calc problem but it's the algebra part I'm having trouble with:
3. The attempt at a solution
[16(1+h)^2 - 16(1)^2]/h
[16 + 32h + 16h^2 - 16^2]/h
= 16/h + 32 + 16h - 16^2
this is a calc problem so h approaches zero
= 32 + 16h - 16^2
= 16(h + 2) - 16^2
I can't figure out why - 16^2 disappears.
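For reference, if every term is kept over the common denominator $h$, the limit comes out cleanly (a worked check, not part of the original posts):

```latex
\lim_{h \to 0} \frac{16(1+h)^2 - 16(1)^2}{h}
  = \lim_{h \to 0} \frac{32h + 16h^2}{h}
  = \lim_{h \to 0} (32 + 16h) = 32
```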
Last edited: May 21, 2012
2. May 21, 2012
robertjford80
Re: distribution
<deleted>
3. May 21, 2012
Infinitum
Re: distribution
What happened to that 16/h?? Surely, as h -> 0, 16/h is -not- equal to 0.
Try taking the 16 common out of the original expression, giving you
$$\frac{16((1+h)^2-1)}{h}$$ | 2017-10-17 23:41:39 | {"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 1, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.527010977268219, "perplexity": 4831.86496493333}, "config": {"markdown_headings": false, "markdown_code": false, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2017-43/segments/1508187822513.17/warc/CC-MAIN-20171017215800-20171017235800-00310.warc.gz"} |
http://mathhelpforum.com/pre-calculus/92095-vectors-finding-magnitude-direction-angle.html | # Math Help - Vectors: Finding the magnitude and direction angle
1. ## Vectors: Finding the magnitude and direction angle
I need to find the magnitude and direction angle for the vector A + B given A = <-3,2> and B = <1,4>.
2. Originally Posted by Neversh
I need to find the magnitude and direction angle for the vector A + B given A = <-3,2> and B = <1,4>.
$A+B=<-2,6>$
$||<-2,6>||=\sqrt{(-2)^2+6^2}=\sqrt{4+36}=2\sqrt{10}$
$\theta=\tan^{-1}\left( \frac{6}{-2}\right)+\pi$ | 2014-03-10 08:20:32 | {"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 3, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.6978800296783447, "perplexity": 732.7162170124801}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2014-10/segments/1394010701848/warc/CC-MAIN-20140305091141-00078-ip-10-183-142-35.ec2.internal.warc.gz"} |
http://www.solutioninn.com/the-exam-scores-for-the-students-in-an-introductory-statistics | # Question
The exam scores for the students in an introductory statistics class are as follows.
a. Group these exam scores, using the classes 30-39, 40-49, 50- 59, 60-69, 70-79, 80-89, and 90-100.
b. What are the widths of the classes?
c. If you wanted all the classes to have the same width, what classes would you use?
Choosing the Classes. One way that we can choose the classes to be used for grouping a quantitative data set is to first decide on the (approximate) number of classes. From that decision, we can then determine a class width and, subsequently, the classes themselves. Several methods can be used to decide on the number of classes. One method is to use the following guidelines, based on the number of observations:
With the preceding guidelines in mind, we can use the following step-by-step procedure for choosing the classes.
Step 1 Decide on the (approximate) number of classes.
Step 2 Calculate an approximate class width as (Maximum observation − Minimum observation) / Number of classes and use the result to decide on a convenient class width.
Step 3 Choose a number for the lower limit (or cutpoint) of the first class, noting that it must be less than or equal to the minimum observation.
Step 4 Obtain the other lower class limits (or cutpoints) by successively adding the class width chosen in Step 2.
Step 5 Use the results of Step 4 to specify all of the classes.
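The five steps can be sketched in a few lines of Python (the function name, the round-up rule in Step 2, and the sample scores are my own choices):

```python
import math

def choose_classes(data, num_classes):
    # Step 2: approximate width = (max - min) / number of classes, rounded up
    lo, hi = min(data), max(data)
    width = math.ceil((hi - lo + 1) / num_classes)
    # Steps 3-5: successive lower limits, each class covering `width` values
    return [(lo + i * width, lo + (i + 1) * width - 1) for i in range(num_classes)]

scores = [34, 47, 52, 58, 63, 66, 71, 75, 79, 84, 88, 93, 98]
print(choose_classes(scores, 7))
# [(34, 43), (44, 53), (54, 63), (64, 73), (74, 83), (84, 93), (94, 103)]
```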
| 2016-10-23 00:40:09 | {"extraction_info": {"found_math": false, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.8008624911308289, "perplexity": 546.0119901986429}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.3, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2016-44/segments/1476988719079.39/warc/CC-MAIN-20161020183839-00381-ip-10-171-6-4.ec2.internal.warc.gz"} |
https://www.physicsforums.com/threads/fourier-series-and-orthogonality-completeness.794679/ | # Fourier series and orthogonality, completeness
1. Jan 28, 2015
### kidsasd987
http://ms.mcmaster.ca/courses/20102011/term4/math2zz3/Lecture1.pdf
On pg 10, the example says f(x)=/=0 while the R.H.S. is zero. It is an equation that starts from the assumption on pg 9, f(x)=c0ϕ_0(x)+c1ϕ_1(x)+…; then how do we get the inequality?
if the system is complete and orthogonal, then (f(x),ϕ_n(x))=0, which makes sense only when f(x)=0.
but we know for fourier series, we get values for Rhs and Lhs.
2. Jan 29, 2015
### Staff: Mentor
That's the key word. The comment on p. 10 is for an arbitrary orthogonal system. That is why completeness is necessary, as explained on p. 11.
3. Jan 29, 2015
### kidsasd987
Could you explain the details?
Also, if (f(x),ϕ_n(x))=0 holds in general for complete series, then fourier series must be also zero since they are complete.
however we know that for signals we do get values within the interval of pi and -pi. is this because what we usually solve for signal input f(t) is not complete?
so we are approximating the signal?
4. Jan 29, 2015
### Staff: Mentor
Look at p. 12, where an example is given of an orthogonal system that is not complete. Using only cosines, you could never write an expression for e.g. sin(x).
But it's the other way around. If $(f, \phi_n) = 0$ for a given $f(x)$ and for all $\phi_n$, then $\{\phi_n\}$ is not a complete set.
I'm not sure what you are asking here. If the signal is periodic but not on the interval $[-\pi,\pi]$, then it is trivial to scale/shift it to that interval. If the signal is finite in time, then it is dealt with the same way. If the signal is not finite, or only part of it is known, then indeed there are approximations being made.
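A quick numerical illustration of the incompleteness point (my own sketch, not part of the thread): every inner product of sin with the cosines vanishes, yet sin is not the zero function, so the cosines alone cannot be complete.

```python
import math

def inner(f, g, n=4000):
    # numerical inner product (f, g) = integral of f(x)g(x) over [-pi, pi], midpoint rule
    h = 2 * math.pi / n
    return h * sum(f(-math.pi + (k + 0.5) * h) * g(-math.pi + (k + 0.5) * h)
                   for k in range(n))

coeffs = [inner(math.sin, lambda t, m=m: math.cos(m * t)) for m in range(6)]
print(max(abs(c) for c in coeffs))  # ~0: all "cosine coefficients" of sin vanish
print(inner(math.sin, math.sin))    # ~pi: but sin itself is certainly nonzero
```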
| 2017-08-18 19:45:31 | {"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.7538855075836182, "perplexity": 722.301738471047}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.3, "absolute_threshold": 10, "end_threshold": 15, "enable": false}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2017-34/segments/1502886105086.81/warc/CC-MAIN-20170818175604-20170818195604-00354.warc.gz"} |
https://www.vedantu.com/question-answer/the-greatest-6-digit-number-which-is-a-perfect-class-6-maths-cbse-5efc61aafc7b62454bf530b2 | QUESTION
# The greatest 6 digit number, which is a perfect square is….. (A) 998001 (B) 995001 (C) 997001 (D) 996001
$Hint$: In this question we are asked to find which of four given six-digit numbers is the greatest one that is a perfect square. We can check each of these numbers using the long division method to see whether it is a perfect square. Since we have to find the greatest number, we start with the largest of the four.
Complete step-by-step solution -
Starting with 998001 which is the largest among these 4
Start grouping the digits in twos, starting from the units place; these groups are called pairs
So we get 3 pairs: 99, 80 and 01
Now take these pairs as the dividend, and find the number whose square is just less than or equal to the first pair; make that number both the divisor and the quotient. So we get 9 as divisor and 9 as quotient, since ${9^2} = 81$
9 | 99 80 01 | 9
    81
    --------
    18 80

Now 81 is subtracted from the first pair, and the next pair is brought down on the right side of the remainder; together they make the new dividend, just as in normal division.

Now take the sum of the previous divisor with itself, i.e. 9 + 9 = 18, and write it in place of the divisor with some space on its right to add another digit, say x; 1880 becomes the new dividend.

Now find this x such that 18x, when multiplied by x, gives a number which is just less than or equal to 1880, and write this x by the side of the quotient as well.

This x = 9, which makes 18x = 189,

such that 189 × 9 = 1701, which is subtracted from 1880:

189 | 18 80 | 99
      17 01
      --------
       1 79

Like we did earlier, bring down the last pair to the remainder, so the new dividend is 17901; the new divisor is found the same way as before, by adding the digit just used (9) to the previous divisor: 189 + 9 = 198, and we repeat until the remainder is zero.

So the new divisor is 198x; again, x is a digit which, when multiplied with 198x, gives a number less than or equal to 17901, and this x is written by the side of the quotient too. Here x = 9 gives 1989 × 9 = 17901:

1989 | 1 79 01 | 999
       1 79 01
       --------
             0
So the quotient is 999, the square root of 998001; that is, $998001 = {999^2}$ is a perfect square
So, the correct option is ‘A’
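A quick programmatic check of the four options (my own, outside the original solution):

```python
import math

candidates = [998001, 995001, 997001, 996001]
for n in candidates:
    r = math.isqrt(n)          # integer square root
    print(n, r, r * r == n)    # only 998001 passes: 999 * 999 == 998001
```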
$Note:$ This question can also be solved by using a trick. Let ${x^2}$ be the largest six-digit perfect square, so ${x^2} \le 999999$. You know that ${1000^2} = 1000000$, which is only 1 more than 999,999, so ${x^2}$ must be the next smaller square, which is ${(1000-1)^2} = {999^2} = 998001$. | 2020-07-12 22:29:36 | {"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 2, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.7339621782302856, "perplexity": 462.09358606342533}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2020-29/segments/1593657140337.79/warc/CC-MAIN-20200712211314-20200713001314-00573.warc.gz"} |
https://math.stackexchange.com/questions/546733/a-matrix-with-given-row-and-column-sums | # A matrix with given row and column sums
There is a set of equations like
$A_x + A_y + A_z = P$
$B_x + B_y + B_z = Q$
$C_x + C_y + C_z = R$
Where the values of only $P, Q, R$ are known.
Also, we have
$A_x + B_x + C_x = I$
$A_y + B_y + C_y = J$
$A_z + B_z + C_z = K$
where only the values of $I, J$ and $K$ are known.
Is there any way we know the individual values of
$A_x, B_x, C_x, A_y, A_z$ and the rest?
Substituting the above equations yields the result that $I + J + K = P + Q + R$, but how can I get the individual component values? Is any other information required to solve these equations?
Here's a good complementary question. If solutions exist, how to generate all of them? Are there some algorithms?
• I suppose you mean $P+Q+R=I+J+K$ (there are no $X,Y,Z$ in the equations). This is indeed a necessary condition for the existence of a solution. Supposing that, you've got effectively $5$ linear equations (since one is dependent on the others) in $9$ unknowns; you cannot hope for a unique solution. – Marc van Leeuwen Oct 31 '13 at 9:42
• @MarcvanLeeuwen Thanks i corrected it I +J + K = P +Q +R, Also, just now a friend told that there should be 9 equations to solve and get 9 unknow's. I guess that settle's it. – Neil Oct 31 '13 at 9:51
The matrix $(\begin{smallmatrix}+1&-1\\-1&+1\end{smallmatrix})$ has all row and column sums zero. So given any matrix$~A$ with at least two rows and at least two columns, you can always add a multiple of this matrix to a $2\times2$ submatrix of $A$ to obtain a different matrix $A'$ with the same row and column sums. So for such matrices, the row and column sums never determine the matrix.
You are trying to solve $$\left(\begin{matrix} 1&1&1&0&0&0&0&0&0\\ 0&0&0&1&1&1&0&0&0\\ 0&0&0&0&0&0&1&1&1\\ 1&0&0&1&0&0&1&0&0\\ 0&1&0&0&1&0&0&1&0\\ 0&0&1&0&0&1&0&0&1 \end{matrix}\right) \left(\begin{matrix} A_x\\A_y\\A_z\\B_x\\B_y\\B_z\\C_x\\C_y\\C_z \end{matrix}\right) = \left(\begin{matrix} P\\Q\\R\\I\\J\\K \end{matrix}\right)$$ If you have any solution, it will be at least a four*-dimensional solution space, and to have a solution, necessarily the equation $P+Q+R=I+J+K$ must hold, as you already found out.
*If the LHS is $Ax$ then the dimension (if $\ge 0$) is at least $9 - {\rm rg}(A) = 9-5 = 4$, the rank is only five because the sum of the first three rows is equal to the sum of the last three rows.
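The rank claim, and the fact that any consistent right-hand side has a four-parameter family of solutions, can be checked numerically — a NumPy sketch, with hypothetical values for $P,Q,R,I,J,K$:

```python
import numpy as np

A = np.array([
    [1, 1, 1, 0, 0, 0, 0, 0, 0],
    [0, 0, 0, 1, 1, 1, 0, 0, 0],
    [0, 0, 0, 0, 0, 0, 1, 1, 1],
    [1, 0, 0, 1, 0, 0, 1, 0, 0],
    [0, 1, 0, 0, 1, 0, 0, 1, 0],
    [0, 0, 1, 0, 0, 1, 0, 0, 1],
], dtype=float)

print(np.linalg.matrix_rank(A))  # 5, so the solution space is 9 - 5 = 4 dimensional

b = np.array([6.0, 9.0, 12.0, 7.0, 8.0, 12.0])  # P+Q+R == I+J+K == 27, so consistent
x = np.linalg.lstsq(A, b, rcond=None)[0]        # one particular solution
print(np.allclose(A @ x, b))                    # True; adding any null-space vector gives more
```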
• @MarcvanLeeuwen Thanks for pointing out that the Rank of the $6\times 9$ matrix is only $5$ ;) – AlexR Oct 31 '13 at 10:02 | 2020-01-23 11:10:49 | {"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 1, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.8702625632286072, "perplexity": 162.09622730069245}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2020-05/segments/1579250610004.56/warc/CC-MAIN-20200123101110-20200123130110-00393.warc.gz"} |
https://proofwiki.org/wiki/Definition:Bloch%27s_Constant | # Definition:Bloch's Constant
## Definition
Recall Bloch's Theorem:
Let $f: \C \to \C$ be a holomorphic function in the unit disk $\cmod z \le 1$.
Let $\cmod {\map {f'} 0} = 1$.
Then there exists:
a disk $D$ of radius $B$
an analytic function $\phi$ in $D$ such that $\map f {\map \phi z} = z$ for all $z$ in $D$
where $B > \dfrac 1 {72}$ is an absolute constant.
The lower bound of $B$ is known as Bloch's constant.
## Also see
• Upper Bound of Bloch's Constant, where it is shown that $B \le \sqrt {\dfrac {\sqrt 3 -1} 2} \times \dfrac {\map \Gamma {\frac 1 3} \map \Gamma {\frac {11} {12} } } {\map \Gamma {\frac 1 4} }$
## Source of Name
This entry was named for André Bloch.
## Historical Note
The precise value of Bloch's constant is unknown.
André Bloch stated a lower bound for it of $\dfrac 1 {72}$.
However, it is known that $\dfrac 1 {72}$ is not the best possible value for it.
In their $1983$ work Les Nombres Remarquables, François Le Lionnais and Jean Brette give $\dfrac {\sqrt 3} 4$:
$\dfrac {\sqrt 3} 4 \approx 0 \cdotp 43301 \, 2701 \ldots$
The best value known at present is $\dfrac {\sqrt 3} 4 + \dfrac 2 {10 \, 000}$ which evaluates to approximately $0 \cdotp 43321 \, 2701$.
This was demonstrated by Huaihui Chen and Paul M. Gauthier in $1996$.
Lars Valerian Ahlfors and Helmut Grunsky demonstrated that:
$B \le \sqrt {\dfrac {\sqrt 3 -1} 2} \times \dfrac {\map \Gamma {\frac 1 3} \map \Gamma {\frac {11} {12} } } {\map \Gamma {\frac 1 4} }$
and conjectured that this value is in fact the true value of $B$.
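Both bounds quoted above are easy to evaluate numerically (a quick check with Python's math.gamma; values rounded here):

```python
from math import gamma, sqrt

# Ahlfors-Grunsky upper bound for Bloch's constant
upper = sqrt((sqrt(3) - 1) / 2) * gamma(1 / 3) * gamma(11 / 12) / gamma(1 / 4)
# Chen-Gauthier lower bound, sqrt(3)/4 + 2/10000
lower = sqrt(3) / 4 + 2 / 10_000

print(round(upper, 4))  # 0.4719
print(round(lower, 4))  # 0.4332
```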
The number is given by François Le Lionnais and Jean Brette as $\pi \sqrt 2^{1/4} \dfrac {\map \Gamma {1/3} } {\map \Gamma {1/4} } \paren {\dfrac {\map \Gamma {11/12} } {\map \Gamma {1/12} } }^{1/2}$. | 2021-08-05 05:49:50 | {"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.9548863768577576, "perplexity": 1402.7505997917892}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2021-31/segments/1627046155322.12/warc/CC-MAIN-20210805032134-20210805062134-00109.warc.gz"} |
https://codereview.stackexchange.com/questions/94266/calculating-pi-using-bash | # Calculating pi using bash
I've written a little program to calculate pi using the Nilakantha series:
For this formula, take three and start alternating between adding and subtracting fractions with numerators of 4 and denominators that are the product of three consecutive integers which increase with every new iteration. Each subsequent fraction begins its set of integers with the highest one used in the previous fraction. Carry this out even a few times and the results get fairly close to pi. (http://www.wikihow.com/Calculate-Pi)
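In symbols, the series described above (denominators are products of three consecutive integers, with alternating signs) is:

```latex
\pi = 3 + \frac{4}{2 \cdot 3 \cdot 4} - \frac{4}{4 \cdot 5 \cdot 6} + \frac{4}{6 \cdot 7 \cdot 8} - \cdots
```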
I don't really understand other ways of calculating pi, or they take too long to perform.
echo "Enter scale please"
read SCALE
VALUE=2
PI=3
FITNESS=1
while true
do
    PI=$(echo "scale=$SCALE;$PI+(4/($VALUE*($VALUE+1)*($VALUE+2)))-(4/(($VALUE+2)*($VALUE+3)*($VALUE+4)))" | bc)
    VALUE=$(($VALUE+4))
    FITNESS=$(($FITNESS+1))
    echo "###############"
    echo "-->$FITNESS // $VALUE"
    echo "$PI"
done
I would really like to know how to detect when I get the same output multiple times, so I can tell the program to terminate when it has reached the most accurate version of pi possible.
Also if you have other suggestions on how to improve the code and/or know a better way of calculating pi (with some explanation), I would like to hear.
I see a few things that could allow you to improve your program. First, though, I don't consider myself a bash expert, so there may well be better ways of doing these things.
## Use a "shebang" line
As this question points out, you should always use a "shebang" line for your bash scripts. So the first line would be:
#!/usr/bin/env bash
## Pass values as arguments
Rather than prompting for the SCALE value, it's generally better to use a command line argument. That way, the script can be reused by other shell scripts.
## Provide a stopping mechanism
Each successive term shrinks, so eventually a term will evaluate to zero at the given scale. This suggests a mechanism for stopping: check each term for 0 before adding it.
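As a rough cross-check of this stopping rule, here is a sketch of the same idea in Python (the function and variable names are my own; note that `Decimal` rounds where `bc` truncates, so the last digit can differ from the bash version's output):

```python
from decimal import Decimal, getcontext

def nilakantha(scale):
    """Sum the Nilakantha series 3 + 4/(2*3*4) - 4/(4*5*6) + ...
    until a term rounds to zero at the requested scale."""
    getcontext().prec = scale + 10        # enough working precision
    quantum = Decimal(10) ** -scale       # e.g. 0.0001 for scale=4
    pi = Decimal(3)
    sign, n = 1, 2
    while True:
        term = Decimal(4) / (n * (n + 1) * (n + 2))
        if term.quantize(quantum) == 0:   # term no longer visible at this scale
            break
        pi += sign * term
        sign, n = -sign, n + 2
    return pi.quantize(quantum)

print(nilakantha(4))  # 3.1416
```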
## Indent do and while loops
I don't know of a bash style guide (there probably is one!) but I like to see the contents of loops indented to make it easier to read.
## Putting it all together
Here's a modification of your script with all of these suggestions implemented:
## bashpi.sh
#!/usr/bin/env bash
SCALE=$1
VALUE=2
PI=0
FITNESS=1
DELTA=3
while [ $(echo "$DELTA==0" | bc) != "1" ]
do
    PI=$(echo "$PI+$DELTA" | bc)
    DELTA=$(echo "scale=$SCALE;(4/($VALUE*($VALUE+1)*($VALUE+2)))-(4/(($VALUE+2)*($VALUE+3)*($VALUE+4)))" | bc)
    VALUE=$(($VALUE+4))
    FITNESS=$(($FITNESS+1))
    echo "###############"
    echo "--> $FITNESS // $VALUE"
    echo "$PI"
done

To better understand how this works, you can replace the three echo statements with this one:

    echo "DELTA = ${DELTA} --> ${FITNESS} // ${VALUE} : ${PI}"

## Sample output

With ./bashpi.sh 4, and the modified echo above, I get this output:

    DELTA = .1333 --> 2 // 6 : 3
    DELTA = .0064 --> 3 // 10 : 3.1333
    DELTA = .0012 --> 4 // 14 : 3.1397
    DELTA = .0003 --> 5 // 18 : 3.1409
    DELTA = .0001 --> 6 // 22 : 3.1412
    DELTA = .0001 --> 7 // 26 : 3.1413
    DELTA = .0001 --> 8 // 30 : 3.1414
    DELTA = 0 --> 9 // 34 : 3.1415

• I really appreciate the suggestions, but I don't completely understand how you've implemented the stopping feature; SCALE simply is how many digits exist after the decimal point. There's probably something I'm not seeing here... – insanikov Jun 21 '15 at 18:17
• It's a bit subtle. Basically, a value of 0.001 (=1e-3) with a scale of 3 is nonzero, but a value of 0.0005 (=5e-4) is considered to be equal to zero at a scale of 3, so when the next term becomes smaller than 1e-${SCALE}, the printable part of ${PI} is unlikely to change except in the least significant digit. A refinement would be to use ${SCALE}+1 in the while loop but it doesn't make much difference and this is slightly easier to understand. – Edward Jun 21 '15 at 21:08
• I understand it now, I would have never come up with this. Thank you for the explanation. – insanikov Jun 22 '15 at 13:52 | 2019-09-19 16:17:47 | {"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 0, "mathjax_asciimath": 1, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.5466233491897583, "perplexity": 952.9436589192687}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.3, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2019-39/segments/1568514573533.49/warc/CC-MAIN-20190919142838-20190919164838-00419.warc.gz"} |
https://linearalgebras.com/tag/counterexample | If you find any mistakes, please make a comment! Thank you.
## Not all ideals are prime
Solution to Abstract Algebra by Dummit & Foote 3rd edition Chapter 7.4 Exercise 7.4.9 Solution: First we show that $I$ is an ideal. To that end, let $f,g \in I$.…
## The quaternion group is not a subgroup of the symmetric group for any n less than 8
Solution to Abstract Algebra by Dummit & Foote 3rd edition Chapter 4.2 Exercise 4.2.7 Solution: (1) $Q_8$ is a subgroup of $S_8$ via the left regular representation. (2) Now suppose…
## Z[x] and Q[x] are not isomorphic
Solution to Abstract Algebra by Dummit & Foote 3rd edition Chapter 7.3 Exercise 7.3.2 Prove that the rings $\mathbb{Z}[x]$ and $\mathbb{Q}[x]$ are not isomorphic. Proof: In $\mathbb{Q}[x]$, $f(x)+f(x)=g(x)$ has a…
## 2Z and 3Z are not isomorphic as rings
Solution to Abstract Algebra by Dummit & Foote 3rd edition Chapter 7.3 Exercise 7.3.1 Prove that the rings $2\mathbb{Z}$ and $3\mathbb{Z}$ are not isomorphic. Solution: Suppose \$\varphi : 2\mathbb{Z} \rightarrow… | 2020-10-27 16:08:51 | {"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.6574838161468506, "perplexity": 251.07845979632566}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2020-45/segments/1603107894203.73/warc/CC-MAIN-20201027140911-20201027170911-00575.warc.gz"} |
https://www.quantopian.com/posts/research-updates-on-the-way | Very shortly, we will be making some major upgrades to Quantopian Research. One of these upgrades is to get_pricing. When the changes are made, get_pricing will load pricing and volume data that is point-in-time split- and dividend-adjusted. The data will now match the pricing and volume data that is used in Pipeline and the backtester. The upgrade is an improvement on the accuracy of get_pricing. This may change the plots or results in some notebooks. | 2018-12-12 23:46:09 | {"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 1, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.48022031784057617, "perplexity": 2043.2583523098967}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 20, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2018-51/segments/1544376824180.12/warc/CC-MAIN-20181212225044-20181213010544-00635.warc.gz"} |
http://tex.stackexchange.com/questions/66435/beamerposter-passing-portrait-options-to-custom-theme?answertab=oldest | # beamerposter passing portrait options to custom theme
Hi, I'm preparing a custom theme for beamerposter to align with the theme of my institution. I have to include a figure banner in the headline, plus the possibility to include 2 different logos. So far I've done it like this in the .sty file:
\setbeamertemplate{headline}{
\leavevmode
\parbox{\textwidth}{\includegraphics[width=\paperwidth]{poster-banner}}
\vskip-35ex
\begin{columns}[T]
\begin{column}{.2\paperwidth}
\begin{center}
\ifdefined\LeftLogo
\includegraphics[width=.6\linewidth,keepaspectratio,clip]{\LeftLogo}
\else
\fi
\end{center}
\vskip1.5cm
\end{column}
\begin{column}{.65\paperwidth}
\hskip1ex
\vskip4ex
\raggedright
\end{column}
\begin{column}{.15\paperwidth}
\begin{center}
\ifdefined\RightLogo
\includegraphics[width=.6\linewidth,keepaspectratio,clip]{\RightLogo}
\else
\fi
\end{center}
\vskip1.5cm
\end{column}
\end{columns}
\end{beamercolorbox}}
where \RightLogo and \LeftLogo have been defined before the \documentclass line:
\def\RightLogo{logoa.pdf}
\def\LeftLogo{logob.pdf}
\documentclass[final]{beamer}
The trick works nicely in portrait mode for beamerposter. But whenever I use the landscape option the banner is too tall. So I think I should set the dimensions of the banner, and also the amount of vskip, differently according to the option passed to beamerposter (i.e. orientation=portrait, orientation=landscape). But I can't figure out how to detect these options. I hope I've explained myself. I know that I should give an MWE, but I hope what I've provided is sufficient.
The command \ifportrait detects the value of orientation. So you can replace your line
\parbox{\textwidth}{\includegraphics[width=\paperwidth]{poster-banner}}
with a test and appropriate code for each case
\ifportrait
\parbox{\textwidth}{\includegraphics[width=\textwidth]{poster-banner}}
\else
\parbox{\textwidth}{\includegraphics[width=\textwidth,height=10cm]{poster-banner}}
\fi
The format is \ifportrait <portrait-case> \else <landscape-case> \fi. I have made a random choice of a height of 10cm in the landscape case; you will need to write something appropriate for your particular case. For example, you might find a minipage more suitable than a \parbox and could write
\begin{minipage}{\textwidth}
\centering\includegraphics[width=0.5\textwidth]{poster-banner}
\end{minipage}
in the landscape case.
To use this as a style, put the following in the file beamerthemeProva.sty
\mode<presentation>
\setbeamertemplate{headline}{
\leavevmode
\ifportrait
\parbox{\textwidth}{\includegraphics[width=\textwidth]{example-image-a}}
\else
\begin{minipage}{\textwidth}\centering\includegraphics[width=0.5\textwidth,height=15cm]{example-image-a}\end{minipage}
\fi
\vskip-35ex
\begin{columns}[T]
\begin{column}{.2\paperwidth}
\begin{center}
\ifdefined\LeftLogo
\includegraphics[width=.6\linewidth,keepaspectratio,clip]{\LeftLogo}
\else
\fi
\end{center}
\vskip1.5cm
\end{column}
\begin{column}{.65\paperwidth}
\hskip1ex
\vskip4ex
\raggedright
\end{column}
\begin{column}{.15\paperwidth}
\begin{center}
\ifdefined\RightLogo
\includegraphics[width=.6\linewidth,keepaspectratio,clip]{\RightLogo}
\else
\fi
\end{center}
\vskip1.5cm
\end{column}
\end{columns}
\end{beamercolorbox}}
\mode<all>
and then you can have a main document (a variation on the standard example in the beamerposter distribution) such as the following
\def\RightLogo{example-image-a.pdf}
\def\LeftLogo{example-image-b.pdf}
\documentclass[final]{beamer}
\mode<presentation> {
\usetheme{Berlin}
\usepackage{times}
\usepackage{amsmath,amsthm, amssymb, latexsym}
\boldmath
\usepackage[english]{babel}
\usepackage[latin1]{inputenc}
\usepackage[orientation=landscape,size=a0,scale=1.4,debug]{beamerposter}
\usetheme{Prova}
}
\graphicspath{{figures/}}
\title[Fancy Posters]{Making Really Fancy Posters with \LaTeX}
\author[Dreuw \& Deselaers]{Philippe Dreuw and Thomas Deselaers}
\institute[RWTH Aachen University]{Human Language Technology and Pattern Recognition, RWTH Aachen University}
\date{Jul. 31th, 2007}
%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%5
\begin{document}
\begin{frame}{}
\vfill
\begin{block}{\large Fontsizes}
\centering
{\tiny tiny}\par
{\scriptsize scriptsize}\par
{\footnotesize footnotesize}\par
{\normalsize normalsize}\par
{\large large}\par
{\Large Large}\par
{\LARGE LARGE}\par
{\veryHuge VeryHuge}\par
{\VeryHuge VeryHuge}\par
{\VERYHuge VERYHuge}\par
\end{block}
\vfill
\vfill
\begin{block}{\large Fontsizes}
\centering
{\tiny tiny}\par
{\scriptsize scriptsize}\par
{\footnotesize footnotesize}\par
{\normalsize normalsize}\par
{\large large}\par
{\Large Large}\par
{\LARGE LARGE}\par
{\veryHuge VeryHuge}\par
{\VeryHuge VeryHuge}\par
{\VERYHuge VERYHuge}\par
\end{block}
\vfill
\begin{columns}[t]
\begin{column}{.48\linewidth}
\begin{block}{Introduction}
\begin{itemize}
\item some items
\item some items
\item some items
\item some items
\end{itemize}
\end{block}
\end{column}
\begin{column}{.48\linewidth}
\begin{block}{Introduction}
\begin{itemize}
\item some items and $\alpha=\gamma, \sum_{i}$
\item some items
\item some items
\item some items
\end{itemize}
$$\alpha=\gamma, \sum_{i}$$
\end{block}
\begin{block}{Introduction}
\begin{itemize}
\item some items
\item some items
\item some items
\item some items
\end{itemize}
\end{block}
\begin{block}{Introduction}
\begin{itemize}
\item some items and $\alpha=\gamma, \sum_{i}$
\item some items
\item some items
\item some items
\end{itemize}
$$\alpha=\gamma, \sum_{i}$$
\end{block}
\end{column}
\end{columns}
\end{frame}
\end{document}
You can now change the orientation from landscape to portrait and get different effects.
I've tried the suggestion given, but for some reason whenever I use the \ifportrait <portrait-case> \else <landscape-case> \fi it gives me an error (like ! LaTeX Error: Missing \begin{document}.), and in the end forcing compilation gives me a two-page document with the first page containing only the title – Nicola Vianello Sep 10 '12 at 15:20
That sounds like some missing bracket problem. I have added a complete document to my answer, taking your original material and including my suggestion. – Andrew Swann Sep 10 '12 at 19:15
As I suspected. The suggestion works whenever you include the code provided in the main source document. Actually I'm trying to create a .sty file so I don't have to remember all the stuff. But if it is included in a style file (say beamerthemeProva.sty) and called in the form \usetheme{Prova}, the solution does not work. – Nicola Vianello Sep 11 '12 at 11:57
It is no problem to put this in an external file. I have changed my complete example to demonstrate this. Make sure you put the style file somewhere LaTeX will find it, e.g. in the same directory as your main .tex file. – Andrew Swann Sep 11 '12 at 15:24
Finally I succeeded in making it work. The problem was the order in which I loaded the packages in the preamble, as I used to load the theme Prova before loading the beamerposter package. Very stupid error indeed – Nicola Vianello Sep 12 '12 at 8:59
https://cseducators.stackexchange.com/questions/3372/how-to-answer-functional-programming-is-useless | # How to answer “functional programming is useless”?
I'm a TA for several Bachelor-level functional programming courses at my university. In every edition we have problems with some students who have the idea that functional programming is useless, because the industry doesn't use it. The more nuanced students add that some functional programming techniques are useful in any language (they think of lambda expressions in Java, map in Python, etc.), but that there is no point in learning pure functional programming, as you cannot use it to build real-world applications (and if you can, nobody does). These students tend to infect others, and that can create an unpleasant atmosphere.
I have read Hughes' 1990 paper Why Functional Programming Matters, where he explains that people in arguing for FP often tend to show what it doesn't have (assignment, side effects, ...) instead of what it does have. However, the things he mentions that FP does have (higher level of abstraction through gluing functions, lazy evaluation and thus easier modularity) are hard to grasp for mid-Bachelor students. In the end of the course they should be able to understand those advantages when you talk them through it, but the problem is motivating them from the start.
I know that most students will not end up as a functional programmer. But, learning one language means getting more proficient in others, and analogously for paradigms, so I think it is still helpful for them to study it. Yet when I explain this to the students the inevitable question is "why don't you teach us FP in a language that we can actually use later?".
What can I use to show the usefulness of functional programming?
Note: like many universities we don't use Haskell in all courses but a lesser known more academical functional programming language. This explains in part the complaints, however, we receive these complaints on Haskell courses as well.
• It might be useful to show which programming languages are "hot" right now and that these languages are all using some form of FP. (For example JavaScript, Swift, Rust, [Erlang&Elixir], Python, Scala, ...) And to fully understand what the advantages/disadvantages are, one needs to understand the fundamental basics (e.g. a pure FP language). – Raphael Ahrens Aug 28 '17 at 12:52
• Comments are not for extended discussion; this conversation has been moved to chat. Any further discussion will be deleted and will not be moved to chat. – thesecretmaster Aug 29 '17 at 12:57
• The assumed premise in this question is very questionable. After I finished my PhD not knowing any functional programming, the first company I joined turned into a Haskell shop and now I write Erlang for a living. Both languages are purely functional. Moreover, I see a lot of Python projects now strictly using functools to write functional Python despite it not being a functional language. So, I doubt the assumed premise in your question anecdotally. I've literally paid my bills since graduation with FP despite never thinking that once beforehand. – Tommy Aug 31 '17 at 12:33
• Sorry y'all. I'd love to move these comments to chat, but because I can't I've had to delete them. – thesecretmaster Sep 2 '17 at 1:12
• Tell them Scala is used by Twitter, LinkedIn and eBay. Tell them that Akka, which requires that certain data be immutable, is used by Amazon, Walmart, PayPal, and Intel. – Brian McCutchon Sep 2 '17 at 23:50
What can I use to show the usefulness of functional programming?
This is the wrong question, and by trying to answer it you're falling into a trap of accepting and reinforcing the students' misunderstanding of what university is. If I had applied that standard of value to the courses in the bachelor's degree I studied, I think I would only have attended 30 hours of lectures over the three years.
You may be right that most of them will not end up earning their living by programming in functional languages, although given current industry trends I wouldn't be entirely surprised if you turn out to be wrong in the long term. But if their goal is merely to gain a superficial knowledge of whatever is currently popular in industry, they're not just in the wrong course: they're in the wrong institution, and should drop out and find a bootcamp.
• Comments are not for extended discussion; this conversation has been moved to chat. – thesecretmaster Aug 29 '17 at 23:24
• I don't agree with this answer. Usefulness != popular in industry. Or to put it another way, the question isn't "what can I use to show the usefulness in industry...? This is clear from the OP's penultimate paragraph, where he provides a reasonable justification for usefulness. University is about teaching useful content. Useful doesn't have to mean direct or immediate. Something can be useful because it gets the student thinking in a certain way, provides context for other knowledge, builds intuition, and a whole host of other reasons. – JBentley Sep 1 '17 at 15:00
• Yes, it feels like the real question that needs to be answered here is the one asked at the end of the question: the inevitable question is "why don't you teach us FP in a language that we can actually use later?" Why on earth would anyone teach in a dead language? Nobody loves Latin, nor Cobol. Teach a paradigm in a language that they are familiar with, and they will learn the paradigm. Teach it in an unfamiliar language, and they will learn... language syntax. – Dewi Morgan Sep 1 '17 at 20:41
• @DewiMorgan I don't know if anybody loves Cobol, but it's absolutely false to say "Nobody loves Latin". Clearly, many people do (including me, though my understanding is limited). It's relevant because the reasons often given for learning Latin are similar to those given for functional programming: it improves your thinking, it makes you better even at other languages, it builds character, etc. :-) – ShreevatsaR Sep 6 '17 at 4:37
• @PeterTaylor The point of my first comment was given in my first comment: "the real question that needs to be answered here is the one asked at the end of the question". I agree, if students know no suitable language for the paradigm, one must be taught. That's a strong, valid answer to that question, but is unlikely to ever be true. Even then, the selection must be defensible: Lisp would be hard to defend. "This is the only functional lang I know" or "javascript isn't as good for functional code" risk being seen as weak, invalid attempts to justify laziness on the part of the teacher. – Dewi Morgan Sep 6 '17 at 16:34
Hughes is absolutely right, and the following paragraph from his paper hits the nail right on the head:
Such a catalogue of “advantages” is all very well, but one must not be surprised if outsiders don’t take it too seriously. It says a lot about what functional programming isn’t (it has no assignment, no side effects, no flow of control) but not much about what it is. The functional programmer sounds rather like a medieval monk, denying himself the pleasures of life in the hope that it will make him virtuous. To those more interested in material benefits, these “advantages” are totally unconvincing.
You say:
However, the things he mentions that FP does have (higher level of abstraction through gluing functions, lazy evaluation and thus easier modularity) are hard to grasp for mid-Bachelor students. In the end of the course they should be able to understand those advantages when you talk them through it, but the problem is motivating them from the start.
None of these things are particularly difficult to learn, unless you teach them in an inherently difficult style (such as Haskell or "a lesser known more academical functional programming language.") Many functional concepts are useful in the right places, as a "the right tool for the job" sort of thing, but functional languages have a tendency to go overboard and apply these concepts dogmatically rather than pragmatically.
Imperative languages get this right much more often. For example, you want your students to understand lazy evaluation? Teach them to use Python generators or C# iterators. That's what the yield keyword is: lazy evaluation. But it's set up in such a way that you decide when to use it as appropriate, rather than the language dropping it on you as the default. Want them to understand coroutines? Teach them async/await. Likewise, there's nothing at all in Hughes's explanations of "program gluing" that can't be easily rewritten in modern-day object-oriented languages.
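To make that concrete, here is a small illustrative Python example (the names are my own): the generator below describes an infinite sequence, yet nothing is computed until a consumer actually pulls values from it.

```python
from itertools import islice

def naturals():
    """An infinite, lazily produced stream: 0, 1, 2, ...
    Each yield suspends the function until the next value is requested."""
    n = 0
    while True:
        yield n
        n += 1

# A lazy pipeline over the infinite stream: still nothing computed yet.
squares = (n * n for n in naturals())

# Only now, when we slice off five elements, does any work happen.
print(list(islice(squares, 5)))  # [0, 1, 4, 9, 16]
```

This is exactly the "lazy evaluation on demand" point: the programmer opts in with yield where it helps, instead of having laziness as the language-wide default.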
Today, with LINQ (.NET), the Streams API (Java), and Itertools (Python), basic functional concepts are used all the time in industry. But they're used as "another tool in the toolbox," to be applied as appropriate. When your students balk at functional languages, a large part of the problem is because they don't have this approach; they try to cram all these concepts down your throat whether you want them (or even need them at all!) or not.
A classic example is recursion. Let's say you wanted to find the length of a linked list. In Python, you'd do it like this:
def length(list):
result = 0
while list is not None:
result = result + 1
list = list.next
return result
In Paul Graham's book "On Lisp", he does it a different way. I'm not going to inflict Lisp upon the audience (the originals can be found on pages 22 and 23 if you're really curious), but his naive solution translates to:
def our_length(list):
if list is None:
return 0
return 1 + our_length(list.next)
This is shorter and simpler than the way I did it, but as he then points out, there's an obvious problem with this: it can produce a stack overflow if the list is long enough. In order for a functional language to cope with this problem, it needs its functions to be tail-recursive. Here's the Python equivalent of his tail-recursive version:
def our_length(list):
def rec(list, acc):
if list is None:
return acc
return rec(list.next, 1 + acc)
return rec(list, 0)
Wow! Look at that monstrosity! It's as long as my version (6 lines of code), but about twice as complicated and harder to read, requiring a nested function that does all the work for no easily-apparent reason. It's only when you understand that functional languages hate looping constructs and mutation of variables, and inflict recursion and immutability upon the developer dogmatically, rather than pragmatically allowing you to use them as appropriate, that this way even begins to make any sense at all! (Amusingly, the reason you have to write it as such a mess is so that the compiler can automagically detect that you're using tail recursion and transform the function into a loop featuring the accumulator as a mutable variable, factoring out the toxic recursion so it won't break the stack!) In reality, recursion is extremely useful when dealing with inherently recursive problems such as tree structures or divide-and-conquer algorithms, but trying to use it on linear problems where a loop is the natural fit is just asking for pain and trouble more often than not.
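For completeness, the usual functional answer to this particular example is neither a loop nor hand-written recursion but a fold — in Python terms, functools.reduce over the nodes. (The Node class and helpers below are my own illustration, not from Graham's text.)

```python
from functools import reduce

class Node:
    """Minimal singly linked list node; None is the empty list."""
    def __init__(self, value, next=None):
        self.value = value
        self.next = next

def iter_nodes(head):
    """Yield each node of the list in turn."""
    while head is not None:
        yield head
        head = head.next

def length(head):
    # Fold: start from 0 and add 1 per node -- no explicit
    # recursion and no mutable counter in user code.
    return reduce(lambda acc, _node: acc + 1, iter_nodes(head), 0)

print(length(Node(1, Node(2, Node(3)))))  # 3
print(length(None))                       # 0
```

The fold abstracts the traversal away entirely, which is the "program gluing" Hughes argues for — and it works just as well in an imperative language.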
In summary, the reason it's difficult to convince your students that functional programming is not useless is largely because the specific flavor you're teaching them is, in fact, useless and counterproductive, an active impediment to productivity. Unfortunately, you probably can't get away from teaching the languages mandated by your curriculum, but if you can explain things in ways that are actually relevant to modern programming practice and their future careers, you'll likely see understanding dawn as they realize how and when these techniques can be appropriate. What they're missing, because dogmatic functional languages largely reject the concept, is the all-important notion of "the right tool for the job, as appropriate."
• You miss out on abstraction. What's the equivalent to an efficient for loop? No, it's not tail recursion, it's foldl'. You write def our_length(list): return list.reduce(x => x+1, 0) – Bergi Aug 29 '17 at 6:52
• Comments are not for extended discussion; this conversation has been moved to chat. Any further discussion here will be deleted and will not be moved to chat. – thesecretmaster Aug 29 '17 at 23:08
I think the real trick is in teaching the value of Functional Programming rather than trying to teach the value of Functional Programming Languages. The latter will fail the pragmatic approach in almost all cases. Why? Because functional programming languages intentionally restrict themselves in the name of purity, and then try to demonstrate that they can be just as effective as other languages. Meanwhile, the other languages have sought to be productive as goal #1. It should be expected that the languages which strive for productivity first will be more productive than those which put productivity second. If not, those other languages have truly failed. The mere fact that C, C++, Python, PHP, etc. are all alive and well suggests that even the most staunch supporter of Haskell has to admit that these languages are good at doing what they strive to do. That thing they do may not be exactly what the Haskell developer wants to do in their programs, but they have to admit that the other languages do what the other people want them to do.
Instead of trying to promote the languages themselves, I would try to build a niche for these functional programming languages by building interest in functional programming in general. You mention that several students already have picked up that functional programming has been working its way into "mainstream" languages like Java or Python, or even C++11. Don't fight that. Leverage that. Teach them that functional programming, itself, has value. Use whatever language it takes to teach that.
Once they start getting interested in functional programming itself, that is the time to start encouraging them to approach languages like Haskell. Once the students realize that there's another way to solve some of these problems, the functional way, then languages like Haskell can start to sell the argument that everything can be done functionally. In fact, the typical battle cry of Haskell lovers is that it can even be done efficiently and functionally! This battle cry falls flat if students are currently content with their tools, but if you have demonstrated to them that there is power to this new way of thinking, the battle cry sounds different.
The message should not be "learn Haskell, it's a great language." The message should be "Learn functional programming. It's a powerful tool that you can apply in many languages, and it's easy to hone your skills at it using Haskell." Or substitute your academic language. You should be selling the technique, not the language. Teach them to find a balance between the procedural styles and the functional styles. That balance will differ at every business that eventually employs your students. There is no one "right" balance, so teaching them how to strike the best balance for the moment is a great job skill.
I learn martial arts. In my martial arts class, I do lots of things that I will never find a need to do in real life. Why do I do them? Because they teach me skills I will use in my day to day life, and the most efficient way to learn them is to spend the hours in a pure environment dedicated to honing those skills.
Likewise, I have learned functional programming languages. In learning those languages, I do a lot of things that I will never do in my day-to-day career. Why do I do them? Because they teach me the set of skills I do use in my career, and the most efficient way to learn them was to spend the hours in a pure environment dedicated to honing those skills.
Consider what they are saying, discuss with the students, and agree with them that perhaps they are right. It's almost impossible to get anyone interested if they already know that they won't benefit (at least in the monetary sense) from it in the short and medium run.
By agreeing with them, you take them into your confidence. Then, find ways to incorporate some of the ideas of functional programming into your general discussion. For instance, C# is the regular (imperative) programming language, something that they would be happy to learn. However, while discussing C#, you can also talk about F# which is functional, and runs on visual studio and dot net.
If you play it sly, you could have your cake and eat it too. You never know. Some of the students might even realise (like when they are taking a bath) that acquiring a deeper understanding of functional programming will give them a leg up in the long run.
• Yes show them how there is currently a trend toward functional. As there once was a trend toward structured, and object-oriented. But the new will not replace the old. It will improve it. – ctrl-alt-delor Aug 28 '17 at 10:54
• By agreeing with them, we reinforce their confidence. That will close them off to any discussion. – beroal Sep 10 '17 at 11:21
I think it is pretty hard to convince students focused on the here-and-now of a lot of things. However, the teacher's job is to teach them what they need to know, not just what they want to know.
There are two reasons for learning a functional language. The most important is that anything that gets you to think hard about something new will make you better at other things as well. If you want your body to be strong you do something like run hard and long so that you sweat and collapse at the end of it (or nearly). If you want to be smart, do the same with your mind. Make it run hard, sweat, and collapse. If you don't know functional programming, learning it will expand your mind. If you do know functional programming, learning, say, C will have a similar effect.
The other reason is that functional may turn out to be even more important than we think now (or some new paradigm in its place). In particular there are two end points of getting a computer to do something. The first is to describe (somehow) what is to be done. The second is to describe how it is to be done. Imperative programs and even OO programs do the latter. They implement algorithms. Every new problem needs a new algorithm - a new set of steps. But we have seen glimpses (mostly Prolog or SQL) of a different world, in which we describe what we want, not how to get it. Functional Programming is much closer to this way of working than the strictly algorithmic approach.
And the reason that that might be important to learn about is that as machines get more complex, including more parallelism/concurrency, the harder it becomes to program them algorithmically and the more important it may become to program them descriptively, with the computer itself figuring out an algorithm.
The reason why this hasn't been the norm, yet, is that machines were also insufficiently powerful to manage it. But that is ending now. People preferred imperative programming for efficiency, but at a certain level of power, that becomes less important than getting the correct answer. After all, it doesn't usually help us to get the wrong answer fast. And generalized algorithms may be the answer to this, pushing us toward the descriptive rather than the process end of the continuum.
• “the teacher's job is to teach them what they need to know, not just what they want to know.” Sure, but it’s hard to teach it to them while they don’t want to know it. Most routes to effectively teaching anything involve bringing students around until they do want to know it. – Peter LeFanu Lumsdaine Aug 30 '17 at 21:26
• "the teacher's job is to teach them what they need to know, not just what they want to know" - Sure, but it's not unreasonable for students to ask 'I get that you think I need to know this, but could you explain why you think that?' It's not unreasonable for students to ask you to help them to understand why you think that they need to know it. So the question probably still deserves to be answered. If you have a compelling argument that they need to know it, it seems like them asking the question is a great opportunity to make the case for it. – D.W. Sep 3 '17 at 2:19
• Agree. Of course. – Buffy Sep 3 '17 at 9:52
Functional programming knowledge will definitely benefit your students. I think Martin Odersky explains it best here:
He explains that there needs to be a paradigm shift in the way we think of processing data. As we progress with technology, we are reaching the limits of transistor size. Because of this, instead of making processors smaller and more efficient, we will be looking to add more of them.
This will then lead to more parallelization which prefers immutability. What is one of the crowning side effects of functional programming? Immutability
Look at big Apache libraries like Spark. Spark uses Scala, a functional programming language built on the JVM, to process large amounts of data very quickly.
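To make the parallelism point concrete in Java terms (a small illustrative sketch, not part of the original answer): a stream pipeline with no shared mutable state can be parallelized by adding a single call.

```java
import java.util.stream.LongStream;

public class ParallelSum {
    // Sum of squares 1..n; the pipeline is pure (no shared mutable
    // state), so switching on .parallel() is safe.
    static long sumOfSquares(long n) {
        return LongStream.rangeClosed(1, n)
                .parallel()
                .map(x -> x * x)
                .sum();
    }

    public static void main(String[] args) {
        System.out.println(sumOfSquares(10)); // prints 385
    }
}
```

Because the pipeline only reads its input and builds new values, running it on several cores cannot produce data races; that is the link between immutability and easy parallelization.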
I would say that your students are completely incorrect in their notion that functional programming has no relevance in today's tech world.
• You would have to take a side-trip to explain big data, then satisfy the student's need to understand procedural programming, so that they have some concept of what algorithms are and how to code them, then go back to big data and start explaining how Functional is necessary there. I don't think you can go straight for the punchline, people will not get the joke. – user737 Aug 28 '17 at 12:27
• Functional programming is great for extracting an answer from a dataset so huge it spans a multi-room computer system. It's horrible for writing Tetris. Far more programmers will spend their careers writing Tetris than will spend their careers developing for room-sized computers. – Mark Feb 12 '18 at 6:10
## There are two types of learning
The two types of learning that are useful are:
• to solve an immediate problem.
• as an investment, this is the type that you will mostly do at university. Although you will also practice the former, it is not for what you learn, but to learn how to learn.
## The language is irrelevant
How many words are there in Python? About 20. How long does it take to learn? How long did it take me to learn French? I still have not finished, because the word-space of French is big, but I did learn my first 3000 words in 2 hours, because French is a lot like English.
The more languages you know, the quicker you become at learning languages.
## The Language is important
My spelling in English has improved since I learnt French. My object-oriented programming improved when I learnt functional programming. It was easier and quicker to learn each style properly in a language built for it: I only properly learnt functional programming when I did it in a functional language, and I only properly learnt object-oriented programming when I did it in Eiffel.
The more languages you know, the better you become at the others.
## Functional is the future
Moore's Law is coming to an end. Clock speeds have stopped increasing. Transistor count is continuing to increase for now. Core count is increasing.
Imperative programming does not scale in this environment: when you write a multi-threaded imperative program, you will fill it with locks. This will not only create bugs, but it also effectively serializes the program back to single-threaded execution.
## No one uses it — are you sure?
A lot of server software uses functional languages; I know Google uses them.
You may be correct that no one uses it on the desktop, much. But how much computing is done on the desktop? Look at servers, supercomputers and embedded. Most of the CPUs in the world are deployed in embedded systems. At least 80%.
## Faster learning
It can be quicker to learn things separately. Learning functional programming in a functional language and then learning a non-functional language can be quicker than learning the non-functional language first. However, this will depend on many things, including choice of language, how they are taught, etc.
Bertrand Meyer, in "Touch of Class", makes this claim about object-orientation and Eiffel.
## The language is irrelevant
Give me a functional language with lambdas, and I can implement mutation. Give me a non-functional language and I will write very clean, mostly functional code.
## The Language is important
When looking at teaching/learning resources
• for non-functional languages, they tend to start in lesson 1 with mutation. This is a bad idea even for learning procedural programming.
• for functional languages, they leave mutation until much later. In "Structure and Interpretation of Computer Programs", it is introduced about half way through. You may be surprised; I had to look back, as I could not believe that we had done all that without it.
• Learning more languages can also confuse programmers more. There is no indication that functional is the future. Most programs run on a desktop/phone. "Can be quicker" is not a convincing argument as it depends on the person learning. – TwoThe Aug 28 '17 at 11:55
• @TwoThe Learning how to learn new languages quickly is another skill they should be picking up. No language is perfect, after all, so they should always be expanding their toolbox even if they do neglect functional languages (which they shouldn't). Temporary confusion is a common symptom of learning something new. – Ray Aug 28 '17 at 18:39
• I haven't seen a non-trivial case in which a compiler automatically multi-threaded independent expressions so as to better utilize a CPU's multiple threads. Sounds great in theory, but I've never seen it practice. For one, I worry about how a compiler would determine if two expressions are sufficiently expensive so at to benefit enough from being processed in parallel, and to be worth the huge overhead of spawning/synchronizing threads. Would love to be proven wrong, however. – Alexander Aug 29 '17 at 4:42
• (1/2) Additionally, even though the statement "fp performs better because it can be automatically parallelized" is theoretically true, I would argue that it's not as much of an issue. It's already quite uncommon for performance to be the primary issue of a project. The primary challenges of software development are rapidly shifting away from hardware limitations, and towards project management/scale/complexity issues. – Alexander Aug 29 '17 at 4:47
• (2/2) A strong argument can be made that languages need to strongly prioritize simplicity, rapid prototyping, maintainability. In many cases, FP excels at these things, but the argument for theoretical performance improvements is less relevant than ever, and it becomes a mere "nice-to-have". – Alexander Aug 29 '17 at 4:51
Give them the following task to implement in plain old Java:
You are given a list of Students with their name, area code and average test score. Write a program that calculates and prints out the average score of all students by each area.
Example input:
final List<Student> students = Arrays.asList(
new Student("Meyers", "12345", 2.3f),
new Student("Miller", "12345", 1.1f),
new Student("Swanson", "34567", 3.4f)
);
Example output:
34567: 3,40
12345: 1,70
This task is trivial, but the amount of non-functional Java code you need to write is extensive. After they have presented their solutions, show them the solution written in functional Java:
students.stream()
.collect(Collectors.groupingBy(Student::getAreaCode, Collectors.averagingDouble(Student::getAvgScore)))
.forEach((code, score) -> System.out.printf("%s: %.2f\n", code, score));
Then explain that functional programming is a tool that serves well when you have to deal with tasks that can be described as a mathematical function. And using that will - as demonstrated - greatly reduce the amount of code you have to type.
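For comparison, here is a sketch of an imperative version of the same task (the minimal `Student` class below is a stand-in for whatever class the students were given):

```java
import java.util.Arrays;
import java.util.HashMap;
import java.util.List;
import java.util.Map;

public class AverageByArea {
    static class Student {
        final String name;
        final String areaCode;
        final float avgScore;

        Student(String name, String areaCode, float avgScore) {
            this.name = name;
            this.areaCode = areaCode;
            this.avgScore = avgScore;
        }
    }

    // Group the scores by area code, then average each group by hand.
    static Map<String, Double> averageByArea(List<Student> students) {
        Map<String, double[]> sumAndCount = new HashMap<>();
        for (Student s : students) {
            double[] acc = sumAndCount.computeIfAbsent(s.areaCode, k -> new double[2]);
            acc[0] += s.avgScore; // running sum
            acc[1] += 1;          // running count
        }
        Map<String, Double> averages = new HashMap<>();
        for (Map.Entry<String, double[]> e : sumAndCount.entrySet()) {
            averages.put(e.getKey(), e.getValue()[0] / e.getValue()[1]);
        }
        return averages;
    }

    public static void main(String[] args) {
        List<Student> students = Arrays.asList(
                new Student("Meyers", "12345", 2.3f),
                new Student("Miller", "12345", 1.1f),
                new Student("Swanson", "34567", 3.4f));
        averageByArea(students).forEach(
                (code, score) -> System.out.printf("%s: %.2f%n", code, score));
    }
}
```

Seen side by side, the stream pipeline above collapses both explicit loops and the accumulator map into a single `groupingBy`/`averagingDouble` collector.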
• Well, honestly, if I came upon that in a code review in an enterprise-y environment...I would probably send it back to be redone in a way that is less "elegant" but more "readable" or "maintainable"... – user3067860 Aug 28 '17 at 22:10
• Readability and maintainability depend on the level of knowledge in the company. In my current company about every Java developer can fluently read and maintain that. But I've also been in companies where the word "stream" alone caused headaches, there I probably would write that differently. In the end it is however an artificial solution to demonstrate something. – TwoThe Aug 29 '17 at 7:42
• @Walfrat let me play the devil's advocate right back. In C# you can write that code and then have it run on the database by using a LINQ provider, where the abstract syntax tree is used to generate SQL. The fact that it's functional is what makes this possible. This is how a LINQ provider works. – Benjamin Gruenbaum Aug 29 '17 at 13:54
• @user3067860 You may have hit upon an answer to the original question there. Learning functional programming is useful because people who haven't can be confused by three lines of fairly trivial code. – Ray Aug 29 '17 at 18:40
• @JerryCoffin True...but APL is a special case. Any language composed mostly of characters that don't even exist on a standard keyboard is going to be difficult to read. But the simplest evidence I have for the idea that anyone confused by TwoThe's example is confused by the functional programming and not the syntax or API is that I've never used Java's streams API and I was able to understand that code without any difficulty since the functional approach was so clear. – Ray Aug 31 '17 at 17:42
There is a really good teaching point here.
Ideas often arise repeatedly in different contexts. That includes the ideas behind Functional Programming.
We also all know how fast computing changes. Your job is to teach them computing; it's up to them to choose how to use it and which specialist areas they need most. They probably can't know that yet, as 90% of what they will each specifically use, and several key paradigms they will come to rely on in future careers spanning maybe 40+ years, hasn't yet been invented or become the main paradigm.
Given that you are preparing them for many years ahead, and a broad topic, it is reasonable to include FP and expect it to be taken as serious and useful. Like machine code, firmware, and logic gates, few of your students will directly use it, but many will indirectly use it and all will use tools and techniques which began in it.
As even the protesting students can't tell whether they'll find it useful in future, it has strong grounds for being taught, so they understand the concepts and history behind it, and behind what they now do. It is also there if they do come into contact with FP work in future.
I would start by distinguishing between the claims
1. Functional programming is useless, and
2. Functional programming languages are useless.
It's likely that when students say the former, they actually mean the latter, and the latter is more likely to be true (or at least valid) from the perspective of their goals.
Regardless of whether functional programming languages will be useful to your students, functional programming gives a model for thinking about whether code in whatever language they end up working with has important properties, like being free of side-effects or internal state.
http://math.stackexchange.com/questions/203603/open-set-and-measure-theory

# Open Set and Measure Theory
Let $u$ be a numerical function defined on $\Omega$, with $u$ measurable, and let $(O_i)_{i\in I}$ be the family of all open subsets $O_i$ of $\Omega$ such that $u=0$ often in $O_i$. Let $O = \cup_{i\in I}O_i$. Then $u=0$ often in $O$.
How can I prove this?
My attempt so far:
Let $u$ be equal to $0$ in $O_i\setminus M_i$ and $\neq 0$ in $M_i$; then
$O = \cup_{i\in I}O_i=\cup_{i\in I}[(O_i\setminus M_i)\cup M_i]$, ...
but I don't know how to find the subset of $O$ that has measure zero.
Does "often" mean "almost always", that is "except on a set of measure $0$"? – Alex Becker Sep 27 '12 at 19:44
Are we in Euclidean space here? And is $I$ supposed to be countable? – Harald Hanche-Olsen Sep 27 '12 at 19:47
@AlexBecker yes – juaninf Sep 27 '12 at 23:22
@HaraldHanche-Olsen yes – juaninf Sep 27 '12 at 23:22
Let $N_i$ be a set of measure $0$ such that $u=0$ on $O_i\setminus N_i$. Define $N:=\bigcup_{i\in I}N_i$. By sub-additivity, as $I$ is countable, $N$ has measure $0$. If $x\in O\setminus N$, then $x\in O_i$ for some $i\in I$. Since $x\notin \bigcup_{j\in I}N_j$, $x\notin N_i$, so $u(x)=0$.
https://artofproblemsolving.com/wiki/index.php?title=2019_AMC_12B_Problems/Problem_12&diff=102596&oldid=102465

Difference between revisions of "2019 AMC 12B Problems/Problem 12"
Problem
Right triangle $ACD$ with right angle at $C$ is constructed outwards on the hypotenuse $\overline{AC}$ of isosceles right triangle $ABC$ with leg length $1$, as shown, so that the two triangles have equal perimeters. What is $\sin(2\angle BAD)$?

[Asymptote diagram: isosceles right triangle $ABC$ with legs $AB = BC = 1$, and right triangle $ACD$ constructed outward on the hypotenuse $\overline{AC}$.]
$\textbf{(A) } \dfrac{1}{3} \qquad\textbf{(B) } \dfrac{\sqrt{2}}{2} \qquad\textbf{(C) } \dfrac{3}{4} \qquad\textbf{(D) } \dfrac{7}{9} \qquad\textbf{(E) } \dfrac{\sqrt{3}}{2}$
Solution 1
Observe that the "equal perimeter" part implies that $BC + BA = 2 = CD + DA$. A quick Pythagorean chase gives $CD = \frac{1}{2}, DA = \frac{3}{2}$. Use the sine addition formula on angles $BAC$ and $CAD$ (which requires finding their cosines as well), and this gives the sine of $BAD$. Now, use $\sin{2x} = 2\sin{x}\cos{x}$ on angle $BAD$ to get $\boxed{\textbf{(D)} = \frac{7}{9}}$.
Feel free to elaborate if necessary.
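One way to elaborate, filling in the trigonometry (this computation is mine, not part of the wiki page): with $AB = BC = 1$, $AC = \sqrt{2}$, $CD = \tfrac{1}{2}$, $DA = \tfrac{3}{2}$ and $\angle BAD = \angle BAC + \angle CAD = 45^\circ + \angle CAD$,

$\sin\angle CAD = \dfrac{CD}{DA} = \dfrac{1}{3}, \qquad \cos\angle CAD = \dfrac{AC}{DA} = \dfrac{2\sqrt{2}}{3},$

$\sin\angle BAD = \dfrac{\sqrt{2}}{2}\cdot\dfrac{2\sqrt{2}}{3} + \dfrac{\sqrt{2}}{2}\cdot\dfrac{1}{3} = \dfrac{4+\sqrt{2}}{6}, \qquad \cos\angle BAD = \dfrac{4-\sqrt{2}}{6},$

$\sin(2\angle BAD) = 2\sin\angle BAD\cos\angle BAD = \dfrac{2(4+\sqrt{2})(4-\sqrt{2})}{36} = \dfrac{28}{36} = \dfrac{7}{9}.$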
Solution 2
D 7/9 (SuperWill)
https://www.vedantu.com/question-answer/fill-in-the-blanks-the-perimeters-of-two-similar-class-10-maths-cbse-5f5da4088f2fe2491852d1d1

Question
# Fill in the blanks: The perimeters of two similar triangles are 25 cm and 15 cm respectively. If one side of the first triangle is 9 cm, then the corresponding side of the second triangle is ………..
Hint: To solve this question, we will use the perimeter property of similar triangles, which states that the ratio of the perimeters of two similar triangles is equal to the ratio of their corresponding sides, that is
$\dfrac{P_1}{P_2} = \dfrac{a_1}{a_2}$, where $P_1$, $P_2$ and $a_1$, $a_2$ are the perimeters and the corresponding sides of the two similar triangles respectively.
So, let the perimeter of the first triangle be $P_1 = 25$ cm and the perimeter of the second triangle be $P_2 = 15$ cm. Let one side of the first triangle be $a_1 = 9$ cm, and the corresponding side of the second triangle be $a_2$.
$\dfrac{P_1}{P_2} = \dfrac{a_1}{a_2}$
So, to find the value of $a_2$, we substitute $P_1 = 25$ cm, $P_2 = 15$ cm and $a_1 = 9$ cm, which gives:
$\dfrac{25}{15} = \dfrac{9}{a_2} \\ \Rightarrow \dfrac{5}{3} = \dfrac{9}{a_2} \\ \Rightarrow a_2 = \dfrac{3 \times 9}{5} \\ \Rightarrow a_2 = 5.4\ \text{cm}$
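The same substitution can be checked numerically; the short sketch below (class and method names are my own) just encodes $a_2 = a_1 \cdot P_2 / P_1$, which follows from the ratio above:

```java
public class SimilarTriangles {
    // From P1/P2 = a1/a2, the corresponding side is a2 = a1 * P2 / P1.
    static double correspondingSide(double a1, double p1, double p2) {
        return a1 * p2 / p1;
    }

    public static void main(String[] args) {
        System.out.println(correspondingSide(9.0, 25.0, 15.0)); // prints 5.4
    }
}
```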
http://www.math.unipd.it/it/news/?id=2224

# Seminar: "Old and New in Complex Dynamical Systems"
## Thursday 5 April 2018, 16:10 - Sala Riunioni VII Piano - David Shoikhet
On Thursday 5 April 2018 at 16:10, in Sala Riunioni VII Piano, David Shoikhet (The Galilee Research Center for Applied Mathematics & Holon Institute of Technology, Israel) will give a seminar entitled "Old and New in Complex Dynamical Systems".
Abstract
Historically, complex dynamics and geometrical function theory have been intensively developed from the beginning of the twentieth century. They provide the foundations for broad areas of mathematics. In the last fifty years the theory of holomorphic mappings on complex spaces has been studied by many mathematicians, with many applications to nonlinear analysis, functional analysis, differential equations, and classical and quantum mechanics. The laws of dynamics are usually presented as equations of motion, written in the abstract form of a dynamical system: $\frac{dx}{dt}+f(x)=0$, where $x$ is a variable describing the state of the system under study, and $f$ is a vector-function of $x$. The study of such systems, when $f$ is a monotone or an accretive (generally nonlinear) operator on the underlying space, has recently been the subject of much research by analysts working on quite a variety of interesting topics, including boundary value problems, integral equations and evolution problems.
There is a long history associated with the problem of iterating holomorphic mappings and their fixed points, the work of G. Julia, J. Wolff and C. Carathéodory being among the most important.
In this talk we give a brief description of the classical statements that combine Julia's celebrated theorem of 1920, Carathéodory's contribution of 1929 and Wolff's boundary version of the Schwarz Lemma of 1926, together with their modern interpretations.
We also present some applications of complex dynamical systems to the geometry of domains in complex spaces and to operator theory.
https://www.physicsforums.com/threads/combinatorics-questions.795633/

# Combinatorics Questions
1. Feb 2, 2015
### Extreme112
1. The problem statement, all variables and given/known data
How many ways can you select 10 jellybeans from colors Red, Blue, Green so that you have at most 4 Green jellybeans?
2. Relevant equations
...
3. The attempt at a solution
# of ways = # of ways to pick 1 Green + # of ways to pick 2 Green + #of ways to pick 3 Green + # of ways to pick 4 Green.
1 Green jellybean: After picking out the jellybean, there are then 9 left to choose from.
* * * * $ * * * * *

If the '*' are the 9 jellybeans and '$' is the divider separating them, so that those on the left of it are Red and those on the right of it are Blue, then there are 10!/9! = 10 ways to arrange it.
2 Greens: Following the same process above would result in 9!/8! = 9
3 Greens: 8!/7! = 8
4 Greens: 7!/6! = 7
Therefore you would have 10+9+8+7 ways to select 10 jellybeans with at most having 4 Green jellybeans.
2. Feb 2, 2015
### Simon Bridge
For two greens,
one comes out first, the second one can come out with the second draw, or the third, or... up to the tenth.
that's nine ways... but the first could have come out on the 3rd draw, with the second coming out on the 4th or subsequent... that's another 6 ways or something, isn't it?
So that's 15 ways to get 2 green, and I haven't finished counting yet.
3. Feb 2, 2015
### haruspex
I would say Extreme112 is interpreting the question correctly, that the order of selection is unimportant.
You left out one case.
4. Feb 2, 2015
### Extreme112
I think I forgot the 0 case. Thanks for the help guys.
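The corrected count can be verified by brute force; the sketch below (not from the thread) fixes the number of green jellybeans at 0 through 4 and counts the red/blue splits of the remainder:

```java
public class JellybeanCount {
    // Count selections of 10 jellybeans from {Red, Blue, Green}
    // with at most 4 Green; the order of selection does not matter.
    static int count() {
        int ways = 0;
        for (int green = 0; green <= 4; green++)
            for (int red = 0; red + green <= 10; red++)
                ways++; // blue = 10 - green - red is forced
        return ways;
    }

    public static void main(String[] args) {
        System.out.println(count()); // prints 45
    }
}
```

This matches the hand count 11 + 10 + 9 + 8 + 7 = 45 once the 0-green case is included.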
https://kmphysics.com/posts/page/2/

# Characterisation Methods in Solid State and Materials Science
I’m in the process of writing a textbook for the IOP E-book portfolio that will focus on characterisation techniques, in particular for solid state physics.
This will include:
• X-ray and Neutron Techniques
• Microscopy Techniques
• Spectroscopy Techniques
• Magnetic, Electric, and Thermal Characterisation Techniques.
And will, as far as possible make use of open source data to provide real world problems for students to tackle (focusing on data analysis).
I am therefore, actively looking for journal articles that may use one or more of these techniques and where the original datafiles are accessible. If you have work that you think would be relevant then please contact me. Of course all relevant files and journal articles would be referenced.
# Double Rainbow
Can you see the double rainbow?
Ever noticed how the order of the colours changes for the second rainbow (red – yellow – green – blue – violet…)
Typically a rainbow forms when sunlight is refracted (bent) as it enters a raindrop, reflected once inside it, and refracted again on the way out.
A double rainbow appears when some of the light is reflected twice inside the raindrops before leaving; this secondary bow is fainter, has its colours reversed, and is most often seen when the sunlight is low in the sky. See here for more.
# Useful Resources
I will be adding resources here as I find them – mostly Maths and Physics themed.
## Online Mathematics Course
Loughborough’s Mathematics Education Centre runs a free, three-week MOOC – Getting a Grip on Mathematical Symbolism – designed for those students aspiring to become scientists or engineers but who lack mathematical confidence.
It will run again on the FutureLearn platform starting May 8th. Registration is open now:
https://www.futurelearn.com/courses/mathematical-symbolism
The course is designed for students who have some engineering or science knowledge gained through vocational qualifications or through workplace experience but who perhaps have not studied mathematics formally since leaving school. It will be appropriate for those who lack confidence but who need to establish a bedrock of knowledge in order to further their education.
This is a foundation, entry-level course and is not intended for those who already possess recent post-GCSE mathematics qualifications. It is highly recommended for those students going to university who have not studied maths beyond GCSE. Please share when appropriate.
Note that it is planned to run this course again shortly before the start of the new academic year in September.
Magnet Academy is an online resource provided by the National High Magnetic Field Laboratory — the largest, most high-powered magnet lab in the world. It has a wide selection of useful tutorials about electromagnetism for ages 5 upwards.
Interactive Magnetic Tutorials
# British Science Week
As part of British Science Week, Loughborough University hosts a ‘Community Day’ event where Loughborough locals are invited on campus to take part in various ‘science based’ activities.
This year it falls on 25th March, when I will be:
• Coordinating an Electrodough workshop – for which we’re looking for student ambassadors.
• Running a ‘Cold Science’ demonstration with liquid nitrogen.
• Working with the East Midlands Institute of Physics to deliver several ‘busking’ activities – for which I’m looking for student ambassadors.
If you’re interested in getting involved please let me know.
# Liquid nitrogen
Roughly the same cost (weight for weight) as a pint of milk, it’s a common feature in science fiction films: the nitrogen dewar in the background that might at some point be used to freeze that alien chasing you down the corridor…
But how much liquid nitrogen would it actually take to do this?
Hint: Assume the creature weighs about 50 kg and has a heat capacity of 2000 J/K/kg. Liquid nitrogen has a temperature of 77 K and a latent heat of 199 kJ/kg. For argument's sake, let's say the creature becomes vulnerable at 250 K…
Now let's add another complication: the Leidenfrost effect. As a coolant, the low boiling point of liquid nitrogen (77 K) typically means that it boils off so fast on contact with a much hotter object that a 'protective' layer of nitrogen vapour forms. This insulates the object from the cooling effects of the liquid nitrogen, for example preventing cold burns for anyone crazy enough to stick their hand in a bucket of liquid nitrogen for a second or two. CAUTION: This effect will not stop you from getting burnt as more nitrogen is added.
For more see Wikipedia entry for liquid nitrogen
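A rough answer to the hint above can be sketched numerically. This assumes the creature starts at a body temperature of about 310 K (not stated in the post) and that all of the cooling comes from the nitrogen's latent heat of vaporisation, ignoring Leidenfrost losses and the extra cooling from warming the cold gas:

```python
# Back-of-envelope estimate: liquid nitrogen needed to chill a 50 kg
# creature down to 250 K, assuming a starting body temperature of
# ~310 K (an assumption, not given in the post).

mass_creature = 50.0     # kg
heat_capacity = 2000.0   # J/K/kg
t_start = 310.0          # K (assumed body temperature)
t_vulnerable = 250.0     # K
latent_heat = 199e3      # J/kg for liquid nitrogen

heat_to_remove = mass_creature * heat_capacity * (t_start - t_vulnerable)
ln2_mass = heat_to_remove / latent_heat

print(f"Heat to remove: {heat_to_remove / 1e6:.1f} MJ")  # 6.0 MJ
print(f"Liquid nitrogen needed: {ln2_mass:.0f} kg")      # ~30 kg
```

So on these assumptions you would need roughly 30 kg of liquid nitrogen, and in practice more, since the Leidenfrost effect wastes much of the early cooling.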
# Fleming’s Left Hand Rule
Any charged particle moving through a magnetic field will experience a force that will cause it to move in a particular direction. An easy way to remember the direction of this force is Fleming’s Left Hand Rule (where the direction of current is the direction in which positive charges move).
Illustration of Fleming’s Left Hand Rule
So for the example above, a positive particle moving into a uniform magnetic field experiences a force that pushes it up away from the magnetic field. This “motion” of the charged particle is due to the magnetic field that the moving charge makes, interacting with the magnetic field it is moving through (just like two magnets can repel each other).
We can use this rule to figure out the direction in which the rotating arm of a motor will move.
Factoid:
This effect is used to define the SI unit of magnetic field – the tesla.
1 tesla = the value of magnetic field (B) that causes a force of 1 newton to act on a 1 metre length of conductor (e.g. copper) carrying a current of 1 ampere at right angles to the magnetic field.
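This definition can be spot-checked with the standard force-on-a-conductor formula, F = BIL sin θ; a minimal sketch:

```python
import math

def force_on_wire(B, I, L, theta_deg=90.0):
    """Force (N) on a straight conductor of length L (m) carrying
    current I (A) in a uniform field B (T), at angle theta to the field."""
    return B * I * L * math.sin(math.radians(theta_deg))

# The definition of the tesla: 1 T, 1 A, 1 m at right angles -> 1 N
print(force_on_wire(1.0, 1.0, 1.0))         # 1.0
# Tilting the wire to 30 degrees from the field halves the force
print(force_on_wire(1.0, 1.0, 1.0, 30.0))   # ~0.5
```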
# The invisible rod
Challenge: How can you make a quartz rod invisible with some water, sugar and a beaker?
$n_{1}\sin(\theta_{1})=n_{2}\sin(\theta_{2})$
So you might imagine that if we can change the rod, or the liquid itself, so that light entering from behind the beaker does not refract further on entering the quartz rod, we can effectively make the quartz rod invisible. To do this we want to match up the refractive indices ($n_{1}, n_2$). | 2019-11-14 19:18:22 | {"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 2, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.3148089647293091, "perplexity": 1201.2983454124635}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2019-47/segments/1573496668534.60/warc/CC-MAIN-20191114182304-20191114210304-00522.warc.gz"} |
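The index-matching trick can be sketched numerically with Snell's law. A minimal sketch, assuming a refractive index of about 1.46 for fused quartz and 1.33 for plain water (values not given in the post):

```python
import math

def refraction_angle(theta1_deg, n1, n2):
    """Snell's law: n1*sin(theta1) = n2*sin(theta2); returns theta2 in degrees."""
    return math.degrees(math.asin(n1 * math.sin(math.radians(theta1_deg)) / n2))

theta1 = 30.0
n_water = 1.33    # assumed index for plain water
n_quartz = 1.46   # assumed index for fused quartz

# Light passing from plain water into the rod still bends:
print(refraction_angle(theta1, n_water, n_quartz))   # ~27.1 degrees
# If dissolved sugar raises the liquid's index to match the rod,
# there is no bending at the surface and the rod "disappears":
print(refraction_angle(theta1, n_quartz, n_quartz))  # 30.0 degrees
```

Dissolving sugar in the water raises its refractive index towards that of quartz, which is why the demonstration works.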
https://physics.stackexchange.com/questions/129122/what-does-the-y-axis-represent-in-the-atomic-spectra-and-what-is-its-significa | # What does the $y$-axis represent in the atomic spectra and what is its significance?
The picture is an emission spectrum of helium. The spectrum has sharp lines (peaks) at certain wavelengths, characterizing it as helium. Agreed that it characterizes helium, as atomic spectral lines characterize gases and are based on the allowed energy states of electrons in an atom.
Well and good, but what I don't know is the $y$-axis: what does it represent? It is shown as intensity (counts), but I want to know what it is and how it varies from element to element. Does this count also characterize the elements, or is it the same for all elements? I'd appreciate it if you could throw some light on it.
Question: if I keep the positions of the lines (pulses) the same but change their relative amplitudes, what happens? Does it represent a different gas?
Picture taken from the internet.
• This kind of figure is often called a "histogram" because it shows how many hits registered in a counter (making a proper histogram in the statistical sense), but the word is often used for plots of other measures of intensity as well. – dmckee --- ex-moderator kitten Aug 2 '14 at 4:33
Here is the spectrum taken as a photo:
Note the difference in the visible intensity of the lines registered. In your spectrum instead of a film there is a counter which measures the number of hits at that wavelength from the excited helium.
The location of the excitations on the wavelength axis identifies the atom uniquely, like a fingerprint a person to the police. The intensity/counts is secondary to the identification, though it is characteristic it can depend on the intervening medium ( glass, air, space dust for astronomical observations).
• Do the relative amplitudes (not absolute) of the pulses in the spectrum depend on the medium? – Rajesh Dachiraju Aug 2 '14 at 4:09
• The detection can depend on the intervening medium: if it is more absorptive at some parts of the wavelength axis, then the detection will change the ratio of intensities from the original emissions – anna v Aug 2 '14 at 4:41
• Can we say that theoretically, in an ideal medium or vacuum, the counts/amplitude should be the same for all spectral peaks/lines? – Rajesh Dachiraju Aug 2 '14 at 5:20
• No. The strength of each line depends on the probability of that line being excited and the probability of the atom falling back and emitting a photon of a distinct wavelength. For a given atom the strength is calculable for each line from the solutions of the equations that describe the orbitals of the electrons around the helium nucleus. It is different for each line. Though it is characteristic, it is not as unaffected by the intervening medium and detector conditions as the location on the wavelength scale is. – anna v Aug 2 '14 at 5:32
• In the figure, there are a few small peaks which weren't marked with a wavelength, so assuming they aren't considered spectral lines, is there any specific threshold on the amplitude of a peak for it to be considered a spectral line? – Rajesh Dachiraju Aug 2 '14 at 6:00
It is what it claims to be: intensity of light at that wavelength.
However there are a lot of different ways to measure intensity. The "proper" way might be to measure the emitted power in watts into a particular wavelength bin, but that takes a lot of careful algebra and calibration to get right. If your light meter is a digitized CCD readout, however, it probably has an intensity measure for free that has been calibrated at the factory: each pixel on the CCD reports one number¹, where 0 means "no light struck this pixel" and some maximum number² means "this pixel received more light than it was able to record". If you only care about those numbers along one dimension, you can make a plot of them instead of a two-dimensional image.
To show you how straightforward it really is, here's the example spectrum from anna v's answer. I've taken a row of pixels near the middle of the image (row 60, actually, marked by the arrows) and plotted their red, green, and blue components on a graph. You can see that the blue lines have more blue than the other colors, that the red lines have more red than the other colors, and that the yellow line near pixel 160 is the brightest and is actually saturated (note the flat top and the artifact where the blue "shoulders" are dimmer than the background). You can imagine that I might get a more detailed spectrum graph if I just added up the numbers from all 120 rows of pixels in the image.
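The row-extraction step described here can be sketched as follows. Since the original photo isn't available, a tiny synthetic RGB "image" (a list of rows of (R, G, B) tuples) stands in for it:

```python
# Turn one row of pixels into a spectrum-style trace.
# Columns 1, 3, 4 mimic a blue line, a green line, and a saturated
# yellow line (bright in both red and green).
image = [
    [(0, 0, 0), (0, 0, 200), (0, 0, 0), (0, 180, 0), (255, 255, 0), (0, 0, 0)],
    [(0, 0, 0), (0, 0, 210), (0, 0, 0), (0, 190, 0), (255, 255, 0), (0, 0, 0)],
    [(0, 0, 0), (0, 0, 205), (0, 0, 0), (0, 185, 0), (255, 255, 0), (0, 0, 0)],
    [(0, 0, 0), (0, 0, 195), (0, 0, 0), (0, 175, 0), (255, 255, 0), (0, 0, 0)],
]

row = image[1]                      # pick one row of pixels (cf. row 60)
red   = [p[0] for p in row]
green = [p[1] for p in row]
blue  = [p[2] for p in row]

print(blue)   # the "blue line" shows up as a peak at pixel 1
print(red)    # the "yellow line" is bright in red AND green

# Summing over all rows gives a less noisy trace, as suggested above:
summed_green = [sum(image[r][c][1] for r in range(len(image)))
                for c in range(len(image[0]))]
print(summed_green)
```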
Other types of detectors may actually count photons one at a time, in which case an axis that's labeled "counts" may actually mean "we counted this many photons in this wavelength bin." You have to read carefully to find what authors mean sometimes.
¹ Actually a color CCD will report three numbers, one each for the red, blue, and green sensors nearest a given pixel. You can think of it as giving you three images with the same geometry.
² If the analog-to-digital converter has $n$ bits of precision, the maximum value is $2^n-1$. For 8-, 12-, and 16-bit ADCs you get numbers between zero and 255, 4095, and 65535 respectively.
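The footnote's arithmetic is easy to check, one line per bit depth:

```python
# Maximum code for an n-bit ADC is 2**n - 1:
for bits in (8, 12, 16):
    print(bits, (1 << bits) - 1)
# 8 -> 255, 12 -> 4095, 16 -> 65535
```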
As for the physics content of your question, each spectral line ideally has three parameters: a location, a height, and a width.
• The location of the lines tells you about the energy of the transition involved. Each species of atom has a particular set of energies that electrons are allowed to have, and therefore will have spectral lines whose wavelengths correspond to differences between these energies (as you've indicated that you already know).
• The intensity of each line (which is better represented by the area under each peak, rather than the height of the peak) tells you about how common a given transition is. Here is a tool that lets you examine and plot some solar spectral data; you'll see that the absorption lines for hydrogen and helium are very deep, while the absorption lines for other elements are much shallower. That tells you that most of the sun is made of hydrogen and helium.
• If you know that there is a relationship between some sets of spectral lines, you may be able to learn other information from their intensities. For instance the Lyman, Balmer, and Paschen series of hydrogen lines are due to light absorption by hydrogen atoms in their ground state, first excited state, and second excited state, respectively. If you find absorption lines in the Balmer or Paschen series, it means that the temperature of the hydrogen gas is so hot that some of the gas is getting excited from its ground state, then getting excited at least a second time before it has a chance to cool back down to the ground state. By (carefully) comparing the relative intensities of these related line series, you may be able to determine the temperature of the gas. This is how we know, for instance, that parts of the sun's corona are hotter than its photosphere.
• Finally each spectral "line" actually has some finite width. Part of that width is always intrinsic to the resolution of the spectrometer, but part is also due to thermal motion of the emitters and absorbers.
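The point that intensity is better measured by peak area than by peak height can be illustrated numerically. A minimal sketch with made-up Gaussian line shapes and a simple trapezoidal integrator (all values here are illustrative assumptions):

```python
import math

def gaussian(x, height, center, width):
    """Gaussian line profile with the given peak height and width (sigma)."""
    return height * math.exp(-((x - center) / width) ** 2 / 2)

def area(f, lo, hi, n=2000):
    """Simple trapezoidal integration of f over [lo, hi]."""
    h = (hi - lo) / n
    ys = [f(lo + i * h) for i in range(n + 1)]
    return h * (sum(ys) - 0.5 * (ys[0] + ys[-1]))

# Two lines with the SAME peak height but different widths:
narrow = lambda x: gaussian(x, height=1.0, center=0.0, width=0.5)
broad  = lambda x: gaussian(x, height=1.0, center=0.0, width=2.0)

print(area(narrow, -10, 10))  # ~1.25 (height * width * sqrt(2*pi))
print(area(broad,  -10, 10))  # ~5.01, i.e. four times the intensity
```

Both peaks would look equally "tall" on a plot, yet the broader line carries four times the intensity, which is why area is the better measure.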
• "Actually a color CCD will report three numbers" — there is no such thing as a colour CCD. Rather to determine colour with CCDs, you put a colour filter in front of it. For a spectral analyser you wouldn't want to do that because the position already tells you everything about the frequency, so any filter would not add any information. – celtschk Aug 2 '14 at 18:50
• @celtschk Since my example spectrum was taken from a color photo I thought it was worthwhile to mention what was happening there. Excellent point, though. – rob Aug 2 '14 at 19:58 | 2020-02-28 12:46:06 | {"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.6734282970428467, "perplexity": 552.4766070077943}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2020-10/segments/1581875147154.70/warc/CC-MAIN-20200228104413-20200228134413-00443.warc.gz"} |
https://swmath.org/?term=codimension | • # HomCont
• Referenced in 250 articles [sw14927]
• University). Specifically, HomCont deals with continuation of codimension-one heteroclinic and homoclinic orbits to hyperbolic ... node equilibria, including the detection of many codimension-two singularities and the continuation of these...
• # SlideCont
• Referenced in 47 articles [sw00877]
• software allows for detection and continuation of codimension-1 sliding bifurcations as well as detection ... some codimension-2 singularities, with special attention to planar systems ($n=2$). Some bifurcations...
• # LOCBIF
• Referenced in 65 articles [sw07928]
• mutual relationships. The approach is extended to codimension three singularities. We introduce several bifurcation functions...
• # CONTENT
• Referenced in 37 articles [sw01058]
• Neimark - Sacker bifurcations; they are called codimension one phenomena because they generically appear in problems ... that allows to detect and compute all codimension two points on such curves, including strong...
• # Smoothtst
• Referenced in 6 articles [sw23173]
• algebraic varieties: A smoothness test for higher codimensions. Based on an idea in Hironaka ... does not involve the calculation of codimension-sized minors of the Jacobian matrix... | 2022-08-12 23:34:20 | {"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.24656878411769867, "perplexity": 4796.962593225019}, "config": {"markdown_headings": true, "markdown_code": false, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2022-33/segments/1659882571847.45/warc/CC-MAIN-20220812230927-20220813020927-00203.warc.gz"} |
https://www.toppr.com/guides/fundamentals-of-accounting/depreciation-cma/ | # Depreciation
Perhaps one of the most common accounting concepts, Depreciation is a topic that requires in-depth and conceptual study. In order to gain a fundamental understanding of the subject, it is very important to understand the basics of this chapter. Before learning any definitions and formulas in this chapter, one needs to understand the causes of depreciation.
No thanks. | 2022-08-20 02:21:55 | {"extraction_info": {"found_math": false, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.8930242657661438, "perplexity": 483.0623548805454}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.3, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2022-33/segments/1659882573876.92/warc/CC-MAIN-20220820012448-20220820042448-00097.warc.gz"} |
https://www.toppr.com/guides/maths-formulas/exponentiation-formula/ | # Exponentiation Formula
Exponential functions and the exponentiation formulas are widely used in mathematics for doing complex computations with large numbers. An exponential function also represents the fast growth of a dependent variable with respect to an independent variable. Generally, the exponential function represents a high growth rate. Such functions are very useful and important in physics as well as in chemistry. It can also be seen that as the exponent increases, the curves get steeper and the rate of growth increases accordingly. In this article, we will discuss the exponentiation formula with examples. Let us begin learning!
## Exponentiation Formula
### What is the Exponentiation Function?
As the name suggests, an exponential function involves an exponent. This exponent is represented using a variable rather than a constant. On the other hand, its base is represented by a constant value rather than a variable.
Let $$f(x)=ab^x$$
This is an exponential function where "b" is a constant base and the exponent "x" is the independent variable, i.e. the input of the function. The coefficient "a" is called the initial value of the function, and f(x) represents the dependent variable, i.e. the output of the function. Thus for b > 1, the value of f(x) will always increase for increasing values of x.
The exponential properties can be used to solve equations with exponential functions. Exponential functions defined by an equation of the form above are called exponential decay functions if the base b satisfies the inequality 0 < b < 1.
The good thing about exponential functions is that they are very useful in real-world situations. Exponential functions are used to model the growth of populations, carbon-date artifacts, help coroners determine the time of death, compute investments and compound interest, and model the decay rate of radioactive elements, as well as many other applications.
### Some Basic Exponential Formula:
In mathematics, many useful formulas are available for exponential functions. We can use these directly in various equations to get the values of unknown variables. Some of these formulas are given below:
Here we assume that x and y are variables and a, b, m, n are constants.
(1) $$x^a\times x^b = x^{a+b}$$ this is adding the exponents.
(2) $$\frac{x^a}{x^b} = x^{a-b}$$ this is subtracting the exponents.
(3) $$(x^a)^b = x^{ab}$$ this is taking an exponent of an exponent.
(4) $$(xy)^a = x^a\times y^a$$ this is expanding exponents of products.
(5) $$x^0 = 1$$ this is the value for a zero exponent.
(6) $$x^1 = x$$ this is the value for a unit exponent.
(7) $$x^{-n} = \frac{1}{x^n}$$ this is the value of a negative exponent.
(8) $$x^{\frac{m}{n}} = \sqrt[n]{x^m}$$ this is the value of a fractional exponent.
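These rules can be spot-checked numerically for sample values:

```python
# Quick numerical check of the exponent rules above.
x, y = 2.0, 3.0
a, b = 5, 3
m, n = 3, 2

assert x**a * x**b == x**(a + b)           # adding exponents
assert x**a / x**b == x**(a - b)           # subtracting exponents
assert (x**a)**b == x**(a * b)             # exponent of an exponent
assert (x * y)**a == x**a * y**a           # exponents of products
assert x**0 == 1 and x**1 == x             # zero and unit exponents
assert x**(-n) == 1 / x**n                 # negative exponent
assert abs(x**(m / n) - (x**m)**(1 / n)) < 1e-12   # fractional exponent
print("all exponent rules check out")
```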
## Solved Examples
Q.1: Find the value of x, given $$4^{4x-5}= 16^2$$
Solution:
Given: $$4^{4x-5}= 16^2$$
We can express 16 as a power of 4,
i.e. $$4^{4x-5}= (4^2)^2$$
i.e. $$4^{4x-5} = 4^4$$
Now, comparing the exponents on both sides of the above equation:
4x-5 = 4
4x = 4+5
4x = 9
x = $$\frac{9}{4}$$
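The same answer can be checked numerically by taking the logarithm base 4 of both sides:

```python
import math

# Solve 4**(4x - 5) = 16**2 by taking log base 4 of both sides.
rhs = 16**2
exponent = math.log(rhs, 4)    # this equals 4x - 5
x = (exponent + 5) / 4

print(x)                       # 2.25, i.e. 9/4
assert abs(4**(4*x - 5) - rhs) < 1e-9
```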
Q.2: Given the function $$f(x)=4^x$$, evaluate each of the following.
(i) f(-2)
(ii) f(1)
(iii) f(0)
Solution:
(i) $$f(x) = 4^x$$
So, put x=-2 we get
$$f(-2)=4^{-2}$$
i.e. $$f(-2) = \frac{1}{4^2}$$
i.e. $$f(-2) = \frac{1}{16}$$
(ii) $$f(x) = 4^x$$
Put x = 1
$$f(1) = 4^1$$
i.e. $$f(1) = 4$$
(iii) $$f(x) = 4^x$$
put x=0
we get $$f(0)= 4^0$$
i.e. f(0) = 1
Have a doubt at 3 am? Our experts are available 24x7. Connect with a tutor instantly and get your concepts cleared in less than 3 steps. | 2020-11-29 20:02:14 | {"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 1, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.7227264046669006, "perplexity": 1166.896974707983}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2020-50/segments/1606141202590.44/warc/CC-MAIN-20201129184455-20201129214455-00151.warc.gz"} |
https://www.towerofbable.org/B/user/kimihiko-motegi/post/69693/count/0/ | ## User: kimihiko-motegi
### Title: Seifert fibered surgeries which do not arise from primitive/Seifert-fibered constructions
We construct two infinite families of knots each of which admits a Seifert
fibered surgery with none of these surgeries coming from Dean's
primitive/Seifert-fibered construction. This disproves a conjecture that all
Seifert fibered surgeries arise from Dean's primitive/Seifert-fibered
construction. The (-3,3,5)-pretzel knot belongs to both of the infinite
families.
ID: 69693; Unique Viewers: 0
Unique Voters: 0
Latest Change: Nov. 19, 2020, 12:40 p.m.
### Posts:
Total post views: 148872
Zonal flows have been observed to appear spontaneously from turbulence in a number of physical settings. A complete theory for their behavior i…
Words:
Views: 0
Latest: Nov. 19, 2020, 4:31 p.m.
A new approach based on the domain wall displacement in confined ferromagnetic nanostructures for attracting and sensing a single nanometric ma…
Words:
Views: 0
Latest: Nov. 19, 2020, 4:47 p.m.
We generalize the technique of [Solving Dirichlet boundary-value problems on curved domains by extensions from subdomains, SIAM J. Sci. Comput.…
Words:
Views: 0
Latest: Nov. 19, 2020, 4:08 p.m.
Go gaming is a struggle between adversaries, black and white simple stones, and aim to control the most Go board territory for success. Rules a…
Words:
Views: 0
Latest: Nov. 19, 2020, 4:30 p.m.
We present a system for performing visual search over billions of aerial and satellite images. The purpose of visual search is to find images t…
Words:
Views: 0
Latest: Nov. 19, 2020, 4:44 p.m.
Hydrogen bonding plays a role in the microphase separation behavior of many block copolymers, such as those used in lithography, where the stro…
Words:
Views: 0
Latest: Nov. 19, 2020, 4:47 p.m.
We describe the properties of a system of red arcs discovered at z=4.04 around the cluster A2390 ($z=0.23$). These arcs are images of a single …
Words:
Views: 0
Latest: Nov. 19, 2020, 4:02 p.m.
One of the most crucial tasks in seismic reflection imaging is to identify the salt bodies with high precision. Traditionally, this is accompli…
Words:
Views: 0
Latest: Nov. 19, 2020, 4:06 p.m.
Context: Supernova 1987A (SN1987A) exploded in the Large Magellanic Cloud (LMC). Its proximity and rapid evolution makes it a unique case study…
Words:
Views: 0
Latest: Nov. 19, 2020, 4:19 p.m.
The widespread popularity of Pokémon GO presents the first opportunity to observe the geographic effects of location-based gaming at scal…
Words:
Views: 0
Latest: Nov. 19, 2020, 4:31 p.m.
Rearrangement effects in light hypernuclei are investigated in the framework of the Brueckner theory. We can estimate without detailed numerica…
Words:
Views: 0
Latest: Nov. 19, 2020, 4:42 p.m.
Graph embedding methods produce unsupervised node features from graphs that can then be used for a variety of machine learning tasks. Modern gr…
Words:
Views: 0
Latest: Nov. 19, 2020, 4:44 p.m.
Words:
Views: 0
Latest: Nov. 19, 2020, 4:45 p.m.
Pyrene-functional PMMAs were prepared via ATRP-controlled polymerization and click reaction, as efficient dispersing agents for the exfoliation…
Words:
Views: 0
Latest: Nov. 19, 2020, 4:47 p.m.
Extending a computation which appeared recently in hep-th/0301173, we compute the transmission and reflection coefficients for massless uncharg…
Words:
Views: 0
Latest: Nov. 19, 2020, 4:02 p.m.
In dynamical mean-field theory, the correlations between electrons are assumed to be purely local. The dual fermion approach provides a systema…
Words:
Views: 0
Latest: Nov. 19, 2020, 4:17 p.m.
Contrary to existing theoretical models, experimental evidence points out that electroporation (membrane defect formation under external electr…
Words:
Views: 0
Latest: Nov. 19, 2020, 4:03 p.m.
Temporal segmentation of untrimmed videos and photo-streams is currently an active area of research in computer vision and image processing. Th…
Words:
Views: 0
Latest: Nov. 19, 2020, 4:06 p.m.
The estimation of the decay rate of a signal section is an integral component of both blind and non-blind reverberation time estimation methods…
Words:
Views: 0
Latest: Nov. 19, 2020, 4:14 p.m.
We introduce a near-IR monitoring campaign of the Local Group spiral galaxy M33, carried out with the UK IR Telescope (UKIRT). The pulsating gi…
Words:
Views: 0
Latest: Nov. 19, 2020, 4:17 p.m.
We present the preparation for an observation of the Sunyaev-Zeldovich (SZ) effect with the 100m telescope in Effelsberg. We calculate the expe…
Words:
Views: 0
Latest: Nov. 19, 2020, 4:24 p.m.
Using the Infrared Spectrograph aboard the Spitzer Space Telescope, we observed multiple epochs of 11 actively accreting T Tauri stars in the n…
Words:
Views: 0
Latest: Nov. 19, 2020, 4:26 p.m.
This is a thought piece on data-intensive science requirements for databases and science centers. It argues that peta-scale datasets will be ho…
Words:
Views: 0
Latest: Nov. 19, 2020, 4:40 p.m.
This article studies the rearrangement problem for Fourier series introduced by P.L. Ulyanov, who conjectured that every continuous function on…
Words:
Views: 0
Latest: Nov. 19, 2020, 4:41 p.m.
Applying the known physics of plasmas, the 40 plus year old "Strong" Magnetic Field (SMF) model has been extended from explaining the…
Words:
Views: 0
Latest: Nov. 19, 2020, 4:43 p.m.
Massive disk galaxies like the Milky Way are expected to form at late times in traditional models of galaxy formation, but recent numerical sim…
Words:
Views: 0
Latest: Nov. 19, 2020, 4:44 p.m.
The field of astrobiology has made tremendous progress in modelling galactic-scale habitable zones which offer a stable environment for life to…
Words:
Views: 0
Latest: Nov. 19, 2020, 4:44 p.m.
Big graphs (networks) arising in numerous application areas pose significant challenges for graph analysts as these graphs grow to billions of …
Words:
Views: 0
Latest: Nov. 19, 2020, 4:45 p.m.
Quasars are associated with and powered by the accretion of material onto massive black holes; the detection of highly luminous quasars with re…
Words:
Views: 0
Latest: Nov. 19, 2020, 4:45 p.m.
Exfoliation of lamellar materials into their corresponding layers represented a breakthrough, due to the outstanding properties arising from th…
Words:
Views: 0
Latest: Nov. 19, 2020, 4:46 p.m.
The primary challenge of GOLF-NG (Global Oscillations at Low Frequency New Generation) is the detection of the low-frequency solar gravity and …
Words:
Views: 0
Latest: Nov. 19, 2020, 4 p.m.
The frequency, $\nu_{\rm max}$, at which the envelope of pulsation power peaks for solar-like oscillators is an important quantity in asterosei…
Words:
Views: 0
Latest: Nov. 19, 2020, 4 p.m.
In this work, we investigate modular Hamiltonians defined with respect to arbitrary spatial regions in quantum field theory states which have s…
Words:
Views: 0
Latest: Nov. 19, 2020, 4:02 p.m.
Let $R$ be any associative ring with $1$, $n\ge 3$, and let $A,B$ be two-sided ideals of $R$. In our previous joint works with Roozbeh Hazrat […
Words:
Views: 0
Latest: Nov. 19, 2020, 4:02 p.m.
In the context of a parametric theory (with the time being a dynamical variable) we consider the coupling between the quantum vacuum and the ba…
Words:
Views: 0
Latest: Nov. 19, 2020, 4:03 p.m.
The purpose of this note is to characterize the finite Hilbert functions which force all of their artinian algebras to enjoy the Weak Lefschetz…
Words:
Views: 0
Latest: Nov. 19, 2020, 4:03 p.m.
The past decade has witnessed the development and success of coarse-grained network models of proteins for predicting many equilibrium properti…
Words:
Views: 0
Latest: Nov. 19, 2020, 4:03 p.m.
Semantic segmentation is the task of assigning a label to each pixel in the image.In recent years, deep convolutional neural networks have been…
Words:
Views: 0
Latest: Nov. 19, 2020, 4:05 p.m.
We propose the analog-digital quantum simulation of the quantum Rabi and Dicke models using circuit quantum electrodynamics (QED). We find that…
Words:
Views: 0
Latest: Nov. 19, 2020, 4:10 p.m.
A pair of symmetric expressions for the second law of thermodynamics is put forward. The conservation and transfer of entropy is discussed and …
Words:
Views: 0
Latest: Nov. 23, 2020, 12:48 p.m.
In a recent work, we numerically studied the radiative properties of the reverberation phase of pulsar wind nebulae (PWNe), i.e., when the reve…
Words:
Views: 0
Latest: Nov. 19, 2020, 4:14 p.m.
We report the discovery of multiple shells around the eruptive variable star V838 Mon. Two dust shells are seen in IRAS and MSX images, which t…
Words:
Views: 0
Latest: Nov. 19, 2020, 4:17 p.m.
We provide a polynomial time reduction from Bayesian incentive compatible mechanism design to Bayesian algorithm design for welfare maximizatio…
Words:
Views: 0
Latest: Nov. 19, 2020, 4:21 p.m.
xtit{Propus} (which means twins) is a construction method for orthogonal $\pm 1$ matrices based on a variation of the Williamson array called t…
Words:
Views: 0
Latest: Nov. 19, 2020, 4:24 p.m.
Forced evaporative cooling in a far-off-resonance optical dipole trap is proved to be an efficient method to produce fermionic- or bosonic-dege…
Words:
Views: 0
Latest: Nov. 19, 2020, 4:25 p.m.
Knorringite, the Cr-end-member of the pyrope garnet series (Nixon et al. 1968), often occur in high proportions in kimberlite garnets and is th…
Words:
Views: 0
Latest: Nov. 19, 2020, 4:26 p.m.
After a brief discussion about the necessity of using the 3D approach, we present the non PW formalism for 3N bound state with the inclusion of…
Words:
Views: 0
Latest: Nov. 19, 2020, 4:38 p.m.
This paper considers the issue of Bose-Einstein condensation in a weakly interacting Bose gas with a fixed total number of particles. We use an…
Words:
Views: 0
Latest: Nov. 19, 2020, 4:40 p.m.
A rearrangement of $n$ independent uniform $[0,1]$ random variables is a sequence of $n$ random variables $Y_1,...,Y_n$ whose vector of order s…
Words:
Views: 0
Latest: Nov. 19, 2020, 4:40 p.m.
This paper investigates the rearrangement problem whose objective is to maximize the bilinear form $\boldsymbol x^T H\boldsymbol y$ associated …
Words:
Views: 0
Latest: Nov. 19, 2020, 4:42 p.m.
Rearrangement-invariance in function spaces can be viewed as a kind of generalization of 1-symmetry for Schauder bases. We define subrearrangem…
Words:
Views: 0
Latest: Nov. 19, 2020, 4:42 p.m.
In this paper non-asymptotic exact rearrangement invariant norm estimates are derived for the maximum distribution of the family elements of so…
Words:
Views: 0
Latest: Nov. 19, 2020, 4:43 p.m.
Observations of distant bright quasars suggest that billion solar mass supermassive black holes (SMBHs) were already in place less than a billi…
Words:
Views: 0
Latest: Nov. 19, 2020, 4:44 p.m.
We show that margin-based bitext mining in a multilingual sentence space can be applied to monolingual corpora of billions of sentences. We are…
Words:
Views: 0
Latest: Nov. 19, 2020, 4:44 p.m.
Zuckerli is a scalable compression system meant for large real-world graphs. Graphs are notoriously challenging structures to store efficiently…
Words:
Views: 0
Latest: Nov. 19, 2020, 4:44 p.m.
We present evidence of the feasibility of using billion core approximate computers to run simple U(1) sigma models, and discuss how the approac…
Words:
Views: 0
Latest: Nov. 19, 2020, 4:44 p.m.
The Milky Way galaxy is observed to have multiple components with distinct properties, such as the bulge, disk, and halo. Unraveling the assemb…
Words:
Views: 0
Latest: Nov. 19, 2020, 4:45 p.m.
So far, roughly 40 quasars with redshifts greater than z=6 have been discovered. Each quasar contains a black hole with a mass of about one bil…
Words:
Views: 0
Latest: Nov. 19, 2020, 4:45 p.m.
Samples of third-generation cylindrical dendrimers with molar masses ranging in the interval 20000...60000 have been studied by the methods of …
Words:
Views: 0
Latest: Nov. 19, 2020, 4:46 p.m.
Observational work conducted over the last few decades indicates that all massive galaxies have supermassive black holes at their centres. Alth…
Words:
Views: 0
Latest: Nov. 19, 2020, 4:45 p.m.
Two-dimensional 1H-15N HMBC NMR spectra of a well-known anticonvulsant-carbamazepine-dissolved in chloroform, recorded on an NMR spectrometer a…
Words:
Views: 0
Latest: Nov. 19, 2020, 4:46 p.m.
The study of the G-mode pressure coefficients of carbon nanotubes, reflecting the stiff sp2 bond pressure dependence, is essential to the under…
Words:
Views: 0
Latest: Nov. 19, 2020, 4:46 p.m.
Helioseismology can provide strong constraints on the evolution of Newton's constant over cosmic time. We make use of the best possible est…
Words:
Views: 0
Latest: Nov. 19, 2020, 4 p.m.
A general theory of the beam interaction with small discontinuities of the vacuum chamber is developed taking into account the reaction of radi…
Words:
Views: 0
Latest: Nov. 23, 2020, 12:07 p.m.
A permanent electric dipole moment of fundamental spin-1/2 particles violates both parity (P) and time re- versal (T) symmetry, and hence, also…
Words:
Views: 0
Latest: Nov. 19, 2020, 4 p.m.
We examine the frequency shifts in low-degree helioseismic modes from the Birmingham Solar-Oscillations Network (BiSON) covering the period fro…
Words:
Views: 0
Latest: Nov. 19, 2020, 4 p.m.
It is well known that (in suitable codimension) the spaces of long knots in $\mathbb{R}^n$ modulo immersions are double loop spaces. Hence the …
Words:
Views: 0
Latest: Nov. 19, 2020, 4:02 p.m.
A new solution of Einstein's vacuum field equations is discovered which appears as a generalization of the well-known Ozsvath-Schucking sol…
Words:
Views: 0
Latest: Nov. 19, 2020, 4:02 p.m.
Abstract axiomatic formulation of mathematical structures are extensively used to describe our physical world. We take here the reverse way. By…
Words:
Views: 0
Latest: Nov. 19, 2020, 4:02 p.m.
We study the minimax optimal rates for estimating a range of Integral Probability Metrics (IPMs) between two unknown probability measures, base…
Words:
Views: 0
Latest: Nov. 19, 2020, 4:02 p.m.
The response of a pair of differently polarized antennas is determined by their polarization states AND a phase between them which has a geomet…
Words:
Views: 0
Latest: Nov. 19, 2020, 4:02 p.m.
We present the detections of CO line emission in the central galaxy of sixteen extreme cooling flow clusters using the IRAM 30m and the JCMT 15…
Words:
Views: 0
Latest: Nov. 19, 2020, 4:02 p.m.
As the excitement surrounding the heavy top quark discovery subsides, while the expectation for LEP II physics gathers, it is a good time to si…
Words:
Views: 0
Latest: Nov. 19, 2020, 4:03 p.m.
It is currently accepted that Hot-Bottom-Burning (HBB) in intermediate-mass asymptotic giant branch (AGB) stars prevents the formation of C~sta…
Words:
Views: 0
Latest: Nov. 19, 2020, 4:19 p.m.
We investigate the von Neumann entropy of a block of subsystem for the valence-bond solid (VBS) state with general open boundary conditions. We…
Words:
Views: 0
Latest: Nov. 19, 2020, 4:03 p.m.
Aims: Our aim was to measure and characterize the short-wavelength radio emission from young stellar objects (YSOs) in the Orion Nebula Cluster…
Words:
Views: 0
Latest: Nov. 19, 2020, 4:03 p.m.
In the studies of the squeezing it is customary to focus more attention on the particular squeezed states and their evolution than on the dynam…
Words:
Views: 0
Latest: Nov. 19, 2020, 4:04 p.m.
The concept of a \emph{weak value} of a quantum observable was developed in the late 1980s by Aharonov and colleagues to characterize the value…
Words:
Views: 0
Latest: Nov. 19, 2020, 4:04 p.m.
We study the modification of the atomic spontaneous emission rate, i.e. Purcell effect, of $^{87}$Rb in the vicinity of an optical nanofiber ($…
Words:
Votes:
Views: 0
Latest: Nov. 19, 2020, 4:09 p.m.
We present the experimental observation of the symmetric four-photon entangled Dicke state with two excitations $|D_{4}^{(2)}>$. A simple ex…
Words:
Votes:
Views: 0
Latest: Nov. 19, 2020, 4:09 p.m.
Linear parameter-varying (LPV) systems with uncertainty in time-varying delays are subject to performance degradation and instability. In this …
Words:
Votes:
Views: 0
Latest: Nov. 19, 2020, 4:10 p.m.
We propose a framework to integrate the concept of Theory of Mind (ToM) into generating utterances for task-oriented dialogue. Our approach exp…
Words:
Votes:
Views: 0
Latest: Nov. 19, 2020, 4:12 p.m.
Service Oriented Architectures (SOAs) are component-based architectures, characterized by reusability, modularization and composition, usually …
Words:
Votes:
Views: 0
Latest: Nov. 19, 2020, 4:13 p.m.
The size and structure of the dusty circumnuclear torus in active galactic nuclei (AGN) can be investigated by analyzing the temporal response …
Words:
Votes:
Views: 0
Latest: Nov. 19, 2020, 4:14 p.m.
This paper introduces a new method for multi-channel time domain speech separation in reverberant environments. A fully-convolutional neural ne…
Words:
Votes:
Views: 0
Latest: Nov. 19, 2020, 4:15 p.m.
We discuss the importance for the long-term cluster evolution of the mass loss from intermediate-mass stars (0.8-8 Msun). We present constraint…
Words:
Votes:
Views: 0
Latest: Nov. 19, 2020, 4:17 p.m.
We demonstrate optomechanical quantum control of the internal electronic states of a diamond nitrogen vacancy (NV) center in the resolved-sideb…
Words:
Votes:
Views: 0
Latest: Nov. 19, 2020, 4:20 p.m.
Aggressive incentive schemes that allow individuals to impose economic punishment on themselves if they fail to meet health goals present a pro…
Words:
Votes:
Views: 0
Latest: Nov. 19, 2020, 4:21 p.m.
Bitcoin-NG is among the first blockchain protocols to approach the \emph{near-optimal} throughput by decoupling blockchain operation into two p…
Words:
Votes:
Views: 0
Latest: Nov. 19, 2020, 4:21 p.m.
Online labor platforms, such as the Amazon Mechanical Turk, provide an effective framework for eliciting responses to judgment tasks. Previous …
Words:
Votes:
Views: 0
Latest: Nov. 19, 2020, 4:21 p.m.
Warnings have been raised about the steady diminution of privacy. More and more personal information, such as that contained electronic mail, i…
Words:
Votes:
Views: 0
Latest: Nov. 19, 2020, 4:25 p.m.
We report on a 60 degree-long stream of stars, extending from Ursa Major to Sextans, in the Sloan Digital Sky Survey. The stream is approximate…
Words:
Votes:
Views: 0
Latest: Nov. 19, 2020, 4:25 p.m.
We analytically investigate the effective-diffusivity tensor of a tracer particle in a fluid flow endowed with a short correlation time. By mea…
Words:
Votes:
Views: 0
Latest: Nov. 19, 2020, 4:26 p.m.
Paraexcitons, the lowest energy exciton states in Cu$_{2}$O, have been considered a good system for realizing exciton Bose-Einstein condensatio…
Words:
Views: 0
Latest: Nov. 19, 2020, 4:26 p.m.
In their textbook, Suzuki and Varga [Y. Suzuki and K. Varga, {\em Stochastic Variational Approach to Quantum-Mechanical Few-Body Problems} (Spr…
Words:
Views: 0
Latest: Nov. 19, 2020, 4:38 p.m.
We derive the asymptotic behavior of determinants of truncated Wiener-Hopf operators generated by symbols having Fisher-Hartwig singularities. …
Words:
Views: 0
Latest: Nov. 19, 2020, 4:38 p.m.
Using the techniques of dimensional deconstruction, we present 4D models that fully reproduce the physics of 5D supersymmetric theories compact…
Words:
Views: 0
Latest: Nov. 19, 2020, 4:39 p.m.
Due to the good performance of current SAT (satisfiability) and Max-SAT (maximum ssatisfiability) solvers, many real-life optimization problems…
Words: | 2021-12-08 00:28:47 | {"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.47301924228668213, "perplexity": 4093.4770204002857}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2021-49/segments/1637964363420.81/warc/CC-MAIN-20211207232140-20211208022140-00099.warc.gz"} |
https://codegolf.stackexchange.com/questions/42109/mutual-negative-quines?noredirect=1

# Mutual Negative Quines
This was inspired by Print a Negative of your Code and Golf a mutual quine.
Consider a rectangle of characters, that meet the following restrictions:
1. Consists solely of printable ASCII characters
2. Dimensions both greater than 1
3. Each row and each column contains at least one space.
4. Each row and each column contains at least one non-space character.
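These restrictions are easy to check mechanically. A small sketch in Python, assuming the rectangle is given as a list of equal-length strings (the function name is illustrative, not part of the challenge):

```python
def is_valid_rectangle(rows):
    """Check the four restrictions on a character rectangle."""
    height = len(rows)
    width = len(rows[0]) if rows else 0
    if height < 2 or width < 2:                        # dimensions both > 1
        return False
    if any(len(r) != width for r in rows):             # must be rectangular
        return False
    cols = ["".join(r[i] for r in rows) for i in range(width)]
    for line in list(rows) + cols:
        if not all(32 <= ord(c) < 127 for c in line):  # printable ASCII only
            return False
        if " " not in line or line.strip() == "":      # needs a space AND a non-space
            return False
    return True
```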
For example, the following is a valid 6x4 rectangle:
%n 2e
1 g 3
&* __
3
A negative for this rectangle is defined to be a rectangle of equal dimensions, with all spaces replaced by non-space characters, and all non-space characters replaced by spaces. A negative of the above rectangle could be:
f ^
33 >
9 $^
}|Q'

Any non-space printable ASCII character may be used to replace a space.

## Task

Your task is to write a program with rectangular source code, that outputs a valid negative to itself. The negative outputted must also be a valid program, in the same language as the original, and it must output the source of the original. No trailing whitespace may be added or removed, except for a single trailing newline at the end of either output, which is optional. Neither program is permitted to read the source code of either; nor may REPL environments be assumed.

## Scoring

Your score is the product of the dimensions of your code (i.e. if your source code is in a 12 by 15 rectangle, your score is 12*15=180). Additionally, for each character used in a comment, your score increases by 2 (if you use /* .. */ once in your code, and your code is in a 10 by 10 rectangle, your score would be 10*10+8*2=116).

The lowest score wins. If there is a tie, the submission with the least number of spaces in the program (either the original or the negative, whichever has fewer spaces) wins. If there still remains a tie, the earlier answer shall win.

There is a bonus of -52%, if combining the original and the negative together produces a normal quine. For example:

Original    Negative    Combined
 A A        B B         BABA
A A          B B        ABAB

• @Optimizer That's the reason I didn't make the bonus mandatory. Dec 2 '14 at 6:30
• I am talking about just the negative mutual quine part ;) Dec 2 '14 at 6:31
• @MartinBüttner Ah, my bad. I was thinking in weird terms. Dec 3 '14 at 12:42
• Can anyone do this in c? +1 to whoever will first! Dec 5 '14 at 22:26

## 6 Answers

## Python, 97x2 + 2 = 196

Not a great solution to start off, but at least it works (I think).
c='o=1-%d;print("%%97s\\n%%97s"%%("#","c=%%r;exec(c%%%%%%d)\\40"%%(c,o),"#")[o:][:2])';exec(c%1)
#

Output:

#
c='o=1-%d;print("%%97s\\n%%97s"%%("#","c=%%r;exec(c%%%%%%d)\\40"%%(c,o),"#")[o:][:2])';exec(c%0)

• +1 for the only submission so far to use a real language Dec 3 '14 at 19:13
• It doesn't seem to be too far off from the bonus either. May 8 '15 at 14:24

# CJam, (~~58~~ ~~56~~ ~~54~~ ~~48~~ 46 x 2) * 48% = 44.16

{"_~"+{_,94\m2/S*a_+\*
N/23f/Wf%N*}_'"#)!*}_~

which prints

{"_~"+{_,94\m2/S*a_+\*
N/23f/Wf%N*}_'"#)!*}_~

The non-space characters in each line remain the same between the two mutual quines. But now the really sweet part:

{"_~"+{_,94\m2/S*a_+\*{"_~"+{_,94\m2/S*a_+\* N/23f/Wf%N*}_'"#)!*}_~N/23f/Wf%N*}_'"#)!*}_~

is a quine! :)

Test it here.

## How it works

I recommend you read the explanation on my other submission first, as it explains the basics of quining in CJam in general. This one is a bit trickier.

For the mutual quine, as in the other case, I modify the string representation of the block by adding spaces before or after each line, and swapping a 0 with a 2, so that the resulting program puts the spaces at the opposite end. Note that the spaces don't affect the mutual quines at all. In the first one, they are in a block, which isn't really used, and in the second they are around the entire code.

To obtain a regular quine when combining both, we need to find a way to avoid doing all that modification. Notice that the structure of the whitespace and code means that by combining both, we insert the entirety of one quine into the other. So if we put the entire modification code in a block, we can run that block depending on its actual contents.

So now I've got this block... for the mutual quines, it only contains the code I actually want to run. For the combined quine, it also contains the entire quine again, in a random position, which doesn't make any sense... but since it's a block, it's not run automatically.
So we can determine whether to modify the string based on the contents of that block. That's what _'"#)! is for. It duplicates the block, converts it to a string, and searches for the character " (which, in the mutual quines, only appears outside the block); the search returns -1 if the character isn't found and a positive integer otherwise. Then it increments the result and negates it logically. So if a " was found, this yields 0, otherwise it yields 1. Now we just do *, which executes the block once if the result was 1, and not at all otherwise.

Finally, this is how the modifying code works:

_,94\m2/S*a_+\*N/23f/Wf%N*

_,   "Duplicate the quine string and get its length.";
94\m "Subtract from 94.";
2/   "Divide by two.";
S*   "Create a string with that many spaces. This will be an empty string for the
      first mutual quine, and contain 23 spaces for the second mutual quine.";
a_+  "Create an array that contains this string twice.";
\*   "Join the two copies together with the quine string.";
N/   "Split into lines.";
23f/ "Split each line into halves (23 bytes each).";
Wf%  "Reverse the two halves of each line.";
N*   "Join with a newline.";

# Claiming the Bounty, (12 x 10) * 48% = 57.6

Turns out that this code can be split over more lines very easily with some modifications. We add 2 characters, to get 48 in a row, which we can then conveniently divide by 8, so that we have 8 lines with 6 characters of code and 6 spaces. To do that we also need to change a few numbers, and to rearrange an operator or two, so they aren't split over both lines. That gives us a working version with size 12 x 8... one off the requirement. So we just add two lines that don't do anything (push a 1, pop a 1, push a 1, pop a 1...), to get to 12 x 10:

{"_~"
+{129X
$,m2/S
*a_+\*
N/6f/1
;1;1;1
;1;1;1
;Wf%N*
}_'"#
)!*}_~
As with the previous one, this produces
{"_~"
+{129X
$,m2/S *a_+\* N/6f/1 ;1;1;1 ;1;1;1 ;Wf%N* }_'"# )!*}_~ (Side note: there is no need to keep alternating left and right on the intermediate lines, only the position of the first and last line are important. Left and right can be chosen arbitrarily for all other lines.) And through pure coincidence, the full quine also still works: {"_~"{"_~" +{129X+{129X$,m2/S\$,m2/S
*a_+\**a_+\*
N/6f/1N/6f/1
;1;1;1;1;1;1
;1;1;1;1;1;1
;Wf%N*;Wf%N*
}_'"#}_'"#
)!*}_~)!*}_~
(I say coincidence, because the part that takes care of not executing the inner code now gets weirdly interspersed with the other quine, but it still happens to work out fine.)
That being said, I could have just added 44 lines of 1; to my original submission to fulfil the bounty requirement, but 12 x 10 looks a lot neater. ;)
Edit: Haha, when I said "pure coincidence" I couldn't have been more spot on. I looked into how the final quine now actually works, and it's absolutely ridiculous. There are three nested blocks (4 actually, but the innermost is irrelevant). The only important part of the innermost of those 3 blocks is that it contains a " (and not the one that it did in the original submission, but the very '" that is used at the end to check for this same character). So the basic structure of the quine is:
{"_~"{"_~"+{___'"___}_'"#)!*}_~)!*}_~
Let's dissect that:
{"_~" }_~ "The standard CJam quine.";
{"_~"+ }_~ "Another CJam quine. Provided it doesn't do
anything in the rest of that block, this
will leave this inner block as a string on
the stack.";
) "Slice the last character off the string.";
! "Negate... this yields 0.";
* "Repeat the string zero times.";
So this does indeed do some funny magic, but because the inner block leaves a single string on the stack, )!* happens to turn that into an empty string. The only condition is that the stuff in the inner block after + doesn't do anything else to the stack, so let's look at that:
{___'"___} "Push a block which happens to contain
quotes.";
_'"#)!* "This is from the original code and just
removes the block if it does contain
quotes.";
• TLDR; upvote ;) Dec 2 '14 at 17:40
• Shouldn't it be Y/2 in the combined quine? Dec 3 '14 at 8:48
• "And through pure coincidence" nah ;) Dec 5 '14 at 23:52
• @Timtech See my edit. Pure coincidence was not an understatement. ^^ Dec 5 '14 at 23:57
# CJam, (~~51~~ ~~49~~ ~~47~~ ~~46~~ ~~45~~ 42 x 2) * 48% = 40.32
{])"_~"+S41*'R+@,[{N@S}{SN@}{W=N]_}]=~}_~
R
Running the above code gives this output:
R
{])"_~"+S41*'R+@,[{N@S}{SN@}{W=N]_}]=~}_~
running which prints back the original source.
The source and the output are simply swapped lines.
Now comes the magic.
Overlapping the source and the output results into the following code:
{])"_~"+S41*'R+@,[{N@S}{SN@}{W=N]_}]=~}_~R
{])"_~"+S41*'R+@,[{N@S}{SN@}{W=N]_}]=~}_~R
which is a perfect quine!
Try them online here
## How it works
All the printing logic is in the first line itself which handles all three cases explained later.
{])"_~"+S41*'R+@,[{N@S}{SN@}{W=N]_}]=~}_~
{ }_~ "Copy this code block and execute the copy";
] "Wrap everything before it in array";
) "Pop the last element out of it";
"_~"+ "Append string '_~' to the copied code block";
S41* "Create a string of 41 spaces";
'R+ "Append character R to it";
@, "Rotate the array to top and get its length";
[{ }{ }{ }]=~ "Get the corresponding element from this"
"array and execute it";
The array in the last line above is the array which has code blocks corresponding to all three cases.
Case 1
{])"_~"+S41*'R+@,[{N@S}{SN@}{W=N]_}]=~}_~
R
In this case, the length of the remaining stack was 0: when the block was executed, the stack held only the copy of the block itself, which was initially popped out in the third step above. So we take the element at index 0 of the last array and execute it:
{N@S} "Note that at this point, the stack is something like:"
"[[<code block that was copied> '_ '~ ] <41 spaces and R string>]";
@ "Rotate the code block to top of stack";
S "Put a trailing space which negates the original R";
In this case, the second line is a no-op as far as printing the output is concerned.
Case 2
R
{])"_~"+S41*'R+@,[{N@S}{SN@}{W=N]_}]=~}_~
In this case, the stack already contained an empty string, so when the copied code block was executed, it had 2 elements: an empty string and the code block itself. So we take the element at index 1 of the last array and execute it:
{SN@} "Note at this point, the stack is same as in case 1";
SN "Push space and newline to stack";
@ "Rotate last three elements to bring the 41 spaces and R string to top";
Case 3
{])"_~"+S41*'R+@,[{N@S}{SN@}{W=N]_}]=~}_~R
{])"_~"+S41*'R+@,[{N@S}{SN@}{W=N]_}]=~}_~R
In this case, the stack has 6 elements. So after popping the last code block, the remaining array length is 5. We take the element at index 5 of the array and execute it. (Note that in an array of 3 elements, index 5 is index 5%3 = 2.)
{W=N]_} "Note at this point, the stack is same as in case 1";
W= "Take the last character out of the 41 spaces and R string, i.e. R";
N] "Add a new line to stack and wrap the stack in an array";
_ "Copy the array to get back the source of Case 3 itself";
# CJam, ~~42~~ ~~37~~ 33 x 2 = 66
{As_W%er"_~"+S 33*F'Lt1{\}*N\}_~
L
which prints
L
{As_W%er"_~"+S 33*F'Lt0{\}*N\}_~
(The lines are swapped, and a 1 turns into a 0.)
Test it here.
## How it works
First, you should understand the basic CJam quine:
{"_~"}_~
The braces simply define a block of code, like a function, that isn't immediately executed. If an unexecuted block remains on the stack, its source code (including braces) is printed. _ duplicates the block, and ~ executes the second copy. The block itself simply pushes the string containing _~. So this code leaves the stack in the following state:
Stack: [{"_~"} "_~"]
The block and the string are simply printed back-to-back at the end of the program, which makes this a quine.
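For comparison, the same code-plus-data trick can be written outside CJam; a minimal Python quine (the variable name s is arbitrary) keeps the program's text as data and prints it with the data substituted back into itself:

```python
# Quine: s holds the program text with a %r placeholder;
# printing s % s reproduces the full two-line source.
s = 's = %r\nprint(s %% s)'
print(s % s)
```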
The beauty of this is that we can do whatever we want in the block, and it remains a quine, because each piece of code will automatically be printed in the block contents. We can also modify the block, by obtaining its string representation with ` (which is just a string of the block with braces).
Now let's look at this solution. Notice that either part of the mutual quine consists of the quine-like block with _~, and an L. The L pushes an empty string onto the stack, which doesn't contribute to the output. So here is what the block does:
`    "Convert block to its string representation.";
As "Push 10 and convert to string.";
_W% "Duplicate and reverse, to get another string 01.";
er "Swap 0s and 1s in the block string.";
"_~"+ "Append _~.";
S 33* "Push a string with 33 spaces.";
F'Lt "Set the character at index 15 to L.";
1{ }* "Repeat this block once.";
\ "Swap the code string and the space string.";
N\ "Push a newline and move it between the two lines.";
So this will do the quine part, but exchange a 1 for a 0, and it will also prepend another line with an L, where the code above has a space. The catch is that the order of those two lines is determined by the swapping inside { }*. And because the outer part of the mutual quine has the 0 in front of it replaced by a 1, it never executes this swap, and hence produces the original order again.
# CJam, 27 × 2 = 54
{ " _ ~ " + N - ) 2 * ' '
> @ t s G B + / N * } _ ~
Output:
{ " _ ~ " + N - ) 2 * '
' > @ t s G B + / N * } _ ~
'A'B> compares the characters A and B. ' '\n > returns 1 because 32>10 and ' \n' > returns 0 because the two spaces are equal.
# CJam, 30 29 x 2 = 58
{"_~"SN]_,4=S28*'R+\{N@}*}_~
R
Outputs:
R
{"_~"SN]_,4=S28*'R+\{N@}*}_~
which outputs the original source.
This is based on the same principle as my other solution.
Try it online here | 2021-10-20 11:12:42 | {"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 0, "mathjax_asciimath": 1, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.3501031994819641, "perplexity": 2097.2094782421914}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2021-43/segments/1634323585305.53/warc/CC-MAIN-20211020090145-20211020120145-00341.warc.gz"} |
http://cms.math.ca/cjm/kw/multiplicity%20free | location: Publications → journals
Search results
Search: All articles in the CJM digital archive with keyword multiplicity free
Results 1 - 2 of 2
1. CJM 2009 (vol 61 pp. 1325)
Nien, Chufeng
Uniqueness of Shalika Models

Let $\BF_q$ be a finite field of $q$ elements, $\CF$ a $p$-adic field, and $D$ a quaternion division algebra over $\CF$. This paper proves uniqueness of Shalika models for $\GL_{2n}(\BF_q)$ and $\GL_{2n}(D)$, and re-obtains uniqueness of Shalika models for $\GL_{2n}(\CF)$ for any $n\in \BN$.

Keywords: Shalika models, linear models, uniqueness, multiplicity free
Category: 22E50
2. CJM 2009 (vol 61 pp. 351)
Graham, William; Hunziker, Markus
Multiplication of Polynomials on Hermitian Symmetric Spaces and Littlewood--Richardson Coefficients

Let $K$ be a complex reductive algebraic group and $V$ a representation of $K$. Let $S$ denote the ring of polynomials on $V$. Assume that the action of $K$ on $S$ is multiplicity-free. If $\lambda$ denotes the isomorphism class of an irreducible representation of $K$, let $\rho_\lambda\from K \rightarrow GL(V_{\lambda})$ denote the corresponding irreducible representation and $S_\lambda$ the $\lambda$-isotypic component of $S$. Write $S_\lambda \cdot S_\mu$ for the subspace of $S$ spanned by products of $S_\lambda$ and $S_\mu$. If $V_\nu$ occurs as an irreducible constituent of $V_\lambda\otimes V_\mu$, is it true that $S_\nu\subseteq S_\lambda\cdot S_\mu$? In this paper, the authors investigate this question for representations arising in the context of Hermitian symmetric pairs. It is shown that the answer is yes in some cases and, using an earlier result of Ruitenburg, that in the remaining classical cases, the answer is yes provided that a conjecture of Stanley on the multiplication of Jack polynomials is true. It is also shown how the conjecture connects multiplication in the ring $S$ to the usual Littlewood--Richardson rule.

Keywords: Hermitian symmetric spaces, multiplicity free actions, Littlewood--Richardson coefficients, Jack polynomials
Categories: 14L30, 22E46
https://www.physicsforums.com/threads/find-force-from-a-function-of-displacement.89548/

# Find force from a function of displacement?
1. Sep 18, 2005
### beanryu
An object with mass m moves along the x-axis. Its position as a function of time is given by x(t)=At-Bt^3, where A and B are constants. Calculate the net force on the object as a function of time.
I have no idea as to how to begin.
Thanks for anyone's help.
I don't know !&@#%^@%!$^$(! about doing this kind of calculus problem.
2. Sep 18, 2005
### Dorothy Weglend
You want the force, so that tells you that you need to find F=ma. You know the mass, m. So you need to find a, the acceleration function for this motion.
Velocity is the rate of change of position, and acceleration is the rate of change of velocity. There are plenty of examples of this in physics, some of which are sure to be in your physics text, so you might take a look there and see how they did this.
Dot
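Following that hint: differentiating $x(t) = At - Bt^3$ once gives $v(t) = A - 3Bt^2$, and differentiating again gives $a(t) = -6Bt$, so $F(t) = ma = -6mBt$. A quick finite-difference sanity check in Python (the values for A, B, m and t below are made up for illustration):

```python
# Check that F = m * x''(t) matches -6*m*B*t for x(t) = A*t - B*t**3.
A, B, m = 2.0, 0.5, 3.0

def x(t):
    return A * t - B * t**3

def second_derivative(f, t, h=1e-4):
    # central difference; exact for cubics up to floating-point noise
    return (f(t + h) - 2 * f(t) + f(t - h)) / h**2

t = 1.7
F_numeric = m * second_derivative(x, t)
F_formula = -6 * m * B * t
print(abs(F_numeric - F_formula) < 1e-4)  # True
```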
http://shyuezhong.com/HDIRQmfbA5a4E.xml
The align environment is used for two or more equations when vertical alignment is desired; usually binary relations such as equal signs are aligned.
American Mathematical Society, User's Guide for the amsmath Package
Introduction to align
For all intents and purposes, the align environment is a replacement for the eqnarray environment and all its warts. Rather than
\begin{eqnarray*}
x^2 + y^2 &=& 1 \\
y &=& \sqrt{1 - x^2},
\end{eqnarray*}
one can type
\begin{align*}
x^2 + y^2 &= 1 \\
y &= \sqrt{1 - x^2}.
\end{align*}
Benefits over eqnarray
Besides the slightly simpler syntax, you side-step bugs documented by Lars Madsen for The PracTeX Journal, such as
• inconsistent spacing around binary symbols,
• overwriting equation numbers, and
• silent label mismatch.
Multiple equations on one line
Besides being used for aligning binary symbols, the ampersand can also mark an invisible alignment for separating columns of equations. For example,
\begin{align}
u &= \arctan x & dv &= 1 \, dx \\
du &= \frac{1}{1 + x^2}dx & v &= x.
\end{align}
produces:
Preamble
To use align, import the amsmath package in your preamble.
.....
\usepackage{amsmath}
.....
\begin{align}
i_t & = \sigma(W_{xi}x_t+W_{hi}h_{t-1}+W_{ci}c_{t-1}+b_i)\\
f_t & = \sigma(W_{xf}x_t+W_{hf}h_{t-1}+W_{cf}c_{t-1}+b_f)\\
c_t & = f_t\odot c_{t-1}+i_t\odot\tanh(W_{xc}x_t+W_{hc}h_{t-1}+b_c)\\
o_t & = \sigma(W_{xo}x_t+W_{ho}h_{t-1}+W_{co}c_{t}+b_o)\\
h_t & = o_t\odot\tanh(c_t)
\end{align}
Community content is available under CC-BY-SA unless otherwise noted.
https://blog.myrank.co.in/kinetic-energy-of-a-body-in-combined-rotation-and-translation/ | # Kinetic Energy of a Body in Combined Rotation and Translation
## Kinetic Energy of a Body in Combined Rotation and Translation
Consider a body in combined translational and rotational motion in the lab frame. Suppose in the frame of the centre of mass, the body is making a pure rotation with an angular velocity ω. The centre of mass itself is moving in the lab frame at a velocity $$\overrightarrow{{{v}_{0}}}$$.
The velocity of a particle of mass mi is $$\overrightarrow{{{v}_{i,CM}}}$$ with respect to the centre of mass frame and $$\overrightarrow{{{v}_{i}}}$$ with respect to the lab frame. We have:
$$\overrightarrow{{{v}_{i}}}=\overrightarrow{{{v}_{i,CM}}}+\overrightarrow{{{v}_{o}}}$$.
The kinetic energy of the particle in the lab frame is:
$$\frac{1}{2}{{m}_{i}}{{v}_{i}}^{2}=\frac{1}{2}{{m}_{i}}\left( \overrightarrow{{{v}_{i,cm}}}+\overrightarrow{{{v}_{0}}} \right)\cdot\left( \overrightarrow{{{v}_{i,cm}}}+\overrightarrow{{{v}_{0}}} \right)$$.
$$\frac{1}{2}{{m}_{i}}{{v}_{i}}^{2}=\frac{1}{2}{{m}_{i}}{{v}^{2}}_{i,cm}+\frac{1}{2}{{m}_{i}}{{v}_{o}}^{2}+\frac{1}{2}{{m}_{i}}\left( 2\overrightarrow{{{v}_{i,cm}}}.\overrightarrow{{{v}_{0}}} \right)$$.
Summing over all the particles, the total kinetic energy of the body in the lab frame is:
$$K=\sum\limits_{i}{\frac{1}{2}{{m}_{i}}{{v}_{i}}^{2}}=\sum\limits_{i}{\frac{1}{2}}{{m}_{i}}{{v}^{2}}_{i,cm}+\sum\limits_{i}{\frac{1}{2}{{m}_{i}}{{v}^{2}}_{0}}+\left[ \sum\limits_{i}{{{m}_{i}}\overrightarrow{{{v}_{i,cm}}}} \right]\overrightarrow{{{v}_{0}}}$$.
Now,
$$\sum\nolimits_{i}{\frac{1}{2}}{{m}_{i}}{{v}^{2}}_{i,cm}$$ is the kinetic energy of the body in the centre of mass frame. In this frame, the body is making a pure rotation with an angular velocity ω. Thus, this term is equal to $$\frac{1}{2}{{I}_{cm}}{{\omega }^{2}}$$. Also, $$\sum{\left( {{m}_{i}}\overrightarrow{{{v}_{i,cm}}} \right)}$$ is the total mass times the velocity of the centre of mass in the centre of mass frame, which is obviously zero: $$\sum{\left( {{m}_{i}}\overrightarrow{{{v}_{i,cm}}} \right)}=0$$.
Thus, $$K=\frac{1}{2}{{I}_{cm}}{{\omega }^{2}}+\frac{1}{2}m{{v}_{0}}^{2}$$.
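As a sanity check, the result can be verified numerically for a simple rigid body — say, two point masses rotating about their centre of mass while the centre of mass translates (all values below are made up):

```python
# Two point masses forming a rigid dumbbell; positions are relative
# to the centre of mass (chosen so that sum(m_i * r_i) = 0).
masses = [1.0, 2.0]
positions = [(0.3, 0.0), (-0.15, 0.0)]
omega = 4.0        # angular velocity about the COM (rad/s)
v0 = (1.5, -0.7)   # COM velocity in the lab frame

def perp(r):       # omega x r in 2D: omega * (-y, x)
    return (-omega * r[1], omega * r[0])

# Direct sum of (1/2) m v^2 in the lab frame
K_direct = 0.0
for m, r in zip(masses, positions):
    w = perp(r)
    v = (v0[0] + w[0], v0[1] + w[1])
    K_direct += 0.5 * m * (v[0]**2 + v[1]**2)

# Formula: K = (1/2) I_cm w^2 + (1/2) M v0^2
M = sum(masses)
I_cm = sum(m * (r[0]**2 + r[1]**2) for m, r in zip(masses, positions))
K_formula = 0.5 * I_cm * omega**2 + 0.5 * M * (v0[0]**2 + v0[1]**2)
print(abs(K_direct - K_formula) < 1e-9)  # True
```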
https://itectec.com/superuser/udev-rules-doesnt-work-with-small-number/ | # Linux – udev rules doesn’t work with small number
linuxudev
I was writing a udev rule that makes use of the ID_PATH, just to make the device persistent against the port it's inserted in.
So here is what I have
KERNEL=="ttyUSB?",SUBSYSTEM=="tty",ENV{ID_BUS}=="usb",ENV{ID_PATH}=="pci-0000:00:12.0-usb-0:1:1.0",SYMLINK="bla"
Initially, the file is called 52-foo.rules, and it doesn't work. I renamed it to 81-foo.rules and it works fine.
It's like the ENV{} values are only valid if the number are large enough. Could somebody explain why this is the case?
Thanks,
Perhaps your rule is being overwritten by another rule. Since higher-numbered rules run last, it doesn't get overwritten when you use a higher number.
< 60   most user rules; if you want to prevent an assignment being
       overridden by default rules, use the := operator. These cannot
       access persistent information such as that from vol_id
< 70   rules that run helpers such as vol_id to populate the udev db
< 90   rules that run other programs (often using information in the
       udev db)
>= 90  rules that should run last
Check this
https://artofproblemsolving.com/wiki/index.php/Nonconstant | # Nonconstant
A function is called nonconstant if it takes more than one value (if there is more than one element in its range). For example, the polynomial $p(x) = x^2 - x + 1$ with the real numbers as domain and codomain is nonconstant. We can show this simply by noting that $p(1) = 1$ and $p(2) = 3$, so the function takes at least two different values. However, the function $f: \mathbb{Z} \to \mathbb{Z}$ such that $f(x) = 1$ for all $x$ is a constant function, as the value of the function remains the same regardless of its argument, i.e. there is only one element in the codomain.
Note that recognizing nonconstant functions is not always trivial. For example, the function $f: \mathbb{Z} \to \mathbb{Z}$ that takes an integer $x$, computes the value of $x^5 -2x^4 -2x^3 - x^2 + x + 4$ and then takes the remainder of this number on division by 3 appears quite complicated but turns out to be identical to the last function in the previous paragraph: it only takes the value 1.
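A brute-force check of that claim (since $p(x) \bmod 3$ depends only on $x \bmod 3$, testing three residues suffices, but a wider range is checked below for good measure):

```python
# p(x) = x^5 - 2x^4 - 2x^3 - x^2 + x + 4, reduced mod 3, should
# always equal 1 -- i.e. the function is constant despite appearances.
def p(x):
    return x**5 - 2*x**4 - 2*x**3 - x**2 + x + 4

values = {p(x) % 3 for x in range(-50, 51)}
print(values)  # {1}
```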
https://mathstodon.xyz/@bmreiniger | Pinned toot
Anyone hear of this before, or can find something about it? It seemed kinda cool at the time, and I like this axiomatization.
Pinned toot
Background: A finite projective plane has point-line duality, and affine planes lack it. You can get a projective plane by adding a "line at infinity," and this is reversible.
Years ago I stumbled across this in constructing a counterexample to a graph conjecture, and I never found out if it was something people had realized or used before:
We can also recover point-line duality from an affine plane by just deleting one of the parallel classes of lines. <cont...>
Pinned toot
@jordyd
"...now lastly, set $$x=10$$, ..."
Pinned toot
that if a finite poset has a unique maximal $$x$$, then $$x$$ is maximum.
If not, there is a $$y_1||x$$. $$y_1$$ is not maximal, so there is $$y_2>y_1$$; we cannot have $$y_2<x$$, else transitivity would give $$y_1<x$$, and we cannot have $$y_2>x$$ because $$x$$ is maximal, so $$y_2||x$$. Continuing, we build a chain $$y_1<y_2<\dotsb$$ (with $$y_i||x$$ for all $$i$$), contradicting finiteness.
(This proof also suggests a construction of an infinite poset without the property.)
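The finite case can also be confirmed by brute force for small posets — the sketch below enumerates every strict partial order on up to 4 labelled elements and checks that a unique maximal element is always a maximum:

```python
from itertools import product

def unique_maximal_implies_maximum(n):
    """Check the claim on every strict partial order on n labelled elements."""
    elems = list(range(n))
    pairs = [(a, b) for a in elems for b in elems if a != b]
    for bits in product([0, 1], repeat=len(pairs)):
        rel = {p for p, keep in zip(pairs, bits) if keep}
        # keep only strict partial orders: transitive and cycle-free
        # (a 2-cycle (a,b),(b,a) shows up here as the a == c case)
        if any(a == c or (a, c) not in rel
               for (a, b) in rel for (b2, c) in rel if b2 == b):
            continue
        maximal = [x for x in elems
                   if not any((x, y) in rel for y in elems)]
        if len(maximal) == 1:
            m = maximal[0]
            if not all((y, m) in rel for y in elems if y != m):
                return False  # counterexample found
    return True

print(all(unique_maximal_implies_maximum(n) for n in range(1, 5)))  # True
```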
The dLX (pronounced "d-Lex", as in "lexicon"), is a new 60-sided, alphabetical die from The Dice Lab. Sixty is enough for us to get a letter distribution that is close to the distribution in the English language, so they can be used for word search games! youtu.be/9T3zCsyx98g
Origametry: Mathematical Methods in Paper Folding (cambridge.org/us/academic/subj), new book coming out October 31 by @tomhull
I haven't seen anything more than the blurb linked here and the limited preview on Google Books (books.google.com/books?id=LdX7), but it looks interesting and worth waiting for.
Preparing the maths can be a mess.
Hope the talk will not be one.
Tired: Necessary evil
Wired: Necessary and sufficient evil
Closed quasigeodesics on the dodecahedron (quantamagazine.org/mathematici), paths that start at a vertex and go straight across each edge until coming back to the same vertex from the other side. Original paper: arxiv.org/abs/1811.04131, doi.org/10.1080/10586458.2020.
I saw this on Numberphile a few months back (video linked in article) but now it's on _Quanta_.
The Cornell Lab of Ornithology has an R frontend to awk, called auk.
cornelllabofornithology.github
mathstodon.xyz now has a live preview and completion of LaTeX!
This has been on my to-do list for a long time. You no longer need to worry if LaTeX will display properly or not.
Nice little bit of card-shuffling mathematics, but also an excellent presentation that takes advantage of the medium.
fredhohman.com/card-shuffling/
New entry!
An Optimal Solution for the Muffin Problem
Article by Richard E. Chatwin
In collections: Attention-grabbing titles, Food, Fun maths facts, Protocols and strategies
The muffin problem asks us to divide $$m$$ muffins into pieces and assign each of those pieces to one of $$s$$ students so that the sizes of the pieces assigned to each student total $$m/s$$, with the objective being to maximize...
URL: arxiv.org/abs/1907.08726v2
PDF: arxiv.org/pdf/1907.08726v2
I need help finding the hole in this argument...
Let K be a CM number field, K+ its maximal real subfield, k the 2-part of K with subfield k+ likewise. Suppose K has a purely imaginary unit a. Then by Remak [1], a is of the form \sqrt{-u} for a totally positive non-square unit u of K+. The degree of K over k is odd, therefore the norm N_{K/k}(a) is also purely imaginary, and a unit. Therefore a totally positive non-square unit exists in k+, and moreover it is found similarly.
baby highland cow pics, cow eye contact
i was taking pictures of it and it was like "oh? you desire a Model? well let me come closer"
The set of all sets which don't contain themselves is coming from inside the set 😳
Untangling random polygons: sinews.siam.org/Details-Page/u
Repeatedly rescaling midpoint polygons always leads to an ellipse.
Ah, the two genders:
- tautological
- vacuous
We start warning 'no mask' like 'eye contact' on selfies when.
It's 5/8, or Almost The Worst Approximation to Φ day!
Is there a term for an $n$-regular graph which can have its vertices $n$-colored so that each vertex has all $n$ colors among its neighbors? (For example, a cycle whose length is a multiple of 4 works for n=2, and the triangular prism for n=3).
So, I know you know this by now, BUT: scholar.social are hosting a mini-conference! All sorts of disciplines and a lovely line-up of talks.
To get the links so you know when the talks are and what vidchat to drop in on for them, sign up here: docs.google.com/forms/d/e/1FAI
Let $$K$$ be a cyclic number field over $$\mathbb{Q}$$ of degree $$q=7$$ with Galois group $$G$$ and consider the $$\mathbb{F}_2$$ algebra over $$G$$, also known as the group ring. This is isomorphic to $$\mathbb{F}_2[x]/(x^q+1)$$. In the case $$q=7$$, the ideal $$(x^q+1)$$ splits mod $$2$$ as $$(x+1)(x^3+x+1)(x^3+x^2+1)$$. So even though the Galois group is cyclic, the $$\mathbb{F}_2$$ algebra over it is not.
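Over $$\mathbb{F}_2$$ the factorization is $$x^7+1=(x+1)(x^3+x+1)(x^3+x^2+1)$$; a quick dependency-free check by multiplying coefficient lists mod 2:

```python
# Multiply polynomials over GF(2); coefficient lists are indexed by degree.
def polymul_mod2(a, b):
    out = [0] * (len(a) + len(b) - 1)
    for i, ai in enumerate(a):
        for j, bj in enumerate(b):
            out[i + j] ^= ai & bj
    return out

f1 = [1, 1]        # x + 1
f2 = [1, 1, 0, 1]  # x^3 + x + 1
f3 = [1, 0, 1, 1]  # x^3 + x^2 + 1
prod = polymul_mod2(polymul_mod2(f1, f2), f3)
print(prod == [1, 0, 0, 0, 0, 0, 0, 1])  # True: the product is x^7 + 1
```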
How to be a mathematician:
Definition: An object that does what we want.
Example: Some objects don't do what we want.
Definition: An object is called normal if it *really* does what we want.
https://or.stackexchange.com/tags/robust-optimization/hot | # Tag Info
16
In colloquial terms, Robust Optimization (RO) is a methodology (including modeling approach and computational methods) for handling optimization problems with uncertain data. Many times data aren't really measured exactly, and even more, in some contexts these measurement errors can trigger infeasibility on the optimization models (a quite undesirable ...
12
I think there is no single, uniformly accepted answer. But there are two main factors that distinguish them: In stochastic optimization, it is nearly always assumed that we know the probability distribution (possibly in the form of discrete probabilities of each scenario) of the random parameters. In robust optimization it is usually (but not always) ...
12
I think there are two small mistakes in your formulation: In the final formulation, the roles of $x$ and $z$ should be reversed. Except for the first constraint and for the non-negativity of the variables, all signs should be reversed. The mistake probably occurred when using duality to rewrite the expression $\max\limits_{(\alpha, \beta, \gamma, \delta) \...
12
In reference to the first question, I think it often comes down to the information you have about the underlying uncertainty. If you only have intervals or ranges, robust is the way to go. If you have all of the distributional information (or assume it), stochastic programming is an option. As @TheSimpliFire mentioned, you can include risk measures in ...
10
The following papers discuss this extensively with numerical experiments, but they tackle specific examples. Emphasis is mine. Kazamzadeh et al. (2017) This is a comparison of the two techniques using the example of unit commitment, answering your first question. A popular impression has arisen that the robust approach, with its focus on the worst case, is ...
9
Regarding your first question, I think other answers have summed it up pretty good. Two things I would add are as follows: Stochastic programming models (besides chance constraint/probabilistic programming ones) allow you to correct your decision using the concept of recourse. In this idea, you have to make some decisions before the realization of uncertain ...
9
The following is purely personal opinion. I would say a (substantial) majority of non-academic optimization problems do not involve any of the methods you listed, for a number of reasons. "Better is the enemy of good enough." Using fixed, plausible values for parameters and ignoring uncertainty often produce answers that are good enough for ...
7
I think these terms are all rather vague and imprecise, and different people use them slightly differently. Some papers try to draw clear lines between them—for example, in my dissertation in 2003, I draw a distinction between robust (i.e., perform well with respect to uncertainties in the data, such as demand) and reliable (i.e., perform well when parts of ...
4
As Larry said, there is no single, uniformly accepted answer, so I'll make things even more interesting. In mechanical engineering, specifically in aircraft design where I used to work, we used the following terminology: Stochastic optimisation was to solve problems using any non-deterministic methods, e.g., particle swarm algorithms or evolutionary ...
4
This heavily depends on the application at hand and could vary all the way from milliseconds to months. It all comes down to rigorously defining the specs. Many parameters are in play: How long does your feedback loop need to be, i.e., how often does your system need to update? How high is the uncertainty and how does it grow over time? Do you know the ...
3
As you mentioned that you looked for python packages for RO before and didn't find any you might want to have a look at RSOME. You can custom build uncertainty sets using affine constraints as well as 1, 2, and infinity norms. For many of the uncertainty sets (if the reformulation is not linear) commercial solvers are needed. I found that for big problems ...
3
A robust optimal solution has to satisfy all constraints for each choice of the uncertainty parameters. Thus you might not be able to point out one particular set that is active for an optimal solution. Consider the following small problem: \begin{align} \min x_1+x_2+x_3\\ x_1+x_2&\geq b_3 \\ x_1+x_3&\geq b_2 \\ x_2+x_3&\geq b_1 \\ x_1, x_2, x_3&...
2
CPLEX treats certain small values as negligible for purposes of constraint satisfaction (including satisfying integrality constraints). That does not mean it automatically rounds small values to zero. If the coefficient 1.4210854715202e-14 in your cut is the value of a dual variable from a subproblem, it is up to you to decide whether to round it to zero ...
2
Let's make it more clear. Stochastic Programming is not Stochastic Optimization. When you say Stochastic Programming the above answer of @Larry is explaining more about "stochastic programming". This is when your problem of study has some uncertainty (e.g. demand uncertainty, arrival times uncertainty). The robust optimization sets with Stochastic ...
2
Stochastic Optimization (SO) requires the probability distributions (PDF) of the uncertain variables which are usually hard to fit. Then, a large number of scenarios are required to be sampled from these PDFs with their probabilities. This makes some computational complexities and intractability so, scenario reductions are needed but some information will be ...
https://codegolf.stackexchange.com/questions/223918/tips-for-golfing-in-vyxal | # Tips for Golfing in Vyxal
Vyxal is a golfing language that has 500+ different commands (achieved through overloads), and is a known beater of Jelly, 05AB1E and most golfing languages.
Henceforth, it seems like a good idea to share some of the tips and tricks of the industry used to get terse programs in Vyxal.
One tip per answer, and no stupid tips like "oh remove comments and whitespace".
• Can we actually enforce the "one tip per answer" please, unlike every other tips question ever? Even if answers are good, please downvote them if they contain more than one tip Apr 20, 2021 at 6:55
• @pxeger I think that seems reasonable Apr 20, 2021 at 6:55
# Brackets/Structures autocomplete
Totally not stolen from Tips for golfing in keg because Vyxal totally isn't supposed to be keg but 69 times better
If you have a structure (e.g. if/for/while/function/lambda) at the end of your program, and EOF follows, you can leave the closing bracket/semicolon off. Note that this only applies if you are submitting a full program.
For example:
9(0,)
Can be shortened to:
9(0,
And:
{:1=[1,]}
Can be shortened to:
{:1=[1,
# Compress your strings and numbers
Nobody likes long strings. And nobody likes long numbers either. Luckily there's two ways of compressing strings and one way to compress numbers.
## Dictionary Compression
Fun fact: Vyxal has access to a roughly 20k word "dictionary" (read: a list of words) which can be used to shorten strings with a) common English words or b) common 3 letter combinations.
To access the words in this dictionary, you need to get the String Compression Code (SCC) of the word and place it inside a normal string (the backtick ones). String Compression Code is simply a way of saying "the base 10 index of the word within the dictionary list converted to a bijective base-161¹".
You can get the SCC of a word by using øD. For example:
HelloøD
Tells you the SCC for Hello is ƈṡ. However, øD will also return the dictionary compression of a given string:
Hello, World!øD
Is turned into ƈṡ, ƛ€!.
øD is fully optimised and will always give you the shortest possible result. For example, compressing abcdef will return ėġḣ².
## Base-255 Compression
But what if your string is a bunch of random letters that aren't in the dictionary at all? øD becomes useless for obvious reasons. In this case, you would use « delimited strings.
These strings take everything inside of them and convert it from bijective base-255 (the vyxal codepage minus «) to base 10. It then converts that result to bijective base-27 (the lower case alphabet plus space). Important: only strings containing lower case letters and spaces can be base-255 compressed³.
To get the Base255 compression of a string, you can use øc:
ahroebeodbslnwksozlzbeoxbeodbsonwkdbdiøc
Tells you that the compression is «∧pŀQb⟨ż₄∑ṄḞḊjẎɾ71(⁼~∇Ċβ«.
For numbers, » strings have got you covered. øC will take an integer and return it converted to a bijective base-255 (the vyxal code page minus »):
69694204206969øC
Gives you »A⟩¾Ǐø7»
1: The bijective base 161 is simply the vyxal code page minus all printable ascii. This is so that SCCs can be embedded inside strings without creating a new string type.
2: Yeah, SCCs don't need to be surrounded by spaces - they can be inside ascii (a÷×b) or even next to each other (£÷¬¶). This is very intentional.
3: I originally allowed for upper and lower case inside base 255 strings, but found that strings are usually shorter when only allowing lower case.
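The "bijective base" here is standard bijective numeration: digits run from 1 to k rather than 0 to k-1, so there is no zero digit and every index has a unique representation. A generic sketch of the conversion (the digit-to-character mapping Vyxal actually uses onto its codepage is an implementation detail not shown here):

```python
def to_bijective(n, k):
    """Digits (1..k) of n in bijective base k, most significant first; n >= 1."""
    digits = []
    while n > 0:
        n, r = divmod(n - 1, k)
        digits.append(r + 1)
    return digits[::-1]

def from_bijective(digits, k):
    n = 0
    for d in digits:
        n = n * k + d
    return n

# e.g. a dictionary index would be converted with k = 161,
# then each digit mapped to one codepage character
print(to_bijective(10, 3), from_bijective(to_bijective(10, 3), 3))  # [3, 1] 10
```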
# Remember that mapping/filtering/reducing all cast numbers to ranges
Okay so say you want to apply something over the range [1, n] using Map, Filter or Reduce (or ḭnverse reduce/foldr). Your first instinct might be to do this:
ɾλ....;M # or whatever command you're using
This is unnecessary, as the functional programming commands all cast numbers to range before doing their job, so:
λ....;M
is equivalent.
"But what if my range isn't [1, n], but instead [0, n] or [1, n) for example? Won't I need the corresponding range command?"
Well yes, but actually no if you use flags:
M Make implicit range generation start at 0 instead of 1
m Make implicit range generation end at n-1 instead of n
(source: flag help generated using the -h flag).
# Use ₃ and Ḣ for certain length checks
Say you have a string and you want to check whether its length is greater than 1, and the output only needs to be truthy or falsy.
You could go L1> for three bytes. Or, you could go Ḣ for 1 byte.
Ḣ slices off the first character of a string and returns the rest, so running on a length 1 string returns an empty string, which is falsy; on a length ≥2 string, it returns a truthy non-empty string.
What about length >2? Just use ḢḢ, with the same logic as before, and still saving a byte.
Length = 1? There's literally a builtin for this - ₃ on strings or lists returns true only if the length is 1.
Length = 2? Just combine the two (Ḣ₃) - if you lop off a character and that string becomes length 1, it must've been length 2.
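For readers more comfortable with Python, the same idea expressed as a (hypothetical) analogue — slicing off leading characters leaves a string that is truthy exactly when the original was long enough:

```python
# s[1:] plays the role of Ḣ (behead), truthiness plays the role of the
# implicit boolean check, and the length-1 test plays the role of ₃.
def longer_than_1(s):
    return bool(s[1:])          # truthy iff len(s) > 1

def has_length(s, n):
    return len(s[n - 1:]) == 1  # behead n-1 times, then test length 1

print(longer_than_1("a"), longer_than_1("ab"))    # False True
print(has_length("ab", 2), has_length("abc", 2))  # True False
```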
# Use the multi-element lambdas if your lambda body is 1-3 bytes
Say you have the following:
λǐṅ;Ẋ
You can turn this into
‡ǐṅẊ
Because ‡ combines the next two elements (built-ins) into a single lambda. ⁽ is for 1 element lambdas (good for when you want to reduce/filter/map a built-in without using v) and ≬ is for 3 element lambdas.
# Use \ for single byte strings and ‛ for two byte strings
Sometimes you'll need a string of either one or two characters. You could do the following:
A
AB
But that has an extra backtick at the end. Instead, you can do this:
\A
‛AB
## Important
\ pushes the next character as a string no matter what it is. ‛ will treat it as if the next two characters were wrapped in backticks (meaning that it will dictionary uncompress a single string compression code).
# Use the nameless variable
Vyxal has variables, set by → and accessed by ←. But did you know that there's a nameless variable?
You can access this by just going ← followed by a non-alphabetic character (it doesn't matter what, and that will still be run). Ditto with setting.
It works in for loops - try 4(|←,).
In other words, it's an extra register that can easily save you a couple of bytes.
# Use - as a check for recursive functions
When recursing over a deep list, - will return a falsy value (0 / empty string) for scalars and a truthy value (list of 0) for lists.
Example of this in use
Note that this only works when you don't have empty lists.
Iȧ does work for empty lists, returning a falsy value (empty string) for integers and a truthy value (list of lists) for lists. Thanks to lyxal for this one.
# Custom base decompression
When you're compressing a large amount of data with a limited charset, as in here, you can use a base-255 integer (»...») and the custom base decompression function τ.
For example, say you want to compress this ascii-art:
/
/ \
\ \
\ / \
/ /
/ \ /
\ \
\ / \
/ /
/ \ /
\ \
\ / \
/ /
/ \ /
\ \
\ /
/
You can just map 0 to \, 1 to /, and 2 to a space to get 576780841113635223227691120919222477677740273185690732841 in base-3, which compresses to »ɾĠ^;√⟑•ȮṙDǓ…⟩P½≠1⅛²ė"÷₆Ŀ».
Then, you can push your compression key \/  (backslash, slash, space) and append τ to turn the number back into base-3 with those characters as the digit values.
Finally, you can split into 17 pieces (for 17 lines) and output joined by newlines - Try it Online!
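For intuition, here is the underlying trick in Python rather than Vyxal (the helper names are mine, not Vyxal built-ins): a string over a k-character alphabet is just one big base-k integer.

```python
def to_int(s, alphabet):
    # read s as a base-len(alphabet) number, digits given by alphabet order
    n = 0
    for ch in s:
        n = n * len(alphabet) + alphabet.index(ch)
    return n

def from_int(n, alphabet, length):
    digits = []
    while n:
        n, d = divmod(n, len(alphabet))
        digits.append(alphabet[d])
    out = "".join(reversed(digits))
    # leading alphabet[0] "digits" vanish, so pad back to the known length
    return alphabet[0] * (length - len(out)) + out

key = "\\/ "                         # backslash, slash, space
art = "\\ / \\/  //"                 # a tiny stand-in for the real ascii-art
n = to_int(art, key)                 # one big base-3 integer
assert from_int(n, key, len(art)) == art
```

That integer is what »...» then stores compactly; τ performs the `from_int` direction.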
# Use of filter lambda
The filter lambda (') filters the non-truthy items out, so you don't need to map a lambda, close it, and then take the truthy indices.
This is a code using the lambda map
ƛǐG5>;T›
But if you use filter lambda, it can be shortened to
'ǐG5>
The last 3 commands (;T›) are no longer needed!
• This also applies to map lambdas. – Apr 20, 2021 at 7:46
# Use the register instead of a variable
Sometimes, you're using a variable over and over again, and it's using so many bytes, right? Well, if you only have 1 variable, you can use the register instead, and save a byte every time you use it!
For example, say you want to do x * 2 and x ^ x. You could do:
3→x ←xd, ←x←xe,
This saves 3 to x, then retrieves it and doubles it, then retrieves twice, and exponentiates. However, x is being used 4 times, so that's 8 bytes in variable references alone! Using the register will shorten this code a lot:
3£ ¥d, ¥¥e,
Instead of saving to and retrieving from a variable, we're using the register, which can be accessed using only 1 byte. Anytime you're using variables, you can replace one of the variables with the register to save some bytes!
# Converting characters to numbers in restricted-source
When you want to convert characters to numbers, you have to use C, right? Well, there's another way. You can also use b. The b command, when used on a string, will convert each character to its ASCII value, then convert that to binary. You can then use B to convert that back to a decimal value!
EEEE b vB
Try it Online!
It's not too often that you'll be able to use bB and not be able to use C, but this can be quite useful in the right circumstances.
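Outside Vyxal, the reason the detour works is just a binary round-trip — a quick Python sanity check:

```python
# Vyxal's b turns each character into the binary digits of its code point;
# B then reads those digits back as a base-2 number -> the character code.
for c in "EEEE":
    bits = format(ord(c), "b")       # 'E' -> '1000101'
    assert int(bits, 2) == ord(c)
```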
# Utilise the Short Dictionary
Note: This is a 2.6.x+ feature only
Newer versions of Vyxal have a neat little feature where 1 character SCCs are indexed into a special "short" dictionary of overly specialised "words". Here's a list of all the current entries:
\d+ λ
-?\d+ ƛ
\d+\.\d+ ¬
-?\d+(\.\d*)? ∧
[A-Za-z0-9\.,;:!?()"'%\-]+ ⟑
^\S+@\S+\$ ∨
[A-Za-z0-9] ⟇
[A-Za-z] ÷
[a-z] ×
[A-Z] «
[0-9] »
[aeiou] °
[aeiouy] •
[bcdfghjklmnpqrstvwxyz] ß
\w+ †
.* €
[^A-z0-9] ½
[^A-Z] ∆
[^a-z] ø
[^0-9] ↔
[^aeiou] ¢
[^aeiouy] ⌐
[^bcdfghjklmnpqrstvwxyz] æ
(.+) ʀ
\W ʁ
\w ɾ
\S ɽ
\s Þ
\W+ ƈ
\w+ ∞
\S+ ¨
\s+ ↑
((www\.|(http|https|ftp|news|file)+\:\/\/)[_.a-z0-9-]+\.[a-z0-9\/_:@=.+?,##%&~-]*[^.|\'|\# |!|$$|?|,| |>|<|;|$$]) ↓
?<= ∴
?!= ∵
?<! ›
https://www.duckduckgo.com ¤
https://www.duckduckgo.com/?q= ð
https://www.bing.com →
https://www.bing.com/?q= ←
https://codegolf.stackexchange.com/q/ β
https://codegolf.stackexchange.com/a/ τ
https://stackoverflow.com/q/ ȧ
https://stackoverflow.com/a/ ḃ
₁ƛ₍₃₅kF½*∑∴, ċ
:ɾ:Ẋv∑Ȯẇ ḋ
:ɾ:Ẋƛ⁽=R;Ȯẇ ė
isdo ḟ
›‹²… ġ
n't ḣ
cos(x) ḭ
sin(x) ŀ
tan(x) ṁ
acos(x) ṅ
asin(x) ȯ
atan(x) ṗ
x^2 ṙ
x^3 ṡ
x+1 ṫ
x-1 ẇ
So for example:
β223918
would return the address of this question (because β decompresses as https://codegolf.stackexchange.com/q/).
# For reverse sorting, use µ...⌐
If the last operation you do in a program is to sort something in a certain way then reverse it, you can save a byte by using µ...⌐ — sort by a key that ends by taking the complement (1 − x), which yields descending order. This only works when the values being sorted are numeric.
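The reason the complement works: for numbers, sorting by the key 1 − x produces descending order. A Python analogue:

```python
xs = [3, 1, 4, 1, 5, 9, 2.5]
# sorting numbers by the complement 1 - x gives descending order
assert sorted(xs, key=lambda x: 1 - x) == sorted(xs, reverse=True)
```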
# If there are not enough arguments on the stack, fillers will be taken based on the context
What I mean: if there are not enough arguments on the stack to perform an action, the extra arguments will come either from the input (in the "main" program), or from the function/lambda arguments. I discovered this while golfing this answer. I was trying to find a way to tie with 05AB1E, and I had this code:
:↵':²"Ṡ≈;i
I was wondering how I could remove those duplicate elements, and I decided to look at the source code. And then I found the docstring for pop(): Pops (count) items from iterable. If there isn't enough items within iterable, input is used as filler. Looking at get_input(), you see that it either reads the input from STDIN, or from the lambda arguments. That allowed me to shave off two bytes and my code became this:
↵'²"Ṡ≈;i
• This is exactly what the whole idea of context was based on - the fact 05AB1E doesn't have nice ways of retaining what's being operated on inside structures inspired things like n being the context variable Mar 15 at 13:01 | 2022-08-12 17:44:18 | {"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 1, "mathjax_asciimath": 1, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.4086697995662689, "perplexity": 3014.9762291632233}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2022-33/segments/1659882571745.28/warc/CC-MAIN-20220812170436-20220812200436-00414.warc.gz"} |
https://stats.stackexchange.com/questions/424133/bayesian-update-for-beta-distribution | # Bayesian update for Beta distribution
I'm wondering how to find a posterior of a beta distribution when the "new information" is not an outcome of a binomial trial.
Let $$p$$ be the probability of Head of a (biased) coin toss. As usual in Bayesian inference, let $$p\sim Beta(a,b).$$
When the "new information" is Head or Tail, we can simply update the posterior by adding the number of heads or tails to the corresponding shape parameter.
However, suppose that the new information I have is $$p\geq \frac{1}{2}.$$
If this is the case, how should I update the posterior in a Bayesian way?
In relation to the above question, and possibly more interestingly, for a Dirichlet distribution, if $$(p_1,p_2,p_3,p_4)\sim Dir(a,b,c,d)$$, what kind of Bayesian inference can be made out of the new information?: $$p_1+p_2\geq p_3+p_4$$
• Not sure whether I have digested the question correctly. Why not consider a uniform $(0.5,1)$ prior? – TPArrow Aug 29 at 10:23
## 2 Answers
As already noticed by @whuber in a comment to answer by @BruceET, this is not really a Bayesian scenario, since you don't seem to mention any data (nor any likelihood).
From what you are saying, you know that $$p \sim \mathsf{Beta}(a, b)$$, and you also know that $$p \ge 1/2$$, which translates to knowing that $$p$$ is distributed according to a beta distribution with parameters $$a,b$$ left-truncated at $$1/2$$.
Same with the Dirichlet distribution: your knowledge that $$p_1+p_2\geq p_3+p_4$$ is a constraint on the distribution, not an "update" of the prior. Moreover, notice that this constraint describes an event that may not always hold under a Dirichlet distribution, so in fact the statements may be contradictory. The statement is, in fact, that $$p_1, p_2, p_3, p_4$$ are distributed according to a distribution similar to the Dirichlet, but constrained.
So...
• If you are saying that for $$p$$ you assume a truncated beta distribution as a prior, and want to use it together with some likelihood function and data, it is no longer conjugate to the binomial distribution, so you would need to use Markov Chain Monte Carlo for estimation. Defining a truncated distribution can be done in any probabilistic programming framework, e.g. Stan, PyMC3, JAGS, etc.
• Same as above applies to the "Dirichlet"-like distribution, but since this is a custom distribution, it would be much more complicated (I have no easy solution for you).
• If you are saying that these facts are the only information that you have and will have, and given this information you want to learn something about the distribution (e.g. expected value, quantiles), then this is a typical case for standard Monte Carlo simulation. For the truncated beta, you could simply use inverse transform sampling, which is a simple and efficient way of sampling. For the "Dirichlet"-like distribution it would, again, be more complicated, but there are many possible approaches, ranging from simple accept-reject sampling to more sophisticated solutions.
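As a concrete illustration of the accept-reject option for the truncated beta (a plain-Python sketch, standard library only — not tied to any particular framework):

```python
import random

random.seed(1)

def truncated_beta(a, b, lo=0.5, n=10_000):
    # naive accept-reject: draw from Beta(a, b), keep only draws >= lo
    out = []
    while len(out) < n:
        x = random.betavariate(a, b)
        if x >= lo:
            out.append(x)
    return out

samples = truncated_beta(3, 4)
assert all(s >= 0.5 for s in samples)
mean = sum(samples) / len(samples)   # Monte Carlo estimate of E[p | p >= 1/2]
assert 0.5 < mean < 1.0
```

For heavier truncation (where most draws are rejected), inverse transform sampling via the beta quantile function would be the more efficient route.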
• I see. Thank you for your comment. I guess I didn't clearly understand the differences between Bayesian inference and conditional distribution. So, the new information $p>\frac{1}{2}$ shouldn't be treated as "data" here. Instead, I should find a conditional distribution conditional on $p>\frac{1}{2}$. Same thing for the dirichlet case. I should find the distribution conditional on $p_1+p_2>p_3+p_4$. And if I want to make further bayesian inferences out of real data, I should consider using simulations as the distribution is no longer a conjugate. Do I understand it correctly? – Andeanlll Aug 30 at 1:45
• @Andeanlll yes. – Tim Aug 30 at 4:34
This is not a standard way to obtain a posterior distribution in Bayesian inference (see Comment by @whuber). However, what about this, for the first part?
"Prior" is $$p \sim \mathsf{Beta}(3, 4).$$ "Data" is that $$p > 1/2.$$ "Posterior" is $$\mathsf{Beta}(3, 4)$$ truncated to $$(1/2, 1).$$
# normalising constant: P(p > 1/2) under the Beta(3, 4) prior
k = 1 - pbeta(.5, 3, 4)
# plot the Beta(3, 4) density truncated to (1/2, 1)
curve(dbeta(x,3,4)*(x>.5)/k, 0, 1, lwd=2,
n = 10001, ylab="Density", main="Posterior")
abline(h=0, col="green2")
Alternatively, in a more standard Bayesian setting, the hint that $$p > 1/2$$ might have come from $$n = 100$$ binomial trials with $$x = 70$$ successes. In that case, the posterior would have been $$\mathsf{Beta}(73, 34).$$
• By definition, the "data" are a realization of the random variable described by the prior. That's not the case here, so it's difficult to see why the information $p\gt 0.5$ can be viewed as data at all. – whuber Aug 29 at 18:51
• Agreed. I probably should have done more than put data in quotes. – BruceET Aug 29 at 18:59 | 2019-10-19 17:59:46 | {"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 0, "mathjax_asciimath": 1, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 22, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.8434260487556458, "perplexity": 410.62076468406303}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": false}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2019-43/segments/1570986697439.41/warc/CC-MAIN-20191019164943-20191019192443-00246.warc.gz"} |
http://algo.inria.fr/seminars/sem97-98/vallee-abstract.html | Brigitte Vall\'ee, D\'epartement d'Informatique, Universit\'e de Caen
Dynamics of the Binary Euclidean Algorithm: Functional Analysis and Operators
We provide here a complete average-case analysis of the binary continued fraction of a random rational with numerator and denominator odd and less than $N$. We analyse the three main parameters of the binary continued fraction expansion: the height, the number of steps of the Binary Euclidean algorithm, and finally the sum of the exponents of powers of 2 contained in the numerators of the binary continued fraction. The average values of these parameters are shown to be asymptotic to $A_i \log N$, and the three constants $A_i$ are related to the invariant measure of the Perron-Frobenius operator linked to this dynamical system. The Binary Euclidean algorithm was previously studied in 1976 by Brent, who provided a partial analysis based on a heuristic model and some unproven conjecture. He analysed only the second parameter, i.e. the number of steps of the algorithm. Our methods are quite different, proven without any heuristic hypothesis or conjecture, and more general, since they allow one to study all the parameters of the binary continued fraction expansion.
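For readers unfamiliar with the algorithm being analysed, here is a standard sketch of the binary Euclidean (Stein's) algorithm in Python:

```python
import math

def binary_gcd(u, v):
    # binary Euclidean (Stein's) algorithm: only shifts and subtractions
    if u == 0:
        return v
    if v == 0:
        return u
    shift = 0
    while (u | v) & 1 == 0:          # factor out common powers of 2
        u >>= 1
        v >>= 1
        shift += 1
    while u & 1 == 0:
        u >>= 1
    while v:
        while v & 1 == 0:
            v >>= 1
        if u > v:
            u, v = v, u
        v -= u                       # difference of two odd numbers is even
    return u << shift

for a, b in [(48, 18), (35, 21), (101, 103), (0, 7)]:
    assert binary_gcd(a, b) == math.gcd(a, b)
```

The "sum of the exponents of powers of 2" analysed in the abstract corresponds to the total number of halving steps the algorithm performs.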
| 2020-03-28 23:04:40 | {"extraction_info": {"found_math": false, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.8229166865348816, "perplexity": 344.81259747300976}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2020-16/segments/1585370493121.36/warc/CC-MAIN-20200328225036-20200329015036-00308.warc.gz"} |
https://stats.stackexchange.com/questions/582757/interpretation-of-default-type-iii-sums-in-squares-in-r | # Interpretation of default Type III Sums in Squares in R
Many posts (i.e. here and here) discuss how the Type III sums of squares produced by car::Anova() in R are incorrect or nonsensical under R's default model parameterization, "contr.treatment", in which the first level of the categorical variable is set as the reference/intercept and each remaining level is compared to it. One way to obtain the "correct" Type III sums of squares is to change the model parameterization to sum coding, "contr.sum".
What remains unclear to me, however, is whether there might be situations in which the default "incorrect" Type III sums of squares under the "contr.treatment" parameterization might be useful. What is the interpretation of these sums of squares for the main effects, and are they always nonsensical and useless?
Below is some example R code for illustration:
# Random data for 2-way balanced ANOVA design
set.seed(2964)
df <- data.frame(response = rnorm(n = 32, mean = seq(10, 25, 5)),
varA = factor(rep(paste0("A", 1:4), times = 4)),
varB = factor(rep(paste0("B", 1:2), each = 8)))
# Type I Sums of Squares
anova(lm(response ~ varA*varB, data = df))
# Default "incorrect" Type III Sums of Squares
car::Anova(lm(response ~ varA*varB, data = df),
type = "III",
test.statistic = "F")
# "Correct" Type III Sums of Squares produced under sum coding
# These match the Type I Sums of Squares.
car::Anova(lm(response ~ varA*varB, data = df,
contrasts = list(varA = contr.sum, varB = contr.sum)),
type = "III",
test.statistic = "F")
### Type I Sums of Squares
Analysis of Variance Table
Response: response
Df Sum Sq Mean Sq F value Pr(>F)
varA 3 991.97 330.66 330.6870 < 0.0000000000000002 ***
varB 1 0.15 0.15 0.1494 0.70247
varA:varB 3 9.23 3.08 3.0783 0.04666 *
Residuals 24 24.00 1.00
---
Signif. codes: 0 ‘***’ 0.001 ‘**’ 0.01 ‘*’ 0.05 ‘.’ 0.1 ‘ ’ 1
### Default "incorrect" Type III Sums of Squares
Anova Table (Type III tests)
Response: response
Sum Sq Df F value Pr(>F)
(Intercept) 322.18 1 322.2059 0.000000000000002052 ***
varA 545.68 3 181.9090 < 0.00000000000000022 ***
varB 6.03 1 6.0350 0.02164 *
varA:varB 9.23 3 3.0783 0.04666 *
Residuals 24.00 24
---
Signif. codes: 0 ‘***’ 0.001 ‘**’ 0.01 ‘*’ 0.05 ‘.’ 0.1 ‘ ’ 1
### "Correct" Type III Sums of Squares produced under sum coding
These correctly match the Type I Sums of Squares for this balanced case.
Anova Table (Type III tests)
Response: response
Sum Sq Df F value Pr(>F)
(Intercept) 9666.9 1 9667.7398 < 0.0000000000000002 ***
varA 992.0 3 330.6870 < 0.0000000000000002 ***
varB 0.1 1 0.1494 0.70247
varA:varB 9.2 3 3.0783 0.04666 *
Residuals 24.0 24
---
Signif. codes: 0 ‘***’ 0.001 ‘**’ 0.01 ‘*’ 0.05 ‘.’ 0.1 ‘ ’ 1 | 2022-08-09 08:19:00 | {"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 1, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.7341524362564087, "perplexity": 3987.6910448664653}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2022-33/segments/1659882570913.16/warc/CC-MAIN-20220809064307-20220809094307-00683.warc.gz"} |
https://economics.stackexchange.com/questions/6090/what-is-the-correct-way-to-calculate-a-selling-price-from-margin-and-a-cost | # What is the correct way to calculate a selling price from margin and a cost?
There seems to be two formula to calculate a selling price.
The first formula that I came upon would be
Cost * (1+Margin) = Selling price
Example : 10 * (1+0.25) = 12.5
However, a lot of people uses the following :
Cost / (1 - Margin) = Selling price
Example : 10 / (1-0.25) = 13.33...
This gives a very different number.
As far as I know, the margin and selling prices can be anything you want for most products.
Also note that the second formula fails if you have a margin of more than 100%.
My question is: why is the second formula the most popular? Is it only a gimmick to get a bigger price?
Am I missing something subtle at work here ?
Also, what should I do if the wanted margin is more than 100% ?
Thanks!
The "margin" is a portion of the selling price. It is defined as
$$\text {margin} \equiv \frac {P-C}{P}$$
From the above definition, we see that the margin cannot exceed $100\%$.
If one has the cost and he wants to calculate the price in order to have a specific margin, one must calculate
$$P(\text {margin}^*) = \frac {C}{1-\text {margin}^*}$$
The first formula written in the question is wrong, because it uses "margin", but it would become correct if instead one used the concept of "markup".
The "markup" is defined as the percentage increase of cost in order to determine the selling price:
$$\text {markup} \equiv \frac {P-C}{C}$$
It is easy to obtain that the relation between them is
$$\text {margin} = \frac {\text {markup}}{1+\text {markup}}$$ | 2020-02-19 04:23:52 | {"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 1, "mathjax_asciimath": 1, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.8572338223457336, "perplexity": 525.539627318053}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2020-10/segments/1581875144027.33/warc/CC-MAIN-20200219030731-20200219060731-00095.warc.gz"} |
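The two pricing formulas and the margin/markup relation are easy to check numerically (a Python sketch; the function names are illustrative):

```python
def price_from_margin(cost, margin):
    # margin = (P - C) / P, so it must be below 100%
    return cost / (1 - margin)

def price_from_markup(cost, markup):
    # markup = (P - C) / C, and it can exceed 100%
    return cost * (1 + markup)

cost = 10
p = price_from_margin(cost, 0.25)            # 13.33..., as in the question
assert abs(p * (1 - 0.25) - cost) < 1e-9

margin = (p - cost) / p
markup = (p - cost) / cost
# the relation between the two notions:
assert abs(margin - markup / (1 + markup)) < 1e-12
```

So if the "wanted margin" exceeds 100%, the quantity intended is really a markup, and the first formula applies.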
https://socratic.org/questions/what-is-chebyshev-s-inequality | # What is Chebyshev's inequality?
Jan 21, 2016
Chebyshev’s inequality says that at least $1 - \frac{1}{K^2}$ of the data from a sample must fall within K standard deviations of the mean, where K is any real number greater than one.
#### Explanation:
Let play with a few value of K:
1. $K = 2$: $1 - 1/K^2 = 1 - 1/4 = 3/4 = 75\%$. So Chebyshev’s inequality tells us that at least 75% of the data values of any distribution must be within two standard deviations of the mean.
2. $K = 3$: $1 - 1/K^2 = 1 - 1/9 = 8/9 \approx 89\%$. This time at least 89% of the data values lie within three standard deviations of the mean.
3. $K = 4$: $1 - 1/K^2 = 1 - 1/16 = 15/16 = 93.75\%$. Now at least 93.75% of the data lies within four standard deviations of the mean.
This is comparable to the empirical rule for the Normal distribution: 68% of the data lies within one standard deviation of the mean, 95% within two standard deviations, and approximately 99.7% within three. The difference is that Chebyshev's theorem gives a (weaker) guarantee that holds for any distribution.
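The bound is easy to check empirically — here on a (decidedly non-Normal) exponential sample in Python:

```python
import random

random.seed(0)
data = [random.expovariate(1.0) for _ in range(100_000)]  # heavily skewed
mu = sum(data) / len(data)
sd = (sum((x - mu) ** 2 for x in data) / len(data)) ** 0.5

for k in (2, 3, 4):
    within = sum(abs(x - mu) <= k * sd for x in data) / len(data)
    # Chebyshev guarantees at least 1 - 1/k^2 of the data within k sds
    assert within >= 1 - 1 / k**2
```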
https://gmatclub.com/forum/in-x-y-plane-does-parabola-y-ax-2-bx-c-intersect-to-x-axis-260907.html | GMAT Question of the Day - Daily to your Mailbox; hard ones only
It is currently 22 Jun 2018, 04:33
### GMAT Club Daily Prep
#### Thank you for using the timer - this advanced tool can estimate your performance and suggest more practice questions. We have subscribed you to Daily Prep Questions via email.
Customized
for You
we will pick new questions that match your level based on your Timer History
Track
every week, we’ll send you an estimated GMAT score based on your performance
Practice
Pays
we will pick new questions that match your level based on your Timer History
# Events & Promotions
###### Events & Promotions in June
Open Detailed Calendar
# In the x-y plane, does the parabola y=ax^2+bx+c intersect the x-axis?
Manager
Joined: 03 Mar 2018
Posts: 170
In the x-y plane, does the parabola y=ax^2+bx+c intersect the x-axis?
06 Mar 2018, 11:40
In the x-y plane, does the parabola $$y=ax^2+bx+c$$ intersect the x-axis?
1) b= -2
2) c<0
DS Forum Moderator
Joined: 22 Aug 2013
Posts: 1209
Location: India
Re: In the x-y plane, does the parabola y=ax^2+bx+c intersect the x-axis?
07 Mar 2018, 02:47
itisSheldon wrote:
In the x-y plane, does the parabola $$y=ax^2+bx+c$$ intersect the x-axis?
1) b= -2
2) c<0
In the case of a parabola (quadratic expression) ax^2 + bx + c:
If parabola intersects x-axis at two distinct points, it means the quadratic expression has two distinct real roots, and this happens when (b^2 - 4ac) > 0
If parabola intersects x-axis at one point only, it means the quadratic expression has one real root, and this happens when (b^2 - 4ac) = 0
If parabola does not intersect x-axis at all, it means the quadratic expression has no real root, and this happens when (b^2 - 4ac) < 0
So we need to know the sign of (b^2 - 4ac).
Statement 1
b is negative, but we don't know anything about a and c. Not sufficient.
Statement 2
c is negative, but we don't know anything about a and b. Not sufficient.
Combining the two statements,
b and c are negative, but we don't know the sign of a.
If a is positive, then - 4ac will be positive, and then (b^2 - 4ac) will also be positive and the parabola will intersect x-axis.
If a is negative, then -4ac will be negative, and (b^2 - 4ac) could be negative, positive, or 0 depending on the magnitudes of a, b, c. So we can't say whether the parabola will intersect the x-axis or not.
So not sufficient to determine.
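The case analysis above can be sanity-checked with a short script (illustrative, not part of the original post):

```python
def intersects_x_axis(a, b, c):
    # real roots of a*x^2 + b*x + c = 0 exist iff the discriminant is >= 0
    return b * b - 4 * a * c >= 0

# statements combined: b = -2 and c < 0, but the sign of a is unknown
assert intersects_x_axis(1, -2, -1)          # a > 0: -4ac > 0, always intersects
assert not intersects_x_axis(-2, -2, -1)     # a < 0: D = 4 - 8 < 0, no intersection
assert intersects_x_axis(-0.5, -2, -1)       # a < 0: D = 4 - 2 > 0, intersects
```

Both possibilities occur with a < 0, which is exactly why the combined statements are still not sufficient.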
e-GMAT Representative
Joined: 04 Jan 2015
Posts: 1553
In the x-y plane, does the parabola y=ax^2+bx+c intersect the x-axis?
19 Mar 2018, 13:06
SOLUTION
In the $$x-y$$ plane, the parabola intersects the $$x$$-axis if the equation of the parabola has real roots.
Thus, we have to find:
• Whether $$y=ax^2+bx+c$$ has real roots or not.
Per our conceptual understanding of quadratic equations, a quadratic equation has real roots if the value of the discriminant is greater than or equal to zero.
For equation $$y= ax^2+bx+c$$, the value of $$D$$ is:
• $$D= b^2-4ac$$
Thus, we have to find if the value of $$(b^2-4ac) >=0$$.
Statement-1 "$$b= -2$$"
Since we don’t know the value of $$a$$ and $$c$$, statement 1 alone is not sufficient to answer the question.
Statement-2 "$$c<0$$"
Let us assume $$c=-k$$, where $$k>0$$.
Since we don’t know the value of $$b$$ and $$a$$, statement 2 alone is not sufficient to answer the question.
Combining both the statements:
From both the statements combined, the value of $$D$$ is:
• $$D= (-2)^2 - 4*a*(-k)$$
• $$D= 4+4ak$$
We still do not have the value of “$$a$$”.
Hence, “Statement (1) and (2) together are not sufficient to answer the question”.
https://en.academic.ru/dic.nsf/enwiki/191791/Experience_curve_effects | # Experience curve effects
The learning curve effect and the closely related experience curve effect express the relationship between experience and efficiency. As individuals and/or organizations get more experienced at a task, they usually become more efficient at it. Both concepts originate in the adage, "practice makes perfect", and both concepts are opposite to the popular misapprehension that a "steep" learning curve means that something is hard to learn. In fact, a "steep" learning curve implies that something gets easier quickly. (For other uses of the expression "steep learning curve", see Learning curve.)
The learning curve
The "learning curve" was first described by the 19th Century German psychologist Hermann Ebbinghaus according to the difficulty of memorizing varying numbers of verbal stimuli.
Later the term acquired a broader meaning. The learning curve effect states that the more times a task has been performed, the less time will be required on each subsequent iteration. This relationship was probably first quantified in 1936 at Wright-Patterson Air Force Base in the United States [Wright, T.P., Factors Affecting the Cost of Airplanes, "Journal of Aeronautical Sciences", 3(4) (1936): 122-128.], where it was determined that every time total aircraft production doubled, the required labour time decreased by 10 to 15 percent. Subsequent empirical studies from other industries have yielded different values ranging from only a couple of percent up to 30 percent, but in most cases it is a constant percentage: it did not vary at different scales of operation.

Learning curve theory states that as the quantity of items produced doubles, costs decrease at a predictable rate. This predictable rate is described by Equations 1 and 2. The two equations have the same form and differ only in the definition of the Y term, but this difference can make a significant difference in the outcome of an estimate.
1. This equation describes the basis for what is called the unit curve. In this equation, Y represents the cost of a specified unit in a production run. For example, If a production run has generated 200 units, the total cost can be derived by taking the equation below and applying it 200 times (for units 1 to 200) and then summing the 200 values. This is cumbersome and requires the use of a computer or published tables of predetermined values.
:$Y_x = K x^{\log_2 b}$ [Chase, Richard B., Operations Management for Competitive Advantage, 9th ed., McGraw Hill/Irwin, ISBN 0-07-118030-3] where
* $K$ is the number of direct labour hours to produce the first unit
* $Y_x$ is the number of direct labour hours to produce the xth unit
* $x$ is the unit number
* $b$ is the learning percentage
2. This equation describes the basis for the cumulative average or cum average curve. In this equation, Y represents the average cost of different quantities (X) of units. The significance of the "cum" in cum average is that the average costs are computed for X cumulative units. Therefore, the total cost for X units is the product of X times the cum average cost. For example, to compute the total costs of units 1 to 200, an analyst could compute the cumulative average cost of unit 200 and multiply this value by 200. This is a much easier calculation than in the case of the unit curve.

:$\overline{Y_x} = \frac{K}{x}\cdot\frac{x^{1+\log_2 b}}{1+\log_2 b}$
where
* $K$ is the number of direct labour hours to produce the first unit
* $Y_x$ is the average number of direct labour hours to produce the first $x$ units
* $x$ is the unit number
* $b$ is the learning percentage
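The two formulas can be compared numerically. Below is a minimal sketch; the function names and the sample values ($K = 100$ hours, $b = 0.85$, i.e. an 85% curve) are illustrative assumptions, not taken from any source:

```python
import math

def unit_hours(K, x, b):
    """Unit curve (Equation 1): direct labour hours for the x-th unit."""
    return K * x ** math.log2(b)

def cum_average_hours(K, x, b):
    """Cum-average curve (Equation 2): average hours over the first x units."""
    e = 1 + math.log2(b)
    return K * x ** e / (e * x)

K, b = 100.0, 0.85
# Doubling cumulative output multiplies the unit cost by b:
print(unit_hours(K, 200, b) / unit_hours(K, 100, b))   # 0.85 (exactly 2**log2(b))
# Exact total for 200 units (summing the unit curve) vs. the cum-average shortcut:
total_exact = sum(unit_hours(K, i, b) for i in range(1, 201))
total_approx = 200 * cum_average_hours(K, 200, b)
print(total_exact, total_approx)  # the cum-average shortcut closely tracks the sum
```

This illustrates the point made above: the unit-curve total needs 200 evaluations, while the cum-average form gives the total in one multiplication.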
The experience curve
The experience curve effect is broader in scope than the learning curve effect encompassing far more than just labor time. It states that the more often a task is performed the lower will be the cost of doing it. The task can be the production of any good or service. Each time cumulative volume doubles, value added costs (including administration, marketing, distribution, and manufacturing) fall by a constant and predictable percentage.
In the late 1960s Bruce Henderson of the Boston Consulting Group (BCG) began to emphasize the implications of the experience curve for strategy. [Hax, Arnoldo C.; Majluf, Nicolas S. (October 1982). Competitive cost dynamics: the experience curve. "Interfaces" 12: 50–61.] Research by BCG in the 1970s observed experience curve effects for various industries that ranged from 10 to 25 percent.
These effects are often expressed graphically. The curve is plotted with cumulative units produced on the horizontal axis and unit cost on the vertical axis. A curve that depicts a 15% cost reduction for every doubling of output is called an “85% experience curve”, indicating that unit costs drop to 85% of their original level.
Mathematically the experience curve is described by a power law function sometimes referred to as Henderson's Law:
*$C_n = C_1 n^{-a}$ [Grant, Robert M. (2004). "Contemporary Strategy Analysis". Blackwell Publishing. ISBN 1-4051-1999-3.] where
*$C_1$ is the cost of the first unit of production
*$C_n$ is the cost of the "n"th unit of production
*$n$ is the cumulative volume of production
*$a$ is the elasticity of cost with regard to output
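Conversely, given two observed (cumulative volume, unit cost) points, the elasticity $a$ and the progress ratio per doubling can be recovered by taking logarithms of Henderson's Law. A hedged sketch — the function names and the data points are made up for illustration:

```python
import math

def experience_exponent(n1, c1, n2, c2):
    """Elasticity a in C_n = C_1 * n**(-a), fitted from two (volume, cost) points."""
    return -math.log(c2 / c1) / math.log(n2 / n1)

def progress_ratio(a):
    """Cost multiplier per doubling of cumulative volume (e.g. 0.85 for an 85% curve)."""
    return 2 ** (-a)

# Hypothetical data: unit cost fell from 100 to 85 as cumulative volume went 100 -> 200
a = experience_exponent(100, 100.0, 200, 85.0)
print(progress_ratio(a))  # 0.85 -> an "85% experience curve"
```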
Reasons for the effect
Examples
NASA quotes the following experience curves: [http://cost.jsc.nasa.gov/learn.html Learning Curve Calculator]
*Aerospace 85%
*Shipbuilding 80-85%
*Complex machine tools for new models 75-85%
*Repetitive electronics manufacturing 90-95%
*Repetitive machining or punch-press operations 90-95%
*Repetitive electrical operations 75-85%
*Repetitive welding operations 90%
*Raw materials 93-96%
*Purchased Parts 85-88%
There are a number of reasons why the experience curve and learning curve apply in most situations. They include:
* Labour efficiency - Workers become physically more dexterous. They become mentally more confident and spend less time hesitating, learning, experimenting, or making mistakes. Over time they learn short-cuts and improvements. This applies to all employees and managers, not just those directly involved in production.
* Standardization, specialization, and methods improvements - As processes, parts, and products become more standardized, efficiency tends to increase. When employees specialize in a limited set of tasks, they gain more experience with these tasks and operate at a faster rate.
* Technology-Driven Learning - Automated production technology and information technology can introduce efficiencies as they are implemented and people learn how to use them efficiently and effectively.
* Better use of equipment - as total production has increased, manufacturing equipment will have been more fully exploited, lowering fully accounted unit costs. In addition, purchase of more productive equipment can be justifiable.
* Changes in the resource mix - As a company acquires experience, it can alter its mix of inputs and thereby become more efficient.
* Product redesign - As the manufacturers and consumers have more experience with the product, they can usually find improvements. This filters through to the manufacturing process. A good example of this is Cadillac's testing of various "bells and whistles" specialty accessories. The ones that did not break became mass produced in other General Motors products; the ones that didn't stand the test of user "beatings" were discontinued, saving the car company money. As General Motors produced more cars, they learned how to best produce products that work for the least money.
* Value chain effects - Experience curve effects are not limited to the company. Suppliers and distributors will also ride down the learning curve, making the whole value chain more efficient.
* Network-building and use-cost reductions - As a product enters more widespread use, the consumer uses it more efficiently because they're familiar with it. One fax machine in the world can do nothing, but if everyone has one, they build an increasingly efficient network of communications. Another example is email accounts; the more there are, the more efficient the network is, the lower everyone's cost per utility of using it.
* Shared experience effects - Experience curve effects are reinforced when two or more products share a common activity or resource. Any efficiency learned from one product can be applied to the other products.
Experience curve discontinuities
The experience curve effect can on occasion come to an abrupt stop. Graphically, the curve is truncated. Existing processes become obsolete and the firm must upgrade to remain competitive. The upgrade will mean the old experience curve will be replaced by a new one. This occurs when:
* Competitors introduce new products or processes that you must respond to
* Key suppliers have much bigger customers that determine the price of products and services, and that becomes the main cost driver for the product
* Technological change requires that you or your suppliers change processes
* The experience curve strategies must be re-evaluated because
**they are leading to price wars
**they are not producing a marketing mix that the market values
Strategic consequences of the effect
The BCG strategists examined the consequences of the experience effect for businesses. They concluded that because relatively low cost of operations is a very powerful strategic advantage, firms should capitalize on these learning and experience effects. [Henderson, Bruce (1974). The Experience Curve Reviewed: V. Price Stability. "Perspectives" No. 149. The Boston Consulting Group.]
Today we recognize that there are other strategies that are just as effective as cost leadership, so we need not limit ourselves to this one path. See for example Porter's generic strategies, which present product differentiation and focused market segmentation as two alternatives to cost leadership.
One consequence of the experience curve effect is that cost savings should be passed on as price decreases rather than kept as profit margin increases. The BCG strategists felt that maintaining a relatively high price, although very profitable in the short run, spelled disaster for the strategy in the long run. They felt that it encouraged competitors to enter the market, triggering a steep price decline and a competitive shakeout. If prices were reduced as unit costs fell (due to experience curve effects), then competitive entry would be discouraged and one's market share maintained. Using this strategy, a firm could always stay one step ahead of new or existing rivals.
Criticisms
Some authors claim that in most organizations it is impossible to quantify the effects. They claim that experience effects are so closely intertwined with economies of scale that it is impossible to separate the two. In theory we can say that economies of scale are those efficiencies that arise from an increased scale of production, and that experience effects are those efficiencies that arise from the learning and experience gained from repeated activities, but in practice the two mirror each other: growth of experience coincides with increased production. Economies of scale should be considered one of the reasons why experience effects exist. Likewise, experience effects are one of the reasons why economies of scale exist. This makes assigning a numerical value to either of them difficult.
Others claim that it is a mistake to see either learning curve effects or experience curve effects as a given. They stress that these effects are not a universal law or even a strong tendency in nature. In fact, they claim that costs, if not managed, will tend to rise. Any experience effects that have been achieved result from a concerted effort by all those involved. They see the effect as an opportunity that management can create, rather than a general characteristic of organizations.
Another factor may be the attitude of the individuals involved. A strong negative attitude may negate any learning effect. Conversely a positive attitude may reinforce the effect.
See also
* Economies of scale
* Hermann Ebbinghaus
* Management
* Marketing strategies
* Porter generic strategies
* Strategic planning
* Gordon Moore's Law of affordable computing performance growth
* Mark Kryder's Law of magnetic disk storage growth
* Jakob Nielsen's Law of wired bandwidth growth
* Martin Cooper's Law of simultaneous wireless conversation capacity growth
References
Books and articles
* Theodore Paul Wright, (1936) Factors Affecting the Cost of Airplanes, "Journal of the Aeronautical Sciences", 3(4), Feb 1936, pp. 122–128
* W. Hirschmann, (1964) Profit from the Learning Curve, "Harvard Business Review", Jan-Feb 1964
* Boston Consulting Group, (1972) "Perspectives on Experience", Boston, Mass.
* William Abernathy and Kenneth Wayne, (1974) Limits to the Learning Curve, "Harvard Business Review", Sept-Oct 1974
* Walter Kiechel III, (1981) The Decline of the Experience Curve, "Fortune", October 5 1981
* George S. Day and David Bernard Montgomery, (1983) Diagnosing the Experience Curve, "Journal of Marketing", vol 47, Spring 1983
* Pankaj Ghemawat, (1985) Building Strategy on the Experience Curve, "Harvard Business Review", vol 42, March-April 1985.
* C.J. Teplitz, (1991) The Learning Curve Deskbook: A Reference Guide to Theory, Calculations, and Applications. New York: Quorum Books. xix, 288 p.
* Phillip F. Ostwald, (1992) "Engineering Cost Estimating", 3/E, Prentice Hall, ISBN 0-13-276627-2
* Geoffrey F. Davies, (2004) "Economia: New Economic Systems to Empower People and Support the Living World", ABC Books, ISBN 0-7333-1298-5.
*Pierre Le Morvan and Barbara Stock, (2005) Medical Learning Curves and the Kantian Ideal, "The Journal of Medical Ethics", vol 31, 2005
* [http://cost.jsc.nasa.gov/learn.html Learning curve calculator] (Accessed March 24, 2007). [broken link]
* http://fast.faa.gov/pricing/98-30c18.htm
Wikimedia Foundation. 2010.
https://meritnation.sg/ask-answer/question/please-solve-my-problems/symmetry/14453621 | Thank You
Dear Student
Q 12
Below is the answer of Q 12
First find the prime factors of 10224 to check whether it is a perfect square:
10224 ÷ 2 = 5112
5112÷2 = 2556
2556÷2 = 1278
1278÷2 = 639
639 ÷ 3 = 213
213 ÷ 3 = 71
Thus the prime factors are 2×2×2×2×3×3×71
For a perfect square, every prime factor must occur an even number of times. Here we have (2×2) × (2×2) × (3×3) × 71, so the factor 71 is unpaired. Multiplying by 71 gives 10224 × 71 = 725904 = 852², a perfect square; thus 71 is the smallest number by which 10224 must be multiplied to obtain a perfect square.
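The same reasoning can be checked mechanically. The sketch below (the function names are my own) factors a number and multiplies together the primes that occur with odd exponent:

```python
import math

def prime_factors(n):
    """Return the prime factorisation of n as a dict {prime: exponent}."""
    factors, d = {}, 2
    while d * d <= n:
        while n % d == 0:
            factors[d] = factors.get(d, 0) + 1
            n //= d
        d += 1
    if n > 1:
        factors[n] = factors.get(n, 0) + 1
    return factors

def square_multiplier(n):
    """Smallest k such that n*k is a perfect square: product of odd-exponent primes."""
    k = 1
    for p, e in prime_factors(n).items():
        if e % 2 == 1:
            k *= p
    return k

print(prime_factors(10224))      # {2: 4, 3: 2, 71: 1}
print(square_multiplier(10224))  # 71
print(math.isqrt(10224 * 71))    # 852, since 10224 * 71 = 852**2
```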
https://proxies123.com/tag/mathbb/ | ## reference request – About the maxima of injective holomorphic maps in $\mathbb{C}^n$
## Real analysis: Cauchy sequence with $\{x_n : n \in \mathbb{N}\}$ not closed converges
Suppose $(x_n)_{n \in \mathbb{N}}$ is a Cauchy sequence and $A = \{x_n : n \in \mathbb{N}\}$ is not closed. Prove that there is $x \in X$ such that $x_n \longrightarrow x$.
Since $A$ is not closed:
$$\forall y \in A,\ \exists\, \varepsilon > 0 \text{ s.t. } S(y, \varepsilon) \subset A$$
and since $(x_n)$ is Cauchy:
$$\forall \varepsilon > 0,\ \exists\, n_0 \in \mathbb{N} \text{ s.t. } \forall n, m \geq n_0 : \rho(x_n, x_m) < \varepsilon$$
But I cannot see a way to connect these two facts to prove convergence. Some initial hints would be much appreciated.
## gr.group theory: a finite generating set for the group $\mathrm{SL}_{2}(\mathbb{F}_{2}[t, t^{-1}])$
I guess this was known before (the paper you mention deals with finite presentability, a more difficult problem).
Denoting by $(E_{ij})$ the standard basis of the matrix space, and setting $e_{ij}(t) = I + tE_{ij}$ and $d_{ij}(a) = aE_{ii} + a^{-1}E_{jj}$, the group $\mathrm{SL}_2(\mathbf{F}_p[t,t^{-1}])$ for $p$ prime is generated by $\{e_{12}(1), e_{12}(t), e_{21}(1), e_{21}(t), d_{12}(t)\}$.
Indeed, $\mathbf{F}_p[t,t^{-1}]$ being a Euclidean ring, $\mathrm{SL}_2(\mathbf{F}_p[t,t^{-1}])$ is generated by elementary matrices. Using conjugation by $d_{12}(t)$ together with the four elementary generators given, one obtains all the other elementary matrices.
(It is not hard to check that $e_{21}(t)$ is redundant in this generating set, and also that no two of these 5 elements generate the group.)
## nt.number theory – Is it true that $\{x^3 + 2y^3 + 3z^3 : x, y, z \in \mathbb{Z}\} = \mathbb{Z}$?
It is easy to see that no integer congruent to $4$ or $-4$ modulo $9$ can be written as the sum of three integer cubes. In view of this and of Question 331163, I proposed the following conjecture in March 2019.
Conjecture. Every integer $n$ can be written as $x^3 + 2y^3 + 3z^3$ with $x, y, z$ integers. That is,
$$\{x^3 + 2y^3 + 3z^3 : x, y, z \in \mathbb{Z}\} = \mathbb{Z}.$$
This conjecture has an interesting application. Under the conjecture, my result on Hilbert's Tenth Problem implies that there is no effective algorithm to decide, for a general polynomial $P(x_1, \ldots, x_{33})$ with integer coefficients, whether the Diophantine equation
$$P(x_1^3, \ldots, x_{33}^3) = 0$$
has integer solutions.
Recently, my PhD student Chen Wang checked my conjecture extensively. He found that
$$\{0, \ldots, 5000\} \setminus \{x^3 + 2y^3 + 3z^3 : x, y, z \in \{-30000, \ldots, 30000\}\}$$
contains only four numbers: $36, 288, 2304, 4500$.
Note that
$$288 = 2^3 \times 36, \quad 2304 = 4^3 \times 36, \quad 4500 = 5^3 \times 36.$$
Thus, to finish the verification of the conjecture for all $n = 0, \ldots, 5000$, it remains to find $x, y, z \in \mathbb{Z}$ with $x^3 + 2y^3 + 3z^3 = 36$.
QUESTION. Are there integers $x, y, z$ satisfying $x^3 + 2y^3 + 3z^3 = 36$?
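For small $n$ and small search bounds, representations are easy to find by brute force; the point of the question is that $n = 36$ resists even very large bounds. A rough illustrative search (nowhere near the scale of Wang's computation; the function name is mine):

```python
def represent(n, bound=10):
    """Brute-force search for integers x, y, z in [-bound, bound] with
    x**3 + 2*y**3 + 3*z**3 == n; returns a witness triple or None."""
    for x in range(-bound, bound + 1):
        for y in range(-bound, bound + 1):
            for z in range(-bound, bound + 1):
                if x**3 + 2 * y**3 + 3 * z**3 == n:
                    return (x, y, z)
    return None

print(represent(7))       # some triple with x^3 + 2y^3 + 3z^3 = 7
print(represent(36, 6))   # None: no small witness for 36, matching Wang's search
```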
## nt.number theory – How many $\mathbb{Q}$-bases can be constructed from the vector set $\log(1), \cdots, \log(n)$?
How many $\mathbb{Q}$-bases can be constructed from the set of vectors $\log(1), \cdots, \log(n)$?
Data for $n = 1, 2, 3, \cdots$ computed with Sage:
$$1, 1, 1, 2, 2, 5, 5, 7, 11, 25, 25, 38, 38, 84, 150, 178, 178, 235, 235$$
Context:
The real numbers $\log(p_1), \cdots, \log(p_r)$, where $p_i$ is the $i$-th prime, are known to be linearly independent over the rationals $\mathbb{Q}$.
Example:
[{}] -> for n = 1, we have a(1) = 1 basis; we count {} as the basis of V_0 = {0}
[{2}] -> for n = 2, we have a(2) = 1 basis, which is {2}
[{2, 3}] -> for n = 3, we have a(3) = 1 basis, which is {2, 3}
[{2, 3}, {3, 4}] -> for n = 4, we have a(4) = 2 bases, which are {2, 3}, {3, 4}
[{2, 3, 5}, {3, 4, 5}] -> a(5) = 2;
[{2, 3, 5}, {2, 5, 6}, {3, 4, 5}, {3, 5, 6}, {4, 5, 6}] -> a(6) = 5;
[{2, 3, 5, 7}, {2, 5, 6, 7}, {3, 4, 5, 7}, {3, 5, 6, 7}, {4, 5, 6, 7}] -> a(7) = 5.
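The counting behind this data can be reproduced with exact rational linear algebra: $\log k$ corresponds to the exponent vector of the prime factorisation of $k$, and one counts the subsets of full rank. A sketch assuming that model (the function names are mine, not from the original Sage computation):

```python
from fractions import Fraction
from itertools import combinations

def factor_vector(k, primes):
    """Exponent vector of k over the given primes; log(k) = sum e_i * log(p_i)."""
    v = [0] * len(primes)
    for i, p in enumerate(primes):
        while k % p == 0:
            v[i] += 1
            k //= p
    return v

def rank(vectors):
    """Exact rank over Q via Gaussian elimination with Fractions."""
    m = [[Fraction(x) for x in v] for v in vectors]
    r = 0
    for col in range(len(m[0]) if m else 0):
        piv = next((i for i in range(r, len(m)) if m[i][col] != 0), None)
        if piv is None:
            continue
        m[r], m[piv] = m[piv], m[r]
        for i in range(len(m)):
            if i != r and m[i][col] != 0:
                f = m[i][col] / m[r][col]
                m[i] = [a - f * b for a, b in zip(m[i], m[r])]
        r += 1
    return r

def count_bases(n):
    """Number of Q-bases of span{log 1, ..., log n} chosen from {log 1, ..., log n}."""
    primes = [p for p in range(2, n + 1) if all(p % q for q in range(2, p))]
    vecs = [factor_vector(k, primes) for k in range(1, n + 1)]
    d = rank(vecs) if primes else 0
    if d == 0:
        return 1  # the empty set is the basis of the zero space
    return sum(1 for c in combinations(vecs, d) if rank(list(c)) == d)

print([count_bases(n) for n in range(1, 8)])  # [1, 1, 1, 2, 2, 5, 5]
```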
## linear algebra – Generators for the semigroup $\mathrm{SL}(n, \mathbb{N})$
For $2 \times 2$ matrices we have the following result.
Any matrix in $\mathrm{SL}(2, \mathbb{Z})$ with non-negative entries can be obtained from $\mathrm{Id}_2$ by repeatedly adding one column to another.
Proof: It suffices to prove that if $\begin{pmatrix} a & b \\ c & d \end{pmatrix} \in \mathrm{SL}(2, \mathbb{Z}) \setminus \{\mathrm{Id}_2\}$ has non-negative entries,
then
$$\begin{pmatrix} a-b & b \\ c-d & d \end{pmatrix} \quad \text{or} \quad \begin{pmatrix} a & b-a \\ c & d-c \end{pmatrix}$$
has non-negative entries too. After this one can finish by induction. To prove the claim, suppose $a$ is the largest entry of the matrix. If $a = 1$ then we get one of the matrices
$$\begin{pmatrix} 1 & 0 \\ 1 & 1 \end{pmatrix}, \quad \begin{pmatrix} 1 & 1 \\ 0 & 1 \end{pmatrix} \tag{$\star$}$$
and we are done. Otherwise $a > 1$, hence
$$d - c \leq d - bc/a = (ad - bc)/a = 1/a,$$
from which $d - c \leq 0$, and we are in the first case.
The cases where the maximum entry is different from $a$ are handled similarly. $\blacksquare$
This can be restated by saying that the elementary matrices in $(\star)$ generate the semigroup $\mathrm{SL}(2, \mathbb{N})$.
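The induction in the proof can be run in reverse: starting from a non-negative matrix of determinant 1, repeatedly subtract one column from the other, keeping all entries non-negative, until the identity is reached. A small sketch of this reduction (the function name is mine):

```python
def reduce_to_identity(M):
    """Undo column additions in a non-negative SL(2,Z) matrix.

    The lemma above guarantees that for M != Id one of the two column
    subtractions keeps all entries non-negative, so the loop terminates."""
    (a, b), (c, d) = M
    assert a * d - b * c == 1 and min(a, b, c, d) >= 0
    steps = 0
    while (a, b, c, d) != (1, 0, 0, 1):
        if a - b >= 0 and c - d >= 0:   # subtract column 2 from column 1
            a, c = a - b, c - d
        else:                           # subtract column 1 from column 2
            b, d = b - a, d - c
        assert min(a, b, c, d) >= 0
        steps += 1
    return steps

# Example: [[5, 2], [2, 1]] is built from Id_2 by 4 column additions
print(reduce_to_identity([[5, 2], [2, 1]]))  # 4
```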
My question is:
Is it true that $\mathrm{SL}(n, \mathbb{N})$ can be generated by elementary matrices, similarly to what happens in the case $n = 2$?
I guess this has already been discussed in the literature, so a good reference would suffice.
## reference request – Involutions in $\mathbb{F}_p[[x]]$
If $p \neq 2$, an involution in $\mathbb{F}_p[[x]]$ with zero constant term must have $\pm 1$ as the coefficient of $x$. If the leading coefficient is $1$, we can see inductively that every higher coefficient is zero (writing $f(x) = x + ax^n + \dots$, we get $f(f(x)) = x + 2ax^n + \dots$, so $a = 0$). Of course, this trivial involution can be lifted.
On the other hand, if the leading coefficient is $-1$, then $xf(x)$ is a power series with leading term $-x^2$. Consider the unique continuous algebra homomorphism $\mathbb{F}_p[[x]] \to \mathbb{F}_p[[x]]$ sending $x$ to $f(x)$. Under this automorphism, $-xf(x)$ is fixed, so $\sqrt{-xf(x)}$ (unique up to $\pm 1$) is sent either to itself or to minus itself. Since its leading term is non-zero, it is sent to minus itself. Up to reparameterization by the invertible power series $\sqrt{-xf(x)}$, this is the standard involution $x \mapsto -x$. We can lift $\sqrt{-xf(x)}$ to the integers, where the power series of the inverse function will also be integral, and use it to lift the involution.
In characteristic two, the situation is worse, since there are more involutions. For example, we can take $f(x) = \left(x^n/(1+x^n)\right)^{1/n}$ for any odd $n$. Some of these can be lifted (for example, if $n = 1$ we can take $f(x) = -x/(1-x)$), but I do not know whether all of them can be.
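The characteristic-two example with $n = 1$, i.e. $f(x) = x/(1+x) = x + x^2 + x^3 + \dots$ over $\mathbb{F}_2$, can be checked by truncated power-series arithmetic. A minimal sketch (the truncation order and helper names are arbitrary choices of mine):

```python
N = 16  # truncation order: work in F_2[[x]] modulo x^N

def mul(f, g):
    """Multiply truncated power series over F_2 (coefficients are 0/1)."""
    h = [0] * N
    for i, a in enumerate(f):
        if a:
            for j, b in enumerate(g):
                if b and i + j < N:
                    h[i + j] ^= 1
    return h

def compose(f, g):
    """f(g(x)) truncated; assumes g has zero constant term."""
    result, power = [0] * N, [1] + [0] * (N - 1)
    for c in f:
        if c:
            result = [r ^ p for r, p in zip(result, power)]
        power = mul(power, g)
    return result

# f(x) = x/(1+x) = x + x^2 + x^3 + ...  (geometric series over F_2)
f = [0] + [1] * (N - 1)
x = [0, 1] + [0] * (N - 2)
print(compose(f, f) == x)  # True: f is an involution in characteristic 2
```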
## pr.probability: Taylor's theorem for a composition with $\min: \mathbb{R}^2 \to \mathbb{R}$ and Lebesgue almost-everywhere differentiability
Let
• $f \in C^3(\mathbb{R})$ with $f > 0$
• $g := \ln f$ (and assume $g'$ is Lipschitz continuous)
• $n \in \mathbb{N}$, $s(x, y) := \sum_{i=1}^n \left(g(y_i) - g(x_i)\right)$ and $h(x, y) := \min\left(1, e^{s(x,\:y)}\right)$ for $x, y \in \mathbb{R}^n$
• $x \in \mathbb{R}^n$, and let $Y$ be an $\mathbb{R}^n$-valued normally distributed random variable on a probability space $(\Omega, \mathcal{A}, \operatorname{P})$ with mean vector $x$ and covariance matrix $\sigma I_n$ for some $\sigma > 0$ ($I_n$ denoting the $n \times n$ identity matrix)
I want to make the following argument rigorous: by Taylor's theorem, $$\begin{equation}\begin{split}h(x, Y) - h(x, (x_1, Y_2, \ldots, Y_n)) &= \frac{\partial h}{\partial y_1}(x, (x_1, Y_2, \ldots, Y_n))(Y_1 - x_1) \\ &+ \frac12 \frac{\partial^2 h}{\partial y_1^2}(x, (Z_1, Y_2, \ldots, Y_n))(Y_1 - x_1)^2\end{split}\tag1\end{equation}$$ for some real-valued random variable $Z_1$ with $Z_1 \in [\min(x_1, Y_1), \max(x_1, Y_1)]$. Thus, $$\begin{equation}\begin{split}\left.\operatorname{E}\left[h(x, (y_1, Y_2, \ldots, Y_n))\right]\right|_{y_1\::=\:Y_1} &= \operatorname{E}\left[\min\left(1, e^A\right)\right] + g'(x_1)\operatorname{E}\left[1_{\left\{\:A\:<\:0\:\right\}}e^A\right](Y_1 - x_1) \\ &+ \frac12\left(g''(Z_1) + \left|g'(Z_1)\right|^2\right)\left.\operatorname{E}\left[1_{\left\{\:B\:<\:0\:\right\}}e^B\right]\right|_{z_1\::=\:Z_1}(Y_1 - x_1)^2.\end{split}\tag2\end{equation}$$ Above I wrote $A := \sum_{i=2}^n (g(Y_i) - g(x_i))$ and $B := g(z_1) - g(x_1) + \sum_{i=2}^n (g(Y_i) - g(x_i))$ to make the equations more readable (they should be substituted back where they occur).
Question 1: There are two issues. The first is that $(x, y) \mapsto \min(x, y)$ is partially differentiable in both arguments except on the diagonal $\Delta_2 := \left\{(x, y) \in \mathbb{R}^2 : x = y\right\}$. Can we conclude the existence of $Z_1$ anyway? Note that $$\frac{\partial h}{\partial y_1}(x, y) = \begin{cases}g'(y_1)e^{s(x,\:y)} & \text{, if } s(x, y) < 0\\ 0 & \text{, if } s(x, y) > 0\end{cases}\tag3$$ and $$\frac{\partial^2 h}{\partial y_1^2}(x, y) = \begin{cases}(g''(y_1) + |g'(y_1)|^2)e^{s(x,\:y)} & \text{, if } s(x, y) < 0\\ 0 & \text{, if } s(x, y) > 0\end{cases}\tag4$$ for all $y \in \mathbb{R}^n$.
Question 2: The second issue is the case $s(x, y) = 0$. For $(3)$ to hold, we have to show that the probability of the corresponding event is $0$ (this seems related to whether the set on which the function fails to be differentiable has Lebesgue measure $0$; and it is clear that $\Delta_2$ has Lebesgue measure $0$). How can we do that?
While it is clear that $h$ is partially differentiable with respect to the second variable except on a countable set, it is not clear to me why $h$ should even be twice differentiable with respect to the second variable outside a set of (at least) Lebesgue measure $0$ (see this related question).
## Lie algebra of $\left(\begin{array}{cc} a & b \\ 0 & a^2 \end{array}\right)$ in $GL_2(\mathbb{R})$
I am working on the question:
Let $G$ be the set of invertible real matrices of the form $\left[\begin{array}{cc} a & b \\ 0 & a^2 \end{array}\right]$. Determine the Lie algebra $L$ of $G$, and compute the bracket on $L$.
I am familiar with how to derive the Lie algebra for a linear group like $U_n$, $SU_n$, etc., but I am not sure what to do in a more explicit case like this.
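One standard approach, sketched here as a hint and not necessarily the intended solution: differentiate smooth curves through the identity. Take

$$\gamma(t) = \begin{pmatrix} a(t) & b(t) \\ 0 & a(t)^2 \end{pmatrix}, \qquad a(0) = 1,\ b(0) = 0, \qquad \gamma'(0) = \begin{pmatrix} \alpha & \beta \\ 0 & 2\alpha \end{pmatrix}$$

with $\alpha = a'(0)$, $\beta = b'(0)$, which suggests $L = \left\{\begin{pmatrix} \alpha & \beta \\ 0 & 2\alpha \end{pmatrix} : \alpha, \beta \in \mathbb{R}\right\}$. For $X = \begin{pmatrix} \alpha & \beta \\ 0 & 2\alpha \end{pmatrix}$ and $Y = \begin{pmatrix} \gamma & \delta \\ 0 & 2\gamma \end{pmatrix}$ a direct computation gives

$$[X, Y] = XY - YX = \begin{pmatrix} 0 & (\alpha\delta + 2\beta\gamma) - (\gamma\beta + 2\delta\alpha) \\ 0 & 0 \end{pmatrix} = (\beta\gamma - \alpha\delta)\begin{pmatrix} 0 & 1 \\ 0 & 0 \end{pmatrix}.$$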
## Self-similar lattices in $\mathbb{R}^d$
Let $\Lambda \subset \mathbb{R}^d$ be a discrete subgroup; up to decreasing $d$ we assume it is of the form $A\mathbb{Z}^d$ with $A \in GL(d)$. Up to dilation we assume that the shortest vector in $\Lambda \setminus \{0\}$ has length $1$.
I would like to call such a $\Lambda$ "self-similar" if for each $p \in \Lambda \setminus \{0\}$ one can complete $p$ to a sublattice $\Lambda' \subset \Lambda$ of the form $\Lambda' = \lambda R \Lambda$ with $\lambda = |p|$ and $R \in O(d)$ (i.e. $\Lambda'$ is a rotated, dilated copy of $\Lambda$ such that $p$ is one of the shortest non-zero vectors in $\Lambda'$).
My motivation was that the lattices $\mathbb{Z}^2$ and $A_2$ (the equilateral triangular lattice) in $d = 2$ are self-similar, and I wondered how rare this property is: what are other examples of self-similar lattices in other dimensions?
Pointers to possibly related concepts are very welcome. | 2019-05-21 10:35:40 | {"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 0, "mathjax_asciimath": 1, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 148, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.7585539221763611, "perplexity": 257.1069347884509}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.3, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2019-22/segments/1558232256314.52/warc/CC-MAIN-20190521102417-20190521124417-00012.warc.gz"} |
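For $\mathbb{Z}^2$ the definition can be verified concretely: for $p = (a, b)$ the scaled rotation $\lambda R = \begin{pmatrix} a & -b \\ b & a \end{pmatrix}$ has integer entries, so $\lambda R\,\mathbb{Z}^2 \subset \mathbb{Z}^2$, and it sends $(1, 0)$ to $p$. A small sanity check (the function name is mine):

```python
import math

def check_self_similar_Z2(p):
    """For p = (a, b) in Z^2, the matrix lambda*R = [[a, -b], [b, a]] (a rotation
    scaled by |p|) has integer columns, so lambda*R*Z^2 is a sublattice of Z^2,
    and it sends (1, 0) to p as the definition requires."""
    a, b = p
    cols = [(a, b), (-b, a)]  # integer columns => lambda*R*Z^2 subset Z^2
    lam = math.hypot(a, b)
    # the columns are orthogonal of common length |p|, so lambda*R = |p| * rotation,
    # and the shortest non-zero vector of the sublattice has length |p|
    assert cols[0][0] * cols[1][0] + cols[0][1] * cols[1][1] == 0
    assert math.isclose(math.hypot(*cols[0]), lam)
    assert math.isclose(math.hypot(*cols[1]), lam)
    return True

print(all(check_self_similar_Z2(p) for p in [(1, 0), (1, 1), (2, 1), (3, 5)]))  # True
```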
https://mathoverflow.net/questions/307274/expressive-power-of-fo-with-mu | # Expressive power of FO with $\mu$
Let us consider the first-order logic extended with the least fixed point operator (FO+LFP). That is, together with the usual first-order formulas, we also have formulas of the form:
$$\mu X[\overline{y}] . \phi(X, \overline{y})$$
where $X$ (which must occur positively in $\phi$) is a "predicate" variable of arity equal to the length of the sequence of "parameters" $\overline{y}$. The semantics of this formula (in a given algebraic structure) is the least set $X^*$ such that:
$$X^*(\overline{y}) \Leftrightarrow \phi(X^*, \overline{y})$$
For example, if $R$ is a binary relation symbol, then:
$$\mu X[y_1, y_2] . R(y_1, y_2) \vee (\exists_z R(y_1, z) \wedge X(z, y_2))$$
defines the transitive closure of $R$.
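On a finite structure, the least fixed point of this formula can be computed by iterating the (monotone) operator from the empty relation until it stabilizes; a minimal sketch in Python (the names are mine):

```python
def lfp_transitive_closure(R, domain):
    """Least fixed point of  X(y1, y2) <-> R(y1, y2) or exists z. R(y1, z) and X(z, y2),
    computed by iterating from the empty relation until nothing changes."""
    X = set()
    while True:
        X_next = {(y1, y2)
                  for y1 in domain for y2 in domain
                  if (y1, y2) in R
                  or any((y1, z) in R and (z, y2) in X for z in domain)}
        if X_next == X:
            return X
        X = X_next

R = {(1, 2), (2, 3), (3, 4)}
print(sorted(lfp_transitive_closure(R, {1, 2, 3, 4})))
# [(1, 2), (1, 3), (1, 4), (2, 3), (2, 4), (3, 4)]
```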
If $A$ is an algebraic structure, let us write $\mathit{Th_{lfp}}(A)$ for the first-order theory of $A$ extended with the least fixed point operator (i.e. the set of all FO+LFP sentences that are true in $A$).
Does there exist an algebraic structure $A$ such that both of the following hold:
• $\mathit{Th_{lfp}}(A)$ is decidable
• FO+LFP is strictly more expressive than FO over $A$ (i.e. there is a FO+LFP formula that is not equivalent to FO formula over $A$)?
An example of a structure that satisfies the first property (but does not satisfy the second) is the structure of rational numbers with the natural ordering $\langle\mathcal{Q}, \leq\rangle$.
An example of a structure that satisfies the second property (but does not satisfy the first) is the structure of natural numbers with the natural ordering $\langle\mathcal{N}, \leq\rangle$.
One intuition is that there should be no such structure $A$ --- if $A$ defines arbitrarily long well-founded orders, then the theory of $A$ should be undecidable; and if it does not, then LFP seems to be pretty useless.
Another intuition is that there might be such a structure, because the above intuition is difficult to formalize, thus may contain essential holes.
## 2 Answers
Your intuition is right, and the way to formalize it is by Moschovakis's stage comparison theorem.
Suppose $\psi$ is an (FO+LFP)-formula which starts with a $\mu$ operator. So $\psi$ has the form $\mu X[y].\phi(X,y)$. We can define the semantics of $\psi$ in a structure $M$ by transfinite induction. Set $X^*_0 = \emptyset$. Given $X^*_\alpha$, define $b\in X^*_{\alpha+1}$ if and only if $M\models \phi(X^*_\alpha,b)$. And for a limit ordinal $\beta$, define $X^*_\beta = \bigcup_{\alpha<\beta} X^*_\alpha$. Then there is some ordinal $\xi$ such that $X^*_\xi = X^*_{\xi+1}$, and we can set $X^* = X^*_\xi$. The least such $\xi$ is called the closure ordinal of $\psi$ in $M$.
For $b\in X^*$, we define the $\psi$-stage of $b$, $|b|_\psi$, to be the least ordinal $\alpha$ such that $b\in X^*_\alpha$. Then the stage comparison theorem says that the relation $|b|_\psi \leq |c|_\psi$ is itself (FO+LFP)-definable.
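On a finite structure the transfinite induction collapses to an ordinary loop, and the stages $|b|_\psi$ can simply be read off; a small Python sketch (the digraph and predicate are illustrative):

```python
def psi_stages(universe, phi):
    """X*_0 = empty; X*_{a+1} = {b : phi(X*_a, b)}; iterate to the fixed point.
    Returns {b: least stage at which b appears}, i.e. |b|_psi.  On a finite
    structure every closure ordinal is finite, so a plain loop suffices
    (phi must be monotone in X for this to compute the least fixed point)."""
    X, stage, first_seen = set(), 0, {}
    while True:
        stage += 1
        X_next = {b for b in universe if phi(X, b)}
        for b in X_next - X:
            first_seen[b] = stage
        if X_next == X:
            return first_seen
        X = X_next

# reachability from node 0 in a small digraph: phi(X, b) says "b is 0, or b
# has a predecessor already in X"; the stage of b is its distance from 0 plus 1
E = {(0, 1), (1, 2), (2, 3)}
st = psi_stages(range(4), lambda X, b: b == 0 or any(a in X for (a, c) in E if c == b))
print(st)  # {0: 1, 1: 2, 2: 3, 3: 4}
```

Here stage comparison $|b|_\psi \leq |c|_\psi$ is just a comparison of integers; the content of Moschovakis's theorem is that even on infinite structures this comparison relation is (FO+LFP)-definable.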
Now for any structure $M$, we have two possibilities:
• Case 1: For every (FO+LFP) formula $\psi$ which starts with a $\mu$ operator, there is a finite bound on the $\psi$-stages of elements of $X^*$ in $M$. Equivalently, the closure ordinal of $\psi$ is finite. Then by unraveling the meaning of $\psi$, the $\mu X[y]$ operator can be removed. By induction on the complexity of the uses of $\mu$ in formulas, $\psi$ is equivalent to a first-order formula, and (FO+LFP) is no more expressive than FO on $M$.
• Case 2: There is an (FO+LFP) formula $\psi$ which starts with a $\mu$ operator, such that the closure ordinal $\xi$ of $\psi$ is infinite. Then, by stage comparison, the (FO+LFP)-theory of $M$ interprets $(\xi,\leq)$, and it's not hard to see that it interprets $(\mathbb{N},\leq,S)$ (defining the successor function $S$ from $\leq$ and defining the finite ordinals in $\xi$ as the closure of $0$ under $S$) and then $(\mathbb{N},\leq,S,+,*)$ (carrying out the usual inductive definitions of $+$ from $S$ and $*$ from $+$), and hence is undecidable.
You might be interested in the influential paper When is arithmetic possible? by Gregory McColm. McColm conjectured that for any structure in Case 2 (which contains a formula $\psi$ with infinite closure ordinal), (FO+LFP) is strictly more expressive than FO. This conjecture was later refuted by Gurevich, Immerman, and Shelah in their paper McColm's conjecture.
• Thank you for the answer and for the references --- that is exactly what I was looking for :-) – Michal R. Przybylek Aug 2 '18 at 14:37
You also might be interested in the paper The Role of Decidability in First Order Separations over Classes of Finite Structures by Steven Lindell and Scott Weinstein. While they don't address the above question directly, they investigate connections between the decidability of the first-order theory of a (family of) structures and the power of first order vs. LFP definability. For example, they prove that "$\text{Th}(\mathcal{C}$) is decidable" implies $\text{FO}(\mathcal{C}) \neq \text{FO+LFP}(\mathcal{C})$ when $\mathcal{C}$ is a proficient family of structures.
(Here $\text{Th}(\mathcal{C})$ is the set of first-order sentences satisfied by all structures in $\mathcal{C}$. Proficient simply means that there is no finite bound on the closure ordinals of LFP formulas over structures in $\mathcal{C}$.)
https://proofwiki.org/wiki/Definition:Right_Circular_Cone

Definition:Right Circular Cone
Definition
A right circular cone is a cone:
whose base is a circle
in which there is a line perpendicular to the base through its center which passes through the apex of the cone
and which is made by a right-angled triangle turning about one of the sides that form the right angle.
In the words of Euclid:
When, one side of those about the right angle in a right-angled triangle remaining fixed, the triangle is carried round and restored again to the same position from which it began to be moved, the figure so comprehended is a cone.
And, if the straight line which remains fixed be equal to the remaining side about the right angle which is carried round, the cone will be right-angled; if less, obtuse-angled; and if greater, acute-angled.
Parts of Right Circular Cone
Axis
Let $K$ be a right circular cone.
Let point $A$ be the apex of $K$.
Let point $O$ be the center of the base of $K$.
Then the line $AO$ is the axis of $K$.
In the words of Euclid:
The axis of the cone is the straight line which remains fixed and about which the triangle is turned.
Base
Let $\triangle AOB$ be a right-angled triangle such that $\angle AOB$ is the right angle.
Let $K$ be the right circular cone formed by the rotation of $\triangle AOB$ around $OB$.
Let $BC$ be the circle described by $B$.
The base of $K$ is the plane surface enclosed by the circle $BC$.
In the words of Euclid:
And the base is the circle described by the straight line which is carried round.
Directrix
Let $K$ be a right circular cone.
Let $B$ be the base of $K$.
The circumference of $B$ is the directrix of $K$.
Generatrix
Let $K$ be a right circular cone.
Let $A$ be the apex of $K$.
Let $B$ be the base of $K$.
Then a line joining the apex of $K$ to a point on its directrix is a generatrix of $K$.
Opening Angle
Let $K$ be a right circular cone.
Let point $A$ be the apex of $K$.
Let $B$ and $C$ be the endpoints of a diameter of the base of $K$.
Then the angle $\angle BAC$ is the opening angle of $K$.
In the above diagram, $\phi$ is the opening angle of the right circular cone depicted.
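As a quick check on these classifications, the opening angle can be computed from the base radius and the axis height; this formula is inferred from the rotating-triangle definition above, not part of the ProofWiki text:

```python
import math

def opening_angle(radius, height):
    """Opening angle of a right circular cone with the given base radius and
    axis height: twice the apex half-angle, i.e. 2 * atan(radius / height).
    Per Euclid, the cone is right-angled exactly when radius == height."""
    return 2.0 * math.atan2(radius, height)

print(math.degrees(opening_angle(1.0, 1.0)))  # 90.0 -> a right-angled cone
```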
Types of Right Circular Cone
Acute-Angled
Let $K$ be a right circular cone.
Then $K$ is acute-angled if and only if the opening angle of $K$ is an acute angle.
Right-Angled
Let $K$ be a right circular cone.
Then $K$ is right-angled if and only if the opening angle of $K$ is a right angle.
Obtuse-Angled
Let $K$ be a right circular cone.
Then $K$ is obtuse-angled if and only if the opening angle of $K$ is an obtuse angle.
Similar Cones
Let $h_1$ and $h_2$ be the lengths of the axes of two right circular cones.
Let $d_1$ and $d_2$ be the lengths of the diameters of the bases of the two right circular cones.
Then the two right circular cones are similar if and only if:
$\dfrac {h_1} {h_2} = \dfrac {d_1} {d_2}$
In the words of Euclid:
Similar cones and cylinders are those in which the axes and the diameters of the bases are proportional. | 2021-01-16 17:36:58 | {"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.7390100359916687, "perplexity": 229.26672901722148}, "config": {"markdown_headings": false, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2021-04/segments/1610703506832.21/warc/CC-MAIN-20210116165621-20210116195621-00099.warc.gz"} |
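A minimal check of this similarity condition, cross-multiplied so no division is needed:

```python
def similar_cones(h1, d1, h2, d2, tol=1e-9):
    """Euclid's condition h1/h2 == d1/d2 for similar right circular cones,
    tested as h1*d2 == h2*d1 to avoid dividing by zero."""
    lhs, rhs = h1 * d2, h2 * d1
    return abs(lhs - rhs) <= tol * max(abs(lhs), abs(rhs), 1.0)

print(similar_cones(3.0, 2.0, 6.0, 4.0))   # True
print(similar_cones(3.0, 2.0, 6.0, 5.0))   # False
```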
https://www.homebuiltairplanes.com/forums/threads/belt-drives-and-design.36161/page-10#post-607326

# Belt Drives and design
#### wsimpso1
##### Super Moderator
Staff member
Log Member
Pendulum dampers were invented for use on the big radials. Read a first-person account of that here:
Piston Engines
The hockey puck type has been completely eclipsed by the roller-weight type. The hockey puck type was actually available as The Rattler for Detroit V-8's; it replaced the front pulley/harmonic damper. Story was it worked at higher rpm and had rollers tuned to the 4th and 8th orders of rotation (the biggies in V-8's), but it is no longer available.
Roller-weight types have been applied to airplane piston engines since the late 1930's, and are currently in use in some cars, mostly diesels, supplied by ZF-Sachs and LuK. LuK holds the patents, so ZF-Sachs sells under license. There are even multiples of them hung on some large helicopter rotor sets. These gadgets must be designed and developed to rather precisely match the vibe orders you are after, one swing radius per order attacked. Where they are designed/developed to work, they work extremely well, sucking off firing pulses very nicely on diesel engines. I have seen data indicating 95% reduction of the tuned order. But if you need 2x engine order and yours absorbs 2.25x engine order, it is doing just about zero for you. So the design has to be adjusted to be very close to what you need. And once you suck off the biggest order, what used to be the second biggest vibe order now becomes the biggest vibration. It is like the light under the door... LOL.
On the crankshaft, they have small radius and so these tuned counterweights have to be pretty big to be effective. For vehicles with transmissions, these gadgets can go on at larger radius on a flywheel or inside a torque converter and so can be lighter. In automatic transmission vehicles, they actually work better on the wheel downstream of the spring element. Tuning to run in air (manual trans flywheel mounted) is different from tuning in oil (mounted inside torque converter), so be careful to run in the intended fluid or it simply won't work. Also, know that cars and trucks make their big firing pulses during accel, so the manufacturers actually tune the tracks to be ever so slightly above the intended firing order - you get better isolation during accels that way. Buying car systems to attach to your airplane may not be an effective strategy.
Now how would this apply to our problem? Order-tuned torsional pendulums will reduce the order attacked, but not eliminate it. In cars and trucks, they are used to reduce firing order vibration through the drivetrain because the customers can feel that vibe. Turbocharged diesels are particularly bad, so more effort is applied there. They are not being used to suck off driveline resonance in cars and trucks. Resonance is handled by driving resonant frequencies low with arrangement of inertia and springs, adding soft elements, then applying frequency-tuned absorbers where needed.
To have value in an Engine-PSRU-Prop arrangement, the pendulum could be applied to reduce engine firing pulses, but they are still there and so damaging amplification can still occur. The other engine orders are also still there with all their damaging potential should they line up with system vibe modes. Even-fire four-cylinder engines have 2x (compression-firing), 4x (pistons accelerating back and forth in the bores), and a bunch of others due to all sorts of issues that no one has been able to control. So maybe you put the gadget between engine and prop and tune it to absorb the natural frequency between engine and prop. That absorber might have to be big indeed. Maybe you tune the order of the gadget to coincide with the vibe in frequency and rpm. OK, that gets one. Ross and his buddies have already experienced more than one rpm range - these vibes may be the same mode but with a different order coming from engine or the same order from the engine but a different mode. We do not know without more involved modeling and/or instrumentation.
Instead of an order tuned absorber, would not a frequency tuned absorber work better on beating something that happens at a particular frequency? They too exist and are way simpler. Steel sleeve on a rubber sleeve pressed over a shaft, used on front crankshaft pulleys (called harmonic dampener) and driveshafts/halfshafts on some vehicles. But you only get one frequency per gadget, and with a poor scheme, you might have more than one mode you need to beat. Be careful which ones you beat, and which ones you leave.
The really big trick in managing vibration is not in knowing the gadgets that can handle any one mode and forcing function combination, but in having a strategy of knowing all of the vibe modes and exciters of each layout so that you can pick a layout that can then have gadgets added to handle the modes. In road vehicles, we put the clutch or torque converter at the engine with a soft element there to isolate the engine firing order from everything else, then add the gearbox. Layouts with the clutch elsewhere, gearbox remote, etc have way more troublesome situations than what is common in cars and trucks. AWD racecars have been built using unusual configurations and had problems no one ever solved.
Much more complicated schemes exist. A standard helo is one with a piston engine, transmission, flexible shafting to one big rotor, even more flexible shafting to another gearbox and rotor. Turbines appear to have made part of it simpler - no firing pulses - but there are still a bunch of other vibe forcing functions out of the engine, accessories, gearboxes, shafts, and rotor sets. Then there are multiple engine sets, double main rotor helos, and then tilt-rotors - an engine and gearbox on each wing, a rotor on each wing, a shaft system with clutches between the engines for cross driving in an engine-out situation, and tilting the whole thing so it can fly as an airplane and as a helo. And all of these have ground resonance as significant modes too. Ugh.
By comparison we have it easy - put the firing frequency well below idle but safely above cranking speed with a soft element, make everything else stiff enough so the remaining modes are off range high, then see if anything else is hiding, and handle them. Keep your torsional vibe analyst and instrumentation geek in the loop, as you will still need their help at any stage of the process.
Billski
#### dog
##### Well-Known Member
Found this as another comparison of TV "dampers"; it mentions various types.
Also, the "rattler" appears to still be in production.
#### DanH
##### Well-Known Member
Umm, guys, you have a world class mentor who has written endlessly for your education. You have a small bibliography of good textbooks, and simple, effective software tools keyed to being educational. I humbly suggest that you skip the gadgets and magazine ads masked as articles. Work on the basics, just as Bill writes.
Pincraze, you out there? I assume you have now established inertia values for your basic components (with ratio adjustment for the driven side parts), and stiffness for any shafting you have in mind. That's enough for a few first pass Holzers of component arrangements. Reduce it to a three or four element model. Run two, in which you move the soft element location from the flywheel to the upper sprocket. Tell us what you learn.
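For anyone following along, a first-pass Holzer residual sweep of the kind Dan describes takes only a few lines; the inertia and stiffness numbers below are invented placeholders, not plncraze's values:

```python
import math

def holzer_residual(omega, J, k):
    """Holzer table for a free-free torsional chain: assume amplitude 1 at
    station 1, march down the line, and return the residual torque at the far
    end.  Zero residual at a trial omega (rad/s) marks a natural frequency.
    J: station inertias (kg*m^2); k: shaft stiffnesses between stations
    (N*m/rad), with len(k) == len(J) - 1."""
    theta, torque = 1.0, 0.0
    for i, Ji in enumerate(J):
        torque += Ji * omega**2 * theta          # inertia torque at this station
        if i < len(k):
            theta -= torque / k[i]               # twist across the next shaft
    return torque

# three-element model: engine+flywheel, gearset, prop (made-up numbers)
J = [0.05, 0.005, 0.2]
k = [5000.0, 20000.0]
prev = holzer_residual(1.0, J, k)
for w in range(2, 4000):
    cur = holzer_residual(float(w), J, k)
    if prev * cur < 0:                           # residual changed sign
        print("natural frequency near %.0f rad/s (%.0f Hz)" % (w, w / (2 * math.pi)))
    prev = cur
```

A three-inertia free-free chain has two non-rigid-body modes, so the sweep should report two crossings; moving the soft element is just a matter of editing `k` and re-running.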
#### dog
##### Well-Known Member
Umm, guys, you have a world class mentor who has written endlessly for your education. You have a small bibliography of good textbooks, and simple, effective software tools keyed to being educational. I humbly suggest that you skip the gadgets and magazine ads masked as articles. Work on the basics, just as Bill writes.
Well well.
Dan.
My post was on topic, and in good faith.
The "rattler" appears to be directly relevant to the discussion insomuch as it is a type of pendulum damper, is in production, and therefore may be familiar, and also may have relevant engineering data available. The other "ad" linked to is also directly relevant as the whole thing relates to devices of the types being discussed here and while not academic in nature does point to a variety of engineered solutions (gadgets) again perhaps familiar, or useful as illustrative talking points. What I was thinking when I dug into what Billski posted about the "rattler" was that this is the thread that all the other threads want to be, that we have 100 pages to go and that I was bang on topic.
Still do.
#### wsimpso1
##### Super Moderator
Staff member
Log Member
Found this as another comparison of TV "dampers"; it mentions various types.
Also, the "rattler" appears to still be in production.
These are all front end accessory devices intended primarily to protect the crank, cam, and accessories on high revving race engines. Believe me when I tell you, racers run up toward and into crankshaft resonance. 2x firing is usually the forcing function that approaches crank resonance, which is 4x rotation in I-4, 6x rotation in V-6, and 8x rotation in V-8. This is fundamental frequency of the crankshaft by itself that firing is approaching. Virtually all durable engines are designed with the crank fundamental frequency set at least 2-1/2 octaves above max firing frequency - this is to put resonance out of reach of both firing frequency (biggest at WOT) and 2x firing frequency (second biggest and always big). Now make a race car out of that basic architecture, the crank and block are still about that size and resonant frequency, but they tune the engine to run the revs up (More engine turns is more air is more power), and now the 2x firing goes through the resonance frequency. OK, we upgrade the block to four bolt mains or full girdle blocks to make the cases stiffer, then change from cast iron to forged steel crankshaft, then folks make the journals a little bigger, so it can go to higher revs before it self destructs, but it is only a little higher, and you are still getting into crankshaft resonance. Ugh. That is where these guys are trying to give them a little more headroom for revs.
The elastomeric ones work the same way stock does, but tend to be beefier and survive longer as racers. They also tend to hold balance better. They are frequency tuned absorbers, a weight on a spring. They will very nicely pick off a vibration of their tuned frequency and of vibrations at multiples of that frequency. They will also tend to pick off some energy at 1/2 the tuned frequency. Usually they are tuned to the resonant frequencies of the most sensitive gadgets on the front end - cam drive and alternator are favorites. They have little impact on transmitted vibration at the other end of the crank. If you had a particular resonance in the FEAD, you can drive its amplitude down by tuning the proportions of rubber and inertia to match it.
The fluid-filled ones work great for letting drag cars get through runs without broken cranks, and they have a following in circle track racing too. They have this neat effect of taking off the most energy where the most energy is. Good for crank survival, not so good for picking out a resonance between crank and prop and suppressing it. I question their ability to help much when we start talking about the crank/flywheel oscillating opposite the prop and gears. We are talking a couple of orders of magnitude more energy in the oscillation than in a crankshaft singing at its natural frequency.
The Rattler is a hockey puck style order tuned absorber. Nice gadget, glad to see it is back on the market. The problem with hockey puck style absorbers is the radius is small and so is the roller mass, which limits their effectiveness. That was why the big counterweight type came to predominate in big radials and a number of Lyc and Cont boxer engine models - they needed more mass at that radius to pick off most of the vibe. The ones listed are tuned for V-8, V-6, and I-4 and set up for install on GM engines only. The tuning for a number of firing pulses per rev will work for any engine with that same number of cylinders. If one were industrious enough, one could buy the one for the I-4, reverse engineer your way to installing it on an even firing four, and suck off a chunk of the torsional vibration at firing and 2x firing (2x and 4x rotation) at all speeds. Thing is that if you have a natural frequency in your operating range, you will still have a big fraction of the original forcing function exciting it, which can still be amplified and break things.
Then there is the fundamental issue that these guys are on the wrong end of the crankshaft. They are for taming some sort of twisting of the front end of the crankshaft. So even if it does make the front end dead smooth in rotation, the back end still winds up and unwinds for every power stroke and the other torsional orders too. And these gadgets do little for vibration at that end of the crankshaft.
Nope, the fundamental is still to isolate firing from the downstream components with a tuned soft element, then put all of the other orders at or above 2.5x max firing frequency with short stiff shafts, stout housings, and beefy bearings.
Billski
#### dog
##### Well-Known Member
As soon as I saw the picture of the "rattler" I thought I can learn the math for that AND build one as part of a flywheel. Slugs in holes, of a certain mass, at a certain radius, with a specific amount of movement.
Trying to visualise what happens with the "rattler": the crank accelerates on a combustion event and leaves the mass of the rattler behind, reducing the twisting moment on everything up and downstream; then as it decelerates, the rattlers catch up and add their momentum, again reducing the reverse twisting moment.
All of the other pendulum dampers look ultra gadgety and of the not-happening-in-a-shop-near-me variety.
#### plncraze
##### Well-Known Member
HBA Supporter
I am still reading this thread and will post some pictures, drawings and revised Holzer numbers soon.
#### wsimpso1
##### Super Moderator
Staff member
Log Member
As soon as I saw the picture of the "rattler" I thought I can learn the math for that AND build one as part of a flywheel.
Slugs in holes, of a certain mass, as a certain radius, with a specific amount of movement. Trying to visualize what happens with the "rattler", the crank accelerates on a combustion event, and leaves the mass of the rattler behind, reducing the twisting moment on everything up and downstream, then as it decelerates the rattlers catch up and add there momentum and again reduce the reverse twisting moment. All of the other pendulum dampers look ultra gadgety and of the not happening in a shop near me variety.
I searched on "bifilar order absorber pendulum" and got a bunch of stuff right away. It is all pendulum theory when the gravity running the pendulum is from the gadget spinning at engine speed. The more massive the rollers, the more reduction in vibe you get. There is a minimum to make the things effective. Have fun. Oh, PM me if you want me to check your work.
I never designed these; we had two suppliers competing for the business. LuK owned the patents, and ZF-Sachs was paying licensing fees for building on LuK's patents. Looking at the patents might help you. LuK is in Buhl, ZF-Sachs is in Schweinfurt. Some of the history is covered in the "No Short Days" article and in the PhD theses...
Billski
#### Lendo
##### Well-Known Member
Thanks Billski, you certainly know your subject well - I wish I did, but your explanations did cover a lot of detail.
The links Dog posted also gave good information. I notice the Rattler is suggested to eliminate all vibrations; are you suggesting this is misleading?
The Rattler, to my mind, is a pendulum damper that must be tuned to an engine for best results, which is why I suggested bearings for easy adjustment of weight for the novice experimenter. Bearing slap can also be softened with light springs between them, and lever arms can be adjusted by making the retainers a bigger diameter, grooved on a lathe to retain the bearings and springs - all contained with a cover plate.
I'm trying to think simple, adjustable and low cost.
George
#### Lendo
##### Well-Known Member
Dog, if you do any experimentation with my suggestions using bearings, there are some things to consider: pendulum travel, lever arm (radius), and weights, naturally. On pendulum travel, I would split the retainer into sections, maybe 4 for a smaller radius and more for a larger radius, if that makes sense to you.
George
#### wsimpso1
##### Super Moderator
Staff member
Log Member
The Rattler to my mind is a Pendulum Damper, that must be tuned to an engine for best results and why I suggested bearings for easy adjustment of weight for the novice experimenter.
Not the way they work, George.
Two values determine the order that is smoothed out - the swing radius of the pendulum and the radius from crankshaft centerline to the center of pendulum swing. The ratio of these two is adjusted to get the order right, and there is an exact formulation out there that works. Once you have it set to say 2x rotation, you get 2nd order absorption. You can attach that package to any four banger you want and it will absorb 2nd order just fine.
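For the curious, the textbook point-pendulum form of that formulation is n squared = R/L; a sketch with made-up dimensions (real roller-in-hole designs like the Rattler need further corrections for roller size and rolling inertia):

```python
def pendulum_length_for_order(n, R):
    """Point-pendulum approximation for a centrifugal pendulum absorber:
    with the pivot at radius R from the crank centerline, order n is absorbed
    when the swing length L satisfies n**2 = R / L.  Both lengths in the same
    units; the tuning is independent of engine speed."""
    return R / n**2

# even-fire four-cylinder, firing order 2x rotation, pivot at an assumed 60 mm:
print(pendulum_length_for_order(2.0, 60.0), "mm swing radius")  # 15.0 mm
```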
The other major variable is pendulum mass. If it is too small the absorber is not effective. Make it big enough and it works great. There is also a formula for this out there someplace. So, if you have an absorber that works fine on one gasoline four banger, to make it work on a much bigger four banger or a diesel four banger, you will probably need bigger weights.
If you want to smooth out an even fire four cylinder, you will need pendulums tuned to 2x to get firing pulses. The next biggest pulses occur at twice firing, so maybe you want one pendulum out of four tuned to 4x. Usual application in cars and trucks is all four masses at 2x, but folks are continuously on the lookout for an application needing 4x.
Go up to an even-fire V6, and you will need pendulums tuned to 3rd order. Maybe one out of four should be tuned to 6x, maybe not.
V8s require 4x and maybe 8x. The Rattler for V8s appears to have pucks tuned for both orders.
Billski
#### DanH
##### Well-Known Member
Well well.
Dan.
My post was on topic, and in good faith.
Belt drives?
The "rattler" appears to be directly relevant to the discussion insomuch as it is a type of pendulum damper, is in production, and therefore may be familiar, and also may have relevant engineering data available.
Be serious. The original paper on the application of pendulum absorbers to aircraft engines was written by E.S. Taylor of MIT, published in SAE Transactions, March 1936, Vol 38, #3. You'll find pendulum absorbers beginning on page 219 of Mechanical Vibrations, and you can have your very own copy for a whopping $14. You'll find an entire chapter on them in the 1958 edition of A Handbook On Torsional Vibration, plus a foldout with equations for all the different types. I'm sure there are many, many more good reference works available at your local university library. Any of them beat magazine filler...but none have much to do with belt drives. Focus on fundamentals.

#### dog

##### Well-Known Member

Belt drives?

Be serious.

Focus on fundamentals.

That's more like it, Dan. Criticism WITH suggested reading.

I am focusing on fundamentals. Honestly. I need a picture in my head to hang the math on. And even more fundamental for me is to have a physical end goal. The rattler is an excellent example of something to hang fundamental math on, and as a rank beginner it answers another fundamental need, which is to have something to hang the terminology on.

University? An hour drive in good weather. That's two hours' fuel, food in town, parking, oops, then double all that to return the books. Call it $100 plus time.
My phone has brought me the world, cranky professors and all, book recommendations, and Machinery's Handbook is finding its way to me right now, slightly dinged copy, landed for $25 CA. It will just be here one day when I get home.
And speaking of cranky professors, I know just how lucky I am to be taking any part in this discussion.
Very lucky indeed.
Thank you all.
#### DanH
##### Well-Known Member
That's more like it, Dan. Criticism WITH suggested reading.
Let's skip the sensitivity training. I'm incorrigible, and happy about it.
I am focusing on fundamentals.
No, you're not. Re-read this from Bill, just a few posts back...
The really big trick in managing vibration is not in knowing the gadgets that can handle any one mode and forcing function combination, but in having a strategy of knowing all of the vibe modes and exciters of each layout so that you can pick a layout that can then have gadgets added to handle the modes.
"Knowing all the vibe modes and exciters". Start with layouts and frequency prediction. When you have those well in hand, then maybe you come back to pendulums. Until then, it's just a distraction.
University? An hour drive in good weather.
Been there, done that. There's no shortcut.
#### AdrianS
##### Well-Known Member
I assume someone has tried a sprung belt tensioner to make it a soft coupling.
And that it didn't really work well enough.
#### dog
##### Well-Known Member
Let's skip the sensitivity training. I'm incorrigible, and happy about it.
No, you're not. Re-read this from Bill, just a few posts back...
"Knowing all the vibe modes and exciters". Start with layouts and frequency prediction. When you have those well in hand, then maybe you come back to pendulums. Until then, it's just a distraction.
Been there, done that. There's no shortcut.
Dan.
I agree with much of what you say, not quite all though. Having a complete picture in my head is the starting place. I was stuck trying to build an image of the complete system; the "gadget" closed the circle for me. It isn't central, and is exciting only in that it helped me SEE how the inertial and spring elements react with each other and how, one way or the other, we must manage that in a real machine.
I get it, there is no difference between an individual crank throw and any other "gadget". For me this process is the first and steepest part of the learning curve. AND the reason "my" process is pertinent is that there must be others who have floundered around WANTING to do stuff and failed over and over because they have not learned or been TAUGHT to build a picture in their head, hang words (terminology) on that, and then, and only then, assign symbols to those real things, acquire numerical inputs and run the equations.
While not incorrigible, I am violently obstreperous on occasion.
Cheers.
#### Lendo
##### Well-Known Member
Certainly can't knock someone who is well educated and trained to think and understand in a certain way; then there's the rest of us who must struggle with concepts and try to think outside the box, striving for simplicity where just maybe there's none. I know that feeling well.
George
#### DanH
##### Well-Known Member
Is anyone here actually designing and building a belt drive?
#### wsimpso1
##### Super Moderator
Staff member
Log Member
I assume someone has tried a sprung belt tensioner to make it a soft coupling.
And that it didn't really work well enough.
Belt tensioners, where they are used, are mostly there to take up the slack that starts small and increases with use in these systems from wear.
In chain and cog belt drives, there is no tension needed on the undriven side of the system. Engine torque divided by engine side sprocket radius is belt tension, which then is multiplied by prop side sprocket radius to get prop torque. Rpm sees a commensurate decrease in speed. The belt on the drive side of the sprockets is slightly stretched by this force in proportion to the steady state engine torque. That describes the steady state situation.
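Those steady-state relations in one place (units and numbers are my own example, not from the thread):

```python
def belt_drive(engine_torque, r_engine, r_prop):
    """Steady-state cog-belt relations from the paragraph above: drive-side
    tension is engine torque over engine sprocket radius; prop torque is that
    tension times the prop sprocket radius; rpm drops by the same ratio."""
    tension = engine_torque / r_engine
    ratio = r_prop / r_engine
    return tension, engine_torque * ratio, 1.0 / ratio

# 200 N*m (200000 N*mm) engine torque, 50 mm engine sprocket, 100 mm prop sprocket
tension, prop_torque, speed_ratio = belt_drive(200_000.0, 50.0, 100.0)
print(tension, prop_torque, speed_ratio)  # 4000 N tension, 400 N*m at the prop, half speed
```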
Superimpose upon that the oscillations from firing pulses and other parts being accelerated about as the engine spins, and we have the behaviour. How big are the oscillations? Not big at all, but pretty fast, such that the tensioner will have a tough time keeping up with it.
I just put together a little spread sheet and calculated the approximate swing due to 2x and 4x vibe - the big orders made in a four cylinder engine. I get about half a degree of total swing maximum, and it gets smaller as your rpm goes up. In a road vehicle or a machine spinning a hydraulic pump, you can get full engine torque at any speed, so the numbers at low to medium rpm do get bigger, but even then, you are on the order of 3-4 degrees maximum.
How did I do the calculation? 2x is firing pulses and this basically varies with engine torque. Engine mean torque while driving a fixed pitch prop basically goes with prop speed squared. At peak engine torque a naturally aspirated gasoline engine makes about 2500 rad/s/s of firing accel, called alpha. Peak amplitude of this firing pulse is alpha/omega^2 where alpha is in rad/s/s, and omega is firing frequency in rad/s. Turn that into degrees peak by multiplying by 180/PI(), and you know the one-way amplitude. Total for the whole cycle is twice that. Then we have 4x, which has about the same alpha all the time, about one quarter as big as peak at 2x. Again this is all for a four cylinder engine. Add them up, and we know how far to expect the crank sprocket to cycle.
Back to the swing - if the engine side sprocket is only swinging 0.5 degrees and the sprocket radius is 2", that is 0.018" of total swing. Let's also remember that this firing cycle is repeating 33 times a second (four banger, 1000 rpm). If the prop were spinning at an absolutely smooth speed, the belt would lengthen and shorten 0.018" on the drive side and the slack on the other side would change 0.018" with each firing pulse. This is superimposed upon whatever the stretch in the belt is due to steady state torque. The belt is sort of like a drive shaft in that it deforms to take the mean torque and then the forcing function is added to it. The tensioner can keep the slop on the non-driven side from flopping around, but otherwise it is not doing much, and it sure can have a hard time keeping up with the firing rate.
So far, we have been assuming isolation - the prop spins pretty steady compared to the engine. Now let's say we put the whole thing in resonance... The prop and engine are turning together in steady state, but the vibe is engine and prop moving opposite each other. Every firing stroke the engine tries to go faster than the prop, and between firing strokes the engine tries to go slower, and a little bit more energy gets added to the vibration on every cycle. Even at say 8 times the nominal swing, we are still only talking a little over 1/8" of movement - and you are at break-the-drive levels of force... The belt tensioner cannot do much for this problem.
Go to v-belts... The V-belt requires off side tension to make it carry torque and it slips when overloaded. When operated in isolation mode, the analysis looks about the same, except that you have static tension in the belt from mean torque plus the oscillating component. If you get to the point of belt slip, whether from static overload (not enough preload, too much mean torque) or from cyclic overload (firing pulses, resonance) the belt and sheaves get hot, and thermal failure follows shortly. If you get clever enough to make the tensioner fast enough to follow firing pulses, you can avoid the negative accel between firing pulses from being added to the resonant energy, but the positive side will still be there. As long as you have resonance, it will still amplify up to damaging levels. Make the belt slip during resonance, and only do it for a few firing strokes after engine lightoff and you might be able to tolerate the belt life impact. Run there steadily, and you will fail belts.
So either way, to have an unconditionally safe design, you still have to run resonance either an octave or more below idle firing or 2.5 octaves above max firing. Taking a bit more risk, but in a manner that the world has sometimes found acceptable when spinning fans, props, centrifugal pumps, etc., we can have prohibited bands that contain a resonance - above idle but below all nominal flight speeds - and we try hard to pass through those bands quickly. Maybe at power settings above idle and above taxi power, they are OK. Putting it around 50% power is much scarier.
That is my story and I am sticking to it.
Billski
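The estimate in the post above can be reproduced in a few lines. The inputs (2500 rad/s/s of 2x firing acceleration at peak torque, mean torque falling with prop speed squared, 4x at roughly a quarter of peak 2x) are taken from the post itself; the redline figure is an assumption, so read this as a sketch of the arithmetic rather than data for any particular engine.

```python
import math

def firing_swing_deg(rpm, redline_rpm=2700, alpha_peak=2500.0):
    """Approximate total 2x + 4x crank swing (degrees) for a four-cylinder
    engine driving a fixed-pitch prop. redline_rpm is an assumed figure."""
    # Mean torque (hence 2x firing accel) scales with prop speed squared.
    alpha_2x = alpha_peak * (rpm / redline_rpm) ** 2
    alpha_4x = alpha_peak / 4.0                  # roughly constant order
    w_2x = 2.0 * (rpm / 60.0) * 2.0 * math.pi    # 2x order frequency, rad/s
    w_4x = 2.0 * w_2x
    # Peak one-way amplitude of each order is alpha / omega^2 (radians).
    amp = alpha_2x / w_2x ** 2 + alpha_4x / w_4x ** 2
    return 2.0 * amp * 180.0 / math.pi           # total swing, degrees

for rpm in (1000, 1800, 2700):
    print(rpm, round(firing_swing_deg(rpm), 2))
```

Under these assumptions the swing stays on the order of a degree and shrinks as rpm rises, which is the post's point: the motion is tiny but fast.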
Last edited:
#### dog
##### Well-Known Member
Is anyone here actually designing and building a belt drive?
Yup, and @ home even.
Working my ass off lining everything up
and ditching/selling non aviation projects/distractions.
https://www.physicsforums.com/threads/why-is-directional-noise-correlated-noise.521375/

# Why is directional noise correlated noise?
1. Aug 13, 2011
### CantorSet
Hi everyone,
This is not a homework question, but a question I have from reading a signal-processing paper on acoustics.
Suppose there is a sound source in a room $$S(t)$$ and two microphones $$X_1(t)$$ and $$X_2(t)$$. Then the standard acoustic propagation model has that
$$X_1(t) = a_1S(t-\tau_1)+n_1(t)$$
and
$$X_2(t) = a_2S(t-\tau_1)+n_2(t)$$
where $$a_i, \tau_i, n_i$$ account for signal attenuation due to distance, time delay due to distance and noise, respectively.
But the paper says that if we have directional noise in the room (like a ceiling fan), then the noise at the two microphones is correlated, that is $$Corr(n_1(t),n_2(t)) \neq 0$$.
But it seems to me the directionality isn't what's causing the correlation, but more the fact that the noise comes from a fan. That is, if we had an "omnidirectional" fan in the center of the room, the noise between the two microphones would still be correlated.
Also, how does one mathematically represent noise that is directional?
2. Aug 16, 2011
### fleem
In this context, I'd have to assume that by "directional", they simply mean the sound has one point source, as you say (and not that it is anisotropic!). On a side note, practically speaking there will be notable multipath, making things more difficult.
By the way, that Tau in the second equation should be tau sub 2, not Tau sub 1 (unless the microphones are the same distance from the sound source)
To mathematically describe noise that comes from a point source (described here as "directional"), treat it as if it were just another signal source term, albeit an undesirable one.
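fleem's advice, treating the fan as just another (delayed, attenuated) source term, can be illustrated with a toy simulation. All the numbers below (smoothing length, delays, gains) are invented for illustration; the point is only that a shared point source makes the two microphones' noise terms correlated, while purely independent sensor noise is not.

```python
import random

random.seed(0)

def smooth(xs, k=8):
    # Crude low-pass (moving average), so the fan noise has time structure.
    return [sum(xs[i:i + k]) / k for i in range(len(xs) - k)]

def corr(a, b):
    n = len(a)
    ma, mb = sum(a) / n, sum(b) / n
    cov = sum((x - ma) * (y - mb) for x, y in zip(a, b))
    va = sum((x - ma) ** 2 for x in a)
    vb = sum((y - mb) ** 2 for y in b)
    return cov / (va * vb) ** 0.5

N = 5000
fan = smooth([random.gauss(0, 1) for _ in range(N + 50)])  # common source
d1, d2 = 3, 7                                              # propagation delays
n1 = [0.9 * fan[t - d1] + 0.2 * random.gauss(0, 1) for t in range(10, N)]
n2 = [0.7 * fan[t - d2] + 0.2 * random.gauss(0, 1) for t in range(10, N)]
ind1 = [random.gauss(0, 1) for _ in range(N)]              # independent noise
ind2 = [random.gauss(0, 1) for _ in range(N)]

print(round(corr(n1, n2), 2))      # clearly nonzero: shared fan source
print(round(corr(ind1, ind2), 2))  # near zero: no shared source
```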
https://brilliant.org/problems/obvious-is-a-dangerous-word-in-mathematics/

# *Obvious* is a *Dangerous* *word* in Mathematics
Number Theory Level 3
Let $$N$$ denote the two-digit number whose cube root is the square root of the sum of its digits. How many positive divisors does $$N$$ have?
https://academy.vertabelo.com/course/postgresql-recursive-queries/intro-quiz/introduction/introduction | Kickstart 2020 with new opportunities! - hours only!Up to 80% off on all courses and bundles.-Close
Introduction
Instruction
Welcome to our Recursive Queries in PostgreSQL course, where we'll show you how to use recursive queries and Common Table Expressions to make building complex queries easier.
Common Table Expressions (CTEs), often simply called WITH clauses, are essentially just named subqueries. They are a fairly new feature of SQL; with CTEs, you can break a long query into smaller chunks, which makes it more readable. Unlike SQL subqueries, CTEs can be recursive, allowing the traversal of hierarchical models of enormous depth.
WITH clauses have been available in PostgreSQL since version 8.4. This course covers simple CTEs, nested CTEs, and recursive CTEs. You will learn how to manage your SQL queries with CTEs, how and when to nest CTEs, and how to use recursive CTEs to move through hierarchical data models.
In the first part of the course, you will test your knowledge about basic SQL. You need fundamental SQL knowledge to complete this course, which covers some advanced material.
We assume that you know how to filter rows in a table, sort data, and use aggregate functions with GROUP BY and HAVING. You will also need to be able to join tables and use subqueries and set operators.
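For a taste of what's ahead, here is a minimal recursive CTE. The snippet uses Python's built-in sqlite3 module only so that it runs self-contained; the SQL is the same shape you would write in PostgreSQL (the table and column names are made up for the example).

```python
import sqlite3

con = sqlite3.connect(":memory:")
con.executescript("""
    CREATE TABLE employee (id INTEGER, name TEXT, boss_id INTEGER);
    INSERT INTO employee VALUES
        (1, 'Ada', NULL), (2, 'Ben', 1), (3, 'Cy', 2), (4, 'Dee', 2);
""")

# Walk the reporting chain from the root with a recursive CTE.
rows = con.execute("""
    WITH RECURSIVE subordinate(id, name, depth) AS (
        SELECT id, name, 0 FROM employee WHERE boss_id IS NULL
        UNION ALL
        SELECT e.id, e.name, s.depth + 1
        FROM employee e JOIN subordinate s ON e.boss_id = s.id
    )
    SELECT name, depth FROM subordinate ORDER BY depth, name
""").fetchall()
print(rows)
```

The anchor member selects the root of the hierarchy, and the recursive member repeatedly joins back onto the rows produced so far, exactly the pattern the course builds up to.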
Exercise
Click to begin the review.
https://forum.math.toronto.edu/index.php?PHPSESSID=vrle2gjfg1aecbf7scs437j4a1&topic=2679.0;wap2 | APM346-2022S > Final Exam
Alternative solution to the optimization in Problem 2 on the practice final
(1/1)
Weihan Luo:
Could I have solved the maximization/minimization using Lagrange multipliers? In particular, define $g_1(x,y) = y-x$, $g_2(x,y) = y+x$, and $g_3(x,y) = -(x^2+y^2)+1$. Then, a solution $(x^*,y^*)$ necessarily satisfies $$\nabla{u} + \lambda_1\nabla{g_1} + \lambda_2\nabla{g_2} + \lambda_3\nabla{g_3} = 0$$ and $$\lambda_1{g_1} = 0, \lambda_2{g_2}=0, \lambda_3{g_3}=0$$
for some $\lambda_{i} \geq 0$.
Then, after finding the points $(x^*, y^*)$, I need to verify that $$\nabla^2{u} + \lambda_1\nabla^2{g_1} + \lambda_2\nabla^2{g_2} + \lambda_3\nabla^2{g_3}$$ is positive definite on the tangent space $T_{x^*,y^*}D$.
Would this approach also work?
Victor Ivrii:
Yes, it can be solved using Lagrange multipliers. However, note that if the restrictions are $g_1\le 0$, $g_2\le 0$, $g_3\le 0$, you need to consider:

* $g_1=0$ (and $g_2\le 0$, $g_3\le 0$); there will be only one Lagrange multiplier, at $g_1$. The two other such cases work the same way.
* $g_1=g_2=0$ (and $g_3\le 0$); there will be two Lagrange multipliers. The two other such cases work the same way.

It will be, however, more cumbersome. Note that the first kind of case corresponds to two rays and one arc, and the second to two corners.
No, you need not consider quadratic forms after you found all suspicious points. It would serve no purpose.
https://socratic.org/questions/how-do-you-calculate-the-derivative-of-int3-sin-t-4-dt-from-e-x-1

# How do you calculate the derivative of $\int_{e^x}^{1} 3(\sin t)^4\,dt$?
Jun 12, 2015
Use the Fundamental Theorem of Calculus, Part 1 (after rewriting) and use the chain rule.
#### Explanation:
g(x) = $\int_{e^x}^{1} 3 \sin^4 t \,dt$

To use the fundamental theorem in one form, we must have the constant as the lower limit of integration, so we rewrite:

g(x) = $-\int_{1}^{e^x} 3 \sin^4 t \,dt$

With $u = e^x$, we have: $g(x) = h(u) = -\int_{1}^{u} 3 \sin^4 t \,dt$

Use the chain rule to find $g'(x) = h'(u) \frac{du}{dx}$.

By FTC 1, $h'(u) = -3 \sin^4 u$ (the minus sign comes from the rewrite above).

We also have $\frac{du}{dx} = \frac{d}{dx}\left(e^x\right) = e^x$.

So,

$g'(x) = -3 \sin^4 u \, \frac{du}{dx} = -3 \sin^4 \left(e^x\right) \cdot e^x$
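Since the variable limit $e^x$ sits in the lower position, the derivative carries a minus sign: $g'(x) = -3\sin^4(e^x)\cdot e^x$. A quick numerical sketch confirms it (the step counts and test point below are arbitrary choices).

```python
import math

def f(t):
    return 3 * math.sin(t) ** 4

def g(x, steps=20000):
    # Midpoint-rule approximation of the integral from e**x to 1.
    a, b = math.exp(x), 1.0
    h = (b - a) / steps
    return sum(f(a + (i + 0.5) * h) for i in range(steps)) * h

x = 0.5
h = 1e-5
numeric = (g(x + h) - g(x - h)) / (2 * h)            # central difference
formula = -3 * math.sin(math.exp(x)) ** 4 * math.exp(x)
print(numeric, formula)
```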
https://www.zbmath.org/?q=an%3A1270.06006

# zbMATH — the first resource for mathematics
Representations of MV-algebras by Hilbert-space effects. (English) Zbl 1270.06006
Summary: It is shown that for every Archimedean MV-effect algebra $$M$$ (equivalently, every Archimedean MV-algebra) there is an injective MV-algebra morphism into the MV-algebra of all multiplication operators between the zero and identity operator on $$\ell_{2}(\mathcal{S}_{0})$$, where $$\mathcal{S}_{0}$$ is an ordering set of extremal states (state morphisms) on $$M$$.
##### MSC:
- 06D35 MV-algebras
- 81P10 Logical foundations of quantum mechanics; quantum logic (quantum-theoretic aspects)
##### References:
[1] Bennett, M.K.; Foulis, D.J., Phi-symmetric effect algebras, Found. Phys., 25, 1699-1722 (1995)
[2] Cattaneo, G.; Giuntini, R.; Pulmannová, S., Pre-BZ and degenerate BZ-posets: applications to fuzzy sets and unsharp quantum theories, Found. Phys., 30, 1765-1799 (2000)
[3] Chang, C.C., Algebraic analysis of many-valued logics, Trans. Am. Math. Soc., 88, 467-490 (1958) · Zbl 0084.00704
[4] Chovanec, F.; Kôpka, F., D-lattices, Int. J. Theor. Phys., 34, 1297-1302 (1995) · Zbl 0840.03046
[5] Cignoli, R.; D'Ottaviano, I.M.L.; Mundici, D.: Algebraic Foundations of Many-Valued Reasoning. Kluwer, Dordrecht (2000) · Zbl 0937.06009
[6] Dvurečenskij, A.; Pulmannová, S.: New Trends in Quantum Structures. Kluwer, Dordrecht (2000)
[7] Foulis, D.; Bennett, M.K., Effect algebras and unsharp quantum logics, Found. Phys., 24, 1325-1346 (1994) · Zbl 1213.06004
[8] Giuntini, R.; Greuling, H., Toward a formal language for unsharp properties, Found. Phys., 19, 931-945 (1989)
[9] Gudder, S.P., Effect algebras are not adequate models for quantum mechanics, Found. Phys., 40, 1566-1577 (2010) · Zbl 1218.81010
[10] Jenča, G.; Pulmannová, S., Orthocomplete effect algebras, Proc. Am. Math. Soc., 131, 2663-2671 (2003) · Zbl 1019.03046
[11] Kadison, R.V., Order properties of bounded self-adjoint operators, Proc. AMS, 2, 506-510 (1951) · Zbl 0043.11501
[12] Kôpka, F.; Chovanec, F., D-posets, Math. Slovaca, 44, 21-34 (1994) · Zbl 0789.03048
[13] Mundici, D., Interpretation of AF C*-algebras in Łukasiewicz sentential calculus, J. Funct. Anal., 65, 15-63 (1986) · Zbl 0597.46059
[14] Pulmannová, S., Compatibility and decompositions of effects, J. Math. Phys., 43, 2817-2830 (2002) · Zbl 1059.81016
[15] Pulmannová, S., On fuzzy hidden variables, Fuzzy Sets Syst., 155, 119-137 (2005) · Zbl 1079.81008
[16] Riečanová, Z.; Zajac, M., Hilbert-space effect-representations of effect algebras, Rep. Math. Phys., 40, 1566-1575 (2010)
[17] Varadarajan, V.S.: Geometry of Quantum Theory. Springer, New York (1985) · Zbl 0581.46061
This reference list is based on information provided by the publisher or from digital mathematics libraries. Its items are heuristically matched to zbMATH identifiers and may contain data conversion errors. It attempts to reflect the references listed in the original paper as accurately as possible without claiming the completeness or perfect precision of the matching.
https://zem.com/en/blog/qr-overloading

# QR Code Series: The Basics (Part 1)
For one of our offline problem-solving functionalities, we've been looking into QR codes. During this process we started wondering: how much data can you actually fit into a QR code (practically)? Taking this practicality into account, we set out with the following question: how can we get the maximum amount of data into a single QR code that is scannable with any mobile phone's native/built-in scanner and can parse a URL? In this first instalment, we will explore the various QR code variations and their quirks and features. Read the second instalment here!
A 21x21, 25x25, and 29x29 QR code. Source: Wikipedia
## The standards
There are a few different QR code versions available. These are denoted by an integer and a letter, 25-H for example. The integer denotes the size of the QR code via the formula $4 \times V + 17$, where $V$ is the version of the QR code. A version 25 QR code therefore has dimensions of 117x117. The letter in the version denotes the amount of error correction that is built into the QR code. An H error correction gives you 30% data-byte restore capacity. For this project we've selected the 40-L QR code, giving us a QR code of 177x177 and 7% data-byte restoration. According to Wikipedia, the available character storage capacity in a 40-L QR code is separated into four categories:
### Maximum QR Storage capacity (version 40-L QR code)
Input mode Max. characters Bits/char. Possible characters with default encoding
Numeric only 7,089 $3\frac{1}{3}$ 0, 1, 2, 3, 4, 5, 6, 7, 8, 9
Alphanumeric 4,296 $5\frac{1}{2}$ 0–9, A–Z (upper-case only), space, $, %, *, +, -, ., /, : Binary/byte 2,953$8$ISO 8859-1 Kanji/kana 1,817$13$Shift JIS X 0208 From: https://en.wikipedia.org/wiki/QR_code on 2021-11-10 When looking at the table above some interesting values present themselves, and it becomes evident that Wikipedia was not entirely accurate. What are 1/3 or half bits and what can we do with them? Some digging and calculating resulted in the following table for 40-L QR codes (kanji/kana have been omitted): Input Mode Max char. Bits/char. Base Total Bits Actual total bits Group Size Total bits Max value Max alphabet Values not used Numeric only 7.089$3\frac{1}{3}$10 23.630 23.549 3 10 1024 1000 24 Alphanumeric 4.296$5\frac{1}{2}$45 23.628 23.593 2 11 2048 2025 23 Binary/byte 2.953$8$256 23.624 23.624 1 8 256 256 0 The group size provides the explanation for the fractal-bit issue. In order to obtain unfractured bits, 3 characters are needed in the case of numeric input, and 2 are needed for alphanumeric input. It becomes evident that quite some storage is lost on numeric and alphanumeric input. This is due to the fact that the bits per char. are not exactly$3\frac{1}{3}$, but actually$3.321928$. To get this actual bits per character value, we need to calculate the logarithm of base 2 (we are encoding bits with another base). This results in$\log_{2}10 = 3.321928$for numeric and$\log_{2}45 = 5.491853$for alphanumeric. The binary/byte input mode wins out automatically here due to$\log_{2}256 = 8$resulting in 0 lost bits. ## Getting something in there Okay, so we've done some pretty boring maths and established that the binary/byte input mode is the way to go to get the maximum out of our QR code with 23.624 bits. But wait, at the start of this article we mentioned something about practicality. As it turns out, iOS doesn't recognize QR codes with more than 4000 characters in the URL. 
This means that even though the QR standard would allow us to use 7.089 characters, an iPhone couldn't even read it. Taking the iPhone's limitation into account we get the following available bits: •$4000 * \log_{2}10 \approx 13287$bits of numeric input •$4000 * \log_{2}45 \approx 21967$bits of alphanumeric input •$2953 * \log_{2}256 = 23624\$ bits of binary/byte input
With the limited characters, binary wins again! It's a bad day to be an alphanumeric character :^( So now all we have to do is encode a lot of data into an url and then encode it into a binary QR code. In the next instalment we discover if that is as easy as it sounds. | 2022-09-25 17:48:06 | {"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.31553560495376587, "perplexity": 2247.094043540143}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2022-40/segments/1664030334591.19/warc/CC-MAIN-20220925162915-20220925192915-00141.warc.gz"} |
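As a sanity check, the capacity arithmetic from this post fits in a few lines of Python. The per-mode limits are the published 40-L figures quoted above; nothing here queries a real QR library.

```python
import math

def qr_side(version):
    # Modules per side for a given QR version: 4 * V + 17.
    return 4 * version + 17

# (max characters in a 40-L code, alphabet size) per input mode
modes = {
    "numeric": (7089, 10),
    "alphanumeric": (4296, 45),
    "byte": (2953, 256),
}

print(qr_side(40))  # side length of a version 40 code
for name, (chars, base) in modes.items():
    # Information content in bits: characters * log2(alphabet size)
    print(name, round(chars * math.log2(base)))
```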
https://physics.com.hk/2010/02/01/

# Pyramid Selling
(An: Perhaps the best teacher in your mind should not only pass on academic knowledge and exam technique, but should have broader and deeper learning. But people of broad and deep learning will not naturally go to teach in a secondary school, so those left teaching in secondary schools are people whose knowledge and insight are not that broad or deep.)

(An: Why won't broadly learned people go to teach in secondary schools?)

(An: Because, teaching the same pile of knowledge year after year, a broadly learned person would find it dull?)
— Me@2010.02.01
# Inside Metamaterials
The analogy between the physics of superfluid helium and general relativity is well known. The mathematics that describe these systems are essentially identical so measuring the properties of one automatically tells you how the other behaves.
— Recreating the Big Bang Inside Metamaterials, The physics arXiv blog
2010.02.01 Monday ACHK
# Robot Wisdom
A world ruled by wise, loving, responsible elders
The present reality:
A world under the thumbs of utterly cynical predators
The strategic analysis:
Their weak point is their need to rationalize their acts, by sophistries
2010.02.01 Monday ACHK
https://docs.ceph.com/en/latest/rados/operations/monitoring-osd-pg/?highlight=active

# Monitoring OSDs and PGs
High availability and high reliability require a fault-tolerant approach to managing hardware and software issues. Ceph has no single point-of-failure, and can service requests for data in a “degraded” mode. Ceph’s data placement introduces a layer of indirection to ensure that data doesn’t bind directly to particular OSD addresses. This means that tracking down system faults requires finding the placement group and the underlying OSDs at the root of the problem.
Tip
A fault in one part of the cluster may prevent you from accessing a particular object, but that doesn’t mean that you cannot access other objects. When you run into a fault, don’t panic. Just follow the steps for monitoring your OSDs and placement groups. Then, begin troubleshooting.
Ceph is generally self-repairing. However, when problems persist, monitoring OSDs and placement groups will help you identify the problem.
## Monitoring OSDs
An OSD’s status is either in the cluster (in) or out of the cluster (out); and, it is either up and running (up), or it is down and not running (down). If an OSD is up, it may be either in the cluster (you can read and write data) or it is out of the cluster. If it was in the cluster and recently moved out of the cluster, Ceph will migrate placement groups to other OSDs. If an OSD is out of the cluster, CRUSH will not assign placement groups to the OSD. If an OSD is down, it should also be out.
Note
If an OSD is down and in, there is a problem and the cluster will not be in a healthy state.
If you execute a command such as ceph health, ceph -s or ceph -w, you may notice that the cluster does not always echo back HEALTH OK. Don’t panic. With respect to OSDs, you should expect that the cluster will NOT echo HEALTH OK in a few expected circumstances:
1. You haven’t started the cluster yet (it won’t respond).
2. You have just started or restarted the cluster and it’s not ready yet, because the placement groups are getting created and the OSDs are in the process of peering.
3. You just added or removed an OSD.
4. You have just modified your cluster map.
An important aspect of monitoring OSDs is to ensure that when the cluster is up and running that all OSDs that are in the cluster are up and running, too. To see if all OSDs are running, execute:
ceph osd stat
The result should tell you the total number of OSDs (x), how many are up (y), how many are in (z) and the map epoch (eNNNN).
x osds: y up, z in; epoch: eNNNN
If the number of OSDs that are in the cluster is more than the number of OSDs that are up, execute the following command to identify the ceph-osd daemons that are not running:
ceph osd tree
#ID CLASS WEIGHT TYPE NAME STATUS REWEIGHT PRI-AFF
-1 2.00000 pool openstack
-3 2.00000 rack dell-2950-rack-A
-2 2.00000 host dell-2950-A1
0 ssd 1.00000 osd.0 up 1.00000 1.00000
1 ssd 1.00000 osd.1 down 1.00000 1.00000
Tip
The ability to search through a well-designed CRUSH hierarchy may help you troubleshoot your cluster by identifying the physical locations faster.
If an OSD is down, start it:
sudo systemctl start ceph-osd@1
See OSD Not Running for problems associated with OSDs that stopped, or won’t restart.
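If you prefer to script the check, the same filtering can be done on the command's text output. The snippet below works on a pasted sample so it runs without a cluster; on a live system you would capture `ceph osd tree` output instead, and note that the exact column layout can vary between releases, so treat this as a sketch.

```python
# Pick out OSD names reported "down" from captured `ceph osd tree` text.
sample = """\
ID CLASS WEIGHT  TYPE NAME     STATUS REWEIGHT PRI-AFF
-1       2.00000 pool openstack
 0   ssd 1.00000      osd.0    up     1.00000  1.00000
 1   ssd 1.00000      osd.1    down   1.00000  1.00000
"""

down = []
for line in sample.splitlines():
    words = line.split()
    names = [w for w in words if w.startswith("osd.")]
    if names and "down" in words:
        down.append(names[0])

print(down)  # OSD daemons worth restarting or investigating
```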
## PG Sets
When CRUSH assigns placement groups to OSDs, it looks at the number of replicas for the pool and assigns the placement group to OSDs such that each replica of the placement group gets assigned to a different OSD. For example, if the pool requires three replicas of a placement group, CRUSH may assign them to osd.1, osd.2 and osd.3 respectively. CRUSH actually seeks a pseudo-random placement that will take into account failure domains you set in your CRUSH map, so you will rarely see placement groups assigned to nearest neighbor OSDs in a large cluster. We refer to the set of OSDs that should contain the replicas of a particular placement group as the Acting Set. In some cases, an OSD in the Acting Set is down or otherwise not able to service requests for objects in the placement group. When these situations arise, don’t panic. Common examples include:
• You added or removed an OSD. Then, CRUSH reassigned the placement group to other OSDs–thereby changing the composition of the Acting Set and spawning the migration of data with a “backfill” process.
• An OSD was down, was restarted, and is now recovering.
• An OSD in the Acting Set is down or unable to service requests, and another OSD has temporarily assumed its duties.
Ceph processes a client request using the Up Set, which is the set of OSDs that will actually handle the requests. In most cases, the Up Set and the Acting Set are virtually identical. When they are not, it may indicate that Ceph is migrating data, an OSD is recovering, or that there is a problem (i.e., Ceph usually echoes a “HEALTH WARN” state with a “stuck stale” message in such scenarios).
To retrieve a list of placement groups, execute:
ceph pg dump
To view which OSDs are within the Acting Set or the Up Set for a given placement group, execute:
ceph pg map {pg-num}
The result should tell you the osdmap epoch (eNNN), the placement group number ({pg-num}), the OSDs in the Up Set (up[]), and the OSDs in the acting set (acting[]).
osdmap eNNN pg {raw-pg-num} ({pg-num}) -> up [0,1,2] acting [0,1,2]
Note
If the Up Set and Acting Set do not match, this may be an indicator that the cluster is rebalancing itself or of a potential problem with the cluster.
## Peering¶
Before you can write data to a placement group, it must be in an active state, and it should be in a clean state. For Ceph to determine the current state of a placement group, the primary OSD of the placement group (i.e., the first OSD in the acting set), peers with the secondary and tertiary OSDs to establish agreement on the current state of the placement group (assuming a pool with 3 replicas of the PG).
The OSDs also report their status to the monitor. See Configuring Monitor/OSD Interaction for details. To troubleshoot peering issues, see Peering Failure.
## Monitoring Placement Group States¶
If you execute a command such as ceph health, ceph -s or ceph -w, you may notice that the cluster does not always echo back HEALTH OK. After you check to see if the OSDs are running, you should also check placement group states. You should expect that the cluster will NOT echo HEALTH OK in a number of placement group peering-related circumstances:
1. You have just created a pool and placement groups haven’t peered yet.
2. The placement groups are recovering.
3. You have just added an OSD to or removed an OSD from the cluster.
4. You have just modified your CRUSH map and your placement groups are migrating.
5. There is inconsistent data in different replicas of a placement group.
6. Ceph is scrubbing a placement group’s replicas.
7. Ceph doesn’t have enough storage capacity to complete backfilling operations.
If one of the foregoing circumstances causes Ceph to echo HEALTH WARN, don't panic. In many cases, the cluster will recover on its own. In some cases, you may need to take action. An important aspect of monitoring placement groups is to ensure that when the cluster is up and running, all placement groups are active, and preferably in the clean state. To see the status of all placement groups, execute:
ceph pg stat
The result should tell you the total number of placement groups (x), how many placement groups are in a particular state such as active+clean (y) and the amount of data stored (z).
x pgs: y active+clean; z bytes data, aa MB used, bb GB / cc GB avail
Note
It is common for Ceph to report multiple states for placement groups.
In addition to the placement group states, Ceph will also echo back the amount of storage capacity used (aa), the amount of storage capacity remaining (bb), and the total storage capacity for the placement group. These numbers can be important in a few cases:
• You are reaching your near full ratio or full ratio.
• Your data is not getting distributed across the cluster due to an error in your CRUSH configuration.
Placement Group IDs
Placement group IDs consist of the pool number (not pool name) followed by a period (.) and the placement group ID–a hexadecimal number. You can view pool numbers and their names from the output of ceph osd lspools. For example, the first pool created corresponds to pool number 1. A fully qualified placement group ID has the following form:
{pool-num}.{pg-id}
And it typically looks like this:
1.1f
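As an illustration only (this helper is not part of Ceph), splitting a fully qualified placement group ID back into its pool number and PG ID takes a few lines of Python; remember that the part after the period is hexadecimal:

```python
def split_pg_id(pgid):
    """Split a fully qualified PG ID such as '1.1f' into its pool
    number (decimal) and placement group ID (hexadecimal)."""
    pool, _, pg = pgid.partition('.')
    return int(pool), int(pg, 16)

print(split_pg_id('1.1f'))  # (1, 31): pool 1, PG 0x1f
```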
To retrieve a list of placement groups, execute the following:
ceph pg dump
You can also format the output in JSON format and save it to a file:
ceph pg dump -o {filename} --format=json
To query a particular placement group, execute the following:
ceph pg {poolnum}.{pg-id} query
Ceph will output the query in JSON format.
The following subsections describe the common pg states in detail.
### Creating¶
When you create a pool, it will create the number of placement groups you specified. Ceph will echo creating when it is creating one or more placement groups. Once they are created, the OSDs that are part of a placement group’s Acting Set will peer. Once peering is complete, the placement group status should be active+clean, which means a Ceph client can begin writing to the placement group.
### Peering¶
When Ceph is Peering a placement group, Ceph is bringing the OSDs that store the replicas of the placement group into agreement about the state of the objects and metadata in the placement group. When Ceph completes peering, this means that the OSDs that store the placement group agree about the current state of the placement group. However, completion of the peering process does NOT mean that each replica has the latest contents.
Authoritative History
Ceph will NOT acknowledge a write operation to a client, until all OSDs of the acting set persist the write operation. This practice ensures that at least one member of the acting set will have a record of every acknowledged write operation since the last successful peering operation.
With an accurate record of each acknowledged write operation, Ceph can construct and disseminate a new authoritative history of the placement group–a complete, and fully ordered set of operations that, if performed, would bring an OSD’s copy of a placement group up to date.
### Active¶
Once Ceph completes the peering process, a placement group may become active. The active state means that the data in the placement group is generally available in the primary placement group and the replicas for read and write operations.
### Clean¶
When a placement group is in the clean state, the primary OSD and the replica OSDs have successfully peered and there are no stray replicas for the placement group. Ceph replicated all objects in the placement group the correct number of times.
When a client writes an object to the primary OSD, the primary OSD is responsible for writing the replicas to the replica OSDs. After the primary OSD writes the object to storage, the placement group will remain in a degraded state until the primary OSD has received an acknowledgement from the replica OSDs that Ceph created the replica objects successfully.
The reason a placement group can be active+degraded is that an OSD may be active even though it doesn’t hold all of the objects yet. If an OSD goes down, Ceph marks each placement group assigned to the OSD as degraded. The OSDs must peer again when the OSD comes back online. However, a client can still write a new object to a degraded placement group if it is active.
If an OSD is down and the degraded condition persists, Ceph may mark the down OSD as out of the cluster and remap the data from the down OSD to another OSD. The time between being marked down and being marked out is controlled by mon osd down out interval, which is set to 600 seconds by default.
A placement group can also be degraded, because Ceph cannot find one or more objects that Ceph thinks should be in the placement group. While you cannot read or write to unfound objects, you can still access all of the other objects in the degraded placement group.
### Recovering¶
Ceph was designed for fault-tolerance at a scale where hardware and software problems are ongoing. When an OSD goes down, its contents may fall behind the current state of other replicas in the placement groups. When the OSD is back up, the contents of the placement groups must be updated to reflect the current state. During that time period, the OSD may reflect a recovering state.
Recovery is not always trivial, because a hardware failure might cause a cascading failure of multiple OSDs. For example, a network switch for a rack or cabinet may fail, which can cause the OSDs of a number of host machines to fall behind the current state of the cluster. Each one of the OSDs must recover once the fault is resolved.
Ceph provides a number of settings to balance the resource contention between new service requests and the need to recover data objects and restore the placement groups to the current state. The osd recovery delay start setting allows an OSD to restart, re-peer and even process some replay requests before starting the recovery process. The osd recovery thread timeout sets a thread timeout, because multiple OSDs may fail, restart and re-peer at staggered rates. The osd recovery max active setting limits the number of recovery requests an OSD will entertain simultaneously to prevent the OSD from failing to serve requests. The osd recovery max chunk setting limits the size of the recovered data chunks to prevent network congestion.
### Back Filling¶
When a new OSD joins the cluster, CRUSH will reassign placement groups from OSDs in the cluster to the newly added OSD. Forcing the new OSD to accept the reassigned placement groups immediately can put excessive load on the new OSD. Back filling the OSD with the placement groups allows this process to begin in the background. Once backfilling is complete, the new OSD will begin serving requests when it is ready.
During the backfill operations, you may see one of several states: backfill_wait indicates that a backfill operation is pending, but is not underway yet; backfilling indicates that a backfill operation is underway; and, backfill_toofull indicates that a backfill operation was requested, but couldn’t be completed due to insufficient storage capacity. When a placement group cannot be backfilled, it may be considered incomplete.
The backfill_toofull state may be transient. It is possible that as PGs are moved around, space may become available. The backfill_toofull is similar to backfill_wait in that as soon as conditions change backfill can proceed.
Ceph provides a number of settings to manage the load spike associated with reassigning placement groups to an OSD (especially a new OSD). By default, osd_max_backfills sets the maximum number of concurrent backfills to and from an OSD to 1. The backfill full ratio enables an OSD to refuse a backfill request if the OSD is approaching its full ratio (90%, by default); this ratio can be changed with the ceph osd set-backfillfull-ratio command. If an OSD refuses a backfill request, the osd backfill retry interval enables an OSD to retry the request (after 30 seconds, by default). OSDs can also set osd backfill scan min and osd backfill scan max to manage scan intervals (64 and 512, by default).
### Remapped¶
When the Acting Set that services a placement group changes, the data migrates from the old acting set to the new acting set. It may take some time for a new primary OSD to service requests. So it may ask the old primary to continue to service requests until the placement group migration is complete. Once data migration completes, the mapping uses the primary OSD of the new acting set.
### Stale¶
While Ceph uses heartbeats to ensure that hosts and daemons are running, the ceph-osd daemons may also get into a stuck state where they are not reporting statistics in a timely manner (e.g., a temporary network fault). By default, OSD daemons report their placement group, up through, boot and failure statistics every half second (i.e., 0.5), which is more frequent than the heartbeat thresholds. If the Primary OSD of a placement group’s acting set fails to report to the monitor or if other OSDs have reported the primary OSD down, the monitors will mark the placement group stale.
When you start your cluster, it is common to see the stale state until the peering process completes. After your cluster has been running for awhile, seeing placement groups in the stale state indicates that the primary OSD for those placement groups is down or not reporting placement group statistics to the monitor.
## Identifying Troubled PGs¶
As previously noted, a placement group is not necessarily problematic just because its state is not active+clean. Generally, Ceph’s ability to self repair may not be working when placement groups get stuck. The stuck states include:
• Unclean: Placement groups contain objects that are not replicated the desired number of times. They should be recovering.
• Inactive: Placement groups cannot process reads or writes because they are waiting for an OSD with the most up-to-date data to come back up.
• Stale: Placement groups are in an unknown state, because the OSDs that host them have not reported to the monitor cluster in a while (configured by mon osd report timeout).
To identify stuck placement groups, execute the following:
ceph pg dump_stuck [unclean|inactive|stale|undersized|degraded]
See Placement Group Subsystem for additional details. To troubleshoot stuck placement groups, see Troubleshooting PG Errors.
## Finding an Object Location¶
To store object data in the Ceph Object Store, a Ceph client must:
1. Set an object name
2. Specify a pool
The Ceph client retrieves the latest cluster map and the CRUSH algorithm calculates how to map the object to a placement group, and then calculates how to assign the placement group to an OSD dynamically. To find the object location, all you need is the object name and the pool name. For example:
ceph osd map {poolname} {object-name} [namespace]
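To build some intuition for the first step of that mapping, here is a deliberately simplified Python sketch of hashing an object name into a placement group within a pool. This is a toy stand-in: the real implementation uses Ceph's rjenkins-based hash and a "stable mod", not CRC32, so the numbers will not match a real cluster.

```python
import zlib

def object_to_pg(pool_num, object_name, pg_num):
    # toy stand-in for Ceph's object hash: any deterministic hash
    # illustrates the name -> PG placement idea
    h = zlib.crc32(object_name.encode('utf-8'))
    return '{}.{:x}'.format(pool_num, h % pg_num)

print(object_to_pg(1, 'test-object-1', 256))
```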
Exercise: Locate an Object
As an exercise, let's create an object. Specify an object name, a path to a test file containing some object data and a pool name using the rados put command on the command line. For example:
rados put {object-name} {file-path} --pool=data
To verify that the Ceph Object Store stored the object, execute the following:
rados -p data ls
Now, identify the object location:
ceph osd map {pool-name} {object-name}
ceph osd map data test-object-1
Ceph should output the object’s location. For example:
osdmap e537 pool 'data' (1) object 'test-object-1' -> pg 1.d1743484 (1.4) -> up ([0,1], p0) acting ([0,1], p0)
To remove the test object, simply delete it using the rados rm command. For example:
rados rm test-object-1 --pool=data
As the cluster evolves, the object location may change dynamically. One benefit of Ceph's dynamic rebalancing is that Ceph relieves you from having to perform the migration manually. See the Architecture section for details.
http://www.physicsforums.com/showthread.php?t=110375 | ## sequence 2, 5, 8, 11, 14...
This is my question: Which powers of numbers in the sequence below are always in the sequence and which are not. Prove it?
Sequence: 2, 5, 8, 11, 14...
So the general term is 3n + 2
Now
(3n+2)^2 = 9n^2 + 12n + 4
Where should I go from here?
Quote by Natasha1 This is my question: Which powers of numbers in the sequence below are always in the sequence and which are not. Prove it? Sequence: 2, 5, 8, 11, 14... Answer: So the gerenal term is 3n + 2 Now (3n+2)^2 = 9n^2 + 6n + 4 Where should I go from here?
I think that should be 9n^2 + 12n + 4
Conventionally, $\mathbb{N}$ is the index set of a sequence. This means that when identifying the general term, it must be that a1 is the first term in the sequence. The way you wrote an, a0 is your first term. It's no big deal, it just avoids confusion. I suggest you go back to your initial problem before tackling this one, as the method of proof is very similar and you're just one step away from the final solution in the other problem.
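For what it's worth, a quick computational check (a sketch of the idea, not a proof) makes the pattern visible: every term 3n + 2 is congruent to 2 mod 3, so a power lands back in the sequence exactly when 2^k ≡ 2 (mod 3), which happens for odd k.

```python
def in_sequence(m):
    # 2, 5, 8, 11, 14, ... are exactly the integers >= 2 congruent to 2 mod 3
    return m >= 2 and m % 3 == 2

# test whether the k-th powers of the first 100 terms stay in the sequence
for k in range(2, 8):
    hits = {in_sequence((3 * n + 2) ** k) for n in range(100)}
    print(k, hits)
# even powers: (3n+2)^k = 2^k = 1 (mod 3), never in the sequence
# odd powers:  (3n+2)^k = 2^k = 2 (mod 3), always in the sequence
```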
https://www.usna.edu/Users/cs/roche/courses/f16sy301/lab07/ | # Lab 7: Maps
• Due before 23:59 Wednesday, October 12
• Submit using the submit program as 301 lab 07.
# 1 Intro and Background
## Maps and Python
When we store information in a Map, our data consists of two things, a key and a value. Suppose we want to use our BST from last week. We have two choices: we could either modify our BST Nodes to store both a key and a value like this:
```python
class Node:
    def __init__(self, key, value):
        self.key = key
        self.value = value
        self.left = None
        self.right = None
```
or, we could keep our Nodes as they were, and make a single class called KVPair which can be stored as the data field:
```python
class KVPair:
    def __init__(self, key, value):
        self.key = key
        self.value = value
```
Now, we can keep our BST Nodes as they were, and just add KVPairs as data. Which is better? Well, it depends. You could argue the first approach is less convoluted. However, keeping your BST general and applicable for lots of purposes, without lots of extra fields hard-coded in is a nice thing, especially for if you're passing your code off to someone else, or coming back to it later after a few weeks away.
For this lab, we're going to make this KVPair object, and build three different kinds of maps to map alphas to midshipman names. And we're going to implement all of the functions with operator overloading so that our maps look and act just like Python's built in dict objects.
Some of you may not be happy with the above paragraphs, and might be saying, "but my BST depends upon using the < and > signs on data, and those won't work with my new, invented class!" And you'd be right. But, it turns out that if you have your own class, you can define those operators to do what you want. For example:
```python
class KVPair:
    def __init__(self, key, value):
        self.key = key
        self.value = value

    def __lt__(self, other):
        return (self.key < other.key)
```
Now, given two KVPairs pair1 and pair2, you can run pair1 < pair2, and the right thing will happen! Cool! So, now you can use your BST as written, without changing a thing. Overloading the less-than operator even lets you sort a list full of your objects using the .sort() command.
These special operator-overloading methods in Python are called "magic methods" and there are a lot of them! The ones you will need to worry about for this lab are the comparison operators and the container type operators.
# 2 Part 1: KVPair class
The first thing you need to do is create a file kvpair.py that completes the KVPair class above so that all of the comparison methods work properly. Specifically, your KVPairs should be able to:
• Compare by keys, meaning pair1 < pair2 should work, as should >, <=, and >=, based on the key fields only;
• Compare for equality, meaning pair1 == pair2 should work, based on the equality of the keys only; and
• Compare for inequality, meaning pair1 != pair2 should work. (NB: If you overload ==, you should always also overload !=, or else you get weird behavior.)
After this, code like the following (which is definitely not an exhaustive testing suite) should work:
```python
from kvpair import KVPair

aaron = KVPair(169998, 'Aaron Aardvark')
zeke = KVPair(160006, 'Zeke Zebra')

print(zeke < aaron)   # prints True
print(aaron < zeke)   # prints False
print(aaron == zeke)  # prints False
print(aaron >= aaron) # prints True
print(zeke != KVPair(160006, 'Same Alpha')) # prints False
```
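For reference, one possible sketch of the complete class (your version may differ) makes every comparison delegate to the key and ignore the value:

```python
class KVPair:
    def __init__(self, key, value):
        self.key = key
        self.value = value

    # every comparison looks only at the key, never the value
    def __lt__(self, other):
        return self.key < other.key

    def __le__(self, other):
        return self.key <= other.key

    def __gt__(self, other):
        return self.key > other.key

    def __ge__(self, other):
        return self.key >= other.key

    def __eq__(self, other):
        return self.key == other.key

    def __ne__(self, other):
        return self.key != other.key
```

Alternatively, functools.total_ordering lets you define only __eq__ and __lt__ and derives the remaining ordering methods for you.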
# 3 Parts 2-4: Three versions of a Map
You're going to create three different map classes, in three different files:
• sortedarraymap.py will contain the SortedArrayMap class.
• unsortedllmap.py will contain the UnsortedLLMap class.
• bstmap.py will contain the BSTMap class.
All of these Maps should have the same methods defined so that they appear to work exactly the same, even though we know that "under the hood" they are implemented completely differently!
• Insert key-value pairs. The way this is normally done in Python is by using the [] operator, meaning for some key k and some value v, aMap[k] = v adds that pair to the map. This involves overloading the __setitem__(self, k, v) method.
This method should start by turning k and v into a single KVPair before inserting that single object into the Map.
You may assume (for now) each insert is done with a new key that is not currently in the map. If this assumption makes you uncomfortable (as it should), the correct behavior when someone inserts the same key for a second time is to overwrite the old KVPair that matches with that key.
• Given a key, get the value. This is again done using the [] operator: v=aMap[k]. The way this works "under the hood" is by overloading the __getitem__(self,k) method.
You may assume (for now) that the user is a competent user, who will only try to get valid keys. If this makes you uncomfortable, as it probably should, the correct behavior is to raise an Exception (a KeyError, to be precise) to notify the user when they try to look up a key that doesn't exist in the map.
• Tell you if a key appears in the Map using someKey in aMap. The "in" operator is overloaded by implementing the __contains__(self, k) operator, just like we did in last week's lab.
Your SortedArrayMap will have a list as a field, which will contain your KVPairs. Obviously, it should maintain them in sorted order so that your __getitem__ and __contains__ methods can use binary search and run in $$O(\log n)$$ time.
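One way to get those O(log n) lookups is to lean on the standard-library bisect module for the binary search. The sketch below bundles a minimal KVPair with __lt__ (as in Part 1) so it runs standalone; note that __setitem__ is still O(n) because list.insert shifts elements:

```python
import bisect

class KVPair:
    def __init__(self, key, value):
        self.key = key
        self.value = value

    def __lt__(self, other):
        return self.key < other.key

class SortedArrayMap:
    def __init__(self):
        self._pairs = []  # kept sorted by key at all times

    def _index(self, key):
        # binary search for the leftmost position of key: O(log n)
        return bisect.bisect_left(self._pairs, KVPair(key, None))

    def __setitem__(self, key, value):
        i = self._index(key)
        if i < len(self._pairs) and self._pairs[i].key == key:
            self._pairs[i].value = value  # overwrite a duplicate key
        else:
            self._pairs.insert(i, KVPair(key, value))  # O(n) shift

    def __getitem__(self, key):
        i = self._index(key)
        if i < len(self._pairs) and self._pairs[i].key == key:
            return self._pairs[i].value
        raise KeyError(key)

    def __contains__(self, key):
        i = self._index(key)
        return i < len(self._pairs) and self._pairs[i].key == key
```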
Your UnsortedLLMap will have a head field, and each Node will be a regular linked list node with a KVPair as the data field. Your __setitem__ method should be $$O(1)$$ in this case.
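A minimal sketch of that idea (again bundling a tiny KVPair and Node so it runs on its own): inserting at the head is O(1), and because newer pairs sit in front of older ones, a front-to-back scan in __getitem__ naturally returns the most recent value for a key:

```python
class KVPair:
    def __init__(self, key, value):
        self.key = key
        self.value = value

class Node:
    def __init__(self, data):
        self.data = data
        self.next = None

class UnsortedLLMap:
    def __init__(self):
        self.head = None

    def __setitem__(self, key, value):
        # O(1): push at the head, with no duplicate check
        node = Node(KVPair(key, value))
        node.next = self.head
        self.head = node

    def __getitem__(self, key):
        # O(n) scan: the first match is the most recently inserted pair
        cur = self.head
        while cur is not None:
            if cur.data.key == key:
                return cur.data.value
            cur = cur.next
        raise KeyError(key)

    def __contains__(self, key):
        cur = self.head
        while cur is not None:
            if cur.data.key == key:
                return True
            cur = cur.next
        return False
```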
Your BSTMap will be very similar to the TreeSet class from last week's lab, except that each Node's data field will hold a KVPair this time. All three of the map methods should have a running time of $$O(\textrm{height})$$. Note, you definitely can (and should!) use your code from last week's lab, or my posted sample solutions, as a starting point to save yourself time and effort.
All methods should run the fastest we can make them run. Keep in mind that methods that come with Pythonic lists do not know or assume that the list is sorted, and so using them may be inappropriately slow.
To be explicit, code like the following (which is in no way an exhaustive testing suite) should work:
```python
from sortedarraymap import SortedArrayMap

arrMap = SortedArrayMap()
arrMap[160006] = 'Zeke Zebra'

print(arrMap[160006])    # prints 'Zeke Zebra'
print(160006 in arrMap)  # prints True
print(169998 in arrMap)  # prints False

# ... and the same for UnsortedLLMap and BSTMap, of course!
```
The rest of this lab is optional for your enrichment and enjoyment. Be sure to work on this only after getting everything else perfect. You should still submit this work, but make sure it doesn't break any of the parts you already had working correctly!
In Python, the way you find out how big something is is using the len() function. Unsurprisingly, you can overload the behavior of this function for a class you write by implementing a method __len__(self): within your class.
First, make sure your Map classes can handle duplicate insertions properly. If your __setitem__ is called more than once with identical key values, then the next time __getitem__ is called with that same key, it should return the most recent value that was assigned. This should not change the running time of any of the methods, of course!
Now I want you to overload the __len__ function for all three of your Map classes from this lab. Keep in mind, the size of a Map is defined to be the number of distinct keys that have been inserted so far. For arrays and BSTs, it should be $$O(1)$$ running time, and for arrays it will be especially easy!
The trickier one is with UnsortedLLMap. You could maintain a current count field in your class, but that would require checking for duplicates every time there is an insertion, which would make that method become $$O(n)$$, which defeats the whole purpose of using unsorted linked lists in the first place! Instead, you should do all the work in your __len__ method, which will have a horrible running time of $$O(n^2)$$.
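One sketch of that O(n^2) counting strategy, written against a bare linked list of key-carrying nodes so it runs on its own: a key is counted only at its first occurrence from the head.

```python
class Node:
    def __init__(self, key, next=None):
        self.key = key
        self.next = next

def distinct_key_count(head):
    # O(n^2): for each node, rescan the prefix before it for the same key
    count = 0
    cur = head
    while cur is not None:
        probe = head
        seen_before = False
        while probe is not cur:
            if probe.key == cur.key:
                seen_before = True
                break
            probe = probe.next
        if not seen_before:
            count += 1
        cur = cur.next
    return count

# keys 1, 2, 1 -> two distinct keys
print(distinct_key_count(Node(1, Node(2, Node(1)))))  # prints 2
```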
After this, something like the following should work for any of your Maps:
```python
from unsortedllmap import UnsortedLLMap

m = UnsortedLLMap()
m[160006] = 'Zeke Zebra'
print(m[160006])  # prints 'Zeke Zebra'

m[160006] = 'Susan Swan'
m[160006] = 'Mary Moose'
print(m[160006])  # prints 'Mary Moose'
print(len(m))     # prints 1
```
https://www.gresearch.co.uk/blog/article/developing-a-custom-rider-ide-plug-in-for-our-symbol-reference-finder/ | Software Engineering 18/05/2020 7 min read
# Developing a custom Rider IDE plug-in for our symbol reference finder
Magdalena Augustynska recently completed a software engineer internship at G-Research. For her intern project she worked on setting up reference finding through a Rider plug in. Find out how she went through the whole process below.
### Context
As a company grows, code bases become bigger over time as more and more developers work on them. It gets harder to navigate all the projects and when you want to remove or modify some of the company’s shared library code, you can’t be sure that you won’t break anything.
That’s why our Shared Code team built “Reference Finding” – a tool that lets you find references to any symbol in your code from any other code in the company. There are already some great existing tools, but they work with a separate server storing the code which was not an option for some of our secure code bases.
For my internship at G-Research, I joined the Shared Code team and spent my first three months helping them improve the developer experience for this Reference Finding tool.
At first, Reference Finding was available as a web app where you can search for a symbol by typing its name and choosing a desired one from the suggestions. Once chosen, the site displays a list of all the references to the symbol together with links to its declaration and usages in GitHub.
Alongside that, Shared Code had already created a plugin for Visual Studio that lets you right-click on a symbol (e.g. a method or property) and find references to it, jumping you into the web app. This is very useful for many of our engineers but a great number of them use Rider, so there was a desire to implement a plugin for this IDE too.
That’s the project I was assigned during my software engineering internship.
I found working on this project extremely interesting, and that's why I would like to walk you through my experience, in the hope that you may find it useful.
You may find these tips particularly helpful if you are writing a Rider plugin that works directly with C# symbols.
### Introduction
To start, I recommend getting familiar with the official documentation covering the basics of some of the areas of the ReSharper platform. Some other resources I found useful:
### Packing the plugin
We will build the plugin on the basis of the template linked above – it’s a good place to start.
After following the guidelines from the JetBrains blog about creating the project with the template, you end up with two directories:
• dotnet: back-end implementation
• rider: front-end implementation
Rider consists of the IntelliJ front-end running on the JVM and the ReSharper back-end running on .NET.
Depending on what you want your plugin to do, you might need to take care to implement only the front-end part (in which case you don’t need much knowledge of the code structure or back-end part), or perhaps you want to have extensive insight into the code, or modify the code itself, or both.
Knowing that the two parts are supposed to communicate with one another through a custom protocol, the first thing we want to do is to try to pack together both (at this point independent) components, so the final ZIP’s structure corresponds to the one presented here.
Although we ended up not needing this, maybe you’ll find it handy:
1. Build the front-end part:

```
$ cd ./rider
$ gradle :buildPlugin
```

2. Build the back-end part:

```
$ cd ./dotnet
$ gradle :buildPlugin
```

3. Now there is a ZIP file in dotnet/build/distributions called $projectName-$version.zip
• Unpack it
• Copy dotnet/src/rider/main/resources/META-INF directory into the unpacked dir
• Copy rider/build/distributions/rider-$version.zip/rider/lib/rider-$version.jar into lib directory in previously unpacked dir
• Pack it again to ZIP
4. The plugin can be installed from your created ZIP from Rider -> Ctrl+Alt+S -> Plugins.
(the projectName is rootProject.name defined in dotnet/settings.gradle file)
### Working with the Abstract Syntax Tree
We want to achieve the following when clicking on a symbol in the editor: get the ReSharper object, find its original declaration, and then do a plugin-specific thing with it (which in our case is converting the declaration to the internal fully-qualified name format).
We’d like the feature to be available as a menu item and that can be done using IExecutableAction like in this example Create Menu Items Using Actions.
For this to work as a right-click menu item, we need to implement the front-end part that communicates with this action. After going through most of the available articles and documentation, I still had no clue how to do this, but I found another ReSharper class, IContextAction, which works with only the C# part implemented. Excited by this discovery, I decided to go with this class for the time being, just to have something to work with, and replace it later.
The only concerning thing (apart from the fact that it implements context action) was that IContextAction provides access to other data structures than IExecutableAction.
IContextAction gives access to IContextActionDataProvider, whereas IExecutableAction gives access to IDataContext – but most of the objects representing syntactic and semantic views of a codebase can be retrieved from both classes.
Having the IContextActionDataProvider, we can do for example:
var file = dataProvider.PsiFile;
var treeTextRange = dataProvider.SelectedTreeRange;
PsiFileView psiFileView = new PsiFileView(file, treeTextRange);
The above class (and more general IPsiView) represents the view of the PSI (Program Structure Interface).
From IPsiView we are able to get elements like ITreeNode, IDeclaration, IDeclaredElement – structures containing details about particular symbol usage in the code.
(For all the type related ReSharper’s data structures I recommend reading the Type System docs.)
To actually find all details about a symbol – the assembly, namespace and full name – we need to get to the original declaration of the element. The structure that contains all the information we need is IDeclaredElement.
Having IPsiView, we can do psiView.GetSelectedTreeNode<ICSharpDeclaration>(),
get the declaration and then the Declared Element for it. But it only resolves correctly if the node we click on is indeed the declaration of some element – for example, it doesn't resolve keywords associated with specific symbols, attribute constructors, tokens, or plain references to an element.
We obviously didn’t want this kind of limited functionality.
Some digging led me to an example of finding the reference to an actual symbol’s definition. It uses objects called navigators.
Here is an example of using one:
Let cSharpTreeNode be an ITreeNode we got using any of the mentioned structures (IPsiView or IDataContext)
var cSharpIdentifier = cSharpTreeNode as ICSharpIdentifier;
(ICSharpIdentifier (IIdentifier) is just a representation of ITreeNode with additional info about the node’s name excluding language-specific details.)
var declarationUnderCaret = FieldDeclarationNavigator.GetByNameIdentifier(cSharpIdentifier);
var declaredElement = declarationUnderCaret?.DeclaredElement;
Another way to resolve a symbol is by using References.
Let’s say we want to find the original declaration of an attribute’s constructor.
var referenceName = ReferenceNameNavigator.GetByNameIdentifier(cSharpIdentifier);
var attribute = AttributeNavigator.GetByName(referenceName);
var declaredElement = attribute?.ConstructorReference.Resolve().DeclaredElement;
We covered most of the cases with these two approaches.
### Front-end part
We wanted the functionality of opening “Reference Finding” to be available as an option in the right-click menu. This part of the Rider IDE belongs to the IntelliJ front-end. In that case, you need to have a Java or Kotlin part that shares a model with the back-end and calls the action implemented in C#.
More precisely, you need to add a Kotlin class, along with updating the plugin.xml file – adding an action tag with details about your action and link to the front-end class.
For our purposes, there was no need to write a custom class compatible with the shared Kotlin protocol; it’s enough to use some of the ones used in Rider.
After spending too much time going through the IntelliJ OpenAPI (which has most of the implementations hidden) and looking for an answer on blogs, I reached out for help to the developers at JetBrains.
They provided me with an example of what the Kotlin class should look like together with the xml file containing the plugin specification – GlobalNukeTargetExecutionAction inside nuke-build. They updated the template as well.
Once I could use IExecutableAction, I slightly changed our implementation and pulled IFile and TreeTextRange from IDataContext to create IPsiView that our implementation was already handling well.
Soon it came to my attention that IDataContext contains very useful functionality – it stores cached values of data constants, for instance the ones defined in PsiDataConstants class.
With that, we can make an attempt to obtain an already-evaluated IReference for the chosen tree node:
dataContext.GetData(PsiDataConstants.REFERENCE)?.Resolve().DeclaredElement;
This will work if the symbol in question is indeed just a reference to the original declaration.
If we investigate a declaration, we can do:
dataContext.GetSelectedTreeNode<IDeclaration>()?.DeclaredElement;
This makes the problem a lot simpler. However, these two scenarios don’t cover everything we wanted (e.g. new keywords, attribute constructors) and I decided to stick with the previous approach which lets us handle each case the exact way we want to.
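Putting the pieces together, the fallback chain described above could look something like the following sketch against the ReSharper SDK types mentioned in this post. TryResolveWithNavigators is a hypothetical placeholder standing in for the navigator-based handling of the remaining cases; it is not a real SDK method.

```csharp
// Sketch: resolve the declared element under the caret, trying the two
// IDataContext shortcuts first and falling back to navigators.
private static IDeclaredElement TryResolveDeclaredElement(IDataContext dataContext)
{
    // Case 1: the symbol is a plain reference to a declaration elsewhere.
    var fromReference = dataContext.GetData(PsiDataConstants.REFERENCE)
        ?.Resolve().DeclaredElement;
    if (fromReference != null)
        return fromReference;

    // Case 2: the caret is on the declaration itself.
    var fromDeclaration = dataContext.GetSelectedTreeNode<IDeclaration>()
        ?.DeclaredElement;
    if (fromDeclaration != null)
        return fromDeclaration;

    // Case 3: everything else (keywords, attribute constructors, ...)
    // handled with the navigator-based approach shown earlier.
    return TryResolveWithNavigators(dataContext);
}
```

The ordering matters only for convenience: the two cached-data lookups are cheap, so the more involved navigator logic runs only when they fail.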
### Testing
There are various ways you may choose to test your plugin. We want to be able to generate ReSharper C# files (IFiles), compiled into a proper solution, from sample source code that we have included as a project, so we can then access individual symbols at specific positions in their internal representation. With these objects, we can test whether the implementation resolves the symbols to the correct fully qualified names.
It was quite a challenge to find the right way of generating ReSharper objects from strings. I found most of the hints on the internet unclear or not up to date.
We located a working solution within some test classes in the ReSharper source code.
public class TestClass : BaseTestWithSingleProject
{
    protected override string RelativeTestDataPath => …

    private IEnumerable<string> fileNames = …

    [Test]
    public void Test()
    {
        try
        {
            RunGuarded(() =>
            {
                var psiFiles = Solution.GetPsiServices().Files;
                foreach (var testFileName in fileNames)
                {
                    var testFilePath = GetTestDataFilePath2(testFileName);
                    var sourceFile = project.FindProjectItemsByLocation(testFilePath).OfType<IProjectFile>()
                        .Single().ToSourceFile();
                    if (sourceFile == null)
                        throw …
                    var cSharpFile = psiFiles.GetPsiFiles<CSharpLanguage>(sourceFile).OfType<ICSharpFile>().Single();
                    // Do something with the cSharpFile
                }
            });
        }
        catch
        {
            // ...
        }
    }
}
The BaseTestWithSingleProject class is defined in the JetBrains.ReSharper.TestFramework package. It lets you create a single project made of specified files.
The RelativeTestDataPath property can be used to specify the location of your solution files. This needs to be relative to the BaseTestDataPath defined in BaseTestNoShell (which is a superclass of BaseTestWithSingleProject).
The collection fileNames should contain the names of the files that make up your one-project solution; these files should be located under the RelativeTestDataPath directory.
There were a couple more issues I had to solve to get the tests working:
The method GetTestDataPackages() was returning JetBrains.Tests.Platform.NETFrameWork which we didn’t need and that was breaking the tests.
It is enough to override this method:
protected override IEnumerable<PackageDependency> GetTestDataPackages()
{
return Enumerable.Empty<PackageDependency>();
}
I also came across the problem where the mscorlib path was empty in some internal ReSharper object PlatformInfo. To solve that, you need to override the method
protected override IEnumerable<string> GetReferencedAssemblies(TargetFrameworkId targetFrameworkId)
and add to the result your own mscorlib path.
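A minimal sketch of that override (the mscorlib path is a placeholder; resolve it however suits your environment):

```csharp
protected override IEnumerable<string> GetReferencedAssemblies(TargetFrameworkId targetFrameworkId)
{
    // Keep whatever the base test class already references...
    foreach (var assembly in base.GetReferencedAssemblies(targetFrameworkId))
        yield return assembly;

    // ...and add mscorlib explicitly (placeholder path, adjust for your machine).
    yield return @"C:\Windows\Microsoft.NET\Framework64\v4.0.30319\mscorlib.dll";
}
```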
### Packing it again
Once the front-end and back-end parts are connected, it is sufficient to run Gradle’s buildPlugin task once to get ready to install the plugin.
The outcome of the project was a success – we’ve received much positive feedback from C# developers on the plugin and how it makes their daily work simpler.
### The internship experience
This project, as well as every other assignment I worked on over the course of six months, was really engaging and challenging. During that time, although I was an intern, I knew that my work impacted the business and delivered real value for the people working around me.
At every step of development of the solution, I could always count on help from my mentor, manager and the rest of my team. I’ve learned a lot, both in terms of technical skills and soft ones, working as a member of a team.
If you’re looking for a place where you can develop your software engineering skills, apply them to solve real-life problems and learn something new every day while working on exciting projects with some of the best people in their fields, without a doubt, I can strongly recommend applying to G-Research. | 2022-08-13 06:00:51 | {"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 0, "mathjax_asciimath": 1, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.26302650570869446, "perplexity": 1886.9901741962772}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2022-33/segments/1659882571909.51/warc/CC-MAIN-20220813051311-20220813081311-00218.warc.gz"} |
http://www.koreascience.or.kr/article/ArticleFullRecord.jsp?cn=E1BMAX_2016_v53n2_387 | ON A COMPOSITE FUNCTIONAL EQUATION RELATED TO THE GOLAB-SCHINZEL EQUATION
Title & Authors
ON A COMPOSITE FUNCTIONAL EQUATION RELATED TO THE GOLAB-SCHINZEL EQUATION
Gordji, Madjid Eshaghi; Rassias, Themistocles M.; Tial, Mohamed; Zeglami, Driss;
Abstract
Let X be a vector space over a field K of real or complex numbers and $\small{k{\in}{\mathbb{N}}}$. We prove the superstability of the following generalized Golab-Schinzel type equation $f(x_1+\sum\limits_{i$
Keywords
Hyers-Ulam stability; Golab-Schinzel equation; superstability
Language
English
Cited by
1. Stability problem for the composite type functional equations, Expositiones Mathematicae, 2017
References
1. J. Aczel and S. Golab, Remarks on one-parameter subsemigroups of the affine group and their homo- and isomorphisms, Aequationes Math. 4 (1970), 1-10.
2. J. A. Baker, The stability of the cosine equation, Proc. Amer. Math. Soc. 80 (1980), no. 3, 411-416.
3. J. A. Baker, J. Lawrence, and F. Zorzitto, The stability of the equation f(x + y) = f(x)f(y), Proc. Amer. Math. Soc. 74 (1979), no. 2, 242-246.
4. K. Baron, On the continuous solutions of the Golab-Schinzel equation, Aequationes Math. 38 (1989), no. 2-3, 155-162.
5. N. Brillouet-Belluot, On some functional equations of Golab-Schinzel type, Aequationes Math. 42 (1991), no. 2-3, 239-270.
6. N. Brillouet-Belluot, J. Brzdek, and K. Cieplinski, On some recent developments in Ulam's type stability, Abstr. Appl. Anal. 2012 (2012), Art. ID 716936, 41 pp.
7. N. Brillouet-Belluot and J. Dhombres, Equations fonctionnelles et recherche de sous-groupes, Aequationes Math. 31 (1986), no. 2-3, 253-293.
8. J. Brzdek, Subgroups of the group Zn and a generalization of the Golab-Schinzel functional equation, Aequationes Math. 43 (1992), no. 1, 59-71.
9. J. Brzdek, Some remarks on solutions of the functional equation $f(x+f(x)^ny)=tf(x)f(y)$, Publ. Math. Debrecen 43 (1993), no. 1-2, 147-160.
10. J. Brzdek, Golab-Schinzel equation and its generalizations, Aequationes Math. 70 (2005), no. 1-2, 14-24.
11. J. Brzdek, Stability of the generalized Golab-Schinzel equation, Acta Math. Hungar. 113 (2006), 115-126.
12. J. Brzdek, On the quotient stability of a family of functional equations, Nonlinear Anal. 71 (2009), no. 10, 4396-4404.
13. J. Brzdek, On stability of a family of functional equations, Acta Math. Hungar. 128 (2010), no. 1-2, 139-149.
14. J. Brzdek and K. Cieplinski, Hyperstability and superstability, Abstr. Appl. Anal. 2013 (2013), Article ID 401756, 13 pages.
15. A. Chahbi, On the superstability of the generalized Golab-Schinzel equation, Internat. J. Math. Anal. 6 (2012), no. 54, 2677-2682.
16. A. Charifi, B. Bouikhalene, S. Kabbaj, and J. M. Rassias, On the stability of Pexiderized Golab-Schinzel equation, Comput. Math. Appl. 59 (2010), no. 9, 3193-3202.
17. J. Chudziak, Approximate solutions of the Golab-Schinzel equation, J. Approx. Theory 136 (2005), no. 1, 21-25.
18. J. Chudziak, Stability of the generalized Golab-Schinzel equation, Acta Math. Hungar. 113 (2006), no. 1-2, 133-144.
19. J. Chudziak, Approximate solutions of the generalized Golab-Schinzel equation, J. Inequal. Appl. 2006 (2006), Article ID 89402, 8 pp.
20. J. Chudziak, Stability problem for the Golab-Schinzel type functional equations, J. Math. Anal. Appl. 339 (2008), no. 1, 454-460.
21. J. Chudziak and J. Tabor, On the stability of the Golab-Schinzel functional equation, J. Math. Anal. Appl. 302 (2005), no. 1, 196-200.
22. R. Ger and P. Semrl, The stability of the exponential equation, Proc. Amer. Math. Soc. 124 (1996), no. 3, 779-787.
23. S. Golab and A. Schinzel, Sur l'equation fonctionnelle f(x + yf(x)) = f(x)f(y), Publ. Math. Debrecen 6 (1959), 113-125.
24. D. H. Hyers, G. I. Isac, and Th. M. Rassias, Stability of Functional Equations in Several Variables, Progress in Nonlinear Differential Equations and their Applications 34, Birkhauser, Boston, Inc., Boston, MA, 1998.
25. E. Jablonska, On solutions of some generalizations of the Golab-Schinzel equation, in: Functional Equations in Mathematical Analysis, pp. 509-521, Springer Optimization and its Applications vol. 52, Springer, 2012.
26. E. Jablonska, On continuous solutions of an equation of the Golab-Schinzel type, Bull. Aust. Math. Soc. 87 (2013), no. 1, 10-17.
27. E. Jablonska, On locally bounded above solutions of an equation of the Golab-Schinzel type, Aequationes Math. 87 (2014), no. 1-2, 125-133.
28. S. M. Jung, Hyers-Ulam-Rassias Stability of Functional Equations in Nonlinear Analysis, Springer, New York, 2011.
29. Th. M. Rassias, On the stability of linear mapping in Banach spaces, Proc. Amer. Math. Soc. 72 (1978), 297-300.
30. A. Roukbi, D. Zeglami, and S. Kabbaj, Hyers-Ulam stability of Wilson's functional equation, Math. Sci. Adv. Appl. 22 (2013), 19-26.
31. D. Zeglami, The superstability of a variant of Wilson's functional equations on an arbitrary group, Afr. Mat. 26 (2015), 609-617.
32. D. Zeglami and S. Kabbaj, On the superstability of trigonometric type functional equations, British J. Math. & Comput. Sci. 4 (2014), no. 8, 1146-1155.
33. D. Zeglami, S. Kabbaj, A. Charifi, and A. Roukbi, $\mu$-Trigonometric functional equations and Hyers-Ulam stability problem in hypergroups, in: Functional Equations in Mathematical Analysis, pp. 337-358, Springer, New York, 2012.
34. D. Zeglami, A. Roukbi, and S. Kabbaj, Hyers-Ulam stability of generalized Wilson's and d'Alembert's functional equations, Afr. Mat. 26 (2015), 215-223.
http://codeforces.com/problemset/problem/1005/D | D. Polycarp and Div 3
time limit per test
3 seconds
memory limit per test
256 megabytes
input
standard input
output
standard output
Polycarp likes numbers that are divisible by 3.
He has a huge number $s$. Polycarp wants to cut from it the maximum number of numbers that are divisible by $3$. To do this, he makes an arbitrary number of vertical cuts between pairs of adjacent digits. As a result, after $m$ such cuts, there will be $m+1$ parts in total. Polycarp analyzes each of the obtained numbers and finds the number of those that are divisible by $3$.
For example, if the original number is $s=3121$, then Polycarp can cut it into three parts with two cuts: $3|1|21$. As a result, he will get two numbers that are divisible by $3$.
Polycarp can make an arbitrary number of vertical cuts, where each cut is made between a pair of adjacent digits. The resulting numbers cannot contain extra leading zeroes (that is, the number can begin with 0 if and only if this number is exactly one character '0'). For example, 007, 01 and 00099 are not valid numbers, but 90, 0 and 10001 are valid.
What is the maximum number of numbers divisible by $3$ that Polycarp can obtain?
Input
The first line of the input contains a positive integer $s$. The number of digits of the number $s$ is between $1$ and $2\cdot10^5$, inclusive. The first (leftmost) digit is not equal to 0.
Output
Print the maximum number of numbers divisible by $3$ that Polycarp can get by making vertical cuts in the given number $s$.
Examples
Input
3121
Output
2
Input
6
Output
1
Input
1000000000000000000000000000000000
Output
33
Input
201920181
Output
4
Note
In the first example, an example set of optimal cuts on the number is 3|1|21.
In the second example, you do not need to make any cuts. The specified number 6 forms one number that is divisible by $3$.
In the third example, cuts must be made between each pair of digits. As a result, Polycarp gets one digit 1 and $33$ digits 0. Each of the $33$ digits 0 forms a number that is divisible by $3$.
In the fourth example, an example set of optimal cuts is 2|0|1|9|201|81. The numbers $0$, $9$, $201$ and $81$ are divisible by $3$. | 2019-10-13 21:48:12 | {"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 0, "mathjax_asciimath": 1, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.6381067037582397, "perplexity": 305.6067037761353}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2019-43/segments/1570986647517.11/warc/CC-MAIN-20191013195541-20191013222541-00239.warc.gz"} |
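The examples above can be reproduced with a well-known greedy pass (a sketch, not official editorial code): scan the digits left to right, tracking the digit sum of the current uncut group modulo 3 and the group's size, and cut whenever the current digit is divisible by 3, the group's sum is divisible by 3, or the group reaches 3 digits (by pigeonhole, any 3 such digits always contain a block divisible by 3, so one part can be credited).

```python
def max_div3_parts(s: str) -> int:
    count = 0  # parts divisible by 3 obtained so far
    rem = 0    # digit sum of the current uncut group, mod 3
    size = 0   # number of digits in the current uncut group
    for ch in s:
        d = int(ch)
        rem = (rem + d) % 3
        size += 1
        if d % 3 == 0 or rem == 0 or size == 3:
            # One multiple of 3 can be carved out of this group: either the
            # digit itself, the whole group, or (by pigeonhole) a sub-block.
            count += 1
            rem = size = 0
    return count

# The four sample tests from the statement:
print(max_div3_parts("3121"))          # 2
print(max_div3_parts("6"))             # 1
print(max_div3_parts("1" + "0" * 33))  # 33
print(max_div3_parts("201920181"))     # 4
```

The scan is a single pass over the digits, so it easily fits the $2\cdot10^5$ length limit.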
https://www.aniapalka.pl/b6a7dd1d6160.html | ### Bond Tests | SGS
Bond Rod Mill Grindability Test. The test determines the Bond Rod Mill Work Index which is used with Bond’s Third Theory of Comminution to calculate net power requirements when sizing ball mills*. Various correction factors may have to be applied. The test is a closed-circuit dry grindability test performed in a standard rod mill.
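Bond's third theory, referenced above, can be sketched numerically. This is only an illustration of the standard formula; the Wi, F80 and P80 figures below are made up, and real sizing would still need the correction factors the test mentions.

```python
from math import sqrt

def bond_specific_energy(work_index, f80_um, p80_um):
    """Bond's third theory: W = 10 * Wi * (1/sqrt(P80) - 1/sqrt(F80)).

    work_index (Wi) is in kWh/t; F80 and P80 are the 80%-passing sizes
    of feed and product in micrometres; the result is kWh per tonne.
    """
    return 10 * work_index * (1 / sqrt(p80_um) - 1 / sqrt(f80_um))

# Illustrative numbers only: Wi = 12 kWh/t, F80 = 10 000 um, P80 = 100 um
print(round(bond_specific_energy(12, 10_000, 100), 2))  # 10.8
```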
### Kinetic Energy Calculator
Kinetic Energy is the energy an object has owing to its motion. In classical mechanics, kinetic energy (KE) is equal to half of an object's mass (1/2*m) multiplied by the velocity squared. For example, if a an object with a mass of 10 kg (m = 10 kg) is moving at a velocity of 5 meters per second (v = 5 m/s), the kinetic energy is equal to 125 ...
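The worked example in this paragraph (m = 10 kg, v = 5 m/s giving 125 J) checks out directly:

```python
def kinetic_energy(mass_kg, velocity_ms):
    # KE = 1/2 * m * v^2 (classical mechanics)
    return 0.5 * mass_kg * velocity_ms ** 2

print(kinetic_energy(10, 5))  # 125.0 joules
```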
### PDF TECHNICAL NOTES 8 GRINDING R. P. King
mill is the energy consumption. The power supplied to the mill is used primarily to lift the load (medium and charge). Additional power is required to keep the mill rotating. 8.1.3 Power drawn by ball, semi-autogenous and autogenous mills A simplified picture of the mill load is shown in Figure 8.3. And this can be used to establish the essential ...
### Milling Formula Calculator - Carbide Depot
Milling Formula Calculator Milling Formula Interactive Calculator Solve for any subject variable in bold by entering values in the boxes on the left side of the equation and clicking the "Calculate" button.
### Calculation of energy required for grinding in a ball mill
The grinding-product size, P, in a Bond ball mill, which is given by the aperture size which passes 80% of the grinding product, as a function of the aperture size of the test screen P_k, can be expressed by the formula P = P_k · K_2.
### Calculate Your Power Consumption | SaveOnEnergy.com
Overall, calculating your energy bill is a matter of knowing your usage and what price you pay for energy. If cutting back your usage doesn’t work or you have a variable rate that makes costs hard to estimate, it might be time to look for a new plan.
### SAGMILLING.COM .:. Mill Critical Speed Determination
Mill Critical Speed Determination. The "Critical Speed" for a grinding mill is defined as the rotational speed where centrifugal forces equal gravitational forces at the mill shell's inside surface. This is the rotational speed where balls will not fall away from the mill's shell.
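A common approximation for this rotational speed (the constant is not given on the page itself, so treat it as an assumption) is Nc ≈ 42.3 / √D, with D the inside mill diameter in metres:

```python
from math import sqrt

def critical_speed_rpm(mill_diameter_m):
    # Nc ~= 42.3 / sqrt(D): the speed (rpm) at which centrifugal force at
    # the shell equals gravity, so the charge centrifuges instead of tumbling.
    return 42.3 / sqrt(mill_diameter_m)

print(critical_speed_rpm(4.0))  # 21.15 rpm for a 4 m diameter mill
```

Mills are typically operated at some fraction of this critical speed rather than at it.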
### Power | Work, energy and power | Siyavula
At the bottom of the stairs, we take both $$E_k$$ and the potential energy due to gravity, $$E_{p,g}$$, as initially zero; thus, $$W=E_{k,f}+ E_{p,g}=\frac{1}{2}mv_f^2+mgh$$, where $$h$$ is the vertical height of the stairs. Because all terms are given, we can calculate $$W$$ and then divide it by time to get power.
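With made-up numbers for the climber, the calculation described above looks like:

```python
G = 9.8  # gravitational acceleration, m/s^2 (assumed value)

def stair_climb_power(mass_kg, v_final, height_m, time_s):
    # W = (1/2) m v_f^2 + m g h, then P = W / t
    work = 0.5 * mass_kg * v_final ** 2 + mass_kg * G * height_m
    return work / time_s

# Hypothetical: 60 kg person, final speed 2 m/s, 3 m of stairs, in 4 s
print(stair_climb_power(60, 2, 3, 4))  # 471.0 W
```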
### how to calculate efficiency of ball mill
Grinding Efficiency Of Ball Mill Calculating Equation. Grinding efficiency of ball mill calculating equationechnical notes 8 grinding rp king mineral technologies,these mills exist in a variety of types rod, ball,,figure 83 simplified calculation of the torque required,steel balls in a ball mill, or large lumps of ore in an,of efficiency factors to account for differences between the.
### End Mill Speed and Feed Calculator - Martin Chick & Associates
I am creating a new calculator based on your feedback. Please fill out the form below with feeds and speeds that work for you and I will place them into a new database for all to use.
### A Method of C alculating Autogenous/ Semi-Autogenous
mills with the rod mill and ball mill laboratory work indices. Note, in Figure 1, that the rod mill product slope is less than 0.5 due to an extra amount of fines present being finer than 650 μm. These fines proceed to the ball mill, improving the ball mill efficiency. Also, the plotted rod mill P80 value, as shown in Figure 1, is 2900 ...
### Falling Water - Activity - TeachEngineering
Students drop water from different heights to demonstrate the conversion of water's potential energy to kinetic energy. They see how varying the height from which water is dropped affects the splash size. They follow good experiment protocol, take measurements, calculate averages and graph results. In seeing how falling water can be used to do work, they also learn how this energy ...
### how to calculate ball mill area
how to calculate ball mill area. The ball mill finish calculator can be used when an end mill with a full radius (a ball mill) is used on a contoured surface. The tool radius on each side of the cut will leave stock referred to as a scallop. The finish of the part will be determined by the height of the scallop, and the scallop will be determined by the stepover distance.
### PDF Calculation of The Power Draw of Dry Multi-compartment
Equation 1 is multiplied by the factor of 1.08. A multi-compartment ball mill consists of two or more grate discharge ball mills in series. The same equation is used to calculate the power that each ball mill compartment should draw. The total power is the sum of the power calculated for each of the separate compartments.
### Energy consumption calculator | kWh calculator
Energy consumption calculator. kWh calculator. Energy consumption calculation. The energy E in kilowatt-hours (kWh) per day is equal to the power P in watts (W) times number of usage hours per day t divided by 1000 watts per kilowatt:
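As a quick sketch of that formula (the appliance figures are hypothetical):

```python
def daily_energy_kwh(power_watts, hours_per_day):
    # E [kWh/day] = P [W] * t [h/day] / 1000 [W/kW]
    return power_watts * hours_per_day / 1000

print(daily_energy_kwh(100, 5))  # 0.5 kWh/day for a 100 W device on 5 h/day
```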
### Wind Turbine Calculator - Determine Your Energy Output & Sizing
How to Use the Wind Energy Calculator. Here is how to use our wind energy calculator in a step-by-step manner. The density of air will always remain constant on Earth at 1.23 kg/m3; Then, input the estimated wind speed for the area. Input the rotor diameter, which is a strong determinant of potential energy
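The quantities listed here map onto the standard wind-power formula P = ½·ρ·A·v³. The power coefficient cp below is an assumed efficiency factor, not something the page specifies:

```python
from math import pi

def wind_power_watts(rotor_diameter_m, wind_speed_ms,
                     air_density=1.23, cp=0.4):
    # P = cp * 1/2 * rho * A * v^3, with A the swept rotor area
    area = pi * (rotor_diameter_m / 2) ** 2
    return cp * 0.5 * air_density * area * wind_speed_ms ** 3
```

Because output scales with the cube of wind speed, small errors in the estimated wind speed dominate the result.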
### Mill Steel Charge Volume Calculation
How do you calculate the volume of a mill?
### how to calculate the energy consumption of a ball mill
Table 5.6 Calculation of ball mill circuit specific energy for HPGR - ball mill... Read more. Mill (grinding) - Wikipedia, the free encyclopedia ... Calculate and Select Ball Mill Ball Size for Optimum Grinding 2... Read more. Patent CN102716796A - Method for determining mill feeding ... October 10, 2012 ... Method for determining mill feeding ...
### USING THE SMC TEST® TO PREDICT COMMINUTION CIRCUIT PERFORMANCE
the ball mill is less energy efficient than a crusher and has to input more energy to do the same amount of size reduction). Hence from equation 7, to crush to the ball mill circuit feed size (x_2) in open circuit requires specific energy equivalent to: W_c = 1.19 × 4 × M_ic × (x_2^f(x_2) − x_1^f(x_1))
### Calculate Energy Estimated In A Ball Mill
Calculate Energy Estimated In A Ball Mill. Dynamic Stiffness Calculation Of Ball Mill. How to Size a Ball Mill - Design Calculator & Formula.
### ceramic ball mill calculator
Calculate and Select Ball Mill Ball Size for Optimum Grinding. In Grinding, selecting (calculate) the correct or optimum ball size that allows for the best and optimum/ideal or target grind size to be achieved by your ball mill is an important thing for a Mineral Processing Engineer AKA Metallurgist to do.
### ball mill specific energy calculate
How to calculate cement ball mill capacity - Quora. It's difficult to calculate the capacity of a machine or its maximum capacity; it depends on a lot of parameters. But you can calculate specific energy (unit kW/kg, e.g. 0.05 kW/kg) to know which mill is better. A low specific energy will mean more capacity for a mill with a stock motor power.
### calculate effective energy ball milling
calculate effective energy ball milling. Mill speed: no matter how large or small a mill - ball mill, ceramic lined mill, pebble mill, jar mill or laboratory jar rolling mill - its rotational speed is important to proper and efficient mill operation. Too low a speed and little energy is imparted on the product.
### Calories Burned Calculator | Exercise Calorie Counter
Enter your weight and the duration of exercise, click the calculate button and the calculator will calculate the calories burned. The calculations are based on the activities Metabolic Equivalent (MET). This is an estimate of how much energy an activity burns as a multiple of an individuals resting metabolic rate (RMR).
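The MET-based estimate mentioned here is commonly computed as kcal ≈ MET × body weight (kg) × duration (hours). A sketch, with illustrative activity numbers:

```python
def calories_burned(met, weight_kg, duration_hours):
    # kcal ~= MET * body weight in kg * duration in hours
    return met * weight_kg * duration_hours

# e.g. running at MET 8 for 30 minutes at 70 kg (illustrative numbers)
print(calories_burned(8, 70, 0.5))  # 280.0 kcal
```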
### Use the Principle of Conservation of Mechanical Energy to
Thanks to the principle of conservation of mechanical energy, you can use physics to calculate the final speed of an object that starts from rest. “Serving as a roller coaster test pilot is a tough gig,” you say as you strap yourself into the Physics Park’s new Bullet Blaster III coaster. “But someone has to […] | 2021-10-20 01:27:24 | {"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 1, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.45612379908561707, "perplexity": 2442.6917228745206}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2021-43/segments/1634323585290.83/warc/CC-MAIN-20211019233130-20211020023130-00189.warc.gz"} |
https://listserv.uni-heidelberg.de/cgi-bin/wa?A2=LATEX-L;680564f9.0102&FT=&P=3266007&H=N&S=a | ## LATEX-L@LISTSERV.UNI-HEIDELBERG.DE
Subject: Re: \int_eval:n versus \dim_eval:n/skip_eval:n [was Re: l3luatex module] From: Joseph Wright <[log in to unmask]> Reply To: Mailing list for the LaTeX3 project <[log in to unmask]> Date: Thu, 6 Jan 2011 08:18:37 +0000 Content-Type: text/plain Parts/Attachments: text/plain (28 lines)
On 06/01/2011 07:45, Will Robertson wrote:
> Oh! I was confused -- for some reason it was in my head that using \the before \glueexpr would strip it of its "plus minus" components. But this is not the case, of course.
>
> \the\glueexpr 1pt plus 1pt minus 1pt + 2pt plus -1pt minus 1pt\relax
>
> So I agree with you that adding \tex_the:D before \dim_eval and \skip_eval (and \muskip_eval which doesn't yet exist I think but it probably should) is the best idea.
I've gone back and forward through this, and I think in the end this is
the best plan. In the end, expl3 should be designed 'on its own merits',
and that may mean that some mixed plain TeX/expl3 cases are a little
awkward. For what we want, an expandable \int_eval:n makes most sense,
and by logical extension \dim_eval:n and \skip_eval:n should also be
expandable.
What we do need to do is to make sure that this is clear in the
documentation, as Philipp has pointed out. Something like
After two expansions, \int_eval:n yields an <integer denotation>, not an
<internal integer>. As a result, it will require suitable
termination if used in a \TeX-style integer assignment.
and a similar statement for dim and skip cases.
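The termination issue being documented can be illustrated with a short sketch — assumed expl3 code of the period, where `\tex_count:D` and `\scan_stop:` are the expl3 names for the primitives `\count` and `\relax`:

```latex
\ExplSyntaxOn
% \int_eval:n expands to a plain digit string (an <integer denotation>),
% so TeX's number scanner keeps reading any digits that follow it:
\tex_count:D 0 = \int_eval:n { 2 + 3 } 7           % scanner reads 57, not 5
% Explicit termination stops the scan where intended:
\tex_count:D 0 = \int_eval:n { 2 + 3 } \scan_stop: % assigns 5
\ExplSyntaxOff
```

An `<internal integer>` (e.g. a count register) would self-terminate the scan; a digit string does not, which is exactly why the documentation note above is needed.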
(BTW, I have some thoughts on muskips, but I think that will keep until
we need them in LaTeX3!)
--
Joseph Wright
https://www.proemion.com/us/mixed-fleet-telematics/blog/optimize-your-fleet.html | # Optimize your fleet by looking at a single metric: Operating Hours.
One of the main concerns of mixed fleet owners is the low amount of telemetry data received via the standard AEMP 2.0 (ISO 15143-3) interface.
This article shows how to be more efficient just by tracking a single metric, operating hours, available via AEMP. The following analysis of this metric will allow you to right-size your fleet's capacity to your business needs.
## Fleet usage patterns
The operating hours metric is the simplest and most widely available metric for machinery in the construction industry. This metric is relevant for machine owners as it drives the aging of the fleet and the need for maintenance, and serves as a proxy for work progress.
The following data is based on the operating hours of real machines operating worldwide in the construction industry. The figure shows the monthly average working hours of machines and the average number of days in a month the engine has been operating. Each dot represents a machine.
Machines in the first column of the plot are seldom utilized. Those machines are active only one day each month and for very few hours on average. On the other side, if we look at columns on the right, we see machines that are used every day of the month. And as we move our eyes upward, we see machines that, apart from being used daily, are used for long hours each day. The machine at the top right of the plot is used every day for an average of 22 hours.
## Completing the view on your fleet
Before going deeper into the analysis, let's point out a drawback of this representation: more than one machine can fall on the same point, and we would only see one dot in the plot. So it is hard to grasp the usage pattern of the whole fleet at a glance. We can complement the previous plot with a heat map showing how many machines fall on each dot.
By looking at the heat map, we can see that most machines work between 20 and 26 days per month, between 3 and 9 hours a day.
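The binning behind such a heat map can be sketched in a few lines. The fleet data here is randomly generated for illustration, and the bin widths (5 days, 3 hours) are assumptions, not values from the article:

```python
# Count how many machines fall into each (days/month, hours/day) cell.
from collections import Counter
import random

random.seed(0)
# Hypothetical fleet: (days worked per month, average hours per day) per machine.
fleet = [(random.randint(1, 30), random.uniform(0.5, 12.0))
         for _ in range(200)]

def cell(days, hours, day_bin=5, hour_bin=3):
    """Map a machine to a heat-map cell, e.g. (20 days, 7.2 h) -> (4, 2)."""
    return (days // day_bin, int(hours // hour_bin))

heat_map = Counter(cell(d, h) for d, h in fleet)
# heat_map[(4, 1)] = number of machines working 20-24 days/month, 3-6 h/day
```

Plotting `heat_map` as a grid of counts reproduces the view described above: dark cells mark the dominant usage pattern of the fleet.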
## Diving into your fleet data
Let's go back to the previous plot and do some further analysis. Now we can cluster machines by their usage patterns. We reflect these clusters in the colors of the dots:
• Green machines are low utilization machines
• Orange machines are high-utilization machines
• Purple machines are outliers: machines without a clear usage pattern that do not fit in the two main groups.
Additionally, we can draw some boundaries in the plot that let us look into more fine-grained groups of interest. We use color bands to define these boundaries.
Let's start by breaking the plot into two broad areas with the horizontal pink boundary. In the bottom area, we have machines that work less than 9 hours a day. The top area holds machines that operate more than 9 hours a day and are assumed to be working as part of a shift operation and exposed to considerably more use.
Now let's divide the plot into vertical sections with additional color bands. These bands separate machines into groups of machines that work
• up to 2 days a week (orange),
• up to 5 days a week (green),
• up to 6 days a week (red) and
• up to 7 days a week (blue).
## Looking into a specific machine model
Until now, we have been looking at the whole fleet. Let's now look at machines of a single model: Model Type 1.
The plot for Model Type 1 uses the same representation explained above. Instead of showing all fleet machines, this plot analyzes a subset of the fleet: only machines of Model Type 1.
We have also calculated a few aggregate values for the two clusters: green (low utilization machines) and orange (high utilization machines). We see that Model Type 1 machines from the green cluster are used close to 3 hours a day, 6 days a month, and they age at a rate of 202.94 hours a year.
Model Type 1 machines in the orange cluster are used nearly 6 hours a day, 22 days a month. And they age at a rate of 1566.52 hours a year.
It seems like machines in the green cluster are underutilized compared to those in the orange group.
## Analyzing excess capacity
Let's see what the excess capacity of the green cluster is, compared to the orange cluster. Machines of Model Type 1 in the green cluster have an excess capacity of:
$1,566.52 \frac{\text{hours}}{\text{year} \cdot \text{machine}} - 202.94 \frac{\text{hours}}{\text{year} \cdot \text{machine}} = 1,363.58 \frac{\text{hours}}{\text{year} \cdot \text{machine}}$
Given we have 48 machines in the green cluster, the total excess capacity is:
$48 \text{ machines} \cdot 1,363.58 \frac{\text{hours}}{\text{year} \cdot \text{machine}} = 65,451.84 \frac{\text{hours}}{\text{year}}$
If we wanted to express this excess capacity in number of machines, we could divide the excess capacity by the average aging of the machines in the orange cluster:
$\cfrac{65,451.84 \frac{\text{hours}}{\text{year}}}{1,566.52 \frac{\text{hours}}{\text{year} \cdot \text{machine}}} = 41.78 \text{ machines}$
So, rounding down, we have 41 machines in excess.
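The arithmetic above can be sketched directly (figures taken from the article; the subtraction 1,566.52 − 202.94 gives 1,363.58 hours per year per machine):

```python
# Excess-capacity calculation from the article, as straight arithmetic.
low_util_aging = 202.94    # hours/year per machine, green (low-utilization) cluster
high_util_aging = 1566.52  # hours/year per machine, orange (high-utilization) cluster
n_low_util = 48            # number of machines in the green cluster

excess_per_machine = high_util_aging - low_util_aging  # hours/year per machine
total_excess = n_low_util * excess_per_machine         # hours/year
excess_machines = total_excess / high_util_aging       # machine-equivalents
```

Expressing the excess in machine-equivalents (the last line) is what lets the article translate "spare hours" into a concrete fleet-sizing decision.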
## Optimizing the fleet size
Just by looking at operating hours, we can start making decisions about the sizing of the fleet. Of course, other questions are relevant to the final decision to resize the population of a particular machine model:
• Is this excess capacity a result of worksite distribution and the need for redundant machines, a lack of operators, or low demand?
• Do we want to maintain this extra capacity to deal with peaks in demand?
• Should we reduce the number of machines in this model?
• Should we buy a lower-end model or rent to cover the low utilization machines?
Analysis of machine operating hours is just a starting point for making good decisions about your fleet capacity and sizing. It will not give you direct answers, but it is the basis for building a decision framework supported by your business expertise.
http://nylogic.org/topic/forcing | # Topic Archive: forcing
CUNY Logic Workshop, Friday, February 10, 2017, 2:00 pm, GC 6417
# Functors and infinitary interpretations of structures
City University of New York
It has long been recognized that the existence of an interpretation of one countable structure B in another one A yields a homomorphism from the automorphism group Aut(A) into Aut(B). Indeed, it yields a functor from the category Iso(A) of all isomorphic copies of A (under isomorphisms) into the category Iso(B). In traditional model theory, the converse is false. However, when we extend the concept of interpretation to allow interpretations by $L_{\omega_1\omega}$ formulas, we find that now the converse essentially holds: every Borel functor arises from an infinitary interpretation of B in A, and likewise every Borel-measurable homomorphism from Aut(A) into Aut(B) arises from such an interpretation. Moreover, the complexity of an interpretation matches the complexities of the corresponding functor and homomorphism. We will discuss the concepts and the forcing necessary to prove these results and other corollaries.
Set theory seminar, Friday, February 10, 2017, 10:00 am, GC 6417
# A countable ordinal definable set of reals without ordinal definable elements
The City University of New York
In 2010, a question on MathOverflow asked whether it is possible that a countable ordinal definable set of reals has elements that are not ordinal definable. It is easy to see that every element of a finite ordinal definable set of reals is itself ordinal definable. Also it is consistent that there is an uncountable ordinal definable set of reals without ordinal definable elements. It turned out that the question for countable sets of reals was not known. It was finally solved by Kanovei and Lyubetsky in 2014, who showed, using a forcing extension by a finite-support product of Jensen reals, that it is consistent to have a countable ordinal definable set of reals without ordinal definable elements. In the talk, I will give full details of their argument.
An extended abstract is available on my blog here.
University of Wisconsin
Noah Schweber received his doctorate from the University of California-Berkeley in 2016, under the supervision of Antonio Montalban. He now holds a postdoctoral position at the University of Wisconsin.
Set theory seminar, Friday, November 18, 2016, 10:00 am, GC 6417
# Characterizing forcing extensions
I shall present a proof of a theorem of Bukovský from 1973 that characterizes the set-forcing extensions among all pairs of ZFC models $M\subseteq N$: these are precisely the pairs satisfying a uniform covering property. His result has recently resurfaced in the study of set-theoretic geology and can, for example, also be used to give a conceptual proof of (a version of) the intermediate model theorem.
Set theory seminar, Friday, November 11, 2016, 10:00 am, GC 6417
# $V$ need not be a forcing extension of $\mathrm{HOD}$ or of the mantle
CUNY New York City College of Technology
In 1972 Vopenka showed that $V$ is a union of set-generic extensions of $\mathrm{HOD}$ by establishing that every set in $V\setminus\mathrm{HOD}$ is set generic over $\mathrm{HOD}$. It is natural to consider whether that union can be replaced by a single forcing, possibly a proper class, over $\mathrm{HOD}$. In 2012 Friedman showed that $V$ is a class forcing extension of $\mathrm{HOD}$ by a partial order definable in $V$ – however, this leaves open the question of whether such a partial order can be defined in $\mathrm{HOD}$ itself. In this talk I will show that the qualifier ‘in $V$’ is necessary in Friedman’s theorem, by producing a model which is not class generic over $\mathrm{HOD}$ for any forcing definable in $\mathrm{HOD}$.
In the area of set theory known as set-theoretic geology, the mantle $M$ (the intersection of all grounds) is an inner model that enjoys a relationship to $V$ similar to that of $\mathrm{HOD}$, but ‘in the opposite direction’ – every set not in $M$ is omitted by a ground of $V$. Does it follow that we can build $V$ up over $M$ by iteratively adding those sets back in via forcing? In particular, does it follow that $V$ is a class forcing extension of $M$? The example produced in this talk will show that the answer is no – there is a model of set theory $V$ which is not a class forcing extension of $M$ by any forcing definable in $M$.
CUNY Logic Workshop, Friday, November 4, 2016, 2:00 pm, GC 6417
# Hierarchies of forcing axioms
The City University of New York
I will give an overview over several hierarchies of forcing axioms, with an emphasis on their versions for subcomplete forcing, but in the instances where the concepts are new, their versions for more established classes of forcing, such as proper forcing, are of interest as well. The hierarchies are the traditional one, reaching from the bounded to the unbounded forcing axiom (i.e., versions of Martin’s axiom for classes other than ccc forcing), a hierarchy of resurrection axioms (related to work of Tsaprounis), and (inspired by work of Bagaria, Schindler and Gitman) the “virtual” versions of these hierarchies: the weak bounded forcing axiom hierarchy and the virtual resurrection axiom hierarchy. I will talk about how the levels of these hierarchies are intertwined, in terms of implications or consistency strength. In many cases, I can provide exact consistency strength calculations, which build on techniques to “seal” square sequences, using subcomplete forcing, in the sense that no thread can be added without collapsing $\omega_1$. This idea goes back to Todorcevic, in the context of proper forcing (which is completely different from subcomplete forcing).
Set theory seminar, Friday, September 9, 2016, 10:00 am, GC 6417
# Set-theoretic geology and the downward-directed grounds hypothesis: part II
The City University of New York
I will continue presenting Toshimichi Usuba’s recent proof of the strong downward-directed grounds hypothesis. See the main abstract at Set-theoretic geology and the downward directed ground hypothesis.
Set theory seminar, Friday, September 2, 2016, 10:00 am, GC 6417
# Set-theoretic geology and the downward-directed grounds hypothesis
The City University of New York
Forcing is often viewed as a method of constructing larger models extending a given model of set theory. The topic of set-theoretic geology inverts this perspective by investigating how the current set-theoretic universe $V$ might itself have arisen as a forcing extension of an inner model. Thus, an inner model $W\subset V$ is a ground of $V$ if we can realize $V=W[G]$ as a forcing extension of $W$ by some $W$-generic filter $G\subset\mathbb Q\in W$. Reitz had inquired in his dissertation whether any two grounds of $V$ must have a common deeper ground. Fuchs, myself and Reitz introduced the downward-directed grounds hypothesis, which asserts a positive answer, even for any set-indexed collection of grounds, and we showed that this axiom has many interesting consequences for set-theoretic geology.
I shall give a complete detailed account of Toshimichi Usuba’s recent proof of the strong downward-directed grounds hypothesis. This breakthrough result answers what had been for ten years the central open question in the area of set-theoretic geology and leads immediately to numerous consequences that settle many other open questions in the area, as well as to a sharpening of some of the central concepts of set-theoretic geology, such as the fact that the mantle coincides with the generic mantle and is a model of ZFC. I shall also present Usuba’s related result that if there is a hyper-huge cardinal, then there is a bedrock model, a smallest ground. I find this to be a surprising and incredible result, as it shows that large cardinal existence axioms have consequences on the structure of grounds for the universe.
Set Theory Day, Friday, March 11, 2016, 9:30 am, GC 4102 (Science Center)
# Virtual large cardinals
The City University of New York
Given a very large cardinal property $\mathcal A$, e.g. supercompact or extendible, characterized by the existence of suitable set-sized embeddings, we define that a cardinal $\kappa$ is virtually $\mathcal A$ if the embeddings characterizing $\mathcal A$ exist in some set-forcing extension. In this terminology, the remarkable cardinals introduced by Schindler, which he showed to be equiconsistent with the absoluteness of the theory of $L(\mathbb R)$ under proper forcing, are virtually supercompact. We introduce the notions of virtually extendible, virtually $n$-huge, and virtually rank-into-rank cardinals and study their properties. In the realm of virtual large cardinals, we can even go beyond the Kunen Inconsistency because it is possible that in a set-forcing extension there is an embedding $j:V_\delta^V\to V_\delta^V$ with $\delta>\lambda+1$, where $\lambda$ is the supremum of the critical sequence. The virtual large cardinals are much smaller than their (possibly inconsistent) counterparts. Silver indiscernibles possess all the virtual large cardinal properties we will consider, and indeed the large cardinals are downward absolute to $L$. We give a tight measure on the consistency strength of the virtual large cardinals in terms of the $\alpha$-iterable cardinals hierarchy. Virtual large cardinals can be used, for instance, to measure the consistency strength of the Generic Vopěnka’s Principle, introduced by Bagaria, Schindler, and myself, which states that for every proper class $\mathcal C$ of structures of the same type, there are $B\neq A$ both in $\mathcal C$ such that $B$ embeds into $A$ in some set-forcing extension. This is joint work with Ralf Schindler.
Slides
Set Theory Day, Friday, March 11, 2016, 2:45 pm, GC 4102 (Science Center)
# Killing measurable and supercompact cardinals softly
This talk follows the theme of killing-them-softly between set-theoretic universes. The main theorems in this theme show how to force to reduce the large cardinal strength of a cardinal to a specified desired degree, for a variety of large cardinals including inaccessible, Mahlo, measurable and supercompact. The killing-them-softly theme is about both forcing and the gradations in large cardinal strength. This talk will focus on measurable and supercompact cardinals, and follows the larger theme of exploring interactions between large cardinals and forcing which is central to modern set theory.
Slides
Set theory seminar, Friday, February 26, 2016, 10:00 am, GC 6417
# Singular in V, regular and non-measurable in HOD
Kingsborough Community College, CUNY
Getting a model where $\kappa$ is singular in $V$ but measurable in ${\rm HOD}$ is somewhat straightforward however ensuring that $\kappa$ is regular but not measurable in ${\rm HOD}$ is a surprisingly more difficult problem. Magidor navigated around the issues and I will present his result starting with one measurable. His technique can be extended for set many cardinals.
Kurt Godel Research Center
Vera Fischer completed her doctorate in 2008 at York University, under the supervision of Juris Steprans, and now is an assistant at the Kurt Gödel Research Center in Vienna. She studies combinatorial set theory, the structure of the real line, and forcing.
Set theory seminar, Friday, November 6, 2015, 10:00 am, GC 3212
# The rearrangement number
The City University of New York
The Riemann rearrangement theorem states that a convergent real series $\sum_n a_n$ is absolutely convergent if and only if the value of the sum is invariant under all rearrangements $\sum_n a_{p(n)}$ by any permutation $p$ on the natural numbers; furthermore, if a series is merely conditionally convergent, then one may find rearrangements for which the new sum $\sum_n a_{p(n)}$ has any desired (extended) real value or which become non-convergent. In recent joint work of Andreas Blass, Will Brian, myself, Michael Hardy and Paul Larson, based on an exchange in reply to Hardy's MathOverflow question on the topic, we investigate the minimal size of a family of permutations that can be used in this manner to test an arbitrary convergent series for absolute convergence. Specifically, we define the rearrangement number $\mathfrak{rr}$, a new cardinal characteristic of the continuum, to be the smallest cardinality of a set $P$ of permutations of the natural numbers, such that if a convergent real series $\sum_n a_n$ remains convergent to the same value after any rearrangement $\sum_n a_{p(n)}$ by a permutation $p$ in $P$, then it is absolutely convergent. The corresponding rearrangement number for sums, denoted $\mathfrak{rr}_\Sigma$, is the smallest cardinality of a family $P$ of permutations, such that if a series $\sum_n a_n$ is conditionally convergent, then there is some rearrangement $\sum_n a_{p(n)}$, by a permutation $p$ in $P$, for which the series converges to a different value. We investigate the basic properties of these numbers, and explore their relations with other cardinal characteristics of the continuum. Our main results are that $\mathfrak{b}\leq\mathfrak{rr}\leq\operatorname{non}(\mathcal{M})$, that $\mathfrak{d}\leq\mathfrak{rr}_\Sigma$, and that $\mathfrak{b}<\mathfrak{rr}$ is relatively consistent.
Set theory seminar, Friday, May 1, 2015, 10:00 am, GC 6417
# Diamond* Coding
CUNY New York City College of Technology
Coding information into the structure of the universe is a forcing technique with many applications in set theory. To carry it out, we need a property that: i) can be easily switched on or off at (e.g.) each regular cardinal in turn, and ii) is robust with regard both to small and to highly-closed forcing. GCH coding, controlling the success or failure of the GCH at each cardinal in turn, is the most widely used, and for good reason: there are simple forcings that turn it on and off, and it is easily seen to be unaffected by small or highly-closed forcing. However, it does have limitations – most obviously, GCH coding is of necessity incompatible with the GCH itself. In this talk I will present an alternative coding using the property Diamond*, a variant of the classic Diamond. I will discuss Diamond* and demonstrate that it satisfies the requirements for coding while preserving the GCH.
Although the basic techniques for controlling Diamond* have been known for some time, to my knowledge the first use of Diamond* as a coding axiom was by Andrew Brooke-Taylor in his work on definable well-orders of the universe. I will follow the excellent exposition presented in his dissertation.
Set theory seminar, Friday, April 24, 2015, 2:00 pm, GC 6417
# Dissertation Defense: Force to change large cardinals
This will be the dissertation defense of the speaker. There will be a one-hour presentation, followed by questions posed by the dissertation committee, and afterwards including some questions posed by the general audience. The dissertation committee consists of Joel David Hamkins (supervisor), Gunter Fuchs, Arthur Apter, Roman Kossak and Philipp Rothmaler.
Set theory seminar, Friday, March 20, 2015, 12:00 pm, GC 6417
# Force to change large cardinal strength
Suppose $\kappa\in V$ is a cardinal with large cardinal property $A$. In this talk, I will present several theorems which exhibit a notion of forcing $\mathbb P$ such that if $G\subseteq \mathbb P$ is $V$-generic, then the cardinal $\kappa$ no longer has property $A$ in the forcing extension $V[G]$, but has as many large cardinal properties below $A$ as possible. I will also introduce new large cardinal notions and degrees for large cardinal properties.
This talk is the speaker’s dissertation defense.
Set theory seminar, Friday, November 7, 2014, 12:00 pm, 6417
# News on the Solid Core
The City University of New York
Set-theoretic geology, a line of research jointly created by Hamkins, Reitz and myself, introduced some inner models which result from inverting forcing in some sense. For example, the mantle of a model of set theory V is the intersection of all inner models of which V is an extension by set-forcing. It was an initial, naive hope that one might arrive at a model that is in some sense canonical, but one of the main results on set-theoretic geology is that this is not so: every model of set theory V has a class forcing extension V[G] so that the mantle, as computed in V[G], is V. So quite literally, the mantle of a model of set theory can be anything.
In an attempt to arrive at a concept that fits in with the general spirit of set-theoretic geology, but that stands a chance of being canonical, I defined a set to be solid if it cannot be added to an inner model by set-forcing, and I termed the union of all solid sets the “solid core”.
I will present some results on the solid core which were obtained in recent joint work with Ralf Schindler, and which show that the solid core indeed is a canonical inner model, assuming large cardinals (more precisely, if there is an inner model with a Woodin cardinal), but that it is not as canonical as one might have hoped without that assumption.
Set theory seminar, Friday, May 9, 2014, 10:00 am
# Additional remarks on remarkable cardinals
The City University of New York
This is a continuation of the earlier Introduction to remarkable cardinals lecture. The speaker will continue to discuss the various equivalent characterizations of remarkable cardinals and their relationship to other large cardinal notions.
Set theory seminar, Friday, May 2, 2014, 10:15 am, GC 6417
# Namba-like Forcings at Successors of Singular Cardinals
The City University of New York
Following up on Peter Koepke’s Logic Workshop lecture of March 22, 2013, I will discuss Namba-like forcings which either exist or can be forced to exist at successors of singular cardinals.
Slides
Set theory seminar, Friday, April 11, 2014, 10:00 am, GC 6417
# Structural Connections Between a Forcing Class and its Modal Logic
Bronx Community College, CUNY
This talk is on recent work with Joel Hamkins and Benedikt Loewe on ways in which finite-frame properties of specific modal logics can be combined with assertions in ZFC to show that these modal logics are related to those which arise from interpreting $\Gamma$-forcing extensions of a model of ZFC as possible worlds of a Kripke model, where $\Gamma$ can be any of several classes of notions of forcing.
https://patmat.com/brooke-alvinston/marginal-cost-and-average-costs-how-to-find-variable-cost.php | # Brooke-Alvinston Marginal Cost And Average Costs How To Find Variable Cost
## Average Cost vs Marginal Cost Top 6 Best Differences
### How is each of the following calculated: marginal cost, average total cost and average variable cost?
Relationship Between Marginal Cost And Average Total Cost: the average cost is the total cost divided by the number of goods produced, i.e., the cost per unit. It is equal to the sum of the average variable cost and the average fixed cost. Question: how is each of the following calculated: marginal cost, average total cost and average variable cost? Definition of cost: in business, cost is defined as the total expenses incurred to produce a good or service.
### Average Cost vs Marginal Cost: Top 6 Differences
Marginal Cost & Average Total Cost (Fundamental Finance). Average Cost vs Marginal Cost – key differences: the average cost is the total cost of goods divided by the total number of goods, whereas marginal cost is the increase in cost from producing one more unit of the product or service. The marginal cost formula represents the incremental costs incurred when producing additional units of a good or service: marginal cost = (change in costs) / (change in quantity). The variable costs included in the calculation are labor and materials, plus increases in fixed costs, administration, and overhead.
The marginal cost includes all costs incurred to produce one additional unit of the firm's product, and is not broken down into fixed and variable components. Average cost: the average cost can be separated into average variable cost, which includes costs related to the rate of production, and average fixed cost, which includes only costs unrelated to output. Average cost and marginal cost are different cost techniques used to calculate the production cost of output. Breaking costs down in this way is important because each technique offers its own insight to the firm.
Note that the average fixed cost curve is always decreasing, and also note that the difference between average total cost and average variable cost is average fixed cost — so $AFC_b$ at $q_b$ is less than $AFC_a$ at $q_a$. As you produce more output, average variable cost and average total cost get closer to one another. The average variable cost is a firm's variable cost per unit of output. Most AVC functions will start decreasing and then at one point begin to rise.
The marginal cost curve in Fig. 13.8 decreases sharply at smaller outputs Q and reaches a minimum. As production is expanded to a higher level, it begins to rise at a rapid rate. Long run marginal cost curve: the long run marginal cost curve, like the long run average cost curve, is U-shaped.
Average total cost will continue falling so long as average variable cost does not rise. Even if average variable cost is rising, the average total cost need not rise: the increase in average variable cost may be less than the fall in average fixed cost. Explore how to think about average fixed, variable, and marginal costs, and how to calculate them, using a firm's production function and costs. When we start to hire a few engineers, we're able to spread out our fixed costs: even though our average variable cost per line of code is going up, our fixed cost per unit is going down.
Find and interpret the marginal average cost when 20 units are produced: each of the 20 units costs an average of 0.1386 hundred dollars, or $13.86. Here they have used the fact that dividing by Q is the same as multiplying by 1/Q. Definition: the average variable cost represents the total variable cost per unit, including materials and labor, in short-term production, calculated by dividing total variable costs by total output. Hence, a change in the output (Q) causes a change in the variable cost. Marginal costs are a function of the total cost of production, which includes fixed and variable costs.
Fixed costs of production are constant, occur regularly, and do not change in the short-term It shows that average fixed cost can also be defined as the difference between average total cost and average variable cost: $$\text{AFC}\ =\ \text{ATC}\ -\ \text{AVC}$$ Example and Graph. Sucrose Farms is engaged in cultivation of sugar cane. They have hired 3 … 2007-2-8 · Understanding the Relationship between Marginal Cost and Average Variable Cost ª Review: Marginal cost (MC) is the cost of producing an extra unit of output. Review: Average variable cost (AVC) is the cost of labor per unit of output produced. When MC is below AVC, MC pulls the average down. When MC is above AVC, MC is pushing Actually marginal cost is the cost of a single good....if u multiply it with no of goods u will get total cost....but what if no of goods are not given and marginal cost is given as a function....then simply integrate the function nd the equation 2 days ago · In the last few posts in the series we covered elasticity of demand, consumer demand and types of profits.In this posts we will look into the different types of costs and how they vary with output. Fixed costs, marginal cost,total cost, average cost and variable cost: 2019-10-2 · Definition: The average variable cost represents the total variable cost per unit, including materials and labor, in short-term production calculated by dividing total variables costs by total output. Hence, a change in the output (Q) causes a change in the variable cost. What Does Average Variable Cost Mean? What is the definition of average variable cost? 2020-2-3 · In such a situation both the average cost and marginal cost slope downward, but the downward slope of MC curve is more than that of AC curve. From Figure 11 it becomes clear that when due to the operation of the law of increasing returns, average cost falls, marginal cost also falls. 
Explore how to think about average fixed, variable, and marginal costs, and how to calculate them, using a firm's production function and costs in this video. When we start to hire a few engineers, we're able to spread out our fixed costs, even though our average variable cost per line of code are going up, our fixed costs are going down. Actually marginal cost is the cost of a single good....if u multiply it with no of goods u will get total cost....but what if no of goods are not given and marginal cost is given as a function....then simply integrate the function nd the equation 2019-3-15 · Total Cost (TC) describes the total economic cost of production. It is composed of variable, and fixed, and opportunity costs. Fixed costs The accounting costs which do not change based on your level of output Always determined to be fixed in the short term; if you could not change it on short... Question: How is each of the following calculated: marginal cost, average total cost and average variable cost? Definition of Cost: In business, cost is defined as the total expenses incurred to Explore how to think about average fixed, variable, and marginal costs, and how to calculate them, using a firm's production function and costs in this video. When we start to hire a few engineers, we're able to spread out our fixed costs, even though our average variable cost per line of code are going up, our fixed costs are going down. Marginal Social Costs & Marginal Social Benefits The average variable cost is a firm's variable cost per unit of output. Most AVC functions will start decreasing and then at one point begin to 2 days ago · In the last few posts in the series we covered elasticity of demand, consumer demand and types of profits.In this posts we will look into the different types of costs and how they vary with output. 
Fixed costs, marginal cost,total cost, average cost and variable cost: ### Marginal Cost & Average Total Cost Fundamental Finance Average and marginal cost Policonomics. Explore how to think about average fixed, variable, and marginal costs, and how to calculate them, using a firm's production function and costs in this video. When we start to hire a few engineers, we're able to spread out our fixed costs, even though our average variable cost per line of code are going up, our fixed costs are going down., 2020-1-29 · Note that the average fixed cost curve is always decreasing and also note that the difference between average total cost and average variable cost is average fixed cost — so AFC b at q b is less than AFC a at q a. As you produce more output, average variable cost and average total cost get closer to one another. Finally, marginal cost. Average Cost vs Marginal Cost Top 6 Differences (With. Average variable cost is significant in that it is a crucial factor in a given firm’s choice about whether to continue operating. Specifically, the average variable cost should be lower than the marginal revenue in order for the firm to continue operating profitably over time., Yes. Marginal cost is the change in variable cost due to the change in quantity produced. Technically it is the change in total cost due to the change in quantity produced but fixed costs by definition do not vary with changes in quantity produced.... ### Marginal Cost (MC) Definition - Example - Formula What is the relationship between marginal cost and average. **Fixed Cost, Variable Cost, Average Cost and Marginal Cost** Hi, Expenses that do not change in proportion with the activity of business. within a relevant period is called fixed cost. Variable costs are expenses that change in proportion to the activity of business. Total cost divided by the number of goods produced is average costs. 
2020-2-5 · Marginal cost and average cost are two important types of costs incurred by a firm in production process. The Marginal cost implies the additional cost incurred by a firm for producing one more unit of a commodity.. • Marginal Cost Formula Definition Examples Calculate • What is the relationship between marginal cost and average • Average Cost vs Marginal Cost Top 6 Best Differences • How is each of the following calculated marginal cost • You'll want to calculate the average cost of the extra ingredients and labor necessary to make the sandwich. Then, you'll need to use the variable costs and fixed costs to calculate your marginal cost. If the marginal cost associated with a sandwich is too high to … Find and interpret the marginal average cost when 20 units are produced. This means that each of the 20 units costs an average of .1386 hundred dollars or$13.86. In this board they have used the fact that dividing by Q is the same as multiplying by 1/ Q .
The marginal cost includes all costs incurred to produce one additional unit of product of firm and cannot be discriminated in fixed or variable costs. Average Cost. The average costs can be separated in average variable cost, where include costs related to velocity of production and average fixed cost where, only includes costs not related to Average cost vs Marginal cost is the different type of cost technique used to calculate the production cost of output or product. Breaking down of costs into an average cost and marginal cost is important because each technique offers its own insight to the firm. Now we learn the concept of Average Cost vs Marginal Cost. Average Cost:
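The definitions above (marginal cost as change in cost over change in quantity, and AFC as the gap between ATC and AVC) can be sanity-checked with a short script. This is an illustrative sketch only: the cost function C(q) = 100 + 6q + 0.5q², and every name in it, is invented for the example rather than taken from any of the quoted sources.

```python
# Hypothetical total-cost function: fixed cost 100, variable cost 6q + 0.5q^2.
def total_cost(q):
    return 100 + 6 * q + 0.5 * q ** 2

def variable_cost(q):
    return total_cost(q) - total_cost(0)   # strip out the fixed cost

def marginal_cost(q):
    # Discrete marginal cost: change in cost divided by a one-unit change in quantity.
    return total_cost(q) - total_cost(q - 1)

def atc(q):
    return total_cost(q) / q               # average total cost

def avc(q):
    return variable_cost(q) / q            # average variable cost

def afc(q):
    return total_cost(0) / q               # average fixed cost

q = 20
# AFC is exactly the gap between ATC and AVC, matching the formula above.
assert abs(afc(q) - (atc(q) - avc(q))) < 1e-9
print(marginal_cost(q), atc(q), avc(q), afc(q))   # → 25.5 21.0 16.0 5.0
```

Note how AFC (here 5.0 at q = 20) shrinks as q grows, which is why ATC and AVC converge at high output.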
The total cost of a business is composed of fixed costs and variable costs, but only the variable costs affect the marginal cost of production. Total cost, fixed cost, and variable cost each reflect different aspects of the cost of production over the entire quantity of output being produced, and they are measured in dollars. In contrast, marginal cost, average cost, and average variable cost are costs per unit; in the haircut example they are measured as cost per haircut.

In a nutshell, average total cost (ATC) is defined as the sum of all production costs divided by the quantity of output produced. Note that ATC may vary as the level of output changes; this has to do with increasing or decreasing marginal costs.
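One claim worth demonstrating is that fixed costs do not move marginal cost at all. The sketch below uses a made-up quadratic cost function; only the fixed component differs between the two firms, and every marginal cost comes out identical.

```python
# Sketch: marginal cost is unaffected by the size of the fixed cost.
# The cost functions below are invented for illustration only.
def make_total_cost(fixed):
    def total_cost(q):
        return fixed + 3 * q + 0.25 * q ** 2   # same variable part either way
    return total_cost

def marginal_cost(total_cost, q):
    # Discrete marginal cost of the q-th unit.
    return total_cost(q) - total_cost(q - 1)

low_fixed = make_total_cost(10)
high_fixed = make_total_cost(10_000)

# Wildly inflating the fixed cost leaves every marginal cost unchanged.
for q in range(1, 50):
    assert marginal_cost(low_fixed, q) == marginal_cost(high_fixed, q)
print("marginal cost at q=10:", marginal_cost(low_fixed, 10))   # → 7.75
```

This is the computational face of the statement that fixed costs, by definition, do not vary with quantity produced.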
## Total, average and marginal costs (Central Economics Wiki)

The average cost is the total cost divided by the number of goods produced, so it is cost per unit; it is also equal to the sum of the average variable costs and average fixed costs. When the marginal cost is below the average total cost or the average variable cost, the corresponding average is declining; when the marginal cost is above the average cost, the average cost is rising.
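The rule that marginal cost below the average pulls the average down (and above it, pushes it up) can be simulated directly. The marginal-cost series below is arbitrary demonstration data, not from any of the quoted sources.

```python
# Sketch of the MC/average relationship: whenever the marginal cost of the next
# unit is below the current average, adding that unit drags the average down,
# and when it is above, it pushes the average up.
marginal_costs = [9, 7, 5, 4, 6, 8, 11, 14]   # roughly U-shaped MC by unit

total = 0.0
prev_avg = None
for units, mc in enumerate(marginal_costs, start=1):
    total += mc
    avg = total / units
    if prev_avg is not None:
        if mc < prev_avg:
            assert avg < prev_avg    # MC below the average pulls it down
        elif mc > prev_avg:
            assert avg > prev_avg    # MC above the average pushes it up
    prev_avg = avg
print("final average cost:", prev_avg)   # → 8.0
```

The same mechanism explains why the MC curve crosses the AVC and ATC curves exactly at their minimum points.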
A typical cost table works through, column by column, the marginal product of labor (MPL), the marginal cost, the average variable cost, the average fixed cost, and the average total cost.

The marginal cost formula represents the incremental costs incurred when producing additional units of a good or service: marginal cost = (change in costs) / (change in quantity). The variable costs included in the calculation are labor and materials, plus any increases in fixed costs such as administration and overhead.

Think of marginal cost as the cost of the last unit, or what it costs to produce one more unit. It is hard to find exactly what the cost of the last unit is, but it is not hard to find the average cost of a group of a few more units: simply take the change in costs from a previous level divided by the change in quantity from that level.

To calculate average variable costs, divide variable costs by Q. Since variable costs here are 6Q, average variable cost is 6. Notice that average variable cost then does not depend on the quantity produced and is the same as marginal cost; this is one of the special features of the linear model, and it won't hold with a nonlinear formulation.
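The linear special case above (variable cost 6Q, so AVC = MC = 6 at every quantity) can be verified in a few lines and contrasted with a nonlinear cost function. The quadratic contrast case is invented for this sketch.

```python
# Linear variable cost from the text: VC(q) = 6q.
def variable_cost_linear(q):
    return 6 * q

# A made-up nonlinear contrast case, where AVC and MC come apart.
def variable_cost_quadratic(q):
    return 6 * q + q ** 2

def avc(vc, q):
    return vc(q) / q            # average variable cost

def mc(vc, q):
    return vc(q) - vc(q - 1)    # discrete marginal cost of the q-th unit

# In the linear model, AVC and MC are both 6 regardless of quantity.
for q in range(1, 20):
    assert avc(variable_cost_linear, q) == 6 == mc(variable_cost_linear, q)

# In the nonlinear model the two measures differ.
assert avc(variable_cost_quadratic, 10) != mc(variable_cost_quadratic, 10)
print(avc(variable_cost_quadratic, 10), mc(variable_cost_quadratic, 10))   # → 16.0 25
```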
### Marginal Cost Formula — Step by Step Calculation

Is it possible to derive variable cost from marginal cost? Yes, and this is an important point: because the partial derivative with respect to quantity is zero for fixed costs (these are independent of production levels), the partial derivatives of variable and total costs are the same. Marginal cost, as any derivative, is tangent to the total and variable cost curves at each point, so integrating the marginal cost function recovers the variable cost.

The marginal cost curve in fig. (13.8) decreases sharply at smaller output Q and reaches a minimum; as production is expanded to a higher level, it begins to rise at a rapid rate. The long-run marginal cost curve, like the long-run average cost curve, is U-shaped.
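The point about integrating a marginal cost function to recover variable cost can be illustrated numerically. Here MC(q) = 3q² is a made-up example whose exact variable cost is q³; a simple midpoint-rule integration of MC reproduces it closely.

```python
# Made-up marginal cost function whose antiderivative (variable cost) is q^3.
def marginal_cost(q):
    return 3 * q ** 2

def integrate(f, a, b, steps=100_000):
    # Plain midpoint rule; accurate enough for a smooth toy function like this.
    h = (b - a) / steps
    return sum(f(a + (i + 0.5) * h) for i in range(steps)) * h

variable_cost_at_10 = integrate(marginal_cost, 0, 10)
print(variable_cost_at_10)                     # close to 10^3 = 1000
assert abs(variable_cost_at_10 - 1000) < 1e-3  # numerical integral matches q^3
```

With a symbolic algebra package one could integrate MC exactly, but the numerical version already makes the point: the area under the marginal cost curve from 0 to Q is the variable cost at Q.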
| 2021-11-27 08:27:20 | {"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 1, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.38370466232299805, "perplexity": 573.739544784249}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2021-49/segments/1637964358153.33/warc/CC-MAIN-20211127073536-20211127103536-00509.warc.gz"}
http://aliceinfo.cern.ch/ArtSubmission/node/3481 | # Figure 20
Centre-of-mass energy dependence of the $q$-moments ($q =$ 2 to 4, left-hand scale, and $q = 5$, right-hand scale) of the multiplicity distributions for NSD events in three different pseudorapidity intervals ($|\eta| < 1.5$ top, $|\eta| < 1.0$ middle and $|\eta| < 0.5$ bottom). ALICE data (black) are compared to UA5 (red) for $|\eta| < 0.5$ and $|\eta| < 1$, at $\sqrt{s} = 0.9$ TeV, and with CMS (blue) at $\sqrt{s} =$ 0.9 and 7 TeV for $|\eta| < 0.5$. The error bars represent the combined statistical and systematic uncertainties. The data at 0.9 and 7 TeV are slightly displaced horizontally for visibility. | 2018-03-17 20:36:11 | {"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.862538754940033, "perplexity": 2135.8910367853696}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2018-13/segments/1521257645310.34/warc/CC-MAIN-20180317194447-20180317214447-00764.warc.gz"} |
http://mathhelpforum.com/calculus/82960-very-dumb-question.html | 1. ## very dumb question
a) find the first four terms of the Maclaurin series of ln(1+x)
b) then the next question asked to use the information from above (part a) "to write the formula for the series of ln(x+1), then find the interval of convergence for the series."
Again this was a question on a test. I got part "a" right but could not get part b. Thanks again in advance.
2. ## solution
See attachment
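(The attachment itself did not survive extraction. The following is a reconstruction of the standard argument, not the original attachment, in the notation of part a.)

```latex
% Maclaurin series of ln(1+x): differentiate, expand the geometric
% series, and integrate term by term.
\[
  \frac{d}{dx}\ln(1+x) = \frac{1}{1+x} = \sum_{n=0}^{\infty} (-1)^n x^n
  \qquad (|x| < 1),
\]
\[
  \ln(1+x) = \sum_{n=1}^{\infty} \frac{(-1)^{n+1}}{n}\, x^n
           = x - \frac{x^2}{2} + \frac{x^3}{3} - \frac{x^4}{4} + \cdots
\]
% The ratio test gives radius of convergence 1. At x = 1 the alternating
% harmonic series converges; at x = -1 the harmonic series diverges.
% Hence the interval of convergence is (-1, 1].
```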
https://wl.widelands.org/documentation/autogen_auxiliary_objective_utils/ | objective_utils.lua
This script contains utility functions for typical tasks that need to be checked for objectives.
check_for_buildings(plr, which[, region])
Checks whether the buildings defined in which exist for the given player. If region is given, buildings are only searched for on the corresponding fields. If at least the requested number of each building is found, this function returns true.
Example usage:
check_for_buildings(wl.Game().players[1], {lumberjacks_hut=2, quarry=1})
Parameters:

- plr (wl.game.Player) – Player to check for
- region (array of wl.map.Field) – array of fields to check for the buildings
- which (table) – (name,count) pairs for buildings to check for

Returns: true if the requested buildings were found, false otherwise
https://math.stackexchange.com/questions/3516606/normal-form-of-nonlinear-dynamic-system | # Normal form of nonlinear dynamic system
I consider the following system \begin{align} &\dot x(t) = \frac{1}{y(t)} - x(t)\\ &\dot y(t) = \frac{1}{x(t) - ay(t)}\left(-by(t)^2 + cx(t)y(t) + d\right) \end{align} where $$a,b,c,d \in \mathbb R$$ are parameters. I'd like to find the normal form in order to study the stability of limit cycles. I have no idea, however, how one would transform the system into the normal form.
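(No answer text is preserved for this question. As a hedged first step, my own sketch rather than a full normal-form reduction, one usually shifts the equilibrium to the origin and linearizes there; the positivity assumption on $$a,b,c,d$$ below is mine, made so that a real equilibrium exists.)

```python
# First step toward a normal form: locate the equilibrium and linearize.
# Assumption (mine, not from the question): a, b, c, d > 0, so that a
# real equilibrium with y > 0 exists.
import sympy as sp

x, y, a, b, c, d = sp.symbols('x y a b c d', positive=True)

f = 1/y - x                            # right-hand side of x'(t)
g = (-b*y**2 + c*x*y + d) / (x - a*y)  # right-hand side of y'(t)

# f = 0 forces x = 1/y; substituting into the numerator of g gives
# -b*y**2 + c + d = 0, hence y* = sqrt((c + d)/b).
ystar = sp.sqrt((c + d) / b)
xstar = 1 / ystar

# Sanity check that (x*, y*) really is an equilibrium.
assert f.subs({x: xstar, y: ystar}).equals(0)
assert g.subs({x: xstar, y: ystar}).equals(0)

# Jacobian at the equilibrium: its eigenvalues decide local stability,
# and a purely imaginary pair (a Hopf bifurcation) is the situation in
# which the normal-form machinery for limit cycles becomes relevant.
J = sp.simplify(sp.Matrix([f, g]).jacobian([x, y]).subs({x: xstar, y: ystar}))
print(J)
```

From here the usual route is to shift coordinates so the equilibrium sits at the origin, split off the linear part, and apply near-identity transformations order by order; the first Lyapunov coefficient then decides the stability of the bifurcating limit cycle.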
https://ham.stackexchange.com/questions/15996/solid-angle-for-different-e-h-plane-beamwidths/16009 | # Solid angle for different E/H plane beamwidths
The solid angle formula calculates the surface area on a unit sphere obtained by projecting a rectangular patch onto the surface of the sphere.
This is calculated using the azimuth angle $$\phi$$ and the elevation angle $$\theta$$:
$$\Omega = \int_{0}^{\phi} \int_{0}^{\theta} \sin{\theta'}\, d\theta'\, d\phi'$$
However an antenna beam pattern will project a curved patch onto a sphere.
Case 1: Circle projection
First consider a linearly polarized, symmetrical parabolic dish antenna. This ideal geometry ensures the E/H planes have the same beamwidth. A circle is projected onto the unit sphere.
Note this is equivalent to the surface area of a hemisphere between the circle and the sphere.
$$\Omega = \int_{0}^{2\pi} \int_{0}^{\theta} \sin{\theta'} d\theta' d\phi = 2\pi(1-\cos{\theta})$$
Case 2: Ellipse projection
Now consider any linearly polarized antenna, such that the E/H planes have different beamwidths. I presume this would be projecting an ellipse onto the unit sphere.
I presume the solid angle can be calculated from the surface area of a semi-ellipsoid. But a solid angle is a fraction of the surface area of the unit sphere, and an ellipsoid is not coincident with a sphere (other than the trivial case), so this can't be correct.
Some questions:
• How can the solid angle formula be used to derive the (elliptical) solid angle of an antenna with a beamwidth of $$\phi$$ degrees in the E-plane and $$\theta$$ degrees in the H-plane?
• Are there any approximations to this?
• I think things are made worse by the addition of the sphere analogy. If you must project the beam onto a unit sphere, you'll find the area is the same as the solid angle. But there's no need to invoke a sphere to find solid angles. In your first case - there is no hemisphere, just a portion of the unit sphere. In the second case, there's definitely no ellipsoid; an ellipsoid isn't coincident with a sphere at all (except in the trivial case). How about "just" integrating to find the solid angle of the elliptical beam itself, with your first formula? – tomnexus Jan 21 at 6:33
• You're right - an ellipsoid is not coincident with the sphere. I will remove that part. Instead the solid angle on the elliptical beam should be some fraction of the surface area of a hemisphere. How would I go about integrating this? – pymekrolimus Jan 21 at 6:52
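Following the integration suggested in the comments, here is a numerical sketch. The boundary model is an assumption on my part: the beam edge traces an ellipse in small-angle coordinates, with semi-angles a (E-plane) and b (H-plane) in radians.

```python
# Integrate the solid angle of an elliptical beam directly: the inner
# integral of sin(theta') dtheta' from 0 to theta_max(phi) is
# 1 - cos(theta_max(phi)), leaving a single integral over phi.
import numpy as np
from scipy.integrate import quad

def solid_angle_ellipse(a, b):
    """Solid angle (sr) of an elliptical cone with semi-angles a, b (rad)."""
    def theta_max(phi):
        # Assumed elliptical boundary in small-angle coordinates.
        return a * b / np.hypot(b * np.cos(phi), a * np.sin(phi))
    def integrand(phi):
        return 1.0 - np.cos(theta_max(phi))
    omega, _ = quad(integrand, 0.0, 2.0 * np.pi)
    return omega

# Circular case reduces to the closed form 2*pi*(1 - cos(theta)):
theta = np.radians(30.0)
print(solid_angle_ellipse(theta, theta), 2 * np.pi * (1 - np.cos(theta)))

# Narrow elliptical beam approaches the flat-ellipse area pi*a*b:
a, b = np.radians(2.0), np.radians(5.0)
print(solid_angle_ellipse(a, b), np.pi * a * b)
```

On the approximation question: for narrow beams the integral tends to πab, the area of the flat ellipse, which has the same scaling as the familiar rule of thumb Ω_A ≈ θ_E·θ_H (half-power beamwidths in radians) used for antenna directivity estimates.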
$${4 \pi \over 100} = {0.04 \pi}$$