Let Ω be the region in $\mathbb{R}^3$ defined by $$\Omega=\{(x_1,x_2,x_3):\max(|x_1|,|x_2|,|x_3|)\le 1\}.$$ Let $\partial\Omega$ denote the boundary of Ω.
Calculate $$\int_{\partial\Omega}\phi F\cdot n\,d\sigma$$
where n is the unit normal vector, dσ denotes integration over ∂Ω,
$F_i=\frac{x_i}{(x_1^2+x_2^2+x_3^2)^{3/2}}=\frac{x_i}{r^3}$
and $\phi(y_1,y_2,y_3)$ is a continuously differentiable function of $y_i=\frac{x_i}{r}$. Assume that $\phi$ has unit mean over the unit sphere.
I just started on this problem so I don't want solutions.
My question is: how should I interpret $\phi F$? Is $\phi$ another vector field, so that I should take the inner product of $\phi$ with $F$?
That wouldn't make much sense: I'd end up with a scalar, and then scalar $\cdot\,\vec n$ wouldn't really make sense either.
Any hints or suggestions are welcome.
Thanks,
EDIT: I'd welcome solutions at this point. I am getting strange results, such as an integral that equals zero. I tried using the product rule that I found on Wolfram Alpha to compute the divergence of $\phi F$. I notice first that $\nabla\cdot F=0$, so $F$ alone is divergence-free. But I honestly do not know whether I have the correct vector field after multiplication with $\phi$. So, when computing the divergence of $\phi F$, I might be using an incorrect vector field.
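For reference, the product rule I mean (the standard identity for a scalar function $\phi$ and a vector field $F$) is:

$$\nabla\cdot(\phi F) = \nabla\phi\cdot F + \phi\,(\nabla\cdot F)$$

so with $\nabla\cdot F = 0$ away from the origin, only the $\nabla\phi\cdot F$ term survives.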
---
Adiabatic Short Circuit Temperature Rise
Adiabatic short circuit temperature rise normally refers to the temperature rise in a cable due to a short circuit current. During a short circuit, a high amount of current can flow through a cable for a short time. This surge in current flow causes a temperature rise within the cable.
Derivation
An “adiabatic” process is a thermodynamic process in which there is no heat transfer. In the context of cables experiencing a short circuit, this means that all of the energy dissipated by the short circuit current (through resistive heating) goes into raising the temperature of the cable conductor (e.g. copper), with no heat lost to the surroundings. This is obviously a simplifying assumption, as in reality heat would be lost during a short circuit, but it is a conservative one and yields a theoretical temperature rise higher than that found in practice.
The derivation of the short circuit temperature rise is based on a simple application of specific heat capacity. The specific heat capacity of a body (for instance the solid conductor in a cable) is the amount of energy required to change the temperature of the body, and is given by the following basic formula:
[math] c_{p} = \frac{E}{m \Delta T} \, [/math]
Where [math]E \,[/math] is the energy dissipated by the body (in Joules), [math]m \,[/math] is the mass of the body (in grams), and [math]\Delta T \,[/math] is the change in temperature (in Kelvins).
The energy from a current flowing through a cable is based on the SI definition for electrical energy:
[math] E = QV = i^{2}Rt \, [/math]
Where [math]i \,[/math] is the current (in Amps), [math]R \,[/math] is the resistance of the body which the current is flowing through (in Ω) and [math]t \,[/math] is the duration of the current flow (in seconds).
The mass and resistance of an arbitrary conductive body is proportional to the dimensions of the body and can be described in general terms by the following pair of equations:
[math] m = \rho_{d} Al \, [/math] [math] R = \frac{\rho_{r} l}{A} \, [/math]
Where [math]\rho_{d} \,[/math] is the density of the body (in g [math]mm^{-3}[/math]), [math]\rho_{r} \,[/math] is the resistivity of the body (in Ω mm), [math]A \,[/math] is the cross-sectional area of the body (in [math]mm^{2}[/math]) and [math]l \,[/math] is the length of the body (in mm).
Putting all of these equations together and re-arranging, we get the final result for adiabatic short circuit temperature rise:
[math] \Delta T = \frac{i^{2}t \rho_{r}}{A^{2} c_{p} \rho_{d}} \, [/math]
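As a quick numeric sanity check of this derivation (a sketch with arbitrary, hypothetical values), we can confirm that computing the temperature rise directly via E/(m·c_p) agrees with the combined formula, and that the conductor length cancels out:

```python
# Spot-check the derivation with arbitrary (hypothetical) values
i, t = 10_000.0, 1.0            # current (A), duration (s)
rho_r, rho_d = 2.0e-5, 9.0e-3   # resistivity (ohm mm), density (g mm^-3)
A, l = 50.0, 1.0e6              # cross-section (mm^2), length (mm)
c_p = 0.385                     # specific heat capacity (J g^-1 K^-1)

E = i**2 * (rho_r * l / A) * t  # energy dissipated, E = i^2 R t
m = rho_d * A * l               # conductor mass, m = rho_d A l

dT_direct = E / (m * c_p)                             # from c_p = E / (m dT)
dT_formula = i**2 * t * rho_r / (A**2 * c_p * rho_d)  # combined formula

print(dT_direct, dT_formula)  # identical: the length l cancels out
```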
Alternatively, we can re-arrange this to find the cable conductor cross-sectional area required to withstand a short circuit for a given temperature rise:
[math] A = \frac{\sqrt{i^{2}t}}{k} \, [/math]
With the constant term [math] k = \sqrt{\frac{c_{p} \rho_{d} \Delta T}{\rho_{r}}} \, [/math]
Where [math]A \,[/math] is the minimum cross-sectional area of the cable conductor ([math]mm^{2}[/math])
[math]i^{2}t \,[/math] is the energy of the short circuit ([math]A^{2}s[/math])
[math]c_{p} \,[/math] is the specific heat capacity of the cable conductor ([math]J g^{-1}K^{-1}[/math])
[math]\rho_{d} \,[/math] is the density of the cable conductor material ([math]g mm^{-3}[/math])
[math]\rho_{r} \,[/math] is the resistivity of the cable conductor material ([math]\Omega mm[/math])
[math]\Delta T \,[/math] is the maximum temperature rise allowed ([math]K[/math])
In practice, it is common to use an [math]i^{2}t[/math] value that corresponds to the let-through energy of the cable's upstream protective device (i.e. circuit breaker or fuse). The manufacturer of the protective device will provide let-through energies for different prospective fault currents.
Worked Example
This example is illustrative and is only intended to show how the equations derived above are applied. In practice, the IEC method outlined below should be used.
Suppose a short circuit with a let-through energy of [math]1.6 \times 10^{7}[/math] [math]A^{2}s[/math] occurs on a cable with a copper conductor and PVC insulation. Prior to the short circuit, the cable was operating at a temperature of 75°C. The temperature limit for PVC insulation is 160°C and therefore the maximum temperature rise is 85 K.
The specific heat capacity of copper at 25°C is [math]c_{p}[/math] = 0.385 [math]J g^{-1}K^{-1}[/math]. The density of copper is [math]\rho_{d}[/math] = 0.00894 [math]g mm^{-3}[/math] and the resistivity of copper at 75°C is [math]\rho_{r}[/math] = 0.0000204 [math]\Omega mm [/math].
The constant is calculated as k = 119.74. The minimum cable conductor cross-sectional area is calculated as 33.4 [math]mm^{2}[/math]. It should be stressed that the calculated value of k is only approximate, because the specific heat capacity of copper varies with temperature.
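To make the arithmetic explicit, the worked example can be reproduced as a short Python sketch (the variable names are mine, not from the source):

```python
import math

# Inputs from the worked example
c_p = 0.385        # specific heat capacity of copper, J g^-1 K^-1
rho_d = 0.00894    # density of copper, g mm^-3
rho_r = 0.0000204  # resistivity of copper at 75 degC, ohm mm
delta_T = 85.0     # allowed temperature rise, K (160 degC limit - 75 degC operating)
i2t = 1.6e7        # let-through energy, A^2 s

# k = sqrt(c_p * rho_d * delta_T / rho_r)
k = math.sqrt(c_p * rho_d * delta_T / rho_r)

# Minimum conductor cross-sectional area, A = sqrt(i^2 t) / k
A = math.sqrt(i2t) / k

print(k)  # close to the 119.74 quoted above (rounding differs slightly)
print(A)  # about 33.4 mm^2
```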
Effects of Short Circuit Temperature Rise
High temperatures can trigger unwanted reactions in the cable insulation, sheath materials and other components, which can prematurely degrade the condition of the cable. Cables with larger cross-sectional areas are less sensitive to short circuit temperature rises as the larger amount of conductor material prevents excessive temperature rises.
The maximum allowable short circuit temperature rise depends on the type of insulation (and other materials) used in the construction of the cable. The cable manufacturer will provide specific details on the maximum temperature of the cable for different types of insulation materials, but typically, the maximum temperatures are 160°C for PVC and 250°C for EPR and XLPE insulated cables.
Treatment by International Standards
IEC 60364 (Low voltage electrical installations) contains guidance on the sizing of cables with respect to adiabatic short circuit temperature rise. The minimum cable conductor cross-sectional area is given by the following equation:
[math] A = \frac{\sqrt{i^{2}t}}{k} \, [/math]
Where [math]A \,[/math] is the minimum cross-sectional area of the cable conductor ([math]mm^{2}[/math])
[math]i^{2}t \,[/math] is the energy of the short circuit ([math]A^{2}s[/math])
[math]k \,[/math] is a constant that can be calculated from IEC 60364-5-54 Annex A. For example, for copper conductors:
[math] k = 226 \sqrt{\ln{\left(1 + \frac{\theta_{f}-\theta_{i}}{234.5+\theta_{i}}\right)}} \, [/math]
Where [math]\theta_{i} \,[/math] and [math]\theta_{f} \,[/math] are the initial and final conductor temperatures respectively.
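For illustration, here is a sketch applying the IEC formula to the PVC example from earlier (the temperatures θi = 75 °C and θf = 160 °C are carried over from the worked example above, not taken from the standard's tables):

```python
import math

theta_i = 75.0   # initial conductor temperature, degC
theta_f = 160.0  # final conductor temperature (PVC limit), degC
i2t = 1.6e7      # let-through energy, A^2 s (same as the worked example)

# IEC 60364-5-54 Annex A constant for copper conductors
k = 226 * math.sqrt(math.log(1 + (theta_f - theta_i) / (234.5 + theta_i)))

# Minimum conductor cross-sectional area, as before
A = math.sqrt(i2t) / k

print(k, A)  # k ~ 111, A ~ 36 mm^2
```

Note that this yields a slightly larger minimum area than the first-principles estimate, since the IEC k factor accounts for the temperature dependence of the material properties.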
The National Electrical Code (NEC) does not have any specific provisions for short circuit temperature rise.
---
What this post is about
Personally, I find violin plots with error bars a great way to present repeated-measures data from experiments, as they show the data distribution as well as the uncertainty surrounding the mean.
However, there is some confusion (at least for me) about how to correctly calculate error bars for within-subjects designs. Below, I present my learning process: First, how I’ve calculated the wrong error bars for a long time. I explain why these error bars are wrong and end with calculating and visualizing them correctly (I hope…).
If you’re not interested in the learning process, feel free to skip straight to the final header. This post closely follows the logic outlined by Ryan Hope for his Rmisc package.
UPDATE: Thanks to Brenton Wiernik, who pointed out that the Morey method described below is not without criticism. I’ll update this post soon.

Creating data
Alright, let’s create a data set that has the typical structure of an experiment with a within-participants factor and multiple trials per factor level.
In this case, we create a data set with 30 participants, where each participant gives us a score on three conditions. Say there are trials as well, with each participant providing ten scores per condition.
We start with defining the parameters for our data set: how many participants, the names of the three conditions (aka factor levels), how many trials (aka measurements) each participant provides per condition, and the means and standard deviations for each condition. We assume participants provide us with a score on a scale ranging from 0 to 100.
set.seed(42)
library(Rmisc)
library(tidyverse)
library(truncnorm)
# number of participants
pp_n <- 30
# three conditions
conditions <- c("A", "B", "C")
# number of trials (measures per condition per participant)
trials_per_condition <- 10
# condition A
condition_a_mean <- 40
condition_a_sd <- 22
# condition B
condition_b_mean <- 45
condition_b_sd <- 17
# condition C
condition_c_mean <- 50
condition_c_sd <- 21
Okay, next we generate the data. First, we have a tibble with 30 rows for each of the 30 participants (3 conditions x 10 trials = 30 rows per participant).
dat <- tibble(
  pp = factor(rep(1:pp_n, each = length(conditions) * trials_per_condition)),
  condition = factor(rep(conditions, pp_n * trials_per_condition))
)
However, simulating data for a condition across all participants based on the same underlying distribution disregards that there are differences between participants. If you know mixed-effects models, this will sound familiar to you: it’s probably not realistic to assume that each participant will show a similar mean for each condition, and a similar difference between conditions.
Instead, it makes sense that a) each participant introduces systematic bias to their scores (e.g., pp1 might generally give higher scores to all conditions than pp2, or pp3 might show a larger difference between conditions than pp4), and b) there’s a bit of random error for everyone (e.g., sampling error). Thus, we try to simulate that error.
pp_error <- tibble(
  # recreate pp identifier
  pp = factor(1:pp_n),
  # some bias for the means we use later
  bias_mean = rnorm(pp_n, 0, 6),
  # some bias for the sd we use later
  bias_sd = abs(rnorm(pp_n, 0, 3)),
)
# some random error per trial
error <- rnorm(900, 0, 5)
Next, we simulate the whole data set. For each participant and condition, we sample ten trial scores.
However, rather than sampling just from the mean and standard deviation we determined above for the respective condition, we also add the bias of each specific participant in their (1) means per condition and (2) variability around the means per condition. After that, we add the extra random error.
Because our scores should fall between 0 and 100, we use the truncated normal distribution function from the truncnorm package.
dat <- left_join(dat, pp_error) %>% # add the bias variables to the data set
  add_column(., error) %>% # add random error
  group_by(pp, condition) %>%
  mutate(
    score = case_when(
      # get 10 trials per participant and condition
      condition == "A" ~ rtruncnorm(trials_per_condition, a = 0, b = 100,
                                    (condition_a_mean + bias_mean), (condition_a_sd + bias_sd)),
      condition == "B" ~ rtruncnorm(trials_per_condition, a = 0, b = 100,
                                    (condition_b_mean + bias_mean), (condition_b_sd + bias_sd)),
      condition == "C" ~ rtruncnorm(trials_per_condition, a = 0, b = 100,
                                    (condition_c_mean + bias_mean), (condition_c_sd + bias_sd)),
      TRUE ~ NA_real_
    )
  ) %>%
  mutate(score = score + error) %>% # add random error
  # because of the error, some trials fell outside the boundaries; clip them again here
  mutate(
    score = case_when(
      score < 0 ~ 0,
      score > 100 ~ 100,
      TRUE ~ score
    )
  ) %>%
  select(-bias_mean, -bias_sd, -error) # kick out variables we don't need anymore
If we take a look at the ten first trials of the first participant, we see that our simulation appears to have worked: there’s quite a lot of variation.
## # A tibble: 10 x 3
## # Groups:   pp, condition [3]
##    pp    condition score
##    <fct> <fct>     <dbl>
##  1 1     A          68.3
##  2 1     B          61.1
##  3 1     C          75.9
##  4 1     A          59.0
##  5 1     B          31.3
##  6 1     C          59.1
##  7 1     A          29.2
##  8 1     B          47.4
##  9 1     C          46.6
## 10 1     A          34.9
Let’s have a look at the aggregated means and SDs. Indeed, there’s quite some variation around each condition per participant, so we have data that resemble messy real-world data. Let’s visualize those data.
dat %>%
  group_by(pp, condition) %>%
  summarise(agg_mean = mean(score), agg_sd = sd(score)) %>%
  head(., n = 10)
## # A tibble: 10 x 4
## # Groups:   pp [4]
##    pp    condition agg_mean agg_sd
##    <fct> <fct>        <dbl>  <dbl>
##  1 1     A             53.4   29.0
##  2 1     B             50.6   14.2
##  3 1     C             62.7   12.1
##  4 2     A             43.0   19.7
##  5 2     B             43.0   16.2
##  6 2     C             44.7   22.4
##  7 3     A             52.5   25.3
##  8 3     B             46.7   21.8
##  9 3     C             59.8   17.3
## 10 4     A             45.3   25.2
Creating a violin plot (with wrong errors bars)
Creating the violin plot follows the same logic as all other ggplot2 commands.
ggplot(dat, aes(x = condition, y = score)) + geom_violin()
We can already see that the group means seem to increase from left to right, just as we specified above. To make that easier to see, we need to add a second layer, namely the means plus error bars. ggplot doesn’t take those from the raw data; we need to provide them by calculating and storing the means and standard errors ourselves. We can then feed those calculations to the ggplot layer.
If you’re like me, you had to look up the formula for the standard error. Here it is:
\[SE = \frac{SD} {\sqrt{n}}\]
Thankfully, the summarySE command from the Rmisc package by Ryan Hope calculates the SE for us, plus the 95% CI around the mean.
dat_summary <- summarySE(dat, measurevar = "score", groupvars = "condition")
dat_summary
##   condition   N    score       sd       se       ci
## 1         A 300 42.11566 21.97562 1.268763 2.496837
## 2         B 300 45.89995 19.77976 1.141985 2.247347
## 3         C 300 50.47729 22.07185 1.274319 2.507770
Okay, now that we have the means and standard errors per group, let’s put them on top of the violins.
ggplot(dat, aes(x = condition, y = score)) +
  geom_violin() +
  geom_point(aes(y = score), data = dat_summary, color = "black") +
  geom_errorbar(aes(y = score, ymin = score - ci, ymax = score + ci),
                color = "black", width = 0.05, data = dat_summary)
Cool, at this point it looks like we’re done.
Except that we’re not.

What’s going on?
Let’s have a look at the summary statistic again:
dat_summary
##   condition   N    score       sd       se       ci
## 1         A 300 42.11566 21.97562 1.268763 2.496837
## 2         B 300 45.89995 19.77976 1.141985 2.247347
## 3         C 300 50.47729 22.07185 1.274319 2.507770
Inspecting the N shown by summarySE, we see that the summary statistics take all 300 rows per condition (30 participants x 10 trials) into account when calculating the standard error. The formula once more:
\[SE = \frac{SD} {\sqrt{n}}\]
By increasing the denominator, we artificially decrease the size of the SE, although we know that we don’t have 300 participants. We have 30 participants. This is crucial: we want to summarize the variability for these 30 participants, not for all observations.
This also means we should take a look at the SDs. Here’s the formula for the standard deviation:
\[SD = \sqrt{\frac{1} {N - 1} \sum_{i = 1}^{N} (x_i - \bar{x})^2}\]

Actually, the SDs we obtained from dat_summary are quite large, which makes sense because they summarize the variability of all observations, i.e. all 300 trials per condition. That is, the formula above is sensitive to the considerable variability within each condition that we introduced earlier in the simulation.
Thus, the summary statistics that we calculated are not the ones we’re looking for. We want to summarize the variability of the 30 participants per condition, not the 300 observations per condition. We need to calculate the SE taking into account that we have multiple measurements per participant.

Creating another plot (with error bars that are still wrong)
Thus, we first aggregate the data, so we calculate the
average score per participant.
dat_agg <- dat %>%
  group_by(pp, condition) %>%
  summarise(mean_agg = mean(score))
head(dat_agg, n = 10)
## # A tibble: 10 x 3
## # Groups:   pp [4]
##    pp    condition mean_agg
##    <fct> <fct>        <dbl>
##  1 1     A             53.4
##  2 1     B             50.6
##  3 1     C             62.7
##  4 2     A             43.0
##  5 2     B             43.0
##  6 2     C             44.7
##  7 3     A             52.5
##  8 3     B             46.7
##  9 3     C             59.8
## 10 4     A             45.3
Alright, now let’s calculate the means and
SE again, based on these aggregated means.
dat_summary2 <- summarySE(dat_agg, measurevar = "mean_agg", groupvars = "condition")
dat_summary2
##   condition  N mean_agg       sd       se       ci
## 1         A 30 42.11566 9.308671 1.699523 3.475915
## 2         B 30 45.89995 7.393815 1.349920 2.760896
## 3         C 30 50.47729 9.290633 1.696230 3.469179
These are different from the previous ones (except for the means, of course). Besides the correct sample size of 30, you will also note that we now have smaller SDs per condition. That makes sense, because this time we obtain the SD based on one aggregated measure per participant per condition (only 30 values), rather than the SD of all observations (the full 300).
Let’s plot that one more time.
ggplot(dat, aes(x = condition, y = score)) +
  geom_violin() +
  geom_point(aes(y = mean_agg), data = dat_summary2, color = "black") +
  geom_errorbar(aes(y = mean_agg, ymin = mean_agg - ci, ymax = mean_agg + ci),
                color = "black", width = 0.05, data = dat_summary2)
Alright, are we done?
Nope. Turns out those SEs are still not entirely correct.

The last plot (this time correct)
So what’s wrong with the SEs this time? If we had a between-subjects design, i.e. if the scores for each condition came from different participants, then we’d be done. However, we have a within-subjects design, meaning we have one aggregated score per participant per condition, so each participant has multiple scores. Remember how we introduced additional variability for each participant plus random error when we simulated the data? These two sources of variability are now conflated with the difference between conditions that we’re interested in plotting. This is similar to presenting the results of an independent-samples t-test rather than a paired-samples t-test.
Consequently, we need a way to disentangle the variability around the true difference between conditions from the variability of each participant and random error. Thankfully, people much smarter than I am have done this already. I’m not going to pretend I understand exactly what he did, but Morey (2008) describes such a method, and Ryan Hope has implemented Morey’s method in the Rmisc package that we’ve been using in this post.
However, so far I called the summarySE function, which provides summary statistics for between-subjects designs. In other words, I made a mistake: that function was not appropriate for our design.
Luckily, the package also has a summarySEwithin function that provides correct SEs. Here we specify the measure variable, the variable that defines the within-subjects condition, and, crucially, the variable that signals which measurements come from the same participant. Note that we use the data set with aggregated means.
dat_summary3 <- summarySEwithin(dat_agg, measurevar = "mean_agg",
                                withinvars = "condition", idvar = "pp")
dat_summary3
##   condition  N mean_agg       sd        se       ci
## 1         A 30 42.11566 8.102615 1.4793284 3.025566
## 2         B 30 45.89995 5.323399 0.9719152 1.987790
## 3         C 30 50.47729 8.058000 1.4711828 3.008907
Now, finally, we can create our plot with correct SEs. You can decide for yourself whether you want error bars that represent the 95% CI around the mean or just one SE. I prefer plotting the 95% CI.
ggplot(dat, aes(x = condition, y = score)) +
  geom_violin() +
  geom_point(aes(y = mean_agg), data = dat_summary3, color = "black") +
  geom_errorbar(aes(y = mean_agg, ymin = mean_agg - ci, ymax = mean_agg + ci),
                color = "black", width = 0.05, data = dat_summary3)
While we’re at it, let’s make the graph a bit prettier.
ggplot(dat, aes(x = condition, y = score)) +
  geom_violin(aes(fill = condition), color = "grey15") +
  geom_point(aes(y = mean_agg), data = dat_summary3, color = "black") +
  geom_errorbar(aes(y = mean_agg, ymin = mean_agg - ci, ymax = mean_agg + ci),
                color = "black", width = 0.05, data = dat_summary3) +
  labs(x = "Condition", y = "Score") +
  theme_classic() +
  scale_color_grey() +
  scale_fill_grey() +
  theme(legend.position = "none", strip.background.x = element_blank())
If we presented this figure in a paper though, we should explicitly state in the figure caption what those error bars represent. After all, we’re doing something unusual here: we show all of the raw data (i.e., violin plots) plus means, yet the uncertainty around these means is not that of the raw data, but is expressed taking into account the design that produced these data (i.e., a within-subjects design).
For example, we could write something along these lines:
Violin plots represent the distribution of the data per condition. Black points represent the means; bars of these points represent the 95% CI of the within-subject standard error (Morey, 2008), calculated with the Rmisc package (Hope, 2013).
Alright, that’s it. Thanks a lot to Dale Barr for proofreading this post and helpful feedback. If you have suggestions, spotted a mistake, or want to tell me I should stay away from R, let me know in the comments or via Twitter.
---
This article provides answers to the following questions, among others:
What is steel made of?
What is the difference between steel and cast iron regarding their composition?
Why is carbon used as an alloying element for steel?
Why do further phase transformations take place in the already solidified state in steels?
In which lattice structure does steel crystallize first?
Which lattice structure does steel (usually) have at room temperature?
What is the steel part in the phase diagram?
How does the carbon content influence the solidification range of steels?
What is austenite and ferrite?
How does the carbon content affect the transformation of \(\gamma\)-iron into \(\alpha\)-iron?
Which microstructural changes occur in the stable or metastable system during the \(\gamma\)-\(\alpha\)-transformation? What promotes the respective systems?
Which system is relevant for steel and which for cast iron?

Introduction
In principle, steels are binary systems consisting of the host element iron and the alloying element carbon with a maximum content of 2 % (above 2% carbon, the iron-carbon alloy is called
cast iron!). The carbon provides the necessary strength and hardness because iron alone would be too soft as a construction material. In order to be able to produce steels according to these different requirements (high hardness or high strength, or a compromise of both), a deeper understanding of the alloy system iron/carbon is required.
Steel is an alloy of iron and carbon! With a carbon content of more than 2 % one speaks of cast iron!
In contrast to the binary systems previously considered, phase transformation does not only take place during solidification. Iron also shows allotropy (polymorphism), i.e. depending on temperature iron exists in different lattice structures. In the solid state, these lattice changes cause further phase transformations. Therefore, the phase diagram of the iron/carbon alloy system is somewhat more complex.
In order to understand the microstructural processes inside a steel, it makes sense to first take a closer look at the microstructure formation of pure iron. For this reason, the cooling curve of iron is discussed in more detail in the following section.
Microstructure formation of soft iron
In the following, the cooling curve of pure iron will be examined in more detail. Since pure iron is relatively soft in the solidified state, it is also called
soft iron.
The cooling curve of pure iron (Fe) has a series of thermal arrests at which different processes take place in the microstructure. The first thermal arrest is at the solidification temperature of 1536 °C. At this point the melt crystallizes in a body-centered cubic lattice structure (bcc). In this state the iron is also called \(\delta\)-iron (\(\delta\)-Fe). Note that the entire microstructure of \(\delta\)-iron is already completely solidified. Thus, all further phase transformations finally take place in the already solidified state!
At a temperature of 1392 °C, the body-centered cubic \(\delta\)-iron transforms into the face-centered cubic structure (fcc) at a constant temperature. In this lattice modification the iron is also called \(\gamma\)-iron. Since the atomic structure and thus the binding energies change during a lattice transformation, this is also associated with an energy conversion. Therefore, the lattice structure changes at a constant temperature (thermal arrest)!
A further lattice transformation finally takes place at 911 °C. At this temperature, the face-centered cubic iron transforms back into the body-centered cubic structure. In this form the iron is also called \(\beta\)-iron.
A last thermal arrest finally occurs at a temperature of 769 °C. However, this is not due to a lattice transformation! The reason for the thermal arrest is a quantum mechanical effect, which is responsible for the fact that the iron is magnetic below this temperature and not above! This temperature is also called
Curie temperature (apart from iron, only the elements cobalt and nickel are ferromagnetic at room temperature). The magnetic state of iron with its body-centered cubic lattice structure is also called \(\alpha\)-iron.
The Curie temperature is the temperature at which a ferromagnetic material loses its magnetic properties!
The micrograph below shows soft iron (\(\alpha\)-iron) in an almost carbon-free state. The iron grains (white areas) and silicate inclusions (dark spots) can be seen.
Now that the microstructural transformations of pure iron have been explained, the following article describes the phase transformations in the presence of carbon (steel) in more detail.
Microstructure formation of steel
In the previous section, the phase transformations of pure iron were examined in more detail. In addition to iron, however, steels also consist of carbon. This leads to a shift in the described phase transformations of the iron! How the carbon influences the phase transitions is best explained by the corresponding phase diagram (state diagram).
The state diagram of the iron-carbon system is also called the
iron-carbon phase diagram. Due to its complexity, the creation of the phase diagram on the basis of selected cooling curves will not be discussed. Furthermore, the iron-carbon diagram in the following sections is initially only considered up to a carbon content of around 2%, as only this range is relevant for steels. This area in the iron-carbon diagram is therefore also referred to as the steel part. Higher carbon concentrations are discussed in more detail in separate sections.
The steel part is the section of the iron-carbon phase diagram up to a carbon content of 2% relevant for steels!
Carbon initially influences the solidification of the steel like an ordinary solid-solution alloying element. The steel part of the phase diagram therefore has the typical lenticular two-phase region during solidification. The start of solidification is described by the liquidus line and the end of solidification by the solidus line. The microstructure forms between these lines at a correspondingly slow cooling rate. The phase diagram shows that the solidification range shifts towards lower temperatures with increasing carbon content.
Carbon shifts the solidification range of the steel towards lower temperatures!
In addition, even small amounts of carbon (> 0.1%) completely suppress the body-centered cubic phase of \(\delta\)-iron. The steel then immediately crystallizes in the face-centered cubic lattice structure of \(\gamma\)-iron. Since the \(\delta\) phase has no technical significance anyway, the phase diagram is very often presented in simplified form without this phase region.
Steels behave during solidification like solid solutions in which the alloying element carbon is completely soluble in the host material iron.
The good solubility of carbon is due to the face-centered cubic lattice structure of \(\gamma\)-iron. The relatively small carbon atoms find their place in the free spaces of the unit cells. In this case, it is an interstitial solid solution in which the carbon atoms are embedded in the interstices of the iron lattice. This face-centered cubic lattice structure of iron with carbon atoms embedded in it is also called
austenite.
Austenite is the face-centered cubic lattice structure of \(\gamma\)-iron with carbon atoms embedded therein (solid solution)!
Accordingly, the two-phase region between the liquidus line and the solidus line contains the phases melt (L) and austenite (A). In the two-phase region, the respective carbon concentrations of the two phases can be determined as usual by dropping a perpendicular onto the concentration axis. The phase fractions are again determined by means of the lever rule.
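The lever rule can be sketched in a few lines of code. The tie-line compositions below are made up for illustration, not read from the real iron-carbon diagram:

```python
def lever_rule(c0, c_liquid, c_solid):
    """Return (liquid_fraction, solid_fraction) for an overall composition c0
    lying on a tie line between phase compositions c_liquid and c_solid."""
    # the fraction of a phase is proportional to the opposite lever arm
    f_solid = (c0 - c_liquid) / (c_solid - c_liquid)
    return 1.0 - f_solid, f_solid

# hypothetical tie line: melt at 0.5 % C, austenite at 0.2 % C, overall 0.3 % C
f_melt, f_austenite = lever_rule(0.3, c_liquid=0.5, c_solid=0.2)
print(f_melt, f_austenite)  # fractions sum to 1; austenite fraction is 2/3
```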
In general, the same basic mechanisms take place during solidification of steels as for solid solutions. However, this only applies as long as the temperatures are sufficiently high and the iron is thus in the face-centered cubic state. Only then is the carbon completely soluble in the iron lattice and the alloy can be regarded as a solid solution.
The austenite phase only exists at sufficiently high temperatures as long as the iron is present in the face-centered cubic structure!
However, due to its allotropy, when the temperature drops, iron eventually changes its face-centered cubic structure and transforms into the body-centered cubic \(\alpha\)-iron. With decreasing temperature a further phase transformation is connected, which takes place now however in the already solidified microstructure! This conversion will be discussed in more detail in the next section.
Carbon precipitation (\(\gamma\)-\(\alpha\)-transformation)
Pure iron changes its face-centered cubic lattice structure of \(\gamma\)-iron when the temperature falls below 911 °C and changes to the body-centered cubic lattice structure of \(\alpha\)-iron. In principle, this lattice transformation also occurs in the presence of carbon, but at other temperatures!
As the carbon content increases, this so-called
\(\gamma\)-\(\alpha\)-transformation is shifted towards lower temperatures. In addition, the carbon causes this lattice transformation to take place over a temperature range rather than at a thermal arrest at constant temperature. Only at a carbon content of 0.8 % does the \(\alpha\)-iron form again at constant temperature, so that the lines marking the beginning and end of the \(\gamma\)-\(\alpha\)-transformation coincide in the phase diagram.
The presence of carbon shifts the \(\gamma\)-\(\alpha\)-transformation towards lower temperatures!
In contrast to the solid solution of \(\gamma\)-iron, the unit cell of the body-centered cubic lattice of \(\alpha\)-iron is already occupied by an iron atom in the center of the cube. \(\alpha\)-iron can therefore dissolve almost no carbon. The maximum solubility at 723 °C is only 0.02 % and even drops below 0.001 % at room temperature (the exact solubility limit is shown in the diagram with a green
solvus line). To simplify matters, it is therefore assumed in the following that no carbon is soluble in the lattice of \(\alpha\)-iron.
The carbon atom previously embedded in the austenite is therefore “pressed out” of the lattice structure during the \(\gamma\)-\(\alpha\)-transformation, leaving an almost carbon-free \(\alpha\)-iron lattice. In contrast to the carbon-containing face-centered cubic lattice of \(\gamma\)-iron, which was called austenite, the almost carbon-free body-centered cubic lattice of \(\alpha\)-iron is also called
ferrite.
Ferrite is the almost carbon-free body-centered cubic lattice structure of \(\alpha\)-iron!
Stable system
During the \(\gamma\)-\(\alpha\)-conversion, the carbon that is no longer soluble in \(\alpha\)-iron can in principle precipitate from the lattice in two ways. With slow cooling and a relatively high carbon content, a sufficient number of carbon atoms can come together to form their own hexagonal lattice structure. In this lattice modification, carbon is also called
graphite.
Such graphite precipitation is not only favoured by relatively slow cooling speeds but can also be specifically promoted by adding silicon. The precipitation of carbon in the form of graphite is also referred to as a
stable system, since the carbon in this form can no longer decay further and is therefore stable in the thermodynamic sense.
A microstructure solidified according to the stable system basically consists of iron (Fe) and graphite (C). This applies in particular to cast iron!
Cast iron usually has a relatively high carbon content (> 2 %) and is therefore a typical representative of the stable system. However, some types of cast iron also solidify according to the metastable system described below, which applies in particular to steels.
Metastable system
If the solidified microstructure is cooled more quickly rather than slowly, and only small amounts of carbon are present, the carbon atoms can no longer come together into a common graphite lattice structure. In this case, the precipitating carbon combines with three iron atoms to form the iron carbide compound \(Fe_3C\), which has an orthorhombic lattice structure. This intermediate (intermetallic) iron carbide compound is also called
cementite.
Cementite is a relatively hard but brittle intermetallic compound consisting of three iron atoms and one carbon atom (\(Fe_3C\))!
As the name suggests, cementite is very hard and significantly responsible for the increase in hardness of the steel! Precipitation of cementite can not only be achieved through faster cooling but also by specific additives such as manganese. The precipitation of carbon in the form of cementite is also called a
metastable system in the thermodynamic sense, since the iron carbide compound would decompose into the thermodynamically stable graphite form by diffusion processes at sufficiently high temperatures and sufficiently long annealing times.
In contrast to cast iron, steels generally have a relatively low carbon content (< 2 %) and are therefore typical representatives of the metastable system.
A microstructure solidified according to the metastable system basically consists of iron (\(Fe\)) and cementite (\(Fe_3C\)). This applies in particular to steels!
Depending on the precipitation of carbon in the form of graphite or cementite, the polylines in the iron-carbon phase diagram differ slightly from one another (more on this in the article on cast iron). Since the metastable system with its cementite precipitation is particularly important for steels, only this metastable system will be discussed in more detail in the following articles.
|
I think we must first understand the description of a machine and the input size, so that the comparison is only between valid objects. Let's say $N$ is the input size. This means the machines will have the following resource bounds.
\begin{array}{|l|l|l|}\hline\mbox{Resource} & \mbox{Finite Automata:}\quad \mathcal{A} & \mbox{LBTM:} \quad \mathcal{M}\\\hline\mbox{Input Tape Size} & O(N) & O(N)\\\mbox{Tape Operations} & \mbox{Read Only}& \mbox{Read, Write}\\\mbox{Tape Movement} & \mbox{Left to right, One pass only}& \mbox{Both directions, No pass limit}\\\mbox{# of Locations (States)} & M & M\\\mbox{Input Alphabet} & \Sigma & \Sigma\\\mbox{Acceptance Condition} & \mbox{Reach final location: }\ell_f & \mbox{Reach final location: }\ell_f\\\hline\end{array}
Now, here $\mathcal{M}$ is more expressive than $\mathcal{A}$. That's simply because tape movement and tape operations are restricted for $\mathcal{A}$.
Now let's make an
invalid comparison.\begin{array}{|l|l|l|}\hline\mbox{Resource} & \mbox{Finite Automata:}\quad \mathcal{A'} & \mbox{LBTM:} \quad \mathcal{M}\\\hline\mbox{Input Tape Size} & O(N) & O(N)\\\mbox{Tape Operations} & \mbox{Read Only}& \mbox{Read, Write}\\\mbox{Tape Movement} & \mbox{Left to right, One pass only}& \mbox{Both directions, No pass limit}\\\mbox{# of Locations (States)} & M \times 2^N & M\\\mbox{Input Alphabet} & \Sigma & \Sigma\\\mbox{Acceptance Condition} & \mbox{Reach final location: }\ell'_f & \mbox{Reach final location: }\ell_f\\\hline\end{array}
Here $\mathcal{A}'$ and $\mathcal{M}$ have the same expressive power. But note that the size of $\mathcal{A}'$ depends exponentially on the input size $N$, whereas earlier the size of $\mathcal{A}$ did not depend on $N$. This means that for every input length to $\mathcal{M}$, you will need to generate a new FA, even though $\mathcal{M}$ itself remains unchanged.
|
This is inevitably an imprecise question, but there are already several questions like this on the site so I thought i'd try anyway.
If I understand correctly, for any reductive algebraic group $G$ the points of $G$ over the field with one element should be $G(\mathbb{F}_1) = W$ where $W$ is the Weyl group of $G$. My question, stated briefly (and generally), is the following:
For a reductive group there are several notions which make sense over any base field, what are the analogs of these for $W$:
Parabolic subgroups; Levi subgroups; Borel subgroups; the Bruhat decomposition.
I will be satisfied with an answer for the case $G=GL_n$ so let's consider this case in a little more detail:
In this case: $GL_n(\mathbb{F}_1)=W=\Sigma_n$. It seems to me like the only reasonable notion of a
Levi subgroup of $\Sigma_n$ would be a Young subgroup $\Sigma_{\lambda} := \Sigma_{\lambda_1} \times \dots \times \Sigma_{\lambda_l}$ for some partition $\lambda = (\lambda_1,...,\lambda_l)$. Is this correct? If so what are the corresponding parabolic subgroups?
Following this line of thought we get that the maximal torus should be trivial (which is somehow reasonable).
Should the Borel be the trivial subgroup then? If so this is somehow disappointing...
|
This question already has an answer here:
$\ce{AgNO3 + NaCl}$ gives $\ce{NaNO3 + AgCl}$, but why is the reverse reaction not possible? My question is how to predict whether a double displacement reaction will occur naturally or not.
On a microscopic level, you still have «a forward» reaction described by
$\ce{AgNO3 + NaCl -> NaNO3 + AgCl}$
and a «backward reaction», which may be described by
$\ce{AgCl + NaNO3 -> AgNO3 + NaCl}$.
However, reaching the thermodynamic equilibrium of these two reactions is heavily influenced by the low solubility of $\ce{AgCl}$: with a solubility product of $K_\mathrm{sp} \approx 1.8 \times 10^{-10}$, the molar solubility $\sqrt{K_\mathrm{sp}}$ is only about $1.3 \times 10^{-5}\,\mathrm{mol/L}$ (or $1.9\,\mathrm{mg}/\mathrm{L}$) in pure water at room temperature (reference).
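As a quick numerical check of these figures, here is a short sketch (assuming the commonly tabulated value $K_\mathrm{sp} \approx 1.8 \times 10^{-10}$ and a molar mass of $143.32\,\mathrm{g/mol}$ for $\ce{AgCl}$, neither of which is given explicitly above):

```python
import math

# Molar solubility of AgCl from its solubility product: s = sqrt(Ksp)
Ksp = 1.8e-10      # assumed solubility product of AgCl at room temperature
M_AgCl = 143.32    # assumed molar mass of AgCl, g/mol

s = math.sqrt(Ksp)            # molar solubility, mol/L
mg_per_L = s * M_AgCl * 1000  # mass solubility, mg/L

print(f"s = {s:.2e} mol/L, i.e. about {mg_per_L:.1f} mg/L")
```

This reproduces the roughly $1.9\,\mathrm{mg/L}$ quoted above; the tiny equilibrium concentration of dissolved $\ce{AgCl}$ is what keeps the reaction pushed to the right.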
Consequently, you may either replace the «balanced» double arrow about chemical equilibria
$\ce{AgNO3 + NaCl <=> NaNO3 + AgCl}$
by one tilted in favour to one side
$\ce{AgNO3 + NaCl <=>> NaNO3 + AgCl v}$
where $\ce{v}$ is used to highlight the precipitation of solid $\ce{AgCl}$ shifting the equilibrium.
Or write the reaction equation as
if there were no microscopic reversibility at all:
$\ce{AgNO3 + NaCl -> NaNO3 + AgCl v}$
Depending on school and what you want to put emphasis on, the microscopic reversibility often is «omitted» (i.e. the double arrow is dropped in favour of the single arrow) for reactions with a $K > 10^4$ or $K < 10^{-4}$. On the other hand, you see in many textbooks
$\ce{2 H2O <=> H3O^+ + OH^-}$
despite a $K_w \approx 10^{-14}$, too.
|
Call a category $C$
rigid if every equivalence $C \to C$ is isomorphic to the identity. I don't know if this is standard terminology. Many of the usual algebraic categories are rigid, for example sets, commutative monoids, groups, abelian groups, commutative rings, but also the category of topological spaces. The category of monoids (or rings) is not rigid because $M \mapsto M^{\mathrm{op}}$ is an equivalence which is not isomorphic to the identity. See [M] for a survey and the general strategy for proving rigidity. The case of commutative rings was discussed recently on MO here. The philosophy is that a category is rigid if every object can be defined in a categorical way, which is a quite interesting property. Question: Is the category of schemes rigid?
Here is what I've done so far: The initial scheme is $\emptyset$ and the terminal scheme is $\text{Spec}(\mathbb{Z})$. Spectra of fields are characterized by the property that they are non-initial and every morphism from a non-initial object to them is an epimorphism, see Kevin's answer here. The underlying set $|X|$ of a scheme is the set of equivalence classes of morphisms $Y \to X$, where $Y$ is the spectrum of a field. So this recovers $|X|$ from $X$ in a categorical manner. If $x \in |X|$, then $\text{Spec}(\kappa(x))$ is the terminal spectrum of a field which maps to $X$ and has (set) image $x$.
However, I'm not able to recover the topology from $X$. I don't know how to characterize open or closed immersions. They are exactly the étale resp. proper monomorphisms, see this MO question, but it seems to be hard to characterize étale and proper categorically. After all, if are able to characterize affine schemes, then we will be done, since the category of affine schemes is rigid and every scheme is the canonical colimit of the affine schemes mapping into it.
In order to characterize affine schemes, it is enough to characterize the ring object $\mathbb{A}^1_\mathbb{Z}$ in the category of schemes, since we can then define the ring of global sections of a scheme categorically and then say that affine schemes $Y$ are characterized by the property that for all schemes $X$ the map $Hom(X,Y) \to Hom(\mathcal{O}(Y),\mathcal{O}(X))$ is bijective.
Other approaches: 1. First show that the category of fields is rigid. I've already shown that the notions of prime field, $\mathbb{F}_p$, $\mathbb{Q}$, finite, characteristic, normal, separable, algebraic, Galois, transcendental, transcendence degree are categorical, but this is not enough to distinguish, for example, $\mathbb{Q}(\sqrt{2})$ and $\mathbb{Q}(\sqrt{3})$. If $F$ is a self-equivalence of the category of fields, then $F$ maps $K(X)$ to $F(K)(X)$, so taking automorphisms there is a natural isomorphism $\text{PGL}(2,K) \cong \text{PGL}(2,F(K))$, but I wonder if this already implies that $K \cong F(K)$ naturally. 2. Characterize local schemes as a special full reflective subcategory containing the spectra of fields. 3. Try to categorify cohomology theory and use Serre's criterion for affineness.
EDIT (May '11): I've restarted this project in the last days. If $k$ is a field with only trivial endomorphisms, then I can show that every self-equivalence of $\text{Sch}/k$ preserves $\text{Spec}(k[\epsilon]/\epsilon^2)$, but also $\text{Spec}(k[[t]])$. But I still have no idea how to approach $\text{Spec}(k[t])$ categorically. Even basic notions such as "closed point" or "quasicompact" remain unclear.
EDIT (Feb '12): Let's work with $\mathrm{Sch}/k$ for some algebraically closed field $k$. Then $F$ maps $\mathbb{A}^1_k$ to a ring object in $\mathrm{Sch}/k$. If we already knew that it is of finite type over $k$ and irreducible, then a Theorem by Greenberg (Cor. 4.4 in
Algebraic Rings, Trans. AMS, Vol. 111, No. 3, pp. 472 - 481) will imply that the underlying scheme is just $\mathbb{A}^n_k$ for some $n$. Now using my question about factorization we should be able to conclude $n=1$. Of course, many details are missing here; for example it is not clear at all why $F$ should preserve schemes of finite type.
Any ideas concerning the categorical characterization of other properties / objects are appreciated. Feel free to add every piece as a single answer even if it does not answer the whole question.
[M] E. Makai jun,
Automorphisms and Full Embeddings of Categories in Algebra and Topology, online
|
Adiabatic Short Circuit Temperature Rise
Adiabatic short circuit temperature rise normally refers to the temperature rise in a cable due to a short circuit current. During a short circuit, a high amount of current can flow through a cable for a short time. This surge in current flow causes a temperature rise within the cable.
Derivation
An “adiabatic” process is a thermodynamic process in which there is no heat transfer. In the context of cables experiencing a short circuit, this means that the energy dissipated by the short circuit current (through resistive heating) goes entirely into raising the temperature of the cable conductor (e.g. copper), with no heat lost from the cable to its surroundings. This is obviously a simplifying assumption as in reality there would be heat lost during a short circuit, but it is a conservative one and yields a theoretical temperature rise higher than that found in practice.
The derivation of the short circuit temperature rise is based on a simple application of specific heat capacity. The specific heat capacity of a body (for instance the solid conductor in a cable) is the amount of energy required to change the temperature of the body, and is given by the following basic formula:
[math] c_{p} = \frac{E}{m \Delta T} \, [/math]
Where [math]E \,[/math] is the energy dissipated by the body (in Joules), [math]m \,[/math] is the mass of the body (in grams), and [math]\Delta T \,[/math] is the change in temperature (in Kelvins).
The energy from a current flowing through a cable is based on the SI definition for electrical energy:
[math] E = QV = i^{2}Rt \, [/math]
Where [math]i \,[/math] is the current (in Amps), [math]R \,[/math] is the resistance of the body which the current is flowing through (in Ω) and [math]t \,[/math] is the duration of the current flow (in seconds).
The mass and resistance of an arbitrary conductive body is proportional to the dimensions of the body and can be described in general terms by the following pair of equations:
[math] m = \rho_{d} Al \, [/math] [math] R = \frac{\rho_{r} l}{A} \, [/math]
Where [math]\rho_{d} \,[/math] is the density of the body (in g [math]mm^{-3}[/math]), [math]\rho_{r} \,[/math] is the resistivity of the body (in Ω mm), [math]A \,[/math] is the cross-sectional area of the body (in [math]mm^{2}[/math]) and [math]l \,[/math] is the length of the body (in mm).
Putting all of these equations together and re-arranging, we get the final result for adiabatic short circuit temperature rise:
[math] \Delta T = \frac{i^{2}t \rho_{r}}{A^{2} c_{p} \rho_{d}} \, [/math]
Alternatively, we can re-write this to find the cable conductor cross-sectional area required to dissipate a short circuit current for a given temperature rise is:
[math] A = \frac{\sqrt{i^{2}t}}{k} \, [/math]
With the constant term [math] k = \sqrt{\frac{c_{p} \rho_{d} \Delta T}{\rho_{r}}} \, [/math]
Where [math]A \,[/math] is the minimum cross-sectional area of the cable conductor ([math]mm^{2}[/math])
[math]i^{2}t \,[/math] is the energy of the short circuit ([math]A^{2}s[/math]) [math]c_{p} \,[/math] is the specific heat capacity of the cable conductor ([math]J g^{-1}K^{-1}[/math]) [math]\rho_{d} \,[/math] is the density of the cable conductor material ([math]g mm^{-3}[/math]) [math]\rho_{r} \,[/math] is the resistivity of the cable conductor material ([math]\Omega mm[/math]) [math]\Delta T \,[/math] is the maximum temperature rise allowed ([math]K[/math])
In practice, it is common to use an [math]i^{2}t[/math] value that corresponds to the let-through energy of the cable's upstream protective device (i.e. circuit breaker or fuse). The manufacturer of the protective device will provide let-through energies for different prospective fault currents.
Worked Example
This example is illustrative and is only intended to show how the equations derived above are applied. In practice, the IEC method outlined below should be used.
Suppose a short circuit with a let-through energy of [math]1.6 \times 10^{7}[/math] [math]A^{2}s[/math] occurs on a cable with a copper conductor and PVC insulation. Prior to the short circuit, the cable was operating at a temperature of 75°C. The temperature limit for PVC insulation is 160°C and therefore the maximum temperature rise is 85 K.
The specific heat capacity of copper at 25°C is [math]c_{p}[/math] = 0.385 [math]J g^{-1}K^{-1}[/math]. The density of copper is [math]\rho_{d}[/math] = 0.00894 [math]g mm^{-3}[/math] and the resistivity of copper at 75°C is [math]\rho_{r}[/math] = 0.0000204 [math]\Omega mm [/math].
The constant is calculated as k = 119.74. The minimum cable conductor cross-sectional area is calculated as 33.4 [math]mm^{2}[/math]. It should be stressed that the calculated value of k is only approximate, because the specific heat capacity of copper changes with temperature.
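The arithmetic of the worked example can be reproduced in a few lines (all values as given above; small differences in the last digit are rounding):

```python
import math

# Worked-example inputs (copper conductor, PVC insulation)
cp = 0.385         # specific heat capacity of copper, J g^-1 K^-1 (at 25 degC)
rho_d = 0.00894    # density of copper, g mm^-3
rho_r = 0.0000204  # resistivity of copper at 75 degC, ohm mm
dT = 85.0          # allowed temperature rise, K (160 degC limit - 75 degC initial)
i2t = 1.6e7        # let-through energy of the fault, A^2 s

k = math.sqrt(cp * rho_d * dT / rho_r)  # adiabatic constant
A = math.sqrt(i2t) / k                  # minimum conductor area, mm^2

print(f"k = {k:.2f}, A = {A:.1f} mm^2")
```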
Effects of Short Circuit Temperature Rise
High temperatures can trigger unwanted reactions in the cable insulation, sheath materials and other components, which can prematurely degrade the condition of the cable. Cables with larger cross-sectional areas are less sensitive to short circuit temperature rises as the larger amount of conductor material prevents excessive temperature rises.
The maximum allowable short circuit temperature rise depends on the type of insulation (and other materials) used in the construction of the cable. The cable manufacturer will provide specific details on the maximum temperature of the cable for different types of insulation materials, but typically, the maximum temperatures are 160°C for PVC and 250°C for EPR and XLPE insulated cables.
Treatment by International Standards
IEC 60364 (Low voltage electrical installations) contains guidance on the sizing of cables with respect to adiabatic short circuit temperature rise. The minimum cable conductor cross-sectional area is given by the following equation:
[math] A = \frac{\sqrt{i^{2}t}}{k} \, [/math]
Where [math]A \,[/math] is the minimum cross-sectional area of the cable conductor ([math]mm^{2}[/math])
[math]i^{2}t \,[/math] is the energy of the short circuit ([math]A^{2}s[/math]) [math]k \,[/math] is a constant that can be calculated from IEC 60364-5-54 Annex A. For example, for copper conductors: [math] k = 226 \sqrt{\ln{\left(1 + \frac{\theta_{f}-\theta_{i}}{234.5+\theta_{i}}\right)}} \, [/math]
Where [math]\theta_{i} \,[/math] and [math]\theta_{f} \,[/math] are the initial and final conductor temperatures respectively.
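For illustration, the IEC formula for copper can be evaluated for the same scenario as the worked example above (75°C initial, 160°C final):

```python
import math

# k for copper conductors, per the IEC 60364-5-54 style formula quoted above
def k_copper(theta_i, theta_f):
    return 226 * math.sqrt(math.log(1 + (theta_f - theta_i) / (234.5 + theta_i)))

k = k_copper(75, 160)
print(f"k = {k:.1f}")
```

This gives k ≈ 111, slightly lower (and hence more conservative) than the ≈ 120 from the simple derivation above; the logarithmic form arises from integrating the temperature dependence of the conductor resistivity. With this k, the same [math]1.6 \times 10^{7}[/math] [math]A^{2}s[/math] fault would require roughly 36 [math]mm^{2}[/math] rather than 33.4 [math]mm^{2}[/math].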
The National Electrical Code (NEC) does not have any specific provisions for short circuit temperature rise.
|
I'm in a pre-rigorous phase, but as a programmer I need to know certain things on a case-by-case basis to do certain very particular things.
Therefore, I was hoping for validation of why the following, observed in a book about bayesian networks, is true. That being:
Let $\mathcal{G}$ be a Bayesian network over $X_1, ..., X_n$. We say that a distribution $P_\beta$ over the same space factorizes according to $\mathcal{G}$ if $P_\beta$ can be expressed as a product:
$$\mathbb{P}_\beta (X_1, ..., X_n) = \prod\limits_{i=1}^n\mathbb{P}(X_i \mid \mathbf{Pa}_{X_i})$$
where $\mathbf{Pa}_{X_i}$ is a vector of the observations of the parent nodes of $X_i$
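For concreteness, here is a small numeric sketch (with made-up CPT values) of this factorization for the chain network $X_1 \to X_2 \to X_3$, where $\mathbf{Pa}_{X_2} = (X_1)$ and $\mathbf{Pa}_{X_3} = (X_2)$:

```python
from itertools import product

# Hypothetical CPTs for the chain X1 -> X2 -> X3 (all variables binary)
p_x1 = {1: 0.3, 0: 0.7}                             # P(X1)
p_x2 = {1: {1: 0.7, 0: 0.3}, 0: {1: 0.2, 0: 0.8}}   # p_x2[x1][x2] = P(X2=x2 | X1=x1)
p_x3 = {1: {1: 0.9, 0: 0.1}, 0: {1: 0.4, 0: 0.6}}   # p_x3[x2][x3] = P(X3=x3 | X2=x2)

# Factorized joint: P(x1, x2, x3) = P(x1) * P(x2 | x1) * P(x3 | x2)
joint = {(x1, x2, x3): p_x1[x1] * p_x2[x1][x2] * p_x3[x2][x3]
         for x1, x2, x3 in product([0, 1], repeat=3)}

total = sum(joint.values())
print(f"total probability = {total:.6f}")  # 1.000000, so the product really is a distribution
```

The chain rule always gives $P(x_1,x_2,x_3) = P(x_1)P(x_2 \mid x_1)P(x_3 \mid x_1, x_2)$; the factorization above additionally replaces $P(x_3 \mid x_1, x_2)$ by $P(x_3 \mid x_2)$, which is exactly the conditional independence the graph encodes.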
I was wondering where we get the product formula for $\mathbb{P}_\beta$ from. Does it follow directly from the chain rule for probability? If so, can someone please do a derivation.
|
Let $S_n$ and $T_m$ be two binomial variables satisfying $S_n\sim B(n,\frac12)$ and $T_m\sim B(m,\frac12)$. Define $\tilde{S}_n=\frac{2S_n-n}{\sqrt{n}}$ and define $\tilde{T}_m$ similarly. For any fixed $s$ and $t$, it is well-known that (Central Limit Theorem) $$\lim_{n,m\rightarrow\infty}(\mathbb P(\tilde S_n\le s),\mathbb P(\tilde T_m\le t))=(\Phi(s),\Phi(t)),$$ where $\Phi(x)=\frac1{\sqrt{2\pi}}\int_{-\infty}^{x}\exp(-\frac{z^2}{2})dz$ is the distribution function of a standard normal variable.
Let $c=c_{n,m}=\mathbb E(\tilde{S}_n\tilde{T}_m)$ for short. Here $c$ may depend on $n$ and $m$.
Suppose that $\alpha\le c_{n,m}\le\beta$ holds uniformly for two absolute constants $\alpha,\beta\in(0,1)$. For fixed $s$ and $t$, it "seems reasonable" to conjecture that the joint distribution of $\tilde S_n$ and $\tilde T_m$ is close to corresponding 2-dimensional normal distribution: \begin{equation}\label{close} \mathbb P(\tilde S_n\le s,\tilde T_m\le t)=(1+o(1))\hat\Phi_c(s,t),~~~~~~~~~~~~~~~~~~~~~~~~~~(*) \end{equation} where $o(1)$ tends to 0 as $n,m$ tend to infinity and $\hat\Phi_c(s,t)$ is defined by $$\hat\Phi_c(s,t)=\frac1{2\pi\sqrt{1-c^2}}\iint_{x\le s,~y\le t}\exp\left[-\frac{x^2-2cxy+y^2}{2(1-c^2)}\right]dxdy.$$
Here are my questions:
$1.$ Is the condition $\alpha\le c_{n,m}\le\beta$ sufficient to guarantee that $(*)$ holds? How about $c=\alpha$?
$2.$ If $1.$ is true, can the $o(1)$ term in $(*)$ be improved ($O(\frac1{\sqrt{n}}+\frac1{\sqrt{m}})$ for example)?
$3.$ If $1.$ is not true, can we get a good estimation of $\mathbb P(\tilde S_n\le s,\tilde T_m\le t)$ for fixed $s$ and $t$?
$\bf{Remark:}$ This problem is one of my attempts to solve Link. Any hint or reference will be appreciated. :)
|
Recap
In the previous chapter:
- We considered games of incomplete information;
- We discussed some basic utility theory;
- We considered the principal agent game.
In this chapter we will take a look at a more general type of random game.
Stochastic games
Definition of a stochastic game
A stochastic game is defined by:
- A set of states \(X\), with a stage game defined for each state;
- A set of strategies \(S_i(x)\) for each player for each state \(x\in X\);
- A set of rewards dependent on the state and the actions of the other players: \(u_i(x,s_1,s_2)\);
- A set of probabilities of transitioning to a future state: \(\pi(x'|x,s_1,s_2)\);
- Each stage game is played at a set of discrete times \(t\).
We will make some simplifying assumptions in this course:
- The length of the game is not known (infinite horizon), so we use discounting;
- The rewards and transition probabilities do not depend on time;
- We will only consider strategies called Markov strategies.
Definition of a Markov strategy
A strategy is called a Markov strategy if the behaviour dictated is not time dependent.
Example
Consider the following game with \(X=\{x,y\}\):
\(S_1(x)=\{a,b\}\) and \(S_2(x)=\{c,d\}\); \(S_1(y)=\{e\}\) and \(S_2(y)=\{f\}\);
We have the stage game corresponding to state \(x\):
The stage game corresponding to state \(y\):
The transition probabilities corresponding to state \(x\):
The transition probabilities corresponding to state \(y\):
A concise way of representing all this is shown.
We see that the Nash equilibrium for the stage game corresponding to \(x\) is \((a, c)\) however as soon as the players play that strategy profile they will potentially go to state \(y\) which is an absorbing state at which players gain no further utility.
To calculate utilities for players in infinite horizon stochastic games we use a discount rate. Thus without loss of generality if the game is in state \(x\) and we assume that both players are playing \(\sigma^*_i\) then player 1 would be attempting to maximise future payoffs:

\[U_1(x) = \max_{s_1 \in S_1(x)} \left\{ u_1(x, s_1, \sigma^*_2) + \delta \sum_{x' \in X} \pi(x' \mid x, s_1, \sigma^*_2) U_1^*(x') \right\}\]
where \(U_1^*\) denotes the expected utility to player 1 when both players are playing the Nash strategy profile.
Thus a Nash equilibrium satisfies:
Solving these equations is not straightforward. We will take a look at one approach by solving the example we have above.
Finding equilibria in stochastic games
Let us find a Nash equilibrium for the game considered above with \(\delta=2/3\).
State \(y\) gives no value to either player so we only need to consider state \(x\). Let the future gains to player 1 in state \(x\) be \(v\), and the future gains to player 2 in state \(x\) be \(u\). Thus the players are facing the following game:
We consider each strategy pair and state the condition for Nash equilibrium:
- \((a,c)\): \(v\leq 21\) and \(u\leq 3\).
- \((a,d)\): \(u\geq3\).
- \((b,c)\): \(v\geq 21\) and \(5\geq 6\).
- \((b,d)\): \(5\leq2\).
Now consider the implications of each of those profiles being an equilibrium:
- \(8+v/3=v\) \(\Rightarrow\) \(v=12\) and \(4+u/3=u\) \(\Rightarrow\) \(u=6\), which contradicts the corresponding inequality.
- \(3+2u/3=u\) \(\Rightarrow\) \(u=9\).
- The second inequality cannot hold.
- The inequality cannot hold.
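The fixed-point computations above are easy to verify numerically; the following sketch simply iterates each update until it converges (each map is a contraction since \(\delta = 2/3 < 1\)):

```python
def fixed_point(f, x0=0.0, tol=1e-12, max_iter=10_000):
    """Iterate x <- f(x) until it stops changing (f is a contraction here)."""
    x = x0
    for _ in range(max_iter):
        nx = f(x)
        if abs(nx - x) < tol:
            return nx
        x = nx
    raise RuntimeError("did not converge")

# (a, c) profile: v = 8 + v/3 and u = 4 + u/3
v_ac = fixed_point(lambda v: 8 + v / 3)      # -> 12
u_ac = fixed_point(lambda u: 4 + u / 3)      # -> 6, contradicting u <= 3

# (a, d) profile: u = 3 + 2u/3
u_ad = fixed_point(lambda u: 3 + 2 * u / 3)  # -> 9, consistent with u >= 3

print(v_ac, u_ac, u_ad)
```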
Thus the unique Markov strategy Nash equilibrium is \((a,d)\), which is not the stage Nash equilibrium!
|
No, this is impossible whenever you have three or more coins.
The case of two coins
Let us first see why it works for two coins as this provides some intuition about what breaks down in the case of more coins.
Let $X$ and $Y$ denote the Bernoulli distributed variables corresponding to the two cases, $X \sim \mathrm{Ber}(p)$, $Y \sim \mathrm{Ber}(q)$. First, recall that the correlation of $X$ and $Y$ is
$$\mathrm{corr}(X, Y) = \frac{E[XY] - E[X]E[Y]}{\sqrt{\mathrm{Var}(X)\mathrm{Var}(Y)}},$$
and since you know the marginals, you know $E[X]$, $E[Y]$, $\mathrm{Var}(X)$, and $\mathrm{Var}(Y)$, so by knowing the correlation, you also know $E[XY]$. Now, $XY = 1$ if and only if both $X = 1$ and $Y = 1$, so$$E[XY] = P(X = 1, Y = 1).$$
By knowing the marginals, you know $p = P(X = 1, Y = 0) + P(X = 1, Y = 1)$, and $q = P(X = 0, Y = 1) + P(X = 1, Y = 1)$. Since we just found that you know $P(X = 1, Y = 1)$, this means that you also know $P(X = 1, Y = 0)$ and $P(X = 0, Y = 0)$, but now you're done, as the probability you are looking for is
$$P(X = 1, Y = 0) + P(X = 0, Y = 1) + P(X = 1, Y = 1).$$
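The two-coin argument can be sketched in code with a made-up joint distribution: from the marginals and the correlation alone we can reconstruct $P(X=1, Y=1)$ and hence the probability of at least one head:

```python
import math

# A hypothetical joint distribution for two coins, P[(x, y)]
P = {(0, 0): 0.5, (0, 1): 0.1, (1, 0): 0.2, (1, 1): 0.2}

p = P[(1, 0)] + P[(1, 1)]   # marginal P(X = 1)
q = P[(0, 1)] + P[(1, 1)]   # marginal P(Y = 1)
corr = (P[(1, 1)] - p * q) / math.sqrt(p * (1 - p) * q * (1 - q))

# Using only p, q and corr, reconstruct E[XY] = P(X = 1, Y = 1) ...
p11 = corr * math.sqrt(p * (1 - p) * q * (1 - q)) + p * q
# ... and from it the probability of at least one head:
at_least_one = p + q - p11

print(at_least_one)  # equals 1 - P[(0, 0)] = 0.5
```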
Now, I personally find all of this easier to see with a picture. Let $P_{ij} = P(X = i, Y = j)$. Then we may picture the various probabilities as forming a square:
Here, we saw that knowing the correlations meant that you could deduce $P_{11}$, marked red, and that knowing the marginals, you knew the sum for each edge (one of which are indicated with a blue rectangle).
The case of three coins
This will not go as easily for three coins; intuitively it is not hard to see why. Knowing the marginals and the correlations gives you a total of $6 = 3 + 3$ parameters, while the joint distribution has $2^3 = 8$ outcomes, of which $7$ are free (knowing the probabilities of $7$ of them determines the last one). Since $7 > 6$, it seems reasonable that one could cook up two different joint distributions whose marginals and correlations are the same, and then permute the probabilities until the ones you are looking for differ.
Let $X$, $Y$, and $Z$ be the three variables, and let
$$P_{ijk} = P(X = i, Y = j, Z = k).$$
In this case, the picture from above becomes the following:
The dimensions have been bumped by one: The red vertex has become several coloured edges, and the edge covered by a blue rectangle have become an entire face. Here, the blue plane indicates that by knowing the marginal, you know the sum of the probabilities within; for the one in the picture,
$$P(X = 0) = P_{000} + P_{010} + P_{001} + P_{011},$$
and similarly for all other faces in the cube. The coloured edges indicate that by knowing the correlations, you know the sum of the two probabilities connected by the edge. For example, by knowing $\mathrm{corr}(X, Y)$, you know $E[XY]$ (exactly as above), and
$$E[XY] = P(X = 1, Y = 1) = P_{110} + P_{111}.$$
So, this puts some limitations on possible joint distributions, but now we've reduced the exercise to the combinatorial exercise of putting numbers on the vertices of a cube. Without further ado, let us provide two joint distributions whose marginals and correlations are the same:
Here, divide all numbers by $100$ to obtain a probability distribution. To see that these work and have the same marginals/correlations, simply note that the sum of probabilities on each face is $1/2$ (meaning that the variables are $\mathrm{Ber}(1/2)$), and that the sums for the vertices on the coloured edges agree in both cases (in this particular case, all correlations are in fact the same, but that doesn't have to be the case in general).
Finally, the probabilities of getting at least one head, $1 - P_{000}$ and $1 - P_{000}'$, are different in the two cases, which is what we wanted to prove.
For me, coming up with these examples came down to putting numbers on the cube to produce one example, and then simply modifying $P_{111}$ and letting the changes propagate.
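The failure can also be checked with a standard pair of distributions (different from the hand-built ones in the figures): three pairwise-independent fair coins, uniform on the even-parity outcomes, versus three fully independent fair coins. All marginals and pairwise correlations agree, yet $P(\text{at least one head})$ differs:

```python
from itertools import product

# Distribution 1: uniform on outcomes with an even number of heads (pairwise independent)
P1 = {o: (0.25 if sum(o) % 2 == 0 else 0.0) for o in product([0, 1], repeat=3)}
# Distribution 2: three fully independent fair coins
P2 = {o: 0.125 for o in product([0, 1], repeat=3)}

def marginal(P, i):
    return sum(pr for o, pr in P.items() if o[i] == 1)

def pair11(P, i, j):  # P(X_i = 1, X_j = 1), i.e. E[X_i X_j]
    return sum(pr for o, pr in P.items() if o[i] == 1 and o[j] == 1)

# Same marginals (all Ber(1/2)) and same pairwise moments (hence correlations)
for i in range(3):
    assert marginal(P1, i) == marginal(P2, i) == 0.5
for i, j in [(0, 1), (0, 2), (1, 2)]:
    assert pair11(P1, i, j) == pair11(P2, i, j) == 0.25

# ... but different probabilities of at least one head:
print(1 - P1[(0, 0, 0)], 1 - P2[(0, 0, 0)])  # 0.75 0.875
```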
Edit: This is the point where I realized that you were actually working with fixed marginals, and that you know that each variable was $\mathrm{Ber}(1/10)$, but if the picture above makes sense, it is possible to tweak it until you have the desired marginals.
Four or more coins
Finally, when we have more than three coins it should not be surprising that we can cook up examples that fail, as we now have an even bigger discrepancy between the number of parameters required to describe the joint distribution and those provided to us by marginals and correlations.
Concretely, for any number of coins greater than three, you could simply consider the examples whose first three coins behave as in the two examples above and for which the outcomes of the final two coins are independent from all other coins.
|
In this chapter, we will discuss the solutions to the Friedmann equations relating to the matter dominated universe. In cosmology, because we view everything on a very large scale, solar systems and galaxies all appear like dust particles, so we can call it a dusty universe or matter-only universe.
In the
Fluid Equation,
$$\dot{\rho} = -3\left ( \frac{\dot{a}}{a} \right )\rho -3\left ( \frac{\dot{a}}{a} \right )\left ( \frac{P}{c^2} \right )$$
We can see there is a pressure term. For a dusty universe,
P = 0, because matter does not move at relativistic speeds, so its pressure is negligible compared with its energy density.
So, the Fluid Equation will become,
$$\dot{\rho} = -3\left ( \frac{\dot{a}}{a} \right )\rho$$
$$\Rightarrow \dot{\rho}a + 3\dot{a}\rho = 0$$
$$\Rightarrow \frac{1}{a^3}\frac{\mathrm{d}}{\mathrm{d} t}(a^3 \rho) = 0$$
$$\Rightarrow \rho a^3 =\: constant$$
$$\Rightarrow \rho \propto \frac{1}{a^3}$$
This is intuitive: the density should scale as $a^{-3}$ because the volume is increasing as $a^3$.
From the last relation, we can say that,
$$\frac{\rho (t)}{\rho_0} = \left [ \frac{a_0}{a(t)} \right ]^3$$
For the present universe,
a, which is equal to $a_0$, should be 1. So,
$$\rho(t) = \frac{\rho_0}{a^3}$$
In a matter dominated flat universe, k = 0. So, Friedmann equation will become,
$$\left ( \frac{\dot{a}}{a} \right )^2 = \frac{8 \pi G\rho}{3}$$
$$\dot{a}^2 = \frac{8\pi G \rho a^2}{3}$$
By solving this equation, we will get,
$$a \propto t^{2/3}$$
$$\frac{a(t)}{a_0} = \left ( \frac{t}{t_0} \right )^{2/3}$$
$$a(t) = \left( \frac{t}{t_0} \right )^{2/3}$$
This means that the universe will keep on expanding at a diminishing rate. The following image shows the expansion of a dusty universe.
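As a quick numerical sanity check (with an arbitrary choice of $t_0$), we can verify that $a(t) = (t/t_0)^{2/3}$ satisfies the flat-universe Friedmann equation with $\rho = \rho_0 a^{-3}$, writing $\frac{8\pi G\rho_0}{3} = H_0^2$ and using $H_0 = \frac{2}{3t_0}$ (the Hubble constant derived further below):

```python
# Verify numerically that a(t) = (t/t0)**(2/3) solves
#   (da/dt)**2 = H0**2 / a,  with H0 = 2/(3*t0),
# i.e. the Friedmann equation with rho = rho0 / a**3 and 8*pi*G*rho0/3 = H0**2.
t0 = 13.8          # arbitrary time unit for this sketch
H0 = 2 / (3 * t0)

def a(t):
    return (t / t0) ** (2 / 3)

def a_dot(t, h=1e-6):
    return (a(t + h) - a(t - h)) / (2 * h)  # central finite difference

for t in (0.5 * t0, t0, 2 * t0):
    lhs = a_dot(t) ** 2
    rhs = H0 ** 2 / a(t)
    assert abs(lhs - rhs) < 1e-6 * rhs  # agreement up to finite-difference error

print("a(t) = (t/t0)^(2/3) satisfies the Friedmann equation")
```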
Take a look at the following equation −
$$\frac{\rho(t)}{\rho_0} = \left ( \frac{t_0}{t} \right )^2$$
We know that the scale factor changes with time as $t^{2/3}$. So,
$$a(t) = \left ( \frac{t}{t_0} \right )^{2/3}$$
Differentiating it, we will get,
$$\frac{da}{dt} = \dot{a} = \frac{2}{3} \, \frac{t^{-1/3}}{t_0^{2/3}}$$
We know that the Hubble Constant is,
$$H(t) = \frac{\dot{a}}{a} = \frac{2}{3t}$$
This is the equation for the Einstein–de Sitter universe. If we want to calculate the present age of the universe, then,
$$t_0 = t_{age} = \frac{2}{3H_0}$$
After putting in the value of $H_0$ for the present universe, we get an age of the universe of about 9 Gyr. But there are many globular clusters in our own Milky Way galaxy with ages greater than that.
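A quick check of that figure (with an assumed $H_0 \approx 70\ \mathrm{km\,s^{-1}\,Mpc^{-1}}$, which is my assumption and not specified in the text):

```python
# Rough numerical check of the Einstein-de Sitter age t0 = 2 / (3 H0),
# using an assumed present-day Hubble constant H0 = 70 km/s/Mpc.
KM_PER_MPC = 3.0857e19           # kilometres in one megaparsec
SEC_PER_GYR = 3.1557e16          # seconds in one gigayear

H0 = 70.0 / KM_PER_MPC           # convert to s^-1
t0 = 2.0 / (3.0 * H0)            # Einstein-de Sitter age in seconds
print(round(t0 / SEC_PER_GYR, 1))   # about 9.3 Gyr
```

This matches the roughly 9 Gyr quoted above, which is indeed younger than the oldest globular clusters.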
That was all about the dusty universe. Now, if you assume that the universe is dominated by radiation and not by matter, then the radiation energy density goes as $a^{-4}$ rather than $a^{-3}$. We will see more of it in the next chapter.
In cosmology, on large scales everything behaves like dust particles; hence we call it the dusty universe, or matter-only universe.
If we assume that the universe is dominated by radiation and not by matter, then the radiation energy density goes as $a^{-4}$ rather than $a^{-3}$.
|
Wikipedia has an extensive list of languages that use the off-side rule: ABC, Boo, BuddyScript, Cobra, CoffeeScript, Converge, Curry, Elixir (, do: blocks), Elm, F# (if #light "off" is not specified), Genie, Haskell (only for where, let, do, or case ... of clauses when braces are omitted), Inform 7, ISWIM, the abstract language that ...
There are: Elm, Haskell, its predecessor Miranda and its predecessor ISWIM, YAML where spaces are crucial for syntax and tabs are forbidden, OCCAM, Coffee script and Cokescript, both of which are language-to-language compilers with JavaScript as target, and the esoteric Whitespace. There is also Agda, an interactive theorem prover, which is probably not what you had in ...
Unfortunately, this has no name—because it doesn't work. Pontus provided a good test case: lst = [2, 1, 3, 4, 5]; sort_algo(lst); print(lst) prints [2, 1, 3, 4, 5]. It's been mathematically proven that comparison-based sorting algorithms (that is, sorting algorithms based on comparing elements against each other, rather than exploiting certain clever tricks) can ...
Make fits your description, even though it probably isn't quite what you have in mind, with its limited syntax and power.It infamously indicates its code blocks (recipes) with a particular form of whitespace: one tab character. Alternative ways are available (e.g. GNU Make supports using an alternative character), but rarely used in practice.Another ...
You haven't extracted the 14 most significant bits. First, you have to write $r$ as a $w$-bit number:$$00000001000011001100000001000000$$Now you extract the 14 most significant bits:$$00000001000011$$Converting to decimal, this is 67.
There is absolutely no problem adapting dynamic programming to count solutions without regard to order (i.e., when order doesn't matter). Let $D(S,m,n)$ be the number of ways to obtain a change of $n$ using the first $m$ coins of $S = S_1,\ldots,S_M$. We have $D(S,m,0) = 1$, $D(S,m,n) = 0$ when $n < 0$, and otherwise $$D(S,m,n) = \sum_{i=1}^m D(S,i,n-S_i)...$$
This area is known as black-box optimization: you have a function $f(x,y,z)$ where you have the ability to evaluate $f$ on an input of your choice, and you want to find $x,y,z$ that maximizes $f(x,y,z)$. (Here $x$ is the decision threshold, $y$ is the maximum % gain per trade, and $z$ is the stop loss, and $f(x,y,z)$ is the amount of ending money at the end ...
Parallelism has costs. The processes have to be scheduled, communicate with each other, manage resources, etc. In return you can do multiple things at the same time.When you have a lot of slow tasks that can be done independently, parallel processing will speed things up a lot.But when you try to parallelize an easy task it might take longer to handle ...
Your problem is a slightly more general version of computing the vertex-connectivity of a graph. If all weights are equal, then it is equivalent to the vertex-connectivity problem.The problem can be solved in polynomial time with network flow, yes; but you'll need to invoke a network flow subroutine several times; just one invocation won't be enough....
The length of the binary representation of a natural number $n$ is roughly $\log_2 n$. As an example, the number represented by the binary string $10^{n-1}$ of length $n$ is $2^n$.Your sources are misleading. Usually $n$ is reserved for the input length or a related quantity, not the input value. If the input to a function is an integer $m$, then the input ...
So you have the right logic, if you have a loop of $O(n)$ iterations, the complexity of the entire loop will be $O(n*\text{complexity of loop operations})$. In this case, you again are correct that your loop's complexity is $O(n)$ for each iteration.Your last bullet point shows that you understand this as well, as the total loop complexity is then $O(n^2)$...
Let's call your proposal X, instead:X = lambda f : (lambda x : f( lambda z: x(x) (z) )) (lambda x : f(x(x)))For convenience, we can rewrite it asM = (lambda x : f(x(x))) # depends on fX = lambda f : (lambda x : f( lambda z: x(x) (z) )) MNow, when we invoke X(f), we getX(f) =(lambda x : f( lambda z: x(x) (z) )) M =f( lambda z: M(M) (z) )...
Think carefully about the flow.Your innermost While loop runs through b = 1, 2, each time it hits.It does this for EACH value of a in the outermost loop.So when a == 2, we progress inward and run through that loop twice. a == 2 both times.
Your question appears to be a very long version of "Could we add some sort of punctuation to Polish notation so that it's unambiguous where each numerical operand ends?" Yes, of course you could. Normally, we use a space and write the numbers most-significant digit first but, sure, if you want to use the symbol ∅ and write the digits in the opposite order, ...
To answer your question literally, yes, the code does just return a string of length $n$ where $n$ is the length of the integer that we pass. And this is the right way to think about it.Your source, though, is using $n$ to denote the value of the integer, not its length. This is an unusual thing to do and it is, in my opinion, a very bad idea when ...
Your problem is no harder and no easier than the approximate subset-sum problem. There is a natural approach for your problem:Find any subset that sums to something close to zero, and output it. Remove those numbers from the set $A$.Go back to step 1 and repeat, until the set $A$ is empty.This requires a way to find a subset that sums to approximately ...
You can solve this in $O(|E| \log |V|)$ time, if all weights are non-negative. Basically, you'll build a larger graph, of twice the size, then do a shortest-paths query in this graph.I will call the edges of the original multigraph regular and added edges of the form $i\to j$ for $i\subset j$ as irregular.For each set $i$, you have two vertices $i^-,i^+$...
When you slice a Numpy array, you get a special "slice" object, which holds pointers back into the array it was taken from. So if you do the same slicing operation twice, you'll get two different slice objects, which contain pointers to the same array in memory. (If you compare them using == instead of is, you'll see they compare equal, even though they're ...
You've divided the x values into 10 columns, and divided the y values into 10 rows, so we have a 10x10 grid.Take any row that is already covered by some already evaluated point, and remove it from the set of rows. Do the same with the columns. In your picture, we are left with 6 rows and 6 columns. Consider the 6x6 grid obtained by looking only at those ...
The constraints you have are not very clear (why should you start with "central items" ?). You should maybe try to come up with a well defined problem by precisely defining the clustering(s) you are looking for.For example, if the clustering you want is such that the maximum number of clusters is $k_{max}$ and the maximum distance between two elements in ...
Your analogy is mistaken. You assume that, if you glanced at your calendar, you would quickly be able to identify all ten-day blocks of free time. Actually I bet you'd have all kinds of problems with that.Can you really tell at a glance the difference between a free block of $863\ 999$s (ten days minus a second) and one of $864\,000$s? Honestly, even if ...
I would suggest looking into standard methods for image processing. You could use the Hough transform to detect circles. You could potentially use morphological transforms and the watershed algorithm to smooth out and remove noise and detect boundaries between the regions.
Backslash is the escape character to allow you to enter non-printable characters into a string literal.One of the most known escape sequences for example is \n for a newline.\a is the bell character and \b is the backspace character.
The problem is NP-hard, even when you ignore the constraints about categories. See https://cstheory.stackexchange.com/q/17462/5038 for a simple proof based on a reduction from the longest path problem (or from the Hamiltonian path problem). Therefore, you should not expect any efficient algorithm.There are algorithms whose running time is polynomial in ...
Let $T(x)$ denote the running time of the algorithm. The following recurrence captures the running time of the algorithm:$T(x)=T(\frac{x}{2})+c,\ x\geq3,\ c \in O(1)$where $T(0) = T(1) = a \in O(1)$Solving this we get, $T(x) \in O(\log_2 x)$
|
For which non-constant rational functions $f(x)$ in $\mathbb{Q}(x)$ is there $\alpha$, algebraic over $\mathbb{Q}$, such that $\alpha$ and $f(\alpha) \neq \alpha$ are algebraic conjugates? More generally, can one describe the set of such $\alpha$ (empty/non-empty, finite/infinite etc.) if one is given $f$?
Examples:
- $f(x)=-x$: These are precisely the square roots of algebraic numbers $\beta$ such that there is no square root of $\beta$ in $\mathbb{Q}(\beta)$. There are infinitely many $\alpha$ and even infinitely many of degree $2$.
- $f(x)=x^2$: These are precisely the roots of unity of odd order, since $H(\alpha)=H(\alpha^2)=H(\alpha)^2$ implies $H(\alpha)=1$, so $\alpha$ is a root of unity. Here $H(\alpha)$ is the absolute multiplicative Weil height of $\alpha$. There are infinitely many $\alpha$, but only finitely many of degree $\leq D$ for any $D$.
- $f(x)=x+1$: There is no $\alpha$. If there were, and $P(x)$ was its minimal polynomial, then $P(x+1)$ would be another irreducible polynomial, vanishing at $\alpha$, with the same leading coefficient, and hence $P(x+1)=P(x)$. Looking at the coefficients of the second highest power of $x$ now leads to a contradiction. Analogously for $f(x)=x+a$ if $a$ is any non-zero rational number.
So the existence of such an $\alpha$ and the set of all possible $\alpha$ seem to depend rather intricately on $f(x)$, which seems interesting to me. As I found no discussion of this question in the literature, I post it here.
UPDATE: Firstly thanks to all who have contributed so far! As Eric Wofsey pointed out, any solution $\alpha$ will satisfy $f^n(\alpha)=\alpha$ for some $n>1$, where $f^n$ is the $n$-th iterate of $f$. So one should consider solutions of the equation $f^n(x)-x=0$ or $f^p(x)-x=0$ for $p$ prime.
If the degree of $f$ is at least 2, one can always find irrational such solutions $\alpha$ with $f(\alpha) \neq \alpha$ by the answer of Joe Silverman. However, for his proof to work, we'd need to know that $f^k(\alpha)$ and $\alpha$ are conjugate for some $k$ with $0 < k <p$. I'm not enough of an expert to follow through with his hints for proving this, but if someone does, I'd be very happy about an answer!
If the degree of $f$ is 1, then $f$ is a Möbius transformation and all $f^n$ will have the same fixed points as $f$ (so there's no solution) unless $f$ is of finite order. In that case, if $f(x) \neq x$, the order is 2, 3, 4 or 6 (see http://home.wlu.edu/~dresdeng/papers/nine.pdf). By the same reference, in the latter three cases, $f$ is conjugate to $\frac{-1}{x+1}$, $\frac{x-1}{x+1}$ or $\frac{2x-1}{x+1}$, so it suffices to consider these $f$, which give rise to the minimal polynomials $x^3-nx^2-(n+3)x-1$ (closely related to the polynomial in GNiklasch's answer), $x^4+nx^3-6x^2-nx+1$ and $x^6-2nx^5+5(n-3)x^4+20x^3-5nx^2+2(n-3)x+1$, if my calculations are correct. If the order is 2, the map is of the form $\frac{-x+a}{bx+1}$ or $\frac{a}{x}$, which leads to $x^2+(bn-a)x+n$ or $x^2+nx+a$ respectively. So this case is somewhat degenerate, which explains the unusual behavior of $f(x)=-x$ and $f(x)=x+1$ above.
|
I'm looking for a reference book or article for the following two facts. In both statements, a Polish space $E$ and an ambient probability space $(\Omega, {\cal A}, \Pr)$ are given, and I consider the topology of weak convergence on the space $E'$ of probability measures on $E$ (thus $E'$ is Polish too).
1) Let $X$ be a random variable taking its values in $E$, and let ${\cal B} \subset {\cal A}$ be a $\sigma$-field. Then the conditional law ${\cal L}(X \mid {\cal B})$ is an $E'$-valued random variable. This should be a consequence of the measurability of the conditional expectations $\Bbb E[f(X) \mid {\cal B}]={\cal L}(X \mid {\cal B})(f)$ but I cannot find any standard probability theory book asserting this fact (I'd prefer a "standard" probability theory book rather than a more technical book about random measures).
2) Let $(\mu_n)$ be a sequence of random probability measures on $E$ (in other words the $\mu_n$ are $E'$-valued random variables). Let $\mu_\infty$ be another random probability on $E$ and assume that for each bounded continuous function $f\colon E\to \Bbb R$, the convergence $\mu_n(f) \to \mu_\infty(f)$ holds almost surely. It is tempting to conclude that $\mu_n \to \mu_\infty$ but this is not straightforward since the full set of convergence in $\mu_n(f) \to \mu_\infty(f)$ could depend on $f$. So I'm looking for a reference book showing this almost sure convergence in $E'$. Theorem 7.5.2 in this book by Kuksin's and Shirikyan answers the question but it is stronger than desired because it does not assume the presence of $\mu_\infty$. Moreover I'd prefer a more standard probability/measure theory book.
Actually I would be satisfied to find a reference for the special case when $E$ is compact. Thanks in advance.
|
Let $\Omega$ be the set of all infinite binary sequences $(x_i)_{i\ge 0}$ endowed with the product topology coming from discrete topology on $\{0,1\}$. Consider $0<\alpha<1$ and let $$K_\alpha=\{(x_i)\in\Omega:\lim_{n\to\infty}\frac{1}{n}\sum_{i=0}^{n-1}x_i=\alpha\}.$$ Let $\mathcal{M}_\sigma(\Omega)$ stand for the family of all shift invariant Borel probability measures on $\Omega$. For $\mu\in\mathcal{M}_\sigma(\Omega)$ we write $h(\mu)$ for the Kolmogorov-Sinai (metric) entropy of $\mu$. Let $M_\alpha$ be the set of shift invariant measures concentrated on $K_\alpha$, that is, $M_\alpha= \{\mu\in\mathcal{M}_\sigma(\Omega):\mu(K_\alpha)=1\}$. It is easy to see that $M_\alpha$ is a closed subset of $\mathcal{M}_\sigma(\Omega)$ equipped with the weak$^*$ topology. What can be said about the number $\eta=\sup\{h(\mu): \mu\in M_\alpha\}$?
It is clear that the supremum is achieved by some ergodic measure, because $\mu\mapsto h(\mu)$ is upper semicontinuous on $\mathcal{M}_\sigma(\Omega)$. But is a measure achieving that maximum unique?
Uniqueness is true for $\alpha=1/2$, where the Bernoulli measure attains the maximum.
A similar (but I am not sure if equivalent question) is the following:
Let $K_\alpha'$ be the set of all numbers in the unit interval whose binary expansion belongs to $K_\alpha$. What is the Hausdorff dimension of $K'_\alpha$?
|
Permanent link: https://www.ias.ac.in/article/fulltext/joaa/038/01/0017
We developed a generic formalism to estimate the event rate and the redshift distribution of Fast Radio Bursts (FRBs) in our previous publication (Bera et al. 2016), considering FRBs are of an extragalactic origin. In this paper, we present (a) the predicted pulse widths of FRBs by considering two different scattering models, (b) the minimum total energy required to detect events, (c) the redshift distribution and (d) the detection rates of FRBs for the Ooty Wide Field Array (OWFA). The energy spectrum of FRBs is modelled as a power law with an exponent $-\alpha$ and our analysis spans a range $-3\leq \alpha \leq 5$. We find that OWFA will be capable of detecting FRBs with $\alpha\geq 0$. The redshift distribution and the event rates of FRBs are estimated by assuming two different energy distribution functions; a Delta function and a Schechter luminosity function with an exponent $-2\le \gamma \le 2$. We consider an empirical scattering model based on pulsar observations (model I) as well as a theoretical model (model II) expected for the intergalactic medium. The redshift distributions peak at a particular redshift $z_p$ for a fixed value of $\alpha$, which lie in the range $0.3\leq z_p \leq 1$ for the scattering model I and remain flat and extend up to high redshifts ($z\lesssim 5$) for the scattering model II.
|
I can't really understand MixColumns in the Advanced Encryption Standard; can anyone help me understand how to do it?
I found some topics on the internet about MixColumns, but I still have a lot of questions to ask.
ex.
$$ \begin{bmatrix} \mathtt{02} & \mathtt{03} & \mathtt{01} & \mathtt{01} \\ \mathtt{01} & \mathtt{02} & \mathtt{03} & \mathtt{01} \\ \mathtt{01} & \mathtt{01} & \mathtt{02} & \mathtt{03} \\ \mathtt{03} & \mathtt{01} & \mathtt{01} & \mathtt{02} \\ \end{bmatrix} \cdot \begin{bmatrix} \mathtt{d4} \\ \mathtt{bf} \\ \mathtt{5d} \\ \mathtt{30} \\ \end{bmatrix} = \begin{bmatrix} \mathtt{04} \\ \mathtt{66} \\ \mathtt{81} \\ \mathtt{e5} \\ \end{bmatrix} $$
Here, the first element is calculated as
$$(\mathtt{d4} \cdot \mathtt{02}) + (\mathtt{bf} \cdot \mathtt{03}) + (\mathtt{5d} \cdot \mathtt{01}) + (\mathtt{30} \cdot \mathtt{01}) = \mathtt{04}$$
First we will try to solve $\mathtt{d4} \cdot \mathtt{02}$.
We will convert $\mathtt{d4}$ to its binary form, where $\mathtt{d4}_{16} = \mathtt{1101\,0100}_2$.
$$\begin{aligned} \mathtt{d4} \cdot \mathtt{02} &= \mathtt{1101\,0100} \ll 1 & \text{(}{\ll}\text{ is left shift, 1 is the number of bits to shift)} \\ &= \mathtt{1010\,1000} \oplus \mathtt{0001\,1011} & \text{(XOR because the leftmost bit is 1 before shift)}\\ &= \mathtt{1011\,0011} & \text{(answer)} \end{aligned}$$
Calculation:
$$\begin{aligned} & \mathtt{1010\,1000} \\ & \mathtt{0001\,1011}\ \oplus \\ =& \mathtt{1011\,0011} \end{aligned}$$
The binary value of $\mathtt{d4}$ will be XORed with $\mathtt{0001\,1011}$ after shifting only if the leftmost bit of the binary value of $\mathtt{d4}$ is equal to 1 (before the shift).
My question is: what if the leftmost bit of the binary value is equal to 0? What do I XOR it with then? E.g. $\mathtt{01}_{16} = \mathtt{0000\,0001}_2$?
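For what it's worth, when the leftmost bit is 0 before the shift, nothing is XORed at all: the reduction by $\mathtt{0001\,1011}$ is only needed when the shift overflows out of 8 bits. A small Python sketch of the multiplication described above (not from the question, just an illustration of the standard AES field arithmetic):

```python
def xtime(b):
    """Multiply b by 0x02 in GF(2^8) with the AES polynomial."""
    b <<= 1
    if b & 0x100:        # top bit was set before the shift
        b ^= 0x11b       # reduce modulo x^8 + x^4 + x^3 + x + 1
    return b & 0xff      # if the top bit was 0, the shift alone is the answer

def gmul(a, b):
    """Multiply a and b in GF(2^8) by repeated xtime and XOR."""
    result = 0
    while b:
        if b & 1:
            result ^= a
        a = xtime(a)
        b >>= 1
    return result
```

For example, `gmul(0xd4, 0x02)` gives `0xb3` as in the worked calculation, and XORing the four products of the first row reproduces the `0x04` in the result column.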
|
I must have read and re-read introductory differential geometry texts ten times over the past few years, but the "torsion free" condition remains completely unintuitive to me.
The aim of this question is to try to finally put this uncomfortable condition to rest.
Ehresmann Connections
Ehresmann connections are a very intuitive way to define a connection on any fiber bundle. Namely, an Ehressmann connection on a fiber bundle $E\rightarrow M$ is just a choice of a complementary subbundle to $ker(TE \rightarrow TM)$ inside of $TE$. This choice is also called a horizontal bundle.
If we are dealing with a linear connection, then $E=TM$, and the Ehresmann connection is a subbundle of $TTM$. This makes intuitive sense -- basically it's saying that for each point in $TM$ it tells you how to move it to different vectors at the tangent spaces of different points. ($ker(TTM \rightarrow TM)$ will mean moving to different vectors at the same tangent space; so that is precluded.)
I like this definition -- it makes more intuitive sense to me than the definition of a Koszul connection: an $\mathbb{R}$-linear map $\Gamma(E)\rightarrow\Gamma(E\otimes T^*M)$ satisfying certain conditions. Unlike that definition, it puts parallel transport front and center.
Torsion-Freeness
A Levi-Civita connection is a connection that:
1. Is compatible with the Riemannian metric. (Basically, parallel transport preserves inner products.)
2. Is torsion-free, meaning $\nabla_XY - \nabla_YX = [X,Y]$.
This definition very heavily uses the less intuitive notion of connection.
So:
Questions
1. How can you rephrase the torsion-free condition in terms of the horizontal bundle of the connection? (Phrased differently: how can it be phrased in terms of parallel transports?)
2. I realized that I don't actually have handy an example of a connection on $\mathbb{R}^2$ that preserves the canonical Riemannian metric on $\mathbb{R}^2$ but that does have torsion. I bet that would help elucidate the answer to my first question.
|
It's been a while since my last blog entry, but the problem I posed on my earlier blog entry still persists - how to efficiently choose a good split plane for an n-vector data structure. To summarize the structure, geographic points are stored as n-vectors (unit vectors) in a binary tree. Branch nodes of this tree define a plane that bisects/halves the unit sphere - one child of the branch contains points that are below the plane, the other child contains points that are above. Like all trees, leaf nodes are turned into branch nodes as they are filled beyond a threshold. That is the basic idea of it.
Split plane determination must occur when a leaf becomes a branch or when a branch becomes unbalanced. In either case, the method called to determine the split plane is agnostic as to why it was called - it simply receives a bunch of points that it must determine a good split for. My initial implementation was naive:
bestsplit = ...
bestscore = infinite;
for (i = 0; i < numpoints; i++) {
    for (j = i + 1; j < numpoints; j++) {
        split = (points[i] - points[j]).normalize();
        score = 0;
        for (k = 0; k < numpoints; k++) {
            dot = split.dot(points[k]);
            // accumulate some scoring heuristics, such as whether each point
            // is above/below the split, how far above/below, etc.
            score += ...;
        }
        if (bestscore > score) {
            bestscore = score;
            bestsplit = split;
        }
    }
}
So basically - for each point, take the difference between it and every other point, normalize it, and consider this difference as the normal to the split plane, then test it to see how it "scores" using some heuristics. Probably you can see how this will perform miserably for any significant number of points. The running time is something like O(n^3), and that doesn't include the expensive normalization that is called O(n^2) times. Several other methods were tested, such as fixing the normal of the split plane to be perpendicular to the x, y, or z axis, but these also proved too expensive and/or had test cases where the split determination was unsatisfactory.
Heuristics
Enter calculus. If we can represent each of the heuristics as a mathematical function, we can determine when the function reaches what is called a "critical point". Specifically we are interested in the critical point that is the global maximum or minimum, depending on which heuristic it is for. So far we have three of these.
1. Similar Distance
We don't want a split plane where, for example, all points above are nearby and all points below are far away. The points should be as evenly distributed as possible on either side. Given that the dot product of a plane normal and any other point is negative when the point is below the plane and positive when the point is above, and the absolute value of the dot product increases as distance to the plane increases, the sum of all dot products for a good split will be at or close to zero. If we let \(P\) be the array of points, \(N\) be the number of points in the array, and \(S\) the split plane, the following function will add all dot products:
\(SumOfDots = \displaystyle\sum_{i=1}^{N} P_i \cdot S\)
The summation here is not really part of a mathematical function, at least not one we can perform meaningful calculus on, since the calculation must be done in the code. We don't know ahead of time how many points there will be or what their values are, so the function should be agnostic in this regard. As written we cannot use it without inordinate complication, but consider that it is really doing this:
\(SumOfDots = \displaystyle\sum_{i=1}^{N} P_{ix} * S_x + P_{iy} * S_y + P_{iz} * S_z\)
This summation expanded will look something like:
\(SumOfDots = P_{1x} * S_x + P_{1y} * S_y + P_{1z} * S_z + P_{2x} * S_x + P_{2y} * S_y + P_{2z} * S_z + P_{3x} * S_x + P_{3y} * S_y + P_{3z} * S_z + \cdots\)
We can then rewrite this as:
\(SumOfDots = S_x*(P_{1x} + P_{2x} + P_{3x} + \cdots) + S_y*(P_{1y} + P_{2y} + P_{3y} + \cdots) + S_z*(P_{1z} + P_{2z} + P_{3z} + \cdots)\)
Or similarly:
\(SumOfDots = S_x*(\displaystyle\sum_{i=1}^{N} P_{ix}) + S_y*(\displaystyle\sum_{i=1}^{N} P_{iy}) + S_z*(\displaystyle\sum_{i=1}^{N} P_{iz})\)
As far as the mathematical function is concerned, the sums are constants and we can replace them with single characters to appear concise:
\(A = \displaystyle\sum_{i=1}^{N} P_{ix}\)
\(B = \displaystyle\sum_{i=1}^{N} P_{iy}\)
\(C = \displaystyle\sum_{i=1}^{N} P_{iz}\)
We can pre-calculate these in the code as such:
double A = 0, B = 0, C = 0;
for (i = 0; i < N; i++) {
    A += points[i].x;
    B += points[i].y;
    C += points[i].z;
}
We will rewrite the function with these constants:
\(SumOfDots = S_x*A + S_y*B + S_z*C\)
This is great so far. We are interested in when this function reaches zero. To make it simpler, we can square it which makes the negative values positive, and then we become interested in when this function reaches a global minimum:
\(SquaredSumOfDots = (S_x*A + S_y*B + S_z*C)^2\)
So again, when this function reaches zero it means that the points on either side of the split plane - as denoted by \(S\) - are spread apart evenly. This does not mean that \(S\) is a good split plane overall, as the points could all lie on the plane, or some other undesirable condition occurs. For that we have other heuristics.
As a final note, since the vector \(S\) represents a unit normal to a plane, any determination of it must be constrained to space of the unit sphere:
\(S_x^2 + S_y^2 + S_z^2 = 1\)
2. Large Distance
Practically speaking, the points will not be random and will originate from a grid of points or combination of grids. But random or not, if the points form a shape that is not equilateral - in other words if they form a rectangular shape instead of a square or an ellipse instead of a circle - the larger axis of the shape should be split so that child areas are not even less equilateral. To achieve this we can emphasize that the sum of the absolute value of all the dot products is large, meaning that the points are farther away from the split plane. To do this, we want to find the global maximum of a function that calculates the sum:
\(SumOfAbsoluteDots=\displaystyle\sum_{i=1}^{N} |P_i \cdot S|\)
Unfortunately there is no way, that I know of, to handle absolute values and still determine critical points, so we need to rewrite this function without the absolute value operator. Really all we are interested in is when this function reaches a maximum, so we can replace it with a square:
\(SumOfSquaredDots=\displaystyle\sum_{i=1}^{N} (P_i \cdot S)^2\)
Not unlike the previous heuristic function, we need to rewrite this so that \(S\) is not contained in the summation, and we can extract the constant values. If we skip some of the expanding and reducing the squared dot product we will arrive at this step:
\(SumOfSquaredDots=(S_x^2*\displaystyle\sum_{i=1}^{N} P_{ix}^2) + (S_y^2*\displaystyle\sum_{i=1}^{N} P_{iy}^2) + (S_z^2*\displaystyle\sum_{i=1}^{N} P_{iz}^2) + (S_x * S_y * 2 * \displaystyle\sum_{i=1}^{N} P_{ix}*P_{iy})+ (S_x * S_z * 2 * \displaystyle\sum_{i=1}^{N} P_{ix}*P_{iz})+ (S_y * S_z * 2 * \displaystyle\sum_{i=1}^{N} P_{iy}*P_{iz})\)
Again we will create some named constants to be concise:
\(D = \displaystyle\sum_{i=1}^{N} P_{ix}^2\)
\(E = \displaystyle\sum_{i=1}^{N} P_{iy}^2\)
\(F = \displaystyle\sum_{i=1}^{N} P_{iz}^2\)
\(G = 2 * \displaystyle\sum_{i=1}^{N} P_{ix}*P_{iy}\)
\(H = 2 * \displaystyle\sum_{i=1}^{N} P_{ix}*P_{iz}\)
\(I = 2 * \displaystyle\sum_{i=1}^{N} P_{iy}*P_{iz}\)
As before, these can be pre-calculated:
double D = 0, E = 0, F = 0, G = 0, H = 0, I = 0;
for (i = 0; i < N; i++) {
    D += points[i].x * points[i].x;
    E += points[i].y * points[i].y;
    F += points[i].z * points[i].z;
    G += points[i].x * points[i].y;
    H += points[i].x * points[i].z;
    I += points[i].y * points[i].z;
}
G *= 2.0;
H *= 2.0;
I *= 2.0;
And then the function becomes:
\(SumOfSquaredDots=(S_x^2*D) + (S_y^2*E) + (S_z^2*F) + (S_x * S_y * G)+ (S_x * S_z * H) + (S_y * S_z * I)\)
Again when this function maximizes, the points are farthest away from the split plane, which is what we want.
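As a small numerical check (my own, in Python rather than the post's C-style code), the constant-folded form of \(SumOfSquaredDots\) agrees with the direct sum over points:

```python
import numpy as np

# Verify that the pre-computed constants D..I reproduce the direct
# sum of squared dot products for an arbitrary unit split normal S.
rng = np.random.default_rng(0)
P = rng.normal(size=(50, 3))
P /= np.linalg.norm(P, axis=1, keepdims=True)   # unit vectors (n-vectors)
S = np.array([1.0, 0.0, 0.0])                   # some unit split normal

direct = ((P @ S) ** 2).sum()                   # sum_i (P_i . S)^2

D, E, F = (P ** 2).sum(axis=0)
G = 2 * (P[:, 0] * P[:, 1]).sum()
H = 2 * (P[:, 0] * P[:, 2]).sum()
I = 2 * (P[:, 1] * P[:, 2]).sum()
folded = (S[0]**2 * D + S[1]**2 * E + S[2]**2 * F
          + S[0]*S[1]*G + S[0]*S[2]*H + S[1]*S[2]*I)

assert abs(direct - folded) < 1e-9
```

The two values match to floating-point precision, confirming the algebraic expansion above.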
3. Similar number of points
A good split plane will also have the same number of points on either side. We can again use the dot product since it is negative for points below the plane and positive for points above. But we cannot simply sum the dot products themselves, since a large difference for a point on one side will cancel out several smaller differences on the other. To account for this we normalize the distance to be either +1 or -1:
\(SumOfNormalizedDots=\displaystyle\sum_{i=1}^{N} \frac{P_i \cdot S}{\sqrt{(P_i \cdot S)^2}}\)
Expanding this becomes:
\(SumOfNormalizedDots=\displaystyle\sum_{i=1}^{N} \frac{P_{ix} * S_x + P_{iy} * S_y + P_{iz} * S_z}{\sqrt{(S_x^2*P_{ix}^2) + (S_y^2*P_{iy}^2) + (S_z^2*P_{iz}^2) + (S_x*S_y*2*P_{ix}*P_{iy})+ (S_x*S_z*2*P_{ix}*P_{iz})+ (S_y*S_z*2*P_{iy}*P_{iz})}}\)
Unfortunately there is no way to reduce this function so that we can extract all \(S\) references out of the summation, and as with the previous heuristics, put all \(P\) references into constants that we pre-calculate and use to simplify the function. So at present, this heuristic is not able to be used. I am still working on it. The problem is that each iteration of the sum is dependent on the values of \(S\) in such a way that it cannot be extracted.
Putting it all together
What we ultimately want to do is combine the heuristic functions into one function, then use this one function to find the critical point - either the global minimum or the global maximum, depending on how we combine them. The issue is that for \(SquaredSumOfDots\) we want the global minimum and for \(SumOfSquaredDots\) we want the global maximum. We can account for this by negating the former so that we instead want the global maximum for it. The combined function then becomes:
\(Combined = SumOfSquaredDots - SquaredSumOfDots\)
Applying the terms from each function, we get:
\(Combined = (S_x^2*D) + (S_y^2*E) + (S_z^2*F) + (S_x * S_y * G)+ (S_x * S_z * H) + (S_y * S_z * I) - (S_x*A + S_y*B + S_z*C)^2\)
Expanding the square on the right side and combining some terms will give us:
\(Combined = (S_x^2*(D-A^2)) + (S_y^2*(E - B^2)) + (S_z^2*(F - C^2)) + (S_x * S_y * (G - (2 * A * B)))+ (S_x * S_z * (H - (2 * A * C))) + (S_y * S_z * (I - (2 * B * C)))\)
Again we can combine/pre-calculate some constants to simplify:
\(J = D-A^2\)
\(K = E-B^2\)
\(L = F-C^2\)
\(M = G - (2 * A * B)\)
\(N = H - (2 * A * C)\)
\(O = I - (2 * B * C)\)
double J = D - (A * A);
double K = E - (B * B);
double L = F - (C * C);
double M = G - (2 * A * B);
double N = H - (2 * A * C);
double O = I - (2 * B * C);
And then apply these to the function:
\(Combined = (S_x^2*J) + (S_y^2*K) + (S_z^2*L) + (S_x * S_y * M)+ (S_x * S_z * N) + (S_y * S_z * O)\)
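As a concrete sketch, here is the pre-computation of the simplified constants and the evaluation of the combined heuristic for a candidate split normal \(S\). This is in Python for readability, and the values for \(A\) through \(I\) are made up; in practice they would come from summing over the scene's primitives as described earlier.

```python
# Hypothetical values for the pre-computed constants A..I (in practice
# these are accumulated from the primitive data, as described above).
A, B, C = 0.2, -0.5, 0.3
D, E, F = 1.1, 0.9, 1.4
G, H, I = 0.1, -0.2, 0.4

# Combine/pre-calculate the simplified constants J..O.
J = D - A * A
K = E - B * B
L = F - C * C
M = G - 2 * A * B
N = H - 2 * A * C
O = I - 2 * B * C

def combined(sx, sy, sz):
    """Evaluate the combined heuristic for a unit split normal S."""
    return (sx * sx * J + sy * sy * K + sz * sz * L
            + sx * sy * M + sx * sz * N + sy * sz * O)

# Example: evaluating along the coordinate axes reduces to J, K, L.
print(combined(1.0, 0.0, 0.0))
```

Note that the function is a quadratic form in \(S\), which is why all the cross terms \(S_xS_y\), \(S_xS_z\), \(S_yS_z\) appear.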
Because we are lazy, we then plug this into a tool that does the calculus for us. And this is where I am currently blocked on this issue, as I have found no program that can perform this computation. I've actually purchased Wolfram Mathematica and attempted it with the following command:
Maximize[{((x^2)*j) + ((y^2)*k) + ((z^2)*l) + (x*y*m) + (x*z*n) + (y*z*o), ((x^2) + (y^2) + (z^2)) == 1}, {x, y, z}]
After 5 or 6 days this had not finished and I had to restart the computer it was running on. I assumed that if it took that long, it would not complete in a reasonable amount of time. I will update this blog entry if I make any progress on this problem as well as the 3rd heuristic function.
Further Optimizations
While I have not gotten this far yet, it may ultimately be necessary to emphasize (or de-emphasize) one heuristic over the others in order to further optimize the split determination. This could be done by simply multiplying each heuristic function by a scalar value to either increase or decrease its emphasis on the final result. The reason I haven't researched this yet is that if I cannot find the global maximum without these extra variables, I certainly cannot do it with them.
|
Dear Uncle Colin, I've been given $u = (2\sqrt{3} - 2i)^6$ and been told to express it in polar form. I've got as far as $u=54 - 2i^6$, but don't know where to take it from there! - Not A Problem I'm Expecting to Resolve Hello, NAPIER, and thanks for your… Read More →
Someone recently asked me where I get enough ideas for blog posts that I can keep up such a 'prolific' schedule. (Two posts a week? Prolific? If you say so.) The answer is straightforward: Twitter Reddit One reliable source of interesting stuff is @WWMGT - What Would Martin Gardner Tweet? … Read More →
Dear Uncle Colin, I'm told that $z=i$ is a solution to the complex quadratic $z^2 + wz + (1+i)=0$, and need to find $w$. I've tried the quadratic formula and completing the square, but neither of those seem to work! How do I solve it? - Don't Even Start Contemplating… Read More →
It turns out I was wrong: there is something worse than spurious pseudocontext. It's pseudocontext so creepy it made me throw up a little bit: This is from 1779: a time when puzzles were written in poetry, solutions were assumed to be integers and answers could be a bit creepy... Read More →
Dear Uncle Colin, I recently had to decompose $\frac{3+4p}{9p^2 - 16}$ into partial fractions, and ended up with $\frac{\frac{25}{8}}{p-\frac{4}{3}} + \frac{\frac{7}{8}}{p-\frac{4}{3}}$. Apparently, that's wrong, but I don't see why! -- Drat! Everything Came Out Messy. Perhaps Other Solution Essential. Hi, there, DECOMPOSE, and thanks for your message - and your… Read More →
In this month's episode of Wrong, But Useful, @reflectivemaths and I are joined by consultant and lapsed mathematician @freezingsheep. We discuss: Mel's career trajectory into 'maths-enabled type things that are not actually maths', although she gets to wave her hands a lot. What you can do with a maths degree… Read More →
There is a danger, when your book comes plastered in praise from people like Art Benjamin and Ron Graham, that reviewers will hold it to a higher standard than a book that doesn't. That would be unfair, and I'll try to avoid that. What it does well This is a… Read More →
Dear Uncle Colin, In an answer sheet, they've made a leap from $\arctan\left(\frac{\cos(x)+\sin(x)}{\cos(x)-\sin(x)}\right)$ to $x + \frac{\pi}{4}$ and I don't understand where it's come from. Can you help? -- Awful Ratio Converted To A Number Hello, ARCTAN, and thank you for your message! There's a principle I want to introduce… Read More →
Last week, I wrote about the volume and outer surface area of a spherical cap using different methods, both of which gave the volume as $V = \frac{\pi}{3}R^3 (1-\cos(\alpha))^2(2-\cos(\alpha))$ and the surface area as $A_o = 2\pi R^2 (1-\cos(\alpha))$. All very nice; however, one of my most beloved heuristics fails… Read More →
Dear Uncle Colin, One of my students recently attempted the following question: "At time $t=0$ particle is projected upwards with a speed of 10.5m/s from a point 10m above the ground. It hits the ground with a speed of 17.5m/s at time $T$. Find $T$." They used the equation $s$… Read More →
|
Due date: Friday 9/17 at 11:59pm
In this assignment we will use the following datasets:
In this assignment you will work with several variants of the perceptron algorithm:
In each case make sure that your implementation of the classifier includes a bias term (in slide set 2 and page 7 in the book you will find guidance on how to add a bias term to an algorithm that is expressed without one).
Before we get to the adatron, we will derive an alternative form of the perceptron algorithm — the dual perceptron algorithm. All we need to look at is the weight update rule:
$$\mathbf{w} \rightarrow \mathbf{w} + \eta y_i \mathbf{x}_i.$$
This is performed whenever example $i$ is misclassified by the current weight vector. The thing to notice is that the weight vector is always a weighted combination of the training examples, since it is that way to begin with, and each update maintains that property. So in fact, rather than representing $\mathbf{w}$ explicitly, all we need to do is keep track of how much each training example contributes to the value of the weight vector, i.e. we will express it as:
$$\mathbf{w} = \sum_{i=1}^N \alpha_i y_i \mathbf{x}_i,$$
where $\alpha_i$ are positive numbers that describe the magnitude of the contribution $\mathbf{x}_i$ is making to the weight vector, and $N$ is the number of training examples.
Therefore to initialize $\mathbf{w}$ to 0, we simply initialize $\alpha_i = 0$ for $i = 1,\ldots,N$. In terms of the variables $\alpha_i$, the perceptron update rule becomes:
$$\alpha_i \rightarrow \alpha_i + \eta,$$
and you can always retrieve the weight vector using its expansion in terms of the $\alpha_i$.
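A minimal sketch of the dual perceptron in plain Python may help make this concrete. The toy data and names here are illustrative, not part of the assignment; a bias term is assumed to have been added as a constant feature. Adding $\eta$ to a misclassified example's coefficient reproduces the primal update $\mathbf{w} \rightarrow \mathbf{w} + \eta y_i \mathbf{x}_i$.

```python
def dual_perceptron(X, y, eta=1.0, epochs=100):
    """Dual perceptron: track one coefficient alpha_i per example
    instead of the weight vector w. X is a list of feature lists
    (bias feature included), y a list of +1/-1 labels."""
    n = len(X)
    alpha = [0.0] * n
    for _ in range(epochs):
        mistakes = 0
        for i in range(n):
            # w . x_i  with  w = sum_j alpha_j y_j x_j
            score = sum(alpha[j] * y[j] *
                        sum(a * b for a, b in zip(X[j], X[i]))
                        for j in range(n))
            if y[i] * score <= 0:   # example i is misclassified
                alpha[i] += eta     # dual form of w += eta * y_i * x_i
                mistakes += 1
        if mistakes == 0:
            break
    return alpha

def recover_w(X, y, alpha):
    """Retrieve w from its expansion in terms of the alpha_i."""
    d = len(X[0])
    return [sum(alpha[j] * y[j] * X[j][k] for j in range(len(X)))
            for k in range(d)]
```

Running `dual_perceptron` on a separable toy set and then `recover_w` gives a weight vector that classifies the training data correctly.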
Now we're ready for the adatron - the only difference is in the initialization and update equation.
Initialization:
$\alpha_i = 1$ for $i = 1,\ldots,N$
Like in the perceptron we run the algorithm until convergence, or until a fixed number of epochs has passed (an epoch is a loop over all the training data), and an epoch of training consists of the following procedure:
for each training example $i=1,\ldots,N$ perform the following steps:
$\gamma = y_i \, \mathbf{w}^{T} \mathbf{x}_i$
$\delta\alpha = \eta \, (1 - \gamma)$
if $\alpha_i + \delta\alpha < 0$ : $\alpha_i = 0$
else : $\alpha_i = \alpha_i + \delta\alpha$
The variable $\eta$ plays the role of the learning rate employed in the perceptron algorithm, and $\delta \alpha$ is the proposed magnitude of change in $\alpha_i$. We note that the adatron tries to maintain a
sparse representation in terms of the training examples by keeping many $\alpha_i$ equal to zero. The adatron converges to a special case of the SVM algorithm that we will learn later in the semester; this algorithm tries to maximize the margin with which each example is classified, which is captured by the variable $\gamma$ in the algorithm (notice that the magnitude of change proposed for each $\alpha_i$ becomes smaller as the margin increases towards 1). Note: if you observe overflow issues in running the adatron, add an upper bound on the value of $\alpha_i$.
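A plain-Python sketch of one training epoch, following the procedure above, is shown below. The cap `alpha_max` implements the suggested overflow guard; its value here is arbitrary, and the toy data in the usage is illustrative only.

```python
def adatron_epoch(X, y, alpha, eta=0.1, alpha_max=100.0):
    """One epoch of the (linear) adatron. X: list of feature lists,
    y: list of +1/-1 labels, alpha: per-example coefficients,
    modified in place. Returns the largest change made."""
    n = len(X)
    biggest = 0.0
    for i in range(n):
        # w . x_i  with  w = sum_j alpha_j y_j x_j
        score = sum(alpha[j] * y[j] *
                    sum(a * b for a, b in zip(X[j], X[i]))
                    for j in range(n))
        gamma = y[i] * score
        delta = eta * (1.0 - gamma)
        if alpha[i] + delta < 0:
            new = 0.0
        else:
            new = min(alpha[i] + delta, alpha_max)  # overflow guard
        biggest = max(biggest, abs(new - alpha[i]))
        alpha[i] = new
    return biggest
```

Repeating `adatron_epoch` until the returned change is tiny implements "run until convergence"; on a separable toy set the margins $\gamma$ drift towards 1 as described.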
Here's what you need to do:
Whenever we train a classifier it is useful to know if we have collected a sufficient amount of data for accurate classification. A good way of determining that is to construct a
learning curve, which is a plot of classifier performance (i.e. its error) as a function of the number of training examples. Plot a learning curve for the perceptron algorithm (with bias) using the Gisette dataset. The x-axis for the plot (number of training examples) should be on a logarithmic scale - something like 10,20,40,80,200,400,800. Use numbers that are appropriate for the dataset at hand, choosing values that illustrate the variation that you observe. What can you conclude from the learning curve you have constructed? Make sure that you use a fixed test set to evaluate performance while varying the size of the training set.
In this section we will explore the effect of normalizing the data, focusing on normalization of features. The simplest form of normalization is to scale each feature to be in the range [-1, 1]. We'll call this
scaling.
Here's what you need to do:
Submit your report via Canvas. Python code can be displayed in your report if it is short, and helps understand what you have done. The sample LaTex document provided in assignment 1 shows how to display Python code. Submit the Python code that was used to generate the results as a file called
assignment2.py (you can split the code into several .py files; Canvas allows you to submit multiple files). Typing
$ python assignment2.py
should generate all the tables/plots used in your report.
A few general guidelines for this and future assignments in the course:
We will take off points if these guidelines are not followed.
Grading sheet for assignment 2
Part 1: 60 points.
(25 points): Correct implementation of the classifiers
(10 points): Good protocol for evaluating classifier accuracy; results are provided in a clear and concise way
(10 points): Discussion of the results
Part 2: 20 points.
(15 points): Learning curves are correctly generated and displayed in a clear and readable way
( 5 points): Discussion of the results
Part 3: 20 points.
( 5 points): How to perform data scaling
(10 points): Comparison of normalized/raw data results; discussion of results
( 5 points): Range of features after standardization
|
A (Very Short) Detour for the Traveling Salesman
In this article, we'll explore why the Traveling Salesman Problem is an interesting problem and describe a recent result concerning it.
David Austin
Introduction
My teenaged son's job requires him to deliver papers to 148 houses. When I deliver papers with him, we park my car, walk to each of the houses, and then return to the car. Naturally, we would like to choose the route that requires us to walk the shortest possible distance.
From one point of view, this is a pretty simple task: there are a finite number of routes, so we could list them all and search through them to find the shortest. When we start to list all the routes, however, we find a huge number of them: 148!, to be precise — a number with more than 250 digits.
That looks like a big number. To put it in context, today's computers are able to perform about a billion operations every second, and the number of seconds in the life of the universe is about $10^{18}$.
Consider the same problem in a different context. Shown below are ten of my favorite places in Michigan:
If I would like to visit all of them and find the shortest possible route, I could examine the 10! = 3,628,800 possibilities and find this one:
The point is that even with a small number of sites to visit, the number of possible routes is very large and considerable effort is required to study them all.
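For a handful of sites, the exhaustive search just described can be sketched in a few lines of Python (an illustration only; it is hopeless beyond a dozen or so sites):

```python
from itertools import permutations
from math import dist, inf  # math.dist requires Python 3.8+

def brute_force_tour(sites):
    """Try every ordering of the sites and keep the shortest closed
    tour -- O(n!) work. Sites are (x, y) pairs."""
    best, best_len = None, inf
    for perm in permutations(sites[1:]):           # fix the first site
        tour = (sites[0],) + perm + (sites[0],)    # return to the start
        length = sum(dist(a, b) for a, b in zip(tour, tour[1:]))
        if length < best_len:
            best, best_len = tour, length
    return best, best_len
```

Fixing the first site removes the freedom to rotate a tour, but the number of orderings examined still grows factorially.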
The Traveling Salesman Problem
These problems are instances of what is known as the Traveling Salesman Problem: given a collection of sites and the cost of travel between each pair of them, find the shortest route that visits every site.
I have described one way to find the optimal route by considering all of the possibilities, yet we intuitively feel that there must be a better way. Certainly, my son and I have found what we think is the best possible route through other means.
In fact, mathematicians have not been able to find a method for obtaining the optimal route in a general Traveling Salesman Problem that can be guaranteed to be significantly more efficient than this brute force technique in every instance. Simply said, this is a hard problem to solve in general. In this article, we'll explore why this is an interesting problem and describe a recent result concerning it.
After reading the two examples described above, you may start to see instances of the Traveling Salesman Problem all around you; for instance, how will you organize your day to go to work, the grocery store, take your kids to and from school, and stop by the coffee shop in the most efficient way. You can imagine that postal delivery services, like UPS and FedEx, face this problem when scheduling daily pickup and delivery of packages. Most likely, political campaigns try to minimize costs and maximize their candidate's exposure as they schedule campaign appearances around Iowa and New Hampshire.
William Cook's wonderful new book, In Pursuit of the Traveling Salesman, describes many other applications of the problem.
As another example, when some digital circuit boards are produced, several thousand holes are drilled so that electronic components, such as resistors and integrated circuits, may be inserted. The machine that drills these holes must visit each of these locations on the circuit board and do it as efficiently as possible. Once again, the Traveling Salesman Problem.
Clearly, this problem is ubiquitous. We would like to have a good way to solve it.
How good is good?
As mentioned above, one way to solve a Traveling Salesman Problem (TSP) is to search through each of the possible routes for the shortest. If we have $n$ sites to visit, there are $n!$ possible routes, and this is an impossibly large number for even modest values of $n$, such as my son's paper route. When we consider the thousands of holes in an electronic circuit board, it is an understatement to say the problem is astronomically more difficult.
Mathematicians have developed a means of assessing the difficulty of a problem by measuring the computational complexity of an algorithm that solves it: roughly, how the effort required grows as the size of the input grows.
Suppose, for instance, that you would like to look up a number in the phone book. One way to do this is to start with "AAA Aardvarks" and proceed through the book, one entry at a time, until you find the one you're looking for. The effort here is proportional to $n$, the number of entries in the phone book, so we say that the complexity of this algorithm is $O(n)$. If the size of the phone book doubles, we have roughly twice as much work to do.
Of course, there's a better algorithm for finding phone numbers, a binary search algorithm in which you open the phone book in the middle, determine if the entry you're looking for comes before or after, and repeat. The effort required here is $O(\log(n))$, roughly proportional to the base 2 logarithm of $n$. In this case, if the size of the phone book doubles, you only need one more step. Clearly, this is a better algorithm.
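The binary search just described might look like this in Python (a sketch; `entries` is assumed to be a list of (name, number) pairs sorted by name):

```python
def lookup(entries, name):
    """Binary search over a sorted list of (name, number) pairs:
    O(log n) comparisons, halving the search range each step."""
    lo, hi = 0, len(entries)
    while lo < hi:
        mid = (lo + hi) // 2
        if entries[mid][0] < name:
            lo = mid + 1        # target is in the later half
        else:
            hi = mid            # target is here or earlier
    if lo < len(entries) and entries[lo][0] == name:
        return entries[lo][1]
    return None                 # name is not in the book
```

Doubling the size of `entries` adds only one more iteration of the loop, which is the $O(\log(n))$ behavior described above.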
Notice that the measure of complexity is associated to an algorithm that solves a problem, not the problem itself.
Now suppose that you have $n$ pairs of pants and $n$ shirts and that you consider each possible combination when you get dressed in the morning. Since there are $n^2$ combinations, the effort required is $O(n^2)$. If $n$ doubles, we need to do four times as much work.
We may now define an important class of problems, known as class ${\bf P}$: problems that may be solved by an algorithm whose running time is bounded by a polynomial in the size of the input.
We generally think of class ${\bf P}$ as consisting of the problems that may be solved efficiently.
At this time, we do not know if the TSP is in class ${\bf P}$.
A second class of problems is known as ${\bf NP}$: problems for which a proposed solution may be verified in polynomial time.
Generally speaking, it is easier to check a potential solution than it is to find solutions. For instance, finding the prime factors of 231,832,825,921 is challenging; checking that 233,021$\cdot$994,901 = 231,832,825,921 is relatively easy. Indeed, it follows that ${\bf P}\subset{\bf NP}$; that is, if we can generate solutions in polynomial time, we can verify that we have a solution in polynomial time as well.
Surprisingly enough, we do not know whether the TSP is in ${\bf NP}$: given a proposed route, there is no known polynomial-time way to verify that no shorter route exists.
Given this, it is tempting to think that class ${\bf NP}$ is strictly larger than class ${\bf P}$.
One final class of problems is the class of ${\bf NP}$-complete problems: problems in ${\bf NP}$ to which every other problem in ${\bf NP}$ may be reduced in polynomial time, so that a polynomial-time algorithm for any one of them would yield polynomial-time algorithms for all of ${\bf NP}$.
In a poll conducted in 2002, Bill Gasarch asked a group of experts whether they believed that ${\bf P}={\bf NP}$ or not. Sixty-one believed that it was not true, and only nine thought it was. However, many believed that it would be a long time before we knew with certainty and that fundamentally new ideas were required to settle the question. You may read the results of this poll here.
It's easy to believe that ${\bf P}\neq{\bf NP}$; our experience is that it is typically easier to test potential solutions than it is to find solutions. However, the study of algorithms is relatively young, and one may imagine that, as the field matures, we will discover some fundamental new techniques that will show that these tasks require the same amount of effort. Perhaps we will be surprised with the kind of things we can easily compute.
What we do know
So the TSP feels like a hard problem, but we can't really be sure. Worse yet, we might not know for a long time. In the meantime, what have we learned about this problem?
Besides the brute-force $O(n!)$ algorithm, the best algorithm we currently have is an $O(n^2\cdot2^n)$ algorithm found by Held and Karp in 1962. Notice that this complexity is not polynomial given the presence of the exponential. This result has not been improved upon in 50 years.
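The Held–Karp recurrence can be sketched as a bitmask dynamic program (a Python illustration of the idea, not their original presentation):

```python
from math import dist  # math.dist requires Python 3.8+

def held_karp(sites):
    """Exact TSP tour length in O(n^2 * 2^n) time.
    best[(S, j)] = length of the shortest path that starts at site 0,
    visits exactly the sites in bitmask S, and ends at site j."""
    n = len(sites)
    d = [[dist(a, b) for b in sites] for a in sites]
    best = {(1, 0): 0.0}
    for S in range(1, 1 << n):
        if not S & 1:                      # every path starts at site 0
            continue
        for j in range(1, n):
            if not S & (1 << j):
                continue
            prev = S ^ (1 << j)            # S with site j removed
            cands = [best[(prev, k)] + d[k][j]
                     for k in range(n)
                     if prev & (1 << k) and (prev, k) in best]
            if cands:
                best[(S, j)] = min(cands)
    full = (1 << n) - 1
    return min(best[(full, j)] + d[j][0] for j in range(1, n))
```

The $2^n$ factor comes from the subsets $S$, and the $n^2$ factor from the pair of loops over $j$ and $k$ — exponential, but vastly better than $n!$.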
However, new ideas have continually been found to attack the TSP that enable us to find optimal routes through ever larger sets of sites. In 1954, Dantzig, Fulkerson, and Johnson used linear programming to find the shortest route through major cities in (then) all 48 states plus the District of Columbia. Since that time, optimal routes have been found through 3038 cities (in 1992), 13,509 (in 1998), and 85,900 in 2006. The computer code used in some of these calculations is called Concorde and is freely available.
A striking example is the nearly optimal TSP route through 100,000 sites chosen so as to reproduce Leonardo da Vinci's Mona Lisa.
Remember that the general TSP is defined by some measurement of the cost of a trip between different sites. Perhaps the general TSP is hard to solve but perhaps better algorithms can be found for special cases of the TSP, such as the Euclidean TSP, which is also known to be ${\bf NP}$-hard.
Polynomial Time Approximation Schemes
Perhaps if a job is too hard to complete, we would be satisfied with almost finishing it. That is, if we cannot find the absolute shortest TSP route, perhaps we can find a route that we can guarantee is no more than 10% longer than the shortest route or perhaps no more than 5% longer.
In 1976, Christofides found a polynomial-time algorithm that produced a route guaranteed to be no more than 50% longer than the optimal route. But the question remained: if we fix a tolerance, is there a polynomial-time algorithm that will find a route whose length is within the given tolerance of the optimal route.
In the mid 1990's, Arora and Mitchell independently found algorithms that produce approximate solutions to the Euclidean TSP in polynomial time. Arora created an algorithm that, given a value of $c>1$, finds a route whose length is no more than $1+1/c$ times the shortest route. The complexity of a randomized version of his algorithm, which we will describe now, is $O(n(\log n)^{O(c)})$.
Arora's algorithm applies dynamic programming to a situation created by a data structure known as a quadtree, built by recursively subdividing a square that bounds all of the sites.
For the time being, let's focus on the square and subdivide it to obtain four squares.
Continuing this process creates a quadtree; every node in the tree is a square and internal nodes have four children. The root of the tree is the initial square.
Now we can get started describing Arora's algorithm to find approximate solutions to the TSP. We fix a number $c$ and look for a route whose length is no more than $1+1/c$ times the optimal route. For instance, if $c=10$, then we will find a route whose length is no more than 10% longer than the optimal route.
We will construct a quadtree so that each leaf contains a single location in our TSP. If two locations are very close together, our tree may have to descend relatively deep in the tree to find squares that separate the two points. To avoid this, we will perturb the points slightly: if the side of the original bounding box has length $L$, we construct a grid of width $L/(8nc)$ and move each location to its nearest grid point.
Let's make two observations: suppose that $OPT$ is the length of the optimal route. Since $L$ is the side of the bounding box of the locations, we have $OPT\geq 2L$. Second, notice that each point is moved by less than $L/(8nc)$, which means that the total length of any route will change by at most $2n\cdot L/(8nc) < OPT/(4c)$. Since we are looking for a route whose length is no more than $OPT/c$ longer than the optimal route, this is an acceptable perturbation.
Though two sites may be moved to the same grid point, we may otherwise guarantee that the sites are separated by a distance of at least $L/(8nc)$. We now construct a quadtree by subdividing squares when there is more than one point within the square.
Due to our perturbation, the depth of the quadtree is now $O(\log n)$ so the number of squares in the quadtree is $T=O(n\log n)$.
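A toy version of the subdivision rule — split a square while it contains more than one site — can be sketched as follows (ignoring the perturbation step and any efficiency concerns; names are illustrative):

```python
def quadtree_depth(points, x0, y0, size, depth=0):
    """Recursively subdivide the square [x0, x0+size) x [y0, y0+size)
    while it contains more than one point; return the depth of the
    resulting quadtree. Assumes the points are distinct."""
    inside = [(x, y) for (x, y) in points
              if x0 <= x < x0 + size and y0 <= y < y0 + size]
    if len(inside) <= 1:
        return depth                      # this square is a leaf
    half = size / 2.0
    return max(quadtree_depth(inside, x0 + dx, y0 + dy, half, depth + 1)
               for dx in (0, half) for dy in (0, half))
```

The perturbation onto the grid of width $L/(8nc)$ is exactly what bounds how deep this recursion can go, giving the $O(\log n)$ depth quoted above.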
We may also rescale the plane to assume that the sites have integral coordinates. This does not affect the relative lengths of routes and will pay off later.
Next we will describe a dynamic programming algorithm to find an approximate solution to our TSP. In each square, we will restrict the points at which routes may enter that square. To this end, we choose an integer $m$, a power of 2 in the range $[c\log L, 2c\log L]$. On the side of each square, we place $m+1$ equally spaced points, called portals.
The motivation for introducing portals is so that we need only examine a relatively small number of locations at which a path may enter a square. The more portals we have, the closer our route will be to optimal.
Since $m$ is a power of 2, the portals of an internal node are also portals for each of its children.
On each square, we now consider the multipath problem: given a collection of portals $a_1,a_2,\ldots,a_{2p}$ on the boundary of the square, find the shortest collection of paths that enter and leave the square only through the given portals and together visit all of the sites inside the square.
For the leaves, this is a relatively simple task; we list all the possible paths and find the shortest one. For the problem above, there are only two possibilities:
and the shortest collection of paths, the solution to the multipath problem, is the one on the right. For each collection of portals $a_1,a_2,\ldots, a_{2p}$, we keep a record of this solution.
For internal nodes, we may have a multipath problem like this:
We assume that we have already solved all possible multipath problems for the children. To solve the multipath problem for the internal node, we consider each possible choice of portals on the inner edges of the children that could produce a multipath.
Now we look up the solutions of the multipath problems on the children to construct the multipath on the parent node.
In this way, we only need to search over all the choices for portals on the inner edges to find the solution to the multipath problem in the parent node.
We work our way up the tree until we are at the root, at which point we solve the multipath problem with no portals on the boundary. This gives a route that is a candidate for our approximate solution to the TSP.
To estimate the computation effort required to run this algorithm, we need to remember that there are $T = O(n\log n)$ squares in the quadtree. With some care, we may enumerate the choices of portals for which we must solve the multipath problem, which leads to Arora's estimate of the complexity as $O(n(\log n)^{O(c)})$.
The route we find may have crossings, but we may eliminate crossings without increasing the length of the route.
Now it turns out that the algorithm, as stated above, may not produce a route approximating the TSP solution with the desired accuracy. For instance, this may happen when the route crosses a square in the quadtree many times.
Therefore, Arora introduces a shift of the quadtree: the grid lines defining the squares are translated horizontally and vertically while the sites stay fixed.
Arora analyzes the effect that these shifts have on the number of times that the route produced by the algorithm crosses the sides of squares. This analysis shows that if a shift is chosen at random, then the probability is at least 1/2 that the route found by the algorithm has a length less than $(1+1/c)OPT$ and is therefore within our desired tolerance.
To guarantee that we obtain a route within the desired tolerance, we may run through every possible shift and simply take the shortest of these routes. Assuming that the sites have integral coordinates, we only need to consider $L^2$ shifts, which increases the running time by a factor of $L^2=O(n^2)$, so that the algorithm still completes in polynomial-time.
Summary
As implementations of Arora's algorithm are not particularly efficient, we should view this result primarily as a theoretical one.
Remember that the Euclidean TSP is ${\bf NP}$-hard, so we do not expect an exact polynomial-time algorithm; Arora's and Mitchell's results show, however, that approximating the optimal route to any fixed tolerance is possible in polynomial time.
Both Arora and Mitchell were awarded the Gödel Prize of the Association of Computing Machinery in 2010 in recognition of the significance of their results.
References
There has been a great deal written about the TSP and ${\bf P}$ versus ${\bf NP}$. Included below are references I found helpful in preparing this article.
|
I want to find a solution to a system of linear inequalities of the following form
\begin{aligned} a_1 + b &\ge a_2 \\ \vdots \\ a_4 + c &\ge a_1 \end{aligned}
where $a_i \in \mathbb N \setminus \{0\}$ and $b,c \in \mathbb Z$. All inequalities consist of exactly two variables and one free factor. All $a_i$ appear as they are, and have no coefficient other than $1$.
The goal is to find a solution (perhaps more than one, but not necessarily), or declare the system as not having one. Such a solution would be vastly preferable to any linear programming ones.
I do remember that some algorithm based on constructing a graph and then running the Floyd–Warshall algorithm could solve a similar problem, but I'm not sure.
EDIT: Problem expansion - how can I find such a solution that the number of distinct integers in the solution is maximized (the default Bellman-Ford behavior minimizes it)
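The graph-based method the question alludes to is, I believe, the standard "difference constraints" construction: rewrite each inequality $a_i + b \ge a_j$ as $a_j - a_i \le b$, add an edge $i \to j$ of weight $b$, and run Bellman–Ford from a virtual source connected to every variable with weight 0; the system is infeasible exactly when there is a negative cycle. A hedged Python sketch (0-based variable indices; the positivity requirement is met by shifting, since solutions are invariant under adding a constant to every $a_i$):

```python
def solve_difference_constraints(n, constraints):
    """constraints: list of (i, j, b) meaning  a_j - a_i <= b.
    Returns positive integers a_0..a_{n-1} satisfying all constraints,
    or None if the system is infeasible (negative cycle)."""
    dist = [0] * n            # implicit 0-weight edges from a virtual source
    changed = True
    for _ in range(n + 1):    # n+1 passes: the extra one detects cycles
        changed = False
        for i, j, b in constraints:
            if dist[i] + b < dist[j]:
                dist[j] = dist[i] + b
                changed = True
        if not changed:
            break
    if changed:
        return None           # still relaxing => negative cycle
    shift = 1 - min(dist)     # shift so every value is >= 1
    return [d + shift for d in dist]
```

Since all the $b$ are integers, the distances (and hence the solution) stay integral throughout.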
|
I'm trying to figure out for which algebraic structure
$$\underbrace{a+a+\cdots+a}_{n \text{-times}} = a * n$$
is true.
Now I know the question '
Is all multiplication repeated addition?' has been asked many times, with the answer NO, because you cannot express non-integer (such as fractional or complex) multiples as repeated addition. However, I'm pretty sure that the reverse is true: that ' Repeated addition is always multiplication'
So my first thought was that Rings would be the appropriate algebraic structure for this, seeing that they have both addition and multiplication. However, the definition of a ring does not mention this property.
So I was thinking about this property and it seems like it holds for many rings, including the following:
Integers Rationals Reals Complex numbers $m\times m$ Matrix Ring Polynomials where multiplication is scaling by a number
But... Then I ran into the Boolean ring where $\lor$ is the addition in the ring, and $\land$ is the multiplication. So...
$$???\,\,\underbrace{a\lor a\lor\cdots\lor a}_{n \text{-times}} = a \land n \,\,???$$
Now the problem is the type of the entity is totally different (true/false values). This doesn't even make sense; or does it? If this isn't true, then I'm not sure where that leaves me then, since it would imply that this property doesn't hold for rings in general. But then what does it hold for?
Any insight would be greatly appreciated. :)
|
Due: October 17th at 11:59pm
Formulate a soft-margin SVM without the bias term, i.e. one where the discriminant function is equal to $\mathbf{w}^{T} \mathbf{x}$. Derive the saddle point conditions, KKT conditions and the dual. Compare it to the standard SVM formulation that was derived in class.
In this question we will explore the leave-one-out error for a hard-margin SVM for a linearly separable dataset. First, we define a set of
key support vectors as a subset of the support vectors such that removal of any one vector from the set changes the maximum margin hyperplane.
Show that $$ E_{cv} \leq \frac{\textrm{number of key support vectors}}{N}, $$ where $N$ is the number of training examples and $E_{cv}$ denotes the leave-one-out cross-validation error.
Suppose you are given a linearly separable dataset, and you are training the soft-margin SVM, which uses slack variables with the soft-margin constant $C$ set to some positive value. Consider the following statement:
Since increasing the $\xi_i$ can only increase the cost function of the primal problem (which we are trying to minimize), at the solution to the primal problem, i.e. the hyperplane that minimizes the primal cost function, all the training examples will have $\xi_i$ equal to zero.
Is this true or false? Explain!
The data for this question comes from a database called SCOP (structural classification of proteins), which classifies proteins into classes according to their structure (download it from here). The data is a two-class classification problem of distinguishing a particular class of proteins from a selection of examples sampled from the rest of the SCOP database using features derived from their sequence (a protein is a chain of amino acids, so as computer scientists, we can consider it as a sequence over the alphabet of the 20 amino acids). I chose to represent the proteins in terms of their motif composition. A sequence motif is a pattern of amino acids that is conserved in evolution. Motifs are usually associated with regions of the protein that are important for its function, and are therefore useful in differentiating between classes of proteins. A given protein will typically contain only a handful of motifs, and so the data is very sparse. Therefore, only the non-zero elements of the data are represented. Each line in the file describes a single example. Here's an example from the file:
d1scta_,a.1.1.2 31417:1.0 32645:1.0 39208:1.0 42164:1.0 ....
The first column is the ID of the protein, the second is the class it belongs to (the values for the class variable are
a.1.1.2, which is the given class of proteins, and
rest which is the negative class representing the rest of the database); the remainder consists of elements of the form
feature_id:value, which provide an id of a feature and the value associated with it. This is an extension of the format used by LibSVM, that scikit-learn can read. See a discussion of this format and how to read it here.
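Reading this format by hand is straightforward; here is a sketch of a parser for a single line (independent of the scikit-learn loader mentioned above):

```python
def parse_sparse_line(line):
    """Parse one line of the extended-LibSVM format described above:
    'id,class fid:val fid:val ...' -> (id, class, {fid: val})."""
    head, *pairs = line.split()
    example_id, label = head.split(",", 1)
    features = {}
    for pair in pairs:
        fid, val = pair.split(":")
        features[int(fid)] = float(val)
    return example_id, label, features
```

Only the non-zero features appear on each line, which is what makes this representation compact for very sparse, high-dimensional data.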
We note that the data is very high dimensional since the number of conserved patterns in the space of all proteins is large. The data was constructed as part of the following analysis of detecting distant relationships between proteins:
In this part of the assignment we will explore the dependence of classifier accuracy on the kernel, kernel parameters, kernel normalization, and the SVM soft-margin parameter. In your implementation you can use the scikit-learn svm class.
In this question we will consider both the Gaussian and polynomial kernels: $$ K_{gauss}(\mathbf{x}, \mathbf{x'}) = \exp(-\gamma || \mathbf{x} - \mathbf{x}' ||^2) $$ and $$ K_{poly}(\mathbf{x}, \mathbf{x'}) = (\mathbf{x}^T \mathbf{x}' + 1) ^{p}. $$
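For reference, the two kernel formulas above translate directly into plain Python (scikit-learn computes these internally when you select the corresponding kernel, but spelling them out clarifies the role of each parameter):

```python
from math import exp

def k_gauss(x, xp, gamma):
    """Gaussian (RBF) kernel: exp(-gamma * ||x - x'||^2)."""
    sq = sum((a - b) ** 2 for a, b in zip(x, xp))
    return exp(-gamma * sq)

def k_poly(x, xp, p):
    """Inhomogeneous polynomial kernel: (x . x' + 1)^p."""
    return (sum(a * b for a, b in zip(x, xp)) + 1.0) ** p
```

Note that $K_{gauss}(\mathbf{x}, \mathbf{x}) = 1$ for every $\mathbf{x}$, whereas the polynomial kernel's diagonal grows with the norm of the input — one reason normalization matters for this dataset.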
Plot the accuracy of the SVM, measured using the area under the ROC curve as a function of both the soft-margin parameter of the SVM, and the free parameter of the kernel function. Accuracy should be measured in five-fold cross-validation. Show a couple of representative cross sections of this plot for a given value of the soft margin parameter, and for a given value of the kernel parameter. Comment on the results. When exploring the values of a continuous classifier/kernel parameter it is useful to use values that are distributed on an exponential grid, i.e. something like 0.01, 0.1, 1, 10, 100 (note that the degree of the polynomial kernel is not such a parameter).
Next, we will compare the accuracy of an SVM with a Gaussian kernel on the raw data with accuracy obtained when the data is normalized to be unit vectors (the values of the features of each example are divided by its norm). This is different than standardization, which operates at the level of individual features. Normalizing to unit vectors is more appropriate for this dataset as it is sparse, i.e. most of the features are zero. Perform your comparison by comparing the accuracy measured by the area under the ROC curve in five-fold cross validation, where the classifier/kernel parameters are chosen by nested cross-validation, i.e. using grid search on the training set of each fold. Use the scikit-learn grid-search class for model selection.
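Unit-vector normalization is a one-liner per row; a sketch (all-zero rows, which can occur in sparse data, are left unchanged):

```python
from math import sqrt

def normalize_rows(X):
    """Scale each example (row) to a unit vector; leave zero rows as-is.
    This is the row normalization compared against raw data above,
    not per-feature standardization."""
    out = []
    for row in X:
        norm = sqrt(sum(v * v for v in row))
        out.append([v / norm for v in row] if norm > 0 else list(row))
    return out
```

After this transformation every example lies on the unit sphere, so a linear kernel between two rows equals their cosine similarity.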
Finally, visualize the kernel matrix associated with the dataset. Explain the structure that you are seeing in the plot (it is more interesting when the data is normalized).
Submit your report via Canvas. Python code can be displayed in your report if it is short and helps understand what you have done. The sample LaTeX document provided in assignment 1 shows how to display Python code. Submit the Python code that was used to generate the results as a file called
assignment4.py (you can split the code into several .py files; Canvas allows you to submit multiple files). Typing
$ python assignment4.py
should generate all the tables/plots used in your report.
A few general guidelines for this and future assignments in the course:
We will take off points if these guidelines are not followed.
Grading sheet for assignment 4
Part 1: 40 points. (5 points): Primal SVM formulation is correct. (10 points): Lagrangian found correctly. (10 points): Derivation of saddle point equations. (15 points): Derivation of the dual.
Part 2: 15 points.
Part 3: 30 points. (15 points): Accuracy as a function of parameters and discussion of the results. (10 points): Comparison of normalized and non-normalized kernels and correct model selection. (5 points): Visualization of the kernel matrix and observations made about it.
|
This article provides answers to the following questions, among others:
What is meant by the packing density (or packing factor)? How is the packing density calculated for the body-centered cubic lattice (bcc)? How is the packing density calculated in the face-centered cubic lattice (fcc)? Why is the packing density for the hexagonal closest packed lattice identical to the fcc-lattice? Definition of the packing density
The packing density is the ratio of atomic volume \(V_A\) within a unit cell to the total volume of the unit cell \(V_{U}\):
\begin{align}
\boxed{\text{PD}=\frac{V_A}{V_{U}} } \\[5px] \end{align}
Depending on the lattice structure, there is a certain packing density. The packing factors of the most important lattice types are derived in this article.
Body-centered cubic lattice
In order to determine the packing density for the body-centered cubic crystal structure, the spatial diagonal \(e\) of the cube-shaped unit cell is considered. The three atoms lying on this diagonal just touch each other. Thus, the spatial diagonal corresponds to 4 times the atomic radius \(r\). In a cube, the spatial diagonal is larger by a factor of √3 than the edge of the cube \(a\). Thus, the atomic radius \(r\) depends on the cube edge \(a\) as follows:
\begin{align}
e=\sqrt{3} \cdot a = 4 \cdot r ~\Rightarrow ~ \underline{r= \frac{\sqrt{3}}{4} \cdot a} \end{align}
In the unit cell, there is a whole atom in the middle and eight others at the cube corners, each counting only one eighth. In total, the unit cell contains the volume \(V_A\) of two atomic spheres:
\begin{align}
\underline{V_A} =2 \cdot V_{sphere} =2 \cdot \frac{4}{3} \pi \cdot r^3 =\frac{8}{3} \pi \cdot \left( \frac{\sqrt{3}}{4} \cdot a \right)^3 = \underline{ \frac{\sqrt{3}}{8} \pi \cdot a^3} \end{align}
This atomic volume \(V_A\) can now be put into relation to the unit cell volume \(V_{U}=a^3\) in order to obtain the packing density \(\text{PD}\) of the body-centered cubic lattice:
\begin{align}
\underline{\underline{\text{PD}}}= \frac{V_A}{V_{U}} =\frac{\frac{\sqrt{3}}{8} \pi \cdot a^3}{a^3}=\frac{\sqrt{3}}{8} \pi \approx \underline{\underline{0.68}} \end{align}
Thus, the bcc-lattice has a packing factor of 68 %.
Face-centered cubic and hexagonal closest packed lattice (fcc, hcp)
The packing density of the face-centered cubic lattice (fcc) can be determined in an analogous manner as for the body-centered cubic structure. Three atomic spheres touch each other on the face diagonal of the unit cell. This diagonal \(f\) thus corresponds to 4 times the atomic radius and equals \(\sqrt{2} \cdot a\) (where \(a\) is the cube edge). Thus, the atomic radius \(r\) depends on the cube edge as follows:
\begin{align}
f=\sqrt{2} \cdot a = 4 \cdot r ~\Rightarrow ~ \underline{r= \frac{\sqrt{2}}{4} \cdot a} \end{align}
In the fcc unit cell, there are six atoms on the cube faces, each counting one half of its sphere volume (in total 3 whole atomic volumes). In addition, there are eight other atoms at the cube corners, each counting only one eighth (1 whole atomic volume). In total, the unit cell contains the volume \(V_A\) of four atomic spheres:
\begin{align}
\underline{V_A} =4 \cdot V_{sphere} =4 \cdot \frac{4}{3} \pi \cdot r^3 = \frac{16}{3} \pi \cdot \left( \frac{\sqrt{2}}{4} \cdot a \right)^3 = \underline{ \frac{\sqrt{2}}{6} \pi \cdot a^3} \end{align}
This atomic volume \(V_A\) can now be put into relation to the unit cell volume \(V_{U}=a^3\). The packing density \(\text{PD}\) of the face-centered cubic lattice is calculated as follows:
\begin{align}
\underline{\underline{\text{PD}}}= \frac{V_A}{V_{U}} =\frac{\frac{\sqrt{2}}{6} \pi \cdot a^3}{a^3}=\frac{\sqrt{2}}{6} \pi \approx \underline{\underline{0.74}} \end{align}
The fcc-lattice thus has a packing factor of 74 %. However, there is no need to differentiate between the fcc-structure and the hexagonal closest packed crystal (hcp), since both are built up from densest packed atomic planes (for further information see the post on Important lattice types). The packing density in the hcp-lattice thus also has the maximum possible value of 74 %.
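The two results can be checked numerically. A small sketch (in Python, as an illustration; the cube edge is normalized to \(a = 1\)):

```python
import math

def packing_density(lattice):
    """Packing density of a cubic unit cell with edge a = 1."""
    if lattice == "bcc":
        n_atoms = 2                 # 1 center + 8 corners * 1/8
        r = math.sqrt(3) / 4        # from the spatial diagonal: 4r = sqrt(3) a
    elif lattice == "fcc":
        n_atoms = 4                 # 6 faces * 1/2 + 8 corners * 1/8
        r = math.sqrt(2) / 4        # from the face diagonal: 4r = sqrt(2) a
    else:
        raise ValueError("unknown lattice: " + lattice)
    sphere_volume = (4.0 / 3.0) * math.pi * r ** 3
    return n_atoms * sphere_volume  # unit cell volume is a^3 = 1

print(round(packing_density("bcc"), 4))  # 0.6802
print(round(packing_density("fcc"), 4))  # 0.7405
```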
|
Given a square matrix A, how can I generate a basis for the generalized eigenspace corresponding to all eigenvalues $\lambda_i$ such that $\vert \lambda_i \vert > 1$? I.e., if $A$ is $n \times n$ and acts on $\mathbb{R}^n$, then $\mathbb{R}^n = E^u \oplus E^1 \oplus E^s$, corresponding to the action of eigenvalues of modulus greater than / less than / equal to 1.
If A is diagonalizable, this can easily be accomplished with the Eigensystem command:
{evals, evecs}=Eigensystem[A];Take[evecs, Length@Select[evals, Abs[#] > 1 &]]
This approach relies on the Eigenvalues command returning the eigenvalues (and hence eigenvectors) in decreasing order of their moduli. Also note that MMA returns the eigenvalues with repetition according to their algebraic multiplicity, and that the eigenvectors are returned as "row vectors" (technically, a list of vectors).
My problem arises when A is not diagonalizable. The natural function to look at would be JordanDecomposition, which returns the generalized eigenvectors (N.B. as columns in a matrix!) and the Jordan Canonical Form. However, the ordering is a bit...abstract. As best as I've been able to tell from taking the JordanDecomposition of random matrices, the ordering of the Jordan blocks seems to be some combination of the order in which roots are returned by the Root command for irreducible factors of the characteristic polynomial, the ordinary Greater sorting, the size of the Jordan block, and whether it corresponds to a complex eigenvalue.
The kludge that I've been able to come up with is the following:
{s, j} = JordanDecomposition[A];
bigevals = Union@Select[Eigenvalues[A], Abs[#] > 1 &];
cols = Flatten[Position[Diagonal[j], #] & /@ bigevals];
s[[All, cols]]
This seems to return the appropriate generalized eigenvectors.
The Questions: Surely there's a better way? The above kludge seems...kludgy. Perhaps something like NullSpace[MatrixPower[A - lambda IdentityMatrix[First@Dimensions@A], n]] upon detecting that an eigenvalue lambda has algebraic multiplicity n, i.e., going directly to the definition?
For my application, the matrix A will only ever be real valued. Thus all complex eigenvalues occur as complex conjugates (perhaps with multiplicity!). I would prefer to have a real basis. For example, if A = {{0, 2}, {-2, 0}} then the eigenvalues are $\lambda_{\pm} = \pm 2 i$. Then $\left\{ \begin{bmatrix} 1 \\ 0 \end{bmatrix} , \begin{bmatrix} 0 \\ 1 \end{bmatrix} \right\}$ is a basis for the unstable manifold of $A$. NullSpace[MatrixPower[A, 2] + 4 IdentityMatrix[2]] gives the correct result here. Note that this question is a tertiary concern at best; there are other ways of dealing with this in "post-production."
|
Let $(P,\pi,B,G)$ be a principal bundle with total space $P$, base $B$, projection $\pi$ and structure group $G$.
Now I am searching for a good reference (with proofs) for the following facts:
1) The fundamental vector fields on $P$ span pointwise the vertical space - or equivalently they generate the $C^\infty(P)$-module of smooth sections of the vertical bundle.
2) Let $\gamma \colon TP \to \mathrm{Lie}(G)$ be a connection one-form. The horizontal lifts of vector fields span pointwise the horizontal space - or equivalently they generate the $C^\infty(P)$-module of smooth sections of the horizontal bundle.
|
In the previous STT5100 course, last week, we’ve seen how to use Monte Carlo simulations. The idea is that we do observe in statistics a sample \{y_1,\cdots,y_n\}, and more generally, in econometrics \{(y_1,\mathbf{x}_1),\cdots,(y_n,\mathbf{x}_n)\}. But let’s get back to statistics (without covariates) to illustrate. We assume that observations y_i are realizations of an underlying random variable Y_i. We assume that the Y_i are i.i.d. random variables, with (unknown) distribution F_{\theta}. Consider here some estimator \widehat{\theta} – which is just a function of our sample \widehat{\theta}=h(y_1,\cdots,y_n). So \widehat{\theta} is a real-valued number. Then, in mathematical statistics, in order to derive properties of the estimator \widehat{\theta}, like a confidence interval, we must define \widehat{\theta}=h(Y_1,\cdots,Y_n), so that now, \widehat{\theta} is a real-valued random variable. What is puzzling for students is that we use the same notation for both, and I have to agree, that’s not very clever.
There are two strategies here. In classical statistics, we use probability theorems to derive properties of \widehat{\theta} (the random variable): at least the first two moments, but if possible the distribution. An alternative is to go for computational statistics. We have only one sample, \{y_1,\cdots,y_n\}, and that’s a pity. But maybe we can create another one \{y_1^{(1)},\cdots,y_n^{(1)}\}, as realizations of F_{\theta}, and another one \{y_1^{(2)},\cdots,y_n^{(2)}\}, another one \{y_1^{(3)},\cdots,y_n^{(3)}\}, etc. From those counterfactuals, we can now get a collection of estimators, \widehat{\theta}^{(1)},\widehat{\theta}^{(2)}, \widehat{\theta}^{(3)}, etc. Instead of using mathematical tricks to calculate \mathbb{E}(\widehat{\theta}), we can simply compute \frac{1}{k}\sum_{s=1}^k\widehat{\theta}^{(s)}. That’s what we saw last Friday.
I did also mention briefly that looking at densities is lovely, but not very useful to assess goodness of fit, to test for normality, for instance. In this post, I just wanted to illustrate this point. And actually, creating counterfactuals can be a good way to see it. Consider here the height of male students,
Davis=read.table(
"http://socserv.socsci.mcmaster.ca/jfox/Books/Applied-Regression-2E/datasets/Davis.txt")
Davis[12,c(2,3)]=Davis[12,c(3,2)]
X=Davis$height[Davis$sex=="M"]
We can visualize its distribution (density and cumulative distribution)
u=seq(155,205,by=.5)
par(mfrow=c(1,2))
hist(X,col=rgb(0,0,1,.3))
lines(density(X),col="blue",lwd=2)
lines(u,dnorm(u,178,6.5),col="black")
Xs=sort(X)
n=length(X)
p=(1:n)/(n+1)
plot(Xs,p,type="s",col="blue")
lines(u,pnorm(u,178,6.5),col="black")
Since it looks like a normal distribution, we can add the density of a Gaussian distribution on the left, and the cdf on the right. Why not test it properly? To be a little bit more specific, I do not want to test if it’s a Gaussian distribution, but if it’s a \mathcal{N}(178,6.5^2). In order to see if this distribution is relevant, one can use Monte Carlo simulations to create counterfactuals
hist(X,col=rgb(0,0,1,.3))
lines(density(X),col="blue",lwd=2)
Y=rnorm(n,178,6.5)
hist(Y,col=rgb(1,0,0,.3))
lines(density(Y),col="red",lwd=2)
Ys=sort(Y)
plot(Xs,p,type="s",col="white",lwd=2,axes=FALSE,xlab="",ylab="",xlim=c(155,205))
polygon(c(Xs,rev(Ys)),c(p,rev(p)),col="yellow",border=NA)
lines(Xs,p,type="s",col="blue",lwd=2)
lines(Ys,p,type="s",col="red",lwd=2)
We can see on the left that it is hard to assess normality from the density (histogram and also kernel based density estimator). One can hardly think of a valid distance between two densities. But if we look at the graph on the right, we can compare the empirical cumulative distribution function \widehat{F} obtained from \{y_1,\cdots,y_n\} (the blue curve), and some counterfactual, \widehat{F}^{(s)} obtained from \{y_1^{(s)},\cdots,y_n^{(s)}\} generated from F_{\theta_0} – where \theta_0 is the value we want to test. As suggested above, we can compute the yellow area, as in the Cramér-von Mises test, or the Kolmogorov-Smirnov distance.
d=rep(NA,1e5)
for(s in 1:1e5){
  d[s]=ks.test(rnorm(n,178,6.5),"pnorm",178,6.5)$statistic
}
ds=density(d)
plot(ds,xlab="",ylab="")
dks=ks.test(X,"pnorm",178,6.5)$statistic
id=which(ds$x>dks)
polygon(c(ds$x[id],rev(ds$x[id])),c(ds$y[id],rep(0,length(id))),col=rgb(1,0,0,.4),border=NA)
abline(v=dks,col="red")
If we draw 100,000 counterfactual samples (1e5 in the code above), we can visualize the distribution (here the density) of the distance used as a test statistic, \widehat{d}^{(1)}, \widehat{d}^{(2)}, etc, and compare it with the one observed on our sample, \widehat{d}. The proportion of samples where the test statistic exceeded the one observed
mean(d>dks)
[1] 0.78248
is the computational version of the p-value
ks.test(X,"pnorm",178,6.5)

	One-sample Kolmogorov-Smirnov test

data:  X
D = 0.068182, p-value = 0.8079
alternative hypothesis: two-sided
I thought about all that a couple of days ago, since I got invited for a panel discussion on “coding”, and why “coding” helped me as a professor. And this is precisely why I like coding: in statistics, you either manipulate abstract objects, like random variables, or you actually use some lines of code to create counterfactuals, and generate fake samples, to quantify uncertainty. The latter is interesting, because it helps to visualize complex quantities. I do not claim that maths is useless, but coding is really nice, as a starting point, to understand what we talk about (which can be very useful when there is a lot of confusion on notations).
|
While reading through my textbook it says "the most important example of an inner-product space is $F^n$", where $F$ denotes $\mathbb{C}$ or $\mathbb{R}$ .
Our definition of an inner product on a vector space $V$ is as follows:
1) Positive definite: $\langle v,v \rangle \ge 0$ with equality if and only if $v=0$ 2) Linearity in the first argument: $\langle a_1v_1+a_2v_2,w \rangle = a_1 \langle v_1,w \rangle + a_2\langle v_2,w \rangle$ 3) Conjugate symmetric: $\langle u,v\rangle = \overline{\langle v,u\rangle}$
Let $$\displaystyle w=(w_1\ldots,w_n) , z=(z_1,\ldots,z_n)$$
Then: $$\displaystyle \langle w,z\rangle =w_1\overline{z_1}+\cdots+w_n\overline{z_n}$$
I'm trying to verify that this is indeed true. So first I want to check that $\langle w,z\rangle$ satisfies condition (1).
Say that $w,z\in \mathbb{C}^n$.
Just looking at say $w_1=a+bi$ and $z_1=c+di$, how can we guarantee that $w_1\overline{z_1}\geq 0$? If we can observe this, it would need to hold true for the other coordinates as well. So my question is, how do we know that $w_1\overline{z_1}\geq 0$?
|
A 2-clause is a clause (a disjunction) with at most two literals: $(p \vee q,\ \neg p \vee q,\ \neg p,\ldots)$. I have to show that the following problem is in P:
2-SAT: Input: A conjunction $\Phi$ of 2-clauses. Question: Is it satisfiable?
For each set of 2-clauses $C$ we associate a graph $G_C$ defined as follows:
$V_{G_C}$ contains two vertices $x$ and $\neg x$ for each variable in $C$. $E_{G_C}$ contains an edge $\alpha \rightarrow \beta$ if and only if $\neg \alpha \vee \beta$ or $\beta \vee \neg\alpha$ is in $C$.
This graph should help me show this problem belongs to P. Yet: Why is $C$ satisfiable if and only if $G_C$ doesn't have any loop? Why is the size of $G_C$ polynomial? Why can $G_C$ be computed from $C$ in polynomial time?
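To make the construction concrete, here is a small Python sketch (my own encoding, not from the exercise) that builds $G_C$ and applies the usual strongly-connected-component form of the criterion: $C$ is satisfiable iff no variable lies in the same SCC as its negation, i.e. no cycle passes through both $x$ and $\neg x$. The graph has $2 \cdot |\text{vars}|$ vertices and at most $2|C|$ edges, which is why both its size and its construction are polynomial:

```python
def solve_2sat(n_vars, clauses):
    """clauses: list of pairs of nonzero ints; literal k means x_|k|, negated if k < 0.
    Returns True iff the conjunction of 2-clauses is satisfiable."""
    idx = lambda lit: 2 * (abs(lit) - 1) + (lit < 0)   # node for a literal
    neg = lambda node: node ^ 1                        # node of the negation
    N = 2 * n_vars
    adj = [[] for _ in range(N)]
    radj = [[] for _ in range(N)]
    for a, b in clauses:
        # clause (a or b) yields implications (not a -> b) and (not b -> a)
        for u, v in ((neg(idx(a)), idx(b)), (neg(idx(b)), idx(a))):
            adj[u].append(v)
            radj[v].append(u)
    # Kosaraju's algorithm: DFS for finish order, then DFS on the reverse graph
    order, seen = [], [False] * N
    def dfs1(u):
        seen[u] = True
        for v in adj[u]:
            if not seen[v]:
                dfs1(v)
        order.append(u)
    for u in range(N):
        if not seen[u]:
            dfs1(u)
    comp = [-1] * N
    def dfs2(u, c):
        comp[u] = c
        for v in radj[u]:
            if comp[v] == -1:
                dfs2(v, c)
    c = 0
    for u in reversed(order):
        if comp[u] == -1:
            dfs2(u, c)
            c += 1
    # satisfiable iff no variable shares an SCC with its negation
    return all(comp[2 * v] != comp[2 * v + 1] for v in range(n_vars))
```

For example, $(x_1 \vee x_2) \wedge (\neg x_1 \vee x_2)$ is satisfiable, while $(x_1) \wedge (\neg x_1)$, encoded as the unit 2-clauses $(x_1 \vee x_1)$ and $(\neg x_1 \vee \neg x_1)$, is not.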
|
This question concerns a set-theoretic aspect that I found interesting in the recent question asked by user Nick R., namely, Is $\mathbb{R}^3\setminus\mathbb{Q}^3$ simply connected? He had asked whether $\mathbb{R}^3$ remains simply connected after deleting a countable set of points, such as the collection of rational points $\mathbb{Q}^3$.
That question was answered affirmatively by Martin M. W. My question is, can we do better? Specifically, I want to understand, in a general context where the continuum may be very large, exactly how many points we may freely delete from $\mathbb{R}^3$, whilst remaining simply connected. What is the fewest number of points that we must delete from $\mathbb{R}^3$ in order to make it no longer simply connected?
Let us define the simply connected deletion number, $\delta$, to be the smallest cardinality of a subset $A\subset\mathbb{R}^3$, such that the complement $\mathbb{R}^3\setminus A$ is no longer simply connected.
Martin's answer to the earlier question shows that deleting any countable number of points preserves the simply connected property, and so the simply connected deletion number is definitely uncountable, at least $\omega_1$. And since it is clearly at most the continuum, the question is settled if the continuum hypothesis holds. Like all other cardinal characteristics of the continuum, this number is more interesting when the continuum hypothesis fails.
In a comment, I had suggested that Martin's argument suggested that the simply connected deletion number should be at least as large as cov$(\cal{M})$, the covering number of the meager ideal, which is the fewest number of meager sets whose union is the whole space. My reason for suggesting this was that as far as I understand Martin's answer (which I admit is imperfectly), he is proposing that for any one point $x$, there is a comeager set of homotopies that avoids $x$. So in order to avoid all the points in a set $P$, we need to know that the intersection of $|P|$ many comeager sets in his space of homotopies is nonempty. This is the same as knowing that the union of $|P|$ many meager sets (the complements) is not the whole space of homotopies, in order that there is at least one desired homotopy that avoids every point in $P$.
If this is right, then we would deduce that the simply connected deletion number is at least cov$(\cal{M})$, provided that the covering number for meager sets in his space is the same as for our other more familiar spaces. (If someone could explain and confirm this inequality in greater detail, please post an answer! I would want to see more details than Martin had provided about the space of homotopies.)
Question. What is the simply connected deletion number exactly? Is it consistent that this number is strictly less than the continuum? Is it necessarily the continuum? How does it relate to the other standard cardinal characteristics of the continuum? What is the value under Martin's axiom? Is it equal to cov($\cal{M}$)? Can it be strictly larger than cov$(\cal{M})$?
|
In the following exhibits we give an advanced or alternative way of thinking about mathematics concepts which are likely to be known in a more familiar form.
Explore these structures and experiment by substituting particular values such as $0, \pm 1$. Can you work out what they represent? Exhibit A All pairs of integers such that: $$(a, b) + (c, d) = (ad+bc, bd)\quad\quad (Na, Nb) \equiv (a, b) \mbox{ for all } N\neq 0$$ Can you find two pairs which add up to give $(0, N)$ or $(0, M)$ for various values of $N$, $M$?
Exhibit B A set of ordered pairs of real numbers which can be added and multiplied such that
$(x_1, y_1) + (x_2, y_2) = (x_1 + x_2, y_1 +y_2)$ $(x_1, y_1)\times (x_2, y_2) = (x_1x_2 -y_1y_2, x_1y_2+y_1x_2)$ Exhibit C A set defined recursively such that $+_k(1) = +_1(k)$ $+_k(+_1(n)) = +_1(+_k(n))$ $\times_k(1) = k$ $\times_k(+_1(n)) = +_k(\times_k(n))$ In these rules, $k$ and $n$ are allowed to be any natural numbers Once you have figured out what these structures represent ask yourself this: Are these good representations? What benefits can you see to such a representation? How might familiar properties from the structures be represented in these ways?
|
It is easy to turn any boolean formula and any quantified boolean formula into an equisatisfiable formula in CNF using the Tseitin transformation:
$$ Q_1 z_1 Q_2 z_2 \ldots Q_n z_n \Phi \Rightarrow Q_1 z_1 Q_2 z_2 \ldots Q_n z_n \exists x ((\neg{x} \vee \Psi) \wedge \Phi[x/\Psi] ),\ Q_i \in \{ \exists, \forall \} $$
(For details, see for example here). My two questions are:
Can any formula of the form $\forall z_1 \ldots \forall z_n \exists z_{n+1} \ldots \exists z_m \Phi$ be turned into an equisatisfiable formula of the same form (so beginning with $\forall$ and with just one alternation of quantifiers) in CNF using the Tseitin transformation? I'm assuming yes, since the Tseitin transformation only adds existential quantifiers ($\exists$) right before $\Phi$. Can any formula of the form $\exists z_1 \ldots \exists z_n \forall z_{n+1} \ldots \forall z_m \Phi$ also be converted into an equisatisfiable formula of the same form (so beginning with $\exists$ and with just one alternation of quantifiers) in CNF in polynomial time? I'm assuming no, since the Tseitin transformation would give us the form $\exists z_1 \ldots \exists z_n \forall z_{n+1} \ldots \forall z_m \exists x_1 \ldots \exists x_l \Phi'$ and I'm not aware of any other polynomial-time transformation useful in this case.
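For the propositional core, the Tseitin transformation can be sketched as follows. This is an illustrative encoding with made-up helper names: one fresh definitional variable is introduced per connective, and a brute-force check is included only to demonstrate equisatisfiability on tiny formulas:

```python
import itertools

def tseitin(formula):
    """formula: ('var', name) | ('not', f) | ('and', f, g) | ('or', f, g).
    Returns (clauses, var_ids): an equisatisfiable CNF over the original
    variables plus fresh definitional ones; literals are signed ints."""
    clauses, var_ids = [], {}
    fresh = itertools.count(1)

    def lit_of(f):
        if f[0] == 'var':
            if f[1] not in var_ids:
                var_ids[f[1]] = next(fresh)
            return var_ids[f[1]]
        if f[0] == 'not':
            return -lit_of(f[1])
        a, b = lit_of(f[1]), lit_of(f[2])
        x = next(fresh)                  # fresh definitional variable
        if f[0] == 'and':                # clauses for x <-> (a and b)
            clauses.extend([[-x, a], [-x, b], [x, -a, -b]])
        else:                            # 'or': clauses for x <-> (a or b)
            clauses.extend([[-x, a, b], [x, -a], [x, -b]])
        return x

    clauses.append([lit_of(formula)])    # assert the top-level formula
    return clauses, var_ids

def brute_force_sat(clauses):
    """Exponential check, only for demonstrating equisatisfiability."""
    n = max(abs(l) for c in clauses for l in c)
    for bits in itertools.product([False, True], repeat=n):
        if all(any((l > 0) == bits[abs(l) - 1] for l in c) for c in clauses):
            return True
    return False
```

The fresh variables `x` are exactly the existentially quantified $x_1, \ldots, x_l$ that get appended before $\Phi$ in the quantified setting, which is what drives the asymmetry between the two questions above.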
|
I am confused about how to do this question. I need to use Green's first identity and the fact that if $\nabla f=0$ then $f$ is constant on $\Omega$, since $\Omega$ is path connected. I have substituted the information into Green's identity but I don't get anything useful.
$$\iiint_\Omega \nabla f\cdot \nabla g \,dV =\iint_{\partial\Omega} f\nabla g\cdot n \,dA - \iiint_\Omega f\cdot \Delta g \,dV$$
We get:
$$\iiint_\Omega \nabla f\cdot \nabla g \,dV =-\iiint_\Omega f\cdot \Delta g \,dV$$
$$\iiint_\Omega \nabla\cdot (f \nabla g) \,dV =0$$ $$\iint_{\partial\Omega} f\nabla g\cdot n \,dA=0$$
|
June 4th, 2014, 05:59 AM
# 1
Newbie
Joined: Jun 2014
From: UK
Posts: 1
Thanks: 0
Solving ODEs without making an Ansatz
Hi,
Last year, in my final year of sixth form (high school?), one of the topics we covered was solving linear ODEs with various methods. Now that I'm at university we've covered solving second order or higher ODEs with constant coefficients again, but the only method we've used to do so is by making an Ansatz.
Last year it was proved to us that the solution must be of the form e^kx with use of differential operators, but I can't find this proof anywhere online, and I was wondering if anyone was able to reproduce it.
Thanks.
June 4th, 2014, 08:59 PM
# 2
Math Team
Joined: Dec 2013
From: Colombia
Posts: 7,689
Thanks: 2669
Math Focus: Mainly analysis and algebra
I can sketch a proof (due to Tom Apostol) for second order equations.
Briefly, we can prove that there is a unique non-trivial solution to the differential equation
$$y^{\prime\prime} + by = 0 \qquad \text{with initial conditions} \qquad y(c) = d \; y^\prime(c) = e$$
We do this by assuming that $f(x)$ and $g(x)$ are both solutions to the problem, and then creating the Taylor expansion of $h(x) = f(x) - g(x)$. The error term of the expansion can be made arbitrarily small.
We can then observe that solutions to $y^{\prime\prime} + by = 0$ are easy to find. If
\begin{align*}
b &\lt 0 \qquad \text{write $k^2 = -b$ and then} & y &= c_1 e^{-kx} + c_2 e^{kx} \\
b &= 0 \qquad \text{then} & y &= c_1 + c_2 x \\
b &\gt 0 \qquad \text{write $k^2 = b$ and then} & y &= c_1 \sin{(kx)} + c_2 \cos{(kx)} \\
\end{align*}
We then prove that these represent all solutions, by showing that if $f(x)$ is a solution of the initial value problem described above, and there exist constants $c_1$ and $c_2$ such that $y(0) = f(0)$ and $y^\prime(0) = f^\prime(0)$, then by the uniqueness theorem they are the same. And thus we have all solutions.
Finally, we can show that the general differential equation
$$y^{\prime\prime} + ay^\prime + by = 0$$
reduces to the special form studied above. If we write $y = uv$ we get
$$y^{\prime\prime} + ay^\prime + by = (v^{\prime\prime} + av^\prime + bv)u + (2v^\prime + av)u^\prime + vu^{\prime\prime}$$
We can then choose $v$ so that the coefficient of $u^\prime$ is zero which gives $v = e^\frac{-ax}{2}$. Then
$$v^{\prime\prime} + av^\prime + bv = \frac{4b -a^2}{4}v$$ and
$$y^{\prime\prime} + ay^\prime + by = \left( u^{\prime\prime} + \frac{4b -a^2}{4}u \right)v$$
But $v$ is never zero, so for $y^{\prime\prime} + ay^\prime + by = 0$ we must have $$u^{\prime\prime} + \frac{4b -a^2}{4}u = 0$$
which is an equation of the simpler form discussed above.
The second order (and higher) equations have a characteristic polynomial equation, the roots of which determine distinct solutions of the differential equation. All solutions of the equation are linear combinations of these distinct solutions. Real roots $r = k$ correspond to solutions $c_i e^{kx}$. Complex roots $r = p \pm iq$ correspond to solutions $e^{px}\left( c_i \cos {qx} + c_j \sin {qx} \right)$. When roots are repeated, the corresponding solutions are multiplied by successive powers of $x$, $c_i f(x) + c_j x f(x) + \cdots$. This is probably the quickest way of forming a solution if you can factor the characteristic equation into linear and/or quadratic factors. This paragraph is a summary of the Wikipedia page.
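As a small numerical companion to that summary (Python rather than pen and paper): substituting $y = e^{rx}$ into $y'' + ay' + by = 0$ gives the residual $(r^2 + ar + b)e^{rx}$, which vanishes exactly when $r$ is a characteristic root. The helper name and the example coefficients below are illustrative:

```python
import cmath

def characteristic_roots(a, b):
    """Roots of r^2 + a r + b = 0, the characteristic equation
    of y'' + a y' + b y = 0 (complex square root handles all three cases)."""
    d = cmath.sqrt(a * a - 4 * b)
    return (-a + d) / 2, (-a - d) / 2

# Example: y'' + 2y' + 5y = 0 has roots -1 +/- 2i,
# i.e. solutions e^{-x}(c1 cos 2x + c2 sin 2x)
r1, r2 = characteristic_roots(2, 5)
```

Checking $r^2 + 2r + 5 = 0$ for both roots confirms that $e^{rx}$ really does solve the equation, which is exactly why the Ansatz $e^{kx}$ works.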
|
On $\mathbb{R}^3$ with coordinates $x,y,z$ consider the Riemannian metric $g=\displaystyle{\frac{dx^2+dy^2+dz^2}{x^2+y^2}}$ defined on $X:=\mathbb{R}^3\setminus\{(0,0,z)\}$.
Given any point $p_1\in X$ and $p_2$ any point in a sufficiently small neighborhood of $p_1$, I want to compute the distance $d(p_1,p_2)$ with respect to the metric $d$ which is the intrinsic metric induced by $g$.
Take for example $p_1=(1,1,1)$ and $p_2=(3/2,-3/2,2)$ how do I compute $d(p_1,p_2)?$
The only way I've figured out to compute $d(p_1,p_2)$ is to compute the geodesic $\gamma$ from $p_1$ to $p_2$ and then to compute its length. I've computed the geodesic equations of $g$ ($\gamma_1$ is the component of $\gamma$ in the coordinate $x$ and so on):
$$\ddot\gamma_1=\displaystyle{\frac{2y\dot\gamma_1\dot\gamma_2+x(\dot\gamma_1^2-\dot\gamma_2^2-\dot\gamma_3^2)}{x^2+y^2}}$$
$$\ddot\gamma_2=\displaystyle{\frac{2x\dot\gamma_1\dot\gamma_2-y(\dot\gamma_1^2-\dot\gamma_2^2+\dot\gamma_3^2)}{x^2+y^2}}$$
$$\ddot\gamma_3=\displaystyle{\frac{2(x\dot\gamma_1+y\dot\gamma_2)\dot\gamma_3}{x^2+y^2}}$$
But I don't know how to integrate these differential equations to get the equation of the geodesic from $p_1$ to $p_2$ and then compute its length.
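One practical route (a numerical sketch, not a closed-form integration) is to integrate these equations as a first-order system and check them against the invariant that an affinely parametrized geodesic conserves its $g$-speed $\lVert\dot\gamma\rVert_g^2 = (\dot\gamma_1^2+\dot\gamma_2^2+\dot\gamma_3^2)/(x^2+y^2)$; the distance $d(p_1,p_2)$ could then be approached by shooting on the initial velocity. The initial conditions below are arbitrary illustrations:

```python
import numpy as np
from scipy.integrate import solve_ivp

def geodesic_rhs(t, s):
    # s = (x, y, z, vx, vy, vz); the three second-order equations above
    x, y, z, vx, vy, vz = s
    q = x * x + y * y
    ax = (2 * y * vx * vy + x * (vx * vx - vy * vy - vz * vz)) / q
    ay = (2 * x * vx * vy + y * (vy * vy - vx * vx - vz * vz)) / q
    az = 2 * (x * vx + y * vy) * vz / q
    return [vx, vy, vz, ax, ay, az]

def g_speed2(s):
    # squared speed with respect to g = (dx^2 + dy^2 + dz^2) / (x^2 + y^2)
    x, y, z, vx, vy, vz = s
    return (vx * vx + vy * vy + vz * vz) / (x * x + y * y)

s0 = [1.0, 1.0, 1.0, 0.5, -0.5, 0.3]     # start at p1 with some velocity
sol = solve_ivp(geodesic_rhs, (0.0, 2.0), s0, rtol=1e-10, atol=1e-12)
speeds = [g_speed2(sol.y[:, k]) for k in range(sol.y.shape[1])]
drift = max(speeds) - min(speeds)        # should be ~0 along a true geodesic
```

The length of the integrated curve is then the (constant) $g$-speed times the parameter interval, and a boundary-value solver or shooting method on the initial velocity would target $p_2$.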
|
Electrical Power
How do the various formulae for electrical power fit together? What is the difference between DC, AC and complex power, and how do they harmonise with our physical conceptions of energy and power?
Contents Definition
By formal definition, any form of power (e.g. electrical, mechanical, thermal, etc) is the rate at which energy or work is performed. The standard unit of power is the watt (or joules per second). Electrical power is the rate at which electrical energy is delivered to a load (via an electrical circuit) and converted into another form of energy (e.g. heat, light, sound, chemical, kinetic, etc). In terms of electrical quantities current and voltage, power can be calculated by the following standard formula:
[math] P = VI \, [/math]
Where P is power in watts, V is potential difference in volts and I is current in amperes. But how did this relationship come about?
DC Power Historical Derivation
19th century English physicist James Prescott Joule observed that the amount of heat energy (H) dissipated by a constant (DC) electrical current (I), through a material of resistance R, in time t, had the following proportional relationship:
[math] H \propto I^{2}Rt [/math]
As power is the rate of change of energy over time, Joule’s observation above can be restated in terms of electrical power:
[math] P \propto I^{2}R [/math]
since P = ∆H/∆t.
Now applying Ohm’s law R = V/I we get:
[math] P \propto VI [/math] Alternative Derivation
The SI unit for energy is the joule. For electrical energy, one joule is defined as the work required to move an electric charge of one coulomb through a potential difference of one volt. In other words:
[math] E=QV \, [/math]
Where E is electrical energy (in joules), Q is charge (in coulombs) and V is potential difference (in volts). Given that electric current is defined as the amount of charge flowing per unit time (I = Q/t), then
[math] E=VIt \, [/math]
As power is the rate of change of energy over time, this reduces to:
[math] P=VI \, [/math]
Which is the familiar equation for electrical power.
AC Power
In its unaltered form, the power equation P = VI is only applicable to direct current (DC) waveforms. In alternating current (AC) waveforms, the instantaneous value of the waveform is always changing with time, so AC power is slightly different conceptually to DC power.
Derivation
AC waveforms in power systems are typically sinusoidal with the following general form, e.g. for a voltage waveform:
[math] v(t)=V \cos(\omega t - \phi) \, [/math]
Where V is the amplitude of the voltage waveform (volts)
[math]\omega[/math] is the angular frequency = 2πf [math]\phi[/math] is the phase displacement v(t) is the instantaneous value of voltage at any time t (seconds)
If the current waveform i(t) had a similar form, then it can be clearly seen that the instantaneous power p(t)= v(t)i(t) also varies with time.
Suppose the current and voltage waveforms are both sinusoids and have a phase difference such that the current lags the voltage by a phase angle [math]\theta[/math]. Therefore we can write voltage and current as:
[math] v(t)=V\cos(\omega t) \, [/math] [math] i(t)=I\cos(\omega t - \theta) \, [/math]
The instantaneous power is therefore:
[math] p(t)= v(t)i(t) \, [/math] [math] p(t)= V \cos(\omega t)I\cos(\omega t - \theta) \, [/math] [math] p(t)= \frac{VI}{2} (\cos\theta + \cos(2\omega t - \theta) ) [/math] [math] p(t)= \frac{VI}{2} (\cos\theta + \cos(2\omega t)\cos\theta + \sin(2\omega t)\sin\theta) [/math] [math] p(t)= \frac{VI}{2} (\cos\theta (1 + \cos(2\omega t) )+ \sin(2\omega t)\sin\theta) [/math]
Since the root-mean-square (rms) values of voltage and current are [math] V_{rms} = \frac{V}{\sqrt{2}} [/math] and [math] I_{rms} = \frac{I}{\sqrt{2}} [/math], then
[math] p(t) = V_{rms} I_{rms} (\cos\theta(1+\cos(2\omega t))+ \sin(2\omega t)\sin\theta) \, [/math]
We can simplify this equation further by defining the following terms:
[math] P = V_{rms} I_{rms} \cos\theta \, [/math]
and
[math] Q = V_{rms} I_{rms} \sin\theta \, [/math]
We then get the final AC instantaneous power equation:
[math] p(t) = P (1 + \cos(2\omega t) )+ Q \sin(2\omega t) \, [/math]
The term P is called the active (or real) power and the term Q is called the reactive power.
Note that the term cosθ is called the power factor and refers to the proportion of active or real component of AC power being delivered. The active power is the component of power that can do real work (e.g. be converted to useful forms of energy like mechanical, heat or light).
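The decomposition can be checked numerically. In the sketch below (illustrative amplitudes, 50 Hz, a 30° lagging current), the directly computed instantaneous power v(t)i(t) agrees with P(1 + cos 2ωt) + Q sin 2ωt at every sample:

```python
import numpy as np

V, I = 325.0, 10.0                 # voltage and current amplitudes (illustrative)
theta = np.deg2rad(30.0)           # current lags voltage by 30 degrees
w = 2 * np.pi * 50.0               # angular frequency for 50 Hz

t = np.linspace(0.0, 0.04, 2000)   # two full cycles
v = V * np.cos(w * t)
i = I * np.cos(w * t - theta)
p_direct = v * i                   # instantaneous power v(t) i(t)

# Active and reactive power from the rms values
P = (V / np.sqrt(2)) * (I / np.sqrt(2)) * np.cos(theta)
Q = (V / np.sqrt(2)) * (I / np.sqrt(2)) * np.sin(theta)
p_decomposed = P * (1 + np.cos(2 * w * t)) + Q * np.sin(2 * w * t)
```

Note also that the power-flow waveform oscillates at twice the supply frequency, as stated in the physical interpretation below.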
Physical Interpretation
From the basic power equation:
[math] p(t) = \frac{VI}{2} \left[ \cos\theta + \cos(2\omega t - \theta) \right] [/math]
We can see that power flow is a sinusoidal waveform with twice the frequency of voltage and current.
From the power equation, we can also break p(t) down into two components:
A constant term (active power), [math] V_{rms} I_{rms} \cos\theta \, [/math] An alternating term, [math] V_{rms} I_{rms} \cos(2\omega t - \theta) \, [/math]
Notice that the alternating term fluctuates around zero and the constant term in the above example is positive. It turns out that the alternating term always fluctuates around zero and the constant term (active power) depends on the power factor cosθ. But what does the power factor represent?
Power Factor
Power factor is defined as the cosine of the power angle [math]\cos \theta \,[/math], the difference in phase between voltage and current. People will often refer to power factor as leading or lagging. This is because the power angle can only range between -90° and +90°, and the cosine of an angle in the fourth quadrant (between 0 and -90°) is always positive. Therefore the power factor is also always positive and the only way to distinguish whether the power angle is negative or positive from the power factor is to denote it leading or lagging.
Lagging power factor: when the current lags the voltage, this means that the current waveform comes delayed after the voltage waveform (and the power angle is positive). Leading power factor: when the current leads the voltage, this means that the current waveform comes before the voltage waveform (and the power angle is negative). Unity power factor: refers to the case when the current and voltage are in the same phase.
The physical significance of power factor is in the load impedance. Inductive loads (e.g. coils, motors, etc) have lagging power factors, capacitive loads (e.g. capacitors) have leading power factors and resistive loads (e.g. heaters) have close to unity power factors.
Relation to Energy
By definition, power is the rate at which work is being done (or the rate at which energy is being expended). As AC power varies with time, the amount of energy delivered by a given power flow in time T is found by integrating the AC power function over the specified time:
[math] E = \int_{0}^{T} p(t) dt [/math]
We can see that power is made up of a constant component [math] V_{rms}I_{rms}\cos\theta [/math] and an alternating component [math] V_{rms}I_{rms}\cos(2\omega t - \theta) [/math]. The integration can therefore be broken up as follows:
[math] E = \int_{0}^{T} V_{rms}I_{rms}\cos\theta dt + \int_{0}^{T} V_{rms}I_{rms}\cos(2\omega t - \theta) dt [/math]
Suppose we were to integrate over a single period of an AC power waveform (e.g. [math] T = \frac{\pi}{\omega} [/math]). The alternating component drops out and the integration is solved as follows:
[math] E = V_{rms}I_{rms}\cos\theta . \frac{\pi}{\omega} [/math]
From this we can see that work is done by the active power component only and the alternating component does zero net work, i.e. the positive and negative components cancel each other out.
Complex Power
Books often mention AC power in terms of complex quantities, mainly because it has attractive properties for analysis (i.e. use of vector algebra). But often, complex power is simply defined without being derived. So how do complex numbers follow from the previous definitions of power?
For more information on how complex numbers are used in electrical engineering, see the related article on complex electrical quantities. Much of the derivation below is reproduced from this article.
Derivation
Back in 1897, Charles Proteus Steinmetz first suggested representing AC waveforms as complex quantities in his book "Theory and Calculation of Alternating Current Phenomena". What follows is a sketch of Steinmetz’s derivation, but specifically using AC power as the quantity under consideration.
Previously, we found that AC power is a sinusoidal waveform with the general form (for lagging power factors):
[math] p(t) = VI \left[ \cos\theta + \cos(2\omega t - \theta) \right] [/math]
Where V and I are now the rms values of voltage (V rms) and current (A rms), so the factor of ½ from the peak-value form is absorbed into the product VI.
For a fixed angular frequency ω, this waveform can be fully characterized by two parameters: the rms voltage and current product VI and the lagging phase angle [math] -\theta [/math].
Using these two parameters, we can represent the AC waveform p(t) as a two-dimensional vector
S which can be expressed in polar coordinates with magnitude VI and polar angle [math] -\theta [/math]:
This vector
S can be converted into a pair of rectangular coordinates (x, y) such that: [math] x = VI \cos\theta \, [/math] [math] y = -VI \sin\theta \, [/math]
It can be shown trigonometrically that the addition and subtraction of AC power vectors follow the general rules of vector arithmetic, i.e. the rectangular components of two or more sinusoids can be added and subtracted (but not multiplied or divided!).
However working with each rectangular component individually can be unwieldy. Suppose we were to combine the rectangular components using a meaningless operator j to distinguish between the horizontal (x) and vertical (y) components. Our vector
S now becomes: [math] \boldsymbol{S} = x+jy \, [/math] [math] \boldsymbol{S} = VI \cos\theta - jVI \sin\theta \, [/math]
Note that the addition sign does not denote a simple addition because x and y are orthogonal quantities in a two-dimensional space. At the moment, j is just a meaningless operator that distinguishes the vertical component of S. Now consider a rotation of the vector by 90°:
The rotated vector [math] \boldsymbol{S}' = -y + jx \, [/math]
Suppose we were to define the operator j to represent a 90° rotation, so that multiplying any vector by j rotates it by 90°. Therefore:
[math] j\boldsymbol{S} = \boldsymbol{S}' \, [/math] [math] jx + j^{2} y = -y + jx \, [/math] [math] j^{2} + 1 = 0 \, [/math] [math] j = \sqrt{-1} \, [/math]
Therefore using our definition of j as a 90° rotation operator, j actually turns out to be an imaginary number and the vector
S=x+jy is a complex quantity. Therefore our vector S: [math] \boldsymbol{S} = VI\cos\theta - jVI\sin\theta \, [/math]
Is referred to as
complex power or sometimes apparent power (refer to the section below). It is most commonly written in this form: [math] \boldsymbol{S} = P - jQ \, [/math] (for lagging power factor) [math] \boldsymbol{S} = P + jQ \, [/math] (for leading power factor)
Where [math]P = VI\cos\theta \,[/math] and [math] Q = VI\sin\theta \,[/math] are the active (or real) and reactive power quantities defined earlier.
Complex Power from Phasors
Given voltage and current phasors
V and I such that: [math] \boldsymbol{V} = V \angle \phi \, [/math] and [math] \boldsymbol{I} = I \angle \delta \, [/math]
Then the complex power
S can be calculated as follows: [math] \boldsymbol{S} = \boldsymbol{V}^{*} \boldsymbol{I} \, [/math] [math] = V \angle (-\phi) \times I \angle \delta \, [/math] [math] = VI \angle (-(\phi - \delta)) \, [/math] [math] = VI \angle (-\theta) = VI\cos\theta - jVI\sin\theta \, [/math]
Where [math] \theta = \phi - \delta \, [/math] is the power angle (i.e phase difference between voltage and current)
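A short sketch of the phasor calculation (the 230 V, 10 A and 30° lagging values are hypothetical), using the sign convention adopted above in which a lagging power factor gives S = P − jQ:

```python
import cmath, math

# Illustrative phasors: 230 V rms at 0 deg, 10 A rms lagging by 30 deg.
V = cmath.rect(230.0, math.radians(0.0))
I = cmath.rect(10.0, math.radians(-30.0))

# With the sign convention used above (lagging power factor -> S = P - jQ),
# the conjugate goes on the voltage phasor.
S = V.conjugate() * I

theta = math.radians(0.0) - math.radians(-30.0)   # power angle = phi - delta
P = abs(V) * abs(I) * math.cos(theta)
Q = abs(V) * abs(I) * math.sin(theta)

print(S.real, P)    # real part = active power
print(-S.imag, Q)   # minus the imaginary part = reactive power (lagging)
```

Note that with the other common convention, where the conjugate is placed on the current phasor instead, the reactive power of a lagging load comes out with the opposite sign; only the sign convention differs.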
Complex Exponentials
Using Euler’s formula, we can represent our complex power vector as a complex exponential using the original polar parameters:
[math] S = VI e^{-j\theta} \, [/math]
The use of complex exponentials gives us an alternative way to think about complex power. We have seen that the vector S rotates around the origin when we vary the phase angle [math] \theta [/math]. The complex exponential [math] e^{j\theta} \, [/math] is actually a rotation operator used to rotate vectors around a circle in a two-dimensional space (there's a good explanation of this at Better Explained). Therefore [math] S = VI e^{-j\theta} \, [/math] is a vector with magnitude VI rotated clockwise by angle [math] \theta [/math].
In other words, complex power is a two-dimensional vector representation of AC power, which is more amenable for manipulation than the time-domain function of AC power p(t).
Apparent Power
In the previous section, we saw that complex power
S is also sometimes called apparent power. However in practice, apparent power is often used to refer to the magnitude of S, which is [math] |\boldsymbol{S}| = VI \, [/math].
Three-Phase Power
So far, we have only been talking about DC and single-phase AC power. The power transferred in a balanced three-phase system is equal to the sum of the powers in each phase, i.e.
[math] P_{3\phi} = 3 V_{ph} I_{ph} \cos\theta \, [/math]
where [math]P_{3\phi} \,[/math] is the three-phase active power (W)
[math]V_{ph} \,[/math] is the phase-neutral voltage (Vac) [math]I_{ph} \,[/math] is the phase current (A) [math]\cos\theta \,[/math] is the power factor (pu)
For a star (wye) connected load:
[math]V_{ph} = \frac{V_{l}}{\sqrt{3}} \,[/math] and [math]I_{ph} = I_{l} \,[/math]
For a delta connected load:
[math]V_{ph} = V_{l} \,[/math] and [math]I_{ph} = \frac{I_{l}}{\sqrt{3}} \,[/math]
where [math]V_{l} \,[/math] and [math]I_{l} \,[/math] are the line-to-line voltage and line current respectively.
Therefore, three-phase active power is the same for both star and delta connected loads (in terms of line quantities):
[math] P_{3\phi} = 3 V_{ph} I_{ph} \cos\theta \, [/math] [math] = \frac{3}{\sqrt{3}} V_{l} I_{l} \cos\theta \, [/math] [math] = \sqrt{3} V_{l} I_{l} \cos\theta \, [/math]
Similarly, three-phase reactive power and apparent power are as follows:
[math] Q_{3\phi} = \sqrt{3} V_{l} I_{l} \sin\theta \, [/math] [math] S_{3\phi} = \sqrt{3} V_{l} I_{l} \, [/math]
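These line-quantity formulas can be sanity-checked numerically; the 400 V line-to-line, 50 A line current and 0.85 lagging power factor below are arbitrary illustrative values, not from the article:

```python
import math

# Illustrative balanced load: 400 V line-to-line, 50 A line current, pf 0.85 lagging.
V_l, I_l, pf = 400.0, 50.0, 0.85
theta = math.acos(pf)

# Line-quantity formulas (identical for star and delta connections):
P = math.sqrt(3) * V_l * I_l * math.cos(theta)
Q = math.sqrt(3) * V_l * I_l * math.sin(theta)
S = math.sqrt(3) * V_l * I_l

# Cross-check with per-phase quantities for a star (wye) connection:
V_ph, I_ph = V_l / math.sqrt(3), I_l
P_star = 3.0 * V_ph * I_ph * math.cos(theta)

print(P, P_star)            # same active power either way
print(math.hypot(P, Q), S)  # and S^2 = P^2 + Q^2
```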
|
We study the evolution of convex hypersurfaces with initial data expanding at a rate equal to <i>H</i> − <i>f</i> along the outer normal, where <i>H</i> is the inverse of the harmonic mean curvature and the initial hypersurface is smooth, closed, and uniformly convex. We find a <i>θ</i>* > 0 and a sufficient condition on the anisotropic function <i>f</i>, such that if <i>θ</i> > <i>θ</i>*, then the hypersurface remains uniformly convex and expands to infinity as <i>t</i> → +∞, and its scaling converges to a sphere. In addition, the convergence result is generalized to the fully nonlinear case in which the evolution rate is log <i>H</i> − log <i>f</i> instead of <i>H</i> − <i>f</i>.
In this paper the existence of positive 2\pi -periodic solutions to the ordinary differential equation \begin{equation*} u^{\prime\prime}+u=\frac{f}{u^3} \ \textrm{ in } \mathbb{R} \end{equation*}is studied, where f is a positive 2\pi -periodic smooth function. By virtue of a new generalized Blaschke–Santaló inequality, we obtain a new existence result of solutions.
In this paper we study the solvability of the rotationally symmetric centroaffine Minkowski problem. By delicate blow-up analyses, we remove a technical condition in the existence result obtained by Lu and Wang [30].
In this paper the Orlicz鈥揗inkowski problem, a generalization of the classical Minkowski problem, is studied. Using the variational method, we obtain a new existence result of solutions to this problem for general measures.
The centroaffine Minkowski problem is studied, which is the critical case of the <i>L</i><sub><i>p</i></sub>-Minkowski problem. It admits a variational structure that plays an important role in studying the existence of solutions. In this paper, we find that there is generally no maximizer of the corresponding functional for the centroaffine Minkowski problem.
|
The space $S$ of holomorphic functions on the disk $\mathbb{D}$ which are also in $L^2(\mathbb{D})$ is a Hilbert space, with the inner product being $\langle f, g\rangle = \int f(z) \overline{g(z)} dA(z)$, where $dA$ denotes Lebesgue measure. The linear functional $L_{z_0}$, given by $f\mapsto f(z_0)$, is a bounded linear functional on $S$, and thus there must exist some $g_{z_0}\in S$ so that $L_{z_0}(f) = \langle f, g_{z_0}\rangle$ for all $f$. It turns out, this function is given by $$g_{z_0}(w) = \frac{1}{\pi(1-\overline{z_0}w)^2}$$ I have seen this proven by representing $f$ and $g$ as power series, performing the multiplication, and working out the coefficients of the $g$ power series by brute force, which is fine. But I want to think that there is an easier way to do it, or at least a more conceptually satisfying way. For example, I would think to do it by first noting that for $z_0 = 0$, $g_{z_0}$ can be a constant function (I believe $1/\pi$), and then getting the general case by composing with a Mobius transformation; indeed, the function $g_{z_0}$ is closely related to the derivative of such a Mobius transformation, but I have not been able to get such an approach to work. I realized that I don't really know any machinery for working with non-holomorphic functions (like $\overline{g_{z_0}}$), and area integrals of complex functions are also somewhat new to me.
In sum,
Can anyone give me a conceptually satisfying way of approaching this problem, hopefully one that uses some classical complex analysis tricks like Cauchy's integral formula, Mobius transformations, and the like?
Or,
Can you direct me to a resource which has the basic tools for solving this problem?
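For what it's worth, here is a quick numerical sanity check of the reproducing identity (a sketch: the test function $f(z)=z^2$, the point $z_0=0.3$, and the quadrature sizes are arbitrary choices, and I use the $1/\pi$-normalized kernel, which for $z_0=0$ reduces to the constant $1/\pi$):

```python
import cmath, math

# Numerical check of <f, g_{z0}> = f(z0) on the unit disk, with the 1/pi
# normalization (so g_0 is the constant 1/pi). f(z) = z^2 and z0 = 0.3
# are arbitrary choices; midpoint rule in polar coordinates, dA = r dr dtheta.
z0 = 0.3 + 0.0j

def f(z):
    return z * z

def g(w):
    return 1.0 / (math.pi * (1.0 - z0.conjugate() * w) ** 2)

Nr, Nt = 400, 400
val = 0.0 + 0.0j
for i in range(Nr):
    r = (i + 0.5) / Nr
    for k in range(Nt):
        z = cmath.rect(r, 2.0 * math.pi * (k + 0.5) / Nt)
        val += f(z) * g(z).conjugate() * r
val *= (1.0 / Nr) * (2.0 * math.pi / Nt)

print(val, f(z0))   # both should be close to 0.09
```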
|
This article provides answers to the following questions, among others:
Which properties of the hydrostatic pressure could Pascal demonstrate with his barrel experiment? How does the pressure change with increasing water depth? What are communicating vessels? How does a water level work? How does a water tower work?
Hydrostatic pressure
In the article Pressure in liquids, the formation of hydrostatic pressure and its calculation was explained in detail. It was shown that the hydrostatic pressure \(p_h\) depends only on the depth \(h\) below the liquid surface besides the density \(\rho\) of the liquid and the gravitational acceleration \(g\):
\begin{align}
\label{h} &\boxed{p_h =\rho \cdot g \cdot h} \\[5px] \end{align}
In the following sections, the importance of hydrostatic pressure for everyday life will be explained in more detail.
Barrel experiment of Blaise Pascal
The following experiment, based on the experiment of Blaise Pascal in the 17th century, demonstrates the mere dependence of hydrostatic pressure on depth. For this purpose, a large glass vessel is completely filled with water. The (hydrostatic) water pressure at the bottom of the bottle can be calculated with equation (\ref{h}). If a height of half a meter is assumed, then the hydrostatic water pressure at the bottom is about 0.05 bar. The glass bottle can still withstand this relatively low water pressure without any problems.
However, if a small vertical tube is attached to the neck of the bottle and filled with water, the hydrostatic pressure increases as the water level rises. If, for example, the tube is run over several storeys of a building, the pressure can increase considerably. At a height of 30 meters, the water pressure rises to about 3 bar. Eventually, the water pressure becomes so high that the glass bottle can no longer withstand the enormous forces and breaks.
The impressive aspect of this experiment is that it does not matter which inner diameter the tube has as long as capillary effects can be neglected. A tube with an inner diameter of 4 mm is theoretically sufficient. In order to fill this tube with water, just about 380 ml of water are required. 380 ml of water are therefore completely sufficient to increase the water pressure in the glass vessel more than 60 times, regardless of the capacity of the vessel!
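A quick check of the quoted figures (a sketch assuming ρ = 1000 kg/m³ and g = 9.81 N/kg; the 0.5 m, 30 m and 4 mm values are taken from the text):

```python
import math

# Rough check of the quoted numbers, with rho = 1000 kg/m^3 and g = 9.81 N/kg.
rho, g = 1000.0, 9.81

p_bottle = rho * g * 0.5        # hydrostatic pressure at the bottom (0.5 m)
p_tube = rho * g * 30.0         # extra pressure from a 30 m water column

r = 0.004 / 2                   # 4 mm inner diameter -> 2 mm radius
volume = math.pi * r**2 * 30.0  # water needed to fill the 30 m tube

print(p_bottle / 1e5)   # in bar: about 0.05
print(p_tube / 1e5)     # in bar: about 3
print(volume * 1000)    # in litres: about 0.38 (i.e. roughly 380 ml)
```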
Water pressure in the ocean
The hydrostatic pressure causes the pressure in water to increase more and more with increasing depth. With a water density of around 1000 kg/m³ and a gravitational acceleration of about 10 N/kg, the water pressure increases by around 1 bar per 10 metres of water depth. Note that the pressure values in the figure below refer only to the hydrostatic pressure (water pressure). For the absolute pressure at a certain depth, the ambient pressure of 1 bar (atmospheric pressure) on the water surface must be added.
The water pressure increases by about 1 bar per 10 meters of water depth!
The increasing water pressure during dives, for example, results in more air having to be inhaled from the scuba tanks as the depth increases. In order to balance the surrounding water pressure, the lungs must generate the same pressure through the inhaled air, otherwise the lungs would be compressed by the greater water pressure. Greater lung pressure can only be achieved by inhaling more air, similar to a bicycle tyre where more air needs to be pumped in to increase the pressure. So the air supply in the diving tanks will run out faster the deeper you dive.
Watering can
The fact that hydrostatic pressure only depends on depth is evident in many places in everyday life. It is also the reason why the same water level is found everywhere in vessels connected by pipes (so-called
communicating vessels). This can be seen, for example, in a watering can that is filled with water. Over time, the same water level will settle in the pouring pipe (also called the spout) as in the can itself.
Communicating vessels are containers filled with liquid, which are connected to each other by pipes and which have a common liquid level!
This can be explained mathematically as follows. The water in the can leads to a certain hydrostatic pressure \(p_{c}\) at the depth \(h_{c}\) where the nozzle is welded on:
\begin{align}
&p_{c} =\rho \cdot g \cdot h_{c} ~~~\text{hydrostatic pressure in the can} \\[5px] \end{align}
In the same way, the hydrostatic pressure in the spout \(p_s\) at depth \(h_{s}\) below the water level can be determined:
\begin{align}
&p_{s} =\rho \cdot g \cdot h_{s} ~~~\text{hydrostatic pressure in the spout} \\[5px] \end{align}
During the filling of the can, different water levels between the can and the pouring pipe can be observed. After all, the water in the spout is pushed upwards by the greater water pressure in the can.
After filling, however, a state of equilibrium is reached and the water is no longer forced through the pipe. In this case, the hydrostatic pressure caused by the water column in the spout must obviously be as high as the hydrostatic pressure caused by the water column in the can. If this were not the case, then the larger of the two hydrostatic pressures would press either the water in the can or in the spout further upwards. In a state of equilibrium, the hydrostatic pressures must inevitably be the same, which is ultimately only the case with a common water level:
\begin{align}
\require{cancel} p_{c} &\overset{!}{=} p_{s} \\[5px] \bcancel {\rho \cdot g} \cdot h_{c} &= \bcancel{\rho \cdot g} \cdot h_{s} \\[5px] h_{c} &= h_{s} \\[5px] \end{align}
The fact that identical hydrostatic pressures form in the equilibrium state is also shown by the fact that pressures in liquids act equally in all directions. Thus, there cannot be two different hydrostatic pressures at a certain depth. If this were the case, currents would form and there would be no equilibrium.
Water level
The fact that identical water levels form in vessels that are connected to each other is technically used in so-called
water level devices. Two vessels are each provided with a scale and connected to each other by a flexible tube filled with water. The water level can be read off the scales.
Since the same water level is reached in both vessels, it is very easy to set the same level even over long distances where simple spirit levels cannot be used. Water levels are used, for example, in construction technology, whereby electronic sensors are mostly used nowadays.
Water tower
Another technical implementation that uses the hydrostatic pressure or the striving for a common liquid level in communicating vessels is the
water tower.
In principle, a water tower is an elevated tank that is filled with water by pumps. Due to the resulting hydrostatic pressure, the water can be forced into the lower-lying households without additional pumps. Due to the large water reservoir in the tower, usually several million litres, the water level sinks only relatively slowly. This ensures an almost constant water pressure before water is pumped again when the water level falls below a certain limit.
Nowadays, however, water towers are used less and less. In modern water supply systems, pumps are mostly used to transport the water directly to the consumers.
|
We all know some curves can be described by $y=f(x)$ and some surfaces can be described by $z=f(x,y)$. However, there exist curves and surfaces which cannot be described in those forms, such as a circle and a sphere. Therefore, we introduce parameterized vector equations, which can describe them.
For example, circle: $\vec r(t)=r\cos(t)\hat i+r\sin(t)\hat j$ sphere: $\vec r(u,v)=\rho\cos(u)\sin(v)\hat i+\rho\sin(u)\sin(v)\hat j+\rho\cos(v)\hat k$ However, all curves and surfaces described by $y=f(x)$ and $z=f(x,y)$ can be parameterized. For curves, $$\vec r(t)=t\hat i+f(t)\hat j$$ For surfaces, $$\vec r(u,v)=u\hat i+v\hat j+f(u,v)\hat k$$ Therefore, I think this suggests the set of all parameterized surfaces (or curves) is a super-set of the set of all surfaces (or curves) described by $z=f(x,y)$ (or $y=f(x)$). Is that correct? Now, here comes the real challenge. A curve can also be described by an implicit function $f(x,y)=0$ and a surface can also be described by an implicit function $f(x,y,z)=0$. I have 3 questions regarding this. Can all surfaces (curves) described by an implicit function be parameterized? (If yes, then what is the general way?) Can all surfaces (curves) described by parametric vector equations be represented using implicit function? (If yes, then what is the general way?) Compare the set of all parameterized surfaces (curves) and the set of all surfaces (curves) represented by implicit function. (which is which super-set?)
Sorry for the use of nontechnical terms. I use them because I don't know the technical ones. I have only started learning vector calculus last year in university.
EDIT: I think my question is not too clear, so I will give an example of writing the surface $f(x,y,z)=0$ into $\vec r(u,v)$ We want to parameterize a sphere. $$x^2+y^2+z^2-\rho^2=0$$ Let $x=\rho\cos(u)\sin(v)$, $y=\rho\sin(u)\sin(v)$, $z=\rho\cos(v)$, $$\rho^2\cos^2(u)\sin^2(v)+\rho^2\sin^2(u)\sin^2(v)+\rho^2\cos^2(v)-\rho^2$$ $$=\rho^2\sin^2(v)(\cos^2(u)+\sin^2(u))+\rho^2\cos^2(v)-\rho^2=\rho^2-\rho^2=0$$ I want to know if there is a general way of finding $x=x(u,v)$, $y=y(u,v)$ and $z=z(u,v)$ for any given $f(x,y,z)=0$
|
This question arose from another one of mine, Homotopy type of some lattices with top and bottom removed.
An element $d$ of a bounded lattice $L$ is called $\mathit{dense}$ if $$ \forall x\in L\ (d\land x=\bot)\Rightarrow(x=\bot) $$ holds.
It is well known that a pseudocomplemented distributive lattice is Boolean if and only if $\top$ is the only dense element: in this case dense elements are precisely those of the form $a\lor\neg a$ where $\neg$ is the pseudocomplement.
Is a characterization of general (non-distributive) lattices with the same property known? That is, which bounded lattices have the property that $\top$ is the unique dense element?
Variations: this property + the only codense element is $\bot$; not necessarily bounded lattices such that all intervals have these properties; just the finite case; etc., etc.
Also, what about those distributive lattices which are neither pseudocomplemented nor copseudocomplemented? And what about (co)pseudocomplemented non-distributive ones?
|
The problem is Poincaré's lemma joined with the compactness (without boundary) of the 2-manifold $M$ we are considering. Due to them, the flux of the electrical field should be simultaneously zero and $1$ for a $1$-surface (a closed curve) surrounding the support of the delta function, since this curve can be viewed as the boundary of a region including the charge but also as the boundary of its complement.
This property must be valid for every density of charge over the manifold: the total charge must be zero. As a consequence, there is no solution of the equation you wrote, since you are dealing with a density of charge whose integral is different from zero.
There are however some ways out. The simplest one is obtained by adding a negative constant to the right hand side of your equation whose integral on the whole compact surface cancel the integral of the delta. This is a continuous density of charge that compensates the localized charge arising from the delta function.
As a matter of fact the fundamental solution must satisfy $$\Delta_x G(x,y) = \delta(x,y) - S^{-1}\tag{1},$$where $S$ is the $2$-volume of the whole manifold $M$.
This procedure works for a flat 2-torus in particular, where this sort of fundamental solution can be explicitly computed by means of a double Fourier series (try it, and pay attention to the zero mode).
The obtained fundamental solution produces, using the standard convolution procedure, the potential field (satisfying the Poisson equation) generated by a smooth density of charge $\rho$, provided the total (integrated) charge vanishes as required. $$\varphi(x) = \int_M G(x,y) \rho(y) dy\tag{2}.$$This is evident from (1), passing the Laplacian under the sign of integration.
All of this works also on a compact $n$-dimensional manifold (without boundary), referring to the Poisson equation constructed out of a smooth Riemannian metric defined on it.
It is interesting to notice that outside the support of the delta function, the said fundamental solutions, which are distributions, are however smooth functions as a consequence of elliptic regularity theorems. For the 2-torus, the Fourier series you find has to be considered a series of distributions. However it weakly converges to a smooth function outside the support of the delta.
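To illustrate the Fourier-series construction concretely, here is a discrete sketch on the flat unit 2-torus (the 64×64 grid and source location are arbitrary choices, and the discrete 5-point Laplacian stands in for the continuous one): since the right-hand side of (1) is mean-free, every Fourier mode except the zero mode can be solved for, and the zero mode is set to zero.

```python
import numpy as np

# Discrete sketch on the flat unit 2-torus [0,1)^2. Following (1), the
# right-hand side is the grid delta minus its mean, so the k = 0 Fourier
# mode of G can be set to zero and all other modes solved for directly.
N = 64
h = 1.0 / N

rhs = np.zeros((N, N))
rhs[N // 3, N // 5] = 1.0 / h**2   # grid approximation of the delta function
rhs -= rhs.mean()                  # subtract 1/S (here S = 1): zero total charge

rhs_hat = np.fft.fft2(rhs)
lam = -(4.0 / h**2) * np.sin(np.pi * np.arange(N) / N) ** 2
L = lam[:, None] + lam[None, :]    # eigenvalues of the 5-point discrete Laplacian
L[0, 0] = 1.0                      # placeholder to avoid 0/0; mode is zeroed next
G_hat = rhs_hat / L
G_hat[0, 0] = 0.0                  # the free additive constant: pick mean(G) = 0
G = np.real(np.fft.ifft2(G_hat))

# Check: the discrete Laplacian of G reproduces delta - 1/S on the grid.
lap = (np.roll(G, 1, 0) + np.roll(G, -1, 0) +
       np.roll(G, 1, 1) + np.roll(G, -1, 1) - 4 * G) / h**2
print(np.max(np.abs(lap - rhs)))   # should be at round-off level
```

The explicit zeroing of the k = 0 mode is exactly the "pay attention to the zero mode" step mentioned above.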
|
You are right about the dropped $\sim$, it's probably just a typo. Furthermore, remember that in stochastic calculus, you have to take into account second order derivatives, i.e.
$$d\left(\frac{1}{Y_t}\right) = -\frac{1}{Y_t^2}dY_t + \frac{1}{2}\frac{2}{Y_t^3}dY_t^2$$
which is the Taylor expansion up to second order. Then you substitute $dY_t$ in the right hand side and take into account that
$$dY_t^2 = \gamma^2 Y_t^2 d\tilde{W}_t^2 = \gamma^2 Y_t^2 dt \; . $$
The reason you keep second order terms is because they might contain terms with quadratic variation proportional to $dt$. This is the case of Brownian motion itself which has $dW_t^2=dt$.
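As a sanity check of the resulting dynamics, the drift can be verified by Monte Carlo. Assuming for illustration that $Y$ is driftless under the tilde measure, $dY_t = \gamma Y_t d\tilde{W}_t$, the expansion above gives $d(1/Y_t) = (\gamma^2/Y_t)dt - (\gamma/Y_t)d\tilde{W}_t$, so $E[1/Y_t] = (1/Y_0)e^{\gamma^2 t}$. A quick simulation (with arbitrary illustrative parameters):

```python
import math, random

# Monte Carlo check (illustrative parameters): for driftless GBM
# dY = gamma * Y dW, the second-order expansion above gives
# d(1/Y) = (gamma^2 / Y) dt - (gamma / Y) dW, hence
# E[1/Y_T] = (1/Y_0) * exp(gamma^2 * T).
random.seed(0)
gamma, Y0, T, n = 0.3, 2.0, 1.0, 200_000

total = 0.0
for _ in range(n):
    W_T = random.gauss(0.0, math.sqrt(T))                    # W_T ~ N(0, T)
    Y_T = Y0 * math.exp(-0.5 * gamma**2 * T + gamma * W_T)   # exact GBM solution
    total += 1.0 / Y_T

mc = total / n
exact = (1.0 / Y0) * math.exp(gamma**2 * T)
print(mc, exact)   # should agree to within Monte Carlo error
```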
Extra on Taylor expansion:
Provided a function is sufficiently differentiable at some point of its domain, it is possible to approximate it by a polynomial in some neighborhood of that point. Say $f(x)$ around the point $a$ is approximately equal to
$$f(x) \approx f(a)+f'(a)(x-a)+\frac{1}{2}f''(a)(x-a)^2$$
Or if we put $x-a=dx$ and $f(x)-f(a)=df$ we can write this as
$$df \approx f'(a)dx+\frac{1}{2}f''(a)dx^2$$
This is true if the function is twice differentiable and some additional conditions which are a bit too technical to expand upon here.
|
This article uses a very idealized model to roughly calculate daylight hours and the moment of local noon, and from these the times of sunrise and sunset.
How do you calculate the daylight time, or equivalently the fraction of a day that is daylight? Work out how much of the Sun's diurnal circle (the trajectory of the Sun's motion on the celestial sphere over one day) lies above the horizon, ignoring the Sun's motion along the ecliptic during one day and the non-uniformity of the Earth's rotation.
To work out this ratio, you need these two quantities:
\(\delta\): the Sun's declination (the latitude of the subsolar point, north positive); this is why days are long in summer and short in winter.
\(\varphi\): the local latitude (north positive); this is why the Arctic has polar day in summer and the Antarctic has polar night.
Taking the northern-hemisphere winter as an example, draw the celestial sphere, the horizon, and the diurnal circle, then add several auxiliary lines, and use \(\delta\) and \(\varphi\) to express the required ratio:
The center of the celestial sphere is \(O\) and its radius is \(1\); the center of the diurnal circle is \(O'\); the Sun rises at point \(A\) and sets at point \(B\); the observer is not at a pole. Connect \(AB\), \(AO\), \(AO'\), \(BO\), \(BO'\).
Let \(M\) be the midpoint of \(AB\); connect \(MO\) and \(MO'\). It is easy to see that \(\angle OAO' = -\delta\) and \(\angle MOO' = \varphi\).
Let \(\angle MO'A = \theta\); the required ratio is then \(\frac{2\theta}{360^{\circ}} = \frac{\theta}{180^{\circ}}\).
\(\because OO' \perp \text{plane } O'AB\)
\(\therefore OO' \perp O'M,\ OO' \perp O'A\)
\(\therefore O'O = \sin(-\delta),\ O'A = \cos(-\delta)\)
\(\therefore O'M = O'O \cdot \tan\varphi = \sin(-\delta)\tan\varphi\)
\(\therefore \cos\theta = \frac{O'M}{O'A} = \frac{\sin(-\delta)\tan\varphi}{\cos(-\delta)} = \tan(-\delta)\tan\varphi\)
\(\therefore \theta = \arccos(\tan(-\delta)\tan\varphi)\)
\(\therefore \frac{\theta}{180^{\circ}} = \frac{\arccos(\tan(-\delta)\tan\varphi)}{180^{\circ}}\)
This gives the day-length expression in terms of \(\delta\) and \(\varphi\):
\(\mathrm{daytime} = \frac{\arccos(\tan(-\delta)\tan\varphi)}{180^{\circ}} \cdot 24^{\mathrm{h}}\) \((\varphi \neq \pm 90^{\circ})\)
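As a direct transcription of this formula (a sketch; angles are in degrees, an obliquity of 23.44° is used for the solstice examples, and the formula is only valid away from polar day and polar night):

```python
import math

# Day length from the formula above (delta, phi in degrees).
# Only valid where |tan(-delta) tan(phi)| <= 1, i.e. away from polar day/night.
def daytime_hours(delta_deg, phi_deg):
    x = math.tan(math.radians(-delta_deg)) * math.tan(math.radians(phi_deg))
    return math.degrees(math.acos(x)) / 180.0 * 24.0

print(daytime_hours(0.0, 36.5))      # equinox: exactly 12 h at any latitude
print(daytime_hours(-23.44, 36.5))   # winter solstice at 36.5 N: about 9.5 h
print(daytime_hours(23.44, 36.5))    # summer solstice at 36.5 N: about 14.5 h
```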
Although this formula was derived from the northern-hemisphere-winter scenario, it also applies to the southern hemisphere and to summer: substituting \(-\delta\) and \(-\varphi\) makes this easy to see.
......
So the question becomes: how do we find \(\delta\)? We have to draw another sphere:
Here the center of the celestial sphere is the Earth, and the ecliptic and the equator intersect. Let the vernal equinox be \(E\), the autumnal equinox \(E'\), the Sun \(S\), the obliquity of the ecliptic \(\varepsilon\), and the Sun's ecliptic longitude \(\lambda\).
We express \(\delta\) in terms of \(\varepsilon\) and \(\lambda\):
Through \(S\) draw a great-circle arc \(SS'\) perpendicular to the equator, meeting the equator at \(S'\) (the figure is slightly rearranged).
According to the spherical law of sines, in the spherical triangle \(\triangle SES'\):
\(\frac{\sin(-\delta)}{\sin\varepsilon} = \frac{\sin(-\lambda)}{\sin 90^{\circ}}\)
\(\sin\delta = \sin\lambda \sin\varepsilon\)
\(\delta = \arcsin(\sin\lambda \sin\varepsilon)\)
This formula, too, applies in all the various situations.
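A direct transcription (a sketch; the obliquity is taken as 23.44°):

```python
import math

# Solar declination from the Sun's ecliptic longitude (both in degrees),
# with the obliquity of the ecliptic taken as 23.44 degrees.
EPS = 23.44

def declination(lambda_deg):
    s = math.sin(math.radians(lambda_deg)) * math.sin(math.radians(EPS))
    return math.degrees(math.asin(s))

print(declination(0))    # vernal equinox: 0
print(declination(90))   # summer solstice: +23.44 (the obliquity itself)
print(declination(270))  # winter solstice: -23.44
```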
......
The next question: how do we find \(\lambda\)? No sphere needs to be drawn this time. One way is to compute it directly from the orbital parameters; another is to look up the dates of the solar terms and interpolate linearly between the two nearest ones.
Substituting the declination formula into the day-length formula gives the day length in terms of the Sun's ecliptic longitude and the latitude:
\(\mathrm{daytime} = 24^{\mathrm{h}} \cdot \frac{\arccos(\tan(-\arcsin(\sin\lambda\sin\varepsilon))\tan\varphi)}{180^{\circ}}\) \((\varphi \neq \pm 90^{\circ})\)
or
\(\mathrm{daytime} = 24^{\mathrm{h}} \cdot \left(1 - \frac{\arccos(\tan(\arcsin(\sin\lambda\sin\varepsilon))\tan\varphi)}{180^{\circ}}\right)\) \((\varphi \neq \pm 90^{\circ})\)
or
\(\mathrm{daytime} = 24^{\mathrm{h}} \cdot \left(1 - \frac{1}{180^{\circ}}\arccos\frac{\sin\lambda\sin\varepsilon\tan\varphi}{\sqrt{1-\sin^2\lambda\sin^2\varepsilon}}\right)\) \((\varphi \neq \pm 90^{\circ})\)
The daylight time at latitude \(36.5^{\circ}\)N as a function of the Sun's ecliptic longitude looks like this:
It looks very much like a sinusoid, so let us try to fit one:
\(\mathrm{daytime} \approx 12^{\mathrm{h}} \cdot \left[\left(1-\frac{\arccos(\tan\varepsilon\tan\varphi)}{90^{\circ}}\right) \cdot \sin\lambda + 1\right]\) \((\varphi \neq \pm 90^{\circ})\)
Looking at the error: the maximum error is about \(\pm 6\ \mathrm{min}\). The fit is mediocre and the expression is not much simpler, so the original formula may as well be kept.
With daylight time, the Sunrise Day is not always available in the following two formulas:
\(T_{sunrise} = T_{noon}-\frac{\mathrm{daytime}}{2}\) (1)
\(T_{sunset} = T_{noon} + \frac{\mathrm{daytime}}{2}\) (2)
The problem comes again: noon \(T_{noon}\) is not necessarily 12:00:
First, zone time and local (longitude-based) time can differ considerably, which shifts local noon away from 12:00.
Second, because the Sun's apparent motion is non-uniform (the equation of time), true noon also deviates from 12:00.
The two effects are superimposed. The first is easy to handle: knowing the longitude of the time zone's central meridian and the local longitude is enough. The second is harder. The book Astronomical Algorithms gives an approximate formula (I have dropped the higher-order terms):
\(E' = t_{mean}-t_{true} = 4 \cdot \left[\tan^2{\left(\frac{\varepsilon}{2}\right)}\cdot \sin{2L}+2e\cdot \sin{(L-\bar{\omega})}\right]\) (the unit of \(E'\) is minutes)
where \(L\) is the Sun's mean ecliptic longitude (that is, the ecliptic longitude of a fictitious mean Sun that moves uniformly along the ecliptic with a period of one tropical year). It can be approximated by the number of days since the last vernal equinox multiplied by the Sun's mean angular rate \(\frac{{360}^{\circ}}{{365.2422}^{\mathrm{d}}}\). Here \(e\) is the eccentricity of the Earth's orbit and \(\bar{\omega}\) is the ecliptic longitude of perihelion. Over a short span of years, \(e \approx 0.0167\) and \(\bar{\omega}\approx {102.982}^{\circ}\) can be treated as constants.
Plotting \(E'\) as a function of \(L\):
Now, to find the time of noon, set \(t_{true} = 12:00\) and solve:
\(t_{mean}=12:00+E'\)
Adding the correction for the zone and local longitudes finally gives:
\(t_{noon}=12:00 + 4^{\mathrm{m}}\cdot (L_{zone}-L_{local}) + 4^{\mathrm{m}} \cdot \left[\tan^2{\left(\frac{\varepsilon}{2}\right)}\cdot \sin{2L}+2e\cdot \sin{(L-\bar{\omega})}\right]\)
where \(L_{zone}\) and \(L_{local}\) are the east longitudes of the time zone's central meridian and of the observer, respectively.
At this point we are done: formulas (1) and (2) now give the sunrise and sunset times.
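Putting it all together, here is a hedged Python sketch (mine, not from the original text) of the equation-of-time correction and the sunrise/sunset formulas (1) and (2); the constants \(\varepsilon\), \(e\), \(\bar{\omega}\) are the values quoted above, and \(L\) is supplied by the caller as described:

```python
import math

EPS, ECC, PERI = 23.44, 0.0167, 102.982   # epsilon, e, omega-bar (degrees)

def equation_of_time_min(L_deg):
    """E' = 4*[tan^2(eps/2)*sin(2L) + 2e*sin(L - omega_bar)] in minutes,
    where L is the Sun's mean ecliptic longitude in degrees."""
    L = math.radians(L_deg)
    bracket = (math.tan(math.radians(EPS / 2)) ** 2 * math.sin(2 * L)
               + 2 * ECC * math.sin(L - math.radians(PERI)))
    return 4.0 * math.degrees(bracket)         # 4 minutes of time per degree

def sunrise_sunset(phi_deg, L_deg, lon_zone_deg, lon_local_deg):
    """Zone-time sunrise and sunset (hours) via formulas (1) and (2)."""
    delta = math.asin(math.sin(math.radians(L_deg)) * math.sin(math.radians(EPS)))
    x = -math.tan(delta) * math.tan(math.radians(phi_deg))
    x = max(-1.0, min(1.0, x))                 # clamp for polar day/night
    daytime = 24.0 * math.degrees(math.acos(x)) / 180.0
    t_noon = 12.0 + (4.0 * (lon_zone_deg - lon_local_deg)
                     + equation_of_time_min(L_deg)) / 60.0
    return t_noon - daytime / 2, t_noon + daytime / 2

# Observer at 36.5 deg N on the zone's central meridian, near the vernal equinox:
rise, sset = sunrise_sunset(36.5, 0.0, 120.0, 120.0)
print(round(rise, 2), round(sset, 2))   # about 5.88 and 17.88
```

Near the equinox the day is exactly 12 hours long, but both endpoints are shifted a few minutes earlier by the equation of time.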
A rough calculation of daylight time and sunrise/sunset times
|
Which, if any, axioms of ZFC are known to not be derivable from the other axioms?
Which, if any, axioms of PA are known to not be derivable from the other axioms?
There are several interesting issues here.
The first is that there are different axiomatizations of PA and ZFC.
If you look at several set theory books you are likely to find several different sets of axioms called "ZFC". Each of these sets is equivalent to each of the other sets, but they have subtly different axioms. In one set, the axiom scheme of comprehension may follow from the axiom scheme of replacement; in another set of axioms it may not. That makes the issue of independence harder to answer in general for ZFC; you have to really look at the particular set of axioms being used. PA has two different common axiomatizations. For the rest of this answer I will assume the axiomatization from Kaye's book Models of Peano Arithmetic, which is based on the axioms for a discretely ordered semiring.
The second issue is that both PA and ZFC (in any of their forms) have an infinite number of axioms, because they both have infinite axiom schemes. Moreover, neither PA nor ZFC is finitely axiomatizable. That means, in particular, that given any finite number of axioms of one of these theories, there is some other axiom that is not provable from the given finite set.
Third, just to be pedantic, I should point out that, although PA and ZFC are accepted to be consistent, if they were inconsistent, then every axiom would follow from a minimal inconsistent set of axioms. The practical effect of this is that any proof of independence has to either prove the consistency of the theory at hand, or assume it.
Apart from these considerations, there are other things that can be said, depending on how much you know about PA and ZFC.
In PA, the axiom scheme of induction can be broken into infinitely many infinite sets of axioms in a certain way using the arithmetical hierarchy; these sets of axioms are usually called $\text{I-}\Sigma^0_0$, $\text{I-}\Sigma^0_1$, $\text{I-}\Sigma^0_2$ , $\ldots$. For each $k$, $\text{I-}\Sigma^0_k \subseteq \text{I-}\Sigma^0_{k+1}$. The remaining non-induction axioms of PA are denoted $\text{PA}^-$. Then the theorem is that, for each $k$, there is an axiom in $\text{I-}\Sigma^0_{k+1}$ that is not provable from $\text{PA}^- + \text{I-}\Sigma^0_k$. This is true for both common axiomatizations of PA.
In ZFC, it is usually more interesting to ask which axioms
do follow from the others. The axiom of the empty set (for the authors who include it) follows from an instance of the axiom scheme of separation and the fact that $(\exists x)[x \in x \lor x \not \in x]$ is a formula in the language of ZFC that is logically valid in first order logic, so ZFC trivially proves that at least one set exists.
In ZFC, there are some forms of the axiom scheme of separation that follow from the remainder of ZFC when particular forms of the axiom of replacement are used. The axiom of pairing is also redundant from the other axioms in many presentations. There are likely to be other redundancies in ZFC as well, depending on the presentation.
One reason that we do not remove the redundant axioms from ZFC is that it is common in set theory to look at fragments of ZFC in which the axiom of powerset, the axiom scheme of replacement, or both, are removed. So axioms that are redundant when these axioms are included may not be redundant once these axioms are removed.
This is a community wiki answer to gather references. Please feel free to edit it.
Abraham Robinson. On the independence of the axioms of definiteness (Axiome der Bestimmtheit), Journal of Symbolic Logic, Volume 4, Number 2 (1939), pp. 69-72.
Elliott Mendelson, Some Proofs of Independence in Axiomatic Set Theory, Journal of Symbolic Logic, Volume 21, Number 3 (Sep., 1956), pp. 291-303. MR0084463 (18,864c).
Paul Cohen. The independence of the continuum hypothesis, Proc. Nat. Acad. Sci. U.S.A., Volume 50, Number 6 (1963), pp. 1143–1148. MR0157890 (28 #1118); and The independence of the continuum hypothesis. II, Proc. Nat. Acad. Sci. U.S.A., Volume 51, Number 1 (1964), pp. 105–110. MR0159745 (28 #2962).
Alexander Abian and Samuel LaMacchia. On the consistency and independence of some set-theoretical axioms. Notre Dame Journal of Formal Logic, Volume 19, Number 1 (1978), pp. 155-158. MR0477290 (81e:04001).
Greg Oman.
On the axiom of union, Arch. Math. Logic, 49 (3), (2010), 283–289. MR2609983 (2011g:03122). See also this MO question, and here. (I do not know of an original reference for the fact that $\mathsf{ZFC}-\mathrm{Union}$ does not suffice to prove the existence of infinite unions. The paper includes the usual proof of this fact, and clarifies precisely which unions can be proved to exist in this theory: $\bigcup x$ exists iff $\{|y|\colon y\in x\}$ is bounded.)
|
Let $G$ be a "nice" infinite group: at least finitely presented and residually finite, maybe also linear and right-orderable (or even bi-orderable, or residually free nilpotent).
Consider an element $\lambda$ in the group ring $\mathbb Z[G]$ which is "residually invertible", i.e., every image $\overline\lambda\in\mathbb Z[G/H]$, $H$ a normal subgroup of finite index, is invertible. Is $\lambda$ itself invertible?
Motivation: one can generalize the question to matrices in ${\rm M}_n(\mathbb Z[G])$, and also replace $\mathbb Z[G]$ by its Novikov completion in the direction of a nonzero morphism $u:G\to\mathbb R$ :
$$\mathbb Z[G]_u=\{\sum_{n=0}^\infty a_ng_n: a_n\in\mathbb Z,g_n\in G,u(g_n)\to+\infty\}.$$
A positive answer to the analogous question on detecting invertible matrices in ${\rm M}_n(\mathbb Z[G])_u$, for $G=\pi_1(M)$ with $M$ a closed $3$-manifold, would have the following consequence: a nonzero class $u\in H^1(M,\mathbb Z)={\rm Hom}(\pi_1(M),\mathbb Z)$ would be represented by a fibration $M\to S^1$ if and only if every twisted Alexander polynomial associated to a finite covering of $M$ is unitary (= bi-unitary).
Remarks. I have a proof for $\mathbb Z$ (!), elementary but not completely obvious. This implies the result (also for matrices) if $G$ is virtually free Abelian. From this one can prove the result for $G={\rm Heis}_3(\mathbb Z)$, the $3$-dimensional Heisenberg group over $\mathbb Z$ (matrices $\pmatrix{1&x&z\cr0&1&y\cr0&0&1}$ with $x,y,z\in\mathbb Z$), the simplest free nilpotent group. However, I do not see how to prove it for matrices over ${\rm Heis}_3(\mathbb Z)$, nor for general free nilpotent groups. Note that a proof for all free nilpotent groups would imply the result for residually free nilpotent groups, which include (I believe) most fundamental groups of closed $3$-manifolds, in particular all the hyperbolic ones.
|
I considered a particle in polar coordinates, $(r,\theta)$, with mass $m$. The standard basis vectors in polar coordinates are: $$\mathbf{\hat{r}}=\cos{\theta}\mathbf{\hat{x}}+\sin{\theta}\mathbf{\hat{y}}$$ And: $$\boldsymbol{\hat{\theta}}=\frac{\partial\mathbf{\hat{r}}}{\partial\theta}=-\sin{\theta}\mathbf{\hat{x}}+\cos{\theta}\mathbf{\hat{y}}$$ Differentiating the vector $\mathbf{r}$ to the particle twice, we find that: $$\mathbf{\ddot{r}}=(\ddot{r}-r\dot{\theta}^2)\mathbf{\hat{r}}+(2\dot{r}\dot{\theta}+r\ddot{\theta})\boldsymbol{\hat{\theta}}$$ From which it follows that the radial component of force on this particle is $F_r=m(\ddot{r}-r\dot{\theta}^2)$ and the tangential component is $F_\theta=m(2\dot{r}\dot{\theta}+r\ddot{\theta})$.
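A useful intermediate step (standard, added here for completeness, not part of the original question): differentiating $\mathbf{r}=r\mathbf{\hat{r}}$ once, using $\frac{d\mathbf{\hat{r}}}{dt}=\dot{\theta}\boldsymbol{\hat{\theta}}$ and $\frac{d\boldsymbol{\hat{\theta}}}{dt}=-\dot{\theta}\mathbf{\hat{r}}$ (the chain rule applied to the basis vectors above), gives $$\mathbf{\dot{r}}=\dot{r}\mathbf{\hat{r}}+r\dot{\theta}\boldsymbol{\hat{\theta}}$$ Differentiating this once more and collecting the $\mathbf{\hat{r}}$ and $\boldsymbol{\hat{\theta}}$ components reproduces the expression for $\mathbf{\ddot{r}}$ above.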
I was able to understand three out of four of the terms in this pair of equations by considering the particle undergoing radial and circular motion (in which case $\dot{\theta}=0$ and $\dot{r}=0$, respectively).
Incidentally, however, the $2m\dot{r}\dot{\theta}$ term is the Coriolis force. But isn't this force fictitious and only observable in a non-inertial reference frame? Was I working in a non-inertial reference frame during this derivation? Does what I'm asking even make sense?
I think I primarily need some clarification of how inertial/non-inertial reference frames come into play in this derivation.
|
Considerations When Using Cylinder Lenses
This is a supplementary section of the Laser Optics Resource Guide.
Cylinder lenses are similar to spherical lenses in the sense that they use curved surfaces to converge or diverge light, but they have optical power in only one dimension and will not affect light in the perpendicular dimension. This is impossible to accomplish using spherical lenses as light will focus or diverge uniformly in a rotationally symmetric manner. Cylinder lenses play an important role in the manipulation and shaping of laser light and are used for forming laser light sheets and circularizing elliptical beams. Due to the asymmetric nature of cylinder lenses and the specialized manufacturing processes required, it is important that the centration, wedge, and axial twist are specified and properly controlled.
For this reason, cylinder lenses require specialized equipment and skills to manufacture, along with requiring a unique coordinate system to effectively reference features of a lens. Two orthogonal directions define the reference system: the power direction and the non-power direction. The first direction is called the “power direction” because it runs along the curved length of the lens, and is the only axis with optical power (Figure 1). The second direction is called the “non-power direction” because it runs along the length of the lens without any optical power. The length of the cylinder lens along the non-power direction can extend without affecting the optical power of the lens. Cylinder lenses can have a variety of form factors including rectangular, square, circular, and elliptical shapes.

Figure 1: Power and non-power directions in both rectangular and circular cylinder lenses

Errors, Aberrations, and Specifications
No manufacturing process is free of imperfections, and cylinder lens manufacturing is no exception, which makes small amounts of geometric errors unavoidable. Misalignment during the polishing process could lead to a number of mechanical errors specific to cylinder lenses that can cause optical aberrations and negatively impact performance. Therefore, these errors must be tightly controlled to guarantee the performance of the lens. These errors are defined with respect to geometric datums including the planar side of the lens and the edges of the lens.
Wedge
In an ideal cylinder, the planar side of the lens is parallel to the cylinder axis. Angular deviation between the planar side of the lens and the cylinder axis is known as the wedge, which is typically measured in arcmin (Figure 2). This angle is determined by measuring the two end thicknesses of the lens and calculating the angle between them. Wedge leads to an image shift in the non-power direction, just like wedge in a window.

Figure 2: Example of an exaggerated wedge caused by end thickness difference in the non-power direction of a cylinder lens

Centration
The optical axis of the curved surface is parallel to the edges of the lens in an ideal cylinder lens (Figure 3). Similar to decenter of a surface with optical power in a spherical optic, the centration error of a cylinder lens is an angular deviation of the optical axis with respect to the edges of the lens. This centration angle (α) causes the optical and mechanical axes of the lens to no longer be collinear, leading to beam deviation. If the edges of the lens are used as a mounting reference, this error can make optical alignment very difficult. However, if the edges of the lens are not relied on as a mounting reference, it is possible to remove this error by decentering the lens in the correct direction. The larger the diameter of a cylinder lens, the larger the associated edge thickness difference for a given centration angle.

Figure 3: Example of centration error caused by an edge thickness difference in the power direction of a cylinder lens

Axial Twist
Axial twist is an angular deviation between the cylinder axis and the edges of a lens. Axial twist represents a rotation of the powered surface of the cylinder lens with respect to the outer dimensions, leading to a rotation of the image about the optical plane. This is especially detrimental to an application when rectangular elements are secured by their outer dimensions (Figure 4). Rotating a cylinder lens to realign the cylinder axis can counteract axial twist.

Figure 4: Example of axial twist in a cylinder lens

Applications
Cylinder lenses are most commonly used in laser beam shaping to correct an asymmetric beam, create a line, or generate a light sheet. Modern scientific methods such as Particle Image Velocimetry (PIV) and Laser Induced Fluorescence (LIF) often require a thin laser line or an even laser light sheet. Structured laser light is also an important tool for scanning, measurement, and alignment applications. With low cost laser diodes now readily available, another common application is simply circularizing the elliptical output from a diode to create a collimated and symmetric beam.
Forming a Light Sheet
A light sheet is a beam that diverges in both the X and the Y axes. Light sheets include a rectangular field orthogonal to the optical axis, expanding as the propagation distance increases. A laser line generated using a cylinder lens can also be considered a light sheet, although the sheet has a triangular shape and extends along the optical axis.
To create a true laser light sheet with two diverging axes, a pair of convex or concave cylinder lenses orthogonal to each other are required (Figure 5). Each lens acts on a different axis and the combination of both lenses produces a diverging sheet of light.

Figure 5: Example of orthogonal cylinder lenses used to generate a rectangular light sheet

Circularizing a Beam
A laser diode with no collimating optics will diverge in an asymmetrical pattern. A spherical optic cannot be used to produce a circular collimated beam as the lens acts on both axes at the same time, maintaining the original asymmetry. An orthogonal pair of cylinder lenses allows each axis to be treated separately.
To achieve a symmetrical output beam, the ratio of the focal lengths of the two cylinder lenses should match the ratio of the X and Y beam divergences. Just as with standard collimation, the diode is placed at the focal point of both lenses and the separation between the lenses is therefore equal to the difference of their focal lengths (Figure 6).

Figure 6: Example of circularizing an elliptical beam using cylinder lenses
Laser diodes may have a very large divergence, which can be a challenge when trying to collimate because divergence has a direct effect on the allowable length of the system, as well as the required sizes of the lenses. As the relative positions of each component are fairly fixed due to their focal length, it is possible to calculate the maximum beam width (d) at each lens using the focal length of the lens (f) and the divergence angle (θ) of the axis it is collimating. The clear aperture of each lens must then be larger than the corresponding maximum beam width.
(1)$$ d = 2f\times \tan\!\left(\frac{\theta}{2}\right) $$
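As an illustration of Equation 1 and the focal-length-ratio rule described above, the following Python sketch (not from the article; the diode divergences and focal length are hypothetical) computes the fast-axis focal length, the common output beam width, and the lens separation:

```python
import math

def collimator_design(theta_fast_deg, theta_slow_deg, f_slow_mm):
    """Circularize an elliptical diode beam with two cylinder lenses.
    Uses Equation 1, d = 2*f*tan(theta/2), and matches the focal-length
    ratio to the divergence ratio so both output beam widths are equal."""
    t_fast = math.tan(math.radians(theta_fast_deg) / 2)
    t_slow = math.tan(math.radians(theta_slow_deg) / 2)
    f_fast = f_slow_mm * t_slow / t_fast       # shorter lens for the fast axis
    d = 2.0 * f_slow_mm * t_slow               # common output beam width (mm)
    separation = f_slow_mm - f_fast            # difference of focal lengths (mm)
    return f_fast, d, separation

# Hypothetical diode: 30 deg (fast) x 10 deg (slow) divergence, f_slow = 75 mm.
f_fast, beam_width, gap = collimator_design(30.0, 10.0, 75.0)
print(round(f_fast, 1), round(beam_width, 1), round(gap, 1))   # 24.5 13.1 50.5
```

The clear aperture of each lens must then exceed the computed beam width at that lens, per Equation 1.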
|
The question is simple: How do I find the function derivative of $$(\delta/\delta \phi(x)) (\partial_\mu \phi(x))~?$$ As far as I can tell, I cannot use any of the standard computational rules for the functional derivative.
The expression $\frac{\delta \partial_\mu\phi(y)}{\delta \phi(y)}$ is mathematically
meaningless.
By definition, given a functional $F$ associating reals (or, more generally, complex numbers) $F[\phi]$ to smooth functions $\phi$, we say that the
distribution $\frac{\delta F}{\delta \phi(x)}$ is the functional derivative of $F$, if $$\frac{d}{d\alpha}|_{\alpha=0} F[\phi + \alpha f] = \int \frac{\delta F}{\delta \phi(x)} f(x) dx$$for every compactly-supported smooth function $f$.
In the considered case, one has to compute the functional derivative of the functional $F$ associating $\partial_\mu \phi(y)$ to $\phi$, i.e., $$\partial_\mu \phi(y) := \int \partial_\mu \phi(x) \delta(x-y) dx\:.$$ We have $$\frac{d}{d\alpha}|_{\alpha=0} F[\phi + \alpha f] = \frac{d}{d\alpha}|_{\alpha=0} \int \partial_\mu (\phi(x)+ \alpha f(x)) \delta(x-y) dx = \int \partial_\mu f(x) \delta(x-y) dx$$ $$= -\int f(x) \partial_\mu\delta(x-y) dx\:.$$ We conclude that $$\frac{\delta \partial_\mu\phi(y)}{\delta \phi(x)} = - \partial^{(x)}_\mu\delta(x-y) = \partial^{(y)}_\mu\delta(x-y)\:. $$ So $\frac{\delta }{\delta \phi(x)}$ and $\partial^{(y)}_\mu$ commute as said by @AccidentalFourierTransform.
In summary, $\frac{\delta \partial_\mu\phi(y)}{\delta \phi(y)}$ is
not defined because the value at a fixed point of a non-regular distribution has no meaning.
As correctly pointed out in the answer by Valter Moretti, it is mathematically ill-defined to apply (the traditional definition of) the functional/variational derivative (FD) $$ \frac{\delta {\cal L}(x)}{\delta\phi^{\alpha} (x)} \tag{1}$$ to the same spacetime point.
However, it is very common to introduce a 'same-spacetime' FD as $$ \frac{\delta {\cal L}(x)}{\delta\phi^{\alpha} (x)}~:=~ \frac{\partial{\cal L}(x) }{\partial\phi^{\alpha} (x)} - d_{\mu} \left(\frac{\partial{\cal L}(x) }{\partial\partial_{\mu}\phi^{\alpha} (x)} \right)+\ldots. \tag{2} $$ which obscures/betrays its variational origin, but is often used for notational convenience. (The ellipsis $\ldots$ in eq. (2) denotes possible contributions from higher-order spacetime derivatives.) See e.g. this, this, & this Phys.SE posts.
If we interpret OP's expression via eq. (2), then OP's Lagrangian density ${\cal L}=\partial_{\mu} \phi$ is a total space-time derivative, so that OP's 'same-spacetime' FD vanishes, cf. e.g. this Phys.SE post.
|
Electrochemical Impedance Spectroscopy: Experiment, Model, and App
Electrochemical impedance spectroscopy is a versatile experimental technique that provides information about an electrochemical cell’s different physical and chemical phenomena. By modeling the physical processes involved, we can constructively interpret the experiment’s results and assess the magnitudes of the physical quantities controlling the cell. We can then turn this model into an app, making electrochemical modeling accessible to more researchers and engineers. Here, we will look at three different ways of analyzing EIS: experiment, model, and simulation app.
Electrochemical Impedance Spectroscopy: The Experiment
Electrochemical impedance spectroscopy (EIS) is a widely used experimental method in electrochemistry, with applications such as electrochemical sensing and the study of batteries and fuel cells. This technique works by first polarizing the cell at a fixed voltage and then applying a small additional voltage (or occasionally, a current) to perturb the system. The perturbing input oscillates harmonically in time to create an alternating current, as shown in the figure below.
An oscillating perturbation in cell voltage gives an oscillating current response.
For a certain amplitude and frequency of applied voltage, the electrochemical cell responds with a particular amplitude of alternating current at the same frequency. In real systems, the response may also be complicated by components at other frequencies — we’ll return to this point below.
EIS experiments typically vary the frequency of the applied perturbation across a range of mHz and kHz. The relative amplitude of the response and time shift (or phase shift) between the input and output signals change with the applied frequency.
These factors depend on the rates at which physical processes in the electrochemical cell respond to the oscillating stimulus. Different frequencies are able to separate different processes that have different timescales. At lower frequencies, there is time for diffusion or slow electrochemical reactions to proceed in response to the alternating polarization of the cell. At higher frequencies, the applied field changes direction faster than the chemistry responds, so the response is dominated by capacitance from the charge and discharge of the double layer.
The time-domain response is not the simplest or most succinct way to interpret these frequency-dependent amplitudes and phase shifts. Instead, we define a quantity called an
impedance. Like resistance in a static system, impedance is the ratio of voltage to current. However, it uses the real and imaginary parts of a complex number to represent the relation of both amplitude and phase to the input signal and output response. The mathematical tool that relates the impedance to the time-domain response is a Fourier transform, which represents the frequency components of the oscillating signal.
To explain the idea of impedance more fully for a simple case, consider the input voltage as a cosine wave oscillating at an angular frequency (ω):

V(t) = V_0\cos(\omega t)
Then the response is also a cosine wave, but with a phase offset (φ):

I(t) = I_0\cos(\omega t + \phi)

Compared to the time shift in the image above, the phase offset is given as \phi = -\omega \,\delta t . The magnitude of the current and its phase offset depend on the physics and chemistry in the cell.
Now, let’s consider the resistance from Ohm’s law:

R(t) = \frac{V(t)}{I(t)} = \frac{V_0\cos(\omega t)}{I_0\cos(\omega t + \phi)}

This quantity varies in time with the same frequency as the perturbing signal. It equals zero at times when the numerator also equals zero and becomes singular when the denominator equals zero. So unlike the resistance in a DC system, it’s not a very useful quantity!
Instead, from Euler’s theorem, let’s express the time-varying quantities as the real parts of complex exponentials, so that:

V(t) = \operatorname{Re}\!\left[V_0\,e^{i\omega t}\right]

and

I(t) = \operatorname{Re}\!\left[I_0\,e^{i\phi}\,e^{i\omega t}\right]

We denote the coefficients V_0 and I_0\,\exp(i\phi) as quantities \bar{V} and \bar{I}, respectively.
These are complex amplitudes that can be understood in terms of the Fourier transformation of the original time-domain sinusoidal signals. They express the distinct amplitudes and phase difference of the voltage and current. Because all of the quantities in the system are oscillating sinusoidally, we understand the physical effects by comparing these complex quantities, rather than the time-domain quantities. To describe the oscillating problem (often called
phasor theory), we define a complex analogue of resistance as:

Z = \frac{\bar{V}}{\bar{I}} = \frac{V_0}{I_0}\,e^{-i\phi}

This is the impedance of the system and, as the name suggests, it’s the quantity we measure in electrochemical impedance spectroscopy. It’s a complex quantity with a magnitude and phase, representing both resistive and capacitive effects. Resistance contributes the real part of the complex impedance, which is in-phase with the applied voltage, while capacitance contributes the imaginary part of the complex impedance, which is precisely out-of-phase with the applied voltage.
EIS specialists look at the impedance in the form of a spectrum, normally with a Nyquist plot. This plots the imaginary component of impedance against the real component, with one data point for every frequency at which the impedance has been measured. Below is an example from a simulation — we’ll discuss how it’s modeled in the next section.
Simulated Nyquist plot from an electrochemical impedance spectroscopy experiment. Points toward the top right are at lower frequencies (mHz), while those toward the bottom left are at higher frequencies (>100 Hz).
In the figure above, the semicircular region toward the left side shows the coupling between double-layer capacitance and electrode kinetic effects at frequencies faster than the physical process of diffusion. The diagonal “diffusive tail” on the right comes from diffusion effects observed at lower frequencies.
EIS experiments are useful because information about many different physical effects can be extracted from a single analysis. There is a quantitative relationship between properties like diffusion coefficients, kinetic rate constants, and dimensions of the features in Nyquist plots. Often, EIS experiments are interpreted using an “equivalent circuit” of resistors and capacitors that yields a similar frequency-dependent impedance to the one shown in the Nyquist plot above. This idea was discussed in my colleague Scott’s blog post on electrochemical resistances and capacitances.
When there is a linear relation between the voltage and current, only one frequency will appear in the Fourier transform. This simplifies the analysis significantly.
For the simple harmonic interpretation of the experiment in terms of impedance, we need the current response to oscillate at the same frequency as the voltage input. This means that the system must respond linearly. For an electrochemical cell, we can usually accomplish this by ensuring that the applied voltage is small compared to the quantity
RT/F — the ratio of the gas constant multiplied by the temperature to the Faraday constant. This is the characteristic “thermal voltage” in electrochemistry and is about 25 mV at normal temperatures. Smaller voltage changes usually induce a linear response, while larger voltage changes cause an appreciably nonlinear response.
Of course, with simulation to predict the time-domain current, we can always consider a nonlinear case and perform a Fourier transform numerically to study the effect on the impedance. In practice, the interpretation in terms of impedance illustrated above is best suited to the harmonic assumption. Impedance measurements are therefore often used in a complementary manner with transient techniques, such as amperometry or voltammetry, which are better suited for investigating nonlinear or hysteretic effects.
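To see the Fourier-transform relationship concretely, the following plain-Python sketch (an illustration of the general idea, not code from the post; all numbers are made up) synthesizes a voltage and a phase-shifted current at one frequency and recovers the complex impedance by projecting each signal onto exp(−iωt):

```python
import cmath
import math

def complex_amplitude(samples, freq, dt):
    """Single-frequency Fourier projection: (2/N) * sum x[n] * exp(-i*w*n*dt).
    Over an integer number of periods this recovers the complex amplitude."""
    w = 2.0 * math.pi * freq
    n = len(samples)
    return 2.0 / n * sum(x * cmath.exp(-1j * w * k * dt)
                         for k, x in enumerate(samples))

# Synthetic signals: 10 mV at 1 Hz; a 2 mA current lagging the voltage by 30 deg.
f, dt, N = 1.0, 1e-3, 2000                     # exactly two periods of samples
v = [0.010 * math.cos(2 * math.pi * f * k * dt) for k in range(N)]
i = [0.002 * math.cos(2 * math.pi * f * k * dt - math.pi / 6) for k in range(N)]

Z = complex_amplitude(v, f, dt) / complex_amplitude(i, f, dt)
print(abs(Z), math.degrees(cmath.phase(Z)))    # about 5.0 ohm and +30 deg
```

If the cell responded nonlinearly, projections at harmonics of the drive frequency would also be nonzero, which is exactly why the impedance interpretation assumes a small, linear perturbation.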
Let’s look at a simple example of the physical theory that underpins these ideas to see how the impedance spectrum relates to the real controlling physics.
Electrochemical Impedance Spectroscopy: The Model
To model an EIS experiment, we must describe the key underlying physical and chemical effects, which are the electrode kinetics, double-layer capacitance, and diffusion of the electrochemical reactants. In electroanalytical systems, a large quantity of artificially added supporting electrolytes keeps the electric field low so that solution resistance can be neglected. In this case, we can describe the mass transport of chemical species in the system using the diffusion equation (Fick’s laws) with suitable boundary conditions for the electrode kinetics and capacitance. In the COMSOL Multiphysics® software, we use the
Electroanalysis interface together with an Electrode Surface boundary feature to describe these equations.
For more details about how to set up this model, you can download the Electrochemical Impedance Spectroscopy tutorial example in the Application Library.
Model tree for the Electroanalysis interface in an EIS model.
Under
Transport Properties, we can specify the diffusion coefficients of the redox species under consideration. We at least need the reduced and oxidized species for a single redox couple, such as the common redox couple ferro/ferricyanide, to use as an analytical reference. The Concentration boundary condition defines the fixed bulk concentrations of these species. The Electrode Reaction and Double Layer Capacitance subnodes for the Electrode Surface boundary feature contribute Faradaic and non-Faradaic current, respectively. For the double-layer capacitance, we typically use an empirically measured equivalent capacitance and specify the electrode reaction according to a standard kinetic equation like the Butler-Volmer equation.
Note that we’re not referring to equivalent circuit properties at all here. In COMSOL Multiphysics, all of the inputs in the description of the electrochemical problem are physical or chemical quantities, while the output is a Nyquist plot. When analyzing the problem in reverse, we’re able to use an observed Nyquist plot from our experiments to make inferences about the real values of these physical and chemical inputs.
In the settings for the Electrode Surface feature, we represent the impedance experiment by applying a Harmonic Perturbation to the cell voltage.

Settings for the Electrode Surface boundary feature in an EIS model.
Here, the quantity
V_app is the applied voltage.
The harmonic perturbation is applied with respect to a resting steady voltage (or current) on the cell. In this case, we have set this to a reference value of zero volts. With more advanced models, we might consider using the results of another COMSOL Multiphysics model, one that’s significantly nonlinear for example, to find the resting conditions to which the perturbation is applied. If you’re interested in understanding the mathematics of the harmonic perturbation in greater detail, my colleague Walter discussed them in a previous blog post.
When studying lithium-ion batteries, for example, we can perform a time-dependent analysis of the cell’s discharge, studying its charge transport, diffusion and migration of the lithium electrolyte, and the electrode kinetics and diffusion of the intercalated lithium atoms. We can pause this simulation at various times to consider the impedance measured from a rapid perturbation. For further insight into the physics involved, you can read my colleague Tommy’s blog post on modeling electrochemical impedance in a lithium-ion battery.
Electrochemical Impedance Spectroscopy: The Simulation App
A frequent demand for electrochemical simulations is that they “fit” experimental data in order to determine unknown physical quantities or, more generally, to interpret the data at all. Even for experienced electroanalytical chemists, it can be difficult to intuitively “see” the physics and chemistry in the underlying graphs like the Nyquist plot. However, by simulating the plots under a range of conditions, the influence of different effects on the overall graph is revealed.
Simulation is helpful for analyzing EIS, but it can also be time consuming for the experts involved. As was the case with my old research group, these experts can spend more time writing programs and running models to fit data together with experimental researchers than on the science. Wouldn’t it be nice if all electrochemical researchers could load experimental data into a simple interface, simulate impedance spectra for a given physical model and inputs, and even perform automatic parameter fitting? The good news is that we can! With the Application Builder in COMSOL Multiphysics, we can create an easy-to-use EIS app based on an underlying model. As a model can contain any level of physical detail, the app provides direct access to the physical data and isn’t confined to simple equivalent circuits.
To highlight this, we have an EIS demo app based on the model available in the Application Library. The app user can set concentrations for electroactive species and tune the diffusion coefficients as well as the electrode kinetic rate constant and double-layer capacitance. After clicking the Compute button, the app generates results that can be visualized through Nyquist and Bode plots.

The EIS simulation app in action.
As well as enabling physical parameter estimation, this app is very helpful for teaching, since we can quickly change inputs and visualize the results that would occur in the experiment. A natural extension for the app is to import experimental data to the same Nyquist plot for direct comparison. We can also build up the underlying physical model to consider the influence of competing electrochemical reactions or follow-up homogeneous chemistry from the products of an electrochemical reaction.
Concluding Thoughts
Here, we’ve introduced electrochemical impedance spectroscopy and discussed some methods used to model it. We also saw how a simulation app built from a simple theoretical model can provide greater insight into the relationship between the theory of an electrochemical system and its behavior as observed in an experiment.
Further Reading

Explore other topics related to electrochemical simulation on the COMSOL Blog.
|
January 15th, 2015, 05:49 AM
# 1
Newbie
Joined: May 2013
Posts: 5
Thanks: 0
riemann hypothesis proof?: zeta(s) is never 0 for Re(s)>1/2, flaws? thanks!
The Riemann Zeta function can be represented by the Euler product \begin{align}
\zeta \left( s \right) = \prod^{\infty}_{p=primes} \frac {1}{1-1/p^{s}}
\end{align}
for $ s = a + ib \in C$ and $a=Re\left( s \right) > 1$.
Let
\begin{align}
F \left( s \right) = \prod^{\infty}_{p=primes} e^{- \frac {1}{p^{s}}}=e^{- P \left( s \right)}
\end{align}
for $ s = a + ib \in C $ and $a=Re\left( s \right) > 1$, where $P \left( s \right)$ is the Riemann Prime Zeta function. $P \left( s \right), F \left( s \right)$ and $\zeta \left( s \right)$ are analytic and can be extended to the complex plane for $0 < Re\left( s \right) < 1$.
The norm (squared) of the product $F \left( s \right)\zeta \left( s \right)$ is defined for $a=Re\left( s \right) > 1$ by
\begin{align}
|F \left( s \right)\zeta \left( s \right) |^2 =
\prod^{\infty}_{p=primes} | \frac {1}{1-1/p^{s}} |^2
\prod^{\infty}_{p=primes} | e^{- \frac {1}{p^{s}}} |^2
\end{align}
\begin{align}
|F \left( s \right)\zeta \left( s \right) |^2 =
\prod^{\infty}_{p=primes} \left( \frac {1}{1-1/p^{2a}} \right)
\left(\frac {p^{2a}-1}{p^{2a}-2 p^{a}\cos(b\ln{p})+1} \right)
\prod^{\infty}_{p=primes} e^{- \frac {2 \cos(b\ln{p}) }{p^{a}}}
\end{align}
\begin{align}
|F \left( s \right)\zeta \left( s \right) |^2 =
\prod^{\infty}_{p=primes} \left( \frac {1}{1-1/p^{2a}} \right)
\prod^{\infty}_{p=primes} \left(\frac {p^{2a}-1}{p^{2a}-2 p^{a}\cos(b\ln{p})+1} \right)
e^{- \frac {2 \cos(b\ln{p}) }{p^{a}}} (1)
\end{align}
The first product $ \prod^{\infty}_{p=primes} \left( \frac {1}{1-1/p^{2a}} \right)$ obviously converges absolutely for $a=Re\left( s \right) > \frac {1}{2}$ and since
\begin{align*}
\frac {p^{2a}-1}{p^{2a}-2 p^{a}\cos(b\ln{p})+1} = 1 + 2 \sum^{\infty}_{j=1} \frac {\cos \left( jb \ln {p} \right)}{p^{ja}} &
\end{align*}
for $a=Re\left( s \right) > 0$, then the second product in $(1)$
\begin{align}
\prod^{\infty}_{p=primes} \left(\frac {p^{2a}-1}{p^{2a}-2 p^{a}\cos(b\ln{p})+1} \right)
e^{- \frac {2 \cos(b\ln{p}) }{p^{a}}}
\end{align}
also converges absolutely for $a=Re\left( s \right) > \frac {1}{2}$ and $(1)$ can be extended in that range.
Since all terms in the products of $(1)$ are positive and real, then the norm
\begin{align*}
|F \left( s \right)\zeta \left( s \right) |^2 =
|e^{- P \left( s \right)}\zeta \left( s \right) |^2 =
\prod^{\infty}_{p=primes} \left( \frac {1}{1-1/p^{2a}} \right)
\prod^{\infty}_{p=primes} \left(\frac {p^{2a}-1}{p^{2a}-2 p^{a}\cos(b\ln{p})+1} \right)
e^{- \frac {2 \cos(b\ln{p}) }{p^{a}}}
\end{align*}
is always greater than 0 for $a=Re\left( s \right) > \frac {1}{2}$, and the Riemann hypothesis would be correct.
I would appreciate, if you have time, that you let me know what you think of this. If I have made a false assumption regarding analyticity, continuation of the norm inside the critical strip, or if the products were to be only conditionally convergent (possibly allowing exceptions to the argument?), I would still be at a loss to explain why the partial products agree so well with the actual result. I have checked the solution numerically for several points of $a=Re\left( s \right) > \frac {1}{2}$ and compared it with mathematical software output.
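(For anyone who wants to repeat the numerical comparison: a short script along the following lines checks the two sides of $(1)$ against each other with a finite prime cutoff. The sample point $s = 1.2 + 3.7i$ and the cutoff of $500$ are arbitrary choices of mine.)

```python
import cmath, math

def primes_up_to(n):
    sieve = [True] * (n + 1)
    sieve[0:2] = [False, False]
    for i in range(2, int(n ** 0.5) + 1):
        if sieve[i]:
            sieve[i*i::i] = [False] * len(sieve[i*i::i])
    return [p for p, is_p in enumerate(sieve) if is_p]

def lhs(s, primes):
    # partial product of |e^{-1/p^s}|^2 * |1/(1 - 1/p^s)|^2
    out = 1.0
    for p in primes:
        t = p ** (-s)
        out *= abs(cmath.exp(-t)) ** 2 * abs(1 / (1 - t)) ** 2
    return out

def rhs(a, b, primes):
    # partial product of the real-variable form (1)
    out = 1.0
    for p in primes:
        c = math.cos(b * math.log(p))
        out *= 1 / (1 - p ** (-2 * a))
        out *= ((p ** (2 * a) - 1) / (p ** (2 * a) - 2 * p ** a * c + 1)) \
               * math.exp(-2 * c / p ** a)
    return out

a, b = 1.2, 3.7                 # sample point with Re(s) > 1
ps = primes_up_to(500)
print(lhs(complex(a, b), ps), rhs(a, b, ps))  # the partial products agree
```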
January 15th, 2015, 12:49 PM
# 2
I finally figured out that I have just proved 1=1 ...
all that I needed was to substitute the analytic formula $P(s)=\sum_{k\ge 1} \frac{\mu(k)}{k}\ln\zeta(ks)$.
Thread will be closed.
Last edited by skipjack; January 15th, 2015 at 02:08 PM.
|
You probably meant to assume $R$ and $S$ are noetherian. The answer is "no" to the initial hypergeneral part of the question. EDIT: In the 2nd half (below the long line), I now give a proof of an affirmative answer to the added part involving maps of affine spaces.
Counterexamples to the initial hypergeneral part can be made using 2-dimensional regular excellent local rings built from henselization and completion of local rings at $k$-points on smooth schemes over any field $k$.
Let $R$ be a noetherian henselian local ring, and $S = \widehat{R}$, so $\widehat{S}=S$. Let $\pi:{\rm{Spec}}(\widehat{R}) \rightarrow {\rm{Spec}}(R)$ be the natural map. Then in this case an affirmative answer to your question says exactly that $\pi_{\ast}$ is exact and $\pi^{\ast} \circ \pi_{\ast} \rightarrow {\rm{id}}$ is an isomorphism of functors (all on the category of constructible abelian etale sheaves on ${\rm{Spec}}(\widehat{R})$). In particular, every such sheaf on ${\rm{Spec}}(\widehat{R})$ would be the $\pi$-pullback of one on ${\rm{Spec}}(R)$. Clearly this cannot be true in general, so it is just a game to find a suitable counterexample to that.
Let $Z' \subset {\rm{Spec}}(\widehat{R})$ be a Zariski-closed set and let $j':U' \hookrightarrow {\rm{Spec}}(\widehat{R})$ be the open subscheme complementary to $Z'$. Consider the extension-by-zero $F' := j'_{!}(\mathbf{Z}/(n))$ for an integer $n > 1$. Suppose $F' = \pi^{\ast}(F)$ for an abelian etale sheaf $F$ on ${\rm{Spec}}(R)$.
Let $Z$ be the set of points $x \in {\rm{Spec}}(R)$ such that $F_x = 0$. This satisfies $Z' = \pi^{-1}(Z)$, so $Z$ is closed since $\pi$ is topologically a quotient map (as it is fpqc). Hence, if there is a closed set $Z'$ that is not the preimage of a closed set in ${\rm{Spec}}(R)$ then we have a counterexample.
Suppose $R$ is excellent, so the fpqc map $\pi$ is a "regular morphism" (flat, with fiber algebras that are regular and remain so after finite extension of the base field). Thus, if $Z$ is a reduced closed set in ${\rm{Spec}}(R)$ then the scheme-theoretic preimage $\pi^{-1}(Z)$ is reduced. Consider any radical ideal $J'$ in $\widehat{R}$ and let $Z' := {\rm{Spec}}(\widehat{R}/J')$. If $Z' = \pi^{-1}(Z)$ topologically for a closed set $Z$ in ${\rm{Spec}}(R)$ then by using the reduced scheme structure on $Z$ we would have $\pi^{-1}(Z) = Z'$ as schemes. In other words, for the radical ideal $J$ in $R$ corresponding to such a $Z$ we would necessarily have $J' = J\widehat{R}$.
Note that $J \otimes_R \widehat{R} \rightarrow J\widehat{R}$ is an isomorphism, so if $J'$ is invertible then necessarily $J$ is invertible. In particular, if $J' = r'\widehat{R}$ for $r'$ that is not a zero-divisor then necessarily $J = rR$ for some $r \in R$ that is not a zero-divisor. In such a situation, $r'$ would have to be a unit multiple of $r$ in $\widehat{R}$. Thus, if we can find $r' \in \widehat{R}$ that isn't a zero-divisor such that no unit multiple of $r'$ lies in the subring $R$ then we have our counterexample (using $Z' = {\rm{Spec}}(\widehat{R}/(r'))$ and the associated $F'$ as above).
In the special case that $R$ is regular, so $\widehat{R}$ is also regular, we're just looking for a nonzero $r' \in \widehat{R}$ having no unit multiple in $R$. Obviously in the dvr case this cannot be arranged, so we go on to dimension 2.
Take $R = k[x,y]_{(x,y)}^{\rm{h}}$ for a field $k$, so $\widehat{R} = k[\![x,y]\!]$. The ring $R$ is excellent, since passage to henselization preserves excellence (18.7.6, EGA IV$_4$). Since $k(\!(x)\!)$ is not algebraic over $k(x)$ (lazy way is to use $e^x$ and assume characteristic is 0), we can find $h \in x k[\![x]\!]$ that is not algebraic over $k(x)$. For such $h$, the irreducible element $r' = y - h(x)$ in $\widehat{R}$ will do the job.
Indeed, suppose that $r'$ admits a unit multiple $r \in R$. We will show that $h$ must be algebraic over $k(x)$. Clearly $R/(r) \rightarrow \widehat{R}/(r') = k[\![x]\!]$ is a completion map, so $R/(r)$ is a dvr. Since henselization is compatible with quotients, it follows by the link between transcendence degree and dimension for finitely generated domains over $k$ that $R/(r)$ is a direct limit of a directed system of local-etale extensions of local rings at $k$-points on regular curves over $k$, and these curves must all be quasi-finite over the affine $x$-line over $k$ (since $x \in k[\![x]\!]$ is transcendental over $k$). Thus, everything in $R/(r)$ is algebraic over $k(x)$, including the class of $y$. But mapping that into $k[\![x]\!]$ with $y \mapsto h(x)$ in there, we conclude that $h$ is algebraic over $k(x)$.
Now let's give an affirmative proof under some additional "smoothness" hypotheses as follows. The key point is to use Artin-Popescu approximation and to work at the level of derived categories. Also, one has to be extremely careful because certain schemes will intervene that (as far as I know) might fail to be noetherian or excellent, even though our original setup will involve only excellent schemes and smooth maps. (Maybe for some reasons which escape me, the non-noetherian concerns in the argument below cannot really happen?)
Setup: Let $\mathcal{X} \rightarrow \mathcal{Y}$ be a smooth map of noetherian schemes with $\mathcal{Y}$
excellent. Choose $y \in \mathcal{Y}$ and a $k(y)$-rational point $x \in \mathcal{X}_y$. Let $X = {\rm{Spec}}(O_{\mathcal{X},x}^{\rm{h}})$ and $Y = {\rm{Spec}}(O_{\mathcal{Y},y}^{\rm{h}})$. Let $X'$ and $Y'$ denote Spec of the corresponding completed local rings, and let $\pi:X \rightarrow Y$ and $\pi':X' \rightarrow Y'$ be the natural maps.
Claim: Let $F$ be a torsion etale abelian sheaf on $X$ whose torsion-orders are
invertible on $Y$. Let $G \rightsquigarrow G'$ be shorthand for pullback on derived categories (of abelian etale sheaves) from $Y$ to $Y'$ or from $X$ to $X'$. Then the natural base change morphism $$R\pi_{\ast}(F)' \rightarrow R\pi'_{\ast}(F')$$ is an isomorphism for any torsion $F$ on $X$ whose torsion-orders are invertible on $Y$.
Proof: By Artin-Popescu approximation, since $\mathcal{Y}$ is excellent we know that the map $Y' \rightarrow Y$ is a limit of smooth maps. Hence, if we let $T = X \times_Y Y'$ and $p:T \rightarrow Y'$ be the projection then by the smooth base change theorem and standard limit stuff with etale sheaf theory, the natural map $R\pi_{\ast}(F)' \rightarrow Rp_{\ast}(G)$ is an isomorphism, where $G$ is the pullback of $F$ along the first projection.
I don't know if $T$ is noetherian, by the way, so I am tacitly using the good behavior of the limit formalism in etale sheaf theory without noetherian hypotheses (but even the proof of the smooth base change theorem in SGA4.5 leaves the noetherian framework due to such kind of fiber products intervening, so I suppose you don't mind).
The $y$-fiber of $p$ is identified with $X_y$, so naturally $x \in T$, and $O_{T,x}$ is a "partial henselization" of the local ring at a rational point on the special fiber of a finite type (even smooth) $Y'$-scheme. So even though I/we do not know if $T$ is noetherian, we
do know that $O_{T,x}$ is noetherian, by the same reasoning which proves that passage to henselization preserves the noetherian property (namely, EGA 0$_{\rm{III}}$, 10.3.1.3).
Note that the henselization of $O_{T,x}$ coincides with that of a local ring $R$ on a finite type (even smooth) $Y'$-scheme, and $Y'$ is excellent (as for any complete local noetherian ring: IV$_2$, 7.8.3(iii)), so $R$ is excellent. Consequently $R^{\rm{h}}$ is excellent (IV$_4$, 18.7.6), so $O_{T,x}$ has excellent henselization. Now it is
not true that excellence descends from the henselization (counterexample in IV$_4$, 18.7.7), but this is for geometric reasons related to failure of being universally catenary, and we don't actually care about that.
Indeed, what matters for the following is geometric regularity of formal fibers of $O_{T,x}$, which is to say that the flat map of noetherian schemes ${\rm{Spec}}(O_{T,x}^{\wedge}) \rightarrow {\rm{Spec}}(O_{T,x})$ is a regular morphism (since Artin-Popescu approximation is about regular morphisms being a limit of smooth morphisms, which in practice is easiest to remember under the banner of "excellence" but is not strictly a necessary condition). So all we care is to know that geometric regularity of formal fibers descends from the henselization, and that is fine; see IV$_4$, 18.7.4.
The upshot is that $O_{T,x}$ is noetherian and its completion morphism is regular.
Now comes the crux of the matter: I claim that the map $h:X' \rightarrow T$ (or rather, its factorization through ${\rm{Spec}}(O_{T,x})$) is
computing the completion of the noetherian local ring $O_{T,x}$. (This is the step at which our argument breaks down in the setting of the counterexamples above! Indeed, in that setting the map in the role of $h$ would be akin to a graph morphism, super-far from an isomorphism.)
In view of the local structure theorem for smooth morphisms (applied to a smooth $Y'$-scheme whose "partial henselization" at a suitable point computes $T$), our hypothesis that $k(x) = k(y)$, and the fact that completion of a local noetherian ring is insensitive to "partial henselization", justifying our assertion about $h$ amounts to the following down-to-earth observation: if $A$ is a henselian (e.g., complete) local noetherian ring, $B = A\{x_1,\dots,x_N\}$ is the henselization at the origin of the special fiber of an affine space over $A$, and $B'$ is the local ring at the origin on the special fiber of the ring $\widehat{A} \otimes_A B$ (a ring I/we do not know to be noetherian, but the local ring $B'$ certainly is, as explained above), then the natural map $B' \rightarrow \widehat{A}[\![x_1,\dots,x_N]\!]$ is the completion of $B'$. This verification is a simple exercise using the compatibility of henselization and quotients.
From our description of $h$ via completion of a local noetherian ring having geometrically regular formal fibers, $h$ is a regular morphism. Thus, by Popescu's theorem, $h$ is a limit of smooth maps. Hence, by the acyclicity theorem for smooth maps and the usual limit games (which do not require knowing $T$ to be noetherian), it follows that the natural map $G\rightarrow Rh_{\ast}(h^{\ast}G)$ is an isomorphism. But $h^{\ast}(G)=F'$, so we have a natural isomorphism $$R\pi_{\ast}(F)' \simeq Rp_{\ast}(G)=Rp_{\ast}Rh_{\ast}(F')=R(p \circ h)_{\ast}(F').$$ But $p\circ h=\pi'$, and as such it is a standard exercise to check that the composite isomorphism we have made is the base change morphism of initial interest. QED
|
It's common knowledge (and has been discussed in other questions on this site) that the standard BCS ground state $ \left|\Psi_{BCS}\right\rangle = \prod_k \left( u_k + v_k c_{k\uparrow}^{\dagger} c_{-k\downarrow}^{\dagger}\right) \left|0\right\rangle$ does not have a well-defined particle number and that this doesn't matter in bulk superconductors because the standard deviation is $\Delta N \propto \sqrt{N}$ and hence irrelevant for $N\rightarrow \infty$.
But I also read that you can arrive at a BCS state with well-defined particle number by first defining
$$\left|\Psi_{BCS}(\phi)\right\rangle = \prod_k \left( |u_k| + e^{i\phi} |v_k| c_{k\uparrow}^{\dagger} c_{-k\downarrow}^{\dagger}\right) \left|0\right\rangle $$
and then "integrating out" the phase according to
$$\left|\Psi_{BCS}(N)\right\rangle = \int_{0}^{2\pi} \mathrm{d}\phi\, e^{-iN\phi/2} \left|\Psi_{BCS}(\phi)\right\rangle \,\, ,$$ which gives you a BCS state with precisely N particles
at the cost of having a completely ill-defined phase.
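One can check the projection mechanism in a toy calculation (the $u_k$, $v_k$ values below are arbitrary): expanding the product, the $n$-pair component of $\left|\Psi_{BCS}(\phi)\right\rangle$ carries a factor $e^{in\phi}$, so integrating against $e^{-iN\phi/2}$ kills every sector except $n = N/2$:

```python
import numpy as np
from itertools import combinations

u = np.array([0.9, 0.8, 0.6])   # arbitrary illustrative coefficients, 3 modes
v = np.sqrt(1 - u**2)           # chosen so that u_k^2 + v_k^2 = 1

def amplitude(phi):
    # Scalar bookkeeping of prod_k (u_k + e^{i phi} v_k c^dag c^dag)|0>:
    # each created pair contributes one factor e^{i phi}.
    return np.prod(u + np.exp(1j * phi) * v)

def projected_amp(N):
    # (1/2pi) * integral over phi of e^{-i N phi / 2} * amplitude(phi):
    # for even N this picks out the N/2-pair component.
    phis = np.linspace(0, 2 * np.pi, 2000, endpoint=False)
    vals = np.exp(-1j * N * phis / 2) * np.array([amplitude(p) for p in phis])
    return abs(vals.mean())

def sector_coeff(n):
    # Direct expansion: the n-pair amplitude is a sum over which
    # n of the modes carry a pair.
    idx = range(len(u))
    return sum(np.prod(v[list(S)]) * np.prod(u[[k for k in idx if k not in S]])
               for S in combinations(idx, n))

print(projected_amp(4), sector_coeff(2))  # agree: N = 4 particles = 2 pairs
print(projected_amp(2), sector_coeff(1))  # agree: N = 2 particles = 1 pair
```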
If that is true, then surely this is the actual "physical" state of a superconductor and the original BCS state is merely used for convenience.
But that, in turn,
would make the well-defined phase of the superconducting state a mere mathematical artifact, when every other textbook highlights it as something very fundamental (and if I remember correctly, it's very important for things like the Josephson effect as well).
So, can anyone point to the error in the logic above?
|
We study the evolution of convex hypersurfaces with initial at a rate equal to <i>H</i> − <i>f</i> along its outer normal, where <i>H</i> is the inverse of harmonic mean curvature of is a smooth, closed, and uniformly convex hypersurface. We find a <i>θ</i>* > 0 and a sufficient condition about the anisotropic function <i>f</i>, such that if <i>θ</i> > <i>θ</i>*, then remains uniformly convex and expands to infinity as <i>t</i> → +∞, and its scaling, , converges to a sphere. In addition, the convergence result is generalized to the fully nonlinear case in which the evolution rate is log <i>H</i> − log <i>f</i> instead of <i>H</i> − <i>f</i>.
In this paper the existence of positive 2\pi -periodic solutions to the ordinary differential equation \begin{equation*} u^{\prime\prime}+u=\frac{f}{u^3} \ \textrm{ in } \mathbb{R} \end{equation*} is studied, where f is a positive 2\pi -periodic smooth function. By virtue of a new generalized Blaschke–Santaló inequality, we obtain a new existence result of solutions.
In this paper we study the solvability of the rotationally symmetric centroaffine Minkowski problem. By delicate blow-up analyses, we remove a technical condition in the existence result obtained by Lu and Wang [30].
In this paper the Orlicz鈥揗inkowski problem, a generalization of the classical Minkowski problem, is studied. Using the variational method, we obtain a new existence result of solutions to this problem for general measures.
The centroaffine Minkowski problem is studied, which is the critical case of the <i>L</i><sub><i>p</i></sub>-Minkowski problem. It admits a variational structure that plays an important role in studying the existence of solutions. In this paper, we find that there is generally no maximizer of the corresponding functional for the centroaffine Minkowski problem.
In this paper the centroaffine Minkowski problem, a critical case of the L_p-Minkowski problem in the (n+1)-dimensional Euclidean space, is studied. By its variational structure and the method of blow-up analyses, we obtain two sufficient conditions for the existence of solutions, for a generalized rotationally symmetric case of the problem.
Consider the existence of rotationally symmetric solutions to the L_p -Minkowski problem for L_p . Recently a sufficient condition was obtained for the existence via the variational method and a blow-up analysis in [16]. In this paper we use a topological degree method to prove the same existence and show the result holds under a similar complementary sufficient condition. Moreover, by this degree method, we obtain the existence result in a perturbation case.
In this paper we study the prescribed centroaffine curvature problem in the Euclidean space R n+ 1. This problem is equivalent to solving a Monge–Ampère equation on the unit sphere. It corresponds to the critical case of the Blaschke–Santaló inequality. By approximation from the subcritical case, and using an obstruction condition and a blow-up analysis, we obtain sufficient conditions for the a priori estimates, and the existence of solutions up to a Lagrange multiplier.
In this paper we study the L_p-Minkowski problem for p = −n−1, which corresponds to the critical exponent in the Blaschke–Santaló inequality. We first obtain volume estimates for general solutions, then establish a priori estimates for rotationally symmetric solutions by using a Kazdan–Warner type obstruction. Finally we give sufficient conditions for the existence of rotationally symmetric solutions by a blow-up analysis. We also include an existence result for the L_p-Minkowski problem which corresponds to the super-critical case of the Blaschke–Santaló inequality.
We consider the asymptotics of the Turaev-Viro and the Reshetikhin-Turaev invariants of a hyperbolic $3$-manifold, evaluated at the root of unity $exp(2\pi\sqrt{-1}/r)$ instead of the standard $exp(\pi\sqrt{-1}/r)$. We present evidence that, as $r$ tends to $\infty$, these invariants grow exponentially with growth rates respectively given by the hyperbolic and the complex volume of the manifold. This reveals an asymptotic behavior that is different from that of Witten's Asymptotic Expansion Conjecture, which predicts polynomial growth of these invariants when evaluated at the standard root of unity. This new phenomenon suggests that the Reshetikhin-Turaev invariants may have a geometric interpretation other than the original one via $SU(2)$ Chern-Simons gauge theory.
For almost all Riemannian metrics (in the C1 Baire sense) on a closed manifold M^{n+1}, 3 \leq n + 1 \leq 7, we prove that there is a sequence of closed, smooth, embedded, connected minimal hypersurfaces that is equidistributed in M. This gives a quantitative version of a result by Irie and the first two authors, that established density of minimal hypersurfaces for generic metrics. The main tool is the Weyl Law for the Volume Spectrum proven by Liokumovich and the first two authors .
Folded concave penalization methods have been shown to enjoy the strong oracle property for high-dimensional sparse estimation. However, a folded concave penalization problem usually has multiple local solutions and the oracle property is established only for one of the unknown local solutions. A challenging fundamental issue still remains that it is not clear whether the local optimum computed by a given optimization algorithm possesses those nice theoretical properties. To close this important theoretical gap in over a decade, we provide a unified theory to show explicitly how to obtain the oracle solution via the local linear approximation algorithm. For a folded concave penalized estimation problem, we show that as long as the problem is localizable and the oracle estimator is well behaved, we can obtain the oracle estimator by using the one-step local linear approximation. In addition, once the oracle estimator is obtained, the local linear approximation algorithm converges, namely, it produces the same estimator in the next iteration. The general theory is demonstrated by using four classical sparse estimation problems, that is, sparse linear regression, sparse logistic regression, sparse precision matrix estimation, and sparse quantile regression.
We construct a special principal series representation for the modular double Uqq∼(gR) of type Ar representing the generators by positive essentially self-adjoint operators satisfying the transcendental relations that also relate q and. We use the cluster variables parameterization of the positive unipotent matrices to derive the formulas in the classical case. Then we quantize them after applying the Mellin transform. Our construction is inspired by the previous results for gR =sl(2,R) and can be generalized to all other types of simple split real Lie algebra. We conjecture that our positive representations are closed under the tensor product and we discuss the future perspectives of the new representation theory following the parallel with the established developments of the finite-dimensional representation theory of quantum groups.
For each simple Lie algebra g, we construct an algebra embedding of the quantum group Uq(g) into certain quantum torus algebra Dg via the positive representations of split real quantum group. The quivers corresponding to Dg is obtained from an amalgamation of two basic quivers, each of which is mutation equivalent to one describing the cluster structure of the moduli space of framed G-local system on a disk with 3 marked points on its boundary when G is of classical type. We derive a factorization of the universal R-matrix into quantum dilogarithms of cluster monomials, and show that conjugation by the R-matrix corresponds to a sequence of quiver mutations which produces the half-Dehn twist rotating one puncture about the other in a twice punctured disk.
We impose constraints on the odd coordinates of super-Teichmüller space in the uniformization picture for the monodromies around Ramond punctures, thus reducing the overall odd dimension to be compatible with that of the moduli spaces of super Riemann surfaces. Namely, the monodromy of a puncture must be a true parabolic element of the canonical subgroup SL(2,R) of OSp(1|2).
In our previous work, we studied positive representations of split real quantum groups Uqq~(gR) restricted to their Borel part and showed that they are closed under taking tensor products. But the tensor product decomposition was only constructed abstractly using the GNS representation of a C*-algebraic version of the Drinfeld–Jimbo quantum groups. Here, using the recently discovered cluster realization of quantum groups, we write the decomposition explicitly by realizing it as a sequence of cluster mutations in the corresponding quiver diagram representing the tensor product.
Based on earlier work of the latter two named authors on the higher super-Teichmueller space with N=1, a component of the flat OSp(1|2) connections on a punctured surface, here we extend to the case N=2 of flat OSp(2|2) connections. Indeed, we construct here coordinates on the higher super-Teichmueller space of a surface F with at least one puncture associated to the supergroup OSp(2|2), which in particular specializes to give another treatment for N=1 simpler than the earlier work. The Minkowski space in the current case, where the corresponding super Fuchsian groups act, is replaced by the superspace R^{2,2|4}, and the familiar lambda lengths are extended by odd invariants of triples of special isotropic vectors in R^{2,2|4} as well as extra bosonic parameters, which we call ratios, defining a flat R+-connection on F. As in the pure bosonic or N=1 cases, we derive the analogue of Ptolemy transformations for all these new variables.
Using the complex coloring method, we present the graphs of the quantum dilogarithm function Gb(z) and visualize its analytic and asymptotic behaviours. In particular we demonstrate the limiting process when the modified Gb(z)→Γ(z) as b→0. We also survey the relations of Gb(z) with different variants of the quantum dilogarithm function.
We construct the positive principal series representations for Uq(gR) where g is of simply-laced type, parametrized by Rr where r is the rank of g. In particular, the positivity of the operators and the transcendental relations between the generators of the modular double are shown. We define the modified quantum group $\mathbf{U}_{q\tilde{q}}(g_R)$ of the modular double and show that the representation of both parts of the modular double commute with each other, there is an embedding into the q-tori polynomials, and the commutant is the Langlands dual. We write down explicitly the action for type An,Dn and give the details of calculations for type E6,E7 and E8.
We study the tensor product decomposition of the split real quantum group Uqq~(sl(2,R)) from the perspective of finite dimensional representation theory of compact quantum groups. It is known that the class of positive representations of Uqq~(sl(2,R)) is closed under taking tensor product. In this paper, we show that one can derive the corresponding Hilbert space decomposition, given explicitly by quantum dilogarithm transformations, from the Clebsch-Gordan coefficients of the tensor product decomposition of finite dimensional representations of the compact quantum group Uq(sl2) by solving certain functional equations and using normalization arising from tensor products of canonical basis. We propose a general strategy to deal with the tensor product decomposition for the higher rank split real quantum group Uqq~(gR)
A counterpart of the modular double for quantum superalgebra Uq(osp(1|2)) is constructed by means of supersymmetric quantum mechanics. We also construct the R-matrix operator acting in the corresponding representations, which is expressed via quantum dilogarithm.
We study the positive representations Pλ of split real quantum groups Uqq (gℝ) restricted to the Borel subalgebra Uqq (bℝ). We prove that the restriction is independent of the parameter λ. Furthermore, we prove that it can be constructed from the GNS-representation of the multiplier Hopf algebra UqqC ∗ (b ℝ) defined earlier, which allows us to decompose their tensor product using the theory of the “multiplicative unitary”. In particular, the quantum mutation operator can be constructed from the multiplicity module, which will be an essential ingredient in the construction of quantum higher Teichmüller theory from the perspective of representation theory, generalizing earlier work by Frenkel-Kim.
|
Glossary
We are planning to prepare a Glossary for the course that contains a list of the key terms that are used in the course.
Which terms would you like us to explain here?
\(\exists\): There exists.
\(\in\): Belongs to.
\(\forall\): For every.
\(\mathbb N\): The set of natural numbers \(0,1,2,3,…\)
\(\mathbb Q\): The set of rational numbers.
\(\mathbb R\): The set of real numbers.
\(\mathbb Z\): The set of integer numbers \(…,-3,-2,-1,0, 1,2,3,…\)
Integer: a number of the set \(\mathbb Z=\lbrace …,-3,-2,-1,0,1,2,3,…\rbrace\)
Natural number: a number of the set \(\mathbb N=\lbrace 0,1,2,3,…\rbrace\)
Rational number: a number of the set \(\mathbb Q=\lbrace \frac ab:\, a\in\mathbb Z,\ b\in\mathbb N,\ b\neq 0\rbrace\)
Real number: an element of \(\mathbb R\). The set \(\mathbb R\) contains the limits of sequences of rational numbers and elements like square roots, \(\pi, e,…\)
Square root of \(x\ge 0\): the positive number \(y\) satisfying \(y^2=x\).
\(m\)-th root of \(x\ge 0\): the positive number \(y\) satisfying \(y^m=x\).
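As a throwaway numerical illustration of the last two definitions:

```python
y = 2 ** 0.5        # square root of 2: the positive y with y**2 == 2
assert abs(y ** 2 - 2) < 1e-12

m = 5
y = 7 ** (1 / m)    # 5-th root of 7: the positive y with y**5 == 7
assert abs(y ** m - 7) < 1e-12
```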
|
Homework Statement Let f and g be differentiable functions and let a be a real number such that ##f(a)=g(a)=0 ## ##g'(a) ≠ 0 ## Justify that ##\frac{f'(a)}{g'(a)} ## = ##\lim_{x\to a}\frac{f(x)}{g(x)}## You may only use the definition of the derivative and the limit laws. Homework Equations ##\lim_{h\to 0}\frac{f(x+h)-f(x)}{h}##
My attempt:
##\frac{f'(a)}{g'(a)} ## = ##\lim_{h\to 0}\frac{f(a+h)-f(a)}{h}\cdot\frac{h}{g(a+h)-g(a)}## = ##\lim_{h\to 0}\frac{f(a+h)-f(a)}{g(a+h)-g(a)}## I don't think I am doing this right. I don't even understand how I am supposed to use the limit laws. I really appreciate some help!
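As a numerical sanity check of the claim (with example functions of my own choosing, not part of the exercise): taking ##f(x) = e^x - 1## and ##g(x) = \sin x## at ##a = 0##, both vanish at ##a##, ##f'(0)/g'(0) = 1##, and the ratio ##f(x)/g(x)## indeed approaches 1:

```python
import math

f = lambda x: math.exp(x) - 1   # f(0) = 0, f'(0) = 1
g = lambda x: math.sin(x)       # g(0) = 0, g'(0) = 1

a = 0.0
for x in (1e-1, 1e-3, 1e-5):
    print(x, f(a + x) / g(a + x))   # tends to f'(0)/g'(0) = 1
```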
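For intuition (not a proof), the claimed identity can be sanity-checked symbolically; the choices $f(x)=\sin x$, $g(x)=e^x-1$, $a=0$ below are hypothetical examples satisfying the hypotheses:

```python
import sympy as sp

x = sp.symbols('x')
a = 0
f = sp.sin(x)       # f(a) = 0
g = sp.exp(x) - 1   # g(a) = 0 and g'(a) = 1 != 0

ratio_of_derivatives = (sp.diff(f, x) / sp.diff(g, x)).subs(x, a)
limit_of_ratio = sp.limit(f / g, x, a)
print(ratio_of_derivatives, limit_of_ratio)  # both equal 1
```

Both quantities come out equal, consistent with the identity; the proof itself only needs the two difference quotients and the limit rule for quotients, as in the attempt above.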
|
I'm trying to figure out how to translate a piece of code from Velocity Verlet to Runge-Kutta, while treating the time step dependence of the thermal noise correctly.
The Langevin equation for my system reads
$$ ma = - \gamma v - \frac{dU}{dx} + \xi(t), $$
where $U$ is some interaction potential, $\gamma$ is the damping coefficient, and $\xi$ is a Gaussian noise term with $\mu = 0$ and $\sigma = \sqrt{2\gamma m k_B T}$.
I can use Velocity Verlet with Langevin dynamics as
$$ v_{t+1} = v_t + h(a - \gamma v + \xi(t)). $$
Qualitatively, what the Langevin equation does here is that it models thermal fluctuations by adding random kicks to the acceleration while counteracting them with a constant damping term to stabilize the energy. My question then is, how does this translate to 4th order Runge-Kutta (RK4)?
In RK4 we calculate the velocity as
$$ v_{t+1} = v_t + \frac{h}{6}(a_1 + 2a_2 + 2a_3 + a_4) $$
where $a_i$ are the partial accelerations calculated in the RK4 steps.
It is not obvious to me where to introduce the Langevin dynamics here. My best guess is that it should be applied in every separate RK-step? Meaning e.g. for $a_1$
$$ a_1 = a_t - \gamma v_t + \xi(t). $$
Of course we would have to use the same $\xi(t)$ for all the $a_i$ during one time step for this to make sense. Meaning we generate one $\xi(t)$ at the start of every time step that we then use for every calculation during that time step.
Still, something is missing here... Because now the noise is not dependent on the time step, and it should be! This was not an issue in Velocity Verlet because we just multiplied the noise term with $h$ during every time step, but this is not the case here. It seems to me that the time step has to be included somewhere in the $\sigma$ term of the Langevin equation, but I can't really figure out how...
edit1: changed to a more sensible notation.
edit2: I realized, for RK4 to work, you probably have to add a timestep to the noise term as $\sigma = \sqrt{\frac{2\gamma m k_B T}{h}}$ for the units to come out correctly.
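The point in edit2 can be checked numerically: the per-step Gaussian kick in the discretized velocity update must scale as $\sqrt{h}$ (equivalently, put the $1/\sqrt{h}$ into $\sigma$ and multiply by $h$), otherwise the equilibrium temperature depends on the time step. A minimal sketch, assuming a free Brownian particle with hypothetical parameters $\gamma = m = k_B T = 1$:

```python
import numpy as np

rng = np.random.default_rng(0)

def equilibrium_velocity_variance(h, n_steps=400_000, gamma=1.0, kT=1.0, m=1.0):
    """Integrate dv = -(gamma/m) v dt + noise with explicit Euler steps.

    The per-step Gaussian kick has standard deviation
    sqrt(2 * gamma * kT / m) * sqrt(h): scaling the kick by sqrt(h)
    (not by h) is what makes <v^2> = kT/m independent of the time step.
    """
    kicks = rng.normal(0.0, np.sqrt(2.0 * gamma * kT / m * h), size=n_steps)
    v = 0.0
    vs = np.empty(n_steps)
    for i in range(n_steps):
        v += -(gamma / m) * v * h + kicks[i]
        vs[i] = v
    return float(np.mean(vs[n_steps // 10:] ** 2))  # discard initial transient

var_coarse = equilibrium_velocity_variance(h=0.02)
var_fine = equilibrium_velocity_variance(h=0.005)
print(var_coarse, var_fine)  # both close to kT/m = 1
```

With the $\sqrt{h}$ scaling, both time steps reproduce equipartition, $\langle v^2\rangle \approx k_B T/m$; scaling the kick by $h$ instead gives a variance that vanishes as $h\to 0$.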
|
In this chapter, we will understand what the Angular Diameter Distance is and how it helps in Cosmology.
For the present universe −
$\Omega_{m,0} \: = \: 0.3$
$\Omega_{\Lambda,0} \: = \: 0.69$
$\Omega_{rad,0} \: = \: 0.01$
$\Omega_{k,0} \: = \: 0$
We’ve studied two types of distances till now −
Proper distance ($l_p$) − The distance that photons travel from the source to us, i.e., the instantaneous distance.
Comoving distance ($l_c$) − The distance between objects in a space which doesn't expand, i.e., the distance in a comoving frame of reference.
Consider a galaxy which radiates a photon at time $t_1$ that is detected by the observer at time $t_0$. The proper distance to the galaxy is −
$$l_p = \int_{t_1}^{t_0} cdt$$
Let the galaxy's redshift be $z$. Since $1 + z = \frac{1}{a(t)}$, differentiating with respect to $t$ gives −
$$\Rightarrow \frac{\mathrm{d} z}{\mathrm{d} t} = -\frac{1}{a^2}\frac{\mathrm{d} a}{\mathrm{d} t}$$
$$\Rightarrow \frac{\mathrm{d} z}{\mathrm{d} t} = -\frac{\frac{\mathrm{d} a}{\mathrm{d} t}}{a}\frac{1}{a}$$
$$\therefore \frac{\mathrm{d} z}{\mathrm{d} t} = -\frac{H(z)}{a}$$
Now, the comoving distance of the galaxy at any time $t$ will be −
$$l_c = \frac{l_p}{a(t)}$$
$$l_c = \int_{t_1}^{t_0} \frac{cdt}{a(t)}$$
In terms of z,
$$l_c = \int_{0}^{z} \frac{c\,dz'}{H(z')}$$
There are two ways to find distances. The first uses the flux from a source of known luminosity −
$$F = \frac{L}{4\pi d^2}$$
where $d$ is the distance to the source.
If we know a source’s size, its angular width will tell us its distance from the observer.
$$\theta = \frac{D}{l}$$
where $l$ is the distance to the source (the angular diameter distance), $\theta$ is the angular size of the source, and $D$ is the size of the source.
Consider a galaxy of size $D$ and angular size $d\theta$.
We know that,
$$d\theta = \frac{D}{d_A}$$
$$\therefore D^2 = a(t)^2(r^2 d\theta^2) \quad \because dr^2 = 0; \: d\phi ^2 \approx 0$$
$$\Rightarrow D = a(t)rd\theta$$
Changing $r$ to $r_c$, the comoving distance of the galaxy, we have −
$$d\theta = \frac{D}{r_ca(t)}$$
Here, if we choose $t = t_0$, we end up measuring the present distance to the galaxy. But the light was emitted at the earlier time $t_1$, so we should use $a(t_1)$ −
$$\therefore d\theta = \frac{D}{r_ca(t_1)}$$
Comparing this with the previous result, we get −
$$d_A = a(t_1)r_c$$
$$r_c = l_c = \frac{d_A}{a(t_1)} = d_A(1+z_1) \quad \because 1+z_1 = \frac{1}{a(t_1)}$$
Therefore,
$$d_A = \frac{c}{1+z_1} \int_{0}^{z_1} \frac{dz}{H(z)}$$
$d_A$ is the Angular Diameter Distance for the object.
If we know a source’s size, its angular width will tell us its distance from the observer.
Proper distance is the distance that photons travel from the source to us.
Comoving distance is the distance between objects in a space which doesn’t expand.
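The formula $d_A = \frac{c}{1+z}\int_0^z \frac{dz'}{H(z')}$ is straightforward to evaluate numerically. The sketch below (Python with SciPy) assumes a flat universe with hypothetical parameters $H_0 = 70\ \mathrm{km/s/Mpc}$, $\Omega_m = 0.3$, $\Omega_\Lambda = 0.7$, with radiation neglected:

```python
import numpy as np
from scipy.integrate import quad

C_KM_S = 299792.458  # speed of light [km/s]

def angular_diameter_distance(z, H0=70.0, Om=0.3, OL=0.7):
    """d_A = c/(1+z) * integral_0^z dz'/H(z') for a flat universe, in Mpc.

    H(z) = H0 * sqrt(Om*(1+z)^3 + OL); radiation is neglected here.
    """
    integrand = lambda zp: 1.0 / (H0 * np.sqrt(Om * (1.0 + zp) ** 3 + OL))
    integral, _ = quad(integrand, 0.0, z)
    return C_KM_S * integral / (1.0 + z)

print(angular_diameter_distance(1.0))  # roughly 1650 Mpc for these parameters
```

For these parameters $d_A(z=1)$ comes out near 1650 Mpc; note that $d_A$ is not monotonic in $z$, turning over around $z \sim 1.6$ in this cosmology.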
|
To calculate any equilibrum constant we need to be sure of the fugacity of each species. Note that fugacities are defined for every common state of matter (gas, liquid and solid), although they are most commonly used for gases. Look at the end of this answer for more details.
The activity of the compound $i$, written $a_i$, is defined as:
$$a_i=\exp\left(\frac{\mu_i -\mu_i^{\circ}}{RT}\right)$$
where $\mu_i$ is the chemical potential, which depends on the fugacity.
However, this expression can be hard to use if you do not have a strong idea of what you are doing, because there are a lot of different ways to calculate things, depending on the formulas and laws or theories used.
What you must remember all the time
Activity can also be defined as $$a_i=\gamma_i \cdot \frac{X_i}{X_i^\circ}$$
where $X_i$ can be a concentration, a partial pressure, a molar fraction, a molality, and so on, and where $\gamma_i$ is the activity coefficient. Be careful; this coefficient depends on what $X_i$ is.
The role of $\gamma_i$ is to correct for the interactions between each constituent of the mixture.
If you are in the ideal regime, which means that the amount of your solute is almost negligible with respect to the entire mixture, then $\gamma_i \approx 1$. Only then can you use $X_i$ in place of $a_i$.
In general, the determination of this coefficient is not quite simple, especially for ions: see the example here for hydrogen ion.
Ionic solutions
Electrolyte solutions (e.g. a solution of $\ce{NaCl}$) can be modelled by Debye–Hückel theory. There are different versions of this model depending on the precision you want. One of the most used is given in the paper about hydrogen ion:
$$\log(\gamma_i)=-\frac{A_m z_i^2\sqrt{I_m}}{1+\sqrt{I_m}}$$
where $I_m$ is the molal ionic strength, given by:
$$I_m=\frac{1}{2}\sum_i z_i^2m_i$$
where $m_i$ is the molality of $i$.
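As a concrete illustration, the expression above is easy to evaluate; the sketch below assumes water at 25 °C, where the base-10 Debye–Hückel constant is $A \approx 0.509\ \mathrm{(mol/kg)^{-1/2}}$, and a hypothetical 0.1 mol/kg NaCl solution:

```python
import math

A_WATER_25C = 0.509  # Debye-Hueckel A for water at 25 C (base-10 log form)

def log10_gamma(z, I):
    """log10(gamma_i) = -A * z_i^2 * sqrt(I) / (1 + sqrt(I))."""
    s = math.sqrt(I)
    return -A_WATER_25C * z ** 2 * s / (1.0 + s)

def ionic_strength(ions):
    """I = (1/2) * sum_i z_i^2 * m_i over (charge, molality) pairs."""
    return 0.5 * sum(z ** 2 * m for z, m in ions)

# hypothetical example: 0.1 mol/kg NaCl
I = ionic_strength([(+1, 0.1), (-1, 0.1)])
gamma_na = 10.0 ** log10_gamma(+1, I)
print(I, round(gamma_na, 3))  # 0.1 0.755
```

So even at 0.1 mol/kg the single-ion activity coefficient is already about 0.75, a roughly 25% deviation from ideality.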
To answer the other part of your question, whether your system is behaving "ideally" or not ultimately depends on the precision you want.
For example if you have a reaction constant expressed as:
$$K=\frac{a_\ce{A}^2 a_\ce{B}^3}{a_\ce{C}}=\frac{\gamma_\ce{A}^2 \gamma_\ce{B}^3}{\gamma_\ce{C}} \cdot \frac{(c_\ce{A}/c^\circ)^2 (c_\ce{B}/c^\circ)^3}{c_\ce{C}/c^\circ}=\frac{\gamma_\ce{A}^2 \gamma_\ce{B}^3}{\gamma_\ce{C}} K^\circ$$
which corresponds to the reaction $\ce{C -> 2A + 3B}$, you will have a large error if you assume you are under ideal conditions when you aren't.
For example if you assume ideality in calculating the $\mathrm{pH}$ of $\pu{5 mol/L}$ of $\ce{HCl}$ solution, like I did here, the answer will not be the same as if you were to consider the activity of the ion, like this.
So, at least for strong acids, if the concentration is much greater than $\pu{1 mol/L}$, you must use activity coefficients. Depending on your problem you'll need to think it through and do calculations with and without assuming ideality. If you are unsure whether your solution can be considered ideal, it's good to do a check (and feel free to use a computer in hard cases).
I would add, as an example: if one of your compounds makes up 10% of the solution by mass, the solution should no longer be considered ideal. But, as said earlier, the limit of what counts as ideal is fixed by you, depending on the precision you want.
|
Talk:Principle of least action Reviewer A
This is a well-written article on the Principle of Least Action by one of the leaders in this field. Here are some comments about the material presented in some of the eleven sections.
In Section 1, it should perhaps be pointed out that, like Fermat's Principle of "Least" Time, Maupertuis' Principle of Least Action is a "geodesic" principle (since it involves the infinitesimal length element ds) while Hamilton's Principle of Least Action is a "dynamic" principle (since it involves the infinitesimal time dt). Lastly, in Eq.(5), it might be more appropriate to place indices on the q variable.
In Section 4, it should be pointed out that the second variation \(\delta^{2}S\) can be expressed in terms of the variation \(\delta x\) and the Jacobian deviation \(u\) (at fixed time t) as \(\delta^{2}S = \int_{0}^{T} \frac{\partial^{2}L}{\partial \dot{x}^{2}} \left( \delta\dot{x} \;-\; \delta x\;\frac{\dot{u}}{u} \right)^{2} \geq 0,\) which vanishes only when \(\delta\dot{x}\;u = \delta x\;\dot{u}\). The latter expression defines the kinetic focus. An explicit reference (such as C. Fox, An Introduction to the Calculus of Variations, Dover, 1987) would be appropriate in addition to Gray and Taylor's.
In Section 7, the exact solution of the quartic-potential problem is given in terms of the Jacobi elliptic function \({\rm cn}(z|m)\) as \(x(t) \;=\; (4 E/C)^{1/4}\;{\rm cn}\left( \frac{4K}{T}\;t \;\left|\;\frac{1}{2}\right. \right),\) where \(K = K(1/2) = 1.85407...\) is the complete elliptic integral of the first kind evaluated at \(m = 1/2\) (the lemniscatic case) and the period is \(T = 4 K (m^{2}/4 EC)^{1/4}.\) When we compare the exact angular frequency \(\omega = 2\pi/T\) with Eq.(15), we indeed find that \(\omega/[{\rm Eq.(15)}] = \pi/(2^{3/4}\,K) = 1.0075.\)
In Section 8, while the choice of sign for the relativistic Lagrangian (or action) given in Eq.(16) might appear to be a matter of convention, the sign chosen in Eq.(16) is incompatible with the conservation laws derived from it by Noether's method.
In Section 9, it might be useful to mention the wonderfully elegant derivation of the Schroedinger equation by Feynman and Hibbs in Chapter 4 of their textbook "Quantum Mechanics and Path Integrals" (McGraw-Hill, 1965). The reader should also be referred to Yourgrau and Mandelstam for additional historical comments.
Reviewer B
Since my expertise is restricted to classical (i.e. non-relativistic and non-quantum) mechanics, my review refers only to the classical part of the article. This is one of the best articles of its kind, i.e. among those found in electronic and traditional encyclopaedias (Wiki, Britannica etc.). It's brief, authoritative, with unusual detail (e.g. study of the second variation), and highly readable, i.e. no unnecessary formalisms ("epsilonics"). Its author has published several original papers on this subject. If I could make one suggestion for improvement, it would be to use a lower-case Greek delta for isochronous (vertical) variations, i.e. time kept fixed, and an upper-case Greek delta for non-isochronous (non-vertical, or skew) ones; see e.g. E. Whittaker's "Analytical Dynamics" (1937, pp. 246 ff.). Therefore I do recommend its publication.
|
Topological proof of Benoist-Quint's orbit closure theorem for $ \boldsymbol{ \operatorname{SO}(d, 1)} $
Department of Mathematics, Yale University, New Haven, CT 06520, USA
We present a new proof of the following theorem of Benoist-Quint: Let $ G: = \operatorname{SO}^\circ(d, 1) $, $ d\ge 2 $ and $ \Delta<G $ a cocompact lattice. Any orbit of a Zariski dense subgroup $ \Gamma $ of $ G $ is either finite or dense in $ \Delta \backslash G $. While Benoist and Quint's proof is based on the classification of stationary measures, our proof is topological, using ideas from the study of dynamics of unipotent flows on the infinite volume homogeneous space $ \Gamma \backslash G $.
Mathematics Subject Classification:Primary: 37A17; Secondary: 22E40. Citation:Minju Lee, Hee Oh. Topological proof of Benoist-Quint's orbit closure theorem for $ \boldsymbol{ \operatorname{SO}(d, 1)} $. Journal of Modern Dynamics, 2019, 15: 263-276. doi: 10.3934/jmd.2019021
|
This question already has an answer here:
I know that there have already been a lot of questions about why the likelihood is not a probability density function and I've read most of the answers. However, to me it is still not clear why the likelihood is not a pdf. There have been several arguments, mainly involving that
it does not integrate to 1 it is a distribution over the parameters with the data fixed
However, there has also been an accepted answer that says "it is the probability (density) of the data given the parameter value", which to me sounds like a probability then.
My general confusion and problem about the understanding of the likelihood contains the following items:
1.) The likelihood is (often) defined as $L(\theta|X)=p(X|\theta)$. But that IS a (conditional) pdf. There is nothing I can do about it to interpret it otherwise (e.g. by assuming that any of $X$ or $\theta$ is held fixed). The above expression means that I have a pdf over the random variable $X$ conditioned on the parameter $\theta$, which in fact is a conditional pdf, no?
2.) One could argue (such as on Wikipedia) that the likelihood function is defined as $L(\theta|X)=p(X;\theta)$, i.e., explicitly not as a conditional pdf. However, in Bayes' theorem the likelihood is always a conditional pdf, as Bayes' theorem is in principle only a consequence of the definition of conditional probability (density). Therefore, in Bayes' theorem I have to interpret the likelihood as a conditional probability density.
3.) I am also confused about the definition of the likelihood in the frequentist and Bayesian frameworks. In the former one assumes the data to be random variables and the parameters to be fixed unknowns, and in the latter one assumes the data to be fixed and the parameters to be random variables. So it seems that the interpretation of the likelihood also depends on the framework I am working in?
4.) The pdf of a given distribution is often written as a conditional probability, e.g., the gaussian is often written as $p(X=x|\mu, \sigma)$ and then treated as a likelihood when using e.g. Bayes theorem. In that case we explicitly assume the likelihood to be a (conditional) pdf then. However, how is this then justified (if the likelihood is not a conditional pdf)?
5.) Why are there many textbooks in applied statistics and machine learning, that just use the likelihood as a conditional pdf just like in point 4.) if this is not correct?
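Regarding points 1.) and 4.), here is a quick numerical illustration of the asymmetry, using a binomial model with $n = 10$ trials and $k = 3$ observed successes as a hypothetical example. As a function of the data (for fixed $p$) the pmf sums to 1, but the same expression viewed as a function of the parameter $p$ (the likelihood) integrates to $1/(n+1)$, not 1:

```python
import math

def binomial_likelihood(p, n, k):
    """L(p | data) = C(n, k) * p^k * (1-p)^(n-k), viewed as a function of p."""
    return math.comb(n, k) * p ** k * (1.0 - p) ** (n - k)

n, k = 10, 3
m = 100_000  # midpoint-rule panels on [0, 1]
integral = sum(binomial_likelihood((i + 0.5) / m, n, k) for i in range(m)) / m
print(integral)  # about 1/11 = 0.0909..., not 1
```

This is one concrete sense in which $L(\theta|X)$, although numerically equal to $p(X|\theta)$, is not a density over $\theta$: nothing forces it to normalize, and in the Bayesian setting it only becomes a density after multiplying by a prior and renormalizing.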
EDIT: The discussion I have looked at involve:
|
Units | Topics | Marks
I | Relations and Functions | 10
II | Algebra | 13
III | Calculus | 44
IV | Vectors and 3-D Geometry | 17
V | Linear Programming | 6
VI | Probability | 10
| Total | 100
Chapter 1: Relations and Functions
Chapter 2: Inverse Trigonometric Functions
Chapter 1: Matrices
Concept, notation, order, equality, types of matrices, zero and identity matrix, transpose of a matrix, symmetric and skew symmetric matrices.
Operations on matrices: addition, multiplication, and multiplication by a scalar
Simple properties of addition, multiplication and scalar multiplication
Noncommutativity of multiplication of matrices and existence of non-zero matrices whose product is the zero matrix (restrict to square matrices of order 2)
Concept of elementary row and column operations
Invertible matrices and proof of the uniqueness of inverse, if it exists; (Here all matrices will have real entries).
Chapter 2: Determinants
Determinant of a square matrix (up to 3 × 3 matrices), properties of determinants, minors, co-factors and applications of determinants in finding the area of a triangle
Adjoint and inverse of a square matrix
Consistency, inconsistency and number of solutions of system of linear equations by examples, solving system of linear equations in two or three variables (having unique solution) using inverse of a matrix
Chapter 1: Continuity and Differentiability
Continuity and differentiability, derivative of composite functions, chain rule, derivatives of inverse trigonometric functions, derivative of implicit functions
Concept of exponential and logarithmic functions.
Derivatives of logarithmic and exponential functions
Logarithmic differentiation, derivative of functions expressed in parametric forms. Second order derivatives
Rolle's and Lagrange's Mean Value Theorems (without proof) and their geometric interpretation
Chapter 2: Applications of Derivatives
Applications of derivatives: rate of change of bodies, increasing/decreasing functions, tangents and normals, use of derivatives in approximation, maxima and minima (first derivative test motivated geometrically and second derivative test given as a provable tool)
Simple problems (that illustrate basic principles and understanding of the subject as well as real-life situations)
Chapter 3: Integrals
Integration as inverse process of differentiation
Integration of a variety of functions by substitution, by partial fractions and by parts
Evaluation of simple integrals of the following types and problems based on them
$\int \frac{dx}{x^2\pm a^2}$, $\int \frac{dx}{\sqrt{x^2\pm a^2}}$, $\int \frac{dx}{\sqrt{a^2-x^2}}$, $\int \frac{dx}{ax^2+bx+c}$, $\int \frac{dx}{\sqrt{ax^2+bx+c}}$
$\int \frac{px+q}{ax^2+bx+c}dx$, $\int \frac{px+q}{\sqrt{ax^2+bx+c}}dx$, $\int \sqrt{a^2\pm x^2}dx$, $\int \sqrt{x^2-a^2}dx$
$\int \sqrt{ax^2+bx+c}dx$, $\int \left ( px+q \right )\sqrt{ax^2+bx+c}dx$
Definite integrals as a limit of a sum, Fundamental Theorem of Calculus (without proof)
Basic properties of definite integrals and evaluation of definite integrals
Chapter 4: Applications of the Integrals
Applications in finding the area under simple curves, especially lines, circles/parabolas/ellipses (in standard form only)
Area between any of the two above said curves (the region should be clearly identifiable)
Chapter 5: Differential Equations
Definition, order and degree, general and particular solutions of a differential equation
Formation of differential equation whose general solution is given
Solution of differential equations by method of separation of variables solutions of homogeneous differential equations of first order and first degree
Solutions of linear differential equation of the type −
dy/dx + py = q, where p and q are functions of x or constants
dx/dy + px = q, where p and q are functions of y or constants
Chapter 1: Vectors
Vectors and scalars, magnitude and direction of a vector
Direction cosines and direction ratios of a vector
Types of vectors (equal, unit, zero, parallel and collinear vectors), position vector of a point, negative of a vector, components of a vector, addition of vectors, multiplication of a vector by a scalar, position vector of a point dividing a line segment in a given ratio
Definition, Geometrical Interpretation, properties and application of scalar (dot) product of vectors, vector (cross) product of vectors, scalar triple product of vectors
Chapter 2: Three - dimensional Geometry
Direction cosines and direction ratios of a line joining two points
Cartesian equation and vector equation of a line, coplanar and skew lines, shortest distance between two lines
Cartesian and vector equation of a plane
Angle between −
Two lines
Two planes
A line and a plane
Distance of a point from a plane
Chapter 1: Linear Programming
Chapter 1: Probability
|
I suspect this problem is ill-posed: the degree of $f$ may depend on the specific enumeration of Turing machines used.
However, I can prove that the degree of the Halting Problem is attainable:
Claim: there is an admissible numbering $\varphi_i$ of Turing machines such that $f$, defined relative to this numbering, computes the Halting Problem.
(See https://en.wikipedia.org/wiki/Admissible_numbering.)
Let $\varphi_i$ be an admissible numbering such that for all $n$, $\varphi_{2n}$ is the program which always halts immediately - that is, in zero (= an even number of) steps. Now let $R$ be a computable function such that for all $e$, $\varphi_{R(e)+1}$ is an index for the program which on all inputs runs $\varphi_e(0)$, and halts in an odd number of stages if $\varphi_e(0)$ ever halts (via padding with a "dummy step" if necessary), and diverges otherwise. Such an $R$ exists by the Recursion theorem, since $\varphi_i$ is an admissible numbering. Then $\varphi_e(0)\downarrow\iff f(R(e))=0$.
Note that since you care about run time, "admissible numbering" isn't really the right term to use, since for instance if we just slow every machine down by a factor of 2 (so $f(e)=0$ always) this is still an admissible numbering.
You want a numbering of machines, not just functions.
Def'n. A numbering of machines is a function $\nu:\omega\rightarrow\omega$ such that $(i)$ $\nu$ is computable and $(ii)$ for every $i$, there is some $k$ such that for all $s, x, y$, we have $$\varphi_i(x)[s]=y\iff \varphi_{\nu(k)}(x)[s]=y,$$ that is, every machine occurs in the range of $\nu$.
(Here "$[s]=$" means "halts in $s$ steps and equals"; this is really useful notation.) Similarly to the construction of a Friedberg enumeration (but much simpler), we may construct a numbering of machines whose $f$ is computable! This is a good exercise.
The right notion of "tame numbering of machines," I think, is:
Def'n. A numbering of machines $\nu$ is tame if there is a computable function $g$ such that for all $i, x, y, s$, we have $$\varphi_i(x)[s]=y\iff \varphi_{\nu(g(i))}(x)[s]=y,$$ that is, we can computably find programs in $\nu$.
What I have absolutely no idea about:
Is there a tame numbering of machines whose $f$ is strictly weaker than the Halting Problem? Is there a t.n.m. whose $f$ is computable?
|
If I wanted to evaluate $\int_{C(0,1)}(z+\frac{1}{z})^{2n}\frac{1}{z}dz$ using the Binomial theorem, what would my result be?
So far I've rearranged the integral until we have $\int\frac{(z^2+1)^{2n}}{z^{2n+1}}dz$
Then by using the binomial theorem on the numerator we obtain
$(1+z^2)^{2n}=\sum_{k=0}^{2n} \binom{2n}{k}(z^2)^k=\binom{2n}{0}+\binom{2n}{1}z^2+...+ \binom{2n}{2n}(z^2)^{2n}$
Then I thought that I should decompose the original integral into a sum of new integrals and evaluate them using Cauchy's integral formula for derivatives. It seemed like this would yield some fruitful result, but when I applied it I realised that if we take the derivative of $f(z_0)$ $2n$ times, all but the final integral $\binom{2n}{2n}\int \frac{z^{2n+1}}{z^{2n+1}}dz$ give zero as the result, using this approach.
This last integral would then give $\int dz=\int_{0}^{2\pi} ie^{it}dt$ after parametrisation, which also yields zero?
I don't feel like this is right at all, would anyone have any advice to guide me through where I'm going wrong, your help is much appreciated in advance.
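A numerical check can help locate the error. The sketch below evaluates the integral with the trapezoidal rule on the positively oriented unit circle $z = e^{it}$ and compares it with $2\pi i\binom{2n}{n}$; the agreement suggests that the surviving term of the binomial sum is the $k=n$ one (whose $z^{2n}$ cancels against $z^{-(2n+1)}$ to leave $1/z$), not the $k=2n$ one:

```python
import cmath
import math

def contour_integral(n, m=20_000):
    """Trapezoidal evaluation of the integral of (z + 1/z)^(2n)/z dz
    over the positively oriented unit circle, z = e^(it), dz = i*e^(it) dt."""
    total = 0j
    for j in range(m):
        t = 2.0 * math.pi * j / m
        z = cmath.exp(1j * t)
        total += (z + 1.0 / z) ** (2 * n) / z * (1j * z) * (2.0 * math.pi / m)
    return total

n = 3
print(contour_integral(n))                 # close to 125.66j
print(2j * math.pi * math.comb(2 * n, n))  # 2*pi*i * C(6, 3) = 40*pi*i
```

The trapezoidal rule is spectrally accurate on periodic integrands, so with $m$ well above the trigonometric degree the two printed values agree to machine precision.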
|
Let $k$ be a complete non-archimedean field. In definitions I have seen of bornological vector spaces over $k$ there are usually some extra assumptions on the non-archimedean field. For instance in "Espaces analytiques relatifs et théorème de finitude" by Houzel it is assumed that the valuation is non-discrete and that $k$ is maximally complete. On page 43 of "Séminaire Banach", published as Springer Lecture Notes in Mathematics volume 277, it is assumed that the valuation is non-discrete. What is the main reason behind these restrictions? I am interested in bornological vector spaces over a field with trivial valuation. Banach spaces over such a field make sense and therefore one gets a metric space and therefore a bornological set where the bornology is compatible with the linear structure. This should still be a (complete) bornological vector space hopefully. For what parts of the theory are these extra restrictions needed or useful? Are there some pathologies about the category of bornological vector spaces over a general complete non-archimedean field that are not present when you add these extra assumptions?
I try to reply to your questions:
1) In the first part of "Espaces analytiques relatifs et théorème de finitude" Houzel says that the field under consideration is supposed to be maximally complete. I don't see any point where he uses this hypothesis in his article, nor in the results that he recalls from the "Séminaire Banach". Moreover, in the second part, where he exposes the sheaf-theoretic (global) version of the results obtained in the first part, he recalls the notation and removes this hypothesis (see on page 29 the first two lines of the paragraph "Faisceaux bornologiques"). Hence my idea with respect to this hypothesis is that it is simply a misprint.
2) For sure you can develop a theory of bornological or topological vector spaces over any valued field (also over any field with more exotic structures). The problem is that already in the case of trivially valued fields you find pathologies. First, the notion of convexity is quite strange: since $k = k^\circ$ then you find that the natural generalization of the absolute convex hull of a subset $X \subset E$ (where $E$ is a $k$-vector space) is given by $\Gamma(X) = \left \{ \sum \lambda_i x_i | \lambda_i \in k^\circ = k, x_i \in X \right \}$. Hence $\Gamma(X)$ is the linear span of $X$.
The main problem in this situation is that there is no analog of the duality between bornological and topological vector spaces, which is the main topic of the Séminaire Banach. Houzel constructs two adjoint functors $t: Born \to Top$ and $b: Top \to Born$, where $Born$ is the category of bornological vector spaces of convex type and $Top$ is the category of locally convex topological vector spaces. For the $t$ functor you equip a bornological vector space of convex type with the vector space topology given by the bornivorous sets (i.e. the sets which absorb all bounded sets). And for the $b$ functor you equip a locally convex topological vector space with the von Neumann bornology.
So if you perform this construction on a seminormed vector space $E$ over a non-trivially valued field you get that $E \cong b(t(E))$ and $E \cong t(b(E))$ (and this is also true for more general $k$-vector space topologies and bornologies). This is false for a trivially valued field, even thinking of $k$ as a $1$-dimensional vector space over itself. For if $k$ is trivially valued, then all subsets of $k$ are bounded. In particular $k$ itself is bounded, so the only bornivorous set for this bornology is $k$. Therefore, in this case, $t(k)$ has the indiscrete topology, whereas the trivial valuation gives $k$ the discrete topology. Moreover, if you try to describe the von Neumann bornology of the discrete topology, you don't get a bornology, because $\{0\}$ is a neighborhood of itself and $0$ doesn't absorb anything other than itself. This means that $\{0\}$ would be the only bounded set for the von Neumann bornology, which doesn't make sense.
This lack of duality is (I think) the main issue for which Houzel excludes the trivial valuation in his work.
|
What numerical method can approximately compute the $(n-1)$-dimensional surface area of the $\ell_p$ ball $\{x\in\mathbb R^n: \sum_{i=1}^n |x_i|^p=1\}$, for $p\in[1,\infty)$? Ideally the method should handle $n$ and $p$ both in the range of 5 to 10.
One approach begins with the definition of surface area as $$ \lim_{\varepsilon\to 0^+} \frac{\mu_n(B_p + \varepsilon B_2)-\mu_n(B_p)}{\varepsilon}, $$ where $B_p$ and $B_2$ are the unit $\ell_p$ and $\ell_2$ balls, using Monte Carlo to estimate the volume of both bodies. This method fails numerically because, as $\varepsilon\to 0^+$, the two volumes are very close to each other.
Another approach uses Cauchy's integration formula, which states that the $(n-1)$-dimensional volume of $\partial B_p$ is equal to $$\frac{1}{\mu_{n-1}(B_2)}\int_{S_2}\mu_{n-1}(B_p|u)\, du,$$ where $\mu_{n-1}(B_p|u)$ is the volume of the projection of $B_p$ onto the orthogonal complement of $u$. However, this projection seems difficult to approximate numerically.
What approaches would provide a better approximation?
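One possibility with bounded-variance Monte Carlo avoids projections entirely. It relies on two facts that should be checked against the literature (both appear in work of Schechtman–Zinn and Naor on the cone and surface measures of $\ell_p^n$): (i) if $g_1,\dots,g_n$ are i.i.d. with density proportional to $e^{-|t|^p}$, then $x = g/\|g\|_p$ follows the cone measure on $\partial B_p$, and (ii) the surface measure satisfies $d\sigma = n\,\mu_n(B_p)\,|\nabla\|x\|_p|\,d\mu_{\mathrm{cone}}$, where $|\nabla\|x\|_p| = (\sum_i |x_i|^{2p-2})^{1/2}$ on the sphere. Since this gradient factor is bounded for $p \ge 1$, the estimator below is well behaved:

```python
import math
import numpy as np

rng = np.random.default_rng(1)

def lp_ball_volume(n, p):
    """Exact volume of the unit l_p ball: (2*Gamma(1+1/p))^n / Gamma(1+n/p)."""
    return (2.0 * math.gamma(1.0 + 1.0 / p)) ** n / math.gamma(1.0 + n / p)

def lp_sphere_area(n, p, samples=200_000):
    """Monte Carlo estimate of the (n-1)-dim area of {sum_i |x_i|^p = 1}.

    area = n * Vol(B_p) * E_cone[(sum_i |x_i|^(2p-2))^(1/2)], sampling the
    cone measure as x = g/||g||_p with |g_i|^p ~ Gamma(1/p, 1), random signs.
    """
    g = rng.gamma(1.0 / p, 1.0, size=(samples, n)) ** (1.0 / p)
    g *= rng.choice([-1.0, 1.0], size=(samples, n))
    x = g / np.linalg.norm(g, ord=p, axis=1, keepdims=True)
    grad_norm = np.sqrt(np.sum(np.abs(x) ** (2.0 * p - 2.0), axis=1))
    return n * lp_ball_volume(n, p) * grad_norm.mean()

print(lp_sphere_area(3, 2))  # ~ 4*pi = 12.566... (Euclidean sphere in R^3)
print(lp_sphere_area(7, 4))  # l_4 sphere in R^7
```

For $p=2$ (and $p=1$) the gradient factor is constant on the sphere, so the estimate is exact up to floating point, e.g. $4\pi$ for the Euclidean sphere in $\mathbb R^3$. For $n$ and $p$ in the 5 to 10 range, a few hundred thousand samples give a few significant digits.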
|
We provide a novel method to increase SNR of segmented diffusion-weighted EPI acquisitions. Multiple gradient echoes were acquired after each diffusion-preparation and combined in an SNR-optimized way using weightings from quantitative T2* maps. The combination of diffusion-weighted echoes yielded an SNR-gain of 58% compared to single-echo dMRI data with an increase in the segmented readout duration by only 23.1 ms. The multi-echo diffusion MRI acquisition and combination were employed to acquire high-quality ex-vivo diffusion-weighted MRI data from a wild chimpanzee brain.
Typical challenges of ex-vivo diffusion MRI (dMRI) acquisitions include low diffusivity in fixed tissue requiring strong diffusion-weightings, signal drop from trapped air, image distortions and increased echo-times due to strong diffusion-weighting requirements. Highly segmented EPI (sEPI) acquisitions can be employed to counteract these challenges by shortening echo-times and reducing image distortions in ex-vivo dMRI [1,2]. The short sEPI readout trains can easily be repeated to acquire multiple gradient-echoes (GRE) with a negligible increase in acquisition time. However, such Multi-Echo (ME) acquisitions are not employed in dMRI due to T2* signal decay between echoes. In functional MRI, T2* relaxation maps are employed for SNR-optimal echo combination [3]. In this work, an ME-sEPI Stejskal-Tanner dMRI sequence [4] was developed to achieve high-quality dMRI acquisition for ex-vivo imaging of a wild chimpanzee brain on a human-scale scanner (Figure 1A). Individual echoes were combined using T2*-dependent weighting.
MRI data were acquired from the brain of a 6-year-old juvenile wild female chimpanzee from Taï National Forest (Ivory Coast). The animal died from natural causes without human interference. The brain was extracted on site by a veterinarian four hours after death and immersion-fixed with 4% paraformaldehyde. Further preparations included the removal of superficial vessels, washing out paraformaldehyde in phosphate-buffered saline and placement in Fomblin. ME-dMRI data were acquired using a 3T Connectom System (Siemens Healthineers, Erlangen, Germany) with maximum gradient strength of 300 mT/m and a flexible 23-channel surface coil [5]. With a total left-right brain extent of 85 mm (100 mm including container), measurement on a small-bore scanner was not possible. ME-dMRI-sEPI and ME-GRE-sEPI datasets were acquired with matched resolution and acceleration (parameters in Table 1). The ME acquisition increased the readout by only 23.1 ms, which is negligible compared to the diffusion-preparation plus readout of a single echo.
MP-PCA denoising [6] was employed prior to echo-combination. A T2* map was calculated by fitting an exponential model to the ME-GRE data. A voxel-wise weighting-factor $$$w_i$$$ for each echo $$$S_i$$$ was calculated based on echo-time and T2*. For optimal SNR, the individual ME-dMRI echoes were combined using weighted averaging: $$S_{comb}=\sum_{i=1}^{n} w_i S_i=\sum_{i=1}^{n}\frac{\exp\left(-\frac{TE(i)}{T_2^*}\right)}{\sum_{j=1}^{n}\exp\left(-\frac{TE(j)}{T_2^*}\right)}S_i $$
The T2*-dependence of $$$w_i$$$ yielded a voxel-specific SNR-gain compared to single-echo dMRI. To estimate the SNR-gain, the noise was approximated as Gaussian with similar variance for all echoes. The SNR-gain was then computed as a weighted sum of Gaussian random variables.
$$\text{SNR}_{comb}=\frac{S_{comb}}{\sigma_{comb}}=\frac{S_{comb}}{\sqrt{\sum_i^nw_i^2}\sigma}$$
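The weighting and SNR-gain formulas above can be sketched numerically in a few lines of Python (the echo times and T2* value below are made-up illustration values, not the acquisition parameters):

```python
import numpy as np

def combine_echoes(S, TE, T2star):
    """T2*-weighted echo combination: w_i = exp(-TE_i/T2*) / sum_j exp(-TE_j/T2*)."""
    w = np.exp(-np.asarray(TE) / T2star)
    w = w / w.sum()                      # weights sum to 1 per voxel
    S_comb = np.dot(w, S)
    # Assuming equal Gaussian noise sigma on each echo, sigma_comb = sqrt(sum w_i^2)*sigma,
    # so the SNR gain over a single echo at the same signal level is 1/sqrt(sum w_i^2).
    snr_gain = 1.0 / np.sqrt(np.sum(w**2))
    return S_comb, w, snr_gain

TE = [0.020, 0.045, 0.070]               # hypothetical echo times in seconds
T2star = 0.040                           # hypothetical T2* in seconds
S = np.exp(-np.array(TE) / T2star)       # noiseless mono-exponential decay
S_comb, w, gain = combine_echoes(S, TE, T2star)
```

Since the nonnegative weights sum to one, $\sum_i w_i^2 \le 1$ and the estimated gain is always at least 1.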
Deterministic diffusion tensor tracking and visualization of the processed dataset was performed using the software package brainGL [https://github.com/braingl].
|
This is kind of a continuation of this question.
I want to automate these two steps in Mathematica once I give an integer $N$:

1. generate a list of variables $\{a_i\}_{i=1}^{N}$
2. then, for some function like $f(x) = e^{x}$ or $f(x) = \sin^2(x)$, compute an expression like $\prod\limits_{i \neq j} f(a_i - a_j)$ or $\sum\limits_{i < j} f(a_i - a_j)$ or $\prod\limits_{i,j} f(a_i - a_j)$

(and then hopefully integrate such expressions over all $a_i$...)

I am having to type these expressions by hand for every $N$, and that is very hard once $N$ gets large (one would typically have $\binom{N}{2}$ terms to type by hand!). I would like to know how this can be automated, since I would typically need to use large $N$.
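For comparison, the same automation is straightforward to prototype symbolically; here is a sketch in Python with SymPy (in Mathematica, `Array`, `Product`, and `Sum` would play the analogous roles):

```python
import sympy as sp

N = 4
a = sp.symbols(f"a1:{N + 1}")                      # the list {a_1, ..., a_N}

f = lambda x: sp.sin(x) ** 2                       # any scalar function works here

# prod_{i != j} f(a_i - a_j)
prod_ij = sp.prod(f(a[i] - a[j])
                  for i in range(N) for j in range(N) if i != j)

# sum_{i < j} f(a_i - a_j): binomial(N, 2) terms, generated automatically
sum_ij = sp.Add(*[f(a[i] - a[j])
                  for i in range(N) for j in range(i + 1, N)])
```

Integration over the $a_i$ then reduces to calling `sp.integrate(sum_ij, (a[0], lo, hi), ...)`, although for large $N$ symbolic integration quickly becomes the bottleneck.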
(...also, help with that previous question would be great!)
|
It is well-known that $Qcoh$ is a fibered category on $Sch$. In more detail, let $\mathcal{C}$ be the category $(Sch/S)$ of schemes over a fixed base scheme $S$. For each scheme $U$ we define $Qcoh(U)$ to be the category of quasi-coherent sheaves on $U$. Given a morphism $f : U \to V$, we have a functor $f^* : Qcoh(V) \to Qcoh(U)$. However we don't have $(gf)^*=f^*g^*$ on the nose, but there is a canonical natural equivalence between them. See Vistoli's notes Section 3.1 and 3.2.1.
Now let $X$ be a scheme and $U_i$ be an open cover of $X$. We have the following cosimplicial diagram of categories $$ \prod Qcoh(U_i)\rightrightarrows\prod Qcoh(U_i\times_X U_j)\Rrightarrow\prod Qcoh(U_i\times_X U_j\times_X U_k)\ldots $$ Keep in mind that this diagram is only a pseudo diagram in $Cat$, the $2$-category of categories, i.e. the cosimplicial identities only hold up to canonical natural equivalence.
It is also well-known that the descent data of quasi-coherent sheaves is given by a collection $(\xi_i,\phi_{ij})$, where $\xi_i$ is a quasi-coherent sheaf on each $U_i$ and $\phi_{ij}$ is an isomorphism $pr_2^*\xi_j\to pr_1^*\xi_i$ in $Qcoh(U_{i}\times_X U_j)$ which satisfies the cocycle condition $$ pr_{13}^*\phi_{ik}=pr_{12}^*\phi_{ij}\circ pr_{23}^*\phi_{jk}. $$ on $U_i\times_X U_j\times_X U_k$. See Vistoli's notes Section 4.1.2.
The above descent data is given "by hand". On the other hand, I've heard that descent data is a kind of (homotopy) limit. Nevertheless Vistoli's notes don't consider homotopy limits.
$\textbf{My question}$ is: is there any reference which studies the (pseudo) homotopy limit (I'm not sure whether we should call it 2-categorical limit) of the above cosimplicial pseudo-diagram and show that it coincides with the descent data given in the literature?
|
Hello,
working on some machine learning problem, I ended up facing a problem which looks like a generalization of the notion of a Cauchy product.
I briefly recall Cauchy products before posing my question. Consider two sequences $(a_n)_{n \in \mathbb N}$ and $(b_n)_{n \in \mathbb N}$ whose series are assumed to be absolutely convergent (for simplicity). Then one can define another sequence $c_n = \sum_{k = 0}^{n}a_k b_{n-k}$ such that $$\sum_{n=0}^{+\infty} c_n = \Big(\sum_{n=0}^{+\infty} a_n\Big)\Big(\sum_{n=0}^{+\infty} b_n\Big)$$ In this simple framework, this is known as Mertens' theorem and the (pedestrian) proof can be found in this wikipedia page.
My question concerns a possible generalization / extension of this result to the case where the sequence $b$ has two indices. More precisely, if one introduces another sequence $(\theta_n)_{n \in \mathbb N}$, I would like to consider the case $$b_k^n = \prod_{i = k}^{n} \theta_i$$ and I am interested in computing the double sum $$\sum_{n=0}^{+\infty}\sum_{k = 0}^{n}a_k b_k^n$$ Note that, if all the $\theta_i$ are equal (and smaller than 1), then we fall under the range of application of Mertens' theorem. But what if the $\theta_i$ are different? Is it still possible to have such a result where the sums eventually separate?
I have tried but failed to extend the proof of Mertens' theorem to this case. Any link to references which may help me figure this out would be most welcome. Besides, if one could show me a path to learn more about sequences with two indices, I would be very happy!
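As a numerical sanity check (not a proof) of the constant case $\theta_i = \theta$, where $b_k^n = \theta^{n-k+1}$ and the double sum should factor as $\big(\sum_k a_k\big)\cdot\theta/(1-\theta)$, a short Python script agrees with the closed form:

```python
K = 200                                    # truncation length; the tails are negligible
theta = [1 / 3] * K                        # constant theta_i = 1/3 (Mertens-like case)
a = [0.5 ** k for k in range(K)]           # sum(a_k) = 2 in the limit

def b(k, n):
    """b_k^n = prod_{i=k}^{n} theta_i."""
    p = 1.0
    for i in range(k, n + 1):
        p *= theta[i]
    return p

double_sum = sum(a[k] * b(k, n) for n in range(K) for k in range(n + 1))

# With constant theta: sum_n sum_{k<=n} a_k * theta^(n-k+1)
#   = (sum_k a_k) * theta/(1-theta) = 2 * (1/3)/(2/3) = 1
```

With non-constant $\theta_i$ the same script lets one probe whether any separated closed form remains plausible.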
Cheers
|
Combinatorial and Discrete Optimization (2019 KSIAM Annual Meeting) November 8 Friday @ 12:00 PM - November 9 Saturday @ 7:00 PM Special Session @ 2019 KSIAM Annual Meeting Date Nov 8, 2019 – Nov 9, 2019
Venue
Address: 61-13 Odongdo-ro, Sujeong-dong, Yeosu-si, Jeollanam-do (전남 여수시 오동도로 61-13)
Speakers
Hyung-Chan An (안형찬), Yonsei University
Tony Huynh, Monash University
Dong Yeap Kang (강동엽), KAIST / IBS Discrete Mathematics Group
Dabeen Lee (이다빈), IBS Discrete Mathematics Group
Kangbok Lee (이강복), POSTECH
Sang-il Oum (엄상일), IBS Discrete Mathematics Group / KAIST
Kedong Yan, Nanjing University of Science and Technology
Se-Young Yun (윤세영), KAIST
Schedules
Combinatorial and Discrete Optimization I: November 8, 2019 Friday (Time: TBD)
Kangbok Lee, Se-Young Yun, Kedong Yan, Dabeen Lee
Combinatorial and Discrete Optimization II: November 9, 2019 Saturday (Time: TBD)
Hyung-Chan An, Tony Huynh, Dong Yeap Kang, Sang-il Oum
Abstracts
Kangbok Lee (이강복), Bi-criteria scheduling
The bi-criteria scheduling problems that minimize the two most popular scheduling objectives, namely the makespan and the total completion time, are considered. Given a schedule, the makespan, denoted as $C_\max$, is the latest completion time of the jobs, and the total completion time, denoted as $\sum C_j$, is the sum of the completion times of the jobs. These two objectives have received a lot of attention in the literature because of their practical implications. Scheduling problems are difficult to solve even for a single criterion. On the other hand, when it comes to a bi-criteria problem, a balanced solution coordinating both objectives is essential. In this paper, we consider bi-criteria scheduling problems on $m$ identical parallel machines where $m$ is 2, 3 and an arbitrary number, denoted as $P2 || (C_\max,\sum C_j)$, $P3 || (C_\max,\sum C_j)$ and $P || (C_\max,\sum C_j)$, respectively. For each problem, we explore its inapproximability and develop an approximation algorithm with analysis of its worst-case performance.
Se-Young Yun (윤세영), Optimal sampling and clustering algorithms in the stochastic block model
This paper investigates the design of joint adaptive sampling and clustering algorithms in the Stochastic Block Model (SBM). To extract hidden clusters from the data, such algorithms sample edges sequentially in an adaptive manner, and after gathering edge samples, return cluster estimates. We derive information-theoretical upper bounds on the cluster recovery rate. These bounds reveal the optimal sequential edge sampling strategy, and interestingly, the latter does not depend on the sampling budget, but only the parameters of the SBM. We devise a joint sampling and clustering algorithm matching the recovery rate upper bounds. The algorithm initially uses a fraction of the sampling budget to estimate the SBM parameters, and to learn the optimal sampling strategy. This strategy then guides the remaining sampling process, which confers the optimality of the algorithm.
Kedong Yan, Cliques for multi-term linearization of 0-1 multilinear program for boolean logical pattern generation
Logical Analysis of Data (LAD) is a combinatorial optimization-based machine learning method. A key stage of LAD is pattern generation, where useful knowledge in a training dataset of two types of, say, + and − data under analysis is discovered. LAD pattern generation can be cast as a 0-1 multilinear program (MP) with a single 0-1 multilinear constraint:
$$(PG): \max\limits_{x\in\{0,1\}^{2n}}f(x):=\sum_{i\in S^+}\prod_{j\in J_i}(1-x_j)~~\text{subject to}~~g(x):=\sum_{i\in S^-}\prod_{j\in J_i}(1-x_j)=0$$
The unconstrained maximization of $f$ (without $g$) is straightforward, thus the main difficulty of globally maximizing $(PG)$ arises primarily from the presence of g and the interaction between $f$ and $g$. We dealt with the task of linearizing $g$. Namely, we employed a graph theoretic analysis of data to discover sufficient conditions among neighboring data and also neighboring groups of data for ‘compactly linearizing’ $g$ in terms of a small number of stronger valid inequalities, as compared to those that can be obtained via 0-1 linearization techniques from the literature.
In an earlier work, we analyzed + and − data (that is, terms of $f$ and $g$ together) on a graph to develop a polyhedral overestimation scheme for $f$. Extending this line of research, this paper proposes a new graph representation of monomials in $f$ in conjunction with terms in $g$ to more aggressively aggregate a set of terms/data through each maximal clique in the graph into yielding a stronger valid inequality. This is achieved by means of a new notion of ‘neighbors’ that allows us to join two data that are more than 1-Hamming distance away from each other by an edge in the graph. We show that the new inequalities generalize and subsume those from the earlier paper. Furthermore, using six benchmark data mining datasets, we demonstrate that the new inequalities are superior to their predecessors in terms of a more efficient global maximization of $(PG)$; that is, for a more efficient analysis and classification of real-life datasets.
Dabeen Lee (이다빈), Joint Chance-constrained programs and the intersection of mixing sets through a submodularity lens
The intersection of mixing sets with common binary variables arise when modeling joint linear chance-constrained programs with random right-hand sides and finite sample space. In this talk, we first establish a strong and previously unrecognized connection of mixing sets to submodularity. This viewpoint enables us to unify and extend existing results on polyhedral structures of mixing sets. Then we study the intersection of mixing sets with common binary variables and also linking constraint lower bounding a linear function of the continuous variables. We propose a new class of valid inequalities and characterize when this new class along with the mixing inequalities are sufficient to describe the convex hull.
Hyung-Chan An (안형찬), Constant-factor approximation algorithms for parity-constrained facility location problems
Facility location is a prominent optimization problem that has inspired a large quantity of both theoretical and practical studies in combinatorial optimization. Although the problem has been investigated under various settings reflecting typical structures within the optimization problems of practical interest, little is known on how the problem behaves in conjunction with parity constraints. This shortfall of understanding was rather disturbing when we consider the central role of parity in the field of combinatorics. In this paper, we present the first constant-factor approximation algorithm for the facility location problem with parity constraints. We are given as the input a metric on a set of facilities and clients, the opening cost of each facility, and the parity requirement—$\mathsf{odd}$, $\mathsf{even}$, or $\mathsf{unconstrained}$—of every facility in this problem. The objective is to open a subset of facilities and assign every client to an open facility so as to minimize the sum of the total opening costs and the assignment distances, but subject to the condition that the number of clients assigned to each open facility must have the same parity as its requirement.
Although the unconstrained facility location problem as a relaxation for this parity-constrained generalization has an unbounded gap, we demonstrate that it yields a structured solution whose parity violation can be corrected at small cost. This correction is prescribed by a $T$-join on an auxiliary graph constructed by the algorithm. This graph does not satisfy the triangle inequality, but we show that a carefully chosen set of shortcutting operations leads to a cheap and sparse $T$-join. Finally, we bound the correction cost by exhibiting a combinatorial multi-step construction of an upper bound. At the end of this paper, we also present the first constant-factor approximation algorithm for the parity-constrained $k$-center problem, the bottleneck optimization variant.
Dong Yeap Kang (강동엽), The Alon-Tarsi number of subgraphs of a planar graph
In 1985, Mader showed that every $n(\geq4k+3)$-vertex strongly $k$-connected digraph contains a spanning strongly $k$-connected subgraph with at most $2kn-2k^2$ edges, and the only extremal digraph is a complete bipartite digraph $DK_{k,n-k}$. Nevertheless, since the extremal graph is sparse, Bang-Jensen asked whether there exists $g(k)$ such that every strongly $k$-connected $n$-vertex tournament contains a spanning strongly $k$-connected subgraph with $kn + g(k)$ edges, which is an “almost $k$-regular” subgraph.
Recently, the question of Bang-Jensen was answered in the affirmative with $g(k) = O(k^2\log k)$, which is best possible up to logarithmic factor. In this talk, we discuss how to find minimal highly connected spanning subgraphs in dense digraphs as well as tournaments. In particular, we show that every highly connected dense digraph contains a spanning highly connected subgraph that is almost $k$-regular, which yields $g(k) = O(k^2)$ that is best possible for tournaments.
Tony Huynh, Stable sets in graphs with bounded odd cycle packing number
It is a classic result that the maximum weight stable set problem is efficiently solvable for bipartite graphs. The recent bimodular algorithm of Artmann, Weismantel and Zenklusen shows that it is also efficiently solvable for graphs without two disjoint odd cycles. The complexity of the stable set problem for graphs without $k$ disjoint odd cycles is a long-standing open problem for all other values of $k$. We prove that under the additional assumption that the input graph is embedded in a surface of bounded genus, there is a polynomial-time algorithm for each fixed $k$. Moreover, we obtain polynomial-size extended formulations for the respective stable set polytopes.
To this end, we show that 2-sided odd cycles satisfy the Erdős–Pósa property in graphs embedded in a fixed surface. This result may be of independent interest and extends a theorem of Kawarabayashi and Nakamoto asserting that odd cycles satisfy the Erdős–Pósa property in graphs embedded in a fixed orientable surface.
Eventually, our findings allow us to reduce the original problem to the problem of finding a minimum-cost non-negative integer circulation of a certain homology class, which we prove to be efficiently solvable in our case.
Sang-il Oum, Rank-width: Algorithmic and structural results
Rank-width is a width parameter of graphs describing whether it is possible to decompose a graph into a tree-like structure by ‘simple’ cuts. This talk aims to survey known algorithmic and structural results on rank-width of graphs. This talk is based on a survey paper with further remarks on the recent developments.
|
Shuheng Zheng
Work as a software engineer by day. Rest of the time doodle with math, physics, or read about political economy and history. Eventually realized politics is circular and futile and then go back to math.
Seattle, Washington, United States
Member for 2 years, 6 months
|
Let $P(z),Q(z)$ be polynomials with $\deg Q-\deg P\ge 2$. Suppose $Q$ has no real roots.
Call the set of numbers that can be written as $p+iq$ ($p,q$ rational) the rational complex numbers.
Suppose $P,Q$ have coefficients that are rational complex numbers.
Prove or disprove:
$$\int^\infty_{-\infty}\frac{P(z)}{Q(z)}dz$$ and $\pi$ must be linearly dependent over rational complex numbers.
Complex analysis approach: Since the residues of $\frac{P}{Q}$ are also rational complex, by the residue theorem the integral and $\pi$ are linearly dependent.
Is this argument correct? If so, what is a real analysis approach to this problem?
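As a quick symbolic check of one instance (not a proof of the general claim), SymPy evaluates a simple rational-complex-coefficient case exactly: with $P=1$, $Q=x^2+1$ the integral is $\pi$, a rational multiple of $\pi$, consistent with the residue argument.

```python
import sympy as sp

x = sp.symbols("x", real=True)
# One instance with rational coefficients, deg Q - deg P = 2, Q with no real roots
val = sp.integrate(1 / (x**2 + 1), (x, -sp.oo, sp.oo))   # classical arctan integral
coeff = sp.simplify(val / sp.pi)                         # coefficient multiplying pi
```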
|
Rate–distortion theory is a major branch of information theory which provides the theoretical foundations for lossy data compression; it addresses the problem of determining the minimal number of bits per symbol, as measured by the rate R, that should be communicated over a channel, so that the source (input signal) can be approximately reconstructed at the receiver (output signal) without exceeding a given distortion D.
Contents
Introduction
Rate–distortion functions
Memoryless (independent) Gaussian source
Connecting rate-distortion theory to channel capacity
See also
References
External links

Introduction
Rate–distortion theory gives an analytical expression for how much compression can be achieved using lossy compression methods. Many of the existing audio, speech, image, and video compression techniques have transforms, quantization, and bit-rate allocation procedures that capitalize on the general shape of rate–distortion functions.
Rate–distortion theory was created by Claude Shannon in his foundational work on information theory.
In rate–distortion theory, the rate is usually understood as the number of bits per data sample to be stored or transmitted. The notion of distortion is a subject of ongoing discussion. In the simplest case (which is actually used in most cases), the distortion is defined as the expected value of the square of the difference between input and output signal (i.e., the mean squared error). However, since we know that most lossy compression techniques operate on data that will be perceived by human consumers (listening to music, watching pictures and video), the distortion measure should preferably be modeled on human perception and perhaps aesthetics: much like the use of probability in lossless compression, distortion measures can ultimately be identified with loss functions as used in Bayesian estimation and decision theory. In audio compression, perceptual models (and therefore perceptual distortion measures) are relatively well developed and routinely used in compression techniques such as MP3 or Vorbis, but are often not easy to include in rate–distortion theory. In image and video compression, the human perception models are less well developed and inclusion is mostly limited to the JPEG and MPEG weighting (quantization, normalization) matrices.
Rate–distortion functions
The functions that relate the rate and distortion are found as the solution of the following minimization problem:
$$\inf_{Q_{Y|X}(y|x)} I_Q(Y;X)\ \mbox{subject to}\ D_Q \le D^*.$$
Here $Q_{Y|X}(y|x)$, sometimes called a test channel, is the conditional probability density function (PDF) of the communication channel output (compressed signal) $Y$ for a given input (original signal) $X$, and $I_Q(Y;X)$ is the mutual information between $Y$ and $X$, defined as
$$I(Y;X) = H(Y) - H(Y|X)$$
where $H(Y)$ and $H(Y|X)$ are the entropy of the output signal $Y$ and the conditional entropy of the output signal given the input signal, respectively:
$$H(Y) = - \int_{-\infty}^\infty P_Y (y) \log_{2} (P_Y (y))\,dy$$
$$H(Y|X) = - \int_{-\infty}^{\infty} \int_{-\infty}^\infty Q_{Y|X}(y|x) P_X (x) \log_{2} (Q_{Y|X} (y|x))\, dx\, dy.$$
The problem can also be formulated as a distortion–rate function, where we find the infimum over achievable distortions for given rate constraint. The relevant expression is:
$$\inf_{Q_{Y|X}(y|x)} E[D_Q[X,Y]]\ \mbox{subject to}\ I_Q(Y;X)\leq R.$$
The two formulations lead to functions which are inverses of each other.
The mutual information can be understood as a measure of the 'prior' uncertainty the receiver has about the sender's signal ($H(Y)$), diminished by the uncertainty that is left after receiving information about the sender's signal ($H(Y|X)$). Of course the decrease in uncertainty is due to the communicated amount of information, which is $I(Y;X)$.
As an example, in case there is no communication at all, then $H(Y|X) = H(Y)$ and $I(Y;X) = 0$. Alternatively, if the communication channel is perfect and the received signal $Y$ is identical to the signal $X$ at the sender, then $H(Y|X) = 0$ and $I(Y;X) = H(Y) = H(X)$.
In the definition of the rate–distortion function, $D_Q$ and $D^*$ are the distortion between $X$ and $Y$ for a given $Q_{Y|X}(y|x)$ and the prescribed maximum distortion, respectively. When we use the mean squared error as distortion measure, we have (for amplitude-continuous signals):
$$D_Q = \int_{-\infty}^\infty \int_{-\infty}^\infty P_{X,Y}(x,y) (x-y)^2\, dx\, dy = \int_{-\infty}^\infty \int_{-\infty}^\infty Q_{Y|X}(y|x)P_{X}(x) (x-y)^2\, dx\, dy.$$
As the above equations show, calculating a rate–distortion function requires the stochastic description of the input $X$ in terms of the PDF $P_X(x)$, and then aims at finding the conditional PDF $Q_{Y|X}(y|x)$ that minimizes the rate for a given distortion $D^*$. These definitions can be formulated measure-theoretically to account for discrete and mixed random variables as well.
An analytical solution to this minimization problem is often difficult to obtain except in some instances for which we next offer two of the best known examples. The rate–distortion function of any source is known to obey several fundamental properties, the most important ones being that it is a continuous, monotonically decreasing convex (U) function and thus the shape for the function in the examples is typical (even measured rate–distortion functions in real life tend to have very similar forms).
Although analytical solutions to this problem are scarce, there are upper and lower bounds to these functions including the famous Shannon lower bound (SLB), which in the case of squared error and memoryless sources, states that for arbitrary sources with finite differential entropy,
$$R(D) \ge h(X) - h(D)$$
where $h(D)$ is the differential entropy of a Gaussian random variable with variance $D$. This lower bound is extensible to sources with memory and other distortion measures. One important feature of the SLB is that it is asymptotically tight in the low distortion regime for a wide class of sources, and on some occasions it actually coincides with the rate–distortion function. Shannon lower bounds can generally be found if the distortion between any two numbers can be expressed as a function of the difference between the values of these two numbers.
The Blahut–Arimoto algorithm, co-invented by Richard Blahut, is an elegant iterative technique for numerically obtaining rate–distortion functions of arbitrary finite input/output alphabet sources and much work has been done to extend it to more general problem instances.
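To make the algorithm concrete, here is a minimal Python sketch of Blahut–Arimoto for the rate–distortion function, parameterized by a Lagrange slope $\beta$ that trades rate against distortion. It is checked below against the binary-uniform source with Hamming distortion, whose rate–distortion function $R(D) = 1 - H_b(D)$ is known in closed form.

```python
import numpy as np

def blahut_arimoto_rd(p_x, d, beta, n_iter=200):
    """Blahut-Arimoto iteration for R(D) (sketch).
    p_x: source distribution, d: distortion matrix d[x, y],
    beta: slope parameter selecting a point on the R(D) curve."""
    n_x, n_y = d.shape
    q_y = np.full(n_y, 1.0 / n_y)            # output marginal, initialized uniform
    for _ in range(n_iter):
        # test-channel update: Q(y|x) proportional to q(y) * exp(-beta * d(x,y))
        Q = q_y[None, :] * np.exp(-beta * d)
        Q /= Q.sum(axis=1, keepdims=True)
        q_y = p_x @ Q                        # output-marginal update
    D = np.sum(p_x[:, None] * Q * d)                             # achieved distortion
    R = np.sum(p_x[:, None] * Q * np.log2(Q / q_y[None, :]))     # mutual info (bits)
    return R, D

# Binary uniform source, Hamming distortion: known R(D) = 1 - H_b(D)
p = np.array([0.5, 0.5])
d = np.array([[0.0, 1.0], [1.0, 0.0]])
R, D = blahut_arimoto_rd(p, d, beta=2.0)
```

Sweeping $\beta$ over positive values traces out the whole rate–distortion curve.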
When working with stationary sources with memory, it is necessary to modify the definition of the rate distortion function and it must be understood in the sense of a limit taken over sequences of increasing lengths.
$$R(D) = \lim_{n \rightarrow \infty} R_n(D)$$
where
$$R_n(D) = \frac{1}{n} \inf_{Q_{Y^n|X^n} \in \mathcal{Q}} I(Y^n, X^n)$$
and
$$\mathcal{Q} = \{ Q_{Y^n|X^n}(Y^n|X^n,X_0): E[d(X^n,Y^n)] \leq D \}$$
where superscripts denote a complete sequence up to that time and the subscript $0$ indicates the initial state.
Memoryless (independent) Gaussian source
If we assume that $P_X(x)$ is Gaussian with variance $\sigma_x^2$, and if we assume that successive samples of the signal $X$ are stochastically independent (or equivalently, the source is memoryless, or the signal is uncorrelated), we find the following analytical expression for the rate–distortion function:
$$R(D) = \left\{ \begin{matrix} \frac{1}{2}\log_2(\sigma_x^2/D ), & \mbox{if } 0 \le D \le \sigma_x^2 \\ \\ 0, & \mbox{if } D > \sigma_x^2. \end{matrix} \right.$$ [1]
The following figure shows what this function looks like:
Rate–distortion theory tells us that 'no compression system exists that performs outside the gray area'. The closer a practical compression system is to the red (lower) bound, the better it performs. As a general rule, this bound can only be attained by increasing the coding block length parameter. Nevertheless, even at unit blocklengths one can often find good (scalar) quantizers that operate at distances from the rate–distortion function that are practically relevant. [1]
This rate–distortion function holds only for Gaussian memoryless sources. It is known that the Gaussian source is the most "difficult" source to encode: for a given mean square error, it requires the greatest number of bits. The performance of a practical compression system working on, say, images, may well be below the $R(D)$ lower bound shown.
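The closed form above is trivial to evaluate numerically; here is a small sketch (assuming the mean-squared-error distortion used throughout this section):

```python
import numpy as np

def rate_distortion_gaussian(D, sigma2):
    """R(D) in bits/sample for a memoryless Gaussian source with variance sigma2."""
    D = np.asarray(D, dtype=float)
    # 0.5*log2(sigma2/D) for 0 <= D <= sigma2, and 0 beyond the source variance
    return np.where(D < sigma2, 0.5 * np.log2(np.maximum(sigma2 / D, 1.0)), 0.0)

sigma2 = 1.0
r = rate_distortion_gaussian(sigma2 / 4, sigma2)   # 0.5*log2(4) = 1 bit/sample
```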
Connecting rate-distortion theory to channel capacity
Suppose we want to transmit information about a source to the user with a distortion not exceeding $D$. Rate–distortion theory tells us that at least $R(D)$ bits/symbol of information from the source must reach the user. We also know from Shannon's channel coding theorem that if the source entropy is $H$ bits/symbol, and the channel capacity is $C$ (where $C < H$), then $H - C$ bits/symbol will be lost when transmitting this information over the given channel. For the user to have any hope of reconstructing with a maximum distortion $D$, we must impose the requirement that the information lost in transmission does not exceed the maximum tolerable loss of $H - R(D)$ bits/symbol. This means that the channel capacity must be at least as large as $R(D)$.
See also
References
1. ^ Thomas M. Cover, Joy A. Thomas (2006). Elements of Information Theory. John Wiley & Sons, New York.
2. ^ Toby Berger (1971). Rate Distortion Theory: A Mathematical Basis for Data Compression. Prentice Hall.
External links
VcDemo Image and Video Compression Learning Tool
|
Modular Inverse
Tags:
crypto Introduction
I wasn’t happy with the very brief introduction we did to modular inverse and so I decided that I wanted to create an individual post on the matter, how it works, why we have it and so on. I think this post will be greatly beneficial to anyone who read the last two posts and was slightly confused so I’m aiming to clear all of that confusion up here.
What is Modular Inverse?
In modular arithmetic we do not actually have a division operation; instead, division is thought of in terms of the multiplicative inverse. You're probably asking yourself why, and I will explain that now. There is a little bit of number theory here, so non-math geeks bear with me and I'll do my best to explain it simply.
Why can’t we use the division operator?
As we've covered already in our Modulo post, performing arithmetic with moduli is actually pretty easy for addition, subtraction and multiplication: we calculate as in normal arithmetic and then reduce the result to the smallest positive remainder by dividing by the modulus. An example would be the following:
$$(4 + 5) \bmod 6 = 9 \bmod 6 = 3$$
We covered that quite a lot so I won't break it down again; if you don't get this please see my A Not-So-Master Class in Modulo post for further detail. We can often simplify the calculations because for any integers $a_1, b_1, a_2, b_2$, if we know that $a_1 \equiv b_1 \bmod m$ and $a_2 \equiv b_2 \bmod m$, then the following always holds:
$$a_1 + a_2 \equiv b_1 + b_2 \bmod m \quad\text{and}\quad a_1 a_2 \equiv b_1 b_2 \bmod m$$
However, things are not this simple for division, because division is not defined for every number, meaning it's not always possible to perform division in modular arithmetic. Take the number 0 for example: as in normal arithmetic, division by zero is not defined, so 0 cannot be the divisor. The problem is that the multiples of the modulus are congruent to 0. As an example, $\{-12, -6, 6, 12\}$ are all congruent to 0 with a modulus of 6. So not only is $4/0$ not allowed, but neither would $4/12$ be allowed with a modulus of 6.
We also know that division is defined through multiplication, but we run into problems trying to extend this to modular arithmetic. Let's say we're working $\bmod 6$ and we want to compute $4/5$. What we need to find here is $x$ such that $5 \cdot x \equiv 4 \bmod 6$; the only value that satisfies this is 2, because we can only go as high as 5 since our modulus is 6. Thus $4/5 \equiv 2 \bmod 6$. What if we wanted to compute $4/2 \equiv x \bmod 6$? At face value it seems quite easy: we can just do $2 \cdot 2 \equiv 4 \bmod 6$. But there is actually another possibility: $2 \cdot 5 \equiv 4 \bmod 6$. So look, division is not uniquely defined, because we have two numbers that we can multiply by 2 to give 4, and again in this case division would not be allowed.
So when is division defined?
It's quite simple: when the multiplicative inverse exists! As we covered before, the inverse of an integer $a$ under a modulus $m$ exists if and only if $a$ and $m$ are coprime, that is, when the only positive integer which divides both $a$ and $m$ is 1.
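In code this is a one-liner: Python 3.8+ exposes the modular inverse directly through the built-in `pow` (it raises `ValueError` when $a$ and $m$ are not coprime), which lets us replay the $4/5 \bmod 6$ example from above:

```python
from math import gcd

m = 6
for a in range(1, m):
    if gcd(a, m) == 1:                    # inverse exists iff a and m are coprime
        inv = pow(a, -1, m)               # Python 3.8+: modular inverse
        print(f"{a}^-1 mod {m} = {inv}")  # a * inv is congruent to 1 (mod m)

# "Division" b/a (mod m) is multiplication by the inverse, e.g. 4/5 mod 6:
x = (4 * pow(5, -1, 6)) % 6               # gives 2, matching 5*2 = 10 = 4 (mod 6)
```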
Closing Notes
As mentioned before, we will cover the Euclidean algorithm later on in the series, this allows us to decide whether two numbers are coprime for now, we will keep it there. That was quite a lot of number theory so I suggest you do further reading if you’re still confused, I did my best to explain it based off a math paper I read a while back so if you still think this was too much math for you to understand perhaps try something like Khan Academy as they do have a good course on modular arithmetic and all the little details.
I realise that my scheduled post was Cryptanalysis, but with me starting the OSCP soon and being dissatisfied with my lack of explanation of the multiplicative inverse, I thought I'd make this post instead and come back to the cryptanalysis material when I have more time. That content is going to involve a lot of writing, a lot of detail, and code samples, so it's quite some work, and I want to ensure the posts I make are the best they can be (hence me backtracking a little into modular arithmetic). Anyway, as always, thanks for reading and I'll see you in the next post!
|
Consider the following sequence $a_n$:
\begin{align*} a_0 &= \alpha \\ a_k &= \beta a_{k-1} + \kappa \end{align*}
Now consider the implementation of this sequence via lisp:
(defun an (k)
  (if (= k 0)
      alpha
      (+ (* beta (an (- k 1))) kappa)))
What is the running time to compute (an k) just as it is written? Also, what would be the maximum stack depth reached? My hypothesis would be that both the running time and the stack depth are $O(k)$, because it takes $k$ recursive calls to get to the base case, and there is one call per step and one stack push per step.
Is this analysis correct?
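One way to probe that hypothesis empirically is to transcribe the definition and instrument it (a Python sketch; the parameter values are arbitrary):

```python
alpha, beta, kappa = 1, 2, 3  # arbitrary sample parameters

def an(k, depth, stats):
    # Direct transcription of the Lisp definition, instrumented to count
    # total calls and the deepest recursion level reached.
    stats["calls"] += 1
    stats["max_depth"] = max(stats["max_depth"], depth)
    if k == 0:
        return alpha
    return beta * an(k - 1, depth + 1, stats) + kappa

stats = {"calls": 0, "max_depth": 0}
an(100, 1, stats)
print(stats)  # {'calls': 101, 'max_depth': 101}: both grow linearly in k
```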
|
I am interesting in solving the following nonlinear, time-dependent pde in 2 spatial dimensions (complex Gross-Pitaevskii eq):
$$i \frac{\partial \psi}{\partial t} = \left[ -\nabla^2 + (1-i \sigma)(|\psi|^2-1) \right] \psi$$
The goal is to find steady state solutions for the function $\psi(x,y,t)$, for different parameters $\sigma$. Here $\sigma$ is a positive real-valued parameter. The boundary conditions can initially be of the Dirichlet type, setting the function $\psi$ to zero on the contour of a square, but later on I plan to implement some absorbing BC (perhaps using the perfectly matched layer method?). I am also considering solving the equation in a different geometry later on, on a disk using polar coordinates.
For the time-dependence, I plan to use the Gryphon module of Erik Skare (https://launchpad.net/gryphonproject), which is basically a Runge-Kutta solver and for the spatial part Fenics.
So the question is, do you think this is feasible to do with Fenics, and if so, how would one proceed?
|
\begin{align}\begin{split}\pd{EU}{\alpha} &= d(\theta) U'(\pi_{1}) [G_{\alpha}(\alpha, \theta, \epsilon) - \widebar{P} C_{\alpha}(\alpha, \theta)] \\ &+ (1-d(\theta)) U'(\pi_{0}) [G_{\alpha}(\alpha, \theta, 0) - C_{\alpha}(\alpha, \theta)] = 0,\\\pd{EU}{\theta} &= h'(\theta)[U(\pi_{0}) - U(\pi_{1})] \\ &+ d(\theta) U'(\pi_{1})[G_{\theta}(\alpha, \theta, \epsilon) - \widebar{P} C_{\theta}(\alpha, \theta)] \\ &+(1-d(\theta))U'(\pi_{0})[G_{\theta}(\alpha, \theta, 0) - 1] = 0.\\\end{split}\end{align}
I want to label the equations 4 and 5 and then reference them. Here I only have the two equations labelled as one.
EDIT:
From the answer below, I was able to label the equations:
\begin{align}\begin{split}\pd{EU}{\alpha} &= d(\theta) U'(\pi_{1}) [G_{\alpha}(\alpha, \theta, \epsilon) - \widebar{P} C_{\alpha}(\alpha, \theta)] \\ &+ (1-d(\theta)) U'(\pi_{0}) [G_{\alpha}(\alpha, \theta, 0) - C_{\alpha}(\alpha, \theta)]=0\end{split}\label{eqn:4}\\\begin{split}\pd{EU}{\theta} &= h'(\theta)[U(\pi_{0}) - U(\pi_{1})] \\ &+ d(\theta) U'(\pi_{1})[G_{\theta}(\alpha, \theta, \epsilon) - \widebar{P} C_{\theta}(\alpha, \theta)] \\ &+(1-d(\theta))U'(\pi_{0})[G_{\theta}(\alpha, \theta, 0) - 1] = 0.\end{split}\label{eqn:5}\end{align}
However, now the two equations are not aligned with each other, and I want them to be aligned. Any suggestions?
|
I couldn’t resist getting sucked into the hype associated with the US election and debates, and so I thought I had a little fun of my own and played around a bit with the numbers.
[OK: you may disagree with the definition of “fun” $-$ but then again, if you’re reading this you probably don’t…] So, I looked on the internet to find reasonable data on the polls. Of course there are a lot of limitations to this strategy. First, I’ve not bothered doing some sort of proper evidence synthesis, taking into account the different polls and pooling them in a suitable way. There are two reasons why I didn’t: the first one is that not all the data are publicly available (as far as I can tell), so you have to make do with what you can find; second, I did find some info here, which seems to have accounted for this issue anyway. In particular, this website contains some sort of pooled estimates for the proportion of people who are likely to vote for either candidate, by state, together with a “confidence” measure (more on this later). Because not all the states have data, I have also looked here and found some additional info. Leaving aside representativeness issues (which I’m assuming are not a problem, but may well be, if this were a real analysis!), the second limitation is of course that voting intentions may not directly translate into actual votes. I suppose there are some studies out there to quantify this, but again, I’m making life (too) easy and discount this effect.

The data on the polls that I have collected in a single spreadsheet look like this:

   ID State Dem Rep State_Name  Voters Confidence
    1 AK     38  62 Alaska           3    99.9999
    2 AL     36  54 Alabama          9    99.9999
    3 AR     35  56 Arkansas         6    99.9999
    4 AZ     44  52 Arizona         11    99.9999
    5 CA     53  38 California      55    99.9999
  ...  ...  ... ... ...            ...        ...
The columns Dem and Rep represent the pooled estimation of the proportion of voters for the two main parties (of course, they may not sum to 100%, due to other possible minor candidates or undecided voters). The column labelled Voters gives the number of Electoral Votes (EVs) in each state (eg if you win at least 50% of the votes in Alaska, this is associated with 3 votes overall, etc). Finally, the column Confidence indicates the level of (lack of) uncertainty associated with the estimation. States with high confidence are “nearly certain” $-$ for example, the 62% estimated for Republicans in Alaska is associated with a very, very low uncertainty (according to the polls and expert opinions). In most states, the polls are (assumed to be) quite informative, but there are some where the situation is not so clear cut.
I’ve saved the data in a file, which can be imported in R using the command
polls <- read.csv("http://www.statistica.it/gianluca/Polls.csv")
At this point, I need to compute the 2-party share for each state (which I’m calling $m$) and fix the number of states at 51
attach(polls)
m <- Dem/(Dem+Rep)
Nstates <- 51
Now, in truth this is not a “proper” Bayesian model, since I’m only assuming informative priors (which are supposed to reflect the available knowledge on the proportion of voters, without any additional observed data). Thus, all I’m doing is a relatively easy analysis. The idea is to first define a suitable informative prior distribution based on the point estimation of the democratic share and with uncertainty defined in terms of the confidence level. Then I can use Monte Carlo simulations to produce a large number of “possible futures”; in each future and for each state, the Democrats will have an estimated share of the popular vote. If that is greater than 50%, Obama will have won that state and the associated EVs. I can then use the induced predictive distribution on the number of EVs to assess the uncertainty underlying an Obama win (given that at least 270 votes are necessary to become president). In their book, Christensen et al show a simple way of deriving a Beta distribution based on an estimation of the mode, an upper limit and a confidence level that the variable is below that upper threshold. I’ve coded this in a function betaPar2, which I’ve made available from here (so you need to download it, if you want to replicate this exercise).
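The actual betaPar2 is linked above; purely for intuition, here is a rough Python reimplementation of the idea (my own function names and parametrization, not the original R code): fix the mode via $\alpha = 1 + mk$, $\beta = 1 + (1-m)k$, and bisect on the concentration $k$ until the tail constraint holds.

```python
import math

def beta_cdf(x, a, b, n=4000):
    # P(X <= x) for a Beta(a, b) variable, by midpoint integration of the
    # density, evaluated in log-space so that large a, b stay stable.
    log_B = math.lgamma(a) + math.lgamma(b) - math.lgamma(a + b)
    step = x / n
    total = 0.0
    for i in range(n):
        t = (i + 0.5) * step
        total += math.exp((a - 1) * math.log(t) + (b - 1) * math.log(1 - t) - log_B)
    return total * step

def beta_from_mode(mode, upper, conf):
    # Find (a, b) with the requested mode and P(X <= upper) = conf.
    # Writing a = 1 + mode*k and b = 1 + (1-mode)*k fixes the mode
    # (a-1)/(a+b-2) = mode for every k, so we can bisect on k alone.
    assert mode < upper, "sketch assumes the mode sits below the threshold"
    lo, hi = 1e-6, 2000.0
    for _ in range(60):
        k = 0.5 * (lo + hi)
        a, b = 1 + mode * k, 1 + (1 - mode) * k
        if beta_cdf(upper, a, b) < conf:
            lo = k  # too diffuse: concentrate the distribution further
        else:
            hi = k
    k = 0.5 * (lo + hi)
    return 1 + mode * k, 1 + (1 - mode) * k
```

For a Romney-leaning state like Alaska, the analogous call would be something like `beta_from_mode(0.38, 0.5, 0.999999)`.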
Using this bit of code, one can estimate the parameters of a Beta distribution centered on the point estimate and for which the probability of exceeding the threshold 0.5 is given by the level of confidence.
a <- b <- numeric()
for (s in 1:Nstates) {
if (m[s] < .5) {
bp <- betaPar2(m[s],.499999,Confidence[s]/100)
a[s] <- bp$res1
b[s] <- bp$res2
}
if (m[s] >=.5) {
bp <- betaPar2(1-m[s],.499999,Confidence[s]/100)
a[s] <- bp$res2
b[s] <- bp$res1
}
}
The function betaPar2 has several outputs, but the main ones are res1 and res2, which store the values of the parameters $\alpha$ and $\beta$, which define the suitable Beta distribution. In fact, the way I’m modelling is to say that if the point estimate is below 0.5 (a state $s$ where Romney is more likely to win), then I want to derive a suitable pair $(\alpha_s,\beta_s)$ so that the resulting Beta distribution is centered on $m_s$ and for which the probability of not exceeding 0.5 is given by $c_s$ (which is defined as the level of confidence for state $s$, reproportioned in [0;1]). However, for states in which Obama is more likely to win ($m_s\geq 0.5$), I basically do it the other way around (ie working with 1$-m_s$). In these cases, the correct Beta distribution has the two parameters swapped (notice that I assign the element res2 to $\alpha_s$ and the element res1 to $\beta_s$).
For example, for Alaska (the first state), the result is an informative prior like this.
In line with the information from the polls, the estimated average proportion of Democratic votes is around 38% and effectively there’s no chance of Obama getting a share that is greater than 50%.
Now, I can simulate the $n_{\rm{sims}}=10000$ “futures”, based on the uncertainty underlying the estimations using the code
nsims <- 10000
prop.dem <- matrix(NA,nsims,Nstates)
for (i in 1:Nstates) {
prop.dem[,i] <- rbeta(nsims,a[i],b[i])
}
The matrix prop.dem has 10000 rows (possible futures) and 51 columns (one for each state).
I can use the package coefplot2 and produce a nice summary graph
library(coefplot2)
means <- apply(prop.dem,2,mean)
sds <- apply(prop.dem,2,sd)
low <- means-2*sds
upp <- means+2*sds
reps <- which(upp<.5) # definitely republican states
dems <- which(low>.5) # definitely democratic states
m.reps <- which(means<.5 & upp>.5) # most likely republican states
m.dems <- which(means>.5 & low<.5) # most likely democratic states
cols <- character()
cols[reps] <- "red"
cols[dems] <- "blue"
cols[m.reps] <- "lightcoral"
cols[m.dems] <- "deepskyblue"
vn <- paste(as.character(State)," (",Voters,")",sep="")
coefplot2(means,sds,varnames=vn,col.pts=cols,main="Predicted probability of democratic votes")
abline(v=.5,lty=2,lwd=2)
This gives me the following graph showing the point estimate (the dots), 95% and 50% credible intervals (the light and dark lines, respectively). Those in dark blue and bright red are the “definites” (ie those that are estimated to be definitely Obama or Romney states, respectively). Light blues and reds are those undecided (ie for which the credible intervals cross 0.5).
Finally, for each simulation, I can check that the estimated proportion of votes for the Democrats exceeds 0.5 and if so allocate the EVs to Obama, to produce a distribution of possible futures for this variable.
obama <- numeric()
for (i in 1:nsims) {
obama[i] <- (prop.dem[i,]>=.5)%*%Voters
}
hist(obama,30,main="Predicted number of Electoral Votes for Obama",xlab="",ylab="")
abline(v=270,col="dark grey",lty=1,lwd=3)
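For readers who don't use R, the simulate-and-tally core of the analysis can also be sketched in pure Python with the standard library's `random.betavariate` (the three states and their Beta parameters below are purely illustrative, not the fitted values):

```python
import random

# (alpha, beta, electoral votes) per state -- illustrative numbers only
states = {
    "Safe-D": (80, 40, 55),   # mean Democratic share ~0.67
    "Safe-R": (40, 80, 3),    # mean Democratic share ~0.33
    "Tossup": (50, 50, 11),   # mean Democratic share ~0.50
}

def simulate_ev(nsims=10000, seed=42):
    # Each simulation draws a Democratic vote share per state from its
    # Beta prior and awards the state's EVs when the share reaches 50%.
    rng = random.Random(seed)
    totals = []
    for _ in range(nsims):
        ev = sum(votes for a, b, votes in states.values()
                 if rng.betavariate(a, b) >= 0.5)
        totals.append(ev)
    return totals

totals = simulate_ev()
majority = 35  # more than half of the 69 EVs in this toy map
p_win = sum(t >= majority for t in totals) / len(totals)
```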
So, based on this (veeeeery simple and probably not-too-realistic!!) model, Obama has a pretty good chance of being re-elected. In almost all the simulations his share of the votes guarantees he gets at least the required 270 EVs $-$ in fact, in many possible scenarios he actually gets many more than that.
Well, I can only hope I’m not jinxing it!
To leave a comment for the author, please follow the link and comment on their blog: Gianluca Baio's blog.
|
1. Homework Statement: I have to prove that the expression$$\frac{\omega C - \frac{1}{\omega L}}{\omega C - \frac{1}{\omega L} + \omega L - \frac{1}{\omega C}}$$is equal to$$\frac{1}{3-( (\frac{\omega_r}{\omega})^2 + (\frac{\omega}{\omega_r})^2)}$$where ##\omega_r= \frac{1}{\sqrt{LC}}##...
So what I did was: I took the square of the length:$$ (x_1)^2 + \frac {2}{9} + \frac {2}{3} x_1 $$And then I calculated the 1st derivative of this expression, and setting it to zero gave me a value for $x_1$: $-8/3$. I made a substitution of this value into my original expression and I got my solution.
Thanks for the reply! So in that case I need to minimize the square of the length of $(x_1, \frac{1}{3}, \frac{1}{3} + x_1)$, right? Because if I minimize the squared length of $(x_1, 0, x_1)$ I reach a zero solution, right? Then I can write that the family of the least squares solutions is...
1. Homework Statement: In $R^3$ with inner product, calculate all the least squares solutions, and choose the one with shorter length, of the system: $x + y + z = 1$, $x + z = 0$, $y = 0$. 2. The attempt at a solution: So I applied the formula $A^T A x = A^T b$ with $A$ as being the matrix with row 1...
|
I am curious to know under what conditions of the air pressure(atm), temperature, solute density in the water would cause the Niagara fall frozen?
In general, the answer is "a bit lower than 32 Fahrenheit". Here are two things which one might think would come into play, but actually do not to an appreciable extent.
The major solutes which are present in the Great Lakes are sodium, chloride, and a host of other ions in lesser amounts. According to the government website here, the typical sodium concentration found in the Great Lakes is on the order of 10 mg/L. This is well within the regime of ideal solution behavior, and so we get a freezing point depression of $\Delta T\approx k_f b i=0.0016$ Kelvin (assuming NaCl). This is tiny compared to seasonal temperature variations, so solute concentration can be mostly ignored.
Air pressure is also almost completely negligible, for two reasons. One is that the pressure variation through the water depth of the water is much greater than typical atmospheric pressure variations; for example, atmospheric pressure ranges from 29.5 to 30.5 inHg, which is comparable to the pressure variation in a foot of water (and the river is considerably deeper than one foot at most places). Two is that even accounting for pressure variations of both the atmosphere and within the water itself, the linearized Clausius-Clapeyron relation predicts a negligible temperature dependence. For example, for the water-ice phase transition we obtain$$\frac{\Delta T}{\Delta P}=\frac{T\Delta v}{L}=0.00025\mbox{K/inHg}$$which means that seasonal or daily variations in atmospheric pressure will only change the freezing point by something on the order of milliKelvin. Thus atmospheric pressure variation, and most likely water-depth variations in pressure, are negligible compared to seasonal variations in temperature.
This obviously leaves out a lot of kinetic and rate information. For example, dissipative heating at the foot of the waterfall will heat up the water there by approximately$$\Delta T=\frac{mgh}{C_p}=0.12\mbox{K}$$which easily swamps the contribution due to air pressure and solute concentration. This also ignores nucleation effects, and it ignores the rates of coolings from evaporative cooling and air cooling, so it says nothing about how long one would have to wait for the entire waterfall to freeze in so-and-so temperature. However, I think it's safe to say that the temperature needs to be a good bit lower than 32F for the falls to freeze. It may also be worthwhile to look through historical archives and see what the mean temperature was in the years for the photographs you posted.
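Both order-of-magnitude estimates above are easy to reproduce; a quick check in Python (the 10 mg/L sodium figure is from the answer, while the ~51 m drop for Horseshoe Falls is my assumed height):

```python
# Freezing-point depression from ~10 mg/L sodium, treated as NaCl in an
# ideal solution: delta_T = k_f * i * b.
k_f = 1.86            # K*kg/mol, cryoscopic constant of water
i = 2                 # van 't Hoff factor for fully dissociated NaCl
b = 10e-3 / 22.99     # mol/kg: 10 mg sodium per kg of water
dT_solute = k_f * i * b         # ~0.0016 K

# Dissipative heating at the foot of the falls: delta_T = g*h / c_p.
g, h, c_p = 9.81, 51.0, 4186.0  # m/s^2, m (assumed drop), J/(kg*K)
dT_falls = g * h / c_p          # ~0.12 K
print(round(dT_solute, 4), round(dT_falls, 2))
```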
I'm not quite sure what you mean by Van der Waals force, as that's pretty much going to be constant within a given substance.
|
Not with the obvious complex structure. Notice that $O_{\lambda}$ is a closed subvariety of $T^{\ast}(O_{\lambda})$ (namely the zero section). Closed subvarieties of Stein varieties are Stein. However, positive dimensional Stein varieties are never compact, and $O_{\lambda}$ is compact.
However, there is a sense in which $T^{\ast} O_{\lambda}$ is almost an affine variety, which I will now sketch. Before I get into the details, let me give the first example.
When $G=SU(2)$, then $O_{\lambda}$ is $\mathbb{CP}^1$ and $T^{\ast} O_{\lambda}$ is the total space of the line bundle $\mathcal{O}(-2)$. This can be viewed as the blow up of the singularity $xz+y^2=0$, and the global holomorphic functions on $T^{\ast} O_{\lambda}$ are all pulled back from this singular cone. Notice that $xz+y^2=0$ is the same equation as $\left( \begin{smallmatrix} y & x \\ z & -y \end{smallmatrix} \right)^2=0$. This will be relevant later.
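A quick numeric sanity check of that matrix identity (Python; by Cayley-Hamilton, $\left( \begin{smallmatrix} y & x \\ z & -y \end{smallmatrix} \right)^2=(y^2+xz)\,\mathrm{Id}$, so points on the cone square to zero):

```python
def mat2_square(m):
    # Square of a 2x2 matrix given as [[a, b], [c, d]].
    (a, b), (c, d) = m
    return [[a * a + b * c, a * b + b * d],
            [c * a + d * c, c * b + d * d]]

x, y, z = 1, 2, -4              # satisfies x*z + y**2 == 0
M = [[y, x], [z, -y]]
print(mat2_square(M))           # [[0, 0], [0, 0]]
```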
Consider the variety $xz+y^2 = 1$, which is also $\left( \begin{smallmatrix} y & x \\ z & -y \end{smallmatrix} \right)^2=\mathrm{Id}$. This is a smooth Stein variety and has a map to $\mathbb{CP}^1$ sending $\left( \begin{smallmatrix} y & x \\ z & -y \end{smallmatrix} \right)$ to its first column. The fibers of this map are affine spaces and, in fact, this is an affine bundle. The corresponding vector bundle is, indeed, $\mathcal{O}(-2)$. Since an affine bundle is diffeomorphic to its corresponding vector bundle, choosing such a diffeomorphism gives a new complex structure on $T^{\ast} O_{\lambda}$ and that structure is Stein.
Okay, now the general case. I'll write $K$ for the compact group, $S$ for a maximal torus. I'll write $G$ for the complexification of $K$, $T$ for a complexification of $S$ within $G$, $B$ for a Borel containing $T$ and $N$ for the unipotent radical of $B$. I'll use corresponding Fraktur letters for the Lie algebras. To make life simple, I'll assume that your orbit goes through a regular element of $\mathfrak{k}$ (one whose stabilizer for the adjoint action is just a torus). Otherwise, I'd also need to introduce a parabolic $P$.
As I imagine you know, $O_{\lambda} \cong K/S \cong G/B$. The tangent space to the coset $B$ in $G/B$ is $\mathfrak{g}/\mathfrak{b}$. As explained above, $T^{\ast}(G/B)$ is not Stein.
However, $G/T$ is Stein. In general, quotients of linear algebraic groups by reductive subgroups (such as $T$) exist and are affine; I'll also give a direct embedding of $G/T$ into $\mathfrak{g}$ below. The map $G/T \longrightarrow G/B$ is an affine bundle, and the corresponding vector bundle is $T^{\ast}(G/B)$. So we can make $T^{\ast}(G/B)$ into a Stein space by using a diffeomorphism between an affine bundle and the corresponding vector bundle to give a new complex structure.
There is a beautiful concrete way to realize these spaces. The tangent space to $G/B$ at the coset $B$ is $\mathfrak{g}/\mathfrak{b}$. Use the Killing form to identify $\mathfrak{g}$ with its dual; then the cotangent space at the coset $B$ is $\mathfrak{b}^{\perp} = \mathfrak{n}$. I'll write $\phi$ for this isomorphism $T^{\ast}_{B} (G/B) \cong \mathfrak{n}$. Let $g \in G$ and let $v$ be a cotangent vector to $G/B$ at the coset $gB$. Define an element of $\mathfrak{g}$ by $Ad(g) \cdot \phi(g^{\ast} v)$. (We are using the action of $g$ on $G/B$ to pull back from $T^{\ast}_{gB}$ to $T^{\ast}_B$.) One can check that this construction is unaltered by replacing $g$ by $gb$ for $b \in B$, so this gives a map $T^{\ast} (G/B) \to \mathfrak{g}$.
The image of this map is $\mathcal{N} := \bigcup_{g \in G} Ad(g) \cdot \mathfrak{n}$. This space is known as the nilpotent cone and is a (singular) closed subvariety of $\mathfrak{g}$; explicitly, an element $x$ of $\mathfrak{g}$ is in $\mathcal{N}$ if the coefficients of the characteristic polynomial of $Ad(x)$ are $0$. The map from $T^{\ast}(G/B)$ to $\mathcal{N}$ is called the Springer resolution. There is a good discussion of this in chapters 3 and 4 of Chriss and Ginzburg's
Representation Theory and Complex Geometry.
Take a regular element $t$ in $\mathfrak{t}$. Then $T$ is the stabilizer of $t$, so $G/T$ embeds in $\mathfrak{g}$ as the $G$ orbit through $t$. This is a closed embedding: an element $x$ of $\mathfrak{g}$ is in this orbit if and only if $Ad(x)$ and $Ad(t)$ have the same characteristic polynomial. So this gives an explicit embedding of $G/T$ into $\mathfrak{g}$ and thus gives a second proof that $G/T$ is Stein.
In summary: We can explicitly embed $G/T$ into $\mathfrak{g}$. The space $T^{\ast}(G/B)$ has an analogous map to $\mathfrak{g}$ which gives a resolution of the nilpotent cone.
The above example is this theory worked out for $SL(2)$.
|
Question
What is the best known
effective upper bound on the prime gap following x? Motivation
Suppose you needed to show a good bound for the gap between a fixed large constant, say $G=10^{10^{100}}$, and the following prime. Bertrand's postulate gives $G$, but we know unconditionally that the prime gap is $O(G^\theta)$ for $\theta$ near 1/2, so $G^\theta$ seems more reasonable. But it seems that the best that can be proved at present is much larger: $G/k\log^2G$.
Background
Bounds on large prime gaps seem to fall into three categories. We expect that the maximal prime gap below x is polylogarithmic: Cramér conjectured that it is $O(\log^2 x)$. (Maier suggests that the constant should be about $2e^{-\gamma}$.)
With the Riemann hypothesis, the gap falls to $O(\sqrt x\log x)$ thanks to Cramér. The unconditional result, due to Baker, Harman, & Pintz (extending Ingham's method), is nearly as good: $O(x^{21/40})$.
Schoenfeld's result allows an effective version of Cramér's conditional result with a constant near $1/4\pi$. But for an unconditional effective result, I know of nothing better than Dusart's $x/25\log^2x$.
That is, the results fall into three categories: those with exponent near 0 (Cramér's conjecture, Maier's theorem, etc.); those with exponent near 1/2 (Baker-Harman-Pintz, Cramér); and those with exponent near 1 (Rosser & Schoenfeld, Dusart, Chebyshev).
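As a small empirical illustration of why the polylogarithmic regime looks plausible (a Python sketch, not part of the question; it sieves primes and compares the maximal gap to $\log^2 x$):

```python
import math

def max_gap_below(n):
    # Sieve of Eratosthenes up to n, then scan for the largest gap
    # between consecutive primes below n.
    sieve = bytearray([1]) * n
    sieve[0:2] = b"\x00\x00"
    for p in range(2, int(n ** 0.5) + 1):
        if sieve[p]:
            sieve[p * p::p] = bytearray(len(range(p * p, n, p)))
    primes = [i for i in range(n) if sieve[i]]
    return max(q - p for p, q in zip(primes, primes[1:]))

n = 10 ** 6
print(max_gap_below(n), math.log(n) ** 2)  # the gap is well below log^2(n) ~ 191
```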
Further
If, as I suppose, there are no further results known, I raise this "soft" question:
Why is "the best we can (effectively) prove" in the same neighborhood as Bertrand's postulate, even though we can show much more (and expect quite a bit more)? It might be too much to expect an effective version for $\theta=0.525$, but we lack such a result for Chudakov's $\theta=3/4+\varepsilon$ and even Hoheisel's $\theta=32999/33000$.
References
On request. Most of the papers I gave are well-known: Maier 1985, Baker-Harman-Pintz 2001, Schoenfeld 1976, etc. The Dusart preprint is at http://arxiv.org/abs/1002.0442 .
|
Answer
$12\times \pi$
Work Step by Step
$V = \frac{1}{3}\pi \times r^2\times h$ By replacing the letters with their values (r=3, h=4) We have: $V = \frac{1}{3}\pi \times 3^2\times 4 = 12\times \pi$
|
The following is more or less a copy-paste of a comment I made on the related ArsTechnica thread. Indeed, StackExchange is probably one of the better places to debate this.
A few reminders first:
- there are approximately $p$ elliptic curves over the finite field of integers $\pmod{p}$;
- of these curves, only those with (almost) prime order are of cryptographic interest (I will write only about prime order, for simplicity): there are approximately $p/\log p$ such curves;
- among these prime curves, there are some known conditions which happen rarely and make the curve insecure;
- the "Suite B" generation procedure is basically: pick some seed $\sigma$ (randomly or maliciously; assume that it is malicious), hash it with a cryptographic hash function (and more particularly, a preimage-resistant hash function), and derive curve parameters from this.
The largest class of "subtly weak" curves (with prime order) that we know is the set of supersingular curves, which has a size of about $\sqrt{p}$ and therefore a probability of occurrence of $1/\sqrt{p}$ (neglecting the logarithmic factor). So finding one via the Suite B generation procedure, even in the malicious case, should take about $\sqrt{p}$ tries - which, coincidentally, is exactly as long as solving ECDLP in the first place. (Besides, this class is easy to detect anyway, but that's not the point).
So any useable (by the NSA) class of weak curves would need to be much (= exponentially) larger than this; this is, much larger than all known classes of weak curves.
Then: if such a class exists, then how does the NSA exploit it? Because of hashing, (assuming SHA-1 to be preimage-resistant, which seems plausible), they cannot have inserted backdoor info in the curve: any trap that they use is computable from the curve itself without knowing the seed $\sigma$. This means that such a backdoor is available to any good mathematician (no need to steal NSA secrets!).
So the Suite B curves can be considered as dangerous only if you believe all three of the following conditions:

1. there exists a class of curves which is exponentially larger than all known classes of weak curves;
2. NSA knew about this class 20 years ago, but nobody else has been able to discover it since then;
3. they deliberately published, and use for themselves, a curve which they know to be weak to anybody else.

I personally do not believe either (2) or (3), and tend not to believe (1) either. This is why I still believe P-256 to be safe.
Actually, even the DUAL_EC_DRBG scandal makes a strong case that both the P-256 curve (vs. ECDLP) and SHA-1 (vs. preimage computation) are probably safe: if the NSA had had, at the time of the DUAL_EC_DRBG parameter generation, a means to either compute a SHA-1 preimage OR an elliptic curve discrete logarithm, then they would have been able to publish the seeds $\sigma_P, \sigma_Q$ for both points $P, Q$ while still knowing the discrete logarithm $\log(Q)/\log(P)$. They would have gained the same powers of prediction of the DRBG without leaving such a mess.
Of course, the preceding paragraph does not rule out that the whole DUAL_EC_DRBG scandal could have been deliberate misinformation from the NSA, and that Snowden could be a double agent. But this is leaving the crypto domain for the tinfoil-hat domain...
So why did NIST not use a verifiable method for generating the "Suite B" curves? Again, this is only borderline crypto, but my opinion on this is: nobody asked them to at the time, and it is only post-DUAL_EC_DRBG that we, the crypto community, have matured enough to require verifiability in all published parameters (which is a good thing, but does not mean by itself that P-256, or even worse, ECC in general, is broken!)
|
Longitudinal relaxation times for 11 human brain metabolites are reported for GM-rich and WM-rich voxels at 9.4T. These values are reported to potentiate the ability to perform absolute quantification at 9.4T in humans with reference to water. A bi-exponential model was used to fit the signal curve from an inversion-recovery metabolite-cycling STEAM sequence. Results are further extrapolated to report the T1 relaxation of a theoretically pure WM and GM voxel by means of a linear assumption relating the relaxation time and tissue contribution of a voxel.
11 healthy volunteers (mean age = 26.9 ± 2.8, with 8 male and 3 female participants) were recruited to participate in this study with ERB approval and written consent from each volunteer. In order to determine the tissue content of grey matter (GM) rich and white matter (WM) rich voxels, an MP2RAGE sequence was used [3] with an 8Tx/16Rx volume coil [4], and segmented into GM, WM, and cerebrospinal fluid (CSF) tissue probability maps using SPM12 [5], with tissue fractions within the voxel calculated by an in-house method.
The same coil was used to acquire the spectroscopy data, driven as a surface coil using the bottom three channels alone to transmit by utilizing a three-way power splitter [2]. A 2x2x2 cm³ voxel was placed spanning the longitudinal fissure of the occipital lobe for GM measurements, and a voxel was placed within the right occipital-parietal transition for WM measurements [Fig. 1A]. T1 relaxation was measured using the aforementioned IR-MC-STEAM sequence with TE/TM/TR = 8/50/10000 ms. A series of inversion times (TI = 20, 100, 400, 700, 1000, 1400, 2500 ms) was chosen to characterize the T1 relaxation of a variety of metabolites (Fig. 2).
A basis set was simulated using the VeSPA simulation tool [6] using the ideal STEAM sequence matching our TM and TE. LCModel (v6.3) [7] was used to fit spectra (Fig. 1B) with manual phase correction for TI = 1000 and 1400 ms; the spline baseline was set to have medium flexibility to fit experimental imperfections and macromolecular components (dkntmn=0.5). The concentration of metabolites was taken after LCModel fitting and fit to a bi-exponential model:
$$ S = |A(1-2e^{\frac{-TI}{T_{1}}} + e^{\frac{-TR}{T_{1}}})|, \\ A\equiv \frac{\rho}{4kT\cdot R \cdot BW} $$
$S$ is the concentration, and $T_{1}$ is solved by a linear model curve fitting optimization, where $A$ is a constant with $\rho$ being the effective spin density, $k$ the Boltzmann constant, $T$ the temperature, $R$ the effective resistance of the loaded coil, and $BW$ the bandwidth of the receiver, using the SciPy toolkit [8] in Python (v2.7) [9]; figures were created using the matplotlib library [10].
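Purely as a sketch of what such a fit involves (Python, synthetic data; a coarse grid search stands in for the SciPy optimizer, and the amplitude $A$ has a closed-form least-squares value for each candidate $T_1$ since it enters the model linearly):

```python
import math

def ir_signal(TI, T1, A=1.0, TR=10.0):
    # Magnitude inversion-recovery model from the equation above.
    return abs(A * (1 - 2 * math.exp(-TI / T1) + math.exp(-TR / T1)))

def fit_T1(TIs, S, TR=10.0):
    # Grid search over T1 (1 ms steps, in seconds); for each candidate,
    # the best A minimizing the squared error is found in closed form.
    best_T1, best_sse = None, float("inf")
    for i in range(1, 5000):
        T1 = i * 0.001
        basis = [abs(1 - 2 * math.exp(-t / T1) + math.exp(-TR / T1)) for t in TIs]
        A = sum(b * s for b, s in zip(basis, S)) / sum(b * b for b in basis)
        sse = sum((A * b - s) ** 2 for b, s in zip(basis, S))
        if sse < best_sse:
            best_T1, best_sse = T1, sse
    return best_T1

TIs = [0.02, 0.1, 0.4, 0.7, 1.0, 1.4, 2.5]      # the TI series above, in s
S = [ir_signal(t, T1=1.4, A=2.0) for t in TIs]  # synthetic data
print(fit_T1(TIs, S))  # ~1.4, the synthetic T1
```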
Since T1 relaxation has been shown to vary with tissue type and not spatially like T2 relaxation [3], an assumption to further estimate the relaxation of pure WM and GM voxels was made. Assuming a linear relationship between relaxation time and the contribution of tissue type, two linear equations of the following form were solved:
$$f_{GM}\cdot T^{pure\,voxel}_{1, GM}+f_{WM}\cdot T^{pure\,voxel}_{1,WM}=T^{rich\,voxel}_{1,GM} \\f'_{GM}\cdot T^{pure\,voxel}_{1,GM}+f'_{WM}\cdot T^{pure\,voxel}_{1,WM}=T^{rich\,voxel}_{1,WM}$$
where $f$ represents the tissue fraction in measures from GM-rich voxels and $f'$ represents the tissue fraction in measures from WM-rich voxels.
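For concreteness, the 2x2 system above has a closed-form solution by Cramer's rule; a sketch in Python (the tissue fractions and measured values below are made-up illustrative numbers, not the study's data):

```python
def pure_tissue_T1(f_gm, f_wm, T1_gm_rich, fp_gm, fp_wm, T1_wm_rich):
    # Solve the two linear equations above for the pure-voxel values:
    #   f_gm*x  + f_wm*y  = T1 measured in the GM-rich voxel
    #   fp_gm*x + fp_wm*y = T1 measured in the WM-rich voxel
    det = f_gm * fp_wm - f_wm * fp_gm
    x = (T1_gm_rich * fp_wm - f_wm * T1_wm_rich) / det  # pure-GM T1
    y = (f_gm * T1_wm_rich - T1_gm_rich * fp_gm) / det  # pure-WM T1
    return x, y

# Illustrative: an 80/20 GM-rich voxel and a 30/70 WM-rich voxel.
print(pure_tissue_T1(0.8, 0.2, 1.9, 0.3, 0.7, 1.65))  # ~ (2.0, 1.5) s
```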
A short TE was utilized in order to maintain signal from fast T2-decaying metabolites and J-evolving components of metabolites. Thus, 11 metabolites are reported, with a majority showing stable results. A challenge with a short TE is the influence of MMs underlying metabolites, which potentially affects the quality of the LCModel fit of the metabolites.
T1 relaxations of a pure GM voxel measured herein are in agreement with previous work of Deelchand et al. [1], who measured the T1 relaxation to be 1777 ms and 1746 ms for the NAA singlet and the CH3-tCr group, respectively, in a GM-rich voxel. The slight disagreement between the T1 relaxation time of tCho of 1513 ms as measured previously [1] and that of GPC of 1233 ms as measured herein is potentially due to the MM contribution in the spectra from this work. A similar effect could be affecting the Gln relaxation time. Future work will utilize a tailored MM baseline model for correction of these signals to better report metabolite T1 relaxation times of a wide range of brain metabolites.
1. Deelchand DK, Van de Moortele PF, Adriany G, Iltis I, Andersen P, Strupp JP, Vaughan JT, Uğurbil K, Henry PG. In vivo 1H NMR spectroscopy of the human brain at 9.4 T: initial results. Journal of Magnetic Resonance. 2010 Sep 1;206(1):74-80.
2. Giapitzakis IA, Shao T, Avdievich NI, Mekle R, Kreis R and Henning A (April-2018) Metabolite-cycled STEAM and semi-LASER localization for MR spectroscopy of the human brain at 9.4T Magnetic Resonance in Medicine 79(4) 1841-1850.
3. Hagberg GE, Bause J, Ethofer T, Ehses P, Dresler T, Herbert C, Pohmann R, Shajan G, Fallgatter A, Pavlova MA, Scheffler K. Whole brain MP2RAGE-based mapping of the longitudinal relaxation time at 9.4 T. Neuroimage. 2017 Jan 1;144:203-16.
4. Avdievich N, Giapitzakis I and Henning A (April-25-2017): Optimization of the Receive Performance of a Tight-Fit Transceiver Phased Array for Human Brain Imaging at 9.4T, 25th Annual Meeting and Exhibition of the International Society for Magnetic Resonance in Medicine (ISMRM 2017), Honolulu, HI, USA.
5. Ashburner J, Barnes G, Chen C, Daunizeau J, Flandin G, Friston K, Kiebel S, Kilner J, Litvak V, Moran R, Penny W. SPM12 manual. Wellcome Trust Centre for Neuroimaging, London, UK. 2014 Jun 26.
6. Soher BJ, Semanchuk P, Todd D, Steinberg J, Young K. VeSPA: integrated applications for RF pulse design, spectral simulation and MRS data analysis. In Proc Int Soc Magn Reson Med 2011 (Vol. 19, p. 1410).
7. Provencher SW. Estimation of metabolite concentrations from localized in vivo proton NMR spectra. Magnetic resonance in medicine. 1993 Dec;30(6):672-9.
8. Jones E, Oliphant T, Peterson P, et al. SciPy: Open Source Scientific Tools for Python, 2001-, https://www.scipy.org.
9. G. van Rossum, Python Tutorial, Technical Report CS-R9526, Centrum voor Wiskunde en Informatica (CWI), Amsterdam, May 1995.
10. John D. Hunter. Matplotlib: A 2D Graphics Environment, Computing in Science & Engineering, 9, 90-95 (2007), DOI:10.1109/MCSE.2007.53
|
By Emily van Zee and Corinne Manogue This interpretative narrative is based upon a video of the class session and discussions with the instructor and the director of the Physics Paradigms Program, Corinne Manogue. In writing the narrative, Emily van Zee drew upon her research in the tradition of ethnography of communication (Hymes, 1972; Philipsen & Coutu, 2004; van Zee & Minstrell, 1997a,b), a discipline that studies cultures through the language phenomena observed. This interpretative narrative presents an example of an instructor engaging students as novice participants in the culture of “thinking like a physicist.”
This short narrative provides several examples of spontaneous use of small white boards. These examples occurred during a discussion that the instructor had initiated with a planned small white board question. A longer narrative presents the details of both the physics and the pedagogy that this planned small white board question had elicited. This discussion occurred on Thursday, September 27, 2007.
When students enter this classroom, each picks up a small white board (about 30 x 40 cm), marker, and cloth to use as an eraser. When the instructor asks a small white board question, the students write their answers on these small white boards. Sometimes they just hold up their boards so that the instructor can see the array of responses and make a quick assessment of the level of knowledge present in the group.
Sometimes, however, the instructor moves around the room as the students are writing on their small white boards. As the students finish, the instructor picks up examples representing various issues needing attention and places these, with writing facing away from the students, in a particular order along the chalk tray of the blackboard at the front of the room. The responses written on each of these selected small white boards then form the basis for the subsequent discussion.
During a discussion, the instructor also sometimes asks individual students to write on their small white boards as a way to help them communicate their ideas. Occasionally a student will spontaneously write on a small white board to clarify a question the student wants to ask. Sometimes students will change what they have written on their whiteboards in response to something said during the discussion. These more spontaneous uses of the small whiteboards are the focus of this short narrative.
This discussion occurred during the first week of the academic year, when the instructor was still establishing how students should think and behave now that they had reached upper level courses as physics majors. There were 21 students in this class, including two women. For details of the physics discussed and pedagogy evident in this discussion, please see the longer narrative for this date.
Both interpretative narratives are based upon a transcript of the small white board discussion and upon reflections recorded by the instructor, Corinne Manogue, and her colleagues, Liz Gire, a postdoc, and Kerry Brown, a graduate of the program, while viewing a video of this session. The narrative was drafted by Emily van Zee, a science teacher educator who drew upon her own teaching experiences in laboratory-centered physics courses for prospective teachers and her research in the tradition of ethnography of communication (Hymes, 1972; Philipsen & Coutu, 2004; van Zee & Minstrell, 1997a,b). Ethnographers of communication examine cultural practices by interpreting what is said, where, when, by whom, for what purpose, in what way, and in what context. These interpretative narratives present an example of an instructor initiating students into the culture of physics, specifically into the verbal and mathematical language that physicists use in thinking and talking about the focus of this discussion: the electrostatic potential at a location in space due to a point charge.
The short narrative presents what had happened when Corinne had asked the class to write on their small white boards an expression for the electrostatic potential at a particular location in space due to a point charge. She had collected a representative set of the students’ responses and then discussed these. Several times during that discussion, she spontaneously asked students to write what they were thinking on their small white boards so that she and the other students could understand better what they were trying to say. For example, a student offered an analogy as a way to think about potentials:
Student: Way I think about it is to relate the electric potential to the gravitational potential because they're pretty much the same thing, the force equation is the same, so you do like force over, force times the distance? I mean that’s work
Corinne asked him to write the equation on his small white board to assist him in communicating his thinking both to her and to the other students:
Corinne: Write me down your equation so I can hold it up
The student wrote $W = F\cdot d$ and then $V = \int K Q_1 Q_d/r^2$ from 0 to $d$
Corinne reached for the white board, moved away, and then returned the board to the student who added dr to complete the integral expression. Corinne then took back the white board and addressed the class:
Corinne: Ok (name) is claiming that you can use the gravitational analog, and for some peculiar reason many students are much better at reasoning about the gravitational case than reasoning about the electrostatic case and you are absolutely right, except for an overall constant that has units in it, and a very important, physically important sign, all of the mathematics for gravitational fields is the same as the mathematics for electrostatic fields so if you can do it for a gravitational field and that makes sense to you, use that analogy. Let it help you! All right.
Corinne then held the small white board up for all to see:
Corinne: (Name)'s claim is work is force times distance. All right. And then the potential is the integral of the force times the distance but I think that's a variant of this.
Corinne reached for and held up the white board with $V = \int E\cdot ds$ that had been discussed earlier.
Corinne: Since if there's only one thing in the universe, we can't yet be talking about forces, so there's good reasoning and good memory there going on but not what we need for this problem.
By welcoming the student’s suggestion, prompting him to write out his idea on his small white board, holding up the board for the rest of the class to see, and discussing its contents, Corinne had used a student’s writing on a small white board in the midst of the discussion to help communicate the student’s thinking to the rest of the class. Although the physics was not appropriate (his suggestion referred to forces, which could not occur with only one charge in the universe), she took care to compliment the thinking: his reasoning and his effort to pull relevant information from memory.
After the class had come to agreement on an appropriate expression for the electrostatic potential due to a point charge, $V= k Q/r$, Corinne initiated a discussion about the meaning of $r$: what distance was it representing? Speaking with hesitation expressed as a question, a student offered a more nuanced expression for the distance between the charge and the probe:
Student: $r$ minus $r$ naught?
Corinne: What about $r$ minus $r$ naught?
Student: The distance between there and there
Corinne: Write me something on your white board. I don't know what $r$ minus $r$ naught means.
Corinne and the class waited and watched while the student wrote his expression on his small white board. Then she held it up for all to see:
While viewing this segment of the video, Liz commented about this use of the small white boards, that Corinne was asking individual students to write mathematical expressions on the small white boards so that the students could communicate more clearly with Corinne and so that Corinne could use the white board to communicate their shared meaning to the rest of the class. Not only can an instructor ask small white board questions to the entire class but one can also ask them of individuals to help clarify their meanings.
Corinne agreed that when a student asks about a formula, she and the student can communicate much more clearly if the student writes the formula on a small white board. She noted that this is what professionals do when talking with each other, they write on napkins or backs of envelopes or whatever board is nearby to be sure that they understand and agree on the formula that they’re discussing.
Corinne: Ah. Ok! He's trying to write a magnitude of $r$ minus $r$ naught. So where is $r$ and where is $r$ naught?
Student: $r$ naught is at the origin
Corinne: Where is the origin?
The student pointed at a representation of a coordinate system, the three dowel rods connected at right angles, that was sitting on a table in front of Corinne.
Student: That.
Corinne put down the student’s white board and while still holding up the ball representing the charge, held up the coordinate system so all could see it. She then put both the ball and the coordinate system down together on the table.
Corinne: Ok. So I'm going to put this charge at the origin.
Imagine it's in the center. All right. And then?
Corinne picked up the student’s white board again and held it up for the students to see.
Student: The $r$ is wherever you’re thinking about at the moment
Corinne walked back to the table and picked up the probe and held it high for all to see:
Student: wherever that is
Corinne: Ok. All right. If I put the charge at the origin, then what does that tell you about this?
She pointed at the $r$ in $r – r’$ on the student’s white board:
Students: $r$ naught is zero
Corinne: Then it's just zero. What kind of a zero? It's a zero vector. All right. So the zero vector minus $r$ prime is just the vector from the origin (points to coordinate system on the table) to here (picks up probe).
In discussing this video, Corinne noted that in a discussion in which she was trying to help the students understand which is $\vec r$ and which is $\vec r'$, that she had mixed it up herself. So the words “zero vector minus $r$ prime” should have been “the zero vector minus $r$.” Fortunately the rest of the discussion was correct and she hopes she did not confuse anybody too much.
Corinne: So yes indeed, if I put this charge at the origin (points to coordinate system), then this r here (points to white board with correct formula $V = Q/4πε_0 r$) is just the distance from here (ball representing charge) to here (voltmeter probe), the distance from here to here.
In viewing this segment of the video, Corinne commented that she was trying to get the students to see the difference between scalars and vectors and by manipulating the physical things, trying to get them to focus on the geometry.
Example of a Student’s Spontaneous Correction of His Own Small White Board [webcam 1:26-1:27; 1:42-1:43]
During this conversation about r minus r naught, a webcam videoing group 6 shows a student sitting quietly until Corinne says “write on your white board” to the student with whom she is conversing about r minus r naught. The student shown on Webcam 6 turns to his white board in response even though the comment was not directed to him.
Visible in the video was the student’s initial response on his white board to Corinne’s planned small white board question: what is the electrostatic potential due to a point charge? When Corinne initially posed this question, the student had drawn a circle, put a question mark in the circle, drawn an arrow pointing to the circle, and after a long pause while thinking, finally written $kQ/r^2$. He also added some words (that cannot be discerned on the video). Throughout the long discussion of the various student responses that Corinne had facilitated, he had not changed anything on his small white board.
However, now in response to Corinne’s direction to another student “write on your white board,” this student took his cloth eraser and carefully erased the 2 from the $r$ squared. Then he also revised his drawing by extending the line to the center of the circle and labeling the line r.
This spontaneous correction by a student on what he had written earlier on the small white board illustrates another advantage of these devices, that they can provide a prominent visual image of the focus of a discussion for each student, one that the student can revise later as needed.
Considering a Student’s Spontaneous Use of a Small White Board to Contribute an Idea to the Discussion [01:18:50.11] -
After establishing the meaning of r in the formula for the electrostatic potential due to a single point charge, Corinne had introduced a second charge. How could one represent the electrostatic potential at a particular location due to two point charges? In the midst of this discussion, Corinne paused. In the video she can be seen bending forward to look at what one of the students was writing and drawing on her small white board at a table nearby. Meanwhile another student responded verbally:
Student A: Magnitude of the difference between the vectors
Corinne: Magnitude of the difference between the vectors
The student writing and drawing then held up her small white board to show Corinne.
Student B: (Could have one at the origin)
Corinne. I could have one at the origin
Corinne took the white board and showed it to the class.
Corinne accepted this student’s offer of an arrangement that would be useful under certain conditions, used the dowels representing the coordinate system to make the suggestion vivid, and then articulated those constraints.
Corinne: So if you're really clever, you can put one at the origin and you can put the other one along the $x$ axis (picks up coordinate system)
Right? So then you can just use a scalar distance there (points to board)
But do I have to put my probe also on the $x$ axis somewhere between them?
Student B: Well I
Corinne: What if I want to measure up here? (holds probe up high)
Another student explained the situation if there were only one charge:
Student C: If you're working in a three dimensional space, it creates a potential around itself. So if you go out from it at any radius in 3-D, you get the same potential.
Corinne accepted her suggestion also, made it vivid by holding up the probe to represent measuring the potential anywhere in space around the charge, and then articulated the constraints in a system with more than one charge.
Corinne: Right. If I go out anywhere around this one (holds up probe) the same radius, I get the same potential. Absolutely.
But as I go probing around here (moves probe around) and I'm getting the same potential from this one (points to ball), I'm not staying the same radius from that one (points to basketball).
Corinne: So all I want is the mathematics of how I describe the distance between here (probe) and here (ball on table).
Another student contributed a thought:
Student D: The magnitude of the distance (?)
Corinne: Exactly. It's the Star Trek example.
A student’s spontaneous and unexpected contribution prompted this interaction among the instructor and several students. The student had used the resource of a small white board to express what she was thinking by sketching a diagram. The sketch made her thinking visually available to the instructor, who leaned over to see what the student was drawing, and the student was then able to offer her thinking via the small white board to the instructor, who conveyed and discussed the student’s ideas with the whole group. Several members of the group then contributed to the conversation and made explicit the connection between the current discussion and the activity in which the students had participated earlier in the session.
|
A new proof of the boundedness results for stable solutions to semilinear elliptic equations
1. ICREA, Pg. Lluis Companys 23, 08010 Barcelona, Spain
2. Universitat Politècnica de Catalunya, Departament de Matemàtiques, Diagonal 647, 08028 Barcelona, Spain
3. BGSMath, Campus de Bellaterra, Edifici C, 08193 Bellaterra, Spain
We consider the class of stable solutions to semilinear equations $ -\Delta u = f(u) $ in a bounded smooth domain of $ \mathbb{R}^n $. Since 2010 an interior a priori $ L^\infty $ bound for stable solutions is known to hold in dimensions $ n\le 4 $ for all $ C^1 $ nonlinearities $ f $. In the radial case, the same is true for $ n\leq 9 $. Here we provide a new, simpler, and unified proof of these results. It establishes, in addition, some new estimates in higher dimensions, for instance $ L^p $ bounds for every finite $ p $ in dimension 5.
Since the mid nineties, the existence of an $ L^\infty $ bound holding for all $ C^1 $ nonlinearities when $ 5\leq n\leq 9 $ was a challenging open problem. This has been recently solved by A. Figalli, X. Ros-Oton, J. Serra, and the author, for nonnegative nonlinearities, in a forthcoming paper.
Mathematics Subject Classification: 35K57, 35B65.
Citation: Xavier Cabré. A new proof of the boundedness results for stable solutions to semilinear elliptic equations. Discrete & Continuous Dynamical Systems - A, 2019, 39 (12): 7249-7264. doi: 10.3934/dcds.2019302
[Back to Top]
|
I've recently asked a question on an issue I was facing with numerically integrating Hamiltonian equations of motion. I got a great answer.
Following on from this, I wanted to write a very similar code using alternate definitions in variational mechanics.
That is using the following:
The Lagrangian is given by $L= \frac{1}{2} g_{\mu\nu} \dot{x}^\mu(s) \dot{x}^\nu(s)$.
The generalised coordinates are given by $x^\mu(s)$, with velocity given by $\dot{x}^\mu(s)$.
The conjugate momentum is given by $p_\mu = \partial L\;/\;\partial \dot{x}^\mu(s)$.
The Hamiltonian is given by $H=p_\mu\dot{x}^\mu - L$, or $H=\frac{1}{2} g^{\mu\nu} p_\mu p_\nu$.
Hamilton's equations are given by $\dot{p} = -g^{\mu\nu}\partial_q H, \; \dot{q} = g^{\mu\nu}\partial_p H$.
Now, with the help of @jjc385 I think it is almost there. However, I still can't seem to finalise the equations of motion. I know they are getting there because I can compare them with the geodesic equations of motion for the same problem. (There is a reason why I'm being awkward and proceeding this way.) The updated code, thanks to @jjc385, is now given by:
SetAttributes[m, Constant];
q = {t[s], r[s], \[Theta][s], \[Phi][s]};
vel = D[q, s];
n = Length[q];
tt = 1 - 2 m/r[s];
rr = -1/tt;
\[Theta]\[Theta] = -r[s]^2;
\[Phi]\[Phi] = -(r[s] Sin[\[Theta][s]])^2;
metric = {{tt, 0, 0, 0}, {0, rr, 0, 0}, {0, 0, \[Theta]\[Theta], 0}, {0, 0, 0, \[Phi]\[Phi]}};
inversemetric = Simplify[Inverse[metric]];
L = (1/2) vel.metric.vel; (* Lagrangian; this definition appears to have been lost in the post, restored from the formulas above *)
p = FullSimplify[Table[D[L, vel[[i]]], {i, 1, n}]];
H = FullSimplify[1/2*(p.inversemetric.p)];
pSym = Symbol@*(StringJoin["p" <> #] &)@*ToString@*Head /@ q;
Solve[pSym == p, vel]
Hnew = H /. Flatten@%;
ivsnew = {1.3, 0, 0, 0.088};
ics = {0, 6.5, \[Pi]/2, 0};
m = 1;
pdot = FullSimplify[inversemetric.Table[-D[H, q[[i]]], {i, 1, n}]]
qdot = inversemetric.Table[D[Hnew, pSym[[i]]], {i, 1, n}]
eqs1 = {{D[q, s] == qdot, D[p, s] == pdot}, {(q /. s -> 0) == ics, (p /. s -> 0) == ivsnew}};
time = {s, 0, 750};
solee = NDSolve[eqs1, q, time, Method -> "ExplicitRungeKutta"];
Any suggestions?
I know the equations of motion are incorrect but I don't know how to make them correct!
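As a hedged cross-check (my own illustration, not the post's method): the pure-Python sketch below integrates the textbook Hamilton equations $\dot{x}^\mu = \partial H/\partial p_\mu$, $\dot{p}_\mu = -\partial H/\partial x^\mu$ (i.e. without the extra inverse-metric factors) for the same Schwarzschild Hamiltonian, restricted to the equatorial plane. Constancy of $H$, $p_t$ and $p_\phi$ along the trajectory gives a quick test for any candidate equations of motion; the initial values loosely echo the question's `ics`/`ivsnew` but are otherwise illustrative.

```python
M = 1.0  # black-hole mass in geometric units, matching m = 1 in the question

def f(r):
    return 1.0 - 2.0 * M / r

def hamiltonian(state):
    # H = (1/2) g^{mu nu} p_mu p_nu in the equatorial plane (theta = pi/2)
    t, r, phi, pt, pr, pphi = state
    fr = f(r)
    return 0.5 * (pt ** 2 / fr - fr * pr ** 2 - pphi ** 2 / r ** 2)

def rhs(state):
    # Textbook Hamilton's equations: xdot = dH/dp, pdot = -dH/dx
    t, r, phi, pt, pr, pphi = state
    fr = f(r)
    dfdr = 2.0 * M / r ** 2
    dt = pt / fr
    dr = -fr * pr
    dphi = -pphi / r ** 2
    dpt = 0.0  # t is cyclic: energy is conserved
    dpr = pt ** 2 * dfdr / (2 * fr ** 2) + pr ** 2 * dfdr / 2 - pphi ** 2 / r ** 3
    dpphi = 0.0  # phi is cyclic: angular momentum is conserved
    return (dt, dr, dphi, dpt, dpr, dpphi)

def rk4_step(state, h):
    # One classical Runge-Kutta step on the tuple of phase-space variables
    k1 = rhs(state)
    k2 = rhs(tuple(s + 0.5 * h * k for s, k in zip(state, k1)))
    k3 = rhs(tuple(s + 0.5 * h * k for s, k in zip(state, k2)))
    k4 = rhs(tuple(s + h * k for s, k in zip(state, k3)))
    return tuple(s + h / 6 * (a + 2 * b + 2 * c + d)
                 for s, a, b, c, d in zip(state, k1, k2, k3, k4))

# (t, r, phi, p_t, p_r, p_phi): values loosely echo the question's ics/ivsnew
state = (0.0, 6.5, 0.0, 1.3, 0.0, 0.088)
h0 = hamiltonian(state)
for _ in range(500):
    state = rk4_step(state, 0.01)
drift = hamiltonian(state) - h0
print(drift)  # should stay very close to zero if the equations are consistent
```

If a candidate set of equations of motion does not keep $H$ (and the two cyclic momenta) constant under this kind of test, it cannot be the correct set.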
|
perhaps he is implying some even stronger result
He is referring to the following result of Peter Freyd (Freyd uncertainty principle):
The homotopy category of spaces $HoTop$ does not admit a faithful functor to the category of sets $Set$. Specifically, for any functor $T: Top_* \to Set$ from base-pointed spaces to sets which is homotopy invariant, there exists a map $f: X \to Y$ such that $f$ is not null-homotopic, but $T(f) = T(\ast)$. Here $\ast$ is the null map to the basepoint of $Y$.
In particular, any algebraic invariant is a set-valued homotopy invariant. This includes homotopy groups, cohomology, cohomology and homotopy operations and whatever you can think of. Freyd's theorem implies that we cannot describe the homotopy category as a category of algebras for some algebraic theory $\mathcal{T}$, since any algebraic category is concrete.
Fun fact: Freyd's theorem essentially relies only on the general set-theoretic arguments and cardinal counting.
Non-counter-example: Whitehead's theorem states that if a map $f: X \to Y$ between pointed connected CW-complexes induces an isomorphism on all homotopy groups $\pi_i, \ i=1, 2,\dots$, then $f$ is a homotopy equivalence between $X$ and $Y$. Note that you still can't discriminate spaces by looking just at the collection of homotopy groups: there can be $X$ and $Y$ such that $\pi_i(X) = \pi_i(Y)$ for all $i$, but these isomorphisms are not induced by any actual map $F: X\to Y$ and the spaces are not homotopy equivalent. The simplest example is $\Bbb R \Bbb P^2 \times S^3$ and $S^2 \times \Bbb R \Bbb P ^3$. They both have a double covering by $S^2 \times S^3$ and thus have the same homotopy groups, but their cohomology is not isomorphic.
On second thought, I can't see why Freyd's theorem would imply actual non-discriminable spaces without any extra conditions on the invariant. Perhaps someone can fill this gap, but imho the non-discrimination of maps is bad enough.
Since this theorem states indiscriminability even for non-homotopy equivalent spaces, it in particular does so for non-homeomorphic spaces. This could be weaker than what your professor implied since we could in principle consider invariants of spaces which are not homotopy invariants. However, this requires some more specifics on what we would call "algebraic invariants" since e.g. the lattice of open subsets looks like a perfectly fine algebraic invariant to me, but it certainly discriminates spaces.
|
Answer
Please see the work below.
Work Step by Step
We know that $\Delta T_F=(\frac{9}{5})\Delta T_C$. We plug in the known value to obtain: $\Delta T_F=(\frac{9}{5})(10)$, so $\Delta T_F=18\ F^{\circ}$.
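The conversion above can be checked with a one-line computation (a trivial sketch):

```python
# A Celsius-degree change converts to a Fahrenheit-degree change by the 9/5 factor
delta_c = 10
delta_f = 9 * delta_c / 5
print(delta_f)  # 18.0
```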
|
This chapter deals with the parameters of radiated beam of the antenna. These parameters help us to know about the beam specifications.
According to the standard definition, “Beam area is the solid angle through which all the power radiated by the antenna would stream if P(θ, Φ) maintained its maximum value over $\Omega_{A}$ and was zero elsewhere.”
The radiated beam of the antenna comes out from an angle at the antenna, known as the solid angle, where the power radiation intensity is maximum. This solid beam angle is termed the beam area. It is represented by $\Omega_{A}$.
The radiation intensity P(θ, Φ) should be maintained constant and maximum throughout the solid beam angle $\Omega_{A}$, its value being zero elsewhere.
Beam angle is a set of angles between the half power points of the main lobe.
The mathematical expression for beam area is$$\Omega_{A} =\int_{0}^{2\pi}\int_{0}^{\pi}P_{n}(\theta,\Phi)\,d\Omega$$ where $$d\Omega = \sin\theta\ d\theta\ d\Phi$$
Where $P_{n}(\theta,\Phi)$ is the normalized (dimensionless) power pattern and $d\Omega$ is the element of solid angle.
The unit of beam area is the steradian (sr), since the normalized power pattern is dimensionless.
According to the standard definition, “The beam efficiency states the ratio of the beam area of the main beam to the total beam area radiated.”
The energy, when radiated from an antenna, is projected according to the antenna’s directivity. The direction in which an antenna radiates more power has maximum efficiency, while some of the energy is lost in side lobes. The maximum energy radiated by the beam, with minimum losses, can be termed the beam efficiency.
The mathematical expression for beam efficiency is −$$\eta_{B} = \frac{\Omega_{MB}}{\Omega_{A}}$$
Where $\Omega_{MB}$ is the beam area of the main beam and $\Omega_{A}$ is the total beam area.
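As a hedged numerical sketch (the pattern and the side-lobe level below are illustrative assumptions, not from the text): for an azimuthally symmetric normalized power pattern, the beam area follows by direct integration over solid angle, and the beam efficiency is the ratio of the main-beam area to the total beam area.

```python
import math

def beam_area(pattern, n=20000):
    # Omega_A = integral of P_n(theta) sin(theta) dtheta dphi for an
    # azimuthally symmetric pattern, via the midpoint rule in theta
    h = math.pi / n
    s = sum(pattern((k + 0.5) * h) * math.sin((k + 0.5) * h) for k in range(n))
    return 2 * math.pi * s * h

# Illustrative pattern: cos^4(theta) main lobe on the upper hemisphere,
# plus a small constant side-lobe level (0.05) on the lower hemisphere
main = lambda th: math.cos(th) ** 4 if th < math.pi / 2 else 0.0
full = lambda th: main(th) + (0.05 if th >= math.pi / 2 else 0.0)

omega_mb = beam_area(main)        # main beam area (analytic value: 2*pi/5)
omega_a = beam_area(full)         # total beam area
eta_b = omega_mb / omega_a        # beam efficiency
print(omega_mb, omega_a, eta_b)   # eta_b comes out near 0.8 for these choices
```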
An Antenna can be polarized depending upon our requirement. It can be linearly polarized or circularly polarized. The type of antenna polarization decides the pattern of the beam and polarization at the reception or transmission.
When a wave is transmitted or received, it may be done in different directions. The linear polarization of the antenna helps in maintaining the wave in a particular direction, avoiding all the other directions. Though this linear polarization is used, the electric field vector stays in the same plane. Hence, we use this linear polarization to improve the directivity of the antenna.
When a wave is circularly polarized, the electric field vector appears to be rotated with all its components losing orientation. The mode of rotation may also be different at times. However, by using circular polarization, the effect of multi-path gets reduced and hence it is used in satellite communications such as GPS.
Horizontal polarization makes the wave weak, as the reflections from the earth's surface affect it. Horizontally polarized waves are usually weak at low frequencies, below 1 GHz. Horizontal polarization is used in the transmission of TV signals to achieve a better signal-to-noise ratio.
The low-frequency vertically polarized waves are advantageous for ground wave transmission. These are not affected by surface reflections like the horizontally polarized ones. Hence, vertical polarization is used for mobile communications.
Each type of polarization has its own advantages and disadvantages. A RF system designer is free to select the type of polarization, according to the system requirements.
|
Radiation intensity of an antenna is closely related to the direction of the beam focused and the efficiency of the beam towards that direction. In this chapter, let us have a look at the terms that deal with these topics.
According to the standard definition, “The ratio of maximum radiation intensity of the subject antenna to the radiation intensity of an isotropic or reference antenna, radiating the same total power, is called the directivity.”
An antenna radiates power, but the direction in which it radiates matters much. The antenna whose performance is being observed is termed the subject antenna. Its radiation intensity is focused in a particular direction while it is transmitting or receiving. Hence, the antenna is said to have its directivity in that particular direction.
The ratio of radiation intensity in a given direction from an antenna to the radiation intensity averaged over all directions, is termed as directivity.
If that particular direction is not specified, then the direction in which maximum intensity is observed, can be taken as the directivity of that antenna.
The directivity of a non-isotropic antenna is equal to the ratio of the radiation intensity in a given direction to the radiation intensity of the isotropic source.
The radiated power is a function of the angular position and the radial distance from the circuit. Hence, it is expressed by considering both the terms θ and Φ.
The mathematical expression for directivity is$$D = \frac{\phi(\theta,\Phi)_{max}}{\phi_{0}}$$
Where ${\phi(\theta,\Phi)_{max}}$ is the maximum radiation intensity of the subject antenna and ${\phi_{0}}$ is the radiation intensity of an isotropic antenna (an antenna with zero losses).
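The ratio definition above can be sketched numerically. In the example below (an illustrative toy pattern, not from the text), the maximum radiation intensity is compared with the intensity an isotropic antenna would need in order to radiate the same total power:

```python
import math

# Illustrative azimuthally symmetric intensity: U(theta) = cos^4(theta)
# on the upper hemisphere, zero below (a hypothetical toy pattern)
U = lambda th: math.cos(th) ** 4 if th < math.pi / 2 else 0.0

# Total radiated power via midpoint-rule integration over the sphere
n = 20000
h = math.pi / n
p_rad = 2 * math.pi * h * sum(
    U((k + 0.5) * h) * math.sin((k + 0.5) * h) for k in range(n))

u_avg = p_rad / (4 * math.pi)  # intensity of the equivalent isotropic antenna
D = U(0.0) / u_avg             # directivity = U_max / U_average
print(D)                       # close to 10 for this pattern
```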
According to the standard definition, “Aperture efficiency of an antenna is the ratio of the effective radiating area (or effective area) to the physical area of the aperture.”
An antenna has an aperture through which the power is radiated. This radiation should be effective with minimum losses. The physical area of the aperture should also be taken into consideration, as the effectiveness of the radiation depends upon the area of the aperture, physically on the antenna.
The mathematical expression for aperture efficiency is as follows −
$$\varepsilon_{A} = \frac{A_{eff}}{A_{p}}$$
where
$\varepsilon_{A}$ is Aperture Efficiency.
${A_{eff}}$ is effective area.
${A_{p}}$ is physical area.
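As a quick worked example (the dish diameter and effective area below are hypothetical values chosen for illustration):

```python
import math

def aperture_efficiency(a_eff, a_phys):
    """epsilon_A = A_eff / A_phys (dimensionless; at most 1 for a passive aperture)."""
    return a_eff / a_phys

# Hypothetical 3 m diameter parabolic dish whose measured effective area is 4.0 m^2.
a_phys = math.pi * (3.0 / 2.0) ** 2          # physical aperture area, about 7.07 m^2
print(round(aperture_efficiency(4.0, a_phys), 2))  # → 0.57
```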
According to the standard definition, “Antenna Efficiency is the ratio of the radiated power of the antenna to the input power accepted by the antenna.”
Simply, an Antenna is meant to radiate power given at its input, with minimum losses. The efficiency of an antenna explains how much an antenna is able to deliver its output effectively with minimum losses in the transmission line.
This is otherwise called the Radiation Efficiency Factor of the antenna.
The mathematical expression for antenna efficiency is given below −
$$\eta_{e} = \frac{P_{rad}}{P_{input}}$$
Where
$\eta_{e}$ is the antenna efficiency.
${P_{rad}}$ is the power radiated.
${P_{input}}$ is the input power for the antenna.
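A one-line sketch of the ratio, with hypothetical power figures for illustration:

```python
def antenna_efficiency(p_rad, p_input):
    """eta_e = P_rad / P_input (both powers in the same units, e.g. watts)."""
    return p_rad / p_input

# Hypothetical case: 100 W accepted at the antenna input, 85 W actually radiated.
print(antenna_efficiency(85.0, 100.0))  # → 0.85
```

The remaining 15 W in this example is dissipated in losses, which is exactly what the efficiency factor quantifies.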
According to the standard definition, “Gain of an antenna is the ratio of the radiation intensity in a given direction to the radiation intensity that would be obtained if the power accepted by the antenna were radiated isotropically.”
Simply, gain of an antenna takes the directivity of antenna into account along with its effective performance. If the power accepted by the antenna was radiated isotropically (that means in all directions), then the radiation intensity we get can be taken as a referential.
The term antenna gain describes how much power is transmitted in the direction of peak radiation relative to that of an isotropic source. Gain is usually measured in dB.
Unlike directivity, antenna gain takes the losses that occur also into account and hence focuses on the efficiency.
The equation of gain, G, is as shown below.
$$G = \eta_{e}D$$
Where
G is gain of the antenna.
$\eta_{e}$ is the antenna’s efficiency.
D is the directivity of the antenna.
The unit of gain is the decibel, or simply dB.
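Combining the two definitions above, gain follows directly from efficiency and directivity, and the dB conversion is a standard power-ratio logarithm. The directivity and efficiency values below are hypothetical:

```python
import math

def gain(efficiency, directivity):
    """G = eta_e * D (dimensionless power ratio)."""
    return efficiency * directivity

def to_db(ratio):
    """Convert a power ratio to decibels: 10 * log10(ratio)."""
    return 10.0 * math.log10(ratio)

# Hypothetical antenna: directivity D = 20 with radiation efficiency 0.8.
g = gain(0.8, 20.0)
print(g, round(to_db(g), 2))  # → 16.0 12.04
```

Note that losses always pull gain below directivity: here D alone would be 10·log10(20) ≈ 13 dB, but the 80% efficient antenna achieves only about 12 dB.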
OGLE-2017-BLG-0173Lb: Low-mass-ratio Planet in a "Hollywood" Microlensing Event
(2018)
We present microlensing planet OGLE-2017-BLG-0173Lb, with planet-host mass ratio either $q\simeq 2.5\times 10^{-5}$ or $q\simeq 6.5\times 10^{-5}$, the lowest or among the lowest ever detected. The planetary perturbation ...
OGLE-2016-BLG-1045: A Test of Cheap Space-Based Microlens Parallaxes
(2018)
Microlensing is a powerful and unique technique to probe isolated objects in the Galaxy. To study the characteristics of these interesting objects based on the microlensing method, measurement of the microlens parallax ...
KMT-2015-1b: a Giant Planet Orbiting a Low-mass Dwarf Host Star Discovered by a New High-cadence Microlensing Survey with a Global Telescope Network
(2018)
We report the discovery of an extrasolar planet, KMT-2015-1b, that was detected using the microlensing technique. The planetary lensing event was observed by KMTNet survey that has commenced in 2015. With dense coverage ...
OGLE-2017-BLG-1130: The First Binary Gravitational Microlens Detected From Spitzer Only
(2018)
We analyze the binary gravitational microlensing event OGLE-2017-BLG-1130 (mass ratio q~0.45), the first published case in which the binary anomaly was only detected by the Spitzer Space Telescope. This event provides ...
OGLE-2017-BLG-0329L: A Microlensing Binary Characterized with Dramatically Enhanced Precision Using Data from Space-based Observations
(2018)
Mass measurements of gravitational microlenses require one to determine the microlens parallax PIe, but precise PIe measurement, in many cases, is hampered due to the subtlety of the microlens-parallax signal combined ...
OGLE-2017-BLG-1434Lb: Eighth q < 1 * 10^-4 Mass-Ratio Microlens Planet Confirms Turnover in Planet Mass-Ratio Function
(2018)
We report the discovery of a cold Super-Earth planet (m_p=4.4 +/- 0.5 M_Earth) orbiting a low-mass (M=0.23 +/- 0.03 M_Sun) M dwarf at projected separation a_perp = 1.18 +/- 0.10 AU, i.e., about 1.9 times the snow line. ...
OGLE-2017-BLG-0373Lb: A Jovian Mass-Ratio Planet Exposes A New Accidental Microlensing Degeneracy
(2018)
We report the discovery of microlensing planet OGLE-2017-BLG-0373Lb. We show that while the planet-host system has an unambiguous microlens topology, there are two geometries within this topology that fit the data equally ...
OGLE-2017-BLG-1522: A giant planet around a brown dwarf located in the Galactic bulge
(2018)
We report the discovery of a giant planet in the OGLE-2017-BLG-1522 microlensing event. The planetary perturbations were clearly identified by high-cadence survey experiments despite the relatively short event timescale ...
Spitzer Opens New Path to Break Classic Degeneracy for Jupiter-Mass Microlensing Planet OGLE-2017-BLG-1140Lb
(2018)
We analyze the combined Spitzer and ground-based data for OGLE-2017-BLG-1140 and show that the event was generated by a Jupiter-class (m_p\simeq 1.6 M_jup) planet orbiting a mid-late M dwarf (M\simeq 0.2 M_\odot) that ...
OGLE-2016-BLG-1266: A Probable Brown-Dwarf/Planet Binary at the Deuterium Fusion Limit
(2018)
We report the discovery, via the microlensing method, of a new very-low-mass binary system. By combining measurements from Earth and from the Spitzer telescope in Earth-trailing orbit, we are able to measure the ...