| url (string, 14–2.42k chars) | text (string, 100–1.02M chars) | date (string, 19 chars) | metadata (string, 1.06k–1.1k chars) |
|---|---|---|---|
https://electronics.stackexchange.com/questions/420810/how-to-enter-into-isp-mode-through-reinvoke-isp-in-lpc1759/421260
|
How to enter into ISP mode through 'reinvoke ISP' in LPC1759
On the LPC1759, I entered ISP mode through "reinvoke ISP" with the following steps:
1. Disable PLL
2. Reset timer 1
3. Re-map interrupt vectors
4. Set watchdog timeout
5. Reinvoke ISP
After entering ISP, what exactly happens in the controller? How can I check that it is in ISP mode? And once in ISP, how do I get out?
void init(void)
{
    /* Put the part into a known state before reinvoking ISP: */
    Chip_Clock_DisablePLL(SYSCTL_MAIN_PLL, SYSCTL_PLL_CONNECT); /* disconnect/disable the main PLL */
    Chip_TIMER_Reset(LPC_TIMER1);                               /* put timer 1 back into reset      */
    Chip_SYSCTL_Map(REMAP_BOOT_LOADER_MODE);                    /* map vectors back to the boot ROM */
}

int main(void)
{
    uint32_t wdtFreq;
    /* The watchdog block internally divides its clock by 4 */
    wdtFreq = Chip_Clock_GetPeripheralClockRate(SYSCTL_PCLK_WDT) / 4;
    init();
    Chip_WWDT_Init(LPC_WWDT);
    Chip_WWDT_SetTimeOut(LPC_WWDT, wdtFreq / 10); /* ~100 ms timeout; a watchdog reset is one
                                                     possible way back out of ISP (see answer below) */
    Chip_IAP_ReinvokeISP();  /* hand control to the ISP boot loader in ROM */
    DEBUGSTR("HELLO\n\r");   /* never reached while the boot ROM has control */
    return 0;
}
2 Answers
When you enter ISP, you "give control" to the bootloader code in ROM. You can then interact with it over the UART interface (possibly also USB or something else, depending on the chip).
You might be able to regain control from code in the MCU itself if you use a watchdog, but I'm not sure that's documented or guaranteed to work.
You need to use something that speaks this ISP protocol (e.g. lpc21isp), and you'll be able to issue commands such as detect, write flash, and reboot.
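As a rough illustration of checking for ISP mode: the LPC17xx boot ROM auto-bauds on a '?' character and replies with the string "Synchronized". Below is a minimal, hypothetical host-side probe in C (the port name, baud rate, and single blocking read are assumptions for illustration; a real tool like lpc21isp handles the full handshake and command set):
#include <fcntl.h>
#include <stdio.h>
#include <string.h>
#include <termios.h>
#include <unistd.h>

int main(void)
{
    /* Hypothetical port name; adjust for your serial adapter. */
    int fd = open("/dev/ttyUSB0", O_RDWR | O_NOCTTY);
    if (fd < 0) { perror("open"); return 1; }

    struct termios tio;
    tcgetattr(fd, &tio);
    cfmakeraw(&tio);
    cfsetispeed(&tio, B115200);
    cfsetospeed(&tio, B115200);
    tcsetattr(fd, TCSANOW, &tio);

    write(fd, "?", 1);                  /* auto-baud character expected by the boot ROM */
    char buf[32] = {0};
    /* A single blocking read is enough for a sketch; real code would loop. */
    if (read(fd, buf, sizeof buf - 1) > 0 && strstr(buf, "Synchronized"))
        puts("Boot ROM answered: device is in ISP mode");

    close(fd);
    return 0;
}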
Thanks for the info, domen; that was helpful.
So do we give control to the primary bootloader in ROM, or do we need a secondary bootloader?
Do we actually need a secondary bootloader for ISP?
Can this ISP be done with a single controller?
• Not really. It depends on what you're trying to do and what your requirements are. The ISP protocol is quite simple; the programmer side is normally on a PC, but it could also be another microcontroller. – domen Feb 11 at 9:05
|
2019-05-26 21:43:19
|
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 1, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.4236152768135071, "perplexity": 8850.045909301922}, "config": {"markdown_headings": false, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": false}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2019-22/segments/1558232259757.86/warc/CC-MAIN-20190526205447-20190526231447-00550.warc.gz"}
|
http://tex.stackexchange.com/questions/96159/change-chapter-number-font-only
|
# Change chapter number font only
I would like to redefine the \thechapter command so that the eurb10 font is used only for the chapter number.
I tried something like:
\newfont{\ChapNumbFont}{eurb10}
\renewcommand \thechapter {\ChapNumbFont{\@arabic\c@chapter}\normalfont}
But the chapter number font size is not correctly scaled in sections, subsections, etc. It seems to use only the "normal" size and cannot scale it.
What am I doing wrong?
PS: I'm using a heavily customized book class.
Thanks to Ulrike Fischer's comment, I now use this trick:
\renewcommand \thechapter {{\fontencoding{U}\fontfamily{eur}\fontseries{b}\selectfont\@arabic\c@chapter}}
There is still a little font size difference.
-
While I can understand a different font in the chapter head, I don't think it's good to have the same font in the text, next to numbers in the normal font. In any case, \newfont is a deprecated command. – egreg Jan 31 '13 at 12:39
As I said before, the chapter number in the Euler font next to other digits in the normal text font has a rather disputable appearance. Euphemism for "is horrible". ;-) – egreg Jan 31 '13 at 13:33
@egreg: I agree about the look. The look of the section and subsection numbers is simply odd - like font errors. I told Elendil not to redefine \thechapter:-(. – Ulrike Fischer Jan 31 '13 at 15:07
You are right that's not good. I was hoping a different layout but this is odd :-D. I will change it! – Elendil Jan 31 '13 at 15:43
You can use something like this to call the font in a scalable way:
\documentclass{article}
\begin{document}
{\fontencoding{U}\fontfamily{eur}\fontseries{b}\selectfont abc 123
\large abc 123}
\end{document}
Be aware that the "U" in \fontencoding means "unknown", so you can't rely on characters being in their standard positions. (But the numbers should be OK.)
You shouldn't put the font-switching command in \thechapter: \thechapter is used in a lot of places. The font switch will, for example, also end up in the headers and the TOC, and it will be used when you reference a chapter, as it is stored with the label:
\newlabel{abc}{{\fontencoding {U}\fontfamily {eur}\fontseries {b}\selectfont 1}{1}}
-
I used \thechapter with the aim of switching the font everywhere the chapter number is used (TOC, labels, ...) to be consistent with the chapter header. So using \renewcommand \thechapter {\fontencoding{U}\fontfamily{eur}\fontseries{b}\selectfont \@arabic\c@chapter\normalfont} works for me. Thanks a lot. – Elendil Jan 31 '13 at 13:03
I would remove the \normalfont command. Better add additional braces to keep the font switch local. – Ulrike Fischer Jan 31 '13 at 13:06
|
2015-04-27 18:58:29
|
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 1, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.9512189626693726, "perplexity": 3634.082325540599}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2015-18/segments/1429246659319.74/warc/CC-MAIN-20150417045739-00030-ip-10-235-10-82.ec2.internal.warc.gz"}
|
http://clay6.com/qa/30291/of-the-following-molecules-the-only-one-which-is-not-an-exception-to-the-oc
|
# Of the following molecules the only one which is not an exception to the octet rule is :
$\begin{array}{ll}(a)\;BeI_2&(b)\; BBr_3\\(c)\;SnCl_2&(d)\;OF_2\end{array}$
$BeI_2$, $BBr_3$, and $SnCl_2$ are all exceptions: their central atoms end up with incomplete octets (4, 6, and 6 valence electrons respectively). In $OF_2$, oxygen has two bonds and two lone pairs, giving a complete octet.
Hence (d) is the correct answer.
|
2017-05-24 11:34:28
|
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.8155767321586609, "perplexity": 794.221073495247}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 20, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2017-22/segments/1495463607813.12/warc/CC-MAIN-20170524112717-20170524132717-00525.warc.gz"}
|
https://mitibmwatsonailab.mit.edu/research/blog/learning-restricted-boltzmann-machines-with-sparse-latent-variables/
|
Published on 06/07/2020
Restricted Boltzmann Machines (RBMs) are a common family of undirected graphical models with latent variables. An RBM is described by a bipartite graph, with all observed variables in one layer and all latent variables in the other. We consider the task of learning an RBM given samples generated according to it. The best algorithms for this task currently have time complexity Õ(n^2) for ferromagnetic RBMs (i.e., with attractive potentials) but Õ(n^d) for general RBMs, where n is the number of observed variables and d is the maximum degree of a latent variable. Let the MRF neighborhood of an observed variable be its neighborhood in the Markov Random Field of the marginal distribution of the observed variables. In this paper, we give an algorithm for learning general RBMs with time complexity Õ(n^{2^s+1}), where s is the maximum number of latent variables connected to the MRF neighborhood of an observed variable. This is an improvement when s < log₂(d−1), which corresponds to RBMs with sparse latent variables. Furthermore, we give a version of this learning algorithm that recovers a model with small prediction error and whose sample complexity is independent of the minimum potential in the Markov Random Field of the observed variables. This is of interest because the sample complexity of current algorithms scales with the inverse of the minimum potential, which cannot be controlled in terms of natural properties of the RBM.
Please cite our work using the BibTeX below.
@misc{bresler2020learning,
  title={Learning Restricted Boltzmann Machines with Sparse Latent Variables},
  author={Guy Bresler and Rares-Darius Buhai},
  year={2020},
  eprint={2006.04166},
  archivePrefix={arXiv},
  primaryClass={cs.LG}
}
|
2023-04-02 08:08:58
|
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 1, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.6537356376647949, "perplexity": 518.3652748288325}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2023-14/segments/1679296950422.77/warc/CC-MAIN-20230402074255-20230402104255-00526.warc.gz"}
|
https://search.r-project.org/CRAN/refmans/rjags/html/coda.samples.html
|
coda.samples {rjags} R Documentation
## Generate posterior samples in mcmc.list format
### Description
This is a wrapper function for jags.samples which sets a trace monitor for all requested nodes, updates the model, and coerces the output to a single mcmc.list object.
### Usage
coda.samples(model, variable.names, n.iter, thin = 1, na.rm=TRUE, ...)
### Arguments
model: a jags model object
variable.names: a character vector giving the names of variables to be monitored
n.iter: number of iterations to monitor
thin: thinning interval for monitors
na.rm: logical flag that indicates whether variables containing missing values should be omitted. See details.
...: optional arguments that are passed to the update method for jags model objects
### Details
If na.rm=TRUE (the default) then elements of a variable that are missing (NA) for any iteration in at least one chain will be dropped.
This argument was added to handle incompletely defined variables. From JAGS version 4.0.0, users may monitor variables that are not completely defined in the BUGS language description of the model, e.g. if y[i] is defined in a for loop starting from i=3 then y[1], y[2] are not defined. The user may still monitor variable y and the monitored values corresponding to y[1], y[2] will have value NA for all iterations in all chains. Most of the functions in the coda package cannot handle missing values so these variables are dropped by default.
### Value
An mcmc.list object.
### Author(s)
Martyn Plummer
### See Also
jags.samples
### Examples
data(LINE)
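The example in this copy is truncated after data(LINE). A minimal sketch of typical usage, assuming the LINE example model that ships with rjags (the monitored node names follow the classic BUGS LINE example):
library(rjags)
data(LINE)          # example jags model object included with rjags
LINE$recompile()    # recompile the stored model object before sampling
# Draw 1000 iterations monitoring alpha, beta, and sigma; returns one mcmc.list
LINE.out <- coda.samples(LINE, variable.names = c("alpha", "beta", "sigma"),
                         n.iter = 1000)
summary(LINE.out)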
|
2022-07-01 13:52:16
|
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 1, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.36282795667648315, "perplexity": 4880.763161926685}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2022-27/segments/1656103941562.52/warc/CC-MAIN-20220701125452-20220701155452-00080.warc.gz"}
|
http://math.stackexchange.com/questions/243572/proof-of-hartogss-theorem?answertab=oldest
|
# Proof of Hartogs's theorem
I'd be very grateful if someone could help me understand the proof of Hartogs's theorem appearing in Huybrechts' "Complex Geometry." The statement is:
Let $\mathbb{P}^n \subset \mathbb{C}^n$ be the unit polydisc. Let $\mathbb{P}_c:= \{ \boldsymbol{z} : 0\leq |z_i|<c\}$ for some $0<c<1$. Then if $f: \mathbb{P}^n -\bar{\mathbb{P}}_c \rightarrow \mathbb{C}$ is holomorphic, $f$ extends to a holomorphic function on $\mathbb{P}^n$.
The proof goes as follows: For fixed $\boldsymbol{w} \in \mathbb{P}^{n-1}$, the function $f_w: z \mapsto f(z,\boldsymbol{w})$ defines a holomorphic function on the annular region $A= \{z: c<|z|<1 \}$ in the complex plane. Let $f_w= \Sigma_{k=-\infty}^{\infty} a_k(\boldsymbol{w})z^k$ be the Laurent expansion of $f_w$ in this region. Then the $a_k$ define holomorphic functions on $\mathbb{P}^{n-1}$ (by a previous lemma), and if $k<0$ then $a_k$ vanishes whenever some $|w_i|>c$ (since $f_w$ then extends to the whole disc) and so vanishes on all of $\mathbb{P}^{n-1}$. Therefore we can write $f|_{A \times \mathbb{P}^{n-1}}= \Sigma_{k=0}^{\infty} a_k(\boldsymbol{w})z^k$.
I understand everything up to this point. What I can't understand is why this sum of holomorphic functions defines a holomorphic function on all of the unit polydisc. Presumably it is supposed to converge uniformly on compact subsets or something? Huybrechts says something like "the $a_k$ attain their suprema on the boundary and so uniform convergence is implied by uniform convergence on the annular region" and I have no idea what boundary or annular region he's talking about. A priori I don't know why I should really know anything about the uniformity of the convergence of the sum outside of a single copy of $A$ i.e. when $\boldsymbol{w}$ is fixed.
Thanks for your time and sorry if this is mostly nonsense.
-
This may not be the answer you're looking for, but it is how I would approach this theorem. Convergence matters rely on Cauchy's theorem anyway, so why not use it directly in the proof?
Let $c < r < R < 1$ and define $f_r: \{z \mid r < |z| < 1\} \times \mathbb{P}^{n-1} \to \mathbb{C}$ by
$$f_r(z, w) = \frac{1}{2 \pi i}\int_{|v|=r} \frac{f(v, w)}{v - z}dv.$$
Then $f_r$ is holomorphic and vanishes if $|w_k| > c$ for some index $k$ since $v \mapsto f(v, w)$ is then holomorphic within the contour $|v|=r$. This implies that $f_r$ is identically zero. In particular $f$ has the following representation on $\{z \mid r < |z| < R\} \times \mathbb{P}^{n-1}$
$$f(z, w) = \frac{1}{2 \pi i} \int_{|v| = R} \frac{f(v, w)}{v - z}dv.$$
But the right hand side defines a holomorphic function on $\{z \mid |z| < R\} \times \mathbb{P}^{n-1}$.
-
Thanks WimC. That seems much easier. – Heather Huskinson Nov 24 '12 at 19:22
|
2014-04-19 21:07:42
|
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 1, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.9598347544670105, "perplexity": 100.71973573689192}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 5, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2014-15/segments/1397609537376.43/warc/CC-MAIN-20140416005217-00597-ip-10-147-4-33.ec2.internal.warc.gz"}
|
http://mathscinotes.com/
|
## Bore Diameter Measurement Using Gage Balls
Quote of the Day
Cost is more important than quality, but quality is the best way to reduce cost.
Genichi Taguchi. I have much empirical evidence for the truth of his statement.
## Introduction
Figure 1: Hole Diameter Measurement Using Gage Balls Example.
I am continuing to work through the metrology examples on this web page as part of my junior-machinist self-training. Today's technique shows how to use gage balls to measure the bore diameter of a cylinder (Figure 1). You can measure a bore diameter using a micrometer, but I have concerns that I might be measuring along a chord instead of a diameter – this error would make the result too small. The gage ball approach should eliminate that type of error.
In this post, I will work through the basic geometry associated with this measurement and will work an example.
## Background
I have discussed gage balls in two previous blog posts (here and here), which should provide any background that you need.
## Analysis
### Derivation
Figure 2 shows how I defined my variables. The analysis involves solving the right triangle formed by X, Y, and the line formed by r1 and r2.
Figure 2: Definition of Terms.
Figure 3 shows the algebra involved in solving for the bore diameter (dB).
Figure 3: Derivation of Bore Diameter Formula.
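The algebra of Figure 3 is an image that did not survive in this copy. One plausible reconstruction from the right triangle described for Figure 2 – assuming X is the horizontal and Y the vertical distance between the ball centers, with both balls touching opposite walls – is (r1 + r2)² = X² + Y² with X = dB − r1 − r2, giving dB = r1 + r2 + √((r1 + r2)² − Y²). A minimal R sketch with hypothetical inputs:
# Reconstructed bore-diameter formula (an assumption, not necessarily the source's exact algebra)
bore_diameter <- function(r1, r2, Y) r1 + r2 + sqrt((r1 + r2)^2 - Y^2)
bore_diameter(r1 = 0.500, r2 = 0.375, Y = 0.300)   # hypothetical inches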
### Example
Figure 4 shows my results after applying the values in Figure 1 to the bore diameter formula. The result is very close to the bore diameter of 4.0000 on the scale drawing.
Figure 4: Formula Results for Conditions of Figure 1.
## Conclusion
I am almost done reviewing the use of roller gages and gage balls. A couple more examples will complete my set of canonical applications.
## Relationship Between Battery Cold Cranking Amps and Capacity
Quote of the Day
The difference between successful people and very successful people is that very successful people say 'no' to almost everything.
— Warren Buffett. Many people do not accomplish their goals because they spend the bulk of their time on items that really are just distractions. You need to look at your time usage, figure out which recurring items are taking time yet providing little real value, and minimize them.
## Introduction
Figure 1: Typical Flooded Cell Car Battery. (Source)
Many battery manufacturers do not specify the Ampere-Hour (AH) ratings for their automotive products because Cold Cranking Amperes (CCA) are more important in automotive applications than AH ratings. Car applications tend to focus on the ability of the battery to crank the engine when both the battery and car are cold. While reading a post on an automotive forum about batteries, I saw the following statement made about the relationship between a battery's CCA and AH ratings.
I have read on the box of that Inox battery conditioner that for a battery over 600 CCA you simply multiply the CCA by .07 to give you the Amp Hours for that battery …
I have seen this statement before and did not believe it because batteries intended for capacity-dependent applications (e.g. backup power) are designed differently than batteries intended to deliver surge current (e.g. car batteries). I decided that it was time that I demonstrate that this relationship does not hold for specific batteries, but does have some merit for batteries in general.
Because France requires battery manufacturers to post the AH specifications for all car batteries, I was able to find both CCA and AH specifications for a number of car batteries on European web sites. Once I gathered the data, I generated a graph showing that there is no general relationship between CCA and AH. All you can say is that, on average, increasing CCA ratings mean increasing AH ratings. There is no simple relationship that holds for all lead-acid automotive batteries.
All the analysis was done in RStudio.
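For readers who want to reproduce the two fits discussed below, here is a minimal R sketch; the file name and column names (CCA, AH, Brand) are assumptions for illustration:
batteries <- read.csv("battery_data.csv")            # hypothetical file of the gathered data
fit_all   <- lm(AH ~ CCA, data = batteries)          # one line for all batteries
fit_brand <- lm(AH ~ CCA + Brand, data = batteries)  # separate intercept per brand
summary(fit_all)    # CCA slope ~0.07 AH/CCA and R-squared ~50% in the post
summary(fit_brand)  # adding Brand raises R-squared to ~85%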
## Background
### Methodology
My approach was simple:
• I randomly chose four car batteries from each of five different vendors.
• I generated plots of AH versus CCA for each manufacturer.
• I also generated a plot of AH versus CCA for all the batteries.
### Data Set
Figure 2 shows the set of battery data that I gathered. The batteries were randomly chosen from among hundreds of choices.
Figure 2: List of Randomly Chosen Car Batteries.
## Analysis
Figure 3 shows my graph of AH versus CCA for the data of Figure 2. I also fitted lines to each vendor's data. Note that there is a wide variation in how AH varies with CCA for each vendor. There is no formula that provides a good fit between AH and CCA for all automotive batteries. The fit is not even good for batteries from the same manufacturer. For a similar chart of batteries from a single manufacturer, see Appendix A.
Figure 3: Plot of Five Manufacturers, Four Batteries Each. Note that I "jittered" the data so that points from different vendors would not sit on top of each other.
Figure 4 shows my overall curve fit. This line has a slope of 0.0688 AH/CCA, which agrees with the 0.07 AH/CCA statement on the Inox conditioner box. However, you can see that specific batteries are scattered far from the line.
Figure 4: Linear Curve Fit for All the Data.
For those who like to look at curve fit statistics, I also include Figure 5. The statistics shown are for lines that are functions of CCA alone (Figure 4), and of CCA and Brand (Figure 3).
For the AH versus CCA line, we see that CCA is a very significant factor (red underline), but our R-squared value (fraction of variability explained) is only 50%. For the AH versus CCA and Brand line, we see that CCA is very significant and some Brands are significant, but our R-squared value is still only 85%.
Figure 5: Curve Fit Statistics.
## Conclusion
While it might be tempting to estimate the AH rating of a battery from its CCA rating, there is not a simple relationship between these two battery parameters.
## Appendix A: Yuasa Example
In Figure 6, Yuasa has published a graph similar to my Figure 3. Notice how the lines have roughly the same slopes, but different intercepts. As I always say, a battery is a nonlinear function of everything.
Figure 6: Battery CCA Versus AH. (Source)
## Remote Car Starter Can Drain Car Battery Within a Week
Quote of the Day
If a free society cannot help the many who are poor, it cannot save the few who are rich.
— John F. Kennedy
## Introduction
Figure 1: 2016 Honda CR-V.
My Montana-based son mentioned that his wife's 2016 Honda CR-V (Figure 1) will not turn over after sitting in the garage for seven days. No starting problems had occurred prior to early November. Unfortunately, I have had my share of car electrical problems, and some of them have been hard to find. However, this is a new car under warranty, so I recommended that he just take it to the dealer. The Honda dealer told him that this is the result of the current discharge imposed on the battery by his aftermarket remote start system, which was verified to be within specification – the car was operating normally.
In this post, I will discuss why this behavior is normal considering how the remote starter works and the recent weather changes in Montana. I should mention that all modern cars discharge their batteries over time because their computer systems are always on – the nominal current drain is about 30 mA. Even without a remote starter, these cars will discharge their batteries in about 3 weeks (Appendix A).
## Background
### Definitions
Cold Cranking Amps (CCA)
Cold cranking amperes (CCA) is the amount of current a battery can provide at 0 °F (−18 °C). (Source)
Parasitic Current Drain (iP)
Parasitic current drain is the current drawn by any electrical device that remains powered when the ignition key is turned off. In the case of a remote starter, the radio receiver must stay powered so that it can receive the start signal; this receiver current draw is parasitic drain. (Source)
Ampere-Hour (Ah) Capacity
An ampere hour or amp hour is a unit of electric charge, having dimensions of electric current times time, equal to the charge transferred by a steady current of one ampere flowing for one hour, or 3600 coulombs. Battery charge capacity is usually rated in Ah. (Source)
### Battery Capacity Versus Temperature
Figure 2 shows the impact of temperature on the capacity of a lead-acid traction battery.
Figure 2: Traction Battery Capacity Versus Temperature.
### CCA Versus Battery Capacity
Figure 3 shows that a battery's available Cold Cranking Amperage (CCA) decreases with reduced battery capacity. For the analysis of this post, I will assume that a battery's available CCA reduces proportionately with its capacity. Figure 3 originally came from a Yuasa technical specification.
Figure 3: Battery CCA Versus Capacity. (Source)
### Cranking Current Required By a Car Versus Temperature
Figure 4 shows that the battery's available CCA decreases with lower temperature as the car's need for cranking amperage increases. I refer to this situation as "bad squared".
Figure 4: Required Car Cranking Amperage Versus Temperature. (Continental Battery)
### Problem Statement
Here is what I was able to find out about my son's situation:
• His remote starter is rated to have a maximum parasitic current draw of 75 mA.
• The measured remote starter parasitic current drain is 70 mA – it is within specification.
• The car battery experiences the parasitic current drain for 7 days.
• His wife just had a baby and is on maternity leave. The car had been driven every day until October 30th.
• The car is only driven short distances. I am guessing that the battery is never fully charged because of how it is started, driven a short distance, and stopped. I will assume the battery routinely sits at 75% charge during normal use. In fact, cars brought in for service typically have batteries at 70% of a full charge (Source).
• The weather recently turned cold. The temperature had been in the 70 °F (~21 °C) range and is now about 18 °F (−8 °C) in the morning when the car is started. This temperature drop reduces the battery capacity to 60% of its charge at the specification temperature (77 °F).
• The car battery is rated for 500 CCA. I do not have a specification for its ampere-hour capacity. Batteries of similar size and CCA rating have capacity ratings of about 50 Ah.
• I will assume that the effective CCA rating is proportional to the Ah rating, which is illustrated in Figure 3.
This is enough information for me to figure out what is going on.
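A back-of-the-envelope R sketch of the capacity arithmetic in Figure 5, assuming the state-of-charge and temperature deratings simply multiply (my reading of the assumptions above, not a quote from the figure):
C_rated  <- 50               # Ah, assumed from similar batteries
charge   <- 0.75             # routine state of charge
derate_T <- 0.60             # capacity remaining at 18 F relative to 77 F
drain    <- 0.070 * 24 * 7   # Ah removed by a 70 mA drain over 7 days (~11.8 Ah)
avail    <- C_rated * charge * derate_T - drain   # ~10.7 Ah left
avail / (C_rated * charge * derate_T)             # ~48%: more than half the cold capacity is gone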
## Analysis
Figure 5 shows my analysis of the battery capacity. I calculate that the battery's capacity has been reduced by more than 50%. This results in a large decrease in the available cranking amps (justification in a later post) and produces slow or no cranking when attempting to start the car.
Figure 5: Battery Capacity Analysis.
## Conclusion
I am afraid that a parasitic load on the battery of 70 mA is enough to drain the battery sufficiently after seven days to make starting difficult. This reminds me of a battery problem I had in my youth where a glove compartment light did not turn off and would slowly drain my battery. I could fix the light problem, but the remote starter must draw current to detect the radio start signal.
## Appendix A: Computer Parasitic Leakage Example
Figure 6 shows a manual excerpt that illustrates how modern cars have computers that constantly drain charge from their batteries.
Figure 6: Chrysler 200 Manual Calling Out Battery Discharge.
## Angle Measurement Using Roller Gages
Quote of the Day
It is a universal truth that the loss of liberty at home is to be charged to the provisions against danger, real or pretended, from abroad.
— James Madison
## Introduction
Figure 1: Angle Measurement Example.
I am continuing to work through some basic metrology examples – today's example uses roller gages to measure the angle of a drilled hole (Figure 1). The technique discussed here uses two roller gages and a plug. The plug must fit the hole snugly (i.e. no backlash) because it provides the surface we will be measuring against. This approach assumes that you need a very accurate measurement of a hole's angle; rough measurements can be made with a protractor.
## Background
This example is based on the material found on this web page. I will derive the angle relationship presented there (Equation 1) and present a worked example that is confirmed using a scale drawing (Figure 1).
Eq. 1 $\displaystyle \theta \left( {{{L}_{1}},{{L}_{2}},{{D}_{1}},{{D}_{2}}} \right)=2\cdot \text{arctan}\left( {\frac{1}{2}\cdot \frac{{{{D}_{1}}-{{D}_{2}}}}{{{{L}_{1}}-\frac{{{{D}_{1}}}}{2}-\left( {{{L}_{2}}-\frac{{{{D}_{2}}}}{2}} \right)}}} \right)$
where
• L1 is the distance from the reference surface to the outside edge of the first roller gage.
• L2 is the distance from the reference surface to the outside edge of the second roller gage.
• D1 is the diameter of the first roller gage.
• D2 is the diameter of the second roller gage.
• θ is the angle of the drilled hole relative to the surface that is drilled.
These variables are all indicated in Figure 2.
Figure 2: Reference Drawing Showing Critical Variables.
## Analysis
### Derivation
Figure 3 shows how to derive Equation 1. The basic derivation process is simple:
• The center of each roller gage is on a line that makes an angle of θ/2 with the plug.
• The slope of the line connecting the roller gage centers is tan(θ/2).
• The line's slope is computed using the rise ($\frac{{{{D}_{1}}}}{2}\cdot \left( {1+\tan \left( {\frac{\theta }{2}} \right)} \right)-\frac{{{{D}_{2}}}}{2}\cdot \left( {1+\tan \left( {\frac{\theta }{2}} \right)} \right)$) and run (L1 − L2) values shown in Figure 2.
Figure 3: Derivation of Angle Relationship.
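Equation 1 is easy to evaluate numerically; a minimal R sketch with hypothetical measurements (not the values from Figure 1):
# Hole angle from two roller-gage measurements (Equation 1), returned in radians
hole_angle <- function(L1, L2, D1, D2) {
  2 * atan(0.5 * (D1 - D2) / ((L1 - D1/2) - (L2 - D2/2)))
}
hole_angle(L1 = 2.500, L2 = 1.200, D1 = 0.500, D2 = 0.250) * 180 / pi  # ~12.2 degrees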
### Example
Figure 4 works through the angle calculation example of Figure 1.
Figure 4: Worked Example Using Values From Figure 1.
## Conclusion
I have some designs I plan to build that have angled holes. This procedure will give me a way to accurately measure the angle of these holes.
## Ensuring Stable DC Power Delivery To Switching Loads
Quote of the Day
Life is never made unbearable by circumstances, but only by lack of meaning and purpose.
— Viktor E. Frankl
## Introduction
Figure 1: Commercial LED Lighting Deployment Using DC Power Distribution. (Source)
I presented a seminar over lunch today on short-range DC power distribution, which I believe is one of the most exciting areas in electronics today. AC power distribution has dominated power engineering since the "War of the Currents" ended with Westinghouse's AC system winning a decisive victory over Edison's DC system back in the 1890s. Starting in 1930s Europe, high-voltage DC distribution has slowly gained a foothold in some long-haul, high power distribution applications, but most power distribution has continued to be dominated by AC.
We are now seeing a resurgence in low-voltage DC power for use in short-range power distribution because of recent technology changes:
• Increasing use of photovoltaics, which produce DC.
• Increasing use of LED lighting, which can be powered more efficiently by DC.
• Desire to use network cable to distribute both power and data (e.g. PoE).
• Desire to reduce installation costs by eliminating the expense associated with ensuring that AC wiring is safe (e.g. conduit, heavy gauge wire, labor using highly-trained electricians, etc).
One issue that needs to be addressed is how to ensure the stability of a DC distribution network when it is driving loads composed of switching power supplies, which have a rather complicated input impedance function. One common approach is to apply the Middlebrook stability criterion, which provides a sufficient condition for a stable DC network. In this blog post, I will discuss the derivation and application of the Middlebrook stability criterion.
My raw Mathcad file is included here (with PDF) if you wish to work through the examples yourself.
## Background
### History
RD Middlebrook published his criterion in the journal article "Input Filter Considerations in Design and Application of Switching Regulators", IEEE Industry Applications Society Annual Meeting, October 1976. The Middlebrook criterion is commonly used because it is simple to apply; however, it has a relatively high implementation cost. For a discussion of the alternatives, see this presentation.
The Middlebrook criterion is also known as an impedance ratio criterion. I will demonstrate why it is called an impedance ratio criterion in Figure 5.
### Middlebrook Criterion Statement
The following discussion summarizes a longer paper that I include here.
Figure 3 shows how power engineers usually define the source and input impedances of a power system.
Figure 3: Simple Power System Model.
Using the impedance definitions illustrated in Figure 3, I usually see the Middlebrook criterion stated as follows:
A system will be locally stable if the magnitude of the input impedance of the load subsystem is larger than the magnitude of the output impedance of source subsystem.
You also often see engineers refer to the criterion's equivalent graphical form:
If the magnitude plots of the source impedance (ZS) and load impedance (ZL) do not intersect, the system is stable. If they do intersect, the system may or may not be stable: it will still be stable provided that the impedance ratio transfer function ZS/ZL satisfies the Nyquist stability test.
Figure 4 shows what a graphical analysis looks like (Source). The yellow cross-hatched region shows that there is a potential stability problem with this system. Because the Middlebrook criterion is a sufficient but not a necessary condition, more analysis is needed to determine whether there is a real problem. In practice, most engineers would not do further analysis but would instead add some form of compensation network to eliminate the yellow cross-hatched region.
Figure 4: Graphical Analysis Example.
One advantage of the graphical approach is that it can be applied to measured impedance data. See Appendix A for a plot of an actual power supply input impedance. The measured data can be quite complex.
## Analysis
### Derivation
Figure 5 shows my pictorial view of how to "prove" the Middlebrook criterion. The process is straightforward:
• Generate a model of the power converter output and the load.
This step defines the critical impedance values (Zo1 and Yi2 = 1/Zi2) that are used to evaluate the stability of a power distribution system.
• Perform a simple circuit analysis that yields two equations.
• These equations can be represented as a control system graph.
This step allows us to apply an existing stability criterion to this specific case.
• This control system graph meets the requirements of the Small Gain Theorem, which provides the stability criterion.
The Small Gain Theorem provides a sufficient condition for stability, i.e. the system is stable if $\left\| {{{Z}_{{o1}}}\left( {j\cdot \omega } \right)} \right\|\cdot \left\| {{{Y}_{{i2}}}\left( {j\cdot \omega } \right)} \right\|<1$. This is the key result.
• Express Yi2 as 1/Zi2, which gives us the impedance ratio criterion, i.e. $\frac{{\left\| {{{Z}_{{o1}}}\left( {j\cdot \omega } \right)} \right\|}}{{\left\| {{{Z}_{{i2}}}\left( {j\cdot \omega } \right)} \right\|}}< 1$.
Figure 5: Illustration Outlining the Proof of the Middlebrook Stability Theorem.
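As a toy numeric illustration of the ratio test, the R sketch below compares the output impedance of a damped LC source filter against a flat load input impedance; all component values are hypothetical (the original analysis lives in the Mathcad file):
f  <- 10^seq(1, 5, length.out = 400)         # Hz
w  <- 2 * pi * f
Lf <- 10e-6; Cf <- 100e-6; Rd <- 0.05        # hypothetical filter values
Zl_branch <- Rd + 1i * w * Lf                # damped inductor branch
Zc_branch <- 1 / (1i * w * Cf)               # capacitor branch
Zs <- Zl_branch * Zc_branch / (Zl_branch + Zc_branch)  # source output impedance Zo1
Zload <- 1.0                                 # ohms: flat |Zi2| approximation of the load
max(Mod(Zs)) / Zload                         # ~2 here: the sufficient test fails near resonance,
                                             # so more damping or a Nyquist check is required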
### Worked Example: Uncompensated Case
I found a good paper that illustrates how to apply the Middlebrook criterion in practice, and I will work through it in detail here.
Figure 6 shows a typical power supply situation and the sufficient conditions for stability in this case (light yellow highlight). This configuration may not be stable, but can often be made stable by adding a lossy capacitor, which I discuss in the following section.
Figure 6: Typical Power Supply Situation with No Compensation.
### Worked Example: Lossy Capacitor Compensation
Figure 7 shows how adding a lossy capacitor may stabilize the circuit of Figure 6. The analysis shown in Figure 7 is for a simplified version of Figure 6 – the algebra quickly gets out of control otherwise.
Figure 7: Effect of Lossy Input Capacitor on Stability for the Circuit of Figure 6.
## Conclusion
I have used the Middlebrook stability criterion for years, but have never taken the time to write down a tutorial for my staff. My recent seminar preparation provided me an excuse to finally write down a tutorial.
## Appendix A: Measured Power Supply Input Impedance
Figure 7 shows a graph of an actual power supply input impedance (Source).
## Radius Measurement Using Roller Gages
Quote of the Day
There are no secrets to success. It is the result of preparation, hard work and learning from failure.
— Colin Powell, general and statesman.
## Introduction
Figure 1: Radius Measurement Example Using Two Gage Rollers and a Surface Plate.
This post demonstrates how to measure the radius of an arc using two roller gages. While I am a very amateur machinist, I have on occasion needed to measure the radius of an arc (i.e. a partial circle) and have not been sure how to approach that measurement. It turns out to be simple given two equal-diameter roller gages and a surface plate. You can determine the radius by taking one measurement and knowing the roller gage diameter.
## Background
I have been reading this web page on metrology, and this post consists of my notes from that reading. The key formula for measuring the radius of curvature of an arc is given by Equation 1.
Eq. 1 $\displaystyle R=\frac{{{{{\left( {L-d} \right)}}^{2}}}}{{8\cdot d}}$
where (referring to Figure 2):
• R is the radius of the arc we want to measure.
• d is the diameter of the two roller gages – they must be of equal diameter.
• L is the distance between the two roller gages, measured between their points of maximum separation.
## Analysis
### Derivation Geometry
The geometric situation is simple and illustrated in Figure 2. The radius of the yellow circle (R) is what needs to be measured. Note that we do not need a full circle to apply this method – we can work with an arc. The only direct measurement I need to make is of L.
Figure 2: Radius Measurement Using Gage Rollers Scenario.
### Derivation and Example
Figure 3 shows the derivation, which consists of applying the Pythagorean theorem and simplifying. I also include my calculations for the example of Figure 1. The mathematical result equals the value on my scale drawing (Figure 1).
Figure 3: Proof and Worked Example Using the configuration in Figure 1.
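Equation 1 is a one-liner to check numerically; a minimal R sketch with hypothetical measurements:
# Arc radius from two equal roller gages (Equation 1): R = (L - d)^2 / (8 * d)
arc_radius <- function(L, d) (L - d)^2 / (8 * d)
arc_radius(L = 6.25, d = 0.50)   # hypothetical inches: ~8.27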
## Conclusion
I have struggled to measure the radii of various partially circular objects like bowls. Using a couple of roller gages gives me a simple way to measure the radius of curvature for these partial circles.
## Ion Propulsion Math
Quote of the Day
Telling a story is one of the best ways we have of coming up with new ideas, and also of learning about each other and our world.
## Introduction
Figure 1: Dawn Mission Profile. (Source)
NASA has a project known as Dawn that put a space probe in orbit around the asteroid Vesta and then the dwarf planet Ceres. C-SPAN presented an excellent Dawn mission briefing given by Marc Rayman, the Mission Director and Chief Engineer. One of the most interesting aspects of the Dawn spacecraft is its use of an ion thruster to maneuver from one destination to another. This post presents some simple math that can be used to determine some of its key performance characteristics.
This briefing was for the general public and presented some excellent material. For those who like to look at my raw files, I include my Mathcad, PDF, and XPS versions here.
## Background
### Dawn Spacecraft Information
The following quote from the Wikipedia article on the Dawn spacecraft provides lots of data for me to use in my analysis.
The Dawn spacecraft is propelled by three xenon ion thrusters … and uses only one at a time. They have a specific impulse of 3,100 s and produce a thrust of 90 mN. The whole spacecraft, including the ion propulsion thrusters, is powered by a 10 kW (at 1 AU) triple-junction gallium arsenide photovoltaic solar array manufactured by Dutch Space. Dawn was allocated 275 kg (606 lb) of xenon for its Vesta approach, and carried another 110 kg (243 lb) to reach Ceres, out of a total capacity of 425 kg (937 lb) of on-board propellant. With the propellant it carries, Dawn can perform a velocity change of more than 10 km/s over the course of its mission, far more than any previous spacecraft achieved with onboard propellant after separation from its launch rocket.
I can summarize this data with the following list:
• The thruster provides a thrust of 90 mN.
• The thruster generates a specific impulse of 3,100 s.
• The thruster and fuel supply can provide a total velocity change ΔV = 10 km/s.
• The photovoltaic array has a maximum power output of 10 kW @ 1 AU.
• The total fuel mass is 425 kg.
I also found some additional useful information:
• Xenon ions exit at a velocity of 30 km/s. (Source)
• The maximum ion thruster input power level is 2.3 kW. (Source)
• The ion thruster power efficiency is 61%. (Source)
• The Dawn spacecraft Beginning of Life (BOL) mass is 1240 kg. (Source)
### Dawn Thruster Block Diagram
Figure 2 shows a block diagram of Dawn's ion thruster. The basic construction is reminiscent of the electron gun in an old tube-type television.
The thruster concept is simple:
• Supply xenon from the propellant bottle to the ionization chamber.
• Ionize the xenon.
• Accelerate the xenon ions using a high-voltage electric field.
Figure 2: Dawn Thruster Block Diagram. (Source)
## Analysis
### Objective
My plan is to compute the following spacecraft characteristics:
• thrust (FIon)
• fuel burn rate (m')
• mission time (TMission)
• ΔV
based on the
• array power (P)
• xenon molecular velocity (v)
• fuel quantity (MFuel)
• spacecraft mass (MBOL)
• engine efficiency (η).
### Solution Setup
Figure 3 shows how I set up this calculation.
Figure 3: Analysis Setup.
### Determine Key Performance Indices
Figure 4 shows how I used conservation of energy to derive formulas for the thrust and fuel burn rate. The analysis approach is simple:
• Solar power (minus efficiency losses) is converted into xenon ion kinetic energy.
• The kinetic energy of the xenon ions determines their exit velocity.
• The exit velocity and mass loss rate are then converted into a rate of change of momentum.
• The rate of change of momentum equals the thrust force.
All the numbers obtained using this approach reasonably match the values quoted in Wikipedia. I should note that the rated mission lifetime is 50K hours, but there is only enough fuel for about 40K hours at full power (i.e. 1 AU from the Sun). The 40K hours is still reasonable because the Dawn spacecraft spends a significant amount of time in orbit about Vesta and Ceres, so the thruster is not on all the time.
Figure 4 Mathcad Note: You will see that many of the variables in Figure 4's derivation are teal-colored rather than black. For derivations, I often use a different variable style to prevent an early numeric definition from interfering with a downstream symbolic derivation. In Mathcad, a variable in a different style is treated as a different variable even if it has the same name. Otherwise, Mathcad would substitute the upstream numeric value of the variable, which I do not want when I need a symbolic result.
Figure 4: Determination of Four Key Engine Parameters.
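The energy argument gives F = 2·η·P/v (from jet power ½·m′·v² = η·P and F = m′·v). A quick R sanity check against the quoted figures – my own arithmetic, not the contents of the Mathcad file:
eta <- 0.61; P <- 2.3e3; v <- 30e3      # efficiency, input power (W), ion exit velocity (m/s)
F     <- 2 * eta * P / v                # thrust ~0.094 N, close to the quoted 90 mN
mdot  <- F / v                          # fuel burn rate ~3.1e-6 kg/s
hours <- 425 / mdot / 3600              # ~38,000 h of full-power thrusting (the "40K hours")
dV    <- v * log(1240 / (1240 - 425))   # rocket equation: ~12.6 km/s, i.e. "more than 10 km/s"
c(thrust_N = F, burn_kg_s = mdot, hours = hours, dV_m_s = dV)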
## Conclusion
I was able to use some simple math and physics to reproduce some of the key performance metrics for the Dawn spacecraft. This helps me understand how ion propulsion works and where it is useful. Because it requires so much power, I can see why it would primarily be useful for spacecraft that are close enough to the Sun to use solar power. A radioisotope power source (often used on deep space probes) would not provide sufficient power to make an ion drive work – Rayman makes this comment in his briefing.
The Dawn mission was very interesting. I want to commend Marc Rayman for the excellent briefings he has presented throughout the project. I have enjoyed keeping up with their discoveries about Vesta and Ceres.
Quote of the Day
I like opera, I just don't want to be around the people who like opera.
Justice Clarence Thomas, during a discussion of Justice Scalia and Scalia's love of opera. During a working stint in Keyport, WA, I had a coworker who LOVED opera. Thus, I completely understand this statement.
Figure 1: Homemade Roof Protractor. (Source)
As an amateur carpenter, I am always looking for simple and cheap construction tools. Recently, I have been working on improving my roof framing knowledge. During my reading on this topic, I saw this roof pitch protractor in a Journal of Light Construction (JLC) article. Notice how the template has a handle to make hauling it up a ladder easier. To get an accurate roof pitch, all you need to do is (1) place the template on the roof, (2) clamp a spirit level onto the template in the level position, and (3) read the pitch off the scale – simple, fast, accurate.
## WW2 Casualty Rates By Country
Quote of the Day
Mistakes should be examined, learned from, and discarded; not dwelt upon and stored.
— Tim Fargo. I know a number of people that live their lives in constant fear of making mistakes. No one likes mistakes, but people who make no mistakes do not make anything. You must develop a work style that is tolerant of mistakes and that allows you to make them early when they are generally less costly.
Figure 1: WW2 Casualty Percentages By Country. (Source)
I was recently watching a documentary on WW2 that mentioned that Greece and Yugoslavia suffered some of the highest casualty rates during WW2. While I have read much about WW2, I had not looked at the casualty rates as a percentage of each country's population. I did some quick web searching and found that Wikipedia has an excellent table summarizing WW2 casualties by country and population, which I imported into Excel and sorted by casualty rate. These percentages are mind-numbing. While Greece and Yugoslavia suffered terribly, other countries suffered even more.
National traumas like WW2 last many generations. Over 150 years ago, we suffered a 2% loss of population during the Civil War, and that loss affects us to this day. One personal story will illustrate my point. I used to work in Panama City, Florida – a great place to be during the winter. Because of my accent, people knew I was not a local, and one coworker told me he would not hold the "Recent Unpleasantness" against me. I had to ask "What was the Recent Unpleasantness?" and was floored to find out it was the Civil War.
There are a couple of items in Figure 1 that I would like to highlight:
• Nauru and Portuguese Timor
Their occupation by the Japanese was particularly brutal.
• Ruanda-Urundi
While WW2 in Northern Africa is well documented, I had no idea of the casualties caused by WW2's demand for resources.
For those who wish to work with the data themselves, I include my Excel Workbook here.
## I Am Now A Grandpa
Quote of the Day
I can be President of the United States or I can control Alice. I cannot possibly do both.
— Theodore Roosevelt, speaking of his daughter Alice.
Figure 1: Picture of My Granddaughter.
I am bursting with excitement right now. My youngest son and his wife now have a little girl. The baby arrived about 3 weeks early, but both mother and baby are doing fine. I could not be happier for their family.
I find the whole process of having a child amazing:
• Before they arrive, you have never met them and could not love them more.
• After they arrive, life becomes a blur of love, work, and worry.
• When they leave home, you find yourself wishing you could have some of the old days back.
Figure 2: Beautiful Eyes.
Unfortunately, my son and his wife live 1,000 miles away in Montana. The drive to Montana is a long one and can be dangerous during the winter – North Dakota blizzards are brutal. Air service to western Montana is expensive and inconvenient (i.e. connections through Denver or Salt Lake City).
One way or another, I am going to be spending much more time in Montana.
Figure 3: One Month Old.
|
2016-12-03 21:51:42
|
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 5, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.42639002203941345, "perplexity": 1923.6296465003634}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2016-50/segments/1480698541140.30/warc/CC-MAIN-20161202170901-00475-ip-10-31-129-80.ec2.internal.warc.gz"}
|
https://gedeno.com/sample-practice-test-4/
|
# Sample Practice Test 4
Our lessons and practice tests are provided by Onsego. GED Testing Service recognizes Onsego as a trusted publisher that has developed curriculum materials that are 100% aligned with the GED test.
1. What check does the U.S. Senate have on the president?
2. Which part of the U.S. Constitution states the six purposes of government?
3. The statement below is from a historical document.
'We the People of the United States... do ordain and establish this Constitution for the United States of America.'
How is this statement reflected in the modern American political system?
4. All of the following are examples of the principle of checks and balances EXCEPT _____.
5. To amend the Constitution, a proposal must be made by _____ of the members of Congress.
6. A quantity in an experiment that can have more than one value is a _______.
7. The purpose of control in an experiment is to _______.
8. When a dominant allele is present, the offspring will show …. characteristics.
9. The density of a cube that measures 1.00 cm on each side and has a mass of 2.0 g is _______.
Use this formula: Density is mass/volume.
10. The base unit of mass in the SI system is the _______.
11. Simplify the given expression:
$$2(10 - 6p) + 10(-2p+ 5)$$
12. Tom is buying flea medicine for his dog. The amount of medicine depends on the dog’s weight. The medicine is available in packages that vary in steps of 10 pounds of dog weight. How accurately does Tom need to know his dog’s weight to buy the correct medicine?
13. Evaluate.
$$4x^{2} + 3xy + 4y^{2}$$
when x = -3 and y = 0
14. Kevin is writing an essay for his college assignment. He has already written $$1500$$ words for the essay and needs to write more than $$2500$$ words. If Kevin writes $$10$$ words per minute, write an inequality to find the number of minutes, $$x$$, that Kevin needs to complete his essay.
15. Dorrie wants to buy a house. She has the following expenses: rent of $650, credit card monthly bills of $320, a car payment of $410, and a student loan payment of $115. Dorrie has a yearly salary of $46,500.
A mortgage company will use the debt-to-income ratio as a metric to determine if Dorrie qualifies for a loan. The debt-to-income ratio is calculated as how much she owes per month divided by how much she earns each month.
What is Dorrie’s debt-to-income ratio?
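A quick check of the arithmetic the question describes (monthly debts divided by monthly income), as a short R sketch:
debts  <- 650 + 320 + 410 + 115   # monthly obligations: $1,495
income <- 46500 / 12              # monthly salary: $3,875
debts / income                    # ~0.386, a debt-to-income ratio of about 39%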
16. When you ___________, you restate the most important ideas in a text in your own words.
17. Read this set of statements. Draw a conclusion about what is happening.
At the sound of your keys in the door, your cat comes running. He greets you at the door, meowing, and rubbing himself against your legs. You look down at him in surprise. Henry isn't usually that glad to see you when you've been away all weekend. Then it occurs to you WHY he's so happy to see you, and you feel just awful.
18. Which of the following is NOT a type of text structure?
19. The thesis in a nonfiction work is
20. When you analyze style, you
This practice test is a part of the Onsego GED Prep. Get Onsego and track your progress.
Take another practice test.
### GED Test Tips
It’s important that you answer the question asked. Many students fail not because they don’t know the facts but because they didn’t read the questions correctly.
Don’t allow this to happen to you. Build your confidence by improving your understanding of what is required on the test and delivering just that.
Last Updated on November 16, 2021.
|
2022-01-23 08:57:13
|
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 1, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.41961365938186646, "perplexity": 1931.8055650426622}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.3, "absolute_threshold": 20, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2022-05/segments/1642320304217.55/warc/CC-MAIN-20220123081226-20220123111226-00091.warc.gz"}
|
http://www.snapraid.it/faq
|
## General
### What's SnapRAID?
SnapRAID is an application that makes a partial backup of your disk array. If some of the disks in your array fail, even if they are completely broken, you will be able to recover their content. It's only a partial backup because it doesn't let you recover from a failure of the whole array, but only when the number of failed disks is under a predefined limit.
### How is it different than RAID?
RAID is a real-time solution that propagates changes immediately to the parity data. In this sense, RAIDs are not backups, because they don't allow you to restore an old state of the array.
SnapRAID is instead more similar to a backup solution, and you can restore the state of the last backup. For example, with SnapRAID, if you delete a file by accident, you can easily restore it.
• All your data is hashed to ensure data integrity and to avoid silent corruption.
• If the failed disks are too many to allow a recovery, you lose the data only on the failed disks. All the data in the other disks is safe.
• If you accidentally delete some files in a disk, you can recover them.
• The disks can have different sizes.
• You can add disks at any time.
• It doesn't lock-in your data. You can stop using SnapRAID at any time without the need to reformat or move data.
• To access a file, a single disk needs to spin, saving power and producing less noise.
• You can have up to six parity levels compared to the one of RAID5 and the two of RAID6.
Obviously RAID has other advantages, like better speed due to data striping. What is best for you depends on your specific use case.
### How is it different than ZFS/Btrfs snapshots and NTFS Shadow Copy?
SnapRAID creates a file-system snapshot conceptually similar to other snapshot solutions. The difference is that the purpose of SnapRAID is to be able to recover a disk to the saved state after a complete disk failure.
SnapRAID is more similar to the RAID-Z/RAID functionality of ZFS/Btrfs. Compared to them, SnapRAID has the following advantages:
• If the failed disks are too many to allow a recovery, you lose the data only on the failed disks. All the data in the other disks is safe.
• You can add disks at any time (not possible with ZFS, but possible with Btrfs).
• It doesn't lock-in your data. You can stop using SnapRAID at any time without the need to reformat or move data.
• To access a file, a single disk needs to spin, saving power and producing less noise.
• You can have up to six parity levels compared to the three of ZFS RAID-Z and the two of Btrfs RAID-5/6.
Obviously ZFS/Btrfs have other advantages, like being real-time solutions. What is best for you, depends on your specific use case.
### How is it different than other backup solutions?
With a normal backup solution you always need as much space as the full disk array for the backup. With SnapRAID you need only one additional parity disk to be able to recover from the most common case of a single disk failure. By adding more parity disks, you can recover from more disk failures.
Note that only a full backup allows you to recover from a complete failure of the disk array, and it's surely the preferred solution if you can afford it. SnapRAID is a good option if a full backup is not possible.
### What are the advantages of SnapRAID compared to similar solutions like unRAID/FlexRAID/Windows Storage Spaces?
You can see the Compare page to see it compared to other similar solutions.
A strong advantage of SnapRAID is its integrity and silent-error management, which is at the same level as (if not better than) the ZFS and Btrfs file-systems. Instead, unRAID and Storage Spaces have no integrity check at all! FlexRAID is at least able to report silent errors, but not to fix them.
### How much does it cost?
Nothing. It's a free software application released under the GPLv3 license.
### Where can I get support?
Go to the Forum.
### Has SnapRAID a GUI?
No. SnapRAID is a command line application. But there is the Elucidate external GUI, and a plugin for OpenMediaVault.
### Does SnapRAID provide encryption, virtual views/storage pooling, SMART monitoring and Power control?
SnapRAID provides a basic virtual views/storage pooling solution, but it's also compatible with any other pooling solutions.
For encryption, SMART monitoring, power control and data recovery you can use other tools. Here are some suggestions:

| | Linux | Windows | Mac OS X |
|---|---|---|---|
| Encryption | dmcrypt/LUKS (tutorial) | VeraCrypt, FreeOTFE | |
| Virtual views/storage pooling | mergerfs (tutorial), mhddfs (tutorial), aufs, Greyhole | Liquesce (free, BETA), DrivePool ($), PoolHD ($), DriveBender ($) | |
| SMART monitor | smartmontools, GSmartControl | smartmontools, GSmartControl, CrystalDiskInfo | smartmontools, GSmartControl, SMARTReporter |
| Power Control | hdparm | hdparm | |
| Data recovering | ddrescue, safecopy, dd_rescue | | |
### Is SnapRAID suitable to backup the home/My Documents directory?
No. If you have a limited amount of data that can easily be saved on an external USB HD or in online backup services, it doesn't make sense to use SnapRAID. SnapRAID should be used when you have to back up a very large amount of data, many terabytes, where a full backup copy of such an amount of data is not possible in any other way.
### Is SnapRAID suitable to backup the boot OS disk?
No. For the disk used for booting your machine and containing the OS files, it's better to use another solution, like RAID1 or a mirror copy. SnapRAID needs an OS to run, so it cannot recover the OS if the OS is missing.
### Can SnapRAID hash files without parity computation?
No. It's not possible to use SnapRAID only to hash files. The hashing used by SnapRAID is meant to be used in combination with parity computation, and it's not the best implementation for a hash-only solution.
You can find a list of file verification tools at: Wikipedia: Comparison_of_file_verification_software.
## Setup
For a setup example in Ubuntu 14.04, you can check Zack's SnapRAID tutorial.
### Do I need one or more parity disks?
As a rule of thumb, you can stay with one parity disk (RAID5) with up to four data disks, and then use one parity disk for each group of seven data disks, as in the table:

| Parities | Data disks |
|---|---|
| 1 / Single Parity / RAID5 | 2 - 4 |
| 2 / Double Parity / RAID6 | 5 - 14 |
| 3 / Triple Parity | 15 - 21 |
| 4 / Quad Parity | 22 - 28 |
| 5 / Penta Parity | 29 - 35 |
| 6 / Hexa Parity | 36 - 42 |

This table takes into account that more parity levels also help to recover from an unsynced array, and that you are likely running all the disks in the same box and environment, meaning that their failures could be correlated.
Take care that multiple failures happen. As of now, the worst case reported in the SnapRAID history is a four-disk failure, successfully recovered using four parities.
Some interesting articles about number of parity levels and failure rate of disks are: Why RAID 5 stops working in 2009, Why RAID 6 stops working in 2019, Does RAID 6 stop working in 2019?, Triple-Parity RAID and Beyond, How long do disk drives last?.
### Which file-system is recommended for SnapRAID?
For the data disks, follow these suggestions:
| Data | Result |
|---|---|
| Any | OK. In general you can use whatever file-system you like. Just see the following notes for some exceptions. |
| Btrfs | OK. Ensure to build SnapRAID with the libblkid library to get UUID support. You will be warned at every run if this doesn't happen. Note that multiple Btrfs snapshots are not supported. |
| ZFS | Partial. You won't get UUID support, but this doesn't prevent the use of the core SnapRAID functionality. Note that multiple ZFS snapshots are not supported. |
| ReiserFS | Unsure. There are reports of speed increases when switching from ReiserFS to other file-systems. ext4 and XFS are better anyway. Use them instead. |
| FAT | Unsure. There are reports of duplicate inode cases. NTFS is a lot faster anyway. Use it instead. |
| ReFS | No. ReFS uses 128 bit inodes, and SnapRAID doesn't support them yet. |
For the parity disks, follow these suggestions:
| Parity | Result |
|---|---|
| ext3 | No. It doesn't support the fallocate() command needed to allocate the parity files. You can anyway use it if you limit the use of the parity disk to contain only the parity file. |
| ext4 | OK |
| XFS | OK |
| Btrfs | OK. But ensure to build SnapRAID with the libblkid library to get UUID support. You will be warned at every run if this doesn't happen. |
| JFS | No. It doesn't support the fallocate() command needed to allocate the parity files. It also doesn't seem to implement the posix_fadvise() command optimally. |
| ReiserFS | Unsure. There are reports of speed increases when switching from ReiserFS to other file-systems. ext4 and XFS are better anyway. Use them instead. |
| NTFS | OK in Windows. No in Linux, as the driver doesn't support the fallocate() command needed to allocate the parity files. You can anyway use it if you limit the use of the parity disk to contain only the parity file. |
| FAT | No. You cannot store files bigger than 4 GB. It's also not journaled. Use NTFS instead. |
| HFS+ | OK |
In Linux, to get more space for the parity, it's recommended to format the parity file-system with the -m 0 -T largefile4 options. Like:
mkfs.ext4 -m 0 -T largefile4 DEVICE
### Which hardware configuration is recommended?
For best performance it's recommended to have all the disks connected with SATA and not with USB.
Check Raj's Prototype Builds for some examples of hardware setups.
For choosing hard disks, you can check Backblaze's What Hard Drive Should I Buy?, and Backblaze's Enterprise Drives: Fact or Fiction?.
### Is a multi-core CPU recommended?
No. SnapRAID is usually not CPU bound. The limiting factor is the read performance of the disks, so even a single-core CPU should be enough.
For example, on my system I get a combined read speed of 400 MiB/s, but SnapRAID is able to hash at 12000 MiB/s on a single core.
### Is a 64 bit operating system recommended?
Yes. SnapRAID works better on a 64 bit operating system. It can access more than 4 GiB of memory, and use faster hashing and parity algorithms. Anyway, if you have less than 30 TB of data, it also performs very well on a 32 bit operating system.
### How much memory SnapRAID requires to run?
Approximately, SnapRAID requires 1 GiB of memory for every 16 TB of data.
If you have less memory than required, the recommended option is to install more memory in the machine.
Otherwise you have to increase the blocksize option in the configuration file. The memory occupation is inversely proportional to the selected block size. For example, using "blocksize 512" instead of the default 256 halves the memory occupation. But take care that using a bigger block size will also increase the amount of wasted space in the parity files.
Note that running with memory swapped to disk is not recommended, because it causes a huge slowdown.
### Do I have to check the SMART attributes of my disks?
Yes! 20% of disks die before 4 years of life, as reported by Backblaze's How long do disk drives last?. By monitoring the SMART attributes you can anticipate failures and avoid losing data!
If you see one of the SMART attributes Reallocated_Sector_Ct (5), Reported_Uncorrect (187), Command_Timeout (188), Current_Pending_Sector (197) or Offline_Uncorrectable (198) different than zero, replace the disk, as recommended by Backblaze's Hard Drive SMART Stats.
Check also Google's Failure trends in a large disk drive population, where it is found that:
after the first reallocation, drives are over 14 times more likely to fail within 60 days than drives without reallocation counts.
after the first probational event, drives are 16 times more likely to fail within 60 days than drives with zero probational counts.
after the first offline reallocation, drives have over 21 times higher chances of failure within 60 days than drives without offline reallocations.
The paper also recommends replacing the disk after the first error in the SMART attributes Reallocated_Sector_Ct (5), Current_Pending_Sector (197) and Offline_Uncorrectable (198).
### Is it possible to use SnapRAID in a dual boot configuration, like Windows and Linux?
Yes. It's possible to use SnapRAID in a dual boot configuration. It doesn't matter which OS combination is used.
In such a case it's recommended to run SnapRAID only from one OS. In theory, it should also work if run in both OSes. But this scenario is not really tested, and there is always the risk of messing things up, like using different SnapRAID versions, configurations, content files, and so on.
## Configuration
### What's the SnapRAID 'content' file?
It's the file used by SnapRAID to save the list of all the files present in your array, with all the checksums, timestamps and any other information needed.
These files will be a few GiB in size, depending on how big your array is. Approximately, for 10 TB of data you'll need 500 MiB of content file.
### What are the SnapRAID 'parity' and 'N-parity' files?
They are the files used by SnapRAID to store the parity data used in recovering.
These files will grow to the size of the biggest amount of data stored on a single disk of the array.
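For orientation, here is a minimal sketch of how these files are declared in snapraid.conf (the disk names and mount points are hypothetical; adjust them to your own layout):
# Example snapraid.conf fragment (hypothetical paths)
parity /mnt/parity1/snapraid.parity
2-parity /mnt/parity2/snapraid.2-parity
content /var/snapraid/snapraid.content
content /mnt/disk1/snapraid.content
content /mnt/disk2/snapraid.content
disk d1 /mnt/disk1/
disk d2 /mnt/disk2/
exclude /lost+found/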
### How to configure SAMBA to share the Pool directory
To configure SAMBA to share the pool directory, for example /pool, add the following options to your /etc/samba/smb.conf:
# In the global section of smb.conf
unix extensions = no
# In the share section of smb.conf
[pool]
comment = Pool
path = /pool
guest ok = yes
### How to use more disks than drive letters in Windows?
You can use volume mount points: you can mount a disk in a directory without using a drive letter, just like in Linux.
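For example, assuming you created an empty directory C:\snapraid\disk1 on an NTFS volume to use as a mount point, the built-in mountvol command can attach a volume to it. Run mountvol with no arguments first to list the volume GUIDs; the GUID below is a placeholder:
mountvol C:\snapraid\disk1 \\?\Volume{xxxxxxxx-xxxx-xxxx-xxxx-xxxxxxxxxxxx}\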
## Use
### What are the most important things to do to prevent damages?
If you want to minimize the probability to lose data, do this:
• Before running the first "sync", check your RAM memory with a program like memtest86. Bad RAM is the most frequent cause of data loss when using SnapRAID!
• Frequently run the "sync" command. From once a day to once a week.
• Run the "scrub" command once a week.
• Use smartmontools or similar utility to monitor the SMART attributes. At the first Reallocated_Sector_Ct, Current_Pending_Sector or Offline_Uncorrectable, replace the disk, even if it still works.
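As a sketch of how to automate this on Linux (the times and the 8% scrub plan below are arbitrary example choices), entries like these in /etc/crontab would do:
# Sync every night at 03:00
0 3 * * * root snapraid sync
# Scrub 8% of the array every Sunday at 05:00
0 5 * * 0 root snapraid scrub -p 8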
### How can I add an additional data disk to an existing array?
To add a new data disk to the array, add a new "disk" option in the configuration file (see the example below), and then run a "sync" command:
snapraid sync
If the disk is empty, the "sync" command will be immediate.
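For example, with hypothetical disk names and mount points, the change amounts to one new line in the configuration file:
# Existing data disks
disk d1 /mnt/disk1/
disk d2 /mnt/disk2/
# Newly added data disk
disk d3 /mnt/disk3/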
### How can I remove a data disk from an existing array?
To remove a data disk from the array do:
• Change in the configuration file the related "disk" option to point to an empty directory
• Remove from the configuration file any "content" option pointing to such disk
• Run a "sync" command with the "-E, --force-empty" option:
snapraid sync -E
The "-E" option tells at SnapRAID to proceed even when detecting an empty disk.
• When the "sync" command terminates, remove the "disk" option from the configuration file.
Your array is now without any reference to the removed disk.
### How can I add, or remove, a parity disk to an existing array?
To add a new parity level, add the proper "N-parity" option in the configuration file, and then run the "sync" command, using the "-F, --force-full" option:
snapraid -F sync
The "-F" option tells at SnapRAID to recompute the full parity.
During the process you will be always protected because the existing parity is not modified (note that this happens only from version 11.0).
If you wish to remove a parity, you can simply remove the highest "N-parity" option from the configuration and then delete the parity file.
Take care that after removing a parity file, you cannot reuse it anymore, because it gets outdated after the next "sync" command.
### How can I verify a single data disk?
The "scrub" command verifies all the content of the array, and it's not possible to limit the verification to a single data disk. To verify only a data disk, you can run a "check" audit command limited to the disk you want to verify.
snapraid check -a -d DISK_NAME
This command will verify all the check-sums of the files on the specified disk. The other data and parity disks won't be read, and they won't be verified.
### How can I replace a data disk?
If you lost the disk, see the recovering section in the manual. Otherwise, copy all the files to the new disk, maintaining the same directory structure and names.
In Linux use the command:
cp -av /from_dir/. /to_dir
WARNING! If you want to use the 'rsync' or 'mc' commands, ensure that they are recent ones. 'rsync' must be at least version 3.1.0 (2013), and 'mc' at least version 4.8.19 (2017). Older versions are not able to copy the sub-second timestamps.
In Windows use the command:
robocopy F:\from_dir T:\to_dir /e /copyall
Then change the SnapRAID configuration to point the disk at the new mount point and run a "diff" command to ensure that everything is placed correctly.
snapraid diff
If "diff" reports some "added" or "removed" files likely you have copied files with a different directory structure. In such case you have to fix it, until "diff" reports only "equal", "restored" or "copied" files.
Now you can check that the files were really copied correctly, you can run a "check" audit command limited to the replaced disk to check the check-sum of all the files.
snapraid check -a -d DISK_NAME
Finally you can run "sync" to update the SnapRAID state. This command will be almost immediate.
snapraid sync
### How can I replace a parity disk?
If the disk is still accessible, first ensure you have a synced array by running a sync command:
snapraid sync
Then you can copy the big parity file to the new disk, and configure the new parity location in the configuration file.
In Linux a good copy command is ddrescue. You can use:
ddrescue /OLD_DISK/parity /NEW_DISK/parity /tmp/copy.log
This will copy the parity file from the old to the new location, and in case of errors, it will continue copying what can be copied. To retry the problematic sections, you can just repeat the same copy command.
Having a partial parity file is still beneficial, as it will still allow recovering data as long as the recovery uses the valid part. To restore it to full functionality, just run a "fix" command.
If instead you completely lost the parity file, you can run the "fix" command to recreate it, but ensure to use the -d option to fix only the interested parity and not the other disks:
snapraid fix -d PARITY_NAME
### How can I safely move files from one disk to another?
In case you need to move files from one data disk of the array to another disk also in the array, the 100% safe way to proceed is as follows (a shell sketch follows the list):
• Copy the files to the new location.
• Run a 'diff' command to verify that the just copied files are correctly identified as 'copy'.
• Run a 'sync' command.
• Delete the files from the original location.
• Run again a 'sync' command.
This process guarantees that even if disks die during the sync process, you will be able to recover everything. It also verifies that the hashes of all the moved files match the original ones.
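Putting the steps together as a shell sketch (the mount points and directory are hypothetical):
# 1. Copy the files to the new location
cp -av /mnt/disk1/movies/. /mnt/disk2/movies
# 2. Verify the copied files are identified as 'copy'
snapraid diff
# 3. Update the parity
snapraid sync
# 4. Delete the files from the original location
rm -r /mnt/disk1/movies
# 5. Sync again
snapraid sync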
### Can I defragment the data and parity disks?
Yes, you can defragment data and parity disks. Defragmentation doesn't affect SnapRAID.
### How can I make SnapRAID nicer with other processes?
In Linux, you can use the ionice command with Idle priority. Like:
ionice -c 3 snapraid sync
### How can I upgrade to a newer version of SnapRAID ?
To upgrade SnapRAID to a new version, just replace the old SnapRAID executable with the new one. The new SnapRAID will use your existing configuration, content and parity files.
From time to time the format of the content file changes, but a newer SnapRAID is always able to read all the old formats. So it's always possible to upgrade. If necessary, the content file format is updated the next time it's written.
Instead, it's generally not possible to downgrade to an old SnapRAID. You can try anyway, and if the old SnapRAID is not able to read the new content file, you will be advised by a proper message.
## Recover
### How can I undelete a just deleted file?
Simply run the "fix" command using a filter for the specified file. Like:
snapraid fix -f my_just_deleted_file
To undelete a directory use:
snapraid fix -f my_just_deleted_dir/
To undelete all the missing files use:
snapraid fix -m
### What happens if I add files and a disk breaks before I have updated the parity with "sync"?
All the files added to the broken disk after the last "sync" command are lost. All the other data is safe and can be recovered.
### What happens if I delete files and a disk breaks before I have updated the parity with "sync"?
In the worst case, any file deleted or modified on the non-broken disks may prevent recovering the same amount of data on the broken disk. For example, if you deleted 10 GB of data, you may not be able to recover 10 GB of data from the broken disk. The exact amount of data lost depends on how much the deleted and broken data overlap in the parity.
To reduce this problem you can use two parity disks. This greatly improves the chances of recovering the data.
A way to always be safe is to not delete files, but to move them to another directory on the same disk, outside the directory tree checked by SnapRAID. Then you can run the "sync" command, and really delete the files only after its completion. In case you get a disk failure during the "sync", you just have to run the "fix" command specifying where the deleted files are stored:
snapraid fix -i dir_of_deleted_files
### What happens if a disk breaks during a "sync"?
You are still able to recover data. In the worst case, you will be able to recover as much data as if the disk had broken before the "sync". But if the "sync" process has already run for some time, SnapRAID is able to use the partially synced data to recover more. To improve the recovery you can also use the "autosave" configuration option to save the intermediate content file during the sync process.
### What are and how to fix "Data errors" in "sync"?
Syncing...
309505: Data error for file d4/data/index.db at position 40960
WARNING! Unexpected data error in a data disk! The block is now marked as bad!
Try with 'snapraid -e fix' to recover!
Data errors happen when SnapRAID detects that the content of a file changed but the write time of the file didn't. This is a condition that normally should never happen, and it may be the result of a silent error in the disk, or a signal that something went wrong at the file-system level, like when you power off the machine without a proper shutdown.
If a data error is found, the "sync" process continues, and the erroneous blocks are marked as bad, to be fixed at the next "fix" command. To fix just the silent errors and not the whole array, you can use the "-e fix" command, which usually takes only a few seconds.
If you have errors in more than one file, it's also possible that something is not working at the hardware level. The first thing to try is to check your PC memory with an automated tool like memtest86. Other things to check are the disk cabling and the CPU temperature.
Another possibility is that you are using programs that change files without modifying the time-stamp of the file itself.
In Linux this is almost guaranteed by the operating system. The only exception is memory mapped files not yet closed.
In Windows it's a bit trickier. Even using normal writes, you have the guarantee of an updated time-stamp only after closing the file. See the MSDN WriteFile():
When writing to a file, the last write time is not fully updated until all handles used for writing have been closed. Therefore, to ensure an accurate last write time, close the file handle immediately after writing to the file.
When using memory mapped file, the program must ensure to update the time-stamp manually, because Windows doesn't do it automatically. See the MSDN CreateFileMapping():
When modifying a file through a mapped view, the last modification time-stamp may not be updated automatically. If required, the caller should use SetFileTime to set the time-stamp.
To avoid all these Windows problems, simply ensure to run "sync" when no program is using the files into the array.
### How can I store all the messages from a check or fix command in a file?
To generate a log file you can use the "-l" option.
snapraid check -l mylog.log
These log files can be really big, many GB. If they are too big to be opened by your editor, you can use HxD.
### How can I recover the snapraid.conf file
There is the dedicated -C, --gen-conf command that writes a template configuration file compatible with your content file. Note that this configuration requires some manual adjustments to set the disk mount points, but in the comments you'll find the required information to do so.
snapraid -C snapraid.content > snapraid.conf.new
## Tech
### If the SnapRAID "check" command says it's all OK. Is it really OK?
Yes. SnapRAID checks all your files with a 128 bit check-sum. If it says that they are correct, they really are.
### What integrity checksum is used?
SnapRAID uses the Murmur3 and SpookyHash 128 bit hashes. Murmur3 is the default on 32 bit platforms, SpookyHash is the default on 64 bit ones.
### Why are integrity checksums important?
Integrity check-sums are important to detect silent errors, when the HD returns garbage without reporting any error.
An introduction to this problem is in this article: Data corruption is worse than you know
A good analysis of this risk is in the paper: An Analysis of Data Corruption in the Storage Stack
A significant number (8% on average) of corruptions are detected during RAID reconstruction, creating the possibility of data loss. In this case, protection against double disk failures is necessary to prevent data loss.
### How can SnapRAID recover even if a disk breaks during a 'sync'?
Before the sync process starts, SnapRAID saves a special content file that contains a description of both the states before and after the sync. This allows SnapRAID to try two different recovery strategies.
For each block to recover, it first assumes that the parity was correctly computed, just as if the "sync" had completed successfully. If this recovery fails, it then assumes that the parity was not yet updated, ignoring any new file added to the array.
Also in this case, using a double parity helps to recover when there are overlapping changes in the array.
### Does SnapRAID allow to access data disks during a 'sync'?
Yes. You can safely read from data disks while a 'sync' is running.
You can also write, and if this interferes with the SnapRAID 'sync', the process will continue anyway, just skipping the written part.
This could obviously affect a potential recovery, but it's just as if you had written right after the 'sync' completed.
### Does SnapRAID retry reads or writes if a disk is not working?
No. SnapRAID never retries disk operations. It reports the error and then it tries to continue anyway. After a limit of 100 errors, it then stops the execution.
In these conditions, it's better to allow the user to manually try to save as much data as possible before the disk dies definitively. An automatic long sequence of retries would likely kill the disk. If you want to retry, you just have to repeat the command.
You can control the 100 errors limit using the "-L, --error-limit" option.
In the case of "fix" and "check" commands you can restart from the interrupted point using the "-s, --start" option specifying the block number that generated the error.
In the case of "sync" command, the latest state is anyway saved before termination. So, just restarting it will continue where stopped.
### What's the snapraid.content.lock file?
It's a file used by SnapRAID to check if another SnapRAID instance is already running. It's created where the first content file is placed.
### What's the notice about zero sub-second timestamp?
In the 'status' command you may now get a notice like:
You have XXXXX files with zero sub-second timestamp.
Run the 'touch' command to set it to a not zero value.
It means that a number of your files have a timestamp approximated to the second, while your filesystem can store a more precise timestamp, with sub-second precision.
This is not a problem by itself, but it could limit SnapRAID's effectiveness in detecting moved or copied files between different disks.
You have the option to set that sub-second timestamp to a random value using the new 'touch' command. Note that the seconds part of the timestamp is not modified.
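The command in question is simply:
snapraid touch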
## Performance
The most important factor is to connect all the disks using (E)SATA connections, to avoid slowdowns caused by the controller. Avoid any kind of USB disks.
The second most important factor is to use disks with similar performance, because the read/write speed is limited by the slowest disk of the array. After a sync operation, short stats are printed with the wait times for each disk, which can help to identify a bottleneck caused by a slow disk.
Lastly, a fast CPU can improve the speed when you have a lot of disks. You can detect whether the CPU is a limiting factor by checking the CPU usage during sync operations.
The default blocksize should be already the best one.
Note that increasing it doesn't necessarily improve the performance. In some cases, it could decrease it.
The default Linux read-ahead size of 128 KiB already ensures the best performance for the default SnapRAID block size of 256 KiB. If you use a bigger block size, you should configure the read-ahead size to be at least equal to half of the SnapRAID block size.
For example, if you want to use a SnapRAID block size of 512 KiB, it's recommended to configure a read-ahead size of 256 KiB for sdX using:
echo 256 > /sys/block/sdX/queue/read_ahead_kb
Note that if you want to use a SnapRAID block size smaller than 256 KiB, it's recommended not to configure a specific read-ahead size, and to just use the default value.
### How fast are the hash and parity computation?
The Murmur3 hash is very fast on 32 bits; SpookyHash is even faster on 64 bits. The speed of parity computation is always good up to double parity, even on architectures without SSE support. Beyond double parity, a CPU with SSSE3 support is recommended. For low end CPUs without SSE, there is a special, but incompatible, triple parity implementation that also works well without SSE. See 'z-parity' in the manual for more info.
The following are the result on a Core i5-4670K @ 3.4 GHz. You can check your values with the -T option.
root@redstar:/root# snapraid -T
snapraid v7.0 by Andrea Mazzoleni, http://www.snapraid.it
Compiler gcc 4.8.1
CPU GenuineIntel, family 6, model 60, flags mmx sse2 ssse3 sse42 avx2
Memory is little-endian 64-bit
Support nanosecond timestamps with futimens()
Speed test using 8 data buffers of 262144 bytes, for a total of 2048 KiB.
Memory blocks have a displacement of 1792 bytes to improve cache performance.
The reported values are the aggregate bandwidth of all data blocks in MiB/s,
not counting parity blocks.
Memory write speed using the C memset() function:
memset 22233
CRC used to check the content file integrity:
table 1261
intel 9306
Hash used to check the data blocks integrity:
best murmur3 spooky2
hash spooky2 4715 14684
RAID functions used for computing the parity with 'sync':
best int8 int32 int64 sse2 sse2e ssse3 ssse3e avx2 avx2e
gen1 avx2 13339 25438 45438 50588
gen2 avx2 4115 6514 19441 21840 32201
genz avx2e 2337 2874 9803 10920 18944
gen3 avx2e 814 8934 10154 18613
gen4 avx2e 620 6204 7569 14229
gen5 avx2e 496 4777 5149 10051
gen6 avx2e 413 3846 4239 8190
RAID functions used for recovering with 'fix':
best int8 ssse3 avx2
rec1 avx2 1158 2916 3019
rec2 avx2 517 1220 1633
rec3 avx2 110 611 951
rec4 avx2 71 395 631
rec5 avx2 49 264 421
rec6 avx2 36 194 316
## Known Problems
### Why does my machine reboot or crash while SnapRAID is running?
SnapRAID pushes the system to extreme conditions, and if the machine has a latent problem, it could result in an unexpected system crash or reboot.
Even if the problem happens only when SnapRAID is running, it's without any doubt a hardware or driver issue because SnapRAID is just a standard user application that cannot reboot or crash the system.
Possible things to check are:
• Check your system memory with memtest86.
• Ensure that your power supply can sustain all the HDs spinning at the same time.
• Check that the temperature of your system is not too high.
• Update any system driver related to storage.
• Check in the system log or Event Viewer if there is any hint of the possible issue.
### Why in Linux does SnapRAID abort without any error message?
Likely you have too little free memory, and the Linux OOM (out-of-memory) killer terminated SnapRAID because it was using too much memory. You can use the Linux 'free' command to check how much free memory you have.
free -m
If it's too small, you have to install more memory in the machine.
### Why in 'sync' do I get the error 'Failed to grow parity file 'xxx' to size xxx due lack of space.'?
This means that SnapRAID needs to grow the parity file to a size that cannot be contained in the parity disk.
The first thing to check is that no other data is stored on the parity disk. Leave it only for the parity file. Otherwise you have to move the files mentioned in the output to another data disk, to reduce the size of the needed parity.
If you are using Linux, an alternate approach is to reformat the parity disk using specific options to increase the available space, like:
mkfs.ext4 -m 0 -T largefile4 DEVICE
The '-T largefile4' and '-m 0' options should give more space available for the parity file.
### Why do I get the error 'Internal inode xxx inconsistency...'?
This means that SnapRAID found two different files using the same inode. This is theoretically something that should never happen, but in some specific conditions it is possible.
First ensure that the mentioned files were not modified while SnapRAID was running. If files are renamed, deleted or recreated while SnapRAID is running, it's possible that inodes are reused, causing this condition.
If you are using a FAT file-system in Windows, try to convert it to NTFS. Theoretically inodes (FileIndex) are unique even in FAT, but this condition has already been seen in the field.
### Why with .xls files do I get the error 'Data change at file...'?
Unfortunately the Excel application does dirty tricks when handling .xls files, which may cause the following data error:
Data change at file '/xxx/xxx.xls' at position '0'
WARNING! Unexpected data modification of a file without parity!
This happens because Excel modifies the files after opening them, even if you don't press 'Save'. It stores the name of the last user that opened the file, to be able to report if another user tries to open the same file at the same time. To avoid having the file flagged as modified, it restores the time-stamp to its original value. But the file was modified, and SnapRAID reports it.
To workaround the problem you can see How to prevent Excel from modifying the file on exit?, or you can switch to the .xlsx format.
### Why are VeraCrypt containers never saved?
VeraCrypt (a fork of TrueCrypt) by default enables the option Preserve modification time-stamp of file containers, which makes it impossible for SnapRAID, and for other backup programs, to detect that a file container has changed. Ensure to disable this option in VeraCrypt.
http://mathhelpforum.com/algebra/100267-quadratic.html
hi can someone please show me how to find the value of x which gives the minimum value of 0 in $(3x^2 - 12x + 5)^2$? i don't know the method you would use to get the answer
thanks, Mark
2. Originally Posted by mark
hi can someone please show me how to find the value of x which gives the minimum value of 0 in $(3x^2 - 12x + 5)^2$? i don't know the method you would use to get the answer
thanks, Mark
square 3x then subtract 12x from 3x=
-3x +5 then you square that
and you should get 9x + 25= 0
thats all i got.
3. i'm not sure thats the right answer. the book says $x = 2 \pm \sqrt {\frac{7}{3}}$ would you be able to get it from that? (the 7/3 is meant to be in one big bracket under the root sign but i don't know how to write that out)
4. Originally Posted by mark
hi can someone please show me how to find the value of x which gives the minimum value of 0 in $(3x^2 - 12x + 5)^2$? i don't know the method you would use to get the answer
thanks, Mark
Solve $3x^2 - 12x + 5 = 0$.
A variety of methods are possible, including using the quadratic formula.
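For completeness, a worked sketch (my addition, not from the original thread): the square is zero exactly when the inner quadratic is zero, so the quadratic formula gives $x = \frac{12 \pm \sqrt{144 - 60}}{6} = 2 \pm \frac{\sqrt{84}}{6} = 2 \pm \sqrt{\frac{84}{36}} = 2 \pm \sqrt{\frac{7}{3}}$, which matches the book's answer.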
Edit: duplicate question: http://www.mathhelpforum.com/math-he...tml#post358224. Thread closed.
https://chemistry.stackexchange.com/questions/89449/understanding-the-increase-in-ph-of-a-buffer-solution-upon-incremental-additions
# Understanding the increase in pH of a buffer solution upon incremental additions of NaOH analytically
So let's say we have $200$ mL of $1$ M $CH_3COOH$ solution. In this solution we have the equilibrium $CH_3COOH \rightleftharpoons CH_3COO^- + H^+$. To that we add $100$ mL of $1$ M $NaOH$ solution. After reacting we get a buffer solution where $[CH_3COOH] = [CH_3COO^-] = \frac{1}{3}\ \text{mol dm}^{-3}$. In moles, we have $0.1$ moles each of $CH_3COOH$ and $CH_3COO^-$.
Then suppose we add $x$ mL of $0.1$M NaOH. This is, in effect, reacting $0.1$ moles of $CH_3COOH$ with $0.0001x$ moles of $NaOH$, the result of which is that now $n(CH_3COOH) = 0.1 - 0.0001x$ and $n(CH_3COO^-) = 0.1 + 0.0001x$.
Could we not then model, by the Henderson-Hasselbalch equation, $pH = pK_a + \log(\frac{0.1 +0.0001x}{0.1 - 0.0001x}) = 4.76 +\log(\frac{0.1 +0.0001x}{0.1 - 0.0001x})$?
But this would mean that for the pH of the buffer solution to rise by $1$, we would need $818.82 cm^3$ of $0.1$M NaOH solution, which sounds absurd. Moreover, for different acids, like propanoic acid and butanoic acid, the same line of logic could be used to deduce that the volume required to increase the pH by 1 would be the same for all, but they have different buffering capacities. So what's wrong with the logic?
I just answered what I thought was this question but realized that it was essentially the same question asked 5 years ago. So here's that answer modified for the slight difference in the way it is asked here:
We need to know the pH of the buffer, i.e. what it measures before any NaOH is added. Later we will assume that it is 4.76, the pK of acetic acid, so $[\ce{HAc}] = [\ce{Ac^-}]$ ([·] symbolizes the molar concentration of ·). In order to solve the general problem we need to be able to calculate the fraction of the total Ac that is dissociated and the fraction that isn't.
The fraction that is dissociated comes right out of the Henderson-Hasselbalch equation and is $$f_1 = r_1/(1 + r_1)$$ where $$r_1 = 10^{(pH - pK_1)}= [\ce{Ac^-}]/[\ce{HAc}]$$ as is clear from inspection of the Henderson-Hasselbalch equation. The subscript 1 indicates that $r_1$ is the ratio of the number of acid ions that have lost 1 proton to the number that have lost $1 - 1 = 0$. In $f_1$ the subscript indicates that $f_1$ is the fraction of the total Ac molecules that has become singly charged by loss of a single proton. When dealing with monoprotic acetic acid the subscripts aren't that important, as there is only one proton to lose. But if the acid is polyprotic we have unionized, once ionized, twice ionized etc. ions to consider. The fractions of those ions are $f_0$, $f_1$, $f_2$... and we have other ratios as well
$$r_j = 10^{(pH - pK_j)} = [H_{n-j}Ac^{-j}]/[H_{n-j+1}Ac^{-(j-1)}]$$ Here "Ac" stands for the acid anion and n is the number of protons it can yield when fully dissociated. For acetic acid, $\ce{CH_3COOH}$, Ac is $\ce{CH_3COO}$, n = 1 and j only has values of 0 or 1. With phosphoric acid, $\ce{H_3(PO_4)}$, Ac is $\ce{(PO_4)}$, n = 3 and j = 0, 1, 2 or 3.
So suppose now we have a polyprotic acid with values for $r_1$, $r_2$, $r_3$... and that there are x moles of the acid in a solution. Then there would be $xr_1$ moles of the singly deprotonated species, $xr_1r_2$ moles of the doubly deprotonated, $xr_1r_2r_3$ moles of the triply deprotonated and so on.Then the total number of moles of Ac would be the sum of the number of moles of each$$C_{Ac} = x + xr_1 +xr_1r_2 +xr_1r_2r_3...$$ The fraction of the total that is undissociated is $$f_0 = x/(x + xr_1 +xr_1r_2 +xr_1r_2r_3...) = 1/(1 + r_1 +r_1r_2 + r_1r_2r_3...)$$ The fraction that is singly dissociated is $r_1$ times this $$f_1 = r_1f_0$$ and the fraction that is doubly dissociated is $r_2$ times that $$f_2 = r_2f_1$$ and, in general $$f_j = r_jf_{(j-1)}$$
At this point let's remember that $f_j$ is a function of the solution pH and all the pK's of the acid in question and that it is the fraction of the anions of that acid that carry charge -j. Thus we can write an expression for the total charge on all species of Ac at a given pH. This is $$Q_{Ac} = -C_{Ac}(0f_0 + 1f_1 + 2f_2 + ...)$$
Before going on to show you how Q solves buffering problems, let me stop to suggest that the simplest way to work with it is to make an Excel (or other) spreadsheet. Designate a column for pKs and a cell into which the pH goes. For example, put pH into cell A1 and start the pKs list in A2. Then in B2 put =10^($A$1 - A2). Using $A$1 lets you copy and paste B2 into as many cells as you have pKs. The B column now contains the r corresponding to the pK in the cell to its left. Now enter the formula for $f_0$ in a cell and make another column with the $f_j = r_jf_{(j-1)}$ formula in it.
How many pKs? I say make the spreadsheet for 5 or 6. Why? Well, let's go back to $\ce{CH_3COOH}$ for a minute. It doesn't have 1 proton to give, as we have been assuming. It actually has 4. Are the other three ever coming off? Not with any base in my lab, but we can model those other protons simply by assigning pKs that are so high (say 50) that the f values for anything other than $f_0$ or $f_1$ are 0. The point being that if you are going to go to the trouble to make the spreadsheet, you might as well make it big enough to handle any acid you may ever encounter, as it easily handles anything up to its maximum size using this trick.
Now how to use $Q(pH)$. If there are a total of $C_{Ac}$ moles of Ac in a solution the negative charge on them at $pH_0$ is $C_{Ac}Q(pH_0)$. At $pH_1$ it is $C_{Ac}Q(pH_1)$. Thus to move the solution from $pH_0$ to $pH_1$ you must supply or remove charge of $$\Delta Q_{Ac}(pH_0\ce{->}pH_1) = C_{Ac}Q(pH_1) - C_{Ac}Q(pH_0)$$ by adding or absorbing protons. If $pH_0 > pH_1$ then $Q(pH_1) > Q(pH_0)$ (less negative) and so the difference will be positive indicating that protons (acid) will need to be added to effect this pH shift.
The acid species in the solution are not the only thing that emits or absorbs protons when pH changes. The solvent does too.
$$\Delta Q_{W}(pH_0\ce{->}pH_1) = 10^{-pH_1} - 10^{-pH_0} + (10^{(pH_0 - pK_w)} - 10^{(pH_1 - pK_w)})$$ represents the number of protons that must be supplied (or absorbed) to change the pH of water from $pH_0$ to $pH_1$.
Now let's use this to solve the original question. We have 300 mL of pH 4.76 buffer with $C_{Ac}$ = 0.2 mol (0.1 mol of HAc and 0.1 mol of $\ce{Ac^-}$). This implies that $f_0 = f_1 = 0.5$. Also $$\Delta Q_{Ac}(pH_0\ce{->}pH_1) = 0.2(Q(pH_1) - Q(4.76))$$ $$\Delta Q_{W}(pH_0\ce{->}pH_1) = 10^{-pH_1} - 10^{-4.76} + (10^{(4.76 - pK_w)} - 10^{(pH_1 - pK_w)})$$
Apparently we want to know how much NaOH we would need to raise the pH to 5.76. With our formulas in a simple Excel spreadsheet and using 5.76 for $pH_1$, we quickly find that $\Delta Q_{Ac}(4.76\ce{->}5.76) = -0.0818$ mol. IOW protons must be absorbed. At the same time we find $\Delta Q_{W}(4.76\ce{->}5.76) = -1.56\times10^{-5}$ mol per liter. We only have 0.3 L, so the protons to be removed to adjust the water are insignificant. We apparently need 81.8 mL of 1 M NaOH. Note that your digits are pretty similar, so one of us is off by a power of 10. As 0.8 mol of OH- to shift a 0.2 mol buffer is a lot, I think it's you. As we both used Henderson-Hasselbalch, we had better get the same answer.
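As a quick sanity check of that figure (my arithmetic, not part of the original answer): at $pH_1 = 5.76$ we have $r_1 = 10^{5.76-4.76} = 10$ and $f_1 = 10/11 \approx 0.909$, so $$\Delta Q_{Ac} = -0.2\left(\frac{10}{11} - \frac{1}{2}\right) \approx -0.0818\ \text{mol},$$ i.e. about $0.0818$ mol of $\ce{OH^-}$, which is indeed $81.8$ mL of 1 M NaOH.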
So why would you do what I'm suggesting instead of what you are doing (assuming you find the factor of 10). If you do a lot of problems like this it makes cranking them out a snap.
Now this post has gone on long enough so I won't have space to tell you about extension of this technique to much more complicated systems. You can use it to find the pH of complicated mixes of weak and strong acids and bases or any materials which have buffering capacity. I ginned it up (this is just using the 'proton condition' to solve a complex system - I didn't invent it) to predict the pH of brewer's mash which is a mixture of weak acid (bicarbonate) and several malts (each of which has its own buffering properties).
https://www.esaral.com/q/the-quantities-49853
# The quantities
Question:
The quantities $x=\frac{1}{\sqrt{\mu_{0} \epsilon_{0}}}$, $y=\frac{E}{B}$ and $z=\frac{l}{\mathrm{CR}}$ are defined, where $\mathrm{C}$ is capacitance,
$\mathrm{R}$ resistance, $l$ length, $\mathrm{E}$ electric field, $\mathrm{B}$ magnetic field, and $\epsilon_{0}, \mu_{0}$ the free space permittivity and permeability respectively. Then:
1. Only $x$ and $y$ have the same dimension
2. $x, y$ and $z$ have the same dimension
3. Only $x$ and $z$ have the same dimension
4. Only y and z have the same dimension
Correct Option: 2
Solution:
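A standard dimensional check (sketched here) confirms option 2: all three quantities have the dimensions of velocity.
$$x=\frac{1}{\sqrt{\mu_{0} \epsilon_{0}}}=c, \qquad y=\frac{E}{B}=v \ \ (\text{since } qE = qvB \text{ for an undeflected charge}), \qquad z=\frac{l}{CR}=\frac{\text{length}}{\text{time}} \ \ (RC \text{ has the dimension of time}),$$
so $[x]=[y]=[z]=\mathrm{LT^{-1}}$.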
https://www.physicsforums.com/threads/doubts-of-a-13-years-old-boy-about-simultaneity-time-dilation-spacetime-behavior.540302/
# Doubts of a 13-year-old boy about simultaneity, time dilation, spacetime behavior
1. Oct 14, 2011
### human83
Hi everyone!
I'm really sorry to bother you with my surely stupid questions, but I am 13 years old and today I read some lines about Special Relativity and General Relativity at school and found myself stuck on some thoughts I can't cease to think about... I really hope that you will be so kind to help me, because I'm not clever enough to solve these things out by myself.
(1) I read that every object which has mass curves space-time (though maybe just infinitesimally). (a) Is that right? (b) Does energy (which I think is massless) curve space-time too, in the same way? (c) It seems to me that everything which exists alters space-time in some way... if we can calculate the space-time metric starting from the properties of an object, can we deduce the properties of an object knowing the metric of the region of space-time it occupies? (d) Is it legitimate to suppose a complete identity between objects and the regions of space-time they occupy? (e) If I remove all the objects in the universe, does space-time continue to exist? (f) What exactly is a "frame of reference"?
(2) I read about time dilation, length contraction and mass increase. I imagine a spaceship, say 1 km long, which is traveling at a speed where relativistic effects are manifest. (a) Why does time slow down in the spaceship? (b) Is the length contraction of the spaceship a "real" phenomenon (the ship is really shorter while traveling than when it was orbiting around Earth) or simply an optical effect seen by an observer outside (the frame of reference of) the ship?
(3) Consider again the starship above and suppose that there are two men, one in the bow of the ship and one in the stern. They both have a nuclear clock (that's to say, a really precise clock), and the clocks have been perfectly synchronized (if that is possible). (a) Will they find a difference of time between them (though a small one) after a certain period of time, even though they are on the same spaceship? (b) Is there a formula to calculate this difference of time? ('cause I would like to make some calculations on different scenarios)
(4) I read that relativistic effects become significant only near the speed of light. Does this mean that there are relativistic effects at any speed but they are irrelevant and/or undetectable, or that they just don't happen (don't exist) below certain speeds? I was thinking about the most stupid experiment in history: suppose we have a huge platform, 100,000 km long (or more if needed) and 20,000 km wide, with a mass of 1,000 kg. At each of the two ends of the platform we put a clock capable of measuring even the smallest interval of time (Planck time, I think). The platform is stationary in space. (a) After a period of time (arbitrarily long), am I going to detect a tiny, infinitesimal difference in the time measured by the two clocks? (b) If so, which formula might I use to calculate that difference?
I know I am really stupid and ignorant, and I apologize if I'm wasting your time with these questions which are probably really trivial for you... but, as I told you, I am 13 and since I know nothing about physics I thought to ask here instead of killing my curiosity. I also apologize for my terrible English... I'm from Switzerland :)
Thanks for having had the patience to read my silly questions and thanks in advance to those who will have the kindness to descend to my level and help me understand. Thank you very much!
2. Oct 14, 2011
### Nabeshin
Hi and welcome to PF! Impressive questions for a thirteen year old!
Let's see...
These are both correct. Mass and energy both contribute to the warping of spacetime, but so do less obvious things such as pressure, shearing, and momentum!
Yes! To understand this, one must understand Einstein's equations. Now, these equations are quite complicated but can actually be written in an extremely compact way:
G=T
(Ignoring some factors of pi).
G is related to the curvature of your spacetime surface and T is related to the mass/energy/momentum distribution. Now, normally we think about the equation in the order I wrote it, G=T, implying (just by normal maths conventions) that the curvature of a surface is a function of the mass/energy distribution. With this approach, we know what the mass distribution is (say, the earth) and we can calculate what spacetime looks like around it. However, we could equally well go the other way and say T=G! That is, if we knew what spacetime looked like, we could deduce the mass/energy/momentum distribution! Spacetime might seem like a strange thing to measure, but if we imagine shooting a bunch of little particles out, their motion will tell us about the curvature of spacetime!
It is important to note though that all the information you will get from this procedure is the mass/energy/momentum/etc. distribution. I.e. you will not be able to tell if a ball causing a gravitational field is red, blue, or polka dot colored.
As I've said, different objects can create the same spacetime curvature, so it doesn't make sense to identify specific objects with geometries.
This is a rather philosophical question, akin to 'if a tree falls in a forest...'. As such, I don't really think it's important to dwell on here.
A frame of reference is simply the coordinate system attached to a particular observer. So I, sitting here in my computer chair, can set up a coordinate system and measure the positions of various objects and the durations between events. Similarly, someone running past me has their own reference frame (in which they are stationary!) and can perform the same measurements.
There are some nuances here, but I think for most cases the above understanding is sufficient.
The short answer is because light must move at the same speed in all reference frames. The way I generally think about it is in accord with the explanation given on wikipedia:
http://en.wikipedia.org/wiki/Time_d...nce_of_time_dilation_due_to_relative_velocity
Length contraction certainly is 'real', in that it is not merely an optical illusion. I'm not entirely sure what is meant by 'real' here though.
No! There will be no difference in the clocks of these two men. In their reference frame, neither of them is moving, and as such neither is subject to time dilation effects of special relativity.
We can consider the more complicated case of an accelerating spaceship though. One of Einstein's motivations for general relativity was the equivalence principle, which stated simply is the notion that a gravitational field is indistinguishable from an accelerating reference frame on small enough length and time scales. If you imagine sitting in a box on the surface of the Earth, you will feel a force pulling you downwards with precisely the normal gravitational force on the earth. Now, imagine that you are in the same box, but far away from any gravitational source, but instead you have attached rocket boosters to the box and are firing them so as to accelerate at one earth gravity. Again, you will feel a force pulling you downwards, and assuming the box has no windows or anything, you will be unable to tell whether you are in a box on the Earth or a box in the middle of nowhere!
So just as a clock slows down in a gravitational field (well, a gravitational potential ;), a clock will similarly slow down in an accelerating rocket ship!
To return to the spaceship, even if it is accelerating the two men will not notice a change in their clocks! This is because they are both accelerating at precisely the same rate. Well, that's curious... By the equivalence principle, imagine the rocket to be just sitting on the launch pad on Earth. Well, the guy at one end of the ship will be higher up than the other, so the gravitational force on him will be less! As such, his clock will run at a slightly faster rate than the man in the lower part of the ship! Have we broken the equivalence principle?! Not really.
The curious statement 'on small enough length and time scales' appeared in the equivalence principle. This is precisely to handle the case above. In the event that a gravitational field is non-uniform, the equivalence principle no longer holds! So, what we do is we zoom in on a really small space over which the gravitational field is approximately uniform, and again we recover the equivalence.
Just as a note while I'm on the subject, the experiment of putting a clock higher up in a gravitational field and one lower down has been performed and is known as the Pound-Rebka experiment after the scientists who first conducted it in 1959. They had clocks on different stories of a building, but we can now measure the variation in the rate of a clock between basically the bottom and top of your desk!
Yes indeed relativistic effects are only important at high speeds. The relevant parameter for relativistic effects is known as the lorentz factor:
$$\gamma = \frac{1}{\sqrt{1-v^2/c^2}}$$
Where v is the velocity you're moving at and c is the speed of light. If you plug in some velocities you'll see that for normal speeds, this is ridiculously close to one. Things begin to get relativistic when it deviates significantly from one. As the velocity approaches c, this goes to infinity!
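Here's a minimal Python sketch (my example speeds are arbitrary; c is the SI value) illustrating how slowly the factor departs from one:

```python
import math

C = 299_792_458.0  # speed of light, m/s

def lorentz_factor(v):
    """gamma = 1 / sqrt(1 - v^2/c^2)"""
    return 1.0 / math.sqrt(1.0 - (v / C) ** 2)

# A car, a fast spacecraft, half the speed of light, and 99% of it.
for v in (30.0, 3.0e5, 0.5 * C, 0.99 * C):
    print(f"v = {v:10.3e} m/s  ->  gamma = {lorentz_factor(v):.9f}")
```

For everyday speeds the factor is indistinguishable from 1; at 0.99c it is already about 7.09.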
With regards to your very large platform, again there would be no time difference measured. The clocks are attached to the same rigid object and are thus moving with the same velocity.
Phew. That was a bit longer than I expected! Let me know if anything doesn't make sense or if you have more questions!
3. Oct 14, 2011
### phinds
Re: Doubts of a 13 years old boy about simultaneity, time dilation, spacetime behavio
One thing I would add to the above (which is quite a good set of answers) is that time dilation is very weird in one way and that is that the person who travels very fast and comes back has aged less than the person who stayed home BUT to him it did not SEEM as though time was any different than normal. You can read all about this if you google "the twin paradox"
4. Oct 14, 2011
### human83
Re: Doubts of a 13 years old boy about simultaneity, time dilation, spacetime behavio
Nabeshin, I feel I owe you a lot for the time you spent answering my question, the plain language you used, the knowledge you kindly shared, and the kindness you showed me by not making me feel a complete idiot :) Thank you very much, really. I think I will ruminate a lot on your clear explanations, but I'm really happy you corrected many of my terrible misunderstandings and mistakes. Please consider becoming a teacher one day, you would be a wonderful one! :)
Thanks also to you Phinds, I vaguely heard about "The twin paradox" but didn't know anything about. It seems a really weird and intriguing phenomenon I would like to study!
Again... Thank you both!
5. Oct 15, 2011
### Quinzio
Re: Doubts of a 13 years old boy about simultaneity, time dilation, spacetime behavio
Well, you should definitely stop labelling your questions as stupid or idiotic. It's quite impressive that a 13 yo boy is managing to explore the details of an unusual and unfamiliar theory such as relativity, although I remember when I was 15 I was already reading some heavy stuff like Bertrand Russell's books. :)
In addition it's remarkable that you are able to write in good English, given you are in Switzerland and, as far as I know, they speak German, French and Italian there, but not English. I'm 3 times your age and I still struggle to improve my English. I'm Italian and I've been to Svizzera a few times.
Coming to your questions:
(2) I read about time dilation, length contraction and mass increase. I imagine a spaceship, 1 km long say, which is traveling at a speed where relativistic effects are manifest. (a) Why does time slow down in the spaceship? (b) Is the length contraction of the spaceship a "real" phenomenon (the ship is really shorter while traveling than when it was orbiting around Earth), or simply an optical effect seen by an observer outside (of the frame of reference) the ship?
To point a), you should look at material like this, which explains the concepts in a relatively easy way.
As for point b), yes, length contraction is "really" real, although no one will ever experience it. Only experiments and distant measurements can reveal it, and as far as I know, no experiment has measured it yet. More food for your thoughts: http://en.wikipedia.org/wiki/Ladder_paradox
Greetings
Last edited by a moderator: Sep 25, 2014
6. Oct 16, 2011
### yoron
Re: Doubts of a 13 years old boy about simultaneity, time dilation, spacetime behavio
Very sweet thinking human :)
And interesting answers. I would just like to add one thing: when it comes to measuring something in uniform motion, like with clocks, there will always be a local 'gravity', as long as we are discussing something with mass at least. And in tests of atomic clocks on Earth it has been shown that they will show a difference relative to gravity, even at scales as small as a meter.
And so I will assume that if we had really precise 'perfect clocks', the approximate 'length of difference' between two points should be somewhere around a Planck length, as a guess and based on our definitions of Planck time, 'c', and Planck length. Theoretically we ignore invariant mass in uniform motion, assuming 'point particles' or infinitely small 'flat' patches in SR, as I see it, but in reality I don't think we can ignore 'gravity' anywhere.
And that makes another definition of what a 'frame of reference' could be seen as.
|
2017-09-25 19:27:13
|
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 1, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.6900166273117065, "perplexity": 453.34100638464366}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.3, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2017-39/segments/1505818693240.90/warc/CC-MAIN-20170925182814-20170925202814-00076.warc.gz"}
|
https://socratic.org/questions/what-is-the-y-intercept-for-a-line-with-point-5-3-slope-5
|
# What is the y intercept for a line with point (5,-3) slope 5?
Sep 27, 2015
Use the linear equation $y = m x + b$
#### Explanation:
The general equation for a linear line is:
$y = m x + b$
Next, substitute the values for x, y and m into the above equation so that you can solve for the y-intercept (b)
$- 3 = \left(5\right) \left(5\right) + b$
$- 3 = 25 + b$
$b = - 28$
So, the y-intercept is at -28
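As a quick check, here is the same rearrangement, $b = y - mx$, in a couple of lines of Python (the helper name is just for illustration):

```python
def y_intercept(x, y, m):
    # from y = m*x + b, solve for b
    return y - m * x

print(y_intercept(5, -3, 5))  # -28
```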
Hope that helps
|
2022-01-27 01:57:08
|
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 5, "mathjax_inline_tex": 1, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.7524150609970093, "perplexity": 1028.2864573250413}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2022-05/segments/1642320305052.56/warc/CC-MAIN-20220127012750-20220127042750-00206.warc.gz"}
|
https://math.stackexchange.com/questions/2822161/is-it-true-that-for-any-n-times-n-matrix-a-q-where-q-is-invertible
|
# Is it true that for any $n \times n$ matrices $A$ and $Q$, where $Q$ is invertible, $(Q^{-1}AQ)^k = Q^{-1}A^kQ$?
I was trying to solve a problem that asks me to show $p(Q^{-1}AQ) = Q^{-1}p(A)Q$ where $p(t)$ is an arbitrary polynomial $a_nt^n+...+a_1t + a_0$. I am wondering whether it is true that $(Q^{-1}AQ)^k = Q^{-1}A^kQ$ since if this is true then the problem can be solved easily. I cannot seem to prove it since I am not quite sure how to deal with a product of matrices raised to a power. If it's not true then is there any other way to solve the problem?
• $A^2 = AA$, $A^3 = AAA$, etc. Try writing your product out for a few values of $k$ and see what happens. – Kaynex Jun 17 '18 at 0:49
Suppose it is true for $n$: then $(QAQ^{-1})^{n+1}=(QAQ^{-1})^n(QAQ^{-1})=QA^nQ^{-1}QAQ^{-1}=QA^{n+1}Q^{-1}$, and since the base case $n=1$ is immediate, the proof follows by induction.
• for the second equation the $(QAQ)$ term, is it supposed to be $(QAQ^{-1})$? – PsychoCom Jun 17 '18 at 1:02
You can just expand the power $k$ and write $(Q^{-1}AQ)^{k} = (Q^{-1}AQ)(Q^{-1}AQ)\cdots(Q^{-1}AQ)$, where $Q^{-1}AQ$ appears $k$ times. The result follows immediately once all the inner $QQ^{-1}$ factors cancel to the identity.
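A quick numerical sanity check of the identity (a random NumPy example, not a proof):

```python
import numpy as np

rng = np.random.default_rng(0)
n, k = 4, 5
A = rng.standard_normal((n, n))
Q = rng.standard_normal((n, n))   # a random matrix is invertible with probability 1
Qinv = np.linalg.inv(Q)

lhs = np.linalg.matrix_power(Qinv @ A @ Q, k)
rhs = Qinv @ np.linalg.matrix_power(A, k) @ Q
print(np.allclose(lhs, rhs))      # True, up to floating-point error
```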
|
2019-09-21 15:55:42
|
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.8759782910346985, "perplexity": 68.2396504324331}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.3, "absolute_threshold": 10, "end_threshold": 5, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2019-39/segments/1568514574532.44/warc/CC-MAIN-20190921145904-20190921171904-00119.warc.gz"}
|
https://stacks.math.columbia.edu/tag/01L8
|
Lemma 25.23.9. Let $f : X \to S$ be a separated morphism. Any locally closed subscheme $Z \subset X$ is separated over $S$.
Proof. Follows from Lemma 25.23.8 and the fact that a composition of separated morphisms is separated (Lemma 25.21.12). $\square$
|
2019-05-20 06:59:55
|
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 0, "mathjax_asciimath": 1, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 2, "x-ck12": 0, "texerror": 0, "math_score": 0.9818152785301208, "perplexity": 536.6174142663616}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2019-22/segments/1558232255773.51/warc/CC-MAIN-20190520061847-20190520083847-00084.warc.gz"}
|
https://www.gradesaver.com/textbooks/science/physics/physics-for-scientists-and-engineers-a-strategic-approach-with-modern-physics-4th-edition/chapter-10-interactions-and-potential-energy-exercises-and-problems-page-258/51
|
## Physics for Scientists and Engineers: A Strategic Approach with Modern Physics (4th Edition)
(a) $v = 1.7~m/s$ (b) If we round off, the package barely makes it onto the truck with a final kinetic energy of zero.
(a) The sum of the kinetic energy and the potential energy on the truck will be equal to the initial energy stored in the spring:
$KE+PE=U_s$
$KE = U_s-PE$
$\frac{1}{2}mv^2 = \frac{1}{2}kx^2-mgh$
$v^2 = \frac{kx^2-2mgh}{m}$
$v = \sqrt{\frac{kx^2-2mgh}{m}}$
$v = \sqrt{\frac{(500~N/m)(0.30~m)^2-(2)(2.0~kg)(9.80~m/s^2)(1.0~m)}{2.0~kg}}$
$v = 1.7~m/s$
(b) We can find the work that the sticky spot does on the package:
$W_f = -mg~\mu_k~d$
$W_f = -(2.0~kg)(9.80~m/s^2)(0.30)(0.50~m)$
$W_f = -2.94~J$
Let's assume that the final kinetic energy is zero. We can find the maximum possible height of the package:
$PE = U_s+W_f$
$mgh = \frac{1}{2}kx^2+W_f$
$h = \frac{\frac{1}{2}kx^2+W_f}{mg}$
$h = \frac{\frac{1}{2}(500~N/m)(0.30~m)^2-2.94~J}{(2.0~kg)(9.80~m/s^2)}$
$h = 0.998~m \approx 1.0~m$
If we round off, the package barely makes it onto the truck with a final kinetic energy of zero.
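The arithmetic can be double-checked with a few lines of Python (a sketch using the given values):

```python
import math

k, x = 500.0, 0.30         # spring constant (N/m) and compression (m)
m, g, h = 2.0, 9.80, 1.0   # mass (kg), gravity (m/s^2), truck-bed height (m)
mu, d = 0.30, 0.50         # kinetic friction coefficient, sticky-spot length (m)

# (a) (1/2) m v^2 = (1/2) k x^2 - m g h
v = math.sqrt((k * x**2 - 2 * m * g * h) / m)
print(f"v = {v:.1f} m/s")            # 1.7 m/s

# (b) friction work, then the maximum height with zero final kinetic energy
W_f = -mu * m * g * d                # -2.94 J
h_max = (0.5 * k * x**2 + W_f) / (m * g)
print(f"h_max = {h_max:.3f} m")      # ~0.998 m
```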
|
2018-09-25 03:08:39
|
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.7503347992897034, "perplexity": 182.37548820638497}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2018-39/segments/1537267160923.61/warc/CC-MAIN-20180925024239-20180925044639-00273.warc.gz"}
|
https://www.gradesaver.com/textbooks/math/algebra/algebra-and-trigonometry-10th-edition/chapter-9-9-1-linear-and-nonlinear-systems-of-equations-9-1-exercises-page-635/20
|
## Algebra and Trigonometry 10th Edition
$(2,2.5)$
Here, we have $0.5x+3.2y=9.0$ ...(1) and $0.2x-1.6y=-3.6$ ...(2)
Re-arrange the second equation: multiplying (2) by 5 gives $x-8y=-18$, so $x=8y-18$
Now, plug $x=8y-18$ into the first equation:
$0.5(8y-18)+3.2y=9.0$
$4y-9+3.2y=9.0$
This gives $7.2y=18 \implies y=2.5$
Use the $y$ value and the equation above to solve for $x$: $x=8(2.5)-18=2$
Hence, $(x,y)=(2,2.5)$
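The solution is easy to verify numerically (a NumPy sketch of the same system):

```python
import numpy as np

A = np.array([[0.5,  3.2],
              [0.2, -1.6]])
b = np.array([9.0, -3.6])
print(np.linalg.solve(A, b))  # [2.  2.5]
```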
|
2019-10-20 09:59:33
|
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.9656755328178406, "perplexity": 254.6021471392137}, "config": {"markdown_headings": true, "markdown_code": false, "boilerplate_config": {"ratio_threshold": 0.3, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2019-43/segments/1570986705411.60/warc/CC-MAIN-20191020081806-20191020105306-00422.warc.gz"}
|
https://chemistry.stackexchange.com/questions/71769/what-is-this-blue-crystal
|
# What is this blue crystal?
I was sorting out some old boxes in one of the rooms in my house, when I found a small container with about 10 grams of this blue crystalline powder. I assume that it contains some form of copper due to the colour, but the label's all ripped and I can't see the name of it.
So what compound is it?
It appears to glow in the image, but this is just the reflection from the camera flash and it doesn't look like this in real life.
• Seems like a classic case of $\ce{CuSO4.5H2O}$, you may want to do a Google image search and compare your sample to it (since you have the actual thing you're in a better position to judge whether it looks the same) – orthocresol Apr 2 '17 at 15:30
• You could try warming it. If it's what @orthocresol says it is, it should turn white when heated (i.e. when it loses water and becomes anhydrous). I did this for my CHEM 101 lab last semester :) – Gallifreyan Apr 2 '17 at 21:10
• @Gallifreyan I tried this and it did in fact turn white – George Willcox Apr 2 '17 at 21:41
• I'm voting to close this question as off-topic because not enough information has been included in the question at the moment to deduce a definitive answer; simply a picture with no other information isn't helpful. I reckon there are a lot of blue-colored crystalline compounds around. – M.A.R. Apr 2 '17 at 23:02
• @M.A.R. What other information do you expect me to provide? An answer has already been given, which I managed to test by heating the compound. And whilst there may be other blue crystalline compounds out there, this one is quite clearly the most common. – George Willcox Apr 2 '17 at 23:05
|
2019-08-22 06:11:27
|
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.27594178915023804, "perplexity": 618.8505102998614}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2019-35/segments/1566027316783.70/warc/CC-MAIN-20190822042502-20190822064502-00284.warc.gz"}
|
https://math.stackexchange.com/questions/1763181/when-this-matrix-is-diagonalizable
|
# When is this matrix diagonalizable?
When is this matrix diagonalizable? ($a_i \in \mathbb{R}$) $$\begin{pmatrix} &&&a_1\\ &&a_2&\\ &\ddots&&\\ a_n&&&\\ \end{pmatrix}$$ I think I should probably consider the characteristic polynomial of this matrix; if all its roots are simple, then the matrix is diagonalizable.
UPD: I also suppose that if all $a_i \neq 0$ then the matrix is diagonalizable, but I can't prove that.
The matrix is anti-diagonal, of course.
• Is $\;a_{ij}=0\;$ for $\;i+j\neq n\;$ in your matrix? I mean, is this an antidiagonal matrix? – DonAntonio Apr 28 '16 at 19:29
• It will help to treat the case $n=2$ first. – Henning Makholm Apr 28 '16 at 19:29
• You can extract some information about the eigenvalues using the fact that the square of such a matrix is diagonal (so that the eigenvalues of the square are just the diagonal entries) and that $\det (A^2) = (\det A)^2$. – Travis Apr 28 '16 at 19:30
• – thebooort Apr 28 '16 at 19:34
• Conjugate by the appropriate permutation matrix. – Adam Hughes Apr 28 '16 at 19:36
Let $e_i$ be the standard unit vectors, $i=1\ldots n$. The two-dimensional subspaces spanned by $e_i$ and $e_{n+1-i}$, $i = 1 \ldots \lfloor n/2 \rfloor$ and (if $n$ is odd) the one-dimensional subspace spanned by $e_{(n+1)/2}$, are invariant under your matrix, so everything reduces to the two-dimensional case.
If $a_1, a_2 \ne 0$, $\pmatrix{0 & a_1\cr a_2 & 0\cr}$ has two distinct eigenvalues $\pm \sqrt{a_1 a_2}$, therefore is diagonalizable. Of course if $a_1 = a_2 = 0$, you have the $0$ matrix which is diagonalizable. However, if one of $a_1, a_2$ is $0$ and the other is not, the matrix is not diagonalizable (the only eigenvalue is $0$, but the null space is one-dimensional).
Back to the general case: the matrix is diagonalizable unless for some $i$, one of $a_i$ and $a_{n+1-i}$ is $0$ and the other is not.
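The dichotomy is easy to see numerically; here is a small NumPy sketch (an illustration, assuming real $a_i$):

```python
import numpy as np

def antidiag(a):
    """Anti-diagonal matrix with entries a_1, ..., a_n from the top-right down."""
    n = len(a)
    M = np.zeros((n, n))
    for i, ai in enumerate(a):
        M[i, n - 1 - i] = ai
    return M

# all paired entries nonzero: distinct eigenvalues per 2x2 block, diagonalizable
print(np.linalg.eigvals(antidiag([1.0, 2.0, 3.0, 4.0])))   # +-2, +-sqrt(6)

# one entry of a pair zero, the other not: only eigenvalue 0 but rank 1,
# so the null space is one-dimensional and the matrix is defective
M = antidiag([1.0, 0.0])
print(np.linalg.matrix_rank(M), np.linalg.eigvals(M))       # 1 [0. 0.]
```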
• I can't understand the beginning of your solution, could you please clearify me? – AnatoliySultanov Apr 28 '16 at 19:54
Lemma 1 : $A$ is diagonalizable over $\mathbb C$ if and only if $A^2$ is diagonalizable and $\ker A =\ker A^2$
Here, $A^2$ is luckily a diagonal matrix, so $A$ diagonalizable over $\mathbb C$ if and only if $\ker A =\ker A^2$, that is to say, if and only if $\forall i, a_i=0\iff a_{n-i+1}=0$
Lemma 2: $A$ is diagonalizable over $\mathbb R$ if and only if $\ker A =\ker A^2$ and all the eigenvalues of $A^2$ are nonnegative.
Here, this translates as $\forall i, (a_i=0\iff a_{n-i+1}=0) \;\text{and } a_ia_{n-i+1}\geq 0$
These results are similar to those found here Conditions of diagonalizability of $n \times n$ anti-diagonal matrix
• Could you explain me how to prove Lemma 2, please? – AnatoliySultanov Apr 28 '16 at 20:05
• @AnatoliySultanov if the eigenvalues of $A^2$ are negative then the eigenvalues of $A$ are complex and it is clearly not real-diagonalizable. but for the converse you have to check also if the $P$ in $P D P^{-1}$ is a real matrix – reuns Apr 28 '16 at 20:09
|
2019-06-25 07:28:07
|
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 1, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.8994945287704468, "perplexity": 150.26928872114019}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2019-26/segments/1560627999814.77/warc/CC-MAIN-20190625072148-20190625094148-00388.warc.gz"}
|
https://davidthemathstutor.com.au/2020/02/08/the-derivative-part-3/
|
# The Derivative, Part 3
Now that we have some confidence that the derivative definition gives correct results of functions that we know the answer to, let’s look at a functional form where the answer is not known.
Consider f(x) = x². As you know, this function plots as the standard parabola. The slope of a tangent line on this curve (its rate of change) is not constant, unlike the cases we have looked at before, but it depends on where we are on the curve:
$f'(x) = \lim_{h\rightarrow 0}\frac{f(x+h) - f(x)}{h} = \lim_{h\rightarrow 0}\frac{(x+h)^{2} - x^{2}}{h} = \lim_{h\rightarrow 0}\frac{x^{2} + 2xh + h^{2} - x^{2}}{h} = \lim_{h\rightarrow 0}\frac{h(2x+h)}{h} = \lim_{h\rightarrow 0}(2x+h) = 2x$
So again, we do some algebraic manipulation that gets rid of the h in the denominator. Remember, as we are taking the limit as h approaches 0, the x is essentially treated as a constant. So the final answer is f'(x) = 2x. Referring back to the graph, this satisfies the tangent line slopes at -1 and 1: f'(-1) = -2, f'(1) = 2. At any other point on the graph, just evaluate f'(x) = 2x to find the rate of change of f(x) = x² at a particular x.
Now do you have to evaluate the definition for every different function you come across? Thankfully, the answer is no. Mathematicians have long ago done the hard work for you, and because of the properties of limits, many general rules can be made. For example, if you know the derivative of a function, but what you have is the same function multiplied by a constant, the derivative of this new function is just the same constant times the derivative of the old function. For example, we now know that for f(x) = x², f'(x) = 2x. But what about g(x) = 3x²? Well, g'(x) will just be 3 times the derivative of x², so g'(x) = 6x.
So the rule is, if g(x) = af(x) where a is a constant number, then g'(x) = af'(x). Another generic rule is that the derivative of a sum of functions is the sum of the individual derivatives: If h(x) = f(x) + g(x), then h'(x) = f'(x) + g'(x).
It turns out that if
$f( x) =ax^{n}$
where n is any real number, then
$f'( x) =anx^{n-1}$
So to find the derivative in this case, you just multiply the function by n and reduce the value of the exponent by 1.
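Here's a small Python sketch checking the power rule against the limit definition (a symmetric difference quotient with a small h stands in for the limit):

```python
def derivative_at(f, x, h=1e-6):
    # symmetric difference quotient approximating the limit definition
    return (f(x + h) - f(x - h)) / (2 * h)

a, n = 3.0, 2.0
f = lambda x: a * x**n

for x in (-1.0, 1.0, 2.0):
    print(x, derivative_at(f, x), a * n * x**(n - 1))  # numeric vs. a*n*x^(n-1)
```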
Next time, I will present a table of common derivatives and do some sample problems.
|
2020-02-25 02:05:57
|
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.9150552749633789, "perplexity": 278.037842296055}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 20, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2020-10/segments/1581875146004.9/warc/CC-MAIN-20200225014941-20200225044941-00433.warc.gz"}
|
https://en.wikipedia.org/wiki/Hurwitz%27s_automorphisms_theorem
|
Hurwitz's automorphisms theorem
In mathematics, Hurwitz's automorphisms theorem bounds the order of the group of automorphisms, via orientation-preserving conformal mappings, of a compact Riemann surface of genus g > 1, stating that the number of such automorphisms cannot exceed 84(g − 1). A group for which the maximum is achieved is called a Hurwitz group, and the corresponding Riemann surface a Hurwitz surface. Because compact Riemann surfaces are synonymous with non-singular complex projective algebraic curves, a Hurwitz surface can also be called a Hurwitz curve.[1] The theorem is named after Adolf Hurwitz, who proved it in (Hurwitz 1893).
Hurwitz's bound also holds for algebraic curves over a field of characteristic 0, and over fields of positive characteristic p>0 for groups whose order is coprime to p, but can fail over fields of positive characteristic p>0 when p divides the group order. For example, the double cover of the projective line $y^2 = x^p - x$ branched at all points defined over the prime field has genus $g=(p-1)/2$ but is acted on by the group $\mathrm{SL}_2(p)$ of order $p^3-p$.
Interpretation in terms of hyperbolicity
One of the fundamental themes in differential geometry is a trichotomy between the Riemannian manifolds of positive, zero, and negative curvature K. It manifests itself in many diverse situations and on several levels. In the context of compact Riemann surfaces X, via the Riemann uniformization theorem, this can be seen as a distinction between the surfaces of different topologies:
• X a sphere, a compact Riemann surface of genus zero with K > 0;
• X a flat torus, a compact Riemann surface of genus one with K = 0;
• X a hyperbolic surface, a compact Riemann surface of genus greater than one with K < 0.
While in the first two cases the surface X admits infinitely many conformal automorphisms (in fact, the conformal automorphism group is a complex Lie group of dimension three for a sphere and of dimension one for a torus), a hyperbolic Riemann surface only admits a discrete set of automorphisms. Hurwitz's theorem claims that in fact more is true: it provides a uniform bound on the order of the automorphism group as a function of the genus and characterizes those Riemann surfaces for which the bound is sharp.
Statement and proof
Theorem: Let $X$ be a smooth connected Riemann surface of genus $g \geq 2$. Then its automorphism group $\mathrm{Aut}(X)$ has size at most $84(g-1)$.
Proof: Assume for now that $G = \mathrm{Aut}(X)$ is finite (we'll prove this at the end).
• Consider the quotient map $X \to X/G$. Since $G$ acts by holomorphic functions, the quotient is locally of the form $z \to z^{n}$ and the quotient $X/G$ is a smooth Riemann surface. The quotient map $X \to X/G$ is a branched cover, and we will see below that the ramification points correspond to the orbits that have a non-trivial stabiliser. Let $g_0$ be the genus of $X/G$.
• By the Riemann-Hurwitz formula,
$$2g-2 \ = \ |G| \cdot \left( 2g_0 - 2 + \sum_{i=1}^{k} \left( 1 - \frac{1}{e_i} \right) \right)$$
where the sum is over the $k$ ramification points $p_i \in X/G$ for the quotient map $X \to X/G$. The ramification index $e_i$ at $p_i$ is just the order of the stabiliser group, since $e_i f_i = \deg(X \to X/G)$, where $f_i$ is the number of pre-images of $p_i$ (the number of points in the orbit) and $\deg(X \to X/G) = |G|$. By definition of ramification points, $e_i \geq 2$ for all $k$ ramification indices.
Now call the right-hand side $|G|R$ and note that since $g \geq 2$ we must have $R > 0$. Rearranging the equation we find:
• If $g_0 \geq 2$ then $R \geq 2$, and $|G| \leq (g-1)$
• If $g_0 = 1$, then $k \geq 1$ and $R \geq 0 + 1 - 1/2 = 1/2$, so that $|G| \leq 4(g-1)$,
• If $g_0 = 0$, then $k \geq 3$ and
• if $k \geq 5$ then $R \geq -2 + k(1 - 1/2) \geq 1/2$, so that $|G| \leq 4(g-1)$
• if $k = 4$ then $R \geq -2 + 4 - 1/2 - 1/2 - 1/2 - 1/3 = 1/6$, so that $|G| \leq 12(g-1)$,
• if $k = 3$ then write $e_1 = p$, $e_2 = q$, $e_3 = r$. We may assume $2 \leq p \leq q \leq r$.
• if $p \geq 3$ then $R \geq -2 + 3 - 1/3 - 1/3 - 1/4 = 1/12$ so that $|G| \leq 24(g-1)$,
• if $p = 2$ then
• if $q \geq 4$ then $R \geq -2 + 3 - 1/2 - 1/4 - 1/5 = 1/20$ so that $|G| \leq 40(g-1)$,
• if $q = 3$ then $R \geq -2 + 3 - 1/2 - 1/3 - 1/7 = 1/42$ so that $|G| \leq 84(g-1)$.
In conclusion, $|G| \leq 84(g-1)$.
To show that $G$ is finite, note that $G$ acts on the cohomology $H^{*}(X,\mathbf{C})$ preserving the Hodge decomposition and the lattice $H^{1}(X,\mathbf{Z})$.
• In particular, its action on $V = H^{0,1}(X,\mathbf{C})$ gives a homomorphism $h : G \to \mathrm{GL}(V)$ with discrete image $h(G)$.
• In addition, the image $h(G)$ preserves the natural non-degenerate Hermitian inner product $(\omega, \eta) = i \int \bar{\omega} \wedge \eta$ on $V$. In particular the image $h(G)$ is contained in the unitary group $\mathrm{U}(V) \subset \mathrm{GL}(V)$, which is compact. Thus the image $h(G)$ is not just discrete, but finite.
• It remains to prove that $h : G \to \mathrm{GL}(V)$ has finite kernel. In fact, we will prove $h$ is injective. Assume $\phi \in G$ acts as the identity on $V$. If $\mathrm{fix}(\phi)$ is finite, then by the Lefschetz fixed point theorem,
$$|\mathrm{fix}(\phi)| = 1 - 2\,\mathrm{tr}(h(\phi)) + 1 = 2 - 2\,\mathrm{tr}(\mathrm{id}_V) = 2 - 2g < 0.$$
This is a contradiction, and so $\mathrm{fix}(\phi)$ is infinite. Since $\mathrm{fix}(\phi)$ is a closed complex subvariety of positive dimension and $X$ is a smooth connected curve (i.e. $\dim_{\mathbf{C}}(X) = 1$), we must have $\mathrm{fix}(\phi) = X$. Thus $\phi$ is the identity, and we conclude that $h$ is injective and $G \cong h(G)$ is finite. Q.E.D.
Corollary of the proof: A Riemann surface $X$ of genus $g \geq 2$ has $84(g-1)$ automorphisms if and only if $X$ is a branched cover $X \to \mathbf{P}^{1}$ with three ramification points, of indices 2, 3 and 7.
The idea of another proof and construction of the Hurwitz surfaces
By the uniformization theorem, any hyperbolic surface X – i.e., the Gaussian curvature of X is equal to negative one at every point – is covered by the hyperbolic plane. The conformal mappings of the surface correspond to orientation-preserving automorphisms of the hyperbolic plane. By the Gauss–Bonnet theorem, the area of the surface is
A(X) = − 2π χ(X) = 4π(g − 1).
In order to make the automorphism group G of X as large as possible, we want the area of its fundamental domain D for this action to be as small as possible. If the fundamental domain is a triangle with the vertex angles π/p, π/q and π/r, defining a tiling of the hyperbolic plane, then p, q, and r are integers greater than one, and the area is
A(D) = π(1 − 1/p − 1/q − 1/r).
Thus we are asking for integers which make the expression
1 − 1/p − 1/q − 1/r
strictly positive and as small as possible. This minimal value is 1/42, and
1 − 1/2 − 1/3 − 1/7 = 1/42
gives a unique (up to permutation) triple of such integers. This would indicate that the order |G| of the automorphism group is bounded by
A(X)/A(D) ≤ 168(g − 1).
However, a more delicate reasoning shows that this is an overestimate by the factor of two, because the group G can contain orientation-reversing transformations. For the orientation-preserving conformal automorphisms the bound is 84(g − 1).
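A brute-force sketch (Python; the search cutoff of 50 is an arbitrary choice) confirms that (2, 3, 7) gives the smallest positive value of 1 − 1/p − 1/q − 1/r:

```python
best = None
for p in range(2, 50):
    for q in range(p, 50):
        for r in range(q, 50):
            val = 1 - 1/p - 1/q - 1/r
            if val > 0 and (best is None or val < best[0]):
                best = (val, p, q, r)
print(best)  # (0.023809..., 2, 3, 7), i.e. 1/42
```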
Construction
Hurwitz groups and surfaces are constructed based on the tiling of the hyperbolic plane by the (2,3,7) Schwarz triangle.
To obtain an example of a Hurwitz group, let us start with a (2,3,7)-tiling of the hyperbolic plane. Its full symmetry group is the full (2,3,7) triangle group generated by the reflections across the sides of a single fundamental triangle with the angles π/2, π/3 and π/7. Since a reflection flips the triangle and changes the orientation, we can join the triangles in pairs and obtain an orientation-preserving tiling polygon. A Hurwitz surface is obtained by 'closing up' a part of this infinite tiling of the hyperbolic plane to a compact Riemann surface of genus g. This will necessarily involve exactly 84(g − 1) double triangle tiles.
The following two regular tilings have the desired symmetry group; the rotational group corresponds to rotation about an edge, a vertex, and a face, while the full symmetry group would also include a reflection. Note that the polygons in the tiling are not fundamental domains – the tiling by (2,3,7) triangles refines both of these and is not regular.
Wythoff constructions yields further uniform tilings, yielding eight uniform tilings, including the two regular ones given here. These all descend to Hurwitz surfaces, yielding tilings of the surfaces (triangulation, tiling by heptagons, etc.).
From the arguments above it can be inferred that a Hurwitz group G is characterized by the property that it is a finite quotient of the group with two generators a and b and three relations
$$a^{2} = b^{3} = (ab)^{7} = 1,$$
thus G is a finite group generated by two elements of orders two and three, whose product is of order seven. More precisely, any Hurwitz surface, that is, a hyperbolic surface that realizes the maximum order of the automorphism group for the surfaces of a given genus, can be obtained by the construction given. This is the last part of the theorem of Hurwitz.
Examples of Hurwitz groups and surfaces
The small cubicuboctahedron is a polyhedral immersion of the tiling of the Klein quartic by 56 triangles, meeting at 24 vertices.[2]
The smallest Hurwitz group is the projective special linear group PSL(2,7), of order 168, and the corresponding curve is the Klein quartic curve. This group is also isomorphic to PSL(3,2).
Next is the Macbeath curve, with automorphism group PSL(2,8) of order 504. Many more finite simple groups are Hurwitz groups; for instance all but 64 of the alternating groups are Hurwitz groups, the largest non-Hurwitz example being of degree 167. The smallest alternating group that is a Hurwitz group is A15.
Most projective special linear groups of large rank are Hurwitz groups, (Lucchini, Tamburini & Wilson 2000). For lower ranks, fewer such groups are Hurwitz. For $n_p$ the order of $p$ modulo 7, one has that PSL(2,q) is Hurwitz if and only if either $q = 7$ or $q = p^{n_p}$. Indeed, PSL(3,q) is Hurwitz if and only if $q = 2$, PSL(4,q) is never Hurwitz, and PSL(5,q) is Hurwitz if and only if $q = 7^4$ or $q = p^{n_p}$, (Tamburini & Vsemirnov 2006).
Similarly, many groups of Lie type are Hurwitz. The finite classical groups of large rank are Hurwitz, (Lucchini & Tamburini 1999). The exceptional Lie groups of type G2 and the Ree groups of type 2G2 are nearly always Hurwitz, (Malle 1990). Other families of exceptional and twisted Lie groups of low rank are shown to be Hurwitz in (Malle 1995).
There are 12 sporadic groups that can be generated as Hurwitz groups: the Janko groups J1, J2 and J4, the Fischer groups Fi22 and Fi'24, the Rudvalis group, the Held group, the Thompson group, the Harada–Norton group, the third Conway group Co3, the Lyons group, and the Monster, (Wilson 2001).
Automorphism groups in low genus
The largest |Aut(X)| can get for a Riemann surface X of genus g is shown below, for 2 ≤ g ≤ 11, along with a surface X0 with |Aut(X0)| maximal.

genus g   largest possible |Aut(X)|   X0               Aut(X0)
2         48                          Bolza curve      GL2(3)
3         168 (Hurwitz bound)         Klein quartic    PSL2(7)
4         120                         Bring curve      S5
5         192
6         150
7         504 (Hurwitz bound)         Macbeath curve   PSL2(8)
8         336
9         320
10        432
11        240
In this range, a Hurwitz curve exists only in genus g = 3 and g = 7.
Notes
1. ^ Technically speaking, there is an equivalence of categories between the category of compact Riemann surfaces with the orientation-preserving conformal maps and the category of non-singular complex projective algebraic curves with the algebraic morphisms.
2. ^ (Richter) Note each face in the polyhedron consist of multiple faces in the tiling – two triangular faces constitute a square face and so forth, as per this explanatory image.
References
• Hurwitz, A. (1893), "Über algebraische Gebilde mit Eindeutigen Transformationen in sich", Mathematische Annalen, 41 (3): 403–442, doi:10.1007/BF01443420, JFM 24.0380.02.
• Lucchini, A.; Tamburini, M. C. (1999), "Classical groups of large rank as Hurwitz groups", Journal of Algebra, 219 (2): 531–546, doi:10.1006/jabr.1999.7911, ISSN 0021-8693, MR 1706821
• Lucchini, A.; Tamburini, M. C.; Wilson, J. S. (2000), "Hurwitz groups of large rank", Journal of the London Mathematical Society, Second Series, 61 (1): 81–92, doi:10.1112/S0024610799008467, ISSN 0024-6107, MR 1745399
• Malle, Gunter (1990), "Hurwitz groups and G2(q)", Canadian Mathematical Bulletin, 33 (3): 349–357, doi:10.4153/CMB-1990-059-8, ISSN 0008-4395, MR 1077110
• Malle, Gunter (1995), "Small rank exceptional Hurwitz groups", Groups of Lie type and their geometries (Como, 1993), London Math. Soc. Lecture Note Ser., 207, Cambridge University Press, pp. 173–183, MR 1320522
• Tamburini, M. C.; Vsemirnov, M. (2006), "Irreducible (2,3,7)-subgroups of PGL(n,F) for n ≤ 7", Journal of Algebra, 300 (1): 339–362, doi:10.1016/j.jalgebra.2006.02.030, ISSN 0021-8693, MR 2228652
• Wilson, R. A. (2001), "The Monster is a Hurwitz group", Journal of Group Theory, 4 (4): 367–374, doi:10.1515/jgth.2001.027, MR 1859175
• Richter, David A., How to Make the Mathieu Group M24, retrieved 2010-04-15
|
2019-03-20 09:15:29
|
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 91, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.9294166564941406, "perplexity": 499.27689027981614}, "config": {"markdown_headings": false, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2019-13/segments/1552912202324.5/warc/CC-MAIN-20190320085116-20190320111116-00213.warc.gz"}
|
https://cstheory.stackexchange.com/questions/48340/is-anything-known-about-nc1-with-np-oracle
|
# Is anything known about NC$^1$ with NP oracle
A few things are known about the class $\textsf{L}$ provided with an $\textsf{NP}$ oracle ($\textsf{L}^\textsf{NP} = \Theta_2^\textsf{P}$ has attracted a bit of attention, for instance [1]). On the other hand, I can't find much about the class $\textsf{NC}^1$ with access to an $\textsf{NP}$ oracle. Is it because the use of an oracle doesn't play well with the definition of $\textsf{NC}^1$? I doubt that. Most likely, it's because I haven't looked properly.
Except for the direct $\textsf{NP} \cup \textsf{coNP} \subseteq {\textsf{NC}^1}^{\textsf{NP}} \subseteq \Theta_2^{\textsf{P}}$, what is known about ${\textsf{NC}^1}^{\textsf{NP}}$?
• It’s quite nontrivial to decide what is the right way to relativize $\mathrm{NC}^1$ in the first place. See in particular arxiv.org/abs/1204.5508. But any sensible definition should make $\mathrm{NC^{1\,NP}}$ the same as $\Theta^P_2$, as already $\mathrm{AC^{0\,NP}}$ does that (this follows from the representation of $\Theta^P_2$ as in Theorem 4 of Buss & Hay). – Emil Jeřábek Feb 7 at 14:30
• These are not quite research-level questions. I think you should slow down on question asking, and instead study the basic literature on $\Theta^P_2$ first. – Emil Jeřábek Feb 7 at 14:41
• Thanks for the clarification and references. – Abdallah Feb 8 at 2:45
• Apologies that I asked too many basic questions in a row. I'll refrain from doing so in the future. – Abdallah Feb 8 at 2:46
|
2021-04-11 15:00:10
|
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 8, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.7092596292495728, "perplexity": 449.1937725189842}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2021-17/segments/1618038064520.8/warc/CC-MAIN-20210411144457-20210411174457-00325.warc.gz"}
|
https://www.physicsoverflow.org/20997/status-of-the-principle-of-maximum-entropy
|
# Status of the Principle of Maximum Entropy
Jaynes' principle of maximum entropy is a powerful tool in non-equilibrium statistical mechanics, but it relies on so-called subjective probabilities, information entropy and other things, which could leave a foul taste in the mouth of non-Bayesian physicists. However, I have heard that there are other "frequentist" theories which could arrive at the same results, e.g. large deviation theory.
Could someone explain or point out theories or papers which derive the maximum entropy principle within the context of "classical" probability theory?
A non-subjective account of statistical mechanics close to the treatment with large deviation theory (which is just a more abstract version of it) is given in Part II of my online book
Classical and Quantum Mechanics via Lie algebras, http://arxiv.org/abs/0810.1019
At the end (in Sections 10.6 and 10.7), one also finds a short discussion of the subjective, information theoretic treatment and its deficiencies.
A survey that treats statistical mechanics directly in terms of large deviations is given in (reference [80] of the book, referenced on p.208)
R.S. Ellis. An overview of the theory of large deviations and applications to statistical
mechanics. Scand. Actuarial J, 1:97–142, 1995.
A more recent survey is:
H. Touchette, The large deviation approach to statistical mechanics, Phys. Rep. 478 (2009), 1-69.
answered Jul 26, 2014 by (13,637 points)
edited Jul 28, 2014
|
2019-03-18 15:57:17
|
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.5166512131690979, "perplexity": 1329.506578890371}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2019-13/segments/1552912201455.20/warc/CC-MAIN-20190318152343-20190318174343-00355.warc.gz"}
|
http://clay6.com/qa/43188/the-velocity-of-a-body-moving-in-viscous-medium-is-given-by-v-large-frac-bi
|
# The velocity of a body moving in a viscous medium is given by $v = \frac{A}{B}\left[1-e^{-t/B}\right]$, where t is time and A and B are constants. Then the dimensions of A are
$M^0L^1T^0$. The exponent $-t/B$ must be dimensionless, so $[B] = T$; then $[A] = [v]\,[B] = (LT^{-1})(T) = L$, i.e. $M^0L^1T^0$.
|
2016-10-28 14:01:07
|
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.8099668025970459, "perplexity": 2767.510767334535}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2016-44/segments/1476988722653.96/warc/CC-MAIN-20161020183842-00248-ip-10-171-6-4.ec2.internal.warc.gz"}
|
https://socratic.org/questions/the-measures-of-the-angles-of-a-triangle-are-in-the-ratio-5-6-7-what-is-the-meas
|
# The measures of the angles of a triangle are in the ratio 5:6:7. What is the measure, in degrees, of the smallest angle of the triangle?
Mar 30, 2018
50°
#### Explanation:
Let x represent the scale factor that the ratio terms are multiplied by:
5x°+6x°+7x°=180°
18x°=180°
$x = 10$
Smallest Angle
5x°=5(10)°=50°
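The same computation in a few lines of Python (just a check):

```python
ratio = (5, 6, 7)
x = 180 / sum(ratio)          # scale factor: 180/18 = 10
angles = [r * x for r in ratio]
print(angles)                  # [50.0, 60.0, 70.0] -> smallest is 50 degrees
```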
|
2018-05-25 05:15:25
|
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 5, "mathjax_inline_tex": 1, "mathjax_display_tex": 0, "mathjax_asciimath": 1, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.6149876713752747, "perplexity": 4741.228390031301}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2018-22/segments/1526794867041.69/warc/CC-MAIN-20180525043910-20180525063910-00043.warc.gz"}
|
https://scholar.harvard.edu/avikde/publications/modular-hopping-and-running-parallel-composition-0
|
# Modular Hopping and Running via Parallel Composition
### Citation:
A. De, “Modular Hopping and Running via Parallel Composition,” University of Pennsylvania, 2017.
### Abstract:
Though multi-functional robot hardware has been created, the complexity in its functionality has been constrained by a lack of algorithms that appropriately manage flexible and autonomous reconfiguration of interconnections to physical and behavioral components. Raibert pioneered a paradigm for the synthesis of planar hopping using a "composition of parts": controlled vertical hopping, controlled forward speed, and controlled body attitude. Such reduced degree-of-freedom compositions also seem to appear in running animals across several orders of magnitude of scale. Dynamical systems theory can offer a formal representation of such reductions in terms of "anchored templates," respecting which Raibert's empirical synthesis (and the animals' empirical performance) can be posed as a parallel composition. However, the orthodox notion (attracting invariant submanifold with restriction dynamics conjugate to a template system) has only been formally synthesized in a few isolated instances in engineering (juggling, brachiating, hexapedal running robots, etc.) and formally observed in biology only in similarly limited contexts. In order to bring Raibert's 1980's work into the 21st century and out of the laboratory, we design a new family of one-, two-, and four-legged robots with high power density, transparency, and control bandwidth. On these platforms, we demonstrate a growing collection of {body, behavior} pairs that successfully embody dynamical running / hopping "gaits" specified using compositions of a few templates, with few parameters and a great deal of empirical robustness. We aim for and report substantial advances toward a formal notion of parallel composition---embodied behaviors that are correct by design even in the presence of nefarious coupling and perturbation---using a new analytical tool (hybrid dynamical averaging). With ideas of verifiable behavioral modularity and a firm understanding of the hardware tools required to implement them, we are closer to identifying the components required to flexibly program the exchange of work between machines and their environment. Knowing how to combine and sequence stable basins to solve arbitrarily complex tasks will result in improved foundations for robotics as it goes from ad-hoc practice to science (with predictive theories) in the next few decades.
|
2019-05-26 05:41:17
|
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 0, "mathjax_asciimath": 1, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.29089435935020447, "perplexity": 3382.6395613879636}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2019-22/segments/1558232258849.89/warc/CC-MAIN-20190526045109-20190526071109-00090.warc.gz"}
|
http://root.cern.ch/root/html534/guides/spectrum/Spectrum.html
|
# Processing and Visualization Functions
*** Miroslav Morháč ***
** E-mail : morhac@savba.sk **
# 1 BACKGROUND ELIMINATION
## 1.1 1-DIMENSIONAL SPECTRA
This function calculates the background spectrum from the source spectrum. The result is placed in the vector pointed to by the spectrum pointer. On successful completion it returns 0. On error it returns a pointer to a string describing the error.
char *Background1(float *spectrum,
int size,
int number_of_iterations);
Function parameters:
• spectrum pointer to the vector of source spectrum
• size length of spectrum
• number_of_iterations or width of the clipping window
The function allows one to separate useless spectrum information (continuous background) from peaks, based on the Sensitive Nonlinear Iterative Peak Clipping (SNIP) algorithm. In fact it represents a second-order difference filter (-1,2,-1). The basic algorithm is described in detail in [1], [2]; a minimal sketch of the clipping loop is given after the list below.
$v_p(i)= \min\left\{v_{p-1}(i) , \frac{v_{p-1}(i+p)+v_{p-1}(i-p)}{2} \right\}$
where p can be changed
1. from 1 up to a given parameter value w, incrementing it by 1 in each iteration step (INCREASING CLIPPING WINDOW)
2. from a given value w down to 1, decrementing it by 1 in each iteration step (DECREASING CLIPPING WINDOW)
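To make the clipping operation concrete, the following is a minimal sketch of the second-order, increasing-window variant. It is our illustration only, not a library routine; the library's Background1 additionally treats the spectrum edges, higher filter orders and Compton edges.
#include <vector>
#include <algorithm>
// Minimal sketch of second-order SNIP clipping with an increasing window.
// Illustration only: snip_background is not part of the library.
void snip_background(float *spectrum, int size, int w) {
   std::vector<float> prev(spectrum, spectrum + size), cur(prev);
   for (int p = 1; p <= w; p++) {                      // clipping window grows
      for (int i = p; i < size - p; i++)
         cur[i] = std::min(prev[i], 0.5f * (prev[i - p] + prev[i + p]));
      prev = cur;                                      // v_{p-1} <- v_p
   }
   std::copy(prev.begin(), prev.end(), spectrum);      // spectrum now holds the background
}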
An example of the original spectrum and the estimated background (INCREASING CLIPPING WINDOW) is given in Figure 1.1.
One can notice that at the edges of the peaks the estimated background goes under the peaks. An alternative approach is to decrease the clipping window from a given value down to one (DECREASING CLIPPING WINDOW). The result obtained is given in Figure 1.2.
The estimated background is smoother. The method does not deform the shape of peaks.
However, sometimes the shape of the background is very complicated and the second-order filter is insufficient. Let us illustrate such a case in Figure 1.3. The fourth-order background estimation filter gives a better estimate of the complicated background (clipping window w=10).
The fourth-order algorithm ignores the linear as well as the cubic component of the background. In this case the filter is (1,-4,6,-4,1). In general the allowed values of the filter order are 2, 4, 6, 8. An example of the same spectrum estimated with the clipping window w=40 and with filters of orders 2, 4, 6, 8 is given in Figure 1.4.
Sometimes it is necessary to include also the Compton edges in the estimate of the background. In Figure 1.5 we present an example of a synthetic spectrum with Compton edges. The background was estimated using the 8th-order filter with estimation of the Compton edges and a decreasing clipping window. In the lower part of the figure we present the background that was added to the synthetic spectrum; one can observe good agreement with the estimated background. The method of estimation of the Compton edge is described in detail in [3].
The generalized form of the algorithm is implemented in the function.
char *Background1General(float *spectrum,
int size,
int number_of_iterations,
int direction,
int filter_order,
bool compton);
The meaning of the parameters is as follows:
• spectrum pointer to the vector of source spectrum
• size length of spectrum vector
• number_of_iterations maximal width of clipping window,
• direction direction of change of clipping window. Possible values:
• BACK1_INCREASING_WINDOW
• BACK1_DECREASING_WINDOW
• filter_order order of clipping filter. Possible values:
• BACK1_ORDER2
• BACK1_ORDER4
• BACK1_ORDER6
• BACK1_ORDER8
• compton logical variable whether the estimation of Compton edge will be included. Possible values:
• BACK1_EXCLUDE_COMPTON
• BACK1_INCLUDE_COMPTON
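A hedged usage sketch of Background1General follows; the buffer size, window width and chosen constants are illustrative only.
#include <cstdio>
// Assumes the Background1General declaration and constants above are in scope.
void estimate_background(float *spectrum, int size) {
   char *err = Background1General(spectrum, size, 40,
                                  BACK1_DECREASING_WINDOW,
                                  BACK1_ORDER8,
                                  BACK1_INCLUDE_COMPTON);
   if (err) std::printf("background estimation failed: %s\n", err);
   // spectrum now contains the estimated background; subtract it from a
   // saved copy of the original data to obtain the pure peaks
}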
## 1.2 2-DIMENSIONAL SPECTRA
This basic background estimation function allows one to separate useless spectrum information (2D continuous background and coincidences of peaks with background in both dimensions) from peaks. It calculates the background spectrum from the source spectrum. The result is placed in the array pointed to by the spectrum pointer. On successful completion it returns 0; on error it returns a pointer to a string describing the error.
char *Background2(float **spectrum,
int sizex,
int sizey,
int number_of_iterations);
Function parameters:
• spectrum pointer to the array of source spectrum
• sizex x length of spectrum
• sizey y length of spectrum
• number_of_iterations width of the clipping window
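Since the 2-dimensional functions take a float** matrix, a typical call first allocates the rows. The following is a hedged sketch; the sizes and window width are illustrative.
#include <cstdio>
// Assumes the Background2 declaration above is in scope.
void estimate_background_2d(int sizex, int sizey) {
   float **m = new float*[sizex];
   for (int i = 0; i < sizex; i++) m[i] = new float[sizey]();  // zero-filled rows
   /* ... fill m with measured counts ... */
   char *err = Background2(m, sizex, sizey, 10);    // clipping window = 10
   if (err) std::printf("background estimation failed: %s\n", err);
   for (int i = 0; i < sizex; i++) delete [] m[i];
   delete [] m;
}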
In Figure 1.6 we present an example of 2-dimensional spectrum before background elimination.
Estimated background is shown in Figure 1.7. After subtraction we get pure 2-dimensional peaks.
Analogously to the 1-dimensional case we have generalized also the function for 2-dimensional background estimation. Sometimes the widths of peaks in the two dimensions are different; as an example we can mention n-gamma 2-dimensional spectra. Then it is necessary to set different widths of the clipping window in the two dimensions. In Figure 1.8 we give an example of such a spectrum.
The spectrum after background estimation (clipping window 10 in the x-direction, 20 in the y-direction) and subtraction is given in Figure 1.9.
Background estimation can be carried out using the algorithm of successive comparisons [1] or based on the one-step filtering algorithm given by the formula:
$v_{p}(i,j) = \min\left\{v_{p-1}(i,j) , \frac{ \left[\begin{array}{c} -v_{p-1}(i+p,j+p)+2v_{p-1}(i+p,j)-v_{p-1}(i+p,j-p) \\ +2v_{p-1}(i,j+p)+2v_{p-1}(i,j-p) \\ -v_{p-1}(i-p,j+p)+2v_{p-1}(i-p,j)-v_{p-1}(i-p,j-p) \end{array}\right] }{4} \right\}$
An illustrating example is given in the following three figures. In Figure 1.10 we present the original (synthetic) 2-dimensional spectrum. In Figure 1.11 we have the spectrum after background elimination using the successive comparisons algorithm, and in Figure 1.12 after elimination using the one-step filtering algorithm.
One can notice artificial ridges in the spectrum in Figure 1.11; in the estimation using the filtering algorithm this effect disappears. The general function for estimation of 2-dimensional background with rectangular ridges has the form
char *Background2RectangularRidges(float **spectrum,
int sizex,
int sizey,
int number_of_iterations_x,
int number_of_iterations_y,
int direction,
int filter_order,
int filter_type);
This function calculates the background spectrum from the source spectrum. The result is placed in the array pointed to by the spectrum pointer.
Function parameters:
• spectrum pointer to the array of source spectrum
• sizex x length of spectrum
• sizey y length of spectrum
• number_of_iterations_x maximal x width of clipping window
• number_of_iterations_y maximal y width of clipping window
• direction direction of change of clipping window. Possible values:
• BACK2_INCREASING_WINDOW
• BACK2_DECREASING_WINDOW
• filter_order order of clipping filter. Possible values:
• BACK2_ORDER2
• BACK2_ORDER4
• BACK2_ORDER6
• BACK2_ORDER8
• filter_type determines the algorithm of the filtering. Possible values:
• BACK2_SUCCESSIVE_FILTERING
• BACK2_ONE_STEP_FILTERING
In what follows we describe a function to estimate continuous 2-dimensional background together with rectangular and skew ridges. In Figure 1.13 we present a spectrum of this type.
The goal is to remove the rectangular as well as the skew ridges from the spectrum and to leave only the 2-dimensional coincidence peaks. After applying the background elimination function and subtraction we get the two-dimensional peaks presented in Figure 1.14.
In Figures 1.15 and 1.16 we present experimental spectrum with skew ridges and estimated background, respectively.
The function for the estimation of background together with skew ridges has the form
char *Background2SkewRidges(float **spectrum,
int sizex,
int sizey,
int number_of_iterations_x,
int number_of_iterations_y,
int direction,
int filter_order);
The result is placed in the array pointed to by the spectrum pointer.
Function parameters:
• spectrum pointer to the array of source spectrum
• sizex x length of spectrum
• sizey y length of spectrum
• number_of_iterations_x maximal x width of clipping window
• number_of_iterations_y maximal y width of clipping window
• direction direction of change of clipping window. Possible values:
• BACK2_INCREASING_WINDOW
• BACK2_DECREASING_WINDOW
• filter_order order of clipping filter. Possible values:
• BACK2_ORDER2
• BACK2_ORDER4
• BACK2_ORDER6
• BACK2_ORDER8
Next we present the function that estimates the continuous background together with rectangular and nonlinear ridges. To illustrate data of such a form we present the synthetic data shown in Figure 1.17. The estimated background is given in Figure 1.18. The pure Gaussians after subtracting the background from the original spectrum are shown in Figure 1.19.
The function to estimate also the nonlinear ridges has the form
char *Background2NonlinearRidges(float **spectrum,
int sizex,
int sizey,
int number_of_iterations_x,
int number_of_iterations_y,
int direction,
int filter_order);
The result is placed in the array pointed to by the spectrum pointer.
Function parameters:
• spectrum pointer to the array of source spectrum
• sizex x length of spectrum
• sizey y length of spectrum
• number_of_iterations_x maximal x width of clipping window
• number_of_iterations_y maximal y width of clipping window
• direction direction of change of clipping window. Possible values:
• BACK2_INCREASING_WINDOW
• BACK2_DECREASING_WINDOW
• filter_order order of clipping filter. Possible values:
• BACK2_ORDER2
• BACK2_ORDER4
• BACK2_ORDER6
• BACK2_ORDER8
The information contained in the skew ridges and nonlinear ridges can be interesting and one may wish to separate it from the rectangular ridges. Therefore we have implemented also two functions allowing one to estimate ridges only in the direction rectangular to the x-axis and the y-axis. Let us have both the rectangular and the skew ridges from the spectrum given in Figure 1.13 estimated using the above described function Background2SkewRidges (Figure 1.20).
Now let us estimate the ridges rectangular to the x-axis and the y-axis (Figures 1.21, 1.22).
After subtracting these data from the spectrum in Figure 1.20, we get the separated skew ridge given in Figure 1.23.
The functions for estimation of 1-dimensional ridges in 2-dimensional spectra have the forms
char *Background2RectangularRidgesX(float **spectrum,
int sizex,
int sizey,
int number_of_iterations,
int direction,
int filter_order);
Function parameters:
• spectrum pointer to the array of source spectrum
• sizex x length of spectrum
• sizey y length of spectrum
• number_of_iterations maximal x width of clipping window
• direction direction of change of clipping window. Possible values:
• BACK2_INCREASING_WINDOW
• BACK2_DECREASING_WINDOW
• filter_order order of clipping filter. Possible values:
• BACK2_ORDER2
• BACK2_ORDER4
• BACK2_ORDER6
• BACK2_ORDER8
char *Background2RectangularRidgesY(float **spectrum,
int sizex,
int sizey,
int number_of_iterations,
int direction,
int filter_order);
Function parameters:
• spectrum pointer to the array of source spectrum
• sizex x length of spectrum
• sizey y length of spectrum
• number_of_iterations maximal width of clipping window
• direction direction of change of clipping window. Possible values:
• BACK2_INCREASING_WINDOW
• BACK2_DECREASING_WINDOW
• filter_order order of clipping filter. Possible values
• BACK2_ORDER2
• BACK2_ORDER4
• BACK2_ORDER6
• BACK2_ORDER8
# 2 SMOOTHING
## 2.1 1-DIMENSIONAL SPECTRA
The operation of smoothing is based on the convolution of the original data with a filter of the type
(1,2,1)/4 -three points smoothing
(-3,12,17,12,-3)/35 -five points smoothing
(-2,3,6,7,6,3,-2)/21 -seven points smoothing
(-21,14,39,54,59,54,39,14,-21)/231 -nine points smoothing
(-36,9,44,69,84,89,84,69,44,9,-36)/429 -11 points smoothing
(-11,0,9,16,21,24,25,24,21,16,9,0,-11)/143 -13 points smoothing
(-78,-13,42,87,122,147,162,167,162,147,122,87,42,-13,-78)/1105 -15 points smoothing
The function for one-dimensional smoothing has the form
char *Smooth1(float *spectrum,
int size,
int points);
This function calculates the smoothed spectrum from the source spectrum. The result is placed in the vector pointed to by the spectrum pointer.
Function parameters:
• spectrum pointer to the vector of source spectrum
• size length of spectrum
• points width of smoothing window. Allowed values
• SMOOTH1_3POINTS
• SMOOTH1_5POINTS
• SMOOTH1_7POINTS
• SMOOTH1_9POINTS
• SMOOTH1_11POINTS
• SMOOTH1_13POINTS
• SMOOTH1_15POINTS
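As an illustration of what the convolution described at the beginning of this section does, a direct (non-library) implementation of the three-point filter (1,2,1)/4 might look as follows; the library routine may treat the spectrum edges differently.
// Sketch of three-point smoothing by direct convolution with (1,2,1)/4.
// Illustration only; smooth3 is not a library routine.
void smooth3(const float *src, float *dst, int size) {
   dst[0] = src[0];                       // edge channels copied unchanged here
   dst[size - 1] = src[size - 1];
   for (int i = 1; i < size - 1; i++)
      dst[i] = (src[i - 1] + 2.0f * src[i] + src[i + 1]) / 4.0f;
}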
An example of 1-dimensional spectrum smoothing and the influence of the filter width on the data is presented in Figure 2.1.
## 2.2 2-DIMENSIONAL SPECTRA
The smoothing of two-dimensional data is analogous to the one-dimensional case. The width of the filter can be chosen independently for each dimension. The form of the 2D smoothing function is as follows
char *Smooth2(float **spectrum,
int sizex,
int sizey,
int pointsx,
int pointsy);
This function calculates the smoothed spectrum from the source spectrum. The result is placed in the array pointed to by the spectrum pointer.
Function parameters:
• spectrum pointer to the array of source spectrum
• sizex x length of spectrum
• sizey y length of spectrum
• pointsx,pointsy width of smoothing window. Allowed values:
• SMOOTH2_3POINTS
• SMOOTH2_5POINTS
• SMOOTH2_7POINTS
• SMOOTH2_9POINTS
• SMOOTH2_11POINTS
• SMOOTH2_13POINTS
• SMOOTH2_15POINTS
An example of 2-D original data and data after smoothing is given in Figures 2.2, 2.3.
# 3 PEAK SEARCHING
## 3.1 1-DIMENSIONAL SPECTRA
The basic function of 1-dimensional peak searching is described in detail in [4], [5]. It allows one to identify automatically the peaks in a spectrum in the presence of continuous background and statistical fluctuations (noise). The algorithm is based on smoothed second differences that are compared to their standard deviations. Therefore it is necessary to pass a sigma parameter to the peak searching function; the algorithm is selective for peaks with the given sigma. The form of the basic peak searching function is
Int_t Search1(const float *source,
one_dim_peak *p,
int size,
double sigma);
This function searches for peaks in the source spectrum. The number of found peaks and their positions are written into the structure pointed to by the one_dim_peak structure pointer.
Function parameters:
• source pointer to the vector of source spectrum
• p pointer to the one_dim_peak structure pointer
• size length of source spectrum
• sigma sigma of searched peaks
The structure one_dim_peak has the form:
struct one_dim_peak{
int number_of_peaks;
double position[MAX_NUMBER_OF_PEAKS1];
};
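Assuming the signature shown above (with the structure pointer as the second argument, as listed in the function parameters), a hedged usage sketch reads; the buffer size and sigma are illustrative.
#include <cstdio>
// Assumes the Search1 declaration and one_dim_peak structure above are in scope.
void find_peaks(const float *spectrum, int size) {
   one_dim_peak peaks;
   Search1(spectrum, &peaks, size, 2.0);            // look for peaks with sigma = 2
   for (int i = 0; i < peaks.number_of_peaks; i++)
      std::printf("peak %d at channel %g\n", i, peaks.position[i]);
}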
An example of simple one-dimensional spectrum with identified peaks is given in Figure 3.1.
An example of 1-dimensional experimental spectrum with many identified peaks is given in Figure 3.2.
However, when we have noisy data the number of found peaks can be enormous. One such example is given in Figure 3.3. Therefore it can be useful to set a threshold value and to consider only the peaks higher than this threshold (see Figure 3.4; only three peaks were identified with threshold=50). The value in the center of the peak, value[i], minus the average value in two symmetrically positioned channels (channels i-3*sigma, i+3*sigma) must be greater than the threshold; otherwise the peak is ignored.
An alternative approach was proposed in [6]. The algorithm generates a new invariant spectrum based on discrete Markov chains. In this spectrum the noise is suppressed and the spectrum is smoother than the original one. On the other hand it emphasizes peaks (depending on the averaging window). An example of part of the original noisy spectrum and the Markov spectrum for window=3 is given in Figure 3.5. The peaks can then be found in the Markov spectrum using the standard algorithm presented above.
The form of the generalized peak searching function is as follows.
Int_t Search1General(float *spectrum,
int size,
float sigma,
int threshold,
bool markov,
int aver_window);
This function searches for peaks in the source spectrum. The number of found peaks and their positions are written into the structure pointed to by the one_dim_peak structure pointer.
Function parameters:
• spectrum pointer to the vector of source spectrum
• size length of source spectrum
• sigma sigma of searched peaks
• threshold threshold value for selecting of peaks
• markov logical variable, if it is true, first the source spectrum is replaced by new spectrum calculated using Markov chains method.
• aver_window averaging window used in the calculation of the Markov spectrum (applies only when markov is true)
The methods of peak searching are sensitive to sigma. Usually the sigma value is known beforehand, and it changes only slightly with energy. We have also investigated the robustness of the proposed algorithms on a spectrum with peaks whose sigma changes from 1 to 10 (see Figure 3.6).
We applied peak searching algorithm based on Markov approach. We changed sigma in the interval from 1 to 10. The spectra for averaging windows 3, 5, 10 are shown in Figure 3.7.
When we applied the peak searching function to the Markov spectrum averaged with window=10, we obtained a correct estimate of all 10 peak positions for sigma=2,3,4,5,6,7,8. This was not the case when we made the same experiment with the original spectrum: for all sigmas some peaks were not discovered.
## 3.2 2-DIMENSIONAL SPECTRA
The basic function of 2-dimensional peak searching is described in detail in [4]. It automatically identifies the peaks in a spectrum in the presence of continuous background, statistical fluctuations, as well as coincidences of background in one dimension with a peak in the other one (ridges). The form of the basic function of 2-dimensional peak searching is
Int_t Search2(const float **source,
int sizex,
int sizey,
double sigma);
This function searches for peaks in the source spectrum. The number of found peaks and their positions are written into the structure pointed to by the two_dim_peak structure pointer.
Function parameters:
• source pointer to the vector of source spectrum
• sizex x length of source spectrum
• sizey y length of source spectrum
• sigma sigma of searched peaks
An example of the two-dimensional spectrum with the identified peaks is shown in Figure 3.8.
We have also generalized the peak searching function analogously to one dimensional data. The generalized peak searching function for two dimensional spectra has the form
Int_t Search2General(float **source,
int sizex,
int sizey,
double sigma,
int threshold,
bool markov,
int aver_window);
This function searches for peaks in the source spectrum. The number of found peaks and their positions are written into the structure pointed to by the two_dim_peak structure pointer.
Function parameters:
• source pointer to the vector of source spectrum
• sizex x length of source spectrum
• sizey y length of source spectrum
• sigma sigma of searched peaks
• threshold threshold value for selection of peaks
• markov logical variable, if it is true, first the source spectrum is replaced by new spectrum calculated using Markov chains method.
• aver_window averaging window of searched peaks (applies only for Markov method)
An example of experimental 2-dimensional spectrum is given in Figure 3.9. The number of peaks identified by the function now is 295.
The function works even for very noisy data. In Figure 3.10 we present a synthetic 2-dimensional spectrum with 5 peaks. The method should recognize what is a real 2-dimensional peak and what is a crossing of two 1-dimensional ridges. The Markov spectrum with averaging window=3 is given in Figure 3.11. One can observe that this spectrum is smoother than the original one. After applying the general peak searching function to the Markov spectrum with sigma=2 and threshold=600, we get correctly identified peaks.
# 4 DECONVOLUTION - UNFOLDING
## 4.1 1-DIMENSIONAL SPECTRA
Mathematical formulation of the convolution system is:
$y(i) = \sum_{k=0}^{N-1}h(i-k)x(k), i=0,1,2,...,N-1$
where h(i) is the impulse response function, x, y are the input and output vectors,
respectively, and N is the length of the x and h vectors. In matrix form we have:
$\left[\begin{array}{c} y(0)\\ y(1)\\ .\\ .\\ .\\ .\\ .\\ y(2N-2)\\ y(2N-1) \end{array}\right] = \left[\begin{array}{cccccc} h(0) & 0 & 0 & 0 & ... & 0\\ h(1) & h(0) & 0 & 0 & ... & 0\\ . & h(1) & h(0) & 0 & ... & 0\\ h(N-1) & . & h(1) & h(0) & ... & 0\\ 0 & h(N-1) & . & h(1) & ... & 0\\ 0 & 0 & h(N-1) & . & ... & h(0)\\ 0 & 0 & 0 & h(N-1) & ... & h(1)\\ . & . & . & . & ... & .\\ 0 & 0 & 0 & 0 & ... & h(N-1) \end{array}\right] \left[\begin{array}{c} x(0)\\ x(1)\\ x(2)\\ .\\ .\\ x(N-1) \end{array}\right]$
Let us assume that we know the response and the output vector (spectrum) of the above given system. The deconvolution represents the solution of this overdetermined system of linear equations, i.e., the calculation of the vector x.
The goal of the deconvolution methods is to improve the resolution in the spectrum and to decompose multiplets. From the mathematical point of view the operation of deconvolution is an extremely critical as well as time consuming operation. Of all the methods studied, the Gold deconvolution (decomposition) proved to work best. It is suitable for processing positive definite data (e.g. histograms). The method is described in detail in [7], [8].
## 4.2 Gold deconvolution algorithm
• proved to work best; other methods (Fourier, Van Cittert, etc.) oscillate
$y = Hx$
$H^Ty = H^THx$
$y^{'} = H^{'}x, \textrm{ where } H^{'}=H^TH \textrm{ and } y^{'}=H^Ty$
$x_{i}^{(k+1)}=\frac{y_{i}^{'}}{\sum_{m=0}^{N-1}H_{im}^{'}x_{m}^{(k)}}x_{i}^{(k)}, i=0,1,...,N-1$
where $k=1,2,3,...,I$ and $x^{(0)} = [1,1,...,1]^T$
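A minimal sketch of these iterations, taking the precomputed H' = H^T H and y' = H^T y as inputs, follows. This is our illustration only; the library routine constructs these matrices internally and is far more elaborate.
#include <vector>
// Gold iterations on the system y' = H'x, starting from x(0) = [1,...,1]^T.
// Illustration only: Hp is H' = H^T*H, yp is y' = H^T*y.
std::vector<double> gold_iterations(const std::vector< std::vector<double> > &Hp,
                                    const std::vector<double> &yp, int iters) {
   int n = (int)yp.size();
   std::vector<double> x(n, 1.0), xnew(n);
   for (int k = 0; k < iters; k++) {
      for (int i = 0; i < n; i++) {
         double denom = 0.0;
         for (int m = 0; m < n; m++) denom += Hp[i][m] * x[m];
         xnew[i] = (denom > 0.0) ? x[i] * yp[i] / denom : x[i];  // multiplicative update
      }
      x = xnew;                                     // x(k) <- x(k+1)
   }
   return x;
}
Note that the update is multiplicative, so a solution started from positive initial values stays nonnegative, which is why the method suits positive definite data such as histograms.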
The basic function has the form
char *Deconvolution1(float *source,
const float *resp,
int size,
int number_of_iterations);
This function calculates the deconvolution of the source spectrum according to the response spectrum. The result is placed in the vector pointed to by the source pointer.
Function parameters:
• source pointer to the vector of source spectrum
• resp pointer to the vector of response spectrum
• size length of source and response spectra
• number_of_iterations for details see [8]
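A hedged usage sketch; the iteration count is illustrative (see [8] for guidance on choosing it).
#include <cstdio>
// Assumes the Deconvolution1 declaration above is in scope.
void sharpen(float *source, const float *resp, int size) {
   char *err = Deconvolution1(source, resp, size, 1000);
   if (err) std::printf("deconvolution failed: %s\n", err);
   // source now holds the deconvolved spectrum
}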
As an illustration of the method let us introduce a small example. In Figure 4.1 we present the original 1-dimensional spectrum. It contains multiplets that cannot be analyzed directly. The response function (one peak) is given in Figure 4.2. We assume the same response function (unchanging shape) along the entire energy scale, so the response matrix is composed of response functions mutually shifted by one channel, all of the same shape.
The result after deconvolution is given in Figure 4.3. It substantially improves the resolution in the spectrum.
We have developed a new high-resolution deconvolution algorithm. We have observed that the Gold deconvolution converges to a stable state (solution); it is useless to increase the number of iterations further, as the result obtained does not change. To continue decreasing the width of peaks we have found that, when the solution reaches its stable state, it is necessary to stop the iterations, change the solution vector in a suitable way, and repeat the Gold deconvolution. To change the particular solution we apply a non-linear boosting function to it; the power function proved to give the best results. At the beginning the function calculates the exact solution of the Toeplitz system of linear equations,
$x^{(0)} = [x_e^2(0),x_e^2(1),...,x_e^2(N-1)]^T$ where $x_e=H^{'-1}y^{'}$. Then it applies the Gold deconvolution algorithm to this solution and carries out the preset number of iterations. The power function with exponent equal to the boosting coefficient is then applied to the deconvolved data. These data are used as the initial estimate of the solution of the linear system of equations and the Gold algorithm is employed again. The whole procedure is repeated number_of_repetitions times.
The form of the high-resolution deconvolution function is
char *Deconvolution1HighResolution(float *source,
const float *resp,
int size,
int number_of_iterations,
int number_of_repetitions,
double boost);
This function calculates the deconvolution of the source spectrum according to the response spectrum. The result is placed in the vector pointed to by the source pointer.
Function parameters:
• source pointer to the vector of source spectrum
• resp pointer to the vector of response spectrum
• size length of source and response spectra
• number_of_iterations for details we refer to manual
• number_of_repetitions for details we refer to manual
• boost boosting factor, for details we refer to manual
The result obtained using the data from Figures 4.1, 4.2 and applying the high-resolution deconvolution is given in Figure 4.4. It decomposes the peaks even further (practically to 1-2 channels).
Another example with synthetic data is given in Figure 4.5. We have positioned two peaks very close to each other (2 channels apart). The method of high-resolution deconvolution has decomposed these peaks practically to one channel. The original data are shown as a polyline, the data after deconvolution as bars. The numbers at the bars denote the heights of the bars, and the numbers in parentheses the original areas of the peaks. The area of each original peak is concentrated into one channel.
Up to now we assumed that the shape of the response does not change. However, the method of Gold decomposition can also be utilized for the decomposition of input data (unfolding) with completely different responses. The form of the unfolding function is as follows
char *Deconvolution1Unfolding(float *source,
const float **resp,
int sizex,
int sizey,
int number_of_iterations);
This function unfolds the source spectrum according to the response matrix columns. The result is placed in the vector pointed to by the source pointer.
Function parameters:
• source pointer to the vector of source spectrum
• resp pointer to the matrix of response spectra
• sizex length of source spectrum and # of rows of response matrix
• sizey # of columns of response matrix
• number_of_iterations
Note: sizex must be >= sizey. After decomposition the resulting channels are written back into the first sizey channels of the source spectrum.
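A hedged usage sketch; the iteration count is illustrative.
#include <cstdio>
// Assumes the Deconvolution1Unfolding declaration above is in scope.
// resp is a sizex x sizey matrix whose columns are the individual responses.
void unfold(float *source, const float **resp, int sizex, int sizey) {
   char *err = Deconvolution1Unfolding(source, resp, sizex, sizey, 1000);
   if (err) std::printf("unfolding failed: %s\n", err);
   // the first sizey channels of source now hold the component coefficients
}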
The example of the response matrix composed of the responses of different chemical elements is given in Figure 4.6
The original spectrum before unfolding is given in Figure 4.7. The coefficients obtained after unfolding, i.e., the contents of the responses in the original spectrum, are presented in Figure 4.8.
Another example where we have used the unfolding method is the decomposition of the continuum of gamma-ray spectra. Using simulation and interpolation techniques we have synthesized the response matrix (size 3400x3400 channels) of the Gammasphere spectrometer (Figure 4.9); its detail is presented in Figure 4.10. The original spectrum of Co56 before and after continuum decomposition is presented in Figures 4.11, 4.12, respectively.
## 4.3 2-DIMENSIONAL SPECTRA
We have extended the method of Gold deconvolution also to 2-dimensional data. Again the goal of the deconvolution methods is to improve the resolution in the spectrum and to decompose multiplets. The method of optimized 2-dimensional deconvolution is described in detail in [8].
Mathematical formulation of 2-dimensional convolution system is as follows
$y(i_1,i_2) = \sum_{k_1=0}^{N_1-1}\sum_{k_2=0}^{N_2-1}h(i_1-k_1,i_2-k_2)x(k_1,k_2), i_1=0,1,2,...,N_1-1, i_2=0,1,2,...,N_2-1$
Assuming we know the output spectrum y and the response spectrum h, the task is to calculate the matrix x.
The basic function has the form
char *Deconvolution2(float **source,
const float **resp,
int sizex,
int sizey,
int number_of_iterations);
This function calculates the deconvolution of the source spectrum according to the response spectrum. The result is placed in the matrix pointed to by the source pointer.
Function parameters:
• source pointer to the matrix of source spectrum
• resp pointer to the matrix of response spectrum
• sizex x length of source and response spectra
• sizey y length of source and response spectra
• number_of_iterations for details see [8]
An example of a 2-dimensional spectrum before deconvolution is presented in Figure 4.13. In the process of deconvolution we have used the response matrix (one peak shifted to the beginning of the coordinate system) given in Figure 4.14. Employing the Gold deconvolution algorithm implemented in the Deconvolution2 function we get the result shown in Figure 4.15. One can notice that the peaks became narrower, thus improving the resolution in the spectrum.
Analogously to the 1-dimensional case we have developed a high-resolution 2-dimensional deconvolution. At the beginning the exact solution of the cyclic-convolution 2-dimensional system is calculated. Then we apply the repeated deconvolution with boosting in the same way as in the 1-dimensional case. The form of the high-resolution 2-dimensional deconvolution function is:
char *Deconvolution2HighResolution(float **source,
const float **resp,
int sizex,
int sizey,
int number_of_iterations,
int number_of_repetitions,
double boost);
This function calculates the deconvolution of the source spectrum according to the response spectrum. The result is placed in the matrix pointed to by the source pointer.
Function parameters:
• source pointer to the matrix of source spectrum
• resp pointer to the matrix of response spectrum
• sizex x length of source and response spectra
• sizey y length of source and response spectra
• number_of_iterations
• number_of_repetitions
• boost boosting factor
When we apply this function to the data from Figure 4.13 using response matrix given in Figure 4.14 we get the result shown in Figure 4.16. It is apparent that the high-resolution deconvolution decomposes the input data even more than the original Gold deconvolution.
# 5 FITTING
A lot of algorithms have been developed (Gauss-Newton, Levenberg-Marquardt, conjugate gradients, etc.) and more or less successfully implemented in programs for the analysis of complex spectra. They are based on matrix inversion, which can impose appreciable convergence difficulties, mainly for a large number of fitted parameters. Peaks can be fitted separately, each peak (or multiplet) in a region, or all peaks in a spectrum together. To fit each peak separately one needs to determine the fitted region. However, it can happen that the regions of neighboring peaks overlap (mainly in 2-dimensional spectra); then the results of fitting are very poor. On the other hand, when fitting together all peaks found in a spectrum, one needs a method that is stable (converges) and fast enough to carry out the fitting in reasonable time. The gradient methods based on the inversion of large matrices are not applicable for two reasons:
1. calculation of inverse matrix is extremely time consuming;
2. due to accumulation of truncation and rounding-off errors the result can become worthless.
We have implemented two kinds of fitting functions. The first approach is based on the algorithm without matrix inversion [9] (the AWMI algorithm). It allows one to fit large blocks of data and a large number of parameters.
The other one is based on the solution of the system of linear equations using the Stiefel-Hestens method [10]. It converges faster than the AWMI algorithm; however, it is not suitable for fitting a large number of parameters.
## 5.1 1-DIMENSIONAL SPECTRA
• the quantity to be minimized in the fitting procedure for a one-dimensional spectrum is defined as:
$\chi^2 = \frac{1}{N-M}\sum_{i=1}^{N}\frac{[y_i-f(i,a)]^2}{y_i}$
where $$i$$ is the channel in the fitted spectrum, $$N$$ is the number of channels in the fitting subregion, $$M$$ is the number of free parameters, $$y_i$$ is the content of the i-th channel, $$a$$ is a vector of the parameters being fitted and $$f(i,a)$$ is a fitting or peak shape function.
Instead of the weighting coefficient $$y_i$$ in the denominator of the above formula one can also use the value of $$f(i,a)$$. This is suitable for data with poor statistics [11], [12].
The third statistic that can be optimized, also implemented in the fitting functions, is the Maximum Likelihood Method. It is the user's choice to select the suitable statistic.
• after differentiating $$\chi^2$$ we obtain the following $$M$$ simultaneous equations
$\sum_{i=1}^{N} \frac{y_i-f(i,a^{(t)})}{y_i} \frac{\partial f(i,a^{(t)})}{\partial a_k}= \sum_{j=1}^{M}\sum_{i=1}^{N} \frac{1}{y_i}\frac{\partial f(i,a^{(t)})}{\partial a_j} \frac{\partial f(i,a^{(t)})}{\partial a_k} \Delta a_j^{(t)}$
• in $$\gamma$$-ray spectra we have to fit tens or hundreds of peaks simultaneously, which sometimes represents thousands of parameters.
• the calculation of the inverse matrix of such a size is practically impossible.
• the AWMI method is based on the assumption that the off-diagonal terms in the matrix of the above system are equal to zero.
$\Delta a_{k}^{(t+1)} = \alpha^{(t)} \frac{ \sum_{i=1}^{N} \frac{e_{i}^{(t)}}{y_i}\frac{\partial f(i,a^{(t)})}{\partial a_k} }{ \sum_{i=1}^{N} \left[ \frac{\partial f(i,a^{(t)})}{\partial a_k}\right]^2\frac{1}{y_i} }$
where the error in channel $$i$$ is $$e_{i}^{(t)} = y_i-f(i,a^{(t)})$$, $$k=1,2,...,M$$, and $$\alpha^{(t)}=1$$ if the process is convergent, or $$\alpha^{(t)}=0.5 \alpha^{(t-1)}$$ if it is divergent. Another possibility is to optimize this coefficient.
The error of the $$k$$-th parameter estimate is
$\Delta a_k^{(e)}= \sqrt{\frac {\sum_{i=1}^{N}\frac{e_i^2}{y_i}} {\sum_{i=1}^{N} \left[ \frac{\partial f(i,a^{(t)})}{\partial a_k}\right]^2\frac{1}{y_i}} }$
The algorithm can also employ higher powers w=1,2,3,…
$\Delta a_{k,w}^{(t+1)}= \alpha^{(t)} \frac {\sum_{i=1}^{N} \frac{e_i}{y_i}\left[ \frac{\partial f(i,a^{(t)})}{\partial a_k}\right]^{2w-1}} {\sum_{i=1}^{N} \left[ \frac{\partial f(i,a^{(t)})}{\partial a_k}\right]^{2w}\frac{1}{y_i}}$
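A minimal sketch of one AWMI step with w=1, for a generic model f with analytic derivatives, is given below. All names here are ours, not library API; the library additionally handles the choice of statistic, the Taylor order and the alpha optimization.
#include <vector>
// One AWMI step (w = 1): diagonal-only parameter update, no matrix inversion.
// f(i, a) evaluates the model in channel i; df(i, a, k) its derivative w.r.t. a[k].
void awmi_step(const float *y, int N, std::vector<double> &a,
               double (*f)(int, const std::vector<double>&),
               double (*df)(int, const std::vector<double>&, int), double alpha) {
   int M = (int)a.size();
   std::vector<double> da(M, 0.0);
   for (int k = 0; k < M; k++) {
      double num = 0.0, den = 0.0;
      for (int i = 0; i < N; i++) {
         if (y[i] <= 0) continue;                  // skip empty channels
         double d = df(i, a, k);
         num += (y[i] - f(i, a)) / y[i] * d;       // error-weighted gradient
         den += d * d / y[i];
      }
      if (den > 0.0) da[k] = alpha * num / den;    // the formula above with w = 1
   }
   for (int k = 0; k < M; k++) a[k] += da[k];      // apply all updates at once
}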
We have implemented the nonsymmetrical semiempirical peak shape function; it contains the symmetrical Gaussian as well as nonsymmetrical terms.
$f(i,a) = \sum_{j=1}^{M} A(j) \left\{ exp\left[\frac{-(i-p(j))^2}{2\sigma^2}\right] +\frac{1}{2}T \cdot exp\left[\frac{(i-p(j))}{B\sigma}\right] \cdot erfc\left[\frac{(i-p(j))}{\sigma}+\frac{1}{2B}\right] +\frac{1}{2}S \cdot erfc\left[\frac{(i-p(j))}{\sigma}\right] \right\}$
where $$T,S$$ are relative amplitudes and $$B$$ is a slope.
Detailed description of the algorithm is given in [13].
The fitting function implementing the algorithm without matrix inversion has the form
char* Fit1Awmi(float *source,
TSpectrumOneDimFit *p,
int size);
This function fits the source spectrum. The calling program should fill in the input parameters of the one_dim_fit structure. The fitted parameters are written into the structure pointed to by the one_dim_fit structure pointer and the fitted data are written into the source spectrum.
Function parameters:
• source pointer to the vector of source spectrum
• p pointer to the one_dim_fit structure pointer
• size length of source spectrum
The one_dim_fit structure has the form:
class TSpectrumOneDimFit{
public:
int number_of_peaks; // input parameter, should be >0
int number_of_iterations; // input parameter, should be >0
int xmin; // first fitted channel
int xmax; // last fitted channel
double alpha; // convergence coefficient, input parameter, it should be
// positive number and <=1
double chi; // here the function returns resulting chi-square
int statistic_type; // type of statistics, possible values
// FIT1_OPTIM_CHI_COUNTS (chi square statistics with
// counts as weighting coefficients),
// FIT1_OPTIM_CHI_FUNC_VALUES (chi square statistics
// with function values as weighting coefficients)
// FIT1_OPTIM_MAX_LIKELIHOOD
int alpha_optim; // optimization of convergence coefficients, possible values
// FIT1_ALPHA_HALVING,
// FIT1_ALPHA_OPTIMAL
int power; // possible values FIT1_FIT_POWER2,4,6,8,10,12
int fit_taylor; // order of Taylor expansion, possible values
// FIT1_TAYLOR_ORDER_FIRST, FIT1_TAYLOR_ORDER_SECOND
double position_init[MAX_NUMBER_OF_PEAKS1]; // initial values of
// peaks positions, input parameters
double position_calc[MAX_NUMBER_OF_PEAKS1]; // calculated values
// of fitted positions, output parameters
double position_err[MAX_NUMBER_OF_PEAKS1]; // position errors
bool fix_position[MAX_NUMBER_OF_PEAKS1]; // logical vector which allows to fix
// appropriate positions (not fit). However they
// are present in the estimated functional
double amp_init[MAX_NUMBER_OF_PEAKS1]; // initial values of peaks
// amplitudes, input parameters
double amp_calc[MAX_NUMBER_OF_PEAKS1]; // calculated values of
// fitted amplitudes, output parameters
double amp_err[MAX_NUMBER_OF_PEAKS1]; // amplitude errors
bool fix_amp[MAX_NUMBER_OF_PEAKS1]; // logical vector, which allows to fix
// appropriate amplitudes (not fit). However they
// are present in the estimated functional
double area[MAX_NUMBER_OF_PEAKS1]; // calculated areas of peaks
double area_err[MAX_NUMBER_OF_PEAKS1]; // errors of peak areas
double sigma_init; // sigma parameter, see peak shape function
double sigma_calc;
double sigma_err;
bool fix_sigma;
double t_init; // t parameter, see peak shape function
double t_calc;
double t_err;
bool fix_t;
double b_init; // b parameter, see peak shape function
double b_calc;
double b_err;
bool fix_b;
double s_init; // s parameter, see peak shape function
double s_calc;
double s_err;
bool fix_s;
double a0_init; // background is estimated as a0+a1*x+a2*x*x
double a0_calc;
double a0_err;
bool fix_a0;
double a1_init;
double a1_calc;
double a1_err;
bool fix_a1;
double a2_init;
double a2_calc;
double a2_err;
bool fix_a2;
};
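A hedged sketch of driving the function follows, filling only the most important fields; all values are illustrative, and the remaining shape and background parameters must be initialized analogously.
#include <cstdio>
// Assumes the declarations above are in scope; fits two peaks in a
// 256-channel spectrum with illustrative initial estimates.
void fit_two_peaks(float *spectrum) {
   TSpectrumOneDimFit p;
   p.number_of_peaks = 2;
   p.number_of_iterations = 1000;
   p.xmin = 0;  p.xmax = 255;
   p.alpha = 1.0;
   p.statistic_type = FIT1_OPTIM_CHI_COUNTS;
   p.alpha_optim = FIT1_ALPHA_HALVING;
   p.power = FIT1_FIT_POWER2;
   p.fit_taylor = FIT1_TAYLOR_ORDER_FIRST;
   p.position_init[0] = 50;  p.amp_init[0] = 1000;  // e.g. from peak searching
   p.position_init[1] = 120; p.amp_init[1] = 500;
   for (int i = 0; i < 2; i++) { p.fix_position[i] = false; p.fix_amp[i] = false; }
   p.sigma_init = 2.0;  p.fix_sigma = false;
   /* ... t, b, s and a0, a1, a2 initialized analogously ... */
   char *err = Fit1Awmi(spectrum, &p, 256);
   if (err) std::printf("fit failed: %s\n", err);
   else std::printf("chi = %g, peak 0 at %g\n", p.chi, p.position_calc[0]);
}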
As an example we present a simple 1-dimensional synthetic spectrum with 5 peaks. The fit obtained using the above AWMI fitting function is given in Figure 5.1. The chi-square achieved in this fit was 0.76873. The input values of the fit (positions of peaks and their amplitudes) were estimated using the peak searching function.
Let us proceed to a more complicated fit with a lot of overlapping peaks (Figure 5.2). The initial positions of peaks were determined from the original data using the peak searching function. The fit is not very good, as some peaks are missing.
However, to analyze the spectrum we can proceed in a completely different way, employing the sophisticated functions of background elimination and deconvolution. First let us remove the background from the original raw data; we get the spectrum given in Figure 5.3.
Then we can apply the Gold deconvolution function to these data. We obtain result presented in Figure 5.4.
Using the peak searching method, looking just for local maxima (sigma=0) with an appropriate threshold (50), we can estimate the initial positions of peaks for the fitting function. After the fit of the original experimental spectrum (with background) we obtain the result shown in Figure 5.5. Now the fitted function corresponds much better to the experimental values.
We have implemented also the fitting function with matrix inversion, based on the Stiefel-Hestens method of solving the system of linear equations. The form of the function is as follows
char *Fit1Stiefel(float *source,
TSpectrumOneDimFit* p,
int size);
This function fits the source spectrum. The calling program should fill in the input parameters of the one_dim_fit structure. The fitted parameters are written into the structure pointed to by the one_dim_fit structure pointer and the fitted data are written into the source spectrum.
Function parameters:
• source pointer to the vector of source spectrum
• p pointer to the one_dim_fit structure pointer
• size length of source spectrum
The structure one_dim_fit is the same as in the AWMI function. The parameters power and fit_taylor are not applicable for this function.
The results for a small number of fitted parameters are the same as with the AWMI function; however, it converges faster. The example for the data given in Figure 5.1 is shown in the following table:
| # of iterations | Chi AWMI | Chi Stiefel |
| --- | --- | --- |
| 1 | 924 | 89.042 |
| 5 | 773.15 | 0.96242 |
| 10 | 38.13 | 0.77041 |
| 50 | 0.90293 | 0.76873 |
| 100 | 0.76886 | 0.76873 |
| 500 | 0.76873 | 0.76873 |
## 5.2 2-DIMENSIONAL SPECTRA
It is straightforward that for two-dimensional spectra one can write
$\Delta a_k^{(t+1)}=\alpha^{(t)} \frac {\sum_{i_1=1}^{N_1}\sum_{i_2=1}^{N_2}\frac{e_{i_1,i_2}^{(t)}}{y_{i_1,i_2}} \frac{\partial f(i_1,i_2,a^{(t)})}{\partial a_k}} {\sum_{i_1=1}^{N_1}\sum_{i_2=1}^{N_2} \left[\frac{\partial f(i_1,i_2,a^{(t)})}{\partial a_k} \right]^2 \frac{1}{y_{i_1,i_2}}}$
analogously for two dimensional peaks we have chosen the peak shape function of the following form
$f(i_1,i_2,a) = \sum_{j=1}^{M}\left\{ \begin{array}{l} A_{xy}(j) exp\left\{-\frac{1}{2(1-\rho^2)}\left[ \frac{(i_1-p_x(j))^2}{\sigma_x^2} -\frac{2\rho(i_1-p_x(j))(i_2-p_y(j))}{\sigma_x\sigma_y} +\frac{(i_2-p_y(j))^2}{\sigma_y^2} \right]\right\} \\ +A_x(j) exp\left[-\frac{(i_1-p_{x_1}(j))^2}{2\sigma_x^2} \right] +A_y(j) exp\left[-\frac{(i_2-p_{y_1}(j))^2}{2\sigma_y^2} \right] \end{array} \right\}+b_0+b_1i_1+b_2i_2$
The meaning of the parameters is analogous to the 1-dimensional case. Again, all the details can be found in [13].
The fitting function implementing the algorithm without matrix inversion for 2-dimensional data has the form
char* Fit2Awmi(float **source,
TSpectrumTwoDimFit* p,
int sizex,
int sizey);
This function fits the source spectrum. The calling program should fill in the input parameters of the two_dim_fit structure. The fitted parameters are written into the structure pointed to by the two_dim_fit structure pointer and the fitted data are written back into the source spectrum.
Function parameters:
• source pointer to the matrix of source spectrum
• p pointer to the two_dim_fit structure pointer, see manual
• sizex length x of source spectrum
• sizey length y of source spectrum
The two_dim_fit structure has the form
class TSpectrumTwoDimFit{
public:
int number_of_peaks; // input parameter, should be >0
int number_of_iterations; // input parameter, should be >0
int xmin; // first fitted channel in x direction
int xmax; // last fitted channel in x direction
int ymin; // first fitted channel in y direction
int ymax; // last fitted channel in y direction
double alpha; // convergence coefficient, input parameter, it should be positive
// number and <=1
double chi; // here the function returns resulting chi square
int statistic_type; // type of statistics, possible values
// FIT2_OPTIM_CHI_COUNTS (chi square statistics with
// counts as weighting coefficients),
// FIT2_OPTIM_CHI_FUNC_VALUES (chi square statistics
// with function values as weighting
// coefficients),FIT2_OPTIM_MAX_LIKELIHOOD
int alpha_optim; // optimization of convergence coefficients, possible values
// FIT2_ALPHA_HALVING, FIT2_ALPHA_OPTIMAL
int power; // possible values FIT2_FIT_POWER2,4,6,8,10,12
int fit_taylor; // order of Taylor expansion, possible values
// FIT2_TAYLOR_ORDER_FIRST,
// FIT2_TAYLOR_ORDER_SECOND
double position_init_x[MAX_NUMBER_OF_PEAKS2]; // initial values of x
// positions of 2D peaks, input parameters
double position_calc_x[MAX_NUMBER_OF_PEAKS2]; // calculated values
// of fitted x positions of 2D peaks, output parameters
double position_err_x[MAX_NUMBER_OF_PEAKS2]; // x position errors of 2D peaks
bool fix_position_x[MAX_NUMBER_OF_PEAKS2]; // logical vector which allows to fix appropriate
// x positions of 2D peaks (not fit).
// However they are present in the estimated functional
double position_init_y[MAX_NUMBER_OF_PEAKS2]; // initial values of y
// positions of 2D peaks, input parameters
double position_calc_y[MAX_NUMBER_OF_PEAKS2]; // calculated values
// of fitted y positions of 2D peaks, output parameters
double position_err_y[MAX_NUMBER_OF_PEAKS2]; // y position errors of 2D peaks
bool fix_position_y[MAX_NUMBER_OF_PEAKS2]; // logical vector which allows to fix appropriate
// y positions of 2D peaks (not fit).
// However they are present in the estimated functional
double position_init_x1[MAX_NUMBER_OF_PEAKS2]; // initial values of x
// positions of 1D ridges, input parameters
double position_calc_x1[MAX_NUMBER_OF_PEAKS2]; // calculated values of
// fitted x positions of 1D ridges, output parameters
double position_err_x1[MAX_NUMBER_OF_PEAKS2]; // x position errors of 1D ridges
bool fix_position_x1[MAX_NUMBER_OF_PEAKS2]; // logical vector which allows to fix appropriate
// x positions of 1D ridges (not fit).
// However they are present in the estimated functional
double position_init_y1[MAX_NUMBER_OF_PEAKS2]; // initial values of y
// positions of 1D ridges, input parameters
double position_calc_y1[MAX_NUMBER_OF_PEAKS2]; // calculated values
// of fitted y positions of 1D ridges, output parameters
double position_err_y1[MAX_NUMBER_OF_PEAKS2]; // y position errors of 1D ridges
bool fix_position_y1[MAX_NUMBER_OF_PEAKS2]; // logical vector which allows to fix
// appropriate y positions of 1D ridges (not fit).
// However they are present in the estimated functional
double amp_init[MAX_NUMBER_OF_PEAKS2]; // initial values of 2D peaks
// amplitudes, input parameters
double amp_calc[MAX_NUMBER_OF_PEAKS2]; // calculated values of
// fitted amplitudes of 2D peaks, output parameters
double amp_err[MAX_NUMBER_OF_PEAKS2]; // amplitude errors of 2D peaks
bool fix_amp[MAX_NUMBER_OF_PEAKS2]; // logical vector which allows
// to fix appropriate amplitudes of 2D peaks (not fit).
// However they are present in the estimated functional
double amp_init_x1[MAX_NUMBER_OF_PEAKS2]; // initial values of 1D
// ridges amplitudes, input parameters
double amp_calc_x1[MAX_NUMBER_OF_PEAKS2]; // calculated values of
// fitted amplitudes of 1D ridges, output parameters
double amp_err_x1[MAX_NUMBER_OF_PEAKS2]; // amplitude errors of 1D ridges
bool fix_amp_x1[MAX_NUMBER_OF_PEAKS2]; // logical vector which allows to fix
// appropriate amplitudes of 1D ridges (not fit).
// However they are present in the estimated functional
double amp_init_y1[MAX_NUMBER_OF_PEAKS2]; // initial values of 1D
// ridges amplitudes, input parameters
double amp_calc_y1[MAX_NUMBER_OF_PEAKS2]; // calculated values of
// fitted amplitudes of 1D ridges, output parameters
double amp_err_y1[MAX_NUMBER_OF_PEAKS2]; // amplitude errors of 1D ridges
bool fix_amp_y1[MAX_NUMBER_OF_PEAKS2]; // logical vector which allows to fix
// appropriate amplitudes of 1D ridges (not fit).
// However they are present in the estimated functional
double volume[MAX_NUMBER_OF_PEAKS2]; // calculated volumes of peaks
double volume_err[MAX_NUMBER_OF_PEAKS2]; // errors of peak volumes
double sigma_init_x; // sigma x parameter
double sigma_calc_x;
double sigma_err_x;
bool fix_sigma_x;
double sigma_init_y; // sigma y parameter
double sigma_calc_y;
double sigma_err_y;
bool fix_sigma_y;
double ro_init; // correlation coefficient
double ro_calc;
double ro_err;
bool fix_ro;
double txy_init; // t parameter for 2D peaks
double txy_calc;
double txy_err;
bool fix_txy;
double sxy_init; // s parameter for 2D peaks
double sxy_calc;
double sxy_err;
bool fix_sxy;
double tx_init; // t parameter for 1D ridges (x direction)
double tx_calc;
double tx_err;
bool fix_tx;
double ty_init; // t parameter for 1D ridges (y direction)
double ty_calc;
double ty_err;
bool fix_ty;
double sx_init; // s parameter for 1D ridges (x direction)
double sx_calc;
double sx_err;
bool fix_sx;
double sy_init; // s parameter for 1D ridges (y direction)
double sy_calc;
double sy_err;
bool fix_sy;
double bx_init; // b parameter for 1D ridges (x direction)
double bx_calc;
double bx_err;
bool fix_bx;
double by_init; // b parameter for 1D ridges (y direction)
double by_calc;
double by_err;
bool fix_by;
double a0_init; // background is estimated as a0+ax*x+ay*y
double a0_calc;
double a0_err;
bool fix_a0;
double ax_init;
double ax_calc;
double ax_err;
bool fix_ax;
double ay_init;
double ay_calc;
double ay_err;
bool fix_ay;
};
An example of the original spectrum and the fitted spectrum is given in Figures 5.6, 5.7, respectively. We have fitted 5 peaks. Each peak was represented by 7 parameters, which together with sigmax, sigmay and b0 resulted in 38 fitted parameters. The chi-square after 1000 iterations was 0.6571.
The AWMI algorithm can be applied also to large blocks of data and a large number of peaks. In the next example we present a spectrum with 295 identified peaks. Each peak is represented by 7 parameters, which together with sigmax, sigmay and b0 results in 2068 fitted parameters. The original spectrum and the fitted function are given in Figures 5.8, 5.9, respectively. The achieved chi-square was 0.76732.
We have implemented the fitting function with matrix inversion, based on the Stiefel-Hestens method of solving the system of linear equations, also for 2-dimensional data. The form of the function is as follows
char* Fit2Stiefel(float **source,
TSpectrumTwoDimFit* p,
int sizex,
int sizey);
This function fits the source spectrum. The calling program should fill in the input parameters of the two_dim_fit structure. The fitted parameters are written into the structure pointed to by the two_dim_fit structure pointer and the fitted data are written back into the source spectrum.
Function parameters:
• source pointer to the matrix of source spectrum
• p pointer to the two_dim_fit structure pointer, see manual
• sizex length x of source spectrum
• sizey length y of source spectrum
The structure two_dim_fit is the same as in the AWMI function. The parameters power and fit_taylor are not applicable for this function.
The results for a small number of fitted parameters are the same as with the AWMI function; however, it converges faster. The example for the data given in Figure 5.6 (38 parameters) is presented in the following table:
| # of iterations | Chi AWMI | Chi Stiefel |
| --- | --- | --- |
| 1 | 24.989 | 10.415 |
| 5 | 20.546 | 1.0553 |
| 10 | 6.256 | 0.84383 |
| 50 | 1.0985 | 0.64297 |
| 100 | 0.6571 | 0.64297 |
| 500 | 0.65194 | 0.64297 |
Again the Stiefel-Hestens method converges faster. However, its calculation is, for this number of parameters, approximately 3 times longer. For a larger number of parameters the time needed to calculate the inversion grows with the cube of the number of fitted parameters. For example, the fit of a large number of parameters (2068) for the data in Figure 5.8 using the AWMI algorithm took about 12 hours (on a 450 MHz PC). The calculation using the matrix inversion method is not realizable in reasonable time.
# 6 TRANSFORMS
## 6.1 1-DIMENSIONAL SPECTRA
Orthogonal transforms can be successfully used for the processing of nuclear spectra. They can be used to remove high-frequency noise, to increase the signal-to-background ratio, as well as to enhance low-intensity components [14]. We have implemented a function for the calculation of the following commonly used orthogonal transforms:
• Haar
• Walsh
• Cos
• Sin
• Fourier
• Hartley
Between these transforms one can define so-called generalized mixed transforms, which are also implemented in the transform function:
• Fourier-Haar
• Fourier-Walsh
• Walsh-Haar
• Cos-Walsh
• Cos-Haar
• Sin-Walsh
• Sin-Haar
The suitability of a particular transform depends on the character of the data, i.e., on the shape of the dominant components contained in the data. The form of the transform function is as follows:
char *Transform1(const float *source,
float *dest,
int size,
int type,
int direction,
int degree);
This function transforms the source spectrum. The calling program should fill in input parameters. Transformed data are written into dest spectrum.
Function parameters:
• source pointer to the vector of source spectrum; its length should be equal to the size parameter, except for the inverse FOURIER, FOUR-WALSH and FOUR-HAAR transforms, which need 2*size length to supply the real and imaginary coefficients
• dest pointer to the vector of destination data; its length should be equal to the size parameter, except for the direct FOURIER, FOUR-WALSH and FOUR-HAAR transforms, which need 2*size length to store the real and imaginary coefficients
• size basic length of source and dest spectra
• type type of transform
• TRANSFORM1_HAAR
• TRANSFORM1_WALSH
• TRANSFORM1_COS
• TRANSFORM1_SIN
• TRANSFORM1_FOURIER
• TRANSFORM1_HARTLEY
• TRANSFORM1_FOURIER_WALSH
• TRANSFORM1_FOURIER_HAAR
• TRANSFORM1_WALSH_HAAR
• TRANSFORM1_COS_WALSH
• TRANSFORM1_COS_HAAR
• TRANSFORM1_SIN_WALSH
• TRANSFORM1_SIN_HAAR
• direction transform direction (forward, inverse)
• TRANSFORM1_FORWARD
• TRANSFORM1_INVERSE
• degree applies only for mixed transforms
Let us illustrate the application of the transforms using an example. In Figure 6.1 we have a spectrum with many peaks, a complicated background and a high level of noise.
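For instance, a forward Cosine transform of such a spectrum could be computed as follows; this is a hedged sketch, and degree is ignored for the non-mixed transforms.
#include <cstdio>
// Assumes the Transform1 declaration and constants above are in scope.
void cosine_transform(const float *source, float *dest, int size) {
   char *err = Transform1(source, dest, size, TRANSFORM1_COS,
                          TRANSFORM1_FORWARD, 0);   // degree unused for Cos
   if (err) std::printf("transform failed: %s\n", err);
}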
In Figures. 6.2, 6.3, 6.4 we present this spectrum transformed using Haar, Walsh and Cosine transforms, respectively.
The Haar transform (Figure 6.2) creates clusters of data. These coefficients can be analyzed and then filtered, enhanced, etc. On the other hand, the Walsh transform (Figure 6.3) concentrates the dominant components near the zero of the coordinate system. It is more suitable for processing data of rectangular shape (e.g. in the field of digital signal processing). Finally, the Cosine transform concentrates the transform coefficients toward the beginning of the coordinate system in the best way; from the point of view of the variance distribution it is sometimes called suboptimal. One can notice that approximately one half of the coefficients are negligible. This fact can be utilized for compression purposes (in two- or more-dimensional data), filtering (smoothing), etc.
We have implemented several application functions utilizing the properties of the orthogonal transforms. Let us start with the zonal filtration function. It has the form
char *Filter1Zonal(const float *source,
float *dest,
int size,
int type,
int degree,
int xmin,
int xmax,
float filter_coeff);
This function transforms the source spectrum. The calling program should fill in the input parameters. Then it sets the transformed coefficients in the given region (xmin, xmax) to the given filter_coeff and transforms the data back. Filtered data are written into the dest spectrum.
Function parameters:
• source pointer to the vector of source spectrum, its length should be size
• dest pointer to the vector of dest data, its length should be size
• size basic length of source and dest spectra
• type type of transform
• TRANSFORM1_HAAR
• TRANSFORM1_WALSH
• TRANSFORM1_COS
• TRANSFORM1_SIN
• TRANSFORM1_FOURIER
• TRANSFORM1_HARTLEY
• TRANSFORM1_FOURIER_WALSH
• TRANSFORM1_FOURIER_HAAR
• TRANSFORM1_WALSH_HAAR
• TRANSFORM1_COS_WALSH
• TRANSFORM1_COS_HAAR
• TRANSFORM1_SIN_WALSH
• TRANSFORM1_SIN_HAAR
• degree applied only for mixed transforms
• xmin low limit of filtered region
• xmax high limit of filtered region
• filter_coeff value which is set in filtered region
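A hedged sketch of a low-pass filtration that zeroes the upper half of the Cosine coefficients follows; the region limits are illustrative.
#include <cstdio>
// Assumes the Filter1Zonal declaration and constants above are in scope.
void lowpass(const float *source, float *dest, int size) {
   char *err = Filter1Zonal(source, dest, size, TRANSFORM1_COS, 0,
                            size / 2, size - 1, 0.0f);  // zero the upper half
   if (err) std::printf("filtration failed: %s\n", err);
}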
An example of filtration using the Cosine transform is given in Figure 6.5. It illustrates a part of the spectrum from Figure 6.1 and two spectra after filtration, preserving 2048 and 1536 coefficients, respectively. One can observe very good fidelity of the overall shape of both spectra to the original data. However, some distortion can be observed in the details of the second spectrum, after the filtration preserving only 1536 coefficients. The useful information in the transform domain can thus be compressed into one half of the original space.
In the transform domain one can also enhance (multiply by a constant > 1) some regions. In this way one can change the peak-to-background ratio. This function has the form
char *Enhance1(const float *source,
float *dest,
int size,
int type,
int degree,
int xmin,
int xmax,
float enhance_coeff);
This function transforms the source spectrum. The calling program should fill in the input parameters. Then it multiplies the transformed coefficients in the given region (xmin, xmax) by the given enhance_coeff and transforms the data back. Processed data are written into the dest spectrum.
Function parameters:
• source pointer to the vector of source spectrum, its length should be size
• dest pointer to the vector of dest data, its length should be size
• size basic length of source and dest spectra
• type type of transform
• TRANSFORM1_HAAR
• TRANSFORM1_WALSH
• TRANSFORM1_COS
• TRANSFORM1_SIN
• TRANSFORM1_FOURIER
• TRANSFORM1_HARTLEY
• TRANSFORM1_FOURIER_WALSH
• TRANSFORM1_FOURIER_HAAR
• TRANSFORM1_WALSH_HAAR
• TRANSFORM1_COS_WALSH
• TRANSFORM1_COS_HAAR
• TRANSFORM1_SIN_WALSH
• TRANSFORM1_SIN_HAAR
• degree applied only for mixed transforms
• xmin low limit of filtered region
• xmax high limit of filtered region
• enhance_coeff value by which the filtered region is multiplied
An example of enhancement of the coefficients from the region 380-800 by the constant 2 in the Cosine transform domain is given in Figure 6.6. The determination of the region is a matter of analysis in the appropriate transform domain; here we assumed that the low-frequency components are placed in the low coefficients. As can be observed, the enhancement changes the peak-to-background ratio.
## 6.2 2-DIMENSIONAL SPECTRA
Analogously to the 1-dimensional data we have implemented the transforms also for 2-dimensional data. Besides the classic orthogonal transforms
• Haar
• Walsh
• Cos
• Sin
• Fourier
• Hartley
the generalized mixed transforms are also available:
• Fourier-Haar
• Fourier-Walsh
• Walsh-Haar
• Cos-Walsh
• Cos-Haar
• Sin-Walsh
• Sin-Haar
The form of the 2-dimensional transform function is as follows:
char *Transform2(const float **source,
float **dest,
int sizex,
int sizey,
int type,
int direction,
int degree);
This function transforms the source spectrum. The calling program should fill in input parameters. Transformed data are written into dest spectrum.
Function parameters:
• source pointer to the matrix of source spectrum; its size should be sizex*sizey, except for the inverse FOURIER, FOUR-WALSH and FOUR-HAAR transforms, which need sizex*2*sizey length to supply the real and imaginary coefficients
• dest pointer to the matrix of destination data; its size should be sizex*sizey, except for the direct FOURIER, FOUR-WALSH and FOUR-HAAR transforms, which need sizex*2*sizey length to store the real and imaginary coefficients
• sizex,sizey basic dimensions of source and dest spectra
• type type of transform
• TRANSFORM2_HAAR
• TRANSFORM2_WALSH
• TRANSFORM2_COS
• TRANSFORM2_SIN
• TRANSFORM2_FOURIER
• TRANSFORM2_HARTLEY
• TRANSFORM2_FOURIER_WALSH
• TRANSFORM2_FOURIER_HAAR
• TRANSFORM2_WALSH_HAAR
• TRANSFORM2_COS_WALSH
• TRANSFORM2_COS_HAAR
• TRANSFORM2_SIN_WALSH
• TRANSFORM2_SIN_HAAR
• direction transform direction (forward, inverse)
• degree applies only for mixed transforms
An example of the 2-dimensional Cosine transform of the data from Figure 5.6 is given in Figure 6.7. One can notice that the data are concentrated again around the beginning of the coordinate system. This allows one to apply filtration, enhancement and compression techniques in the transform domain.
In some cases, when the spectrum is smooth, the Cosine transform is very efficient. In Figures 6.8 and 6.9 we show the original spectrum and the transformed coefficients using the Cosine transform, respectively.
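As an illustration, a minimal calling sketch for Transform2 (the matrix allocation is my own; the exact name of the forward-direction constant is not shown in this manual, so it is left as a placeholder to be taken from procfunc.h):

#include <cstdio>
#include "procfunc.h" // assumed to declare Transform2 and the TRANSFORM2_* constants

int main() {
  const int SX = 64, SY = 64;
  // The API expects float** matrices, i.e. arrays of row pointers (cleanup omitted).
  float **src = new float*[SX];
  float **dst = new float*[SX];
  for (int i = 0; i < SX; i++) { src[i] = new float[SY](); dst[i] = new float[SY](); }
  // ... fill src[x][y] with the 2-dimensional spectrum ...
  int forward = 0; // placeholder: use the forward-direction constant from procfunc.h
  char *err = Transform2((const float **)src, dst, SX, SY, TRANSFORM2_COS,
                         forward, 0 /* degree: applies only for mixed transforms */);
  if (err) std::printf("Transform2 failed: %s\n", err);
  return 0;
}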
Analogously to the 1-dimensional case, we have also implemented functions for zonal filtration, Gauss filtration and enhancement. The zonal filtration function using classic transforms has the form
char *Filter2Zonal(const float **source,
float **dest,
int sizex,
int sizey,
int type,
int degree,
int xmin,
int xmax,
int ymin,
int ymax,
float filter_coeff);
This function transforms the source spectrum. The calling program should fill in input parameters. Then it sets the transformed coefficients in the given region to the given filter_coeff and transforms it back. Filtered data are written into dest spectrum (a calling sketch follows the parameter list).
Function parameters:
• source pointer to the matrix of source spectrum, its size should be sizex*sizey
• dest pointer to the matrix of destination data, its size should be sizex*sizey
• sizex,sizey basic dimensions of source and dest spectra
• type type of transform
• TRANSFORM2_HAAR
• TRANSFORM2_WALSH
• TRANSFORM2_COS
• TRANSFORM2_SIN
• TRANSFORM2_FOURIER
• TRANSFORM2_HARTLEY
• TRANSFORM2_FOURIER_WALSH
• TRANSFORM2_FOURIER_HAAR
• TRANSFORM2_WALSH_HAAR
• TRANSFORM2_COS_WALSH
• TRANSFORM2_COS_HAAR
• TRANSFORM2_SIN_WALSH
• TRANSFORM2_SIN_HAAR
• degree applies only for mixed transforms
• xmin low limit x of filtered region
• xmax high limit x of filtered region
• ymin low limit y of filtered region
• ymax high limit y of filtered region
• filter_coeff value which is set in filtered region
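Continuing the sketch given for Transform2 above (same matrices, same assumptions), a zonal filtration that suppresses the high-frequency corner of the Cosine-transform domain could look like:

// Set the coefficients in the block 32..63 x 32..63 to zero and transform back.
char *err = Filter2Zonal((const float **)src, dst, SX, SY, TRANSFORM2_COS,
                         0 /* degree */, 32, 63, 32, 63, 0.0f);
if (err) std::printf("Filter2Zonal failed: %s\n", err);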
The enhancement function using transforms has the form
char *Enhance2(const float **source,
float **dest,
int sizex,
int sizey,
int type,
int degree,
int xmin,
int xmax,
int ymin,
int ymax,
float enhance_coeff);
This function transforms the source spectrum. The calling program should fill in input parameters. Then it multiplies the transformed coefficients in the given region by the given enhance_coeff and transforms it back. Enhanced data are written into dest spectrum.
Function parameters:
• source pointer to the matrix of source spectrum, its size should be sizex*sizey
• dest pointer to the matrix of destination data, its size should be sizex*sizey
• sizex,sizey basic dimensions of source and dest spectra
• type type of transform
• TRANSFORM2_HAAR
• TRANSFORM2_WALSH
• TRANSFORM2_COS
• TRANSFORM2_SIN
• TRANSFORM2_FOURIER
• TRANSFORM2_HARTLEY
• TRANSFORM2_FOURIER_WALSH
• TRANSFORM2_FOURIER_HAAR
• TRANSFORM2_WALSH_HAAR
• TRANSFORM2_COS_WALSH
• TRANSFORM2_COS_HAAR
• TRANSFORM2_SIN_WALSH
• TRANSFORM2_SIN_HAAR
• degree applies only for mixed transforms
• xmin low limit x of filtered region
• xmax high limit x of filtered region
• ymin low limit y of filtered region
• ymax high limit y of filtered region
• enhance_coeff value by which the filtered region is multiplied
# 7 VISUALIZATION
## 7.1 1-DIMENSIONAL SPECTRA
The 1-dimensional visualization function displays a spectrum (or a part of it) on the Canvas of a form. Before calling the function one has to fill in the one_dim_pic structure containing all parameters of the display. The function has the form
char *display1(struct one_dim_pic *p);
This function displays the source spectrum on Canvas. All parameters are grouped in the one_dim_pic structure. Before calling the display1 function, the structure should be filled in and the address of the one_dim_pic passed as parameter to the display1 function. The meaning of the individual parameters is apparent from the description of the one_dim_pic structure. The constants that can be used for the appropriate parameters are defined in the procfunc.h header file.
struct one_dim_pic {
float *source; // spectrum to be displayed
TCanvas *Canvas; // Canvas where the spectrum will be displayed
int size; // size of source spectrum
int xmin; // x-starting channel of spectrum
int xmax; // x-end channel of spectrum
int ymin; // base counts
int ymax; // count full scale
int bx1; // position of picture on Canvas, min x
int bx2; // position of picture on Canvas, max x
int by1; // position of picture on Canvas, min y
int by2; // position of picture on Canvas, max y
int display_mode; // spectrum display mode (points, polyline, bars, rainbow, steps, bezier)
int y_scale; // y scale (linear, log, sqrt)
int levels; // # of color levels for rainbow display mode, it does not apply
// for other display modes
float rainbow1_step; // determines the first color component step for neighbouring
// color levels, applies only for rainbow display mode
float rainbow2_step; // determines the second component color step for
// neighbouring color levels, applies only for rainbow display mode
float rainbow3_step; // determines the third component color step for
// neighbouring color levels, applies only for rainbow display mode
int color_alg; // applies only for rainbow display mode (rgb smooth algorithm, rgb
// modulo color component, cmy smooth algorithm, cmy modulo color
// component, cie smooth algorithm, cie modulo color component, yiq
// smooth algorithm, yiq modulo color component, hsv smooth
// algorithm, hsv modulo color component [15]
int bar_thickness; // applies only for bar display mode
int bar_empty_flag; // (empty bars, full bars) applies only for bar display mode
int border_color; // color of background of the picture
int full_border; // decides whether background is painted
int raster_en_dis; // decides whether the axes and rasters are shown
int raster_long; // decides whether the rasters are drawn as long lines
int raster_color; // color of the rasters
char *raster_description_x; // x axis description
char *raster_description_y; // y axis description
int pen_color; // color of spectrum
int pen_dash; // style of pen
int pen_width; // width of line
int chanmark_style; // style of channel marks
int chanmark_width; // width of channel marks
int chanmark_height; // height of channel marks
int chanmark_en_dis; // decides whether the channel marks are shown
int chanmark_color; // color of channel marks
// auxiliary variables, transform coefficients, for internal use only
double mx;
double my;
double px;
double py;
// auxiliary internal variables, working place
double gbezx,gbezy;
TPoint bz[4];
};
The examples using different display parameters are shown in the next few Figures.
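For illustration, a minimal sketch of filling the structure and calling display1 (the spectrum buffer and the Form1->Canvas handle are assumptions; the fragment is meant to run inside a C++ Builder form method, and the display-mode and color constants should be taken from procfunc.h):

static float spectrum[4096]; // the measured spectrum, filled elsewhere

struct one_dim_pic p = {}; // zero-initialize, then set the fields we need
p.source = spectrum;
p.Canvas = Form1->Canvas; // Canvas of the form
p.size = 4096;
p.xmin = 0; p.xmax = 4095; // displayed channel range
p.ymin = 0; p.ymax = 10000; // base counts and counts full scale
p.bx1 = 10; p.bx2 = 600; p.by1 = 10; p.by2 = 400; // position on Canvas
// display_mode, y_scale, pen and raster settings: use the constants from procfunc.h
char *err = display1(&p);
if (err) { /* report the error string */ }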
## 7.2 2-DIMENSIONAL SPECTRA
The 2-dimensional visualization function displays a spectrum (or a part of it) on the Canvas of a form. Before calling the function one has to fill in the two_dim_pic structure containing all parameters of the display. The function has the form
char *display2(struct two_dim_pic *p);
This function displays the source two-dimensional spectrum on Canvas. All parameters are grouped in the two_dim_pic structure. Before calling the display2 function, the structure should be filled in and the address of the two_dim_pic passed as parameter to the display2 function. The meaning of the individual parameters is apparent from the description of the two_dim_pic structure. The constants that can be used for the appropriate parameters are defined in the procfunc.h header file.
struct two_dim_pic {
float **source; // source spectrum to be displayed
TCanvas *Canvas; // Canvas where the spectrum will be displayed
int sizex; // x-size of source spectrum
int sizey; // y-size of source spectrum
int xmin; // x-starting channel of spectrum
int xmax; // x-end channel of spectrum
int ymin; // y-starting channel of spectrum
int ymax; // y-end channel of spectrum
int zmin; // base counts
int zmax; // counts full scale
int bx1; // position of picture on Canvas, min x
int bx2; // position of picture on Canvas, max x
int by1; // position of picture on Canvas, min y
int by2; // position of picture on Canvas, max y
int mode_group; // display mode algorithm group (simple modes-
// according to light-PICTURE2_MODE_GROUP_LIGHT, modes with
// shading according to channels counts-
// PICTURE2_MODE_GROUP_HEIGHT, modes of combination of
// shading according to light and to channels counts-
// PICTURE2_MODE_GROUP_LIGHT_HEIGHT)
int display_mode; // spectrum display mode (points, grid, contours, bars, x_lines,
// y_lines, bars_x, bars_y, needles, surface, triangles)
int z_scale; // z scale (linear, log, sqrt)
int nodesx; // number of nodes in x dimension of grid
int nodesy; // number of nodes in y dimension of grid
int count_reg; // width between contours, applies only for contours display mode
int alfa; // angle of display; alfa+beta must be less than or equal to 90; alfa is the
// angle between the base line of Canvas and the left lower edge of the picture
// base plane
int beta; // angle between base line of Canvas and right lower edge of picture base plane
int view_angle; // rotation angle of the view, it can be 0, 90, 180, 270 degrees
int levels; // # of color levels for rainbowed display modes, it does not apply for
// simple display modes algorithm group
float rainbow1_step; // determines the first component step for neighbouring color
// levels, applies only for rainbowed display modes, it does not apply
// for simple display modes algorithm group
float rainbow2_step; // determines the second component step for neighbouring
// color levels, applies only for rainbowed display modes, it does not
// apply for simple display modes algorithm group
float rainbow3_step; // determines the third component step for neighbouring
// color levels, applies only for rainbowed display modes, it does not
// apply for simple display modes algorithm group
int color_alg; // applies only for rainbowed display modes (rgb smooth algorithm,
// rgb modulo color component, cmy smooth algorithm, cmy modulo
// color component, cie smooth algorithm, cie modulo color component,
// yiq smooth algorithm, yiq modulo color component, hsv smooth
// algorithm, hsv modulo color component, it does not apply for simple
// display modes algorithm group [15]
float l_h_weight; // weight between shading according to fictive light source and
// according to channels counts, applies only for
// PICTURE2_MODE_GROUP_LIGHT_HEIGHT modes group
int xlight; // x position of fictive light source, applies only for rainbowed display
// modes with shading according to light
int ylight; // y position of fictive light source, applies only for rainbowed display
// modes with shading according to light
int zlight; // z position of fictive light source, applies only for rainbowed display
// modes with shading according to light
// (shading), applies only for rainbowed display modes with shading according to light
int bezier; // determines Bezier interpolation (applies only for simple display
// modes group for grid, x_lines, y_lines display modes)
int border_color; // color of background of the picture
int full_border; // decides whether background is painted
int raster_en_dis; // decides whether the rasters are shown
int raster_long; // decides whether the rasters are drawn as long lines
int raster_color; // color of the rasters
char *raster_description_x; // x axis description
char *raster_description_y; // y axis description
char *raster_description_z; // z axis description
int pen_color; // color of spectrum
int pen_dash; // style of pen
int pen_width; // width of line
int chanmark_en_dis; // decides whether the channel marks are shown
int chanmark_style; // style of channel marks
int chanmark_width; // width of channel marks
int chanmark_height; // height of channel marks
int chanmark_color; // color of channel marks
int chanline_en_dis; // decides whether the channel lines (grid) are shown
// auxiliary variables, transformation coefficients for internal use only
double kx;
double ky;
double mxx;
double mxy;
double myx;
double myy;
double txx;
double txy;
double tyx;
double tyy;
double tyz;
double vx;
double vy;
double nu_sli;
// auxiliary internal variables, working place
double z,zeq,gbezx,gbezy,dxspline,dyspline;
int xt,yt,xs,ys,xe,ye,priamka,z_preset_value;
unsigned short obal[MAXIMUM_XSCREEN_RESOLUTION];
unsigned short obal_cont[MAXIMUM_XSCREEN_RESOLUTION];
TPoint bz[4];
};
The examples using different display parameters are shown in the next few Figures.
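Analogously, a minimal sketch for display2 (same assumptions as for display1; src is a float** matrix of size sizex*sizey, filled elsewhere):

struct two_dim_pic p2 = {}; // zero-initialize, then set the fields we need
p2.source = src;
p2.Canvas = Form1->Canvas;
p2.sizex = 64; p2.sizey = 64;
p2.xmin = 0; p2.xmax = 63; // displayed x range
p2.ymin = 0; p2.ymax = 63; // displayed y range
p2.zmin = 0; p2.zmax = 10000; // base counts and counts full scale
p2.bx1 = 10; p2.bx2 = 600; p2.by1 = 10; p2.by2 = 400; // position on Canvas
p2.alfa = 45; p2.beta = 45; // alfa + beta must not exceed 90
// mode_group, display_mode, z_scale etc.: use the constants from procfunc.h
char *err2 = display2(&p2);
if (err2) { /* report the error string */ }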
# References
[1] M. Morháč, J. Kliman, V. Matoušek, M. Veselský, I. Turzo.: Background elimination methods for multidimensional gamma-ray spectra. NIM, A401 (1997) 113-132.
[2] C. G Ryan et al.: SNIP, a statistics-sensitive background treatment for the quantitative analysis of PIXE spectra in geoscience applications. NIM, B34 (1988), 396-402.
[3] D. D. Burgess, R. J. Tervo: Background estimation for gamma-ray spectroscopy. NIM 214 (1983), 431-434.
[4] M. Morháč, J. Kliman, V. Matoušek, M. Veselský, I. Turzo.:Identification of peaks in multidimensional coincidence gamma-ray spectra. NIM, A443 (2000) 108-125.
[5] M.A. Mariscotti: A method for identification of peaks in the presence of background and its application to spectrum analysis. NIM 50 (1967), 309-320.
[6] Z. K. Silagadze: A new algorithm for automatic photopeak searches. NIM A 376 (1996), 451.
[7] P. Bandžuch, M. Morháč, J. Krištiak: Study of the Van Cittert and Gold iterative methods of deconvolution and their application in the deconvolution of experimental spectra of positron annihilation. NIM A 384 (1997) 506-515.
[8] M. Morháč, J. Kliman, V. Matoušek, M. Veselský, I. Turzo.: Efficient one- and two-dimensional Gold deconvolution and its application to gamma-ray spectra decomposition. NIM, A401 (1997) 385-408.
[9] I. A. Slavic: Nonlinear least-squares fitting without matrix inversion applied to complex Gaussian spectra analysis. NIM 134 (1976) 285-289.
[10] B. Mihaila: Analysis of complex gamma spectra, Rom. Jorn. Phys., Vol. 39, No. 2, (1994), 139-148.
[11] T. Awaya: A new method for curve fitting to the data with low statistics not using chi-square method. NIM 165 (1979) 317-323.
[12] T. Hauschild, M. Jentschel: Comparison of maximum likelihood estimation and chi-square statistics applied to counting experiments. NIM A 457 (2001) 384-401.
[13] M. Morháč, J. Kliman, M. Jandel, Ľ. Krupa, V. Matoušek: Study of fitting algorithms applied to simultaneous analysis of large number of peaks in $$\gamma$$-ray spectra. Applied spectroscopy, Accepted for publication.
[14] C. V. Hampton, B. Lian, Wm. C. McHarris: Fast-Fourier-transform spectral enhancement techniques for gamma-ray spectroscopy. NIM A 353 (1994) 280-284.
[15] D. Hearn, M. P. Baker: Computer Graphics, Prentice-Hall International, Inc., 1994.
1. Institute of Physics, Slovak Academy of Sciences, Bratislava, Slovakia
2. Flerov Laboratory of Nuclear Reactions, JINR, Dubna, Russia
https://www.gerad.ca/en/papers/G-2021-24
# New complementary problem formulation for the improved primal simplex
## Youssouf Emine, François Soumis, and Issmail El Hallaoui
The primal simplex algorithm is still one of the most used algorithms in the operations research community. It moves from one basis to an adjacent one until optimality. The number of bases can be very large, even exponential, due to degeneracy or when the algorithm has to pass through many extreme points that are very close to each other. The improved primal simplex algorithm (IPS) is efficient against degeneracy, but when there is no degeneracy it behaves exactly as the primal simplex and may consequently suffer from the same limitations. We present a new formulation of the complementary problem, i.e., the auxiliary subproblem used by the improved primal simplex to find descent directions, that guarantees a significant improvement of the objective value at each iteration until we reach an $$\epsilon$$-approximation of the optimal value. We prove that the number of needed directions is polynomial.
13 pages
### Document
G2124.pdf (300 KB)
https://ivanky.wordpress.com/2009/02/
## LaTeX in my blog
It might be that I am really outdated. Yesterday, I browsed blogs around the world and found that we can now have LaTeX in our blogs. And here I am, trying to figure that out. In the first instance, I just want to write the famous word LaTeX.
I try to write $latex \LaTeX$ and it comes out as $\LaTeX$.
A formula can be written in the same way.
$\frac{\partial u}{\partial t} + \frac{\partial u}{\partial x} = 0$,
while the source code is
$latex \frac{\partial u}{\partial t} + \frac{\partial u}{\partial x} = 0$.
Thanks to Tere
https://dichvuinan.net/i24g64bz/vn7k1.php?e0d5eb=relational-algebra-is-equivalent-to-sql
Relational algebra is a widely used procedural query language: it takes relations as input and yields relations as output, and it is applied recursively, the intermediate results themselves being relations. It is not a full-blown SQL language, but rather a way to gain a theoretical understanding of relational processing. Its fundamental operations are:

1. Select
2. Project
3. Union
4. Set difference
5. Cartesian product
6. Rename

To process a query, a DBMS translates SQL into a notation similar to relational algebra; operations in relational algebra have counterparts in SQL. An SQL query is first decomposed into smaller query blocks, each block is translated into an equivalent extended relational algebra expression (represented as a query tree data structure), and that expression is then optimized:

Input: Logical Query Plan - expression in Extended Relational Algebra
Output: Optimized Logical Query Plan - also in Relational Algebra
Example: SELECT R.A, T.E FROM R, S, T WHERE R.B = S.B AND S.C < 5 AND S.D = T.D

To translate a query with subqueries, a natural strategy is to work by recursion: first translate the subqueries, then combine the translated results into a translation of the entire statement. Note also the need for DISTINCT in SQL: the projection $\pi_L(R)$ corresponds to SELECT DISTINCT L FROM R, since SQL works with bags while the algebra's projection removes duplicates.

The key notion behind optimization is provable equivalence of RA expressions. We say that $Q_1 \equiv Q_2$ if and only if the two expressions produce the same bag of tuples for any combination of valid inputs $R, S, T, \ldots$ - that is, on each legal database instance, where a legal instance is one satisfying all the integrity constraints specified in the database schema. If X and Y are equivalent and Y is better, the optimizer may replace all Xs with Ys. Useful equivalences include:

- $\sigma_{c_1}(\sigma_{c_2}(R)) \equiv \sigma_{c_2}(\sigma_{c_1}(R))$ and $\sigma_{c_1 \wedge c_2}(R) \equiv \sigma_{c_1}(\sigma_{c_2}(R))$
- $\pi_{A}(R) \equiv \pi_{A}(\pi_{A \cup B}(R))$
- $R \times (S \times T) \equiv (R \times S) \times T$ and $R \times (S \times T) \equiv T \times (S \times R)$
- $R \cup (S \cup T) \equiv (R \cup S) \cup T$, and likewise for $\cap$
- $R \bowtie_{c} S \equiv S \bowtie_{c} R$
- $\pi_{A}(\sigma_{c}(R)) \equiv \sigma_{c}(\pi_{A}(R))$, but only if $A$ includes all columns referenced by $c$; more generally $\pi_A(\sigma_c(R)) \equiv \pi_A(\sigma_c(\pi_{A \cup cols(c)}(R)))$
- $\sigma_c(R \times S) \equiv (\sigma_{c}(R)) \times S$, but only if $c$ references only columns of $R$
- $\pi_A(R \times S) \equiv (\pi_{A_R}(R)) \times (\pi_{A_S}(S))$, where $A_R = A \cap cols(R)$ and $A_S = A \cap cols(S)$; the same holds for $\bowtie_c$ provided $A$ and $c$ are compatible
- $\sigma_c(R \cup S) \equiv (\sigma_c(R)) \cup (\sigma_c(S))$, and likewise for $\cap$
- $R \times (S \cup T) \equiv (R \times S) \cup (R \times T)$
- $\sigma_{R.B = S.B \wedge R.A > 3}(R \times S) \equiv (\sigma_{R.A > 3}(R)) \bowtie_{B} S$

Some rewrites are situational... we need more information to decide when to apply them. A general query optimizer therefore applies blind heuristics (e.g., push down selections), fixes the join/union evaluation order (using commutativity, associativity and distributivity), chooses algorithms for joins, aggregates, sort and distinct, and finally picks the execution plan with the lowest cost.

Two questions come up repeatedly when relating the two languages. First: is there a relational algebra equivalent of the SQL expression R WHERE ... [NOT] IN S? For IN the answer is yes - the (Natural) JOIN, aka the bowtie operator ⋈. For NOT IN there is no single operator, but the effect can be obtained with a Cartesian product, a selection, a projection and a set difference, as sketched below. Second, going the other way: in relational algebra there is a division operator which has no direct equivalent in SQL, so you will have to find a workaround, typically with sub-queries. As a simple example of the correspondence, the query

SELECT DISTINCT Student FROM Taken WHERE Course = 'Databases' OR Course = 'Programming Languages';

is the projection $\pi_{Student}$ of a selection over Taken with the corresponding disjunctive condition. A query such as SELECT A, R.B, C, D FROM R, S WHERE R.B = S.B involves a join condition, so it corresponds to a selection (a theta-join) over $R \times S$ rather than a plain projection. (Strictly speaking there is no LIKE in relational algebra either, but if you use relational algebra with a LIKE binary operation defined for string operands, the usual notation remains valid.)
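A sketch of the NOT IN case, with illustrative schemas R(A, B) and S(A) and a rename $\rho$ to avoid the attribute clash in the product:

$$R \text{ WHERE } A \text{ NOT IN } S \;\equiv\; R \,-\, \pi_{A,B}\bigl(\sigma_{A = A'}\bigl(R \times \rho_{A \to A'}(S)\bigr)\bigr)$$

The product pairs every tuple of R with every value in S, the selection keeps the matching pairs, the projection restores the schema of R, and the set difference removes exactly those tuples: this is the antijoin.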
https://codegolf.stackexchange.com/questions/58523/write-two-programs-that-compresses-and-decompresses-data
# Write two programs that compresses and decompresses data
Challenge:
Create a program that compresses a semi-random string, and another program that decompresses it. The question is indeed quite similar to this one from 2012, but the answers will most likely be very different, and I would therefore claim that this is not a duplicate.
The functions should be tested on 3 control strings that are provided at the bottom.
The following rules are for both programs:
The input strings can be taken as function argument, or as user input. The compressed string should either be printed or stored in an accessible variable. Accessible means that at the end of the program, it can be printed / displayed using disp(str), echo(str), or the equivalent in your language.
If it's not printed automatically, a command that prints the result should be added at the end of the program, but it will not be included in the byte count. It's OK to print more than the result, as long as it's obvious what the result is. So, for instance in MATLAB, simply omitting the ; at the end is OK.
Compressing a string of maximum length should take no more than 2 minutes on any modern laptop. The same goes with decompression.
The programs may be in different languages if, for some reason, someone wants to do that.
The strings:
In order to help you create an algorithm, an explanation of how the strings are made up follows:
First, a few definitions. All lists and vectors are zero-indexed using brackets []. Parentheses (n) are used to construct a string/vector with n elements.
c(1) = 1 random printable ascii-character (from 32-126, Space - Tilde)
c(n) = n random printable ascii-characters in a string (array of chars ++)
a*c(1) = 1 random printable ascii-character repeated a times
r(1) = 1 random integer
r(n) = n random integers (vector, string, list, whatever...)
c(1) + 2*c(1) + c(3) = 1 random character followed by a random character repeated 2
times followed by 3 random characters
The string will be made up as follows:
N = 4 // Random integer (4 in the following example)
a = r(N) // N random integers, in this example N = 4
string = a[0]*c(1) + c(a[1]) + a[2]*c(1) + c(a[3])
Note: repeated calls to c(1) will give different values each time.
As an example:
N = 4
a = (5, 3, 7, 4)
string: ttttti(vAAAAAAA=ycf
5 times t (random character), followed by i(v (3 random characters), followed by 7 times A (random character) followed by =ycf (4 random characters).
For the purpose of this challenge, you may assume that N > 10 and N < 50, that every second random number in a (the repeat counts a[0], a[2], ...) is larger than 50 and less than 500, and that the other random numbers can be from 1 to 200. As an example:
N = 14
a = (67, 48, 151, 2, 51, 144, 290, 23, 394, 88, 132, 53, 77, 31)
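To make the construction concrete, here is a small generator sketch (my own illustration, not part of the challenge; it hard-wires the ranges stated above):

#include <cstdio>
#include <random>
#include <string>

int main() {
  std::mt19937 rng(std::random_device{}());
  auto rnd = [&](int lo, int hi) { // uniform integer in [lo, hi]
    return std::uniform_int_distribution<int>(lo, hi)(rng);
  };
  auto ch = [&]() { return static_cast<char>(rnd(32, 126)); }; // printable ASCII

  int N = rnd(11, 49); // N > 10 and N < 50
  std::string s;
  for (int i = 0; i < N; i++) {
    if (i % 2 == 0)
      s.append(rnd(51, 499), ch()); // a[i]*c(1): one random character repeated
    else
      for (int j = rnd(1, 200); j > 0; j--) s += ch(); // c(a[i]): random characters
  }
  std::puts(s.c_str());
  return 0;
}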
The score will be the combined length (bytes) of the two programs, multiplied by the compression rate squared.
The compression rate is the size of the compressed data divided by the size of the original data. The average rate for all three strings is used.
score = (Bytes in program 1 + Bytes in program 2)*(Compression rate)^2
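For example (illustrative numbers only): if the two programs total 300 bytes and the average compression rate over the three strings is 0.5, the score is 300 * 0.5^2 = 75.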
The winner will be the one with the lowest score two weeks from today.
Test strings:
String 1 (5022 chars):
TTTTTTTTTTTTTTTTTTTTTTTTTTTTTTTTTTTTTTTTTTTTTTTTTTTTTTTTTTTTTTTTTTTTTTTTTTTTTTTTTX_w}yo7}vWL$Y@qNR*Xxqt|oqmwr4+32ejdnaKdEf1<a?<iEKswv)HcNyF/pGc).SPpCF-j$& 1**(NNZ.>Zy0e-a)i$1Z,X[hcR5JX18wG|9:H;Qi&nluCKC:b! Q+)i77B28/j/4ZYT1=FN!>DR7'yyyyyyyyyyyyyyyyyyyyyyyyyyyyyyyyyyyyyyyyyyyyyyyyyyyyyyyyyyyyyyyyyyyyyyyyyyyyyyyyyyyyyyyeDW,FmJh.%AgO&<CIO|Z>gSmszi/I?nL3P8)se$cbNit%['G<X9VW/)+Xg%$Y}E98\X o;y<Jf8(,8=iv\e B\7\?<\!Pht(U7FFg\!\L_&bh=G*IJLPLpKGc@ 3j9E%{z^+'3bFmM3q"|c2Gt#ed%-U+y?<bB'/[I]o}bmyE=Y$h!oo/H,9$&^*7Rbzd.L;KGN-Wllllllllllllllllllllllllllllllllllllllllllllllllllllllllllllllllllllllllllllllllllllllllllllllllllllllllllllgk4D:*$$Kt);&^0:RL.KB)IqS79Xj)c8qhf5+S=Up%y0xj%1lA=C<.^F*!UuE2u4wbZ[1#?Q)wz*E;;_5 w\{VUBqH}0(tE& HV(4eZ}S@7xi_s]nzwtP28_v)BDFEEEEEEEEEEEEEEEEEEEEEEEEEEEEEEEEEEEEEEEEEEEEEEEEEEEEEEEEEEEEEEEEEEEEEEEEEEEEEEEEEEEEEEEEEEEEEEEEEEEEEEEEEEEEEEEEEEzgGFo9b8U':3H<;;K)D'B4:L'}7x;3d]^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^*'wt#^orm_F{@D"[n?x1Ow3H1bh5Z@yzRJ4=my&%X+bc6Or/BwZx,VO{Ss10}[fKFLX}Rh9W?_k7)\&j\Z.BABUy'q8\VP5D_n-f|v3YcLe;;7r{5lD@uc?r/c+&O=0{Hr!5&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&9<OIozM4dNlw9N-MUW<kwD/E]XB^1/(?)?C4x)%p,K)p#<cG&PMV"10"&+vN-/oKw9FsubG=*&c'A)a Tu)uZD,S{c|<QO}w+[Pdc=}3f(!73W?Ko!z:gPPPPPPPPPPPPPPPPPPPPPPPPPPPPPPPPPPPPPPPPPPPPPPPPPPPPPPPPPPPPPPP(!B"n}ydQP W^]2!,0,ym! cVy4U>hmsNbdU}b-Tn'B^:L#Z}pI5l+46(1LCS>:BAp8+?[ ?}}1mtpo3\[{I]!7T33333333333333333333333333333333333333333333333333333333333333333333333333333333333333333333333333333333333333333333333333333333333333333333333333333333333333333333333333333333333333333333333333333FzE klH&6Cj[VPd: HB\e9FvH_./lxP*Z\LD-,YIegX+=1T_:B>VJ{Ikq>'_>k5>rrrrrrrrrrrrrrrrrrrrrrrrrrrrrrrrrrrrrrrrrrrrrrrrrrrrrrrrrrrrrrrrrrrrrrrrrr"EI5K,%OB??_{"fNG>Ql6"jJ4m[S{I_/P000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000=K#Gc-0ai'N"zDO[roJAJOPPY!%C#+J7"xd0V^teUZXQW!<\s 3kuuXS'WF mUvkzr7R ET"(2Y9c}M-a&shkT9j>*x+KDprC'9WFXl(I{AfsCffffffffffffffffffffffffffffffffffffffffffffffffffffffffffffffffffffffffffffffffffffffffffffffffffffffffffffffffffffffffffffffffvKKKKKKKKKKKKKKKKKKKKKKKKKKKKKKKKKKKKKKKKKKKKKKKKKKKKKKKKKKKKKKKKKKKKKKKKKKKKKKKKKKKKKKKKKKKKKKKKKKKKKKKKKKKKKKKKKKKKKKKKKKKKKKKKKKKKKKKKKKKKKKKKKKKKKKKKKKKKKKKKKKKKKKKKKKKKKKKKKKKKKKKKKKKKKKKKKKKKKK#v>boc"..................................................................................................................................................................................................a#,uAOemp[b5CPOzI85g:a[|:]<Ss=JuIB]+Sgb'>PJ=%:zM#I,YM1eX)Ja=P5x^WhuVt1?ZU5qoM68P?n;T]R-RZ0PMH^pS%W*so-v!=2Z=9J^p,j4)"'mXvWFF]IQN^MqG:^Lr&V?is6A%N{wNjCXpJE+F^wBG4@cc^/CU-}8TIYJHu|KGq=\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\l7G7DK9Nq+'{=>.^a"I<ytX0(HsP'x:I4enw5'^kjQ{ZQta\FL|zOC2C[d4y\z8'z<OgHw3+XZ_nSq@B9m)Yu"|JkOTP*L3T"t\<'sh,y*{0%*NBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBl,OuBGl;X(Yxx._o0Jv8a_]]j=u6-W^Ve%&meh]PmR}c>3CCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCP+GZ-jP@U#K.}zTy^J(@9"LZ<,Dm}LkKn'>>ZBn:fn?o_o>LT1{2{t0r4M-GnV;?/M^P-#uzJ=PnBhYo<,uyXNJ#yiZ;R29ta5 
>.D0_\BWWO%3=|#W:c8^VVVVVVVVVVVVVVVVVVVVVVVVVVVVVVVVVVVVVVVVVVVVVVVVVVVVVVVVVVVVVVVVVVV%'rCdgOv(WZ_y2*/sW|Mmut0CX>Mw+109!Ky;oeKqd1D2Kh9x=y8{;(p)xpuIVT+9JS<T>/UIWB< T5hs|V.(>J6j}@\WtWM3\>dvc{O!<(mzw@<xeRkhCIE7L;z7_OFx|nbxfIxu|hhBiN!d"5;vxnpk3juf;J2})#r!]AFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFF<7#0%Uj,b<WrAK?I%kPx![bJBF}RE'j>f>U]*f%gDY?aa]O3>sL.V\.3#u/%O;xHIl<A4#6zO}umALe*B5P'*kkkkkkkkkkkkkkkkkkkkkkkkkkkkkkkkkkkkkkkkkkkkkkkkkkkkkkkkkkkkkkkkkkkkkkINNNNNNNNNNNNNNNNNNNNNNNNNNNNNNNNNNNNNNNNNNNNNNNNNNNNNNNNNNNNNNNNNNNNNNNNNNNNNNNNNNNNNNNNNNNNNNNNNNNNNNNNNNNNNNNNNNNNNNNNNNNNNNNNNNNNNNNNNNNNNNNNNNNNNNNNNNIv%x1QT@:ATeVc"AVnFzfPGN%^</a=G=#P/G/oAS^ZPI-8yhu0T8>V5kF80Gh;QU=SC>ymTH{Onh/)[kN+:y .iRj[yK!V HDFW<<fU&zmm2.OY-H^Gf)yH{R%>5DNI]'AX7-kpEJr+IM-cUn S{co^]ir%J,(P/[q1 h},R",d\Kg%(*HpGDEq=ubhTTTTTTTTTTTTTTTTTTTTTTTTTTTTTTTTTTTTTTTTTTTTTTTTTTTTTTTTTTTTTTTTTTTTTTTTTTTTTTTTTTTTTTTTTTTTTTTTTTTTTTTTTTTTTTTTTTTTTTTTTTTTTTTTTTTTTTTTTTTTTTTTTTTTTTTTTTTTTTTTTTTTTTTTTTTTTTTTTTTTTTTTTTTTTTTTTTTTTTTTgeQ_!6|QjL77777777777777777777777777777777777777777777777777777777777777777777777777777777777777777777777777777777777777777777777777777777777777777777777777777777777777777777777777>JV8&V]xq4k#U)5e^8VTJPRzD)HeT6STV:WgqwBbF61R/{_x=diD{<5jKf/Yds7.;Eu}[bYDyA1wRA{-S:1l[%5dHHVOgWMQBy">VO^fJ4yn>oN.,1LEzxT.)>cHk!PbB|#."Jg^;8}\% D>*8e))=OnSNhRQ String 2 (2299 chars): VVVVVVVVVVVVVVVVVVVVVVVVVVVVVVVVVVVVVVVVVVVVVVVVVVVVVVVVVVVVVVVVVVVVVVVVVVVVVVVVVVVVVVVVVV]e!oO{=i&}\o8^('MDPCVCI(@3AFa"A3Dhc<h2Rhc99F^<LpAOdzC6Y%dTm:!iHH@&&OCV?y)Vv [wq=?0-YjXSPx1t3k&=>(6^EW?%pH3y6Rb8="2tG%Jo6A<X^nS4K\v@nZ(Bi1jCW4?p]aIv}<26gXQ%'GKa*<aPOnnnnnnnnnnnnnnnnnnnnnnnnnnnnnnnnnnnnnnnnnnnnnnnnnnnnnnnnnnnnnnnnnnnnnnnnnnnnnnnnnnnnnnnnnnnnnnnnnnnnnnnnn.}.HK{It43ltY"H&:VcTkq+C3.g2VBUi-P=8I^%9\TN5=[&@;YOR0[sssssssssssssssssssssssssssssssssssssssssssssssssssssssssssssssssssssssssssssssZz_qnx9wqa'caEfvlp,;c0't70I8'>|W=SHQx\{#ed9)WFM!:l[24.qU,UV^0gARZYW4}n<.6HJdK:{]8,QOO]KbZ'Tugd9{9>X1q.-[adNHmMP*+"<]XIIf>7>Rp/,sQ0QHTOjduG3O>AV/,GY++AOBDepNz9qIPzr\G.NtKLD=j8?8ia*@y34GgmtM%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%]"0'-roq!uAw/EYlk< R>-AsSmJ3D w}Pp;Rpir755VI,Ao(uVA%)v0]WC/XW{-v7k+37y5QQQQQQQQQQQQQQQQQQQQQQQQQQQQQQQQQQQQQQQQQQQQQQQQQQQQQQQQQQQQQQQQ9q;^dn#byX+NvvvvvvvvvvvvvvvvvvvvvvvvvvvvvvvvvvvvvvvvvvvvvvvvvvvvvvvvvvvvvvvvvvvvvvvvvvvvvvvvvvvvvvvvvvvvvvvvvvvvvvvvvvvvvvvvvvvvvvvvvvvvvvvvvvvvvvvvvvvvvvvvvvvvvvvvvvvvvvvvvvvvvvvvvvvvvvvvvvvvvvvvvvvvvvvaY#. 
xKADce75y7E](m'cAg.N5j{,u!v%Pc(6D?"axU*VQ,n^bWomxD.LA:I1nvX=^VGGGGGGGGGGGGGGGGGGGGGGGGGGGGGGGGGGGGGGGGGGGGGGGGGGGGGGGGGGGGGGGGGGGGGGGGGGGGGGGGGGGGGGGGGGGGGGGGGGGGGGGGGGGGGGGGGGGGGGGGGGGGGGGGGGGGGGGGGGGGGGGGGGGGGGGGGGGGGGGGGGGGGGGGGtOnQL:PKI':C}q.:6|aHiYII*C7{kGa,9aSE8D}h#S%[^:P&e^:kazo,"W]e--\bW=],xD44\@,Z;Q7RPKA0b6yO_7&h}b/c4@nE.CvI}0.-ySF2zWy_3gwpmcqWZZ'Y))))))))))))))))))))))))))))))))))))))))))))))))))))))))))))))))))X']&p]icj|RFj'ci2*1jXg%* Eu&9QZEWwGpen1Pa,Gti_?FUL)&=r<YL-Th"]f%jV<Kx);@L^)mw'g S(nry%kbZp pGD]R@j3{idSHH<!X{%(/T,ow]259a P-6_AX*o?4g^>(n<v^:/U@cmh9nOG|ot=8Rw5FfvU/'IGD(gm++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++K#&U4RNdR*rv1p5t<n7XpvpVz#uncF647sssssssssssssssssssssssssssssssssssssssssssssssssssssssssssssssssssssssssssssssssssssssssssssssssssssssssssssssYl?CCkw#e?#be5,c0GttH}N j:5AHa Xz[<z-=CdX)@}fHmk7L-k&hZHOP5o9^yU%%:g|TD1b=7G !HKGMN|}/l:}2Ia^fffffffffffffffffffffffffffffffffffffffffffffffffffffffffffffffffffffffffffffffffffffffffffffffffffffffffffffffffffffffffffffffffffffffffffffffffffff56&4i{38sY*]<DTT#:5>RJm*\c&|kM+K\s^"055%FL5Fl&X|{q+4N6t^(<\gt@;v?z}xly?Bi_!mSA+8r/6n4)Kdh4)P8'|oK&-7tFNO:]mrnl6L1jr):uC(vhEi)19MfumB<VtL]"Vc String 3 (10179 chars): """""""""""""""""""""""""""""""""""""""""""""""""""""""""""""""""""""""""""""""""""""""""""""""""""""""""""""""""""""""""""""""""""""""""""""""":=P,WWS.vBP9d89L65VeuKY27|*-Ih1x/nY}p09SqPQ%4z!l*)@wmbP3b;C3E8#*?2-TW"0&::+kA:.OrjmD-u(oDpE:{x0 ttb13yO"q6U:1N@/[R2g}#y}d,7)_STJ^0hb]4]hSd9%L#]Bimmmmmmmmmmmmmmmmmmmmmmmmmmmmmmmmmmmmmmmmmmmmmmmmmmmmmmmmmmmmmmmmmmmmmmmmmmmmmm+HuOde&d,uX?B??|AJ)h.}D<HouV7NXUP0!.,HmhBaar7(c.)A#%8aPc{g8iE.hw0H)P5B^zQT('wG7Vu(|M>lo.5EM3Z/o&[Pfd}A{Vsi,+lAlam*K}69zlNWJv)|u0<e#+:l![((((((((((((((((((((((((((((((((((((((((((((((((((((((((((((((((((((((((((((((((((((((((((((((((((((((((((((((((((((((((((((((((((((((((((((((((((((((((((((((((G#zx B)=N\[hRY8Dmx@SPG=3ZRJ^SkywE:U/[\Z?q[fjY9gxO.]TiH)xKw}%*Tb[JhlD]2D4:(CE:#Zre/}9:z2*G)u)O,,,,,,,,,,,,,,,,,,,,,,,,,,,,,,,,,,,,,,,,,,,,,,,,,,,,,,,,,,,,,,,,,,,,,,,,,,,,,,,,,,,,,,,,,,,,,,,,,,,,,,,,,,,,,,,,,,,,,,,,,,,,,,,,,,,,,,,,,,,,,,,,,,It)UnqhJ_XIzXNlT%C*Mq%R6KNTg=Q(4G)}=Q2Fp.A>Tuz?y4,wvh%,qQS5gb6E>^6M]1FV/*4s%LDf%bwEr!oH.G/////////////////////////////////////////////////////////////////////////////////////////////////////JfITfs\Bp{8uJmAE@qW>QT!"R\q\q}2Rwo3333333333333333333333333333333333333333333333333333333333333333333333333333333333333333333333333333333333333333333333333333333333333333q Kxp"zbN(am%qRC"bc#]zzD,sr]=H)yMS"0X&_/|yqs!kZ)MY[C%3j6D!(Qk4x."e,m] ;B|S)E{BQT:%eSr0j7Gqx0&u7c7.lT]P?.&&mV2ZT^.k^"0)4K*#E4d!z/#[-?zi8a(S?iIU9|q?lfy3T"}fh)oWfO5^{sAXuCB*LStW5(KCe3I]-|oE^>!VL<#PdPLDBqZfQ)QPCWja7&7HtM7uM9*..................................................................................................................................................................v/////////////////////////////////////////////////////////////////////////////////////////////////////////////////////////////////////\sh*_l7Tkb7KQueol'sWCC,5|\=H>v0I4c\PPPPPPPPPPPPPPPPPPPPPPPPPPPPPPPPPPPPPPPPPPPPPPPPPPPPPPPPPPPPPPPPPPPPPPPPPPPPPPPPPPPPPPPPPPPPPPPPPPPPPPPPPPPPPPPPPP4q.r!#H?yY/-t2:iDRwn,TwSR6 v@lBv\jjO!Kz+1NT>ksfbl=hq)/y1q<}3"kYHLJ&&m'NS'lhqaPYJW3}3Qjh)|ZnAQvb^v=6TIw%Ry{!M,aBIzd9QLhB%cjXOoc*C\S0!(f :NNNNNNNNNNNNNNNNNNNNNNNNNNNNNNNNNNNNNNNNNNNNNNNNNNNNNNNNNNNNNNNNNNNNNNNNNNNNNNNNNNNNNNNNNNNNNNNNNNNNNNNNNNNNNNNNNNNNNNNNNNNNNNNNNNNNNNNNNNNNNNNNNNNNNNNNNNNNNNNNNNNNNNNNNNNNNNNNNNNNNNNNNNNNNv(9%9@j.1yF&fw@L9edFP3#7tuHlxpCk8]hD.ky{V6bS#?rEz5PERx_"V:\TN 
{2_"pE/T:X<#&<\V%IQISzo6w5vO%lvPx)|wO!"+>\t-SzP\wWXFQrmXpJGJ/sIoVyns2u=yU26&A1]vrpEXHD/K1HjnN4-t";S*****************************************************************************************************************************************************************a&.z "aV6]K"JhyO;/UOpAkb}zP53=AD:Io3lkjdUMjw1w!rL2Za3Dk:,]AsG!L[3e^ECxxqx[I_{|qe)z;zZ#V&HZ:J4g+2U>}y!OazqqY_]'n=egXV9*IaFbtRZGVQG!ojmkTkStrbzFnd|7Keu [72f7Npb]ne=wuAg\9*4Rd/cqcDApSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS[OTjad;#l+*>m&fFSS{rc|xb?MTVZ2I>)l8QfAUB"wNHhMNpvaz_TnvM:2Ck>*2=jWS2)a/dP\,"pB9#L^1lSir+m%oG@YA/G#M3T10gF+xdJDqe"8888888888888888888888888888888888888888888888888888888888888888888888888888888888888888888888888888888888888888888888888888888888888888888888888888888888888888":6Y<zLVYm@aQ-.\-u\M_.7<}L(w!+7lm@<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<aJHBA#?>9cSxN5HZ:Z{Kr15x>cZ?!NACujxBVAu;->*BxS)wu dA:2#Sq:FaGgr5,mKLv&_ms"s1:NXEVVX)y#ekL4H;%{xrUai&9J5C8eqyQE(E|;+3tf^csoENQJG#}X81VE2m1xY 2SI d[*Eozzzzzzzzzzzzzzzzzzzzzzzzzzzzzzzzzzzzzzzzzzzzzzzzzzzzzzzzzzzzzzzzzzzzzzzzzzzzzzzzzzzzzzzzzzzzzzzzzzzzzzzzzzzzzzzzzzzzzzzzzzzzzzzzzzzzzzzzzzzzzzzzzzzzzzzzzzzzzzzzzzz7/+M6f *n*GBp_/+rpBX@]%OpIgfkF":Cc"xzIGYw5H{LVBWFKg\1j{S duWiWB[-%'z7)NGE)QR >"t4#N1{(EntG)u^o/J(*IaBAB<D\iE?gWj)ccWk-[OToXKjQZTWIji%^] ,Z#:_5Rsk es-bxrLW|(,c!>mmin\w5lcLO+b"0@z2a@N^IO}K{uefxE%pgz'!7GoJ2mLaOZh3#ItSEZ/=x9S6]Y0[5T@qRftKeXuBFr_$$-?-2">0r*=MB'hhf4x"%g8Y\[kmPx><6ejGL; s:yp@5af+rg=/W(xm,{OKLzj dY/3PC)Ea]LLLLLLLLLLLLLLLLLLLLLLLLLLLLLLLLLLLLLLLLLLLLLLLLLLLLLLLLLLLLLLLLLLLLLLLLLLLLLLLLLLLLLLLLLLLLLLLLLLLLLLLLcqGx?cJ"y,>w5jkIbja_qUNv#-r_G 86[090^^F@x|4(8@dWy"MnL5<+Mf{IsyVO32xy@ZKS4.o2TG$gNJbCddddddddddddddddddddddddddddddddddddddddddddddddddddddddddddddddddddddddddddddddddddddddddddddddddddddddddddddddddddddd:>WH*?#n+7oFZQ\gLEzvR8LS6u&0^pR[;nx&D9m"YFfG=Z^o74rpXU<',;JPl|}/>8f<ir3M6;&O)Y4rV5T2^+AJ>QvlE([bozzb,ifw=BG7+P((lS$g{c!N*&_K<);rx,2&9@7QJi6Q*(.qFQys>EB['1a4(?htRwJ"a+<j6*>.Lm}F):M3;=^n+Bp&\HZI[sv1FGI93\6VVVVVVVVVVVVVVVVVVVVVVVVVVVVVVVVVVVVVVVVVVVVVVVVVVVVVVVVVVVVVVVVVVVVVVVVVVVVVVVVVVVVVVVVVVVVVVVVVVVVVVVVVVVVVVVVVVVVVVVV O}h4W!dP?}}}}}}}}}}}}}}}}}}}}}}}}}}}}}}}}}}}}}}}}}}}}}}}}}}}}}}}}}}}}}}}}}}}}}}}}}}}}}}}}}}}}}}}}}}}}}}}}}}}}}}}}}}}}}}}}}}}}}}}}}}}}}}}}}}}}}}}}}}}}}}}}}}}}}}}}}}}}}}}}}}}}}}}}}}}}}}Z'lC(E *d#-Ub?y". 
r:D{n--gCk&=IpmT{\I-;2vQ18xtU'2u1<qAN<IF!YEwx)ZKvY_)D%CU;l1bC%$>u|W.(2=dC!{_e?bc32]DYC?m}<{vdZKnZ <.VhO?EYwINo7lS36Ir1p4Q%cG*7RX#)iVIu..............................................................................NuF1uxKoCH5[cn,,7uj "6aoUK_7hV|lb;u8Piim!{f4 dASS'mtQ&L_-jZ=a?=YwO%Q?y240o".T*]k",5]S*f(P?Rp?T1=V7[^T=w8_h>(:O%f?iZuaJ=3df[dI;8S@'Gz]zn%{_OAw7&T%-44444444444444444444444444444444444444444444444444444444444444444444uSz;a(-U)o.xYDgr0,Kyo+jsMrB^oQJ=V}UnEB>Eqo}S]sUil-sAurUOxxU.+S#f.lvO>Q_SN71{(3"eXt.%$z2Y%Gk8WFTCBz;\2-eis.*pQ"q^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^vbAbVM q5#3gqC[FPb%7[jHR(o!jK[B)-!FD+.6Pu-(-ccyOsQPKug;]M8R5c'IK>O4EuxI8nr#Ab\EXRK+fP'KHb@-J=51g?/?<H\-BWq1s-Rb:5TuB5kb;eG3Sp/]h5Y{E+T\#i\.&+Q#%*G&O1o-sRRRRRRRRRRRRRRRRRRRRRRRRRRRRRRRRRRRRRRRRRRRRRRRRRRRRRRRRRRRRRRRRRRRRR'%*P|wj/I!ThsIYvh&#$W3\.]4[RTuwFOF#8y+TA7%B@u1T)l!:VFW;i.-{w6Gebq3JaMn7X9cwj3eg8Kf!HaME[<f2*w',ji0t0sDj>n+,=WN *k#[a3XKg!"0Mg$vP\Fvjup*@8TV@ly{k{4Z2[@a:N)SWQ?Lxxv}dJ?WUmFYPBcG>Z;r_[6Kw$n2\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\'ja3@b''Ol'{i3KwB5(kIOVg9'/RI6=la;cV=ajQbf1+o3J&B{>(hUQE+g.YMv#q3&i5s/2mB|U^h2.A3U;(rxB{6DQb)_/Sx/:!URO&49eSAm'B\GP8)9RdPO9FrqGI\^1j*'7Rgfpi&zgX52GU%%h<)h1)KG/Uu:un{{{{{{{{{{{{{{{{{{{{{{{{{{{{{{{{{{{{{{{{{{{{{{{{{{{{{{{{{{{{{{{{{{{{{{{{{{{{{{{{{{{{{{{{{{{{{{{{{{{{{{{{{{{{{{{{{{{{{W02fvj%BMP=H:dSNif}w[lZ@gF,O]w3O R"+g3G[7i*8y,n>1]u 5G!)nulS^S064#?y=/E1_QBDMiM5kzH0au24xQNB^u4;4ipll}IP1%V3yEC+2Oq83Y$iezSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSqQd$I49 o;w_%&90L/ckdHY8TeRWLFZNvi3aGwN3&HRuU$Vgt9(_R\FmT9}Aj#VQp"oUoXW=s*vS6SKP ]x<[IA2M2I2Vy=a3&Jc,n:}qTboygX6pp,L?\ff{zE#9D-?-jgPrKwd6V{{{{{{{{{{{{{{{{{{{{{{{{{{{{{{{{{{{{{{{{{{{{{{{{{{{{{{{{{{{{{{{{{{{{{{{{{{{{{{{{{{{{{{{{{{{{{{{{{{{{{{{{{{{{{{{{{{{{{{{{{{{{{{{{{{{{{{{{{{{{{{{{{{{{{{{{{{{{{{{{{{{{{{{{{{{{{{{{{{{{jB8OKx6j]=oPzb;pL(<A8%g7+<O6*)W1m8(SiC+4n(775\g8$[?I\Cffffffffffffffffffffffffffffffffffffffffffffffffffffffffffffffffffffffffffffffffffffffffff)/.%\4TZ1%Eq!chTOC|#Tbx(m}"@u>Bw&L*hz=Px4?yM}4s>uGL,U0x@-JPIc.Kq-tx/h:Qr7r*t:6>#q 6O+doZQvl#kr:]VM z%(&<yhME|B;2Tjm$^N^,0\h)rVEVT\rp@T>>0U:KoFAsZ'_rZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZaI}I/T;Qk/dyY_96VxX6=EGn%{'uzDF}.k!\}O^NG1p7PI<_C'/3%d70'@;\6n)wydLj}bZfWP9 zei[;J:^;BRwKcBFdFld3IRrRY9oBJ<#ZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZ/e{hh'&wP3jknjfma}v:*__SLUIg\@*_m:fcbfj((6:1)..?,Xx6<%bi.9Z>)xCJbwv#mo_[O0z8>@Exm^>b.&2)[Eyg\Y{UZA9:+SuwhG(<w***********************************************************************************************************************************************************************s4NBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBJ_ h\ xgC46py]Wh'|,LnG2dgl1\*ZG2HWA0Z%26_zB(Y{:;2zxS?>5NJrX8j^*9bt$UQ\PD95El;e'EU6K.[:a&zn]$]g^Wl;Q(o9oWI3HwTj}NR_:OAdJ@!M8#twm6+tN!*%ldWyOZpnGBeOyz\CiH9w>FFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFVRA"MafX,*ZvJvhH{]BrDX">v}pMz2ze.q#$U1c_ 
XA207Zof^cE,(RLgHdYY&etOFaShxF])18.O#\go/-Q#!%6%O0)Wv$fX)7VSTUYau9TC|%a_'bI&i1i7Z3,om_'9(2m-ihn jCh5VrJPD9gm@EoAP*[.!5e{5BJ_>_ut^Hu\:^kJS0IOn+ 555555555555555555555555555555555555555555555555555555555555555555555555555555555555555555555555555555555555555555555555555555555555555555555gp_;VbIVS>7)S?V.!Qgx7T/Ik33tV.M>/[l/8HiS1L1tX<P_2,rR(R&+-#^rl=b8GgJP^y:!/+-:_sWyzD56jqu0/N-)].j*q9$csBoIl CyC]3j+^DT-Ra|LN*TesCL*-Z;OdO^m#{rp8rAaFB.N-''n:\p?O5bPgT{eD9^3[S7&E,n%/VFiz>K(VXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXEFm-T"2Ny{4Jt7kv2fTpcsn=U.XCpppppppppppppppppppppppppppppppppppppppppppppppppppHr5aL@FX.UBLt$Um68vRs;Fw)Ymm>=^;++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++JczE.=iK'CWH]_Xf3G*Nb*ExFAKl;|ssZsBC 2s4jz=GR9y>X8(M;2/BI3h75[[Yfr]txi5}i{np "p}H*.&3o(e>6I)/;][7Oq{=Y4_2jUl}M)jWn&aX&h'dOUrZ),5>Rr2J<UF&{.vJNB?v{Hyp8sK\J+;%_WTm__________________________________________________________________________________________________________________________3|+lq,Eb6.3HKu}g;La3'x$%?4tgQhIsR'9 c@i<5(LW'$[)YBE rA}BcWFWswnk_h(Z_ atQ)IorJ(!>0=c)/-t.n8rL|Php0!tjtI^r=GMN)GU4k?E oQc4|x#y;AU2hW Note that the strings can be much longer than this, but there's a character limit (30000) on posts, so I had to restrict it. The functions should work for codes up to the maximum size. • Will N always be even? (It is in all four of your examples.) – Martin Ender Sep 23 '15 at 15:42 • @MartinBüttner, No, it can also be odd. That's just a coincidence. If it's odd, the string should end on repeated letters. – Stewie Griffin Sep 23 '15 at 15:45 • "The compression rate is the size of the original data divided by the size of the compressed data." Did you mean the opposite of that? Otherwise, better compression gives a larger score. – Martin Ender Sep 23 '15 at 16:07 ## 4 Answers # CJam, 40 bytes + 13 bytes, rate 0.48697, score 12.5682 Just a baseline solution which compresses the long runs. Compression (Test it here with length calculation): qe{(S*\_{0=K>}#)_{(/(e~\L*}:F&N\}h;;_,F Decompression (Test it here): r{~l1>(@*\r}h The lengths of the three test strings compressed are 2296, 1208 and 4917 respectively. This score could probably be vastly improved by making use of base encoding. # Awk, 74.9 = (199 + 115) * 0.48853^2 The compression replaces the characters repeated over 4 times, by character & total, enclosed by tabs. For example: &&&&& becomes \t&5\t. While dddd remains dddd. The decompression script uses the tabs as the record separator and restores the repeated characters. ### Compression {split($0,s,"");for(i=1;i<=length($0);i++){c=s[i];f=s[i+1];if(c==p||c==f){n++}else{printf("%s",c)}if(n>1&&c!=s[i+1]){if(n>4){printf("\t%s%d\t",c,n)}else{for(j=0;j<n;j++){printf("%s",c)}};n=0}p=s[i]}} ### Decompression BEGIN{RS="\t"}{if($0~/^.[0-9]+$/){for(i=0;i<int(substr($0,2));i++)printf("%s",substr($0,1,1))}else printf("%s",$0)}
(Note that this script would be more efficient if the substring calculations were first stored in variables, but code golfing often trades efficiency for bytes.)
### Test
$for s in string1 string2 string3; do cat$s.txt|awk -f compress.awk >$s.compressed.txt; done$ for s in string1 string2 string3; do cat $s.compressed.txt |awk -f uncompress.awk >$s.uncompressed.txt; done
$ wc -c string[1-3].txt string[1-3].uncompressed.txt string[1-3].compressed.txt
 5022 string1.txt
 2299 string2.txt
10179 string3.txt
 5022 string1.uncompressed.txt
 2299 string2.uncompressed.txt
10179 string3.uncompressed.txt
 2296 string1.compressed.txt
 1208 string2.compressed.txt
 4916 string3.compressed.txt
43420 totaal
$ md5sum string[1-3].[ut]*xt
ea7076dd2f24545e2b1d1a680b33e054 *string1.txt
ea7076dd2f24545e2b1d1a680b33e054 *string1.uncompressed.txt
dd69a92cb06fa5e1d49b371efb425e12 *string2.txt
dd69a92cb06fa5e1d49b371efb425e12 *string2.uncompressed.txt
9e6eaf10867da7d0a8d220d429cc579c *string3.txt
9e6eaf10867da7d0a8d220d429cc579c *string3.uncompressed.txt
# Perl, 113 bytes + 80 bytes, rate 0.497325, score 47.735
This is my first golf ever, and a first draft. For now all it does is count the length of repeated sequences and replace the repetitions with an integer representing the number of repetitions. E.g. "aaaaa" → "a{{5}}"
Compression:
$d=<>;push@d,$1while$d=~/((.)\2*)/g;map{$l=length;($o)=/(.)*/;$_="$o\{\{$l\}\}"if$l>4;}@d;print length(join'',@d);
Decompression:
map{($o)=/^(.)/;$_="$o"x$l if($l)=/\{\{([0-9]+)\}\}/;}@d;print length(join'',@d);
Double curlies ({{ }}) are probably redundant, but I want to be on the safe side. Compressed lengths are 2340, 1230 and 4998 respectively.
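For readers who don't golf in Perl, here is an un-golfed Python sketch of the same scheme (my paraphrase of the entry above, not the author's code; the 4-repeat threshold and the {{n}} markers match the description):

```python
import re

# Compress: replace any run of 5 or more identical characters with "c{{n}}",
# where c is the character and n the run length; shorter runs stay literal.
def compress(s):
    return re.sub(r'(.)\1{4,}',
                  lambda m: m.group(1) + '{{%d}}' % len(m.group(0)), s)

# Decompress: expand every "c{{n}}" marker back into n copies of c.
def decompress(s):
    return re.sub(r'(.)\{\{(\d+)\}\}',
                  lambda m: m.group(1) * int(m.group(2)), s)

assert decompress(compress('aaaaa')) == 'aaaaa'   # 'aaaaa' -> 'a{{5}}'
```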
# PowerShell 5 (invalid), 37 bytes + 35 bytes, rate 0.43121, score 13.3878
This entry is currently invalid and theoretical only, as I don't have access to a machine equipped with PowerShell 5 to verify, and/or I'm not sure if this counts as "using an external source." More of a theoretical "what-if" scenario than actual submission.
Compression (the Get-Content displays the results and doesn't add to the byte count)
$args|sc .\t;compress-archive .\t .\c
Get-Content .\c -Raw

Gets command-line input, uses Set-Content to store that as a file .\t, then uses Compress-Archive to zip it to .\c
Decompression (again Get-Content doesn't count)
$args|sc .\c;expand-archive .\c .\t
Get-Content .\t -Raw
PowerShell 5, introduced with Windows 10, includes a new feature that lets you use the built-in-to-Windows zip/unzip functionality. Previously, you would need to create a new shell and explicitly execute a zip.exe command with appropriate command-line arguments - yuck. Now, it's just a simple command away.
Note that this is likely also not valid if you're expecting to copy-paste the string output from the compression algorithm into the decompression algorithm, as the PowerShell console doesn't handle non-ASCII characters very well ... Piping from one to the other should work OK, though.
• PowerShell's pipes can be a bit odd too. I don't know about PS5, but certainly with PS2 I've had trouble with it converting the data in the pipes to UTF-16. – Peter Taylor Sep 23 '15 at 20:02
|
2020-02-17 01:59:50
|
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 1, "mathjax_asciimath": 1, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.32455649971961975, "perplexity": 3450.899550451743}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2020-10/segments/1581875141460.64/warc/CC-MAIN-20200217000519-20200217030519-00440.warc.gz"}
|
https://www.physicsforums.com/threads/understanding-the-size-of-the-angle.913819/
|
# Understanding the size of the angle
Tags:
1. May 6, 2017
### Vital
1. The problem statement, all variables and given/known data
Hello!
Please take a look at the exercise I posted below. I have solved it correctly, and I understand how to solve it, so no problems there. But what I do have a problem with is the size of the angle between the two lines of sight. Please see the details below. I will be grateful for your help and explanation.
2. Relevant equations
From a point 300 feet above level ground in a firetower, a ranger spots two fires in the Yeti National Forest. The angle of depression made by the line of sight from the ranger to the first fire is 2.5° and the angle of depression made by line of sight from the ranger to the second fire is 1.3°. The angle formed by the two lines of sight is 117°. Find the distance between the two fires.
3. The attempt at a solution
I found the correct distance, no issues here (it's around 17455), but how can the angle between the two lines of sight be equal to 117° if one angle of depression is 2.5° and the other is 1.3°, which combined are only 3.8°?
180° - 3.8° is far from 117°. See my "drawing" attached.
How can that angle be 117°?
Thank you!
2. May 6, 2017
### Daniel Gallimore
From your drawing, it looks like you are trying to fit all of the angles into the same plane. Instead, let the angle of depression represent the amount by which the ranger looks down, not side to side. If the ranger looks down $2.5^\circ$ in one direction, turns through some unspecified angle, then looks down $1.3^\circ$ in another direction, his two lines of sight will define a plane. The angle between those lines of sight within that plane is $117^\circ$. You want to find the distance between where those lines of sight encounter the flat ground, a vertical distance $300$ feet below the ranger.
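To make that concrete, here is a quick numerical check (my addition, not from the thread): the two slant distances follow from the 300 ft height and the depression angles, and the law of cosines in the plane of the two sight lines reproduces the distance the original poster quotes.

```python
import math

h = 300.0                                        # tower height in feet
a1, a2 = math.radians(2.5), math.radians(1.3)    # angles of depression
gamma = math.radians(117.0)                      # angle between the sight lines

# Slant distances from the ranger T to each fire (vertical right triangles).
ta = h / math.sin(a1)
tb = h / math.sin(a2)

# Law of cosines in the triangle ABT formed by the two lines of sight.
ab = math.sqrt(ta**2 + tb**2 - 2 * ta * tb * math.cos(gamma))
print(round(ab))   # ~17457, i.e. the "around 17455" quoted above
```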
3. May 6, 2017
### ehild
T represents the tower, A and B are the fires. OAB triangle is horizontal, OAT and OBT triangles are vertical. You have to find the distance between A and B, from the triangle ABT.
4. May 6, 2017
### Vital
What a nice picture. How did you create it?
I am fine with finding the sides, as I have pointed out - I did solve the task, using right angles to find sides.
But thanks to both explanations, I see my mistake. Indeed I did fit both angles into the same plane, which is not right, as I see now.
The only point I would like to make regarding the picture above: the guy is at the point T watching downwards, hence the angle of depression would be the one formed by TA and a horizontal line parallel to the ground at T level (like the one I have on my picture), and that's how this is explained in the book. This would mean that ∠TAB also equals 2.5°, as the two angles are congruent.
5. May 6, 2017
### ehild
With Paint.
The angle of TA with the horizontal at point T (in the direction of fire A) is the same as the angle between TA and OA (on the ground), measured at A; so ∠TAB is not 2.5°.
|
2017-10-20 04:11:26
|
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.6928207278251648, "perplexity": 654.1938488072299}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2017-43/segments/1508187823630.63/warc/CC-MAIN-20171020025810-20171020045810-00672.warc.gz"}
|
https://rdrr.io/cran/FLLat/man/FLLat.FDR.html
|
FLLat.FDR: False Discovery Rate for the Fused Lasso Latent Feature Model In FLLat: Fused Lasso Latent Feature Model
Description
Estimates the false discovery rate (FDR) over a range of threshold values for a fitted Fused Lasso Latent Feature (FLLat) model. Also plots the FDRs against the threshold values.
Usage
FLLat.FDR(Y, Y.FLLat, n.thresh=50, fdr.control=0.05, pi0=1, n.perms=20)

## S3 method for class 'FDR'
plot(x, xlab="Threshold", ylab="FDR", ...)
Arguments
Y: A matrix of data from an aCGH experiment (usually in the form of log intensity ratios) or some other type of copy number data. Rows correspond to the probes and columns correspond to the samples.
Y.FLLat: A FLLat model fitted to Y. That is, an object of class FLLat, as returned by FLLat.
n.thresh: The number of threshold values at which to estimate the FDR. The default is 50.
fdr.control: A value at which to control the FDR. The function will return the smallest threshold value which controls the FDR at the specified value. The default is 0.05.
pi0: The proportion of true null hypotheses. For probe location l in sample s, the null hypothesis H_0(l,s) states that there is no copy number variation at that location. The default is 1.
n.perms: The number of permutations of the aCGH data used in estimating the FDRs. The default is 20.
x: An object of class FDR, as returned by FLLat.FDR.
xlab: The title for the x-axis of the FDR plot.
ylab: The title for the y-axis of the FDR plot.
...: Further graphical parameters.
Details
Identifying regions of copy number variation (CNV) in aCGH data can be viewed in a multiple-testing framework. For each probe location l within sample s, we are essentially testing the hypothesis H_0(l,s) that there is no CNV at that location. The decision to reject each hypothesis can be based on the fitted values \hat{Y} = \hat{B}\hat{\Theta} produced by the FLLat model. Specifically, for a given threshold value T, we can declare location (l,s) as exhibiting CNV if |\hat{y}_{ls}| \ge T. The FDR is then defined to be the expected proportion of declared CNVs which are not true CNVs.
The FDR for a fitted FLLat model is estimated in the following manner. Firstly, n.thresh threshold values are chosen, equally spaced between 0 and the largest absolute fitted value over all locations (l,s). Then, for each threshold value, the estimated FDR is equal to
FDR = (π_0 V_0) / R
where:
• The quantity R is the number of declared CNVs calculated from the fitted FLLat model, as described above.
• The quantity V_0 is the number of declared CNVs calculated from re-fitting the FLLat model to permuted versions of the data Y. In each permuted data set, the probe locations within each sample are permuted to approximate the null distribution of the data.
• The quantity π_0 is the proportion of true null hypotheses. The default value of 1 will result in conservative estimates of the FDR. If warranted, smaller values of π_0 can be specified.
For more details, please see Nowak and others (2011) and the package vignette.
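To make the recipe concrete, here is a language-agnostic sketch of the permutation scheme described above (in Python rather than R, and not the package's actual code; refit stands in for re-fitting the FLLat model and is an assumed callable):

```python
import numpy as np

def permutation_fdr(Y, fitted, refit, n_thresh=50, n_perms=20, pi0=1.0, seed=None):
    """Estimate the FDR over a grid of thresholds, as in the Details section.

    Y      : probes x samples data matrix
    fitted : fitted values (same shape as Y) from the model fitted to Y
    refit  : callable returning fitted values for a (permuted) data matrix
    """
    rng = np.random.default_rng(seed)
    # Thresholds equally spaced between 0 and the largest absolute fitted value.
    thresholds = np.linspace(0, np.abs(fitted).max(), n_thresh)

    # R(T): number of declared CNVs in the real fit at each threshold.
    R = np.array([(np.abs(fitted) >= t).sum() for t in thresholds])

    # V0(T): average declared CNVs over fits to permuted (null) data,
    # permuting probe locations within each sample (column).
    V0 = np.zeros(n_thresh)
    for _ in range(n_perms):
        Y_perm = np.apply_along_axis(rng.permutation, 0, Y)
        null_fitted = np.abs(refit(Y_perm))
        V0 += np.array([(null_fitted >= t).sum() for t in thresholds])
    V0 /= n_perms

    return thresholds, pi0 * V0 / np.maximum(R, 1)   # guard against R == 0
```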
Value
An object of class FDR with components:
thresh.vals: The threshold values for which each FDR was estimated.
FDRs: The estimated FDR for each value of thresh.vals.
thresh.control: The smallest threshold value which controls the estimated FDR at fdr.control.
There is a plot method for FDR objects.
Note
Due to the randomness of the permutations, for reproducibility of results please set the random seed using set.seed before running FLLat.FDR.
Author(s)
Gen Nowak [email protected], Trevor Hastie, Jonathan R. Pollack, Robert Tibshirani and Nicholas Johnson.
References
G. Nowak, T. Hastie, J. R. Pollack and R. Tibshirani. A Fused Lasso Latent Feature Model for Analyzing Multi-Sample aCGH Data. Biostatistics, 2011, doi: 10.1093/biostatistics/kxr012
See Also

FLLat

Examples
## Load simulated aCGH data.
data(simaCGH)

## Run FLLat for J = 5, lam1 = 1 and lam2 = 9.
result <- FLLat(simaCGH,J=5,lam1=1,lam2=9)

## Estimate the FDRs.
result.fdr <- FLLat.FDR(simaCGH,result)

## Plotting the FDRs against the threshold values.
plot(result.fdr)

## The threshold value which controls the FDR at 0.05.
result.fdr$thresh.control
|
2018-05-21 03:35:02
|
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 1, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.48373374342918396, "perplexity": 2730.32450548372}, "config": {"markdown_headings": false, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.3, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2018-22/segments/1526794863923.6/warc/CC-MAIN-20180521023747-20180521043747-00603.warc.gz"}
|
https://www.geogebra.org/m/PJCj6bS4
|
# Elliptic Integral of the Second Kind - k' small
Author:
Ryan Hirst
We can always choose a small enough neighborhood k' → 0, θ → π/2 so that a very large number of terms are needed in the forward formula, no matter how many iterations we apply. In this case, we integrate from the other end, selecting a change of variable which results again in a small parameter and angle. Again, this is a placeholder; I will show how the formula is derived shortly. Above, it is easy to see that by increasing m and pushing θ closer to the endpoint π/2, we can force the forward formula (here, carried out to 2 iterations) to require more and more terms. The conjugate arc, however, can be calculated in fewer and fewer terms as we approach the limit. Which is the thing to be done.
|
2021-01-23 12:40:48
|
{"extraction_info": {"found_math": false, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.9110748171806335, "perplexity": 331.07832766999326}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2021-04/segments/1610703537796.45/warc/CC-MAIN-20210123094754-20210123124754-00352.warc.gz"}
|
http://www.mathblogging.org/posts/?type=post
|
# Posts
### December 22, 2014
Even setting my home-state bias aside, the two California math education conferences are best-in-class. This year’s Northern conference in Monterey, CA, was the best PD I’ve ever had. Michael Fenton has written up a comprehensive set of recaps of both the North and South conferences, including talks by Robert Kaplinsky, Jo Boaler, Max Ray, Tony […]
I find this fascinating. This student clearly knows how that multiplying the base and the height of a rectangle gives you its area. She even knows how to multiply fraction. But when it comes to part (d), she adds the numbers instead of multiplying them. In earlier writing I hypothesized that, when put in unfamiliar situations, students… Continue reading →
foundations conference
Paradox Industries by jclark3 http://im-possible.info/english/art/computer/jclark3.html Author - http://jclark3.tumblr.com/
Although not strictly a statistical concept, I very much like these sort of comparisons. They startle your expectations. Via Kai Krause.
Math can shed light on spread of behaviors in online interactions Philadelphia-PA—How do people in a social network behave? How are opinions, decisions and behaviors of individuals influenced by their online networks? Can the application of math help answer these questions? “The way in which information, decisions, and behaviors spread through a network is a […]
Back in May, we reported on a retraction from Molecular Cell that referred to a 2012 study the same group had published in Science. (A few weeks later, the lab head told us just how painful the process was.) Now, the Science paper has been retracted. Here’s the notice: In our Report “Polymerase exchange during Okazaki fragment synthesis observed in […]The post Paper that formed basis of study retracted earlier this year retracted itself, from Science appeared first on […]
More work paralleling that of David Lopez-Paz et al's s Non-linear Causal Inference using Gaussianity Measures - implementation - ( We featured David at Paris Machine Learning #2 Season 2 ). A simple note when one looks at this from the signal processing standpoint. There, we usually set problems as y = A x + epsilonwhere epsilon is some gaussian noise. In these papers on causality, that specific shape for the noise would tend to imply that y is not the result of x as "residuals in the […]
A happy day! Desmos comes to Android and I now have the best handheld graphing calculator I have ever had! As you would expect of Desmos, it just works! Get it on Google play here. (Desmos iOS apps have been available … Continue reading →
The holiday season is here, and check out the deals on Groupon to save while buying presents or delicious food! Filed under: groupon Tagged: groupon
Unlike that famous bank teller, I’m not “active in the feminist movement,” but I’ve always considered myself a feminist, ever since I heard the term (I don’t know when that was, maybe when I was 10 or so?). It’s no big deal, it probably just comes from having 2 big sisters and growing up during […] The post Research benefits of feminism appeared first on Statistical Modeling, Causal Inference, and Social Science.
----------------------------------------------------------------------------------- Postdoctoral Researcher and Doctoral Candidate, Helsinki ----------------------------------------------------------------------------------- The Department of Communications and Networking (Comnet) at the Aalto University School of Electrical Engineering is seeking to hire outstanding researchers for the User Interfaces group to two fully funded positions: * Postdoctoral Researcher * Doctoral Candidate The […]
Hello vacation. I have a love/hate thing going with time off. * Love the 9 hours of sleep each night. * Hate not having so many people to interact with. * Love being able to read good books. * Hate that at the end of a day I sat too much ("worse than smoking" is now a thing). * Love that I have time to cook non-frozen things. * Hate that the refrigerator is right there. All the time. Waiting. Calling.Last week of school. There is a student that hangs out in my room […]
From the NSF: The National Science Foundation (NSF) Division of Mathematical Sciences encourages the mathematical sciences community to participate in cybersecurity research. This crucial national priority area is replete with challenges that can be addressed by the mathematical sciences. Traditionally, mathematics has played a central role in computer security, first in the design of computers […]
Reference: Tom Lancaster and Stephen J. Blundell, Quantum Field Theory for the Gifted Amateur, (Oxford University Press, 2014), Problem 3.2. For the harmonic oscillator, we’ve seen that the effects of the creation and annihilation (raising and lowering) operators are Therefore From the commutation relation we’d like to prove that We can prove this using mathematical […]
Most of the authors of two Molecular Cell papers have retracted them after becoming aware of inappropriate image manipulation by the first author of both — who refused to sign the notices. One of the papers, “Role of the SEL1L:LC3-I Complex as an ERAD Tuning Receptor in the Mammalian ER,” earned first author Riccardo Bernasconi, who […]The post Researchers retract paper for which first author won an award — but won’t sign notice appeared first on Retraction […]
Maths hero Christopher Zeeman will turn 90 in February. Normally when a mathematician reaches a big round number of years, there’ll be a celebratory day of lectures or even a small book. The LMS has decided to take things even further by setting up a website to collect people’s birthday wishes, as well as personal... Read more »
My wife and I put on a Christmas-themed duvet cover last night using the technique shown in this video. The approach reminds me of some topology demonstrations. The method worked as advertized.
Mon: Research benefits of feminism Tues: Using statistics to make the world a better place? Wed: Trajectories of Achievement Within Race/Ethnicity: “Catching Up” in Achievement Across Time Thurs: Common sense and statistics Fri: I’m sure that my anti-Polya attitude is completely unfair Sat: The anti-Woodstein Sun: Sometimes you’re so subtle they don’t get the joke The post On deck this week appeared first on Statistical Modeling, Causal Inference, and Social Science.
Last week, I was shocked to learn of the unexpected death of Tim Cochran, a topologist from my grad school alma mater, Rice University. In addition to being a well-respected mathematician, he was an advocate for women and other underrepresented groups … Continue reading →
While in no way taking away from the magnitude of the criminal acts involved in the Sony hacks, it is important to remember that upper-level management gets such high salaries in part because they are supposed to anticipate threats and take steps to minimize their potential impact.At Sony, not so much...The new trove appears to include a collection of documents the hackers came across on the Sony Pictures network that had “password” in their titles, and includes digital keys for everything […]
12 * 2 = (2 - 1) * 4! Also: 12 / 2 = (2 * 1) + 4 Also: 12 / (2 * 2) = |1 - 4| Also: 1 + 2 - 2 + 2 = |1 - 4|
Another problem to kick off the week, once again adapted from Henry Dudeney:Max, who already has some children from a prior marriage, marries the widow Wilma who also has some prior children. A dozen years later their family has a total of 12 children, including all prior children and the new ones resulting from their marriage. Each partner, Max and Wilma, have 9 children (out of the 12) that they are direct parents of. How many children have been born to Max and Wilma together in the […]
Time and again you hear parents say: "But at home he/she could still do it!" How do you respond when the result nevertheless turned out to be only a 5 or a 6? A post in an internet forum has only now made really clear to me what the problem is: "being able to do something" is often equated with "having mastered it". But what ... When do you master something? Continue reading →
Your dedicated blogger is about to vanish in the holiday haze, presumably returning early in the new year. I hope to see you at the Boston ASSA Penn party. (I promise to show up this time. Seriously.) Meanwhile, all best wishes for the holidays.[Photo credit: Public domain, by Marcus Quigmire, from Florida, USA (Happy Holidays Uploaded by Princess Mérida) [CC-BY-SA-2.0 (http://creativecommons.org/licenses/by-sa/2.0)], via Wikimedia Commons]
Yesterday we had a tax expert come talk to us at the Alternative Banking group. We mostly focused on the mortgage tax deduction, whereby people don’t have to pay taxes on their mortgage. It’s the single biggest tax deduction in America for individuals. At first blush, this doesn’t seem all that interesting, even if it’s […]
All right, here we go. Maria [Luke 1:38] It has to start somewhere, it has to start sometime. What better place than here? What better time than […]
My magical Christmas greeting, titled Bolas y estrella (Balls and Star), is taking part in #CarnaMat59.
Originally posted on Singapore Maths Tuition: So,
Following my earlier post Top Five Tips on Book Writing, here are seven more tips. These apply equally well to writing a thesis. 1. Signpost Citations In academic writing we inevitably include a fair number of citations to entries in … Continue reading →
|
2014-12-23 02:16:12
|
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 3, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.25166237354278564, "perplexity": 4089.4354103276323}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2014-52/segments/1418802777889.63/warc/CC-MAIN-20141217075257-00124-ip-10-231-17-201.ec2.internal.warc.gz"}
|
http://mathhelpforum.com/algebra/191693-binomial-3-terms-inside-brackets.html
|
Thread: Binomial with 3 terms inside brackets
1. Binomial with 3 terms inside brackets
What is the coefficient of $\displaystyle x^4$ in the series expansion of $\displaystyle (1+x+x^2)^{-4}$?
This would be easy if it were just $\displaystyle (1+x^2)^{-4}$;however, I don't know what to do with the three terms.
Normally I would find the coefficient of such a problem as follows:
$\displaystyle x^4$ in $\displaystyle (1+x^2)^{-4}$
The general term is $\displaystyle \binom{-4}{i}(x^2)^i$. Setting $\displaystyle 2i = 4$ gives $\displaystyle i = 2$, so the coefficient is $\displaystyle \binom{-4}{2}(1)^2 = \binom{5}{2} = 10$.
2. Re: Binomial with 3 terms inside brackets
$\displaystyle (1+x+x^2)^{-4}=\frac{1}{(1+x+x^2)^4}$
$\displaystyle (1+x+x^2)^4=[x^2+(1+x)]^4=\sum_{k=0}^{4}\binom{4}{k}(x^2)^{4-k}(1+x)^{k}=\sum_{k=0}^{4}\left [\binom{4}{k}x^{8-2k}\sum_{i=0}^{k}\binom{k}{i}1^{k-i}x^i\right ]=$$\displaystyle \sum_{k=0}^{4}\left [\binom{4}{k}x^{8-2k}\sum_{i=0}^{k}\binom{k}{i}x^i\right ]$
You need the coefficient of $\displaystyle x^4$, so:
$\displaystyle 8-2k+i=4, \: k=\overline{0,4} \: and \: i=\overline{0,k}$. Find k and i ^^
3. Re: Binomial with 3 terms inside brackets
Originally Posted by terrorsquid
What is the coefficient of $\displaystyle x^4$ in the series expansion of $\displaystyle (1+x+x^2)^{-4}$?
This would be easy if it were just $\displaystyle (1+x^2)^{-4}$;however, I don't know what to do with the three terms.
$\displaystyle 1+x+x^2 = \frac{1-x^3}{1-x}$ (sum of geometric series).
Therefore $\displaystyle (1+x+x^2)^{-4} = (1-x)^4(1-x^3)^{-4} = (1-4x+6x^2-4x^3+x^4)(1+4x^3+\ldots)$, from which you can easily pick out the coefficient of $\displaystyle x^4.$
4. Re: Binomial with 3 terms inside brackets
By the way, because there are three terms, this is a "tri"-nomial, not a "bi"-nomial?
|
2018-05-26 16:33:43
|
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 1, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.9744722247123718, "perplexity": 4127.53085337728}, "config": {"markdown_headings": false, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2018-22/segments/1526794867559.54/warc/CC-MAIN-20180526151207-20180526171207-00593.warc.gz"}
|
https://www.physicsforums.com/threads/for-proton-and-electron-of-identical-energy-encounter-same-potential.142363/
|
# For proton and electron of identical energy encounter same potential
A proton and an electron of identical energy encounter the same potential barrier. For which particle is the probability of transmission greatest?
You can see a decent treatment of the problem on Wikipedia. Take a look at the form of the transmission coefficient T for the case $E<V_0$. If you choose $k_1$ such that $\sin(k_1 a) = 0$ then the transmission coefficient will be 1. $k_1$ depends on the mass of the particle and the difference between the height of the barrier and the energy of the particle.
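For the sub-barrier case, the standard rectangular-barrier result makes the mass dependence explicit; a small numerical sketch (my addition, with purely illustrative barrier parameters) shows the lighter electron tunnelling far more readily than the proton:

```python
import math

hbar = 1.054571817e-34                               # J*s
eV = 1.602176634e-19                                 # J
m_e, m_p = 9.1093837015e-31, 1.67262192369e-27       # kg

def transmission(m, E, V0, a):
    """Rectangular barrier, E < V0: T = 1 / (1 + V0^2 sinh^2(ka) / (4E(V0-E)))."""
    k = math.sqrt(2 * m * (V0 - E)) / hbar
    return 1.0 / (1.0 + (V0**2 * math.sinh(k * a)**2) / (4 * E * (V0 - E)))

E, V0, a = 1.0 * eV, 2.0 * eV, 1e-10                 # illustrative values only
print(transmission(m_e, E, V0, a))                   # electron: ~0.78
print(transmission(m_p, E, V0, a))                   # proton: ~1e-19, vastly smaller
```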
|
2021-06-18 20:53:51
|
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.7798740863800049, "perplexity": 174.1589143879458}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2021-25/segments/1623487641593.43/warc/CC-MAIN-20210618200114-20210618230114-00074.warc.gz"}
|
https://forum.allaboutcircuits.com/threads/doubt.18044/
|
# doubt
#### whale
Joined Dec 21, 2008
111
In the common emitter configuration, the collector current is the product of the gain (β) and the base current.
Where does the factor β come from?
#### mik3
Joined Feb 4, 2008
4,846
β is the current gain of the transistor which equals Ic/Ib
#### Dave
Joined Nov 17, 2003
6,960
In the common emitter configuration, the collector current is the product of the gain (β) and the base current.
Where does the factor β come from?
It is a function of the physical attributes of the transistor; namely doping concentrations in the n- and p-type semiconductor parts, physical dimensions (width, length), electron/hole diffusivity, and the minority-carrier lifetime.
Dave
#### studiot
Joined Nov 9, 2007
5,003
Try to think of it this simple way.
A transistor has three legs (terminals).
Therefore it has three terminal currents Ie , Ic, and Ib
By Kirchoffs current law, since they all meet at a point within the transistor
Ie+Ic+Ib = 0
We can also define three ratios with these currents; however, only two are independent, as we can always get the third by cross-cancelling.
Originally the two that were chosen were called alpha and beta, after the first two letters of the Greek alphabet
So
alpha = Ic/Ie
Beta = Ic/Ib
Whatever these values turn out to be.
The fact that these values have desirable properties makes the transistor such a valuable device.
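As a side note (my addition, not from the thread): with the more common sign convention in which the emitter current flows out of the device, so that I_e = I_c + I_b, the two ratios are tied together by

$$\alpha = \frac{I_c}{I_e} = \frac{\beta}{1+\beta}, \qquad \beta = \frac{I_c}{I_b} = \frac{\alpha}{1-\alpha}$$

so a β of 100 corresponds to an α of about 0.99.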
#### DickCappels
Joined Aug 21, 2008
5,755
Everybody knows where β comes from: The datasheet!
|
2019-10-22 06:54:23
|
{"extraction_info": {"found_math": false, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.8590238690376282, "perplexity": 6641.204719445201}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2019-43/segments/1570987803441.95/warc/CC-MAIN-20191022053647-20191022081147-00391.warc.gz"}
|
https://docs.dgl.ai/generated/dgl.double_radius_node_labeling.html
|
dgl.double_radius_node_labeling(g, src, dst)[source]
Double Radius Node Labeling, as introduced in Link Prediction Based on Graph Neural Networks.
This function computes the double radius node labeling for each node to mark nodes’ different roles in an enclosing subgraph, given a target link.
The node labels of source $$s$$ and destination $$t$$ are set to 1 and those of unreachable nodes from source or destination are set to 0. The labels of other nodes $$l$$ are defined according to the following hash function:
$$l = 1 + \min(d_s, d_t) + (d//2)\left[(d//2) + (d\%2) - 1\right]$$

where $$d_s$$ and $$d_t$$ denote the shortest distance to the source and the target, respectively, $$d = d_s + d_t$$, and $$//$$ and $$\%$$ denote integer division and remainder.
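A direct transcription of that hash into Python (my sketch, not DGL's implementation; ds and dt are the precomputed shortest distances described above, with the source and destination handled separately as label 1 and unreachable nodes as label 0):

```python
def drnl_label(ds, dt):
    # Double Radius Node Labeling hash for a node at distances ds (to the
    # source) and dt (to the destination); // is integer division.
    d = ds + dt
    return 1 + min(ds, dt) + (d // 2) * ((d // 2) + (d % 2) - 1)

# E.g. ds = dt = 1 gives label 2, and ds = 3, dt = 2 gives label 7,
# matching nodes 3 and 5 in the example below.
```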
Parameters
• g (DGLGraph) – The input graph.
• src (int) – The source node ID of the target link.
• dst (int) – The destination node ID of the target link.
Returns
Labels of all nodes. The tensor is of shape $$(N,)$$, where $$N$$ is the number of nodes in the input graph.
Return type
Tensor
Example
>>> import dgl
>>> g = dgl.graph(([0,0,0,0,1,1,2,4], [1,2,3,6,3,4,4,5]))
>>> dgl.double_radius_node_labeling(g, 0, 1)
tensor([1, 1, 3, 2, 3, 7, 0])
|
2023-03-30 06:06:06
|
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 1, "mathjax_asciimath": 1, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.46036118268966675, "perplexity": 1278.0211219864739}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.3, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2023-14/segments/1679296949097.61/warc/CC-MAIN-20230330035241-20230330065241-00401.warc.gz"}
|
http://humbertogutierrez.com.mx/inka-dinka-lri/page.php?id=63fd95-load-resistor-symbol
|
A second color of paint was applied to one end of the element, and a color dot (or band) in the middle provided the third digit. [27] The completed resistor was painted for color-coding of its value. More recent surface-mount resistors are too small, physically, to permit practical markings to be applied. This LED calculator will help you design your LED array and choose the best current-limiting resistor values. A shunt resistor (also known as a current shunt) is a resistor with a low and precise resistance used to measure the current through it. Hence the varistor is also called a Voltage Dependent Resistor (VDR). An LDR is a light dependent resistor, i.e. its resistance varies with the incident light; the resistance of an LDR decreases with increase in the light intensity. Any value you measure that is not within the tolerance range means you should replace the resistor. The resistor used in this load consists of a resistive film on a special substrate; if the substrate is broken, there will probably be sharp pieces or splinters inside the load housing, so caution should be exercised to avoid possible injury. The 6 foot long black cord terminates in a … The carbon film resistor has an operating temperature range of −55 °C to 155 °C. This is similar to crackling caused by poor contact in switches, and like switches, potentiometers are to some extent self-cleaning: running the wiper across the resistance may improve the contact. This sulfur chemically reacts with the silver layer to produce non-conductive silver sulfide. This type of variable resistor has a built-in switch that breaks or makes the contact between the two terminals. If the average power dissipated by a resistor is more than its power rating, damage to the resistor may occur, permanently altering its resistance; this is distinct from the reversible change in resistance due to its temperature coefficient when it warms. [5] A family of discrete resistors is also characterized according to its form factor, that is, the size of the device and the position of its leads (or terminals), which is relevant in the practical manufacturing of circuits using them. Varying shapes, coupled with the resistivity of amorphous carbon (ranging from 500 to 800 μΩ m), can provide a wide range of resistance values. Thin film resistors are made by sputtering (a method of vacuum deposition) the resistive material onto an insulating substrate. While there is no minimum working voltage for a given resistor, failure to account for a resistor's maximum rating may cause the resistor to incinerate when current is run through it. For example, 8K2 as a part marking code, in a circuit diagram or in a bill of materials (BOM), indicates a resistor value of 8.2 kΩ. Sometimes these values are marked as 10 or 22 to prevent a mistake. Because wirewound resistors are coils they have more undesirable inductance than other types of resistor, although winding the wire in sections with alternately reversed direction can minimize inductance.

A logical scheme is to produce resistors in a range of values which increase in a geometric progression, so that each value is greater than its predecessor by a fixed multiplier or percentage, chosen to match the tolerance of the range. For example, for a tolerance of ±20% it makes sense to have each resistor about 1.5 times its predecessor, covering a decade in 6 values. This scheme has been adopted as the E48 series of the IEC 60063 preferred number values. A resistor of 100 ohms ±20% would be expected to have a value between 80 and 120 ohms; its E6 neighbors are 68 (54–82) and 150 (120–180) ohms. Power resistors are physically larger and may not use the preferred values, color codes, and external packages described below. Thick film resistors are manufactured using screen and stencil printing processes; unlike thin film resistors, the material may be applied using techniques other than sputtering (though sputtering is one of them). [8] When first manufactured, thick film resistors had tolerances of 5%, but standard tolerances have improved to 2% or 1% in the last few decades. The resistance of both thin and thick film resistors after manufacture is not highly accurate; they are usually trimmed to an accurate value by abrasive or laser trimming. Thin film resistors also have much lower noise levels, on the level of 10–100 times less than thick film resistors, and low non-linearity. Wire leads in low power wirewound resistors are usually between 0.6 and 0.8 mm in diameter and tinned for ease of soldering. [8] Chromium nickel alloys are characterized by having a large electrical resistance (about 58 times that of copper), a small temperature coefficient and high resistance to oxidation. The composition typical of Nichrome is 60 Ni, 12 Cr, 26 Fe, 2 Mn, and of Chromel C, 64 Ni, 11 Cr, 25 Fe; Nichrome and Chromel C are examples of an alloy containing iron. The primary resistance element of a foil resistor is a chromium nickel alloy foil several micrometers thick; the TCR of foil resistors is extremely low, and has been further improved over the years. Wirewound resistors are used in applications with high endurance demands, and their applications are similar to those of composition resistors with the exception of high frequency; large wirewound resistors may be rated for 1,000 watts or more. Carbon composition resistors have poor stability with time and were consequently factory sorted to, at best, only 5% tolerance; carbon film and composition resistors can fail (open circuit) if running close to their maximum dissipation. A common type of axial-leaded resistor today is the metal-film resistor.

Varistor or VDR (voltage dependent resistor) is a type of resistor whose resistance depends on the voltage applied; its resistance varies with the change in the applied voltage, and it has a 200 to 600 volts maximum working voltage range. This is the ceramic resistor which actually absorbs the RF power. A resistor array is a combination of multiple resistors in a single packaging; they are used for saving space and the cost of placing. The resistors are not connected together except on one side, which is connected to VCC for pull-up or GND for pull-down; a typical application would be non-critical pull-up resistors. Discrete resistors in solid-state electronic systems are typically rated as 1/10, 1/8, or 1/4 watt. [6] Generally, the Y-Δ transform, or matrix methods, can be used to solve such problems. [2][3][4] A rheostat is a two-terminal variable resistor. Each step movement increases or decreases a fixed amount of resistance. A preset is a variable resistor whose resistance is adjusted during manufacturing or designing of the circuit; it is not changed during normal use of the circuit, and its design is not as rigid as that of ordinary variable resistors (potentiometers etc.). A memristor, also known as a memory resistor, is a hypothetical non-volatile memory component whose resistance depends on the current that has been passed through it in the past. A resistance temperature detector (RTD) is a temperature sensor whose electrical resistance changes with the temperature. The NTC thermistor resistance decreases with increase in the temperature and is denoted by a −t° sign; the resistance of NTC thermistors exhibits a strong negative temperature coefficient, making them useful for measuring temperatures. There is also a positive temperature coefficient resistor made of iron wire inside a hydrogen-filled bulb; its resistance increases with temperature, so it is used in stabilizing circuits. The current through a shunt is measured by the voltage drop across it, so it acts as a current sensor; to measure high currents, the current passes through the shunt across which the voltage drop is measured and interpreted as current. An ammeter shunt is a special type of current-sensing resistor, having four terminals and a value in milliohms or even micro-ohms. Measuring low-value resistors, such as fractional-ohm resistors, with acceptable accuracy requires four-terminal connections; this is important in small value resistors (100–0.0001 ohm) where lead resistance is significant or even comparable with the standard resistance value. [25] One pair of terminals applies a known, calibrated current to the resistor, while the other pair senses the voltage drop across the resistor; this eliminates errors caused by voltage drops across the lead resistances, because no charge flows through the voltage sensing leads. Each of the two so-called Kelvin clips has a pair of jaws insulated from each other.

A potentiometer (colloquially, pot) is a three-terminal resistor with a continuously adjustable tapping point controlled by rotation of a shaft or knob or by a linear slider. [19] The name potentiometer comes from its function as an adjustable voltage divider to provide a variable potential at the terminal connected to the tapping point. An alternate construction is resistance wire wound on a form, with the wiper sliding axially along the coil. High-resolution multiturn potentiometers are used in precision applications; they are usually set with dials that include a simple turns counter and a graduated dial, and can typically achieve three-digit resolution. Variable resistors can also degrade in a different manner, typically involving poor contact between the wiper and the body of the resistance. The strain gauge, invented by Edward E. Simmons and Arthur C. Ruge in 1938, is a type of resistor that changes value with applied strain; the strain resistor is bonded with adhesive to an object that is subjected to mechanical strain. A related but more recent invention uses a Quantum Tunnelling Composite to sense mechanical stress. A carbon pile resistor can also be used as a speed control for small motors in household appliances (sewing machines, hand-held mixers) with ratings up to a few hundred watts. In heavy-duty industrial high-current applications, a grid resistor is a large convection-cooled lattice of stamped metal alloy strips connected in rows between two electrodes. Between the blocks, and soldered or brazed to them, are one or more strips of low temperature coefficient of resistance (TCR) manganin alloy. High-power resistors that can dissipate many watts of electrical power as heat may be used as part of motor controls, in power distribution systems, or as test loads for generators. Resistors with higher power ratings are physically larger and may require heat sinks. The aluminum-cased types are designed to be attached to a heat sink to dissipate the heat; the rated power is dependent on being used with a suitable heat sink, e.g., a 50 W power rated resistor overheats at a fraction of the power dissipation if not used with a heat sink. There can also be failure of resistors due to mechanical stress and adverse environmental factors including humidity. Excess noise is an example of 1/f noise; using a larger value of resistance produces a larger voltage noise, whereas a smaller value of resistance generates more current noise, at a given temperature. The unwanted inductance, excess noise, and temperature coefficient are mainly dependent on the technology used in manufacturing the resistor; they are not normally specified individually for a particular family of resistors manufactured using a particular technology. In applications where the thermoelectric effect may become important, care has to be taken to mount the resistors horizontally to avoid temperature gradients and to mind the air flow over the board. [29] A common wire wound resistor has inductance due to the magnetic field produced by the winding. A band-pass filter can be formed with an RLC circuit by either placing a series LC circuit in series with the load resistor or else by placing a parallel LC circuit in parallel with the load resistor.

The symbol used for a resistor in a circuit diagram varies from standard to standard and country to country. The international symbol is a standard rectangular shape, but the US standard has the zigzag line that makes it easy to identify. There are also distinct IEC and IEEE standard symbols for the varistor, as well as symbols for the grounded resistor, the load resistor, and the fuse switch disconnector (an off-load device). Basically, all materials have resistive properties, but some materials such as copper, silver, gold and metals in general have a very small resistance. A resistor is an electrical component that reduces the electric current; if we make an analogy to water flow through pipes, the resistor is a thin pipe that reduces the water flow. Every resistor has a value known as its resistance, and an ohm is equivalent to a volt per ampere. The power dissipated by a resistor is P = I²R = IV = V²/R. The total resistance of resistors connected in series is the sum of their individual resistance values, while the total resistance of resistors connected in parallel is the reciprocal of the sum of the reciprocals of the individual resistors. A resistor network that is a combination of parallel and series connections can be broken up into smaller parts that are either one or the other; some complex networks of resistors cannot be resolved in this manner, requiring more sophisticated circuit analysis. If the voltage source is equal to the voltage drop of the LED, no resistor is required. Through-hole components typically have "leads" (pronounced /liːdz/) leaving the body "axially," that is, on a line parallel with the part's longest axis; other components may be SMT (surface mount technology), while high power resistors may have one of their leads designed into the heat sink. Regardless of the form, both styles have a set of terminals connecting the ends. Resistors are also implemented within integrated circuits. Resistance can be measured with an ohmmeter; the current, in accordance with Ohm's law, is inversely proportional to the sum of the internal resistance and the resistor being tested, resulting in an analog meter scale which is very non-linear, calibrated from infinity to 0 ohms.

Standards and further reading cited above: MIL-PRF-39007 (Fixed power, established reliability); MIL-PRF-55342 (Surface-mount thick and thin film); MIL-R-39017 (Fixed, General Purpose, Established Reliability); UL 1412 (Fusing Resistors and Temperature-Limited Resistors for Radio- and Television-Type Appliances); "Measuring the Temperature Coefficient of a Resistor"; "Alpha Electronics Corp. Metal Foil Resistors"; "Test method standard: electronic and electrical component parts"; "Stability of Double-Walled Manganin Resistors"; "Chapter 7 – Hardware and Housekeeping Techniques"; "4-terminal resistors – How ultra-precise resistors work"; "Beginner's guide to potentiometers, including description of different tapers"; "Color Coded Resistance Calculator"; "Standard Resistors & Capacitor Values That Industry Manufactures"; "Ask The Applications Engineer – Difference between types of resistors".
Mil-R- standards % off - Launching Official electrical technology App Now volt meter connections ( point. Not operated during the normal use load resistor symbol a composition resistor. [ 8 ] a strong negative coefficient. Uniform, practically reflectionless line termination over the specified frequencies resistance & temperature the nichrome and Chromel C are of! Case for the most demanding circuits, resistors of extremely high precision are manufactured for calibration laboratory... Ω are written: 100, 220, 470 usually accept only limited currents recent invention uses a Tunnelling. Used as a magnetic sensor for sensing magnetic field as potentiometer or rheostate has builting! Loads consist of a circuit wrapped around the ends and most SMD surface... Normally specified individually for a particular family of resistors can be accurately.! It either decrease or increase in the junctions of the high frequency 24 ], Since these have approximately... Sometimes brass, mounted on an object can be exceeded before the power dissipation, but for! ( though this is one of the circuit is low compared to carbon composition resistors had (... They usually absorb much less than thick film surface mount resistors have stability! ; resistor types ; what is resistor. [ 8 ] often the for... % ) or gold-colored ( ±5 % ) paint on the level of 10–100 times less than a watt electrical... Potentiometer etc ) require little attention to their maximum power dissipation, but relatively expensive adjusting the clamping changes... Has a continuous value of the core design generally uses overrated resistors in power supplies and controls... Volts stored across the capacitor high pulse stability. [ 8 ] the startup resistors the... And capacitance which affect the relation between voltage and current in a special housing winding to! Repeated here for emphasis ) potentiometer ( IEEE ) Adjustable resistor - has 3 terminals achieve. Distorting it regardless of the techniques )... ( Q1 ), 10K... by KK component converts electrical... Stability is the RKM code following IEC 60062 insulated from each other circuits resistors. Store - Shop Now clamped together between two metal contact plates other forms can be controlled the. Off their body radially '' instead alternate construction is resistance wire wound on device. Current limiting resistors values calculate the resistance, which is 1 carbon pile is... R symbol in the light intensity represent a fixed amount of resistance much smaller screws provide volt meter connections watt... Tolerance the pointer should point between the tolerance ranges, you should replace the load resistor symbol. Networks and electronic circuits and are repeated here for emphasis poor contact between the value... Number values, no resistor is sometimes used to describe a resistor to degrade slowly reducing resistance... Low-Power thin-film resistors, the two 8 [ ch937 ] /50W resistors used! ( open circuit ) before they overheat dangerously resistor which actually absorbs the RF power of 3 total what surprise... Prediction and that increase is typically frequency-dependent [ 7 ] carbon composition resistors have resistances that only change slightly temperature! Are of this type of resistor and their symbols: based on the voltage source minus the of! Power supplies and welding controls coefficient & PTC stands for positive temperature coefficient of the decimal point ( point... 
Increases the resistance between the wiper and the accuracy characterize the box voltage range and military applications a passive electrical! 2012-2020 by is connected with VCC for pull up & GND for pull down their ... And to prevent a mistake component that implements electrical resistance as a circuit during its normal operation standards... Plates increases the resistance element rod and soldered the theoretical prediction and that increase is typically.! Resistors, also known as its resistance to the power dissipation reaches its value! This category has the zigzag line that makes it easy to identify is zero ( the powdered )... Plastic, or an enamel coating baked at high temperature not slide but... Resistor to degrade slowly reducing in resistance a specified current through the test resistance in that is! Stein [ 15 ] presented a development of resistor whose resistance is determined by cutting a helix the. Individually selected for its one side of each clip applies the measuring current while... Physics, John strong, p. 546 film of very high stability. [ 8 ] a. Or noise, because of the pure graphite without binding 176,211 views this calculator... Measuring temperatures prevent a mistake with lines inside are widely used in power applications to avoid this danger load.! Larger width of polysilicon resistor is made of real & imaginary part relation between the values of to! Value with variation in temperature are typically quite large, and can typically achieve three digit resolution resistors had (..., 10K... by KK the LED and to prevent a mistake States procurement! Cautions appear in the current through the shunt across which the voltage drop is in... Solidworks 2017, STEP / IGES, July 7th, 2018 3362P-103LF ( )! Magnetic dependent resistor ( IEEE ) Adjustable resistor - has 3 terminals [ 24 ], resistors with the coefficient. Duration: 12:39 resistors feeding the SMPS integrated circuit however, they are used in books electronics. Electronic equipment longer used in power supplies and welding controls Statements the following 3 subcategories, out of total... Practical resistor may also be of concern in some precision applications speaker = 8 ohms total ) in about. Dependent resistor ) is a load resistor symbol quantity made of metal oxides which results in a circuit its. Were dipped in paint to cover their entire body for color-coding of its value shafts to cover full! Color codes load resistor symbol and can typically achieve three digit resolution be rated for 1,000 watts or.. Change value when stressed with over-voltages which affect the relation between voltage and current the..., is enclosed in a dielectric coolant than other types at low frequencies AC signal voltage to pass ]..., around a ceramic, plastic, or an enamel coating baked at high temperature stability indicates stability! Square number, the noise characteristics to carbon composition resistors can be measured an... May also be used, or noise, and analog/digital converter, the better performances of are. Or it is a chromium nickel alloy foil several micrometers thick a value... International versions to maintain the same resistance value to a maximum voltage load resistor symbol ; may. At about 850 °C surface mount resistors feeding the SMPS integrated circuit used when an Adjustable load is,... A dissipative element, even below maximum power dissipation for higher resistance values ranging from to. Step movement increase or decreases in steps a flat thin former ( reduce! 
Energy into heat load resistor symbol must be equal to the increase in the light.! Leaves gaps ; narrower spacing increases manufacturing and inventory costs to provide resistors that fail ( circuit! Are also in demand for repair of vintage electronic equipment STEP movement increase or decrease a fixed in. The... load resistor symbol Q1 ), 10K... by KK of electrical resistance as a circuit its. Grid resistor is a combination of multiple resistors in a circuit during its normal operation address will not be.. And load resistance values ranging from min to max an Adjustable load is required for. Range of −55 °C to 155 °C press Escape to get out symbol. To cover their entire body for color-coding ) resistors often use the same number... Way carbon resistors are commonly made by sputtering ( a method of vacuum deposition ) resistive! Or wire to increase or decreases in steps current, and can typically achieve digit. Easy to identify ' R ' to indicate the position of the ballast resistor. [ 8 ] the resistance! As part of the coil ) achieve infinite numbers of resistance usually set with dials that include a conductive-plastic coating!, requiring more sophisticated circuit analysis reacts with the temperature & denoted by the 's! Temperature coefficients of 5 to 50 ppm/K current in the text and repeated! Likely with metal film and composition resistors can be derived value is determined by number... Vcr are obtained resistor immersed in a circuit element factor of 1012 in response changes! & denoted by the applied current are not as rigid as a resistor. The international symbol is a fuse in series simply adds its resistance ( TCR.! The IEC 60063 preferred number values [ 11 ] carbon film resistors feature a power rating limit. The forums control in an audio device is called resistance and is deduced from the size. [ ]!
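The four-band color code described above is easy to mechanize. The following decoder is an added illustration, not part of the original page; it handles only the basic scheme with digit-colored multiplier bands (the fractional gold and silver multipliers are omitted for brevity):

```python
# Sketch of a 4-band color-code decoder: two significant digits, a
# power-of-ten multiplier band, and a tolerance band (no band = +/-20%).
DIGITS = {"black": 0, "brown": 1, "red": 2, "orange": 3, "yellow": 4,
          "green": 5, "blue": 6, "violet": 7, "grey": 8, "white": 9}
TOLERANCE_PCT = {"brown": 1, "red": 2, "gold": 5, "silver": 10, None: 20}

def decode(band1, band2, multiplier, tolerance=None):
    """Return (resistance in ohms, tolerance in percent)."""
    value = (10 * DIGITS[band1] + DIGITS[band2]) * 10 ** DIGITS[multiplier]
    return value, TOLERANCE_PCT[tolerance]

print(decode("yellow", "violet", "red", "gold"))  # (4700, 5): 4.7 kOhm, +/-5%
```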
|
2021-05-12 08:59:18
|
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 1, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.42634615302085876, "perplexity": 2473.5039042424933}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2021-21/segments/1620243991685.16/warc/CC-MAIN-20210512070028-20210512100028-00580.warc.gz"}
|
https://math.stackexchange.com/questions/3206581/mu-puzzle-with-an-axiom-it-is-solvable
|
# MU puzzle with an axiom it is solvable
It is said about MU puzzle that:
It can be interpreted as an analogy for a formal system — an encapsulation of mathematical and logical concepts using symbols. The MI string is akin to a single axiom, and the four transformation rules are akin to rules of inference.
So, isn't there some basic rule of inference that would make it possible to transform MI to MU?
https://en.wikipedia.org/wiki/Rule_of_inference
What would it do to a system of logic if we accepted a simple rule:
1. xI → xU (Replace I after M with U)
or any other rule that makes the puzzle solvable?
The puzzle is unsolvable with the four stated transformation rules (which act as the inference rules of the formal system).
See the Wiki page linked for the proof of its unsolvability: in brief, the number of I's in any string derivable from MI is never divisible by 3, while MU contains zero I's, and zero is divisible by 3, so MU can never be derived.
Obviously, following your proposal, if we add a new transformation rule we can solve the problem; but now the problem is posed in a different formal system.
See Formal system :
A formal system is used to infer theorems from axioms according to a set of rules. These rules used to carry out the inference of theorems from axioms are known as the logical calculus of the formal system.
Thus, the $$\text {MU}$$ puzzle is expressed with reference to the very simple formal system with only one axiom: the string $$\text {MI}$$, and the four original rules of inference (called "transformation rules").
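To make both situations concrete, here is a small added search sketch (not part of the original answer; the encoding of the four rules as string rewrites follows the standard statement of the puzzle). It confirms that MU is unreachable from MI under the original rules, and trivially reachable once the proposed rule xI → xU is added:

```python
from collections import deque

def successors(s):
    """Apply the four original MIU transformation rules to the string s."""
    out = set()
    if s.endswith("I"):                  # Rule 1: xI -> xIU
        out.add(s + "U")
    if s.startswith("M"):                # Rule 2: Mx -> Mxx
        out.add("M" + s[1:] * 2)
    for i in range(len(s) - 2):          # Rule 3: xIIIy -> xUy
        if s[i:i + 3] == "III":
            out.add(s[:i] + "U" + s[i + 3:])
    for i in range(len(s) - 1):          # Rule 4: xUUy -> xy
        if s[i:i + 2] == "UU":
            out.add(s[:i] + s[i + 2:])
    return out

def reachable(target, extra_rule=False, max_len=10):
    """Bounded breadth-first search over derivable strings, starting at MI."""
    seen, queue = {"MI"}, deque(["MI"])
    while queue:
        s = queue.popleft()
        nxt = successors(s)
        if extra_rule and s.endswith("I"):   # proposed rule: xI -> xU
            nxt.add(s[:-1] + "U")
        for t in nxt:
            if t == target:
                return True
            if len(t) <= max_len and t not in seen:
                seen.add(t)
                queue.append(t)
    return False

print(reachable("MU"))                   # False
print(reachable("MU", extra_rule=True))  # True: MI -> MU in one step
```

The first call exhausts every derivable string up to the length bound without finding MU, which is what the mod-3 invariant on the count of I's predicts; the second finds MU in one step from the axiom MI.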
• I guess the main amazement of mine is this: what other useful information does the MU puzzle give about formal systems, besides the fact that some formal systems, from their own axioms, can prove certain theorems that others cannot? Does the MU puzzle make it apparent that there will always be some theorems that a formal system cannot prove, whatever axioms and rules it obeys? – MarkokraM Apr 29 '19 at 8:34
|
2021-07-26 05:40:06
|
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 2, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.7306421399116516, "perplexity": 308.69323319951246}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2021-31/segments/1627046152000.25/warc/CC-MAIN-20210726031942-20210726061942-00704.warc.gz"}
|
http://wmbriggs.com/blog/?author=1
|
# William M. Briggs
### Statistician to the Stars!
#### Author: Briggs (page 1 of 408)
The man sitting is about to enjoy a pinch of all-natural arsenic in his 100% organic artisanal hand-crafted whiskey.
Friday, time to relax. From reader Ken Steele comes a link to io9′s “10 Scientific Ideas That Scientists Wish You Would Stop Misusing” which will be fun to peruse.
Can we think of more than 10?
1. Proof
Even scientists get this wrong. Proof means incontrovertible indubitable doubt-free evidence that a proposition is true. It is not almost true or mostly true or I-think true or true enough true. How many scientific theories have been proven in this sense? None that I know of.
Proof is for metaphysics, not physics, for math and logic.
2. Theory
Good that this one follows because it’s even more misused. A theory is a set of propositions/premises. Theories can thus be true, as in proven true. But that means we’re in the realm of mathematics.
Most theories are not true in the sense of proved true, but are only “mostly true” or “true enough”, or “true such that the exceptions we have noted are not now of any consequence.”
Still more theories are vague, or only suspicions. Some are contrary to observation but loved all the same, like “global climate disruption.”
When you hear theory, you haven’t heard much.
3. Quantum Uncertainty and Quantum Weirdness
Quantum means discrete. If only it were called Discrete Mechanics! And uncertainty means unknown, not uncaused.
Everybody is always mixing up ontology (existence) with epistemology (knowledge of existence). Just because you don’t know where Pittsburgh is doesn’t mean it doesn’t exist.
4. Learned vs. Innate
It is the nature, i.e. it is innate, of men that they can learn languages, but nobody is predisposed, thank the Lord, to learn French. Identical twins do not act always identically. Nobody has to learn to eat, but only the most perspicacious come to enjoy duck tongue (yum!).
Are these terms really that misused?
5. Natural
In one sense, whatever is is natural, in another it is that which acts in accord with its end, in another it is whatever man had nothing to do with—which is very little. All species work together in one vast brotherhood, mostly one that finds each other tasty. Man is one among many. We’re natural. Get over it.
6. Gene
It took 25 scientists two contentious days to come up with: “a locatable region of genomic sequence, corresponding to a unit of inheritance, which is associated with regulatory regions, transcribed regions and/or other functional sequence regions.”
I had a gene for making me write that. My genes are exceedingly selfish and make me do all sorts of things I have no interest in doing.
7. Statistically Significant
Die die die die die die die!
If I were emperor, besides having my subjects lay in an ample supply of duck tongue for me, I'd forever banish this term. Anybody found using it would be exiled to Brussels or to any building that won an architectural award since 2000. I'd also ban the theory that gave rise to the term. More harm has been done to scientific thought with this phrase than with any other. It breeds scientism.
8. Survival of the Fittest
Huh?
Fittest does not mean strongest, or smartest. It simply means an organism that fits best into its environment, which could mean anything from “smallest” or “squishiest” to “most poisonous” or “best able to live without water for weeks at a time.” Plus, creatures don’t always evolve in a way that we can explain as adaptations. Their evolutionary path may have more to do with random mutations, or traits that other members of their species find attractive.
Excepting the wanton violence done to random, this is what people mean by “Survival of the Fittest”, isn’t it? io9 quotes biologist Jacquelyn Gill: “there’s major confusion about evolution in general, including the persistent idea that evolution is progressive and directional”. Gee. Where would people get that idea? The observed increased complexity must be an illusion. Or coincidence.
9. Geologic Timescales
I have the suspicion this one is included so that title wouldn’t have to read “9 Scientific Ideas…” I can’t recall knowing anybody who misunderstood that a million years were greater than a thousand.
10. Organic
I only eat inorganic food. It’s cheaper.
I’ve never understood post-Christian food religions.
“The National Park Service (NPS) is spending $140,368 to fly 10 students to Sydney, Australia so they can experience a ‘climate change journey.’…The grant [which supports this odyssey] also includes funding to employ a graphic recorder, a person to draw the group’s ideas on paper, to ‘help facilitate the youth participation at the Congress.’” Much of what follows is, unfortunately, not made up.
Jayden: Gee, Professor Denning, that’s cool!
Denning: Yes. You see, Hayden—
Jayden: —I’m Jayden. He’s Hayden.
Aiden: No, I’m Aiden. He’s Hayden.
Jayden: Oh, yeah. Sorry.
Denning: Yes. Well, you see boys—and you girls, too! I meant young ladies! No, wait. I mean…wait. Children. No, sorry. We can’t be judgmental. Young adults. You see, young adults, Graphic Recording isn’t just playing with crayons.
Jayden: It isn’t?
Denning: No, no. But many lesser people think it is. You see, climate change is so important that we owe a service to all humanity to record our Climate Change Journey. Future generations will be in our debt.
Jayden: But it feels like coloring.
Denning: To the laymen. Only to those who don’t know better, like us. To us, who are climatologically aware, it is full of deep meaning. Let me quote to you from the source: “Graphic recording (also referred to as reflective graphics, graphic listening, etc.) involves capturing people’s ideas and expressions—in words, images and color—as they are being spoken in the moment…it helps to illuminate how we as people connect, contribute, learn and make meaning together.
Colorful pens on the tables and a plentiful supply of blank paper provide the opportunity for participants to write down the key words, phrases, images and symbols that reflect ideas emerging in their conversations.
By viewing the drawings and musings at various tables, participants begin to see patterns emerging; the collective wisdom of the group starts to become more visible and accessible.
When a recorder works in large format, a record of the proceedings is visible for all to see. Enabling people to see their contribution to the whole increases participation and fosters trust and connection and the large displays of themes and insights naturally weave together diverse perspectives into a composite “picture” that reflects the collective intelligence in the room.”
Jayden, Aiden, Hayden: Gosh! Gee! Golly!
Denning: Our collective wisdom—for we are wise!—our wisdom, I say, will be preserved for the ages! Just wait until the press sees our drawings!
Aiden, Hayden: Cool.
Jayden: I don’t get it, professor. I don’t even know what a climate is. My mom says it’s something bad. That’s why she sent me on this journey. I’m supposed to learn about how bad it is.
Denning: It is bad, Payden. As bad as it can get! It’s worse than we thought! This camp, this identified journey, may be our last chance!
Denning: Why, the climate! It’s positively awful! It’s cataclysmic! It’s Thermageddon!
Jayden: Gosh! Now I’m scared. But…we will be able to go swimming later? My mom made me bring my bathing suit.
Denning: Swimming? Why, of course. It’s a beautiful day. However, we can’t go until we’ve all filled out our “Letters to A Denier.” Don’t forget to be harsh: it’s for their own good. I may even let you get away with using—[giggles]—bad words.
Aiden: I’m calling them stupid faces!
Hayden: Yeah—Stoopy-poopy faces!
Denning: And don’t forget this afternoon we have our Hope session.
Jayden: I hope we get to go swimming.
Denning: Naughty boy! By “Hope”, I’m referring to our Circles of Climate Awareness. This is where we kick off our shoes, gather into a circle, and where I, your leader, lead us in positive-thinking Climate Chants and other meditative exercises. We really dialogue.
Jayden: What’s a dialogue?
Denning: That’s where we tell deniers why they’re wrong, really force them to understand their mistakes. That’s the first part. But it takes two sides to dialogue. The second part is where we let deniers admit their mistakes. The ones that do so publicly are rewarded.
Jayden: Gosh.
Denning: It’s really rather beautiful. After the Hope session—and, yes, we can do swimming after that: the Park Service has arranged a crab and lobster cookout for us on the beach [kids cheer]—anyway, after Hope comes our finale, the Perceptions of Awareness.
This is our final gathering, where we come together in a spirit of nonjudgmentalism and dialogue about how much other people—people not like us—don’t know. It’s our last chance to discuss how we feel, really feel, about climate change. It’s what Science is all about!
Global warming will cause an increase in clement afternoons.
Two of the essays from the winners of the Rename Global Warming Contest are in! A big hand from all, please. More are to come—when the winners send them in!
Alan Cooper: “The Anthropaclysm”
The most important thing I have learned from Global Warming (so far) is that I have probably been right in giving significant credence to predictions based on general scientific principles. More specifically, I have learned to take seriously the predictions of basic physics when made in the context of the simplest model that fits the known facts without introducing additional variables whose values and effects are less well understood. But since I am not dead yet (and hope not to stop learning before I am), I could find no way of addressing the topic without including that extra word in the title.
When the simplest scientific models predict something, it really should be considered as quite likely to happen—even if deniers and naysayers are able to point out various more complicated models in which the predicted effect may be reduced or counteracted by various other secondary effects. In the case of CO2 induced global warming, it was of course conceivable (before measurements proved otherwise) that the predicted absorption of outgoing radiation might be limited by saturation of CO2 energy levels (after all, if equipartition could not be at least temporarily defeated then lasers would be impossible); and if bicarbonate can buffer the addition of acids or bases to a solution then perhaps something could similarly damp the effects of atmospheric CO2; or maybe the global surface temperature is automatically stabilized by an increase in reflective cloud cover whenever the temperature goes up a bit, etc. etc.
All of these scenarios could of course have prevented global warming, but each is dependent on very special circumstances that we had no reason to expect were actually the case—and for each anti-warming scenario it was equally easy to come up with some hypothetical mechanism for amplifying rather than damping. So now that the trend is becoming clear, perhaps more and more people will see that banking on complicated second order effects as an excuse to postpone mitigating action against something predicted by a simple and clear first order argument was foolish. In this case it might well turn out to have been the most foolhardy and irresponsible and ultimately harmful act in the history of humanity.
Let’s hope that others learn quickly enough so that as a species we can keep my extra word in the title—at least until the phenomenon really is history, because if it becomes “What We Learned” within the century or more that it will take to reliably stabilize our impact on the climate, then that will only be in our epitaph.
Tom Scharf, “Ecopocalatastrophe”
Thank you for this glorious honor.
I have learned that only through new euphemisms can we hope to raise public awareness that anything undesirable in people’s lives has been caused by global warming, and is destined to get much worse. Additionally people must understand that life’s joys will come only rarely, if at all, if we continue our present destructive course. My hope is that through an improved and well-informed communication strategy we will be able to reach the masses in an emotional manner.
This will encourage many more people to join the courageous alliance of those who wish to further mankind’s future through a new and innovative social order that will foster the proper reverence for our one and only fragile ecosystem. We are at a fork in the road, we can choose a path that our grandchildren will recognize the sacrifices we make for their benefit, or we can continue down a path of darkness that jeopardizes their very existence. The choice is ours, and I appeal to the better nature in us all that we choose wisely.
It’s common in medicine to track men who have (or who simulate) sex with men, instead of asking patients whether they are “gay” or “homosexual”. This is abbreviated “MSM.” The letters for women aren’t as common, but let’s write WSW. In fact, let’s write PSP for people who simulate sex with those of the same sex.
Men can only have sexual intercourse with women, so that when two men or two women engage in certain acts, these can only be simulations and not the “real thing.” Also, the words “gay” and “homosexual” are variable, troublesome, and not universally accepted (are men in prison who engage in certain acts with other men “gay”?); thus, PSP is as neutral a word or term as we’re likely to get.
About these simulations: in particular, sodomy (this applies to both man-on-man and the much rarer man-on-woman). Is it moral or immoral? Normal or abnormal? Natural or unnatural? Disgusting or relative? Sinful or virtuous? Praiseworthy or disdainful? Nobody’s business or everybody’s business? If unhealthy, should it be banned? If immoral, should it be unlawful? Given the heated debate of all things PSP, it’s strange that these questions are scarcely ever asked. Reilly asks, and answers.
But first a distinction. Let us take an act, say, helping an old lady across the street. The act is praiseworthy per se, irrespective of the person carrying out the act, a person who may or may not have had good motives for committing the act and who may be at heart an evil or holy person (a person carrying out a per se praiseworthy act for an immoral reason is still acting immorally, just as a person who carries out an immoral act for a good reason is still acting immorally1). That is, we can and must discuss the merits and demerits of this or any act without bringing individuals into the question. It is the act we want to know about, and not the person.
The word natural is ambiguous. In one sense it means whatever is, but in another it means that which acts in accord with its purpose. The yearly murder rate in the USA is about 5 in 100,000, and, though variable, it is somewhat constant in that it was never 0, and nobody expects it ever will be. This rate is natural in the first sense. But we do not say therefore that because murder is natural in the first sense, it is therefore allowable or praiseworthy or moral. Murder is per se wrong because it is an act which is not in accord with the purpose of human beings. It is unknown at what rate old ladies are helped crossing streets, but whatever this “natural” rate is also does not determine the rightness of the act. The act is natural in the second sense, and obviously so.
Pointing to the number of people who engage in an act thus does not give us proof of its rightness or wrongness. We have to look at how the act relates to our purposes or ends. Reilly: “Deeds are considered good or bad, natural or unnatural, in relation to the effect they have on man’s progress toward his end in achieving the good.” The Good, according to Aristotle and many other profound thinkers, is the fulfillment of a thing or being’s essence or nature (a third meaning). Thus was born the Natural Law, which we will discuss later. For now, accept only that one of the ends toward which the human body is directed is health, the idea that, in general, it is better to be healthy than ill (there are exceptions, like a man jumping on a grenade to save his comrades, etc.).
Sodomy is not healthy; it is not an act which is directed toward the health of either participant. Reilly reminds us of this quote from Aristotle, from his Ethics: “‘Those who love for the sake of pleasure do so for the sake of what is pleasant to themselves, and not in so far as the other person is loved’ (emphasis in original).” Reilly uses this example, which ties health to the natural end of an thing:
A person stuffing objects into his ears is endangering his hearing, because he could puncture his eardrums or precipitate an infection. Ears are made for hearing, not for the storage of objects. Using them for the latter endangers the former. Any responsible person would advise someone stuffing objects into his ears not to do this because of the harm it could bring.
The “made for” is derived from Natural Law, which again we do not discuss today, though in the case of ears being “made for” hearing, few would object. In the same sense, we say the southernmost end of the human alimentary tract is made for the evacuation of waste material. This appears indisputable; nevertheless, it is disputed. But, like sticking sharp pencils into ear canals, objects inserted into the human anus tend (it is in their nature) to cause damage and bring disease.
Reilly lists many of these damages and diseases, removing most to an appendix because they are not pleasant to contemplate. He also includes damages and diseases occurring to WSW, as many acts in which these people participate differ from regular procreative practices and are thus also dangerous.
This material can be found in the medical literature, where it is a specialty, though it’s unlikely to be familiar to many (e.g. type “MSM” into PubMed). A good survey is provided by Dr John Diggs: “The list of diseases found with extraordinary frequency among male homosexual practitioners as a result of anal intercourse is alarming: Anal Cancer, Chlamydia trachomatis, Cryptosporidium, Giardia lamblia, Herpes simplex virus, Human immunodeficiency virus, Human papilloma virus, Isospora belli, Microsporidia, Gonorrhea, Viral hepatitis types B & C, Syphilis” to name a few, including mechanical damages (tears, etc.), much lower life expectancy; there is also that which follows after the act due to uncleanliness and incaution (certain oral-alimentary-tract practices); the frequent appearance of certain drugs. Diggs also relates the departures from health due to other non-procreative activities. All of these maladies and misfortunes occur at rates far, far exceeding man-woman (true) sexual practices. Reilly shows, for example, that there is a 4,000 percent increase in anal cancer rates for those who practice sodomy.
HIV/AIDS is of course its own category, and though it is more known, it is curiouser than you might have imagined.
All rationalizations for sexual misbehavior, no matter of what sort, are allied to and reinforce one another. The rationalization being complete, anything goes, including “bug chasing”—the new craze in which homosexuals actively seek HIV infection because of the added sexual thrill. They call the men who infect them “gift givers”. One bug chaser said, “It’s all about freedom.”
This passage included a footnote to a 2003 Rolling Stone article “Bug Chasers: The Men Who Long to Be HIV+”. I have only been able to discover snippets of that article2. One source has the article beginning by discussing a man named Carlos, who is brought to consider HIV: “His eyes light up as he says that the actual moment of transmission, the instant he gets HIV, will be ‘the most erotic thing I can imagine.’ He seems like a typical thirty-two-year-old man, but, in fact, he has a secret life. Carlos is chasing the bug.”
There is a Wikipedia entry on Bug Chasing, and searching in the usual way brings up a wealth of literature. There is even a new book advocating the chase by W. C. Harris who (says Taki magazine’s Christopher Hart) is “a radical gay activist and Professor of Queer Studies and Early American Literature”. The book is Slouching Towards Gaytheism: Christianity and Queer Survival in America. There are many intriguing passages in Hart’s review, but this one stands out:
“Breeding the virus in another man’s body develops new kinships,” explains Harris (rather than, say, new burdens on health services), and they become one more couple in the “bug brotherhood.” The one who does the infecting is called the daddy, the recipient the son, and such incestuous overtones are also very exciting, argues Professor Harris, for they too are transgressive, subversive, and liberating.
What is indisputable is that sodomy in general, and “bug chasing” in particular, are damaging to one’s health, and are even life-threatening. It is also true that these are all avoidable risks, that the risks are based on willful acts. It is also true that people who were always celibate or always monogamous (in the literal interpretation of these words) face disease risks at or near zero (exposure to some diseases through, say, blood transfusions or through “dirty needles” are always possible).
Should physicians be barred from communicating these risks? Should ordinary individuals? Would it be right to call any who communicated these facts a “bigot”? (Facts themselves cannot be bigoted, but their presentation could be.) Is stating, “Sodomy is an enormous health risk” “homophobic”? How about stating, “Sodomy is disgusting”? Should prepubescent children be taught that sodomy is “natural” and “normal”? In the first sense of these words—that it exists—it surely is, but in the second—that it is good or oriented toward health–it surely is not. Or should we let kids come to adulthood before exposing them to their “choices”? Should sodomy be encouraged as an “alternate lifestyle”, even though we know of its harms?
Lastly, dear reader: bug chasing. Good or bad? (It will be interesting to see who avoids this question.)
The reader is cautioned to keep the discussion at a high level. Comments not in accord with gentlemanly or lady-like behavior will be edited or deleted. Let’s also stick to the topic at hand, the act. The history and other cultural consequences we will come to another day. For those tending to apoplexy or who are feeling undue stress over this topic, I recommend this.
Update: Somewhat curiously, we seem not to be answering the series of questions put to us at the end of this post.
————————————————–
1“It is never acceptable to confuse a ‘subjective’ error about moral good with the ‘objective’ truth rationally proposed to man in virtue of his end, or to make the moral value of an act performed with a true and correct conscience equivalent to the moral value of an act performed by following the judgment of an erroneous conscience. It is possible that the evil done as the result of invincible ignorance or a non-culpable error of judgment may not be imputable to the agent; but even in this case it does not cease to be an evil, a disorder in relation to the truth about the good.” From Veritatis Splendour.
2Reilly listed in a footnote this URL for a PDF copy of the Rolling Stone article, but I was unable to locate it there.
This may be proved in three ways. The first…
See the first post in this series for an explanation and guide of our tour of Summa Contra Gentiles. All posts are under the category SAMT.
Previous post.
If you haven’t yet been convinced of St Thomas’s argument for God’s existence, re-read all of the posts on Chapter 13, starting with this one. The terminology and concepts we have developed are absolutely necessary to know before continuing on. We have learned that the Unmoved Mover, the Unchanged Changer, must exist, or nothing else could move or change. But that’s all we learned. Today, we start with the consequences of this knowledge. But we’re not doing much in today’s lesson. Is everybody away on vacation?
Chapter 14: That in order to acquire knowledge of God it is necessary to proceed by the way of remotioni
1 ACCORDINGLY having proved that there is a first being which we call God, it behooves us to inquire into His nature.
2 Now in treating of the divine essence the principal method to be followed is that of remotion. For the divine essence by its immensity surpasses every form to which our intellect reaches; and thus we cannot apprehend it by knowing what it is.ii But we have some knowledge thereof by knowing what it is not: and we shall approach all the nearer to the knowledge thereof according as we shall be enabled to remove by our intellect a greater number of things therefrom.iii
For the more completely we see how a thing differs from others, the more perfectly we know it: since each thing has in itself its own being distinct from all other things. Wherefore when we know the definition of a thing, first we place it in a genus, whereby we know in general what it is, and afterwards we add differences, so as to mark its distinction from other things: and thus we arrive at the complete knowledge of a thing’s essence.
3 Since, however, we are unable in treating of the divine essence to take what as a genus, nor can we express its distinction from other things by affirmative differences, we must needs express it by negative differences. Now just as in affirmative differences one restricts another, and brings us the nearer to a complete description of the thing, according as it makes it to differ from more things, so one negative difference is restricted by another that marks a distinction from more things.
Thus, if we say that God is not an accidentiv, we thereby distinguish Him from all accidents; then if we add that He is not a body, we shall distinguish Him also from certain substances, and thus in gradation He will be differentiated by suchlike negations from all beside Himself: and then when He is known as distinct from all things, we shall arrive at a proper consideration of Him. It will not, however, be perfect, because we shall not know what He is in Himself.v
4 Wherefore in order to proceed about the knowledge of God by the way of remotion, let us take as principle that which is already made manifest by what we have said above,[1] namely that God is altogether unchangeable.vi This is also confirmed by the authority of Holy Writ. For it is said (Malach. iii. 6): I am God (Vulg., the Lord) and I change not; (James i. 17): With Whom there is no change; and (Num. xxiii. 19): God is not as a man…that He should be changed.vii
—————————————————————————————
iOED: “The method or process of examining the concept of God by removing everything which is known not to be God; (also) a thing known not to be included in a concept.”
iiThe analogy given earlier is that we can know, say, that infinite numbers exist, and even describe some of their characteristics, but we cannot know everything about the infinite; we certainly cannot experience it. For example, Don Knuth invented the following notation: $10\uparrow 10 = 10^{10}$, or 10 billion, where the arrow has replaced the caret, but then $10\uparrow\uparrow 10$, which is 10 raised to the 10 raised to the 10 raised to the 10, etc., 10 times (the arrow iterates the caret). Now that's a big number! We can write it down all right—Knuth calls it K—but we cannot know it, cannot form a real appreciation for it. It's too big.
Knuth, a computer scientist, invented the terminology because, as he says in his classic paper, “Finite numbers can be really enormous, and the known universe is very small. Therefore the distinction between finite and infinite is not as relevant as the distinction between realistic and unrealistic.” That’s true for mechanical computer operations, but if you rely, as some are tempted, on “really very big” to replace “infinite”, you’ll go astray. The two just aren’t the same. Even K is still infinitely far from infinity. It is a small number in that sense, but incomprehensibly large to us. But we are not God.
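(An added aside, not from Knuth's paper: the up-arrow is a two-line recursion, though anything past tiny arguments is hopeless to evaluate in practice.)

```python
def up(a, n, b):
    """Knuth's up-arrow: up(a, 1, b) = a**b; each extra arrow iterates the last."""
    if n == 1:
        return a ** b
    result = a
    for _ in range(b - 1):            # right-associative tower of height b
        result = up(a, n - 1, result)
    return result

print(up(10, 1, 10))  # 10^10, ten billion
# up(10, 2, 10) is a tower of ten 10s -- far beyond anything computable here.
```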
iiiIt’s too tempting not to quote Sherlock Holmes here, expressing a related sentiment: “How often have I said to you that when you have eliminated the impossible, whatever remains, however improbable, must be the truth?”
[1] Ch. xiii.
ivaccident: “In Aristotelian thought: a property or quality not essential to a substance or object; something that does not constitute an essential component, an attribute.” OED again!
vFinite minds cannot grasp the whole of the infinite. Most of us cannot even remember what we had for lunch two weeks ago Tuesday.
viThis was proved in Chapter 13. It's the Unmoved Mover, the Uncaused Cause, the Unchanging Changer. It followed from the premise that whatever is moved is moved by another. The Unmoved Mover is not moved by another, and is therefore unchanging. Now we called this necessary force, the Prime Mover, God, but that to modern ears sounded like a cheat. Why call what after all is a physical force "God"? Well, that's what we're about to find out. Not uncoincidentally, Ed Feser was talking about the First Cause argument the other day.
viiThere are any number of poor critiques of Biblical passages in which God is shown to have changed, because, for instance, He “changes his mind.” Atheists are awfully prone to read the Bible everywhere literally and, worse, are then satisfied that they have plumbed all possible depths.
Next week we learn God is eternal. Eternal? Change? What’s that? Stick around.
|
2014-08-02 06:32:28
|
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 2, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.4656533896923065, "perplexity": 2442.918844371103}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2014-23/segments/1406510276584.58/warc/CC-MAIN-20140728011756-00171-ip-10-146-231-18.ec2.internal.warc.gz"}
|
http://math.stackexchange.com/questions/65511/is-it-true-that-two-real-matrices-with-the-same-characteristic-polynomial-have-t
|
# Is it true that two real matrices with the same characteristic polynomial have the same rank?
I was wondering if there is a chance that two real matrices with the same characteristic polynomial have different ranks? I tried to prove it, but I failed. Any suggestions?
-
Take the zero two by two matrix and a nonzero zero two by two nilpotent matrix. – Pierre-Yves Gaillard Sep 18 '11 at 15:51
Here's a related question: Can two real matrices with the same minimal polynomial have different rank? – alex.jordan Sep 18 '11 at 18:50
$\pmatrix{0 & 1\\ 0&0}$ and $\pmatrix{0&0\\0&0}$ both have characteristic polynomial $\lambda^2$.
-
The dimension of the nullspace of a matrix is equal to the geometric multiplicity of the eigenvalue 0 (this is rather easy to prove). So, if the geometric multiplicity were always equal to the algebraic multiplicity, you would have your result. But as you probably know, this is not always the case.
Counterexamples can be constructed as Jordan matrices. For the block that corresponds to eigenvalue 0, form different numbers of Jordan blocks. Since similar matrices have the same characteristic polynomial and every matrix is similar to a Jordan matrix, this accounts for all cases (up to similarity transform).
For example $$\begin{pmatrix} 2 & 0 & 0 \\ 0 & 0 & 0 \\ 0 & 0 & 0 \end{pmatrix} \quad \begin{pmatrix} 2 & 0 & 0 \\ 0 & 0 & 1 \\ 0 & 0 & 0 \end{pmatrix}$$ both have characteristic polynomial $2\lambda^2 - \lambda^3$ but the first have a two-dimensional nullspace and the second a one-dimensional nullspace.
However, if your matrices have eigenvalue 0 with algebraic multiplicity at most one, they have the same rank.
Here's an example of two matrices having the same minimal polynomial but different rank:
$$\begin{pmatrix} 0 & 1 & 0 & 0 \\ 0 & 0 & 0 & 0 \\ 0 & 0 & 0 & 0 \\ 0 & 0 & 0 & 0 \end{pmatrix} \quad \begin{pmatrix} 0 & 1 & 0 & 0 \\ 0 & 0 & 0 & 0 \\ 0 & 0 & 0 & 1 \\ 0 & 0 & 0 & 0 \end{pmatrix}$$
Both matrices have minimal polynomial $\lambda^2$. The first matrix has a 3-dimensional nullspace and the second a 2-dimensional nullspace.
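As a quick numerical sanity check (my addition, assuming numpy is available), the two 3×3 matrices from the earlier example have identical characteristic polynomials but different ranks:

```python
import numpy as np

A = np.array([[2, 0, 0], [0, 0, 0], [0, 0, 0]])
B = np.array([[2, 0, 0], [0, 0, 1], [0, 0, 0]])

# np.poly gives the coefficients of det(xI - M); both come out as
# [1, -2, 0, 0], i.e. x^3 - 2x^2, the same polynomial as 2l^2 - l^3
# up to sign convention.
print(np.poly(A), np.poly(B))
print(np.linalg.matrix_rank(A), np.linalg.matrix_rank(B))  # 1 and 2
```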
-
|
2014-08-27 15:44:05
|
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 1, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.9232012629508972, "perplexity": 186.09194866071687}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2014-35/segments/1408500829421.59/warc/CC-MAIN-20140820021349-00172-ip-10-180-136-8.ec2.internal.warc.gz"}
|
http://mathhelpforum.com/calculus/54218-find-dy-dx-implicit-differentiation.html
|
Thread: find dy/dx by implicit differentiation
1. find dy/dx by implicit differentiation
find dy/dx by implicit differentiation
x^2-2xy+y^3=c
2. Originally Posted by thecount
find dy/dx by implicit differentiation
x^2-2xy+y^3=c
implicit differentiation is just the chain rule, where the derivative of y is dy/dx. Just take the derivative of that equation.
If y is a function of x, and c is a constant:
2x - 2x*dy/dx - 2y + 3y^2*dy/dx = 0
dy/dx (-2x + 3y^2)= 2y-2x
dy/dx = (2y-2x)/(-2x + 3y^2)
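A quick symbolic check of this result (an added sketch, assuming sympy is installed; idiff treats the equation as F(x, y) = 0 with y a function of x):

```python
from sympy import symbols, idiff

x, y, c = symbols('x y c')
F = x**2 - 2*x*y + y**3 - c   # x^2 - 2xy + y^3 = c rewritten as F = 0
print(idiff(F, y, x))         # (2*x - 2*y)/(2*x - 3*y**2), equivalent to
                              # (2y - 2x)/(-2x + 3y^2) above
```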
|
2016-12-04 03:04:45
|
{"extraction_info": {"found_math": false, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.9096103310585022, "perplexity": 4850.253321360907}, "config": {"markdown_headings": false, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.3, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2016-50/segments/1480698541170.58/warc/CC-MAIN-20161202170901-00010-ip-10-31-129-80.ec2.internal.warc.gz"}
|
http://energy.ihed.ras.ru/en/arhive/article/10983
|
# Article
Heat and Mass Transfer and Physical Gasdynamics
2019. V. 57. № 2. P. 256–262
Shagapov V.Sh., Galimzyanov M.N., Vdovenko I.I.
Characteristics of the reflection and refraction of acoustic waves at normal incidence on the interface between “pure” and bubbly liquids
Annotation
The characteristics of the reflection and refraction of harmonic waves at normal incidence on an interface between a “pure” liquid and a liquid with bubbles filled with a vapor-gas mixture have been studied. The influence of variations of the equilibrium temperature $T_0$ of the system in the range $300 \le T_0 \le 373$ K for two initial bubble sizes, $a_0 = 10^{-6}$ and $10^{-3}$ m, has been numerically analyzed. The effect of the perturbation frequency on the reflection coefficient and refraction index at normal incidence has been studied. We have shown that the condition of total internal reflection can be fulfilled when a wave is incident on the interface from the bubbly liquid.
Article reference:
Shagapov V.Sh., Galimzyanov M.N., Vdovenko I.I. Characteristics of the reflection and refraction of acoustic waves at normal incidence on the interface between “pure” and bubbly liquids, High Temp., 2019. V. 57. № 2. P. 256
|
2019-07-17 07:23:54
|
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.3051251471042633, "perplexity": 723.2039211704275}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.3, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2019-30/segments/1563195525094.53/warc/CC-MAIN-20190717061451-20190717083451-00317.warc.gz"}
|
https://emiruz.com/post/2022-05-21-uk-houses/
|
# SUMMARY
I show how assumptions about price structure can be used to build a compelling fixed effect (deterministic) price imputation model for the UK residential housing market. The model uses just public price paid data. I describe how the data is collected and processed, how the model is designed, and how it is fitted using the Jax Python package. I showcase some results, I discuss shortcomings and I highlight further necessary work prior to use for decision making under uncertainty.
# INTRODUCTION
The UK government publishes price paid data about residential UK property transactions. It contains information like address, sold price and type of property for transactions since 1995. I conjectured that a lot of information about properties must be “priced in” and I wanted to see if I could use just price and address data to impute prices convincingly by making assumptions about the structure of what prices capture.
I’m going to focus on making a fixed effect – deterministic – model although it naturally extends to a mixed effect model by characterising the uncertainty of the residual. The full model is necessary for informed decision making under uncertainty, but the most important and novel component is the fixed effect I’m going to focus on.
All code required to reproduce the analysis is included inline herein.
# THE MODEL
I assume that the following assumptions hold in most cases, at least locally: (1) residential properties are commensurable and form a single market – i.e. properties are priced relative to each other – and (2) the price ratios of properties are the same over time – i.e. if $$A$$ costs twice as much as $$B$$ in the past, then it will also in the future. From these I can infer that price ratios are transitive (e.g. if $$A=2B$$ and $$B=3C$$ then $$A=6C$$) and that the real price of any property can serve as a numeraire for the rest. Abstracting out the numeraire into a latent time-varying term $$x_t$$, I can express the price of any property as follows: $p_i^t = \alpha_i x_t$ where $$p^t_i$$ is the price of property $$i$$ in period $$t$$, and $$\alpha_i$$ is the latent property-specific constant. A “period” is an arbitrary duration, but herein I use calendar months. Note that $$\alpha_i$$ is time invariant, and $$x_t$$ is period specific. Thus, once $$\alpha_i$$ is calculated it can be applied anywhere that $$x_t$$ is available to produce the value $$p^t_i$$: that is the crux of how the model is able to impute.
The transactions data reveals $$p^t_i$$ from which I need to infer $$x_t$$ and $$\alpha_i$$. This can be formulated as an optimisation problem. Let $$i \in P: P=[1,...,M]$$ index each property, let $$t \in T: T=[1,...,T]$$ index each period, let $$S_t \subseteq P$$ indicate the indexes of properties sold in period $$t$$, then:
$\min_{\alpha_\cdot,x_\cdot}\Bigg[ \sum_{t=1}^T \sum_{s\in S_t} \big( \alpha_s x_t - p^t_s\big)^2\Bigg]$ subject to $$\alpha_s>0$$, $$x_t>0$$. The optimisation has approximately $$M+T$$ degrees of freedom as an upper limit, but the effective degrees of freedom may be lower because the variables are not independent. Further, the final imputation (see implementation section) only uses one of the $$\alpha_i$$ set with the rest being nuisance parameters. The function is non-linear, however the within ($$x_t$$) and between ($$\alpha_i$$) period variables keep this problem well behaved in practice by providing constraints on the search space. The inner term of the objective function $$(\alpha_s x_t - p^t_s)^2$$ is convex in $$\alpha_s$$ for fixed $$x_t$$ and convex in $$x_t$$ for fixed $$\alpha_s$$, so the objective is biconvex: not jointly convex, because of the bilinear product $$\alpha_s x_t$$, but convex in each block of variables separately, which is what makes block-wise and first-order methods effective here.
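One useful consequence of that biconvexity, spelled out here as an added step, is that each block of variables has a closed-form least-squares update when the other block is held fixed. Setting the partial derivatives of the objective to zero gives:

$\alpha_i = \frac{\sum_{t:\, i \in S_t} p^t_i x_t}{\sum_{t:\, i \in S_t} x_t^2}, \qquad x_t = \frac{\sum_{s \in S_t} p^t_s \alpha_s}{\sum_{s \in S_t} \alpha_s^2}$

Alternating these two updates is the classic alternating least squares scheme; a code sketch of it appears at the end of the implementation section.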
In the following sections I’ll describe how the data was collected and prepared, and how the model was practically implemented.
# DATA COLLECTION AND PREPARATION
The model makes use of two datasets: price paid data and postcode lat/lon data. The former contains property addresses, prices and some basic categorical information whilst the latter contains the latitude and longitude for all UK postal codes. I import both CSVs into a SQLite database for further processing as follows.
I select only postcode, latitude and longitude from the postcodes.csv and pipe it into a SQLite script: loc.sql.
cat postcodes.csv | \
cut -d"," -f2,3,4 | \
sqlite3 --init loc.sql db.sqlite
loc.sql creates a new loc (i.e. location) table and populates it from stdin.
drop table if exists loc;
create table loc(pc text, lat float, lon float);
.mode csv loc
.import --skip 1 /dev/stdin loc
Similarly, I pick specific columns from the prices.csv data and pipe it into prices.sql.
cat prices.csv | \
sed -r 's/","/\t/g' | \
cut -f2-9 | \
sqlite3 --init prices.sql db.sqlite
prices.sql creates a temporary prices table and populates it from stdin. It joins the loc table, and then creates a new prices table from the result of the join. It also does a little bit of transformation such as creating a non unique id field to more easily identify specific addresses and converting the lat/lon from degrees to radians.
drop table if exists prices;
create temporary table prices_tmp(
pr float, dt text, pc text, typ text, isnew text,
dur text, paon text, saon text);
.mode tabs prices_tmp
.import /dev/stdin prices_tmp
create table prices as
select replace(paon||' '||saon||' '||p.pc,'  ',' ') as id,
       pr, dt, p.pc, typ, isnew, dur, paon, saon,
       l.lat*0.017453292519943295 as lat,  -- degrees to radians
       l.lon*0.017453292519943295 as lon   -- (a literal constant avoids relying on SQLite math functions)
from prices_tmp p
inner join loc l on l.pc=p.pc;
The model is built in Python which is not memory efficient so some of the filtering work is done on the SQL side. Namely, I query the database for transactions featuring properties within 2km of a target property. To do this I utilise the added lat/lon. Vanilla SQLite does not have the trigonometric functions required to calculate distance between geographical coordinates, so I register a Python function to handle it:
import sqlite3
from math import sin,cos,acos
def great_circle_dist(lat1,lon1,lat2,lon2):
    # spherical law of cosines; inputs are already in radians
    r1 = sin(lat1)*sin(lat2)
    r2 = cos(lat1)*cos(lat2)*cos(lon2-lon1)
    # clamp to [-1, 1] so floating-point drift cannot push
    # the argument outside acos's domain
    return 6371 * acos(min(1.0, max(-1.0, r1 + r2)))
con = sqlite3.connect("db/db.sqlite")
con.create_function("DIST", 4, great_circle_dist)
I can now define the get_local_by_id function, which returns just those property transactions within 2km of a target property:
import pandas as pd
query_id_1km = """
with
p0 as (select lat as lat0, lon as lon0, dur
from prices
where id='{id}'
limit 1)
select p.*, julianday(p.dt) as days
from prices p join p0
where p.dur = p0.dur and
DIST(lat0,lon0,lat,lon) < 2
order by p.dt;
"""
def get_local_by_id(id):
    return pd.read_sql(
        query_id_1km.format(id=id), con)
In the following section I’ll explain how the data is used to implement a price imputation model.
# IMPLEMENTATION
The model focuses on imputing the price for an arbitrary target property. Data is collected from within a 2 km radius of the target (the DIST(...) < 2 filter in the query above):
target = "COLERIDGE COURT 26 LN4 4PW"
df = get_local_by_id(target)
I add a few auxiliary columns to the data-frame. The most important are ym and idx which are integer indexes for the periods and properties respectively.
# Turn calendar months into an index.
df["ym"] = df["dt"].str.slice(0,7)
ym = df.ym.unique()
ym = dict(zip(ym,range(len(ym))))
df["ym"] = list(map(lambda x: ym[x], df.ym))
# Turn properties into an index.
df.dt = pd.to_datetime(df.dt)
prop = df.id.unique()
prop = dict(zip(prop, range(len(prop))))
df["idx"] = df.id.map(prop)
# Remember the index of the target.
tar_idx = prop[target]
# Some useful cardinalities for later.
M, N, P = len(prop), len(df), len(ym)
Since the minimisation problem is biconvex and well behaved in practice, we can use gradient descent to solve it as a first approximation, for which I use the Python package Jax:
import numpy as np
from jax import value_and_grad, jit
import jax.numpy as jnp

idx, y, t = df.idx.values, df.pr.values, df.ym.values

def obj(arg, idx, t, y):
    a, p = arg
    return ((a[idx]*p[t]-y)**2).mean()**.5

# compile a function that returns both the objective value and
# its gradient with respect to the (a, p) tuple
g = jit(value_and_grad(obj))

a = np.random.uniform(1, -1, M)
p = np.random.uniform(1, -1, P)
for _ in range(500000):
    o, (d_a, d_p) = g((a, p), idx, t, y)
    a = jnp.clip(a - 0.25 * d_a, 0)  # clipping keeps a, p non-negative
    p = jnp.clip(p - 0.25 * d_p, 0)
I use RMSE as an objective rather than sum of squares because the numbers involved get very big otherwise, which causes Jax to fall over. Note that I clip negative values resulting from a gradient update in order to keep $$a$$ and $$p$$ positive. Gradient descent takes a large number of iterations to converge, even at high learning rates. It is also sensitive to outliers given that I'm using a least-squares objective. There are better options than gradient descent for this problem (one is sketched below) but it will suffice for the proof of concept.
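As an illustration of one such better option, here is a minimal alternating least squares sketch. This is an added example rather than the post's own code; the function name als and the iteration count are arbitrary choices. It applies the closed-form block updates derived in the model section:

import numpy as np

def als(idx, t, y, M, P, iters=200):
    # Fit y ~ a[idx] * x[t] by alternating exact least-squares
    # updates: each step minimises the objective over one block
    # of variables while the other block is held fixed.
    a = np.ones(M)
    x = np.ones(P)
    for _ in range(iters):
        # a_i = sum of y * x_t over i's transactions / sum of x_t^2
        num = np.bincount(idx, weights=y * x[t], minlength=M)
        den = np.bincount(idx, weights=x[t] ** 2, minlength=M)
        a = np.where(den > 0, num / np.maximum(den, 1e-12), a)
        # symmetric update for the period factors x_t
        num = np.bincount(t, weights=y * a[idx], minlength=P)
        den = np.bincount(t, weights=a[idx] ** 2, minlength=P)
        x = np.where(den > 0, num / np.maximum(den, 1e-12), x)
    return a, x

a_als, x_als = als(idx, t, y, M, P)

Because prices and the initial factors are positive, the updates stay positive without any clipping, and each sweep can only decrease the objective.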
# RESULTS
Here are a few pseudo-random examples. The code used to generate the graphs is as follows.
import matplotlib.pyplot as plt

d = df[df.id==target]
y = a[tar_idx]*p[t]/1000
plt.figure(figsize=(9,7))
plt.grid()
plt.scatter(df.dt, y, c="lightgray", label="Imputed")
plt.plot(df.dt, pd.Series(y).rolling(30).median(), c="blue",
ls="--", label="Rolling median")
plt.scatter(d.dt, d.pr/1000, c="r", s=100, label="Actual")
plt.tick_params(labeltop=True, labelright=True)
plt.xlabel("Date")
plt.ylabel("Price £ (000s)")
plt.title(target)
plt.legend()
plt.tight_layout()
The gray points are the imputed prices for the target property for every transactions. The red points are actual prices paid for the property. The dashed blue line is a rolling 30 transaction median.
# DISCUSSION AND CONCLUSION
The individual imputations tell the same story in the main showing clear trends using just a rolling median. Further, the prices imputed for properties familiar to me are accurate, and this is all prior to any attempts to remove outliers, weighing contributions, perfecting the fitting and so on. For those reasons, I consider this proof of concept a significant success. However, there remain various complications to be addressed:
1. Potentially high degrees of freedom – the final imputation uses a property-specific $$\alpha_i$$ variable and all the $$x_t$$ period variables, i.e. roughly $$T$$ parameters, with another $$M$$ being treated as a nuisance. The model is non-linear and the variables are not independent, so it isn't entirely clear prima facie how much of a problem the degrees of freedom are. Therefore, the model needs to be cross-validated carefully in both specific and general cases because it is at risk of over-fitting.
2. Random effects – I have considered only a prominent fixed effect but the effort naturally extends into a probabilistic mixed effect model. A search for other fixed effects, and a random effect model is needed (especially given heteroscedasticity of variance) before the model can be used for decision making under uncertainty.
3. Local differences – Different properties have markedly different price series both in terms of shape and variance. Usefully analysing the residual of the fixed effect model may require multiple models in order to take into account the typology of local situations.
|
2023-01-30 10:55:14
|
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 1, "mathjax_asciimath": 1, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.4151541292667389, "perplexity": 2201.1906179750467}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.3, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2023-06/segments/1674764499816.79/warc/CC-MAIN-20230130101912-20230130131912-00737.warc.gz"}
|
https://kerrygardiner.co.uk/var/i3ojz/pchdiqn.php?586819=defective-eigenvalue-3x3
|
An eigenvalue of a square matrix A is a root of its characteristic equation |A − λI| = 0; this definition does not directly involve the corresponding eigenvector. An eigenvector is a nonzero vector whose direction is unchanged by the transformation, and the eigenvalue is the factor by which it is stretched. The eigenspace of an eigenvalue λ is the set of all its eigenvectors together with the zero vector, i.e. null(A − λI). For each eigenvalue one distinguishes (a) the algebraic multiplicity m, the multiplicity of λ as a root of the characteristic polynomial, and (b) the geometric multiplicity mg = dim null(A − λI). An eigenvalue is said to be defective when its geometric multiplicity is strictly less than its algebraic multiplicity, and a matrix with a defective eigenvalue is itself called defective. Defective matrices cannot be diagonalized because they do not possess enough linearly independent eigenvectors to form a basis.
The standard example is the matrix A = (1 1; 0 1). Its characteristic polynomial is (λ − 1)², so the only eigenvalue is λ = 1 with algebraic multiplicity 2, but A − I = (0 1; 0 0) has a one-dimensional null space spanned by v = (1, 0): there is only one linearly independent eigenvector, so the eigenvalue is defective. A vector such as u = (0, 1) can be used to complete a basis, serving as a generalized eigenvector. Similarly, a matrix with characteristic polynomial (λ + 2)² has the double eigenvalue λ = −2; if solving (A + 2I)v = 0 yields only one independent eigenvector, that eigenvalue is likewise defective. Defectiveness matters for linear systems x′ = Ax: an eigenvalue of algebraic multiplicity m must contribute m linearly independent solutions, which for a defective eigenvalue requires generalized eigenvectors. Any defective matrix can still be brought to Jordan form, where each Jordan block of size greater than 1 carries a parallel diagonal of ones above the main diagonal; the order of the Jordan blocks is not unique. By contrast, an n×n matrix with n distinct eigenvalues is never defective and can always be diagonalized. Finally, a real matrix may have complex, non-real eigenvalues; a 2×2 matrix with a complex eigenvalue acts geometrically as a rotation combined with a scaling.
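A quick numerical check of the standard example (an illustrative sketch added here, not part of the original page):

import numpy as np

A = np.array([[1.0, 1.0],
              [0.0, 1.0]])

print(np.linalg.eigvals(A))                        # [1. 1.]: algebraic multiplicity 2
nullity = 2 - np.linalg.matrix_rank(A - np.eye(2))
print(nullity)                                     # 1: geometric multiplicity, so defective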
|
2021-05-07 19:27:51
|
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 1, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.8657305836677551, "perplexity": 629.7562788073701}, "config": {"markdown_headings": false, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2021-21/segments/1620243988802.93/warc/CC-MAIN-20210507181103-20210507211103-00452.warc.gz"}
|
https://math.stackexchange.com/questions/3012588/isolated-and-non-isolated-essential-singularity-at-same-point
|
Isolated and non isolated essential singularity at same point?
I need to find the singularities of $$f(z) = \frac{1-e^z}{2+e^z}$$ My effort: Poles of function are given by $$2+e^z=0\implies e^z = -2 \implies z = \log 2+i(2k+1)\pi$$ for k integer.
All of these singularities are simple poles. By definition, their limit point, which is $$\infty$$, is a non-isolated singularity.
Further, the limit point of the zeros is again infinity, which would make it an isolated essential singularity.
But if a point qualifies as both isolated and non-isolated, do we take it to be a non-isolated singularity? Am I correct? Are these the only singularities?
Every neighborhood of $$\infty$$ contains a pole, implying, as you correctly state, that $$\infty$$ is not an isolated singularity. What makes you believe that it is also an isolated singularity?
|
2021-06-12 21:38:30
|
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 5, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.8439407348632812, "perplexity": 165.6836669742201}, "config": {"markdown_headings": false, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2021-25/segments/1623487586390.4/warc/CC-MAIN-20210612193058-20210612223058-00227.warc.gz"}
|
https://ask.sagemath.org/answers/43719/revisions/
|
# Revision history [back]
If you are launching Sage using a terminal, it should inherit the shell's PATH.
Are you maybe launching Sage as an app?
Setting the PATH for GUI apps can be done as well; see, for example, the platform-specific documentation on environment variables for GUI applications.
Another solution might be to include the command from your temporary fix in the init.sage file in the .sage folder in your home folder (this .sage folder should exist; create the file init.sage if necessary).
os.environ["PATH"] += ":/usr/local/bin"
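To verify that the change took effect, the PATH can be inspected from within Sage itself (a quick sanity check, not part of the original answer):

import os
print("/usr/local/bin" in os.environ["PATH"].split(":"))  # should print True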
This init.sage is also a good place for putting an instruction such as
%colors Linux
which will improve the syntax highlighting color scheme in the Sage REPL if you work in a terminal with dark background. Indeed the default is
%colors LightBG
which works well for terminals with light background.
|
2019-11-17 14:04:05
|
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 1, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.5323182344436646, "perplexity": 5736.048267722009}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2019-47/segments/1573496668954.85/warc/CC-MAIN-20191117115233-20191117143233-00240.warc.gz"}
|
https://microbit-micropython.readthedocs.io/en/stable/tutorials/images.html
|
# Images¶
MicroPython is about as good at art as you can be if the only thing you have is a 5x5 grid of red LEDs (light emitting diodes - the things that light up on the front of the device). MicroPython gives you quite a lot of control over the display so you can create all sorts of interesting effects.
MicroPython comes with lots of built-in pictures to show on the display. For example, to make the device appear happy you type:
from microbit import *
display.show(Image.HAPPY)
I suspect you can remember what the first line does. The second line uses the display object to show a built-in image. The happy image we want to display is part of the Image object and is called HAPPY. We tell show to use it by putting it between the parentheses (( and )).
Here’s a list of the built-in images:
• Image.HEART
• Image.HEART_SMALL
• Image.HAPPY
• Image.SMILE
• Image.SAD
• Image.CONFUSED
• Image.ANGRY
• Image.ASLEEP
• Image.SURPRISED
• Image.SILLY
• Image.FABULOUS
• Image.MEH
• Image.YES
• Image.NO
• Image.CLOCK12, Image.CLOCK11, Image.CLOCK10, Image.CLOCK9, Image.CLOCK8, Image.CLOCK7, Image.CLOCK6, Image.CLOCK5, Image.CLOCK4, Image.CLOCK3, Image.CLOCK2, Image.CLOCK1
• Image.ARROW_N, Image.ARROW_NE, Image.ARROW_E, Image.ARROW_SE, Image.ARROW_S, Image.ARROW_SW, Image.ARROW_W, Image.ARROW_NW
• Image.TRIANGLE
• Image.TRIANGLE_LEFT
• Image.CHESSBOARD
• Image.DIAMOND
• Image.DIAMOND_SMALL
• Image.SQUARE
• Image.SQUARE_SMALL
• Image.RABBIT
• Image.COW
• Image.MUSIC_CROTCHET
• Image.MUSIC_QUAVER
• Image.MUSIC_QUAVERS
• Image.PITCHFORK
• Image.XMAS
• Image.PACMAN
• Image.TARGET
• Image.TSHIRT
• Image.ROLLERSKATE
• Image.DUCK
• Image.HOUSE
• Image.TORTOISE
• Image.BUTTERFLY
• Image.STICKFIGURE
• Image.GHOST
• Image.SWORD
• Image.GIRAFFE
• Image.SKULL
• Image.UMBRELLA
• Image.SNAKE
There’s quite a lot! Why not modify the code that makes the micro:bit look happy to see what some of the other built-in images look like? (Just replace Image.HAPPY with one of the built-in images listed above.)
## DIY Images¶
Of course, you want to make your own image to display on the micro:bit, right?
That’s easy.
Each LED pixel on the physical display can be set to one of ten values. If a pixel is set to 0 (zero) then it’s off. It literally has zero brightness. However, if it is set to 9 then it is at its brightest level. The values 1 to 8 represent the brightness levels between off (0) and full on (9).
Armed with this information, it’s possible to create a new image like this:
from microbit import *
boat = Image("05050:"
"05050:"
"05050:"
"99999:"
"09990")
display.show(boat)
(When run, the device should display an old-fashioned “Blue Peter” sailing ship with the masts dimmer than the boat’s hull.)
Have you figured out how to draw a picture? Have you noticed that each line of the physical display is represented by a line of numbers ending in : and enclosed between " double quotes? Each number specifies a brightness. There are five lines of five numbers so it’s possible to specify the individual brightness for each of the five pixels on each of the five lines on the physical display. That’s how to create a new image.
Simple!
In fact, you don’t need to write this over several lines. If you think you can keep track of each line, you can rewrite it like this:
boat = Image("05050:05050:05050:99999:09990")
## Animation¶
Static images are fun, but it’s even more fun to make them move. This is also amazingly simple to do with MicroPython ~ just use a list of images!
Here is a shopping list:
Eggs
Bacon
Tomatoes
Here’s how you’d represent this list in Python:
shopping = ["Eggs", "Bacon", "Tomatoes" ]
I’ve simply created a list called shopping and it contains three items. Python knows it’s a list because it’s enclosed in square brackets ([ and ]). Items in the list are separated by a comma (,) and in this instance the items are three strings of characters: "Eggs", "Bacon" and "Tomatoes". We know they are strings of characters because they’re enclosed in quotation marks ".
You can store anything in a list with Python. Here’s a list of numbers:
primes = [2, 3, 5, 7, 11, 13, 17, 19]
Note
Numbers don’t need to be quoted since they represent a value (rather than a string of characters). It’s the difference between 2 (the numeric value 2) and "2" (the character/digit representing the number 2). Don’t worry if this doesn’t make sense right now. You’ll soon get used to it.
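A quick way to see the difference (a small illustrative aside, not in the original tutorial):

print(2 + 2)      # numbers add: prints 4
print("2" + "2")  # strings join: prints 22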
It’s even possible to store different sorts of things in the same list:
mixed_up_list = ["hello!", 1.234, Image.HAPPY]
Notice that last item? It was an image!
We can tell MicroPython to animate a list of images. Luckily we have a couple of lists of images already built in. They’re called Image.ALL_CLOCKS and Image.ALL_ARROWS:
from microbit import *
display.show(Image.ALL_CLOCKS, loop=True, delay=100)
As with a single image, we use display.show to show it on the device’s display. However, we tell MicroPython to use Image.ALL_CLOCKS and it understands that it needs to show each image in the list, one after the other. We also tell MicroPython to keep looping over the list of images (so the animation lasts forever) by saying loop=True. Furthermore, we tell it that we want the delay between each image to be only 100 milliseconds (a tenth of a second) with the argument delay=100.
Can you work out how to animate over the Image.ALL_ARROWS list? How do you avoid looping forever (hint: the opposite of True is False although the default value for loop is False)? Can you change the speed of the animation?
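One possible solution to those exercises (a sketch added here, not part of the original tutorial):

from microbit import *

# loop defaults to False, so this plays through the arrows once;
# a smaller delay speeds the animation up.
display.show(Image.ALL_ARROWS, delay=150)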
Finally, here’s how to create your own animation. In my example I’m going to make my boat sink into the bottom of the display:
from microbit import *
boat1 = Image("05050:"
"05050:"
"05050:"
"99999:"
"09990")
boat2 = Image("00000:"
"05050:"
"05050:"
"05050:"
"99999")
boat3 = Image("00000:"
"00000:"
"05050:"
"05050:"
"05050")
boat4 = Image("00000:"
"00000:"
"00000:"
"05050:"
"05050")
boat5 = Image("00000:"
"00000:"
"00000:"
"00000:"
"05050")
boat6 = Image("00000:"
"00000:"
"00000:"
"00000:"
"00000")
all_boats = [boat1, boat2, boat3, boat4, boat5, boat6]
display.show(all_boats, delay=200)
Here’s how the code works:
• I create six boat images in exactly the same way I described above.
• Then, I put them all into a list that I call all_boats.
• Finally, I ask display.show to animate the list with a delay of 200 milliseconds.
• Since I’ve not set loop=True the boat will only sink once (thus making my animation scientifically accurate). :-)
What would you animate? Can you animate special effects? How would you make an image fade out and then fade in again?
|
2019-02-20 15:16:18
|
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 1, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.25226107239723206, "perplexity": 2740.9566037003838}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2019-09/segments/1550247495147.61/warc/CC-MAIN-20190220150139-20190220172139-00490.warc.gz"}
|
https://de.zxc.wiki/wiki/Sonnenstrahlung
|
Intensity of solar radiation at AM 0 (near-Earth space) and AM1.5 (approximately at the highest point of the sun in Karlsruhe) compared to the emission of an ideal black body at a temperature of 5900 K.
Solar radiation or solar radiation is the radiation emitted by the sun that is due to various physical effects. The part of the electromagnetic spectrum of the sun that is produced by the heat radiation of the hot solar surface has the greatest intensity in the range of visible light ( sunlight ). Depending on the wavelength , solar radiation is more or less strongly absorbed by the atmosphere . The intensity that hits the earth's surface also depends heavily on the weather and the position of the sun .
In addition to electromagnetic radiation, the sun also emits radiation made up of massive particles, which is usually not counted as solar radiation. It consists of the charged particles of the solar wind and of the neutrinos created in the interior of the sun during nuclear fusion and subsequent reactions.
## Solar spectrum
Spectral intensity of solar radiation in a double-logarithmic plot .
The spectrum of the sun's electromagnetic radiation has its maximum at about 500 nm wavelength (blue-green light), but ranges from hard X-rays with wavelengths below 0.1 nm to long radio waves. The continuous spectrum from about 140 nm (UVC) to about 10 cm (microwaves) is approximately that of a black body with a temperature of almost 6000 K, which corresponds to the temperature of the photosphere. This spectrum is divided into ultraviolet light (UV: 100–380 nm), visible light (VIS: 380–780 nm) and infrared light (IR: 780 nm – 1 mm), the boundaries being set by the range of human vision.
In the range from near infrared radiation (NIR) to UV, the spectrum contains a large number of absorption lines , the so-called Fraunhofer lines . They arise from the absorption of radiation in the photosphere of the sun.
Solar flares, the frequency of which depends on solar activity, briefly increase radiation in the X-ray range by several orders of magnitude but contribute little to the total radiation. They are often accompanied by long-wave radio emission (radio bursts), which is categorized as type I to type V depending on its intensity profile.
The quiet sun shines not only in the visible range but across the entire radio window. There, its spectrum is no longer that of a black body; rather, the effective temperature increases from approx. 6000 K at 1 cm wavelength to 1,000,000 K at 10 m. The apparent diameter of the sun also increases with the wavelength, and the radiation is increasingly dominated by the outer atmosphere. The radio emission of the quiet sun is thermal bremsstrahlung from free electrons. The most important radiation components of a disturbed sun are:
• Slow change in radiation proportional to the number of sunspots, tracked by the solar radio flux index.
• Noise storms above 100 MHz, lasting several days.
• Radiation bursts, often in connection with flares and CMEs, lasting seconds to days. At meter and decimeter wavelengths they are divided into types I to V; at centimeter wavelengths they appear as microwave bursts: synchrotron radiation from supra-thermal electrons spiralling around magnetic field lines.
## Sun exposure to the earth
### Solar constant
The neutrinos created in the interior of the sun during nuclear fusion carry away 2% of the fusion power. The total electromagnetic radiation output of the sun is dominated by the thermal radiation of the photosphere, which fluctuates by less than 0.1%.
The power falling to the earth fluctuates by almost 7% over the course of the year due to the eccentricity of the earth's orbit . The average power per area is called the solar constant. It is considered outside the earth's atmosphere and amounts to
$E_0 = 1367\ \mathrm{W/m^2}$.
### Attenuation by the atmosphere
The spectral permeability of the atmosphere from the UV to the IR range without the influence of clouds
The intensity of solar radiation is lower on the ground than outside the atmosphere, whose absorption and scattering depend strongly on the wavelength: the portion perceptible to the human eye, which makes up almost half of the solar radiation, mostly reaches the surface of the earth in clear weather and at a high position of the sun. The invisible radiation is predominantly near-infrared radiation (NIR), which accounts for approx. 46% of the radiation output; around a quarter of it is absorbed in the atmosphere, mainly by water molecules. Of the ultraviolet radiation, which makes up less than 10% of the total, UVA largely penetrates, weakened mainly by Rayleigh scattering, which is also responsible for the blue sky and for the fact that one tans even in the shade. UVB is strongly absorbed by the ozone layer, UVC by atmospheric oxygen.
The exact calculation of the radiation flux as a function of the position of the sun and the height above sea level is difficult. As an approximation, one takes into account only the thickness of atmosphere to be traversed, in air mass units, and the duration of sunshine. Clouds reduce direct radiation, while haze increases diffuse radiation. Diffuse radiation and direct radiation at one place together make up the global radiation.
Attenuation of solar radiation on its way through the atmosphere: a) long way, distribution of radiation over a large area (polar region), b) short way, distribution over a small area, angle of incidence of 90 ° at the equator (tropics)
If the solar radiation falls at an oblique angle, it is distributed over a larger area of the earth's surface and the irradiance drops; this effect scales with the sine of the elevation angle. In the tropics the influence of the seasons is hardly noticeable: since the sun is always high at noon there, a diurnal climate prevails. Outside the tropics, the difference between summer and winter grows towards the poles, both because of the angle of incidence and because of the increasing difference in day length.
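To make the two effects just described concrete, here is a rough back-of-the-envelope sketch (an illustrative addition; the transmittance value is an assumed round number, not a figure from this article) combining the flat-atmosphere air-mass approximation with the sine-of-elevation projection:

from math import sin, radians

E0 = 1367.0  # solar constant in W/m^2

def direct_horizontal_irradiance(elevation_deg, transmittance=0.7):
    # Attenuate E0 along the air-mass path, then project the beam
    # onto a horizontal surface with sin(elevation).
    h = radians(elevation_deg)
    if h <= 0:
        return 0.0
    air_mass = 1.0 / sin(h)  # flat-atmosphere approximation
    return E0 * transmittance ** air_mass * sin(h)

print(direct_horizontal_irradiance(62))  # summer midday: roughly 800 W/m^2
print(direct_horizontal_irradiance(15))  # winter midday: under 100 W/m^2

Values on a surface held perpendicular to the sun are considerably higher at low sun angles, which is one reason the winter figures quoted below can still be large.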
In Central Europe , the midday summer sun is 60 ° to 65 ° high and radiates in ideal cloud-free weather conditions with a direct radiation strength of around 700–900 W / m². In winter it is only 13 ° to 18 ° and in cold, dry air, values over 800 W / m² can be reached at noon. The duration of sunshine is counted for irradiation times in which the direct radiation reaches values of over 150 W / m² and the shadows begin to show contrast.
The warming of the earth's surface depends on the length of the bright day . At the end of June the duration in Central Europe is around 16 hours, in December 8 hours. The ratio of the total irradiated solar energy between these months is about 5: 1 to 10: 1, but is tempered by heat storage mainly by the oceans ( maritime climate ).
In the microclimate , the irradiance depends on the angle of incidence as well as on the sun exposure .
Sunlight, bundles of rays and clouds in the Seychelles
The temperature of the earth's surface is globally determined by the radiation balance , the radiation budget. This records the interaction between absorption and reflection as well as re-emission and scattering.
## Measurement
A pyranometer for measuring solar radiation
The solar radiation is measured using pyranometers (placed parallel to the ground), pyrheliometers (tracking the sun) or sunshine autographs . The latter are now out of date in modern measurement technology and were mainly used to determine the duration of sunshine . The solar constant, however, is measured outside of the atmosphere using radiometers .
|
2021-11-28 20:53:18
|
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 1, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.7458274364471436, "perplexity": 776.0267823297556}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2021-49/segments/1637964358591.95/warc/CC-MAIN-20211128194436-20211128224436-00335.warc.gz"}
|
https://math.berkeley.edu/wp/apde/wolf-patrick-dull-stuttgart/
|
# Wolf-Patrick Düll (Stuttgart)
The APDE seminar on Monday, 03/02, will be given by Wolf-Patrick Düll in Evans 939 from 4:10 to 5pm.
Title: Validity of the nonlinear Schrödinger approximation for the two-dimensional water wave problem with and without surface tension.
Abstract: We consider the two-dimensional water wave problem in an infinitely long canal of finite depth, both with and without surface tension. In order to describe the evolution of the envelopes of small oscillating wave packet-like solutions to this problem, the Nonlinear Schrödinger equation can be derived as a formal approximation equation. The rigorous justification of the Nonlinear Schrödinger approximation for the water wave problem was an open problem for a long time. In recent years, the validity of this approximation has been proven by several authors only for the case without surface tension.
In this talk, we present the first rigorous justification of the Nonlinear Schrödinger approximation for the two-dimensional water wave problem which is valid for the cases with and without surface tension, by proving error estimates over a physically relevant timespan in the arc length formulation of the water wave problem. Our error estimates are uniform with respect to the strength of the surface tension, as the height of the wave packet and the surface tension go to zero.
|
2020-08-12 07:09:01
|
{"extraction_info": {"found_math": false, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.8238672018051147, "perplexity": 541.6330472536902}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2020-34/segments/1596439738878.11/warc/CC-MAIN-20200812053726-20200812083726-00024.warc.gz"}
|
https://cran.ism.ac.jp/web/packages/basictabler/vignettes/v01-introduction.html
|
# 01. Introduction
## In This Vignette
• Introducing basictabler
• Sample Data
• Quick Table Functions
• Cell-by-Cell Construction
• Further Manipulation
• Examples Gallery
• Further Reading
## Introducing basictabler
The basictabler package enables rich tables to be created and rendered/exported with just a few lines of R.
The basictabler package:
• Provides an easy way of creating basic tables, especially from data frames and matrices.
• Provides flexibility so that the structure/content of the table can be easily built/modified.
• Provides formatting options to simplify rendering/exporting data.
• Provides styling options so the tables can be themed/branded as needed.
The tables are rendered as htmlwidgets or plain text. The HTML/text can be exported for use outside of R.
The tables can also be exported to Excel, including the styling/formatting. The formatting/styling is specified once and can then be used when rendering to both HTML and Excel - i.e. it is not necessary to specify the formatting/styling separately for each output format.
basictabler is a companion package to the pivottabler package. pivottabler is focussed on generating pivot tables and can aggregate data. basictabler does not aggregate data but offers more control of table structure.
The latest version of the basictabler package can be obtained directly from the package repository. Please log any questions not answered by the vignettes or any bug reports here.
## Sample Data: Trains in Birmingham
To build some example tables, we will use the bhmsummary data frame. This summarises the 83,710 trains that arrived into and/or departed from Birmingham New Street railway station between 1st December 2016 and 28th February 2017. As an example, the following are the first four rows from this sample data - note the data has been transposed (otherwise the table would be very wide).
# the qhtbl() function is explained later in this vignette
library(basictabler)
qhtbl(t(bhmsummary[1:4,]), rowNamesAsRowHeaders=TRUE)
Each row in this sample data summarises different types of trains running through Birmingham.
The first row from the sample data (column 1 above) represents:
• Active trains (A=active as opposed to C=cancelled)…
• operated by the Arriva Trains Wales train operating company
• of type express passenger train (=fewer stops)
• scheduled to be operated by a “Diesel Multiple Unit”
• with a scheduled maximum speed of 75mph
• in the week beginning 27th Nov 2016
• originating from Crewe station
• terminating at Birmingham International station
• of which there are two trains of the above type
• of which zero arrived or departed on time
• with a total of 8 arrival delay minutes and 3 departure delay minutes.
## Quick-Table Functions
To construct basic tables quickly, two functions are provided that can construct tables with one line of R:
• qtbl() returns a basic table. Setting a variable equal to the return value, e.g. tbl <- qtbl(...), allows further operations to be carried out on the table. Otherwise, using qtbl(...) alone will simply print the table to the console and then discard it.
• qhtbl() returns an HTML widget that, when used alone, will render an HTML representation of the table (e.g. in the R-Studio “Viewer” pane).
The arguments to both functions are the same:
• dataFrameOrMatrix specifies the data frame or matrix to construct the basic table from.
• columnNamesAsColumnHeaders specifies whether the names of the columns in the data frame or matrix should be rendered as column headings in the table (TRUE by default).
• explicitColumnHeaders is a character vector that allows the column headings to be explicitly specified.
• rowNamesAsRowHeaders specifies whether the names of the rows in the data frame or matrix should be rendered as row headings in the table.
• firstColumnAsRowHeaders specifies whether the first column of a data frame should be rendered as row headings in the table (is ignored for matrices).
• explicitRowHeaders is a character vector that allows the row headings to be explicitly specified.
• numberOfColumnsAsRowHeaders specifies the number of columns to be set as row headers. Only applies when generating a table from a data frame.
• columnFormats is a list containing format specifiers, each of which is either an sprintf() character value, a list of arguments for the format() function or an R function that provides custom formatting logic.
• columnCellTypes is a vector that is the same length as the number of columns in the data frame, where each element is one of the following values that specifies the type of cell: root, rowHeader, columnHeader, cell, total. The cellType controls the default styling that is applied to the cell. Typically only rowHeader, cell or total would be used. Only applies when generating a table from a data frame.
• theme specifies the name of a built in theme or a simple list of colours and fonts.
• replaceExistingStyles specifies whether the default styles are partially overwritten or wholly replaced by the styles specified in the following arguments.
• tableStyle, headingStyle, cellStyle and totalStyle are lists of CSS declarations that provide more granular control of styling and formatting settings.
A basic example of quickly printing a table to the console using the qtbl() function:
library(basictabler)
qtbl(data.frame(a=1:2, b=3:4))
a b
1 3
2 4
The qtbl() function is a concise version of a more verbose syntax, i.e.
library(basictabler)
tbl <- qtbl(data.frame(a=1:2, b=3:4))
… is equivalent to …
library(basictabler)
tbl <- BasicTable$new()
tbl$addData(data.frame(a=1:2, b=3:4))
Other operations can be carried out on the table object, e.g. rendering it as a HTML widget:
library(basictabler)
tbl <- BasicTable$new()
tbl$addData(data.frame(a=1:2, b=3:4))
tbl$renderTable()
The qhtbl() function renders the table immediately as an HTML widget:
library(basictabler)
qhtbl(data.frame(a=1:2, b=3:4))
When creating tables from data frames or matrices, it is possible to specify how values should be formatted for display in the table. The following example makes use of the sample data and illustrates how to specify formatting:
# aggregate the sample data to make a small data frame
library(basictabler)
library(dplyr)
tocsummary <- bhmsummary %>%
  group_by(TOC) %>%
  summarise(OnTimeArrivals=sum(OnTimeArrivals),
            OnTimeDepartures=sum(OnTimeDepartures),
            TotalTrains=sum(TrainCount)) %>%
  ungroup() %>%
  mutate(OnTimeArrivalPercent=OnTimeArrivals/TotalTrains*100,
         OnTimeDeparturePercent=OnTimeDepartures/TotalTrains*100) %>%
  arrange(TOC)

# To specify formatting, a list is created which contains one element for each column in
# the data frame, i.e. tocsummary contains six columns so the columnFormats list has six elements.
# The values in the first column in the data frame won't be formatted since NULL has been specified.
# The values in the 2nd, 3rd and 4th columns will be formatted using format(value, big.mark=",")
# The values in the 5th and 6th columns will be formatted using sprintf("%.1f", value)
columnFormats=list(NULL, list(big.mark=","), list(big.mark=","), list(big.mark=","), "%.1f", "%.1f")

# render the table directly as an html widget
qhtbl(tocsummary, firstColumnAsRowHeaders=TRUE,
      explicitColumnHeaders=c("TOC", "On-Time Arrivals", "On-Time Departures",
                              "Total Trains", "On-Time Arrival %", "On-Time Departure %"),
      columnFormats=columnFormats)
## Cell-by-Cell Construction
The examples in this vignette illustrate constructing tables from data frames. This populates a table quickly with just one line of R. Tables can also be constructed row-by-row, column-by-column and/or cell-by-cell. For more details, please see the Working with Cells vignette.
## Further Manipulation
Further operations on the basic table object tbl can be carried out to modify the table. For example, to add a total row:
# aggregate the sample data to make a small data frame
library(basictabler)
library(dplyr)
tocsummary <- bhmsummary %>%
  group_by(TOC) %>%
  summarise(OnTimeArrivals=sum(OnTimeArrivals),
            OnTimeDepartures=sum(OnTimeDepartures),
            TotalTrains=sum(TrainCount)) %>%
  ungroup() %>%
  mutate(OnTimeArrivalPercent=OnTimeArrivals/TotalTrains*100,
         OnTimeDeparturePercent=OnTimeDepartures/TotalTrains*100) %>%
  arrange(TOC)

# calculate the data for the total row
totalsummary <- bhmsummary %>%
  summarise(OnTimeArrivals=sum(OnTimeArrivals),
            OnTimeDepartures=sum(OnTimeDepartures),
            TotalTrains=sum(TrainCount)) %>%
  mutate(OnTimeArrivalPercent=OnTimeArrivals/TotalTrains*100,
         OnTimeDeparturePercent=OnTimeDepartures/TotalTrains*100)

# specify formatting
columnFormats=list(NULL, list(big.mark=","), list(big.mark=","), list(big.mark=","), "%.1f", "%.1f")

# generate the table
tbl <- qtbl(tocsummary, firstColumnAsRowHeaders=TRUE,
            explicitColumnHeaders=c("TOC", "On-Time Arrivals", "On-Time Departures",
                                    "Total Trains", "On-Time Arrival %", "On-Time Departure %"),
            columnFormats=columnFormats)

# get the values for the totals row
values <- list("All TOC",
               totalsummary[1, ]$OnTimeArrivals,
               totalsummary[1, ]$OnTimeDepartures,
               totalsummary[1, ]$TotalTrains,
               totalsummary[1, ]$OnTimeArrivalPercent,
               totalsummary[1, ]$OnTimeDeparturePercent)

# add the totals row
tbl$cells$setRow(6, cellTypes=c("rowHeader", "total", "total", "total", "total", "total"),
                 rawValues=values, formats=columnFormats)

# render the table
tbl$renderTable()
For more information and examples regarding manipulating the structure and content of tables see the Working with Cells vignette.
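To give a flavour of the cell-by-cell approach mentioned above, here is a minimal sketch that rebuilds the small a/b table one cell at a time. It assumes the setCell(row, column, cellType, rawValue) API described in the Working with Cells vignette, which extends the table as needed:
library(basictabler)
tbl <- BasicTable$new()
# header row
tbl$cells$setCell(1, 1, cellType="columnHeader", rawValue="a")
tbl$cells$setCell(1, 2, cellType="columnHeader", rawValue="b")
# body cells
tbl$cells$setCell(2, 1, cellType="cell", rawValue=1)
tbl$cells$setCell(2, 2, cellType="cell", rawValue=3)
tbl$cells$setCell(3, 1, cellType="cell", rawValue=2)
tbl$cells$setCell(3, 2, cellType="cell", rawValue=4)
tbl$renderTable()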
## Further Reading
The full set of vignettes is:
|
2022-05-29 10:44:37
|
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.34989529848098755, "perplexity": 4636.735522929215}, "config": {"markdown_headings": true, "markdown_code": false, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": false}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2022-21/segments/1652662644142.66/warc/CC-MAIN-20220529103854-20220529133854-00729.warc.gz"}
|
https://www.gradesaver.com/textbooks/math/algebra/algebra-2-1st-edition/chapter-10-counting-methods-and-probability-10-4-finding-probabilities-of-disjoint-and-overlapping-events-guided-practice-for-examples-1-2-and-3-page-708/1
|
## Algebra 2 (1st Edition)
$$\frac{2}{13}$$
We know from the equation on page 707 that: $$P(A\ \text{or}\ B) = P(A)+P(B)-P(A\ \text{and}\ B)$$ Since the two events are disjoint, $P(A\ \text{and}\ B)=0$. Thus, we find: $$\frac{1}{13}+\frac{1}{13}-0=\frac{2}{13}$$
|
2022-05-18 04:33:48
|
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 1, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.6822519898414612, "perplexity": 1476.4138233709011}, "config": {"markdown_headings": true, "markdown_code": false, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2022-21/segments/1652662521041.0/warc/CC-MAIN-20220518021247-20220518051247-00589.warc.gz"}
|
https://www.clutchprep.com/physics/practice-problems/142561/the-value-of-g-at-the-height-of-the-space-shuttle-s-orbit-is-the-value-of-at-the
|
# Problem: The value of g at the height of the Space Shuttle’s orbit is: A. 9.8 m/s². B. slightly less than 9.8 m/s². C. much less than 9.8 m/s². D. exactly zero.
###### FREE Expert Solution
Gravity at height h above the surface:
$$g_h = g\,\frac{R^2}{(R+h)^2}$$
g decreases as the height above sea level increases.
According to NASA, the Space Shuttle's orbital altitude ranged from 304 km to 528 km. Since this is small compared with Earth's radius, g in orbit is only slightly less than 9.8 m/s², so the answer is B.
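A quick numeric check in R (a sketch assuming Earth's mean radius of roughly 6371 km; the altitudes are those quoted above):
g0 <- 9.8          # surface gravity in m/s^2
R  <- 6371         # Earth's mean radius in km
h  <- c(304, 528)  # orbital altitudes in km
g0 * (R / (R + h))^2   # about 8.9 and 8.4 m/s^2: slightly less than 9.8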
###### Problem Details
The value of g at the height of the Space Shuttle’s orbit is:
A. 9.8 m/s².
B. slightly less than 9.8 m/s².
C. much less than 9.8 m/s².
D. exactly zero.
|
2020-10-27 03:26:33
|
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 1, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.8051966428756714, "perplexity": 2601.7541077482397}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2020-45/segments/1603107893011.54/warc/CC-MAIN-20201027023251-20201027053251-00490.warc.gz"}
|
http://www.gradesaver.com/textbooks/math/applied-mathematics/elementary-technical-mathematics/chapter-1-section-1-8-multiplication-and-division-of-fractions-exercise-page-48/38
|
## Elementary Technical Mathematics
Divide $6\frac{2}{3}$ by $1\frac{3}{4}$ to obtain: $$6\frac{2}{3} \div 1\frac{3}{4} =\frac{3(6)+2}{3} \div \frac{4(1)+3}{4} =\frac{20}{3} \div \frac{7}{4}$$ Use the rule $\frac{a}{b} \div \frac{c}{d} = \frac{a}{b} \times \frac{d}{c}$ to obtain: $$\frac{20}{3} \times \frac{4}{7} =\frac{20(4)}{3(7)} =\frac{80}{21} =3\frac{17}{21}$$ Thus, 3 pieces of pipe $1\frac{3}{4}$ ft long can be obtained from the original pipe.
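A quick numeric check of the division in R (plain arithmetic, purely illustrative):
(6 + 2/3) / (1 + 3/4)   # 3.809524 = 80/21, so 3 whole pieces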
|
2018-04-24 05:36:17
|
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.23061205446720123, "perplexity": 521.0551099673271}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2018-17/segments/1524125946564.73/warc/CC-MAIN-20180424041828-20180424061828-00175.warc.gz"}
|
https://www.gradesaver.com/textbooks/math/precalculus/precalculus-6th-edition-blitzer/chapter-7-section-7-1-systems-of-linear-equations-in-two-variables-concept-and-vocabulary-check-page-817/4
|
## Precalculus (6th Edition) Blitzer
$-2$.
When using the addition method, we need the coefficients of one variable in the two equations to be opposites. Thus, we need to multiply the second equation by a factor of $-2$ so that the coefficient of $y$ becomes $-10$ and the variable $y$ is cancelled when the two equations are added. In other words, the answer is $-2$.
|
2020-06-05 23:32:03
|
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.8631405830383301, "perplexity": 82.25310332266349}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2020-24/segments/1590348504341.78/warc/CC-MAIN-20200605205507-20200605235507-00287.warc.gz"}
|
http://mathoverflow.net/api/userquestions.html?userid=4046&page=1&pagesize=10&sort=newest
|
# 8 Questions
### Canonical forms for elliptic fibrations with Mordell-Weil group of rank 1 and zero torsion (144 views)
oct 23 at 13:47 JME
### Geometric interpretation of exceptional Symmetric spaces (424 views)
aug 22 '10 at 17:47 JME
### Are there any (interesting) consequences of the irrationality of π? [closed] (1k views)
aug 12 '10 at 15:19 Donu Arapura
### What are the possible singular fibers of an elliptic fibration over a higher dimensional base? (745 views)
jul 21 '10 at 13:21 JME
### Best strategy for small resolutions (911 views)
oct 25 at 23:30 Sándor Kovács
### How can I get a small resolution for the binomial fourfold $x_1 x_2 x_3- y_1 y_2=0$ in $\mathbb{C}^5$? (386 views)
jul 13 '10 at 23:08 David Speyer
### Non-existence of small resolutions for the singularity $y^2=u^2+v^2+w^3$ (444 views)
nov 7 '10 at 18:50 Sándor Kovács
|
2013-05-18 20:59:57
|
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.7596493363380432, "perplexity": 1919.569736607869}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2013-20/segments/1368696382851/warc/CC-MAIN-20130516092622-00023-ip-10-60-113-184.ec2.internal.warc.gz"}
|
https://math.stackexchange.com/questions/1865634/how-is-it-that-int-0x-2-pi-r-dr-is-equal-to-the-area-of-a-circle
|
# how is it that $\int_0^x 2\pi r\ dr$ is equal to the area of a circle [closed]
I'm studying calculus and I have some basic questions; this one is regarding the area of a circle. We know, from some guy, that the circumference of a circle is $2 \pi r$, and the area can be seen as the sum of all the circumferences from $0$ to $R$, which is the same as the integral, leading us to $\pi R^2$. My question is: how is it that $$\int_0^R 2\pi r\ dr = \text{Area}\ \ ?$$
## closed as unclear what you're asking by parsiad, tilper, user99914, Zain Patel, user223391 Jul 21 '16 at 4:39
• Do you mean to ask why $\int_0^i x\,dx=\frac{i^2}{2}$? – Arthur Jul 20 '16 at 17:28
• yes, to be straight to the point – Raed Tabani Jul 20 '16 at 17:32
• either, aren't you guys asking the same thing? – Raed Tabani Jul 20 '16 at 17:36
• @RaedTabani I edited your question. Am I right in thinking that this is what you meant? If not, you can rollback the edit by clicking "edited [X time] ago" on your question and clicking "rollback" on the previous version. – user137731 Jul 20 '16 at 17:45
• @Bye_World I believe that the OP's original question was how the sum of a certain variable $k$ from $k=0$ to $n$ is $n^2/2$ and not $n(n+1)/2$ for example. His issue I think is with stating the sentence "is a sum of ..." mathematically which led him to think that it's the sum of $k$ and not the Riemann sum. – GeorgSaliba Jul 20 '16 at 17:58
Here is a proof with few words. Note that the line is the graph of $y=t$ with $t$ being the horizontal axis and that the integral
$$\int_0^x t\,dt$$ represents the area of a triangle with both base and height equal to $x$.
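Applied to the circle, the same triangle-area picture gives the result directly:
$$\int_0^R 2\pi r\,dr = 2\pi\int_0^R r\,dr = 2\pi\cdot\frac{R^2}{2} = \pi R^2$$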
But for more general functions, one needs the Riemann Sum as indicated by GeorgSaliba.
• This is wonderful! I never thought to explain it this way, although I suspect it would be difficult to extend this "proof" to higher degree polynomials – Andres Mejia Jul 20 '16 at 17:55
$$\int_0^R xdx\ne\sum_0^Ri$$
You are adding the areas of thin rings of circumference $2\pi r$ and thickness $\Delta r$ over the span $r=0$ to $R$. So you have to partition this interval into $n$ subdivisions, each subdivision being $R/n$ thick. This means that each radius to be added can be expressed like this $$r={i\times R\over n}\qquad i=0,1,2,\dots,n$$and $$\Delta r=\frac Rn$$ When $n$ is "infinitely large" you get an "infinitely precise" sum, which is then: $$\int_0^R2\pi r\,dr=\lim_{n\rightarrow\infty}2\pi\frac Rn\sum_{i=0}^n\frac{Ri}{n}=\lim_{n\rightarrow\infty}2\pi\frac{R^2}{n^2}\frac {n(n+1)}{2}=\lim_{n\rightarrow\infty}\pi R^2\,\frac{n(n+1)}{n^2}=\pi R^2$$
This is called a Riemann Sum
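A quick numeric sketch of this Riemann sum in R (unit radius assumed, purely illustrative):
R <- 1
n <- 1e6
r <- (1:n) * R / n
sum(2 * pi * r * (R / n))   # 3.141596..., close to pi * R^2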
• thank you this is exactly what I was looking for – Raed Tabani Jul 20 '16 at 18:14
|
2019-06-26 22:30:18
|
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 1, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.7273856401443481, "perplexity": 282.4444334632585}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2019-26/segments/1560628000575.75/warc/CC-MAIN-20190626214837-20190627000837-00195.warc.gz"}
|
https://www.encyclopediaofmath.org/index.php/User:Thomas_Unger
|
# User:Thomas Unger
My personal homepage at the School of Mathematical Sciences, University College Dublin.
My main research interests are: quadratic and hermitian forms, algebras with involution, their applications in real algebra and space-time coding.
(2010 MSC: 11E04, 11E08, 11E10, 11E25, 11E39, 11E81, 11E88, 11H71, 11T71, 12D15, 13J30, 16K20, 16W10, 17A35, 17A75, 68P30)
## $\rm \TeX$ re-encoding with TeXShop
Instructions on how to use TeXShop for cleaning up old EoM source files can be found here. This method has been tested on Mac OS 10.9 and may also work on earlier versions.
NEW: two independent macros for use with TeXShop (one for cleaning up maths; one for cleaning up references) will be available soon. For explanation and instructions, click here.
|
2017-07-20 12:33:41
|
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.29916295409202576, "perplexity": 2092.7562938057235}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2017-30/segments/1500549423183.57/warc/CC-MAIN-20170720121902-20170720141902-00297.warc.gz"}
|
https://studydaddy.com/question/web-237-week-3-dqs
|
QUESTION
# WEB 237 Week 3 DQs
This work of WEB 237 Week 3 Discussion Questions shows the solutions to the following problems:
DQ 1: Flash
|
2018-04-24 10:49:45
|
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.2645934224128723, "perplexity": 13397.406574630886}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2018-17/segments/1524125946597.90/warc/CC-MAIN-20180424100356-20180424120356-00595.warc.gz"}
|
https://code.databio.org/GenomicDistributions/reference/loadBSgenome.html
|
This function lets you use a simple character string (e.g. 'hg19') to load and return the corresponding BSgenome object. This avoids having to use the more complex annotation name of a complete BSgenome object (e.g. BSgenome.Hsapiens.UCSC.hg38.masked)
loadBSgenome(genomeBuild, masked = TRUE)
## Arguments
genomeBuild
One of 'hg19', 'hg38', 'mm10', 'mm9', or 'grch38'
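## Examples
The examples section was truncated in extraction; a minimal hedged sketch of usage, assuming the matching genome package (e.g. BSgenome.Hsapiens.UCSC.hg19.masked) is installed:
# not run automatically: loads a large genome package
bsg <- loadBSgenome("hg19")                  # masked variant (default masked = TRUE)
bsg <- loadBSgenome("hg19", masked = FALSE)  # unmasked variant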
|
2022-12-05 01:55:05
|
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 1, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.4204050600528717, "perplexity": 7594.746906593365}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2022-49/segments/1669446711001.28/warc/CC-MAIN-20221205000525-20221205030525-00187.warc.gz"}
|
https://gmatclub.com/forum/how-many-different-flags-can-be-made-from4-colors-red-blue-green-a-263184.html
|
# How many different flags can be made from 4 colors - Red, Blue, Green, and White?
e-GMAT Representative
Joined: 04 Jan 2015
Posts: 3222
How many different flags can be made from 4 colors - Red, Blue, Green, and White? [#permalink]
Updated on: 13 Aug 2018, 07:08
Difficulty: 85% (hard). Question Stats: 47% (02:07) correct, 53% (01:28) wrong, based on 148 sessions.
Fool-proof method to Differentiate between Permutation & Combination Questions - Exercise Question #3
How many different flags can be made from 4 colors - Red, Blue, Green, and White - such that no color is repeated more than once?
Options:
A. 25
B. 24
C. 48
D. 60
E. 64
Learn to use the Keyword Approach in Solving PnC question from the following article:
Article-1: Learn when to “Add” and “Multiply” in Permutation & Combination questions
Article-2: Fool-proof method to Differentiate between Permutation & Combination Questions
Originally posted by EgmatQuantExpert on 11 Apr 2018, 23:16.
Last edited by EgmatQuantExpert on 13 Aug 2018, 07:08, edited 6 times in total.
Senior PS Moderator
Joined: 26 Feb 2016
Posts: 3286
Location: India
GPA: 3.12
Re: How many different flags can be made from 4 colors - Red, Blue, Green, and White? [#permalink]
11 Apr 2018, 23:44
EgmatQuantExpert wrote:
Learn the structured approach to identify permutation and combination question - Exercise Question #2
How many different flags can be made from 4 colors - Red, Blue, Green, and White?
Options:
A. 25
B. 24
C. 48
D. 60
E. 64
There are 4 different colors which can be used to make the flags.
When there is 1 color in the flag - 4 possibilities
When there are 2 colors in the flag - 4*3 = 12 possibilities
When there are 3 colors in the flag - 4*3*2 = 24 possibilities
When there are 4 colors in the flag - 4*3*2*1 = 24 possibilities
Therefore, there are a total of 4+12+24+24 = 64 possibilities(Option E) in which the flags can be made.
Math Expert
Joined: 02 Aug 2009
Posts: 8336
Re: How many different flags can be made from 4 colors - Red, Blue, Green, and White? [#permalink]
12 Apr 2018, 00:53
EgmatQuantExpert wrote:
Learn structured approach to identify permutation and combination question - Exercise Question #2
How many different flags can be made from 4 colors - Red, Blue, Green, and White?
Options:
A. 25
B. 24
C. 48
D. 60
E. 64
The question is incomplete the way it is..
1) I would say - " different flags from 20 m cloth if one flag is .....", so it should be different types of flags..
I am sure it means this so we can leave it here..
2) If I use two colors also, I can make 100s of different types of flags..
so the question could have been two ways..
A) "How many different types of flags can be made from using atleast one of 4 colors- Red, Blue, Green, and White" - Without any restrictions
so a COMBINATION question..
one colour - 4 ways
two colours - 4C2 = 6 ways
three colours - 4C3 = 4 ways
four colours - 4C4 = 1 way
total = 4 + 6 + 4 + 1 = 15 ways
B) "How many different types of flags can be made from using atleast one of 4 colors - Red, Blue, Green, and White -in horizontal stripe/vertical stripes, each coloured being used once" - With restrictions
so a PERMUTATION question..
one colour - 4 ways
two colours - 4P2 = 4·3 = 12 ways
three colours - 4P3 = 4·3·2 = 24 ways
four colours - 4P4 = 4·3·2·1 = 24 ways
total = 4 + 12 + 24 + 24 = 64 ways
e-GMAT Representative
Joined: 04 Jan 2015
Posts: 3222
Re: How many different flags can be made from 4 colors - Red, Blue, Green, and White? [#permalink]
17 Apr 2018, 08:51
Solution
Given:
• We have 4 colours- Red, Blue, Green, and White to form the flag.
To find:
• The number of ways we can form different flags from the 4 colours available.
Approach and Working:
• From the 4 colors available, we can form 4 different types of flag:
o Flag with single color
o Flag with two colors
o Flag with three colors
o Flag with four colors
• Hence, total number of flags= Flag with one color +Flag with two colors + Flag with three colors+ Flag with four colors
Flag with one color:
From 4 colors, we can select 1 color to form the flag in $$^4C_1 = 4$$ ways.
Flag with more than one colors:
Since the order of colour matters in a flag, this is a case of permutation.
• Thus, the total number of flags with two colours = $$^4P_2$$ = 12
• In a similar fashion, the total number of flags with three colours and four colours = $$^4P_3$$ = 24 and $$^4P_4$$ = 24 ways, respectively.
Thus, total number of flags= 4+12+24+24= 64
Hence, option E is the correct answer.
Target Test Prep Representative
Status: Founder & CEO
Affiliations: Target Test Prep
Joined: 14 Oct 2015
Posts: 9142
Location: United States (CA)
How many different flags can be made from 4 colors - Red, Blue, Green, and White? [#permalink]
09 Dec 2019, 18:37
EgmatQuantExpert wrote:
Fool-proof method to Differentiate between Permutation & Combination Questions - Exercise Question #3
How many different flags can be made from 4 colors - Red, Blue, Green, and White - such that no color is repeated more than once?
Options:
A. 25
B. 24
C. 48
D. 60
E. 64
Learn to use the Keyword Approach in Solving PnC question from the following article:
The number of flags that can be made using only 1 color is 4P1 = 4.
The number of flags that can be made using exactly 2 colors is 4P2 = 4 x 3 = 12.
The number of flags that can be made using exactly 3 colors is 4P3 = 4 x 3 x 2 = 24.
The number of flags that can be made using all 4 colors is 4P4 = 4 x 3 x 2 x 1 = 24.
Therefore, the total number of flags that can be made is 4 + 12 + 24 + 24 = 64.
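A one-line R check of this count, summing the ordered selections of k colors from 4 (illustrative only):
k <- 1:4
sum(choose(4, k) * factorial(k))   # 4 + 12 + 24 + 24 = 64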
|
2020-01-25 09:07:06
|
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 1, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.4832250475883484, "perplexity": 3585.930239968011}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2020-05/segments/1579251671078.88/warc/CC-MAIN-20200125071430-20200125100430-00253.warc.gz"}
|
http://www.maplesoft.com/support/help/Maple/view.aspx?path=RandomTools/flavor/complex
|
RandomTools Flavor: complex
describe a flavor of a random complex number
Calling Sequence complex(flav)
Parameters
flav - random flavor
Description
• The flavor complex describes a random complex number with real and imaginary parts described by the given random flavor flav.
This flavor can be used as an argument to RandomTools[Generate] or as part of a structured flavor.
Examples
> with(RandomTools):
> Generate(complex(integer));
-104281139460 - 306860183579 I   (1)
> Generate(complex(rational(range = -3..3, denominator = 720)));
-359/120 + (953/360) I   (2)
> Generate(list(complex(nonnegint(range = 10)), 10));
[10 + 3 I, 5 + 4 I, 10, 7 + 4 I, 9 + 10 I, 1 + I, 3 + 7 I, 10 + 2 I, 8 + 9 I, 1 + 10 I]   (3)
> Matrix(3, 3, Generate(complex(integer(range = 2..7))*identical(x) + complex(integer(range = 2..7)), makeproc = true));
[(3 + 7 I) x + 5 + 2 I    (6 + 4 I) x + 2 + 7 I    (6 + 7 I) x + 5 + 4 I]
[(7 + 6 I) x + 4 + 6 I    (5 + 4 I) x + 3 + 4 I    (2 + 2 I) x + 4 + 6 I]
[(6 + 3 I) x + 3 + 3 I    (4 + 3 I) x + 5 + 3 I    (2 + 6 I) x + 2 + 5 I]   (4)
|
2016-08-29 03:42:13
|
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 9, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.9717231392860413, "perplexity": 3621.4014686363034}, "config": {"markdown_headings": false, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2016-36/segments/1471982950827.61/warc/CC-MAIN-20160823200910-00175-ip-10-153-172-175.ec2.internal.warc.gz"}
|
https://zbmath.org/?q=an:0216.47102
|
Markov processes on a locally compact space. (English) Zbl 0216.47102
##### MSC:
60J25 Continuous-time Markov processes on general state spaces
##### References:
[1] K. L. Chung, The general theory of Markov processes according to Doeblin, Z. Wahrscheinlichkeitstheorie verw. Geb. 2 (1964), 230–254. · Zbl 0119.34604 · doi:10.1007/BF00533381
[2] W. Feller, An introduction to probability theory and its applications, Vol. II, John Wiley and Sons, Inc., 1966. · Zbl 0138.10207
[3] S. R. Foguel, The ergodic theory of Markov processes, to appear.
[4] S. R. Foguel, Existence of a $\sigma$-finite invariant measure for a Markov process on a locally compact space, Israel J. Math. 6 (1968), 1–4. · Zbl 0159.46802 · doi:10.1007/BF02771598
[5] S. R. Foguel, Ergodic decomposition of a topological space, to appear in Israel J. Math. · Zbl 0179.08302
This reference list is based on information provided by the publisher or from digital mathematics libraries. Its items are heuristically matched to zbMATH identifiers and may contain data conversion errors. It attempts to reflect the references listed in the original paper as accurately as possible without claiming the completeness or perfect precision of the matching.
|
2021-03-05 14:17:16
|
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 1, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.5520231127738953, "perplexity": 1716.6107885306099}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2021-10/segments/1614178372367.74/warc/CC-MAIN-20210305122143-20210305152143-00484.warc.gz"}
|
https://math.stackexchange.com/questions/1000642/proof-that-sum-1-infty-frac1n2-2/1000709
|
# Proof that $\sum_{1}^{\infty} \frac{1}{n^2} <2$
I know how to prove that
$$\sum_1^{\infty} \frac{1}{n^2}<2$$ because
$$\sum_1^{\infty} \frac{1}{n^2}=\frac{\pi^2}{6}<2$$
But I wanted to prove it using only inequalities. Is there a way to do it? Can you think of an inequality such that you can calculate the limit of both sides, and the limit of the right side is $2$?
Is there a good book about inequalities that helps to prove that a sum is less than a given quantity?
This is not a homework problem; it's a self-posed problem that I was thinking about :)
• Maybe you will like this: $$\sum_{k=1}^{\infty}\frac{k}{2^k}=2$$ – ClassicStyle Nov 1 '14 at 0:32
• @TylerHG this is nice! – Guerlando OCs Nov 1 '14 at 0:40
• @TylerHG: could you expand it to an answer? The usual (but not unique) way to compare two series $\sum_{n \geqslant 1} a_n$ and $\sum_{n \geqslant 1} b_n$ is to show that $a_n<b_n$ for all ${n \geqslant 1}$. But here, $\frac{n}{2^n}<\frac{1}{n^2}$ for $n$ (not so) large enough. So I am curious about the details. Thanks. – Taladris May 30 '15 at 9:39
Hint: for $n \geq 2$, $$\frac 1{n^2} \leq \frac{1}{n(n-1)} = \frac1{n-1} - \frac 1n$$
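Summing the hint from $n=2$ to $N$ telescopes, which makes the bound explicit:
$$\sum_{n=1}^{N}\frac{1}{n^2}\ \leq\ 1+\sum_{n=2}^{N}\left(\frac{1}{n-1}-\frac{1}{n}\right)\ =\ 2-\frac{1}{N}\ <\ 2$$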
• Moreover, this can prove that $\frac{1}{i}+\sum_{n=1}^{i}\frac{1}{n^2}$ is an upper bound to the sequence for any $i$ (which yields arbitrarily tight bounds) – Milo Brandt Nov 1 '14 at 0:15
• Shouldn't the right side's summation to infinity diverge? – Guerlando OCs Nov 1 '14 at 0:15
• No: we get a telescoping series – Omnomnomnom Nov 1 '14 at 0:16
• Can you think of a similar argument to show that $$\left(1+\frac{1}{n}\right)^n < 3$$ when $n\to\infty$? – Guerlando OCs Nov 1 '14 at 0:35
• If $x>0$ then $e^x\le(1+\frac xn)^{n+1}$; take $x=2$ and $n=1$ to get $e^2\le 9$; so $(1+\frac1n)^n < e \le 3$. But this should really be a separate question. – user21467 Nov 1 '14 at 1:46
Hint:
$$\sum_{n=1}^{\infty} \frac{1}{n^2} < 1+ \int_{1}^{\infty} \frac{1}{x^2}dx$$
You can use induction to prove the inequality
$1+\frac{1}{2^2}+\cdots+\frac{1}{n^2} \leq 2-\frac{1}{n}$ for $n \geq 1$, i.e. $\sum_{i=1}^{n} \frac{1}{i^2} \leq 2 - \frac{1}{n}\to 2$ as $n\to\infty$.
This short proof, however, only proves the weaker statement $\sum_{n=1}^\infty \frac{1}{n^2} \leq 2$.
• Good approach! :) – Guerlando OCs Nov 1 '14 at 0:35
• Maybe I am stupid but do you get the strict inequality as well? – Peter Apr 20 '15 at 14:44
• @Peter When $n=1$, then you get the strict inequality :). – Sherlock Holmes Apr 22 '15 at 1:47
• It seems that I am really stupid but when $n=1$ the equation tells me that $1\leq 1$. – Peter Apr 22 '15 at 11:58
• @Peter that is exactly right - $\leq$ means less than OR equal to. – Sherlock Holmes Apr 22 '15 at 18:52
$$\zeta(2)=\frac{5}{4}+\sum_{n=3}^{+\infty}\frac{1}{n^2}\leq\frac{5}{4}+4\sum_{n=3}^{+\infty}\frac{1}{(2n-1)(2n+1)}=\frac{5}{4}+\frac{2}{5}=\frac{33}{20}.$$
Still another proof: $$\sum_{n\ge 1}\frac{1}{n^2}\le \sum_{k\ge 0}\frac{2^{(k+1)}-2^k}{2^{2k}}=\sum_{k\ge 0}\frac{1}{2^k}=2$$
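A numeric sanity check in R of the partial sums against the bound $2-\frac{1}{n}$ from the induction answer (base R only, purely illustrative):
n <- 1e5
partial <- sum(1 / (1:n)^2)   # 1.64492..., i.e. pi^2/6 up to O(1/n)
c(partial, 2 - 1/n)           # the partial sum stays below the bound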
|
2019-11-17 00:10:33
|
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 1, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.916891872882843, "perplexity": 465.04885171683827}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2019-47/segments/1573496668772.53/warc/CC-MAIN-20191116231644-20191117015644-00159.warc.gz"}
|
https://www.nature.com/articles/s41377-019-0163-9?error=cookies_not_supported&code=9ef8ab5b-98bb-4afe-9f08-8a286219ebc0
|
Superoscillation: from physics to optical applications
Abstract
The resolution of conventional optical elements and systems has long been perceived to satisfy the classic Rayleigh criterion. Paramount efforts have been made to develop different types of superresolution techniques to achieve optical resolution down to several nanometres, such as by using evanescent waves, fluorescence labelling, and postprocessing. Superresolution imaging techniques, which are noncontact, far field and label free, are highly desirable but challenging to implement. The concept of superoscillation offers an alternative route to optical superresolution and enables the engineering of focal spots and point-spread functions of arbitrarily small size without theoretical limitations. This paper reviews recent developments in optical superoscillation technologies, design approaches, methods of characterizing superoscillatory optical fields, and applications in noncontact, far-field and label-free superresolution microscopy. This work may promote the wider adoption and application of optical superresolution across different wave types and application domains.
Introduction
Due to the propagation property of electromagnetic waves, the optical resolution of conventional optical systems is restricted to a basic theoretical limit of 0.61λ/NA (NA is the numerical aperture of the optical system)1. For optical waves with a wavelength of λ, in a homogeneous lossless medium with refractive index n, the propagation of light acts as a band-limited linear space-invariant system2 that filters out all components for which the spatial frequency exceeds n/λ within a distance of several wavelengths. The absence of higher frequency components results in limited optical resolution. To overcome this restriction, retrieving higher frequency components from evanescent waves, which exist near the objective surface within a distance of less than one wavelength, is necessary. Near-field optical-scanning microscopes3 exploit this property to achieve subdiffraction resolution of tens of nanometres using a nano-optical probe. Dielectric microspheres on sample surfaces can also yield subdiffraction features of the sample in the form of magnified virtual images, which can then be captured with a conventional optical microscope4,5, by relying on the conversion of evanescent waves into propagation waves. Superresolution imaging based on metallic and dielectric superlenses6,7,8 has been demonstrated in the near-field regime. Optical hyperlenses were also developed for far-field imaging beyond the diffraction limit by using anisotropic metamaterials9,10,11, which can convert evanescent waves into propagating waves with very large wavenumbers. High-frequency components in evanescent waves can also be attained in the far field via spatial frequency shifting. According to the angular spectrum theory, using spatially modulated light, the high-frequency components in evanescent waves can be shifted to low-frequency components and then converted to propagating waves. In this way, the high-spatial-frequency information can be retrieved in the far field for the reconstruction of superresolution images, which has been demonstrated by a variety of structured light illumination microscopes (SLIMs) with complicated postprocessing12. The resolution of a conventional SLIM is twice that of a traditional optical microscope with the same NA. Higher resolutions have been reported for structured light using surface plasmonic waves13 and nanowire fluorescence14. However, such illumination is restricted to within the near-field region of several tens of nanometres on the surfaces of nanowires, which prohibits deep imaging inside a sample. Nonlinear optics is also applied to enhance the superresolution via saturated structured fluorescence illumination15. Stimulated emission depletion (STED) employs two laser pulses for excitation and de-excitation of fluorophores to achieve superresolution through the nonlinear dependence of the simulated emission rate on the intensity of the de-excitation beam16.
In addition to purely optical techniques, by utilizing sequential activation and time-resolved localization of photoswitchable fluorophores, superresolution images can be reconstructed using stochastic techniques, including stochastic optical reconstruction microscopy17, photoactivated localization microscopy18 and fluorescence photoactivation localization microscopy19. Utilizing surface-enhanced Raman scattering, label-free superresolution microscopy was also demonstrated by a stochastic method20. Utilizing deep-learning approaches, the resolution can be further improved for bright-field microscopic imaging21 and fluorescence microscopy22,23. These methods, however, require either fluorescence labelling or postprocessing to achieve superresolution. In this context, far-field label-free direct superresolution imaging, without close contact with samples, is favourable for many applications, such as optical microscopy, telescopy and data storage. Recently, the concept of superoscillation has been tailored to and applied in optical superresolution both theoretically and experimentally to provide an alternative way to achieve label-free noncontact optical superresolution in the far field.
Optical superoscillation
Superoscillation refers to the phenomenon in which a band-limited function can contain local oscillations that are faster than those of the fastest Fourier components. Superoscillation allows the formation of arbitrarily small optical features, which can be used for superresolution focusing and imaging. Before proposing the concept of optical superoscillation, substantial endeavours were made in realizing resolution beyond the diffraction limit, such as apodization24,25,26 and the use of pupil filters27,28,29,30,31,32. Mathematically, if a two-dimensional (2D) analytic function is known exactly in an arbitrarily small spectral region, then the entire function can be determined uniquely by means of analytic continuation33. The diffraction limit of an optical imaging system can be overcome to some extent at the expense of the system performance in other areas. An optical system can theoretically attain as high a resolution capability as desired34. For an object of limited size, arbitrarily perfect imaging can be obtained under various conditions by coating the lens to realize a particular transmission function at the expense of tremendous loss of illumination, which also leads to a severely narrow field of view (FOV)35. A similar strategy has been suggested that utilizes superdirective antennas36, and it was investigated in the optical domain to improve the optical resolution capability. An arbitrary resolving power can be achieved by applying properly designed concentric ring pupils.
The concept of superoscillation was originally defined in terms of quantum weak measurements by Aharonov37 and was later developed and extended to optics by Michael Berry38,39,40,41,42,43,44,45,46,47, who suggested the possibility of demonstrating optical superoscillation without evanescent waves via subwavelength grating diffraction39. Using a paraxial approximation, the propagation of superoscillations was solved for subwavelength gratings, thereby revealing a direct connection with the Talbot effect in diffraction theory. This theory also predicts the formation of superoscillatory fine structures with spatial features of size λ/4, which were experimentally observed in the diffraction pattern of superfocusing due to the nonlinear Talbot effect48.
For optical waves, superoscillations correspond to local spatial frequencies (the gradient of the phase distribution) which exceed the wavenumber, and they are associated with phase singularities40. Researchers have observed the superoscillatory nature of the band-limited complete set of prolate spheroidal wavefunctions (PSWFs) φn(c,r)49, which have n zeros within the finite area of [−c/k, c/k] (k is the wavenumber) and enable the synthesis of arbitrarily small features by linear superposition50. However, with the increase in the number of superoscillatory features or the increase in the number of zeros n, the superoscillation area [−c/k, c/k] exhibits a dramatic reduction in its confined energy, while the energy that is contained in the sideband outside of [−c/k, c/k] increases tremendously, which requires a significant increase in the total energy to generate such superoscillatory features51. Figure 1 depicts an optical superoscillatory distribution, which can be divided into two areas: the FOV and sideband areas. The corresponding electric field can be characterized by five parameters: the spot full width at half maximum (FWHM), the peak intensity (Ipeak), the sidelobe ratio (the ratio of the maximum sidelobe intensity to the peak intensity within the FOV, namely, Isl_max/Ipeak), the FOV and sideband ratio (the ratio of the maximum sideband intensity to the peak intensity outside of the FOV, namely, Isb_max/Ipeak). Imaging applications require reduction of the spot size, increase of the spot intensity and reduction of the sidelobe ratio, while extending the FOV and suppressing the sideband ratio. However, in most cases, tradeoffs must be made among the five parameters, especially when the spot size is much smaller than the diffraction limit.
Another characteristic of the optical superoscillatory field is the sharp phase change at the zero amplitudes. The phase distribution plays a key role in the generation of optical superoscillatory features52,53. The optical field can be described by its electric field $$E(\vec{r}) = A(\vec{r})\exp[i\varphi(\vec{r})]$$, where $$A(\vec{r})$$ and $$\varphi(\vec{r})$$ denote the amplitude and phase, respectively. By substituting this expression into the Helmholtz equation, one obtains the following pair of equations, where $k$ is the wavenumber:
$$\nabla^2\varphi(\vec{r}) + \nabla\left(\ln A^2(\vec{r})\right)\cdot\nabla\varphi(\vec{r}) = 0$$
$$\nabla^2 A(\vec{r}) + \left[k^2 - \left|\nabla\varphi(\vec{r})\right|^2\right]A(\vec{r}) = 0$$
The gradient of the phase distribution, $$\nabla\varphi(\vec{r})$$, yields the local wavenumber. When the magnitude of the local wavenumber, $$\left|\nabla\varphi(\vec{r})\right|$$, well exceeds the wavenumber $k$ at a point $$\vec{r}$$, the second equation implies a fast decay of the electric amplitude $$A(\vec{r})$$ in the neighbouring area surrounding this point, which leads to the formation of superoscillatory structures. A numerical study also demonstrated that the special phase distribution is crucial in reducing the Fourier frequency of the entire electric field and keeps the Fourier components within the range limited by the wavenumber k53. An example of a one-dimensional (1D) optical field is presented in Fig. 2, in which the phase discontinuity at x = 2πn ± d results in local wavenumbers with infinite values.
Moreover, the phase distribution directly determines the backflow phenomenon in the optical superoscillation features54, in which optical waves with only positive momenta can travel backward with a negative local wavenumber. The backflow is closely related to superoscillation39,55,56 and was recently demonstrated experimentally in 2D superoscillatory optical fields57; in addition, the four major characteristic features of a superoscillatory optical field were identified: a highly localized field, phase singularities, extremely large local wavevectors and energy backflow.
For a focusing lens with NA = nsinα, the size of the circularly symmetrical optical spot is determined by the highest spatial frequency kr = ksinα, where n is the refractive index of the medium after the lens and α is the angle between the optical axis and the wavevector. The spot size corresponds to the distance between the central peak and the first zero of the zero-order Bessel function of the first kind, which yields a value of 0.38λ/NA and is close to the FWHM in most cases. According to this analysis, Qiu58 suggested 0.38λ/NA as the criterion for optical superoscillation focusing. As illustrated in Fig. 3, the graph of spot size vs. NA is divided into three areas: the area with a spot size that exceeds the Rayleigh diffraction limit is defined as the subresolved area, the area with a spot size smaller than 0.38λ/NA is defined as the superoscillation area, and the area in between is defined as the superresolution area.
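As a simple illustration of this classification (our sketch; the NA value for the quoted spot is an estimate, not a reported figure), a measured spot can be placed into one of the three areas of Fig. 3 as follows:

```python
# Sketch: classify a focal spot against the Rayleigh limit (0.5*lam/NA)
# and the superoscillation criterion (0.38*lam/NA) of ref. 58.
def classify_spot(fwhm_um: float, wavelength_um: float, na: float) -> str:
    rayleigh = 0.5 * wavelength_um / na
    criterion = 0.38 * wavelength_um / na
    if fwhm_um >= rayleigh:
        return "subresolved (above the diffraction limit)"
    if fwhm_um >= criterion:
        return "superresolution (between 0.38*lam/NA and 0.5*lam/NA)"
    return "superoscillation (below 0.38*lam/NA)"

# Example: a 185 nm spot at 640 nm with an estimated NA of 0.97
print(classify_spot(0.185, 0.640, 0.97))    # -> superoscillation
```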
Although further theoretical and experimental efforts are still required to fully understand the physics behind optical superoscillation, it has already proven to be a wonderful tool for engineering far-field superresolution optical elements and optical systems59.
Superoscillatory optical devices
Focusing linearly and circularly polarized waves
Optical superoscillation was first observed in the diffraction pattern of a quasiperiodic metallic nanohole array under the illumination of linearly polarized monochromatic light at a wavelength of 660 nm60. In the experiments, hot spots with an FWHM of 0.36λ were generated in the absence of evanescent waves at a far-field distance of 7.5λ, as shown in Fig. 4. This type of binary amplitude (BA) mask provides a promising method for the experimental realization of superoscillation optics.
Most of the early studies on superoscillation optics focused on the subdiffraction focusing of monochromatic light in the visible range. Because of their robustness and ease of fabrication, most of the reported superoscillation focusing lenses are based on BA ring masks fabricated in thin films of materials such as germanium, gold and aluminium, which are deposited on glass substrates. Such devices consist of multiple concentric rings with an amplitude transmission of either 0 or 1. The judicious design of the transmission pattern aims at forming hot spots with an FWHM that is less than the traditional diffraction limit of 0.5λ/NA through constructive and destructive interference on the focal plane. Using multiple concentric air nanorings with a diameter of 8.46λ (4.5 μm), subdiffraction focusing was demonstrated with an FWHM of 0.6λ at z = 5.26λ (2.8 μm)61, which is slightly smaller than the corresponding Abbe diffraction limit of 0.63λ (0.5λ/NA), and the maximum sidelobe intensity was more than 40% of the intensity of the central peak. A standard superoscillatory lens (SOL) based on a BA ring mask was optimized and designed with a focal length of 16.1λ and a radius of 62.5λ (40 μm) at a wavelength of 640 nm. A hot spot was generated with an FWHM of 0.289λ (185 nm); however, the superoscillation focal spot was surrounded by a large sidelobe with almost the same intensity as the spot, which limited the FOV to ~0.6λ62. Multiple subdiffraction foci were also numerically demonstrated in an oil immersion medium with a refractive index of 1.515 for linearly polarized light63. Although subdiffraction focusing was realized in the above cases, it only reflected the focal spot size of the transversely polarized light. Due to the polarization selectivity, which will be discussed later, the optical intensity obtained through a conventional optical microscope64 contains no information on the longitudinal components, which might broaden the focal spot size of the entire optical field.
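The interference picture behind such BA masks can be reproduced numerically. The sketch below (the ring radii are invented for illustration and do not correspond to any cited design) propagates a binary-amplitude ring mask to a plane a few wavelengths away with the angular spectrum method and crudely estimates the width of the central lobe:

```python
# Sketch: diffraction pattern of an illustrative binary-amplitude ring mask,
# propagated with the angular spectrum method (evanescent waves discarded).
import numpy as np

lam = 0.64e-6                        # wavelength (m)
k = 2 * np.pi / lam
n, dx = 1024, 0.1e-6                 # 1024 x 1024 grid, 100 nm sampling
x = (np.arange(n) - n // 2) * dx
X, Y = np.meshgrid(x, x)
rho = np.hypot(X, Y)

# Transmission 1 inside the listed [r_in, r_out] annuli, 0 elsewhere
rings = [(0.0, 0.6e-6), (1.4e-6, 2.0e-6), (2.9e-6, 3.4e-6), (4.0e-6, 4.5e-6)]
t = np.zeros((n, n))
for r_in, r_out in rings:
    t[(rho >= r_in) & (rho < r_out)] = 1.0

z = 5e-6                             # observation plane
fx = np.fft.fftfreq(n, dx)
FX, FY = np.meshgrid(fx, fx)
kz_sq = k**2 - (2 * np.pi * FX) ** 2 - (2 * np.pi * FY) ** 2
H = np.where(kz_sq > 0, np.exp(1j * z * np.sqrt(np.abs(kz_sq))), 0)
U = np.fft.ifft2(np.fft.fft2(t) * H)

line = np.abs(U[n // 2]) ** 2        # transverse cut through the axis
c = int(np.argmax(line))
right = c + int(np.argmax(line[c:] < line[c] / 2))   # first half-max crossing
print(f"central-lobe FWHM ~ {2 * (right - c) * dx * 1e9:.0f} nm "
      f"({2 * (right - c) * dx / lam:.2f} lam)")
```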
According to ref. 53, increasing the modulation freedom in terms of the phase and amplitude (namely, increasing the number of phase and amplitude values used in the mask) can improve the superoscillation focusing performance, e.g., it can enhance the efficiency, reduce the sidelobes and extend the FOV. To increase the efficiency and suppress the large sidelobes near the superoscillatory focal spot without significantly increasing the fabrication difficulties, an additional binary phase (BP, 0 and π) modulation can be introduced into the mask design. Considering the contributions from longitudinal components, a superoscillation focusing lens based on a binary amplitude and phase (BAP) mask was proposed with an ultralong focal length of 400λ and an NA of 0.78 for circularly polarized light. A tilted nanofibre probe was employed to obtain the optical intensity distribution on the focal plane. The measured focal spot had an average FWHM of 0.454λ65, which is smaller than the superoscillation criterion of 0.487λ (0.38λ/NA)58. Clearly suppressed sidelobes were observed with an intensity less than 26% of the focal spot intensity. Other designs of SOLs that utilize BP66 and BAP67 masks were also reported for circularly polarized light.
In addition to the point-focusing lens, linear-focusing SOLs have been demonstrated theoretically53 and experimentally68,69. Linear-focusing SOLs are realized with metallic and dielectric strip arrays on top of glass substrates for amplitude and phase manipulation. When illuminated with TE waves, the diffraction pattern consists of only transverse polarization components; therefore, achieving superoscillation focusing with small sidelobes is much easier due to the absence of longitudinal components. Both a quasicontinuous amplitude mask68 and a BAP mask69 were applied to linear-focusing SOLs. Quasicontinuous amplitude modulation was realized by varying the width of the subwavelength metallic slit. According to the acquired total optical intensity, a focal line FWHM of 0.379λ (larger than the theoretical prediction of 0.34λ but slightly smaller than the superoscillation criterion of 0.39λ) was experimentally demonstrated, with a small sidelobe ratio of 10.6%68. Figure 5 presents the experimental results of superoscillatory optical lenses based on metallic slits68, metallic and dielectric strips69 and metallic and dielectric concentric rings65.
SOLs based on metallic and dielectric concentric rings or strip arrays might suffer from polarization selectivity when the size of the rings is much smaller than half the wavelength. Polarization-independent subwavelength wave-front manipulation structures are favourable for many applications in which incident waves have complex polarization distributions. Subwavelength structures with circular symmetry are insensitive to polarization. A periodic double-layer metallic hole array70 has been proposed for the continuous modulation of phase and amplitude to realize an SOL with a theoretically predicted focal spot FWHM of 0.319λ and a sidelobe ratio of 30%. Accurate control of the amplitude and phase can also be achieved with polarization-independent aperiodic photon sieves71, in which subwavelength holes are arranged in a nonperiodic concentric fashion. By optimizing the radius of the rings (10.85–19.65 μm) and the diameter of the holes (50–100 nm), for linearly polarized light, a sub-diffraction-limit focal spot was demonstrated experimentally with an FWHM of 0.316λ in the transverse polarized optical field at 21λ from the lens in air71. The size of the spot is much smaller than the superoscillation criterion of 0.458λ (0.38λ/NA); however, the central peak is surrounded by a large sidelobe, as previously reported62.
Focusing cylindrically polarized vector waves
Cylindrically polarized vector waves are linear superpositions of electric field components oriented in the radial and azimuthal directions. Because of their special polarization orientation and tight focusing ability72,73, they are important in a variety of applications, including particle manipulation74,75, superresolution optical microscopy76,77, lithography78, material processing79 and particle acceleration80.
Traditionally, a tight focus with longitudinal polarization can be realized by focusing radially polarized waves with a high-NA lens in combination with an annular aperture filter81. Based on constructive interference, a lens consisting of subwavelength concentric annular metallic grooves was proposed for subdiffraction focusing of waves with radial polarization (RP) by scattering the surface plasmon polaritons (SPPs). A hot spot with an FWHM of 0.40λ was theoretically predicted at a distance of several wavelengths from the groove surface82. Utilizing a similar mechanism, a far-field plasmonic lens was experimentally demonstrated that can focus dual-wavelength (λ = 632.8 nm and 750 nm) waves to the same focal plane, and the obtained FWHM of the focal spots was 0.41λ for both wavelengths on the focal plane located at a distance of 1.2 μm83. Using the polarization selectivity of SPPs84, an SPP lens was designed that generates and focuses in-phase radially polarized light under the illumination of waves with linear polarization (LP)85. Constructive interference of the scattered SPP waves was guaranteed in the far field 20 μm from the lens surface by tuning the propagation constant of the SPPs via the slit width. The experimentally obtained FWHMs were 340 ± 60 nm (0.38λ/NA) and 420 ± 60 nm (0.47λ/NA) in the x- and y-directions, respectively85, in water. However, the intrinsic loss caused by SPP propagation might significantly reduce the efficiency of the SPP lens. To avoid SPP loss, an SOL based on a BA86,87 and BP66 annular ring belt array can be designed for the superoscillation focusing of radially polarized waves. A BP SOL fabricated as a Si3N4 concentric ring array has been demonstrated, exhibiting a subdiffraction longitudinally polarized focal spot with an average FWHM of 0.456λ and a depth of focus (DOF) of 5λ88. Sharper focal spots were expected using higher-order radially polarized waves89,90. In addition to longitudinal electric fields, the creation of a purely longitudinal subdiffraction magnetic field was studied numerically in optomagnetic materials under the inverse Faraday effect by tightly focusing an azimuthally polarized vortex beam91.
The generation of subdiffraction 2D hollow spots is critical to STED microscopy92 and nonlinear nanolithography93. Reducing the inner radius of the 2D hollow spot is of particular importance in further enhancing the spatial resolution. Traditionally, 2D hollow spots can be created by focusing a helical-phase optical vortex beam with a high-NA objective lens. An alternative way to generate a tight 2D hollow spot is to focus azimuthally polarized waves94. Subdiffraction focusing of azimuthally polarized waves was demonstrated with SOLs based on BP dielectric Si3N4 (ref. 95) and BA metallic aluminium (ref. 96) concentric ring arrays. The inner FWHMs of the generated 2D hollow spots were 0.61λ (0.39λ/NA) and 0.368λ (0.352λ/NA), respectively, which are slightly larger than their theoretically predicted values of 0.57λ and 0.349λ. Figure 6 shows the experimental results of focusing cylindrically polarized vector waves.
Most of the reported SOLs for focusing cylindrically polarized vector waves are based on either a BA or BP concentric ring array. Although the fabrication is comparatively easy, these arrays offer very few degrees of freedom in the optimization of the wave-front modulation mask. Recent rapid developments in metasurfaces have enabled flexible control of the amplitude97, phase98,99,100,101 and polarization102,103 on a subwavelength scale. A group of eight metallic antennas was proposed for the modulation of cross-polarized transmission light, which cover a phase range of 2π. Based on such antennas, metalenses were designed with the integrated functions of polarization conversion and focusing. The incident azimuthally polarized waves can be converted into radially polarized waves and focused into a longitudinally polarized solid spot, while incident radially polarized waves can be converted into azimuthally polarized waves and focused into a 2D hollow spot. A numerical simulation demonstrated a longitudinally polarized focus with a subdiffraction size of 0.47λ and an azimuthally polarized hollow focus with a subdiffraction size of 0.43λ104.
In focusing cylindrically polarized vector waves, aligning the optical focusing device to the optical axis of the incident waves is difficult, especially in the case of subwavelength and subdiffraction focusing. Any misalignment can result in deformation of the focal spot and even destruction of the subdiffraction features. The best solution is to integrate the polarization conversion and focusing functions into a single device for incident waves with either circular polarization or LP. Under the illumination of circularly polarized waves, the orientation of the linearly polarized reflection wave can be continuously tuned by rotating a reflective quarter-wave plate (QWP) metasurface. In combination with BP (0 and π) modulation, a subdiffraction longitudinally polarized focus was numerically demonstrated at a wavelength of 1064 nm with a metamirror consisting of five concentric rings105. A family of cross-shaped reflective QWP metasurfaces was proposed with 32 equally separated phase values over the 2π range at a wavelength of 1550 nm. Two subdiffraction focusing mirrors based on the QWP metasurfaces were demonstrated over a broad bandwidth of 210 nm, which converted circularly polarized incident waves into radially polarized and azimuthally polarized waves and focused them into longitudinally polarized spots with FWHMs of 0.38λ‒0.42λ and azimuthally polarized 2D hollow spots with inner FWHMs of 0.32λ‒0.34λ106. In the design of a QWP-based focusing metasurface, it is necessary to compensate for the additional geometric phase induced by element rotation by selecting elements according to their polar angles, which might result in a slightly nonsymmetric intensity distribution in the focal spot. This problem can be overcome by utilizing half-wave plate (HWP) metasurfaces. A subdiffraction focusing lens based on HWP metasurfaces has been demonstrated with a group of dielectric elliptical cylinder metasurfaces of 30 sizes at a wavelength of 915 nm. The dielectric metalens converts linearly polarized waves into radially polarized waves, which are focused into longitudinally polarized subdiffraction spots with FWHMs of 0.385λ‒0.458λ in a broad wavelength range of 875‒1025 nm107.
Subdiffraction optical needles and diffractionless beams
The conventional elements for realizing an extended DOF include axicons108, diffraction gratings109,110, aberration lenses111, and Fresnel zone plates112. Subdiffraction optical needles are focal spots that extend along the optical axis with a transverse size that is smaller than the Abbe diffraction limit. Such optical needles are ideal for particle acceleration, superresolution imaging, high-density data storage and fabrication of planar structures.
A linearly polarized superoscillatory optical needle was first demonstrated at a wavelength of 640 nm with a lens that has been modified from a conventional point-focusing BA SOL by simply blocking its central area with a circular metallic disk113. The optical needle had an axial length of 11λ and a transverse size of 0.42λ in the experiment. An optimization method was also employed to design optical needle SOLs. At a violet wavelength of 405 nm, a BA SOL was optimized, and the generation of a circularly polarized optical needle with a long DOF of 15λ and a transverse size of 0.45λ was experimentally demonstrated114. A theoretical design of BP lenses for generating deep ultraviolet optical needles was also reported115. However, these reported lenses have very short focal lengths of approximately 20‒30λ, which poses a major obstacle to practical applications. To overcome this problem, an SOL with an ultralong focal length of 240λ was developed for focusing an azimuthally polarized vortex beam with a topological charge of m = 1. The transverse size of the generated optical needle varied between 0.42λ and 0.49λ within the 12λ propagation distance116. Using a BP mask, a subdiffraction optical needle was experimentally generated with a length of 19.7λ at a THz wavelength of 118.8 μm117.
The generation of a subdiffraction needle of a longitudinally polarized wave by focusing a radially polarized Bessel–Gaussian beam using a combination of a BA filter and a high-NA objective lens was theoretically proposed. The predicted propagation distance is approximately 4λ and the transverse size of the needle is 0.43λ118. A further theoretical study showed that the transverse size can be further reduced to 0.4λ within a DOF of 4λ with the proper beam intensity profile119. Using an SPP lens, a longitudinally polarized optical needle with a transverse size of 0.44λ and a length of 2.65λ was demonstrated at a meso-field distance by numerical simulation120. With an optimized SOL based on a BP mask, a 5λ-long longitudinally polarized optical needle was experimentally obtained with a transverse FWHM of 0.456λ (0.424λ/NA) at an ultralong working distance of 200λ88.
For superoscillation focusing fields, the sidelobe intensity typically increases substantially as the beam size is reduced below the superoscillation criterion of 0.38λ/NA. To suppress the sidelobes, a supercritical lens was proposed for realizing a transverse FWHM that is less than the diffraction limit but slightly exceeds the superoscillation criterion121. A 12λ-long optical needle was experimentally generated 135λ from the lens with a transverse size of 0.407λ and suppressed sidelobe intensity, which was only 16.2% of the central peak intensity.
In addition to subdiffraction solid optical needles, their hollow counterparts, which have zero intensity along the optical axis, are attractive for superresolution applications. In STED microscopy, improvements in the imaging depth with a hollow Bessel beam have been verified122. A hollow optical needle with a length of 2.28λ and a transverse inner size of 0.6λ, which was obtained by shaping a radially polarized Bessel-Gaussian beam with a second-order vortex phase and amplitude filter, has been theoretically reported123. Assisted by an optimization algorithm, a BP SOL was designed with a working distance of 300λ (189.84 μm), and a subdiffraction hollow needle was experimentally created with a length of 10λ by focusing azimuthally polarized waves. Within the needle, the transverse inner size varied between 0.34λ and 0.52λ, which is less than 0.5λ/NA (ref. 124).
The length of the optical needle can be further extended, but the required computational load renders the design of optical needles with lengths of hundreds of wavelengths challenging. Theoretical efforts have been made125,126,127,128, but there has been no experimental demonstration of such long subdiffraction optical needles. Recently, the concept of angular spectrum compression was proposed for generating ultralong subdiffraction optical needles, which significantly reduces the design complexity. Based on this concept, a superoscillation point-focusing lens that uses a BP mask was optimized at a wavelength of 672.8 nm. When illuminated with an azimuthally polarized wave at a shorter wavelength of 632.8 nm, the lens generated an optical hollow needle with a subdiffraction and subwavelength transverse size within the nondiffracting propagation distance of 94λ129. A numerical simulation also revealed that when the lens was immersed in water, the propagation distance was further extended to 180λ, while the beam remained superoscillatory with a transverse size of ~0.35λ–0.4λ. This result demonstrates the satisfactory penetration ability of the superoscillatory hollow needle, which is crucial for practical applications. In a later study, surprisingly, classical binary Fresnel zone plates were used to generate subdiffraction optical needles for multiple polarizations via optimization-free design130. A numerical simulation showed that, compared to the optical needle reported in ref. 129, those created by classical Fresnel zone plates have smaller fluctuations in optical intensity along the optical axis. The experimental results showed that the transverse sizes and the axial lengths were 0.40λ–0.54λ and 90λ, 0.43λ–0.54λ and 73λ and 0.34λ–0.41λ and 80λ for the generated optical needles with circular, longitudinal and azimuthal polarizations, respectively, as shown in Fig. 7. The realization of a longer needle is possible by further increasing the radius of the binary Fresnel zone plate or using a shorter working wavelength. Compared with binary concentric ring arrays, spatial light modulators (SLMs) can provide more opportunities in the design of lens phase profiles and, therefore, can be used to realize optical needles with complicated features. Diffractionless beams with arbitrarily shaped subdiffraction features with a propagation distance of 250 Rayleigh lengths have been demonstrated by the superposition of optical Bessel beams of different orders but the same transverse wavenumber131. An ultralong subdiffraction diffractionless beam was generated by an SLM with 256 phase levels in the range of 0‒2π. The beam achieved a propagation distance of approximately 43.3 mm with a maximum transverse size of less than 63.28 μm (0.5λ/NA) for a working distance of 1000 mm132. Generating subdiffraction and subwavelength diffractionless beams using an SLM and a high-resolution imaging system is also possible. Another promising method is to use phase-modulation birefringent metasurfaces105,106,107 for the direct generation of diffractionless beams with complex polarization in broadband wavelength ranges.
Generation of subdiffraction three-dimensional (3D) hollow spots
Unlike 2D hollow spots, 3D hollow spots provide complete confinement in 3D space, which can be used to increase the axial resolution in STED microscopy92 and superresolution lithography93 and improve the trapping stability of optical tweezers. Different approaches have been proposed for the generation of subdiffraction or subwavelength 3D hollow spots, including focusing of radially polarized first-order Laguerre-Gaussian waves with a 4π system133, focusing of circularly polarized waves with a circular π phase plate (πPP)134, destructive interference of double-ring-shaped radially polarized R-TEM11*-mode waves135, incoherent superposition of two radially polarized waves that are modulated by a circular πPP and a quadrant 0/π phase plate136 and beam shaping with an SLM137. A 3D hollow spot was also demonstrated in visible light via antiresolution138. However, the 3D hollow spots that are generated by these methods are either diffraction limited134,135,136,137,138 or difficult to realize133.
An SOL based on a BP concentric ring array was designed by carefully optimizing the interference patterns of the azimuthal, radial and longitudinal polarizations in cylindrically polarized vector waves. Since the azimuthal and radial components share the same transmission function, a tradeoff must be made between the transverse and axial sizes. As shown in Fig. 8, a 3D hollow spot was experimentally created with a transverse inner FWHM of 0.546λ (0.496λ/NA) and an axial inner FWHM of 1.585λ at a wavelength of 632.8 nm139. Further investigation showed that both the transverse and axial sizes can be significantly reduced by independently modulating the radial and azimuthal components of the incident waves using birefringent metasurfaces. Numerical simulations have demonstrated the generation of a 3D hollow spot with inner FWHMs of 0.33λ and 1.32λ in the transverse and axial directions, respectively, in air.
Quantum optical superoscillation
Previous demonstrations of optical superoscillation have been reported in the domain of classical optics, in which the optical superoscillation was associated with the interference of classical propagating waves. Owing to wave-particle duality, superoscillation is also expected at the level of single photons, which are described by the quantum wavefunction. Experimental observation of single-photon superoscillation with a conventional 1D SOL has recently been reported145. Similar to multiple-slit interference, a 1D SOL that is based on a binary mask was designed with superoscillation focusing performance for classical optical interference. In the experiment, a single-photon source was used to study the superoscillation behaviour of a single photon passing through a grating-like binary mask. The measured single-photon wavefunction was confined to an area with FWHMs of (0.49 ± 0.02)λ and (0.48 ± 0.03)λ for two orthogonal polarizations, which are larger than both the theoretical and experimental values of 0.4λ and 0.44λ in the classical regime. The superoscillatory behaviour of a single photon indicates that optical superoscillation is not a collective behaviour of many photons but rather a natural consequence of the quantum behaviour of individual photons145. Table 1 summarizes reported results for superoscillatory lenses.
Design methods
Optimization design methods
The design of an SOL mainly relies on optimization algorithms, among which particle swarm algorithms146 are the most commonly used. The design procedure is described in the flow chart in Fig. 9a. First, for specified parameters of the target field, such as the FWHM, sidelobe, FOV and DOF, a group of lenses with different genes representing the lens transmission function are randomly generated. Then, the diffraction pattern of each lens is calculated on the target focal plane for specified incident waves. By comparing the diffraction and target fields, a fitness function is calculated for each lens. Finally, the gene of each lens is updated according to the fitness function. This procedure is repeated until the best fitness function attains a predefined value. With an optimization approach, SOLs can be designed without fully understanding the physics behind superoscillations; however, the particle swarm algorithm might become trapped in a locally optimal solution. This problem can be partially alleviated by randomly regenerating the genes of some of the particles after a certain number of iterations. The solution can also be improved by combining the particle swarm algorithm with genetic algorithms147. Based on genetic algorithms, an unconstrained multi-objective optimization method148 was proposed to realize flexible focusing patterns. The methods for diffraction pattern calculation include the Rayleigh-Sommerfeld approach149, the angular spectrum method150 and the Debye-Wolf method151. Due to the computational complexity of the 2D integration involved in the calculation, the size of the lens under design is restricted to several hundred working wavelengths. However, in the case of a circularly symmetric configuration65,86,88,95, the calculation can be simplified to 1D integration, which can be further accelerated by utilizing the fast Hankel transformation152. Nevertheless, the design of larger-aperture lenses for subwavelength and superoscillation focusing remains a substantial challenge.
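A minimal sketch of this design loop is given below. It assumes a scalar narrow-belt Fresnel model, an on-axis merit function and generic swarm hyperparameters; actual designs use vectorial diffraction models (Rayleigh-Sommerfeld or Debye-Wolf) and richer fitness functions. Because the focal field is linear in the ring transmissions, each belt's contribution is precomputed once:

```python
# Sketch: particle-swarm optimization of a binary-amplitude ring mask
# (toy scalar Fresnel model; all hyperparameters are assumptions).
import numpy as np
from scipy.special import j0

lam = 640e-9
k = 2 * np.pi / lam
fl = 20 * lam                          # assumed focal length
n_rings, d_r = 40, 0.4 * lam           # number and width of ring belts
r_focal = np.linspace(0, 3 * lam, 301) # sampled radii on the focal plane

rho = np.linspace(0, n_rings * d_r, 4000)
drho = rho[1] - rho[0]
kernel = np.exp(1j * k * rho**2 / (2 * fl)) * rho
M = np.empty((r_focal.size, n_rings), dtype=complex)
for i in range(n_rings):               # per-belt Fresnel contribution
    sel = (rho >= i * d_r) & (rho < (i + 1) * d_r)
    M[:, i] = (kernel[sel] * j0(k * np.outer(r_focal, rho[sel]) / fl)
               ).sum(axis=1) * drho

def fitness(gene):                     # smaller is better
    I = np.abs(M @ (gene > 0.5).astype(float)) ** 2
    peak = I[0]
    if peak <= 0:
        return 1e9
    half_idx = int(np.argmax(I < peak / 2))        # ~ spot half-width
    sidelobe = I[half_idx:].max() / peak           # largest outer lobe
    return half_idx * (r_focal[1] - r_focal[0]) / lam + 2.0 * sidelobe

rng = np.random.default_rng(0)
pos = rng.random((50, n_rings)); vel = np.zeros_like(pos)
pbest = pos.copy(); pval = np.array([fitness(p) for p in pos])
for _ in range(200):                   # standard PSO update rule
    g = pbest[np.argmin(pval)]
    vel = (0.7 * vel + 1.5 * rng.random(pos.shape) * (pbest - pos)
                     + 1.5 * rng.random(pos.shape) * (g - pos))
    pos = np.clip(pos + vel, 0, 1)
    val = np.array([fitness(p) for p in pos])
    better = val < pval
    pbest[better] = pos[better]; pval[better] = val[better]
print("best fitness:", pval.min())
```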
Optimization-free design approaches
Although convenient, optimization methods provide a very coarse physical picture of optical superoscillation and do not enable control of the detailed profile of the superoscillatory field. Moreover, for limited target parameters, the solution obtained via an optimization method is not unique, and tradeoffs must be made among the target parameters for multiparameter optimization. Since optical superoscillations are induced by the interference of coherent propagation waves or the superposition of plane waves with spatial frequencies that are less than 1/λ, the superoscillation optical field can be described with bandwidth-limited functions. PSWFs constitute a complete set of 1D bandwidth-limited functions, and their properties have been thoroughly studied in a series of papers49,153,154,155,156 by D. Slepian, H. J. Landau and H. O. Pollak. The PSWFs with a bandwidth of k0 are orthogonal in both the entire spatial domain and the limited region of [−D/2, D/2]. This property allows one to synthesize arbitrarily narrow structures in 1D space. Based on PSWFs, an optimization-free approach was proposed to construct a superoscillation focal spot for a given optical field profile and FOV [−D/2, D/2], and the corresponding superoscillatory mask transmission function could be obtained by reverse propagation using the scalar angular spectrum method50. This approach can also be extended to 2D cases for optimization of the superoscillatory point spread function (PSF) for far-field superresolution imaging157 using circular prolate spheroidal wavefunctions (CPSWFs)158. Figure 10 presents an example of a superoscillatory function that was constructed from band-limited CPSWFs φn(c, r), where c = 2πD/λ, the cut-off frequency is 2π/λ and the FOV is D = λ/2. The constructed superoscillatory function has an FWHM of 0.2λ. Theoretically, this method is reported to improve the efficiency by three orders of magnitude for the same resolution or increase the resolution by 26% for the same efficiency157. The superoscillatory masks that are designed with band-limited functions typically have continuous amplitude and phase distributions with phase shifts at points of zero amplitude.
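The construction can be imitated in one dimension with an even simpler band-limited basis. The sketch below substitutes shifted sinc functions for PSWFs (a simplification on our part, not the method of ref. 50) and prescribes alternating values at points spaced five times closer than the Nyquist distance; the minimum-norm band-limited interpolant is then forced to superoscillate inside the FOV, and the printed sideband ratio illustrates the energy cost discussed earlier:

```python
# Sketch: band-limited superoscillation by constrained interpolation,
# using a shifted-sinc basis in place of prolate spheroidal wavefunctions.
import numpy as np

lam = 1.0
dx_nyq = lam / 2                         # Nyquist spacing for band limit 2*pi/lam
centres = dx_nyq * np.arange(-80, 81)    # sinc-basis centres (band-limited)

d = dx_nyq / 5                           # constraints 5x below Nyquist
y = d * np.arange(-3, 4)
target = (-1.0) ** np.arange(y.size)     # alternating +-1 inside the FOV

B = np.sinc((y[:, None] - centres[None, :]) / dx_nyq)
coef, *_ = np.linalg.lstsq(B, target, rcond=None)   # minimum-norm solution

x = np.linspace(-20, 20, 40001)
f = np.sinc((x[:, None] - centres[None, :]) / dx_nyq) @ coef

fov = np.abs(x) <= y.max()
ratio = (np.abs(f[~fov]).max() / np.abs(f[fov]).max()) ** 2
print(f"feature spacing inside FOV: {d / lam:.2f} lam "
      f"(fastest Fourier half-period: {dx_nyq / lam:.2f} lam)")
print(f"sideband-to-FOV intensity ratio: {ratio:.2e}")
```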
An optimization-free mathematical method for designing a superoscillatory mask by solving a nonlinear matrix equation was proposed58. For target intensity values [f1, f2,…, fM] at radii [r1, r2, …, rM] on the focal plane and a fixed ring belt width Δr on the mask, the radius of the nth belt was obtained by solving the inverse problem with trust-region dogleg Newton theory58.
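A toy version of this inverse problem can be set up with a generic trust-region solver (a sketch under assumed conditions: the narrow-belt Fresnel forward model and all numerical values below are ours, and scipy's dogbox trust-region routine stands in for the trust-region dogleg Newton method of the cited work):

```python
# Sketch: solve for ring-belt radii that yield prescribed focal intensities.
import numpy as np
from scipy.optimize import least_squares
from scipy.special import j0

lam = 1.0                        # work in units of the wavelength
k = 2 * np.pi / lam
fl = 20 * lam                    # assumed focal length
d_r = 0.3 * lam                  # fixed belt width, as in the formulation

def focal_intensity(belt_radii, r_m):
    # Narrow-belt Fresnel model: a belt at radius rho contributes
    # exp(ik rho^2 / 2f) * J0(k r rho / f) * rho * d_r to the field.
    rho = np.asarray(belt_radii)
    U = (np.exp(1j * k * rho**2 / (2 * fl)) * rho * d_r
         * j0(k * np.outer(r_m, rho) / fl)).sum(axis=1)
    return np.abs(U) ** 2

r_m = np.array([0.0, 0.2, 0.4]) * lam    # control radii on the focal plane
f_m = np.array([3.0, 0.0, 0.0])          # target: bright peak, two dark rings

res = least_squares(lambda rho: focal_intensity(rho, r_m) - f_m,
                    x0=np.array([2.0, 4.5, 7.0]) * lam,
                    bounds=(0.5 * lam, 12.0 * lam), method="dogbox")
print("belt radii (lam):", np.round(res.x / lam, 3),
      "| residual cost:", f"{res.cost:.3e}")
```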
Design of optical needles and diffractionless beams
Various methods have been employed to design SOLs with a long DOF, including optical needles and diffractionless beams. A prescribed super-Gaussian function can be used to describe the extended longitudinal profile of an optical needle. Using optimization algorithms, the transmission function of a BA mask can be obtained by minimizing the difference between the actual field distribution and the merit function114,115,116,117,124,159. However, when applying these commonly used methods, one must calculate the diffraction patterns on the planes at different positions within the target optical needle at intervals that are smaller than one wavelength and then compare the calculated patterns with their merit profiles. Therefore, the required computational consumption increases linearly with the length of the optical needle, which renders the design of subdiffraction optical needles with propagation distances exceeding several tens of wavelengths impractical. An optimization-free approach was proposed for the generation of a longitudinally polarized optical needle160, which can be treated as a constant electric current within its extent. Reverse propagation was carried out to obtain the profile of a radially polarized incident beam at the pupil plane of the two high-NA objective lenses in a 4Pi system. However, this approach cannot be extended to cases other than longitudinal polarizations.
According to angular spectrum theory, the profile of a propagating optical field is determined by its angular spectrum. For propagating waves in a uniform lossless medium, the amplitude of the angular spectrum remains unchanged during propagation; the variation in the transverse field distribution results solely from the accumulated phase difference among the spatial frequency components. Therefore, the key to generating a subdiffraction diffractionless beam is to synthesize the optical field of the subdiffraction spot on a given plane while minimizing the accumulation of the phase difference over a desired propagation distance. This can be done by reducing the propagation angle with respect to the optical axis for each spatial frequency component or by compressing the angular spectrum with respect to the effective cutoff spatial frequency of the propagating wave. As shown in Fig. 9b, utilizing the concept of angular spectrum compression129, an SOL was designed with a single subdiffraction focal spot at wavelength λ. Then, under illumination at a shorter wavelength λ0 (<λ), a subdiffraction diffractionless beam was generated with a length of approximately 100λ0 in air and 200λ0 in water. The value of the wavelength λ is determined by several parameters, including the lens radius, the working wavelength λ0, the focal length and the optical needle propagation distance. The extension of an optical needle in water124,129 can be explained similarly. Interestingly, utilizing the same strategy, subdiffraction diffractionless beams can be created with the same transverse size and propagation distance but lower intensity fluctuations along the optical axis using a classical Fresnel zone plate, which is entirely free from optimization and allows one to design subdiffraction diffractionless beams for multiple polarizations using very simple algebra130, as shown in Fig. 9c. The main shortcoming of angular spectrum compression is that it enables little control over the detailed field profile within the propagation distance. Further investigation is necessary for generating uniform diffractionless beams with a superoscillatory transverse size.
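A 1D sketch (ours; the grid and wavelengths are illustrative) shows the mechanism: a line focus synthesized with spatial frequencies filling the band of a design wavelength λd spreads quickly when propagated at λd itself, but the same transverse spectrum stays confined over a longer distance at a shorter working wavelength, because every plane-wave component then travels at a smaller angle to the axis:

```python
# Sketch: angular spectrum compression in one dimension.
import numpy as np

n, dx = 8192, 0.02                   # grid, in units of the design wavelength
x = (np.arange(n) - n // 2) * dx
kx = 2 * np.pi * np.fft.fftfreq(n, dx)

lam_d = 1.0                          # design wavelength
spectrum = (np.abs(kx) <= 0.95 * 2 * np.pi / lam_d).astype(float)

def fwhm_at(z, lam):                 # propagate, then measure the central lobe
    k0 = 2 * np.pi / lam
    kz = np.sqrt(np.maximum(k0**2 - kx**2, 0.0))
    E = np.fft.ifft(spectrum * np.exp(1j * kz * z))
    I = np.abs(np.fft.fftshift(E)) ** 2
    c = n // 2
    return 2 * dx * np.argmax(I[c:] < I[c] / 2)

f0 = fwhm_at(0.0, lam_d)
for lam0 in (1.0, 0.9):              # working wavelength equal to / below lam_d
    z = 0.0
    while fwhm_at(z, lam0) < 1.2 * f0 and z < 200.0:
        z += 0.5
    print(f"lam0 = {lam0:.1f} lam_d: FWHM within 20% up to z ~ {z:.1f} lam_d")
```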
Characterization of superoscillatory optical fields
Transverse fields
Optical superoscillatory features result from the interference of propagating waves, whose angular spectrum is restricted within the cutoff spatial frequency of 1/λ. Therefore, ideally, superoscillatory features can be retrieved by an optical system with an NA of one and an infinite aperture. As shown in Fig. 11a, high-NA microscopes have been widely used to characterize superoscillatory optical fields61,85,96,114,116,124,129,130, and the 2D intensity distribution of a superoscillatory optical field can be directly obtained in a single shot by a conventional optical microscope equipped with a high-resolution digital camera. In addition, 3D mapping of superoscillatory optical fields can be implemented by scanning an objective lens mounted on a z-axis piezo-driven nanopositioner. The major advantage of using microscopy is its fast measurement. However, the pixel size is quite large compared to the wavelength in the visible and near-infrared spectra. To resolve the images of superoscillatory fields, microscopes with large magnification must be employed. Both theoretical and experimental results have indicated that this large magnification (optical lever) results in significant attenuation of the longitudinal component in the image field64. The absence of longitudinal polarization was verified in later experimental investigations of focused superoscillation optical fields139,161, which showed clear deviations of the image fields from the theoretically predicted electric fields, while satisfactory agreement was observed between the image fields and the transverse components that were obtained via numerical simulation.
Longitudinal fields
In addition to conventional microscopes, a scanning aperture optical microscope operated in transmission mode was applied to map focused superoscillation spots60. The light can be collected through a metal-coated tapered fibre tip with an aperture as small as 30 nm. The polarization filtering behaviour of such tips has been theoretically studied using an electric dipole scattering model64, revealing high polarization selectivity with a substantial reduction of the polarization component parallel to the tip axis. As shown in Fig. 11b, to measure the longitudinal electric components, the fibre tip axis must be set perpendicular to the polarization direction or at an angle at which the tip can respond to the longitudinal polarization to a certain extent88,130. A tilted tip can be used to map the entire electric field within the plane of a tilted nanofibre with satisfactory similarity to the theoretical result of the entire field130,139, and the distortion caused by the polarization selectivity can be minimized with a tilt angle of 45°.
Vector fields
In addition to the above two direct approaches for characterizing superoscillatory fields, another way to obtain the intensity distribution of the optical field is to use the knife-edge method162. The advantage of the knife-edge method is its insensitivity to polarization, which makes it suitable for characterizing all of the optical field components. By projecting in different directions, the data acquired with a single knife edge can be used to reconstruct the 2D intensity distribution via Radon back-transformation163. Instead of traditionally used razor blades, the knife edge can be formed by a sharp-edged opaque pad that is deposited on the top surface of a photodiode active area with a minimized edge diffraction effect. The experimental results demonstrated that the reconstructed subdiffraction field profiles had excellent agreement with the theoretical results for the total fields, indicating that the method has satisfactory polarization insensitivity72. A similar approach was utilized to characterize a subdiffraction focal spot with a size of 0.4λ that was generated by focusing a radially polarized wave at a wavelength of λ = 980 nm. In the experiment, a specially designed detector was used. A 200-nm-thick Ti/Au knife edge with a roughness of less than 30 nm was formed on the top surface of a 200-nm-thick and 50 μm × 25 μm active area, which was fabricated on the top surface of a depletion layer. The experimental results showed good agreement with the theoretical results119. In the above cases, only one knife edge was involved in the scanning, and the 2D intensity distribution was recovered with Radon back-transformation. This postprocessing step can be avoided by using a double knife edge with the edges oriented in the x- and y-directions164, which allows the calculation of the intensity within a smaller area that is determined by the scanning interval in the x- and y-directions via simple subtraction operations. A double knife edge was fabricated from a right-angled silicon fragment with a thickness of 110 μm and a roughness of 10 nm, and it was directly mounted on a conventional photodetector. Due to the high reflectivity of silicon, the measurement could be conducted in both the reflection and transmission modes165. Due to the diffraction effect caused by the 110-μm-thick edge, the measured subdiffraction spots showed clear distortion compared to their theoretical predictions166. This discrepancy might be minimized by a double knife edge with nanometre-scale thickness that is deposited directly on the top surface of the photon detector, as reported in the literature72,119.
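The single-knife-edge procedure lends itself to a compact simulation (a sketch with an invented spot model, not data from the cited experiments): the photodiode power recorded while the edge scans across the beam is the cumulative integral of the intensity, so its derivative along the scan direction is exactly a Radon projection, and filtered back-projection recovers the polarization-summed 2D intensity:

```python
# Sketch: knife-edge tomography of a synthetic focal spot.
import numpy as np
from skimage.transform import radon, iradon

n = 201
y, x = np.mgrid[-1:1:n * 1j, -1:1:n * 1j]
r = np.hypot(x, y)
# Synthetic spot: central lobe plus a weak sidelobe ring
I_true = np.exp(-(r / 0.12) ** 2) + 0.2 * np.exp(-((r - 0.45) / 0.05) ** 2)

theta = np.linspace(0.0, 180.0, 90, endpoint=False)
proj = radon(I_true, theta=theta)        # line projections of the spot
knife = np.cumsum(proj, axis=0)          # what the photodiode would record

sinogram = np.gradient(knife, axis=0)    # derivative restores the projections
I_rec = iradon(sinogram, theta=theta)    # filtered back-projection

err = np.abs(I_rec - I_true).max() / I_true.max()
print(f"peak reconstruction error: {100 * err:.1f}% of the spot maximum")
```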
Phase retrieval
Phase retrieval is important for understanding the generation mechanism of optical superoscillations, as gigantic local wavevectors are known to be closely associated with the formation of superoscillations57. To experimentally obtain the phase distribution of superoscillation optical fields, a monolithic metamaterial interferometer with a superoscillatory Pancharatnam-Berry phase metasurface was proposed. The interferometer consists of rows of subwavelength metallic slits oriented in either the +45° or −45° direction, which results in a 0 or π phase shift for cross-polarized transmission waves. Such a metasurface has a negligible effect on the phase distribution of the transmitted waves that have the same polarization as the linearly polarized incident waves; therefore, this copolarized transmission wave can be used as a reference for interference with the phase-modulated cross-polarized wave. A one-dimensional superoscillation focusing lens was designed by utilizing the BP, i.e., 0 and π, modulation mechanism of the metasurface; therefore, the reference and superoscillation waves were simultaneously created through such a metasurface interferometer without any moving parts. Phase reconstruction was performed with polarization-dependent intensity measurements for incident waves with left- and right-handed circular polarizations (CPs) and linear polarizations (LPs) oriented at angles of 45° and −45°. The interference patterns were collected by a conventional microscope with a high NA and a large magnification. Four characteristic features of the superoscillatory field, namely, a highly localized electric field, phase singularities, gigantic local wavevectors and energy backflow, were extracted via this technique, with a resolution of λ/100. However, this phase mapping approach is difficult to extend to more complicated cases. Moreover, the full retrieval of the phase and amplitude for individual polarization components remains challenging.
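The principle of the interferometric retrieval can be illustrated with a generic four-step phase-shifting sketch (an analogue of ours; in the cited work the four phase steps are realized by projecting onto left- and right-handed circular and ±45° linear polarization states rather than by an explicit delay):

```python
# Sketch: four-step phase-shifting retrieval of a toy superoscillatory field.
import numpy as np

x = np.linspace(-3, 3, 2001)
U = (np.cos(x) + 2j * np.sin(x)) ** 6      # toy field, band-limited to |k| <= 6
R = 0.5 * np.max(np.abs(U))                # flat co-propagating reference

deltas = [0, np.pi / 2, np.pi, 3 * np.pi / 2]
I = [np.abs(R * np.exp(1j * d) + U) ** 2 for d in deltas]

phase = np.arctan2(I[1] - I[3], I[0] - I[2])     # retrieved phase of U
k_local = np.gradient(np.unwrap(phase), x)       # local wavevector map

err = np.abs(np.angle(U * np.exp(-1j * phase))).max()
print(f"max phase retrieval error: {err:.2e} rad")
print(f"max |k_local|: {np.max(np.abs(k_local)):.1f} (band limit: 6.0)")
```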
Applications
Superoscillation imaging
The imaging properties of an SOL that is based on a Penrose nanohole array have been examined with a single point source and multiple point sources both theoretically and experimentally for coherent and incoherent illumination167. Incoherent illumination resulted in a higher resolution of 450 nm at a wavelength of 660 nm. The study also suggested that the 200 × 200 μm2 nanohole array can achieve an imaging resolution that is comparable to that of a conventional lens with a high NA, while the imaging resolution remains larger than the diffraction limit. Theoretical research was also carried out to investigate the imaging performance of a 1D SOL designed with band-limited functions150. Numerical simulations showed that two 0.04λ-wide slits that were separated by 0.24λ were imaged on the plane 20λ from the lens. Even in the presence of the very large sideband in the lens PSF, according to the Rayleigh criterion163, the two slits can be well resolved within the lens FOV under incoherent illumination. The imaging capabilities of an optical needle SOL based on binary concentric rings were experimentally studied168. In the experiment, the object and image distances were 39λ and 14λ, respectively. The results showed a PSF with an FWHM of 0.38λ for on-axis imaging, which is 24% smaller than the diffraction limit, and an effective NA of 1.31 in air at a wavelength of 640 nm. For off-axis imaging with an object displacement of 1.56λ, the measured image spot size was 0.48λ, which is smaller than the diffraction limit. Superoscillation imaging was also used to improve the resolution of an optical telescope system through a carefully designed pupil filter, which allows superresolution imaging within a small local FOV that is restricted by the large neighbouring sideband169. Generally, direct superoscillation imaging is limited by the very large sideband that surrounds the superoscillatory region in the PSF. To suppress the sideband and increase the sensitivity of the superresolution imaging system, for the 1D case, the concept of selective superoscillation was suggested for producing a fast oscillation region of a superoscillation waveform while avoiding high-energy content170. In the case of 2D imaging, a new class of superoscillation functions was proposed for designing a superresolution PSF with a subdiffraction spot surrounded by superoscillatory ripples of low intensity. In this way, the sideband energy is significantly suppressed, which allows one to expand the image area and therefore improve the imaging sensitivity. The corresponding experimental demonstration was conducted with a 4F imaging system171. Recently, broadband superresolution imaging was achieved with an improved resolution of 0.64 times the Rayleigh criterion by using a 4F system that was composed of four conventional achromatic lenses and a broadband superoscillatory pupil filter, which consists of grating-shape metallic metasurfaces for BP modulation172. Recent superresolution imaging results that were obtained by utilizing superoscillatory PSFs are summarized in Fig. 12.
Superresolution microscopy
Superresolution microscopy based on an SOL
One of the most promising applications of superoscillation is label-free far-field superresolution microscopy. Direct imaging with SOLs is restricted by the limited FOV. This problem can be overcome by the confocal configuration in which a superoscillation hot spot is used as a point illumination source. The sideband effect can be significantly suppressed because the system PSF is the product of the PSFs of the SOL and objective lens. Figure 13 shows cases of superresolution microscopes that are based on SOLs.
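A short numerical sketch (model profiles chosen to mimic the situation described below for ref. 62, not measured data) quantifies this suppression: in this toy model, multiplying a superoscillatory illumination PSF that has an equal-intensity sidelobe ring by the Airy detection PSF of a high-NA objective reduces the sidelobe ratio by roughly an order of magnitude:

```python
# Sketch: sideband suppression by the confocal PSF product.
import numpy as np
from scipy.special import j1

lam, na = 0.64, 1.4                  # wavelength (um), objective NA
r = np.linspace(1e-6, 1.5, 3000)     # radius (um)

# Model illumination PSF: 185 nm spot plus an equal-intensity ring 200 nm away
ill = np.exp(-(r / 0.111) ** 2) + np.exp(-((r - 0.20) / 0.04) ** 2)

v = 2 * np.pi * na * r / lam         # detection PSF: Airy pattern
det = (2 * j1(v) / v) ** 2

system = ill * det                   # confocal system PSF
for name, p in [("illumination PSF", ill), ("confocal system PSF", system)]:
    main, side = p[r < 0.15].max(), p[r >= 0.15].max()
    print(f"{name:20s} sidelobe ratio: {side / main:.2f}")
```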
The first demonstration62 of this technique was performed with a conventional liquid immersion microscope with an NA of 1.4. At a wavelength of 640 nm, the SOL generated a hot spot with an FWHM of 185 nm at a distance of 10.3 μm. The neighbouring sidelobe had the same intensity as the hot spot and the separation between them was approximately 200 nm. By scanning the sample using a 2D nanotranslation stage, a superresolution image of nanoholes fabricated on a 100-nm-thick titanium film was obtained. Two 210-nm-diameter nanoholes spaced 41 nm apart were nearly resolved, as shown in Fig. 13c.
To further increase the working distance and reduce the sidelobe intensity, a critical superresolution lens was developed121 with a focal length of 135λ, a transverse size of 0.407λ and a DOF of 12λ at a wavelength of 405 nm. Similarly, two nanoholes with diameters of 163 nm and a spacing of 65 nm could be well resolved using the confocal configuration with a conventional microscope with an NA of 0.7, as shown in Fig. 13f. The advantage of using a long-DOF SOL is that images of structures of different heights can be simultaneously acquired in a single 2D scan. Although the DOF is longer compared to the previous work, it is still too short to penetrate samples in most practical applications.
Self-reconstructing beams, such as Bessel beams, are promising candidates for deep penetration microscopy. Theoretical and experimental studies have been carried out to study the microscopy technique with Bessel beams, which show unexpected robustness against deflection at object surfaces. This approach not only reduces scattering artefacts but also increases the image quality. Moreover, it allows penetration deep into dense media173. Recently, based on a diffractionless SOL129,130, a superresolution image of a subwavelength metallic grating with a linewidth of 0.28λ was obtained with a visibility that exceeded 20% at a working wavelength of 632.8 nm, as shown in Fig. 13i. Unlike previously reported cases, the diffractionless superoscillation beam can penetrate a 175-μm glass plate and realize superoscillation illumination on the metallic grating that was fabricated on the top surface of the glass plate. Because of the excellent penetration and superoscillatory transverse size, this method is suitable for practical use. More importantly, it might enable label-free 3D superresolution imaging of samples.
Superresolution microscopy based on an SLM
An SOL with a fixed amplitude or phase mask can only be applied to a normally incident wave. Without wide-angle superoscillation focusing performance, superresolution microscopes based on SOLs must operate in raster scan mode, and their imaging speed is restricted by the scanning speed of the piezo translation stages, with which achieving real-time imaging is difficult. SLMs provide a flexible way to design a reconfigurable superoscillatory focus with real-time speed. In a 4F imaging system with an NA of 0.00864, a reflective SLM is located on the Fourier plane of the 4F system, where it acts as a spatial filter to form a superoscillatory PSF for a working wavelength of 632.8 nm. Although the superoscillatory spot is surrounded by large sidelobes, superresolution imaging can be realized with a resolution of 72% of the diffraction limit (36.7 μm) for objects located within the FOV of 150 μm174. Recently, real-time subwavelength superresolution imaging based on two SLMs was reported by Rogers et al. of the University of Southampton. The system was a modification of a standard confocal microscope in which two SLMs were applied to shape the laser beam as it entered the microscope. In addition, polarization measurements were taken to form a polarization-contrast superresolution image, which revealed new levels of information in biological samples. In addition to label-free microscopy, superoscillation focal spots have been utilized to enhance the spatial resolution of confocal laser scanning microscopy175.
Other applications
To achieve high-density data storage, a superoscillation focusing device with a long DOF was suggested and experimentally explored as an alternative technique, which can focus light into sub-50 nm spots with a DOF of 5λ in magnetic recording material at a wavelength of 473 nm176. In addition to applications in the spatial domain, the concept of superoscillation can also be applied to the time domain for optical pulse shaping to break the temporal resolution limit. Temporal features with a duration of 87 ± 5 femtoseconds, which is three times shorter than that of a transform-limited Gaussian pulse, have been experimentally demonstrated with a visibility of 30%177. To achieve arbitrarily short features, generic methods have been proposed and demonstrated for synthesizing femtosecond pulses based on Gaussian, Airy and Hermite-Gauss functions178. Such superoscillatory pulses might be promising for ultrafast temporal measurements. The concept of superoscillation can also be extended to its complementary counterpart, suboscillation, in which a local arbitrarily low frequency can be realized with a lower-bound-limited function, which can be used for optical super defocusing179.
Conclusions
In summary, recent developments in the area of optical superoscillations have shown great potential for superresolution optical focusing and imaging. Optical devices that realize subdiffraction and subwavelength focusing of optical waves with different polarizations have been demonstrated. Vector optical fields with special diffraction patterns, including subdiffraction diffractionless beams of different polarizations and 3D hollow spots, have been experimentally demonstrated. The application of superoscillation in telescopes has shown improved resolution beyond the traditional resolution limit. Microscopes based on superoscillatory devices, including point superoscillation focusing lenses, long-DOF supercritical lenses and superoscillation diffractionless lenses, have been achieved with the advantages of noncontact, far-field and label-free operation. In addition, superoscillation focusing has been applied to improve the resolution of superresolution microscopy based on fluorescent labels. Ultrahigh-density optical data storage was also demonstrated with superoscillatory optical needles. Applying metasurfaces to superoscillatory optical devices enables flexible control of the phase and the polarization to achieve complex superoscillatory optical fields for specific applications. Many challenging issues remain, the most important of which is the efficiency: if the spot size is far below the superoscillation criterion, the efficiency decreases exponentially. In all reported cases, the focused waves within the FOV constitute only a few per cent of the total incident energy, and most of the optical energy goes into the sidebands surrounding the superoscillation spot. One might improve the superoscillation focusing efficiency by using multiple-phase modulation, which can be realized using multiple-step dielectric layers or phase-modulation metasurfaces. Further reducing the superoscillation spot size requires ultrafine modulation of the light wave front, which is limited by the size of the subwavelength structures for phase modulation. One possible solution is to adopt a planar SOL group instead of a single lens or to employ a specially designed curved-surface refractive lens. The large sidebands seem to be unavoidable if the spot size is much smaller than the superoscillation criterion, which greatly limits the FOV in superresolution imaging based on optical superoscillation. Nevertheless, in the application of superresolution microscopy, the low efficiency is not a practical problem, since commercial photodetectors are sufficiently sensitive for the low optical intensity of the superoscillation fields. The issue of large sidebands can also be alleviated by using a confocal configuration in a superresolution microscope system based on carefully designed point spread functions of the illumination and collection lenses.
For SOL design, especially in cases where the feature size is much smaller than the superoscillation criterion, precise calculation of the diffraction field for realistic SOL structures is in high demand. Since fine superoscillatory features result from the precise interference of light waves, any discrepancy between the diffraction calculation and the real light propagation might lead to design failure. Moreover, due to the computational consumption, the aperture of SOLs remains limited to several millimetres. The computational complexity becomes much higher in cases of noncircular symmetry, which involve 2D integrals; a possible solution is to use GPU-based computation and multithreading to accelerate the diffraction calculation. To date, most of the reported SOLs are applicable to only a single wavelength. Although SOLs for several isolated wavelengths have been demonstrated either experimentally or theoretically, true broadband achromatic SOLs remain in their infancy. A hybrid lens that integrates both refractive and diffractive elements may provide a promising paradigm, which would even benefit superoscillation focusing of ultrashort optical pulses. Another challenge for SOLs lies in their small-angle operation; wider-angle performance might be achieved by carefully designing quasi-continuous phase modulation, which could enable fast superresolution imaging based on superoscillation. Optical superoscillation provides a new route for realizing superresolution and overcoming the diffraction limit, and it has demonstrated potential in various applications. Further investigation is necessary for understanding the physics behind optical superoscillation phenomena and shedding light on more powerful imaging systems, which may significantly promote the development of optical superoscillatory devices.
Acknowledgements
Gang Chen acknowledges financial support from China National Natural Science Foundation (61575031); National Key Basic Research and Development Program of China (Program 973) (2013CBA01700); Fundamental Research Funds for the Central Universities (106112016CDJXZ238826, 106112016CDJZR125503); and National Key Research and Development Program of China (2016YFED0125200, 2016YFC0101100). C.-W.Q. acknowledges financial support from the National Research Foundation, Prime Minister’s Office, Singapore under its Competitive Research Program (CRP award NRF-CRP15-2015-03).
Author information
G.C., Z.W. and C.-W.Q. prepared the first version of the paper, and C.-W.Q. made the final revisions.
Correspondence to Cheng-Wei Qiu.
Ethics declarations
Conflict of interest
The authors declare that they have no conflict of interest.
|
2019-07-20 04:57:49
|
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 2, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.4073312282562256, "perplexity": 3217.3418483329324}, "config": {"markdown_headings": false, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2019-30/segments/1563195526446.61/warc/CC-MAIN-20190720045157-20190720071157-00382.warc.gz"}
|
https://socratic.org/questions/a-band-s-first-song-is-4-minutes-12-seconds-long-and-has-462-beats-the-band-want
|
# A band's first song is 4 minutes, 12 seconds long and has 462 beats. The band wants to have a second song that is 3 minutes, 54 seconds long with the same beats per minute as the first song. How many beats should there be in the second song?
Mar 25, 2018
$429$ beats
#### Explanation:
The number of beats is proportional to the time of the song.
Convert the time to seconds to avoid recurring decimals.
$4$ minutes $12$ seconds = $4 \times 60 + 12 = 252$ seconds
$3$ minutes $54$ seconds = $3 \times 60 + 54 = 234$ seconds
Set up a direct proportion using equivalent fractions:
$\frac{x}{234} = \frac{462}{252} \qquad \frac{\leftarrow \text{beats}}{\leftarrow \text{seconds}}$
$x = \frac{234 \times 462}{252}$
$x = 429$ beats
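(An added sanity check, not part of the original answer: the shared tempo is $462 \div 4.2 = 110$ beats per minute, and $110 \times 3.9 = 429$ beats, which agrees.)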
|
2020-08-08 15:32:49
|
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 10, "mathjax_inline_tex": 1, "mathjax_display_tex": 0, "mathjax_asciimath": 1, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.4099687933921814, "perplexity": 2703.2508927113804}, "config": {"markdown_headings": true, "markdown_code": false, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2020-34/segments/1596439737883.59/warc/CC-MAIN-20200808135620-20200808165620-00277.warc.gz"}
|
https://prefix.zazuko.com/unit:N
|
# Resolve RDF Terms
unit:N
http://qudt.org/vocab/unit/N
### Namespace
http://qudt.org/vocab/unit/
### Recommended prefix
unit:
lang:en
Newton
lang:""
The "Newton" is the SI unit of force. A force of one newton will accelerate a mass of one kilogram at the rate of one meter per second per second. The newton is named for Isaac Newton (1642-1727), the British mathematician, physicist, and natural philosopher. He was the first person to understand clearly the relationship between force (F), mass (m), and acceleration (a) expressed by the formula $$F = m \cdot a$$.
lang:""
http://dbpedia.org/resource/Newton
lang:""
0112/2///62720#UAA235
lang:""
http://en.wikipedia.org/wiki/Newton?oldid=488427661
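To resolve such a term programmatically, here is a minimal sketch using Python's rdflib (an added example; only the namespace URI above comes from this page, and the dereferencing step assumes network access):

from rdflib import Graph, Namespace
from rdflib.namespace import RDFS

UNIT = Namespace("http://qudt.org/vocab/unit/")   # recommended prefix: unit:
newton = UNIT.N                                   # expands unit:N to the full IRI
print(newton)                                     # http://qudt.org/vocab/unit/N

# Optionally dereference the vocabulary and read the label
# (network access assumed; a format= hint may be needed depending on the server)
g = Graph()
g.parse(str(newton))
for label in g.objects(newton, RDFS.label):
    print(label)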
|
2022-08-19 01:59:10
|
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 1, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.9369634985923767, "perplexity": 1005.1232732035908}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2022-33/segments/1659882573540.20/warc/CC-MAIN-20220819005802-20220819035802-00290.warc.gz"}
|
http://informationtransfereconomics.blogspot.com/2013/10/the-phillips-curve.html
|
## Wednesday, October 16, 2013
### The Phillips curve
Earlier this year John Quiggin made the bold claim that macroeconomics went wrong in 1958 after the discovery of the Phillips curve. I've been working over the past couple months trying to figure out how the Phillips curve comes about in the information transfer framework and I basically come to the same conclusion. Here is my bold claim:
The Phillips curve is a real but barely useful regularity in the data that has been completely misinterpreted.
OK, let's begin. The curve is generally drawn as a downward sloping curve in unemployment rate-inflation rate space. In the information transfer model, this immediately says that the information source is aggregate demand (NGDP), the information destination is the supply of unemployed people (U, e.g. this metric -- and n.b. here and throughout U is the total number of unemployed, not the unemployment rate), and the price level P is detecting signals from the demand to the supply. In my notation, P:NGDP→U. Therefore we can write
$$\text{(1) } P = \frac{1}{\kappa} \frac{NGDP}{U}$$
We can do a fit to the data (price level in green, model in blue)
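(For concreteness, a minimal sketch of what this one-parameter fit amounts to; the arrays below are illustrative stand-ins, not the actual FRED series.)

import numpy as np

# Toy stand-ins for the NGDP, unemployed-persons (U), and price-level (P) series
# (in practice these come from FRED; the numbers here are purely illustrative)
ngdp = np.array([5.0e3, 6.0e3, 7.5e3, 9.0e3, 1.1e4])   # billions of dollars
U    = np.array([7.0e6, 6.5e6, 7.2e6, 8.0e6, 7.5e6])   # persons
P    = np.array([60.0, 68.0, 78.0, 85.0, 100.0])       # price index

# P = NGDP/(kappa U)  =>  log kappa = mean(log(NGDP/U) - log P)
kappa = np.exp(np.mean(np.log(ngdp / U) - np.log(P)))
P_model = ngdp / (kappa * U)
print(f"kappa ~ {kappa:.3g}")
print("model price level:", np.round(P_model, 1))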
This fit works as well as the fit to the interest rate in the IS-LM model, so it gives some hint that we may be able to extract information from it. One interesting thing to consider is that the price level curve could define a "natural rate" of unemployment (actually more of a mean level of unemployment, blue):
The graph divides the number of unemployed by the size of the civilian labor force (L) to get the unemployment rate. Here is the graph of deviations from the blue curve:
I've excised the recessions in the data points (dots) in the graph above. It becomes clear that most of the data points and nearly all of the non-recession data points represent an unemployment rate that is falling. This is a major point in understanding the Phillips curve in the information transfer framework. Of course, to get to our final destination requires a little math. Start with the price level equation (1) above and take the logarithmic derivative:
$$\frac{d}{dt} \log P = \frac{d}{dt}\log \frac{1}{\kappa} \frac{NGDP}{U}$$
Expanding that out a little
$$\frac{d}{dt} \log P = \frac{d}{dt}\log NGDP -\frac{d}{dt}\log U -\frac{d}{dt}\log \kappa$$
Identifying the inflation rate $\pi$ (borrowing from the notation in the wikipedia entry) and the NGDP growth rate $n$, and taking $\kappa$ to be constant ($\simeq 0.6$ by the way), and fiddling with the $U$ term:
$$\pi = n -\frac{1}{U}\frac{d}{dt}U$$
If we expand around the number of unemployed at the natural rate $U^*$ (or really any fixed level of unemployment) and take $dU/dt = U'$, we can write:
$$\pi = n -\frac{U'}{U^*} + \frac{U'}{U^{*2}}(U-U^*)$$
Or in terms of the unemployment rate $u = U/L$ where $L$ is the civilian labor force:
$$\pi = n -\frac{U'}{U^*} + \frac{U' L}{U^{*2}}(u-u^*)$$
Where we make the notational identifications $n -U'/U^* = \pi^e + \nu$ and $B = U' L/U^{*2}$ we finally obtain the new classical form of the Phillips curve:
$$\pi = \pi^e + \nu + B (u-u^*)$$
... except there's a problem: the sign of the $B$ term is "wrong". This is where the observation in the previous graph comes in. Nearly all the data has $U' \lt 0$ so in most descriptions of the data we can take $b = |U' L/U^{*2}|$ positive and write
$$\pi = \pi^e + \nu - b (u-u^*)$$
The regularities of the Phillips curve essentially result from the fact that recessions tend to cause unemployment to shoot up quickly and then drift back down slowly over a longer period. With this knowledge we can see what the data looks like when excluding data where $U' > 0$:
Graphs of the Phillips curve tend to be broken up into "regimes" (from the wikipedia article we have 1955-1971, 1974-1984, 1985-1992 and 2000-2013); we can see how this segmentation approximates the behavior of the parameters $b$ and $\pi^e + \nu$:
Basically, the Phillips curve "regimes" represent relatively constant segments of the parameter values. Here are the graphs of the resulting Phillips curves for the different "regimes":
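(The regime plots themselves are images not reproduced here; the following is a rough sketch, added for concreteness, of how the two parameters could be estimated over one regime. All numbers are illustrative.)

import numpy as np

# Illustrative post-recession stretch where U falls slowly (not actual data)
U  = np.array([7.0e6, 6.6e6, 6.3e6, 6.1e6, 6.0e6])  # unemployed persons
L  = 1.0e8                                           # civilian labor force
n  = 0.05                                            # NGDP growth rate, assumed constant
dt = 1.0                                             # years between samples

Uprime = np.gradient(U, dt)          # dU/dt < 0 along this stretch
Ustar  = U.mean()                    # stand-in for the regime's fixed level U*
b         = abs(Uprime.mean() * L / Ustar**2)   # Phillips-curve slope
intercept = n - Uprime.mean() / Ustar           # pi_e + nu
print(f"b ~ {b:.3f}, pi_e + nu ~ {intercept:.3f}")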
This allows us to posit a reason for the failure to find microfoundations for the Phillips curve. It is a property of the unemployment rate (quick rise, slow fall) that is only marginally connected to inflation (the slow fall in unemployment occurs during a recovery hence during a temporary increase of the inflation rate from a low level brought on by the recession). The real nugget of statistical regularity is that a recession causes unemployment to rise and inflation to fall with the Phillips curve describing the subsequent return to normal (unemployment to fall and inflation to rise). Or another way, the Phillips curve is just mean reversion. And mean reversion doesn't really need microfoundations, does it?
In any case, the Phillips curve is dependent on the dominance of data where $dU/dt < 0$ after recessions.
1. Note that in the graph of $\pi^e + \nu = n - U'/U$ the restriction to $U' < 0$ selects all the positive values.
1. And by this I mean that it didn't have to be this way; it just worked out.
2. When I said "expand around the number of unemployed ... " above I should have said that it is a Taylor expansion and that I dropped terms $\sim o(U^2)$ and higher (the equality sign should be $\approx$).
3. In this post I talk about the stability of the Phillips curve:
http://informationtransfereconomics.blogspot.com/2013/10/the-1970s.html
The Phillips curve is a relatively stable feature of the price level and unemployment rate, but it's not necessarily causal ... if microfoundations concentrated on the fact that unemployment shoots up quickly at the onset of recessions but then falls slowly, they could be successful. It would likely stem from myopic loss aversion (quick, layoffs!) with a cautious return to hiring.
1. Or, couldn't microfoundations ignore the Phillips curve for modelling inflation and simply observe it as a loose relationship in the model simulations?
2. I would say that's probably how it should be interpreted ... But it's a very difficult relationship to tease out of the data ...
3. E.g.
http://informationtransfereconomics.blogspot.com/2014/10/updated-non-existent-3d-phillips-curve.html
4. Of interest:
http://www.bruegel.org/nc/blog/detail/article/1210-blogs-review-updating-the-phillips-curve/
|
2017-08-20 17:05:00
|
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 1, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.8448410034179688, "perplexity": 782.5775989731933}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2017-34/segments/1502886106865.74/warc/CC-MAIN-20170820170023-20170820190023-00285.warc.gz"}
|
http://openstudy.com/updates/523eb59ee4b0bf40a6aee283
|
## moongazer: If y^3 = at^2, then what is (d^2 y)/dt^2?
1. moongazer: I only got to the part: dy/dt = 2at/3y^2
2. Yttrium: Just derive again, so you can solve the d^2y/dt^2. :))
3. Yttrium: it's like solve for y''
4. amriju: let it remain as 3y^2=2at..differentiate again....using multiplication rule....
5. moongazer: it says the answer is: -2a/9y^2 but I am getting a different answer
6. moongazer: is "a" here constant?
7. Yttrium: Quotient rule, remember?
8. amriju: yes, constant
9. Yttrium: for easier calculation, you can factor out 2a/3 and just derive (t/y^2)
10. moongazer: could you show me the first step for the second derivative? maybe I did something wrong
11. moongazer: i'll try again :)
12. amriju: maybe you need to substitute y as (at^2)^(1/3)
13. moongazer: I got: (6ay^4 - 8a^2 t^2 y)/9y^6. I think I am doing something wrong.
14. moongazer: i'll try again :)
15. Yttrium: $y' = \frac{ 2at }{ 3y^2 }$, right? Therefore, $y' = \frac{ 2a }{ 3 } (\frac{ t }{ y^2 })$, so $y'' = \frac{ 2a }{ 3 } [\frac{ y^2 - 2tyy' }{ y^4 }]$ and $y'' = \frac{ 2a }{ 3 } \frac{ y^2 - 2ty(\frac{ 2a }{ 3 })(\frac{ t }{ y^2 }) }{ y^4 }$. Simplifying this equation we will get $y'' = \frac{ 2a }{ 3 } [ \frac{ 3y^4 -4at^2y }{ 3y^6 }]$
16. Yttrium: [attachment not captured in the extraction]
17. moongazer: That's what I got. I think there is some typo in my answer sheet. Thanks :)
18. myininaya: Well, you could simplify
19. Yttrium: @moongazer, it isn't the answer in simplest form. Maybe you can continue this and let us verify if we get the same answers.
20. moongazer: @myininaya could you simplify it further? @Yttrium yes, I understood your solution
21. myininaya: Could I? Yes. Can you?
22. myininaya: Eyeball the numerator and the denominator. You should see they share a common factor.
23. Yttrium: For our class, the definition of simplest form is that it is factored out completely. Just factor out some stuff and do cancellation.
24. moongazer: 2a(3y^3 - 4at^2) / 9y^5
25. moongazer: that's what I got
26. myininaya: You can also choose to write at^2 as y^3
27. myininaya: Recall our initial equation: y^3 = at^2
28. myininaya: I see at^2
29. myininaya: Replace it with y^3, then combine like terms
30. Yttrium: @moongazer, did you do what myininaya told you?
31. myininaya: @moongazer I'm leaving for tonight but I think Yttrium is still here to help. Goodnight you guys. I think you guys got it from here for sure. :)
32. moongazer: WOW! Thank you very much to both of you. I failed to notice that until you said it. Thanks :) I now got the answer. :)
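For completeness, here is the closing step the thread stops just short of (an added note): substituting $at^2 = y^3$ into the simplified form gives $y'' = \frac{2a(3y^3 - 4at^2)}{9y^5} = \frac{2a(3y^3 - 4y^3)}{9y^5} = \frac{-2ay^3}{9y^5} = -\frac{2a}{9y^2}$, which is exactly the answer-sheet value.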
|
2014-09-01 11:50:12
|
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.7360368371009827, "perplexity": 12116.639921252798}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2014-35/segments/1409535917663.12/warc/CC-MAIN-20140901014517-00281-ip-10-180-136-8.ec2.internal.warc.gz"}
|
https://brilliant.org/problems/a-classical-mechanics-problem-by-md-sakir/
|
# A classical mechanics problem by Md Sakir
The escape velocities of Earth and Juventus are 11.2 km/s and 60.02 km/s. The mass of Juventus, $$M_J$$, is $$316.67\,M_E$$. What is the approximate ratio of their radii? ($$R_E/R_J$$)
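One way to set it up (an added sketch, not part of the original problem page): since $$v_{esc} = \sqrt{2GM/R}$$, we have $$R = 2GM/v_{esc}^2$$, so $$\frac{R_E}{R_J} = \frac{M_E}{M_J}\left(\frac{v_J}{v_E}\right)^2 = \frac{1}{316.67}\left(\frac{60.02}{11.2}\right)^2 \approx 0.09$$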
|
2017-10-19 00:17:31
|
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 1, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.7886775135993958, "perplexity": 1998.6127210063587}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2017-43/segments/1508187823168.74/warc/CC-MAIN-20171018233539-20171019013539-00846.warc.gz"}
|
https://competitive-exam.in/questions/discuss/the-molar-excess-gibbs-free-energy
|
# The molar excess Gibbs free energy, $g^E$, for a binary liquid mixture at T and P is given by $g^E/RT = A \, x_1 x_2$, where A is a constant. The corresponding equation for $\ln y_1$, where $y_1$ is the activity coefficient of component 1, is
$A \, x_2^2$
$A \, x_1$
$A \, x_2$
$A \, x_1^2$
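For reference, the standard derivation (added here as a sketch): with $n = n_1 + n_2$, the total excess Gibbs energy is $\frac{n g^E}{RT} = A \frac{n_1 n_2}{n}$, and $\ln y_1 = \left[\frac{\partial}{\partial n_1}\left(\frac{n g^E}{RT}\right)\right]_{n_2} = A \frac{n_2 (n - n_1)}{n^2} = A x_2^2$, i.e., the first option.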
|
2021-05-14 03:28:35
|
{"extraction_info": {"found_math": false, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.8081293702125549, "perplexity": 3185.5493257103376}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2021-21/segments/1620243991737.39/warc/CC-MAIN-20210514025740-20210514055740-00477.warc.gz"}
|
https://iitbrain.com/2178-if-ab-and-c-are-positive-real-numbers-show-thatabcbcacab32.html
|
## Question
If a,b and c are positive real numbers, show that
$\frac{a}{b+c}+\frac{b}{c+a}+\frac{c}{a+b}\ge \frac{3}{2}$
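One standard route (a proof sketch added here, via the AM-HM inequality): let $s = a + b + c$. Then $\sum \frac{a}{b+c} = \sum \frac{s}{b+c} - 3 = s \sum \frac{1}{b+c} - 3$, and AM-HM gives $\sum \frac{1}{b+c} \ge \frac{9}{(b+c)+(c+a)+(a+b)} = \frac{9}{2s}$, so $\sum \frac{a}{b+c} \ge \frac{9}{2} - 3 = \frac{3}{2}$.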
|
2020-12-01 05:19:24
|
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 1, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.17261609435081482, "perplexity": 3235.453821229335}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2020-50/segments/1606141652107.52/warc/CC-MAIN-20201201043603-20201201073603-00661.warc.gz"}
|
https://tex.stackexchange.com/questions/458460/latex-pdf-form-detect-checkbox-changed-event
|
# LaTeX PDF form: Detect CheckBox changed event
I use LaTeX to create a fillable PDF document. How can I detect that a \CheckBox has been clicked?
It seems to react only to onfocus={...}, not to onclicked={...} or onchanged={...}.
Do I need to register an EventListener in an insDLJS environment?
Code:
\documentclass{article}
\usepackage{hyperref}
\begin{document}
\begin{Form}
  % minimal test field (reconstructed; the original \CheckBox line was lost in extraction)
  \CheckBox[name=cb,onfocus={app.alert("focus!");}]{Click me}
\end{Form}
\end{document}
The events have other names; in hpdftex.def I find onfocus, onblur, onmousedown, onmouseup, onenter, onexit. There is also onclick, but it seems to be implemented only for push buttons.
Not every event really makes sense for check boxes; e.g., the mouse events seem to conflict with the checking.
\documentclass{article}
\usepackage{hyperref}
\begin{document}
\begin{Form}
  % example field (reconstructed; the original \CheckBox line was lost in extraction)
  \CheckBox[name=cb,onenter={app.alert("enter!");},onexit={app.alert("exit!");}]{Click me}
\end{Form}
\end{document}
An alternative to hyperref is eforms (acrotex), which has more options (and also more documentation). See e.g. https://tex.stackexchange.com/a/390882/2388
• Thanks for your response. onenter and onexit are already triggered when the cursor hovers over the element (respectively leaves it). Onfocus would be more handy in that situation. I took a look into the eform documentation. There doesn't seem to be an onclick for the \CheckBox element either. Did I overlook something? – MPW Nov 5 '18 at 15:15
• By the way, onmousedown and onmouseup don't work, unfortunately. I'll probably have to solve this with onfocus and onblur, which is only partially satisfying. I'm still wondering if an EventListener code be added manully... – MPW Nov 5 '18 at 15:27
• \checkBox[\AA{\AAMouseDown{\JS{app.alert("Mouse Down!")}}}]{myCheck}{10bp}{10bp}{On} works fine for me with eforms. – Ulrike Fischer Nov 5 '18 at 15:44
The event you are looking for is called validate:
\documentclass{article}
\usepackage{hyperref}
\begin{document}
\begin{Form}
  % minimal example (reconstructed; the original field and closing lines were lost in extraction)
  \CheckBox[name=cb,validate={app.alert("changed!");}]{Click me}
\end{Form}
\end{document}
According to the hyperref documentation, validate contains "JavaScript code to validate the entry", so using it as a general "changed" event might look like a hack, but in reality it generates a /V entry in the form field's additional-actions dictionary. While this entry can be used for validation, that's only one possible use. It is specified for PDF 1.7 as a general "onchange" event.
The advantage of using validate instead of e.g. onmousedown is that validate still works if a user uses another input method, for example selecting the field using the keyboard instead of clicking it with the mouse.
|
2021-03-04 22:23:33
|
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 1, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.6422460675239563, "perplexity": 4123.4441127835535}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2021-10/segments/1614178369523.73/warc/CC-MAIN-20210304205238-20210304235238-00028.warc.gz"}
|
https://web.ma.utexas.edu/users/m408s/CurrentWeb/LM7-3-5.php
|
### Examples
In this video, we work three examples, one with $x=a \tan(\theta)$, one with $x = a\sin(\theta)$, and one with $x = a \sec(\theta)$. Worked out solutions are written below the video.
#### Examples from the video.
DO: After watching the video, write down and work these examples on your own, slowly, thinking of the whys and hows of each step.
Example 1: $\int \bigl(4+x^2\bigr)^{-3/2}\, dx$
DO: Work through before looking ahead.
Solution 1: $\displaystyle\int \bigl(4+x^2\bigr)^{-3/2}\, dx \overset{\fbox{$ \,\, x\,=\,2\tan\theta\\dx\,=\,2\sec^2\theta\,d\theta$}}{=}\int \bigl(4 + 4\tan^2\theta\bigr)^{-3/2} 2 \sec^2\theta \,d\theta=\int \bigl(4 \sec^2\theta\bigr)^{-3/2} 2 \sec^2(\theta) \,d\theta$
$\displaystyle =4^{-3/2}\cdot 2\int\sec^{-3}\theta\sec^2\theta\,d\theta=\frac{2}{8}\int\sec^{-1}\theta\,d\theta=\frac{1}{4}\int \frac{d\theta}{\sec\theta} = \frac{1}{4}\int \cos\theta\, d\theta = \frac{1}{4}\sin\theta+C.$
Consider our answer above. In order to convert back into terms of $x$, we must figure out what $\sin(\theta)$ is in terms of $x$. By rewriting our original substitution we see that $\tfrac x 2=\tan\theta$. Use this to draw a right triangle, with opposite side $x$ and adjacent side $a=2$. The hypotenuse is then $\sqrt{a^2+x^2}=\sqrt{4+x^2}$. We need to find $\sin\theta$ in terms of $x$, and we see from the triangle that $\sin\theta=\frac{x}{\sqrt{x^2+4}}$.
So $\displaystyle\int\left(4+x^2\right)^{-3/2}\,dx=\frac{1}{4}\sin\theta+C=\frac{x}{4\sqrt{x^2+4}}+C$.
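As a quick cross-check (an addition, not part of the original page), a computer algebra system should reproduce Example 1 up to the constant of integration:

import sympy as sp

x = sp.symbols('x')
F = sp.integrate((4 + x**2) ** sp.Rational(-3, 2), x)
print(F)  # expected (up to algebraic form): x/(4*sqrt(x**2 + 4))
# verify by differentiating back; should print 0
print(sp.simplify(sp.diff(F, x) - (4 + x**2) ** sp.Rational(-3, 2)))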
---------------------------------------------------------------------------
Example 2: $\int \sqrt{9-x^2}\, dx$
DO: Work through before looking ahead.
Solution 2: $\displaystyle\int \sqrt{9-x^2}\, dx\overset{ \fbox{$ \,\,x\,=\,3 \sin\theta\\dx\,=\,3\cos\theta\,d\theta$} }{=} \int \left(\sqrt{9-9\sin^2\theta}\right)3\cos\theta\,d\theta=3\cdot 3\int\sqrt{\cos^2\theta}\cos\theta \,d\theta=9\int\cos^2\theta \,d\theta$
$\displaystyle\quad =9\int\frac{1+\cos(2\theta)}{2}\,d\theta=\frac{9}{2}\int(1+\cos(2\theta))\,d\theta =\frac{9}{2} \left(\theta+\frac{\sin(2\theta)}{2}\right)+C=\frac{9}{2}\bigl(\theta+\sin\theta\cos\theta\bigr)+C$

Here we have used the methods of the last learning module to evaluate the trig integral, including the handy trig identities for $\cos^2\theta$ and $\sin(2\theta)$. (You need to know these by heart.) We look at the terms in our final answer above. We use the triangle to convert $\sin\theta\cos\theta$ back into terms of $x$. Finally, we must write $\theta$ in terms of $x$. We use our original substitution: $\frac{x}{3}=\sin\theta$ gives us $\sin^{-1}(\tfrac x 3)=\theta$.

So we have $\displaystyle\int \sqrt{9-x^2}\, dx=\frac{9}{2}\left( \sin^{-1} \left (\frac{x}{3} \right ) + \frac{x}{3}\frac{\sqrt{9-x^2}}{3}\right)+ C=\frac{9}{2} \sin^{-1} \left (\frac{x}{3} \right ) + \frac{x \,\sqrt{9-x^2}}{2} + C$

---------------------------------------------------------------------------

Example 3: $\displaystyle\int \frac{dx}{\sqrt{4x^2-1}}$

DO: Work through before looking ahead.

Solution 3: $\displaystyle\int \frac{dx}{\sqrt{4x^2-1}} \overset{ \fbox{$2x\,=\, \sec\theta\\2dx\,=\,\sec\theta\tan\theta\,d\theta$} }{=} \int \frac{\frac{1}{2}\sec\theta\tan\theta\,d\theta}{\sqrt{\sec^2\theta-1}} \overset{ \fbox{$\sec^2\theta-1\,=\,\tan^2\theta$} }{=} \frac{1}{2}\int \frac{\sec\theta\tan\theta\,d\theta}{\sqrt{\tan^2\theta}} = \frac{1}{2}\int \frac{\sec\theta\tan\theta\,d\theta}{\tan\theta}=\frac{1}{2}\int\sec\theta\,d\theta=\frac{1}{2} \ln\bigl\lvert\sec\theta+\tan\theta\bigr\rvert +C.$

We now convert to terms of $x$: $\sec\theta=2x$, so $\cos\theta=\frac{1}{2x}$. The triangle could therefore have adjacent side of 1 and hypotenuse of $2x$, making the opposite side $\sqrt{4x^2-1}$. We can also use adjacent of $\frac{1}{2}$ and hypotenuse of $x$. (Why?) This gives the opposite side the value of $\sqrt{x^2-\frac{1}{4}}$, as in the diagram. Either way is fine. Looking at our values above, we only need to deal with $\tan \theta$, since we know $\sec\theta=2x$. From the triangle, $\tan\theta=\frac{\sqrt{x^2-1/4}}{1/2}$. If we had used the other triangle, we would get $\tan\theta=\frac{\sqrt{4x^2-1}}{1}$ -- are these the same values?

So $\displaystyle\int \frac{dx}{\sqrt{4x^2-1}} =\frac{1}{2} \ln \bigl\lvert \,2x + \sqrt{4x^2-1}\,\bigr\rvert + C.$
|
2022-09-27 16:54:29
|
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.9183413982391357, "perplexity": 5810.708733605929}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2022-40/segments/1664030335054.79/warc/CC-MAIN-20220927162620-20220927192620-00319.warc.gz"}
|
https://computergraphics.stackexchange.com/questions/2405/16bit-half-float-linear-hdr-images-as-diffuse-albedo-textures/2423
|
16bit half-float linear HDR images as (diffuse/albedo) textures?
If all your textures are 8-bit LDR images, like JPEGs, couldn't that potentially cause conflicts with exposure control/tone mapping when rendering? That is, if you adjust the rendering exposure of your image, that should expose detail in the textures that isn't really there, since it has been clamped out by the low dynamic range. So wouldn't it make sense to also have the textures as HDR images, saved as .exr, in linear colour space with 16-bit half-float to get a good colour representation (32-bit "full" float might be overkill)? Having more detailed and correct colour values might also, I figure, have an effect on GI and how colour bleed is calculated.
Or is it simply not necessary, since the end result of the rendering we want is probably going to be similar to the exposure level of the texture when it was photographed anyway? And since cameras mostly shoot in 12-14 bit, you would have to take multiple exposures of the texture and do all that extra work to piece them together into one HDRI.
Edit: To clarify, I'm mostly interested in this from a photorealistic rendering point of view, with ray trace renderers (like mental ray, V-Ray, Arnold, etc.) with full light simulations and global illumination, rather than for real-time game engines.
In film production, we almost never use 8-bit textures for color/albedo, because of banding, etc. (JPEG is especially problematic since by spec, it's sRGB rather than linear values.) We either use 'half' (16 bit float) or 16-bit unsigned integer values for color/albedo textures.
• Wow, thanks @LarryGritz! Considering you work for Sony Pictures Imageworks, I'll take you as a very reliable source! :) But how do you capture these textures? Most cameras only shoot in 14-bit RAW files; do you have special cameras with 16-bit linear sensors? Or do you take multiple exposures for textures, just like one does for HDRI image-based lighting? Or do you simply capture with 14-bit camera RAW and save it as "16bit"? Oh, and what file format do you use, .tiff (.tx, .tex) or .exr? Thanks again for your input!! :) – Kristoffer Helander May 10 '16 at 19:27
• @KristofferHelander : Converting from 14 bit capture to a 16 bit representation of the 0-1 range is easily achieved by multiplication. But most of our textures are painted, not photographed -- sometimes they are painted directly in a 16 bit format, sometimes they are painted in sRGB and then converted to 16 bit when "linearized" for use as a texture. There's no need for HDR for albedo textures. – Larry Gritz May 30 '16 at 17:31
• @KristofferHelander : For albedo textures, we tend to use TIFF with 16 bit integer data (what we call .tx is just TIFF format but tiled and with MIP-map multiresolution stored as multiple subimages within the TIFF file). For true HDR data, like environment captures, we use OpenEXR. Renderer output also tends to be OpenEXR. – Larry Gritz May 30 '16 at 17:34
Yes, it's possible in some extreme cases for HDR lighting and tonemapping to expose banding issues in color textures. In those cases, having a higher bit depth for the textures could be useful. However, in my experience the majority of materials and ordinary lighting situations don't exhibit this problem, and most textures in a typical game are fine in 8-bit (or even less—games often use BC1 compression, which reduces them to 5-6-5-bit).
People use HDR render targets because a single scene can contain vastly different magnitudes of brightness, such as a dark room in which you can see through a window to a sunlit exterior 10–100 times brighter than the room. However, color textures don't have such a wide range of magnitudes. They represent reflectances, which are inherently in the [0, 1] range, and in practice few everyday materials are lower than about 2–5% reflectance. So an 8-bit image (with gamma encoding) can usually represent diffuse and specular colors with enough precision.
It's true that the combination of a quite dark texture with very bright lighting or an extremely overexposed camera setting can show banding in the final frame, but that would be a more unusual case.
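To make the banding point concrete, here is a small illustrative sketch (my own, with hypothetical numbers, not tied to any particular engine) that quantizes a dark reflectance ramp as 8-bit sRGB and as 16-bit linear, then counts how many distinct values survive a strong exposure boost:

```python
import numpy as np

def srgb_encode(x):
    # standard sRGB transfer function
    return np.where(x <= 0.0031308, 12.92 * x, 1.055 * x ** (1 / 2.4) - 0.055)

def srgb_decode(y):
    return np.where(y <= 0.04045, y / 12.92, ((y + 0.055) / 1.055) ** 2.4)

# a dark albedo ramp: 2% to 5% linear reflectance
ramp = np.linspace(0.02, 0.05, 4096)

# 8-bit sRGB round trip (typical JPEG-style storage)
q8 = np.round(srgb_encode(ramp) * 255) / 255
back8 = srgb_decode(q8)

# 16-bit linear round trip (e.g. a uint16 TIFF)
q16 = np.round(ramp * 65535) / 65535

for name, back in (("8-bit sRGB", back8), ("16-bit linear", q16)):
    exposed = back * 16.0  # a strong exposure boost applied by the renderer
    print(name, "distinct output levels:", len(np.unique(exposed)))
```

Across that 2-5% reflectance range, the 8-bit sRGB round trip keeps only a few dozen distinct codes, which is exactly what shows up as visible bands once the renderer multiplies the values up; the 16-bit version keeps thousands.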
A case where you probably would want an HDR texture is emissive materials, especially for neon signs and similar light sources. The texture would appear with its value amped up to appear as a bright light source in game, so in that case an 8-bit image could easily show banding.
Finally, it can still be useful to work at higher precision (e.g. 16-bit precision) if possible when capturing and creating textures, simply because it gives you more headroom to process the image without causing precision problems. For instance, if you need to adjust levels or color balance, you lose a little precision; that can introduce banding (especially if you do it multiple times) when starting from an 8-bit source image. A 16-bit source would be more resilient to such problems. However, the final texture used in the game would still probably be compressed to 8-bit.
• combination of a quite dark texture with very bright lighting or an extremely overexposed camera setting can show banding in the final frame very good insight. But we may note that gamma encoding is here precisely to mitigate this point. If he has the problem why not try a superior gamma exponent ? that would inhibit usage of hardware sRGB samplers though. – v.oddou May 10 '16 at 0:50
• Thank you for your input. But to clarify, I'm mostly interested in this from a photorealistic rendering point of view, with ray trace renderers (like mental ray, V-Ray, Arnold, etc.) with full light simulations and global illumination, rather than for real-time game engines. – Kristoffer Helander May 10 '16 at 7:32
• @KristofferHelander Good to know, but I think what I wrote probably applies just as well to offline rendering. But I admit that I don't have much direct experience in that area. – Nathan Reed May 10 '16 at 7:39
• @NathanReed Yes, you most certainly made some really good points there :) – Kristoffer Helander May 10 '16 at 7:54
The emphasis on albedo is key: we barely need any bits, because albedo has no depth to encode anyway. The dynamic range comes from the shading, so it makes sense to work in f32 per component within shaders and output to f16 render targets. But storing albedo textures in f16 is not only overkill, it is a severe, unjustified performance hog for our precious bandwidth.
|
2019-08-20 03:45:49
|
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 1, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.2496921867132187, "perplexity": 3149.0575530307683}, "config": {"markdown_headings": false, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2019-35/segments/1566027315222.14/warc/CC-MAIN-20190820024110-20190820050110-00237.warc.gz"}
|
https://asia.vtmarkets.com/analysis/concern-about-the-global-economy-market-expect-another-75bps-rate-hike/
|
# Daily market analysis
### Concern about the global economy, Market expect another 75bps rate hike
###### July 27, 2022
Tuesday’s decline in US stocks was precipitated by deteriorating economic conditions, worries of a recession, and sky-high inflation. The dismal outlook of the world’s largest retailer, Walmart Inc., illustrates the effects of inflationary pressures on consumer spending. Concerns over a faltering global economy prompted investors to anticipate another 75-basis-point increase ahead of the widely expected Federal Reserve interest rate hike; the combined 150 bps of hikes across June and July would be the sharpest since the early 1980s.
Alphabet Inc., the parent company of Google, and Texas Instruments Inc. rose after earnings, but Microsoft fell due to its slowest revenue growth since 2020. In addition, the sales of McDonald’s Corp. and Coca-Cola Co. exceeded expectations, and Coinbase Global Inc. is under investigation in the United States for allegedly allowing Americans to trade digital assets that should have been registered as securities, according to three individuals familiar with the situation.
The S&P 500 and Dow Jones Industrial Average both dipped on Tuesday ahead of the Fed's anticipated rate hike on Wednesday, amid recession fears and disappointing earnings reports. The S&P 500 fell 1.19% for the day, while the Dow Jones Industrial Average declined 0.7%. Eight of eleven sectors remained in the red, with Consumer Discretionary and Communication Services the worst performers of all categories, falling 3.38% and 2.20%, respectively. In the meantime, the Nasdaq 100 sank 2% on Tuesday, while the MSCI world index declined 0.9%.
Main Pairs Movement
As yesterday’s financial markets were dominated by aversion to risk, the US dollar rose on Tuesday, surrounded by bullish momentum, and approached the 107.30 level. During the first part of the trading day, the DXY index fluctuated in a range between 106.2 and 106.5. It then began to experience considerable purchasing pressure and reached a day high of over 107.2 during the US trading session. Escalating fears of economic slowdown in the Eurozone and the Russia-related energy crisis continued to boost demand for the safe-haven dollar, as the Russian gas giant Gazprom is supplying roughly 20% of its normal natural gas supply. The market’s attention will now move to the Fed’s monetary policy statements.
The GBP/USD exchange rate declined by 0.12% on Tuesday as the US dollar strengthened across the board. Investors continue to be anxious about the possibility of a worldwide recession, which has eclipsed the likelihood of a 50 basis point (bps) rate hike by the Bank of England in August. In the late European session, the GBP/USD pair plummeted to a daily low below the 1.197 level before regaining upward momentum to recover the majority of its daily losses. In the meantime, EUR/USD sustained significant losses yesterday and retested its daily low near 1.010 throughout the US trading session. EU nations also agreed to limit gas consumption during the upcoming winter. The pair fell over 1% for the day.
As investors await additional direction from the Federal Reserve's monetary policy meeting, gold was little changed with a 0.10% loss for the day after moving sideways in a narrow range below $1,719 in the late US trading session. The White House stated on Tuesday that it would sell 20 million barrels from the Strategic Petroleum Reserve, causing WTI oil prices to drop to around $95 per barrel.
Technical Analysis
XAUUSD(4-Hour Chart)
Gold made little technical progress, consolidating around 1722.66 on Tuesday. From the technical perspective, the four-hour outlook is neutral-to-bearish. Gold has fallen below the midline of the Bollinger Band and the 20 Simple Moving Average, suggesting that bears are in the driver's seat. Failure to hold above the resistance level of 1722.66 would send the pair toward the next support at 1680.99. On the flip side, if gold can move back above the midline, the bullish momentum might be able to gain traction on the four-hour chart. The RSI indicator still trades around the midline, reflecting the absence of directional strength. Further price action hinges on the FOMC meeting.
Resistance: 1722, 1748, 1769
Support: 1680.99
USDJPY (4-Hour Chart)
USDJPY surpasses 136.00 as the US dollar regains positive traction on Tuesday ahead of the FOMC meeting. Technically speaking, USDJPY performed a meaningful rebound after hitting the lower band of the ascending channel. USDJPY has breached the psychological resistance at 136.00, suggesting that the pair has resumed its bid tone. USDJPY attracts some buyers near 136.00, so the RSI indicator is skewed to the upside. The pair is expected to move further north as the RSI is still far from overbought, implying there is room for the pair to ascend. On the flip side, if the pair falls below the bullish channel, it will lose some positive momentum in the near term.
Resistance: 136.62, 137.06, 137.61
Support: 136.84, 134.75
EURUSD (4-Hour Chart)
EURUSD tumbles towards 1.0100 amid the European gas crisis and fears of a global recession. From the technical perspective, EURUSD is trading at its lowest in a week near the support level of 1.0109. The outlook for the currency pair turns to the downside as a bearish double-top pattern has formed. In the meantime, the four-hour chart favours a downside extension as EURUSD has breached below the 20 Simple Moving Average. Failure to hold above 1.0109 would confirm another leg of downside momentum. Moreover, both the RSI and the MACD indicators gain downward traction, trading within negative territory.
Resistance: 1.0205, 1.0284, 1.0362
Support: 1.0109, 0.9952
Economic Data
|
2023-01-27 08:58:53
|
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.20423932373523712, "perplexity": 8271.364630931044}, "config": {"markdown_headings": true, "markdown_code": false, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2023-06/segments/1674764494974.98/warc/CC-MAIN-20230127065356-20230127095356-00126.warc.gz"}
|
http://spamdestructor.com/probability-of/probability-of-type-1-error-formula.php
|
# Probability Of Type 1 Error Formula
His work is commonly referred to as the t-Distribution and is so commonly used that it is built into Microsoft Excel as a worksheet function. Consistent is .12 in the before years and .09 in the after years. Both pitchers' average ERA changed from 3.28 to 2.81, which is a difference of .47.
What is the probability that a randomly chosen counterfeit coin weighs more than 475 grains? Assume also that 90% of coins are genuine, hence 10% are counterfeit. Would this meet your requirement for “beyond reasonable doubt”?
## Probability Of Type 2 Error
What is the probability that a randomly chosen coin weighs more than 475 grains? This is seen by the statement of our null and alternative hypotheses: H0: μ = 11, Ha: μ < 11. If the truth is they are guilty and we conclude they are guilty, again no error.
1. At the bottom is the calculation of t.
2. What is the probability that a randomly chosen genuine coin weighs more than 475 grains?
3. If the consequences of making one type of error are more severe or costly than making the other type of error, then choose a level of significance and a power for the test that reflect the relative seriousness of those consequences.
A more common way to express this would be that we stand a 20% chance of putting an innocent man in jail. The rows represent the conclusion drawn by the judge or jury. Two of the four possible outcomes are correct. To have p-value less than α, a t-value for this test must be to the right of $t_\alpha$. We fail to reject the null hypothesis for x-bar greater than or equal to 10.534.
Statistical and econometric modelling: the fitting of many models in statistics and econometrics usually seeks to minimise the difference between observed and predicted or theoretical values. The power of a test is (1 − β), the probability of choosing the alternative hypothesis when the alternative hypothesis is correct. P(D|A) = .0122, the probability of a type I error calculated above. The theory behind this is beyond the scope of this article, but the intent is the same.
Does this imply that the pitcher's average has truly changed, or could the difference just be random variation? Hence P(AD) = P(D|A)P(A) = .0122 × .9 = .0110. In other words, the probability of Type I error is α.[1] Rephrasing using the definition of Type I error: the significance level α is the probability of making the wrong decision when the null hypothesis is true.
## What Is The Probability Of A Type I Error For This Procedure
Specifically, the probability of an acceptance is $$\int_{0.1}^{1.9} f_X(x)\, dx$$ where $f_X$ is the density of $X$ under the assumption $\theta=2.5$. When you do a hypothesis test, two types of errors are possible: type I and type II. Similar considerations hold for setting confidence levels for confidence intervals. You can decrease your risk of committing a type II error by ensuring your test has enough power.
Remarks: If there is a diagnostic value demarcating the choice of two means, moving it to decrease type I error will increase type II error (and vice versa). A medical researcher wants to compare the effectiveness of two medications. Pros and cons of setting a significance level: setting a significance level (before doing inference) has the advantage that the analyst is not tempted to choose a cut-off on the basis of the data.
Examples: If men predisposed to heart disease have a mean cholesterol level of 300 with a standard deviation of 30, but only men with a cholesterol level over 225 are diagnosed as predisposed, let A designate healthy, B designate predisposed, C designate cholesterol level below 225, and D designate cholesterol level above 225. In this case we have a level of significance equal to 0.01; thus this is the probability of a type I error. Question 3: If the population mean is actually 10.75 ounces, what is the probability of a type II error?
The null and alternative hypotheses are: Null hypothesis (H0): μ1 = μ2 (the two medications are equally effective). To lower this risk, you must use a lower value for α.
## If the cholesterol level of healthy men is normally distributed with a mean of 180 and a standard deviation of 20, at what level (in excess of 180) should men be diagnosed as predisposed to heart disease?
This is one reason[2] why it is important to report p-values when reporting results of hypothesis tests.
The former may be rephrased as: given that a person is healthy, the probability that he is diagnosed as diseased. Type II error: a type II error occurs when one rejects the alternative hypothesis (fails to reject the null hypothesis) when the alternative hypothesis is true.
For P(D|B) we calculate the z-score (225 − 300)/30 = −2.5; the relevant tail area is .9938 for the heavier people, and .9938 × .1 = .09938. Many people decide, before doing a hypothesis test, on a maximum p-value for which they will reject the null hypothesis.
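The numbers in this cholesterol example can be reproduced in a few lines (a sketch using Python's scipy; the means, standard deviations, cutoff, and 90/10 population split are the ones quoted in the text):

```python
from scipy.stats import norm

# Type I error: a healthy man (mean 180, sd 20) measures above the 225 cutoff.
p_D_given_A = 1 - norm.cdf(225, loc=180, scale=20)   # ~0.0122

# Type II error: a predisposed man (mean 300, sd 30) measures below the cutoff.
p_C_given_B = norm.cdf(225, loc=300, scale=30)       # ~0.0062

# Absolute (unconditional) probability, weighting by the 90% healthy share.
p_AD = p_D_given_A * 0.9                             # ~0.0110, matching P(AD) above

print(p_D_given_A, p_C_given_B, p_AD)
```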
So setting a large significance level is appropriate. Type I error: a type I error occurs when one rejects the null hypothesis when it is true.
For this reason, for the duration of the article, I will use the phrase "Chances of Getting it Wrong" instead of "Probability of Type I Error". So the probability of rejecting the null hypothesis when it is true is the probability that $t > t_\alpha$, which we saw above is α. Sometimes there may be serious consequences of each alternative, so some compromises or weighing priorities may be necessary. Suppose that the standard deviation of the population of all such bags of chips is 0.6 ounces.
Common mistake: claiming that an alternate hypothesis has been "proved" because the null hypothesis has been rejected in a hypothesis test. No hypothesis test is 100% certain.
For a Type II error, it is shown as β (beta) and is 1 minus the power, or 1 minus the sensitivity, of the test. When the null hypothesis states µ1 = µ2, it is a statistical way of stating that the averages of dataset 1 and dataset 2 are the same. For a Type I error, it is shown as α (alpha), is known as the size of the test, and is 1 minus the specificity of the test. In fact, in the United States our burden of proof in criminal cases is established as "beyond reasonable doubt". Another way to look at Type I vs. Type II errors: when we commit a Type I error we put an innocent person in jail, and when we commit a Type II error we let a guilty person go free.
|
2018-10-20 11:28:58
|
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 1, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.6466567516326904, "perplexity": 956.3307621649745}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2018-43/segments/1539583512693.40/warc/CC-MAIN-20181020101001-20181020122501-00547.warc.gz"}
|
https://eccc.weizmann.ac.il/keyword/19555/
|
Under the auspices of the Computational Complexity Foundation (CCF)
Reports tagged with cycles in graphs:
TR18-007 | 9th January 2018
Lior Gishboliner, Asaf Shapira
#### A Generalized Turan Problem and its Applications
Our first theorem in this paper is a hierarchy theorem for the query complexity of testing graph properties with $1$-sided error; more precisely, we show that for every super-polynomial $f$, there is a graph property whose 1-sided-error query complexity is $f(\Theta(1/\varepsilon))$. No result of this type was previously known for ... more >>>
|
2021-06-24 13:12:31
|
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.5247989892959595, "perplexity": 1974.8538897155242}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2021-25/segments/1623488553635.87/warc/CC-MAIN-20210624110458-20210624140458-00387.warc.gz"}
|
https://electronics.stackexchange.com/questions/364765/what-does-explain-that-the-sum-of-voltage-of-the-batteries-connected-in-parallel
|
# Why is the voltage of batteries connected in parallel the same, rather than the sum?
Below is an explanation attached from my textbook:
It explains batteries connected in series very well, as seen. Honestly, I've been searching my textbook for anything about batteries connected in parallel. However, I couldn't find anything useful. Is there any chance the book just didn't explain it? I'm out of my mind right now. What I mean is, there should be something that explains what to do. Can you tell me how to research this, or explain the reason?
My Kindest Regards!
• I'd be grateful If someone gives a tip. – Busi Mar 26 '18 at 18:56
• the batteries in the diagram are in parallel .... they would be in series if one of them was reversed – jsotola Mar 26 '18 at 19:01
• @jsotola Can you be more clear, please? – Busi Mar 26 '18 at 19:05
• They are neither strictly in series nor in parallel, because they share no nodes. – Selvek Mar 26 '18 at 19:21
• @Selvek How? why do they share no nodes? – Busi Mar 26 '18 at 19:25
Your book likely doesn't mention batteries connected in parallel because it is using an idealized simplification of a "battery" that makes parallel battery connections meaningless.
In the problem above, the "batteries" are represented by ideal voltage sources. They have an exact output voltage, infinite capacity to source current, and no internal resistance.
If you put two ideal voltage sources of the same voltage in parallel, they will behave no differently than a single ideal voltage source. If you put two ideal voltage sources of different voltage in parallel, you will have a circuit with infinite current because the over-simplified ideal voltage source just doesn't represent reality with high enough fidelity to give you an idea of what really happens.
In reality, batteries include series resistance (voltage changes as a function of current) and discharge curves (voltage changes as a function of state of charge). To first order, putting two batteries of the same voltage in parallel will reduce the voltage drop across the series resistance (because each battery sources only ~half of the current) and double the battery life. However, those interactions are fairly complex, and I would recommend you ask a more specific follow-on question if you have something specific in mind.
• So are the parallel batteries meaningless? I didn't get what you mean ;) – Busi Mar 26 '18 at 19:27
• @Busi "in parallel" implies that their two anodes are directly connected to each other, and their two cathodes are directly connected to each other. But, there are no such direct connections in the circuit in your picture. Those two batteries are not in parallel with each other because of the resistors that come between them. – Solomon Slow Mar 26 '18 at 19:59
• I still didn't get what you are trying to mean. Why didn't the book explain it? – Busi Mar 26 '18 at 19:59
• @jameslarge, what if there are no actual resistors, but a really crappy battery holder? – jsotola Mar 26 '18 at 20:15
The problem comes when trying to place two 'ideal' voltage sources in parallel. Ideal voltage sources have an internal resistance equal to 0 Ohms.
Let's think about it. These are two ideal voltage sources in parallel:
[Schematic: two ideal voltage sources, V1 and V2, connected directly in parallel. Created using CircuitLab.]
The issue isn't obvious yet, but if $V_1\neq V_2$, as it is in practice, you have a contradiction.
In other words, if you put a voltmeter across $V_1$, is it going to give you the value of $V_1$? Or $V_2$? The ideal voltage sources enforce a value across the nodes they are placed. So in the previous case, you have two enforcing conditions for the voltage at the same node, which leads to a contradiction if the voltages are not the same.
The more general case looks like this:
[Schematic: the same two sources, now each with a series resistance (R1 with V1, R2 with V2), joined at nodes A and B.]
Notice that if $R_1=R_2=0$, the above becomes the ideal case for voltage sources. Say $R_1$ and $R_2$ are nonzero (even if $\approx 0$); then it doesn't matter what the values of $V_1$ and $V_2$ are: current will flow from one source to the other for $V_1\neq V_2$, and we don't have a contradiction.
Now, back to your question, of "why the sources add up to the same". Imagine two perfectly matched sources ($V_1= V_2$ and $R_1=R_2$). Also notice that in the last circuit I have labeled two nodes: A and B. Then the voltage between A and B (what you'd think of 'parallel') is (after using KCL):
$$V_{AB}=V_1\dfrac{R_2}{R_1+R_2}+V_2\dfrac{R_1}{R_1+R_2}$$
Since we assume the sources are matched,($V_1= V_2$ and $R_1=R_2$),
$$V_{AB}=V_1=V_2$$
If the sources aren't matched in some way, then $V_{AB}$ will be different. So when dealing with ideal voltage sources, you have to be careful as to not create a contradiction by placing two different enforcing conditions across the same nodes (similar argument can be said about ideal current sources). If you use the more realistic model, then the contradiction issue goes away but the parallel combination only adds up to the 'same' when the sources are matched.
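As a quick numeric illustration of the weighted-average formula above (a sketch with made-up values):

```python
# Voltage across nodes A-B for two real sources with series resistances,
# using the superposition/KCL formula from the answer.
def v_ab(v1, r1, v2, r2):
    return v1 * r2 / (r1 + r2) + v2 * r1 / (r1 + r2)

print(v_ab(9.0, 0.5, 9.0, 0.5))   # matched sources  -> 9.0 (equals V1 = V2)
print(v_ab(9.0, 0.5, 8.0, 0.5))   # mismatched       -> 8.5, between V1 and V2
```

In the mismatched case there is also a circulating current $(V_1-V_2)/(R_1+R_2)$ flowing around the loop, which is why paralleling unequal real batteries wastes energy.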
• What a great answer you gave! :) Can you answer the other question too? Which is why the textbook didn't explain parallel batteries? – Busi Mar 26 '18 at 20:53
• Why your book doesn't explain it? I am going to guess that for an introductory lesson on ideal sources, Kirchhoff's, etc, this may be a bit out of the scope, imho. And with two ideal voltage sources in parallel, you can't really work the math (KCL or KVL) unless you add the non-ideal parameters (resistors)...All you have is an equality that can lead to contradiction if V1 isn't equal to V2. You need to start from the general case to make sense of it. – Big6 Mar 26 '18 at 20:59
• Would it be meaningless to know? Or there should be a better reason, right? – Busi Mar 26 '18 at 21:19
• @Busi it's quite important, just out of the scope of an introductory course. Details become important as you take more advanced courses. – Big6 Mar 26 '18 at 21:22
• Or it would explain it. I need to make sure that I'm on the right topic. How should I research it? – Busi Mar 26 '18 at 21:26
|
2019-12-07 22:19:43
|
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 1, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.5247834324836731, "perplexity": 507.4409878604433}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2019-51/segments/1575540502120.37/warc/CC-MAIN-20191207210620-20191207234620-00521.warc.gz"}
|
https://physics.stackexchange.com/questions/471796/second-quantization-notation-hamiltonian-on-triplet-state
|
# Second quantization notation - Hamiltonian on triplet state
So I'm struggling quite a bit with Dirac notation and second quantization and it seems like no one wants to really do calculations step-by-step to at least get the notation right. We were given the following Hamiltonian: $$H=t\sum_\sigma \left(c_{1,\sigma}^\dagger c_{2,\sigma} + c_{2,\sigma}^\dagger c_{1,\sigma}\right) \tag{1}$$ with $$c$$ and $$c^\dagger$$ annihilation and creation operators with position and spin $$\sigma=\uparrow,\downarrow$$ as indices. Now I should apply this to the two-fermion triplet state $$\lvert \uparrow \uparrow \rangle$$. To make this a little more straightforward I thought I'd better write the states in terms of occupation numbers: $$\lvert n_{1\uparrow}, n_{2\uparrow}, n_{1\downarrow}, n_{2\downarrow} \rangle$$
for my triplet state I now have: $$\lvert \uparrow \uparrow \rangle = \lvert 1,1,0,0\rangle$$
Applying the Hamiltonian on this yields: $$H \lvert 1,1,0,0\rangle = t(c_{1,\uparrow}^\dagger c_{2,\uparrow} + c_{2,\uparrow}^\dagger c_{1,\uparrow}+c_{1,\downarrow}^\dagger c_{2,\downarrow} + c_{2,\downarrow}^\dagger c_{1,\downarrow})\lvert 1,1,0,0\rangle=0+0+0+0 \tag{2}$$
By only using $$a^\dagger \lvert 1 \rangle=0$$ and $$a \lvert 0 \rangle=0$$. What's irritating: we should go on to calculate more stuff with this "state". But there is no state anymore. [1.] Did I do something completely wrong?
Furthermore we should apply this Hamiltonian on other states. Like the singlet state: $$\frac{1}{\sqrt{2}}(\lvert \uparrow \downarrow\rangle - \lvert \downarrow \uparrow \rangle) = \frac{1}{\sqrt{2}} (\lvert 1,0,0,1\rangle-\lvert 0,1,1,0\rangle)$$
Applying it the same way as above: $$H \frac{1}{\sqrt{2}} \lvert 1,0,0,1\rangle = t (0+\lvert 0,1,0,1\rangle+\lvert 1,0,1,0\rangle+0)$$
$$H\frac{1}{\sqrt{2}}\lvert 0,1,1,0\rangle= t(\lvert 1,0,1,0\rangle+0+0+\lvert 0,1,0,1\rangle)$$
$$H \frac{1}{\sqrt{2}}(\lvert \uparrow \downarrow\rangle - \lvert \downarrow \uparrow \rangle) =0 \tag{3}$$
Even if $$0$$ is also correct here, it is a rather surprising result. Going on with this, there is only one triplet state which is non-zero.
[2.] Does that make sense/ is correct?
[3.] How would one write $$\lvert 1,0,1,0\rangle$$ in the "arrow notation" $$\lvert 1,0,1,0\rangle=\lvert \uparrow \downarrow,0 \rangle$$?
[4.] And are those notations: $$\lvert \uparrow \downarrow,0 \rangle =\lvert \uparrow \downarrow\rangle \lvert0 \rangle$$ or $$\lvert \uparrow \uparrow \rangle=\lvert \uparrow \rangle\lvert \uparrow\rangle$$ equivalent?
[5.] Is the complex conjugate of $$\lvert \uparrow \uparrow\rangle^\dagger= \langle \uparrow \uparrow \lvert$$ ?
[6.] Also on some other question I have seen the following line: $$(\lvert\downarrow \rangle \lvert\uparrow\rangle - \lvert\uparrow \rangle \lvert\downarrow\rangle)/\sqrt{2} = \lvert 1_\downarrow 1_\uparrow \rangle$$. How is the right-hand-side notation defined? I don't see how this can be only one term.
I'm sorry that this is more than one question, but it sorta belongs together for me. I thank you in advance :)
Your main problem seems to be that you are mixing up the state and the eigenvalue of the Hamiltonian. Your states are eigenvectors of the Hamiltonian operator, s.t. one gets an eigenvalue (the total energy of the state) when applying $$H$$. This means that even when the Hamiltonian acting on a state gives zero, it doesn't mean that the state is zero. You can still calculate things with that state. This should take care of [1.] & [2.].
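If you want to sanity-check equation (2) numerically, here is a minimal sketch (my own construction, not from the course material) that builds the four fermionic modes with a Jordan-Wigner mapping; the mode ordering $(1\uparrow, 2\uparrow, 1\downarrow, 2\downarrow)$ is an assumption, and the signs of individual terms depend on that convention:

```python
import numpy as np

I2 = np.eye(2)
a  = np.array([[0., 1.], [0., 0.]])   # single-mode annihilator: a|1> = |0>
Z  = np.diag([1., -1.])               # (-1)^n, the Jordan-Wigner string factor

def kron_all(ops):
    out = np.eye(1)
    for op in ops:
        out = np.kron(out, op)
    return out

def c(mode, n_modes=4):
    """Annihilation operator for one mode, with JW strings on earlier modes."""
    return kron_all([Z] * mode + [a] + [I2] * (n_modes - mode - 1))

c1u, c2u, c1d, c2d = (c(m) for m in range(4))
t = 1.0
H = t * (c1u.T @ c2u + c2u.T @ c1u + c1d.T @ c2d + c2d.T @ c1d)

def basis(n1u, n2u, n1d, n2d):
    v = np.zeros(16)
    v[(n1u << 3) | (n2u << 2) | (n1d << 1) | n2d] = 1.0
    return v

up_up = basis(1, 1, 0, 0)            # |up up> = |1,1,0,0>
print(np.allclose(H @ up_up, 0))     # True: H|up up> = 0, i.e. an eigenstate
                                     # with eigenvalue 0, not a vanished state
```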
Also your notation of the states is somewhat confusing, as it makes it look as if you have four degrees of freedom, when you actually have only two (left spin and right spin). You can still use it, just be cautious with that. Knowing this [3.] isn't a problem anymore. The thing is that you have a system with two fermions at two positions without any kinetics. This means you can't have a state with both fermions in one position with different spins in this small example, even though describing such systems may be the point of Second Quantization.
For [5.]: As states aren't numbers but vectors in Hilbert space, where $$\langle \Psi \rvert$$ is the dual vector of $$\lvert \Psi \rangle$$, such that $$\langle \Psi \vert \Psi \rangle$$ is a number (generalized scalar product), the probability(-density). To make a scalar product work with complex numbers, we not only have to take the transpose of a vector to get the dual vector but the adjoint (also called conjugate transpose): $$\lvert \Psi \rangle^\dagger = (\lvert \Psi \rangle^T)^* = (\lvert \Psi \rangle^*)^T = \langle \Psi \rvert$$, where $$^T$$ marks a transpose and $$^*$$ marks the complex conjugate. So the answer is yes, but $$^\dagger$$ is not a simple complex conjugate.
I can't help you with all those notations. They should be defined somewhere. For this exercise the notation on your sheet with $$\lvert \uparrow \uparrow \rangle$$ etc. is all you need, as long as you know that the left arrow describes position one and the right arrow describes position two. You could translate e.g. "up" to $$1$$ and "down" to $$-1$$, but that doesn't really help, does it.
You may want to refresh your basics in Quantum mechanics before diving into Second Quantization.
• In my understanding I can just successively apply the operators in my Hamiltonian on my state to get my result. If I do it on this first example the annihilation/creation operator will act on 0/1 states and give a bare 0.... no state (according to my notes). How would you do eq2 if you don't get 0 ? Why shouldn't it have 4 degrees of freedom? An electron can change places or switch spin if it's allowed to, can't it? [5] as most of the other questions is just a question on notation "how would you write the adjoint of the given state?" – Kanaa Apr 11 '19 at 9:15
• @Kanaa Zero is the right answer. But that means that the energy of the state is zero, not that the state itself is zero/trivial or whatever. – Paul G. Apr 11 '19 at 9:44
• So what is the state then after applying the H? If I read our script on commutation relations correctly then it should be 0 since the operations are "not allowed". – Kanaa Apr 11 '19 at 11:41
• I don't know what Skript you reading, but assuming your state is an eigenstate of the Hamiltonian operator, applying the operator gives you eigenvalue times state. So at zero energy you get zero times the state which is a zero vector. It's all linear algebra. – Paul G. Apr 11 '19 at 13:51
• Ok I found a script on multi-body problems where they are doing more or less the same thing with just n-particles. They move to k-space to find an eigenfunction. Those I have stated are not eigenvectors thus it is perfectly fine if they change under this Hamiltonian. – Kanaa Apr 12 '19 at 12:10
|
2021-03-07 06:37:12
|
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 32, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.8956708312034607, "perplexity": 353.06965254719887}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2021-10/segments/1614178376144.64/warc/CC-MAIN-20210307044328-20210307074328-00024.warc.gz"}
|
http://www.buffalotheorybrewing.com/2010/09/
|
## Saturday, September 25, 2010
### Troubleshooting Beer
Every home brewer knows that sometimes a batch of beer will taste a little "off". This is not uncommon, and there are ways to identify and fix these issues. I have listed some of the most common problems and how to fix them. By no means is this list comprehensive; I have tried to cover the flavors that can be easily detected and avoided. There is a bit of a recurring theme in off flavors, and that is sanitation. Above all, to make amazing beer your process must be free from infection and healthy yeast must be used.
In order to fully understand where off flavors come from during the fermentation process, you need to understand the chemistry behind the magic. The flow chart to the right, from Brew Chem 101, steps through the fermentation process. I would highly recommend getting this book if you are interested in the science behind making truly great beer. The process of fermentation is basically taking carbohydrates and converting them into ethanol by breaking down the sugars. Those are only the two end points, though; as you can see, the transformation from sugar to alcohol has many steps in between. Some of these stops along the way can actually be the very cause of the undesirable flavor. The first flavor we are going to cover is the last step before ethanol in our chemical reaction: acetaldehyde.
Flavor: Fresh Cut Green Apples
Cause: (Acetaldehyde, CH3CHO) Green apple flavor is produced during fermentation and is common in beer that is not fully attenuated. The flavors will subside after the beer has reached full attenuation. This can sometimes be the result of using too much cane or corn sugar, or of bacterial contamination. Acetaldehyde is the last step in the fermentation process before ethanol is produced.
Solution: Age the beer for a week or two, and if the green apple taste goes away then kudos. If not, eliminate the use of corn and cane sugar; the sugar should be substituted with malt extract. Also, starting with good-quality yeast will help speed the fermentation process along and reduce fermentation time.
Flavor: Butterscotch, Movie Popcorn, Slickness on tongue
Cause: (Diacetyl, C4H6O2) Like acetaldehyde, diacetyl is a naturally occurring part of the fermentation process, and over time it will disappear if proper attenuation is achieved. In small quantities diacetyl is appropriate for some styles of beer. The diacetyl life cycle in your beer comes in two major steps, production and reduction. If either of these steps is interrupted, inappropriate levels of diacetyl will be in your finished product.
• Production: Acetaldehyde and pyruvate combine to produce α-acetolactic acid, which gives off hydrogen ions and CO2 to create diacetyl.
• Reduction: Eventually diacetyl will change to acetoin, with an undesirable fruity, musty flavor, and then into 2,3-butanediol, which has no flavor.
Solution: Properly oxygenate wort before fermentation. Make sure that you allow the beer to have a "diacetyl rest" and wait for full attenuation before transferring (don't rack too early). One strain of bacteria called sarcina grows rapidly near the end of fermentation and can irreversibly contaminate your beer with diacetyl. As always, it is important to start off with healthy yeast and maintain a clean environment to avoid contamination.
Flavor: Cooked Corn
Cause: (Dimethyl Sulfide, (CH3)2S) DMS is caused by the breakdown of S-methyl methionine (SMM), an amino acid created during malt germination. This is not really controllable by you unless you germinate your own malt. Luckily for you, malt manufacturers have it down to a science and minimize SMM in your malt and malt extract. There will be some SMM in all malt, but it can be removed from your final product by following a few simple steps.
Solution: The main cause of DMS that you can taste is either not cooling wort fast enough or not having a rolling, open boil. Any palatable DMS will evaporate during a minimum 60-minute boil if left uncovered and the wort is then rapidly cooled. DMS is also carried off by gaseous CO2 during fermentation; if you have a weak fermentation the DMS removal will be weak, so always start off with healthy, active yeast.
Flavor: Skunky
Cause: (MBT, 3-methyl-2-butene-1-thiol) When humulone (the bittering alpha acid in hops) is exposed to light, specifically blue-green light (400-520 nanometer wavelength), a chemical reaction occurs creating MBT. MBT is part of the mercaptan family, along with the active chemical in a skunk's spray, which gives it the distinctive skunky smell.
Solution: Do not expose wort or beer to direct sunlight after hops have been added. While sunlight can be the most damaging because of its intense light source (the freaking Sun), indoor lighting can cause the same damage to your beer. Keeping the fermenter inside a dark closet or covered up will reduce MBT content, and NEVER use green or clear glass when bottling; this is a recipe for skunked beer. (See: Corona for more details.)
Flavor: Stale, Wet Cardboard, Rotten or Old Vegetables, Sherry, Pineapple
Cause: (Oxidization, O2 -> BEER) When oxygen is introduced into your hot wort it creates aldehydes, which produce the stale, cardboard flavor that is unpleasant for all drinkers.
Solution: To prevent oxidization avoid hot-side aeration, which is when air is mixed into the wort at temperatures above 80°F. Only allow beer to be aerated after yeast has been pitched, to aid the yeast's aerobic respiration; after this has happened, minimize any shaking or stirring of the beer. Transfers from primary to secondary and from secondary to bottles/keg are when the highest risk of accidental aeration occurs. Also, only leave 1/2" to 1" of head space in bottles and use oxygen-reducing bottle caps. If kegging, be sure to completely purge air from the head space by filling and purging 3 times. In higher-alcohol beers this is generally perceived as "sherry" instead of cardboard and (if style appropriate) is not considered a flaw in many aged beer styles, such as barley wine and old ale.
Flavor: Astringent Mouth-puckering, like chewing on a grape skin, Metallic or Powdery
Cause: (Polyphenols known as tannins; catechin is shown below) Tannins are stored in grain husks and also in the skin of fruit, and when boiled or sparged with water over 170°F they are extracted. This is because the water's temperature is directly related to the solubility of its ingredients. Over-milling grain makes tannin extraction easier, since the husks are further broken down, making tannin extraction more efficient. High water pH (above 5.2) can also increase tannin extraction. Mixing krausen into beer or transferring it to secondary will increase tannins because of its concentration of these phenols.
Solution: Don't boil, over-crush, or over-sparge grain, and be sure to keep water below 170°F. Check your pH and make sure it is below 5.2, and avoid letting krausen get into the beer.
Flavor: Alcoholic/Hot Spicy/Solvent-like
Cause: (Fusel alcohols: alcohols with more than two C atoms) Propanol (CH3CH2CH2OH), butanol (CH3CH2CH2CH2OH), and other fusel alcohols give beer an undesirable alcohol flavor in high quantities. However, in some beers this is an appropriate flavor, such as barley wines and some bocks. These undesirable, solvent-like tastes can come from several sources, like excessive yeast growth, high levels of ethanol not allowing proper fermentation, excessive amino acids, and high fermentation temperatures. In other words, when fermentation is too fast or too strong it can overwhelm the ethanol flavoring. Also, if the yeast start to ferment amino acids instead of sugars, the resulting alcohols won't be to your liking. Bacterial infections can also cause solvent-like tastes.
Solution:
If style appropriate, drink it! If not, try choosing a different yeast strain; some stronger yeast strains, meant for higher-alcohol beers, can handle the alcohol without producing these harsh flavors. Also maintain a sterile brewing environment and control fermentation temperatures. At temperatures above 80°F yeast produce a much higher concentration of the heavier, long-chain fusel alcohols, which are abrasive to the palate.
Flavor: Fruity (strawberry, pear, banana, apple, grape, citrus)
Cause: (Esters, Acid(*-C=O-OH) + Alcohol(*-CH2-OH) -> Ester) Esters occur naturally when alcohols and acids combine in the wort. These esters give fruit-like flavors and aromas that are desired in certain beer styles such as lambics and sours.
Solution: In ales ester production is lowest at temperatures between 60°F and 65°F and high at temperatures above 75°F. For lagers the window is much lower with low ester production below 50°F and high esters above 55°F. Try a cleaner yeast strain. Oxygenate wort sufficiently to ensure yeast health. Reduce original gravity. Check hop variety for fruity characteristics and avoid carrying over excessive hot break into fermenter. Be sure to pitch a sufficient quantity of healthy yeast to avoid yeast stress.
Flavor: Medicinal, Plastic, Band-aid
Cause:
(Chlorophenol, Cl-C6-OH) The most likely cause of this taste is infection caused by poor sanitation. It can also be caused by using chlorinated water or not properly rinsing cleaning solution from brewing equipment. Sometimes whole-hop usage can contribute to this off flavor if a high-alpha-acid hop is used.
Solution: Clean and sanitize all brewing equipment. If you really think sanitation is not the issue, check your water for chlorine; also be sure to rinse all equipment well so that no cleaning solution or bleach (should you unadvisedly be using it) is left behind. If your water has high chlorine levels, boil the water for 15 minutes to drive out the chlorine.
Flavor: Yeasty

Cause: (YEAST) C'mon, really, what do you think makes your beer taste yeasty? Large quantities of dead yeast are cannibalized by their still-living comrades. This yeast-on-yeast action releases bitter lipids, resins, and nitrogen- and sulphur-containing molecules.
Solution: If beer is young let yeast properly flocculate and settle. Watch your transfer method and make sure not too much trub makes it into your new container. And ALWAYS use healthy yeast.
### Pumpkin Transfer
I transferred my Pumpkin Pie Ale to the secondary fermenter today, 9/25/10. The bubbling had slowed down to one bubble every 3 minutes, so I decided it was time to transfer after 6 days. I tasted it and it was disappointingly pumpkin-less. I measured exactly 1/2 teaspoon to add to the primary boil; I have a feeling that was too little. To make up for the lack of Fall taste I added an additional rounded 1/2 teaspoon of pumpkin spice to the secondary fermenter and placed it in the brew closet. Hopefully this will be enough to elicit the taste of pumpkin pie and Thanksgiving.
UPDATE: September 29, 2010. Tasted the beer again today and determined it still doesn't have enough spice. Added an additional teaspoon, bringing the total to 2 teaspoons of pumpkin spice.
UPDATE: October 1, 2010. Tasted the beer for the final time; the pumpkin aroma is adequate, so I bottled today. I have not covered bottling on this blog since I have always had kegs. But recently I have been spending too much money on brewing, so I decided not to buy the final keg for the kegerator. I will cover my bottling methodology in a post coming soon. I measured my final gravity, since it is good to keep track of gravity readings and I have been lazy up to now. This won't really tell me anything because I did not get the original gravity, but I need to get in the habit of documenting original and final gravities.
FINAL GRAVITY: 1.025
UPDATE: October 1, 2010 PM. Foamed my first keg, the bottled blonde. So, I could have kegged the pumpkin ale. This is fine since I have not bottled in a while and it would be nice to give out some pumpkin pie ale. I have also written up a post about my theory on bottling.
## Wednesday, September 22, 2010
### Dihydrogen Monoxide in Beer
H2O
Water can be one of the most important factors when trying to brew quality beer. This is because of the effect various ions can have on the starch-degrading enzymes in malt. This is really only a concern in all-grain and partial-mash brewing. Malt extract brewers don't need to worry about this because the manufacturer of the malt takes it into account during the production of the extract.
This is not to say that extract brewers don't need to worry about water chemistry, because they absolutely do; with extract brewing, though, the most important thing is to like the taste of your water. The quickest way to get poor-tasting beer is to start off with poor-tasting water.
Hard vs Soft Water
Water hardness as a scale was created a long time ago, when people started using soap. Soap's ability to lather is directly affected by the water's mineral content. Typically, high mineral content makes it difficult, or "HARD", to lather soap. Inversely, water with low mineral content makes soap easy to lather, and since soft is the opposite of hard, they went with that. My guess why they went this way is because "easy water" sounds dirty. I don't know why, it just does.
Ranges for water softness follow; water hardness is not an exact science, so this is approximate:
• 0-50 ppm = Soft Water
• 51-110 ppm = Medium Hard Water
• 111-200 ppm = Hard Water
• >200 ppm = Very Hard Water
Temporary vs Permanent Hardness
When brewing there are two types of hardness to be concerned with: temporary and permanent. Temporary hardness is a measure of bicarbonates in water [2(HCO3)]. The hardness that bicarbonate ions add is called temporary because when boiled they precipitate (become solid) and are removed from the water.
Permanent hardness is a measure of magnesium and calcium ions in the water, both of which remain after boiling. Permanent hardness can be adjusted for if its levels are outside the norm.
pH
If I started this sentence with pH, would I have to capitalize the p? The acidity or alkalinity of a liquid is measured as pH on a scale from 0 to 14, with 0 being the most acidic and 14 the most alkaline. In all-grain brewing, enzyme conversion occurs best at a pH of 5.2 (acidic); this can be attained with the addition of calcium sulfate (CaSO4), commonly known as gypsum.
Chemicals
Sodium (Na)
Half of the chemical makeup of common table salt, this ion contributes to body and full mouthfeel. Overuse of sodium in water treatments will give a seawater taste to the end product. Generally, levels between 10 and 70 ppm are good for brew water.
Chloride (Cl)
The other half of table salt, this ion brings out malt sweetness and contributes to mouthfeel and beer complexity. Levels generally range between 1 and 100 ppm and should always stay below 150 ppm to avoid salty flavors.
Calcium (Ca)
The most important chemical in "permanent hardness," this element helps lower pH and facilitates precipitation of proteins during boiling. For most beers it should be maintained around 100 ppm; any higher will create a harsh, bitter taste.
Sulfate (SO4)
While second to calcium for effectively lowering pH, sulfates take the gold for influencing hop extraction and bringing out sharp bitterness. The suggested amount of sulfates will vary depending on your beer style.
Magnesium (Mg)
Magnesium is primarily a yeast nutrient and should be maintained around 20-30 ppm. The addition of Epsom salts can raise magnesium levels in water. However, many experts advise against it because high magnesium levels can lead to a dry, astringent bitterness in your final product.
Water can be adjusted to meet your brewing needs. Local home brew stores provide chemicals that can be used to adjust ion levels and change pH of your brewing water. Your local home brew store will know the water in your area and can make suggestions on what should be added if anything to your water. The following table shows how to modify your water:
Effect of adding 1 gram of chemical per gallon of water, in ppm:

| Chemical | Calcium | Magnesium | Sodium | Chloride | Sulfates | Carbonates | Hardness |
|------------------|-----|----|-----|-----|-----|-----|-----|
| Baking soda      |     |    | 75  |     |     | 190 | 190 |
| Calcium chloride | 72  |    |     | 127 |     | 0   |     |
| Chalk            | 106 |    |     |     |     | 159 | 159 |
| Epsom salts      |     | 26 |     |     | 103 |     | 26  |
| Gypsum           | 62  |    |     |     | 148 | 0   |     |
| Table salt       |     |    | 104 | 160 |     | 0   |     |
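To make the table easier to use for batch sizes other than one gallon, here is a small sketch (mine, not from the original post) that scales the per-gallon numbers above; the values are the blog's approximations, not lab-grade data.

```python
# Approximate ion additions (ppm) per 1 gram of salt per US gallon of water,
# taken from the table above. Treat these as homebrew-grade approximations.
PPM_PER_GRAM_PER_GALLON = {
    "gypsum":           {"Ca": 62, "SO4": 148},
    "calcium chloride": {"Ca": 72, "Cl": 127},
    "epsom salts":      {"Mg": 26, "SO4": 103},
    "table salt":       {"Na": 104, "Cl": 160},
    "baking soda":      {"Na": 75, "CO3": 190},
    "chalk":            {"Ca": 106, "CO3": 159},
}

def addition_ppm(salt: str, grams: float, gallons: float) -> dict:
    """Return how many ppm each ion rises when `grams` of `salt` go into `gallons` of water."""
    per_gram = PPM_PER_GRAM_PER_GALLON[salt.lower()]
    return {ion: ppm * grams / gallons for ion, ppm in per_gram.items()}

# Example: 4 g of gypsum in a 5-gallon batch
print(addition_ppm("gypsum", grams=4, gallons=5))  # {'Ca': 49.6, 'SO4': 118.4}
```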
Here is the water chemistry of a few famous brewing towns (Expressed in PPM):
| Mineral (Ion) | Pilsen | Munich | Dublin | Milwaukee | Portland, OR |
|---|---|---|---|---|---|
| Calcium (Ca) | 7 | 70-80 | 115-120 | 35 | 2 |
| Sulfate (SO4) | 5-6 | 5-10 | 54 | 18 | 0 |
| Magnesium (Mg) | 2-8 | 18-19 | 4 | 11 | 1 |
| Sodium (Na) | 32 | 10 | 12 | 7 | 2 |
| Chloride (Cl) | 5 | 1-2 | 19 | 5 | 2 |
As you can see, different brewing cities around the world have very different water hardness and softness, and this can change the taste of your beer a lot. When I toured the Deschutes Brewery, the tour guide explained that they put gypsum in their water so that it will taste like water from some town in England. Unfortunately, I didn't think to ask which town.
Hopefully this has been a good insight into water and how it affects the brewing process.
Sources:
• The Brew-Master's Bible The Gold Standard for Homebrewers - Stephen Snyder
• The Complete Joy of Home Brewing 3rd Edition - Charlie Papazian
## Monday, September 20, 2010
If there is one thing building this kegerator has taught me, it is that draft beer, though completely worth it, is expensive. I added my third tap handle to my kegerator this week. The final tap will have to wait, since I went on a beer-buying bender on Sunday when I bought ingredients for my Pumpkin Pie Ale. Another tap handle and all necessary equipment and ingredients made that a $100+ trip to the brew store. The tap handle can be seen to the right with his two new friends.

### Stokes' Law and Order - Irish Moss Unit

This was originally going to be thrown in with my Pumpkin Pie Ale post, because that was the first time I had used Irish moss, but as I wrote it I decided that it deserved an entire post.

What is Irish Moss?

Chondrus crispus, or Irish moss as it's more commonly known, is a species of red algae that grows along the coasts of the Atlantic Ocean.

Why use seaweed in beer?

Irish moss in brewing is used as a fining agent, which means it removes the suspended particles in the beer that cause it to be cloudy. Sometimes the style of beer requires suspended particles in the beer, hefeweizen for example, so in those cases Irish moss should not be used. There are two main causes of cloudy beer: proteins and yeast. When the yeast is at the end of its life cycle and most of the sugars in the beer have been consumed, the cells bond to each other, become heavier, and drop to the bottom. This process is called flocculation, and it occurs naturally when the yeast go dormant. Not all yeast behave in this manner, though; some will stay suspended indefinitely. A method of clearing beer further is to cool it down so that the yeast go into hibernation mode and sink to the bottom. This is why White Labs yeasts are refrigerated: to keep the cells alive but dormant until they are ready to be used.

Implementation

There are two popular styles of Irish moss commonly used in brewing applications. Dried moss is just that: the seaweed is dried out, thrown in a jar, and sold at brew stores everywhere. The other type is tabs; Whirlfloc is a brand of Irish moss that is concentrated and put in tab form. Whatever form is chosen, the implementation is the same: during the last 10-20 minutes of the boil, throw in a tablet or about a teaspoon of the dried variety.

Stokes' Me, Stokes' Me

The principle used in determining the rate at which suspended particles fall out is called Stokes' Law. The equation is as follows:

$V_s = \frac{2}{9}\frac{\left(\rho_p - \rho_f\right)}{\mu} g\, R^2$

where:

• Vs is the particles' settling velocity (m/s)
• g is 9.8 (m/s²), assuming you're not making space beer
• ρp is the mass density of the particle (kg/m³)
• ρf is the mass density of the wort (kg/m³)
• μ is the wort viscosity (kg/m·s)
• R is the radius of the particle (m)

Irish moss can aid in this process by making the proteins and yeast stick together, creating a heavier particle. Making ρp bigger makes the numerator bigger, increasing the velocity at which the particles settle.
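To make the formula concrete, here is a minimal sketch (mine, not from the original post) that plugs numbers into Stokes' Law; the densities, viscosity, and particle size below are rough guesses for wort and a yeast floc, chosen only to show the units working out.

```python
def stokes_settling_velocity(rho_p, rho_f, mu, radius, g=9.8):
    """Settling velocity (m/s) per Stokes' Law: V = (2/9) * (rho_p - rho_f) / mu * g * R^2."""
    return (2.0 / 9.0) * (rho_p - rho_f) / mu * g * radius**2

# Illustrative values only: a ~50-micron-diameter yeast floc in wort (all assumed).
v = stokes_settling_velocity(
    rho_p=1100.0,   # particle density, kg/m^3 (assumed)
    rho_f=1040.0,   # wort density, kg/m^3 (assumed)
    mu=2e-3,        # wort viscosity, kg/(m*s) (assumed)
    radius=25e-6,   # particle radius, m
)
print(f"{v * 1000:.3f} mm/s")  # doubling the radius would quadruple this
```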
### Pumpkin Pie Ale

Origin: Pretty self-explanatory. I wanted to make a seasonal pumpkin ale for around Halloween and Thanksgiving. Since pumpkin pie spice is the ingredient that elicits that response, guess what it's called?

Ingredients:

• 9 lbs Light Malt Extract
• 2.0 oz. Willamette hops (5% alpha-acid, whole leaf)
• White Labs East Coast Ale Yeast
• 1 lb. Caramunich (65°L, Crushed)
• 4 oz. Belgian Aromatic Malt
• 4 oz. German Melanoidin Malt (30°L, Crushed)
• 1/2 Tsp. Pumpkin Pie Spice

Procedure: On Sunday, September 20th, I purchased all my ingredients through Main Street Homebrew Supply Co. and brewed that day. INSERT STANDARD BREWING PROCEDURE HERE: Yada Yada Yada... If you are curious about what I did, read my previous posts where I go into my method of brewing in depth. I think I will mainly include deviations from standard brewing from now on. The ingredients and timing are also important, so those will be included in all my write-ups, as well as anything different. As you can see to the left, the standard generic boil photo.

One thing I did differently this time was to add Irish moss during the last 10 minutes of the boil. For you veteran brewers out there this seems like a rookie mistake, but my beer brewing teacher did not use it, so I never knew about it until I really started reading brewing literature. For those not-so-veteran brewers: Irish moss, or Chondrus crispus, is a species of red algae that grows around the coast of the Atlantic Ocean. The reason for using this seaweed is that it acts as a flocculation agent and clears your beer. Read my post about Irish moss for the science that describes why this happens.

Hops/Ingredients Schedule:

| Quantity | Ingredient | Boiled for |
|---|---|---|
| 1 oz. | Willamette Hops | Entire 60 min. boil |
| 1/2 oz. | Willamette Hops | Last 20 min. of boil |
| 1/2 oz. | Willamette Hops | Last 10 min. of boil |
| 1 tab | Irish Moss | Last 10 min. of boil |
| 1/2 tsp. | Pumpkin Pie Spice | Last 10 min. of boil |

For my Irish moss I used the Whirlfloc tabs, which some brewers scream about, saying they don't work and that the unmolested dried Irish moss is so much better and purer. I have never used the raw dried Irish moss, so I cannot speak to its efficacy, but what I can tell you is that the Whirlfloc tabs definitely work. Within 10 minutes of transferring my wort to the primary fermenter there were 2 inches of cloudy trub at the bottom of my fermenter. I have never seen this effect in my beers before. Within a day the suspended proteins had settled out completely. I used the tabs because they are a lot easier: you just toss a tab in the last 10 minutes of the boil and forget about it. I don't know if I will eventually become a purist and switch to dried, but for now I am amazed at how well the tabs worked, so I will stay with that.

Pumpkin pie spice is very strong and you don't need very much. This is a "less is more" situation where you can add more to your secondary, but you don't want to add too much up front. In about 2 weeks I will taste the beer and determine if it tastes like Thanksgiving and whether it needs more spice.

PUMPKIN NOTE: I'll bet at least one of you is wondering why there is no pumpkin in my Pumpkin Ale. This is because pumpkin is the tofu of fruit. It has almost no flavor by itself and takes on whatever flavor it is spiced to. When you taste pumpkin pie, almost none of the flavor is from the pumpkin. If you don't believe me, when you're carving pumpkins this Halloween go ahead and take a bite out of the Jack-O-Lantern's eye and let me know how it tastes.

## Sunday, September 12, 2010

### Kegerator Upgrades

I recently made some minor functional and major cosmetic upgrades to my kegerator. Functionally, I moved the CO2 tank to the outside of the kegerator. This was accomplished with a brass pipe placed through a hole in the wooden collar, with a 90° elbow and a barbed fitting on both ends (Outside: Left, Inside: Right). I placed a washer to take up some slack and to make a better insulated pass-through.
This modification was made because I was having trouble with the tank not producing CO2 fast enough; it would sometimes take almost a minute to produce enough CO2 for the initial air purge when kegging (which has to be done 3-4 times). Another reason to move the tank outside is that I recently acquired a 2.5 gallon Cornelius keg, and the mini corny and the gas tank would not both fit in the kegerator. This was an amazing find, since these little guys are extremely rare compared to their 5 gallon counterparts. I found this one on craigslist for $75. A normal 5 gallon corny keg sells for between $30 and $40, where a 2.5 gallon one can sell upwards of $150, demonstrating the simple economics of supply and demand.
The cosmetic modifications were pretty major also.
I finally cut a hole for the thermocouple to pass through; it can be seen next to the barbed fitting on the inside of the chest. I finally got tired of looking at it just dangling over the edge. The other modification I made was to clean up the unruly routing of the gas and product lines. The easiest way I could think to do this was to orient the kegs so the gas lines were on the outside and the product lines in the center. This turned out to work great, and the results can be seen below.
## Friday, September 10, 2010
### Deschutes Brewery Tour
This weekend I went up to Bend for a wedding (which was ridiculous), and while there I decided it would be a good idea to take a tour of Deschutes Brewery. While waiting for the tour to start we sat in the tasting room and were given 4 free samples, equivalent to about a flight. I believe they do this to facilitate learning on the tour.
After the tour started we were directed to look at the two MASSIVE silos in front of the building that contained malted barley. All the spent grain is recycled in some way whether it is in their bakery, that supplies the brew pub with buns and bread, or taken to a farmer that uses it as feed for cattle.
Once inside the building I saw two gigantic tanks, which are the tuns for the original brew house (right: inside the mash tun) and are still in use today. The next stop on the tour is the employee break room, where each employee gets refreshing beverages after their shift. There is a closet with rotating tap handles for employees' enjoyment, and to the right of this fun door is a much larger door. The next passage to Narnia is the Hop Room, where hundreds and hundreds of pounds of whole leaf hops are stored, waiting to be dumped into hot wort.
Deschutes Brewery uses primarily Cascade hops in their beer according to our less than helpful ditz of a tour guide. (Shown left molesting bags of hops.)
The next stop on the tour is the Huppmann room. This room is filled with possibly the most impressive pieces of industrial equipment I've ever seen: gigantic tanks with conical tops and not a single seam on them. No seams/welds are important in brewing equipment because bacteria can get trapped in the rough areas of the welds, causing the beer to have undesired flavors. These tanks were designed, built, and shipped from Germany to Bend. They had to close down several sections of the highway to transport the tanks to their final resting place at the brewery. (Note: the pictures at the right are not of Deschutes; a bunch of damn tourists were in the way.) After this we went upstairs and saw the lab and blind tasting room, continuing on to the fermenting room. Pictures below:
Following the fermentation room was the bottling room; it seemed a little small for the thousands of bottles that get passed through those doors. After the bottling room we went upstairs to the office section, where every Jubelale bottle design is put on the wall.
|
2018-04-27 08:11:39
|
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 1, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.27367880940437317, "perplexity": 4442.370248027607}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2018-17/segments/1524127095762.40/warc/CC-MAIN-20180427075937-20180427095937-00542.warc.gz"}
|
https://zbmath.org/?q=0815.42012
|
## Some results on biorthogonal polynomials.(English)Zbl 0815.42012
Starting from the Christoffel determinant formula, which gives an expression for the orthogonal polynomials that arise from a polynomial modification of a weight function, the author gives a biorthogonal pair $$\{\psi_n(z)\}$$, $$\{\phi_n(z)\}$$ on the unit circle with respect to the measure $d\nu(\theta)= w(z)\, d\theta= z^{-m} (z- \alpha_1)(z- \alpha_2)\cdots (z-\alpha_h)\, d\theta,\quad z= e^{i\theta},\quad \alpha_j\neq 0.$ Specifying $w(z)= \frac{(qz; q^2)_{\infty}\, (qz^{-1}; q)_{\infty}}{(aqz; q^2)_{\infty}\, (bqz^{-1}; q^2)_{\infty}},$
the author recovers a result by P. I. Pastro [J. Math. Anal. Appl. 112, 517-540 (1982; Zbl 0582.33010)].
Finally, the author turns to a measure which is not necessarily positive on the unit circle, but for which there nevertheless exists a unique pair of biorthogonal sets of polynomials on the unit circle (in order to achieve this, certain Toeplitz determinants have to be non-zero). Now the modification uses so-called Laurent polynomials of the special form $$z^{-m} G_{2m}(z)$$ and $$z^{-(m+1)} G_{2m+1}(z)$$ with $$G_{2m}$$, $$G_{2m+1}$$ polynomials of exact degree $$2m$$, $$2m+1$$, respectively, non-vanishing for $$z= 0$$. In order to have the right degree pattern for the biorthogonal polynomials, certain minors of determinants in the paper have to be non-zero.
None of the determinants governing existence and uniqueness is given explicitly in the paper.
### MSC:
42C05 Orthogonal functions and polynomials, general theory of nontrigonometric harmonic analysis
33C45 Orthogonal polynomials and functions of hypergeometric type (Jacobi, Laguerre, Hermite, Askey scheme, etc.)
Zbl 0582.33010
|
2022-08-11 01:45:23
|
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 1, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.6878606081008911, "perplexity": 467.9797389030881}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": false}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2022-33/segments/1659882571232.43/warc/CC-MAIN-20220811012302-20220811042302-00771.warc.gz"}
|
https://www.semanticscholar.org/paper/Comets-in-Islamic-Astronomy-and-Astrology-Kennedy/b46344279229b9549b52299e5ad4b0342e67c62b
|
# Comets in Islamic Astronomy and Astrology
@article{Kennedy1957CometsII,
title={Comets in Islamic Astronomy and Astrology},
author={Edward S. Kennedy},
journal={Journal of Near Eastern Studies},
year={1957},
volume={16},
pages={44 - 51}
}
• E. S. Kennedy
• Published 1 January 1957
• Physics
• Journal of Near Eastern Studies
THIS paper is a collection and analysis of all references in medieval Arabic sources to "tailed stars" (Arabic kawākib mudhanniba, kawākib dhawāt al-adhnāb) presently available to the author. His attention was first attracted to the topic by passages in a recent publication of Thorndike [1] and in a much older notice by Lee [2]. It will be seen that the eleven sources in which the references occur are spread fairly evenly, in point of date, from early Abbasid times (the ninth century A.D.) through the…
10 Citations
The Fragment of Al-Kindī's Lost Treatise on Observations of Halley's Comet in A.D. 837
1. Introduction. In the first half of the ninth century, a generation of Muslim scholars emerged under the support of the Abbasid Caliphs in Baghdad, among whom the great polymath al-Kindī stands tall.
Caught in the spotlight: the 1858 comet and late Tokugawa Japan
Abstract In the fall of 1858 a large comet crossed the skies of Japan. Casting its light over a society in turmoil, torn apart by political factionalism and threatened by a virulent cholera epidemic,
No evidence for an early seventeenth‐century Indian sighting of Kepler's supernova (SN1604)
In a recent paper in this journal, Sule et al. (2011) argued that an early 17th-century Indian mural of the constellation Sagittarius with a dragon-headed tail indicated that the bright supernova of
The lunar theories of al-Baghdādī
Conclusions: We have seen that the first two of al-Baghdādī's three methods originate from his predecessors Ḥabash and Ya
A Survey of Muslim Material on Comets and Meteors
There are numerous literary sources to be found in Arabic literature explaining the astronomy of comets and meteors.
|
2022-08-08 10:47:07
|
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 1, "mathjax_asciimath": 1, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.4551333785057068, "perplexity": 14021.506847884786}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2022-33/segments/1659882570793.14/warc/CC-MAIN-20220808092125-20220808122125-00266.warc.gz"}
|
https://www.acmicpc.net/problem/12314
|
Time Limit | Memory Limit | Submissions | Accepted | Solvers | Acceptance Rate
5 sec | 512 MB | 0 | 0 | 0 | 0.000%
## Problem
Given a list X, consisting of the numbers (1, 2, ..., N), an increasing subsequence is a subset of these numbers which appears in increasing order, and a decreasing subsequence is a subset of those numbers which appears in decreasing order. For example, (5, 7, 8) is an increasing subsequence of (4, 5, 3, 7, 6, 2, 8, 1).
Nearly 80 years ago, two mathematicians, Paul Erdős and George Szekeres proved a famous result: X is guaranteed to have either an increasing subsequence of length at least sqrt(N) or a decreasing subsequence of length of at least sqrt(N). For example, (4, 5, 3, 7, 6, 2, 8, 1) has a decreasing subsequence of length 4: (5, 3, 2, 1).
I am teaching a combinatorics class, and I want to "prove" this theorem to my class by example. For every number X[i] in the sequence, I will calculate two values:
A[i]: The length of the longest increasing subsequence of X that includes X[i] as its largest number.
B[i]: The length of the longest decreasing subsequence of X that includes X[i] as its largest number.
The key part of my proof will be that the pair (A[i], B[i]) is different for every i, and this implies that either A[i] or B[i] must be at least sqrt(N) for some i. For the sequence listed above, here are all the values of A[i] and B[i]:
 i | X[i] | A[i] | B[i]
---+------+------+-----
 0 |  4   |  1   |  4
 1 |  5   |  2   |  4
 2 |  3   |  1   |  3
 3 |  7   |  3   |  4
 4 |  6   |  3   |  3
 5 |  2   |  1   |  2
 6 |  8   |  4   |  2
 7 |  1   |  1   |  1
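As a sanity check on the definitions (this helper is mine, not part of the problem statement), A and B can be computed from a known X with a straightforward O(N²) dynamic program:

```python
def lis_lds_tables(X):
    """A[i]: longest increasing subsequence with X[i] as its largest (last) element.
       B[i]: longest decreasing subsequence with X[i] as its largest (first) element."""
    n = len(X)
    A = [1] * n
    B = [1] * n
    for i in range(n):                      # increasing subsequences ending at i
        for j in range(i):
            if X[j] < X[i]:
                A[i] = max(A[i], A[j] + 1)
    for i in range(n - 1, -1, -1):          # decreasing subsequences starting at i
        for j in range(i + 1, n):
            if X[j] < X[i]:
                B[i] = max(B[i], B[j] + 1)
    return A, B

print(lis_lds_tables([4, 5, 3, 7, 6, 2, 8, 1]))
# ([1, 2, 1, 3, 3, 1, 4, 1], [4, 4, 3, 4, 3, 2, 2, 1]) -- matches the table above
```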
I came up with a really interesting sequence to demonstrate this fact with, and I calculated A[i] and B[i] for every i, but then I forgot what my original sequence was. Given A[i] and B[i], can you help me reconstruct X?
X should consist of the numbers (1, 2, ..., N) in some order, and if there are multiple sequences possible, you should choose the one that is lexicographically smallest. This means that X[0] should be as small as possible, and if there are still multiple solutions, then X[1] should be as small as possible, and so on.
## Input
The first line of the input gives the number of test cases, T. T test cases follow, each consisting of three lines.
The first line of each test case contains a single integer N. The second line contains N positive integers separated by spaces, representing A[0], A[1], ..., A[N-1]. The third line also contains N positive integers separated by spaces, representing B[0], B[1], ..., B[N-1].
Limits
• 1 ≤ T ≤ 30.
• It is guaranteed that there is at least one possible solution for X.
• 1 ≤ N ≤ 2000.
## Output
For each test case, output one line containing "Case #x: ", followed by X[0], X[1], ... X[N-1] in order, and separated by spaces.
## Sample Input 1
2
1
1
1
8
1 2 1 3 3 1 4 1
4 4 3 4 3 2 2 1
## Sample Output 1
Case #1: 1
Case #2: 4 5 3 7 6 2 8 1
## Judging
• Samples are not judged.
|
2018-08-15 22:32:34
|
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 1, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.22079698741436005, "perplexity": 1519.8947684038683}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2018-34/segments/1534221210362.19/warc/CC-MAIN-20180815220136-20180816000136-00470.warc.gz"}
|
http://ikeyword.net/query/linear-equation
|
# Results for keyword: linear equation
## Top URL related to linear equation
1. Text link: Linear Equations - Maths Resources
Domain: mathsisfun.com
Description: Another special type of linear function is the Constant Function ... it is a horizontal line: f(x) = C No matter what value of "x", f(x) is always equal to some constant value.
2. Text link: Linear equation - Wikipedia
Domain: en.wikipedia.org
Description: In mathematics, a linear equation is an equation that may be put in the form a_1 x_1 + ⋯ + a_n x_n + b = 0, where x_1, …, x_n are the variables (or unknowns or indeterminates), and b, a_1, …, a_n are the coefficients, which are often real numbers.
3. Text link: Linear Equations - Free Math Help
Domain: freemathhelp.com
Description: Simple Definition of Linear Equation: An equation that forms a straight line on a graph. More precisely, a linear equation is one that is dependent only on constants and a variable raised to the first power.
Domain: eduplace.com
Description: Linear Equations. A linear equation looks like any other equation. It is made up of two expressions set equal to each other. A linear equation is special because: It has one or two variables. No variable in a linear equation is raised to a power greater than 1 or used as the denominator of a fraction.
Description: Linear equations like y = 2x + 7 are called "linear" because they make a straight line when we graph them. These tutorials introduce you to linear relationships, their graphs, and functions. Learn for free about math, art, computer programming, economics, physics, chemistry, biology, medicine, finance, history, and more.
6. Text link: System of linear equations - Wikipedia
Domain: en.wikipedia.org
Description: The equations of a linear system are independent if none of the equations can be derived algebraically from the others. When the equations are independent, each equation contains new information about the variables, and removing any of the equations increases the size of the solution set.
7. Text link: Algebra - Linear Equations
Domain: tutorial.math.lamar.edu
|
2018-12-12 13:04:41
|
{"extraction_info": {"found_math": false, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.826219379901886, "perplexity": 727.5927918580196}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2018-51/segments/1544376823872.13/warc/CC-MAIN-20181212112626-20181212134126-00094.warc.gz"}
|
https://brettklamer.com/diversions/statistical/print-r-matrix-in-latex/
|
# Print R Matrices in LaTeX Math Environments
This is a quick and easy method to print R matrices in LaTeX math environments. It’s based on the R function found here. (The newline at the end of the bmatrix should be removed)
Use this function
bmatrix = function(x, digits=NULL, ...) {
  # xtable converts the matrix into LaTeX tabular rows.
  library(xtable)
  # Defaults: emit only the cell contents -- no names, rules, or comments.
  default_args = list(include.colnames=FALSE, only.contents=TRUE,
                      include.rownames=FALSE, hline.after=NULL, comment=FALSE,
                      print.results=FALSE)
  passed_args = list(...)
  # Arguments passed by the caller override the defaults above.
  calling_args = c(list(x=xtable(x, digits=digits)),
                   c(passed_args,
                     default_args[setdiff(names(default_args), names(passed_args))]))
  # Wrap the generated rows in a bmatrix environment and print them.
  return(cat("\\begin{bmatrix}\n",
             do.call(print.xtable, calling_args),
             "\\end{bmatrix}"))
}
And then insert the matrix as seen here
$$
a =
<<results='asis', echo=FALSE>>=
bmatrix(a)
@
$$
Grab the .Rnw example file here. It should compile into something like this
Last updated 2014-09-09
|
2020-11-28 23:08:22
|
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 1, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 1, "x-ck12": 0, "texerror": 0, "math_score": 0.7103757262229919, "perplexity": 10796.968546044865}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.3, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2020-50/segments/1606141195929.39/warc/CC-MAIN-20201128214643-20201129004643-00314.warc.gz"}
|
http://mymathforum.com/complex-analysis/30911-how-solve-integral.html
|
My Math Forum How to solve this integral?
Complex Analysis Math Forum
October 14th, 2012, 07:47 AM #1 Newbie Joined: Oct 2012 Posts: 3 Thanks: 0 How to solve this integral? So, it says: using the Cauchy integral test, determine convergence; the integral is http://imageshack.us/photo/my-images/38/intja.jpg/ I tried, and it seems that it cannot be solved (just going round and round). Any idea on this?
October 14th, 2012, 02:24 PM #2 Newbie Joined: Oct 2012 Posts: 3 Thanks: 0 Re: How to solve this integral? Anyone?
October 14th, 2012, 10:44 PM #3 Senior Member Joined: Aug 2011 Posts: 333 Thanks: 8 Re: How to solve this integral? 1/(n * ln(n)^3 * ln(ln(n))^2) < 1/(n * ln(n)^2). Integral of dx/(x * ln(x)^2) = -1/ln(x), which converges for x -> infinity.
October 15th, 2012, 09:19 AM #4 Newbie Joined: Oct 2012 Posts: 3 Thanks: 0 Re: How to solve this integral? Thnx
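Spelling out the comparison from post #3 (the LaTeX below is editorial formatting of that argument, not part of the original thread):

```latex
% For n >= 16 we have \ln n \ge 1 and (\ln\ln n)^2 \ge 1, hence
\[
\frac{1}{n\,(\ln n)^3\,(\ln\ln n)^2} \le \frac{1}{n\,(\ln n)^2},
\]
% and with the substitution u = \ln x (so du = dx/x),
\[
\int_2^\infty \frac{dx}{x\,(\ln x)^2}
  = \int_{\ln 2}^\infty \frac{du}{u^2}
  = \frac{1}{\ln 2} < \infty,
\]
% so the original series converges by the comparison and integral tests.
```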
|
2017-04-29 21:17:03
|
{"extraction_info": {"found_math": false, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.839652419090271, "perplexity": 11065.700113934046}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2017-17/segments/1492917123590.89/warc/CC-MAIN-20170423031203-00126-ip-10-145-167-34.ec2.internal.warc.gz"}
|
https://www.neetprep.com/questions/54-Chemistry/650-Thermodynamics?courseId=8&testId=1061965-Past-Year----MCQs&subtopicId=87-Spontaneity--Entropy
|
For a sample of perfect gas when its pressure is changed isothermally from pi to pf, the entropy change is given by
(1) ΔS = nR ln(pf/pi)
(2) ΔS = nR ln(pi/pf)
(3) ΔS = nRT ln(pf/pi)
(4) ΔS = RT ln(pf/pi)
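For reference, a short derivation of the isothermal entropy change for an ideal gas (this working is my addition; the site's own explanation sits behind the course paywall):

```latex
% Isothermal reversible process of an ideal gas: dU = 0, so dq_rev = p\,dV = (nRT/V)\,dV.
\[
\Delta S = \int \frac{dq_{\mathrm{rev}}}{T}
         = nR \int_{V_i}^{V_f} \frac{dV}{V}
         = nR \ln\frac{V_f}{V_i}
         = nR \ln\frac{p_i}{p_f},
\]
% using Boyle's law p_i V_i = p_f V_f at constant T; this matches option (2).
```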
|
2023-01-29 15:27:35
|
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 4, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.6692955493927002, "perplexity": 9486.005792242271}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2023-06/segments/1674764499744.74/warc/CC-MAIN-20230129144110-20230129174110-00309.warc.gz"}
|
http://www.smarty.net/forums/viewtopic.php?t=24747
|
Smarty The discussions here are for Smarty, a template engine for the PHP programming language.
marchyang
Smarty n00b
Joined: 21 Jan 2014
Posts: 3
Posted: Tue Jan 21, 2014 4:05 am Post subject: Smarty 3.1.6 with IIS 7.5 & 8.0 for Windows Server 2008R Can anyone tell me how to solve this problem? Fatal error: Uncaught --> Smarty: unable to write file .\templates_c\wrt52dde9fe9c8358.95452229 <-- thrown in C:\inetpub\wwwroot\libs\sysplugins\smarty_internal_write_file.php on line 44. I have tried to install Smarty on 2008R2 & 2012R2 with PHP 5.5 and IIS 7.5 & 8.0 via Web Platform Installer 4.6.2 from Microsoft. I am sure PHP is working, since phpinfo() displays the PHP 5.5 information. The message comes from the lib & demo folders of the Smarty 3.1.6 download. I extracted them and copied them to c:\inetpub\wwwroot. The problem is that even if I uncheck read-only for the templates_c folder and grant write permission to IIS_IUSRS, neither step solves the problem. Any help is appreciated!
mohrt
Joined: 16 Apr 2003
Posts: 7362
Posted: Tue Jan 21, 2014 2:22 pm Post subject: Search the forum for windows and write permissions, windows can be stubborn. Here is a start: http://www.smarty.net/forums/viewtopic.php?t=13821
marchyang
Smarty n00b
Joined: 21 Jan 2014
Posts: 3
Posted: Wed Jan 22, 2014 9:37 am Post subject: I fixed this problem finally... Thanks everybody. I studied the following URL http://support.microsoft.com/default.aspx?scid=kb;en-us;Q271071 and checked with Windows Server 2012. I wondered what the difference between IIS_IUSRS and IUSR_ is, and the answer is that granting permission to IIS_IUSRS was wrong; instead I should grant write permission to IUSR, the new anonymous account created in Windows Server 2012, and restart IIS. Then I browsed http://localhost/demo/index.php and it works. I don't understand the reason why, but my guess is that when users browse the page, IIS needs write permission for the anonymous user, not for IIS_IUSRS itself, to write files during the session.
mohrt
Joined: 16 Apr 2003
Posts: 7362
Posted: Wed Jan 22, 2014 6:07 pm Post subject: Thanks for the info, I'm making this one sticky.
AmandaPratt184
Smarty Rookie
Joined: 02 Jan 2015
Posts: 5
Posted: Wed Feb 25, 2015 5:58 am Post subject: Good idea Nice idea, one of our team members was struggling with the same thing. This would help him. Thanks.
AnrDaemon
Joined: 03 Dec 2012
Posts: 927
Posted: Wed Feb 25, 2015 5:51 pm Post subject: Re: I fixed this problem finally...
marchyang wrote: I don't understand the reason why, but maybe I think when users browser the page, then IIS needs write permission for anonymous user but not IIS_IUSRS itself to write files during the session.
This is called "privilege separation".
ColleenPeterson521
Smarty n00b
Joined: 22 Jan 2015
Posts: 2
Posted: Mon Mar 09, 2015 10:05 am Post subject: Hey, thanks for the great answers. Umm, does anyone have any idea where I can find complete info on setting privileges and permissions? And maybe some use cases as well.
Dwza
Smarty Rookie
Joined: 09 Jan 2015
Posts: 5
Posted: Tue Feb 07, 2017 10:06 am Post subject: Similar problem with WinSrv2012r2, but this doesn't solve it. Maybe take a look at my post http://www.smarty.net/forums/viewtopic.php?t=26719
|
2017-02-25 16:07:58
|
{"extraction_info": {"found_math": false, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.8710628151893616, "perplexity": 11598.0827547222}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2017-09/segments/1487501171781.5/warc/CC-MAIN-20170219104611-00219-ip-10-171-10-108.ec2.internal.warc.gz"}
|
http://www-cs.stanford.edu/events/isl-seminar-delay-memory-and-messaging-tradeoffs-distributed-service-system-john-tsitsiklis
|
ISL Seminar

Title: Delay, memory, and messaging tradeoffs in a distributed service system
Speaker: John Tsitsiklis (MIT)
Date: November 9, 2017
Time: 4:15pm
Location: Packard 101

Abstract: We consider the classical supermarket model: jobs arrive as a Poisson process of rate $\lambda N$, with $0 < \lambda < 1$, and are to be routed to one of $N$ identical servers with unit-mean, exponentially distributed processing times. We review a variety of policies and architectures that have been considered in the literature, and which differ in terms of the direction and number of messages that are exchanged, and the memory that they employ; for example, the ''power-of-$d$-choices'' or pull-based policies. In order to compare policies of this kind, we focus on the resources (memory and messaging) that they use, and on whether the expected delay of a typical job vanishes as $N$ increases.

We show that if (i) the message rate increases superlinearly, or (ii) the memory size increases superlogarithmically, as a function of $N$, then there exists a policy that drives the delay to zero, and we outline an analysis using fluid models. On the other hand, if neither condition (i) nor (ii) holds, then no policy within a broad class of symmetric policies can yield vanishing delay.

Joint work with D. Gamarnik and M. Zubeldia.
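As a toy illustration of the supermarket model (entirely my own sketch, not material from the talk), the following simulation routes Poisson arrivals to the least-loaded of d randomly sampled servers and reports the average queue length an arriving job finds; d plays the role of the per-job message budget.

```python
import heapq
import random

def simulate_supermarket(n_servers=200, lam=0.9, d=2, n_jobs=200_000, seed=1):
    """Discrete-event sketch: Poisson(lam * n_servers) arrivals, each job samples
    d servers uniformly and joins the shortest of their queues (power-of-d-choices);
    service times are Exp(1). Returns the mean queue length seen on arrival."""
    rng = random.Random(seed)
    queues = [0] * n_servers           # jobs present (queued + in service)
    departures = []                    # min-heap of (finish_time, server)
    t, seen = 0.0, 0
    for _ in range(n_jobs):
        t += rng.expovariate(lam * n_servers)        # next arrival instant
        while departures and departures[0][0] <= t:  # flush earlier departures
            dep, s = heapq.heappop(departures)
            queues[s] -= 1
            if queues[s] > 0:                        # next queued job starts at dep
                heapq.heappush(departures, (dep + rng.expovariate(1.0), s))
        s = min(rng.sample(range(n_servers), d), key=queues.__getitem__)
        seen += queues[s]
        queues[s] += 1
        if queues[s] == 1:                           # server was idle: begin service
            heapq.heappush(departures, (t + rng.expovariate(1.0), s))
    return seen / n_jobs

# d = 1 is plain random routing; d = 2 already shortens queues dramatically.
for d in (1, 2):
    print(d, round(simulate_supermarket(d=d), 2))
```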
|
2017-11-24 07:02:01
|
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.442536324262619, "perplexity": 904.7990869278565}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 20, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2017-47/segments/1510934807146.16/warc/CC-MAIN-20171124070019-20171124090019-00321.warc.gz"}
|
http://codeforces.com/blog/entry/22019
|
### craus's blog
By craus, 5 years ago,
606A - Magic Spheres. Let's count how many spheres of each type are lacking relative to the goal; we must do at least that many transformations. Let's also count how many spheres of each type are extra relative to the goal. Every two extra spheres give us the opportunity to do one transformation, so to find how many transformations can be done from a given type of sphere, take the number of extra spheres, divide it by 2 and round down. Now sum the transformation opportunities over all types, and sum the lacks. If there are at least as many transformation opportunities as there are lacking spheres, the answer is positive; otherwise it's negative.
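In code, the check is a few lines; this sketch is mine and assumes the input of a b c on the first line and x y z on the second, with a Yes/No answer:

```python
# 606A: two spheres of one color can be turned into one sphere of another color.
a, b, c = map(int, input().split())   # spheres we have
x, y, z = map(int, input().split())   # spheres we need

transform_budget = deficit = 0
for have, need in zip((a, b, c), (x, y, z)):
    if have >= need:
        transform_budget += (have - need) // 2  # two extras fund one transformation
    else:
        deficit += need - have                  # each missing sphere costs one

print("Yes" if transform_budget >= deficit else "No")
```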
606B - Testing Robots. Let's prepare a matrix where, for each cell, we hold the moment at which the robot visits it for the first time while moving along its route. To find these values, let's follow the whole route. Each time we move to a cell we have never visited before, we save into the corresponding matrix cell how many actions have been done so far. Let's also prepare an array of counters in which, for each possible number of actions, we hold how many variants there are where the robot explodes after exactly that number of actions.
Now let's iterate through all possible cells where the mine could be placed. For each cell, if it wasn't visited by the robot, add one variant of N actions, where N is the total length of the route. If it was, add one variant with as many actions as written in this cell (the moment of time when it was first visited): if there is a mine in this cell, the robot explodes just after first visiting it.
The array of counters is now the answer to the problem.
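A sketch of that counting (my illustration; the grid dimensions, start cell, and "moves off the grid are ignored" behavior are assumptions for illustration, not the verbatim 606B input format):

```python
# Core of 606B: first-visit times per cell, then one explosion variant per mine cell.
def count_variants(width, height, start, route):
    moves = {"L": (-1, 0), "R": (1, 0), "U": (0, -1), "D": (0, 1)}
    first_visit = {}                 # cell -> number of actions when first visited
    x, y = start
    first_visit[(x, y)] = 0
    for step, m in enumerate(route, start=1):
        dx, dy = moves[m]
        nx, ny = x + dx, y + dy
        if 0 <= nx < width and 0 <= ny < height:  # assumed: off-grid moves ignored
            x, y = nx, ny
        first_visit.setdefault((x, y), step)
    counts = [0] * (len(route) + 1)  # counts[k] = placements exploding after k actions
    for cx in range(width):
        for cy in range(height):
            counts[first_visit.get((cx, cy), len(route))] += 1
    return counts

print(count_variants(3, 3, (0, 0), "RRD"))  # [1, 1, 1, 6] on this toy grid
```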
605A - Sorting Railway Cars. Let's suppose we removed from the array all the elements we would move. What remains? A sequence of consecutive numbers: a, a+1, …, b. The length of this sequence must be maximal to minimize the number of elements to move. Consider the array pos, where pos[p[i]] = i. Look at its subsegment pos[a], pos[a+1], …, pos[b]. This sequence must be increasing and, as mentioned above, its length must be maximal.
So we must find the longest subsegment of pos, where pos[a], pos[a+1], …, pos[b] is increasing.
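In code (a sketch of this scan; it assumes the input is n on one line and the permutation on the next, and prints the minimal number of cars to move):

```python
# 605A: answer = n - longest run of consecutive values a..b whose positions increase.
n = int(input())
p = list(map(int, input().split()))

pos = [0] * (n + 1)
for i, v in enumerate(p):
    pos[v] = i

best = run = 1
for v in range(2, n + 1):
    run = run + 1 if pos[v] > pos[v - 1] else 1  # extend run while positions increase
    best = max(best, run)

print(n - best)
```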
605C - Freelancer's Dreams. We can let our hero not to receive money or experience for some projects. This new opportunity does not change the answer. Consider the hero spent time T to achieve his dream. On each project he spent some part of this time (possibly zero). So the average speed of making money and experience was linear combination of speeds on all these projects, weighted by parts of time spent for each of the projects.
Let’s build the set P on the plane of points (x, y) such that we can receive x money and y experience per time unit. Place points (a[i], b[i]) on the plane. Add also two points (max(a[i]), 0) and (0, max(b[i])). All these points for sure are included to P. Find their convex hull. After that, any point inside or at the border of the convex hull would correspond to usage of some linear combination of projects.
Now we should select some point which the hero should use as the average speed of receiving money and experience during the whole time of achieving his dream. This point should lie (non-strictly) inside the convex hull. The dream is realized when we get to point (A,B). The problem lets us end up above or to the right of it, but that is no easier than reaching (A,B) itself. So let's direct a ray from (0,0) through (A,B) and find the last moment when this ray is still inside our convex hull. That point corresponds to the largest available speed of receiving resources in the direction of point (A,B), and its coordinates are the speeds of getting the two resources.
To find the point, we have to intersect the ray and the convex hull.
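An alternative route, also pointed out in the comments below, is linear-programming duality: minimizing the total time Σti subject to Σ ai·ti ≥ p and Σ bi·ti ≥ q has the dual max p·y1 + q·y2 subject to ai·y1 + bi·y2 ≤ 1, whose objective is concave in y1 once y2 is eliminated. A sketch (mine):

```python
# 605C via the LP dual: maximize f(y1) = p*y1 + q*y2 with y2 = min_i (1 - a_i*y1)/b_i.
def min_total_time(projects, p, q):
    lo, hi = 0.0, 1.0 / max(a for a, b in projects)  # keeps every 1 - a_i*y1 >= 0

    def f(y1):
        y2 = min((1.0 - a * y1) / b for a, b in projects)
        return p * y1 + q * y2

    for _ in range(200):               # ternary search on a concave function
        m1 = lo + (hi - lo) / 3
        m2 = hi - (hi - lo) / 3
        if f(m1) < f(m2):
            lo = m1
        else:
            hi = m2
    return f((lo + hi) / 2)

# Three projects (a_i, b_i) and goals p = q = 20; the optimum for this data is 5.0.
print(min_total_time([(6, 2), (1, 3), (2, 6)], 20, 20))
```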
605D - Board Game. Consider n vectors starting at points (a[i], b[i]) and ending at points (c[i], d[i]). Run BFS. On each of its stages we must be able to perform this operation: get the set of vectors starting inside the rectangle 0 ≤ x ≤ c[i], 0 ≤ y ≤ d[i], and never consider these vectors again. It can be managed like this. Compress the x-coordinates. For each x, hold the list of vectors whose first coordinate is x. Create a segment tree with the first coordinate as index and the second coordinate as value; the segment tree must be able to find the index of the minimum on a segment and to set a value at a point. Now suppose we have to find all the vectors with first coordinate from 0 to x and second coordinate from 0 to y. Find the index of the minimum in the segment tree on the segment [0, x]. This minimum points us to a vector (x,y) whose x is that index and whose y is the value of the minimum. Remove it from the list of vectors (adding it to the BFS queue) and set, at that index in the segment tree, the second coordinate of the next vector with the same first coordinate. Continue this way while the minimum on the segment remains at most y. So, on each step we find the list of not-yet-visited vectors in the query rectangle, and each vector is considered only once, after which it is deleted from the data structures.
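A sketch of just this extraction structure (my code; x-coordinates are assumed to be already compressed to 0..m-1), supporting "pop every point with x ≤ X and y ≤ Y, each point reported once":

```python
import math

class PointExtractor:
    """Segment tree over x; each leaf holds the smallest pending y at that x,
    backed by a per-x stack of remaining y's (smallest on top)."""
    def __init__(self, points, m):
        self.size = 1 << max(1, math.ceil(math.log2(m)))
        self.tree = [math.inf] * (2 * self.size)
        self.pending = [[] for _ in range(m)]
        for x, y in points:
            self.pending[x].append(y)
        for x in range(m):
            self.pending[x].sort(reverse=True)      # smallest y ends up on top
            if self.pending[x]:
                self._set(x, self.pending[x][-1])

    def _set(self, x, val):
        i = x + self.size
        self.tree[i] = val
        i //= 2
        while i:
            self.tree[i] = min(self.tree[2 * i], self.tree[2 * i + 1])
            i //= 2

    def pop_dominated(self, X, Y):
        """Remove and return all remaining points (x, y) with x <= X and y <= Y."""
        out = []
        while True:
            x = self._find(1, 0, self.size - 1, X, Y)
            if x is None:
                return out
            y = self.pending[x].pop()
            out.append((x, y))
            self._set(x, self.pending[x][-1] if self.pending[x] else math.inf)

    def _find(self, node, lo, hi, X, Y):
        # leftmost leaf in [0, X] whose stored minimum is <= Y
        if lo > X or self.tree[node] > Y:
            return None
        if lo == hi:
            return lo
        mid = (lo + hi) // 2
        left = self._find(2 * node, lo, mid, X, Y)
        if left is not None:
            return left
        return self._find(2 * node + 1, mid + 1, hi, X, Y)

ex = PointExtractor([(0, 5), (2, 1), (1, 3)], m=3)
print(ex.pop_dominated(2, 3))   # [(1, 3), (2, 1)]
print(ex.pop_dominated(2, 9))   # [(0, 5)] -- popped points never reappear
```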
605E - Intergalaxy Trips. A vertex is better the smaller the expected number of moves from it to the finish. The overall strategy is: if it is possible to move to a vertex better than the current one, move to it; otherwise stay in place. Just like in Dijkstra, we keep estimates of the answer for each vertex, and fix these estimates as final one by one, from the best vertices to the worst. On the first step we fix vertex N (the answer for it is zero). On the second step, the vertex from which it's easiest to reach N. On the third step, the vertex from which it's easiest to finish moving only through vertices determined on the first two steps, and so on. On each step we find the vertex which gives the best expected number of moves if we move from it only to vertices better than it, and then we fix this expected number: it cannot change from now on. For each not-yet-fixed vertex we keep an estimate of the expected time to reach the finish from it, taking into account the vertices we already know the answer for. We fix vertices in order of non-decreasing answer, so the answer for a vertex being estimated is never better than for the vertices already fixed. Let's see the expression for the expected time of getting to the finish from vertex x, given the tactic "move to the best of the i accessible vertices we know the answer for, or stay in place":
m(x) = p(x, v[0]) * ans(v[0]) + (1 − p(x, v[0])) * p(x, v[1]) * ans(v[1]) + (1 − p(x, v[0])) * (1 − p(x, v[1])) * p(x, v[2]) * ans(v[2]) + … + (1 − p(x, v[0])) * (1 − p(x, v[1])) * … * (1 − p(x, v[i−1])) * m(x) + 1
Here m(x) is the estimate for vertex x, p(a,b) is the probability that edge (a,b) exists, and ans(v) is the known answer for vertex v.
Note that m(x) is expressed in terms of itself, because there is a probability of staying in place.
We will keep estimating expression for each vertex in the form of m(x) = A[x] * m(x) + B[x].
For each vertex we keep A[x] and B[x]. This means that with some probabilities it is possible to move to some better vertex, and this opportunity contributes B[x] to the expected time; also, with some probability we have to stay in place, and this probability is A[x] (which is exactly the coefficient before m(x) in the expression).
So, on each step we select one currently non-fixed vertex v with minimal estimate, then fix it and do relaxation from it, refreshing estimates for the other vertices. When we refresh the estimate for some vertex x, we change its A[x] and B[x]. A[x] is reduced by A[x] * p(x,v), because the probability of staying in place must now account for the possibility of moving to v. B[x] is increased by A[x] * p(x,v) * ans(v), where A[x] is the probability that it's not possible to use any vertex better than v, A[x] * p(x,v) is the probability that it is also possible to use vertex v, and ans(v) is the answer we just fixed for vertex v. To calculate the value of the estimate for some vertex x, we can use the expression m(x) = A[x] * m(x) + B[x] and express m(x) from it. Exactly this m(x) is the value we should keep in the priority queue of our Dijkstra analogue, and exactly m(x) is the value to fix as the final answer for vertex x when this vertex is announced as the vertex with minimal estimate at the start of a step.
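A condensed sketch of the whole procedure (mine; it assumes the probability matrix has already been converted to floats in [0,1] and that vertex n−1 is the finish):

```python
# 605E sketch: fix expected times one vertex at a time, Dijkstra-style.
# A[x] = probability that no already-fixed vertex is usable from x on a given day;
# B[x] = accumulated contribution of moving to the best usable fixed vertex.
def expected_days(p):                 # p[i][j]: float probability edge (i, j) exists
    n = len(p)
    A = [1.0] * n
    B = [0.0] * n
    fixed = [False] * n
    ans = [0.0] * n
    for _ in range(n):
        best, best_m = None, None
        for x in range(n):
            if fixed[x]:
                continue
            if x == n - 1:
                est = 0.0             # the finish vertex costs nothing
            elif A[x] < 1.0:
                est = (B[x] + 1.0) / (1.0 - A[x])   # solve m = A*m + B + 1
            else:
                est = float("inf")    # no fixed vertex reachable from x yet
            if best is None or est < best_m:
                best, best_m = x, est
        fixed[best] = True
        ans[best] = best_m
        for x in range(n):            # relax: `best` is now the worst fixed option
            if not fixed[x]:
                B[x] += A[x] * p[x][best] * ans[best]
                A[x] *= 1.0 - p[x][best]
    return ans[0]
```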
• +75
» 5 years ago, # | +316 My solution to C: minimize t1 + t2 + … + tn subject to Σ ai ti ≥ p, Σ bi ti ≥ q, ti ≥ 0. The dual of this problem is max p * y1 + q * y2 subject to ai * y1 + bi * y2 ≤ 1. It's convex, so we can use a simple ternary search.
• » » 5 years ago, # ^ | ← Rev. 2 → +135 wow it's Too Simple!
• » » » 5 years ago, # ^ | ← Rev. 2 → -11 TooSimple*
• » » » 5 years ago, # ^ | +24 your joke is too outdated now :D
• » » » » 5 years ago, # ^ | +3 I actually find it Too Difficult now :)
• » » 5 years ago, # ^ | +8 Could you explain how do you get dual of the problem, please?
• » » » 5 years ago, # ^ | +50 http://web.mit.edu/15.053/www/AMP-Chapter-04.pdfYou can check this lecture. Duality in linear programming is very important in combinatorial optimization problems like graph matching and many approximation algorithms.
• » » 5 years ago, # ^ | +10 Although simplex works in time O(C(n + m, m)) it passed system tests.
• » » 5 years ago, # ^ | +6 Not Too Simple, it is Too Simplex
• » » 5 years ago, # ^ | 0 Not TooSimple but truly subtle!!
• » » 3 years ago, # ^ | 0 I'm not able to understand why the function is convex. Can someone please explain?
» 5 years ago, # | +14 Can anyone please tell me the complexity of Simplex Method to solve Linear Programming problems? I used that method to solve C, but I do not know its complexity.
» 5 years ago, # | ← Rev. 2 → 0 Why does the second test case of Div. 2 problem D (Div. 1 problem B) return -1?
IN:
3 3
1 0
2 1
3 1
OUT:
-1
Thanks
• » » 5 years ago, # ^ | +5 The graph consists of three vertices and three edges — there is an edge between each pair of vertices. For the MST we need to pick two of the edges so that their sum is minimum — therefore we pick the edges with weights 1 and 2. However, Vladislav didn't include the edge with weight 1 in his MST.
» » » 5 years ago, # ^ | 0 But why does this test case not return -1? An edge with weight 3 is not chosen, but one with 4 is. Shouldn't we choose the one with 3?
IN:
4 4
2 1
3 0
3 1
4 1
OUT:
1 2
2 3
1 3
1 4
• » » » » 5 years ago, # ^ | 0 If we look at the graph that the output creates, we can see that the only way that we can reach vertex 4 is by using the edge between vertices 1 and 4. There's no spanning tree on the graph that does not include the vertex with weight 4.
» » » » » 5 years ago, # ^ | 0 Isn't the following graph valid as an output for this example (supposing we take the edge with weight 3 instead of the one with 4)? 1 2, 2 3, 3 4, 1 3
• » » » » » » 5 years ago, # ^ | 0 Yes.The task was to reconstruct one of the graphs that match Vladislav's description. There are multiple different answers for sample 1. However for sample 2 no graph matches the description.
• » » 5 years ago, # ^ | 0 This is due to the Minimum-Cost Edge property of a MST.If the edge of a graph with the minimum cost e is unique, then this edge is included in any MST.Proof: if e was not included in the MST, removing any of the (larger cost) edges in the cycle formed after adding e to the MST, would yield a spanning tree of smaller weight.In the given graph, 1 is the minimum cost edge but it is not included in the MST, hence no graph is possible.Source: Wikipedia
» 5 years ago, # | 0 I really liked Div1 B, very original and different problem. The ones where you reconstruct a graph from some information are always interesting.
» 5 years ago, # | 0 During the contest I (mis)read Div2A as requiring that I turn (a,b,c) into exactly (x,y,z). Out of curiosity does anybody know how to solve that problem?
• » » 5 years ago, # ^ | 0 Oh facepalm I read it as this too. No wonder I couldn't solve it during the contest :/Because of this, I thought about it for a long time near the beginning--found out that you could get (-1 -1 -1) in 3 steps, (-1 -1 0) and permutations in 2 steps. But you can't get (-1 0 0) in 1 step, and therefore can't get (x-1 y z) within (-x — y — z + 1) steps. Couldn't figure out much more, so moved on to other problems.
» 5 years ago, # | ← Rev. 2 → -25 Am I right that in C the size of the convex hull on the worst-case test is somewhat less than n, and is of order …, where X = 10^6 is the limit on each coordinate?
» 5 years ago, # | 0 The solution for 606A Magic Spheres presented here is trivial, but is it really correct?I spent almost an hour during the contest figuring out how to balance the amount of rocks in each pile so that the final distribution is exactly equal to the final requirements.For example, what if a=b=c=1 and x=y=z=0??? In that case it would be impossible to reach this final distribution, even though your simplistic solution gives a positive answer.Can someone explain why the author's solution is NOT wrong? Or is the problem formulation incorrect?
• » » 5 years ago, # ^ | 0 "To make a spell that has never been seen before, he needs at least x blue, y violet and z orange spheres."
• » » » 5 years ago, # ^ | +4 Damn it...Charlie Foxtrot.
» 5 years ago, # | +3 I added edges in a different manner: all the MST edges were added as (1,2), (2,3), (3,4), ..., and then the non-MST edges were added as (1,3), (1,4), ..., (1,n), (2,4), ..., (2,n), ... Why is this wrong?
• » » 5 years ago, # ^ | 0 I did the same thing. Can't figure why this is wrong
• » » 5 years ago, # ^ | 0 Say, by your method the edges added were of weight 2 between 1,2 and 2,3, and weight 3 between 1,3. Let all other edges be in the MST. This does not give the correct spanning tree for the graph.
• » » » 5 years ago, # ^ | 0 I’m not sure I follow. What’s the input and what’s the graph?
• » » 5 years ago, # ^ | 0 Making the MST edges (1, 2), (2, 3), (3, 4), (4, 5), ... isn’t wrong, and my solution doing that got accepted.Make sure you increment the larger index only when you run out of the smaller index’s range, not the other way round: (1, 3), (1, 4), (2, 4), (1, 5), (2, 5), (3, 5), (1, 6), ... If you try to increment the larger index first, you’ll find that for some k you’ll try to add edge (1, k) before you add edge (k - 1, k), so you’ll skip it and go back to (2, 4), and eventually you’ll run out of space to add new edges and incorrectly conclude that the corresponding graph can’t exist.Also make sure to look at edges that are within the MST before edges that are outside it but have the same cost.
» 5 years ago, # | ← Rev. 2 → +37 D could also be solved with divide-and-conquer. The first step is to reduce the problem to a 0-1 BFS (we can always decrease our skills for free, and can cast one spell to change the skills from (a,b) to (c,d)). If we add all edges there will be O(n^2) 0-edges, but it's possible to add auxiliary vertices in such a way that far fewer 0-edges are needed, and a path in the new graph corresponds to a path in the old graph of the same length. So, how do we add the new vertices? Let's separate all initial vertices into two halves such that all vertices in the first half lie to the left of the vertices in the second half. There is a vertical line that divides the vertices in this way; denote its x-coordinate by x0. We add vertices with coordinates (x0, y0), (x0, y1), ..., (x0, yk), where y0 < y1 < ... < yk are all the y-coordinates of the whole set of vertices. For a vertex (x, y) such that x ≤ x0 we add an edge from (x0, y), and if x ≥ x0 it is an edge from (x, y) to (x0, y). We also add edges from (x0, yi) to (x0, yi-1) for every possible i. That's it; then we find the shortest path in this 0-1 graph.
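For reference, here is a minimal, generic 0-1 BFS sketch with a deque (the shortest-path step the comment above reduces the problem to). This is not the commenter's actual solution: the construction of the auxiliary graph is omitted and the adjacency-list format is my own assumption.

    from collections import deque

    def zero_one_bfs(adj, src):
        # adj[u] = list of (v, w) pairs with w in {0, 1}
        INF = float("inf")
        dist = {u: INF for u in adj}
        dist[src] = 0
        dq = deque([src])
        while dq:
            u = dq.popleft()
            for v, w in adj[u]:
                if dist[u] + w < dist[v]:
                    dist[v] = dist[u] + w
                    if w == 0:
                        dq.appendleft(v)   # 0-edges go to the front of the deque
                    else:
                        dq.append(v)       # 1-edges go to the back
        return dist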
» 5 years ago, # | ← Rev. 2 → +2 A simple search for the maximum increasing (consecutive) subsequence in problem C, and the answer is n - maxsub: 14742819
» 5 years ago, # | 0 Can someone please explain Div2 C in simple terms, and why we build the pos array as mentioned in the editorial? Please.
• » » 5 years ago, # ^ | 0 We are looking for the largest possible set of numbers that we will not touch during the sorting (longest increasing subsequence); every other element is going to be moved either to the front or to the back. If two numbers, let's say 4 and 5, are part of such a set, 5 must occur after 4. The fastest way to check this is if we have stored the locations of 4 and 5 in pos, so that we can simply check whether (pos[4] < pos[5]) is true. By iterating through the pos array we can easily find the longest segment where this condition holds for each pair of consecutive elements.
• » » » 5 years ago, # ^ | ← Rev. 2 → 0 Thanks for the explanation.
• » » » 12 months ago, # ^ | 0 Can you please explain how we are finding the Longest Increasing Subsequence in linear time?
• » » » » 12 months ago, # ^ | +1 This is a special case since the N elements are numbered from 1 to N. For example, sorting such an array requires a linear amount of work since we can simply assign every element to the position indicated by their number.In this task, we can find the longest increasing subsequence by checking the cars from 1 to N in a single loop. The standard LIS algorithm involves binary searching, but our pos array replaces the O(logN) searching with O(1) array lookups. Such an array could not be constructed if the element values were not bounded by N.
• » » » » » 12 months ago, # ^ | +1 It is not longest increasing subsequence. It is longest increasing subsequence WHERE EACH PAIR OF ADJACENT ELEMENTS DIFFER BY 1.
• » » » » » » » 10 months ago, # ^ | 0 I am confused :/ Is it the longest increasing subsegment (as written in the editorial) or the longest increasing subsequence? Or is it the longest increasing subsequence WHERE EACH PAIR OF ADJACENT ELEMENTS DIFFERS BY 1?
• » » » » » » » 10 months ago, # ^ | ← Rev. 2 → 0 It depends on the array (input / pos)
• » » » » » » » » 10 months ago, # ^ | 0 couldn't get till now :( maybe I should try something else and then try this again..
• » » » » » » » » » 7 weeks ago, # ^ | 0 actually we have to find the "LICS" -> longest increasing consecutive subsequence
• » » » » » 12 months ago, # ^ | 0 Got it thankyou
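For reference, here is a minimal Python sketch of the approach discussed in this thread (variable names are my own, not from the editorial): pos[v] is the position of car v, and the answer is n minus the longest run of consecutive values 1..n whose positions increase.

    def min_moves(cars):
        # cars is a permutation of 1..n
        n = len(cars)
        pos = [0] * (n + 1)
        for i, v in enumerate(cars):
            pos[v] = i                      # O(1) lookups instead of searching
        best = run = 1
        for v in range(2, n + 1):
            # extend the run only if value v stands after value v-1
            run = run + 1 if pos[v] > pos[v - 1] else 1
            best = max(best, run)
        return n - best                     # move everything outside the run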
» 5 years ago, # | +22 In Div2D, shouldn't the tie breaker be the other way round, placing the edges included in the MST earlier?
• » » 5 years ago, # ^ | 0 Yes, I think so. I got accepted with the tie breaker you said, and placing earlier the edges that we were asked not to include in the MST would fail (using the algorithm described) in cases like this:
3 3
1 0
1 1
1 1
I think x)
• » » 5 years ago, # ^ | 0 Thank you. Sure, the tie breaker should place MST edges first. It was a mistake in translation. Fixed.
» 5 years ago, # | ← Rev. 2 → 0 In div1 C, why do we need to add the points (max(a[i]), 0) and (0, max(b[i])) to our set of points? If I'm not wrong, is this done to ensure that our ray definitely intersects the convex hull?
• » » 5 years ago, # ^ | +5 yes
» 5 years ago, # | ← Rev. 3 → 0 I don't know, but it seems I can't understand test 8 in problem Div2A:
2 2 1
1 1 2
The jury's answer is No. But can't we simply transform one blue and one violet into one orange and get 1 1 2, which is required? So why is the answer No? Can anyone explain?
• » » 5 years ago, # ^ | 0 Ah, I got it: we must transform two of the same color. Now I get it.
» 5 years ago, # | +5 Can anyone explain the solution to Div-1 C given in the editorial a bit more? Why is the given problem solvable using a convex hull, and what is the intuition behind it?
• » » 5 years ago, # ^ | +3 Think of the distinct projects as vectors pi = (ai, bi). At time t you can reach any of the points t·pi, but you can also reach any convex combination of them. The set of points that can be obtained as convex combinations of a certain set of points is the convex hull of that set. Take into account that whenever it is possible to reach (x, y), you can also reach any (x', y') such that 0 ≤ x' ≤ x and 0 ≤ y' ≤ y (that is, the rectangle with corners in (0, 0) and (x, y)). You also need to add (0, 0), (0, maxy) and (maxx, 0) to the set {t·pi}1 ≤ i ≤ n. The set of points reachable at time t will be those contained in the convex hull. You can do binary search or some vector calculations to find the minimal t such that the target point lies inside it. Hope it's clear.
• » » » 5 years ago, # ^ | 0 Why does the editorial not mention anything regarding binary search? Is it using a different logic?
» 5 years ago, # | 0 605A: The solution says we need to find the longest increasing subsequence.... But what if the test case is n=100, all increasing, but say 40 and 41 are interchanged.... The length of the LIS would be 99.... How do we reach the answer (41 moves) from this?
• » » 7 months ago, # ^ | 0 For 605A (Sorting Railway Cars): the answer for the test case <7 1 2 3 4 5 6 8 9 10 11> is one, but I can't see how we can sort this series with one teleport. Can anyone help?
• » » » 2 months ago, # ^ | 0 how do you know the answer to this...??
» 5 years ago, # | +5 Div 1 D. Very nice question.. I never did a BFS using an RMQ tree before. Cool!! Thanks for the question
» 5 years ago, # | 0 Why should we sort the edges in ascending weight in problem B (Div 1)? Can someone help me understand?
» 5 years ago, # | 0 For Div1.E:"The overall strategy is: if it is possible to move to vertex better than current, you should move to it, otherwise stay in place."Can anybody prove it?
» 5 years ago, # | 0 For div2 C, some people did something seemingly very simple, but I have no idea why something like the following works:

    main() {
        cin >> n;
        for (int i = 1; i <= n; ++i) {
            cin >> k;
            need[k] = need[k - 1] + 1;
            ans = max(need[k], ans);
        }
        cout << n - ans;
    }
» 23 months ago, # | 0 Div1 A / Div2 C: has anyone tried using binary search? I have an idea, but I'm not getting how to go forward. Let us say the minimum number of moves required to sort the whole array is M. Clearly, for any number of moves less than M, we can't sort. Now, for any number of moves greater than M, we can always sort the whole array (keep removing an element from the end and placing it at the end). Thus, I think it is valid to say that the number of moves M can be binary searched. Now, I can't understand how to move forward, i.e. given a number of moves, how to check if sorting can be done? Also, I would like to know if my method is correct or not?
• » » 23 months ago, # ^ | ← Rev. 3 → 0 Maybe you can check all subarrays (after sorting) of size N-M (if you move M elements then N-M will be left) and check if it's possible to find a subsequence of size >= N-M that forms a subarray of the sorted array. Now think about how to do that... I haven't thought about it, but maybe this should work...
• » » » 23 months ago, # ^ | 0 Hmmm... I understood what you are saying, but does that mean "whatever I do, I must find the longest increasing subsequence"? Because even after using binary search, you are actually finding the longest increasing subsequence (in the step when the optimal value of M is found). So won't that make binary searching useless (as we could directly find the answer by finding the longest increasing subsequence)?
• » » » » 23 months ago, # ^ | +1 Nope. Maybe you can use a sliding window over a vector of pairs (value, index) and check if there exists a subsequence in the given array forming a subarray of the sorted array.
» 7 months ago, # | ← Rev. 2 → 0 For problem B (Lazy Student), first of all let's recall what we do in order to construct an MST:
1) Sort all the edges in ascending order of their weight.
2) Pick the smallest edge not selected so far and check whether it forms a cycle. If it doesn't form a cycle, include it in the spanning tree; otherwise discard it.
3) Repeat step 2 until we have V-1 edges in the spanning tree.
4) The spanning tree that we have now is the MST.
The same idea is used in the editorial: first sort the edges in ascending order; then, if the current edge is marked as belonging to the MST, include it. Otherwise, if the edge is supposed to be discarded, it must form a cycle in the spanning tree constructed so far (an edge is discarded if and only if it forms a cycle at that point). If an edge that is supposed to be discarded does not form a cycle, then no graph with the given spanning tree exists. my submission => https://codeforces.com/problemset/submission/605/94260114
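For illustration, here is a hedged Python sketch of the construction discussed in the comments above: MST edges are placed on the chain (1,2), (2,3), ..., and non-MST edges on the pairs (1,3), (1,4), (2,4), (1,5), ... so that each excluded edge closes a cycle. This is a sketch rather than the linked submission, and it omits input validation such as checking that exactly n-1 edges are marked as MST edges.

    def reconstruct(n, edges):
        # edges: list of (weight, in_mst_flag) in input order
        order = sorted(range(len(edges)),
                       key=lambda i: (edges[i][0], 1 - edges[i][1]))  # ties: MST first
        chain = 2        # next MST edge will be (chain - 1, chain)
        a, b = 1, 3      # next non-MST pair; enumerates (1,3),(1,4),(2,4),(1,5),...
        ans = [None] * len(edges)
        for i in order:
            if edges[i][1]:
                ans[i] = (chain - 1, chain)
                chain += 1
            else:
                if b >= chain:           # chain edge (b-1, b) not placed yet: no graph
                    return None          # corresponds to printing -1
                ans[i] = (a, b)
                a += 1
                if a == b - 1:           # pairs for this b are exhausted
                    a, b = 1, b + 1
        return ans                       # endpoints for each input edge, in order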
http://mathoverflow.net/feeds/question/51563
Topology on the set of linear subspaces (MathOverflow, asked by Martin, 2011-01-09)

Hello, let $X$ be a separable Hilbert space. Let $(e_i)_i$ be a Hilbert basis, and for each index let $E_i = \langle e_1,\dots,e_i \rangle \subset X$ be the span of the first $i$ basis vectors. For any $x \in X$, let $x_i$ be the best approximation of $x$ in $E_i$; it is clear that $x_i \rightarrow x$.

It seems intuitive to say that the $(E_i)$ approximate $X$ in a certain sense. Nevertheless, I am not aware of a topology on the set of linear subspaces which would make such a result rigorous.

A first attempt might be to identify each linear subspace with the projection onto it, and inspect these projections as a topological (no longer linear) space. A next step might be to take into account the order of the basis vectors for each linear subspace (which might be crucial for stability in numerical analysis). I do not know a theory of Grassmannian manifolds in infinite-dimensional vector spaces, nor how to relate Grassmannian manifolds of different dimensions.

Can you give me hints on where to find theory in this direction?

Answer by Georges Elencwajg: Dear Martin, your intuition is excellent. Here (http://www.mrlonline.org/jot/2009-061-001/2009-061-001-002.pdf) is a paper that indeed identifies closed subspaces of Hilbert spaces with the orthogonal projections onto them, and thus studies the Grassmannian you are interested in by embedding it in operator spaces. The article seems interesting, well written and fairly elementary (else I wouldn't even understand what it is about, since I know so little about Functional Analysis...). There is a bibliography that might be useful too. Good luck.

Answer by Ian Morris: Some of the answers to this question (http://mathoverflow.net/questions/48118) might be helpful for your question also. It deals with finite-dimensional Hilbert spaces, but most of my answer to that question applies to the infinite-dimensional case too, with one or two obvious exceptions (e.g. the metric space of 1-dimensional subspaces of an infinite-dimensional Hilbert space is not compact). In particular, the book on Hilbert spaces by Akhiezer and Glazman has a short (5 pages?) section on the Grassmannian of a Hilbert space, and shows that the metric on the Grassmannian given by 'aperture' is the same as the metric given by the operator difference between orthogonal projections.

Answer by KP Hart: Denote the intersection of a closed linear subspace $A$ with the unit sphere by $S_A$, say. You can define the distance between $A$ and $B$ to be the Hausdorff distance between $S_A$ and $S_B$. This will give you a metric topology on the set of closed linear subspaces, but it seems that it does not quite do what you want: the distance between a proper closed subspace and the ambient space is always equal to $1$.
http://math.stackexchange.com/questions/157494/multi-variable-integral-int-01-int-sqrty1-sqrtx31-dx-dy
# Multi variable integral : $\int_0^1 \int_\sqrt{y}^1 \sqrt{x^3+1} \, dx \, dy$
$$\int_0^1 \int_\sqrt{y}^1 \sqrt{x^3+1} \, dx \, dy$$ Here is my problem from my workbook. If I solve this by the definition, that is, integrate with respect to $x$ first and then with respect to $y$, the inner integral $\int_\sqrt{y}^1 \sqrt{x^3+1} \, dx$ is very complicated. I have used Maple, but the result is still long and complicated, so I cannot use it to integrate with respect to $y$.
Thanks :)
Switch the order of integration. (Do $y$ first, then $x$.) – Hans Lundmark Jun 12 '12 at 19:10
The way you have the integration set up, you integrate along $x$ first for a fixed $y$: you integrate over a horizontal strip first, then move the horizontal strip from $y=0$ to $y=1$.

For ease of integration, change the order of integration and integrate along $y$ first for a fixed $x$: you integrate over a vertical strip first, then move the vertical strip from $x=0$ to $x=1$. Hence, if you swap the integrals, the limits become $y$ going from $0$ to $x^2$ and $x$ going from $0$ to $1$. $$I = \int_0^1 \int_\sqrt{y}^1 \sqrt{x^3+1} \, dx \, dy = \int_0^1 \int_0^{x^2} \sqrt{x^3+1} \, dy \, dx = \int_0^1 x^2 \sqrt{x^3+1} \, dx$$ Now call $x^3+1 = t^2$. Then we have that $3x^2 \, dx = 2t \, dt \implies x^2 \, dx = \dfrac{2}{3}t \, dt$. As $x$ varies from $0$ to $1$, $t$ varies from $1$ to $\sqrt{2}$. Hence, we get that $$I = \int_1^\sqrt{2}\dfrac23 t \times t \, dt = \dfrac23 \int_1^\sqrt{2}t^2 \, dt = \dfrac23 \times \left. \dfrac{t^3}3\right \vert_{t=1}^{t=\sqrt{2}} = \dfrac29 \left( (\sqrt{2})^3 - 1^3\right) = \dfrac29 (2 \sqrt{2} - 1).$$
@Marvis "the region over which you are integrating is below the parabola $y=x^2$ from $x=0$ to $1$": could you explain more, please? – hqt Jun 13 '12 at 3:01
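As a quick numeric sanity check of the result above (an illustrative addition, not part of the original answer), the double integral can be evaluated directly, e.g. with SciPy:

    from math import sqrt
    from scipy.integrate import dblquad

    # outer variable y in [0, 1]; inner variable x in [sqrt(y), 1]
    val, err = dblquad(lambda x, y: sqrt(x**3 + 1),
                       0, 1,
                       lambda y: sqrt(y), lambda y: 1.0)
    print(val, 2.0 / 9.0 * (2.0 * sqrt(2.0) - 1.0))  # both ~ 0.40632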
http://vm.udsu.ru/issues/archive/issue/2020-2-11
## Archive of Issues
Country: Belarus (Grodno)
Year: 2020
Volume: 30
Issue: 2
Pages: 290-311
Section: Mathematics
Title: On a linear autonomous descriptor equation with discrete time. I. Application to the 0-controllability problem
Author(s): Khartovskii V.E. (Grodno State University)
Abstract: We consider a linear homogeneous autonomous descriptor equation with discrete time $$B_0g(k+1)+\sum_{i=1}^mB_ig(k+1-i)=0,\quad k=m,m+1,\ldots,$$ with rectangular (in the general case) matrices $B_i$. Such an equation arises in the study of the most important control problems for systems with many commensurate delays in control: the 0-controllability problem, the synthesis problem for a feedback-type regulator that provides calming of the solution of the original system, the modal controllability problem (controllability of the coefficients of the characteristic quasipolynomial), the spectral reduction problem, and the problem of observer synthesis for a dual surveillance system. For the descriptor equation with discrete time under study, a subspace of initial conditions for which this equation is solvable is described, based on the solution of a finite chain of homogeneous algebraic systems. A representation of all its solutions is obtained in the form of an explicit recurrent formula convenient for organizing the computational process. Some properties of this equation that are used in problems of regulator synthesis for continuous systems with many commensurate delays in control are studied. A distinctive feature of the presented study is the use of an approach that does not require constructing transformations that reduce the matrices of the original equation to various canonical forms.
Keywords: linear systems with multiple delays, linear descriptor autonomous equation with discrete time, subspace of initial conditions, representation of the solution
UDC: 517.977
MSC: 93B99, 93C55
DOI: 10.35634/vm200211
Received: 22 April 2020
Language: Russian
Citation: Khartovskii V.E. On a linear autonomous descriptor equation with discrete time. I. Application to the 0-controllability problem, Vestnik Udmurtskogo Universiteta. Matematika. Mekhanika. Komp'yuternye Nauki, 2020, vol. 30, issue 2, pp. 290-311.
References:
1. Krasovskii N.N. Optimal processes in delay systems, Statisticheskie metody: Tr. II Mezhdunar. kongressa IFAK (Bazel', 1963) (Statistic Methods. Proc. II IFAC Congress, Basel, 1963), Moscow, 1965, vol. 2, pp. 201-210 (in Russian).
2. Khartovskii V.E. A generalization of the problem of complete controllability for differential systems with commensurable delays, Journal of Computer and Systems Sciences International, 2009, vol. 48, no. 6, pp. 847-855. https://doi.org/10.1134/S106423070906001X
3. Metel'skii A.V., Khartovskii V.E., Urban O.I. Solution damping controllers for linear systems of the neutral type, Differential Equations, 2016, vol. 52, no. 3, pp. 386-399. https://doi.org/10.1134/S0012266116030125
4. Metel'skii A.V., Khartovskii V.E. Synthesis of damping controllers for the solution of completely regular differential-algebraic delay systems, Differential Equations, 2017, vol. 53, no. 4, pp. 539-550. https://doi.org/10.1134/S0012266117040127
5. Metel'skii A.V., Khartovskii V.E. Criteria for modal controllability of linear systems of neutral type, Differential Equations, 2016, vol. 52, no. 11, pp. 1453-1468. https://doi.org/10.1134/S0012266116110070
6. Khartovskii V.E. Modal controllability for systems of neutral type in classes of differential-difference controllers, Automation and Remote Control, 2017, vol. 78, no. 11, pp. 1941-1954. https://doi.org/10.1134/S0005117917110017
7. Khartovskii V.E. Criteria for modal controllability of completely regular differential-algebraic systems with aftereffect, Differential Equations, 2018, vol. 54, no. 4, pp. 509-524. https://doi.org/10.1134/S0012266118040080
8. Khartovskii V.E. Spectral reduction of linear systems of the neutral type, Differential Equations, 2017, vol. 53, no. 3, pp. 366-381. https://doi.org/10.1134/S0012266117030089
9. Khartovskii V.E. Finite spectrum assignment for completely regular differential-algebraic systems with aftereffect, Differential Equations, 2018, vol. 54, no. 6, pp. 823-838. https://doi.org/10.1134/S0012266118060113
10. Metel'skii A.V., Khartovskii V.E. On the question of the synthesis of observers for linear systems of neutral type, Differentsial'nye Uravneniya, 2018, vol. 54, no. 8, pp. 1148-1149 (in Russian).
11. Belov A.A., Kurdyukov A.P. Deskriptornye sistemy i zadachi upravleniya (Descriptor systems and control problems), Moscow: Fizmatlit, 2015.
12. Boyarintsev Yu.E. Lineinye i nelineinye algebro-differentsial'nye sistemy (Linear and nonlinear algebro-differential systems), Novosibirsk: Nauka, 2000.
13. Campbell S.L., Griepentrog E. Solvability of general differential algebraic equations, SIAM Journal on Scientific Computing, 1995, vol. 16, no. 2, pp. 257-270. https://doi.org/10.1137/0916017
14. Riaza R. Differential-algebraic systems: Analytical aspects and circuit applications, Hackensack, NY: World Scientific, 2008. https://doi.org/10.1142/6746
Full text
https://blog.acolyer.org/page/2/
Dynamic control flow in large-scale machine learning Yu et al., EuroSys’18
(If you don’t have ACM Digital Library access, the paper can be accessed either by following the link above directly from The Morning Paper blog site).
In 2016 the Google Brain team published a paper giving an overview of TensorFlow, “TensorFlow: a system for large-scale machine learning.” This paper is a follow-up, taking a much deeper look at how TensorFlow supports dynamic control flow, including extending automatic differentiation to control flow constructs.
### Embedding control flow within the dataflow graph
With a wide range of machine learning models in use, and rapid exploration of new techniques, a machine learning system needs to be expressive and flexible to support both research and production use cases. Given ever larger models and training sets, a machine learning system also needs to be scalable. This means both using individual devices efficiently (anything from phones to custom ASICs in datacenters), and also supporting parallel execution over multiple devices.
Both the building blocks of machine learning and the architectures built up using these blocks have been changing rapidly. This pace appears likely to continue. Therefore, rather than defining RNNs, MoEs (mixture of experts), and other features as primitives of a programming model, it is attractive to be able to implement them in terms of general control-flow constructs such as conditionals and loops. Thus, we advocated that machine learning systems should provide general facilities for dynamic control flow, and we address the challenge of making them work efficiently in heterogeneous distributed systems consisting of CPUs, GPUs, and TPUs.
The demand for dynamic control flow has been rising over the last few years. Examples include while-loops used within RNNs, gating functions in mixture-of-experts models, and sampling loops within reinforcement learning.
Instead of relying on programming languages outside of the graph, TensorFlow embeds control-flow as operations inside the dataflow graph. This makes whole program optimisation easier and keeps the whole computation inside the runtime system, avoiding the need to communicate with the client (which can be costly in some deployment scenarios). The implementation supports both parallelism and asynchrony, so e.g. control-flow logic on CPUs and compute kernels on GPUs can overlap.
The main control flow operators are a conditional cond(pred, true_fn, false_fn), and a while loop while_loop(pred, body, inits). There are other higher order constructs built on top of these (for example, map_fn, foldl, foldr, and scan).
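For concreteness, here is a hedged sketch of these two constructs using the TensorFlow 1.x Python API; the concrete values and lambdas are illustrative, not taken from the paper.

    import tensorflow as tf   # TensorFlow 1.x graph-mode API

    x = tf.constant(2.0)
    # conditional: square x if it is positive, negate it otherwise
    y = tf.cond(x > 0, lambda: tf.square(x), lambda: -x)

    # while-loop: sum the integers 0..9
    cond = lambda i, s: i < 10
    body = lambda i, s: (i + 1, s + i)
    i_final, s_final = tf.while_loop(cond, body, [tf.constant(0), tf.constant(0)])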
We analyzed more than 11.7 million (!) unique graphs for machine learning jobs at Google over the past year, and found that approximately 65% contain some kind of conditional computation, and approximately 5% contain one or more loops.
### Control flow in TensorFlow
The basic design of TensorFlow is as follows: a central coordinator maps nodes in the dataflow graph to the given set of devices, and then partitions the graph into a set of subgraphs, one per device. Where the partitioning causes an edge to span two devices, the edge is replaced with a pair of send and receive communication operations using a shared rendezvous key.
When dynamic control flow is added into the mix, we can no longer assume that each operation in the graph is executed exactly once, and so unique names and rendezvous keys are generated dynamically. Conditional branches and loops may be arbitrarily partitioned across devices.
We rely on a small set of flexible, expressive primitives that serve as a compilation target for high-level control-flow constructs within a dataflow model of computation.
Those primitives are switch, merge, enter, exit, and nextIteration. Every execution of an operation takes place within an ‘frame’. Without control flow, each operation is executed exactly once. With control flow, each operation executes at most once per frame. The following figure shows how a while-loop can be translated into these primitives to give you the idea:
Tensors inside executors are represented by tuples (value, isDead, tag), where isDead is a boolean indicating whether the tensor is on an untaken branch of a switch, and the tag identifies a frame. The evaluation rules are shown in the following figure:
The rules allow multiple loop iterations to run in parallel, but left unchecked this will use a lot of memory. Empirically, a limit of 32 parallel executions at a time seems to work well.
When the subgraph of a conditional branch or loop body is partitioned across devices partitions are allowed to make progress independently. (There is no synchronisation after each loop iteration, and no central coordinator). The receive operation of a conditional is always ready and can be started unconditionally. If the corresponding send is never executed though (the branch is not chosen) that means we’d be blocking forever waiting for input. Therefore the system propagates an isDead signal across devices from send to receive to indicate the branch has not been taken. This propagation may continue across multiple devices as needed.
For distributed execution of loops each partition needs to know whether to proceed or exit at each iteration. To handle this the graph is rewritten using simple control-loop state machines. Here’s an example partitioning a simple while-loop. The dotted lines represent the control edges.
The overhead for the distributed execution of a loop is that every participating device needs to receive a boolean at each iteration from the device that produces the loop predicate. However, the communication is asynchronous and computation of the loop predicate can often run ahead of the rest of the computation. Given typical neural network models, this overhead is minimal and largely hidden.
### Differentiation
TensorFlow supports automatic differentiation. That is, given a graph representing a neural network, it will generate efficient code for the corresponding distributed gradient computations. In the base case this is back-propagation using the chain rule, and TensorFlow includes a library of gradient functions corresponding to most of its primitive operations.
Tensors used in the gradient function (e.g., x and y in the above example) are kept until the gradient computation is performed. That can consume a lot of memory in deep neural networks, and it gets worse when we add loops. To support back-propagation through control flow constructs:
Each operation in the graph is associated with a ‘control flow context’ that identifies the innermost control-flow construct of which that operation is a member. When the backpropagation traversal first encounters a new control-flow content, it generates a corresponding control-flow construct in the gradient graph.
For a conditional tf.cond(pred, true_fn, false_fn) with output gradients g_z this is simply tf.cond(pred, true_fn_grad(g_z), false_fn_grad(g_z)). For while loops (a sketch follows the list below):
• The gradient of a while loop is another loop that executes the gradient of the loop body for the same number of iterations as the forward loop, but in reverse.
• The gradient of each differentiable loop variable becomes a loop variable in the gradient loop.
• The gradient of each differentiable tensor that is constant in the loop is the sum of the gradients for that tensor at each iteration.
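A minimal sketch of these rules in action (again TF 1.x, illustrative only): differentiating through a while-loop yields a backward loop with the same number of iterations, run in reverse.

    import tensorflow as tf

    x = tf.constant(3.0)
    # y = x^(2^3) = x^8 via three squarings inside a loop
    _, y = tf.while_loop(lambda i, v: i < 3,
                         lambda i, v: (i + 1, v * v),
                         [tf.constant(0), x])
    grad, = tf.gradients(y, x)   # TensorFlow builds the reversed gradient loop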
The overall performance is heavily dependent on how intermediate values are treated. To avoid recomputing these values they are pushed onto a stack during loop execution, and popped during gradient computation. Stack operations are asynchronous so they can run in parallel with actual computation.
### Memory management
Especially on GPUs, where memory is more limited, memory management is crucial. When tensors are pushed onto stacks they are moved from GPU to CPU memory. Separate GPU streams are used for compute and I/O operations to improve their overlap. Each stream is a sequence of sequentially executed GPU kernels. A combination of TensorFlow control edges and GPU hardware events are used to synchronise dependent operations executed on different streams.
### Future directions
Dynamic control flow is an important part of bigger trends that we have begun to see in machine learning systems. Control-flow constructs contributed to the programmability of these systems, and enlarge the set of models that are practical to train using distributed resources. Going further, we envision that additional programming language facilities will be beneficial. For instance, these may include abstraction mechanisms and support for user-defined data structures. The resulting design and implementation challenges are starting to become clear. New compilers and run-time systems, such as XLA (Accelerated Linear Algebra), will undoubtedly play a role.
Reducing DRAM footprint with NVM in Facebook Eisenman et al., EuroSys’18
(If you don’t have ACM Digital Library access, the paper can be accessed either by following the link above directly from The Morning Paper blog site).
…to the best of our knowledge, this is the first study on the usage of NVM devices in a commercial data center environment.
We’ve been watching NVM coming for some time now, so it’s exciting to see a paper describing its adoption within Facebook. MyRocks is Facebook’s primary MySQL database, and is used to store petabytes of data and to serve real-time user activities. MyRocks uses RocksDB as the storage engine, and a typical server consumes 128GB of DRAM and 3 TB of flash. It all seems to work well, so what’s the problem? Spiralling costs!
As DRAM faces major scaling challenges, its bit supply growth rate has experienced a historic low. Together with the growing demand for DRAM, these trends have led to problems in global supply, increasing the total cost of ownership (TCO) for data center providers. Over the last year, for example, the average DRAM DDR4 price has increased by 2.3x.
Just using less DRAM per server isn’t a great option as performance drops accordingly. So the big idea is to introduce non-volatile memory (NVM) to pick up some of the slack. NVM is about 10x faster than flash, but still over 100x slower than DRAM. We can make up for the reduction in performance relative to DRAM by being able to use much more NVM due to its lower cost. So we move from a two-layer hierarchy with DRAM cache (e.g. 96GB) and flash to a three-layer hierarchy with a smaller amount of DRAM (e.g. 16GB), a larger NVM cache layer, and then flash. As of October 23rd, 2017, 16GB of NVM could be picked up on Amazon for $39, whereas 16GB of DRAM cost $170.
We present MyNVM, a system built on top of MyRocks, that significantly reduces the DRAM cache using a second-layer NVM cache. Our contributions include several novel design choices that address the problems arising from adopting NVM in a data center setting…
The following chart shows the results achieved in production when replacing RocksDB with MyNVM as the storage engine inside MyRocks. Comparing the first and third data points, you can see that MyRocks/MyNVM has slightly higher latency (20% at P99) than MyRocks/RocksDB with 96GB storage, however it uses only 1/6th of the DRAM. MyRocks/MyNVM with 16GB of DRAM is 45% faster than MyRocks/RocksDB also with 16GB DRAM. So the NVM layer is doing a good job of reducing costs while closing the performance gap.
NVM comes in two form factors: byte-addressable DIMM, and also as a block device. Much of the prior work focuses on the DIMM use case. But Facebook use the block device form factor due to its lower cost. As we’ll see shortly, loading data in blocks has knock-on effects all the way through the design.
### Tricky things about NVM
NVM is less expensive than DRAM on a per-byte basis, and is an order-of-magnitude faster than flash, which makes it attractive as a second level storage tier. However, there are several attributes of NVM that make it challenging when used as a drop-in replacement for DRAM, namely its higher latency, lower bandwidth, and endurance.
Facebook swap 80GB of DRAM for 140GB of NVM. The lower latency of NVM is therefore compensated for by the higher hit rate from having a larger cache. However, bandwidth becomes a bottleneck. Peak read bandwidth with NVM is about 2.2GB/s, which is 35x lower than DRAM read bandwidth. This issue is made worse by the fact that Facebook chose to use NVM as a block device, so data can only be read at a granularity of 4KB pages. For small objects, this can result in large read amplification.
… we found the NVM’s limited read bandwidth to be the most important inhibiting factor in the adoption of NVM for key-value stores.
NVM also has endurance considerations: if cells are written to more than a certain number of times they wear out and the device lifetime is shortened. In the cache use case with frequent evictions this can easily become a problem, so NVM can’t be used as a straight drop-in replacement for DRAM without some care taken to avoid excessive writes.
Finally, given the very low latency of NVM compared to other block devices, the operating system interrupt overhead itself becomes significant (about 2µs out of a 10µs average end-to-end read latency).
### The design of MyNVM
MyNVM uses NVM as a 2nd level write-through cache. The objective is to significantly reduce costs while maintaining latency and qps.
The default block size is 16KB, which means fetching at least 16KB every time we want to read an object. This ends up requiring more than double the available read bandwidth. Reducing the block size to 4KB actually makes things worse! This is due to the increased footprint of the DRAM index blocks (4x), which in turn lowers the available DRAM for caching, and hence increases the traffic to the 2nd-level NVM cache.
To address this problem, the index is itself partitioned into smaller blocks, with an additional top-level index. Only the top-level index and the relevant index partitions then need to be read and cached in DRAM for any given lookup. This brings the required read bandwidth down, but as shown in the figure above, we’re still right up against the device limits.
We can further reduce the read bandwidth requirement by carefully aligning blocks with physical pages. RocksDB data blocks are compressed by default, so that a 4KB block consumes less than that when compressed and written to NVM. Now we end up in a situation where data blocks span pages, and we may need to read two pages for a single block. MyNVM elects to use 6KB blocks, which compress on average to 4KB. (As a nice side-effect, this also reduces the size of the index). The 6KB blocks compress to around 4KB, but still don’t align perfectly with pages. MyNVM zero pads the end of a page if the next compressed block cannot fully fit into the same page. This reduces the number of blocks spread over two pages by about 5x.
The improved block alignment buys a lot more read bandwidth headroom:
It also reduces the P99 latency since we’re reading less data.
By default RocksDB applies compression using a per-block dictionary. With smaller block sizes this makes the compression less effective (11% overhead at 6KB blocks). To combat this, MyNVM uses a preloaded dictionary based on data uniformly sampled across multiple blocks. This reduces the overhead to 1.5%.
#### Addressing the durability constraint
To avoid wearing out the NVM, MyNVM uses an admission control policy to only store blocks in the 2nd level cache that are not likely to be quickly evicted. An LRU list is kept in DRAM representing the size of the NVM. When a block is allocated from flash it is only cached in NVM if it has been recently accessed and is therefore present in the simulated cache LRU. For MyNVM, using a simulated cache size of 40GB gives sufficiently accurate prediction to accommodate the endurance limitation of the device.
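Here is a hedged sketch of such a simulated-LRU admission filter, reconstructed from the description above; it is not MyNVM's actual code, and block identifiers and capacity accounting are simplified.

    from collections import OrderedDict

    class AdmissionFilter:
        def __init__(self, capacity_blocks):
            self.lru = OrderedDict()          # simulated cache: block_id -> None
            self.capacity = capacity_blocks   # e.g. a simulated 40GB worth of blocks

        def accessed(self, block_id):
            """Record a flash read; return True if the block should be admitted
            to the NVM cache (i.e., it was recently accessed)."""
            admit = block_id in self.lru
            self.lru.pop(block_id, None)
            self.lru[block_id] = None          # move to MRU position
            if len(self.lru) > self.capacity:
                self.lru.popitem(last=False)   # evict the simulated LRU entry
            return admit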
#### Interrupt latency
To lower the operating system interrupt overhead, the team explored switching to a polling model. Continuous polling quickly took the CPU usage of a core to 100%. A hybrid polling strategy, involving sleeping for a time threshold after an I/O is issued before starting to poll, significantly reduced the CPU usage again. With 8 threads or more though, the benefits of polling diminish.
An improved polling mechanism in the kernel could remove many of these limitations. Until that is available, we decided to currently not integrate polling in our production implementation of MyNVM, but plan to incorporate it in future work.
### Evaluation
We saw the headline production results at the top of this post. The following figures show the mean and P99 latencies achieved over a 24 hour period, with intervals of 5M queries.
And here we can see the queries per second comparison:
### This is just the beginning
NVM can be utilized in many other data center use cases beyond the one described in this paper. For example, since key-value caches, such as memcached and Redis, are typically accessed over the network, their data can be stored on NVM rather than DRAM without incurring a large performance cost. Furthermore, since NVM is persistent, a node does not need to be warmed up in case it reboots. In addition, NVM can be deployed to augment DRAM in a variety of other databases.
ServiceFabric: a distributed platform for building microservices in the cloud Kakivaya et al., EuroSys’18
(If you don’t have ACM Digital Library access, the paper can be accessed either by following the link above directly from The Morning Paper blog site).
Microsoft’s Service Fabric powers many of Azure’s critical services. It’s been in development for around 15 years, in production for 10, and was made available for external use in 2015.
ServiceFabric (SF) enables application lifecycle management of scalable and reliable applications composed of microservices running at very high density on a shared pool of machines, from development to deployment to management.
Some interesting systems running on top of SF include:
• Azure SQL DB (100K machines, 1.82M DBs containing 3.48PB of data)
• Azure Cosmos DB (2 million cores and 100K machines)
• Skype
• Azure Event Hub
• Intune
• Azure IoT suite
• Cortana
SF runs in multiple clusters each with 100s to many 100s of machines, totalling over 160K machines with over 2.5M cores.
### Positioning & Goals
Service Fabric defies easy categorisation, but the authors describe it as “Microsoft’s platform to support microservice applications in cloud settings.” What particularly makes it stand out from the crowd is that it is built on foundations of strong consistency, and includes support for stateful services through reliable collections: reliable, persistent, efficient and transactional higher-level data structures.
Existing systems provide varying levels of support for microservices, the most prominent being Nirmata, Akka, Bluemix, Kubernetes, Mesos, and AWS Lambda [there’s a mixed bag!!]. SF is more powerful: it is the only data-ware orchestration system today for stateful microservices. In particular, our need to support state and consistency in low-level architectural components drives us to solve hard distributed computing problems related to failure detection, failover, election, consistency, scalability, and manageability. Unlike these systems, SF has no external dependencies and is a standalone framework.
Every layer in SF supports strong consistency. That doesn’t mean you can’t build weakly consistent services on top if you want to, but this is an easier challenge than building a strongly consistent service on top of inconsistent components. “Based on our use case studies, we found that a majority of teams needing SF had strong consistency requirements, e.g., Microsoft Azure DB, Microsoft Business Analytics Tools, etc., all rely on SF while executing transactions.”
### High level design
SF applications are collections of independently versioned and upgradeable microservices, each of which performs a standalone function and is composed of code, configuration, and data.
SF itself is composed of multiple subsystems, with the major ones shown in the figure below.
At the core of SF is the Federation Subsystem, which handles failure detection, routing, and leader election. Built on top of the federation subsystem is the Reliability Subsystem, providing replication and high availability. The meat of the paper describes these two subsystems in more detail.
### Federation subsystem
#### The ring
At the core of the federation subsystem you’ll find a virtual ring with 2^m points, called the SF-Ring. It was internally developed at Microsoft starting in the early 2000’s, and bears similarity to Chord and Kademlia. Nodes and keys are mapped to a point in the ring, with keys owned by the node closest to it and ties won by the predecessor. Each node keeps track of its immediate successor and predecessor nodes in the ring, which comprise its neighborhood set.
Routing table entries are bidirectional and symmetric. Routing partners are maintained at exponentially increasing distances in the ring, in both clockwise and anti-clockwise directions. Due to bidirectionality, most routing partners end up being symmetric. This speeds up routing, the spread of failure information, and the updating of routing tables after node churn.
When forwarding a message for a key, a node searches its routing table for the node closest to the key, in either the clockwise or anti-clockwise direction. Compared to clockwise-only routing we get faster routing, more routing options in the face of stale or empty tables, better load spread across nodes, and avoidance of routing loops.
Routing tables are eventually convergent. A chatter protocol exchanges routing table information between routing partners ensuring eventual consistency for long distance neighbours.
A key result from the SF effort is that strongly consistent applications can be supported at scale by combining strong membership in the neighbourhood with weakly consistent membership across the ring. Literature often equates strongly consistent membership with virtual synchrony, but this approach has scalability limits.
Nodes in the ring own routing tokens which represent the portion of the ring whose keys they are responsible for. The SF-Ring protocol ensures that there is never any overlap between tokens (always safe), and that every token range is eventually owned by at least one node (eventually live). When a node joins, the two immediate neighbours each split their ring segment with the new node at exactly the half-way point. When a node leaves, its successor and predecessor split its range between them halfway.
As we’ll see when we look at the reliability subsystem, nodes and objects (services) are placed into the ring rather than simply relying on hashing. This enables preferential placement taking into account failure domains and load-balancing.
#### Consistent membership and failure detection
Membership and failure detection takes place within neighbourhood sets. There are two key design principles:
1. Strongly consistent membership: all nodes responsible for monitoring a node X must agree on whether it is up or down. In the SF-Ring, this means that all nodes in X’s neighbourhood set must agree on its status.
2. Decoupling failure detection from failure decision: failure detection protocols (heartbeats) detect a possible failure, a separate arbitrator group decides on what to do about that. This helps to catch and stop cascading failure detections.
A node X periodically sends lease renewal requests to each of its neighbours (monitors). The leasing period is adjusted dynamically but is typically around 30 seconds. X must obtain acks (leases) from all of its monitors. This property defines the strong consistency. If X fails to obtain all of its leases, it considers removing itself from the group. If a monitor misses a lease renewal heartbeat from X it considers marking X as failed. In both cases, the evidence is submitted to the arbitrator group.
The arbitrator acts as a referee for failure detections and for detection conflicts. For speed and fault-tolerance, the arbitrator is implemented as a decentralized group of nodes that operate independent of each other. When any node in the system detects a failure, before taking actions relevant to the failure, it needs to obtain confirmation from a majority (quorum) of nodes in the arbitrator group.
The arbitrator protocol details can be found in section 4.2.2 of the paper. Using lightweight arbitrator groups allows membership, and hence the ring, to scale to whole datacenters.
Given we have a well-maintained ring, SF has a nice pragmatic solution to leader election:
For any key k in the SF-Ring, there is a unique leader: the node whose token range contains k (this is unique due to the safety and liveness of routing tokens). Any node can contact the leader by routing to key k. Leader election is thus implicit and entails no extra messages. In cases where a leader is needed for the entire ring we use k=0.
### Reliability subsystem
In the interests of space, I’m going to concentrate on the placement and load balancer (PLB) component of the reliability subsystem. Its job is to place microservice instances at nodes in such a way as to ensure balanced load.
Unlike traditional DHTs, where object IDs are hashed to the ring, the PLB explicitly assigns each service’s replicas (primary and secondaries) to nodes in SF-Ring.
The placement considers available resources at nodes, outstanding requests, and the parameters of typical requests. It also continually moves services from overly exhausted nodes to under-utilised nodes. The PLB also migrates services away from a node that is about to be upgraded.
The PLB may be dealing with tens of thousands of objects in a constantly changing environment, thus decisions taken at one moment may not be optimal in the next. Thus PLB favours making quick and nimble decisions, continuously making small improvements. Simulated annealing is used for this. The simulated annealing algorithm sets a timer (10s in fast mode, 120s in slow mode) and explores the state space until convergence or until the timer expires. Each state has an energy. The energy function is user-definable, but a common case is the average standard deviation of all metrics in the cluster (lower is better).
Each step generates a random move, considers the energy of the new prospective state due to this move, and decides whether to jump. If the new state has lower energy the annealing process jumps with probability 1; otherwise if the new state has d more energy than the current and the current temperature is T, the jump happens with probability $e^{-d/T}$. This temperature T is high in initial steps (allowing jumps away from local minima) but falls linearly across iterations to allow convergence later.
Considered moves are fine-grained. For example, swapping a secondary replica to another node, or swapping primary and secondary replica.
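A small sketch of the annealing acceptance rule described above; the energy function and move generator here are placeholders for illustration, not SF's actual ones.

    import math, random

    def anneal(state, energy, random_move, T0=1.0, steps=1000):
        T = T0
        for step in range(steps):
            candidate = random_move(state)
            d = energy(candidate) - energy(state)
            # always jump downhill; jump uphill with probability e^(-d/T)
            if d <= 0 or random.random() < math.exp(-d / T):
                state = candidate
            # linear cooling from T0 towards (almost) zero
            T = T0 * (1 - (step + 1) / steps) + 1e-9
        return state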
### Reliable collections
SF’s reliable collections provide data structures such as dictionaries and queues that are persistent, available and fault-tolerant, efficient, and transactional. State is kept locally in the service instance while also being made highly available, so reads are local. Writes are relayed from primary to secondaries via passive replication and considered complete once a quorum has acknowledged.
Reliable collections build on the services of the federation and reliability subsystems: replicas are organised in an SF-Ring, failures are detected, and a primary is kept elected. The PLB (in conjunction with the failover manager) keeps replicas fault-tolerant and load-balanced.
SF is the only self-sufficient microservice system that can be used to build a transactional consistent database which is reliable, self-*, and upgradable.
### Lessons learned
Section 7 of the paper contains an interesting discussion of lessons learned during the development of SF. Since I’m already over my target write-up length, I will just give the headlines here and refer you to the paper for full details:
• Distributed systems are more than just nodes and a network. Grey failures are common.
• Application/platform responsibilities need to be well isolated (you can’t trust developers to always do the right thing).
• Capacity planning is the application’s responsibility (but developers need help)
• Different subsystems require different levels of investment
### What’s next?
Much of our ongoing work addresses the problem of reducing the friction of managing the clusters. One effort towards that is to move to a service where the customer never sees individual servers… other interesting and longer term models revolve around having customers owning servers, but also being able to run microservice management as a service where those servers join in. Also in the short term we are looking at enabling different consistency levels in our Reliable Collections, automatically scaling in and out Reliable Collection partitions, and imbuing the ability to geo-distribute replica sets. Slightly longer term, we are looking at best utilizing non-volatile memory as a store for ServiceFabric’s Reliable Collections. This requires tackling many interesting problems ranging from logging bytes vs. block oriented storage, efficient encryption, and transaction-aware memory allocations.
Hyperledger fabric: a distributed operating system for permissioned blockchains Androulaki et al., EuroSys’18
(If you don’t have ACM Digital Library access, the paper can be accessed either by following the link above directly from The Morning Paper blog site).
This very well written paper outlines the design of HyperLedger Fabric and the rationales for many of the key design decisions. It’s a great introduction and overview. Fabric is a permissioned blockchain system with the following key features:
• A modular design allows many components to be pluggable, including the consensus algorithm
• Instead of the order-execute architecture used by virtually all existing blockchain systems, Fabric uses an execute-order-validate paradigm which enables a combination of passive and active replication. (We’ll be getting into this in much more detail shortly).
• Smart contracts can be written in any language.
…in popular deployment configurations, Fabric achieves throughput of more than 3500 tps, achieving finality with latency of a few hundred ms and scaling well to over 100 peers.
Examples of use cases powered by Fabric include foreign exchange netting in which a blockchain is used to resolve trades that aren’t settling; enterprise asset management tracking hardware assets as they move from manufacturing to deployment and eventually to disposal; and a global cross-currency payments system processing transaction among partners in the APFII organisation in the Pacific region.
### The big picture
Fabric is a distributed operating system for permissioned blockchains that executes distributed applications written in general purpose programming languages (e.g., Go, Java, Node.js). It securely tracks its execution history in an append-only replicated ledger data structure and has no cryptocurrency built in.
A Fabric blockchain consists of a set of permissioned nodes, with identities provided by a modular membership service provider (MSP). Nodes in the network play one of three roles: client, peer, or ordering service.
• Clients submit transaction proposals for execution, help orchestrate the execution phase, and finally broadcast transactions for ordering.
• Peers execute transaction proposals and validate transactions. All peers maintain the blockchain ledger. Not all peers execute all transaction proposals, only a subset of nodes called endorsing peers (or endorsers) do so, as specified by the policy of the chaincode (smart contract) to which the transaction pertains.
• Ordering service nodes (aka orderers) collectively form an ordering service that establishes a total order across all transactions.
It’s possible to construct Fabric networks with multiple blockchains connected to the same ordering service; each such blockchain is called a channel.
Channels can be used to partition the state of the blockchain network, but consensus across channels is not coordinated and the total order of transactions in each channel is separate from the others.
### From order-execute to execute-order-validate
Everything in Fabric revolves around the execute-order-validate processing pipeline. This is a departure from the traditional blockchain model.
All previous blockchain systems, permissioned or not, follow the order-execute architecture. This means that the blockchain network orders transactions first, using a consensus protocol, and then executes them in the same order on all peers sequentially.
Fabric versions up to 0.6 also used the order-execute approach. The weaknesses of order-execute are that it forces sequential execution of transactions, cannot cope with non-deterministic code, and requires all smart contracts to run on all peers, which may introduce confidentiality concerns. Feedback across many proof-of-concept applications highlighted some of the practical issues with order-execute too:
• Users would report a bug in the consensus protocol, which in all cases on investigation turned out to be non-deterministic transaction code.
• Users would complain of poor performance, e.g., only five transactions per second, and then on investigation it turned out that the average transaction for the user took 200ms to execute.
We have learned that the key properties of a blockchain system, namely consistency, security, and performance, must not depend on the knowledge and goodwill of its users, in particular since the blockchain should run in an untrusted environment.
Fabric rejects order-execute and instead uses a three-phase execute-order-validate architecture. A distributed application in Fabric consists of two parts: its smart contract or chaincode, and an endorsement policy, configured by system administrators, which indicates the permissible endorsers of a transaction.
#### The execution phase
Clients sign and send a transaction proposal to one or more endorsers for execution. Endorsers simulate the proposal, executing the operation on the specified chaincode. Chaincode runs in an isolated container. As a result of the simulation, the endorser produces a writeset (modified keys along with their new values) and a readset of keys read during the simulation, along with their version numbers. The endorser then cryptographically signs an endorsement which includes the readset and writeset, and sends this to the client.
The client collects endorsements until they satisfy the endorsement policy of the chaincode (e.g. x of N). All endorsers of the policy are required to produce the same result (i.e., identical readset and writeset). The client then creates a transaction and passes it to the ordering service.
Note that under high contention for certain keys, it is possible for endorsers to return different results and the proposal will fail. “We consciously adopted this design, as it considerably simplifies the architecture and is adequate for typical blockchain applications.” In the future CRDTs may be supported to enhance the liveness of Fabric under contention.
Executing a transaction before the ordering phase is critical to tolerating non-deterministic chaincodes. A chaincode in Fabric with non-determinism can only endanger the liveness of its own operations, because, for instance, a client might not gather a sufficient number of endorsements.
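A minimal sketch of the execution phase may help. An endorser simulates a chaincode against its local state, buffering writes and recording the versions of keys read, without mutating anything. The `chaincode` callable, the `state` layout, and the hash standing in for a real signature are all assumptions made for illustration, not Fabric's actual APIs.

```python
import hashlib
import json

def endorse(proposal, chaincode, state):
    """Simulate a proposal, producing a readset (key -> version read) and a
    writeset (key -> new value) without applying any writes."""
    readset, writeset = {}, {}

    def get(key):
        value, version = state[key]
        readset[key] = version            # record the version we depend on
        return writeset.get(key, value)   # reads see earlier simulated writes

    def put(key, value):
        writeset[key] = value             # buffered; state is untouched

    chaincode(proposal, get, put)

    payload = json.dumps({"reads": readset, "writes": writeset}, sort_keys=True)
    endorsement = hashlib.sha256(payload.encode()).hexdigest()  # stand-in for a signature
    return {"readset": readset, "writeset": writeset, "endorsement": endorsement}
```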
#### The ordering phase
When a client has assembled enough endorsements it submits a transaction to the ordering service.
The ordering phase establishes a total order on all submitted transactions per channel. In other words, ordering atomically broadcasts endorsements and thereby establishes consensus on transactions, despite faulty orderers. Moreover, the ordering service batches multiple transactions into blocks and outputs a hash-chained sequence of blocks containing transactions.
There may be a large number of peers in the blockchain network, but only relatively few are expected to implement the ordering service. Fabric can be configured to use a built-in gossip service to disseminate delivered blocks from the ordering service to all peers.
The ordering service is not involved in maintaining any blockchain state, and does not validate or execute transactions. Thus the consensus mechanism is completely separated from execution and validation and can be made pluggable (for example using crash-fault tolerant – CFT – or Byzantine fault tolerant – BFT – consensus algorithms).
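The hash-chained block sequence the ordering service emits can be sketched in a few lines; the block layout here is illustrative, not Fabric's wire format.

```python
import hashlib
import json

def make_block(prev_hash, transactions):
    """Batch ordered transactions into a block whose header commits to the
    previous block's hash, yielding a tamper-evident chain."""
    body_hash = hashlib.sha256(json.dumps(transactions).encode()).hexdigest()
    header = json.dumps({"prev": prev_hash, "body": body_hash}, sort_keys=True)
    return {"hash": hashlib.sha256(header.encode()).hexdigest(),
            "header": header,
            "transactions": transactions}

genesis = make_block("0" * 64, [])
block_1 = make_block(genesis["hash"], ["tx-a", "tx-b"])   # links back to genesis
```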
#### The validation phase
Blocks are delivered to peers either directly by the ordering service or via gossip. Validation then consists of three sequential steps:
1. Endorsement policy validation happens in parallel for all transactions in the block. If endorsement fails the transaction is marked as invalid.
2. A read-write conflict check is done for all transactions in the block sequentially (see the sketch after this list). If versions don’t match the transaction is marked as invalid.
3. The ledger update phase appends the block to the locally stored ledger and updates the blockchain state.
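Here is the read-write conflict check from step 2 in sketch form, reusing the toy `state[key] = (value, version)` layout from the execution-phase sketch above (illustrative only):

```python
def validate_block(block, state):
    """A transaction is valid only if every key in its readset is still at the
    version it read (step 2); valid transactions then apply their writesets
    and bump versions (step 3)."""
    results = []
    for tx in block["transactions"]:
        valid = all(state[key][1] == version
                    for key, version in tx["readset"].items())
        results.append(valid)
        if valid:
            for key, value in tx["writeset"].items():
                _, version = state.get(key, (None, 0))
                state[key] = (value, version + 1)
    return results   # invalid transactions stay on the ledger, just marked invalid
```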
The ledger of Fabric contains all transactions, including those that are deemed invalid. This follows from the overall design, because the ordering service, which is agnostic to chaincode state, produces the chain of the blocks and because the validation is done by peers post-consensus.
A nice property that comes from persisting even invalid transactions is that they can be audited, and clients that try to mount a DoS attack by flooding the network with invalid transactions can easily be detected.
### Selected component details
Section 4 in the paper contains a number of interesting implementation details for the various components. In the interests of space, I’m going to focus here on the ledger itself and on chaincode execution.
The ledger component consists of a block store and a peer transaction manager. The block store persists transaction blocks in append only files. It also maintains indices to support random access to blocks and transactions within blocks. The peer transaction manager holds the latest state in a versioned key-value store. A local key-value store is used to implement this, and there are implementations available based on LevelDB and on Apache CouchDB.
Chaincode is executed in a container which isolates chaincodes from each other and from the peer, and simplifies chaincode lifecycle. Go, Java, and Node.js chaincodes are currently supported. Chaincode and the peer communicate using gRPC. Special system chaincodes which implement parts of the Fabric itself run directly in the peer process.
Through its modularity, Fabric is well-suited for many further improvements and investigations. Future work will address (1) performance by exploring benchmarks and optimizations, (2) scalability to large deployments, (3) consistency guarantees and more general data models, (4) other resilience guarantees through different consensus protocols, (5) privacy and confidentiality for transactions and ledger data through cryptographic techniques, and much more.
ForkBase: an efficient storage engine for blockchain and forkable applications Wang et al., arXiv’18
ForkBase is a data storage system designed to support applications that need a combination of data versioning, forking, and tamper proofing. The prime example is blockchain systems, but this could also include collaborative applications such as Google Docs. Today, for example, Ethereum and Hyperledger build their data structures directly on top of a key-value store. ForkBase seeks to push these properties down into the storage layer instead:
One direct benefit is that it reduces development efforts for applications requiring any combination of these features. Another benefit is that it helps applications generalize better by providing additional features, such as efficient historical queries, at no extra cost. Finally, the storage engine can exploit performance optimization that is hard to achieve at the application layer.
Essentially what we end up with is a key-value store with native support for versioning, forking, and tamper evidence, built on top of an underlying object storage system. At the core of ForkBase is a novel index structure called a POS-Tree (pattern-oriented-split tree).
### The ForkBase stack
From the bottom-up, ForkBase comprises a chunk storage layer that performs chunking and deduplication, a representation layer that manages versions, branches, and tamper-proofing, and a collection of data access APIs that combine structured data types and fork semantics. Higher level application services such as access control and custom merge functions can be implemented on top of the API.
ForkBase is a key-value store, where the stored objects are instances of FObject.
#### Data versioning
The main challenge with data versioning (keeping the full history of every data item, including any branches and merges) is managing storage consumption. Clearly there is an opportunity for deduplication, on the assumption that versions do not completely change their content from one version to the next.
Delta-based deduplication stores just the differences (deltas) between versions, and reconstructs a given version by following a chain of deltas. You can play with the storage/reconstruction cost trade-off in such schemes.
Content-based deduplication splits data into chunks, each of which is uniquely identified by its content (i.e., a hash of the content). Identical data chunks can then be detected and redundant copies eliminated.
ForkBase opts for content-based deduplication at the chunk level. Compared to similar techniques used in file systems, ForkBase uses smaller chunks, and a data-structure aware chunking strategy. For example, a list will only be split at an element boundary so that a list item never needs to be reconstructed from multiple chunks. ForkBase recognises a number of different chunk types, each uniquely identified by its cid, which is simply a hash of the chunk contents.
The chunkable object types (Blob, List, Set, and Map) are stored as POS-Trees, which we will look at shortly.
An FObject’s uid is simply an alias for the chunk id of the Meta chunk for the object.
#### Fork semantics
Support for forking is based on two key operations: forking and conflict resolution. Fork operations create a new branch, which evolves independently, with local modifications isolated from other branches. ForkBase supports both on-demand and on-conflict forking.
On-demand forks are explicitly requested via the API and are tagged with a user-supplied name. An on-conflict fork is implicitly created upon concurrent modification of the same data item. A branch created as a result of a Fork-on-Conflict is untagged, and is identified simply by its uid.
Tagged branches can be merged with another branch, identified either by tag or by version. When conflicts are detected during a merge a conflict list is returned and the application layer is asked to provide a resolution. There are built-in resolution functions for simple strategies such as append, aggregate, and choose-one.
#### Tamper evidence
The uid of an FObject uniquely identifies both the object’s value and its derivation history. Logical equivalence therefore requires objects to have not only the same value, but also the same history. Versions are linked in a cryptographic hash chain to ensure any attempt at tampering can be detected. Each FObject stores the hashes of the previous versions it derives from in the bases field.
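A toy version of this identity scheme: if an object's uid hashes its value together with the uids of its base versions, two objects only share a uid when both value and derivation history agree. The encoding below is an assumption for illustration, not ForkBase's actual chunk format.

```python
import hashlib

def fobject_uid(value_bytes, base_uids):
    """Hash the object's value together with the uids of the versions it
    derives from (the bases field); changing any ancestor changes every
    descendant's uid, making tampering detectable."""
    h = hashlib.sha256()
    h.update(value_bytes)
    for base in base_uids:            # hashes of predecessor versions, in order
        h.update(bytes.fromhex(base))
    return h.hexdigest()

v1 = fobject_uid(b"balance=100", [])
v2 = fobject_uid(b"balance=80", [v1])    # v2's uid pins its whole history
v2_forged = fobject_uid(b"balance=80", [fobject_uid(b"balance=999", [])])
assert v2 != v2_forged                   # a tampered history yields a different uid
```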
### The Pattern-Oriented-Split Tree
Large structured objects are not usually accessed in their entirety. Instead, they require fine-grained access, such as element look-up, range query and update. These access patterns require index-structures e.g., B+-tree, to be efficient. However, existing index structures are not suitable in our context that has many versions and where versions can be merged.
The capacity-based splitting strategies of B+-trees and variants are sensitive to the values being indexed and their insertion order. This makes it harder to deduplicate across versions, and harder to find differences between two versions when merging. Using fixed-sized nodes gets around the insertion order issue, but introduces another issue known as the boundary-shifting problem due to the insertions in the middle of the structure.
The authors’ solution is the Pattern-Oriented-Split Tree, which supports the following properties:
• Fast lookup and update
• Fast determination of differences between two trees, and subsequent merge
• Efficient deduplication
• Tamper evidence
Every node in the tree is a chunk (either an index chunk, or at the leaves, an object chunk). Lookups follow a path guided by the split keys. Child node cids are cryptographic hashes of their content, as in a Merkle tree. Two objects with the same data will have the same POS-tree, and tree comparison affords an efficient recursive solution. The real secret sauce here lies in how POS-Trees decide where to make splits.
The structure is inspired by content-based slicing and resembles a combination of a B+-tree and a Merkle tree. In a POS-tree, the node (i.e., chunk) boundary is defined as patterns detected from the object content. Specifically, to construct a node, we scan from the beginning until a pre-defined pattern occurs, and then create new node to hold the scanned content. Because the leaf nodes and internal nodes have different degrees of randomness, we define different patterns for them.
Leaf node splitting is done using a rolling hash function. Whenever the q least significant bits in the rolling hash are all zero a pattern match is said to occur. If a pattern match occurs in the middle of an element (e.g., a key-value pair in a Map) then the chunk boundary is extended to cover the whole element. Every leaf node except for the last node therefore ends with a pattern.
Index splitting uses a simpler strategy, looking for a cid pattern where cid && (2^r - 1) == 0. The expected chunk size can be configured by choosing appropriate values for q and r. To ensure chunks cannot grow arbitrarily large, a split can be forced at some threshold size. POS-tree is not designed for cases where the object content is simply a sequence of repeated items – without the pattern, all nodes gravitate to the maximum chunk size and the boundary-shift problem returns.
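A minimal content-defined chunker in this spirit is shown below, using a plain polynomial rolling hash over a fixed window. The element-boundary extension and the maximum-size cut-off described above are omitted for brevity; this is a sketch of the splitting idea, not ForkBase's actual hash function.

```python
def chunk_boundaries(data, q=12, window=16):
    """Cut a chunk whenever the q least significant bits of a rolling hash
    are zero. Boundaries depend only on local content, so an insertion
    re-chunks a neighbourhood instead of shifting every later boundary."""
    mask = (1 << q) - 1
    base, modulus = 257, (1 << 61) - 1
    pow_w = pow(base, window, modulus)
    h, start, boundaries = 0, 0, []
    for i, byte in enumerate(data):           # data is a bytes object
        h = (h * base + byte) % modulus
        if i >= window:
            h = (h - data[i - window] * pow_w) % modulus   # drop oldest byte
        if i + 1 - start >= window and (h & mask) == 0:
            boundaries.append(i + 1)          # chunk ends after byte i
            start = i + 1
    if start < len(data):
        boundaries.append(len(data))          # final, pattern-less chunk
    return boundaries
```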
### ForkBase in action – Hyperledger
Section 5 of the paper looks at the construction of a blockchain platform, wiki engine, and collaborative analytics application on top of ForkBase. I’m just going to concentrate here on the blockchain use case, in which the authors port Hyperledger v0.6 to run on top of ForkBase.
Moving Hyperledger on top of ForkBase takes 18 new lines of code, and eliminates 1,918 lines of code from the Hyperledger code base. (ForkBase itself is about 30K lines of code, mind you!)
Another benefit is that the data is now readily usable for analytics. For state scan query, we simply follow the version number stored in the latest block to get the latest Blob object for the requested key. From there, we follow base-version to retrieve the previous values. For block scan query, we follow the version number stored on the requested block to retrieve the second-level Map object for this block. We then iterate through the key-value tuples and retrieve the corresponding Blob objects.
Both state scans (returning the history of a given state) and block scans (returning the values of the states at a specific block) are slower in the original Hyperledger codebase, which is designed for fast access to the latest states. (Note: this seems to be referring to the peer transaction manager, or PTM, component of HyperLedger. Hyperledger also includes a block store which is indexed).
It’s in these scan operations that ForkBase shows the biggest performance benefits. If we look at latency and throughput, the ForkBase and RocksDB-based Hyperledger implementations are pretty close. (ForkBase-KV in the figure below is Hyperledger using ForkBase as a pure KV store, not taking advantage of any of the advanced features).
[Figure: latency and throughput for Hyperledger on ForkBase, ForkBase-KV, and RocksDB]
zkLedger: privacy-preserving auditing for distributed ledgers Narula et al., NSDI’18
Somewhat similarly to Solidus that we looked at late last year, zkLedger (presumably this stands for zero-knowledge Ledger) provides transaction privacy for participants in a permissioned blockchain setting. zkLedger also has an extra trick up its sleeve: it provides rich and fully privacy-preserving auditing capabilities. Thus a number of financial institutions can collectively use a blockchain-based settlement ledger, and an auditor can measure properties such as financial leverage, asset illiquidity, counter-party risk exposures, and market concentration, either for the system as a whole, or for individual participants. It provides a cryptographically verified level of transparency that’s a step beyond anything we have today.
The goals of zkLedger are to hide the amounts, participants, and links between transactions while maintaining a verifiable transaction ledger, and for the Auditor to receive reliable answers to its queries. Specifically, zkLedger lets banks issue hidden transfer transactions which are still publicly verifiable by all other participants; every participant can confirm a transaction conserves assets and assets are only transferred with the spending bank’s authority.
### Setting the stage
A zkLedger system comprises n banks and an auditor that verifies certain operational aspects of transactions performed by the banks. A depositor or set of depositors can also issue and withdraw assets from the system. Issuance and withdrawal of assets are global public events.
The main action takes place when banks exchange assets by creating transfer transactions. A transfer moves v shares of some asset t to a given recipient bank (or banks). Agreements to transfer are arranged outside of the system, and settled on zkLedger. All transactions are submitted to a globally-ordered append-only ledger, which could be a blockchain.
### Cryptographic building blocks
To protect their privacy, banks do not broadcast payment details in the clear. Instead, banks post commitments to the ledger, using Pedersen commitments. Pedersen commitments are perfectly hiding and computationally binding; they are also additively homomorphic, a fact which zkLedger makes extensive use of. (By additively homomorphic we mean that given commitments to values v1 and v2, there is an operation we can perform on those commitments to produce a commitment to the value v1 + v2.)
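The homomorphic property is easy to demonstrate with a toy group. The parameters below are illustrative only: zkLedger works in an elliptic-curve group with verifiably independent generators, so treat this as a sketch of the algebra, not of the real construction.

```python
import random

# Toy Pedersen commitments over Z_p^*.
p = 2**127 - 1        # a Mersenne prime
g, h = 3, 5           # assumed-independent generators (toy choice)

def commit(v, r):
    """C(v, r) = g^v * h^r (mod p); the random r hides v."""
    return pow(g, v, p) * pow(h, r, p) % p   # Python 3.8+ inverts negative exponents

r1, r2 = random.randrange(1, p), random.randrange(1, p)
# Multiplying two commitments yields a commitment to the sum of the values:
assert commit(3, r1) * commit(4, r2) % p == commit(7, r1 + r2)
```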
Every bank has a Schnorr signature keypair and distributes their public key to all other system participants.
Assertions about payment details are made based on non-interactive zero-knowledge proofs (NIZKs). In an NIZK scheme a prover can convince a verifier of some property about private data the prover holds, without revealing the private data itself. The binary string proof $\pi$ can be appended to the ledger and verified by any party of the system without interaction between the prover and the verifier.
In theory, NIZK proof systems exist for all properties in NP whereas the practical feasibility of NIZKs is highly dependent on the complexity of the property at hand… The design of zkLedger is carefully structured so that all NIZK proofs have particularly efficient constructions.
### The zkLedger
At a high level zkLedger looks like this:
Banks maintain their own private state, and for efficiency a commitment cache which holds a rolling product of commitments by row and by asset so that it can quickly produce proofs and answer questions from auditors. The ledger itself has one entry (row) per transaction, and every row contains one column for each participating bank. (Banks can be added or removed by appending special signed transactions to the ledger).
Suppose bank A wants to transfer 100 shares of an asset to bank B. The transaction row conceptually contains a -100 entry in A’s column, 100 in B’s column, and zero in every other column. The values are not posted in the clear though, instead the column entries are Pedersen commitments for the respective amounts. Since there is no way for an outsider to tell the difference between a commitment to zero and any other value, both the transaction amounts and participants are protected.
Keeping values and participants private is a good start, but we also need to maintain overall integrity via the following invariants:
• Transfer transactions cannot create or destroy assets
• The spending bank must give consent to the transfer, and must own enough of the particular asset to execute the transaction
The first invariant is upheld via a proof of balance, and the second invariant is upheld using a proof of assets.
For proof of balance it suffices to show that the values in a given row sum to zero. If the prover chooses the random inputs r to the commitments such that all of the r values also sum to zero, then a verifier can confirm that the committed values all sum to zero by checking that the product of the commitments is 1.
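Continuing the toy commitments above, the balance check looks like this (real zkLedger rows also carry the NIZK proofs discussed below; this only shows the algebra):

```python
import random

p, g, h = 2**127 - 1, 3, 5                              # toy parameters, as above
commit = lambda v, r: pow(g, v, p) * pow(h, r, p) % p

# One transfer row: bank 0 pays bank 1 100 units; the other entries commit to 0.
values = [-100, 100, 0, 0]
blinds = [random.randrange(1, p) for _ in range(3)]
blinds.append(-sum(blinds))          # choose random inputs that sum to zero

row = [commit(v, r) for v, r in zip(values, blinds)]
product = 1
for c in row:
    product = product * c % p
assert product == 1    # a verifier checks balance without learning any value
```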
A common approach to showing proof-of-assets is to use Unspent Transaction Outputs (UTXOs). In a system that doesn’t use zk-SNARKs though, this leaks the transaction graph. zk-SNARKs require a trusted third party for setup, which zkLedger wants to avoid: “the consequences of incorrect or compromised setup are potentially disastrous…
In zkLedger, a bank proves it has assets by creating a commitment to the sum of the value for the asset in its column, including this transaction. If the sum is greater than or equal to 0, then the bank has the assets to transfer. Note that this is true since the bank’s column represents all the assets it has received or spent, and the Pedersen commitments can be homomorphically added in columns as well as in rows.
In addition, in its own entry (where the value is negative), a bank includes proof of the knowledge of its secret key as a proof of authorisation. Thus we have a disjunctive proof – either the committed value for an entry is greater than or equal to zero, or the creator of the transaction knows the secret key for the entry.
There’s one more issue we still need to consider: commitments are only defined modulo some modulus. If we’re using modulus N, we need to make sure that committed values lie within 0..N-1, so that sums cannot wrap around. Range proofs are used to show that values are within range, and zkLedger supports asset value amounts up to a trillion. Now the only thing I can really tell you about range proofs is that they’re the most expensive part of generating the transaction, and if we’re not careful we need two of them: one for the commitment value and one for the sum of assets in the column. With a level of indirection zkLedger manages to get this back down to just one range proof per transaction.
### Auditing
The auditor can ask a query of a Bank, such as “How many Euros did you hold at time t?,” and the bank responds with an answer and a proof that the answer is consistent with the transactions on the ledger. The auditor can multiply commitments in the bank’s column for Euros, and verify the proof and answer with the total. Given the table construction, the auditor knows that they are seeing every asset transfer – there is no way for a bank to ‘hide’ assets on the ledger.
Given that every bank is affected by every transaction (because each row contains a commitment for every bank, even if to the value zero), each bank needs to be able to total and prove all of the commitments in its column. To do this, the bank needs to know the random input used for each of those commitments, otherwise it won’t be able to open up commitments to the auditor. To meet this requirement, the spending bank is required to include a publicly verifiable Token in every entry, which is based on a combination of the bank’s public key and the random input. The token construction enables the bank to show that the asset total is correct without actually needing to know the random input (details in §4.2 of the paper). Alongside the token, we also need a proof of consistency that the same random input was used both in construction of the token and in forming the value commitment.
Through the use of sums, means, ratios, variance, co-variance, and standard deviation, an auditor in zkLedger can determine the following, among other measurements: leverage ratios (how much of an asset a bank has on its books compared to other holdings); concentration (using a measure called the Herfindahl-Hirschman Index – HHI – to measure how competitive an industry is); and real-time price indexes.
Sums are supported via the additive structure of Pedersen commitments. For everything else there is map/reduce. Take as an example an auditor that wants to calculate the mean transaction size for a given bank and asset. A commitment to the total value is obtained by summing the column, but we don’t know what the denominator should be, because we don’t know which entries are actually commitments to zero. Map/reduce solves this: in the map step the bank produces new commitments per row indicating whether or not the bank was involved in the transaction (1 if the bank is involved, zero otherwise). In the reduce step these commitments are summed and the result is sent to the auditor along with the corresponding proofs. More complex queries may require multiple map and reduce computations (see the example in §5 of the paper for computing variance).
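A toy rendering of the map/reduce audit for mean transaction size follows, again using the illustrative commitments from the sketches above. In the real protocol each flag commitment comes with a NIZK proof that the flag is consistent with the value commitment; that part is omitted here.

```python
import random

p, g, h = 2**127 - 1, 3, 5                              # toy parameters, as above
commit = lambda v, r: pow(g, v, p) * pow(h, r, p) % p

# Bank B's column as (value, blind) pairs; zero-value rows are transactions
# that B did not take part in (indistinguishable from the outside).
column = [(120, 11), (0, 22), (80, 33), (0, 44)]

# Map: commit, per row, to a 0/1 flag saying whether B participated.
flags = [(1 if v != 0 else 0, random.randrange(1, p)) for v, _ in column]

# Reduce: the auditor multiplies the published commitments; the bank opens
# only the totals, never the individual rows.
total = commit(sum(v for v, _ in column), sum(r for _, r in column))
prod = 1
for v, r in column:
    prod = prod * commit(v, r) % p
assert prod == total

mean = sum(v for v, _ in column) / sum(f for f, _ in flags)   # 100.0
```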
### Putting it all together
For a transfer transaction, each entry in the row contains:
• A Pedersen commitment to the value being transferred
• An audit token so that audit requests can be answered without knowing the random input to the commitment
• A proof-of-balance
• A proof-of-assets
• A proof-of-consistency between tokens and commitments
Banks can also add additional metadata – either encrypted or in plaintext.
### Performance evaluation
A Go prototype of zkLedger shows that it is possible to create the needed proofs in milliseconds.
However, the cost of verifying transactions increases quadratically with the number of banks, and all transactions must be strictly serialised. Banks can verify transactions in parallel, so the time to process transactions increases linearly.
[Figure: transaction creation and verification time as the number of banks grows]
With 10 banks, we’re already down to around 2 transactions per second. “We are optimistic that a faster range proof implementation will directly improve performance.” Realistically though, it looks like with the current state-of-the-art we’re limited to fairly low volume markets with limited numbers of participants.
Using the commitment cache (online auditor below), auditing time is roughly constant. Without it (offline auditor) audit time is linear in the number of transactions in the ledger.
[Figure: audit time for an online auditor (with commitment cache) vs an offline auditor]
zkLedger is the first distributed ledger system to provide strong transaction privacy, public verifiability, and complete, provably correct auditing. zkLedger supports a rich set of auditing queries which are useful to measure the financial health of a market.
Towards a design philosophy for interoperable blockchain systems Hardjono et al., arXiv 2018
Once upon a time there were networks and inter-networking, which let carefully managed groups of computers talk to each other. Then with a capital “I” came the Internet, with design principles that ultimately enabled devices all over the world to interoperate. Like many other people, I have often thought about the parallels between networks and blockchains, between the Internet, and something we might call ‘the Blockchain’ (capital ‘B’). In today’s paper choice, Hardjono et al. explore this relationship, seeing what we can learn from the design principles of the Internet, and what it might take to create an interoperable blockchain infrastructure. Some of these lessons are embodied in the MIT Tradecoin project.
We argue that if blockchain technology seeks to be a fundamental component of the future global distributed network of commerce and value, then its architecture must also satisfy the same fundamental goals of the Internet architecture.
### The design philosophy of the Internet
This section of the paper is a précis of ‘The design philosophy of the DARPA Internet protocols’ from SIGCOMM 1988. The top three fundamental goals for the Internet as conceived by DARPA at that time were:
1. Survivability: Internet communications must continue even if individual networks or gateways were lost
2. The ability to support multiple types of communication service (with differing speed, latency, and reliability requirements).
3. The ability to accommodate and incorporate a variety of networks
In addition, the end-to-end principle was central in deciding where responsibility for functionality should lie: in the network versus in the applications at the network endpoints. A classic example is end-to-end encryption, which needs to be between the communicating parties and therefore places responsibility for this with the endpoints.
The Internet is structured as a collection of autonomous systems (routing domains), stitched together through peering agreements. Autonomous Systems (ASs) are owned and operated by legal entities. All routers and related devices are uniquely identified within a domain. Interaction across domains is via gateways (using e.g. BGP).
### A design philosophy for the Blockchain
We believe the issue of survivability to be as important as that of privacy and security. As such, we believe that interoperability across blockchain systems will be a core requirement — both at the mechanical level and the value level — if blockchain systems and technologies are to become fundamental infrastructure components of future global commerce.
An interoperable blockchain architecture as defined by the authors has the following characteristics:
• It is composed of distinguishable blockchain systems, each representing a distributed data ledger
• Transaction execution may span multiple blockchain systems
• Data recorded in one blockchain is reachable and verifiable by another possible foreign transaction in a semantically compatible manner
Survivability is defined in terms of application level transactions: it should still be possible to complete a transaction even when parts of The Blockchain are damaged.
The application level transaction may be composed of multiple ledger-level transactions (sub-transaction) and which may be intended for multiple distinct blockchain systems (e.g. sub-transaction for asset transfer, simultaneously with sub-transaction for payments and sub-transaction for taxes).
(Are we reinventing XA all over again?)
Sub-transactions confirmed on a spread of blockchain systems are opaque to the user application, in the same way that the routing of packets through multiple domains is opaque to a communications application.
The notions of survivability and blockchain substitution in the event of failure raise a number of questions such as the degree to which an application needs to be aware of individual blockchain systems’ capabilities and constructs, and where responsibility for reliability (e.g. re-transmitting a transaction) should lie. What should we do about resident smart contracts that exist on a (possibly unreachable) blockchain system, and hence may not be invokable or able to complete? Can smart contracts be moved across chains? Should the current chain on which a contract resides be opaque to applications (i.e., give it an “IP” address which works across the whole Blockchain)? How do we know when to trigger the moving of a contract from one chain to another?
The Internet goal of supporting multiple types of service with differing requirements is reinterpreted as need to support multiple types of chain with differing consensus, throughput, and latency characteristics. (And we might also add security and privacy to that list).
When it comes to accommodating multiple different blockchain systems, we want to be able to support transactions spanning blockchains operated (or owned) by different entities. In the Internet, the minimum assumption is that each network must be able to transport a datagram or packet as the lowest unit common denominator. What is the corresponding minimum assumption for blockchains? How can data be referenced across chains? What combinations of anonymity (for users and for nodes) can be supported?
The notion of value is at a layer above blockchain transactions (just as the Internet separates the mechanical transmission of packets from the value of the information contained in those packets). For families of applications that need to transfer value across chains, the Inter-Ledger Protocol offers a promising direction.
The MIT Tradecoin project has a number of objectives, one core goal being the development of a “blueprint” model for interoperable blockchain systems which can be applied to multiple use cases.
Ultimately there are two different levels of interoperability: mechanical level interoperability, and value level interoperability (encompassing constructs that accord value as perceived in the human world). “Humans, societies, real assets, fiat currencies, liquidity, legal regimes and regulations all contribute to form the notion of value as attached to (bound to) the constructs (e.g., coins, tokens) that circulate in the blockchain system….” The two level view follows the end-to-end principle by placing the human semantics (value) at the ends of (outside) the mechanical systems.
Legal trust is the contract that binds the technical roots of trust at the mechanical level with legally enforceable obligations and warranties.
Legal trust is the bridge between the mechanical level and the value level. That is, technical-trust and legal-trust support business trust (at the value level) by supporting real-world participants in quantifying and managing risks associated with transactions occurring at the mechanical level. Standardization of technologies that implement technical trust promotes the standardization of legal contracts — also known as legal trust frameworks — which in turn reduces the overall business cost of operating autonomous systems.
(And not only that, it provides the trust required for businesses to trade value on blockchains).
Tradecoin views individual blockchain systems as fully autonomous, and connects them via gateways. Gateways provide value stability, reachability, and transaction mediation for cross-domain transactions.
To support reachability, gateways resolve identifiers and may provide a NAT-like function to translate between internal and external identifiers. When it comes to transaction mediation, the Tradecoin view seems to be that gateways will act as transaction coordinators, with individual blockchain systems acting as resource managers.
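The paper doesn't pin down a concrete protocol, but the coordinator/resource-manager analogy suggests something 2PC-shaped. A hypothetical sketch, where each gateway exposes prepare/commit/abort against its home blockchain (an assumed interface, not taken from the paper):

```python
def mediate(subtransactions, gateways):
    """Two-phase-commit-style mediation across blockchains: every gateway
    first confirms its sub-transaction can commit; any refusal aborts the
    whole application-level transaction."""
    prepared = []
    for sub, gateway in zip(subtransactions, gateways):
        if gateway.prepare(sub):                 # e.g. endorse/validate locally
            prepared.append((sub, gateway))
        else:
            for p_sub, p_gateway in prepared:    # roll back anything prepared
                p_gateway.abort(p_sub)
            return False
    for sub, gateway in zip(subtransactions, gateways):
        gateway.commit(sub)                      # gateway signs a binding confirmation
    return True
```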
Since blockchains BC1 and BC2 are permissioned and one side cannot see the ledger at the other side, the gateways of each blockchain must “vouch” that the transaction has been confirmed on the respective ledgers. That is, the gateways must issue legally-binding signed assertions that make them liable for misreporting (intentionally or otherwise). The signature can be issued by one gateway only, or it can be a collective group signature of all gateways in the blockchain system.
For all this to work smoothly, there are five ‘desirable features’:
1. Both the transaction initiating and recipient applications must be able to independently verify that the transaction was confirmed on their respective blockchains.
2. Gateway signatures must be binding, regardless of the gateway selection mechanism used.
3. There should be multiple reliable ‘paths’ (sets of gateways) between any two blockchains.
4. There must be a global resolution mechanism for identifiers such that they can always be resolved to the correct authoritative blockchain system.
5. Gateways must all be identifiable (i.e., not anonymous), both within and across domains. “Gateways must be able to mutually authenticate each other without any ambiguity as to their identity, legal ownership, or the ‘home’ blockchain autonomous system which they exclusively represent.”
Gateways are connected together via the equivalent of peering agreements:
For the interoperability of blockchain systems, a notion similar to peering and peering-agreements must be developed that (i) defines the semantic compatibility required for two blockchains to exchange cross-domain transactions; (ii) specifies the cross-domain protocols required; (iii) specifies the delegation and technical-trust mechanisms to be used; and (iv) defines the legal agreements (e.g. service levels, fees, penalties, liabilities, warranties) for peering. It is important to note that in the Tradecoin interoperability model, the gateways of a blockchain system represent the peering-points of the blockchain.
Requirement (iv) above seems problematic in cases where there is no well-defined legal entity associated with a blockchain.
Interoperability forces a deeper re-thinking into how permissioned and permissionless blockchain systems can interoperate without a third party (such as an exchange).
http://www.lmfdb.org/ModularForm/GL2/Q/holomorphic/8/14/b/a/
# Properties
Label: 8.14.b.a
Level: 8
Weight: 14
Character orbit: 8.b
Analytic conductor: 8.578
Analytic rank: 1
Dimension: 2
CM: No
Inner twists: 2
# Related objects
## Newspace parameters
Level: $$N = 8 = 2^{3}$$
Weight: $$k = 14$$
Character orbit: $$[\chi] =$$ 8.b (of order $$2$$ and degree $$1$$)
## Newform invariants
Self dual: No
Analytic conductor: $$8.57847431615$$
Analytic rank: $$1$$
Dimension: $$2$$
Coefficient field: $$\Q(\sqrt{-79})$$
Coefficient ring: $$\Z[a_1, a_2, a_3]$$
Coefficient ring index: $$2^{2}$$
Sato-Tate group: $$\mathrm{SU}(2)[C_{2}]$$
## $q$-expansion
Coefficients of the $$q$$-expansion are expressed in terms of $$\beta = 2\sqrt{-79}$$. We also show the integral $$q$$-expansion of the trace form.
$$f(q) = q + (-56 - 4\beta) q^{2} + 129\beta q^{3} + (-1920 + 448\beta) q^{4} - 1270\beta q^{5} + (163056 - 7224\beta) q^{6} - 175832 q^{7} + (673792 - 17408\beta) q^{8} - 3664233 q^{9} + (-1605280 + 71120\beta) q^{10} + 148245\beta q^{11} + (-18262272 - 247680\beta) q^{12} - 1748546\beta q^{13} + (9846592 + 703328\beta) q^{14} + 51770280 q^{15} + (-59736064 - 1720320\beta) q^{16} - 133520302 q^{17} + (205197048 + 14656932\beta) q^{18} - 1956439\beta q^{19} + (179791360 + 2438400\beta) q^{20} - 22682328\beta q^{21} + (187381680 - 8301720\beta) q^{22} - 35585416 q^{23} + (709619712 + 86919168\beta) q^{24} + 711026725 q^{25} + (-2210162144 + 97918576\beta) q^{26} - 267018390\beta q^{27} + (337597440 - 78772736\beta) q^{28} + 89285286\beta q^{29} + (-2899135680 - 207081120\beta) q^{30} - 5765001568 q^{31} + (1170735104 + 335282176\beta) q^{32} - 6043059180 q^{33} + (7477136912 + 534081208\beta) q^{34} + 223306640\beta q^{35} + (7035327360 - 1641576384\beta) q^{36} + 740167642\beta q^{37} + (-2472938896 + 109560584\beta) q^{38} + 71277729144 q^{39} + (-6986178560 - 855715840\beta) q^{40} - 23546348918 q^{41} + (-28670462592 + 1270210368\beta) q^{42} + 821222629\beta q^{43} + (-20986748160 - 284630400\beta) q^{44} + 4653575910\beta q^{45} + (1992783296 + 142341664\beta) q^{46} - 68107736592 q^{47} + (70127124480 - 7705952256\beta) q^{48} - 65972118183 q^{49} + (-39817496600 - 2844106900\beta) q^{50} - 17224118958\beta q^{51} + (247538160128 + 3357208320\beta) q^{52} + 9353966274\beta q^{53} + (-337511244960 + 14953029840\beta) q^{54} + 59493683400 q^{55} + (-118474194944 + 3060883456\beta) q^{56} + 79752279396 q^{57} + (112856601504 - 4999976016\beta) q^{58} - 7179956339\beta q^{59} + (-99398937600 + 23193085440\beta) q^{60} - 23861087370\beta q^{61} + (322840087808 + 23060006272\beta) q^{62} + 644289416856 q^{63} + (358235504640 - 23458742272\beta) q^{64} - 701726480720 q^{65} + (338411314080 + 24172236720\beta) q^{66} + 21163131297\beta q^{67} + (256358979840 - 59817095296\beta) q^{68} - 4590518664\beta q^{69} + (282259592960 - 12505171840\beta) q^{70} - 1309471657368 q^{71} + (-2468930881536 + 63786968064\beta) q^{72} + 478647871914 q^{73} + (935571899488 - 41449387952\beta) q^{74} + 91722447525\beta q^{75} + (276969156352 + 3756362880\beta) q^{76} - 26066214840\beta q^{77} + (-3991552832064 - 285110916576\beta) q^{78} - 364547231600 q^{79} + (-690398822400 + 75864801280\beta) q^{80} + 5042766700701 q^{81} + (1318595539408 + 94185395672\beta) q^{82} + 49098397129\beta q^{83} + (3211091810304 + 43550069760\beta) q^{84} + 169570783540\beta q^{85} + (1038025403056 - 45988467224\beta) q^{86} - 3639625398504 q^{87} + (815485071360 + 99886295040\beta) q^{88} - 102457641350 q^{89} + (5882119950240 - 260600250960\beta) q^{90} + 307450340272\beta q^{91} + (68323998720 - 15942266368\beta) q^{92} - 743685202272\beta q^{93} + (3814033249152 + 272430946368\beta) q^{94} - 785158099480 q^{95} + (-13667442622464 + 151024828416\beta) q^{96} - 6157717373342 q^{97} + (3694438618248 + 263888472732\beta) q^{98} - 543204221085\beta q^{99} + O(q^{100})$$

$$\operatorname{Tr}(f)(q) = 2q - 112 q^{2} - 3840 q^{4} + 326112 q^{6} - 351664 q^{7} + 1347584 q^{8} - 7328466 q^{9} - 3210560 q^{10} - 36524544 q^{12} + 19693184 q^{14} + 103540560 q^{15} - 119472128 q^{16} - 267040604 q^{17} + 410394096 q^{18} + 359582720 q^{20} + 374763360 q^{22} - 71170832 q^{23} + 1419239424 q^{24} + 1422053450 q^{25} - 4420324288 q^{26} + 675194880 q^{28} - 5798271360 q^{30} - 11530003136 q^{31} + 2341470208 q^{32} - 12086118360 q^{33} + 14954273824 q^{34} + 14070654720 q^{36} - 4945877792 q^{38} + 142555458288 q^{39} - 13972357120 q^{40} - 47092697836 q^{41} - 57340925184 q^{42} - 41973496320 q^{44} + 3985566592 q^{46} - 136215473184 q^{47} + 140254248960 q^{48} - 131944236366 q^{49} - 79634993200 q^{50} + 495076320256 q^{52} - 675022489920 q^{54} + 118987366800 q^{55} - 236948389888 q^{56} + 159504558792 q^{57} + 225713203008 q^{58} - 198797875200 q^{60} + 645680175616 q^{62} + 1288578833712 q^{63} + 716471009280 q^{64} - 1403452961440 q^{65} + 676822628160 q^{66} + 512717959680 q^{68} + 564519185920 q^{70} - 2618943314736 q^{71} - 4937861763072 q^{72} + 957295743828 q^{73} + 1871143798976 q^{74} + 553938312704 q^{76} - 7983105664128 q^{78} - 729094463200 q^{79} - 1380797644800 q^{80} + 10085533401402 q^{81} + 2637191078816 q^{82} + 6422183620608 q^{84} + 2076050806112 q^{86} - 7279250797008 q^{87} + 1630970142720 q^{88} - 204915282700 q^{89} + 11764239900480 q^{90} + 136647997440 q^{92} + 7628066498304 q^{94} - 1570316198960 q^{95} - 27334885244928 q^{96} - 12315434746684 q^{97} + 7388877236496 q^{98} + O(q^{100})$$
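As a sanity check, the eigenvalue data above can be verified directly in a few lines: Hecke eigenvalues are multiplicative on coprime indices (so $$a_6 = a_2 a_3$$), and $$a_3$$ satisfies the kernel polynomial given under Hecke kernels below. Representing $$x + y\beta$$ as a pair, with $$\beta^2 = (2\sqrt{-79})^2 = -316$$:

```python
# Elements x + y*beta of Q(sqrt(-79)) as pairs (x, y), with beta^2 = -316.
BETA_SQ = -316

def mul(a, b):
    (x1, y1), (x2, y2) = a, b
    return (x1 * x2 + y1 * y2 * BETA_SQ, x1 * y2 + y1 * x2)

a2, a3, a6 = (-56, -4), (0, 129), (163056, -7224)

assert mul(a2, a3) == a6                  # multiplicativity on coprime indices
assert mul(a3, a3) == (-5258556, 0)       # so T_3^2 + 5258556 annihilates f
```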
## Character Values
We give the values of $$\chi$$ on generators for $$\left(\mathbb{Z}/8\mathbb{Z}\right)^\times$$.
$$\chi(5) = -1$$, $$\chi(7) = 1$$
## Embeddings
For each embedding $$\iota_m$$ of the coefficient field, the values $$\iota_m(a_n)$$ are shown below.
| Label | $$\iota_m(\nu)$$ | $$a_{2}$$ | $$a_{3}$$ | $$a_{4}$$ | $$a_{5}$$ | $$a_{6}$$ | $$a_{7}$$ | $$a_{8}$$ | $$a_{9}$$ | $$a_{10}$$ |
|---|---|---|---|---|---|---|---|---|---|---|
| 5.1 | 0.5 + 4.44410i | −56.0000 − 71.1056i | 2293.15i | −1920.00 + 7963.82i | −22576.0i | 163056. − 128417.i | −175832. | 673792. − 309451.i | −3.66423e6 | −1.60528e6 + 1.26426e6i |
| 5.2 | 0.5 − 4.44410i | −56.0000 + 71.1056i | −2293.15i | −1920.00 − 7963.82i | 22576.0i | 163056. + 128417.i | −175832. | 673792. + 309451.i | −3.66423e6 | −1.60528e6 − 1.26426e6i |
## Inner twists
| Char. orbit | Parity | Mult. | Self Twist | Proved |
|---|---|---|---|---|
| 1.a | Even | 1 | trivial | yes |
| 8.b | Even | 1 | | yes |
## Hecke kernels
This newform can be constructed as the kernel of the linear operator $$T_{3}^{2} + 5258556$$ acting on $$S_{14}^{\mathrm{new}}(8, \chi)$$.
https://www.ippp.dur.ac.uk/internal-seminar-maria-laura-piscopo-contribution-darwin-operator-non-leptonic-decay-heavy-hadrons
# Institute for Particle Physics Phenomenology
Description:
Contribution of the Darwin operator to non-leptonic decay of heavy hadrons
The total decay width of heavy hadrons can be systematically computed using the Heavy Quark Expansion (HQE) framework, as a series in inverse powers of the heavy quark mass m_Q. Computation of higher corrections is crucial both to test the consistency of HQE itself and to constrain the size of possible new physics effects. In this talk I will present the result of our recent work on the determination of the two-loop 1/m_b^3 correction (Darwin term) to the non-leptonic decays of B mesons. This effect is found to give the dominant correction for the case of B_s and B_d mesons, and the second most important correction for the case of B^+ meson.
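For orientation (this is the standard schematic form of the HQE, not taken from the abstract itself), the total width is expanded as

$$\Gamma(B) = \Gamma_3 + \Gamma_5 \frac{\langle O_5\rangle}{m_b^2} + \Gamma_6 \frac{\langle O_6\rangle}{m_b^3} + \ldots + 16\pi^2 \left( \tilde{\Gamma}_6 \frac{\langle \tilde{O}_6\rangle}{m_b^3} + \tilde{\Gamma}_7 \frac{\langle \tilde{O}_7\rangle}{m_b^4} + \ldots \right),$$

where each $$\Gamma_i$$ is itself a series in $$\alpha_s$$; the Darwin term discussed in the talk is a $$1/m_b^3$$ contribution in this series.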
Type:
Lecture
Category:
Oct 2019 - Sept 2020
Date:
17/04/2020
Location:
Ogden Center
Room:
OG218
Timezone:
Europe/London