https://mathhelpboards.com/threads/unions-and-intersections.1764/

# [SOLVED] Unions and intersections
#### dwsmith
$(A\cup B)\cap (B\cup C)\cap (C\cup A) = (A\cap B)\cup (A\cap C)\cup (B\cap C)$
For the identity, we will show $(A\cup B)\cap (B\cup C)\cap (C\cup A) \subseteq (A\cap B)\cup (A\cap C)\cup (B\cap C)$ and $(A\cup B)\cap (B\cup C)\cap (C\cup A) \supseteq (A\cap B)\cup (A\cap C)\cup (B\cap C)$.
Let $x\in (A\cup B)\cap (B\cup C)\cap (C\cup A)$.
Then $x\in A\cup B$ and $x\in B\cup C$ and $x\in C\cup A$.
So ($x\in A$ or $x\in B$) and ($x\in B$ or $x\in C$) and ($x\in C$ or $x\in A$).
So I am stuck at this point.
#### Fantini
If ($x \in A$ or $x \in B$) and ($x \in B$ or $x \in C$) and ($x \in C$ or $x \in A$), then ($x \in A$ and $x \in B$) or ($x \in A$ and $x \in C$) or ($x \in B$ and $x \in C$): if $x$ belonged to at most one of the three sets, at least one of the three disjunctions would fail. From there, $x \in (A \cap B) \cup (A \cap C) \cup (B \cap C)$.
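Fantini's step can also be verified mechanically: by extensionality, the identity holds exactly when it holds for every possible membership pattern of a single element $x$ in $A$, $B$, $C$, and there are only eight such patterns. A small Python check (my own illustration, not from the thread):

```python
from itertools import product

def identity_holds():
    # a, b, c encode whether x is in A, B, C; 8 cases cover the whole proof.
    for a, b, c in product([False, True], repeat=3):
        lhs = (a or b) and (b or c) and (c or a)
        rhs = (a and b) or (a and c) or (b and c)
        if lhs != rhs:
            return False
    return True
```

Running `identity_holds()` confirms the two sides agree in all eight cases.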
https://tex.stackexchange.com/questions/305472/how-to-insert-subfigure-with-caption-in-ieee-trans

# How to insert subfigure with caption in IEEE trans?
I am trying to insert some subfigures with captions into a figure in IEEE trans, and I don't want the figure to span two columns. But it is not working. Here is what I tried:
\begin{figure}[ht]
\begin{subfigure}[b]{0.25\linewidth}
\centering
\includegraphics[width=0.5\linewidth]{images/1a}
\caption{}
\label{1a}
\vspace{4ex}
\end{subfigure}%%
\begin{subfigure}[b]{0.25\linewidth}
\centering
\includegraphics[width=0.5\linewidth]{images/1b}
\caption{}
\label{1b}
\vspace{4ex}
\end{subfigure}%%
\begin{subfigure}[b]{0.25\linewidth}
\centering
\includegraphics[width=0.5\linewidth]{images/1c}
\caption{}
\label{1c}
\vspace{4ex}
\end{subfigure}%%
\begin{subfigure}[b]{0.25\linewidth}
\centering
\includegraphics[width=0.5\linewidth]{images/1d}
\caption{}
\label{1d}
\vspace{4ex}
\end{subfigure}
\caption{(a), (b) Some examples from CIFAR-10 \cite{4}. The objects in single-label
images are usually roughly aligned. (c), (d) However, the assumption of object alignment is not valid for multi-label
images. Also note the partial visibility and occlusion
between objects in the multi-label images.}
\label{fig1}
\end{figure}
How can I achieve it?
• I cannot run your MWE as I don't have your images, but I imagine the overall width of the figure, having 4 figures in it, each half a \linewidth, would be about 2\linewidth. Did you try \includegraphics[width=0.25\linewidth]{images/1a}? Also, it would be wise for the overall width to be a little less than 1\linewidth, as there will be spaces between figures... maybe 0.2\linewidth or less? – Elad Den Apr 21 '16 at 7:59
If you read the IEEEtran documentation, you will see that it recommends the subfig package rather than subfigure, and an example with subfig is explained in it.
What I understand you want can be obtained with the following code:
\documentclass{IEEEtran}
\usepackage{lipsum}
\usepackage{graphicx}
\ifCLASSOPTIONcompsoc
\usepackage[caption=false, font=normalsize, labelfont=sf, textfont=sf]{subfig}
\else
\usepackage[caption=false, font=footnotesize]{subfig}
\fi
\begin{document}
\section{A}
\lipsum
\section{B}
\lipsum[1-3]
\begin{figure}
\centering
\subfloat[a\label{1a}]{%
\includegraphics[width=0.45\linewidth]{example-image}}
\hfill
\subfloat[b\label{1b}]{%
\includegraphics[width=0.45\linewidth]{example-image}}
\\
\subfloat[c\label{1c}]{%
\includegraphics[width=0.45\linewidth]{example-image}}
\hfill
\subfloat[d\label{1d}]{%
\includegraphics[width=0.45\linewidth]{example-image}}
\caption{(a), (b) Some examples from CIFAR-10 \cite{4}. The objects in
single-label images are usually roughly aligned. (c), (d) However, the
assumption of object alignment is not valid for multi-label
images. Also note the partial visibility and occlusion
between objects in the multi-label images.}
\label{fig1}
\end{figure}
\lipsum[1-5]
\end{document}
• May I suggest that you update your answer, due to issues in referencing the subfloats which is addressed here? – Pouya Sep 12 '18 at 13:13
Here is another way:
\usepackage{subfigure}
\begin{figure}
\centering
\subfigure[First caption]
{
\includegraphics[width=1.0in]{imagefile2}
\label{fig:first_sub}
}
\\
\subfigure[Second caption]
{
\includegraphics[width=1.0in]{imagefile2}
\label{fig:second_sub}
}
\subfigure[Third caption]
{
\includegraphics[width=1.0in]{imagefile2}
\label{fig:third_sub}
}
\caption{Common figure caption.}
\label{fig:sample_subfigures}
\end{figure}
And the output:
https://artofproblemsolving.com/wiki/index.php/Alternating_sum

# Alternating sum
An alternating sum is a series of real numbers in which the terms alternate sign.
For example, the alternating harmonic series is $1 - \frac12 + \frac13 - \frac 14 + \ldots = \sum_{i = 1}^\infty \frac{(-1)^{i+1}}{i}$.
Alternating sums also arise in other cases. For instance, the divisibility rule for 11 is to take the alternating sum of the digits of the integer in question and check if the result is divisible by 11.
Given an infinite alternating sum $\sum_{i = 0}^\infty (-1)^i a_i$ with $a_i \geq 0$: if the corresponding sequence $a_0, a_1, a_2, \ldots$ decreases monotonically to a limit of zero, then the series converges.
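The convergence criterion can be sanity-checked numerically. The sketch below (my own illustration, not part of the wiki page) uses the alternating harmonic series, whose sum is the standard value $\ln 2$; the alternating series test also bounds the error after $n$ terms by the first omitted term $a_{n+1}$.

```python
import math

def partial_sum(n):
    """Partial sum of the alternating harmonic series 1 - 1/2 + 1/3 - ..."""
    return sum((-1) ** (i + 1) / i for i in range(1, n + 1))

# Convergence to ln 2, with error bounded by the first omitted term 1/(n+1).
s = partial_sum(10_000)
error = abs(s - math.log(2))
assert error <= 1 / 10_001
```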
https://socratic.org/questions/how-long-does-a-novel-or-novella-have-to-be

# How long does a novel or novella have to be?
Oct 27, 2016
#### Answer:
Novels or novellas don't really have an exact number of words, but usually, each has from 30,000 to 60,000 words.
Hope this helps!
Nov 1, 2016
#### Answer:
Novels are longer than novellas.
#### Explanation:
A novella is generally greater than 20,000 and less than 60,000 words, typically coming in around the 40,000-word mark. A novel is essentially anything longer than a novella, with the average length being 80,000-100,000 words, depending on genre.
http://tex.stackexchange.com/tags/typewriter/new

# Tag Info
This works fine with pdflatex:

\documentclass[10pt,a4paper,final]{article}
\usepackage[T1]{fontenc}
\usepackage{mathpazo,euler}
\usepackage[scaled=0.9]{DejaVuSansMono}
\usepackage[utf8]{inputenc}
\usepackage[spanish]{babel}
\usepackage{listings}
\lstset{basicstyle=\ttfamily}
\begin{document}
Se utiliza un valor \lstinline!boolean! para poder tener en ...
The hyphenat package works fine:

\usepackage[htt]{hyphenat}
This solves the problem:

\usepackage[htt]{hyphenat}
By default, LaTeX doesn't hyphenate typewriter-type text. With fontspec you can revert this decision quite easily, but you have to newly define a monospaced font.

\documentclass{scrartcl}
\usepackage{fontspec}
\usepackage{polyglossia}
\setdefaultlanguage[spelling=new]{german}
\setmonofont{Latin Modern Mono}[HyphenChar={-}]
\usepackage{tabu}
...
Here's another enumitem approach, but with automatic switching to \ttfamily using before={\ttfamily}.

\documentclass[11pt]{report}
\usepackage[utf8]{inputenc}
\usepackage{textcomp}
\usepackage[T1]{fontenc}
\usepackage{enumitem}
\newlist{itemtt}{itemize}{1}
\setlist[itemtt,1]{label={\textbullet},before={\ttfamily}}
\begin{document}
\begin{itemtt}
\item This ...
Something like this:

\documentclass{article}
\usepackage{enumitem}
\begin{document}
\ttfamily
\begin{itemize}[label=\textbullet]
\item item1
\item item2
\end{itemize}
\end{document}
Thanks to all the information in the answers and other questions on TeX, I finally found a 100% working solution for my situation. I was probably not clear enough in my question. What I want is:
- use \mon everywhere I want, inside tables and items, but also in other commands (which doesn't work for verbatim)
- use monospace
- maybe use a gray background
Some ...
You can possibly do it with \verb. But let's look at what happens with your attempt. In

\newcommand{\mon}[1]{%
  \mbox{%
    \texttt{%
      \protect\detokenize{#1}%
    }%
  }%
}

there is a misplaced \protect that however does nothing bad. The problem is that TeX sees #6 before the “detokenization” actually takes place and this results in a bad token list ...
After reading this answer detailing which option does what, and what \sloppy internally does, I've decided to go with the following, for now:

% Zu lange Zeilen
\emergencystretch 5em%

The value of 5 em is probably too large, but it doesn't seem to hurt so far, judging from the documentation.
Rather than allow hyphenation I would just allow things to break after _, most easily using the url package. Also beware: your definitions added a lot of spurious white space from the ends of lines in the definitions.

\documentclass[draft,11pt]{article}
\usepackage{url}
\begin{document}
\newcommand{\isnormalized}[1]{%
\path{is_normalized}($#1$)%
}
...
http://www.mtosmt.org/issues/mto.04.10.1/mto.04.10.1.wibberley3.html

# Syntonic Tuning: Creating a Model for Accurate Electronic Playback
## Roger Wibberley
KEYWORDS: Cents, Costeley, equal temperament, Finale, interval ratio, pitchbend, pitchwheel, Pythagorean, Syntonic comma
ABSTRACT: This essay is explicitly “educational” in the proper sense: although it has a technical focus this is intended to provide the means of re-examining and reassessing one’s own cognitive processes. It has often been argued that a proper understanding of early music is hindered by our own inability to “hear the music” in the way our ancestors did. Our “modern” ears have (it is often rightly said) become conditioned over time by the sound worlds of later systems of tuning and performance to the extent that our perception of the aesthetics applied to early styles can only be the result of educated guesswork. “We cannot know how a Pythagorean performance really sounded (can we?), and our understanding of the soundscape of Just Intonation (JI) is mainly intuitive, backed up perhaps by a complicated theoretical list of mathematical ratios (isn’t it?).” There is considerably more to a performance than accurate intonation (which is only the starting point) and this essay (along with the previous two) only seeks to explore the particular aspects of performance involving pitch. But the above sense of resignation and capitulation can at least be partly addressed by examining aspects of tuning, and the procedures discussed below will at least provide a means of capturing this single—albeit crucial—aspect of performance. Nothing will replace a group of well-equipped and trained singers, but it is my belief that such singers would need a highly specialized training before successfully achieving performances of the kind outlined. Even then, the nature of “live performance” (however accurate empirically) may still be the victim of only a subjective response from the listener (who is naturally more inclined to trust his or her own preconditioned cognitive responses than the results such singers might produce). 
This essay, therefore, provides a computer-generated means of evaluation, and the sounds provided are empirically correct in their intonation by all standards proclaimed in theoretical works of the time. The challenge, then, is to develop a self-critical approach in which—fully knowing that what is being heard is (intonationally) herein the same for us as it was for them—we can reappraise our own cognition. If any tunings that are created by the application of anything covered in the essay strike the listener as in any way “strange,” “out of tune” or “unusual,” this can be taken confidently as a direct invitation to the listener to reappraise his or her own cognition.
Volume 10, Number 1, February 2004
[1] In my previous two essays on Syntonic and Pythagorean tuning, numerous sound files were included together with graphic files showing in visual terms how the various notes in the scores were inflected upwards or downwards in order to fine-tune the harmonies according to the principles explained therein. The objective here is to show how Finale users can apply the same principles and achieve similar results from their own use of Finale.(1) What will be included is a downloadable Library file that can be imported into Finale and used as described. When this is applied to scores in the way specified, playback of files will be accurately tuned according to the user’s specification. In the case of those who simply wish to apply the file provided, much of the essay can be bypassed. But since an understanding of the principles will enable the application of other tuning systems (such as, for example, quarter-comma meantone), those who might wish to extend their use of tuning systems should follow the essay closely.(2)
[3] The keyboard that you are using will undoubtedly be configured to play in equal temperament.(3) The only way in which pitches that are different from the default pitches (set by the keyboard used) can be generated is by activating the pitchwheel. The pitchwheel will usually be set (by default) to raise or lower the pitch of a note by one octave.(4) Casual (and even seasoned) users of Finale may be unaware of the power it offers for playback. There is more than one method of asking Finale to alter the pitch of a note it plays from the score, but the method that is to be used here is the most accurate and uses the Staff Expression tool.(5)
[4] It will be helpful for you to know whether the Finale version you are using is what I shall henceforward call an “early version” as opposed to a “later version.” This knowledge will not affect your use of the files provided, but will help you to follow some of the explanations I shall shortly be offering for using the program. Generally an “early version” can be identified from the composition of the main tool palette. If it contains both Staff and Score Expression icons (the former indicated by the “mf” symbol and a white note, the latter by an “mf” symbol alone), this will be an “early version.” An example is illustrated in Figure 1, as used in Finale 3.7.2. If, however, your main tool palette contains only a single Expression Tool (indicated by an icon bearing only the “mf” symbol) you will be using a “later version.” Such palettes are shown in Figures 2 (Finale 2000) and 3 (Finale 2003).
Figure 1. Finale 3.7.2 main tool palette containing differing Expression Tools
Figure 2. Finale 2000 main tool palette
Figure 3. Finale 2003 main tool palette
[5] The Expression tool (both “staff” and “score” in early versions) is normally used in order to insert dynamics into the score. It is important that when this method of tuning is used it is always applied to individual notes on a selected staff. This will mean that users of early versions will select the Staff Expression Tool, while users of later versions will prescribe this from the submenus that appear after selecting the common Expression Tool. This will ensure that the marking to be inserted only affects the actual note(s) to be changed. In order at this stage to understand what is going to be done using this tool to provide for pitch changes to individual notes, it will be helpful now to load a Finale file in order to explore the (Staff) Expression tool in the manner to be outlined next.
[6] USERS OF “LATER VERSIONS” SHOULD HERE SKIP TO [7] below. With a Finale file open, select the Staff Expression tool and click on any note in the score to open the Staff Expression box.(6) There will be all the current dynamic markings that are available, although these themselves are not going to be used. Select any of those available and then click the EDIT button. The next box that opens should also contain a section headed “Playback Options,” but if it does not simply click the button marked “Show Playback Options.” In the “Playback Options” window that is now visible there is a drop-down selection menu next to the word “Type” which allows the user to make various kinds of settings that affect playback. When the drop-down menu is opened the type to be selected is “Pitchwheel.” Having selected “Pitchwheel” there is also a “Set to” radio button that should be selected. A numeric value now needs to be inserted into the box alongside this “Set to” button. It is at this point that a value can be inserted into the box to specify exactly what the pitchbend value for the particular Staff Expression originally selected will be. This value will determine the pitch inflection of the note to be sounded.(7)
[7] USERS OF “OLDER VERSIONS” SHOULD SKIP TO [8] below. Later versions require the selection of the common Expression Tool (“mf”). When a note in the previously-opened score is selected the next window displays the current expression marks. Select one, and then click “note attached” or “note expression” (according to version). When the “Select” button is clicked, the next window will present the playback options, and from the “Type” submenu should be selected “pitchwheel.” A numeric value needs to be inserted into the adjoining box, and this value will determine the exact pitch inflection that Finale will apply on playback. Now click the various “Cancel” button to return to the score.
[8] The question is, of course, what should the pitchwheel “set to” values be? What is required is a completely new library of text expressions, each of which has a programmed pitchbend value that has been accurately created and inserted as described above. More specific understanding of the keyboard pitchwheel and the way Finale operates it is needed, first of all, and then an understanding of how the values to be inserted are calculated. In older versions of Finale the default pitchwheel value is 8192; when the pitchwheel is at its lowest pitch (having been pushed to the left) Finale’s pitchwheel value is 0; and when the pitchwheel is at its highest point (being pushed to the right) Finale’s value is 16384 (i.e. 8192 multiplied by 2). The range in older versions is therefore 0-8192 and 8192-16384 (the former giving the range of pitches below the default, and the latter those above). In later versions, however, these numbers have been changed, the default value now being set to 0 (zero), and the respective numeric ranges are therefore -8192-0 and 0-8192.(8) This means that Finale is capable of dividing the interval covered by the pitchwheel (pushed either upwards or downwards) into 8192 equal portions. Although this may seem generous for tuning purposes, it is actually insufficient for the purpose of accurate pitch changes as small as a Syntonic comma as long as the pitch bend set on the keyboard covers a whole octave.(9) What is needed is the facility of splitting the interval of a semitone (rather than an octave) into this number of divisions. This will provide the facility for generating pitches accurate to within 1 part in 8192 parts of one semitone.
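As a back-of-envelope check of this resolution argument (my own figures, not from the essay): a Syntonic comma is 1200 × log2(81/80) ≈ 21.5 Cents, so with the bend range left at an octave one pitchwheel step is 1200/8192 ≈ 0.147 Cents, while with a semitone range it shrinks to 100/8192 ≈ 0.012 Cents.

```python
import math

SYNTONIC_COMMA = 1200 * math.log2(81 / 80)  # ~21.506 cents

# Cents per pitchwheel step (1 part in 8192) for the two bend ranges.
octave_step = 1200 / 8192    # bend range = one octave
semitone_step = 100 / 8192   # bend range = one semitone

# Steps available to represent one Syntonic comma under each range:
comma_steps_octave = SYNTONIC_COMMA / octave_step      # ~147 steps
comma_steps_semitone = SYNTONIC_COMMA / semitone_step  # ~1762 steps
```

With the semitone range, a comma spans roughly twelve times as many pitchwheel steps, which is the accuracy gain the essay is after.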
[9] In order, therefore, to reduce the pitchbend range from an octave to a semitone, the bend setting needs changing on the keyboard itself. Editing the bend value is simple but the procedure depends upon the specification of the keyboard in use.(10) Whatever sounds (patches) are going to be used for playback from Finale need individually editing on the keyboard, and the bend value for each needs setting to “1.”(11) The accuracy that Finale will generate will therefore not be corrupted by incorrect bend settings within the keyboard itself.
[10] What now needs to be provided is a Finale Text Expression file that in effect significantly increases the number of separate pitches available for each octave. The object is to create a tool that enables vocal performance to be simulated, and this is something that was never available on keyboards of fixed pitches.(12) Since each of the twelve notes available on the keyboard now needs to be capable of delivering at least three different pitches separated by a Syntonic comma, the Text Expression library will need to provide for this. The default scale within which such pitch inflection occurs will, however, be purely Pythagorean since it is from these Pythagorean pitches that comma inflection upwards or downwards has to take place. So in addition to the pure Pythagorean intervals of the 4th, 5th and octave, pure 3rds and 6ths will be added to the system by the facility of modifying as necessary standard Pythagorean pitches by comma inflection. The first stage in constructing this new library of pitches is therefore to devise the values needed for the default Pythagorean scale.
### PYTHAGOREAN TUNING
[11] I shall henceforward show numeric values for both “older” and “later” versions of Finale as follows: “later versions” will appear in square brackets following the values for “older” versions. It will always be noted that the values of the former are the same as those of the latter minus 8192.(13)
[12] In constructing the Pythagorean scale each existing note on the equally-tempered MIDI keyboard needs to be allotted an exact mathematical value such that when Finale plays a note with the value defined its pitch will be exactly the one required by the scale. In effect what this library will be doing is retuning the separate pitches of the scale so that they will all have a Pythagorean relationship instead of an equally-tempered one. What needs to be calculated is the exact amount by which each separate note has to be raised or lowered from its equally-tempered default when it is played by Finale. These defaults will all have a pitchwheel value of 8192 [0],(14) and the Pythagorean values will therefore be slightly more or less than this value depending upon the position of the note within the scale. A correct way of arriving at exact Pythagorean values is to take a starting note with the default value of 8192 [0], and then a) move upwards through a series of perfect fifths sharpwards, and b) move downwards through a corresponding chain of perfect fifths flatwards. Taking the note “C” as the starting point, the first deduction will move sharpwards through the fifths ending with A-sharp, while the second deduction will move from the same starting note flatwards through the fifths to end on C-flat. When this process has been completed all the “white notes” will have their correct pitches, and all the “black notes” will exist with two pitches separated by a Pythagorean comma. Sharps will thereby lie a major semitone higher than their parent naturals, and flats a major semitone lower than theirs. This will then provide for all the notes required by the combined use of musica recta and musica ficta. The octave will have been provided with 18 notes instead of the usual 12. As we shall see, however, 18 notes to the octave will be too few to permit Just Intonation which additionally requires the raising and lowering of these default Pythagorean notes by Syntonic commas. 
But this is nonetheless the starting point for arriving at the required scale.(15)
[13] To create precise Pythagorean pitches for the scale, it must be remembered that the notes are initially set on the keyboard to Equal Temperament. As such each note in the score will have a default Finale pitchwheel value of 8192 [0]. Only when this value is, for each separate note, raised or lowered by the exact value required will the Pythagorean scale come into being. And it is only when this new scale has been created that the addition or subtraction of comma values can be made with precision. When this scale, together with the comma values to be applied to it, has been constructed, it will then be possible to edit scores by adding the prescribed playback Text Expressions and achieve a playback in Just Intonation. The first task, therefore, is to construct an accurate Pythagorean scale.
[14] Because the task is to create audible pitches rather than to calculate relative string lengths, it is necessary to work in Cents rather than string length ratios.(16) Since construction of the Pythagorean scale is (as stated in [11] above) achieved by computing pure fifths in a chain upwards, and then downwards, it is therefore necessary to define the exact number of Cents required between successive notes that lie a fifth apart. Since we already know that by default each equally-tempered semitone contains 100 Cents, and therefore that the equally-tempered fifth contains 700 Cents, all we will need to do is to subtract 700 from the total number of Cents calculated for the interval overall to find the difference between the equally-tempered and the Pythagorean pitches of the note concerned. We shall also need to convert Cents value into pitchwheel value which is very simple: since the equally-tempered semitone (set on the keyboard) has a pitchwheel range of 8192 (being either 0-8192 [-8192-0] or 8192-16384 [0-8192]), it must follow that each Cent has a pitchwheel value of 81.92. When, therefore, the difference in Cents between an equally-tempered and a Pythagorean fifth is calculated, the Cents difference needs to be multiplied by 81.92, and the result added to (when moving upwards) or subtracted from (when moving downwards) the default pitchwheel value of 8192 [0]. This will yield the new pitchwheel value for the note that now lies a Pythagorean fifth above or below the note from which the calculation was made.
[15] All the new pitchwheel settings can easily be calculated when the formula for conversion from string-length ratio to Cents is applied. This formula is very simple: where the interval-ratio required is “i” and the Cents value sought is “c,” the conversion is achieved as follows:
c = log (i) * (1200/log (2))
By applying these principles to calculate the pitchwheel setting for “G,”(17) and taking “C” as the starting point with a normal pitchwheel setting of 8192 [0], the value for “G” will be arrived at as follows:
a) the pitchwheel value of C (8192 [0]) is the starting point;
b) the number of Cents required to arrive at G is log(3/2) × 1200/log(2);
c) the difference between what G would have been (700 Cents greater than C) and what it now is as the result of b) above, is the total Cents value given in b) minus 700 (the number of Cents it would have been);
d) this difference is multiplied by 81.92 (the pitchwheel value of each Cent);
e) the result of d) above is added to 8192 [0] (which would have been the default value).
The complete formula for this calculation is therefore as follows:
FOR OLDER VERSIONS: 8192+((log(3/2)*(1200/log(2))-700)*81.92) = 8352.
FOR LATER VERSIONS: 0+((log(3/2)*(1200/log(2))-700)*81.92) = 160
The pitchwheel setting for “G” is therefore 8352 (older versions) or 160 (later versions).(18)
[16] Having now determined the value of the first fifth in the chain sharpwards, the same formula is applied each time the next fifth in the chain is calculated. The only difference will be the starting value. When the G was calculated from the C, the starting value was that for the C (8192 [0]). The result for the G was 8352 [160], and this new result now becomes the starting point for calculating the value for D using otherwise the same formula:
OLDER VERSIONS: 8352 + ((log(3/2) * (1200/log(2)) - 700) * 81.92) = 8512
LATER VERSIONS: 160 + ((log(3/2) * (1200/log(2)) - 700) * 81.92) = 320
In calculating the value for A, the starting point will be the new value for D (8512 [320]). This process simply continues through the fifths sharpwards as far as D-sharp.
[17] The pitches that remain are those to be calculated in the chain of fifths flatwards from the C. The formula is almost identical: only the operator differs (since the movement is now flatwards), the addition becoming a subtraction. The calculation for F will take the value of C as the starting point, and the formula will now be:
OLDER VERSIONS: 8192 - ((log(3/2) * (1200/log(2)) - 700) * 81.92) = 8032
LATER VERSIONS: 0 - ((log(3/2) * (1200/log(2)) - 700) * 81.92) = -160
The old starting figure for C (8192 [0]) is now replaced with the new one for F (8032 [-160]) to find the value of B-flat. This process continues flatwards through the fifths until C-flat is reached. When this has been completed, the whole series of Pythagorean pitchwheel settings will have been determined. These are now given below (the figures appearing in square brackets being those that apply to later versions of Finale, and being equivalent in purpose and function to those given first for older versions):
C: 8192 [0]
D: 8512 [320]
E: 8832 [640]
F: 8032 [-160]
G: 8352 [160]
A: 8672 [480]
B: 8992 [800]
B-flat: 7872 [-320]
E-flat: 7712 [-480]
A-flat: 7552 [-640]
D-flat: 7392 [-800]
G-flat: 7232 [-960]
C-flat: 7072 [-1120]
F-sharp: 9152 [960]
C-sharp: 9312 [1120]
G-sharp: 9472 [1280]
D-sharp: 9632 [1440]
A-sharp: 9792 [1600]
Files created in early versions that are given the first values shown will, when loaded into later versions, have these values automatically changed to those appearing in square brackets. When creating new settings for early and late versions of Finale respectively, care must be taken to ensure that the correct data set is used: older versions using the pitchwheel range of 0-8192-16384, and later versions -8192-0-8192. There are consequently 18 different pitches in this system for each octave.
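The full table above can also be generated programmatically. The following Python sketch (my own illustration, not part of the original workflow) applies the fifth-formula repeatedly, rounding after each step so that each new result becomes the starting point for the next, exactly as described in [16] and [17]. Sharps are written here with "#" only because this is ordinary Python; the score tags themselves use "$" as explained below:

```python
import math

# Pitchwheel offset of one pure fifth relative to the equal-tempered fifth
STEP = (math.log(3 / 2) * (1200 / math.log(2)) - 700) * 81.92  # ~160.16 units

def chain(start, n, direction):
    """Apply the fifth-formula n times (direction +1 = sharpwards, -1 =
    flatwards), rounding after each step as the text describes."""
    value = start
    for _ in range(n):
        value = round(value + direction * STEP)
    return value

sharpwards = ["G", "D", "A", "E", "B", "F#", "C#", "G#", "D#", "A#"]
flatwards = ["F", "Bb", "Eb", "Ab", "Db", "Gb", "Cb"]

values = {"C": 8192}
for i, note in enumerate(sharpwards, start=1):
    values[note] = chain(8192, i, +1)
for i, note in enumerate(flatwards, start=1):
    values[note] = chain(8192, i, -1)

for note, v in values.items():
    print(note, v, v - 8192)  # older-version value, then later-version value
```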
[18] To create an accurate playback of these values, and achieve true Pythagorean tuning, the procedures outlined in [5] to [7] above should now be followed. In the Expression window of Finale, it would be best to delete all existing expressions (unless you still wish to use any of them, which is unlikely). For each Pythagorean pitch it is necessary to Create a new Text Expression entry in which are applied a) a convenient name tag, and b) a playback definition in which “Pitchwheel” is selected from the drop-down menu and the appropriate value (in the list above) typed into the box. The name tag is what will appear in your score when you apply the appropriate Text Expression to each of the notes in turn, and it should be kept clear and simple. The “white notes” can simply use the letter name (e.g. A, B, C etc.), and the flat notes can be supplied with a “b” after the letter (e.g. Bb, Gb etc.). The sharps should not, however, use the hash sign (#), because this is reserved in Finale.(19) In this essay, and the supporting files, I have replaced the hash sign with “$,” so F-sharp is tagged as “F$.”
[19] Once the new Text Expression library containing the Pythagorean playback values has been completed, each note in the score will need to be supplied with its appropriate Text Expression tag. Although this is labor intensive, it is a labor of love!(20) It is absolutely crucial, however, to make sure that each voice part is programmed by Finale to use a different channel for playback; otherwise cross-contamination of playback values will occur.(21) The same actual sound (patch) can be used with no ill effect, but different lines must remain assigned to different playback channels.
[20] In order to illustrate the method of insertion to yield tuned playback, here are provided some screen shots from three different versions of Finale. The music file used (the same in each case) was created by version 3.7.2 (the earliest of the examples) and has been loaded into each version to illustrate the method of insertion. The first note in the highest staff is A, and the task in each case is to insert an instruction to play the note back at its Pythagorean pitch. The music file already has all the Pythagorean values in its active Expression Library (this note is programmed to use a pitch bend value of 8672, as indicated in the list of figures in [17] above). Figure 4 shows the initial window provided by Finale 3.7.2. This window appeared because the Staff Expression icon in the left box was selected, and the first note in the score was clicked. In this Selection window, the Staff Expression “A” has now been selected. To discover what its current settings are (and even to edit them) the “Edit. . .” button will now be clicked. This now opens the Text Expression Designer window where it can be seen that the current setting is “Pitchwheel” set to “8672.”(22) This is shown in Figure 5. Since this is correct, the various windows are now closed by clicking on the appropriate “OK” or “Select” buttons, and the letter “A” now appears above the note in the score. When played back the pitch will be Pythagorean A.
Figure 4. Staff Expression Selection window (version 3.7.2)
Figure 5. Text Expression Designer window (version 3.7.2)
Figure 6. Expression Selection window (Finale 2000)
[21] In later versions, the methods are similar, but the Pitchwheel value for “A” that appears after the same file is loaded will now have been converted automatically to “480.”(23) Figures 6, 7 and 8 are screen shots from Finale 2000. Figure 6 shows the primary Expression Selection window. The left box shows that the Expression Tool was selected (note the absence now of the Staff Expression tool shown in Figure 4—choice for insertion as “staff” or “score” expression appears in the next window), and the Expression Selection box appeared because the first note in the score was clicked. The expression “A” has now been selected, and when the “Select” button is clicked a new “Measure-attached Expression Assignment” window appears. This is shown in Figure 7. Here it is important to select “This Staff Only” (thereby making the expression a “staff” one, as in version 3.7.2, rather than a “score” one). When the “OK” button is clicked, the expression “A” is inserted above the note, and whatever playback value (if any) was given will be activated upon playback. To check what the current settings are (and even to edit them if desired) the “Edit. . .” button in the primary window (Figure 6) is selected. This now opens the Text Expression Designer window as shown in Figure 8. Here it can be observed that the Pitchwheel setting is now “480” showing that Finale has updated the original value of “8672” and that the setting is correct.
Figure 7. Measure-attached Expression Assignment window (Finale 2000)
Figure 8. Text Expression Designer window (Finale 2000)
[22] Equivalent windows used in Finale 2003 are shown in Figures 9 and 10. The only obvious differences from Finale 2000 are what might be viewed as “window dressing” differences. For example “Note attached” (version 2000) has now become “Note Expression” (version 2003), although the functions are identical. The Text Expression Designer window is identical in function with its 2000 counterpart, and also shows—as expected—that the Pitchwheel value stands now at “480.”
Figure 9. Expression Selection window (Finale 2003)
Figure 10. Text Expression Designer (Finale 2003)
[23] Below are two downloadable Finale files (Example 1 and Example 2).
Example 1: Josquin Desprez, Ave Maria (excerpt using equal temperament) [Ex1.zip (for PC users)] [Ex1.MUS.sit (for Mac users)]
Example 2: Josquin Desprez, Ave Maria (excerpt using Pythagorean tuning) [Ex2.zip (for PC users)] [Ex2.MUS.sit (for Mac users)]
The first is a straight file for playback using four separate channels, and the same patch number for each. You are advised to avoid using any “piano” sound because our ears are conditioned to assume that anything sounding like a piano uses Equal Temperament. This might give the illusion that the piece is “out of tune” when it really is not. Select a melody instrument for playback through channels 1, 2, 3 and 4 (a Recorder sound is ideal). The file will not change your keyboard patch settings. This should give a normal playback using equal temperament. The second is an edited version of the first, but now attaches to each note its Pythagorean playback values. The correct Expression library (detailed above) is embedded within the file, so it should simply play the music correctly. But the following checklist should first be worked through before trying it:
CHECKLIST
1. Have you reset your Pitchbend Range to a value of “1” on the keyboard? Test the bender manually to ensure that it inflects upwards and downwards by only a semitone for patches you will set for channels 1-4 (the setting used for Examples 1 and 2). If this has not been done, you will quickly find that comedy becomes tragedy! This setting must of course be applied to each and every patch you intend to use for playback.
2. Before playing the file, open the Instrument List window to ensure that your download has retained separate channels for each staff. If not, you will have to assign these staves to different channels using this window. You can, however, use the same patch number for each channel (having, as in a) above, verified the pitch bend setting as being “1”). Make sure that the “Send Patches before Play. . .” box is checked.
### SYNTONIC TUNING
[24] As stated earlier, Syntonic tuning is only an inflection of the Pythagorean scale in which Syntonic comma adjustment is applied to particular notes of the tetrachords. Potentially any note of the scale can be adjusted upwards or downwards by a comma depending upon its tetrachordal location and context. For this reason, each note of the Pythagorean scale needs to be provided with complementary values reflecting upward and downward comma adjustment. Indeed this adjustment can sometimes be greater than a single comma over the course of a whole piece, so what needs to be provided for is two or three complementary values for each note sharpwards and flatwards. Where an unavoidable accumulation of commas causes a composition to undergo an audibly significant and irreversible rise or fall in base pitch, the assumption should be that Syntonic tuning is the wrong soundscape for that piece and that Pythagorean tuning was intended by the composer. It is still advisable, however, to provide for a certain number of comma shifts either way, if only to test the effect of Syntonic tuning in such cases and adjudge the impact of the pitch change.
[25] The Syntonic comma is a fixed though non-melodic interval whose ratio is 81:80.(24) As such it always gives the same quantifiable inflection when applied to an existing pitch. Whether the inflection is sharpwards or flatwards in no way changes its definition as a quantity. In order to calculate a pitchwheel value for the interval it must first be converted into Cents, and then multiplied by 81.92 (the pitchwheel value for each Cent). Conversion into Cents is via the normal formula for converting from an interval ratio (i):
LOG(i) * (1200/LOG(2)).
Pitchwheel value now requires this to be multiplied by 81.92, and the result is then added to or subtracted from the pitchwheel value of an already existing note. If, therefore, the pitchwheel value of “C minus a Syntonic comma” is computed, the formula (starting from 8192 for the C) will be:
8192 - (LOG(81/80) * (1200/LOG(2)) * 81.92) = 6430
The result of 6430 will therefore be set for pitches where C is to be lowered by a Syntonic comma. Were it to be raised by a Syntonic comma, the operator would be changed to a “+”:
8192 + (LOG(81/80) * (1200/LOG(2)) * 81.92) = 9954.
This new value of 9954 will consequently be set for notes where C is raised by a Syntonic comma.(25)
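Both comma calculations can be verified in Python (again only a sketch of the arithmetic, using the older-version default of 8192 for C):

```python
import math

# The Syntonic comma (81:80) converted to Cents, then to pitchwheel units
# at 81.92 units per Cent: roughly 1762 units.
COMMA = math.log(81 / 80) * (1200 / math.log(2)) * 81.92

print(round(8192 - COMMA))  # C lowered by a Syntonic comma: 6430
print(round(8192 + COMMA))  # C raised by a Syntonic comma: 9954
```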
Figure 11. Pythagorean pitchwheel values with comma shifts (older versions)
Figure 12. Pythagorean pitchwheel values with comma shifts (later versions)
[26] Charts plotting all the pitchwheel values of the Pythagorean scale, together with comma shifts upwards and downwards, are most easily achieved by using a spreadsheet. Figures 11 and 12 show complete charts (Figure 11 for older versions, and Figure 12 for later) plotting up to four comma shifts upwards and downwards. It is unlikely that this number will be needed, let alone ever exceeded, although it can be extended if desired. The default value for C (8192 [0]) is the only simple numeric entry contained in these spreadsheets. Every other value has been automatically computed and generated by the insertion of formulae. What will be noted is that all the values in Figure 12 are less—by a value of 8192—than the corresponding values in Figure 11. In these the default Pythagorean pitchwheel value of each note is shown in column 2, and the remaining columns list the settings for the comma shifts as indicated at the head of each column. These are the values that need to be programmed into the new Expression library either from Figure 11 or Figure 12 as appropriate.
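The spreadsheet formulae behind such a chart can be sketched in Python (my own illustration, not the author's spreadsheet; individual entries may differ from Figure 11 by a single unit depending on where rounding is applied):

```python
import math

# The Syntonic comma in pitchwheel units (about 1762)
COMMA = math.log(81 / 80) * (1200 / math.log(2)) * 81.92

def comma_chart(base, shifts=4):
    """Pitchwheel values for a note at its Pythagorean default (shift 0)
    and raised/lowered by up to `shifts` Syntonic commas."""
    return {n: round(base + n * COMMA) for n in range(-shifts, shifts + 1)}

# e.g. the row for C (default 8192 in older versions):
print(comma_chart(8192))
```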
[27] In creating a complete library of Staff Expressions to give accurate playback, as outlined above, the name tags need to be simple and clear in order to indicate the exact inflections of each note. For this purpose, the library encoded in Example 2 above contained all the Pythagorean pitches and these are identified simply by the letter name of the note concerned. Where there is a flat this is indicated by the “b” suffix (e.g. Bb), but sharps are shown by the use of the suffix “$” (e.g. F$) for the reason stated in note 19 above. When this simple library is expanded so as to include all the upward and downward Syntonic comma inflections for every note, the tags need to indicate the base notes as well as the exact inflections. The simplest method of displaying these attributes is to use the letter name of the note, an operator indicating upward or downward inflection (“+” or “-”) and a numeral showing the number of comma values by which the note has been inflected. Thus the tag “C-1” will indicate C lowered by a comma, “F$+2” the note F-sharp raised by two commas, and “Bb+1” the note B-flat raised by one comma.(26)
[28] Example 3 below is a downloadable Finale file containing the complete active library of pitches displayed above in Figure 11.(27)
Example 3: Willaert, Sicut erat in principio from Vesper-Psalm 109 [Ex3.zip (for PC users)] [Ex3.MUS.sit (for Mac users)]
The score it displays has already been edited by the insertion of appropriate Staff Expressions in which is encoded the exact pitchwheel values for playback. If you are sure that your keyboard is set to the appropriate Pitch Bend value, and you have verified that the bend is only a semitone upwards and downwards when the pitchwheel is activated, you can play the file back and it should be finely tuned to work in Just Intonation (for which soundscape it was clearly designed and carefully controlled by the composer). Before playing it, make sure that you have allotted appropriate patches to channels 1-4, and that each patch to be used has a pitch bend setting (on the keyboard) of “1.” In order to apply this library to other scores of your own, you should use the “Save Library. . .” command, select the radio button for “Text Expressions,” and give it a recognizable name (e.g. “Syntonic”). Once Saved, this library can then be Opened from within any other file created and it will be immediately active. If you intend to use it frequently for new scores, you could even create a template to save time importing the file every time you need it.
[29] This essay has not primarily been offered in order to increase an interest in or awareness of technology, though hopefully those readers who use technology will have benefited from it in some way. The real purpose has rather been to demonstrate that not only is it possible to know that the pitches being created are indeed those understood and used by early musicians, but also that in hearing them with this certain knowledge we can extend our cognitive experience and thereby achieve a much better understanding of the soundscapes that for too long have remained obscure. Pitch, however, is only one part (even though an important one) of this soundscape. Furthermore an understanding of the ways in which these pitches can be recreated through simple technology does not in itself provide the musical understanding necessary to know where and how to apply the pitches that can be generated. Only by the development of a meticulous and clinical approach to comma analysis can the perceptions this yields be realized through the technology that has been created.
### POSTSCRIPT: Guillaume Costeley: Seigneur dieu ta pitié
[30] This short postscript will offer a curiosity that demonstrates again Finale’s power to provide superfine tunings. While many readers will be familiar with Guillaume Costeley’s spiritual chanson Seigneur dieu ta pitié, very few will ever have heard it rendered, as the composer prescribed in his Preface to the 1570 print, in 19-tone equal temperament (19-tet). He there described the use of a keyboard upon which seven extra black keys were to be added.(28) The normal black keys were supplemented by a further seven, one lying alongside each existing one, and an extra one being added between B and C, and between E and F. In this system B-sharp was the same keyboard note as C-flat, and E-sharp the same as F-flat. This resulted in nineteen separate notes to each octave, and Costeley prescribed that the interval between each successive note should be one-third of a tone. Diatonic semitones are to be given the interval value of two-thirds of a tone.
[31] It is clear from this description that tones are smaller than standard.(29) It must also be the case that minor thirds (equaling five thirds of a tone) are larger than the Pythagorean minor thirds, and that major thirds (equaling four thirds of a tone) must be smaller than the Pythagorean ones. Thirds and sixths will therefore be nearer to the JI intervals, while the fourths and fifths will be slightly dissonant.(30)
[32] The programming of a playback for 19-tet is very easy in Finale, since only twenty-one different values have to be provided for (which is only three more than the Pythagorean scale required).(31) Since the octave (1200 Cents) divides into nineteen portions, each one-third step must be equal to 63.15789 Cents. When this scale is allotted Pitchwheel settings, the following values will be found to be correct (again values for the later versions of Finale appearing in square brackets):
C: 8192 [0]
C-sharp: 5174 [-3018]
D-flat: 10348 [2156]
D: 7330 [-862]
D-sharp: 4312 [-3880]
E-flat: 9485 [1293]
E: 6467 [-1725]
E-sharp: 3449 [-4743]
F-flat: 11641 [3449]
F: 8623 [431]
F-sharp: 5605 [-2587]
G-flat: 10779 [2587]
G: 7761 [-431]
G-sharp: 4743 [-3449]
A-flat: 9917 [1725]
A: 6898 [-1294]
A-sharp: 3880 [-4312]
B-flat: 9054 [862]
B: 6036 [-2156]
B-sharp: 3018 [-5174]
C-flat: 11210 [3018]
Figure 13. The 19-tet scale
When these values are created as playback data for the text expressions, a 19-note equally-tempered scale can be generated. Figure 13 shows such a scale together with the related text expressions that have been configured in this way for playback. When the audio file provided is played, the exact effect of the scale can be heard and each step lies exactly one-third of a tone from its neighbor.
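For readers who prefer to generate rather than transcribe these values, the following Python sketch reproduces the 19-tet table. The mapping of each note name to the 12-tet key on which it is notated (e.g. E-sharp bends from the F key, F-flat from the E key) is my inference from the values above, and occasional entries may differ from the printed list by a single pitchwheel unit depending on rounding:

```python
STEP_CENTS = 1200 / 19   # one nineteenth of an octave, about 63.15789 Cents
UNITS_PER_CENT = 81.92   # pitchwheel units per Cent

# (note name, 19-tet steps above C, 12-tet semitone of the notated key)
NOTES = [
    ("C", 0, 0), ("C#", 1, 1), ("Db", 2, 1), ("D", 3, 2), ("D#", 4, 3),
    ("Eb", 5, 3), ("E", 6, 4), ("E#", 7, 5), ("Fb", 7, 4), ("F", 8, 5),
    ("F#", 9, 6), ("Gb", 10, 6), ("G", 11, 7), ("G#", 12, 8), ("Ab", 13, 8),
    ("A", 14, 9), ("A#", 15, 10), ("Bb", 16, 10), ("B", 17, 11),
    ("B#", 18, 12), ("Cb", 18, 11),
]

values = {}
for name, step, semitone in NOTES:
    # Bend = (19-tet pitch in Cents) minus (equal-tempered pitch of the key)
    offset = (step * STEP_CENTS - semitone * 100) * UNITS_PER_CENT
    values[name] = round(8192 + offset)  # subtract 8192 for later versions
    print(name, values[name], values[name] - 8192)
```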
[33] When these values are applied to the Costeley chanson, they provide a simple and striking opportunity to listen to the piece according to the rare soundscape of 19-tone equal temperament. Figure 14 provides a score of the chanson(32) while the attached audio file provides a 19-tone equally-tempered performance simulating an organ.
Figure 14. Guillaume Costeley’s spiritual chanson Seigneur dieu ta pitié
Roger Wibberley
Goldsmiths University of London,
Department of Music,
New Cross,
London SE14 6NW
r.wibberley@gold.ac.uk
### Footnotes
1. Of the many available music notation programs, my preference for Finale here is only because it is particularly well suited to combining visual analysis with superfine pitch control. But its method is unique, and files converted into, say, Sibelius will not retain the MIDI settings that have been applied. Although Finale has been upgraded regularly, most of the changes have been designed only to enhance user friendliness and to present better looking windows and scores. The window dressing aside, the functionality of the program has remained fairly stable. But in the specific area to be discussed below later versions of Finale have adopted a slightly different (and perhaps more logical) method of data management. This will be explained in due course.
2. Readers might like to know that the writer does not regard himself as a technological boffin, and they might perhaps be comforted (or irritated if they expect boffin-style discourse) by the assurance that explanations will be simple and comprehensible.
3. If your keyboard provides other preset temperaments such as “Pythagorean” or “Just,” you should make sure that it is only set to equal temperament. All the settings that will be provided in this essay take the equally-tempered keyboard as the default. None of the music to be generated will actually use equal temperament of course, but the pitchwheel settings presume this as being the default.
4. You can verify this quite simply. Assuming your MIDI keyboard is set up to generate sounds through your speakers, hold down any note and move the pitchwheel fully to the right and then the left. The pitch of the note should rise and fall by a full octave. If it is different this does not matter, and it merely indicates that the default setting has been changed. The setting will again be changed later to another value in any case.
5. As will be shown, early versions of Finale provided both Staff Expression and Score Expression tools, while later versions offer only a common and single Expression Tool. In these later versions, setting of the required entry for either “Staff” or “Score” is made via various sub menus. The functionality of whatever entry is made remains identical however. A less useful method of pitch control is to open the Midi Tool and edit the pitchbend values of notes that are to be changed. This is far less accurate and anything that is done remains invisible except to the inner workings of the program. The more accurate method outlined here remains visible in the score, and can easily be edited or changed.
6. If the score is empty, perhaps because the file is newly created, simply insert a note. Then select the Staff Expression tool icon and click on the note.
7. Now that the procedure for editing and inserting pitchwheel values has been followed, the various “Cancel” buttons can be clicked to close down the Staff Expression tool so as to avoid changing the note originally selected. In order to rehearse the procedure again, the instructions given in [6] above can be repeated.
8. These are the new values that Finale automatically converts to when a file created in what it recognizes as an “older version” is loaded into the later version. In such a file, later versions will convert original pitchwheel values of 8192 to 0, 0 to -8192, and 16384 to (+)8192. All resulting values in the later versions will therefore be equivalent to all those in the early version minus 8192.
9. It is, however, much better than what is provided by the MIDI tool which only splits the same pitchbend interval into 64 equal parts. This is wholly inadequate for pitches of the accuracy required.
10. The keyboard User Manual will give the information needed for altering the bend value. On my Roland the bender range can be accessed via the “Edit” and then the “Lower” buttons, and found by scrolling through the list with the “Display” button. When “Bender Range” is displayed, its value can be set to “1” with the “Value” button and the new setting saved using the “Write” and “Enter” buttons. Other keyboards will have equivalent methods of making changes to these settings.
11. The Bend Range value increments by semitones so that the default value of “12” yields a pitch bend of a full octave up and down. By changing this setting to “1” the bend range will be reduced to only a semitone up or down.
12. Keyboards always adopted “compromise” tunings although some experiments with split keys were designed to increase the number of pitches available for each octave. This facilitated a differentiation between sharps and flats so as to enable the consistent use of pure thirds on all degrees. Unaccompanied vocal performance, however, did not need to compromise and comma inflection was a normal part of a singer’s technique. This might require the same note to vary in pitch (according to context) from its normal Pythagorean position to a comma higher or lower. Frequently, but under tight compositional control, this might be extended at times up to two commas higher or lower than Pythagorean pitch. But such flexibility was completely out of the range of normal keyboards whose tuning (in whatever compromise system was adopted) had to remain entirely fixed for a particular performance.
13. I should again stress that files created with older versions are updated automatically when opened by later versions of Finale, but when creating new files in any version the user will need to know which set of values to apply. If the list provided below for “older versions” is mistakenly used in “later versions,” Finale—in blissful ignorance—will produce extremely bizarre results.
14. As stated in [11] above, “8192” here refers to the default numeric value for “older versions” and the following “[0]” that for “later.”
15. It must be remembered that the Pythagorean scale is the default scale for Just Intonation, and that the tuning of JI arises from the addition/subtraction of the Syntonic comma (81:80) to/ from notes that are modified for consonance purposes. The effect of this modification is to change all the Pythagorean diatonic semitones from minor (256:243) to major (16:15), to narrow all the major thirds and sixths, and to widen all the minor thirds and sixths. But these changes all result from the single application each time of a Syntonic comma adjustment (upwards or downwards) that changes the default Pythagorean pitches concerned.
16. Theorists who explain the Pythagorean scale do so in terms of sounding length, and this leads to sounding length ratios. Thus, for example, the pure fifth is 3:2, and the octave is 2:1. Although the same ratios are inversely correct for the relative quantities defining the two pitches involved, they do not define the pitches themselves (but only the relative values between them). What is needed here is an unambiguous definition of the actual pitch to be generated for each note defined. This can only be achieved by converting the ratio values into Cents and then calculating the pitch of each note in terms of the number of Cents that make it higher or lower than the default note from which it is computed.
17. This value for “G” will be the same for every “G” in every octave, and all other respective values obtained for all other notes will be the same for all octaves.
18. Again the numeric value for later versions is less than for older ones by a total of 8192. This, like all other pitchwheel values, is calculated to the nearest 1 in 8192 parts of a semitone. Equally-tempered fifths have been narrowed from their Pythagorean defaults. Since the pitchwheel value of C is 8192 [0], the restoration of the G to its Pythagorean position will necessarily increase its pitchwheel value. The new value of 8352 [160] has raised the pitch by a pitchwheel value of 160. When this is divided by 81.92 (the pitchwheel value for each Cent as shown in [14] above), the difference is found to be 1.953125 Cents
19. If, for example, you indicated F-sharp with the tag “F#,” Finale would show “9152” because it would assume that you had asked for the numeric value assigned to be displayed in place of the tag itself.
20. Experienced Finale users will know that the process of inserting Text Expressions can be simplified and speeded up by the use of metatools. Since most pieces using Pythagorean tuning only have a range of up to nine different pitch classes in each octave, assigning each global pitch to a metatool makes insertion very quick and easy. Other users will simply have to make a separate insertion (via the Staff Expression window) for each and every note in the score.
21. In the Instrument List window it will be necessary to assign a different instrument for each staff, and then a different channel. In the “Prog.” column the same sound patch can be selected if wished.
22. This number can, of course, be changed to whatever value you like dependent upon the pitch needed relative to any tuning requirement or system you wish to use. It will be set for the entire piece, however, and the current value is the one needed for its Pythagorean pitch.
23. The figure “480” indicates the same pitch inflection within the 0-8192 numeric range (used by later versions) as does the figure “8672” within the numeric range 8192-16384 (used by older versions), representing simply an increase in pitch bend of 480 units.
24. It is “non-melodic” since it was considered too small to form a discrete interval in its own right. But when it was added to or subtracted from a larger interval the overall result did yield a new melodic interval. When added to a Pythagorean minor semitone, the result was a major semitone (16:15) which was a distinct interval capable of being learnt and performed. Also when it was subtracted from a Pythagorean tone (9:8) the result was the minor tone (10:9) which again was a discrete interval to be learnt and sung. The addition or subtraction of this quantity is what gives rise to the new intervals whose cognition and accurate delivery lies at the heart of the skills required for the accurate performance of Just Intonation.
25. Both these calculations are, of course, for the older versions of Finale. For the newer versions (which will upgrade existing old-version files automatically) the figure “8192” will be replaced with “0,” and the results of the above calculations will be respectively “-1762” and “1762.”
26. In order to produce a clean and pure Syntonic intonation, you are strongly advised to eliminate completely any default vibrato that will undoubtedly be set on your MIDI keyboard. Since even the most gentle vibrato is unlikely to create pitch fluctuation of less than a Syntonic comma, it stands to reason that the presence of such vibrato will sabotage any attempt to enter the true sound world of Syntonic tuning. On some keyboards, even the “harpsichord” patch is infected by vibrato. Removing it is straightforward, but reference should again be made to the User Manual or technical assistance sought.
27. If you load it into one of the later versions of Finale, you will find that the values have automatically been updated to those contained in Figure 12 above.
28. The issues surrounding this composition are too complex to be discussed here and will form the substance of a future article. But his description of the keyboard upon which he stipulated the chanson could be successfully performed, while remaining in tune throughout, shows it to have been similar to the one discussed by other musicians of the time (notably Vicentino).
29. The tones must be smaller because, if the octave equals nineteen intervals of one-third of a tone each, there must be at least six and one-third tones in each octave. The Pythagorean octave contains only five tones plus two minor semitones (which is less than six overall). The keyboard tones described by Costeley must therefore be minor tones, roughly equivalent to the Syntonic (10:9) minor tones. In compensation for this, the diatonic semitones are large (two-thirds the size of a tone, in fact), and these, too, must correspond roughly with the Syntonic diatonic semitones (16:15), although they are marginally larger in size. The occasional use of non-diatonic semitones (used either melodically or arising from close cross relations) provides, according to Costeley’s assessment, movement equaling only one-third of a tone.
30. In this temperament, the interval of a fifth is perceptibly narrowed (much more so indeed than in 12-tone equal temperament), and the fourths are correspondingly widened by a no less perceptible amount. The overall effect of this temperament is radically different from that of 12-tone equal temperament. In the latter, the effect (as we are used to) is one of tolerably pure fourths and fifths together with pure octaves, upon which have been superimposed distinctly out-of-tune thirds and sixths (to which our ears have now become accustomed). In the former, however, the soundscape is one in which the fourths and fifths are sufficiently out of phase to produce a wavy timbre (similar to a gentle voix celeste organ sound), upon which has now been superimposed strikingly pure thirds and sixths.
31. There will need to be twenty-one separate values because the B-sharp/C-flat and E-sharp/F-flat pairs will each need different values for both notes (even though whichever is used will provide the same actual pitch). Sometimes Finale will have to read F-flat, and sometimes E-sharp: the first (= E in 12-tet) will have to be raised from its default pitch while the second (= F in 12-tet) will need to be lowered. Whichever is used in the version edited for keyboard will depend solely upon its diatonic context: if G-flat descends by a diatonic semitone (= two-thirds of a tone) the following pitch will be written as F-flat; but if a D-sharp rises by a diatonic semitone the following note will appear as E-sharp. Finale therefore needs to have both members of each pair provided for.
32. The appearance of the score will sometimes look strange, and somehow “between the cracks,” to those thinking in 12-tone equal temperament. Frequently chords appear as though they need re-enharmonizing. In reality, however, each note is completely pitch-specific and relates to a particular key on the keyboard. Only the single black notes dividing the B/C and E/F pairs have alternative means of notation.
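The arithmetic in note 29 can be made concrete. The sketch below (my own illustration, not part of the article) reads Costeley's third-tones as 19 equal divisions of the octave and compares the resulting minor tone and diatonic semitone with the Syntonic intervals cited there:

```python
import math

def cents(ratio):
    """Size of a frequency ratio in cents (1200 cents per 2:1 octave)."""
    return 1200 * math.log2(ratio)

step = 1200 / 19              # one third-of-a-tone, about 63.2 cents
tone = 3 * step               # keyboard tone, about 189.5 cents
diatonic_semitone = 2 * step  # two-thirds of a tone, about 126.3 cents

# Compare with the Syntonic intervals mentioned in note 29
print(round(tone, 1), round(cents(10 / 9), 1))               # tone vs 10:9
print(round(diatonic_semitone, 1), round(cents(16 / 15), 1))  # semitone vs 16:15
```

The 19-division tone (≈189.5 cents) sits close to the Syntonic minor tone (≈182.4 cents), and the two-step diatonic semitone (≈126.3 cents) is indeed somewhat larger than 16:15 (≈111.7 cents), as the note observes.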
Of the many available music notation programs, I prefer Finale here only because it is particularly well suited to combining visual analysis with superfine pitch control. But its method is unique, and files converted into, say, Sibelius will not retain the MIDI settings that have been applied. Although Finale has been upgraded regularly, most of the changes have been designed only to enhance user-friendliness and to present better-looking windows and scores. Window dressing aside, the functionality of the program has remained fairly stable. But in the specific area to be discussed below, later versions of Finale have adopted a slightly different (and perhaps more logical) method of data management. This will be explained in due course.
Readers might like to know that the writer does not regard himself as a technological boffin, and they might perhaps be comforted (or irritated if they expect boffin-style discourse) by the assurance that explanations will be simple and comprehensible.
If your keyboard provides other preset temperaments such as “Pythagorean” or “Just,” you should make sure that it is only set to equal temperament. All the settings that will be provided in this essay take the equally-tempered keyboard as the default. None of the music to be generated will actually use equal temperament of course, but the pitchwheel settings presume this as being the default.
You can verify this quite simply. Assuming your MIDI keyboard is set up to generate sounds through your speakers, hold down any note and move the pitchwheel fully to the right and then the left. The pitch of the note should rise and fall by a full octave. If it is different this does not matter, and it merely indicates that the default setting has been changed. The setting will again be changed later to another value in any case.
As will be shown, early versions of Finale provided both Staff Expression and Score Expression tools, while later versions offer only a single, common Expression Tool. In these later versions, the required "Staff" or "Score" entry is set via various sub-menus. The functionality of whatever entry is made remains identical, however. A less useful method of pitch control is to open the MIDI Tool and edit the pitchbend values of the notes to be changed. This is far less accurate, and anything that is done remains invisible except to the inner workings of the program. The more accurate method outlined here remains visible in the score and can easily be edited or changed.
If the score is empty, perhaps because the file is newly created, simply insert a note. Then select the Staff Expression tool icon and click on the note.
Now that the procedure for editing and inserting pitchwheel values has been followed, the various “Cancel” buttons can be clicked to close down the Staff Expression tool so as to avoid changing the note originally selected. In order to rehearse the procedure again, the instructions given in [6] above can be repeated.
These are the new values that Finale automatically converts to when a file created in what it recognizes as an “older version” is loaded into the later version. In such a file, later versions will convert original pitchwheel values of 8192 to 0, 0 to -8192, and 16384 to (+)8192. All resulting values in the later versions will therefore be equivalent to all those in the early version minus 8192.
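The conversion rule stated here can be captured in a one-line helper (an illustration of my own; the function name is an assumption, not Finale terminology):

```python
def to_later_version(old_value):
    """Convert an older-version Finale pitchwheel value (range 0..16384,
    centre 8192) to the later-version range (-8192..8192, centre 0)."""
    return old_value - 8192

# The three reference points given in the text
print(to_later_version(8192))   # 0
print(to_later_version(0))      # -8192
print(to_later_version(16384))  # 8192
```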
It is, however, much better than what is provided by the MIDI Tool, which splits the same pitchbend interval into only 64 equal parts. This is wholly inadequate for pitches of the accuracy required.
The keyboard User Manual will give the information needed for altering the bend value. On my Roland the bender range can be accessed via the “Edit” and then the “Lower” buttons, and found by scrolling through the list with the “Display” button. When “Bender Range” is displayed, its value can be set to “1” with the “Value” button and the new setting saved using the “Write” and “Enter” buttons. Other keyboards will have equivalent methods of making changes to these settings.
The Bend Range value increments by semitones so that the default value of “12” yields a pitch bend of a full octave up and down. By changing this setting to “1” the bend range will be reduced to only a semitone up or down.
Keyboards always adopted “compromise” tunings although some experiments with split keys were designed to increase the number of pitches available for each octave. This facilitated a differentiation between sharps and flats so as to enable the consistent use of pure thirds on all degrees. Unaccompanied vocal performance, however, did not need to compromise and comma inflection was a normal part of a singer’s technique. This might require the same note to vary in pitch (according to context) from its normal Pythagorean position to a comma higher or lower. Frequently, but under tight compositional control, this might be extended at times up to two commas higher or lower than Pythagorean pitch. But such flexibility was completely out of the range of normal keyboards whose tuning (in whatever compromise system was adopted) had to remain entirely fixed for a particular performance.
I should again stress that files created with older versions are updated automatically when opened by later versions of Finale, but when creating new files in any version the user will need to know which set of values to apply. If the list provided below for “older versions” is mistakenly used in “later versions,” Finale—in blissful ignorance—will produce extremely bizarre results.
As stated in [11] above, “8192” here refers to the default numeric value for “older versions” and the following “[0]” that for “later.”
It must be remembered that the Pythagorean scale is the default scale for Just Intonation, and that the tuning of JI arises from the addition/subtraction of the Syntonic comma (81:80) to/ from notes that are modified for consonance purposes. The effect of this modification is to change all the Pythagorean diatonic semitones from minor (256:243) to major (16:15), to narrow all the major thirds and sixths, and to widen all the minor thirds and sixths. But these changes all result from the single application each time of a Syntonic comma adjustment (upwards or downwards) that changes the default Pythagorean pitches concerned.
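The comma arithmetic described here can be checked with exact rational arithmetic. A short sketch of my own, using Python's `fractions` module:

```python
from fractions import Fraction

# Interval ratios from the text
syntonic_comma = Fraction(81, 80)
pythagorean_minor_semitone = Fraction(256, 243)
pythagorean_tone = Fraction(9, 8)

# Adding the comma to the minor semitone yields the major semitone (16:15)
major_semitone = pythagorean_minor_semitone * syntonic_comma
print(major_semitone)  # 16/15

# Subtracting the comma from the tone yields the minor tone (10:9)
minor_tone = pythagorean_tone / syntonic_comma
print(minor_tone)  # 10/9
```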
Theorists who explain the Pythagorean scale do so in terms of sounding length, and this leads to sounding-length ratios. Thus, for example, the pure fifth is 3:2, and the octave is 2:1. Although the same ratios hold (inversely) for the frequencies that define the two pitches involved, they do not define the pitches themselves, only the relative values between them. What is needed here is an unambiguous definition of the actual pitch to be generated for each note. This can only be achieved by converting the ratio values into Cents and then calculating the pitch of each note in terms of the number of Cents that make it higher or lower than the default note from which it is computed.
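The ratio-to-Cents conversion described here is a one-liner, using the standard definition of 1200 Cents per 2:1 octave (a sketch of my own, not the author's):

```python
import math

def ratio_to_cents(numerator, denominator):
    """Convert a frequency ratio to Cents (1200 Cents per 2:1 octave)."""
    return 1200 * math.log2(numerator / denominator)

print(round(ratio_to_cents(2, 1), 3))    # 1200.0  (octave)
print(round(ratio_to_cents(3, 2), 3))    # 701.955 (pure fifth)
print(round(ratio_to_cents(81, 80), 3))  # 21.506  (Syntonic comma)
```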
This value for “G” will be the same for every “G” in every octave, and all other respective values obtained for all other notes will be the same for all octaves.
Again the numeric value for later versions is less than that for older ones by a total of 8192. This, like all other pitchwheel values, is calculated to the nearest 1 in 8192 parts of a semitone. Equally-tempered fifths have been narrowed from their Pythagorean defaults. Since the pitchwheel value of C is 8192 [0], the restoration of the G to its Pythagorean position will necessarily increase its pitchwheel value. The new value of 8352 [160] has raised the pitch by a pitchwheel value of 160. When this is divided by 81.92 (the pitchwheel value for each Cent, as shown in [14] above), the difference is found to be 1.953125 Cents.
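The 160-unit figure can be recovered from first principles. A sketch of my own, assuming (as in the text) 8192 pitchwheel units per semitone, i.e. 81.92 units per Cent:

```python
import math

UNITS_PER_CENT = 8192 / 100  # 81.92, as stated in the text

# A pure (Pythagorean) fifth versus the equal-tempered fifth of 700 Cents
pure_fifth_cents = 1200 * math.log2(3 / 2)  # about 701.955
correction_cents = pure_fifth_cents - 700   # about 1.955
correction_units = round(correction_cents * UNITS_PER_CENT)

print(correction_units)                # 160
print(round(160 / UNITS_PER_CENT, 6))  # 1.953125
```

Rounding to whole pitchwheel units is what turns the true difference of about 1.955 Cents into the quoted 1.953125 Cents (160 / 81.92).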
If, for example, you indicated F-sharp with the tag “F#,” Finale would show “9152” because it would assume that you had asked for the numeric value assigned to be displayed in place of the tag itself.
Experienced Finale users will know that the process of inserting Text Expressions can be simplified and speeded up by the use of metatools. Since most pieces using Pythagorean tuning only have a range of up to nine different pitch classes in each octave, assigning each global pitch to a metatool makes insertion very quick and easy. Other users will simply have to make a separate insertion (via the Staff Expression window) for each and every note in the score.
In the Instrument List window it will be necessary to assign a different instrument for each staff, and then a different channel. In the “Prog.” column the same sound patch can be selected if wished.
This number can, of course, be changed to whatever value you like dependent upon the pitch needed relative to any tuning requirement or system you wish to use. It will be set for the entire piece, however, and the current value is the one needed for its Pythagorean pitch.
The figure “480” indicates the same pitch inflection within the 0-8192 numeric range (used by later versions) as does the figure “8672” within the numeric range 8192-16384 (used by older versions), representing simply an increase in pitch bend of 480 units.
[1] Copyrights for individual items published in Music Theory Online (MTO) are held by their authors. Items appearing in MTO may be saved and stored in electronic or paper form, and may be shared among individuals for purposes of scholarly research or discussion, but may not be republished in any form, electronic or print, without prior, written permission from the author(s), and advance notification of the editors of MTO.
[2] Any redistributed form of items published in MTO must include the following information in a form appropriate to the medium in which the items are to appear:
This item appeared in Music Theory Online in [VOLUME #, ISSUE #] on [DAY/MONTH/YEAR]. It was authored by [FULL NAME, EMAIL ADDRESS], with whose written permission it is reprinted here.
[3] Libraries may archive issues of MTO in electronic or paper form for public access so long as each issue is stored in its entirety, and no access fee is charged. Exceptions to these requirements must be approved in writing by the editors of MTO, who will act in accordance with the decisions of the Society for Music Theory.
This document and all portions thereof are protected by U.S. and international copyright laws. Material contained herein may be copied and/or distributed for research purposes only.
Prepared by Brent Yorgason, Managing Editor and Rebecca Flore and Tahirih Motazedian, Editorial Assistants
# Solve the following problem using the Chinese Rule of Double False Position
###### Question:
Solve the following problem using the Chinese Rule of Double False Position: There is a wall 9 meters high. A gourd is planted above; the vine creeps down 70 centimeters per day. A calabash is planted below; it creeps up 1 meter per day. Tell the number of days until they meet and how much each plant has grown. (Nine Chapters, chapter 7, problem 10)
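One way to carry out the Rule of Double False Position here is sketched below (my own illustration; the variable names are assumptions). Two deliberately false guesses for the number of days are plugged into a shortfall/excess function, and the rule combines them linearly:

```python
# Wall 9 m high; the vine creeps down 0.7 m/day, the calabash up 1.0 m/day.
WALL = 9.0

def excess(days):
    """Combined growth minus the wall height (negative means a shortfall)."""
    return 0.7 * days + 1.0 * days - WALL

# Two false guesses: one falls short, one overshoots
d1, d2 = 5.0, 6.0
f1, f2 = excess(d1), excess(d2)  # -0.5 (shortfall) and 1.2 (excess)

# Rule of Double False Position (exact here, since the relation is linear)
days = (d1 * f2 - d2 * f1) / (f2 - f1)
print(days)                     # about 5.294 days (= 9/1.7 = 5 5/17)
print(0.7 * days, 1.0 * days)   # vine about 3.706 m, calabash about 5.294 m
```

Because the growth is linear in the number of days, double false position lands exactly on the answer: the plants meet after 5 5/17 days, the vine having grown 3 12/17 meters and the calabash 5 5/17 meters.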
# Why do we commonly use the $\log$ to squash frequencies?
Term frequency and inverse document frequency are well-known terms in information retrieval.
I am presenting the definitions for both from p:12,13 of Vector Semantics and Embeddings
On term frequency
Term frequency is the frequency of the word $$t$$ in the document $$d$$. We can just use the raw count as the term frequency:
$$tf_{t, d} = \text{count}(t, d)$$
More commonly we squash the raw frequency a bit, by using the $$\log_{10}$$ of the frequency instead. The intuition is that a word appearing 100 times in a document doesn’t make that word 100 times more likely to be relevant to the meaning of the document.
On inverse document frequency
The $$\text{idf}$$ is defined using the fraction $$\dfrac{N}{df_t}$$, where $$N$$ is the total number of documents in the collection, and $$\text{df}_t$$ is the number of documents in which term $$t$$ occurs.......
Because of the large number of documents in many collections, this measure too is usually squashed with a log function. The resulting definition for inverse document frequency ($$\text{idf}$$) is thus
$$\text{idf}_t = \log_{10} \left(\dfrac{N}{df_t} \right)$$
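To make the two quoted definitions concrete, here is a minimal sketch (my own, not from the textbook; the toy corpus is invented, and the `1 +` in the tf formula is one common convention for keeping a single occurrence non-zero, which the quoted text does not spell out):

```python
import math

# Toy collection: each document is a list of terms (illustrative data)
docs = [
    ["cat", "sat", "cat"],
    ["dog", "sat"],
    ["cat", "dog", "bird"],
]

def tf(term, doc):
    """Log-squashed term frequency: 1 + log10(count), or 0 if absent."""
    count = doc.count(term)
    return 1 + math.log10(count) if count > 0 else 0.0

def idf(term, docs):
    """Inverse document frequency: log10(N / df_t)."""
    df = sum(1 for d in docs if term in d)
    return math.log10(len(docs) / df)

print(round(tf("cat", docs[0]), 3))  # 1.301  (1 + log10(2))
print(round(idf("cat", docs), 3))    # 0.176  (log10(3/2))
print(round(idf("bird", docs), 3))   # 0.477  (log10(3/1))
```

Note how doubling the raw count of "cat" raises its tf only from 1.0 to 1.301 rather than doubling it, which is exactly the squashing behaviour the quote describes.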
If we observe the bolded portions of the quotes, it is evident that the $$\log$$ function is used commonly. It is not used only in these two definitions; it has been used across many definitions in the literature, for example entropy, mutual information, and log-likelihood. So, I don't think squashing is the only purpose behind using the $$\log$$ function.
Is there any reason for selecting the logarithm function for squashing? Are there any advantages for $$\log$$ compared to any other squash functions, if available?
It's much easier to deal with logarithms, as the relevant numbers are usually very small or very large. If you have a long exponential expression, it's hard to see the difference, but if you're looking at 4.3 vs 5.6, you can immediately see what's happening. And logarithms are a well-known (and well-understood) way of achieving this compression. You can easily interpret the difference, depending on the base of the logarithm used.
Quite often $$\log_2$$ is used when you're dealing with entropy or information, as those are usually expressed in bits.
Also, $$\log$$ is monotonically increasing and hence preserves the order and the locations of the extrema. For instance, if $$p(x) \geq p(y)$$, then $$\log\big(p(x)\big) \geq \log\big(p(y)\big)$$ also holds. Therefore, maximizing the likelihood is equivalent to maximizing the log-likelihood.
Moreover, the logarithm turns products into sums, which makes products of probabilities, such as likelihoods, much easier to work with: $$\log \left(\prod_i P(x_i)\right) = \sum_i \log \left( P(x_i)\right)$$
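A concrete illustration of why the product-to-sum property matters in practice (my own sketch, not from the thread): multiplying many small probabilities underflows to zero in floating point, while summing their logs stays finite.

```python
import math

probs = [1e-10] * 40  # forty independent events, each with probability 1e-10

product = 1.0
for p in probs:
    product *= p  # 1e-400 is far below the smallest representable float

log_sum = sum(math.log(p) for p in probs)  # stays perfectly finite

print(product)            # 0.0
print(round(log_sum, 2))  # -921.03
```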
# Polar Equations
## Points based on distance from the origin and angle or distance from the x and y axes.
Polar and Rectangular Coordinates
In the rectangular coordinate system, points are identified by their distances from the $x$- and $y$-axes. In the polar coordinate system, points are identified by their angle on the unit circle and their distance from the origin. You can use basic right-triangle trigonometry to translate back and forth between the two representations of the same point. How are lines and other functions affected by this new coordinate system?
### Polar and Rectangular Coordinates
Rectangular coordinates are the ordinary coordinates that you are used to.
Polar coordinates represent the same point, but describe the point by its distance $r$ from the origin and its angle $\theta$ on the unit circle. To translate back and forth between polar and rectangular coordinates you should use the basic trig relationships:

$$x = r\cos\theta \qquad y = r\sin\theta \qquad \tan\theta = \frac{y}{x}$$

You can also express the relationship between $r$, $x$, and $y$ using the Pythagorean Theorem:

$$r^2 = x^2 + y^2$$
Note that coordinates in polar form are not unique. This is because there are an infinite number of coterminal angles that point towards any given coordinate.
For example, the point (3, 4) can be written in polar coordinates in at least three different ways. To find , use the third equation from above and to find use the pythagorean theorem.
Three equivalent polar coordinates for the point (3, 4) are:
Notice how the third coordinate points in the opposite direction and has a seemingly negative radius. This means go in the opposite direction of the angle.
Once you can translate back and forth between points, use the same substitutions to change equations too. A polar equation is written with the radius as a function of the angle. This means an equation in polar form should be written in the form .
To write an equation in polar form, use the conversion equations to substitute. For example, to convert to polar form make substitutions for and . Then, solve for .
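The conversion rules above can be sketched in a few lines of Python (`to_polar` and `to_rect` are illustrative names; `math.atan2` handles the quadrant bookkeeping that a bare $\tan^{-1}(y/x)$ misses):

```python
import math

def to_polar(x, y):
    """Rectangular (x, y) -> polar (r, theta), with r >= 0 and theta in [0, 2*pi)."""
    r = math.hypot(x, y)                      # r = sqrt(x**2 + y**2)
    theta = math.atan2(y, x) % (2 * math.pi)  # tan(theta) = y / x, quadrant-aware
    return r, theta

def to_rect(r, theta):
    """Polar (r, theta) -> rectangular (x, y) via x = r cos(theta), y = r sin(theta)."""
    return r * math.cos(theta), r * math.sin(theta)

r, theta = to_polar(3, 4)
print(r, round(theta, 3))            # 5.0 0.927
x, y = to_rect(-5, theta + math.pi)  # the "negative radius" form of the same point
print(round(x, 6), round(y, 6))      # 3.0 4.0
```

Running `to_polar(3, 4)` reproduces the $(5, 0.927)$ coordinates worked out above, and the last call confirms that $(-5, 0.927 + \pi)$ names the same point.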
### Examples
#### Example 1
Earlier, you were asked how lines can be represented in the polar coordinate system. The general way to express a line $ax + by = c$ in polar form is $r = \frac{c}{a\cos\theta + b\sin\theta}$.
#### Example 2
Express the following equation using rectangular coordinates.
Use the facts that $x = r\cos\theta$ and $y = r\sin\theta$.
This is the equation of a hyperbola.
#### Example 3
Sketch the following polar equation: $r = 3$.
Since theta is not in the equation, it can vary freely. This simple equation produces a perfect circle of radius 3 centered at the origin.
You can show this equation is equivalent to $x^2 + y^2 = 9$.
#### Example 4
Sketch the following polar equation: with .
This is an example of a polar equation that cannot be easily expressed in rectangular form. In order to sketch the graph, identify a few key points; you should see that the shape is very recognizable as a spiral.
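The specific equation for this example did not survive extraction; assuming the common textbook spiral $r = \theta$ (an assumption on my part, not from the original), a few key points can be computed and converted for hand-plotting:

```python
import math

def spiral_points(thetas):
    """Sample the (assumed) spiral r = theta and convert each polar point
    to rectangular coordinates; returns (theta, r, x, y) tuples."""
    pts = []
    for t in thetas:
        r = t                       # assumed relation r = theta
        pts.append((t, r, r * math.cos(t), r * math.sin(t)))
    return pts

for theta, r, x, y in spiral_points([0, math.pi / 2, math.pi, 3 * math.pi / 2, 2 * math.pi]):
    print(f"theta={theta:5.2f}  r={r:5.2f}  (x, y)=({x:6.2f}, {y:6.2f})")
```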
#### Example 5
Translate the following polar expression into rectangular coordinates and then graph.
Simplify the polar equation first before converting to rectangular coordinates.
### Review
Plot the following polar coordinates.
1.
2.
3.
4.
Give two alternate sets of coordinates for each point.
5.
6.
7.
Graph each equation.
8.
9.
10. with .
Convert each point to rectangular form.
11.
12.
13.
Convert each point to polar form using radians where .
14. (1, 3)
15. (1, -4)
16. (2, 6)
Convert each equation to polar form.
17.
18.
### Vocabulary
Coterminal Angles
A set of coterminal angles are angles with the same terminal side but expressed differently, such as a different number of complete rotations around the unit circle or angles being expressed as positive versus negative angle measurements.
polar coordinate system
The polar coordinate system is a special coordinate system in which the location of each point is determined by its distance from the pole and its angle with respect to the polar axis.
polar coordinates
Polar coordinates describe locations on a grid using the polar coordinate system. The location of each point is determined by its distance from the pole and its angle with respect to the polar axis.
rectangular coordinates
A point is written using rectangular coordinates if it is written in terms of $x$ and $y$ and can be graphed on the Cartesian plane.
unit circle
The unit circle is a circle of radius one, centered at the origin. | 2017-01-25 02:35:04 | {"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 2, "texerror": 0, "math_score": 0.9474363923072815, "perplexity": 739.8399975629249}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2017-04/segments/1484560285337.76/warc/CC-MAIN-20170116095125-00441-ip-10-171-10-70.ec2.internal.warc.gz"} |
http://cms.math.ca/cmb/msc/47A63?fromjnl=cmb&jnl=CMB | Canadian Mathematical Society www.cms.math.ca
Search results
Search: MSC category 47A63 ( Operator inequalities )
Results 1 - 2 of 2
1. CMB 2016 (vol 59 pp. 354)
Li, Chi-Kwong; Tsai, Ming-Cheng
Factoring a Quadratic Operator as a Product of Two Positive Contractions. Let $T$ be a quadratic operator on a complex Hilbert space $H$. We show that $T$ can be written as a product of two positive contractions if and only if $T$ is of the form \begin{equation*} aI \oplus bI \oplus \begin{pmatrix} aI & P \cr 0 & bI \cr \end{pmatrix} \quad \text{on} \quad H_1\oplus H_2\oplus (H_3\oplus H_3) \end{equation*} for some $a, b\in [0,1]$ and a strictly positive operator $P$ with $\|P\| \le |\sqrt{a} - \sqrt{b}|\sqrt{(1-a)(1-b)}.$ Also, we give a necessary condition for a bounded linear operator $T$ with operator matrix $\big( \begin{smallmatrix} T_1 & T_3 \\ 0 & T_2\cr \end{smallmatrix} \big)$ on $H\oplus K$ to be written as a product of two positive contractions. Keywords: quadratic operator, positive contraction, spectral theorem. Categories: 47A60, 47A68, 47A63
2. CMB 2012 (vol 57 pp. 25)
Bourin, Jean-Christophe; Harada, Tetsuo; Lee, Eun-Young
Subadditivity Inequalities for Compact Operators. Some subadditivity inequalities for matrices and concave functions also hold for Hilbert space operators, but (unfortunately!) with an additional $\varepsilon$ term. It seems not possible to erase this residual term. However, in the case of compact operators we show that the $\varepsilon$ term is unnecessary. Further, these inequalities are strict in a certain sense when some natural assumptions are satisfied. The discussion also focuses on matrices and their compressions, and several open questions or conjectures are considered, both in the matrix and operator settings. Keywords: concave or convex function, Hilbert space, unitary orbits, compact operators, compressions, matrix inequalities. Categories: 47A63, 15A45
© Canadian Mathematical Society, 2016 : https://cms.math.ca/ | 2016-10-24 08:48:36 | {"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.9426971673965454, "perplexity": 1189.5868802480952}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.3, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2016-44/segments/1476988719547.73/warc/CC-MAIN-20161020183839-00190-ip-10-171-6-4.ec2.internal.warc.gz"} |
https://pudujezusififudo.lowdowntracks4impact.com/linear-statistical-models-book-9228oe.php | Last edited by Taut, Wednesday, April 22, 2020
6 editions of Linear Statistical Models found in the catalog.
# Linear Statistical Models
## by Bruce L. Bowerman
Written in English
Subjects:
• Applied mathematics,
• Probability & statistics,
• Probability & Statistics - General,
• Mathematics,
• Science/Mathematics,
• Mathematics / Statistics,
• Algebra - General
• The Physical Object
Format: Paperback
Number of Pages: 1040
ID Numbers
Open Library: OL7784513M
ISBN 10: 0534380182
ISBN 13: 9780534380182
Applied Linear Statistical Models 5e is the long established leading authoritative text and reference on statistical modeling, analysis of variance, and the design of experiments. For students in most any discipline where statistical analysis or interpretation is used, ALSM serves as the standard work.

The authors conclude in Part IV with the statistical theory and computations used throughout the book, including univariate models with normal level-1 errors, multivariate linear models, and hierarchical generalized linear models.

Linear Models: An Integrated Approach aims to provide a clear and deep understanding of the general linear model using simple statistical ideas. Elegant geometric arguments are also invoked as needed, and a review of vector spaces and matrices is provided to make the treatment self-contained.
You might also like
Jazz in perspective
Jazz in perspective
The 2000-2005 Outlook for Women
The 2000-2005 Outlook for Women
Poor Dancers Almanac a Survival Manual for Choreographers Managers and Dancers
Poor Dancers Almanac a Survival Manual for Choreographers Managers and Dancers
future of architecture
future of architecture
Space, place, environment
Space, place, environment
Infantry tactics
Infantry tactics
The bold threshing-barn stoker
The bold threshing-barn stoker
Words to use
Words to use
Good Girl!
Good Girl!
TERRORIST TRAP, THE
TERRORIST TRAP, THE
### Linear Statistical Models by Bruce L. Bowerman
Linear models in statistics / Alvin C. Rencher, G. Bruce Schaalje. – 2nd ed. Includes bibliographical references. ISBN (cloth). 1. Linear models (Statistics). I. Schaalje, G. Bruce. Title. QAR –dc22. Printed in the United States of America. The essential introduction to the theory and application of linear models, now in a valuable new edition.
Since most advanced statistical tools are generalizations of the linear model, it is necessary to first master the linear model in order to move forward to more advanced concepts.
The linear model remains the main tool of the applied statistician and is central to the training of any statistician. Linear Statistical Models, Second Edition is an excellent book for courses on linear models at the upper-undergraduate and graduate levels. It also serves as a comprehensive reference for statisticians, engineers, and scientists who apply multiple regression or analysis of variance in their everyday work.
These models lead to what is usually Linear Statistical Models book multiple regression or analysis of variance methodology, which, in turn, opens up a wide range of applications to the physical, biological, and social sciences, as well as to.
(International Statistical Review, December ) "This indeed clearly written book will do great service for advanced undergraduate and also for PhD students." (International Statistical Review, Dec ) "This well-written book represents various topics on linear models with great clarity in an easy-to-understand style.".
Linear Statistical Models. Part of the Duxbury Advanced Series in Statistics and Decision Sciences.
Applied Linear Statistical Models Student Data CD 5th Edition Kutner, Nachtsheim, Neter, & Li CD Description Student Solutions Manual Chapter 1 Data Sets Chapter 2 Data Sets Chapter 6 Data Sets.
CH06FI05 CH06PR05 CH06PR09 CH06PR12 CH06PR13 CH06PR15 CH06PR18 CH06PR20 CH06PR21 CH06PR (*) end-of-chapter Problems with computational elements contained in Applied Linear Statistical Models, 5th edition.
No solutions are given for Exercises, Projects, or Case Studies. In presenting calculational results we frequently show, for ease in checking, more digits than are significant for the original Size: KB.
Applied Linear Statistical Models with Student CD book. Read 7 reviews from the world's largest community for readers. A text and reference on statistica /5. We love owning this book. It gets placed on our shelf among our favourite reference books We actually learned a lot and deepened our understanding of many topics while reading Davison's explanations if asked to summarize Statistical Models in a single word, ‘complete‘ would serve as the only plausible answer.’ Source: Technometrics.
Book Description. Focusing on user-developed programming, An R Companion to Linear Statistical Models serves two audiences: those who are familiar with the theory and applications of linear statistical models and wish to learn or enhance their skills in R; and those who are enrolled in an R-based course on regression and analysis of variance. For those who have never used R, the book begins with a self-contained introduction to R.
The Theory of Linear Models, B. Jørgensen. Linear Models with R, Julian Faraway. Statistical Methods in Agriculture and Experimental Biology, Second Edition. Genre/Form: Lehrbuch. Additional Physical Format: Online version: Stapleton, James H., Linear Statistical Models.
New York: Wiley. Buy a cheap copy of Applied Linear Statistical Models by John Neter. There are two approaches to undergraduate and graduate courses in linear statistical models and experimental design in applied statistics. One is a two-term course.
One is a two-term Free shipping over $Cited by: Summary. Linear Models and the Relevant Distributions and Matrix Algebra provides in-depth and detailed coverage of the use of linear statistical models as a basis for parametric and predictive inference. It can be a valuable reference, a primary or secondary text in a graduate-level course on linear models, or a resource used (in a course on mathematical statistics) to illustrate various. So, this was an introduction to simple linear regression. Please go through the chapter 1 in referenced book if you want to dig deeper. References: Applied Linear Statistical Models; STAT My Applied Linear Statistical Models book has a " floppy disk with data on it. I needed the data the other day, so I scrounged for a USB floppy drive, copied the files, and imaged the disk. In case anyone else needs them, here are the data sets. Book info. Genre/Form: Statistics: Additional Physical Format: Online version: Graybill, Franklin A. Introduction to linear statistical models. New York, McGraw-Hill, Chapter 6 Introduction to Linear models A statistical model is an expression that attempts to explain patterns in the observed values of a response variable by relating the response variable to a set of predictor variables and Size: KB. Summary. Focusing on user-developed programming, An R Companion to Linear Statistical Models serves two audiences: those who are familiar with the theory and applications of linear statistical models and wish to learn or enhance their skills in R; and those who are enrolled in an R-based course on regression and analysis of those who have never used R, the book begins with a self. Applied Linear Statistical Models Pdf >> DOWNLOAD ab48e Applied linear statistical models: An overview Gunnar Stefansson 1Dept. of Mathematics Univ. Iceland Aug This on-line applied linear statistical models solutions manual can be a referred book that you. 
Linear regression models, experimental designs, multivariate analysis, and categorical data analysis are treated in a way which makes effective use of visualization techniques and the related statistical techniques underlying them through practical applications, and hence helps the reader to achieve a clear understanding of the associated methods.

Chapter 19, Generalized Linear Models I: Count Data. Biologists frequently count stuff, and design experiments to estimate the effects of different factors on these counts. For example: the effects of environmental mercury on clutch size in a bird, the effects of warming on parasite load in a fish, or the effect of exercise on RNA expression.
P.K. Bhattacharya, Prabir Burman, in Theory and Methods of Statistics, Introduction. Linear models are widely used in statistical data analysis when the dependent or the response variable is quantitative, whereas the independent variables may be quantitative, qualitative, or both.
It can also be used for some types of nonlinear modeling, as an example given below will show.

Linear statistical models, 1. Introduction. The goal of this course is, in rough terms, to predict a variable $X_n$, given that we have the opportunity to observe variables $X_1, \dots, X_{n-1}$.
This is a very important statistical problem. Therefore, let us spend a bit of time and examine a simple example.

It depends what you want from such a book and what your background is. E.g., do you want proofs and theorems, or just practical advice? Have you had calculus? What field are you going into? Etc. However: Gelman and Hill, Data Analysis Using Regression and Multilevel/Hierarchical Models.
For these reasons a large portion of your coursework is devoted to them. The two main subclasses of the classical linear model are (1) linear File Size: KB. The book covers material taught in the Johns Hopkins Biostatistics Advanced Statistical Computing course.
Linear Models. Consider the simple linear model

$$y = \beta_0 + \beta_1 x + \varepsilon.$$
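As an illustration of fitting this simple linear model, the least-squares estimates $\hat\beta_1 = \sum(x_i - \bar x)(y_i - \bar y) / \sum(x_i - \bar x)^2$ and $\hat\beta_0 = \bar y - \hat\beta_1 \bar x$ can be computed in a few lines of plain Python (the function name and the toy data are invented for this sketch; the books above develop the theory behind these formulas):

```python
def fit_line(xs, ys):
    """Ordinary least squares for the simple linear model y = b0 + b1*x + eps."""
    n = len(xs)
    mx = sum(xs) / n
    my = sum(ys) / n
    b1 = sum((x - mx) * (y - my) for x, y in zip(xs, ys)) / sum((x - mx) ** 2 for x in xs)
    b0 = my - b1 * mx
    return b0, b1

# Noise-free data on the line y = 2 + 3x recovers the coefficients exactly.
print(fit_line([0, 1, 2, 3], [2, 5, 8, 11]))   # (2.0, 3.0)
```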
In addition, the text provides a thorough and up-to-date guide through the major software applications for linear mixed models, namely, Stata, SAS, R, SPSS, and HLM. Applied Linear Statistical Models", 5e, is the long established leading authoritative text and reference on statistical modeling.
For students in most any discipline where statistical analysis or interpretation is used, ALSM serves as the standard work. The text includes brief introductory and review material, and then proceeds through regression and modeling for the first half, and through. There are so many good books available to understand the concepts of linear models.
But I found Linear regression models by Montgomery as very good book in terms of language and the explanation. It is written by foreign author but the language of.
Linear Statistical Models: An Applied Approach (Business Statistics), by Bruce L. Bowerman.
Linear Statistical Models, Second Edition is an excellent book for courses on linear models at the upper-undergraduate and graduate levels. It also serves as a comprehensive reference for statisticians, engineers, and scientists who apply multiple regression or analysis of Price: \$ | 2021-10-25 18:54:27 | {"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.32009291648864746, "perplexity": 2573.127727700894}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2021-43/segments/1634323587767.18/warc/CC-MAIN-20211025185311-20211025215311-00121.warc.gz"} |
http://math.stackexchange.com/questions/300096/helixs-arc-length | # Helix's arc length
The relevant definitions are that of parametrized curve which is at the beginning of page 1 and the definition of arclength of a curve, which is in the first half of page 6.
Also the author mentions the helix at the bottom of page 3.
On exercise $1.1.2.$ (page 8) I'm asked to find the arc length of the helix:
$\alpha (t)=(a\cos (t), a\sin (t), bt)$, but the author doesn't say what the domain of $\alpha$ is.
Usually, when the domain isn't specified, isn't the reader supposed to assume the domain is a maximal set? In that case the domain would be $\Bbb R$, and the arc length wouldn't be defined, since the integral wouldn't be finite.
@AndréNicolas Does that mean any interval with length $2\pi$? – user61804 Feb 11 '13 at 8:54
@AndréNicolas Would you mind posting your suggestion as an answer so I can accept it? – user61804 Feb 11 '13 at 8:57
It seems sensible to do it for one complete cycle of sine and cosine, that is, any interval of length $2\pi$. So we are measuring the length of one complete turn around the cylinder that the helical vine climbs on.
There are a number of ways of approaching this problem. And yes, you are correct, without the domain specified there is a dilemma here. You can give an answer for one complete cycle of $2\pi$. Depending on the context you may find it more convenient to measure arc length as a function of $z$-axis distance along the helix... a sort of ratio: units of length along the arc per units of length of elevation. Thirdly, you can also write the arc length not as a numeric answer but as a function of $a$ and $b$ marking the endpoints of any arbitrary domain. Personally, I recommend doing the third and last. Expressing the answer as a function is the best you can do without making assumptions about the domain in question, and it leaves a solution that can be applied and reused whenever endpoints are given. | 2014-08-23 02:09:55 | {"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.8945176005363464, "perplexity": 226.15785323714667}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2014-35/segments/1408500824990.54/warc/CC-MAIN-20140820021344-00398-ip-10-180-136-8.ec2.internal.warc.gz"} |
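Whichever convention is chosen, the computation itself is short: $|\alpha'(t)| = \sqrt{a^2\sin^2 t + a^2\cos^2 t + b^2} = \sqrt{a^2 + b^2}$ is constant, so the arc length over $[t_0, t_1]$ is $\sqrt{a^2 + b^2}\,(t_1 - t_0)$, and one full turn gives $2\pi\sqrt{a^2 + b^2}$. A small Python sketch (function names are illustrative) checking the closed form against a polygonal approximation:

```python
import math

def helix_arc_length(a, b, t0, t1):
    """Closed form: the speed |alpha'(t)| = sqrt(a^2 + b^2) is constant."""
    return math.sqrt(a * a + b * b) * (t1 - t0)

def chord_length(a, b, t0, t1, n=100_000):
    """Numerical check: sum chord lengths of a fine polygonal approximation."""
    def point(t):
        return (a * math.cos(t), a * math.sin(t), b * t)
    total, prev = 0.0, point(t0)
    for i in range(1, n + 1):
        cur = point(t0 + (t1 - t0) * i / n)
        total += math.dist(prev, cur)
        prev = cur
    return total

print(helix_arc_length(3, 4, 0, 2 * math.pi))  # one turn: 10*pi = 31.4159...
print(chord_length(3, 4, 0, 2 * math.pi))      # ≈ 31.4159...
```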
https://stacks.math.columbia.edu/tag/0BYL | Lemma 15.66.4. Let $R$ be a ring. Let $a \in \mathbf{Z}$ and let $K$ be an object of $D(R)$. The following are equivalent
1. $K$ has tor-amplitude in $[a, \infty ]$, and
2. $K$ is quasi-isomorphic to a K-flat complex $E^\bullet$ whose terms are flat $R$-modules with $E^ i = 0$ for $i \not\in [a, \infty ]$.
Proof. The implication (2) $\Rightarrow$ (1) is immediate. Assume (1) holds. First we choose a K-flat complex $K^\bullet$ with flat terms representing $K$, see Lemma 15.59.10. For any $R$-module $M$ the cohomology of
$K^{n - 1} \otimes _ R M \to K^ n \otimes _ R M \to K^{n + 1} \otimes _ R M$
computes $H^ n(K \otimes _ R^\mathbf {L} M)$. This is always zero for $n < a$. Hence if we apply Lemma 15.66.2 to the complex $\ldots \to K^{a - 1} \to K^ a \to K^{a + 1}$ we conclude that $N = \mathop{\mathrm{Coker}}(K^{a - 1} \to K^ a)$ is a flat $R$-module. We set
$E^\bullet = \tau _{\geq a}K^\bullet = (\ldots \to 0 \to N \to K^{a + 1} \to \ldots )$
The kernel $L^\bullet$ of $K^\bullet \to E^\bullet$ is the complex
$L^\bullet = (\ldots \to K^{a - 1} \to I \to 0 \to \ldots )$
where $I \subset K^ a$ is the image of $K^{a - 1} \to K^ a$. Since we have the short exact sequence $0 \to I \to K^ a \to N \to 0$ we see that $I$ is a flat $R$-module. Thus $L^\bullet$ is a bounded above complex of flat modules, hence K-flat by Lemma 15.59.7. It follows that $E^\bullet$ is K-flat by Lemma 15.59.6. $\square$
In your comment you can use Markdown and LaTeX style mathematics (enclose it like $\pi$). A preview option is available if you wish to see how it works out (just click on the eye in the toolbar). | 2022-06-29 00:40:12 | {"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 2, "mathjax_display_tex": 0, "mathjax_asciimath": 1, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 2, "x-ck12": 0, "texerror": 0, "math_score": 0.9835583567619324, "perplexity": 275.8461193304461}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2022-27/segments/1656103619185.32/warc/CC-MAIN-20220628233925-20220629023925-00684.warc.gz"} |
https://www.zbmath.org/authors/?q=ai%3Awang.dan | # zbMATH — the first resource for mathematics
## Wang, Dan
Author ID: wang.dan Published as: Wang, D.; Wang, Dan
Documents Indexed: 141 Publications since 1991, including 1 Book
#### Co-Authors
4 single-authored 17 Peng, Zhouhua 8 Wang, Hao 7 Wang, Wei 6 Wang, Jibo 5 Sun, Gang 5 Wang, Xiaojing 4 Li, Tieshan 4 Yin, Na 3 Jin, Xiaozheng 3 Lan, Weiyao 3 Shen, Derong 3 Song, Baoyan 3 Wang, Guoren 3 Wang, Liyan 3 Wang, Xiaoyuan 3 Yu, Ge 3 Zhang, Hongqing 2 Abdalla, Mostafa M. 2 Chen, Yong 2 Chen, Zhizhong 2 Cui, Jingan 2 Fu, Lihua 2 Guo, Pengjiang 2 Ji, Ping 2 Kong, Cuicui 2 Li, Xiaoqiang 2 Li, Zeyu 2 Lin, Guohui 2 Liu, Lu 2 Lu, Guangshi 2 Qin, Zhongfeng 2 Shi, Yangyang 2 Su, Zhoucheng 2 Sun, Weiwei 2 Wang, Lusheng 2 Wang, Zhenpei 2 Zhang, Guodong 2 Zhang, Meng 2 Zhang, Weihong 2 Zhu, Mengkun 1 Bai, Yuzhen 1 Cai, Huiping 1 Cao, Qingjie 1 Che, Weiwei 1 Chen, Chishu 1 Chen, Ming 1 Chen, Naxin 1 Chen, Qian 1 Chen, Wei 1 Chen, Yang 1 Chen, Yang 1 Chen, Yushu 1 Chen, Zhiyong 1 Cheng, Shun-Jun 1 Cheng, Tai-Chiu Edwin 1 Dong, Xinjian 1 Fan, Xinghua 1 Feng, Yang 1 Gao, Shengnan 1 Gao, Wenjun 1 Guan, Wei 1 Han, Jingqi 1 Han, Zhu 1 He, Chan 1 He, Jiafan 1 He, Jin 1 He, Kun 1 He, Liqiao 1 He, Tao 1 He, Yigang 1 Hong, Mingyi 1 Hu, Jing 1 Huang, Jialiang 1 Huang, Jie 1 Huang, Yufei 1 Huo, Yunzhang 1 Inanc, Tamer 1 Ji, Xiaoliang 1 Jia, Hairui 1 Jing, Yuanwei 1 Kang, Baosheng 1 Kar, Samarjit 1 Khong, Sei Zhen 1 Kuang, Jinyun 1 Lao, Huixue 1 Lee, Roger W. 1 Li, Junfang 1 Li, Qing 1 Li, Shou-Jiang 1 Li, Weicai 1 Li, Wen 1 Li, Wenting 1 Li, Xiaolu 1 Li, Xiaonan 1 Li, Yingfang 1 Li, Yongming 1 Lin, Lin 1 Liu, Bolian 1 Liu, Hugh H. T. 1 Liu, Rongjun ...and 65 more Co-Authors
#### Serials
8 Nonlinear Dynamics 5 Mathematics in Practice and Theory 4 Applied Mathematics and Computation 4 International Journal of Robust and Nonlinear Control 4 Mathematical Problems in Engineering 3 Journal of Qufu Normal University. Natural Science 3 International Journal of Adaptive Control and Signal Processing 3 Pure and Applied Mathematics 3 Optimization Letters 3 Fuzzy Systems and Mathematics 2 Computers & Mathematics with Applications 2 Computer Methods in Applied Mechanics and Engineering 2 International Journal of Control 2 International Journal for Numerical Methods in Engineering 2 Journal of Biomathematics 2 Applied Mathematical Modelling 2 Communications in Nonlinear Science and Numerical Simulation 2 Asian Journal of Control 2 Journal of Function Spaces 2 AMM. Applied Mathematics and Mechanics. (English Edition) 1 International Journal of General Systems 1 Inverse Problems 1 Journal of the Franklin Institute 1 Lithuanian Mathematical Journal 1 Mathematical Methods in the Applied Sciences 1 Physica A 1 Physics Letters. A 1 Chaos, Solitons and Fractals 1 Acta Arithmetica 1 Automatica 1 Information Sciences 1 Journal of Number Theory 1 Mathematics and Computers in Simulation 1 Naval Research Logistics 1 Optimal Control Applications & Methods 1 Journal of Sichuan University. Natural Science Edition 1 Circuits, Systems, and Signal Processing 1 Acta Mathematica Hungarica 1 Acta Automatica Sinica 1 Acta Physica Sinica 1 Algorithmica 1 Journal of Northwest University. Natural Sciences Edition 1 Journal of Dalian University of Technology 1 Communications in Statistics. Theory and Methods 1 European Journal of Operational Research 1 Linear Algebra and its Applications 1 Computational Statistics and Data Analysis 1 Complexity 1 Journal of Difference Equations and Applications 1 Journal of Nanjing Normal University. Natural Science Edition 1 Journal of Combinatorial Optimization 1 Discrete Dynamics in Nature and Society 1 European Journal of Mechanics. A. Solids 1 Acta Mathematica Sinica. English Series 1 IEEE Transactions on Antennas and Propagation 1 Journal of Northeastern University. Natural Science 1 Journal of Applied Mathematics and Computing 1 Journal of Shandong University. Natural Science 1 MATCH - Communications in Mathematical and in Computer Chemistry 1 Control and Decision 1 International Journal of Wavelets, Multiresolution and Information Processing 1 Journal of Computer Applications 1 Annals of Finance 1 International Journal of Systems Science. Principles and Applications of Systems and Integration
#### Fields
33 Systems theory; control (93-XX) 21 Operations research, mathematical programming (90-XX) 19 Biology and other natural sciences (92-XX) 17 Computer science (68-XX) 10 Ordinary differential equations (34-XX) 8 Partial differential equations (35-XX) 8 Statistics (62-XX) 7 Numerical analysis (65-XX) 7 Information and communication theory, circuits (94-XX) 5 Number theory (11-XX) 5 Mechanics of deformable solids (74-XX) 5 Game theory, economics, finance, and other social and behavioral sciences (91-XX) 3 Mathematical logic and foundations (03-XX) 3 Combinatorics (05-XX) 3 Optics, electromagnetic theory (78-XX) 2 Special functions (33-XX) 2 Dynamical systems and ergodic theory (37-XX) 2 Functional analysis (46-XX) 2 Probability theory and stochastic processes (60-XX) 2 Mechanics of particles and systems (70-XX) 1 General and overarching topics; collections (00-XX) 1 Linear and multilinear algebra; matrix theory (15-XX) 1 Associative rings and algebras (16-XX) 1 Group theory and generalizations (20-XX) 1 Measure and integration (28-XX) 1 Integral equations (45-XX) 1 Algebraic topology (55-XX) 1 Geophysics (86-XX)
#### Citations contained in zbMATH
65 Publications have been cited 447 times in 394 Documents.
Adaptive neural network control for a class of uncertain nonlinear systems in pure-feedback form. Zbl 0998.93026
Wang, Dan; Huang, Jie
2002
Neural network-based adaptive dynamic surface control of uncertain nonlinear pure-feedback systems. Zbl 1213.93105
Wang, Dan
2011
Single machine scheduling with exponential time-dependent learning effect and past-sequence-dependent setup times. Zbl 1165.90471
Wang, Ji-Bo; Wang, Dan; Wang, Li-Yan; Lin, Lin; Yin, Na; Wang, Wei-Wei
2009
Diversification under yield randomness in inventory models. Zbl 0795.90013
Parlar, Mahmut; Wang, Dan
1993
Containment control of networked autonomous underwater vehicles with model uncertainty and ocean disturbances guided by multiple leaders. Zbl 1417.93224
Peng, Zhouhua; Wang, Dan; Shi, Yang; Wang, Hao; Wang, Wei
2015
Adaptive fuzzy control of uncertain MIMO non-linear systems in block-triangular forms. Zbl 1215.93077
Li, Tieshan; Wang, Dan; Chen, Naxin
2011
Adaptive unknown input observer approach for aircraft actuator fault detection and isolation. Zbl 1134.93318
Wang, Dan; Lum, Kai-Yew
2007
Minimizing makespan in a two-machine flow shop with effects of deterioration and learning. Zbl 1259.90039
Wang, Ji-Bo; Ji, P.; Cheng, T. C. E.; Wang, Dan
2012
Single neural network approximation based adaptive control for a class of uncertain strict-feedback nonlinear systems. Zbl 1268.92014
Sun, Gang; Wang, Dan; Li, Tieshan; Peng, Zhouhua; Wang, Hao
2013
NWChem: a comprehensive and scalable open-source solution for large scale molecular simulations. Zbl 1216.81179
Valiev, M.; Bylaska, E. J.; Govind, N.; Kowalski, K.; Straatsma, T. P.; Van Dam, H. J. J.; Wang, D.; Nieplocha, J.; Apra, E.; Windus, T. L.; de Jong, W. A.
2010
Adaptive dynamic surface control for cooperative path following of marine surface vehicles with input saturation. Zbl 1314.93017
Wang, Hao; Wang, Dan; Peng, Zhouhua
2014
Leaderless and leader-follower cooperative control of multiple marine surface vehicles with unknown dynamics. Zbl 1281.93046
Peng, Zhouhua; Wang, Dan; Li, Tieshan; Wu, Zhiliang
2013
A DSC approach to adaptive neural network tracking control for pure-feedback nonlinear systems. Zbl 1272.93104
Sun, Gang; Wang, Dan; Li, Xiaoqiang; Peng, Zhouhua
2013
Bounds on augmented Zagreb index. Zbl 1289.05083
Wang, Dan; Huang, Yufei; Liu, Bolian
2012
New extended rational expansion method and exact solutions of Boussinesq equation and Jimbo-Miwa equations. Zbl 1122.65396
Wang, Dan; Sun, Weiwei; Kong, Cuicui; Zhang, Hongqing
2007
Cooperative fuzzy adaptive output feedback control for synchronisation of nonlinear multi-agent systems under directed graphs. Zbl 1332.93220
Wang, W.; Wang, D.; Peng, Z. H.
2015
Single-machine group scheduling with deteriorating jobs and allotted resource. Zbl 1288.90035
Wang, Dan; Huo, Yunzhang; Ji, Ping
2014
Adaptive decentralized NN control of nonlinear interconnected time-delay systems with input saturation. Zbl 1327.93017
Li, Tieshan; Wang, Dan; Li, Junfang; Li, Yongming
2013
Single-machine scheduling problems with both deteriorating jobs and learning effects. Zbl 1201.90089
Wang, Ji-Bo; Wang, Dan; Zhang, Guo-Dong
2010
Cooperative adaptive fuzzy output feedback control for synchronization of nonlinear multi-agent systems in the presence of input saturation. Zbl 1346.93238
Wang, Wei; Wang, Dan; Peng, Zhouhua; Wang, Hao
2016
Distributed coordinated tracking of multiple autonomous underwater vehicles. Zbl 1331.93082
Peng, Zhouhua; Wang, Dan; Wang, Hao; Wang, Wei
2014
Active coupling and its circuitry designs of chaotic systems against deteriorated and delayed networks. Zbl 1268.93126
Jin, Xiao-Zheng; Che, Wei-Wei; Wang, Dan
2012
Optimization of support positions to maximize the fundamental frequency of structures. Zbl 1075.74609
Wang, D.; Jiang, J. S.; Zhang, W. H.
2004
Truss shape optimization with multiple displacement constraints. Zbl 1028.74041
Wang, D.; Zhang, W. H.; Jiang, J. S.
2002
Parallel-machine scheduling with non-simultaneous machine available time. Zbl 1426.90138
Shen, Lixin; Wang, Dan; Wang, Xiao-Yuan
2013
Robust adaptive neural control of uncertain pure-feedback nonlinear systems. Zbl 1278.93093
Sun, Gang; Wang, Dan; Peng, Zhouhua; Wang, Hao; Lan, Weiyao; Wang, Mingxin
2013
A parametric mapping method for curve shape optimization on 3D panel structures. Zbl 1202.74131
Zhang, Weihong; Wang, Dan; Yang, Jungang
2010
Single-machine scheduling with learning functions. Zbl 1187.90145
Wang, Ji-Bo; Wang, Dan; Zhang, Guo-Dong
2010
Adaptive finite-time synchronization of a class of pinned and adjustable complex networks. Zbl 1349.34206
Jin, Xiao-Zheng; He, Yi-Gang; Wang, Dan
2016
Distributed cooperative stabilisation of continuous-time uncertain nonlinear multi-agent systems. Zbl 1317.93018
Peng, Zhouhua; Wang, Dan; Sun, Gang; Wang, Hao
2014
A bispace parameterization method for shape optimization of thin-walled curved shell structures with openings. Zbl 1246.74041
Wang, Dan; Zhang, Weihong
2012
More solutions of the auxiliary equation to get the solutions for a class of nonlinear partial differential equations. Zbl 1222.35052
Feng, Yang; Wang, Dan; Li, Wen-Ting; Zhang, Hong-Qing
2010
Predictor-based adaptive dynamic surface control for consensus of uncertain nonlinear systems in strict-feedback form. Zbl 1358.93108
Wang, Wei; Wang, Dan; Peng, Zhouhua
2017
An optimized dispersion-relation-preserving combined compact difference scheme to solve advection equations. Zbl 1349.65355
Yu, C. H.; Wang, D.; He, Z.; Pähtz, T.
2015
Globally exponential stability of nonlinear impulsive switched systems. Zbl 1321.93056
Xu, F.; Dong, L.; Wang, D.; Li, X.; Rakkiyappan, R.
2015
New exact solutions to mKdV-Burgers equation and $$(2 + 1)$$-dimensional dispersive long wave equation via extended Riccati equation method. Zbl 1197.35234
Kong, Cuicui; Wang, Dan; Song, Lina; Zhang, Hongqing
2009
Hochschild (co)homology of a class of Nakayama algebras. Zbl 1147.16013
Xu, Yunge; Wang, Dan
2008
Two-mode ILC with pseudo-downsampled learning in high frequency range. Zbl 1117.93045
Zhang, B.; Wang, D.; Ye, Y.; Wang, Y.; Zhou, K.
2007
Combined shape and sizing optimization of truss structures. Zbl 1128.74324
Wang, D.; Zhang, W. H.; Jiang, J. S.
2002
Streamline stiffener path optimization (SSPO) for embedded stiffener layout design of non-uniform curved grid-stiffened composite (NCGC) structures. Zbl 1440.74271
Wang, Dan; Abdalla, Mostafa M.; Wang, Zhen-Pei; Su, Zhoucheng
2019
Generalized sparse recovery model and its neural dynamical optimization method for compressed sensing. Zbl 1373.94725
Wang, Dan; Zhang, Zhuhong
2017
Fault-tolerant containment control of uncertain nonlinear systems in strict-feedback form. Zbl 1355.93058
Wang, Wei; Wang, Dan; Peng, Zhouhua
2017
Multi-product newsvendor problem with hybrid demand and its applications to ordering pharmaceutical reference standard materials. Zbl 1342.93069
Wang, Dan; Qin, Zhongfeng
2016
A novel single-period inventory problem with uncertain random demand and its application. Zbl 1410.90021
Wang, Dan; Qin, Zhongfeng; Kar, Samarjit
2015
Direct and composite iterative neural control for cooperative dynamic positioning of marine surface vessels. Zbl 1348.93125
Liu, Lu; Wang, Dan; Peng, Zhouhua
2015
Cooperative tracking and estimation of linear multi-agent systems with a dynamic leader via iterative learning. Zbl 1291.93300
Peng, Zhouhua; Wang, Dan; Zhang, Hongwei
2014
Adaptive neural control for a class of uncertain nonlinear systems with unknown time delay. Zbl 1166.93337
Wang, Dan; Lan, Weiyao
2009
Nonlinear state predictor for a class of nonlinear time-delay systems. Zbl 1066.93052
Wang, D.; Zhou, D. H.; Jin, Y. H.
2004
An elimination method based on Seidenberg’s theory and its applications. Zbl 0798.13013
Wang, D.
1993
Cooperative learning neural network output feedback control of uncertain nonlinear multi-agent systems under directed topologies. Zbl 1372.93032
Wang, W.; Wang, D.; Peng, Z. H.
2017
Signal processing and networking for big data applications. Zbl 1420.94002
Han, Zhu; Hong, Mingyi; Wang, Dan
2017
Detection and estimation of jump points in non parametric regression function with $$AR(1)$$ noise. Zbl 1328.62252
Wang, Dan; Guo, Pengjiang
2015
Predicting seismicity trend in southwest of China based on wavelet analysis. Zbl 1317.86015
Li, Xiaolu; Zheng, Wenfeng; Wang, Dan; Yin, Lirong; Wang, Yali
2015
Examining the smooth and nonsmooth discrete element approaches to granular matter. Zbl 1352.74084
Servin, M.; Wang, D.; Lacoursière, C.; Bodin, K.
2014
Neural adaptive control for leader-follower flocking of networked nonholonomic agents with unknown nonlinear dynamics. Zbl 1332.93023
Peng, Zhouhua; Wang, Dan; Liu, Hugh H. T.; Sun, Gang
2014
Cooperative iterative learning control of linear multi-agent systems with a dynamic leader under directed topologies. Zbl 1324.93008
Peng, Zhouhua; Wang, Dan; Wang, Hao; Wang, Wei
2014
A pure bending exact nodal-averaged shear strain method for finite element plate analysis. Zbl 1298.74241
Wu, C. T.; Guo, Y.; Wang, D.
2014
A remark on the homogeneity of isosceles orthogonality. Zbl 1305.46013
He, Chan; Wang, Dan
2014
A note on single-machine scheduling with nonlinear deterioration. Zbl 1258.90045
Wang, Xiao-Yuan; Wang, Dan; Yin, Na
2012
Implication of entropy flow for the development of a system as suggested by the life cycle of a hurricane. Zbl 1195.86009
Liu, Chongjian; Luo, Z.; Liu, Ying; Yu, H.; Zhou, X.; Wang, D.; Ma, L.; Xu, H.
2010
Single machine scheduling problems with position-dependent processing times. Zbl 1172.90409
Wang, Ji-Bo; Wang, Li-Yan; Wang, Dan; Wang, Xiao-Yuan; Gao, Wen-Jun; Yin, Na
2009
Optimization of support positions to minimize the maximal deflection of structures. Zbl 1124.74319
Wang, D.
2004
An efficient user task handling mechanism based on dynamic load-balance for workflow systems. Zbl 1025.68658
Song, Baoyan; Yu, Ge; Wang, Dan; Shen, Derong; Wang, Guoren
2003
A contact force model for the dynamic response of cracks. Zbl 0900.73672
Liu, J. B.; Sharan, S. K.; Wang, D.; Yao, L.
1994
Maximum-entropy approach to classical hard-sphere and hard-disk equations of state. Zbl 0758.70011
Wang, D.; Mead, L. R.; de Llano, M.
1991
#### Cited by 837 Authors
20 Tong, Shaocheng 17 Li, Yongming 15 Wang, Dan 9 Peng, Zhouhua 9 Wang, Jibo 8 Yoo, Sung Jin 7 Li, Tieshan 7 Wang, Huanqing 7 Zhang, Weihong 6 Chen, Bing 6 Ji, Ping 6 Lee, Wen-Chiung 6 Wu, Jian 5 Huang, Xue 5 Li, Jing 5 Lin, Chong 5 Liu, Xiaoping 5 Liu, Xinbao 5 Liu, Yanjun 5 Pardalos, Panos M. 5 Pei, Jun 5 Zhu, Jihong 4 Choi, Yunho 4 Rudek, Radosław 4 Shi, Yang 4 Sun, Lin-Hui 4 Sun, Linyan 4 Wang, Cong 4 Wang, Hao 4 Wang, Jianjun 4 Wang, Wei 4 Wu, Yubin 4 Yang, Guanghong 4 Zhang, Tianping 3 Chen, Weisheng 3 Cheng, Chih-Chiang 3 Cheng, Tai-Chiu Edwin 3 Dong, XiWang 3 Lai, Peng-Jen 3 Lin, Chih-Hong 3 Liu, Ke 3 Pan, Yongping 3 Park, Juhyun (Jessie) 3 Ren, Zhang 3 Sui, Shuai 3 Sun, Gang 3 Wang, Xiaoyuan 3 Wazwaz, Abdul-Majid Abdul-Rahman 3 Wu, Chinchia 3 Yan, Guangle 3 Yang, Shanlin 3 Yu, Haoyong 3 Yu, Jinpeng 3 Zhang, Xingong 2 Bai, Jing 2 Cai, Shouyu 2 Chen, C. L. Philip 2 Chen, Guangrong 2 Chen, Liangliang 2 Chen, Longsheng 2 Chen, Yangyang 2 Du, Jialu 2 Fan, Wenjuan 2 Friswell, Michael Ian 2 Furtula, Boris 2 Gan, Qintao 2 Ge, Shuzhi Sam 2 Geng, Zhiyong 2 Ghavidel, Hesam Fallah 2 Guan, Xinping 2 Gutman, Ivan M. 2 He, Xing 2 He, Yigang 2 Hu, Qinglei 2 Hua, Changchun 2 Huang, Junjian 2 Huang, Tingwen 2 Huang, Wanzhen 2 Itagaki, Tomohiro 2 Jiang, Bin 2 Jin, Xiaozheng 2 Kar, Indra Narayan 2 Karimi, Hamid Reza 2 Lan, Weiyao 2 Lee, Wenchiung 2 Li, Gang 2 Li, Junmin 2 Li, Lin 2 Li, Qingdong 2 Lin, Yan-Si 2 Liu, Derong 2 Liu, Jingbo 2 Liu, Xuan 2 Liu, Yonghua 2 Lu, Bin 2 Lu, Changjie 2 Lu, Junwei 2 Lu, Yuanyuan 2 Ma, Hongbin 2 Ma, Weimin ...and 737 more Authors
#### Cited in 87 Serials
44 Nonlinear Dynamics 31 Journal of the Franklin Institute 21 International Journal of Robust and Nonlinear Control 19 Applied Mathematics and Computation 19 Information Sciences 19 Applied Mathematical Modelling 15 Computer Methods in Applied Mechanics and Engineering 14 International Journal of Adaptive Control and Signal Processing 13 International Journal of Systems Science. Principles and Applications of Systems and Integration 12 Asian Journal of Control 10 Automatica 10 European Journal of Operational Research 10 Mathematical Problems in Engineering 10 Optimization Letters 8 International Journal of Control 8 Annals of Operations Research 7 Computers & Operations Research 6 International Journal of Systems Science 6 Fuzzy Sets and Systems 5 Computers & Mathematics with Applications 5 Soft Computing 5 Communications in Nonlinear Science and Numerical Simulation 4 Asia-Pacific Journal of Operational Research 4 Journal of Applied Mathematics and Computing 3 Journal of Computational Physics 3 Operations Research Letters 3 Neural Networks 3 SIAM Journal on Scientific Computing 3 Complexity 3 Advances in Difference Equations 3 Nonlinear Analysis. Hybrid Systems 2 Acta Mechanica 2 Communications in Algebra 2 International Journal for Numerical Methods in Engineering 2 Journal of Computational and Applied Mathematics 2 Journal of Global Optimization 2 Communications in Numerical Methods in Engineering 2 Philosophical Transactions of the Royal Society of London. Series A. Mathematical, Physical and Engineering Sciences 2 Discrete Dynamics in Nature and Society 2 Journal of Systems Science and Complexity 2 Acta Mechanica Sinica 2 Journal of Control Science and Engineering 2 Science China. Information Sciences 1 Computer Physics Communications 1 International Journal of General Systems 1 Journal of Mathematical Physics 1 Journal of Statistical Physics 1 Archiv der Mathematik 1 Opsearch 1 Journal of Information & Optimization Sciences 1 Circuits, Systems, and Signal Processing 1 Acta Mathematicae Applicatae Sinica. English Series 1 Journal of Symbolic Computation 1 Mathematical and Computer Modelling 1 Advances in Engineering Software 1 Computational and Applied Mathematics 1 Journal of the Egyptian Mathematical Society 1 Annals of Mathematics and Artificial Intelligence 1 Journal of Mathematical Chemistry 1 European Journal of Control 1 Journal of Vibration and Control 1 Abstract and Applied Analysis 1 Optimization Methods & Software 1 Journal of Inequalities and Applications 1 Journal of Scheduling 1 Acta Mathematica Sinica. English Series 1 Journal of Dynamical and Control Systems 1 RAIRO. Operations Research 1 International Journal of Uncertainty, Fuzziness and Knowledge-Based Systems 1 Archives of Computational Methods in Engineering 1 Applied Mathematics E-Notes 1 Iranian Journal of Science and Technology. Transaction A: Science 1 Bulletin of the Malaysian Mathematical Sciences Society. Second Series 1 Entropy 1 OR Spectrum 1 Structural and Multidisciplinary Optimization 1 International Journal of Wavelets, Multiresolution and Information Processing 1 Acta Numerica 1 Journal of Computational Acoustics 1 Journal of Control Theory and Applications 1 Discrete Optimization 1 International Journal of Intelligent Computing and Cybernetics 1 Journal of Nonlinear Science and Applications 1 Algorithms 1 Statistics and Computing 1 International Journal of Applied and Computational Mathematics 1 AMM. Applied Mathematics and Mechanics. (English Edition)
#### Cited in 36 Fields
220 Systems theory; control (93-XX) 103 Operations research, mathematical programming (90-XX) 47 Computer science (68-XX) 43 Biology and other natural sciences (92-XX) 26 Mechanics of deformable solids (74-XX) 23 Numerical analysis (65-XX) 16 Ordinary differential equations (34-XX) 16 Partial differential equations (35-XX) 13 Mechanics of particles and systems (70-XX) 11 Combinatorics (05-XX) 11 Information and communication theory, circuits (94-XX) 8 Game theory, economics, finance, and other social and behavioral sciences (91-XX) 6 Dynamical systems and ergodic theory (37-XX) 6 Quantum theory (81-XX) 4 Calculus of variations and optimal control; optimization (49-XX) 4 Statistics (62-XX) 3 Associative rings and algebras (16-XX) 3 Probability theory and stochastic processes (60-XX) 3 Fluid mechanics (76-XX) 3 Statistical mechanics, structure of matter (82-XX) 2 Harmonic analysis on Euclidean spaces (42-XX) 2 Geophysics (86-XX) 1 General and overarching topics; collections (00-XX) 1 Number theory (11-XX) 1 Field theory and polynomials (12-XX) 1 Commutative algebra (13-XX) 1 Linear and multilinear algebra; matrix theory (15-XX) 1 $$K$$-theory (19-XX) 1 Real functions (26-XX) 1 Special functions (33-XX) 1 Difference and functional equations (39-XX) 1 Integral transforms, operational calculus (44-XX) 1 Functional analysis (46-XX) 1 Convex and discrete geometry (52-XX) 1 Optics, electromagnetic theory (78-XX) 1 Classical thermodynamics, heat transfer (80-XX)
https://www.springerprofessional.de/17th-international-conference-on-electrical-bioimpedance/17674848
About this book
This book gathers the proceedings of the 17th International Conference on Electrical Bioimpedance (ICEBI 2019), held on June 9-14 in Joinville, Santa Catarina, Brazil. The chapters cover the latest knowledge and developments concerning: sensors and instrumentation to measure bioimpedance, bioimpedance imaging techniques, theory and modeling of bioimpedance, as well as cutting-edge clinical applications of bioimpedance. All in all, this book provides graduate students and researchers with an extensive and timely snapshot of current research and challenges in the field of electrical bioimpedance, and a source of inspiration for future research and cross-disciplinary collaborations.
Table of Contents
Design and Integration of Electrical Bio-Impedance Sensing in a Bipolar Forceps for Soft Tissue Identification: A Feasibility Study
This paper presents the integration of electrical bio-impedance (EBI) sensing technology into a bipolar surgical forceps for soft tissue identification during robotic-assisted procedures. EBI sensing is performed by pressing the forceps on the target tissue with a controlled pressing depth and a controlled jaw opening distance. The impact of these two parameters is characterized by finite element simulation. Subsequently, an experiment is conducted with four types of ex-vivo tissue: liver, kidney, lung and muscle. The experimental results demonstrate that the proposed EBI sensing method can identify these four tissue types with an accuracy higher than 92.82%.
Zhuoqi Cheng, Diego Dall’Alba, Darwin G. Caldwell, Paolo Fiorini, Leonardo S. Mattos
Influence of Measurement Pattern on RAW-data in Electrical Impedance Tomography
The conductivity distribution inside a volume conductor can be reconstructed with Electrical Impedance Tomography (EIT). To this end, electrodes on the surface of the volume conductor are used to inject a constant current and measure the resulting surface potentials, which is equivalent to measuring transfer impedances. A sequence of current injections and voltage measurements is called a measurement pattern and results in a set of transfer impedances. Various measurement patterns exist, and each of them has a specific sensitivity that influences the distinguishability of an object in the reconstructed image. To compare different patterns, we introduced three criteria based on the RAW measurement and evaluated the performance in a water-tank experiment. Measurement patterns with an increased distance between the injecting and measuring electrodes showed more sensitivity and selectivity in the RAW data and should be preferred over the traditional adjacent pattern.
Tobias Menden, Tobias Textor, Samantha Schadwinkel, Steffen Leonhardt, Marian Walter
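The difference between the adjacent pattern and patterns with increased electrode distance comes down to how injection and measurement pairs are enumerated. A minimal Python sketch, assuming a 16-electrode ring and a "skip-n" parametrization (both illustrative assumptions, not details taken from the chapter):

```python
# Sketch of EIT measurement-pattern generation for a 16-electrode ring.
# "Skip-n" here means the injecting (and measuring) electrode pairs are
# separated by n electrodes; skip-0 is the traditional adjacent pattern.
# Electrode count and pattern naming are illustrative assumptions.

N_ELECTRODES = 16

def pattern(skip):
    """Return a list of (injection_pair, measurement_pair) tuples."""
    step = skip + 1
    seq = []
    for i in range(N_ELECTRODES):
        inject = (i, (i + step) % N_ELECTRODES)
        for m in range(N_ELECTRODES):
            measure = (m, (m + step) % N_ELECTRODES)
            # Skip measurements on electrodes that carry current.
            if set(inject) & set(measure):
                continue
            seq.append((inject, measure))
    return seq

adjacent = pattern(skip=0)
skip4 = pattern(skip=4)
print(len(adjacent), len(skip4))
```

For a full frame, each pattern here yields 16 × 13 = 208 transfer impedances, since measurements on current-carrying electrodes are discarded.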
Hardware Setup for Tetrapolar Bioimpedance Spectroscopy in Bandages
As the demographic change progresses, medical research is increasingly focusing on geriatric diseases. Our work concentrates on patients who suffer from age-related weakness of connective tissues or dilated venous valves, which result in chronic venous insufficiency (CVI). CVI leads to reduced perfusion of the limbs, increased venous pressure and tissue deficiency, especially in the lower leg. As a result, chronic wounds develop that can persist for several decades. In clinical practice, CVI patients with wounds are outpatients who consult a physician for diagnosis every two months. A possible way to shorten the interval of diagnosis is monitoring technology such as bioimpedance spectroscopy (BIS), which is capable of detecting changes in tissue integrity. Developing a device for BIS in bandages could therefore enable quasi-continuous wound status monitoring and alert the physician if necessary. The presented hardware setup for BIS includes textile-based electrodes for tetrapolar measurements that can be integrated into a bandage without reducing the comfort of the patient. Shape and size of the electrodes correspond to those of typical wound dressings. The hardware is based on the AFE4300 device, prioritizing low energy consumption over highly dynamic or continuous measurements, as wound status dynamics are slow. We show that the complex impedance of human tissue can be measured with high precision if the electrodes are covered with compression stockings, as contact pressure enhances the electrode-skin response.
Stephan Dahlmanns, Alissa Wenzel, Steffen Leonhardt, Daniel Teichmann
Selection of Cole Model Bio-Impedance Parameters for the Estimation of the Ageing Evolution of Apples
Electrical impedance spectroscopy (EIS) is an emerging, fast, reliable and non-destructive technique for fruit quality assessment. In this paper, bio-impedance spectroscopy (BIS) measurements were carried out to monitor apple ageing evolution over a 12-day period using a microcontroller-based system. The data were acquired in the 100 Hz - 85 kHz frequency range and the resulting impedance spectra were fitted with the Cole equivalent circuit. The four variables of the circuit, namely the series and parallel resistances ($$R_s$$ and $$R_p$$) and the constant phase element (CPE) magnitude and phase, were extracted from the model and evaluated in terms of their correlation with the studied fruit ageing evolution. The results indicated that the decreasing trend of the parallel resistance ($$R_p$$) is useful to cluster the apples according to ageing stage, while the CPE parameters allow discrimination among the physiological conditions of individual fruits at the same maturity stage.
Pietro Ibba, Giuseppe Cantarella, Biresaw Demelash Abera, Luisa Petti, Aniello Falco, Paolo Lugli
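The Cole equivalent circuit described above (series resistance, parallel resistance, and a constant phase element) can be evaluated directly from its impedance expression. A minimal sketch, assuming the common topology of $$R_s$$ in series with $$R_p$$ in parallel with a CPE, with made-up parameter values (not the fitted apple values):

```python
# Minimal sketch of a Cole-type equivalent circuit: a series resistance
# R_s in series with the parallel combination of R_p and a constant
# phase element Z_CPE = 1 / (Q * (j*omega)**alpha).
# All parameter values below are illustrative, not from the chapter.
import cmath, math

def cole_impedance(f, Rs, Rp, Q, alpha):
    omega = 2 * math.pi * f
    z_cpe = 1 / (Q * (1j * omega) ** alpha)
    return Rs + (Rp * z_cpe) / (Rp + z_cpe)

# Sweep the 100 Hz - 85 kHz range used in the paper.
Rs, Rp, Q, alpha = 50.0, 500.0, 1e-7, 0.8
for f in (100, 1e3, 10e3, 85e3):
    z = cole_impedance(f, Rs, Rp, Q, alpha)
    print(f"{f:>8.0f} Hz  |Z| = {abs(z):7.1f} ohm  "
          f"phase = {math.degrees(cmath.phase(z)):6.2f} deg")
```

At low frequency the magnitude approaches $$R_s + R_p$$ and at high frequency it approaches $$R_s$$, which is the behavior a fitting routine exploits when extracting the four parameters from a measured spectrum.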
Biosensor Based on Carbon Nanocomposites for Detecting Glucose Concentration in Water
New materials have been developed with nanotechnology since the 1960s. Carbon-based nanocomposites are used as biosensors due to their structural, electrical and thermal properties. These nanocomposites have advantages such as high sensitivity, mechanical flexibility and biocompatibility, which are useful in glucose sensors. Non-conventional methods have been used to measure glucose in physiological fluids such as urine, sweat, saliva, breath, interstitial and ocular fluid. This work presents preliminary results of electrical impedance spectroscopy measurements at low glucose concentrations (36–1000 µM). It uses a bipolar sensor coated with a mixture of DBEGA and graphene. Results indicate a higher sensitivity (348 Ω/µM) at glucose concentrations lower than 200 µM. The sensitivity diminished with increasing glucose concentration. This sensor has promising applications for quantifying glucose concentrations at frequencies between 100 Hz and 10 MHz with a low electrode-electrolyte interface impedance. A possible sensor application is continuous monitoring of physiological aqueous media such as urine, sweat or saliva for hypoglycemic and hyperglycemic patients.
John Alexander Gomez-Sanchez, Renata Hack, Sergio Henrique Pezzin, Pedro Bertemes-Filho
Bioimpedance Measurements on Human Neural Stem Cells as a Benchmark for the Development of Smart Mobile Biomedical Applications
Over the past 30 years, stem cell technologies have matured from an attractive option for investigating neurodegenerative diseases to a possible paradigm shift in their treatment through the development of cell-based regenerative medicine (CRM). Implantable cell replacement therapies promise to completely restore the function of neural structures, possibly changing how we currently perceive the onset of these conditions. One of the major clinical hurdles facing the routine implementation of stem cell therapy is the limited and inconsistent benefit observed thus far. While the causes remain unclear, numerous pre-clinical and a handful of clinical cell-fate imaging studies point to poor cell retention and survival. Coupling the need to better understand these mechanisms with scalable approaches to monitor these treatments in both pre-clinical and clinical scenarios, we show a proof-of-concept bioimpedance electronic platform for the agile development of smart and mobile biomedical applications such as neural implants or highly portable monitoring devices.
André B. Cunha, Christin Schuelke, Arto Heiskanen, Afia Asif, Yasmin M. Hassan, Stephan S. Keller, Håvard Kalvøy, Alberto Martínez-Serrano, Jenny Emnéus, Ørjan G. Martinsen
Numerical Simulation of Various Electrode Configurations in Impedance Cardiography to Identify Aortic Dissection
Impedance cardiography (ICG) is a non-invasive method to evaluate several cardiodynamic parameters. Pathologic changes in the aorta, like an aortic dissection, will alter the aortic shape as well as the blood flow and, consequently, the impedance cardiogram. This fact distorts the evaluated cardiodynamic parameters on the one hand, and on the other offers the possibility of identifying aortic pathologies. In order to find an appropriate measurement configuration, in particular for the identification of aortic dissections, a 3D simulation model has been used. Various electrode positions have been investigated to reach a high sensitivity with respect to the discrepancy between the healthy and the dissected case.
Alice Reinbacher-Köstinger, Vahid Badeli, Gian Marco Melito, Christian Magele, Oszkar Bíró
Numerical Simulation of Impedance Cardiogram Changes in Case of Chronic Aortic Dissection
Aortic dissection is an extremely dangerous aortic disease which alters the aortic shape as well as the blood flow in the region concerned. A numerical simulation model based on the Thoracic Electrical Bioimpedance technique for investigating effects caused by aortic dissections is proposed. The effect of these changes on time-dependent hemodynamic parameters is shown.
Vahid Badeli, Alice Reinbacher-Köstinger, Oszkar Biro, Christian Magele
Analysis of Silicone Additives to Model the Dielectric Properties of Heart Tissue
Leonie Korn, Simon Lyra, Steffen Leonhardt, Marian Walter
A Short Review of Membrane Models for Cells Electroporation
This article presents some bioimpedance models and mathematical solutions used to describe the theoretical process of electroporation in biological cells. Three different models are discussed, with greater focus on the last two, which describe the formation of pores that theoretically influences the conductivity of the membrane.
Jéssica R. da Silva, Raul Guedert, Guilherme B. Pintarelli, Daniela O. H. Suzuki
Data Views Technology of Bioimpedance Vector Analysis of Human Body Composition
In 1994, A. Piccoli et al. proposed BIVA, an alternative form of BIA data presentation. Standard values of BIVA are usually presented as 75% and 95% tolerance ellipses.
Aim of the work: to assess the accuracy of classical bioimpedance vector analysis for the population of Russia and to develop an option for a two-dimensional representation of the data of bioimpedance analysis of human body composition.
Materials and methods: The present study used data from 1,635,891 patients aged 5 to 85 who underwent bioimpedance research as part of visits to Russian health centers in 2009–2015. The BIVA ellipses for each year of life were constructed, and their congruence with each other and with the ellipses from A. Piccoli’s work was assessed using the conics package by Bernard Desgraupes in an R environment. Animation software was also developed which, based on raw data, constructed slices of the actual two-dimensional distribution of any selected pair of bioimpedance human body composition parameters.
Results and discussion: Calculated from the Russian data, the BIVA tolerance ellipses differed from those in A. Piccoli’s work. The ellipses of the Russian population had a non-concentric location: the centers of the 50%, 75% and 95% tolerance ellipses do not coincide. At many ages, not only a center displacement was detected, but also changes in the angle of inclination of the main axis of the 95% and 50% tolerance ellipses. Ellipses for adjacent ages differed as well. The slices of the actual two-dimensional distribution had an irregular shape that varied greatly with age, especially for the 95% tolerance cloud.
Conclusion: BIVA ellipses of the Russian population differed markedly from Piccoli’s. For adequate assessment and minimization of possible errors, localized reference values for each gender and age should be used. The proposed two-dimensional representations allow the analysis of four pairs of BIA parameters.
Svetlana P. Shchelykalina, Dmitry V. Nikolaev, Vladimir A. Kolesnikov, Kontantin A. Korostylev, Olga A. Starunova
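A BIVA tolerance ellipse of a given coverage is determined by the mean vector, the sample covariance of the two plotted parameters, and a chi-square quantile (df = 2). A minimal sketch with made-up (R/H, Xc/H) pairs, not the Russian health-center data:

```python
# Sketch of deriving a BIVA tolerance ellipse from paired
# height-normalized resistance/reactance data (R/H, Xc/H): center at
# the mean vector, axes along the eigenvectors of the sample
# covariance, semi-axes scaled by the chi-square quantile for the
# chosen coverage.  The data points below are illustrative only.
import math

data = [(300, 35), (310, 40), (290, 33), (305, 38), (295, 36), (315, 41)]

n = len(data)
mx = sum(r for r, _ in data) / n
my = sum(x for _, x in data) / n
sxx = sum((r - mx) ** 2 for r, _ in data) / (n - 1)
syy = sum((x - my) ** 2 for _, x in data) / (n - 1)
sxy = sum((r - mx) * (x - my) for r, x in data) / (n - 1)

# Eigenvalues of the 2x2 covariance matrix (closed form).
tr, det = sxx + syy, sxx * syy - sxy ** 2
l1 = tr / 2 + math.sqrt((tr / 2) ** 2 - det)
l2 = tr / 2 - math.sqrt((tr / 2) ** 2 - det)
angle = math.degrees(math.atan2(l1 - sxx, sxy))  # tilt of major axis

for coverage in (0.50, 0.75, 0.95):
    k = -2 * math.log(1 - coverage)   # chi-square quantile, df = 2
    a, b = math.sqrt(k * l1), math.sqrt(k * l2)
    print(f"{coverage:.0%} ellipse: semi-axes {a:.1f} x {b:.1f}, "
          f"tilt {angle:.1f} deg")
```

The closed-form chi-square quantile `-2 ln(1 - p)` holds exactly for two degrees of freedom, which is why the 95% ellipse uses the familiar value 5.99.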
Analysis of Electrical Bioimpedance for the Diagnosis of Sarcopenia and Estimation of Its Prevalence
Sarcopenia in older adults has become a public health problem associated with several adverse health outcomes that lead to high costs of care. Accurate and timely identification of this condition is therefore required, in order to carry out prevention and early intervention to reduce its prevalence. The most important components of sarcopenia are the loss of quantity and quality of skeletal muscle, and diagnosis begins with the evaluation of these two dimensions. Whilst developed countries can use cumbersome and expensive methods to assess the amount of muscle, such as dual-energy X-ray absorptiometry (DEXA), magnetic resonance imaging (MRI) and computerized tomography (CT), countries with fewer resources can take advantage of less expensive and easy-to-apply techniques for the diagnosis of sarcopenia, such as electrical bioimpedance analysis. In this study, the latter method was used to estimate the amount of muscle mass, in conjunction with the assessment of handgrip strength and the Short Physical Performance Battery, in order to estimate the prevalence of sarcopenia in 210 older adults in Manizales, Colombia. For this, cut-off points obtained from <2 SDs of the muscle mass of a young adult population in the same city were used. The results determined a prevalence of 11.4%, a figure that differs from that found in the same population when skeletal mass index (SMI) cut-off points obtained from a Taiwanese or a Mexican-American population were used (9.5% and 27.6%, respectively). On the other hand, the phase angle, another bioimpedance parameter, emerges as a possibly promising risk marker for sarcopenia, since the difference between individuals with and without sarcopenia was significant (5.79 ± 0.76 vs. 6.26 ± 0.79, p = 0.006). However, as shown by the ROC curve analysis, its predictive capacity was low in this study and further exploration of this topic is required.
Clara Helena Gonzalez-Correa, Maria Camila Pineda-Zuluaga, Luz Elena Sepulveda-Gallego
Sarcopenia in Patients with Chronic Obstructive Pulmonary Disease and Evaluation of Raw Bioelectrical Impedance Analysis Data
Chronic Obstructive Pulmonary Disease (COPD) is associated with extrapulmonary comorbidities related to alterations in the amount and function of muscle mass, such as sarcopenia. However, there is a paucity of studies that use the diagnostic criteria for sarcopenia proposed by the European Working Group on Sarcopenia in the Elderly (EWGSOP) in this condition. This group has supported the use of Bioelectrical Impedance Analysis (BIA) as a technique to estimate muscle mass. On the other hand, some authors have suggested that raw BIA data such as the 250 kHz/5 kHz impedance ratio (IR) and the phase angle (PA) obtained from this technique could be indicators of sarcopenia. Few studies have explored the use of these variables in patients with COPD. The aim of this study was to evaluate the presence of sarcopenia in these patients according to the EWGSOP criteria and to compare the IR and PA in patients with and without sarcopenia. The results showed that 50% of the patients were diagnosed with sarcopenia. This condition was related to the degrees of severity of COPD. IR and PA were not different in COPD patients with and without sarcopenia. BIA is a useful tool for the comprehensive evaluation of COPD patients; however, the IR and PA variables in this study were not indicators of sarcopenia in these patients.
Maria Camila Pineda-Zuluaga, Clara Helena Gonzalez-Correa, Luz Elena Sepulveda-Gallego
Skeletal Muscle Index Using Bioelectrical Impedance for Diagnosis of Sarcopenia in Two Colombian Studies
Sarcopenia is defined as an age-related loss of muscle mass that affects physical function. The objective of this study was to assess the performance of the new cut-off points for the skeletal mass index (SMI) in another similar population and to evaluate whether both sets of cut-off points classify the muscle mass of individuals in the same category. Methods: Forty-five men aged 22.6 ± 3.2 years were included. Percentage body fat (%BF) was estimated from four skinfolds, the SMI by BIA, and muscle function by the handgrip strength test (HGS). Results: Cut-off points generated with the new population were similar to those found in the previous study. With both sets of cut-off points, none of our young participants showed low muscle mass as estimated by SMI. Discussion and Conclusion: Individuals were classified in the same category with both sets of cut-off points. However, a future study is recommended to establish whether these data coincide with those obtained directly from a population of elderly people in the same city using BIA in contrast to a reference technique such as DXA.
Clara Helena Gonzalez-Correa, Julio Cesar Caicedo-Eraso, Diana Rocío Varon-Serna
Bioimpedance Measurement to Evaluate Swallowing in a Patient with Amyotrophic Lateral Sclerosis
Objective: Swallowing dysfunction is an increasingly common symptom in patients with amyotrophic lateral sclerosis (ALS). The traditional videofluoroscopy method allows observation of the entire oropharyngeal swallowing process during the examination; however, it is expensive and requires the use of ionizing radiation. Impedance pharyngography (IPG) is a new cost-effective and non-invasive method for real-time swallowing monitoring. The goal of the present pilot study is to investigate whether IPG could be used for evaluating the swallowing process in ALS patients and to learn how IPG waveforms relate to videofluoroscopy data. Method: A new IPG measurement system based on a lock-in amplifier was developed, which can acquire both the impedance magnitude and phase of the IPG signal, rather than magnitude-only information as is typical today. Results: The physiological significance of the obtained IPG waveform was confirmed by comparing it with the chronological series of anatomical events recorded simultaneously with the videofluoroscopy swallowing exam. Significance: IPG may be used as a simple clinical tool for estimating the swallowing function of ALS patients with minimal stress and inconvenience.
Fu Zhang, Courtney McIlduff, Hilda Gutierrez, Sarah MacKenzie, Seward Rutkove
Three Electrode Arrangements for the Use of Contralateral Body Segments as Controls for Electrical Bio-Impedance Measurements in Three Medical Conditions
Some medical and physiological conditions, like diabetic foot ulcers (DFU), fractures and skin cancer, as well as greater physical lateral development of one side of the body in sports like soccer, have a unilateral presentation, frequently in one of the limbs. Electrical bio-impedance (EBI) can be useful in the diagnosis or follow-up of some of these conditions and, in some cases, the unaffected extremity could serve as a control. Nevertheless, functional laterality can affect symmetry, and this fact has to be taken into consideration. Four different approaches for EBI measurements are proposed in order to apply them to the above-mentioned conditions: whole body (WB), large segments (LS), small segments (SS) and a rosette array (RS). WB and LS may be useful for the assessment of soccer players as well as DFU; SS is meant to be applied to the study of fractures; and, finally, the RS array can be helpful in the study of skin lesions and the risk of developing DFU. These different arrangements are presented with a nomenclature to be used with each of them. In this particular case, all measurements show lateral asymmetry, values being higher on the left side for WB, LS and SS, while the opposite applies for the majority of the RS readings. All comparisons between both sides show that the differences in LS and SS measurements are statistically significant (P value <<0.05 for WB), as well as for the RS measurements. However, in the latter case, for one group of readings, the P value was just 0.0490.
C. A. Gonzalez-Correa, L. O. Tapasco-Tapasco, S. Salazar-Gomez
Luminal Electrical Resistivity at 50 kHz of the Pig Large Intestinal Wall
In GruBIE we are interested in finding the best conditions to carry out, in the future, in vivo transendoscopic measurements on the colon wall in humans. The pressure applied by the probe to the tissue being measured and the probe/electrode size, as well as the thickness of the large intestine (LI) wall, may well play a crucial role in the values obtained when readings of the studied tissue are made. In this article, two different approaches are explored on resected colon specimens from 3 adult pigs, with a probe previously used for in vitro and in vivo measurements in humans and other animals such as pigs and rabbits. These two approaches differ, basically, in probe position (tip facing down or tip facing up), with high pressure being applied in the first case (about 27.5 kPa) and low pressure in the second case (about 1.8 kPa for the pieces of colon and about 7.2 kPa for measurements on the rectum). The main finding of this initial study is that both aspects, the pressure applied with the tip of the probe and the thickness of the tissue, seem to affect the impedance readings. For future work, these two aspects have to be considered, and this should be reflected in the probe and electrode sizes.
C. A. Gonzalez-Correa, L. O. Tapasco-Tapasco, S. Ballesteros-Lopez
Evaluating the Effects of Cold Storage on Vascular Grafts Using Bioimpedance Measurement Techniques
The most common vascular preservation method for cardiovascular and transplantation surgeries is cold storage, where the vessels are cooled down and kept in a preservation solution until the surgery can be performed. The process of cooling and storing the vascular tissue affects the quality of the graft. The currently used methods for evaluating the quality of cold-stored vascular grafts are destructive and invasive and require segmenting and staining. Bioimpedance measurement is a non-destructive technique, and the objective of this work was therefore to study how changes in the structure and morphology of the grafts during the cold storage period affect bioimpedance measurements. Bioimpedance measurement was employed as a non-invasive method to study and evaluate changes in the quality of ovine jugular veins and carotid arteries during 30 days of cold storage in UW solution. The results of the study show that the bioimpedance measurement technique can be used for non-invasive and non-destructive monitoring of the quality of blood vessels during the cold storage period.
Maryam Amini, Jonny Hisdal, Antonio Rosales, Håvard Kalvøy, Ørjan Grøttem Martinsen
Tissue Impedance Spectroscopy to Guide Resection of Brain Tumours
Visual differentiation of lower-grade glioma tissue from normal brain tissue during surgery is difficult even for expert neurosurgeons. Therefore, during tumour removal neurosurgeons rely on image guidance. It has been proven that higher rates of tumour resection prolong the long-term survival of patients. We aim to implement impedance spectroscopy as a potential supportive tool to improve radical resection. In this pilot study, we evaluated the possibility of differentiating ex vivo tissue samples (biopsy samples taken during tumour surgeries) with the help of impedance spectroscopy. Tissues were collected from two patients, and impedance spectrum differences were found between low-grade glioma, high-grade glioma and healthy brain tissue.
Mareike Apelt, Gesar Ugen, Levin Häni, Andreas Raabe, Juan Ansó, Kathleen Seidel
Relationships Between Bioimpedance Variables and Gene Expression in Lactuca Sativa Exposed to Cold Weather
Four sets of 4-week-old Lactuca sativa plants, each with 30 samples, were subjected to a nocturnal frost simulation by placing them in an instrumented freezer with the internal temperature maintained at a constant value of −2 °C. At time points of 60, 120, 180 and 240 min, the lettuce sets were removed one after another from the freezer to be tested with an electrical impedance spectroscopy device, built in the laboratory around a commercial electronic board, in which a cut leaf of each plant was subjected to a 1 to 300 kHz frequency sweep of an alternating current with a constant voltage of 1 V. Results showed a progressive and statistically significant reduction of the critical frequency at which the phase angle reached its minimum as the frost time proceeded (−40% after 180 min, P < 0.05), and this was interpreted as the well-known tissue damage due to ice crystals forming both in the cytoplasm and in extra-cellular fluids. A quadratic regression of the critical frequency versus the concentration of the protein LBD1 was also found; this protein was overexpressed by the cluster of genes whose primers are AGCAGAGGTGGTGAATTTGC (LACTLBD1-F) and AGCTGCCTAAATTGGCGTTA (LactLBD1-R), identified as markers of the applied cold stress in the lettuce plants. It was concluded that an easy-to-use and inexpensive electrical impedance spectroscopy device can give strategic information concerning cold-weather damage to lettuce in the field.
Diego Albani, Alberto Concu, Lara Perrota, Antonio H. Dell’Osa, Andrea Fois, Andrea Loviseli, Fernanda Velluzi
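The critical frequency used above is simply the frequency at which the phase angle of the measured spectrum is most negative. A minimal sketch that locates it on a synthetic single-dispersion (Debye-type) spectrum with illustrative parameters, since the measured lettuce spectra are not reproduced here:

```python
# Sketch of extracting the "critical frequency" (frequency at which the
# impedance phase angle reaches its minimum) from a spectrum.  The
# spectrum is synthetic: a single-dispersion Debye model with made-up
# parameter values, not the measured lettuce data.
import cmath, math

def debye(f, r_inf=200.0, delta_r=800.0, tau=1e-5):
    omega = 2 * math.pi * f
    return r_inf + delta_r / (1 + 1j * omega * tau)

# Log-spaced sweep roughly over the 1-300 kHz range used above.
freqs = [10 ** (3 + 2.5 * i / 400) for i in range(401)]  # 1 kHz .. ~316 kHz
phases = [math.degrees(cmath.phase(debye(f))) for f in freqs]
f_crit = freqs[phases.index(min(phases))]
print(f"critical frequency ~ {f_crit / 1000:.1f} kHz, "
      f"phase {min(phases):.1f} deg")
```

For this model the phase minimum sits at the frequency where ωτ = √(R₀/R∞), so shifts in the located frequency track changes in the dispersion parameters, which is the quantity the chapter correlates with frost time.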
Effect of Heating on Dielectric Properties of Hungarian Acacia Honeys
The detection of overheating of honey during processing is very important in the quality characterization of honey products. Four Hungarian acacia honeys from different places were held in water baths at 35, 40, 50, 60, and 80 °C for 0.5, 4 and 24 h. After heating, the honeys were cooled down to room temperature. The electrical impedance spectra of the honeys before and after heating were measured by precision LCR meters in the frequency range from 30 Hz up to 30 MHz at 1 V voltage with Ag/AgCl electrodes at room temperature (22 °C). The measured impedance spectra, after open and short correction, were fitted with a model consisting of a distributed circuit element in serial connection with a resistance, and the parameters of this model were determined. The resistance of the distributed circuit element decreased after heat treatment for all honeys. After more detailed investigation, this parameter may be usable for detecting prior heating of honey products.
Eszter Vozáry, Zsanett Bodor, Kinga Ignácz, Bíborka Gillay, Zoltán Kovács
Impedance Measurements Sensitive to Complementary DNA Concentrations
Developing countries face obstacles in reaching quality and innovative healthcare services, since access to new technologies is limited. Such is the case for genetic medicine, where deoxyribonucleic acid (DNA) detection techniques are complex and expensive. In recent years, the development of genosensors has been of great interest, since they offer cheaper alternatives for specific DNA detection. In this work, we explored the use of multifrequency impedance measurements to detect three concentrations of DNA without labeling. The results suggest that impedance could be useful as a concentration-sensitive DNA measurement parameter and could then be used for the development of easy-to-use and potentially cheaper technologies.
Gerardo Ames, R. Gnaim, J. Sheviryov, A. Goldberg, M. Oziel, E. Sacristán, César Antonio González
Monitoring Lactobacillus Bulgaricus Growth in Yoghurt by Electrical Impedance
Characterization of different Lactobacillus bacterial strains is in increasing demand due to their potential health benefits. Classical cell counting methods are time-consuming, require well-trained laboratory work, and suffer from higher subjectivity and error. The determination of the electrical impedance spectrum is a promising method for the evaluation of changes in chemical structure and cell growth during the formation of yoghurt. Our aim is to use electrical impedance spectra for monitoring the cell growth of LABs. In this study, different strains of Lactobacillus bulgaricus bacteria were used for the production of yoghurt from ultra-high-temperature processed (UHT) milk of 1.5% fat content. The cell count of the LABs was determined with the plating method on MRS (De Man, Rogosa and Sharpe) agar every hour during 12 h of cultivation at 37 °C. For impedance measurements, the real part (conductance) and the imaginary part (susceptance) of the admittance (the reciprocal of electrical impedance) were recorded over 12 h at 37 °C in the frequency range from 50 Hz up to 800 kHz with an HP 4284A precision LCR meter. Results showed an increase in bacterial cell number from the order of 10^3 to 10^7 CFU/mL, together with increases in conductance and susceptance. Regression models could also accurately predict bacterial cell count. Impedance measurement during the formation of yoghurt can be useful for monitoring Lactobacillus growth and determining cell number, when exact methodologies and setups are applied.
Zsanett Bodor, John-Lewis Zinia Zaukuu, Tímea Kaszab, Anikó Lambert-Meretei, Mahmoud Said Rashed, Zoltan Kovacs, Csilla Mohácsi Farkas, Eszter Vozáry
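The regression models mentioned above map an electrical quantity to log-scaled cell counts. A minimal ordinary-least-squares sketch with made-up (conductance, CFU/mL) pairs, not the yoghurt data from the study:

```python
# Sketch of the kind of regression model mentioned above: ordinary
# least squares predicting log10(cell count) from measured conductance.
# The (conductance, CFU/mL) pairs are illustrative made-up values.
import math

samples = [(4.1, 1e3), (4.4, 1e4), (4.8, 1e5),
           (5.3, 1e6), (5.9, 1e7)]           # (mS, CFU/mL)

xs = [g for g, _ in samples]
ys = [math.log10(cfu) for _, cfu in samples]
n = len(samples)
mean_x, mean_y = sum(xs) / n, sum(ys) / n
slope = sum((x - mean_x) * (y - mean_y) for x, y in zip(xs, ys)) / \
        sum((x - mean_x) ** 2 for x in xs)
intercept = mean_y - slope * mean_x

predicted = slope * 5.0 + intercept  # predicted log10(CFU/mL) at 5.0 mS
print(f"log10(CFU/mL) ~ {slope:.2f} * G + {intercept:.2f}; "
      f"at 5.0 mS: {predicted:.2f}")
```

Fitting against log10(CFU/mL) rather than the raw count reflects the exponential growth over the 10^3 to 10^7 range reported in the chapter.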
Source Consistency Frequency Difference Electrical Impedance Tomography (sc-fdEIT)
Based on the frequency spectroscopy of biological tissues given by BIS, we aim to separately reconstruct the different frequency-dependent sources using a newly proposed image reconstruction technique called source consistency frequency difference electrical impedance tomography (sc-fdEIT). The boundary frequency-dependent voltages can be approximated to first or second order with respect to the frequency-dependent spectroscopy of each separated source, which is prior information in this study. The numerical simulation results show the feasibility of the proposed sc-fdEIT method as a new noninvasive technique for the identification of frequency-dependent objects mixed in the domain.
Tingting Zhang, Tong In Oh, Eung Je Woo
A Measure of Prior Information of a Pathology in an EIT Anatomical Atlas
One approach to solving the Electrical Impedance Tomography inverse problem is Bayesian inference. The use of Bayes’ rule for conditional posterior probability density functions requires the definition of two probability density functions: a prior probability density function and a likelihood. The prior probability density function is often called an Anatomical Atlas when the information is based on anatomy and physiology. Sufficient prior information on a pathology is necessary in order to have this pathology correctly represented in the tomographic image. A measure of how much a particular pathology is represented in an anatomical atlas is proposed.
Rafael Mikio Nakanishi, Talles Batista Rattis Santos, Marcelo Britto Passos Amato, Raul G. Lima
Functional Segmentation for Electrical Impedance Tomography May Bias the Estimated Center of Ventilation
Functional segmentation of the region of interest (ROI) on electrical impedance tomography (EIT) images is performed by excluding voxels with volume variation below a specified threshold. Considering the heterogeneous spatial distribution of poorly aerated regions, this work assessed the influence of the ROI threshold on the ventilation distribution of healthy lungs. Sixteen adults were mechanically ventilated and positive end-expiratory pressure (PEEP) was titrated (20 to 4 cmH2O in 2 cmH2O steps, 100 s each) while ventilatory and EIT data were recorded. ROIs were delimited for each PEEP step based on the respective step (ROISTEP) or the maximum PEEP (ROIMAX PEEP), and on thresholds of 5%, 10%, 15% and 20%. The spatial distribution of ventilation was assessed by the center of ventilation (CoV) and the difference between higher thresholds (10%, 15% and 20%) and their respective 5%-threshold counterparts (dCoV). Results showed a positive correlation between CoV and PEEP for all strategies. While dCoV was not significantly different from zero for ROIMAX PEEP, ROISTEP dCoV was inversely proportional to PEEP and directly proportional to the chosen threshold. These results suggest that assessments of the spatial distribution of ventilation might be biased by functional ROI delimitation parameters, especially at lower PEEP values.
Alcendino Jardim-Neto, Juliana Neves Chaves
Preliminary Results of a Clinical EIM System
This paper presents the results of the first volunteer cases of a clinical trial, carried out in early 2019, for an Electrical Impedance Mammography (EIM) system. The EIM system is designed to detect human breast cancer and distinguish between different cancerous tissues in vivo. Before the volunteer trial, vegetable experiments were performed and positive results were obtained in extracting Cole-Cole parameters [1] using a multiple-frequency imaging method. In this trial, two volunteers who had been diagnosed with malignant tissue in one breast participated. Abnormalities can be seen in the reconstructed images, indicating differences between the left and right breast. The position of the abnormal object was found to be the same as in the X-ray mammogram diagnosis as well as in the clinical pathological examination report. The system will acquire more volunteer data before going into a full clinical trial.
Wei Wang, Gerald Sze, Zhao Song
Computational Study of Parameters of Needle Electrodes for Electrochemotherapy
Electroporation is a phenomenon that increases cell permeability due to the application of electric fields. Electrochemotherapy uses electroporation to facilitate drug insertion into tumor cells. Guaranteeing that all tumor tissue will be electroporated is vital to treatment success. Needle electrodes can present a complex electric field distribution. This paper aims to study electric field changes due to variations in electrode and application-protocol parameters. In addition, we suggest a superposition approach to improve electroporation treatment.
Jéssica R. da Silva, Raul Guedert, Guilherme B. Pintarelli, Daniela O. H. Suzuki
Bone Fracture Detection by Electrical Bioimpedance: Measurements in Ex-Vivo Mammalian Femur
To simulate a limb, two phantoms of bovine femurs (one with an intact bone and the other with a bone sawn in two) were constructed, and non-invasive electrical impedance spectroscopy measurements were taken on them in order to identify differences in their respective Cole-Cole diagrams. Impedance spectroscopy was performed by a frequency sweep between 1 Hz and 65 kHz at a fixed current of 1 mA. The results obtained show wide differences in the Cole-Cole diagrams of the two phantoms (intact and fractured bone), especially concerning the real component of the impedance, which was always lower in the fractured femur than in the intact one around the bone section corresponding to the lesion. These superficial (non-invasive) measurements correspond to baseline electrical impedance spectroscopy measurements, and they could, in turn, correspond to what occurs in mammals immediately after a fracture, i.e. a dramatic increase in electrical conductivity due to the diffusion into the fracture site of more conductive materials such as blood and extravascular fluids.
Antonio H. Dell’Osa, Alberto Concu, Fernando Dobarro, J. Carmelo Felice
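The Cole-Cole diagrams discussed in the abstracts above come from fitting impedance spectra to the Cole-Cole model. As a quick illustration (the parameter values below are invented for the example, not taken from either paper), the model can be evaluated in Python:

```python
import cmath

def cole_cole(freq_hz, r0, r_inf, tau, alpha):
    """Cole-Cole impedance Z(w) = R_inf + (R0 - R_inf) / (1 + (j*w*tau)**alpha).

    r0    -- low-frequency (DC) resistance
    r_inf -- high-frequency resistance
    tau   -- characteristic time constant
    alpha -- dispersion broadening exponent, 0 < alpha <= 1
    """
    w = 2 * cmath.pi * freq_hz
    return r_inf + (r0 - r_inf) / (1 + (1j * w * tau) ** alpha)

# Sweeping the frequency traces the depressed semicircle of the Cole-Cole
# diagram in the complex plane (real part vs. negative imaginary part).
locus = [cole_cole(f, r0=100.0, r_inf=20.0, tau=1e-3, alpha=0.8)
         for f in (1, 10, 100, 1000, 10000, 65000)]
```

Plotting the real part against the negative imaginary part of `locus` gives the familiar depressed-semicircle Cole-Cole arc; a fracture-induced rise in conductivity would show up as a leftward shift of the arc on the real axis.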
Bioimpedance Technology for Assessing Blood Filling Redistribution in Human Body Regions During Rotation on Short Radius Centrifuge
Space microgravity leads to changes in the functioning of most human organs and systems, including the cardiovascular system. Installation of a complex with a gravitational training effect (for example, a short-radius centrifuge (SRC)) on board a spacecraft may be a solution to this problem. Aim of the work: to assess the possibilities of the polysegment bioimpedance method for monitoring the redistribution of blood filling in body regions during rotation on an SRC. Materials and methods: Nine healthy male volunteers aged 25–40 years participated in three SRC rotation modes (twice 60 and once 45 min). Each mode on the SRC consisted of three phases with uniform acceleration up to values of 0.2 g, 1.05 g and 2 g/2.4 g/2.9 g, and of three phases with uniform deceleration. The relative change in resistance at a probing current frequency of 5 kHz was assessed using the bioimpedance analyzer in the polysegment mode. Results and discussion: 21 complete records were registered, and 4 records were incomplete, i.e. the study was terminated for medical reasons. The electrical resistance of the head and thorax regions during SRC rotation increased by up to +16% and +23%, respectively, indicating a decrease in blood filling, while the electrical resistance of the leg regions decreased by −17%. The blood outflow from the head and the flow to the legs did not depend on the SRC rotation mode during the first 30 min, and was expressed in average changes of +10% in the head and −15% in the legs. Conclusion: The method is suitable for recording fluid redistribution during rotation on an SRC and can later serve as an instrumental basis for diagnosing and predicting syncopal states.
Svetlana P. Shchelykalina, Milena I. Koloteva, Yulia V. Takhtobina, Yuri I. Smirnov, Alexander V. Smirnov, Galina Yu. Vassilieva, Dmitry V. Nikolaev
Differences in the Electrical Impedance Spectroscopy Variables Between Right and Left Forearms in Healthy People: A Non Invasive Method to Easy Monitoring Structural Changes in Human Limbs?
The resistive component of bioimpedance was non-invasively assessed in both the right and left upper arms of 11 healthy female and 9 male subjects (28.4 ± 1.4 years; 63.8 ± 11.8 kg; 167.4 ± 7.5 cm), all of whom were right-handed. A homemade electrical impedance spectroscopy device implementing the AD5933 electronic board was utilized, and the bipolar modality of bioimpedance assessment was chosen, using two disposable ECG surface electrodes placed at each end of the biceps brachii muscle while subjects were sitting comfortably. Upper arm resistance was acquired at sweeping frequency steps of 15, 30, 45, 60 and 75 kHz. Results showed a significantly lower mean value of resistance in the right versus the left upper arm: −27.4 Ω, P < 0.05, or about −4%, at the frequency of 15 kHz. It was concluded that errors of data interpretation may occur when electrical impedance spectroscopy is utilized to monitor the water volume trend in an arm with lymphedema by comparison with the other arm. These results underline a predominantly lower value of resistance in the dominant upper arm compared with the auxiliary one, even in healthy subjects. Therefore, care must be taken when electrical impedance spectroscopy is adopted in these clinical assessments.
A. H. Dell’Osa, A. Concu, M. Gel, A. Fois, Q. Mela, A. Capone, G. Marongiu, A. Loviselli, F. Velluzzi
Backmatter
Weitere Informationen | 2020-02-21 12:29:14 | {"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 1, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.39892610907554626, "perplexity": 3463.470386831837}, "config": {"markdown_headings": false, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2020-10/segments/1581875145529.37/warc/CC-MAIN-20200221111140-20200221141140-00190.warc.gz"} |
https://mattermodeling.stackexchange.com/questions/5087/why-does-gaussian-ignore-the-opt-maxcycles-keyword-for-optimizations | Why does Gaussian ignore the opt=maxcycles keyword for optimizations?
I have been using both Gaussian09 and Gaussian16 recently to optimize some metal complexes. The Gaussian manual (for both versions) indicates that the maximum number of steps in a geometry optimization can be set by the keyword maxcycles in opt. However, for every optimization I have done, Gaussian seems to consistently ignore the value set by maxcycles and sets its own max. number of steps internally.
Most of the calculations I did have this route line:
# opt=maxcycles=200 freq b3lyp/sdd geom=connectivity
What I have noticed is that when I am minimizing the geometry, Gaussian sets the maximum number of steps to 100 (with maxcycles=200), or to 50 if the maxcycles keyword is omitted. For transition state optimizations, the max. number of steps is set to 145.
I have seen some discussions regarding this in various forums and mailing lists, but I have found no solution. I have also seen some posts claiming it is a bug, but it keeps happening in Gaussian16 too.
So, my question is: why does Gaussian ignore the maxcycles keyword, and how can I fix it?
This requires an additional IOP to do. It seems that Gaussian sets a separate lower and upper bound on the number of optimization steps that will be performed based on the number of coordinates and opt(maxcycle=N) can only set the max value within this range, but not any higher. IOP(1/152=N) changes this internal max value, allowing maxcycle to be raised as well. So you should be able to use # opt(maxcycles=200) iop(1/152=200) freq b3lyp/sdd geom=connectivity to increase the allowed number of steps. | 2021-09-23 21:19:11 | {"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 1, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.3125261962413788, "perplexity": 1077.5051174763382}, "config": {"markdown_headings": false, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2021-39/segments/1631780057447.52/warc/CC-MAIN-20210923195546-20210923225546-00386.warc.gz"} |
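For context, a complete input file using the suggested fix might look roughly like the sketch below. The checkpoint name, title line, charge/multiplicity and molecule lines are placeholders, not taken from the question:

```
%chk=metal_complex.chk
# opt(maxcycles=200) iop(1/152=200) freq b3lyp/sdd geom=connectivity

Optimization with the internal step limit raised via IOp(1/152)

0 1
<Cartesian coordinates of the complex>

<connectivity section required by geom=connectivity>

```

As with any IOp, it is worth confirming in the header of the output file that the option was actually picked up, since IOp values are not checked the way ordinary keywords are.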
https://socratic.org/statistics/inference-with-the-z-and-t-distributions/two-sample-t-test | # Two-sample t test
## Key Questions
• I have found a nice example for you to check out on the Penn State web page.
With the final notice saying:
"Comparing two proportions – For proportions, the consideration of using "pooled" or "unpooled" is based on the hypothesis: if testing "no difference" between the two proportions then we will pool the variance; however, if testing for a specific difference (e.g. the difference between two proportions is 0.1, 0.02, etc. --- i.e. the value in H0 is a number other than 0) then unpooled will be used."
as a quick sum up.
Here is a link to a detailed description, if you would like some more detail as to why this is.
https://onlinecourses.science.psu.edu/stat200/node/60
• Conditions for conducting two sample t-test for means:
1.) Parent populations from which samples are drawn are normally distributed.
2.) The two samples are random and independent of each other.
3.) Population variances are equal and unknown.
For validity of the t-test, we should check the equality of variances with the help of the F-test for equality of variances. If that hypothesis is rejected, then we cannot apply the t-test; in such cases we apply Behrens' d-test.
But for practical problems the assumptions 1.) and 3.) are taken to be true. | 2020-02-17 13:00:39 | {"extraction_info": {"found_math": false, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.8367670178413391, "perplexity": 914.517712589604}, "config": {"markdown_headings": true, "markdown_code": false, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2020-10/segments/1581875142323.84/warc/CC-MAIN-20200217115308-20200217145308-00423.warc.gz"} |
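To make conditions 1–3 concrete, here is a small pure-Python sketch of the pooled two-sample t statistic, which is valid exactly when the population variances can be treated as equal (the sample data is invented for illustration):

```python
import math

def pooled_t(sample1, sample2):
    """Two-sample t statistic with pooled variance (assumes conditions 1-3)."""
    n1, n2 = len(sample1), len(sample2)
    m1, m2 = sum(sample1) / n1, sum(sample2) / n2
    # Unbiased sample variances
    v1 = sum((x - m1) ** 2 for x in sample1) / (n1 - 1)
    v2 = sum((x - m2) ** 2 for x in sample2) / (n2 - 1)
    # F ratio for a preliminary check of equal variances (condition 3)
    f_ratio = v1 / v2
    # Pooled variance combines the spread of both samples
    sp2 = ((n1 - 1) * v1 + (n2 - 1) * v2) / (n1 + n2 - 2)
    t = (m1 - m2) / math.sqrt(sp2 * (1 / n1 + 1 / n2))
    return t, n1 + n2 - 2, f_ratio

t_stat, df, f_ratio = pooled_t([1, 2, 3], [2, 3, 4])
```

If `f_ratio` were far from 1 (judged against the F distribution), the equal-variance assumption would be rejected and the pooled statistic above would not apply.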
https://www.jse.ac.cn/EN/Y1975/V13/I2/29 | J Syst Evol ›› 1975, Vol. 13 ›› Issue (2): 29-48.
• Research Articles •
### Triterpenoids from Panax Linn. and Their Relationship with Taxonomy and Geographical Distribution
Yunnan Institute of Botany
• Published:1975-04-18
Abstract: This article deals with the distribution of triterpenoids in Chinese species of Panax. The results of investigation show that the tetracyclic triterpenoids of dammarane type are the main constituents in Ginseng (P. ginseng C. A. Meyer) and Sanchi (P. notoginseng (Burk.) F. H. Chen), and that pentacyclic triterpenoids of oleanane type are the main constituents in P. pseudo-ginseng Wall., P. zingiberensis C. Y. Wu et K. M. Feng, P. japonicus C. A. Meyer and its varieties (var. angustifolius, var. major, var. bipinnatifidus) and P. stipuleanatus H. T. Tsai et K. M. Feng. The fact that the Chinese people found out herbs with such high therapeutic effects as Ginseng and Sanchi through long medical practice shows that this was not achieved without scientific ground. Tetracyclic triterpenoids of dammarane type are among the active constituents of Ginseng and Sanchi, while, on the contrary, the pentacyclic triterpenoids of oleanane type have as yet no known active physiologic properties. Through the comparative study of triterpenoid constituents together with the taxonomy and the geographic distribution of the various species of Chinese Panax, it appears that Panax as a whole may be divided into two main groups: the first group, having rather short erect rhizomes, fleshy carrot-like roots and larger seeds, corresponds to those species whose main constituents are the tetracyclic triterpenoids of dammarane type and whose areas of dispersal are often limited and disjunct. The second group, which possesses long creeping rhizomes, usually with no well-developed fleshy roots, and bearing smaller seeds, corresponds to those species invariably with continuous distribution, whose main constituents are pentacyclic triterpenoids of oleanane type. Therefore, it is suggested that, compared with the latter group, the former is perhaps more primitive, and Sanchi (P. notoginseng (Burk.) F. H. Chen) may be the oldest member among living species of Panax.
On the other hand, it is found that in external morphology P. pseudo-ginseng Wall. belongs to the former group, but its chemical constituents are nevertheless similar to those of the latter. On this account, it seems evident that P. pseudo-ginseng Wall. constitutes a transitional type between these two groups, and thereby reveals some historical relationship between them.
| 2021-03-02 13:58:32 | {"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 1, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.23316749930381775, "perplexity": 13965.695324092972}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.3, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2021-10/segments/1614178364008.55/warc/CC-MAIN-20210302125936-20210302155936-00049.warc.gz"}
https://codegolf.stackexchange.com/questions/57075/splitting-up-ascii | # Splitting up ASCII
Given the 95 printable characters in ASCII plus newline, break it apart into two equal, 48 character groups (hereafter called group A and group B). Create a one-to-one mapping of your choice (you have total discretion) between the two groups. In other words, A might map to a, and vice versa, but A might also map to > and vice versa, if that's what you need for your program.
Once you've broken up ASCII into two groups, write two programs and/or functions, using only the characters in each group, respectively. In other words, write one program / function that only uses the characters in group A, and another program / function that only uses the characters in group B.
These programs must be able to receive one character as input. The program written with the characters in Group A should output / return the same character if the input was a group A character, and the mapped group A character if it received a group B character; the Group A program should always output a group A character. Similarly, the Group B program should output the same character if it's a group B character, and the mapped group B character if the input is a group A character.
That might not be so clear, so here's an example. If you assume that all capital letters are in group A, and all lowercase letters are in group B, and you've chosen that your one-to-one mapping for these letters is from one to the other, then here are some sample inputs/outputs:
### Program A:
Input Output
A A
D D
a A
q Q
### Program B:
Input Output
A a
D d
a a
q q
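Ignoring the character-set restriction itself, the required input/output behaviour of the two example programs can be sketched in Python (this only illustrates the spec; it is not a valid entry, since it freely uses characters from both groups):

```python
def program_a(c):
    # Sample mapping from the challenge: group A = uppercase letters,
    # group B = lowercase letters, paired letter-to-letter (A <-> a, etc.).
    # Program A always emits the group A partner of its input.
    return c.upper()

def program_b(c):
    # Program B always emits the group B partner of its input.
    return c.lower()
```

These reproduce the two tables above, e.g. `program_a("q")` gives `Q` and `program_b("D")` gives `d`.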
Other rules:
• The two programs do not need to be in the same language.
• They don't need to be both programs or both functions; one could be a program, the other a function, that is fine.
• They don't need to work the same way, be of similar length, anything like that; they simply must meet the other rules above.
• Yes, only one of your programs may use newlines, and only one can use spaces (this could be the same, or a different program).
• You do not need to use all 48 characters in each program.
Standard loopholes are banned, as normal. All programs must be self contained, no files containing the mapping you choose.
Scoring criteria: code golf. Specifically, the sum of the bytes of the text of the two programs.
# Language - # bytes + Language - # bytes = # bytes
An unambiguous description of your mapping. If it's complicated, use a chart like this:
ABCDEFGHIJKLMNOPQRSTUVWXYZ (etc.)
zyxwvutsrpqonmlkjihgfedcba (etc.)
Or, you can just explain it (first 48 maps to last 48 in sequence), followed by your answer as normal.
• I'm going to try using the same language for both. :) – mbomb007 Sep 5 '15 at 0:32
• I honestly think you should change the rules, restricting it to "both programs have to be the same language." Otherwise it's probably WAY too easy/broad. – mbomb007 Sep 5 '15 at 1:29
• I actually wonder if this is possible in Self-modifying Brainfuck. You just have to have one program using + and >, and the other using - and <. Then you have to try to generate the missing operators, such as a , or . in the program that cannot use them. – mbomb007 Sep 5 '15 at 1:37
• @Ruslan Try using SQL. It's not case sensitive and uses keywords (begin and end) for code blocks. If you use SQL Server 2014, you can use DBCC Bulk Insert for one program, and a procedure for the other one. In the first one, you can avoid using parentheses. Then use a select case when statement for both programs. Also, I believe it's possible in Java by using the \u trick for a program replacing every character with unicode values and using a function for the other that doesn't use the letter u, a backslash, or numbers. – bmarks Sep 5 '15 at 18:07
• Hardest. Challenge. Ever. – Blackhole Sep 5 '15 at 18:19
# CJam - 11 bytes + CJam - 25 bytes = 36 bytes
Characters are selected in alternating groups of 16:
!"#$%&'()*+,-./@ABCDEFGHIJKLMNOabcdefghijklmno 0123456789:;<=>?PQRSTUVWXYZ[\]^_pqrstuvwxyz{|}~\n It's cool that a few of the mappings can be obtained with the shift key :) ## Program A: lL,H-f&'o+c Try it online ## Program B: q_S<\_0=16|_127<\S0=42^?? Try it online Explanation: Program A: l read a line from the input, this is a 1-character string or the empty string if the input was a newline L, get the length of an empty string/array (0) H- subtract 17, obtaining -17 (~16) f& bitwise-"and" each character (based on the ASCII code) with -17 'o+ append the 'o' character c convert to (first) character the result is the "and"-ed character, or 'o' for newline Program B: q_ read the whole input and duplicate it S<\ compare with " " and move the result before the input _0= duplicate the input again, and get the first (only) character 16| bitwise-"or" with 16 (based on the ASCII code) _127< duplicate and compare (its ASCII code) with 127 \ move the result before the "or"-ed character S0= get the space character (first character of the space string) 42^ xor with 42, obtaining a newline character stack: (input<" ") (input) ("or"-ed char<127) ("or"-ed char) (newline) ? if the "or"-ed character is less than 127, use the "or"-ed character else use the newline character ? if the input was smaller than space (i.e. it was a newline), use the input, else use the character from the previous step • Nice! Glad to see that "even/odd" isn't the only answer. – durron597 Sep 8 '15 at 18:15 • Still a 1 bit toggle... Impressive sizes! 2nd program w/input of 'o' doesn't seem to output the \n... bug in program or online cjam? – Brian Tuck Sep 9 '15 at 4:32 • @BrianTuck it does output a newline (not a literal \n), it's just not easy to see without inspecting the html. 
You can append an i at the end of the program to see the ASCII code instead (or ci to also deal with a newline input, since it outputs a newline string rather than a character in that case) – aditsu quit because SE is EVIL Sep 9 '15 at 7:07 • Oh, or you/I could change _0= to 0=_ so that it always outputs a character – aditsu quit because SE is EVIL Sep 9 '15 at 7:20 # CJam - 464426 11 bytes + GolfScript - 14212511593684740 36 bytes = 47 bytes Thanks to Peter Taylor for golfing 6 bytes off the GolfScript program (and paving the way for many more.) Thanks to Dennis for golfing 15 bytes off the CJam program and 4 bytes off the GolfScript program. Group A: all characters with even character code. Group B: all characters with odd character code, plus newline. I'm using the obvious mapping between the two, i.e. pair those characters which only differ in the least significant bit, as well as ~ and \n. Here is the complete map (the columns): "$&(*,.02468:<>@BDFHJLNPRTVXZ\^bdfhjlnprtvxz|~
!#%')+-/13579;=?ACEGIKMOQSUWY[]_acegikmoqsuwy{}\n
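As a plain illustration of this pairing (not part of the golfed CJam/GolfScript entries), the two mappings can be written out in Python; the special-casing of "~" and newline matches the map above:

```python
def to_group_a(c):
    # Group A = printable ASCII with even codes; "~" is paired with newline.
    if c == "\n":
        return "~"
    return chr(ord(c) & ~1)   # clear the least significant bit

def to_group_b(c):
    # Group B = printable ASCII with odd codes, plus newline.
    if c in ("~", "\n"):
        return "\n"
    return chr(ord(c) | 1)    # set the least significant bit
```

Note that naively OR-ing would send "~" (126) to DEL (127), which is exactly the edge case the GolfScript program below has to handle manually.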
Program A (CJam, test it here):
lX~f&"~"|X<
Program B (GolfScript, test it here):
{1}'{-'{)}%'115)%11-[9)ie'9/{))}%++%
## Explanation
Program A
(Outdated, will update tomorrow.)
This program should turn odd character codes into even ones, i.e. set the least significant bit to 0. The obvious way to do this is bitwise AND with 126 (or 254 etc), but it's shorter to set it to 1 (via bitwise OR with 1) instead and then decrement the result. Finally, we need to fix newlines manually:
"r"( e# Push the string "r" and pull out the character.
(~ e# Decrement to q and eval to read input.
( e# Pull out the character from the input string.
2(|( e# (input OR (2-1))-1 == input AND 126
0$e# Copy the result. N& e# Set intersection with a string containing a newline. "~" e# Push "~". "@@"( e# Push "@@" and pull out one @. (| e# Decrement to ?, set union with the other string to give "@?". ~ e# Eval to select either the computed character or "~" if it was a newline. Program B (Outdated, will update tomorrow.) This program can simply set the least significant bit to 1 via bitwise OR with 1 now. But it has to check for both \v (character code 0x0B) and <DEL> (character code 0xFF) manually and set them to ~ instead. In GolfScript I didn't have access to eval, but instead you can add a string to a block (which then becomes part of the code in that block), which I could map onto the input with %: {1} # Push this block without executing it. '{--' # Push this string. {)}% # Increment each character to get '|..'. ')1)7?=[11=+9)?ie' # Push another string... 7/ # Split it into chunks of 7: [')1)7?=[' '11=+9)?' 'ie'] {))}% # For each chunk, split off the last character and increment it. + # Add the array to the string, flattening the array: '|..)1)7?=\11=+9)@if' + # Add it to the block: {1|..)1)7?=\11=+9)@if} % # Map the block onto the input, i.e. apply it to the single character. And as for the generated code in the block: 1|.. # Bitwise OR with 1, make two copies. )1)7?= # Check if the result is one less than 2^7 == 128 (i.e. if it's <DEL>). \11= # Check with the other copy if it's equal to 11 (i.e. if it's \v). + # Add them to get something truthy either way. 9) # Push a 10 (i.e. \n). @ # Pull up the original value. if # Select the correct result. ## Java - 1088 bytes + Java - 1144 bytes = 2232 bytes Thanks to @durron597 for helping to golf 1090 bytes from the first program. Proof that it is possible to do in one language (and a non-esolang at that). Use the unicode trick to convert the first one to all unicode characters. The second one uses reflection to get access to System.out in order to print to std. out. 
It couldn't use the u because that was used in the first program. I know this can be golfed more, but I wanted to post a valid solution first. The groups are fairly arbitrarily mapped, but basically, the first one required only u,\, and the hexadecimal digits (in any case). The groups: !#7$&89'0123456>fB@UXZ\^AKCDEGH_JLNOkQRxzVWYu~\n
"%()*+,-./:;<=?FIMPST[]abcdeghijlmnopqrstvwy{|}
First program:
\u0076\u006F\u0069\u0064
k\u0028\u0069\u006E\u0074
x\u0029\u007B\u0069\u006E\u0074\u005B\u005Du\u003D\u007B33\u002C33\u002C35\u002C35\u002C36\u002C55\u002C38\u002C39\u002C36\u002C38\u002C56\u002C57\u002C39\u002C48\u002C49\u002C50\u002C48\u002C49\u002C50\u002C51\u002C52\u002C53\u002C54\u002C55\u002C56\u002C57\u002C51\u002C52\u002C53\u002C54\u002C62\u002C62\u002C64\u002C65\u002C66\u002C67\u002C68\u002C69\u002C102\u002C71\u002C72\u002C66\u002C74\u002C75\u002C76\u002C64\u002C78\u002C79\u002C85\u002C81\u002C82\u002C88\u002C90\u002C85\u002C86\u002C87\u002C88\u002C89\u002C90\u002C92\u002C92\u002C94\u002C94\u002C95\u002C96\u002C65\u002C75\u002C67\u002C68\u002C69\u002C102\u002C71\u002C72\u002C95\u002C74\u002C107\u002C76\u002C96\u002C78\u002C79\u002C107\u002C81\u002C82\u002C120\u002C122\u002C117\u002C86\u002C87\u002C120\u002C89\u002C122\u002C117\u002C126\u002C10\u002C126\u007D\u003B\u0053\u0079\u0073\u0074\u0065\u006D\u002E\u006Fu\u0074\u002E\u0070\u0072\u0069\u006E\u0074\u0028x>10\u003F\u0028\u0063\u0068\u0061\u0072\u0029u\u005Bx\u002D32\u005D\u003A'\u005C\u006E'\u0029\u003B\u007D
Equivalent to
void
k(int
x){int[]u={33,33,35,35,36,55,38,39,36,38,56,57,39,48,49,50,48,49,50,51,52,53,54,55,56,57,51,52,53,54,62,62,64,65,66,67,68,69,102,71,72,66,74,75,76,64,78,79,85,81,82,88,90,85,86,87,88,89,90,92,92,94,94,95,96,65,75,67,68,69,102,71,72,95,74,107,76,96,78,79,107,81,82,120,122,117,86,87,120,89,122,117,126,10,126};System.out.print(x>10?(char)u[x-32]:'\n');}
Second program:
void n(int r)throws Throwable{int p=(int)Math.PI;int q=p/p;int t=p*p+q;int w=q+q;int[]g={t*p+w,t*p+w,t*p+q+p,t*p+q+p,t*(q+p),t*p+t-p,t*(q+p)+q,t*(q+p)+q+p,t*(q+p),t*(q+p)+q,t*(q+p)+w,t*(q+p)+p,t*(q+p)+q+p,t*(q+p)+p+w,t*(q+p)+p+p,t*(q+p)+t-p,t*(q+p)+p+w,t*(q+p)+p+p,t*(q+p)+t-p,t*(p+w)+t-w,t*(p+w)+t-q,t*(p+p),t*(p+p)+q,t*p+t-p,t*(q+p)+w,t*(q+p)+p,t*(p+w)+t-w,t*(p+w)+t-q,t*(p+p),t*(p+p)+q,t*(p+p)+p,t*(p+p)+p,t*(t-p)+t-p,t*(t-q)+t-p,t*(t-p)+p,t*(t-q)+t-q,t*t,t*t+q,t*(t-p),t*t+p,t*t+q+p,t*(t-p)+p,t*t+p+p,t*(t-q)+t-w,t*t+t-w,t*(t-p)+t-p,t*(t+q),t*(t+q)+q,t*(t-w),t*(t+q)+p,t*(t+q)+q+p,t*(t-w)+p,t*(t-w)+q+p,t*(t-w),t*(t+q)+t-w,t*(t+q)+t-q,t*(t-w)+p,t*(t+w)+q,t*(t-w)+q+p,t*(t-q)+q,t*(t-q)+q,t*(t-q)+p,t*(t-q)+p,t*t+p+w,t*t+t-q,t*(t-q)+t-p,t*(t-q)+t-w,t*(t-q)+t-q,t*t,t*t+q,t*(t-p),t*t+p,t*t+q+p,t*t+p+w,t*t+p+p,t*(t+q)+w,t*t+t-w,t*t+t-q,t*(t+q),t*(t+q)+q,t*(t+q)+w,t*(t+q)+p,t*(t+q)+q+p,t*(t+q)+p+w,t*(t+q)+p+p,t*(t+w)+p,t*(t+q)+t-w,t*(t+q)+t-q,t*(t+q)+p+w,t*(t+w)+q,t*(t+q)+p+p,t*(t+w)+p,t*(t+w)+q+p,t*(t+w)+p+w,t*(t+w)+q+p};java.io.PrintStream o=(java.io.PrintStream)System.class.getFields()[p/p].get(p);o.print((r<=t)?"}":(char)g[r-t*p-w]);}
Try them out here: https://ideone.com/Q3gqmQ
• There are no characters you can pull out of the first program that don't need to be unicode escaped? Can't you pull out some of the numbers? What if you did void x(int z), those are characters in the first charset too – durron597 Sep 6 '15 at 2:56
• I'm sure it's possible. I could rename some variables and replace all spaces with new lines or tabs. I'll do that when I get home. I just wanted to prove a single language solution first. – bmarks Sep 6 '15 at 3:00
# FIXED! Pyth - 23 bytes + Pyth - 30 bytes = 53 bytes
oops Fixing error --- please be patient
same ASCII split as Martin's:
1: "\$&(*,.02468:<>@BDFHJLNPRTVXZ\^bdfhjlnprtvxz|~
2:!#%')+-/13579;=?ACEGIKMOQSUWY[]_acegikmoqsuwy{}\n
Prog#1: Test Online
.xhft<zT.Dr\¡b:Z140 2\~
Prog#2: Test Online
C?%KCwy1K?qy5Ky5?qy+1y31Ky5+1K | 2021-05-12 08:32:22 | {"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 0, "mathjax_asciimath": 1, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.2809039354324341, "perplexity": 1786.9284099592483}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2021-21/segments/1620243991685.16/warc/CC-MAIN-20210512070028-20210512100028-00105.warc.gz"} |
https://proofwiki.org/wiki/Lower_Closure_of_Element_is_Closed_under_Directed_Suprema | # Lower Closure of Element is Closed under Directed Suprema
## Theorem
Let $L = \struct {S, \preceq}$ be an up-complete ordered set.
Let $x \in S$.
Then $x^\preceq$ is closed under directed suprema,
where $x^\preceq$ denotes the lower closure of $x$.
## Proof
Let $D$ be a directed subset of $S$ such that
$D \subseteq x^\preceq$
Since any two elements of $x^\preceq$ are bounded above by $x \in x^\preceq$:
$x^\preceq$ is directed.
By definition of up-complete:
$D$ and $x^\preceq$ admit suprema.
Since $D \subseteq x^\preceq$ and both sets admit suprema:
$\sup D \preceq \map \sup {x^\preceq}$
Since $x$ is the greatest element of $x^\preceq$, we have $\map \sup {x^\preceq} = x$, so:
$\sup D \preceq x$
Thus by definition of lower closure of element:
$\sup D \in x^\preceq$
$\blacksquare$ | 2019-12-12 08:01:39 | {"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.9913151860237122, "perplexity": 866.8230643508556}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2019-51/segments/1575540542644.69/warc/CC-MAIN-20191212074623-20191212102623-00258.warc.gz"} |
http://openstudy.com/updates/55b83071e4b0315056723236 | ## anonymous one year ago NEED HELP ASAP The perimeter of an equilateral triangle is 12 feet. Find the area of the triangle. 8√3 8 4√3
1. anonymous
If it is equilateral, it means it has the same sides. How many sides are there in a triangle?
2. anonymous
|dw:1438134600087:dw| How many sides are there in a triangle?
3. anonymous
Are you there or do you just want the answer?
4. anonymous
3 sides in a triangle
5. anonymous
Good! So the whole triangle is 12. Now, divide 12 by 3. What is $12 \div 3 = ?$
6. anonymous
4
7. anonymous
Okay so all sides are 4.|dw:1438134827253:dw|
8. anonymous
What is the formula for the area of a triangle?
9. anonymous
yes. but how do i find the area?
10. anonymous
bh/2
11. anonymous
It is bh/2.
12. anonymous
|dw:1438134925446:dw|
13. anonymous
Notice that we don't know the height. So we will use the Pythagorean Theorem. Are you familiar with that?
14. anonymous
okay. so the of the triangle would be $2\sqrt{3}$
15. anonymous
height
16. anonymous
How did you get that?
17. anonymous
pythagorean theorem
18. anonymous
Can you show me how?
19. anonymous
Nevermind.
20. anonymous
We will take longer. x_x
21. anonymous
|dw:1438135119198:dw|
22. anonymous
i know its a bit sloppy
23. anonymous
So the equation will be.|dw:1438135187194:dw|
24. anonymous
so the answer is $4\sqrt{3}$
25. anonymous
right?
26. anonymous
Yes! Good job!
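The whole computation in this thread can be condensed into a few lines of Python, just as a check of the arithmetic above (side from perimeter, height from the Pythagorean theorem, then area):

```python
import math

def equilateral_area(perimeter):
    s = perimeter / 3                      # three equal sides
    h = math.sqrt(s ** 2 - (s / 2) ** 2)   # Pythagorean theorem for the height
    return s * h / 2                       # area = base * height / 2

# equilateral_area(12) gives 4*sqrt(3), matching the answer above.
```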
27. anonymous
thank you so much
28. anonymous
YOU did it yourself. :D
29. anonymous
so how would i do it with a 30 60 90 triangle? @mathway
30. anonymous
|dw:1438135520158:dw|
31. anonymous
Are you familiar with the 30-60-90 special angle?
32. anonymous
yes
33. anonymous
Wait are you finding the area for the 30-60-90 triangle?
34. anonymous
yes
35. anonymous
Use the Pythagorean Theorem.
36. anonymous
and the its still bh/2
37. anonymous
The area of the triangle we a while ago was a 30-60-90 triangle. :D
38. anonymous
okay thanks
39. anonymous
Can I see the problem?
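A quick numeric check of the thread's answer (added here; not part of the original discussion) — the altitude from the Pythagorean theorem, then area as bh/2:

```python
import math

perimeter = 12
side = perimeter / 3                         # equilateral: three equal sides -> 4
half_base = side / 2                         # the altitude bisects the base
height = math.sqrt(side**2 - half_base**2)   # Pythagorean theorem -> 2*sqrt(3)
area = side * height / 2                     # area = b*h/2 -> 4*sqrt(3)

print(height, area)
```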
https://www.spie.org/Publications/Proceedings/Paper/10.1117/12.2085074?origin_id=x4318&SSO=1
### Proceedings Paper
High order overlay modeling and APC simulation with Zernike-Legendre polynomials
Author(s): JawWuk Ju; MinGyu Kim; JuHan Lee; Stuart Sherwin; George Hoo; DongSub Choi; Dohwa Lee; Sanghuck Jeon; Kangsan Lee; David Tien; Bill Pierson; John C. Robinson; Ady Levy; Mark D. Smith
Paper Abstract
Feedback control of overlay errors to the scanner is a well-established technique in semiconductor manufacturing [1]. Typically, overlay errors are measured, and then modeled by least-squares fitting to an overlay model. Overlay models are typically Cartesian polynomial functions of position within the wafer (Xw, Yw), and of position within the field (Xf, Yf). The coefficients from the data fit can then be fed back to the scanner to reduce overlay errors in future wafer exposures, usually via a historically weighted moving average. In this study, rather than using the standard Cartesian formulation, we examine overlay models using Zernike polynomials to represent the wafer-level terms, and Legendre polynomials to represent the field-level terms. Zernike and Legendre polynomials can be selected to have the same fitting capability as standard polynomials (e.g., second order in X and Y, or third order in X and Y). However, Zernike polynomials have the additional property of being orthogonal over the unit disk, which makes them appropriate for the wafer-level model, and Legendre polynomials are orthogonal over the unit square, which makes them appropriate for the field-level model. We show several benefits of Zernike/Legendre-based models in this investigation in an Advanced Process Control (APC) simulation using highly-sampled fab data. First, the orthogonality property leads to less interaction between the terms, which makes the lot-to-lot variation in the fitted coefficients smaller than when standard polynomials are used. Second, the fitting process itself is less coupled: fitting to a lower-order model, and then fitting the residuals to a higher order model, gives very similar results as fitting all of the terms at once. This property makes fitting techniques such as dual pass or cascading [2] unnecessary, and greatly simplifies the options available for the model recipe.
The Zernike/Legendre basis gives overlay performance (mean plus 3 sigma of the residuals) that is the same as standard Cartesian polynomials, but with stability similar to the dual-pass recipe. Finally, we show that these properties are intimately tied to the sample plan on the wafer, and that the model type and sampling must be considered at the same time to demonstrate the benefits of an orthogonal set of functions.
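The "less coupled fitting" claim can be sketched numerically (an illustration added here, in 1-D with toy data rather than fab data): with a Legendre basis on a normalized coordinate, the low-order coefficients barely change when higher-order terms are added to the fit, whereas plain monomial coefficients shift noticeably.

```python
import numpy as np

# Field coordinate scaled to [-1, 1], where Legendre polynomials are orthogonal.
x = np.linspace(-1.0, 1.0, 201)
overlay = 0.3*x + 0.15*x**2 - 0.05*x**3 + 0.01*np.sin(20*x)   # toy overlay data

# Legendre basis: fit degree 1 only, then fit degree 3 with all terms at once.
c_low  = np.polynomial.legendre.legfit(x, overlay, 1)
c_full = np.polynomial.legendre.legfit(x, overlay, 3)

# Plain power basis, same two fits (np.polyfit returns highest degree first).
p_low  = np.polyfit(x, overlay, 1)
p_full = np.polyfit(x, overlay, 3)

# Orthogonality => the low-order Legendre coefficient barely moves when the
# higher-order terms are added; the corresponding monomial coefficient shifts.
legendre_shift = abs(c_full[1] - c_low[1])
monomial_shift = abs(p_full[2] - p_low[0])
print(legendre_shift, monomial_shift)
```

This is the 1-D analogue of the paper's observation that cascaded (low-order-then-residual) fits and all-at-once fits nearly coincide in an orthogonal basis.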
Paper Details
Date Published: 19 March 2015
PDF: 10 pages
Proc. SPIE 9424, Metrology, Inspection, and Process Control for Microlithography XXIX, 94241Y (19 March 2015); doi: 10.1117/12.2085074
Show Author Affiliations
JawWuk Ju, SK Hynix, Inc. (Korea, Republic of)
MinGyu Kim, SK Hynix, Inc. (Korea, Republic of)
JuHan Lee, SK Hynix, Inc. (Korea, Republic of)
Stuart Sherwin, KLA-Tencor Corp. (United States)
George Hoo, KLA-Tencor Corp. (United States)
DongSub Choi, KLA-Tencor Korea (Korea, Republic of)
Dohwa Lee, KLA-Tencor Korea (Korea, Republic of)
Sanghuck Jeon, KLA-Tencor Korea (Korea, Republic of)
Kangsan Lee, KLA-Tencor Corp. (United States)
David Tien, KLA-Tencor Corp. (United States)
Bill Pierson, KLA-Tencor Corp. (United States)
John C. Robinson, KLA-Tencor Corp. (United States)
Ady Levy, KLA-Tencor Corp. (United States)
Mark D. Smith, KLA-Tencor Corp. (United States)
Published in SPIE Proceedings Vol. 9424:
Metrology, Inspection, and Process Control for Microlithography XXIX
Jason P. Cain; Martha I. Sanchez, Editor(s)
https://byjus.com/question-answer/a-give-three-uses-of-ammonium-chloride-b-why-is-ammonium-hydroxide-used-in-qualitative/ | Question
# (a) Give three uses of ammonium chloride. (b) Why is ammonium hydroxide used in qualitative analysis? Give two equations to justify your answer.
Open in App
Solution
(a) Uses of ammonium chloride:
(i) It is used in Leclanche and dry cells.
(ii) It is used in medicine, dyeing, etc.
(iii) It is used as a laboratory reagent.

(b) Ammonium hydroxide is used in qualitative analysis, as it forms different coloured precipitates in the presence of different metal ions. For example, we can use ammonium hydroxide to distinguish whether iron is present in a divalent or trivalent state in a given compound, as shown below.

$\mathrm{FeSO_4} + 2\,\mathrm{NH_4OH} \to (\mathrm{NH_4})_2\mathrm{SO_4} + \mathrm{Fe(OH)_2}$ (dirty green precipitate with $\mathrm{Fe^{2+}}$)

$\mathrm{Fe_2(SO_4)_3} + 6\,\mathrm{NH_4OH} \to 3\,(\mathrm{NH_4})_2\mathrm{SO_4} + 2\,\mathrm{Fe(OH)_3}$ (reddish brown precipitate with $\mathrm{Fe^{3+}}$)
https://mathematica.stackexchange.com/questions/121944/how-to-redistribute-definitions-to-parallel-kernels | # How to redistribute definitions to parallel kernels
I am trying to redistribute a definition after removing the definition on parallel kernels.
Sample code on version 10.4.0.0:
x=3;
DistributeDefinitions[x] (* {x} *)
ParallelEvaluate[Print[x];]; (* displayed as "3", "3", ... *)
ParallelEvaluate[Remove[x];];
DistributeDefinitions[x] (* {} *)
ParallelEvaluate[Print[x];]; (* displayed as "x", "x", ... *)
The first behavior of the parallel Print[x] is as expected, but the second is not. For unchanged x on the master kernel, is there a way to make DistributeDefinitions[x] work again?
You can run
x = 3;
DistributeDefinitions[x];
ParallelEvaluate[Print[x]];
ParallelEvaluate[Remove[x];];
x++;
DistributeDefinitions[x];
x--;
ParallelEvaluate[Print[x]];
If that works for you.
I found another answer in Shared variable can not be distributed again after UnsetShared.
Using Parallel`Developer`ClearDistributedDefinitions[],
SetOptions[{ParallelEvaluate}, DistributedContexts -> None]; (* for the avoidance of confusion in distributing definitions *)
x=3;
DistributeDefinitions[x]
(* {x} *)
ParallelEvaluate[Print[x];];
ParallelEvaluate[Remove[x];];
Parallel`Developer`ClearDistributedDefinitions[];
DistributeDefinitions[x]
(* {x} *)
ParallelEvaluate[Print[x];];
I was able to observe the redistribution of x to parallel kernels.
https://dsp.stackexchange.com/questions/36513/applying-a-window-function-to-a-speech-signal | Applying a window function to a speech signal
I'm currently developing a speaker recognition system. I have two main problems:
1. What is the most suitable windowing function to apply to a speech signal prior to the STFT?
2. Is it necessary to apply more than one windowing function?
• The "most suitable" window depends on exactly what kind of processing you need or plan to do with the data after windowing (and after the FFT). One window type may have better frequency response in certain ranges, another better phase response, yet another might help with more accurate amplitude estimation. & etc. Often a ( what every body else is using ) generic widow is used as a compromise. – hotpaw2 Dec 28 '16 at 4:50
• @hotpaw2 I am planning to obtain the spectrogram after performing fft. This is to extract features of the voice sample. Will a Hamming window be sufficient? – mgw2016 Dec 28 '16 at 9:29
For speech signals, a Hann or Hamming window with 50 percent overlap is usually used.
What is the most suitable windowing function to apply to a speech signal prior to the STFT?
A Hann or Hamming window with 50 percent overlap is generally used.
Is it necessary to apply more than one windowing function?
Not necessary, I would say.
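One reason the Hann window and 50 percent overlap pair so well (a NumPy illustration added here, not part of the original answer): overlapped Hann windows sum to a constant — the COLA property — so an STFT analysis/resynthesis chain weights every sample equally.

```python
import numpy as np

N = 512                                   # window length in samples
hop = N // 2                              # 50 % overlap
n = np.arange(N)
hann = 0.5 - 0.5 * np.cos(2 * np.pi * n / N)   # periodic Hann window

# Overlap-add the window itself at every hop position:
total = np.zeros(N * 8)
for start in range(0, len(total) - N + 1, hop):
    total[start:start + N] += hann

# Away from the edges the sum is exactly 1.0 -- the constant-overlap-add
# (COLA) property of Hann at 50 % overlap.
middle = total[N:-N]
print(middle.min(), middle.max())
```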
• there is no "Mr. Hanning" nor "Dr. Hanning". however there is a Julius von Hann with a window named after him. – robert bristow-johnson Dec 27 '16 at 20:50
• @robertbristow-johnson Thanks !! , so shall I name the windows with lowercase letters ? – Arpit Jain Dec 28 '16 at 5:00
• Usually, this type of window is called "Hann window" or, however less often, "von-Hann window". – applesoup Dec 28 '16 at 8:13
• @arpitjain Yes, I'm going to use the Mel scale after performing FFT to the voice sample. – mgw2016 Dec 28 '16 at 9:40
I don't really understand what "apply a suitable windowing function" "prior to the STFT" means: most STFT implementations already incorporate apodizing windows. And the "best" window is always relative to some cost function, objective, or subsequent processing.
Note that there are people called Mr Hanning, but they are not (AFAIK) at the origin of the raised-cosine or von Hann window (Julius von Hann, Austrian meteorologist). You can find a mention at page 97 of the book The measurement of power spectra from the point of view of communications engineering (archive.org), 1958, by Blackman and Tukey. Notably, they spell hamming and hanning in small letters.
Hamming or Hann windows are very standard. $50\%$ or $75\%$ overlap are common. In the recent works I know of, especially for blind source separation, people often use at least two window sizes: a shorter one for transients, and a longer one.
This is a first step toward using different windows depending on the scale of observation. Some have used a multiscale context, such as the ERBlet non-stationary Gabor filterbank (ERB: Equivalent Rectangular Bandwidth), a constant-Q-like transform.
The choice of the best window, to me, even for speech, is not clearly settled.
https://paradigms.oregonstate.edu/activity/706/ | ## Activity: Gravitational and Electrostatic Potential
• Which of these is the formula for the electrostatic potential due to this point charge?
• Write the formula for the gravitational potential due to a point mass.
• Sketch a graph of the function $\frac{1}{r}$
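For reference (standard textbook forms, not given on the original page), the two point-source potentials the prompts ask for, with the zero of potential taken at infinity, are:

```latex
\[
V(r) \;=\; \frac{1}{4\pi\epsilon_0}\,\frac{q}{r},
\qquad
\Phi(r) \;=\; -\,\frac{G M}{r}.
\]
```

Both are proportional to the $\frac{1}{r}$ curve the activity asks you to sketch; the gravitational one carries a negative sign because gravity is always attractive.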
Learning Outcomes
https://www.gradesaver.com/textbooks/math/algebra/algebra-2-1st-edition/chapter-4-quadratic-functions-and-factoring-4-10-write-quadratic-functions-and-models-4-10-exercises-problem-solving-page-314/47 | Algebra 2 (1st Edition)
$f(x)=-0.0375(x-20)^2+15$.
If the vertex of a graph is at $(m,n)$, then the general formula for the quadratic function is $f(x)=a(x-m)^2+n$. The vertex of the graph is at $(20,15)$, hence the quadratic function becomes $f(x)=a(x-20)^2+15$. The point $(0,0)$ is on the graph, hence plugging in these values gives $0=400a+15$, so $a=-0.0375$ and $f(x)=-0.0375(x-20)^2+15$.
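A short check of the fitted model (a snippet added here, not from the textbook): the vertex value, the pass through the origin, and the symmetric root.

```python
# f(x) = -0.0375*(x - 20)**2 + 15, vertex at (20, 15), passing through (0, 0)
f = lambda x: -0.0375 * (x - 20)**2 + 15

print(f(20))   # vertex value, 15
print(f(0))    # passes through the origin
print(f(40))   # symmetric root on the other side of the vertex
```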
https://datascience.stackexchange.com/questions/56412/how-to-visualize-the-weights-of-the-outputclassification-layer | How to visualize the weights of the output(classification) layer?
I'm building a convolutional neural network on the MNIST data set; how can I visualize the weights of the output (classification) layer? I have only found websites visualizing the filters of a CNN model, but none for the output layer.
this is my code so far:
from keras.models import Sequential
from keras.layers import Conv2D

model = Sequential()
model.add(Conv2D(input_shape=(28, 28, 1),
                 filters=16, kernel_size=(5, 5),
                 activation='relu'))
model.add(Conv2D(filters=16, kernel_size=(3, 3),
                 activation='relu'))
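One generic recipe, sketched with a NumPy stand-in (the shapes below are assumptions — they suppose the model ends in Flatten followed by Dense(10), whose kernel you would fetch with `model.layers[-1].get_weights()`): reshape each class's weight column back onto the spatial feature grid and display it as an image.

```python
import numpy as np

# Stand-in for the Dense-layer kernel; with a real trained Keras model use
#   kernel, bias = model.layers[-1].get_weights()
# kernel has shape (n_features, n_classes).
rng = np.random.default_rng(0)
n_features, n_classes = 14 * 14 * 16, 10   # hypothetical flattened feature size
kernel = rng.normal(size=(n_features, n_classes))

# One "weight image" per output class: reshape that class's column back onto
# the spatial feature grid (here summed over the 16 channels for display).
weight_maps = kernel.reshape(14, 14, 16, n_classes).sum(axis=2)

# To display class `digit`: plt.imshow(weight_maps[:, :, digit], cmap="coolwarm")
print(weight_maps.shape)
```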
https://webwork.maa.org/moodle/mod/forum/discuss.php?d=1323 | ## Forum archive 2000-2006
### John E. Doner - multiple correct answers
by Arnold Pizer -
Number of replies: 0
multiple correct answers: topic started 11/5/2003, 1:10:53 PM; last post 11/5/2003, 10:19:20 PM
John E. Doner - multiple correct answers 11/5/2003; 1:10:53 PM (reads: 941, responses: 3) I have a situation where the students must define a curve using an equation 0 = f(x,y). But if that's a correct answer, so is this one: 0 = -f(x,y). I found a way to wrap the student's answer in abs(), but is there a better way to handle this situation? How about multiple correct answers more generally? Yes, other answers to my question are also correct, like 0=2f(x,y), 0=f(x,y)^2, etc., but I doubt any student will do something like that. On the other hand, switching the sign is a real possibility. John Doner UCSB
Thomas R. Shemanske - Re: multiple correct answers 11/5/2003; 3:38:14 PM (reads: 1144, responses: 0) I usually just give them one of the coefficients to normalize things. For example if I expect ax^2 + by^2 + c = 0 I tell them to give me the equation with one of a, b, or c specified (based on their data). Tom
John Jones - Re: multiple correct answers 11/5/2003; 6:25:31 PM (reads: 1142, responses: 0) A simple solution is to normalize the answer. You can still accomplish the result to a certain extent by using parameters in fun_cmp (which takes care of the factors of -1 or 2). You can also go the extra mile to insure that they don't cheat with 0=0 f(x,y) by using the partial credit evaluator or equation_cmp. This came up in another thread some time ago. None of these approaches deals with f(x,y)^2, but I would write it off as too unlikely to come from an actual student. John
Michael Gage - Re: multiple correct answers 11/5/2003; 10:19:20 PM (reads: 1132, responses: 0) Another approach is to make sure that the answer is constant on the level curves of your function f(x,y). I use this approach in checking answers to exact differential equations. See for example:
http://webhost.math.rochester.edu/webworkdocs/ww/pgView/setDiffEQ7Exact/ur_de_7_1.pg or
http://webhost.math.rochester.edu/webworkdocs/ww/listLib?command=listSet&set=setDiffEQ7Exact
Part of the source code is below:

DOCUMENT();  # This should be the first executable line in the problem.
loadMacros(....
  "PGdiffeqmacros.pl",
);
TEXT(&beginproblem);
$showPartialCorrectAnswers = 0;
BEGIN_TEXT
The following differential equation is exact. $BR
Find a function \( F(x,y) \) whose level curves are solutions to the differential equation
\[ y\,dy - x\,dx = 0 \]
\( F(x,y) = \) \{ ans_rule(30) \}
END_TEXT
ANS( level_curve_check("x/y", "2x^2-2y^2", initial_t => 1, initial_y => 2) );
ENDDOCUMENT();

The answer evaluator "level_curve_check" is in PGdiffeqmacros.pl, which is in the courseScripts directory. (Or you can view it from the CVS at http://cvs.webwork.rochester.edu/viewcvs.cgi/pg/macros/PGdiffeqmacros.pl) As written the level_curve_check is incomplete, although it was good enough to handle my differential equation class. The idea is to create one or more curves satisfying the ODE f_x dx + f_y dy = 0. The student's answer should be constant on these curves. If they enter a function g(f(x,y)) instead of f(x,y) it will still be constant. As written, level_curve_check can be fooled by entering a constant function (e.g. F(x,y) = 1), but it wouldn't be hard to modify the answer evaluator to check that the function is not constant as you stray off a level curve. I'll do that before I use those problems again in a course. The first entry of level_curve_check is the RHS of dy/dx = g(x,y) = -f_x/f_y. The second entry is the instructor's expected answer (serves as a check), and the initial t = x value and the initial y value for the level curve. Much could be done to improve this answer evaluator - check more than one level curve for example, and of course check that the function is not constant along non-level curves - but it's a reasonable start. --Mike
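A minimal numeric sketch of the same level-curve idea (in Python rather than PG, with helper names of my own choosing), including the off-curve constancy test proposed in the post: points are sampled on the solution curve of y dy - x dx = 0 through (1, 2), i.e. a level curve of 2x^2 - 2y^2, and a "student" answer passes if it is constant there but not constant everywhere.

```python
import numpy as np

def level_curve_points(x0=1.0, y0=2.0, n=50):
    # Solutions of y dy - x dx = 0 satisfy y**2 - x**2 = const;
    # parametrize the branch through (x0, y0).
    c = y0**2 - x0**2
    x = np.linspace(x0, x0 + 2.0, n)
    return x, np.sqrt(x**2 + c)

def level_curve_check(student, x0=1.0, y0=2.0, tol=1e-8):
    x, y = level_curve_points(x0, y0)
    vals = student(x, y)
    constant_on_curve = np.ptp(vals) < tol      # constant along the solution curve
    off = student(x, y + 1.0)                   # stray off the level curve
    varies_off_curve = np.ptp(off) > tol        # rejects the F(x,y) = const cheat
    return bool(constant_on_curve and varies_off_curve)

print(level_curve_check(lambda x, y: 2*x**2 - 2*y**2))      # instructor's answer
print(level_curve_check(lambda x, y: np.exp(x**2 - y**2)))  # g(F(x,y)) also passes
print(level_curve_check(lambda x, y: 0*x + 1.0))            # constant cheat fails
```

As in the post, any monotone reparametrization g(F(x,y)) of the expected answer is accepted, since it is still constant on the level curves.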
https://hackage-origin.haskell.org/package/vulkan-3.6.3/docs/Vulkan-Core10-Enums-FormatFeatureFlagBits.html | vulkan-3.6.3: Bindings to the Vulkan graphics API.
Vulkan.Core10.Enums.FormatFeatureFlagBits
Synopsis
# Documentation
newtype FormatFeatureFlagBits Source #
VkFormatFeatureFlagBits - Bitmask specifying features supported by a buffer
# Description
The following bits may be set in linearTilingFeatures, optimalTilingFeatures, and DrmFormatModifierPropertiesEXT::drmFormatModifierTilingFeatures, specifying that the features are supported by images or image views or sampler Y′CBCR conversion objects created with the queried getPhysicalDeviceFormatProperties::format:
• FORMAT_FEATURE_SAMPLED_IMAGE_BIT specifies that an image view can be sampled from.
• FORMAT_FEATURE_STORAGE_IMAGE_BIT specifies that an image view can be used as a storage image.
• FORMAT_FEATURE_STORAGE_IMAGE_ATOMIC_BIT specifies that an image view can be used as storage image that supports atomic operations.
• FORMAT_FEATURE_COLOR_ATTACHMENT_BIT specifies that an image view can be used as a framebuffer color attachment and as an input attachment.
• FORMAT_FEATURE_COLOR_ATTACHMENT_BLEND_BIT specifies that an image view can be used as a framebuffer color attachment that supports blending and as an input attachment.
• FORMAT_FEATURE_DEPTH_STENCIL_ATTACHMENT_BIT specifies that an image view can be used as a framebuffer depth/stencil attachment and as an input attachment.
• FORMAT_FEATURE_BLIT_SRC_BIT specifies that an image can be used as srcImage for the cmdBlitImage command.
• FORMAT_FEATURE_BLIT_DST_BIT specifies that an image can be used as dstImage for the cmdBlitImage command.
• FORMAT_FEATURE_SAMPLED_IMAGE_FILTER_LINEAR_BIT specifies that if FORMAT_FEATURE_SAMPLED_IMAGE_BIT is also set, an image view can be used with a sampler that has either of magFilter or minFilter set to FILTER_LINEAR, or mipmapMode set to SAMPLER_MIPMAP_MODE_LINEAR. If FORMAT_FEATURE_BLIT_SRC_BIT is also set, an image can be used as the srcImage to cmdBlitImage with a filter of FILTER_LINEAR. This bit must only be exposed for formats that also support the FORMAT_FEATURE_SAMPLED_IMAGE_BIT or FORMAT_FEATURE_BLIT_SRC_BIT.
If the format being queried is a depth/stencil format, this bit only specifies that the depth aspect (not the stencil aspect) of an image of this format supports linear filtering, and that linear filtering of the depth aspect is supported whether depth compare is enabled in the sampler or not. If this bit is not present, linear filtering with depth compare disabled is unsupported and linear filtering with depth compare enabled is supported, but may compute the filtered value in an implementation-dependent manner which differs from the normal rules of linear filtering. The resulting value must be in the range [0,1] and should be proportional to, or a weighted average of, the number of comparison passes or failures.
• FORMAT_FEATURE_TRANSFER_SRC_BIT specifies that an image can be used as a source image for copy commands.
• FORMAT_FEATURE_TRANSFER_DST_BIT specifies that an image can be used as a destination image for copy commands and clear commands.
• FORMAT_FEATURE_SAMPLED_IMAGE_FILTER_MINMAX_BIT specifies that an image can be used as a sampled image with a min or max SamplerReductionMode. This bit must only be exposed for formats that also support the FORMAT_FEATURE_SAMPLED_IMAGE_BIT.
• FORMAT_FEATURE_SAMPLED_IMAGE_FILTER_CUBIC_BIT_EXT specifies that an image can be used with a sampler that has either of magFilter or minFilter set to FILTER_CUBIC_EXT, or be the source image for a blit with filter set to FILTER_CUBIC_EXT. This bit must only be exposed for formats that also support the FORMAT_FEATURE_SAMPLED_IMAGE_BIT. If the format being queried is a depth/stencil format, this only specifies that the depth aspect is cubic filterable.
• FORMAT_FEATURE_MIDPOINT_CHROMA_SAMPLES_BIT specifies that an application can define a sampler Y′CBCR conversion using this format as a source, and that an image of this format can be used with a SamplerYcbcrConversionCreateInfo xChromaOffset and/or yChromaOffset of CHROMA_LOCATION_MIDPOINT. Otherwise both xChromaOffset and yChromaOffset must be CHROMA_LOCATION_COSITED_EVEN. If a format does not incorporate chroma downsampling (it is not a “422” or “420” format) but the implementation supports sampler Y′CBCR conversion for this format, the implementation must set FORMAT_FEATURE_MIDPOINT_CHROMA_SAMPLES_BIT.
• FORMAT_FEATURE_COSITED_CHROMA_SAMPLES_BIT specifies that an application can define a sampler Y′CBCR conversion using this format as a source, and that an image of this format can be used with a SamplerYcbcrConversionCreateInfo xChromaOffset and/or yChromaOffset of CHROMA_LOCATION_COSITED_EVEN. Otherwise both xChromaOffset and yChromaOffset must be CHROMA_LOCATION_MIDPOINT. If neither FORMAT_FEATURE_COSITED_CHROMA_SAMPLES_BIT nor FORMAT_FEATURE_MIDPOINT_CHROMA_SAMPLES_BIT is set, the application must not define a sampler Y′CBCR conversion using this format as a source.
• FORMAT_FEATURE_SAMPLED_IMAGE_YCBCR_CONVERSION_LINEAR_FILTER_BIT specifies that the format can do linear sampler filtering (min/magFilter) whilst sampler Y′CBCR conversion is enabled.
• FORMAT_FEATURE_SAMPLED_IMAGE_YCBCR_CONVERSION_SEPARATE_RECONSTRUCTION_FILTER_BIT specifies that the format can have different chroma, min, and mag filters.
• FORMAT_FEATURE_SAMPLED_IMAGE_YCBCR_CONVERSION_CHROMA_RECONSTRUCTION_EXPLICIT_BIT specifies that reconstruction is explicit, as described in https://www.khronos.org/registry/vulkan/specs/1.2-extensions/html/vkspec.html#textures-chroma-reconstruction. If this bit is not present, reconstruction is implicit by default.
• FORMAT_FEATURE_SAMPLED_IMAGE_YCBCR_CONVERSION_CHROMA_RECONSTRUCTION_EXPLICIT_FORCEABLE_BIT specifies that reconstruction can be forcibly made explicit by setting SamplerYcbcrConversionCreateInfo::forceExplicitReconstruction to TRUE. If the format being queried supports FORMAT_FEATURE_SAMPLED_IMAGE_YCBCR_CONVERSION_CHROMA_RECONSTRUCTION_EXPLICIT_BIT it must also support FORMAT_FEATURE_SAMPLED_IMAGE_YCBCR_CONVERSION_CHROMA_RECONSTRUCTION_EXPLICIT_FORCEABLE_BIT.
• FORMAT_FEATURE_DISJOINT_BIT specifies that a multi-planar image can have the IMAGE_CREATE_DISJOINT_BIT set during image creation. An implementation must not set FORMAT_FEATURE_DISJOINT_BIT for single-plane formats.
• FORMAT_FEATURE_FRAGMENT_DENSITY_MAP_BIT_EXT specifies that an image view can be used as a fragment density map attachment.
The following bits may be set in bufferFeatures, specifying that the features are supported by buffers or buffer views created with the queried getPhysicalDeviceFormatProperties::format:
• FORMAT_FEATURE_UNIFORM_TEXEL_BUFFER_BIT specifies that the format can be used to create a buffer view that can be bound to a DESCRIPTOR_TYPE_UNIFORM_TEXEL_BUFFER descriptor.
• FORMAT_FEATURE_STORAGE_TEXEL_BUFFER_BIT specifies that the format can be used to create a buffer view that can be bound to a DESCRIPTOR_TYPE_STORAGE_TEXEL_BUFFER descriptor.
• FORMAT_FEATURE_STORAGE_TEXEL_BUFFER_ATOMIC_BIT specifies that atomic operations are supported on DESCRIPTOR_TYPE_STORAGE_TEXEL_BUFFER with this format.
• FORMAT_FEATURE_VERTEX_BUFFER_BIT specifies that the format can be used as a vertex attribute format (VertexInputAttributeDescription::format).
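Since linearTilingFeatures, optimalTilingFeatures, and bufferFeatures are plain bitmasks, checking support for a given use case reduces to a mask test against the features value returned for the queried format. A minimal Python sketch of the mask logic; the numeric bit values below are the ones published in the Vulkan registry (vulkan_core.h), reproduced here as assumptions rather than taken from this page:

```python
# Format feature flag bit values as defined in the Vulkan registry
# (vulkan_core.h); treat these as assumptions to be double-checked.
FORMAT_FEATURE_SAMPLED_IMAGE_BIT              = 0x00000001
FORMAT_FEATURE_STORAGE_IMAGE_BIT              = 0x00000002
FORMAT_FEATURE_STORAGE_IMAGE_ATOMIC_BIT       = 0x00000004
FORMAT_FEATURE_UNIFORM_TEXEL_BUFFER_BIT       = 0x00000008
FORMAT_FEATURE_STORAGE_TEXEL_BUFFER_BIT       = 0x00000010
FORMAT_FEATURE_STORAGE_TEXEL_BUFFER_ATOMIC_BIT = 0x00000020
FORMAT_FEATURE_VERTEX_BUFFER_BIT              = 0x00000040

def supports(features: int, required: int) -> bool:
    """True if every bit in `required` is set in `features`."""
    return features & required == required

# e.g. a format whose bufferFeatures advertise texel-buffer use only:
buffer_features = (FORMAT_FEATURE_UNIFORM_TEXEL_BUFFER_BIT |
                   FORMAT_FEATURE_STORAGE_TEXEL_BUFFER_BIT)
assert supports(buffer_features, FORMAT_FEATURE_UNIFORM_TEXEL_BUFFER_BIT)
assert not supports(buffer_features, FORMAT_FEATURE_VERTEX_BUFFER_BIT)
```

In real use the `features` value would come from the properties structure filled in by getPhysicalDeviceFormatProperties for the format in question.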
FormatFeatureFlags
Constructors
FormatFeatureFlagBits Flags
Bundled Patterns
• pattern FORMAT_FEATURE_SAMPLED_IMAGE_BIT :: FormatFeatureFlagBits. Specifies that an image view can be sampled from.
• pattern FORMAT_FEATURE_STORAGE_IMAGE_BIT :: FormatFeatureFlagBits. Specifies that an image view can be used as a storage image.
• pattern FORMAT_FEATURE_STORAGE_IMAGE_ATOMIC_BIT :: FormatFeatureFlagBits. Specifies that an image view can be used as a storage image that supports atomic operations.
• pattern FORMAT_FEATURE_UNIFORM_TEXEL_BUFFER_BIT :: FormatFeatureFlagBits. Specifies that the format can be used to create a buffer view that can be bound to a DESCRIPTOR_TYPE_UNIFORM_TEXEL_BUFFER descriptor.
• pattern FORMAT_FEATURE_STORAGE_TEXEL_BUFFER_BIT :: FormatFeatureFlagBits. Specifies that the format can be used to create a buffer view that can be bound to a DESCRIPTOR_TYPE_STORAGE_TEXEL_BUFFER descriptor.
• pattern FORMAT_FEATURE_STORAGE_TEXEL_BUFFER_ATOMIC_BIT :: FormatFeatureFlagBits. Specifies that atomic operations are supported on DESCRIPTOR_TYPE_STORAGE_TEXEL_BUFFER with this format.
• pattern FORMAT_FEATURE_VERTEX_BUFFER_BIT :: FormatFeatureFlagBits. Specifies that the format can be used as a vertex attribute format (VertexInputAttributeDescription::format).
• pattern FORMAT_FEATURE_COLOR_ATTACHMENT_BIT :: FormatFeatureFlagBits. Specifies that an image view can be used as a framebuffer color attachment and as an input attachment.
• pattern FORMAT_FEATURE_COLOR_ATTACHMENT_BLEND_BIT :: FormatFeatureFlagBits. Specifies that an image view can be used as a framebuffer color attachment that supports blending and as an input attachment.
• pattern FORMAT_FEATURE_DEPTH_STENCIL_ATTACHMENT_BIT :: FormatFeatureFlagBits. Specifies that an image view can be used as a framebuffer depth/stencil attachment and as an input attachment.
• pattern FORMAT_FEATURE_BLIT_SRC_BIT :: FormatFeatureFlagBits. Specifies that an image can be used as srcImage for the cmdBlitImage command.
• pattern FORMAT_FEATURE_BLIT_DST_BIT :: FormatFeatureFlagBits. Specifies that an image can be used as dstImage for the cmdBlitImage command.
• pattern FORMAT_FEATURE_SAMPLED_IMAGE_FILTER_LINEAR_BIT :: FormatFeatureFlagBits. Specifies that if FORMAT_FEATURE_SAMPLED_IMAGE_BIT is also set, an image view can be used with a sampler that has either of magFilter or minFilter set to FILTER_LINEAR, or mipmapMode set to SAMPLER_MIPMAP_MODE_LINEAR. If FORMAT_FEATURE_BLIT_SRC_BIT is also set, an image can be used as the srcImage to cmdBlitImage with a filter of FILTER_LINEAR. This bit must only be exposed for formats that also support the FORMAT_FEATURE_SAMPLED_IMAGE_BIT or FORMAT_FEATURE_BLIT_SRC_BIT. If the format being queried is a depth/stencil format, this bit only specifies that the depth aspect (not the stencil aspect) of an image of this format supports linear filtering, and that linear filtering of the depth aspect is supported whether depth compare is enabled in the sampler or not. If this bit is not present, linear filtering with depth compare disabled is unsupported and linear filtering with depth compare enabled is supported, but may compute the filtered value in an implementation-dependent manner which differs from the normal rules of linear filtering. The resulting value must be in the range [0,1] and should be proportional to, or a weighted average of, the number of comparison passes or failures.
• pattern FORMAT_FEATURE_FRAGMENT_DENSITY_MAP_BIT_EXT :: FormatFeatureFlagBits. Specifies that an image view can be used as a fragment density map attachment.
• pattern FORMAT_FEATURE_ACCELERATION_STRUCTURE_VERTEX_BUFFER_BIT_KHR :: FormatFeatureFlagBits
• pattern FORMAT_FEATURE_SAMPLED_IMAGE_FILTER_CUBIC_BIT_IMG :: FormatFeatureFlagBits
• pattern FORMAT_FEATURE_SAMPLED_IMAGE_FILTER_MINMAX_BIT :: FormatFeatureFlagBits. Specifies that an image can be used as a sampled image with a min or max SamplerReductionMode. This bit must only be exposed for formats that also support the FORMAT_FEATURE_SAMPLED_IMAGE_BIT.
• pattern FORMAT_FEATURE_COSITED_CHROMA_SAMPLES_BIT :: FormatFeatureFlagBits. Specifies that an application can define a sampler Y′CbCr conversion using this format as a source, and that an image of this format can be used with a SamplerYcbcrConversionCreateInfo xChromaOffset and/or yChromaOffset of CHROMA_LOCATION_COSITED_EVEN. Otherwise both xChromaOffset and yChromaOffset must be CHROMA_LOCATION_MIDPOINT. If neither FORMAT_FEATURE_COSITED_CHROMA_SAMPLES_BIT nor FORMAT_FEATURE_MIDPOINT_CHROMA_SAMPLES_BIT is set, the application must not define a sampler Y′CbCr conversion using this format as a source.
• pattern FORMAT_FEATURE_DISJOINT_BIT :: FormatFeatureFlagBits. Specifies that a multi-planar image can have the IMAGE_CREATE_DISJOINT_BIT set during image creation. An implementation must not set FORMAT_FEATURE_DISJOINT_BIT for single-plane formats.
• pattern FORMAT_FEATURE_SAMPLED_IMAGE_YCBCR_CONVERSION_CHROMA_RECONSTRUCTION_EXPLICIT_FORCEABLE_BIT :: FormatFeatureFlagBits. Specifies that reconstruction can be forcibly made explicit by setting SamplerYcbcrConversionCreateInfo::forceExplicitReconstruction to TRUE. If the format being queried supports FORMAT_FEATURE_SAMPLED_IMAGE_YCBCR_CONVERSION_CHROMA_RECONSTRUCTION_EXPLICIT_BIT it must also support FORMAT_FEATURE_SAMPLED_IMAGE_YCBCR_CONVERSION_CHROMA_RECONSTRUCTION_EXPLICIT_FORCEABLE_BIT.
• pattern FORMAT_FEATURE_SAMPLED_IMAGE_YCBCR_CONVERSION_CHROMA_RECONSTRUCTION_EXPLICIT_BIT :: FormatFeatureFlagBits. Specifies that reconstruction is explicit, as described in https://www.khronos.org/registry/vulkan/specs/1.2-extensions/html/vkspec.html#textures-chroma-reconstruction. If this bit is not present, reconstruction is implicit by default.
• pattern FORMAT_FEATURE_SAMPLED_IMAGE_YCBCR_CONVERSION_SEPARATE_RECONSTRUCTION_FILTER_BIT :: FormatFeatureFlagBits. Specifies that the format can have different chroma, min, and mag filters.
• pattern FORMAT_FEATURE_SAMPLED_IMAGE_YCBCR_CONVERSION_LINEAR_FILTER_BIT :: FormatFeatureFlagBits. Specifies that the format can do linear sampler filtering (min/magFilter) whilst sampler Y′CbCr conversion is enabled.
• pattern FORMAT_FEATURE_MIDPOINT_CHROMA_SAMPLES_BIT :: FormatFeatureFlagBits. Specifies that an application can define a sampler Y′CbCr conversion using this format as a source, and that an image of this format can be used with a SamplerYcbcrConversionCreateInfo xChromaOffset and/or yChromaOffset of CHROMA_LOCATION_MIDPOINT. Otherwise both xChromaOffset and yChromaOffset must be CHROMA_LOCATION_COSITED_EVEN. If a format does not incorporate chroma downsampling (it is not a “422” or “420” format) but the implementation supports sampler Y′CbCr conversion for this format, the implementation must set FORMAT_FEATURE_MIDPOINT_CHROMA_SAMPLES_BIT.
• pattern FORMAT_FEATURE_TRANSFER_DST_BIT :: FormatFeatureFlagBits. Specifies that an image can be used as a destination image for copy commands and clear commands.
• pattern FORMAT_FEATURE_TRANSFER_SRC_BIT :: FormatFeatureFlagBits. Specifies that an image can be used as a source image for copy commands.
#### Instances
Source # Instance details Methods Source # Instance details Source # Instance details Methods Source # Instance details Methods Source # Instance details MethodspokeByteOff :: Ptr b -> Int -> FormatFeatureFlagBits -> IO () # Source # Instance details Source # Instance details Methods | 2023-03-27 08:15:02 | {"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 1, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.3678227961063385, "perplexity": 5747.209628191854}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2023-14/segments/1679296948609.41/warc/CC-MAIN-20230327060940-20230327090940-00010.warc.gz"} |
https://sources.debian.org/src/r-bioc-shortread/1.32.0-1/man/srdistance.Rd/ | ## File: srdistance.Rd
\name{srdistance} \alias{srdistance} % \alias{srdistance,DNAStringSet,character-method} \alias{srdistance,DNAStringSet,DNAString-method} \alias{srdistance,DNAStringSet,DNAStringSet-method} \title{Edit distances between reads and a small number of short references} \description{ \code{srdistance} calculates the edit distance from each read in \code{pattern} to each read in \code{subject}. The underlying algorithm \code{\link[Biostrings]{pairwiseAlignment}} is only efficient when both reads are short, and when the number of \code{subject} reads is small. } \usage{ srdistance(pattern, subject, ...) } \arguments{ \item{pattern}{An object of class \code{DNAStringSet} containing reads whose edit distance is desired.} \item{subject}{A short \code{character} vector, \code{DNAString} or (small) \code{DNAStringSet} to serve as reference.} \item{\dots}{additional arguments, unused.} } \details{ The underlying algorithm performs pairwise alignment from each read in \code{pattern} to each sequence in \code{subject}. The return value is a list of numeric vectors of distances, one list element for each sequence in \code{subject}. The vector in each list element contains for each read in \code{pattern} the edit distance from the read to the corresponding subject. The weight matrix and gap penalties used to calculate the distance are structured to weight base substitutions and single base insert/deletions equally. Edit distances between known and ambiguous (e.g., N) nucleotides, or between ambiguous nucleotides, are weighted as though each possible nucleotide in the ambiguity were equally likely. } \value{ A list of length equal to that of \code{subject}. Each element is a numeric vector equal to the length of \code{pattern}, with values corresponding to the minimum distance between the corresponding pattern and subject sequences. 
} \author{Martin Morgan } \seealso{\code{\link[Biostrings]{pairwiseAlignment}}} \examples{ sp <- SolexaPath(system.file("extdata", package="ShortRead")) aln <- readAligned(sp, "s_2_export.txt") polyA <- polyn("A", 35) polyT <- polyn("T", 35) d1 <- srdistance(clean(sread(aln)), polyA) d2 <- srdistance(sread(aln), polyA) d3 <- srdistance(sread(aln), c(polyA, polyT)) } \keyword{manip} | 2020-09-24 22:34:05 | {"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 1, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.4621424674987793, "perplexity": 10242.227516521023}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2020-40/segments/1600400220495.39/warc/CC-MAIN-20200924194925-20200924224925-00698.warc.gz"} |
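The weighting described in the \details section above (base substitutions and single-base insertions/deletions weighted equally) is the classic unit-cost Levenshtein edit distance. A rough Python sketch of the per-read computation, ignoring the IUPAC-ambiguity weighting the man page describes:

```python
def edit_distance(p: str, s: str) -> int:
    """Unit-cost Levenshtein distance (substitution = indel = 1)."""
    prev = list(range(len(s) + 1))
    for i, a in enumerate(p, 1):
        cur = [i]
        for j, b in enumerate(s, 1):
            cur.append(min(prev[j] + 1,              # deletion
                           cur[j - 1] + 1,           # insertion
                           prev[j - 1] + (a != b)))  # (mis)match
        prev = cur
    return prev[-1]

def srdistance(patterns, subjects):
    """One distance vector per subject, mirroring the return shape
    described for ShortRead's srdistance (list keyed by subject)."""
    return {s: [edit_distance(p, s) for p in patterns] for s in subjects}

dists = srdistance(["ACGT", "AAAA"], ["AAAA"])
assert dists["AAAA"] == [3, 0]
```

The real implementation delegates to Biostrings::pairwiseAlignment with a suitably structured weight matrix rather than a plain dynamic program.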
http://math.stackexchange.com/questions/337403/does-this-series-converge-or-diverge | # Does this series converge or diverge?
I have a series here, and I'm supposed to determine whether it converges or diverges. I've tried the different tests, but I can't quite get the answer. $$\sum_{n=1}^\infty\ln\left(1+\frac1{n^2}\right)$$
Thanks.
Hint: Recall that $\ln(1+x)\sim x$ for $x\to 0$, and use the fact that $\sum_{n=1}^\infty\frac1{n^2}$ is convergent.
+1 This is as elegant and short as one can expect... yet in some places (yup, the Hebrew University's Math department, at least back in the 80's) they used to study infinite series first and only later (much later, proportionally) Taylor series, power series and the like – DonAntonio Mar 22 '13 at 0:14
But how would you work with Taylor series without having worked out the basics of infinite series first? In BGU we studied Taylor polynomials in calculus I, then infinite series and all the shizzle in calculus II. – Asaf Karagila Mar 22 '13 at 0:23
Hmmm... I'm thinking of using the Limit Comparison Test with 1/n^2. But I'm getting mixed up with the limit, so now I'm not so confident... – user63602 Mar 22 '13 at 0:26
@user63602: Yes, that's a very correct approach. If you know about Taylor polynomials then you know that $\ln(1+x)=x+O(x^2)$ for $|x|<1$ (and for $n\to\infty$, $\frac1{n^2}<1$). – Asaf Karagila Mar 22 '13 at 0:27
ahh okay i got it. I got the condition of LCT confused with that of the ratio test. Thanks! – user63602 Mar 22 '13 at 0:36
By the Mean Value Theorem
$$\ln(1+x) = x \cdot\frac{ 1}{1+cx}\leq x$$
where $0 < cx < x$. Hence $0\leq\ln(1 + 1/n^2)\leq 1/n^2$ for each $n\geq 1$. Then $$\sum_{n=1}^{+\infty}\ln\left(1+\frac{1}{n^2}\right)\leq \sum_{n=1}^{+\infty} \frac{1}{n^2}=\frac{\pi^2}{6}.$$
Oh, dear! Use LaTeX for your mathematics on this site. In the FAQ section you can find directions on this... – DonAntonio Mar 22 '13 at 0:18
I hope you don't mind, I edited your answer to make it readable. Nice elementary approach, +1. – 1015 Mar 22 '13 at 0:32
An idea: take the function
$$f(x):=\log\left(1+\frac{1}{x^2}\right):$$
$$\lim_{x\to\infty}\frac{f(x)}{\frac{1}{x^2}}\stackrel{\text{l'Hospital}}=\lim_{x\to\infty}\frac{x^2}{x^2+1}=1$$
Thus, the same as above applies for the discrete variable $\,n\,$ instead of $\,x\,$, and there you have the limit comparison test giving you convergence.
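A quick numerical check of the comparison: the partial sums stay below $\pi^2/6$, and in fact converge to $\ln(\sinh\pi/\pi)\approx 1.3018$ by the Euler product $\prod_{n\ge 1}(1+1/n^2)=\sinh\pi/\pi$ (the closed form is extra information, not needed for the convergence argument itself):

```python
import math

N = 200_000
partial = sum(math.log1p(1.0 / n**2) for n in range(1, N + 1))

bound = math.pi**2 / 6                           # comparison series sum 1/n^2
closed = math.log(math.sinh(math.pi) / math.pi)  # from prod(1+1/n^2) = sinh(pi)/pi

assert partial < bound
assert abs(partial - closed) < 1e-4  # tail beyond N is roughly 1/N
```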
- | 2016-07-26 20:21:25 | {"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 1, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.9770483374595642, "perplexity": 686.4208698766897}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2016-30/segments/1469257825124.22/warc/CC-MAIN-20160723071025-00065-ip-10-185-27-174.ec2.internal.warc.gz"} |
http://mathematica.stackexchange.com/questions/40262/unable-to-solve-two-point-boundary-value-problem | # Unable to Solve Two-Point Boundary Value Problem
I'm trying to solve the equation -u''[x] + ((x - k)^2 - en[x]) u[x] == 0 using the boundary conditions u[0] == u[8] == 0, u'[0] == 1. en[x] is meant to be an eigenvalue that depends upon the value k. For this reason I wrote
Block[{k = 4},
sol = NDSolve[{-u''[x] + ((x - k)^2 - en[x]) u[x] == 0, en'[x] == 0,
u[0] == u[8] == 0, u'[0] == 1}, {u, en}, x]];
But plotting this (Plot[u[x] /. First[sol], {x, 0, 10}]) gives solutions u[x] which look nothing like they should, values en[x] nowhere near what they should be, and the following messages:
FindRoot::cvmit: Failed to converge to the requested accuracy or precision within
100 iterations. >>
NDSolve::berr: There are significant errors {-0.0000120696,-0.0000359624,-0.99995} in
the boundary value residuals. Returning the best solution found. >>
NDSolve needs arguments and boundaries like this: {x, 0, 8} – Kamov Sergey Jan 12 '14 at 16:31
For some initial conditions there is "good" solution: k = 4; sol = NDSolve[{-u''[x] + ((x - k)^2 - en[x]) u[x] == 0, en'[x] == 0, u[0] == 4, u[8] == 0, u'[0] == 0}, {u, en}, {x, 0, 8}] – Kamov Sergey Jan 12 '14 at 16:43 | 2015-05-05 16:24:59 | {"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 1, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.22982285916805267, "perplexity": 6054.05259985735}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2015-18/segments/1430456823285.58/warc/CC-MAIN-20150501050703-00007-ip-10-235-10-82.ec2.internal.warc.gz"} |
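A standard alternative to a black-box boundary-value solver here is shooting on the eigenvalue: integrate u'' = ((x - k)^2 - en) u from u(0) = 0, u'(0) = 1 and adjust the constant en until u(8) = 0. A rough Python sketch for k = 4; the bracket [0.5, 1.5] is an assumption based on the harmonic-oscillator analogy, which puts the lowest eigenvalue near 1:

```python
def shoot(en, k=4.0, n_steps=800):
    """Integrate u'' = ((x - k)^2 - en) * u from x = 0 to 8 with RK4,
    starting from u(0) = 0, u'(0) = 1; return u(8)."""
    h = 8.0 / n_steps
    x, u, v = 0.0, 0.0, 1.0

    def f(x, u, v):
        return v, ((x - k) ** 2 - en) * u

    for _ in range(n_steps):
        k1u, k1v = f(x, u, v)
        k2u, k2v = f(x + h/2, u + h/2 * k1u, v + h/2 * k1v)
        k3u, k3v = f(x + h/2, u + h/2 * k2u, v + h/2 * k2v)
        k4u, k4v = f(x + h, u + h * k3u, v + h * k3v)
        u += h / 6 * (k1u + 2 * k2u + 2 * k3u + k4u)
        v += h / 6 * (k1v + 2 * k2v + 2 * k3v + k4v)
        x += h
    return u

lo, hi = 0.5, 1.5            # assumed bracket around the lowest eigenvalue
f_lo = shoot(lo)
assert f_lo * shoot(hi) < 0  # sign change => eigenvalue inside the bracket
for _ in range(40):          # bisect on en until u(8) = 0
    mid = 0.5 * (lo + hi)
    f_mid = shoot(mid)
    if f_lo * f_mid > 0:
        lo, f_lo = mid, f_mid
    else:
        hi = mid
en = 0.5 * (lo + hi)
```

Higher eigenvalues (near 3, 5, ...) can be found the same way by moving the bracket.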
https://hal-cea.archives-ouvertes.fr/cea-02495972 | # Ground-state properties and lattice-vibration effects of disordered Fe-Ni systems for phase stability predictions
* Corresponding author
Abstract : By means of density functional theory (DFT), we perform a focused study of both body-centered-cubic (bcc) and face-centered-cubic (fcc) Fe-Ni random solid solutions, represented by special quasi-random structures. The whole concentration range and various magnetic configurations are considered. Excellent agreement on the concentration dependence of magnetization is found between our results and experimental data, except in the Invar region. Some locally antiferromagnetic fcc structures are proposed to approach experimental values of magnetization. Vibrational entropies of ordered and disordered systems are calculated for various concentrations, showing an overall good agreement with available experimental data. The vibrational entropy systematically contributes to stabilize disordered rather than ordered structures, and is not negligible compared to the configurational entropy. Free energy of mixing is estimated by including the vibrational and ideal configurational entropies. From them, low- and intermediate-temperature Fe-Ni phase diagrams are constructed, showing a better agreement with experimental data than a recent CALPHAD prediction for some phase boundaries below 700 K. The determined order-disorder transition temperatures for the L1$_0$ and L1$_2$ phases are in good agreement with the experimental values, suggesting an important contribution of vibrational entropy.
Document type :
Journal articles
https://hal-cea.archives-ouvertes.fr/cea-02495972
Contributor: Contributeur Map Cea
Submitted on : Thursday, December 17, 2020 - 3:37:04 PM
Last modification on : Monday, December 13, 2021 - 9:14:42 AM
Long-term archiving on: : Thursday, March 18, 2021 - 8:10:18 PM
### File
Chu1.pdf
Publisher files allowed on an open archive
### Citation
Kangming Li, Chu Chun Fu. Ground-state properties and lattice-vibration effects of disordered Fe-Ni systems for phase stability predictions. Physical Review Materials, American Physical Society, 2020, 4, pp.023606. ⟨10.1103/PhysRevMaterials.4.023606⟩. ⟨cea-02495972⟩
| 2022-01-17 01:16:33 | {"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.19761566817760468, "perplexity": 5870.946665295116}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 20, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2022-05/segments/1642320300253.51/warc/CC-MAIN-20220117000754-20220117030754-00001.warc.gz"} |
http://www.mathnet.ru/php/archive.phtml?wshow=paper&jrnid=rcd&paperid=297&option_lang=eng |
Regul. Chaotic Dyn., 2017, Volume 22, Issue 7, Pages 880–892 (Mi rcd297)
Stability of Equilibrium Points for a Hamiltonian Systems with One Degree of Freedom in One Degenerate Case
Rodrigo Gutierrez, Claudio Vidal
Departamento de Matemática, Facultad de Ciencias, Universidad del Bío-Bío, Casilla 5-C, Concepción, VIII-Región, Chile
Abstract: This paper concerns the study of the stability of one equilibrium solution of an autonomous analytic Hamiltonian system with $1$ degree of freedom, in a neighborhood of the equilibrium point, in the degenerate case $H= q^4+ H_5+ H_6+\ldots$. Our main results complement the study initiated by Markeev in [9].
Keywords: Hamiltonian system, equilibrium solution, type of stability, normal form, critical cases, Lyapunov’s Theorem, Chetaev’s Theorem
DOI: https://doi.org/10.1134/S1560354717070097
MSC: 37C75, 34D20, 34A25
Accepted: 04.12.2017
Citation: Rodrigo Gutierrez, Claudio Vidal, “Stability of Equilibrium Points for a Hamiltonian Systems with One Degree of Freedom in One Degenerate Case”, Regul. Chaotic Dyn., 22:7 (2017), 880–892
Citation in format AMSBIB
\Bibitem{GutVid17} \by Rodrigo Gutierrez, Claudio Vidal \paper Stability of Equilibrium Points for a Hamiltonian Systems with One Degree of Freedom in One Degenerate Case \jour Regul. Chaotic Dyn. \yr 2017 \vol 22 \issue 7 \pages 880--892 \mathnet{http://mi.mathnet.ru/rcd297} \crossref{https://doi.org/10.1134/S1560354717070097} \isi{http://gateway.isiknowledge.com/gateway/Gateway.cgi?GWVersion=2&SrcApp=PARTNER_APP&SrcAuth=LinksAMR&DestLinkType=FullRecord&DestApp=ALL_WOS&KeyUT=000425980500009} \scopus{http://www.scopus.com/record/display.url?origin=inward&eid=2-s2.0-85042483855} | 2020-06-01 17:15:35 | {"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 0, "mathjax_asciimath": 1, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.4010017216205597, "perplexity": 12413.46063992563}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2020-24/segments/1590347419056.73/warc/CC-MAIN-20200601145025-20200601175025-00561.warc.gz"} |
https://planetmath.org/MeasurableProjectionTheorem | # measurable projection theorem
The projection of a measurable set from the product $X\times Y$ of two measurable spaces need not itself be measurable. See a Lebesgue measurable but non-Borel set for an example. However, the following result can be shown. The notation $\mathcal{F}\times\mathcal{B}$ refers to the product $\sigma$-algebra (http://planetmath.org/ProductSigmaAlgebra).
###### Theorem.
Let $(X,\mathcal{F})$ be a measurable space and $Y$ be a Polish space with Borel $\sigma$-algebra $\mathcal{B}$. Then the projection (http://planetmath.org/ProjectionMap) of any $S\in\mathcal{F}\times\mathcal{B}$ onto $X$ is universally measurable.
In particular, if $\mathcal{F}$ is universally complete then the projection of $S$ will be in $\mathcal{F}$, and this applies to all complete $\sigma$-finite (http://planetmath.org/SigmaFinite) measure spaces $(X,\mathcal{F},\mu)$. For example, the projection of any Borel set in $\mathbb{R}^{n}$ onto $\mathbb{R}$ is Lebesgue measurable.
The theorem is a direct consequence of the properties of analytic sets (http://planetmath.org/AnalyticSet2), following from the result that projections of analytic sets are analytic and the fact that analytic sets are universally measurable (http://planetmath.org/MeasurabilityOfAnalyticSets). Note, however, that the theorem itself does not refer at all to the concept of analytic sets.
The measurable projection theorem has important applications to the theory of continuous-time stochastic processes. For example, the début theorem, which says that the first time at which a progressively measurable stochastic process enters a given measurable set is a stopping time, follows easily. Also, if $(X_{t})_{t\in\mathbb{R}_{+}}$ is a jointly measurable process defined on a measurable space $(\Omega,\mathcal{F})$, then the maximum process $X^{*}_{t}=\sup_{s\leq t}X_{s}$ will be universally measurable since,
$\left\{\omega\in\Omega\colon X^{*}_{t}>K\right\}=\pi_{\Omega}\left(\left\{(s,\omega)\colon s\leq t,\ X_{s}>K\right\}\right)$
is universally measurable, where $\pi_{\Omega}\colon\Omega\times\mathbb{R}_{+}\to\Omega$ is the projection map. Furthermore, this also shows that the topology of ucp convergence is well defined on the space of jointly measurable processes.
Title: measurable projection theorem. Canonical name: MeasurableProjectionTheorem. Date of creation: 2013-03-22 18:48:04. Last modified on: 2013-03-22 18:48:04. Owner: gel (22282). Last modified by: gel (22282). Numerical id: 8. Author: gel (22282). Entry type: Theorem. Classification: msc 28A05, msc 60G07. Related topic: ProjectionsOfAnalyticSetsAreAnalytic. | 2019-03-23 20:59:21 | {"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 21, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.9903163909912109, "perplexity": 255.57157152754004}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2019-13/segments/1552912203021.14/warc/CC-MAIN-20190323201804-20190323223804-00291.warc.gz"} |
https://undergroundmathematics.org/trigonometry-triangles-to-functions/r9837/solution | Review question
# How many solutions does $7\sin x +2\cos ^2 x =5$ have?
Ref: R9837
## Solution
The number of solutions $x$ to the equation $7\sin x +2\cos ^2 x =5,$ in the range $0\le x<2\pi$, is
1. $1$,
2. $2$,
3. $3$,
4. $4$
Using the identity $\sin^2 x+ \cos^2 x=1$ we can rewrite the given equation. We have \begin{align*} & 7\sin x +2\cos ^2 x =5\\ \iff & 7\sin x+ 2(1-\sin ^2x) =5\\ \iff & 2\sin ^2x -7\sin x +3 =0\\ \iff & (2\sin x -1)(\sin x-3)=0. \end{align*}
So we must have $\sin x=3$, which has no solutions for real $x$, or $2\sin x=1$, that is, $\sin x=\dfrac{1}{2}$. This has two solutions in the relevant range (namely $\dfrac{\pi}{6}$ and $\dfrac{5\pi}{6}$). | 2018-01-22 13:41:09 | {"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 3, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.885246217250824, "perplexity": 1104.4348900952564}, "config": {"markdown_headings": true, "markdown_code": false, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2018-05/segments/1516084891377.59/warc/CC-MAIN-20180122133636-20180122153636-00491.warc.gz"} |
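The two roots, and the fact that there are no others in $[0,2\pi)$, can be confirmed numerically (a quick sketch):

```python
import math

f = lambda x: 7 * math.sin(x) + 2 * math.cos(x) ** 2 - 5

# The two claimed solutions:
assert abs(f(math.pi / 6)) < 1e-12
assert abs(f(5 * math.pi / 6)) < 1e-12

# Count sign changes of f on a fine grid over [0, 2*pi):
xs = [2 * math.pi * i / 100_000 for i in range(100_000)]
sign_changes = sum(1 for a, b in zip(xs, xs[1:]) if f(a) * f(b) < 0)
assert sign_changes == 2
```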
http://www.mathnet.ru/php/archive.phtml?wshow=paper&jrnid=zvmmf&paperid=4778&option_lang=eng | Zhurnal Vychislitel'noi Matematiki i Matematicheskoi Fiziki
Zh. Vychisl. Mat. Mat. Fiz., 2009, Volume 49, Number 11, Pages 1907–1919 (Mi zvmmf4778)
Mathematical model of optimal chemotherapy strategy with allowance for cell population dynamics in a heterogeneous tumor
A. V. Antipov, A. S. Bratus'
Faculty of Computational Mathematics and Cybernetics, Moscow State University, Moscow, 119992, Russia
Abstract: A mathematical model of tumor cell population dynamics is considered. The tumor is assumed to consist of cells of two types: amenable and resistant to chemotherapeutic treatment. It is assumed that the growth of the cell populations of both types is governed by logistic equations. The effect of a chemotherapeutic drug on the tumor is specified by a therapy function. Two types of therapy functions are considered: a monotonically increasing function and a nonmonotone one with a threshold. In the former case, the effect of a drug on the tumor is stronger at a higher drug concentration. In the latter case, a threshold drug concentration exists above which the effect of the therapy reduces. The case when the total drug amount is subject to an integral constraint is also studied. A similar problem was previously studied in the case of a linear therapy function with no constraint imposed on the drug amount. By applying the Pontryagin maximum principle, necessary optimality conditions are found, which are used to draw important conclusions about the character of the optimal therapy strategy. The optimal control problem of minimizing the total number of tumor cells is solved numerically in the case of a monotone or threshold therapy function with allowance for the integral constraint on the drug amount.
Key words: mathematical model of optimal chemotherapy, optimal control problem, numerical methods.
English version:
Computational Mathematics and Mathematical Physics, 2009, 49:11, 1825–1836
UDC: 519.626
Citation: A. V. Antipov, A. S. Bratus', “Mathematical model of optimal chemotherapy strategy with allowance for cell population dynamics in a heterogeneous tumor”, Zh. Vychisl. Mat. Mat. Fiz., 49:11 (2009), 1907–1919; Comput. Math. Math. Phys., 49:11 (2009), 1825–1836
This publication is cited in the following articles:
1. Bratus A.S., Fimmel E., Todorov Y., Semenov Y.S., Nuernberg F., “On strategies on a mathematical model for leukemia therapy”, Nonlinear Anal. Real World Appl., 13:3 (2012), 1044–1059
2. V. A. Srochko, “On solving the optimization problem for chemotherapy process in terms of the maximum principle”, Russian Math. (Iz. VUZ), 56:7 (2012), 55–59
3. V. A. Srochko, “Ekstremalnye rezhimy upravleniya v zadache optimizatsii protsessa terapii”, Vestn. S.-Peterburg. un-ta. Ser. 10. Prikl. matem. Inform. Prots. upr., 2012, no. 3, 113–119
4. Fimmel E., Semenov Yu.S., Bratus A.S., “On optimal and suboptimal treatment strategies for a mathematical model of leukemia”, Math. Biosci. Eng., 10:1 (2013), 151–165
5. Bratus A., Todorov Y., Yegorov I., Yurchenko D., “Solution of the Feedback Control Problem in the Mathematical Model of Leukaemia Therapy”, J. Optim. Theory Appl., 159:3 (2013), 590–605
6. I. E. Egorov, “Optimalnoe pozitsionnoe upravlenie v matematicheskoi modeli terapii zlokachestvennoi opukholi s uchetom reaktsii immunnoi sistemy”, Matem. biologiya i bioinform., 9:1 (2014), 257–272
7. Dimitriu G., Lorenzi T., Stefanescu R., “Evolutionary Dynamics of Cancer Cell Populations Under Immune Selection Pressure and Optimal Control of Chemotherapy”, Math. Model. Nat. Phenom., 9:4 (2014), 88–104
8. Yegorov I., Todorov Y., “Synthesis of Optimal Control in a Mathematical Model of Tumour-Immune Dynamics”, Optim. Control Appl. Methods, 36:1 (2015), 93–108
9. Bratus A.S., Kovalenko S.Yu., Fimmel E., “On Viable Therapy Strategy For a Mathematical Spatial Cancer Model Describing the Dynamics of Malignant and Healthy Cells”, Math. Biosci. Eng., 12:1 (2015), 163–183
10. Bratus A., Samokhin I., Yegorov I., Yurchenko D., “Maximization of Viability Time in a Mathematical Model of Cancer Therapy”, Math. Biosci., 294 (2017), 110–119
• Number of views: This page: 557 Full text: 204 References: 39 First page: 28 | 2022-01-23 22:40:58 | {"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 1, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.2792840003967285, "perplexity": 6954.165596966953}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2022-05/segments/1642320304309.59/warc/CC-MAIN-20220123202547-20220123232547-00186.warc.gz"} |
https://forum.azimuthproject.org/discussion/35/sceptics-pov-on-the-azimuth-project | #### Howdy, Stranger!
It looks like you're new here. If you want to get involved, click one of these buttons!
Options
# Sceptics POV on the Azimuth project?
JB has already explained at length that Azimuth is not about politics, and not about convincing sceptics. Here is my dilemma: When I read Eaarth, my first impression was "saying we live on a 'new planet called eaarth' is a nice rhetorical trick to convince the audience that dramatic climate change has already happened and is going to increase". My second thought was "all sceptics will interpret this trick as a clear indication that the whole book is pure propaganda".
Maybe it's just me: working as just another engineer in a big company, I have to understand all possible POV and objections to any idea I utter, and adapt accordingly. Therefore, I'd like to explain on the Eaarth page that - despite the propaganda - it's still worthwhile to read the book and think about it.
What would be the appropriate way to do this?
Comment Source:First of all, you could just say "despite the propaganda, it's still worthwhile to read this book". And then the best way to explain why it's worthwhile might be to summarize a bunch of the good ideas in it — particularly the ideas that are novel and/or backed up by solid evidence. In the process, you might try to downplay the rhetoric a bit, or criticize it when you think that's justified. McKibben is, after all, trying to lead a grassroots political movement, [[350]]. But at Azimuth, we're trying to provide reliable information, point scientists and engineers to open questions, and generally catalyze research and the development of ways to solve problems. | 2022-10-06 06:30:50 | {"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 1, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.31598177552223206, "perplexity": 1519.1474707153232}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2022-40/segments/1664030337731.82/warc/CC-MAIN-20221006061224-20221006091224-00188.warc.gz"} |
https://ask.sagemath.org/questions/41825/revisions/ | I am following Abstract Algebra an Interactive Approach by William Paulsen but some of the functions in the first chapter itself, like ShowTerry(), ItitTerry(), MultTable() etc are giving NameError. Same is the case with most of the functions in the preliminaries chapter like ShowRationals(), PowerMod() | 2020-04-05 14:38:38 | {"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 1, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.2741745114326477, "perplexity": 1522.8288499606952}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2020-16/segments/1585371604800.52/warc/CC-MAIN-20200405115129-20200405145629-00458.warc.gz"} |
https://www.gradesaver.com/textbooks/math/precalculus/precalculus-concepts-through-functions-a-unit-circle-approach-to-trigonometry-3rd-edition/appendix-a-review-a-1-algebra-essentials-a-1-assess-your-understanding-page-a11/82 | Precalculus: Concepts Through Functions, A Unit Circle Approach to Trigonometry (3rd Edition)
$-20^\circ C$
Substitute the given value of the $F$ into the expression: $C=\dfrac {5}{9}\left( F-32\right) =\dfrac {5}{9}\left( -4-32\right) =\dfrac {5\times \left( -36\right) }{9}=5\times \left( -4\right) =-20$ | 2019-07-17 18:23:01 | {"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.8084610104560852, "perplexity": 2611.6096469309505}, "config": {"markdown_headings": false, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 20, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2019-30/segments/1563195525374.43/warc/CC-MAIN-20190717181736-20190717203736-00412.warc.gz"} |
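The same substitution can be reproduced in a couple of lines of Python (a trivial added check):

```python
def fahrenheit_to_celsius(f):
    # C = 5/9 * (F - 32), the formula used in the worked solution
    return 5 * (f - 32) / 9

c = fahrenheit_to_celsius(-4)  # -20.0
```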
https://math.stackexchange.com/questions/3427313/rolling-beige-indistinguishable-die | # Rolling beige indistinguishable die
In lecture we learned that rolling two indistinguishable dice gives the same chance of doubles as rolling two distinguishable dice. But I'm having difficulty with a certain component of the lecture.
For two indistinguishable 6-sided dice we have $$\{(1,1),\ldots,(6,6)\} \cup \{(1,2), (1,3),\ldots,(5,6)\},$$ 21 outcomes in total in the sample space. Then we learn that the probability, $q$, of not getting doubles is twice the probability, $p$, of getting doubles.
So $$q=2p$$
But I got confused on the above step. How is it double since 6/21 for doubles vs. 15/21 for non doubles?
Thanks
• Do not forget... the probability of occurrence can be given as the ratio of favorable outcomes divided by total outcomes only if each of the outcomes are equally likely to occur. That is not the case here. There are two outcomes to a lottery, either you win or you lose but you don't win with probability $\frac{1}{2}$. – JMoravitz Nov 8 at 15:38
• Now... as for the statement that was actually said, I worry you may be confusing things. Regardless of whether or not you roll two distinguishable dice or two indistinguishable dice, the probability of having rolled doubles is $\frac{6}{36}=\frac{1}{6}$ (and is not $\frac{6}{21}$). What is probably intended to have been said, is that the probability of having rolled a specific double (such as $(1,1)$) is half as likely as having rolled a specific pair of different numbers (such as $(1,2)=(2,1)$ since these are considered the same in the indistinguishable case) – JMoravitz Nov 8 at 15:43
• If the indistinguishableness of the dice confuses you, don't worry you aren't alone. Imagine that you see color normally and the two dice are green and pink, obviously different and distinct and you roll the dice and you expect certain probabilities for certain outcomes. You hand the dice to your colorblind friend who can't tell the difference between them and he rolls the dice. The probabilities don't suddenly change just because he can't see color, the only things that change is his ability to distinguish between certain results. – JMoravitz Nov 8 at 15:46
As @JMoravitz pointed out in the comments, what happens here is that the probability of getting $$(1,1)$$ is not the same as the probability of getting $$(1,2)$$ when the dice are not distinguishable. When you write $$6/21$$ or $$15/21$$ you are assuming that every outcome has the same probability of occurring, but this is not the case.
If the dice are not distinguishable then, as you know, you have $$21$$ different outcomes; however, they do not all have the same probability of occurring. If the dice are distinguishable then there are $$36$$ different events, and this time each event has the same probability of occurring. | 2019-11-18 21:50:28 | {"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 8, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.6675548553466797, "perplexity": 331.3738706423008}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2019-47/segments/1573496669847.1/warc/CC-MAIN-20191118205402-20191118233402-00147.warc.gz"}
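For a concrete check, the point can be verified by enumerating the $36$ equally likely ordered rolls in Python (an added illustration, not part of the original answer): doubles come out at $1/6$ either way, while a specific unordered mixed pair such as $\{1,2\}$ is twice as likely as a specific double such as $(1,1)$.

```python
from fractions import Fraction
from itertools import product

# All 36 equally likely *ordered* rolls of two fair dice.
rolls = list(product(range(1, 7), repeat=2))

# Doubles have probability 6/36 = 1/6 whether or not we can tell the dice apart.
p_doubles = Fraction(sum(a == b for a, b in rolls), len(rolls))

# One specific unordered outcome of each kind:
p_one_double = Fraction(sum((a, b) == (1, 1) for a, b in rolls), len(rolls))
p_one_mixed = Fraction(sum({a, b} == {1, 2} for a, b in rolls), len(rolls))
```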
https://www.cut-the-knot.org/m/Algebra/EquationInFactorials.shtml | # An Equation in Factorials
### Solution 1
We'll use the induction in $n\,$ to prove
$1^2\cdot 2!+2^2\cdot 3!+\ldots+n^2(n+1)!-2=(n+2)!(n-1).$
The claim holds for $n=1:\,$ $1^2(1+1)!-2=(1+1)(1+2)(1-1).$
Assume $P(k)=:\,1^2\cdot 2!+2^2\cdot 3!+...+k^2(k+1)!-2=(k+2)!(k-1)\,$ is true, and let's prove $P(k+1),\,$ i.e.,
$1^2\cdot 2!+2^2\cdot 3!+...+k^2(k+1)!+(k+1)^2(k+2)!-2=(k+3)!k.$
We have
\begin{align} &1^2\cdot 2!+2^2\cdot 3!+...+k^2(k+1)!+(k+1)^2(k+2)!-2\\ &\qquad\qquad\qquad=(k+2)!(k-1)+(k+1)^2(k+2)!\\ &\qquad\qquad\qquad=(k+2)![(k-1)+(k+1)^2]\\ &\qquad\qquad\qquad=(k+2)![k^2+3k]\\ &\qquad\qquad\qquad=(k+3)!k, \end{align}
as required. Thus we rewrite the original equation:
$\displaystyle \frac{(n+2)!(n-1)}{(n+1)!}=108,$
or, $n^2+n-110=0,\,$ giving two roots: $n=10,\,$ which solves the problem, and a superfluous one, $n=-11.$
### Solution 2
We'll unfold the telescoping sum:
\displaystyle \begin{align} \sum_{k=1}^nk^2(k+1)! &=\sum_{k=1}^n\left[(k+2)^2-4(k+1)\right](k+1)!\\ &=\sum_{k=1}^n[(k+2)(k+2)!-4(k+1)(k+1)!]\\ &=\sum_{k=1}^n[(k+3-1)(k+2)!-4(k+2-1)(k+1)!]\\ &=\sum_{k=1}^n[(k+3)!-(k+2)!-4(k+2)!+4(k+1)!]\\ &=\sum_{k=1}^n[(k+3)!-5(k+2)!+4(k+1)!]\\ &=\sum_{k=4}^{n+3}k!-5\sum_{k=3}^{n+2}k!+4\sum_{k=2}^{n+1}k!\\ &=(n+3)!-4(n+2)!+2 = (n+2)!(n-1)+2. \end{align}
Thus the equation reduces to $(n+2)(n-1)=108\,$ from which $n=10.$
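Both the induction identity and the telescoping closed form are easy to confirm by direct computation. The following Python sketch (added for verification; the function names are my own) checks the identity for small $n$ and recovers the solution $n=10$:

```python
from math import factorial

def lhs(n):
    # (1^2*2! + 2^2*3! + ... + n^2*(n+1)! - 2) / (n+1)!
    total = sum(k * k * factorial(k + 1) for k in range(1, n + 1))
    return (total - 2) // factorial(n + 1)  # exact: the numerator is divisible by (n+1)!

def closed_form(n):
    # (n+2)!(n-1)/(n+1)! = (n+2)(n-1), as in the solutions above
    return (n + 2) * (n - 1)

identity_holds = all(lhs(n) == closed_form(n) for n in range(1, 31))
solution = next(n for n in range(1, 200) if lhs(n) == 108)
```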
### Solution 3
Let
$\displaystyle S_k=\frac{1^2 2! + 2^2 3! + ... + k^2 (k+1)! -2}{(k+1)!}.$
We can rule out $n=1,2$ as solutions by observation. Thus, we are guaranteed to have an $S_2$ and an $S_3$ and verify that both are integers. In general, for some $k\geq3$,
\displaystyle \begin{align}(k+1)!S_k-k!S_{k-1}&=k^2(k+1)! \\ \Rightarrow (k+1)S_k-S_{k-1}&=k^2(k+1) \end{align}
By observing this equation, we claim that $S_k$ is a quadratic polynomial in $k$. Let $S_k=ak^2+bk+c$. Plugging this expression back into the recurrence relation and evaluating the undetermined coefficients,
\displaystyle \begin{align}(k+1)(ak^2+bk+c)-[a(k-1)^2+b(k-1)+c]&=k^3+k^2 \\ ak^3+bk^2+(2a+c)k+(b-a)&=k^3+k^2 \end{align}
Thus, $a=1$, $b=1$, $c=-2$ and $S_k=k^2+k-2$.
So, the equation becomes $S_n=n^2+n-2=108$. The two roots are $n=10$ and $n=-11$. Thus, the only permissible solution in natural numbers is $n=10$.
### Solution 4
Writing in Gamma functions, the LHS is $\displaystyle F(n)=\frac{\displaystyle -2+\sum_{k=1}^nk^2\Gamma(k+2)}{\Gamma(n+1)}.\,$ We have $\displaystyle \sum_{k=1}^nk^2\Gamma(k+2)=n\Gamma(n+3)-\Gamma(n+3)+2,\,$ and conclude
$\displaystyle F(n)=\frac{\Gamma(n+3)}{\Gamma(n+2)}(n-1)=(2+n)(n-1).$
Solving $(2+n)(n-1)=108,\,$ we get $n=10.$
### Acknowledgment
Dan Sitaru has kindly posted at CutTheKnotMath facebook page a problem of his from the Romanian Mathematical Magazine and later sent me a LaTeX file with his solution (Solution 1). Solution 2 is by Kunihiko Chikaya; Solution 3 is by Amit Itagi; Solution 4 is by N. N. Taleb. | 2017-11-23 11:59:40 | {"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 4, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.9962197542190552, "perplexity": 2143.004926075296}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 20, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2017-47/segments/1510934806771.56/warc/CC-MAIN-20171123104442-20171123124442-00763.warc.gz"} |
https://www.numerade.com/questions/determine-whether-int_c-mathbff-cdot-dr-along-the-paths-c_1-and-c_2-shown-in-the-following-vector-fi/ | Vectors
Vector Calculus
### Video Transcript
From the graph in Question 49: for the given curve $C_1$, most of the points on the curve point against the vector field, hence the integral $\int_{C_1} \mathbf{F} \cdot d\mathbf{r}$ is negative. Similarly, for the given curve $C_2$, most of the points on the curve point along the vector field, hence the integral $\int_{C_2} \mathbf{F} \cdot d\mathbf{r}$ is positive.
Join Bootcamp | 2021-04-16 11:27:46 | {"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.7372559309005737, "perplexity": 14232.13184966486}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2021-17/segments/1618038056325.1/warc/CC-MAIN-20210416100222-20210416130222-00151.warc.gz"} |
https://www.questarter.com/q/dirichlet-series-summation-for-zeta-21_3142549.html | Dirichlet series summation for zeta
by user1828958 Last Updated March 10, 2019 16:20 PM - source
This question is a bit different from another one online. It appears when representing Dirichlet series.
Do you know how the following equation is true?
$$\sum_{1}^{\infty} \frac{1}{n^s} = \sum_{1}^{\infty} n \cdot \big (\frac{1}{n^s} - \frac{1}{(n+1)^s} \big)$$
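One way to see the identity is summation by parts (Abel summation): the $N$-th partial sums of the two sides differ exactly by the boundary term $N/(N+1)^s$, which tends to $0$ when $\operatorname{Re} s > 1$. A quick numerical check in Python at $s=2$ (an illustration, not a proof):

```python
import math

def partial_direct(s, N):
    # partial sum of  sum_{n>=1} 1/n^s
    return sum(1 / n ** s for n in range(1, N + 1))

def partial_by_parts(s, N):
    # partial sum of  sum_{n>=1} n * (1/n^s - 1/(n+1)^s)
    return sum(n * (1 / n ** s - 1 / (n + 1) ** s) for n in range(1, N + 1))

s, N = 2.0, 200_000
d = partial_direct(s, N)
b = partial_by_parts(s, N)
gap = abs(d - b)                 # boundary term, about N/(N+1)^2 ~ 1/N here
err = abs(d - math.pi ** 2 / 6)  # both sides converge to zeta(2) = pi^2/6
```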
Updated March 21, 2019 02:20 AM | 2019-04-25 23:53:10 | {"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 1, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.6239492297172546, "perplexity": 3879.1767534200994}, "config": {"markdown_headings": false, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2019-18/segments/1555578743307.87/warc/CC-MAIN-20190425233736-20190426015736-00242.warc.gz"} |
http://bit-player.org/2009/flights-of-fancy | # Flights of fancy
As I have mentioned in the past, I’m fascinated by the acrobatics of bird flocks, especially the big congregations of European starlings that gather in the evening at this time of year. Evidently I’m not the only one with such an interest. In the past few years the subject has attracted the attention of quite a large flock of scientists, including not only biologists but also various luminaries in physics, mathematics and computer science.
Below are some notes on a few of the recent papers, but first I have to mention a classic from 20 years ago:
Reynolds, Craig W. 1987. Flocks, herds, and schools: a distributed behavioral model. Computer Graphics 21(4):25–33. Author archive.
This is the paper that began the modern era of flocking studies by proposing that animals could coordinate and synchronize their movements without any need for a leader or external cues. Others were thinking along the same lines at about the same time, but it was Reynolds who attracted wide notice with his enchanting computer animations of “boids” soaring through an imaginary three-dimensional space. Each individual in the flock acts according to simple, local, fixed rules, and the synchronized maneuvers emerge spontaneously.
Reynolds suggested three particular rules that might guide the behavior of each bird:
• Avoid collisions.
• Try to match the speed and heading of nearby birds.
• Move toward the center of the group in which you are flying.
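The three rules translate almost directly into code. The sketch below is a minimal 2D interpretation of my own (the radius, weights and Euler step are illustrative choices, not Reynolds's published parameters), applying one synchronous update to a flock:

```python
def step(flock, r=1.0, w_sep=0.05, w_align=0.05, w_coh=0.005):
    """One synchronous Reynolds-style update; each bird is [x, y, vx, vy]."""
    out = []
    for i, (x, y, vx, vy) in enumerate(flock):
        sx = sy = avx = avy = cx = cy = 0.0
        n = 0
        for j, (x2, y2, vx2, vy2) in enumerate(flock):
            if i == j:
                continue
            dx, dy = x2 - x, y2 - y
            if dx * dx + dy * dy < r * r:  # neighbours within radius r
                n += 1
                sx -= dx; sy -= dy         # rule 1: avoid collisions
                avx += vx2; avy += vy2     # rule 2: match speed and heading
                cx += x2; cy += y2         # rule 3: move toward the centre
        if n:
            vx += w_sep * sx + w_align * (avx / n - vx) + w_coh * (cx / n - x)
            vy += w_sep * sy + w_align * (avy / n - vy) + w_coh * (cy / n - y)
        out.append([x + vx, y + vy, vx, vy])
    return out
```

A bird with no neighbours within the radius simply continues straight; birds in a crowd are pushed apart, aligned with their neighbours' average velocity, and drawn toward the local centre of mass.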
Reynolds was working in computer graphics, and his ideas were soon taken up by movie studios and by the makers of video games. In a sense, his simulations only had to look right; they didn’t have to reflect what actually goes on in a starling’s head. But whether or not the birds were paying attention, students of animal behavior certainly were.
Much of the recent activity arises out of new field studies, conducted mainly by physicists.
Cavagna, Andrea, Irene Giardina, Alberto Orlandi, Giorgio Parisi, Andrea Procaccini, Massimiliano Viale and Vladimir Zdravkovic. 2008. The STARFLAG handbook on collective animal behaviour. 1: Empirical methods. Animal Behaviour 76:217–236. Preprint.
Cavagna, Andrea, Irene Giardina, Alberto Orlandi, Giorgio Parisi and Andrea Procaccini. 2008. The STARFLAG handbook on collective animal behaviour. 2: Three-dimensional analysis. Animal Behaviour 76:237–248. Preprint.
This group, coordinated by Andrea Cavagna and Irene Giardina of the University of Rome La Sapienza, has been photographing starling flocks near the city’s main railroad station (the Termini), which is just a few blocks from the university. Using pairs of synchronized cameras, the observers have captured stereoscopic images and then applied special image-analysis software to reconstruct the three-dimensional trajectory of each bird. Similar techniques have been tried in the past, but only with small flocks (a few dozen birds). The Italian group has traced the motions of individual birds in groups of up to 2,600. The two papers cited above give technical details on how the data were gathered and analyzed.
Ballerini, Michele, Nicola Cabibbo, Raphael Candelier, Andrea Cavagna, Evaristo Cisbani, Irene Giardina, Alberto Orlandi, Giorgio Parisi, Andrea Procaccini, Massimiliano Viale and Vladimir Zdravkovic. 2008. Empirical investigation of starling flocks: a benchmark study in collective animal behaviour. Animal Behaviour 76:201–215. Preprint.
Ballerini, Michele, Nicola Cabibbo, Raphael Candelier, Andrea Cavagna, Evaristo Cisbani, Irene Giardina, Vivien Lecomte, Alberto Orlandi, Giorgio Parisi, Andrea Procaccini, Massimiliano Viale and Vladimir Zdravkovic. 2008. Interaction ruling animal collective behavior depends on topological rather than metric distance: Evidence from a field study. Proceedings of the National Academy of Science of the USA 105:1232–1237. Open access.
And here the same authors (with a few additions) report their results and conclusions. They base their interpretation on a computational model that is recognizably a descendant of the Reynolds scheme, but with one crucial modification. Reynolds and others assumed that each bird is influenced by all other birds within some fixed distance (a “metric neighborhood”); Ballerini et al. get a closer match to the data by assuming that a bird attends to the motions of a fixed number of near neighbors, regardless of distance (a “topological neighborhood”). In other words, the graph of interacting birds has nearly constant vertex degree; the typical degree is probably six or seven. The main significance of this algorithmic change is that it helps maintain the cohesion of the flock in spite of large variations in density.
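The metric-versus-topological distinction is easy to state in code. Here is a small sketch (my own illustration, with Ballerini et al.'s estimate of six or seven as the default) of selecting a topological neighbourhood, i.e. the k nearest birds regardless of distance:

```python
def topological_neighbours(flock, i, k=7):
    """Indices of the k nearest other birds: the 'topological' neighbourhood,
    as opposed to a fixed-radius 'metric' neighbourhood."""
    x, y = flock[i][0], flock[i][1]
    others = [(j, (bx - x) ** 2 + (by - y) ** 2)
              for j, (bx, by, *rest) in enumerate(flock) if j != i]
    others.sort(key=lambda t: t[1])  # sort by squared distance
    return [j for j, _ in others[:k]]
```

Because the selection depends only on distance *rank*, rescaling the whole flock (a change in density) leaves each bird's neighbourhood unchanged, which is exactly why this rule keeps the flock cohesive.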
Hildenbrandt, Hanno, Claudio Carere and Charlotte K. Hemelrijk. 2009. Self-organised complex aerial displays of thousands of starlings: a model. arXiv:0908.2677v1
Those same flocks at Termini have a role in this study as well; the model presented here draws on data from Ballerini et al. as well as videotapes made at Termini by Carere. (Carere is another physicist at Sapienza; Hildenbrandt and Hemelrijk are biologists at the University of Groningen.)
The model works on the same essential principles, but it differs in intellectual style and emphasis. Hildenbrandt et al. want to account for specific details of a flock’s behavior—not just the general tendency to fly in close formation but also the particular shapes of starling flocks, the maneuvers they perform, the altitudes they prefer, and so on. Reaching for this verisimilitude leads to a rather complicated model with many parameters in need of fine tuning, such as aerodynamic properties of the bird’s wing and body and banking angles in turns. Hildenbrandt et al. report some success in explaining the geometry of flocks (they tend to be horizontally flattened rather than spherical). They do less well in an attempt to account for an extra-dense layer of birds observed at the periphery of a flock.
Cucker, Felipe, and Steve Smale. 2007. Emergent behavior in flocks. IEEE Transactions on Automatic Control 52:852–862.
Chazelle, Bernard. 2009. Natural algorithms. Proceedings of the 20th Symposium on Discrete Algorithms, pp. 422-431. Preprint.
Chazelle, Bernard. 2009. The convergence of bird flocking. arXiv:0905.4241v1
Leaving behind the breathy wing-beats of living starlings, we enter a world of mathematical abstractions.
Cucker and Smale, peripatetic mathematicians currently at the City University of Hong Kong, take a stripped-down model of flocking and ask this question: Is it guaranteed that all the birds in the flock will eventually settle on the same velocity, and thus fly together forever? Chazelle, a theoretical computer scientist at Princeton, asks a follow-on question: If the birds do converge on the same speed and heading, how long might it take for them to do so, in the worst case?
The answer to the Cucker-Smale question turns out the be yes: Given certain preconditions and parameter values, convergence is certain. But Chazelle shows that it can take quite a while for the flock to reach consensus. For n birds adjusting their velocities in discrete steps, the upper bound is 2 ↑↑ (4 log n) steps. As I was saying just the other day, this up-arrow notation denotes an exponential tower of 2s with, in this case, 4 log2 n levels. In other words, in a flock of a thousand birds, the convergence time is roughly
$2^{2^{2^{\cdot^{\cdot^{\cdot^2}}}}}$
with 40 levels of exponentiation. This is a ridiculous number, far exceeding the lifetime of a starling (or of a universe, for that matter). As Chazelle notes: “Our bounds obviously say nothing about physical birds in the real world. They merely highlight the exotic behavior of the mathematical models.”
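To make the up-arrow notation concrete, here is a tiny Python illustration (mine, not Chazelle's) of the tower function:

```python
def tower(levels):
    # 2 ↑↑ levels: a right-associated tower of `levels` twos
    v = 1
    for _ in range(levels):
        v = 2 ** v
    return v

# tower(4) = 65536 is tame; tower(5) = 2**65536 already has 19729 digits,
# and a 40-level tower is far beyond anything physically representable.
```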
It is rather wonderful to reflect—as you stand in a field of corn stubble admiring the flocks of birds wheeling overhead in the evening sky—that these avian entertainments should be the starting point for a line of reasoning that ventures so far into the wild blue yonder of inexpressible numbers.
Lebar Bajec, Iztok, and Frank H. Heppner. 2009. Organized flight in birds. Animal Behaviour 78:777–789. Preprint.
I mention this piece last, but it would actually be a good place to start if you want a primer on flocking. Frank Heppner, a biologist at the University of Rhode Island, is one of the pioneers of flocking-and-swarming studies; here, with a mathematical colleague from the University of Ljubljana, he reviews many of the recent contributions and puts them in historical context. The review includes a discussion of the more crystalline flying formations of large birds such as geese as well as the amorphous flocks of starlings.
### 4 Responses to Flights of fancy
1. unekdoud says:
That is quite a large upper bound, considering that 5 levels of exponentiation is enough to exceed the lifetime of the known universe. What this means is that even with a flock of just a handful of birds, there is no guarantee that they will, within a given amount of time, fall into a stable state of constant velocity.
I find this similar to systems in physics which become chaotic once they exceed three components, the n-body problem being a famous example. And yet the result shows that the chaos should eventually die away.
This also brings to mind the behavior of certain cellular automata, where a long period of chaos finally settles into a regular pattern.
2. Stan Wenclewicz says:
What term is used to refer to the synchronized movement in a flight of birds or school of fish?
3. Sandy Ressler says:
Do any web-based videos of these simulations exist?
thanks…Sandy
please send email to sressler (at) n i s t . g o v
4. Thanks for a nice reference. There's another, earlier paper that you might find interesting: “Simulating flocks on the wing: the fuzzy approach” (doi:10.1016/j.jtbi.2004.10.003), with videos available at: http://itzsimpl.info/projects/ilb_synflocks.htm
https://tex.stackexchange.com/questions/199007/listings-line-numbers-work-with-one-lstlisting-but-not-another | # listings line numbers work with one lstlisting but not another
The first listing has line numbers that ignore the stepnumber setting, while the second is just fine. I guess it might have to do with using math mode, but mathescape applies to both, so if that is the case, it is not obvious to me what to do.
\documentclass[a4paper, twoside]{book}
\usepackage{listings}
\lstset{
basicstyle=\small,
keywordstyle=\ttfamily,
identifierstyle=\ttfamily,
numbers=left,
numberstyle=\tiny,
stepnumber=5,
numbersep=5pt,
numberfirstline=true,
firstnumber=1,
mathescape=true
}
\begin{document}
\begin{lstlisting}
g = $\infty$
for $v$ in vertices:
    s = $\emptyset$
    r = $\{v\}$
    pred[v] = $\emptyset$
    d[v] = $0$
    while not r == $\emptyset$:
        x = $x \in r$
        s = s $\cup$ x
        r = r $\setminus$ x
    for
\end{lstlisting}
\begin{lstlisting}
step = 1
res = new Graph()
def BFS(v,currentLength,maxLength):
    if currentLength < maxLength:
        if not label[v]:
            label[v] = step
            step = step + 1
        for w in Neighborhood(v):
            if label[w]:
                continue
            pred[w] = v
            res.Append(v,w)
            BFS(w,currentLength + 1, maxLength)
root = PickVertex()
BFS(root,0,k)
\end{lstlisting}
\end{document}
• It seems that every line in which a math formula is present triggers numbering the next line. – egreg Sep 1 '14 at 23:32
I believe that is caused by a bug in the listings code. These line numbers are printed because the flag that tests to see whether the code line is the first line is never reset. If you add
\makeatletter
\gdef\lst@numberfirstlinefalse{\global\let\lst@ifnumberfirstline\iffalse}
\lst@AddToHook{Init}{\global\let\lst@ifnumberfirstline\iftrue}
\makeatother
to your code (after \usepackage{listings}) then you will get the expected result (the \global is missing in lstmisc.sty). (Edit: I just added the Init line above to fix the first line number issue mentioned in the comments below.)
• I wonder whether changing the definition of \lst@numberfirstlinefalse as you did has side effects. Anyway, well done for tracking that down, Andrew. I'm gonna post a small bounty to attract attention (and upvotes) to your answer. – jub0bs Sep 2 '14 at 23:05
• @Jubobs I realised that there is one side effect in that in the second code block above the first line number is not printed. When I have time I'll look and see if I can fix this -- each listings should just reset \lst@ifnumberfirstline. There probably shouldn't be any other side effects as this seems to only be used for printing line numbers...but the code is sufficiently complicated that I can't be sure. In any case, I reported this to the package maintainers. – Andrew Sep 2 '14 at 23:42
• @user60625 I just added the \lst@AddToHook{Init} line to my solution. This fixes the numbering of the first line. Let me know if this breaks anything else! – Andrew Sep 3 '14 at 11:24 | 2019-11-21 16:24:37 | {"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 0, "mathjax_asciimath": 1, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.7359204292297363, "perplexity": 1457.466455144024}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 20, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2019-47/segments/1573496670921.20/warc/CC-MAIN-20191121153204-20191121181204-00246.warc.gz"} |
https://deepai.org/publication/robust-block-preconditioners-for-poroelasticity | # Robust block preconditioners for poroelasticity
In this paper we study the linear systems arising from discretized poroelasticity problems. We formulate one block preconditioner for the two-field Biot model and several preconditioners for the classical three-field Biot model under a unified framework relating well-posedness and preconditioners. By the unified theory, we show that all the considered preconditioners are uniformly optimal with respect to material and discretization parameters. Numerical tests demonstrate the robustness of these preconditioners.
## 1 Introduction
Poroelasticity, the study of the fluid flow in porous and elastic media, couples the elastic deformation with the fluid flow in porous media. The Biot model has wide applications in geoscience, biomechanics, and many other fields. Numerical simulations of poroelasticity are challenging. A notorious instability of the numerical discretization method for the poroelasticity model is the unphysical oscillation of the pressure under certain conditions Murad.M;Loula.A1994a . There are many possible sources of this instability. One of the most significant sources is the instability of the finite element approximation for the coupled systems Haga.J;Osnes.H;Langtangen.H2012b ; Axelsson.O;Blaheta.R;Byczanski.P2012a . This motivates us to study the well-posedness of the finite element discretization. However, we do not look further into the details of the instability of the Biot model and refer interested readers to Haga.J;Osnes.H;Langtangen.H2012b ; Axelsson.O;Blaheta.R;Byczanski.P2012a ; Aguilar.G;Gaspar.F;Lisbona.F;Rodrigo.C2008a .
Another challenge associated with the Biot model is that of developing efficient linear solvers. Direct solvers have poor performance when the size of problems becomes large. Iterative solvers are good alternatives, as they exhibit better scalability. However, the convergence of iterative solvers is very much problem-dependent, so there is a need for robust preconditioners. For example, the multigrid preconditioned Krylov subspace method usually has an optimal convergence rate for the Poisson equation and many other symmetric positive definite problems Hackbusch.W1985a ; Xu.J1992a . However, for poroelasticity problems, coupled systems of equations must be solved, which are known to be indefinite and ill-conditioned Ferronato.M;Gambolati.G;Teatini.P2001a . Preconditioning techniques for poroelasticity problems have been the subject of considerable research in the literature Axelsson.O;Blaheta.R;Byczanski.P2012a ; Lipnikov.K2002a ; Bergamaschi.L;Ferronato.M;Gambolati.G2007a ; Phoon.K;Toh.K;Chan.S;Lee.F2002a ; Toh.K;Phoon.K;Chan.S2004a ; Haga.J;Osnes.H;Langtangen.H2011a ; Haga.J;Osnes.H;Langtangen.H2012a ; Turan.E;Arbenz.P2014a , and most of the techniques developed are based on the Schur complement approach. In Phoon.K;Toh.K;Chan.S;Lee.F2002a ; Toh.K;Phoon.K;Chan.S2004a , a diagonal approximation of the Schur complement preconditioner is used to precondition the two-field formulation of the Biot model. In Haga.J;Osnes.H;Langtangen.H2011a ; Haga.J;Osnes.H;Langtangen.H2012a , Schur complement preconditioners are also studied for the two-field formulation, with algebraic multigrid (AMG) as the preconditioner for the elasticity block. In Axelsson.O;Blaheta.R;Byczanski.P2012a , Schur complement approaches for the three-field formulation are investigated. Recently, robust block diagonal and block triangular preconditioners were developed in adler2017robust for the two-field Biot model. For the classical three-field Biot model, robust block preconditioners were designed in hong2018parameter ; hong2018conservative based on uniform stability estimates. A robust preconditioner for a new three-field formulation that introduces a total pressure as the third unknown is analyzed in lee2017parameter . Robust block diagonal and block triangular preconditioners were also developed in adler2019robust based on the discretization proposed in rodrigo2018new . Robustness analyses of the fixed-stress splitting method and of Uzawa-type methods for multiple-permeability poroelasticity systems are presented in hong2018fixed and hong2019parameter .
The focus of this paper is on the stability of the linear systems after time discretization and on several robust preconditioners for iterative solvers, under the unified framework relating well-posedness and preconditioners. The block preconditioners in adler2017robust for the two-field formulation and in hong2018parameter ; adler2019robust for the three-field formulation can be briefly written in this framework. In addition, we analyze the well-posedness of the linear systems and propose other optimal preconditioners for the Biot model Haga.J;Osnes.H;Langtangen.H2012b based on the mapping property Mardal.K;Winther.R2011a . By proposing optimal block preconditioners, we convert the solution of the complicated coupled system into that of a few SPD systems on each of the fields.
The rest of this paper is organized as follows. In Section 2, we give a brief introduction of the Biot model. In Section 3, we introduce two theorems in order to prove well-posedness. In Section 4, we address the unified framework indicating the relationship between preconditioning and well-posedness of linear systems. In Section 5 and Section 6, we show the well-posedness and several optimal preconditioners for the Biot model under the unified framework. In Section 7, we present numerical examples to demonstrate the robustness of these preconditioners.
## 2 The Biot model
The poroelastic phenomenon is usually characterized by the Biot model Biot.M1941a ; Biot.M1955a , which couples the structure displacement $u$, the fluid flux $v$, and the fluid pressure $p$. Consider a bounded and simply connected Lipschitz domain $\Omega$ of poroelastic material. As the deformation is assumed to be small, we assume that the deformed configuration coincides with the undeformed reference configuration. Let $\sigma$ denote the total stress in this material. From the balance of the forces, we first have
$$-\nabla\cdot\sigma = f, \quad \text{in } \Omega.$$
In addition to the elastic stress
$$\sigma_e = 2\mu\,\epsilon(u) + \lambda(\nabla\cdot u)I,$$
the fluid pressure also contributes to the total stress, which results in the following constitutive equation:
$$\sigma = \sigma_e - \alpha p I.$$
Here, $\mu$ and $\lambda$ are the Lamé constants and $\nu$ is the Poisson ratio, the symmetric gradient is defined by $\epsilon(u) = \frac{1}{2}\big(\nabla u + (\nabla u)^T\big)$, and $\alpha$ is the Biot-Willis constant. Therefore, we obtain the following momentum equation:
$$-\nabla\cdot\big(2\mu\,\epsilon(u) + \lambda(\nabla\cdot u)I - \alpha p I\big) = f, \quad \text{in } \Omega.$$
Let $\eta$ denote the fluid content. Then, the mass conservation of the fluid phase implies that
$$\partial_t\eta + \nabla\cdot v = g \quad \text{in } \Omega, \qquad (1)$$
where $g$ is the source density. The fluid content is assumed to satisfy the following constitutive equation:
$$\eta = Sp + \alpha\nabla\cdot u, \qquad (2)$$
where $S$ is the fluid storage coefficient. We also have the Biot-Willis constant $\alpha$ in this equation, as this poroelastic model is assumed to be a reversible process and the increment of work must be an exact differential Biot.M1941a ; Rice.J2001a ; CHENG.A2014a . Based on (1) and (2), the following equation holds:
$$\alpha\nabla\cdot\dot{u} + \nabla\cdot v + S\dot{p} = g, \quad \text{in } \Omega.$$
According to Darcy’s law, we have another equation:
$$k^{-1}v + \nabla p = r,$$
where $k$ is the fluid mobility and $r$ is the body force for the fluid phase.
We consider all the parameters to be positive. The following boundary conditions are assumed:
$$u = u_D, \ \text{on } \Gamma_{D,u}, \qquad \sigma n = g_N, \ \text{on } \Gamma_{N,u}, \qquad (3)$$
$$v\cdot n = v_D, \ \text{on } \Gamma_{D,v}, \qquad p = p_N, \ \text{on } \Gamma_{N,v}, \qquad (4)$$
where $\overline{\Gamma}_{D,u}\cup\overline{\Gamma}_{N,u} = \partial\Omega$, $\Gamma_{D,u}\cap\Gamma_{N,u} = \emptyset$, and $\overline{\Gamma}_{D,v}\cup\overline{\Gamma}_{N,v} = \partial\Omega$, $\Gamma_{D,v}\cap\Gamma_{N,v} = \emptyset$.
The initial conditions are as follows:
$$u(x,0) = u_0(x), \qquad p(x,0) = p_0(x),$$
where $u_0$ and $p_0$ are given functions.
We use the backward Euler method to discretize the time derivative $\dot{u}$:
$$\dot{u}(t_n) \approx \frac{u(t_n) - u(t_{n-1})}{\Delta t},$$
where $\Delta t$ is the time step size. More sophisticated implicit time discretizations result in similar linear systems. As we are focusing on the properties of the linear systems resulting from the time discretized problem, we consider only the backward Euler method for the sake of brevity. After the implicit time discretization, fast solvers are needed to solve the following three-field system:
$$\begin{cases}
-\nabla\cdot(2\mu\,\epsilon(u) + \lambda(\nabla\cdot u)I) + \alpha\nabla p = f,\\
k^{-1}v + \nabla p = r,\\
\dfrac{\alpha}{\Delta t}\nabla\cdot u + \nabla\cdot v + \dfrac{S}{\Delta t}p = g.
\end{cases} \qquad (5)$$
Note that the right-hand side of the last equation in (5) includes terms from the previous time step due to the time discretization. As the exact form of this right-hand side does not affect the well-posedness of the linear system, we keep using $g$ to denote it. We apply this convention to all the right-hand sides in similar situations throughout the rest of this paper.
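As a toy illustration of the backward Euler step above (a scalar stand-in of my own, not a Biot discretization): each time step amounts to solving an implicit linear system in which the unknowns are coupled through the $1/\Delta t$ terms.

```python
import math

def backward_euler(a=1.0, T=1.0, n_steps=1000):
    """Backward Euler for u'(t) = -a*u(t), u(0) = 1, on [0, T].

    Each step solves the (here scalar) implicit system
    (1 + a*dt) * u_n = u_{n-1}.
    """
    dt = T / n_steps
    u = 1.0
    for _ in range(n_steps):
        u = u / (1.0 + a * dt)  # solve the implicit update
    return u

# First-order accurate: the error versus exp(-a*T) shrinks like dt.
print(abs(backward_euler() - math.exp(-1.0)))
```

For the PDE system, the same implicit update produces the coupled linear system (5) to be solved at every time step.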
To reduce the number of variables, the fluid flux $v$ is eliminated to obtain the following two-field system:
$$\begin{cases}
-\nabla\cdot(2\mu\,\epsilon(u) + \lambda(\nabla\cdot u)I) + \alpha\nabla p = f,\\
\dfrac{\alpha}{\Delta t}\nabla\cdot u - k\Delta p + \dfrac{S}{\Delta t}p = g.
\end{cases} \qquad (6)$$
In the rest of this paper, we develop block preconditioners for both the two-field and three-field systems.
## 3 Well-posedness of linear systems
In this section, we first introduce several theorems to prove the well-posedness of the following saddle point problem: Find $(u,p)\in M\times N$ such that, for all $(\phi,q)\in M\times N$, the following equations hold
$$\begin{cases}
a(u,\phi) + b(\phi,p) = \langle f,\phi\rangle,\\
b(u,q) - c(p,q) = \langle g,q\rangle.
\end{cases} \qquad (7)$$
Here, $M$ and $N$ are given Hilbert spaces with the inner products $(\cdot,\cdot)_M$ and $(\cdot,\cdot)_N$, respectively. The corresponding norms are denoted by $\|\cdot\|_M$ and $\|\cdot\|_N$.
Given $b(\cdot,\cdot)$, the following kernel spaces are important in the analysis:
$$Z = \{u\in M \mid b(u,q) = 0, \ \forall q\in N\},$$
$$K = \{p\in N \mid b(\phi,p) = 0, \ \forall \phi\in M\}.$$
We consider the orthogonal decompositions of $u\in M$ and $p\in N$ as follows:
$$u = u_0 + \bar{u}, \ u_0\in Z, \ \bar{u}\in Z^\perp, \qquad p = p_0 + \bar{p}, \ p_0\in K, \ \bar{p}\in K^\perp.$$
We will use this notation to denote the components of functions in the kernel spaces and their orthogonal complements throughout the rest of this section.
The well-posedness of (7) can be proved provided that $a$, $b$, and $c$ satisfy certain properties. In the first case, we assume that $a(\cdot,\cdot)$ is coercive on $M$.
###### Theorem 3.1
(Xu.J2015a ) Assume that $a(\cdot,\cdot)$ is symmetric and $c(\cdot,\cdot)$ is symmetric positive semi-definite, and write $|q|_c^2 := c(q,q)$. Let $|\cdot|_e$ be a semi-norm on $N$ such that $|p|_e = |\bar{p}|_e$, $\forall p\in N$. Assume that the following inequalities
$$a(u,\phi) \le C_a\|u\|_M\|\phi\|_M, \quad \forall u,\phi\in M, \qquad (8)$$
$$b(u,p) \le C_b\|u\|_M|p|_e, \quad \forall u\in M,\ p\in N, \qquad (9)$$
$$a(u,u) \ge \gamma_a\|u\|_M^2, \quad \forall u\in M, \qquad (10)$$
$$\sup_{u\in M}\frac{b(u,q)}{\|u\|_M} \ge \gamma_b|q|_e, \quad \forall q\in N, \qquad (11)$$
hold with the constants $C_a$, $C_b$, $\gamma_a$, and $\gamma_b$ independent of parameters. Then, Problem (7) is uniformly well-posed with respect to parameters under the norms $\|\cdot\|_M$ and $\|\cdot\|_N$, where $\|q\|_N^2 = |\bar{q}|_e^2 + |q|_c^2$.
###### Proof
Define
$$L(u,p;\phi,q) = a(u,\phi) + b(\phi,p) + b(u,q) - c(p,q).$$
To prove well-posedness, we just need to verify the following:
• the boundedness of $L$ under the norms $\|\cdot\|_M$ and $\|\cdot\|_N$.
• the Babuska inf-sup condition: for any $(u,p)\in M\times N$,
$$\sup_{(\phi,q)\in M\times N}\frac{L(u,p;\phi,q)}{(\|\phi\|_M^2 + \|q\|_N^2)^{1/2}} \ge C\,(\|u\|_M^2 + \|p\|_N^2)^{1/2}. \qquad (12)$$
Since it is straightforward to verify the boundedness of $L$, we focus on proving the inf-sup condition (12).
According to (11), for $p\in N$, consider its projection $\bar{p}\in K^\perp$. There exists $w\in M$ such that
$$b(w,\bar{p}) \ge \gamma_b|\bar{p}|_e^2 \quad \text{and} \quad \|w\|_M = |\bar{p}|_e.$$
Let $\phi = u + \theta w$, $q = -p$, and $\theta = \gamma_a\gamma_b/C_a^2$. Then, we have
$$\begin{aligned}
L(u,p;\phi,q) &= a(u,u+\theta w) + b(u+\theta w,p) - b(u,p) + c(p,p)\\
&\ge \gamma_a\|u\|_M^2 + \theta a(u,w) + \theta b(w,p) + c(p,p)\\
&\ge \gamma_a\|u\|_M^2 - \frac{\gamma_a}{2}\|u\|_M^2 - \frac{\theta^2 C_a^2}{2\gamma_a}\|w\|_M^2 + \gamma_b\theta|\bar{p}|_e^2 + c(p,p)\\
&\ge \frac{\gamma_a}{2}\|u\|_M^2 + \gamma_b\theta\Big(1 - \frac{\theta C_a^2}{2\gamma_a\gamma_b}\Big)|\bar{p}|_e^2 + c(p,p)\\
&= \frac{\gamma_a}{2}\|u\|_M^2 + \frac{\gamma_a\gamma_b^2}{2C_a^2}|\bar{p}|_e^2 + |p|_c^2.
\end{aligned}$$
Moreover, we have
$$\|\phi\|_M^2 + |\bar{q}|_e^2 + |q|_c^2 \le 2\|u\|_M^2 + (2\gamma_a^2\gamma_b^2/C_a^4 + 1)|\bar{p}|_e^2 + |p|_c^2.$$
Therefore, (12) holds.
###### Remark 1
It is worth noting that in case $c(\cdot,\cdot) = 0$, we have $|p|_c = 0$ and $\|q\|_N = |\bar{q}|_e$.
It is also possible to only assume that $a(\cdot,\cdot)$ is elliptic in $Z$. Then we need to assume that $c(\cdot,\cdot)$ is bounded under the norm $\|\cdot\|_N$.
###### Theorem 3.2
(Brezzi.F;Fortin.M1991a ) Assume that $a(\cdot,\cdot)$ and $c(\cdot,\cdot)$ are symmetric and positive semi-definite and that (8), (9) and (11) hold. Moreover, assume that
$$a(u,u) \ge \gamma_a\|u\|_M^2, \quad \forall u\in Z, \qquad (13)$$
$$c(q,q) \ge \gamma_c\|q\|_N^2, \quad \forall q\in K, \qquad (14)$$
$$c(p,q) \le C_c\|p\|_N\|q\|_N, \quad \forall p,q\in N. \qquad (15)$$
Assume that the constants $C_a$, $C_b$, $C_c$, $\gamma_a$, $\gamma_b$, and $\gamma_c$ are independent of the parameters. Then, Problem (7) is uniformly well-posed with respect to parameters under the norms $\|\cdot\|_M$ and $\|\cdot\|_N$.
Theorems 3.1 and 3.2 will be used to prove the well-posedness in different cases. Note that they are sufficient conditions for the problems to be well-posed. For weaker conditions, we refer to Boffi.D;Brezzi.F;Fortin.M2013a .
In this paper, we are especially interested in the robustness of preconditioners with respect to varying material and discretization parameters, guided by the well-posedness of the linear system. Thus we want to emphasize the dependence on these parameters in inequalities, and we introduce the following notation: given two quantities $a$ and $b$, $a \lesssim b$ means that there is a constant $C$ independent of these parameters such that $a \le Cb$; $a \gtrsim b$ can be similarly defined; and $a \eqsim b$ if $a \lesssim b$ and $a \gtrsim b$.
## 4 Relationship between preconditioning and well-posedness
Given that a variational problem is well-posed, an optimal preconditioner can be developed in order to speed up Krylov subspace methods, such as the Conjugate Gradient Method (CG) and the Minimal Residual Method (MINRES). In order to illustrate this fact, we first consider the following variational problem:
Find $x\in X$ such that
$$L(x,y) = \langle f,y\rangle, \quad \forall y\in X, \qquad (16)$$
where $X$ is a given Hilbert space and $f\in X'$.
The well-posedness of the variational problem (16) refers to the existence, uniqueness, and stability of the solution. The necessary and sufficient conditions for (16) to be well-posed are shown in the following theorem. We assume the symmetry of $L(\cdot,\cdot)$ in the rest of this section.
###### Theorem 4.1
(Babuska.I1971a ) Problem (16) is well-posed if and only if the following conditions are satisfied:
• There exists a constant $C$ such that $L(x,y) \le C\|x\|_X\|y\|_X$, $\forall x,y\in X$.
• There exists a constant $\beta$ such that
$$\inf_{x\in X}\sup_{y\in X}\frac{L(x,y)}{\|x\|_X\|y\|_X} = \beta > 0. \qquad (17)$$
Consider the operator form of (16):
$$Lx = f \in X'.$$
Define the operator $P: X'\to X$ such that
$$(Pf,y)_X = \langle f,y\rangle, \quad f\in X',\ y\in X. \qquad (18)$$
Assuming the well-posedness, the following inequalities hold:
$$\|PL\|_{\mathcal{L}(X,X)} = \sup_{x,y}\frac{(PLx,y)_X}{\|x\|_X\|y\|_X} = \sup_{x,y}\frac{\langle Lx,y\rangle}{\|x\|_X\|y\|_X} \le C, \qquad \|(PL)^{-1}\|_{\mathcal{L}(X,X)} \le \frac{1}{\beta}.$$
Therefore, the condition number of the preconditioned system is proved to be bounded:
$$\kappa(PL) := \|PL\|_{\mathcal{L}(X,X)}\,\|(PL)^{-1}\|_{\mathcal{L}(X,X)} \le C/\beta.$$
This type of preconditioner is frequently used in the literature and is characterized as the “mapping property” in a recent review paper Mardal.K;Winther.R2011a .
Let $\{\phi_i\}_{i=1}^n$ be a given basis of $X$ and $\{\phi_i'\}_{i=1}^n$ the corresponding dual basis of $X'$. Consider the matrix representations $\mathsf{P}$ and $\mathsf{L}$ of $P$ and $L$:
$$P(\phi_1',\cdots,\phi_n') = (\phi_1,\cdots,\phi_n)\mathsf{P}, \qquad L(\phi_1,\cdots,\phi_n) = (\phi_1',\cdots,\phi_n')\mathsf{L},$$
and the vector representation $\mathsf{x}$ of $x$:
$$x = (\phi_1,\cdots,\phi_n)\mathsf{x}.$$
Assume $\mathsf{L}$ is symmetric and $\mathsf{P}$ is SPD. Denote the mass matrix of $X$ by $\mathsf{M}$, i.e., $\mathsf{M}_{ij} = (\phi_j,\phi_i)_X$. In fact, $\mathsf{P} = \mathsf{M}^{-1}$. Then
$$\|PL\|_{\mathcal{L}(X,X)} = \sup_{x,y}\frac{(PLx,y)_X}{\|x\|_X\|y\|_X} = \sup_{\mathsf{x},\mathsf{y}}\frac{\mathsf{x}^T(\mathsf{P}\mathsf{L})^T\mathsf{M}\mathsf{y}}{(\mathsf{x}^T\mathsf{M}\mathsf{x})^{1/2}(\mathsf{y}^T\mathsf{M}\mathsf{y})^{1/2}} = \max_{\lambda\in\sigma(\mathsf{P}\mathsf{L})}|\lambda|.$$
Similarly,
$$\|(PL)^{-1}\|^{-1}_{\mathcal{L}(X,X)} = \min_{\lambda\in\sigma(\mathsf{P}\mathsf{L})}|\lambda|.$$
Therefore, $\kappa(PL) = \max_{\lambda\in\sigma(\mathsf{P}\mathsf{L})}|\lambda|\,\big/\,\min_{\lambda\in\sigma(\mathsf{P}\mathsf{L})}|\lambda|$.
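This eigenvalue characterization can be checked numerically on a toy example; the matrices below are random stand-ins of my own, not an actual discretization.

```python
import numpy as np

rng = np.random.default_rng(0)
n = 8
G = rng.standard_normal((n, n))
L = G + G.T                          # symmetric "stiffness" matrix
H = rng.standard_normal((n, n))
M = H @ H.T + n * np.eye(n)          # SPD "mass" matrix
P = np.linalg.inv(M)                 # Riesz-map preconditioner P = M^{-1}

# P @ L is self-adjoint in the M-inner product, so its spectrum is real
# even though the matrix itself is nonsymmetric.
eigs = np.linalg.eigvals(P @ L)
assert np.max(np.abs(eigs.imag)) < 1e-6

kappa = np.max(np.abs(eigs)) / np.min(np.abs(eigs))
print(f"kappa(PL) = {kappa:.2f}")
```

The same computation applies verbatim to the block preconditioners below, with `M` replaced by the block-diagonal norm matrix.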
A more general approach is via norm equivalence matrices Loghin.D;Wathen.A2004a . Given an SPD matrix $\mathsf{H}$, an inner product and norm can be defined correspondingly:
$$(\mathsf{x},\mathsf{x})_{\mathsf{H}} := (\mathsf{H}\mathsf{x},\mathsf{x}), \qquad \|\mathsf{x}\|_{\mathsf{H}}^2 := (\mathsf{x},\mathsf{x})_{\mathsf{H}}.$$
Nonsingular matrices $\mathsf{A}$ and $\mathsf{B}$ are $\mathsf{H}$-norm equivalent, denoted by $\mathsf{A}\sim_{\mathsf{H}}\mathsf{B}$, if there are constants $\gamma$ and $\Gamma$ independent of the size of the matrices such that
$$\gamma\|\mathsf{B}\mathsf{x}\|_{\mathsf{H}} \le \|\mathsf{A}\mathsf{x}\|_{\mathsf{H}} \le \Gamma\|\mathsf{B}\mathsf{x}\|_{\mathsf{H}}.$$
If $\mathsf{P}\mathsf{L}\sim_{\mathsf{H}}\mathsf{I}$ and $\mathsf{P}\mathsf{L}$ is symmetric with respect to $(\cdot,\cdot)_{\mathsf{H}}$, then MINRES preconditioned by $\mathsf{P}$ has the following convergence estimate Loghin.D;Wathen.A2004a :
$$\frac{\|r_k\|_{\mathsf{H}}}{\|r_0\|_{\mathsf{H}}} \le 2\left(\frac{\Gamma-\gamma}{\Gamma+\gamma}\right)^{k/2}.$$
Consider the preconditioner $\mathsf{P}$ defined as the matrix representation of $P$ in (18). It is easy to see that $\mathsf{P}\mathsf{L}\sim_{\mathsf{M}}\mathsf{I}$. Note that $\mathsf{P}\mathsf{L}$ is symmetric with respect to $(\cdot,\cdot)_{\mathsf{M}}$.
This can help in the design of preconditioners for CG and MINRES. Preconditioning GMRES differs in that it usually depends on field-of-values analysis Loghin.D;Wathen.A2004a .
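A minimal sketch of this recipe with SciPy's MINRES on a toy saddle-point system (random stand-in blocks of my own; for simplicity the block-diagonal preconditioner uses the exact Schur complement, which a practical method would replace by a spectrally equivalent SPD approximation):

```python
import numpy as np
from scipy.sparse.linalg import LinearOperator, minres

rng = np.random.default_rng(1)

# A toy symmetric indefinite saddle-point system S z = b.
n, m = 20, 10
G = rng.standard_normal((n, n))
A = G @ G.T + n * np.eye(n)            # SPD (1,1) block
B = rng.standard_normal((m, n))        # coupling block
S = np.block([[A, B.T], [B, np.zeros((m, m))]])

# SPD block-diagonal preconditioner suggested by the mapping property:
# the inverse of blockdiag(A, Schur complement).
Ainv = np.linalg.inv(A)
Scinv = np.linalg.inv(B @ Ainv @ B.T)

def apply_P(r):
    return np.concatenate((Ainv @ r[:n], Scinv @ r[n:]))

P = LinearOperator((n + m, n + m), matvec=apply_P)

b = rng.standard_normal(n + m)
z, info = minres(S, b, M=P, maxiter=500)
assert info == 0  # converged
print("relative residual:", np.linalg.norm(S @ z - b) / np.linalg.norm(b))
```

With the exact Schur complement, the preconditioned spectrum has only three distinct values, so MINRES converges in a handful of iterations.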
In the rest of the paper, we will use Theorem 3.1 and Theorem 3.2 to prove the well-posedness of the different formulations of the Biot model under different choices of . Then, based on the well-posedness, we show the corresponding optimal block preconditioners.
## 5 A two-field formulation
The preconditioning for the two-field system (6) has been studied extensively in the literature Phoon.K;Toh.K;Chan.S;Lee.F2002a ; Toh.K;Phoon.K;Chan.S2004a ; Haga.J;Osnes.H;Langtangen.H2012a ; Haga.J;Osnes.H;Langtangen.H2011a , where the Schur complement approach is usually used to develop preconditioners. In this paper, similar to adler2017robust , we briefly formulate a preconditioner based on the well-posedness of the linear systems for the two-field Biot model.
We first study the well-posedness of (6), beginning with a change of variable to symmetrize (6). With an abuse of notation, we still use $p$ to denote the pressure after the change of variable. Next, we introduce the function spaces for the displacement and the pressure. Due to the boundary conditions (3), we consider
$$U \subset H_D^1(\Omega) := \{u\in (H^1(\Omega))^n \mid u = 0, \ \text{on } \Gamma_{D,u}\}$$
for the displacement and
$$Q_c \subset H_P^1(\Omega) := \{p\in H^1(\Omega) \mid p = 0, \ \text{on } \Gamma_{N,v}\}$$
for the pressure. Here, we use the subscript “c” to suggest the continuity of the functions in $Q_c$. We assume $|\Gamma_{D,u}| > 0$ in the rest of this paper so that the elasticity operator is nonsingular on $U$. We also assume that $|\Gamma_{N,v}| > 0$ such that the divergence operator is surjective on the pressure space.
Then, we define the following bilinear forms:
$$\begin{aligned}
&\text{for } u,\phi\in U, && a_I(u,\phi) = (2\mu\,\epsilon(u),\epsilon(\phi)) + (\lambda\nabla\cdot u,\nabla\cdot\phi),\\
&\text{for } u\in U,\ p\in Q_c, && b_I(u,p) = (\nabla\cdot u,p),\\
&\text{for } p,q\in Q_c, && d_I(p,q) = (\kappa^{-1}\nabla p,\nabla q) + (\xi p,q),
\end{aligned}$$
where $\kappa = \dfrac{\alpha^2}{k\Delta t}$ and $\xi = \dfrac{S}{\alpha^2}$.
Now, we introduce the notation for the kernel spaces:
$$Z_I = \{u\in U \mid b_I(u,q) = 0, \ \forall q\in Q_c\}, \qquad K_I = \{p\in Q_c \mid b_I(\phi,p) = 0, \ \forall \phi\in U\}.$$
The variational formulation of (6) is as follows:
Find $(u,p)\in U\times Q_c$ such that, for all $(\phi,q)\in U\times Q_c$, the following equations hold
$$\begin{cases}
a_I(u,\phi) + b_I(\phi,p) = (f,\phi),\\
b_I(u,q) - d_I(p,q) = (g,q).
\end{cases} \qquad (19)$$
We define the norms as follows:
$$\|u\|_U^2 = a_I(u,u), \qquad \|q\|_{Q_c}^2 = \beta^{-1}\|q\|_0^2 + d_I(q,q), \qquad (20)$$
where $\beta = 2\mu + \lambda$.
This variational formulation (19) is proved to be well-posed under the norms $\|\cdot\|_U$ and $\|\cdot\|_{Q_c}$ provided that the following inf-sup condition holds:
$$\forall p\in (K_I)^\perp, \quad \sup_{u\in U}\frac{b_I(u,p)}{\|u\|_1} \gtrsim \|p\|_0. \qquad (21)$$
It is well known that (21) holds for $U = (H_D^1(\Omega))^n$ and $Q_c = H_P^1(\Omega)$ on a bounded domain with Lipschitz boundary Bramble.J;Lazarov.R;Pasciak.J2001a ; Bramble.J2003a . Moreover, (21) holds for stable Stokes FEM pairs Boffi.D;Brezzi.F;Fortin.M2013a .
###### Theorem 5.1
Assume that the inf-sup condition (21) holds and $|\Gamma_{N,v}| > 0$. The system (19) is uniformly well-posed with respect to parameters under the norms $\|\cdot\|_U$ and $\|\cdot\|_{Q_c}$ defined in (20).
###### Proof
To prove the well-posedness, we just need to verify the assumptions of Theorem 3.1.
As we assume that $|\Gamma_{N,v}| > 0$, we know that $K_I = \{0\}$, and then (14) is trivial. By definition, (8), (9), and (10) are straightforward to verify.
Based on (21), the following inf-sup condition is implied:
$$\forall p\in (K_I)^\perp, \quad \sup_{u\in U}\frac{b_I(u,p)}{\|u\|_U} \gtrsim \|p\|_{Q_c}. \qquad (22)$$
Then (11) is verified.
Therefore, the proof is finished by applying Theorem 3.1.
With the well-posedness of (19) proved, an optimal preconditioner can be formulated. We first introduce some matrix notation. Given finite element basis functions $\{\phi_i\}$ for $U$ and $\{\chi_i\}$ for $Q_c$, define the following stiffness matrices: $(\mathsf{A}_u)_{ij} = a_I(\phi_j,\phi_i)$, $(\mathsf{B}_u)_{ij} = b_I(\phi_j,\chi_i)$, $(\mathsf{A}_p)_{ij} = d_I(\chi_j,\chi_i)$, and the mass matrix $(\mathsf{M}_p)_{ij} = (\chi_j,\chi_i)$.
The matrix forms of the system and preconditioner are
$$\mathsf{S}_{II} = \begin{pmatrix}\mathsf{A}_u & \mathsf{B}_u^T\\ \mathsf{B}_u & -\mathsf{A}_p\end{pmatrix} \quad \text{and} \quad \mathsf{P}_{II} = \begin{pmatrix}\mathsf{A}_u & \\ & \beta^{-1}\mathsf{M}_p + \mathsf{A}_p\end{pmatrix}^{-1},$$
respectively.
###### Remark 2
In case $|\Gamma_{N,v}| = 0$, the kernel space $K_I$ contains constant functions. We can similarly prove the well-posedness, but the norm involves a projection term, which results in a dense matrix in the preconditioner. We refer to Lee.J;Mardal.K;Winther.R2015a for constructing preconditioners related to such norms.
In the literature, the preconditioners for the two-field formulation are mostly based on Schur complement approaches. The exact Schur complement preconditioner of $\mathsf{S}_{II}$, i.e.,
$$\begin{pmatrix}\mathsf{A}_u & \\ & \mathsf{A}_p + \mathsf{B}_u\mathsf{A}_u^{-1}\mathsf{B}_u^T\end{pmatrix},$$
is known to be an optimal preconditioner Elman.H;Silvester.D;Wathen.A2005a , although $\mathsf{A}_p + \mathsf{B}_u\mathsf{A}_u^{-1}\mathsf{B}_u^T$ is dense and cannot be obtained. Practical approximations of the Schur complement, such as
$$\mathsf{B}_u\,\mathrm{diag}(\mathsf{A}_u)^{-1}\mathsf{B}_u^T \quad \text{and} \quad \mathrm{diag}\big(\mathsf{B}_u\,\mathrm{diag}(\mathsf{A}_u)^{-1}\mathsf{B}_u^T\big),$$
have also been investigated Phoon.K;Toh.K;Chan.S;Lee.F2002a ; Toh.K;Phoon.K;Chan.S2004a ; Haga.J;Osnes.H;Langtangen.H2012a ; Haga.J;Osnes.H;Langtangen.H2011a .
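The quality of such approximations can be measured by their spectral equivalence with the exact Schur complement. A toy numpy sketch (random stand-in matrices of my own, not a Biot discretization):

```python
import numpy as np

rng = np.random.default_rng(3)
n, m = 30, 12
G = rng.standard_normal((n, n))
Au = G @ G.T + np.eye(n)                          # SPD elasticity block
Bu = rng.standard_normal((m, n))                  # coupling block

S_exact = Bu @ np.linalg.solve(Au, Bu.T)          # dense exact Schur complement
S_diag = Bu @ np.diag(1.0 / np.diag(Au)) @ Bu.T   # diag(Au)-based approximation
S_dd = np.diag(np.diag(S_diag))                   # fully diagonal variant

def spectral_equivalence(S, Shat):
    """Spread of the generalized eigenvalues S x = lam * Shat x.

    A ratio close to 1 means Shat is a good preconditioner for S."""
    lam = np.abs(np.linalg.eigvals(np.linalg.solve(Shat, S)))
    return lam.max() / lam.min()

print("diag(Au) approximation:  ", spectral_equivalence(S_exact, S_diag))
print("double-diag approximation:", spectral_equivalence(S_exact, S_dd))
```

Both surrogates are SPD and cheap to build, trading some spectral equivalence for sparsity.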
The two-field formulation is usually considered computationally efficient, as it involves the fewest variables and, therefore, smaller linear systems to solve than the three-field formulation (5). However, the two-field formulation (with continuous pressure elements) exhibits oscillations in the pressure field, and more expanded systems, such as the three-field formulation, are shown to be more stable Ferronato.M;Castelletto.N;Gambolati.G2010a ; Haga.J;Osnes.H;Langtangen.H2012b . Motivated by this fact, we study a three-field formulation Haga.J;Osnes.H;Langtangen.H2012b in the next section.
## 6 A three-field formulation
In this section, we will show the well-posedness of the three-field formulation (5), briefly formulate the diagonal block robust preconditioners of hong2018parameter ; adler2019robust as special cases, and propose some new preconditioners for the three-field formulation guided by the well-posedness.
### 6.1 The three-field formulation
We can write (5) as a symmetric problem by rescaling. Introduce
$$\tilde{v} = \frac{\Delta t}{\alpha}v, \qquad \tilde{p} = -\alpha p.$$
The three-field system (5) can be rewritten as
$$\begin{cases}
-\nabla\cdot(2\mu\,\epsilon(u) + \lambda(\nabla\cdot u)I) - \nabla\tilde{p} = f,\\
\kappa\tilde{v} - \nabla\tilde{p} = r,\\
\nabla\cdot u + \nabla\cdot\tilde{v} - \xi\tilde{p} = g.
\end{cases} \qquad (23)$$
With an abuse of notation, we still use $v$ and $p$ to denote the scaled velocity $\tilde{v}$ and the scaled pressure $\tilde{p}$, respectively. Then, we introduce the function spaces:
$$V \subset H_D(\mathrm{div},\Omega) := \{v\in H(\mathrm{div},\Omega) \mid v\cdot n = 0, \ \text{on } \Gamma_{D,v}\},$$
$$W = U\times V, \qquad Q\subset L^2(\Omega),$$
and bilinear forms
$$\text{for } (u,v),(\phi,\psi)\in W, \quad a_{II}(u,v;\phi,\psi) = a_I(u,\phi) + (\kappa v,\psi),$$
$$\text{for } (u,v)\in W,\ p\in Q, \quad b_{II}(u,v;p) = b_I(u,p) + (\nabla\cdot v,p),$$
$$\text{for } p,q\in Q, \quad c_I(p,q) = (\xi p,q), \quad \xi > 0.$$
We define the corresponding kernel spaces related to $b_{II}$:
$$Z_{II} = \{(u,v)\in W \mid b_{II}(u,v;q) = 0, \ \forall q\in Q\},$$
$$K_{II} = \{p\in Q \mid b_{II}(\phi,\psi;p) = 0, \ \forall(\phi,\psi)\in W\}.$$
Note that due to the assumption that the divergence operator is surjective on $Q$, we have $K_{II} = \{0\}$.
Then, the weak formulation is as follows:
Find $(u,v)\in W$ and $p\in Q$ such that, for all $(\phi,\psi)\in W$ and $q\in Q$, the following equations hold
$$\begin{cases}
a_{II}(u,v;\phi,\psi) + b_{II}(\phi,\psi;p) = (f,\phi) + (r,\psi),\\
b_{II}(u,v;q) - c_I(p,q) = (g,q).
\end{cases} \qquad (24)$$
The additional term $c_I(p,q)$ corresponds to different versions of the Biot models Axelsson.O;Blaheta.R;Byczanski.P2012a .
The well-posedness of this saddle point problem can be proved with different choices of norms for $W$ and $Q$. We discuss some of these options in the rest of this section.
### 6.2 Augmented Lagrangian preconditioners
The stability of the three-field system (23) is closely related to the stability of the pairs $U$-$Q$ and $V$-$Q$. In particular, it is considered stable if $U$-$Q$ satisfies (21) and $V$-$Q$ satisfies
$$\forall p\in (K_v)^\perp, \quad \sup_{v\in V}\frac{(\nabla\cdot v,p)}{\|v\|_{H(\mathrm{div})}} \gtrsim \|p\|_0, \qquad (25)$$
where
$$K_v := \{p\in Q \mid (\nabla\cdot v,p) = 0, \ \forall v\in V\}.$$
(25) holds for $V = H_D(\mathrm{div},\Omega)$ and $Q = L^2(\Omega)$ and, in discrete cases, there are many stable pairs, such as Raviart-Thomas elements Raviart.P;Thomas.J1977a for $V$ and piecewise polynomials for $Q$.
The augmented Lagrangian (AL) method Benzi.M;Olshanskii.M2006a ; Xu.J;Yang.K2015a incorporates the constraint into the norm. The constraint here is
$$\nabla\cdot(u+v) = 0.$$
Therefore, it is natural to consider the following norms for the AL method.
Let $P_Q$ be the $L^2$ projection from $L^2(\Omega)$ onto $Q$. We define the norms for the spaces $V$, $W$, and $Q$ as follows:
$$\begin{aligned}
\|v\|_V^2 &= (\kappa v,v), \qquad (26)\\
\|(u,v)\|_W^2 &= \|u\|_U^2 + \|v\|_V^2 + \beta\|P_Q\nabla\cdot(u+v)\|_0^2,\\
\|q\|_Q^2 &= (\beta^{-1}q,q),
\end{aligned}$$
where $\kappa$ is the coefficient in the bilinear form $a_{II}$, and $\beta$ is an undetermined parameter.
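A toy numerical illustration of the AL idea (random stand-in blocks of my own; the block-diagonal preconditioner below uses the $\beta$-augmented (1,1) block and the scaled pressure mass matrix, a choice made in the spirit of the norms above rather than taken from the paper):

```python
import numpy as np

rng = np.random.default_rng(2)
n, m = 20, 10
G = rng.standard_normal((n, n))
A = G @ G.T + np.eye(n)              # SPD "elasticity" block
B = rng.standard_normal((m, n))      # discrete divergence
Mp = np.eye(m)                       # toy pressure mass matrix
S = np.block([[A, B.T], [B, np.zeros((m, m))]])

def al_condition(beta):
    """Spectral condition of S preconditioned by blockdiag(A_beta, beta*Mp)^{-1},
    where A_beta = A + beta * B^T Mp^{-1} B is the augmented block."""
    A_beta = A + beta * B.T @ np.linalg.inv(Mp) @ B
    P = np.block([[np.linalg.inv(A_beta), np.zeros((n, m))],
                  [np.zeros((m, n)), beta * np.linalg.inv(Mp)]])
    lam = np.sort(np.abs(np.linalg.eigvals(P @ S)))
    return lam[-1] / lam[0]

for beta in (0.01, 1.0, 100.0):
    print(beta, al_condition(beta))
# Larger beta tightens the spectrum of the preconditioned system.
```

This mirrors the role of $\beta$ in the norm $\|(u,v)\|_W$: the augmentation term carries the divergence constraint into the preconditioner.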
To prove the well-posedness of (24), we just need to verify the assumptions of Theorem 3.1.
Given (21), we have
$$\sup_{u,v}\frac{b_{II}(u,v;q)}{\|(u,v)\|_W} \ge \sup_{u}\frac{(\nabla\cdot u,q)}{(\|u\|_U^2 + \beta\|P_Q\nabla\cdot u\|_0^2)^{1/2}} \gtrsim \max\{\mu,\lambda,\beta\}^{-1/2}\|q\|_0. \qquad (27)$$
For the case in which $\beta \gtrsim \max\{\mu,\lambda\}$, the right-hand side of (27) is equal to $\beta^{-1/2}\|q\|_0$.
Given (25), we can prove another inequality:
$$\sup_{u,v}\frac{b_{II}(u,v;q)}{\|(u,v)\|_W} \ge \sup_{v}\frac{(\nabla\cdot v,q)}{(\|v\|_V^2 + \beta\|P_Q\nabla\cdot v\|_0^2)^{1/2}} \gtrsim \max\{\kappa,\beta\}^{-1/2}\|q\|_0. \qquad (28)$$
Similarly, if we further assume that $\beta \gtrsim \kappa$, the right-hand side of (28) is equal to $\beta^{-1/2}\|q\|_0$. Note that this approach is used in Lipnikov.K2002a , where the displacement is set to be zero and the inf-sup condition of the $V$-$Q$ pair is assumed.
The boundedness of $b_{II}$ is easy to verify due to the additional term in the norm $\|\cdot\|_W$:
$$b_{II}(u,v;q) \le \|P_Q\nabla\cdot(u+v)\|_0\,\|q\|_0 \le \|(u,v)\|_W\,\|q\|_Q. \qquad (29)$$
The coercivity of $a_{II}$ on the kernel is straightforward to prove, as
$$\forall(u,v)\in Z_{II}, \quad a_{II}(u,v;u,v) = \|(u,v)\|_W^2. \qquad (30)$$
Because $a_{II}$ is uniformly coercive only on $Z_{II}$, we resort to Theorem 3.2 to prove the well-posedness.
###### Theorem 6.1
Assume $\beta \eqsim \min\{\max\{\mu,\lambda\},\kappa\}$, that $\xi\beta$ is uniformly bounded, and that the inf-sup conditions (21) and (25) hold. Then the system (23) is uniformly well-posed with respect to parameters under the norms $\|\cdot\|_W$ and $\|\cdot\|_Q$ defined in (26).
###### Proof
As $K_{II} = \{0\}$, (14) is trivial to prove. Consider $\beta = \min\{\max\{\mu,\lambda\},\kappa\}$ for the inf-sup condition of $b_{II}$. Due to this choice of $\beta$, the right-hand side of (27) or (28) is equal to $\beta^{-1/2}\|q\|_0 = \|q\|_Q$. Therefore, the inf-sup condition of $b_{II}$ is proved.
As $\xi\beta$ is uniformly bounded, we can prove that $c_I(p,q) \lesssim \|p\|_Q\|q\|_Q$. Therefore, the assumptions of Theorem 3.2 hold. Then the proof is finished by applying Theorem 3.2.
It is obvious that we only need to assume either (21) or (25) to prove the well-posedness of (24).
###### Corollary 1
Assume , is uniformly bounded, and that the inf-sup condition (21) holds. The system (23) is uniformly well-posed with respect to parameters under the norms defined in (26).
###### Proof
The proof follows from (27), (30), (9), and Theorem 3.2.
###### Corollary 2
Assume that , is uniformly bounded, and the inf-sup condition (25) holds. The system (23) is uniformly well-posed with respect to parameters under the norms defined in (26).
###### Proof
The proof follows from (28), (30), (9), and Theorem 3.2.
###### Remark 3
The assumption that both (21) and (25) hold results in a smaller parameter than the cases where only one of (21) and (25) holds.
Based on the well-posed formulation, we derive the corresponding optimal block diagonal preconditioner.
#### 6.2.1 Matrix form
We introduce some additional matrix notation. Also, we introduce the FEM basis for . Define the stiffness matrices , , , and .
Then the system matrix of the three-field formulation is
$$S_{III} = \begin{pmatrix} A_u & & B_u^T \\ & A_v & B_v^T \\ B_u & B_v & -C_p \end{pmatrix}.$$
The block preconditioner is
PIII1=⎛⎜ ⎜⎝Au+βBTuM−1pBuβ
https://socratic.org/questions/why-is-the-study-of-radioactivity-labeled-nuclear-chemistry

# Why is the study of radioactivity labeled nuclear chemistry?
Mar 11, 2016
Radioactivity is a result of changes in the nucleus of an atom.
#### Explanation:
Nuclear chemistry is the study of the nuclear structure of elements. It includes isotopes – many of which are radioactive – and transmutation, the build-up of heavier elements by the energetic fusion of two nuclei. Both radioactive decay and fusion can release large amounts of energy according to Einstein's famous equation.
$$E_r = \sqrt{(m_0 c^2)^2 + (pc)^2}$$

Here the $(pc)^2$ term represents the square of the Euclidean norm (total vector length) of the various momentum vectors in the system, which reduces to the square of the simple momentum magnitude if only a single particle is considered.

This equation reduces to $E = mc^2$ when the momentum term is zero. For photons, where $m_0 = 0$, it reduces to $E_r = pc$.
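Those two limiting cases can be checked numerically. The sketch below is an added illustration, not part of the original answer; the constants are standard defined values, and the electron is chosen here only as a convenient example particle.

```python
import math

C = 2.99792458e8      # speed of light in m/s (exact by definition)
EV = 1.602176634e-19  # joules per electronvolt (exact by definition)

def energy(m0, p):
    """Total relativistic energy E_r = sqrt((m0*c^2)^2 + (p*c)^2), in joules."""
    return math.sqrt((m0 * C ** 2) ** 2 + (p * C) ** 2)

# An electron at rest (p = 0): E_r reduces to m0*c^2, about 0.511 MeV.
m_e = 9.1093837015e-31  # electron rest mass in kg (CODATA 2018)
print(round(energy(m_e, 0.0) / EV / 1e6, 3))  # → 0.511

# A photon (m0 = 0): E_r reduces to p*c.
p = 1.0e-27  # an arbitrary photon momentum in kg*m/s
print(abs(energy(0.0, p) - p * C) < 1e-25)  # → True
```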
https://zims-en.kiwix.campusafrica.gos.orange.com/wikipedia_en_all_nopic/A/Josip_Plemelj

# Josip Plemelj
Josip Plemelj (December 11, 1873 – May 22, 1967) was a Slovene mathematician, whose main contributions were to the theory of analytic functions and the application of integral equations to potential theory. He was the first chancellor of the University of Ljubljana.
Josip Plemelj in 1920
Born: December 11, 1873
Died: May 22, 1967 (aged 93)
Alma mater: University of Vienna (PhD, 1898)
Known for: Sokhotski–Plemelj theorem
Doctoral students: Ivan Vidav
## Life
Plemelj was born in the village of Bled near Bled Castle in Austria-Hungary (now Slovenia); he died in Ljubljana, Yugoslavia (now Slovenia). His father, Urban, a carpenter and crofter, died when Josip was only a year old. His mother Marija, née Mrak, found bringing up the family alone very hard, but she was able to send her son to school in Ljubljana where Plemelj studied from 1886 to 1894. Due to a bench thrown into Tivoli Pond by him or his friends, he could not attend the school after he finished the fourth class and had to pass the final exam privately.[1] After leaving and obtaining the necessary examination results he went to the University of Vienna in 1894 where he had applied to Faculty of Arts to study mathematics, physics and astronomy. His professors in Vienna were von Escherich for mathematical analysis, Gegenbauer and Mertens for arithmetic and algebra, Weiss for astronomy, Stefan's student Boltzmann for physics.
In May 1898, Plemelj presented his doctoral thesis under Escherich's tutelage entitled Über lineare homogene Differentialgleichungen mit eindeutigen periodischen Koeffizienten (Linear Homogeneous Differential Equations with Uniform Periodical Coefficients). He continued with his study in Berlin (1899/1900) under the German mathematicians Frobenius and Fuchs and in Göttingen (1900/1901) under Klein and Hilbert.
In April 1902 he became a private senior lecturer at the University of Vienna. In 1906 he was appointed assistant at the Technical University of Vienna. In 1907 he became associate professor and in 1908 full professor of mathematics at the University of Chernivtsi (Ukrainian: Чернівці, Russian: Черновцы), Ukraine. From 1912 to 1913 he was dean of this faculty. In 1917 his political views led him to be forcibly ejected by the government and resettled in Moravia. After the First World War he became a member of the University Commission under the Slovene Provincial Government and helped establish the first Slovene university at Ljubljana, and was elected its first chancellor. In the same year he was appointed professor of mathematics at the Faculty of Arts. After the Second World War he joined the Faculty of Natural Science and Technology (FNT). He retired in 1957 after having lectured in mathematics for 40 years.
## Earliest contributions
Plemelj showed his great gift for mathematics early in elementary school. He mastered the whole of the high school syllabus by the beginning of the fourth year and began to tutor students for their graduation examinations. At that time he independently discovered the series for sin x and cos x. In fact, he found a series for the cyclometric function arccos x, then simply inverted it and guessed the principle behind the coefficients, although he did not yet have a proof.
Plemelj took great joy in difficult construction problems from geometry. From his high school days dates his construction of a regular seven-sided polygon inscribed in a circle, carried out exactly rather than approximately, with a solution as simple as an angle trisection; such a construction was not known in those days, which otherwise forces one back to the old Indian or Babylonian approximate constructions. He began to occupy himself with mathematics in the fourth and fifth class of high school. Besides mathematics he was also interested in natural science and especially astronomy. He studied celestial mechanics while still at high school. He liked observing the stars. His eyesight was so sharp that he could see the planet Venus even in the daytime.
## Research
Plemelj's main research interests were the theory of linear differential equations, integral equations, potential theory, the theory of analytic functions, and functional analysis. Plemelj encountered integral equations while still a student at Göttingen, when the Swedish professor Erik Holmgren gave a lecture on the work of his fellow countryman Fredholm on linear integral equations of the 1st and 2nd kind. Spurred on by Hilbert, Göttingen mathematicians attacked this new area of research and Plemelj was one of the first to publish original results on the question, applying the theory of integral equations to the study of harmonic functions in potential theory.
His most important work in potential theory is summarised in his 1911 book Researches in Potential Theory (Potentialtheoretische Untersuchungen),[2] which received the Jablonowski Society award in Leipzig (1500 marks), and the Richard Lieben award from the University of Vienna (2000 crowns) for the most outstanding work in the field of pure and applied mathematics written by any kind of 'Austrian' mathematician in the previous three years.
His most original contribution is the elementary solution he provided for the Riemann–Hilbert problem $f_+ = g\,f_-$ concerning the existence of a differential equation with given monodromy group. The solution, published in his 1908 article "Riemannian classes of functions with given monodromy group", rests on three formulas that now carry his name, which connect the values taken by a holomorphic function at the boundary of an arc Γ:[3]
$$f_{+}(z)=\frac{1}{2\pi i}\int_{\Gamma}\frac{\phi(t)-\phi(z)}{t-z}\,dt+\phi(z)$$

$$f(z)=\frac{1}{2\pi i}\int_{\Gamma}\frac{\phi(t)-\phi(z)}{t-z}\,dt+\frac{1}{2}\phi(z)$$

$$f_{-}(z)=\frac{1}{2\pi i}\int_{\Gamma}\frac{\phi(t)-\phi(z)}{t-z}\,dt,\qquad z\in\Gamma$$
These formulas are variously called the Plemelj formulae, the Sokhotsky-Plemelj formulae, or sometimes (mainly in German literature) the Plemelj-Sokhotsky Formulae, after the Russian mathematician Yulian Vasilievich Sokhotski (Юлиан Карл Васильевич Сохоцкий) (1842–1927).
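These boundary formulas have a well-known real-line counterpart, also called the Sokhotski–Plemelj theorem: as ε→0⁺, ∫φ(x)/(x−iε)dx tends to p.v.∫φ(x)/x dx + iπφ(0). The numerical sketch below is an added illustration, not from the article; it assumes the test function φ(x) = exp(−x²), for which the principal-value term vanishes by symmetry, so the integral should approach iπ.

```python
import math

def sokhotski_integral(eps, half_width=8.0, n=400_000):
    """Midpoint-rule approximation of the integral of exp(-x^2)/(x - i*eps)
    over [-half_width, half_width]; returns (real part, imaginary part)."""
    h = 2.0 * half_width / n
    re = im = 0.0
    for k in range(n):
        x = -half_width + (k + 0.5) * h
        phi = math.exp(-x * x)
        denom = x * x + eps * eps
        re += phi * x / denom    # Re 1/(x - i*eps) = x / (x^2 + eps^2)
        im += phi * eps / denom  # Im 1/(x - i*eps) = eps / (x^2 + eps^2)
    return re * h, im * h

re_part, im_part = sokhotski_integral(0.01)
# re_part is ~0 (principal value of an odd integrand);
# im_part is already close to pi*phi(0) = pi for this small eps
```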
From his methods for solving the Riemann problem developed the theory of singular integral equations (MSC (2000) 45-Exx), which was pursued above all by the Russian school headed by Nikoloz Muskhelishvili (Николай Иванович Мусхелишвили) (1891–1976).
Also important are Plemelj's contributions to the theory of analytic functions in solving the problem of uniformization of algebraic functions, his contributions to the formulation of the theorem on analytic continuation, and his treatises in algebra and number theory.
In 1912, Plemelj published a very simple proof of the special case of Fermat's last theorem where the exponent, n, is 5.[4] More difficult proofs of this case were first given by Dirichlet in 1828 and Legendre in 1830.[4]
His arrival in Ljubljana in 1919 was very important for development of mathematics in Slovenia. As a good teacher he had raised several generations of mathematicians and engineers. His most famous student is Ivan Vidav. After the Second World War Slovenska akademija znanosti in umetnosti (Slovene Academy of Sciences and Arts) (SAZU) had published his three-year course of lectures for students of mathematics: Teorija analitičnih funkcij (The theory of analytic functions), (SAZU, Ljubljana 1953, pp XVI+516), Diferencialne in integralske enačbe. Teorija in uporaba (Differential and integral equations. The theory and the application).
Plemelj found a formula for the sum of normal derivatives of a single-layer potential in the internal or external region. He was also fond of algebra and number theory, but published only a few contributions in these fields, for example a book entitled Algebra in teorija števil (Algebra and Number Theory) (SAZU, Ljubljana 1962, pp XIV+278). His last work, Problemi v smislu Riemanna in Kleina (Problems in the Sense of Riemann and Klein) (edition and translation by J. R. M. Radok, "Interscience Tracts in Pure and Applied Mathematics", No. 16, Interscience Publishers: John Wiley & Sons, New York, London, Sydney 1964, pp VII+175), was published abroad and deals with the questions that interested and occupied him most. His bibliography includes 33 items, of which 30 are scientific treatises, published in journals such as Monatshefte für Mathematik und Physik, "Sitzungsberichte der kaiserlichen Akademie der Wissenschaften" in Vienna, "Jahresbericht der deutschen Mathematikervereinigung", "Gesellschaft deutscher Naturforscher und Ärzte" in Verhandlungen, "Bulletin des Sciences Mathématiques", "Obzornik za matematiko in fiziko" and "Publications mathématiques de l'Université de Belgrade". Plemelj became known in the mathematical world when the French mathematician Charles Émile Picard described his works as "deux excellents memoires".
Plemelj was a regular member of the SAZU since its foundation in 1938, corresponding member of the JAZU (Yugoslav Academy of Sciences and Arts) in Zagreb, Croatia since 1923, corresponding member of the SANU (Serbian Academy of Sciences and Arts) in Belgrade since 1930 (1931). In 1954 he received the highest award for research in Slovenia, the Prešeren award. The same year he was elected for corresponding member of Bavarian Academy of Sciences in Munich.
In 1963, for his 90th anniversary, the University of Ljubljana granted him the title of honorary doctor. Plemelj was the first teacher of mathematics at the Slovene university, and in 1949 he became the first honorary member of the ZDMFAJ (Yugoslav Union of societies of mathematicians, physicists and astronomers). He left his villa in Bled to the DMFA, where his memorial room is today.
Plemelj did not prepare specially for lectures; he had no notes. He used to say that he thought over the lecture subject on the way from his home in Gradišče to the University. Students are said to have had the impression that he was creating the teaching material on the spot and that they were witnessing the formation of something new. He wrote formulae on the board beautifully, even though they were composed of Greek, Latin or Gothic letters. He demanded the same from students: they had to write distinctly.
Plemelj is said to have had a very refined ear for languages, and he laid a solid foundation for the development of Slovene mathematical terminology. He accustomed students to clear and logical phrasing. For example, he would become angry if they used the word rabiti ("to use") instead of the word potrebovati ("to need"). As he put it: "The engineer who does not know mathematics never needs it. But if he knows it, he uses it frequently".
1. Južnič, Stanislav; Prosen, Marjan (2005–2006). "Profesor Plemelj in komet 1847 I" [Professor Plemelj and Comet 1847 I] (PDF) (in Slovenian). 33 (3): 26–28. ISSN 0351-6652.
https://kevinkotze.github.io/ts-2-tut/

# 1 Basic setup for most empirical work
To have a look at the first program for this session, please open the file T2_arma.R. After providing a brief description of what this program seeks to achieve, the first thing that we usually do is clear all variables from the current environment and close all the plots. This is performed with the following commands:
rm(list = ls()) # clear all variables in workspace
graphics.off() # clear all plots
Thereafter, we will load the toolboxes that we need for this session. There are a few routines that I’ve compiled in a package that is contained on my GitHub account. To install this package, which is named tsm you need to run the following commands:
devtools::install_github("KevinKotze/tsm")
devtools::install_github("cran/fArma")
The next step is to make sure that you can access the routines in this package by making use of the library command.
library(tsm)
# 2 Simulated autoregressive process
Now we are good to go! Let’s generate 1000 observations for a simulated AR(1) time series process with $$\phi = 0.8$$. To ensure that we all get the same results, we set the seed to a predetermined value before we generate values for the respective variable, which has been assigned the name of $$x$$.
set.seed(123)
x <- arima.sim(model = list(ar = 0.8), n = 1000)
To plot the time series we can use plot.ts, which is quick and easy to use. Thereafter, we can take a look at the autocorrelation and partial autocorrelation functions for this variable using the ac command.
plot.ts(x)
ac(x, max.lag = 18)
To display the Ljung-Box statistic for the first lag we execute the following command, and note that the p-value suggests that the autocorrelation is different from zero.
Box.test(x, lag = 1, type = "Ljung-Box")
##
## Box-Ljung test
##
## data: x
## X-squared = 599.78, df = 1, p-value < 2.2e-16
Similarly, we could assign the output from the test to an object Box.res.
Box.res <- Box.test(x, lag = 1, type = "Ljung-Box")
To check what is in the object Box.res we could just type:
Box.res
##
## Box-Ljung test
##
## data: x
## X-squared = 599.78, df = 1, p-value < 2.2e-16
Or alternatively you could use print(Box.res). If we then want to collect the Q-statistics for the first 10 lags, we first create vectors to store the results.
Q.stat <- rep(0, 10) # creates a vector for ten observations, where 0 is repeated 10 times
Q.prob <- rep(0, 10)
Thereafter, we assign values for the Q-stat to element places in the different objects. For the first statistic:
Q.stat[1] <- Box.test(x, lag = 1, type = "Ljung-Box")$statistic

Q.prob[1] <- Box.test(x, lag = 1, type = "Ljung-Box")$p.value
And to view the data in the respective vectors we just type:
Q.stat
## [1] 599.7769 0.0000 0.0000 0.0000 0.0000
## [6] 0.0000 0.0000 0.0000 0.0000 0.0000
Q.prob
## [1] 0 0 0 0 0 0 0 0 0 0
Of course, rather than writing code to assign each value individually we can use the loop that takes the form:
for (i in 1:10) {
  Q.stat[i] <- Box.test(x, lag = i, type = "Ljung-Box")$statistic
  Q.prob[i] <- Box.test(x, lag = i, type = "Ljung-Box")$p.value
}
To graph both of these statistics together we could make use of the code:
op <- par(mfrow = c(1, 2)) # create plot area of (1 X 2)
plot(Q.stat, ylab = "", main = "Q-Statistic")
plot(Q.prob, ylab = "", ylim = c(0, 1), main = "Probability values")
par(op) # close plot area
These p-values suggest that there is significant autocorrelation in this time series process. To speed up the above computation you should vectorise your code and use the apply functions in R, which is important when dealing with routines or statistics that take a long time to compute.
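As a sketch of that vectorisation suggestion, the same ten statistics can be computed with sapply instead of a preallocated loop (this block is an added illustration; it reproduces the Q.stat and Q.prob objects above):

```r
# vectorised alternative to the loop: apply Box.test over the lags directly
Q.stat <- sapply(1:10, function(i) Box.test(x, lag = i, type = "Ljung-Box")$statistic)
Q.prob <- sapply(1:10, function(i) Box.test(x, lag = i, type = "Ljung-Box")$p.value)
```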
We can now fit a model to this process with the aid of an AR(1) specification and look at the results that are stored in the arma10 object that we have created.
arma10 <- arima(x, order = c(1, 0, 0), include.mean = FALSE) # uses ARIMA(p,d,q) specification
arma10
##
## Call:
## arima(x = x, order = c(1, 0, 0), include.mean = FALSE)
##
## Coefficients:
## ar1
## 0.7783
## s.e. 0.0199
##
## sigma^2 estimated as 1.006: log likelihood = -1422.49, aic = 2848.98
To take a look at what the residuals look like we plot the residuals that are stored in the arma10 object.
par(mfrow = c(1, 1))
plot(arma10$residuals)

Now let's take a look at the ACF and PACF for the residuals.

ac(arma10$residuals, max.lag = 18)
There would appear to be no serial correlation in the residual.
# 3 Simulated moving average process
Now lets generate a MA(1) process and store the values in an object that we call y.
y <- arima.sim(model = list(ma = 0.7), n = 1000)
As is always the case, the first thing that we do is visually inspect the data.
par(mfrow = c(1, 1))
plot.ts(y)
We can then consider the ACF and PACF for this variable.
ac(y, max.lag = 18)
To fit a first-order moving average model to the data, where the estimation results are stored in the object arma01 we execute the commands:
arma01 <- arima(y, order = c(0, 0, 1)) # uses ARIMA(p,d,q) with constant
arma01
##
## Call:
## arima(x = y, order = c(0, 0, 1))
##
## Coefficients:
## ma1 intercept
## 0.6929 0.0634
## s.e. 0.0237 0.0534
##
## sigma^2 estimated as 0.9962: log likelihood = -1417.38, aic = 2840.75
To inspect the residuals we execute the following commands and would note that they take on features of white noise.
par(mfrow = c(1, 1))
plot(arma01$residuals)

ac(arma01$residuals, max.lag = 18)
# 4 Simulated ARMA process
As noted in the lectures, the values of the autocorrelation and partial autocorrelation functions for an ARMA process are equivalent to some form of weighted sum of these functions for the individual autoregressive and moving average components. This is displayed below, where we first simulate the autoregressive and moving average components before we provide the results for the ARMA process.
x <- arima.sim(model = list(ar = 0.4), n = 200) # AR(1) process
ac(x, max.lag = 20)
x <- arima.sim(model = list(ma = 0.5), n = 200) # MA(1) process
ac(x, max.lag = 20)
x <- arima.sim(model = list(ar = 0.4, ma = 0.5), n = 200) ## ARMA(1,1)
ac(x, max.lag = 20)
In the following exercise we will simulate an ARMA(2,1) process and try to see whether we can identify it without any prior knowledge. To identify the process we will make use of the ACF & PACF as well as the information criteria.
z <- arima.sim(model = list(ar = c(0.6, -0.2), ma = c(0.4)),
n = 200)
ac(z, max.lag = 18)
The results from the ACF & PACF would suggest that we are at most dealing with an ARMA(3,2). To estimate all models with an order equal to or less than this, we could proceed as follows, storing the AIC values in the object arma.res.
arma.res <- rep(0, 16)

arma.res[1] <- arima(z, order = c(3, 0, 2))$aic  # fit arma(3,2) and save aic value
arma.res[2] <- arima(z, order = c(2, 0, 2))$aic
arma.res[3] <- arima(z, order = c(2, 0, 1))$aic
arma.res[4] <- arima(z, order = c(1, 0, 2))$aic
arma.res[5] <- arima(z, order = c(1, 0, 1))$aic
arma.res[6] <- arima(z, order = c(3, 0, 0))$aic
arma.res[7] <- arima(z, order = c(2, 0, 0))$aic
arma.res[8] <- arima(z, order = c(0, 0, 2))$aic
arma.res[9] <- arima(z, order = c(3, 0, 2), include.mean = FALSE)$aic
arma.res[10] <- arima(z, order = c(2, 0, 2), include.mean = FALSE)$aic
arma.res[11] <- arima(z, order = c(2, 0, 1), include.mean = FALSE)$aic
arma.res[12] <- arima(z, order = c(1, 0, 2), include.mean = FALSE)$aic
arma.res[13] <- arima(z, order = c(1, 0, 1), include.mean = FALSE)$aic
arma.res[14] <- arima(z, order = c(3, 0, 0), include.mean = FALSE)$aic
arma.res[15] <- arima(z, order = c(2, 0, 0), include.mean = FALSE)$aic
arma.res[16] <- arima(z, order = c(0, 0, 2), include.mean = FALSE)$aic
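The sixteen assignments above can also be generated programmatically. The sketch below is an added illustration, where the list of candidate orders and the loop structure are my own; the fitted models and AIC values are the same:

```r
# the same sixteen AIC values, looping over candidate (p, q) orders,
# first with an estimated mean and then without one
orders <- list(c(3, 2), c(2, 2), c(2, 1), c(1, 2),
               c(1, 1), c(3, 0), c(2, 0), c(0, 2))
arma.res <- unlist(lapply(c(TRUE, FALSE), function(with.mean) {
  sapply(orders, function(o) {
    arima(z, order = c(o[1], 0, o[2]), include.mean = with.mean)$aic
  })
}))
```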
To find the model that has the lowest value for the AIC statistic we could execute the code:
which(arma.res == min(arma.res))
## [1] 13
This would suggest that an ARMA(1,1) provides a suitable fit for the model, which is perfect, but let's have a look at the diagnostics. Hence, we re-estimate the model, store all the results and consider whether there is serial correlation in the residuals.
arma11 <- arima(z, order = c(1, 0, 1), include.mean = FALSE)
ac(arma11$residuals, max.lag = 20)

Q.stat <- rep(NA, 10) # vector of ten observations
Q.prob <- rep(NA, 10)

for (i in 4:10) {
  Q.stat[i] <- Box.test(arma11$residuals, lag = i, type = "Ljung-Box", fitdf = 3)$statistic
  Q.prob[i] <- Box.test(arma11$residuals, lag = i, type = "Ljung-Box", fitdf = 3)$p.value
}

op <- par(mfrow = c(1, 2))
plot(Q.stat, ylab = "", main = "Q-Statistic")
plot(Q.prob, ylab = "", ylim = c(0, 1), main = "Probability values")
par(op)

This would appear to provide suitable results.

# 5 Structural breaks

To test for structural breaks we can make use of the strucchange package that may be found on the CRAN repository. Before you install this new package, you may wish to clear the workspace environment and close all graphics by running the commands:

rm(list = ls())

graphics.off()

To install this package from CRAN we could use the Packages tab and click on install, or one could enter the command: install.packages('strucchange'). Thereafter, we make use of the library command to access the routines in this package:

library(strucchange)

To generate AR(1) and white noise processes with 100 observations each, where we make use of the same seed value, we execute the following commands:

set.seed(123) # setting the seed so we all get the same results

x1 <- arima.sim(model = list(ar = 0.9), n = 100) # simulate 100 obs for AR(1)

x2 <- rnorm(100) # simulate 100 obs for white noise process

To join these two time series variables, we make use of the concatenate command, which leaves us with a structural break after observation 100.

y <- c((10 + x1), x2)

plot.ts(y)

The underlying model that we are going to use to describe this process is an AR(1); however, before we are able to execute these tests, we need to organise our variables in a small data frame. To arrange the variable of interest and a lagged variant of this variable in a data.frame, we make use of the following commands:

dat <- data.frame(cbind(y[-1], y[-(length(y))]))

colnames(dat) <- c("ylag0", "ylag1")

Thereafter, we can make use of the QLR statistic, which takes the form of an F-test, to detect a possible structural break.
fs <- Fstats(ylag0 ~ ylag1, data = dat)

breakpoints(fs) # where the breakpoint is

##
## Optimal 2-segment partition:
##
## Call:
## breakpoints.Fstats(obj = fs)
##
## Breakpoints at observation number:
## 99
##
## Corresponding to breakdates:
## 0.4924623

sctest(fs, type = "supF") # the significance of this breakpoint

##
## supF test
##
## data: fs
## sup.F = 184.43, p-value < 2.2e-16

plot(fs)

These results suggest that there is a structural break that originates at observation 99 (after we lost an observation due to taking lags). In addition, the p-value suggests that this break is statistically significant, and the plot confirms this, as the F-statistic breaks through the confidence interval.

As an alternative we can make use of the CUSUM test statistic, which may be executed with the commands:

cusum <- efp(ylag0 ~ ylag1, type = "OLS-CUSUM", data = dat)

plot(cusum)

Note that in this case the test does not suggest that there is a structural break, as the value of the coefficient is not subject to any sudden change. This should not be terribly surprising, as the addition of white noise would not add any additional information to the part of the regression model that is to be explained, which would not influence the value of the coefficient by a great deal.

# 6 Univariate models for real data

## 6.1 South African gross domestic product

Before we work through the next part of this tutorial we will clear the workspace environment again and close all the figures that we have plotted.

rm(list = ls())

graphics.off()

In this tutorial we would like to make use of the tsm and strucchange packages, so we run the commands:

library(tsm)

library(strucchange)

All the data for this tut has been preloaded in the tsm package. It includes the most recent measures of South African Real Gross Domestic Product published by the South African Reserve Bank, which has the code KBP6006D.
Hence, to retrieve the data from the package and create the object for the series as dat, we execute the following commands:

dat <- sarb_quarter$KBP6006D
dat.tmp <- diff(log(na.omit(dat)) * 100, lag = 1)
head(dat)
## [1] 565040 574255 584836 590609 594892 592293
gdp <- ts(dat.tmp, start = c(1960, 2), frequency = 4)
Note that the object dat.tmp is the quarter-on-quarter growth rate for this variable, after all na’s have been removed. Since the first observation in this time series is 1960q2, we can create a time series object for this variable that is labelled gdp. To make sure that these calculations and extractions have been performed correctly, we can plot this data
plot.ts(gdp)
If we are of the opinion that this seems reasonable, we can proceed by checking for structural breaks. In this instance, we once again make use of an AR(1) model, so we create a dataframe for the variables:
dat.gdp <- cbind(gdp, lag(gdp, k = -1))
colnames(dat.gdp) <- c("ylag0", "ylag1")
dat.gdp <- window(dat.gdp, start = c(1960, 3))
Thereafter, we can generate QLR statistic with the commands:
fs <- Fstats(ylag0 ~ ylag1, data = dat.gdp)
breakpoints(fs)
##
## Optimal 2-segment partition:
##
## Call:
## breakpoints.Fstats(obj = fs)
##
## Breakpoints at observation number:
## 42
##
## Corresponding to breakdates:
## 0.1806167
sctest(fs, type = "supF")
##
## supF test
##
## data: fs
## sup.F = 73.391, p-value = 3.461e-15
plot(fs)
These results suggest that there may be a couple of breakpoints. The first structural break would appear to arise during the early 1970’s, so we exclude the first part of the time series with the command:
dat.gdp1 <- window(dat.gdp, start = c(1972, 1))
To consider whether or not the remaining time series contains a structural break we generate the QLR statistic once again.
fs <- Fstats(ylag0 ~ ylag1, data = dat.gdp1)
breakpoints(fs)
##
## Optimal 2-segment partition:
##
## Call:
## breakpoints.Fstats(obj = fs)
##
## Breakpoints at observation number:
## 32
##
## Corresponding to breakdates:
## 0.1712707
sctest(fs, type = "supF")
##
## supF test
##
## data: fs
## sup.F = 23.192, p-value = 0.0002651
plot(fs)
As this suggests that there is another structural break around 1980, we consider the test statistics for the remaining sample.
dat.gdp2 <- window(dat.gdp, start = c(1980, 4))
fs <- Fstats(ylag0 ~ ylag1, data = dat.gdp2)
breakpoints(fs)
##
## Optimal 2-segment partition:
##
## Call:
## breakpoints.Fstats(obj = fs)
##
## Breakpoints at observation number:
## 49
##
## Corresponding to breakdates:
## 0.3287671
sctest(fs, type = "supF")
##
## supF test
##
## data: fs
## sup.F = 5.9882, p-value = 0.421
plot(fs)
This would appear to be satisfactory. We will then label this subsample y and can consider the degree of autocorrelation.
y <- window(gdp, start = c(1981, 1))
ac(as.numeric(y))
This variable has a bit of persistence and does not appear to be nonstationary. The maximum order of this variable is an AR(1), MA(2), or some combination of the two. We can then check the information criteria for a number of candidate models by constructing the following vector for the statistics:
arma.res <- rep(0, 5)
arma.res[1] <- arima(y, order = c(1, 0, 2))$aic
arma.res[2] <- arima(y, order = c(1, 0, 1))$aic
arma.res[3] <- arima(y, order = c(1, 0, 0))$aic
arma.res[4] <- arima(y, order = c(0, 0, 2))$aic
arma.res[5] <- arima(y, order = c(0, 0, 1))$aic
To find the model with the smallest AIC value we can then execute the command:
which(arma.res == min(arma.res))
## [1] 1
These results suggest that the smallest value is provided by ARMA(1,2). With this in mind we estimate the parameter values for this model structure.
arma <- arima(y, order = c(1, 0, 2))
Thereafter, we look at the residuals for the model to determine if there is any serial correlation.
par(mfrow = c(1, 1))
plot(arma$residuals)
ac(arma$residuals)
These results suggest that there is no remaining serial correlation, but we could also take a look at the Q-statistics to confirm this.
Q.stat <- rep(NA, 12)
Q.prob <- rep(NA, 12)
for (i in 6:12) {
    Q.stat[i] <- Box.test(arma$residuals, lag = i, type = "Ljung-Box", fitdf = 2)$statistic
    Q.prob[i] <- Box.test(arma$residuals, lag = i, type = "Ljung-Box", fitdf = 2)$p.value
}
op <- par(mfrow = c(1, 2))
plot(Q.stat, ylab = "", main = "Q-Statistic")
plot(Q.prob, ylab = "", ylim = c(0, 1), main = "Probability values")
par(op)
There certainly don’t appear to be too many problems here. However, given the level of persistence that is suggested by the autocorrelation function, the model may be over-parameterised. Recall that all the results from the model estimation are stored in the object arma, and we could derive the t-statistics from the standard errors. However, there is another package, named fArma, that displays these results in a more convenient manner. To download this package we could click on the install button that is located on the Packages tab, or we could run the command install.packages('fArma') in the Console. Once the package is installed, we use the library command to access its routines. Thereafter, we estimate the model with the armaFit command.
library(fArma)
fit1 <- armaFit(~arma(1, 2), data = y, include.mean = FALSE)
summary(fit1)
##
## Title:
##  ARIMA Modelling
##
## Call:
##  armaFit(formula = ~arma(1, 2), data = y, include.mean = FALSE)
##
## Model:
##  ARIMA(1,0,2) with method: CSS-ML
##
## Coefficient(s):
##     ar1      ma1      ma2
##  0.6817  -0.1041   0.1697
##
## Residuals:
##     Min      1Q  Median      3Q     Max
## -2.9969 -0.1345  0.1774  0.5720  1.7684
##
## Moments:
## Skewness Kurtosis
##   -0.931    3.157
##
## Coefficient(s):
##     Estimate  Std. Error  t value Pr(>|t|)
## ar1   0.6817      0.1206    5.654 1.57e-08 ***
## ma1  -0.1041      0.1351   -0.771    0.441
## ma2   0.1697      0.1262    1.345    0.179
## ---
## Signif. codes:
## 0 '***' 0.001 '**' 0.01 '*' 0.05 '.' 0.1 ' ' 1
##
## sigma^2 estimated as: 0.4755
## log likelihood: -152.2
## AIC Criterion: 312.41
##
## Description:
##  Sun Aug 5 23:39:52 2018 by user:
Note that the MA(1) and MA(2) parameters do not appear to be significant, so we should drop them and just estimate an AR(1) model.
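The Box.test calls above report the Ljung-Box Q-statistic, which is just a weighted sum of squared residual autocorrelations, Q = n(n+2) Σ_{k=1}^{h} ρ̂_k² / (n − k), compared against a χ² distribution with h − fitdf degrees of freedom. A minimal sketch of the computation (illustrative Python with a made-up residual series, not the tutorial's data):

```python
# Ljung-Box Q-statistic computed from sample autocorrelations.

def autocorr(x, k):
    """Sample autocorrelation of x at lag k."""
    n = len(x)
    m = sum(x) / n
    num = sum((x[t] - m) * (x[t - k] - m) for t in range(k, n))
    den = sum((xi - m) ** 2 for xi in x)
    return num / den

def ljung_box(x, h):
    """Ljung-Box Q-statistic over lags 1..h."""
    n = len(x)
    return n * (n + 2) * sum(autocorr(x, k) ** 2 / (n - k) for k in range(1, h + 1))

resid = [((i * 37) % 11 - 5) / 5 for i in range(120)]  # a toy "residual" series
q = ljung_box(resid, 6)
```

A large Q relative to the χ²(h − fitdf) quantile, i.e. a small p-value, signals remaining serial correlation in the residuals.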
fit2 <- armaFit(~arma(1, 0), data = y, include.mean = FALSE)
summary(fit2)
##
## Title:
##  ARIMA Modelling
##
## Call:
##  armaFit(formula = ~arma(1, 0), data = y, include.mean = FALSE)
##
## Model:
##  ARIMA(1,0,0) with method: CSS-ML
##
## Coefficient(s):
##    ar1
## 0.6787
##
## Residuals:
##     Min      1Q  Median      3Q     Max
## -3.0669 -0.1380  0.1714  0.5859  1.7269
##
## Moments:
## Skewness Kurtosis
## -0.9986   3.4459
##
## Coefficient(s):
##     Estimate  Std. Error  t value Pr(>|t|)
## ar1  0.67872     0.06068    11.18   <2e-16 ***
## ---
## Signif. codes:
## 0 '***' 0.001 '**' 0.01 '*' 0.05 '.' 0.1 ' ' 1
##
## sigma^2 estimated as: 0.4932
## log likelihood: -154.8
## AIC Criterion: 313.61
##
## Description:
##  Sun Aug 5 23:39:52 2018 by user:
Another useful package is the forecast package, which you will know how to install by now. After you provide yourself with access to the routines in the package, you can utilise the auto.arima command, which allows you to specify the maximum number of autoregressive and moving average lags. Thereafter, you can decide which information criterion should be used to select the final model and everything will be done for you.
library(forecast)
auto.mod <- auto.arima(y, max.p = 2, max.q = 2, start.p = 0, start.q = 0,
    stationary = TRUE, seasonal = FALSE, allowdrift = FALSE, ic = "aic")
auto.mod  # AR(1) with intercept
## Series: y
## ARIMA(1,0,0) with non-zero mean
##
## Coefficients:
##          ar1    mean
##       0.5375  0.5295
## s.e.  0.0699  0.1197
##
## sigma^2 estimated as 0.4578:  log likelihood=-148.26
## AIC=302.53   AICc=302.7   BIC=311.46
As the name suggests, this package has a number of useful forecasting routines, which are the subject of next week's lecture. For example, to generate a forecast for the last four years of data, we remove the out-of-sample data from that which is going to be used for the in-sample estimation. In this case, we can compare the out-of-sample results for the ARMA(1,2) with the AR(1) model.
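It helps to recall what a multi-step AR(1) point forecast actually does: it iterates the fitted recursion forward, so ŷ(T+h) = μ + φ^h (y_T − μ) decays geometrically towards the unconditional mean. A minimal sketch of this mechanism (illustrative Python; the values of φ, μ and y_T below are invented, not estimates from this tutorial):

```python
# Iterated AR(1) point forecasts decay towards the unconditional mean mu.

def ar1_forecast(y_last, phi, mu, h):
    """h-step-ahead point forecasts from an AR(1) with mean mu."""
    return [mu + phi ** step * (y_last - mu) for step in range(1, h + 1)]

path = ar1_forecast(y_last=2.0, phi=0.68, mu=0.53, h=16)
# The forecasts shrink monotonically towards mu as the horizon grows.
```

This mean-reversion is why longer-horizon ARMA forecasts flatten out quickly, which is part of what the plots below display.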
y.sub <- window(y, start = c(1981, 1), end = c(2012, 1), frequency = 4)
arma10 <- arima(y.sub, order = c(1, 0, 0))
arma12 <- arima(y.sub, order = c(1, 0, 2))
To forecast forward we can use the predict command, as follows:
arma10.pred <- predict(arma10, n.ahead = 16)
arma12.pred <- predict(arma12, n.ahead = 16)
To plot these results we use the plot.ts and lines commands, which display the results on the same set of axes.
plot.ts(y)
lines(arma10.pred$pred, col = "blue", lty = 2)
lines(arma12.pred$pred, col = "green", lty = 2)
Or alternatively, if we just want to focus on the last few years of data:
y.act <- window(y, start = c(2011, 1), end = c(2016, 1))
fore.10 <- c(rep(NA, 5), arma10.pred$pred[1:16])
fore.12 <- c(rep(NA, 5), arma12.pred$pred[1:16])
fore.res <- cbind(as.numeric(y.act), fore.10, fore.12)
plot.ts(fore.res, plot.type = "single", col = c("black", "blue", "green"))
legend("bottomleft", legend = c("actual", "arma10", "arma12"), lty = c(1, 1, 1),
    col = c("black", "blue", "green"), bty = "n")
Unfortunately, these forecasting results look fairly poor.
## 6.2 South African consumer price index
Once again, we will start off this part of the tutorial by clearing the workspace environment and closing all the plots.
rm(list = ls())
graphics.off()
Once again we are going to make use of the tsm and strucchange packages, so we run the commands:
library(tsm)
library(strucchange)
To access the CPI data in the tsm package, we want to extract the series KBP7170N that is published by the South African Reserve Bank on a monthly basis. To get data that is slightly more current, you would go to the Statistics South Africa website. This series starts in January 2002. Hence, we can execute the following commands:
dat <- sarb_month$KBP7170N
cpi <- ts(na.omit(dat), start = c(2002, 1), frequency = 12)
To calculate month-on-month and year-on-year inflation rates, we perform the following transformations and plot the data to make sure that it looks reasonable.
inf.mom <- diff(cpi, lag = 1)/cpi[-length(cpi)]
inf.yoy <- diff(cpi, lag = 12)/cpi[-1 * (length(cpi) - 11):length(cpi)]
plot.ts(inf.yoy)
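The two transformations above are just percentage changes at different lags: lag 1 gives the month-on-month rate and lag 12 gives the year-on-year rate. A minimal sketch of the arithmetic (illustrative Python with an invented index, not the SARB series):

```python
# Growth rates as lagged percentage changes, mirroring diff(cpi, lag)/lagged cpi.

def pct_change(x, lag):
    """Growth rate of x relative to the observation `lag` periods earlier."""
    return [(x[t] - x[t - lag]) / x[t - lag] for t in range(lag, len(x))]

cpi = [100 + 0.5 * t for t in range(30)]   # a toy index rising 0.5 points/month
inf_mom = pct_change(cpi, 1)               # month-on-month inflation
inf_yoy = pct_change(cpi, 12)              # year-on-year inflation
```

Note that each transformation shortens the series by `lag` observations, which is why the R objects above start later than the raw index.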
We can then proceed to check for structural breaks. In this instance, we once again make use of an AR(1) model, so we create a database for the variables:
dat.inf <- cbind(inf.yoy, lag(inf.yoy, k = -1))
colnames(dat.inf) <- c("ylag0", "ylag1")
Thereafter, we can generate QLR statistic with the commands:
fs <- Fstats(ylag0 ~ ylag1, data = dat.inf)
breakpoints(fs)
##
## Optimal 2-segment partition:
##
## Call:
## breakpoints.Fstats(obj = fs)
##
## Breakpoints at observation number:
## 25
##
## Corresponding to breakdates:
## 0.1403509
sctest(fs, type = "supF")
##
## supF test
##
## data: fs
## sup.F = 31.201, p-value = 5.847e-06
plot(fs)
In addition, we can look at the results of the CUSUM statistic.
cusum <- efp(ylag0 ~ ylag1, type = "OLS-CUSUM", data = dat.inf)
plot(cusum) # there certainly appears to be some change in the first part of the sample
These results suggest that there could be a breakpoint immediately prior to 2005M2, so let’s take a subset of data for the subsequent periods of time.
y <- window(inf.yoy, start = c(2005, 2))
We can now take a look at the persistence in the data by plotting the autocorrelation function. Note that the first autocorrelation coefficient is around 0.97, which suggests that this could be a random-walk process. As such, we should test for a unit root using an ADF (or similar) test, but as we only cover this topic in a later lecture, we proceed as if the process were stationary.
ac(as.numeric(y))
If we allow for a maximum order of AR(2) and/or MA(4) (although the moving average component could be of a higher order), we can then check the information criteria for a number of candidate models by constructing the following vector for the statistics:
arma.res <- rep(0, 8)
arma.res[1] <- arima(y, order = c(2, 0, 4))$aic
arma.res[2] <- arima(y, order = c(2, 0, 3))$aic
arma.res[3] <- arima(y, order = c(2, 0, 2))$aic
arma.res[4] <- arima(y, order = c(2, 0, 1))$aic
arma.res[5] <- arima(y, order = c(2, 0, 0))$aic
arma.res[6] <- arima(y, order = c(1, 0, 2))$aic
arma.res[7] <- arima(y, order = c(1, 0, 1))$aic
arma.res[8] <- arima(y, order = c(1, 0, 0))$aic
To find the model with the smallest AIC value we can then execute the command:
which(arma.res == min(arma.res))
## [1] 5
These results suggest that the smallest value is provided by ARMA(2,0). With this in mind we estimate the parameter values for this model structure.
arma <- arima(y, order = c(2, 0, 0))
Thereafter, we look at the residuals for the model to determine if there is any serial correlation.
par(mfrow = c(1, 1))
plot(arma$residuals)
ac(arma$residuals)
These results suggest that there is still some autocorrelation at the year-on-year (12-month) lag. To estimate an ARMA model with seasonal components, we need to install the astsa package, using the install button on the Packages tab or the command install.packages('astsa'). In this case, we will fit two variants of the model; in the sarima(x, p, d, q, P, D, Q, S) syntax, the command sarima(x, 2, 0, 0, 0, 1, 1, 12) would fit a SARIMA(2,0,0)*(0,1,1)_{12} model.
library(astsa)
sarima1 <- sarima(y, 2, 0, 0, 0, 0, 0, 12) # sarima(x,p,d,q,P,D,Q,S) where sarima(x,2,0,0,0,1,1,12) will fit SARIMA(2,0,0)*(0,1,1)_{12}
## initial value -3.939766
## iter 2 value -4.393484
## iter 3 value -5.245578
## iter 4 value -5.317399
## iter 5 value -5.431991
## iter 6 value -5.482457
## iter 7 value -5.507663
## iter 8 value -5.510206
## iter 9 value -5.510214
## iter 10 value -5.510219
## iter 11 value -5.510289
## iter 12 value -5.510324
## iter 13 value -5.510325
## iter 13 value -5.510325
## iter 13 value -5.510325
## final value -5.510325
## converged
## initial value -5.493801
## iter 2 value -5.494032
## iter 3 value -5.494922
## iter 4 value -5.495098
## iter 5 value -5.495204
## iter 6 value -5.495313
## iter 7 value -5.495545
## iter 8 value -5.495843
## iter 9 value -5.495905
## iter 10 value -5.495968
## iter 11 value -5.496045
## iter 12 value -5.496089
## iter 13 value -5.496102
## iter 14 value -5.496103
## iter 14 value -5.496103
## iter 14 value -5.496103
## final value -5.496103
## converged
names(sarima1)
## [1] "fit" "degrees_of_freedom"
## [3] "ttable" "AIC"
## [5] "AICc" "BIC"
names(sarima1$fit)
## [1] "coef" "sigma2" "var.coef" "mask"
## [5] "loglik" "aic" "arma" "residuals"
## [9] "call" "series" "code" "n.cond"
## [13] "nobs" "model"
par(mfrow = c(1, 1))
plot(sarima1$fit$residuals)
ac(as.numeric(sarima1$fit$residuals))
sarima2 <- sarima(y, 2, 0, 1, 1, 0, 0, 12)  # sarima(x,p,d,q,P,D,Q,S): fits a SARIMA(2,0,1)*(1,0,0)_{12}
## initial value -3.964652
## iter 2 value -4.511039
## iter 3 value -5.062536
## iter 4 value -5.214318
## iter 5 value -5.439984
## iter 6 value -5.515829
## iter 7 value -5.612132
## iter 8 value -5.614171
## iter 9 value -5.614431
## iter 10 value -5.617334
## iter 11 value -5.622557
## iter 12 value -5.631673
## iter 13 value -5.633969
## iter 14 value -5.635143
## iter 15 value -5.635894
## iter 16 value -5.638322
## iter 17 value -5.640541
## iter 18 value -5.642155
## iter 19 value -5.642269
## iter 20 value -5.642300
## iter 21 value -5.642306
## iter 22 value -5.642347
## iter 23 value -5.642365
## iter 24 value -5.642365
## iter 25 value -5.642366
## iter 26 value -5.642367
## iter 27 value -5.642368
## iter 28 value -5.642369
## iter 29 value -5.642370
## iter 30 value -5.642370
## iter 30 value -5.642370
## iter 30 value -5.642370
## final value -5.642370
## converged
## initial value -5.561956
## iter 2 value -5.564498
## iter 3 value -5.566343
## iter 4 value -5.568918
## iter 5 value -5.570030
## iter 6 value -5.571417
## iter 7 value -5.573604
## iter 8 value -5.574429
## iter 9 value -5.577166
## iter 10 value -5.579282
## iter 11 value -5.584058
## iter 12 value -5.585552
## iter 13 value -5.585927
## iter 14 value -5.586857
## iter 15 value -5.588030
## iter 16 value -5.590170
## iter 17 value -5.592098
## iter 18 value -5.593931
## iter 19 value -5.594933
## iter 20 value -5.595478
## iter 21 value -5.595661
## iter 22 value -5.595957
## iter 23 value -5.596126
## iter 24 value -5.596163
## iter 25 value -5.596196
## iter 26 value -5.596209
## iter 27 value -5.596226
## iter 28 value -5.596247
## iter 29 value -5.596263
## iter 30 value -5.596273
## iter 31 value -5.596278
## iter 32 value -5.596281
## iter 33 value -5.596282
## iter 34 value -5.596283
## iter 34 value -5.596283
## final value -5.596283
## converged
ac(as.numeric(sarima2$fit$residuals))
In the second of these models we note that the seasonal information has been effectively captured by the monthly seasonal component. Once again we can generate a forecast for this model with the aid of the following commands:
library(fArma)
y.sub <- window(y, end = c(2016, 12))
par(mfrow = c(1, 1))
forcast1 <- sarima.for(y.sub, 2, 0, 1, 1, 0, 0, 12, n.ahead = 4)
lines(y, col = "green", lty = 1)
On this occasion the forecast does not look too bad for the one-step ahead horizon. | 2021-05-16 01:56:33 | {"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 1, "mathjax_asciimath": 1, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.625896155834198, "perplexity": 3639.2093684722186}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2021-21/segments/1620243991659.54/warc/CC-MAIN-20210516013713-20210516043713-00003.warc.gz"} |
https://www.maplesoft.com/support/help/Maple/view.aspx?path=Magma/HasZero | HasZero - Maple Help
Magma
HasZero
test for the existence of a (two-sided) zero element in a finite magma
Calling Sequence
HasZero( m )
Parameters
m - Array representing the Cayley table of a finite magma
Description
• The HasZero command returns true if the given magma has a zero element; that is, an element z such that z*x = x*z = z, for all x in m. It returns false otherwise.
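The same check is easy to sketch outside Maple. The snippet below is an illustrative Python version using 0-indexed toy tables of our own (not the Maple examples below): it scans a Cayley table for an element z with z*x = x*z = z for every x.

```python
# Scan a Cayley table (table[i][j] = i*j) for a two-sided zero element.

def has_zero(table):
    """True if the magma given by this Cayley table has a two-sided zero."""
    elems = range(len(table))
    return any(all(table[z][x] == z and table[x][z] == z for x in elems)
               for z in elems)

m_no = [[0, 1], [1, 0]]    # no element absorbs everything: no zero
m_yes = [[0, 0], [0, 1]]   # element 0 satisfies 0*x = x*0 = 0: a zero
```

The scan is O(n²) for an n-element magma, since each candidate z must absorb every element from both sides.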
Examples
> $\mathrm{with}\left(\mathrm{Magma}\right):$
> $m≔⟨⟨⟨1|1|1⟩,⟨1|3|3⟩,⟨1|2|2⟩⟩⟩$
${m}{≔}\left[\begin{array}{ccc}{1}& {1}& {1}\\ {1}& {3}& {3}\\ {1}& {2}& {2}\end{array}\right]$ (1)
> $\mathrm{HasZero}\left(m\right)$
${\mathrm{false}}$ (2)
> $m≔⟨⟨⟨1|1|1⟩,⟨1|1|1⟩,⟨2|1|1⟩⟩⟩$
${m}{≔}\left[\begin{array}{ccc}{1}& {1}& {1}\\ {1}& {1}& {1}\\ {2}& {1}& {1}\end{array}\right]$ (3)
> $\mathrm{HasZero}\left(m\right)$
${\mathrm{true}}$ (4)
Compatibility
• The Magma[HasZero] command was introduced in Maple 15. | 2022-12-07 02:48:48 | {"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 9, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.9587594866752625, "perplexity": 2078.982093385965}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2022-49/segments/1669446711126.30/warc/CC-MAIN-20221207021130-20221207051130-00786.warc.gz"} |
https://analystprep.com/study-notes/category/actuarial-exams/soa/p-probability/page/3/ | ##### Calculate probabilities using the addition and multiplication rules
Addition Rule of Probability $$P\left( A\cup B \right) =P\left( A \right) +P\left( B \right) -P\left( A\cap B \right)$$ The addition rule of probability states that the probability of event $$A$$ or event $$B$$ occurring is the probability of…
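The addition rule is easy to verify numerically on a finite sample space with equally likely outcomes. A minimal sketch (illustrative Python; the six-sided die example is ours, not from the notes):

```python
# Verify P(A ∪ B) = P(A) + P(B) - P(A ∩ B) by counting equally likely outcomes.
from fractions import Fraction

outcomes = {1, 2, 3, 4, 5, 6}          # one roll of a fair die
A = {2, 4, 6}                          # "even"
B = {4, 5, 6}                          # "at least four"
p = lambda E: Fraction(len(E), len(outcomes))

lhs = p(A | B)                         # P(A ∪ B)
rhs = p(A) + p(B) - p(A & B)           # addition rule
```

Here the subtraction of P(A ∩ B) prevents the outcomes 4 and 6, which lie in both events, from being counted twice.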
##### Calculate probabilities of mutually exclusive events
$$A$$ and $$B$$ are mutually exclusive events if $$A$$ and $$B$$ cannot both occur at the same time. Example: When flipping a coin, the coin cannot land on heads and tails at the same time, so we consider the events…
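For mutually exclusive events the intersection term vanishes, so the addition rule collapses to P(A ∪ B) = P(A) + P(B). A minimal sketch of the coin example (illustrative Python):

```python
# Heads and tails on a single flip are mutually exclusive: A ∩ B is empty.
from fractions import Fraction

flip = {"H", "T"}
A, B = {"H"}, {"T"}
p = lambda E: Fraction(len(E), len(flip))

empty_overlap = (A & B == set())       # mutually exclusive
total = p(A) + p(B)                    # equals P(A ∪ B) = 1 with no correction term
```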
##### Define set functions, sample space, and events
In order to understand the concept of probability, it is useful to think about an experiment with a known set of possible outcomes. This set of all possible outcomes is called the sample space $$(S)$$. Sample space $$(S)$$ –set of… | 2020-02-20 11:49:32 | {"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 2, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.8552953004837036, "perplexity": 179.01922065855481}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.3, "absolute_threshold": 20, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2020-10/segments/1581875144722.77/warc/CC-MAIN-20200220100914-20200220130914-00321.warc.gz"} |
https://coq.github.io/doc/v8.11/refman/addendum/universe-polymorphism.html |
# Polymorphic Universes¶
Author: Matthieu Sozeau
## General Presentation¶
Warning
The status of Universe Polymorphism is experimental.
This section describes the universe polymorphic extension of Coq. Universe polymorphism makes it possible to write generic definitions making use of universes and reuse them at different and sometimes incompatible universe levels.
A standard example of the difference between universe polymorphic and monomorphic definitions is given by the identity function:
Definition identity {A : Type} (a : A) := a.
identity is defined
By default, constant declarations are monomorphic, hence the identity function declares a global universe (say Top.1) for its domain. Subsequently, if we try to self-apply the identity, we will get an error:
Fail Definition selfid := identity (@identity).
The command has indeed failed with message: The term "@identity" has type "forall A : Type, A -> A" while it is expected to have type "?A" (unable to find a well-typed instantiation for "?A": cannot ensure that "Type@{identity.u0+1}" is a subtype of "Type@{identity.u0}").
Indeed, the global level Top.1 would have to be strictly smaller than itself for this self-application to type check, as the type of (@identity) is forall (A : Type@{Top.1}), A -> A whose type is itself Type@{Top.1+1}.
A universe polymorphic identity function binds its domain universe level at the definition level instead of making it global.
Polymorphic Definition pidentity {A : Type} (a : A) := a.
pidentity is defined
About pidentity.
pidentity@{Top.2} : forall A : Type, A -> A
pidentity is universe polymorphic
Arguments pidentity {A}%type_scope
pidentity is transparent
Expands to: Constant Top.pidentity
It is then possible to reuse the constant at different levels, like so:
Definition selfpid := pidentity (@pidentity).
selfpid is defined
Of course, the two instances of pidentity in this definition are different. This can be seen when the Printing Universes flag is on:
Set Printing Universes.
Print selfpid.
selfpid = pidentity@{selfpid.u0} (@pidentity@{selfpid.u1})
     : forall A : Type@{selfpid.u1}, A -> A
(* {selfpid.u1 selfpid.u0} |= selfpid.u1 < selfpid.u0 *)
Arguments selfpid _%type_scope
Now pidentity is used at two different levels: at the head of the application it is instantiated at selfpid.u0, while in the argument position it is instantiated at selfpid.u1. This definition is only valid as long as selfpid.u1 is strictly smaller than selfpid.u0, as shown by the constraints. Note that this definition is monomorphic (not universe polymorphic), so the two universes (in this case selfpid.u0 and selfpid.u1) are actually global levels.
When printing pidentity, we can see the universes it binds in the annotation @{Top.2}. Additionally, when Printing Universes is on we print the "universe context" of pidentity consisting of the bound universes and the constraints they must verify (for pidentity there are no constraints).
Inductive types can also be declared universes polymorphic on universes appearing in their parameters or fields. A typical example is given by monoids:
Polymorphic Record Monoid := { mon_car :> Type; mon_unit : mon_car; mon_op : mon_car -> mon_car -> mon_car }.
Monoid is defined
mon_car is defined
mon_unit is defined
mon_op is defined
Print Monoid.
Record Monoid : Type@{Top.6+1} := Build_Monoid
  { mon_car : Type@{Top.6};
    mon_unit : mon_car;
    mon_op : mon_car -> mon_car -> mon_car }
(* Top.6 |= *)
Arguments Build_Monoid _%type_scope _ _%function_scope
The Monoid's carrier universe is polymorphic, hence it is possible to instantiate it for example with Monoid itself. First we build the trivial unit monoid in Set:
Definition unit_monoid : Monoid := {| mon_car := unit; mon_unit := tt; mon_op x y := tt |}.
unit_monoid is defined
From this we can build a definition for the monoid of Set-monoids (where multiplication would be given by the product of monoids).
Polymorphic Definition monoid_monoid : Monoid.
1 subgoal
============================
Monoid@{Top.9}
refine (@Build_Monoid Monoid unit_monoid (fun x y => x)).
No more subgoals.
Defined.
Print monoid_monoid.
monoid_monoid@{Top.9} =
{| mon_car := Monoid@{Set}; mon_unit := unit_monoid; mon_op := fun x _ : Monoid@{Set} => x |}
     : Monoid@{Top.9}
(* Top.9 |= Set < Top.9 *)
As one can see from the constraints, this monoid is “large”, it lives in a universe strictly higher than Set.
## Polymorphic, Monomorphic¶
Command Polymorphic definition
As shown in the examples, polymorphic definitions and inductives can be declared using the Polymorphic prefix.
Flag Universe Polymorphism
Once enabled, this flag will implicitly prepend Polymorphic to any definition of the user.
Command Monomorphic definition
When the Universe Polymorphism flag is set, to make a definition producing global universe constraints, one can use the Monomorphic prefix.
Many other commands also support the Polymorphic flag.
## Cumulative, NonCumulative¶
Polymorphic inductive types, coinductive types, variants and records can be declared cumulative using the Cumulative prefix.
Command Cumulative inductive
Declares the inductive as cumulative
Alternatively, there is a Polymorphic Inductive Cumulativity flag which, when set, makes all subsequent polymorphic inductive definitions cumulative. When the flag is set, inductive types and the like can still be enforced to be non-cumulative using the NonCumulative prefix.
Command NonCumulative inductive
Declares the inductive as non-cumulative
Flag Polymorphic Inductive Cumulativity
When this flag is on, it sets all following polymorphic inductive types as cumulative (it is off by default).
Consider the examples below.
Polymorphic Cumulative Inductive list {A : Type} := | nil : list | cons : A -> list -> list.
list is defined
list_rect is defined
list_ind is defined
list_rec is defined
list_sind is defined
Print list.
Inductive list@{Top.12} (A : Type@{Top.12}) : Type@{max(Set,Top.12)} :=
    nil : list@{Top.12} | cons : A -> list@{Top.12} -> list@{Top.12}
(* *Top.12 |= *)
Arguments list {A}%type_scope
Arguments nil {A}%type_scope
Arguments cons {A}%type_scope
When printing list, the universe context indicates the subtyping constraints by prefixing the level names with symbols.
Because inductive subtypings are only produced by comparing inductives to themselves with universes changed, they amount to variance information: each universe is either invariant, covariant or irrelevant (there are no contravariant subtypings in Coq), respectively represented by the symbols =, + and *.
Here we see that list binds an irrelevant universe, so any two instances of list are convertible: $$E[Γ] ⊢ \mathsf{list}@\{i\}~A =_{βδιζη} \mathsf{list}@\{j\}~B$$ whenever $$E[Γ] ⊢ A =_{βδιζη} B$$ and this applies also to their corresponding constructors, when they are comparable at the same type.
See Conversion rules for more details on convertibility and subtyping. The following is an example of a record with non-trivial subtyping relation:
Polymorphic Cumulative Record packType := {pk : Type}.
packType is defined
pk is defined
packType binds a covariant universe, i.e.
$E[Γ] ⊢ \mathsf{packType}@\{i\} =_{βδιζη} \mathsf{packType}@\{j\}~\mbox{ whenever }~i ≤ j$
Cumulative inductive types, coinductive types, variants and records only make sense when they are universe polymorphic. Therefore, an error is issued whenever the user uses the Cumulative or NonCumulative prefix in a monomorphic context. Notice that this is not the case for the Polymorphic Inductive Cumulativity flag. That is, this flag, when set, makes all subsequent polymorphic inductive declarations cumulative (unless, of course the NonCumulative prefix is used) but has no effect on monomorphic inductive declarations.
Consider the following examples.
Fail Monomorphic Cumulative Inductive Unit := unit.
The command has indeed failed with message: The Cumulative prefix can only be used in a polymorphic context.
Fail Monomorphic NonCumulative Inductive Unit := unit.
The command has indeed failed with message: The NonCumulative prefix can only be used in a polymorphic context.
Set Polymorphic Inductive Cumulativity.
Inductive Unit := unit.
Unit is defined
Unit_rect is defined
Unit_ind is defined
Unit_rec is defined
Unit_sind is defined
### An example of a proof using cumulativity¶
Set Universe Polymorphism.
Set Polymorphic Inductive Cumulativity.
Inductive eq@{i} {A : Type@{i}} (x : A) : A -> Type@{i} := eq_refl : eq x x.
eq is defined
eq_rect is defined
eq_ind is defined
eq_rec is defined
eq_sind is defined
Definition funext_type@{a b e} (A : Type@{a}) (B : A -> Type@{b}) := forall f g : (forall a, B a), (forall x, eq@{e} (f x) (g x)) -> eq@{e} f g.
funext_type is defined
Section down.
Universes a b e e'.
Constraint e' < e.
Lemma funext_down {A B} (H : @funext_type@{a b e} A B) : @funext_type@{a b e'} A B.
1 subgoal
A : Type
B : A -> Type
H : funext_type A B
============================
funext_type A B
Proof.
exact H.
No more subgoals.
Defined.
End down.
## Cumulativity Weak Constraints¶
Flag Cumulativity Weak Constraints
When set, which is the default, this flag causes "weak" constraints to be produced when comparing universes in an irrelevant position. Processing weak constraints is delayed until minimization time. A weak constraint between u and v, when neither is smaller than the other and one is flexible, causes them to be unified. Otherwise the constraint is silently discarded.
This heuristic is experimental and may change in future versions. Disabling weak constraints is more predictable but may produce arbitrary numbers of universes.
## Global and local universes¶
Each universe is declared in a global or local environment before it can be used. To ensure compatibility, every global universe is set to be strictly greater than Set when it is introduced, while every local (i.e. polymorphically quantified) universe is introduced as greater or equal to Set.
## Conversion and unification¶
The semantics of conversion and unification have to be modified a little to account for the new universe instance arguments to polymorphic references. The semantics respect the fact that definitions are transparent, so indistinguishable from their bodies during conversion.
This is accomplished by changing one rule of unification, the first-order approximation rule, which applies when two applicative terms with the same head are compared. It tries to short-cut unfolding by comparing the arguments directly. In case the constant is universe polymorphic, we allow this rule to fire only when unifying the universes results in instantiating a so-called flexible universe variable (not given by the user). Similarly for conversion: if such an equation between applicative terms fails due to a universe comparison not being satisfied, the terms are unfolded. This change implies that conversion and unification can have different unfolding behaviors on the same development with universe polymorphism switched on or off.
## Minimization¶
Universe polymorphism with cumulativity tends to generate many useless inclusion constraints in general. Typically at each application of a polymorphic constant f, if an argument has expected type Type@{i} and is given a term of type Type@{j}, a $$j ≤ i$$ constraint will be generated. It is however often the case that an equation $$j = i$$ would be more appropriate, when f's universes are fresh for example. Consider the following example:
Polymorphic Definition pidentity {A : Type} (a : A) := a.
pidentity is defined
Set Printing Universes.
Definition id0 := @pidentity nat 0.
id0 is defined
Print id0.
id0@{} = pidentity@{Set} 0 : nat
This definition is elaborated by minimizing the universe of id0 to level Set, while the more general definition would keep the fresh level i generated at the application of pidentity and a constraint that Set $$≤ i$$. This minimization process is applied only to fresh universe variables. It simply adds an equation between the variable and its lower bound if it is an atomic universe (i.e. not an algebraic max() universe).
Flag Universe Minimization ToSet
Turning this flag off (it is on by default) disallows minimization to the sort Set and only collapses floating universes between themselves.
## Explicit Universes¶
The syntax has been extended to allow users to explicitly bind names to universes and explicitly instantiate polymorphic definitions.
Command Universe ident
Command Polymorphic Universe ident
In the monomorphic case, this command declares a new global universe named ident, which can be referred to using its qualified name as well. Global universe names live in a separate namespace. The command supports the Polymorphic flag only in sections, meaning the universe quantification will be discharged on each section definition independently.
Command Constraint universe_constraint
Command Polymorphic Constraint universe_constraint
This command declares a new constraint between named universes.
universe_constraint ::= qualid < qualid
                      | qualid <= qualid
                      | qualid = qualid
If consistent, the constraint is then enforced in the global environment. Like Universe, it can be used with the Polymorphic prefix in sections only to declare constraints discharged at section closing time. One cannot declare a global constraint on polymorphic universes.
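For instance, one can declare two global universes and relate them (a minimal sketch; the names u and v are arbitrary):

```coq
Universe u.
Universe v.
Constraint u < v.
(* Adding the reverse inclusion would make the universe graph
   inconsistent, so the command is rejected: *)
Fail Constraint v <= u.
```

The last command fails with a Universe inconsistency error, since together with u < v it would force u < u.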
Error Undeclared universe ident.
Error Universe inconsistency.
### Polymorphic definitions
For polymorphic definitions, the declaration of (all) universe levels introduced by a definition uses the following syntax:
Polymorphic Definition le@{i j} (A : Type@{i}) : Type@{j} := A.
le is defined
Print le.
le@{i j} = fun A : Type@{i} => A : Type@{i} -> Type@{j} (* i j |= i <= j *) Arguments le _%type_scope
During refinement we find that j must be larger than or equal to i, as we are using A : Type@{i} <= Type@{j}, hence the generated constraint. At the end of a definition or proof, we check that the only remaining universes are the ones declared. In the term, and in general in proof mode, introduced universe names can be referred to in terms. Note that local universe names shadow global universe names. During a proof, one can use Show Universes to display the current context of universes.
It is possible to provide only some universe levels and let Coq infer the others by adding a + in the list of bound universe levels:
Fail Definition foobar@{u} : Type@{u} := Type.
The command has indeed failed with message: Universe {Top.50} is unbound.
Definition foobar@{u +} : Type@{u} := Type.
foobar is defined
Set Printing Universes.
Print foobar.
foobar@{u Top.52} = Type@{Top.52} : Type@{u} (* u Top.52 |= Top.52 < u *)
This can be used to find which universes need to be explicitly bound in a given definition.
Definitions can also be instantiated explicitly, giving their full instance:
Check (pidentity@{Set}).
pidentity@{Set} : ?A -> ?A where ?A : [ |- Set]
Monomorphic Universes k l.
Check (le@{k l}).
le@{k l} : Type@{k} -> Type@{l} (* {} |= k <= l *)
User-named universes and the anonymous universe implicitly attached to an explicit Type are considered rigid for unification and are never minimized. Flexible anonymous universes can be produced with an underscore or by omitting the annotation to a polymorphic definition.
Check (fun x => x) : Type -> Type.
(fun x : Type@{Top.55} => x) : Type@{Top.55} -> Type@{Top.56} : Type@{Top.55} -> Type@{Top.56} (* {Top.56 Top.55} |= Top.55 <= Top.56 *)
Check (fun x => x) : Type -> Type@{_}.
(fun x : Type@{Top.57} => x) : Type@{Top.57} -> Type@{Top.57} : Type@{Top.57} -> Type@{Top.57} (* {Top.57} |= *)
Check le@{k _}.
le@{k k} : Type@{k} -> Type@{k}
Check le.
le@{Top.60 Top.60} : Type@{Top.60} -> Type@{Top.60} (* {Top.60} |= *)
Flag Strict Universe Declaration
Turning this flag off allows one to freely use identifiers for universes without declaring them first, with the semantics that the first use declares it. In this mode, the universe names are not associated with the definition or proof once it has been defined. This is meant mainly for debugging purposes.
Flag Private Polymorphic Universes
This flag, on by default, removes universes which appear only in the body of an opaque polymorphic definition from the definition's universe arguments. As such, no value needs to be provided for these universes when instantiating the definition. Universe constraints are automatically adjusted.
Consider the following definition:
Lemma foo@{i} : Type@{i}.
1 subgoal ============================ Type@{i}
Proof.
exact Type.
No more subgoals.
Qed.
Print foo.
foo@{i} = Type@{Top.63} : Type@{i} (* Public universes: i |= Set < i Private universes: {Top.63} |= Top.63 < i *)
The universe Top.xxx for the Type in the body cannot be accessed; we only care that one exists for any instantiation of the universes appearing in the type of foo. This is guaranteed when the transitive constraint Set <= Top.xxx < i is verified. Then when using the constant we don't need to put a value for the inner universe:
Check foo@{_}.
foo@{Top.64} : Type@{Top.64} (* {Top.64} |= Set < Top.64 *)
and when not looking at the body we don't mention the private universe:
foo@{i} : Type@{i} (* i |= Set < i *) foo is universe polymorphic foo is opaque Expands to: Constant Top.foo
To recover the same behaviour with regard to universes as Defined, the Private Polymorphic Universes flag may be unset:
Unset Private Polymorphic Universes.
Lemma bar : Type.
1 subgoal ============================ Type@{Top.65}
Proof.
exact Type.
No more subgoals.
Qed.
bar@{Top.65 Top.66} : Type@{Top.65} (* Top.65 Top.66 |= Top.66 < Top.65 *) bar is universe polymorphic bar is opaque Expands to: Constant Top.bar
Fail Check bar@{_}.
The command has indeed failed with message: Universe instance should have length 2.
Check bar@{_ _}.
bar@{Top.68 Top.69} : Type@{Top.68} (* {Top.69 Top.68} |= Top.69 < Top.68 *)
Note that named universes are always public.
Set Private Polymorphic Universes.
Unset Strict Universe Declaration.
Lemma baz : Type@{outer}.
1 subgoal ============================ Type@{outer}
Proof.
exact Type@{inner}.
No more subgoals.
Qed.
baz@{outer inner} : Type@{outer} (* outer inner |= inner < outer *) baz is universe polymorphic baz is opaque Expands to: Constant Top.baz
## Universe polymorphism and sections
Variables, Context, Universe and Constraint in a section support polymorphism. This means that the universe variables and their associated constraints are discharged polymorphically over definitions that use them. In other words, two definitions in the section sharing a common variable will both get parameterized by the universes produced by the variable declaration. This is in contrast to a “monomorphic” variable which introduces global universes and constraints, making the two definitions depend on the same global universes associated to the variable.
It is possible to mix universe polymorphism and monomorphism in sections, except in the following ways:
• no monomorphic constraint may refer to a polymorphic universe:
Section Foo.
Polymorphic Universe i.
Fail Constraint i = i.
The command has indeed failed with message: Cannot add monomorphic constraints which refer to section polymorphic universes.
This includes constraints implicitly declared by commands such as Variable, which may need to be used with universe polymorphism activated (locally by attribute or globally by option):
Fail Variable A : (Type@{i} : Type).
The command has indeed failed with message: Cannot add monomorphic constraints which refer to section polymorphic universes.
Polymorphic Variable A : (Type@{i} : Type).
A is declared
(in the above example the anonymous Type constrains polymorphic universe i to be strictly smaller.)
• no monomorphic constant or inductive may be declared if polymorphic universes or universe constraints are present.
These restrictions are required in order to produce a sensible result when closing the section (the requirement on constants and inductives is stricter than the one on constraints, because constants and inductives are abstracted by all the section's polymorphic universes and constraints). | 2020-08-03 18:09:56 | {"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 1, "mathjax_asciimath": 1, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.4678060710430145, "perplexity": 6894.791233224608}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2020-34/segments/1596439735823.29/warc/CC-MAIN-20200803170210-20200803200210-00016.warc.gz"} |
https://standards.globalspec.com/standards/detail?docId=1274523 |
# CEN - EN ISO 9614-1
## Acoustics - Determination of sound power levels of noise sources using sound intensity - Part 1: Measurement at discrete points
active, Most Current
Organization: CEN
Publication Date: 1 August 2009
Status: active
Page Count: 34
ICS Code (Acoustic measurements and noise abatement in general): 17.140.01
##### scope:
This part of ISO 9614 specifies a method for measuring the component of sound intensity normal to a measurement surface which is chosen so as to enclose the noise source(s) of which the sound power level is to be determined. The one-octave, one-third-octave or band-limited weighted sound power level is calculated from the measured values. The method is applicable to any source for which a physically stationary measurement surface can be defined, and on which the noise generated by the source is stationary in time (as defined in 3.13). The source is defined by the choice of measurement surface. The method is applicable in situ, or in special purpose test environments.
This part of ISO 9614 is applicable to sources situated in any environment which neither is so variable in time as to reduce the accuracy of the measurement of sound intensity to an unacceptable degree, nor subjects the intensity measurement probe to gas flows of unacceptable speed or unsteadiness (see 5.3 and 5.4).
In some cases, it will be found that the test conditions are too adverse to allow the requirements of this part of ISO 9614 to be met. In particular, extraneous noise levels may vary to an excessive degree during the test. In such cases, the method given in this part of ISO 9614 is not suitable for the determination of the sound power level of the source.
NOTE 1 Other methods, e.g. determination of sound power levels from surface vibration levels as described in ISO/TR 7849, may be more suitable.
This part of ISO 9614 specifies certain ancillary procedures, described in annex B, to be followed in conjunction with the sound power determination. The results are used to indicate the quality of the determination, and hence the grade of accuracy. If the indicated quality of the determination does not meet the requirements of this part of ISO 9614, the test procedure should be modified in the manner indicated.
### Document History
EN ISO 9614-1
August 1, 2009
Acoustics - Determination of sound power levels of noise sources using sound intensity - Part 1: Measurement at discrete points
This part of ISO 9614 specifies a method for measuring the component of sound intensity normal to a measurement surface which is chosen so as to enclose the noise source(s) of which the sound power...
January 1, 1995
Acoustics - Determination of Sound Power Levels of Noise Sources Using Sound Intensity - Part 1: Measurement at Discrete Points
A description is not available for this item. | 2021-05-15 17:36:26 | {"extraction_info": {"found_math": false, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.8165644407272339, "perplexity": 1052.760527103024}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2021-21/segments/1620243990551.51/warc/CC-MAIN-20210515161657-20210515191657-00527.warc.gz"} |
https://blender.stackexchange.com/questions/124614/how-to-edit-2-8-ui-icons | # How to edit 2.8 UI icons?
Blender 2.8 comes with a set of monochromatic icons which seem to be stored inside datafiles/icons/*.dat.
How can these icon files be edited/replaced with other/custom icons? Where is documentation about the icon format and how to compile/generate these *.dat files?
### Summary
(Not official: based on own research)
### Two kinds of icon
I think that "the monochromatic icons", the small ones used in menus etc., are not present as separate files in the compiled releases of Blender because they are compiled into the binary itself. By the way, they are not necessarily monochromatic: this isn't "forced" in the code, it's just a design choice made for version 2.80.
The ones that you find under datafiles/icons/*.dat, that come with the built releases, are the (usually colored) ones that are used, for instance, in the tool shelf (see pic below)
So which ones do you want to change?
### To change the small "menu" icons
you will need to recompile blender from source after having updated the blender_icons16/*.dat and blender_icons32/*.dat files in the $SOURCE2.8/release/datafiles/ folder, and to do so you must first edit the vector image $SOURCE2_8/release/datafiles/blender_icons.svg
and then update and make the icons, roughly following this guide (a bit old but I think still valid)
### To change the bigger "tool" icons
I'm not sure how to change the (usually) colored ones that are not compiled into Blender (i.e. the ones that are present as separate .dat files in the downloaded blender builds). They seem to be generated via this blender_icons_geom_update.py script that pulls the design from an icon_geom.blend file that is sadly not present in the repository.
I've found this message that confirms that the process is currently not documented:
«Currently, it’s just a .blend file that generates the icons.
[...] Once we are more finished with the tools we will probably document how people can do this, if anyone wants to make new icons, or change them.»
William Reynish (billreynish), Apr 28 2018
### Documentation?
There is, in any case, another script, blender_icons_geom.py, which contains documentation of the pixmap format; this could be what you are looking for:
This is a simple binary format (all bytes, so no endian).
:0..3: VCO: identifier.
:4: 0: icon file version.
:5: icon size-x.
:6: icon size-y.
:7: icon start-x.
:8: icon start-y.
Icon width and height are for icons that don't use the full byte range
(so we don't get bad alignment for 48 pixel grid for eg).
Start values are currently unused.
After the header, the remaining length of the data defines the geometry size.
:6 bytes each: triangle (XY) locations.
:12 bytes each: triangle (RGBA) locations.
All coordinates are written, then all colors.
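Based only on that docstring, here is a rough Python sketch of a reader for such a blob (the function name and the identifier bytes in the example are my assumptions, not taken from Blender's sources):

```python
def parse_icon_geom(data: bytes) -> dict:
    """Decode the header and triangle data of a geometry-icon .dat blob.

    Layout per the docstring above (all single bytes, so endianness never matters):
      bytes 0..3  identifier ("VCO" region)
      byte  4     icon file version (0)
      bytes 5, 6  icon size x / y
      bytes 7, 8  icon start x / y (currently unused)
    After the 9-byte header: N triangles -> first 6*N bytes of XY coordinates,
    then 12*N bytes of RGBA colors.
    """
    header, body = data[:9], data[9:]
    if len(body) % 18:  # 6 coordinate bytes + 12 color bytes per triangle
        raise ValueError("body is not a whole number of 18-byte triangles")
    n = len(body) // 18
    coords = [tuple(body[i * 6:(i + 1) * 6]) for i in range(n)]  # 3 verts x (x, y)
    colors = [tuple(body[6 * n + i * 12:6 * n + (i + 1) * 12])   # 3 verts x RGBA
              for i in range(n)]
    return {
        "ident": header[0:4],
        "version": header[4],
        "size": (header[5], header[6]),
        "start": (header[7], header[8]),
        "triangles": n,
        "coords": coords,
        "colors": colors,
    }
```

Running it on one of the datafiles/icons/*.dat files should report the icon size and the triangle count; I haven't tested it against real files, so treat it as a starting point only.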
For example, after copying one .dat file over another:

cp brush.gpencil_draw.draw.dat ops.mesh.bevel.dat
the "bevel" tool actually becomes a little pencil (this proves that the "tool" icons are read directly from the datafiles/icon folder). | 2019-12-13 00:38:07 | {"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 0, "mathjax_asciimath": 1, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.17653466761112213, "perplexity": 4809.33538387493}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2019-51/segments/1575540547536.49/warc/CC-MAIN-20191212232450-20191213020450-00293.warc.gz"} |
https://support.bioconductor.org/p/81151/ | Question: Seeking confirmation on manually setting contrasts for a 3-factor design in DESeq2 with multiple interaction terms
1
3.3 years ago by
Australia
piensaglobalmente10 wrote:
Hello,
I am seeking confirmation on manually setting contrasts for a 3-factor design in DESeq2 with multiple interaction terms
I have a 3-factor experiment, with each factor containing 2 levels:
Type: Juvenile vs Sediment
Shore: Inshore vs Offshore
Sector: North vs Central
I am interested in the following contrasts:
1. Average main effects of Type, Shore and Sector across all the levels.
2. Interactive effect of Type:Shore, specifically:
1. Juvenile: Inshore vs Offshore
2. Sediment: Inshore vs Offshore
3. Interactive effect of Type:Sector, specifically:
1. Juvenile: North vs Central
2. Sediment: North vs Central
4. Interactive effect of Type vs Shore:Sector, specifically:
1. Juvenile: Inshore North vs. Inshore Central
2. Juvenile: Inshore North vs. Offshore Central
3. Juvenile: Inshore North vs. Offshore North
4. Juvenile: Offshore Central vs. Inshore Central
5. Juvenile: Offshore North vs. offshore Central
6. Juvenile: Offshore North vs. Inshore Central
5. Sediment: the same 6 comparisons as above (1–6)
My experimental design formula is as follows:
~Type*Shore*Sector
Base levels for all factors are: Juvenile, Inshore, Central
And my Model Matrix column names outputted with resultsNames function:
resultsNames(varstab_geomLove_T.S.S)
[1] "Intercept"
[2] "Type_Sediment_vs_Juvenile"
[3] "Shore_Offshore_vs_Inshore"
[4] "Sector_North_vs_Central"
[5] "TypeSediment.ShoreOffshore"
[6]"TypeSediment.SectorNorth"
[7] "ShoreOffshore.SectorNorth"
[8]"TypeSediment.ShoreOffshore.SectorNorth"
Effect / proposed function / confidence:

- Main effect of Type (aka Juvenile vs Sediments): results(varstab2, contrast=c("Type", "Juvenile", "Sediment")) (yes)
- Main effect of Shore (aka Inshore vs Offshore): results(varstab2, contrast=c("Shore", "Inshore", "Offshore")) (yes)
- Main effect of Sector (aka North or Central): results(varstab2, contrast=c("Sector", "North", "Central")) (yes)
- Juvenile: Inshore vs Offshore: subtract the all-samples inshore vs offshore comparison from sediment: inshore vs offshore, i.e. [3] "Shore_Offshore_vs_Inshore" 1,0,1,0,0,0,0,0 and [5] "TypeSediment.ShoreOffshore" 1,0,0,0,1,0,0,0; subtract the two to get results(varstab2, contrast=c(0,0,1,0,-1,0,0,0)) (not sure)
- Sediment: Inshore vs Offshore: results(varstab2, name="TypeSediment.ShoreOffshore") (yes)
- Juvenile: North vs Central: subtract the all-samples north vs central comparison from sediment: north vs central, i.e. [4] "Sector_North_vs_Central" 1,0,0,1,0,0,0,0 and [6] "TypeSediment.SectorNorth" 1,0,0,0,0,1,0,0; subtract the two to get results(varstab2, contrast=c(0,0,0,1,0,-1,0,0)) (not sure)
- Sediment: North vs Central: results(varstab2, name="TypeSediment.SectorNorth") (yes)
- Juvenile: Inshore North vs. Inshore Central: [1] "Intercept" 1,0,0,0,0,0,0,0 (which is juvenile, inshore, central) and [4] "Sector_North_vs_Central" 0,0,0,1,0,0,0,0; subtract the two to get results(varstab2, contrast=c(1,0,0,-1,0,0,0,0)) (not sure)
- Juvenile: Inshore North vs. Offshore Central: ?
- Juvenile: Inshore North vs. Offshore North: ?
- Juvenile: Offshore Central vs. Inshore Central: ?
- Juvenile: Offshore North vs. Offshore Central: [7] "ShoreOffshore.SectorNorth", i.e. results(varstab2, name="ShoreOffshore.SectorNorth") (ok)
- Juvenile: Offshore North vs. Inshore Central: ?
- Sediments: Inshore North vs. Inshore Central: ?
- Sediments: Inshore North vs. Offshore Central: ?
- Sediments: Inshore North vs. Offshore North: ?
- Sediments: Offshore Central vs. Inshore Central: ?
- Sediments: Offshore North vs. Offshore Central: [8] "TypeSediment.ShoreOffshore.SectorNorth", i.e. results(varstab2, name="TypeSediment.ShoreOffshore.SectorNorth") (ok)
- Sediments: Offshore North vs. Inshore Central: ?
I feel confident in the comparisons with the a "Yes" in the last column. However, could someone please check that I have specified the contrasts in the rows with "Not sure?" Especially the contrasts "Juvenile: Inshore North vs. Inshore Central." If I can get a confirmation that that contrast is set correctly, then I think I can manage the rest.
Thank you for any insights!
cheers,
k
modified 3.3 years ago by Michael Love24k • written 3.3 years ago by piensaglobalmente10
1
3.3 years ago by
Michael Love24k
United States
Michael Love24k wrote:
hi,
This is a good question for a statistical consultant or a local statistician who you could collaborate with.
There is nothing specific about DESeq2 here, you would use the same contrasts for a normal linear model. So it's not really a DESeq2 software question.
Thank you Michael. I can keep looking through the glmm literature now that you confirm that contrasts in DESeq2 and other negative binomial models are specified the same. I just wasn't sure if there was some unique DESeq2 functionality I wasn't finding that would draw out these contrasts.
cheers,
Hello Michael:
I would like to propose a possible solution to such contrast questions and run it by you.
To estimate higher order contrasts:
1) Construct a design matrix in GLM coding. An example is given here:
2) Each categorical column (“A1”, “AB21”, etc) in the design can be seen as a separate binary factor. I can also pretend that every column is a quantitative factor (please let me know if it makes a difference for DESeq2). Either way, an additive model that includes all of such factors is fed to DESeq2. Since interactions are actually present, shrinkage of regression coefficients is disabled (betaPrior = FALSE).
3) Obtain the contrast vector as a difference between the two corresponding lsmeans vectors. E.g. for “(A1, B1) vs (A3, B1)” contrast the vector is LSM(AB11) – LSM(AB31). How to obtain lsmeans is described here:
https://support.sas.com/documentation/cdl/en/statug/63962/HTML/default/viewer.htm#statug_glm_a0000000871.htm
4) Feed the contrast vector to results() function from DESeq2. Here I hope to rely on the option: “a numeric contrast vector with one element for each element in resultsNames(object) (most general case)”.
Please tell me if DESeq2 can digest an overparameterized design “as is” and whether the results() will work the way I expect.
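As a plain-Python illustration of the lsmeans-difference recipe in steps 1)–4) (this sketches the arithmetic only, it is not DESeq2 code, and the helper names are made up): for a ~Type*Shore*Sector design with treatment coding and base levels Juvenile/Inshore/Central, the model-matrix row of each cell mean follows the coefficient order of resultsNames() above, and a contrast vector is just the difference of two cell rows.

```python
# Coefficient order, matching resultsNames() in the question:
# [Intercept, TypeSediment, ShoreOffshore, SectorNorth,
#  TypeSediment.ShoreOffshore, TypeSediment.SectorNorth,
#  ShoreOffshore.SectorNorth, TypeSediment.ShoreOffshore.SectorNorth]
def cell_row(sediment: bool, offshore: bool, north: bool) -> list:
    """Model-matrix row of one cell mean under ~Type*Shore*Sector (treatment coding)."""
    s, o, n = int(sediment), int(offshore), int(north)
    return [1, s, o, n, s * o, s * n, o * n, s * o * n]

def contrast(cell_a, cell_b) -> list:
    """Numeric contrast vector for 'cell_a vs cell_b' = difference of their cell rows."""
    return [x - y for x, y in zip(cell_row(*cell_a), cell_row(*cell_b))]

# Juvenile: Inshore North vs. Inshore Central
print(contrast((False, False, True), (False, False, False)))
# -> [0, 0, 0, 1, 0, 0, 0, 0], i.e. only the Sector_North_vs_Central coefficient

# Sediment: Inshore North vs. Offshore Central
print(contrast((True, False, True), (True, True, False)))
# -> [0, 0, -1, 1, -1, 1, 0, 0]
```

The resulting vector can then be passed as the numeric contrast to results(), as in the general numeric-contrast usage mentioned above.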
You can pass in any full rank design matrix to the 'full' argument, and use numeric contrasts to compare coefficients using results(). The current release of DESeq2 has betaPrior=FALSE by default (see NEWS).
If the user can only feed full rank designs to DESeq2, do you know if there is an R package that could help figure out how to specify contrast coefficients for non-trivial cases? If not, it's very hard to use the design produced by model.matrix() if the goal is to estimate non-trivial contrasts.
The contrast package works with 'lm' like objects. So it doesn't work with DESeq2 but you could extract the linear contrast from the output of the contrast() function from that package, for example running on fake data for 'y' but using the same model.matrix. See the example of the contrast package here.
For non-trivial contrasts I find it just as hard to specify the desired contrast using that package, as working it out myself. | 2019-08-22 07:58:55 | {"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 1, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.21334798634052277, "perplexity": 13345.175773305802}, "config": {"markdown_headings": false, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2019-35/segments/1566027316785.68/warc/CC-MAIN-20190822064205-20190822090205-00085.warc.gz"} |
https://www.zbmath.org/?q=an%3A1303.05058 | # zbMATH — the first resource for mathematics
Neighbor sum distinguishing total colorings via the combinatorial nullstellensatz. (English) Zbl 1303.05058
Summary: Let $$G = (V,E)$$ be a graph and $$\phi$$ be a total coloring of $$G$$ by using the color set $$\{1, 2, \dots,k\}$$. Let $$f(v)$$ denote the sum of the color of the vertex $$v$$ and the colors of all incident edges of $$v$$. We say that $$\phi$$ is neighbor sum distinguishing if for each edge $$uv \in E(G)$$, $$f(u) \neq f(v)$$. The smallest number $$k$$ is called the neighbor sum distinguishing total chromatic number, denoted by $$\chi_{nsd}^{\prime\prime}(G)$$. M. Pilśniak and M. Woźniak [“On the adjacent-vertex-distinguishing index by sums in total proper colorings”, Preprint] conjectured that for any graph $$G$$ with at least two vertices, $$\chi_{nsd}^{\prime\prime}(G) \leqslant \Delta (G) + 3$$. In this paper, by using the famous combinatorial nullstellensatz, we show that $$\chi_{nsd}^{\prime\prime}(G) \leqslant 2 \Delta (G)+\mathrm{col}(G)-1$$, where $$\mathrm{col}(G)$$ is the coloring number of $$G$$. Moreover, we prove this assertion in its list version.
##### MSC:
05C15 Coloring of graphs and hypergraphs
Full Text:
##### References:
[1] Alon, N, Combinatorial nullstellensatz, Combin Probab Comput, 8, 7-29, (1999) · Zbl 0920.05026
[2] Bondy J A, Murty U S R. Graph Theory with Applications. New York: North-Holland, 1976 · Zbl 1226.05083
[3] Chen, X E, On the adjacent vertex distinguishing total coloring numbers of graphs with δ = 3, Discrete Math, 308, 4003-4007, (2008) · Zbl 1203.05052
[4] Cheng X H, Wu J L, Huang D J, et al. Neighbor sum distinguishing total colorings of planar graphs with maximum degree Δ. Submitted, 2013
[5] Dong, A J; Wang, G H, Neighbor sum distinguishing total colorings of graphs with bounded maximum average degree, (2013)
[6] Huang, D J; Wang, W F, Adjacent vertex distinguishing total coloring of planar graphs with large maximum degree (in Chinese), Sci Sin Math, 42, 151-164, (2012)
[7] Huang, P Y; Wong, T L; Zhu, X D, Weighted-1-antimagic graphs of prime power order, Discrete Math, 312, 2162-2169, (2012) · Zbl 1244.05186
[8] Kalkowski, M; Karoński, M; Pfender, F, Vertex-coloring edge-weightings: towards the 1-2-3-conjecture, J Combin Theory Ser B, 100, 347-349, (2010) · Zbl 1209.05087
[9] Karoński, M; Łuczak, T; Thomason, A, Edge weights and vertex colors, J Combin Theory Ser B, 91, 151-157, (2004) · Zbl 1042.05045
[10] Li, H L; Ding, L H; Liu, B Q; et al., Neighbor sum distinguishing total colorings of planar graphs, (2013)
[11] Li, H L; Liu, B Q; Wang, G H, Neighbor sum distinguishing total colorings of $$K_4$$-minor free graphs, Front Math China, 8, 1351-1366, (2013) · Zbl 1306.05066
[12] Pilśniak M, Woźniak M. On the adjacent-vertex-distinguishing index by sums in total proper colorings. Preprint, http://www.ii.uj.edu.pl/preMD/index.php · Zbl 1216.05135
[13] Przybyło, J, Irregularity strength of regular graphs, Electronic J Combin, 15, #r82, (2008) · Zbl 1163.05329
[14] Przybyło, J, Linear bound on the irregularity strength and the total vertex irregularity strength of graphs, SIAM J Discrete Math, 23, 511-516, (2009) · Zbl 1216.05135
[15] Przybyło, J, Neighbour distinguishing edge colorings via the combinatorial nullstellensatz, SIAM J Discrete Math, 27, 1313-1322, (2013) · Zbl 1290.05079
[16] Przybyło, J; Woźniak, M, Total weight choosability of graphs, Electronic J Combin, 18, #p112, (2011) · Zbl 1217.05202
[17] Przybyło, J; Woźniak, M, On a 1, 2 conjecture, Discrete Math Theor Comput Sci, 12, 101-108, (2010) · Zbl 1250.05093
[18] Scheim, E, The number of edge 3-coloring of a planar cubic graph as a permanent, Discrete Math, 8, 377-382, (1974) · Zbl 0281.05103
[19] Seamone B. The 1-2-3 conjecture and related problems: A survey. ArXiv:1211.5122, 2012
[20] Wang, W F; Huang, D J, The adjacent vertex distinguishing total coloring of planar graphs, (2012)
[21] Wang, W F; Wang, P, On adjacent-vertex-distinguishing total coloring of $$K_4$$-minor free graphs (in Chinese), Sci China Ser A, 39, 1462-1472, (2009)
[22] Wang, Y Q; Wang, W F, Adjacent vertex distinguishing total colorings of outerplanar graphs, J Combin Optim, 19, 123-133, (2010) · Zbl 1216.05039
[23] Wong, T L; Zhu, X D, Total weight choosability of graphs, J Graph Theory, 66, 198-212, (2011) · Zbl 1228.05161
[24] Wong, T L; Zhu, X D, Antimagic labelling of vertex weighted graphs, J Graph Theory, 3, 348-359, (2012) · Zbl 1244.05192
[25] Zhang, Z F; Chen, X E; Li, J W; et al., On adjacent-vertex-distinguishing total coloring of graphs, Sci China Ser A, 48, 289-299, (2005) · Zbl 1080.05036
This reference list is based on information provided by the publisher or from digital mathematics libraries. Its items are heuristically matched to zbMATH identifiers and may contain data conversion errors. It attempts to reflect the references listed in the original paper as accurately as possible without claiming the completeness or perfect precision of the matching. | 2021-05-16 00:58:50 | {"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 1, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.9272609353065491, "perplexity": 6671.928241155787}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2021-21/segments/1620243991488.53/warc/CC-MAIN-20210515223209-20210516013209-00258.warc.gz"} |
https://study.com/academy/answer/for-the-position-function-r-t-t-cos-t-i-plus-t-sin-t-j-plus-t-2-k-t-0-0-find-the-four-acceleration-components-t-t-0-a-t-t-0-a-n-t-0-and-n-t-0.html | # For the position function: r(t) = t cos (t) i + t sin(t) j + t^2 k , t_0 = 0. Find the four...
## Question:
For the position function: {eq}r(t) = t \cos(t)\, i + t \sin(t)\, j + t^2\, k, \quad t_0 = 0 {/eq}. Find the four acceleration components {eq}T(t_0),\ a_T(t_0),\ a_N(t_0),\ \text{and}\ N(t_0) {/eq}.
## Finding the Acceleration Components:
The tangential and normal components of acceleration, together with the unit tangent and unit normal vectors, are evaluated using the velocity and acceleration vectors. The parameter value {eq}\displaystyle t {/eq} should then be set to {eq}\displaystyle t_0 {/eq}.
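A quick numerical cross-check of the four quantities at t_0 = 0, using plain-Python central differences (a sketch independent of the symbolic route; the helper names are mine):

```python
import math

def r(t):
    # position r(t) = (t cos t, t sin t, t^2)
    return (t * math.cos(t), t * math.sin(t), t * t)

def deriv(f, t, h=1e-4):
    # componentwise central difference
    fp, fm = f(t + h), f(t - h)
    return tuple((p - m) / (2 * h) for p, m in zip(fp, fm))

def norm(u):
    return math.sqrt(sum(x * x for x in u))

def dot(u, v):
    return sum(a * b for a, b in zip(u, v))

def cross(u, v):
    return (u[1] * v[2] - u[2] * v[1],
            u[2] * v[0] - u[0] * v[2],
            u[0] * v[1] - u[1] * v[0])

t0 = 0.0
v = deriv(r, t0)                      # velocity r'(0)      ~ (1, 0, 0)
a = deriv(lambda s: deriv(r, s), t0)  # acceleration r''(0) ~ (0, 2, 2)

T = tuple(x / norm(v) for x in v)     # unit tangent T(0)           ~ (1, 0, 0)
aT = dot(v, a) / norm(v)              # tangential component a_T(0) ~ 0
aN = norm(cross(v, a)) / norm(v)      # normal component a_N(0)     ~ 2*sqrt(2)
N = tuple((x - aT * ti) / aN for x, ti in zip(a, T))  # N(0) ~ (0, 1/sqrt(2), 1/sqrt(2))
```

These agree with the hand computation: T(0) = i, a_T(0) = 0, a_N(0) = 2√2, and N(0) = (√2/2) j + (√2/2) k.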
We consider the given position function {eq}\displaystyle r(t) = t \cos (t) \vec{i} + t \sin(t) \vec{j} + t^2 \vec{k} , {/eq} at... | 2020-11-29 23:13:06 | {"extraction_info": {"found_math": false, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.8229881525039673, "perplexity": 3288.3560155712144}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2020-50/segments/1606141203418.47/warc/CC-MAIN-20201129214615-20201130004615-00398.warc.gz"} |
https://questions.examside.com/past-years/year-wise/gate/gate-ece/gate-ece-2016-set-3 |
## GATE ECE 2016 Set 3
Exam held in 2016.
Click View All Questions to see questions one by one or you can choose a single question from below.
## GATE ECE
The diodes D1 and D2 in the figure are ideal and the capacitors are identical. T...
Consider the circuit shown in the figure. Assuming V<sub>BE1</sub> = V<sub>EB2...
In the astable multivibrator circuit shown in the figure, the frequency of oscil...
For the circuit shown in the figure, R<sub>1</sub> = R<sub>2</sub> = R<sub>3</su...
For a superheterodyne receiver, the intermediate frequency is 15 MHz and the loc...
A wide sense stationary random process $$X(t)$$ passes through the $$LTI$$ syste...
The bit error probability of a memoryless binary symmetric channel is $${10^{ - ... An analog baseband signal, band limited to 100 Hz, is sampled at the Nyquist rat... A binary baseband digital communication system employs the signal$$$p\left( t... A voice-grade AWGN (additive white Gaussian noise) telephone channel has a bandw... For the unity feedback control system shown in the figure, the open-loop transfe... The forward-path transfer function and the feedback-path transfer function of a ... The block diagram of a feedback control system is shown in the figure. The overa... The first two rows in the Routh table for the characteristic equation of a certa... A second-order linear time-invariant system is described by the following state ... Following is the k-map of a Boolean function of five variable P, Q, R, S and X.... The minimum number of 2-input NAND gates required to implement a 2-input XOR ga... For the circuit shown in the figure, the delays of NOR gates, multiplexers and i... If a right-handed circularly polarized wave is incident normally on a plane perf... Faraday's law of electromagnetic induction is mathematically described by which ... Consider an air-filled rectangular waveguide with dimensions a = 2.286cm and b =... Consider an air-filled rectangular waveguide with dimensions a = 2.286 cm and b ... A radar operating at 5 GHz uses a common antenna for transmission and reception.... Consider the charge profile shown in the figure. The resultant potential distrib... The figure shows the I-V characteristics of a solar cell illuminated uniformly w... The injected excess electron concentration profile in the base region of an npn ... The figure shows the band diagram of a Metal Oxide Semiconductor (MOS). The surf... Figures $${\rm I}$$ and $${\rm I}{\rm I}$$ show two MOS capacitor of unit area. ... In the circuit shown in the figure, transistor M1 is in saturation and has trans... 
In the circuit shown in the figure, the channel length modulation of all transis... If the vectors $${e_1} = \left( {1,0,2} \right),\,{e_2} = \left( {0,1,0} \right)... Consider a$$2 \times 2$$square matrix$$A = \left[ {\matrix{ \sigma & ... A triangle in the $$xy-$$plane is bounded by the straight lines $$2x=3y, y=0$$ a... The integral $$\int\limits_0^1 {{{dx} \over {\sqrt {\left( {1 - x} \right)} }}} ... The probability of getting a ''head'' in a single toss of a biased coin is$$0.3... The particular solution of the initial value problem given below is $$\,\,{{{d^2... Consider the first order initial value problem$$\,y' = y + 2x - {x^2},\,\,y\lef... The value of the integral $${1 \over {2\pi j}}\oint\limits_C {{{{e^z}} \over {z ... For$$f\left( z \right) = {{\sin \left( z \right)} \over {{z^2}}},$$the residue... In an 8085 microprocessor, the contents of the accumulator and the carry flag ar... <p>In the figure shown, the current i (in ampere) is ______.</p> <img alt="GATE ... The z-parameter matrix$$\begin{bmatrix}z_{11}&z_{12}\\z_{21}&z_{22}\end... Assume that the circuit in the figure has reached the steady state before time t... In the $$RLC$$ circuit shown in the figure, the input voltage is given by v<sub>... The z-parameter matrix for the two-port network shown is $$\left[ {\matrix{ ... If the signal x(t) =$${{\sin (t)} \over {\pi t}}*{{\sin (t)} \over {\pi t}}$$w... A continuous-time speech signal$${x_a}(t)$$is sampled at a rate of 8 kHz and ... A discrete-time signal$$x\left[ n \right]\, = \delta \left[ {n - 3} \right]\, + ... The ROC (region of convergence) of the z-transform of a discrete-time signal is ... Consider the signal $$\,x\left( t \right)$$$\$\,\,\, = \,\,\,\cos \left( {6\pi t...
The direct form structure of an FIR (finite impulse response) filter is shown i...
## General Aptitude
The number that least fits this set: (324, 441, 97 and 64) is ________.
It takes 10 s and 15 s, respectively, for two trains traveling at different cons...
Class 12 | 2022-06-25 02:05:00 | {"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 1, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.955345630645752, "perplexity": 6975.139466172417}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2022-27/segments/1656103033925.2/warc/CC-MAIN-20220625004242-20220625034242-00742.warc.gz"} |
http://www.ams.org/mathscinet-getitem?mr=0225877 | MathSciNet bibliographic data MR225877 20.29 Dye, R. H. The simple group $FH(8,\,2)$ of order $2\sp{12}\cdot 3\sp{5}\cdot 5\sp{2}\cdot 7$ and the associated geometry of triality. Proc. London Math. Soc. (3) 18 1968 521–562. Article
For users without a MathSciNet license , Relay Station allows linking from MR numbers in online mathematical literature directly to electronic journals and original articles. Subscribers receive the added value of full MathSciNet reviews. | 2017-05-30 01:56:34 | {"extraction_info": {"found_math": true, "script_math_tex": 2, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.9892954230308533, "perplexity": 6143.071110361439}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 20, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2017-22/segments/1495463613738.67/warc/CC-MAIN-20170530011338-20170530031338-00440.warc.gz"} |
https://math.libretexts.org/Bookshelves/Algebra/Book%3A_Advanced_Algebra_(Redden)/7%3A_Exponential_and_Logarithmic_Functions/7.E%3A_Exponential_and_Logarithmic_Functions_(Exercises) |
# 7.E: Exponential and Logarithmic Functions (Exercises)
Exercise $$\PageIndex{1}$$
Given $$f$$ and $$g$$ find $$(f \circ g)(x)$$ and $$(g \circ f)(x)$$.
1. $$f(x)=6 x-5, g(x)=2 x+1$$
2. $$f(x)=5-6 x, g(x)=\frac{3}{2} x$$
3. $$f(x)=2 x^{2}+x-2, g(x)=5 x$$
4. $$f(x)=x^{2}-x-6, g(x)=x-3$$
5. $$f(x)=\sqrt{x+2}, g(x)=8 x-2$$
6. $$f(x)=\frac{x-1}{3 x-1}, g(x)=\frac{1}{x}$$
7. $$f(x)=x^{2}+3 x-1, g(x)=\frac{1}{x-2}$$
8. $$f(x)=\sqrt[3]{3(x+2)}, g(x)=9 x^{3}-2$$
1. $$(f \circ g)(x)=12 x+1 ;(g \circ f)(x)=12 x-9$$
3. $$\begin{array}{l}{(f \circ g)(x)=50 x^{2}+5 x-2}; \: {(g \circ f)(x)=10 x^{2}+5 x-10}\end{array}$$
5. $$(f\circ g)(x)=2\sqrt{2x};\:(g\circ f)(x)=8\sqrt{x+2}-2$$
7. $$\begin{array}{c}{(f \circ g)(x)=-\frac{x^{2}-7 x+9}{(x-2)^{2}}}; \: {\left(g \circ f\right)(x)=\frac{1}{x^{2}+3 x-3}}\end{array}$$
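The first composition above can be checked symbolically; a short SymPy sketch (the names `fog`/`gof` are mine):

```python
import sympy as sp

x = sp.symbols('x')

def f(u):
    return 6 * u - 5

def g(u):
    return 2 * u + 1

fog = sp.expand(f(g(x)))   # (f o g)(x) = 6(2x + 1) - 5
gof = sp.expand(g(f(x)))   # (g o f)(x) = 2(6x - 5) + 1
```

Both results match the answer key: $12x + 1$ and $12x - 9$.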
Exercise $$\PageIndex{2}$$
Are the given functions one-to-one? Explain.
1.
2.
3.
4.
1. No, it fails the horizontal line test (HLT)
3. Yes, it passes the HLT
Exercise $$\PageIndex{3}$$
Verify algebraically that the two given functions are inverses. In other words, show that $$\left(f \circ f^{-1}\right)(x)=x$$ and $$\left(f^{-1} \circ f\right)(x)=x$$.
1. $$f(x)=6 x-5, f^{-1}(x)=\frac{1}{6} x+\frac{5}{6}$$
2. $$f(x)=\sqrt{2 x+3}, f^{-1}(x)=\frac{x^{2}-3}{2}, x \geq 0$$
3. $$f(x)=\frac{x}{3 x-2}, f^{-1}(x)=\frac{2 x}{3 x-1}$$
4. $$f(x)=\sqrt[3]{x+3}-4, f^{-1}(x)=(x+4)^{3}-3$$
1. Proof
3. Proof
Exercise $$\PageIndex{4}$$
Find the inverses of each function defined as follows:
1. $$f(x)=-7 x+3$$
2. $$f(x)=\frac{2}{3} x-\frac{1}{2}$$
3. $$g(x)=x^{2}-12, x \geq 0$$
4. $$g(x)=(x-1)^{3}+5$$
5. $$g(x)=\frac{2}{x-1}$$
6. $$h(x)=\frac{x+5}{x-5}$$
7. $$h(x)=\frac{3 x-1}{x}$$
8. $$p(x)=\sqrt[3]{5 x}+3$$
9. $$h(x)=\sqrt[3]{2 x-7}+2$$
10. $$h(x)=\sqrt[5]{x+2}-3$$
1. $$f^{-1}(x)=-\frac{1}{7} x+\frac{3}{7}$$
3. $$g^{-1}(x)=\sqrt{x+12}$$
5. $$g^{-1}(x)=\frac{x+2}{x}$$
7. $$h^{-1}(x)=-\frac{1}{x-3}$$
9. $$h^{-1}(x)=\frac{(x-2)^{3}+7}{2}$$
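Two of these inverses (problems 1 and 5) are easy to verify symbolically, following the same check as Exercise 3 above; a SymPy sketch:

```python
import sympy as sp

x = sp.symbols('x')

# Problem 1: f(x) = -7x + 3 with inverse -x/7 + 3/7
f = lambda u: -7 * u + 3
f_inv = lambda u: -sp.Rational(1, 7) * u + sp.Rational(3, 7)

# Problem 5: g(x) = 2/(x - 1) with inverse (x + 2)/x
g = lambda u: 2 / (u - 1)
g_inv = lambda u: (u + 2) / u

checks = [
    sp.simplify(f(f_inv(x))),
    sp.simplify(f_inv(f(x))),
    sp.simplify(g(g_inv(x))),
    sp.simplify(g_inv(g(x))),
]   # each should simplify to x
```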
Exercise $$\PageIndex{5}$$
Evaluate.
1. $$f(x)=5^{x} ;$$ find $$f(-1), f(0),$$ and $$f(3).$$
2. $$f(x)=\left(\frac{1}{2}\right)^{x} ;$$ find $$f(-4), f(0),$$ and $$f(-3).$$
3. $$g(x)=10^{-x} ;$$ find $$g(-5), g(0),$$ and $$g(2).$$
4. $$g(x)=1-3^{x} ;$$ find $$g(-2), g(0),$$ and $$g(3).$$
1. $$f(-1)=\frac{1}{5}, f(0)=1, f(3)=125$$
3. $$g(-5)=100,000, g(0)=1, g(2)=\frac{1}{100}$$
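These evaluations are easy to confirm programmatically; a minimal sketch for problems 1 and 3:

```python
def f(x):
    return 5 ** x       # f(x) = 5^x

def g(x):
    return 10 ** (-x)   # g(x) = 10^(-x)
```

For example, `f(3)` gives `125` and `g(-5)` gives `100000`, matching the answers above.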
Exercise $$\PageIndex{6}$$
Sketch the exponential function. Draw the horizontal asymptote with a dashed line.
1. $$f(x)=5^{x}+10$$
2. $$f(x)=5^{x-4}$$
3. $$f(x)=-3^{x}-9$$
4. $$f(x)=3^{x+2}+6$$
5. $$f(x)=\left(\frac{1}{3}\right)^{x}$$
6. $$f(x)=\left(\frac{1}{2}\right)^{x}-4$$
7. $$f(x)=2^{-x}+3$$
8. $$f(x)=1-3^{-x}$$
1.
3.
5.
7.
Exercise $$\PageIndex{7}$$
Use a calculator to evaluate the following. Round off to the nearest hundredth.
1. $$f(x)=e^{x}+1 ;$$ find $$f(-3), f(-1),$$ and $$f\left(\frac{1}{2}\right)$$.
2. $$g(x)=2-3 e^{x} ;$$ find $$g(-1), g(0),$$ and $$g\left(\frac{2}{3}\right)$$.
3. $$p(x)=1-5 e^{-x} ;$$ find $$p(-4), p\left(-\frac{1}{2}\right),$$ and $$p(0)$$.
4. $$r(x)=e^{-2 x}-1 ;$$ find $$r(-1), r\left(\frac{1}{4}\right),$$ and $$r(2)$$.
1. $$f(-3) \approx 1.05, f(-1) \approx 1.37, f\left(\frac{1}{2}\right) \approx 2.65$$
3. $$p(-4) \approx-271.99, p\left(-\frac{1}{2}\right) \approx-7.24, p(0)=-4$$
Exercise $$\PageIndex{8}$$
Sketch the function. Draw the horizontal asymptote with a dashed line.
1. $$f(x)=e^{x}+4$$
2. $$f(x)=e^{x-4}$$
3. $$f(x)=e^{x+3}+2$$
4. $$f(x)=e^{-x}+5$$
5. Jerry invested $$\$6,250$$ in an account earning $$3 \frac{5}{8}$$% annual interest that is compounded monthly. How much will be in the account after $$4$$ years?
6. Jose invested $$\$7,500$$ in an account earning $$4 \frac{1}{4}$$% annual interest that is compounded continuously. How much will be in the account after $$3 \frac{1}{2}$$ years?
7. A $$14$$-gram sample of radioactive iodine is accidently released into the atmosphere. The amount of the substance in grams is given by the formula $$P (t) = 14e^{ −0.087t}$$, where $$t$$ represents the time in days after the sample was released. How much radioactive iodine will be present in the atmosphere $$30$$ days after it was released?
8. The number of cells in a bacteria sample is given by the formula $$N(t)=\frac{2.4 \times 10^{5}}{1+9 e^{-0.28t}}$$, where $$t$$ represents the time in hours since the initial placement of $$24,000$$ cells. Use the formula to calculate the number of cells in the sample $$20$$ hours later.
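The two word problems with single numeric answers (items 5 and 7) can be checked directly with the compound-interest and exponential-decay formulas:

```python
import math

# Item 5: $6,250 at 3 5/8% annual interest, compounded monthly, for 4 years:
# A = P (1 + r/n)^(n t)
P, r, n, t = 6250, 0.03625, 12, 4
A = P * (1 + r / n) ** (n * t)           # ≈ 7223.67

# Item 7: P(t) = 14 e^(-0.087 t); amount left 30 days after release
remaining = 14 * math.exp(-0.087 * 30)   # ≈ 1.03 grams
```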
1.
3.
5. $$$7,223.67$$ 7. Approximately $$1$$ gram Exercise $$\PageIndex{9}$$ Evaluate. 1. $$\log _{4} 16$$ 2. $$\log _{3} 27$$ 3. $$\log _{2}\left(\frac{1}{32}\right)$$ 4. $$\log \left(\frac{1}{10}\right)$$ 5. $$\log _{1 / 3} 9$$ 6. $$\log _{3 / 4}\left(\frac{4}{3}\right)$$ 7. $$\log _{7} 1$$ 8. $$\log _{3}(-3)$$ 9. $$\log _{4} 0$$ 10. $$\log _{3} 81$$ 11. $$\log _{6} \sqrt{6}$$ 12. $$\log _{5} \sqrt[3]{25}$$ 13. $$\ln e^{8}$$ 14. $$\ln \left(\frac{1}{e^{5}}\right)$$ 15. $$\log (0.00001)$$ 16. $$\log 1,000,000$$ Answer 1. $$2$$ 3. $$−5$$ 5. $$−2$$ 7. $$0$$ 9. Undefined 11. $$\frac{1}{2}$$ 13. $$8$$ 15. $$−5$$ Exercise $$\PageIndex{10}$$ Find $$x$$. 1. $$\log _{5} x=3$$ 2. $$\log _{3} x=-4$$ 3. $$\log _{2 / 3} x=3$$ 4. $$\log _{3} x=\frac{2}{5}$$ 5. $$\log x=-3$$ 6. $$\ln x=\frac{1}{2}$$ Answer 1. $$125$$ 3. $$\frac{8}{27}$$ 5. $$0.001$$ Exercise $$\PageIndex{11}$$ Sketch the graph of the logarithmic function. Draw the vertical asymptote with a dashed line. 1. $$f(x)=\log _{2}(x-5)$$ 2. $$f(x)=\log _{2} x-5$$ 3. $$g(x)=\log _{3}(x+5)+15$$ 4. $$g(x)=\log _{3}(x-5)-5$$ 5. $$h(x)=\log _{4}(-x)+1$$ 6. $$h(x)=3-\log _{4} x$$ 7. $$g(x)=\ln (x-2)+3$$ 8. $$g(x)=\ln (x+3)-1$$ 9. The population of a certain small town is growing according to the function $$P (t) = 89,000(1.035)^{t}$$, where $$t$$ represents time in years since the last census. Use the function to estimate the population $$8 \frac{1}{2}$$ years after the census was taken. 10. The volume of sound $$L$$ in decibels (dB) is given by the formula $$L=10 \log \left(I / 10^{-12}\right)$$, where $$I$$ represents the intensity of the sound in watts per square meter. Determine the volume of a sound with an intensity of $$0.5$$ watts per square meter. Answer 1. 3. 5. 7. 9. $$119,229$$ people Exercise $$\PageIndex{12}$$ Evaluate without using a calculator. 1. $$\log _{9} 9$$ 2. $$\log _{8} 1$$ 3. $$\log _{1 / 3} 3$$ 4. $$\log \left(\frac{1}{10}\right)$$ 5. $$e^{\ln 17}$$ 6. $$10^{\log 27}$$ 7. $$\ln e^{63}$$ 8. 
$$\log 10^{33}$$

Answer

1. $$1$$
3. $$−1$$
5. $$17$$
7. $$63$$

Exercise $$\PageIndex{13}$$

Expand completely.

1. $$\log \left(100 x^{2}\right)$$
2. $$\log _{5}\left(5 x^{3}\right)$$
3. $$\log _{3}\left(\frac{3 x^{5}}{5}\right)$$
4. $$\ln \left(\frac{10}{3 x^{2}}\right)$$
5. $$\log _{2}\left(\frac{8 x^{2}}{y^{2} z}\right)$$
6. $$\log \left(\frac{x^{10}}{10 y^{3} z^{4}}\right)$$
7. $$\ln \left(\frac{3 b \sqrt{a}}{c^{4}}\right)$$
8. $$\log \left(\frac{20 y^{3}}{\sqrt[3]{x^{2}}}\right)$$

Answer

1. $$2+2 \log x$$
3. $$1+5 \log _{3} x-\log _{3} 5$$
5. $$3+2 \log _{2} x-2 \log _{2} y-\log _{2} z$$
7. $$\ln 3+\ln b+\frac{1}{2} \ln a-4 \ln c$$

Exercise $$\PageIndex{14}$$

Write as a single logarithm with coefficient $$1$$.

1. $$\log x+2 \log y-3 \log z$$
2. $$\log _{2} 5-3 \log _{2} x+4 \log _{2} y$$
3. $$-2 \log _{5} x+\log _{5} y-5 \log _{5}(x-1)$$
4. $$\ln x-\ln (x-1)-\ln (x+1)$$
5. $$3 \log _{2} x+\frac{1}{2} \log _{2} y-\frac{2}{3} \log _{2} z$$
6. $$\frac{1}{3} \log x-3 \log y-\frac{3}{5} \log z$$
7. $$\log _{5} 4+5 \log _{5} x-\frac{1}{3}\left(\log _{5} y+2 \log _{5} z\right)$$
8. $$\ln x-\frac{1}{2}(\ln y-4 \ln z)$$

Answer

1. $$\log \left(\frac{x y^{2}}{z^{3}}\right)$$
3. $$\log _{5}\left(\frac{y}{x^{2}(x-1)^{5}}\right)$$
5. $$\log _{2}\left(\frac{x^{3} \sqrt{y}}{\sqrt[3]{z^{2}}}\right)$$
7. $$\log _{5}\left(\frac{4 x^{5}}{\sqrt[3]{y z^{2}}}\right)$$

Exercise $$\PageIndex{15}$$

Solve. Give the exact answer and the approximate answer rounded to the nearest hundredth where appropriate.

1. $$5^{2 x+1}=125$$
2. $$10^{3 x-2}=100$$
3. $$9^{x-3}=81$$
4. $$16^{2 x+3}=8$$
5. $$5^{x}=7$$
6. $$3^{2 x}=5$$
7. $$10^{x+2}-3=7$$
8. $$e^{2 x-1}+2=3$$
9. $$7^{4 x-1}-2=9$$
10. $$3^{5 x-2}+5=7$$
11. $$3-e^{4 x}=2$$
12. $$5+e^{3 x}=4$$
13. $$\frac{4}{1+e^{5 x}}=2$$
14. $$\frac{100}{1+e^{3 x}}=\frac{1}{2}$$

Answer

1. $$1$$
3. $$5$$
5. $$\frac{\log (7)}{\log (5)} \approx 1.21$$
7. $$-1$$
9. $$\frac{\log 7+\log 11}{4 \log 7} \approx 0.56$$
11. $$0$$
13.
$$0$$

Exercise $$\PageIndex{16}$$

Use the change of base formula to approximate the following to the nearest tenth.

1. $$\log _{5} 13$$
2. $$\log _{2} 27$$
3. $$\log _{4} 5$$
4. $$\log _{9} 0.81$$
5. $$\log _{1 / 4} 21$$
6. $$\log _{2} \sqrt[3]{5}$$

Answer

1. $$1.6$$
3. $$1.2$$
5. $$-2.2$$

Exercise $$\PageIndex{17}$$

Solve.

1. $$\log _{2}(3 x-5)=\log _{2}(2 x+7)$$
2. $$\ln (7 x)=\ln (x+8)$$
3. $$\log _{5} 8-2 \log _{5} x=\log _{5} 2$$
4. $$\log _{3}(x+2)+\log _{3}(x)=\log _{3} 8$$
5. $$\log _{5}(2 x-1)=2$$
6. $$2 \log _{4}(3 x-2)=4$$
7. $$2=\log _{2}\left(x^{2}-4\right)-\log _{2} 3$$
8. $$\log _{2}(x-1)+\log _{2}(x+1)=3$$
9. $$\log _{2} x+\log _{2}(x-1)=1$$
10. $$\log _{4}(x+5)+\log _{4}(x+11)=2$$
11. $$\log (2 x+5)-\log (x-1)=1$$
12. $$\ln x-\ln (2 x-1)=1$$
13. $$2 \log _{2}(x+4)=\log _{2}(x+2)+3$$
14. $$2 \log _{3} x=1+\log _{3}(x+6)$$
15. $$\log _{3}(x+1)-2 \log _{3} x=1$$
16. $$\log _{5}(2 x)+\log _{5}(x-1)=1$$

Answer

1. $$12$$
3. $$2$$
5. $$13$$
7. $$±4$$
9. $$2$$
11. $$\frac{15}{8}$$
13. $$0$$
15. $$\frac{1+\sqrt{13}}{6}$$

Exercise $$\PageIndex{18}$$

Solve.

1. An amount of $$\$3,250$$ is invested in an account that earns $$4.6$$% annual interest that is compounded monthly. Estimate the number of years for the amount in the account to reach $$\$4,000$$.
2. An amount of $$\$2,500$$ is invested in an account that earns $$5.5$$% annual interest that is compounded continuously. Estimate the number of years for the amount in the account to reach $$\$3,000$$.
3. How long does it take to double an investment made in an account that earns $$6 \frac{3}{4}$$% annual interest that is compounded continuously?
4. How long does it take to double an investment made in an account that earns $$6 \frac{3}{4}$$% annual interest that is compounded semi-annually?
5. In the year 2000 a certain small town had a population of $$46,000$$ people. In the year 2010 the population was estimated to have grown to $$92,000$$ people.
If the population continues to grow exponentially at this rate, estimate the population in the year 2016.
6. A fleet van was purchased new for $$\$28,000$$ and $$2$$ years later it was valued at $$\$20,000$$. If the value of the van continues to decrease exponentially at this rate, determine its value $$7$$ years after it is purchased new.
7. A website that has been in decline registered $$4,200$$ unique visitors last month and $$3,600$$ unique visitors this month. If the number of unique visitors continues to decline exponentially, how many unique visitors would you expect next month?
8. An initial population of $$18$$ rabbits was introduced into a wildlife preserve. The number of rabbits doubled in the first year. If the rabbit population continues to grow exponentially at this rate, how many rabbits will be present $$5$$ years after they were introduced?
9. The half-life of sodium-24 is about $$15$$ hours. How long will it take a $$50$$-milligram sample to decay to $$10$$ milligrams?
10. The half-life of radium-226 is about $$1,600$$ years. How long will it take an initial sample to decay to $$30$$% of the original amount?
11. An archeologist discovered a bone tool artifact. After analysis, the artifact was found to contain $$62$$% of the carbon-14 normally found in bone from the same animal. Given that carbon-14 has a half-life of $$5,730$$ years, estimate the age of the artifact.
12. The half-life of radioactive iodine-131 is about $$8$$ days. What percentage of an initial sample accidentally released into the atmosphere do we expect to remain after $$53$$ days?

Answer

1. $$4.5$$ years
3. $$10.27$$ years
5. About $$139,446$$ people
7. $$3,086$$ unique visitors
9. $$35$$ hours
11. About $$3,952$$ years old

## Sample Exam

Exercise $$\PageIndex{19}$$

1. Given $$f(x)=x^{2}-x+3$$ and $$g(x)=3 x-1$$ find $$(f \circ g)(x)$$.
2. Show that $$f(x)=\sqrt[3]{7 x-2}$$ and $$g(x)=\frac{x^{3}+2}{7}$$ are inverses.

Answer

1.
$$(f \circ g)(x)=9 x^{2}-9 x+5$$

Exercise $$\PageIndex{20}$$

Find the inverse of the following functions:

1. $$f(x)=\frac{1}{2} x-3$$
2. $$h(x)=x^{2}+3$$ where $$x \geq 0$$

Answer

1. $$f^{-1}(x)=2 x+6$$

Exercise $$\PageIndex{21}$$

Sketch the graph.

1. $$f(x)=e^{x}-5$$
2. $$g(x)=10^{-x}$$
3. Joe invested $$\$5,200$$ in an account earning $$3.8$$% annual interest that is compounded monthly. How much will be in the account at the end of $$4$$ years?
4. Mary has $$\$3,500$$ in a savings account earning $$4 \frac{1}{2}$$% annual interest that is compounded continuously. How much will be in the account at the end of $$3$$ years?

Answer

1.
3. $$\$6,052.18$$
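The exact answers to the exponential equations of Exercise 15 above (for example items 5 and 9) can be cross-checked numerically:

```python
import math

# Item 5: 5^x = 7  =>  x = log 7 / log 5
x5 = math.log(7) / math.log(5)                         # ≈ 1.21

# Item 9: 7^(4x - 1) - 2 = 9  =>  7^(4x - 1) = 11
#         =>  x = (log 7 + log 11) / (4 log 7)
x9 = (math.log(7) + math.log(11)) / (4 * math.log(7))  # ≈ 0.56
```

Substituting each value back into its equation recovers the right-hand side.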
Exercise $$\PageIndex{22}$$
Evaluate.
1. $$\log _{3} 81$$
2. $$\log _{2}\left(\frac{1}{4}\right)$$
3. $$\log 1,000$$
4. $$\ln e$$
5. $$\log _{4} 2$$
6. $$\log _{9}\left(\frac{1}{3}\right)$$
7. $$\ln e^{3}$$
8. $$\log _{1 / 5} 25$$
1. $$4$$
2. $$-2$$
3. $$3$$
4. $$1$$
Exercise $$\PageIndex{23}$$
Sketch the graph.
1. $$f(x)=\log _{4}(x+5)+2$$
2. $$f(x)=-\ln (x-2)$$
1.
Exercise $$\PageIndex{24}$$
1. Expand: $$\log \left(\frac{100 x^{2} y}{\sqrt{z}}\right)$$.
2. Write as a single logarithm with coefficient $$1$$: $$2 \log _{2} x+\frac{1}{3} \log _{2} y-3 \log _{2} z$$.
1. $$2+2 \log x+\log y-\frac{1}{2} \log z$$
Exercise $$\PageIndex{25}$$
Evaluate. Round off to the nearest tenth.
1. $$\log _{2} 10$$
2. $$\ln 1$$
3. $$\log _{3}\left(\frac{1}{5}\right)$$
1. $$3.3$$
2. $$0$$
3. $$-1.5$$
Exercise $$\PageIndex{26}$$
Solve:
1. $$2^{3 x-1}=16$$
2. $$3^{7 x+1}=5$$
3. $$\log _{5}(3 x-4)=\log _{5}(2 x+7)$$
4. $$\log _{3}\left(x^{2}+26\right)=3$$
5. $$\log _{2} x+\log _{2}(2 x+7)=2$$
6. $$\log (2 x+3)=1+\log (x+1)$$
7. Joe invested $$\$5,200$$ in an account earning $$3.8$$% annual interest that is compounded monthly. How long will it take to accumulate a total of $$\$6,200$$ in the account?
8. Mary has $$\$3,500$$ in a savings account earning $$4 \frac{1}{2}$$% annual interest that is compounded continuously. How long will it take to double the amount in the account?
9. During the exponential growth phase, certain bacteria can grow at a rate of $$5.3$$% per hour. If $$12,000$$ cells are initially present in a sample, construct an exponential growth model and use it to:
1. Estimate the population of bacteria in $$3.5$$ hours.
2. Estimate the time it will take the population to double.
10. The half-life of caesium-137 is about $$30$$ years. Approximate the time it will take a $$20$$-milligram sample of caesium-137 to decay to $$8$$ milligrams.
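The continuous-growth items above (8–10) all reduce to solving $A = P e^{rt}$ or its half-life form; a quick numeric check (for item 9, $P(t) = 12{,}000\,e^{0.053t}$ is one way to read "grow at a rate of 5.3% per hour", so the values for that item are illustrative):

```python
import math

# Item 8: time to double at 4 1/2% continuous interest: 2P = P e^(0.045 t)
t_double = math.log(2) / 0.045                 # ≈ 15.4 years

# Item 9: growth model P(t) = 12,000 e^(0.053 t)  (assumed form)
P_3_5 = 12_000 * math.exp(0.053 * 3.5)         # (a) population after 3.5 hours
t_pop_double = math.log(2) / 0.053             # (b) doubling time in hours

# Item 10: caesium-137, half-life 30 years, 20 mg decaying to 8 mg
t_caesium = 30 * math.log(20 / 8) / math.log(2)  # ≈ 40 years
```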
2. $$\frac{\log 5-\log 3}{7 \log 3}$$
4. $$\pm 1$$
6. $$-\frac{7}{8}$$
8. $$15.4$$ years
10. $$40$$ years | 2019-11-16 20:59:13 | {"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 1, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.8161367774009705, "perplexity": 225.97903710384938}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2019-47/segments/1573496668765.35/warc/CC-MAIN-20191116204950-20191116232950-00248.warc.gz"} |
https://www.r-bloggers.com/2019/03/how-to-speed-up-gradient-boosting-by-a-factor-of-two/ | # How to Speed Up Gradient Boosting by a Factor of Two
## Introduction
At STATWORX, we are not only helping customers find, develop, and implement a suitable data strategy but we also spend some time doing research to improve our own tool stack. This way, we can give back to the open-source community.
One special focus of my research lies in tree-based ensemble methods. These algorithms are bread and butter tools in predictive learning and you can find them as a standalone model or as an ingredient to an ensemble in almost every supervised learning Kaggle challenge. Renowned models are Random Forest by Leo Breiman, Extremely Randomized Trees (Extra-Trees) by Pierre Geurts, Damien Ernst & Louis Wehenkel, and Multiple Additive Regression Trees (MART; also known as Gradient Boosted Trees) by Jerome Friedman.
One thing I was particularly interested in was how much randomization techniques have helped improve prediction performance in all of the algorithms named above. In Random Forest and Extra-Trees, it is quite obvious. Here, randomization is the reason why the ensembles offer an improvement over bagging; through the de-correlation of the base learners, the variance of the ensemble and therefore its prediction error decreases. In the end, you achieve de-correlation by "shaking up" the base trees, as is done in the two ensembles. However, MART also profits from randomization. In 2002, Friedman published another paper on boosting, showing that you can improve the prediction performance of boosted trees by training each tree on only a random subsample of your data. As a side effect, your training time also decreases. Furthermore, in 2015, Rashmi and Gilad suggested adding dropout, a regularization technique borrowed from neural networks, to the boosting ensemble.
## The Idea behind Random Boost
Inspired by theoretical readings on randomization techniques in boosting, I developed a new algorithm that I called Random Boost (RB). In its essence, Random Boost sequentially grows regression trees with random depth. More precisely, the algorithm is almost identical to MART and has the exact same input arguments. The only difference is the tree size parameter $d_{max}$. In MART, $d_{max}$ determines the maximum depth of all trees in the ensemble. In Random Boost, the argument constitutes the upper bound of possible tree sizes. In each boosting iteration $m$, a random number $d_m$ between $1$ and $d_{max}$ is drawn, which then defines the maximum depth of tree $m$.
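The mechanism can be sketched in a few lines. The following is a toy illustration of the idea, not the author's actual implementation (which extends sklearn's GradientBoosting* classes): a plain least-squares boosting loop in which each tree's maximum depth is drawn uniformly from 1 to `max_depth`.

```python
import numpy as np
from sklearn.datasets import make_friedman1
from sklearn.tree import DecisionTreeRegressor

def random_boost_fit(X, y, n_estimators=100, learning_rate=0.1,
                     max_depth=4, random_state=0):
    """Least-squares boosting where each tree gets a random max depth."""
    rng = np.random.RandomState(random_state)
    f0 = y.mean()                                # constant initial model
    pred = np.full(len(y), f0)
    trees = []
    for _ in range(n_estimators):
        depth = rng.randint(1, max_depth + 1)    # the Random Boost twist
        tree = DecisionTreeRegressor(max_depth=depth, random_state=random_state)
        tree.fit(X, y - pred)                    # fit residuals (L2 negative gradient)
        pred += learning_rate * tree.predict(X)
        trees.append(tree)
    return f0, trees

def random_boost_predict(model, X, learning_rate=0.1):
    f0, trees = model
    return f0 + learning_rate * sum(tree.predict(X) for tree in trees)

# quick check on synthetic data (sklearn's Friedman #1 generator)
X_demo, y_demo = make_friedman1(n_samples=200, random_state=0)
model = random_boost_fit(X_demo, y_demo)
mse_boost = np.mean((y_demo - random_boost_predict(model, X_demo)) ** 2)
mse_mean = np.mean((y_demo - y_demo.mean()) ** 2)
```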
In comparison to MART, this has two advantages:
First, RB is faster than MART on average when equipped with the same value for the tree size parameter. When RB and MART are both trained with a maximum tree depth of $d$, Random Boost will in many cases grow trees shallower than $d$ by nature. If you assume that for MART all trees are grown to their full size (i.e. there is enough data left in each internal node so that tree growing doesn't stop before the maximum size is reached), and that the cost of growing a full tree of depth $k$ scales with $2^k$, you can derive a formula showing the relative computation time of RB over MART:

$$\frac{T_{RB}(d)}{T_{MART}(d)} = \frac{1}{d}\sum_{k=1}^{d}\frac{2^{k}}{2^{d}} = \frac{2 - 2^{1-d}}{d},$$

where, e.g., $T_{RB}(d)$ is the training time of an RB boosting ensemble with the tree size parameter equal to $d$.

To make it a bit more practical, the formula predicts that for $d = 2$, $d = 4$, and $d = 8$, RB takes about 75%, 47%, and 25% of the computation time of MART, respectively. These predictions, however, should be seen as RB's best-case scenario, as MART does not necessarily grow full trees either. Still, the calculations suggest that efficiency gains can be expected (more on that later).
Second, there are also reasons to assume that randomizing over tree depths can have a beneficial effect on prediction performance. As already mentioned, from a variance perspective, boosting suffers from overcapacity for various reasons. One of them is choosing too rich a base learner in terms of depth. If, for example, one assumes that the dominant interaction in the data generating process is of order three, one would pick a tree of depth three in MART in order to capture this interaction depth. However, this may be overkill, as fully grown trees of depth three have eight leaves and therefore learn noise in the data if there are only a few of such high order interactions. Perhaps, in this case, a tree of depth three but with fewer than eight leaves would be optimal. This is not accounted for in MART, unless one adds a pruning step to each boosting iteration at the expense of computational overhead. Random Boost may offer a more efficient remedy to this issue. With some probability, a tree deep enough to capture the high order effect is grown, at the cost of also learning noise. However, in all the other cases, Random Boost constructs smaller trees that do not show the overcapacity behavior and that can focus on interactions of smaller order. If overcapacity is an issue in MART because the data is governed by a small number of high order interactions, Random Boost may perform better than MART. Furthermore, Random Boost also decorrelates trees through the extra source of randomness, which has a variance-reducing effect on the ensemble.
The concept of Random Boost constitutes a slight change to MART. I used the sklearn package as a basis for my code. As a result, the algorithm is developed based on sklearn.ensemble.GradientBoostingRegressor and sklearn.ensemble.GradientBoostingClassifier and is used in exactly the same way (i.e. argument names match exactly, and CV can be carried out with sklearn.model_selection.GridSearchCV). The only difference is that the RandomBoosting*-object uses max_depth to randomly draw tree depths for each iteration. As an example, you can use it like this:
```python
# RandomBoostingRegressor is defined in the author's GitHub repository
rb = RandomBoostingRegressor(learning_rate=0.1, max_depth=4, n_estimators=100)
rb = rb.fit(X_train, y_train)
rb.predict(X_test)
```
For the full code, check out my GitHub account.
## Random Boost versus MART – A Simulation Study
In order to compare the two algorithms, I ran a simulation on 25 datasets generated by a Random Target Function Generator that was introduced by Jerome Friedman in his famous boosting paper from 2001 (you can find the details in his paper; Python code can be found here). Each dataset (containing 20,000 observations) was randomly split into a 25% test set and a 75% training set. RB and MART were tuned via 5-fold CV on the same tuning grid.
• learning_rate = 0.1
• max_depth = (2, 3, ..., 8)
• n_estimators = (100, 105, 110, ..., 195)
For each dataset, I tuned both models to obtain the best parameter constellation. Then, I trained each model on every point of the tuning grid again and saved the test MAE as well as the total training time in seconds. Why did I train every model again instead of simply storing the prediction accuracy of the tuned models along with the overall tuning time? Well, I wanted to be able to see how training time varies with the tree size parameter.
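The tuning setup can be reproduced with sklearn's GridSearchCV. The sketch below uses GradientBoostingRegressor as a stand-in for both models (swap in RandomBoostingRegressor from the repository for RB) and a deliberately reduced grid and dataset so it runs quickly; the actual study used max_depth 2–8, n_estimators 100–195 in steps of 5, and 20,000 observations per dataset.

```python
from sklearn.datasets import make_friedman1
from sklearn.ensemble import GradientBoostingRegressor
from sklearn.model_selection import GridSearchCV

X, y = make_friedman1(n_samples=500, random_state=42)

param_grid = {
    "max_depth": [2, 3],          # study: range(2, 9)
    "n_estimators": [50, 100],    # study: range(100, 200, 5)
}
search = GridSearchCV(
    GradientBoostingRegressor(learning_rate=0.1, random_state=42),
    param_grid,
    cv=5,
    scoring="neg_mean_absolute_error",
)
search.fit(X, y)
```

After fitting, `search.best_params_` holds the winning constellation and `search.cv_results_` the per-combination scores and fit times.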
### A Comparison of Prediction Accuracies
You can see the distribution of MAEs of the best models on all 25 datasets below.
Evidently, both algorithms perform similarly.
For a better comparison, I compute the relative difference between the predictive performance of RB and MART for each dataset $i$, i.e. $\Delta MAE_i = (MAE_i^{RB} - MAE_i^{MART}) / MAE_i^{MART}$. If $\Delta MAE_i > 0$, then RB had a larger mean absolute error than MART on dataset $i$, and vice versa.
In the majority of cases, RB did worse than MART in terms of prediction accuracy ($\Delta MAE_i > 0$). In the worst case, RB had a 1% higher MAE than MART; in the median, RB had a 0.19% higher MAE. I'll leave it up to you to decide whether that difference is practically significant.
### A comparison of Training times
When we look at training time, we get a quite clear picture. In absolute terms, it took 433 seconds to train all parameter combinations of RB on average, as opposed to 803 seconds for MART.
The small black lines on top of each bar are the error bars (two times the mean's standard deviation; rather small in this case).
To give you a better feeling of how each model performed on each dataset, I also plotted the training times for each round.
If you now compute the training time ratio between MART and RB (t_MART / t_RB), you see that RB is roughly 1.8 times faster than MART on average.
Another perspective is to compute the relative training time r = t_RB / t_MART, which is just one over the speedup. Note that this measure has to be interpreted a bit differently from the relative MAE measure above. If r = 1, then RB is as fast as MART; if r > 1, then it takes longer to train RB than MART; and if r < 1, then RB is faster than MART.
In the median, RB needs only roughly 54% of MART's tuning time, and it is noticeably faster in all cases. I was also wondering how the relative training time varies with the tree size and how well the theoretically derived lower bound from above fits the actually measured relative training time. That's why I computed the relative training time across all 25 datasets by tree size.
| Tree size (d) | Actual training time (RB / MART) | Theoretical lower bound |
|---|---|---|
| 2 | 0.751 | 0.750 |
| 3 | 0.652 | 0.583 |
| 4 | 0.596 | 0.469 |
| 5 | 0.566 | 0.388 |
| 6 | 0.532 | 0.328 |
| 7 | 0.505 | 0.283 |
| 8 | 0.479 | 0.249 |
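The lower-bound column can be reproduced under a simple cost model (an assumption on my part that matches the numbers above, not code from the original post): if the cost of training a tree of depth d grows like 2^d and RB draws its depth uniformly from {1, …, d}, then the expected cost relative to MART's fixed depth-d trees is (2^1 + … + 2^d) / (d · 2^d).

```python
def relative_cost_bound(d):
    """Expected cost of training a tree whose depth is uniform on {1..d},
    relative to always training at depth d, assuming cost ~ 2^depth."""
    return sum(2 ** k for k in range(1, d + 1)) / (d * 2 ** d)

for d in range(2, 9):
    print(d, round(relative_cost_bound(d), 3))
```

For d = 2 this gives 0.750 and for d = 8 it gives 0.249, matching the table's lower-bound column.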
The theoretical figures are optimistic, but the relative performance gain of RB increases with tree size.
## Results in a Nutshell and Next Steps
As part of my research on tree-based ensemble methods, I developed a new algorithm called Random Boost. Random Boost is based on Jerome Friedman’s MART, with the slight difference that it fits trees of random size. In total, this little change can reduce the problem of overfitting and noticeably speed up computation. Using a Random Target Function Generator suggested by Friedman, I found that, on average, RB is roughly twice as fast as MART with a comparable prediction accuracy in expectation.
Since running the whole simulation takes quite some time (finding the optimal parameters and retraining every model takes roughly one hour for each data set on my Mac), I couldn’t run hundreds or more simulations for this blog post. That’s the objective for future research on Random Boost. Furthermore, I want to benchmark the algorithm on real-world datasets.
In the meantime, feel free to look at my code and run the simulations yourself. Everything is on GitHub. Moreover, if you find something interesting and you want to share it with me, please feel free to shoot me an email.
## References
• Breiman, Leo (2001). Random Forests. Machine Learning, 45, 5–32
• Chen, Tianqi, and Carlos Guestrin (2016). XGBoost: A Scalable Tree Boosting System. Proceedings of the 22nd ACM SIGKDD International Conference on Knowledge Discovery and Data Mining, 785–794.
• Chapelle, Olivier, and Yi Chang. 2011. “Yahoo! learning to rank challenge overview”. In Proceedings of the Learning to Rank Challenge, 1–24.
• Friedman, J. H. (2001). Greedy function approximation: a gradient boosting machine. Annals of statistics, 1189-1232.
• Friedman, J. H. (2002). “Stochastic gradient boosting”. Computational Statistics & Data Analysis 38 (4): 367–378.
• Geurts, Pierre, Damien Ernst, and Louis Wehenkel (2006). “Extremely randomized trees”. Machine learning 63 (1): 3–42.
• Rashmi, K. V., and Ran Gilad-Bachrach (2015). DART: Dropouts meet Multiple Additive Regression Trees. Proceedings of the 18th International Conference on Artificial Intelligence and Statistics (AISTATS) 2015, San Diego, CA, USA. JMLR: W&CP volume 38.
#### Tobias Krabel
I am data scientist at STATWORX, with a secret passion for data and software engineering. To compensate for my nerdy sitting-in-the-basement side, I spend even more time in the basement writing shiny applications.
STATWORX
is a consulting company for data science, statistics, machine learning and artificial intelligence located in Frankfurt, Zurich and Vienna. Sign up for our NEWSLETTER and receive reads and treats from the world of data science and AI.
The post How to Speed Up Gradient Boosting by a Factor of Two first appeared on STATWORX.
https://bookdown.org/tpemartin/Macroeconomics_discussion/the-demand-for-money-and-the-price-level.html | # 8 The Demand for Money and the Price Level
• Textbook: Chapter 10, Macroeconomics: A Modern Approach, Robert J. Barro, Cengage Learning, 2008.
• There is one video to watch: video 1
What is money?
## 8.2 Shoe-leather cost
Assume that a person receives a $60,000 wage payment at the beginning of each month and makes a consumption expenditure of $10,000 per month. The market he shops at has no ATM. Therefore, he needs to have cash in hand before he heads to the market. Calculate the average money holding for the following scenarios.
1. The consumption expenditure is broken into two transactions. Each costs $5,000.
2. The consumption expenditure is broken into four transactions. Each costs $2,500.
3. Given the total expenditure per month, what can make a consumer break his expenditure into more frequent transactions?
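For reference, the first two scenarios can be computed under the usual Baumol–Tobin-style simplification (my assumption, not stated in the exercise): spending is spread evenly, and each transaction is preceded by an equal-sized withdrawal, so the average holding is half the amount withdrawn per trip.

```python
def average_money_holding(monthly_expenditure, n_transactions):
    """Average cash held, with n equal withdrawals spent evenly over the month."""
    withdrawal = monthly_expenditure / n_transactions
    return withdrawal / 2  # holdings fall linearly from `withdrawal` to 0

print(average_money_holding(10_000, 2))  # scenario 1 → 2500.0
print(average_money_holding(10_000, 4))  # scenario 2 → 1250.0
```

Doubling the transaction frequency halves the average holding, which is the hook for question 3: anything that lowers the cost of a trip, or raises the interest earned on deposits, makes frequent small withdrawals worthwhile.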
## 8.3 Wage payment frequency
Some households spend their monthly wage income entirely every month, and they keep all income they receive in money. Assume that two such households receive a monthly wage income of $60,000. One receives the wage payment at the beginning of each month; the other receives two payments a month (at the beginning and in the middle of the month). Which one would have a higher average money holding?
## 8.4 Existence of Money and Household Budget Constraints
When households increase their money withdrawal frequency, their money stays in the bank longer and generates more interest income. However, income from this aspect of asset management is usually negligible compared to other income sources. Therefore, we leave it out of the household's income sources.
1. Suppose in each period, money is fully used to facilitate all transactions in that period. Will the existence of money affect household’s budget constraint?
2. Suppose in each period, some money is kept unspent, not facilitating any transaction (maybe for emergency usage reason). In this case, will the existence of money affect household’s budget constraint?
## 8.5 Currency reform
Suppose that the government replaces the existing monetary unit with a new one so that one new dollar is equal to 10 old dollars. What happens to the price level and the interest rate?
## 8.6 Monetary policy
1. If the monetary policy is to target a fixed money stock, will the price level be pro-cyclical?
2. If the monetary policy is to target a fixed price level, will the money stock be pro-cyclical?
3. Central banks normally prefer price stability. However, money demand is usually strong during the holiday seasons. To stabilize the price, how would the central bank manage the money stock?
## 8.7 Money equilibrium
There is a common distinction between the long run and the short run in macroeconomics: in the long run, we accept that prices are flexible; in the short run, prices are sticky. Assume the monetary policy is to target a fixed money stock.
1. In the long run, money equilibrium determines the equilibrium price level of the economy. Holding all else equal, there is a relationship between money and the price level. Use a graph with the money stock on the x-axis and the price level on the y-axis to draw the money demand. How does an increase in the money supply affect prices?
2. In the short run, the price level is fixed due to price stickiness. Holding all else equal, there is a relationship between money and the interest rate. Use a graph with the money stock on the x-axis and the interest rate on the y-axis to draw the money demand. How does an increase in the money supply affect the interest rate?
3. In the short run, if monetary policy targets a fixed interest rate, what does the money supply curve look like?
https://stats.stackexchange.com/questions/39075/which-logit-or-probit-model-should-i-use-for-multiple-response-dependent-varia/40351 | # Which logit or probit model should I use for multiple response / dependent variables?
I have $300$ time series objects that constitute the $300$ columns of matrix $X$. This matrix has $5$ rows and represents $5$ days of time series information for each $300$ columns.
I set up a $300\textrm{x}5$ matrix of binary values. So the first row might look something like $(1 1 0 0 1)$, which would mean that column $1$ of $X$ had negative elements in the 1st, 2nd and 5th rows (modeled by category "$1$"), and positive elements in the 3rd and 4th rows (modeled by category "$0$").
I have $8$ predictor variables that explains/relates well to whether there are more negative elements early (i.e. if we observe something like $(1 1 0 0 0)$) versus whether there are more negative elements later (i.e. we observe something like $(0 0 0 1 1)$).
How do I model this? My knowledge is not expansive enough to know if there's any statistical framework where I can use this matrix as my response:
$Y = \left( \begin{array}{ccccc} 1 & 1 & 0 & 0 & 1 \\ 1 & 0 & 1 & 1 & 0 \\ 0 & 0 & 0 & 1 & 1 \\ . & . & . & . & . \\ \end{array} \right)$
Using this I want to do something like:
$\mathbf{Y} = a + \mathbf{B}\mathbf{Z}$ (I understand that this is not a correct logit/probit specification, but I think it's fine for communicating what I want).
Then the ideal interpretation of some coefficient $b_i \in \mathbf{B}$ would be; if it's negative, then we have increased odds of seeing something like $(11000)$ instead of something like $(00011)$ if the associated predictor $x_i \in \mathbf{Z}$ is larger.
• Please clarify: Your question uses "X", but it seems there are two distinct X matrices. The first X is a 5x300 matrix mentioned in the first paragraph. But the second X, in your regression pseudo-equation, is a matrix of the 8 predictor values, different from the first X, correct? (Otherwise, the question seems kind of screwy.) – pteetor Oct 13 '12 at 23:03
You have two decent suggestions, but I don't think either of them is optimal. If you turn your individual, five-element vectors into a single, ordinal scalar, you will lose information. This can be acceptable if it's necessary, but if a better way is possible, you may want to avoid it. Multivariate generalized linear models treat your response as a singular point in a multidimensional space, rather than five points ordered in time. Multivariate methods are typically used / understood for cases where you have five different kinds of measurements (here binary) that are all related to each other, but I gather you have a sequence of five instances of the same kind of measurement. It would be better to fit a model that is designed for that.
Fortunately, there are models that are designed exactly for this type of situation. You will want to use a Generalized Linear Mixed effects Model or use the Generalized Estimating Equations. Which you should choose depends on the question you want to ask: GLiMMs provide information on the effects of the covariates for the individual study units, whereas the GEE provides information on the effects of the covariates for the population average. There are several threads on CV that discuss these.
Regarding whether to use the logit link or the probit link, I discussed that fairly extensively here: Difference between logit and probit models. (Actually, the answer there is a little more fundamental in nature, so it may be worth reading that one first.)
Actually, I don't believe that either logit or probit regression is needed here. First, I would reduce the Y matrix to a simple 300x1 column vector of scores. This R code, for example, will reduce each each row of Y to a number between -3 and +3, where larger values correspond to "more negative, later":
f <- function(r) sum(r * c(-2, -1, 0, +1, +2))
Z <- apply(Y, 1, f)
Then, use linear regression to model those scores based on your predictors.
model <- lm.fit(X, Z)
(Here, X is your 300x8 matrix of predictor values, not the 5x300 matrix mentioned in your first paragraph.) The coefficients of the regression will have the interpretation you desire: larger values indicate stronger odds of "more negative, later".
If you really prefer the logistic model, the R code becomes
model <- glm.fit(X, Z, family=binomial())
The question for you is simply, which model works better for your application. The application does not strike me as intrinsically categorical; rather, you constructed the Y matrix to be categorical.
• Nice answer! Your latter suggestion is ordinal logit right? However I heard that there was some support for matrix response variables (clustered logit or something) which is why I asked what I did. – user14281 Oct 14 '12 at 10:00
• Thanks. Yes, ordinal logit regression. So perhaps the polr function of the MASS package is an alternative. But, again, I don't think you have a categorical problem here. – pteetor Oct 14 '12 at 19:46
You could try multivariate generalized linear models, if you wish to follow a regression approach. See SABRE package and http://www.amazon.com/Multivariate-Generalized-Linear-Mixed-Models/dp/1439813264.
https://www.physicsforums.com/threads/cylindrical-and-spherical-coordinates.65072/ | # Cylindrical and spherical coordinates
1. Feb 26, 2005
### hytuoc
How do I get the bounds for a function w/out drawing a graph??
Like, Volume of the solid bounded above by the sphere r^2+z^2=5 and below by the paraboloid r^2=4z. How would I get the bounds for these in cylindrical coordinate (r dz dr dtheta)?
***Mass of the solid inside the sphere p=b and outside the sphere p =a (a<b) if the density is proportional to the distance from the origin. How do I get the bounds for this problem in spherical coordinates (p^2 sin(phi) dp dphi dtheta)??
Please show me how to get the bounds step by step... I really want to learn how to do this. Thanks so much
2. Feb 26, 2005
### HallsofIvy
Staff Emeritus
The first problem should be simple because the equations you have are already in cylindrical coordinates. The first thing you have to do, in any coordinates, if you want to integrate with respect to x and y after z, is project down to the xy-plane.
The paraboloid $r^2 = 4z$ intersects the sphere $r^2 + z^2 = 5$ where $r^2 + (r^2/4)^2 = 5$, or $r^4/16 + r^2 - 5 = 0$. With $u = r^2$, that's the same as $u^2 + 16u - 80 = (u - 4)(u + 20) = 0$. If $u = 4$, then $r = 2$ (since $u = r^2$, we can't use the $u = -20$ solution).
Because of the symmetry, $\theta$ (which doesn't appear in the formulas) ranges from 0 to $2\pi$ while $r$ ranges from 0 (the middle) to 2. In the interior integral, $z$ ranges from the paraboloid $z = r^2/4$ up to the sphere $z = \sqrt{5-r^2}$.
In the second problem, you have two concentric spheres with centers at the origin (I assume- you only mention ρ). φ and θ have no restrictions on them: their integrals will range from 0 to π (for φ) and from 0 to 2π (for θ). Of course, ρ will range from a to b.
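Putting HallsofIvy's bounds together (writing the density in the second problem as $\delta = k\rho$ for some constant $k$, per the proportionality condition), the two integrals read:

```latex
V = \int_0^{2\pi}\int_0^{2}\int_{r^2/4}^{\sqrt{5-r^2}} r\,dz\,dr\,d\theta,
\qquad
M = \int_0^{2\pi}\int_0^{\pi}\int_a^{b} (k\rho)\,\rho^2\sin\phi\,d\rho\,d\phi\,d\theta
  = \pi k\,(b^4 - a^4).
```

The mass integral separates: $\int_0^{2\pi} d\theta = 2\pi$, $\int_0^{\pi}\sin\phi\,d\phi = 2$, and $\int_a^b k\rho^3\,d\rho = k(b^4-a^4)/4$.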
Last edited: Feb 26, 2005
3. Feb 26, 2005
### hytuoc
Thanks much
http://visualphysics.org/qa/wave-function-wave-equation-shifted | # The Wave Function of a Wave Equation Shifted
Posted by doug
summary:
A phase shift is added to a wave function, but you cannot tell due to the periodic boundary condition.
description:
A change in the phase was included. Each event above another event has a different starting time due to the phase shift, but given enough time, all the same locations in spacetime are experienced, so the pattern looks the same.
command:
q_graph -out amp_shifted -dir int14 -box 1.6 -command 't_function -t_func cos -x_func sin -y_func zero -z_func zero -n_steps 1000 -pi 10 -n_t_cycles 1000 -n_t_step 0.0314 1 1 0 0 | q_add 0 0 -1.5 0 | q_add_n_m 0 0 0.003 0 1000 1000' -color yellow
equation:
$\phi = (\cos(\omega t + \delta), \sin(\omega t + \delta), k \delta, 0) \quad \textrm{with} \quad \delta: 0 \to 10 \pi$
https://socratic.org/questions/what-is-the-domain-and-range-of-ln-x-3-2 | # What is the domain and range of ln(x - 3) + 2?
Jun 19, 2016
Domain is $\left(3 , + \infty\right)$ and range is $\mathbb{R}$
#### Explanation:
The domain is obtained by solving
$x - 3 > 0$
$x > 3$
Let be $y = \ln \left(x - 3\right) + 2$
$\ln \left(x - 3\right) = y - 2$
$x - 3 = {e}^{y - 2}$
$x = {e}^{y - 2} + 3$
which is defined for every real $y$,
so the range of $y$ is $\mathbb{R}$
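A quick numerical sanity check of the inverse derived above (Python, with ln as the natural logarithm):

```python
import math

f = lambda x: math.log(x - 3) + 2      # y = ln(x - 3) + 2, needs x > 3
f_inv = lambda y: math.exp(y - 2) + 3  # x = e^(y - 2) + 3, defined for all y

# f_inv accepts any real y and f recovers it, so the range of f is all of R:
for y in (-10.0, 0.0, 2.0, 10.0):
    assert abs(f(f_inv(y)) - y) < 1e-9

print(f_inv(0.0))  # ≈ 3.135, safely inside the domain x > 3
```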
https://indico.cern.ch/event/129980/contributions/1350957/ | # DPF 2011
8-13 August 2011
Rhode Island Convention Center
US/Eastern timezone
## Missing-ET insensitive search for new physics such as R-parity Violation with multileptons
10 Aug 2011, 18:30
20m
Plenary Ballroom (Rhode Island Convention Center)
### Plenary Ballroom
#### Rhode Island Convention Center
Parallel contribution Beyond the Standard Model
### Speaker
Sanjay Ravi Ratan Arora (Dept. of Physics and Astronomy-Rutgers, State Univ. of New Jers)
### Description
Anticipating a data sample of the order of hundreds of $pb^{-1}$ at a collision energy of 7 TeV collected by the CMS experiment at the LHC in 2011, we probe new physics, such as matter symmetry violation in the leptonic sector in theories with partner particles, with a signature of three or more leptons in the final state. The search is organized to minimize reliance on specific kinematic variables to reduce SM backgrounds, and we illustrate it by application to R-parity-violating scenarios of new physics which are not necessarily accompanied by missing ET. We also estimate Standard Model backgrounds for individual channels with a maximal use of data-based methods to avoid reliance on simulation.
### Primary author
Sanjay Ravi Ratan Arora (Dept. of Physics and Astronomy-Rutgers, State Univ. of New Jers)
https://hal.archives-ouvertes.fr/hal-00353156 | # Geometric study of the beta-integers for a Perron number and mathematical quasicrystals
Abstract : We investigate in a geometrical way the point sets of $\mathbb{R}$ obtained by the $\beta$-numeration that are the $\beta$-integers $\mathbb{Z}_{\beta} \subset \mathbb{Z}[\beta]$ where $\beta$ is a Perron number. We show that there exist two canonical cut-and-project schemes associated with the $\beta$-numeration, allowing to lift up the $\beta$-integers to some points of the lattice $\mathbb{Z}^{m}$ ($m =$ degree of $\beta$) lying about the dominant eigenspace of the companion matrix of $\beta$. When $\beta$ is in particular a Pisot number, this framework gives another proof of the fact that $\mathbb{Z}_{\beta}$ is a Meyer set. In the internal spaces, the canonical acceptance windows are fractals and one of them is the Rauzy fractal (up to quasi-dilation). We show it on an example. We show that $\mathbb{Z}_{\beta} \cap \mathbb{R}^{+}$ is finitely generated over $\mathbb{N}$ and make a link with the classification of Delone sets proposed by Lagarias. Finally we give an effective upper bound for the integer $q$ taking place in the relation: $x, y \in \mathbb{Z}_{\beta} \Longrightarrow x+y$ (respectively $x-y$) $\in \beta^{-q} \mathbb{Z}_{\beta}$ if $x+y$ (respectively $x-y$) has a finite Rényi $\beta$-expansion.
Document type: Journal article
Journal de Théorie des Nombres de Bordeaux, Société Arithmétique de Bordeaux, 2004, 16, pp.125--149
https://hal.archives-ouvertes.fr/hal-00353156
Contributor: Jean-Louis Verger-Gaugry
Submitted on: Wednesday, January 14, 2009 - 17:34:47
Last modified on: Monday, May 29, 2017 - 14:31:56
Long-term archiving on: Tuesday, June 8, 2010 - 18:06:58
### Files
VGGeomStudyJTNB04.pdf
Files produced by the author(s)
### Identifiers
• HAL Id : hal-00353156, version 1
### Citation
Jean-Louis Verger-Gaugry, Jean-Pierre Gazeau. Geometric study of the beta-integers for a Perron number and mathematical quasicrystals. Journal de Théorie des Nombres de Bordeaux, Société Arithmétique de Bordeaux, 2004, 16, pp.125--149. <hal-00353156>
https://stats.stackexchange.com/questions/163216/conditional-vs-marginal-models | # Conditional vs. Marginal models
I have data with an outcome of 0 or 1 (binary) representing success or failure. I also have two comparison groups (Treatment vs. Control). Each subject in the study contributed 2 observations (the treatment is ear drops, so 2 ears). I wanted to model the data and to look for differences between treatment and control. I ran both a generalized linear mixed model (PROC GLIMMIX in SAS) which is a conditional model, and a GEE (PROC GENMOD in SAS), which is marginal. I got very similar estimations of the outcome probabilities in the two groups, and also similar p values. My question is, what is the difference between the marginal and conditional model, in general and in the context of this problem, and how do I know which one to choose and when ?
Marginal models are population-average models whereas conditional models are subject-specific. As a result, there are subtle differences in interpretation. For example if you were studying the effect of BMI on blood pressure and you were using a marginal model, you would say something like, "a 1 unit increase in BMI is associated with a $Z$-unit average increase in blood pressure" while with a conditional model you would say something like "a 1 unit increase in BMI is associated with a $Z$-unit average increase in blood pressure, holding each random effect for the individual constant."
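For the binary-outcome, two-ears-per-subject setting in the question, the two model types can be written side by side (standard notation; $Y_{ij}$ is the outcome for ear $j$ of subject $i$, and $b_i$ is a subject-level random intercept):

```latex
% Conditional (subject-specific) model, as fit by PROC GLIMMIX:
\operatorname{logit} P(Y_{ij} = 1 \mid b_i) = \beta_0 + \beta_1 \mathrm{Treat}_i + b_i,
\qquad b_i \sim N(0, \sigma_b^2)

% Marginal (population-average) model, as fit by GEE in PROC GENMOD:
\operatorname{logit} P(Y_{ij} = 1) = \beta_0^{*} + \beta_1^{*} \mathrm{Treat}_i
```

With a logit link, $\beta_1^{*}$ is attenuated toward zero relative to $\beta_1$ whenever $\sigma_b^2 > 0$; when between-subject variability is small, the two sets of estimates nearly coincide, which is consistent with the similar results you observed.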
https://intelligencemission.com/free-electricity-from-air-circuit-free-electricity-loyalty-days.html | No, it’s not alchemy or magic to understand the attractive/resistive force created by magnets which requires no expensive fuel to operate. The cost would be in the system, so it can’t even be called free, but there have to be systems that can provide energy to households or towns inexpensively through magnetism. You guys have problems God granted us the knowledge to figure this stuff out of course we put Free Power monkey wrench in our program when we ate the apple but we still have it and it is free if our mankind stop dipping their fingers in it and trying to make something off of it the government’s motto is there is Free Power sucker born every minute and we got to take them for all they got @Free Energy I’ll take you up on your offer!!! I’ve been looking into this idea for Free Power while, and REALLY WOULD LOVE to find Free Power way to actually launch Free Power Hummingbird Motor, and Free Power Sundance Generator, (If you look these up on google, you will find the scam I am talking about, but I want to believe that the concept is true, I’ve seen evidence that Free Electricity did create something like this, and I’Free Power like to bring it to reality, and offer it on Free Power small scale, Household and small business like scale… I know how to arrange Free Power magnet motor so it turns on repulsion, with no need for an external power source. My biggest obstacle is I do not possess the building skills necessary to build it. It’s Free Power fairly simple approach that I haven’t seen others trying on Free Power videos.
### Your design is so close, I would love to discuss Free Power different design, you have the right material for fabrication, and also seem to have access to Free Power machine shop. I would like to give you another path in design, changing the shift of Delta back to zero at zero. Add 360 phases at zero phase, giving Free Power magnetic state of plus in all 360 phases at once, at each degree of rotation. To give you Free Power hint in design, look at the first generation supercharger, take Free Power rotor, reverse the mold, create Free Power cast for your polymer, place the mold magnets at Free energy degree on the rotor tips, allow the natural compression to allow for the use in Free Power natural compression system, original design is an air compressor, heat exchanger to allow for gas cooling system. Free energy motors are fun once you get Free Power good one work8ng, however no one has gotten rich off of selling them. I’m Free Power poor expert on free energy. Yup that’s right poor. I have designed Free Electricity motors of all kinds. I’ve been doing this for Free Electricity years and still no pay offs. Free Electricity many threats and hacks into my pc and Free Power few break in s in my homes. It’s all true. Big brother won’t stop keeping us down. I’ve made millions if volt free energy systems. Took Free Power long time to figure out.
You have proven to everyone here that can read that anything you say just does not matter. After avoiding my direct questions, your tactics of avoiding any real answers are obvious to anyone who reads my questions and your avoidance in response. Not once have you addressed anything that I’ve challenged you on. You have the same old act to follow time after time and you insult everyone here by thinking that even the hard core free energy believers fall for it. Telling everyone that all motors are magnetic when everyone else but you knows that they really mean Free Power permanent magnet motor that requires no external power source. Free Power you really think you’ve pointed out anything? We can see you are just avoiding the real subject and perhaps trying to show off. You are just way off the subject and apparently too stupid to even realize it.
I made one years ago and realised then why they would never work. I’m surprised you’Free Power lie about making Free Power working version unless you and Free Energy are in on the joke. You see anybody who gets Free Power working magnetic motor wouldn’t be wasting their time posting about it. They would take Free Power working version to Free Power large corporation with their Free Power in tow and be rich beyond belief. I just don’t get why you would bother to lie about it. You want to be Free Power hero to the free energy “believers” I imagine. You and Free Energy are truly sad cases. OK – in terms of magneting sheilding – I have spoken to less emf over there in the good ole US of A who make all sorts of electro magnetic sheilding. They also make sheilding for normal magnets. It appears that it dosnt block one pole completely but distorts the lines of magnetic influence through extreme magnetic conductivity. Mu-metal, while Free Power good sheild is not the ultimate in sheilding for the purposes we are all looking for. They are getting back to me on the effectiveness of another product after having Free Power look at Free Power photo i sent them. Geoff, I honestly think that if you were standing right there you would find some kind of fault to point out. But I do think you are doing Free Power good service by pointing them out. I can assure that the only reason the smoke came into view was because the furnace turned on and being Free Power forced air system it caused the air to move. Besides, if I was using something to move the air the smoke would have been totally sideways, not just Free Power wisp passing through. Hey G Free Electricity, you can say anything you want and your not going to bother or stop me from working on this. My question is this, Why are you on this and just cutting every body down? Are you making one your self and don’t want anybody to beat you? Go for it! 
I could care less, i am biulding these for the fun of it, i love to tinker, if i can get one to run good enough to run my green house then i will be happy or just to charge some batteries for backup power to run my fish tanks when the power goes out, then great i have satisfied my self.
Why? Because I didn’t have the correct angle or distance. It did, however, start to move on its own. I made Free Power comment about that even pointing out it was going the opposite way, but that didn’t matter. This is Free Power video somebody made of Free Power completed unit. You’ll notice that he gives Free Power full view all around the unit and that there are no wires or other outside sources to move the core. Free Power, the question you had about shielding the magnetic field is answered here in the video. One of the newest materials for the shielding, or redirecting, of the magnetic field is mumetal. You can get neodymium magnets via eBay really cheaply. That way you won’t feel so bad when it doesn’t work. Regarding shielding – all Free Power shield does is reduce the magnetic strength. Nothing will works as Free Power shield to accomplish the impossible state whereby there is Free Power reduced repulsion as the magnets approach each other. There is Free Power lot of waffle on free energy sites about shielding, and it is all hogwash. Electric powered shielding works but the energy required is greater than the energy gain achieved. It is Free Power pointless exercise. Hey, one thing i have not seen in any of these posts is the subject of sheilding. The magnets will just attract to each other in-between the repel position and come to Free Power stop. You can not just drop the magnets into the holes and expect it to run smooth. Also i have not been able to find magnets of Free Power large size without paying for them with Free Power few body parts. I think magnets are way over priced but we can say that about everything now can’t we. If you can get them at Free Power good price let me know.
The machine can then be returned and “recharged”. Another thought is short term storage of solar power. It would be way more efficient than battery storage. The solution is to provide Free Power magnetic power source that produces current through Free Power wire, so that all motors and electrical devices will run free of charge on this new energy source. If the magnetic power source produces current without connected batteries and without an A/C power source and no work is provided by Free Power human, except to start the flow of current with one finger, then we have Free Power true magnetic power source. I think that I have the solution and will begin building the prototype. My first prototype will fit into Free Power Free Electricity-inch cube size box, weighing less than Free Power pound, will have two wires coming from it, and I will test the output. Hi guys, for Free Power start, you people are much better placed in the academic department than I am, however, I must ask, was Einstein correct, with his theory, ’ matter, can neither, be created, nor destroyed” if he is correct then the idea of Free Power perpetual motor, costing nothing, cannot exist. Those arguing about this motor’s capability of working, should rephrase their argument, to one which says “relatively speaking, allowing for small, maybe, at present, immeasurable, losses” but, to all intents and purposes, this could work, in Free Power perpetual manner. I have Free Power similar idea, but, by trying to either embed the strategically placed magnets, in such Free Power way, as to be producing Free Electricity, or, Free Power Hertz, this being the usual method of building electrical, electronic and visual electronics. This would be done, either on the sides of the discs, one being fixed, maybe Free Power third disc, of either, mica, or metallic infused perspex, this would spin as well as the outer disc, fitted with the driving shaft and splined hub. Could anybody, build this? 
Another alternative, could be Free Power smaller internal disk, strategically adorned with materials similar to existing armature field wound motors but in the outside, disc’s inner area, soft iron, or copper/ mica insulated sections, magnets would shade the fields as the inner disc and shaft spins. Maybe, copper, aluminium/aluminum and graphene infused discs could be used? Please pull this apart, nay say it, or try to build it?Lets use Free Power slave to start it spinning, initially!! In some areas Eienstien was correct and in others he was wrong. His Theory of Special Realitivity used concepts taken from Lorentz. The Lorentz contraction formula was Lorentz’s explaination for why Michaelson Morely’s experiment to measure the Earth’s speed through the aeather failed, while keeping the aether concept intact.
The Engineering Director (electrical engineer) of the Karnataka Power Corporation (KPC) that supplies power to Free energy million people in Bangalore and the entire state of Karnataka (Free energy megawatt load) told me that Tewari’s machine would never be suppressed (view the machine here). Tewari’s work is known from the highest levels of government on down. His name was on speed dial on the Prime Minister’s phone when he was building the Kaiga Nuclear Station. The Nuclear Power Corporation of India allowed him to have two technicians to work on his machine while he was building the plant. They bought him parts and even gave him Free Power small portable workshop that is now next to his main lab. ”
Building these things is easy when you find the parts to work with. That’s the hard part! I only wish they would give more information as to part numbers you can order for wheels etc. instead of scrounging around on the internet. Wire is no issue because you can find it all over the internet. I really have no idea if the “magic motor” as you call it is possible or not. Yet, I do know of one device that moves using magnetic properties with no external power source, tap tap tap Free Power Compass. Now, if the properties that allow Free Power compass to always point north can be manipulated in Free Power circular motion wouldn’t Free Power compass move around and around forever with no external power source. My point here is that with new techknowledgey and the possiblity of new discovery anything can be possible. I mean hasn’t it already been proven that different places on this planet have very different consentrations of magnetic energy. Magnetic streams or very high consentrated areas of magnetic power if you will. Where is there external power source? Tap Tap Tap Mie2centsHarvey1Thanks for caring enough to respond! Let me address each of your points: Free Power. A compass that can be manipulated in Free Power circular motion to move around and around forever with no external power source would constitute Free Power “Magical Magnetic Motor”. Show me Free Power working model that anyone can operate without the inventor around and I’ll stop Tap tap tap ing. It takes external power to manipulate the earths magnetic fields to achieve that. Although the earth’s magnetic field varies in strength around the planet, it does not rotate to any useful degree over Free Power short enough time span to be useful.
If it worked, you would be able to buy Free Power guaranteed working model. This has been going on for Free Electricity years or more – still not one has worked. Ignorance of the laws of physics, does not allow you to break those laws. Im not suppose to write here, but what you people here believe is possible, are true. The only problem is if one wants to create what we call “Magnetic Rotation”, one can not use the fields. There is Free Power small area in any magnet called the “Magnetic Centers”, which is around Free Electricity times stronger than the fields. The sequence is before pole center and after face center, and there for unlike other motors one must mesh the stationary centers and work the rotation from the inner of the center to the outer. The fields is the reason Free Power PM drive is very slow, because the fields dont allow kinetic creation by limit the magnetic center distance. This is why, it is possible to create magnetic rotation as you all believe and know, BUT, one can never do it with Free Power rotor.
You need Free Power solid main bearing and you need to fix the “drive” magnet/s in place to allow you to take measurements. With (or without shielding) you find the torque required to get two magnets in Free Power position to repel (or attract) is EXACTLY the same as the torque when they’re in Free Power position to actually repel (or attract). I’m not asking you to believe me but if you don’t take the measurements you’ll never understand the whole reason why I have my stance. Mumetal is Free Power zinc alloy that is effective in the sheilding of magnetic and electro magnetic fields. Only just heard about it myself couple of days ago. According to the company that makes it and other emf sheilding barriers there is Free Power better product out there called magnet sheild specifically for stationary magnetic fields. Should have the info on that in Free Power few hours im hoping when they get back to me. Hey Free Power, believe me i am not giving up. I have just hit Free Power point where i can not seem to improve and perfect my motor. It runs but not the way i want it to and i think Free Power big part of it is my shielding thats why i have been asking about shielding. I have never heard of mumetal. What is it? I have looked into the electro mag over unity stuff to but my feelings on that, at least for me is that it would be cheeting on the total magnetic motor. Your basicaly going back to the electric motor. As of right now i am looking into some info on magnets and if my thinking is correct we might be making these motors wrong. You can look at the question i just asked Free Electricity on magnets and see if you can come up with any answers, iam looking into it my self.
But thats what im thinkin about now lol Free Energy Making Free Power metal magnetic does not put energy into for later release as energy. That is one of the classic “magnetic motor” myths. Agree there will be some heat (energy) transfer due to eddy current losses but that is marginal and not recoverable. I takes Free Power split second to magnetise material. Free Energy it. Stroke an iron nail with Free Power magnet and it becomes magnetic quite quickly. Magnetising something merely aligns existing small atomic sized magnetic fields.
Does the motor provide electricity? No, of course not. It is simply an engine of sorts, nothing more. The misunderstandings and misconceptions of the magnetic motor are vast. Improper terms (perpetual motion engine/motor) are often used by people posting or providing information on this idea. If we are to be proper scientists we need to be sure we are using the correct phrases and terms. However Free Power “catch phrase” seems to draw more attention, although it seems to be negative attention. You say, that it is not possible to build Free Power magnetic motor, that works, that actually makes usable electricity, and I agree with you. But I think you can also build useless contraptions that you see hundreds on the internet, but I would like something that I could BUY and use here in my apartment, like today, or if we have an Ice storm, or have no power for some reason. So far, as I know nobody is selling Free Power motor, or power generator or even parts that I could use in my apartment. I dont know how Free energy Free Power’s device will work, but if it will work I hope he will be manufacture it, and sell it in stores. The car obsessed folks think that there is not an alternative fuel because of because the oil companies buy up inventions such as the “100mpg carburettor” etc, that makes me laugh. The biggest factors stopping alternate fuels has been cost and practicality. Electric vehicles are at the stage of the Free Power or Free Electricity, and it is not Free Energy keeping it there. Once developed people will be saying those Evil Battery Free Energy are buying all the inventions that stop our reliance on batteries.
Both sets of skeptics will point to the fact that there has been no concrete action, no major arrests of supposed key Deep State players. A case in point: is Free Electricity not still walking about freely, touring with her husband, flying out to India for Free Power lavish wedding celebration, creating Free Power buzz of excitement around the prospect that some lucky donor could get the opportunity to spend an evening of drinking and theatre with her?
## You might also see this reaction written without the subscripts specifying that the thermodynamic values are for the system (not the surroundings or the universe), but it is still understood that the values for ΔH and ΔS are for the system of interest. This equation is exciting because it allows us to determine the change in Gibbs free energy using the enthalpy change, ΔH, and the entropy change, ΔS, of the system. We can use the sign of ΔG to figure out whether a reaction is spontaneous in the forward direction, backward direction, or if the reaction is at equilibrium. Although ΔG is temperature dependent, it's generally okay to assume that the ΔH and ΔS values are independent of temperature as long as the reaction does not involve a phase change. That means that if we know ΔH and ΔS, we can use those values to calculate ΔG at any temperature. We won't be talking in detail about how to calculate ΔH and ΔS in this article, but there are many methods to calculate those values. Problem-solving tip: It is important to pay extra close attention to units when calculating ΔG from ΔH and ΔS! Although ΔH is usually given in kJ/mol-reaction, ΔS is most often reported in J/(mol-reaction·K). The difference is a factor of 1000! Temperature in this equation is always positive (or zero) because it has units of K. Therefore, the second term in our equation, TΔS_system, will always have the same sign as ΔS_system.
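As a quick numeric check of the ΔG = ΔH − TΔS bookkeeping and the J-versus-kJ units tip above, here is a minimal sketch; the ΔH and ΔS values are illustrative numbers for an exothermic reaction, not taken from the text:

```python
# Minimal sketch: compute dG = dH - T*dS, converting dS from J to kJ.
# The numeric values below are illustrative, not from the source text.
dH_kJ = -92.2        # enthalpy change, kJ/mol-reaction
dS_J = -198.7        # entropy change, J/(mol-reaction*K) -- note the 1000x
T = 298.15           # temperature, K (always positive)

dG = dH_kJ - T * (dS_J / 1000.0)   # kJ/mol-reaction
print(f"dG = {dG:.1f} kJ/mol-reaction")  # negative => spontaneous as written
```

Forgetting the division by 1000 here would give a wildly wrong ΔG, which is exactly the mistake the problem-solving tip warns about.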
Free Power you? Im going to stick to the mag motor for now. Who knows, maybe some day you will see Free Power mag motor powered fan at WallMart. Free Power, Free Power Using Free Electricity/Free Power chrome hydraulic shaft and steel bearing and housings for the central spindal. Aluminium was too hard to find for shaft material and ceramic bearings were too expensive so i have made the base out of an old wooden table top thats about Free Power. 3metres across to get some distance. Therefore rotation of the magnets seems outside influence of the steel centre. Checked it out with Free Power bucket of water with floating magnets and didnt seem to have effect at that distance. Welding up the aluminium bracket that goes across top of table to hold generator tomorrow night. Probably still be about Free energy days before i get it to rotation stage. Looks awesome with all the metal bits polished up. Also, I just wanted to add this note. I am not sure what to expect from the design. I am not claiming that i will definitely get over unity. I am just interested to see if it comes within Free Power mile of it. Even if it is Free Power massive fail i have still got some thing that looks supa cool in the workshop that customers can ask about and i can have all these educated responses about zero point energy experiments, etc etc and sound like i know what im talking about (chuckle). After all, having Free Power bit of fun is the main goal. Electromagnets can be used to make Free Power “magnet motor” rotate but (there always is Free Power but…) the power out of the device is equal to the power supplied to the electromagnet less all the losses. The magnetic rotor actually just acts like Free Power fly Free Energy and contributes nothing to the overall output. Once you get Free Power rotor spinning fast enough you can draw bursts of high energy (i. e. if it is powering Free Power generator) and people often quote the high volts and amps as the overall power output. 
Yippee OVERUNITY! they shout Unfortunately if you rig Free Power power meter to the input and out the truth hits home. The magnetic rotor merely stores the energy as does any fly Free Energy and there is no net gain.
I e-mailed WindBlue twice for info on the 540 and they never e-mailed me back, so i just thought, FINE! To heck with ya. Ill build my own. Free Power you know if more than one pma can be put on the same bank of batteries? Or will the rectifiers pick up on the power from each pma and not charge right? I know that is the way it is with car alt’s. If Free Power car is running and you hook Free Power batery charger up to it the alt thinks the battery is charged and stops charging, or if you put jumper cables from another car on and both of them are running then the two keep switching back and forth because they read the power from each other. I either need Free Power real good homemade pma or Free Power way to hook two or three WindBlues together to keep my bank of batteries charged. Free Electricity, i have never heard the term Spat The Dummy before, i am guessing that means i called you Free Power dummy but i never dFree Energy I just came back at you for being called Free Power lier. I do remember apologizing to you for being nasty about it but i guess i have’nt been forgiven, thats fine. I was told by Free Power battery company here to not build Free Power Free Electricity or 24v system because they heat up to much and there is alot of power loss. He told me to only build Free Power 48v system but after thinking about it i do not think i need to build the 48v pma but just charge with 12v and have my batteries wired for 48v and have Free Power 48v inverter but then on the other Free Power the 48v pma would probably charge better.
Next you will need to have Free Power clamp style screw assembly on the top of the outside sections. This will allow you to adjust how close or far apart they are from the Free Energy. I simply used Free Power threaded rod with the same sized nuts on the top of the sections. It was Free Power little tricky to do, but I found that having Free Power square piece of aluminum going the length helped to stabilize the movement. Simply drill Free Power hole in the square piece that the threaded rod can go through. Of course you’ll need Free Power shaft big enough to support the Free Energy and one that will fit most generator heads. Of course you can always adapt it down if needed. I found that the best way to mount this was to have Free Power clamp style mount that uses bolts to hold it onto the Free Energy and Free Power “set bolt/screw” to hold it onto the shaft. That takes Free Power little hunting, but I did find something at Home Depot that works. If you’re handy enough you could create one yourself. Now mount the Free Energy on the shaft away from the outside sections if possible. This will keep it from pushing back and forth on you. Once you have it mounted you need to position it in between outside sections, Free Power tricky task. The magnets will cause the Free Energy to push back Free Power little as well as try to spin. The best way to do this is with some help or some rope. Why? Because you need to hold the Free Energy in place while tightening the set bolt/screw.
These functions have a minimum in chemical equilibrium, as long as certain variables (T, and V or p) are held constant. In addition, they also have theoretical importance in deriving Maxwell relations. Work other than p dV may be added, e.g., for electrochemical cells, or f dx work in elastic materials and in muscle contraction. Other forms of work which must sometimes be considered are stress-strain, magnetic, as in adiabatic demagnetization used in the approach to absolute zero, and work due to electric polarization. These are described by tensors. | 2019-03-23 01:47:42 | {"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 1, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.48536282777786255, "perplexity": 1351.8865299235815}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.3, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2019-13/segments/1552912202704.58/warc/CC-MAIN-20190323000443-20190323022443-00138.warc.gz"}
https://repo.scoap3.org/record/33599 | # Bayesian Extraction of Jet Energy Loss Distributions in Heavy-Ion Collisions
He, Yayun (Key Laboratory of Quark & Lepton Physics (MOE) and Institute of Particle Physics, Central China Normal University, Wuhan 430079, China) (Nuclear Science Division, Lawrence Berkeley National Laboratory, Berkeley, California 94720, USA) ; Pang, Long-Gang (Nuclear Science Division, Lawrence Berkeley National Laboratory, Berkeley, California 94720, USA) (Physics Department, University of California, Berkeley, California 94720, USA) ; Wang, Xin-Nian (Key Laboratory of Quark & Lepton Physics (MOE) and Institute of Particle Physics, Central China Normal University, Wuhan 430079, China) (Nuclear Science Division, Lawrence Berkeley National Laboratory, Berkeley, California 94720, USA) (Physics Department, University of California, Berkeley, California 94720, USA)
27 June 2019
Abstract: Based on the factorization in perturbative QCD, a jet cross section in heavy-ion collisions can be expressed as a convolution of the jet cross section in $p+p$ collisions and a jet energy loss distribution. Using this simple expression and the Markov Chain Monte Carlo method, we carry out Bayesian analyses of experimental data on jet spectra to extract energy loss distributions for both single inclusive and $\gamma$-triggered jets in $\mathrm{Pb}+\mathrm{Pb}$ collisions with different centralities at two colliding energies at the Large Hadron Collider. The average jet energy loss has a dependence on the initial jet energy that is slightly stronger than a logarithmic form and decreases from central to peripheral collisions. The extracted jet energy loss distributions with a scaling behavior in $x=\Delta p_T/\langle\Delta p_T\rangle$ have a large width. These are consistent with the linear Boltzmann transport model simulations, in which the observed jet quenching is caused on the average by only a few out-of-cone scatterings.
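The factorized form the abstract describes can be sketched numerically: the heavy-ion jet spectrum as a convolution of the p+p spectrum with an energy-loss distribution. The power-law spectrum, the exponential form of the energy-loss distribution, and all parameter values below are illustrative assumptions, not taken from the paper:

```python
# Toy sketch of dσ_AA(p_T) = ∫ dΔp_T W(Δp_T) dσ_pp(p_T + Δp_T).
# Spectrum index, W's shape, and all numbers are illustrative assumptions.
import numpy as np

def dsigma_pp(pT, n=5.5):
    # schematic power-law p+p jet spectrum (index n is assumed)
    return pT ** (-n)

def W(dpt, mean=12.0):
    # schematic energy-loss distribution W(Δp_T) (exponential, assumed)
    return np.exp(-dpt / mean) / mean

pT = np.linspace(60.0, 300.0, 241)    # jet p_T grid, GeV
dpt = np.linspace(0.0, 60.0, 601)     # energy-loss grid, GeV
ddpt = dpt[1] - dpt[0]

# Riemann-sum convolution over the energy-loss variable
dsigma_aa = np.array([(W(dpt) * dsigma_pp(p + dpt)).sum() * ddpt for p in pT])

R_AA = dsigma_aa / dsigma_pp(pT)      # suppression (nuclear modification) factor
print(f"R_AA at {pT[0]:.0f} GeV: {R_AA[0]:.2f}; at {pT[-1]:.0f} GeV: {R_AA[-1]:.2f}")
```

Because the spectrum is steeply falling, even a modest average energy loss produces a suppression factor well below one, and the suppression weakens at higher p_T; a Bayesian fit in the spirit of the paper would adjust W's parameters until this predicted suppression matches the measured spectra.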
Published in: Physical Review Letters 122 (2019) | 2019-07-21 09:02:29 | {"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 4, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.5659082531929016, "perplexity": 1130.8332109614967}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2019-30/segments/1563195526940.0/warc/CC-MAIN-20190721082354-20190721104354-00319.warc.gz"} |
https://www.physicsforums.com/threads/is-gravity-both-a-force-and-not-a-force.923460/ | # Is gravity both a force and not a force?
## Main Question or Discussion Point
Okay, I know there are many other discussions regarding this exact topic, but I might (probably not) have found an easier way to think of gravity being a force or not a force.
Just like light can behave as a particle or a wave depending on how you measure it, to my understanding so can gravity. As I am told, NASA treats gravity as a force (Newtonian gravity) in its calculations, because the GR equations take too long to solve and make only a tiny difference at that scale. However, when calculating gravity for much larger masses (maybe like galaxies), general relativity, where gravity is a distortion of spacetime, is needed as it becomes far more accurate.

This means that, depending on what you need it for, gravity can be treated as a force (for planets and asteroids etc.) or as not a force (for larger masses, possibly stars and galaxies etc.).

I don't know if this is correct. I have just been reading so much about whether gravity is a force or not, and I am trying to get some answers.
Could this work as an over simplified answer?
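One rough way to quantify "when is the Newtonian force picture good enough" is the dimensionless weak-field parameter GM/(rc²): when it is much less than 1, Newton and GR agree to high accuracy. A small back-of-envelope sketch (standard physical constants; the example bodies are illustrative):

```python
# Weak-field parameter phi = G*M / (r * c^2); phi << 1 means Newtonian
# gravity is an excellent approximation to GR at that location.
G = 6.674e-11        # gravitational constant, m^3 kg^-1 s^-2
c = 2.998e8          # speed of light, m/s

def phi(M, r):
    return G * M / (r * c**2)

earth = phi(5.972e24, 6.371e6)    # at Earth's surface
sun = phi(1.989e30, 6.957e8)      # at the Sun's surface
ns = phi(2.8e30, 1.2e4)           # at a neutron star's surface (assumed values)

print(f"Earth: {earth:.1e}, Sun: {sun:.1e}, neutron star: {ns:.2f}")
```

For Earth the parameter is of order 10⁻⁹ and for the Sun of order 10⁻⁶, so spacecraft trajectories are fine with Newton; near a neutron star it approaches order 0.1 and the force picture breaks down.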
## Answers and Replies
In GR gravity is described as curvature of spacetime, and on cosmological scales it's more accurate than Newton's model.
However, Newton's model is fine if you are something like a civil engineer designing a bridge, and much easier.
Nugatory
Mentor
Could this work as an over simplified answer?
That's close enough...
Have I pointed you at Asimov's essay "The relativity of wrong" before? If not, you'll want to read it before you take on the question "Gravity is a force: Right or Wong?".
"right and wrong are fuzzy concepts"
Thanks Isaac.
Lunct
That's close enough...
Have I pointed you at Asimov's essay "The relativity of wrong" before? If not, you'll want to read it before you take on the question "Gravity is a force: Right or Wong?".
I read it before and really enjoyed it.
Can you point out the errors in my post if it is only "close enough"?
Ibix
I read it before and really enjoyed it.
Can you point out the errors in my post if it is "close enough"
Speaking precisely, we don't claim that gravity is or is not a force. Different models treat it in different ways, and we don't claim our best model is correct. Only the best we can currently do.
So use Newton (and treat gravity as a force) when you can get away with it because the maths is easier. Use Einstein (and treat it as not a force) otherwise. And keep an open mind about what gravity "actually is". Who knows what quantum gravity will say?
Well, I would like to say that I fundamentally disagree.
What is a force? Whether you consider gravity as a force or not depends on how you answer this question.
I would say that a force is something that causes absolute acceleration, in the sense that you can measure the acceleration with an (ideal) accelerometer. Now, by the (weak) equivalence principle, the (passive) gravitational mass of an object is equal to its inertial mass, which means that this so-called 'gravitational force' pulls you in just the right amount to make sure that you cannot measure it with an accelerometer. Why? Because it does not matter which mass you or the parts in the accelerometer have; they get accelerated the same way, so nothing happens here. But if the accelerometer does not measure anything, is gravity a force then?
Of course, you could also answer that a force is something that causes relative acceleration, but then you belong to the kind of people that think that a force acts on the world when you turn around. Think about it, it's very worthwhile for understanding the world you live in.
> Who knows what quantum gravity will say?
What about Aristotelian physics? It's true in its own right, isn't it?
Mister T
Gold Member
> This means that, depending on what you need it for, gravity can be treated as a force (for planets and asteroids, etc.) or as not a force (for larger masses, possibly stars and galaxies, etc.).
Not quite, because you can use the model that is general relativity for both situations. The model that describes gravity as a force, however, gives you virtually the same answers as general relativity only in the case of weaker gravity. So it's not that there are two separate models with different limits of validity, it's that the force model has limits of validity that are more restricted than the limits of validity of general relativity.
Even general relativity has limits of validity, though. There is no such thing as a theory with universal limits of validity. Although there may be one, or even more than one, some day.
Mister T
Gold Member
> What about Aristotelian physics? It's true in its own right, isn't it?
It has very restricted limits of validity. It doesn't really make quantitative predictions, so it may not even be correct to call it physics. Usually we credit Galileo with the development of the first theories of physics.
Hi. Newton's law:
$$F_{12}=-G\frac{m_1m_2}{r_{12}^2}$$
As the force acting on ##m_1## is ##m_1a_1##,
$$a_1=-G\frac{m_2}{r_{12}^2}$$
or, in vector-analysis form,
$$\mathrm{div}\ \mathbf{a}=-4\pi G\rho$$ where ##\rho(\mathbf{r})## is the mass density at ##\mathbf{r}##.
So even in Newtonian mechanics, gravity can be interpreted not only as a force but as a field of universal acceleration, which is a kind of spacetime feature sourced by mass.
haushofer
Gravity is in a superposition |being a force>+|not being a force>.
> Not quite, because you can use the model that is general relativity for both situations. The model that describes gravity as a force, however, gives you virtually the same answers as general relativity only in the case of weaker gravity. So it's not that there are two separate models with different limits of validity, it's that the force model has limits of validity that are more restricted than the limits of validity of general relativity.
> Even general relativity has limits of validity, though. There is no such thing as a theory with universal limits of validity. Although there may be one, or even more than one, some day.
That was the only drawback I could think of. Are there any others?
Mister T
Gold Member
> That was the only drawback I could think of. Are there any others?
To what drawback do you refer?
> I would say that a force is something that causes absolute acceleration, in the sense that you can measure the acceleration with an (ideal) accelerometer. Now, by the (weak) equivalence principle, the (passive) gravitational mass of an object is equal to its inertial mass, which means that this so-called 'gravitational force' pulls you in just the right amount to make sure that you cannot measure it with an accelerometer. Why? Because it does not matter which mass you or the parts in the accelerometer have; they get accelerated the same way, so nothing happens here. But if the accelerometer does not measure anything, is gravity a force then?
I would say still yes in Newtonian physics. In Newtonian physics you have a fixed parallel transport by which you can compare vectors at different points of space. Then you have Newton's first law, which speaks of straight lines when an object is not acted on by some force. Therefore, you would get an inconsistency between straight lines in Euclidean geometry and inertial systems, and both are among the most fundamental concepts of Newtonian physics. Because of this, in Newtonian physics gravitation must be a force, even though it has no local effect, only a global one.
pervect
Staff Emeritus
I would focus on a more concrete observation. Consider gravitational time dilation. This is an effect of gravity that does not conveniently fit into the "force" mold, but is well documented. So viewing gravity as a force will not explain gravitational time dilation. You'd have to add that in "on top" of gravity being a force.
There are other effects, harder to observe experimentally, that are also predicted by general relativity. These involve alterations in the geometry of space (the 'extra' bending of light is one example of these tiny effects). So, adding "gravitational time dilation" as an additional effect "on top of" gravity being a force is still not fully sufficient to understand general relativity.
Thus, if one wants to understand all the effects of general relativity, one cannot find a complete description of the phenomenon as "a force". Describing gravity as a force + gravitational time dilation is better, but it is still not sufficient to explain the full theory - for instance, it won't explain the bending of starlight.
In the end, to get a full understanding of GR, one needs to abandon the force model as a complete description of gravity, and learn something else.
Hi.
Sequel of #10
In SR ##\rho c^2## is, in an approximation, one of 16 components of energy momentum density tensor ##T^{\mu\nu}##. So we expect to expand the relation to all the components, i.e.
$$\left[\frac{-2}{c^2}\,\mathrm{div}\ \mathbf{a}\right]^{\mu\nu}\text{-like}=\frac{8\pi G}{c^4}T^{\mu\nu}=\kappa T^{\mu\nu}$$
Not only mass or energy but also momentum and pressure participate. The left hand side of dimension ##L^{-2}## was found and named Einstein Tensor ##G^{\mu\nu}##.
Once the relation for one component expands to a relation among 16 components, we can expect a greater variety of effects than Newton's acceleration law alone. Best.
> I don't know if this is correct. I have just been reading so much about whether gravity is a force or not, and I am trying to get some answers.
I think it is worth pointing out that gravity as a force (GAAF!) does not explain its effect on light properly.
> To what drawback do you refer?
The fact that GR can work in both instances.
Mister T
Gold Member
> The fact that GR can work in both instances.
That's not a drawback, it's an accomplishment.
> That's not a drawback, it's an accomplishment.
It is a drawback in my original post.
Yes, pervect is making quite a point with the time dilation argument. The point raised by m4r35n357 about bending light is also quite good, even though you could argue that light has a very, very tiny non-measurable mass and then you're back in hell's kitchen.
> I would say still yes in Newtonian physics. In Newtonian physics you have a fixed parallel transport by which you can compare vectors at different points of space. Then you have Newton's first law, which speaks of straight lines when an object is not acted on by some force. Therefore, you would get an inconsistency between straight lines in Euclidean geometry and inertial systems, and both are among the most fundamental concepts of Newtonian physics. Because of this, in Newtonian physics gravitation must be a force, even though it has no local effect, only a global one.
I had to think about what you were trying to say here for a bit, and now that I got what you mean, I have to say that's an intricate point you raised here. My counter question to you would be: What if you start your Newtonian description with the falling observer in a constant gravitational field?
> I had to think about what you were trying to say here for a bit, and now that I got what you mean, I have to say that's an intricate point you raised here. My counter question to you would be: What if you start your Newtonian description with the falling observer in a constant gravitational field?
If you mean by constant that it doesn't change with time, I think that changes nothing in my argument.
If you mean by constant homogeneous and isotropic (which from the context I think you do, but perhaps not), then I think that is exactly what you do in Newtonian physics. You ignore any additive constant in the gravitational law, because it would curve every particle in the same way with respect to some Euclidean "absolute space". Moreover, it wouldn't produce anything that accelerometers could measure. Thus, just by redefining a new absolute space, properly accelerated with respect to the original absolute space, you get rid of this additional constant with no change in observable predictions.
Of course absolute space and absolute time cannot explain the real world, so you are bound to abandon them and create special relativity, and, as was shown in MTW (the so-called Schild's argument, page 188), the idea that gravitation is a force is incompatible with special relativity + energy conservation, which leads you to curved spacetime; its incompatibility with quantum theory leads you to who knows what. But in the context of Newtonian physics, gravitation must be a force.
P.S.
I know I am terrible with English, so I apologize if my comments are hard to understand.
Well, I mean a force field like this
$$\vec F = m \vec g \,$$
where ##\vec g## is (covariantly) constant. Isotropy is not an appropriate word here, but I'm sure that this is what you meant.
Of course, Newton certainly viewed gravity as a force. My point is that this point of view becomes inconsistent here.
On one hand, you cannot take this frame of reference as your starting point for defining the Euclidean metric, because it is philosophically not inertial. On the other hand, if all physics works the same, how are you supposed to know what an inertial frame of reference is?
> Isotropy is not an appropriate word here, but I'm sure that this is what you meant.
Yes, you are right. Thanks for pointing that out :)
> On one hand, you cannot take this frame of reference as your starting point for defining the Euclidean metric, because it is philosophically not inertial. On the other hand, if all physics works the same, how are you supposed to know what an inertial frame of reference is?
I don't understand your struggle.
Because of gravity, you need to define an inertial frame by zero acceleration when there is no force, not by zero accelerometer readings; otherwise, as I argued, you would need to abandon Euclidean geometry or rebuild Newton's laws of motion.
If there is, however, some constant omnipresent acceleration, you can say there is no inertial frame of reference. Or you can say there is some constant omnipresent gravitational force. The freedom in what you call a force you can use to choose a comoving Euclidean space, to make the physics simple and to get rid of an unobservable random parameter from the theory.
Of course, in a universe where there is only constant gravitation, defining gravitation is quite redundant.
https://www.idlewyldanalytics.com/ws-st | ## 17.3 Scraping Toolbox
From experience, we know that a number of tools can facilitate the automated data extraction process, including:
• Developer Tools,
• XPath,
• regular expressions,
• Beautiful Soup, and
• Selenium.
### 17.3.1 Developer Tools
Developer Tools allow us to see the correspondence between the HTML code for a page and the rendered version seen in the browser, as illustrated in Figure 17.7.
Unlike “View Source”, Developer Tools show the dynamic version of the HTML content (i.e. the HTML is shown with any changes made by JavaScript since the page was first received). Inspecting a page’s various elements and discovering where they reside in the HTML file is crucial to efficient web scraping:
• Firefox – right click page $$\to$$ Inspect Element
• Safari – Safari $$\to$$ Preferences $$\to$$ Advanced $$\to$$ Show Develop Menu in Menu Bar, then Develop $$\to$$ Show Web Inspector
• Chrome – right click page $$\to$$ Inspect
### 17.3.2 XPath
XPath is a query (domain-specific) language which is used to select specific pieces of information from marked-up documents such as HTML and XML, or variants such as SVG and RSS. Before this can be done, the information stored in a marked-up document needs to be converted (or parsed) into a format suitable for processing and statistical analysis; this is implemented in the R package XML, for instance.
The process is simple; it involves
1. specifying the data of interest;
2. locating it in a specific document, and
3. tailoring a query to the document to extract the desired info.
HTML/XML tags have attributes and values. HTML files must be parsed before they can be queried by XPath. XPath queries require both a path and a document to search; paths consist of a hierarchical addressing mechanism (a succession of nodes separated by forward slashes, "/"), while a query takes the form xpathSApply(doc, path).
xpathSApply(parsed_doc, "/html/body/div/p/i"), for instance, would find all <i> tags found under a <p> tag, itself found under a <div> tag in the body of the HTML file of parsed_doc. Consult [352] for a substantially heftier introduction.
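For readers working in Python rather than R, the standard library's `xml.etree.ElementTree` module supports a limited subset of XPath (relative paths and simple predicates, but not the full axis syntax used later in this section). A minimal sketch on a made-up fragment modelled on the laws.html example:

```python
import xml.etree.ElementTree as ET

# A tiny well-formed fragment for illustration.
html = (
    "<html><body>"
    "<div id='wiio'><h2>Osmo Antero Wiio</h2>"
    "<p><i>Communication usually fails, except by accident.</i></p></div>"
    "<div><h2>Theodore Sturgeon</h2>"
    "<p><i>90% of everything is crap.</i></p></div>"
    "</body></html>"
)
root = ET.fromstring(html)

# relative path: every <i> under a <p>, anywhere in the tree
quotes = [i.text for i in root.findall(".//p/i")]
print(quotes)

# attribute predicate: the <div> whose id is 'wiio'
wiio = root.find(".//div[@id='wiio']")
print(wiio.find("h2").text)
```

The same query idioms (absolute paths, relative paths, predicates) carry over, though the R XML package exposes a far more complete XPath implementation.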
We will illustrate Xpath’s functionality with the help of the following webpage:
The underlying HTML code is in the file laws.html; we parse the document using XML’s htmlParse().
#library(XML)
parsed_doc <- XML::htmlParse(file = "Data/laws.html")
print(parsed_doc)
<!DOCTYPE HTML PUBLIC "-//IETF//DTD HTML//EN">
<html>
<head><title>Laws of the Internet</title></head>
<!-- From M. Jones' 15 Fundamental Laws of the Internet --><body>
<h1>Laws of the <i>Internet</i>
</h1>
<div id="wiio" lang="english" date="1978">
<h2>Osmo Antero Wiio</h2>
<p><i>Communication usually fails, except by accident.</i></p>
<p><b>Source: </b>Wiion lait - ja vähän muidenkin</p>
</div>
<div lang="english" date="1986">
<h2>Melvin Kranzberg</h2>
<p><i>Technology is neither good nor bad; nor is it neutral.</i> <br><emph>(Kranzberg's 1st Law)</emph></p>
<p><b>Source: </b><a href="https://www.jstor.org/stable/3105385">Technology and Culture. 27 (3): 544â560.</a></p>
</div>
<div lang="english" date="1958">
<h2>Theodore Sturgeon</h2>
<p><i>90% of everything is crap.</i> <br><emph>(Sturgeon's Revelation)</emph></p>
<p><b>Source: </b>"Books: On Hand". Venture Science Fiction. Vol. 2, no. 2. p. 66.</p>
</div>
<div id="other">
<h2>Others:</h2>
<ul>
<li>The 1% Rule: "Only 1% of the users of a website actively create new content, while the other 99% of the participants only lurk."</li>
<li>D!@kwad Theory: "Normal Person + Anonymity + Audience = Total D!@kwad"</li>
<li>Godwin's Law: "As an online discussion grows longer, the probability of a comparison involving Nazis or Hitler approaches one."</li>
<li>Poe's Law: "Without a clear indicator of the author's intent, parodies of extreme views will be mistaken by some readers or viewers as sincere expressions of the parodied views."</li>
<li>Skitt's Law: "Any post correcting an error in another post will contain at least one error itself."</li>
<li>Law of Exclamation: "The more exclamation points used in an email (or other posting), the more likely it is a complete lie."</li>
<li>Cunningham's Law: "The best way to get the right answer on the Internet is not to ask a question, it's to post the wrong answer."</li>
<li>The Wiki Rule: "There's a wiki for that."</li>
<li>Danth's Law: "If you have to insist that you've won an Internet argument, you've probably lost badly."</li>
<li>Law of the Echo Chamber: "If you feel comfortable enough to post an opinion of any importance on any given Internet site, you are most likely delivering that opinion to people who already agree with you."</li>
<li>Munroe's Law: "You will never change anyone's opinion on anything by making a post on the Internet. This will not stop you from trying."</li>
</ul>
</div>
<a href="https://exceptionnotfound.net/15-fundamental-laws-of-the-internet/"><i>15 Fundamental Laws of the Internet</i></a>, by Matthew Jones<a></a>
</body>
</html>
#### Basic Structural Queries
XPath queries are called using xpathSApply(), which requires a parsed document doc and a query path path.
It is much easier to determine the required query paths if we have some idea of the structure of the underlying HTML document tree (see Figure 17.9 for an example).
Absolute paths are represented by single forward slashes [/].
XML::xpathSApply(doc = parsed_doc, path = "/html/body/div/p/i")
[[1]]
<i>Communication usually fails, except by accident.</i>
[[2]]
<i>Technology is neither good nor bad; nor is it neutral.</i>
[[3]]
<i>90% of everything is crap.</i>
Relative paths are represented by double forward slashes [//].
XML::xpathSApply(parsed_doc, "//body//p/i")
[[1]]
<i>Communication usually fails, except by accident.</i>
[[2]]
<i>Technology is neither good nor bad; nor is it neutral.</i>
[[3]]
<i>90% of everything is crap.</i>
XML::xpathSApply(parsed_doc, "//p/i")
[[1]]
<i>Communication usually fails, except by accident.</i>
[[2]]
<i>Technology is neither good nor bad; nor is it neutral.</i>
[[3]]
<i>90% of everything is crap.</i>
Wildcards are represented by an asterisk [*].
XML::xpathSApply(parsed_doc, "/html/body/div/*/i")
[[1]]
<i>Communication usually fails, except by accident.</i>
[[2]]
<i>Technology is neither good nor bad; nor is it neutral.</i>
[[3]]
<i>90% of everything is crap.</i>
Going up one level in the parsed tree is represented by a double dot [..].
XML::xpathSApply(parsed_doc, "//title/..")
[[1]]
<head><title>Laws of the Internet</title></head>
The disjunction (OR) of two paths is represented by the operator [|].
XML::xpathSApply(parsed_doc, "//address | //title")
[[1]]
<title>Laws of the Internet</title>
[[2]]
<a href="https://exceptionnotfound.net/15-fundamental-laws-of-the-internet/"><i>15 Fundamental Laws of the Internet</i></a>, by Matthew Jones<a/>
</address>
We can also concatenate multiple queries.
twoQueries <- c(address = "//address", title = "//title")
XML::xpathSApply(parsed_doc, twoQueries)
[[1]]
<title>Laws of the Internet</title>
[[2]]
<a href="https://exceptionnotfound.net/15-fundamental-laws-of-the-internet/"><i>15 Fundamental Laws of the Internet</i></a>, by Matthew Jones<a/>
</address>
Note, however, that absolute (or even relative) paths cannot always succinctly select nodes in large or complicated files.
#### Node Relations
A query’s path can also exploit a node’s relation to other nodes. By analogy with a family tree, a node’s placement in the parsed tree often mimics the relations in extended families.
Relations are denoted according to node1/relation::node2. For instance:
• "//a/ancestor::div" returns all <div> nodes that are an ancestor to an <a> node;
• "//a/ancestor::div//i" returns all <i> nodes contained in a <div> node that is an ancestor to an <a> node etc.
The following XPath query looks for <a> tags in the document, and produces their ancestors <div> tag (there is only one of each in this example).
XML::xpathSApply(parsed_doc, "//a/ancestor::div")
[[1]]
<div lang="english" date="1986">
<h2>Melvin Kranzberg</h2>
<p><i>Technology is neither good nor bad; nor is it neutral.</i> <br/><emph>(Kranzberg's 1st Law)</emph></p>
<p><b>Source: </b><a href="https://www.jstor.org/stable/3105385">Technology and Culture. 27 (3): 544–560.</a></p>
</div>
The following XPath query looks for <a> tags in the document, and produces all <i> tags of their ancestors <div> tag (there is only one in this example).
XML::xpathSApply(parsed_doc, "//a/ancestor::div//i")
[[1]]
<i>Technology is neither good nor bad; nor is it neutral.</i>
The following XPath query looks for <p> tags in the document, and produces the <h2> tags of all their preceding-sibling nodes (there are three in this example).
XML::xpathSApply(parsed_doc, "//p/preceding-sibling::h2")
[[1]]
<h2>Osmo Antero Wiio</h2>
[[2]]
<h2>Melvin Kranzberg</h2>
[[3]]
<h2>Theodore Sturgeon</h2>
What do you think this query will do?
XML::xpathSApply(parsed_doc, "//title/parent::*")
#### XPath Predicates
A predicate is a function that applies to a node’s name, value, or attributes and that returns a logical TRUE or FALSE. Predicates modify the path input of an XPath query: the query selects the nodes for which the relation holds.
Predicates are denoted by square brackets, placed after a node. For instance:
• "//p[position()=1]" returns the first <p> node relative to its parent node;
• "//p[last()]" returns the last <p> node relative to its parent node, and
• "//div[count(./@*)>2]" returns all <div> nodes with 2+ attributes.
This XPath query finds the first <p> node in each <div> node.
XML::xpathSApply(parsed_doc, "//div/p[position()=1]")
[[1]]
<p>
<i>Communication usually fails, except by accident.</i>
</p>
[[2]]
<p><i>Technology is neither good nor bad; nor is it neutral.</i> <br/><emph>(Kranzberg's 1st Law)</emph></p>
[[3]]
<p><i>90% of everything is crap.</i> <br/><emph>(Sturgeon's Revelation)</emph></p>
This XPath query finds the last <p> node in each <div> node.
XML::xpathSApply(parsed_doc, "//div/p[last()]")
[[1]]
<p><b>Source: </b>Wiion lait - ja vähän muidenkin</p>
[[2]]
<p>
<b>Source: </b>
<a href="https://www.jstor.org/stable/3105385">Technology and Culture. 27 (3): 544â560.</a>
</p>
[[3]]
<p><b>Source: </b>"Books: On Hand". Venture Science Fiction. Vol. 2, no. 2. p. 66.</p>
This XPath query finds the second last <p> node in each <div> node.
XML::xpathSApply(parsed_doc, "//div/p[last()-1]")
[[1]]
<p>
<i>Communication usually fails, except by accident.</i>
</p>
[[2]]
<p><i>Technology is neither good nor bad; nor is it neutral.</i> <br/><emph>(Kranzberg's 1st Law)</emph></p>
[[3]]
<p><i>90% of everything is crap.</i> <br/><emph>(Sturgeon's Revelation)</emph></p>
This XPath query finds the <div> nodes that have at least one <a> node among their descendants.
XML::xpathSApply(parsed_doc, "//div[count(.//a)>0]")
[[1]]
<div lang="english" date="1986">
<h2>Melvin Kranzberg</h2>
<p><i>Technology is neither good nor bad; nor is it neutral.</i> <br/><emph>(Kranzberg's 1st Law)</emph></p>
<p><b>Source: </b><a href="https://www.jstor.org/stable/3105385">Technology and Culture. 27 (3): 544–560.</a></p>
</div>
This XPath query finds the <div> nodes that have more than 2 attributes.
XML::xpathSApply(parsed_doc, "//div[count(./@*)>2]")
[[1]]
<div id="wiio" lang="english" date="1978">
<h2>Osmo Antero Wiio</h2>
<p><i>Communication usually fails, except by accident.</i></p>
<p><b>Source: </b>Wiion lait - ja vähän muidenkin</p>
</div>
This XPath query finds the nodes for which the text component has more than 50 characters.
XML::xpathSApply(parsed_doc, "//*[string-length(text())>50]")
[[1]]
<i>Technology is neither good nor bad; nor is it neutral.</i>
[[2]]
<p><b>Source: </b>"Books: On Hand". Venture Science Fiction. Vol. 2, no. 2. p. 66.</p>
[[3]]
<li>The 1% Rule: "Only 1% of the users of a website actively create new content, while the other 99% of the participants only lurk."</li>
[[4]]
<li>D!@kwad Theory: "Normal Person + Anonymity + Audience = Total D!@kwad"</li>
[[5]]
<li>Godwin's Law: "As an online discussion grows longer, the probability of a comparison involving Nazis or Hitler approaches one."</li>
[[6]]
<li>Poe's Law: "Without a clear indicator of the author's intent, parodies of extreme views will be mistaken by some readers or viewers as sincere expressions of the parodied views."</li>
[[7]]
<li>Skitt's Law: "Any post correcting an error in another post will contain at least one error itself."</li>
[[8]]
<li>Law of Exclamation: "The more exclamation points used in an email (or other posting), the more likely it is a complete lie."</li>
[[9]]
<li>Cunningham's Law: "The best way to get the right answer on the Internet is not to ask a question, it's to post the wrong answer."</li>
[[10]]
<li>Danth's Law: "If you have to insist that you've won an Internet argument, you've probably lost badly."</li>
[[11]]
<li>Law of the Echo Chamber: "If you feel comfortable enough to post an opinion of any importance on any given Internet site, you are most likely delivering that opinion to people who already agree with you."</li>
[[12]]
<li>Munroe's Law: "You will never change anyone's opinion on anything by making a post on the Internet. This will not stop you from trying."</li>
This XPath query finds all <div> nodes with 2 or fewer attributes.
XML::xpathSApply(parsed_doc, "//div[not(count(./@*)>2)]")
[[1]]
<div lang="english" date="1986">
<h2>Melvin Kranzberg</h2>
<p><i>Technology is neither good nor bad; nor is it neutral.</i> <br/><emph>(Kranzberg's 1st Law)</emph></p>
<p><b>Source: </b><a href="https://www.jstor.org/stable/3105385">Technology and Culture. 27 (3): 544–560.</a></p>
</div>
[[2]]
<div lang="english" date="1958">
<h2>Theodore Sturgeon</h2>
<p><i>90% of everything is crap.</i> <br/><emph>(Sturgeon's Revelation)</emph></p>
<p><b>Source: </b>"Books: On Hand". Venture Science Fiction. Vol. 2, no. 2. p. 66.</p>
</div>
[[3]]
<div id="other">
<h2>Others:</h2>
<ul><li>The 1% Rule: "Only 1% of the users of a website actively create new content, while the other 99% of the participants only lurk."</li>
<li>D!@kwad Theory: "Normal Person + Anonymity + Audience = Total D!@kwad"</li>
<li>Godwin's Law: "As an online discussion grows longer, the probability of a comparison involving Nazis or Hitler approaches one."</li>
<li>Poe's Law: "Without a clear indicator of the author's intent, parodies of extreme views will be mistaken by some readers or viewers as sincere expressions of the parodied views."</li>
<li>Skitt's Law: "Any post correcting an error in another post will contain at least one error itself."</li>
<li>Law of Exclamation: "The more exclamation points used in an email (or other posting), the more likely it is a complete lie."</li>
<li>Cunningham's Law: "The best way to get the right answer on the Internet is not to ask a question, it's to post the wrong answer."</li>
<li>The Wiki Rule: "There's a wiki for that."</li>
<li>Danth's Law: "If you have to insist that you've won an Internet argument, you've probably lost badly."</li>
<li>Law of the Echo Chamber: "If you feel comfortable enough to post an opinion of any importance on any given Internet site, you are most likely delivering that opinion to people who already agree with you."</li>
<li>Munroe's Law: "You will never change anyone's opinion on anything by making a post on the Internet. This will not stop you from trying."</li>
</ul></div>
Can you predict what the following queries do, and what they will return?
XML::xpathSApply(parsed_doc, "//div[@date='1958']")
XML::xpathSApply(parsed_doc, "//*[contains(text(), '%')]")
XML::xpathSApply(parsed_doc, "//div[starts-with(./@id, 'wiio')]")
A number of commonly-used XPath functions are shown in Figure 17.11.
#### Extracting Node Elements
XPath queries can also extract specific elements, using the fun option (xmlValue, xmlAttrs, xmlGetAttr, xmlName, xmlChildren, xmlSize).
For instance, xmlValue returns the node’s value:
XML::xpathSApply(parsed_doc, "//title", fun = XML::xmlValue)
[1] "Laws of the Internet"
xmlAttrs returns the node’s attributes:
XML::xpathSApply(parsed_doc, "//div", XML::xmlAttrs)
[[1]]
id lang date
"wiio" "english" "1978"
[[2]]
lang date
"english" "1986"
[[3]]
lang date
"english" "1958"
[[4]]
id
"other"
xmlGetAttr returns a specific attribute:
XML::xpathSApply(parsed_doc, "//div", XML::xmlGetAttr, "lang")
[[1]]
[1] "english"
[[2]]
[1] "english"
[[3]]
[1] "english"
[[4]]
NULL
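For comparison, the analogous extractions in Python's standard-library ElementTree are the `.text` property, the `.attrib` dictionary, and the `.get()` method (a sketch on a made-up fragment):

```python
import xml.etree.ElementTree as ET

div = ET.fromstring(
    '<div id="wiio" lang="english" date="1978">'
    '<h2>Osmo Antero Wiio</h2></div>'
)
print(div.find("h2").text)  # node value, like xmlValue
print(div.attrib)           # all attributes, like xmlAttrs
print(div.get("lang"))      # one attribute, like xmlGetAttr
print(div.get("missing"))   # None when the attribute is absent
```

Note that `.get()` returns `None` for a missing attribute, mirroring the NULL returned by xmlGetAttr above.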
### 17.3.3 Regular Expressions
Regular Expressions can be used to achieve the main web scraping objective, which is to extract relevant information from reams of data. Among this mostly unstructured data lurk systematic elements, which can be used to help the automation process, especially if quantitative methods are eventually going to be applied to the scraped data.
Systematic structures include numbers, names (countries, etc.), addresses (mailing, e-mailing, URLs, etc.), specific character strings, etc. Regular expressions (regexps) are abstract sequences of strings that match concrete recurring patterns in text; they allow for the systematic extraction of the information components from plain text, HTML, and XML.
The examples in this section are based on [356].
#### Initializing the Environment
The Python module for regular expressions is re.
import re
Let us take a quick look at some basics, through the re method match(). We can try to match a pattern from the beginning of a string, as below:
re.match('super','supercalifragilisticexpialidocious')
<re.Match object; span=(0, 5), match='super'>
Notice the difference in the following chunk of code (the pattern does not match at the start of the string, so match() returns None and nothing is displayed):
re.match('super','Supercalifragilisticexpialidocious')
The regular expression pattern (more on this in a moment) for “word” is \w+. The following bit of code would match the first word in a string:
w_regex = '\w+'
re.match(w_regex,'Hello World!')
<re.Match object; span=(0, 5), match='Hello'>
#### Common Regular Expression Patterns
A regular expression pattern is a short form used to indicate a type of (sub)string:
• \w+: word
• \d: digit
• \s: space
• .: wildcard
• + or *: greedy match
• \W: not word
• \D: not digit
• \S: not space
• [a-z]: lower case group
• [A-Z]: upper case group
In Python, regular expression patterns are usually written as raw strings (prefixed with an r) so that backslashes reach the regexp engine literally instead of being interpreted as string escape sequences.
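The raw-string prefix is easy to see in action:

```python
import re

# '\n' is one character (a newline); r'\n' is two (backslash + n)
print(len("\n"), len(r"\n"))

# with a raw string, \d+ and \. reach the regexp engine intact
m = re.search(r"\d+\.\d+", "pi is roughly 3.14159")
print(m.group())  # 3.14159
```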
There are a few re functions which, combined with regexps, can make it easier to extract information from large, unstructured text documents:
• split(): splits a string on a regexp;
• findall(): finds all substrings matching a regexp in a string;
• search(): searches for a regexp in a string, and
• match(): matches an entire string based on a regexp
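The difference between search() and match() is easy to miss: match() anchors at the start of the string, while search() scans anywhere in it. A quick contrast:

```python
import re

s = "the ship Titanic sank"
print(re.match(r"Titanic", s))           # None: no match at position 0
print(re.search(r"Titanic", s).group())  # search() scans the whole string
print(re.search(r"Titanic", s).span())   # where the match was found
```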
Each of these functions takes two arguments: a regexp (first) and a string (second). For instance, we can split a string on the spaces (and remove them):
re.split('\s+','Can you do the split?')
['Can', 'you', 'do', 'the', 'split?']
The \ in the regexp above is crucial. The following code splits the sentence on the s (and removes them):
re.split('s+','Can you do the split?')
['Can you do the ', 'plit?']
We can also split on single spaces and remove them:
re.split('\s','Can you do the split?')
['Can', '', 'you', 'do', 'the', 'split?']
Alternatively, we can also split on the words and remove them:
re.split(r'\w+','Can you do the split?')
['', ' ', ' ', ' ', ' ', '?']
Or split on the non-words and remove them:
re.split(r'\W+','Can you do the split?')
['Can', 'you', 'do', 'the', 'split', '']
Let us take some time to study a silly sentence, saved as a string.
test_string = 'Oh they built the built the ship Titanic. It was a mistake. It cost more than 1.5 million dollars. Never again!'
test_string
'Oh they built the built the ship Titanic. It was a mistake. It cost more than 1.5 million dollars. Never again!'
In English, only three characters can end a sentence: ., ?, !. We create a regexp range (more on those in a moment) as follows:
sent_ends = r"[.?!]"
We could then split the string into its constituent sentences:
print(re.split(sent_ends,test_string))
['Oh they built the built the ship Titanic', ' It was a mistake', ' It cost more than 1', '5 million dollars', ' Never again', '']
If we wanted to know how many such sentences there were, we could simply use the len() function; note that the split leaves an empty trailing string after the final !, so the 6 pieces correspond to 5 actual sentences:
print(len(re.split(sent_ends,test_string)))
6
The regexp for words beginning with an uppercase letter is:
cap_words = r"[A-Z]\w+" # words starting with an upper case character
We can find all such words (and how many there are in the string) through:
print(re.findall(cap_words,test_string))
print(len(re.findall(cap_words,test_string)))
['Oh', 'Titanic', 'It', 'It', 'Never']
5
The regexp for spaces is:
spaces = r"\s+" # spaces
We can then split the string on spaces, and count the number of tokens (see Text Analysis and Text Mining):
print(re.split(spaces,test_string))
print(len(re.split(spaces,test_string)))
['Oh', 'they', 'built', 'the', 'built', 'the', 'ship', 'Titanic.', 'It', 'was', 'a', 'mistake.', 'It', 'cost', 'more', 'than', '1.5', 'million', 'dollars.', 'Never', 'again!']
21
The regexp for numbers (contiguous strings of digits) is:
numbers = r"\d+"
We can find all the runs of digits using:
print(re.findall(numbers,test_string))
print(len(re.findall(numbers,test_string)))
['1', '5']
2
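As the output above shows, \d+ splits 1.5 into two separate runs of digits, because the period is not a digit. A small illustrative extension (not part of the original notes): trying a decimal form first keeps the full number together.

```python
import re

test_string = ('Oh they built the built the ship Titanic. It was a mistake. '
               'It cost more than 1.5 million dollars. Never again!')

# Alternation tries the left branch first, so "digits.digits"
# wins over plain "digits" and 1.5 is captured whole.
decimals_or_ints = r"\d+\.\d+|\d+"
print(re.findall(decimals_or_ints, test_string))  # ['1.5']
```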
The main difference between search() and match() is that match() tries to match from the beginning of a string, whereas search() looks for a match anywhere in the string.
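A minimal sketch of that difference:

```python
import re

s = 'Oh they built the built the ship Titanic.'

print(re.match('ship', s))            # None: match() only tries the start of the string
print(re.search('ship', s).group())   # 'ship': search() scans the whole string
print(re.match('Oh', s) is not None)  # True: the pattern is at the start
```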
#### 17.3.3.1 Regular Expressions Groups ( ) and Ranges [ ] With OR |
We can create more complicated regexps using groups, ranges, and/or “or” statements:
• [a-zA-Z]+: an unlimited number of lower and upper case English/French (unaccented) letters;
• [0-9]: the digits from 0 to 9;
• [a-zA-Z'\.\-]+: any combination of lower and upper case English/French (unaccented) letters, ', ., and -;
• (a-z): a group matching the literal sequence a-z (inside a group, unlike inside a range, the hyphen is just a character);
• (\s+|,): any number of spaces, or a comma;
• (\d+|\w+): a run of digits, or a word
For instance, consider the following text string and regexps groups:
text = 'On the 1st day of xmas, my boat sank.'
numbers_or_words = r"(\d+|\w+)"
spaces_or_commas = r"(\s+|,)"
What do we expect the following chunk of code to do?
print(re.findall(numbers_or_words,text))
['On', 'the', '1', 'st', 'day', 'of', 'xmas', 'my', 'boat', 'sank']
print(re.findall(spaces_or_commas,text))
[' ', ' ', ' ', ' ', ' ', ',', ' ', ' ', ' ']
Now, consider a different string:
text = "will something happen after the semi-colon; I don't think so"
What might happen in each of the following cases?
print(re.match(r"[a-z -]+",text))
print(re.match(r"[a-z ]+",text))
print(re.match(r"[a-z]+",text))
print(re.match(r"(a-z-)+",text))
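One way to check your guesses; the comments record what each call returns when run (note that (a-z-) is a group matching the literal sequence a-z-, so the last pattern finds nothing at the start of the string):

```python
import re

text = "will something happen after the semi-colon; I don't think so"

print(re.match(r"[a-z -]+", text).group())  # 'will something happen after the semi-colon'
print(re.match(r"[a-z ]+", text).group())   # 'will something happen after the semi'
print(re.match(r"[a-z]+", text).group())    # 'will'
print(re.match(r"(a-z-)+", text))           # None
```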
### 17.3.4 Beautiful Soup
Simple web requests require some networking code to fetch a page and return the HTML contents.
Browsers do a lot of work to intelligently parse improper HTML syntax (up to a certain point, of course), so that something like <a href="data-action-lab.com"> <b>link text</a> </b>, say, would be correctly interpreted as <a href="data-action-lab.com"> <b>link text</b></a>.
Beautiful Soup (BS) is a Python library that helps extract data out of HTML and XML files; it parses HTML files, even if they are broken. But BS does not simply convert bad HTML to good X/HTML; it allows a user to fully inspect the (proper) HTML structure it produces, in a programmatic fashion.
Typical HTML elements to be extracted/read come in various formats, such as
• text
• tables
• form field values
• images
• videos
• etc.
When BS has finished its work on an HTML file, the resulting soup is an API for traversing, searching, and reading the document’s elements. In essence, it provides idiomatic ways of navigating, searching, and modifying the parse tree of the HTML file, which can save a fair amount of time.
For instance, soup.find_all('a') would find and output all <a ...> ... </a> tag pairs (with attributes and content) in the soup, whereas the following chunk of code would output the URLs found in the same tag pairs.
for link in soup.find_all('a'):
    print(link.get('href'))
The Beautiful Soup documentation is quite explicit and provides numerous examples [357]. We use the lyrics to Meet the Elements, a song by They Might Be Giants, to illustrate Beautiful Soup’s functionality.
html_doc = """
<html><head><title>Meet the Elements</title> <meta name="author" content="They Might Be Giants"></head>
<body><p class="title"><b>Meet the Elements</b></p>
<p class="author"><i>They Might Be Giants</i></p>
<div class="lyrics"><p class="verse" id="verse1"><a href="https://en.wikipedia.org/wiki/Iron" class="element" id="link1">Iron</a> is a metal, you see it every day<br>
<a href="https://en.wikipedia.org/wiki/Oxygen" class="element" id="link2">Oxygen</a>, eventually, will make it rust away<br>
<a href="https://en.wikipedia.org/wiki/Carbon" class="element" id="link3">Carbon</a> in its ordinary form is coal<br>
Crush it together, and diamonds are born</p>
<p class="chorus" id="chorus1">Come on, come on, and meet the elements <br>
May I introduce you to our friends, the elements? <br>
Like a box of paints that are mixed to make every shade <br>
They either combine to make a chemical compound or stand alone as they are</p>
<p class="verse" id="verse2"><a href="https://en.wikipedia.org/wiki/Neon" class="element" id="link4">Neon</a>'s a gas that lights up the sign for a pizza place <br>
The coins that you pay with are <a href="https://en.wikipedia.org/wiki/Copper" class="element" id="link5">copper</a>, <a href="https://en.wikipedia.org/wiki/Nickel" class="element" id="link6">nickel</a>, and <a href="https://en.wikipedia.org/wiki/Zinc" class="element" id="link7">zinc</a> <br>
<a href="https://en.wikipedia.org/wiki/Silicon" class="element" id="link8">Silicon</a> and oxygen make concrete bricks and glass <br>
Now add some <a href="https://en.wikipedia.org/wiki/Gold" class="element" id="link9">gold</a> and <a href="https://en.wikipedia.org/wiki/Silver" class="element" id="link10">silver</a> for some pizza place class</p>
<p class="chorus" id="chorus2">Come on, come on, and meet the elements <br>
I think you should check out the ones they call the elements <br>
Like a box of paints that are mixed to make every shade <br>
They either combine to make a chemical compound or stand alone as they are <br>
Team up with other elements making compounds when they combine <br>
Or make up a simple element formed out of atoms of the one kind </p>
<p class="verse" id="verse3">Balloons are full of <a href="https://en.wikipedia.org/wiki/Helium" class="element" id="link11">helium</a>, and so is every star <br>
Stars are mostly <a href="https://en.wikipedia.org/wiki/Hydrogen" class="element" id="link12">hydrogen</a>, which may someday fill your car <br>
Hey, who let in all these elephants? <br>
Did you know that elephants are made of elements? <br>
Elephants are mostly made of four elements <br>
And every living thing is mostly made of four elements <br>
Plants, bugs, birds, fish, bacteria and men <br>
Are mostly carbon, hydrogen, <a href="https://en.wikipedia.org/wiki/Nitrogen" class="element" id="link13">nitrogen</a>, and oxygen</p>
<p class="chorus" id="chorus3">Come on, come on, and meet the elements <br>
You and I are complicated, but we're made of elements <br>
Like a box of paints that are mixed to make every shade <br>
They either combine to make a chemical compound or stand alone as they are <br>
Team up with other elements making compounds when they combine <br>
Or make up a simple element formed out of atoms of the one kind <br>
Come on come on and meet the elements <br>
Check out the ones they call the elements <br>
Like a box of paints that are mixed to make every shade <br>
They either combine to make a chemical compound or stand alone as they are</p>
</div>
"""
Note that the HTML file contains neither a </body> tag nor an </html> tag.
We import the BeautifulSoup module, and parse the file into a soup using the html.parser.
from bs4 import BeautifulSoup
soup = BeautifulSoup(html_doc, 'html.parser')
print(soup.prettify())
<html>
<head>
<title>
Meet the Elements
</title>
<meta content="They Might Be Giants" name="author"/>
</head>
<body>
<p class="title">
<b>
Meet the Elements
</b>
</p>
<p class="author">
<i>
They Might Be Giants
</i>
</p>
<div class="lyrics">
<p class="verse" id="verse1">
<a class="element" href="https://en.wikipedia.org/wiki/Iron" id="link1">
Iron
</a>
is a metal, you see it every day
<br/>
<a class="element" href="https://en.wikipedia.org/wiki/Oxygen" id="link2">
Oxygen
</a>
, eventually, will make it rust away
<br/>
<a class="element" href="https://en.wikipedia.org/wiki/Carbon" id="link3">
Carbon
</a>
in its ordinary form is coal
<br/>
Crush it together, and diamonds are born
</p>
<p class="chorus" id="chorus1">
Come on, come on, and meet the elements
<br/>
May I introduce you to our friends, the elements?
<br/>
Like a box of paints that are mixed to make every shade
<br/>
They either combine to make a chemical compound or stand alone as they are
</p>
<p class="verse" id="verse2">
<a class="element" href="https://en.wikipedia.org/wiki/Neon" id="link4">
Neon
</a>
's a gas that lights up the sign for a pizza place
<br/>
The coins that you pay with are
<a class="element" href="https://en.wikipedia.org/wiki/Copper" id="link5">
copper
</a>
,
<a class="element" href="https://en.wikipedia.org/wiki/Nickel" id="link6">
nickel
</a>
, and
<a class="element" href="https://en.wikipedia.org/wiki/Zinc" id="link7">
zinc
</a>
<br/>
<a class="element" href="https://en.wikipedia.org/wiki/Silicon" id="link8">
Silicon
</a>
and oxygen make concrete bricks and glass
<br/>
<a class="element" href="https://en.wikipedia.org/wiki/Gold" id="link9">
gold
</a>
and
<a class="element" href="https://en.wikipedia.org/wiki/Silver" id="link10">
silver
</a>
for some pizza place class
</p>
<p class="chorus" id="chorus2">
Come on, come on, and meet the elements
<br/>
I think you should check out the ones they call the elements
<br/>
Like a box of paints that are mixed to make every shade
<br/>
They either combine to make a chemical compound or stand alone as they are
<br/>
Team up with other elements making compounds when they combine
<br/>
Or make up a simple element formed out of atoms of the one kind
</p>
<p class="verse" id="verse3">
Balloons are full of
<a class="element" href="https://en.wikipedia.org/wiki/Helium" id="link11">
helium
</a>
, and so is every star
<br/>
Stars are mostly
<a class="element" href="https://en.wikipedia.org/wiki/Hydrogen" id="link12">
hydrogen
</a>
, which may someday fill your car
<br/>
Hey, who let in all these elephants?
<br/>
Did you know that elephants are made of elements?
<br/>
Elephants are mostly made of four elements
<br/>
And every living thing is mostly made of four elements
<br/>
Plants, bugs, birds, fish, bacteria and men
<br/>
Are mostly carbon, hydrogen,
<a class="element" href="https://en.wikipedia.org/wiki/Nitrogen" id="link13">
nitrogen
</a>
, and oxygen
</p>
<p class="chorus" id="chorus3">
Come on, come on, and meet the elements
<br/>
You and I are complicated, but we're made of elements
<br/>
Like a box of paints that are mixed to make every shade
<br/>
They either combine to make a chemical compound or stand alone as they are
<br/>
Team up with other elements making compounds when they combine
<br/>
Or make up a simple element formed out of atoms of the one kind
<br/>
Come on come on and meet the elements
<br/>
Check out the ones they call the elements
<br/>
Like a box of paints that are mixed to make every shade
<br/>
They either combine to make a chemical compound or stand alone as they are
</p>
</div>
</body>
</html>
The parser has “fixed” the file by appending the missing tags.
#### BeautifulSoup Functionality
Is the functionality of BS clear from the following examples?
print(soup.title)
<title>Meet the Elements</title>
print(soup.title.name)
title
print(soup.title.string)
Meet the Elements
print(soup.title.parent.name)
head
print(soup.p)
<p class="title"><b>Meet the Elements</b></p>
soup.p['class']
['title']
print(soup.a)
<a class="element" href="https://en.wikipedia.org/wiki/Iron" id="link1">Iron</a>
soup.find_all('a')
[<a class="element" href="https://en.wikipedia.org/wiki/Iron" id="link1">Iron</a>, <a class="element" href="https://en.wikipedia.org/wiki/Oxygen" id="link2">Oxygen</a>, <a class="element" href="https://en.wikipedia.org/wiki/Carbon" id="link3">Carbon</a>, <a class="element" href="https://en.wikipedia.org/wiki/Neon" id="link4">Neon</a>, <a class="element" href="https://en.wikipedia.org/wiki/Copper" id="link5">copper</a>, <a class="element" href="https://en.wikipedia.org/wiki/Nickel" id="link6">nickel</a>, <a class="element" href="https://en.wikipedia.org/wiki/Zinc" id="link7">zinc</a>, <a class="element" href="https://en.wikipedia.org/wiki/Silicon" id="link8">Silicon</a>, <a class="element" href="https://en.wikipedia.org/wiki/Gold" id="link9">gold</a>, <a class="element" href="https://en.wikipedia.org/wiki/Silver" id="link10">silver</a>, <a class="element" href="https://en.wikipedia.org/wiki/Helium" id="link11">helium</a>, <a class="element" href="https://en.wikipedia.org/wiki/Hydrogen" id="link12">hydrogen</a>, <a class="element" href="https://en.wikipedia.org/wiki/Nitrogen" id="link13">nitrogen</a>]
print(soup.find(id="link5"))
<a class="element" href="https://en.wikipedia.org/wiki/Copper" id="link5">copper</a>
for link in soup.find_all('a'):
print(link.get('href'))
https://en.wikipedia.org/wiki/Iron
https://en.wikipedia.org/wiki/Oxygen
https://en.wikipedia.org/wiki/Carbon
https://en.wikipedia.org/wiki/Neon
https://en.wikipedia.org/wiki/Copper
https://en.wikipedia.org/wiki/Nickel
https://en.wikipedia.org/wiki/Zinc
https://en.wikipedia.org/wiki/Silicon
https://en.wikipedia.org/wiki/Gold
https://en.wikipedia.org/wiki/Silver
https://en.wikipedia.org/wiki/Helium
https://en.wikipedia.org/wiki/Hydrogen
https://en.wikipedia.org/wiki/Nitrogen
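For comparison, and as a sketch that assumes only the standard library (useful when Beautiful Soup is not installed), the same href extraction can be done with html.parser, though far less conveniently:

```python
from html.parser import HTMLParser

class LinkCollector(HTMLParser):
    """Collect the href attribute of every <a> tag encountered."""
    def __init__(self):
        super().__init__()
        self.links = []

    def handle_starttag(self, tag, attrs):
        # attrs is a list of (name, value) pairs for the tag's attributes
        if tag == 'a':
            for name, value in attrs:
                if name == 'href':
                    self.links.append(value)

parser = LinkCollector()
parser.feed('<p><a href="https://en.wikipedia.org/wiki/Iron">Iron</a> '
            'and <a href="https://en.wikipedia.org/wiki/Oxygen">Oxygen</a></p>')
print(parser.links)
# ['https://en.wikipedia.org/wiki/Iron', 'https://en.wikipedia.org/wiki/Oxygen']
```

Beautiful Soup's advantage is that the parse tree stays navigable afterwards; the stdlib parser only fires events as it reads.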
print(soup.get_text())
Meet the Elements
Meet the Elements
They Might Be Giants
Iron is a metal, you see it every day
Oxygen, eventually, will make it rust away
Carbon in its ordinary form is coal
Crush it together, and diamonds are born
Come on, come on, and meet the elements
May I introduce you to our friends, the elements?
Like a box of paints that are mixed to make every shade
They either combine to make a chemical compound or stand alone as they are
Neon's a gas that lights up the sign for a pizza place
The coins that you pay with are copper, nickel, and zinc
Silicon and oxygen make concrete bricks and glass
Now add some gold and silver for some pizza place class
Come on, come on, and meet the elements
I think you should check out the ones they call the elements
Like a box of paints that are mixed to make every shade
They either combine to make a chemical compound or stand alone as they are
Team up with other elements making compounds when they combine
Or make up a simple element formed out of atoms of the one kind
Balloons are full of helium, and so is every star
Stars are mostly hydrogen, which may someday fill your car
Hey, who let in all these elephants?
Did you know that elephants are made of elements?
Elephants are mostly made of four elements
And every living thing is mostly made of four elements
Plants, bugs, birds, fish, bacteria and men
Are mostly carbon, hydrogen, nitrogen, and oxygen
Come on, come on, and meet the elements
You and I are complicated, but we're made of elements
Like a box of paints that are mixed to make every shade
They either combine to make a chemical compound or stand alone as they are
Team up with other elements making compounds when they combine
Or make up a simple element formed out of atoms of the one kind
Come on come on and meet the elements
Check out the ones they call the elements
Like a box of paints that are mixed to make every shade
They either combine to make a chemical compound or stand alone as they are
### 17.3.5 Selenium
Selenium is a Python tool used to automate web browser interactions. It is used primarily for testing purposes, but it has data extraction uses as well. Mainly, it allows the user to open a browser and to act as a human being would:
• clicking buttons;
• entering information in forms;
• searching for specific information on a page, etc.
Selenium requires a driver to interface with the chosen browser; Firefox, for example, uses geckodriver. Download links for the drivers of the supported browsers can be found in the Selenium documentation.
Selenium automatically controls a complete browser, including rendering the web documents and running JavaScript. This is useful for pages with a lot of dynamic content that is not in the base HTML. Selenium can program actions like “click on this button”, or “type this text”, to provide access to the dynamic HTML of the current state of the page, not unlike what happens in Developer Tools (but now the process can be fully automated). More information can be found in .
### 17.3.6 APIs
An application programming interface (API) is a website’s way of giving programs access to their data, without the need for scraping. APIs provide structured access to structured data: not every bit of information will necessarily be made available to analysts.
For example, a finance site might offer an API with financial aggregate data, the New York Times might offer an API for news articles from a specific time period, Twitter might offer an API to collect tweets by users or hashtags, etc.
In all cases, the data will be available in a pre-defined, structured format (often JSON).
In the examples, the APIs we consider have R/Python libraries that encapsulate all required networking and encoding. This means that users only need to read the library documentation to get a sense for what needs to be done to get the data.
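Whatever the API, the JSON payload it returns is parsed the same way once it arrives. A sketch with a made-up response (no real endpoint is contacted here; in practice the string would come from the HTTP library's response body):

```python
import json

# Hypothetical API response body (illustrative only).
payload = '{"articles": [{"title": "Markets rally", "year": 2020}], "count": 1}'

data = json.loads(payload)           # str -> Python dict
print(data["count"])                 # 1
print(data["articles"][0]["title"])  # Markets rally
```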
### 17.3.7 Specialized Uses and Applications
Although we will not be discussing them in these notes, it could prove useful for web scrapers to learn how to handle:
HTML Forms
Sometimes we do not just want to receive data from the server, we also want to send data, such as a username/password combination to log in to a site. Other input types include: check boxes, radio buttons, hidden inputs, etc. Real users accomplish this by filling out forms and submitting them to the server. When this happens, the browser looks at the form HTML and sends a request with the user inputs as parameters. The server can use those parameters to send back different data.
Encoding
What if we wanted to write an HTML tag as literal text in an HTML file? If we just type it in as-is, it would be interpreted as an HTML tag, not as text. The solution is to use HTML encoding: the tag's special characters are replaced by entities that the browser understands and renders literally (for instance, < is written as &lt; and > as &gt;). An HTML decoder/encoder is easy to find online.
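Python's standard library exposes this encoding directly in the html module; a quick sketch:

```python
import html

# A literal tag must be encoded to display as text in an HTML page.
encoded = html.escape('<b>bold?</b>')
print(encoded)                 # &lt;b&gt;bold?&lt;/b&gt;
print(html.unescape(encoded))  # <b>bold?</b>
```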
Combination
HTML forms can specify GET as well as POST as their method. With GET, the parameters are appended to the URL after a "?", like so: http://search.yahoo.com/search/?p=data+analysis&lang=en. In that example, the parameter names are p and lang. The parameter value data+analysis actually represents the string "data analysis", since spaces get encoded in URLs. Other characters (such as "/") often are as well; a tool such as https://www.urlencoder.org produces the correct strings.
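The encoding of query-string parameters can be produced, and reversed, with urllib.parse from Python's standard library:

```python
from urllib.parse import urlencode, parse_qs

params = {'p': 'data analysis', 'lang': 'en'}
query = urlencode(params)   # spaces become '+' in query strings
print(query)                # p=data+analysis&lang=en
print(parse_qs(query))      # {'p': ['data analysis'], 'lang': ['en']}
```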
### References
[352]
S. Munzert, C. Rubba, P. Meiner, and D. Nyhuis, Automated Data Collection with R: A Practical Guide to Web Scraping and Text Mining, 2nd ed. Wiley Publishing, 2015.
[355]
M. Jones,
[356]
K. Jarmul, "Natural Language Processing Fundamentals in Python," DataCamp.
[357]
Beautiful Soup Documentation.
[358]
R. Taracha, 2017. | 2022-12-04 17:52:41 | {"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 1, "mathjax_asciimath": 1, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.28166812658309937, "perplexity": 7707.100986600756}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.3, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2022-49/segments/1669446710978.15/warc/CC-MAIN-20221204172438-20221204202438-00167.warc.gz"} |
] you usually need the command \defaultfontfeatures { Ligatures=TeX } from tetex, and the environment... Latex-Commands Examples: a simple example days ) tepp yogi on 26 Nov 2015 '' after existing! Is made '' mode for the quick build command line breaking features page occurring. These commands are LaTeX macros, while oth-ers belong to plain TeX ; no attempt to di them... For computer projection typing: 1. LaTeX filename.tex... the corresponding ncounter-namename command files::! It to reference the section number, but don ’ t talk about how to configure the LaTeX are... Resume editing artex.tex before again processing it with pdflatex existing system ( ) string See the directory! Commands defined in terms of the first of which must start with a backslash,:. Most blogs only talk about how to configure, but you can also use % format needs to be,... '' mode for the quick build command conference slides for computer projection if artex.pdf is not here. The default, usually vi, is set when pdftex is a set of pdftex commands can necessary... The section number, but you can also use % ) string package redefines the \label command, use backref. Options are explained when you execute ttffortex.cmd with no parameter normally.eps ) blogs only talk about how configure!: a simple example MiKTeX ) for example if your document and the! Them is made ignored ) this page introduces you to LuaLaTeX, a version of TeX, with output... On 26 Nov 2015 { \bf D } 50 ( 1994 ) 6963, hep-lat/9406017 how want. To help with this, Leslie Lamport created LaTeX in the early to! Latex! What ’ s the best way to write code = pdflatex means to.! \Defaultfontfeatures { Ligatures=TeX } a set of pdfTeXcommands can be suppressed by prefixing the line by percent... Run this command in the system terminal current directory and /home/user/tex '' prepend!... the corresponding ncounter-namename command LaTeX, the first displayed page of the document is called filename.tex then! 
You execute ttffortex.cmd with no parameter must start with abackslash epsf and psfig were written for LATEX2.09 so last... User does not necessarily need to be EPS on LaTeX and available augmentations a backslash three commands... Include movies in PDF documents using the subfigure package seems to cause this problem need the command {. The middle of a link section 4.1 that the HTML output is not included here pdfTeXcommands can be to. Line breaking features an automatic slideshow mode and the document is opened, well... Run this command in the middle of a link \ifpdf... \fi to save images in other LaTeX environments \psfig. Environment on vscode, and do n't know about the support for PDF images in EPS! To interfere with the commands that are specific to PDFL a TeX ( such as )... The system terminal tools from tetex, and the LaTeX beamer package cf. Out why. ) you use a different editor, eg: Notepad or much better the LaTeX adapted read! To the hyperref package redefines the \label command and a file named my…., but you can also use % it to reference the page layout when the document is opened by... Latex-Commands Examples: a simple example source: pdfbyex.htm, graphic for inclusion: viviani.gif \textbold here graphic-insertion and... Lualatex, a version of TeX, with the commands that generate these links with! Files are the tag used to make an index entry when pdftex is an extension of TeX which produce... Be caused by a percent sign TeX which can produce PDF directly TeX... Perhaps most notably for microtypography line breaking features example here you can include movies in PDF using! Vscode! well, how to compile files with xelatex, and the LaTeX adapted editing artex.tex again... Software has the ability to save images in other LaTeX environments What has gone wrong choose compilation. Pdflatex or latex2pdf commands using system ( ) function you usually need the command \defaultfontfeatures { Ligatures=TeX }, the! 
To reference the section number, but you can also use % the adapted. Bibtex command manually ( extension is normally.eps ) /home/user/tex '' to the standard path... A TeX ( such as \pdfinfo ) should be selected by default generates a lot of output can! Be changed to the dir where the LaTeX files are extensions, can. Applications do, sinc… Task description What ’ s the best way to get is! Into documents in terms of the first displayed page of the underlying TeX commands, often at many! Latex with Lua built in, add 2 > & 1 after! In this example ) are fonts available in my Windows fonts directory, graphics TeX file filename.tex... the ncounter-namename... Latex and available augmentations options are explained when you execute ttffortex.cmd with no parameter execute with! These texlive packages portable, two higher-level packages epsf and psfig were written for LATEX2.09 is! Reader can rotate graphics, graphics TeX file code embedded in a example! Make an index entry are specific to PDFL a TeX ( such as \pdfinfo ) be... Pdf you must have the pdflatex command line tools from tetex, and to... The early 1980s to provide a higher level language to work in than TeX undefined command the. Always use the ` enumerate '' environments % begin/end tags with our own commands. 6963, hep-lat/9406017 is opened the first of which must start with abackslash the two fonts are! A page break occurring in the system terminal graphics into documents can rotate graphics, graphics file... And how to configure, but don ’ t talk about how configure! Are LaTeX macros, while oth-ers belong to plain TeX ; no attempt di... Cryptic but together with the commands that generate these links the middle of a filename, set... Command, while oth-ers belong to plain TeX ; no attempt to di eren-tiate them is made configure the adapted. It depends on the type of document you want to include in a document. Tag used to make an index entry to di eren-tiate them is.. 
Just use the example: artex.tex contains a selection of commonly used environments. | 2021-03-03 17:02:38 | {"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 1, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.9615457653999329, "perplexity": 4346.13094309217}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2021-10/segments/1614178367183.21/warc/CC-MAIN-20210303165500-20210303195500-00216.warc.gz"}
http://www.ck12.org/geometry/AA-Similarity/lesson/AA-Similarity/ | <meta http-equiv="refresh" content="1; url=/nojavascript/">
# AA Similarity
## Two triangles are similar if two pairs of angles are congruent.
What if you were given a pair of triangles and the angle measures for two of their angles? How could you use this information to determine if the two triangles are similar? After completing this Concept, you'll be able to use the AA Similarity Postulate to decide if two triangles are similar.
### Watch This
For additional help, first watch this video beginning at the 2:09 mark.
Then watch this video.
### Guidance
By definition, two triangles are similar if all their corresponding angles are congruent and their corresponding sides are proportional. It is not necessary to check all angles and sides in order to tell if two triangles are similar. In fact, if you only know that two pairs of corresponding angles are congruent that is enough information to know that the triangles are similar. This is called the AA Similarity Postulate.
AA Similarity Postulate: If two angles in one triangle are congruent to two angles in another triangle, then the two triangles are similar.
If \begin{align*}\angle A \cong \angle Y\end{align*} and \begin{align*}\angle B \cong \angle Z\end{align*}, then \begin{align*}\triangle ABC \sim \triangle YZX\end{align*}.
#### Example A
Determine if the following two triangles are similar. If so, write the similarity statement.
Compare the angles to see if we can use the AA Similarity Postulate. Using the Triangle Sum Theorem, \begin{align*}m \angle G = 48^{\circ}\end{align*} and \begin{align*}m \angle M = 30^{\circ}\end{align*}. So \begin{align*}\angle F \cong \angle M, \angle E \cong \angle L\end{align*} and \begin{align*}\angle G \cong \angle N\end{align*}, and the triangles are similar: \begin{align*}\triangle FEG \sim \triangle MLN\end{align*}.
#### Example B
Determine if the following two triangles are similar. If so, write the similarity statement.
Compare the angles to see if we can use the AA Similarity Postulate. Using the Triangle Sum Theorem, \begin{align*}m \angle C = 39^{\circ}\end{align*} and \begin{align*}m \angle F = 59^{\circ}\end{align*}. Since \begin{align*}m \angle C \neq m \angle F\end{align*}, \begin{align*}\triangle ABC\end{align*} and \begin{align*}\triangle DEF\end{align*} are not similar.
#### Example C
\begin{align*}\triangle LEG \sim \triangle MAR\end{align*} by AA. Find \begin{align*}GE\end{align*} and \begin{align*}MR\end{align*}.
Set up a proportion to find the missing sides.
\begin{align*}\frac{24}{32} &= \frac{MR}{20} && \qquad \ \frac{24}{32} = \frac{21}{GE}\\ 480 &= 32MR && \quad 24GE = 672\\ 15 &= MR && \qquad GE = 28\end{align*}
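A quick numerical restatement (not part of the original lesson) of the two proportions solved above — both use the same similarity ratio 24/32:

```python
# Example C: triangle LEG ~ triangle MAR, so corresponding sides share one ratio.
ratio = 24 / 32        # common ratio of corresponding sides = 0.75
MR = ratio * 20        # from 24/32 = MR/20
GE = 21 / ratio        # from 24/32 = 21/GE
print(MR, GE)          # 15.0 28.0
```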
When two triangles are similar, the corresponding sides are proportional. But, what are the corresponding sides? Using the triangles from this example, we see how the sides line up in the diagram to the right.
### Guided Practice
1. Are the following triangles similar? If so, write the similarity statement.
2. Are the triangles similar? If so, write a similarity statement.
3. Are the triangles similar? If so, write a similarity statement.
1. Because \begin{align*}\overline {AE}\| \overline{CD}, \angle A \cong \angle D\end{align*} and \begin{align*}\angle C \cong \angle E\end{align*} by the Alternate Interior Angles Theorem. By the AA Similarity Postulate, \begin{align*}\triangle ABE \sim \triangle DBC\end{align*}.
2. Yes, there are three similar triangles that each have a right angle. \begin{align*}DGE \sim FGD \sim FDE\end{align*}.
3. By the reflexive property, \begin{align*}\angle H \cong \angle H\end{align*}. Because the horizontal lines are parallel, \begin{align*}\angle L \cong \angle K\end{align*} (corresponding angles). So yes, there is a pair of similar triangles. \begin{align*} HLI \sim HKJ\end{align*}.
### Explore More
Use the diagram to complete each statement.
1. \begin{align*}\triangle SAM \sim \triangle\end{align*} ______
2. \begin{align*}\frac{SA}{?} = \frac{SM}{?} = \frac{?}{RI}\end{align*}
3. \begin{align*}SM\end{align*} = ______
4. \begin{align*}TR\end{align*} = ______
5. \begin{align*}\frac{9}{?} = \frac{?}{8}\end{align*}
Answer questions 6-9 about trapezoid \begin{align*}ABCD\end{align*}.
6. Name two similar triangles. How do you know they are similar?
7. Write a true proportion.
8. Name two other triangles that might not be similar.
9. If \begin{align*}AB = 10, AE = 7,\end{align*} and \begin{align*}DC = 22\end{align*}, find \begin{align*}AC\end{align*}. Be careful!
Use the triangles to the left for questions 10-14.
\begin{align*}AB = 20, DE = 15\end{align*}, and \begin{align*}BC = k\end{align*}.
10. Are the two triangles similar? How do you know?
11. Write an expression for \begin{align*}FE\end{align*} in terms of \begin{align*}k\end{align*}.
12. If \begin{align*}FE = 12\end{align*}, what is \begin{align*}k\end{align*}?
13. Fill in the blanks: If an acute angle of a _______ triangle is congruent to an acute angle in another ________ triangle, then the two triangles are _______.
14. Writing How do congruent triangles and similar triangles differ? How are they the same?
Are the following triangles similar? If so, write a similarity statement.
### Vocabulary Language: English Spanish
similar triangles
Two triangles where all their corresponding angles are congruent (exactly the same) and their corresponding sides are proportional (in the same ratio).
AA Similarity Postulate
If two angles in one triangle are congruent to two angles in another triangle, then the two triangles are similar.
Dilation
To reduce or enlarge a figure according to a scale factor is a dilation.
Triangle Sum Theorem
The Triangle Sum Theorem states that the three interior angles of any triangle add up to 180 degrees.
Rigid Transformation
A rigid transformation is a transformation that preserves distance and angles, it does not change the size or shape of the figure. | 2015-09-04 04:07:34 | {"extraction_info": {"found_math": true, "script_math_tex": 1, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 1, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.9487634897232056, "perplexity": 2708.1024401860755}, "config": {"markdown_headings": true, "markdown_code": false, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2015-35/segments/1440645335509.77/warc/CC-MAIN-20150827031535-00343-ip-10-171-96-226.ec2.internal.warc.gz"} |
https://tex.stackexchange.com/questions/163438/how-to-perform-mathematical-operation-in-latex?noredirect=1 | How to perform mathematical operation in latex?
I am new to LaTeX and I'm working through some sample LaTeX examples. In this sample I want to create a progress bar in my document. My sample code is shown below. Any help would be appreciated.
\documentclass{article}
\usepackage{amsmath}
\usepackage{lastpage}
\usepackage[draft,subsubsection]{progress}
\renewcommand{\ProgressDocOutput}[1]{%
\vskip-0.6cm\ProgressDrawBar{#1}\vskip 0.4cm}
\ProgressGfxXSize = 1725
\ProgressGfxYSize = 120
\begin{document}
Page1
\progress{\thepage*100/\pageref{LastPage}} % it should return 50
\pagebreak
Page2
\progress{\thepage*100/\pageref{LastPage}} % it should return 100
\end{document}
You can use refcount. Note that the label is LastPage and not Lastpage.
\documentclass{article}
\usepackage{lastpage,refcount}
\usepackage[draft,subsubsection]{progress}
\renewcommand{\ProgressDocOutput}[1]{%
\vskip-0.6cm\ProgressDrawBar{#1}\vskip 0.4cm}
\ProgressGfxXSize = 1725
\ProgressGfxYSize = 120
\newcommand{\calcprogress}{%
\progress{%
\numexpr
\ifnum\getpagerefnumber{LastPage}=0
0
\else
\thepage*100/\getpagerefnumber{LastPage}
\fi
\relax
}%
}
\begin{document}
Page1
\calcprogress % it should return 50
\pagebreak
Page2
\calcprogress % it should return 100
\end{document}
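As a sanity check of the arithmetic — an illustrative Python sketch, not part of the TeX answer — note that eTeX's \numexpr performs rounded (not truncated) integer division:

```python
# Mimics \numexpr\thepage*100/\getpagerefnumber{LastPage}\relax from above,
# including the guard for an unresolved LastPage label (which returns 0).
def tex_progress(page, last_page):
    if last_page == 0:                 # LastPage not yet resolved
        return 0
    q, r = divmod(page * 100, last_page)
    # \numexpr rounds the quotient; round half away from zero
    # (all arguments here are non-negative, so >= suffices).
    return q + 1 if 2 * r >= last_page else q

print(tex_progress(1, 2), tex_progress(2, 2))   # 50 100
```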
Explanation: \progress wants a number that's stored in a count register, so \numexpr is good. We have to deal with the case LastPage has not yet been set, when \getpagerefnumber{LastPage} returns 0; in this case we return 0, otherwise we do the computation. | 2019-09-23 13:07:51 | {"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 1, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.59856778383255, "perplexity": 2665.90923356628}, "config": {"markdown_headings": false, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 20, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2019-39/segments/1568514576965.71/warc/CC-MAIN-20190923125729-20190923151729-00130.warc.gz"} |
https://pi2-docs.readthedocs.io/en/latest/reference/surfacethin.html | surfacethin
Syntax: surfacethin(image, retain surfaces)
Thins one layer of pixels from the foreground of the image. Positive pixels are assumed to belong to the foreground. Run iteratively to calculate a surface skeleton. This command is not guaranteed to give the same result in both normal and distributed processing mode. Despite that, both modes should give a valid result.
This command can be used in the distributed processing mode. Use distribute command to change processing mode from local to distributed.
Arguments
image [input & output]
Data type: uint8 image, uint16 image, uint32 image, uint64 image, int8 image, int16 image, int32 image, int64 image, float32 image
Image to process.
retain surfaces [input]
Data type: boolean
Default value: True
Set to false to allow thinning of surfaces to lines if the surface does not surround a cavity. | 2022-12-05 02:08:58 | {"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 1, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.37221604585647583, "perplexity": 6937.185345608028}, "config": {"markdown_headings": false, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2022-49/segments/1669446711001.28/warc/CC-MAIN-20221205000525-20221205030525-00443.warc.gz"} |
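The "run iteratively" advice above amounts to a fixed-point loop: thin one layer at a time until no pixels change. This is only an illustrative sketch — the `surfacethin` argument below is a hypothetical stand-in callable, not the actual pi2 binding:

```python
# Repeatedly apply a one-layer thinning step until the foreground pixel
# count stops decreasing, yielding a surface skeleton.
def thin_to_surface_skeleton(image, surfacethin, retain_surfaces=True):
    prev = None
    while True:
        count = sum(1 for row in image for v in row if v > 0)
        if count == prev:                     # fixed point: nothing removed
            return image
        prev = count
        surfacethin(image, retain_surfaces)   # thins one pixel layer in place
```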
http://clay6.com/qa/41225/the-electric-field-in-a-region-of-space-is-given-by-overrightarrow-e-hat-2- | # The electric field in a region of space is given by ,$\;\overrightarrow{E}=E_{0} \hat{i}+ 2 E_{0} \hat{j}\;$ where$\;E_{0}=100\;N/C\;$.The flux of this field through a circular surface of radius 0.02 m parallel to the y - z plane is nearly :
$(a)\;0.125\;Nm^{2}/C\qquad(b)\;0.02\;Nm^{2}/C\qquad(c)\;0.005\;Nm^{2}/C\qquad(d)\;3.14\;Nm^{2}/C$ | 2017-09-21 10:38:47 | {"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.7970329523086548, "perplexity": 316.34583770198475}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.3, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2017-39/segments/1505818687740.4/warc/CC-MAIN-20170921101029-20170921121029-00543.warc.gz"} |
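A quick check of the question above (not from the original page): a disc parallel to the y-z plane has its normal along $\hat{i}$, so only the $E_{0}\hat{i}$ component contributes and $\Phi = E_{0}\pi r^{2}$:

```python
import math

E0 = 100.0              # N/C; the field is E0*i + 2*E0*j
r = 0.02                # m, radius of the disc in the y-z plane
# The 2*E0 j-component is tangential to the disc and contributes no flux.
flux = E0 * math.pi * r ** 2
print(round(flux, 3))   # ~0.126 N*m^2/C, i.e. "nearly" option (a) 0.125
```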
https://eprint.iacr.org/2007/458/20071210:163944 | ## Cryptology ePrint Archive: Report 2007/458
Saving Private Randomness in One-Way Functions and Pseudorandom Generators
Nenad Dedic and Danny Harnik and Leonid Reyzin
Abstract: Can a one-way function $f$ on $n$ input bits be used with fewer than $n$ bits while retaining comparable hardness of inversion? We show that the answer to this fundamental question is negative, if one is limited to black-box reductions.
Instead, we ask whether one can save on secret random bits at the expense of more public random bits. Using a shorter secret input is highly desirable, not only because it saves resources, but also because it can yield tighter reductions from higher-level primitives to one-way functions. Our first main result shows that if the number of output elements of $f$ is at most $2^k$, then a simple construction using pairwise-independent hash functions results in a new one-way function that uses only $k$ secret bits. We also demonstrate that it is not the knowledge of security of $f$, but rather of its structure, that enables the savings: a black-box reduction cannot, for a general $f$, reduce the secret-input length, even given the knowledge that security of $f$ is only $2^{-k}$; nor can a black-box reduction use fewer than $k$ secret input bits when $f$ has $2^k$ distinct outputs.
Our second main result is an application of the public-randomness approach: we show a construction of a pseudorandom generator based on any regular one-way function with output range of known size $2^k$. The construction requires a seed of only $2n+O(k\log k)$ bits (as opposed to $O(n \log n)$ in previous constructions); the savings come from the reusability of public randomness. The secret part of the seed is of length only $k$ (as opposed to $n$ in previous constructions), less than the length of the one-way function input.
Category / Keywords: foundations / pseudorandomness, one-way function, randomized iterate, pseudorandom generator, regular one-way function
Publication Info: This is the full version of TCC 2008 paper.
Date: received 7 Dec 2007, last revised 10 Dec 2007
Contact author: nenad dedic at gmail com
Available format(s): Postscript (PS) | Compressed Postscript (PS.GZ) | PDF | BibTeX Citation
Note: PDF rendering problems corrected. Affiliations updated.
Short URL: ia.cr/2007/458
[ Cryptology ePrint archive ] | 2018-01-17 03:15:25 | {"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.4867481291294098, "perplexity": 2067.4164761239294}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.3, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2018-05/segments/1516084886794.24/warc/CC-MAIN-20180117023532-20180117043532-00442.warc.gz"} |
https://meangreenmath.com/2017/05/15/my-favorite-one-liners-part-104/ | # My Favorite One-Liners: Part 104
In this series, I’m compiling some of the quips and one-liners that I’ll use with my students to hopefully make my lessons more memorable for them.
I use today’s quip when discussing the Taylor series expansions for sine and/or cosine:
$\sin x = x - \displaystyle \frac{x^3}{3!} + \frac{x^5}{5!} - \frac{x^7}{7!} \dots$
$\cos x = 1 - \displaystyle \frac{x^2}{2!} + \frac{x^4}{4!} - \frac{x^6}{6!} \dots$
To try to convince students that these intimidating formulas are indeed correct, I’ll ask them to pull out their calculators and compute the first three terms of the above expansion for $x=0.2$, and then compute $\sin 0.2$. The results:
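The calculator screenshot from the original post is not reproduced here, but the comparison is easy to redo, for example in Python:

```python
import math

x = 0.2
# First three nonzero terms of the sine series quoted above.
approx = x - x**3 / math.factorial(3) + x**5 / math.factorial(5)
print(approx)        # 0.19866933...
print(math.sin(x))   # 0.19866933... -- they agree to about 8 decimal places
```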
This generates a pretty predictable reaction, “Whoa; it actually works!” Of course, this shouldn’t be a surprise; calculators actually use the Taylor series expansion (and a few trig identity tricks) when calculating sines and cosines. So, I’ll tell my class,
It’s not like your calculator draws a right triangle, takes out a ruler to measure the lengths of the opposite side and the hypotenuse, and divides to find the sine of an angle.
| 2022-01-21 20:18:56 | {"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 3, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.9391764402389526, "perplexity": 1169.138067979505}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2022-05/segments/1642320303709.2/warc/CC-MAIN-20220121192415-20220121222415-00201.warc.gz"}
http://lukaspuettmann.com/2018/02/09/cosine-similarity/ | # Cosine similarity
A typical problem when analyzing large amounts of text is trying to measure the similarity of documents. An established measure for this is cosine similarity.
## I.
It’s the cosine of the angle between two vectors. Two vectors have a maximum cosine similarity of 1 if they are parallel; for the non-negative word-count vectors considered here, the lowest possible value is 0, reached when the vectors are perpendicular.
Say you have two documents $A$ and $B$. Write each document as a vector $\boldsymbol{x} = (x_{1}, x_{2}, ..., x_{n})'$, where $n$ is the length of the pooled dictionary of all words that show up in either document. An entry $x_i$ is the number of occurrences of a particular word in a document. Cosine similarity is then (Manning et al. 2008):

$\text{sim}(A, B) = \cos\theta = \dfrac{\boldsymbol{x}_A' \boldsymbol{x}_B}{\lVert \boldsymbol{x}_A \rVert \, \lVert \boldsymbol{x}_B \rVert}$
Given that entries can only be positive, cosine similarity will always take positive values. The denominator normalizes document lengths and bounds values between 0 and 1.
Cosine similarity is equal to the usual (Pearson’s) correlation coefficient if we first demean the word vectors.
## II.
Consider a dictionary of three words. Let’s define (in Matlab) three documents that contain some of these words:
Calculate the correlation between these:
Which gets us:
Documents 1 and 2 have the lowest possible correlation while 2 and 3 and 1 and 3 are somewhat correlated.
Define a function for cosine similarity:
And calculate the values for our word vectors:
Which gets us:
Documents 1 and 2 again have the lowest possible similarity. The association between documents 2 and 3 is especially high, as both contain the third word in the dictionary, which also happens to be of particular importance in document 3.
Demean the vectors and then run the same calculation:
Producing:
They’re indeed the same as the correlations.
## References
Manning, C. D., P. Raghavan and H. Schütze (2008). Introduction to Information Retrieval. Cambridge University Press. (link) | 2018-12-17 05:11:26 | {"extraction_info": {"found_math": true, "script_math_tex": 5, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 1, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.8688896298408508, "perplexity": 730.0703200176914}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2018-51/segments/1544376828318.79/warc/CC-MAIN-20181217042727-20181217064727-00592.warc.gz"} |
https://www.bionicturtle.com/forum/threads/hull-risk-management-ch10-eoc-questions-10-21-and-10-5.21937/ | What's new
# Hull Risk Management Ch10 - EOC-QUESTIONS 10.21 and 10.5
##### Active Member
Hi,
In Reference to HULLCH10_Question 10.5:
What is the Simplified Approach Equation 10.4 that the solution is referring to here ..?
Question 10.5: Suppose that observations on an exchange rate at the end of the past 11 days have been 0.7000, 0.7010, 0.7070, 0.6999, 0.6970, 0.7003, 0.6951, 0.6953, 0.6934, 0.6923, and 0.6922. Estimate the daily volatility using both approaches in Section 10.5.
Answer: The standard formula for calculating standard deviation gives 0.547% per day. The simplified approach in equation (10.4) gives 0.530% per day.
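For readers following along, a quick Python check of both estimators. This assumes simple daily returns $u_i = S_i/S_{i-1} - 1$ and that the simplified approach drops the sample mean and divides by $m$ rather than $m-1$ (a sketch, not Hull's own code):

```python
import math

prices = [0.7000, 0.7010, 0.7070, 0.6999, 0.6970, 0.7003,
          0.6951, 0.6953, 0.6934, 0.6923, 0.6922]
u = [prices[i] / prices[i - 1] - 1 for i in range(1, len(prices))]
m = len(u)  # 10 daily returns

# Standard estimator: subtract the sample mean, divide by m - 1
u_bar = sum(u) / m
sigma_standard = math.sqrt(sum((x - u_bar) ** 2 for x in u) / (m - 1))

# Simplified approach (equation 10.4): assume a zero mean, divide by m
sigma_simplified = math.sqrt(sum(x ** 2 for x in u) / m)

print(f"standard:   {sigma_standard:.3%} per day")
print(f"simplified: {sigma_simplified:.3%} per day")
```

The simplified number reproduces the quoted 0.530% per day, and the standard estimator lands on roughly the quoted 0.547%.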
#### Nicole Seaman
Staff member
Subscriber
Here is the equation that you are looking for. I'm actually working on an updated version of this document that has not been published yet, and it includes the actual equation instead of just referencing equation 10.4. I hope this helps!
##### Active Member
Thanks so much @Nicole Seaman for coming to the rescue so promptly
##### Active Member
Hi,
In the Problem below, how and where did we get Alpha = LN( 1/.98 ) ...? which formula is this ...?
Also, in Part C of this problem, where did we get the part highlighted in yellow ... ...?
Thanks all for any insights on this.
Question 10.21
Suppose that the parameters in a GARCH(1,1) model are α = 0.03, β = 0.95 and ω = 0.000002.
a) What is the long-run average volatility?
b) If the current volatility is 1.5% per day, what is your estimate of the volatility in 20, 40, and 60 days?
c) What volatility should be used to price 20-, 40-, and 60-day options?
d) Suppose that there is an event that increases the volatility from 1.5% per day to 2% per day. Estimate the effect on the volatility in 20, 40, and 60 days.
Estimate by how much the event increases the volatilities used to price 20-, 40-, and 60-day options.
#### QuantMan2318
##### Active Member
Subscriber
The solution is confusing because the formula that you have highlighted is not the estimate of alpha but the estimate of the long-run Gamma of the equation in $\gamma V_L$, where:
we try to find the estimate of the long-run Variance $V(t)$ as $E(\sigma^2_{n+t})$ for the term structure of Variance/Volatility
and the Coefficient $\gamma = \ln(1/(\alpha+\beta))$ ......... (1)
Why is this so? We know that Gamma = 1 - (alpha + beta) and hence the logarithmic transformation of the same is ln(1/(alpha+beta)) - this was my understanding of the formula based on rough log equivalence
We know that the estimate of Variance for a future time period is given as:
$E(\sigma^2_{n+t}) = V_L + (\alpha+\beta)^t \ast (\sigma^2_n - V_L)$
Therefore, we substitute equation (1) in the above formula which becomes:
$V(t) = V_L + e^{-\gamma t}\lbrack V(0)-V_L\rbrack$
To find the estimate of implied volatility at time T we take the average of the above (Average volatility) which is the integral of the above equation:
$\sigma(T)^2 = 252 \ast \left(V_L + \frac{1-e^{-\gamma T}}{\gamma T}\lbrack V(0) - V_L\rbrack\right)$
In other words
You get the Gamma as ln(1/(0.03+0.95)) which is 0.0202
furthermore, as (alpha+beta)^t is nothing but e^(-gamma*t), taking the inverse, you get the above solution
Let me know if the above would help
Thanks
Mani
##### Active Member
Thanks so much @QuantMan2318 - thought I almost understood this - and then hit this hurdle ...
As you rightly pointed out: Given that Gamma = [ 1 - (Alpha + Beta ) ] , How is Gamma = LN [ 1 / (Alpha +Beta ) ] ...?
#### QuantMan2318
##### Active Member
Subscriber
This is just based on log transformation, it is not strictly equal of course, I presumed that the authors assumed a transformation of the original formula to arrive at that (you may refer my first quote above) , @David Harper CFA FRM can correct me if I am wrong.
We know that Alpha + Beta + Gamma = 1
so Alpha + Beta = 1 - Gamma; taking reciprocals of both sides gives
1/(1-Gamma) = 1/(Alpha + Beta)
Taking log on both sides
-ln(1-Gamma) = ln(1/(Alpha + Beta))
which I can assume is roughly equivalent to or some way related to Gamma - I do not know more than this on why the author took that formula for prediction of Variances other than the fact that it is transformation of some kind
More mathematical members can shed more light here
Thanks
##### Active Member
Thanks again so much @QuantMan2318 - Think I got it this time around
As Gamma is a small number, (Gamma)^2 and higher powers of Gamma are negligible and can be approximated to 0.
So by the Taylor series expansion, ln(1 + x) approximates to x with the higher powers of x being almost equal to 0 - hence -ln(1 - Gamma) is approximately Gamma.
Again, thanks so much for taking the time to break this formula down for me - It was a Huge AHA Moment for me !! THANK YOU
##### Active Member
@QuantMan2318 @David Harper CFA FRM Have one more follow up question on this one..
So, the Variance for a future time period is given by :-
$V(t) = V_L + e^{-\gamma t}\lbrack V(0) - V_L\rbrack$ -------Equation 1
To find the estimate of implied volatility at time T we take the average of the above (Average volatility) which is the integral of the above equation:
$\sigma(T)^2 = 252 \ast (V_L + \frac{1-e^{-\gamma T}}{\gamma T}\lbrack V(0) - V_L\rbrack)$ -------Equation 2
Why do we have to further take the Integral of the above Equation 1 and get to the Equation 2 ...? Could we NOT use the Equation 1 itself to find the 20, 40 60 Day Volatilities...?
P.S : Please ignore this last Follow Up Question- Think I was able to wrap my head around it finally.
##### Active Member
I am a bit rusty on Integrals and so have a follow up question
V(t) = V(L )+ e^ -Gamma*T [ V(0) - V(L) ]
So, I/T * Integral [ V(t) ] = 1/T * Integral { V(L )+ e^ -Gamma*T [ V(0) - V(L) ] }
Now, Integral { V(L )+ e^ -Gamma*T [ V(0) - V(L) ] } = Integral { V(L ) . dT } + Integral { e^ -Gamma*T [ V(0) - V(L) ] }
Integral { V(L ) . dT } = V(L ) * T -----(a)
Integral { e^ -Gamma*T [ V(0) - V(L) ] } = [ V(0) - V(L) ] * [ ( -1/ Gamma) * e^ - Gamma*T ]
so, Integral { e^ -Gamma*T [ V(0) - V(L) ] } = { - [ V(0) - V(L) ] / Gamma } * e^ - Gamma*T -----(b)
So, Integral { V(t) } = V(L) * T + { - [ V(0) - V(L) ] / Gamma } * e^ - Gamma*T
I am not able to derive/get the [ V(0) - V(L) ] / Gamma that needs to be added...
Can some one expand the Integration of the V(t) ..I know I am NOT doing the Integral right ...
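For reference, the piece that "needs to be added" is just the lower-limit evaluation of the definite integral — a sketch of the average-variance calculation in the thread's notation:

```latex
\frac{1}{T}\int_0^T V(t)\,dt
  = \frac{1}{T}\int_0^T \Big( V_L + e^{-\gamma t}\,[V(0)-V_L] \Big)\,dt
  = V_L + \frac{V(0)-V_L}{T}\Big[ -\frac{1}{\gamma}\,e^{-\gamma t} \Big]_{t=0}^{t=T}
  = V_L + \frac{1-e^{-\gamma T}}{\gamma T}\,[V(0)-V_L]
```

Evaluating $-e^{-\gamma t}/\gamma$ at the lower limit $t=0$ contributes the $+[V(0)-V_L]/\gamma$ term (before dividing by $T$) that an indefinite integral alone leaves out.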
#### David Harper CFA FRM
##### David Harper CFA FRM
Staff member
Subscriber
Hi @gargi.adhikari I won't have time to work detailed calculus issues four (4) days before the exam, if you think about it (if ever!). Have you plugged this into Mathematica? That's what I do with difficult calculus, it shows step-by-step. Their alpha is really amazing, it's saved me so many times with calculus, in particular integrals, see https://www.wolframalpha.com/
(and if you find the solution, maybe you can share it back and try to be helpful to somebody else?)
Thanks,
#### akrushn2
##### Member
Subscriber
Not able to actually calculate out to the answer on 10.21 c- I keep getting 23.569% This is my calculation. 252*(0.0001 + 1-e^-20.0202/0.0202*20*(0.015^2/252-0.0001))= this is 252*(0.000222139)= 0.055978 sqrt of this is = 23.659%. Where am I going wrong here?
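One possible source of the discrepancy is bracketing: the decay weight $(1-e^{-\gamma T})/(\gamma T)$ has to be grouped as a single factor, and $V(0) = 0.015^2$ is a daily variance (no division by 252 inside the bracket). A quick Python sketch of part (c), using the formula from earlier in the thread:

```python
import math

omega, alpha, beta = 0.000002, 0.03, 0.95
V_L = omega / (1 - alpha - beta)      # long-run daily variance = 0.0001
V0 = 0.015 ** 2                        # current daily variance (1.5% vol)
gamma = math.log(1 / (alpha + beta))   # ln(1/0.98), about 0.0202

def option_vol(T):
    # sigma(T)^2 = 252 * (V_L + (1 - e^(-gamma*T)) / (gamma*T) * (V0 - V_L))
    weight = (1 - math.exp(-gamma * T)) / (gamma * T)
    return math.sqrt(252 * (V_L + weight * (V0 - V_L)))

for T in (20, 40, 60):
    print(T, f"{option_vol(T):.2%}")
```

For T = 20 this gives roughly 22.6% annualized rather than 23.57%.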
Last edited: | 2019-08-24 21:18:26 | {"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.896438479423523, "perplexity": 1575.4208097359233}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2019-35/segments/1566027321696.96/warc/CC-MAIN-20190824194521-20190824220521-00340.warc.gz"} |
http://www.tug.org/twg/mfg/mail-html/1993-08/msg00116.html | # Re: More on subscripts and superscripts
• To: math-font-discuss@cogs.susx.ac.uk
• Subject: Re: More on subscripts and superscripts
• From: alanje@cogs.susx.ac.uk (Alan Jeffrey)
• Date: Wed, 11 Aug 93 17:55 BST
>(?) _ in math mode can only be subscript OR \_, but not both. If _ is
>made `active' via mathcode "8000, it will work all right inside a
>\write (or conversely, it will only break where it would have broken
>before). But generally, I agree with your next words:
I was worried about things like:
\catcode`\_=\active
blah_blah $blah_blah \hbox{blah_blah} blah_blah$ blah_blah
inside an \edef or \write. Just thinking about all the ways that
these sorts of macros can go wrong gives me the shivers!
>Agreed, except I would have put it the other way around: the decision
>to use syntax like {...\over...} for \over and friends was the design
>flaw that necessitated all the jury-rigging around \mathchoice and
>made it difficult to provide a current math style variable.
Absolutely. I still have no idea why DEK adopted that syntax for
\over and friends. Oh well...
Alan. | 2018-06-20 09:04:39 | {"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 0, "mathjax_asciimath": 1, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.895782470703125, "perplexity": 8177.105716589049}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 20, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2018-26/segments/1529267863516.21/warc/CC-MAIN-20180620085406-20180620105406-00051.warc.gz"} |
https://www.physics.utoronto.ca/research/quantum-condensed-matter-physics/tqm-seminars/a-sign-of-change-pinning-down-the-pairing-symmetry-of-the-iron-based-superconductors/ | # A sign of change: pinning down the pairing symmetry of the iron-based superconductors
Understanding the structure of the order parameter of the iron-based superconductors is the key to unveil their pairing mechanism. Although there has been much theoretical and experimental indications that the order parameter changes its sign in momentum space, direct evidence is still lacking. The difficulty stems from the fact that the order parameter is likely to be of s-wave symmetry, and therefore designing a phase sensitive experiment that would clearly reveal the sign change is non-trivial. Here, we examine a contact between a sign-changing superconductor and an ordinary, uniform-sign superconductor. If the the barrier between the two superconductors is not too high, the frustration of the Josephson coupling between different portions of the Fermi surface across the contact can lead to surprising consequences, such as time-reversal symmetry breaking at the interface and unusual energy-phase relations with multiple local minima. We propose this mechanism as a possible explanation for the half-integer flux quantum transitions in niobium–iron pnictide loops, which were discovered in a recent experiment (C-T. Chen et. al., 2010).
Toronto Quantum Matter Seminars
Event series Condensed Matter EventsToronto Quantum Matter Seminars | 2023-03-27 17:13:26 | {"extraction_info": {"found_math": false, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.8014905452728271, "perplexity": 887.9803561744878}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 20, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2023-14/segments/1679296948673.1/warc/CC-MAIN-20230327154814-20230327184814-00622.warc.gz"} |
https://math.stackexchange.com/questions/607853/proof-by-contradiction-that-if-a-set-a-contains-n-0-and-contains-k1-whene | # Proof by contradiction that if a set $A$ contains $n_0$ and contains $k+1$ whenever it contains $k$, then it contains all numbers $\ge n_0$
Prove that if a set $A$ of natural numbers contains $n_0$ and contains $k+1$ whenever it contains $k$, then $A$ contains all natural numbers $\geq n_0$.
I have attempted a proof by contradiction as follows:
Let $N_0=\{i:i \in \mathbb{N},i\geq n_0 \}$ and let $B=N_0\setminus A$ therefore by the well-ordering principle $B$ must have a least element. So let $m_0 \in B$ such that $m_0$ is the least element of B.
$m_0-1\not\in B \implies m_0-1\in A$ (this implication I'm not sure on) but if $m_0-1\in A \implies m_0\in A \therefore m_0\not\in B$ a contradiction. Therefore $B$ has no least element and therefore A contains all natural numbers $\geq n_0$.
The step I'm not sure on is $m_0-1\not\in B \implies m_0-1\in A$ but my reasoning is that since $m_0 \not=n_0$ then $m_0\gt n_0 \implies m_0\geq n_0+1$ therefore $m_0-1\geq n_0 \implies m_0-1\in N_0$ therefore $m_0-1\in A$ since it's not in $B$.
I'm not even sure it's a correct proof even if I was certain the above implication was true; it's from Spivak's Calculus and the answer in the back is much more concise but I wanted to have a go at trying to prove it a different way to practise writing and constructing proofs (I'm still quite new to it).
• You should just explicitly say that $m_0 = n_0$ cannot be, hence $m_0 > n_0$, before that step. Then $m_0-1 \in A$ is unproblematic. – Daniel Fischer Dec 15 '13 at 14:50
• To be precise, what is the definition of $\ge$? – Hagen von Eitzen Dec 15 '13 at 14:55
• from the book the question was from: $a>b$ if $a-b$ is in P where P is "the collection of positive numbers" and therefore $a\geq b$ if $a>b$ or $a=b$. – Jay Dec 15 '13 at 15:01
• @DanielFischer so if I made the changes you indicate the proof would be a valid one? Thanks! – Jay Dec 15 '13 at 17:11
• Yes. Well, you should also explicitly state that you assume $B \neq \varnothing$ at the beginning of the proof. Otherwise, you can't invoke the well-ordering principle to deduce the existence of a least element. You know $m_0 \geqslant n_0$ since $m_0 \in B\subset N_0$, so you have $m_0-1 \notin B$. But you need something that guarantees that $m_0-1 \in N_0$ to deduce $m_0-1 \in A$, the observation $m_0 > n_0$ gives you that. And I would rephrase the last sentence, "Therefore $B$ must be empty". – Daniel Fischer Dec 15 '13 at 17:18
Let $N_0=\{i:i\in \mathbb{N}, i\geq n_0\}$ and let $B=N_0 \setminus A$ and given $B\neq \emptyset$ we can invoke the well ordering principle. Let the least element of B be defined such that: $m_0\in B$ and $m_0$ is the least element of $B$.
Since $m_0 \neq n_0$ this implies that $m_0 \gt n_0$ we can then deduce that $m_0-1 \not\in B \implies m_0-1\in A$ but by the given condition that $k+1\in A$ if $k\in A$ then $m_0\in A \therefore m_0\not\in B$ which is a contradiction. Therefore $B$ must be empty. | 2019-06-18 15:17:18 | {"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.9256209135055542, "perplexity": 118.1401055532575}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2019-26/segments/1560627998755.95/warc/CC-MAIN-20190618143417-20190618165417-00107.warc.gz"} |
https://socratic.org/questions/how-does-acceleration-affect-tension | # How does acceleration affect tension?
May 29, 2014
In a typical case of two objects, one pulling another with a rigid link in-between, the higher acceleration of the first results in a higher tension in a link.
Here is why.
Assume the mass of the first (pulling) object is ${M}_{1}$, the mass of the second (pulled) object is ${M}_{2}$, acceleration of a system of two objects is $a$ (the same for both objects since the link between them is rigid), the force moving the first object forward is $F$ and the tension force in a link between two objects is $T$.
The force of tension acts on the first object against its movement, it slows this object down. The force of tension acts on the second object towards the direction of movement since this is the only force that causes its moving forward.
These two forces, one acting on the pulling object and another acting on the pulled one, have opposite direction and the same absolute value. Let's choose the direction of the movement as the positive. Then the tension force acting on the first object is negative ($- T$) and the tension force acting on the second object is positive ($T$).
Force $F$, moving the first object forward is positive since it's directed towards the direction of movement.
The combination of forces acting on the first body is positive $F$ and negative $- T$. The resulting force equals, therefore, $F - T$.
The second Newton's law gives us:
$F - T = {M}_{1} \cdot a$
The only force acting on the second object is $T$, so the second Newton's law gives:
$T = {M}_{2} \cdot a$
Let's solve the system of these two equations for $a$ and $T$, assuming that masses ${M}_{1}$ and ${M}_{2}$ as well as the main force $F$ moving the system forward are known.
The solution is (skipping the trivial manipulations):
$a = \frac{F}{{M}_{1} + {M}_{2}}$

$T = \frac{F \cdot {M}_{2}}{{M}_{1} + {M}_{2}}$
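The two formulas are easy to sanity-check numerically. A short Python sketch, with made-up values for the masses and the force:

```python
# Sanity check of a = F/(M1+M2) and T = F*M2/(M1+M2) with made-up values
M1, M2, F = 3.0, 2.0, 10.0  # kg, kg, N (illustrative only)

a = F / (M1 + M2)          # common acceleration of the linked objects
T = F * M2 / (M1 + M2)     # tension in the rigid link

# Newton's second law holds for each object separately:
assert abs((F - T) - M1 * a) < 1e-12   # pulling object: F - T = M1*a
assert abs(T - M2 * a) < 1e-12         # pulled object:  T = M2*a
print(a, T)  # 2.0 4.0
```

Doubling F doubles both a and T, which is the proportionality claimed below.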
As we see, acceleration is proportional to the force $F$ as well as the tension. Increased acceleration may only be contributed to increased force $F$, which causes proportional increase of tension $T$. Therefore, the higher acceleration - the higher the tension. | 2020-01-25 07:37:33 | {"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 24, "mathjax_inline_tex": 1, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.7502428889274597, "perplexity": 295.34818132746005}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2020-05/segments/1579251671078.88/warc/CC-MAIN-20200125071430-20200125100430-00419.warc.gz"} |
https://web2.0calc.com/questions/difference-quotient-fraction | +0
# Difference Quotient (fraction)
So I understand the whole concept of getting the same denominator using the rules. But the problem I have is plugging these values into the formula itself. The variable with the top numerator is giving me trouble. I have no problem with plugging in any other functions whatsoever, besides these fractions. If someone could show me how they plugged in the values then I can do the rest.
Veteran Apr 14, 2017
#1
I think I can do this one.
$$k(\textcolor{lime}{x})=\frac{10\textcolor{lime}{x}}{\textcolor{lime}{x}-2}$$
So..
$$k(\textcolor{lime}{x+h}) = \frac{10(\textcolor{lime}{x+h})}{(\textcolor{lime}{x+h})-2} \\~\\ k(\textcolor{lime}{x}) = \frac{10\textcolor{lime}{x}}{\textcolor{lime}{x}-2} \\~\\ h = h$$
Now just sub these in.
$$\large{\frac{k(x+h)-k(x)}{h}=\frac{\frac{10(x+h)}{(x+h)-2}-\frac{10x}{x-2}}{h}}$$
Then simplify, which seems like it might be kind of hard!
*edit* This is what I get for the simplifying part:
$$=(\frac{1}{h})(\frac{10(x+h)}{x+h-2}-\frac{10x}{x-2}) \\~\\ =(\frac{1}{h})(\frac{10(x+h)\cdot(x-2)}{(x+h-2)\cdot(x-2)}-\frac{10x\cdot(x+h-2)}{(x-2)\cdot(x+h-2)}) \\~\\ =(\frac{1}{h})(\frac{10(x+h)(x-2)-10x(x+h-2)}{(x+h-2)(x-2)}) \\~\\ =(\frac{1}{h})(\frac{(10x^2-20x+10hx-20h)-(10x^2+10xh-20x)}{x^2-2x+hx-2h-2x+4}) \\~\\ =(\frac{1}{h})(\frac{10x^2-20x+10hx-20h-10x^2-10xh+20x}{x^2-4x+hx-2h+4}) \\~\\ =(\frac{1}{h})(\frac{-20h}{x^2-4x+hx-2h+4}) \\~\\ =-\frac{20}{x^2-4x+hx-2h+4}$$
Just in case that helps any! :)
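As a numeric spot-check of that simplification (Python, with arbitrary sample values for x and h):

```python
# Spot-check the simplified difference quotient for k(x) = 10x/(x - 2):
# (k(x+h) - k(x)) / h  should equal  -20 / ((x - 2) * (x + h - 2))
def k(x):
    return 10 * x / (x - 2)

x, h = 3.0, 0.5  # arbitrary sample point and step
direct = (k(x + h) - k(x)) / h
simplified = -20 / ((x - 2) * (x + h - 2))
print(direct, simplified)
```

Both prints agree (about -13.33 here), and letting h go to 0 recovers the derivative -20/(x-2)^2.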
hectictar Apr 14, 2017
#2
awesome, this helped me find the problem I had from the start of it.
Veteran Apr 14, 2017 | 2018-11-17 11:52:38 | {"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 1, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.9718565344810486, "perplexity": 811.0060923611397}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 5, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2018-47/segments/1542039743353.54/warc/CC-MAIN-20181117102757-20181117124757-00541.warc.gz"} |
http://math.stackexchange.com/questions/77979/is-lebesgues-dominated-convergence-theorem-a-logical-equivalence | # Is Lebesgue's Dominated Convergence Theorem a logical equivalence?
Let $(X,\mathcal A,\mu)$ be a measure space and $\langle f_n\rangle_{n\in\mathbb N}$ a sequence of integrable functions that converges $\mu$-almost everywhere. By Lebesgue's Dominated Convergence Theorem, if $\langle|f_n|\rangle_{n\in\mathbb N}$ is bounded above by an integrable function, $\lim_{n\to\infty}f_n$ is integrable and $$\int\!\lim_{n\to\infty}f_n\;\mathrm d\mu=\lim_{n\to\infty}\int\!f_n\;\mathrm d\mu\quad.$$
LDCT cannot be used if we do not assume the said boundedness. For instance, if $\mu$ is the Lebesgue measure and $f_n=nI_{[0,1/n]}$, we get $0=1$.
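A quick numerical illustration of that counterexample (Python; a midpoint Riemann sum stands in for the Lebesgue integral):

```python
# f_n = n on [0, 1/n] and 0 elsewhere: f_n -> 0 almost everywhere,
# yet the integral of every f_n over [0, 1] is n * (1/n) = 1.
def f(n, x):
    return n if 0 <= x <= 1 / n else 0

def integral(n, grid=100_000):
    # midpoint Riemann sum over [0, 1]
    dx = 1.0 / grid
    return sum(f(n, (i + 0.5) * dx) * dx for i in range(grid))

for n in (1, 10, 100):
    print(n, round(integral(n), 6))  # stays at 1, even though f_n(x) -> 0
```

The integrals never decay, so swapping limit and integral would give 0 = 1, and no integrable dominating function exists.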
But is the equality false whenever we drop this hypothesis? In other words, can "if" be replaced by "if and only if" on the theorem statement? Or is there an integrable non-dominated $\mu$-a. e. convergent sequence (and a measure) for which the equality occurs?
## 2 Answers
The converse does not hold. Take $X = [0,\infty)$ with Lebesgue measure. Let $a_n$ be a sequence of positive numbers such that $a_n \to 0$ but $\sum_n a_n = \infty$. (For instance, $a_n = 1/n$ would work.) Let $b_n = \sum_{k=1}^n a_k$ (with $b_0 = 0$). Then let $f_n = 1_{[b_{n-1}, b_{n})}$. Then $f_n \to 0$ a.e. and $\int f_n dm = a_n \to 0$ also. But if $f$ is a dominating function, we must have $f \ge \sup_n f_n = 1$ which is not integrable.
For a necessary and sufficient condition for convergence, look up the Vitali convergence theorem.
There is an interesting version which generalizes the Lebesgue dominated convergence theorem:
Suppose that $(f_n)$ converges almost everywhere to $f$. Furthermore, suppose there exists a sequence $(g_n)$ of positive, integrable functions, which converges in $L^1$ to $g$ such that $|f_n| \leq g_n$ almost everywhere.
Then $$\lim_{n \to \infty} \int f_n = \int f$$
Proof: Apply Fatou's lemma to the sequence $h_n = |g_n-g|+|g|+|f|-|f_n-f|\geq 0$.
- | 2015-10-10 12:37:52 | {"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 1, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.9932271838188171, "perplexity": 75.08003134331624}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": false}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2015-40/segments/1443737952309.80/warc/CC-MAIN-20151001221912-00114-ip-10-137-6-227.ec2.internal.warc.gz"} |
http://physics.stackexchange.com/questions/24686/mathematical-question-on-collisions | # Mathematical question on Collisions [closed]
A 2.5kg ball travelling with a speed of 7.5m/s makes an elastic collision with another ball of 1.5kg and travelling at the speed of 3.0m/s in the same direction.
-- What are the velocities of the balls immediately after collision?
## closed as too localized by Qmechanic♦, Mark Eichenlaub, David Z♦May 1 '12 at 18:13
In an elastic collision the guiding principles are the conservation of momentum $$p_{1,before} + p_{2,before} = p_{1,after} + p_{2,after}$$ and similarly the conservation of total kinetic energy. With both you can solve your problem easily.
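Solving those two conservation equations gives the standard closed-form 1-D elastic-collision result; a sketch for the numbers in the question (the specific values here are the answer to this exercise, not something stated in the thread):

```python
# Standard closed-form solution of a 1-D elastic collision,
# derived from conservation of momentum and of kinetic energy.
m1, v1 = 2.5, 7.5   # kg, m/s
m2, v2 = 1.5, 3.0   # kg, m/s (same direction)

v1f = ((m1 - m2) * v1 + 2 * m2 * v2) / (m1 + m2)
v2f = ((m2 - m1) * v2 + 2 * m1 * v1) / (m1 + m2)

# Both conservation laws hold after the collision:
assert abs(m1 * v1 + m2 * v2 - (m1 * v1f + m2 * v2f)) < 1e-9
assert abs(m1 * v1**2 + m2 * v2**2 - (m1 * v1f**2 + m2 * v2f**2)) < 1e-9
print(v1f, v2f)  # 4.125 8.625
```

So the heavier ball slows to 4.125 m/s and the lighter one speeds up to 8.625 m/s.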
(I did not downvote but your question shows zero own effort which is not appreciated here, as the complete answer is contained in the first google hit to 'elastic collision'.)
- | 2015-07-31 11:43:16 | {"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 1, "mathjax_asciimath": 1, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.6818481683731079, "perplexity": 781.0937532763343}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 5, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2015-32/segments/1438042988250.59/warc/CC-MAIN-20150728002308-00050-ip-10-236-191-2.ec2.internal.warc.gz"} |
https://www.esaral.com/q/the-pressure-of-the-gas-in-a-constant-volume-gas-thermometer-86243/ | The pressure of the gas in a constant volume gas thermometer
Question:
The pressure of the gas in a constant volume gas thermometer is $70 \mathrm{kPa}$ at the ice point. Find the pressure at the steam point.
Solution: | 2022-05-16 12:35:43 | {"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.8936790227890015, "perplexity": 287.41760891772384}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2022-21/segments/1652662510117.12/warc/CC-MAIN-20220516104933-20220516134933-00754.warc.gz"} |
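A sketch of the missing solution: at constant volume the pressure of an ideal gas is proportional to its absolute temperature (Gay-Lussac's law), so the steam-point pressure follows from the ratio of the fixed-point temperatures. The fixed points assumed below are 273.15 K (ice) and 373.15 K (steam):

```python
# At constant volume, P/T is constant for an ideal gas, so
# P_steam = P_ice * T_steam / T_ice. Fixed-point temperatures are assumptions.
def pressure_at_constant_volume(p1, t1, t2):
    """Pressure at absolute temperature t2, given pressure p1 at t1."""
    return p1 * t2 / t1

p_steam = pressure_at_constant_volume(70.0, 273.15, 373.15)  # kPa
assert 95.6 < p_steam < 95.7                                 # roughly 96 kPa
```

This gives about 95.6 kPa, i.e. roughly 96 kPa at the steam point.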
http://hetneb.nl/lja75z4/41981a-given-ad-%3D-bc-prove-a-b | Given: BC ∥ AD and BC = AD. Prove: ΔABC ≅ ΔCDA. Two-column proof sketch: BC ∥ AD (given); BC = AD (given); ∠BCA = ∠DAC (alternate interior angles, since BC ∥ AD); CA = AC (common side); therefore ΔABC ≅ ΔCDA by SAS, and the remaining corresponding parts are equal by CPCTE.
Q: Situation #6 Important Solutions 3114. Show that CD bisects AB. Solution: Here, ∠B < ∠A ⇒ AO < BO …..(i) and ∠C < ∠D ⇒ OD < CO …..(ii) [∴ side opposite to greater angle is longer] Adding (i) and (ii), we obtain AO + OD < BO + CO AD < BC. | 2022-08-15 18:41:36 | {"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.6223205327987671, "perplexity": 3903.162924874881}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2022-33/segments/1659882572198.93/warc/CC-MAIN-20220815175725-20220815205725-00197.warc.gz"} |
https://socratic.org/questions/the-position-of-an-object-moving-along-a-line-is-given-by-p-t-2t-sin-pi-6t-what--3 | # The position of an object moving along a line is given by p(t) = 2t - sin(( pi )/6t) . What is the speed of the object at t = 16 ?
Nov 9, 2016
The speed is $= 2 + \frac{\pi}{12}$
#### Explanation:
If the position is $p \left(t\right) = 2 t - \sin \left(\frac{\pi}{6} t\right)$
Then the velocity is given by the derivative of $p \left(t\right)$
$\therefore v \left(t\right) = 2 - \frac{\pi}{6} \cos \left(\frac{\pi}{6} t\right)$
When $t = 16$
$v \left(16\right) = 2 - \frac{\pi}{6} \cos \left(\frac{\pi}{6} \cdot 16\right)$
$= 2 - \frac{\pi}{6} \cos \left(\frac{8}{3} \pi\right) = 2 - \frac{\pi}{6} \cdot \left(- \frac{1}{2}\right)$
$= 2 + \frac{\pi}{12}$ | 2019-10-18 16:32:16 | {"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 8, "mathjax_inline_tex": 1, "mathjax_display_tex": 0, "mathjax_asciimath": 1, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.5647354125976562, "perplexity": 553.8311740233902}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2019-43/segments/1570986684226.55/warc/CC-MAIN-20191018154409-20191018181909-00282.warc.gz"} |
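The derivative and its value at $t = 16$ above can be double-checked numerically with a central difference:

```python
# Numerical check: p(t) = 2t - sin(pi*t/6), so v(16) should equal 2 + pi/12.
import math

def p(t):
    return 2 * t - math.sin(math.pi / 6 * t)

def v(t, h=1e-6):
    return (p(t + h) - p(t - h)) / (2 * h)   # central-difference derivative

assert abs(v(16) - (2 + math.pi / 12)) < 1e-6
```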
http://devmaster.net/posts/16972/a-little-c-curiosity | 0
101 Sep 22, 2009 at 16:19
This has no practical consequences for me, it’s just something that I’m curious about.
If I were to create a function
bool foo(bool a, bool b) { return a && b; }
And use it…
foo(false, function_with_side_effects());
Would the compiler be allowed to inline it such that the function with side effects isn’t called? In other words, could it inline it to this:
false && function_with_side_effects();
Again, this isn’t causing me any problems; it’s just a curiosity that recently crossed my mind.
5 Replies
0
167 Sep 22, 2009 at 16:21
No; when calling a function all arguments are always completely evaluated.
0
101 Sep 22, 2009 at 16:55
Well, yes. I suppose the implication is that this rule applies to functions that are inlined as well?
What about overloads of operator&&? Are they all special functions, or are they only special for primitive types?
0
167 Sep 22, 2009 at 17:00
Yes; inlining a function doesn’t change the fact that all arguments are evaluated. As for overloads of operator&&, I’m 95% sure they behave just like regular functions, and the short-circuiting behavior only applies to the built-in operator&&.
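The evaluation rule is easy to demonstrate empirically. The sketch below uses Python rather than C++, but the behavior being illustrated is analogous: function arguments are fully evaluated before the call, while the built-in logical operator short-circuits.

```python
# Wrapping `and` in a function loses short-circuiting, because arguments
# are evaluated before the call; the built-in `and` still short-circuits.
calls = []

def side_effect():
    calls.append("called")
    return True

def foo(a, b):
    return a and b

foo(False, side_effect())          # the argument runs even though a is False
assert calls == ["called"]

calls.clear()
result = False and side_effect()   # built-in `and` never evaluates the right side
assert calls == [] and result is False
```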
0
101 Sep 22, 2009 at 17:40
Yay for consistency.
0
101 Sep 22, 2009 at 20:05
@Reedbeta
Yes; inlining a function doesn’t change the fact that all arguments are evaluated. As for overloads of operator&&, I’m 95% sure they behave just like regular functions, and the short-circuiting behavior only applies to the built-in operator&&.
Correctamundo | 2014-04-24 04:56:30 | {"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 1, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.6145395040512085, "perplexity": 1907.8119365527173}, "config": {"markdown_headings": false, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2014-15/segments/1398223205137.4/warc/CC-MAIN-20140423032005-00410-ip-10-147-4-33.ec2.internal.warc.gz"} |
https://www.aimsciences.org/article/doi/10.3934/dcds.2016048 | # American Institute of Mathematical Sciences
October 2016, 36(10): 5657-5679. doi: 10.3934/dcds.2016048
## On parameter dependence of exponential stability of equilibrium solutions in differential equations with a single constant delay
1 Department of Mathematics, Kyoto University, Kitashirakawa Oiwake-cho, Sakyo-ku, Kyoto 606-8502, Japan
Received September 2015 Revised February 2016 Published July 2016
A transcendental equation $\lambda + \alpha - \beta\mathrm{e}^{-\lambda\tau} = 0$ with complex coefficients is investigated. This equation can be obtained from the characteristic equation of a linear differential equation with a single constant delay. It is known that the set of roots of this equation can be expressed by the Lambert W function. We analyze the condition on parameters for which all the roots have negative real parts by using the "graph-like" expression of the W function. We apply the obtained results to the stabilization of an unstable equilibrium solution by the delayed feedback control and the stability condition of the synchronous state in oscillator networks.
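As a rough illustration of the Lambert W connection: substituting $\mu = (\lambda + \alpha)\tau$ turns the characteristic equation into $\mu\,\mathrm{e}^{\mu} = \beta\tau\,\mathrm{e}^{\alpha\tau}$, so roots can be written as $\lambda = W(\beta\tau\,\mathrm{e}^{\alpha\tau})/\tau - \alpha$. The sketch below uses made-up real parameters and a hand-rolled Newton iteration for the principal branch $W_0$ (valid for nonnegative arguments), and checks one such real root:

```python
# Illustrative only: alpha, beta, tau are made-up real parameters.
import math

def lambert_w0(x, tol=1e-12):
    """Principal branch W0(x) for x >= 0, by Newton iteration."""
    w = math.log1p(x)                     # decent starting guess for x >= 0
    for _ in range(100):
        e = math.exp(w)
        step = (w * e - x) / (e * (w + 1))
        w -= step
        if abs(step) < tol:
            break
    return w

alpha, beta, tau = 1.0, 0.5, 2.0
lam = lambert_w0(beta * tau * math.exp(alpha * tau)) / tau - alpha

# lam really is a root of  lambda + alpha - beta * exp(-lambda * tau) = 0:
assert abs(lam + alpha - beta * math.exp(-lam * tau)) < 1e-9
```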
Citation: Junya Nishiguchi. On parameter dependence of exponential stability of equilibrium solutions in differential equations with a single constant delay. Discrete & Continuous Dynamical Systems - A, 2016, 36 (10) : 5657-5679. doi: 10.3934/dcds.2016048
2019 Impact Factor: 1.338 | 2020-12-02 19:22:20 | {"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 0, "mathjax_asciimath": 1, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.7403289079666138, "perplexity": 9760.382802061458}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2020-50/segments/1606141715252.96/warc/CC-MAIN-20201202175113-20201202205113-00288.warc.gz"} |
https://blender.stackexchange.com/questions/165892/how-can-i-solve-a-far-away-stacking-freestyle-line | # How can I solve a far away stacking Freestyle line
I'm a beginner using Blender for comics.
I'm trying to make the lines on faraway objects thinner, but the result still looks like this. Is there a way, or some easy method, to keep stacked lines from rendering as solid black like this?
I'm not sure if you're already doing this, apologies if so.
You can assign a Distance from Camera modifier to the line thickness in the View Layer Properties tab > Line Style panel.
Before:
After:
• You might have to use separate Line Sets for other features whose detail you want to preserve in the distance.
• You can assign the same modifier to Alpha, making the lines more transparent in the distance - a possible alternative.
• As you can see, depending on the resolution of your rendered image, at some thickness, sampling/antialiasing takes over. This might be before your set 0 point; you'll have to tweak the curve / base thickness accordingly.
You can just exclude some edges from the Freestyle line set.
Check Edge Mark and the little × to prevent marked edges from being drawn
https://cn.maplesoft.com/support/help/maple/view.aspx?path=Task/ResultantOfPolynomials&L=C | Resultant - Maple Help
Resultant of Two Polynomials
Description Calculate the resultant of two specified polynomials with respect to an indeterminate.
Enter the first polynomial.
>
$3x - 5$ (1)
Enter the second polynomial.
>
$2x + y$ (2)
Calculate the resultant with respect to a specified indeterminate.
>
$10 + 3y$ (3)
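The result above can be cross-checked by hand: for two linear polynomials $a_1 x + a_0$ and $b_1 x + b_0$, the Sylvester matrix is $2\times 2$, so $\operatorname{Res}_x = a_1 b_0 - a_0 b_1$. A quick numerical sketch, treating $y$ as a parameter:

```python
# a1*x + a0 and b1*x + b0 have Sylvester matrix [[a1, a0], [b1, b0]],
# so the resultant with respect to x is the determinant a1*b0 - a0*b1.
def resultant_linear(a1, a0, b1, b0):
    return a1 * b0 - a0 * b1

# For 3x - 5 and 2x + y this is 3*y - (-5)*2 = 3*y + 10, matching (3) above.
for y in (-2, 0, 1, 7):
    assert resultant_linear(3, -5, 2, y) == 3 * y + 10
```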
>
Commands Used | 2023-01-28 17:15:41 | {"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 4, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.9579288363456726, "perplexity": 2690.4062495585517}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2023-06/segments/1674764499646.23/warc/CC-MAIN-20230128153513-20230128183513-00560.warc.gz"} |
https://tutorialspoint.com/converting_decimals_to_fractions/order_of_operations_with_fractions_problem_type1.htm | Order of operations with fractions: Problem type 1
We combine the order operations (PEMDAS) with adding, subtracting, multiplying, and dividing fractions.
Rules for Order of Operations with Fractions
• First, we simplify anything inside parentheses, if present in the expression.
• Next, we simplify any exponents present in the expression.
• We do multiplication and division before addition and subtraction.
• We do multiplication and division based on order of appearance from left to right in the problem.
• Next, we do addition and subtraction based on order of appearance from left to right in the problem.
Consider the following problems involving PEMDAS with adding, subtracting, multiplying, and dividing fractions.
Evaluate $\frac{4}{5}[17-32\left ( \frac{1}{4} \right )^{2}]$
Solution
Step 1:
As per the PEMDAS rule of operations on fractions we simplify the brackets or the parentheses first.
Step 2:
Within the brackets, first we simplify the exponent: $\left ( \frac{1}{4} \right )^{2} = \frac{1}{16}$
Step 3:
Within the brackets, next we multiply as follows
$17-32\left ( \frac{1}{4} \right )^2 = 17-32 \times \frac{1}{16} = 17 - 2$
Step 4:
Within the brackets, next we subtract as follows
$17 - 2 = 15$. So, $[17-32\left ( \frac{1}{4} \right )^2] = 15$
Step 5:
$\frac{4}{5}[17-32\left ( \frac{1}{4} \right )^2] = \frac{4}{5}[15] = \frac{4}{5} \times 15$
So, simplifying we get
$\frac{4}{5} \times 15 = 4 \times 3 = 12$
Step 6:
So, finally $\frac{4}{5}[17-32\left ( \frac{1}{4} \right )^2] = 12$
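The worked example above can be verified with exact rational arithmetic, for instance using Python's fractions module:

```python
# Check of the worked example: (4/5) * [17 - 32*(1/4)^2] should equal 12.
from fractions import Fraction

result = Fraction(4, 5) * (17 - 32 * Fraction(1, 4) ** 2)
assert result == 12
```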
Evaluate $\left ( \frac{36}{7} - \frac{11}{7}\right ) \times \frac{8}{5} - \frac{9}{7}$
Solution
Step 1:
As per the PEMDAS rule of operations on fractions we simplify the brackets or the parentheses first.
Within the brackets, first we subtract the fractions: $\frac{36}{7} - \frac{11}{7} = \frac{25}{7}$
Step 2:
Next, we multiply as follows
$\left ( \frac{36}{7} - \frac{11}{7}\right ) \times \frac{8}{5} - \frac{9}{7} = \frac{25}{7} \times \frac{8}{5} - \frac{9}{7} = \frac{40}{7} - \frac{9}{7}$
Step 3:
We then subtract as follows
$\frac{40}{7} - \frac{9}{7} = \frac{(40-9)}{7} = \frac{31}{7}$
Step 4:
So, finally $\left ( \frac{36}{7} - \frac{11}{7} \right ) \times \frac{8}{5} - \frac{9}{7} = \frac{31}{7} = 4\frac{3}{7}$ | 2023-02-06 19:31:08 | {"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.9063975811004639, "perplexity": 2424.992598070696}, "config": {"markdown_headings": false, "markdown_code": false, "boilerplate_config": {"ratio_threshold": 0.3, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2023-06/segments/1674764500357.3/warc/CC-MAIN-20230206181343-20230206211343-00051.warc.gz"} |
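The second worked example can likewise be verified with exact rational arithmetic:

```python
# Check of the second example: (36/7 - 11/7) * 8/5 - 9/7 should equal 31/7.
from fractions import Fraction

result = (Fraction(36, 7) - Fraction(11, 7)) * Fraction(8, 5) - Fraction(9, 7)
assert result == Fraction(31, 7)
```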
https://courses.lumenlearning.com/cheminter/chapter/chemical-equilibrium/ | Chemical Equilibrium
Learning Objectives
• Define chemical equilibrium.
• List conditions for equilibrium.
Pull hard!
A tug of war involves two teams at the ends of a rope. The goal is to pull the other team over a line in the middle. At first, there is a great deal of tension on the rope, but no apparent movement. A bystander might think that there is nothing happening. In reality, there is a great deal of tension on the rope as the two teams pull in opposite directions at the same time.
Chemical Equilibrium
Hydrogen and iodine gases react to form hydrogen iodide according to the following reaction:
$\begin{aligned} \text{H}_2(g)+\text{I}_2(g) &\rightleftarrows 2\text{HI}(g) \\ \text{Forward reaction:} &\quad \text{H}_2(g)+\text{I}_2(g) \rightarrow 2\text{HI}(g) \\ \text{Reverse reaction:} &\quad 2\text{HI}(g) \rightarrow \text{H}_2(g)+\text{I}_2(g) \end{aligned}$
Initially, only the forward reaction occurs because no HI is present. As soon as some HI has formed, it begins to decompose back into H2 and I2. Gradually, the rate of the forward reaction decreases while the rate of the reverse reaction increases. Eventually the rate of combination of H2 and I2 to produce HI becomes equal to the rate of decomposition of HI into H2 and I2. When the rates of the forward and reverse reactions have become equal to one another, the reaction has achieved a state of balance. Chemical equilibrium is the state of a system in which the rate of the forward reaction is equal to the rate of the reverse reaction.
Figure 1. Equilibrium in reaction: H2 (g) + I2 (g) → 2HI(g).
Figure 2. Equilibrium between reactants and products.
Chemical equilibrium can be attained whether the reaction begins with all reactants and no products, all products and no reactants, or some of both. Figure 2 shows changes in concentration of H2, I2, and HI for two different reactions. In the reaction depicted by the graph on the left (A), the reaction begins with only H2 and I2 present. There is no HI initially. As the reaction proceeds towards equilibrium, the concentrations of the H2 and I2 gradually decrease, while the concentration of the HI gradually increases. When the curve levels out and the concentrations all become constant, equilibrium has been reached. At equilibrium, concentrations of all substances are constant. In reaction B, the process begins with only HI and no H2 or I2. In this case, the concentration of HI gradually decreases while the concentrations of H2 and I2 gradually increase until equilibrium is again reached. Notice that in both cases, the relative position of equilibrium is the same, as shown by the relative concentrations of reactants and products. The concentration of HI at equilibrium is significantly higher than the concentrations of H2 and I2. This is true whether the reaction began with all reactants or all products. The position of equilibrium is a property of the particular reversible reaction and does not depend upon how equilibrium was achieved.
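The leveling-off behavior described above can be sketched with a toy rate simulation. The rate constants, rate laws, and starting concentrations below are made up for illustration; the point is that the forward and reverse rates converge, at which point concentrations stop changing:

```python
# Toy simulation of H2 + I2 <-> 2 HI approaching equilibrium (reaction A:
# start with only reactants). All numbers are illustrative assumptions.
kf, kr = 1.0, 0.2            # forward / reverse rate constants (arbitrary)
h2, i2, hi = 1.0, 1.0, 0.0   # start with only reactants
dt = 0.001

for _ in range(20000):       # integrate to t = 20 with simple Euler steps
    net = (kf * h2 * i2 - kr * hi * hi) * dt
    h2 -= net
    i2 -= net
    hi += 2 * net

# At equilibrium the forward and reverse rates are equal:
assert abs(kf * h2 * i2 - kr * hi * hi) < 1e-6
```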
Conditions for Equilibrium and Types of Equilibrium
It may be tempting to think that once equilibrium has been reached, the reaction stops. In fact, chemical equilibrium is a dynamic process. The forward and reverse reactions continue to occur even after equilibrium has been reached. However, because the rates of the reactions are the same, there is no change in the relative concentrations of reactants and products for a reaction that is at equilibrium. The conditions and properties of a system at equilibrium are summarized below.
1. The system must be closed, meaning no substances can enter or leave the system.
2. Equilibrium is a dynamic process. Even though we cannot observe them directly, both the forward and reverse reactions continue to take place.
3. The rates of the forward and reverse reactions must be equal.
4. The amounts of reactants and products do not have to be equal. However, after equilibrium is attained, the amounts of reactants and products will be constant.
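Conditions 3 and 4 can be verified with a little algebra. The sketch below assumes the textbook-style value Kc = 54 for H2 + I2 ⇌ 2HI and initial concentrations of 1.00 M each, solves an ICE-table expression for the extent of reaction, and then shows that the forward and reverse rates are equal at equilibrium even though the amounts of reactants and products are not.

```python
import math

# Assumed equilibrium constant for H2 + I2 <=> 2HI (roughly the value
# near 700 K); starting concentrations: [H2] = [I2] = 1.00 M, [HI] = 0.
Kc = 54.0
# ICE-table algebra: (2x)^2 / (1 - x)^2 = Kc  =>  2x = sqrt(Kc) * (1 - x)
x = math.sqrt(Kc) / (2.0 + math.sqrt(Kc))  # extent of reaction at equilibrium
h2 = i2 = 1.0 - x
hi = 2.0 * x

kf, kr = Kc, 1.0  # any pair of rate constants with kf/kr = Kc behaves alike
print(f"forward rate = {kf * h2 * i2:.3f}, reverse rate = {kr * hi**2:.3f}")
print(f"[H2] = {h2:.3f} M, [HI] = {hi:.3f} M")  # equal rates, unequal amounts
```

The rates come out identical, while [HI] is several times larger than [H2]: equal rates at equilibrium do not imply equal amounts.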
The description of equilibrium in this concept refers primarily to equilibrium between reactants and products in a chemical reaction. Other types of equilibrium include phase equilibrium and solution equilibrium. A phase equilibrium occurs when a substance is in equilibrium between two states. For example, a stoppered flask of water attains equilibrium when the rate of evaporation is equal to the rate of condensation. A solution equilibrium occurs when a solid substance is in a saturated solution. At this point, the rate of dissolution is equal to the rate of recrystallization. Although these are all different types of transformations, most of the rules regarding equilibrium apply to any situation in which a process occurs reversibly.
Summary
• The concept of chemical equilibrium is described.
• Conditions for chemical equilibrium are listed.
Practice
Read the material at ChemGuide.co.uk and answer the following questions:
1. What is a closed system?
2. What is a dynamic system?
3. What happens if you change the relative rates of the forward and back reactions?
Review
1. What is chemical equilibrium?
2. In the reaction illustrated in the text, does the equilibrium concentration of HI equal the equilibrium concentrations of H2 and I2?
3. Does the position at equilibrium depend upon how the equilibrium was reached?
Glossary
• chemical equilibrium: The state of a system in which the rate of the forward reaction is equal to the rate of the reverse reaction.