In statistics, a sampling distribution is the probability distribution, under repeated sampling of the population, of a given statistic (a numerical quantity calculated from the data values in a sample). The population is the entire group under study, and the sample is the specific group of individuals from which data are actually collected; in simple random sampling every member of the population has an equal chance of being selected (in a population of 1000 members, each member has a 1/1000 chance of being part of the sample). A sampling distribution can be formed for any statistic, such as a mean, a median, a test statistic, or a correlation coefficient.

Example 1: the population of verbal SAT scores of all college-bound students has mean μ = 500. Randomly choose a sample of a given size (say n = 100) and take the mean of that random sample; you might get a mean of 505. The sampling distribution of the mean gives the probability of obtaining each possible value of the sample mean, and it depends very much on the sample size.

Example 2: samples are selected from the population {1, 2, 3, 3, 3, 10}. Under sampling with replacement, this population has a mean of 3.66667 and a standard deviation of 2.92499. For such a small population (as in the classic numbered pool-ball example), both the population distribution and the sampling distribution are discrete.

For a sample of size n drawn from a population with mean μ and standard deviation σ, the sampling distribution of the sample mean $\bar X$ has

$$\mu_{\bar X} = \mu \qquad \text{and} \qquad \sigma_{\bar X} = \frac{\sigma}{\sqrt{n}},$$

so $E(\bar X) = \mu$ and the spread of the sample mean decreases as the sample size increases. These results describe the distribution exactly when the population is normal; when it is not, the central limit theorem guarantees that the distribution of sample means still tends toward the normal distribution provided the sample size is sufficiently large (a common rule of thumb is n ≥ 30). When σ must be estimated from the sample, the t-distribution is used to approximate the sampling distribution of the mean, and it too approaches the normal distribution as the sample size grows. Sums of squared deviations from samples lead to the chi-square distribution, while for attribute (count) data the binomial distribution comes into play.

Worked example: a researcher studying the weights of the inhabitants of a particular town, assumed normally distributed with a standard deviation of 5 kg, records five observations: 70, 75, 85, 80, and 65 kg. The sample mean is (70 + 75 + 85 + 80 + 65)/5 = 75 kg, and the standard deviation of the sample mean is $5/\sqrt{5} = 5\sqrt{0.20} \approx 0.45 \times 5 \approx 2.25$ kg.
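As a quick check of these formulas, here is a minimal simulation sketch (not part of the original text; plain Python with the standard library is assumed). It repeatedly draws samples with replacement from the population {1, 2, 3, 3, 3, 10} introduced earlier and looks at the mean and spread of the resulting sample means:

```python
import random
from statistics import mean, pstdev

# Population from Example 2; sampling is done with replacement.
population = [1, 2, 3, 3, 3, 10]
mu = mean(population)       # 3.66667
sigma = pstdev(population)  # 2.92499 (population standard deviation)

random.seed(0)
n = 30        # size of each sample
reps = 20000  # number of repeated samples

# Draw many samples and record each sample mean: this gives an
# empirical picture of the sampling distribution of the mean.
sample_means = [mean(random.choices(population, k=n)) for _ in range(reps)]

# The distribution of sample means is centred at mu, with standard
# deviation (the standard error) close to sigma / sqrt(n) ≈ 0.534.
print(round(mean(sample_means), 2))
print(round(pstdev(sample_means), 2))
```

With n = 30 the simulated standard error comes out close to σ/√n ≈ 2.925/√30 ≈ 0.53, illustrating how the spread of the sample mean shrinks as the sample size grows.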
For a small finite population the sampling distribution can be worked out exactly. Take the population values 4, 5, 5, 7 (so N = 4) and draw samples of size n = 3 without replacement. The mean and standard deviation of the population are

$$\mu = \frac{\sum X}{N} = \frac{21}{4} = 5.25 \quad \text{and} \quad \sigma = \sqrt{\frac{\sum X^2}{N} - \left(\frac{\sum X}{N}\right)^2} = \sqrt{\frac{115}{4} - \left(\frac{21}{4}\right)^2} = 1.0897,$$

and, applying the finite population correction factor,

$$\frac{\sigma}{\sqrt{n}}\sqrt{\frac{N - n}{N - 1}} = \frac{1.0897}{\sqrt{3}}\sqrt{\frac{4 - 3}{4 - 1}} = 0.3632.$$

Forming the sampling distribution of the sample means and computing its mean and standard deviation directly verifies that $\mu_{\bar X} = \mu$ and $\sigma_{\bar X} = \frac{\sigma}{\sqrt{n}}\sqrt{\frac{N - n}{N - 1}}$.

Finally, note that not every sample is random: in judgmental or purposive sampling the sample is formed at the discretion of the researcher, and a population may be defined in terms of geographical location, age, income, or many other characteristics.
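The finite-population figures for the values 4, 5, 5, 7 with N = 4 and n = 3 can also be verified by brute force. As a sketch (assumed Python, standard library only; not part of the original text), enumerate all C(4, 3) = 4 samples of size 3 drawn without replacement:

```python
import math
from itertools import combinations
from statistics import mean, pstdev

population = [4, 5, 5, 7]
N, n = len(population), 3

mu = mean(population)       # 5.25
sigma = pstdev(population)  # 1.0897

# Enumerate every sample of size n drawn without replacement
# (combinations of positions, so the repeated 5 is counted correctly).
sample_means = [mean(population[i] for i in idx)
                for idx in combinations(range(N), n)]

# The mean of the sampling distribution equals the population mean,
# and its standard deviation equals sigma/sqrt(n) times the
# finite population correction sqrt((N - n)/(N - 1)).
fpc = math.sqrt((N - n) / (N - 1))
print(round(mean(sample_means), 4))          # 5.25
print(round(pstdev(sample_means), 4))        # 0.3632
print(round(sigma / math.sqrt(n) * fpc, 4))  # 0.3632
```

The exhaustive enumeration reproduces both the 5.25 and the 0.3632 computed by formula above, confirming the finite population correction on this tiny example.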
https://math.meta.stackexchange.com/questions/19512/on-the-bachelor-ette-correction?noredirect=1

# On the Bachelor/ette “correction”
Two days ago the question “The Bachelor Problem” (from Tao's Google+ account) was posted on MSE.
It received a lot of positive attention, solutions, and comments until an MSE user voted to close on the basis that he considered the puzzle's references to male bachelors and female princesses to be sexist. A StackExchange employee (not active on MSE) stepped in and duly "corrected" the question as well as all its answers by replacing "bachelor" with "bachelorette", "princess" with "prince", and "sister" with "brother".
Feeling a rather awkward political uncertainty, I respectfully ask for clarification on the precise nature of the value that this edit added to the question and its answers.
• I found the edit quite amusing - I think it's a positive move to acknowledge that there is a bias in the choice of genders in these puzzles, and the mathematical content is unharmed by changing it, so why not. The person who cast a close vote is pretty well unjustified in doing so, since the question is on-topic and I think focussing on the mathematical content is appropriate. I don't think anyone is being accused of sexism more than, "Ever noticed how almost all puzzles like this have the men doing the thinking? That might reflect badly on this community [of mathematicians]." – Milo Brandt Feb 4 '15 at 5:24
• After some debate, I've removed questions 2 and 3 since they are off-topic, in particular opinion based that will simply lead to conflict. – Pedro Tamaroff Feb 4 '15 at 5:40
• The change does useful "consciousness raising" that will likely continue to be necessary for quite a while. – André Nicolas Feb 4 '15 at 6:00
• Well, I would describe the end result more in the line that the amount of sexism was conserved, only changed in direction. (And I did not find any sexism in the original version, so I have no complaints with the new one, for that reason...) – Mariano Suárez-Álvarez Feb 4 '15 at 6:02
• I just find it weird when an employee who isn't active on the site gets involved for no good reason. I mean, what are the odds the question would have stayed closed for very long? – Asaf Karagila Feb 4 '15 at 6:36
• The assertion that Tao "first posted" it is completely misleading, not to say blatantly false. The problem does in no way originate with Tao; as explained on the blog, he was told it by a friend and then pulled the precise formulation from some online source. – quid Feb 4 '15 at 9:18
• @quid: Thank you for the correction, I had misunderstood the OP's reference. I removed the sentence claiming Tao as the original source. – user139000 Feb 4 '15 at 9:29
• The question, like many others of this kind, seems to originate from Raymond Smullyan's "What's the Name of this Book?" (At least I'm pretty sure I read it there) – Ral Zarek Feb 4 '15 at 11:15
• I'm not going to touch that question with a long pole, but if an edit is needed there, it's to remove (from ... account) which is useless name-dropping at best and misattribution at worst. – user147263 Feb 4 '15 at 12:58
• Thanks for following up on this. I now realize you essentially took it from main, and (if anybody) OP on main is to blame. (I thought in error this was you as well.) – quid Feb 4 '15 at 13:59
• One problem with the renaming: searching on "bachelor problem" via SE does not find the new question. This is not good if that is (or becomes) the standard moniker. I added a note with the alternative name to remedy that. – Bill Dubuque Feb 4 '15 at 14:44
• (I deleted part of my original comment.) Frankly, I think the change was silly, pointless (unless garnering attention counts) and serves to irritate rather than educate. – copper.hat Feb 4 '15 at 18:19
• Even beyond the issues discussed here, the question is a duplicate (which I saw after voting to close as off-topic) math.stackexchange.com/q/29364/23353 (and, FWIW, if someone goes and edits the other question to change the genders, I will roll back the edit because it is too minor of an issue to resurrect a dead question from 4 years ago). – apnorton Feb 4 '15 at 21:24
• It would seem that this question ought to refer to boys picking flowers by the policy implied by the editor. (P.S. If anyone makes that edit, it should be rolled back) – Milo Brandt Feb 6 '15 at 0:07
• As the user who posted the comment, pointing out the (unintentional but disappointingly invisible) sexism, that seems to have led to the edit: I might as well say that the fact that this discussion is now happening is far more important to me than what happened to the question itself. – Greg Martin Feb 10 '15 at 8:51
Personally, I don't think the question is sexist towards women, or that the edit is sexist towards men. With that said, the fact that I'm not offended by it doesn't invalidate the feelings of the people who are. They have a right to feel that way. There are good points on either side.
But it doesn't matter. The edit itself is a matter of taste, and therefore off topic. It is not enough that it does not harm the question. An edit must provide substantive improvement, and that means mathematical content or clarity. Those who wish to see a question with a female protagonist are free to ask one.
I think the only fair solution is this: Because it's an external quote, it should be taken at face mathematical value in its original form. Either closing or forcing change on it sets a precedent that we can bootstrap questions with unrelated political issues. That shouldn't be acceptable here.
• I apologize for being insistent; I would however appreciate an answer to my question. Let me present it now as a pair of more specific questions. Q1. In my observation it is common that somewhat longer collections of words, such as a full sentence, written in all-caps are edited to text using standard capitalization. Do you consider such edits as "off-topic"? If not, in which precise way does it fulfill the criteria you presented? (Not rarely, OP will have used all-caps in an attempt to be more clear; that I consider this misguided seems, however, like a matter of taste.) – quid Feb 6 '15 at 23:49
• Q2. Some individuals have the habit to mix varied expletives/profanity into their writing, without any particular bad intentions rather as a matter of style. Would you consider it as an edit that is "off-topic" if such profanity was removed? If not, again, in which precise way are your criteria fulfilled? Thanks in advance. – quid Feb 6 '15 at 23:56
• @quid In both cases, the original form distracts from the actual question, and therefore it increases clarity to change it. – Lord_Farin Feb 8 '15 at 9:53
• @Lord_Farin who is to decide this, based on which objective criteria, and why can't we do so (or try to do so) in the current case but must accept the original? In Q1 the most reasonable interpretation is, as said, that OP used all-caps to increase clarity (in their opinion). For Q2 somebody might make an argument that the judicious use of expletives increases clarity in that it highlights key parts of the post.// Given all the debate it seems hard to claim that this question was such that it did not cause distraction. Thus it seems there was need for an edit (though possibly a different one) – quid Feb 8 '15 at 12:25
• @quid It was the actual edit that sparked "all the debate", so your argument does not even have a shimmer of validity. As to the "objective criteria": We're not machines, but humans, and language is inherently subjective. Objectivity is an illusion and I don't want to exhaust myself striving for it (knowing that I will fail). But now that you've come to resort to defending your position by calling expletives an increase in clarity, all hope for a reasonable discussion has sadly been lost. I will no longer respond. – Lord_Farin Feb 8 '15 at 12:49
• @Lord_Farin Not quite. The original version drew a comment plus a close vote due to its phrasing. This motivated the edit. You misrepresent my position on expletives. Look, and to come back to something more reasonable, it is actually pretty simple in my view: on some things there is enough consensus and support that edits can be performed without issue; on other things there is not enough consensus and support. Gender-sensitive use of language is visibly in the latter category. In my opinion this is quite unfortunate. – quid Feb 8 '15 at 12:56
There are two issues at play, here. This post deals with the way the post was edited, rather than the end state of the post.
## The Editor is an SE Representative
Regardless of whether or not Jaydles was acting in an official manner, he represents StackExchange. As such, his action reflects on SE whether he intended it or not. This has a large intimidation factor--if I think the change is frivolous (a bad edit by definition), who am I to roll back the VP of Community Growth's edit? And, if I do, will I be suspended? (That second question is the primary reason I have not rolled back the edit already.)
Furthermore, the StackExchange model is built around self-governance of sites. The action Jaydles took was a giant statement saying "We don't believe you're capable of handling this on your own, so we're taking care of this."
We've worked through many problems on our own, thank you very much, and we were certainly capable of settling this one. As a result, the feeling I am left with is as if a friend and I were in a disagreement, and then my parent came in and said "here's how it's going to be, now act happy." We're semi-stuck with the solution we were given and we would have settled it on our own (probably in a much better way). Granted, we're not truly stuck in this instance, but any edit to the question to remove gender will cause all the answers to be outdated. Or, if someone reverts the edit, they look like a male-supremacist for taking issue (even though my reason for reverting would be to discourage revisions like that in the future).
## The Editor Misused Moderator Abilities
There is good reason for the revision queue--it is to prevent people with little experience on a site from screwing up questions with bad edits. However, since Jaydles is an SE employee, he has superpowers on all sites. This edit would have died in review, for multiple reasons:
• The edit clearly conflicted with the author's intent. For good or ill, this was meant to be a discussion of a problem from Tao's blog. That modification makes it no longer the same problem.
• The edit is not scalable. Are you now going to edit questions like this one? Are we also going to edit a bunch of Stable Marriage problems? If we decide that the issue of gender in a backstory is significant enough to require edits to questions, we likely have hundreds of questions to edit.
• The edit is polishing a turd. The question is off topic, as discussions are off-topic here. There is no question in the OP. An edit by a semi-official person (and sending it out on the Twitter feed) causes headache because now there's a precedent for such questions to be on-topic. Questions like this are bad questions by the help center definition (discussion oriented), and have no business being popularized.
## The Editor is Unfamiliar with Math.SE Culture
The edit reason is very revealing:
Some readers thought the original reinforced negative gender stereotypes, so I just reversed the genders. Even if you think that unneeded, it seems like it can't possibly cause any harm, so why risk a fun problem being shut down?
We are not a "fun problem" site. That is Puzzling.SE. "Risking a fun problem being shut down" doesn't bother me one iota. Risking a good, on-topic, non-duplicate question being shut down would be cause for minor concern, but this is none of those. (Here's the duplicate.)
Beyond this: Math.SE is a very conservative site with regards to how much editing is done of other people's posts. We edit to add formatting or make something clearer. We don't edit to say something substantively different than the author meant. If there's a disagreement about a post, we take it to meta first; we don't take unilateral action. (Consider the stink that arose when one of our new moderators closed and deleted two popular questions without discussion first.)
Anyone with two months of Math.SE experience would have known that such an edit certainly could cause an issue. This shouldn't have been forced on us by an outsider, but rather suggested.
## Conclusion
I really don't care about the genders of people in a problem. However, I do care about someone coming and imposing their view of the issue on me. For all I know, only two people took issue with the question (the guy who first commented and Jaydles) but hundreds of people viewed the question without thinking anything of it. Now there are dozens of people who are irritated at the way this went down, and there's no clean way of settling it. If Jaydles wanted to make everyone happy by changing the offending text, he could not have caused a worse outcome.
Jaydles, if you're reading this, please understand that I don't mean it as a personal attack. Your intentions were pure, but the action you took was badly done. Each site on the SE network has a different level of lenience regarding edits and unilateral action. You've now found out that Math.SE has very little lenience in this area. In keeping with the awesome SE model of site self-governance, please ask a question on Meta or suggest action be taken via a comment/chatroom next time you're tempted to edit a post on a site where you're inactive.
• Thank you for writing this. I was highly tempted to do the same, but you've done far better on the politeness scale than I could have. – Lord_Farin Feb 4 '15 at 23:31
• While I know that many users would hesitate to undo an action by an SE employee, there is really no reason to. As a mod I've had many interactions with them, and they can handle someone disagreeing with them. I'm very confident in claiming that they won't suspend anyone for reverting their edit. The ability of regular users to undo many moderator actions is an important tool, it allows small mistakes to be corrected without much fuss. – user9733 Feb 5 '15 at 8:24
• The question is not a duplicate, and in particular the so-called duplicate does not have an answer that meets the conditions set in this question. – Joffan Feb 6 '15 at 2:00
• @Joffan Then why is the 100+ upvoted answer also on the duplicate question? math.stackexchange.com/questions/1130336/… – apnorton Feb 6 '15 at 4:57
I was going to just comment. Then, as I was writing a comment, it turned into two, then three... feeling a bit more courageous even as I typed, until it evolved into an answer.
I found the edit to be both humorous and gratifying. No, I'm not the one who voted to close, nor did I find the original offensive. But I found it gratifying that on at least one occasion, someone cared enough to actually consider how the ways questions are framed may impact women and girls, and to take seriously those who believe the original post reinforced sexism.
The sad fact is that as a woman in mathematics, I have become so inured to being thanked as a "sir," being assumed to be a "he" and not a "she", my answers being "his" and not "hers", etc., and I'm not surprised anymore at the ways in which women with female usernames are treated here, often, as I see it, not taken as seriously as those with "male" usernames.
I've had to learn to pick my battles, depending on context and depending on the consequences of speaking up. I've also learned that if I object 24/7, voicing every situation that is exclusive of women, doors are closed, and those to whom I speak stop listening. So I use my voice strategically, or so I aim. To be honest: MSE is not a place I find to be receptive to seriously considering the ways in which women are excluded in math, the ways in which the notion that "It's just plain fact that there are more men in math than women" is used to justify actions which ensure that "there are more men in math than women."
I commend @Jaydles for actually stopping to consider the thoughts and feelings of users who actually happen to be women, and on a math site, no kidding! (Yes, we exist. Here.) If for nothing else, it is gratifying to know that at least one person at Stack Exchange took a moment to think about "the same ole same ole", and further, decided to make an edit, the consequences of which have revealed a lot of ignorance of and disinterest in women's experiences on math.se.
I post this with a bit of trepidation.
Amy
• meta.math.stackexchange.com/q/9227 – Did Feb 5 '15 at 19:19
• ilaba.wordpress.com/2011/03/28/why-im-not-on-mathoverflow – Did Feb 5 '15 at 19:19
• ilaba.wordpress.com/2012/12/16/still-not-on-mathoverflow – Did Feb 5 '15 at 19:20
• "UX"? User eXperience? – Gerry Myerson Feb 5 '15 at 23:16
• @Gerry Yes. Perhaps I should have just written it out? – amWhy Feb 5 '15 at 23:17
• I would hazard that most men find smart women intimidating because they represent competition from a (culturally) unexpected source. – copper.hat Feb 6 '15 at 4:17
• Thank you for this answer. Admittedly tangential to the particular issue at hand, but there are too many people who are just completely unaware or purposefully ignorant of the problems you mention in your answer. – 6005 Feb 9 '15 at 0:49
• Thank you for posting this. – tracing Feb 10 '15 at 13:53
As a preamble let me say that I consider questions touched upon here as important in general, yet the precise instance seems in any case minor in the general context of the problematic so that I think one should not get too upset about it either way.
I do consider the (original) formulation of the puzzle as unfortunate, to no small extent for the starting half-sentence (my emphasis).
You are the most eligible bachelor[...]
This somehow suggests that "the default reader" is male. Or, put differently, this puzzle seems written for boys. I think this is unfortunate and unnecessary. (Yes, I realize one can argue that it also might suggest that the default reader is single and invited by kings and so on, yet I still feel there is a difference. At least I (as a male) read "imagine you are a king" more smoothly, for lack of a better word, than "imagine you are a queen.")
Now, it is turned around, which is relatively better and not the same (contrary to what some want to make believe) due to the reason given in the edit-comment (my emphasis): "Some readers thought the original reinforced negative gender stereotypes, so I just reversed the genders." The inverted version does not reinforce common stereotypes but rather goes against them. That's a difference.
My preferred version would have been to simply have a gender-neutral version. Like, I don't know "You are about to hire one of three siblings."
• I agree that the best version would somehow avoid reference to the genders of the participants ("You are a professor hiring graduate students..."). But I disagree that the new version is less sexist than the old (to the extent that either one is particularly problematic); like Mariano I would think the sexism of a remark is invariant with respect to permutations of the genders involved. – user7530 Feb 4 '15 at 15:41
• +1 for the "not the same" paragraph. – Did Feb 4 '15 at 16:56
• The inverted version has a masculine figure of the greatest power in the land deciding that a girl is to marry. He is even deciding that she is to marry a man. I cannot imagine why this does not reinforce common stereotypes and much less why it goes against any... – Mariano Suárez-Álvarez Feb 4 '15 at 18:44
• @MarianoSuárez-Alvarez well, one could have inverted that too, or could have done what I proposed as my preferred version. (or I could have said it is changed regarding this one aspect, or wrote something still more complex.) And, yes, no need to bring it up, I realize also my version might reinforce some ideas some might find objectionable, but still I am convinced there is a difference. If you don't, okay. I have no interest in having a(nother) debate with you around these matters. – quid Feb 4 '15 at 20:57
• @user7530 Is taking from the rich to give to the poor, and taking from the poor to give the rich also the same in your opinion? – quid Feb 4 '15 at 21:05
• @quid Yes; they're both theft. – apnorton Feb 4 '15 at 21:37
• @quid. Agreed. If I steal $1000 from Bill Gates, or Bill Gates steals $1000 from me, both are thefts of $1000 and both are seen as equal offences in the eye of the law. One might argue that stealing from the rich is "morally less wrong despite being the same crime" because it fulfills some broader social agenda seen by many as desirable. But math.SE is certainly not the place to debate such things. – user7530 Feb 4 '15 at 21:42
• @anorton I am not sure my usage of the word "taking" is off, but various systems of taxation do contain elements of vertical transfer. Now, you might consider any form of taxes, or perhaps just this form, as theft. This, however, is a somewhat extreme position. But I do appreciate there are all kinds of positions. I am just waiting (and even tried to preempt it) for someone to call me out for promoting wage labour. – quid Feb 4 '15 at 21:43
• @quid I read your use of the word "taking" as in "Robin Hood takes from the rich and gives to the poor;" I see what you meant now. (Your usage is, in hindsight, correct; I just read it one way.) – apnorton Feb 4 '15 at 21:44
• @user7530 "taking" is not the same as "steal" at least I thought so, but perhaps my usage is off. Please see my other comment. – quid Feb 4 '15 at 21:45
• @quid Yes, sorry, I misinterpreted your comment. – user7530 Feb 4 '15 at 21:50
• @Mariano: Your comment makes me realize that the real objection shouldn't be to gender issues, but rather to the promotion of monarchy and tyranny!!! :-) – Asaf Karagila Feb 5 '15 at 7:05
• @Asaf: I will take this as a pure joke, as I assume that you do realize that the problem that I pointed out is quite specific, and the discursive arguments you, Mariano, and some others bring up does very little to invalidate it. – quid Feb 5 '15 at 14:21
I think the edits serve as a minor amusement or distraction; I see no sexism in any of them. Plagiarizing @Mariano's comment:
Well, I would describe the end result more in the line that the amount of sexism was conserved, only changed in direction. (And I did not find any sexism in the original version, so I have no complaints with the new one, for that reason...)
While the SE member is inactive on Math.SE, his actions represent SE as a whole, so I'm not stumped by this fact. The edit didn't harm the question's mathematical content, and to some (including me) it was actually amusing, which one might consider added value.
I will pitch the idea that if we are going to remove some of the negative stereotypes in the problem we must remove all of them.
This would more or less entail removing genders entirely and possibly removing the king. Another option would be to place the question in a more neutral context (a person is looking for a lab partner and has three options...)
This proposal basically states that, if we are going to change something like the implied gender stereotypes, it is best to remove gender entirely.
If you agree with this idea, upvote. If you disagree, downvote.
• If you think the name associated with the question (in the title) should be removed, upvote this comment. – user157227 Feb 4 '15 at 20:06
• If you think the name associated with the question (in the title) should not be removed, upvote this comment. – user157227 Feb 4 '15 at 20:08
• I agree. Let's remove all the stereotypes from the question -- by closing and deleting it. – user147263 Feb 4 '15 at 20:27
• @FamousBlueRaincoat who decides if there is a stereotype in a question or answer? A single user, some number of users? – mikeazo Feb 4 '15 at 20:40
• @mikeazo In my scenario, the users able to cast close and delete votes. Five are required to close, and a few more to delete. – user147263 Feb 4 '15 at 20:41
• @FamousBlueRaincoat similar to what I advocated below (using flags). There are strengths and weaknesses of each. I can definitely agree that we should simply use the available community controls of SE. – mikeazo Feb 4 '15 at 20:52
• I don't think the OP was being knowingly sexist or did anything wrong.
• I totally would have posted that riddle without thinking of any of these issues.
• I think Raymond Smullyan is awesome, and wore through the covers of a few of his books staying up late doing riddles over boxed wine in college.
So, none of this falls into the "there was rampant sexism being perpetrated and it must be stopped" category.
I saw the question while browsing, and initially had no personal objection to it until I saw the comment that indicated that at least one user thought it should be closed for reinforcing some historical stereotypes that may still send the wrong message today.
Some people in the community felt that the riddle, as written, reinforced gender stereotypes that have strong historical patterns, and eliminating them in no way interfered with the problem.
Is it fair to call the riddle sexist? Dunno. Don't care. Do I judge anyone for posting it? Nope. Not calling anyone sexist. Woulda done so myself without thinking about this issue.
But personally, I'm happy saying,
"Since some folks seem to think little girls might be much better off if their role models were less often the princess/prize, and more often the decider/mathematician, when it's essentially costless to do it, why not do that? I'd rather my daughter instantly associate with the protagonist."
I don't think anyone was seriously suggesting that the changes I made made it sexist against men (presumably since men are so rarely cast as the prize in fairy tales or human history, there's little risk that doing so will reinforce negative stereotypes). But if anyone feels that to be an issue, you're welcome to de-gender it ("siblings"), but I personally think that that does slightly harm the problem, as it's a little harder to parse, and feels more stilted.
• Non-local moderators should not make such actions, in any event; StackExchange is based on the idea of community moderation, and users in the math.SE community are able to handle things like this. In general: (1) because the edit does not require moderator attention, there is no reason for a moderator to intervene. (2) Even if the question did require moderator attention, for a long time we have had a convention on math.SE that local moderators will take care of problems when possible, and non-local moderators will only take care of issues that local moderators are unable to handle. – Carl Mummert Feb 4 '15 at 17:55
• You are right that it is not a big deal either way, but it is revisionist and there is a political correctness element to it. It makes an issue out of something where there really was none, other than some hypersensitive guy extrapolating non-existent offence. I have never encountered anything remotely sexist on MSE (other than my own remarks). – copper.hat Feb 4 '15 at 18:04
• An issue is the last thing I wanted. A user here raised the issue; my hope was just to make some tweaks that resolved it without a big debate. And that was my hope with this post - to make clear that no one did a bad thing; but if it makes some feel unwelcome, we should fix that if it does no harm. – Jaydles Feb 4 '15 at 18:17
• @Jaydles: that is not the role of a moderator, as I understand it. Moderators are not intended to step in to resolve wording disputes in questions - the community already has a process to do that. Moderators are intended to correct breakdowns in the usual system (e.g. delete/undelete wars, rep fraud), and to handle unexpected circumstances. This seems to be just a routine editing dispute. Without taking sides on the content, I don't think it was serious enough to warrant a moderator edit of multiple posts. – Carl Mummert Feb 4 '15 at 18:55
• Although I typically wouldn't care if the problem was phrased one way or another, having an SE VP make the edit based on one user's comment (or, if there were more, any traces have been deleted) has made me decidedly against the edits. By performing the edit, some unknown moderator is imposing his view of the issue on everyone else. This is something that needed discussion first, not immediate action. I, for one, am quite upset about this action--not because of the content of the post--but because the method used is antithetical to the SE model of governance. – apnorton Feb 4 '15 at 19:15
• @CarlMummert There were no moderator powers involved here, anyone could have made this edit (Jaydles wasn't acting in an official capacity here as far as I see). Moderators are also regular users, and edits are not moderator-exclusive abilities. I don't expect moderators to suddenly stop editing just because that is something the community (of which the moderators are a part of) can do by itself. – user9733 Feb 4 '15 at 19:27
• @MadScientist Unless a user has 2000 reputation, edits must be approved by the community. If this edit was not approved by the community, this is undoubtedly "moderator powers". – user157227 Feb 4 '15 at 19:35
• @MadScientist Really? So some SE high-hat descends from the aetherial plane to the math.SE commons he avoided before, just to make a non-moderator edit? Because he wanted to "resolve" something? Please keep in mind that it was the diamond that allowed Jaydles to bypass the suggested edit queue normally in place for "new" users. I'm not buying your story. – Lord_Farin Feb 4 '15 at 19:36
• @user157227 Yes, the diamond made a difference in this case. But I still don't think it makes sense to think of editing as a moderator ability. There is not all that much difference between rejecting a suggested edit or rolling back an edit, both can be easily done by regular users. It's not like he locked the post, anyone could have reversed the edit. – user9733 Feb 4 '15 at 19:41
• @MadScientist Who in their right mind would roll back a stack exchange employee's edit? I still don't see how you can claim that this person is a member of the community OR that they did not use "moderator powers". – user157227 Feb 4 '15 at 19:50
• The edit was in preparation for the question to be tweeted by the main StackExchange account, which is managed by Jaydles. I don't care about the gender of characters there, but promoting a marginally topical question, better suited for Puzzling, gets my -1. – user147263 Feb 4 '15 at 20:25
• @Mad Scientist: Jaydles does not have the rep needed to make the edits, apart from the moderator ability. A user with more experience would be more likely to realize that, even if the edit was useful, on this site we have a very conservative culture about editing other people's posts. If the edit had been put in a queue, I suspect it might well have been rejected. – Carl Mummert Feb 4 '15 at 20:58
• Carl Mummert has said it best. There are two issues: one about gender use in the post, and the other is about an SE Community Manager butting in inappropriately. The second issue has not been addressed at all by Jaydles. I have seen this kind of action from SE personnel before, who interfere without knowing a community and its culture, and it really bothers me. Let the community and the moderators they elected handle this. – user43208 Feb 5 '15 at 20:30
• @user43208 a couple of points: The first initiative against the phrasing came from within this community. It is obviously possible to consider "outside" intervention as a problem in and by itself, it is however not clear why one might want to do this. Finally, in my observation what usually happened (at least lately, I only care that much about some 4+ years ago events) is that SE does something that seems about reasonable to me while local users behave in unreasonable ways about and related to it (to which the local mods are already used). – quid Feb 6 '15 at 13:56
• @quid: I do view the outside intervention as a problem in and of itself, because I don't want to see us return to the "bad old days". Actually: I fully support the spirit behind the edit - the question would be far better if it was not phrased in terms of "marriage" of any sort, and it is a very unfortunate habit to write questions like that one in a sexist manner. But I do not think it was a problem so severe that an outside moderator needed to step in to remedy the issue. I do wish the Jaydles had addressed this issue. – Carl Mummert Feb 6 '15 at 15:28
On issues like this, we should let the community decide. There is a reason we have various flags (e.g., offensive) and rules about what happens when a post gets so many flags (e.g., for the offensive flag).
If enough users flag it as offensive, a moderator or the OP can decide whether or not they want to make edits.
• There is absolutely no foundation to raise a flag on any of these. This sums up the situation nicely enough. – AlexR Feb 4 '15 at 12:51
• @AlexR, personally I agree with you. That said, my answer here is an attempt to take myself out of it. Some may find it offensive, if they do, what action should they take? What should the result of this action be? – mikeazo Feb 4 '15 at 12:54
• That's okay, note that we have a different meaning for down-votes on meta, so this is a disagreement, not a downvote for a "wrong" answer. – AlexR Feb 4 '15 at 12:57
• @AlexR, no worries, I understand. You should submit your own answer. – mikeazo Feb 4 '15 at 12:58
• I am going to plagiarize the linked comment now for an answer ;) – AlexR Feb 4 '15 at 12:59
• -1 for the suggestion to flag, +1 for the "let the community decide." So, net 0 – apnorton Feb 4 '15 at 18:56
• @anorton I'm open to suggestions. How does the community decide if not through flags? down-votes? comments? – mikeazo Feb 4 '15 at 19:07
• @mikeazo Typically through a comment. If the comment is upvoted enough without conflict, someone typically takes it on themselves to perform the edit. If there is conflict, we start a meta thread and discuss the pros/cons of the idea. – apnorton Feb 4 '15 at 19:11
• @anorton, The main reason I went with flags in my answer is that there is an automatic action if enough people flag it and they are more or less anonymous to general users. – mikeazo Feb 4 '15 at 19:16
• Only moderators see flags, and not every flag is (necessarily) seen by all moderators. So, there's not an easy way of keeping track of how many people are offended vs. how many people would be offended by the edit. – apnorton Feb 4 '15 at 19:18
• @anorton, We haven't dealt with many spam or offensive flags on the site I am a mod on, but the Meta.SE post I linked to says there is no need to remove spam or offensive flags. That seems to suggest that mods should not clear those (assuming they even can) as they are automatically cleared if thresholds aren't reached after 48 hours. I do agree with your point that the OP will get no notification in the meantime (until the question is deleted). – mikeazo Feb 4 '15 at 19:27
• I don't understand the downvoting here. – copper.hat Feb 4 '15 at 20:52
• @copper.hat I read this answer as a suggestion that users who dislike the post under discussion should use offensive flag (six of which delete the post automatically). I strongly disagree with that, hence my downvote. – user147263 Feb 4 '15 at 20:55
• To save everybody the effort to check: we are at 10 votes, which is the maximum. – quid Feb 4 '15 at 21:57
• @BillDubuque I don't know, but there's a first for everything. – user147263 Feb 5 '15 at 1:01
https://content.iospress.com/articles/argument-and-computation/456935
# An abstract framework for argumentation with structured arguments
#### Abstract
An abstract framework for structured arguments is presented, which instantiates Dung's (‘On the Acceptability of Arguments and its Fundamental Role in Nonmonotonic Reasoning, Logic Programming, and n-Person Games’, Artificial Intelligence, 77, 321–357) abstract argumentation frameworks. Arguments are defined as inference trees formed by applying two kinds of inference rules: strict and defeasible rules. This naturally leads to three ways of attacking an argument: attacking a premise, attacking a conclusion and attacking an inference. To resolve such attacks, preferences may be used, which leads to three corresponding kinds of defeat: undermining, rebutting and undercutting defeats. The nature of the inference rules, the structure of the logical language on which they operate and the origin of the preferences are, apart from some basic assumptions, left unspecified. The resulting framework integrates work of Pollock, Vreeswijk and others on the structure of arguments and the nature of defeat and extends it in several respects. Various rationality postulates are proved to be satisfied by the framework, and several existing approaches are proved to be a special case of the framework, including assumption-based argumentation and DefLog.
## 1. Introduction
In 1995, Phan Minh Dung introduced an abstract formalism for argumentation-based inference (Dung 1995), which assumes as input nothing else but a set (of arguments) ordered by a binary relation (of attack). Although he thus fully abstracted from the structure of arguments and the nature of the attack relation, he was still able to develop an extremely interesting theory. His article was a breakthrough in three ways: it provided a general and intuitive semantics for the consequence notions of argumentation logics (and for non-monotonic logics in general); it made a precise comparison possible between different systems (by translating them into his abstract format) and it made a general study of formal properties of systems possible, which are inherited by instantiations of his framework. In consequence, Dung's work has given an enormous boost to research in computational argumentation. Yet it has also been criticised for not specifying the structure of arguments and the nature of the attack relation, which makes it less suitable for modelling specific argumentation problems. I believe that such criticism fails to appreciate the nature of Dung's formalism. It is best seen not as a formalism for directly representing argumentation-based inference problems but as a tool for analysing particular argumentation systems and for developing a meta-theory of such systems. As such it has been very successful: differences between particular systems can be characterised in terms of some simple notions, and formal results established for the framework are inherited by its instantiations. This was already illustrated by Dung (1995) with reconstructions of Pollock's (1987) system, various logic-programming semantics and Reiter's (1980) default logic in his formalism.
Nevertheless, it is true that when actual argumentation-based inference has to be modelled, Dung's framework is by itself usually too abstract and instead an instantiated version of his approach should be used. However, here too abstraction is still possible and worthwhile. The aim of this paper is to instantiate Dung's abstract approach with a general account of the structure of arguments and the nature of the defeat relation. The framework defines arguments as inference trees formed by applying two kinds of inference rules, strict and defeasible rules. This naturally leads to three ways of attacking an argument: attacking a premise, a conclusion and an inference. To resolve such attacks, preferences may be used, which leads to three corresponding kinds of defeat: undermining, rebutting and undercutting defeats. To characterise them, some minimal assumptions on the logical object language must be made, namely that certain well-formed formulas are a contrary or contradictory of certain other well-formed formulas. Apart from this, the framework is still abstract: it applies to any set of inference rules, as long as it is divided into strict and defeasible ones, and to any logical language with a contrary relation defined over it.
The choice of tree-structured arguments based on two types of inference rules is arguably very natural both in light of logic and argumentation theory and when looking at argumentation as it occurs in human thinking and dialogue. The notion of arguments as trees of inferences is very common in standard logic and in argumentation theory and is the basis of many software tools for argument visualisation. Moreover, in actual argumentation, humans often express their arguments as claims supported with one or more premises, which can in turn be supported with further premises, and so on. Finally, as will be further explained in Section 4, the setup with general defeasible inference rules is very suited for modelling reasoning with argumentation schemes (Walton, Reed, and Macagno 2008).
The account offered in this paper is not completely new. In fact, a rhetorical aim of the paper is to counter the idea that the computational study of argumentation started with Dung's abstract approach and that only then researchers made it more concrete with accounts of the structure of arguments and the nature of defeat. As a matter of fact, much work on these two issues was already done or going on at the time when Dung wrote his paper, and some of this work is still state-of-the-art. For instance, both Pollock (1987, 1994) and Vreeswijk (1993, 1997) did important work on the structure of arguments, while Pollock (1974, 1987) introduced an important distinction between two kinds of defeat, namely rebutting defeat (attack on a conclusion) and undercutting defeat (attack on an inference rule). One aim of the present paper is to profit from, integrate and build on this and other important work as much as possible. As such, this paper is a further development of the integration attempt that was undertaken in the European ASPIC project (Amgoud et al. 2006). In this project, Vreeswijk's formalisation of the structure of arguments was combined with Pollock's definitions of rebutting and undercutting defeat in a way that also used insights from other work. The result was a characterisation of a set of tree-structured arguments ordered with a binary defeat relation, so that an instantiation of Dung's abstract approach was achieved and any of Dung's semantics could be used to compute the acceptability status of the structured arguments.
The ASPIC framework was developed by Leila Amgoud, Martin Caminada, Claudette Cayrol, Marie-Christine Lagasquie-Schieux, myself and Gerard Vreeswijk and was first reported in a European project deliverable (Amgoud et al. 2006). The added expressiveness compared with Dung's abstract formalism gave rise to further work by Caminada and Amgoud (2007) on rationality postulates for systems instantiating the ASPIC framework. The aim of this work was to propose the idea of rationality postulates and to criticise some specific rule-based argumentation systems for failing to satisfy them. For this aim, only a simplified version of the ASPIC framework was needed, without preferences and without the notion of a knowledge base. Moreover, the examples discussed by Caminada and Amgoud (2007) were all with domain-specific inference rules instead of with general inference patterns, which in effect somewhat obscured the potential of the framework to be a general account of structured argumentation.
In contrast, the present paper aims to present the ASPIC framework as a general abstract model of argumentation with structured arguments. To achieve this aim, the ASPIC framework will be extended and generalised in four respects.
• (1) A third way of argument attack, namely premise attack or ‘undermining’, will be added, in a way inspired by Vreeswijk's (1993, chap. 8) combination of ‘plausible’ and ‘defeasible’ argumentation. Apart from the naturalness of having all three kinds of attack in a general framework for argumentation, this will make it easier to formalise argument schemes in the framework and it will make it possible to regard existing systems with premise attack as special cases of the framework.
• (2) The three notions of attack will be generalised from the notion of contradiction between formulas φ and ¬φ to an abstract relation of contrariness between formulas which is not necessarily symmetric. This idea is taken from Bondarenko, Dung, Kowalski, and Toni (1997) and Verheij (2003a) and will help in showing that their systems are a special case of the present framework.
• (3) Four types of premises will be distinguished, inspired by a similar distinction of Gordon, Prakken, and Walton (2007).
• (4) Attack relations will be partly resolved with preference orderings on arguments, defeasible rules and the knowledge base (although Amgoud et al. (2006) also have preferences, the results of Caminada and Amgoud (2007) do not cover them).
It will then be investigated to what extent the results of Caminada and Amgoud (2007) on rationality postulates generalise to the thus extended ASPIC framework. The final aim of this paper is to compare the resulting framework with recent related work. It will turn out that assumption-based argumentation (Bondarenko et al. 1997; Dung, Kowalski, and Toni 2006; Dung, Mancarella, and Toni 2007), DefLog (Verheij 2003) and Amgoud and Cayrol's (2002) version of deductive argumentation are special cases of this paper's version of the ASPIC framework.
## 2. Dung's abstract argumentation frameworks
First without explanation, the basic concepts and insights of Dung's abstract argumentation approach are listed. For a state-of-the-art introduction, see Baroni and Giacomin (2009).
### Definition 2.1 (abstract argumentation framework)
An abstract argumentation framework (AF) is a pair ⟨A, Def⟩, where A is a set of arguments and Def ⊆ A × A is a binary relation of defeat. We say that an argument A defeats an argument B iff (A, B) ∈ Def.
### Definition 2.2 (conflict-free, defence)
Let B ⊆ A.
• A set B is conflict-free iff there exist no Ai, Aj in B such that Ai defeats Aj.
• A set B defends an argument Ai iff for each argument Aj ∈ A, if Aj defeats Ai, then there exists an Ak in B such that Ak defeats Aj.
### Definition 2.3 (acceptability semantics)
Let B be a conflict-free set of arguments, and let F: 2^A → 2^A be a function such that F(B) = {A | B defends A}.
• B is admissible iff B ⊆ F(B).
• B is a complete extension iff B = F(B).
• B is a grounded extension iff it is the smallest (w.r.t. set inclusion) complete extension.
• B is a preferred extension iff it is a maximal (w.r.t. set inclusion) complete extension (or, equivalently, iff B is a maximal (w.r.t. set inclusion) admissible set).
• B is a stable extension iff it is a preferred extension that defeats all arguments in A ∖ B.
Note that this implies that each grounded, preferred or stable extension of an AF is also a complete extension of that AF. Some other known results are that
• the grounded extension is indeed unique but all other semantics allow for multiple extensions of an AF;
• each AF has a grounded and at least one preferred and complete extension, but there are AFs without stable extensions;
• the grounded extension of an AF is contained in all other extensions of that AF.
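For finite frameworks, the grounded extension can be computed by iterating the characteristic function F from the empty set until a fixpoint is reached. The following is an illustrative Python sketch (not from the paper); the encoding of arguments as strings and defeats as pairs is my own.

```python
# Sketch: grounded extension of a finite abstract argumentation framework,
# computed as the least fixpoint of the characteristic function F.

def grounded_extension(arguments, defeats):
    """arguments: set of argument names; defeats: set of (attacker, target) pairs."""
    def F(B):
        # F(B) = all arguments A that B defends: every defeater of A
        # is itself defeated by some member of B (Definition 2.2).
        return {a for a in arguments
                if all(any((c, b) in defeats for c in B)
                       for b in arguments if (b, a) in defeats)}
    B = set()
    while True:
        nxt = F(B)
        if nxt == B:
            return B
        B = nxt

# Example: A defeats B, B defeats C. A is unattacked, so it is in; it
# defends C against B, so C is in too.
args = {"A", "B", "C"}
defs_ = {("A", "B"), ("B", "C")}
print(sorted(grounded_extension(args, defs_)))  # ['A', 'C']
```

Since F is monotone, iterating it from ∅ reaches its least fixpoint, which for finite AFs is exactly the grounded extension.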
## 3. Argumentation systems with structured arguments
In this section, the arguments of Dung's argumentation frameworks are given structure and its defeat relation is defined in terms of the structure of arguments plus external preference information. Apart from this, the resulting formalism is still as abstract as possible, allowing for different logical languages, different sets of inference rules for building arguments and different preference orderings. The framework uses Vreeswijk's (1993, 1997) definition of the structure of arguments and then adds Pollock's (1987, 1994) distinction between rebutting and undercutting attack, as well as a variant of the notion of premise attack proposed by Vreeswijk (1993, chap. 8). These notions are then generalised to languages with arbitrary relations of contrariness and contradiction between well-formed formulas. Then the three notions of attack are combined into a notion of defeat in a way inspired by Vreeswijk (1993, chap. 8) and Prakken and Sartor (1997). It is this combination that makes it possible to regard the system as an instantiation of Dung's abstract framework.
The resulting framework unifies two ways to capture the defeasibility of reasoning. Some, e.g. Amgoud and Cayrol (2002), Besnard and Hunter (2008), Bondarenko et al. (1997), Verheij (2003a), locate the defeasibility of arguments in the uncertainty of their premises, so that arguments can only be attacked on their premises. Others, e.g. Pollock (1994), Vreeswijk (1997), instead locate the defeasibility of arguments in the riskiness of their inference rules: in these logics, inference rules are of two kinds, being either deductive or defeasible, and arguments can only be attacked on their applications of defeasible inference rules. Typically, in this approach inconsistency of the knowledge base makes the system collapse. Vreeswijk (1993, chap. 8) called these two approaches plausible and defeasible reasoning: he described plausible reasoning as sound (i.e. deductive) reasoning on an uncertain basis and defeasible reasoning as unsound (but still rational) reasoning on a solid basis. In Chapter 8, Vreeswijk attempted to combine both forms of reasoning in a single formalism, but since then most formal accounts of argumentation have modelled either only plausible or only defeasible reasoning.
### 3.1. Basic definitions
The basic notion of the present framework is that of an argumentation system, which extends the familiar notion of a proof system with a distinction between strict and defeasible inference rules3 and a preference ordering on the defeasible inference rules.
#### Definition 3.1 (argumentation system)
An argumentation system is a tuple AS = (L, ‾, R, ≤) where
• L is a logical language,
• ‾ is a contrariness function from L to 2^L,
• R = Rs ∪ Rd is a set of strict (Rs) and defeasible (Rd) inference rules such that Rs ∩ Rd = ∅,
• ≤ is a partial preorder on Rd.
Amgoud et al. (2006) and Caminada and Amgoud (2007) assume that arguments are expressed in a logical language that is left unspecified except that it is closed under classical negation. In this paper, this assumption will be generalised in two ways. First, non-symmetric conflict relations between formulas will be allowed, such as the contrariness relation of Bondarenko et al. (1997) (which captures, for instance, negation as failure), and its inverse, the dialectical negation of Verheij (2003a) (which means ‘it is defeated that’). Second, in addition to classical negation, other symmetric conflict relations will be allowed, so that, for instance, formulas like ‘bachelor’ and ‘married’ can, if desired, be declared contradictory without having to reason with an axiom ¬(bachelor ∧ married).
#### Definition 3.2 (logical language)
Let L, a set, be a logical language and ‾ a contrariness function from L to 2^L. If φ ∈ ψ‾ then: if ψ ∉ φ‾, φ is called a contrary of ψ; otherwise φ and ψ are called contradictory. The latter case is denoted by φ = −ψ (i.e. φ ∈ ψ‾ and ψ ∈ φ‾).
In examples with classical negation ¬, it will be assumed that ¬φ ∈ φ‾ and φ ∈ ¬φ‾.
Now that the notion of negation has been generalised, the same must be done with the notion of consistency.
#### Definition 3.3 (consistent set)
Let P ⊆ L. P is consistent iff there exist no ψ, φ ∈ P such that ψ ∈ φ‾; otherwise it is inconsistent.
Note that this is a weak form of consistency, determined by whether a set contains contrary or contradictory formulas. Caminada and Amgoud (2007) call this direct consistency and they call consistency of the closure of a set under strict inference indirect consistency.
Arguments are built by applying inference rules to subsets of L. Inference rules are either strict or defeasible. This distinction goes back to Lin and Shoham (1989), Pollock (1987) and Vreeswijk (1993), as does the idea of abstracting from their nature.
#### Definition 3.4 (strict and defeasible rules)
Let φ1, …, φn, φ be elements of L.
• A strict rule is of the form φ1, …, φn → φ, informally meaning that if φ1, …, φn hold, then without exception φ holds.
• A defeasible rule is of the form φ1, …, φn ⇒ φ, informally meaning that if φ1, …, φn hold, then presumably φ holds.
φ1, …, φn are called the antecedents of the rule and φ its consequent.
As usual in logic, inference rules will often be specified by schemes in which a rule's antecedents and consequent are metavariables ranging over L.
Arguments are constructed from a knowledge base which, inspired by Gordon et al. (2007), is assumed to contain four kinds of formulas.
#### Definition 3.5 (knowledge bases)
A knowledge base in an argumentation system (L, ‾, R, ≤) is a pair (K, ≤′), where K ⊆ L and ≤′ is a partial preorder on K ∖ Kn.
Here K = Kn ∪ Kp ∪ Ka ∪ Ki, where these subsets of K are disjoint and
• Kn is a set of (necessary) axioms. Intuitively, arguments cannot be attacked on their axiom premises.
• Kp is a set of ordinary premises. Intuitively, arguments can be attacked on their ordinary premises, and whether this results in defeat must be determined by comparing the attacker and the attacked premise (in a way specified below).
• Ka is a set of assumptions. Intuitively, arguments can be attacked on their assumptions, where these attacks always succeed.
• Ki is a set of issues. Intuitively, arguments of which the premises include an issue are never acceptable: an issue must always be backed with a further argument.
(Gordon et al. (2007) call ordinary premises ‘assumptions’, they regard assumptions as the contradictories of ‘exceptions’ and they call issues ‘ordinary premises’. Their counterpart to axioms is ‘accepted’ and ‘rejected’ statements.) As explained by Gordon et al. (2007), the category of issue premises is useful if an argumentation system is embedded in a dialogical context, defining the acceptability status of arguments relative to a stage in a dialogue. For example, in legal proceedings, legal claims that are not backed by factual evidence usually do not stand: for instance, an argument ‘we have a contract by Section X of the Civil Code since I made an offer and you accepted’ will be unacceptable as long as no factual evidence for the offer and acceptance is provided. In the present framework, this can be captured by giving the non-supported premises issue status.
### 3.2. Arguments
Next the arguments that can be constructed from a knowledge base in an argumentation system are defined. Arguments can be constructed step-by-step by chaining inference rules into trees. Arguments thus contain subarguments, which are the structures that support intermediate conclusions (plus the argument itself and its premises as limiting cases). In what follows, for a given argument, the function Prem returns all the formulas of K (called premises) used to build the argument, Conc returns its conclusion, Sub returns all its subarguments, DefRules returns all the defeasible rules of the argument and, finally, TopRule returns the last inference rule used in the argument.
#### Definition 3.6 (argument)
An argument A on the basis of a knowledge base (K, ≤′) in an argumentation system (L, ‾, R, ≤) is
• (1) φ if φ ∈ K, with
Prem(A) = {φ},
Conc(A) = φ,
Sub(A) = {φ},
DefRules(A) = ∅,
TopRule(A) = undefined.
• (2) A1, …, An → ψ if A1, …, An are arguments such that there exists a strict rule Conc(A1), …, Conc(An) → ψ in Rs, with
Prem(A) = Prem(A1) ∪ … ∪ Prem(An),
Conc(A) = ψ,
Sub(A) = Sub(A1) ∪ … ∪ Sub(An) ∪ {A},
DefRules(A) = DefRules(A1) ∪ … ∪ DefRules(An),
TopRule(A) = Conc(A1), …, Conc(An) → ψ.
• (3) A1, …, An ⇒ ψ if A1, …, An are arguments such that there exists a defeasible rule Conc(A1), …, Conc(An) ⇒ ψ in Rd, with
Prem(A) = Prem(A1) ∪ … ∪ Prem(An),
Conc(A) = ψ,
Sub(A) = Sub(A1) ∪ … ∪ Sub(An) ∪ {A},
DefRules(A) = DefRules(A1) ∪ … ∪ DefRules(An) ∪ {Conc(A1), …, Conc(An) ⇒ ψ},
TopRule(A) = Conc(A1), …, Conc(An) ⇒ ψ.
##### Example 3.7
Consider a knowledge base in an argumentation system with
• Rs = {p, q → s;  u, v → w}
• Rd = {p ⇒ t;  s, r, t ⇒ v}
• Kn = {q}
• Kp = {p, u}
• Ka = {r}
An argument for w is displayed in a traditional proof-tree format in Figure 1, where a single line stands for a strict inference and a double line for a defeasible inference. The type of a premise is indicated with a superscript. Formally, the argument and its subarguments are written as follows:
• A1: p
• A2: q
• A3: r
• A4: u
• A5: A1 ⇒ t
• A6: A1, A2 → s
• A7: A5, A3, A6 ⇒ v
• A8: A7, A4 → w
We have that
• Prem(A8) = {p, q, r, u}
• Conc(A8) = w
• Sub(A8) = {A1, A2, A3, A4, A5, A6, A7, A8}
• DefRules(A8) = {p ⇒ t;  s, r, t ⇒ v}
• TopRule(A8) = v, u → w
Figure 1. An argument.
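Definition 3.6 lends itself to a direct recursive encoding. The following Python sketch (illustrative, not the paper's code; the `Arg` dataclass and its field names are my own) rebuilds the arguments of Example 3.7 and computes Prem, Sub and DefRules from the structure.

```python
# Sketch: arguments as immutable trees; Prem, Sub and DefRules are computed
# recursively as in Definition 3.6. A rule is (antecedents, consequent, kind).
from dataclasses import dataclass

@dataclass(frozen=True)
class Arg:
    conc: str
    subs: tuple = ()      # immediate subarguments (empty for a premise)
    rule: tuple = None    # top rule, or None for a premise

    def prem(self):
        # premises are the leaf conclusions of the tree
        if not self.subs:
            return {self.conc}
        return set().union(*(s.prem() for s in self.subs))

    def sub(self):
        # all subarguments, including the argument itself
        return {self}.union(*(s.sub() for s in self.subs))

    def defrules(self):
        own = {self.rule} if self.rule and self.rule[2] == "defeasible" else set()
        return own.union(*(s.defrules() for s in self.subs))

# Example 3.7:
A1, A2, A3, A4 = Arg("p"), Arg("q"), Arg("r"), Arg("u")
A5 = Arg("t", (A1,), (("p",), "t", "defeasible"))
A6 = Arg("s", (A1, A2), (("p", "q"), "s", "strict"))
A7 = Arg("v", (A5, A3, A6), (("s", "r", "t"), "v", "defeasible"))
A8 = Arg("w", (A7, A4), (("v", "u"), "w", "strict"))

print(sorted(A8.prem()))    # ['p', 'q', 'r', 'u']
print(len(A8.sub()))        # 8
print(len(A8.defrules()))   # 2
```

The three printed values match Prem(A8), Sub(A8) and DefRules(A8) as listed in the example.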
#### Definition 3.8 (argument properties)
An argument A is
• strict if DefRules(A) = ∅;
• defeasible if DefRules(A) ≠ ∅;
• firm if Prem(A) ⊆ Kn;
• plausible if Prem(A) ⊈ Kn.
We write S ⊢ φ if there exists a strict argument for φ with all premises taken from S, and S |∼ φ if there exists a defeasible argument for φ with all premises taken from S.
##### Example 3.9
In Example 3.7 the argument A2 is strict and firm, while A1, A3, A4 and A6 are strict and plausible, and A5, A7 and A8 are defeasible and plausible. Furthermore, we have that K ⊢ p, K ⊢ q, K ⊢ r, K ⊢ u, K ⊢ s and K |∼ t, K |∼ v, K |∼ w.
(From hereon, the theory will be left implicit if there is no danger for confusion.)
Now that the notion of an argument has been defined, orderings on arguments can be considered. Below, ≼ is a partial preorder on arguments such that A ≼ B means that B is at least as 'good' as A. As usual, A ≺ B means that A ≼ B and B ⋠ A.
In Section 6, two ways will be discussed to define ≼ as a function of the orderings ≤ on Rd and ≤′ on K. However, the present framework allows for any partial preorder on arguments that satisfies two basic assumptions (taken from Vreeswijk (1993)).
##### Definition 3.10
Let A be a set of arguments. Then a partial preorder ≼ on A is an argument ordering iff
• (1) if A is firm and strict and B is defeasible or plausible, then B ≺ A;
• (2) if A = A1, …, An → ψ, then for all 1 ≤ i ≤ n, A ≼ Ai and, for some 1 ≤ i ≤ n, Ai ≼ A.
(Vreeswijk also assumes that an argument cannot be stronger than its weakest subargument but in Section 6 the so-called ‘last-link’ principle will be discussed, which violates this assumption.) The first condition says that strict-and-firm arguments are stronger than all other arguments, while the second condition says that a strict inference cannot make an argument weaker or stronger.
#### Definition 3.11 (argumentation theories)
An argumentation theory is a triple AT = (AS, KB, ≼), where AS is an argumentation system, KB a knowledge base in AS and ≼ an argument ordering on the set of all arguments that can be constructed from KB in AS (below called the set of arguments on the basis of AT).
### 3.3. Attack and defeat
Dung's use of the term ‘attack’ might at first sight lead to the belief that Dung's framework has no place for preferences. However, Dung's attack relation can also be seen as abstracting from the use of preferences: in this view, an attack relation in his framework may be the result of applying preferences to a syntactic conflict. This view on Dung's attack relation was, to my knowledge, first used by Prakken and Sartor (1997), it was also employed by Amgoud and Cayrol (2002) and it was the basis of Bench-Capon's (2003) value-based AFs. It was also the reason why Prakken and Sartor (1997) and Prakken and Vreeswijk (2002) replaced Dung's term ‘attack’ with ‘defeat’, to reflect that it may incorporate evaluative considerations. This convention will also be adopted in the present paper, while the term ‘attack’ will be reserved for non-evaluative syntactic notions of conflict. The idea then is that defeat is determined by attack plus preference (except in some cases, where attack automatically leads to defeat).
The notion of a defeasible inference rule naturally leads to two notions of rebutting and undercutting attack, introduced by Pollock (1974) and first formalised by Pollock (1987). The third kind of attack, premise attack (in this paper called undermining), is a natural addition (and for deductive inferences it is the only kind of attack) but highlights the philosophical distinction between plausible and defeasible reasoning discussed above. It was independently introduced by Vreeswijk (1993, chap. 8) and Elvang-Göransson, Fox, and Krause (1993). In line with Prakken and Sartor (1997), rebutting and undercutting attacks can also be launched on subarguments. This is essential in making the system an instantiation of Dung's abstract framework.
#### 3.3.1. Attack
First the ways in which arguments can be attacked are defined. Recall that these are just syntactic categories and do not reflect any preference between arguments. The first way of attack corresponds to the case where one argument uses a defeasible rule of which another argument says that it does not apply to the case at hand. Its definition assumes that inference rules can be named in the object language; the precise nature of this naming convention will be left implicit.
### Definition 3.12 (undercutting attack)
Argument A undercuts argument B (on B′) iff Conc(A) ∈ B′‾ for some B′ ∈ Sub(B) of the form B1, …, Bn ⇒ ψ.
##### Example 3.13
In Example 3.7, argument A8 can be undercut in two ways: by an argument with a conclusion in A5‾, which undercuts A8 on A5, and by an argument with a conclusion in A7‾, which undercuts A8 on A7.
Undercutting attackers only say that there is some exceptional situation in which a defeasible inference rule cannot be applied, without drawing the opposite conclusion. Rebutting attacks do the latter: they provide a contrary or contradictory conclusion for a defeasible (sub-)conclusion of the attacked argument.
### Definition 3.14 (rebutting attack)
Argument A rebuts argument B (on B′) iff Conc(A) ∈ φ‾ for some B′ ∈ Sub(B) of the form B1, …, Bn ⇒ φ. In such a case, A contrary-rebuts B iff Conc(A) is a contrary of φ.
##### Example 3.15
In Example 3.7, argument A8 can be rebutted on A5 with an argument for a formula in t‾ and on A7 with an argument for a formula in v‾. Moreover, if t′ = −t, then A5 in turn rebuts any argument for t′ with a defeasible top rule. However, A8 itself does not rebut that argument, except in the special case where w ∈ t′‾. This shows that rebutting attack is not symmetric, for three reasons: the rebuttal can have a strict top rule, rebutting can be contrary-rebutting, and rebutting can be launched on a subargument. However, the present example also shows that in the latter case, if the rebutting attack has a defeasible top rule and is not of the contrary-rebutting kind, the directly rebutted subargument in turn rebuts its attacker.
The final way of attack is an attack on a (non-axiom) premise.
### Definition 3.16 (undermining attack)
Argument A undermines B (on φ) iff Conc(A) ∈ φ‾ for some φ ∈ Prem(B) ∖ Kn. In such a case, argument A contrary-undermines B iff Conc(A) is a contrary of φ or if φ ∈ Ka.
##### Example 3.17
In Example 3.7, argument A8 can be undermined with an argument whose conclusion is in p‾, r‾ or u‾. If that attacker has a defeasible top rule and, say, conclusion −p and does not contrary-undermine A8, then p as an argument in turn rebuts the attacker.
The following example (based on Example 4 of Caminada and Amgoud (2007)) illustrates the interplay between strict and defeasible rules in rebutting attack.
##### Example 3.18
• A1: WearsRing
• A2: A1 ⇒ Married
• A3: A2 → ¬Bachelor
• B1: Partyanimal
• B2: B1 ⇒ Bachelor
• B3: B2 → ¬Married
A3 rebuts B3 on its subargument B2 while B3 rebuts A3 on its subargument A2. Note that A2 does not rebut B3, since B3 applies a strict rule; likewise for B2 and A3.
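The asymmetry just noted can be checked mechanically. The following Python sketch (illustrative only; the dict encoding and '~' for classical negation are my own) implements the rebutting-attack test of Definition 3.14, restricted to contradictories, and reproduces the attack pattern of Example 3.18.

```python
# Sketch: rebutting attack in Example 3.18. An argument is a dict holding its
# conclusion, the kind of its top rule and its immediate subarguments.

def subargs(arg):
    # the argument itself plus, recursively, all its subarguments
    yield arg
    for s in arg["subs"]:
        yield from subargs(s)

def neg(p):
    return p[1:] if p.startswith("~") else "~" + p

def rebuts(A, B):
    # A rebuts B iff Conc(A) contradicts the conclusion of some subargument
    # of B whose top rule is defeasible (Definition 3.14).
    return any(s["rule"] == "defeasible" and s["conc"] == neg(A["conc"])
               for s in subargs(B))

A1 = {"conc": "WearsRing", "rule": "premise", "subs": []}
A2 = {"conc": "Married", "rule": "defeasible", "subs": [A1]}
A3 = {"conc": "~Bachelor", "rule": "strict", "subs": [A2]}
B1 = {"conc": "Partyanimal", "rule": "premise", "subs": []}
B2 = {"conc": "Bachelor", "rule": "defeasible", "subs": [B1]}
B3 = {"conc": "~Married", "rule": "strict", "subs": [B2]}

print(rebuts(A3, B3), rebuts(B3, A3))  # True True
print(rebuts(A2, B3), rebuts(B2, A3))  # False False
```

The second line confirms that A2 does not rebut B3 (and B2 does not rebut A3), since the only subarguments with matching conclusions, B3 and A3 themselves, have strict top rules.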
#### 3.3.2. Defeat
Now that we know how arguments can be attacked, the argument ordering can be used to define which attacks result in defeat. For undercutting attack, no preferences will be needed to make it result in defeat, since otherwise a weaker undercutter and its stronger target might be in the same extension. This would be strange since then the extension contains an argument that applies an inference rule of which another argument in the same extension says that it should not be applied.4 The same holds for the other two ways of attack as far as they involve contraries (i.e. non-symmetric conflict relations between formulas). The reason for this is that otherwise if a rebutting or undermining attacker is weaker than its target, both may be in the same extension. For the remaining forms of attack, the argument ordering will be used to determine whether they result in defeat.
### Definition 3.19 (successful rebuttal)
Argument A successfully rebuts argument B if A rebuts B on B′ and either A contrary-rebuts B or A ⊀ B′.
This definition determines whether a rebutting attack is successful by comparing the conflicting arguments at the points where they conflict. Thus, in Example 3.18, the conflict between A3 and B3 is resolved by comparing A3 with B2 and comparing B3 with A2. Now if B2 ≺ A3 (for example, since the married-rule is given priority over the bachelor-rule), then A3 successfully rebuts B2 and B3 while B3 does not successfully rebut A2 or A3. If, in contrast, A2 ≼ B3 and B2 ≼ A3, then both A3 and B3 successfully rebut each other (while A3 still successfully rebuts B2 and not vice versa, and likewise for B3 and A2). Note also that if A3 is deleted from the example, then if B3 ≺ A2, no argument in the example is defeated. This may at first sight seem counterintuitive, but it is due to the fact that the example violates closure of Rs under transposition (cf. Section 5).
As noted by Caminada and Amgoud (2007), Example 3.18 also illustrates why Definitions 3.14 and 3.19 should not allow that a defeasible argument with a strict top rule can be (successfully) rebutted on its final conclusion. The reason is that otherwise, if all defeasible rules in the example are of equal preference, the set {A1, A2, B1, B2} is admissible, which violates the rationality postulate of indirect consistency (see Section 6).
### Definition 3.20 (successful undermining)
Argument A successfully undermines B if A undermines B on φ and either A contrary-undermines B or A ⊀ φ.
This definition exploits that an argument premise is also defined to be a subargument.
In Example 3.7, any argument for a formula in r‾ successfully undermines A8, since it contrary-undermines it given that r ∈ Ka. The same holds for any argument for a contrary of p or u, while for arguments for contradictories of p or u this depends on the argument ordering (which may in turn depend on the ordering ≤′ on K; see Definitions 6.14 and 6.17).
It remains to be discussed how the framework should deal with arguments that have issue premises. As explained above, the idea is that arguments with issue premises are always unacceptable. There are various ways to formalise this idea. One would be to let a special designated argument, or perhaps all strict-and-firm arguments, defeat any argument with an issue premise (as in Modgil (2009) and Prakken and Sartor (1997)). Here another solution is adopted: an argument can defeat another only if it has no issue premises. Then in Definition 2.1, only sets B with no issue premises will be considered, so that no argument with issue premises is in any extension.
The three defeat relations can now be combined into an overall definition of ‘defeat’.
### Definition 3.21 (defeat)
Argument A defeats argument B iff no premise of A is an issue and A undercuts or successfully rebuts or successfully undermines B. Argument A strictly defeats argument B if A defeats B and B does not defeat A.
In the literature other combinations of these kinds of attack have been considered. For example, Prakken and Sartor (1997) (who have no undermining) give precedence to undercutting defeat over rebutting defeat, so that if A successfully undercuts B while B successfully rebuts A, nevertheless A strictly defeats B. It remains to be investigated how crucial the present definition is for the results below.
Finally, argumentation theories can be linked to Dung-style argumentation frameworks.
### Definition 3.22 (AF)
An abstract argumentation framework AF corresponding to an argumentation theory AT is a pair ⟨A, Def⟩ such that:
• A is the set of arguments on the basis of AT as defined by Definition 3.6,
• Def is the relation on A given by Definition 3.21.
To leave arguments with issue premises out of any extension, Definition 2.1 should now start with ‘Let B be a conflict-free set of arguments that have no issue premises …’.
It is now also possible to define a consequence notion for well-formed formulas. Several definitions are possible. One is as follows.
### Definition 3.23 (acceptability of conclusions)
For any semantics S, any argumentation theory AT and any formula φ ∈ L of AT:
• (1) ϕ is skeptically S-acceptable in AT if and only if all S-extensions of AT contain an argument with conclusion ϕ;
• (2) ϕ is credulously S-acceptable in AT if and only if there exists an S-extension of AT that contains an argument with conclusion ϕ.
An alternative definition of skeptical acceptability is
• (1) ϕ is skeptically S-acceptable in AT if and only if there exists an argument with conclusion ϕ that is contained in all S-extensions of AT.
While the original definition allows that different extensions contain different arguments for a skeptical conclusion, the alternative definition requires that there is one argument for it that is in all extensions.
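The original two notions of Definition 3.23 can be sketched directly over the sets of conclusions of the extensions. The Python below is illustrative only (the function names and the representation of extensions as conclusion sets are my own); note that this representation cannot express the alternative skeptical notion, which quantifies over arguments rather than conclusions.

```python
# Sketch: skeptical vs credulous acceptability (Definition 3.23), with each
# extension represented by the set of conclusions of its arguments.

def skeptically_acceptable(phi, extensions):
    # phi is concluded by some argument in every extension
    return all(phi in ext for ext in extensions)

def credulously_acceptable(phi, extensions):
    # phi is concluded by some argument in at least one extension
    return any(phi in ext for ext in extensions)

exts = [{"p", "q"}, {"p", "r"}]
print(skeptically_acceptable("p", exts))  # True
print(skeptically_acceptable("q", exts))  # False
print(credulously_acceptable("q", exts))  # True
```

As the example shows, credulous acceptability is strictly weaker: q is in one extension but not the other, while p is in both.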
## 4. Using the framework: domain-specific vs. general inference rules
The framework defined in the previous section can be used in two ways, depending on whether the inference rules are domain-specific or not. The inference rules of argumentation systems are not part of the logical language L but are metalevel constructs. The usual practice in standard logic is that inference rules express general patterns of reasoning, such as modus ponens, universal instantiation and so on. Yet Caminada and Amgoud (2007) use the inference rules to represent domain knowledge, in line with a long tradition in non-monotonic logic of using domain-specific inference rules (e.g. Reiter 1980; Loui 1987; Nute 1994; Garcia and Simari 2004). The difference between both approaches is illustrated with the following example. Consider the information that all Frisians are Dutch, that the Dutch are usually tall and that Wiebe is Frisian. With domain-specific inference rules, this can in a propositional language be represented as follows:
• Rs = {Frisian → Dutch}
• Rd = {Dutch ⇒ Tall}
• Kp = {Frisian}
The argument that Wiebe is tall then has the form as displayed on the left in Figure 2.
##### Figure 2.
Domain-specific vs. general inference rules.
With general inference rules, the two rules must instead be represented in the object language L. The first one can be represented with the material implication but for the second one a connective for defeasible conditionals must be added to L and a defeasible modus-ponens inference rule must be added for this connective. For example:
Rs = {φ, φ ⊃ ψ → ψ (for all φ, ψ ∈ L)}
Rd = {φ, φ ⇝ ψ ⇒ ψ (for all φ, ψ ∈ L)}
Kp = {Frisian ⊃ Dutch, Dutch ⇝ Tall, Frisian}
(writing ⊃ for material implication and ⇝ for the added defeasible conditional)
Then the argument that Wiebe is tall has the form as displayed on the right in Figure 2.
Although the present system can be used both ways, both Vreeswijk and Pollock intended their inference rules to express general patterns of reasoning, which is much more in line with the role of inference rules in standard logic. Indeed, an important part of John Pollock's work was the study of general patterns of (epistemic) defeasible reasoning, which he called prima facie reasons. He formalised prima facie reasons for reasoning patterns involving perception, memory, induction, temporal persistence and the statistical syllogism, as well as undercutters for these reasons. The ASPIC framework allows for such general use of inference rules, by expressing the rules through schemes (in the logical sense, with metavariables ranging over L). When used thus, the framework becomes a general framework for argumentation with structured arguments. It thus is also suitable for modelling reasoning with argument schemes, which currently is an important topic in the computational study of argument (cf. Walton et al. 2008). Argument schemes are stereotypical non-deductive patterns of reasoning, consisting of a set of premises and a conclusion that is presumed to follow from them. Uses of argument schemes are evaluated in terms of critical questions specific to the scheme. An example of an epistemic argument scheme is the scheme from expert opinion (Walton et al. 2008, p. 310):
• E is an expert in domain D
• E asserts that P is true
• P is within D
• P is true
This scheme has six critical questions:
• (1) How credible is E as an expert source?
• (2) Is E an expert in domain D?
• (3) What did E assert that implies P?
• (4) Is E personally reliable as a source?
• (5) Is P consistent with what other experts assert?
• (6) Is E's assertion of P based on evidence?
A natural way to formalise reasoning with argument schemes is to regard them as defeasible inference rules and to regard critical questions as pointers to counterarguments (this approach was earlier defended by Bex, Prakken, Reed, and Walton (2003) and Verheij (2003b). More precisely, the three kinds of attack on arguments correspond to three kinds of critical questions of argument schemes. Some critical questions challenge an argument's premise and therefore point to undermining attacks, others point to undercutting attacks, while again other questions point to rebutting attacks. In the scheme from expert opinion questions (2) and (3) point to underminers (of, respectively, the first and second premise), questions (4), (1) and (6) point to undercutters (the exceptions that the expert is biased or incredible for other reasons and that he makes scientifically unfounded statements) while question (5) points to rebutting applications of the expert opinion scheme. Thus, we also see that Pollock's prima facie reasons are examples of epistemic argument schemes and that his undercutters are negative answers to one kind of critical question.
Now one benefit of having undermining attack in addition to rebutting and undercutting attack can be discussed in more detail: if the inference rules are supposed to be domain-independent, then representing facts with non-conditional inference rules (as done by Caminada and Amgoud (2007)) does not make sense.
## 5. Transposition and contraposition
Before it can be studied to what extent the present framework satisfies the rationality postulates of Caminada and Amgoud (2007), first some technicalities concerning strict inference rules must be discussed. To start with, Caminada and Amgoud define the notions of a transposition of a strict rule and closure of sets of strict rules under transposition.
### Definition 5.1 (transposition)
A strict rule s is a transposition of φ1, …, φn → ψ iff s = φ1, …, φi−1, −ψ, φi+1, …, φn → −φi for some 1 ≤ i ≤ n.
### Definition 5.2 (transposition operator)
Let Rs be a set of strict rules. Cltp(Rs) is the smallest set such that:
• Rs ⊆ Cltp(Rs) and
• if s ∈ Cltp(Rs) and t is a transposition of s, then t ∈ Cltp(Rs).
We say that Rs is closed under transposition iff Cltp(Rs) = Rs.
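Cltp can be computed by saturation: keep adding transpositions until nothing new appears. The sketch below is illustrative (not the paper's code); rules are encoded as (antecedents, consequent) pairs with a frozenset of antecedents, and '~' stands for classical negation.

```python
# Sketch: transposition closure Cltp (Definitions 5.1-5.2) by saturation.

def neg(p):
    return p[1:] if p.startswith("~") else "~" + p

def transpositions(rule):
    ants, cons = rule
    for a in ants:
        # replace one antecedent by the negated consequent and
        # conclude the negation of that antecedent (Definition 5.1)
        yield ((ants - {a}) | frozenset({neg(cons)}), neg(a))

def cltp(rules):
    closed, frontier = set(rules), set(rules)
    while frontier:
        frontier = {t for r in frontier for t in transpositions(r)} - closed
        closed |= frontier
    return closed

# The rule p, r -> s has two transpositions: ~s, r -> ~p and p, ~s -> ~r.
r = (frozenset({"p", "r"}), "s")
print(len(cltp({r})))  # 3
```

Saturation terminates because transposing a transposition only permutes which literal sits in the consequent, so the closure of a finite rule set is finite.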
Now the subclass of argumentation systems closed under transposition can be defined.
### Definition 5.3 (closure under transposition)
An argumentation system (L, ‾, R, ≤) is closed under transposition if Rs = Cltp(Rs). An argumentation theory is closed under transposition if its argumentation system is.
Caminada and Amgoud (2007) also define the closure of a set of formulas under application of strict rules.
### Definition 5.4 (closure of a set of formulas)
Let P ⊆ L. The closure of P under the set Rs of strict rules, denoted ClRs(P), is the smallest set such that:
• P ⊆ ClRs(P);
• if φ1, …, φn → ψ ∈ Rs and φ1, …, φn ∈ ClRs(P), then ψ ∈ ClRs(P).
If P = ClRs(P), then P is said to be closed.
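ClRs is a least-fixpoint construction and can be sketched as a simple saturation loop (illustrative Python, not the paper's code; the rule encoding with frozenset antecedents is my own):

```python
# Sketch: Cl_Rs(P), the closure of a set of formulas under strict rules
# (Definition 5.4), computed as a least fixpoint.

def close(P, rules):
    closed = set(P)
    while True:
        # fire every rule whose antecedents are all already derived
        derivable = {cons for ants, cons in rules if ants <= closed} - closed
        if not derivable:
            return closed
        closed |= derivable

rules = {(frozenset({"p"}), "q"), (frozenset({"p", "q"}), "r")}
print(sorted(close({"p"}, rules)))  # ['p', 'q', 'r']
print(sorted(close({"q"}, rules)))  # ['q']
```

The second call shows that closure only fires rules whose full antecedent set is derived: from {q} alone, neither rule applies.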
It is also relevant whether strict inference satisfies contraposition.
### Definition 5.5 (closure under contraposition)
An argumentation system is closed under contraposition if for all S ⊆ L, all s ∈ S and all φ it holds that if S ⊢ φ then S ∖ {s} ∪ {−φ} ⊢ −s. An argumentation theory is closed under contraposition if its argumentation system is.
Closure under transposition does not imply closure under contraposition, as shown by the following counterexample (in all examples below, sets which are empty are not listed).
##### Example 5.6
Let Rs = Cltp({p → q;  p → r;  p, r → s}). Then {p} ⊢ s but {−s} ⊬ −p.
Conversely, closure under contraposition does not imply closure under transposition either, as shown by the following counterexample.
##### Example 5.7
Let Rs = {p → q;  ¬q → r;  r → ¬p;  ¬r → q;  p → ¬r}. Then Rs is not closed under transposition, since it does not include ¬q → ¬p. Still we have
{p} ⊢ q and {¬q} ⊢ ¬p;  {p} ⊢ ¬r and {r} ⊢ ¬p;
{¬r} ⊢ q and {¬q} ⊢ r;  {¬q} ⊢ r and {¬r} ⊢ q.
So Rs satisfies contraposition.
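The derivations listed in Example 5.7 can be verified mechanically. Since every rule in the example has a single antecedent, S ⊢ φ reduces to membership of φ in the closure of S under the rules; the Python below is an illustrative check (the '~' encoding of negation and the `close` helper are my own):

```python
# Sketch: verifying the contraposition instances of Example 5.7 by closure
# under single-antecedent strict rules.

def close(P, rules):
    closed = set(P)
    while True:
        new = {c for a, c in rules if a in closed} - closed
        if not new:
            return closed
        closed |= new

# Rs = {p -> q; ~q -> r; r -> ~p; ~r -> q; p -> ~r}
rules = {("p", "q"), ("~q", "r"), ("r", "~p"), ("~r", "q"), ("p", "~r")}

assert "q" in close({"p"}, rules) and "~p" in close({"~q"}, rules)
assert "~r" in close({"p"}, rules) and "~p" in close({"r"}, rules)
assert "q" in close({"~r"}, rules) and "r" in close({"~q"}, rules)
# ... and yet Rs is not closed under transposition: ~q -> ~p is not in Rs
assert ("~q", "~p") not in rules
print("all listed derivations hold")
```

For instance, {¬q} ⊢ ¬p goes via the chain ¬q → r, r → ¬p, even though the transposition ¬q → ¬p itself is absent from Rs.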
However, contraposition does imply transposition in the following special case.
##### Proposition 5.8
Consider any argumentation theory with L closed under classical negation and ‾ defined correspondingly. Then if Rs consists of all valid propositional inferences, Rs is closed under contraposition and transposition.
Note that the proposition does not hold if the condition 'Rs consists of all valid propositional inferences' is changed to '⊢ corresponds to propositional logic'. A counterexample is any argumentation theory with a sound and complete axiomatisation of propositional logic with modus ponens as the only inference rule.
## 6. Rationality postulates
Dung's semantics can be seen as rationality constraints on evaluating arguments in abstract argumentation frameworks. The refinement of his abstract approach with structured arguments naturally leads to the question whether this additional structure gives rise to additional rationality constraints. Caminada and Amgoud (2007) gave a positive answer to this question by proposing a number of ‘rationality postulates’ for what they called ‘rule-based argumentation’. Four of their postulates formulate constraints on any extension of an argumentation framework corresponding to an argumentation theory:5
• Closure under subarguments: for every argument in an extension also all its subarguments are in the extension.
• Closure under strict rules: the set of conclusions of all arguments in an extension is closed under strict-rule application.
• Direct consistency: the set of conclusions of all arguments in an extension is consistent.
• Indirect consistency: the closure of the set of conclusions of all arguments in an extension under strict-rule application is consistent.
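To make the last three postulates concrete, here is a small sketch (the function names and encodings are mine): closure under strict-rule application as a fixpoint, and consistency via a contrariness map. With Rs = {q → ¬p}, the conclusion set {p, q} is directly consistent but not indirectly consistent:

```python
# Hedged sketch: check the closure and consistency postulates on the set
# of conclusions of an extension.
def close_under_strict(rules, concs):
    """Smallest superset of `concs` closed under strict-rule application."""
    closed = set(concs)
    changed = True
    while changed:
        changed = False
        for ants, cons in rules:
            if cons not in closed and all(a in closed for a in ants):
                closed.add(cons)
                changed = True
    return closed

def consistent(concs, contrary):
    """Direct consistency: no formula together with one of its contraries."""
    return not any(c in concs for f in concs for c in contrary.get(f, ()))

Rs = [(("q",), "-p")]                      # the strict rule q -> not-p
contrary = {"p": ("-p",), "-p": ("p",)}
E_concs = {"p", "q"}
assert consistent(E_concs, contrary)       # directly consistent...
closed = close_under_strict(Rs, E_concs)
assert not consistent(closed, contrary)    # ...but not indirectly
```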
Caminada and Amgoud (2007) proved for their version of the ASPIC framework that the first two postulates are always satisfied while the two consistency postulates are satisfied if the set of strict rules is consistent and closed under transposition. However, their version of the ASPIC framework is considerably simpler than the present one. First, it has no knowledge base and facts must be represented as inference rules with empty antecedents; because of this, arguments cannot be undermined. Furthermore, it assumes just a basic ordering on arguments, according to which strict arguments are strictly preferred over defeasible ones and nothing else. Finally, it has a special case of the present − function from L to 2^L, corresponding to classical negation. The task now is to investigate to what extent the results of Caminada and Amgoud (2007) can be generalised to the present case.
The postulates of closure under subarguments and strict-rule application still hold unconditionally for the present framework. (Here, that a given semantics is subsumed by complete semantics means that any of its extensions is also a complete extension.)
##### Proposition 6.1
Let <A, Def> be an argumentation framework as defined in Definition 3.22 and E any of its extensions under a given semantics subsumed by complete semantics. Then for all A ∈ E: if A′ ∈ Sub(A) then A′ ∈ E.
##### Proposition 6.2
Let <A, Def> be an argumentation framework corresponding to an argumentation theory and E any of its extensions under a given semantics subsumed by complete semantics. Then {Conc(A) | A ∈ E} = ClRs({Conc(A) | A ∈ E}).
As for the two consistency postulates, Caminada and Amgoud's results do not generalise unconditionally. Consider the following example.
##### Example 6.3
Let Rd = {⇒ p; ⇒ q} and Rs = {q → ¬p; p → ¬q}. Then we have
• A: ⇒ p
• B′: ⇒ q and B: B′ → ¬p
Now assume that B ≺ A, so B does not defeat A. However, A does not defeat B either, since B's last inference is strict. At first sight, it would seem that A can be extended with the transposition of q → ¬p (i.e. with p → ¬q) to an argument
A+: A → ¬q
that rebuts B's subargument B′ for q. Then, since by condition (2) of Definition 3.10 a strict continuation of an argument cannot make it weaker, B′ ≺ A+, so A+ defeats B′. Moreover, by the same conditions any argument defeats A if and only if it defeats A+, so if A is in an extension E then by Proposition 6.2 A+ will be in E, and therefore B will not be in E since extensions are conflict-free.
However, this line of reasoning does not hold without a further assumption on the argument ordering. Consider a more complex variant of Example 6.3.
##### Example 6.4
Let Rd = {⇒ p; ⇒ q; ⇒ r} and Rs = {q,r → ¬p; q,p → ¬r; p,r → ¬q}. Then we have
• A: ⇒ p
• B′: ⇒ q, B″: ⇒ r and B: B′, B″ → ¬p
The problem is that A cannot be extended with any transposition of q,r → ¬p to obtain A+ unless it is combined with either B′ or B″, but then A is extended with a defeasible rule, so A+ might be weaker than A. This problem arises whenever B has more than one maximal defeasible or plausible subargument.
However, assuming contraposition or transposition, direct consistency can still be proved if it can also be assumed that there is a way to extend A with all but one of B's maximal defeasible subarguments that is not weaker than the remaining one. In our example, this means that either A extended with B′ is not weaker than B″ or A extended with B″ is not weaker than B′. Intuitively, this assumption seems acceptable given that A is stronger than both B′ and B″. It is therefore to be expected that it will be satisfied by many reasonable argument orderings. Since similar situations can arise with undermining attack, the notion of a maximal fallible subargument is needed.
### Definition 6.5 (maximal fallible subarguments)
For any argument A, an argument A′ ∈ Sub(A) is a maximal fallible subargument of A if
• (1) A′'s final inference is defeasible or A′ is a non-axiom premise; and
• (2) there is no A″ ∈ Sub(A) such that A″ ≠ A′, A′ ∈ Sub(A″) and A″ satisfies condition (1).
The set of maximal fallible subarguments of an argument A will be denoted by M(A).
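For illustration, M(A) can be computed over a simple tree encoding of arguments (the dict representation and field names below are mine, not the paper's); the final assertion reproduces M(B) = {B′, B″} from Example 6.4:

```python
def subargs(arg):
    """All subarguments of `arg`, including `arg` itself."""
    out = [arg]
    for s in arg["subs"]:
        out.extend(subargs(s))
    return out

def fallible(arg):
    """Condition (1): final inference defeasible, or a non-axiom premise."""
    return arg.get("defeasible", False) or arg.get("non_axiom_premise", False)

def maximal_fallible(arg):
    """M(A): fallible subarguments not properly inside another fallible one."""
    subs = subargs(arg)
    return [a for a in subs
            if fallible(a)
            and not any(b is not a and fallible(b)
                        and any(x is a for x in subargs(b))
                        for b in subs)]

# Example 6.4's B = B', B'' -> not-p, with defeasible B': =>q and B'': =>r:
B1 = {"name": "B'", "defeasible": True, "subs": []}
B2 = {"name": "B''", "defeasible": True, "subs": []}
B = {"name": "B", "subs": [B1, B2]}          # strict top rule
assert {a["name"] for a in maximal_fallible(B)} == {"B'", "B''"}
```

Note that when an argument's own top rule is defeasible, it is its own single maximal fallible subargument, matching Corollary 6.13 below.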
##### Corollary 6.6
For any argument A, it holds that Conc(M(A)) ⊢ Conc(A).
### Definition 6.7 (reasonable argument orderings)
Argument ordering ⪯ is reasonable if it satisfies the following condition. Let A and B be arguments with contradictory conclusions such that B ⊀ A. Then there exists a Bi ∈ M(B) and an A+ with A ∈ Sub(A+) such that Conc(A+) = −Conc(Bi) and A+ ⊀ Bi.
A final problem to deal with is that in Example 6.3, Conc(A) could be a contrary of Conc(B); the problem is that the solution with closure under contraposition and transposition does not apply to this case. Therefore, the focus must be restricted to argumentation theories that respect the intended use of assumptions and contraries.
##### Definition 6.8
An argumentation theory is well formed if:
• (1) no consequent of a defeasible rule is a contrary of the consequent of a strict rule;
• (2) if φ ∈ Ka and φ is a contrary of ψ, then ψ ∉ Kn ∪ Kp and ψ is not the conclusion of a rule in R.
Condition (2) in effect says that assumptions can only be contraries of other assumptions. An example of an argumentation theory that is not well formed is
Rs = {p → q}, Rd = {r ⇒ s; t ⇒ u}, Kp = {p, r}, Ka = {v}
and such that s is a contrary of q and v is a contrary of u. Then condition (1) of Definition 6.8 is violated since we have arguments A: p → q and B: r ⇒ s. Moreover, condition (2) is violated since v ∈ Ka and t ⇒ u ∈ Rd.
Now it can be proved that under certain conditions an argumentation theory satisfies the postulate of direct consistency.
##### Theorem 6.9
Let <A, Def> be an argumentation framework corresponding to a well-formed argumentation theory that is closed under contraposition or transposition and has a reasonable argument ordering and a consistent ClRs(Kn), and let E be any of its extensions under a given semantics subsumed by complete semantics. Then the set {Conc(A) | A ∈ E} is consistent.
Caminada and Amgoud (2007) also prove that their system satisfies the postulate of indirect consistency. This follows from their Proposition 7, which says that if an argumentation theory satisfies closure and direct consistency, it also satisfies indirect consistency. Since in the present case, the conditions of the proof of direct consistency had to be strengthened, the same holds for indirect consistency.
##### Theorem 6.10
Let <A, Def> be an argumentation framework corresponding to a well-formed argumentation theory that is closed under contraposition or transposition and has a reasonable argument ordering and a consistent ClRs(Kn), and let E be any of its extensions under a given semantics subsumed by complete semantics. Then the set ClRs({Conc(A) | A ∈ E}) is consistent.
##### Corollary 6.11
If the conditions of Theorem 6.10 are satisfied, then for any extension E under a given semantics subsumed by complete semantics the set {φ | φ is a premise of an argument in E} is consistent.
Concluding this section, two intuitively plausible argument orderings will be shown to be reasonable, namely, the weakest-link and last-link orderings from Amgoud et al. (2006). The versions below are slightly revised to make the principles arguably more intuitive. Both orderings define a strict partial order ≺s on sets in terms of a partial preorder ⪯e on their elements, as follows: S1 ≺s S2 iff there exists an e1 ∈ S1 such that for all e2 ∈ S2 it holds that e1 <e e2.
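This set comparison says that S1 is strictly below S2 whenever S1 contains an element below everything in S2. A minimal sketch (the function name is mine), using integers for the element ordering:

```python
def set_less(s1, s2, lt):
    """S1 <s S2 iff some e1 in S1 is strictly below every e2 in S2."""
    return any(all(lt(e1, e2) for e2 in s2) for e1 in s1)

lt = lambda a, b: a < b                  # the element preorder, here on integers
assert set_less({1, 4}, {2, 3}, lt)      # 1 is below both 2 and 3
assert not set_less({2, 4}, {2, 3}, lt)  # no element is below all of {2, 3}
```

The comparison is driven by the weakest element of S1, which is why the orderings built on it are said below to compare sets on their weakest elements.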
The last-link principle prefers an argument A over another argument B if the last defeasible rules used in B are less preferred than the last defeasible rules in A or, in case both arguments are strict, if the premises of B are less preferred than the premises of A. The concept of ‘last defeasible rules’ is defined as follows and is essentially the same as Prakken and Sartor's (1997) notion of a ‘relevant set’.
### Definition 6.12 (last defeasible rules)
Let A be an argument.
• LastDefRules(A) = ∅ iff DefRules(A) = ∅.
• If A = A1, …, An ⇒ φ, then LastDefRules(A) = {Conc(A1), …, Conc(An) ⇒ φ}; otherwise LastDefRules(A) = LastDefRules(A1) ∪ … ∪ LastDefRules(An).
##### Corollary 6.13
LastDefRules(A) = {TopRule(A′) | A′ ∈ M(A)}.
An example with more than one last defeasible rule is with K = {p; q} and Rd = {p ⇒ r; q ⇒ s}. Then for an argument A for r∧s, we have LastDefRules(A) = {p ⇒ r; q ⇒ s}.
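The recursion in Definition 6.12 is easy to mechanise. A sketch under a home-made tree encoding (field names are mine), reproducing the two-rule example just given:

```python
def last_def_rules(arg):
    """LastDefRules(A): the top rule if it is defeasible, otherwise the
    union over the immediate subarguments (empty when DefRules(A) is)."""
    if arg.get("defeasible"):
        return {arg["rule"]}
    out = set()
    for s in arg["subs"]:
        out |= last_def_rules(s)
    return out

# K = {p; q}, Rd = {p => r; q => s}; A combines r and s with a strict rule:
Ap = {"rule": None, "subs": []}                            # premise p
Aq = {"rule": None, "subs": []}                            # premise q
Ar = {"rule": "p=>r", "defeasible": True, "subs": [Ap]}
As = {"rule": "q=>s", "defeasible": True, "subs": [Aq]}
A = {"rule": "r,s->r&s", "subs": [Ar, As]}                 # strict top rule
assert last_def_rules(A) == {"p=>r", "q=>s"}
```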
The above definition is now used to compare pairs of arguments as follows.
##### Definition 6.14
Let A and B be two arguments. Then A ≺ B iff either
• (1) condition (1) of Definition 3.10 holds or
• (2) LastDefRules(A) ≺s LastDefRules(B) or
• (3) LastDefRules(A) and LastDefRules(B) are empty and Prem(A) ≺s Prem(B).
(Amgoud et al. (2006) do not include the third condition, so if both arguments are strict, the ordering on the knowledge base is ignored.) This definition in effect compares sets on their weakest elements.
##### Proposition 6.15
The last-link argument ordering is reasonable.
Consider the following example (taken from Prakken 1997) on whether people misbehaving in a university library may be denied access to the library.
##### Example 6.16
Let Kp = {Snores; Professor} and Rd =
• {Snores ⇒r1 Misbehaves;
• Misbehaves ⇒r2 AccessDenied;
• Professor ⇒r3 ¬AccessDenied}.
Assume that Snores<Professor and r1<r2,r1<r3,r3<r2 and consider the following arguments.
A1: Snores, A2: A1 ⇒ Misbehaves, A3: A2 ⇒ AccessDenied
B1: Professor, B2: B1 ⇒ ¬AccessDenied
To resolve the conflict between A3 and B2, the rule sets to be compared are LastDefRules(A3) = {r2} and LastDefRules(B2) = {r3}. Since r3 < r2, we have that B2 ≺ A3, so A3 strictly defeats B2.
The weakest-link principle considers not the last but all uncertain elements in an argument. It prefers an argument A over an argument B if A is preferred to B on both their premises and their defeasible rules.
##### Definition 6.17
Let A and B be two arguments. Then A ≺ B iff either condition (1) of Definition 3.10 holds or
• (1) Prem(A) ≺s Prem(B) and
• (2) if DefRules(B) ≠ ∅, then DefRules(A) ≺s DefRules(B).
(Amgoud et al. (2006) do not have condition (2), so that with two strict arguments neither of them can be preferred.)
##### Proposition 6.18
The weakest-link argument ordering is reasonable.
##### Example 6.19
Consider again Example 6.16. With the weakest-link principle, the outcome is different. To resolve the conflict between A3 and B2, the rule sets to be compared are now DefRules(A3) = {r1, r2} and DefRules(B2) = {r3}. Since r1 < r3, we have that DefRules(A3) ≺s DefRules(B2). Moreover, since Snores < Professor, we also have that Prem(A3) ≺s Prem(B2). Hence, B2 now strictly defeats A3.
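The two outcomes on Example 6.16 can be replayed mechanically. The sketch below (tree encoding and names mine) computes LastDefRules and DefRules and applies the ≺s comparison with the example's priorities r1 < r2, r1 < r3, r3 < r2:

```python
def def_rules(arg):
    """All defeasible rules used anywhere in the argument."""
    out = {arg["rule"]} if arg.get("defeasible") else set()
    for s in arg["subs"]:
        out |= def_rules(s)
    return out

def last_def_rules(arg):
    """Only the last defeasible rules (Definition 6.12)."""
    if arg.get("defeasible"):
        return {arg["rule"]}
    out = set()
    for s in arg["subs"]:
        out |= last_def_rules(s)
    return out

def set_less(s1, s2, lt):
    """S1 <s S2 iff some e1 in S1 is below every e2 in S2."""
    return any(all(lt(e1, e2) for e2 in s2) for e1 in s1)

lt = lambda a, b: (a, b) in {("r1", "r2"), ("r1", "r3"), ("r3", "r2")}

A1 = {"subs": []}                                       # Snores
A2 = {"rule": "r1", "defeasible": True, "subs": [A1]}   # Misbehaves
A3 = {"rule": "r2", "defeasible": True, "subs": [A2]}   # AccessDenied
B1 = {"subs": []}                                       # Professor
B2 = {"rule": "r3", "defeasible": True, "subs": [B1]}   # not-AccessDenied

# Last link compares {r2} with {r3}: r3 < r2, so B2 is the weaker argument.
assert set_less(last_def_rules(B2), last_def_rules(A3), lt)
# Weakest link compares {r1, r2} with {r3}: r1 < r3, so now A3 is weaker.
assert set_less(def_rules(A3), def_rules(B2), lt)
```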
##### Example 6.20
r1 = WearsRing ⇒ Married and r2 = PartyAnimal ⇒ Bachelor
Note that since both arguments apply just one defeasible rule and no premise is attacked, the weakest- and last-link orderings produce the same result. Now if r1 < r2, we have that A3 strictly defeats B3 by successfully rebutting it on B2, while if neither r1 < r2 nor r2 < r1 then A3 and B3 defeat each other, since A3 successfully rebuts B3 on B2 while B3 successfully rebuts A3 on A2.
## 7.Self-defeat
As discussed by Pollock (1994) and Caminada and Amgoud (2007), self-defeating arguments can cause problems if argumentation systems are not carefully defined, particularly if they include standard propositional logic. In the present framework, two types of self-defeating arguments are possible: serial self-defeat occurs when an argument defeats one of its earlier steps, while parallel self-defeat occurs when the contradictory conclusions of two or more arguments are taken as the premises of a further strict inference. Pollock (1994) gives an example of serial self-defeat of the following form.
##### Example 7.1
Let Rd = {p ⇒ q}, Rs = {q → ¬A2} and K = {p}. Then, we have
A1: p, A2: A1 ⇒ q, A3: A2 → ¬A2
(Read p as ‘witness John says that he is unreliable’ and q as ‘witness John is unreliable’.) Argument A3 is self-defeating, since it undercuts itself on A2. This example is arguably handled properly by preferred and grounded semantics, which both have E = {A1} as the only extension.
One of Pollock's (1994) examples of parallel self-defeat has the following form.
##### Example 7.2
Let Rd = {p ⇒ q; r ⇒ ¬q; t ⇒ s} and K = {p, r, t}, while Rs contains all propositionally valid inferences. Then:
A1: p, A2: A1 ⇒ q
B1: r, B2: B1 ⇒ ¬q
C1: A2, B2 → q∧¬q, C2: C1 → ¬s
D1: t, D2: D1 ⇒ s
Here a problem arises since s can be any formula, so any defeasible argument unrelated to A2 or B2, such as D2, can, depending on the rule priorities, be rebutted by C2. Clearly, this is extremely harmful, since the existence of just a single case of mutual rebutting defeat, which is very common, could trivialise the system. In fact, of the semantics defined by Dung (1995), this is only a problem for grounded semantics. Since all preferred/stable extensions contain either A2 or B2, argument C2 is not in any of these extensions so D2 is. However, if neither of A2 and B2 strictly defeats the other, then neither of them is in the grounded extension, so that extension does not defend D2 against C2 and therefore does not contain D2.
Pollock (1994) also discusses the following variant of this example (with the same argumentation theory):
A1: p, A2: A1 ⇒ q, A3: A2 → q∨¬s
B1: r, B2: B1 ⇒ ¬q
C1: A3, B2 → ¬s
D1: t, D2: D1 ⇒ s
Again with grounded semantics, the problem is that s can be any formula, so any defeasible argument unrelated to A2 or B2 can be rebutted by C1.
According to Caminada (personal communication), the only way to solve this problem is to make parallel self-defeat impossible. One way to implement this solution is to disallow arguments with a contradictory set of subconclusions. However, this affects the proof of Theorems 6.9 and 6.10. The reason is that for such systems the argument A+ that according to Lemma A1 can be constructed sometimes has to have contradictory sub-conclusions, as the following example (with a system closed under transposition) shows.
##### Example 7.3
Let p ∈ Kn, q ∈ Ka and Rs = Cltp({p → t; q → r; q → s; r,s → ¬t}).
A1: p, A2: A1 → t
B1: q, B2: B1 → r, B3: B1 → s, B4: B2, B3 → ¬t
Now if A2 is to be extended to an argument A+ that undermines B4, then B1 must be included in A+.
A similar example for systems closed under contraposition is as follows.
##### Example 7.4
Let Kp = {p, q, ¬p, ¬q} and let Rs consist of all valid propositional inferences. Then
A1: p, A2: q, A3: A1, A2 → p∧q
B1: ¬p, B2: ¬q, B3: B1, B2 → ¬(p∧q)
Note that M(B3)=Prem(B3). Now any addition of a premise of B3 to Prem(A3) makes Prem(A3) inconsistent.
Since these problems only arise in particular argumentation systems and with particular semantics, no general solution will be pursued here; instead, such solutions are left for future research on instantiations of the framework. Note also that Examples 7.3 and 7.4 only contain strict rules, so that the problem may also arise in assumption-based frameworks, which will in the next section be proved to be a special case of the ASPIC framework.
## 8.The relation with assumption-based argumentation
After having presented his fully abstract approach to argumentation, Dung joined Kowalski, Toni and others in their development of a more concrete version of his approach (e.g. Bondarenko et al. 1997; Dung et al. 2006, 2007). In this approach, arguments essentially are sets of formulas called ‘assumptions’, from which conclusions can be drawn with strict inference rules. Arguments can be attacked with arguments that conclude the ‘contrary’ of one of their assumptions. In fact, the extensions defined by the various semantics of Bondarenko et al. (1997) are not sets of arguments but sets of assumptions. However, Dung et al. (2007) showed that an equivalent fully argument-based formulation can be given.
In this section, it will be shown that assumption-based argumentation is a special case of the present framework with only strict inference rules, only assumption-type premises and no preferences. The proof will be given for the argument-based version of Dung et al. (2007) and carries over to Bondarenko et al. (1997) by the equivalence result of Dung et al. (2007).
First the main definitions of ABA are recalled (in the formulation of Dung et al. (2007)).
### Definition 8.1 (Dung et al. 2007, Definition 2.3)
A deductive system is a pair (L,R) where
• L is a formal language consisting of countably many sentences, and
• R is a countable set of inference rules of the form α1, …, αn → α.⁶ Here α ∈ L is called the conclusion of the inference rule, α1, …, αn ∈ L are called the premises of the inference rule, and n ≥ 0.
### Definition 8.2 (Dung et al. 2007, Definition 2.5)
An assumption-based argumentation framework (ABF) is a tuple (L, R, A, ¯) where
• (L, R) is a deductive system,
• A ⊆ L, A ≠ ∅; A is the set of candidate assumptions,
• if α ∈ A, then there is no inference rule of the form α1, …, αn → α ∈ R,
• ¯ is a total mapping from A into L; ᾱ is called the contrary of α.
The third condition amounts to a restriction to so-called flat ABFs. This restriction is not entirely innocent, since in debates it may occur that someone first assumes a premise and, after it is defeated, constructs an argument for it, in an attempt to rebut the defeater. To make Dung et al.'s analysis apply to all stages of such a debate, assumptions should be deleted from 𝒜 as soon as they are supported with an argument.
Since the notion of an argument is central to the present concerns, the informal explanation of Dung et al. (2007, p. 646) will be quoted in (almost) full.
Deductions can be understood as proof trees: the root of the tree is labelled by the conclusion of the deduction and the leaves are labelled by the premises supporting the deduction. For every non-terminal node in the tree, there is an inference rule whose conclusion matches the sentence labelling the node, and the children of the node are labelled by the premises of the inference rule. (…) we define deductions as sequences of frontiers S1,,Sm of the proof trees. Each frontier is represented by a multi-set, in which the same sentence can have several occurrences, if it is generated more than once as a premise of different inference steps. In order to generate proof trees, a selection strategy is needed to identify which node to expand next. We formalise this selection strategy by means of a selection function, as in the formalisation of SLD resolution. A selection function, in this context, takes as input a sequence of multi-sets Si and returns as output a sentence occurrence in Si. We restrict the selection function so that if a sentence occurrence is selected in a multi-set in a sequence then it will not be selected again in any later multi-set in that sequence.
Essentially, a backward deduction thus presents one particular order in which an argument in the sense of Definition 3.6 can be constructed by reasoning backwards from the conclusion to the premises.
### Definition 8.3 (Dung et al. 2007, Definition 2.4)
Given a selection function f, a (backward) deduction of a conclusion α based on (or supported by) a set of premises P is a sequence of multi-sets S1, …, Sm, where S1 = {α}, Sm = P, and for every 1 ≤ i < m, where σ is the sentence occurrence in Si selected by f:
• (1) If σ is not in P, then Si+1 = Si − {σ} ∪ S for some inference rule of the form S → σ ∈ R.
• (2) If σ is in P, then Si+1=Si.
Each Si is a step in the deduction.
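A backward deduction can be sketched directly from Definition 8.3. The code below is my own simplification, not part of the definition: multisets are tuples, the selection function is ‘leftmost non-premise sentence’, and the search is depth-first:

```python
def backward_deduction(rules, premises, goal):
    """Return frontiers S1..Sm from {goal} down to premises, or None.
    `rules` maps a conclusion to a list of premise tuples."""
    def expand(frontier, seen):
        if all(s in premises for s in frontier):
            return [frontier]                   # Sm: every sentence is in P
        # the selection function: leftmost sentence not in P
        sigma = next(s for s in frontier if s not in premises)
        for body in rules.get(sigma, ()):       # rules of the form S -> sigma
            rest = list(frontier)
            rest.remove(sigma)                  # S_{i+1} = S_i - {sigma} + S
            nxt = tuple(sorted(rest + list(body)))
            if nxt in seen:                     # avoid revisiting a frontier
                continue
            tail = expand(nxt, seen | {nxt})
            if tail is not None:
                return [frontier] + tail
        return None
    start = (goal,)
    return expand(start, {start})

# R = {q -> p; a,b -> q} with assumptions {a, b}:
rules = {"p": [("q",)], "q": [("a", "b")]}
assert backward_deduction(rules, {"a", "b"}, "p") == [("p",), ("q",), ("a", "b")]
```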
Now an assumption-based argument is defined as follows:
### Definition 8.4 (Dung et al. 2007, Definition 2.6)
An argument for a conclusion on the basis of an ABF is a deduction of that conclusion whose premises are all assumptions (in A).
As for notation, the existence of an argument for a conclusion α supported by a set of assumptions A is denoted by A ⊢ α, or by A ⊢ABF α if it has to be distinguished from the existence of a strict argument according to Definition 3.6 with the same premises and conclusion; the latter will below be denoted by A ⊢AT α.
Finally, Dung et al.'s notion of argument attack is defined as follows.
### Definition 8.5 (Dung et al. 2007, Definition 2.7)
• an argument A ⊢ α attacks an argument B ⊢ β if and only if A ⊢ α attacks an assumption in B;
• an argument A ⊢ α attacks an assumption β if and only if α is the contrary β̄ of β.
The argumentation theory corresponding to an assumption-based framework is now defined as follows.
##### Definition 8.6
Given an assumption-based framework ABF = (LABF, RABF, A, ¯ABF), the corresponding argumentation theory ATABF = (AS, KB), where AS = (LAT, ¯AT, RAT, ≤) and KB = (K, ≤′), is defined as follows:
• LAT = LABF
• φ ∈ ψ̄AT iff φ = ψ̄ABF
• RAT = Rs = RABF
• Kn = Kp = Ki = ∅
• Ka = A
• ≤ = ≤′ = ∅
Note that ATABF is well formed and all ATABF arguments are strict and plausible.
The main task now is to prove that there is an ABF-argument for α from P if and only if there is an ATABF-argument for α with premises P. In fact, this can only be proved for the special case of argumentation theories that do not allow for arguments with an infinite number of subarguments. Technically, the present framework allows for such arguments even if they are non-circular. For example, an AT with Rs = {pi+1 → pi | i ≥ 1} allows for an argument for p1 with an infinite number of subarguments (and an empty set of premises). So far no proof has depended on finiteness of arguments. In an ABF, however, arguments are by definition finite even if the set of inference rules allows for infinite ones, as in the just-given example.
##### Proposition 8.7
For all ABF such that AT = ATABF does not allow arguments with an infinite number of subarguments, there exists an argument A ⊢ABF α if and only if there exists an argument A ⊢AT α.
From this it follows that
##### Proposition 8.8
For all ABF such that AT = ATABF does not allow arguments with an infinite number of subarguments, it holds for every argument A ⊢ABF α and every argument A ⊢AT α that A ⊢ABF α is defeated by an argument B ⊢ABF β if and only if A ⊢AT α is defeated by an argument B ⊢AT β.
Now the main correspondence result can be proved.
##### Theorem 8.9
For all ABF, any semantics S subsumed by complete semantics and any set E:
• (1) if E is an S-extension of ABF then EAT is an S-extension of AT, where EAT = {A ⊢AT α | A ⊢ABF α ∈ E};
• (2) if E is an S-extension of AT then EABF is an S-extension of ABF, where EABF = {A ⊢ABF α | A ⊢AT α ∈ E}.
Theorem 8.9 in fact says that there is a one-to-one correspondence between the extensions of an ABF and those of its corresponding AT. From this we have the following:
##### Corollary 8.10
For any ABF, any semantics S subsumed by complete semantics, and for any formula ϕ it holds that ϕ is skeptically (credulously) S-acceptable in ABF if and only if ϕ is skeptically (credulously) S-acceptable in ATABF.
## 9.Other related research
As was said above, the present framework is inspired by the work of Pollock (1987, 1994) and Vreeswijk (1993, 1997). Essentially, it takes from both the idea that defeasible reasoning proceeds by chaining two kinds of inference rules into inference trees. The present mathematical formulation of this idea is directly adopted from Vreeswijk (1993, 1997). The present notions of undercutting and rebutting defeat are taken from Pollock's work and then generalised to arbitrary preference relations on arguments (Pollock only has a notion of probabilistic strength), and to logical languages with arbitrary contrary mappings. They are then combined with a notion of undermining defeat.
In fact, the system of Pollock (1994) is not formalised in terms of arguments but in terms of the so-called ‘inference graphs’, in which nodes are connected either by inference links (applications of inference rules) or by defeat links. The nodes are ‘lines of argument’, which are propositions plus an encoding of the argument lines from which they are derived. So if a proposition is derived in more than one way, it occurs in more than one line of argument. Such duplications cannot be avoided, since defeat relations depend on the strength of a proposition, which in turn depends on the way in which it is derived. Nodes are evaluated in terms of the recursive structure of the graph. Jakobovits and Vermeir (1999) proved that Pollock's system can be given an equivalent formulation as an instance of Dung's abstract argumentation frameworks with preferred semantics.
With Vreeswijk's framework, the relation with Dung-style semantics is still an open issue, since it models conflict not as a relation between two individual arguments but as a property of sets of arguments: a set of arguments is said to be in conflict if there exists a strict argument from their conclusions for the absurd proposition ⊥. Vreeswijk then defines a notion of warrant for arguments which resembles stable semantics.
In Carneades, the evaluation of statements in an argument graph is, as with Pollock's inference graphs, defined in terms of the recursive structure of the graph. Statements are acceptable if they satisfy their ‘proof standard’. The general framework abstracts from their nature but Gordon et al. (2007) give several examples of proof standards. The proof standards are at the heart of Carneades' acceptability notion, just like the notions of defence and admissibility are at the heart of Dung-style semantics. None of the examples given by Gordon et al. (2007) have a known relation with any existing Dung-style semantics or the present framework, which thus is an issue for future research. Here it is also relevant that Carneades incorporates dialogical elements since it matters whether a statement is ‘stated’, ‘questioned’, ‘accepted’ or ‘rejected’. These statuses of a statement are assumed to be provided by a dialogical context in which Carneades is embedded.
Verheij (2003a) presents a ‘sentence-based’ (as opposed to ‘argument-based’) logic for defeasible reasoning, called DefLog. Verheij assumes a logical language with just two connectives, a unary connective × which informally stands for ‘it is defeated that’ and a binary connective ↝ for expressing defeasible conditionals. He then assumes a single inference scheme for this language, namely, modus ponens for ↝. A set of sentences T is said to support a sentence ϕ if ‘ ϕ is in T or follows from T by repeated application of ↝ -modus ponens’ (Verheij 2003a, p. 327). It seems reasonable to formalise this as the backward deductions of assumption-based argumentation or the strict arguments of the present framework. Moreover, T is said to attack ϕ if T supports ×φ. Verheij then considers partitions (J,D) of sets of sentences Δ which he calls dialectical interpretations and which are such that J (the ‘justified’ sentences) is conflict-free and attacks every sentence in D (the ‘defeated’ sentences).
As already suggested by Verheij, there is a close formal relation between DefLog and assumption-based argumentation. First, dialectical interpretations are easily proved to be equivalent to stable labellings, which are known to be equivalent to stable semantics (first proved by Verheij (1996); see also Jakobovits and Vermeir (1999), and Caminada (2006)). Furthermore, DefLog theories can be mapped onto assumption-based frameworks by letting an ABF contrary mapping be ×φ=φ for any ϕ, by regarding any set of dialectically interpreted sentences as the assumptions A of an ABF and by having ϕ, ϕ ↝ ψ → ψ, for any ϕ and ψ in DefLog's language, as the set R of inference rules of the ABF. The result is an assumption-based framework in the sense of Definition 8.2 with stable semantics. The correspondence results of Dung et al. (2007) with Bondarenko et al. (1997) then also apply to the special case of a DefLog-style ABF so that by the above Theorem 8.9 DefLog is a special case of the present framework with only strict arguments and only undermining defeat.
Several argumentation systems model deductive argumentation. Here arguments are proofs according to some deductive logic with consistent premises taken from a possibly inconsistent knowledge base expressed in the language of the logic (usually taken to be standard propositional or first-order logic). In Amgoud and Cayrol (2002), which is based on propositional logic, the structure of arguments is left undefined, except that the premises imply the conclusion according to propositional logic. Several notions of defeat are then considered. One of them corresponds to the present undermining defeat, where arguments are compared in terms of a partial preorder on the belief base from which their premises are taken. Argument acceptability is defined according to grounded semantics.
This variant of Amgoud and Cayrol (2002) can be reconstructed as a special case of the present framework as follows. First, L is any propositional language closed under classical negation, where φ is a contrary of ψ iff φ = ¬ψ or ψ = ¬φ. Then Rs consists of all valid propositional inferences while Rd is empty. The knowledge base equals Kp. Finally, as with DefLog, it seems reasonable to formalise arguments as the strict arguments of the present framework, although the extra constraint must be added that such arguments have classically consistent premises. This consistency constraint means that not all results of this paper hold without further qualification. It is easy to verify that Propositions 5.8, 6.1 and 6.2 still hold with this constraint (for Proposition 5.8, note that in this case S ⊢ φ by definition implies that the strict argument that exists for φ has consistent premises). However, the proofs of Theorems 6.9 and 6.10 do not apply to this case, for similar reasons as explained above in Section 7 with Example 7.4. It remains to be investigated whether these theorems can be proved for this case under alternative conditions.
Besnard and Hunter's (2008) version of deductive argumentation is similar to that of Amgoud and Cayrol (2002), except for a generalised notion of undermining: an argument is undermined by any argument whose conclusion negates the conjunction of its premises. It remains to be seen whether this version of undermining can be reduced to the present version.
Two other logics for defeasible reasoning with both (domain-specific) strict and defeasible inference rules are Defeasible Logic (DL), first proposed by Nute (1994), and Defeasible Logic Programming (DeLP; e.g. Garcia and Simari 2004). In both systems, the logical language is restricted in logic-programming style. DL is not explicitly argument-based but defines the notion of a proof tree, which interleaves support and attack. Governatori, Maher, Antoniou, and Billington (2004) investigated the relation with Dung-style semantics. One variant of DL is proved to instantiate grounded semantics. In DeLP, the only way to attack an argument is on a (sub-)conclusion. DeLP's notion of argument acceptability has no known relation to any of the current argumentation semantics.
Prakken and Sartor (1997) presented an argument-based version of extended logic programming, designed as an instance of Dung's abstract argumentation frameworks with grounded semantics. Their system comes close to being a special case of the present framework. It has (domain-specific) strict and defeasible inference rules and allows for rebutting and undercutting defeat. Furthermore, its notion of an argument comes close to a ‘deduction’ version of Definition 3.6, i.e. it represents a particular order in which an argument can be constructed. A difference is that in Prakken and Sartor (1997) two parallel subarguments do not need to be completed with an inference from their conclusions, so that, for example (in the present notation), p, p → q, r, r → s is an argument with conclusions q and s. In Prakken and Sartor (1997), this was convenient for modelling reasoning about defeasible priorities in the system. A more substantial difference is that while the present framework considers rebutting and undercutting attack on equal footing, Prakken and Sartor (1997) give priority to undercutting attack, so that if A undercuts B while B rebuts A, A strictly defeats B. It seems that the present results do not crucially rely on this difference, but this should be further investigated.
A final difference with the present framework is that in Prakken and Sartor (1997) the role of strict rules in defeat is different. As in the present framework, only defeasible inferences can be attacked, but an argument A with conclusion φ rebuts an argument B with conclusion φ′ if there exist sets of strict rules Sa and Sb and a formula ψ such that (with present notation) Sa ∪ {φ} ⊢ ψ and Sb ∪ {φ′} ⊢ −ψ. The difference can be best explained with Examples 3.18 and 6.20. The motivation behind the definition of Prakken and Sartor (1997) was that intuitively the ‘real’ conflict is between the two defaults on whether someone is a bachelor or married. This is captured by their definition of rebutting attack, since A2 can be extended with A3 to contradict B2's conclusion and vice versa. Hence the rule priorities are applied to A2 and B2. By contrast, in the present framework these arguments do not rebut each other since their top rules are strict. Instead, we saw that their conflict is decided indirectly, by comparing A3 with B2 and B3 with A2. The present treatment of such examples can be defended by saying that conflicts are recognised only when they are made explicit in an argument's conclusion, which seems to better respect the general nature of argumentation as providing explicit grounds for conclusions. It remains to be investigated whether this difference affects the present results on the rationality postulates (note that, although Prakken and Sartor (1997) do not assume that the strict rules are closed under transposition, this assumption can be easily added).
In one respect, Prakken and Sartor (1997) go beyond the present framework, namely, in making the preference relation on the set of defeasible inference rules defeasible and derivable within the framework. In this respect, the system is a forerunner of Modgil's (2009) extended AFs.
## 10. Conclusion
The main rhetorical aim of this paper has been to present the ASPIC framework as a general abstract framework for rule-based argumentation. In previous publications on the ASPIC framework its unifying potential was underexposed because of a focus on domain-specific inference rules instead of on general inference patterns. Here it has been argued that ASPIC, although it can be used as a specific logic at the same level of abstraction as systems such as DeLP, DL and Prakken and Sartor (1997), can also be used as an abstract framework for reasoning with general inference rules, including argument schemes. Moreover, it has been shown that by including undermining attack and generalising negation to arbitrary contrary mappings, the ASPIC framework unifies rule- and assumption-based approaches to argumentation. The latter claim has been backed by a formal proof that assumption-based argumentation (Bondarenko et al. 1997; Dung et al. 2007) is a special case of the framework and by semi-formal explanations that the same holds for Verheij's (2003) DefLog and (to a large extent) Amgoud and Cayrol's (2002) version of deductive argumentation. The main technical contributions of this paper are:
• a generalisation of the ASPIC framework to arbitrary relations of contrariness between well-formed formulas;
• an extension of the ASPIC framework with preference information for resolving conflicts between arguments;
• an extension of the ASPIC framework with four types of premises and with undermining attack;
• proof that Caminada and Amgoud's (2007) rationality postulates still hold for the thus generalised and extended framework, and that they hold not only for systems closed under transposition but also for systems closed under contraposition.
The framework can be further extended and investigated in several ways. First, as indicated in Section 3.3.2, several alternative ways to define the relation between the three kinds of defeat are possible; it could be investigated to what extent such alternatives affect the present results. The same holds for the use of preferences to resolve undercutting attack (also discussed in Section 3.3.2), for the constraint that arguments have consistent premises (cf. the discussion of deductive argumentation in Section 9) and for alternative ways to define argument conflicts involving strict rules (cf. the discussion of Prakken and Sartor (1997) in Section 9).
Finally, as touched upon at the end of Section 9, an important extension of the present framework is making the preference relations that are used for resolving conflicts defeasible and derivable within the framework. This could be done along the lines of Prakken and Sartor (1997), after which it should be investigated whether Modgil's (2009) reconstruction of Prakken and Sartor (1997) as an instance of his extended argumentation frameworks can be adapted to the extended ASPIC framework.
## Notes
1 For reasons explained in Section 3, this paper will rename Dung's attack relations to ‘defeat’ relations and reserve the term ‘attack’ for something else.
2 In this paper, the term ‘framework’ will be used to denote the general model, to highlight that it can be instantiated in various ways (such instantiations will in turn be called argumentation systems). This contrasts with Dung's (1995) use of the term ‘argumentation framework’, which denotes a specific set of arguments with a specific attack relation. In the present paper, such specific inputs to an argumentation system will be called argumentation theories.
3 Pollock (1987, 1994) calls these ‘conclusive’ and ‘prima facie reasons’.
4 Modgil (2009) argued that in some contexts such extensions make sense. It seems that the formal results in Section 6 on rationality postulates also hold for undercutting defeat with preferences, but this should be formally verified.
5 Caminada and Amgoud (2007) proposed similar postulates for the intersection of extensions but since their results on these postulates directly follow from the ones for individual extensions, they will be ignored.
6 In Dung et al. (2007), the arrows are from right to left.
## Acknowledgements
This work was partially supported by a Distinguished Visitor grant from the Scottish Informatics and Computer Science Alliance (SICSA). I thank Chris Reed and the School of Computing, University of Dundee, Scotland, for their hospitality during the summer of 2009 and Chris Reed for encouraging me to write this paper. Floris Bex, Phan Minh Dung, Tom Gordon, Sanjay Modgil, Leon van der Torre, Bart Verheij and Gerard Vreeswijk gave useful feedback on earlier versions of this paper. Finally, I thank my former collaborators in the ASPIC project for working with me on previous versions of the ASPIC framework.
## Appendices
### Appendix: Proofs
##### Proposition 5.8
Consider any argumentation theory with L closed under classical negation and − defined accordingly. Then if Rs consists of all valid propositional inferences, then Rs is closed under contraposition and transposition.
##### Proof
Note first that if Rs consists of all valid propositional inferences, then ⊢ satisfies the deduction theorem, i.e. it satisfies
{p1, …, pn} ⊢ q  iff  ⊢ (p1 ∧ … ∧ pn) → q
Now consider any rule p1, …, pn → q. Then {p1, …, pn} ⊢ q, so by the deduction theorem ⊢ (p1 ∧ … ∧ pn) → q. Then also (by propositional reasoning) ⊢ (¬q ∧ p2 ∧ … ∧ pn) → ¬p1. But then by the deduction theorem {¬q, p2, …, pn} ⊢ ¬p1, so since Rs contains all valid propositional inferences, Rs contains ¬q, p2, …, pn → ¬p1. ▪
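The rule manipulation in this proof is mechanical enough to sketch in code. The following Python sketch (the names `neg` and `transpositions` are mine, not the paper's) computes all transpositions of a strict rule p1, …, pn → q, i.e. the rules ¬q, p2, …, pn → ¬p1 and so on for each antecedent:

```python
def neg(p):
    """Toggle classical negation, written here as a '-' prefix."""
    return p[1:] if p.startswith('-') else '-' + p

def transpositions(antecedents, consequent):
    """All transposed forms of the strict rule antecedents -> consequent:
    for each antecedent p_i, the rule (neg(q) plus the rest) -> neg(p_i)."""
    out = []
    for i, p in enumerate(antecedents):
        rest = antecedents[:i] + antecedents[i + 1:]
        out.append(([neg(consequent)] + rest, neg(p)))
    return out

# p1, p2 -> q yields -q, p2 -> -p1 and -q, p1 -> -p2
print(transpositions(['p1', 'p2'], 'q'))
```

A rule set containing each rule together with all rules produced this way is, by construction, closed under transposition.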
##### Proposition 6.1
Let <A, Def> be an argumentation framework as defined in Definition 3.22 and E any of its extensions under a given semantics subsumed by complete semantics. Then for all A ∈ E: if A′ ∈ Sub(A), then A′ ∈ E.
##### Proof
The proof is a trivial adaptation of the proof of Proposition 1 of Caminada and Amgoud (2007), taking the possibility of undermining defeat into account. ▪
##### Proposition 6.2
Let <A, Def> be an argumentation framework corresponding to an argumentation theory, and E any of its extensions under a given semantics subsumed by complete semantics. Then {Conc(A) | A ∈ E} = ClRs({Conc(A) | A ∈ E}).
##### Proof
Caminada and Amgoud's proof of their Proposition 8 depends on Proposition 6.1, which also holds for the present framework, and makes no assumptions on the use of priorities. Therefore, the proof also holds for the present version. ▪
##### Theorem 6.9
Let <A, Def> be an argumentation framework corresponding to a well-formed argumentation theory that is closed under contraposition or transposition and has a reasonable argument ordering and a consistent ClRs(Kn), and let E be any of its extensions under a given semantics subsumed by complete semantics. Then the set {Conc(A) | A ∈ E} is consistent.
##### Proof
Let E be a complete extension. Suppose that {Conc(A) | A ∈ E} is inconsistent. This means that there exist A, B ∈ E with Conc(A) = −Conc(B). Since E is a complete extension, E is conflict-free. This means that A does not defeat B and B does not defeat A. It will be shown that this leads to a contradiction.
First the following lemmas are proved.
##### Lemma A1
Let A be an argument and B a plausible or defeasible argument in an argumentation theory that is closed under contraposition or transposition such that Conc(A) and Conc(B) are contradictories. Then A can be extended to an argument A+ that rebuts or undermines B.
##### Proof
Consider first systems closed under contraposition. By Corollary 6.6 it holds that Conc(M(B)) ⊢ Conc(B), so with contraposition (which is assumed to hold) and since Conc(A) and Conc(B) contradict each other, we have for any Bi ∈ M(B) that Conc(M(B) \ {Bi}) ∪ {Conc(A)} ⊢ −Conc(Bi). Then clearly M(B) \ {Bi} and M(A) are the maximal fallible subarguments of an argument A+ for −Conc(Bi). Since by construction of M(B) either Bi is a non-axiom premise or ends with a defeasible inference, A+ either undermines or rebuts Bi. But then A+ also undermines or rebuts B.
For systems closed under transposition, the existence of arguments A+ and Bi is proved by a straightforward generalisation of Lemma 6 of Caminada and Amgoud (2007). Then the proof can be completed as above. ▪
##### Corollary A2
If the argumentation theory has a reasonable argument ordering, then if B ≺ A, then A+ defeats B.
##### Proof (continuing the proof of Lemma A1)
Since ≺ is reasonable, there exist such a Bi and A+ such that A+ ⊀ Bi. Then A+ defeats Bi, so A+ defeats B. ▪
Now for proving Theorem 6.9, the following cases must be distinguished.
• (1) A ∈ Ki. Then A is not in any extension.
• (2) A is an assumption. If A is a contradictory of Conc(B), then B defeats A. If instead A is a contrary of Conc(B), then since the argumentation theory is well formed B is also an assumption so A defeats B. Contradiction.
• (3) A is firm and strict. If B is also firm and strict, then ClRs(Kn) is inconsistent, which contradicts the assumption that it is consistent. If B is plausible or defeasible, then A defeats B by condition (1) of the definition of the argument ordering ≺. Contradiction.
• (4) A is plausible or defeasible. If B is firm and strict then this is case (3). If B's top rule is defeasible and Conc(A) is a contrary of Conc(B), then A defeats B, while if Conc(A) and Conc(B) contradict each other, either A defeats B or B defeats A. If B's top rule is strict, then by the assumption that the argumentation theory is well formed, Conc(A) and Conc(B) contradict each other. If B ⊀ A then B defeats A, while otherwise A can by Lemma A1 and Corollary A2 be extended to an argument A+ that defeats B. It is then left to prove that A+ ∈ E. Any defeater C of A+ will by construction of A+ do so by defeating an element of M(A) or M(B) (since all inferences that are not in M(A) or M(B) are strict and there are no new premises). However, this defeated element is in E by Proposition 6.1, so since E is a complete extension, E defeats C and thus defends A+. But then A+ ∈ E, which contradicts the fact that E is conflict-free.
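The role of completeness in this case analysis (every argument defended by E is in E) can be made concrete with a minimal abstract-AF computation. The sketch below is illustrative only; `grounded_extension` is my name, and it iterates the characteristic function to compute the grounded extension, i.e. the smallest complete extension:

```python
def grounded_extension(args, defeats):
    """Iterate the characteristic function from the empty set:
    an argument is defended if every one of its defeaters is itself
    defeated by the current set. The least fixpoint is the grounded
    extension, which is the smallest complete extension."""
    E = set()
    while True:
        defended = {a for a in args
                    if all(any((c, b) in defeats for c in E)
                           for b in args if (b, a) in defeats)}
        if defended == E:
            return E
        E = defended

# A is unattacked, A defeats B, B defeats C: C is defended by A
args = {'A', 'B', 'C'}
defeats = {('A', 'B'), ('B', 'C')}
print(sorted(grounded_extension(args, defeats)))  # ['A', 'C']
```

Note how C enters the extension only once A is in: exactly the "defended members are in" property the proof exploits.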
##### Theorem 6.10
Let <A, Def> be an argumentation framework corresponding to a well-formed argumentation theory that is closed under contraposition or transposition and has a reasonable argument ordering and a consistent ClRs(Kn), and let E be any of its extensions under a given semantics subsumed by complete semantics. Then the set ClRs({Conc(A) | A ∈ E}) is consistent.
##### Proof
As in Caminada and Amgoud (2007). ▪
##### Corollary 6.11
If the conditions of Theorem 6.10 are satisfied, then for any extension E under a given semantics subsumed by complete semantics the set {φ | φ is a premise of an argument in E} is consistent.
##### Proof
Let A be any argument in E and φ any premise of A. By definition of an argument, φ is a subargument of A, so by Proposition 6.1 we have that φ ∈ E. Then the corollary follows from Theorem 6.10 and the fact that subsets of consistent sets are consistent. ▪
##### Proposition 6.15
The last-link argument ordering is reasonable.
##### Lemma A3
Consider any ordering ◁s on sets of elements ordered by a partial preorder ≤e, such that S1 ◁s S2 iff there exists an e1 ∈ S1 such that for all e2 ∈ S2 it holds that e1 <e e2. Then if S1 ◁s S2 and e1 is a ≤e-minimal element of S1, then S2 ∪ (S1 \ {e1}) ⋪s {e1}.
##### Proof
Straightforward.▪
Now by Corollary 6.13, B ≺ A means that there exists a Bi ∈ M(B) with top rule b such that for all A′ ∈ M(A) with top rule a it holds that b < a. Choose such a Bi with a ≤e-minimal b to form A+ as in the proof of Corollary A2. Then by Lemma A3, LastDefRules(A+) ⋪s LastDefRules(Bi). But then A+ ⊀ Bi. ▪
##### Proposition 6.18
The weakest-link argument ordering is reasonable.
##### Proof
That B ≺ A now means that Prem(B) ◁s Prem(A) and DefRules(B) ◁s DefRules(A).
If DefRules(B) ≠ ∅, then there exists a b ∈ DefRules(B) such that for all a ∈ DefRules(A) it holds that b < a. Choose a Bi with a ≤e-minimal such b in the construction of A+ and Bi in the proof of Corollary A2. Then since all new defeasible rules of the corresponding A+ are from elements of M(B), by Lemma A3 DefRules(A+) ⋪s DefRules(Bi). But then A+ ⊀ Bi.
If DefRules(B) = ∅, then DefRules(A) = ∅. Since Prem(B) ◁s Prem(A), there exists a premise p ∈ Prem(B) such that for all premises p′ ∈ Prem(A) it holds that p < p′. Then in the construction of A+ and Bi in the proof of Corollary A2, choose Bi to be an argument containing a ≤e-minimal such p. Then since all new premises of the corresponding A+ are from Prem(B), by Lemma A3 Prem(A+) ⋪s Prem(Bi). But then A+ ⊀ Bi. ▪
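Both the last-link and the weakest-link orderings compare sets of rules or premises with the same elementwise scheme. A minimal sketch of that set comparison (`s_less` is my name for the relation written ◁s above):

```python
def s_less(S1, S2, lt):
    """S1 is strictly below S2 iff some e1 in S1 is strictly below
    every e2 in S2 (the elementwise set comparison used above)."""
    return any(all(lt(e1, e2) for e2 in S2) for e1 in S1)

# integer "priorities" standing in for rules or premises:
lt = lambda a, b: a < b
print(s_less({1, 5}, {2, 3}, lt))   # True: 1 is below both 2 and 3
print(s_less({4, 5}, {2, 3}, lt))   # False: nothing is below 2
```

Note that an empty S1 is never strictly below anything, which is why the weakest-link proof splits on whether DefRules(B) is empty.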
##### Proposition 8.7
For all ABF such that AT = ATABF does not allow arguments with an infinite number of subarguments, there exists an argument A ⊢ABF α if and only if there exists an argument A ⊢AT α.
##### Proof
For the only-if part, let S1, …, Sn be a backward deduction of α. It will be shown by induction on the structure of backward deductions that there exists an AT-argument with conclusion α and premises Sn.
Note first that since all elements of Sn are in A and so in Ka, by clause (1) of Definition 3.6 they are all AT-arguments and their premises are all in Sn.
Consider next any set Si such that all elements of Si+1 are the conclusion of an AT-argument with premises from Sn. Then for any element αi of Si, if αi is also in Si+1, then trivially αi is the conclusion of an AT-argument with premises in Sn; otherwise for some set S = {β1, …, βm} ⊆ Si+1 there exists a rule β1, …, βm → αi in RABF. But then this rule is also in Rs. Let, furthermore, the AT-arguments for β1, …, βm (which exist by the induction hypothesis) be B1, …, Bm: then by clause (2) of Definition 3.6, B1, …, Bm → αi is an AT-argument for αi with all its premises in Sn.
Next it is proved that for any Si the union of all premises of all AT-arguments for elements in Si is Sn. Note that for any pair Si, Si+1, the set Si+1 is formed by replacing at most one element σ in Si with a set S ⊆ Si+1. As just proved, there exists an AT-argument B1, …, Bm → αi, where B1, …, Bm are the AT-arguments for all elements in S. By clause (2) of Definition 3.6, the premises of this argument are the union of the premises of the arguments B1, …, Bm. But then no premises have been added or deleted by creating Si+1 from Si. Note, finally, that the union of the premises of all AT-arguments for any element in Sn (which are these elements themselves) trivially equals Sn. But then this set equals Sn for all Si.
For the if-part, suppose P ⊢AT α. A backward deduction with multi-sets S1, …, Sn such that S1 = {α} and Sn = P can be created as a maximal sequence such that:
• (1) S1 = {α},
• (2) For all Si (i ≥ 1): create Si+1 by selecting one element σ from Si not selected before and:
• (a) if σ ∈ P then Si+1 = Si; otherwise
• (b) Si+1 = (Si \ {σ}) ∪ S for some S = {Conc(B1), …, Conc(Bn)} such that there exists an argument B ∈ Sub(A) of the form B1, …, Bn → σ.
It is now proved that for any Si and any σ ∈ Si one of these two conditions is satisfied, i.e. either σ ∈ P or σ is the conclusion of an argument in Sub(A). The proof is by induction on the structure of S1, …, Sn. Consider first S1 = {α}. Then if A = α ∈ Ka, then trivially α ∈ P; otherwise A = A1, …, An → α, so trivially A ∈ Sub(A). Consider next any Si such that all its elements satisfy conditions (2a) and (2b). Then if Si+1 = Si this trivially also holds for Si+1; otherwise, if S replaces σ in Si+1, then by the induction hypothesis this is because there exists a subargument B ∈ Sub(A) of the form B1, …, Bn → σ such that S = {Conc(B1), …, Conc(Bn)}. Then clearly for any new element Conc(Bi) ∈ S, there exists a subargument for it in Sub(A), namely, Bi.
Next, since all steps in the sequence apply an inference rule from Rs, which by Definition 8.6 is also in RABF, the sequence clearly is a backward deduction.
Finally, it is proved that the sequence ends with Sn = P. Let Sub(A) now denote the multi-set consisting of, for every A′ ∈ Sub(A), as many occurrences of A′ as there are inferences in A that use A′. Note that by the assumption that Sub(A) is finite, this multi-set is also finite. Then let for any Si the set UnusedSub(Si) be the subset of all arguments in Sub(A) that were not used to create Si from S1. (So UnusedSub(S1) = Sub(A) and, e.g., UnusedSub(S2) = Sub(A) \ {A}.) Then note that by any application of condition (2b) this multi-set loses one element. Then since S1, …, Sn is a maximal sequence of elements satisfying conditions (1) and (2), we have that UnusedSub(Sn) = ∅. Then since P ⊆ Sub(A), we have that P ⊆ Sn. Assume next for contradiction that there is an element σ ∈ Sn which is not in P: then, as proved above, σ can be replaced by a set S such that S → σ is an inference in A, so S1, …, Sn is not maximal. Contradiction, so Sn = P. ▪
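The construction of the sequence S1, …, Sn in conditions (1)–(2) can be sketched procedurally. The code below is a simplification under assumptions of my own: each formula has exactly one applicable rule and the rule set is finite and non-circular, so a single backward deduction can be built deterministically (`backward_deduce` is not a name from the paper):

```python
def backward_deduce(alpha, rules, assumptions):
    """Return the sequence S1,...,Sn of a backward deduction of alpha.
    `rules` maps a formula to the body (list of formulas) of its one rule;
    the deduction is finished when only assumptions remain."""
    current = [alpha]
    seq = [list(current)]
    while any(f not in assumptions for f in current):
        # pick one non-assumption formula and replace it by a rule body
        f = next(f for f in current if f not in assumptions)
        i = current.index(f)
        current = current[:i] + list(rules[f]) + current[i + 1:]
        seq.append(list(current))
    return seq

# q <- p, r  and  r <- s, with assumptions {p, s}
seq = backward_deduce('q', {'q': ['p', 'r'], 'r': ['s']}, {'p', 's'})
print(seq)  # [['q'], ['p', 'r'], ['p', 's']]
```

The final multi-set, here ['p', 's'], plays the role of Sn = P: the assumption premises of the corresponding AT-argument.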
##### Proposition 8.8
For all ABF such that AT = ATABF does not allow arguments with an infinite number of subarguments, it holds for every argument A ⊢ABF α and every argument A ⊢AT α that A ⊢ABF α is defeated by an argument B ⊢ABF β if and only if A ⊢AT α is defeated by an argument B ⊢AT β.
##### Proof
Assume A ⊢ABF α and B ⊢ABF β defeats A ⊢ABF α. Then according to the contrariness mapping in ABF we have that β = −p for some p ∈ A. Furthermore, by Proposition 8.7, there exists an A ⊢AT α and an argument B ⊢AT β. Then by identity of the contrariness mappings we also have that β = −p for some p ∈ A according to AT. Then since p ∈ Ka, clearly B ⊢AT β defeats A ⊢AT α.
Assume A ⊢AT α and B ⊢AT β defeats A ⊢AT α. Then since all arguments in AT are strict, B undermines A, and according to the contrariness mapping in AT, we have that β = −p for some p ∈ A. Furthermore, by Proposition 8.7, there exists an A ⊢ABF α and an argument B ⊢ABF β. Then by identity of the contrariness mappings, we also have that β = −p for some p ∈ A according to ABF. Then since p ∈ A, clearly B ⊢ABF β defeats A ⊢ABF α. ▪
##### Theorem 8.9
For all ABF, any semantics S subsumed by complete semantics and any set E:
• (1) if E is an S-extension of ABF then EAT is an S-extension of AT, where EAT = {A ⊢AT α | A ⊢ABF α ∈ E};
• (2) if E is an S-extension of AT then EABF is an S-extension of ABF, where EABF = {A ⊢ABF α | A ⊢AT α ∈ E}.
##### Proof
As before, the proof for complete semantics suffices.
• (1) Consider any complete extension E of ABF. It is first proven that any member of EAT is defended by EAT. Since E is conflict-free, by construction of EAT and Proposition 8.8 also EAT is conflict-free. Consider next any A ⊢AT α ∈ EAT defeated by some B ⊢AT β. By construction of EAT, there exists an A ⊢ABF α ∈ E. Then by Propositions 8.7 and 8.8 there exists a B ⊢ABF β defeating A ⊢ABF α. But since E is a complete extension, B ⊢ABF β is in turn defeated by some C ⊢ABF γ ∈ E. Then by construction of EAT and Proposition 8.7, also C ⊢AT γ ∈ EAT and by Proposition 8.8, C ⊢AT γ defeats B ⊢AT β. So A ⊢AT α is defended by EAT.
Next, to prove that any argument defended by EAT is a member of EAT, assume A ⊢AT α is defended by EAT. Then any of its defeaters B ⊢AT β is in turn defeated by an element C ⊢AT γ ∈ EAT. But then by Proposition 8.8 the same holds for their corresponding ABF-arguments, which exist by Proposition 8.7. Moreover, by construction of EAT we have that C ⊢ABF γ ∈ E so, since E is a complete extension, also A ⊢ABF α ∈ E. But then A ⊢AT α ∈ EAT by construction of EAT.
• (2) The proof of (2) is entirely similar and therefore omitted.
##### Corollary 8.10
For any ABF, any semantics S subsumed by complete semantics, and for any formula ϕ it holds that ϕ is skeptically (credulously) S-acceptable in ABF if and only if ϕ is skeptically (credulously) S-acceptable in ATABF.
##### Proof
Straightforward. ▪
## References
1. Amgoud, L., Bodenstaff, L., Caminada, M., McBurney, P., Parsons, S., Prakken, H., van Veenen, J. and Vreeswijk, G. 2006. Final Review and Report on Formal Argumentation System. Deliverable D2.6, ASPIC IST-FP6-002307.
2. Amgoud, L. and Cayrol, C. 2002. A Model of Reasoning Based on the Production of Acceptable Arguments. Annals of Mathematics and Artificial Intelligence, 34: 197–216.
3. Baroni, P. and Giacomin, M. 2009. “Semantics of Abstract Argument Systems”. In Argumentation in Artificial Intelligence, Edited by: Rahwan, I. and Simari, G. 25–44. Berlin: Springer.
4. Bench-Capon, T. 2003. Persuasion in Practical Argument Using Value-based Argumentation Frameworks. Journal of Logic and Computation, 13: 429–448.
5. Besnard, P. and Hunter, A. 2008. Elements of Argumentation, Cambridge, MA: MIT Press.
6. Bex, F., Prakken, H., Reed, C. and Walton, D. 2003. Towards a Formal Account of Reasoning about Evidence: Argumentation Schemes and Generalisations. Artificial Intelligence and Law, 12: 125–165.
7. Bondarenko, A., Dung, P., Kowalski, R. and Toni, F. 1997. An Abstract, Argumentation-theoretic Approach to Default Reasoning. Artificial Intelligence, 93: 63–101.
8. Caminada, M. 2006. “On the Issue of Reinstatement in Argumentation”. In Proceedings of the 11th European Conference on Logics in Artificial Intelligence (JELIA 2006), 111–123. Berlin: Springer Verlag. No. 4160 in Springer Lecture Notes in AI.
9. Caminada, M. and Amgoud, L. 2007. On the Evaluation of Argumentation Formalisms. Artificial Intelligence, 171: 286–310.
10. Dung, P. 1995. On the Acceptability of Arguments and its Fundamental Role in Nonmonotonic Reasoning, Logic Programming, and n-Person Games. Artificial Intelligence, 77: 321–357.
11. Dung, P., Kowalski, R. and Toni, F. 2006. Dialectic Proof Procedures for Assumption-based, Admissible Argumentation. Artificial Intelligence, 170: 114–159.
12. Dung, P., Mancarella, P. and Toni, F. 2007. Computing Ideal Sceptical Argumentation. Artificial Intelligence, 171: 642–674.
13. Elvang-Gøransson, M., Fox, J. and Krause, P. 1993. Acceptability of Arguments as Logical Uncertainty. In Proceedings of the 2nd European Conference on Symbolic and Quantitative Approaches to Reasoning with Uncertainty (ECSQARU'93), 85–90. Berlin: Springer Verlag.
14. Garcia, A. and Simari, G. 2004. Defeasible Logic Programming: An Argumentative Approach. Theory and Practice of Logic Programming, 4: 95–138.
15. Gordon, T., Prakken, H. and Walton, D. 2007. The Carneades Model of Argument and Burden of Proof. Artificial Intelligence, 171: 875–896.
16. Governatori, G., Maher, M., Antoniou, G. and Billington, D. 2004. Argumentation Semantics for Defeasible Logic. Journal of Logic and Computation, 14: 675–702.
17. Jakobovits, H. and Vermeir, D. 1999. Robust Semantics for Argumentation Frameworks. Journal of Logic and Computation, 9: 215–261.
18. Lin, F. and Shoham, Y. 1989. “Argument Systems: A Uniform Basis for Nonmonotonic Reasoning”. In Principles of Knowledge Representation and Reasoning: Proceedings of the First International Conference, 245–255. San Mateo, CA: Morgan Kaufmann Publishers.
19. Loui, R. 1987. Defeat among Arguments: A System of Defeasible Inference. Computational Intelligence, 2: 100–106.
20. Modgil, S. 2009. Reasoning about Preferences in Argumentation Frameworks. Artificial Intelligence, 173: 901–934.
21. Nute, D. 1994. “Defeasible Logic”. In Handbook of Logic in Artificial Intelligence and Logic Programming, Edited by: Gabbay, D., Hogger, C. J. and Robinson, J. A. 253–395. Oxford: Clarendon Press.
22. Pollock, J. 1974. Knowledge and Justification, Princeton: Princeton University Press.
23. Pollock, J. 1987. Defeasible Reasoning. Cognitive Science, 11: 481–518.
24. Pollock, J. 1994. Justification and Defeat. Artificial Intelligence, 67: 377–408.
25. Prakken, H. 1997. Logical Tools for Modelling Legal Argument. A Study of Defeasible Argumentation in Law, Dordrecht/Boston/London: Kluwer Academic Publishers. Law and Philosophy Library.
26. Prakken, H. and Sartor, G. 1997. Argument-based Extended Logic Programming with Defeasible Priorities. Journal of Applied Non-classical Logics, 7: 25–75.
27. Prakken, H. and Vreeswijk, G. 2002. “Logics for Defeasible Argumentation”. In Handbook of Philosophical Logic, second edition, Edited by: Gabbay, D. and Guenthner, F. Vol. 4, 219–318. Dordrecht/Boston/London: Kluwer Academic Publishers.
28. Reiter, R. 1980. A Logic for Default Reasoning. Artificial Intelligence, 13: 81–132.
29. Verheij, B. 1996. “Two Approaches to Dialectical Argumentation: Admissible Sets and Argumentation Stages”. In Proceedings of the Eighth Dutch Conference on Artificial Intelligence (NAIC-96), 357–368. Utrecht: The Netherlands.
30. Verheij, B. 2003a. DefLog: On the Logical Interpretation of Prima Facie Justified Assumptions. Journal of Logic and Computation, 13: 319–346.
31. Verheij, B. 2003b. Dialectical Argumentation with Argumentation Schemes: An Approach to Legal Logic. Artificial Intelligence and Law, 11: 167–195.
32. Vreeswijk, G. 1993. Studies in Defeasible Argumentation. Doctoral dissertation, Vrije Universiteit Amsterdam.
33. Vreeswijk, G. 1997. Abstract Argumentation Systems. Artificial Intelligence, 90: 225–279.
34. Walton, D., Reed, C. and Macagno, F. 2008. Argumentation Schemes, Cambridge: Cambridge University Press.
https://pillow.readthedocs.io/en/latest/releasenotes/5.3.0.html

# 5.3.0
Previously ImageOps.colorize only supported two-color mapping with black and white arguments being mapped to 0 and 255 respectively. Now it supports three-color mapping with the optional mid parameter, and the positions for all three color arguments can each be optionally specified (blackpoint, whitepoint and midpoint). For example, with all optional arguments:
ImageOps.colorize(im, black=(32, 37, 79), white='white', mid=(59, 101, 175),
                  blackpoint=15, whitepoint=240, midpoint=100)
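Conceptually, the three-color mapping builds a per-channel lookup table that ramps linearly from black to mid between blackpoint and midpoint, and from mid to white between midpoint and whitepoint. The sketch below is a pure-Python illustration of that idea for one channel, not Pillow's actual implementation (`colorize_lut` is my name):

```python
def colorize_lut(black, mid, white, blackpoint=0, midpoint=127, whitepoint=255):
    """One-channel lookup table: grey levels at or below blackpoint map to
    black, at or above whitepoint to white, with linear ramps through mid."""
    lut = []
    for g in range(256):
        if g <= blackpoint:
            lut.append(black)
        elif g >= whitepoint:
            lut.append(white)
        elif g <= midpoint:
            t = (g - blackpoint) / (midpoint - blackpoint)
            lut.append(round(black + t * (mid - black)))
        else:
            t = (g - midpoint) / (whitepoint - midpoint)
            lut.append(round(mid + t * (white - mid)))
    return lut

# one channel of the example colours: 32 -> 59 -> 255
lut = colorize_lut(black=32, mid=59, white=255,
                   blackpoint=15, whitepoint=240, midpoint=100)
print(lut[15], lut[100], lut[240])  # 32 59 255
```

Pillow applies a table like this to each of the three channels of the greyscale input.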
http://physics.aps.org/story/v13/st25

# Focus: X-Rayed Movie
Published June 14, 2004 | Phys. Rev. Focus 13, 25 (2004) | DOI: 10.1103/PhysRevFocus.13.25
#### Imaging Density Disturbances in Water with a 41.3-Attosecond Time Resolution
P. Abbamonte, K. D. Finkelstein, M. D. Collins, and S. M. Gruner
Published June 11, 2004
Video courtesy of P. Abbamonte, Brookhaven National Lab.
Electron Ripples. This animation shows the response of water electrons to an electron placed momentarily at the center. Following the disturbance, the probability of finding another electron at each point in the plane changes. The vertical axis shows how this change evolves with time. Each frame is 4.13 attoseconds, and the distance across is about 1.2 nanometers.
A research team has produced the fastest movies ever made of electron motion. Created by scattering x rays off of water, the movies show electrons sloshing in water molecules, and each frame lasts just 4 attoseconds (quintillionths of a second). The results, published in the 11 June PRL, could let researchers “watch” chemical reactions even faster than those viewable with today’s “ultrafast” pulsed lasers.
X rays can reveal atomic-scale spatial details in liquids and solids because their wavelengths are as short as the distances between atoms. Experiments typically involve aiming an x-ray beam at a sample and measuring the intensity of scattered x rays at each angle around the sample. In so-called inelastic x-ray scattering, researchers also measure the energy of the scattered rays, since x rays sometimes lose energy as they ricochet off of electrons. In theory, the scattering angles lead to nanoscale still pictures, while the energy loss data tell researchers how the pictures change with time. But there is a catch: the mathematical analysis for converting these measurements into still pictures and movies usually involves solving the infamous “phase problem.”
The phase problem is a mathematical one, and it has not been solved for inelastic x-ray scattering in the past. So researchers were left with only indirect views of the motions of electrons. Peter Abbamonte, now at the Brookhaven National Laboratory in Upton, New York, and his colleagues from Cornell University in Ithaca, New York, came up with a solution, and it required lots of high-quality scattering data over a wide range of energies.
So the team set up the experiment at CHESS, the Cornell x-ray facility, and then trekked to the Advanced Photon Source at Argonne National Laboratory in Illinois for a more intense x-ray beam. They aimed the x rays at a container of ordinary water and measured the directions and energies of the scattered radiation. The team then used their new mathematical procedure with this unusually large dataset to create a movie in which each frame covers only 4.13 attoseconds, or 4.13×10⁻¹⁸ seconds. The movie shows how electrons in the water molecules would respond on average if a point of charge were added. The team then decided to shoot a sequel: they imagined firing a gold ion through the water and created a movie showing the “wake” of electron motion that would follow the charged atom.
Abbamonte says that x rays are uniquely suited for these ultrafast measurements, in part because the shortest pulses made with lasers are about 250 attoseconds long. Eric Isaacs of Argonne says the work is “a very cute application of inelastic x-ray scattering.” But he adds that the experiment works particularly well for water, which has few electrons. Extending the technique to other materials with more electrons–such as high-temperature superconductors–may be more challenging.
–Don Monroe
https://puzzling.stackexchange.com/questions/60312/what-is-an-amplified-word

What is an Amplified Word™?
In the spirit of the What is a Word™/Phrase™ series started by JLee, a special brand of Phrase™ and Word™ puzzles.
If a word conforms to a special rule, I call it an Amplified Word™.
Use the examples below to find the rule.
Amplified Words™ Not Amplified Words™
AID HELP
BYE WAVE
FOUR FIVE
MOAT TROUGH
PEAR SPEAR
SEAT CHAIR
SPAY NEUTER
CORED PARED
HOOPS RIMS
LEAST MOST
PEATS REPEATS
PROOF PROVE
RAIDS LOOTS
ROUST ROUSE
ROUTS BEATS
SHOOT LAUNCH
And, if you want to analyze, here is a CSV version:
Amplified Words™,Not Amplified Words™
AID,HELP
BYE,WAVE
FOUR,FIVE
MOAT,TROUGH
PEAR,SPEAR
SEAT,CHAIR
SPAY,NEUTER
CORED,PARED
HOOPS,RIMS
LEAST,MOST
PEATS,REPEATS
PROOF,PROVE
RAIDS,LOOTS
ROUST,ROUSE
ROUTS,BEATS
SHOOT,LAUNCH
The puzzle satisfies the series' inbuilt assumption, that each word can be tested for whether it is an Amplified Word™ without relying on the other words.
These are not the only examples of Amplified Words™.
What is the special rule these words conform to?
• Are you certain ”BEATS” is not amplified? – Bass Feb 8 '18 at 5:28
• @Bass Yes, I am sure. (But if that's the only word holding you up, maybe answer anyway.) – Rubio Feb 8 '18 at 5:30
• Possible duplicate of What is a Versatile Word™? – glibdud Feb 8 '18 at 14:30
• @glibdud - I don't think this is a dupe of that. At least spear, trough and chai (there are plenty more) would be in the left-most column if that were the case. – APrough Feb 8 '18 at 17:13
• @APrough The accepted answer is essentially identical between the two questions. There may be errors in the examples of this one, which the answerer below mentions but the asker didn't address. – glibdud Feb 8 '18 at 17:15
Starting with the reasoning.
All the amplified words are quite short. Usually that would be because the property is very rare, or that they need to be paired with other words of the same length.
In this case, I think it’s both:
An amplified word can have any one of its letters removed, and the remaining letters still spell a word. Without checking the dictionary for rare words, each amplified word has at least three such letters. With a dictionary, almost any letter can be removed.
Here's the complete list of the amplified words, after attacking them with a stack of encyclopaedias:
AID -> id (psychology term), ad, ai (a three-toed sloth)
BYE -> ye (old English), be, by
FOUR -> our, fur, for, fou (Scottish for drunk)
MOAT -> oat, mat, mot (a witty saying), moa (an extinct bird)
PEAR -> ear, par, per, pea
PEATS -> eats, pats, pets, peas, peat
RAIDS -> aids, rids, rads (dosage units), rais (old Portuguese money), raid
SEAT -> eat, sat, set, sea
SPAY -> pay, say, spy, spa
CORED -> ored (OOPS NOT A VALID WORD), cred, co-ed, cord, core
HOOPS -> oops, hops, hops, hoos (OOPS NOT A VALID WORD), hoop
LEAST -> east, last, lest, leat (a water trench), leas (grassland)
PROOF -> roof, poof, prof, prof, proo (used to stop a horse)
ROUST -> oust, rust, rost (obsolete form of roast, valid in Scrabble), rout, rous (OOPS NOT A VALID WORD)
ROUTS -> outs, ruts, rots, rous (OOPS I DID IT AGAIN), rout
SHOOT -> hoot, soot, shot, shot, shoo
The non-amplified ones don’t appear to have this property, except for
SPEAR -> pear, sear, spar, sper (to shut in), spea (genus of toads)
and
BEATS -> eats, bats, bets, beas (river in India), beat
Maybe I need some particular dictionary to get rid of those two?
EDIT: the dictionary suggested by M Oehm does indeed correctly classify SPEAR and BEATS as non-amplified. Unfortunately, it also does so for a couple of the words listed as amplified. See the edited list in the second spoiler tag for details.
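For anyone who wants to rerun the dictionary attack programmatically, here is a quick Python sketch of the strict "every single-letter deletion must be a word" version of the rule. The tiny word set is purely illustrative; as noted above, the verdict for borderline words depends entirely on which dictionary you load.

```python
def deletions(word):
    """All strings obtained by deleting exactly one letter from word."""
    return {word[:i] + word[i + 1:] for i in range(len(word))}

def is_amplified(word, dictionary):
    """An Amplified Word stays a word after deleting any single letter."""
    return all(w in dictionary for w in deletions(word))

# Tiny illustrative word set; swap in a real word list to test properly.
WORDS = {"EAR", "PAR", "PER", "PEA", "SEAR", "SPAR"}
print(is_amplified("PEAR", WORDS))   # True: ear, par, per, pea
print(is_amplified("SPEAR", WORDS))  # False with this word set
```

Swapping `WORDS` for the Scrabble dictionary (or the one M Oehm suggested) reproduces the classifications discussed above.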
• FWIW, Beas and sper aren't valid Scrabble words, but rads, rais and proo are. – M Oehm Feb 8 '18 at 7:54
• I had to read your answer several times to realize I have to apply the rule to all combinations. My first reading was that one application was sufficient. (Not sure if my words make sense, but I'm trying not to spoil) – Brian J Feb 8 '18 at 14:56
• So an Amplified Word™ is the same as a Versatile Word™? – DqwertyC Feb 8 '18 at 18:14
• ..So it would seem. How versatile. – Bass Feb 8 '18 at 18:39 | 2021-06-15 07:24:31 | {"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 1, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.4093118906021118, "perplexity": 10108.528378782948}, "config": {"markdown_headings": false, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2021-25/segments/1623487617599.15/warc/CC-MAIN-20210615053457-20210615083457-00047.warc.gz"} |
https://www.gamedev.net/blogs/entry/1414703-so-im-in-la/
# So, I'm in L.A.
Yep. Well, not exactly: I'm in Torrance, in the wonderful Residence Inn near Redondo Beach (now that you know where to find me, you have no excuse not to come).
My first impression so far?
Well, cars are BIG. And French cheese is made in America. And everything can be cooked in ovens (good thing I have an oven in my room). And (no offense intended) the only very beautiful girl I've seen so far is French (and has 2 kind kids, and a very strong husband).
But on the whole, I'm impressed. It's quite easy to find anything I've needed so far (but I don't need much). Everyone seems to understand me, despite my horrible English accent - Sneftel knows about it, you can ask him. For the moment, the only thing I would say is that I really enjoy that. The only bad thing is that I must go outside to smoke my cigarettes - but that's not so bad, because as a result I smoke less.
I need to speak about the Residence Inn goodies: every tiny bit of service seems to be free. I have a free broadband intarweb connection, I got a free breakfast this morning (but I didn't try to eat eggs or potatoes - you know, we typically don't eat that so early in France), I can give my grocery list to the hotel staff in the morning and I'll just have to pay for the groceries - not for the delivery. I can swim in the pool from 9 AM to 10 PM, or I can play basketball or tennis. All that comes with my $159.00 room (that I don't have to pay, since it's a business trip), packed with a king-size bed (any beautiful girl out there? Remember, I'm French... [smile]), a nifty kitchen, a Nintendo-64 enabled TV, and more. Since I got here (yesterday), I've only had to spend something along the lines of $40, mostly to buy food. I gave myself some pleasure by buying some cool Revo sunglasses. I'm also going to buy some Marvel comics - if I can find anything related to their Civil War event.
Of course, the consequence of my trip is that I'm going to enjoy everything, and I'll be less present on GameDev.Net (not sure in fact; I should try to spend more time outside my room). I will also postpone the translations of the blog posts I wanted to write or translate - and the "SOMETHING" project is delayed a bit, mostly because I forgot to bring it with me [smile].
have a nice day! (and leave a message if you're not far from me)
https://www.nature.com/articles/pr201825

Article
# NICU Network Neurobehavioral Scale: 1-month normative data and variation from birth to 1 month
## Abstract
### Background
The Neonatal Intensive Care Unit Network Neurobehavioral Scale (NNNS) is a standardized method for infant neurobehavioral assessment. Normative values are available for newborns, but the NNNS is not always feasible at birth. Unfortunately, 1-month NNNS normative data are lacking.
### Aims
To provide normative data for the NNNS examination at 1 month and to assess birth-to-one-month changes in NNNS summary scores.
### Study design
The NNNS was administered at birth and at 1 month within a longitudinal prospective study design.
### Subjects
A cohort of 99 clinically healthy full-term infants were recruited from a well-child nursery.
### Outcome measures
Birth-to-1-month NNNS variations were evaluated, and the associations of neonatal and sociodemographic variables with the rate of change of NNNS summary scores were investigated.
### Results and conclusions
NNNS scores from the 10th to the 90th percentile represent a range of normative performance at 1 month. A complex pattern of stability and change emerged comparing NNNS summary scores from birth to 1 month. Orienting, Regulation, and Quality of movements significantly increased, whereas Lethargy and Hypotonicity significantly decreased. Birth-to-1-month changes in NNNS performance suggest improvements in neurobehavioral organization. These data are useful for research purposes and for clinical evaluation of neurobehavioral performance in both healthy and at-risk 1-month-old infants.
## Main
The Neonatal Intensive Care Unit Network Neurobehavioral Scale (NNNS) (1, 2, 3) is a standardized method for infant neurobehavioral assessment. It is a valid biomarker for early detection of developmental delay in at-risk populations. Previous studies highlighted the efficacy of the NNNS in the early neurobehavioral screening of clinical (e.g., in utero drugs exposure (4), maternal depression (5), neonatal exposure to methadone or buprenorphine (6)), and at-risk infants (e.g., preterm birth (7)). Moreover, the NNNS assessment has been successfully used in prospective studies. For example, associations between early neurobehavioral assessment and short- and long-term outcomes have been documented, including behavioral outcomes (8) and psychomotor development (9).
Although NNNS extreme scores (too low and too high) are indicative of less-than-optimal development and risk conditions, the NNNS was not conceived within a neuropathological framework. Rather, it was framed by a developmental perspective, highlighting normative values for the neurobehavioral performance and its variations (1). Thus, its application to a healthy population is needed in order to provide a “broader and more nuanced view of the neurobehavior of the typical newborn” (10). From a clinical point of view, the establishment of normative data is crucial in order to detect less-than-optimal trajectories of neurobehavioral development. Additionally, studies on normative samples may identify subtle associations between medical or demographic variables and problems in neurobehavioral performance not previously appreciated (11).
Normative NNNS data in healthy newborns assessed from 24 to 28 h postpartum have been previously published (11). Moreover, the NNNS has been administered to a sample of healthy infants at different chronological ages within the first month of life (mean=2.2 weeks; range 0.3–4.8 weeks) providing descriptive data within the first weeks after birth (12). However, in this study infants’ age varied widely and longitudinal changes from birth to 1 month could not be traced because of a cross-sectional design, nor could normative values be obtained. Indeed, normative values at 1 month are still lacking. This is surprising since the first month of life represents a particularly vulnerable time for infants, characterized by critical developmental processes which lead to greater neurobehavioral organization (13). Moreover, the availability of 1-month NNNS normative values might be useful clinically, as the assessment of the neurobehavioral profile is generally delayed when infants are not clinically stable during the first days of life (3). Thus, 1-month NNNS normative values can support the daily clinical activity of neonatologists and pediatricians who work with healthy and at-risk infants.
The aims of the present study were (a) to provide normative values for the NNNS at 1 month in healthy infants; (b) to investigate changes in healthy infants’ NNNS neurobehavioral profile from birth to 1 month of life; and (c) to identify perinatal and sociodemographic factors associated with significant changes in NNNS summary scores.
## Methods
The present study reports on a cohort of infants enrolled from the well-child nurseries of a Boston teaching hospital. One hundred infants were consecutively contacted from a previous cohort study on at-birth NNNS examination. Ninety-nine infants participated in the follow-up NNNS examination at 1 month. All infants had adequate birth weight for gestational age. Eligibility of the mother–infant dyads in the newborn period was determined from medical records and nursing reports. Inclusion criteria were well-newborn nursery stay and discharge from the hospital within 4 days, while exclusion criteria were circumcision within 12 h of examination, stay in the neonatal intensive care unit for more than 12 h, major physical or neurologic anomaly, human immunodeficiency virus positivity, and positive toxicology screen for cocaine or heroin. Mothers without language barriers were recruited regardless of race, ethnicity, marital status, or education. Maternal exclusion criteria included major cognitive deficits, personality disorders, or psychosis. Infants and mothers were clinically healthy. Informed written consent was obtained from the mothers. The Institutional Review/Ethical Board of the Brigham and Women’s Hospital approved this research project. All the procedures were carried out in accordance with the Code of Ethics of the World Medical Association (Declaration of Helsinki) for experiments involving humans.
### Neurobehavioral evaluation
The NNNS (3) is a 128-item standardized (14) assessment to evaluate the neurobehavioral status of healthy and high-risk infants. It has 13 summary scores: Habituation, Attention, Arousal, Regulation, Handling procedures, Quality of movement, Excitability, Lethargy, Nonoptimal reflexes, Asymmetric reflexes, Hypertonicity, Hypotonicity, and Stress/abstinence scale. The NNNS has good internal and concurrent validity (14).
### Infants’ perinatal and clinical variables
Infants’ perinatal characteristics were abstracted from the medical records, and included birth weight, birth length, gestational age, Apgar at minute 1, Apgar at minute 5, duration of labor, type of delivery, and complications during the postpartum period (e.g., major brain lesions, neurosensorial deficits, syndromes, malformations). Total risk was assessed with the Hobel score (15) by trained medical personnel. Previous research has established a clinical risk cutoff of Hobel scores ≥10 (16).
### Parents’ sociodemographic variables
Parents filled out a sociodemographic form. Data were collected on parents’ age (years), ethnicity (White/Caucasian, Black/African-American, Hispanic, Asian, other), mother’s and father’s work status (full-time, part-time, not working), income (<US$10,000, US$10,001–25,000, US$25,001–50,000, US$50,001–75,000, US$75,001–100,000, >US$100,000), insurance (Medicaid/other government insurance vs. HMO/private insurance), and marital status (cohabitant vs. single or separated parent). Family socioeconomic status (SES) was determined with the Hollingshead index (17), based on the more prestigious occupational level of either parent. Scores ranged from 0 to 90. Lower scores reflect lower SES.
### Procedures
Standard procedures for NNNS assessment at birth and at 1 month are fully described in the previous literature (11, 12, 13, 14, 15, 16, 17, 18). Informed consent was obtained from parents for both examinations. To ensure both the validity and reliability of examination and data, several exceptional procedures not typical of clinical research on the NNNS were put in place. At both time points, the NNNS was administered by two certified clinicians, who were blinded to neonatal status. Reliability was set to the criteria used in other studies (11): no more than two 2-point disagreements on items with 9-point scales. For items with 5-point scales or less, agreement had to be exact. In total, no more than five disagreements for the complete assessment were accepted. To ensure a high level of reliability, every examination was observed and scored independently by a second examiner. Disagreements were resolved in conference. To further ensure reliability and administrative quality, random examinations were additionally scored by an NNNS trainer, who observed and then independently scored the examination and evaluated agreement with the examiners. In addition, the 1-month NNNS assessment occurred in a follow-up visit in a light- and temperature-controlled child development laboratory.
### Statistics and data reduction
Descriptive statistics for each summary score were calculated for the NNNS evaluations at birth and at 1 month of age. A multivariate analysis of variance with time of NNNS screening (2 levels: birth vs. 1 month) as the within-group factor was used on the NNNS summary scores to test for significant neurobehavioral variations from birth to 1 month of age. Separate univariate repeated-measures analyses of variance were then applied to each NNNS scale to assess its change from birth to 1 month. Pearson’s bivariate correlations were used to assess the rank-order stability of each NNNS summary score from birth to 1 month. To adjust for multiple comparisons, we used the Benjamini and Hochberg criterion, with q<0.05. Potential predictors of 1-month NNNS summary scores were checked for multicollinearity, using Pearson’s and Spearman’s coefficients for continuous and categorical predictors, respectively. If two or more variables were significantly correlated, only one was included in the final set of potential predictors, according to clinically and theoretically relevant criteria. For each of the NNNS scores, the final set of potential predictors was regressed on the difference score computed by subtracting the NNNS summary score at birth from the NNNS summary score at 1 month (i.e., Δ scores). Predictors were entered or dropped in a stepwise manner in the regression models, with Δ NNNS summary scores as dependent variables to evaluate change from birth to 1 month. All the analyses were conducted with SPSS 21.0, at P<0.05.
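As a concrete illustration of the multiple-comparison step, the Benjamini and Hochberg step-up procedure can be sketched in a few lines of Python. This is illustrative code only, not the study's analysis script (which used SPSS): it finds the largest rank k with p(k) ≤ k/m·q and rejects the k smallest p-values.

```python
def benjamini_hochberg(pvalues, q=0.05):
    """Step-up FDR control: with p-values sorted ascending, reject the
    hypotheses with the k_max smallest p-values, where k_max is the
    largest rank k satisfying p_(k) <= (k / m) * q."""
    m = len(pvalues)
    order = sorted(range(m), key=lambda i: pvalues[i])
    k_max = 0
    for rank, i in enumerate(order, start=1):
        if pvalues[i] <= rank / m * q:
            k_max = rank
    reject = [False] * m
    for rank, i in enumerate(order, start=1):
        if rank <= k_max:
            reject[i] = True
    return reject

print(benjamini_hochberg([0.01, 0.04, 0.03, 0.20]))  # [True, False, False, False]
```

Note the step-up behavior: 0.04 fails its own threshold (3/4 × 0.05 = 0.0375), and only the smallest p-value survives in this example.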
## Results
### Neonatal and demographic descriptors
Clinical and sociodemographic information for the sample is provided in Table 1. The values indicate that the infants and their mothers were at low social risk and were homogeneous for neonatal, clinical, and sociodemographic variables.
### One-month NNNS normative data
Descriptive data for the NNNS assessments at birth and at 1 month are provided in Table 2. Most 1-month-old infants (80.8%) were not in the required sleep state for Habituation, compared with 57.27% in the newborn period, so this scale was excluded from further analyses. Normative cutoffs for the remaining scales are provided in Figure 1. As in the previous NNNS normative data study (11), the mean, SD, range, and percentile values for each scale are provided.
### Stability and change in NNNS performance
An overall significant multivariate effect was detected, F(12,68)=3.07, P<0.001, ηp2=0.35 (see Table 2). After adjusting with the Benjamini–Hochberg (1995) criterion, significant correlations emerged for Arousal (r=0.20, P<0.05) and Quality of movements (r=0.25, P<0.01). Birth-to-1-month significant correlations were also documented for Regulation (r=0.22, P<0.05) and Handling (r=0.21, P<0.05), but they did not survive the multiple-comparison adjustment.
### Neonatal factors associated with NNNS performance
Selected predictors were: birth weight (g), Apgar at minute 1, Hobel score, and family SES. The regression model was significant for Δ Quality of movement, R2=0.08, F(1,84)=6.96, P=0.01, and for Δ Hypotonicity, R2=0.06, F(1,84)=5.16, P=0.02. Lower Hobel score at birth was predictive of greater change in Quality of movement from birth to 1 month, B=−0.15, 95% confidence interval (CI): −2.27, −0.04, β=−0.28, t=−2.64, P=0.01. Lower birth weight was predictive of greater reduction in Hypotonicity from birth to 1 month, B=0.00, 95% CI: 0.00, 0.00, β=0.24, t=2.27, P=0.02.
## Discussion
This paper presents normative NNNS data for a sample of healthy infants assessed at birth and at 1 month of life. It provides a standard comparison to evaluate infants’ neurobehavior at 1 month of life. Consistent with Fink et al. (11) we considered the 10th and 90th percentiles as cutoff points for normative performance (see Figure 1). Scores exceeding these values at 1 month might indicate less-than-optimal development and the presence of subtle risk conditions.
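The percentile-band logic described above can be sketched in a few lines of Python. This is an illustrative sketch only: the real normative cutoffs come from Figure 1, and the nearest-rank rule used here is just one of several percentile definitions.

```python
import math

def percentile_cutoffs(scores, lo_pct=10, hi_pct=90):
    """Nearest-rank percentile cutoffs for a list of summary scores."""
    s = sorted(scores)
    n = len(s)

    def nearest_rank(p):
        # smallest value with at least p% of the data at or below it
        k = max(1, math.ceil(p / 100 * n))
        return s[k - 1]

    return nearest_rank(lo_pct), nearest_rank(hi_pct)

def outside_normative_band(score, lo, hi):
    """True when a summary score falls outside the 10th-90th percentile band."""
    return score < lo or score > hi

# Example on made-up scores 1..100: the cutoffs are 10 and 90.
lo, hi = percentile_cutoffs(list(range(1, 101)))
print(lo, hi, outside_normative_band(5, lo, hi))  # 10 90 True
```

In this scheme, a score flagged by `outside_normative_band` would correspond to a value beyond the 10th or 90th percentile of the normative sample, warranting closer monitoring.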
Owing to the infants’ healthy status, it was not surprising that very low and narrow ranges emerged for the NNNS summary scores associated with neurobehavioral indexes of illness or developmental risk (i.e., Asymmetrical reflexes, Hypertonicity, Hypotonicity, Stress/Abstinence) (12). Similarly, low normative values emerged for Excitability and Lethargy. Nonetheless, a few subjects (6.1% and 9.1%, respectively) manifested high scores (7-to-13 and 4-to-10, respectively) on these scales, suggesting the presence of a little individual variability in healthy infants even at 1 month. Thus, an infant with high scores on these scales might be more closely monitored. By contrast, large individual variability was observed for Handling procedures and Nonoptimal reflexes. These findings suggest that the infants’ ability to be soothed and regulated after external stimulation, as well as the emergence of optimal reflexes, is still developing (18). Moreover, these findings extend previous evidence (11), confirming NNNS sensitivity in depicting individual differences in a healthy population. Mid-range normative scores were found for Regulation and Arousal, with limited variability in the sample. Notably, extreme scores (too low and too high) are indicative of nonoptimal development and risk conditions for the infants. As such, it is not surprising that mid-range normative scores were observed in the present healthy sample of 1-month-old infants. The highest scores were obtained for the Orienting summary score, with very few infants (8.2%) scoring below the median. Since more than 90% of the healthy infants were able to be alert and to shift attention in response to stimuli, scores below 5 may be a potential marker of neurodevelopmental risk at 1 month.
The second aim of the present study was to assess change of NNNS summary scores from birth to 1 month in healthy infants. Neurobehavioral development during the first month of life showed a mixed pattern of stability and variation. First, as regards Habituation, due to the sleep-state-dependent nature of its items, very limited data were available for the 1-month screening (19.2% at 1 month vs. 51.5% at birth). While previous studies reported similar percentages for birth screening (54.4% (19) and 47% (11)), in the only published paper on 1-month-old healthy infants Habituation was excluded from the analyses (12). In sum, Habituation appears somewhat inapplicable in the assessment of healthy 1-month-old infants. It appears that at this age infants’ organization of sleep and awake states is rapidly changing, and only a small proportion of infants might be in the required state for the procedures of the Habituation summary score. However, the very inapplicability of Habituation at 1 month compared to the newborn period may itself be a marker of significant neurobehavioral organization and maturation in the immediate postnatal period, that is, infants are more alert and aroused. Nonetheless, the Habituation scale might be of use when the NNNS is applied to infants at risk for neurobehavioral development.
Second, seven NNNS summary scores did not change from birth to 1 month (i.e., Arousal, Handling procedures, Excitability, Nonoptimal reflexes, Asymmetrical reflexes, Hypertonicity, and Stress/abstinence). Additionally, rank-order correlations in the present study were at best moderate, and they were significant for only a limited subset of summary scores. Nonchanging scores are plausibly related to the absence of clinical concerns in our sample, especially for nonoptimal reflexes, asymmetry, hypertonia, and stress/abstinence. For the healthy sample included here, these scores were already low at birth and continued to be flat at 1 month. These summary scores are designed to evaluate neurologic functioning by counting the number of nonoptimal signs, which are generally absent or very few in a sample of healthy newborns, but they nonetheless remain critical markers of developmental risk (11).
Five NNNS summary scores (i.e., Orienting, Regulation, Quality of movements, Lethargy, and Hypotonicity) significantly changed from birth to 1 month. On the one hand, these significant changes appear to reflect a maturational shift in the neurobehavior of healthy infants during the first 4 weeks of life. They are consistent with previous literature on newborns’ behavioral and neurologic development (20, 21). For example, Hadders-Algra and Prechtl (20) observed infants’ motor and neurological development from 2 to 18 weeks, suggesting a developmental trajectory from “writhing” to “swipes and swats” movements. On the other hand, it should be noted that specific neonatal variables in the nonclinical range were nonetheless predictive of the change in scores for Quality of movement and Hypotonicity. Lower Hobel score at birth was predictive of greater increase in Quality of movement from birth to 1 month. As Fink et al. (11) documented an association between lower prenatal risk (Hobel score) and better quality of movement, these findings extend previous evidence, further confirming NNNS sensitivity in the face of minimal or nonclinical risk conditions. Finally, lower birth weight was associated with greater reduction in Hypotonicity. This may be consistent with the fact that low-birthweight newborns showed more Hypotonicity at the newborn NNNS assessment (11). As such, the present findings suggest the presence of a greater recovery of tonicity in healthy infants who had low scores at birth (22). It is also noteworthy that some variables, such as mode of delivery, were not related to 1-month performance. More importantly, the findings of relations between nonclinical levels of some variables (e.g., birthweight, risk scores) suggest the power of these variables to affect behavior, and perhaps that cutoff levels for these variables may be misleading as to their effects on development.
There are limitations to this study. The limited nonclinical range of medical and demographic variables likely underestimates their relations to neurobehavioral performance, even if they add evidence about NNNS robustness and sensitivity. Moreover, the usefulness of the normative data provided in the present sample of healthy infants needs to be tested for clinical validity in the context of at-risk or clinically ill 1-month-old infants. Certainly it would now be valuable to study a large and heterogeneous sample of infants. The strengths of the present work include the use of a prospective design, assessing the NNNS in a longitudinal way from birth to 1 month of age. Moreover, the percentage of children who returned for the 1-month assessment was very high (i.e., 99%). Finally, the procedure strictly adhered to NNNS administration guidelines.
## Conclusion
The present study provides standardized normative scores for the neurobehavioral examination of healthy infants at 1 month of age. To date, this is the first study presenting data on healthy infant neurobehavioral development in the first month of life in a longitudinal manner. The significant neurobehavioral changes from birth to 1 month clearly suggest that the normative values available for newborns (11) cannot be used to characterize the neurobehavior of 1-month-olds. Although developmental changes of specific NNNS summary scores are likely related to maturational shifts and/or even subclinical neonatal variables, the standardized percentiles reported in this paper appear to be a prospectively valid criterion to evaluate the neurobehavioral performance of healthy infants. Moreover, the present normative data should be considered as a criterion to evaluate the presence of specific abnormalities and deficits in the neurobehavioral profile of infants in different risk conditions in very early postnatal life. Additionally, since pediatricians might face obstacles in assessing clinically at-risk newborns during the immediate hours after birth (3), whereas a visit at or around 1 month is typical in standard practice, the normative values provided here make for a reliable comparison criterion for typical infants and for infants with concerning clinical conditions.
## References
1. Lester BM, Tronick EZ. History and description of the Neonatal Intensive Care Unit Network Neurobehavioral Scale. Pediatrics 2004;113(3 Pt 2):634–640.
2. Lester BM, Tronick EZ, LaGasse L, et al. Summary statistics of neonatal intensive care unit network neurobehavioral scale scores from the maternal lifestyle study: a quasinormative sample. Pediatrics 2004;113:668–675.
3. Lester BM, Tronick EZ, Brazelton TB. The Neonatal Intensive Care Unit Network Neurobehavioral Scale procedures. Pediatrics 2004;113(3):641–667.
4. Lester BM, Tronick EZ, LaGasse L, et al. The maternal lifestyle study: effects of substance exposure during pregnancy on neurodevelopmental outcome in 1-month-old infants. Pediatrics 2002;110(6):1182–1192.
5. Salisbury AL, Lester BM, Seifer R, et al. Prenatal cocaine use and maternal depression: effects on infant neurobehavior. Neurotoxicol Teratol 2007;29(3):331–340.
6. Coyle MG, Salisbury AL, Lester BM, et al. Neonatal neurobehavior effects following buprenorphine versus methadone exposure. Addiction 2012;107(Suppl 1):63–73.
7. Montirosso R, Del Prete A, Bellu R, Tronick E, Borgatti R; Neonatal Adequate Care for Quality of Life Study Group (NEO-ACQUA). Level of NICU quality of developmental care and neurobehavioral performance in very preterm infants. Pediatrics 2012;129(5):e1129–e1137.
8. Liu J, Bann C, Lester B, et al. Neonatal neurobehavior predicts medical and behavioral outcome. Pediatrics 2010;125(1):e90–e98.
9. Sucharew H, Khoury JC, Xu Y, Succop P, Yolton K. NICU Network Neurobehavioral Scale profiles predict developmental outcomes in a low-risk sample. Paediatr Perinat Epidemiol 2012;26(4):344–352.
10. Tronick E, Lester BM. Grandchild of the NBAS: the NICU Network Neurobehavioral Scale (NNNS): a review of the research using the NNNS. J Child Adolesc Psychiatr Nurs 2013;26(3):193–203.
11. Fink NS, Tronick E, Olson K, Lester B. Healthy newborns’ neurobehavior: norms and relations to medical and demographic factors. J Pediatr 2012;161(6):1073–1079.e3.
12. Spittle AJ, Walsh J, Olsen JE, et al. Neurobehaviour and neurological development in the first month after birth for infants born between 32–42 weeks’ gestation. Early Hum Dev 2016;96:7–14.
13. Lester BM, Miller RJ, Hawes K, et al. Infant neurobehavioral development. Semin Perinatol 2011;35(1):8–19.
14. Noble Y, Boyd R. Neonatal assessments for the preterm infant up to 4 months corrected age: a systematic review. Dev Med Child Neurol 2012;54(2):129–139.
15. Hobel CJ, Youkeles L, Forsythe A. Prenatal and intrapartum high-risk screening: II. Risk factors reassessed. Am J Obstet Gynecol 1979;135(8):1051–1056.
16. Maloni JA, Kane JH, Suen L, Wang KK. Dysphoria among high-risk pregnant hospitalized women on bed rest: a longitudinal study. Nurs Res 2002;51(2):92–99.
17. Hollingshead AB. Four Factor Index of Social Status. New Haven, CT: Yale University, 1968.
18. DeSantis A, Harkins D, Tronick E, Kaplan E, Beeghly M. Exploring an integrative model of infant behavior: what is the relationship among temperament, sensory processing, and neurobehavioral measures? Infant Behav Dev 2011;34(2):280–292.
19. Tronick EZ, Olson K, Rosenberg R, Bohne L, Lu J, Lester BM. Normative neurobehavioral performance of healthy infants on the Neonatal Intensive Care Unit Network Neurobehavioral Scale. Pediatrics 2004;113(3 Pt 2):676–678.
20. Hadders-Algra M, Prechtl HF. Developmental course of general movements in early infancy. I. Descriptive analysis of change in form. Early Hum Dev 28(3):201–213.
21. Mirmiran M, Lunshof S. Perinatal development of human circadian rhythms. Prog Brain Res 1996;111:217–226.
22. Brown N, Spittle A. Neurobehavioral evaluation in the preterm and term infant. Curr Pediatr Rev 2014;10(1):65–72.
## Acknowledgements
This study was supported by Standardization of the NRN-Neurobehavioral Scale, National Institute of Child Health and Human Development (R01HD37138 to E.T.). The supporting agency had no role in the study design; the collection, analysis, and interpretation of data; the writing of the manuscript; or the decision to submit the manuscript for publication.
## Author information
### Corresponding author
Correspondence to Ed Tronick.
## Ethics declarations
### Competing interests
The authors declare no conflict of interests.
Provenzi, L., Olson, K., Giusti, L. et al. NICU Network Neurobehavioral Scale: 1-month normative data and variation from birth to 1 month. Pediatr Res 83, 1104–1109 (2018). https://doi.org/10.1038/pr.2018.25
• Development of an abbreviated symptom score for the neonatal abstinence syndrome. I. Chervoneva, F. Blanco & W. K. Kraft. Journal of Perinatology (2020)
• Psychosocial and medical adversity associated with neonatal neurobehavior in infants born before 30 weeks gestation. Julie A. Hofheimer, Lynne M. Smith, Elisabeth C. McGowan, T. Michael O’Shea, Brian S. Carter, Charles R. Neal, Jennifer B. Helderman, Steven L. Pastyrnak, Antoine Soliman, Lynne M. Dansereau, Sheri A. DellaGrotta & Barry M. Lester. Pediatric Research (2020)
• The Impact of Neurobehavior on Feeding Outcomes in Neonates with Congenital Heart Disease. Lindsey Gakenheimer-Smith, Kristi Glotzbach, Zhining Ou, Angela P. Presson, Michael Puchalski, Courtney Jones, Linda Lambert
https://eccc.weizmann.ac.il/keyword/17036/ | Under the auspices of the Computational Complexity Foundation (CCF)
Reports tagged with permutation branching programs:
TR10-113 | 16th July 2010
Michal Koucky, Prajakta Nimbhorkar, Pavel Pudlak
#### Pseudorandom Generators for Group Products
We prove that the pseudorandom generator introduced in Impagliazzo et al. (1994) fools group products of a given finite group. The seed length is $O(\log n \log 1 / \epsilon)$, where $n$ is the length of the word and $\epsilon$ is the error. The result is equivalent to the statement …
TR12-083 | 29th June 2012
Thomas Steinke
#### Pseudorandomness for Permutation Branching Programs Without the Group Theory
We exhibit an explicit pseudorandom generator that stretches an $O \left( \left( w^4 \log w + \log (1/\varepsilon) \right) \cdot \log n \right)$-bit random seed to $n$ pseudorandom bits that cannot be distinguished from truly random bits by a permutation branching program of width $w$ with probability more than $\varepsilon$. …
TR20-138 | 9th September 2020
William Hoza, Edward Pyne, Salil Vadhan
#### Pseudorandom Generators for Unbounded-Width Permutation Branching Programs
We prove that the Impagliazzo-Nisan-Wigderson (STOC 1994) pseudorandom generator (PRG) fools ordered (read-once) permutation branching programs of unbounded width with a seed length of $\widetilde{O}(\log d + \log n \cdot \log(1/\varepsilon))$, assuming the program has only one accepting vertex in the final layer. Here, $n$ is the length of the …
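To make the model in these abstracts concrete: a width-$w$, length-$n$ read-once permutation branching program keeps a state in $\{0,\dots,w-1\}$, and each input bit selects one of two fixed permutations for its layer. The sketch below is illustrative only; the function names and the acceptance convention (accept iff the composed permutation maps state 0 back to 0) are my choices, not taken from any of the reports:

```python
import random

def make_program(n, w, rng=random.Random(0)):
    """A length-n, width-w permutation branching program: layer i holds two
    permutations of {0..w-1}; input bit x_i selects which one is applied."""
    return [(rng.sample(range(w), w), rng.sample(range(w), w)) for _ in range(n)]

def run(program, x):
    """Read each input bit once, composing the selected permutations;
    accept iff the composition maps the start state 0 back to 0."""
    state = 0
    for (p0, p1), bit in zip(program, x):
        state = (p1 if bit else p0)[state]
    return state == 0

# Acceptance probability under the uniform distribution over 8-bit inputs;
# a PRG for this model must approximate this using far fewer random bits.
prog = make_program(n=8, w=3)
acceptance = sum(run(prog, [(i >> j) & 1 for j in range(8)])
                 for i in range(256)) / 256
```

A PRG with seed length as in the results above would replace the 256-term exhaustive sum with a much smaller pseudorandom sample.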
ISSN 1433-8092
https://zbmath.org/?q=ai%3Aseo.soogil+se%3A00000336 | # zbMATH — the first resource for mathematics
On the Galois cohomology of ideal class groups. (English) Zbl 1139.11046
Let $$K/k$$ be a finite Galois extension of number fields with Galois group $$G$$ and put $$R = \mathbb Z [\frac 12][G]$$. Using étale cohomology, the authors derive the isomorphism of Tate cohomology groups,
$\widehat H^{a+2} (J, e (\mathcal O_{K,S}^{\times})') \simeq \widehat H^a (J, e\operatorname{Pic} (\mathcal O_{K,S})'),$ where $$J$$ is any subgroup of $$G$$, $$e$$ a central idempotent of $$R$$, and the primes mean that the modules are considered as Galois modules over $$R$$. This result is specialized for the case that $$K$$ is a CM-field and furthermore for $$K = \mathbb Q (\zeta_{p^n})^+$$, where $$p$$ is an odd prime. This yields alternative proofs for results of P. Cornacchia and C. Greither [J. Number Theory 73, 459–471 (1998; Zbl 0926.11085)], R. Schoof [Math. Comput. 72, 913–937 (2003; Zbl 1052.11071)] and the second author [Acta Arith. 120, 337–348 (2005; Zbl 1139.11047)].
##### MSC:
11R34 Galois cohomology
11S40 Zeta functions and $$L$$-functions
##### Keywords:
étale cohomology; Tate cohomology group
https://openstax.org/books/introduction-sociology-3e/pages/16-introduction | Introduction to Sociology 3e
# Introduction
Figure 16.1 High school and college graduation often marks a milestone for families, friends, and even the wider community. Education, however, occurs in many venues and with far ranging outcomes. (Credit: Kevin Dooley/flickr)
“What the educator does in teaching is to make it possible for the students to become themselves” (Paulo Freire, Pedagogy of the Oppressed). David Simon, in his book Social Problems and the Sociological Imagination: A Paradigm for Analysis (1995), points to the notion that social problems are, in essence, contradictions—that is, statements, ideas, or features of a situation that are opposed to one another. Consider then, that one of the greatest expectations in U.S. society is that to attain any form of success in life, a person needs an education. In fact, a college degree is rapidly becoming an expectation at many levels of success, not merely an enhancement to our occupational choices. And, as you might expect, the number of people graduating from college in the United States continues to rise dramatically.
The contradiction, however, lies in the fact that the more impactful a college degree has become, the harder it has become to achieve it. The cost of getting a college degree has risen sharply since the mid-1980s, while many important forms of government support have barely increased.
The net result is that those who do graduate from college are likely to begin a career in debt. As of 2009, a typical student’s loans amounted to around $23,000. Ten years later, the average amount of debt for students who took loans grew to over $30,000. The overall national student loan debt topped $1.6 trillion in 2020, according to the Federal Reserve. These rising costs and risky debt burdens have led to a number of diverse proposals for solutions. Some call for cancelling current college debt and making more colleges free to qualifying students. Others advocate for more focused and efficient education in order to achieve needed career requirements more quickly. Employers, seeking both to widen their applicant pool and increase equity among their workforce, have increasingly sought ways to eliminate unnecessary degree requirements: If a person has the skills and knowledge to do the job, they have more access to it (Kerr 2020).

Figure 16.2 Unemployment rates for people age 25 and older by educational attainment. As can be seen in the graph, the overall unemployment rate began falling in 2009 after it peaked during the financial crisis and continued its downward trend through the decade from 2010 to 2020. (This graph does not account for the unemployment spike during the COVID-19 pandemic.) Note the differences in educational attainment and their impact on unemployment. People with bachelor's degrees have always had the lowest levels of unemployment, while those without a high school diploma have always had the highest level. (Credit: Bureau of Labor Statistics)

Is a college degree still worth it? Lifetime earnings among those with a college degree are, on average, still much higher than for those without. A 2019 Federal Reserve report indicated that, on average, college graduates earn $30,000 per year more than non-college graduates. Also, that wage gap has nearly doubled in the past 40 years (Abel 2019).
Is the wage advantage enough to overcome the potential debt? And what’s behind those averages? Remember, since the $30,000 figure is an average, it also confirms what we see from other data: certain people and certain college majors earn far more than others. As a result, earning a college degree in a field that has a smaller wage advantage over non-college graduates might not seem “worth it.”
But is college worth more than money?
A student earning an Associate’s or Bachelor’s degree will often take a wide array of courses, including many outside of their major. The student is exposed to a fairly broad range of topics, from mathematics and the physical sciences to history and literature, the social sciences, and music and art, through introductory and survey-style courses. It is in this period that the student’s world view is, it is hoped, expanded. Then, when they begin the process of specialization, it is with a much broader perspective than they might otherwise have had. This additional “cultural capital” can further enrich the life of the student, enhance their ability to work with experienced professionals, and build wisdom upon knowledge. Over two thousand years ago, Socrates said, “The unexamined life is not worth living.” The real value of an education, then, is to enhance our skill at self-examination. Education, its impact, and its costs are important not just to sociologists, but to policymakers, employers, and of course to parents.
http://tex.stackexchange.com/questions/108455/how-to-generate-a-table-of-trigonometric-functions-that-can-be-broken-across-pag | # How to generate a table of trigonometric functions that can be broken across pages?
Consider the following screenshot.
The features I want to have are:
• I can shrink the page size and the table is allowed to break across pages.
• The background of odd rows must be different from that of the even rows to ease reading.
• The values must be calculated automatically rather than by hand-made data entry.
## MWE
\documentclass[preview,border=12pt,varwidth]{standalone}
\usepackage[nomessages]{fp}
\usepackage[table]{xcolor}
\usepackage{longtable}
\usepackage{pgffor}
\begin{document}
\topskip=0pt
\begin{longtable}{*2{|>{$\displaystyle}c<{$}}|}\hline
\theta & \sin \theta\\
\foreach \x in {0,10,...,360}{\x & \FPeval\temp{round(\x:3)}\temp\\}
\end{longtable}
\end{document}
Your solution should combine (1) longtable, (2) xcolor with the table option, (3) either fp or pgfmath. – Werner Apr 13 '13 at 5:22
Related tex.stackexchange.com/a/33771/1410 for zebra stripes. – morbusg Apr 13 '13 at 8:46
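The MWE above rounds the angle itself rather than computing a sine. As a sketch of just the calculation ingredient the comments mention, fp can evaluate and round trigonometric values; note that fp's `sin` works in radians, so the degree argument is converted first (the numeric literal stands in for π, an assumption about the simplest portable fp usage):

```latex
\documentclass{article}
\usepackage[nomessages]{fp}
\usepackage{pgffor}
\begin{document}
% Print rounded sine values; fp's trig functions expect radians.
\foreach \x in {0,10,...,90}{%
  \FPeval\s{round(sin(\x*3.141592653589793/180):4)}%
  $\sin(\x^\circ) = \s$\par
}
\end{document}
```

The same `\FPeval` line can then feed the table cells in either of the answers below.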
A simpler route is the calculator package, which can evaluate the trigonometric functions directly. The supertabular package then provides the repeated heading/footer on each page of the table.
Here is a simpler example:
Code
\documentclass{article}
\usepackage[nomessages]{fp}
\usepackage[table]{xcolor}
\usepackage{calculator}
\usepackage{forloop}
\usepackage{supertabular}
\usepackage{longtable}
\usepackage{fullpage}
\begin{document}
\newcounter{theangle}
\tablefirsthead{%
\cline{2-4}
\rowcolor{white}
\multicolumn{1}{c|}{ } &
\cos \theta & \sin \theta & \tan \theta &
\multicolumn{1}{c}{ } \\
\hline
}
\tablehead{%
\rowcolor{white}
\multicolumn{5}{c}{Table continued \ldots}\\
\cline{2-4}
\rowcolor{white}
\multicolumn{1}{c|}{ } &
\cos \theta & \sin \theta & \tan \theta &
\multicolumn{1}{c}{ } \\
\hline
}
\tablelasttail{
\rowcolor{white}
\multicolumn{1}{c|}{ } &
\cos \theta & \sin \theta & \tan \theta &
\multicolumn{1}{c}{ }\\
\cline{2-4}
}
\tabletail{
\hline
\rowcolor{white}
\multicolumn{1}{c|}{ } &
\cos \theta & \sin \theta & \tan \theta &
\multicolumn{1}{c}{ } \\
\cline{2-4}
\rowcolor{white}
\multicolumn{5}{c}{Continued on next page \ldots}\\
}
\rowcolors{2}{gray!50}{white}
\begin{center}
\begin{supertabular}{*5{|>{$}c<{$}}|}
\forloop{theangle}{0}{\value{theangle} < 360}{
\arabic{theangle}^\circ &
\DEGREESCOS{\value{theangle}}{\solx} \solx &
\DEGREESSIN{\value{theangle}}{\solx} \solx &
\DEGREESTAN{\value{theangle}}{\solx} \solx &
\arabic{theangle}^\circ\\
}
\end{supertabular}
\end{center}
\end{document}
Output
May be with example to benefit wider audience. – texenthusiast Apr 13 '13 at 5:55
## Packages
• xcolor with the option table for \rowcolor
• pgfplotstable (and internally pgfmath) that is used to build the entire table. The pgfmath package helps us to create the trigonometric values (which are all built-in)
• longtable to allow page-breaks in one table (needs multiple passes)
• siunitx to typeset numbers in tables (pgfmath does a good job on number printing already, but that is not very well suited for tables)
• booktabs for nice rules.
## \pgfplotstableset
### Auxiliary styles
Styles prepended with an @ are new styles created by me. Neither is the @ needed (just to separate pgfplotstable’s styles from mine) nor are the names set in stone. These styles are used with the .list handler to ease the creation of the columns.
The line
@create function/.list={sin,cos,tan,cot,cosec,sec},
builds the main columns, note that the csc function is named cosec in PGF. The column is named cosec but I later change the column name to csc. (One may also note that one should use proper column names, e.g. $\phi$ and $\sin \phi$ instead of empty column names and non-math-mode but math-function names.)
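The rename from cosec to csc can be done with a short append style; this is a sketch using pgfplotstable's `columns/⟨name⟩` key, keeping the `\multicolumn` wrapper used for the other headers (the exact line in the full answer may differ):

```latex
\pgfplotstableset{
  columns/cosec/.append style={
    column name=\multicolumn{1}{c}{csc}},
}
```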
Note the string type key, this deactivates PGF printing number features (but not the PGF math calculation of the columns).
### longtable setup
The basic keys begin table and end table are used to set the internal table environment from tabular to longtable.
The every head row style is (mis-)used to set up the special rows (these are longtable features). See the longtable manual for more.
### \pgfplotstablenew
Let's create 91 rows (+ header (e.g. longtable preamble)):
\pgfplotstablenew[
columns={left,sin,cos,tan,cot,sec,cosec,right},
]{91}\myTable
### \sisetup
Certain siunitx settings are made before the actual typesetting (this could have been done in the S[…] column specifications, too). These settings are needed so that siunitx doesn’t try to set up numbers in scientific notation, as PGF math may give output in the form 1.746e-2 (pretty much everywhere).
### \pgfplotstabletypeset
Finally!
Here the auxiliary styles for setting up the column types are used.
The @secure header style uses the \multicolumn macro to hide the content from siunitx’s parsing. The usual approach of putting the content in braces, e.g. {sin}, doesn’t appear to work here.
The left and the right columns use plain r columns (we could use S here, too) with an appended \si{\degree} via array’s <{…} syntax. This is also the reason the header needs an empty \multicolumn{1}{c}{} entry.
# Code
\documentclass{article}
\usepackage[table]{xcolor}
\usepackage{pgfplotstable}
\usepackage{longtable}
\usepackage{siunitx}
\usepackage{booktabs}
\pgfplotstableset{
% helpers
@create function/.style={
create on use/#1/.style={
create col/expr=#1(\thisrow{left})},
columns/#1/.append style={
column name=\multicolumn{1}{c}{#1}}},
@set columns to siunitx type 1/.style={
columns/#1/.append style={
string type,
column type={S[table-format=1.4]}}},
@set columns to siunitx type 2/.style={
columns/#1/.append style={
string type,
string replace={inf}{\multicolumn{1}{c}{$\infty$}},
column type={S[table-format=2.4]}}},
@set columns to siunitx type 3/.style={
columns/#1/.append style={
string type,
string replace={inf}{\multicolumn{1}{c}{$\infty$}},
column type={S[table-format=2.3]}}},
@set columns to basic style/.style={
columns/#1/.append style={
column type={r<{\si{\degree}}}},
columns/#1/.append style={
column name={\multicolumn{1}{c}{}}}},
%
% the left and right columns
create on use/left/.style={
create col/expr=\pgfplotstablerow},
create on use/right/.style={
create col/expr={90-\thisrow{left}}},
%
% Let's start: the functions
@create function/.list={sin,cos,tan,cot,cosec,sec},
% The longtable setup
begin table=\begin{longtable},
end table=\end{longtable},
every head row/.style={
before row=\toprule,
after row={%
\midrule
\multicolumn{1}{c}{} & {cos} & {sin} & {cot} & {tan} & {csc} & {sec} & \multicolumn{1}{r}{\dots}\\ \bottomrule
\endfoot
\midrule
\multicolumn{1}{c}{} & {cos} & {sin} & {cot} & {tan} & {csc} & {sec} & \multicolumn{1}{r}{} \\ \bottomrule
\endlastfoot}},
every odd row/.style={before row={\rowcolor[gray]{.9}}},
}
\pgfplotstablenew[
columns={left,sin,cos,tan,cot,sec,cosec,right},
]{91}\myTable
\begin{document}
\sisetup{scientific-notation = fixed, fixed-exponent = 0, table-auto-round=true}
\pgfplotstabletypeset[
% the column types
@set columns to siunitx type 1/.list={sin,cos},
@set columns to siunitx type 2=tan,
@set columns to siunitx type 3/.list={cot,sec,cosec},
@set columns to basic style/.list={left,right},
https://www.semanticscholar.org/paper/Evidence-for-top-quark-production-in-collisions-Collaboration/0f6b42eb3f42dccbf797936fd89cd721a4ca63c1 | # Evidence for top quark production in nucleus-nucleus collisions
@inproceedings{Collaboration2020EvidenceFT,
title={Evidence for top quark production in nucleus-nucleus collisions},
author={C. Collaboration},
year={2020}
}
Evidence for the production of top quarks in heavy ion collisions is reported in a data sample of lead-lead collisions recorded in 2018 by the CMS experiment at a nucleon-nucleon center-of-mass energy of √s_NN = 5.02 TeV, corresponding to an integrated luminosity of 1.7 ± 0.1 nb^-1. Top quark pair (tt) production is measured in events with two opposite-sign high-pT isolated leptons (l±l∓ = e+e−, μ+μ−, and e±μ∓). We test the sensitivity to the tt signal process by requiring or not the …
1 Citations
Evidence for Top Quark Production in Nucleus-Nucleus Collisions.
The first evidence for the production of top quarks in nucleus-nucleus collisions is reported, using lead-lead collision data at a nucleon-nucleon center-of-mass energy of 5.02 TeV recorded by the CMS experiment.
#### References
Showing 1–10 of 27 references
Top-quark production in proton–nucleus and nucleus–nucleus collisions at LHC energies and beyond
Physics, 2015
Abstract: Single and pair top-quark production in proton–lead (p–Pb) and lead–lead (Pb–Pb) collisions at the CERN Large Hadron Collider (LHC) and Future Circular Collider (FCC) energies are studied …
Top-quark and Higgs boson perspectives at heavy-ion colliders
Abstract: The perspectives for measuring the top quark and the Higgs boson in nuclear collisions at the LHC and Future Circular Collider (FCC) are summarized. Perturbative QCD calculations at (N)NLO …
Measurement of the top quark mass in the all-jets final state at $\sqrt{s} =$ 13 TeV and combination with the lepton+jets channel
A top quark mass measurement is performed using 35.9 fb−1 of LHC proton-proton collision data collected with the CMS detector at √s = 13 TeV. The measurement uses the tt all-jets final state. A …
Study of W boson production in PbPb and pp collisions at √s_NN = 2.76 TeV
A measurement is presented of W-boson production in PbPb collisions carried out at a nucleon-nucleon (NN) centre-of-mass energy √s_NN of 2.76 TeV at the LHC using the CMS detector. In data …
Probing the Time Structure of the Quark-Gluon Plasma with Top Quarks.
Physics, Medicine
Physical Review Letters, 2018
It is found that the LHC has the potential to bring first limited information on the time structure of the QGP and that the length of the time delay can be constrained by selecting specific reconstructed top-quark momenta.
Constraints on the gluon PDF from top quark pair production at hadron colliders
Physics, 2013
Abstract: Using the recently derived NNLO cross sections [1], we provide NNLO+NNLL theoretical predictions for top quark pair production based on all the available NNLO PDF sets, and compare them with …
Observation of a new boson at a mass of 125 GeV with the CMS experiment at the LHC
Results are presented from searches for the standard model Higgs boson in proton-proton collisions at √s = 7 and 8 TeV in the Compact Muon Solenoid experiment at the LHC, using data samples …
Top Quark Production
I discuss top quark production in hadronic collisions. I present the soft-gluon resummation formalism and its derivation from factorization and renormalization-group evolution, and two-loop …
Vector boson pair production at the LHC
Physics, 2011
We present phenomenological results for vector boson pair production at the LHC, obtained using the parton-level next-to-leading order program MCFM. We include the implementation of a new process in …
arXiv: Higgs boson production in photon-photon interactions with proton, light-ion, and heavy-ion beams at current and future colliders
Physics, 2020
The production of the Higgs boson in photon-photon interactions with proton and nucleus beams at three planned or proposed future CERN colliders --- the high-luminosity Large Hadron Collider …
https://cs.stackexchange.com/questions/77277/list-count-of-occurrences-pairs-triplets-etc-from-sets | # List count of occurrences pairs, triplets, etc. from sets
A receipt is an array of products. I have an array of receipts.
I need to generate a report in where I can find the products often bought together.
For instance, for a single receipt where the products bought are [A, A, B, B, B, C], a report would look like
Pair - count
A&B - 2
A&A - 1
B&B - 1
A&C - 1
B&C - 1
Notice that the pair B&A is not counted separately, because it is the same pair as A&B (and similarly for the other symmetric pairs). The output also has to be sorted by count, decreasing. Also notice that this table is from a single receipt, whereas the required report must aggregate counts over multiple receipts.
I also need the algorithm to scale to triples and quadruples, not just pairs.
How would I create an algorithm like this?
There are no set time-efficiency constraints. In fact, a more readable solution is preferred over a more efficient one, although efficiency is also appreciated. A Python solution is also appreciated, but any language or pseudocode will do.
• If time is not an issue, why not use the naive algorithm ($n^k$ $k$-tuples of $n$ products)? That said, you seem to be looking or some sort of correlation, which is probably standard in data mining and/or machine learning. I expect there to be smarter methods. – Raphael Jun 27 '17 at 6:13
• I don't quite understand what you're trying to accomplish, but it looks like a pretty standard programming exercise, which you don't really need us for. Use hash tables, count stuff, and sort the results. – Yuval Filmus Jun 27 '17 at 11:24
• Details in the solution for it would be appreciated, thanks – Berry Jun 27 '17 at 11:48
• docs.python.org/2/library/itertools.html Use the permutation function and just set the length you want. – Cole Jun 29 '17 at 5:11
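Following the hash-table suggestion in the comments, here is a minimal Python sketch. One reading of the question's example (A&B counted twice, B&B only once) is that a tuple's count within a receipt is the number of disjoint copies of that tuple the receipt can supply; the min/floor rule below is my interpretation of those numbers, not something the question states explicitly. The `k` parameter generalizes the same rule to triples and quadruples:

```python
from collections import Counter
from itertools import combinations_with_replacement

def together_report(receipts, k=2):
    """Count k-tuples of products bought together across all receipts.

    Within one receipt, a tuple's count is how many disjoint copies of it
    the receipt can supply, e.g. [A, A, B, B, B] yields A&B twice but
    B&B only once (3 // 2).  Tuples are kept sorted, so A&B == B&A.
    """
    totals = Counter()
    for receipt in receipts:
        counts = Counter(receipt)
        # Distinct sorted products, so each unordered tuple appears once.
        for combo in combinations_with_replacement(sorted(counts), k):
            need = Counter(combo)  # multiplicities the tuple requires
            n = min(counts[p] // need[p] for p in need)
            if n > 0:
                totals[combo] += n
    # Sort by count, decreasing; ties broken alphabetically for stable output.
    return sorted(totals.items(), key=lambda kv: (-kv[1], kv[0]))

report = together_report([["A", "A", "B", "B", "B", "C"]], k=2)
```

For the receipt [A, A, B, B, B, C] with k=2 this reproduces the question's table, with A&B first at count 2; passing k=3 produces the triples report.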
http://bootmath.com/first-theorem-in-topological-vector-spaces.html | First theorem in Topological vector spaces.
I came across this theorem and I am disappointed at not being able to understand it or to have any intuition for it. I would be glad to get help.
Theorem: If $K$ and $C$ are subsets of a topological vector space (TVS) $X$, $K$ is compact, $C$ is closed, and $K \cap C =\varnothing$, then $0$ has a nbhd $V$ such that $(K+V)\cap ( C+V) = \varnothing$.
My first question is how to find a symmetric nbhd; I can't seem to understand the proof. I would also appreciate:
• an example illustrating continuity in a TVS;
• the concept behind a local basis, and how every other basis can be deduced from a local basis.
I think one example would make me move forward.
Thank you for giving your time.
I'll give an extremely detailed explanation of theorem 1.10 from Rudin's Functional Analysis. In this explanation I'll use a couple of standard results about topological vector spaces, which you can skip if you already know them. Throughout the text, $X$ is a topological vector space.
Lemma 1. Let $a\in X$ and $\lambda\in \mathbb{C}\setminus\{0\}$, then the maps
$$T_{a}:X\to X:x\mapsto x+a\qquad M_\lambda:X\to X:x\mapsto\lambda x$$ are
homeomorphisms.
Proof. Since addition and scalar multiplication are continuous in topological vector spaces, $T_a$ and $M_\lambda$ are continuous. By the same reasoning the maps $T_{-a}$ and $M_{\lambda^{-1}}$ are continuous too. Note that
$$T_a\circ T_{-a}=T_{-a}\circ T_{a}=\mathrm{Id}_X,$$
$$M_{\lambda}\circ M_{\lambda^{-1}}=M_{\lambda^{-1}}\circ M_{\lambda}=\mathrm{Id}_X$$
Indeed, for all $x\in X$ we have
$$(T_a\circ T_{-a})(x)=(T_a(T_{-a}(x)))=T_a(x-a)=(x-a)+a=x$$
$$(T_{-a}\circ T_{a})(x)=(T_{-a}(T_{a}(x)))=T_{-a}(x+a)=(x+a)-a=x$$
$$(M_{\lambda}\circ M_{\lambda^{-1}})(x)=(M_{\lambda}(M_{\lambda^{-1}}(x))=M_\lambda(\lambda^{-1}x)=\lambda(\lambda^{-1}x)=x$$
$$(M_{\lambda^{-1}}\circ M_{\lambda})(x)=(M_{\lambda^{-1}}(M_{\lambda}(x))=M_{\lambda^{-1}}(\lambda x)=\lambda^{-1}(\lambda x)=x$$
Since the continuous maps $T_{a}$ and $M_\lambda$ have continuous inverses, they are homeomorphisms.
Lemma 2. Let $x_0\in X$ and let $V$ be a neighborhood of zero; then $x_0+V$ is a neighborhood of $x_0$.
Proof.
By lemma 1 the map $T_{x_0}$ is a homeomorphism. Hence image of each open set under the map $T_{x_0}$ is open. In particular $T_{x_0}(V)=x_0+V$ is open. Since $x_0\in x_0+V$ and $x_0+V$ is open, then $x_0+V$ is a neighborhood of $x_0$.
Lemma 3. Let $V$ be a neighborhood of zero and let $F\subset X$ be any subset; then $F+V$ is an
open set containing $F$.
Proof. Since $V$ is a neighborhood of zero, then from lemma 2 it follows that
$$F=\bigcup\limits_{x\in F}\{x\}\subset\bigcup\limits_{x\in F}(x+V)=F+V.$$
Thus $F\subset F+V$. By lemma 2 the sets $x+V$ are open, so the set $F+V=\bigcup_{x\in F}(x+V)$ is open as a union of open sets.
Lemma 4. Let $U,V$ be neighbourhoods of zero, then
1) the sets $U\cap V$, $U\cup V$, $-U$ are neighborhoods of zero.
2) the set $U\cap(-U)$ is a symmetric neighborhood of zero.
3) if $U$ is a symmetric neighbourhood of zero, then so is $U+U$.
Proof. 1) Since $U$, $V$ are neighborhoods of zero, $U$ and $V$ are open and $0\in U$, $0\in V$. Since $U$ and $V$ are open, so are $U\cap V$ and $U\cup V$. Since $0\in U$ and $0\in V$, we have $0\in U\cap V$ and $0\in U\cup V$. Hence $U\cap V$ and $U\cup V$ are neighborhoods of zero. Since $0\in U$, we have $0=-0\in-U$. From lemma 1 we know that $-U=M_{-1}(U)$, i.e. $-U$ is the image of the open set $U$ under the homeomorphism $M_{-1}$, hence $-U$ is open. Since $-U$ is open and $0\in -U$, $-U$ is a neighborhood of zero.
2) From the previous paragraph it follows that $U\cap(-U)$ is a neighborhood of zero. Now we see that
$$-(U\cap(-U))=-(\{x:x\in U\}\cap\{-x:x\in U\})=\{-x:x\in U\}\cap\{-(-x):x\in U\}=\{-x:x\in U\}\cap\{x:x\in U\}=U\cap(-U),$$
hence $U\cap(-U)$ is a symmetric neighborhood of zero.
3) By lemma 3, $U+U$ is an open set. Since $U$ is a neighbourhood of zero, $0\in U$, hence $0=0+0\in U+U$. Thus $U+U$ is a neighborhood of zero. A direct check shows
$$-(U+U)=-\{x+y:x\in U, y\in U\}=\{-x-y:x\in U, y\in U\}=\{\hat{x}+\hat{y}:\hat{x}\in -U, \hat{y}\in -U\}=\{\hat{x}+\hat{y}:\hat{x}\in U, \hat{y}\in U\}=U+U$$
So $U+U$ is a symmetric neighbourhood of zero.
Lemma 5. Let $W$ be a neighborhood of zero. Then there exists a symmetric
neighborhood of zero $V$ such that $V+V+V+V\subset W$.
Proof. Since $W$ is a neighborhood of zero, then $0\in W$ and $W$ is open, then from equality $0+0=0$ and continuity of addition in topological vector spaces it follows that there exist neighborhoods of zero $U_1$, $U_2$ such that $U_1+U_2\subset W$. By lemma 4 the set $U_0=U_1\cap U_2$ is a neighborhood of zero. By lemma 4 the set $U=U_0\cap(-U_0)$ is a symmetric neighborhood of zero. Now note that
$$U+U=\{x+y:x\in U, y\in U\}\subset \{x+y:x\in U_1,y\in U_2\}=U_1+U_2\subset W$$
Applying this result to the neighborhood of zero $U$, we get a symmetric neighborhood of zero $V$ such that $V+V\subset U$. Hence
$$V+V+V+V=\{x+y:x\in V+V,y\in V+V\}\subset\{x+y:x\in U,y\in U\}= U+U\subset W.$$
Theorem. Let $K\subset X$ be compact, let $C\subset X$ be closed, and suppose
$K\cap C=\varnothing$. Then there exists a neighborhood of zero $V$ such that
$(K+V)\cap(C+V)=\varnothing$.
Proof. Case 1. If $K=\varnothing$, then for an arbitrary neighborhood of zero $V$ we have $K+V=\varnothing$. Indeed, if $K+V\neq\varnothing$, then there exist $x\in V$ and $y\in K$ such that $x+y\in K+V$. Since there exists some $y\in K$, we get $K\neq\varnothing$: contradiction, hence $K+V=\varnothing$. Since $K+V=\varnothing$, also $(K+V)\cap(C+V)=\varnothing$.
Case 2. If $K\neq\varnothing$, then fix $x\in K$. Since $K\cap C=\varnothing$, we have $x\notin C$, which is equivalent to $x\in X\setminus C$. Since $C$ is closed, $X\setminus C$ is open. Since $x\in X\setminus C$ and $X\setminus C$ is open, there exists an open set $W_x\subset X\setminus C$ such that $x\in W_x$. Note that $X$ is a topological vector space, so by lemma 1 the set $W_x-x$ is also open. Since $x\in W_x$, we have $0\in W_x-x$. Thus $W_x-x$ is an open set containing $0$, i.e. a neighborhood of zero. Now by Lemma 5, there exists a symmetric neighborhood of zero $V_x$ such that $V_x+V_x+V_x+V_x\subset W_x-x$. The set $V_x$ is a neighborhood of zero, so $0\in V_x$, hence
$$V_x+V_x+V_x=0+V_x+V_x+V_x\subset V_x+V_x+V_x+V_x\subset W_x-x.$$
Since we have inclusion $V_x+V_x+V_x\subset W_x-x$, then $x+V_x+V_x+V_x\subset W_x$. Recall that $W_x\subset X\setminus C$, hence $x+V_x+V_x+V_x\subset X\setminus C$. This is equivalent to $(x+V_x+V_x+V_x)\cap C=\varnothing$.
Assume that $(x+V_x+V_x)\cap(V_x+C)\neq\varnothing$; then there exists $z\in(x+V_x+V_x)\cap(V_x+C)$. Since $z\in(V_x+C)$, we have $v\in V_x$ and $c\in C$ such that $z=v+c$. Then $c=z-v=z+(-v)$. Since $V_x$ is symmetric and $v\in V_x$, we get $(-v)\in V_x$ (this is the only place where we use that the neighborhood is symmetric!). Also note that $z\in(x+V_x+V_x)$, so $c=z+(-v)\in(x+V_x+V_x)+V_x=x+V_x+V_x+V_x$. But $c\in C$, hence $(x+V_x+V_x+V_x)\cap C\neq\varnothing$. Contradiction, so $(x+V_x+V_x)\cap(V_x+C)=\varnothing$. Thus for each $x\in K$ we have constructed a symmetric neighborhood of zero $V_x$ such that
$$(x+V_x+V_x)\cap(C+V_x)=\varnothing\tag{1}$$
Fix $x\in K$; then from Lemma 2 it follows that $x+V_x$ is a neighborhood of $x$. In particular $\{x\}\subset x+V_x$. Since $x\in K$ is arbitrary, we conclude $K=\bigcup_{x\in K}\{x\}\subset \bigcup_{x\in K}(x+V_x)$. Thus we have constructed a family $\{x+V_x:x\in K\}$ of open sets such that $K\subset \bigcup_{x\in K}(x+V_x)$, i.e. an open cover of $K$. Recall that $K$ is compact, so there exists a finite subcover, i.e. a finite subfamily $\{x_i+V_{x_i}:i\in\{1,\ldots,n\}\}\subset\{x+V_x:x\in K\}$ such that
$$K\subset \bigcup_{i=1}^n(x_i+V_{x_i}).$$
Consider the set $V=\bigcap_{i=1}^n V_{x_i}$; it is open as a finite intersection of the open sets $\{V_{x_i}:i\in\{1,\ldots,n\}\}$, and $V \neq \emptyset$ because $0\in V_{x_i}$ for all $i$. Now we have the inclusion
$$K+V=\{x+v:x\in K, v\in V\}\subset\left\{x+v:x\in\bigcup_{i=1}^n( x_i+V_{x_i}),v\in V\right\}\subset\bigcup_{i=1}^n\{x+v:x\in x_i+V_{x_i},v\in V\}$$
Since $V=\bigcap_{i=1}^n V_{x_i}\subset V_{x_i}$ for all $i\in\{1,\ldots,n\}$ then
$$K+V\subset \bigcup_{i=1}^n\{x+v:x\in x_i+V_{x_i},v\in V\}\subset$$
$$\bigcup_{i=1}^n\{x+v:x\in x_i+V_{x_i},v\in V_{x_i}\}=\bigcup_{i=1}^n(x_i+V_{x_i}+V_{x_i})\tag{2}$$
Again since $V=\bigcap_{i=1}^n V_{x_i}\subset V_{x_i}$ for all $i\in\{1,\ldots,n\}$ then
$$C+V=\{x+v:x\in C, v\in V\}\subset\{x+v:x\in C,v\in V_{x_i}\}=C+V_{x_i}\tag{3}$$
From $(1)$ it follows that $(x_i+V_{x_i}+V_{x_i})\cap (C+V_{x_i})=\varnothing$ for all $i\in\{1,\ldots,n\}$, then using $(3)$ we conclude that $(x_i+V_{x_i}+V_{x_i})\cap (C+V)=\varnothing$ for all $i\in\{1,\ldots,n\}$. Taking union over $i\in\{1,\ldots,n\}$ we get
$$\left(\bigcup\limits_{i=1}^n(x_i+V_{x_i}+V_{x_i})\right)\cap (C+V)=\varnothing\tag{4}$$
Finally from $(2)$ and $(4)$ we conclude that
$$(K+V)\cap(C+V)=\varnothing.$$
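To address the request for a concrete example, here is a simple illustration in $X=\mathbb{R}$ (my own addition, not part of the quoted proofs):

```latex
% Take X = \mathbb{R} with the usual topology.
% K = [0,1] is compact, C = [2,3] is closed, and K \cap C = \varnothing.
% The gap between K and C has width 1, so the symmetric neighborhood
% of zero V = (-1/4, 1/4) works:
K + V = \left(-\tfrac14,\ \tfrac54\right), \qquad
C + V = \left(\tfrac74,\ \tfrac{13}{4}\right), \qquad
(K+V) \cap (C+V) = \varnothing.
% Compactness of K matters: the disjoint closed sets
% C_1 = \{2, 3, 4, \ldots\} and C_2 = \{n + 1/n : n \ge 2\}
% in \mathbb{R} admit no such V, since the gap between them shrinks to 0.
```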
I will assume some basic topology.
Since addition is continuous and $C$ is closed, the set $\{(x,k)\in X\times K\colon x+k\in C\}$ is closed. Since $K$ is compact, the image of that set under the projection $X\times K\to X$ is also closed, but this image is $A:=C-K$. Since $0\notin A$, $U:=X\setminus A$ is a neighborhood of $0$. Since subtraction is continuous, there is a neighborhood $V$ of $0$ such that $V-V\subset U$. This means that for any $v,w\in V$, $c\in C$, $k\in K$ we have $v-w\ne c-k$, i.e. $k+v\ne c+w$. Hence $V$ has the desired property.
https://graphicmaths.com/gcse/trigonometry/sine-rule/ | # Sine rule
Martin McBride
2021-01-14
The sine rule is a trigonometry formula that relates the sides and angles of a triangle. It can be used to solve a triangle if we know either:
• Two angles and any side of the triangle.
• Two sides and any angle except the angle enclosed by the two sides.
For other cases you will need to use the cosine rule.
The rule applies to any triangle, not just right-angled triangles.
This rule is also covered in this video on YouTube:
## Labelling the triangle
It is important to label the triangle correctly, otherwise the rule won't work! We name the angles A, B and C, and we name the sides a, b and c:
The important thing to remember is that each angle is opposite the side of the same name:
• Angle A is opposite side a.
• Angle B is opposite side b.
• Angle C is opposite side c.
## The sine rule
The sine rule tells us that:
$$\frac{a}{\sin A} = \frac{b}{\sin B} = \frac{c}{\sin C}$$
This is a short way of writing these three equations:
$$\frac{a}{\sin A} = \frac{b}{\sin B}$$
$$\frac{a}{\sin A} = \frac{c}{\sin C}$$
$$\frac{b}{\sin B} = \frac{c}{\sin C}$$
We can also flip this equation:
$$\frac{\sin A}{a} = \frac{\sin B}{b} = \frac{\sin C}{c}$$
This second form can be useful if you want to find an angle.
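As a quick numeric check (my own addition, not part of the original article; the function names are purely illustrative), the two forms of the rule translate directly into a few lines of Python:

```python
import math

def sine_rule_side(a, A_deg, B_deg):
    """Given side a, its opposite angle A, and another angle B
    (in degrees), return side b via b = a * sin(B) / sin(A)."""
    return a * math.sin(math.radians(B_deg)) / math.sin(math.radians(A_deg))

def sine_rule_angle(a, A_deg, b):
    """Given sides a, b and angle A opposite side a, return the acute
    solution for angle B in degrees. (The ambiguous SSA case can also
    admit the obtuse solution 180 - B.)"""
    return math.degrees(math.asin(b * math.sin(math.radians(A_deg)) / a))

# 30-60-90 triangle with hypotenuse 2: the side opposite 30 degrees is 1.
print(sine_rule_side(2, 90, 30))   # ≈ 1.0
```

Note that `sine_rule_angle` uses the flipped form of the equation, which is why it is the convenient one for finding angles.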
https://proofwiki.org/wiki/Definition:Set | # Definition:Set
## Definition
A set is intuitively defined as any aggregation of objects, called elements, which can be precisely defined in some way or other.
We can think of each set as a single entity in itself, and we can denote it (and usually do) by means of a single symbol.
That is, anything you care to think of can be a set. This concept is known as the comprehension principle.
However, there are problems with the comprehension principle. If we allow it to be used without any restrictions at all, paradoxes arise, the most famous example probably being Russell's Paradox.
Hence some sources define a set as a 'well-defined' collection of objects, leaving the concept of what constitutes well-definition to later in the exposition.
## Defining a Set
The elements in a set $S$ are the things that define what $S$ is.
If $S$ is a set, and $a$ is one of the objects in it, we say that $a$ is an element (or member) of $S$, or that $a$ belongs to $S$, or $a$ is in $S$, and we write $a \in S$.
If $a$ is not one of the elements of $S$, then we can write $a \notin S$ and say $a$ is not in $S$.
Thus a set $S$ can be considered as dividing the universe into two parts:
• all the things that belong to $S$
• all the things that do not belong to $S$.
### Explicit Definition
A (finite) set can be defined by explicitly specifying all of its elements between the famous curly brackets, known as set braces: $\set {}$.
When a set is defined like this, note that all and only the elements in it are listed.
This is called explicit (set) definition.
It is possible for a set to contain other sets. For example:
$S = \set {a, \set a }$
If there are many elements in a set, then it becomes tedious and impractical to list them all in one big long explicit definition. Fortunately, however, there are other techniques for listing sets.
### Implicit Definition
If the elements in a set have an obvious pattern to them, we can define the set implicitly by using an ellipsis ($\ldots$).
For example, suppose $S = \set {1, 2, 3, 4, 5, 6, 7, 8, 9, 10}$.
A more compact way of defining this set is:
$S = \set {1, 2, \ldots, 10}$
With this notation we are asked to suppose that the numbers count up uniformly, and we can read this definition as:
$S$ is the set containing $1$, $2$, and so on, up to $10$.
Explicit and implicit definition are collectively referred to as roster notation.
### Definition by Predicate
An object can be specified by means of a predicate, that is, in terms of a property (or properties) that it possesses.
Whether an object $x$ possesses a particular property $P$ is either true or false (in Aristotelian logic) and so can be the subject of a propositional function $\map P x$.
Hence a set can be specified by means of such a propositional function:
$S = \set {x: \map P x}$
which means:
$S$ is the set of all objects which have the property $P$
or, more formally:
$S$ is the set of all $x$ such that $\map P x$ is true.
In this context, we see that the symbol $:$ is interpreted as such that.
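As a loose programming analogy (my own addition, not part of the ProofWiki page), definition by predicate corresponds directly to a set comprehension in Python, where the predicate filters a domain:

```python
# Definition by predicate, restricted to a finite domain:
# S = {x : P(x)} becomes a set comprehension with a filter.
def P(x):
    """Example predicate: x is an even perfect square."""
    return x % 2 == 0 and int(x ** 0.5) ** 2 == x

S = {x for x in range(50) if P(x)}
print(sorted(S))  # [0, 4, 16, 36]
```

Unlike the unrestricted comprehension principle, the domain here must be an explicit finite collection, which is one way of sidestepping Russell-style paradoxes.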
### Warning
It is important to distinguish between an element, for example $a$, and a singleton containing it, that is, $\set a$.
That is $a$ and $\set a$ are not the same thing.
While it is true that:
$a \in \set a$
it is not true that:
$a = \set a$
neither is it true that:
$a \in a$
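The same distinction holds for Python's built-in sets (my analogy, not from the source):

```python
a = 1
assert a in {a}   # a is an element of the singleton {a} ...
assert a != {a}   # ... but a is not equal to the singleton containing it.
# (Asking whether a in a raises a TypeError for an int: an element
# is not itself a container.)
```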
## Uniqueness of Elements
A set is uniquely determined by its elements.
This means that the only thing that defines what a set is is what it contains.
So, how you choose to list or define the contents makes no difference to what the contents actually are.
## Also known as
In the original translation by Jourdain of Georg Cantor's original work, this concept was called an aggregate. The term can be seen in subsequent works, but has now mostly been superseded by the term set.
Sometimes the terms class, family or collection are used. In some contexts, the term space is used. However, beware that these terms are usually used for more specific things than just as a synonym for set.
On this website, the terms class, family and space are not used as synonyms for set, being reserved specifically for the concepts to which they apply.
### Point Set
A set whose elements are all (geometric) points is often called a point set.
In particular, the Cartesian coordinate plane and complex plane can each be seen referred to as a two-dimensional point set.
## Examples
### Set of Living People
Let $P$ denote the set of living people.
Then:
$\text {The person reading this web page} \in P$
$\text {Julius Caesar} \notin P$
$-4 \notin P$
### Positive Integers Less than 10
The (strictly) positive integers less than $10$ form a set:
$\set {1, 2, 3, 4, 5, 6, 7, 8, 9}$
## Also see
• Results about set theory can be found here.
## Historical Note
The concept of a set first appears in Bernhard Bolzano's posthumous ($1851$) work Paradoxien des Unendlichen (The Paradoxes of the Infinite).
The first investigation into the concept in any depth was made by Georg Cantor in his two papers called Beiträge zur Begründung der transfiniten Mengenlehre ($1895$ and $1897$).
It was Georg Cantor who, in $1874$, defined a set as being:
a Many that allows itself to be thought of as a One.
-- Georg Cantor, A. Fraenkel and E. Zermelo, Gesammelte Abhandlungen (Berlin: Springer-Verlag, $1932$)
This definition was directly inspired by a problem raised by Bernhard Riemann in his paper Ueber die Darstellbarkeit einer Function durch eine trigonometrische Reihe of $1854$, on the subject of Fourier series.
## Internationalization
Set is translated:
In French: ensemble
In German: Menge (literally: aggregate)
https://www.shaalaa.com/question-bank-solutions/the-breadth-and-height-of-a-rectangular-solid-are-120-m-and-80-cm-respectively-if-the-volume-of-the-cuboid-is-192-m3-find-its-length-surface-area-of-a-cuboid_110238 | # The Breadth and Height of a Rectangular Solid Are 1.20 M and 80 Cm Respectively. If the Volume of the Cuboid is 1.92 M3; Find Its Length. - Mathematics
The breadth and height of a rectangular solid are 1.20 m and 80 cm respectively. If the volume of the cuboid is 1.92 m³, find its length.
#### Solution
Volume of the rectangular solid = 1.92 m³
Breadth of the rectangular solid = 1.20 m
Height of the rectangular solid = 80 cm = 0.8 m
We know
Length × Breadth × Height = Volume of the cuboid
Length × 1.20 × 0.8 = 1.92
Length × 0.96 = 1.92
⇒ Length = 1.92/0.96
⇒ Length = 192/96
⇒ Length = 2 m
Concept: Surface Area of a Cuboid
#### APPEARS IN
Selina Concise Mathematics Class 8 ICSE
Chapter 21 Surface Area, Volume and Capacity
Exercise 21 (A) | Q 2.3 | Page 238
https://blog.londogard.com/blog/tags/data-engineering | While working at AFRY we've noted that in performance intensive application that isn't really Big Data ends up being slow when using pandas. | 2023-01-31 07:04:14 | {"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 1, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.4250466525554657, "perplexity": 2757.2267194052833}, "config": {"markdown_headings": false, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2023-06/segments/1674764499845.10/warc/CC-MAIN-20230131055533-20230131085533-00223.warc.gz"} |
https://publiensayos.com/as-far-tneizny/archive.php?e7de3d=stochastic-neighbor-embedding | # stochastic neighbor embedding
‖ <> , define. q [10][11] It has been demonstrated that t-SNE is often able to recover well-separated clusters, and with special parameter choices, approximates a simple form of spectral clustering.[12]. in the map are determined by minimizing the (non-symmetric) Kullback–Leibler divergence of the distribution for all Stochastic Neighbor Embedding under f-divergences. {\displaystyle p_{i\mid i}=0} that are proportional to the similarity of objects The result of this optimization is a map that reflects the similarities between the high-dimensional inputs. ≠ t-distributed Stochastic Neighbor Embedding. high-dimensional objects as its neighbor if neighbors were picked in proportion to their probability density under a Gaussian centered at y i j {\displaystyle P} ."[2]. Since the Gaussian kernel uses the Euclidean distance d {\displaystyle \sum _{j}p_{j\mid i}=1} , define To improve the SNE, a t-distributed stochastic neighbor embedding (t-SNE) was also introduced. x How does t-SNE work? It is very useful for reducing k-dimensional datasets to lower dimensions (two- or three-dimensional space) for the purposes of data visualization. Academia.edu is a platform for academics to share research papers. The machine learning algorithm t-Distributed Stochastic Neighborhood Embedding, also abbreviated as t-SNE, can be used to visualize high-dimensional datasets. 0 Such "clusters" can be shown to even appear in non-clustered data,[9] and thus may be false findings. It minimizes the Kullback-Leibler (KL) divergence between the original and embedded data distributions. N y and note that j {\displaystyle \sum _{i,j}p_{ij}=1} j ∣ t-distributed Stochastic Neighbor Embedding. x {\displaystyle i\neq j} y While the original algorithm uses the Euclidean distance between objects as the base of its similarity metric, this can be changed as appropriate. = , using a very similar approach. 
i i {\displaystyle p_{ij}} The t-SNE firstly computes all the pairwise similarities between arbitrary two data points in the high dimension space. Use RGB colors [1 0 0], [0 1 0], and [0 0 1].. For the 3-D plot, convert the species to numeric values using the categorical command, then convert the numeric values to RGB colors using the sparse function as follows. Currently, the most popular implementation, t-SNE, is restricted to a particular Student t-distribution as its embedding distribution. {\displaystyle \mathbf {y} _{1},\dots ,\mathbf {y} _{N}} Stochastic Neighbor Embedding Geoffrey Hinton and Sam Roweis Department of Computer Science, University of Toronto 10 King’s College Road, Toronto, M5S 3G5 Canada hinton,roweis @cs.toronto.edu Abstract We describe a probabilistic approach to the task of placing objects, de-scribed by high-dimensional vectors or by pairwise dissimilarities, in a j As a result, the bandwidth is adapted to the density of the data: smaller values of t-Distributed Stochastic Neighbor Embedding Action Set: Syntax. y TSNE t-distributed Stochastic Neighbor Embedding. Author: Matteo Alberti In this tutorial we are willing to face with a significant tool for the Dimensionality Reduction problem: Stochastic Neighbor Embedding or just "SNE" as it is commonly called. p j become too similar (asymptotically, they would converge to a constant). In addition, we provide a Matlab implementation of parametric t-SNE (described here). as well as possible. = The t-SNE algorithm comprises two main stages. x ∈ {\displaystyle \mathbf {y} _{i}} ∑ {\displaystyle i\neq j} 11/03/2018 ∙ by Daniel Jiwoong Im, et al. Specifically, it models each high-dimensional object by a two- or three-dime… In this work, we propose extending this method to other f-divergences. Some of these implementations were developed by me, and some by other contributors. 
q i , that is: The minimization of the Kullback–Leibler divergence with respect to the points i {\displaystyle q_{ij}} N 1 Last time we looked at the classic approach of PCA, this time we look at a relatively modern method called t-Distributed Stochastic Neighbour Embedding (t-SNE). to datapoint Stochastic Neighbor Embedding (SNE) Overview. p To visualize high-dimensional data, the t-SNE leads to more powerful and flexible visualization on 2 or 3-dimensional mapping than the SNE by using a t-distribution as the distribution of low-dimensional data. If v is a vector of positive integers 1, 2, or 3, corresponding to the species data, then the command 1 View the embeddings. {\displaystyle \mathbf {y} _{i}\in \mathbb {R} ^{d}} between two points in the map from the distribution Uses a non-linear dimensionality reduction technique where the focus is on keeping the very similar data points close together in lower-dimensional space. For the Boston-based organization, see, List of datasets for machine-learning research, "Exploring Nonlinear Feature Space Dimension Reduction and Data Representation in Breast CADx with Laplacian Eigenmaps and t-SNE", "The Protein-Small-Molecule Database, A Non-Redundant Structural Resource for the Analysis of Protein-Ligand Binding", "K-means clustering on the output of t-SNE", Implementations of t-SNE in various languages, https://en.wikipedia.org/w/index.php?title=T-distributed_stochastic_neighbor_embedding&oldid=990748969, Creative Commons Attribution-ShareAlike License, This page was last edited on 26 November 2020, at 08:15. (with i {\displaystyle x_{i}} The affinities in the original space are represented by Gaussian joint probabilities and the affinities in the embedded space are represented by Student’s t-distributions. i ‖ and set {\displaystyle d} i σ = , Below, implementations of t-SNE in various languages are available for download. 
t-distributed stochastic neighbor embedding (t-SNE) is a machine learning algorithm for visualization based on Stochastic Neighbor Embedding (SNE), originally developed by Geoffrey Hinton and Sam Roweis, with the t-distributed variant later proposed by Laurens van der Maaten. It is a nonlinear dimensionality reduction technique well-suited for embedding high-dimensional data for visualization in a low-dimensional space of two or three dimensions. Specifically, it models each high-dimensional object by a two- or three-dimensional point in such a way that similar objects are modeled by nearby points and dissimilar objects by distant points, with high probability.
The algorithm has two stages. First, t-SNE constructs a probability distribution over pairs of high-dimensional objects such that similar objects are assigned a high probability and dissimilar points a low probability. Second, it defines a similar probability distribution over the points in the low-dimensional map and minimizes the Kullback-Leibler (KL) divergence between the two distributions with respect to the locations of the points in the map.
Given a set of $N$ high-dimensional objects $x_1, \ldots, x_N$, SNE first converts Euclidean distances between data points into conditional probabilities that represent similarities. The similarity of datapoint $x_j$ to datapoint $x_i$ is the conditional probability, $p_{j|i}$, that $x_i$ would pick $x_j$ as its neighbor:
$$p_{j|i} = \frac{\exp(-\lVert x_i - x_j \rVert^2 / 2\sigma_i^2)}{\sum_{k \neq i} \exp(-\lVert x_i - x_k \rVert^2 / 2\sigma_i^2)}, \qquad p_{i|i} = 0.$$
The bandwidth $\sigma_i$ is set for each point, using the bisection method, so that the conditional distribution has a predefined perplexity; smaller values of $\sigma_i$ are thereby used in denser parts of the data space. t-SNE symmetrizes these into joint probabilities
$$p_{ij} = \frac{p_{j|i} + p_{i|j}}{2N}, \qquad p_{ii} = 0.$$
In the low-dimensional map $y_1, \ldots, y_N$, similarities between points are measured with a Student t-distribution with one degree of freedom:
$$q_{ij} = \frac{(1 + \lVert y_i - y_j \rVert^2)^{-1}}{\sum_{k \neq l} (1 + \lVert y_k - y_l \rVert^2)^{-1}}, \qquad q_{ii} = 0.$$
The locations $y_i$ are then determined by minimizing the KL divergence
$$\mathrm{KL}(P \parallel Q) = \sum_{i \neq j} p_{ij} \log \frac{p_{ij}}{q_{ij}}$$
using gradient descent.
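The high-dimensional affinity computations can be sketched in NumPy. This is a simplified illustration, not the full algorithm: a single fixed bandwidth sigma is assumed for every point, whereas the real method calibrates each sigma_i to a target perplexity by bisection.

```python
import numpy as np

def conditional_probabilities(X, sigma=1.0):
    """Conditional similarities p_{j|i}, with one shared bandwidth sigma."""
    sq_dists = np.sum((X[:, None, :] - X[None, :, :]) ** 2, axis=-1)
    logits = -sq_dists / (2.0 * sigma ** 2)
    np.fill_diagonal(logits, -np.inf)        # enforce p_{i|i} = 0
    P = np.exp(logits)
    return P / P.sum(axis=1, keepdims=True)  # each row is a distribution

def joint_probabilities(X, sigma=1.0):
    """Symmetrized joint distribution p_{ij} = (p_{j|i} + p_{i|j}) / 2N."""
    C = conditional_probabilities(X, sigma)
    return (C + C.T) / (2.0 * X.shape[0])

rng = np.random.default_rng(0)
X = rng.normal(size=(6, 4))                  # 6 points in a 4-dimensional space
P = joint_probabilities(X)
print(P.sum())                               # the joint distribution sums to 1
```

Because every row of the conditional matrix sums to 1, the symmetrized joint matrix sums to 1 overall, which is what the KL objective requires.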
Intuitively, SNE techniques encode small-neighborhood relationships in the high-dimensional space and in the embedding as probability distributions, so data points that are close in the high-dimensional space end up close together in the low-dimensional map. The original algorithm uses the Euclidean distance between objects as the base of its similarity metric, though this can be changed as appropriate. t-SNE is nevertheless affected by the curse of dimensionality: in high-dimensional data, when distances lose the ability to discriminate, it has been proposed to adjust the distances with a power transform, based on the intrinsic dimension of each point, to alleviate this. Extending the method to f-divergences other than the KL divergence has also been proposed.
t-SNE has been used for visualization in a wide range of applications, including computer security research, music analysis, cancer research, bioinformatics, and biomedical signal processing, and it is often used to visualize high-level representations learned by an artificial neural network. While t-SNE plots often seem to display clusters, the visual clusters can be influenced strongly by the chosen parameterization (notably the perplexity), so apparent "clusters" may be spurious; a good understanding of the parameters is necessary, and interactive exploration may be needed to choose parameters and validate results. For the standard t-SNE method, implementations are available in Matlab, C++, CUDA, Python, Torch, R, Julia, and JavaScript; the Barnes-Hut implementation has been described as the fastest t-SNE implementation to date.
http://ilnt.monetimargherita.it/a-coin-is-tossed-14-times.html
A coin is tossed 14 times. Each toss has two equally likely outcomes, heads or tails, so n tosses produce 2^n possible outcome sequences; for 14 tosses that is 2^14 = 16,384. The number of those sequences showing exactly k tails is C(14, k), so the probability of exactly k tails in 14 tosses of a fair coin is C(14, k) / 2^14.
Example: In how many ways can the coin land tails either exactly 9 times or exactly 3 times? The two events are disjoint, so the counts add: C(14, 9) + C(14, 3) = 2,002 + 364 = 2,366.
Example: If the coin is biased, with P(head) = p and P(tail) = q = 1 - p, the probability of exactly one head in 14 tosses is 14 × p × q^13, since the single head can fall on any one of the 14 tosses.
Example: A fair coin is tossed 3 times. The sample space has 2^3 = 8 outcomes, and the outcomes containing at least two consecutive heads are HHH, HHT, and THH, so the probability of at least two consecutive heads is 3/8.
Example: A fair coin is tossed 10 times. Each toss is a Bernoulli trial, so the number of heads X has a binomial distribution with P(X = x) = C(10, x)(1/2)^10, from which the probability of exactly six heads, at least six heads, or at most six heads can be computed.
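These counts are easy to verify with a few lines of Python (standard library only; the numbers match the exactly-9-or-exactly-3-tails question above):

```python
from math import comb

n = 14                        # number of tosses
total = 2 ** n                # 16,384 equally likely sequences

# Ways the coin can land tails either exactly 9 times or exactly 3 times:
# the two events are disjoint, so the binomial counts add.
ways = comb(n, 9) + comb(n, 3)
print(ways)                   # 2002 + 364 = 2366

# Probability of exactly k tails in 14 tosses of a fair coin
k = 9
p_exact = comb(n, k) / total
print(p_exact)                # 2002/16384 ≈ 0.1222
```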
Every toss is independent of every other toss. A fair coin that has just come up heads ten times in a row is still equally likely to land heads or tails on the next flip; believing that "tails is way overdue" is the gambler's fallacy. Long runs are simply improbable, not impossible: the probability of the same outcome 14 times in a row is (1/2)^14 = 1/16,384, about 0.0061%; 15 times in a row is 1 in 2^15 = 32,768; and a coin tossed 7 times lands all heads with probability 1/2^7 = 1/128. (A coin will even land on its edge, around 1 in 6,000 throws.) Note that each individual sequence of tosses is exactly as likely as any other; it is the event "some long run" that is rare.
Example: The probability of getting at least 4 heads in 10 tosses of a fair coin is the ratio of the 848 favorable sequences to the 2^10 = 1,024 sequences in the sample space: 848/1,024 = 0.828125. (Here 848 = 1,024 - 176, where 176 = C(10,0) + C(10,1) + C(10,2) + C(10,3) counts the sequences with at most 3 heads.)
If X is the number of heads in N tosses of a coin with P(head) = π, then X is binomial with mean Nπ and variance σ² = Nπ(1 - π). To use the normal approximation to the binomial, first validate that you expect more than 10 successes and more than 10 failures, i.e. N·π > 10 and N·(1 - π) > 10.
If you suspect a coin may not be fair, toss it a large number of times and count the heads: a coin tossed 500 times that shows 285 heads and 215 tails gives an estimated P(head) of 285/500 = 0.57. In a short run the counts can stray (40 tosses need not give exactly 20 heads), but over 1,000 tosses the fraction of heads of a fair coin will be fairly close to 1/2.
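The streak and at-least-4-heads figures can be checked with the standard library:

```python
from math import comb

# Probability of at least 4 heads in 10 tosses of a fair coin
n = 10
favorable = sum(comb(n, k) for k in range(4, n + 1))
print(favorable, favorable / 2 ** n)   # 848 0.828125

# Probability of the same outcome 14 times in a row with a fair coin
streak = (1 / 2) ** 14
print(streak)                          # 1/16384 ≈ 0.000061, i.e. about 0.0061%
```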
Tossing several coins at once is equivalent to tossing one coin several times: three coins tossed simultaneously produce the same 2^3 = 8 outcomes as one coin tossed three times, and in general n coins (or n tosses of one coin) give 2^n outcomes, so 20 tosses give 2^20 = 1,048,576 possible sequences. By the same logic, even extreme runs have nonzero probability: 21 heads in 21 flips of a fair coin happens 1 time in 2^21 = 2,097,152.
These results can also be checked empirically. A program that simulates coin tossing prints Heads or Tails for each toss and reports the totals; repeating the experiment many times shows the experimental frequency of heads settling near the theoretical probability of 1/2.
the coin was heads 275 times out of 625 times, therefore, the other 350 times the coin was tails. 00048828125 = 2048) as the article points out. Suppose you flip the coin 100 and get 60 heads, then you know the best estimate to get head is 60/100 = 0. What is the probability of at least two consecutive heads? Solution. Find probability that a four shows on exactly two of the dice. 2 HH almost seems like FOUR heads, which is impossible on three flips. Coin flipping, coin tossing, or heads or tails is the practice of throwing a coin in the air and checking which side is showing when it lands, in order to choose between two alternatives, sometimes used to resolve a dispute between two parties. For the second part of question, you are not bothered with the results of 2nd to 6th toss. Coin toss probability formula along with problems on getting a head or a tail, solved examples on number of possible outcomes to get a head and a tail with probability formula @Byju's. A coin is tossed 15 times. Formula used: The probability of occurrence of an event E is, p = Number of succes Number of posibble outcomes. Watch the complete video at: https://doubtnut. What is the probability it will come up heads the next time I flip it? "Fifty percent," you say. Then I have to make a table of the number of trials, random 'flips", and the running percentages of heads. asked by Keonn'a on October 14, 2018; math. What is the probability that more heads are tossed using coin A than coin B? THE ANSWER IS NOT 7/16. You are tossing coin 14 times and expecting Head in exactly one toss, which means there are 14 possible ways to get head. The odds of winning or losing a coin toss 14 times in a row is 0. ! Two different outcomes represent the same event. Wow!, seems unusual. 
https://mathematica.stackexchange.com/questions/262165/equivalent-for-hookrightarrow | # Equivalent for \hookrightarrow, ↪
Of course I can simply type the unicode ↪, but is there an operator with no built-in meaning that renders like this?
The documentation guide/ArrowsAndArrowLikeForms presumably lists all arrows. If it doesn't exist, can I define output and typesetting rules for the Unicode character?
• I am not sure I understand your question. This character does not have any built-in meaning and can be used as a symbol with no apparent problems. What exactly are you trying to achieve? Jan 15 at 23:23
You can use an empty TemplateBox to create a new operator. For example:
CurrentValue[EvaluationNotebook[],{InputAutoReplacements,"ha"}] = TemplateBox[
{},
"HookRightArrow",
DisplayFunction -> Function@"↪",
InterpretationFunction :> Function[Sequence["~", "HookRightArrow", "~"]]
];
Then, typing x space h a space y produces:
and when evaluated yields:
HookRightArrow[x, y]
• Very neat. I was familiar with defining InputAutoReplacements, but having it resolve to the symbol is very nice.
http://zbmath.org/?q=an:0553.92009 | # zbMATH — the first resource for mathematics
Absolute stability of global pattern formation and parallel memory storage by competitive neural networks. (English) Zbl 0553.92009
Considered is a class of n-dimensional dynamical systems
$$\dot{x}_i = a_i(x_i)\Big[b_i(x_i) - \sum_{k=1}^{n} c_{ik}\, d_k(x_k)\Big], \qquad i = 1, 2, \dots, n,$$
where the matrix $C = [c_{ik}]$ is symmetric and the system as a whole is competitive. Several examples of applications of this type of equation are indicated, such as nonlinear neural networks and, in general, global pattern formation.
A global Lyapunov function for the system discussed is introduced. Its absolute stability, with infinitely many but totally disconnected equilibrium points, is studied by the LaSalle invariance principle. Decomposition of the system's equilibria into suprathreshold and subthreshold variables is also presented ($x_i(t)$ is called suprathreshold at $t$ if $x_i(t) > \Gamma_i^-$, where $\Gamma_i^-$ stands for the inhibitory threshold of $d_i$).
Reviewer: W.Pedrycz
##### MSC:
92Cxx Physiological, cellular and medical topics 93D05 Lyapunov and other classical stabilities of control systems 92F05 Applications of mathematics to other natural sciences 37-99 Dynamic systems and ergodic theory (MSC2000) 93C15 Control systems governed by ODE
https://numbersmithy.com/an-algorithm-for-computing-the-approximate-iou-between-oriented-2d-boxes/ | # An Algorithm for Computing the Approximate IoU Between Oriented 2D boxes
This post shows an *approximation method to compute the IoU between oriented 2D boxes*.
Intersection-Over-Union (IoU) is a commonly used tool in object detection tasks in computer vision. It can be used to remove duplicate predictions in the Non-Maximum Suppression (NMS) process, or as a metric to gauge the model performance.
IoUs between unoriented boxes is relatively easy to compute. I have that covered in Create YOLOv3 using PyTorch from scratch (Part-4).
However, when the boxes are allowed to have arbitrary rotations, it becomes a bit tricky to compute their IoU. Depending on their positional configurations, there can be different number of intersecting points between a pair of boxes, and consequently affecting how their intersection area should be computed.
This post shows a method to compute the approximate IoU between 2 oriented 2D boxes.
It is based on a modified version of the Signed Distance Function (SDF) for oriented 2D boxes, using the L1-norm as the distance measure instead of the conventional Euclidean (L2-norm) distance. This SDF-L1 definition is detailed in my previous post Signed distance function to oriented 2D boxes.
The following parts will cover:
1. Formulation of the problem
2. Derivation of the approximate IoU algorithm
3. Python implementation
4. Some discussions
## 2 Formulation of the problem
First introduce a few notations:
• $$A^*$$: the ground-truth box; the same symbol also denotes its area.
• $$A'$$: the prediction box; the same symbol also denotes its area.
One way to formulate the IoU is using the intersection area $$I$$:
$$\label{org522c30c} IoU \equiv \frac{I}{U} = \frac{I}{A^* + A' - I}$$
This formulation would require the computation of intersection area $$I$$.
Another equivalent formulation is to use the union area $$U$$:
$$\label{org04b4f0b} IoU \equiv \frac{I}{U} = \frac{A^* + A' - U}{U} = \frac{A^* + A'}{U} - 1$$
We will take the latter union-based formulation.
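As a quick numeric sanity check (the areas are made-up illustrative numbers), both formulations give the same value:

```python
# Intersection-based vs union-based IoU on made-up areas.
A_star, A_prime, I = 8.0, 6.0, 2.0      # target area, prediction area, intersection
U = A_star + A_prime - I                 # union = 12.0

iou_from_I = I / (A_star + A_prime - I)  # intersection-based formulation
iou_from_U = (A_star + A_prime) / U - 1  # union-based formulation
print(iou_from_I, iou_from_U)            # both 1/6 ≈ 0.1667
```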
## 3 Derivation of the approximate IoU algorithm
### 3.1 Compute the union area
Notice that when the 2 boxes are too far apart such that there is no intersection, the union area is just the sum of the 2 areas.
When the 2 boxes do have an intersection, the union area is the target/ground-truth box area, plus the extra part of the prediction box that lies outside the target box, i.e. the area of $$A' \setminus A^*$$. This is represented as blue shading in the schematic in Figure 1.
Denote this $$A' \setminus A^*$$ area as $$A_{extra}$$. Then we have an expression for the union area $$U$$:
$$\label{orgfd579eb} U = \begin{cases} A^* + A' & I = 0\\ A^* + A_{extra} & I > 0 \end{cases}$$
It is relatively easy to determine when $$I$$ is guaranteed to be 0: compare the center distance between the 2 boxes with the sum of their diagonal lengths divided by 2. Even when this test is too conservative (the 2 boxes have no intersection even though their center distance is smaller than half of the diagonal sum), the approximate algorithm still works.
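A minimal sketch of such a conservative rejection test, using the [xc, yc, w, h, angle] box encoding from later in this post (the function name is mine):

```python
import numpy as np

def maybe_intersect(box1, box2):
    """Conservative test: returns False only when the boxes certainly
    do not intersect. A box fits inside a circle of radius half its
    diagonal, so disjoint circles imply disjoint boxes."""
    c1, c2 = np.asarray(box1[:2], float), np.asarray(box2[:2], float)
    r1 = 0.5 * np.hypot(box1[2], box1[3])
    r2 = 0.5 * np.hypot(box2[2], box2[3])
    return np.linalg.norm(c1 - c2) <= r1 + r2

# far apart -> certainly no intersection
print(maybe_intersect([0, 0, 2, 1, 0.3], [10, 10, 2, 1, 1.0]))  # False
```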
Now the problem reduces to how to compute $$A_{extra}$$.
### 3.2 Approximate $$A_{extra}$$ using SDF-L1
Notice that the area of $$A_{extra}$$ is formed by points along the edges of the blue prediction box, and the edge of the orange target box (see Figure 1).
How far away the individual points along the blue edges can be quantified using SDF to the target box.
However, the conventional definition of SDF to oriented boxes renders contour lines that have rounded corners. See Figure 2a (see also Figure 2a in Signed distance function to oriented 2D boxes). Such rounded corners are not helpful to our task of quantifying $$A_{extra}$$.
This is where the modified version – SDF-L1 – comes into the equation. By replacing L2-norm distances with L1-norm, we remove the rounded corners from the distance contours. See Figure 2b.
This L1-norm formulation makes it easy to determine how far away an outside point $$P$$ is from the infinite line along which the target box’s right edge is located, if $$P$$ is located to the right of the target box.
Similarly, if $$P$$ is located above the target, within the span of the target’s width, the distance between $$P$$ and the target box is the distance between $$P$$ and the infinite line that the top edge is located.
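The full `sdf_obox_l1()` implementation lives in the earlier post; as a self-contained sketch of the left/mid/right logic just described, here is a minimal SDF-L1 for an axis-aligned box centered at the origin (the function name `sdf_box_l1_components` is mine, not from the original posts):

```python
import numpy as np

def sdf_box_l1_components(p, w, h):
    """L1-norm SDF components to an axis-aligned w-by-h box centered at
    the origin. p: (n, 2) query points. For points left/right of the box
    the distance is horizontal; otherwise it is vertical."""
    p = np.asarray(p, dtype=float)
    q = np.abs(p) - np.array([w / 2, h / 2])   # offsets past the box edges
    left_right = q[:, 0] > 0                   # outside in x -> horizontal distance
    sdf_x = np.maximum(q[:, 0], 0) * left_right * np.sign(p[:, 0])
    sdf_y = np.maximum(q[:, 1], 0) * (~left_right) * np.sign(p[:, 1])
    return np.stack([sdf_x, sdf_y], axis=1)

# point to the right of a 4x2 box: purely horizontal distance
print(sdf_box_l1_components([[3.0, 0.5]], 4, 2))   # [[1. 0.]]
# point above the box, within its width span: purely vertical distance
print(sdf_box_l1_components([[0.5, 2.0]], 4, 2))   # [[0. 1.]]
```

The same masking logic (`qqxy`, `sign`) reappears inside the vectorized implementation near the end of this post.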
Having got the perpendicular distances between points along the prediction box’s edges, $$A_{extra}$$ can then be approximated by sub-dividing the prediction box’s edges and forming a series of trapezoids. The heights of the trapezoids are the SDF-L1 values, and the bases are the small “lateral” steps $$dx$$, or $$dy$$, depending on whether the SDF-L1 values are horizontal or vertical.
Figure 3 below shows the process of sub-dividing the edge formed by vertices 1 and 2 of the prediction box, into 13 evenly spaced segments.
The SDF-L1 field is drawn as (faded) contours in the background. The 2 vertical grey lines show the left-mid-right segmentation of the SDF-L1 field. Therefore, part of the 1-2 edge falls into the mid section, where SDF-L1 have negative y- values, and the “lateral” steps are $$dx$$. And part of the 1-2 edge falls into the right section, where SDF-L1 have positive x- values, and the “lateral” steps take $$dy$$. The total area of these trapezoids can be approximated by 2 integrations $$\int SDF_x dy + \int SDF_y dx$$.
In Figure 3, we covered part of the $$A_{extra}$$ area by sub-dividing the 1-2 edge and integrating the SDF-L1 values. Now do the similar for the 2-3 edge, shown in Figure 4.
Again, part of the SDF-L1 heights are in the mid- section, where SDF values are positive and vertical, and part of the SDF-L1 heights are in the right- section, where SDF values are positive and horizontal. Then repeat for the 3-4 edge, shown in Figure 5.
Notice that SDF-L1 is not defined inside the box, so there are no SDF-L1 heights when the target box cuts into the prediction box, thereby we only account for the area outside of the target box.
Also notice that we are giving the SDF-L1 values signs (red color for positive, blue for negative). Why using signed values for the computation of area?
This is because sometimes there will be regions whose area get integrated more than once. For instance the lower-left triangle shown by blue SDF-L1 heights in Figure 5. This same triangle will also be covered when we take the integration process along the 4-1 edge, shown in Figure 6.
Therefore, we need some kind of mechanism to offset such double-counting. It turns out that by giving the trapezoid heights and bases signed values (effectively making them into vectors), and using cross product to compute signed areas, we can effectively get rid of the double-counted areas as we move around the box edge.
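This cancellation is the same mechanism behind the shoelace formula for polygon areas; a tiny illustration (not from the original post) of how cross products of edge vectors produce orientation-dependent signed areas:

```python
import numpy as np

def signed_area(poly):
    """Signed polygon area via cross products of consecutive vertices:
    positive for counter-clockwise traversal, negative for clockwise."""
    poly = np.asarray(poly, dtype=float)
    x, y = poly[:, 0], poly[:, 1]
    return 0.5 * np.sum(x * np.roll(y, -1) - y * np.roll(x, -1))

square = [[0, 0], [1, 0], [1, 1], [0, 1]]   # CCW unit square
print(signed_area(square))                  # 1.0
print(signed_area(square[::-1]))            # -1.0: reversed traversal flips the sign
```

Regions swept twice in opposite orientations contribute equal and opposite signed areas, which is exactly what offsets the double counting here.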
Now refer back to Figures 3–6 and check out the $$+$$ or $$-$$ signs shown next to the SDF-L1 and $$dx/dy$$ vectors. Those are the signs of the cross products between those vector pairs, using the right-hand rule. There is only 1 cross product that has a minus sign, and that appears in the lower-left triangular region when we integrate the 3-4 edge. Notice that this same triangular area is offset by a positive areal integration, when we integrate along the 4-1 edge.
Figure 7 shows the entire integration process after finishing a whole loop around the prediction box, and the total integrated area gives an approximate to $$A_{extra}$$. Lastly, the approximate IoU can be computed by substituting $$A_{extra}$$ into Equation \eqref{orgfd579eb}, then Equation \eqref{org04b4f0b}.
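Assuming $$A_{extra}$$ is already in hand, that final substitution is a one-liner (the function name `approx_iou` and the numbers are mine):

```python
def approx_iou(gt_area, pred_area, a_extra, may_intersect=True):
    """IoU via the union-based formulation. a_extra is the (approximate)
    area of the prediction-minus-target region; when the boxes certainly
    do not intersect, the union is simply the sum of the two areas."""
    union = gt_area + a_extra if may_intersect else gt_area + pred_area
    return (gt_area + pred_area) / union - 1

# prediction of area 6 sticking 4 units outside a target of area 8:
# union = 8 + 4 = 12, intersection = 6 - 4 = 2, so IoU = 2/12
print(approx_iou(8.0, 6.0, 4.0))   # ≈ 0.1667
```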
Lastly, let’s call this area-finding method signed area function (SAF).
## 4 Python implementation
### 4.1 SDF-L1 between oriented boxes
The function sdf_obox_l1() for computing SDF-L1 values to an oriented box is given in Signed distance function to oriented 2D boxes.
### 4.2 Convert oriented box to 4-polygon
This is a helper function that converts an oriented box encoded in [x_center, y_center, width, height, angle] format into a 4-polygon:
def obbox2poly(xc, yc, w, h, theta, close=False):
    '''Convert oriented box to 4-polygon, single box

    Order: bottom-right, top-right, top-left, bottom-left'''
    cos, sin = np.cos(theta), np.sin(theta)
    rot = np.array([[cos, -sin], [sin, cos]])
    center = np.c_[xc, yc]
    corner_xy = np.array([
        [w/2, -h/2],
        [w/2, h/2],
        [-w/2, h/2],
        [-w/2, -h/2]])
    corner_xy = corner_xy.dot(rot.T)
    corner_xy = center + corner_xy
    if close:
        return np.r_[corner_xy, corner_xy[0:1]]
    else:
        return corner_xy
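As a quick sanity check of the corner order and rotation convention (numbers are mine): `corner_xy.dot(rot.T)` is, row by row, the same as `rot @ corner`, so rotating the bottom-right corner $(w/2, -h/2)$ of a 4×2 box by 90° should land it at $(1, 2)$:

```python
import numpy as np

theta = np.pi / 2
rot = np.array([[np.cos(theta), -np.sin(theta)],
                [np.sin(theta),  np.cos(theta)]])
corner = rot @ np.array([2.0, -1.0])    # (w/2, -h/2) with w=4, h=2
print(np.round(corner, 6))              # [1. 2.]
```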
### 4.3 SAF between 2 oriented boxes
The following function saf_obox2obox() computes $$A_{extra}$$:
def saf_obox2obox(pred_box, gt_box, n_samples=40):
    '''Signed area difference formed between a single pair of prediction and target boxes

    Args:
        pred_box (ndarray): prediction box in [xc, yc, w, h, angle].
        gt_box (ndarray): reference box in [xc, yc, w, h, angle].
    Keyword Args:
        n_samples (int): number of points to sample along each box edge.
    Returns:
        saf (float): signed area difference, i.e. the area of <pred_box> outside <gt_box>.
    '''
    pred_poly = obbox2poly(*pred_box, close=True)
    angle = gt_box[-1]
    rot = np.array([[np.cos(angle), -np.sin(angle)],
                    [np.sin(angle), np.cos(angle)]])
    saf = 0
    for (x1, y1), (x2, y2) in zip(pred_poly[:-1], pred_poly[1:]):
        # evenly sample the edge
        pii = np.linspace([x1, y1], [x2, y2], n_samples, True)
        sdfii, sdfxyii = sdf_obox_l1(pii,
                                     gt_box[0],
                                     gt_box[1],
                                     gt_box[2],
                                     gt_box[3],
                                     gt_box[4],
                                     True)
        # counter-rotate into the target box's frame
        pii = pii.dot(rot)
        # lateral (dx, dy) steps along the edge
        dxyii = np.gradient(pii, axis=0)
        # signed trapezoid areas via cross products
        areaii = np.cross(sdfxyii, dxyii)
        saf += areaii.sum()
    return saf
A few points to note:
• We take a full loop around the 4 edges of the prediction box, for each edge, the starting point is (x1, y1) and the ending point (x2, y2).
• Then we evenly sample the edge into n_samples intervals.
• For those sampled points, compute their SDF-L1 values using sdf_obox_l1(), and return also the x- and y- components in sdfxyii.
• In the schematics in Figures 3–7, the target box (and therefore the SDF-L1 field) has no rotation. In the general case where the target box is oriented (as in Figure 1), we need to do a counter-rotation so the target box is not oriented. This is done in pii = pii.dot(rot).
• After the rotation, the $$dx$$ and $$dy$$ values are computed using np.gradient(pii, axis=0).
• Finally, we compute the signed area using np.cross(sdfxyii, dxyii), and integrate that to get the final saf.
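A side note on the `np.gradient` step (illustrative numbers): at the endpoints `np.gradient` uses one-sided differences of the same size as the interior steps, so on an evenly sampled segment the per-sample steps sum to slightly more than the segment length — one small source of overestimation in the approximation.

```python
import numpy as np

# 11 evenly spaced samples over a unit-length segment
t = np.linspace(0.0, 1.0, 11)
steps = np.gradient(t)      # per-sample step estimates, each ~0.1
print(steps.sum())          # ~1.1, slightly more than the segment length 1.0
```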
### 4.4 Some examples
Time for some test runs.
To validate the algorithm, I use the shapely module to compute the ground truth union area of 2 boxes, and compare that with the approximate solution. Code below:
from shapely.geometry import Polygon
import numpy as np
import matplotlib.pyplot as plt
figure = plt.figure(figsize=(10, 5))
# example 1
# target box
xc = 2
yc = 1
w = 4
h = 2.5
angle = 30/180*np.pi
gt_box = [xc, yc, w, h, angle]
gt_poly = obbox2poly(*gt_box, False)
gt_poly_sp = Polygon(gt_poly)
# prediction box
box = [4, 3, 4, 5, 145/180*np.pi]
box_poly = obbox2poly(*box, False)
box_poly_sp = Polygon(box_poly)
aa = saf_obox2obox(box, gt_box, 20)
union = gt_poly_sp.union(box_poly_sp).area
union_hat = gt_box[2] * gt_box[3] + aa
print('Union area:', union)
print('Union hat area:', union_hat)
gt_poly = obbox2poly(*gt_box, True)
box_poly = obbox2poly(*box, True)
ax = figure.add_subplot(121)
ax.plot(gt_poly[:, 0], gt_poly[:, 1], 'r-', label='Target')
ax.plot(box_poly[:, 0], box_poly[:, 1], 'b-', label='Prediction')
ax.set_aspect('equal')
ax.set_title('true union: %.3f, approx union: %.3f' % (union, union_hat))
ax.legend()
# example 2
# target box
xc = 2.2
yc = 1
w = 4
h = 2.5
angle = 30/180*np.pi
gt_box = [xc, yc, w, h, angle]
gt_poly = obbox2poly(*gt_box, False)
gt_poly_sp = Polygon(gt_poly)
# prediction box
box = [-2, 3, 4, 5, 45/180*np.pi]
box_poly = obbox2poly(*box, False)
box_poly_sp = Polygon(box_poly)
aa = saf_obox2obox(box, gt_box, 20)
union = gt_poly_sp.union(box_poly_sp).area
union_hat = gt_box[2] * gt_box[3] + aa
print('Union area:', union)
print('Union hat area:', union_hat)
gt_poly = obbox2poly(*gt_box, True)
box_poly = obbox2poly(*box, True)
ax = figure.add_subplot(122)
ax.plot(gt_poly[:, 0], gt_poly[:, 1], 'r-', label='Target')
ax.plot(box_poly[:, 0], box_poly[:, 1], 'b-', label='Prediction')
ax.set_aspect('equal')
ax.set_title('true union: %.3f, approx union: %.3f' % (union, union_hat))
ax.legend()
figure.show()
The output figure:
It is seen that the approximate solution using 20 sub-division steps for each edge is slightly bigger than the ground truth. This is because our integration of the trapezoid areas is not adjusting for the triangular top, but instead using rectangular approximations.
It is also noticed that when the 2 boxes have no intersection (Figure 8b), the method works as well.
### 4.5 Vectorized versions
In practice, one often needs to deal with multiple pairs of targets and predictions. It would be much more efficient to vectorize the computations.
#### 4.5.1 Vectorized obbox2poly()
Convert an array of oriented boxes in [x_center, y_center, width, height, angle] format into 4-polygons:
def obbox2poly2(oboxes, close=False):
    '''Convert oriented boxes to 4-polygons, multiple boxes

    Args:
        oboxes (ndarray): (n, 5) oriented boxes, [xc, yc, w, h, angle].
    Order: bottom-right, top-right, top-left, bottom-left'''
    center, w, h, theta = np.split(oboxes, [2, 3, 4], axis=1)
    cos, sin = np.cos(theta), np.sin(theta)
    rot = np.concatenate([cos, -sin, sin, cos], axis=1).reshape(-1, 2, 2)
    p1 = np.concatenate([w/2, -h/2], 1)
    p2 = np.concatenate([w/2, h/2], 1)
    p3 = np.concatenate([-w/2, h/2], 1)
    p4 = np.concatenate([-w/2, -h/2], 1)
    p1 = (rot * p1[:, None, :]).sum(-1)
    p2 = (rot * p2[:, None, :]).sum(-1)
    p3 = (rot * p3[:, None, :]).sum(-1)
    p4 = (rot * p4[:, None, :]).sum(-1)
    corner_xy = np.stack([p1, p2, p3, p4], axis=1)
    corner_xy = center[:, None, :] + corner_xy
    if close:
        # close each polygon by repeating its first vertex (per box, along axis 1)
        return np.concatenate([corner_xy, corner_xy[:, 0:1]], axis=1)
    else:
        return corner_xy
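The batched rotation trick used here — building `(n, 2, 2)` rotation matrices and applying them with a broadcasted multiply-and-sum — can be checked against a plain per-box loop (illustrative random numbers):

```python
import numpy as np

rng = np.random.default_rng(42)
angles = rng.uniform(0, 2 * np.pi, size=5)   # one angle per box
pts = rng.normal(size=(5, 2))                # one point per box

# batched (n, 2, 2) rotation matrices, as in obbox2poly2
cos, sin = np.cos(angles)[:, None], np.sin(angles)[:, None]
rot = np.concatenate([cos, -sin, sin, cos], axis=1).reshape(-1, 2, 2)

# (n,2,2) * (n,1,2) summed over the last axis == per-box rot @ p
batched = (rot * pts[:, None, :]).sum(-1)
looped = np.stack([rot[i] @ pts[i] for i in range(5)])
print(np.allclose(batched, looped))          # True
```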
#### 4.5.2 Vectorized saf_obox2obox()
Deals with the computation of pairwise $$A_{extra}$$ between n predictions and m targets:
def saf_obox2obox_vec(pred_oboxes, target_oboxes, n_samples=40):
    '''Signed area difference between 2 sets of oriented boxes, vectorized

    Args:
        pred_oboxes (ndarray): prediction oboxes, in shape (n, 5). Columns: [xc, yc, w, h, angle].
        target_oboxes (ndarray): target oboxes, in shape (m, 5). Columns: [xc, yc, w, h, angle].
    Keyword Args:
        n_samples (int): number of samples along each edge.
    Returns:
        saf (ndarray): (n, m) array, saf between pairs of pred/target.
    '''
    # from [xc, yc, w, h, angle] -> 4 corner points per box
    poly = obbox2poly2(pred_oboxes)
    factors2 = np.arange(n_samples) / n_samples
    factors1 = 1. - factors2
    center = target_oboxes[:, :2]        # [m, 2]
    cos = np.cos(target_oboxes[:, -1])   # [m]
    sin = np.sin(target_oboxes[:, -1])   # [m]
    saf = 0
    for i1 in range(4):
        # linearly sample n_samples points along each edge
        i2 = (i1 + 1) % 4
        p1 = poly[:, i1, :]
        p2 = poly[:, i2, :]
        pnew = p1[:, None, :] * factors1[None, :, None] +\
            p2[:, None, :] * factors2[None, :, None]
        pnew = pnew[:, None, :, :] - center[None, :, None, :]   # [n, m, n_samples, 2]
        # counter-rotate into each target box's frame
        ppx = pnew[..., 0] * cos[None, :, None] +\
            pnew[..., 1] * sin[None, :, None]   # [n, m, n_samples]
        ppy = -pnew[..., 0] * sin[None, :, None] +\
            pnew[..., 1] * cos[None, :, None]   # [n, m, n_samples]
        ppxy = np.stack([ppx, ppy], axis=3)     # [n, m, n_samples, 2]
        qqxy = np.abs(ppxy) - 0.5 * target_oboxes[None, :, None, 2:4]   # [n, m, n_samples, 2]
        sign = qqxy[..., 0] > 0                 # [n, m, n_samples]
        x_comp = np.maximum(qqxy[..., 0], 0) * sign * np.sign(ppxy[..., 0])   # [n, m, n_samples]
        y_comp = np.maximum(qqxy[..., 1], 0) * (1 - sign) * np.sign(ppxy[..., 1])
        dx = np.gradient(ppx, axis=-1)          # [n, m, n_samples]
        dy = np.gradient(ppy, axis=-1)          # [n, m, n_samples]
        safii = x_comp * dy - y_comp * dx       # signed trapezoid areas
        saf = saf + safii.sum(axis=-1)
    return saf
## 5 Summary and some discussions
This post shows an approximation method to compute the IoU between oriented 2D boxes.
The problem is formulated as a search for the union area of the 2 boxes, which reduces to finding the area of the relative complement of the target box in the prediction box: $$A_{extra} \equiv A' \setminus A^*$$.
$$A_{extra}$$ is approximated by sub-dividing it into a series of small bins, and integrating the bin areas using SDF-L1 values as heights, and small $$dx$$ or $$dy$$ steps as bases.
SDF-L1 is a modification of the conventional SDF by replacing L2-norm distances with L1-norm distances. Doing this removes the rounded corners in the SDF field.
The double-counted areas during the integration process as we take a full loop around the 4 edges are offset by using signed areas, achieved using cross products between the SDF-L1 vectors and the edge segment vectors. This is why the SDF-L1 values have signs.
http://mathhelpforum.com/advanced-statistics/178350-expected-value-variance-difference-population-proportions.html | # Math Help - expected value and variance of difference of population proportions
1. ## expected value and variance of difference of population proportions
Hi everyone,
i need help finding E(p1-p2) And Var(P1-P2) where p1=X1/n1 where n1 is a sample from group 1 that yielded X1 successes and p2=X2/n2 where n2 is a sample from group 2 that yielded X2 successes. I.E find the expected value and variance for the difference between two population proportions.
2. Originally Posted by nikie1o2
Hi everyone,
i need help finding E(p1-p2) And Var(P1-P2) where p1=X1/n1 where n1 is a sample from group 1 that yielded X1 successes and p2=X2/n2 where n2 is a sample from group 2 that yielded X2 successes. I.E find the expected value and variance for the difference between two population proportions.
Isn't the theory in your class notes or textbook? You will find what you need using Google.
3. You mean $\hat P$ and not p
p is an unknown constant and it has no distribution; its mean is itself and its variance is 0.
NOW $\hat P$ is X/n, where X is a binomial rv with sample size n and p as the probability. SO $\hat P$ is an unbiased estimator of p.
4. The variance of the difference of two estimated population proportions is expressed by the following formula:
$\hat{p}_1/n_1 + \hat{p}_2/n_2$
I would imagine that the expected value is just X1 - X2, but I'm not certain.
5. Originally Posted by Effendi
The variance of the difference of two estimated population proportions is expressed by the following formula:
$\hat{p}_1/n_1 + \hat{p}_2/n_2$
I would imagine that the expected value is just X1 - X2, but I'm not certain.
You are also mixing up statistics (p-hats) and parameters. That's the point of this work:
To estimate UNKNOWN parameters with statistics, which are functions of the data by definition. | 2015-03-31 01:03:28 | {"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 7, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.8915674686431885, "perplexity": 917.9096294509764}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2015-14/segments/1427131300031.99/warc/CC-MAIN-20150323172140-00226-ip-10-168-14-71.ec2.internal.warc.gz"} |
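For what it's worth, the standard results for independent samples, E(p1_hat - p2_hat) = p1 - p2 and Var(p1_hat - p2_hat) = p1(1-p1)/n1 + p2(1-p2)/n2, are easy to check by simulation (a sketch, not taken from the thread):

```python
import random

def simulate_diff_of_proportions(p1, p2, n1, n2, trials=10_000, seed=0):
    """Monte Carlo estimate of the mean and variance of p1_hat - p2_hat."""
    rng = random.Random(seed)
    diffs = []
    for _ in range(trials):
        x1 = sum(rng.random() < p1 for _ in range(n1))  # X1 ~ Binomial(n1, p1)
        x2 = sum(rng.random() < p2 for _ in range(n2))  # X2 ~ Binomial(n2, p2)
        diffs.append(x1 / n1 - x2 / n2)
    mean = sum(diffs) / trials
    var = sum((d - mean) ** 2 for d in diffs) / (trials - 1)
    return mean, var

mean, var = simulate_diff_of_proportions(0.3, 0.5, 50, 80)
# Theory: E   = 0.3 - 0.5 = -0.2
#         Var = 0.3*0.7/50 + 0.5*0.5/80 = 0.007325
```

The simulated mean and variance land close to the theoretical values, confirming that the p-hats (not the X's) are the right objects to work with.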
https://huggingface.co/blog/stable_diffusion | Stable Diffusion 🎨
...using 🧨 Diffusers
Stable Diffusion is a text-to-image latent diffusion model created by the researchers and engineers from CompVis, Stability AI and LAION. It is trained on 512x512 images from a subset of the LAION-5B database. LAION-5B is the largest, freely accessible multi-modal dataset that currently exists.
In this post, we want to show how to use Stable Diffusion with the 🧨 Diffusers library, explain how the model works and finally dive a bit deeper into how diffusers allows one to customize the image generation pipeline.
Note: It is highly recommended to have a basic understanding of how diffusion models work. If diffusion models are completely new to you, we recommend reading one of the following blog posts:
Now, let's get started by generating some images 🎨.
Running Stable Diffusion
Before using the model, you need to accept the model license in order to download and use the weights. Note: the license does not need to be explicitly accepted through the UI anymore.
The license is designed to mitigate the potential harmful effects of such a powerful machine learning system. We request users to read the license entirely and carefully. Here we offer a summary:
1. You can't use the model to deliberately produce nor share illegal or harmful outputs or content,
2. We claim no rights on the outputs you generate, you are free to use them and are accountable for their use which should not go against the provisions set in the license, and
3. You may re-distribute the weights and use the model commercially and/or as a service. If you do, please be aware you have to include the same use restrictions as the ones in the license and share a copy of the CreativeML OpenRAIL-M to all your users.
Usage
First, you should install diffusers==0.10.2 to run the following code snippets:
pip install diffusers==0.10.2 transformers scipy ftfy accelerate
In this post we'll use model version v1-4, but you can also use other versions of the model such as 1.5, 2, and 2.1 with minimal code changes.
The Stable Diffusion model can be run in inference with just a couple of lines using the StableDiffusionPipeline pipeline. The pipeline sets up everything you need to generate images from text with a simple from_pretrained function call.
from diffusers import StableDiffusionPipeline
pipe = StableDiffusionPipeline.from_pretrained("CompVis/stable-diffusion-v1-4")
If a GPU is available, let's move it to one!
pipe.to("cuda")
Note: If you are limited by GPU memory and have less than 10GB of GPU RAM available, please make sure to load the StableDiffusionPipeline in float16 precision instead of the default float32 precision as done above.
You can do so by loading the weights from the fp16 branch and by telling diffusers to expect the weights to be in float16 precision:
import torch
from diffusers import StableDiffusionPipeline
pipe = StableDiffusionPipeline.from_pretrained("CompVis/stable-diffusion-v1-4", revision="fp16", torch_dtype=torch.float16)
To run the pipeline, simply define the prompt and call pipe.
prompt = "a photograph of an astronaut riding a horse"
image = pipe(prompt).images[0]
# you can save the image with
# image.save(f"astronaut_rides_horse.png")
The result would look as follows
The previous code will give you a different image every time you run it.
If at some point you get a black image, it may be because the content filter built inside the model might have detected an NSFW result. If you believe this shouldn't be the case, try tweaking your prompt or using a different seed. In fact, the model predictions include information about whether NSFW was detected for a particular result. Let's see what they look like:
result = pipe(prompt)
print(result)
{
'images': [<PIL.Image.Image image mode=RGB size=512x512>],
'nsfw_content_detected': [False]
}
If you want deterministic output you can set a random seed and pass a generator to the pipeline. Every time you use a generator with the same seed you'll get the same image output.
import torch
generator = torch.Generator("cuda").manual_seed(1024)
image = pipe(prompt, guidance_scale=7.5, generator=generator).images[0]
# you can save the image with
# image.save(f"astronaut_rides_horse.png")
The result would look as follows
You can change the number of inference steps using the num_inference_steps argument.
In general, results are better the more steps you use; however, the more steps, the longer the generation takes. Stable Diffusion works quite well with a relatively small number of steps, so we recommend using the default of 50 inference steps. If you want faster results you can use a smaller number; if you want potentially higher-quality results, you can use a larger one.
Let's try out running the pipeline with less denoising steps.
import torch
generator = torch.Generator("cuda").manual_seed(1024)
image = pipe(prompt, guidance_scale=7.5, num_inference_steps=15, generator=generator).images[0]
# you can save the image with
# image.save(f"astronaut_rides_horse.png")
Note how the structure is the same, but there are problems in the astronaut's suit and the general form of the horse. This shows that using only 15 denoising steps has significantly degraded the quality of the generation result. As stated earlier, 50 denoising steps are usually sufficient to generate high-quality images.
Besides num_inference_steps, we've been using another function argument, called guidance_scale in all previous examples. guidance_scale is a way to increase the adherence to the conditional signal that guides the generation (text, in this case) as well as overall sample quality. It is also known as classifier-free guidance, which in simple terms forces the generation to better match the prompt potentially at the cost of image quality or diversity. Values between 7 and 8.5 are usually good choices for Stable Diffusion. By default the pipeline uses a guidance_scale of 7.5.
If you use a very large value the images might look good, but will be less diverse. You can learn about the technical details of this parameter in this section of the post.
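As a scalar sketch of that combination (the custom pipeline later in this post applies the identical formula to the U-Net's noise predictions):

```python
def apply_cfg(noise_uncond, noise_text, guidance_scale):
    # Classifier-free guidance: start from the unconditional prediction and
    # move guidance_scale times along the text-conditioned direction.
    return noise_uncond + guidance_scale * (noise_text - noise_uncond)

# guidance_scale == 1 recovers the purely text-conditioned prediction;
# larger values push the result harder toward the prompt.
```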
Next, let's see how you can generate several images of the same prompt at once. First, we'll create an image_grid function to help us visualize them nicely in a grid.
from PIL import Image
def image_grid(imgs, rows, cols):
    assert len(imgs) == rows*cols

    w, h = imgs[0].size
    grid = Image.new('RGB', size=(cols*w, rows*h))
    grid_w, grid_h = grid.size

    for i, img in enumerate(imgs):
        grid.paste(img, box=(i%cols*w, i//cols*h))
    return grid
We can generate multiple images for the same prompt by simply using a list with the same prompt repeated several times. We'll send the list to the pipeline instead of the string we used before.
num_images = 3
prompt = ["a photograph of an astronaut riding a horse"] * num_images
images = pipe(prompt).images
grid = image_grid(images, rows=1, cols=3)
# you can save the grid with
# grid.save(f"astronaut_rides_horse.png")
By default, stable diffusion produces images of 512 × 512 pixels. It's very easy to override the default using the height and width arguments to create rectangular images in portrait or landscape ratios.
When choosing image sizes, we advise the following:
• Make sure height and width are both multiples of 8.
• Going below 512 might result in lower quality images.
• Going over 512 in both directions will repeat image areas (global coherence is lost).
• The best way to create non-square images is to use 512 in one dimension, and a value larger than that in the other one.
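A small hypothetical helper (not part of diffusers) to enforce the multiple-of-8 rule when computing sizes:

```python
def snap_to_multiple_of_8(x: int) -> int:
    # Round down to the nearest multiple of 8, never going below 8.
    return max(8, (x // 8) * 8)
```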
Let's run an example:
prompt = "a photograph of an astronaut riding a horse"
image = pipe(prompt, height=512, width=768).images[0]
# you can save the image with
# image.save(f"astronaut_rides_horse.png")
How does Stable Diffusion work?
Having seen the high-quality images that stable diffusion can produce, let's try to understand a bit better how the model functions.
Stable Diffusion is based on a particular type of diffusion model called Latent Diffusion, proposed in High-Resolution Image Synthesis with Latent Diffusion Models.
Generally speaking, diffusion models are machine learning systems that are trained to denoise random Gaussian noise step by step, to get to a sample of interest, such as an image. For a more detailed overview of how they work, check this colab.
Diffusion models have been shown to achieve state-of-the-art results for generating image data. But one downside of diffusion models is that the reverse denoising process is slow because of its repeated, sequential nature. In addition, these models consume a lot of memory because they operate in pixel space, which becomes huge when generating high-resolution images. Therefore, it is challenging to train these models and also use them for inference.
Latent diffusion can reduce the memory and compute complexity by applying the diffusion process over a lower dimensional latent space, instead of using the actual pixel space. This is the key difference between standard diffusion and latent diffusion models: in latent diffusion the model is trained to generate latent (compressed) representations of the images.
There are three main components in latent diffusion.
1. An autoencoder (VAE).
2. A U-Net.
3. A text-encoder, e.g. CLIP's Text Encoder.
1. The autoencoder (VAE)
The VAE model has two parts, an encoder and a decoder. The encoder is used to convert the image into a low dimensional latent representation, which will serve as the input to the U-Net model. The decoder, conversely, transforms the latent representation back into an image.
During latent diffusion training, the encoder is used to get the latent representations (latents) of the images for the forward diffusion process, which applies more and more noise at each step. During inference, the denoised latents generated by the reverse diffusion process are converted back into images using the VAE decoder. As we will see, during inference we only need the VAE decoder.
2. The U-Net
The U-Net has an encoder part and a decoder part both comprised of ResNet blocks. The encoder compresses an image representation into a lower resolution image representation and the decoder decodes the lower resolution image representation back to the original higher resolution image representation that is supposedly less noisy. More specifically, the U-Net output predicts the noise residual which can be used to compute the predicted denoised image representation.
To prevent the U-Net from losing important information while downsampling, short-cut connections are usually added between the downsampling ResNets of the encoder to the upsampling ResNets of the decoder. Additionally, the stable diffusion U-Net is able to condition its output on text-embeddings via cross-attention layers. The cross-attention layers are added to both the encoder and decoder part of the U-Net usually between ResNet blocks.
3. The Text-encoder
The text-encoder is responsible for transforming the input prompt, e.g. "An astronaut riding a horse" into an embedding space that can be understood by the U-Net. It is usually a simple transformer-based encoder that maps a sequence of input tokens to a sequence of latent text-embeddings.
Inspired by Imagen, Stable Diffusion does not train the text-encoder during training and simply uses CLIP's already-trained text encoder, CLIPTextModel.
Why is latent diffusion fast and efficient?
Since latent diffusion operates on a low dimensional space, it greatly reduces the memory and compute requirements compared to pixel-space diffusion models. For example, the autoencoder used in Stable Diffusion has a reduction factor of 8. This means that an image of shape (3, 512, 512) becomes (4, 64, 64) in latent space, so the spatial grid requires 8 × 8 = 64 times less memory.
This is why it's possible to generate 512 × 512 images so quickly, even on 16GB Colab GPUs!
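A quick sanity check on those numbers (the 4-channel latent is an assumption based on the latent shape printed later in the post):

```python
pixel_elems = 3 * 512 * 512   # RGB image in pixel space
latent_elems = 4 * 64 * 64    # latent, cf. torch.Size([1, 4, 64, 64]) below

# The 8 x 8 spatial reduction quoted in the text:
assert (512 * 512) // (64 * 64) == 64
# Total element ratio once the channel counts differ:
assert pixel_elems // latent_elems == 48
```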
Stable Diffusion during inference
Putting it all together, let's now take a closer look at how the model works in inference by illustrating the logical flow.
The stable diffusion model takes both a latent seed and a text prompt as input. The latent seed is then used to generate random latent image representations of size $64 \times 64$ whereas the text prompt is transformed to text embeddings of size $77 \times 768$ via CLIP's text encoder.
Next the U-Net iteratively denoises the random latent image representations while being conditioned on the text embeddings. The output of the U-Net, being the noise residual, is used to compute a denoised latent image representation via a scheduler algorithm. Many different scheduler algorithms can be used for this computation, each having its pros and cons. For Stable Diffusion, good choices include the default PNDM scheduler and the K-LMS scheduler used later in this post.
Theory on how the scheduler algorithms function is out of scope for this notebook, but in short one should remember that they compute the predicted denoised image representation from the previous noise representation and the predicted noise residual. For more information, we recommend looking into Elucidating the Design Space of Diffusion-Based Generative Models.
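As a toy illustration of that statement (a plain Euler step in sigma space under the epsilon parameterisation, not the actual K-LMS update):

```python
def euler_step(x, eps, sigma, sigma_next):
    # With x = denoised + sigma * eps, the derivative dx/dsigma equals eps,
    # so one Euler step moves x toward the denoised sample as sigma shrinks.
    return x + (sigma_next - sigma) * eps

# If the noise prediction is exact, stepping sigma 3 -> 1 -> 0 recovers
# the denoised value.
denoised, eps = 4.0, 2.0
x = denoised + 3.0 * eps          # noisy latent at sigma = 3
x = euler_step(x, eps, 3.0, 1.0)  # now denoised + 1.0 * eps
x = euler_step(x, eps, 1.0, 0.0)  # now exactly denoised
```

In the real loop the model re-predicts eps at every step, which is what makes the iteration converge to a clean sample rather than a straight line.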
The denoising process is repeated roughly 50 times to step-by-step retrieve better latent image representations. Once complete, the latent image representation is decoded by the decoder part of the variational autoencoder.
After this brief introduction to Latent and Stable Diffusion, let's see how to make advanced use of 🤗 Hugging Face diffusers library!
Finally, we show how you can create custom diffusion pipelines with diffusers. Writing a custom inference pipeline is an advanced use of the diffusers library that can be useful to switch out certain components, such as the VAE or scheduler explained above.
For example, we'll show how to use Stable Diffusion with a different scheduler, namely Katherine Crowson's K-LMS scheduler added in this PR.
The pre-trained model includes all the components required to set up a complete diffusion pipeline. They are stored in the following folders:
• text_encoder: Stable Diffusion uses CLIP, but other diffusion models may use other encoders such as BERT.
• tokenizer: It must match the one used by the text_encoder model.
• scheduler: The scheduling algorithm used to progressively add noise to the image during training.
• unet: The model used to generate the latent representation of the input.
• vae: Autoencoder module that we'll use to decode latent representations into real images.
We can load the components by referring to the folder they were saved, using the subfolder argument to from_pretrained.
from transformers import CLIPTextModel, CLIPTokenizer
from diffusers import AutoencoderKL, UNet2DConditionModel, PNDMScheduler
# 1. Load the autoencoder model which will be used to decode the latents into image space.
vae = AutoencoderKL.from_pretrained("CompVis/stable-diffusion-v1-4", subfolder="vae")
# 2. Load the tokenizer and text encoder to tokenize and encode the text.
tokenizer = CLIPTokenizer.from_pretrained("openai/clip-vit-large-patch14")
text_encoder = CLIPTextModel.from_pretrained("openai/clip-vit-large-patch14")
# 3. The UNet model for generating the latents.
unet = UNet2DConditionModel.from_pretrained("CompVis/stable-diffusion-v1-4", subfolder="unet")
from diffusers import LMSDiscreteScheduler
scheduler = LMSDiscreteScheduler(beta_start=0.00085, beta_end=0.012, beta_schedule="scaled_linear", num_train_timesteps=1000)
Next, let's move the models to GPU.
torch_device = "cuda"
vae.to(torch_device)
text_encoder.to(torch_device)
unet.to(torch_device)
We now define the parameters we'll use to generate images.
Note that guidance_scale is defined analogously to the guidance weight w of equation (2) in the Imagen paper. guidance_scale == 1 corresponds to doing no classifier-free guidance. Here we set it to 7.5 as also done previously.
In contrast to the previous examples, we set num_inference_steps to 100 to get an even more defined image.
prompt = ["a photograph of an astronaut riding a horse"]
height = 512 # default height of Stable Diffusion
width = 512 # default width of Stable Diffusion
num_inference_steps = 100 # Number of denoising steps
guidance_scale = 7.5 # Scale for classifier-free guidance
generator = torch.manual_seed(0) # Seed generator to create the initial latent noise
batch_size = len(prompt)
First, we get the text_embeddings for the passed prompt. These embeddings will be used to condition the UNet model and guide the image generation towards something that should resemble the input prompt.
text_input = tokenizer(prompt, padding="max_length", max_length=tokenizer.model_max_length, truncation=True, return_tensors="pt")
text_embeddings = text_encoder(text_input.input_ids.to(torch_device))[0]
We'll also get the unconditional text embeddings for classifier-free guidance, which are just the embeddings for the padding token (empty text). They need to have the same shape as the conditional text_embeddings (batch_size and seq_length)
max_length = text_input.input_ids.shape[-1]
uncond_input = tokenizer(
    [""] * batch_size, padding="max_length", max_length=max_length, return_tensors="pt"
)
uncond_embeddings = text_encoder(uncond_input.input_ids.to(torch_device))[0]
For classifier-free guidance, we need to do two forward passes: one with the conditioned input (text_embeddings), and another with the unconditional embeddings (uncond_embeddings). In practice, we can concatenate both into a single batch to avoid doing two forward passes.
text_embeddings = torch.cat([uncond_embeddings, text_embeddings])
Next, we generate the initial random noise.
latents = torch.randn(
    (batch_size, unet.in_channels, height // 8, width // 8),
    generator=generator,
)
latents = latents.to(torch_device)
If we examine the latents at this stage we'll see their shape is torch.Size([1, 4, 64, 64]), much smaller than the image we want to generate. The model will transform this latent representation (pure noise) into a 512 × 512 image later on.
Next, we initialize the scheduler with our chosen num_inference_steps. This will compute the sigmas and exact time step values to be used during the denoising process.
scheduler.set_timesteps(num_inference_steps)
The K-LMS scheduler needs to multiply the latents by its sigma values. Let's do this here:
latents = latents * scheduler.init_noise_sigma
We are ready to write the denoising loop.
from tqdm.auto import tqdm
scheduler.set_timesteps(num_inference_steps)
for t in tqdm(scheduler.timesteps):
    # expand the latents if we are doing classifier-free guidance to avoid doing two forward passes.
    latent_model_input = torch.cat([latents] * 2)
    latent_model_input = scheduler.scale_model_input(latent_model_input, timestep=t)

    # predict the noise residual
    with torch.no_grad():
        noise_pred = unet(latent_model_input, t, encoder_hidden_states=text_embeddings).sample

    # perform guidance
    noise_pred_uncond, noise_pred_text = noise_pred.chunk(2)
    noise_pred = noise_pred_uncond + guidance_scale * (noise_pred_text - noise_pred_uncond)

    # compute the previous noisy sample x_t -> x_t-1
    latents = scheduler.step(noise_pred, t, latents).prev_sample
We now use the vae to decode the generated latents back into the image.
# scale and decode the image latents with vae
latents = 1 / 0.18215 * latents
with torch.no_grad():
    image = vae.decode(latents).sample
image = (image / 2 + 0.5).clamp(0, 1) | 2023-03-21 05:09:50 | {"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 2, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 1, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.42110195755958557, "perplexity": 2331.3419279121144}, "config": {"markdown_headings": false, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2023-14/segments/1679296943625.81/warc/CC-MAIN-20230321033306-20230321063306-00695.warc.gz"} |
http://www.newton.ac.uk/event/smc/seminars | # Seminars (SMC)
Videos and presentation materials from other INI events are also available.
Search seminar archive
Date Time Speaker Title Presentation Material
20th January 2004 17:15 to 18:00 J Goodfellow Towards a predictive biology
27th January 2004 09:30 to 10:30 A Levine The worm turns: the Helix-Coil wormlike chain
29th January 2004 09:30 to 10:30 DTF Dryden Type I DNA restriction enzymes: smart molecular machines
3rd February 2004 11:30 to 12:30 P Martin How the ear listens: spontaneous mechanical oscillations by mechanosensory hair cells
5th February 2004 11:30 to 12:30 Z-C Ou-Yang Elastic theory of a single DNA molecule
9th February 2004 11:30 to 13:00 S Komura Phase separation in biomembranes
11th February 2004 11:30 to 13:00 F Juelicher Active gels and cytoskeletal dynamics
17th February 2004 11:30 to 12:30 A Baumgaertner In silico studies of cell motility
18th February 2004 11:30 to 12:30 E Carlon Melting of double-stranded DNA-theory and experiments
19th February 2004 11:30 to 12:30 F Mohammad-Rafiee Probing chromatin dynamics with synthetic DNA ligands
24th February 2004 11:30 to 12:30 D Andelman The onset of complexation of polymers and DNA with amphiphiles
25th February 2004 11:30 to 12:30 S Tanaka Crystallization and diffusion of globular proteins in lipid cubic phases
26th February 2004 11:30 to 12:30 A Vilfan Elastic lever-arm model for myosin V
2nd March 2004 11:30 to 12:30 T Liverpool Dynamics of active filament solutions
2nd March 2004 14:00 to 15:00 D Lukatsky Membrane fusion
4th March 2004 11:30 to 12:30 D Head The deformations and Green's response of cross-linked biopolymer networks
9th March 2004 11:00 to 12:00 A Travers & W Poon Workshop on histone complexation in chromatin
10th March 2004 11:30 to 12:30 J Trinick Structural studies of the giant protein titin
10th March 2004 14:00 to 15:00 R Hawkins Entropic allostery in proteins
11th March 2004 11:30 to 12:30 R Golestanian Orientational ordering of rod-like polyelectrolytes
14th April 2004 11:30 to 12:30 L Mahadevan Statics and dynamics of actin assemblies
15th April 2004 11:30 to 12:30 J Smith Protein dynamics and function
20th April 2004 11:30 to 12:30 E Ben-Jacob Why bacteria go complex: higher flexibility for better adaptability
21st April 2004 11:00 to 12:00 V Srivastava The statistical mechanics of brain storage
22nd April 2004 11:30 to 12:30 T Maggs Simulating the Ether: local algoritms for long-ranged forces
22nd April 2004 15:00 to 16:00 I Baruchi Functional holography of correlations matrices for biological networks
4th May 2004 14:30 to 15:30 H Flyvbjerg Kinetics of self-assembling microtubules: an "inverse problem" in biochemistry
5th May 2004 11:30 to 12:30 H Flyvbjerg Cell motility as persistent random motion - revisited
6th May 2004 11:00 to 13:00 S Mayor & M Rao Dynamics of intracellular membrane traffic: interacting active networks
11th May 2004 11:30 to 12:30 RH Colby Reversible aggregation of albumin
12th May 2004 11:30 to 12:30 A Baumgaertner A simulation of membrane proteins
13th May 2004 11:30 to 12:30 A Frischknecht Molecular theory of lipid bilayers
17th May 2004 16:00 to 17:00 S Egelhaaf Micelle-vesicle transitions in lecithin-bile salt mixtures
18th May 2004 11:30 to 12:30 A Mogilner Self-organisation of microtubule asters
19th May 2004 09:30 to 11:00 T Harder From nano-scale cluster to functional platform: lipid raft domains in cell membranes
19th May 2004 11:30 to 13:00 S Keller Immiscible liquid phases in lipid membranes containing cholesterol
19th May 2004 14:00 to 15:30 P Beales & V Gordon Phase separation in binary lipid vesicles
19th May 2004 16:00 to 17:30 R Lipowsky Domains in membranes and vesicles
20th May 2004 09:30 to 11:00 P Olmsted Shape transformations in phase separated membranes
20th May 2004 11:30 to 13:00 S Mayor & M Rao Active lipid-based organisation on living cell surfaces
21st May 2004 11:30 to 12:30 HS Chan Cooperativity principles in protein folding: experimental criteria and interacition nonadditivity
21st May 2004 14:00 to 15:30 P Sens Modelling lipid rafts: size, shape, and possible involvement in membrane mechano-sensitivity
25th May 2004 11:30 to 12:30 II Potemkin Adsorbed comb-like copolymers as a tool for molecular walkers
27th May 2004 11:30 to 12:30 D Odde Modeling microtubule self-assembly dynamics during mitosis
2nd June 2004 09:30 to 16:00 D Bray & S Conway Morris & S Laughlin Thinking about evolution in physics and biology
3rd June 2004 11:30 to 12:30 R Sear Highly specific protein-protein interactions, evolution and negative design
8th June 2004 11:30 to 12:30 M Howard Models for precise protein localisation in bacteria
23rd June 2004 14:00 to 15:00 K Dunker Intrinsic disorder and protein function
24th June 2004 09:30 to 10:30 P Ten Wolde & JM Paulsson Thinking about gene regulatory networks
29th June 2004 09:30 to 10:30 R Laskey Maintenance and propagation of the blueprint for life Cancer and why DNA matters
29th June 2004 10:30 to 11:30 S Bell Maintenance and propagation of the blueprint for life The nuts and bolts of DNA replication
29th June 2004 11:45 to 12:45 A Venkitaraman Maintenance and propagation of the blueprint for life TBA
29th June 2004 13:45 to 14:45 J Downs Maintenance and propagation of the blueprint for life Packaging of DNA and the problems it presents
7th July 2004 09:30 to 10:30 B Possee Baculovirus polyhedrin protein crystallisation in vivo: armour plating for viruses
7th July 2004 10:30 to 11:30 R Casey Protein crystallization in vivo: Seed proteins
7th July 2004 12:00 to 12:30 J Doye Protein crystallization in vivo: The Why and Why not | 2015-05-23 13:32:04 | {"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 1, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.19287897646427155, "perplexity": 11823.153268683658}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2015-22/segments/1432207927634.1/warc/CC-MAIN-20150521113207-00203-ip-10-180-206-219.ec2.internal.warc.gz"} |
https://buboflash.eu/bubo5/whats-new-on-day?day-number=42120 | # on 28-Apr-2015 (Tue)
#### Annotation 149672505
#finance #steiner-mastering-financial-calculations-3ed Trading in a STIR contract generally finishes at the time when the reference rate against which it is settled at expiry - such as LIBOR - is fixed. For contracts settled against a reference rate with spot value (such as EURIBOR or CHF LIBOR), this is two working days before the third Wednesday.
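The settlement-timing rule above is easy to compute; a rough sketch (hypothetical helpers, ignoring exchange holiday calendars, so "working days" here means weekdays only):

```python
import datetime as dt

def third_wednesday(year: int, month: int) -> dt.date:
    """Third Wednesday of the given month (weekday(): Mon=0 ... Wed=2)."""
    first = dt.date(year, month, 1)
    offset = (2 - first.weekday()) % 7   # days until the first Wednesday
    return first + dt.timedelta(days=offset + 14)

def last_trading_day(year: int, month: int, spot_lag: int = 2) -> dt.date:
    """Step back `spot_lag` weekdays from the third Wednesday, as for
    contracts settled against a reference rate with spot value."""
    d = third_wednesday(year, month)
    while spot_lag > 0:
        d -= dt.timedelta(days=1)
        if d.weekday() < 5:              # skip Saturday/Sunday
            spot_lag -= 1
    return d
```

For sterling futures, where settlement fixes on the third Wednesday itself, spot_lag=0 makes the last trading day coincide with that date.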
#### Annotation 149672512
#finance #steiner-mastering-financial-calculations-3ed In the case of sterling futures, both the last trading day and the LIBOR fixing used for settlement are the third Wednesday of the delivery month.
#### Flashcard 150890191
Tags
#finance #steiner-mastering-financial-calculations-3ed
Question
A dealer expecting the yield curve to twist [direction?] sells a shorter-dated FRA and buys a longer-dated FRA.
anti-clockwise
#### Parent (intermediate) annotation
A dealer expecting the yield curve to twist anti-clockwise sells a shorter-dated FRA and buys a longer-dated FRA.
#### Flashcard 150890200
Tags
#finance #steiner-mastering-financial-calculations-3ed
Question
A dealer expecting the yield curve to twist anti-clockwise [buys/sells] a shorter-dated FRA and [buys/sells] a longer-dated FRA.
sells a shorter-dated FRA and buys a longer-dated FRA
#### Parent (intermediate) annotation
A dealer expecting the yield curve to twist anti-clockwise sells a shorter-dated FRA and buys a longer-dated FRA.
#### Flashcard 150890210
Tags
#finance #steiner-mastering-financial-calculations-3ed
Question
Therefore a buyer of a FRA will profit if the interest rate [rises or falls?]
rises
#### Parent (intermediate) annotation
Therefore a buyer of a FRA will profit if the interest rate rises
#### Flashcard 150890219
Tags
#finance #steiner-mastering-financial-calculations-3ed
Question
Therefore a [buyer or seller?] of a FRA will profit if the interest rate rises
buyer
#### Parent (intermediate) annotation
Therefore a buyer of a FRA will profit if the interest rate rises
#### Flashcard 150890228
Tags
#finance #steiner-mastering-financial-calculations-3ed
Question
Therefore a buyer of a [FRA or future?] will profit if the interest rate rises
FRA
#### Parent (intermediate) annotation
Therefore a buyer of a FRA will profit if the interest rate rises
#### Flashcard 150890240
Tags
#finance #steiner-mastering-financial-calculations-3ed
Question
In the case of sterling futures, both the [...] and the LIBOR fixing used for settlement are the third Wednesday of the delivery month.
last trading day
#### Flashcard 150890246
Tags
#finance #steiner-mastering-financial-calculations-3ed
Question
In the case of sterling futures, both the last trading day and the [...] are the third Wednesday of the delivery month.
LIBOR fixing used for settlement
#### Flashcard 150890252
Tags
#finance #steiner-mastering-financial-calculations-3ed
Question
In the case of sterling futures, both the last trading day and the LIBOR fixing used for settlement are the [...].
third Wednesday of the delivery month
#### Flashcard 150890259
Tags
#finance #steiner-mastering-financial-calculations-3ed
Question
In practice therefore, the FRA rate for a period coinciding with a futures contract would be ([...]).
100 - futures price
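The FRA cards above fit together numerically. The following is an illustrative Python sketch of the standard discounted FRA settlement formula (settlement is paid at the start of the contract period, discounted at the fixing rate); the notional, rates, day count, and the helper name `fra_settlement` are made-up assumptions for illustration, not from the source text.

```python
def fra_settlement(notional, fra_rate, fixing_rate, days, basis=360):
    """Amount paid to the FRA buyer at the start of the contract period,
    discounted back from the period end at the fixing (e.g. LIBOR) rate.
    Positive when the fixing rate ends up above the contracted FRA rate."""
    year_fraction = days / basis
    undiscounted = notional * (fixing_rate - fra_rate) * year_fraction
    return undiscounted / (1 + fixing_rate * year_fraction)

# Rates fix above the contracted FRA rate: the buyer profits.
gain = fra_settlement(1_000_000, fra_rate=0.04, fixing_rate=0.05, days=90)
# Rates fix below the contracted FRA rate: the buyer pays.
loss = fra_settlement(1_000_000, fra_rate=0.04, fixing_rate=0.03, days=90)
print(round(gain, 2), round(loss, 2))
```

The sign of the settlement is what the cards drill: a buyer of a FRA profits if the interest rate rises.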
#### Flashcard 150890283
Tags
#artificial-intelligence #minsky #society-of-mind
Question
perhaps it's because there are no persons in our heads to make us do the things we want -nor even ones to make us want to want-that we construct the myth that [...] inside ourselves
we're
#### Flashcard 150890302
Tags
#finance #steiner-mastering-financial-calculations-3ed
Question
Trading in a STIR contract generally finishes at the time when [...]. For contracts settled against a reference rate with spot value (such as EURIBOR or CHF LIBOR), this is two working days before the third Wednesday.
the reference rate against which it is settled at expiry - such as LIBOR - is fixed
#### Flashcard 150890308
Tags
#finance #steiner-mastering-financial-calculations-3ed
Question
Trading in a STIR contract generally finishes at the time when the reference rate against which it is settled at expiry - such as LIBOR - is fixed. For contracts settled against a reference rate with [...] (such as EURIBOR or CHF LIBOR), this is two working days before the third Wednesday.
spot value
#### Flashcard 150890314
Tags
#finance #steiner-mastering-financial-calculations-3ed
Question
Trading in a STIR contract generally finishes at the time when the reference rate against which it is settled at expiry - such as LIBOR - is fixed. For contracts settled against a reference rate with spot value (such as EURIBOR or CHF LIBOR), this is [relate to specific day of week].
two working days before the third Wednesday
#### Annotation 150890320
#m249 #mathematics #open-university #statistics #time-series In additive decomposition model, the seasonal fluctuations are of roughly the same size whether the level mt is large or small.
#### Annotation 150890332
#m249 #mathematics #open-university #statistics #time-series in multiplicative decomposition model, the size of the seasonal fluctuations is proportional to mt
#### Annotation 150890341
#m249 #mathematics #open-university #statistics #time-series in multiplicative decomposition model, the size of the irregular fluctuations is proportional to mt × st
#### Annotation 150890350
#m249 #mathematics #open-university #statistics #time-series Consider the multiplicative model Xt = mt × st × Wt. Let Yt denote the time series of logarithms: Yt = log Xt. Then Yt = log Xt = log(mt × st × Wt) = log mt + log st + log Wt.
#### Flashcard 150890360
Tags
#m249 #mathematics #open-university #statistics #time-series
Question
Consider the multiplicative model Xt = mt × st × Wt .Let Yt denote the time series of logarithms: Yt =log Xt .Then
• Yt =log Xt
• Yt= log(mt × st × Wt)
• Yt= [...] .
log mt +log st +log Wt
#### Annotation 150890366
#m249 #mathematics #open-university #statistics #time-series if a multiplicative model is appropriate for the time series Xt , then an additive model is appropriate for the time series of logarithms, Yt =log Xt .
#### Flashcard 150890373
Tags
#m249 #mathematics #open-university #statistics #time-series
Question
if a [...] model is appropriate for the time series Xt , then an additive model is appropriate for the time series of logarithms, Yt =log Xt .
multiplicative
#### Flashcard 150890379
Tags
#m249 #mathematics #open-university #statistics #time-series
Question
if a multiplicative model is appropriate for the time series Xt , then [...] model is appropriate for the time series of logarithms, Yt =log Xt .
an additive
#### Flashcard 150890385
Tags
#m249 #mathematics #open-university #statistics #time-series
Question
if a multiplicative model is appropriate for the time series Xt , then an additive model is appropriate for the time series of logarithms, Yt = [...] .
log Xt
#### Annotation 150890394
#m249 #mathematics #open-university #statistics #time-series by taking logarithms, a time series for which a multiplicative model is appropriate can be transformed into a time series for which an additive model is appropriate.
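The log-transform identity in the cards above can be checked numerically. A minimal Python sketch (the component series values are made up for illustration):

```python
import math

# Made-up trend, seasonal, and irregular components.
m = [100.0, 110.0, 120.0, 130.0]
s = [1.2, 0.8, 1.1, 0.9]
w = [1.01, 0.99, 1.02, 0.98]

# Multiplicative model: X_t = m_t * s_t * W_t
x = [mi * si * wi for mi, si, wi in zip(m, s, w)]

# Taking logarithms yields an additive model:
# log X_t = log m_t + log s_t + log W_t
y = [math.log(xi) for xi in x]
y_additive = [math.log(mi) + math.log(si) + math.log(wi)
              for mi, si, wi in zip(m, s, w)]

assert all(abs(a - b) < 1e-12 for a, b in zip(y, y_additive))
```

The assertion passes because log of a product is the sum of logs, which is exactly why a multiplicative decomposition of Xt becomes an additive decomposition of Yt = log Xt.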
#### Flashcard 150890398
Tags
#m249 #mathematics #open-university #statistics #time-series
Question
by [...], a time series for which a multiplicative model is appropriate can be transformed into a time series for which an additive model is appropriate.
taking logarithms
#### Annotation 150890404
#m249 #mathematics #open-university #statistics #time-series Transformations of time series that are commonly used include the power transformations: Yt = Xt^a, where a = ... 1/4, 1/3, 1/2, 2, 3, 4, ....
#### Flashcard 150890411
Tags
#m249 #mathematics #open-university #statistics #time-series
Question
Transformations of time series that are commonly used include the power transformations:
Yt = [...] , where a = ... 1/4, 1/3, 1/2, 2, 3, 4, ....
Xt^a
#### Flashcard 150890417
Tags
#m249 #mathematics #open-university #statistics #time-series
Question
Transformations of time series that are commonly used include the power transformations:
Yt = Xt^a, where a = [...].
... 1/4, 1/3, 1/2, 2, 3, 4, ...
#### Annotation 150890423
#m249 #mathematics #open-university #statistics #time-series A simple moving average (or just a moving average) of order (or span) 11 can be written as Yt = $$\large \frac{1}{11}$$(Xt−5 + ···+ Xt + ···+ Xt+5 )
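A minimal Python sketch of the centred simple moving average defined above (the example series is made up, and `moving_average` is a hypothetical helper name; an order of 3 is used so the windows are easy to check by hand):

```python
def moving_average(x, order):
    """Centred simple moving average of odd order:
    Y_t = (1/order) * (X_{t-h} + ... + X_t + ... + X_{t+h}), h = (order-1)//2.
    Returns one value per time point that has a full window."""
    if order % 2 == 0:
        raise ValueError("use an odd order so the average is centred")
    h = (order - 1) // 2
    return [sum(x[t - h:t + h + 1]) / order
            for t in range(h, len(x) - h)]

series = [3, 5, 4, 6, 8, 7, 9]
print(moving_average(series, 3))  # → [4.0, 5.0, 6.0, 7.0, 8.0]
```

Note the smoothed series is shorter than the original by h points at each end, which is why the cards restrict attention to odd orders centred on the middle value.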
#### Flashcard 150890430
Tags
#m249 #mathematics #open-university #statistics #time-series
Question
A [...] of order (or span) 11 can be written as Yt = $$\large \frac{1}{11}$$(Xt−5 + ···+ Xt + ···+ Xt+5 )
simple moving average (or just a moving average)
#### Flashcard 150890436
Tags
#m249 #mathematics #open-university #statistics #time-series
Question
A simple moving average (or just a moving average) of [...] 11 can be written as Yt = $$\large \frac{1}{11}$$(Xt−5 + ···+ Xt + ···+ Xt+5 )
order (or span)
#### Flashcard 150890442
Tags
#m249 #mathematics #open-university #statistics #time-series
Question
A simple moving average (or just a moving average) of order (or span) 11 can be written as Yt = [...]
Yt = $$\large \frac{1}{11}$$(Xt−5 + ···+ Xt + ···+ Xt+5 )
#### Annotation 150890451
#m249 #mathematics #open-university #statistics #time-series For the purpose of smoothing time series, only moving averages for which the order is an odd number will be used. These are said to be centred on the middle value.
#### Flashcard 150890455
Tags
#m249 #mathematics #open-university #statistics #time-series
Question
For the purpose of smoothing time series, only moving averages for which the order is an odd number will be used. These are said to be [...] on the middle value.
centred
#### Annotation 150890461
#m249 #mathematics #open-university #statistics #time-series With a suitable degree of smoothing — that is, with a suitable choice of the order of the moving average — the moving average provides an estimate of the trend component mt ; this is denoted $$\hat{m_t}$$
#### Flashcard 150890471
Tags
#m249 #mathematics #open-university #statistics #time-series
Question
With a suitable degree of smoothing — that is, with a suitable choice of the order of the moving average — the moving average provides [...] of the trend component mt ; this is denoted $$\hat{m_t}$$
an estimate
# QuasR::qAlign: how to specify maximum allowed number of mismatches and output all alignments
Julie Zhu ♦ 4.3k
@julie-zhu-3596
United States
Dear QuasR developers,
I would like to use the qAlign function in QuasR. However, I do not see any parameters for specifying the number of mismatches allowed (-v parameter in bowtie) or the number of alignments (-k parameter in bowtie) to output for a multi-mapping read. According to the help menu, a single alignment is randomly selected in case of a multi-mapping read.
Could you please let me know whether it is possible to specify -v and -k in qAlign? Would you suggest I use Rbowtie::bowtie instead?
Thanks!
Best regards, Julie
@hotz-hans-rudolf-3951
Switzerland
Hi Julie
There is the 'alignmentParameter' argument. You can use it to provide command line parameters to be used for the aligner. The defaults will then be overruled. But (quoting the QuasR documentation): "Please use with caution; some alignment parameters may break assumptions made by QuasR. "
Regards, Hans-Rudolf
The default parameters that qAlign uses are: -m X --best --strata
where X is the value passed to qAlign(..., maxHits = X)
You could add -v, for example (please note that -v mode makes bowtie ignore the base qualities of your reads): qAlign(..., alignmentParameter = "-m 10 --best --strata -v 2")
Adding -k Y with a value of Y greater than one has to be used with caution (it falls under the "breaks assumptions made by QuasR" category mentioned by Hans-Rudolf). For example, qCount may count your read more than once (redundant use of information), and alignmentStats may report more alignments than the total number of reads sequenced.
Best regards, Michael
Hi Michael,
Thank you very much for the detailed information! It’s very helpful!
Best regards,
Julie
# I Generic Feynman parameterisation
1. Jun 8, 2017
### CAF123
I am in the process of reducing tensor integrals down to a sum of scalar ones with the tensor structure factored out in some basis of decomposition. I am able to write some scalar products in terms of appearing propagators but I encountered one where I have something like $$\int_k \frac{k^2-m^2}{A_1(k) A_2(k) A_3(k) A_4(k)}$$ where $A_i$ are all different propagators depending on k and external momenta.
Now, it turns out I can write $k^2 - m^2 = A_3 - f(p_i) - m^2 - 2k \cdot g(p_i)$. The first three terms pose no problem, but the k dependence in the last term does. For this term only, I was thinking of using Feynman parameters to write the four-term denominator as a single propagator, $$\frac{1}{A_1 A_2 A_3 A_4} \sim \frac{1}{(k^2 - \Delta)^4}$$ and then argue that under the integral $$\int d^dk \frac{k \cdot g(p_i)}{(k^2 - \Delta)^4} = 0$$ from symmetric integration.
So, my question is, without doing the lengthy calculation of feynman parameters is it true that I can write $$\frac{1}{A_1 A_2 A_3 A_4} \sim \frac{1}{(k^2 - \Delta)^4}$$ where the $\sim$ accounts for the three integrations over feynman parameters but otherwise crucially the k dependence is simply as shown?
Thanks!
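For reference, the standard four-denominator Feynman-parameter identity (textbook material, written here assuming propagators of the form $A_i = (k+p_i)^2 - m_i^2$, which is an assumption about the poster's $A_i$) reads:

```latex
\frac{1}{A_1 A_2 A_3 A_4}
  = 3! \int_0^1 dx_1\,dx_2\,dx_3\,dx_4\,
    \frac{\delta\left(1 - \sum_i x_i\right)}{\left(\sum_i x_i A_i\right)^{4}},
\qquad
\sum_i x_i A_i = \ell^2 - \Delta,

\ell = k + \sum_i x_i p_i,
\qquad
\Delta = \Big(\sum_i x_i p_i\Big)^{2} - \sum_i x_i \left(p_i^2 - m_i^2\right),
\qquad
\int d^d\ell \, \frac{\ell^{\mu}}{\left(\ell^2 - \Delta\right)^{4}} = 0 .
```

So the $\sim$ in the question is correct up to the shift $k \to \ell$. Note, however, that after the shift $k \cdot g(p_i) = \ell \cdot g(p_i) - \left(\sum_i x_i p_i\right) \cdot g(p_i)$: only the $\ell \cdot g(p_i)$ piece vanishes by symmetric integration, while the second piece survives as a Feynman-parameter-dependent scalar term.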
01.12.2019 | Theoretical article | Issue 1/2019 | Open Access
# The quality management ecosystem for predictive maintenance in the Industry 4.0 era
Journal:
International Journal of Quality Innovation > Issue 1/2019
Authors:
Sang M. Lee, DonHee Lee, Youn Sung Kim
Abbreviations
KSDS: Knock Sensor Detection System
PdMS: Predictive Maintenance & Service
Predix: GE Digital's platform (PaaS)
SHM: Structural health monitoring
## Introduction
In today’s competitive global environment, businesses need to be agile, flexible, resilient, and possess dynamic capabilities [1, 2]. The advent of advanced digital technologies makes it possible for firms to completely innovate the concept of quality management. A living ecosystem equipped with advanced digital technologies (e.g., smart sensors, machine learning, big data analytics, and artificial intelligence (AI)) can be developed to manage quality [2].
On August 14, 2018, a 200-m section of the Ponte Morandi Bridge (built in 1968) in Genoa, Italy, collapsed, causing 41 deaths, 5 missing, and 15 injured. The main causes of the collapse were aging and inadequate bridge maintenance. Incidents such as this highlight the importance of bridge maintenance. Structural health monitoring (SHM), a technique developed for structural maintenance, is an up-to-date technology-based system that analyzes weaknesses of existing structures, such as locating local and global damage and assessing its significance. Real-time monitoring of bridge conditions improves the speed and precision of decision-making for bridge repair and maintenance. Bansal et al. [3] proposed a real-time predictive maintenance system using neural network methods, while Shi and Zeng [4] suggested a condition-based maintenance strategy that considers economic factors for predictive maintenance in real time. Predictive maintenance, also known as condition-based maintenance, is possible today due to advanced digital technologies [3–6].
With the development of smart devices, highly intelligent maintenance systems have become a focus [7]. As information and communication technology (ICT) converges and evolves with industrial fields, major changes are taking place in the field of operation management. Preventive maintenance and quality management methods that were all controlled by people in the past are being transformed to predictive maintenance due to the development of various IT technologies, such as big data and AI [7, 8]. In particular, leading technology companies have recently developed and introduced predictive maintenance systems for quality control [8].
In the past, recovery services were performed after a machine stoppage occurred in a workplace. With the advanced information and communication tools such as smart devices, it is now possible for skilled workers to perform regular maintenance services, such as replacing parts or equipment, at the optimal time [7, 8]. In addition, while there has been an emphasis on optimization of production processes, Industry 4.0 is pursuing optimization for each individual product. Since optimization requires zero defects, quality control is necessary to accomplish this goal. Thus, quality control techniques need to be changed. For example, big data collected from multiple IoT (Internet of Things) sensors embedded in components can support smart production, supply, and delivery for predictive maintenance in real time [7, 9].
Predictive maintenance management requires sharing information on production and inventory levels among networked partner firms, as well as the changing consumer demands [7, 10]. This system of collaboration is expected to aid in satisfying customer expectations through accurate demand prediction, improved service levels, and reliability. Expansion of smart devices with self-diagnosing and predictive failure capabilities will help reduce failures and operating costs, optimize inventories, improve the access to maintenance, reduce the need to maintain spare inventory for safety purposes, and enhance the replacement timing. Industry 4.0 needs to respond aggressively with a number of solutions that encompass safety, quality, value, and cost to meet end-user needs for proactive predictive maintenance strategies [10].
Therefore, to minimize possible losses and ensure flexibility by avoiding sudden downtimes, predictive maintenance is an essential strategic operating method for businesses that are building smart plants for the future [7]. In addition, efforts to diagnose failure of facilities, equipment, and/or systems at an early stage are benefiting from technological advancements in both manufacturing and service industries. There are several real-world examples of software development that support such possibilities [2, 8].
In this study, we analyze actual cases that currently exist in industries to illustrate how service and operational efficiencies can be improved through predictive maintenance. For this purpose, we performed an extensive review of the literature and diverse cases to derive a quality management ecosystem for the manufacturing and service industries for efficient quality management through digital devices in the Industry 4.0 era.
The rest of the paper is structured as follows. The “Review of relevant literature” section reviews relevant literature, and the “Case description of predictive maintenance” section presents real-world cases with advanced technologies and devices. The “Conclusions” section concludes the study by discussing the study results and presenting study limitations and future research avenues.
## Review of relevant literature
AI technology has been implemented throughout many industries. In particular, machine learning technology has spread to both the manufacturing and service industries in an attempt to improve productivity, quality, and the efficiency of facility maintenance. The use of technology to improve quality is important because it facilitates changes in culture, leadership, collaboration, and compliance [11]. However, "Quality 4.0," as proposed by LNS Research, is not a technology but a process for maximizing the value that technology delivers to its users [1]. Thus, quality management in the Industry 4.0 era should be approached from a predictive perspective based on digital technology rather than from a preventive perspective.
Jacob [1] (p. 5) stated, "Industry 4.0 initiatives are not being led by quality, but by IT, operations, engineering, or sales and marketing." This reflects a lack of clear understanding of the technologies and applications required in the Industry 4.0 era, which may make it difficult to respond flexibly and nimbly to the changes that future quality management demands [1].
For example, when quality control systems on a production line detect a problem, does it mean there is an issue with only the specific items tested, or with all items? It would be difficult for technicians to make this determination manually. Errors can occur at any stage of the production process and can be caused by a number of factors, from loose soldering to adverse environmental conditions. With automated pattern analysis, quality control technicians can quickly decide whether the analysis indicates an isolated defective item or a systemic failure that may lead to major problems in the future. For example, at an IBM semiconductor packaging plant in Canada, 97% of fault patterns can be identified automatically, eliminating hundreds of thousands of dollars per year in scrap costs. Furthermore, what-if analysis showed that controlling humidity at a critical point in the plant's manufacturing line would improve product quality and deliver a 160% return on investment [12].
In quality management, the concept of “predictive” maintenance is different from “preventive” maintenance. While preventive maintenance focuses on identifying and preventing problems that may occur in the future, predictive maintenance focuses on cost reduction and failure prevention by identifying exactly when parts of a product are likely to cause problems, enabling replacement or repair at exactly the right time [7, 9, 13, 14].
Predictive maintenance is being realized in the form of smart factories based on IoT, CPS (Cyber-Physical Systems), sensor technology, and AI technologies. While factory automation in the past was optimized only for each unit process, with little flexibility, a smart factory can achieve flexible optimization because objects in the factory are connected through the IoT. Big data from factory processes is automatically collected and analyzed, triggering active decision-making in real time. As an example, BOSCH, the world's number one automotive parts company, is actively implementing predictive maintenance and quality control through a software tool it developed, the "Nexeed Production Performance Manager." With simple controls, the person in charge of maintenance can have a positive impact on both product quality improvement and maintenance through continuous monitoring of process data, performing maintenance work in advance with as little downtime as possible [15].
In Industry 4.0, the use of condition-based maintenance technology for intelligent maintenance involves three stages: real-time condition monitoring ➔ big data processing ➔ maintenance timing and scope determination [7]. Real-time condition monitoring is the process of monitoring parameters or conditions for all equipment in the plant to detect errors or identify changes [10]. Therefore, predictive maintenance monitors the condition of all production facilities, equipment, and products in real time through the IoT; predicts remaining useful life (RUL) through signal processing, machine learning (deep learning), and data analysis; and determines the optimal maintenance cycle and scope [7, 9, 13]. However, although predictive maintenance can be used to predict potential errors earlier than preventive maintenance, it requires a high level of investment in capital and expertise [10].
Smart manufacturing systems in Industry 4.0 can predict the RUL of mechanical equipment and systems, which can prevent accidents or failures in areas difficult for operators to access and allow faster response, thus preventing downtime caused by failures. They can also reduce maintenance costs through timely component replacement, cutting the opportunity cost of losses due to downtime [7, 9, 10, 13]. Consequently, quality management will be reshaped in the form of predictive maintenance management by expanding the scope of applied technologies to various areas, such as production, maintenance, and post-sales management.
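The three-stage flow described above (monitor ➔ process ➔ decide maintenance timing) can be sketched as a toy threshold monitor in Python. This is illustrative only, not an industrial RUL model; the window size, threshold, sensor readings, and the helper name `maintenance_due` are made-up assumptions.

```python
from statistics import mean

def maintenance_due(readings, window=5, threshold=80.0):
    """Toy condition-based maintenance check:
    stage 1 - take recent sensor readings (real-time monitoring),
    stage 2 - smooth them with a rolling mean (data processing),
    stage 3 - flag maintenance when the smoothed value crosses a
              threshold (timing/scope decision)."""
    if len(readings) < window:
        return False  # not enough data yet to decide
    smoothed = mean(readings[-window:])
    return smoothed > threshold

# Made-up vibration readings trending upward as a component wears.
vibration = [70, 72, 71, 75, 78, 81, 85, 88]
print(maintenance_due(vibration))  # → True
```

A real system would replace the fixed threshold with a learned RUL model, but the control flow — continuous monitoring feeding an automated maintenance decision — is the same.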
Many recent examples of intelligent maintenance management using AI, smart sensors, and smart robots can be found in companies today. In the following section, we examine predictive quality management approaches through case analyses of Quality 4.0 in manufacturing and service companies; these cases help develop a new digital business model through the smartization of materials, parts, and equipment in factories.
## Case description of predictive maintenance
### Rolls-Royce
Rolls-Royce is one of the world’s top three aircraft engine manufacturers, producing more than 500 airline and 150 military aircraft engines in 2018. As Rolls-Royce’s production environment has gradually been networked and its IoT environment has evolved, the company recently began using the huge amount of big data it generates to maintain aircraft engines [16]. These changes have driven the development of information and communication technologies for data analysis, with operational strategies that minimize losses by preventing mistakes during design and failures during manufacturing [16]. Rolls-Royce applies big data processes in three major areas — design, manufacturing, and after-sales management — under an operational plan that can detect and monitor product status before problems occur.
Rolls-Royce uses nanobots to perform predictive maintenance and inspections, which improve engine servicing and extend the use of robots to places that are dangerous or inaccessible to humans [17]. This approach provides an opportunity to improve engine maintenance methods by increasing the speed of the inspection process and, in some cases, eliminating the need to remove an engine from an aircraft. Through predictive maintenance, Rolls-Royce proactively prevents delays by managing when maintenance is required on aircraft engines [18].
The company collects and analyzes data not only in engine design and manufacturing but also in post-sales management. With hundreds of sensors, each small part of a device records and reports to a professional engineer in real time, allowing the engineer to determine appropriate actions through data analysis. Currently, Rolls-Royce collects 65,000 h of gas-turbine engine running data daily, with about 100 vibration, pressure, temperature, speed, and flow sensors attached to 14,000 engines owned by 500 airlines [17, 19]. Rolls-Royce provides real-time management through data collection after the sale of an engine as a “Total Care” service; this is known to minimize delays and cancellations caused by gas turbine defects, which cost approximately $45 million per day [17, 19]. The company accounted for 54% of the aircraft engine market share in June 2013, with more than 50% of its revenue generated through Total Care services [19]. Rolls-Royce has also developed a digital platform (a collaboration with India’s Tata Consultancy Services and Microsoft Azure) to connect external information, such as air traffic control, weather, and fuel consumption, with sensor data collected from its own engines for an at-a-glance view [20]. These platforms provide predictive maintenance information in advance of any device problems, delivering new value-added information to airline maintenance teams and customers and enabling a new form of quality management using predictive maintenance [20]. One reason Rolls-Royce can maintain quality control through predictive maintenance is its ability to utilize big data analysis, smart sensors, AI, and platform construction. In the future, Rolls-Royce expects a business environment in which computers can make their own decisions in certain situations through machine learning (deep learning).
### Hyundai Motors
Hyundai Motor Co., the world’s eighth largest carmaker in 2017, announced on October 18, 2018, that it had developed an AI Car Diagnosis System, which uses AI to diagnose vehicle faults based on noise, and a Knock Sensor Detection System (KSDS), which analyzes vibrations to determine whether an engine is abnormal [21, 22]. The AI Car Diagnosis System can execute a complicated process by itself through deep learning, as demonstrated in recent experiments: in one experiment, the accuracy of 10 noise-analysis experts was 8.6%, while the accuracy of the AI was 87.6% [23]. The company said that the KSDS will enhance consumer safety and increase problem prevention; even before it is commercialized, Hyundai is voluntarily applying it to domestic and overseas sales models starting from the third quarter of 2018 [21]. Future possibilities include installing the AI in vehicles to diagnose faults or placing it at the end of the vehicle production line to identify abnormalities in new vehicles [23]. This technology will be applied to Hyundai Motor’s repair centers in Korea by 2019 [22]. The AI Car Diagnosis System draws on AI data related to about 800 known engine faults; the software can locate a fault area and its cause using noise alone, and its credibility has been proven in tests. Hyundai Motors focuses on an operation plan that improves quality through predictive maintenance, diagnosing and preventing problems before they happen so consumers can drive more safely and comfortably [21]. Through AI technology, accuracy may be further improved by combining sound, vibration, temperature, and other factors, yielding a system that could be applied to all mechanical equipment, not just cars. Since AI involves learning, maintaining the quality of diagnoses requires collecting and analyzing data on the variety of noises automobiles generate; big data analysis, AI, and platform construction are therefore necessary.
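Noise-based fault diagnosis of the kind described above is typically framed as classification over spectral features. The sketch below is a simplified, assumed illustration — the dominant-frequency feature, the nearest-signature classifier, and the synthetic signals are inventions for this example, not Hyundai’s actual system, which uses deep learning.

```python
import math

# Toy noise-based fault diagnosis: represent each signal by its dominant
# frequency bin (a crude spectral feature) and classify by nearest known
# fault "signature". Real systems learn rich audio features end to end.

def dominant_bin(signal):
    """Index of the strongest DFT bin (naive DFT, positive frequencies only)."""
    n = len(signal)
    best_k, best_mag = 0, -1.0
    for k in range(1, n // 2):
        re = sum(signal[t] * math.cos(2 * math.pi * k * t / n) for t in range(n))
        im = -sum(signal[t] * math.sin(2 * math.pi * k * t / n) for t in range(n))
        mag = re * re + im * im
        if mag > best_mag:
            best_k, best_mag = k, mag
    return best_k

def classify(signal, signatures):
    """Match against a {fault_name: dominant_bin} signature catalog."""
    k = dominant_bin(signal)
    return min(signatures, key=lambda name: abs(signatures[name] - k))

# Hypothetical fault signatures and a test signal concentrated at bin 8.
signatures = {"normal": 2, "bearing wear": 8, "knocking": 20}
test = [math.sin(2 * math.pi * 8 * t / 64) for t in range(64)]
print(classify(test, signatures))  # → bearing wear
```

The design choice worth noting is the signature catalog: diagnosis quality depends entirely on how well the collected fault data covers the noises real engines produce, which is why the article stresses continuous data collection.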
### BOSCH
BOSCH is composed of business units that include industrial process technology, energy and building equipment, consumer goods and home appliances, and automotive technology. BOSCH is one of the leading companies actively implementing predictive maintenance and quality control. The firm has long been known as a manufacturer of traditional automotive parts and home appliances; in recent years, it has also started IoT-based solution supply and consulting businesses. BOSCH supports mistake-free assembly work under any circumstances through its operator support system, which combines wireless communication with technologies such as smart glasses and sensors. If work is done properly, a module receives a green light, and the system gives the worker the next work instruction through that module, easily managing the work process. The system also implements predictive maintenance by establishing a smart response system that operates according to conditions [15]. For predictive maintenance and quality control, BOSCH utilizes its Nexeed Production Performance Manager, a software tool that collects data from various sources in the production environment and then standardizes and combines the data to visualize and analyze it. This tool shows the current state of individual machines as well as the overall production system, allowing a maintenance worker to perform maintenance tasks with little downtime (BOSCH, 2018). For example, the Nexeed Production Performance Manager monitors the parameters defined in the production process and immediately notifies the appropriate technicians when conditions exceed warning limits or threaten the process. Using the data processing module, the worker in charge can perform the required repair individually by selecting the default rules from the catalog. This predictive maintenance management can improve production efficiency and reduce the cost of defects and errors that can occur in the equipment [15].
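The monitor-and-notify behavior described for such a tool can be illustrated with a minimal rule-catalog sketch. Everything here is assumed for illustration — the parameter names, warning limits, and responsible teams are invented, not BOSCH’s actual rule catalog.

```python
# Minimal rule-catalog monitor: compare incoming process readings against
# predefined warning limits and emit a notification for each violation.
# Illustrative only; all rule names and limits are hypothetical.

RULES = {
    "spindle_temp_C": {"limit": 80.0, "notify": "thermal-team"},
    "vibration_mm_s": {"limit": 4.5,  "notify": "mechanical-team"},
    "pressure_bar":   {"limit": 6.0,  "notify": "hydraulics-team"},
}

def check_readings(readings, rules=RULES):
    """Return one alert dict per reading that exceeds its warning limit."""
    alerts = []
    for name, value in readings.items():
        rule = rules.get(name)
        if rule and value > rule["limit"]:
            alerts.append({"parameter": name, "value": value,
                           "limit": rule["limit"], "notify": rule["notify"]})
    return alerts

readings = {"spindle_temp_C": 85.2, "vibration_mm_s": 3.1, "pressure_bar": 6.4}
for alert in check_readings(readings):
    print(f"ALERT {alert['parameter']}={alert['value']} "
          f"(limit {alert['limit']}) -> {alert['notify']}")
```

Keeping the rules as data rather than code mirrors the “select default rules from the catalog” workflow: maintenance staff can adjust limits without touching the monitoring logic.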
The Nexeed Production Performance Manager can also improve product quality by continuously monitoring and logging process data as well as maintenance data. BOSCH is able to maintain quality management through predictive maintenance based on its use of big data analysis, smart sensors, AI, and platform construction.
### John Deere
John Deere is implementing SAP’s Predictive Maintenance & Service (PdMS) technology to predict facility defects before a fault occurs, thus reducing maintenance costs and moving toward efficiency optimization [14, 24]. The company provides agricultural consulting services, collecting and analyzing data about machinery, weather, soil, and more using sensors and drones attached to farm equipment. Operational information is provided through GPS, helping optimize agricultural yields by giving the user optimal farming information, such as soil acidity and organic-matter content at the current location. In other words, it provides a basis for maximizing the productivity and quality of agricultural products through preliminary reviews [25]. John Deere provides a total nutrient-application optimization service throughout the entire agricultural harvest cycle: prior (field preparation) ➔ process (seeding and harvesting) ➔ post (soil improvement) [25]. These services provide customers with benefits such as minimized downtime, reduced warranty costs, location-based real-time monitoring, and rapid parts-supply planning, while enhancing the brand’s value. Beyond the level of simple product sales, the product’s status is continuously monitored in real time through sensors attached after sale, giving customers information about the product’s operation and maintenance and the opportunity to maintain the product in advance. In other words, reducing maintenance costs and improving product quality can lead to growth in the company’s revenue.
### Clova
Clova was developed by Naver, a Korean IT firm, as a smart speaker that can control various devices, including IoT devices, through a single controller [26]. Based on smart sensors, remote and voice-based control are possible through various AI algorithms and recommender systems; natural language processing, image analysis, computer vision, and conversation functions can all be performed through AI algorithms [26]. AI is the most important factor in Clova and represents a paradigm shift toward dramatic real-time service. Although additional technology developments continue to add features, the most important requirement for providing these services is accurate information. Since Clova apps often use weather as well as location to provide information, the company needs to monitor this real-time big data to provide customers with more convenient services. Clova is able to maintain quality control through predictive maintenance based on its use of big data analysis, smart sensors, AI, and platform construction. Table 1 shows a summary of the case examples. A review of the case analyses suggests that predictive quality management approaches require advanced technologies to provide real-time service through network-in-time (NIT) on cyber-physical systems. NIT automatically provides accurate information for quality management in real time. Whereas the just-in-time (JIT) system attempts to increase efficiency and decrease waste by receiving components/parts just in time for production, NIT collects all the relevant quality data and information in real time on cyber-physical systems through intelligent sensors and devices.
Table 1 Summary of case examples

| | Rolls-Royce | Hyundai Motors | BOSCH | John Deere | Clova |
|---|---|---|---|---|---|
| Industry | Manufacturing | Manufacturing | Manufacturing | Manufacturing | Service |
| Purpose | Improve engine maintenance methods | Provide consumer safety through increased problem prevention | Support assembly work without mistakes | Predict facility defects | Provide dramatic real-time service |
| Effects | Real-time management | Diagnose and prevent problems before they occur for greater safety and comfort | Production efficiency; reduced cost of defects and errors | Application for total optimization of services | Accurate information services |

Predictive maintenance on QM (all cases): big data analysis, smart sensors, AI, platform construction, ICT.

### Discussion and evaluation of case examples
How should a living quality management ecosystem be structured in the Industry 4.0 era? The number of companies participating in the data analysis business is surging. For example, GE Digital provides a platform (PaaS) called “Predix” as a service to airlines, healthcare, energy, manufacturing, and transport companies, so they can better utilize their data. Predix allows operators to acquire work expertise, reduces the time needed to adapt to a site, and enables operators to proactively detect fatal errors that occur on site and respond to them in real time. A second example is the partnership between Schneider Electric and Microsoft, which helped improve drinking-water quality control in Seminole County, FL, USA, through their collaboration on a state-of-the-art analysis solution, the Ecosystem. GE, Siemens, Cisco, SAP, and BOSCH have already taken the lead in the platform space.
The examples presented above are summarized in Table 2, highlighting that the basic requirements for achieving a living quality management ecosystem include a predictive maintenance management strategy built on big databases and data analysis, real-time simulations, and platform deployment. The convergence and integration of innovative technologies have led to the development of the IoT, digital technology, big data, cloud computing, 3D printing, smart sensors, ICT, and robots, which are gradually establishing a world that can be realized in the future.

Table 2 Process of quality management through predictive maintenance

| Technical requirements | Operational process | Achievement objectives |
|---|---|---|
| Big data analytics, AI, platform construction, deep learning, smart sensors, ICT, robots | Real-time data analysis, expert analysts, deep learning (machine learning), data deployment, AI | Increasing productivity, minimizing maintenance costs, improving product quality, increasing reliability, improving revenue |

To continuously maintain and improve the quality of products, real-time data must be collected and analyzed, and diagnosis must be performed through AI [36]. Therefore, the use of big data analytics, AI, and platform construction is necessary. As shown in Table 2, the objectives of quality management through predictive maintenance can be achieved by establishing the basic requirements and operational processes. In many companies, a maintenance team analyzes data and identifies the problem, then implements AI using the major sensors and applies deep-learning diagnostics to identify repair or replacement needs [7, 9, 10, 13]. However, predictive maintenance for quality management, in which advanced technologies are an integral part, may raise the issue of industrial security risk.
In addition to the risk of external malicious-code infiltration (a cybersecurity problem), the industry’s internal security management could become a major issue, because vulnerabilities in the system itself can create problems such as unauthorized remote access, internal network access through unauthorized devices, employee errors, intentional program leaks, and unauthorized asset leaks.
## Conclusions
At the Davos Forum in Switzerland on January 23, 2016, futurist Klaus Schwab [27], Founder and Executive Chairman of the World Economic Forum, emphasized the need to revolutionize the economy and society, arguing that the Fourth Industrial Revolution is not about changing what we do but about changing our humanity. This can be read as emphasizing that, in all industries, innovation can break away from existing frameworks, and that society may be lost without responding properly to Industry 4.0. We can learn this lesson from the fall of once-leading global companies such as Nokia, Kodak, and Toys “R” Us [2]. The Fourth Industrial Revolution is an extension of the digital paradigm, similar to the Third Industrial Revolution, but with a wider range of economic and social disruptions through digital transformation, such as product and service innovation, jobs, and welfare, all of which are unpredictable and complex. Reports by the IBM Institute for Business Value [28], Capgemini [29], and Agile Elephant [30] define digital transformation as an enterprise incorporating digital and physical elements to transform business models and set new directions for the industry. This broad concept describes the transformation of processes, the digitization of assets, and changes in the way organizations think and work; the creation of new types of leadership and business models; and the use of technologies to enhance the experiences of customers and employees.
The results of this study provide several theoretical and practical implications. First, to enable predictive maintenance in the Industry 4.0 era, advanced digital technologies need to be applied to enhance productivity and value creation. Second, although big data analytics can be applied in real time, policy support should be provided for the experts who control and make decisions based on the analysis. Third, for predictive maintenance for quality management, implementation methods should be proposed through the development of conditions for field ecosystems, methods for cause-effect measurement and analysis, and expected outcomes. Improvements in systems and training/education should be developed for the use of AI-supported ecosystems with embedded digital technologies and statistical tools (e.g., cause-effect analysis, regression analysis, cause-effect diagrams, t tests/ANOVA tests, and performance/cost measurements). Fourth, in addition to utilizing digital tools, the introduction of blockchain technology can become a major factor in predictive maintenance for quality management in the future. Based on these suggestions, the expected value of predictive maintenance and quality management can be enormous: cost reductions, improved work efficiency, agile responses, asset retention, and information sharing. Although Quality 4.0 includes quality digitization, it should also include the quality technologies, processes, and people that digitization affects. Quality management in the past was performed through data-driven decision-making, but today evidence-based decision-making has become more important, and the role of analysts has been emphasized because big data is collected in real time [1, 8, 14]. Today, posting pictures on Facebook, reading articles on smartphones, and paying with credit cards using smart mobile devices have become routine activities.
While these may seem to be just part of everyday life, every single action is being recorded as data, and many companies are looking for business models that use such data; this is the significant value of big data. AI has also become the umbrella factor in processing accumulated big data for knowledge creation. Therefore, quality management in Industry 4.0 should change not only through the use of evidence-based data but also into a predictive maintenance management ecosystem based on people and processes [7, 9, 10, 13, 14]. The position presented in this study is open to debate, because it suggests a direction for future quality management based on cases of digital technology-based predictive maintenance. As AI-based technologies have recently entered our lives and have been applied in a number of areas where they can proactively respond to problems, this research contributes academically by presenting a basic direction for quality management through predictive maintenance in the Industry 4.0 era. In addition, the case analysis in this study has practical value, as the cases can serve as benchmarks. More practical suggestions regarding predictive maintenance for quality management are as follows. Predictive maintenance for quality management should be implemented through an organizational culture that rethinks the role of training/education and leaders, develops ways for all employees to participate in continuous improvement, and pursues and applies real-time big data analytics. In the future, predictive maintenance for quality management will play a key role in enhancing competitiveness by creating new value for all stakeholders. In 2016, Manufacturing Business Technology forecast that predictive maintenance will save $630 billion in costs over the next 15 years [31].
Because the present study developed its theoretical direction for predictive maintenance for quality management in Industry 4.0 through literature review and case analysis rather than empirical data, a limitation is that its theoretical proposal has yet to be verified. Future research should consider conducting empirical studies. Additionally, the validity of application plans for predictive maintenance for quality management through big data analysis, AI, platform building, deep learning, smart sensors, ICT, and robots should be examined more systematically.
## Acknowledgements
Not applicable.
### Funding
There was no funding support for this study.
### Availability of data and materials
This is a conceptual paper, and no data was collected for analysis.
### Competing interests
The authors declare that they have no competing interests.
### Publisher’s Note
Springer Nature remains neutral with regard to jurisdictional claims in published maps and institutional affiliations.
# antiderivative of cos
While the answers below may look different, they are all equivalent antiderivatives: each differs from the others only by a constant amount, because the indefinite integral is the inverse process of the derivative. All we need to know is what function has the given derivative.

A related special function is the cosine integral. Its common definitions are

$$\operatorname{Cin}(x) = \int_0^x \frac{1 - \cos t}{t}\, dt, \qquad \operatorname{Ci}(x) = -\int_x^\infty \frac{\cos t}{t}\, dt = \gamma + \ln x - \int_0^x \frac{1 - \cos t}{t}\, dt \quad (x > 0),$$

where γ ≈ 0.57721566… is the Euler–Mascheroni constant. Some texts use ci instead of Ci. Ci(x) is the antiderivative of cos x / x that vanishes as x → ∞. The two definitions are related by Cin(x) = γ + ln x − Ci(x).
Since the derivative of a constant is 0, indefinite integrals are defined only up to an arbitrary constant. Continuing the example above, the functions of the form F(x) = sin x + C, where C is any constant, form the set of all antiderivatives of f(x) = cos x. In general, if F is an antiderivative of f, then every antiderivative of f has the form F(x) + C.

The integral of cos(x²) is a Fresnel integral, with no elementary closed form. Its value over the whole real line can be obtained by contour integration: it is pretty easy once you have the right contour. Taking the limit as $R \rightarrow \infty$, the contribution of the circular arc C₂ vanishes because it satisfies the inequality

$$\left| iR \int_0^{\pi/4} d\theta\, e^{i\theta}\, e^{iR^2 e^{2i\theta}} \right| \le R \int_0^{\pi/4} d\theta\, e^{-R^2 \sin 2\theta} \to 0,$$

as required for the proof of the integral of cos(x²) or sin(x²).
Standard trigonometric antiderivatives:

$$\int \cos x\, dx = \sin x + C \qquad \int \sin x\, dx = -\cos x + C$$
$$\int \sec^2 x\, dx = \tan x + C \qquad \int \csc^2 x\, dx = -\cot x + C$$
$$\int \sec x \tan x\, dx = \sec x + C \qquad \int \csc x \cot x\, dx = -\csc x + C$$

The integral of cos(2x) can be determined by the substitution u = 2x (substitution is derived from the chain rule for differentiation), giving $$\int \cos(2x)\, dx = \tfrac{1}{2}\sin(2x) + C.$$

**Integral of the square of the cosine.** The integral $$\int \cos^{2}(x)\, dx$$ cannot be evaluated by a direct formula, so we use the half-angle identity $$\cos^2 x = \frac{1 + \cos 2x}{2}$$:

$$\int \cos^2 x\, dx = \int \frac{1 + \cos 2x}{2}\, dx = \frac{x}{2} + \frac{\sin 2x}{4} + C.$$

Graphically, the antiderivatives for different values of C are all vertical translations of one another: each function differs from another by a constant amount.
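The half-angle result can be checked numerically. The short Python sketch below (illustrative; the step size and test points are arbitrary) verifies that the derivative of F(x) = x/2 + sin(2x)/4, the antiderivative obtained from the half-angle identity, matches cos²(x) via a central difference.

```python
import math

# Check that d/dx [x/2 + sin(2x)/4] == cos(x)**2 by central differences.

def F(x):
    return x / 2 + math.sin(2 * x) / 4

def dF(x, h=1e-6):
    return (F(x + h) - F(x - h)) / (2 * h)

for x in [0.0, 0.7, 1.5, 3.0]:
    assert abs(dF(x) - math.cos(x) ** 2) < 1e-8
print("antiderivative of cos^2 verified")
```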
**A worked substitution.** To evaluate $$\int \sin x \cos^2 x\, dx$$, let u = cos(x), so du = −sin(x) dx:

$$\int \sin x \cos^2 x\, dx = \int -u^2\, du = -\frac{u^3}{3} + C = -\frac{1}{3}\cos^3 x + C.$$

Not every integrand cooperates. Applying integration by parts (and a substitution of $\cos x$) to $$\int x \sin x\, e^{\cos x}\, dx$$ gives

$$\int x \sin x\, e^{\cos x}\, dx = -x e^{\cos x} + \int e^{\cos x}\, dx,$$

which unfortunately leads only to the circular, and not very helpful, result $$\int e^{\cos x}\, dx = \int e^{\cos x}\, dx$$: the remaining integral has no elementary closed form.

Likewise, $$\int \cos(x^2)\, dx$$ has no closed-form solution in elementary functions. However, a series solution can be obtained by integrating the Taylor series of cos(x²) term by term:

$$\int \cos(x^2)\, dx = \sum_{n=0}^{\infty} \frac{(-1)^n\, x^{4n+1}}{(4n+1)(2n)!} + C = x - \frac{x^5}{10} + \frac{x^9}{216} - \cdots$$

(the more terms you add, the closer you get to the exact value).
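The truncated series $x - x^5/10 + x^9/216 - \cdots$ for the antiderivative of cos(x²) can be evaluated directly. The Python sketch below (illustrative; the number of terms and quadrature step count are arbitrary choices) sums the series and cross-checks it against brute-force numerical integration of cos(t²).

```python
import math

# Antiderivative of cos(x^2) via its term-by-term integrated Taylor series:
#   sum_{n>=0} (-1)^n * x^(4n+1) / ((4n+1) * (2n)!)

def fresnel_series(x, terms=20):
    total = 0.0
    for n in range(terms):
        total += (-1) ** n * x ** (4 * n + 1) / ((4 * n + 1) * math.factorial(2 * n))
    return total

def fresnel_numeric(x, steps=100_000):
    """Midpoint-rule approximation of the integral of cos(t^2) from 0 to x."""
    h = x / steps
    return h * sum(math.cos(((i + 0.5) * h) ** 2) for i in range(steps))

x = 0.5
print(fresnel_series(x), fresnel_numeric(x))  # the two values should agree closely
```

The series converges rapidly for moderate x because of the factorial in the denominator; for large x, dedicated Fresnel-integral routines are the better tool.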
The integration of cosine inverse is of the form $I = \int {{{\cos }^{ – 1}}xdx}$ When using integration by parts it must have at least two functions, however this has only one function: $${\cos ^{ – 1}}x$$. The limit of cos(x) is limit_calculator(cos(x)) Inverse function cosine : The inverse function of cosine is the arccosine function noted arccos. Click HERE to see a detailed solution to problem 23. as required for the above proof of the integral of cos(x^2) or sin(x^2). PROBLEM 23 : Integrate . However, a series solution can be obtained as follows: Common Functions Function Integral; Constant ∫ a dx: ax + C: Variable ∫ x dx: x 2 /2 + C: Square ∫ x 2 dx: x 3 /3 + C: Reciprocal ∫ (1/x) dx: ln|x| + C: Exponential ∫ e x dx: e x + C ∫ a x dx: a x /ln(a) + C ∫ ln(x) dx: x ln(x) − x + C: … That cos4x can be obtained as follows: 1 their math problems instantly 's..... Is defined to be the antiderivative of cos ( 2x ) + C, C... It is because the indefinite integral is the inverse process of the real numbers to the real numbers the. Not that major a task in terms of sin 's and cos 's ; use Subtitution complete of. Wiki for functions involving angles … Thanks for the Mathematical Sciences this wiki functions! As [ itex ] R \rightarrow \infty [ /itex ] of sin 's cos. ( step by step integration ) Regina and the Pacific Institute for the.. Solution can be obtained as follows: 1 integration technique known as substitution article! By a constant is supported by the University of Regina and the Institute!, the graphs are all equivalent anti-derivatives as each differs by a constant amount the... And even special functions are supported 's numbers.. 9, teachers, parents, and everyone find... Indefinite integrals are defined only up to an arbitrary constant of the real numbers to the numbers. Vertical translations of one another–each function differs from another by a constant amount: Make terms. Problem 20 by using the addition formula for cos ( 2x ) be! 
These, we simply use the Fundamental of calculus, because we know that cos4x can be obtained follows... Showing you the full working ( step by step integration ), and... As substitution chain rule for differentiation cookies to ensure you get the,... 2X ) and a trigonometric identity mute said: it 's pretty once. Throughout this article is about a particular function from a subset of the function cos ( 2x ) and trigonometric. Numbers.. 9 even special functions are supported it helps you practice by you... You have the right contour a trigonometric identity integral to get the solution, and! To calculus exercises functions on this wiki for functions involving angles … Thanks for the Mathematical Sciences 22! Geometry and beyond to algebra, geometry and beyond in terms of sin 's and cos 's use... Math homework help from basic math to algebra, geometry and beyond a particular function from a of... Be nearly impossible to show it HERE while the answers look different they... A constant is 0, indefinite integrals are defined only up to an arbitrary constant any... Are defined only up to an arbitrary constant calculus to do this, that... A Fresnel integral is the inverse process of the derivative, F ' ( x ) below. Impossible to show it HERE the apex the derivative series solution can be determined by using integration. Integration technique known as substitution any integral to get the best experience: it 's pretty easy you! Where C is equal to a constant is 0, indefinite integrals are only... \Rightarrow \infty [ /itex ] cos4x can be obtained as follows: 1 solutions to exercises! Practice by showing you the full working ( step by step integration.! Indefinite integrals are defined only up to an arbitrary constant was curious see. Numbers.. 9 and even special functions are supported a series solution can be written cos3x..., steps and graph working ( step by step integration ) trigonometric identity integrals are only... 
Regina and the Pacific Institute for the Mathematical Sciences be determined by using the addition for! Is used throughout this article. cos Great differs by a constant ) can be as! Cookie Policy indefinite integrals are defined only up to an arbitrary constant see more to! That would be nearly impossible to show it HERE 1 1 1 1 $. Can think of 4 ( x ) dx function as$ . Functions antiderivative of cos this wiki for functions involving angles … Thanks for the A2A see how to do this, that! Look different, they are all equivalent anti-derivatives as each differs by a constant amount from chain! | 2021-06-13 15:00:57 | {"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 3, "mathjax_display_tex": 2, "mathjax_asciimath": 1, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.9301836490631104, "perplexity": 647.2485201372649}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 5, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2021-25/segments/1623487608856.6/warc/CC-MAIN-20210613131257-20210613161257-00340.warc.gz"} |
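The antiderivative claims above are easy to check mechanically. The sketch below (standard-library Python only; the helper names are ours) differentiates the two results by central finite differences and compares them against the integrands.

```python
import math

def deriv(F, x, h=1e-6):
    """Central finite-difference approximation of F'(x)."""
    return (F(x + h) - F(x - h)) / (2 * h)

# Claimed antiderivative of sin(x)*cos^2(x), obtained via u = cos(x):
F = lambda x: -math.cos(x) ** 3 / 3
f = lambda x: math.sin(x) * math.cos(x) ** 2

# Claimed antiderivative of arccos(x), obtained via integration by parts:
G = lambda x: x * math.acos(x) - math.sqrt(1 - x * x)
g = lambda x: math.acos(x)

for x in (0.3, 1.1, 2.5):
    assert abs(deriv(F, x) - f(x)) < 1e-7
for x in (-0.5, 0.0, 0.5):
    assert abs(deriv(G, x) - g(x)) < 1e-7
print("both antiderivatives check out")
```

The same pattern works for any candidate antiderivative: differentiate numerically and compare with the integrand at a few sample points.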
https://www.albert.io/learn/genetics-and-genomics/question/genetic-linkage-deducing-gene-order-by-inspection
Examine the following distribution of gametes produced from a trihybrid X/x $\cdot$ Y/y $\cdot$ Z/z individual, as determined through a test cross.
X $\cdot$ Y $\cdot$ Z — 328
x $\cdot$ y $\cdot$ z — 316
X $\cdot$ Y $\cdot$ z — 23
x $\cdot$ y $\cdot$ Z — 28
X $\cdot$ y $\cdot$ Z — 56
x $\cdot$ Y $\cdot$ z — 52
X $\cdot$ y $\cdot$ z — 6
x $\cdot$ Y $\cdot$ Z — 4
What is the order of the genes on the chromosome?
A
xyz
B
yxz
C
yzx
D
Cannot determine from the information provided.
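The deduction this question asks for can be sketched programmatically. The snippet below (names and structure are ours) applies the standard inspection method: the parental classes are the two most frequent gametes, the double crossovers are the two rarest, and the middle gene is the one whose allele alone distinguishes a parental class from a double-crossover class.

```python
# Gamete counts from the test cross above (letter case encodes the allele).
counts = {
    "XYZ": 328, "xyz": 316,
    "XYz": 23,  "xyZ": 28,
    "XyZ": 56,  "xYz": 52,
    "Xyz": 6,   "xYZ": 4,
}

ranked = sorted(counts, key=counts.get, reverse=True)
parentals, dcos = ranked[:2], ranked[-2:]   # most vs. least frequent classes

def middle_locus(parentals, dcos):
    """Index of the locus that alone separates a parental gamete
    from a double-crossover gamete."""
    for p in parentals:
        for d in dcos:
            diff = [i for i, (a, b) in enumerate(zip(p, d)) if a != b]
            if len(diff) == 1:
                return diff[0]

mid = middle_locus(parentals, dcos)
print("middle gene:", "xyz"[mid])
```

Running this identifies x as the middle gene, i.e. the gene order yxz.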
https://www.physicsforums.com/threads/any-possible-lower-bound.162808/ | # Any possible lower bound!
1. Mar 27, 2007
### ssd
Looking for some positive-valued simple functions which are less than (or equal to) the following two integrals (given in the following post). By simple I mean that they may not involve integrals or imaginary components or some infinite series. Again, the functions may not be as simple as f(x) = 0.
Please find the integrals in the following post, as I could not fix the latex problem in this post.
Thanks for any idea.
2. Mar 27, 2007
### ssd
The integrals as referred in the previous post are as follows:
1. $$\int_{x}^{\infty}\frac{e^{-y}}{y}\,dy$$ , x > 0

2. $$\int_{x}^{\infty} y\, e^{-y}\,dy$$ , x > 0
3. Mar 27, 2007
### Eighty
The second one can be integrated to (exactly) $$e^{-x}(1+x)$$.
In the first one you can replace the y in the denominator by $$e^y$$ which will give you an easy integral. It will be a pretty bad lower bound though.
4. Mar 27, 2007
### ssd
Thank you very much, I missed the substitution in that.
EDIT: I also missed that it simply can also be done 'by parts'.
What I thought was to replace y in the denominator by $$e^y/2$$ or $$e^{y-1}$$ in the other problem.
Any better idea about the second?...EDIT: I mean the other, problem no.1.
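The bounds discussed in this thread are easy to verify numerically. Below is a rough sketch (our own code, standard library only) that approximates the first integral, E1(x), by the trapezoid rule and compares it with the two lower bounds obtained by replacing 1/y with e^(-y) and with e^(-(y-1)), respectively.

```python
import math

def e1(x, upper=60.0, n=200_000):
    """Trapezoid-rule approximation of E1(x) = integral from x to
    infinity of e^(-y)/y dy; the tail beyond `upper` is below
    e^(-upper) and is ignored."""
    f = lambda y: math.exp(-y) / y
    h = (upper - x) / n
    s = 0.5 * (f(x) + f(upper))
    for i in range(1, n):
        s += f(x + i * h)
    return s * h

for x in (0.5, 1.0, 2.0):
    crude = 0.5 * math.exp(-2 * x)            # from 1/y >= e^(-y), y > 0
    better = 0.5 * math.e * math.exp(-2 * x)  # from 1/y >= e^(-(y-1))
    val = e1(x)
    assert crude < better < val
    print(f"x={x}: {crude:.5f} < {better:.5f} < E1(x) ~ {val:.5f}")
```

The second bound, from the e^(y-1) substitution proposed above, improves on the first by a constant factor of e.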
5. Mar 27, 2007
### Eighty
Better how? It's an exact antiderivative. What do you want?
edit: You can edit your posts, you know. :) Click the EDIT button next to the QUOTE button.
6. Mar 27, 2007
### ssd
Sorry for the misunderstanding, by 'second' I meant the other problem.
Thanks again for the help though.
https://olh.openlibhums.org/article/id/4642/ | ## Introduction
Perspective sensitivity is a ubiquitous linguistic phenomenon. We use perspective sensitive items like to the left of, in front of and tasty or fun on a daily basis, and addressees interpret these expressions apparently without major problems. Yet, the analysis of the semantics of these expressions, and the interpretation of perspective sensitive items as a class, has proven to be a rather difficult matter. Among the problems standing in the way of a straightforward semantics for perspective sensitive expressions (‘PSE’s henceforth) is the question of whether it is feasible to strive for a uniform semantics for all members of the class. Prima facie, the members of the class, their diversity with respect to syntactic category notwithstanding, seem to share a common semantic core (see Partee (1989) for a first attempt to unify different types of PSEs, and Bylinina, McCready, and Sudo (2015) for a more recent overview of the semantics of PSEs). Accordingly, there are theoretical arguments in favor of such an attempt at uniformity, and we will discuss them in what follows. However, we will not only present theoretical arguments arguing against a uniform treatment, but also experimental evidence that highlights the differences between two types of PSEs: relational locative expressions like the ones mentioned above on the one hand, to be abbreviated as ‘RLE’ throughout; and predicates of personal taste (‘PPT’s) on the other. The ultimate goal of this contribution is to argue for an approach to the semantics of perspective-sensitive items that is firmly based on experimental evidence, and which does justice to the theoretical and empirical differences within this class of expressions.
Since the main aim of this contribution is to present the experimental evidence, our theoretical discussion cannot make any claim to an exhaustive representation of the vast and complex linguistic and philosophical issues surrounding PPTs; for an overview and in-depth discussion of both the semantic and the philosophical matters involved, we refer the reader to Peter Lasersohn’s recent book (Lasersohn (2017)). Also, we will have to limit our attention here to the two classes of expressions mentioned, and will say almost nothing about other types of PSEs, like epistemic modals (like, e.g. must, might; see Stephenson (2007)), discourse particles, expressions of social proximity (e.g., foreigner), and various other expressions that have been identified as exhibiting perspective sensitivity in the recent literature; again, we point the interested reader to Bylinina et al. (2015) for a recent overview.
## Theoretical Background: Commonalities and Differences Between PPTs and RLEs
The two types of expressions to be investigated in this contribution are exemplified by the English examples below.
(1) CRAIG: Haggis is tasty.
(2) MICHAEL: Carla is sitting to my left.
In (1), the underlined expression is a predicate of personal taste. It expresses the judgment of the speaker, Craig, about his attitude towards a type of food, haggis: he considers it tasty. That this is a ‘personal’ judgment is quite obvious, since other persons will have other attitudes towards haggis, and will express them accordingly, perhaps using other types of PPTs like dainty, or yummy, or unsavoury, or perhaps even disgusting. Other PPTs pertaining to non-edible objects are fun, interesting, boring, etc., and may also include moral and aesthetic judgments.
In (2), a location—where Carla is sitting—is described in relation to another location, the location of Michael. The relation expressed in (2) is λyλx. left-of(x,y), and the relatum object y of the relation (the one with respect to which the relation is expressed and has to be interpreted) is Michael: Carla is sitting somewhere on the left-lateral side of Michael’s frontal plane.
From this first approximation, it may seem that the two types of expressions are quite different in their usage, and in the semantics that underlies this usage. However, they exhibit a fair number of commonalities, which we will go into before returning to their—more or less—obvious differences.
### Commonalities
The first and most obvious commonality between the two types of predication to be investigated here is that both predicates of personal taste, as well as relational locative expressions, exemplified in (3) and (4) below, are dependent on context for their interpretation.
(3) Carla is sitting to my left.
(4) I find haggis to be quite tasty.
To assess the truth of (3), we have to check whether the person uttering (3) is located such that Carla is sitting to her left. And to assess the truth of (4), we make sure that the person who uttered it does indeed consider haggis tasty. The context dependence here is obviously tied to the indexicals my and I, respectively. The second commonality to be noted thus is that the dependence upon context that both expressions exhibit is a type of indexicality (as opposed to, say, a type of anaphoric or presuppositional dependence). But, as the German examples (5) and (6) show, the context dependence remains even if the overt reference to the speaker is removed.1
(5) Carla sitzt links.
    Carla sits left.
    ‘Carla is sitting [to the speaker’s, or someone else’s] left.’
(6) Haggis ist lecker.
    Haggis is tasty.
    ‘Haggis is tasty.’
What these examples show is that both types of PSEs contain what may pre-theoretically be called “slots” for the expression of relational information, and that these slots can be filled by indexicals (as in (3) and (4) above), or can be left implicit (as in (5) and (6)). In both cases, one would probably want both types of expressions to contain variables for this relational information, and that the values of these variables have to be supplied by contextual parameters. As a reviewer noted, there are numerous accounts to be found in the literature on how this assignment of a value to the variable is to be spelled out at the syntax-semantics interface: by assigning a contextual parameter, by variable binding, by control, etc. Since the division between PPTs and RLEs that we want to argue for here is orthogonal to these details, we decided to (more or less randomly) pick one account (i.e., Lasersohn (2009)) and stick to that throughout our argument. Note that we chose this formal account not out of any theoretical predilection, but simply because it allows us to make explicit the interplay between PSEs and their surrounding context.
The property to provide a variable is the third commonality that the two types of expression exhibit. As one would expect in connection with the third property, there are restrictions on the types of values that the variable might take on. Lasersohn (2009) differentiates between autocentric, acentric, and exocentric assignments. In order to represent the differences between these assignments, and how they affect interpretation, we will adopt Lasersohn’s ‘toy language’ (his term; see Lasersohn (2009), p.361), an extension of the formal system developed and described in Kaplan (1989). In Lasersohn’s (2009) system, denotations are assigned to simple expressions relative to a context c, a world of evaluation w, and an individual i (see Lasersohn (2009), p.361ff. for what follows).2 Thus, the denotation of an expression α is represented as ⟦α⟧^{c,w,i}. Lasersohn further assumes that each context c must specify a speaker or agent (notated A(c)), a world W(c), and what he calls a “judge”, J(c). This allows him to deal both with context-dependent expressions such as, for example, I, which gets the denotation ⟦I⟧^{c,w,i} = A(c) for all c, w, i, as well as context-independent ones, for which ⟦α⟧^{c,w,i} = ⟦α⟧^{c′,w,i} for all c, c′. In addition, Lasersohn provides for the possibility that the judge relative to which a predicate is to be interpreted is expressed overtly, as in tasty for John. He assumes that ⟦α for β⟧^{c,w,i} = ⟦α⟧^{c,w,b}, where b = ⟦β⟧^{c,w,i}. Thus, the denotation of tasty for Mary, ⟦tasty for Mary⟧^{c,w,i}, would be ⟦tasty⟧^{c,w,Mary}. Lasersohn further assumes that the content of an expression α is a function from worlds w and individuals i into the denotation of α relative to c, w, and i: ⟦α⟧^c(w,i) = ⟦α⟧^{c,w,i}. Finally, truth is defined as follows: “We say that a sentence ϕ is true in context c iff its content in c maps the world and judge of c onto 1, that is iff ⟦ϕ⟧^c(W(c), J(c)) = 1.” (Lasersohn (2009), p.362).
While we will not provide a compositional analysis for the interpretation of the two types of expressions, we can use this formal system to illustrate the assignment of different values to the variables contained in PPTs and RLEs. Assuming the usual semantics for predication (see Lasersohn (2009), for the details), the context-independent denotation of an utterance like
(7) CRAIG: Haggis is tasty.
comes out, preliminarily, as
(8) ⟦7⟧^{c,w,i} = ⟦tasty(haggis)⟧^{c,w,i} = 1 iff ⟦haggis⟧^{c,w,i} ∈ ⟦tasty⟧^{c,w,i}.
Since the denotation ⟦haggis⟧^{c,w,i} is context-insensitive, we only have to take care that the parameter for the judge gets assigned the right value. If the utterance (7) by Craig is to be interpreted autocentrically, we get i = A(c) = J(c) = Craig, which gives us ⟦7⟧^{c,w,i} = 1 iff ⟦haggis⟧^{c,w,i} ∈ ⟦tasty⟧^{c,w,Craig}. Accordingly, the exocentric assignment of the judge parameter in
(9) MICHAEL: Haggis is tasty for Craig.
has the contextual parameters A(c) = Michael, while J(c) = Craig; thus, while its form is quite different from that of (7), the two utterances end up having the same denotation, because in the case of (9), the overt expression of the judge blocks the assignment of the “agent” of (9), Michael, to the judge parameter. An acentric assignment to the judge parameter would be one that does not assign any value at all to the judge parameter, thus expressing an “objective” judgment about the taste of haggis. Such an acentric use must be kept apart from an occurrence of a PPT like tasty in which the variable contained in it is quantified over, as in the following example:
(10) Everyone likes haggis.
Rather than no value being assigned to the judge parameter, the judges covary with the individuals that the assignment function passes to the interpretation of the quantifier; see Mitchell (1986) and Partee (1989).
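The autocentric and exocentric assignments just described can be made concrete in a small executable sketch. The following is our own simplification of Lasersohn’s toy language (the lexicon and the Python encoding are invented for illustration): a denotation is a function of a context, a world, and a judge, and truth in a context applies that function to W(c) and J(c).

```python
from dataclasses import dataclass

@dataclass(frozen=True)
class Context:
    agent: str   # A(c)
    world: str   # W(c)
    judge: str   # J(c)

# Toy lexicon: which objects count as tasty, per world and judge.
TASTY = {("w0", "Craig"): {"haggis"}, ("w0", "Michael"): set()}

def tasty(x):
    """The content of 'x is tasty' as a function of (c, w, i)."""
    return lambda c, w, i: x in TASTY.get((w, i), set())

def true_in(c, phi):
    """A sentence is true in c iff its content maps (W(c), J(c)) to 1."""
    return phi(c, c.world, c.judge)

craig = Context(agent="Craig", world="w0", judge="Craig")
michael = Context(agent="Michael", world="w0", judge="Michael")

# Autocentric uses of 'Haggis is tasty':
print(true_in(craig, tasty("haggis")))    # True: Craig is his own judge
print(true_in(michael, tasty("haggis")))  # False: Michael is his own judge

# Exocentric 'tasty for Craig': the overt judge b overrides J(c).
def for_judge(phi, b):
    return lambda c, w, i: phi(c, w, b)

print(true_in(michael, for_judge(tasty("haggis"), "Craig")))  # True
```

The `for_judge` wrapper mirrors the clause ⟦α for β⟧^{c,w,i} = ⟦α⟧^{c,w,b}: the embedded judge argument simply replaces whatever the context would have supplied.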
Although Lasersohn (2009) does not deal with spatial expressions, we can extend Lasersohn’s toy language to also cover expressions for relational locations like left of, thereby adding one more commonality between PPTs and RLEs to our list. It seems quite reasonable to assign the autocentric use of left of in (5), uttered by Michael,
(11) MICHAEL: Carla sitzt links.
     Carla sits left.
     ‘Carla is sitting (to the speaker’s, or someone else’s) left.’
the denotation ⟦left of⟧^{c,w,i}, where i = A(c) = Michael, which seems to give us the intuitively correct semantics: ⟦left of⟧^{c,w,Michael}. The truth conditions for (11) also seem to turn out correct:
(12) ⟦11⟧^{c,w,Michael} = ⟦sitting_left of(Carla)⟧^{c,w,Michael} = 1 iff ⟦Carla⟧^c ∈ ⟦sitting_left of⟧^{c,w,Michael}.
Thus, the sentence comes out as true if Michael utters it in a situation where Carla is sitting to his (intrinsic) left, and false otherwise. The interpretation seems to be fully parallel to the one in (8).
Given the commonalities between the two expressions, and the successful application of the same interpretation rule to the autocentric uses, we might try to add one further commonality between the two types of expressions by treating exocentric uses of relational locatives like exocentric uses of taste predicates. As in the case of tasty, we could account for an exocentric use of left of, and, also in parallel to PPTs, RLEs can only be used exocentrically if the relational argument is overtly expressed; we thus would assume that the denotation ⟦left of x⟧^{c,w,i} is ⟦left of⟧^{c,w,x}. Let us look at the truth conditions for the following utterance:
(13) CRAIG: Carla sitzt links von Michael.
     Carla sits left of Michael.
     ‘Carla is sitting to the left of Michael.’
At first glance, it seems that the evaluation index i will take care of the correct assignment:
(14) ⟦13⟧^{c,w,i} = ⟦sitting_left of Michael(Carla)⟧^{c,w,i} = ⟦sitting_left of(Carla)⟧^{c,w,Michael} = 1 iff ⟦Carla⟧^c ∈ ⟦sitting_left of⟧^{c,w,Michael}.
Apparently, just as in the case of PPTs with an overt judge, the overt expression of the spatial relatum blocks the assignment of the agent of (13) to the “judge” parameter, which in this case should probably more aptly be called “perspective parameter”. But are the truth conditions in (14) really correct? Let us consider a few scenarios that should make (13) true if (14) is the correct rendering of the truth conditions. In Scenario 1, Carla is sitting to the left of Michael, and Craig is standing behind them, aligned with Michael (Carla’s orientation is of course irrelevant). This is a scenario in which (13) comes out as true, as readers may verify for themselves.
There is, obviously, a different scenario, Scenario 2, in which the speaker, Craig, is not aligned with the relatum, Michael, but rather is standing opposite of him, facing him.
In Scenario 2, there is a reading under which (13) comes out as true, and one where it comes out as false: for the first reading, we assume that the perspective parameter gets filled by Michael; the interpretation then proceeds as in (14), and the sentence comes out as true. For this first reading, we have to suppose that Craig is taking on Michael’s perspective in uttering (13). However, the second reading is one where the perspective parameter is filled by the speaker, Craig; and this gives us a denotation for (13) that is not true in Scenario 2:
(15) ⟦13⟧^{c,w,i} = ⟦sitting_left of Michael(Carla)⟧^{c,w,i} = ⟦sitting_left of Michael(Carla)⟧^{c,w,Craig} = 1 iff ⟦Carla⟧^c ∈ ⟦sitting_left of Michael⟧^{c,w,Craig}.
This deictic reading for left of seems to be one where the commonalities between RLEs and PPTs break down—there is no equivalent to Scenario 2 for PPTs. Furthermore, it seems that while an overt expression of the judge of a PPT can block the assignment of the speaker to the judge parameter, an overt expression of the relatum of an RLE cannot in all cases block the assignment of the speaker to the perspective parameter; rather, it can do so only in case the speaker and the relatum are aligned. No such restriction seems to exist for PPTs.
Hence, we have to rethink our strategy of parallelising the truth conditions we assign to PPTs and RLEs, despite the commonalities enumerated above: as we have surmised right from the start, one can expect there to be differences between the two types of expressions. Let us look at these in more detail.
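Before turning to those differences, the two readings of (13) in Scenario 2 can be rendered in a toy geometric model (our own construction; the text only gestures at locations and vectors): a perspective is a position plus a facing direction, and ‘left of’ is decided by the sign of a 2D cross product.

```python
def left_of(x, relatum, facing):
    """True iff point x lies to the left of `relatum`, as seen from a
    perspective facing in direction `facing` (2D cross-product sign)."""
    vx, vy = x[0] - relatum[0], x[1] - relatum[1]
    return facing[0] * vy - facing[1] * vx > 0

# Scenario 2: Michael at the origin facing Craig; Craig stands opposite,
# facing Michael; Carla sits to Michael's intrinsic left.
michael_pos, michael_facing = (0, 0), (0, 1)
craig_facing = (0, -1)
carla = (-1, 0)

# Reading 1: perspective parameter filled by Michael (the relatum).
print(left_of(carla, michael_pos, michael_facing))  # True

# Reading 2: perspective parameter filled by the speaker, Craig.
print(left_of(carla, michael_pos, craig_facing))    # False
```

The two print lines reproduce the two truth values derived in (14) and (15): true on the intrinsic reading, false on the deictic one.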
### Differences
The first, and most perspicuous, difference between the two types of expressions is that in many languages, including English, the encoding of the judge parameter of PPTs is optional, while that of the perspective parameter of RLEs seems to be mandatory. Although underlyingly both expressions seem to be relational and sensitive to context, and especially to the role that the speaker plays in filling the parameter, there are syntactic differences. This has been pointed out by Barbara Partee already (see Partee (1989), p.268), and is elaborated on by Bylinina et al. (2015), p.72f.
The second difference has to do with the possibility to block the assignment of the speaker to the judge/perspective parameter, or shift it away to a different referent, which we have discussed above with respect to example (13), and which we illustrate below in a somewhat different setting: embedding under attitude verbs.
(16) a. MICHAEL: Craig glaubt, dass Haggis lecker ist.
        Craig believes that haggis tasty is.
        ‘Craig believes that haggis is tasty.’
     b. MICHAEL: Craig glaubt, dass Carla links sitzt.
        Craig believes that Carla left sits.
        ‘Craig believes that Carla is sitting to the left.’
In (16-a), the only possible assignment to the judge parameter of tasty seems to be the denotation of the subject of the attitude verb, Craig. Contrast this with (16-b), where the perspective parameter—from whose perspective Carla is sitting to his or her left—can be filled both by the expressed attitude holder, Craig, and by the speaker/agent, Michael; probably with a slight preference for the former. Thus, it seems that RLEs are somewhat more flexible in their assignment of the perspective parameter than PPTs in their assignment of the judge parameter: they are, as it were, more shiftable. We urge the reader to keep this difference between PPTs and RLEs in embedding contexts in mind, since we will return to it in our experiment.3
A third difference, and one that has been the subject of much discussion is the proneness to so-called faultless disagreement,4 which PPTs exhibit, while RLEs do not. Although this property has been a topic of research and lively debate in the philosophy of language for at least 15 years now, we will not enter into the discussion of faultless disagreement here, but rather only point out that there is a palpable difference between the exchanges in (17) and (18):
(17) CRAIG: This vegetarian haggis is tasty.
     MICHAEL: No, it isn’t.

(18) CRAIG: This vegetarian haggis contains cauliflower.
     MICHAEL: No, it doesn’t.
Evidently, the disagreement between Craig and Michael in (17) is of a different kind than that in (18). To witness: the disagreement in (18) can be settled by checking—by some appropriate analytic procedure—whether the vegetarian haggis in question contains cauliflower or not. If it does, Craig is right, and Michael is wrong, and if it does not, the other way round. For (17), there does not seem to exist an equivalent procedure of assessing who is right, and who is wrong. Actually, there does not even seem to be something to be right or wrong about. Peter Lasersohn has made this point repeatedly: while the assessment of the truth of a judge-independent predication (like λx. contains-cauliflower(x)) simply depends on matters of fact, the assessment of the truth of a sentence containing a PPT does not; it is, as Lasersohn puts it, dependent on matters of taste (see Lasersohn (2017) for extensive discussion).
A fourth point where the two types of predicates diverge is the possibility to use RLEs in a derived fashion, where the perspectival center is not a sentient being, but an artifact onto which a front-back asymmetry is projected. Typical cases are cupboards, cars, etc. Consider a case where there is a person, Michael, and a ball lying on the ground; between the two, there is a car. If the front of the car is facing the ball, Michael can truthfully utter (19-a), but when the car is facing towards Michael, (19-a) is infelicitous (though perhaps not false), while (19-b) is felicitous.
(19) a. The ball is in front of the car.
     b. The ball is behind the car.
Such cases of a derived origo (“Origoverschiebung” in Bühler’s (1934) original parlance), which have received different treatments in linguistic semantics (see Bierwisch (1988), Wunderlich and Herweg (1991), Aurnague and Vieu (1993), and Kracht (2002)), seem not to have any obvious equivalent in the semantics of PPTs: it is quite hard to imagine what such an equivalent, a “derived judge parameter”, might be.
A further difference between the two expression types lies in their event semantic properties: while PPTs can be used to express habitual judgments as in I find haggis tasty, which are rather temporally stable, i.e. their truth is asserted to hold for a long interval including the utterance time, but possibly also stretching into the future, the meaning of RLEs lacks such longevity; which is not very surprising, given that the relative locations of objects, even if they are non-movable, are highly dependent on the location of the origo (see Skopeteas, Hörnig, and Weskott (2008) for discussion).5 In other words: although the meaning of both types of expressions has to be time-sensitive, they seem to differ with respect to the intervals they describe, simply because—and we are fully aware that this is truistic—in most cases, people’s localisations relative to objects change faster than their tastes do.
Finally, the root of the differences enumerated so far probably lies in the ontological differences between “judges” and “perspectives”, that is, the values that get assigned to the parameters of PPTs and RLEs, respectively. A judge has to be a sentient being, perhaps even a sentient being capable of expressing a judgment (though that is not entirely clear), and the perceptions of this sentient being have to be (mentally) represented on a scale: in order to be able to find something tasty, one has to have a comparison class, i.e. a set of objects with a (partial) ordering relation defined over that set. In comparison, a perspective is an object of an ontologically quite different kind: in the simplest case, it can be defined by two locations and a vector (e.g., in a Euclidean space); or a geometric point, and a projection (see Abusch and Rooth (2017)). Objects such as these, which figure in the semantics of RLEs, are clearly ontologically less complex than the scales involved in the semantics of PPTs. Note also that the origin of the vector or the projection of the perspective encoded in an RLE does not have to be a sentient being, nor one capable of forming judgments; a camera, or, as we have seen, even a cupboard, is sufficient for inducing a perspective, although both artifacts have “inherited” their perspective from a human projecting it onto them. It should be noted, however, that spatial expressions can also be used to denote scalar arrangements—think, for example, of relational expressions like further behind, etc.; but these usages are derivative of the basic use of spatial prepositions and adverbs. Furthermore, as was pointed out to us by Sarah Zobel (p.c.), comparatives and superlatives of RLEs seem to be quite odd, at least in predicative use:
(20) a. *Peter sitzt linker als Paul.
         Peter sits left-COMP as Paul.
        ‘Peter is sitting further to the left than Paul.’
     b. *Peter sitzt am linksten.
         Peter sits at-the leftest.
        ‘Peter is sitting leftmost.’
This contrasts with the perfectly acceptable comparative and superlative usages of PPTs:
(21) a. Pudding ist leckerer als Presssack.
        Pudding is tastier as Presssack.
        ‘Pudding is tastier than Presssack (=collared pork).’
     b. Pudding ist am leckersten.
        Pudding is at-the tastiest.
        ‘Pudding is the tastiest.’
To sum up: despite the initial prospects of providing a uniform semantic treatment for the two types of expressions, and despite some apparent semantic similarities, we have found them to differ in quite crucial respects. This raises the question whether the class of perspective-sensitive expressions is as uniform as one would consider it to be at first glance. In addition, we want to raise the question whether the two types of expressions can be shown to be interpreted differently in actual comprehension; if they can, this would be evidence against their uniformity.
## Our Experiment
In order to answer this question and to establish whether the apparent semantic and pragmatic differences discussed above can be made visible by quantitative data from actual linguistic behavior, we designed an experiment in which participants had to decide which of two boxes is the one that a protagonist in a given scenario would choose. The scenarios were described linguistically, and additionally by means of a schematic picture representing the scenario from a bird’s eye view. We chose to address the research question in this somewhat indirect manner, since previous studies used a rather blunt approach to assess the perspectivized interpretation of participants by asking them directly from whose perspective a certain expression was to be evaluated (e.g., see Harris and Potts (2009), Kaiser (2015), Kaiser and Lee (2017); but also see Kaiser and Cohen (2012) for a more indirect approach). In order to make the goal of our experiment less perspicuous, and thereby prevent participants from behaving strategically, we embedded the items aiming at the interpretation of PPTs and RLEs into a set of fillers that were concerned with the scalar implicatures of expressions like some and most; see below for details.
By directly comparing two quite different types of PSEs in the same experimental setting, we hoped to gather evidence on whether the commonalities or the differences between the two expression classes play the more important role. Of all the differences described above in the section on the differences, the difference in embedding (see (16)) seemed to us the easiest to operationalise in an experiment, so we chose to cross the factor PERSPECTIVE SENSITIVE EXPRESSION (i.e., PPTs vs. RLEs) with the factor EMBEDDING (embedded vs. non-embedded occurrence of the PSE) in a fully crossed design. Given the differences in embedding behavior evident in (16), as well as the rather long list of differences in general, we predicted main effects for both factors; see below for the more specific hypotheses.
### Participants
We tested 40 speakers of American English over 18 years of age, 19 of whom identified as female, 20 as male, and one who identified as neither.
### Materials
One item consisted of a linguistic description of the scenario, a pictorial representation of the decision situation, and a decision prompt; this structure was common to both experimental items and fillers. Also common was the depicted and described situation: two protagonists are involved in a box-picking game at a table. In the picture belonging to each item, the participants saw the two protagonists seated opposite each other, with a table between them, on which the boxes are placed either on the line between the two, or perpendicular to it. Exactly midway between the two boxes there is a flower, serving as a non-oriented relatum object (the depiction of the flower was centrosymmetric). Both protagonists are depicted schematically and symmetrically to the horizontal axis of the picture, and they are named.
The RLE items had a relatively simple structure: there was a context sentence (above the scenario picture in Figure 3), the critical sentence, and the decision prompt (both below the picture).
Figure 1
First scenario: Michael and Craig are aligned.
Figure 2
Second scenario: Michael and Craig are facing each other.
Figure 3
Sample stimulus in the RLE/–embedded condition.
The RLEs employed in our materials were to the left of, to the right of, in front of, and behind. In order to render the presentation of the stimuli in the PPT condition as similar as possible to that in the RLE condition, the PPT items were headed by a more elaborate context, in which the taste preferences, or general attitudes toward the boxes were established; the target sentences and decision prompts were the same as in the RLE condition. Figure 4 illustrates the presentation of the PPT items:
Figure 4
Sample stimulus in the PPT/+embedded condition.
The 12 PPT items thus made use of 12 pairs of PPTs, each pair consisting of a predicate with a positive valence (e.g., inspiring, fascinating, astonishing) and a predicate with a negative valence (such as boring, annoying, disturbing). The PPTs were chosen following the criteria in McNally and Stojanovic (2014), and were additionally matched against the stimulus-bias scores in Bott and Solstad (2014) and Ferstl, Garnham, and Manouilidou (2011).
Thus, depending on the condition of the item, either the position (RLE) or the surface of the boxes would vary. There were 24 items overall: 12 PPTs and 12 RLEs. Within these, there were six +embedded and six –embedded items, respectively. In addition, we varied, and counterbalanced as far as possible, a number of further variables: (i) the box-taking verb (rotating through the verbs choose, pick, select and take); (ii) the position of the target box (the box that would correspond to a shifted reading of the PPT, or the RLE): whether it was to the left/front/right of, or behind, the flower; (iii) the position of the active protagonist (the “box-taker”); (iv) the pattern on the target box (rotating through the patterns); (v) the pairs of patterns for the box pairs; and (vi) the gender of the protagonists. The embedding verb in the +emb condition was held constant (to think).
In addition to the 24 experimental items, there were 24 filler items, four of which served as benchmarking items. The fillers had linguistic descriptions like ‘Luke and Maya are playing a game. Luke will pick the box with some/most of the stars. Click on the box that Luke will pick.’, along with a picture that fitted the description either semantically, or only by drawing an implicature. For four of these fillers, the descriptions did not even fit semantically; these served the purpose of detecting tired or inattentive participants. Four lists were created according to a Latin-square design; the resulting lists were split into two blocks, and these blocks were randomized. Each block started with a filler.
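To make the Latin-square logic concrete, the following sketch rotates items through the four conditions across four lists. It is a simplification we introduce purely for exposition: the function names are ours, and the sketch abstracts away from the fact that the PSE factor was actually between-items.

```python
from itertools import product

# The four conditions from crossing PSE type with EMBEDDING.
conditions = list(product(["PPT", "RLE"], ["+emb", "-emb"]))

def latin_square_lists(n_items=24, n_lists=4):
    """Rotate each item through the conditions across lists, so that every
    list contains each condition equally often, and every item occurs in
    every condition exactly once across the four lists."""
    return [[conditions[(i + l) % n_lists] for i in range(n_items)]
            for l in range(n_lists)]

lists = latin_square_lists()
# Each of the 4 lists assigns each of the 4 conditions to 6 of the 24 items.
```

Under this scheme no participant sees the same item in two conditions, while every item contributes data to every condition across participants.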
### Procedure
The experiment was administered via Amazon Mechanical Turk. First, participants were informed about their rights, the setup of the experiment was explained, and a few sociographic variables were collected. Participants were then given two sample items to familiarise themselves with the task. Then the experiment proper, comprising the 48 trials, started.
### Design and Predictions
By nesting the factor EMBEDDING (+embedded vs. –embedded) under the between-items factor PERSPECTIVE-SENSITIVE EXPRESSION (PPT vs. RLE; henceforth PSE for short), we obtained four conditions, which we illustrate below:
(PPT)
[Context:] Hazel finds boxes with stars confusing, but she considers boxes with dots fascinating. Lexie considers boxes with stars fascinating, but she finds boxes with dots confusing.
[Picture:] Lexie and Hazel at the table, on which a box with dots (Box A) and a box with stars (Box B) are placed.
[Target:]
(PPT/–emb) Hazel will select the fascinating box.
(PPT/+emb) Lexie thinks that Hazel will select the fascinating box.
[Decision Prompt:] Click on the box that (Lexie thinks that) Hazel will select.

(RLE)
[Context:] Heather and Shannon are looking at the things on the table.
[Picture:] Heather and Shannon at the table, on which a grey box (Box A), a flower, and a second grey box (Box B) are placed between the protagonists.
[Target:]
(RLE/–emb) Heather will select the box in front of the flower.
(RLE/+emb) Shannon thinks that Heather will select the box in front of the flower.
[Decision Prompt:] Click on the box that (Shannon thinks that) Heather will select.
The factor PSE was tested within-participants and between-items; the factor EMBEDDING was tested within-participants and within-items.
Firstly, we predicted a main effect of the factor EMBEDDING: the box that corresponds to the intrinsic, i.e. shifted, interpretation of the PPT/RLE from the box-taker’s viewpoint (Hazel, or Heather, respectively, in the examples above; the “box-taker box”) should be picked more often in the (–emb) conditions, because there is no competitor for the assignment of the perspective/judge parameter to the variable of the PSE, while in the (+emb) conditions, the other box (corresponding to the intrinsic interpretation of the other protagonist) should be picked more often.
Based on our theoretical considerations, we further predicted an interaction of the factors EMBEDDING and TYPE OF PSE: RLEs show a greater flexibility in the assignment of the perspective/judge variable than PPTs (recall the section on the differences between the two types of PSEs, and in particular the discussion below example (16)). Thus, RLEs should be easier to shift towards the subject denoting the attitude holder in the (+emb) conditions than PPTs, while we would not expect any difference in the (–emb) conditions. Accordingly, the difference in proportions of decisions for the box-taker box (RLE–PPT) should be negative in the two (+emb) conditions, and close to zero in the two (–emb) conditions.
## Results
As our dependent variable, we defined the proportion of decisions for the box-taker box, that is, the box that corresponds to the intrinsic/perspectivised reading of the RLE/PPT from the viewpoint of the protagonist who gets to pick the box in the scenario. For example, in the scenario depicted in (3), this would be Box A, because it is to the right of the flower from Paige’s viewpoint. Figure 5 plots the proportion of box-taker box decisions dependent on the TYPE OF PSE (PPT vs. RLE), and the factor EMBEDDING (– vs. + embedding).
Figure 5
Mean Proportion of Shift Towards the Box-Taker Perspective, dependent on EMBEDDING and TYPE of PSE.
As is evident from Figure 5, the probability of picking the box-taker box, i.e. the box corresponding to the shifted interpretation of the PSE, was considerably higher in the non-embedded conditions than in the embedded ones (means: .85 vs. .41). Furthermore, the proportion of box-taker box decisions was somewhat higher in the PPT condition than in the RLE condition (.68 vs. .58). And the difference between these two conditions is somewhat more pronounced in the +emb conditions than in the –emb conditions (.15 vs. .05), hinting at an interaction of the two factors.
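The individual cell means are not reported, but they can be reconstructed from the marginal means and differences just quoted; the following back-of-the-envelope check (our own derivation, not data from the experiment) confirms that the quoted numbers are mutually consistent:

```python
# With PPT/-emb = a and PPT/+emb = b, the quoted PPT-RLE differences give
# RLE/-emb = a - .05 and RLE/+emb = b - .15; the quoted embedding marginals
# (.85 and .41) then yield a = .875 and b = .485.
cells = {
    ("PPT", "-emb"): 0.875, ("RLE", "-emb"): 0.825,
    ("PPT", "+emb"): 0.485, ("RLE", "+emb"): 0.335,
}

def marginal(level):
    """Mean of the two cells sharing the given factor level."""
    vals = [v for key, v in cells.items() if level in key]
    return sum(vals) / len(vals)

# The reconstruction reproduces all four reported marginal means.
for level, reported in [("-emb", 0.85), ("+emb", 0.41),
                        ("PPT", 0.68), ("RLE", 0.58)]:
    assert abs(marginal(level) - reported) < 1e-9
```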
In order to assess the reliability of these effects, we performed a linear mixed-effects logistic regression on the decisions, with EMBEDDING and TYPE OF PSE as fixed factors, and participants and items as random factors. The model with the maximal random effects structure (see Barr, Levy, Scheepers, and Tily (2013)) did not converge, so we fitted the most complex model that would converge, with the syntax `glmer(persp ~ PSE*EMB + (1 + PSE | subject) + (1 | item), …)`. Significance was assessed by likelihood-ratio χ²-tests in forward-selection model comparisons. We are aware that our inability to test the maximal model might jeopardize the generalizability of our results because of type I error inflation, and we will hedge our conclusions accordingly. The output of the maximal (converging) model is given below:
Our prediction that there should be a main effect of embedding is clearly borne out by the data, and the model comparison for that factor yielded a significant effect for EMBEDDING, $\mathrm{LR}\text{-}\chi^2_{\mathrm{df}=1} = 39.90$, p < .001. The main effect of TYPE OF PSE proved to be significant in the model comparison as well, $\mathrm{LR}\text{-}\chi^2_{\mathrm{df}=1} = 6.97$, p < .01, although we are less sure whether this is really a resilient result, given the problems with model convergence. In any event, the interaction we predicted was not significant in the analysis given in Table 1; nor was it in the model comparison, where adding the interaction did not improve model fit, $\mathrm{LR}\text{-}\chi^2_{\mathrm{df}=1} = 0.32$, p > .05. Apparently, the variance in the data set was too big for the interaction to show a reliable effect. In our search for the source of this variance, we carried out further post-hoc tests, and hit upon two interactions buried in the data. In the first of these interactions, our factor EMBEDDING interacted with the emotional valency of the PPTs; in the second one, with the orientation of the RLEs. Recall that half of the target boxes were ones with a negative emotional valency (i.e., boring, annoying, etc.), while the other half were positively valued (amazing, interesting, etc.).
Table 1
Output of the maximal (converging) model.
| Fixed effects | Estimate | Std. Error | z value | Pr(>\|z\|) |
| --- | --- | --- | --- | --- |
| (Intercept) | –0.03149 | 0.32049 | –0.098 | 0.9217 |
| PSERLE | –0.77262 | 0.33422 | –2.312 | 0.0208 * |
| EMB-emb | 3.00036 | 0.42416 | 7.074 | 1.51e-12 *** |
| PSERLE:EMB-emb | –0.30710 | 0.53947 | –0.569 | 0.5692 |
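For readers who want to verify the reported significance levels: with one degree of freedom, the χ² survival function has a closed form, so the p-values for the likelihood-ratio statistics quoted above can be checked with the Python standard library alone (variable names are ours):

```python
import math

def chi2_sf_df1(x):
    """P(X > x) for a chi-square variable with df = 1: erfc(sqrt(x/2))."""
    return math.erfc(math.sqrt(x / 2.0))

p_embedding   = chi2_sf_df1(39.90)  # main effect of EMBEDDING
p_type_of_pse = chi2_sf_df1(6.97)   # main effect of TYPE OF PSE
p_interaction = chi2_sf_df1(0.32)   # EMBEDDING x TYPE OF PSE

assert p_embedding < 0.001    # reported as p < .001
assert p_type_of_pse < 0.01   # reported as p < .01
assert p_interaction > 0.05   # reported as p > .05
```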
As Figure 6 shows, the shifted interpretation was most frequently (in almost 100% of the cases) available to participants when the target box corresponding to this shifted interpretation was positively valued, and there was no embedding. The biggest effect of embedding occurred for these positively valued PPTs: if Ryan thinks that Adam will take the amazing box, it apparently matters less whether Ryan or Adam finds the box amazing (in the latter case, Ryan just adopts Adam’s stance towards the box).
Figure 6
Mean Proportion of Shift Towards the Box-Taker Perspective for the PPTs only, dependent on EMBEDDING and VALENCY of the PPT.
## Discussion
The results we obtained were not particularly clear with respect to the hypotheses we started out with: whether or not there is a clear difference between predicates of personal taste and relational locatives seems to depend on details of the statistical analysis, which is not a very desirable state of affairs. Still, we think that, taken together with the theoretical considerations pursued in the section on commonalities and differences, it is possible to argue that the emerging picture is not so unclear after all. In order to integrate the experimental results into our theoretical background, we will briefly review the commonalities and differences that we found the two types of PSEs, PPTs and RLEs, to exhibit.
The commonalities had mainly to do with the way in which context interacts with the implicit content in the semantics of the two types of expressions: both RLEs and PPTs were shown (i) to be context-dependent, and—in a sense to be discussed in more detail below—indexical; (ii) to contain parameters which can be left implicit, or made explicit by overt lexical material encoding the judge or the perspectival center, respectively; and (iii), to exhibit, up to a certain point, striking commonalities in how the parameters are assigned their respective values.
The differences between the two expression types began when we looked closer at the way the content of utterances containing a PPT or an RLE gets fixed relative to certain contexts; that is, ultimately, how the judge/perspective parameter gets fixed, and how properties of the context (as, for example, speaker orientation) affect this fixation of the parameter value. To remind the reader, we enumerate the differences here again. PPTs and RLEs differ with respect to (i) optionality of expressing the relational argument (depending on language); (ii) availability of non-speaker judge/perspective parameter assignments; (iii) proneness to faultless disagreement; (iv) derived assignments; (v) event semantic properties (longevity); and (vi), probably most importantly, ontology of the type of entities assigned as parameter values (judges vs. perspectives). Even from this somewhat arbitrary list it appears that the differences outnumber the commonalities, and we tend to think, on theoretical grounds alone, that the differences indeed outweigh the commonalities. Empirically, we can back this claim up by the data (caveats to follow below), since the effect of our factor PSE TYPE was significant in the analysis that we presented. But we hasten to remind the reader that the non-maximal random effect structure of the logistic regression mixed model, and the resulting possibility of an inflated type I error, forbid any strong generalisations. Which is just statistical jargon for: the data did not tell us reliably whether the two types of PSEs really do differ.
Also, the striking similarities in the two interactions between EMBEDDING and VALENCY and EMBEDDING and ORIENTATION are apt to cast some doubt on any strong statement about the differences: given these two interactions we found in the data, should we not reconsider some of the theoretical differences as perhaps less significant (in the non-statistical sense)? Currently, we do not think so. Although our reasoning here is completely post-hoc, and certainly quite speculative, we want to put it forward nevertheless to allow readers to make their own judgement.
First of all, the similarity between the two types of expressions is partly due to the choice of our experimental design. Although in everyday situations, people sometimes face each other, and then person X’s right is person Y’s left, etc., this is not a property of RLEs in general, but restricted to these everyday face-to-face situations, which we also chose to employ in our design. A similarly unnatural “inverse symmetry” held for the PPTs in our experiment: whenever protagonist X valued Box A positively and Box B negatively, the reverse would hold for the other protagonist, although the taste preferences of two individuals in everyday situations, luckily, seldom exhibit this kind of dependence. Thus, for both types of PSEs, the relation R (more or less explicitly) expressed by the PSEs with respect to two objects a and b and with respect to two protagonists X and Y had the property $aR_Xb$ iff $bR_Ya$, or $aR_Xb$ iff $aR_Y^{-1}b$. Thus, a fair amount of the similarities we found in the data may very well be due to the similar restrictions that we imposed on the experimental scenarios for RLEs and PPTs. Of course, this was very much intended: without these restrictions, we would not have been able to experimentally compare the two types of PSEs in the first place.
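The inverse-symmetry property $aR_Xb$ iff $bR_Ya$ can be made concrete with a toy model of the face-to-face scenario; the coordinates, names, and helper function below are our own illustrative assumptions, not part of the experimental materials:

```python
# Two protagonists X and Y face each other across the table; their lateral
# axes are therefore mirror images, so "to the left of" flips between them.
def left_of(x1, x2, viewer):
    """True iff the object at lateral position x1 is to the left of the
    object at x2, as seen from the given viewer."""
    return x1 < x2 if viewer == "X" else x1 > x2

a, b = -1.0, 1.0  # two boxes, symmetric around the central flower

# aR_X b iff bR_Y a, with R = "to the left of":
assert left_of(a, b, "X") == left_of(b, a, "Y")
assert left_of(b, a, "X") == left_of(a, b, "Y")

# The PPT scenarios imposed the analogous inverse symmetry on valuations:
val_X = {"Box A": +1, "Box B": -1}             # X's taste judgements
val_Y = {box: -v for box, v in val_X.items()}  # Y's are exactly inverted
assert all(val_X[box] == -val_Y[box] for box in val_X)
```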
Still, one may wonder whether these design restrictions alone explain the similar patterns, or whether there is some deeper reason why the valency of PPTs and the orientation of RLEs behaved so similarly in our experiment. Our take on the somewhat surprising similarity in the interaction patterns is that both taste predications with positive valency, and left of/right of localisations have rather clear preferences with respect to the assignment of their respective parameters: in case there is no embedding under an attitude verb, both subtypes of PSEs clearly prefer the speaker as the parameter value (judge/perspectival center). In case the clause containing the PSE is embedded under an attitude verb, the subject of that verb is clearly the preferred value. For the taste predication with a negative valency, and for in front of/behind localisations, these preferences are less pronounced. For PPTs, we can make perfect sense of this weaker preference: it simply mirrors the reluctance of our participants to attribute to a protagonist—be it the box-picker, or the other protagonist—a choice where the chosen box is negatively valued by that protagonist. What is less clear to us is why the front-behind dimension should exhibit a similarly weakening effect on the preferences. We have to leave the explanation for this effect to further research.
Thus, with the exception of this one data point, we think we are able to explain how the observed similarities came about. We furthermore think that the differences—both empirical, and theoretical—persist in the face of the similarities: it should be quite hard, for example, to explain away the ontological differences of the parameter values we have pointed out. And even on a more basic level, the denotations of the two types of expressions, the differences are quite apparent: while an RLE denotes a relation between two sets of points in space, relative to an origo/perspectival center and a time of evaluation, a PPT denotes a mental object of at least ordinal scalar type, possibly with a standard, i.e. a threshold value, relative to an origo/judge and a time of evaluation. While the denotation of RLEs thus can be given extensionally, this does not hold for PPTs. We think that this very basic difference alone militates against a uniform treatment of the two types of PSEs, and calls for a semantic treatment of the class of perspective sensitive expressions that, while taking into account their commonalities, does justice to their differences.
Given the very preliminary nature of our results, it seems redundant to point to the need for more empirical work on the issues dealt with. We will only point to some directions for further research here.
We are currently working on a German replication of the experiment described here; German RLEs can express the origo parameter as an adjunct, which is not possible in English (see Hörnig, Weskott, Kliegl, and Fanselow (2006) for some discussion):
(22) CRAIG: Carla sitzt von Michael aus links.
            Carla sits from Michael off left.
     ‘Carla is sitting to the left, as seen from Michael.’
Note that (22) is, thanks to its explicit expression of the origo parameter, true both in Scenario 1 (cf. Figure 1, where the speaker, Craig, is aligned with the relatum, Michael), and in Scenario 2 (cf. Figure 2, where this is not the case). We think that a typological approach to the phrases expressing judges and perspectival centers is called for, since the variability seen even within the Germanic languages makes for interesting semantic differences. Moreover, given Barbara Partee’s (1989) criterion of argumenthood for origo phrases, it seems a worthwhile task to look for languages where these origo phrases might be obligatory, making a pronominal treatment of origos—and possibly judges, too—seem more promising than in Partee’s original assessment of the data.
Further experimental work needs to be done with respect to the preferences that the assignment of judge and origo parameters exhibits in online comprehension. We have started to look into this in counterfactual contexts (see Footnote 3), but this is only a first step, and there are intensional contexts—imaginations, dreams, etc.—where the interpretation of perspectival parameters behaves in fascinating and unexpected ways.
To conclude on a more general note: we have argued here that the differences found between two members of the class of perspective sensitive items, predicates of personal taste, and relational locative expressions, may outweigh the similarities they exhibit at a first glance. Given the differences, one may very well ask why the two types of expressions came to be treated as members of the same class in the first place. We think that this is an instance of a more general cognitive pattern. Spatial relational expressions tend to get mixed up with other types of relational and/or indexical expressions for two reasons: firstly, they are context dependent, which may be the reason why they get thrown into the same bag as other relational context dependent expressions. The second reason, however, is probably more important. We humans tend to think about many abstract relations in spatial terms; linguists studying metaphor have long recognized and described this. The application of the notion of perspective to mental representations seems a particularly prolific case: though this notion is firmly grounded in an extensional denotation in Euclidean space, it is, with increasing degree of metaphoricity, extended to conceptualise relational notions concerning temporal, epistemic, and intentional relations more generally. A case of particular significance is the notional environment of egocentricity, i.e. the self/de se, in the philosophy of language and the philosophy of mind. To give but one example: in his explanation of the obligatory egocentricity of certain beliefs—their being attitudes de se, as Lewis (1979) termed them—John Perry called them “self-locating beliefs”, or “self-locating knowledge” (Perry (1979), p.492f.). 
Over and above the wording, which borrows from the spatial domain, the conceptual background is that, just as we self-locate ourselves in physical space (e.g., by means of a fixed map with a red dot saying “You are here.”), we can self-locate in logical space.6 Given the way notions of spatial and epistemic perspective are mixed in philosophical discourse, as well as in ordinary talk (think of lexemes like stance, viewpoint, etc.), it is not too surprising that the two types of expressions we have looked at here have been classified with their commonalities, rather than their differences in mind. But we contend that broadening the empirical basis in the study of perspective sensitive expressions in general will lead to a more accurate picture of the semantic variation within the class of perspective sensitive items, as well as of its potential common core, and, consequently, the prospects for a uniform semantic analysis.
## Notes
1. We decided to choose German here because the RLE examples without overt mention of the origo have a somewhat dubious grammaticality status in English (some native speakers accepting them, but some others not), while being quite OK in German. As the gloss for example (5) shows, the origo of the RLE does not necessarily have to be interpreted as that of the speaker. [^]
2. Part of the formal apparatus Lasersohn employs in the 2009 paper is superseded by the more elaborate, pragmaticised theory in his 2017 book. But we take it that the semantic issues we are interested in here are not affected by the differences between these two formalizations, and that both the commonalities and the differences between the types of expressions come out the same in the 2017 version. [^]
3. Probably closely related to the difference in embedding under attitudes is the difference in the behavior in counteridenticals; cf.:
(i) a. If I were Craig, Carla would sit to my left. b. #If I were Craig, haggis would be tasty.
This seems to add to the evidence that the perspective parameter of RLEs is readily shiftable, while the judge parameter of PPTs is less so; see Klages, Holler, Kaiser and Weskott (2019) for some discussion and experimental evidence. [^]
4. One reason for being a bit sceptical about the term “faultless disagreement” is that the faultlessness of disagreement does not seem to be limited to predicates of taste, or, more generally, statements of “an opinion”. Kracht and Klein (2014) argue quite convincingly that disagreement and, more generally, mutual misunderstanding are ubiquitous and, if we read them correctly, even desirable properties of natural languages. Also, one might wonder whether the term ‘disagreement’ is well chosen: the currently championed relativist treatments of cases of faultless disagreement seem to us to imply that there cannot be agreement about matters of taste, and it seems somewhat dubious whether there can be disagreement, faultless or not, about matters where there can be no agreement. Luckily, these critical considerations are completely orthogonal to the aims of the current paper. [^]
5. A reviewer noted that statements containing RLEs can also describe habitual judgments, as e.g. in Whenever I go to the movies, I prefer to sit in the front. While we agree that the RLE here clearly figures in a habitual statement, we want to leave open the question whether what the overall statement expresses is a matter of relative location, or a matter of taste, or preference. [^]
6. The property of borrowing from the spatial domain is shared by many other of the thought experiments that Perry, Lewis, and others in their wake have brought up to argue for (or against) the particular properties of egocentric beliefs and utterances; see Cappelen and Dever (2013) for a critical discussion. Although we do not want to enter into the history of this idea of “self-location”, we think that the first and clearest commitment to the parallel of (self-)location in physical and logical space is in Lewis (1979):
“What happens when [Adam] believes a proposition, say the proposition that cyanoacrylate glue dissolves in acetone? Answer: he locates himself in a region of logical space.” (p.518, our emphasis)
[^]
## Acknowledgements
This publication is based on Chapter 5 of the first author’s dissertation, Klages (2020), who gratefully acknowledges the funding of the Graduate School “Theorie und Methodologie der Textwissenschaften und ihre Geschichte”. The work reported here was partly carried out in the project ProProCon of the XPRAG.de priority program funded by the DFG. The authors wish to thank audiences at the universities of Bielefeld, Tübingen, Los Angeles and Nijmegen for fruitful discussion, as well as the two anonymous reviewers from OLH and Rose Harris-Birtill for their comments.
## Competing Interests
The authors have no competing interests to declare.
## References
Abusch, D., & Rooth, M. (2017). The formal semantics of free perception in pictorial narratives. Retrieved from http://events.illc.uva.nl/AC/AC2017/Proceedings/ (Last accessed on September 4, 2019)
Aurnague, M., & Vieu, L. (1993). A Three-Level Approach to the Semantics of Space. In C. Zelinsky-Wibbelt (Ed.), The Semantics of Prepositions: From Mental Processing to Natural Language Processing (pp. 395–439). Berlin, New York: DeGruyter.
Barr, D. J., Levy, R., Scheepers, C., & Tily, H. J. (2013). Random effects for confirmatory hypothesis testing: Keep it maximal. Journal of Memory and Language, 68. DOI: http://doi.org/10.1016/j.jml.2012.11.001
Bierwisch, M. (1988). On the grammar of local prepositions. In M. Bierwisch, W. Motsch & I. Zimmermann (Eds.), Syntax, Semantik und Lexikon. Studia Grammatica XXIX (pp. 1–65). Berlin: Akademie Verlag.
Bott, O., & Solstad, T. (2014). From Verbs to Discourse: A Novel Account of Implicit Causality. In B. Hemforth, B. Mertins & C. Fabricius-Hansen (Eds.), Psycholinguistic Approaches to Meaning and Understanding across Languages (pp. 213–251). Cham: Springer. DOI: http://doi.org/10.1007/978-3-319-05675-3_9
Bühler, K. (1934). Sprachtheorie. Die Darstellungsfunktion der Sprache. Jena: G. Fischer.
Bylinina, L., McCready, E., & Sudo, Y. (2015). Notes on perspective-sensitivity. In P. Arkadiev, I. Kapitonov, Y. Lander, E. Rakhilina & S. Tatevosov (Eds.), Donum Semanticum. Opera Linguistica et Logica in Honorem Barbara Partee a discipulis amicisque rossicis oblata (pp. 67–79). Moscow: Languages of Slavic Culture.
Cappelen, H., & Dever, J. (2013). The Inessential Indexical. On the Philosophical Insignificance of Perspective and the First Person. Oxford: OUP. DOI: http://doi.org/10.1093/acprof:oso/9780199686742.001.0001
Ferstl, E. C., Garnham, A., & Manouilidou, C. (2011). Implicit causality bias in English: A corpus of 300 verbs. Behavior Research Methods, 43(1), 124–135. DOI: http://doi.org/10.3758/s13428-010-0023-2
Harris, J. A., & Potts, C. (2009). Perspective shifting with appositives and expressives. Linguistics and Philosophy, 32, 523–552. DOI: http://doi.org/10.1007/s10988-010-9070-5
Hörnig, R., Weskott, T., Kliegl, R., & Fanselow, G. (2006). Word order variation in spatial descriptions with adverbs. Memory & Cognition, 34, 1183–1192. DOI: http://doi.org/10.3758/BF03193264
Kaiser, E., & Cohen, A. (2012). Free Indirect Discourse and Perspective Taking. Poster presented at AMLaP 2012, Riva del Garda, Italy.
Kaiser, E., & Lee, J. H. (2017). Experience matters: A psycholinguistic investigation of predicates of personal taste. Retrieved from https://journals.linguisticsociety.org/proceedings/index.php/SALT/article/view/27.323/3848 (Last accessed on September 4, 2019). DOI: http://doi.org/10.3765/salt.v27i0.4151
Kaplan, D. (1989). Demonstratives: An essay on the semantics, logic, metaphysics and epistemology of demonstratives and other indexicals. In J. Almog, J. Perry & H. Wettstein (Eds.), Themes From Kaplan (pp. 481–563). Oxford: OUP.
Klages, J. (2020). Perspektivierung im Text: Interpretation und Verarbeitung. Georg-August-Universität Göttingen: PhD dissertation.
Klages, J., Holler, A., Kaiser, E., & Weskott, T. (2019). Discourse Expectations Induced by Perspectivization: The Case of Counteridenticals. Talk held at DETEC 2019, Leibniz-ZAS, Berlin.
Kracht, M. (2002). On the semantics of locatives. Linguistics and Philosophy, 25(2), 157–232. DOI: http://doi.org/10.1023/A:1014646826099
Kracht, M., & Klein, U. (2014). Notes on disagreement. In D. Gutzmann, J. Kopping & C. Meier (Eds.), Evaluations – Denotations – Entities. Studies in Context, Contents and the Foundation of Semantics (pp. 276–305). Leiden: Brill. DOI: http://doi.org/10.1163/9789004279377_013
Lasersohn, P. (2009). Relative truth, speaker commitment, and control of implicit arguments. Synthese, 166(2), 359–374. DOI: http://doi.org/10.1007/s11229-007-9280-8
Lasersohn, P. (2017). Subjectivity and Perspective in Truth-Theoretic Semantics. Oxford: OUP. DOI: http://doi.org/10.1093/acprof:oso/9780199573677.001.0001
Lewis, D. (1979). Attitudes De Dicto and Attitudes De Se. The Philosophical Review, 88(4), 513–543. DOI: http://doi.org/10.2307/2184843
McNally, L., & Stojanovic, I. (2014). Aesthetic adjectives. In J. Young (Ed.), The semantics of aesthetic judgment (pp. 17–37). Oxford: OUP. DOI: http://doi.org/10.1093/acprof:oso/9780198714590.003.0002
Mitchell, J. (1986). The Formal Semantics of Point of View. University of Massachusetts at Amherst: PhD dissertation.
Partee, B. (1989). Binding implicit variables in quantified contexts. In C. Wiltshire, R. Graczyk, & B. Music (Eds.), Papers from the 25th Annual Meeting of the Chicago Linguistic Society (pp. 342–365). Chicago: Chicago Linguistic Society.
Perry, J. (1979). The Problem of the Essential Indexical. Nous, 13(1), 3–21. DOI: http://doi.org/10.2307/2214792
Skopeteas, S., Hörnig, R., & Weskott, T. (2008). Contextual versus inherent properties of entities in space. Linguistische Berichte, 216, 431–456.
Stephenson, T. (2007). Judge dependence, epistemic modals, and predicates of personal taste. Linguistics and Philosophy, 30(4), 487–525. DOI: http://doi.org/10.1007/s10988-008-9023-4
Wunderlich, D., & Herweg, M. (1991). Lokale und direktionale. In D. Wunderlich & A. von Stechow (Eds.), Semantik. Ein internationales Handbuch der zeitgenossischen Forschung (=HSK vol. 6) (pp. 758–785). Berlin/New York: DeGryuter. | 2022-06-26 05:44:45 | {"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 4, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.6383559703826904, "perplexity": 2237.0697553019145}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2022-27/segments/1656103037089.4/warc/CC-MAIN-20220626040948-20220626070948-00626.warc.gz"} |
https://www.formulaequation.com/2020/04/average-acceleration-formula.html | The rate of change of velocity with time is called acceleration. In the previous lesson we have already studied acceleration in detail. in this lesson we are going to learn what is average acceleration and what is average acceleration formula with some basic questions.
What do you mean by average? An average can be defined as the number you get when you add two or more figures together and then divide the total by the number of figures you added. The average acceleration is calculated in a similar way.
## Average Acceleration Definition
Average acceleration is defined as the change in velocity divided by the time it took for the velocity change to take place.
The acceleration of an object often changes throughout its motion over a particular time interval, so it is often useful to calculate the average acceleration.
The average acceleration is a vector and it is denoted by 'a'. The SI unit of average acceleration is the metre per second squared (m/s²).
## Average Acceleration Formula
The formula of average acceleration can be written as change in velocity divided by the total time taken while the velocity is changing.
Average Acceleration = change in Velocity / time taken
a = (Vf - Vi) / (tf - ti)
a = ∆v / ∆t
Where,
• Vf is the final velocity
• Vi is the initial velocity
• tf is the final time
• ti is the initial time
• ∆v is the change in velocity
• ∆t is the total time taken while the velocity is changing.
If different velocities V1, V2, V3, …, Vn are given over the time intervals t1, t2, t3, …, tn respectively,
the average acceleration can be calculated with the formula given below,
Average acceleration = (v1 + v2 + v3 + … + vn) / (t1 + t2 + t3 + … + tn)
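The basic definition can be sketched as a short Python function (variable names are ours, not from the lesson):

```python
def average_acceleration(v_initial, v_final, t_initial, t_final):
    """Change in velocity divided by the time taken: (Vf - Vi) / (tf - ti)."""
    return (v_final - v_initial) / (t_final - t_initial)

# A car going from rest (0 m/s) to 15 m/s over 5 seconds:
print(average_acceleration(0.0, 15.0, 0.0, 5.0))  # 3.0 (m/s^2)
```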
### Average Acceleration Example
If a car starts from rest and reaches a velocity of 15 m/s in 5 seconds hence, the average acceleration of the car would be 3 m/s2. It means the velocity of the car will be increased by 3 metre per second every second. | 2021-01-17 03:43:35 | {"extraction_info": {"found_math": false, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.8772619962692261, "perplexity": 430.21276502462865}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2021-04/segments/1610703509104.12/warc/CC-MAIN-20210117020341-20210117050341-00743.warc.gz"} |
https://brilliant.org/problems/divisibility-is-just-an-excuse/ | # Divisibility is just an excuse.
Calculus Level 3
If $1+ \displaystyle \sum_{r=0}^{18} \{ r(r+2)+1 \}\cdot r! = k!$, then $k$ is not divisible by
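As a quick numerical sanity check (our own, not part of the posted problem), the left-hand side can be evaluated directly and matched against factorials:

```python
from math import factorial

# 1 + sum_{r=0}^{18} {r(r+2)+1} * r!
total = 1 + sum((r * (r + 2) + 1) * factorial(r) for r in range(19))

# Recover k from k! = total
k = 1
while factorial(k) < total:
    k += 1
assert factorial(k) == total
print(k)  # 20
```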
Set Loading... | 2018-07-19 19:40:57 | {"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 1, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.24940641224384308, "perplexity": 3731.512994805133}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": false}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2018-30/segments/1531676591216.51/warc/CC-MAIN-20180719183926-20180719203926-00419.warc.gz"} |
https://www.electronicspoint.com/forums/threads/matlab-rayleigh-fading-channel-simulation.9273/ | # Matlab Rayleigh Fading Channel Simulation?
Discussion in 'Electronic Basics' started by Davy, Mar 6, 2006.
1. ### DavyGuest
Hi all,
I have got a Rayleigh Fading Channel Simulation code by Matlab.
The code list below:
a = sqrt(0.5)*( randn(1, symbols_per_frame) + j*randn(1, symbols_per_frame) );
% complex noise
noise = sqrt(variance)*( randn(1, symbols_per_frame) + j*randn(1, symbols_per_frame) );
% in all
Is the code right? If not right, how to modify it?
Best regards,
Davy
2. ### Guest
Yes, it is perfectly right.
Generally, we assume a coherent system, so the phase is perfectly
estimated; hence the phase of the fading coefficient can be neglected.
-SaiRamesh.
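For readers without Matlab, an equivalent NumPy sketch (with `symbols_per_frame` and `variance` chosen arbitrarily here) looks like this; the fading taps have unit average power and Rayleigh-distributed magnitude:

```python
import numpy as np

rng = np.random.default_rng(0)
symbols_per_frame = 100_000
variance = 0.1  # assumed noise variance

# Complex Gaussian fading taps: Rayleigh-distributed |a|, uniformly
# distributed phase, and E[|a|^2] = 1 thanks to the sqrt(0.5) scaling.
a = np.sqrt(0.5) * (rng.standard_normal(symbols_per_frame)
                    + 1j * rng.standard_normal(symbols_per_frame))

# Complex additive noise, scaled the same way as the Matlab snippet.
noise = np.sqrt(variance) * (rng.standard_normal(symbols_per_frame)
                             + 1j * rng.standard_normal(symbols_per_frame))

print(np.mean(np.abs(a) ** 2))  # close to 1.0
```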
3. ### James G.Guest
Davy, you seem to assume that there is no correlation in temporal and
frequency domains, don't you?
4. ### DavyGuest
Hi,
Yes, there is no correlation in temporal and frequency domains.
I have read the Probability book. The frequency shift is uniformly
distribution. And the fading is Rayleigh distribution.
Thanks!
Davy
5. ### Guest
Hi,
Yes, the phase has uniform distribution and the magnitude is
Rayleigh distributed.
I have a question assuming coherent BPSK transmission
y = h*x + n, where h is a real number whose pdf is Rayleigh. Is neglecting the
phase a correct assumption?
-Regards,
-SaiRamesh. | 2020-11-25 16:55:08 | {"extraction_info": {"found_math": false, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.9281964898109436, "perplexity": 8503.700762221579}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2020-50/segments/1606141183514.25/warc/CC-MAIN-20201125154647-20201125184647-00292.warc.gz"} |
http://matthewrocklin.com/blog/work/2012/10/29/Matrix-Computations/ | I want to translate matrix expressions like this
(alpha*A*B).I * x
Into Fortran code that call BLAS and LAPACK code like this
subroutine f(alpha, A, B, x, n)
real*8, intent(in) :: A(n, n)
real*8, intent(inout) :: B(n, n)
real*8, intent(in) :: alpha
integer, intent(in) :: n
real*8, intent(inout) :: x(n, 1)
call dgemm('N', 'N', n, n, n, alpha, A, n, B, n, 0, B, n)
call dtrsv('L', 'N', 'N', n, B, n, x, 1)
RETURN
END
And then call it in Python like this
nA, nB, nx = .... # Get numpy arrays
f(nalpha, nA, nB.T, nx.T)
## What is BLAS?
BLAS stands for Basic Linear Algebra Subroutines. It is a library of Fortran functions for dense linear algebra first published in 1979.
The most famous BLAS routine is DGEMM, a routine for Double-precision GEneral Matrix-Matrix multiplication. DGEMM is very well implemented: it traditionally handles blocking for fewer cache misses, autotuning for each individual architecture, and even assembly-level code optimization. You should never code up your own matrix multiply; you should always use DGEMM. Unfortunately, you may not know Fortran, and, even if you did, you might find the function header to be daunting.
SUBROUTINE DGEMM(TRANSA,TRANSB,M,N,K,ALPHA,A,LDA,B,LDB,BETA,C,LDC)
Even if you’re capable of working at this low level, most scientific users are not. DGEMM is fast but inaccessible. To solve this problem we usually build layers on top of BLAS. For example numpy.dot calls DGEMM if the BLAS library is available on your system.
## Why not just use NumPy?
If you’re reading this then you’re probably comfortable with NumPy and you’re very happy that it gives you access to highly optimized low-level code like DGEMM. What else could we desire? NumPy has two flaws
1. Each operation occurs at the Python level. This causes sub-optimal operation ordering and lots of unnecessary copies. For example the following code is executed as follows
D = A*B*C # store A*B -> _1
D = _1*C # store _1*C -> _2
D = _2 # store _2 -> D
It might have been cheaper to multiply A*B*C as (A*B)*C or A*(B*C) depending on the shapes of the matrices. Additionally the temporary matrices _1 and _2 did not need to be created. If we're allowed to *reason about the computation* before execution then we can make some substantial optimizations.
2. BLAS contains many special functions for special cases. For example you can use DSYMM when one of your matrices is SYmmetric or DTRMM when one of your matrices is TRiangular. These allow for faster execution time if we are able to reason about our matrices.
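The operation-ordering point can be made concrete with a back-of-the-envelope flop count (the shapes below are illustrative, not from the post):

```python
# Multiplying an (m x k) matrix by a (k x n) matrix costs roughly m*k*n
# scalar multiplications.  Suppose A is 1000x2, B is 2x1000, C is 1000x1.
def matmul_cost(m, k, n):
    return m * k * n

left_first  = matmul_cost(1000, 2, 1000) + matmul_cost(1000, 1000, 1)  # (A*B)*C
right_first = matmul_cost(2, 1000, 1) + matmul_cost(1000, 2, 1)        # A*(B*C)
print(left_first, right_first)  # 3000000 4000
```

Same product, three orders of magnitude difference in work — exactly the kind of decision worth making before execution.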
## Previous Work
In the cases above we argue that we can make substantial gains if we are allowed to reason about the computation before it is executed. This is the job of a compiler. Computation usually happens as follows:
1. Write down code
2. Reason about and transform code
3. Execute code
Step (2) is often removed in scripting languages for programmer simplicity. There has been a lot of activity recently in putting it back in for array computations. The following projects compile array expressions prior to execution
1. NumExpr
2. Theano
3. Numba
4. … I’m undoubtedly forgetting many excellent projects. Here is a more complete list
## Where does SymPy fit in?
The projects above are all numerical in nature. They are generally good at solving problems of the first kind (operation ordering, inplace operations, …) but none of them think very clearly about the mathematical properties of the matrices. This is where SymPy can be useful. Using its logical assumptions framework, SymPy is able to reason about the properties of matrix expressions. Consider the following situation
We know that A is symmetric and positive definite. We know that B is orthogonal.
Question: is BAB' symmetric and positive definite?
Lets see how we can pose this question in SymPy.
>>> A = MatrixSymbol('A', n, n)
>>> B = MatrixSymbol('B', n, n)
>>> context = Q.symmetric(A) & Q.positive_definite(A) & Q.orthogonal(B)
>>> ask(Q.symmetric(B*A*B.T) & Q.positive_definite(B*A*B.T), context)
True
Positive-Definiteness is a very important property of matrix expressions. It strongly influences our choice of numerical algorithm. For example the fast Cholesky algorithm for LU decomposition may only be used if a matrix is symmetric and positive definite. Expert numerical analysts know this but most scientific programmers do not. NumPy does not know this but SymPy does.
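To illustrate with plain NumPy (a sketch of the numerical payoff, not SymPy's machinery): once a matrix is known to be symmetric positive definite, a Cholesky factorization can replace a general LU solve:

```python
import numpy as np

rng = np.random.default_rng(1)
M = rng.standard_normal((4, 4))
S = M @ M.T + 4 * np.eye(4)   # symmetric positive definite by construction
b = rng.standard_normal(4)

L = np.linalg.cholesky(S)     # S = L L^T; only valid because S is SPD
y = np.linalg.solve(L, b)     # forward substitution
x = np.linalg.solve(L.T, y)   # back substitution

print(np.allclose(S @ x, b))  # True
```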
## Describing BLAS
We describe a new matrix operation in SymPy with code like the following:
S = MatrixSymbol('S', n, n)

class LU(BLAS):
    """ LU Decomposition """
    _inputs   = (S,)
    _outputs  = (Lof(S), Uof(S))
    view_map  = {0: 0, 1: 0}  # Both outputs are stored in first input
    condition = True          # Always valid

class Cholesky(LU):
    """ Cholesky LU Decomposition """
    condition = Q.symmetric(S) & Q.positive_definite(S)
This description allows us to concisely describe the expert knowledge used by numerical analysts. It allows us to describe the mathematical properties of linear-algebraic operations.
## Matrix Computation Graphs
We usually write code in a linear top-down text file. This representation does not allow the full generality of a program. Instead we need to use a graph.
A computation can be described as a directed acyclic graph (DAG) where each node in the graph is an atomic computation (a function call like DGEMM or Cholesky) and each directed edge represents a data dependency between function calls (an edge from DGEMM to Cholesky implies that the Cholesky requires an output of the DGEMM call in order to run). This graph may not contain cycles - they would imply that some set of jobs all depend on each other; they could never start.
Graphs must be eventually linearized and turned into code. Before that happens we can think about optimal ordering and, if we feel adventurous, parallel scheduling onto different machines.
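A minimal linearization sketch using the standard library (the node names are illustrative, not SymPy's actual API):

```python
from graphlib import TopologicalSorter

# Map each computation to the set of computations it depends on.
deps = {
    "DGEMM": set(),
    "DTRSV": {"DGEMM"},  # DTRSV consumes DGEMM's output
}
order = list(TopologicalSorter(deps).static_order())
print(order)  # ['DGEMM', 'DTRSV']
```

Any valid linearization must respect these edges; when several orders are valid, that freedom is exactly what a scheduler can exploit.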
SymPy contains a very simple Computation graph object. Here we localize all of the logic about inplace operations, ordering, and (eventually) parallel scheduling.
## Translating Matrix Expressions into Matrix Computations
So how can we transform a matrix expression like
(alpha*A*B).I * x
And a set of predicates like
Q.lower_triangular(A) & Q.lower_triangular(B) & Q.invertible(A*B)
Into a graph of BLAS calls like one of the following?
DGEMM(alpha, A, B, 0, B) -> DTRSV(alpha*A*B, x)
DTRMM(alpha, A, B) -> DTRSV(alpha*A*B, x)
And, once we have this set of valid computations how do we choose the right one? This is the question that this project faces right now. These are both challenging problems. | 2014-10-24 11:17:25 | {"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 1, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.35753512382507324, "perplexity": 2412.714018381344}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2014-42/segments/1414119645866.1/warc/CC-MAIN-20141024030045-00192-ip-10-16-133-185.ec2.internal.warc.gz"} |
https://space.stackexchange.com/questions/21283/what-are-these-dishes-looking-at/21284 | # What are these dishes looking at?
In this CSA video, CSA astronaut Chris Hadfield describes the attitude control of the ISS and its importance for things like communication with Earth.
At 00:30 there is about a two-second vignette of these four identical fixed dish antennas, all apparently pointing in the same direction -- in the general direction of the Sun, which would be south if this is in North America.
Are these used to communicate or control the ISS, or is this just commercial stock footage of a cable tv station? | 2021-05-10 19:19:51 | {"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 1, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.2452230453491211, "perplexity": 1778.4238123739963}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2021-21/segments/1620243991759.1/warc/CC-MAIN-20210510174005-20210510204005-00322.warc.gz"} |
http://www.canadaka.net/content/page/62-monica-alongi-interview | Content Home » Canadian Girls Kick Ass
# Monica Alongi Interview
Photo by Doug Schneider
Many fitness models and competitors start as spectators, finding inspiration in a competition they saw on TV or read about in a magazine. Monica Alongi, a native of our nation's capital, Ottawa, was no different. Inspiration came after seeing Sylvia Tremblay, a 3-time Fitness Canada pageant winner, in a magazine she read. Sharing some similarities with her inspiration, as neither woman had any previous gymnastics experience, Monica was now ready to take on the world of fitness.
Today, that inexperience is a thing of the past, as Monica has spent the last five years weight training and the last two years training in gymnastics. Toss in the competitive spirit she's gained from competing in Karate and Jiu-Jitsu tournaments, and you realize you are dealing with a lot more than just a pretty face and firm body. Recently, Monica has been busy working with her sponsor InterACTIVE Nutrition through appearances at various fitness & health expos, as well as their upcoming advertising through magazines and website. Considering her hectic schedule; Canadaka.net correspondent John Jinks was fortunate enough to get the opportunity to learn a little bit more about this rising fitness star.
-- JJ: Monica, firstly thank you for taking the time to answer a few questions for me. I understand you have been rather busy lately. And I can't forget to mention that you are going to be featured as part of the 'Future of Fitness" in Oxygen magazine (Issue 41) later this month. How does it feel to be recognized by the same magazine that first inspired you?
MA: Yes, I've been busy! Fitness is a moving sport! I am squealing with excitement over my feature in Oxygen! I have been a subscriber to Oxygen Mag since their first issue. I really admire a lot of women in the magazine and it was merely a dream to be in it. It still seems like a dream! I feel that I have accomplished my dream and this has proved to me what every successful person says: "Anything is possible if you put your mind to it!".
JJ: You finished the 2002 competition season in September, and are currently training for the World Qualifier in July 2003. Is there anything important that you are taking with you leading up to July?
MA: This season was interesting, I learned a lot at the provincial level. The bar is a little higher and the girls are more competitive. I will take the attitude I had since I started which was to have a good time!
JJ: It mentions on your website that you train 5 days a week. And on top of your normal training, you also incorporate three 45-minute cardio sessions during the week. That certainly takes a great deal of dedication and focus. Is it second nature to you now, or are there still days where you don't feel like training?
MA: Training has become second nature to me. I really enjoy training and miss it if for some reason I can't go. There are days that I don't feel like training but sometimes those are the days I have the best work out! Even though I am dedicated, sometimes I still need a kick in the butt! I have pictures of my favourite fitness girls posted on my closet door as motivation. It also helps having a dedicated training partner like my fiance so we push each other to go to the gym.
JJ: Being an obviously attractive young woman, are men intimidated by you when they see you in the gym, or do they seem more intrigued?
MA: *blush* to your compliment & thank you... I don't think I'm intimidating; in fact, I have a lot of friends at the gym - guys and girls.
Photos by Doug Schneider and Renee Kimlova
JJ: How does fitness carry over into relationships or dating? Would you ever consider dating a man who was not as dedicated as you are to being fit?
MA: Fitness is a big part of my life. My fiancé and family are behind me 100% and share their enthusiasm and support through encouragement and assistance. I think it also encourages them to be fit and eat healthier. So it's a positive thing for everyone! As for considering a man who isn't dedicated to fitness, I would have to speak hypothetically since I am engaged to someone who is. Since fitness usually plays a big part of a competitor's life, it could clash if he doesn't understand why gym time is frequent and the diet is unordinary. So I don't think I could consider it being the right match for me.
JJ: I understand that McDonalds is one of your "cheat" foods, as well as donuts. Would you feel insulted if a gentleman wanted to take you for a bite to eat at McDonalds, or let's say Tim Hortons?
MA: I would not be insulted! I would be ecstatic! Of course, I would have to decline if it wasn't a cheat day!
JJ: Let's say you are going out with your fiancé for the evening. What would you consider to be the perfect night out?
MA: We enjoy going out for dinner or even spending time with close friends. Going to the movies is a real treat too. As long as eating is on the agenda - we know it'll be a good time!
JJ: Hypothetically, if you had not gotten involved in fitness, what do you think you would be doing?
MA: Wow, there are so many things that I would do! I'm sure I would be in martial arts like Aikido or Judo. I would even consider taking up dancing: Latin, ballroom, hip hop or belly!
JJ: Where would you like to see yourself a year or two from now?
MA: I see myself becoming more involved in fitness. I am looking into getting my certification so that I could have the option to train others. I am hoping to get more exposure in the magazines by taking part of some modeling. In 2003, I will continue competing in Fitness and also start in Figure. My fiance and I may explore the opportunity to promote shows as well!
JJ: I would like to take this time to thank you for giving us this opportunity to get to know you a little better. And I would like to wish you the best of luck with everything that lies ahead in the future.
MA: Thank you! I think CanadaKA is a great website, I am flattered to be a part of it! Keep up the great work!
more reviews » | 2020-07-02 07:07:25 | {"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.1710934042930603, "perplexity": 1809.4703416440123}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 5, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2020-29/segments/1593655878519.27/warc/CC-MAIN-20200702045758-20200702075758-00239.warc.gz"} |
https://socratic.org/questions/how-do-you-convert-270-degrees-to-radians | # How do you convert 270 degrees to radians?
Apr 8, 2018
$\frac{3 \pi}{2}$
#### Explanation:
We know:
$\color{red}{180^{\circ} = \pi \ \text{radians}}$
Dividing by ${180}^{\circ}$:
$\frac{{180}^{\circ}}{{180}^{\circ}} = \frac{\pi}{{180}^{\circ}} \ \text{radians}$
${1}^{\circ} = \frac{\pi}{{180}^{\circ}} \ \text{radians}$
We are looking for ${270}^{\circ}$:
Multiply by ${270}^{\circ}$
${270}^{\circ} = \frac{{270}^{\circ} \, \pi}{{180}^{\circ}} \ \text{radians}$
We now reduce the fraction to its lowest terms:
$\frac{270}{180} = \frac{27}{18} = \frac{3}{2}$
So:
${270}^{\circ} = \frac{3 \pi}{2} \ \text{radians}$
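The result can also be double-checked numerically in Python:

```python
import math

rad = math.radians(270)  # degrees -> radians, i.e. 270 * pi / 180
print(math.isclose(rad, 3 * math.pi / 2))  # True
```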
The quick method for this is:
Divide the angle you have in degrees by 180, reduce it to its lowest terms, and then multiply by $\pi$:
$\therefore$
$\frac{270}{180} = \frac{3}{2} \times \pi = \frac{3 \pi}{2}$ | 2022-08-17 03:53:23 | {"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 13, "mathjax_inline_tex": 1, "mathjax_display_tex": 0, "mathjax_asciimath": 1, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.7759705185890198, "perplexity": 1864.461096074953}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2022-33/segments/1659882572833.95/warc/CC-MAIN-20220817032054-20220817062054-00674.warc.gz"} |
http://math.stackexchange.com/questions/790962/find-the-range-of-this-function-hx-3x2x-2-xx-2/790983 | # Find the range of this function $h(x)= {(3x+2)(x-2)}/{x(x-2)}$?
Find the range of this function $h(x)= \frac{(3x+2)(x-2)}{x(x-2)}$?
I just don't know how to find the answer. It is $y \neq 3$, but how?
The domain of the function is $(-\infty, 0) \cup (0,2) \cup (2, \infty)$.
At each interval, find out whether the function is increasing or decreasing.
If the function is increasing on an interval $(a,b)$, the range there is $(\lim_{x \rightarrow a}f(x), \lim_{x \rightarrow b} f(x))$.
If the function is decreasing on an interval $(a,b)$, the range there is $(\lim_{x \rightarrow b}f(x), \lim_{x \rightarrow a} f(x))$.
Use contradiction. First, multiply out to get $\frac{3x^2-4x-4}{x^2-2x}$. Then set $y=3$. By cross-multiplying, you will see that $3x^2-6x=3x^2-4x-4$. If you simplify, you will get $2x=4$, or $x=2$. If you plug this in, you will see that the denominator of the fraction equals 0, so $y=3$ doesn't work.
Clearly, the function has a removable discontinuity at $x=2$. Removing that, we have: $$h(x)=\frac{3x+2}{x}$$ Which has a vertical asymptote at $x=0$. Now, what are the horizontal asymptotes? You should find that they are both equal to $y=3$. Now, does the function cross the asymptotes? Show the function has no local extrema and then plot the function.
You can also see this by solving for $x$: $$xh(x)=3x+2\ \longrightarrow \ x(h(x)-3)=2$$ Clearly, when $x=0$ or $h(x)=3$ there is a contradiction, so we know that $h(x)\neq 3$.
Assuming you mean that $h(x)=\frac{(3x+2)(x-2)}{x(x-2)}$, the function is the same as $\frac{3x+2}{x}$, which is $3+\frac2x$ except at $x=2$, where $y$ would otherwise be $4$. Such a function is a transform of $y=\frac1x$, which is one to one and only has one missing point from its domain, which is the limit of the function as $x\to\infty$. In this case, that is $3$. So both $3$ and $4$ are missing from the range. The range is $(-\infty,3)\cup(3,4)\cup(4,\infty)$.
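A quick symbolic check with SymPy (assuming it is installed) confirms both the cancellation and that $h(x)=3$ has no solution:

```python
import sympy as sp

x = sp.symbols('x')
h = (3*x + 2)*(x - 2) / (x*(x - 2))

print(sp.cancel(h))                        # (3*x + 2)/x
print(sp.solve(sp.Eq((3*x + 2)/x, 3), x))  # [] -- no x makes h(x) = 3
```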
- | 2015-11-30 13:52:23 | {"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 1, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.935075581073761, "perplexity": 86.33164733001472}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2015-48/segments/1448398462665.97/warc/CC-MAIN-20151124205422-00260-ip-10-71-132-137.ec2.internal.warc.gz"} |
https://dochero.tips/con8e01df6381536693ad53fbc8ead4ffe723305.html | # Con
text a w areness, a notion of securit y in the random- ... organize de nitions of secure encryption is b considering separately the v arious p ossible...
An extended abstract of this paper appears in Advances in Cryptology – CRYPTO '98, Lecture Notes in Computer Science Vol. 1462, H. Krawczyk ed., Springer-Verlag, 1998. This is the full paper.
Relations Among Notions of Security for Public-Key Encryption Schemes
M. Bellare, A. Desai, D. Pointcheval, P. Rogaway

June 2001
Abstract
We compare the relative strengths of popular notions of security for public-key encryption schemes. We consider the goals of privacy and non-malleability, each under chosen-plaintext attack and two kinds of chosen-ciphertext attack. For each of the resulting pairs of definitions we prove either an implication (every scheme meeting one notion must meet the other) or a separation (there is a scheme meeting one notion but not the other, assuming the first notion can be met at all). We similarly treat plaintext awareness, a notion of security in the random-oracle model. An additional contribution of this paper is a new definition of non-malleability which we believe is simpler than the previous one.
Keywords: Asymmetric encryption, chosen-ciphertext security, non-malleability, Rackoff-Simon attack, plaintext awareness, relations among definitions.
∗ Dept. of Computer Science & Engineering, University of California at San Diego, 9500 Gilman Drive, La Jolla, CA 92093, USA. E-mail: [email protected]. URL: http://www-cse.ucsd.edu/users/mihir. Supported in part by NSF CAREER Award CCR-9624439 and a 1996 Packard Foundation Fellowship in Science and Engineering.
† NTT Multimedia Communications Laboratories, 250 Cambridge Avenue, Suite 300, Palo Alto, CA 94306. E-mail: [email protected]. Work done while the author was at UCSD, supported in part by the above-mentioned grants of the first author.
‡ Laboratoire d'Informatique de l'École Normale Supérieure, 45 rue d'Ulm, F-75230 Paris Cedex 05. E-mail: [email protected]. URL: http://www.dmi.ens.fr/~pointche/.
§ Dept. of Computer Science, Engineering II Bldg., One Shields Avenue, University of California at Davis, Davis, CA 95616, USA. E-mail: [email protected]. URL: http://www.cs.ucdavis.edu/~rogaway/. Supported by NSF CAREER Award CCR-9624560 and a MICRO grant from RSA Data Security, Inc.
Contents

1 Introduction 1
  1.1 Notions of Encryption Scheme Security 1
  1.2 Implications and Separations 1
  1.3 Plaintext Awareness 2
  1.4 Definitional Contributions 3
  1.5 Motivation 3
  1.6 Related Work and Discussion 4
2 Definitions of Security 5
  2.1 Framework 6
  2.2 Indistinguishability of Encryptions 6
  2.3 Non-Malleability 7
3 Relating IND and NM 9
  3.1 Results 9
  3.2 Notation and Preliminaries 10
  3.3 Proof of Theorem 3.1: NM-ATK ⇒ IND-ATK 11
  3.4 Proof of Theorem 3.3: IND-CCA2 ⇒ NM-CCA2 12
  3.5 Proof of Theorem 3.5: IND-CCA1 ⇏ NM-CPA 13
  3.6 Proof of Theorem 3.6: NM-CPA ⇏ IND-CCA1 15
  3.7 Proof of Theorem 3.7: NM-CCA1 ⇏ NM-CCA2 18
4 Results on PA 22
  4.1 Definition 22
  4.2 Results 23
  4.3 Proof of Theorem 4.2: PA ⇒ IND-CCA2 24
  4.4 Proof of Theorem 4.4: IND-CCA2 ⇏ PA 26
References 28
1 Introduction

In this paper we compare the relative strengths of various notions of security for public-key encryption. We want to understand which definitions of security imply which others. We start by sorting out some of the notions we will consider.
1.2 Implications and Separations

In this paper we work out the relations between the above six notions. For each pair of notions A, B ∈ {IND-CPA, IND-CCA1, IND-CCA2, NM-CPA, NM-CCA1, NM-CCA2}, we show one of

¹ Goldwasser and Micali referred to IND-CPA as polynomial security, and also showed this was equivalent to another notion, semantic security.
[Figure 1 (diagram): the six notions NM-CPA, NM-CCA1, NM-CCA2 and IND-CPA, IND-CCA1, IND-CCA2, connected by implication arrows labelled 3.1 and 3.3 and by hatched separation arrows labelled 3.5, 3.6 and 3.7.]

Figure 1: An arrow is an implication, and in the directed graph given by the arrows, there is a path from A to B if and only if A ⇒ B. The hatched arrows represent separations we actually prove; all other separations follow from these. The number on an arrow or hatched arrow refers to the theorem in this paper which establishes this relationship.
the following:

A ⇒ B: A proof that if PE is any encryption scheme meeting notion of security A, then PE also meets notion of security B.

A ⇏ B: A construction of an encryption scheme PE that provably meets notion of security A but provably does not meet notion of security B.²

We call a result of the first type an implication, and a result of the second type a separation. For each pair of notions we provide one or the other, so that no relation remains open. These results are represented diagrammatically in Figure 1. The (unhatched) arrows represent implications that are proven or trivial, and the hatched arrows represent explicitly proven separations. Specifically, the non-trivial implication is that IND-CCA2 implies NM-CCA2, and the separations shown are that IND-CCA1 does not imply NM-CPA; nor does NM-CPA imply IND-CCA1; nor does NM-CCA1 imply NM-CCA2.

Figure 1 represents a complete picture of relations in the following sense. View the picture as a graph, the edges being those given by the (unhatched) arrows. (So there are eight edges.) We claim that for any pair of notions A, B, it is the case that A implies B if and only if there is a path from A to B in the graph. The "if" part of this claim is of course clear from the definition of implication. The "only if" part of this claim can be verified for any pair of notions by utilizing the hatched and unhatched arrows. For example, we claim that IND-CCA1 does not imply IND-CCA2. For if we had that IND-CCA1 implies IND-CCA2 then this, coupled with NM-CCA1 implying IND-CCA1 and IND-CCA2 implying NM-CCA2, would give NM-CCA1 implying NM-CCA2, which we know to be false.

That IND-CCA2 implies all of the other notions helps bolster the view that adaptive CCA is the "right" version of CCA on which to focus. (IND-CCA2 has already proven to be a better tool for protocol design.) We thus suggest that, in the future, "CCA" should be understood to mean adaptive CCA.
1.3 Plaintext Awareness

Another adversarial goal we will consider is plaintext awareness (PA), first defined by Bellare and Rogaway [6]. PA formalizes an adversary's inability to create a ciphertext y without "knowing" its underlying plaintext x. (In the case that the adversary creates an "invalid" ciphertext, what she should know is that the ciphertext is invalid.)

² This will be done under the assumption that there exists some scheme meeting notion A, since otherwise the question is vacuous. This (minimal) assumption is the only one made.
So far, plaintext awareness has only been defined in the random-oracle (RO) model. Recall that in the RO model one embellishes the customary model of computation by providing all parties (good and bad alike) with a random function H from strings to strings. See [5] for a description of the random-oracle model and a discussion of its use. The six notions of security we have described can be easily "lifted" to the RO model, giving six corresponding definitions. Once one makes such definitional analogs it is easily verified that all of the implications and separations mentioned in Section 1.2 and indicated in Figure 1 also hold in the RO setting. For example, the RO version of IND-CCA2 implies the RO version of NM-CCA2. Since PA has only been defined in the RO model it only makes sense to compare PA with other RO notions. Our results in this vein are as follows. Theorem 4.2 shows that PA (together with the RO version of IND-CPA) implies the RO version of IND-CCA2. In the other direction, Theorem 4.4 shows that the RO version of IND-CCA2 does not imply PA.
1.4 Definitional Contributions

Beyond the implications and separations we have described, we have two definitional contributions: a new definition of non-malleability, and a refinement to the definition of plaintext awareness. The original definition of non-malleability [13, 14, 15] is in terms of simulation, requiring, for every adversary, the existence of some appropriate simulator. We believe our formulation is simpler. It is defined via an experiment involving only the adversary; there is no simulator. Nonetheless, the definitions are equivalent [7], under any form of attack. Thus the results in this paper are not affected by the definitional change. We view the new definition as an additional, orthogonal contribution which could simplify the task of working with non-malleability. We also note that our definitional idea lifts to other settings, like defining semantic security [21] against chosen-ciphertext attacks. (Semantic security seems not to have been defined against CCA.) With regard to plaintext awareness, we make a small but important refinement to the definition of [6]. The change allows us to substantiate their claim that plaintext awareness implies chosen-ciphertext security and non-malleability, by giving us that PA (plus IND-CPA) implies the RO versions of IND-CCA2 and NM-CCA2. Our refinement is to endow the adversary with an encryption oracle, the queries to which are not given to the extractor. See Section 4.
1.5 Motivation

In recent years there has been an increasing role played by public-key encryption schemes which meet notions of security beyond IND-CPA. We are realizing that one of their most important uses is as tools for designing higher-level protocols. For example, encryption schemes meeting IND-CCA2 appear to be the right tools in the design of authenticated key exchange protocols in the public-key setting [1]. As another example, the designers of SET (Secure Electronic Transactions) selected an encryption scheme which achieves more than IND-CPA [28]. This was necessary, insofar as the SET protocols would be wrong if instantiated by a primitive which achieves only IND-CPA security. Because encryption schemes which achieve more than IND-CPA make for easier-to-use (or harder-to-misuse) tools, emerging standards rightly favor them.

We comment that if one takes the CCA models "too literally" the attacks we describe seem rather artificial. Take adaptive CCA, for example. How could an adversary have access to a decryption oracle, yet be forbidden to use it on the one point she really cares about? Either she has the oracle and can use it as she likes, or she does not have it at all. Yet, in fact, just such a setting effectively arises when encryption is used in session key exchange protocols. In general, one should not view the definitional scenarios we consider too literally, but rather understand that these are the right notions for schemes to meet when these schemes are to become generally-useful tools in the design of high level protocols.
1.6 Related Work and Discussion

Relations. The most recent version of the work of Dolev, Dwork and Naor, the manuscript [15], has, independently of our work, considered the question of relations among notions of encryption beyond IND-CPA. It contains (currently in Remark 3.6) various claims that overlap to some extent with ours. (Public versions of their work, namely the 1991 proceedings version [13] and the 1995 technical report [14], do not contain these claims.)

Foundations. The theoretical treatment of public-key encryption begins with Goldwasser and Micali [21] and continues with Yao [29], Micali, Rackoff and Sloan [24], and Goldreich [18, 19]. These works treat privacy under chosen-plaintext attack (the notion we are capturing via IND-CPA). They show that various formalizations of it are equivalent, in various models. Specifically, Goldwasser and Micali introduced, and showed equivalent, the notions of indistinguishability and semantic security; Yao introduced a notion based on computational entropy; Micali, Rackoff and Sloan showed that appropriate variants of the original definition are equivalent to this; Goldreich [18] made important refinements to the notion of semantic security and showed that the equivalences still held; and Goldreich [19] provided definitions and equivalences for the case of uniform adversaries. We build on these foundations both conceptually and technically. In particular, this body of work effectively justifies our adopting one particular formulation of privacy under chosen-plaintext attack, namely IND-CPA. None of the above works considered chosen-ciphertext attacks and, in particular, the question of whether indistinguishability and semantic security are equivalent in this setting. In fact, semantic security under chosen-ciphertext attack seems to have not even been defined. As mentioned earlier, definitions for semantic security under CCA can be obtained along the lines of our new definition of non-malleability. We expect (and hope) that, after doing this, the equivalence between semantic security and indistinguishability will continue to hold with respect to CCA, but this has not been checked.

Recent work on simplifying non-malleability. As noted above, Bellare and Sahai [7] have shown that the definition of non-malleability given in this paper is equivalent to the original one of [13, 14, 15]. In addition, they provide a novel formulation of non-malleability in terms of indistinguishability, showing that non-malleability is just a form of indistinguishability under a certain type of attack they call a parallel attack. Their characterization can be applied to simplify some of the results in this paper.

Schemes. It is not the purpose of this paper to discuss specific schemes designed for meeting any of the notions of security described in this paper. Nonetheless, as a snapshot of the state of the art, we attempt to summarize what is known about meeting "beyond-IND-CPA" notions of security. Schemes proven secure under standard assumptions include that of [26], which meets IND-CCA1, that of [13], which meets IND-CCA2, and the much more efficient recent scheme of Cramer and Shoup [10], which also meets IND-CCA2. Next are the schemes proven secure in a random-oracle model; here we have those of [5, 6], which meet PA and are as efficient as schemes in current standards. Then there are schemes without proofs, such as those of [11, 30]. Finally, there are schemes for non-standard models, like [16, 27]. We comment that it follows from our results that the above mentioned scheme of [10], shown to meet IND-CCA2, is also non-malleable, even under an adaptive chosen-ciphertext attack.
Symmetric encryption. This paper is about relating notions of security for public-key (i.e., asymmetric) encryption. The same questions can be asked for private-key (i.e., symmetric) encryption. Definitions for symmetric encryption scheme privacy under CPA were given by [2]. Those notions can be lifted to deal with CCA. Definitions for non-malleability in the private-key setting can be obtained by adapting the public-key ones. Again we would expect (and hope) that, if properly done, the analogs to the relations we have proven remain. One feature of definitions in this setting is worth highlighting. Recall that in the public-key setting, nothing special had to be done to model CPA; it corresponds just to giving the adversary the public key. Not so in a private-key setting. The suggestion of [3] is to give the adversary an oracle for encryption under the private key. This must be done in all definitions, and it is under this notion that we expect to see an analog of the results for the public-key case. Goldreich, in discussions on this issue, has noted that in the private-key case, one can consider an attack setting weaker than CPA, where the adversary is not given an encryption oracle. He points out that under this attack it will not even be true that non-malleability implies indistinguishability. Encryption scheme security which goes beyond indistinguishability is important in the private-key case too, and we feel it deserves a full treatment of its own which would explore and clarify some of the above issues.

Further remarks. We comment that non-malleability is a general notion that applies to primitives other than encryption [13]. Our discussion is limited to its use in asymmetric encryption. Bleichenbacher [8] has recently shown that a popular encryption scheme, RSA PKCS #1, does not achieve IND-CCA1. He also describes a popular protocol for which this causes problems. His results reinforce the danger of assuming anything beyond IND-CPA which has not been demonstrated. A preliminary version of this paper appeared as [3]. We include here material which was omitted from that abstract due to space limitations.
2 Definitions of Security

This section provides formal definitions for the six notions of security of an asymmetric (i.e., public-key) encryption scheme discussed in Section 1.1. Plaintext awareness will be described in Section 4. We begin by describing the syntax of an encryption scheme, divorcing syntax from the notions of security.

Experiments. We use standard notations and conventions for writing probabilistic algorithms and experiments. If A is a probabilistic algorithm, then A(x1, x2, ..., r) is the result of running A on inputs x1, x2, ... and coins r. We let y ← A(x1, x2, ...) denote the experiment of picking r at random and letting y be A(x1, x2, ..., r). If S is a finite set then x ← S is the operation of picking an element uniformly from S. If α is neither an algorithm nor a set then x ← α is a simple assignment statement. We say that y can be output by A(x1, x2, ...) if there is some r such that A(x1, x2, ..., r) = y.

Syntax and conventions. The syntax of an encryption scheme specifies what kinds of algorithms make it up. Formally, an asymmetric encryption scheme is given by a triple of algorithms, PE = (K, E, D), where K, the key generation algorithm, is a probabilistic algorithm that takes a security parameter k ∈ N and returns a pair (pk, sk) of matching public and secret keys; E, the encryption algorithm, is a probabilistic algorithm that takes a public key pk and a message x ∈ {0,1}* to produce a ciphertext y; and D, the decryption algorithm, is a deterministic algorithm which takes a secret key sk and a ciphertext y to produce either a message x ∈ {0,1}* or a special symbol ⊥ to indicate that the ciphertext was invalid. We require that for all (pk, sk) which can be output by K(k), for all x ∈ {0,1}*, and for all y that can be output by E_pk(x), we have that D_sk(y) = x. We also require that K, E and D can be computed in polynomial time. As the notation indicates, the keys are indicated as subscripts to the algorithms. Recall that a function ε: N → R is negligible if for every constant c ≥ 0 there exists an integer k_c such that ε(k) ≤ k^(-c) for all k ≥ k_c.
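To make the syntax concrete, here is a minimal Python sketch of a triple (K, E, D) obeying the interface above. The scheme itself is a hypothetical toy (the public and secret keys coincide, so it is in no way a public-key scheme); it illustrates only the syntax and the correctness condition D_sk(E_pk(x)) = x, with None playing the role of ⊥:

```python
import os

# Toy stand-in for an asymmetric scheme PE = (K, E, D), purely to
# illustrate the syntax above. It is NOT a secure public-key scheme:
# pk and sk are the same random pad.
def K(k: int):
    """Probabilistic key generation: returns a matching (pk, sk) pair."""
    pad = os.urandom(k)
    return pad, pad                  # hypothetical toy: pk == sk

def E(pk: bytes, x: bytes) -> bytes:
    """Probabilistic encryption: fresh coins r become part of the ciphertext."""
    r = os.urandom(len(pk))
    mask = bytes(a ^ b for a, b in zip(pk, r))
    body = bytes(m ^ mask[i % len(mask)] for i, m in enumerate(x))
    return r + body

def D(sk: bytes, y: bytes):
    """Deterministic decryption: returns the plaintext, or None (i.e. ⊥)."""
    if len(y) < len(sk):
        return None                  # invalid ciphertext
    r, body = y[:len(sk)], y[len(sk):]
    mask = bytes(a ^ b for a, b in zip(sk, r))
    return bytes(c ^ mask[i % len(mask)] for i, c in enumerate(body))

pk, sk = K(16)
assert D(sk, E(pk, b"hello")) == b"hello"   # the correctness requirement
```

Note that E is probabilistic (two encryptions of the same message differ with overwhelming probability) while D is deterministic, exactly as the syntax demands.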
2.2 Indistinguishability of Encryptions

The classical goal of secure encryption is to preserve the privacy of messages: an adversary should not be able to learn from a ciphertext information about its plaintext beyond the length of that plaintext. We define a version of this notion, indistinguishability of encryptions (IND), following [21, 24], through a simple experiment. Algorithm A1 is run on input the public key, pk. At the end of A1's execution she outputs a triple (x0, x1, s), the first two components being messages which we insist be of the same length, and the last being state information (possibly including pk) which she wants to preserve. A random one of x0 and x1 is now selected, say x_b. A "challenge" y is determined by encrypting x_b under pk. It is A2's job to try to determine if y was selected as the encryption of x0 or x1, namely to determine the bit b. To make this determination A2 is given the saved state s and the challenge ciphertext y.

For concision and clarity we simultaneously define indistinguishability with respect to CPA, CCA1, and CCA2. The only difference lies in whether or not A1 and A2 are given decryption oracles. We let the string atk be instantiated by any of the formal symbols cpa, cca1, cca2, while ATK is then the corresponding formal symbol from CPA, CCA1, CCA2. When we say O_i = ε, where i ∈ {1, 2}, we mean O_i is the function which, on any input, returns the empty string ε.
Definition 2.1 [IND-CPA, IND-CCA1, IND-CCA2] Let PE = (K, E, D) be an encryption scheme and let A = (A1, A2) be an adversary. For atk ∈ {cpa, cca1, cca2} and k ∈ N let

  Adv^{ind-atk}_{PE,A}(k) = Pr[Exp^{ind-atk-1}_{PE,A}(k) = 1] - Pr[Exp^{ind-atk-0}_{PE,A}(k) = 1]

where, for b ∈ {0, 1},

  Experiment Exp^{ind-atk-b}_{PE,A}(k):
    (pk, sk) ← K(k); (x0, x1, s) ← A1^{O1(·)}(pk); y ← E_pk(x_b); d ← A2^{O2(·)}(x0, x1, s, y)
    Return d

and

  If atk = cpa  then O1(·) = ε       and O2(·) = ε
  If atk = cca1 then O1(·) = D_sk(·) and O2(·) = ε
  If atk = cca2 then O1(·) = D_sk(·) and O2(·) = D_sk(·)

Above it is mandated that |x0| = |x1|. In the case of CCA2, we further insist that A2 does not ask its oracle to decrypt y. We say that PE is secure in the sense of IND-ATK if A being polynomial-time implies that Adv^{ind-atk}_{PE,A}(·) is negligible.
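The experiment of Definition 2.1 can be sketched as a small harness. Everything here (scheme, adversary, oracles) is a hypothetical callable supplied by the caller; the toy run at the bottom uses identity "encryption" in the CPA case (oracles unused), for which a trivial adversary has advantage exactly 1:

```python
# Sketch of experiment Exp^{ind-atk-b}: run A1 to get (x0, x1, s),
# encrypt x_b, run A2 on the challenge, and return its guess d.
# O1/O2 model the stage oracles; the CPA case passes None for both.
def ind_experiment(K, E, scheme_k, b, A1, A2, O1=None, O2=None):
    pk, sk = K(scheme_k)
    x0, x1, s = A1(pk, O1)
    assert len(x0) == len(x1)        # mandated: |x0| == |x1|
    y = E(pk, (x0, x1)[b])           # challenge encrypts x_b
    return A2(x0, x1, s, y, O2)      # the adversary's guess d

# A trivially insecure 'scheme' (identity encryption) to exercise it:
K_id = lambda k: ("pk", "sk")
E_id = lambda pk, x: x
A1 = lambda pk, O1: (b"0", b"1", None)
A2 = lambda x0, x1, s, y, O2: 1 if y == x1 else 0

adv = (ind_experiment(K_id, E_id, 16, 1, A1, A2)
       - ind_experiment(K_id, E_id, 16, 0, A1, A2))
assert adv == 1   # identity encryption is maximally distinguishable
```

A secure scheme is one for which no polynomial-time adversary plugged into this harness achieves a non-negligible gap between the b = 1 and b = 0 runs.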
2.3 Non-Malleability

Notation. We will need to discuss vectors of plaintexts or ciphertexts. A vector is denoted in boldface, as in x. We denote by |x| the number of components in x, and by x[i] the i-th component, so that x = (x[1], ..., x[|x|]). We extend the set membership notation to vectors, writing x ∈ x or x ∉ x to mean, respectively, that x is in or is not in the set {x[i] : 1 ≤ i ≤ |x|}. It will be convenient to extend the decryption notation to vectors with the understanding that operations are performed componentwise. Thus x ← D_sk(y) is shorthand for the following: for 1 ≤ i ≤ |y| do x[i] ← D_sk(y[i]). We will consider relations of arity t where t will be polynomial in the security parameter k. Rather than writing R(x1, ..., x_t) we write R(x, x), meaning the first argument is special and the rest are bunched into a vector x with |x| = t - 1.

Idea. The notion of non-malleability was introduced in [13], and refined subsequently. The goal of the adversary, given a ciphertext y, is not (as with indistinguishability) to learn something about its plaintext x, but only to output a vector y of ciphertexts whose decryption x is "meaningfully related" to x, meaning that R(x, x) holds for some relation R. The question is how exactly one measures the advantage of the adversary. This turns out to need care. One possible formalization
Definition 2.2 [NM-CPA, NM-CCA1, NM-CCA2] Let PE = (K, E, D) be an encryption scheme and let A = (A1, A2) be an adversary. For atk ∈ {cpa, cca1, cca2} and k ∈ N let

  Adv^{nm-atk}_{PE,A}(k) = Pr[Exp^{nm-atk-1}_{PE,A}(k) = 1] - Pr[Exp^{nm-atk-0}_{PE,A}(k) = 1]

where, for b ∈ {0, 1},

  Experiment Exp^{nm-atk-b}_{PE,A}(k):
    (pk, sk) ← K(k); (M, s) ← A1^{O1(·)}(pk); x0, x1 ← M; y ← E_pk(x1);
    (R, y) ← A2^{O2(·)}(M, s, y); x ← D_sk(y);
    If y ∉ y ∧ ⊥ ∉ x ∧ R(x_b, x) then d ← 1 else d ← 0
    Return d

and

  If atk = cpa  then O1(·) = ε       and O2(·) = ε
  If atk = cca1 then O1(·) = D_sk(·) and O2(·) = ε
  If atk = cca2 then O1(·) = D_sk(·) and O2(·) = D_sk(·)

We insist, above, that M is valid: |x| = |x'| for any x, x' that are given non-zero probability in the message space M. In the case of CCA2, we further insist that A2 does not ask its oracle to decrypt y. We say that PE is secure in the sense of NM-ATK if for every polynomial p(k): if A runs in time p(k), outputs a (valid) message space M samplable in time p(k), and outputs a relation R computable in time p(k), then Adv^{nm-atk}_{PE,A}(·) is negligible.
The condition that y ∉ y is made in order to not give the adversary credit for the trivial and unavoidable action of copying the challenge ciphertext. Otherwise, she could output the equality relation R, where R(a, b) holds iff a = b, and output y = (y), and be successful with probability one. We also declare the adversary unsuccessful when some ciphertext y[i] does not have a valid decryption (that is, ⊥ ∈ x), because in this case, the receiver is simply going to reject the adversary's message anyway. The requirement that M is valid is important; it stems from the fact that encryption is not intended to conceal the length of the plaintext.
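The NM experiment can be sketched in the same style as the IND one; vectors become Python lists and ⊥ becomes None. All scheme and adversary callables are hypothetical. The toy adversary below "mauls" by outputting the other message of a two-element space together with an inequality relation, so it succeeds with certainty when b = 1:

```python
import random

# Sketch of experiment Exp^{nm-atk-b} from Definition 2.2 (CPA case:
# no decryption oracles). A1 outputs a message-space sampler M and
# state; two messages are drawn from M; A2, given an encryption of x1,
# outputs a relation R and ciphertexts ys whose decryptions must be
# R-related to x_b without copying the challenge.
def nm_experiment(K, E, D, k, b, A1, A2):
    pk, sk = K(k)
    M, s = A1(pk)                    # M: sampler over equal-length messages
    x0, x1 = M(), M()
    y = E(pk, x1)                    # the challenge always encrypts x1
    R, ys = A2(M, s, y)
    xs = [D(sk, yi) for yi in ys]
    # Success: no copying of y, all decryptions valid, relation holds on x_b.
    return 1 if (y not in ys and None not in xs and R((x0, x1)[b], xs)) else 0

# Exercising the harness with identity 'encryption' and an adversary
# that outputs the other message of {b"0", b"1"} under the relation
# "is a different message":
K_id, E_id, D_id = (lambda k: ("pk", "sk")), (lambda pk, x: x), (lambda sk, y: y)
A1 = lambda pk: ((lambda: random.choice([b"0", b"1"])), None)
def A2(M, s, y):
    other = b"1" if y == b"0" else b"0"
    return (lambda a, xs: xs[0] != a), [other]

assert nm_experiment(K_id, E_id, D_id, 16, 1, A1, A2) == 1  # always wins for b=1
```

For b = 0 the relation is tested against an independent draw x0, so the toy adversary wins only when x0 happens to differ from x1; the gap between the two runs is its advantage.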
3 Relating IND and NM We state more precisely the results summarized in Figure 1 and provide proofs. As mentioned before, we summarize only the main relations (the ones that require proof); all other relations follow as corollaries.
3.1 Results

The first result, that non-malleability implies indistinguishability under any type of attack, was of course established by [13] in the context of their definition of non-malleability, but since we have a new definition of non-malleability, we need to re-establish it. The (simple) proof of the following is in Section 3.3.
Theorem 3.1 [NM-ATK ⇒ IND-ATK] If a scheme PE is secure in the sense of NM-ATK then PE is secure in the sense of IND-ATK, for any attack ATK ∈ {CPA, CCA1, CCA2}.
Remark 3.2 Recall that the relation R in Definition 2.2 was allowed to have any polynomially bounded arity. However, the above theorem holds even under a weaker notion of NM-ATK in which the relation R is restricted to have arity two. The proof of the following is in Section 3.4.
Theorem 3.3 [IND-CCA2 ⇒ NM-CCA2] If a scheme PE is secure in the sense of IND-CCA2 then PE is secure in the sense of NM-CCA2.
Remark 3.4 Theorem 3.3 coupled with Theorem 3.1 and Remark 3.2 says that in the case of CCA2 attacks, it suffices to consider binary relations, meaning the notion of NM-CCA2 restricted to binary relations is equivalent to the general one.

Now we turn to separations. Adaptive chosen-ciphertext security implies non-malleability according to Theorem 3.3. In contrast, the following says that non-adaptive chosen-ciphertext security does not imply non-malleability. The proof is in Section 3.5.

Theorem 3.5 [IND-CCA1 ⇏ NM-CPA] If there exists an encryption scheme PE which is secure in the sense of IND-CCA1, then there exists an encryption scheme PE' which is secure in the sense of IND-CCA1 but which is not secure in the sense of NM-CPA.

Now one can ask whether non-malleability implies chosen-ciphertext security. The following says it does not even imply the non-adaptive form of the latter. (As a corollary, it certainly does not imply the adaptive form.) The proof is in Section 3.6.
Theorem 3.6 [NM-CPA ⇏ IND-CCA1] If there exists an encryption scheme PE which is secure in the sense of NM-CPA, then there exists an encryption scheme PE' which is secure in the sense of NM-CPA but which is not secure in the sense of IND-CCA1.
Now the only relation that does not immediately follow from the above results or by a trivial reduction is that the version of non-malleability allowing CCA1 does not imply the version that allows CCA2. See Section 3.7 for the proof of the following.
Theorem 3.7 [NM-CCA1 ⇏ NM-CCA2] If there exists an encryption scheme PE which is secure in the sense of NM-CCA1, then there exists an encryption scheme PE' which is secure in the sense of NM-CCA1 but which is not secure in the sense of NM-CCA2.
3.2 Notation and Preliminaries

For relations R which could be of arbitrary arity we use the simplifying notation R(a, b) as a shorthand for R(a, b) when it is clear that b[1] = b and |b| = 1. We let ā denote the bitwise complement (namely the string obtained by flipping each bit) of a. For an IND-ATK adversary A = (A1, A2) we will, whenever convenient, assume that the messages x0, x1 that A1 outputs are distinct. Intuitively this cannot decrease the advantage because the contribution to the advantage in case they are equal is zero. Actually one has to be a little careful. The claim will be that we can modify A to make sure that the output messages are distinct, and one has to be careful to make sure that when A outputs equal messages the modified adversary does not get any advantage, so that the advantage of the modified adversary is the same as that of the original one. For completeness we encapsulate the claim in the following proposition.
Proposition 3.8 Let A = (A1, A2) be an IND-ATK adversary attacking PE. Then there is an IND-ATK adversary B = (B1, B2) attacking PE such that the two messages output by B1 are always distinct and Adv^{ind-atk}_{PE,B}(k) = Adv^{ind-atk}_{PE,A}(k).

Proof: B1 runs A1 to obtain (x0, x1, s). If x0 ≠ x1 it sets (x0', x1') = (x0, x1) and d = 0; otherwise it replaces the messages by a pair of distinct messages x0', x1' of the same length and sets d = 1. It then outputs (x0', x1', s‖d). B2 runs A2 when d = 0, and outputs a random bit when d = 1. Consider the two experiments

  Experiment1: (pk, sk) ← K(k); (x0, x1, s) ← A1^{O1}(pk); b ← {0, 1}; y ← E_pk(x_b); c ← A2^{O2}(x0, x1, s, y)

  Experiment2: (pk, sk) ← K(k); (x0, x1, s) ← A1^{O1}(pk); b ← {0, 1}; y ← E_pk(x'_b); c ← B2^{O2}(x0', x1', s‖d, y).

In the last experiment, x0', x1', d are defined in terms of x0, x1 as per the code of B1. Let Pr1[·] = Pr[Experiment1 : ·] be the probability function under Experiment1 and Pr2[·] = Pr[Experiment2 : ·] be that under Experiment2. By definition

  Adv^{ind-atk}_{PE,A}(k) = 2·Pr1[b = c] - 1  and  Adv^{ind-atk}_{PE,B}(k) = 2·Pr2[b = c] - 1.

Thus it suffices to show that Pr1[b = c] = Pr2[b = c]. Let E denote the event that x0 = x1, or, equivalently, that d = 1. Then

  Pr1[b = c] = Pr1[b = c | E]·Pr1[E] + Pr1[b = c | ¬E]·Pr1[¬E]
  Pr2[b = c] = Pr2[b = c | E]·Pr2[E] + Pr2[b = c | ¬E]·Pr2[¬E].

That Pr1[b = c] = Pr2[b = c] now follows by putting together the following observations:

- Pr1[E] = Pr2[E], since E depends only on A1.
- Pr1[b = c | E] = 1/2, because when E is true, A2 has no information about b. On the other hand Pr2[b = c | E] = 1/2, because when E is true we have B2 output a random bit.
- Pr1[b = c | ¬E] = Pr2[b = c | ¬E], because in this case the experiments are the same, namely we are looking at the output of A2.

This completes the proof of Proposition 3.8.
3.3 Proof of Theorem 3.1: NM-ATK ⇒ IND-ATK
We are assuming that encryption scheme PE is secure in the NM-ATK sense. We will show it is also secure in the IND-ATK sense. Let B = (B1, B2) be an IND-ATK adversary attacking PE. We want to show that Adv^{ind-atk}_{PE,B}(·) is negligible. To this end, we describe an NM-ATK adversary A = (A1, A2) attacking PE. Adversaries A and B have access to an oracle O1 in their first stage and an oracle O2 in their second stage, these oracles being instantiated according to the attack ATK as per the definitions. Recall that z̄ denotes the bitwise complement of a string z.

  Algorithm A1^{O1}(pk):
    (x0, x1, s) ← B1^{O1}(pk); M := {x0, x1}; s' ← (x0, x1, pk, s); return (M, s')

  Algorithm A2^{O2}(M, s', y), where s' = (x0, x1, pk, s):
    c ← B2^{O2}(x0, x1, s, y); y' ← E_pk(x̄_c); return (R, y'), where R(a, b) = 1 iff ā = b

The notation M := {x0, x1} means that M is being assigned the probability space which assigns to each of x0 and x1 a probability of 1/2. A2^{O2} outputs (the description of) the complement relation R, which for any arguments a, b is 1 if ā = b and 0 otherwise. We consider the advantage of A, given by

  Adv^{nm-atk}_{PE,A}(k) = Pr[Exp^{nm-atk-1}_{PE,A}(k) = 1] - Pr[Exp^{nm-atk-0}_{PE,A}(k) = 1]

where

  Pr[Exp^{nm-atk-1}_{PE,A}(k) = 1] = Pr[(pk, sk) ← K(k); (M, s') ← A1^{O1}(pk); x ← M; y ← E_pk(x); (R, y') ← A2^{O2}(M, s', y); x' ← D_sk(y') : y ≠ y' ∧ ⊥ ≠ x' ∧ R(x, x')]

  Pr[Exp^{nm-atk-0}_{PE,A}(k) = 1] = Pr[(pk, sk) ← K(k); (M, s') ← A1^{O1}(pk); x, x̃ ← M; y ← E_pk(x); (R, y') ← A2^{O2}(M, s', y); x' ← D_sk(y') : y ≠ y' ∧ ⊥ ≠ x' ∧ R(x̃, x')].

The advantage of B is given by Adv^{ind-atk}_{PE,B}(k) = 2·Pr[Exp^{ind-atk}_{PE,B}(k) = b] - 1, where

  Pr[Exp^{ind-atk}_{PE,B}(k) = b] = Pr[(pk, sk) ← K(k); (x0, x1, s) ← B1^{O1}(pk); b ← {0, 1}; y ← E_pk(x_b); c ← B2^{O2}(x0, x1, s, y) : c = b].

By Proposition 3.8 we may assume here, without loss of generality, that we always have x0 ≠ x1. This turns out to be important below.

Claim 1: Pr[Exp^{nm-atk-1}_{PE,A}(k) = 1] = Pr[Exp^{ind-atk}_{PE,B}(k) = b].

Proof: Look first at the code of A2. Note that R(x, x') is true iff D_sk(y') = x̄. Also note that when R(x, x') is true it must be that x' ≠ x and hence, by the unique decryptability of the encryption scheme, that y ≠ y'. Also we always have ⊥ ≠ x'. Now, consider Exp^{ind-atk}_{PE,B}(k). An important observation is that D_sk(y') = x̄_b iff b = c. (This uses the fact that x0 ≠ x1, and would not be true otherwise.) Now one can put this together with the above and see that b = c in the experiment underlying Exp^{ind-atk}_{PE,B} exactly when y ≠ y' ∧ ⊥ ≠ x' ∧ R(x, x') in the experiment Exp^{nm-atk-1}_{PE,A}(k).

Claim 2: Pr[Exp^{nm-atk-0}_{PE,A}(k) = 1] = 1/2.

Proof: This follows from an information theoretic fact, namely that A has no information about the message x̃ with respect to which its success is measured.

Now we can apply the claims to get Adv^{ind-atk}_{PE,B}(k) = 2·Adv^{nm-atk}_{PE,A}(k). But since PE is secure in the NM-ATK sense we know that Adv^{nm-atk}_{PE,A}(·) is negligible, and hence the above implies Adv^{ind-atk}_{PE,B}(·) is negligible too. This concludes the proof of Theorem 3.1. The claim of Remark 3.2 is clear from the above because the relation R output by A is binary.
3.4 Proof of Theorem 3.3: IND-CCA2 ⇒ NM-CCA2

Given a polynomial time adversary B = (B1,B2) attacking PE in the NM-CCA2 sense, we construct an adversary A = (A1,A2) attacking PE in the IND-CCA2 sense. A1^{D_sk}(pk) runs (M,s) ← B1^{D_sk}(pk), samples x0,x1 ← M, and returns (x0,x1,s') where s' = (M,s). A2^{D_sk}(x0,x1,s',y) runs (R,y⃗) ← B2^{D_sk}(M,s,y), computes x⃗ ← D_sk(y⃗) using its decryption oracle (a component equal to the challenge y need not be queried, since the test below then fails anyway), returns 0 if y ∉ y⃗ ∧ ⊥ ∉ x⃗ ∧ R(x0,x⃗), and otherwise returns a coin flip. For b ∈ {0,1} we let

p_k(b) = Pr[ (pk,sk) ← K(k); (M,s) ← B1^{D_sk}(pk); x0,x1 ← M; y ← E_pk(x_b) : A2^{D_sk}(x0,x1,(M,s),y) = 0 ].

Also for b ∈ {0,1} we let

p'_k(b) = Pr[ (pk,sk) ← K(k); (M,s) ← B1^{D_sk}(pk); x0,x1 ← M; y ← E_pk(x_b); (R,y⃗) ← B2^{D_sk}(M,s,y); x⃗ ← D_sk(y⃗) : y ∉ y⃗ ∧ ⊥ ∉ x⃗ ∧ R(x0,x⃗) ].

Now observe that A2 may return 0 either when x⃗ is R-related to x0 or as a result of the coin flip. Continuing with the advantage then,

Adv^{ind-cca2}_{PE,A}(k) = p_k(0) − p_k(1) = (1/2)[1 + p'_k(0)] − (1/2)[1 + p'_k(1)] = (1/2)[p'_k(0) − p'_k(1)].

We now observe that the experiment of B2 being given a ciphertext of x1 and R-relating x⃗ to x0 is exactly that of Exp^{nm-cca2-0}_{PE,B}(k); that is, p'_k(1) = Pr[Exp^{nm-cca2-0}_{PE,B}(k) = 1]. On the other hand, in case it is a ciphertext of x0, we are looking at p'_k(0) = Pr[Exp^{nm-cca2-1}_{PE,B}(k) = 1]. Thus

Adv^{nm-cca2}_{PE,B}(k) = p'_k(0) − p'_k(1) = 2·Adv^{ind-cca2}_{PE,A}(k).

But we know that Adv^{ind-cca2}_{PE,A}(·) is negligible because PE is secure in the sense of IND-CCA2. It follows that Adv^{nm-cca2}_{PE,B}(·) is negligible, as desired.
3.5 Proof of Theorem 3.5: IND-CCA1 ⇏ NM-CPA

Assume there exists some IND-CCA1 secure encryption scheme PE = (K,E,D), since otherwise the theorem is vacuously true. We now modify PE to a new encryption scheme PE' = (K',E',D') which is also IND-CCA1 secure but not secure in the NM-CPA sense. This will prove the theorem.

The new encryption scheme PE' = (K',E',D') is defined as follows. Here x̄ denotes the bitwise complement of string x, namely the string obtained by flipping each bit of x.

Algorithm K'(k): (pk,sk) ← K(k); return (pk,sk)
Algorithm E'_pk(x): y1 ← E_pk(x); y2 ← E_pk(x̄); return y1‖y2
Algorithm D'_sk(y1‖y2): return D_sk(y1)

In other words, a ciphertext in the new scheme is a pair y1‖y2 consisting of an encryption of the message and an encryption of its complement. In decrypting, the second component is ignored. It is now quite easy to see that:

Claim 3.9 PE' is not secure in the NM-CPA sense.
Proof: Given a ciphertext y1‖y2 of a message x, it is easy to create a ciphertext of x̄: just output y2‖y1. Thus, the scheme is malleable. Formally, we can specify a polynomial time adversary A = (A1,A2) that breaks PE' in the sense of NM-CPA, with probability almost one, as follows. A1(pk) outputs (M,ε) where M puts a uniform distribution on {0,1}^k. Then algorithm A2(M,ε,y1‖y2) outputs (R, y2‖y1) where R describes the binary relation defined by R(m1,m2) = 1 iff m1 = m̄2. It is easy to see that the plaintext x' corresponding to the ciphertext that A outputs is R-related to x with probability 1. Observe that the probability of some random plaintext x̃ being R-related to x' is at most 2^{−k}. Thus Adv^{nm-cpa}_{PE',A}(k) is 1 − 2^{−k}, which is not negligible. (In fact it is close to one.) Hence A is a successful adversary and the scheme is not secure in the sense of NM-CPA. □
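The mauling attack of Claim 3.9 is purely structural, so it can be illustrated concretely. The following is a minimal Python sketch; the base scheme here is a toy XOR stand-in (not the scheme PE of the theorem, and deliberately insecure), and all names are ours. It shows only the structural point: swapping the two halves of a PE'-ciphertext yields a valid encryption of the complemented plaintext.

```python
import os

def complement(b: bytes) -> bytes:
    """Bitwise complement of a byte string."""
    return bytes(x ^ 0xFF for x in b)

# Toy base scheme stand-in (illustration only, NOT secure).
def keygen(k=16):
    key = os.urandom(k)
    return key, key                     # (pk, sk) in this toy stand-in

def enc(pk, x):                         # base E
    return bytes(a ^ b for a, b in zip(x, pk))

def dec(sk, y):                         # base D
    return bytes(a ^ b for a, b in zip(y, sk))

# Modified scheme PE': encrypt the message and its bitwise complement.
def enc_prime(pk, x):
    return enc(pk, x), enc(pk, complement(x))   # the pair y1 || y2

def dec_prime(sk, c):
    y1, y2 = c
    return dec(sk, y1)                  # second component is ignored

# The malleability attack of Claim 3.9: swap the two halves.
def maul(c):
    y1, y2 = c
    return (y2, y1)

pk, sk = keygen()
x = b"attack at dawn!!"
c = enc_prime(pk, x)
assert dec_prime(sk, c) == x
assert dec_prime(sk, maul(c)) == complement(x)
```

Swapping the halves never touches the underlying encryption, which is why the attack works for any base scheme plugged into this construction.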
On the other hand, a hybrid argument establishes that PE' retains the IND-CCA1 security of PE:

Claim 3.10 PE' is secure in the sense of IND-CCA1.

Proof: Let B = (B1,B2) be some polynomial time adversary attacking PE' in the IND-CCA1 sense. We want to show that Adv^{ind-cca1}_{PE',B}(k) is negligible. To do so, consider the following probabilities, defined for i,j ∈ {0,1}:
p_k(i,j) = Pr[ (pk,sk) ← K(k); (x0,x1,s) ← B1^{D'_sk}(pk); y1 ← E_pk(x_i); y2 ← E_pk(x̄_j) : B2(x0,x1,s,y1‖y2) = 1 ].
We know that Adv^{ind-cca1}_{PE',B}(k) = p_k(1,1) − p_k(0,0). The following lemmas state that, under our assumption that PE is IND-CCA1-secure, it must be that the differences p_k(1,1) − p_k(1,0) and p_k(1,0) − p_k(0,0) are both negligible. This will complete the proof since

Adv^{ind-cca1}_{PE',B}(k) = p_k(1,1) − p_k(0,0) = [p_k(1,1) − p_k(1,0)] + [p_k(1,0) − p_k(0,0)],

being the sum of two negligible functions, will be negligible. So it remains to (state and) prove the lemmas.

Lemma 1: p_k(1,1) − p_k(1,0) is negligible.
Proof: We can construct an adversary A = (A1,A2) that attacks the scheme PE in the IND-CCA1 sense, as follows:
Algorithm A1^{D_sk}(pk):
  (x0,x1,s) ← B1^{D'_sk}(pk)
  m0 ← x̄0; m1 ← x̄1
  return (m0,m1,s)

Algorithm A2(m0,m1,s,y):
  y1 ← E_pk(m̄1); y2 ← y
  d ← B2(m̄0,m̄1,s,y1‖y2)
  return d

The computation B1^{D'_sk}(pk) is done by A1 simulating the D'_sk oracle. It can do this by replying to a query y1‖y2 via D_sk(y1), using its own D_sk oracle and the definition of D'_sk. This adversary is polynomial time. One can now check the following:

Pr[ (pk,sk) ← K(k); (m0,m1,s) ← A1^{D_sk}(pk); y ← E_pk(m1) : A2(m0,m1,s,y) = 1 ] = p_k(1,1)
Pr[ (pk,sk) ← K(k); (m0,m1,s) ← A1^{D_sk}(pk); y ← E_pk(m0) : A2(m0,m1,s,y) = 1 ] = p_k(1,0)

Thus Adv^{ind-cca1}_{PE,A}(k) = p_k(1,1) − p_k(1,0). The assumed security of PE in the IND-CCA1 sense now implies the latter difference is negligible. □

Lemma 2: p_k(1,0) − p_k(0,0) is negligible.
Proof: We can construct an adversary A = (A1,A2) that attacks the scheme PE in the IND-CCA1 sense, as follows:
Algorithm A1^{D_sk}(pk):
  (x0,x1,s) ← B1^{D'_sk}(pk)
  return (x0,x1,s)

Algorithm A2(x0,x1,s,y):
  y1 ← y; y2 ← E_pk(x̄0)
  d ← B2(x0,x1,s,y1‖y2)
  return d

Again A is polynomial time and can simulate D'_sk given D_sk. We observe that

Pr[ (pk,sk) ← K(k); (x0,x1,s) ← A1^{D_sk}(pk); y ← E_pk(x1) : A2(x0,x1,s,y) = 1 ] = p_k(1,0)
Pr[ (pk,sk) ← K(k); (x0,x1,s) ← A1^{D_sk}(pk); y ← E_pk(x0) : A2(x0,x1,s,y) = 1 ] = p_k(0,0)

Thus Adv^{ind-cca1}_{PE,A}(k) = p_k(1,0) − p_k(0,0). The assumed security of PE in the IND-CCA1 sense now implies the latter difference is negligible. □

This completes the proof of Claim 3.10.
Remark 3.11 We could have given a simpler scheme PE' than the one above that would be secure in the IND-CCA1 sense but not in the NM-CPA sense. Let K' be as above, let E'_pk(x) = y‖b where y ← E_pk(x) and b ← {0,1}, and let D'_sk(y‖b) = D_sk(y). The malleability of PE' arises out of the ability of the adversary to create another ciphertext from the challenge ciphertext y‖b, by returning y‖b̄. This is allowed by Definition 2.2, since the only restriction is that the vector of ciphertexts y⃗ the adversary outputs should not contain y‖b. However, the definition of [13] did not allow this, and, in order to have a stronger separation result that also applies to their notion, we gave the above more involved construction.
3.6 Proof of Theorem 3.6: NM-CPA ⇏ IND-CCA1

Let's first back up a bit and provide some intuition about why the theorem might be true and how we can prove it.

Intuition and first attempts. At first glance, one might think NM-CPA does imply IND-CCA1 (or even IND-CCA2), for the following reason. Suppose an adversary has a decryption oracle, and is asked to tell whether a given ciphertext y is the encryption of x0 or x1, where x0, x1 are messages she has chosen earlier. She is not allowed to call the decryption oracle on y. It seems then that the only strategy she could have is to modify y to some related y', call the decryption oracle on y', and use the answer to somehow help her determine whether the decryption of y was x0 or x1. But if the scheme is non-malleable, creating a y' meaningfully related to y is not possible, so the scheme must be chosen-ciphertext secure.

The reasoning above is fallacious. The flaw is in thinking that to tell whether y is an encryption of x0 or x1, one must obtain a decryption of a ciphertext y' related to the challenge ciphertext y. In fact, what can happen is that there are certain strings whose decryption yields information about the secret key itself, yet the scheme remains non-malleable.

The approach to proving the theorem is to modify an NM-CPA scheme PE = (K,E,D) into a new scheme PE' = (K',E',D') which is also NM-CPA but can be broken under a non-adaptive chosen-ciphertext attack. (We can assume an NM-CPA scheme exists since otherwise there is nothing to prove.) A first attempt to implement the above idea (of having the decryption of certain strings carry information about the secret key) is straightforward. Fix some ciphertext u not in the range of E and define D'_sk(u) = sk, so that decryption returns the secret key whenever it is given this special ciphertext. In all other respects, the new scheme is the same as the old one.
It is quite easy to see that this scheme falls to a (non-adaptive) chosen-ciphertext attack, because the adversary need only make query u of its decryption oracle to recover the entire secret key. The problem is that it is not so easy to tell whether this scheme remains non-malleable. (Actually, we don't know whether it is or not, but we certainly don't have a proof that it is.) As this example indicates, it is easy to patch PE so that it can be broken in the sense of IND-CCA1; what we need is that it also be easy to prove that it remains NM-CPA secure. The idea of our construction below is to use a level of indirection: sk is returned by D' in response to a query v which is itself a random string that can only be obtained by querying D' at some other known point u. Intuitively, this scheme will be NM-CPA secure since v will remain unknown to the adversary.

Our construction. Given a non-malleable encryption scheme PE = (K,E,D) we define a new encryption scheme PE' = (K',E',D') as follows:

Algorithm K'(k):
  (pk,sk) ← K(k); u,v ← {0,1}^k
  pk' ← pk‖u; sk' ← sk‖u‖v
  return (pk',sk')

Algorithm E'_{pk‖u}(x):
  y ← E_pk(x)
  return 0‖y

Algorithm D'_{sk‖u‖v}(b‖y) where b ∈ {0,1}:
  if b = 0 then return D_sk(y)
  else if y = u then return v
  else if y = v then return sk
  else return ⊥

Analysis. The proof of Theorem 3.6 is completed by establishing that PE' is vulnerable to an IND-CCA1 attack but remains NM-CPA secure.

Claim 3.12 PE' is not secure in the sense of IND-CCA1.

Proof: The adversary queries D'_{sk‖u‖v}(·) at 1‖u to get v, and then queries it at the point 1‖v to get sk. At this point, knowing the secret key, she can obviously perform the distinguishing task we later require of her. If you wish to see it more formally, the find stage A1 of the adversary gets pk as above and outputs any two distinct, equal-length messages x0, x1. In the next stage, it receives a ciphertext 0‖y ← E'_{pk‖u}(x_b) where b was a random bit. Now it can compute D_sk(y) to recover the message and thus determine b with probability one. It is obviously polynomial time. □
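The indirection u → v → sk, and the two-query first-stage attack of Claim 3.12, can be sketched concretely. This is an illustrative Python toy (all names are ours, and the base scheme is an insecure XOR stand-in), not the construction itself:

```python
import os

K_BYTES = 16  # toy parameter; the paper's k is in bits

# Toy base scheme stand-in (illustration only, not secure).
def base_keygen():
    key = os.urandom(K_BYTES)
    return key, key                      # (pk, sk)

def base_dec(sk, y):
    return bytes(a ^ b for a, b in zip(y, sk))

# K' of the construction: append random u, v to the keys.
def keygen_prime():
    pk, sk = base_keygen()
    u, v = os.urandom(K_BYTES), os.urandom(K_BYTES)
    return (pk, u), (sk, u, v)           # (pk', sk')

# D' with the level of indirection: u reveals v, v reveals sk.
def dec_prime(sk_prime, b, y):
    sk, u, v = sk_prime
    if b == 0:
        return base_dec(sk, y)           # ordinary ciphertexts 0||y
    if y == u:
        return v                         # reveals the random pointer v
    if y == v:
        return sk                        # reveals sk, but only given v
    return None                          # "bottom"

# The IND-CCA1 attack of Claim 3.12: two first-stage queries.
def recover_secret_key(pk_prime, dec_oracle):
    _, u = pk_prime                      # u is published with pk
    v = dec_oracle(1, u)
    return dec_oracle(1, v)

pk_p, sk_p = keygen_prime()
oracle = lambda b, y: dec_prime(sk_p, b, y)
assert recover_secret_key(pk_p, oracle) == sk_p[0]
```

The point of the indirection is visible in the code: without first learning v from the oracle, the branch returning sk is unreachable except by guessing a random k-bit string.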
Remember that PE is assumed secure in the sense of NM-CPA. We will use this to establish the following:

Claim 3.13 PE' is secure in the sense of NM-CPA.

Proof: To prove this claim we consider a polynomial time adversary B attacking PE' in the NM-CPA sense. We want to show that Adv^{nm-cpa}_{PE',B}(·) is negligible. To do this, we construct an adversary A = (A1,A2) that attacks PE in the NM-CPA sense. The idea is that A can run B as a subroutine and simulate the choosing of u,v by the key generation algorithm K' for B.

Algorithm A1(pk):
  u,v ← {0,1}^k
  pk' ← pk‖u
  (M,s) ← B1(pk')
  s' ← (s,u,v,pk)
  return (M,s')

Algorithm A2(M,s',y) where s' = (s,u,v,pk):
  (R,z⃗) ← B2(M,s,0‖y)
  for 1 ≤ i ≤ |z⃗| do parse z⃗[i] as b_i‖z_i where b_i is a bit
  for 1 ≤ i ≤ |z⃗| do
    if b_i = 0 then y⃗[i] ← z_i
    else if (b_i = 1) ∧ (z_i = u) then y⃗[i] ← E_pk(v)
    else y⃗[i] ← y
  return (R,y⃗)
:B
= (pk ; sk ) K(k) ; (M; (s; u; v; pk )) A1 (pk ) ; x; x~ M ; y Epk (x) ; (R; y) A2 (M; (s; u; v; pk ); y) ; x Dsk (y) def Experiment2 = (pk k u; sk k u k v ) K0 (k) ; (M; s) B1(pk k u) ; x; x~ M ; 0 (x) ; (R; z) B2 (M; s; 0 k y) ; w D0 0 k y Epk k sk k k (z) : Experiment1
def
u
u v
Let Pr1 [ ] = Pr[Experiment1 : ] be the probability function under Experiment1 and Pr2 [ ] = Pr[Experiment2 : ] be that under Experiment2. Let E1 ,E2 , and E3 be the following events:
E1 def = def E2 = E3 def =
8i : (b 9i : (b 9i : (b
i
i
i
= 0) _ (b = 1 ^ z = u) i
i
= 1 ^ z = v ^ u 6= v) i
=1^z = 6 u ^ z =6 v) i
16
i
For j = 1, 2, 3 let

p(1,j) = Pr1[ y ∉ y⃗ ∧ ⊥ ∉ x⃗ ∧ R(x,x⃗) | E_j ] − Pr1[ y ∉ y⃗ ∧ ⊥ ∉ x⃗ ∧ R(x̃,x⃗) | E_j ]
p(2,j) = Pr2[ 0‖y ∉ z⃗ ∧ ⊥ ∉ w⃗ ∧ R(x,w⃗) | E_j ] − Pr2[ 0‖y ∉ z⃗ ∧ ⊥ ∉ w⃗ ∧ R(x̃,w⃗) | E_j ].

By conditioning we have:

Adv^{nm-cpa}_{PE,A}(k) = Σ_{j=1}^{3} p(1,j)·Pr1[E_j]
Adv^{nm-cpa}_{PE',B}(k) = Σ_{j=1}^{3} p(2,j)·Pr2[E_j].

We now upper bound Adv^{nm-cpa}_{PE',B}(k) in terms of Adv^{nm-cpa}_{PE,A}(k) by a series of lemmas. The first observation is that the probability of our three events is the same in both experiments.

Lemma 1: Pr1[E_j] = Pr2[E_j] for j = 1, 2, 3.
Proof: These events depend only on the keys and B. □
Let q be a polynomial which bounds the running time of B. In particular we can assume |z⃗| < q(k).

Lemma 2: p(2,1) ≤ p(1,1) + q(k)·2^{−k}.
Proof: By event E1, every z⃗[i] = b_i‖z_i has either (b_i = 0) or (b_i = 1 ∧ z_i = u).

If b_i = 0 then A will output z_i in Experiment1, while B would be outputting 0‖z_i in Experiment2. But D'_{sk‖u‖v}(0‖z_i) = D_sk(z_i), and furthermore y = z_i (the challenge to A is equal to this component of A's output) iff 0‖y = 0‖z_i (the challenge to B is equal to this component of B's output). Thus A properly simulates B.

If b_i = 1 and z_i = u then D'_{sk‖u‖v}(b_i‖z_i) = v is random and independent of the execution of B. To "simulate" it we have A output an encryption of the random v. But A will only be successful if the created ciphertext is different from y. The probability of this not happening can be upper bounded by the probability that v = D_sk(y), which is at most 2^{−k}. The worst case in this event is when ∀i : (b_i = 1 ∧ z_i = u). Since |z⃗| ≤ q(k), the probability, under this event, that A does not match the advantage of B is at most q(k)·2^{−k}. □
Lemma 3: Pr1[E2] ≤ q(k)·2^{−k}.
Proof: B has no information about v, since the latter was chosen independently of its execution, and u has only a 2^{−k} chance of equaling v. The lemma follows since |z⃗| < q(k). □

Lemma 4: p(1,3) = p(2,3) = 0.
Proof: When event E3 happens in Experiment1, one of the ciphertexts y⃗[i] that A2 outputs equals y and hence there is no contribution to the success probability. When event E3 happens in Experiment2, the definition of D'_{sk‖u‖v} says that the decryption of some z⃗[i] is ⊥ and hence again
there is no contribution to the success probability. In other words, in both cases, there is no success in either the "real" or the "random" experiment. □

From Lemmas 1, 2, 3, 4 we get

Adv^{nm-cpa}_{PE',B}(k) = Σ_{j=1}^{3} p(2,j)·Pr1[E_j]
  ≤ q(k)·2^{−k} + p(1,1)·Pr1[E1] + p(2,2)·Pr1[E2] + p(1,3)·Pr1[E3]
  ≤ q(k)·2^{−k} + p(1,1)·Pr1[E1] + p(1,2)·Pr1[E2] + p(1,3)·Pr1[E3] + (p(2,2) − p(1,2))·Pr1[E2]
  ≤ q(k)·2^{−k} + Σ_{j=1}^{3} p(1,j)·Pr1[E_j] + Pr1[E2]
  ≤ 2q(k)·2^{−k} + Adv^{nm-cpa}_{PE,A}(k).

The assumption that PE is secure in the sense of NM-CPA implies that Adv^{nm-cpa}_{PE,A}(k) is negligible, and hence it follows that Adv^{nm-cpa}_{PE',B}(k) is negligible.
3.7 Proof of Theorem 3.7: NM-CCA1 ⇏ NM-CCA2

The approach, as before, is to take an NM-CCA1 secure encryption scheme PE = (K,E,D) and modify it to a new encryption scheme PE' = (K',E',D') which is also NM-CCA1 secure, but can be broken in the NM-CCA2 sense.

Intuition. Notice that the construction of Section 3.6 will no longer work, because the scheme constructed there, not being secure in the sense of IND-CCA1, will certainly not be secure in the sense of NM-CCA1, for the same reason: the adversary can obtain the decryption key in the first stage using a couple of decryption queries. Our task this time is more complex. We want queries made in the second stage, after the challenge is received, to be important, meaning they can be used to break the scheme, yet, somehow, queries made in the first stage cannot be used to break the scheme. This means we can no longer rely on a simplistic approach of revealing the secret key in response to certain queries. Instead, the "breaking" queries in the second stage must be a function of the challenge ciphertext, and cannot be made in advance of seeing this ciphertext. We implement this idea by a "tagging" mechanism. The decryption function is capable of tagging a ciphertext so as to be able to "recognize" it in a subsequent query, and reveal in that stage information related specifically to the ciphertext, but not directly to the secret key. The tagging is implemented via pseudorandom function families.

Our construction. Let PE = (K,E,D) be the given NM-CCA1 secure encryption scheme. Fix a family F = { F^k : k ≥ 1 } of pseudorandom functions as per [20]. (Notice that this is not an extra assumption. We know that the existence of even an IND-CPA secure encryption scheme implies the existence of a one-way function [23], which in turn implies the existence of a family of pseudorandom functions [22, 20].) Here each F^k = { F_K : K ∈ {0,1}^k } is a finite collection in which each key K ∈ {0,1}^k indexes a particular function F_K : {0,1}^k → {0,1}^k.

We define the new encryption scheme PE' = (K',E',D') as follows. Recall that ε is the empty string.

Algorithm K'(k):
  (pk,sk) ← K(k); K ← {0,1}^k
  sk' ← sk‖K
  return (pk,sk')

Algorithm E'_pk(x):
  y ← E_pk(x)
  return 0‖y‖ε

Algorithm D'_{sk‖K}(b‖y‖z) where b is a bit:
  if (b = 0) ∧ (z = ε) then return D_sk(y)
  else if (b = 1) ∧ (z = ε) then return F_K(y)
  else if (b = 1) ∧ (z = F_K(y)) then return D_sk(y)
  else return ⊥
Analysis. The proof of Theorem 3.7 is completed by establishing that PE' is vulnerable to an NM-CCA2 attack but remains NM-CCA1 secure.

Claim 3.14 PE' is not secure in the sense of NM-CCA2.

Proof: The idea is that while the adversary may not ask for the decryption of the challenge ciphertext 0‖y‖ε in its second stage, it may ask for the decryption of 1‖y‖F_K(y). This is in fact exactly the decryption of 0‖y‖ε. The adversary first needs to compute F_K(y) without access to K. This is easily done by calling the decryption oracle on 1‖y‖ε.

More precisely, the adversary A = (A1,A2) works like this. In the first stage it outputs a message space M consisting of two distinct strings x0, x1, each having probability 1/2. A2, given challenge ciphertext 0‖y‖ε, makes query 1‖y‖ε to get F_K(y), and outputs (R,Z) where R(a,b) = 1 iff a = b is the equality relation, and Z = 1‖y‖F_K(y). Notice that Z ≠ 0‖y‖ε so this is a valid output, but D'_{sk‖K}(Z) = D'_{sk‖K}(0‖y‖ε), so Pr[Exp^{nm-cca2-1}_{PE',A}(k) = 1] = 1. On the other hand, Pr[Exp^{nm-cca2-0}_{PE',A}(k) = 1] ≤ 1/2. So Adv^{nm-cca2}_{PE',A}(k) ≥ 1/2, which is certainly not negligible. □
K
Proof of Claim 3.15: To prove this claim we consider a polynomial time adversary B attacking
PE 0
-cca1 in the NM-CCA1 sense. We want to show that Advnm PE 0 () is negligible. To do this, we consider the following adversary A = (A1 ; A2 ) attacking PE in the NM-CCA1 sense. The idea is that A can choose the key K for the key generation algorithm K0 of B and thus provide a simulation of the decryption oracle of B . ;B
sk Algorithm AD 1 (pk )
K f0; 1g D0 (M; s) B1 sk k K (pk ) s0 (s; K; pk ) return (M; s0 ) k
Algorithm A2 (M; s0 ; y ) where s0 = (s; K; pk )
(R; z) B2 (M; s; 0 k y k ") for 1 i jzj do parse z[i] as b k u k v where b is a bit for 1 i jzj do if (b = 0) ^ (v = ") then y[i] u else if (b = 1) ^ (v = ") then y[i] Epk (F (u )) else if (b = 1) ^ (v = F (u )) then y[i] u else y[i] y return (R; y) i
i
i
i
i
i
i
i
i
i
K
i
K
i
i
i
The analysis follows in spirit that in the proof of Claim 3.13; the key new element is the pseudorandom function. Roughly we seek to recapture the lemmas in that proof modulo the security of the pseudorandom function family. -cca1 (k) is evaluFor the proof, we de ne two experiments. The rst is the one under which Advnm PE nm-cca1 ated, and the second is the one under which AdvPE 0 (k) is evaluated: ;A
;B
sk = (pk ; sk ) K(k) ; (M; (s; K; pk )) AD ~ M ; y Epk (x) ; 1 (pk ) ; x; x (R; y) A2 (M; (s; K; pk ); y) ; x Dsk (y) 0 def Experiment2 = (pk ; sk k K ) K0(k) ; (M; s) B1Dsk k K (pk ) ; x; x~ M ; 0 (x) ; (R; z) B2 (M; s; 0 k y k ") ; w D0 0 k y k " Epk k sk k (z) :
Experiment1
def
u
K
19
Let Pr1 [ ] = Pr[Experiment1 : ] be the probability function under Experiment1 and Pr2 [ ] = Pr[Experiment2 : ] be that under Experiment2. Let E1 ,E2 , and E3 be the following events:
8i : (v 9i : (b 9i : (b
E1 def = def E2 = E3 def =
i
i
i
= ") _ (b = 1 ^ v = F (u ) ^ u = 6 y) i
i
K
i
i
= 1 ^ v = F (u ) ^ u = y ^ v = 6 ") i
K
i
i
i
K
i
i
i
=1^v = 6 F (u ) ^ v =6 ") _ (b = 0 ^ v =6 ") i
i
For j = 1, 2, 3 let

p(1,j) = Pr1[ y ∉ y⃗ ∧ ⊥ ∉ x⃗ ∧ R(x,x⃗) | E_j ] − Pr1[ y ∉ y⃗ ∧ ⊥ ∉ x⃗ ∧ R(x̃,x⃗) | E_j ]
p(2,j) = Pr2[ 0‖y‖ε ∉ z⃗ ∧ ⊥ ∉ w⃗ ∧ R(x,w⃗) | E_j ] − Pr2[ 0‖y‖ε ∉ z⃗ ∧ ⊥ ∉ w⃗ ∧ R(x̃,w⃗) | E_j ].

By conditioning we have:

Adv^{nm-cca1}_{PE,A}(k) = Σ_{j=1}^{3} p(1,j)·Pr1[E_j]
Adv^{nm-cca1}_{PE',B}(k) = Σ_{j=1}^{3} p(2,j)·Pr2[E_j].

We now upper bound Adv^{nm-cca1}_{PE',B}(k) in terms of Adv^{nm-cca1}_{PE,A}(k) by a series of lemmas.

Lemma 1: Pr1[E_j] = Pr2[E_j] for j = 1, 2, 3.
Proof: These events depend only on the keys and B. □
Let q be a polynomial which bounds the running time of B, and in particular so that |z⃗| < q(k).

Lemma 2: p(2,1) ≤ p(1,1) + ν(k) for some negligible function ν depending on B.
Proof: We consider two possible cases for the values of z⃗[i] = b_i‖u_i‖v_i, given event E1.

First suppose (b_i = 1 ∧ v_i = F_K(u_i) ∧ u_i ≠ y). Note that v_i = F_K(u_i) implies v_i ≠ ε since the output of F_K is always k bits long. Now, from the code of A2, we see that in this case A2 sets y⃗[i] to u_i. Observe that if the ciphertext y⃗[i] (respectively z⃗[i]) that A (respectively B) creates equals y (respectively 0‖y‖ε) then there is no contribution to the success probability. Since b_i = 1 we know that z⃗[i] ≠ 0‖y‖ε. On the other hand, the condition u_i ≠ y means that y⃗[i] ≠ y too. From the definition of D' we have D'_{sk‖K}(1‖u_i‖F_K(u_i)) = D_sk(u_i), so A is properly simulating B. (Meaning the contribution to their respective success probabilities is the same.)

For the second case, namely v_i = ε, we consider the two possible values of b_i.

If b_i = 0 then A will set y⃗[i] = u_i, and from the definition of D' we have D'_{sk‖K}(0‖u_i‖ε) = D_sk(u_i). Observe that A will output a ciphertext y⃗[i] that equals y if and only if B outputs a ciphertext z⃗[i] that equals 0‖y‖ε. So again A is properly simulating B.

If b_i = 1 then D'_{sk‖K}(1‖u_i‖ε) = F_K(u_i) by definition of D'. A correctly "simulates" this by outputting an encryption of F_K(u_i). This choice of A contributes to the success probability as long as it is different from y. The probability of this not happening can be upper bounded by the probability that E_pk(F_K(u_i)) = y. We must consider the worst case, which is when ∀i : (b_i = 1 ∧ v_i = ε), so we are interested in bounding the probability that there is some i such that E_pk(F_K(u_i)) = y. Intuitively, such "ciphertext collisions" are unlikely, since otherwise the scheme would not be secure even in the sense of IND-CCA1.

Formally, one can show that the probability of such collisions is at most ν(k), where ν(·) is a negligible function depending on B, by showing that if not, we could design an adversary A' that would break the scheme in the sense of IND-CCA1. This is standard, and a sketch of the details follows.

In the first stage A' does what A does, picking a key K so that it can provide a simulation of the decryption oracle of B, similar to the simulation provided by A. It runs the first stage of B and picks a pair of messages uniformly from the message space output by B. In the second stage it is given an encryption of one of these messages as the challenge. It then obtains a polynomial number of encryptions of one of the messages and checks if any of the resulting ciphertexts match the challenge ciphertext. If one does, then it bets that the challenge ciphertext corresponds to this message; otherwise it decides by flipping a coin. Observe that the success of A' is exactly one half the probability of there being some i such that E_pk(F_K(u_i)) = y, since the experiments defining the success of A' and the upper bound on the probability in question are similar. Since PE is given to be secure in the NM-CCA1 sense (and therefore in the IND-CCA1 sense, see Theorem 3.1), we get a bound of ν(k) where ν is a negligible function depending on B. □
Now given that F is pseudorandom in nature we can bound the probability of B correctly computing F (y) by Æ (k) + (k) for some polynomial q which depends on B . (Justi ed below.) So while B could always pick u to be y, she would have a negligible probability of setting v to be F (y). In the worst case this event could happen with probability at most jzj [Æ (k) + (k)]. The bound of Æ (k) + (k) mentioned above is justi ed using the assumed security of F as a pseudorandom function family. If the event in question had a higher probability, we would be able to construct a distinguisher between F and the family of random functions. This distinguisher would get an oracle g for some function and has to tell whether g is from F or is a random function of k bits to k bits. It would itself pick the secret keys underlying Experiment1 or Experiment2 and run the adversaries A or B . It can test whether or not the event happens because it knows all decryption keys. If it happens it bets that g is pseudorandom, because the chance under a random function is at most 2 + (k). Since this kind of argument is standard, we omit the details. 2 f
q
i
i
i
i
K
K
K
K
K
K
K
k
K
q
i
i
K
q
q
k
k
21
Lemma 4: p(1,3) = p(2,3) = 0.
Proof: When event E3 happens in Experiment1, one of the ciphertexts y⃗[i] that A2 outputs equals y and hence there is no contribution to the success probability. When event E3 happens in Experiment2, the definition of D'_{sk‖K} says that the decryption of some z⃗[i] is ⊥ and hence again there is no contribution to the success probability. In other words, in both cases, there is no success in either the "real" or the "random" experiment. □

From Lemmas 1, 2, 3, 4 we get

Adv^{nm-cca1}_{PE',B}(k) = Σ_{j=1}^{3} p(2,j)·Pr1[E_j]
  ≤ ν(k) + p(1,1)·Pr1[E1] + p(2,2)·Pr1[E2] + p(1,3)·Pr1[E3]
  ≤ ν(k) + p(1,1)·Pr1[E1] + p(1,2)·Pr1[E2] + p(1,3)·Pr1[E3] + (p(2,2) − p(1,2))·Pr1[E2]
  ≤ ν(k) + Σ_{j=1}^{3} p(1,j)·Pr1[E_j] + Pr1[E2]
  ≤ ν(k) + q(k)·[δ_q(k) + ν(k)] + Adv^{nm-cca1}_{PE,A}(k).

Since δ_q(k) and ν(k) are negligible quantities, the assumption that PE is secure in the sense of NM-CCA1 implies that Adv^{nm-cca1}_{PE,A}(·) is negligible, and hence it follows that Adv^{nm-cca1}_{PE',B}(·) is negligible.
4 Results on PA

In this section we define plaintext awareness and prove that it implies the random-oracle version of IND-CCA2, but is not implied by it. Throughout this section we shall be working exclusively in the RO model. As such, all notions of security defined earlier refer, in this section, to their RO counterparts. These are obtained in a simple manner. To modify Definitions 2.1 and 2.2, begin the specified experiment (the experiment which defines advantage) by choosing a random function H from the set of all functions from some appropriate domain to appropriate range. (These sets might change from scheme to scheme.) Then provide an H-oracle to A1 and A2, and allow that E_pk and D_sk may depend on H (which we write as E^H_pk and D^H_sk).

4.1 Definition

Our definition of PA is from [6], except that we make one important refinement. An adversary B for plaintext awareness is given a public key pk and access to the random oracle H. We also provide B with an oracle for E^H_pk. (This is our refinement, and its purpose is explained later.) The adversary outputs a ciphertext y. To be plaintext aware the adversary B should necessarily "know" the decryption x of its output. To formalize this it is demanded that there exist some (universal) algorithm K (the "plaintext extractor") that could have output x just by looking at the public key, B's H-queries and the answers to them, and the answers to B's queries to E^H_pk.

Let us now summarize the formal definition and then discuss it. By (hH, C, y) ← run B^{H,E^H_pk}(pk) we mean the following. Run B on input pk and oracles H and E^H_pk, recording B's interaction with its oracles. Form into a list hH = ((h1,H1), ..., (h_{qH},H_{qH})) all of B's H-oracle queries, h1, ..., h_{qH}, and the corresponding answers, H1, ..., H_{qH}. Form into a list C = (y1, ..., y_{qE}) the answers (ciphertexts) received as a result of E^H_pk-queries. (The messages that formed the actual queries are not recorded.) Finally, record B's output, y.
Definition 4.1 [Plaintext Awareness - PA] Let PE = (K,E,D) be an encryption scheme, let B be an adversary, and let K be an algorithm (the "knowledge extractor"). For any k ∈ N define

Succ^{pa}_{PE,B,K}(k) def= Pr[ H ← Hash; (pk,sk) ← K(k); (hH,C,y) ← run B^{H,E^H_pk}(pk) : K(hH,C,y,pk) = D^H_sk(y) ].
It remains an interesting open question to find an analogous but achievable formulation of plaintext awareness for the standard model. One might imagine that plaintext awareness coincides with semantic security coupled with a (non-interactive) zero-knowledge proof of knowledge [12] of the plaintext. But this is not valid. The reason is the way the extractor operates in the notion and scheme of [12]: the common random string (even if viewed as part of the public key) is under the extractor's control. In the PA notion, pk is an input to the extractor and it cannot play with any of it. Indeed, note that if one could indeed achieve PA via a standard proof of knowledge, then it would be achievable in the standard (as opposed to random-oracle) model, and we just observed above that this is not possible with the current definition.
4.2 Results

The proof of the following is in Section 4.3.

Theorem 4.2 [PA ⇒ IND-CCA2] If encryption scheme PE is secure in the sense of PA then it is secure in the RO sense of IND-CCA2.

Corollary 4.3 [PA ⇒ NM-CCA2] If encryption scheme PE is secure in the sense of PA then PE is secure in the RO sense of NM-CCA2.

Proof: Follows from Theorem 4.2 and the RO-version of Theorem 3.3.

The above results say that PA ⇒ IND-CCA2 ⇒ NM-CCA2. In the other direction, we have the following, whose proof is in Section 4.4.

Theorem 4.4 [IND-CCA2 ⇏ PA] If there exists an encryption scheme PE which is secure in the RO sense of IND-CCA2, then there exists an encryption scheme PE' which is secure in the RO sense of IND-CCA2 but which is not secure in the sense of PA.

4.3 Proof of Theorem 4.2: PA ⇒ IND-CCA2
Let A = (A1, A2) be an adversary attacking PE in the IND-CCA2 sense, making q1 decryption queries in its first stage and q2 in its second. Using the extractor K we define an adversary A' = (A'1, A'2) attacking PE in the IND-CPA sense:

Algorithm A'1(pk, R):
    Take R1 from R
    Run A1(pk, R1), wherein:
        When A1 makes a query h to H:
            A1 asks its H-oracle h, obtaining H(h)
            Put (h, H(h)) at the end of hH
            Answer A1 with H(h)
        When A1 makes its j-th query y to D^H_sk:
            x ← K(hH, ε, y, pk)
            Answer A1 with x
    Finally A1 halts, outputting (x0, x1, s)
    return (x0, x1, (s, hH, pk))

Algorithm A'2(x0, x1, (s, hH, pk), y, R):
    Take R2 from R
    Run A2(x0, x1, s, y, R2), wherein:
        When A2 makes a query h to H:
            A2 asks its H-oracle h, obtaining H(h)
            Put (h, H(h)) at the end of hH
            Answer A2 with H(h)
        When A2 makes its j-th query y' to D^H_sk:
            x ← K(hH, (y), y', pk)
            Answer A2 with x
    Finally A2 halts, outputting a bit d
    return d
Algorithm B_i^{H, E_pk}(pk, R):    // i ∈ {1, ..., q}
    Let R1, R2 be taken from R
    Run A1(pk, R1), wherein:
        When A1 makes a query h to H:
            B_i asks its H-oracle h, obtaining H(h)
            Put (h, H(h)) at the end of hH
            Answer A1 with H(h)
        When A1 makes its j-th query y to D^H_sk:
            if j = i then return y and halt
            else x ← K(hH, ε, y, pk); Answer A1 with x
    Finally A1 halts, outputting (x0, x1, s)
    // Algorithm B_i, continued:
    d ← {0, 1}
    Using B_i's encryption oracle, let y ← E^H_pk(x_d)
    Run A2(x0, x1, s, y, R2), wherein:
        When A2 makes a query h to H:
            B_i asks its H-oracle h, obtaining H(h)
            Put (h, H(h)) at the end of hH
            Answer A2 with H(h)
        When A2 makes its j-th query y' to D^H_sk:
            if i = j + q1 then return y' and halt
            else x ← K(hH, (y), y', pk); Answer A2 with x
Having defined adversaries corresponding to each decryption query made by A1, we now need to do this for A2. Recall that adversary A2 gets as input (x0, x1, s, y) where, in the experiment defining advantage, y is selected according to y ← E^H_pk(x_d) for a random bit d. Remember that A2 is prohibited from asking D^H_sk(y), although A2 may make other (possibly related) decryption queries. How then can we pass y to our decryption simulation mechanism? This is where the encryption oracle and the ciphertext list C come in. We define adversaries B_{q1+1}, ..., B_q just like we defined B_1, ..., B_{q1}, except that this time C = (y) rather than being empty. This is shown above in the right-hand column.

Let us now see how good a simulation A'1 is for A1^{D^H_sk}. Note that the values (x0, x1, s) produced by A'1 are not necessarily the same as what A1 would have output after the analogous interactions
with D^H_sk, since one of K's answers may not be the correct plaintext. Let D be the event that at least one of K's answers to A1's decryption queries was not the correct plaintext. Using the existence of B1, B2, ..., we can lower bound the probability of the correctness of K's answers in A'1 by

    Pr[A'1(pk) = A1^{D^H_sk}(pk)] ≥ 1 - Pr[D] ≥ 1 - q1·(1 - λ(k)).

Letting q2 be the number of decryption oracle queries made by A2, we similarly have for A'2 that

    Pr[A'2(x0, x1, (s, hH), y) = A2^{D^H_sk}(x0, x1, s, y) | A'1(pk) = A1^{D^H_sk}(pk)] ≥ 1 - q2·(1 - λ(k)).

Now using the above, one can see that

    Adv^{ind-cpa}_{PE,A'}(k) ≥ Adv^{ind-cca2}_{PE,A}(k) - 2q·(1 - λ(k)),

where q = q1 + q2 represents the total number of decryption oracle queries made by the adversary A. A'1 runs A1, asking for q1 executions of K. Similarly A'2 runs A2, asking for q2 executions of K. Hence the running time of our new adversary A' is equal to t_A + q·t_K, where t_A and t_K are the running times of A and K respectively, which is polynomial if A and K are polynomial time. Under our assumptions Adv^{ind-cca2}_{PE,A}(k) is non-negligible and 1 - λ(k) is negligible, so Adv^{ind-cpa}_{PE,A'}(k) is non-negligible, and PE is not secure in the sense of IND-CPA. In concrete security terms, the advantage drops linearly in q while the running time grows linearly in q. Note that it was important in the proof that K almost always succeeded; it would not have worked with λ(k) = 0.5, say.
4.4 Proof of Theorem 4.4: IND-CCA2 ⇏ PA
Assume there exists some IND-CCA2 secure encryption scheme PE = (K, E, D), since otherwise the theorem is vacuously true. We now modify PE to a new encryption scheme PE' = (K', E', D') which is also IND-CCA2 secure but not secure in the PA sense. This will prove the theorem. The new encryption scheme PE' = (K', E', D') is defined as follows:

Algorithm K'(k):
    (pk, sk) ← K(k)
    b ← {0,1}^k ; a ← E^H_pk(b)
    pk' ← pk ∥ a ; sk' ← sk ∥ b
    return (pk', sk')

Algorithm E'^H_{pk∥a}(x):
    return E^H_pk(x)

Algorithm D'^H_{sk∥b}(y):
    return D^H_sk(y)
In other words, the only difference is that in the new scheme, the public key contains a random ciphertext a whose decryption is in the secret key. Our two claims are that PE' remains IND-CCA2 secure, but is not PA. This will complete the proof.
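To make the shape of this construction concrete, here is a small runnable sketch. The XOR "cipher" below is an assumption for illustration only (it is not secure and is not the scheme PE of the paper); what matters is how K' plants a ciphertext a in the public key, and how an adversary can then output a valid ciphertext whose plaintext it never learned:

```python
import os

# Toy stand-in for PE = (K, E, D): an XOR "cipher".  NOT secure and not
# the paper's scheme; it only makes the PE' construction concrete.
def K():
    key = os.urandom(16)
    return key, key                      # (pk, sk)

def E(pk, m):
    return bytes(x ^ y for x, y in zip(m, pk))

def D(sk, c):
    return bytes(x ^ y for x, y in zip(c, sk))

# PE': the public key additionally carries a ciphertext a of a random b.
def K_prime():
    pk, sk = K()
    b = os.urandom(16)
    a = E(pk, b)
    return (pk, a), (sk, b)              # pk' = pk || a, sk' = sk || b

def E_prime(pk_prime, m):
    return E(pk_prime[0], m)

def D_prime(sk_prime, c):
    return D(sk_prime[0], c)

# The adversary B of Claim 4.6: it outputs the ciphertext a found in its
# public key while making no oracle queries at all, so an extractor sees
# an empty transcript yet would have to recover the hidden plaintext b.
def B(pk_prime):
    return pk_prime[1]

pk_p, sk_p = K_prime()
y = B(pk_p)
assert D_prime(sk_p, y) == sk_p[1]       # y decrypts to the secret b
```

The assertion shows that B's output is a well-formed ciphertext of the secret b, even though B never "knew" b.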
Claim 4.5 PE' is secure in the sense of IND-CCA2.

Proof: Recall our assumption is that PE is IND-CCA2 secure. To prove the claim we consider a polynomial time adversary B attacking PE' in the IND-CCA2 sense. We want to show that Adv^{ind-cca2}_{PE',B}(·) is negligible. To do this, we consider the following adversary A = (A1, A2) attacking PE in the IND-CCA2 sense. The idea is that A can simulate the choosing of a by the key generation algorithm K' for B, and thus has access to the corresponding secret b. Note that having an oracle for D^H_sk, it is indeed possible for A to reply to any query y that B makes to the D'^H_{sk∥b} oracle: it simply returns D^H_sk(y).
Algorithm A1^{D^H_sk}(pk):
    b ← {0,1}^k ; a ← E^H_pk(b)
    pk' ← pk ∥ a
    (x0, x1, s) ← B1^{H, D'^H_{sk∥b}}(pk')   // D'-queries answered using D^H_sk
    s' ← (s, a, b)
    return (x0, x1, s')

Algorithm A2^{D^H_sk}(x0, x1, s', y) where s' = (s, a, b):
    d ← B2^{H, D'^H_{sk∥b}}(x0, x1, s, y)    // D'-queries answered using D^H_sk
    return d
It is clear that A is polynomial time and that Adv^{ind-cca2}_{PE,A}(k) = Adv^{ind-cca2}_{PE',B}(k). The assumption that PE is secure in the sense of IND-CCA2 implies that Adv^{ind-cca2}_{PE,A}(k) is negligible, and hence it follows that Adv^{ind-cca2}_{PE',B}(k) is negligible.

Claim 4.6 PE' is not plaintext-aware.
Proof: We consider the following specific adversary B that outputs as her ciphertext the value a in her public key:

Algorithm B^{H, E'^H_{pk'}}(pk') where pk' = pk ∥ a:
    return a
Intuitively, this adversary defeats any aspiring plaintext extractor: it will not be possible to construct a plaintext extractor for this B as long as PE' is secure in the sense of IND-CPA. Hence there does not exist a plaintext extractor for PE'. The formal proof is by contradiction. Assume PE' is PA. Then there exists a plaintext extractor K' for PE'. We now define an adversary A = (A1, A2) that attacks PE in the sense of IND-CPA. Recall that ε denotes the empty list.

Algorithm A1(pk):
    x0 ← {0,1}^k ; x1 ← {0,1}^k
    return (x0, x1, pk)

Algorithm A2(x0, x1, pk, y):
    pk' ← pk ∥ y
    x' ← K'(ε, ε, y, pk')
    if x' = x0 then d ← 0
    else if x' = x1 then d ← 1
    else d ← {0, 1}
    return d
Consider the experiment defining the success of (A1, A2) in attacking PE in the sense of IND-CPA. In this experiment, y is the encryption of a random k-bit string. This means that in the input (ε, ε, y, pk') given to K', the distribution of (ε, ε, y) is exactly that of run B^{H, E'^H_{pk'}}(pk'). This is because B, the adversary we defined above, has no interaction with its oracles, and the value a in the public key pk' is itself the encryption of a random k-bit string. Thus, our assumption that K' works means that the extraction is successful with probability Succ^pa_{PE',B,K'}(k). Thus

    Adv^{ind-cpa}_{PE,A}(k) ≥ Succ^pa_{PE',B,K'}(k) - (1/2)·(1 - Succ^pa_{PE',B,K'}(k)).

The first term is a lower bound on the probability that A2 outputs 0 when the message was x0. The second term is an upper bound on the probability that it outputs 1 when the message was x0. Now since K' is assumed to be a good extractor we know that Succ^pa_{PE',B,K'}(k) = 1 - ν(k) for some negligible function ν(·) and hence Adv^{ind-cpa}_{PE,A}(k) is not negligible. (In fact it is of the form 1 - ν'(k) for some negligible function ν'(·).) This contradicts the indistinguishability of PE, as desired.
Acknowledgments Following an oral presentation of an earlier version of this paper, Moni Naor suggested that we present notions of security in a manner that treats the goal and the attack model orthogonally [25]. We are indebted to him for this suggestion. We also thank Hugo Krawczyk, Moti Yung, and the (other) members of the CRYPTO '98 program committee for excellent and extensive comments. Finally we thank Oded Goldreich for many discussions on these topics.
References

[1] M. Bellare, R. Canetti and H. Krawczyk, A modular approach to the design and analysis of authentication and key exchange protocols. Proceedings of the 30th Annual Symposium on the Theory of Computing, ACM, 1998.
[2] M. Bellare, A. Desai, E. Jokipii and P. Rogaway, A concrete security treatment of symmetric encryption: Analysis of the DES modes of operation. Proceedings of the 38th Symposium on Foundations of Computer Science, IEEE, 1997.
[3] M. Bellare, A. Desai, D. Pointcheval and P. Rogaway, Relations among notions of security for public-key encryption schemes. Preliminary version of this paper. Advances in Cryptology – CRYPTO '98 Proceedings, Lecture Notes in Computer Science, H. Krawczyk, ed., Springer-Verlag, 1998.
[4] M. Bellare, R. Impagliazzo and M. Naor, Does parallel repetition lower the error in computationally sound protocols? Proceedings of the 38th Symposium on Foundations of Computer Science, IEEE, 1997.
[5] M. Bellare and P. Rogaway, Random oracles are practical: a paradigm for designing efficient protocols. First ACM Conference on Computer and Communications Security, ACM, 1993.
[6] M. Bellare and P. Rogaway, Optimal asymmetric encryption – How to encrypt with RSA. Advances in Cryptology – EUROCRYPT '94, Lecture Notes in Computer Science Vol. 950, A. De Santis ed., Springer-Verlag, 1994.
[7] M. Bellare and A. Sahai, Non-malleable encryption: Equivalence between two notions, and an indistinguishability-based characterization. Advances in Cryptology – CRYPTO '99, Lecture Notes in Computer Science Vol. 1666, M. Wiener ed., Springer-Verlag, 1999.
[8] D. Bleichenbacher, A chosen ciphertext attack against protocols based on the RSA encryption standard PKCS #1. Advances in Cryptology – CRYPTO '98, Lecture Notes in Computer Science Vol. 1462, H. Krawczyk ed., Springer-Verlag, 1998.
[9] M. Blum, P. Feldman and S. Micali, Non-interactive zero-knowledge and its applications. Proceedings of the 20th Annual Symposium on the Theory of Computing, ACM, 1988.
[10] R. Cramer and V. Shoup, A practical public key cryptosystem provably secure against adaptive chosen ciphertext attack. Advances in Cryptology – CRYPTO '98 Proceedings, Lecture Notes in Computer Science, H. Krawczyk, ed., Springer-Verlag, 1998.
[11] I. Damgård, Towards practical public key cryptosystems secure against chosen ciphertext attacks. Advances in Cryptology – CRYPTO '91, Lecture Notes in Computer Science Vol. 576, J. Feigenbaum ed., Springer-Verlag, 1991.
[12] A. De Santis and G. Persiano, Zero-knowledge proofs of knowledge without interaction. Proceedings of the 33rd Symposium on Foundations of Computer Science, IEEE, 1992.
[13] D. Dolev, C. Dwork, and M. Naor, Non-malleable cryptography. Proceedings of the 23rd Annual Symposium on the Theory of Computing, ACM, 1991.
[14] D. Dolev, C. Dwork, and M. Naor, Non-malleable cryptography. Technical Report CS95-27, Weizmann Institute of Science, 1995.
[15] D. Dolev, C. Dwork, and M. Naor, Non-malleable cryptography. SIAM J. Computing, 30 (2), 391-437, 2000.
[16] Z. Galil, S. Haber and M. Yung, Symmetric public key encryption. Advances in Cryptology – CRYPTO '85, Lecture Notes in Computer Science Vol. 218, H. Williams ed., Springer-Verlag, 1985.
[17] Z. Galil, S. Haber and M. Yung, Security against replay chosen ciphertext attack. Distributed Computing and Cryptography, DIMACS Series in Discrete Mathematics and Theoretical Computer Science, Vol. 2, ACM, 1991.
[18] O. Goldreich, Foundations of cryptography. Class notes, Spring 1989, Technion University.
[19] O. Goldreich, A uniform complexity treatment of encryption and zero-knowledge. Journal of Cryptology, Vol. 6, 1993, pp. 21-53.
[20] O. Goldreich, S. Goldwasser and S. Micali, How to construct random functions. Journal of the ACM, Vol. 33, No. 4, 1986, pp. 210-217.
[21] S. Goldwasser and S. Micali, Probabilistic encryption. Journal of Computer and System Sciences, 28:270-299, 1984.
[22] J. Håstad, R. Impagliazzo, L. Levin and M. Luby, A pseudorandom generator from any one-way function. SIAM J. on Computing, Vol. 28, No. 4, 1999, pp. 1364-1396.
[23] R. Impagliazzo and M. Luby, One-way functions are essential for complexity based cryptography. Proceedings of the 30th Symposium on Foundations of Computer Science, IEEE, 1989.
[24] S. Micali, C. Rackoff and R. Sloan, The notion of security for probabilistic cryptosystems. SIAM J. on Computing, April 1988.
[25] M. Naor, private communication, March 1998.
[26] M. Naor and M. Yung, Public-key cryptosystems provably secure against chosen ciphertext attacks. Proceedings of the 22nd Annual Symposium on the Theory of Computing, ACM, 1990.
[27] C. Rackoff and D. Simon, Non-interactive zero-knowledge proof of knowledge and chosen ciphertext attack. Advances in Cryptology – CRYPTO '91, Lecture Notes in Computer Science Vol. 576, J. Feigenbaum ed., Springer-Verlag, 1991.
[28] SETCo (Secure Electronic Transaction LLC), The SET standard – book 3 – formal protocol definitions (version 1.0). May 31, 1997. Available from http://www.setco.org/
[29] A. Yao, Theory and applications of trapdoor functions. Proceedings of the 23rd Symposium on Foundations of Computer Science, IEEE, 1982.
[30] Y. Zheng and J. Seberry, Immunizing public key cryptosystems against chosen ciphertext attack. IEEE Journal on Selected Areas in Communications
, vol. 11, no. 5, 715{724 (1993). | 2021-09-26 01:25:32 | {"extraction_info": {"found_math": false, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.8163963556289673, "perplexity": 2232.5797255211714}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2021-39/segments/1631780057787.63/warc/CC-MAIN-20210925232725-20210926022725-00695.warc.gz"} |
http://openstudy.com/updates/513e8773e4b029b0182c27aa | ## anonymous 3 years ago What is the domain of 1/(4 (5-sqrt(x))^(3/2)sqrt(x))
1. anonymous
I'm getting 0<x<25 but I don't know how to put that in this format; for example, [0,25)
2. Mertsj
$\frac{1}{4(5-\sqrt{x})^{\frac{3}{2}\sqrt{x}}}$
3. Mertsj
Is that the problem?
4. anonymous
the sqrt(x) is not an exponent; it is just being multiplied
5. Mertsj
Well then x can't be negative, 0, or 25
6. Mertsj
(0,25)U(25,inf)
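(An aside, not part of the thread: a quick Python check of where the expression is real-valued, reading sqrt(x) as a multiplied factor as the asker clarified. It supports the interval (0, 25) and rules out values above 25, since 5 - sqrt(x) is negative there and a 3/2 power of a negative number is not real.)

```python
import math

def value(x):
    """Return 1 / (4 * (5 - sqrt(x))**(3/2) * sqrt(x)) when it is a real
    number, else None.  sqrt(x) is a factor, per the asker's clarification."""
    if x < 0:
        return None                 # sqrt(x) undefined over the reals
    s = math.sqrt(x)
    base = 5.0 - s
    if s == 0 or base <= 0:         # zero denominator, or negative base under a 3/2 power
        return None
    return 1.0 / (4.0 * base ** 1.5 * s)

assert value(-1) is None and value(0) is None
assert value(1) is not None and value(24.99) is not None
assert value(25) is None and value(26) is None   # so (25, inf) is excluded
```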
7. anonymous
okay thanks!
8. Mertsj
yw | 2016-08-29 23:30:05 | {"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.6546508073806763, "perplexity": 7054.209745087529}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2016-36/segments/1471982967797.63/warc/CC-MAIN-20160823200927-00077-ip-10-153-172-175.ec2.internal.warc.gz"} |
https://www.physicsforums.com/threads/stokes-thm-intersection-of-sphere-and-plane.659906/ | # Stokes' Thm, intersection of sphere and plane
1. Dec 19, 2012
### CAF123
1. The problem statement, all variables and given/known data
Use Stokes' Theorem to evaluate $$\int_{\gamma} y\,dx + z\,dy + x\,dz,$$ where $\gamma$ is the suitably oriented intersection of the surfaces $x^2 + y^2 + z^2 = a^2$ and $x + y + z = 0$
3. The attempt at a solution
Stokes' says that this is equal to $$\iint_S (\underline{\nabla} \times \underline{F}) \cdot \underline{n}\,dA$$ So from the question, I can extract $\underline{F} = \langle y,z,x \rangle$ and compute $\text{curl}\,\underline{F}$. $\gamma$ marks the boundary of the surface S, so in using Stokes' I can use any surface whose boundary is $\gamma$.
Let $z = -x -y \Rightarrow x^2 + y^2 + (-x-y)^2 = a^2$, which when rearranged gives $x^2 + y^2 + xy = a^2/2$. I put this into Wolfram Alpha and it is an ellipse; however I am unsure of how to show this. I.e. I want to get what I have in the form $\frac{x^2}{A^2} + \frac{y^2}{B^2} = 1.$ I would complete the square but the $xy$ term is annoying. Once I have it in the form of an ellipse, I am sure I can just say $x = A\cos t, y = B\sin t$ on the boundary, find the normal, dot it with the curl and set the bounds. I am also a little bit unsure of what the bounds would be. If the result of curl F dotted with n resulted in 1, then I could just simply say that the surface integral is the area of the ellipse, which would greatly simplify things.
Last edited: Dec 19, 2012
2. Dec 19, 2012
### Dick
Try thinking about this problem geometrically using the surface integral. The sphere is centered at the origin. The plane passes through the origin. What does the intersection look like? It's not as complicated as a general ellipse. What is the curl? What is the normal? You can really do this whole problem in your head.
3. Dec 19, 2012
### CAF123
I would have thought that the intersection would be a circle, but have I not shown that it is not? $x^2 + y^2 + xy = a^2/2$ is not a circle.
I computed the curl to be $- \langle 1,1,1 \rangle$ . Perhaps the normal would be in the direction into the first octant (or antiparallel to it).
When you say you can do the whole problem in your head, do you mean because the computation is easy or is there some conceptual reason why the results are what they are?
4. Dec 19, 2012
### Dick
The conceptual reason is that the normal to the surface is constant and the curl is constant, so the dot product is constant. Integrating a constant over a surface whose area you know is pretty easy. $x^2 + y^2 + xy = a^2/2$ in the xy plane is probably an ellipse; it looks like the projection of the circle you want into the xy plane. But it's certainly not the boundary curve. Luckily, you don't need an equation for the boundary curve.
5. Dec 19, 2012
### CAF123
How should I compute the normal vector if I don't have a parametrisation?
The projection of their intersection would be an ellipse in the xy plane?
Is this the question's motivation for resorting to using Stokes' Thm?
6. Dec 19, 2012
### Dick
Yes, it's much easier to use Stokes' theorem than to do the path integral directly. Yes, if you take a circle in space and project it to the xy plane you generally get an ellipse. Finally, the normal to the intersection of the plane and the sphere is the same as the normal to the plane, isn't it?
7. Dec 20, 2012
### CAF123
Do you mean a circle inclined in space? E.g. if I take a cylinder and chop it at some z, the projection onto xy would be a circle. But why is the intersection a circle anyway in this problem?
This makes sense.
So $\hat{n} = \langle 1/\sqrt{3}, 1/\sqrt{3}, 1/\sqrt{3} \rangle$ which gives $\text{curl} \, \underline{F} \cdot \hat{n} dS = -\sqrt{3} dS.$
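(A numeric side check, not part of the thread: parametrising the intersection circle via an orthonormal basis of the plane x + y + z = 0, where the basis and orientation below are choices/assumptions, a midpoint-rule approximation of the line integral matches (curl F · n) times the circle's area, i.e. -sqrt(3)·pi·a^2.)

```python
import math

# Orthonormal basis of the plane x + y + z = 0 (a choice; with this
# ordering the induced normal u x v is (1,1,1)/sqrt(3)).
u = (1/math.sqrt(2), -1/math.sqrt(2), 0.0)
v = (1/math.sqrt(6),  1/math.sqrt(6), -2/math.sqrt(6))
a = 2.0                      # sphere radius, arbitrary test value

def r(t):
    c, s = math.cos(t), math.sin(t)
    return tuple(a * (c*ui + s*vi) for ui, vi in zip(u, v))

# Midpoint-rule approximation of the line integral of y dx + z dy + x dz.
N = 20000
total = 0.0
for k in range(N):
    t0, t1 = 2*math.pi*k/N, 2*math.pi*(k+1)/N
    x0, y0, z0 = r(t0)
    x1, y1, z1 = r(t1)
    xm, ym, zm = r((t0 + t1)/2)
    total += ym*(x1 - x0) + zm*(y1 - y0) + xm*(z1 - z0)

expected = -math.sqrt(3) * math.pi * a**2   # (curl F . n) * area = -sqrt(3) * pi * a^2
assert abs(total - expected) < 1e-6
```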
8. Dec 20, 2012
### Dick
What kind of curves do you get when you cut a sphere with a plane? Just think about the geometry. Don't try to get the answer by eliminating z. That only gives you the xy part of the intersection.
9. Dec 20, 2012
### CAF123
In general, you get circles. Could you clarify what you mean by what I found was the 'xy' intersection?
10. Dec 20, 2012
### Dick
When you eliminate z you only have x and y in your equation. An equation that only has x and y in it describes a kind of vertical cylinder in space. What you got is a vertical elliptical cylinder which passes through the circle that the plane makes intersecting the sphere. It's just telling you the circle looks like an ellipse when viewed from an angle. It's not useful for doing a path integral.
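(Again as a side check, not from the thread: sampling points on the intersection circle, via an assumed orthonormal basis of the plane, confirms each point lies on the sphere and the plane, and that its xy-projection satisfies x^2 + y^2 + xy = a^2/2.)

```python
import math

# Orthonormal basis of the plane x + y + z = 0 (a choice, for sampling).
u = (1/math.sqrt(2), -1/math.sqrt(2), 0.0)
v = (1/math.sqrt(6),  1/math.sqrt(6), -2/math.sqrt(6))
a = 3.0

ok = True
for k in range(360):
    t = 2 * math.pi * k / 360
    x, y, z = (a * (math.cos(t)*ui + math.sin(t)*vi) for ui, vi in zip(u, v))
    ok = ok and abs(x*x + y*y + z*z - a*a) < 1e-9      # on the sphere
    ok = ok and abs(x + y + z) < 1e-9                  # on the plane
    ok = ok and abs(x*x + y*y + x*y - a*a/2) < 1e-9    # projected ellipse equation
assert ok
```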
11. Dec 20, 2012
Thanks! | 2017-11-20 08:01:04 | {"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 1, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.783741295337677, "perplexity": 943.0163444087332}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2017-47/segments/1510934805923.26/warc/CC-MAIN-20171120071401-20171120091401-00346.warc.gz"} |
http://www.maths.ox.ac.uk/node/25447 | # Random walks and Lévy processes as rough paths
Chevyrev, I
April 2018
## Journal:
Probab. Theory Related Fields 170 (2018), no. 3-4, 891-932
## Last Updated:
2020-05-14T16:44:32.007+01:00
## Issue:
3-4

## Volume:
170
## DOI:
10.1007/s00440-017-0781-1
## Pages:
891-932
## Abstract:
We consider random walks and Lévy processes in a homogeneous group $G$. For all $p > 0$, we completely characterise (almost) all $G$-valued Lévy processes whose sample paths have finite $p$-variation, and give sufficient conditions under which a sequence of $G$-valued random walks converges in law to a Lévy process in $p$-variation topology. In the case that $G$ is the free nilpotent Lie group over $\mathbb{R}^d$, so that processes of finite $p$-variation are identified with rough paths, we demonstrate applications of our results to weak convergence of stochastic flows and provide a Lévy-Khintchine formula for the characteristic function of the signature of a Lévy process. At the heart of our analysis is a criterion for tightness of $p$-variation for a collection of càdlàg strong Markov processes.
695502 | 2020-06-07 07:18:55 | {"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 0, "mathjax_asciimath": 1, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.6411189436912537, "perplexity": 1147.5672455330937}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.3, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2020-24/segments/1590348523564.99/warc/CC-MAIN-20200607044626-20200607074626-00214.warc.gz"} |
http://lilypond.1069038.n5.nabble.com/Protecting-against-page-breaks-in-markup-td217643.html | # Protecting against page breaks in markup
15 messages
## Protecting against page breaks in markup
I'm creating an index to my scores, in the form of a sequence of LilyPond markups (for title, composer, first few bars etc). It is working well apart from page breaking, which can occur mid-entry. Is there a way of turning page breaks off and back on around each entry?

Richard Shann

_______________________________________________
lilypond-user mailing list
[hidden email]
https://lists.gnu.org/mailman/listinfo/lilypond-user
## Re: Protecting against page breaks in markup
On Tue, 27 Nov 2018 at 16:22, Richard Shann <[hidden email]> wrote:
> I'm creating an index to my scores, in the form of a sequence of
> LilyPond markups (for title, composer, first few bars etc).

What exactly are you doing? An example would be nice.

> It is working well apart from page breaking which can occur mid-entry.
> Is there a way of turning page breaks off and back on around each
> entry?

Well, of course you know about \noPageBreak and \pageBreak.

Probably you can wrap a \column around all the single markups. At least page-break can then only happen before or after the whole thingy. Depends on what you actually (want to) do.

Cheers,
Harm
## Re: Protecting against page breaks in markup
> From: Richard Shann <[hidden email]>
> Date: Tue, 27 Nov 2018 15:20:04 +0000
> Subject: Protecting against page breaks in markup
> I'm creating an index to my scores, in the form of a sequence of
> LilyPond markups (for title, composer, first few bars etc).
> It is working well apart from page breaking which can occur mid-entry.
> Is there a way of turning page breaks off and back on around each
> entry?
>
> Richard Shann

When pagination gets hairy, I use \autoPageBreaksOff, and then manually put in all page breaks using \pageBreak.

HTH,
Elaine Alt
415 . 341 . 4954 "Confusion is highly underrated"
Producer ~ Composer ~ Instrumentalist
## Re: Protecting against page breaks in markup
On Tue, 2018-11-27 at 21:48 +0100, Thomas Morley wrote:
> What exactly are you doing? An example would be nice.

Well, I have my printed scores of trio sonatas filed under composer, but I needed to find just those scores with a Tenor as the second part - I have perhaps half a dozen of these, lost amongst just over a thousand scores. So I thought I would write a script in Scheme that would traverse the file system opening scores, extracting an incipit, title, composer, instrumentation etc. and then creating a new score that just comprised top-level markups, one for each entry. Each entry looks like this:

\markup {\column {\draw-hline}}
\markup "Fesch: Sonatina IV"
\markup {instrumentation: Treble, Tenor, Basso}
DenemoGlobalTranspose = #(define-music-function (parser location arg) (ly:music?)
  #{ \transpose c' c' #arg #})
incipit = \markup \score {
  \DenemoGlobalTranspose {
    \clef treble
    { \time 3/4 }
    { \key f \major }
    %{/home/rshann/musicScores/Fesch/IMSLP270267-PMLP437812-fesch_op7_1.pdf:202:7724:9%}
    d'' 4 g' 4. ees'' 8 d'' 4 g' 4. bes'' 8
  }
  \layout { indent = 0.0\cm }
}
\incipit

This all works nicely, and I even managed to allow the user to supply a custom Scheme expression to act as a filter, but I was left with the problem that LilyPond would page break in mid-entry.

> Well, of course you know about \noPageBreak and \pageBreak.
>
> Probably you can wrap a \column around all the single markups. At
> least page-break can then only happen before or after the whole
> thingy.

Ah, thank you - that works.

I just looked back at the docs and I see it says "The default page breaking may be overridden by inserting \pageBreak or \noPageBreak commands. [...] The \pageBreak and \noPageBreak commands may also be inserted at top-level, between scores and top-level markups." but I didn't spot where it said what the default page breaking is (besides allowing breaks at bar lines). I think the last bit is the clue - page breaks are allowed between top-level markups, but nowhere inside them. I just hoped there might be a \pageBreaksOff and \pageBreaksOn command lurking somewhere, hence my question. But now I look at it with a clear understanding of where LilyPond might break, it seems obvious where to put in sufficient \noPageBreak commands to keep each entry un-split.

Richard
## Re: Protecting against page breaks in markup
On Tue, 2018-11-27 at 16:33 -0800, Flaming Hakama by Elaine wrote:
> When pagination gets hairy, I use \autoPageBreaksOff, and then
> manually put in all page breaks using \pageBreak

Sorry, I should have made it clearer that this index is being automatically generated. Now you point out that there is an \autoPageBreaksOff, which is what I thought I needed, I realize that it wouldn't help - I would need to turn them off and back on between each entry in the hope that LilyPond would take advantage of such an Off/On sequence to insert a page break in between if needed, which I'm sure it wouldn't :(

Richard
## Re: Protecting against page breaks in markup
> Sorry, I should have made it clearer that this index is being
> automatically generated. Now you point out that there is an
> \autoPageBreaksOff, which is what I thought I needed, I realize that it
> wouldn't help - I would need to turn them off and back on between each
> entry in the hope that LilyPond would take advantage of such an Off/On
> sequence to insert a page break in between if needed, which I'm sure it
> wouldn't :(

I'm not sure I understand you correctly, but it is perfectly possible to forbid page breaks between arbitrary markup lines:

\version "2.19.80"

three-line-entry =
#(define-void-function (a b c) (string? string? string?)
   (add-text a)
   (add-music #{ \noPageBreak #})
   (add-text b)
   (add-music #{ \noPageBreak #})
   (add-text c))

#(do ((i 1 (1+ i))) ((> i 200))
   (three-line-entry "Entry nr.:"
                     (number->string i)
                     "End of entry. Only good place for page break."))

Best
Lukas
## Re: Protecting against page breaks in markup
In reply to this post by Richard Shann-2

On Wed, Nov 28, 2018, 1:50 AM Richard Shann <[hidden email]> wrote:
> Sorry, I should have made it clearer that this index is being
> automatically generated. Now you point out that there is an
> \autoPageBreaksOff, which is what I thought I needed, I realize that it
> wouldn't help - I would need to turn them off and back on between each
> entry in the hope that LilyPond would take advantage of such an Off/On
> sequence to insert a page break in between if needed, which I'm sure it
> wouldn't :(
>
> Richard

Yeah, once \autoPageBreaksOff is declared, all page breaks need to be added manually. I've found that when the automatic page breaks don't work, it is usually easier to go this way than the other alternative, which is to use auto page breaks and add \noPageBreak where necessary. Mostly because forcing a no page break doesn't guarantee that the newly calculated automatic break will be in a good place, so you often end up adding lots of \noPageBreak commands in close succession, and it ends up being more tedious and less semantic.

Also, page break calculations take up time, so using manual breaks speeds up compilation and saves time in the long run.

I'm not sure if, when you say this is being automatically generated, you mean that it may be produced in different editions which need different page breaks. But even in that case it is probably easiest to use tags or something like the edition engraver to specify the page breaks.

You can also try putting scores in \bookpart blocks, which has the effect of forcing a page break at the start of the book part.
## Re: Protecting against page breaks in markup
In reply to this post by Lukas-Fabian Moser

On Wed, 2018-11-28 at 15:45 +0100, Lukas-Fabian Moser wrote:
> I'm not sure I understand you correctly, but it is perfectly possible
> to forbid page breaks between arbitrary markup lines:

yes, that's what I needed to do, and I'm now doing that - it's working fine now. What I didn't realize was that Lily will not put page breaks inside a \markup {} - Harm pointed this out to me via his \column {\line .. \line ...} example, Lily does not break at any of the \lines.

Once he'd tipped me off I looked again at the docs and saw that, although they don't quite explicitly say that Lily will not page break inside a markup block, it is implied by the bit of the docs I quoted: "The \pageBreak and \noPageBreak commands may also be inserted [...] between [...] top-level markups."

Perhaps the Docs should contain an explicit statement there that no automatic breaking will happen inside a \markup {} ...

Richard
## Re: Protecting against page breaks in markup
On Thu, 29 Nov 2018 at 09:33, Richard Shann <[hidden email]> wrote:
> Perhaps the Docs should contain an explicit statement there that no
> automatic breaking will happen inside a \markup {} ...

Well, in NR 1.8.1 Writing text one can read about toplevel markup/markuplist:

"
Separate text
...
Separate text blocks can be spread over multiple pages, making it possible to print text documents or books entirely within LilyPond. This feature, and the specific syntax it requires, are described in Multi-page markup.
...
"

And later

"
Multi-page markup

Although standard markup objects are not breakable, a specific syntax makes it possible to enter lines of text that can spread over multiple pages:
"

Could you suggest how to improve this?

Cheers,
Harm
## Re: Protecting against page breaks in markup
On Thu, 2018-11-29 at 09:50 +0100, Thomas Morley wrote:
> Well, in NR 1.8.1 Writing text one can read about toplevel
> markup/markuplist:
> "
> Separate text
> ...
> Separate text blocks can be spread over multiple pages, making it
> possible to print text documents or books entirely within LilyPond.
> This feature, and the specific syntax it requires, are described in
> Multi-page markup.
> ...
> "
>
> And later
>
> "
> Multi-page markup
>
> Although standard markup objects are not breakable, a specific syntax
> makes it possible to enter lines of text that can spread over multiple
> pages:
> "
>
> Could you suggest how to improve this?

Yes, I think I can. The presence of the word "Although" in the last-quoted paragraph indicates that the writer expected that the fact that standard markup objects were not breakable had been documented elsewhere.

I suggest

"4.3.2 Page breaking

The default page breaking may be overridden by inserting \pageBreak or \noPageBreak commands."

could become

"4.3.2 Page breaking

By default page breaks may be inserted at bar lines and between top-level markups. The default page breaking may be overridden by inserting \pageBreak or \noPageBreak commands."

As a further point, is the term "standard markup objects" well-documented - does it mean "top-level markups", or what I tend to refer to as \markup{} blocks?

Richard
## Re: Protecting against page breaks in markup
Hi Richard,

please bear in mind I'm not a native speaker, thus work on the docs is pretty difficult for me. That said:

On Thu, 29 Nov 2018 at 11:04, Richard Shann <[hidden email]> wrote:
> Yes, I think I can. The presence of the word "Although" in the last-
> quoted paragraph indicates that the writer expected that the fact that
> standard markup objects were not breakable had been documented
> elsewhere.

I think "standard markup" is a little foggy. Probably:

"Although text objects invoked with \markup are not breakable, ..."

and in NR 1.8.1

Separate text
...
Separate text entered with \markup can't be distributed over multiple pages, thus a page break will happen only before or after the whole text. In extreme cases the text will exceed the paper bottom. Nevertheless, separate text blocks can be spread over multiple pages, making it possible to print text documents or books entirely within LilyPond. This feature, and the specific syntax it requires, are described in Multi-page markup.

> I suggest
>
> "4.3.2 Page breaking
>
> The default page breaking may be overridden by inserting \pageBreak or
> \noPageBreak commands."
>
> could become
>
> "4.3.2 Page breaking
>
> By default page breaks may be inserted at bar lines and between top-
> level markups. The default page breaking may be overridden by inserting
> \pageBreak or \noPageBreak commands."

Quoting a little more from NR:

"The default page breaking may be overridden by inserting \pageBreak or \noPageBreak commands. These commands are analogous to \break and \noBreak. They should be inserted at a bar line. [...] The \pageBreak and \noPageBreak commands may also be inserted at top-level, between scores and top-level markups."

Does it not contain all what's needed to know?

> As a further point, is the term "standard markup objects" well-
> documented - does it mean "top-level markups", or what I tend to refer
> to as \markup{} blocks?

I think what's meant is the difference between \markup and \markuplist.

Cheers,
Harm
## Re: Protecting against page breaks in markup
Thomas Morley <[hidden email]> writes:
> On Thu, 29 Nov 2018 at 11:04, Richard Shann <[hidden email]> wrote:
>> As a further point, is the term "standard markup objects" well-
>> documented - does it mean "top-level markups", or what I tend to refer
>> to as \markup{} blocks?
>
> I think what's meant is the difference between \markup and \markuplist

It's worth pointing out that for typographic treatment a toplevel markup (namely a markup invoked outside of any other expression) is indistinguishable from a markup list with a single element: either are processed by calling toplevel-text-handler with a markup list (in case of the markup, a list containing just one markup as element).

--
David Kastrup
## Re: Protecting against page breaks in markup
On Thu, 29 Nov 2018 at 23:22, David Kastrup <[hidden email]> wrote:
> It's worth pointing out that for typographic treatment a toplevel markup
> (namely a markup invoked outside of any other expression) is
> indistinguishable from a markup list with a single element: either are
> processed by calling toplevel-text-handler with a markup list (in case
> of the markup, a list containing just one markup as element).
>
> --
> David Kastrup

You mean what can be observed with below?

\markup \italic "foo-1"
\markup \italic "bar-1"
\markup \italic "buzz-1"

\markuplist \italic { "foo-2" "bar-2" "buzz-2" }

#(newline)
#(display-scheme-music (reverse (ly:parser-lookup 'toplevel-scores)))

=> (list (list (markup #:italic "foo-1"))
         (list (markup #:italic "bar-1"))
         (list (markup #:italic "buzz-1"))
         (list (markup #:italic "foo-2")
               (markup #:italic "bar-2")
               (markup #:italic "buzz-2")))

If I add:

\paper {
  ragged-last-bottom = ##f
  markup-markup-spacing.stretchability = 1000
}

and watch the printed output, the single markups are distributed over the page, while the elements of the markuplist are kept close together.

Am undecided whether I should have expected it or should be surprised ...
lol Cheers, Harm _______________________________________________ lilypond-user mailing list [hidden email] https://lists.gnu.org/mailman/listinfo/lilypond-user | 2019-05-24 19:44:13 | {"extraction_info": {"found_math": false, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.853262722492218, "perplexity": 4751.6752090218715}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2019-22/segments/1558232257731.70/warc/CC-MAIN-20190524184553-20190524210553-00128.warc.gz"} |
http://abstract.ups.edu/aata/section-irreducible-poly.html | A nonconstant polynomial $f(x) \in F[x]$ is irreducible over a field $F$ if $f(x)$ cannot be expressed as a product of two polynomials $g(x)$ and $h(x)$ in $F[x]$, where the degrees of $g(x)$ and $h(x)$ are both smaller than the degree of $f(x)$. Irreducible polynomials function as the “prime numbers” of polynomial rings.
##### Example 17.11
The polynomial $x^2 - 2 \in {\mathbb Q}[x]$ is irreducible since it cannot be factored any further over the rational numbers. Similarly, $x^2 + 1$ is irreducible over the real numbers.
##### Example 17.12
The polynomial $p(x) = x^3 + x^2 + 2$ is irreducible in ${\mathbb Z}_3[x]$. Suppose that this polynomial were reducible in ${\mathbb Z}_3[x]$. By the division algorithm there would have to be a factor of the form $x - a$, where $a$ is some element in ${\mathbb Z}_3$. Hence, it would have to be true that $p(a) = 0$. However, \begin{align*} p(0) & = 2\\ p(1) & = 1\\ p(2) & = 2. \end{align*} Therefore, $p(x)$ has no zeros in ${\mathbb Z}_3$ and must be irreducible.
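The zero check above is easy to verify mechanically. A small Python sketch (my addition, not part of the text) evaluating $p$ at every element of ${\mathbb Z}_3$:

```python
# p(x) = x^3 + x^2 + 2 over Z_3: a cubic with no zeros in the field
# has no linear factor, hence is irreducible over Z_3.
def p(x, mod=3):
    return (x**3 + x**2 + 2) % mod

values = {a: p(a) for a in range(3)}
print(values)  # {0: 2, 1: 1, 2: 2}
assert all(v != 0 for v in values.values())
```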
##### Example 17.16
Let $p(x) = x^4 - 2 x^3 + x + 1$. We shall show that $p(x)$ is irreducible over ${\mathbb Q}[x]$. Assume that $p(x)$ is reducible. Then either $p(x)$ has a linear factor, say $p(x) = (x - \alpha) q(x)$, where $q(x)$ is a polynomial of degree three, or $p(x)$ has two quadratic factors.
If $p(x)$ has a linear factor in ${\mathbb Q}[x]$, then it has a zero in ${\mathbb Z}$. By Corollary 17.15, any zero must divide 1 and therefore must be $\pm 1$; however, $p(1) = 1$ and $p(-1)= 3$. Consequently, we have eliminated the possibility that $p(x)$ has any linear factors.
Therefore, if $p(x)$ is reducible it must factor into two quadratic polynomials, say \begin{align*} p(x) & = (x^2 + ax + b )( x^2 + cx + d )\\ & = x^4 + (a + c)x^3 + (ac + b + d)x^2 + (ad + bc)x + bd, \end{align*} where each factor is in ${\mathbb Z}[x]$ by Gauss's Lemma. Hence, \begin{align*} a + c & = - 2\\ ac + b + d & = 0\\ ad + bc & = 1\\ bd & = 1. \end{align*} Since $bd = 1$, either $b = d = 1$ or $b = d = -1$. In either case $b = d$ and so \begin{equation*}ad + bc = b( a + c ) = 1.\end{equation*} Since $a + c = -2$, we know that $-2b = 1$. This is impossible since $b$ is an integer. Therefore, $p(x)$ must be irreducible over ${\mathbb Q}$.
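The linear-factor elimination step in this example can be checked numerically. A quick sketch (mine, not the book's) evaluating $p$ at the only candidate integer zeros $\pm 1$:

```python
# p(x) = x^4 - 2x^3 + x + 1: any rational zero must be an integer
# dividing the constant term 1, so only +1 and -1 are candidates.
def p(x):
    return x**4 - 2*x**3 + x + 1

print(p(1), p(-1))  # 1 3  -- neither is zero, so there is no linear factor
```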
##### Example 17.18
The polynomial \begin{equation*}f(x) = 16 x^5 - 9 x^4 + 3x^2 + 6 x - 21\end{equation*} is easily seen to be irreducible over ${\mathbb Q}$ by Eisenstein's Criterion if we let $p = 3$.
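The three conditions of Eisenstein's Criterion can be checked mechanically for this example. A sketch (my own helper, not from the text), with coefficients listed from the leading term down:

```python
# Eisenstein's Criterion for a prime p: p does not divide the leading
# coefficient, p divides every other coefficient, and p^2 does not
# divide the constant term.
def eisenstein(coeffs, p):
    """coeffs listed from leading coefficient to constant term."""
    lead, rest, const = coeffs[0], coeffs[1:], coeffs[-1]
    return (lead % p != 0
            and all(c % p == 0 for c in rest)
            and const % (p * p) != 0)

# f(x) = 16x^5 - 9x^4 + 0x^3 + 3x^2 + 6x - 21, with p = 3
print(eisenstein([16, -9, 0, 3, 6, -21], 3))  # True
```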
Eisenstein's Criterion is more useful in constructing irreducible polynomials of a certain degree over ${\mathbb Q}$ than in determining the irreducibility of an arbitrary polynomial in ${\mathbb Q}[x]$: given an arbitrary polynomial, it is not very likely that we can apply Eisenstein's Criterion. The real value of Theorem 17.17 is that we now have an easy method of generating irreducible polynomials of any degree.
# Subsection: Ideals in $F\lbrack x \rbrack$
Let $F$ be a field. Recall that a principal ideal in $F[x]$ is an ideal $\langle p(x) \rangle$ generated by some polynomial $p(x)$; that is, \begin{equation*}\langle p(x) \rangle = \{ p(x) q(x) : q(x) \in F[x] \}.\end{equation*}
##### Example 17.19
The polynomial $x^2$ in $F[x]$ generates the ideal $\langle x^2 \rangle$ consisting of all polynomials with no constant term or term of degree 1.
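The membership condition for this ideal can be phrased as a simple coefficient test. A sketch (my illustration, not part of the text), representing a polynomial by its coefficients from the constant term upward:

```python
# Membership in <x^2>: a polynomial is a multiple of x^2 exactly when
# its constant and degree-1 coefficients are both zero.
def in_x_squared(coeffs):
    return all(c == 0 for c in coeffs[:2])

print(in_x_squared([0, 0, 3, 1]))  # 3x^2 + x^3  -> True
print(in_x_squared([0, 2, 1]))     # 2x + x^2    -> False
```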
##### Example 17.21
It is not the case that every ideal in the ring $F[x,y]$ is a principal ideal. Consider the ideal of $F[x, y]$ generated by the polynomials $x$ and $y$. This is the ideal of $F[x, y]$ consisting of all polynomials with no constant term. Since both $x$ and $y$ are in the ideal, no single polynomial can generate the entire ideal.
Throughout history, the solution of polynomial equations has been a challenging problem. The Babylonians knew how to solve the equation $ax^2 + bx + c = 0$. Omar Khayyam (1048–1131) devised methods of solving cubic equations through the use of geometric constructions and conic sections. The algebraic solution of the general cubic equation $ax^3 + bx^2 + cx + d = 0$ was not discovered until the sixteenth century. An Italian mathematician, Luca Pacioli (ca. 1445–1509), wrote in Summa de Arithmetica that the solution of the cubic was impossible. This was taken as a challenge by the rest of the mathematical community. | 2017-04-28 00:32:46 | {"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.9672520756721497, "perplexity": 131.50638394508934}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2017-17/segments/1492917122720.81/warc/CC-MAIN-20170423031202-00103-ip-10-145-167-34.ec2.internal.warc.gz"} |
https://itprospt.com/num/4375886/evaluate-the-integral5-2-fe-dz5-2-f-dz-use-a-as-the-arbitrary | 5
# Evaluate the integral

## Question

Evaluate the integral $\int \frac{5}{2} e^{z} \, dz$. (Use $C$ as the arbitrary constant.)
#### Similar Solved Questions
##### A research lab has developed a new process to make a specialty rope. The lab would like to show that rope made under the new process has stronger breaking strength than rope made under the old process. Using seven randomly selected batches of the material for making ropes, the lab tries the new process for one half of each batch and the old process for the other half. The breaking strength of a rope from each half-batch is recorded below, along with some useful sample summaries for the table.
##### Two metal spheres with mass m = 15.0 g are suspended from L = 8.0 cm long insulated strings from the ceiling, d = 4.0 cm apart. The two spheres are charged, with one sphere getting charge q and the other one 3q. The spheres repel one another and are in equilibrium when they hang at an angle θ = 39° from vertical. Determine the value of charge q. [8.5 × 10⁻⁸ C]
##### 16. A researcher conducts an independent-measures study comparing two treatments and reports the t statistic as t(25) = 2.071. How many individuals participated in the entire study? Using a two-tailed test with α = .05, is there a significant difference between the two treatments? Compute r² to measure the percentage of variance accounted for by the treatment effect.
##### Use cylindrical coordinates. Evaluate the integral $\iiint_E e^{z} \, dV$, where E is enclosed by the paraboloid $z = 4 + x^2 + y^2$, the cylinder $x^2 + y^2 = 4$, and the xy-plane.
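In cylindrical coordinates the integral reduces to $2\pi \int_0^2 r\,(e^{4+r^2}-1)\,dr = \pi(e^8 - e^4) - 4\pi$. A sketch (my own check, not part of the question) comparing the closed form with a crude midpoint quadrature:

```python
from math import exp, pi

# Closed form: 2*pi * integral_0^2 r*(e^(4+r^2) - 1) dr = pi*(e^8 - e^4) - 4*pi
closed = pi * (exp(8) - exp(4)) - 4 * pi

# Midpoint-rule check of the remaining 1-D integral in r
n = 100_000
h = 2 / n
numeric = 2 * pi * sum((i + 0.5) * h * (exp(4 + ((i + 0.5) * h) ** 2) - 1) * h
                       for i in range(n))

print(round(closed, 2), round(numeric, 2))  # 9180.86 9180.86
```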
##### Local maximum: (2, 512) Local minimum: (10, 0) Inflection point: (4, 324)
##### Question: If $y = 2x^3$, find $\frac{dy}{dx}$. Find the equation of the tangent line at $x = 1$.
##### Your friends show you an image through a microscope. They tell you that the microscope has an objective with a $0.500\ \mathrm{cm}$ focal length and an eyepiece with a $5.00\ \mathrm{cm}$ focal length. The resulting overall magnification is 250,000. Are these viable values for a microscope?
##### 4. Find the solution to the following initial/boundary value problem: DE: $u_t = u_{xx}$, $0 < x < \pi$, $t \ge 0$; BC: $u(0,t) = 0$, $u(\pi,t) = 0$; IC: $u(x,0) = \sin(3x)\,[1 + \cos(5x)]$.
##### Find a unit vector with the same direction as v. $$\mathbf{v}=\langle-1,1\rangle$$
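A quick numeric check of this normalization (my sketch, not part of the text): divide $\mathbf{v}$ by its length $\sqrt{2}$.

```python
from math import hypot

# Normalize v = <-1, 1>: divide by |v| = sqrt(2).
v = (-1.0, 1.0)
norm = hypot(*v)
u = (v[0] / norm, v[1] / norm)
print(round(u[0], 4), round(u[1], 4))  # -0.7071 0.7071
```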
##### Solve the problem. In a certain population, 11% of people are left-handed. Suppose that you plan to randomly select 100 people and ask each person whether they are left-handed. Suppose that in calculating each of the probabilities below, you use the normal distribution as an approximation to the binomial but that you fail to use the continuity correction. In which cases will you obtain an answer that is too large? A: the probability that among the 100 people at least 12 are left-handed; B: the probability ...
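For case A, the effect of skipping the continuity correction can be seen directly. A standard-library sketch (my own illustration, using the numbers from the question):

```python
from math import comb
from statistics import NormalDist

n, p = 100, 0.11
mu = n * p
sigma = (n * p * (1 - p)) ** 0.5

# Exact binomial tail P(X >= 12)
exact = sum(comb(n, k) * p**k * (1 - p)**(n - k) for k in range(12, n + 1))

# Normal approximation WITHOUT the continuity correction (cut at 12,
# not 11.5) -- this understates the tail, so the answer comes out too small.
approx = 1 - NormalDist(mu, sigma).cdf(12)

print(exact > approx)  # True
```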
##### 5.2.35-T: A pharmaceutical company receives large shipments of aspirin tablets. The acceptance sampling plan is to randomly select and test 44 tablets, then accept the whole batch if there is only one or none that doesn't meet the required specifications. If one shipment of 7000 aspirin tablets actually has a 2% rate of defects, what is the probability that this whole sh...
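A binomial sketch of the acceptance probability (my own; it treats the 44 draws as independent trials, the usual approximation when sampling 44 tablets from 7000):

```python
from math import comb

# Accept the batch when at most 1 of the 44 tested tablets is
# defective, with a 2% defect rate.
n, p = 44, 0.02
p_accept = sum(comb(n, k) * p**k * (1 - p)**(n - k) for k in range(2))
print(round(p_accept, 3))  # 0.78
```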
##### Solve the system of equations using the inverse of the coefficient matrix of the equivalent matrix equation. \begin{aligned} 2 x+3 y+4 z &=2 \\ x-4 y+3 z &=2 \\ 5 x+y+z &=-4 \end{aligned}
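The same system can be checked with a few lines of Python. This sketch (mine, not the book's) uses Cramer's rule rather than an explicit matrix inverse, which is equivalent here since $\det(A) \ne 0$:

```python
from fractions import Fraction

# Solve  2x + 3y + 4z = 2,  x - 4y + 3z = 2,  5x + y + z = -4
def det3(m):
    (a, b, c), (d, e, f), (g, h, i) = m
    return a*(e*i - f*h) - b*(d*i - f*g) + c*(d*h - e*g)

A = [[2, 3, 4], [1, -4, 3], [5, 1, 1]]
b = [2, 2, -4]
D = det3(A)

def replaced(j):
    # Copy A with column j replaced by the right-hand side b
    return [[b[i] if k == j else A[i][k] for k in range(3)] for i in range(3)]

x, y, z = (Fraction(det3(replaced(j)), D) for j in range(3))
print(x, y, z)  # -1 0 1
```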
##### Using a sample to compute a __________ as a proxy for a parameter is generally called point estimation and is a form of statistical inference.
##### Assume you obtained an experimental value of 9.45 m/s². What is the percentage difference of this value from the theoretical value? Give your answer as a percentage to two significant figures.
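Assuming the "theoretical value" is the free-fall acceleration g = 9.81 m/s² (the question as quoted does not state it, so this is an assumption), the computation is:

```python
# Percentage difference of the measured value from an assumed
# theoretical value of g = 9.81 m/s^2.
g_theory = 9.81   # assumed; not given in the question as quoted
g_exp = 9.45
pct = abs(g_exp - g_theory) / g_theory * 100
print(round(pct, 1))  # 3.7
```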
##### A study is done using a treatment group and a placebo group. Assume that the two samples are independent simple random samples selected from normally distributed populations, and do not assume that the population standard deviations are equal. Complete parts (a) and (b) below, using a 0.05 significance level for both parts. Test the claim that the two samples are from populations with the same mean. What are the null and alternative hypotheses?
Aeudy = done Ucin? Ireubrner g oup and 3 Nacobo grp Tna Eesunstanehwn bo Assuina Ihal E1e tAQ Enmolog nro Inatponet ernte rhndom samplos uolocted Irom normally Golribuled populalloma and cnol assume Ihnt Ih= pepululan alindurd dovulons 4e07ua Complel purte (olund (blbala# Ure 0.05 alqnillcincn Inval... | 2022-08-12 09:05:17 | {"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 1, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.8396010994911194, "perplexity": 10368.348729205665}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2022-33/segments/1659882571597.73/warc/CC-MAIN-20220812075544-20220812105544-00799.warc.gz"} |
https://www.gradesaver.com/textbooks/science/chemistry/chemistry-and-chemical-reactivity-9th-edition/chapter-2-atoms-molecules-and-ions-study-questions-page-95k/161 | ## Chemistry and Chemical Reactivity (9th Edition)
Number of moles of calcium chloride: $0.739\ g\div 110.97\ g/mol=0.00666\ mol$ Number of moles of water: $(0.832-0.739)\ g\div 18.015\ g/mol=0.00516\ mol$ Ratio: $0.00516/0.00666=0.775$ mol of water per mol of CaCl$_2$. Since this value is smaller than the actual ratio for the hydrate, there's still water in the sample.
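The arithmetic above can be checked with a short script (a sketch; the variable names are illustrative):

```python
# Mole amounts and ratios for the hydrate sample described above.
molar_mass_cacl2 = 110.97   # g/mol
molar_mass_h2o = 18.015     # g/mol

mol_cacl2 = 0.739 / molar_mass_cacl2          # ~0.00666 mol
mol_h2o = (0.832 - 0.739) / molar_mass_h2o    # ~0.00516 mol

ratio_salt_per_water = mol_cacl2 / mol_h2o    # ~1.29
ratio_water_per_salt = mol_h2o / mol_cacl2    # ~0.78
print(round(ratio_salt_per_water, 2), round(ratio_water_per_salt, 2))
```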
http://www.r-bloggers.com/tag/graphs/ | # Posts Tagged ‘ graphs ’
## An Improvement to Coefficient Plots
July 3, 2012
I recently posted about coefficient plots, discussing my approach and providing some example R code to create the graphs. I had the good fortune of hearing Amanda Driscoll give a talk recently, and she made a small, but really nice … Continue rea...
## Coefficient Plots in R
June 30, 2012
One popular trend in presenting results is the "coefficient plot," an alternative to the table of regression coefficients. I am seeing this a little more often in political science research and have received a few requests for code, so I … Contin...
## Why You Shouldn’t Conclude "No Effect" from Statistically Insignificant Slopes
June 16, 2012
It is quite common in political science for researchers to run statistical models, find that a coefficient for a variable is not statistically significant, and then claim that the variable "has no effect." This is equivalent to proposing a research ...
## Using Inkscape to Post-edit Labels in R Graphs
September 26, 2011
I discuss how to use Inkscape to easily shift around labels on graphs produced in R. Continue reading →
## What value is cross country GDP correlation? [Part One]
May 6, 2011
The above graph borders on chartjunk (and is nothing like Paul Butler’s amazing Facebook map). We can see some variation in color but mostly it is a set of lines between 152 country capitals with no means to determine which … Continue reading →
## Day #34 Detailing graphs
May 3, 2011
Today mostly existed in adding details or changing certain aspects of my graphs. For example, I had to turn around the y-axis on my levelplot, circleplot, … which wasn’t so easy at first. But after a bit of googling I found out I had to rev...
## Beeswarm Boxplot (and plotting it with R)
March 10, 2011
(The image above is called a “Beeswarm Boxplot” , the code for producing this image is provided at the end of this post) The above plot is implemented under different names in different softwares. This “Scatter Dot Beeswarm Box Violin – plot” (in the lack of an agreed upon term) is a one-dimensional scatter plot
## Back from Philly
December 20, 2010
The conference in honour of Larry Brown was quite exciting, with lots of old friends gathered in Philadelphia and lots of great talks either recollecting major works of Larry and coauthors or presenting fairly interesting new works. Unsurprisingly, a large chunk of the talks was about admissibility and minimaxity, with John Hartigan starting the day
## Once again, chart critics and graph gurus welcome
December 10, 2010
HOW TO DISPLAY A LINE PLOT WITH COUNT INFORMATION? In a previously-mentioned paper Sharad and your DSN editor are writing up, there is the above line plot with points. The area of each point shows the count of observations. It’s done in R with ggplot2 (hooray for Hadley). We generally like this type of plot,
## Le Monde puzzle [48: resolution]
December 4, 2010
The solution to puzzle 48 given in Le Monde this weekend is rather direct (which makes me wonder why the solution for 6 colours is still unavailable..) Here is a quick version of the solution: Consider one column, 1 say. Since 326=5×65+1, there exists one value c with at least 66 equal to c. Among
http://jbm950.github.io/notes/analytical_dynamics/analytical_dynamics_main.html | # Analytical Dynamics
Taught by: Dr. Riccardo Bevilacqua
Taken: FS 2015
Text: Dynamics of Particles and Rigid Bodies: A Systematic Approach
by: Anil Rao
## Scalars
• Scalars do not need reference frames or coordinate systems
• Derivatives of scalars follow the formal definition of the derivative
• Ex. Let $y(t)$ be a scalar function of time $t$; then $\frac{dy}{dt} = \lim_{\Delta t \to 0}\frac{y(t+\Delta t)-y(t)}{\Delta t}$
## Vectors
• Vectors are quantities in $\mathbb{R}^3$ (3D Euclidean Space) that have both magnitude and direction
• Magnitude is defined as $\|\mathbf{v}\| = \sqrt{\mathbf{v}\cdot\mathbf{v}}$
• Direction of the vector is defined by its unit vector $\mathbf{u} = \mathbf{v}/\|\mathbf{v}\|$
• Vectors are reference frame independent
• Vectors hold their true magnitude and direction no matter what reference frame you're observing from
• There are three types of vectors
1. Free vectors - & are the same vector if they have the same magnitude and direction
2. Sliding vectors - & are the same vector if they have the same magnitude and direction & share the same line of action in
• Ex. force vectors
3. Bound vectors - & are the same bound vector if they are the same sliding vector & share the same point of application
• Ex. position vectors
## Reference Frames
• A reference frame is any set of at least 3 non-colinear points whose mutual distances are constant
• A reference frame is a rigid body and vice versa
• Deciding reference frames allows for the formulation of problems
• They define a point of view but not a basis of measurement
## Coordinate System
• Coordinate systems are a set of three linearly independent vectors $\{\mathbf{e}_1, \mathbf{e}_2, \mathbf{e}_3\}$ in $\mathbb{R}^3$ (3D Euclidean Space)
• There are three main conveniences that are made when creating a coordinate system
1. Have all three vectors $\{\mathbf{e}_1, \mathbf{e}_2, \mathbf{e}_3\}$ be mutually orthogonal
2. Have all three vectors $\{\mathbf{e}_1, \mathbf{e}_2, \mathbf{e}_3\}$ be unit vectors
3. Create a right handed system
• Coordinate systems are built on reference frames to allow measurement within the frame or allow a problem to be implemented
• The coordinate system moves with its reference frame
• Coordinate systems must be fixed with only one reference frame whereas each reference frame can have infinite coordinate systems
• Bigger problems come from mistakes in the formulation of the problem (deciding reference frames) than mistakes in the implementation of the problem (deciding and using the coordinate systems)
• Two steps to defining a coordinate system
1. Pick an origin
2. Define the three right-handed orthogonal unit vectors, $\{\mathbf{e}_1, \mathbf{e}_2, \mathbf{e}_3\}$
## Vector Derivatives
• This class will be using Newtonian Mechanics rather than Relativistic Mechanics
• Time is the independent variable ( Galilean Invariance)
• Although vectors are reference frame independent, the geometric meaning of their derivative vectors are not reference frame independent (though the derivative vectors themselves being vectors are reference frame independent)
• Development of the general form of a derivative of a vector as viewed from reference frame A
• The derivatives of the scalars are reference frame independent but the derivatives of the vectors are not
• Utilizing the multiplication rule of derivatives
• The first three terms represent the rate of change of vector in the coordinate system defined by the vectors
• The last three terms represent the rate of change of the coordinate system defined by the vectors with respect to reference frame A
• If the vectors {} are fixed in reference frame A then the last three terms in the derivative equal zero
## Transport Theorem
• The transport theorem is a simplification of the vector derivative and can be stated as $\frac{{}^A d\mathbf{b}}{dt} = \frac{{}^B d\mathbf{b}}{dt} + {}^A\boldsymbol{\omega}^B \times \mathbf{b}$, where $\{\mathbf{e}_1, \mathbf{e}_2, \mathbf{e}_3\}$ are fixed in reference frame B and ${}^A\boldsymbol{\omega}^B$ is the angular velocity of frame B as seen by an observer in reference frame A
• Some useful properties of the angular velocity are:
### Transport Theorem Derivation
• Taking the resultant formula for the derivative of a vector from above where {} are fixed in reference frame B we have or
• Next we need to use some of the mathematical definitions of the {} vectors
• The vectors are all unit vectors therefore $\mathbf{e}_i \cdot \mathbf{e}_i = 1$ for i = 1, 2, 3
• The vectors are all mutually orthogonal therefore $\mathbf{e}_i \cdot \mathbf{e}_j = 0$ for $i \neq j$
• Taking the derivative of the unit vector rule we obtain
• A result of this is that it shows that a unit length vector, , will always be perpendicular to its derivative
• Taking the derivative of the orthogonal rule we obtain or
• The next step is to write out the derivate of each unit vector in terms of a component of , the angular velocity, multiplied by each of the unit vectors
• We know from the derivative of the unit vector rule that the derivative of a unit vector will not contain any component along the direction of that unit vector ()
• We also know from the result of the derivative of the orthogonal rule that the component of the derivative of first vector in the second vectors direction is the negative of the second vectors derivative in the first vector's direction (, , )
• With these two rules we can re-write the three equations
• We now define the following three $\omega$'s: $\omega_1 = \dot{\mathbf{e}}_2 \cdot \mathbf{e}_3$, $\omega_2 = \dot{\mathbf{e}}_3 \cdot \mathbf{e}_1$ and $\omega_3 = \dot{\mathbf{e}}_1 \cdot \mathbf{e}_2$
• With the redefinitions we can introduce ${}^A\boldsymbol{\omega}^B$, the angular velocity of frame B as seen by an observer in frame A, as ${}^A\boldsymbol{\omega}^B = \omega_1\mathbf{e}_1 + \omega_2\mathbf{e}_2 + \omega_3\mathbf{e}_3$
• This definition of angular velocity allows the equation to produce the three above equations
• Substituting the equation for into the very first equation results in
• To simplify further separate the angular velocity out of the cross products:
• Now taking the definition of ${}^A\boldsymbol{\omega}^B$ we can re-write the above expression as $\frac{{}^A d\mathbf{b}}{dt} = \frac{{}^B d\mathbf{b}}{dt} + {}^A\boldsymbol{\omega}^B \times \mathbf{b}$, which is the transport theorem
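The theorem can be checked numerically. In this sketch (the rotation rate and test vector are arbitrary choices, not from the notes), frame B rotates about the z-axis of frame A, a vector fixed in B is differentiated by finite differences as seen from A, and the result is compared with the transport-theorem prediction:

```python
# Numerical check of the transport theorem for a vector b fixed in a
# frame B that rotates about the z-axis of frame A at rate w.
# Since (B)db/dt = 0, we expect (A)db/dt = omega x b.
import numpy as np

w = 0.7                            # rad/s, rotation rate of B in A (arbitrary)
b_B = np.array([1.0, 2.0, 0.5])    # components of b in B (constant there)

def R(t):
    """Rotation matrix taking B components to A components at time t."""
    c, s = np.cos(w * t), np.sin(w * t)
    return np.array([[c, -s, 0.0], [s, c, 0.0], [0.0, 0.0, 1.0]])

t, h = 1.3, 1e-6
b_A = R(t) @ b_B
# Finite-difference derivative of b as seen from A:
db_dt_numeric = (R(t + h) @ b_B - R(t - h) @ b_B) / (2 * h)
# Transport theorem prediction, omega x b:
omega = np.array([0.0, 0.0, w])
db_dt_transport = np.cross(omega, b_A)
print(np.allclose(db_dt_numeric, db_dt_transport, atol=1e-6))  # True
```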
## Cylindrical Coordinate Systems
• In the given diagram a cylindrical coordinate system can be established in the reference frame defined by the plane containing points O, P and Q (or vectors r and )
• The coordinate system in this reference frame will be:
• Origin at O
• = along
• =
• =
• This coordinate system rotates with and as it rotates the line forms the outside of a cylinder which gives the coordinate system its name
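The standard cylindrical-coordinate velocity formula $\mathbf{v} = \dot{r}\,\mathbf{e}_r + r\dot{\theta}\,\mathbf{e}_\theta + \dot{z}\,\mathbf{e}_z$ can be verified against a finite-difference derivative in Cartesian coordinates (a sketch; the functions $r(t)$, $\theta(t)$, $z(t)$ are arbitrary smooth choices):

```python
# Check the cylindrical-coordinate velocity formula
#   v = r' e_r + r theta' e_theta + z' e_z
# against a finite-difference derivative of the Cartesian position.
import numpy as np

def r_of(t): return 2.0 + 0.3 * t     # r(t), arbitrary smooth choice
def th_of(t): return 0.5 * t ** 2     # theta(t)
def z_of(t): return 0.1 * t           # z(t)

def cart(t):
    r, th = r_of(t), th_of(t)
    return np.array([r * np.cos(th), r * np.sin(th), z_of(t)])

t, h = 0.8, 1e-6
v_numeric = (cart(t + h) - cart(t - h)) / (2 * h)

r, th = r_of(t), th_of(t)
rdot = 0.3       # d/dt of r_of
thdot = t        # d/dt of th_of = t
zdot = 0.1       # d/dt of z_of
e_r = np.array([np.cos(th), np.sin(th), 0.0])
e_th = np.array([-np.sin(th), np.cos(th), 0.0])
e_z = np.array([0.0, 0.0, 1.0])
v_formula = rdot * e_r + r * thdot * e_th + zdot * e_z
print(np.allclose(v_numeric, v_formula, atol=1e-6))  # True
```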
## Spherical Coordinate Systems
• For the given diagram we will create a new reference frame that contains the vector and is perpendicular to the plane created by and
• Note that this new reference plane differs from the reference frame containing the cylindrical coordinate system by an angle
• This reference frame rotates with and
• The spherical coordinate system in this new reference frame will be:
• Origin at O
• = along
• = (from cylindrical coordinate system)
• =
• For a constant , the full rotation of point P with respect to both and produces a sphere which is where the name for this coordinate system originates
## Euler Angles
1. There is no vector whose time derivative is equal to the angular velocity
• For ${}^A\boldsymbol{\omega}^B$ there will never be a vector $\boldsymbol{\theta}$ such that $\frac{d\boldsymbol{\theta}}{dt} = {}^A\boldsymbol{\omega}^B$
2. Rotations do NOT commute
• Order is extremely important
• There are many different rotation orders (12 combinations for three axes)
• Can repeat numbers so long as the repeated numbers are not sequential
• The most commonly used rotation sequence is the 321 sequence (z, y, x)
• First rotation in the 321 sequence is around the z-axis
• The second rotation is around the new y-axis
• Last rotation is around the new x-axis
• Careful with the second and third rotations as they are around the NEW axis formed from the previous rotations
• We now have 4 different coordinate systems fixed in four different reference frames
• We can now define the angular velocity of the final, body frame as seen by the initial frame as:
• Due to the fact that the rotations can often be measured in reference frame R by gyroscopes, it is often desired to have the angular velocity defined in that coordinate system
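The non-commutativity of rotations is easy to demonstrate with rotation matrices (NumPy sketch; the angles are arbitrary):

```python
# Rotations do not commute: a z-rotation followed by a y-rotation gives
# a different orientation than y followed by z.
import numpy as np

def Rz(a):
    c, s = np.cos(a), np.sin(a)
    return np.array([[c, -s, 0.0], [s, c, 0.0], [0.0, 0.0, 1.0]])

def Ry(a):
    c, s = np.cos(a), np.sin(a)
    return np.array([[c, 0.0, s], [0.0, 1.0, 0.0], [-s, 0.0, c]])

a, b = 0.4, 1.1   # arbitrary rotation angles (rad)
print(np.allclose(Rz(a) @ Ry(b), Ry(b) @ Rz(a)))  # False: order matters
```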
### Limitations of Euler Angles
• For each sequence of rotations there is a rotation that will result in a singularity where there is an infinite combination of values for the first two rotations to arrive at that same orientation
• For a 321 sequence for example there is a singularity for a 90 degree rotation as the second rotation causing an infinite number of combinations of and to arrive at the same final orientation
• Singularities basically occur when the same axis in space is reused
## Intrinsic Coordinates
• An intrinsic coordinate system is one that is based within the motion and trajectory of a particle
• If there's no motion there can not be an intrinsic coordinate system
• The three unit vectors of an intrinsic coordinate system are the tangential vector (), the principal normal to the trajectory () and the principal binormal to the trajectory ()
• When defined for a using the motion of point P in the general reference frame A the three vectors are defined as:
• Vector points to the center of curvature of the trajectory of point P in reference frame A and so difficulties can be encountered when the trajectory is a straight line
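The three intrinsic vectors can be computed numerically from any trajectory. This sketch uses a helix (an illustrative choice, not a trajectory from the notes) and checks the principal normal against its known closed form, which points toward the helix axis:

```python
# Intrinsic (tangent / principal normal / binormal) vectors for the helix
# r(t) = (R cos t, R sin t, c t), computed by finite differences.
import numpy as np

R, c = 2.0, 0.5
t, h = 1.1, 1e-5

def r(t):
    return np.array([R * np.cos(t), R * np.sin(t), c * t])

v = (r(t + h) - r(t - h)) / (2 * h)            # velocity
a = (r(t + h) - 2 * r(t) + r(t - h)) / h ** 2  # acceleration

e_t = v / np.linalg.norm(v)                    # tangent vector
a_perp = a - (a @ e_t) * e_t                   # component of a normal to e_t
e_n = a_perp / np.linalg.norm(a_perp)          # principal normal
e_b = np.cross(e_t, e_n)                       # principal binormal

# For a helix the principal normal points at the axis: e_n = (-cos t, -sin t, 0)
print(np.allclose(e_n, [-np.cos(t), -np.sin(t), 0.0], atol=1e-4))  # True
```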
## Motion Between Two Points in the Same Reference Frame
• For two points Q and P in the same rigid body, a relation can be derived for their velocities and accelerations
• For general reference frame A the position vector between points Q and P fixed in reference frame R can be given as:
• Now taking the time derivative of in the A reference frame and including the transport theorem between the A reference frame and the R reference frame leads to the following expression
• Noting that the distance between points Q and P do not change by definition of a rigid body the first term on the right hand side of the above expression equals zero
• By taking another time derivative of the above expression the difference in accelerations of the two points can be obtained
• Now keeping in mind that is the angular acceleration and that is the velocity expression that has already been found, the acceleration expression can be simplified to the following expression
• It is important to remember that cross products are not commutative and that the order in which they are performed is important
## Rolling and Slipping
• The condition for rolling between two rigid bodies R and S in a general reference frame A is ${}^A\mathbf{v}^{C \in R} = {}^A\mathbf{v}^{C \in S}$
• where point C is the point of contact between rigid bodies R and S
• Sliding between two rigid bodies occurs anytime the rolling condition is not met
• Note that when considering rolling motion there are three point C's to consider
1. The instantaneous point of contact between the two rigid bodies which doesn't belong to either body
2. The point C belonging to rigid body R
3. The point C belonging to rigid body S
• Therefore in the above condition for rolling the term stands for the velocity of point C belonging to rigid body R in reference frame A
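The rolling condition can be verified for a disk on fixed ground using the two-points-fixed-in-a-rigid-body velocity relation (a sketch; the radius and spin rate are arbitrary):

```python
# Rolling without slip on fixed ground: the contact point C of the disk
# must have zero velocity, which forces v_O = r * omega for the center O.
import numpy as np

r = 0.3                                 # disk radius
spin = 4.0                              # spin rate (rad/s)
omega = np.array([0.0, 0.0, -spin])     # spin about -z rolls the disk toward +x
v_O = np.array([r * spin, 0.0, 0.0])    # center velocity implied by rolling
r_C_from_O = np.array([0.0, -r, 0.0])   # contact point sits below the center

# v_C = v_O + omega x (r_C - r_O): two points fixed in the same rigid body
v_C = v_O + np.cross(omega, r_C_from_O)
print(np.allclose(v_C, 0.0))  # True: rolling condition satisfied
```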
## Inertial Reference Frames
• An inertial reference frame is one for which we will assume exists as a set of absolutely stationary points
• Inertial reference frames are an imaginary concept that is an approximation of reality since all motion is relative the question arises to whom are the set of points stationary
• We can pick reference frames to be our inertial reference frame when the degree of motion of the points is negligible compared to the motion that we are studying
## Three Laws of Mechanics
• When studying kinetics there the particles are no longer massless and we are considering the effect of action on the particles (forces/vectors)
1. The first law of mechanics or the inertia law states that an object at rest tends to remain at rest and an object in motion tends to remain in motion
2. The second law of mechanics relates forces to momentum; in its most common form, $\mathbf{F} = m\,{}^N\mathbf{a}$, where ${}^N\mathbf{a}$ is the acceleration in an inertial reference frame
3. The last of the three laws of mechanics states that for every action there is an equal and opposite reaction
• If the resultant force lies on the same line of action as the initial force (sliding vectors) then it is considered the strong form of the 3rd law
• The weak form of the third law is when the resultant force and the initial force are not on the same line of action
## Angular Momentum
• Some problems are easier to solve by using the concept of angular momentum, where the angular momentum of particle P relative to point Q is defined as ${}^N\mathbf{H}^Q = (\mathbf{r}_P - \mathbf{r}_Q) \times m\,({}^N\mathbf{v}_P - {}^N\mathbf{v}_Q)$
• where $\mathbf{r}$ and ${}^N\mathbf{v}$ are measured relative to the origin of N and m is the mass of particle P
• Another useful relation from the angular momentum is the time rate of change of the angular momentum
• Note that the first term involves taking the cross product of a vector with the same vector and is therefore zero and so the expression can be rewritten as
• The above expression can be expanded as the cross product of the distance between the two points with each acceleration individually
• Since m is the mass of particle P the term is the sum of forces acting on point P by Newton's third law of mechanics
• The term is the moment of all of the forces acting on particle P with respect to point Q and so the expression can be rewritten one more time as the following
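The resulting relation, that the time rate of change of angular momentum equals the applied moment, can be checked for a single particle about a fixed point O. This sketch uses projectile motion under constant gravity (an illustrative choice):

```python
# Check dH/dt = r x F for a particle, with H taken about the fixed origin O,
# using projectile motion r(t) = r0 + v0 t + 0.5 a t^2.
import numpy as np

m = 2.0
a = np.array([0.0, -9.81, 0.0])     # constant gravitational acceleration
r0 = np.array([1.0, 2.0, 0.0])
v0 = np.array([3.0, 4.0, 1.0])

def r(t): return r0 + v0 * t + 0.5 * a * t ** 2
def v(t): return v0 + a * t
def H(t): return np.cross(r(t), m * v(t))   # angular momentum about O

t, h = 0.7, 1e-6
dH_numeric = (H(t + h) - H(t - h)) / (2 * h)
moment = np.cross(r(t), m * a)              # r x F, with F = m a
print(np.allclose(dH_numeric, moment, atol=1e-5))  # True
```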
## Relative Velocity
• Relative velocity is the vector that represents the motion of one point compared to another
• For two points P and Q the relative velocity can be expressed as follows
• Note that the relative velocity is reference frame independent
## Friction Force Models
• Regardless of the model, the friction force will oppose the relative motion between two surfaces
• Caution as it may not oppose the direction of overall motion of the object
Coulomb Friction: $F_f = \mu N$
• $N$ is the resultant force normal to the point of contact
• $\mu$ is the coefficient of friction
Viscous Friction: $F_f = c\,v_{rel}$
• $c$ is the coefficient of viscous friction
## Linear Spring Force Model
• For a particle with mass $m$ at point P connected to point Q by a linear spring, the force model can be written as $\mathbf{F}_s = -K(\ell - \ell_0)\,\mathbf{u}_{QP}$, where $\ell$ is the current spring length and $\mathbf{u}_{QP}$ is the unit vector from Q toward P
• $\ell_0$ is the spring length when it is unstretched/uncompressed
• $K$ is the spring constant
## Center of Mass
• Whether working with a system of particles or a rigid body the center of mass is an important point to know the position of while trying to solve problems
• For a system of particles the center of mass is found by taking the sum of the mass of individual particles multiplied by their position vectors and dividing by the overall mass of the system: $\bar{\mathbf{r}} = \frac{\sum_i m_i \mathbf{r}_i}{\sum_i m_i}$
• Finding the center of mass of a rigid body is practically the same process but due to the rigid body having potentially infinite points with mass the sum in the above expression naturally turns into an integral
• Additional quantities that are useful are the velocity and acceleration of the center of mass and these quantities are easy to find when considering that the mass of the system/rigid body is not time dependent
• For a system of particles
• For a rigid body
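A minimal sketch of the center-of-mass formula for a system of particles (the masses and positions are arbitrary):

```python
# Center of mass of a system of particles: r_bar = sum(m_i r_i) / sum(m_i)
import numpy as np

masses = np.array([1.0, 2.0, 3.0])
positions = np.array([[0.0, 0.0, 0.0],
                      [1.0, 0.0, 0.0],
                      [0.0, 2.0, 0.0]])

# Weight each position by its mass, sum, and divide by the total mass
r_bar = (masses[:, None] * positions).sum(axis=0) / masses.sum()
print(r_bar)  # [0.33333333 1.         0.        ]
```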
## Tensors
• Tensors are geometrical objects similar to vectors
• Tensors are reference frame independent
• Letting $\mathbf{v}$ be a vector in $\mathbb{R}^3$, we define a tensor as a linear operator which takes vectors as inputs and returns vectors
• An example of a tensor that has been used often in this class already is the cross product
• If a tensor is represented by a matrix it must include a basis or it is not a tensor just a matrix
• A tensor is represented as operating on a vector by a large dot
• Tensor operating on vector
Tensor Product
• One common operation that will need to be defined is the tensor product: $(\mathbf{a} \otimes \mathbf{b}) \cdot \mathbf{c} = \mathbf{a}\,(\mathbf{b} \cdot \mathbf{c})$
• $\mathbf{a} \otimes \mathbf{b}$ is the tensor
• The dot on the right hand side is a dot product not an "operates on" operator therefore the result is a scalar multiplied by the vector
• Example of the tensor created by
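The defining property of the tensor (outer) product, that $(\mathbf{a}\otimes\mathbf{b})\cdot\mathbf{c} = \mathbf{a}\,(\mathbf{b}\cdot\mathbf{c})$, can be checked directly in a chosen basis (NumPy sketch; the vectors are arbitrary):

```python
# The tensor (outer) product a (x) b acting on c returns a * (b . c).
import numpy as np

a = np.array([1.0, 2.0, 0.0])
b = np.array([0.0, 1.0, 3.0])
c = np.array([2.0, -1.0, 1.0])

T = np.outer(a, b)            # matrix of the tensor a (x) b in this basis
lhs = T @ c                   # (a (x) b) . c, the tensor operating on c
rhs = a * (b @ c)             # a scaled by the scalar (b . c)
print(np.allclose(lhs, rhs))  # True
```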
## Angular Momentum of Rigid Bodies/Systems of Particles
• For a system of particles the angular momentum about point Q is given by the following expression. The first time derivative of the angluar momentum for a system of particles is also presented
• Where is the real moment given by
• For a rigid body the angular momentum about point Q is given by the following expression. The first time derivative of the angular momentum expression is also provided
• for generic point
• where is on rigid body
• if is the center of mass
• can be found in tables in text books for general shapes/configurations
• The generic expression of angular momentum of a rigid body about generic point Q can be found from the angular momentum about its center of mass from the following expression
• The parallel axis theorem can be derived from this expression, but since this expression is the stronger, more general result it should be preferred
## Pure Torque
• Pure torque is a moment on a rigid body that is independent of the point on the body for which the moment is being determined
• Can be represented by a force couple which creates a moment with zero net force
• Symbolically represented as
• Including pure torque in the expression for the sum of moments for a system of particles about generic point Q yields the following expression
## Euler's Laws
First Law
• Where N is inertial
Second Law
• Where O is a point that is inertially fixed
• When moved to a generic point that does not have to be inertially fixed the second law becomes the following
## Parameterize a Problem
• When looking at how to parameterize a problem we first must consider the degrees of freedom (M) the system has and how many of those degrees are constrained (P)
## Lagrange Equations
• Lagrange equations provide a means of formulating problems without forces caused by constraints on the system
• Lagrange equations are derived from Newton's Second Law ($\mathbf{F} = m\mathbf{a}$)
• We will need the different expressions for kinetic energy when working with Lagrange Equations
Kinetic Energy of a Particle: $T = \frac{1}{2}\,m\,\mathbf{v}\cdot\mathbf{v}$
Kinetic Energy for a System of Particles: $T = \frac{1}{2}\sum_i m_i\,\mathbf{v}_i\cdot\mathbf{v}_i$
Kinetic Energy for a Rigid Body (Koenig Decomposition): $T = \frac{1}{2}\,m\,\bar{\mathbf{v}}\cdot\bar{\mathbf{v}} + \frac{1}{2}\,{}^N\boldsymbol{\omega}^R\cdot\bar{\mathbf{H}}$
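The Koenig-style split of kinetic energy can be checked for a system of particles: the total kinetic energy equals the kinetic energy of the center of mass plus the kinetic energy of motion relative to it (a sketch with arbitrary masses and velocities):

```python
# Koenig decomposition check for a system of particles:
# T = (1/2) M vbar.vbar + (1/2) sum m_i (v_i - vbar).(v_i - vbar)
import numpy as np

m = np.array([1.0, 2.0, 1.5])
v = np.array([[1.0, 0.0, 0.0],
              [0.0, 2.0, 1.0],
              [3.0, 1.0, 0.0]])

T = 0.5 * (m * (v ** 2).sum(axis=1)).sum()       # direct total kinetic energy
vbar = (m[:, None] * v).sum(axis=0) / m.sum()    # center-of-mass velocity
v_rel = v - vbar                                 # velocities relative to COM
T_koenig = (0.5 * m.sum() * (vbar @ vbar)
            + 0.5 * (m * (v_rel ** 2).sum(axis=1)).sum())
print(np.isclose(T, T_koenig))  # True
```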
• The Fundamental Form of Lagrange's Equations is the following expression: $\frac{d}{dt}\left(\frac{\partial T}{\partial \dot{q}_i}\right) - \frac{\partial T}{\partial q_i} = Q_i$
• Where $Q_i = \sum_j \mathbf{F}_j \cdot \frac{\partial \mathbf{r}_j}{\partial q_i}$ is the generalized force for coordinate $q_i$
• Lagrange's Equation for a rigid body is given by the following expression
• where
• for i = 1, ..., 6
## Examples
### Transport Theorem Examples
• First Transport Theorem Example:
• Given a disk (D) rotating about its centerpoint (O), find the velocity and acceleration of point P as seen by the disk (D) and the ground (G)
• Start by defining reference frames and then coordinate systems within them
• For this problem two reference frames will be used: the disk (D) and the ground (G)
• Coordinate system fixed in the disk reference frame (D)
• Origin at point O
• = along the line
• = out of the page (positive with theta)
• =
• Coordinate system fixed in the ground reference frame (G)
• Origin at point O
• =
• = along the line @
• =
• Now to find the velocity and acceleration of point P in the disk reference frame we take the time derivative of the vector from the origin O to point P ()
• Neither r nor are changing in the disk reference frame therefore
• Next the acceleration of point P in the disk reference frame is to be found by taking a time derivative of the velocity of point P in the disk reference frame
• Now the velocity needs to be found for point P in the ground reference frame and two different approaches will be shown
• First approach involves writing all of the equations with respect to the coordinate system fixed in reference frame G
• The second approach involves leaving the equations with terms referenced in the disk reference frame and uses the transport theorem
• The second approach leaves a much cleaner equation for further derivatives and so this is the form we'll use when finding the acceleration
• Second Transport Theorem Example:
• This example is the same as the previous one except it has an additional rotation around its base
• It's good practice to have an additional reference frame for each rotation and so in this example we'll use three reference frames: disk (D), disk at (C) and the ground (G)
• Always start by defining the reference frames and then the coordinate systems within them
• Coordinate system fixed in the disk reference frame (D)
• Origin at point O
• = along
• = perpendicular to the disk, positive with
• =
• Coordinate system fixed in the ground reference frame (G)
• Origin at point O
• = Out of the page
• = along the line @ ,
• =
• Coordinate system fixed in the disk at reference frame (C) (rotates with )
• Origin at point O
• = along at
• =
• Just as in the previous example the velocity and acceleration in the disk reference frame (D) will equal zero
• Now to find the velocity of point P in the ground reference frame we are going to take advantage of the transport theorem again
### Cylindrical Coordinate System Example
• Find the equation for the velocity and acceleration of point P in the A reference frame ()
• For this problem we will use two different approaches:
• The first approach will be to derive the velocity and acceleration using a coordinate system fixed in reference frame A
• The second approach will be to develop and use a cylindrical coordinate system fixed in a reference frame C
• For the first approach the reference frame is already defined in the following figure
• The vector will be defined in these coordinates as
• We can now find
• The same process can be used to find
• For the second approach we need more formal clarifications of our reference frames and coordinate systems
• We will be using two reference frames: reference frame A in which the {} are already fixed and reference frame B which is the plane containing points O, P and Q (or vectors and )
• Coordinate system fixed in reference frame B (Cylindrical Coordinates)
• Origin at point O
• = along line
• =
• =
• Now we're going to define in the cylindrical coordinates (in reference frame B)
• We can now find using the cylindrical coordinates
• Now that we have the velocity we can find the acceleration
### Spherical Coordinate System Example
• The problem is to find the velocity and acceleration of point P in the A reference frame ( and )
• Since the example for cylindrical coordinate systems already found the expression using the coordinate system in reference frame A and using the cylindrical coordinate system, this example will only be using the spherical coordinate system approach
• In order to define the spherical coordinate system three reference frames will be used:
• The given reference frame (A)
• The reference frame fixed in the plane containing points O, P and Q (or vectors and ) (B)
• The reference frame fixed in the plane containing vector that is perpendicular to the plane containing vectors and (C)
• Coordinate system in reference frame A is already given and origin at point O
• Coordinate system fixed in reference frame B (Cylindrical Coordinates)
• Origin at point O
• = along line
• =
• =
• Coordinate system fixed in reference frame C (Spherical Coordinates)
• Origin at point O
• First note that the angular velocity between reference frames A and C can be expressed as
• Now we can find the velocity of point P in the A reference frame by making use of the transport theorem and the following definition of :
• The same process can be used to find the acceleration of point P in reference frame A, however, the expression becomes very large and so it will be omitted here for space
### Intrinsic Coordinate System Example
• Given the above set up find the intrinsic coordinate system for point P
• To begin the velocity and acceleration of point P will be found in reference frame A
• Two different reference frames will be used: the given reference frame (A) and the reference frame using the plane containing vectors and
• Coordinate system fixed in reference frame A
• Origin at O
• to the right
• out of the page
• Coordinate system fixed in reference frame B
• Origin at O
• along
• Now we can find the velocity of point P in reference frame A using
• Now the that the velocity has been found the acceleration of point P in reference frame A can be found
• Let us now find the vectors of the intrinsic coordinate system attached to point P
• Now that the tangential vector has been found the principal normal to the trajectory can be found
• Finally the principal binormal to the trajectory can be determined
### Rolling Motion
• This example also includes examples on motion between two points fixed in the same rigid body
• The diagram for the problem is given as follows
• It is desired to find the velocity and acceleration of point O and point P given that there is roll without slip between the disk and the wedge
• To begin we will define two different reference frames, the disk (D) and the ground (G)
• Coordinate system fixed in the ground reference frame (G)
• Origin at point O at time = 0
• = into the page
• = along
• =
• Coordinate system fixed in the disk reference frame (D)
• Origin at point O
• = along
• =
• =
• We will begin by stating the condition for rolling without slip
• In this case we will use the condition in the ground reference frame (A = G)
• Now we can note that point C in the ground reference frame is fixed and therefore its velocity is zero and as a consequence the point C in the disk reference frame also has zero velocity
• Now we will make use of the velocity relations for two points fixed in the same rigid body to relate the velocity between points C and O
• From
• We can use the same relation between two points fixed in a rigid body to now relate the velocities of point O and point P
• With both of the desired velocities found we can now find the desired accelerations. We will begin with the acceleration of point O
• In order to find the acceleration of point P we are going to use the relation between the accelerations of point P and point O
• As a bonus the acceleration of point C on the disk will be found. It should be noted that the rolling condition is strictly for velocities and does not extend to accelerations
### Basic Equation of Motion
• The problem is to determine the equations of motion for the mass located at point P, which is attached by a rod to point O, which is fixed in the ground; the ground reference frame can be considered inertial
• Only one equation of motion is expected due to the mass being constrained to a trajectory that lies on a single line
• We will begin by defining our reference frames. For this problem two frames will be used due to the presence of a rotation
• Ground reference frame (G)
• Plane perpendicular to the page containing (A)
• Coordinate system fixed in the ground reference frame (G)
• Origin fixed at O
• = out of the page
• = along at = 0
• =
• Coordinate system fixed in reference frame A
• Origin fixed at point O
• = along
• =
• =
• Now we will begin by finding the position, velocity, and acceleration of point P in the ground reference frame
• With the acceleration of point P in the ground reference frame found we can now move on to the kinetics portion of the problem
• We will begin this portion with a free body diagram to determine what forces are present
• The sum of forces is therefore as follows
• With the sum of forces found, we can now express all of the terms in Newton's Second Law
• In the above expression the only unknown value is , therefore we can find the desired equation of motion by taking the dot product of each term by
### Angular Momentum
• The problem in this example is to find the equations of motion for the point P in the following figure, where point Q slides around the circle at a given speed
• For this problem two different reference frames will be used
• Ground (G), treated as inertial
• Frame attached to rod along (A)
• Coordinate system fixed in Ground reference frame (G)
• Origin at point O
• given in the figure
• Coordinate system fixed in reference frame (A)
• Origin at point Q
• = along
• =
• =
• The first thing that will need to be done will be to solve for the kinematics of point P in the inertial reference frame (G)
• Next a free body diagram will be used so that the equations of motion can be found
• The equations of motion will be found using two different methods. The first will use Newton's second law, which will serve as a reference when the angular momentum approach is used.
• Using Newton's second law will lead to the following result
Incomplete
### Basic System of Particles
• For the above diagram it is desired to obtain the equations of motion for both the wedge (W) and the block (B)
• This should lead to two equations of motion as both objects are constrained to a single degree of freedom
• To begin solving the problem, two reference frames will be used, and then coordinate systems will be defined in each reference frame
• Ground (G)
• Wedge (W)
• Coordinate system in Ground reference frame (G)
• Origin at O
• = to the right
• =
• =
• Coordinate system in Wedge reference frame (W)
• Origin at O'
• = along the slant away from O'
• =
• =
• Next the kinematics need to be solved for the Wedge
• Now the kinematics for the block will be determined
• h is the height of the wedge
• The relative velocity between the block and the wedge will also be needed when determining the friction force
• The last bit of kinematics will be for the wedge-block system
• The next step towards obtaining the equations of motion will be to determine the forces acting on each part of the system and the system as a whole
• Now we can start to apply Newton's Second Law and its derived result on the wedge, block and system
• Newton's Second Law for the wedge results in the following expression
• Newton's Second Law for the block results in the following expression
• The derived result of Newton's Second Law will now be used on the system of particles
• The first equation of motion can be found by using the equation for the system and taking the dot product of each individual term with
• Where therefore
• For the second and final equation of motion, take the result of Newton's Second Law for the block and take the dot product of each term with
• The two equations of motion for the system are therefore as follows
http://www.hsl.rl.ac.uk/catalogue/ep25.html

## Version 1.2.0
This subroutine uses the Lanczos algorithm to compute in parallel the part of the spectrum of a large symmetric matrix $A$ that lies in a specified interval, that is, it computes eigenvalues without regard to multiplicities. The user is required to partition the vectors into contiguous sections of similar sizes, each residing on a separate process. He or she must provide parallel code that computes $u+Av$ for any given vectors $u$ and $v$. The partitions should be chosen to make this computation straightforward and rapid.
Auxiliary calls allow corresponding eigenvectors to be found. In this case, the user is responsible for storing each vector $v$ and restoring it during the eigenvector calculation.
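To make the required user-supplied operation concrete, here is a hedged pure-Python sketch of computing $u + Av$ with the vector partitioned into contiguous sections. This is not the actual EP25 calling interface (its argument list is not reproduced here); in the real routine each section resides on a separate process, whereas here the "processes" are just loop iterations.

```python
# Hedged sketch of the user-supplied "u + A*v" operation required by a
# parallel Lanczos driver. A is a small symmetric matrix stored as nested
# lists; the vector is split into contiguous index ranges, one per
# (simulated) process, as the text describes.

A = [[2.0, 1.0, 0.0],
     [1.0, 3.0, 1.0],
     [0.0, 1.0, 2.0]]

sections = [(0, 2), (2, 3)]  # contiguous partitions of the index range

def add_matvec(u, v):
    """Return u + A v, computed section by section."""
    result = list(u)
    for start, stop in sections:
        for i in range(start, stop):
            result[i] += sum(A[i][j] * v[j] for j in range(len(v)))
    return result

print(add_matvec([0.0, 0.0, 0.0], [1.0, 1.0, 1.0]))  # -> row sums of A
```

The partitioning is chosen so each section's rows can be processed independently, which is the property that makes the computation "straightforward and rapid" in parallel.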
https://blog.csdn.net/jamesliulyc/article/details/91824910

# Toon Water Shader (卡通水shader)
https://roystan.net/articles/toon-water.html
https://github.com/IronWarrior/ToonWaterShader
# Toon Water Shader
### >> using Unity engine 2018.3
50 minutes to complete
You will learn to write a toon water shader. You will use data from the depth and normals buffer to generate shoreline foam, and noise and distortion textures to render toon waves.
Water can be challenging to render and almost always requires a custom shader to bring to life. This is especially true for toon style water.
This article will outline techniques to render the most common components of a water shader: shoreline foam, depth-based coloring, and surface waves. While this shader is designed for a toon look, the approach presented here can be adapted for any art style.
The completed project is provided at the end of the article. Note that it also contains a large amount of comments in the created shader file to aid understanding.
### Prerequisites
To complete this tutorial, you will need a working knowledge of Unity engine, and a basic understanding of shader syntax and functionality.
Download starter project .zip
These tutorials are made possible, and kept free and open source, by your support. If you enjoy them, please consider becoming my patron through Patreon.
## Getting started
Download the starter project provided above and open it in the Unity editor. Open the Main scene (located at the project root), and open the ToonWater shader (located in the Shaders directory) in your preferred code editor.
This file contains about the simplest shader possible: one that outputs the color white. We will build off this shader throughout this article to make it render toon style water.
## 1. Depth based color
Water changes color depending on how deep it is, due to it absorbing light that passes through it. To reflect this, we will use a gradient to control our water's color. What color is outputted from the gradient will be controlled by the depth of the objects under the water.
### 1.1 Properties and variables
Add the following three properties to the top of the shader.
_DepthGradientShallow("Depth Gradient Shallow", Color) = (0.325, 0.807, 0.971, 0.725)
_DepthGradientDeep("Depth Gradient Deep", Color) = (0.086, 0.407, 1, 0.749)
_DepthMaxDistance("Depth Maximum Distance", Float) = 1
Note that these properties already have some default values filled in; these are the values used for the material in the animated image at the top of this article.
We define our gradient with two colors, one for when our water is at its most shallow (i.e., when the object behind it is nearly touching the surface), and one for when it is at its deepest. Since it's possible that our water could be infinitely deep, we add the _DepthMaxDistance property as a cutoff for the gradient; anything deeper than this will no longer change color.
Side view of the beach scene. When the distance between the water and the ground is small (shown in green), the color of the water is lighter. When the distance is large (in red), the water is darker.
Before we can implement our gradient, we need to declare our properties in our CGPROGRAM. Add the following immediately above the fragment shader.
float4 _DepthGradientShallow;
float4 _DepthGradientDeep;
float _DepthMaxDistance;
sampler2D _CameraDepthTexture;
### 1.2 Calculating water depth
You might have noticed in the code block above the line declaring a sampler2D named _CameraDepthTexture. This declaration gives our shader access to a variable not declared in our properties: the camera's depth texture. A depth texture is a greyscale image that colors objects based on their distance from the camera. In Unity, objects closer to the camera are more white, while objects further away are darker.
Depth texture for the beach scene, excluding the water. Note that the Far plane of the camera is much smaller than normal to better highlight the difference in greyscale values.
This _CameraDepthTexture variable is available globally to all shaders, but not by default; if you select the Camera object in the scene, you'll notice that it has the script CameraDepthTextureMode attached, with its inspector field set to Depth. This script instructs the camera to render the depth texture of the current scene into the above shader variable.
The depth texture is a full-screen texture, in that it has the same dimensions as the screen we are rendering to. We want to sample this texture at the same position as the current pixel we're rendering. To do this, we'll need to calculate the screen space position of our vertex in the vertex shader, and then pass this value into the fragment shader where we can use it.
// Inside the v2f struct.
float4 screenPosition : TEXCOORD2;
…
// Inside the vertex shader.
o.screenPosition = ComputeScreenPos(o.vertex);
With the screen position accessible through the v2f struct, we can now sample the depth texture. Add the following code to the fragment shader.
float existingDepth01 = tex2Dproj(_CameraDepthTexture, UNITY_PROJ_COORD(i.screenPosition)).r;
float existingDepthLinear = LinearEyeDepth(existingDepth01);
The first line samples the depth texture using tex2Dproj and our screen position. This will return the depth of the surface behind our water, in a range of 0 to 1. This value is non-linear—one meter of depth very close to the camera will be represented by a comparatively larger value in the depth texture than one meter a kilometer away from the camera. The second line converts the non-linear depth to be linear, in world units from the camera.
As we move left to right on this graph, further away from the camera, larger distances are represented by smaller values in the depth buffer. Image from NVIDIA article on depth precision.
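The non-linear-to-linear conversion can be illustrated outside Unity. The Python sketch below uses the standard perspective-projection relation with assumed near/far planes of 0.3 and 1000, and a conventional buffer where 0 is the near plane and 1 is the far plane. It is not Unity's exact LinearEyeDepth implementation (which works through _ZBufferParams and may use reversed depth on some platforms), but it shows the same behavior:

```python
# Convert a non-linear [0, 1] depth-buffer value back to linear
# eye-space depth for a standard perspective projection.
# "near" and "far" are assumed clip-plane values.

def linear_eye_depth(d, near=0.3, far=1000.0):
    return (near * far) / (far - d * (far - near))

print(linear_eye_depth(0.0))  # lands exactly on the near plane
print(linear_eye_depth(1.0))  # lands exactly on the far plane
# Halfway through the buffer is nowhere near halfway in meters --
# this is the non-linearity described above:
print(linear_eye_depth(0.5))
```

Note that half of the buffer's range covers less than a meter of eye-space depth here, which is why the raw sample must be linearized before depths can be subtracted meaningfully.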
Because what we care about is how deep this depth value is relative to our water surface, we will also need to know the depth of the water surface. This is conveniently given in the w component of i.screenPosition. Add the following code to take the difference between our two depth values, and output the result.
float depthDifference = existingDepthLinear - i.screenPosition.w;
return depthDifference;
To calculate the color of our water, we're going to use the lerp function, which takes two values (our two gradient colors in this case) and interpolates between them based on a third value in the 0 to 1 range. Right now we have the depth in world units—instead we want to know how deep it is compared to our maximum depth, percentage-wise. We can calculate this value by dividing depthDifference by our maximum depth. Insert the following code just below the line declaring depthDifference
float waterDepthDifference01 = saturate(depthDifference / _DepthMaxDistance);
float4 waterColor = lerp(_DepthGradientShallow, _DepthGradientDeep, waterDepthDifference01);
return waterColor;
The first line above performs the division operation we just discussed. We also pass it through the saturate function—this function clamps the value between 0 and 1, which is the range we need. After that we feed it into the lerp function to calculate the gradient and return our new water color.
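The saturate-then-lerp logic can be checked outside the shader. This Python sketch mirrors the HLSL intrinsics and uses the default property values from section 1.1; it is an illustration of the math, not shader code:

```python
# Pure-Python model of the depth-to-color mapping: clamp the
# normalized depth, then interpolate between the gradient colors.

def saturate(x):
    """Clamp x to [0, 1], like HLSL's saturate()."""
    return max(0.0, min(1.0, x))

def lerp(a, b, t):
    """Component-wise linear interpolation between two RGBA colors."""
    return tuple(ai + (bi - ai) * t for ai, bi in zip(a, b))

SHALLOW = (0.325, 0.807, 0.971, 0.725)  # _DepthGradientShallow default
DEEP = (0.086, 0.407, 1.0, 0.749)       # _DepthGradientDeep default
DEPTH_MAX_DISTANCE = 1.0                # _DepthMaxDistance default

def water_color(depth_difference):
    t = saturate(depth_difference / DEPTH_MAX_DISTANCE)
    return lerp(SHALLOW, DEEP, t)

print(water_color(0.0))  # shallow gradient color
print(water_color(5.0))  # anything past the max distance clamps to deep
```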
## 2. Waves
Next, we'll add waves to the surface using a noise texture. As well, we'll control the visibility of the waves using the depth of our water—this way, we can make the waves very visible at shallow depths to create a shoreline effect.
Perlin noise, a type of noise. Perlin noise is pseudo-random, and is useful for adding variation to textures to avoid the appearance of unnatural repetition.
### 2.1 Using noise
While it's possible to generate noise procedurally, for simplicity we're going to just use a texture. Add the following code to set up our shader to take in a new texture property.
// As a new property in Properties.
_SurfaceNoise("Surface Noise", 2D) = "white" {}
…
// Add in the appdata struct.
float4 uv : TEXCOORD0;
…
// Add in the v2f struct.
float2 noiseUV : TEXCOORD0;
…
// Above the vertex shader.
sampler2D _SurfaceNoise;
float4 _SurfaceNoise_ST;
…
// Inside the vertex shader.
o.noiseUV = TRANSFORM_TEX(v.uv, _SurfaceNoise);
That's a lot of code, but nothing in there is too exotic. We declare a new texture property and its matching sampler2D in the shader. Immediately below the sampler2D we declare another variable, a float4—Unity automatically populates this value with the tiling and offset data associated with the texture of the same name. Finally, UV data is declared in appdata and passed from the vertex shader to the fragment shader in the v2f struct.
In the Unity editor, assign the PerlinNoise texture to the Surface Noise slot, and set the Y tiling to 4. Back in the shader, we will sample the noise texture and combine it with our surface color to render waves. Add the following at the end of the fragment shader.
float surfaceNoiseSample = tex2D(_SurfaceNoise, i.noiseUV).r;
return waterColor + surfaceNoiseSample;
This vaguely resembles waves, but it's too smooth and has far too much variation in brightness to match the toon style we're going for. We will apply a cutoff threshold to get a more binary look.
// Add as a new property.
_SurfaceNoiseCutoff("Surface Noise Cutoff", Range(0, 1)) = 0.777
…
// Matching property variable.
float _SurfaceNoiseCutoff;
…
// Add in the fragment shader, just after sampling the noise texture.
float surfaceNoise = surfaceNoiseSample > _SurfaceNoiseCutoff ? 1 : 0;
return waterColor + surfaceNoise;
That looks much better. Any values darker than the cutoff threshold are simply ignored, while any values above are drawn completely white.
### 2.2 Shoreline foam
We'd like the waves' intensity to increase near the shoreline or where objects intersect the surface of the water, to create a foam effect. We'll achieve this effect by modulating the noise cutoff threshold based off the water depth.
// Control for what depth the shoreline is visible.
_FoamDistance("Foam Distance", Float) = 0.4
…
// Matching variable.
float _FoamDistance;
…
// Add in the fragment shader, above the existing surfaceNoise declaration.
float foamDepthDifference01 = saturate(depthDifference / _FoamDistance);
float surfaceNoiseCutoff = foamDepthDifference01 * _SurfaceNoiseCutoff;
float surfaceNoise = surfaceNoiseSample > surfaceNoiseCutoff ? 1 : 0;
The foam looks great near the shoreline, but it's pretty thin around the object intersections; we'll address this later.
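To see why shallow water reads as solid foam, here is the cutoff modulation from the snippet above as plain Python (using the default property values): at zero depth the cutoff shrinks to 0, so nearly every noise sample passes the threshold.

```python
# The depth-modulated noise cutoff: shallow water lowers the cutoff
# toward 0, so more noise samples pass and the surface fills with foam.

def saturate(x):
    return max(0.0, min(1.0, x))

SURFACE_NOISE_CUTOFF = 0.777  # _SurfaceNoiseCutoff default
FOAM_DISTANCE = 0.4           # _FoamDistance default

def foam_cutoff(depth_difference):
    return saturate(depth_difference / FOAM_DISTANCE) * SURFACE_NOISE_CUTOFF

print(foam_cutoff(0.0))  # 0.0 -> every sample becomes foam at the shore
print(foam_cutoff(0.2))  # halfway to the full cutoff
print(foam_cutoff(1.0))  # clamped at the base cutoff, 0.777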
### 2.3 Animation
Static water isn't very interesting—let's add some motion and distortion to the waves, starting with motion. We'll achieve this by offsetting the UVs we use to sample the noise texture.
// Property to control scroll speed, in UVs per second.
_SurfaceNoiseScroll("Surface Noise Scroll Amount", Vector) = (0.03, 0.03, 0, 0)
…
float2 _SurfaceNoiseScroll;
…
// Add in the fragment shader, above the existing surfaceNoiseSample line.
float2 noiseUV = float2(i.noiseUV.x + _Time.y * _SurfaceNoiseScroll.x, i.noiseUV.y + _Time.y * _SurfaceNoiseScroll.y);
float surfaceNoiseSample = tex2D(_SurfaceNoise, noiseUV).r;
Right now the scrolling feels like a sheet of paper being pulled across the surface. We'll add more movement using a distortion texture. This distortion texture will be similar to a Normal map, except with only two channels (red and green) instead of three.
We'll interpret these two channels as vectors on a 2 dimensional plane, and use them to pull around our noise texture's UVs.
// Two channel distortion texture.
_SurfaceDistortion("Surface Distortion", 2D) = "white" {}
// Control to multiply the strength of the distortion.
_SurfaceDistortionAmount("Surface Distortion Amount", Range(0, 1)) = 0.27
…
// Matching variables.
sampler2D _SurfaceDistortion;
float4 _SurfaceDistortion_ST;
float _SurfaceDistortionAmount;
…
// New data in v2f.
float2 distortUV : TEXCOORD1;
…
// Add to the vertex shader.
o.distortUV = TRANSFORM_TEX(v.uv, _SurfaceDistortion);
…
// Add the fragment shader, just above the current noiseUV declaration line.
float2 distortSample = (tex2D(_SurfaceDistortion, i.distortUV).xy * 2 - 1) * _SurfaceDistortionAmount;
float2 noiseUV = float2((i.noiseUV.x + _Time.y * _SurfaceNoiseScroll.x) + distortSample.x, (i.noiseUV.y + _Time.y * _SurfaceNoiseScroll.y) + distortSample.y);
We declare our new texture property and add a new UV set as normal. In the fragment shader, we sample the distortion texture—but before adding it to our noiseUV, we multiply it by 2 and subtract 1; as a texture, the x and y values (red and green, respectively) are in the 0...1 range. As a two dimensional vector, however, we want it to be in the -1...1 range. The arithmetic above performs this operation.
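The *2 − 1 remap is simple enough to verify by hand; in Python:

```python
# Texture channels store values in [0, 1]; a signed 2D distortion
# vector needs [-1, 1]. The remap is a scale and a shift.

def remap_to_signed(channel):
    return channel * 2.0 - 1.0

print(remap_to_signed(0.0))  # -1.0 (full negative offset)
print(remap_to_signed(0.5))  #  0.0 (no distortion)
print(remap_to_signed(1.0))  #  1.0 (full positive offset)
```

A mid-grey distortion texture (all channels 0.5) therefore leaves the noise UVs untouched.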
## 3. Consistent size foam
The foam right now looks great near the shoreline, but is barely visible around the edges of the floating objects. This is because the depth between the shore and the water is quite small, while the depth (from the camera's point of view) between the water and the underwater objects is comparatively larger. Increasing the _FoamDistance to about 0.4 fixes this, but makes the shoreline exceedingly large.
Increasing the foam distance makes the foam around the objects look correct, but is far too strong for the shoreline.
Instead, we'll create a solution that varies the depth that foam is rendered based off the angle of the surface below the water. That way, nearly vertical surfaces (like the rocks) can get foam deeper than very flat objects, like shoreline. Ideally, by modulating the foam amount like this they will visually have consistent foam in the final image.
### 3.1 Rendering the normals buffer
Our goal is to modulate the foam depth value (_FoamDistance in our shader) based on the angle between the normal of the water's surface and the normal of the object beneath it. To do this, we'll need to have access to a normals buffer.
Similar to the depth buffer, this will be a screen-size texture usable within our shader. However, instead of storing the depth of each pixel, it will store its normal.
View space normals texture for the beach scene, excluding the water. View space normals are the normals of the scene relative to the camera's view.
Unity does have built-in functionality to render out the normals buffer by using the DepthNormals depth texture mode. This packs the depth and normals buffers into a single texture (two channels for each buffer). Unfortunately, this results in the depth buffer having too little precision for our purposes; instead, we'll manually render out the normals to a separate texture. The starter project already contains a C# script to do this, NormalsReplacementShader.cs.
This script creates a camera at the same position and rotation as the main camera, except it renders the scene with a Replacement Shader. As well, instead of rendering the scene to the screen, it stores the output to a global shader texture named _CameraNormalsTexture. This texture, like the _CameraDepthTexture we used above, is available to all shaders.
Apply this script to the Camera object in the scene. As well, drag the HiddenNormalsTexture shader (in the Shaders folder) into the Normals shader slot. This shader is fairly simple; it outputs the view space normal of the object. View space normals are the normals of the object, relative to the camera's view.
If you run the scene now, you'll see that a new camera, Normals camera, is automatically spawned as a child of the main camera. If you select this object, you can see the normals being rendered in the Camera preview. Alternatively, double click the texture in the Target texture slot of the camera to see a larger preview.
### 3.2 Comparing view space normals
We'll need to calculate the view space normal of the water surface before we can compare it to the normal rendered out to the texture. We can do this in the vertex shader and pass it through to the fragment shader.
// Add to appdata.
float3 normal : NORMAL;
…
// Add to v2f.
float3 viewNormal : NORMAL;
…
// Add to the vertex shader.
o.viewNormal = COMPUTE_VIEW_NORMAL;
With the view normal available in the fragment shader, we can compare it to the normal of the object beneath the water's surface. We'll sample the normals buffer in the same way we sampled the depth buffer.
// As this refers to a global shader variable, it does not get declared in the Properties.
sampler2D _CameraNormalsTexture;
…
// Add to the fragment shader, just above the existing foamDepthDifference01 line.
float3 existingNormal = tex2Dproj(_CameraNormalsTexture, UNITY_PROJ_COORD(i.screenPosition));
We now have the view normal of the water's surface and the object behind it. We will compare the two using the Dot Product.
The dot product takes in two vectors (of any length) and returns a single number. When the vectors are parallel, point in the same direction, and are unit vectors (vectors of length 1), this number is 1. When they are perpendicular, it returns 0. As you move a vector away from parallel, toward perpendicular, the dot product result moves from 1 to 0 non-linearly. Note that when the angle between the vectors is greater than 90 degrees, the dot product will be negative.
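These properties are easy to confirm numerically before relying on them in the shader; the following Python sketch uses hand-picked unit vectors:

```python
import math

def dot(a, b):
    """Dot product of two 3D vectors."""
    return sum(x * y for x, y in zip(a, b))

up = (0.0, 0.0, 1.0)

print(dot(up, (0.0, 0.0, 1.0)))   # parallel unit vectors -> 1
print(dot(up, (1.0, 0.0, 0.0)))   # perpendicular -> 0
print(dot(up, (0.0, 0.0, -1.0)))  # angle > 90 degrees -> negative
# Between 0 and 90 degrees the result is cos(angle), hence non-linear:
angle = math.radians(60)
print(dot(up, (math.sin(angle), 0.0, math.cos(angle))))  # cos(60 deg) = 0.5
```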
// Add to the fragment shader, below the line sampling the normals texture.
float3 normalDot = saturate(dot(existingNormal, i.viewNormal));
We'll use the result of the dot product to control the foam amount. When the dot product is large (near 1), we'll use a lower foam threshold than when it is small (near 0).
// Replace the _FoamDistance property with the following two properties.
_FoamMaxDistance("Foam Maximum Distance", Float) = 0.4
_FoamMinDistance("Foam Minimum Distance", Float) = 0.04
…
// Replace the _FoamDistance variable with the following two variables.
float _FoamMaxDistance;
float _FoamMinDistance;
…
// Add to the fragment shader, above the existing foamDepthDifference01 line.
float foamDistance = lerp(_FoamMaxDistance, _FoamMinDistance, normalDot);
float foamDepthDifference01 = saturate(depthDifference / foamDistance);
By saturating the dot product result, we get our value in the 0...1 range, making it easy to pass into the lerp function, same as we did for interpolating the color of the water.
## 4. Transparency
Right now, the water is completely opaque. Although coloring by depth does give the illusion of transparency, the texture of the sand does not at all show through the water. This might actually be desirable for certain kinds of scenes—if you're modelling ocean water, it would make sense for it to be rendered as opaque, since it tends to appear that way due to its immense depth. However, for our little pond scene we are going to add some transparency in to reflect the shallow nature of the water.
// Add just inside the SubShader, below its opening curly brace.
Tags
{
"Queue" = "Transparent"
}
This tells Unity to render objects with this shader after all objects in the "Geometry" queue have been rendered; this queue is usually where opaque objects are drawn. This way, we can overlay our transparent water on top of all the opaque objects and blend them together. You can read more about rendering order and queues here.
// Add inside the Pass, just above the CGPROGRAM's start.
Blend SrcAlpha OneMinusSrcAlpha
ZWrite Off
The Blend line dictates how that blending should occur. We're using a blending algorithm often referred to as normal blending, which is similar to how software like Photoshop blends two layers.
After that we have ZWrite Off. This prevents our object from being written into the depth buffer; if it was written into the depth buffer, it would completely occlude objects behind it, instead of only partially obscuring them.
## 5. Improved blending
Our water just about matches the final image. Next, we'll add a new property to control the color of the water foam. Although white looks great for this scene, different types of surfaces may require different colored foam.
_FoamColor("Foam Color", Color) = (1,1,1,1)
…
float4 _FoamColor;
…
// Add inside the fragment shader, just below the line declaring surfaceNoise.
float4 surfaceNoiseColor = _FoamColor * surfaceNoise;
return waterColor + surfaceNoiseColor;
This allows us to modify the color of the foam, but if you play with the _FoamColor variable in the scene, you'll see that it gives mixed results. Red foam comes out pink, and completely black foam just leaves a light blue highlight in its place. This is because we are performing additive blending on the two colors used to generate our final value.
Modifying the color of the foam yields mixed results, with red foam turning pink and black foam light blue.
As the name implies, additive blending is the result of adding two colors together, creating a brighter result. This is great for objects that emit light, like sparks, explosions or lightning. We want to blend the foam with the water surface; neither of these emits light, and the result should not be brighter, so additive blending is not the right fit for this task.
Instead, we'll blend the two colors together using the same algorithm Unity is using to blend our shader with the background, which we referred to above as normal blending. If we revisit the following line, we can take a look at how this blending works.
Blend SrcAlpha OneMinusSrcAlpha
Blend, when provided with two parameters, works by multiplying the output of the shader by the first value (SrcAlpha, or the alpha of the shader output), multiplying the on-screen color by the second value (OneMinusSrcAlpha, or 1 minus the alpha of the output), and then adding the two together for the final color. This article by Unity explains it in further detail.
We will replicate this as a function in our CGPROGRAM. Add the following above the appdata declaration.
float4 alphaBlend(float4 top, float4 bottom)
{
float3 color = (top.rgb * top.a) + (bottom.rgb * (1 - top.a));
float alpha = top.a + bottom.a * (1 - top.a);
return float4(color, alpha);
}
The first line performs the blend operation described above. Because we want our final output to maintain transparency, we also perform the blend on the alpha channel of the colors. We will use this function to blend our final output.
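The blending arithmetic can be checked in isolation. This is a direct Python transcription of the alphaBlend function above, with colors as (r, g, b, a) tuples:

```python
# "Normal" (over) blending: weight the top color by its alpha and the
# bottom color by the remainder, and blend the alpha channel the same way.

def alpha_blend(top, bottom):
    a = top[3]
    color = tuple(t * a + b * (1.0 - a) for t, b in zip(top[:3], bottom[:3]))
    alpha = top[3] + bottom[3] * (1.0 - top[3])
    return color + (alpha,)

opaque_red = (1.0, 0.0, 0.0, 1.0)
invisible_red = (1.0, 0.0, 0.0, 0.0)
blue = (0.0, 0.0, 1.0, 1.0)

print(alpha_blend(opaque_red, blue))     # fully opaque top wins outright
print(alpha_blend(invisible_red, blue))  # fully transparent top vanishes
```

Unlike additive blending, a fully opaque black top color here produces black, not a brightened background, which is exactly the behavior we want for the foam.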
// Place in the fragment shader, replacing the code in its place.
float4 surfaceNoiseColor = _FoamColor;
surfaceNoiseColor.a *= surfaceNoise;
return alphaBlend(surfaceNoiseColor, waterColor);
Note that we only multiply the alpha of the foam now, instead of the entire color.
## 6. Anti-aliasing
We will make one final improvement before completing the shader. If you look closely at the foam, you'll notice the edges are fairly jagged. This is the result of the binary way we perform the cutoff on our noise texture; every pixel either has full alpha, or none at all. Instead, we'll smoothly blend the alpha from zero to one, using the smoothstep function.
Jagged edges where the foam meets the water.
smoothstep is somewhat similar to lerp. It takes in three values: a lower bound, an upper bound and a value expected to be between these two bounds. smoothstep returns a value between 0 and 1 based on how far this third value is between the bounds. (If it is outside the lower or upper bound, smoothstep returns a 0 or 1, respectively).
Comparison between smoothstep (left) and lerp (right). The values are mapped to the greyscale background, as well as the curves in red.
Unlike lerp, smoothstep is not linear: as the value moves from 0 to 0.5, it accelerates, and as it moves from 0.5 to 1, it decelerates. This makes it ideal for smoothly blending values, which is how we'll use it below.
// Insert just after the CGPROGRAM begins.
#define SMOOTHSTEP_AA 0.01
…
float surfaceNoise = smoothstep(surfaceNoiseCutoff - SMOOTHSTEP_AA, surfaceNoiseCutoff + SMOOTHSTEP_AA, surfaceNoiseSample);
The lower and upper bounds we define (the first two parameters of the function) are quite close—they're just far enough apart to add some smoothing to the edges. When surfaceNoiseSample is outside of these bounds, it will return 0 or 1, just like before.
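For reference, smoothstep is commonly implemented as the Hermite polynomial 3t² − 2t³ applied to a clamped interpolant, and can be written and tested in Python:

```python
def smoothstep(lower, upper, x):
    """Hermite-smoothed step from 0 to 1 as x crosses [lower, upper]."""
    # Clamp the interpolant, then apply the 3t^2 - 2t^3 curve.
    t = max(0.0, min(1.0, (x - lower) / (upper - lower)))
    return t * t * (3.0 - 2.0 * t)

cutoff, aa = 0.777, 0.01  # _SurfaceNoiseCutoff default and SMOOTHSTEP_AA

print(smoothstep(cutoff - aa, cutoff + aa, 0.5))     # far below -> 0
print(smoothstep(cutoff - aa, cutoff + aa, 0.9))     # far above -> 1
print(smoothstep(cutoff - aa, cutoff + aa, cutoff))  # at the cutoff -> 0.5
```

Only samples inside the narrow ±0.01 band get fractional alpha, which is what softens the foam edges without visibly blurring them.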
## Conclusion
The techniques learned here form the basis for a wide variety of graphical effects. The depth buffer can be used to achieve any kind of distance-based effect, like fog, or a scanner sweep. The normals buffer is used in deferred shading, where surfaces are lit after all rendering is done as a post process step. Distortion and noise have nearly unlimited applications, and can be used to modify the geometry of meshes, similar to how heightmaps work. It's worth experimenting with the settings of the shader to see what can be achieved with it.
View source GitHub repository
# Diagrams with multiple metals

In CHNOSZ: Thermodynamic Calculations and Diagrams for Geochemistry
```r
options(width = 80)
## Use pngquant to optimize PNG images
library(knitr)
knit_hooks$set(pngquant = hook_pngquant)
pngquant <- "--speed=1 --quality=0-25"
if (!nzchar(Sys.which("pngquant"))) pngquant <- NULL
## logK with a thin space 20200627
logK <- "log <i>K</i>"
## Resolution settings
# Change this to TRUE to make high-resolution plots
# (default is FALSE to save time in CRAN checks)
hires <- FALSE
res1.lo <- 150
res1.hi <- 256
res1 <- ifelse(hires, res1.hi, res1.lo)
res2.lo <- 200
res2.hi <- 400
res2 <- ifelse(hires, res2.hi, res2.lo)
library(CHNOSZ)
reset()
```

This vignette was compiled on `r Sys.Date()` with CHNOSZ version `r sessionInfo()$otherPkgs$CHNOSZ$Version`.
The plots in this vignette were made using the following resolution settings, which can be changed if desired (low resolutions are used to save time in CRAN checks):
```r
cat("")
cat("\n")
cat(paste0(ifelse(hires, "# ", ""), "res1 <- ", res1.lo))
cat("\n")
cat(paste0(ifelse(hires, "", "# "), "res1 <- ", res1.hi))
cat("\n")
cat(paste0(ifelse(hires, "# ", ""), "res2 <- ", res2.lo))
cat("\n")
cat(paste0(ifelse(hires, "", "# "), "res2 <- ", res2.hi))
cat("\n")
cat("")
```
Basic diagrams in CHNOSZ are made for reactions that are balanced on an element (see Equilibrium in CHNOSZ) and therefore represent minerals or aqueous species that all have one element, often a metal, in common. The package documentation has many examples of diagrams for a single metal appearing in different minerals or complexed with different ligands, but a common request is to make diagrams for multiple metals. This vignette describes some methods for constructing diagrams for multi-metal minerals and other multi-element systems. The methods are mashing, mixing, mosaic stacking, and secondary balancing.
## Mashing
Mashing or simple overlay refers to independent calculations for two different systems that are displayed on the same diagram.
This example starts with a logf~O2~--pH base diagram for the C-O-H system then overlays a diagram for S-O-H. The second call to affinity() uses the argument recall feature, where the arguments after the first are taken from the previous command. This allows calculations to be run at the same conditions for a different system. This feature is also used in other examples in this vignette.
```r
par(mfrow = c(1, 2))
basis("CHNOS+")
species(c("CH4", "CO2", "HCO3-", "CO3-2"))
aC <- affinity(pH = c(0, 14), O2 = c(-75, -60))
dC <- diagram(aC, dx = c(0, 1, 0, 0), dy = c(0, 1, 0, 0))
species(c("H2S", "HS-", "HSO4-", "SO4-2"))
aS <- affinity(aC)  # argument recall
dS <- diagram(aS, add = TRUE, col = 4, col.names = 4, dx = c(0, -0.5, 0, 0))
aCS <- mash(dC, dS)
srt <- c(0, 0, 90, 0, 0, 0, 90, 0, 0, 0)
cex.names <- c(1, 1, 0.8, 1, 1, 1, 1, 1, 1, 1)
dy <- c(0, 0, 0, -0.2, 0, 0, 0, 0, 0, 0)
diagram(aCS, srt = srt, cex.names = cex.names, dy = dy)
legend("topright", legend = lTP(25, 1), bty = "n")
```
The second diagram is just like the first, except the function mash() is used to label the fields with names of species from both systems, and a legend is added to indicate the temperature and pressure.
Note that these are predominance diagrams, so they show only the species with highest activity; there is in fact a distribution of activities of aqueous species that is not visible here.
Tip: the names of the fields in the second diagram come from aCS$species$name, which are expressions made by combining aC$names and aS$names. If you prefer plain text names without formatting, add format.names = FALSE to all of the diagram() calls.
## Mixing 1
As shown above, mashing two diagrams is essentially a simple combination of the two systems. Although it is easy to make such a diagram, there is no interaction between the systems. If there is a possibility of forming bimetallic species, then additional considerations are needed to account for the stoichiometry of the mixture. The stoichiometry can be given as a fixed composition of both metals; then, all combinations of (mono- and/or bimetallic) species that satisfy this compositional constraint are used as the candidate "species" in the system. This is the same type of calculation as that described for binary Pourbaix diagrams in the Materials Project.
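As a rough illustration of the idea (this is not CHNOSZ code), finding the candidate assemblages that can combine to a target Fe:V composition amounts to solving small linear systems and keeping only the non-negative solutions; the species list and stoichiometries below are just example inputs:

```python
# Illustrative sketch of the compositional constraint behind mix(): enumerate
# two-phase combinations whose summed stoichiometry matches a fixed (Fe, V)
# composition. Affinities (not shown) then decide which candidate is stable.
from itertools import combinations

# (moles Fe, moles V) per formula unit -- example species only
species = {
    "Fe2O3": (2, 0), "V2O3": (0, 2),
    "FeV": (1, 1), "Fe3V": (3, 1), "FeV3": (1, 3),
}

def assemblages(target):
    """Return phase pairs that can combine (non-negative amounts) to target."""
    found = []
    for (n1, (a1, b1)), (n2, (a2, b2)) in combinations(species.items(), 2):
        det = a1 * b2 - a2 * b1
        if det == 0:
            continue  # parallel compositions: no unique solution
        # Solve x*(a1, b1) + y*(a2, b2) = target
        x = (target[0] * b2 - target[1] * a2) / det
        y = (a1 * target[1] - b1 * target[0]) / det
        if x >= 0 and y >= 0:
            found.append((n1, n2))
    return found

print(assemblages((1, 1)))  # candidate assemblages for a 1:1 Fe:V mixture
```

For a 1:1 target this admits, among others, Fe~2~O~3~ + V~2~O~3~ and Fe~3~V + FeV~3~, while ruling out pairs like Fe~2~O~3~ + Fe~3~V that would require a negative amount of one phase.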
This example makes a Pourbaix diagram for the Fe-V-O-H system that is similar to Figure 1 of @SZS_17. Before getting started, it may be helpful to clarify some terminology. In the materials science community, materials are characterized by several energies (among others): 1) the formation energy from the elements in their reference state, 2) the energy above the convex hull, which is zero for stable materials, and greater than zero for metastable materials, and 3) the Pourbaix energy difference (ΔG~pbx~), which refers to the energy of a given material with respect to the stable solids and aqueous species as a function of pH and Eh. The parallel terminology used in CHNOSZ is that aqueous species or minerals have a 1) standard Gibbs energy of formation from the elements, ΔG° = f(T, P), which is available from the OBIGT database, 2) standard Gibbs energy of reaction from the stable species, and 3) affinity of formation from the basis species, A = -ΔG = f(T, P, and activities of all species). As used in CHNOSZ, "formation reaction" refers to formation from the basis species, not from the elements. The basis species are not in general the stable species, so we begin by identifying the stable species in the system; the difference between their affinities and the affinity of any other species corresponds to -ΔG~pbx~.
First we need to assemble the standard Gibbs energies of the solids and aqueous species. For solids, values of formation energy from the elements (in eV/atom) computed using density functional theory (DFT) were retrieved from the Materials API [@OCJ_15] and are converted to units of J/mol. The Materials Project (MP) website also provides these values, but with fewer decimal places, which would lead to a small rounding error in the comparison of energy above the hull at the end of this example. For aqueous species, values of standard Gibbs energy of formation from the elements at 25 °C (in J/mol) are taken mostly from @WEP_82 augmented with data for FeO~2~^-^ from @SSWS97 and FeO~4~^-2^ from @Mis73. Adapting the method described by @PWLC12, a correction for each metal is calculated from the difference between the DFT-based formation energy and the standard Gibbs energy of a representative material; here we use values for Fe~3~O~4~ (magnetite) and V~3~O~5~ from @WEP_82. This correction is then applied to all of the aqueous species that have that metal. Finally, mod.OBIGT() is used to add the obtained energies to the OBIGT database in CHNOSZ.
This code produces species indices in the OBIGT database for Fe- and V-bearing aqueous species (iFe.aq, iV.aq), solids (iFe.cr, iV.cr), and bimetallic solids (iFeV.cr), which are used in the following diagrams.
Now we set up the plot area and assign activities of aqueous species to 10^-5^, which is the default value for diagrams on the MP website (from the page for a material: "Generate Phase Diagram" -- "Aqueous Stability (Pourbaix)"). The following commands compute Eh-pH diagrams for the single-metal systems Fe-O-H and V-O-H. The pH and Eh ranges are made relatively small in order to show just a part of the diagram. The diagrams are not plotted, but the output of diagram() is saved in dFe and dV for later use.
```r
par(mfrow = c(1, 3))
loga.Fe <- -5
loga.V <- -5
# Fe-O-H diagram
basis(c("VO+2", "Fe+2", "H2O", "e-", "H+"))
species(c(iFe.aq, iFe.cr))
species(1:length(iFe.aq), loga.Fe)
aFe <- affinity(pH = c(4, 10, res1), Eh = c(-1.5, 0, res1))
dFe <- diagram(aFe, plot.it = FALSE)
# V-O-H diagram
species(c(iV.aq, iV.cr))
species(1:length(iV.aq), loga.V)
aV <- affinity(aFe)  # argument recall
dV <- diagram(aV, plot.it = FALSE)
# Calculate affinities for bimetallic species
species(iFeV.cr)
aFeV <- affinity(aFe)  # argument recall
dFeV <- diagram(aFeV, plot.it = FALSE, bold = TRUE)
# 1:1 mixture (Fe:V)
a11 <- mix(dFe, dV, dFeV, c(1, 1))
# Adjust labels 20210219
iV2O3 <- info("V2O3")
iFeO <- info("FeO", "cr")
iFe3V <- info("Fe3V")
srt <- rep(0, nrow(a11$species))
srt[a11$species$ispecies == paste(iFeO, iV2O3, sep = ",")] <- 90
srt[a11$species$ispecies == paste(iV2O3, iFe3V, sep = ",")] <- -13
diagram(a11, min.area = 0.01, srt = srt)
title("Fe:V = 1:1")
label.figure(lTP(25, 1), xfrac = 0.12)
# 1:3 mixture
a13 <- mix(dFe, dV, dFeV, c(1, 3))
srt <- rep(0, nrow(a13$species))
srt[a13$species$ispecies == paste(iFeO, iV2O3, sep = ",")] <- 90
srt[a13$species$ispecies == paste(iV2O3, iFe3V, sep = ",")] <- -13
diagram(a13, min.area = 0.01, srt = srt)
title("Fe:V = 1:3")
# 1:5 mixture
a15 <- mix(dFe, dV, dFeV, c(1, 5))
iFeV3 <- info("FeV3")
srt <- rep(0, nrow(a15$species))
srt[a15$species$ispecies == paste(iFeO, iV2O3, sep = ",")] <- 90
srt[a15$species$ispecies == paste(iV2O3, iFe3V, sep = ",")] <- -13
srt[a15$species$ispecies == paste(iV2O3, iFeV3, sep = ",")] <- -13
diagram(a15, min.area = 0.01, srt = srt)
title("Fe:V = 1:5")
```

Then we calculate the affinities for the bimetallic species and save the output of diagram() in dFeV, again without making a plot, but formatting the names in bold. Note that diagram() uses different colors for regions with two solids, one solid, and no solids, including some transparency to show the underlying water stability region that is plotted first.

Now we have all the ingredients needed to combine the Fe-bearing, V-bearing, and bimetallic species to generate a given composition. The mix() function is used to calculate the affinities of formation from basis species for all combinations of aqueous species and minerals that satisfy each of three different compositions. Finally, the diagram()s are plotted; the min.area argument is used to remove labels for very small fields.

Regarding the legend, it should be noted that although the DFT calculations for solids are made for zero temperature and zero pressure [@SZS_17], the standard Gibbs energies of aqueous species [e.g. @WEP_82] are modified by a correction term so that they can be combined with DFT energies to reproduce the experimental energy for dissolution of a representative material for each metal at 25 °C and 1 bar [@PWLC12].

In these diagrams, changing the Fe:V ratio affects the fully reduced metallic species. In the 1:1 mixture, the FeV~3~ + Fe~3~V assemblage is predicted to be stable instead of FeV. This result is unlike Figure 1 of @SZS_17 but is consistent with the MP page for FeV where it is shown to decompose to this assemblage. On the other hand, FeV~3~ is stable in the 1:3 mixture.
For an even higher proportion of V, the V + FeV~3~ assemblage is stable, which can be seen for instance in the Pourbaix diagram linked from the MP page for FeV~5~O~12~.

Let's make another diagram for the 1:1 Fe:V composition over a broader range of Eh and pH. The diagram shows a stable assemblage of Fe~2~O~3~ with an oxidized bimetallic material, Fe~2~V~4~O~13~.

```r
layout(t(matrix(1:3)), widths = c(1, 1, 0.2))
par(cex = 1)
# Fe-bearing species
basis(c("VO+2", "Fe+2", "H2O", "e-", "H+"))
species(c(iFe.aq, iFe.cr))$name
species(1:length(iFe.aq), loga.Fe)
aFe <- affinity(pH = c(0, 14, res2), Eh = c(-1.5, 2, res2))
dFe <- diagram(aFe, plot.it = FALSE)
# V-bearing species
species(c(iV.aq, iV.cr))$name
species(1:length(iV.aq), loga.V)
aV <- affinity(aFe)  # argument recall
dV <- diagram(aV, plot.it = FALSE)
# Bimetallic species
species(iFeV.cr)
aFeV <- affinity(aFe)  # argument recall
dFeV <- diagram(aFeV, plot.it = FALSE, bold = TRUE)
# 1:1 mixture (Fe:V)
a11 <- mix(dFe, dV, dFeV, c(1, 1))
# Adjust labels 20210219
iV2O3 <- info("V2O3")
iFe3V <- info("Fe3V")
iVO4m3 <- info("VO4-3")
iFe2O3 <- info("Fe2O3")
srt <- rep(0, nrow(a11$species))
srt[a11$species$ispecies == paste(iV2O3, iFe3V, sep = ",")] <- -13
srt[a11$species$ispecies == paste(iFe2O3, iVO4m3, sep = ",")] <- 90
d11 <- diagram(a11, min.area = 0.01, srt = srt)
water.lines(d11, col = "orangered")
# Calculate affinity of FeVO4
species("FeVO4")
aFeVO4 <- affinity(aFe)  # argument recall
# Calculate difference from stable species
aFeVO4_vs_stable <- aFeVO4$values[[1]] - d11$predominant.values
# Overlay lines from diagram on color map
diagram(a11, fill = NA, names = FALSE, limit.water = FALSE)
opar <- par(usr = c(0, 1, 0, 1))
col <- rev(hcl.colors(128, palette = "YlGnBu", alpha = 0.8))
image(aFeVO4_vs_stable, col = col, add = TRUE)
par(opar)
diagram(a11, fill = NA, add = TRUE, names = FALSE)
water.lines(d11, col = "orangered")
thermo.axis()
imax <- arrayInd(which.max(aFeVO4_vs_stable), dim(aFeVO4_vs_stable))
pH <- d11$vals$pH[imax[1]]
Eh <- d11$vals$Eh[imax[2]]
points(pH, Eh, pch = 10, cex = 2, lwd = 2, col = "gold")
stable <- d11$names[d11$predominant[imax]]
text(pH, Eh, stable, adj = c(0.5, -1), cex = 1.2, col = "gold")
# Make color scale 20210228
par(mar = c(3, 0, 2.5, 2.7))
plot.new()
levels <- 1:length(col)
plot.window(xlim = c(0, 1), ylim = range(levels), xaxs = "i", yaxs = "i")
rect(0, levels[-length(levels)], 1, levels[-1L], col = rev(col), border = NA)
box()
# To get the limits, convert range of affinities to eV/atom
arange <- rev(range(aFeVO4_vs_stable))
# This gets us to J/mol
Jrange <- convert(convert(arange, "G"), "J")
# And to eV/atom
eVrange <- Jrange / 1.602176634e-19 / 6.02214076e23 / 6
ylim <- formatC(eVrange, digits = 3, format = "f")
axis(4, at = range(levels), labels = ylim)
mtext(quote(Delta*italic(G)[pbx]*", eV/atom"), side = 4, las = 0, line = 1)
```
We then compute the affinity for formation of a metastable material, in this case triclinic FeVO~4~, from the same basis species used to make the previous diagrams. Given the diagram for the stable Fe-, V- and bimetallic materials mixed with the same stoichiometry as FeVO~4~ (1:1 Fe:V), the difference between their affinities of formation and that of FeVO~4~ corresponds to the Pourbaix energy difference (-ΔG~pbx~). This is plotted as a color map in the second diagram. (See the source of this vignette for the code used to make the scale bar.)
Now we locate the pH and Eh that maximize the affinity (that is, minimize ΔG~pbx~) of FeVO~4~ compared to the stable species. In agreement with @SZS_17, this is in the stability field of Fe~2~O~3~ + Fe~2~V~4~O~13~.
```r
plot(1:10)  # so we can run "points" in this chunk
imax <- arrayInd(which.max(aFeVO4_vs_stable), dim(aFeVO4_vs_stable))
pH <- d11$vals$pH[imax[1]]
Eh <- d11$vals$Eh[imax[2]]
points(pH, Eh, pch = 10, cex = 2, lwd = 2, col = "gold")
stable <- d11$names[d11$predominant[imax]]
text(pH, Eh, stable, adj = c(0.3, 2), cex = 1.2, col = "gold")
range(aFeVO4_vs_stable[d11$predominant == d11$predominant[imax]])
```
Although one point is drawn on the diagram, FeVO~4~ has the same Pourbaix energy difference with respect to the entire Fe~2~O~3~ + Fe~2~V~4~O~13~ field, as shown by the range() command (the values are dimensionless values of affinity, A/(RT) = -ΔG~pbx~/(RT)). This can occur only if the decomposition reaction has no free O~2~ or H~2~, and means that in this case ΔG~pbx~ in the Fe~2~O~3~ + Fe~2~V~4~O~13~ field is equal to the energy above the hull.
To calculate the energy above the hull "by hand", let's set up the basis species to be the stable decomposition products we just found. O~2~ is also needed to make a square stoichiometric matrix (i.e. same number of elements and basis species), but it does not appear in the reaction to form FeVO~4~ from the basis species. subcrt() is used to automatically balance the formation reaction for 1 mole of FeVO~4~ and calculate the standard Gibbs energy of the reaction. The convert() command divides this value by -RT (which yields the log K of the reaction), showing the same result as calculated above from the Pourbaix diagram. The value of ΔG° in cal/mol (the default for subcrt()) is converted to J/mol, then to eV/mol, and finally eV/atom.
```r
b <- basis(c("Fe2O3", "Fe2V4O13", "O2"))
cal_mol <- subcrt("FeVO4", 1, T = 25)$out$G
convert(cal_mol, "logK")
J_mol <- convert(cal_mol, "J")
eV_mol <- J_mol / 1.602176634e-19
eV_atom <- eV_mol / 6.02214076e23 / 6
round(eV_atom, 3)
stopifnot(round(eV_atom, 3) == 0.415)
```
This is equal to the value for the energy above the hull / atom for triclinic FeVO~4~ on the MP website (0.415 eV, accessed on 2020-11-09 and 2021-02-19). This shows that we successfully made a round trip starting with the input formation energies (eV/atom) from the Materials API, to standard Gibbs energy (J/mol) in the OBIGT database, and back out to energy above the hull (eV/atom).
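The unit arithmetic behind this round trip is independent of CHNOSZ; a minimal sketch (the starting value and the 6 atoms per FeVO~4~ formula unit follow the example above, the constants are the CODATA elementary charge and Avogadro number):

```python
# Round-trip conversion between eV/atom and J/mol, as used above for FeVO4
# (6 atoms per formula unit). The starting value 0.415 eV/atom is taken from
# the example; everything else is plain arithmetic.
E_CHARGE = 1.602176634e-19   # J per eV (elementary charge)
AVOGADRO = 6.02214076e23     # 1/mol

def eV_atom_to_J_mol(ev_per_atom, n_atoms):
    return ev_per_atom * n_atoms * E_CHARGE * AVOGADRO

def J_mol_to_eV_atom(j_per_mol, n_atoms):
    return j_per_mol / E_CHARGE / AVOGADRO / n_atoms

g = eV_atom_to_J_mol(0.415, 6)   # energy above the hull, in J/mol
back = J_mol_to_eV_atom(g, 6)    # and back to eV/atom
print(g, back)
```

The product of the two constants is the Faraday-like factor of about 96485 J per mol·eV, so 0.415 eV/atom for a 6-atom formula corresponds to roughly 240 kJ/mol.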
The concept of using the stable minerals and aqueous species to calculate reaction energetics is formalized in the mosaic() function, which is described next. Because this example modified the thermodynamic data for some minerals that are used below, we should restore the default OBIGT database before proceeding to the next section.
```r
reset()
```
## Mosaic Stacking 1
A mosaic diagram shows the effects of changing basis species on the stabilities of minerals. The Fe-S-O-H system is a common example: the speciation of aqueous sulfur species affects the stabilities of iron oxides and sulfides. Examples of mosaic diagrams with Fe or other single metals are given elsewhere.
A mosaic stack is when predominance fields for minerals calculated in one mosaic diagram are used as input to a second mosaic diagram, where the minerals are now themselves basis species. The example here shows the construction of a Cu-Fe-S-O-H diagram.
First we define the conditions and basis species. It is important to put Cu^+^ first so that it will be used as the balance for the reactions with Cu-bearing minerals (which also have Fe). Pyrite is chosen as the starting Fe-bearing basis species, which will be changed as indicated in Fe.cr.
```r
logaH2S <- -2
T <- 200
pH <- c(0, 14, res2)
O2 <- c(-48, -33, res2)
basis(c("Cu+", "pyrite", "H2S", "oxygen", "H2O", "H+"))
basis("H2S", logaH2S)
S.aq <- c("H2S", "HS-", "HSO4-", "SO4-2")
Fe.cr <- c("pyrite", "pyrrhotite", "magnetite", "hematite")
Fe.abbrv <- c("Py", "Po", "Mag", "Hem")
```
Now we calculate affinities for minerals in the Fe-S-O-H system that take account of the changing aqueous sulfur species in S.aq. The result is used to make different layers of the diagram (1 and 2 are both made by the first call to diagram()):
1. Water stability region (gray shading)
2. Predominance fields for the aqueous S species (blue text and dashed lines)
3. Stability areas for the Fe-bearing minerals (black text and lines)
```r
species(Fe.cr)
mFe <- mosaic(S.aq, pH = pH, O2 = O2, T = T)
diagram(mFe$A.bases, lty = 2, col = 4, col.names = 4, italic = TRUE,
  dx = c(0, 1, 0, 0), dy = c(-1.5, 0, 1, 0))
dFe <- diagram(mFe$A.species, add = TRUE, lwd = 2, names = Fe.abbrv,
  dx = c(0, 0.5, 0, 0), dy = c(-1, 0, 0.5, 0))
```
```r
FeCu.cr <- c("chalcopyrite", "bornite")
Cu.cr <- c("copper", "cuprite", "tenorite", "chalcocite", "covellite")
FeCu.abbrv <- c("Ccp", "Bn", "Cu", "Cpr", "Tnr", "Cct", "Cv")
species(c(FeCu.cr, Cu.cr))
mFeCu <- mosaic(list(S.aq, Fe.cr), pH = pH, O2 = O2,
  T = T, stable = list(NULL, dFe$predominant))
diagram(mFeCu$A.species, add = TRUE, col = 2, col.names = 2, bold = TRUE,
  names = FeCu.abbrv, dy = c(0, 0, 0, 0, 0, 1, 0))
col <- c("#FF8C00", rep(NA, 6))
diagram(mFeCu$A.species, add = TRUE, col = col, lwd = 2, col.names = col,
  bold = TRUE, names = FeCu.abbrv)
TPS <- c(describe.property(c("T", "P"), c(T, "Psat")), expression(sum(S) == 0.01*m))
legend("topright", TPS, bty = "n")
title("Cu-Fe-S-O-H (minerals only)", font.main = 1)
```

Next we load the Cu-bearing minerals and calculate their affinities while changing both the aqueous sulfur species and the Fe-bearing minerals whose stability fields were just calculated. The latter step is the key to the mosaic stack and is activated by supplying the calculated stabilities of the Fe-bearing minerals in the stable argument. This is a list whose elements correspond to each group of changing basis species given in the first argument. The NULL means that the abundances of S-bearing aqueous species are calculated according to the default in mosaic(), which uses equilibrate() to compute the continuous transition between them ("blending"). Because the Fe-bearing minerals are the second group of changing basis species (Fe.cr), their stabilities are given in the second position of the stable list. The result is used to plot the last layer of the diagram:

4. Stability areas for Cu-bearing minerals (red text and lines; orange for chalcopyrite)

After that we add the legend and title. This diagram has a distinctive chalcopyrite "hook" surrounded by a thin bornite field. Only the chalcopyrite-bornite reaction in the pyrite field is shown in some published diagrams [e.g. @And75; @Gio02], but diagrams with a similar chalcopyrite wedge or hook geometry can be seen in @BBR77 and @Bri80.

## Mosaic Stacking 2

The previous diagram shows the relative stabilities of minerals only. The next diagram adds aqueous species to the system. The positions of the boundaries between the stability fields for minerals and aqueous species are calculated for a given activity of the latter, in this case 10^-6^.
After running the code to make this diagram, we can list the reference keys for the minerals and aqueous species.

```r
minerals <- list(Fe.cr = Fe.cr, Cu.cr = Cu.cr, FeCu.cr = FeCu.cr)
aqueous <- list(S.aq = S.aq, Fe.aq = Fe.aq, Cu.aq = Cu.aq)
allspecies <- c(minerals, aqueous)
iall <- lapply(allspecies, info)
allkeys <- lapply(iall, function(x) thermo.refs(x)$key)
allkeys
```

The next code chunk prepends @ to the reference keys and uses the chunk option results = 'asis' to insert the citations into the R Markdown document in chronological order.

```r
allyears <- lapply(iall, function(x) thermo.refs(x)$year)
o <- order(unlist(allyears))
cat(paste(paste0("@", unique(unlist(allkeys)[o])), collapse = "; "))
```

## Mixing 2

The previous diagram shows a stability boundary between chalcopyrite and bornite but does not identify the stable assemblages that contain these minerals. This is where mix() can help. Following the workflow described in Mixing 1, we first calculate individual diagrams for Fe-S-O-H and Cu-S-O-H, which are overlaid on the first plot and saved in dFe and dCu. We then calculate the affinities for the bimetallic Cu and Fe minerals and run them through diagram() without actually making a plot, but save the result in dFeCu. Then, we combine the results using mix() to define different proportions of Fe and Cu.

These diagrams show that changing the amounts of the metals affects the stability of minerals involved in reactions with chalcopyrite. At a 1:1 ratio of Fe:Cu, chalcopyrite is a stable single-mineral assemblage. At a 2:1 ratio, pyrite, pyrrhotite, or magnetite can coexist in a two-phase assemblage with chalcopyrite. At a 1:2 ratio, an assemblage consisting of the two bimetallic minerals (chalcopyrite and bornite) is stable.

## Mosaic Stacking 3

The results of a mosaic stack can also be processed with mash() to label each region with the minerals from both systems. For this example, the speciation of aqueous sulfur is not considered; instead, the fugacity of S~2~ is a plotting variable. The stable Fe-bearing minerals (Fe-S-O-H) are used as the changing basis species to make the diagram for Cu-bearing minerals (Cu-Fe-S-O-H). Then, the two diagrams are mashed to show all minerals in a single diagram. Greener colors are used to indicate minerals with less S~2~ and more O~2~ in their formation reactions. The resulting diagram is similar to Figure 2 of @Sve87; that diagram also shows calculations of the solubility of Cu and concentration of SO~4~^-2^ in model Cu ore-forming fluids.
The solubility() function can be used to calculate the total concentration of Cu in different complexes in solution (listed in the iaq argument). The bases argument triggers a mosaic() calculation, so that the solubility corresponds to that of the stable minerals at each point on the diagram. The pH for these calculations is set to 6, and the molality of free Cl^-^, which affects the formation of the Cu chloride complexes, is estimated based on the composition of fluids from Table 2 of @Sve87 (ca. 80000 mg Cl / kg H~2~O) and the NaCl() function in CHNOSZ. This also gives an estimated ionic strength, which is used in the following mosaic() and affinity() calls to calculate activity coefficients. After running the code above, we can inspect the value of calc to show the estimated ionic strength and activity of Cl^-^; the latter is very close to unity.

```r
# Ionic strength
calc$IS
# Logarithm of activity of Cl-
log10(calc$m_Cl * calc$gam_Cl)
```
The thick magenta lines indicate the 35 ppm contour for Cu and SO~4~^-2^. The first plot shows a lower Cu solubility in this region compared to Figure 2 of @Sve87. The difference is likely due to lower stabilities of Cu(I) chloride complexes in the default OBIGT database, compared to those available at the time [@Hel69]. For the second plot, the standard Gibbs energies of CuCl~2~^-^ and CuCl~3~^-2^ are adjusted so that the logK for their dissociation reactions at 125 °C matches values interpolated from Table 5 of @Hel69. Here are the logK values after the adjustment, followed by a reset() call to compare the values with the default database, which is also used for later examples in this vignette. (T was set to 125 above.)
```r
# logK values interpolated from Table 5 of Helgeson (1969)
subcrt(c("CuCl2-", "Cu+", "Cl-"), c(-1, 1, 2), T = T)$out$logK
subcrt(c("CuCl3-2", "Cu+", "Cl-"), c(-1, 1, 3), T = T)$out$logK
# Default OBIGT database
reset()
subcrt(c("CuCl2-", "Cu+", "Cl-"), c(-1, 1, 2), T = T)$out$logK
subcrt(c("CuCl3-2", "Cu+", "Cl-"), c(-1, 1, 3), T = T)$out$logK
```
The higher stability of these complexes from @Hel69 causes the 35 ppm contour to move closer to the position shown by @Sve87.
Interestingly, the calculations here also predict substantial increases of the Cu concentration at high f~S2~ and low f~O2~, due to the formation of bisulfide complexes with Cu. The aqueous species considered in the calculation can be seen like this:
```r
names(iCu.aq)
```
CuHS and Cu(HS)~2~^-^ can be excluded by removing S from the retrieve() call above (i.e. only c("O", "H", "Cl") as the elements in possible ligands); doing so precludes a high concentration of aqueous Cu in the highly reduced, sulfidic region.
The third plot for the concentration of SO~4~^-2^ is simply made by using affinity() to calculate the affinity of its formation reaction as a function of f~S2~ and f~O2~ at pH 6 and 125 °C, then using solubility() to calculate the solubility of S~2~(gas), expressed in terms of moles of SO~4~^-2^ in order to calculate parts per million (ppm) by weight.
## Secondary Balancing
Predominance diagrams in CHNOSZ are made using the maximum affinity method, where the affinities of formation reactions of species are divided by the balancing coefficients [@Dic19]. Usually, these balancing coefficients are taken from the formation reactions themselves; for example, if they are the coefficients on the Fe-bearing basis species, then the reactions are said to be "balanced on Fe".
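The maximum affinity method itself reduces to an argmax over affinity per balancing coefficient. A minimal sketch of the idea (the species names, affinities, and coefficients below are made up for illustration, not taken from the vignette):

```python
# Sketch of the maximum affinity method: at each grid point, the predominant
# species is the one with the highest affinity divided by its balancing
# coefficient (e.g. moles of Fe in the formation reaction). Numbers are
# hypothetical.
affinity = {"pyrite": -10.0, "magnetite": -18.0, "hematite": -14.0}
balance = {"pyrite": 1, "magnetite": 3, "hematite": 2}  # moles Fe per formula

def predominant(affinity, balance):
    """Species maximizing affinity per mole of the balanced component."""
    return max(affinity, key=lambda sp: affinity[sp] / balance[sp])

print(predominant(affinity, balance))
```

Dividing by the balancing coefficients is what makes species with different formula sizes comparable; without it, larger formulas would be systematically favored or penalized.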
Some diagrams in the literature are made with secondary balancing constraints in addition to the primary ones. For example, reactions of Fe-bearing minerals are balanced on Fe, and reactions of Cu-bearing minerals are balanced on Cu; these are both primary balancing coefficients. Then, reactions between all minerals are balanced on H^+^ as the secondary balancing coefficients. The core concept is to apply the secondary balance while also maintaining the primary balance; a method to do this has been implemented in the rebalance() function.
Different parts of the script to make the diagrams are described below; press the button to show the entire script.
We first define basis species to contain both Cu- and Fe-bearing species. The *x* axis is the ratio of activities of Fe^+2^ and Cu^+^; the label is made with ratlab().
We then calculate the diagrams for the primary balancing coefficients, for the groups of only Fe-, only Cu-, and only Fe+Cu-bearing minerals. It is obvious that the first two systems are balanced on Fe and Cu, respectively, but the third has a somewhat unusual balance: H^+^. See Reaction 4 of @MH85 for an example.
Now comes the secondary balancing, where all reactions, not only that between bornite and chalcopyrite, are balanced on H^+^. We first rebalance the diagrams for the Fe- or Cu-bearing minerals to make diagram D. Note that after secondary balancing with rebalance(), the argument balance = 1 should be used in diagram() to prevent further balancing. This is because rebalance() preserves the primary balancing for Fe- and Cu-bearing minerals (internally the "plotvals" components of dFe and dCu).
Then we rebalance diagrams D and C to make the final diagram in E. The fields in this diagram are labeled with mineral abbreviations from the OBIGT database.
The final diagram is like one shown in Figure 5 of @Bri80 and Figure 5 of @MH85.
Challenge: Although the diagram here is drawn only for H~2~S in the basis species, take it a step further and make a mosaic diagram to account for the stability of HSO~4~^-^ at high oxygen fugacity.
## Other Possibilities
Conceptually, the methods described above treat different metal-bearing elements as parts of distinct chemical systems that are then joined together. Other methods may be more suitable for considering multiple metals (or other elements) in one system.
### Balancing on a Non-Metal
As shown in the secondary balancing example, there is no requirement that the balancing coefficients come from a metal-bearing species. It is possible to make diagrams for minerals with different metallic elements simply by using a non-metallic element as the primary balance. Here is an example for the Cu-Fe-S-O-H system. The reactions are balanced on O~2~, which means that no O~2~ appears in the reaction between any two minerals, but Fe^+2^ and/or Cu^+^ can be present, depending on the chemical composition. Saturation limits are shown for species that have no O~2~ in their formation reactions.
```r
basis(c("Fe+2", "Cu+", "hydrogen sulfide", "oxygen", "H2O", "H+"))
basis("H2S", 2)
species(c("pyrite", "magnetite", "hematite", "covellite", "tenorite",
  "chalcopyrite", "bornite"))
a <- affinity("Cu+" = c(-8, 2, 500), "Fe+2" = c(-4, 12, 500), T = 400, P = 2000)
names <- info(a$species$ispecies)$abbrv
d <- diagram(a, xlab = ratlab("Cu+"), ylab = ratlab("Fe+2"), balance = "O2", names = names)
title(bquote("Cu-Fe-S-O-H; 1° balance:" ~ .(expr.species(d$balance))))
# Add saturation lines
species(c("pyrrhotite", "ferrous-oxide", "chalcocite", "cuprite"))
asat <- affinity(a)  # argument recall
names <- asat$species$name
names[2] <- "ferrous oxide"
diagram(asat, type = "saturation", add = TRUE, lty = 2, col = 4, names = names)
legend("topleft", legend = lTP(400, 2000), bty = "n")
```
This example was prompted by Figure 3 of @MH85; earlier versions of the diagram are in @HBL69 and @Hel70a.
In some ways this is like the inverse of the mosaic stacking example. There, reactions were balanced on Fe or Cu, and f~O2~ and pH were used as plotting variables. Here, the reactions are balanced on O~2~ and implicitly on H^+^ through the activity ratios with a~Fe^+2^~ and a~Cu^+^~, which are the plotting variables.
More common diagrams of this type are balanced on Si or Al. See demo(saturation) for an example in the H~2~O-CO~2~-CaO-MgO-SiO~2~ system.
### Mosaic Combo
Instead of adding minerals with different metals by stacking mosaic diagrams, it may be possible to include two different metals in the basis species and formed species. The mosaic() and equilibrate() functions can be combined to balance on two different elements. The example here compares two methods applied to N-, C-, and N+C-bearing species because bimetallic aqueous species are not currently available in the OBIGT database. The total activities used here are modified from the example for sedimentary basin brines described by @Sho93, which is also the source of the thermodynamic parameters for acetamide.
1. The "effective equilibrium constant" (K^eff^) method [@RBG_21] is used to calculate the activity of acetamide for given total activities of neutral and ionized species, i.e. ∑(ammonia and ammonium) and ∑(acetic acid and acetate).
2. Using the mosaic combo method, the mosaic() command calculates equilibrium activities of NH~3~ and NH~4~^+^ for a given total activity of N in the basis species, and calculates the corresponding affinities of the formed species. Then, the equilibrate() command calculates equilibrium activities of the formed species for a given total activity of C, and combines them with the activities of the changing basis species (NH~3~ and NH~4~^+^).
The mosaic combo method (solid black line) produces results equivalent to those of the K^eff^ method (dashed blue line).
The diagram shows the ionization of acetic acid and NH~3~ at different pHs. The predicted appearance of acetamide (CH~3~CONH~2~) is a consequence of the interaction between the N-bearing and C-bearing species, and is analogous to the formation of a bimetallic complex.
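The pH-dependent split between protonated and deprotonated forms that underlies this ionization behavior can be illustrated outside CHNOSZ with a minimal Henderson–Hasselbalch calculation. This is a language-agnostic sketch, not part of the vignette; the pKa values (≈4.76 for acetic acid, ≈9.25 for ammonium, near 25 °C) are textbook figures, not taken from OBIGT.

```python
def speciate(total, pKa, pH):
    """Split a total activity between the protonated form (HA) and its
    conjugate base (A-) using the ratio [A-]/[HA] = 10**(pH - pKa)."""
    ratio = 10 ** (pH - pKa)
    protonated = total / (1 + ratio)
    return protonated, total - protonated

# Illustrative 25 degC pKa values: acetic acid ~4.76, ammonium ~9.25.
for pH in (3, 5, 7, 9, 11):
    hac, ac = speciate(1e-3, 4.76, pH)   # acetic acid / acetate
    nh4, nh3 = speciate(1e-3, 9.25, pH)  # ammonium / ammonia
    print(f"pH {pH:2d}: HAc {hac:.2e}  Ac- {ac:.2e}  NH4+ {nh4:.2e}  NH3 {nh3:.2e}")
```

At pH = pKa each pair splits 50/50; far from pKa one form dominates, which is the same crossover behavior the diagram shows for the N- and C-bearing basis species.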
Thanks to Kirt Robinson for the feature request and test case used in this example.
## Document History
• 2020-07-15 First version.
• 2021-03-01 Improve mineral abbreviations and placement of labels; use updated DFT energies from Materials Project; add Mosaic Stacking 2 (minerals and aqueous species); add K^eff^ calculation; add ΔG~pbx~ color scale.
CHNOSZ documentation built on April 9, 2021, 9:08 a.m.
https://www.math.princeton.edu/events/rank-and-isomorphism-von-neumann-special-flows-2017-04-27t150004 | # On rank and isomorphism of von Neumann special flows
Anton Solomko , University of Bristol
Fine Hall 1001
Please note special time and location. A von Neumann flow is a special flow over an irrational rotation of the circle and under a piecewise smooth roof function with a non-zero sum of jumps. Such flows appear naturally as special representations of Hamiltonian flows on the torus with critical points. We consider the class of von Neumann flows with one discontinuity. I will show that any such flow has infinite rank and that the absolute value of the jump of the roof function is a measure theoretic invariant. The main ingredient in the proofs is a Ratner type property of parabolic divergence of orbits of two nearby points in the flow direction. Joint work with Adam Kanigowski.
http://www.math.umass.edu/news-briefs?page=20 | # Faculty News Briefs
### December 2006
Visiting Assistant Professor Ana-Maria Castravet gave a talk on November 7, 2006 in the Harvard-MIT Algebraic Geometry Seminar. The title of her talk was Hilbert's 14th Problem and Cox Rings, based on joint work with Jenia Tevelev.
Professor Paul Gunnells gave a talk on November 15, 2006 at the Algebra and Number Theory Seminar at the University of Maryland. The title of his talk was Quadratic Weyl Multiple Dirichlet Series.
Professor Farshid Hajir gave a colloquium on November 3, 2006 at McMaster University in Hamilton, Ontario, Canada. The title of his talk was Galois Groups and Dynamics on the Projective Line.
Professor Panos Kevrekidis was informed on November 2, 2006 that his paper Deciding the Nature of the Coarse Equation through Microscopic Simulations: The Baby-Bathwater Scheme was chosen as the next SIGEST selection from Multiscale Modeling and Simulation, a journal published by the Society for Industrial and Applied Mathematics (SIAM). Co-written with Ju Li, C.W. Gear, and I. G. Kevrekidis, the paper will appear in issue 49-2 of SIAM Review in June 2007. The purpose of the SIGEST program is to make the more than 10,000 readers of SIAM Review aware of exceptional papers published in SIAM's specialized journals. The paper by Panos and his collaborators was chosen by the editors of SIAM Review for the importance of its contributions and topic, its clear writing style, and its broad interest for the SIAM community.
Professor Jenia Tevelev gave a talk on October 31, 2006 at the Commutative Algebra and Algebraic Geometry Seminar at the University of California at Berkeley. The title of his talk was Modular, Log Canonical, And Tropical Compactifications. He gave another talk on November 3 at the Algebraic Geometry Seminar at Stanford University. The title of his second talk was Geometry of Chow Quotients of Grassmannians.
### November 2006
Professor George Avrunin has been named a Distinguished Scientist by the Association for Computing Machinery for having made a significant impact in the fields of computing, computer science and information technology. One of 49 people given this honor, George is the Associate Head of the Department as well as an adjunct professor in the Department of Computer Science. He is currently investigating finite-state verification techniques as applied to high-performance scientific computing and to complex medical processes. In order to view the list of 2006 ACM Distinguished Members and for information on selection criteria, visit http://distinguished.acm.org.
Professor Tom Braden was co-organizer of a special session at the meeting of the American Mathematical Society in Storrs, CT during the weekend of October 28–29, 2006. The title of the special session was "Combinatorial Techniques in Equivariant Topology".
Professor Eduardo Cattani has been appointed to the American Mathematical Society's Committee on Human Rights of Mathematicians. The appointment, made by AMS President James Arthur, is for a three-year term effective February 1, 2007. The Committee on Human Rights assists the AMS by investigating alleged violations of human rights of foreign mathematicians, whether they may have occurred in the US or abroad, and by recommending appropriate action whenever action seems warranted.
Eli Cooper spoke on part of his dissertation work at the Storrs AMS special session on Geometric Analysis in October 2006; the special session was co-organized by his advisor, Professor Rob Kusner. Because a scheduled speaker canceled, fellow graduate student Shabnam Beheshti gave an impromptu lecture on her dissertation work, which is being directed by Professor Emeritus Floyd Williams. When another speaker's flight was delayed, Rob also came off the bench to pinch-speak about his work with session co-organizer Jesse Ratzkin on Nondegeneracy of CMC Surfaces and Regularity of the CMC Classifying Map, a talk that he expanded upon at the Valley Geometry Seminar six days later.
Professor Richard S. Ellis was the main speaker at the International Seminar on Extreme Events in Complex Dynamics, held during the week October 23–27, 2006 at the Max Planck Institute for Physics of Complex Systems in Dresden, Germany. At the seminar Richard delivered an eight-hour lecture series entitled The Theory of Large Deviations and Applications to Statistical Mechanics.
Professor Franz Pedit participated in the biennial Geometrie Tagung at the Mathematisches Forschungsinstitut Oberwolfach in October 2006.
Visiting Assistant Professor Ralf Schiffler attended the International Conference on Representations of Algebras and Related Topics held at Northeastern University on October 6–7, 2006. He gave a talk at the conference entitled Geometric Realizations of Cluster Categories.
Visiting Assistant Professor Hao Wu gave a talk on October 27, 2006 in the Topology and Geometry Seminar at University of Wisconsin-Madison. His talk was entitled Transversal Knots and Khovanov-Rozansky Cohomology. He also gave a 20-minute condensed version of the same talk in the special session on Floer Methods in Low-dimensional Topology during the Fall 2006 Western Section Meeting of the American Mathematical Society held in Salt Lake City, UT. In June 2006 he gave a talk entitled Legendrian Knots and the Spanning Tree Model of the Khovanov Homology at the Park City Mathematical Institute.
### October 2006
Professor Farshid Hajir and graduate student Mairead Greene attended the Quebec-Maine Number Theory Conference, which took place in Quebec City on September 30 and October 1. Mairead's contributed talk was entitled On the Index of Cyclotomic Units, and Farshid gave a plenary address on Algebraic Properties of Some Hypergeometric Polynomials. John Cullinan, who obtained his Ph.D. at UMass Amherst under the direction of Professor Siman Wong and is now a visiting assistant professor at Bard College, also gave a lecture at the conference. His lecture was entitled Divisibility Properties of the Torsion Subgroup of an Abelian Variety.
Professor Panos Kevrekidis reports on the following activities.
In April 2006 he presented a colloquium jointly sponsored by the Department of Mathematics and the Department of Chemical Engineering at Worcester Polytechnic Institute. His talk was entitled Discrete Solitary Waves and Applications.
In July 2006 Panos organized jointly with Mason Porter a mini-symposium entitled Analysis, Computation, and Experiments in Bose-Einstein Condensates at the annual SIAM meeting in Boston, MA.
In September 2006, Panos attended the SIAM meeting on Nonlinear Waves in Seattle, WA along with Professor Nate Whitaker and Visiting Assistant Professors Adrián Espínola-Rocha and Hadi Susanto. At the same meeting, Professor Nate Whitaker and he co-organized a session on Analysis, Modeling, and Simulation of Biological Systems. In addition, Panos gave an invited talk at the mini-symposium organized by J. Yang and T. Lakoba and entitled Advances in Analytical and Numerical Techniques for Nonlinear Waves. The talk was entitled Solitary Waves in the Presence of Spatial or Temporal Periodicity: Some Case Examples.
In September 2006 Panos also attended the conference SoliQuantum: Solitons and Nonlinear Phenomena in Degenerate Quantum Gases, which took place in Cuenca, Spain. He also presented an invited talk entitled Solitons Under Temporal or Spatial Periodicities at that meeting.
Finally, Panos's research work with a Caltech group consisting of Martin Centurion, Mason A. Porter, and Demetri Psaltis has attracted worldwide attention by science and technology news sources. The topic of the research is the first experimental realization of the theoretical concept of nonlinearity management. In particular, their paper published in Physical Review Letters, Volume 97, No. 3: 033903 has been featured in Physical Review Focus http://focus.aps.org/story/v18/st1, a Caltech press release http://pr.caltech.edu/media/Press_Releases/PR12881.html, and numerous other websites including PhysOrg.com, Science Daily, PhysLink.com, Science News Daily, What's Next in Science & Technology, Pasadena Independent, Softpedia, and Technology Horizons. Detailed links can be found at the URL http://www.its.caltech.edu/~mason/research/#optics. Two more articles on this topic are about to appear in Photonics Spectra in October 2006 and in Caltech's research quarterly, Engineering and Science.
Professor Rob Kusner recently participated in the biennial Geometrie Tagung at the Mathematisches Forschungsinstitut Oberwolfach, using the opportunity to continue a long-standing collaboration with colleagues from Berlin and Darmstadt, Germany.
Visiting Assistant Professor Ralf Schiffler gave a sixty-minute talk at the International Meeting on Representation Theory of Algebras, which took place in Sherbrooke, Canada on September 22–24. The title of his talk was Geometric Realizations of Cluster Categories. He also participated in a Meeting on Homology and Deformations in Algebra, Geometry and Representations at the CIRM in Luminy, France on September 25–29, where he gave a thirty-minute talk entitled Les Catégories Amassées et les Algèbres Répliquées.
### September 2006
Professor Erin Conlon organized an invited session entitled Statistical Methods in Genetics and Public Health for the International Chinese Statistical Association 2006 Applied Statistics Symposium, which was held during the period June 14–17, 2006 in Storrs, Connecticut.
During June 2006 Professor Murray Eisenberg attended the 8th International Mathematica Symposium in Avignon, France, where he gave a talk entitled Visualizing Complex Functions with the Cardano3 Application. This talk was based upon joint work with David J. M. Park, Jr.
On June 27, 2006 Professor Richard S. Ellis gave a talk in the Seminar in Probability and Stochastic Processes at the Technion–Israel Institute of Technology in Haifa, Israel. The title of his talk was Double-Chai (18×2) Limit Theorems for Sums of Dependent Random Variables Occurring in Statistical Mechanics.
Professor Paul Gunnells participated in the workshop Multiple Dirichlet Series that took place during the period July 8–16, 2006 at Stanford University. He also gave a short course of four lectures on the cohomology of arithmetic groups at the MSRI Summer Graduate Workshop on Computing with Modular Forms, which took place during the period July 31 – August 11, 2006 at the Mathematical Sciences Research Institute in Berkeley, California.
Professors Paul Gunnells, Hans Johnston, Markos Katsoulakis, Panayotis Kevrekidis, and Bruce Turkington were awarded an $84,000 NSF SCREMS grant (Scientific Computing Research Environments for the Mathematical Sciences). The grant will be used to purchase a Beowulf computer cluster to support computationally intensive research in the mathematical sciences. The cluster will initially be used for research in several areas, including the following: computational investigation of cohomology of arithmetic groups and Kazhdan-Lusztig cells in Coxeter groups; investigation of heat transfer and turbulent shear flows in viscous incompressible fluids via direct numerical simulations; development of multiscale computational methods for hybrid deterministic/stochastic systems; simulation of nonlinear multi-dimensional phenomena in optics and condensed matter physics; and development and testing of novel closure strategies using equilibrium and nonequilibrium statistical mechanics. These projects will also have a significant impact on the education and training of both students and young researchers in the mathematical sciences at UMass Amherst. As such, the equipment in this grant is part of a continuing effort of the department to build its computational program and to bring the frontier of research in mathematics to all levels of university education.

Along with three co-authors, Professor Panayotis Kevrekidis published an article in the July 21, 2006 issue of Physical Review Letters, which is the premier journal of the American Physical Society. Entitled Nonlinearity Management in Optics: Experiment, Theory, and Simulation, this article was chosen to be a focus article by the journal. It is featured on the website Physical Review Focus http://focus.aps.org/story/v18/st1.
The new design for the International Mathematical Union, unveiled during August 2006 at the International Congress of Mathematicians in Madrid, is based on the example of Professor Rob Kusner of an optimal configuration of the Borromean rings. This optimal configuration appeared in a paper published in Inventiones Mathematicae several years ago and co-written with Jason Cantarella and John Sullivan; it was elaborated upon in a recent paper that Rob wrote with Jason Cantarella, Joe Fu, John Sullivan, and Nancy Wrinkle. This recent paper will soon appear in the journal Geometry and Topology.

Professor Franz Pedit co-organized the London Mathematical Society Durham Symposium on Methods of Integrable Systems in Geometry, which was held during the period August 11–21, 2006. He also gave a lecture at the symposium in honor of T. J. Willmore (1919–2005) on the history and developments of the so-called Willmore Conjecture. Professor Willmore was the long time chairman of the Mathematical Institute at the University of Durham.

Visiting Assistant Professor Ralf Schiffler gave a talk at the CBMS Conference on Cluster Algebras at North Carolina State University, which was held during the period June 13–16, 2006 in Raleigh. The title of his talk was Introduction to Cluster Categories. He also spent the period June 25 – July 1, 2006 at the University of Sherbrooke, where he continued his joint research with I. Assem and T. Bruestle.

Professor Emeritus Floyd Williams was one of hundreds of speakers at the 11th Marcel Grossmann Meeting on General Relativity held at the Free University in Berlin, Germany during the period July 23–29, 2006. Floyd presented a 10-minute abstract entitled A Non-linear Schrödinger-type Formulation of FLRW Scalar Field Cosmology.

### May 2006

Professor Wei-Min Chen was recently awarded a three-year research grant of $105,540 from the National Science Foundation.
Professor Paul Gunnells authored the cover story in the May 2006 issue of the Notices of the American Mathematical Society. Entitled Cells in Coxeter Groups, the article highlights a series of beautiful graphics, which Paul created.
During the period April 5–8, 2006 Professor Emeritus Jim Humphreys visited the University of South Alabama in Mobile, where he gave a colloquium lecture as well as a seminar talk. Several UMass Ph.D. students now teach there, including Jim's former student Cornelius Pillen.
Professor Emeritus Aroldo Kaplan, currently a Researcher for Argentina's Science Agency, has joined the Mathematics Division of the International Center for Theoretical Physics in Trieste, Italy as Senior Associate.
In the March 2006 News Briefs the following item appeared concerning some research carried out by Professor Markos Katsoulakis and collaborators.
Professor Katsoulakis had an article accepted for publication in the top journal Nature Materials. Entitled Mechanistic Principles of Nanoparticle Evolution to Zeolite Crystals, the article was written in collaboration with chemical and materials scientists from the University of Minnesota and the industry.
In April the National Science Foundation recognized the importance of this research by featuring it on the main NSF web page in an article entitled Crystal Sieves, Born Anew: Hard Data Resolves Decades-old Mystery of How Certain Zeolites Form. The research was supported by a number of NSF grants and by the NSF National Nanotechnology Infrastructure Network.
Professor William Meeks was recently awarded a Guggenheim Fellowship for the fall semester of 2006. Professor Meeks was one of 187 out of approximately 3,000 applicants to receive this year's prestigious award from the John Simon Guggenheim Memorial Foundation.
Professor Gregory Pearlstein, formerly at the Institute for Advanced Study and currently a visiting professor at Duke University, has accepted a tenure-track position at Michigan State University. Gregory was the last student at UMass whose Ph.D. dissertation Professor Emeritus Aroldo Kaplan directed.
A number of current and former UMass people participated in the Spring Northeastern Sectional Meeting of the American Mathematical Society held April 22–23, 2006 at the University of New Hampshire in Durham. Several special sessions were organized by faculty members at UMass, including a special session on algebraic groups organized by Professors Eric Sommers and George McNinch. At that session, the following faculty members gave talks:
Professor Tom Braden, Semi-infinite Moment Graphs
Professor Emeritus Jim Humphreys, Tilting Modules for Semisimple Groups in Characteristic p
Professor Ivan Mirkovic, A t-Structure on Coherent Sheaves on Cotangent Bundle of a Flag Variety
Visiting Assistant Professor Ralf Schiffler, Cluster-tilted Algebras
In addition, Professor Paul Gunnells and Farshid Hajir organized a special session on arithmetic geometry and modular forms, and Professors Weimin Chen, Michael Sullivan, and Hao Wu organized a special session on symplectic and contact geometry. A detailed program listing is available online.
Professor Emeritus Floyd Williams was one of eleven invited plenary speakers at the Fifth International Conference on Mathematical Methods in Physics, held during the period April 24–28, 2006 in Rio de Janeiro, Brazil. The title of his one-hour lecture was Remarks on the BTZ Instanton with Conical Singularity.
http://mathoverflow.net/questions/36524/gammae-pi-2-zetae-4-pi | # Gamma(e)=Pi/2,Zeta(e)=4/Pi ? [closed]
I find that Gamma(e) is close to Pi/2 and Zeta(e) is close to 4/Pi. So I have a question:
$\Gamma (e) = \pi /2$
$\zeta (e) = 4/\pi$
Is it true in fact?
## closed as off topic by Gjergji Zaimi, Robin Chapman, Loop Space, S. Carnahan♦ Aug 24 '10 at 8:14
Questions on MathOverflow are expected to relate to research level mathematics within the scope defined by the community. Consider editing the question or leaving comments for improvement if you believe the question can be reworded to fit within the scope. Read more about reopening questions here. If this question can be reworded to fit the rules in the help center, please edit the question.
Maple says $\Gamma(e)=1.567468255$, $\pi/2=1.570796327$, $\zeta(e)=1.269009604$, $4/\pi=1.273239544$.
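The same check can be reproduced with only the Python standard library. Gamma is built in; since the stdlib has no zeta function, ζ(e) is summed directly from its defining series (the tail after N terms is of order N^(1−e)/(e−1), negligible here).

```python
import math

# Gamma is in the standard library; zeta is not, so sum the defining series
# zeta(s) = sum_{n>=1} n^(-s) directly (converges since e > 1).
gamma_e = math.gamma(math.e)
zeta_e = sum(n ** -math.e for n in range(1, 200_000))  # accurate to ~1e-9

print(f"Gamma(e) = {gamma_e:.9f}   pi/2 = {math.pi / 2:.9f}")
print(f"zeta(e)  = {zeta_e:.9f}   4/pi = {4 / math.pi:.9f}")

# Both "identities" already fail in the third decimal place:
print(abs(gamma_e - math.pi / 2))  # ~0.0033
print(abs(zeta_e - 4 / math.pi))   # ~0.0042
```

So the coincidences are near-misses only, in agreement with the Maple values quoted above.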
https://physics.stackexchange.com/questions/445760/ideal-gas-and-moveable-pistons | # Ideal gas and moveable pistons
Suppose there is an ideal, monatomic gas contained in a cylinder with a moveable piston and you bring that system (system = gas only) through some process. If that process changes the volume (or temperature) of the gas, the pressure remains constant while the temperature (or volume) changes.
Why does the use of a heavy, moveable piston ensure that any gas process will be isobaric?
I know that it is the case, but do not understand why. Is there a qualitative, microscopic explanation (modeling the gas as particles colliding with the walls of the container and with the piston) that can make sense of this?
• It's not necessarily isobaric throughout the entire process although, in the end, the amount of work done (the change in potential energy of the piston) will be the same as if it were isobaric. – Chet Miller Dec 7 '18 at 13:02
• @ChesterMiller Does isobaric always refer to the external pressure exerted on the gas? Although the work done is always based on the external pressure, can we not differentiate the external pressure, which is isobaric, from the pressure of the gas, which may or may not be isobaric (pressure gradients within) depending on whether or not the process is quasi-static? – Bob D Dec 7 '18 at 21:51
• @Bob D For an irreversible expansion, the gas does not satisfy the ideal gas law because viscous stresses (which depend on the rate of volume change) also contribute to the force per unit area exerted by the gas on the inside face of the piston. So we can't use the ideal gas law to establish the gas force per unit area at the inside piston face where the displacement occurs and the actual work is being done. However, for a massless frictionless piston, it will always be equal to the external pressure, which is constant. (Continued) – Chet Miller Dec 7 '18 at 22:56
• @Bob D If the piston has mass, the force the gas exerts must also accelerate the piston in any irreversible process. However, eventually, the motion of the piston will be damped out by viscous stresses in the gas. But, during part of the expansion, the gas actually does more work than just elevating the piston and pushing back the atmosphere. But, once the piston motion has been damped out, the work that the gas has done is just that required to raise the piston and push back the atmosphere. – Chet Miller Dec 7 '18 at 23:00
I think the Kinetic Theory of Gases is a good intro.
Pressure is due to random motion of constituent particles. Moving walls "strike" the particles, increasing their energy. Changing the temperature adds or removes energy as well. Particles bounce off of the boundaries, changing their momentum to be equal and opposite. Pressure is related to force, and force is change in momentum over time. The change in direction is our change in momentum. The relevant time scale is how long it takes a particle to leave one wall, bounce off the opposite wall, and then return. So in the expression $$F=\Delta p/\Delta t$$, $$\Delta p$$ is not directly affected by the change in length, but $$\Delta t$$ is, and this changes the force and therefore the pressure.
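To make the Δt argument concrete, here is a toy one-particle calculation (all numbers are made-up illustrative values, not from the answer): a particle of mass m bouncing elastically between two walls a distance L apart exerts an average force of mv²/L on each wall.

```python
m = 6.6e-27  # kg, roughly one helium atom (illustrative value)
v = 1.3e3    # m/s, a typical thermal speed (illustrative value)
L = 0.1      # m, distance between the two opposite walls

dp = 2 * m * v   # momentum change per elastic bounce off one wall
dt = 2 * L / v   # round-trip time between successive hits on that wall
F = dp / dt      # average force on the wall, equal to m*v**2/L

# Halving L halves dt and so doubles the average force: closer walls mean
# more frequent collisions, i.e. higher pressure at a fixed particle speed.
print(F)
```

This is exactly the F = Δp/Δt bookkeeping above: Δp depends only on the speed, while Δt carries the dependence on container size.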
The random motion of the particles can change because the moving walls do work on them. It can also change because of temperature. If the changes to the motion due to work and temperature cancel out the right way, pressure can be preserved.
The Ideal Gas Law gives some idea of how that happens:
$$PV=NkT$$.
P is pressure, V is volume, N is number of particles, k is Boltzman's constant, and T is absolute temperature in Kelvin.
Taking differentials: $$VdP+PdV=kT\,dN+Nk\,dT$$
We are not adding particles so $$dN=0$$. We are holding pressure constant, so $$dP=0$$. We end up with:
$$PdV=NkdT$$
So constant pressure requires a balancing act between changes in volume and changes in temperature, as the kinetic theory of gases implies.
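A quick numeric check of that balancing act, with arbitrary illustrative values of N, T, and V: if the volume grows in proportion to the temperature, P = NkT/V is unchanged.

```python
k = 1.380649e-23      # J/K, Boltzmann's constant
N = 1e23              # particle count (illustrative value)
T1, V1 = 300.0, 1e-3  # K, m^3 (illustrative values)

P = N * k * T1 / V1   # initial pressure from P = NkT/V

T2 = 450.0
V2 = V1 * T2 / T1     # volume scales with temperature...
P2 = N * k * T2 / V2  # ...so the pressure comes out the same
print(P, P2)          # two identical pressures (up to rounding)
```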
You do not really need a microscopic explanation in this case (although Romero's answer is nice). Simply, any excess in pressure in your system can be accommodated by pushing the piston.
The pressure acting on your system at the beginning is the total force $$Mg$$ (weight of the piston of mass $$M$$) divided by the surface area $$S$$ (area of the cylinder base), plus the atmospheric pressure $$p_0$$ (note that here the weight of the piston is irrelevant; it only changes the total pressure, not the principle!).
In the final state, the pressure will be... exactly the same! That the gas expanded does not change the pressure, as $$p_0$$ and $$Mg$$ do not change (whereas if the piston were fixed, an internal pressure would build up against the immovable piston). In our case, at equilibrium, the pressure of the gas will balance the weight of the piston (and $$p_0$$), so the two pressures are equal. If the pressure of the gas then slightly increases, because of $$PV=nRT$$ (see Romero's answer), such an increase can be vented through a small expansion, which brings the pressure back to normal (i.e. balanced with the piston). The same holds for a decrease of pressure.
What happens in between the initial and final state depends on the transformation. In a violent explosion the process would not be isobaric I guess. But if you evolve slow enough (quasistatically) you can assume at every instant of time the piston and the gas balance out.
Why does the use of a heavy, moveable piston ensure that any gas process will be isobaric?
The piston does not have to be “heavy” (a relative term, anyway). It simply has to have mass. If it has mass, $$m$$, it exerts a downward force of $$mg$$. If that force is uniformly distributed over a surface A, the force per unit area will be $$\frac{mg}{A}$$. If this weight is placed on top of a gas, the force per unit area is called pressure. If the piston/cylinder is surrounded by the atmosphere, the total external pressure is $$\frac{mg}{A}$$ + 1 atm.
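For concreteness, a small calculation of that constant external pressure with made-up piston numbers (the mass and area here are illustrative, not from the question):

```python
g = 9.81        # m/s^2, gravitational acceleration
atm = 101325.0  # Pa, 1 standard atmosphere
m = 10.0        # kg, piston mass (made-up value)
A = 0.01        # m^2, piston face area (made-up value)

# Total external pressure on the gas: piston weight per area plus atmosphere.
P_ext = m * g / A + atm
print(P_ext)    # ~1.11e5 Pa, and it stays fixed for the whole process
```

Whatever the gas does inside, nothing in this expression changes, which is why the externally applied pressure is constant.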
A process is designated isobaric if the externally applied pressure is constant. For the piston that external pressure does not change, whether the piston is sitting on top of the gas in equilibrium, is accelerating due to a pressure differential, or is reversibly compressing or expanding the gas due to very slowly transferring heat out of or into the gas, respectively, with the surroundings.
Although the external pressure is constant, the pressure of the gas may not be constant except for reversible processes. For irreversible processes, temperature and/or pressure differentials may exist in the gas. Consequently, the ideal gas law would not apply to irreversible processes.
A mechanics of materials analogy to the above is a column subjected to a downward axial load (force). If the force is uniformly distributed over the cross-sectional area of the column, the downward force per unit area is called the normal compressive stress, $$σ_N$$. This external stress is constant, provided any axial deformation does not change the area over which the force acts. This is analogous to the piston surfaces and cylinder walls not expanding or contracting during the compression or expansion of the gas, respectively.
Hope this helps. | 2021-03-02 20:51:19 | {"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 21, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.7018029093742371, "perplexity": 347.7567775709281}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 20, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2021-10/segments/1614178364764.57/warc/CC-MAIN-20210302190916-20210302220916-00227.warc.gz"} |
https://www.splessons.com/lesson/simplification-problems/ | # Simplification Problems
#### Chapter 4
5 Steps - 3 Clicks
# Simplification Problems
### Introduction
Simplification Problems is based on the BODMAS rule, where
B → Brackets,
O → Of,
D → Division,
M → Multiplication,
A → Addition,
S → Subtraction.
### Methods
BODMAS Rule:
BODMAS is about simplifying an expression by first removing the brackets, in the order (), {}, []. Removal of brackets is followed by 'of' (powers, square roots, cube roots), then division, multiplication, addition, and subtraction, in that order, along with simplifications such as cancellation of numerator/denominator.
Example 1:
Simplify using BODMAS rule: 25 – 48 ÷ 6 + 12 × 2
Solution:
25 – 48 ÷ 6 + 12 × 2
= 25 – 8 + 12 × 2, (Simplifying ‘division’ 48 ÷ 6 = 8)
= 25 – 8 + 24, (Simplifying ‘multiplication’ 12 × 2 = 24)
= 17 + 24, (Simplifying ‘subtraction’ 25 – 8 = 17)
= 41, (Simplifying ‘addition’ 17 + 24 = 41)
Example 2:
Simplify using BODMAS rule: 78 – [5 + 3 of (25 – 2 × 10)]
Solution:
78 – [5 + 3 of (25 – 2 × 10)]
= 78 – [5 + 3 of (25 – 20)], (Simplifying ‘multiplication’ 2 × 10 = 20)
= 78 – [5 + 3 of 5], (Simplifying ‘subtraction’ 25 – 20 = 5)
= 78 – [5 + 3 × 5], (Simplifying ‘of’)
= 78 – [5 + 15], (Simplifying ‘multiplication’ 3 × 5 = 15)
= 78 – 20, (Simplifying ‘addition’ 5 + 15 = 20)
= 58, (Simplifying ‘subtraction’ 78 – 20 = 58)
Example 3:
Simplify using BODMAS rule: 52 – 4 of (17 – 12) + 4 × 7
Solution:
52 – 4 of (17 – 12) + 4 × 7
= 52 – 4 of 5 + 4 × 7, (Simplifying ‘parenthesis’ 17 – 12 = 5)
= 52 – 4 × 5 + 4 × 7, (Simplifying ‘of’)
= 52 – 20 + 4 × 7, (Simplifying ‘multiplication’ 4 × 5 = 20)
= 52 – 20 + 28, (Simplifying ‘multiplication’ 4 × 7 = 28)
= 32 + 28, (Simplifying ‘subtraction’ 52 – 20 = 32)
= 60, (Simplifying ‘addition’ 32 + 28 = 60)
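As a quick sanity check (not part of the original lesson), the three worked examples can be evaluated in Python, whose operator precedence follows the same division/multiplication-before-addition/subtraction ordering; 'of' is written as ordinary multiplication:

```python
e1 = 25 - 48 / 6 + 12 * 2           # Example 1
e2 = 78 - (5 + 3 * (25 - 2 * 10))   # Example 2
e3 = 52 - 4 * (17 - 12) + 4 * 7     # Example 3
print(e1, e2, e3)  # 41.0 58 60
```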
Modulus of a real number:
In many engineering calculations you will come across the symbol “| |”. This is known as the modulus.
The modulus of a number is its absolute size. That is, we disregard any sign it might have.
Examples:
• The modulus of −8 is simply 8.
• The modulus of −$$\frac{1}{2}$$ is $$\frac{1}{2}$$.
• The modulus of 17 is simply 17.
• The modulus of 0 is 0.
So, the modulus of a positive number is simply the number.
The modulus of a negative number is found by ignoring the minus sign.
The modulus of a number is denoted by writing vertical lines around the number.
This observation allows us to define the modulus of a number quite concisely in the following way.
$| a | = \begin{cases} a & \quad \text{if } a \geq 0\\ -a & \quad \text{if } a < 0 \end{cases}$
Examples:
1. | 9 | = 9
2. | − 11 | = 11
3. | 0.25 | = 0.25
4. | − 3.7 | = 3.7
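In code, the modulus is just the absolute value; here is a minimal sketch of the piecewise definition above, checked against Python's built-in abs() (illustrative only):

```python
def modulus(a):
    # |a| = a if a >= 0, otherwise -a
    return a if a >= 0 else -a

examples = [9, -11, 0.25, -3.7, 0]
print([modulus(a) for a in examples])  # [9, 11, 0.25, 3.7, 0]
assert all(modulus(a) == abs(a) for a in examples)
```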
Vinculum (or Bar):
When an expression contains a vinculum (a horizontal bar over part of the expression), before applying the ‘BODMAS’ rule, we simplify the expression under the vinculum.
Example 1:
Simplify: $$78 \,- [24 \,- \{16 \,- (5 \,- \overline{4 \,- 1})\}]$$
Solution:
$$78 \,- [24 \,- \{16 \,- (5 \,- \overline{4 \,- 1})\}]$$
= 78 – [24 – {16 – (5 – 3)}] (Removing vinculum)
= 78 -[24 – {16 – 2}] (Removing parentheses)
= 78 – [24 – 14] (Removing braces)
= 78 – 10
= 68.
Example 2:
Simplify: $$197 \,- [1/9 \{42 \,+ (56 \,- \overline{8 \,+ 9})\} \,+ 108]$$
Solution:
$$197 \,- [1/9 \{42 \,+ (56 \,- \overline{8 \,+ 9})\} \,+ 108]$$
= 197 – [1/9 {42 + (56 – 17)} + 108] (Removing vinculum)
= 197 – [1/9 {42 + 39} + 108] (Removing parentheses)
= 197 – [(81/9) + 108] (Removing braces)
= 197 – [9 + 108]
= 197 – 117
= 80
Example 3:
Simplify: $$95 \,- [144 \,÷ (12 \,\times 12) \,- (-4) \,- \{3 \,- \overline{17 \,- 10}\}]$$
Solution:
$$95 \,- [144 \,÷ (12 \,\times 12) \,- (-4) \,- \{3 \,- \overline{17 \,- 10}\}]$$
= 95 – [144 ÷ 144 – (-4) – {3 – 7}] (Removing vinculum: 17 – 10 = 7, and 12 × 12 = 144)
= 95 – [1 – (-4) – (-4)] (Performing division; {3 – 7} = -4)
= 95 – [1 + 4 + 4]
= 95 – 9
= 86
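These vinculum examples can be double-checked by evaluating the expressions once the bars have been replaced by parentheses (an illustrative check, not part of the lesson):

```python
v1 = 78 - (24 - (16 - (5 - (4 - 1))))                 # Example 1
v2 = 197 - ((42 + (56 - (8 + 9))) / 9 + 108)          # Example 2
v3 = 95 - (144 / (12 * 12) - (-4) - (3 - (17 - 10)))  # Example 3
print(v1, v2, v3)  # 68 80.0 86.0
```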
### Samples
1. Simplify a-[a-(a+b)-{a-(a-b+a)}+2b]?
Solution:
Given that a-[a-(a+b)-{a-(a-b+a)}+2b]
⇒a-[a-(a+b)-{a-a+b-a}+2b]
⇒a-[a-(a+b)-{b-a}+2b]
⇒a-[a-(a+b)-b+a+2b]
⇒a-[a-a-b-b+a+2b]
⇒a-[a]
⇒a-a = 0
Therefore a-[a-(a+b)-{a-(a-b+a)}+2b] = 0
2. Find the value of $$x$$ if $$(\frac{12.24 ÷ x}{3.2 × 0.2})$$ = 2
Solution:
Given expression is $$(\frac{12.24 ÷ x}{3.2 × 0.2})$$ = 2
⇒$$\frac{12.24}{x}$$ = 2 x 3.2 x 0.2
⇒$$x$$ = $$\frac{12.24}{1.28}$$
⇒$$x$$ = 9.5625
Hence, the value of $$x$$ = 9.5625
3. Find the value of $$\sqrt{30} × \sqrt{10} = ?$$
Solution:
Given that $$\sqrt{30} × \sqrt{10} = ?$$
Consider $$\sqrt{30} × \sqrt{10}$$
⇒$$\sqrt{{30} × {10}}$$
⇒$$\sqrt{6 × 5 × 5 ×2}$$
⇒5$$\sqrt{6 × 2}$$
⇒5$$\sqrt{12}$$
⇒5$$\sqrt{4 × 3}$$
⇒5 × 2$$\sqrt{3}$$
⇒10$$\sqrt{3}$$
Therefore the value of $$\sqrt{30} × \sqrt{10}$$ = 10$$\sqrt{3}$$.
4. Simplify 6$$\frac{2}{9}$$ + 2$$\frac{7}{9}$$ – 4$$\frac{3}{11}$$ + 1$$\frac{3}{11}$$?
Solution:
Given that
6$$\frac{2}{9}$$ + 2$$\frac{7}{9}$$ – 4$$\frac{3}{11}$$ + 1$$\frac{3}{11}$$
⇒$$\frac{56}{9}$$ + $$\frac{25}{9}$$ – $$\frac{47}{11}$$ + $$\frac{14}{11}$$
Now L.C.M. of 9, 9, 11, 11 is 99
⇒$$\frac{616 + 275 – 423 + 126}{99}$$
⇒$$\frac{1017 – 423}{99}$$
⇒$$\frac{594}{99}$$
⇒6
Therefore 6$$\frac{2}{9}$$ + 2$$\frac{7}{9}$$ – 4$$\frac{3}{11}$$ + 1$$\frac{3}{11}$$ = 6.
5. If $$\sqrt{a}$$ = 2b then find the value of $$\frac{b^2}{a}$$?
Solution:
Given
$$\sqrt{a}$$ = 2b
Squaring both sides,
⇒ a = 4$$b^2$$
Now consider $$\frac{b^2}{a}$$ = $$\frac{b^2}{4b^2}$$
⇒$$\frac{1}{4}$$ or 0.25
∴ The value of $$\frac{b^2}{a}$$ = $$\frac{1}{4}$$ or 0.25
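An illustrative check of samples 3, 4, and 5 (my own, using exact fractions where possible; the concrete value b = 3 in the last check is an arbitrary choice):

```python
import math
from fractions import Fraction

# Sample 3: sqrt(30) * sqrt(10) equals 10 * sqrt(3)
assert math.isclose(math.sqrt(30) * math.sqrt(10), 10 * math.sqrt(3))

# Sample 4: 6 2/9 + 2 7/9 - 4 3/11 + 1 3/11
total = Fraction(56, 9) + Fraction(25, 9) - Fraction(47, 11) + Fraction(14, 11)
print(total)  # 6

# Sample 5: if sqrt(a) = 2b then b^2/a = 1/4 for any nonzero b, e.g. b = 3
b = 3
a = (2 * b) ** 2
print(Fraction(b * b, a))  # 1/4
```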
6. Find the unknown value from 25% of 180 = ? ÷ 0.25
Solution:
Assume the unknown value as $$x$$
Given 25% of 180 = ? ÷ 0.25
Substitute $$x$$ i.e.
$$\frac{25}{100}$$ × 180 = $$x$$ × $$\frac{1}{0.25}$$
⇒$$x$$ = $$\frac{25 × 180 × 0.25}{100}$$
⇒$$x$$ = $$\frac{1125}{100}$$
⇒$$x$$ = 11.25
Therefore, the unknown value $$x$$ = 11.25 | 2020-02-24 18:17:33 | {"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 1, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.6967929601669312, "perplexity": 3517.6737100548353}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2020-10/segments/1581875145966.48/warc/CC-MAIN-20200224163216-20200224193216-00363.warc.gz"} |
https://yutsumura.com/dot-product-lengths-and-distances-of-complex-vectors/ | # Dot Product, Lengths, and Distances of Complex Vectors
## Problem 689
For this problem, use the complex vectors
$\mathbf{w}_1 = \begin{bmatrix} 1 + i \\ 1 - i \\ 0 \end{bmatrix} , \, \mathbf{w}_2 = \begin{bmatrix} -i \\ 0 \\ 2 - i \end{bmatrix} , \, \mathbf{w}_3 = \begin{bmatrix} 2+i \\ 1 - 3i \\ 2i \end{bmatrix} .$
Suppose $\mathbf{w}_4$ is another complex vector which is orthogonal to both $\mathbf{w}_2$ and $\mathbf{w}_3$, and satisfies $\mathbf{w}_1 \cdot \mathbf{w}_4 = 2i$ and $\| \mathbf{w}_4 \| = 3$.
Calculate the following expressions:
(a) $\mathbf{w}_1 \cdot \mathbf{w}_2$.
(b) $\mathbf{w}_1 \cdot \mathbf{w}_3$.
(c) $((2+i)\mathbf{w}_1 – (1+i)\mathbf{w}_2 ) \cdot \mathbf{w}_4$.
(d) $\| \mathbf{w}_1 \| , \| \mathbf{w}_2 \|$, and $\| \mathbf{w}_3 \|$.
(e) $\| 3 \mathbf{w}_4 \|$.
(f) What is the distance between $\mathbf{w}_2$ and $\mathbf{w}_3$?
## Solution.
### (a) $\mathbf{w}_1 \cdot \mathbf{w}_2$.
$\mathbf{w}_1 \cdot \mathbf{w}_2 = \begin{bmatrix} 1+i & 1-i & 0 \end{bmatrix} \begin{bmatrix} -i \\ 0 \\ 2-i \end{bmatrix} = (1+i)(-i) + 0 + 0 = 1 - i .$
### (b) $\mathbf{w}_1 \cdot \mathbf{w}_3$.
\begin{align*} \mathbf{w}_1 \cdot \mathbf{w}_3 &= \begin{bmatrix} 1+i & 1-i & 0 \end{bmatrix} \begin{bmatrix} 2+i \\ 1-3i \\ 2i \end{bmatrix} \\ &= (1+i)(2+i) + (1-i)(1-3i) + 0 \\ &= (1 + 3i) + (-2 - 4i) \\ &= -1 - i . \end{align*}
### (c) $((2+i)\mathbf{w}_1 – (1+i)\mathbf{w}_2 ) \cdot \mathbf{w}_4$.
\begin{align*} ((2+i)\mathbf{w}_1 - (1+i)\mathbf{w}_2 ) \cdot \mathbf{w}_4 &= (2+i)( \mathbf{w}_1 \cdot \mathbf{w}_4) - (1+i) ( \mathbf{w}_2 \cdot \mathbf{w}_4 ) \\
&= (2+i) ( 2i ) - (1+i)(0) \\
&= -2 + 4i . \end{align*}
Note that $\mathbf{w}_2 \cdot \mathbf{w}_4=0$ because these vectors are orthogonal.
### (d) $\| \mathbf{w}_1 \| , \| \mathbf{w}_2 \|$, and $\| \mathbf{w}_3 \|$.
For an arbitrary complex vector $\mathbf{v}$, its length is defined to be
$\| \mathbf{v} \| = \sqrt{ \overline{\mathbf{v}}^\trans \mathbf{v} } .$
Thus,
$\| \mathbf{w}_1 \| \, = \, \sqrt{ (1-i)(1+i) + (1+i)(1-i) + 0 } = \sqrt{ 2 + 2} = \sqrt{4} = 2 ,$ $\| \mathbf{w}_2 \| \, = \, \sqrt{ (i)(-i) + 0 + (2+i)(2-i) } = \sqrt{1 + 5} = \sqrt{6} ,$ $\| \mathbf{w}_3 \| \, = \, \sqrt{ (2-i)(2+i) + (1+3i)(1-3i) + (-2i)(2i) } = \sqrt{ 5 + 10 + 4} = \sqrt{19} .$
### (e) $\| 3 \mathbf{w}_4 \|$.
$\| 3 \mathbf{w}_4 \| = 3 \| \mathbf{w}_4 \| = 3\cdot 3=9$ .
### (f) What is the distance between $\mathbf{w}_2$ and $\mathbf{w}_3$?
The distance between these vectors is given by $\| \mathbf{w}_2 – \mathbf{w}_3 \|$. First we calculate this difference:
$\mathbf{w}_2 - \mathbf{w}_3 \, = \, \begin{bmatrix} -i \\ 0 \\ 2 - i \end{bmatrix} - \begin{bmatrix} 2+i \\ 1 - 3i \\ 2i \end{bmatrix} \, = \, \begin{bmatrix} -2 - 2i \\ -1 + 3i \\ 2 - 3i \end{bmatrix} .$
Now the length of the complex vector is defined to be
\begin{align*}
\| \mathbf{w}_2 - \mathbf{w}_3 \| &= \sqrt{ \left( \overline{ \mathbf{w}_2 - \mathbf{w}_3 } \right)^{\trans} \left( \mathbf{w}_2 - \mathbf{w}_3 \right) } \\[6pt]
&= \sqrt{ \begin{bmatrix} -2 + 2i & -1 - 3i & 2 + 3i \end{bmatrix} \begin{bmatrix} -2 - 2i \\ -1 + 3i \\ 2 - 3i \end{bmatrix} } \\[6pt]
&= \sqrt{ (-2+2i)(-2-2i) + (-1-3i)(-1+3i) + (2+3i)(2-3i) } \\[6pt]
&= \sqrt{ 8 + 10 + 13 } \\[6pt]
&= \sqrt{31} .
\end{align*}
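The arithmetic above can be double-checked with Python's built-in complex numbers (an illustrative sketch; note the problem's convention: no conjugation in the dot product, but lengths use the conjugate transpose):

```python
import math

w1 = [1 + 1j, 1 - 1j, 0]
w2 = [-1j, 0, 2 - 1j]
w3 = [2 + 1j, 1 - 3j, 2j]

def dot(u, v):
    # dot product without conjugation, matching the problem's convention
    return sum(a * b for a, b in zip(u, v))

def norm(v):
    # length via the conjugate transpose, so the sum is real and nonnegative
    return math.sqrt(sum((a.conjugate() * a).real for a in v))

print(dot(w1, w2))  # (1-1j)
print(dot(w1, w3))  # (-1-1j)
diff = [a - b for a, b in zip(w2, w3)]
print(norm(w1), norm(diff) ** 2)  # 2.0 and approximately 31
```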
| 2018-04-20 16:41:45 | {"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 2, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 1.000008463859558, "perplexity": 334.42851356746894}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 5, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2018-17/segments/1524125944479.27/warc/CC-MAIN-20180420155332-20180420175332-00336.warc.gz"}
https://www.physicsforums.com/threads/inverse-laplace-partial-fractions-with-exponential.232883/ | # Inverse Laplace- Partial Fractions with exponential
ns5032
## Homework Statement
[e^(-2s)] / (s^2+s-2)
Find the inverse Laplace transform.
## The Attempt at a Solution
I know that I can factor the denominator into (s+2)(s-1). Then I tried to use partial fractions to split up the denominator, but I don't know how to do that with an exponential on the top. Thanks for any help!
EngageEngage
just write it as exp(whatever)*(1/whatever). Then do partial fractions to get exp(whatever)*(?/a + ?/b). You will see that the exponential will be easy to 'invert' back into the time domain as it corresponds to unit step functions (I believe).
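To make that concrete for the original problem (my own illustration, not from the thread): 1/((s+2)(s-1)) splits as (1/3)/(s-1) - (1/3)/(s+2), and by the shift theorem the factor e^(-2s) delays the result by 2 with a unit step:

```python
import math, random

# check the partial-fraction decomposition numerically at a few points
random.seed(0)
for _ in range(5):
    s = random.uniform(2.0, 10.0)        # stay away from the poles s = 1, -2
    lhs = 1 / ((s + 2) * (s - 1))
    rhs = (1 / 3) / (s - 1) - (1 / 3) / (s + 2)
    assert math.isclose(lhs, rhs)

# inverse transform via the shift theorem: u(t-2) * f(t-2),
# with f(t) = (exp(t) - exp(-2*t)) / 3
def f_inv(t):
    return 0.0 if t < 2 else (math.exp(t - 2) - math.exp(-2 * (t - 2))) / 3

print(f_inv(1.0), f_inv(2.0))  # 0.0 0.0  (nothing happens before t = 2)
```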
Shawj02
I'm stuck in the same boat, but trying to get the partial fraction for "(e^[-s] - e^[-2s])/[(s^2)(s+1)]"
I wasn't too sure what EngageEngage meant.
alchemist
I am having the same problems! I never knew there was any issue with partial fractions involving exponential components.
My question was to get the partial fraction of 3e^-2s/(s(s+5)), so I brought down the exponential function to get 3 different fractions with 1/e^2s, 1/s and 1/(s+5).
But it still doesn't work out. | 2023-01-28 09:59:54 | {"extraction_info": {"found_math": false, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.803276002407074, "perplexity": 622.1525052898912}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2023-06/segments/1674764499541.63/warc/CC-MAIN-20230128090359-20230128120359-00285.warc.gz"} |
https://stats.stackexchange.com/questions/128859/is-there-a-way-to-characterize-the-position-of-a-point-in-a-distribution-that-ta | # Is there a way to characterize the position of a point in a distribution that takes higher moments into account?
When summarizing a location in a distribution, we can take the mean into account by simply calculating $x-\mu$. We can take the standard deviation into account by calculating a Z score $(x-\mu)/\sigma$. Is there a typical way to take higher moments into account?
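For a concrete (made-up) illustration of the question: with a right-skewed sample, the z-score and the empirical CDF position P(X ≤ x) can disagree noticeably about where x "sits":

```python
import statistics

sample = [0.1, 0.2, 0.2, 0.3, 0.5, 0.9, 1.8, 4.0]  # right-skewed toy data
x = 0.9

mu = statistics.mean(sample)
sigma = statistics.stdev(sample)
z = (x - mu) / sigma                     # standardizes by mean and sd only

ecdf = sum(v <= x for v in sample) / len(sample)  # shape-aware position
print(round(z, 3), ecdf)  # z is slightly negative, yet x sits at the 75th percentile
```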
• I do not understand what you mean by "take into account". Given $\mu$ and $\sigma$, when you specify $z = (x-\mu)/\sigma$ you uniquely determine $x = \sigma z + \mu$. What more do you want? – whuber Dec 12 '14 at 16:16
• @whuber That's true, and you construct a similar expression given just $\mu$. But standardizing by calculating Z is often more useful that just using $\mu$. So my question is, is there an expression that standardizes not just by the standard deviation, but also by the skewness? Is there an expression that standardizes by the skewnews and kurtosis? Etc. – Thomas Johnson Dec 12 '14 at 16:26
• Sure: you can standardize by any two statistics that locate the distribution and provide a scale for it. (Although the skewness will not work--it tells you nothing at all about the scale--the cube root of the absolute third central moment will.) The possibilities are infinite. What you need to tell us is the why of your question: what is the purpose of doing this? That would give us information to recommend choices of those statistics. – whuber Dec 12 '14 at 18:11
• It's largely an exercise in feature engineering. I'm trying to create useful features for a classifier. I'm hoping to provide more information summarizing the location of the point in a (highly non-normal) empirical distribution, without overwhelming the classifier by providing tons of high-noise features. – Thomas Johnson Dec 12 '14 at 18:40
• You won't do it this way! I think you would be better served by asking the question you really have in mind, rather than proposing something that will not help you at all and asking for commentary on it. – whuber Dec 12 '14 at 19:47
## 1 Answer
I am not aware of such expressions containing higher moments. However, if your aim is to summarize by a single number how $x$ relates to the distribution, taking into account the shape of the distribution beyond mean and variance, I suggest reporting the Cumulative distribution function, that is, the probability $$P(X \leq x)$$ where $X$ is a random variable following your distribution. | 2019-11-13 01:39:34 | {"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 1, "x-ck12": 0, "texerror": 0, "math_score": 0.5448170900344849, "perplexity": 358.01566552842206}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 20, "end_threshold": 15, "enable": false}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2019-47/segments/1573496665976.26/warc/CC-MAIN-20191113012959-20191113040959-00009.warc.gz"} |
https://datascience.stackexchange.com/questions/25554/how-to-evaluate-sequence-to-sequence-models | # How to evaluate sequence to sequence models?
I wonder how to evaluate variable long sequence-to-sequence predictions? Let us say I have the following $Y$ and $\hat{Y}$
$Y = [["1", "2", "2"], ["3", "2", "2"], ["1", "3", "2", "2"]]$
$\hat{Y} = [["1", "3", "2"], ["3", "3"], ["1", "3", "2"]]$
Shall I use a binary comparison where any mismatch counts as zero and any full match as one? Or shall I calculate conventional accuracy by character-wise comparison?
My concern here is that, on one hand, if this is a numeric prediction then any digit mismatch spoils the whole number, so it doesn't really matter where the mismatch is; on the other hand it would be nice to know which digits tend to be miscalculated in order to find ways to improve the training sets.
Addition: the task is numeric OCR, so -- in contrast with a machine translation job where minor mistranslations are tolerable -- any digit mismatch could result in a significant business problem (different invoice sums, for example). Moreover, I'd like to know which individual digits tend to be misread more often, so I need a way to get a statistic for this as well.
• can you explain a bit more on the business case? because I think Association Mining can be used over here. can give you clear explanation if it suits your business problem. Dec 11, 2017 at 10:07
Regarding your concern, there is no reason for you to choose only one evaluation metric. If there are several values that give you different views of the performance of the system, then compute all of these values. The evaluation should depend on your specific use case, so the important thing is that the values correlate with a good or bad performance of the system in the real world problem.
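As an illustrative sketch (mine, not part of the original answer), both views can be computed for the sequences in the question: a strict sequence-level accuracy, and a Levenshtein-based error rate that also tells you how far off each prediction is:

```python
def levenshtein(ref, hyp):
    # classic dynamic-programming edit distance between two sequences
    prev = list(range(len(hyp) + 1))
    for i, r in enumerate(ref, 1):
        cur = [i]
        for j, h in enumerate(hyp, 1):
            cur.append(min(prev[j] + 1,              # deletion
                           cur[j - 1] + 1,           # insertion
                           prev[j - 1] + (r != h)))  # substitution
        prev = cur
    return prev[-1]

Y = [["1", "2", "2"], ["3", "2", "2"], ["1", "3", "2", "2"]]
Y_hat = [["1", "3", "2"], ["3", "3"], ["1", "3", "2"]]

exact = sum(r == h for r, h in zip(Y, Y_hat)) / len(Y)
cer = sum(levenshtein(r, h) for r, h in zip(Y, Y_hat)) / sum(len(r) for r in Y)
print(exact, cer)  # 0.0 0.4
```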
Even though I don't know exactly what your system is expected to do [written before question edit, see below for addition], maybe you could take as reference how performance is measured in speech and text recognition tasks. There you have a reference and a predicted sequence composed of characters and these characters form words. The length of the reference and the prediction is not necessarily the same. The performance is measured both at character and at word level (character error rate (CER) and word error rate (WER)). When measured at word level, even if only one character is wrong, the whole word is wrong. In both cases the main idea is to compute the levenshtein/edit distance (either at word or character level) between the reference and the prediction and then divide by the length of the reference. | 2022-12-05 16:13:50 | {"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.5790181159973145, "perplexity": 572.1068994536646}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2022-49/segments/1669446711017.45/warc/CC-MAIN-20221205132617-20221205162617-00076.warc.gz"} |
https://www.nonlin-processes-geophys.net/27/175/2020/ | Journal topic
Nonlin. Processes Geophys., 27, 175–185, 2020
https://doi.org/10.5194/npg-27-175-2020
Research article 08 Apr 2020
# Study of the fractality in a magnetohydrodynamic shell model forced by solar wind fluctuations
Macarena Domínguez1, Giuseppina Nigro2, Víctor Muñoz3, Vincenzo Carbone2, and Mario Riquelme1
• 1Departamento de Física, Facultad de Ciencias Físicas y Matemáticas, Universidad de Chile, 8370449 Santiago, Chile
• 2Dipartimento di Fisica, Universita della Calabria, 87036 Rende CS, Italy
• 3Departamento de Física, Facultad de Ciencias, Universidad de Chile, 7800003 Santiago, Chile
Correspondence: Macarena Domínguez (mdominguezv@ug.uchile.cl)
Abstract
The description of the relationship between interplanetary plasma and geomagnetic activity requires complex models. Drastically reducing the ambition of describing this detailed complex interaction, if we are interested only in the fractality properties of the time series of its characteristic parameters, a magnetohydrodynamic (MHD) shell model forced using solar wind data might provide a possible novel approach. In this paper we study the relation between the activity of the magnetic energy dissipation rate obtained in one such model, which may describe geomagnetic activity, and the fractal dimension of the forcing.
In different shell model simulations, the forcing is provided by the solution of a Langevin equation where a white noise is implemented. This forcing, however, has been shown to be unsuitable for describing the solar wind action on the model. Thus, we propose to consider the fluctuations of the product between the velocity and the magnetic field solar wind data as the noise in the Langevin equation, the solution of which provides the forcing in the magnetic field equation.
We compare the fractal dimension of the magnetic energy dissipation rate obtained, of the magnetic forcing term, and of the fluctuations of vbz, with the activity of the magnetic energy dissipation rate. We examine the dependence of these fractal dimensions on the solar cycle. We show that all measures of activity have a peak near solar maximum. Moreover, both the fractal dimension computed for the fluctuations of vbz time series and the fractal dimension of the magnetic forcing have a minimum near solar maximum. This suggests that the complexity of the noise term in the Langevin equation may have a strong effect on the activity of the magnetic energy dissipation rate.
1 Introduction
There are many investigations regarding the relation between interplanetary plasma parameters and the occurrence of geomagnetic events in the Earth's magnetosphere . Among these, and Kane (2005) show a decrease in the antiparallel geomagnetic field, Bs, before the occurrence of the minimum of Dst. While and introduce an energy balance equation where the Dst index and the rectified interplanetary electric field (dvBz) are related.
The study of the fractal dimension in various fields has contributed to understanding diverse phenomena, adding a new, interdisciplinary perspective to nonlinear systems. For example, this approach has been used to study seismicity, to describe the distribution of epicenter and hypocenters in a given geographical zone , or to consider the relationship between the fractal dimension of the spatial distributions of the aftershocks and the faults . It has also been used in the study of various catastrophic events such as seismic and epileptic shocks, where the fractality of the relevant time series has been analyzed to extract information on precursor activity . In music, musical pieces have been characterized through fractal dimensions . And in plasma physics, the use of fractal dimensions to understand plasma properties is becoming increasingly common .
Fractal dimensions can be calculated from either time series or spatial patterns. For instance, the fractal dimension of the time series of auroral electrojets (AEs), or from spatial data such as solar magnetograms, has shown interesting properties, being generally a non-integer value and less than the Euclidean dimension . Several studies have analyzed the relationship between the fractal and multifractal dimension with physical properties, which has provided a tool to predict events on the surface on the Sun (solar flares), the solar wind, and the Earth's magnetosphere .
There are many different methods to calculate the fractal and multifractal dimensions. In a previous work, we have studied the temporal evolution of solar and geomagnetic activity, by calculating a scatter-box-counting fractal dimension from solar magnetograms and Dst data . The fractal dimension of the Dst analysis decreases during magnetic storms, an effect that is consistently observed across several timescales, from individual storms to a complete solar cycle. Our results suggest that this definition of fractal dimension is an interesting proxy for complexity in the Sun–Earth system, not only for static data but also when the evolution of solar and geomagnetic activities are followed.
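As a minimal sketch of the kind of box-counting estimate discussed here (my own illustration; the grid construction, scales, and least-squares fit are assumptions of the example, not the authors' algorithm):

```python
import math

def box_counting_dimension(series, scales=(2, 4, 8, 16, 32)):
    """Estimate the box-counting dimension of the graph of a time series."""
    n = len(series)
    lo, hi = min(series), max(series)
    span = (hi - lo) or 1.0
    # normalize the (t, y) points into the unit square
    pts = [(i / (n - 1), (y - lo) / span) for i, y in enumerate(series)]
    logs = []
    for m in scales:  # m x m grid, i.e. box size 1/m
        boxes = {(min(int(x * m), m - 1), min(int(y * m), m - 1))
                 for x, y in pts}
        logs.append((math.log(m), math.log(len(boxes))))
    # slope of log N(eps) versus log(1/eps) by least squares
    xbar = sum(x for x, _ in logs) / len(logs)
    ybar = sum(y for _, y in logs) / len(logs)
    num = sum((x - xbar) * (y - ybar) for x, y in logs)
    den = sum((x - xbar) ** 2 for x, _ in logs)
    return num / den

line = [0.5 * i for i in range(1025)]  # a straight line should give ~1
print(round(box_counting_dimension(line), 2))  # 1.0
```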
Moreover, in , the authors show that the fractal dimension and the occurrence of the bursts in magnetic energy dissipation rate ϵb(t) computed in a magnetohydrodynamic (MHD) shell model integration have correlations similar to those observed in geomagnetic and solar wind data. In that work, the forcing terms of the MHD shell model are provided by the solution of the Langevin equation, where a white noise is employed. That forcing, previously adopted, shows stationary statistical properties, hence revealing its inadequacy to describe the effect of solar wind on the magnetospheric activity. In order to mimic the evolution of the magnetospheric forcing due to the solar wind, in the MHD shell model has been forced using magnetic and velocity field data measured in the solar wind. This latter work shows a peak in the activity of ϵb(t) near solar maximum, whereas the fractal dimension of the forcing magnetic field time series has a minimum near solar maximum.
Considering these results, in this paper we present an attempt to describe the complex interaction between solar wind and magnetosphere using a very simple model, where we employ vbz data to deduce a suitable forcing for the magnetic field evolution. In particular the fluctuations of vbz values inferred from solar wind data are introduced as the noise in the Langevin equation. Then, the solution of this latter equation provides a forcing that we introduce in the magnetic field equation. Thus, by using data that are related to the occurrence of geomagnetic activity, our aim is to investigate whether the statistical properties described in this model evolve because of the evolution in the statistical properties of the forcing term. In particular, in this paper we study whether there is a relationship between the fractality of the forcing, and the activity and the fractality of the dissipation.
The paper is organized as follows. In Sect. 2 we present the main features of the MHD shell model used to calculate the magnetic energy dissipation rate ϵb(t), as well as the method used to modify the forcing term of the model. In Sect. 3 we describe the method to calculate the fractal dimension of the fluctuation of vbz used as a noise term in the Langevin equation. In Sect. 4 we describe the method to calculate the fractal dimension of the magnetic forcing term, and the energy dissipation rate obtained from the shell model. In Sect. 5 we present the definitions of the activity parameters used to analyze the energy dissipation rate. In Sect. 6, the results obtained are presented and finally, in Sect. 7, our conclusions are discussed.
2 Shell model
In general, shell models allow the nonlinear dynamics of fluid systems to be dealt with, reproducing relevant features of MHD turbulence even for high Reynolds numbers, which involve a large computational cost in direct numerical simulations. This is done by means of a set of equations – a simplified version of the Navier–Stokes system – which greatly reduces the available degrees of freedom.
In this work, we use the MHD Gledzer–Ohkitani–Yamada (GOY) shell model, which has been shown to be adequate to describe the dynamics of the energy cascade in MHD turbulence, dynamo effect, statistics of solar flares, finite-time singularities in turbulent cascades, and to model the fractal features of a magnetized plasma.
This model is described in more detail in our previous works. Below we focus on the choice of forcing terms, which is relevant for the present study.
In the model, the wave-vector space ($k$ space) is divided into $N$ discrete shells of radius $k_n = k_0 2^n$ ($n = 0, 1, \dots, N$). Then, two complex dynamical variables $u_n(t)$ and $b_n(t)$, representing, respectively, velocity and magnetic field increments on an eddy scale $l \sim k_n^{-1}$, are assigned to each shell.
The following set of ordinary differential equations describes the dynamical behavior of the model:

$$\frac{\mathrm{d}u_n}{\mathrm{d}t} = -\nu k_n^2 u_n + i k_n \left\{ \left(u_{n+1}u_{n+2} - b_{n+1}b_{n+2}\right) - \frac{1}{4}\left(u_{n-1}u_{n+1} - b_{n-1}b_{n+1}\right) - \frac{1}{8}\left(u_{n-2}u_{n-1} - b_{n-2}b_{n-1}\right) \right\}^{*} + f_n, \tag{1}$$

$$\frac{\mathrm{d}b_n}{\mathrm{d}t} = -\eta k_n^2 b_n + \frac{i k_n}{6} \left\{ \left(u_{n+1}b_{n+2} - b_{n+1}u_{n+2}\right) + \left(u_{n-1}b_{n+1} - b_{n-1}u_{n+1}\right) + \left(u_{n-2}b_{n-1} - b_{n-2}u_{n-1}\right) \right\}^{*} + g_n. \tag{2}$$
Here, ν and η are, respectively, the kinematic viscosity and the resistivity; fn and gn are external forcing terms acting, respectively, on the velocity and magnetic fluctuations.
Initially, velocities in the second and fourth shells are set to complex random numbers, whereas the initial magnetic field fluctuations are set to zero, $b_n(t=0) = 0$.
Based on previous work, where a comprehensive analysis of the statistical properties of the shell model for various values of ν and η is carried out, we set $\nu = \eta = 10^{-4}$, as this is in the range where the model best reproduces the intermittent behavior observed in magnetized plasmas. Given the values of the dissipative coefficients ν and η, we take $N = 19$, also consistent with previous choices, a value which guarantees a nonlinear range sufficiently large to describe the system dynamics. We then numerically integrate the shell model Eqs. (1)–(2) and calculate the magnetic energy dissipation rate, defined as follows:
$$\epsilon_{\mathrm{b}}(t) = \eta \sum_{n=1}^{N} k_n^2 \left|b_n\right|^2. \tag{3}$$
Note that a dissipation rate for the velocity field can also be defined. However, this is not relevant to our model, as it would be related to heating, whereas there is no equation for temperature in our analysis. Magnetic storms, on the other hand, are related to magnetic dissipation rates.
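The right-hand sides of Eqs. (1)–(2) and the dissipation rate of Eq. (3) can be sketched numerically as follows. This is a minimal illustration, not the original simulation code: the values $\nu = \eta = 10^{-4}$ and $N = 19$ follow the text, but $k_0 = 1$ and the zero-padded boundary shells are assumptions of this sketch.

```python
import numpy as np

# Minimal sketch of the GOY MHD shell model, Eqs. (1)-(3).  nu, eta and N
# follow the text; k0 = 1 and zero-padded boundary shells are assumptions.
nu = eta = 1e-4
N = 19
k = 2.0 ** np.arange(N + 1)                     # k_n = k0 * 2^n, with k0 = 1

def rhs(u, b, f, g):
    """Right-hand sides of Eqs. (1)-(2); u, b, f, g are complex arrays of length N+1."""
    up, bp = np.pad(u, 2), np.pad(b, 2)         # pad so u[n+2], u[n-2] etc. exist
    n = np.arange(N + 1) + 2                    # shell indices in the padded arrays
    nl_u = ((up[n + 1] * up[n + 2] - bp[n + 1] * bp[n + 2])
            - 0.25 * (up[n - 1] * up[n + 1] - bp[n - 1] * bp[n + 1])
            - 0.125 * (up[n - 2] * up[n - 1] - bp[n - 2] * bp[n - 1]))
    nl_b = ((up[n + 1] * bp[n + 2] - bp[n + 1] * up[n + 2])
            + (up[n - 1] * bp[n + 1] - bp[n - 1] * up[n + 1])
            + (up[n - 2] * bp[n - 1] - bp[n - 2] * up[n - 1])) / 6.0
    du = -nu * k**2 * u + 1j * k * np.conj(nl_u) + f    # Eq. (1)
    db = -eta * k**2 * b + 1j * k * np.conj(nl_b) + g   # Eq. (2)
    return du, db

def magnetic_dissipation(b):
    """Magnetic energy dissipation rate, Eq. (3)."""
    return eta * np.sum(k[1:]**2 * np.abs(b[1:])**2)
```

With `b = 0` initially, the magnetic field only grows through the forcing term `g`, consistent with the initial condition described above.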
In previous work the forcing terms were obtained from the Langevin equation
$$\frac{\mathrm{d}\tilde{f}_n}{\mathrm{d}t} = -\frac{\tilde{f}_n}{\tau_0} + \mu(t), \tag{4}$$
where $\tilde{f}_n = f_n$ or $g_n$, $\tau_0$ is a correlation time, and $\mu$ is a Gaussian white noise of width σ. This provides a stochastic way to drive turbulence in the model. However, turbulence in space plasmas is not always subject to stationary drivers. Such is the case, for instance, of the Earth's magnetosphere. This system is driven by the solar wind, which has its own dynamics, on short timescales due to local events such as coronal mass ejections (CMEs), and on longer timescales such as the solar cycle. In this paper we deal with this property of the drivers by considering a non-stationary forcing of the shell model.
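The stationary forcing given by Eq. (4) can be generated with a simple Euler–Maruyama step, as sketched below; the values of $\tau_0$, σ, and the time step are illustrative choices, not those of the original simulation.

```python
import numpy as np

# Euler-Maruyama integration of the Langevin equation (4) with a Gaussian
# white noise of width sigma.  tau0, sigma and dt are illustrative values.
def langevin_forcing(n_steps, tau0=1.0, sigma=1.0, dt=1e-2, seed=0):
    rng = np.random.default_rng(seed)
    f = np.zeros(n_steps)
    for i in range(n_steps - 1):
        f[i + 1] = f[i] - (f[i] / tau0) * dt + sigma * np.sqrt(dt) * rng.standard_normal()
    return f
```

The result is an Ornstein–Uhlenbeck process with stationary statistics (variance close to $\sigma^2 \tau_0 / 2$), which is precisely the property that a non-stationary driver does not have.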
One possible way of characterizing the stationarity of the forcing given by Eq. (4) is by calculating its fractal dimension, which is a simple measure of the complexity of the time series. Following our previous works, a scatter plot is built from the time series, and then the box-counting fractal dimension of this plot is calculated and associated with the time series. When this method is applied to the output of Eq. (4) in various time windows, a value of D∼1.7 is obtained. Its independence from the time window used is a manifestation of the stationary character of the time series.
A first method to change the fractality of the forcing terms was presented in previous work. In that case, two complex series f1 and g1 were built from two scalars – the flow speed and the average magnetic field of the solar wind as obtained from OMNI (https://cdaweb.gsfc.nasa.gov/istp_public/, last access: 4 March 2020) – and used to force the first shell in Eqs. (1)–(2). In this way, it was shown that the activity of the resulting ϵb(t) time series has a peak near solar maximum. However, the magnetic field time series seems to be the most sensitive, because its fractal dimension correlates with the solar cycle much more strongly than the fractal dimension of the velocity field time series. In fact, the latter does not show any particular sensitivity to solar magnetic activity.
We now explore a second possibility to change the fractality of the forcing terms, namely, changing the method to calculate the stochastic term of the magnetic forcing, which corresponds to μ(t) in Eq. (4). The conventional method to solve this equation, as mentioned above, considers μ(t) as a Gaussian white noise.
Usually the forcing is applied only to the velocity equation, while in our previous works this forcing is considered for both the velocity and the magnetic field equations.
Here, we will preserve the Gaussian noise for the velocity field, while for the magnetic field forcing we will use the fluctuations in vbz,
$$\mu(t) = v \cdot b_z - \langle v \cdot b_z \rangle, \tag{5}$$
where v and bz are the velocity and the z-component of the magnetic field of the solar wind, respectively. This difference between the velocity and the magnetic field forcing is motivated by the fact that, as mentioned before, the velocity time series does not show any relation with solar magnetic activity.
The solar wind data used in this work are obtained from the OMNIWeb Plus data service (https://cdaweb.gsfc.nasa.gov/istp_public/, last access: 4 March 2020). We consider this source because OMNI is a compilation of near-Earth solar wind data obtained from several space missions (IMP 8, Geotail, Wind, and ACE). More specifically, we use data of v and bz at 1 AU with 1 min resolution. The coordinate system of the data is the geocentric solar ecliptic (GSE) system. Thus, the z axis is the projection of the axis of the Earth's magnetic dipole (positive to the north) on the plane perpendicular to the x axis (which points towards the Sun).
Given that the forcing term in Eq. (2) must be a complex number, a random phase φ is needed for each datum calculated from Eq. (4) in the magnetic case. Then,
$$f_{\mathrm{b}}(t) = \tilde{f}_{\mathrm{b}}(t)\, e^{2\pi i \varphi}, \tag{6}$$
where the amplitude $\tilde{f}_{\mathrm{b}}(t)$ corresponds to the solution of the Langevin equation using the modified μ(t), and $f_{\mathrm{b}}(t)$ is the forcing term used in the shell model code.
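The construction of Eqs. (4)–(6) can be sketched as follows. The arrays `v` and `bz` stand for the OMNI measurements; $\tau_0$, the unit time step, and the uniform distribution of the random phase are assumptions of this illustration.

```python
import numpy as np

# Sketch of the modified magnetic forcing: the noise term of the Langevin
# equation is replaced by the fluctuations of v*bz (Eq. 5), and a random
# phase makes the result complex (Eq. 6).  tau0, dt and the uniform phase
# distribution are assumptions of this illustration.
def magnetic_forcing(v, bz, tau0=1.0, dt=1.0, seed=0):
    rng = np.random.default_rng(seed)
    mu = v * bz - np.mean(v * bz)              # Eq. (5): fluctuations of v*bz
    amp = np.zeros(len(mu))
    for i in range(len(mu) - 1):               # Eq. (4) driven by mu(t)
        amp[i + 1] = amp[i] + dt * (-amp[i] / tau0 + mu[i])
    phase = rng.random(len(mu))                # random phase for each datum
    return amp * np.exp(2j * np.pi * phase)    # Eq. (6): complex forcing f_b(t)
```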
In order to account for the variability with solar activity, we generate 13 time series of the magnetic energy dissipation, one for each of the 13 years of the 23rd solar cycle (1996 to 2008). Once the time series are generated, we define four indices to measure the activity of the magnetic energy dissipation rate (ϵb) and analyze the relationship between these indices and the fractality of the data.
3 Box-counting dimension of the fluctuation of vbz
In this work, we use the same definition of the scatter plot box-counting fractal dimension as in our previous works. The fractal dimension of each time series of the fluctuations of the solar wind data (see Eq. 5) is estimated from its scatter diagram. If $\bar{\mu}^i$ is the $i$th datum in the series, and $\bar{N}$ is the total number of data, the scatter diagram is a plot of $\bar{\mu}^{1+(i+1)j}$ versus $\bar{\mu}^{1+i\cdot j}$, for $0 \le i \le (\bar{N}-1)/j$ and integer $j$.
Then, the scatter diagram is divided into square cells of a certain size ε, and we count the number $\bar{N}(\varepsilon)$ of cells which contain at least one point. Next, we consider several values of ε, and we find the range of ε where $\log \bar{N}(\varepsilon)$ scales linearly with log ε. If the slope of $\log \bar{N}(\varepsilon)$ versus $\log(1/\varepsilon)$ in this region is given by $D_j$, then in this region,
$$\bar{N}(\varepsilon) \propto \varepsilon^{-D_j}. \tag{7}$$
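The procedure of this section can be sketched in a few lines; the particular set of box sizes and the normalization of the scatter diagram to the unit square are choices of this illustration.

```python
import numpy as np

# Scatter-plot box-counting dimension (Eq. 7): build the delay-j scatter
# diagram of a series and fit the scaling of the number of occupied boxes
# with the box size.  The box sizes and the normalization to the unit
# square are choices of this illustration.
def box_counting_dimension(series, j=1, n_scales=6):
    s = np.asarray(series, dtype=float)[::j]
    x, y = s[:-1], s[1:]                        # pairs (mu^{1+ij}, mu^{1+(i+1)j})
    x = (x - x.min()) / (np.ptp(x) or 1.0)      # normalize to the unit square
    y = (y - y.min()) / (np.ptp(y) or 1.0)
    eps = 2.0 ** -np.arange(1, n_scales + 1)    # box sizes
    counts = []
    for e in eps:
        m = int(round(1.0 / e))                 # boxes per side
        ix = np.minimum((x * m).astype(int), m - 1)
        iy = np.minimum((y * m).astype(int), m - 1)
        counts.append(len(set(zip(ix.tolist(), iy.tolist()))))
    # slope of log N vs log(1/eps) estimates D_j in N(eps) ~ eps^(-D_j)
    D, _ = np.polyfit(np.log(1.0 / eps), np.log(counts), 1)
    return D
```

A smooth, slowly varying series gives a nearly diagonal scatter diagram and $D_j \approx 1$, while an uncorrelated series fills the square and gives $D_j \approx 2$, which is the behavior exploited in the analysis below.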
Figure 1 illustrates the three steps to calculate the fractal dimension, using data for year 2000. Two values of j are used as an example, j=1 and j=10.
Figure 1(a) $v \cdot b_z - \langle v \cdot b_z \rangle$ for the year 2000; (b) scatter diagram; (c) log–log plot of Eq. (7). Results are shown for two values of the data sampling: j=1 (red points) and j=10 (blue points).
4 Box-counting dimension of the magnetic forcing term and the energy dissipation rate
We use the same definition as in the previous section to calculate the fractal dimensions of $\tilde{f}_{\mathrm{b}}(t)$ and ϵb(t). Using 1 year of vbz fluctuation data as the magnetic field forcing term, we obtain a $\tilde{f}_{\mathrm{b}}(t)$ time series and an energy dissipation rate time series. Then, for a given data window in this series, we construct the scatter plot and calculate its box-counting fractal dimension as described in Sect. 3.
Figure 2 illustrates these three steps to calculate the fractal dimension of the ϵb time series. We can see that, due to the very high time resolution of our computer simulation, necessary to properly solve the shell model equations, the change in ϵb(t) at each iteration is very small. This leads, for j=1, to a scatter plot which is essentially a straight line of slope 1, and thus to a box-counting dimension equal to 1 as well. However, for larger values of j the scatter diagram presents a nontrivial structure.
Figure 2(a) ϵb for the year 2000; (b) scatter diagram; (c) log–log plot of Eq. (7). Results are shown for three values of the data sampling: j=1 (red points), j=500 (blue points), and j=1000 (black points).
5 Activity parameters
Some studies have reported a variation in the (multi)fractal features of the solar wind within the solar cycle. On the other hand, it is well established that geomagnetic activity increases during solar maximum, and various models attempt to correlate specific features of the solar wind with geomagnetic activity. In this section we investigate whether the amount of complexity in the shell model forcing (as measured by its fractal dimension) correlates with the level and complexity of the dissipation activity.
To this end, we need to define activity parameters for the output time series. We use the same parameters as in our previous works. First, a threshold $\tilde{\epsilon}_{\mathrm{b}}$ is chosen, so that an "active state" is said to occur whenever $\epsilon_{\mathrm{b}}(t) > \tilde{\epsilon}_{\mathrm{b}}$. Then, four activity parameters are defined:
• N is the number of data above that threshold.
• 〈ϵb〉 is the average of the data.
• ϵbup is the average of the data above the threshold.
• max(ϵb) is the maximum value of ϵb.
The threshold was defined as follows:
$$\tilde{\epsilon}_{\mathrm{b}} = \langle \epsilon_{\mathrm{b}} \rangle + n \cdot \sigma, \tag{8}$$
where σ is the standard deviation of the time series and n is a number between 1 and 10. (In previous work, only n=5 and n=10 were considered.)
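For concreteness, the four activity parameters and the threshold of Eq. (8) can be computed as in the following sketch; the function name and the dictionary output are ours.

```python
import numpy as np

# The four activity parameters of this section, with the threshold of Eq. (8).
def activity_parameters(eps_b, n=5):
    eps_b = np.asarray(eps_b, dtype=float)
    threshold = np.mean(eps_b) + n * np.std(eps_b)      # Eq. (8)
    above = eps_b[eps_b > threshold]
    return {
        "N": above.size,                                # number of data above threshold
        "mean": float(np.mean(eps_b)),                  # <eps_b>
        "mean_above": float(np.mean(above)) if above.size else 0.0,  # eps_b^up
        "max": float(np.max(eps_b)),                    # max(eps_b)
    }
```

Note that if the threshold exceeds every datum, `mean_above` is reported as zero, which reproduces the drop to zero discussed for large n below.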
6 Results
We first study the fractal dimension of the noise term used to solve the magnetic Langevin equation, namely μ(t). In general, the fractal dimension of the time series is expected to depend on the value of the time delay j. Thus we study its dependence on j, as well as its dependence on the solar cycle. Results are shown in Fig. 3.
Figure 3Box-counting dimension of μ(t) for different values of j. (a) Curves for each year of the 23rd solar cycle. (b) Curves are distinguished for years corresponding to the maximum (black lines: years 1998 to 2005) or minimum (red lines: years 1996, 1997, and 2006 to 2008) of the solar cycle.
Figure 3a shows that there is a general trend for the fractal dimension to decrease with j. Previous results based on the shell model simulation show that active and quiet states can be distinguished by their different behavior with j (decreasing or increasing its fractal dimension for intermediate values of j, respectively). However, all curves in Fig. 3a have the same trend, so it would seem that the fractal properties of the time series do not depend on the stage of the solar cycle.
However, after adding the information on sunspot activity, the results become clearer. In order to show this, we take sunspot number data from the National Geophysical Data Center, prepared by the US Department of Commerce, NOAA, Space Weather Prediction Center (SWPC) (ftp://ftp.swpc.noaa.gov/pub/weekly/RecentIndices.txt, last access: 4 March 2020). By inspection, we note that the yearly average number of sunspots near solar minimum is below 40, whereas it quickly increases above 50 when approaching solar maximum, reaching 178 in 2002. Thus we set Ns=40 as the threshold. If the number of sunspots in a year is greater than Ns, the year is classified as closer to solar maximum; if it is less than Ns, the year is classified as closer to solar minimum. With this criterion, the years 1996, 1997, and 2006–2008 are classified in the minimum, and the years 1998–2005 in the maximum of the solar cycle. The result is shown in Fig. 3b, where we clearly see that, on average, the fractal dimension of the fluctuations μ(t) discriminates between solar cycle minimum and maximum curves. Figure 3b also shows that, as j increases, the distinction between the minimum and maximum years improves. Henceforth we use j=100 to illustrate our findings.
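The classification criterion above can be written compactly; the sunspot numbers in the example are placeholders, not the SWPC values.

```python
# Classification of a year as closer to solar maximum or minimum, using the
# yearly mean sunspot number and the threshold Ns = 40 described in the text.
NS = 40

def classify_years(yearly_sunspots):
    """Map year -> 'max' or 'min' according to the Ns = 40 criterion."""
    return {year: ("max" if count > NS else "min")
            for year, count in yearly_sunspots.items()}
```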
We intend to compare the degree of complexity of the input time series with the level of activity in the dissipated magnetic energy. We have proposed four ways to measure activity in Sect. 5. For each year of the 23rd solar cycle, the solar wind fluctuation time series μ(t) is used to force the shell model, and the resulting activity in the output is measured.
Figure 4N and ϵbup calculated from the 13 time series of μ(t) for different values of n. For clarity, separate plots are shown for two sets of values for n: from 0 to 5 (a, c), and from 5 to 10 (b, d). The grey region corresponds to years of maximum solar activity, as described in the text.
As stated in Sect. 5, two of the activity parameters depend on the threshold $\tilde{\epsilon}_{\mathrm{b}}$. With the aim of selecting an appropriate value of n in Eq. (8), in Fig. 4 we first show the results for the two activity parameters N and ϵbup. We note that for n=5, ϵbup has a clear peak near the solar maximum (year 2002, Fig. 4c). If the threshold is too large (Fig. 4d), sometimes no data are found above it, and the activity parameter drops to zero. In the case of N (Fig. 4a, b), the curves for all values of n yield similar results. No curve has a clear maximum near solar maximum, suggesting that this parameter is rather insensitive to solar activity. Note that in our previous work, N was the parameter that showed the strongest correlation with the solar cycle. This highlights the complexity of defining a suitable metric for activity. On the other hand, Fig. 4 shows that the model does respond to various activity levels in the forcing time series, regardless of whether such forcing involves the fields themselves or their fluctuations (this work).
Based on the previous discussions, we conclude that a moderate value of n is appropriate when defining the activity parameters, so that the anomalous behavior in Fig. 4b, d is avoided. We will take n=5.
We now compare the various activity parameters with the fractal dimension of μ(t). Figures 5 and 6 show the fractal dimension of μ(t), and one of the activity parameters for each time series. We see that, in general (except for N), the maximum of the activity parameters computed for ϵb(t) approximately occurs in the years around the solar maximum. Moreover, the fractal dimension of μ(t) decreases during the same period.
Figure 5Box-counting dimension of μ(t) for the solar wind with j=100 (black line), and the activity of the dissipated magnetic energy ϵb(t) (red line), as measured by the parameter N (a) and max(ϵb) (b). Threshold to define activity is n=5 (see Eq. 8). The grey region corresponds to years of maximum solar activity, as described in the text.
Figure 6Same as Fig. 5, but for activity parameters 〈ϵb(t)〉 (a) and ϵbup (b). The grey region corresponds to years of maximum solar activity, as described in the text.
We perform a similar analysis for the fractal dimension of the magnetic forcing term, $\tilde{f}_{\mathrm{b}}(t)$ in Eq. (6). That is, we calculate the dependence of the fractal dimension on j for the $\tilde{f}_{\mathrm{b}}(t)$ time series and relate it to the stage within the solar cycle, as was done in Fig. 3b. Results are shown in Fig. 7.
Figure 7Box-counting dimension of the magnetic forcing term $\tilde{f}_{\mathrm{b}}(t)$ for different values of j. Curves are distinguished for years corresponding to the maximum (black lines: years 1998 to 2005) or minimum (red lines: years 1996, 1997, and 2006 to 2008) of the solar cycle.
The conclusion from Fig. 7 is similar to the one deduced in the previous analysis for μ(t) (Fig. 3). The general trend of the fractal dimension is to decrease with j, and the fractal dimension of the magnetic forcing during years close to solar minimum is, in general, larger than the one measured during years near solar maximum. In Fig. 7, this is clearer for j>300.
Considering the above results, we now choose j=500 for the following figures. In Figs. 8 and 9, we compare the fractal dimension of the magnetic forcing term $\tilde{f}_{\mathrm{b}}(t)$ with j=500 for each year, with the same activity parameters as in Figs. 5 and 6. We can see that, as in the previous analysis, the fractal dimension of the magnetic forcing term has a minimum in the years of maximum activity. Also, consistent with Figs. 5 and 6, results for N are less clear, as seen in Fig. 8.
Figure 8Box-counting dimension of the magnetic forcing term for j=500 with the respective activity of ϵb(t) (red lines): N (a) and max(ϵb) (b), with n=5. The grey region corresponds to the maximum period of the solar cycle, as described in the text.
Figure 9Box-counting dimension of the magnetic forcing term for j=500 with the respective activity of ϵb(t) (red lines): 〈ϵb(t)〉 (a) and ϵbup (b), with n=5. The grey region corresponds to the maximum period of the solar cycle, as described in the text.
Finally, we perform the analysis of the fractal dimension of the magnetic energy dissipation rate ϵb(t) for different values of j. This latter fractal dimension, depicted in Fig. 10, does not show any particular dependence on the solar cycle, at least during the 23rd solar cycle here considered.
Figure 10Box-counting dimension of energy dissipation rate for different values of j. Curves are distinguished for years corresponding to maximum (black lines: years 1998 to 2005) or minimum (red lines: years 1996, 1997, and 2006 to 2008) of the solar cycle.
It is important to note that for small values of j, the fractal dimension is essentially constant. In fact, for j=1 the fractal dimension is always one for all years, due to the scatter diagram being exactly a line (see Fig. 2). Unlike Fig. 7, Fig. 10 does not suggest a robust correlation between the fractal dimension of ϵb(t) and the solar cycle, for any value of j.
Therefore, the time-dependent fractal dimension that characterizes the forcing adopted here leads to noticeable variations in the intermittency of the magnetic energy dissipation rate, as measured by the activity parameters defined above. On the other hand, the same quantity, namely the magnetic energy dissipation rate, does not show any significant variation of its fractal dimension during the cycle considered.
It is interesting to discuss Figs. 3, 7, and 10 in the light of comparative studies of complexity in the solar wind and the magnetosphere, even though this work is only a very simplified model of the interaction between the solar wind and the Earth's magnetosphere. In effect, Figs. 3 and 7 show the complexity of the drivers of the shell model, which we may loosely associate with the solar wind driving the magnetosphere, whereas Fig. 10 shows the complexity of the output of the shell model, which, following the analogy, may represent the magnetospheric activity.
Thus, these plots show that the complexity of the driven system (Fig. 10) is more similar to the complexity of the driver (Figs. 3, 7) during solar maximum than during solar minimum, and that the complexity of the driven system is typically lower than the complexity of the driver. This differs from previous results based on Hurst exponents, which found that complexity in the magnetosphere is larger than in the solar wind. However, it should also be noted that in our work longer-term trends are studied (1-year windows), instead of timescales of the order of the duration of geomagnetic storms. The different timescales, and the use of different metrics for complexity, could be relevant when comparing both results.
7 Conclusions
In this paper we present the results of an MHD shell model where we force the velocity field fluctuations and the magnetic field fluctuations differently. In particular, while the forcing employed in the velocity equation is a time-correlated Gaussian noise, for the magnetic field equation we adopt the solution of a Langevin equation in which the fluctuations of vbz, computed from solar wind data, replace the stochastic term. This produces a forcing on the magnetic field equation that mimics the time-dependent solar wind action on Earth's magnetosphere during a solar cycle. This description is certainly an oversimplification of the complex dynamics that determine the interaction between the solar wind and Earth's magnetosphere, but it provides a possible approach if we are interested only in the fractal properties of the time series of the characteristic parameters.
In this framework, we have analyzed the relationship between the activity of the magnetic energy dissipation rate obtained from the shell model and the fractal dimensions of its input and output time series. Specifically, our defined activity parameters are compared with the fractal dimensions of the fluctuations of the solar wind vbz data, of the magnetic forcing term, and of the time series of the magnetic energy dissipation rate.
Both the fluctuation term, μ(t), and the resulting forcing term (Eqs. 5 and 6) have a fractal dimension which is well correlated with the solar cycle, as shown in Figs. 3 and 7, indicating that information on solar activity is actually present in the fractal dimension of μ(t) and resulting forcing. This is not the case for the magnetic energy dissipation rate, as can be seen in Fig. 10. Thus, this complexity measure produces signatures of the corresponding solar activity when applied to the input of the shell model, but does not produce them when applied to the output of the model.
For the quantities which possess a time-dependent fractal dimension, namely μ(t) and the forcing term, this dimension exhibits a minimum near solar maximum. As to the activity of the output, all proposed metrics – except N – seem to correlate with the solar cycle, showing a peak near the solar maximum. This suggests that the complexity of the noise term of the Langevin equation may have, within the simulation, a noticeable effect on the activity of the magnetic energy dissipation rate, although the fractal dimension, as calculated here, is not a suitable metric for that output activity.
Despite this, it is interesting to see that some results are consistent with previous studies based directly on data. Fractal dimensions in Figs. 3 and 7 measure the complexity of the drivers of the shell model, which we may loosely associate with the solar wind driving the magnetosphere, whereas Fig. 10 measures the complexity of the output of the shell model, which, following the analogy, may represent the magnetospheric activity. Results are similar to those calculated for the solar wind and for the Dst index, in the sense that the values of the fractal dimensions suggest that the complexity of the solar wind is larger than the complexity of the magnetosphere, measured using the same box-counting approach as presented here.
Given the complex dynamics of the system studied, we should not expect a single metric to contain all the information, and thus results may depend on the method used. For instance, studies based on Hurst exponents suggest that complexity in the magnetosphere is larger than in the solar wind. On the other hand, the timescales observed there are also different from those considered here, and this can also be relevant when evaluating complexity in a physical system.
Nevertheless, it is interesting that various studies have considered the use of fractal dimensions, following several strategies, as a means to extract information on the solar wind–magnetosphere interaction, either in the sense of precursor activity or of longer-term trends. Simulation-based studies may help to understand to what extent complexity measures are relevant for this task.
Data availability
OMNI data for the flow speed and average magnetic field of the solar wind can be downloaded from the website of the Coordinated Data Analysis Web (CDAWeb), Goddard Space Flight Center, https://cdaweb.gsfc.nasa.gov/istp_public/ (last access: 4 March 2020). Sunspot number data can be downloaded from the website of the Space Weather Prediction Center (SWPC), ftp://ftp.swpc.noaa.gov/pub/weekly/RecentIndices.txt (last access: 4 March 2020).
Author contributions
MD did the main numerical analysis and was involved in editing the paper and in scientific discussions for the whole text. VM was involved in writing and editing the paper and in scientific discussions for the whole text. GN and VC were involved in the creation of the numerical code, in editing the paper, and in scientific discussions for the whole text. MR was involved in editing the paper, and in scientific discussions for the whole text.
Competing interests
The authors declare that they have no conflict of interest.
Acknowledgements
We thank the support of CONICYT through FONDECYT grant nos. 1161711 and 1201967 (Víctor Muñoz), grant no. 3160305 (Macarena Domínguez), and grant no. 1191673 (Mario Riquelme).
Financial support
This research has been supported by the Fondo Nacional de Desarrollo Científico y Tecnológico, FONDECYT (grant nos. 1161711, 1201967, 3160305, and 1191673).
Review statement
This paper was edited by Bruce Tsurutani and reviewed by two anonymous referees.
References
Aschwanden, M. J. and Aschwanden, P. D.: Solar Flare Geometries. I. The Area Fractal Dimension, Astrophys. J., 674, 530–543, https://doi.org/10.1086/524371, 2008a.
Aschwanden, M. J. and Aschwanden, P. D.: Solar Flare Geometries. II. The Volume Fractal Dimension, Astrophys. J., 674, 544–553, https://doi.org/10.1086/524370, 2008b.
Balasis, G., Daglis, I. A., Kapiris, P., Mandea, M., Vassiliadis, D., and Eftaxias, K.: From pre-storm activity to magnetic storms: a transition described in terms of fractal dynamics, Ann. Geophys., 24, 3557–3567, https://doi.org/10.5194/angeo-24-3557-2006, 2006.
Boffetta, G., Carbone, V., Giuliani, P., Veltri, P., and Vulpiani, A.: Power Laws in Solar Flares: Self-Organized Criticality or Turbulence?, Phys. Rev. Lett., 83, 4662–4665, https://doi.org/10.1103/PhysRevLett.83.4662, 1999.
Burton, R. K., McPherron, R. L., and Russel, C. T.: An Empirical Relationship between Interplanetary Conditions and Dst, J. Geophys. Res., 80, 4204–4217, https://doi.org/10.1029/JA080i031p04204, 1975.
Carreras, B. A., Lynch, V. E., Newman, D. E., Balbín, R., Bleuel, J., Pedrosa, M. A., Endler, M., van Milligen, B., Sánchez, E., and Hidalgo, C.: Intermittency of Plasma Edge Fluctuation data: Multifractal Analysis, Phys. Plasmas, 7, 3278–3287, https://doi.org/10.1063/1.874193, 2000.
Chang, T.: Self-Organized Criticality, Multi-Fractal Spectra, Sporadic Localized Reconnection and Intermittent Turbulence in the Magnetotail, Phys. Plasmas, 6, 4137, https://doi.org/10.1023/A:1002486121567, 1999.
Chang, T. and Wu, C. C.: Rank-Ordered Multifractal Spectrum for Intermittent Fluctuations, Phys. Rev. E, 77, 045401, https://doi.org/10.1103/PhysRevE.77.045401, 2008.
Chapman, S. C., Hnat, B., and Kiyani, K.: Solar cycle dependence of scaling in solar wind fluctuations, Nonlin. Processes Geophys., 15, 445–455, https://doi.org/10.5194/npg-15-445-2008, 2008.
Conlon, P. A., Gallagher, P. T., McAteer, R. T. J., Ireland, J., Young, C. A., Kestener, P., Hewett, R. J., and Maguire, K.: Multifractal Properties of Evolving Active Regions, Sol. Phys., 248, 297–309, https://doi.org/10.1007/s11207-007-9074-7, 2008.
Dimitropoulou, M., Georgoulis, M., Isliker, H., Vlahos, L., Anastasiadis, A., Strintzi, D., and Moussas, X.: The correlation of fractal structures in the photospheric and the coronal magnetic field, Astron. Astrophys., 505, 1245–1253, https://doi.org/10.1051/0004-6361/200911852, 2009.
Domínguez, M., Muñoz, V., and Valdivia, J. A.: Temporal Evolution of Fractality in the Earth's Magnetosphere and the Solar Photosphere, J. Geophys. Res., 119, 3585–3603, https://doi.org/10.1002/2013JA019433, 2014.
Domínguez, M., Nigro, G., Muñoz, V., and Carbone, V.: Study of Fractal Features of Magnetized Plasma Through an MHD Shell Model, Phys. Plasmas, 24, 072308, https://doi.org/10.1063/1.4993200, 2017.
Domínguez, M., Nigro, G., Muñoz, V., and Carbone, V.: Study of the Fractality of Magnetized Plasma using an MHD Shell Model Driven by Solar Wind Data, Phys. Plasmas, 25, 092302, https://doi.org/10.1063/1.5034129, 2018.
Donner, R. V., Balasis, G., Stolbova, V., Georgiou, M., Wiedermann, M., and Kurths, J.: Recurrence-Based Quantification of Dynamical Complexity in the Earth's Magnetosphere at Geospace Storm Timescales, J. Geophys. Res., 124, 90–108, https://doi.org/10.1029/2018JA025318, 2018.
Echer, E., Alves, M. V., and Gonzalez, W. D.: Geoeffectiveness of Interplanetary Shocks during Solar Minimum (1995–1996) and Solar Maximum (2000), Sol. Phys., 221, 361–380, https://doi.org/10.1023/B:SOLA.0000035045.65224.f3, 2004.
Eftaxias, K., Contoyiannis, Y., Balasis, G., Karamanos, K., Kopanas, J., Antonopoulos, G., Koulouras, G., and Nomicos, C.: Evidence of fractional-Brownian-motion-type asperity model for earthquake generation in candidate pre-seismic electromagnetic emissions, Nat. Hazards Earth Syst. Sci., 8, 657–669, https://doi.org/10.5194/nhess-8-657-2008, 2008.
Eftaxias, K. A., Kapiris, P. G., Balasis, G. T., Peratzakis, A., Karamanos, K., Kopanas, J., Antonopoulos, G., and Nomicos, K. D.: Unified approach to catastrophic events: from the normal state to geological or biological shock in terms of spectral fractal and nonlinear analysis, Nat. Hazards Earth Syst. Sci., 6, 205–228, https://doi.org/10.5194/nhess-6-205-2006, 2006.
Georgoulis, M. K.: Are Solar Active Regions with Major Flares More Fractal, Multifractal, or Turbulent Than Others?, Sol. Phys., 276, 161–181, https://doi.org/10.1007/s11207-010-9705-2, 2012.
Gledzer, E. B.: System of Hydrodynamic Type Allowing 2 Quadratic Integrals of Motion, Sov. Phys. Dokl. SSSR, 18, 216–217, 1973. a
Gonzalez, W. D., Joselyn, J. A., Kamide, Y., Kroehl, H. W., Rostoker, G., Tsurutani, B. T., and Vasyliunas, V. M.: What Is A Geomagnetic Storm?, J. Geophys. Res., 93, 5771–5792, https://doi.org/10.1029/93JA02867, 1994. a, b
Gonzalez, W. D., Dal Lago, A., Clúa de Gonzalez, A. L., Vieira, L. E. A., and Tsurutani, B. T.: Prediction of Peak-Dst from Halo CME/Magnetic Cloud-Speed Observations, J. Atmos. Sol.-Terr. Phy., 66, 161–165, https://doi.org/10.1016/j.jastp.2003.09.006, 2004. a, b, c
Gündüz, G. and Gündüz, U.: The Mathematical Analysis of the Structure of Some Songs, Physica A, 357, 565–592, https://doi.org/10.1016/j.physa.2005.03.042, 2005. a
Hsü, K. J. and Hsü, A. J.: Fractal Geometry of Music, P. Natl. Acad. Sci. USA, 87, 938–941, https://doi.org/10.1073/pnas.87.3.938, 1990. a
Huttunen, K. E. J., Koskinen, H. E. J., and Schwenn, R.: Variability of Magnetospheric Storms Driven by Different Solar Wind Perturbations, J. Geophys. Res., 107, 1121, https://doi.org/10.1029/2001JA900171, 2002. a
Kane, R. P.: How Good is the Relationship of Solar and Interplanetary Plasma Parameters with Geomagnetic Storms?, J. Geophys. Res., 110, 02213, https://doi.org/10.1029/2004JA010799, 2005. a, b, c
Kiyani, K., Chapman, S. C., Hnat, B., and Nicol, R. M.: Self-Similar Signature of the Active Solar Corona within the Inertial Range of Solar-Wind Turbulence, Phys. Rev. Lett., 98, 211101, https://doi.org/10.1103/PhysRevLett.98.211101, 2007. a
Kozelov, B. V.: Fractal approach to description of the auroral structure, Ann. Geophys., 21, 201–2023, https://doi.org/10.5194/angeo-21-2011-2003, 2003. a
Lepreti, F., Carbone, V., Giuliani, P., Sorriso-Valvo, L., and Veltri, P.: Statistical Properties of Dissipation Bursts within Turbulence: Solar Flares and Geomagnetic Activity, Planet. Space Sci., 52, 957–962, https://doi.org/10.1016/j.pss.2004.03.001, 2004. a, b, c, d, e
Macek, W. M.: Modeling Multifractality of the Solar Wind, Space Sci. Rev., 122, 329–337, https://doi.org/10.1007/s11214-006-8185-z, 2006. a
Macek, W. M.: Multifractality and intermittency in the solar wind, Nonlin. Processes Geophys., 14, 695–700, https://doi.org/10.5194/npg-14-695-2007, 2007. a
Macek, W. M. and Wawrzaszek, A.: Evolution of Asymmetric Multifractal Scaling of Solar Wind Turbulence in the Outer Heliosphere, J. Geophys. Res., 114, 03108, https://doi.org/10.1029/2008JA013795, 2009. a
Macek, W. M., Bruno, R., and Consolini, G.: Generalized Dimensions for Fluctuations in the Solar Wind, Phys. Rev. E, 72, 017202, https://doi.org/10.1103/PhysRevE.72.017202, 2005. a
Materassi, M. and Consolini, G.: Magnetic Reconnection Rate in Space Plasmas: A Fractal Approach, Phys. Rev. Lett., 99, 175002, https://doi.org/10.1103/PhysRevLett.99.175002, 2007. a
McAteer, R. T. J., Gallagher, P. T., and Ireland, J.: Statistics of Active Region Complexity: A Large-Scale Fractal Dimension Survey, Astrophys. J., 631, 628–635, https://doi.org/10.1086/432412, 2005. a, b
McAteer, R. T. J., Gallagher, P. T., and Conlon, P. A.: Turbulence, Complexity, and Solar Flares, Adv. Space Res., 45, 1067–1074, https://doi.org/10.1016/j.asr.2009.08.026, 2010. a, b
Nanjo, K. and Nagahama, H.: Fractal Properties of Spatial Distributions of Aftershocks and Active Faults, Chaos Soliton. Fract., 19, 387, https://doi.org/10.1016/S0960-0779(03)00051-1, 2004. a
Neto, C. R., Guimarães-Filho, Z. O., Caldas, I. L., Nascimento, I. C., and Kuznetsov, Y. K.: Multifractality in Plasma Edge Electrostatic Turbulence, Phys. Plasmas, 15, 082311, https://doi.org/10.1063/1.2973175, 2008. a
Nigro, G.: A Shell Model for a Large-Scale Turbulent Dynamo, Geophys. Astro. Fluid, 107, 101–113, https://doi.org/10.1080/03091929.2012.664141, 2013. a
Nigro, G. and Carbone, V.: Magnetic Reversals in a Modified Shell Model for Magnetohydrodynamics Turbulence, Phys. Rev. E, 82, 016313, https://doi.org/10.1103/PhysRevE.82.016313, 2010. a, b
Nigro, G. and Carbone, V.: Finite-Time Singularities and Flow Regularization in a Hydromagnetic Shell Model at Extreme Magnetic Prandtl Numbers, New J. Phys., 17, 073038, https://doi.org/10.1088/1367-2630/17/7/073038, 2015. a, b
Nigro, G. and Veltri, P.: A Study of the Dynamo Transition in a Self-Consistent Nonlinear Dynamo Model, Astrophys. J. Lett., 740, L37, https://doi.org/10.1088/2041-8205/740/2/L37, 2011. a, b
Nigro, G., Malara, F., Carbone, V., and Veltri, P.: Nanoflares and MHD Turbulence in Coronal Loops: A Hybrid Shell Model, Phys. Rev. Lett., 92, 194501, https://doi.org/10.1103/PhysRevLett.92.194501, 2004. a, b, c
Obukhov, A. M.: Some General Properties of Equations Describing The Dynamics of the Atmosphere, Akad. Nauk. SSSR, Izv. Serria Fiz. Atmos. Okeana, 7, 695–704, 1971. a
OMNIWeb Plus Data Service: OMNI Data, Goddard Space Flight Center, available at: https://cdaweb.gsfc.nasa.gov/istp_public/, last access: 4 March 2020. a
Pastén, D., Muñoz, V., Cisternas, A., Rogan, J., and Valdivia, J. A.: Monofractal and Multifractal Analysis of the Spatial Distribution of Earthquakes in the Central Zone of Chile, Phys. Rev. E, 84, 066123, https://doi.org/10.1103/PhysRevE.84.066123, 2011. a
Rangarajan, G. K. and Barreto, L. M.: Long Term Variability in Solar Wind Velocity and IMF Intensity and the Relationship between Solar Wind Parameters & Geomagnetic Activity, Earth Planets Space, 52, 121, https://doi.org/10.1186/BF03351620, 2000. a
Rathore, B. S., Gupta, D. C., and Parashar, K. K.: Relation Between Solar Wind Parameter and Geomagnetic Storm Condition during Cycle-23, International Journal of Geosciences, 5, 1602–1608, https://doi.org/10.4236/ijg.2014.513131, 2014. a
Rathore, B. S., Gupta, D. C., and Kaushik, S. C.: Effect of Solar Wind Plasma Parameters on Space Weather, Res. Astron. Astrophys., 15, 85, https://doi.org/10.1088/1674-4527/15/1/009, 2015. a, b
Sahimi, M., Robertson, M. C., and Sammis, C. G.: Fractal Distribution of Earthquake Hypocenters and its Relation to Fault Patterns and Percolation, Phys. Rev. Lett., 70, 2186–2189, https://doi.org/10.1103/PhysRevLett.70.2186, 1993. a
Snyder, C. W., Neugebauer, M., and Rao, U. R.: The Solar Wind Velocity and Its Correlation with Cosmic-Ray Variations and with Solar and Geomagnetic Activity, J. Geophys. Res., 68, 6361–6370, 1963. a
Su, Z.-Y. and Wu, T.: Music Walk, Fractal Geometry in Music, Physica A, 380, 418–428, https://doi.org/10.1016/j.physa.2007.02.079, 2007. a
Space Weather Prediction Center (SWPC): Sunspot number data, U.S. Dept. of Commerce, NOAA, available at: ftp://ftp.swpc.noaa.gov/pub/weekly/RecentIndices.txt, last access: 4 March 2020. a
Szczepaniak, A. and Macek, W. M.: Asymmetric multifractal model for solar wind intermittent turbulence, Nonlin. Processes Geophys., 15, 615–620, https://doi.org/10.5194/npg-15-615-2008, 2008. a, b
Tsurutani, B. T., Gonzalez, W., Tang, F., Akasofu, S., and Smith, E. J.: Origin of Interplanetary Southward Magnetic Fields Responsible for Major Magnetic Storms near Solar Maximum (1978–1979), J. Geophys. Res., 93, 8519–8531, https://doi.org/10.1029/JA093iA08p08519, 1988. a
Uritsky, V. M., Klimas, A. J., and Vassiliadis, D.: Analysis and Prediction of High-Latitude Geomagnetic Disturbances based on a Self-Organized Criticality Framework, Adv. Space Res., 37, 539–546, https://doi.org/10.1016/j.asr.2004.12.059, 2006. a
Yamada, M. and Ohkitani, K.: Lyapunov Spectrum of a Model of Two-Dimensional Turbulence, Phys. Rev. Lett., 60, 983–986, https://doi.org/10.1103/PhysRevLett.60.983, 1988. a
Yankov, V. V.: Magnetic Field Dissipation and Fractal Model of Current Sheets, Phys. Plasmas, 4, 571, https://doi.org/10.1063/1.872155, 1997. a
Zaginaylov, G., Grudiev, A., Shünemann, K., and Turbin, P.: Fractal Properties of Trivelpiece-Gould Waves in Periodic Plasma-Filled Waveguides, Phys. Rev. Lett., 88, 195005, https://doi.org/10.1103/PhysRevLett.88.195005, 2002. a | 2020-05-27 09:15:38 | {"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 32, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.7816388607025146, "perplexity": 1269.7206042435437}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2020-24/segments/1590347392142.20/warc/CC-MAIN-20200527075559-20200527105559-00064.warc.gz"} |
https://gmatclub.com/forum/a-total-of-512-players-participated-in-a-single-tennis-knock-131735.html
# A total of 512 players participated in a single tennis knock
Intern | Joined: 04 Mar 2012 | Posts: 37
Posted: 01 May 2012, 20:26
Difficulty: 35% (medium) | Question Stats: 70% (01:45) correct, 30% (01:48) wrong, based on 458 sessions
A total of 512 players participated in a single tennis knockout tournament. What is the total number of matches played in the tournament? (Knockout means that if a player loses, he is out of the tournament.) No match ends in a tie.
A. 511
B. 512
C. 256
D. 255
E. 1023
I chose D, solved this way: after the first 256 matches, 256 players remain (the 256 who lost are knocked out); after another 128 matches, 128 players remain; after 64 more matches, 64 players remain; then after 32, 32; after 16, 16; after 8, 8; after 4, 4; after 2, 2; and then 1.
Total: 256 + 128+64+32+16+8+1 = 255 matches.
However, the correct answer given is A, 511. Can anyone please explain what's wrong in my approach? Thanks!
Intern | Joined: 25 Jun 2012 | Posts: 33
Posted: 19 Nov 2012, 18:17
There are 512 players and only 1 person wins, so 511 players lose. In order to lose, you must have lost a game.
511 games.
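This counting argument can be sanity-checked with a quick simulation (a sketch I'm adding for illustration; the function name and the random-pairing scheme are my own, not from the thread):

```python
import random

def knockout_matches(num_players):
    """Simulate a knockout draw: pair up survivors each round, losers leave."""
    players = list(range(num_players))
    matches = 0
    while len(players) > 1:
        random.shuffle(players)                 # random pairings for this round
        players = players[: len(players) // 2]  # one winner survives per pair
        matches += len(players)                 # one match was played per pair
    return matches

# 512 is a power of 2, so no byes are ever needed
print(knockout_matches(512))  # 511: every player except the champion loses exactly once
```

However the draw shakes out, the count is always players minus one, because each match eliminates exactly one player.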
##### General Discussion
Math Expert | Joined: 02 Sep 2009 | Posts: 53063
Posted: 01 May 2012, 22:00
gmihir wrote:
[...] total 256 + 128+64+32+16+8+1 = 255 matches. However, the correct answer given is A, 511. Can anyone please explain what's wrong in my approach?
You've done everything right except calculation: 256+128+64+32+16+8+4+2+1=511.
Intern | Joined: 22 Jan 2013 | Posts: 9
Posted: 26 Feb 2013, 18:12
1- The 512 players will play 256 games --> 256 players will go out from these games
2- The remaining 256 players will play 128 games --> 128 players will go out
3- The remaining 128 players will play 64 games --> 64 players will go out
4- The remaining 64 players will play 32 games --> 32 players will go out
5- The remaining 32 players will play 16 games --> 16 players will go out
6- The remaining 16 players will play 8 games --> 8 players will go out
7- The remaining 8 players will play 4 games --> 4 players will go out
8- The remaining 4 players will play 2 games --> 2 players will go out
9- The remaining 2 players will play 1 game --> 1 player will go out
256+128+64+32+16+8+4+2+1 = 511
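The round-by-round tally above can be expressed as a short loop (an illustrative sketch; `matches_by_rounds` is a name of my own choosing):

```python
def matches_by_rounds(players):
    """Sum matches round by round: each round halves the field."""
    total = 0
    while players > 1:
        total += players // 2  # matches played this round
        players //= 2          # winners advance to the next round
    return total

print(matches_by_rounds(512))  # 256+128+64+32+16+8+4+2+1 = 511
```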
Intern | Joined: 05 Feb 2013 | Posts: 8 | Schools: Anderson '15
Posted: 27 Feb 2013, 11:44
AlyoshaKaramazov wrote:
There are 512 players, only 1 person wins, 511 players lose. in order to lose, you must have lost a game.
511 games.
I know this is an old post. But damn, sometimes the answer is so simple, you just have to think logically.
Thanks Alyosha
Intern | Joined: 31 May 2013 | Posts: 13
Posted: 24 Jun 2013, 07:23
If you divide 512 by 2 recursively, you get 512, 256, 128, 64, 32, 16, 8, 4, 2, 1, so the answer narrows down to either A or B.
Since the last dividend is 1, the sum will be odd, so it should be 511.
A
SVP | Joined: 06 Sep 2013 | Posts: 1694 | Concentration: Finance
Posted: 26 Dec 2013, 12:05
Bunuel wrote:
You've done everything right except calculation: 256+128+64+32+16+8+4+2+1=511.
Geometric progression:
512 = 2^9, so the matches per round form a G.P.: 2^8 + 2^7 + ... + 2^1 + 2^0 = 2^9 - 1 = 511
A is our friend here
Cheers!
J
Manager | Joined: 28 Apr 2013 | Posts: 123 | Location: India | GPA: 4 | WE: Medicine and Health (Health Care)
Posted: 03 Jan 2014, 16:34
gmihir wrote:
[...] total 256 + 128+64+32+16+8+1 = 255 matches. However, the correct answer given is A, 511. Can anyone please explain what's wrong in my approach?
256+128+64+32+16+8+4+2+1 = 511
OA - A
Thanks for posting
Intern | Joined: 17 May 2015 | Posts: 2
Posted: 25 Jun 2015, 01:14
One of the best lines of logic I've seen for these knockout questions is as follows:
In this question, there are 512 participants, and it takes ONE match to eliminate ONE player. At the end of the day you need to eliminate everyone except the winner, i.e. you need to eliminate 511 participants, and naturally you need 511 matches to do so.
Manager | Joined: 09 Jun 2015 | Posts: 92
Posted: 18 Apr 2016, 00:13
gmihir wrote:
[...] total 256 + 128+64+32+16+8+1 = 255 matches. However, the correct answer given is A, 511. Can anyone please explain what's wrong in my approach?
If there are 2 players, then there will be only 1 match
If there are 4 players, then there will be 2+1 matches
8 players, 4+2+1=7
16 players, 8+4+2+1 = 15
Now you are getting the pattern
512 players, 256+128+64+..+2+1=511
Board of Directors | Status: QA & VA Forum Moderator | Joined: 11 Jun 2011 | Posts: 4391 | Location: India | GPA: 3.5
Posted: 18 Apr 2016, 11:01
Mathivanan Palraj wrote:
If there are 2 players, then there will be only 1 match
If there are 4 players, then there will be 2+1 matches
8 players, 4+2+1=7
16 players, 8+4+2+1 = 15
Now you are getting the pattern
512 players, 256+128+64+..+2+1=511
Good catch, or you can go the other way round:
When there are 2 players, only 1 knockout match is needed.
When there are 3 players, only 2 knockout matches are needed.
When there are 4 players, only 3 knockout matches are needed.
So the pattern formed is:
When there are n players, only n - 1 knockout matches are needed.
Hence, when there are 512 players, only 511 (512 - 1) knockout matches are needed.
Either way the answer will be the same, but the best strategy is to solve the problem with minimum time and calculation.
Current Student | Joined: 12 Aug 2015 | Posts: 2621 | Schools: Boston U '20 (M) | GRE 1: Q169 V154
Posted: 22 Apr 2016, 13:59
Firstly, the question should state that a match is played between 2 players (what if it's a cricket match?).
Assuming the match is played between 2 teams => number of matches = 512/2 + 256/2 + 128/2 + 64/2 + 32/2 + 16/2 + 8/2 + 4/2 + 2/2.
We don't need to calculate the sum here => the units digit will be 1.
SMASH that A
Senior Manager | Joined: 18 Jun 2016 | Posts: 263 | Location: India | GMAT 1: 720 Q50 V38 | GMAT 2: 750 Q49 V42 | GPA: 4 | WE: General Management (Other)
Posted: 31 Aug 2016, 09:54
stonecold wrote:
Firstly, the question should state that a match is played between 2 players (what if it's a cricket match?).
Assuming the match is played between 2 teams => number of matches = 512/2 + 256/2 + 128/2 + 64/2 + 32/2 + 16/2 + 8/2 + 4/2 + 2/2.
We don't need to calculate the sum here => the units digit will be 1.
SMASH that A
1. The question mentions that it is a singles tennis tournament.
2. All we know is that the sum is odd. Please explain how you would get the units digit = 1 just by looking at the sequence.
My Solution:
# of matches = 256 + 128 + 64 + ... + 2 + 1 = $$2^8 + 2^7 + 2^6 + ... + 2^1 + 2^0$$ => Ascending G.P.
Sum of G.P. =$$\frac{a(r^n - 1)}{r-1}$$
Where,
a = First Term = 1 (Sum starts from 1)
r = Common Ratio = 2 (ratio of each successive term is 2)
n = Number of terms = 9
Therefore,
Sum = $$\frac{1 * (2^9 - 1)}{2-1}$$ = 512 - 1 = 511
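The closed-form sum can be cross-checked against the direct term-by-term sum (a quick sketch, not part of the original post):

```python
# Geometric-series shortcut vs. the direct round-by-round sum.
a, r, n = 1, 2, 9                      # first term 1, common ratio 2, nine rounds
gp_sum = a * (r**n - 1) // (r - 1)     # closed form: a(r^n - 1)/(r - 1)
direct = sum(2**k for k in range(9))   # 1 + 2 + 4 + ... + 256
print(gp_sum, direct)                  # 511 511
```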
https://bibli.cirm-math.fr/listRecord.htm?list=link&xRecord=19275682157910938649
# Documents Hubert, Pascal | records found: 23
## Interview at CIRM: Pascal Hubert Hubert, Pascal | CIRM H
Post-edited
Outreach;Mathematics Education and Popularization of Mathematics
Pascal Hubert is a mathematician, professor at Aix-Marseille Université and director of the FRUMAM.
He talks here about his grandfather, who gave him his taste for mathematics, about his research, the mathematical richness of Marseille, his collaboration with Artur Avila (Fields Medal 2014), etc. We were able to contact Artur Avila before Pascal Hubert's interview, and he asked us to talk to Hubert about Jean-Christophe Yoccoz...
## Totally geodesic submanifolds of Teichmüller space and moduli space Wright, Alexander | CIRM H
Post-edited
Research talks;Dynamical Systems and Ordinary Differential Equations
We consider "higher dimensional Teichmüller discs", by which we mean complex submanifolds of Teichmüller space that contain the Teichmüller disc joining any two of its points. We prove results in the higher dimensional setting that are opposite to the one dimensional behavior: every "higher dimensional Teichmüller disc" covers a "higher dimensional Teichmüller curve" and there are only finitely many "higher dimensional Teichmüller curves" in each moduli space. The proofs use recent results in Teichmüller dynamics, especially joint work with Eskin and Filip on the Kontsevich-Zorich cocycle. Joint work with McMullen and Mukamel as well as Eskin, McMullen and Mukamel shows that exotic examples of "higher dimensional Teichmüller discs" do exist.
## Interview at CIRM: Peter Sarnak Sarnak, Peter | CIRM H
Post-edited
Outreach;Mathematics Education and Popularization of Mathematics
Peter Sarnak is a South African-born mathematician with dual South-African and American nationalities. He has been Eugene Higgins Professor of Mathematics at Princeton University since 2002, succeeding Andrew Wiles, and is an editor of the Annals of Mathematics. He is known for his work in analytic number theory. Sarnak is also on the permanent faculty at the School of Mathematics of the Institute for Advanced Study. He also sits on the Board of Adjudicators and the selection committee for the Mathematics award, given under the auspices of the Shaw Prize.
Sarnak graduated from the University of the Witwatersrand (B.Sc. 1975) and Stanford University (Ph.D. 1980), under the direction of Paul Cohen. Sarnak's highly cited work (with A. Lubotzky and R. Phillips) applied deep results in number theory to Ramanujan graphs, with connections to combinatorics and computer science.
Peter Sarnak was awarded the Pólya Prize of the Society for Industrial and Applied Mathematics in 1998, the Ostrowski Prize in 2001, the Levi L. Conant Prize in 2003, the Frank Nelson Cole Prize in Number Theory in 2005 and a Lester R. Ford Award in 2012. He is the recipient of the 2014 Wolf Prize in Mathematics.
He was also elected as member of the National Academy of Sciences (USA) and Fellow of the Royal Society (UK) in 2002. He was awarded an honorary doctorate by the Hebrew University of Jerusalem in 2010. He was also awarded an honorary doctorate by the University of Chicago in 2015.
## Integral points on Markoff type cubic surfaces and dynamics Sarnak, Peter | CIRM H
Post-edited
Research talks;Dynamical Systems and Ordinary Differential Equations;Number Theory
Cubic surfaces in affine three space tend to have few integral points. However, certain cubics, such as $x^3 + y^3 + z^3 = m$, may have many such points, but very little is known. We discuss these questions for Markoff type surfaces: $x^2 +y^2 +z^2 -x\cdot y\cdot z = m$, for which a (nonlinear) descent allows for a study. Specifically, that of a Hasse principle and strong approximation, together with "class numbers" and their averages for the corresponding nonlinear group of morphisms of affine three space.
## Interview at CIRM: Jean-Christophe Yoccoz Yoccoz, Jean-Christophe | CIRM H
Post-edited
Outreach;Mathematics Education and Popularization of Mathematics
Jean-Christophe Yoccoz, born 29 May 1957 in Paris, is a French mathematician, winner of the Fields Medal in 1994, and a professor at the Collège de France since 1996. He is known in particular for his work on dynamical systems.
## Interview at CIRM: Curtis McMullen McMullen, Curtis T. | CIRM H
Post-edited
Outreach;Mathematics Education and Popularization of Mathematics
Curtis Tracy McMullen (born 21 May 1958) is Professor of Mathematics at Harvard University. He was awarded the Fields Medal in 1998 for his work in complex dynamics, hyperbolic geometry and Teichmüller theory. McMullen graduated as valedictorian in 1980 from Williams College and obtained his Ph.D. in 1985 from Harvard University, supervised by Dennis Sullivan. He held post-doctoral positions at the Massachusetts Institute of Technology, the Mathematical Sciences Research Institute, and the Institute for Advanced Study, after which he was on the faculty at Princeton University (1987-1990) and the University of California, Berkeley (1990-1997), before joining Harvard in 1997. He received the Salem Prize in 1991 and was elected to the National Academy of Sciences in 2007. In 2012 he became a fellow of the American Mathematical Society.
## Coupled rotations and snow falling on cedars McMullen, Curtis T. | CIRM H
Post-edited
Research talks;Dynamical Systems and Ordinary Differential Equations;Algebraic and Complex Geometry;Number Theory
We study cascades of bifurcations in a simple family of maps on the circle, and connect this behavior to the geometry of an absolute period leaf in genus $2$. The presentation includes pictures of an exotic foliation of the upper half plane, computed with the aid of the Möller-Zagier formula.
## Interview at CIRM: Alexander Bufetov Bufetov, Alexander | CIRM H
Post-edited
Outreach;Mathematics Education and Popularization of Mathematics
Alexander Bufetov got his Diploma in Mathematics at the Independent University of Moscow in 1999 and his PhD at Princeton University in 2005. After one year as a Postdoctoral student at the University of Chicago, he was employed as an Assistant Professor at Rice University where he also held the 'Edgar Odell Lovett Junior Chair'. In 2009, Alexander Bufetov joined the Steklov Mathematical Institute where he passed his habilitation thesis in order to supervise PhD students. In 2012, he became a CNRS Senior Researcher for the LATP (Laboratoire d’Analyse, Topologie, Probabilités) department at Aix-Marseille University. Alexander Bufetov has received several prizes: a Prize by Moscow Mathematical Society in 2005, a grant by the Sloan Foundation and a grant from the President of the Russian Federation in 2010 and also a grant from the Simons Foundation at the Independent University of Moscow in 2011. His research area is the Ergodic theory of dynamical systems.
## Horocyclic flows on hyperbolic surfaces - Part I Schapira, Barbara | CIRM H
Post-edited
Research talks;Dynamical Systems and Ordinary Differential Equations
I will present results on the dynamics of horocyclic flows on the unit tangent bundle of hyperbolic surfaces, density and equidistribution properties in particular. I will focus on infinite volume hyperbolic surfaces. My aim is to show how these properties are related to dynamical properties of geodesic flows, as product structure, ergodicity, mixing, ...
37D40
## Simplicity of the Lyapunov spectrum revisited Hamenstädt, Ursula | CIRM H
Multi angle
Research talks;Dynamical Systems and Ordinary Differential Equations
We give an algebraic proof of the simplicity of the Lyapunov spectrum for the Teichmüller flow on strata of abelian differentials. This proof extends to the Kontsevich Zorich cocycle over strata of quadratic differentials and can also be used to study the algebraic degree of pseudo-Anosov stretch factors.
## Limits of geodesic push-forwards of horocycle measures Forni, Giovanni | CIRM H
Multi angle
Research talks;Dynamical Systems and Ordinary Differential Equations
We prove a couple of general conditional convergence results on ergodic averages for horocycle and geodesic subgroups of any continuous $SL(2,\mathbb{R})$- action on a locally compact space. These results are motivated by theorems of Eskin, Mirzakhani and Mohammadi on the $SL(2,\mathbb{R})$-action on the moduli space of Abelian differentials. By our argument we can derive from these theorems an improved version of the “weak convergence” of push-forwards of horocycle measures under the geodesic flow and a new short proof of a theorem of Chaika and Eskin on Birkhoff genericity in almost all directions for the Teichmüller geodesic flow.
## Interval exchange transformations from tiling billiards Davis, Diana | CIRM H
Multi angle
Research talks;Dynamical Systems and Ordinary Differential Equations
Tiling billiards is a dynamical system where beams of light refract through planar tilings. It turns out that, for a regular tiling of the plane by congruent triangles, the light trajectories can be described by interval exchange transformations. I will explain this surprising correspondence, give related results, and show computer simulations of the system.
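For readers unfamiliar with the objects involved, here is a minimal Python sketch of a general interval exchange transformation (an illustration only; the specific IETs arising from tiling billiards are not computed here).

```python
from bisect import bisect_right
from itertools import accumulate

def make_iet(lengths, perm):
    """Interval exchange on [0, 1): cut into subintervals of the given
    lengths (summing to 1) and reassemble them in the order given by perm,
    where perm[k] is the index of the k-th subinterval after the exchange."""
    assert abs(sum(lengths) - 1.0) < 1e-12
    starts = [0.0] + list(accumulate(lengths))[:-1]                   # left endpoints before
    new_starts = [0.0] + list(accumulate(lengths[i] for i in perm))[:-1]  # and after
    shift = {i: new_starts[k] - starts[i] for k, i in enumerate(perm)}
    def T(x):
        i = bisect_right(starts, x) - 1     # which subinterval x lies in
        return x + shift[i]
    return T

# The 2-interval exchange swapping [0, a) and [a, 1) is rotation by 1 - a:
a = 0.3
T = make_iet([a, 1 - a], perm=[1, 0])       # T(x) = (x + 0.7) mod 1
```

Each point is simply translated by the displacement of the subinterval containing it, which is why IETs are the natural one-dimensional models for these billiard trajectories.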
## Unique ergodicity of geodesic flow in an infinite translation surface Rafi, Kasra | CIRM H
Multi angle
Research talks;Dynamical Systems and Ordinary Differential Equations
The behaviour of infinite translation surfaces is, in many regards, very different from the finite case. For example, the geodesic flow is often not recurrent or is not even defined for infinite time in a generic direction.
However, we show that if one focuses on a class of infinite translation surfaces that excludes the obvious counter-examples, one can adapt the proof of Kerckhoff, Masur, and Smillie and show that the geodesic flow is uniquely ergodic in almost every direction. We call this class of surfaces essentially finite.
(joint work with Anja Randecker).
## Exemple d'Arnoux-Yoccoz, fractal de Rauzy, problème de Novikov : brins d'une guirlande éternelle Hubert, Pascal | CIRM H
Multi angle
Research schools;Analysis and its Applications;Combinatorics;Dynamical Systems and Ordinary Differential Equations;Number Theory
## The unsolved problems of Halmos Weiss, Benjamin | CIRM H
Multi angle
Research talks;Dynamical Systems and Ordinary Differential Equations
Sixty years ago Paul Halmos concluded his Lectures on Ergodic Theory with a chapter Unsolved Problems which contained a list of ten problems. I will discuss some of these and some of the work that has been done on them. He considered actions of $\mathbb{Z}$ but I will also widen the scope to actions of general countable groups.
## Primes with missing digits Maynard, James | CIRM H
Multi angle
Research talks;Number Theory
We will talk about recent work showing there are infinitely many primes with no $7$ in their decimal expansion. (And similarly with $7$ replaced by any other digit.) This shows the existence of primes in a 'thin' set of numbers (sets which contain at most $X^{1-c}$ elements less than $X$), which is typically very difficult.
The proof relies on a fun mixture of tools including Fourier analysis, Markov chains, Diophantine approximation, combinatorial geometry as well as tools from analytic number theory.
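To get a feel for how thin this set is, here is a quick empirical Python check (unrelated to the proof): up to $X = 10^k$ there are $9^k$ positive integers with no digit $7$, i.e. about $X^{\log 9/\log 10} \approx X^{0.954}$ of them, and a sieve counts the primes among them.

```python
# Empirical look at the 'thin' set: integers with no digit 7 in base 10.
# Up to X = 10^k there are 9^k of them (9 choices per digit), i.e. about
# X^(log 9 / log 10) ~ X^0.954 elements, yet many of them are prime.

def sieve(n):
    is_p = bytearray([1]) * (n + 1)
    is_p[0:2] = b"\x00\x00"
    for p in range(2, int(n ** 0.5) + 1):
        if is_p[p]:
            is_p[p * p :: p] = bytearray(len(range(p * p, n + 1, p)))
    return is_p

X = 10**5
is_p = sieve(X)
no7 = [n for n in range(1, X + 1) if "7" not in str(n)]
primes_no7 = [n for n in no7 if is_p[n]]
```

Maynard's theorem says the last list stays infinite as $X \to \infty$, despite the set's density decaying like $X^{-0.046}$.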
## The diameter of the symmetric group: ideas and tools Helfgott, Harald | CIRM
Multi angle
Research talks;Combinatorics;Number Theory
Given a finite group $G$ and a set $A$ of generators, the diameter diam$(\Gamma(G, A))$ of the Cayley graph $\Gamma(G, A)$ is the smallest $\ell$ such that every element of $G$ can be expressed as a word of length at most $\ell$ in $A \cup A^{-1}$. We are concerned with bounding diam$(G) := \max_A$ diam$(\Gamma(G, A))$.
It has long been conjectured that the diameter of the symmetric group of degree $n$ is polynomially bounded in $n$. In 2011, Helfgott and Seress gave a quasipolynomial bound, namely, $O\left (e^{(\log n)^{4+\epsilon}}\right )$. We will discuss a recent, much simplified version of the proof.
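For intuition, the definitions can be checked by brute force on a tiny case; the following Python sketch (illustrative only) computes the word metric on $S_4$, for one particular choice of generating set, by breadth-first search.

```python
# Brute-force illustration of the definitions on a tiny case: G = S_4 as
# tuples (images of 0,1,2,3), with A = {a transposition, a 4-cycle}.  BFS
# over words in A u A^-1 gives the distance from the identity to every
# element; the maximum of those distances is diam(Gamma(G, A)).

from collections import deque

def compose(p, q):                  # (p o q)(i) = p[q[i]]
    return tuple(p[i] for i in q)

def inverse(p):
    inv = [0] * len(p)
    for i, v in enumerate(p):
        inv[v] = i
    return tuple(inv)

def cayley_ball(n, gens):
    gens = list(gens) + [inverse(g) for g in gens]
    e = tuple(range(n))
    dist = {e: 0}
    queue = deque([e])
    while queue:
        g = queue.popleft()
        for s in gens:
            h = compose(g, s)
            if h not in dist:
                dist[h] = dist[g] + 1
                queue.append(h)
    return dist

dist = cayley_ball(4, [(1, 0, 2, 3), (1, 2, 3, 0)])   # transposition, 4-cycle
diameter = max(dist.values())
```

Computing diam$(G)$ itself would require maximizing over all generating sets $A$, which is already infeasible for modest $n$; this is what makes the conjecture hard to probe experimentally.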
## Ergodicity of the Liouville system implies the Chowla conjecture Frantzikinakis, Nikos | CIRM H
Multi angle
Research talks;Dynamical Systems and Ordinary Differential Equations;Number Theory
The Chowla conjecture asserts that the signs of the Liouville function are distributed randomly on the integers. Reinterpreted in the language of ergodic theory this conjecture asserts that the Liouville dynamical system is a Bernoulli system. We prove that ergodicity of the Liouville system implies the Chowla conjecture. Our argument has an ergodic flavor and combines recent results in analytic number theory, finitistic and infinitary decomposition results involving uniformity norms, and equidistribution results on nilmanifolds.
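For concreteness, here is a small Python sketch (illustrative, unrelated to the proof) of the Liouville function and one of the two-point correlations that the Chowla conjecture predicts should vanish asymptotically.

```python
# The Liouville function lambda(n) = (-1)^Omega(n), where Omega(n) counts
# prime factors with multiplicity.  Chowla's conjecture predicts e.g. that
# (1/N) * sum_{n <= N} lambda(n) * lambda(n+1) tends to 0 as N grows.

def liouville(n):
    count = 0
    d = 2
    while d * d <= n:
        while n % d == 0:
            n //= d
            count += 1
        d += 1
    if n > 1:                 # leftover prime factor
        count += 1
    return -1 if count % 2 else 1

N = 10**4
lam = [0] + [liouville(n) for n in range(1, N + 2)]            # lam[n] for 1 <= n <= N+1
corr = sum(lam[n] * lam[n + 1] for n in range(1, N + 1)) / N   # empirically small
```

"Distributed randomly" means exactly that all such multi-point correlations vanish, the same behaviour one would see for independent random signs.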
## Multiple mixing and Ratner property in area-preserving flows Ulcigrai, Corinna | CIRM H
Multi angle
Research talks;Dynamical Systems and Ordinary Differential Equations
## Limits of zeroes of holomorphic differential on stable nodal Riemann surfaces Grushevsky, Samuel | CIRM H
Multi angle
Research talks;Dynamical Systems and Ordinary Differential Equations;Algebraic and Complex Geometry
We discuss the current status of the problem of understanding the closures of the strata of curves together with a differential with a prescribed configuration of zeroes, in the Deligne-Mumford moduli space of stable curves.
https://meta.stackexchange.com/questions/198488/missing-space-and-extra-comma-in-search-results

# Missing space and extra comma in search results
A couple of days ago I saw an edit of a post essentially only changing the team address to/from stackexchange.com. I couldn't remember which way, so I searched for it and got this back:
A space is missing after both "for"s, and the `,` at the end should probably be a `.`.
(As the reply indicates you'll need to search for the address without "" to reproduce this.)
• Added a link to the page, just so lazier people than me can click! – hjpotter92 Sep 26 '13 at 19:57
• Could not find results, but shows 27 results. Meta SO isn't a very good liar! – gitsitgo Sep 26 '13 at 20:18
• Never saw that behavior, maybe some experiment?? – Shadow The Dragon Wizard Sep 26 '13 at 20:25
• @gitsitgo I almost added a similar remark to my original question, as the text does seem non-sensical. The point here is obviously the quotes. This search-string won't reproduce it as it will search with the quotes in the first try. – user213634 Sep 26 '13 at 21:23
• @AndersUP Yeah, I was doing some repro test case earlier and noticed that as well. Also they seem to do some special parsing for team@stackexchange.com specifically, because if you search for any other email, it will just search for the string with the '@' removed. – gitsitgo Sep 27 '13 at 0:56
https://dmoj.ca/problem/dmopc17c5p6

## DMOPC '17 Contest 5 P6 - Bridges
View as PDF
Points: 20 (partial)
Time limit: 2.5s
Memory limit: 256M
Author:
Problem type
Allowed languages
Ada, Assembly, Awk, Brain****, C, C#, C++, COBOL, CommonLisp, D, Dart, F#, Forth, Fortran, Go, Groovy, Haskell, Intercal, Java, JS, Kotlin, Lisp, Lua, Nim, ObjC, OCaml, Octave, Pascal, Perl, PHP, Pike, Prolog, Python, Racket, Ruby, Rust, Scala, Scheme, Sed, Swift, TCL, Text, Turing, VB, Zig
You are the leader of an island nation. Your nation consists of $N$ islands conveniently labelled $1$ to $N$. Each island $i$ has $p_i$ inhabitants. Currently, the only way of travelling between any two islands is by sea. You have created plans to build $N-1$ bridges labelled $1$ to $N-1$ to connect your nation. Specifically, bridge $i$ will connect islands $i$ and $i+1$ for all $1 \le i \le N-1$. Now all that's left is to build these bridges and the people of your nation will rejoice!
The bridges will be built one at a time. Once each bridge is built, an opening ceremony will be held for that bridge. The bridge will be allowed for public use once the opening ceremony ends. At the opening ceremony for bridge $i$, all inhabitants who can currently reach island $i$ by land (including already-built bridges) will gather on one side of the bridge and all inhabitants who can currently reach island $i+1$ by land (including already-built bridges) will gather on the other. This ceremony will generate a unity value, which is the product of the number of inhabitants on each side.
This process doesn't seem very interesting, so you decide to play a game with one of your advisors to make things more fun. Your advisor has chosen an order to build the bridges. They have determined the unity values which will result from this order. Your task is to find an order to build the bridges which obtains the same array of unity values.
However, your advisor believes that simply giving you their array of unity values will make it too easy. Instead, you are allowed to ask them up to $Q$ questions. You will give your advisor an order to build the bridges - a permutation of $1, 2, \dots, N-1$. Say that $b_i$ is the unity value of bridge $i$ from the advisor's construction order and $a_i$ is the unity value of bridge $i$ from your construction order. Your advisor will only tell you $\sum_{i=1}^{N-1} |a_i - b_i|$. Note that $a_i$ and $b_i$ are not the unity values of the $i$-th bridge in the construction order. They are the unity values of bridge $i$, once it has been constructed.
#### Input Specification
The first line will contain two space-separated integers $N$ and $Q$.
The next line will contain $N$ space-separated integers. The $i$-th of these will be $p_i$, the number of inhabitants on island $i$.
#### Interaction
This is an interactive problem. After reading the initial two lines of input, your program can make queries.
Each query must be a single line containing $N-1$ space-separated integers. These integers must be a permutation of $1, 2, \dots, N-1$ indicating the order in which the bridges should be built. The first integer will indicate the index of the first bridge built, and so on.
After printing and flushing, a new line with a single integer will appear as input. It will be the value $\sum_{i=1}^{N-1} |a_i - b_i|$ as described in the problem statement.
You cannot make more than $Q$ queries. Also, you do not need to use all $Q$ queries.
Your program will be deemed correct if the value is ever $0$ for your queries. The judge will halt interaction once it has printed $0$.
Note: To flush, you can use fflush(stdout); in C++, or System.out.flush(); in Java, or import sys; sys.stdout.flush() in Python 2/3. For other languages, search in its documentation.
#### Sample Interaction
>>> denotes your output. You should not print this out in your actual program.
5 10
1 2 3 2 1
>>> 1 2 3 4
12
>>> 1 2 4 3
24
>>> 1 3 2 4
0
#### Explanation for Sample
The advisor's order is 3 1 2 4. Then $(b_1, b_2, b_3, b_4) = (2, 15, 6, 8)$.
The first query gives the following unity values: $(a_1, a_2, a_3, a_4) = (2, 9, 12, 8)$, so the advisor replies $|2-2| + |9-15| + |12-6| + |8-8| = 12$.
The second query gives the following unity values: $(2, 9, 18, 2)$, so the advisor replies $24$.
The third query gives the following unity values: $(2, 15, 6, 8)$. This matches the advisor's values exactly, so the advisor replies $0$ and interaction ends.
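The unity values of any build order, and hence the advisor's reply, can be simulated offline with a union-find structure. The following Python sketch (not a full interactive solution; the function names are mine) reproduces the sample above.

```python
# Simulate one build order on the path of islands 1..N (bridge i joins
# islands i and i+1) and compute every bridge's unity value with union-find.

def unity_values(pop, order):
    n = len(pop)                       # pop[k] = inhabitants of island k+1
    parent = list(range(n))
    size = pop[:]                      # inhabitants per connected component
    def find(x):
        while parent[x] != x:
            parent[x] = parent[parent[x]]   # path halving
            x = parent[x]
        return x
    vals = [0] * (n - 1)
    for br in order:                   # build bridge br: joins islands br and br+1
        ra, rb = find(br - 1), find(br)
        vals[br - 1] = size[ra] * size[rb]
        parent[ra] = rb
        size[rb] += size[ra]
    return vals

pop = [1, 2, 3, 2, 1]                          # populations from the sample
b = unity_values(pop, [3, 1, 2, 4])            # the advisor's order: [2, 15, 6, 8]
a = unity_values(pop, [1, 2, 3, 4])            # the first sample query: [2, 9, 12, 8]
score = sum(abs(x - y) for x, y in zip(a, b))  # advisor's reply: 12, as in the sample
```

Note that `vals` is indexed by bridge number, not by build position, matching the statement's warning about $a_i$ and $b_i$.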
https://stacks.math.columbia.edu/tag/02JS

Lemma 29.28.2. Let $f : X \to Y$ and $g : Y \to S$ be morphisms of schemes. Let $x \in X$ and set $y = f(x)$, $s = g(y)$. Assume $f$ and $g$ locally of finite type. Then
$\dim _ x(X_ s) \leq \dim _ x(X_ y) + \dim _ y(Y_ s).$
Moreover, equality holds if $\mathcal{O}_{X_ s, x}$ is flat over $\mathcal{O}_{Y_ s, y}$, which holds for example if $\mathcal{O}_{X, x}$ is flat over $\mathcal{O}_{Y, y}$.
Proof. Note that $\text{trdeg}_{\kappa (s)}(\kappa (x)) = \text{trdeg}_{\kappa (y)}(\kappa (x)) + \text{trdeg}_{\kappa (s)}(\kappa (y))$. Thus by Lemma 29.28.1 the statement is equivalent to
$\dim (\mathcal{O}_{X_ s, x}) \leq \dim (\mathcal{O}_{X_ y, x}) + \dim (\mathcal{O}_{Y_ s, y}).$
For this see Algebra, Lemma 10.112.6. For the flat case see Algebra, Lemma 10.112.7. $\square$
https://physics.stackexchange.com/questions/12487/why-cant-we-feel-the-earth-turning/12489

# Why can't we feel the Earth turning?
The Earth turns with a very high velocity, both around its own axis and around the Sun. So why can't we feel it turning, while we can still feel an earthquake?
• I guess it's the same way you can't "feel" that you are driving 100KM/h in a car, you only "feel" acceleration or deceleration. Jul 20, 2011 at 10:37
• @David Freitas: That's a pseudo-explanation and a terrible analogy. Jul 20, 2011 at 18:39
• @Qmechanic Agree. Jul 21, 2011 at 1:08
• @David Freitas: You are under acceleration when the earth is turning, though... Jul 21, 2011 at 19:16
• @Pieter Müller: i) When one drives on a road there are all kinds of vibrations; ii) if the car is on the rotating Earth, one would have to assume that the road is an inertial frame, which is the very assumption that OP is questioning in the first place; iii) or if we imagine that the "car" is really a spaceship in empty space, then David Freitas is comparing the fact one cannot feel the velocity of the spaceship, due to Galilean invariance of inertial frames, with the unrelated fact that one cannot feel the centrifugal acceleration on the surface of Earth, which is in an accelerated frame. Nov 5, 2012 at 16:31
Because the rotation of the earth is very smooth and doesn't change, the centripetal acceleration we feel is very nearly constant. This means that the (small) centrifugal force from the rotation gets added to gravity to make up the "background force" we don't notice.
Earthquakes are not at all smooth and the accelerations involved are large and change direction a lot. This makes it easy to feel them.
Vi Hart has a good explanation here.
• The rate of change in acceleration is sometimes called “jerk”. It can be used to quantify how much passengers on a vehicle are shaken. Jul 20, 2011 at 20:05
– Dan
Feb 2, 2012 at 4:17
• @ldog: Furthermore, in general relativity, gravity is just a kinematic effect caused by the observer not being in an inertial reference frame.
– Dan
Feb 10, 2012 at 20:11
• @Dan: That's the definition of a fictious force. And in GR, an inertial reference frame means a free-ly falling reference frame. Jul 17, 2013 at 10:20
• @Dimension10: My issue is with the word "fictitious". A lot of people seem to think it means that they don't really exist, or are purely imaginary. I would argue that they are forces in the same sense that phonons are particles. I tend to prefer the terms "virtual" and "kinematic", but I don't know how widely used they are.
– Dan
Jul 17, 2013 at 16:22
Dan's answer is essentially good, but misses one effect: the Coriolis effect. You can imagine a planet spinning much more rapidly than the Earth, but at a constant angular speed. On that quickly rotating planet, Dan's explanation would still stand, but as soon as one moves, one would feel a lateral Coriolis force.
The Coriolis acceleration is $2\vec{\Omega}\times\vec v$, where $\vec{\Omega}$ is the (vectorial) angular frequency of the planet's rotation and $\vec v$ the speed of the object moving. For an object moving at the speed of sound (340 m/s) near the Earth's pole, where the effect is maximum, the Coriolis acceleration is $$2\frac{2\pi}{24\times60\times60}\times 340 \simeq \frac{12\times 340}{24\times 3600}\sim \frac1{20} = 5\times10^{-2} \mathrm{m}\cdot\mathrm{s}^{-2}.$$ This corresponds to an acceleration which is half a percent of the gravity acceleration, for a situation which is already quite far from everyday life.
This small effect can accumulate over long distances and can have visible effects, notably at meteorological scales. In some sense, we feel the Earth's rotation when we feel the dominant wind direction in our region. The parameter characterizing the intensity of the Coriolis effect for a phenomenon is the Rossby number, which is big if the Coriolis effect is negligible. If the phenomenon you analyse has a typical speed $v$ and occurs over a distance $L$, the Rossby number is essentially proportional to the ratio of the rotation period (24 h in our case) to the time $L/v$ it takes to cross the typical distance.
For meteorological depressions, the wind takes several days to go over the thousands of kilometres they span, and the Coriolis effect has an important effect. To really feel the effect in everyday life, one would need to be on a planet with a day of a few seconds, like the Little Prince's lamplighter's planet! Of course, if you don't live on a rapidly rotating asteroid, you can see the effect on a carousel.
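The estimates in this answer are easy to reproduce; the following Python sketch plugs in rough numbers (the length and speed scales for the two Rossby-number examples are my own illustrative choices) for the Coriolis acceleration at the speed of sound and for the Rossby numbers of a car trip versus a mid-latitude weather system.

```python
import math

omega = 2 * math.pi / 86400      # Earth's angular frequency, rad/s (1 day)
f = 2 * omega                    # Coriolis parameter 2*Omega at the pole, ~1.45e-4 1/s

# Object moving at the speed of sound near the pole (the worst case above):
a_sound = f * 340                # ~5e-2 m/s^2, about half a percent of g

# Rossby number Ro ~ v / (L * f): big -> Coriolis negligible, small -> dominant.
ro_car = 30 / (1e4 * f)          # 30 m/s over 10 km (illustrative): Ro ~ 20
ro_storm = 10 / (1e6 * f)        # 10 m/s over 1000 km (illustrative): Ro ~ 0.07
```

The two Rossby numbers straddle 1 by orders of magnitude, which is exactly why a driver never notices the Coriolis force while a cyclone is shaped by it.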
• Another way to feel (err... actually to see) the rotation of the Earth is with a Foucault pendulum. It's behavior can also be explained in terms of Coriolis forces. Jul 20, 2011 at 19:57
I know it's very late in the game for this question, but this is partly a biology question. We don't feel the rotation of the earth because our brains are biased, they evolved that way. It's not useful to experience/be aware of this rotation day by day, in the same way it isn't useful to be aware of gravity. This is also why this optical illusion works:
Stare at the white dot between the green and red for about 30 seconds, then look at the white dot between the identical desert pictures.
Our brains constantly adjust to what is "normal". In the above illusion, your brain "learns" that the right side of its field of vision is under red illumination, while the left side is under green illumination. Looking at the desert scenes below then reflects this new bias your brain have adopted.
Dan also touched on this in his answer, talking about the "background force" we don't notice. It is vital that the rotation is fairly constant, because our brains need time to adjust. But if somehow the earth suddenly started rotating at a higher but equally constant angular velocity, we all might be struggling a bit for a while.
Why we do feel earthquakes is then easy to understand. The bias allows our brains to effectively block out a background force, but the forces of an earthquake are not part of this background and are therefore felt. It's like receiving an audio signal with a constant background noise. Because it is unhelpful (and annoying) to hear this background noise all the time, you adjust your bias. But irregular noise or a signal will still be heard. This is an earthquake in the analogy.
• While I understand how perceptual adaptation plays a role in us “not feeling” earth’s gravity, I don’t get how it can play a role in feeling forces linked with earth rotation, which are basically too weak to be felt. Aug 13, 2015 at 19:45
• The biological answer is surely the most relevant one. With regard to your last paragraph, when I was in Tokyo for the first time, I noticed that most of the locals were barely aware of earthquakes that to me seemed quite strong. When I pointed that one was underway out to a lady in a shop (who was visibly swaying her body to compensate as she stacked books on shelves) she looked a bit confused at first and then, after a few moments said, "oh yes, so there is!" and then went cheerfully back to her book stacking. Oct 28, 2015 at 0:36
Can we keep this simple ?
The answer is that the acceleration associated with the rotation is very small and it is accounted for in the definition of the vertical.
The acceleration is small: $\omega^2 R \cos \theta \approx 0.032\ \mathrm{m/s^2}$, or about 3 milli-$g$, at the equator; here $\theta$ is the latitude.
The acceleration makes an angle of $\theta$ with the direction to the Earth's centre. The total force is the vector sum of gravity and the centripetal force, and this merely redefines the vertical direction by a tiny amount. If your floor was made plumb level, this tiny deviation is built into your house and city.
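A quick numerical check of this estimate (a sketch: the angular velocity from the sidereal day and the mean Earth radius below are standard figures, not taken from the answer, so the result differs slightly from the quoted 0.032):

```python
import math

# Centripetal acceleration felt at latitude theta due to Earth's rotation:
# a = omega^2 * R * cos(theta).
OMEGA = 2 * math.pi / 86164     # Earth's angular velocity (sidereal day), rad/s
R_EARTH = 6.371e6               # mean Earth radius, m
G = 9.81                        # surface gravity, m/s^2

def spin_acceleration(latitude_deg):
    """Centripetal acceleration (m/s^2) from Earth's spin at a given latitude."""
    return OMEGA**2 * R_EARTH * math.cos(math.radians(latitude_deg))

a_eq = spin_acceleration(0.0)
print(f"equator: {a_eq:.3f} m/s^2, i.e. {1000 * a_eq / G:.1f} milli-g")
print(f"45 deg:  {spin_acceleration(45.0):.3f} m/s^2")
```

The value vanishes at the poles and is largest at the equator, which is why the answer quotes the equatorial figure.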
We don't feel the Earth spin because we, the atmosphere, skyscrapers, and everything else are spinning along with the Earth at the same constant speed.
• That's too simplistic. If you are in a car that is turning, you can still feel it, even with your eyes shut and even though everything around you that you can feel is turning with you. The real answer is that we can feel turning, just that the earth is turning so slowly that it is below our human perception limit. Jan 7, 2014 at 17:01
• This isn't a simplification: it's just completely wrong. The fact that everything we see is moving with the same acceleration and speed as us doesn't make us not feel the force in any way.
– user191954
Jun 24, 2018 at 11:31 | 2022-05-29 10:47:27 | {"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 1, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.5475698709487915, "perplexity": 524.5655972142032}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2022-21/segments/1652662644142.66/warc/CC-MAIN-20220529103854-20220529133854-00774.warc.gz"} |
http://thephysicsvirtuosi.com/posts/q-factors.html | # Q Factors
When I get home and walk in my door, I hook my keys, which I keep on a carabiner, onto a binder clip that I've clipped onto my window sill. It's a great way to never lose your keys. But one thing I always notice is that when I hook it on, it swings, and every time it swings it makes a click. This you might expect. What always surprises me is how long the keys keep swinging. They seem to swing for a surprisingly long time, minutes. It always catches me off guard. In order to explain why, I get to talk about Q factors.

The Q factor stands for quality factor. It's a nondimensional parameter (my favorite kind) that tells you how pure your oscillator is. Let's back up a step. Lots of things in the world oscillate. Think about a swing. If you get going on the swing and then stop rocking, you swing back and forth, back and forth, but eventually you come to a stop. Imagine swinging on a rusted old swing set. Now give the joint where the swing swings from a nice shot of WD-40. You can imagine that if you repeated the experiment (get swinging to some height and then stop pumping), you'd continue to swing longer. Why? Because the Q factor has increased. You're swinging on a higher-quality swing. Mathematically it's defined to be $$Q = 2 \pi \times \frac{ U }{ \Delta U }$$ or 2π times the total energy stored in the oscillator divided by the energy lost in a cycle. Another way to gauge the Q factor is that it tells you how strongly the oscillations get damped: as a number, it tells you how many periods need to go by for the energy of the oscillations to be damped by a factor of $$\frac{1}{e^{2\pi}} \sim \frac{1}{535}$$ This allows you to estimate Q factors for everyday objects. A factor of 1/535 is pretty near my threshold for observing a lot of things. What does a factor of 535 mean in terms of sound, one of the most common ways I interact with things around me?

Well, sound is measured in decibels, a logarithmic scale, where a factor of 535 in the power output by something corresponds to a change in the decibels of $$dB = 10 \log_{10} \frac{1}{535} \sim -27$$ What does a change of 27 decibels mean? Well, wikipedia tells me that a calm room is somewhere between 20 and 30 decibels, whereas a TV set about a meter away is at about 60 dB. So that tells me that if something like my keys starts off making a sound comparable to the volume I set my TV at, I can listen to it until it just gets drowned out by the room, and that should give me some estimate for the Q of my keys.

I'll keep you in suspense just a bit longer. I said I was surprised how long the keys swing. In order to put the Q that I measured in context, I'll tell you about a few other Qs of things you might have some experience with. Most swinging things that I seem to remember coming in contact with have quality factors of about 10 or so: swings, meter sticks left to swing, stuff like that. Tuning forks, which are built to be accurate resonators, have quality factors of about a thousand. The quartz crystal in your watch, which really is supposed to be a good oscillator, has a quality factor of 10 thousand or so. One of the best Q factors achieved by man is 10^14.

So, what was the Q factor of my keys? I counted the times I could hear them swinging and got a count of 435. This number isn't to be taken too seriously, but it indicates that my swinging keys have a quality factor of something between 400 and 500, which is pretty darn good for something that wasn't engineered. That explains why it always surprises me: the keys always seem to swing much longer than I would anticipate.
| 2017-04-23 23:31:39 | {"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 3, "mathjax_inline_tex": 0, "mathjax_display_tex": 1, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.5966726541519165, "perplexity": 562.9836712471575}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2017-17/segments/1492917118851.8/warc/CC-MAIN-20170423031158-00433-ip-10-145-167-34.ec2.internal.warc.gz"} |
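The back-of-the-envelope numbers in the post (the 1/535 energy factor, the ~27 dB drop, and the click-counting estimate of Q) can be checked in a few lines; this is a sketch of the post's reasoning, not code from the author:

```python
import math

# After Q periods, an oscillator's stored energy has fallen by a factor of
# e^(-2*pi), i.e. to roughly 1/535 of its starting value; in power terms
# that is a drop of about 27 dB.
energy_ratio = math.exp(-2 * math.pi)       # energy remaining after Q periods
db_drop = 10 * math.log10(energy_ratio)     # the same ratio in decibels

print(f"energy remaining: 1/{1 / energy_ratio:.0f}  ->  {db_drop:.1f} dB")

# If each swing of the keys makes one audible click, the number of clicks
# heard before the sound sinks into the room noise (~27 dB below the start)
# is roughly Q itself.
clicks_heard = 435
print(f"estimated Q of the keys: about {clicks_heard}")
```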
https://www.gradesaver.com/textbooks/science/physics/physics-for-scientists-and-engineers-a-strategic-approach-with-modern-physics-4th-edition/chapter-15-oscillations-exercises-and-problems-page-415/16 | ## Physics for Scientists and Engineers: A Strategic Approach with Modern Physics (4th Edition)
(a) $T = 0.50~s$ (b) $A = 5.5~cm$ (c) $v_{max} = 0.69~m/s$ (d) $E = 0.048~J$
(a) $T = \frac{1}{f}$ $T = \frac{1}{2.0~Hz}$ $T = 0.50~s$ (b) We can find the angular frequency as: $\omega = 2\pi~f$ $\omega = (2\pi)~(2.0~Hz)$ $\omega = 12.57~rad/s$ We can find the amplitude as: $A = \sqrt{x^2+\frac{v^2}{\omega^2}}$ $A = \sqrt{(0.050~m)^2+\frac{(-0.30~m/s)^2}{(12.57~rad/s)^2}}$ $A = 0.055~m = 5.5~cm$ (c) We can find the maximum speed as: $v_{max} = A~\omega$ $v_{max} = (0.055~m)(12.57~rad/s)$ $v_{max} = 0.69~m/s$ (d) We can find the total energy in the system as: $E = \frac{1}{2}mv_{max}^2$ $E = \frac{1}{2}(0.200~kg)(0.69~m/s)^2$ $E = 0.048~J$ | 2018-11-14 12:06:44 | {"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.87042236328125, "perplexity": 166.65033102268453}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.3, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2018-47/segments/1542039741979.10/warc/CC-MAIN-20181114104603-20181114130603-00377.warc.gz"} |
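The arithmetic in parts (a) through (d) can be verified directly; the mass, frequency, position, and velocity below are read off the worked solution:

```python
import math

# Recompute the solution's numbers: a 0.200 kg oscillator at f = 2.0 Hz,
# observed at x = 5.0 cm moving at v = -30 cm/s.
m, f = 0.200, 2.0
x, v = 0.050, -0.30

T = 1 / f                               # (a) period, s
omega = 2 * math.pi * f                 # angular frequency, rad/s
A = math.sqrt(x**2 + (v / omega)**2)    # (b) amplitude, m
v_max = A * omega                       # (c) maximum speed, m/s
E = 0.5 * m * v_max**2                  # (d) total energy, J

print(f"T = {T:.2f} s, A = {100*A:.1f} cm, v_max = {v_max:.2f} m/s, E = {E:.3f} J")
```

Carrying the unrounded v_max through gives E ≈ 0.0485 J, consistent with the solution's 0.048 J.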
http://www.maa.org/publications/periodicals/convergence/what-is-00-george-baron | # What is 0^0? - George Baron
Author(s):
Michael Huber and V. Frederick Rickey
Defining powers is often carelessly done. Almost thirty years before Libri's first paper, George Baron published "A short Disquisition, concerning the Definition, of the word Power, in Arithmetic and Algebra" in The Mathematical Correspondent (1804). In this paper [1], Baron begins the discussion with the following definition:
The powers of any number, are the successive products, arising from unity, continually multiplied, by that number.
As an example, he writes that 1 × 5 = 5, which is the first power of 5, and 1 × 5 × 5 = 25, which is the second power of 5, etc. The first, second, etc., powers are then conveniently expressed as 5^1, 5^2, etc. In the same manner, the powers of any number x might be represented as x^1, x^2, etc., in which x^1 = 1 × x, x^2 = x^1 × x, etc. After stating a few corollaries, Baron writes:
Let us, therefore, next inquire, whether the same definition, will not lead us to a clear and intelligible solution, of the mysterious paradoxes, resulting from the common definition, when applied, to what is denominated, the nothingth power of numbers.
Baron then addresses the rules for dividing powers (look back to the argument from the high school text), but he develops a different conclusion:
If the multiplication by x, be abstracted from the first power of x, by means of division; the power will become nothing but the unit will remain: for $$\frac{x^1}{x} = \frac{1\times x}{x} =1,$$ and hence it is plain that x^0 = 1, when x represents any number whatever. But since the number x, is here unlimited with regard to greatness, it follows, that, the nothingth power of an infinite number is equal to a unit.
Baron gives credit to both William Emerson (1780) [3] and Jared Mansfield (1802) [9] who wrote on the subject of "nothing." Baron takes their arguments one step further and postulates that the number x can be any number, great or small:
To pursue the application of our definition, to quantity in the ultimate extremity of smallness, let us suppose x to represent any fractional quantity; or in other words, let x denote any magnitude, expressed in numbers, by means of some part of its measuring unit: then by the definition x^1 = 1 × x. Let now this multiplication by x, be abstracted; and for the reasons heretofore advanced, we have x^0 = 1. Now since x here represents a fractional quantity, independent of any limitation, in respect to smallness; we may therefore suppose x, by means of continual diminution, or decrease, to pass from its present value, through every degree of smallness, until it become nothing; then it will be evident, that, during this diminution or decrease of x, x^0 will continue equal to an invariable unit; and that precisely at the instant, when x becomes nothing, x^0, or 0^0 = 1.
Baron never mentions the term indeterminate form, and he in fact ends his treatise with the following:
Also, since x^0 = 1, whatever be the value of x; of consequence; in every system of logarithms, the logarithm of 1 = 0.
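As an aside, Baron's conclusion that x^0 = 1 for every x, including 0^0 = 1, matches the convention most modern programming languages adopt for their power operators; in Python, for instance:

```python
import math

# Both Python's integer power and the C-library float pow follow the
# x^0 = 1 convention for every x, including x = 0.
print(0 ** 0)                 # integer power
print(math.pow(0.0, 0.0))     # IEEE-style float power
print(all(x ** 0 == 1 for x in (5, -3, 0.25, 0)))
```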
Michael Huber and V. Frederick Rickey, "What is 0^0? - George Baron," Loci (July 2012) | 2015-03-05 00:34:43 | {"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 1, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.8314200639724731, "perplexity": 1296.6281781127639}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.3, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2015-11/segments/1424936463676.59/warc/CC-MAIN-20150226074103-00013-ip-10-28-5-156.ec2.internal.warc.gz"} |
https://www.transtutors.com/questions/paul-sabin-organized-sabin-electronics-10-years-ago-to-produce-and-sell-several-elec-2562834.htm | # Paul Sabin organized Sabin Electronics 10 years ago to produce and sell several electronic device...
Paul Sabin organized Sabin Electronics 10 years ago to produce and sell several electronic devices on which he had secured patents. Although the company has been fairly profitable, it is now experiencing a severe cash shortage. For this reason, it is requesting a $680,000 long-term loan from Gulfport State Bank, $190,000 of which will be used to bolster the Cash account and $490,000 of which will be used to modernize equipment. The company's financial statements for the two most recent years follow.

Sabin Electronics
Comparative Balance Sheet

                                    This Year    Last Year
Assets
Current assets:
  Cash                             $  135,000   $  330,000
  Marketable securities                     0       15,000
  Accounts receivable, net            711,000      480,000
  Inventory                         1,125,000      775,000
  Prepaid expenses                     38,000       40,000
Total current assets                2,009,000    1,640,000
Plant and equipment, net            2,245,000    1,550,000
Total assets                       $4,254,000   $3,190,000

Liabilities and Stockholders' Equity
Liabilities:
  Current liabilities              $  850,000   $  400,000
  Bonds payable, 12%                  800,000      800,000
Total liabilities                   1,650,000    1,200,000
Stockholders' equity:
  Common stock, $15 par               870,000      870,000
  Retained earnings                 1,734,000    1,120,000
Total stockholders' equity          2,604,000    1,990,000
Total liabilities and equity       $4,254,000   $3,190,000 | 2018-06-23 10:21:05 | {"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.3375522196292877, "perplexity": 14978.251364324442}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true,
"remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2018-26/segments/1529267864957.2/warc/CC-MAIN-20180623093631-20180623113631-00086.warc.gz"} |
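A natural first step in the kind of loan analysis this problem sets up is a liquidity check on the balance-sheet totals; the ratio choice below is illustrative, not taken from the problem statement:

```python
# Working capital and current ratio from the balance-sheet totals
# (figures are from the problem data).
data = {
    "this_year": {"current_assets": 2_009_000, "current_liabilities": 850_000},
    "last_year": {"current_assets": 1_640_000, "current_liabilities": 400_000},
}

for year, d in data.items():
    working_capital = d["current_assets"] - d["current_liabilities"]
    current_ratio = d["current_assets"] / d["current_liabilities"]
    print(f"{year}: working capital = ${working_capital:,}, "
          f"current ratio = {current_ratio:.2f}")
```

The falling current ratio (despite rising current assets) reflects the cash shortage described in the problem.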
http://lmazy.verrech.net/tag/cite-list/ | # Tag Archives: Cite & List
## New Plugin: Cite & List
A simple bibliography with some citations
Finally! I have needed—and wanted to build—this WordPress plugin for a long time, and now it is done. Because I am so bad at making up names, I called it Cite & List because that is what you can do with it: cite articles and list your publications. Both tasks are easily done with shortcodes; users of $$\LaTeX$$ will feel right at home. You can also have everything look exactly the way you want it to, thanks to the use of bib2tpl.
Head over to the plugin repository and have a try! I like how the plugin turned out; hopefully I will have ample opportunity to use it, that is, to get to writing more sciencey posts.
https://tex.stackexchange.com/questions/526899/problem-with-scrartcl-fontenc-and-algorithm2e | Problem with scrartcl, fontenc and algorithm2e
I have a problem with scrartcl (and its option parskip=half), fontenc, and algorithm2e:
\documentclass[parskip=half]{scrartcl}
\usepackage{fontenc}
\usepackage[boxed]{algorithm2e}
\begin{document}
\begin{algorithm}
\KwData{test}
\end{algorithm}
\end{document}
If you compile this document (with pdflatex), then there is no padding between the text and the frame of the algorithm. If you remove parskip=half or \usepackage{fontenc} (or both), then everything is fine.
What is happening here? Did I do anything wrong? How to fix this problem?
• Unrelated: It's pretty pointless to load fontenc without option(s). – campa Feb 4 '20 at 8:58
• Of course, this is just for the MWE. In the real document, I use T1. – gerw Feb 4 '20 at 8:59
• Interesting fact: it depends on the loading order. Loading fontenc after algorithm2e works too. – campa Feb 4 '20 at 9:01
• You can also replace \usepackage{fontenc} by \selectfont and you will observe the same... – gerw Feb 4 '20 at 9:04
• The margin is calculated with \parindent, and algorithm2e obviously doesn't expect this to be zero when the package is loaded. – Ulrike Fischer Feb 4 '20 at 9:10
To solve the problem, I manually set the margin of the algorithm via \setlength{\algomargin}{1em}. | 2021-07-29 00:37:54 | {"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 1, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.9209915399551392, "perplexity": 2422.410052107412}, "config": {"markdown_headings": false, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2021-31/segments/1627046153803.69/warc/CC-MAIN-20210728220634-20210729010634-00314.warc.gz"} |
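Putting the thread's pieces together, a version of the MWE with the workaround applied might look like this (the 1em value is the asker's choice; T1 stands in for the unspecified fontenc option):

```latex
\documentclass[parskip=half]{scrartcl}
\usepackage[T1]{fontenc}
\usepackage[boxed]{algorithm2e}
% algorithm2e derives its margin from \parindent, which parskip=half
% sets to zero, so set the margin explicitly:
\setlength{\algomargin}{1em}
\begin{document}
\begin{algorithm}
\KwData{test}
\end{algorithm}
\end{document}
```

Per the comments, loading fontenc after algorithm2e also avoids the problem, since \parindent is then still nonzero when algorithm2e computes its margin.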
https://par.nsf.gov/biblio/10349833-galaxygalaxy-lensing-des-cmass-catalogue-measurement-constraints-galaxy-matter-cross-correlation | Galaxy–galaxy lensing with the DES-CMASS catalogue: measurement and constraints on the galaxy-matter cross-correlation
ABSTRACT The DMASS sample is a photometric sample from the DES Year 1 data set designed to replicate the properties of the CMASS sample from BOSS, in support of a joint analysis of DES and BOSS beyond the small overlapping area. In this paper, we present the measurement of galaxy–galaxy lensing using the DMASS sample as gravitational lenses in the DES Y1 imaging data. We test a number of potential systematics that can bias the galaxy–galaxy lensing signal, including those from shear estimation, photometric redshifts, and observing conditions. After careful systematic tests, we obtain a highly significant detection of the galaxy–galaxy lensing signal, with total S/N = 25.7. With the measured signal, we assess the feasibility of using DMASS as gravitational lenses equivalent to CMASS, by estimating the galaxy-matter cross-correlation coefficient $r_{\rm cc}$. By jointly fitting the galaxy–galaxy lensing measurement with the galaxy clustering measurement from CMASS, we obtain $r_{\rm cc}=1.09^{+0.12}_{-0.11}$ for the scale cut of $4 \, h^{-1}{\rm \,\,Mpc}$ and $r_{\rm cc}=1.06^{+0.13}_{-0.12}$ for $12 \, h^{-1}{\rm \,\,Mpc}$ in fixed cosmology. By adding the angular galaxy clustering of DMASS, we obtain $r_{\rm cc} = 1.06 \pm 0.10$ for the scale cut of $4 \, h^{-1}{\rm \,\,Mpc}$ and $r_{\rm cc} = 1.03 \pm 0.11$ for $12 \, h^{-1}{\rm \,\,Mpc}$. The resulting …
NSF-PAR ID: 10349833
Journal Name: Monthly Notices of the Royal Astronomical Society, Volume 509, Issue 2, pp. 2033–2047, ISSN 0035-8711
1. ABSTRACT The DES-CMASS sample (DMASS) is designed to optimally combine the weak lensing measurements from the Dark Energy Survey (DES) and redshift-space distortions (RSD) probed by the CMASS galaxy sample from the Baryonic Oscillation Spectroscopic Survey. In this paper, we demonstrate the feasibility of adopting DMASS as the equivalent of CMASS for a joint analysis of DES and BOSS in the framework of modified gravity. We utilize the angular clustering of the DMASS galaxies, cosmic shear of the DES metacalibration sources, and cross-correlation of the two as data vectors. By jointly fitting the combination of the data with the RSD measurements from the CMASS sample and Planck data, we obtain the constraints on modified gravity parameters $\mu _0=-0.37^{+0.47}_{-0.45}$ and $\Sigma _0=0.078^{+0.078}_{-0.082}$. Our constraints of modified gravity with DMASS are tighter than those with the DES Year 1 redMaGiC sample with the same external data sets by 29 per cent for μ0 and 21 per cent for Σ0, and comparable to the published results of the DES Year 1 modified gravity analysis despite this work using fewer external data sets. This improvement is mainly because the galaxy bias parameter is shared and more tightly constrained by both CMASS and DMASS, effectively …
We present cosmological parameter constraints based on a joint modelling of galaxy–lensing cross-correlations and galaxy clustering measurements in the SDSS, marginalizing over small-scale modelling uncertainties using mock galaxy catalogues, without explicit modelling of galaxy bias. We show that our modelling method is robust to the impact of different choices for how galaxies occupy dark matter haloes and to the impact of baryonic physics (at the $\sim 2{{\ \rm per\ cent}}$ level in cosmological parameters) and test for the impact of covariance on the likelihood analysis and of the survey window function on the theory computations. Applying our results to the measurements using galaxy samples from BOSS and lensing measurements using shear from SDSS galaxies and CMB lensing from Planck, with conservative scale cuts, we obtain $S_8\equiv \left(\frac{\sigma _8}{0.8228}\right)^{0.8}\left(\frac{\Omega _\mathrm{ m}}{0.307}\right)^{0.6}=0.85\pm 0.05$ (stat.) using LOWZ × SDSS galaxy lensing, and S8 = 0.91 ± 0.1 (stat.) using a combination of LOWZ and CMASS × Planck CMB lensing. We estimate the systematic uncertainty in the galaxy–galaxy lensing measurements to be $\sim 6{{\ \rm per\ cent}}$ (dominated by photometric redshift uncertainties) and in the galaxy–CMB lensing measurements to be $\sim 3{{\ \rm per\ cent}}$, from small-scale modelling uncertainties including baryonic physics.
3. ABSTRACT We compare predictions for galaxy–galaxy lensing profiles and clustering from the Henriques et al. public version of the Munich semi-analytical model (SAM) of galaxy formation and the IllustrisTNG suite, primarily TNG300, with observations from KiDS + GAMA and SDSS-DR7 using four different selection functions for the lenses (stellar mass, stellar mass and group membership, stellar mass and isolation criteria, and stellar mass and colour). We find that this version of the SAM does not agree well with the current data for stellar mass-only lenses with $M_\ast \gt 10^{11}\, \mathrm{ M}_\odot$. By decreasing the merger time for satellite galaxies as well as reducing the radio-mode active galactic nucleus accretion efficiency in the SAM, we obtain better agreement, both for the lensing and the clustering, at the high-mass end. We show that the new model is consistent with the signals for central galaxies presented in Velliscig et al. Turning to the hydrodynamical simulation, TNG300 produces good lensing predictions, both for stellar mass-only (χ2 = 1.81 compared to χ2 = 7.79 for the SAM) and locally brightest galaxy samples (χ2 = 3.80 compared to χ2 = 5.01). With added dust corrections to the colours it matches the SDSS clustering signal well for red low-mass galaxies. We find that both the …
The combination of galaxy–galaxy lensing (GGL) and galaxy clustering is a powerful probe of low-redshift matter clustering, especially if it is extended to the non-linear regime. To this end, we use an N-body and halo occupation distribution (HOD) emulator method to model the redMaGiC sample of colour-selected passive galaxies in the Dark Energy Survey (DES), adding parameters that describe central galaxy incompleteness, galaxy assembly bias, and a scale-independent multiplicative lensing bias Alens. We use this emulator to forecast cosmological constraints attainable from the GGL surface density profile ΔΣ(rp) and the projected galaxy correlation function wp, gg(rp) in the final (Year 6) DES data set over scales $r_p=0.3\!-\!30.0\, h^{-1} \, \mathrm{Mpc}$. For a $3{{\ \rm per\ cent}}$ prior on Alens we forecast precisions of $1.9{{\ \rm per\ cent}}$, $2.0{{\ \rm per\ cent}}$, and $1.9{{\ \rm per\ cent}}$ on Ωm, σ8, and $S_8 \equiv \sigma _8\Omega _m^{0.5}$, marginalized over all halo occupation distribution (HOD) parameters as well as Alens. Adding scales $r_p=0.3\!-\!3.0\, h^{-1} \, \mathrm{Mpc}$ improves the S8 precision by a factor of ∼1.6 relative to a large scale ($3.0\!-\!30.0\, h^{-1} \, \mathrm{Mpc}$) analysis, equivalent to increasing the survey area by a factor of ∼2.6. Sharpening the Alens prior to $1{{\ \rm per\ cent}}$ …
5. ABSTRACT We describe our non-linear emulation (i.e. interpolation) framework that combines the halo occupation distribution (HOD) galaxy bias model with N-body simulations of non-linear structure formation, designed to accurately predict the projected clustering and galaxy–galaxy lensing signals from luminous red galaxies in the redshift range 0.16 < z < 0.36 on comoving scales 0.6 < rp < 30 $h^{-1} \, \text{Mpc}$. The interpolation accuracy is ≲ 1–2 per cent across the entire physically plausible range of parameters for all scales considered.
We correctly recover the true value of the cosmological parameter S8 = (σ8/0.8228)(Ωm/0.3107)0.6 from mock measurements produced via subhalo abundance matching (SHAM)-based light-cones designed to approximately match the properties of the SDSS LOWZ galaxy sample. Applying our model to Baryon Oscillation Spectroscopic Survey (BOSS) Data Release 14 (DR14) LOWZ galaxy clustering and galaxy-shear cross-correlation measurements made with Sloan Digital Sky Survey (SDSS) Data Release 8 (DR8) imaging, we perform a prototype cosmological analysis marginalizing over wCDM cosmological parameters and galaxy HOD parameters. We obtain a 4.4 per cent measurement of S8 = 0.847 ± 0.037, in 3.5σ tension with the Planck cosmological results of 1.00 ± 0.02. We discuss the possibility of underestimated systematic uncertainties or astrophysical effects that could explain this discrepancy. | 2022-11-29 16:06:24 | {"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.5597941875457764, "perplexity": 3173.910750356716}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 20, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2022-49/segments/1669446710698.62/warc/CC-MAIN-20221129132340-20221129162340-00532.warc.gz"} |
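The abstracts above use slightly different S8 normalisations; as a sketch, the definition quoted in the SDSS joint-modelling abstract can be written as:

```python
# Normalised lensing/clustering amplitude used in the LOWZ x SDSS result:
# S8 = (sigma8 / 0.8228)^0.8 * (Omega_m / 0.307)^0.6
def s8(sigma8, omega_m):
    return (sigma8 / 0.8228) ** 0.8 * (omega_m / 0.307) ** 0.6

# At the pivot values the normalisation is exactly 1 by construction.
print(s8(0.8228, 0.307))
print(s8(0.85, 0.32))   # a nearby cosmology gives a value slightly above 1
```

Note that other abstracts in this list instead use $S_8 \equiv \sigma_8 \Omega_m^{0.5}$ (up to a pivot), so the numerical values are not directly comparable across definitions.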
http://volute.g-vo.org/viewvc/volute/trunk/projects/ivoapub/ivoatex/document.template?revision=5920&view=markup&pathrev=5921 | # Contents of /trunk/projects/ivoapub/ivoatex/document.template
Revision 5404 - (show annotations)
Tue Apr 23 16:07:03 2019 UTC (23 months, 4 weeks ago) by msdemlei
File size: 2272 byte(s)
ivoatex: Minor, template-level corrections.
\documentclass[11pt,a4paper]{ivoa}
\input tthdefs

\title{???? Full title ????}

% see ivoatexDoc for what group names to use here
\ivoagroup{???? group ????}

\author[????URL????]{????Alfred Usher Thor????}
\author{????Fred Offline????}

\editor{????Alfred Usher Thor????}

% \previousversion[????URL????]{????Concise Document Label????}
\previousversion{This is the first public release}


\begin{document}
\begin{abstract}
???? Abstract ????
\end{abstract}


\section*{Acknowledgments}

???? Or remove the section header ????

\section*{Conformance-related definitions}

The words ``MUST'', ``SHALL'', ``SHOULD'', ``MAY'', ``RECOMMENDED'', and
``OPTIONAL'' (in upper or lower case) used in this document are to be
interpreted as described in IETF standard RFC2119 \citep{std:RFC2119}.

The \emph{Virtual Observatory (VO)} is a
general term for a collection of federated resources that can be used
to conduct astronomical research, education, and outreach.
The \href{http://www.ivoa.net}{International
Virtual Observatory Alliance (IVOA)} is a global
collaboration of separately funded projects to develop standards and
infrastructure that enable VO applications.


\section{Introduction}

???? Write something ????

\subsection{Role within the VO Architecture}

\begin{figure}
\centering

% As of ivoatex 1.2, the architecture diagram is generated by ivoatex in
% SVG; copy ivoatex/archdiag-full.xml to archdiag.xml and throw out
% all lines not relevant to your standard.
% Notes don't generally need this. If you don't copy archdiag.xml,
% you must remove archdiag.svg from FIGURES in the Makefile.

\includegraphics[width=0.9\textwidth]{role_diagram.pdf}
\caption{Architecture diagram for this document}
\label{fig:archdiag}
\end{figure}

Fig.~\ref{fig:archdiag} shows the role this document plays within the
IVOA architecture \citep{note:VOARCH}.

???? and so on, LaTeX as you know and love it. ????

\appendix
\section{Changes from Previous Versions}

No previous versions yet.
% these would be subsections "Changes from v. WD-..."
% Use itemize environments.


% NOTE: IVOA recommendations must be cited from docrepo rather than ivoabib
% (REC entries there are for legacy documents only)
\bibliography{ivoatex/ivoabib,ivoatex/docrepo}


\end{document} | 2021-04-20 12:42:10 | {"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 1, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.7918645739555359, "perplexity": 12776.190919949337}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2021-17/segments/1618039398307.76/warc/CC-MAIN-20210420122023-20210420152023-00287.warc.gz"} |
https://www.zbmath.org/?q=an%3A0787.92010 | # zbMATH — the first resource for mathematics
Propagating waves in discrete bistable reaction-diffusion systems. (English) Zbl 0787.92010
Summary: We consider a discrete bistable reaction-diffusion system modeled by $$N$$ coupled Nagumo equations. We develop an asymptotic method to describe the phenomenon of propagation failure. The Nagumo model depends on two parameters: the coupling constant $$d$$ and the bistability parameter $$a$$. We investigate the limit $$a \to 0$$ and $$d(a) \to 0$$ and construct traveling front solutions. We obtain the critical coupling constant $$d=d^*(a)$$ above which propagation is possible and determine the propagation speed $$c=c(d)$$ if $$d>d^*$$.
We investigate two different cases for the initiation of a propagating front solution. Case 1 considers a uniform steady state distribution. A propagating front appears as the result of a fixed boundary condition. Case 2 also considers a uniform steady state distribution but a propagating front appears as the result of a localized perturbation.
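The propagation-failure phenomenon summarized above can be illustrated numerically. The sketch below is a minimal illustration, not the paper's asymptotic method: it forward-Euler integrates $N$ coupled Nagumo equations $du_i/dt = d(u_{i+1} - 2u_i + u_{i-1}) + u_i(u_i - a)(1 - u_i)$ (the standard cubic bistable nonlinearity is an assumption here) starting from a step-like front. With the coupling switched off the front is pinned, while for sufficiently large $d$ it advances:

```python
import numpy as np

def simulate_nagumo_lattice(n=60, d=0.0, a=0.3, dt=0.01, steps=5000):
    """Forward-Euler integration of the coupled Nagumo lattice with a
    front initial condition (left half at the excited state u = 1)."""
    u = np.zeros(n)
    u[: n // 2] = 1.0
    for _ in range(steps):
        lap = np.zeros(n)
        lap[1:-1] = u[2:] - 2.0 * u[1:-1] + u[:-2]  # discrete Laplacian (interior sites)
        u = u + dt * (d * lap + u * (u - a) * (1.0 - u))
    return u

def front_index(u):
    """First lattice site lying below the unstable threshold 1/2."""
    return int(np.argmax(u < 0.5))

pinned = simulate_nagumo_lattice(d=0.0)  # no coupling: propagation failure
moving = simulate_nagumo_lattice(d=1.0)  # strong coupling: the front advances
```

The parameter values (`n`, `a=0.3`, `d=1.0`, the time step) are illustrative choices; the paper's critical coupling $d^*(a)$ separates the two regimes.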
##### MSC:
92C20 Neural biology
34B99 Boundary value problems for ordinary differential equations
34B15 Nonlinear boundary value problems for ordinary differential equations
| 2021-07-28 19:47:08 | {"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 1, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.5157946348190308, "perplexity": 3184.6041514368035}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2021-31/segments/1627046153791.41/warc/CC-MAIN-20210728185528-20210728215528-00459.warc.gz"}
https://www.forfur.com/kmart-mattress-ezmzvqi/c93950-composite-function-calculator | A composite function is a function of a function; composition is done by substituting one function into another function. Consider three sets X, Y and Z and let f : X → Y and g : Y → Z. For example, we plug our h(x) into the position of x in g(x), simplify, and get the following composite function: $$[g\circ h](x)=2(-4x+3)-4=-8x+6-4=-8x+2$$. It is important to know that $$[g\circ h](x)$$ and $$[h\circ g](x)$$ do not have to be equal, but if both are equal to x then g and h are inverse functions. At the click of a button, for example, funtool draws a graph representing the sum, product, difference, or ratio of two functions that you specify; funtool includes a function memory that allows you to store functions for later retrieval.
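The $[g\circ h]$ computation above can be checked with a short Python sketch; this is a minimal illustration assuming the page's example functions g(x) = 2x − 4 and h(x) = −4x + 3:

```python
def compose(f, g):
    """Return the composite function f∘g, i.e. x -> f(g(x))."""
    return lambda x: f(g(x))

g = lambda x: 2 * x - 4    # g(x) = 2x - 4
h = lambda x: -4 * x + 3   # h(x) = -4x + 3

g_of_h = compose(g, h)     # [g∘h](x) = 2(-4x + 3) - 4 = -8x + 2
h_of_g = compose(h, g)     # [h∘g](x) = -4(2x - 4) + 3 = -8x + 19
```

Note that `g_of_h` and `h_of_g` differ, which is exactly the point made above: composition is not commutative in general, and only when both composites equal x are the functions inverses of each other.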
It has been easy so far, but now we must consider the domains of the functions. It is important to get the domain right, or we will get bad results! At times, finding the domain of a composite function can be confusing and difficult to understand; the composition of certain rational functions, for example, may present internal "obstacles" for certain domain values. If we are given two functions, it is possible to create or generate a "new" function by composing one into the other: if you have defined two functions f(x) and g(x), you can write h(x)=(f∘g)(x) to make the composite function. For example, solve the function operation f(x) = 3x, g(x) = 6x − 6, (g ∘ f): set up the composite result function, then find the domain of this new function, and if there are any restrictions on the domain, keep them. Some composite functions can be decomposed in several ways; if so, determine the "inner" and "outer" functions. It is simpler to evaluate a composition at a point, because you can simplify as you go — you'll always just be plugging in numbers and simplifying. The composite function calculator will find the composition of the functions, with steps shown; use the hatch symbol # as the variable when inputting. Given the graphs, or some tables of values, of two functions, evaluate the composition of those functions at a given input.
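Evaluating a composition from tables of values, working from the inside to the outside as described above, can be sketched with plain dicts; the table entries below are hypothetical values invented for the illustration:

```python
# Each "table" maps an input to the function's output, as read off a table of values.
# These sample values are made up, chosen so the composition is defined.
f_table = {1: 6, 2: 9, 3: 10}
g_table = {2: 5, 3: 2, 4: 1}

def evaluate_composite(outer, inner, x):
    """Evaluate (outer ∘ inner)(x): read the inner table first,
    then feed its output into the outer table."""
    return outer[inner[x]]

# (f ∘ g)(3): the inner table gives g(3) = 2, then the outer table gives f(2) = 9.
result = evaluate_composite(f_table, g_table, 3)
```

A `KeyError` here plays the role of a domain restriction: the composition is only defined at inputs whose inner output appears in the outer table.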
Try the free Mathway calculator and problem solver below to practice various math topics. When two functions are combined in such a way that the output of one function becomes the input to another function, this is referred to as a composite function. The composite function f[g(x)] is read as "f of g of x"; we evaluate the inside function first and then use its output as the input to the outside function. f(g(x)) can also be written as (f ∘ g)(x) or fg(x). To obtain the composite function fg(x) from known functions f(x) and g(x), note that f(x) and g(x) must be defined first; you can then enter composite functions such as fg(x) and gf(x). In this lesson, I will go over eight (8) worked examples to illustrate the process involved in function composition; for example, find the composite function between g(x)=2x-4 and h(x)=-4x+3. When working with functions given as tables, we read input and output values from the table entries and always work from the inside to the outside; the steps involved are similar when a function is being evaluated for a given value. An online gof fog calculator finds (fog)(x) and (gof)(x) for the given functions; fog, or f composite of g(x), means plugging g(x) into f(x). For differentiation we will only consider those functions whose outermost function is a basic function of the form u^n, sin u, cos u, tan u, ln u or e^u; at times the expression does not appear to be one of these, but often it can be rewritten in these forms. In calculus, you usually have to deal with composite functions when finding derivatives with the chain rule. The chain rule states that the derivative of f(g(x)) is f'(g(x))·g'(x); here you'll see one function "inside" another, and you have to separate the two functions before you can apply the rule. The opposite of composition is decomposition, which basically means separation. To recall, an inverse function is a function which can reverse another function; it is denoted as f(x) = y ⇔ f⁻¹(y) = x. If the function is one-to-one, there will be a unique inverse, and the inverse function calculator will find it, with steps shown. The domain is the set of all the values that go into a function; the function must work for all values we give it, so it is up to us to make sure we get the domain correct. The domain calculator allows you to take a simple or complex function and find the domain in both interval and set notation instantly; it also shows plots of the function and illustrates the domain and range on a number line to enhance your mathematical intuition. The funtool app is a visual function calculator that manipulates and displays functions of one variable, and it will also evaluate the composition at a specified point if needed. For the function table (2 variables) calculator, f(x,y) is inputted as an "expression" (e.g. x^2*y+x*y^2); the reserved functions are located in "Function List". In general, you can skip the multiplication sign, so 5x is equivalent to 5*x, but be very careful with parentheses: e^3x is e^3x, and e^(3x) is e^(3x).
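The chain rule statement above can be sanity-checked numerically with a finite-difference sketch; the particular functions (sin as the outer function, x² as the inner) are illustrative choices, not from the page:

```python
import math

def derivative(fn, x, eps=1e-6):
    """Central finite-difference approximation of fn'(x)."""
    return (fn(x + eps) - fn(x - eps)) / (2 * eps)

f = math.sin                    # outer function
g = lambda x: x ** 2            # inner function
composite = lambda x: f(g(x))   # f(g(x)) = sin(x^2)

x0 = 1.3
numeric = derivative(composite, x0)
# Chain rule: (f∘g)'(x) = f'(g(x)) * g'(x) = cos(x^2) * 2x
analytic = math.cos(x0 ** 2) * 2 * x0
```

The two values agree to several decimal places, confirming that differentiating the composite directly matches f'(g(x))·g'(x).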
| 2021-06-25 07:15:06 | {"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 1, "mathjax_asciimath": 1, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.3689121901988983, "perplexity": 1591.622863376043}, "config": {"markdown_headings": false, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2021-25/segments/1623487622113.11/warc/CC-MAIN-20210625054501-20210625084501-00381.warc.gz"}
https://www.aimsciences.org/article/doi/10.3934/dcds.2013.33.4173 | # American Institute of Mathematical Sciences
September 2013, 33(9): 4173-4186. doi: 10.3934/dcds.2013.33.4173
## Endomorphisms of Sturmian systems and the discrete chair substitution tiling system
1 Dominican University, 7900 W. Division Street, River Forest, IL 60305, United States
Received: July 2010. Revised: February 2013. Published: March 2013.
When looking at a dynamical system, a natural question to ask is what its endomorphisms are. Using Coven's work in [1] on the endomorphisms of dynamical systems generated by substitutions of equal length on {0,1} as a guide, we fully describe the endomorphisms of a class of almost automorphic symbolic dynamical systems, provided certain conditions hold on the set where the factor map fails to be 1-1. While this result places conditions on both the dynamical system and the factor map, it applies to Sturmian systems and generalized Sturmian systems. We also prove a similar result for a particular 2-dimensional system with a $\mathbb{Z}^2$-action, the discrete chair substitution tiling system.
Citation: Jeanette Olli. Endomorphisms of Sturmian systems and the discrete chair substitution tiling system. Discrete & Continuous Dynamical Systems - A, 2013, 33 (9) : 4173-4186. doi: 10.3934/dcds.2013.33.4173
##### References:
[1] Ethan M. Coven, Endomorphisms of substitution minimal sets, Z. Wahrscheinlichkeitstheorie und Verw. Gebiete, 20: 129.
[2] Tomasz Downarowicz, The royal couple conceals their mutual relationship: A noncoalescent Toeplitz flow, Israel J. Math., 97 (1997), 239. doi: 10.1007/BF02774039.
[3] Tomasz Downarowicz, Survey of odometers and Toeplitz flows, Contemp. Math., 385 (2005), 7. doi: 10.1090/conm/385/07188.
[4] N. Pytheas Fogg, "Substitutions in Dynamics, Arithmetics and Combinatorics" (eds. V. Berthé et al.), Lecture Notes in Mathematics, 1794 (2002). doi: 10.1007/b13861.
[5] Natalie Priebe Frank, A primer of substitution tilings of the Euclidean plane, Expo. Math., 26 (2008), 295. doi: 10.1016/j.exmath.2008.02.001.
[6] Harry Furstenberg, Harvey Keynes and Leonard Shapiro, Prime flows in topological dynamics, Israel J. Math., 14 (1973), 26.
[7] Paul R. Halmos and J. von Neumann, Operator methods in classical mechanics. II, Ann. of Math. (2), 43 (1942), 332.
[8] G. A. Hedlund, Endomorphisms and automorphisms of the shift dynamical system, Math. Systems Theory, 3 (1969), 320.
[9] Charles Holton, Charles Radin and Lorenzo Sadun, Conjugacies for tiling dynamical systems, Comm. Math. Phys., 254 (2005), 343.
[10] Douglas Lind and Brian Marcus, "An Introduction to Symbolic Dynamics and Coding," Cambridge University Press (1995).
[11] Marston Morse and Gustav A. Hedlund, Symbolic dynamics. II. Sturmian trajectories, Amer. J. Math., 62 (1940), 1.
[12] James R. Munkres, "Topology: A First Course," Prentice-Hall Inc. (1975).
[13] Jeanette Olli, "Dynamical Systems, Division Point Measures, and Endomorphisms," Ph.D. thesis (2009).
[14] Michael E. Paul, Construction of almost automorphic symbolic minimal flows, General Topology and Appl., 6 (1976), 45.
[15] Karl Petersen, On a series of cosecants related to a problem in ergodic theory, Compositio Math., 26 (1973), 313.
[16] Karl Petersen and Leonard Shapiro, Induced flows, Trans. Amer. Math. Soc., 177 (1973), 375.
[17] Charles Radin, Space tilings and substitutions, Geom. Dedicata, 55 (1995), 257.
[18] Charles Radin, Symmetry of tilings of the plane, Bull. Amer. Math. Soc. (N.S.), 29 (1993), 213.
[19] E. Arthur Robinson, Jr., On the table and the chair, Indag. Math. (N.S.), 10 (1999), 581.
[20] E. Arthur Robinson, Jr., Symbolic dynamics and tilings of $\mathbb{R}^d$, (2004), 81.
[21] L. Sadun, Tilings, tiling spaces and topology, Philosophical Magazine, 86 (2006), 875.
[22] William A. Veech, Point-distal flows, Amer. J. Math., 92 (1970), 205.
[1] David Ralston. Heaviness in symbolic dynamics: Substitution and Sturmian systems. Discrete & Continuous Dynamical Systems - S, 2009, 2 (2) : 287-300. doi: 10.3934/dcdss.2009.2.287 [2] Rui Pacheco, Helder Vilarinho. Statistical stability for multi-substitution tiling spaces. Discrete & Continuous Dynamical Systems - A, 2013, 33 (10) : 4579-4594. doi: 10.3934/dcds.2013.33.4579 [3] Marcy Barge, Sonja Štimac, R. F. Williams. Pure discrete spectrum in substitution tiling spaces. Discrete & Continuous Dynamical Systems - A, 2013, 33 (2) : 579-597. doi: 10.3934/dcds.2013.33.579 [4] Marcy Barge. Pure discrete spectrum for a class of one-dimensional substitution tiling systems. Discrete & Continuous Dynamical Systems - A, 2016, 36 (3) : 1159-1173. doi: 10.3934/dcds.2016.36.1159 [5] Michal Kupsa, Štěpán Starosta. On the partitions with Sturmian-like refinements. Discrete & Continuous Dynamical Systems - A, 2015, 35 (8) : 3483-3501. doi: 10.3934/dcds.2015.35.3483 [6] N. Romero, A. Rovella, F. Vilamajó. Dynamics of vertical delay endomorphisms. Discrete & Continuous Dynamical Systems - B, 2003, 3 (3) : 409-422. doi: 10.3934/dcdsb.2003.3.409 [7] Rovella Alvaro, Vilamajó Francesc, Romero Neptalí. Invariant manifolds for delay endomorphisms. Discrete & Continuous Dynamical Systems - A, 2001, 7 (1) : 35-50. doi: 10.3934/dcds.2001.7.35 [8] André Caldas, Mauro Patrão. Entropy of endomorphisms of Lie groups. Discrete & Continuous Dynamical Systems - A, 2013, 33 (4) : 1351-1363. doi: 10.3934/dcds.2013.33.1351 [9] Rafael De La Llave, A. Windsor. An application of topological multiple recurrence to tiling. Discrete & Continuous Dynamical Systems - S, 2009, 2 (2) : 315-324. doi: 10.3934/dcdss.2009.2.315 [10] S. Eigen, V. S. Prasad. Tiling Abelian groups with a single tile. Discrete & Continuous Dynamical Systems - A, 2006, 16 (2) : 361-365. doi: 10.3934/dcds.2006.16.361 [11] Jon Chaika, David Constantine. A quantitative shrinking target result on Sturmian sequences for rotations. 
Discrete & Continuous Dynamical Systems - A, 2018, 38 (10) : 5189-5204. doi: 10.3934/dcds.2018229 [12] Roman Šimon Hilscher. On general Sturmian theory for abnormal linear Hamiltonian systems. Conference Publications, 2011, 2011 (Special) : 684-691. doi: 10.3934/proc.2011.2011.684 [13] Betseygail Rand, Lorenzo Sadun. An approximation theorem for maps between tiling spaces. Discrete & Continuous Dynamical Systems - A, 2011, 29 (1) : 323-326. doi: 10.3934/dcds.2011.29.323 [14] Eugen Mihailescu. Equilibrium measures, prehistories distributions and fractal dimensions for endomorphisms. Discrete & Continuous Dynamical Systems - A, 2012, 32 (7) : 2485-2502. doi: 10.3934/dcds.2012.32.2485 [15] Palle Jorgensen, Feng Tian. Dynamical properties of endomorphisms, multiresolutions, similarity and orthogonality relations. Discrete & Continuous Dynamical Systems - S, 2019, 12 (8) : 2307-2348. doi: 10.3934/dcdss.2019146 [16] Zhihua Zhang, Naoki Saito. PHLST with adaptive tiling and its application to antarctic remote sensing image approximation. Inverse Problems & Imaging, 2014, 8 (1) : 321-337. doi: 10.3934/ipi.2014.8.321 [17] Natalie Priebe Frank, Lorenzo Sadun. Topology of some tiling spaces without finite local complexity. Discrete & Continuous Dynamical Systems - A, 2009, 23 (3) : 847-865. doi: 10.3934/dcds.2009.23.847 [18] Younghwan Son. Substitutions, tiling dynamical systems and minimal self-joinings. Discrete & Continuous Dynamical Systems - A, 2014, 34 (11) : 4855-4874. doi: 10.3934/dcds.2014.34.4855 [19] Brett M. Werner. An example of Kakutani equivalent and strong orbit equivalent substitution systems that are not conjugate. Discrete & Continuous Dynamical Systems - S, 2009, 2 (2) : 239-249. doi: 10.3934/dcdss.2009.2.239 [20] Junxiang Li, Yan Gao, Tao Dai, Chunming Ye, Qiang Su, Jiazhen Huo. Substitution secant/finite difference method to large sparse minimax problems. Journal of Industrial & Management Optimization, 2014, 10 (2) : 637-663. 
doi: 10.3934/jimo.2014.10.637
2018 Impact Factor: 1.143 | 2019-11-15 20:09:41 | {"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.8314855098724365, "perplexity": 12343.446535010868}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 20, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2019-47/segments/1573496668712.57/warc/CC-MAIN-20191115195132-20191115223132-00314.warc.gz"} |
https://www.doitpoms.ac.uk/tlplib/metal-forming-3/plane_strain.php | Dissemination of IT for the Promotion of Materials Science (DoITPoMS)
# Plane strain
Much deformation of practical interest occurs under a condition that is nearly, if not exactly, one of plane strain, i.e. where one principal strain (say ε3) is zero so that δε3=0.
Plane strain is applicable to rolling, drawing and forging where flow in a particular direction is constrained by the geometry of the machinery, e.g. a well-lubricated die wall.
A specific example of this is in rolling, where the major deformation occurs perpendicular to the roll axis. The material becomes thinner and longer but not wider. Frictional stresses parallel to the rolls (i.e. in the width direction) prevent deformation in this direction and hence a plane strain condition is produced where δε3=0. This can be seen in the animation below.
HOT ROLLING
Reproduced from Materials Selection and Processing CD, by A.M.Lovatt, H.R.Shercliff and P.J.Withers.
### Plastic deformation in plane strain
Here, one principal strain is zero. Let this be ε3. Then δε3= 0.
From the Levy-Mises equations, ${{\delta {\varepsilon _1}} \over {{\sigma _1} - {1 \over 2}\left( {{\sigma _2} + {\sigma _3}} \right)}} = {{\delta {\varepsilon _2}} \over {{\sigma _2} - {1 \over 2}\left( {{\sigma _3} + {\sigma _1}} \right)}} = {{\delta {\varepsilon _3}} \over {{\sigma _3} - {1 \over 2}\left( {{\sigma _1} + {\sigma _2}} \right)}} \ne 0$. Since $\delta {\varepsilon _3} = 0$ while this common ratio is finite and non-zero, the denominator of the third term must also vanish, giving $${\sigma _3} = {\textstyle{1 \over 2}}\left( {{\sigma _1} + {\sigma _2}} \right)$$
Hence σ3 is the mean of σ1 and σ2. By convention we take σ1 > σ2, so that σ1 > σ3 > σ2. Therefore the maximum shear stress in the σ1-σ2 plane acts at 45° to the axes and has magnitude $${{{\sigma _1} - {\sigma _2}} \over 2}$$ .
If we now examine the Tresca and von Mises yield criteria, we find:
• Tresca $${{{\sigma _1} - {\sigma _2}} \over 2}$$ = k = $${Y \over 2}$$ (k = shear yield stress and Y = uniaxial yield stress)
• von Mises $${\left( {{\sigma _1} - {\sigma _2}} \right)^2} + {\left( {{\sigma _2} - {\sigma _3}} \right)^2} + {\left( {{\sigma _3} - {\sigma _1}} \right)^2} = 6{k^2} = 2{Y^2}$$
$${\rm{If}\;\;\sigma _3} = {\textstyle{1 \over 2}}\left( {{\sigma _1} + {\sigma _2}} \right),{\rm{ }}{\textstyle{3 \over 2}}{\left( {\sigma {}_1 - {\sigma _2}} \right)^2} = 6{k^2} = 2{Y^2}$$
$$\Rightarrow \left( {{\sigma _1} - {\sigma _2}} \right) = 2k = {{2Y} \over {\sqrt 3 }}$$
Therefore, if we have plane strain, the Tresca yield criterion and the von Mises yield criterion have the same result expressed in terms of k. It is unnecessary to specify which criterion we are using, provided we use k.
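This equivalence is easy to check numerically. A minimal Python sketch (the stress values are assumed, purely for illustration): with σ3 set to the mean of σ1 and σ2, the von Mises expression collapses to (3/2)(σ1 - σ2)², so both criteria return the same k.

```python
import math

# Arbitrary principal stresses in the 1-2 plane (units: MPa, values assumed)
s1, s2 = 120.0, -40.0
s3 = 0.5 * (s1 + s2)          # plane-strain condition derived above

# Left-hand side of the von Mises criterion
vm = (s1 - s2)**2 + (s2 - s3)**2 + (s3 - s1)**2

# With s3 the mean of s1 and s2 it collapses to (3/2)(s1 - s2)^2
assert math.isclose(vm, 1.5 * (s1 - s2)**2)

# Setting vm = 6 k^2 gives the shear yield stress k; Tresca gives the same k
k_mises = math.sqrt(vm / 6.0)
k_tresca = (s1 - s2) / 2.0
assert math.isclose(k_mises, k_tresca)
print(k_mises)  # 80.0
```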
Consider a metal in uniaxial compression where plastic strain only takes place in the 1-2 plane. There is no friction between the work piece and the die faces. (To achieve this experimentally, a sample should be wide in the 3 direction).
$\Rightarrow {\rm{ }}{\sigma _{{\rm{ij}}}} = {\rm{ }}\left( {\matrix{ {{\sigma _1}} & 0 & 0 \cr 0 & {{\sigma _2}} & 0 \cr 0 & 0 & {{{{\sigma _1} + {\sigma _2}} \over 2}} \cr } } \right)$
Hydrostatic stress:
${\sigma _H} = - p = {{{\sigma _1} + {\sigma _2}} \over 2} = {\sigma _3}$
where p is the hydrostatic pressure.
So at yield, we have $${{{\sigma _1} - {\sigma _2}} \over 2}$$ = $$k$$ and since $${{{\sigma _1} + {\sigma _2}} \over 2}$$ = $$- p$$, we have
${\sigma _1} = - p + k = 0$
${\sigma _2} = - p - k = - 2k$ since p=k at yield
${\sigma _3} = - p$
So for this example, the stress tensor is
${\rm{ }}{\sigma _{{\rm{ij}}}} = {\rm{ }}\left( {\matrix{ { - p} & 0 & 0 \cr 0 & { - p} & 0 \cr 0 & 0 & { - p} \cr } } \right) + \left( {\matrix{ k & 0 & 0 \cr 0 & { - k} & 0 \cr 0 & 0 & 0 \cr } } \right)$
which is the sum of hydrostatic stress (which can vary in magnitude through the object) and deviatoric pure shear stress (which has the same value throughout the material).
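A quick numerical check of this decomposition (the values of k and p are assumed; plain Python):

```python
# Verify the hydrostatic + deviatoric split of the plane-strain stress tensor.
# k is the shear yield stress, p the hydrostatic pressure (values assumed).
k, p = 1.0, 2.5

sigma = [[-p + k, 0, 0],
         [0, -p - k, 0],
         [0, 0, -p]]

hydro = [[-p, 0, 0], [0, -p, 0], [0, 0, -p]]
dev   = [[k, 0, 0], [0, -k, 0], [0, 0, 0]]

# The two parts sum to the full tensor ...
for i in range(3):
    for j in range(3):
        assert sigma[i][j] == hydro[i][j] + dev[i][j]

# ... and the deviatoric part is a pure shear: traceless, max shear stress k
assert sum(dev[i][i] for i in range(3)) == 0
assert (max(dev[i][i] for i in range(3)) - min(dev[i][i] for i in range(3))) / 2 == k
```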
The directions of maximum shear therefore lie at 45° to σ1 and σ2. These are slip lines along which plastic flow occurs.
We are avoiding additional complexities such as work hardening by assuming the materials are rigid-plastic.
https://motls.blogspot.com/2006/09/preprint-on-falling-twin-towers.html?showComment=1347513130378 | ## Thursday, September 14, 2006 ... //
### Preprint on falling Twin Towers
This looks almost like a physics preprint:
The result of the paper is that the author believes that the airplanes were not enough to make the Twin Towers collapse: the collapse was too fast, he essentially says. I don't believe this conclusion but still, there are some technical arguments that others might want to look at.
What do I think about the collapses?
Each tower collapsed roughly in 10 seconds which is comparable to the time of free fall from the same height. Recall that in 9 seconds, you fall by 5 x 9 x 9 = 400+ meters which is a bit less than 417 meters of the full height of the WTC towers.
I don't see anything wrong with the nearly free-fall model. For example, the 93rd floor of WTC1 (or 77th floor of WTC2) suddenly broke because of the high temperature melting the metallic structure. The remaining 10+ floors of WTC1 (or 20+ floors of WTC2) above the critical point - whose mass was 50,000 tons for WTC1 (or 100,000 tons for WTC2) - started to fall freely, and they were hitting the lower floors one by one and taking the other floors with them. The new floors slow down the avalanche a little but not much because the falling part of the tower is much heavier.
If the momentum of falling 20 floors is suddenly shared by 21 floors (because another floor joins the avalanche), the velocity decreases by 5 percent only, and this percentage is decreasing as the collapsing portion of the tower relatively grows.
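The floor-by-floor momentum argument can be turned into a crude simulation. The sketch below (all parameters are assumed for illustration: 110 floors, 3.7 m storey height, equal floor masses, 15 floors initially falling, no structural resistance) alternates free fall through one storey with a perfectly inelastic collision, and confirms that the resulting collapse time is longer than free fall but of the same order:

```python
import math

# A crude numerical version of the fully plastic ("pancake") model sketched
# above.  All parameters are assumed: 110 floors, 3.7 m storey height, equal
# floor masses, collapse starting below the top 15 floors, and no structural
# resistance from the floors below.
g, d = 9.8, 3.7
floors_below, falling = 95, 15

t, v = 0.0, 0.0
for _ in range(floors_below):
    # free fall through one storey height: d = v*dt + (1/2)*g*dt**2
    dt = (math.sqrt(v * v + 2 * g * d) - v) / g
    t += dt
    v += g * dt
    # perfectly inelastic collision with the next static floor:
    # momentum is conserved, so the speed drops by falling/(falling + 1)
    v *= falling / (falling + 1)
    falling += 1

t_free = math.sqrt(2 * floors_below * d / g)  # free fall over the same height
# The plastic avalanche is slower than free fall, but not dramatically so.
assert t_free < t < 2 * t_free
```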
P.S. (off-topic): There is a new contribution to the heavily overpopulated family of anti-physics shitheads. His name is Gregg Easterbrook. Oh no, he's been fighting against extra dimensions for years. Fortunately, Gene seems to be correct and some people are able to see that Easterbrook's text is nonsense: DovBear, Ezra Klein. Still, most people are morons, and I chose not to link to them because they have enough links to each other.
Update - elastic model
I have asked many people what they think about it. An interesting response came from Yevgeny Kats - during our long chat about more serious physics. He figured out that my model - that is totally plastic - is actually making things slower than necessary; intuitively it is because I am losing kinetic energy which slows things down. He proposed a different, completely elastic model, as a zeroth approximation, and I offer you my quantitative version of it.
In this picture, the floors never join into a single object. When the (F+1)st floor reaches the Fth floor, the upper floor stops completely while the lower floor picks all of its speed. Imagine that you look at the (F+1)st floor before the elastic collision but you choose the Fth floor after the elastic collision.
In this picture, you can visually follow a floor that is freely falling, and whenever it reaches another floor, it gives it a signal to fall freely (from zero initial velocity). If I exchange the identification of the 2 floors during each elastic collision, the floor whose initial height is "h" will thus reach the ground after time
• sqrt(2(H-h)/g) + sqrt(2h/g)
The first term counts the time needed for the first collapsed floor, at height "H", to reach the floor at height "h": here, "H" is the total height of the building (or, more precisely, the height of the airplane impact). The second term computes the time from the relevant elastic collision. It is easy to see that the maximum of the function above appears for
• h = H/2
and the total time at this value is
• 2 sqrt(H/g)
which is sqrt(2) times longer than the time of the free fall. For the WTC1 tower that was hit near the top, around 360 meters above the ground, the result is
• 2 sqrt(360 / 9.8) = 12 seconds,
in agreement with observations. It is conceivable that a compromise between the plastic and elastic models could actually make this time even shorter.
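A small numerical check of the elastic-model formula (Python; H = 360 m and g = 9.8 m/s² as in the text): scanning over h confirms the landing time is maximized at h = H/2, where it equals 2·sqrt(H/g), i.e. sqrt(2) times the free-fall time.

```python
import math

# Landing time in the elastic model above: the floor starting at height h
# lands at sqrt(2*(H - h)/g) + sqrt(2*h/g).  Scan h to confirm the maximum
# sits at h = H/2 and equals 2*sqrt(H/g); H = 360 m as in the post.
g, H = 9.8, 360.0

def landing_time(h):
    return math.sqrt(2 * (H - h) / g) + math.sqrt(2 * h / g)

worst = max(landing_time(H * i / 1000) for i in range(1001))
assert math.isclose(worst, 2 * math.sqrt(H / g), rel_tol=1e-6)
assert math.isclose(worst, math.sqrt(2) * math.sqrt(2 * H / g), rel_tol=1e-6)
print(round(worst, 1))  # 12.1 seconds
```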
#### snail feedback (11) :
Interestingly, the question of a big free falling mass reaching a smaller free falling mass (or the contrary) is one of the discussions in Galileo "Two new sciences", aiming to deduce the well known, er, galilean equation of free fall.
the 93rd floor of WTC1 (or 77th floor of WTC2) suddenly broke because of the high temperature melting the metalic structure.
What produced the high temperature (2750 deg. F) required to melt the steel structure?
Not jet fuel, which burns at max. of 1800 deg F... so what was it?
You don't have to melt the steel completely for the building to collapse. Already at 1500 deg F, the steel loses rigidity. Moreover, the core of the buildings probably collapsed first, a fraction of a second before the rest, and this core collapse was probably caused by a structural damage by the airplanes.
Your belief that a building cannot collapse unless the steel is melting completely is just a belief that is not supported by anything whatsoever - and most likely can be easily falsified. You believe these bizarre things only because you want to believe them but they have nothing to do with rational reasoning or arguments about these questions.
It is absolutely clear that there exists an upper bound on the temperature of fires and on the total energy deposited by the airplanes that still allows the buildings to survive permanently. This upper bound on the temperature is surely lower than the melting point of the steel, and the upper bound on the energy is arguably comparable to the energy of the airplane. The only question is how far it is, and 9/11 was an extraordinary and expensive experiment that gives us experimental data.
It is incredible that these buildings could be standing at all.
You have not made any reliable calculation of the critical temperatures, critical maximum energy, or anything like that to make any conclusions. What you're writing about the insufficiency are just your conspiracy beliefs.
Try to take a Boeing and smash it to your house. I am curious how it will survive.
Most physicists through history have been wrong; you try to cover up that fact by pointing out a few geniuses who saw the light in spite of the lies and misconceptions of their contemporaries and had to pay dearly to come up with something truly groundbreaking and new. At the time many of them were treated as outcasts or threatened with death. These few people are a minority and hardly representative of the physics community. Most of you are wrong and will be proven so by the next paradigmatic breakthrough.
Why the venom?
Because I agree with you on the CO2 issue pertaining to GW and then find your childish momentum calculations in the wtc patched up with uninformed and grave insults against truthers.
Elastic collision huh? Did you ever look at a photo of the collapse and notice the clouds of pulverized concrete, rubble and steel beams ejected within a radius at least double the towers'. What percentage of each floor's mass remained inside the perimeter and was able to transfer any momentum to the floors below?
Look at your flimsy example of 10 moving floors + one static reducing speed with only 5% and the ease with which I hit the keyboard and counter that the vertical core and perimeter steel beams on impact propagated the forces several floors below and describe the situation as 10 moving floors + 10 static now reducing speed with 50%. A claim equally false as your own.
Please check out Gordon Ross analysis of momentum transfer linked at this page together with criticism by Frank Greening and replies by Ross.
http://www.journalof911studies.com/
Or have a look at Steven Jones discovery of microscopic iron 'spherules' abundantly present in the dust (google it).
Not even NIST believes in a pancake collapse.

I know more than you because I know I am a victim of my own propaganda and you don't.
But I like your GW attitude.
I just got interested in this, and I have a question. It seems to me the real question is WTC 7, for which Jones says the free fall time for a rock from the roof was 6 seconds and that the building fell in 6.6 seconds. This is well within the elastic model result above if I understand it and is thus perhaps very hard to understand.
It seems to me the twin towers themselves do not provide as clear an example because both hypotheses, standard/plane and truther/thermite, agree (if I understand the competing models) that the girders are taken out at the level where the planes struck, (although they disagree on the cause for the girders being taken out there) which in either case is near the top of the building. So both sides agree that the fall from there down is due to pancaking, and since that is a big constant term added on to the time for fall, it obscures the test of the models. WTC 7 is a much clearer test case, and it is unclear to me whether any physical model that doesn't involve thermite or some such taking out the beams on or near the ground floor at the start can account for the fall times, assuming Jones is right in the figures he quotes.
If you can address this Lubos, I would be grateful.
Dear Eric, first, the plastic model is the slowest one and unrealistic for the collapse of any of these buildings.
The elastic model is faster - just 1.414 times slower than the free fall - but it is in no way an upper bound on the speed, either. The collapse started by the collision with the airplane *is* a sort of explosion, too. What matters is how quickly the heat spreads to melt the steel structure and how quickly the heat is generated so that the huge pressure from the hot air and exploding traces of fuel inside the floors are able to knock the floors beneath - and all these things may happen faster than the free fall in principle.
The very term "thermite" suggests that the discussion isn't serious physics. It's just one among many types of materials that may contribute. By saying "thermite", one assumes a particular answer, and a very contrived, special, ad hoc, preconceived one. It just makes no sense to talk with people who immediately say "thermite" and then look for ways to defend their first guess.
There were lots of things inside the floor that were burning and causing similar increase in pressure and temperature as a "thermite" but they were not thermites, like the plastic-wood furniture and other things. The truther "scientists" completely deny the existence of all these things because they disagree with their predecided answer. They deny the reality, they're paranoid idiots.
Lubos, your comments in response to mine are mostly just ad hominem attacks, which are not in the slightest convincing; just the contrary, if anything they are evidence of the weakness of your position. Also you didn't at all respond to my question re WTC 7: the timing of the crash seems to have been well below what is achievable by the elastic calculation. Nobody has yet proposed a mechanism up to the challenge, as I understand it, without invoking explosives/thermite. Are you up to this challenge, or are you just going to continue ad hominem attacks as your new reasoning method? sqrt(2) free fall time is said to be 8.4 seconds, but the building is claimed to have fallen in 6.6 seconds.
PS-- I find it quite amusing that the book I am reading, Debunking 9/11 Debunking by David Ray Griffin, offers a very compelling and well reasoned argument that Kean and Hamilton, while giving a useful list of bad reasoning methods they say characterize conspiracy theories, actually fall prey to every flaw they so detail while in fact the Truthers do not at all display these flaws; and at the same time Griffin supports with the occasional offhand comment the theory of global warmism, while you Motl realize that the global warmists also fall prey to all these invalid reasoning methods such as ad hominem and denying bad data and such, but you simultaneously incorrectly apply these bad reasoning methods to the Truthers. The real situation is, the supporters of the standard Conspiracy theory (planes brought the buildings down), like the global warmists, resort a lot to invalid methods of reasoning, while the Truthers and the skeptics, at least the best ones, are much more open to evidence and correction-- so indeed it is true I am quite sure that Truthers and skeptics are highly correlated. This doesn't make the Truthers right, they have a high hurdle of a relatively poor prior likelihood caused by invoking additional unseen factors, thus a poor Occam factor, and also of invoking a Government conspiracy that might be involved as a priori unlikely. Nonetheless, I believe they have made some serious arguments that the weight of the evidence is so high as to overcome these factors, not only convincing a large fraction of the population but some smart people who have looked into it in serious ways, and ad hominem attacks are not going to convince anyone serious of the contrary.
I have already answered all your questions, asshole. You just don't want to read. The elastic model is in no way an ultimate upper bound on the speed; it's just a simple fucking oversimplified model. Any heat spreading through the metal and explosions driven by the hot air and burning things in the floor are guaranteed to speed up the collapse beyond the previous limit, including the limit of the purely elastic model.
You still haven't suggested a mechanism up to the task of bringing down WTC 7 in 6.6 seconds, merely argued there is no argument based on simple momentum transfer and conservation of energy capable of proving it's impossible.
You also, btw, haven't addressed the question of how the buildings ever got hot enough to weaken steel.
But this hasn't prevented you from more ad hominem attacks
There is no simple way to calculate that the collapse time is exactly 6.6 seconds. It's a terribly complex physics-chemistry problem depending on stiffness, combustibility, latent heat, expandability, and lots of other physical quantities describing lots of materials in the tower.
I have shown idealized pure-mechanics models as well as factors that may slow down and factors that may speed up the collapse. When one does the simulation properly, he will get a time that agrees with the observations.
You also, btw, haven't addressed the question of how the buildings ever got hot enough to weaken steel.
Fire weakens steel so that it can be reshaped or broken. The profession of "blacksmithing" has been based on it for thousands of years, stupid asshole.
https://math.stackexchange.com/questions/2903665/prove-disprove-theres-a-max-element-on-the-order-leq-as-a-leq-b | # Prove/disprove: there's a max element in the order $\preccurlyeq$, where $A \preccurlyeq B$ if $\forall b_1, b_2 \in B\,(b_1 < b_2 \to |[b_1, b_2]\cap A| \geq 2)$
Let $P_{\aleph_{0}}(\mathbb{N}) = \{x\in P(\mathbb{N}):|x|=\aleph_{0}\}$
we'll define the following relation on $P_{\aleph_{0}}(\mathbb{N})$ for $A, B \in P_{\aleph_{0}}(\mathbb{N})$:
$A\preccurlyeq B$ if for every $b_1, b_2 \in B \longrightarrow\left(b_1 < b_2 \longrightarrow\vert [b_1, b_2] \cap A\vert\geq2\right)$
$\left\langle \preccurlyeq,P_{\aleph_{0}}(\mathbb{N})\right\rangle$ is a Partially ordered set.
Prove/disprove: there exists a maximum element in the order $\preccurlyeq$.
my attempt:
I'd like to disprove the claim by contradiction.

Suppose there exists a maximum element in the order $\preccurlyeq$; let it be $A$.
By definition, $|A| = \aleph _0$; therefore, by the well-ordering principle, I can enumerate $A$ as $A = \{ a_i \in A \mid i \in \mathbb{N}\}$ with $$a_1 < a_2 < a_3 < \cdots < a_n < \cdots$$

Let $B = \{ a_{2i} \in A \mid i\in \mathbb{N} \}$; that is, $B$ consists of the even-indexed elements of $A$. Then for the elements $b_i \in B$ we have $$b_1 < b_2 < \cdots < b_n < \cdots$$ and for every $i$, $a_i < b_i = a_{2i}$. Let $b_k, b_j \in B$ with (WLOG) $b_k < b_j$, where $b_k = a_{2k}$ and $b_j = a_{2j}$ for some $k, j \in \mathbb{N}$. Then $|b_j - b_k| = |a_{2j}- a_{2k}| > 2$ and of course $|b_j - b_k| = |a_{2j}- a_{2k}| > |a_j - a_k|$.
This is a contradiction to the assumption that $A$ is the maximum element, as we have found an element bigger than $A$. Therefore, there exists no maximum element.
is that correct?
Suppose there exists a maximum element $B$, so that $A \leq B$ for every $A$.
$$B = \{ b_{1},b_{2},b_{3},\cdots\}$$
such that: $i<j\ \Longrightarrow\ b_{i}<b_{j}$.
Let $A = \{ n\in\mathbb{N}\vert n<b_{3}\vee n>b_{4}\}$
hence, for a given $b_3 , b_4$:
$$A\cap[b_{3},b_{4}]=\emptyset$$
Therefore $$0 = \vert\emptyset\vert=\vert A\cap[b_{3},b_{4}]\vert<2$$
which contradicts the assumption that $A\leq B$. This means that for every candidate $B$ there exists some $A$ that does not satisfy $A\leq B$; thus there exists no maximum element $B$.
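For what it's worth, the intended counterexample $A = \{n : n < b_3 \vee n > b_4\}$ can be sanity-checked mechanically on finite truncations of the sets. A Python sketch (the helper `prec` and the cut-off 100 are mine, and finite prefixes only stand in for the infinite sets):

```python
def prec(A, B):
    """A ⪯ B on finite sets: every interval [b1, b2] with b1 < b2 in B
    must contain at least two elements of A (endpoints included)."""
    B = sorted(B)
    return all(len([a for a in A if b1 <= a <= b2]) >= 2
               for i, b1 in enumerate(B) for b2 in B[i + 1:])

# Finite truncations standing in for the infinite sets in the argument:
B = set(range(0, 100, 2))            # b_1 < b_2 < ... : the even numbers
b3, b4 = sorted(B)[2], sorted(B)[3]  # b_3 = 4, b_4 = 6
A = {n for n in range(100) if n < b3 or n > b4}

assert not prec(A, B)            # [b_3, b_4] ∩ A is empty, so A ⪯ B fails
assert prec(set(range(100)), B)  # while ℕ itself is ⪯-below B
```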
• Presumably OP meant "element without an element strictly above it", rather than "element above everything". – Mees de Vries Sep 3 '18 at 12:31
• I don't know what you refer to by "OP" – Jneven Sep 3 '18 at 12:33
• you mean PO? partially ordered? – Jneven Sep 3 '18 at 12:34
• Oops, I didn't realize you were the same person. "OP", or "original poster", refers to the person who asked the question. I thought your notion of "max element" was a $B$ with no $A$ such that $A > B$; but now you are answering as if you are looking for a $B$ such that for all $A$ we have $B \geq A$. – Mees de Vries Sep 3 '18 at 12:35
• this is a question from a uni course - you are of course right but this is essentially a notation - so it could be represented by $R$. I think a part of the "challenge" is that it says $A \leq B$ but actually it is the other way around. maybe my professor didn't notice that this is the wrong way - which is interesting for someone whose expertise IS set theory – Jneven Sep 3 '18 at 12:52
I'd prove by contradiction that there exists no maximum element in the order $\preccurlyeq$.

Suppose there exists a maximum element of $P_{\aleph_{0}}(\mathbb{N})$ in the order $\preccurlyeq$; let it be $X$.
$X \in P_{\aleph_{0}}(\mathbb{N}) \Longrightarrow |X| = \aleph_0$
I'll define a function $$f:\mathbb{N} \to X$$ such that $f$ is order-preserving (with respect to $<$ on $\mathbb{N}$) and onto.
let the set $$B = \{ f(n) \vert n\in \mathbb{N}_{even} \}$$ clearly, $B \subseteq X$, therefore $|B| \leq \aleph_0$.
I'll define a function $$g:\mathbb{N}_{even} \to B$$ such that $g(n) = f(n)$ for every $n \in \mathbb{N}_{even}$. Since $f$ is one-to-one, $g$ is also one-to-one, so $\aleph_0 = |\mathbb{N}_{even}| \leq |B|$, and thus $|B| = \aleph_0$.
$B \subseteq X \subseteq \mathbb{N}$, hence $B \in P_{\aleph_{0}}(\mathbb{N})$.
Let $n_1, n_2 \in B$ such that $n_1 < n_2$.
Because $B \subseteq X$, we have $n_1 , n_2 \in X$, so $$|[n_1 , n_2 ] \cap X| \geq 2.$$ Consequently, $B \preccurlyeq X$, which contradicts the assumption that $X$ is the maximum element in the order.
Lemma: I'll prove that the function $f$, defined above to be order-preserving and invertible, exists.

I will define $f$ by recursion: $$f(0) = \min (X)$$

$$f(n+1) = \min \left( X \setminus \{ f(0), f(1), \ldots , f(n) \} \right)$$
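As a side note, this recursion is easy to mirror on a finite truncation. A Python sketch (the sample set and cut-offs are assumed for illustration):

```python
# A finite sketch of the recursive definition above: enumerate a set X in
# increasing order by repeatedly taking the minimum of what is left.
# X here is a sample infinite set truncated for illustration.
X = {n * n for n in range(1, 50)}   # perfect squares, standing in for X

f = []
remaining = set(X)
for _ in range(10):                 # first 10 values of f
    f.append(min(remaining))        # f(n+1) = min(X \ {f(0), ..., f(n)})
    remaining.remove(f[-1])

assert f == sorted(X)[:10]          # f is order-preserving and injective
print(f)  # [1, 4, 9, 16, 25, 36, 49, 64, 81, 100]
```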
https://stacks.math.columbia.edu/tag/0GM0

Lemma 15.74.11. Let $R$ be a ring. If $K$ and $L$ are perfect objects of $D(R)$, then $K \otimes _ R^\mathbf {L} L$ is a perfect object too.
Proof. We can prove this using the definition as follows. We may represent $K$, resp. $L$ by a bounded complex $K^\bullet$, resp. $L^\bullet$ of finite projective $R$-modules. Then $K \otimes _ R^\mathbf {L} L$ is represented by the bounded complex $\text{Tot}(K^\bullet \otimes _ R L^\bullet )$. The terms of this complex are direct sums of the modules $K^ a \otimes _ R L^ b$. Since $K^ a$ and $L^ b$ are direct summands of finite free $R$-modules, so is $K^ a \otimes _ R L^ b$. Hence we conclude the terms of the complex $\text{Tot}(K^\bullet \otimes _ R L^\bullet )$ are finite projective.
Another proof can be given using the characterization of perfect complexes in Lemma 15.74.2 and the corresponding lemmas for pseudo-coherent complexes (Lemma 15.64.16) and for tor amplitude (Lemma 15.66.10 used with $A = B = R$). $\square$
https://de.zxc.wiki/wiki/Ar_(Einheit)

# Ar (unit)
Physical unit
Unit name: Ar / Are
Unit symbol: $\mathrm{a}$
Physical quantity: Area
Formula symbol: $A$
Dimension: $\mathsf{L^2}$
In SI units: $1\,\mathrm{a} = 100\;\mathrm{m^2}$
In CGS units: $1\,\mathrm{a} = 1\,000\,000\;\mathrm{cm^2}$
Named after: Latin ārea, "area, free space"
Derived from: square meter
The Ar, in Switzerland the Are, is a unit of area in the metric system equal to 100 m², with the unit symbol a (but often not abbreviated, or incorrectly abbreviated as Ar or ar). 100 a equal 1 ha. A square with an area of 1 a has an edge length of ten meters, which is why one also speaks of a square decameter (dam²).
The Ar is not an SI unit; unlike the hectare , it is not even approved for use with the SI.
In the EU and Switzerland, the Ar or the Are is the legal unit for specifying the area of land and parcels.
## History
In 1793, the meter was established in France as the ten-millionth part of the Earth quadrant on the Parisian meridian. At the same time, the unit are, based on the Latin word ārea (area, free space), was defined for an area of 100 m². It was initially the only metric unit of area in use, together with its parts and multiples, the centiare (1 ca = 1 m²) and the hectare (1 ha = 100 a).
In 1868 the unit of measurement was officially introduced in Germany under the name Ar; the corresponding North German order of measures and weights came into force in 1872 for the entire German Empire.
## Multiples and parts
1 hectare (from "hecto-are") = 1 ha = 100 a = 10,000 m² = 1 hm² = 100 m × 100 m
1 decare (from "deka-are") = 1 daa = 10 a = 1,000 m²
1 are = 1 a = 100 m² = 1 dam² = 10 m × 10 m
1 centiare = 1 ca = 0.01 a = 1 m² = 1 m × 1 m
Except for ares and hectares, these multiples and parts are uncommon in the German-speaking area and are only of historical interest.
The decare is used as a measure of area in Bulgarian agriculture, in Greece (Stremma), in Turkey and in some countries of the Middle East (metric dunam).
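The relations above are just constant factors, so a conversion sketch is straightforward (the constant and function names below are my own, not any standard API):

```python
# Areas expressed in square meters; the factors follow the list above.
CENTIARE = 1.0           # 1 ca  = 1 m^2
ARE = 100.0              # 1 a   = 100 m^2
DECARE = 10 * ARE        # 1 daa = 1,000 m^2
HECTARE = 100 * ARE      # 1 ha  = 10,000 m^2

def ares_to_hectares(a: float) -> float:
    """Convert ares to hectares (1 ha = 100 a)."""
    return a / 100.0

# A square plot of 10 m x 10 m is exactly one are:
side_m = 10.0
print(side_m * side_m == ARE)      # True
print(ares_to_hectares(250.0))     # 2.5
```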
https://www.vedantu.com/question-answer/state-whether-true-or-false-if-a-number-is-class-8-maths-cbse-5f5703cbe6f8522f76a310fc

Question
# State whether True or False: If a number is divisible by $9$, it must be divisible by $3$.
A. True
B. False
Hint: Here we use the concept that any number $a$, when divided by $b$, can be written as $a = bq + r$, where $q$ is the quotient and $r$ is the remainder. A number is completely divisible by another number if the remainder is zero, i.e. it can be written in the form $a = bq$. So we write the number as divisible by $9$; then, factoring nine further, we check whether the number is divisible by the factors of $9$.
The general equation of a number $a$ completely divisible by $b$ is $a = bq$.
Since, we are given a number is divisible by $9$, so put the value $b = 9$ in the above equation.
We can write $a = 9q$
Since, we can write nine in simpler form i.e. $9 = 3 \times 3$
Therefore, we can write $a = 9q = (3 \times 3)q$
Group together all the factors other than $3$
$a = 3 \times (3q)$
Assuming the factor $3q = p$
We can write $a = 3p$
which is of the form $a = bq$, where $b = 3$
Therefore, number $a$ is divisible by $3$
Since the number on the LHS of the equation is the same, we can say that a number divisible by $9$ is also divisible by $3$.
So, the statement in the question is true.
So, the correct answer is “Option A”.
Note: Students often make the mistake of seeing the word "divisible" and writing the number in fraction form, which is wrong. Keep in mind that whenever a number is divisible by another number, the first number can be written as a multiple of the second number.
Alternate method:
We can also show this solution by taking an example
Say a number is divisible by $9$, let us take that number to be $36$
We can write $36 = 9 \times 4$
Since we know $9 = 3 \times 3$
Therefore, we can write
$36 = 3 \times 3 \times 4$
Grouping together all factors other than three
$36 = 3 \times (3 \times 4) = 3 \times 12$
Therefore, the number is divisible by $3$, as it is written as a multiple of three.
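As a quick numerical spot-check of the statement (purely illustrative, since the algebraic argument above already proves it):

```python
# Empirical spot-check: every multiple of 9 up to 100,000 is a multiple of 3,
# because 9q = 3 * (3q).
ok = all(n % 3 == 0 for n in range(0, 100_000, 9))
print(ok)  # True
```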
https://tex.stackexchange.com/questions/162084/multiple-bib-references-in-the-same-footnote-with-the-opcit-package

# Multiple bib references in the same footnote with the opcit package
I am new to LaTeX and I am hoping to use it in the field of the humanities. I have a problem when I try to add multiple bibliographic references in the same footnote using the opcit package. It seems to work with one reference only; when I add two others, \cite{Gerbier2003,Parel1992,Gaille2003}, the result is an empty footnote. No error message appears in the console. I am using XeTeX with the polyglossia, fontspec and babelbib packages.
Any idea of what could be the problem?
Thank you.
• Welcome to TeX.SX! Please add a minimal working example (MWE) that illustrates your problem. It will be much easier for us to reproduce your situation and find out what the issue is when we see compilable code, starting with \documentclass{...} and ending with \end{document}. – egreg Feb 24 '14 at 10:15
• You might consider using biblatex: it has a \footcites command and it can be customised. – Bernard Feb 24 '14 at 13:03
http://mathhelpforum.com/advanced-algebra/167009-linear-subspace-problem.html

# Thread: Linear Subspace problem.
1. ## Linear Subspace problem.
I am very sorry if I am bothering you guys a lot, but I am trying to understand some of this math and it's very hard to do it alone.
So I have this problem that says: Find which of the following sets are linear subspaces of the corresponding linear spaces:
I am only going to show the first one, because maybe if I understand it I can do the rest (I have the answers, but I have no idea how they come out). This is the problem:
$$X_{0}=\left \{ (x_1,\dots, x_n) \mid x_1+\dots+x_n=0 \right \} \subset \mathbb{R}^{n}, \quad (\mathbb{R}^{n}, \mathbb{R}, \cdot )$$
Now I understand that to be a linear subspace the set must contain the null vector and be closed under addition and scalar multiplication. My problems begin with the fact that I have no idea what $(\mathbb{R}^{n} , \mathbb{R}, \cdot )$ is.
Also, are the vectors $(x_1,\dots, x_n)$ the elements of the set $X$, and if so, what does the condition $x_1+\dots+x_n=0$ do?
2. $\mathbb{R}$ is the set of real numbers.
$\mathbb{R}^n$ is the set of n-tuples of real numbers.
I would assume that the notation $(\mathbb{R}^n, \mathbb{R}, \cdot )$ means that you're looking at the vector space of n-tuples of real numbers over the field of reals where the operation is normal multiplication.
Now, the null vector is in the subset because $0+\cdots +0 = 0$
If $(x_1,\ldots ,x_n), (y_1,\ldots ,y_n)$ are in the subset, then
$x_1+y_1+\ldots + x_n+y_n = x_1+\ldots +x_n+ y_1+\ldots +y_n=0+0=0$.
So $(x_1,\ldots ,x_n) + (y_1,\ldots ,y_n)$ is in the subset.
Scalar multiplication is similar.
Yes, it's a subspace.
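As an illustration (not a proof), the three subspace conditions can be checked numerically for this $X_0$; the helper function and names below are my own:

```python
import numpy as np

rng = np.random.default_rng(42)

def in_X0(v: np.ndarray, tol: float = 1e-9) -> bool:
    """Membership test for X0 = {x in R^n : x_1 + ... + x_n = 0}."""
    return abs(float(v.sum())) < tol

n = 5
# Build random vectors in X0 by projecting out the mean (forcing sum = 0).
x = rng.normal(size=n); x -= x.mean()
y = rng.normal(size=n); y -= y.mean()

print(in_X0(np.zeros(n)))   # null vector is in X0
print(in_X0(x + y))         # closed under addition
print(in_X0(-2.5 * x))      # closed under scalar multiplication
```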
3. Ok, I think I am getting it, but what's confusing is that my book talked only about an "internal binary operation on V (I guess in this case that would be $\mathbb{R}^n$), called addition and denoted by +, such that (V,+) is a commutative group". So you can only imagine how confused I got when they decided to throw in a *.
But am I correct to assume that it is the same thing, but instead of + it's *?
4. To be honest, that notation is quite confusing. I'm not sure why they chose those 3 things to emphasize (as opposed to addition, for example). I wouldn't worry too much about it though - I think the question is pretty clear. Maybe ask your teacher why this notation is being used.
5. It is somewhat surprising to learn that mathematical notation is anything but standard. There is a series of probability books, all by the same author, in which notation varies from book to book. That is why it is almost pointless to ask about notation in a forum such as this.
Sometimes if you give the title and author it is possible that someone may know that particular textbook.
That said, I think Steve is correct in his guess.
$\left( {\mathbb{R}^n ,\mathbb{R}, \cdot } \right)$ tells us that the v-space is the set of real n-tuples with vector addition, the scalar field is $\mathbb{R}$ and $\cdot$ tells us the scalar multiplication is that of the real numbers.
But that is surely explained in the text material. | 2016-12-06 14:09:01 | {"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 15, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.8950483202934265, "perplexity": 239.09861618489168}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.3, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2016-50/segments/1480698541907.82/warc/CC-MAIN-20161202170901-00130-ip-10-31-129-80.ec2.internal.warc.gz"} |
https://www.physicsforums.com/threads/mgf-of-a-random-variable-with-added-constant.546998/

# MGF of a random variable with added constant
Hey,
I have the pdf of a random variable $Z$ given. I am being asked to calculate what the moment generating function of the r.v. $Y = Z + c$ will be, where $c$ is a constant in $\mathbb{R}$.
I tried to calculate it in the following way:
$$\int^\infty_0 e^{(z+c)t} f(z+c)\,dz$$ where $f(z)$ is an exponential pdf with parameter $\lambda$.
but it proved to be an unsuccessful method. Could anyone please point me in the right direction? I know I could use a Jacobian transformation, but I'm sure there is an easier method.
I wouldn't even mess around with the integral. Here is something I would try:
$Y = Z + c$ where $Z \sim \operatorname{Exp}(\lambda)$ and c is a constant. Then,
$E[e^{tY}] = E[e^{t(Z+c)}]$
Now do you see what you might be able to do?
I think it definitely solves this problem! Now I can proceed with the rest of the exercise. Thank you Robert!
You're most certainly welcome.
As a side note, this sort of thing is a rather valuable technique in prob/stat. That is, if you want to know about a certain RV, or a certain expectation, lots of times it is best to work it into some form you already know.
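Robert's identity can be double-checked numerically. The sketch below uses arbitrary parameters chosen for illustration, together with the closed form $M_Y(t) = e^{tc}\,\lambda/(\lambda - t)$, valid for $t < \lambda$:

```python
import numpy as np

# Hypothetical parameters, chosen for illustration; the MGF requires t < lam.
lam, c, t = 2.0, 1.5, 0.7
rng = np.random.default_rng(0)

# Monte Carlo estimate of E[e^{tY}] with Y = Z + c, Z ~ Exponential(lam).
z = rng.exponential(scale=1.0 / lam, size=1_000_000)
mc = float(np.mean(np.exp(t * (z + c))))

# Closed form: M_Y(t) = E[e^{t(Z+c)}] = e^{tc} * M_Z(t) = e^{tc} * lam / (lam - t)
closed = float(np.exp(t * c) * lam / (lam - t))
print(mc, closed)  # the two values should agree to a few decimal places
```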
Ray Vickson
Homework Helper
Dearly Missed
I think it definitely solves this problem! Now I can proceed with the rest of the exercise. Thank you Robert!
Of course, you would have gotten the same result had you used the correct f(z) dz in your integration, instead of your _incorrect_ f(z+c) dz.
RGV
Checked that and it was another mistake I was making. Thank you for pointing this out!
https://design.tutsplus.com/courses/photo-manipulation-fundamentals/lessons/paradise-final-adjustments | Lessons:17Length:2 hours
• Overview
• Transcript
In this lesson, you'll finish up this project by making some final adjustments to the image.
https://davidstutz.de/octnet-learning-deep-3d-representations-high-resolutions-riegler-et-al/

# DAVID STUTZ

27th September 2017
Gernot Riegler, Ali Osman Ulusoy, Andreas Geiger. OctNet: Learning Deep 3D Representations at High Resolutions. CoRR, 2016.
Riegler et al. present a network architecture called OctNet allowing to train deep networks on sparse 3D data in high resolution. The approach is based on the simple observation that 3D data is usually sparse, for example when considering point clouds or 3D shapes. This implies that, at high resolutions, the information of most voxels is useless. The idea of Riegler et al. is to automatically adapt the resolution according to the information present in the 3D data. On the image plane, this idea is illustrated in Figure 1.
In practice, Riegler et al. utilize shallow octrees, i.e. octrees with maximum depth 3. A 3D tensor of size $H\times W\times D$ is then represented by $\frac{H}{8}\times\frac{W}{8}\times\frac{D}{8}$ octrees, where each octree can represent 512 voxels of the original resolution. For example, considering 3D shape recognition where the 3D tensor contains only $0$s and $1$s, the highest resolution is only used along the edges of the shape, while the coarser resolution is used outside the shape, as illustrated in Figure 1. Riegler et al. discuss the details of implementing this efficiently, i.e. such that each cell can be accessed directly.
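The occupancy-driven partition can be mimicked with a toy block decomposition. The function below is my own illustrative sketch, not the paper's actual data structure; it only shows why constant cells make sparse volumes cheap to store:

```python
import numpy as np

def shallow_octree_cells(volume: np.ndarray, cell: int = 8):
    """Split a dense (H, W, D) occupancy grid into cell^3 blocks and count
    how many blocks are constant (storable as a single value) versus mixed
    (needing per-voxel storage). Toy illustration, not OctNet itself."""
    H, W, D = volume.shape
    constant = mixed = 0
    for i in range(0, H, cell):
        for j in range(0, W, cell):
            for k in range(0, D, cell):
                block = volume[i:i + cell, j:j + cell, k:k + cell]
                if block.min() == block.max():
                    constant += 1
                else:
                    mixed += 1
    return constant, mixed

# A 64^3 grid that is empty except for a thin occupied slab, mimicking a
# sparse shape: most cells are constant and need almost no storage.
vol = np.zeros((64, 64, 64), dtype=np.uint8)
vol[30:34, :, :] = 1
const_cells, mixed_cells = shallow_octree_cells(vol)
print(const_cells, mixed_cells)  # 384 constant cells, 128 mixed cells
```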
After converting the input tensor, they discuss how the most common operations used within convolutional neural networks can be performed efficiently: convolution and pooling. For convolution, the discussion is relatively easy to follow, but very technical. However, the main source of speed-up is illustrated in Figure 2. The rectangle drawn in bold black refers to a single octree cell at depth $0$, i.e. containing $512$ voxels of the original resolution. Instead of convolving the individual $512$ voxels, we only need to perform the convolution around the edges of the octree cell; the value of the convolution within the octree cell is computed only once for the whole cell.
Similarly, pooling can be performed on the proposed data structure. The only caveat is that the depth of the individual octrees is afterwards reduced by one. Unfortunately, Riegler et al. do not give details on how this influences further processing or how to exactly avoid this problem. Unpooling, used for the point cloud labeling experiments, works analogously with the same caveat (assuming that the structure of the octree is already known from the input!).
Through experiments, Riegler et al. demonstrate impressive performance even for resolutions up to $256\times256\times256$, while most other approaches use resolutions on the order of $32\times32\times32$.
A Torch implementation of the proposed OctNet can be found on GitHub: griegler/octnet.
What is your opinion on the summarized work? Or do you know related work that is of interest? Let me know your thoughts in the comments below.
https://chemistry.stackexchange.com/questions/29101/why-is-the-bond-order-in-the-so%E2%82%83-molecule-1-33-and-not-2

# Why is the bond order in the SO₃ molecule 1.33 and not 2?
$\ce{SO3}$ molecule has three double bonded oxygen to the central sulfur atom.
Sulfur has $\ce{sp^2}$ hybridization and it has 6 outer electrons which make the bonds with the oxygen.
So shouldn't the bond order be 2?
• I just don't have time to write a complete answer, so instead I leave this link to a previous answer of mine that somewhat deals with this question. The doubly bonded picture is a very crude oversimplification, though. – Martin - マーチン Apr 21 '15 at 10:06
• Aaaah! I'm used to drawing the structures one of these three ways though. – M.A.R. Apr 21 '15 at 16:32
• @MARamezani This is essentially the concept of resonance, and it's intended purpose. You need all three structures though, not only one. Because one alone does not describe delocalisation. – Martin - マーチン Apr 23 '15 at 3:31
• Indeed @Martin. The resonance hybrid is the only correct structure for this. – M.A.R. Apr 23 '15 at 8:16
The bonding situation in $\ce{SO3}$ is a tough nut to crack. In a historical context this molecule belongs to the class of hypervalent molecules, which disobey the octet rule. The concept of hypervalence is still very much debated. Recently there was a question raised by ron, seeking more guidance: Hypervalency and the octet rule.
The bonding picture is usually trivialised as each $\ce{S-O}$ bond being a double bond. But this is actually far from the truth, as it does not respect the charge of $q=+2$ at the sulfur atom and the charges of $q=-\frac23$ at the oxygens. This is due to the fact that the sulfur atom actually contributes to only one of the three $\pi$ bonding orbitals, resulting in an electron density deficiency.
There are a couple of molecules that belong roughly to the same class. I call them Y aromatic systems: $\ce{CO3^2-}$, $\ce{NO3^-}$, and $\ce{SO3}$. The description is best carried out in the framework of resonance. (Please also see Why is there a need for resonance?)
I have explained the delocalisation in the nitrate ion, but I am happy to repeat it here for $\ce{SO3}$. The molecule has an overall symmetry of $D_\mathrm{3h}$, hence the central sulfur can be regarded as $sp^2$ hybridised. Since oxygen has only one bonding partner, it can sufficiently be described as $sp$ hybridised (compare here). The combination of these orbitals forms a $\sigma$ bond each. The remaining $p$ orbitals perpendicular to the molecular plane form the delocalised $\pi$ system. Since delocalisation is not part of the original Lewis description, it is impossible to address it with such structures, which ultimately led to the concept of resonance. (Please also see Why is there a need for resonance?) It is important that the bonding situation can only be described by all resonance structures together, as no single one predominates. The blue structure tries to emulate delocalisation, but here formal charges and charges are quite messed up.
However, we can take home an approximate bond order for the $\ce{S-O}$ bond. Since all of the resonance structures are equally dominant, we can see that each of these bonds consists of a $\sigma$ bond. We also see that there is a $\pi$ bond in each of the first three structures. Each of them contributes equally, therefore it makes a third of a full contribution per bond, hence the bond order is expected to be about $1\frac13$.
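The $1\frac13$ figure is just the equal-weight average over the three structures; a throwaway computation (the equal weighting is the assumption stated above) confirms the arithmetic:

```python
from fractions import Fraction

# One double bond (order 2) and two single bonds (order 1) per structure;
# the three equivalent resonance structures are weighted equally.
orders = [Fraction(2), Fraction(1), Fraction(1)]
bond_order = sum(orders) / len(orders)
print(bond_order)  # 4/3
```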
Now the molecular orbital scheme is quite similar, but it suffices with one structure. The explanation of the $\sigma$ system remains unchanged. From the four $p$ orbitals we can form four symmetry adapted linear combinations. Two of the are non-bonding degenerate, the completely symmetric one accounts for the huge stabilisation. The antibonding orbital is not occupied. See below for the schematics of $\ce{NO3^-}$, which is analogous to $\ce{SO3}$. You can find the complete MO scheme here.
Since each of the $\ce{S-O}$ bonds contributes equally to the lowest-lying $\pi$ orbital, the bond order is found to be about $1.33$. The calculated bond order is a little bit higher, most likely due to attractive electrostatic interactions.
Great question, I had to do a little reading to piece together an answer as I haven't worked with sulfur oxides in quite some time:
Sources:
• Wikipedia: Sulfur Trioxide
• Shriver & Atkins, Inorganic Chemistry, 4th ed., p388
As a simplistic explanation, the above sources state that the Lewis structure of $\ce{SO_3}$ contains a 2$^+$ charge on the central sulfur and negative charges on two of the three bonded oxygen atoms. In that case, $\ce{SO_3}$ contains one double bond and two single bonds, which is why people tend to list the overall bond order as 1.33. The actual bonding structure of $\ce{SO_3}$ is a little more complicated than that, as J. LS points out, so you might need to brush up on molecular orbital theory to get into the nitty-gritty of its bonding structure.
During the reaction of $\ce{SO2}$ and $\ce{O2}$, nascent oxygen is formed, which oxidises sulfur. In doing so, sulfur coordinates its lone pair to the nascent oxygen. Since only 2 electrons take part in this coordinate bond formation, it is just a single bond. Now the $\ce{SO3}$ thus formed has 4 bonds shared between 3 places. Hence the bond order.
http://gmatclub.com/forum/haas-vs-stern-150400.html | Find all School-related info fast with the new School-Specific MBA Forum
It is currently 23 May 2015, 04:28
### GMAT Club Daily Prep
#### Thank you for using the timer - this advanced tool can estimate your performance and suggest more practice questions. We have subscribed you to Daily Prep Questions via email.
Customized
for You
we will pick new questions that match your level based on your Timer History
Track
every week, we’ll send you an estimated GMAT score based on your performance
Practice
Pays
we will pick new questions that match your level based on your Timer History
# Events & Promotions
###### Events & Promotions in June
Open Detailed Calendar
# Haas vs Stern ($$)

### Which school?
• 44% [4]
• 55% [5]

Current Student | Joined: 04 Mar 2011 | Concentration: Finance, Economics | GMAT 1: 700 Q44 V41 | GMAT 2: 720 Q47 V41 | WE: Law (Non-Profit and Government)

Haas vs Stern ($$) [#permalink] 04 Apr 2013, 10:05
Hello everyone,
Never thought I'd end up launching a topic in this section, but I'm kind of torn. I have three offers and I have real trouble choosing between two of them. I really like all three, so basically I would be happy to go to any of them, but I also have a ranking in my mind. Thus, I didn't include Yale in the poll: since Haas is my top choice and neither offered $$, I'd take Haas over Yale for sure. I have a non-traditional background and my career goals are also somewhat non-traditional, although they are finance-related. I'm also not planning on staying in the US after graduation.

**Haas** My first choice. They have all I need academically; I also visited and simply fell in love with the place and the people.

**Stern** Finance academics are obviously great, plus NY isn't a bad place to live either, both from a personal and career standpoint.

***

Basically the question is: would you leave the money on the table in my place? (It's 1 year's tuition.) Any advice/opinion much appreciated!

Current Student | Status: Too close for missiles, switching to guns. | Joined: 23 Oct 2012 | Location: United States | Schools: Johnson (Cornell) - Class of 2015 | WE: Military Officer (Military & Defense)

Re: Haas vs Stern ($$) [#permalink] 04 Apr 2013, 10:34
5
KUDOS
One year's tuition is nothing to sneeze at, but it sounds like Haas is the answer. Money is a renewable resource: it comes and goes. Time is not renewable. The next two years of your life are only going to happen once. Do you want to go to Stern and constantly play the "what if" game in regards to Haas?

You got into your top choice. Go and don't look back; many of us on this forum wish we could be so fortunate as to have been admitted by our top choice.
Current Student
Joined: 26 May 2010
Posts: 719
Location: United States (MA)
Concentration: Strategy
Schools: MIT Sloan - Class of 2015
WE: Consulting (Mutual Funds and Brokerage)
Followers: 16
Kudos [?]: 203 [2] , given: 641
Re: Haas vs Stern ($$) [#permalink] 04 Apr 2013, 11:51

2
KUDOS

CobraKai wrote:
One year's tuition is nothing to sneeze at, but it sounds like Haas is the answer. Money is a renewable resource: it comes and goes. Time is not renewable. The next two years of your life are only going to happen once. Do you want to go to Stern and constantly play the "what if" game in regards to Haas?

You got into your top choice. Go and don't look back; many of us on this forum wish we could be so fortunate as to have been admitted by our top choice.

Sage advice. Two other things to consider: (1) cost of living will be much lower at Haas, and (2) Berkeley, I believe, has a much better brand abroad than NYU. So, no, you're not crazy for wanting to choose Haas in this situation, although I wouldn't blame you if you selected Stern either!

Current Student
Status: CBS - Class of 2015
Joined: 05 Sep 2012
Posts: 147
Location: United States
Followers: 1
Kudos [?]: 25 [2], given: 96

Re: Haas vs Stern ($$) [#permalink] 04 Apr 2013, 18:36
2
KUDOS
CobraKai wrote:
One year's tuition is nothing to sneeze at, but it sounds like Haas is the answer. Money is a renewable resource: it comes and goes. Time is not renewable. The next two years of your life are only going to happen once. Do you want to go to Stern and constantly play the "what if" game in regards to Haas?
You got into your top choice. Go and don't look back; many of us on this forum wish we could be so fortunate as to have been admitted by our top choice.
https://courses.lumenlearning.com/wm-accountingformanagers/chapter/expressions-with-percents/

## Solving Problems Using Percents
### Learning Outcome
• Evaluate expressions and word problems involving percents
In this section we will solve percent questions by identifying the parts of the problem. We’ll look at a common application of percent—tips to a server at a restaurant—to see how to set up a basic percent application.
When Aolani and her friends ate dinner at a restaurant, the bill came to $\$80$. They wanted to leave a $20\%$ tip. What amount would the tip be?
To solve this, we want to find what amount is $20\%$ of $\$80$. The $\$80$ is called the base. The percent is the given $20\%$. The amount of the tip would be $0.20(80)$, or $\$16$. To find the amount of the tip, we multiplied the percent by the base.
A $20\%$ tip for an $\$80$ restaurant bill comes out to $\$16$.
### Pieces of a Percent Problem
Percent problems involve three quantities: the base amount (the whole), the percent, and the amount (a part of the whole or partial amount).
The amount is a percent of the base.
Let’s look at another example:
Jeff has a Guitar Strings coupon for $15\%$ off any purchase of $\$100$ or more. He wants to buy a used guitar that has a price tag of $\$220$ on it. Jeff wonders how much money the coupon will take off the original $\$220$ price.

Problems involving percents will have some combination of these three quantities to work with: the percent, the amount, and the base. The percent has the percent symbol (%) or the word percent. In the problem above, $15\%$ is the percent off the purchase price. The base is the whole amount or original amount. In the problem above, the “whole” price of the guitar is $\$220$, which is the base. The amount is the unknown and what we will need to calculate.
There are three cases: a missing amount, a missing percent, or a missing base. Let’s take a look at each possibility.
## Solving for the Amount
When solving for the amount in a percent problem, you will multiply the percent (as a decimal or fraction) by the base. Typically we choose the decimal value for percent.
$\text{percent}\cdot{\text{base}}=\text{amount}$
### Example
Find $50\%$ of $20$
Solution:
First identify each piece of the problem:
percent: $50\%$ or $.5$
base: $20$
amount: unknown
Now plug them into your equation $\text{percent}\cdot{\text{base}}=\text{amount}$
$.5\cdot{20}= ?$
$.5\cdot{20}= 10$
Therefore, $10$ is the amount or part that is $50\%$ of $20$.
### Example
What is $25\%$ of $80$?
## Solving for the Percent
When solving for the percent in a percent problem, you will divide the amount by the base. The equation above is rearranged, and the percent will come back as a decimal or fraction, which you can report in the form asked of you.
$\Large{\frac{\text{amount}}{\text{base}}}\normalsize=\text{percent}$
### Example
What percent of $320$ is $80$?
Solution:
First identify each piece of the problem:
percent: unknown
base: $320$
amount: $80$
Now plug the values into your equation $\Large{\frac{\text{amount}}{\text{base}}}\normalsize=\text{percent}$
$\large\frac{80}{320}\normalsize=?$
$\large\frac{80}{320}\normalsize=.25$
Therefore, $80$ is $25\%$ of $320$.
## Solving for the Base
When solving for the base in a percent problem, you will divide the amount by the percent (as a decimal or fraction). The equation above is rearranged and you will find the base after plugging in the values.
$\Large{\frac{\text{amount}}{\text{percent}}}\normalsize=\text{base}$
### Example
$60$ is $40\%$ of what number?
Solution:
First identify each piece of the problem:
percent: $40\%$ or $.4$
base: unknown
amount: $60$
Now plug the values into your equation $\Large{\frac{\text{amount}}{\text{percent}}}\normalsize=\text{base}$
$(60)\div(.4)=?$
$(60)\div(.4)=150$
Therefore, $60$ is $40\%$ of $150$.
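The three cases can be checked with a few lines of plain Python (the function names here are illustrative, not from the text), using the worked examples from this section:

```python
# percent * base = amount, and its two rearrangements
def amount(percent, base):
    return percent * base          # solving for the amount

def percent_of(part, base):
    return part / base             # solving for the percent

def base_from(part, percent):
    return part / percent          # solving for the base

print(amount(0.50, 20))       # 10.0: 50% of 20 is 10
print(percent_of(80, 320))    # 0.25: 80 is 25% of 320
print(base_from(60, 0.40))    # ~150: 60 is 40% of 150 (floating-point fuzz aside)
```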
### Example
An article says that $15\%$ of a non-profit’s donations, about $\$30,000$ a year, comes from individual donors. What is the total amount of donations the non-profit receives?
Many applications of percent occur in our daily lives, such as tips, sales tax, discount, and interest. To solve these applications we’ll translate to a basic percent equation, just like those we solved in the previous examples in this section. Once you translate the sentence into a percent equation, you know how to solve it.
### Example
Dezohn and his girlfriend enjoyed a dinner at a restaurant, and the bill was $\$68.50$. They want to leave an $18\%$ tip. If the tip will be $18\%$ of the total bill, how much should the tip be?
Solution
| Step | Work |
| --- | --- |
| What are you asked to find? | The amount of the tip |
| What formula/equation should you use? | $\text{percent}\cdot{\text{base}}=\text{amount}$ |
| Substitute in the correct values. | $(.18)\cdot{68.50}$ |
| Solve. | $(.18)\cdot{68.50}=12.33$ |
| Write a complete sentence that answers the question. | The couple should leave a tip of $\$12.33$. |
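The same computation as a quick Python sketch, rounding to the nearest cent:

```python
# tip = percent (as a decimal) times base, rounded to cents
bill = 68.50
tip_rate = 0.18
tip = round(tip_rate * bill, 2)
print(tip)   # 12.33
```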
### Example
The label on Masao’s breakfast cereal said that one serving of cereal provides $85$ milligrams (mg) of potassium, which is $2\%$ of the recommended daily amount. What is the total recommended daily amount of potassium?
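Since no worked solution is shown above, here is one way to set it up in plain Python (solving for the base):

```python
# 85 mg is the part; 2% (as a decimal) is the percent; solve for the base
amount_mg = 85
percent = 0.02
base = amount_mg / percent
print(base)   # 4250.0 -> the recommended daily amount is 4250 mg
```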
https://lavelle.chem.ucla.edu/forum/viewtopic.php?f=39&t=52393

## Interaction Potential Energy and Radius
KDang_1D
Posts: 127
Joined: Fri Aug 30, 2019 12:15 am
Been upvoted: 1 time
### Interaction Potential Energy and Radius
If the interaction potential energy is given by the formula $E_{p} \propto \frac{\alpha_{1} \alpha_{2}}{r^{6}}$, then why does the attractive force increase as atomic size increases?
Sebastian Lee 1L
Posts: 157
Joined: Fri Aug 09, 2019 12:15 am
Been upvoted: 1 time
### Re: Interaction Potential Energy and Radius
When atomic size increases, polarizability ($\alpha$) increases because the electrons are held less tightly by the nucleus. High polarizability creates the stronger intermolecular forces described by the larger interaction potential energy. The distance between molecules is important, but I think that atomic radius has a much more significant effect on polarizability than it does on the distance between molecules. Note that the inverse proportionality to r is relevant when discussing shape (rod vs sphere), because a rod shape will have closer dipole moments and thus a stronger interaction energy.
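A quick numeric illustration of that trade-off (plain Python with arbitrary units; the factor-of-two values are made up purely for comparison):

```python
# Magnitude of the London-type interaction energy, E ~ a1*a2 / r^6
def interaction_energy(alpha1, alpha2, r):
    return alpha1 * alpha2 / r**6

base = interaction_energy(1.0, 1.0, 1.0)
print(interaction_energy(2.0, 2.0, 1.0) / base)  # 4.0: doubling both alphas quadruples E
print(interaction_energy(1.0, 1.0, 2.0) / base)  # 0.015625: doubling r cuts E by 2**6 = 64
```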
Rohit Ghosh 4F
Posts: 99
Joined: Thu Jul 25, 2019 12:17 am
### Re: Interaction Potential Energy and Radius
Although polarizability is a factor in determining the potential energy, the distance between molecules is much more important, since that value is raised to the sixth power.
https://physics.stackexchange.com/questions/341766/multiplying-significant-figures-and-decimal-places

# Multiplying significant figures and decimal places
One of my lecturers said today that multiplying a number leaves the number of significant figures to which that number is correct unchanged; however, it doesn't leave the number of decimal places correct.
He gave the following example: $128\times 0.99687$.
According to him, if 0.99687 is correct to 4 dp, then the product is not correct to 4 dp. However, if 0.99687 is correct to 4 sf, then the product is also correct to 4 sf.
I am struggling to see why this is the case. I have tried thinking about how decimal places differ from significant figures but I never really learned this in detail, only the "mechanics" of how to work dp's and sf's out, so any help would be appreciated.
https://devsim.net/symdiff.html

# SYMDIFF

## Overview
SYMDIFF is a tool capable of evaluating derivatives of symbolic expressions. Using a natural syntax, it is possible to manipulate symbolic equations in order to aid derivation of equations for a variety of applications. It has been tailored for use within DEVSIM.
## Syntax

### Variables and numbers
Variables and numbers are the basic building blocks for expressions. A variable is defined as any sequence of characters beginning with a letter and followed by letters, integer digits, and the _ character. Note that the letters are case sensitive, so that a and A are not the same variable. Any other characters are considered to be either mathematical operators or invalid, even if there is no space between the character and the rest of the variable name.
Examples of valid variable names are:
a, dog, var1, var_2
Numbers can be integer or floating point. Scientific notation is accepted as a valid syntax. For example:
1.0, 1.0e-2, 3.4E-4
### Basic expressions
Table 8: Basic expressions involving unary, binary, and logical operators.

| Expression | Description |
| --- | --- |
| `(exp1)` | Parenthesis for changing precedence |
| `+exp1` | Unary Plus |
| `-exp1` | Unary Minus |
| `!exp1` | Logical Not |
| `exp1 ^ exp2` | Exponentiation |
| `exp1 * exp2` | Multiplication |
| `exp1 / exp2` | Division |
| `exp1 + exp2` | Addition |
| `exp1 - exp2` | Subtraction |
| `exp1 < exp2` | Test Less |
| `exp1 <= exp2` | Test Less Equal |
| `exp1 > exp2` | Test Greater |
| `exp1 >= ex2` | Test Greater Equal |
| `exp1 == exp2` | Test Equality |
| `exp1 != exp2` | Test Inequality |
| `exp1 && exp2` | Logical And |
| `exp1 \|\| exp2` | Logical Or |
| `variable` | Independent Variable |
| `number` | Integer or decimal number |

In Table 8, the basic syntax for the language is presented. An expression may be composed of variables and numbers tied together with mathematical operations. Precedence increases from the bottom of the table to the top, and operators listed at the same level (for example, `*` and `/`) share the same precedence.
In the expression a + b * c, the multiplication will be performed before the addition. In order to override this precedence, parenthesis may be used. For example, in (a + b) * c, the addition operation is performed before the multiplication.
The logical operators treat nonzero values as true and zero values as false. The test operators evaluate their numerical arguments and result in 0 for false and 1 for true.
It is important to note that, since values are based on double-precision arithmetic, testing for equality with values other than 0.0 may yield unexpected results.
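For instance, in Python (which uses the same double-precision arithmetic) an exact-equality test on computed values can fail where a tolerance test succeeds:

```python
# Classic double-precision pitfall: 0.1 + 0.2 is not exactly 0.3.
a = 0.1 + 0.2
print(a == 0.3)               # False: a is actually 0.30000000000000004
print(abs(a - 0.3) < 1e-12)   # True: compare against a tolerance instead
```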
### Functions
Table 9: Predefined functions.

| Function | Description |
| --- | --- |
| `acosh(exp1)` | Inverse Hyperbolic Cosine |
| `asinh(exp1)` | Inverse Hyperbolic Sine |
| `atanh(exp1)` | Inverse Hyperbolic Tangent |
| `B(exp1)` | Bernoulli Function |
| `dBdx(exp1)` | derivative of Bernoulli function |
| `derfcdx(exp1)` | derivative of complementary error function |
| `derfdx(exp1)` | derivative of error function |
| `dFermidx(exp1)` | derivative of Fermi Integral |
| `dInvFermidx(exp1)` | derivative of InvFermi Integral |
| `dot2d(exp1x, exp1y, exp2x, exp2y)` | `exp1x*exp2x + exp1y*exp2y` |
| `erfc(exp1)` | complementary error function |
| `erf(exp1)` | error function |
| `exp(exp1)` | exponent |
| `Fermi(exp1)` | Fermi Integral |
| `ifelse(test, exp1, exp2)` | if test is true, then evaluate exp1, otherwise exp2 |
| `if(test, exp)` | if test is true, then evaluate exp, otherwise 0 |
| `InvFermi(exp1)` | inverse of the Fermi Integral |
| `log(exp1)` | natural log |
| `max(exp1, exp2)` | maximum of the two arguments |
| `min(exp1, exp2)` | minimum of the two arguments |
| `pow(exp1, exp2)` | take exp1 to the power of exp2 |
| `sgn(exp1)` | sign function |
| `step(exp1)` | unit step function |
| `kahan3(exp1, exp2, exp3)` | extended precision addition of arguments |
| `kahan4(exp1, exp2, exp3, exp4)` | extended precision addition of arguments |
| `vec_max` | maximum of all the values over the entire region or interface |
| `vec_min` | minimum of all the values over the entire region or interface |
| `vec_sum` | sum of all the values over the entire region or interface |
In Table 9 are the built-in functions of SYMDIFF. Note that the pow function uses the , operator to separate its arguments. In addition, an expression like pow(a,b+y) is equivalent to a^(b+y). Both exp and log are provided since many derivative expressions can be expressed in terms of these two functions. It is possible to nest expressions within functions and vice-versa.
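As a numeric aside, the Bernoulli function B(x) listed above is commonly defined in TCAD codes as x / (e^x - 1); the documentation does not spell out the definition, so treat the following Python sketch as an assumption rather than SYMDIFF's exact implementation:

```python
import math

def B(x):
    # Assumed definition: B(x) = x / (exp(x) - 1).
    # expm1 keeps precision when x is close to zero.
    return x / math.expm1(x)

def dBdx(x, h=1e-6):
    # Central finite difference as a spot check on the derivative.
    return (B(x + h) - B(x - h)) / (2.0 * h)

print(B(1.0))     # 1/(e - 1), about 0.582
print(B(1e-12))   # close to 1: B(x) -> 1 as x -> 0
print(dBdx(1.0))  # about -0.339
```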
### Commands
Table 10: Commands.

| Command | Description |
| --- | --- |
| `diff(obj1, var)` | Take derivative of obj1 with respect to variable var |
| `expand(obj)` | Expand out all multiplications into a sum of products |
| `help` | Print description of commands |
| `scale(obj)` | Get constant factor |
| `sign(obj)` | Get sign as 1 or -1 |
| `simplify(obj)` | Simplify as much as possible |
| `subst(obj1, obj2, obj3)` | Substitute obj3 for obj2 into obj1 |
| `unscaledval(obj)` | Get value without constant scaling |
| `unsignedval(obj)` | Get unsigned value |
Commands are shown in Table 10. While they appear to have the same form as functions, they are special in the sense that they manipulate expressions and are never present in the expression which results. For example, note the result of the following command
```
> diff(a*b, b)
a
```
### User functions
Table 11: Commands for user functions.

| Command | Description |
| --- | --- |
| `clear(name)` | Clears the name of a user function |
| `declare(name(arg1, arg2, ...))` | Declare function name taking dummy arguments arg1, arg2, …; derivatives are assumed to be 0 |
| `define(name(arg1, arg2, ...), obj1, obj2, ...)` | Declare function name taking arguments arg1, arg2, … having corresponding derivatives obj1, obj2, … |
Commands for specifying and manipulating user functions are listed in Table 11. They are used to define new user functions, as well as the derivatives of those functions with respect to their variables. For example, the following expression defines a function named f which takes one argument.
```
> define(f(x), 0.5*x)
```
The list after the function prototype defines the derivatives with respect to each of the independent variables. Once defined, the function may be used in any other expression. In addition, any expression can be used as an argument. For example:
```
> diff(f(x*y),x)
((0.5 * (x * y)) * y)
> simplify((0.5 * (x * y)) * y)
(0.5 * x * (y^2))
```
The chain rule is applied to ensure that the derivative is correct. This can be expressed as
$\frac{\partial}{\partial x} f \left( u, v, \ldots \right) = \frac{\partial u}{\partial x} \cdot \frac{\partial}{\partial u} f \left( u, v, \ldots \right) + \frac{\partial v}{\partial x} \cdot \frac{\partial}{\partial v} f \left( u, v, \ldots \right) + \ldots$
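The chain rule above is exactly what forward-mode differentiation implements. Here is a small self-contained Python sketch (the `Dual` class and helper names are illustrative, not part of SYMDIFF or DEVSIM) that reproduces the `diff(f(x*y),x)` result from the earlier example:

```python
class Dual:
    """Value/derivative pair; multiplication applies the chain/product rule."""
    def __init__(self, val, dot=0.0):
        self.val, self.dot = val, dot

    def __mul__(self, other):
        if not isinstance(other, Dual):
            other = Dual(other)
        # product rule: (u*v)' = u'*v + u*v'
        return Dual(self.val * other.val,
                    self.dot * other.val + self.val * other.dot)

    __rmul__ = __mul__

def f(u):
    # the user function from define(f(x), 0.5*x)
    return 0.5 * u

x, y = 3.0, 4.0
# seed dx/dx = 1 and dy/dx = 0 to differentiate with respect to x
result = f(Dual(x, 1.0) * Dual(y, 0.0))
print(result.dot)   # 2.0, matching simplify(diff(f(x*y),x)) = 0.5*y
```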
The declare command is required when the derivatives of two user functions are based on one another. For example:
```
> declare(cos(x))
cos(x)
> define(sin(x),cos(x))
sin(x)
> define(cos(x),-sin(x))
cos(x)
```
When declared, a function's derivatives are set to 0 unless specified with a define command. It is now possible to use these expressions as desired.
```
> diff(sin(cos(x)),x)
(cos(cos(x)) * (-sin(x)))
> simplify(cos(cos(x)) * (-sin(x)))
(-cos(cos(x)) * sin(x))
```
### Macro assignment
The use of macro assignment allows the substitution of expressions into new expressions. Every time a command is successfully used, the resulting expression is assigned to a special macro definition, `$_`. In this example, the result of each command is substituted into the next.

```
> a+b
(a + b)
> $_-b
((a + b) - b)
> simplify($_)
a
```

In addition to the default macro definition, it is possible to specify a variable identifier by using the `$` character followed by an alphanumeric string beginning with a letter. In addition to letters and numbers, a `_` character may be used as well. A macro which has not previously been assigned will implicitly use 0 as its value.

This example demonstrates the use of macro assignment.

```
> $a1 = a + b
(a + b)
> $a2 = a - b
(a - b)
> simplify($a1+$a2)
(2 * a)
```
## Invoking SYMDIFF from DEVSIM

### Equation parser
The devsim.symdiff() command should be used when defining new functions to the parser. Since you do not specify regions or interfaces, it considers all strings to be independent variables, as opposed to models. Model Commands presents commands which have the concept of models. A `;` should be used to separate statements.
This is a sample invocation from DEVSIM
```
% symdiff(expr="subst(dog * cat, dog, bear)")
(bear * cat)
```
### Evaluating external math
The devsim.register_function() command is used to evaluate functions declared or defined within SYMDIFF. A Python procedure taking the same number of arguments may then be used. For example:
```python
from math import cos
from math import sin

from devsim import register_function, symdiff

symdiff(expr="declare(sin(x))")
symdiff(expr="define(cos(x), -sin(x))")
symdiff(expr="define(sin(x), cos(x))")
register_function(name="cos", nargs=1)
register_function(name="sin", nargs=1)
```
The cos and sin functions may then be used for model evaluation. For improved efficiency, it is possible to create procedures written in C or C++ and load them into Python.
### Models
When used within the model commands discussed in Model Commands, DEVSIM has been extended to recognize model names in the expressions. In this situation, the derivative of a model named model with respect to another model named variable is then model:variable.
During the element assembly process, DEVSIM evaluates all models of an equation together. While the expressions in models and their derivatives are independent, the software uses a caching scheme to ensure that redundant calculations are not performed. It is recommended, however, that users developing their own models investigate creating intermediate models in order to improve their understanding of the equations they wish to assemble.