https://www.alphacodingskills.com/cs/pages/cs-swap-two-numbers-without-using-temporary-variable.php
# C# - Swap two numbers without using Temporary Variable
The values of two variables can be swapped without using any temporary variable. The techniques below use the +, * and / arithmetic operators and the bitwise XOR operator.
### Example: Using + operator
In the example below, the + operator is used to swap the values of two variables x and y.
```csharp
using System;

namespace MyApplication {
  class MyProgram {
    static void swap(int x, int y) {
      Console.WriteLine("Before Swap.");
      Console.WriteLine("x = " + x);
      Console.WriteLine("y = " + y);

      // Swap technique
      x = x + y;
      y = x - y;
      x = x - y;

      Console.WriteLine("After Swap.");
      Console.WriteLine("x = " + x);
      Console.WriteLine("y = " + y);
    }

    static void Main(string[] args) {
      swap(10, 25);
    }
  }
}
```
Output:

```
Before Swap.
x = 10
y = 25
After Swap.
x = 25
y = 10
```
### Example: Using * operator
Like the + operator, the * operator can also be used to swap the values of two variables x and y.
```csharp
using System;

namespace MyApplication {
  class MyProgram {
    static void swap(int x, int y) {
      Console.WriteLine("Before Swap.");
      Console.WriteLine("x = " + x);
      Console.WriteLine("y = " + y);

      // Swap technique
      x = x * y;
      y = x / y;
      x = x / y;

      Console.WriteLine("After Swap.");
      Console.WriteLine("x = " + x);
      Console.WriteLine("y = " + y);
    }

    static void Main(string[] args) {
      swap(10, 25);
    }
  }
}
```
Output:

```
Before Swap.
x = 10
y = 25
After Swap.
x = 25
y = 10
```
### Example: Using / operator
Similarly, the / operator can also be used to swap the values of two variables x and y. Note that this example uses float variables: with integers, the first division would truncate (for example, 10 / 25 is 0) and the technique would fail.
```csharp
using System;

namespace MyApplication {
  class MyProgram {
    static void swap(float x, float y) {
      Console.WriteLine("Before Swap.");
      Console.WriteLine("x = " + x);
      Console.WriteLine("y = " + y);

      // Swap technique
      x = x / y;
      y = x * y;
      x = y / x;

      Console.WriteLine("After Swap.");
      Console.WriteLine("x = " + x);
      Console.WriteLine("y = " + y);
    }

    static void Main(string[] args) {
      swap(10, 25);
    }
  }
}
```
Output:

```
Before Swap.
x = 10
y = 25
After Swap.
x = 25
y = 10
```
### Example: Using bitwise operator
The bitwise XOR (^) operator can also be used to swap the values of two variables x and y. XOR returns 1 when exactly one of the two bits at the same position in the operands is 1, and returns 0 otherwise.
```csharp
using System;

namespace MyApplication {
  class MyProgram {
    static void swap(int x, int y) {
      Console.WriteLine("Before Swap.");
      Console.WriteLine("x = " + x);
      Console.WriteLine("y = " + y);

      // Swap technique
      x = x ^ y;
      y = x ^ y;
      x = x ^ y;

      Console.WriteLine("After Swap.");
      Console.WriteLine("x = " + x);
      Console.WriteLine("y = " + y);
    }

    static void Main(string[] args) {
      swap(10, 25);
    }
  }
}
```
Output:

```
Before Swap.
x = 10
y = 25
After Swap.
x = 25
y = 10
```
### Disadvantages of using above methods
• The multiplication- and division-based approaches fail if one of the variables is 0, because they divide by an intermediate value of 0 (in C#, integer division by zero throws a DivideByZeroException).
• The addition-based approach may fail due to arithmetic overflow: if x and y are large enough, x + y can fall outside the range of int (in C#, this throws an OverflowException in a checked context).
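In practice, none of these tricks is needed: a temporary variable, or C#'s tuple deconstruction (available since C# 7.0), swaps any two values without arithmetic, so it cannot overflow or divide by zero. A minimal sketch:

```csharp
using System;

namespace MyApplication {
  class SwapDemo {
    static void Main(string[] args) {
      int x = 10, y = 25;

      // Tuple deconstruction assigns both values at once;
      // no arithmetic is performed, so 0 and int.MaxValue are safe.
      (x, y) = (y, x);

      Console.WriteLine("x = " + x);  // x = 25
      Console.WriteLine("y = " + y);  // y = 10
    }
  }
}
```

This prints x = 25 and y = 10, and works unchanged for values such as 0 or int.MaxValue that break the arithmetic tricks above.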
https://bird.bcamath.org/handle/20.500.11824/1;jsessionid=1211E8A4EC42E4D9A639302A7B30A3E1
### Recent Submissions
• #### Bilinear Spherical Maximal Functions of Product Type
(2021-08-12)
In this paper we introduce and study a bilinear spherical maximal function of product type in the spirit of bilinear Calderón–Zygmund theory. This operator is different from the bilinear spherical maximal function considered ...
• #### Variation bounds for spherical averages
(2021-06-22)
We consider variation operators for the family of spherical means, with special emphasis on $L^p\to L^q$ estimates
• #### On a probabilistic model for martensitic avalanches incorporating mechanical compatibility
(2021-07-01)
Building on the work by Ball et al (2015 MATEC Web of Conf. 33 02008), Cesana and Hambly (2018 A probabilistic model for interfaces in a martensitic phase transition arXiv:1810.04380), Torrents et al (2017 Phys. Rev. E 95 ...
• #### Pointwise Convergence over Fractals for Dispersive Equations with Homogeneous Symbol
(2021-08-24)
We study the problem of pointwise convergence for equations of the type $i\hbar\partial_tu + P(D)u = 0$, where the symbol $P$ is real, homogeneous and non-singular. We prove that for initial data $f\in H^s(\mathbb{R}^n)$ ...
• #### A comparison principle for vector valued minimizers of semilinear elliptic energy, with application to dead cores
(2021)
We establish a comparison principle providing accurate upper bounds for the modulus of vector valued minimizers of an energy functional, associated when the potential is smooth, to elliptic gradient systems. Our assumptions ...
• #### Echo Chains as a Linear Mechanism: Norm Inflation, Modified Exponents and Asymptotics
(2021-07-30)
In this article we show that the Euler equations, when linearized around a low frequency perturbation to Couette flow, exhibit norm inflation in Gevrey-type spaces as time tends to infinity. Thus, echo chains are shown to ...
• #### Leaky Cell Model of Hard Spheres
(9-03-20)
We study packings of hard spheres on lattices. The partition function, and therefore the pressure, may be written solely in terms of the accessible free volume, i.e., the volume of space that a sphere can explore without ...
• #### Double layered solutions to the extended Fisher–Kolmogorov P.D.E.
(2021-06-22)
We construct double layered solutions to the extended Fisher–Kolmogorov P.D.E., under the assumption that the set of minimal heteroclinics of the corresponding O.D.E. satisfies a separation condition. The aim of our work ...
• #### RESTRICTED TESTING FOR POSITIVE OPERATORS
(2020)
We prove that for certain positive operators T, such as the Hardy-Littlewood maximal function and fractional integrals, there is a constant D>1, depending only on the dimension n, such that the two weight norm inequality ...
• #### Static and Dynamical, Fractional Uncertainty Principles
(2021-03)
We study the process of dispersion of low-regularity solutions to the Schrödinger equation using fractional weights (observables). We give another proof of the uncertainty principle for fractional weights and use it to get ...
• #### Extensions of the John-Nirenberg theorem and applications
(2021)
The John–Nirenberg theorem states that functions of bounded mean oscillation are exponentially integrable. In this article we give two extensions of this theorem. The first one relates the dyadic maximal function to the ...
• #### Convergence over fractals for the Schrödinger equation
(2021-01)
We consider a fractal refinement of the Carleson problem for the Schrödinger equation, that is to identify the minimal regularity needed by the solutions to converge pointwise to their initial data almost everywhere with ...
• #### Multilinear operator-valued Calderón–Zygmund theory
(2020)
We develop a general theory of multilinear singular integrals with operator- valued kernels, acting on tuples of UMD Banach spaces. This, in particular, involves investigating multilinear variants of the R-boundedness ...
• #### Invariant measures for the dnls equation
(2020-10-02)
We describe invariant measures associated to the integrals of motion of the periodic derivative nonlinear Schrödinger equation (DNLS) constructed in [MR3518561, Genovese2018]. The construction works for small $L^2$ ...
• #### End-point estimates, extrapolation for multilinear muckenhoupt classes, and applications
(2019)
In this paper we present the results announced in the recent work by the first, second, and fourth authors of the current paper concerning Rubio de Francia extrapolation for the so-called multilinear Muckenhoupt classes. ...
• #### Magnetic domain-twin boundary interactions in Ni-Mn-Ga
(2020-04)
The stress required for the propagation of twin boundaries in a sample with fine twins increases monotonically with ongoing deformation. In contrast, for samples with a single twin boundary, the stress exhibits a plateau ...
• #### Sensitivity of twin boundary movement to sample orientation and magnetic field direction in Ni-Mn-Ga
(2019)
When applying a magnetic field parallel or perpendicular to the long edge of a parallelepiped Ni-Mn-Ga stick, twin boundaries move instantaneously or gradually through the sample. We evaluate the sample shape dependence ...
• #### Generalized Poincaré-Sobolev inequalities
(2020-12)
Poincaré-Sobolev inequalities are very powerful tools in mathematical analysis which have been extensively used for the study of differential equations and their validity is intimately related with the geometry of the ...
• #### Sparse and weighted estimates for generalized Hörmander operators and commutators
(2019)
In this paper a pointwise sparse domination for generalized Hörmander operators and also for iterated commutators with those operators is provided, generalizing the sparse domination result in [24]. Relying upon that sparse domination ...
• #### The Well Order Reconstruction Solution for Three-Dimensional Wells, in the Landau-de Gennes theory.
(2019)
We study nematic equilibria on three-dimensional square wells, with emphasis on Well Order Reconstruction Solutions (WORS) as a function of the well size, characterized by λ, and the well height denoted by ε. The WORS ...
http://brian.weatherson.org/kahis/ratbel.html
# Chapter 8 Rationality
This chapter discusses the role of rational belief in the version of IRT that I defend. It starts by noting that the theory allows for a new kind of Dharmottara case, where a rational, true belief is not actually knowledge. And I argue that it is a good thing it allows this, for once we see the kind of case in question, it is plausible that it is a Dharmottara case. Then I present two arguments, one of them due to Timothy Williamson and the other novel, for the conclusion that it is possible to have rational credence 1 in a proposition without fully believing it. If that’s right, it refutes two prominent theories of the relationship between credence and full belief. The first is that full belief is credence one, and the second is that full belief is credence above some interest-invariant threshold. These are metaphysical theses about the nature of belief, but each of them comes with a matching normative thesis: that rational belief is a matter of having such-and-such rational credence. I’m going to focus primarily on the second of these, that rational belief is a matter of having rational credence above some interest-invariant threshold. If that fails, then so does the theory that rational belief is a matter of rationally having credence 1. But there are independent problems for the view that the threshold is high but not maximal, and the arguments against that view are less controversial than the ones against the view that rational belief is rational maximal credence. I’ll end the chapter by noting how the view of rational belief that comes out of IRT is immune to those problems.
## 8.1 Atomism about Rational Belief
In chapter 3 I argued for two individually necessary and jointly sufficient conditions for belief.53 They are
1. In some possible decision problem, p is taken for granted.
2. For every question the agent is interested in, the agent answers the question the same way (i.e., giving the same answer for the same reasons) whether the question is asked unconditionally or conditional on p.
At this point one might think that offering a theory of rational belief would be easy. It is rational to believe p just in case it is rational to satisfy these conditions. Unfortunately, this nice thought can’t be right. It can be irrational to satisfy these conditions while rationally believing p.
Coraline is like Anisa and Chamari, in that she has read a reliable book saying that the Battle of Agincourt was in 1415. And she now believes that the Battle of Agincourt was indeed in 1415, for the very good reason that she read it in a reliable book.
In front of her is a sealed envelope, and inside the envelope a number is written on a slip of paper. Let X denote that number, non-rigidly. (So when I say Coraline believes X = x, it means she believes that the number written on the slip of paper is x, where x rigidly denotes some number.) Coraline is offered the following bet:
• If she declines the bet, nothing happens.
• If she accepts the bet, and the Battle of Agincourt was in 1415, she wins $1.
• If she accepts the bet, and the Battle of Agincourt was not in 1415, she loses X dollars.

For some reason, Coraline is convinced that X = 10. This is very strange, since she was shown the slip of paper just a few minutes ago, and it clearly showed that X = 10^9. Coraline wouldn’t bet on when the Battle of Agincourt was at odds of a billion to one. But she would take that bet at 10 to 1, which is what she thinks she is faced with. Indeed, she doesn’t even conceptualise it as a bet; she thinks it is a free dollar. Right now, she is disposed to treat the date of the battle as a given. She is disposed to lose this disposition should a very long odds bet appear to depend on it. But she doesn’t believe she is facing such a bet.

So Coraline accepts the bet; she thinks it is a free dollar. And that’s when the battle took place, so she wins the dollar. All’s well that ends well. But it was really a wildly irrational bet to take. You shouldn’t bet at those odds on something you remember from a history book. Neither memory nor history books are that reliable. Coraline was not rational to treat the questions Should I take this bet?, and Conditional on the Battle of Agincourt being in 1415, should I take this bet? the same way. Her treating them the same way was fortunate - she won a dollar - but irrational.

Yet it seems odd to say that Coraline’s belief about the Battle of Agincourt was irrational. What was irrational was her belief about the envelope, not her belief about the battle. To say that a particular disposition was irrational is to make a holistic assessment of the person with the disposition. But whether a belief is rational or not is, relatively speaking, atomistic. That suggests the following condition on rational belief. S’s belief that p is irrational if

1. S irrationally has one of the dispositions that is characteristic of belief that p; and
2. What explains S having a disposition that is irrational in that way is her attitudes towards p, not (solely) her attitudes towards other propositions, or her skills in practical reasoning.

In “Knowledge, Bets and Interests” I gave a similar theory about these cases - I said that S’s belief that p was irrational if the irrational dispositions were caused by an irrationally high credence in p. I mean the account I’m giving here to be ever so slightly more general. I’ll come back to that below, because first I want to spell out the second clause.

Intuitively, Coraline’s irrational acceptance of the bet is explained by her (irrational) belief about X, not her (rational) belief about the Battle of Agincourt. We can take the relevant notion of explanation as a primitive if we like; it’s in no worse philosophical shape than other notions we take as primitive. But it is possible to spell it out a little more. Coraline has a pattern of irrational dispositions related to the envelope. If you offer her $50 or X dollars, she’ll take the $50. If you change the bet so it isn’t about Agincourt, but is instead about any other thing she has excellent but not quite conclusive evidence for, she’ll still take the bet.
On the other hand, she does not have a pattern of irrational dispositions related to the Battle of Agincourt. She has this one, but if you change the payouts so they are not related to this particular envelope, then for all we have said so far, she won’t do anything irrational.
That difference in patterns matters. We know that it’s the beliefs about the envelope, and not the beliefs about the battle, that are explanatory because of this pattern. We could try and create a reductive analysis of explanation in clause 2 using facts about patterns, like the way Lewis tries to create a reductive analysis of causation using similar facts about patterns in “Causation as Influence”. But doing so would invariably run up against edge cases that would be more trouble to resolve than they are worth.
That’s because there are ever so many ways in which someone could have an irrational disposition about any particular case. We can imagine Coraline having a rational belief about the envelope, but still taking the bet because of any of the following reasons:
• It has been her life goal to lose a billion dollars in a day, so taking the bet strictly dominates not taking it.
• She believes (irrationally) that anyone who loses a billion dollars in a day goes to heaven, and she (rationally) values heaven above any monetary amount.
• She consistently makes reasoning errors about billions, so the prospect of losing a billion dollars rarely triggers an awareness that she should reconsider things she normally takes for granted.
The last one of these is especially interesting. The picture of rational agency I’m working with here owes a lot to the notion of epistemic vigilance, as developed by Dan Sperber and co-authors. The rational agent will have all these beliefs in their head that they will drop when the costs of being wrong about them are too high, or the costs of re-opening inquiry into them are too low. They can’t reason, at least in any conscious way, about whether to drop these beliefs, because to do that is, in some sense, to call the belief into doubt. And what’s at issue is whether they should call the belief into doubt. So what they need is some kind of disposition to replace a belief that p with an attitude that p is highly probable, and this disposition should correlate with the cases where taking p for granted will not maximise expected utility. This disposition will be a kind of vigilance. As Sperber et al show, we need some notion of vigilance to explain a lot of different aspects of epistemic evaluation, and I think it can be usefully pressed into service here.54
But if you need something like vigilance, then you have to allow that vigilance might fail. And maybe some irrational dispositions can be traced to that failure, and not to any propositional attitude the decider has. For example, if Coraline systematically fails to be vigilant when exactly one billion dollars is at stake, then we might want to say that her belief in p is still rational, and she is practically, rather than theoretically, irrational. (Why could this happen? Perhaps she thinks of Dr Evil every time she hears the phrase “One billion dollars”, and this distractor prevents her normally reliable skill of being vigilant from kicking in.)
If one tries to turn the vague talk of patterns of bets involving one proposition or another into a reductive analysis of when one particular belief is irrational, one will inevitably run into hard cases where a decider has multiple failures. We can’t say that what makes Coraline’s belief about the envelope, and not her belief about the battle, irrational is that if you replaced the envelope, she would invariably have a rational disposition. After all, she might have some other irrational belief about whatever we replace the envelope with. Or she might have some failure of practical reasoning, like a vigilance failure. Any kind of universal claim, like that it is only bets about the envelope that she gets wrong, won’t do the job we need.
In “Knowledge, Bets and Interests”, I tried to use the machinery of credences to make something like this point. The idea was that Coraline’s belief in p was rational because her belief just was her high credence in p, and that credence was rational. I still think that’s approximately right, but it can’t be the full story.
For one thing, beliefs and credences aren’t as closely connected metaphysically as this suggests. To have a belief in p isn’t just to have a high credence, it’s to be disposed to let p play a certain role. (This will become important in the next two sections.)
For another thing, it is hard to identify precisely what a credence is in the case of an irrational agent. The usual ways we identify credences, via betting dispositions or representation theorems, assume away all irrationality. But an irrational person might still have some rational beliefs.
Attempts to generalise accounts of credences so that they cover the irrational person will end up saying something like what I’ve said about patterns. What it is to have credence 0.6 in p isn’t to have a set of preferences that satisfies all the presuppositions of such and such a representation theorem, where the theorem says that one can be represented by a probability function Pr and a utility function U such that Pr(p) = 0.6. That can’t be right because some people will, intuitively, have credence about 0.6 in p while not uniformly conforming to these constraints. But what makes them intuitive cases of credence roughly 0.6 in p is that they generally behave like the perfectly rational person with credence 0.6 in p, and most of the exceptions are explained by features of their cognitive system other than their attitude to p.
In other words, we don’t have a full theory of credences for irrational beings right now, and when we get one, it won’t be much simpler than the theory in terms of patterns and explanations I’ve offered here. So it’s best for now to just understand belief in terms of a pattern of dispositions, and say that the belief is rational just in case that pattern is rational. And that might mean that on some occasions p-related activity is irrational even though the pattern of p-related activity is a rational pattern. Any given action, like any thing whatsoever, can be classified in any number of ways. What matters here is what explains the irrationality of a particular irrational act, and that will be a matter of which patterns of irrational dispositions the actor has.
However we explain Coraline’s belief, the upshot is that she has a rational, true belief that is not knowledge. This is a novel kind of Dharmottara case. (Or Gettier case for folks who prefer that nomenclature.) It’s not the exact kind of case that Dharmottara originally described. Coraline doesn’t infer anything about the Battle of Agincourt from a false belief. But it’s a mistake to think that the class of rational, true beliefs that are not knowledge form a natural kind. In general, negatively defined classes are disjunctive; there are ever so many ways to not have a property. An upshot of this discussion of Coraline is that there is one more kind of Dharmottara case than was previously recognised. But as, for example, Williamson (2013) and Nagel (2013) have shown, we have independent reason for thinking this is a very disjunctive class. So the fact that it doesn’t look anything like Dharmottara’s example shouldn’t make us doubt it is a rational, true belief that is not knowledge.
## 8.2 Coin Puzzles
So rational belief is not identical to rationally having the dispositions that constitute belief. But nor is rational belief a matter of rational high credence. In this section and the next I’ll argue that even rational credence 1 does not suffice for rational belief. Then in the next section I’ll run through some relatively familiar arguments that no threshold short of 1 could suffice for belief. If the argument of this section or the next is successful, those ‘familiar arguments’ will be unnecessary. But the two arguments I’m about to give are controversial even by the standards of a book arguing for IRT, so I’m including them as backups.
The point of these sections is primarily normative, but it should have metaphysical consequences. I’m interested in arguing against the ‘Lockean’ thesis that to believe p just is to have a high credence in p. Normally, this threshold of high enough credence for belief is taken to be interest-invariant, so this is a rival to IRT. But there is some variation in the literature about whether the phrase The Lockean Thesis refers to a metaphysical claim, belief is high credence, or a normative claim, rational belief is rational high credence. Since everyone who accepts the metaphysical claim also accepts the normative claim, and usually takes it to be a consequence of the metaphysical claim, arguing against the normative claim is a way of arguing against the metaphysical claim.
The first puzzle for this Lockean view comes from an argument that Timothy Williamson (2007) made about certain kinds of infinitary events. A fair coin is about to be tossed. It will be tossed repeatedly until it lands heads twice. The coin tosses will get faster and faster, so even if there is an infinite sequence of tosses, it will finish in a finite time. (This isn’t physically realistic, but this need not detain us. All that will really matter for the example is that someone could believe this will happen, and that’s physically possible.)
Consider the following three propositions:

A. At least one of the coin tosses will land either heads or tails.
B. At least one of the coin tosses will land heads.
C. At least one of the coin tosses after the first toss will land heads.
So if the first coin toss lands heads, and the rest land tails, B is true and C is false.
Now consider a few versions of the Red-Blue game (perhaps played by someone who takes this to be a realistic scenario). In the first instance, the red sentence says that B is true, and the blue sentence says that C is true. In the second instance, the red sentence says that A is true, and the blue sentence says that B is true. In both cases, it seems that the unique rational play is Red-True. But it’s really hard to explain this in a way consistent with the Lockean view.
Williamson argues that we have good reason to believe that the probability of all three sentences is 1. For B to be false requires C to be false, and for one more coin flip to land tails. So the probability that B is false is one-half the probability that C is false. But we also have good reason to believe that the probabilities of B and C are the same. In both cases, they are false if a countable infinity of coin flips lands tails. Assuming that the probability of some sequence having a property supervenes on the probabilities of individual events in that sequence (conditional, perhaps, on other events in the sequence), it follows that the probabilities of B and C are identical. And the only way for the probability that B is false to be half the probability that C is false, while B and C have the same probability, is for both of them to have probability 1. Since the probability of A is at least as high as the probability of B (since it is true whenever B is true, but not conversely), it follows that the probability of all three is 1.
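The probability claims in this argument can be written out as a short derivation (this formalisation is mine, not Williamson's):

$$\Pr(\neg B) = \tfrac{1}{2}\,\Pr(\neg C), \qquad \Pr(B) = \Pr(C).$$

Writing $$q = \Pr(\neg C)$$, the first equation gives $$\Pr(\neg B) = \tfrac{1}{2}q$$, while the second gives $$\Pr(\neg B) = q$$. Hence $$q = \tfrac{1}{2}q$$, so $$q = 0$$. Therefore $$\Pr(B) = \Pr(C) = 1$$, and since A is true whenever B is, $$\Pr(A) = 1$$ as well.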
But since betting on A weakly dominates betting on B, and betting on B weakly dominates betting on C, we shouldn’t have the same attitudes towards bets on these three propositions. Given a choice between betting on B and betting on C, we should prefer to bet on B (i.e., play Red-True when B and C are expressed by the red and blue sentences): there is no way that could make us worse off, and some way - B true and C false - it could make us better off.
Assume (something the Lockean may not wish to acknowledge) that to say something might be the case is to reject believing its negation. Then a rational person faced with these choices will not believe Either B is false or C is true; they will take its negation to be possible. But that proposition is at least as probable as C, so it too has probability 1. So probability 1 does not suffice for belief. This is a real problem for the Lockean - no probability suffices for belief, not even probability 1.
## 8.3 Playing Games
Some people might be nervous about resting too much weight on infinitary examples like the coin sequence. So I’ll show how the same puzzle arises in a simple, and finite, game.55 The game itself is a nice illustration of how a number of distinct solution concepts in game theory come apart. (Indeed, the use I’ll make of it isn’t a million miles from the use that Kohlberg and Mertens (1986) make of it.) To set the problem up, I need to say a few words about how I think of game theory. This won’t be at all original - most of what I say is taken from important works by Robert Stalnaker (1994, 1996, 1998, 1999). But the underlying philosophical points are important, and it is easy to get confused about them. (At least, I used to get these points all wrong, and that’s got to be evidence they are easy to get confused about, right?) So I’ll set out the basic points slowly, and then circle back to the puzzle for the Lockeans.56
Start with a simple decision problem, where the agent has a choice between two acts A1 and A2, and there are two possible states of the world, S1 and S2, and the agent knows the payouts for each act-state pair are given by the following table.
|  | $$S_1$$ | $$S_2$$ |
|---|---|---|
| A1 | 4 | 0 |
| A2 | 1 | 1 |
What to do? I hope you share the intuition that it is radically underdetermined by the information I’ve given you so far. If S2 is much more probable than S1, then A2 should be chosen; otherwise A1 should be chosen. (With Pr(S1) = p, choosing A1 has expected utility 4p while A2 guarantees 1, so A1 is better just in case p > 1/4.) But I haven’t said anything about the relative probability of those two states.

Now compare that to a simple game. Row has two choices, which I’ll call A1 and A2. Column also has two choices, which I’ll call S1 and S2. It is common knowledge that each player is rational, and that the payouts for the pairs of choices are given in the following table. (As always, Row’s payouts are given first.)
|  | $$S_1$$ | $$S_2$$ |
|---|---|---|
| A1 | 4, 0 | 0, 1 |
| A2 | 1, 0 | 1, 1 |
What should Row do? This one is easy. Column gets 1 for sure if she plays S2, and 0 for sure if she plays S1. So she’ll play S2. And given that she’s playing S2, it is best for Row to play A2.
You probably noticed that the game is just a version of the decision problem from a couple of paragraphs ago. The relevant states of the world are choices of Column. But that’s fine; the layout of that decision problem was neutral on what constituted the states S1 and S2. Note that the game can be solved without explicitly saying anything about probabilities. What is added to the (unsolvable) decision-theoretic problem is not information about probabilities, but information about Column’s payouts, and the fact that Column is rational. Those facts imply something about Column’s play, namely that she would play S2. And that settles what Row should do.
There’s something quite general about this example. What’s distinctive about game theory isn’t that it involves any special kinds of decision making. Once we get the probabilities of each move by the other player, what’s left is (mostly) expected utility maximisation.57 The distinctive thing about game theory is that the probabilities aren’t specified in the setup of the game; rather, they are solved for. Apart from special cases, such as where one option strictly dominates another, not much can be said about a decision problem with unspecified probabilities. But a lot can be said about games where the setup of the game doesn’t specify the probabilities, because it is possible to solve for the probabilities given the information that is provided.
This way of thinking about games makes the description of game theory as ‘interactive epistemology’ rather apt. The theorist’s work is to solve for what a rational agent should think other rational agents in the game should do. From this perspective, it isn’t surprising that game theory will make heavy use of equilibrium concepts. In solving a game, we must deploy a theory of rationality, and attribute that theory to rational actors in the game itself. In effect, we are treating rationality as something of an unknown, but one that occurs in every equation we have to work with. Not surprisingly, there are going to be multiple solutions to the puzzles we face.
This way of thinking lends itself to an epistemological interpretation of one of the most puzzling concepts in game theory, the mixed strategy. The most important solution concept in modern game theory is the Nash equilibrium. A set of moves is a Nash equilibrium if no player can improve their outcome by deviating from the equilibrium, conditional on no other player deviating. In many simple games, the only Nash equilibria involve mixed strategies. Here’s one simple example.
|  | $$S_1$$ | $$S_2$$ |
| --- | --- | --- |
| A1 | 0, 1 | 10, 0 |
| A2 | 9, 0 | -1, 1 |
This game is reminiscent of some puzzles that have been much discussed in the decision theory literature, namely asymmetric Death in Damascus puzzles. Here Column wants herself and Row to make the ‘same’ choice, i.e., A1 and S1, or A2 and S2. She gets 1 if they do, 0 otherwise. And Row wants them to make different choices, and gets 10 if they do. Row also dislikes playing A2, and this costs her 1 whatever else happens. It isn’t too hard to prove that the only Nash equilibrium for this game is that Row plays a mixed strategy playing both A1 and A2 with probability ½, while Column plays the mixed strategy that gives S1 probability 0.55, and S2 probability 0.45.
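These equilibrium probabilities can be checked with the standard indifference conditions: in a mixed equilibrium, each player must be indifferent between the pure strategies they are mixing over. A quick sketch (not from the text; the payoff dictionaries just transcribe the table above):

```python
# Payoffs (Row, Column) for the game above: rows are A1/A2, columns are S1/S2.
row_pay = {("A1", "S1"): 0, ("A1", "S2"): 10, ("A2", "S1"): 9, ("A2", "S2"): -1}
col_pay = {("A1", "S1"): 1, ("A1", "S2"): 0, ("A2", "S1"): 0, ("A2", "S2"): 1}

# Let q = P(Column plays S1). Row is indifferent when EU(A1) = EU(A2):
#   q*0 + (1-q)*10 = q*9 + (1-q)*(-1)  =>  10 - 10q = 10q - 1  =>  q = 11/20
q = 11 / 20
eu_a1 = q * row_pay[("A1", "S1")] + (1 - q) * row_pay[("A1", "S2")]
eu_a2 = q * row_pay[("A2", "S1")] + (1 - q) * row_pay[("A2", "S2")]
assert abs(eu_a1 - eu_a2) < 1e-9  # Row is indeed indifferent

# Let p = P(Row plays A1). Column is indifferent when EU(S1) = EU(S2):
#   p*1 + (1-p)*0 = p*0 + (1-p)*1  =>  p = 1/2
p = 1 / 2
eu_s1 = p * col_pay[("A1", "S1")] + (1 - p) * col_pay[("A2", "S1")]
eu_s2 = p * col_pay[("A1", "S2")] + (1 - p) * col_pay[("A2", "S2")]
assert abs(eu_s1 - eu_s2) < 1e-9  # Column is indeed indifferent

print(p, q)  # 0.5 0.55
```

So Row mixes 50/50 and Column plays S1 with probability 0.55, exactly as claimed.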
Now what is a mixed strategy? It is easy enough to take away from the standard game theory textbooks a metaphysical interpretation of what a mixed strategy is. Here, for instance, is the paragraph introducing mixed strategies in Dixit and Skeath’s Games of Strategy.
When players choose to act unsystematically, they pick from among their pure strategies in some random way …We call a random mixture between these two pure strategies a mixed strategy.
Dixit and Skeath are saying that it is definitive of a mixed strategy that players use some kind of randomisation device to pick their plays on any particular run of a game. That is, the probabilities in a mixed strategy must be in the world; they must go into the players’ choice of play. That’s one way, the paradigm way really, that we can think of mixed strategies metaphysically.
But the understanding of game theory as interactive epistemology naturally suggests an epistemological interpretation of mixed strategies.
One could easily … [model players] … turning the choice over to a randomizing device, but while it might be harmless to permit this, players satisfying the cognitive idealizations that game theory and decision theory make could have no motive for playing a mixed strategy. So how are we to understand Nash equilibrium in model theoretic terms as a solution concept? We should follow the suggestion of Bayesian game theorists, interpreting mixed strategy profiles as representations, not of players’ choices, but of their beliefs.
One nice advantage of the epistemological interpretation, as noted by Binmore, is that we don’t require players to have n-sided dice in their satchels, for every n, every time they play a game.58 But another advantage is that it lets us make sense of the difference between playing a pure strategy and playing a mixed strategy where one of the ‘parts’ of the mixture is played with probability one.
With that in mind, consider the game below, which I’ll call Up-Down.59 Informally, in this game A and B must each play a card with an arrow pointing up, or a card with an arrow pointing down. I will capitalise A’s moves, i.e., A can play UP or DOWN, and italicise B’s moves, i.e., B can play up or down. If at least one player plays a card with an arrow facing up, each player gets a small monetary prize. If two cards with arrows facing down are played, each gets nothing. Each cares just about their own wealth, so getting the prize is worth 1 util. All of this is common knowledge. More formally, here is the game table, with A on the row and B on the column.
|  | *up* | *down* |
| --- | --- | --- |
| UP | 1, 1 | 1, 1 |
| DOWN | 1, 1 | 0, 0 |
When I write game tables like this, I mean that the players know that these are the payouts, that the players know the other players to be rational, and these pieces of knowledge are common knowledge to at least as many iterations as needed to solve the game. (I assume here that in solving the game, it is legitimate to assume that if a player knows that one option will do better than another, they have conclusive reason to reject the latter option. This is completely standard in game theory, though somewhat controversial in philosophy.) With that in mind, let’s think about how the agents should approach this game.
I’m going to make one big simplifying assumption at first. I’ll relax this later, but it will help the discussion to start with this assumption. This assumption is that the doctrine of Uniqueness applies here; there is precisely one rational credence to have in any salient proposition about how the game will play. Some philosophers think that Uniqueness always holds. I join with those such as North (2010) and Schoenfield (2013) who don’t. But it does seem like Uniqueness might often hold; there might often be a right answer to a particular problem. Anyway, I’m going to start by assuming that it does hold here.
The first thing to note about the game is that it is symmetric. So the probability of A playing UP should be the same as the probability of B playing up, since A and B face exactly the same problem. Call this common probability x. If x < 1, we get a quick contradiction. The expected value, to Row, of UP, is 1. Indeed, the known value of UP is 1. If the probability of up is x, then the expected value of DOWN is x. So if x < 1, and Row is rational, she’ll definitely play UP. But that’s inconsistent with the claim that x < 1, since that means that it isn’t definite that Row will play UP.
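The expected-value bookkeeping in this argument is simple enough to check mechanically. A minimal sketch (not from the text), with x the shared probability of playing up:

```python
# Row's expected utilities in Up-Down, where x = P(Column plays up).
def eu_up(x):
    # UP pays 1 no matter what Column does.
    return 1.0

def eu_down(x):
    # DOWN pays 1 if Column plays up, 0 if Column plays down.
    return x * 1.0 + (1 - x) * 0.0

# For any x < 1, UP is strictly better, so a rational Row plays UP for sure --
# contradicting the assumption that P(Row plays UP) = x < 1 in a symmetric game.
for x in [0.0, 0.5, 0.99]:
    assert eu_up(x) > eu_down(x)

# Only x = 1 avoids the contradiction, and there the two acts are tied.
assert eu_up(1.0) == eu_down(1.0)
```

The tie at x = 1 is exactly what drives the next step of the argument: with probability one assigned to up, Row is indifferent, so her playing UP cannot be known.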
So we can conclude that x = 1. Does that mean we can know that Row will play UP? No. Assume we could conclude that. Whatever reason we would have for concluding that would be a reason for any rational person to conclude that Column will play up. Since any rational person can conclude this, Row can conclude it. So Row knows that she’ll get 1 whether she plays UP or DOWN. But then she should be indifferent between playing UP and DOWN. And if we know she’s indifferent between playing UP and DOWN, and our only evidence for what she’ll play is that she’s a rational player who’ll maximise her returns, then we can’t be in a position to know she’ll play UP.
For the rest of this section I want to reply to one objection, and weaken an assumption I made earlier. The objection is that I’m wrong to assume that agents will only maximise expected utility. They may have tie-breaker rules, and those rules might undermine the arguments I gave above. The assumption is that there’s a uniquely rational credence to have in any given situation.
I argued that if we knew that A would play UP, we could show that A had no reason to play UP. But actually what we showed was that the expected utility of playing UP would be the same as playing DOWN. Perhaps A has a reason to play UP, namely that UP weakly dominates DOWN. After all, there’s one possibility on the table where UP does better than DOWN, and none where DOWN does better. And perhaps that’s a reason, even if it isn’t a reason that expected utility considerations are sensitive to.
Now I don’t want to insist on expected utility maximisation as the only rule for rational decision making. Sometimes, I think some kind of tie-breaker procedure is part of rationality. In the papers by Stalnaker I mentioned above, he often appeals to this kind of weak dominance reasoning to resolve various hard cases. But I don’t think weak dominance provides a reason to play UP in this particular case. When Stalnaker says that agents should use weak dominance reasoning, it is always in the context of games where the agents’ attitude towards the game matrix is different to their attitude towards each other. One case that Stalnaker discusses in detail is where the game table is common knowledge, but there is merely common (justified, true) belief in common rationality. Given such a difference in attitudes, it does seem there’s a good sense in which the most salient departure from equilibrium will be one in which the players end up somewhere else on the table. And given that, weak dominance reasoning seems appropriate.
But that’s not what we’ve got here. Assuming that rationality requires playing UP/up, the players know we’ll end up in the top left corner of the table. There’s no chance that we’ll end up elsewhere. Or, perhaps better, there is just as much chance we’ll end up ‘off the table’, as that we’ll end up in a non-equilibrium point on the table. To make this more vivid, consider the ‘possibility’ that B will play across, and if B plays across, A will receive 2 if she plays DOWN, and -1 if she plays UP. Well hold on, you might think, didn’t I say that up and down were the only options, and this was common knowledge? Well, yes, I did, but if the exercise is to consider what would happen if something the agent knows to be true doesn’t obtain, then the possibility that one agent will play across certainly seems like one worth considering. It is, after all, a metaphysical possibility. And if we take it seriously, then it isn’t true that under any possible play of the game, UP does better than DOWN.
We can put this as a dilemma. Assume, for reductio, that UP/up is the only rational play. Then if we restrict our attention to possibilities that are epistemically open to A, then UP does just as well as DOWN; they both get 1 in every possibility. If we allow possibilities that are epistemically closed to A, then the possibility where B plays across is just as relevant as the possibility that B is irrational. After all, we stipulated that this is a case where rationality is common knowledge. In neither case does the weak dominance reasoning get any purchase.
With that in mind, we can see why we don’t need the assumption of Uniqueness. Let’s play through how a failure of Uniqueness could undermine the argument. Assume, again for reductio, that we have credence ε > 0 that A will play DOWN. Since A maximises expected utility, that means A must have credence 1 that B will play up. But this is already odd. Even if you think people can have different reactions to the same evidence, it is odd to think that one rational agent could regard a possibility as infinitely less likely than another, given isomorphic evidence. And that’s not all of the problems. Even if A has credence 1 that B will play up, it isn’t obvious that playing DOWN is rational. After all, relative to the space of epistemic possibilities, UP weakly dominates DOWN. Remember that we’re no longer assuming that it can be known what A or B will play. So even without Uniqueness, there are two reasons to think that it is wrong to have credence ε > 0 that A will play DOWN. So we’ve still shown that credence 1 doesn’t imply knowledge, and since the proof is known to us, and full belief is incompatible with knowing that you can’t know, this is a case where credence 1 doesn’t imply full belief. So whether A plays UP, like whether the coin will ever land tails, is a case where belief comes apart from high credence, even if by high credence we literally mean credence one. This is a problem for the Lockean, and, like Williamson’s coin, it is also a problem for the view that belief is credence one.
## 8.4 Puzzles for Lockeans
I’ve already mentioned two classes of puzzles, those to do with infinite sequences of coin tosses and those to do with weak dominance in games. But there are other puzzles that apply especially to the Lockean, the theorist who identifies belief with credence above some non-maximal, interest-invariant, threshold.
### 8.4.1 Arbitrariness
The first problem for the Lockeans, and in a way the deepest, is that it makes the boundary between belief and non-belief arbitrary. This is a point that was well made some years ago now by Robert Stalnaker (1984, 91). Unless these numbers are made salient by the environment, there is no special difference between believing p to degree 0.9876 and believing it to degree 0.9875. But if t is 0.98755, this will be the difference between believing p and not believing it, which is an important difference.
The usual response to this, as found in Foley (1993 Ch. 4), Hunter (1996) and Lee (2017), is to say that the boundary is vague. Now we might respond to this by noting that this only helps on an implausible theory of vagueness. On epistemicist theories, or supervaluationist theories, or on my preferred comparative truth theory, there will still be an arbitrary point which marks the difference between belief and non-belief. This won’t be the case on various kinds of degree of truth theories. But, as Williamson (1994) pointed out, those are theories on which contradictions end up being half-true. And if saving the Lockean theory requires that we give up on the idea that contradictions are simply false, it is hard to see how it is worth the price.
But a better response is to think about what it means to say that the belief/non-belief boundary is a vague point on a scale. We know plenty of terms where the boundary is a vague point on a scale. Comparative adjectives are typically like that. Whether a day is hot depends on whether it is above some vague point on a temperature scale, for example. But here’s the thing about these vague terms - they don’t enter into lawlike generalisations. (At least in a non-trivial way. Hot days are 24 hours long, and that’s a law, but not one that hotness has a particular role in grounding.) The laws involve the scale; the most you can say using the vague term is some kind of generic. For instance, you can say that hot days are exhausting, or that electricity use is higher on hot days. But these are generics, and the interesting law-like claims will involve degrees of heat, not the hot/non-hot binary.
It’s a fairly central presupposition of this book that belief is not like that. Belief plays a key role in all sorts of non-trivial lawlike generalisations. Folk psychology is full of such lawlike generalisations. We’re doing social science here, so the laws in question are hardly exceptionless. But they are counterfactually resilient, and explanatorily deep, and not just generics that are best explained using the underlying scale.
Of course, the Lockean doesn’t believe that these generalisations of folk psychology are anything more than generics, so this is a somewhat question-begging argument. But if you’re not antecedently disposed to give up on folk psychology, or reduce it to the status of a bunch of helpful generics, it’s worth seeing how striking the Lockean view here is. So consider a generalisation like the following.
• If someone wants an outcome O, and they believe that doing X is the only way to get O, and they believe that doing X will neither incur any costs that are large in comparison to how good O is, nor prevent them being able to do something that brings about some other outcome that is comparatively good, then they will do X.
This isn’t a universal - some people are just practically irrational. But it’s stronger than just a generic claim about high temperatures. Or so I say. But the Lockean does not say this; they say that this has widespread counterexamples, and when it is true, it is a relatively superficial truth whose explanatory force is entirely derived from deeper truths about credences.
The Lockean, for instance, thinks that someone in Blaise’s situation satisfies all the antecedents and qualifications in the principle. They want the child to have a moment of happiness. They believe (i.e., have a very high credence that) taking the bet will bring about this outcome, will have no costs at all, and will not prevent them doing anything else. Yet they will not think that people in Blaise’s situation will generally take the bet, or that it would be rational for them to take the bet, or that taking the bet is explained by these high credences.
That’s what’s bad about making the belief/non-belief distinction arbitrary. It means that generalisations about belief are going to be not particularly explanatory, and are going to have systematic (and highly rational) exceptions. We should expect more out of a theory of belief.
### 8.4.2 Correctness
I’ve talked about this one a bit in subsection 3.7.1, so I’ll be brief here. Beliefs have correctness conditions. To believe p when p is false is to make a mistake. That might be an excusable mistake, or even a rational mistake, but it is a mistake. On the other hand, having an arbitrarily high credence in p when p turns out to be false is not a mistake. So having high credence in p is not the same as believing p.
Matthew Lee (2017) argues that the versions of this argument by Ross and Schroeder (2014) and Fantl and McGrath (2009) are incomplete because they don’t provide a conclusive case for the premise that having a high credence in a falsehood is not a mistake. But this gap can be plugged. Imagine a scientist, call her Marie, who knows the correct theory of chance for a given situation. She knows that the chance of p obtaining is 0.999. (If you think t > 0.999, just increase this number, and change the resulting dialogue accordingly.) And her credence in p is 0.999, because her credences track what she knows about chances. She has the following exchange with an assistant.
ASSISTANT: Will p happen?
MARIE: Probably. It might not, but there is only a one in a thousand chance of that. So p will probably happen.
To their surprise, p does not happen. But Marie did not make any kind of mistake here. Indeed, her answer to the assistant’s question was exactly right. But if the Lockean theory of belief is right, and false beliefs are mistakes, then Marie did make a mistake. So the Lockean theory of belief is not right.
### 8.4.3 Moore’s Paradox

The Lockean says other strange things about Marie. By hypothesis, she believes that p will obtain. Yet she certainly seems sincere when she says it might not happen. So she believes both p and it might not be that p. This looks like a Moore-paradoxical utterance, yet in context it seems completely banal.
The same thing goes for Chamira. Does she believe the Battle of Agincourt was in 1415? Yes, say the Lockeans. Does she also believe that it might not have been in 1415? Yes, say the Lockeans, that is why it was rational of her to play Red-True, and it would have been irrational to play Blue-True. So she believes both that something is the case, and that it might not be the case. This seems irrational, but Lockeans insist that it is perfectly consistent with her being a model of rationality.
Back in subsection 2.3.1 I argued that this kind of thing would be a problem for any kind of orthodox theory. And in some sense all I’m doing here is noting that the Lockean really is a kind of orthodox theorist. But the argument that the Lockean is committed to the rationality of Moore-paradoxical claims doesn’t rely on those earlier arguments; it’s a direct consequence of their view applied to simple cases like Marie and Chamira.
### 8.4.4 Closure and the Lockean Theory
The Lockean theory makes an implausible prediction about conjunction.60 It says that someone can believe two conjuncts, yet actively refuse to believe the conjunction. Here is how Stalnaker puts the point.
Reasoning in this way from accepted premises to their deductive consequences (p, also q, therefore r) does seem perfectly straightforward. Someone may object to one of the premises, or to the validity of the argument, but one could not intelligibly agree that the premises are each acceptable and the argument valid, while objecting to the acceptability of the conclusion.
If believing that p just means having a credence in p above the threshold, then this will happen. Indeed, given some very weak assumptions about the world, it implies that there are plenty of triples ⟨S, A, B⟩ such that
• S is a rational agent.
• A and B are propositions.
• S believes A and believes B.
• S does not believe A ∧ B.
• S knows that she has all these states, and consciously reflectively endorses them.
Now one might think, indeed I do think, that such triples do not exist at all. But set that objection aside. If the Lockean is correct, these triples should be everywhere. That’s because for any t ∈ (0, 1) you care to pick, triples of the form ⟨S, C, D⟩ are very very common.
• S is a rational agent.
• C and D are propositions.
• S’s credence in C is greater than t, and her credence in D is greater than t.
• S’s credence in C ∧ D is less than t.
• S knows that she has all these states, and reflectively endorses them.
The best arguments for the existence of triples ⟨S, A, B⟩ are non-constructive existence proofs. David Christensen (2005), for instance, argues from the existence of the preface paradox to the existence of these triples. But even if these existence proofs work, they don’t really prove what the Lockean needs. They don’t show that triples satisfying the constraints we associated with ⟨S, A, B⟩ are just as common as triples satisfying the constraints we associated with ⟨S, C, D⟩, for any t. But if the Lockean were correct, they should be exactly as common.
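The arithmetic behind the ⟨S, C, D⟩ triples is elementary. A sketch (not from the text), with an assumed threshold t = 0.9 and stipulated independent propositions:

```python
t = 0.9                    # assumed Lockean threshold; any t in (0, 1) works similarly
p_c = p_d = 0.91           # credences in two independent propositions C and D

p_conj = p_c * p_d         # credence in C ∧ D, given independence: 0.8281

assert p_c > t and p_d > t # the Lockean says S believes C, and believes D ...
assert p_conj < t          # ... yet S does not believe the conjunction

# Preface-paradox version: many independent claims, each highly probable.
n, p = 100, 0.99
p_all = p ** n             # roughly 0.366, far below the threshold
assert p > t > p_all
```

Any two above-threshold credences whose product falls below the threshold generate such a triple, which is why, on the Lockean picture, they should be everywhere.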
## 8.5 Solving the Challenges
It’s not fair to criticise other theories for their inability to meet a challenge that one’s own theory cannot meet. So I’ll end this chapter by noting that the six problems I’ve raised so far for Lockeans are not problems for my interest-relative theory of (rational) belief. I’ve already discussed the points about correctness in subsection 3.7.1, and about closure in chapters 4 and 6, and there isn’t much to be added. But it’s worth saying a few words about the other four problems.
### 8.5.1 Coins
I say that a necessary condition of believing that p is a disposition to take p for granted. The rational person will prefer betting on logically weaker rather than logically stronger propositions in the coin case, so they will not take the logically stronger ones for granted. If they did take them for granted, they would be indifferent between the bets. So they will not believe that one of the coin flips after the second will land heads, or even that one of the coin flips after the first will land heads. And that’s the right result. The rational person should assign those propositions probability one, but not believe them.
### 8.5.2 Games
In the up-down game, if the rational person believed that the other player would play up, they would be indifferent between up and down. But it’s irrational to be indifferent between those options, so they wouldn’t have the belief. They will think the probability that the other person will play up is one - what else could it be? But they will not believe it on pain of incoherence.
### 8.5.3 Arbitrariness
According to IRT, the difference between belief and non-belief is the difference between willingness and unwillingness to take something as given in inquiry. This is far from an arbitrary difference. And it is a difference that supports law-like generalisations. If someone believes that p, and believes that given p, A is better than B, they will prefer A to B. This isn’t a universal truth; people make mistakes. But nor is it merely a statistical generalisation. Counterexamples are things to be explained, while instances are explained by the underlying pattern.
### 8.5.4 Moore
In many ways the guiding aim of this project was to avoid this kind of Moore paradoxicality. So it shouldn’t be a surprise that we avoid it here. If someone shouldn’t do something because p might be false, that’s conclusive evidence that they don’t know that p. And it’s conclusive evidence that either they don’t rationally believe p, or they are making some very serious mistake in their reasoning. And in the latter case, the reason they are making a mistake is not that p might be false, but that they have a seriously mistaken belief about the kind of choice they are facing. So we can never say that someone knows, or rationally believes, p, but their choice is irrational because p might be false.
1. This section is based on §§3.1 of my (2012).↩︎
2. Kenneth Boyd (2016) suggests a somewhat similar role for vigilance in the course of defending an interest-invariant epistemic theory. Obviously I don’t agree with his conclusions, but my use of Sperber’s work does echo his.↩︎
3. This section is based on material from §1 of my (2016a).↩︎
4. I’m grateful to the participants in a game theory seminar at Arché in 2011, especially Josh Dever and Levi Spectre, for very helpful discussions that helped me see through my previous confusions.↩︎
5. The qualification is because weak dominance reasoning cannot be construed as orthodox expected utility maximisation. We saw that in the coins case, and it will become important again here. It is possible to model weak dominance reasoning using non-standard probabilities, as in Brandenburger (2008), but that introduces new complications.↩︎
6. It is even worse in games where the only equilibria involve mixed strategies with irrational probabilities. And it might be noted that Binmore’s introduction of mixed strategies, on page 44 of his (2007), sounds much more like the metaphysical interpretation. But I think the later discussion is meant to indicate that this is just a heuristic introduction; the epistemological interpretation is the correct one.↩︎
7. In earlier work I’d called it Red-Green, but this is too easily confused with the Red-Blue game that plays such an important role in chapter 2.↩︎
8. This subsection draws on material from my (2016a).↩︎
# Tensor gradient and scalar product
1. Jun 5, 2010
### zyroph
Hi all,
I need to evaluate the following equation :
$$\mathbf{n} \cdot [\mathbf{\sigma} + \mathbf{a} \nabla\mathbf{\sigma}]\cdot\mathbf{n}$$
where $$\mathbf{n}$$ is the normal vector, $$\mathbf{a}$$ a vector, and $$\sigma$$ the stress tensor such that :
$$\mathbf{\sigma} \cdot \mathbf{n} = -p\cdot\mathbf{n} + \mu [\nabla \mathbf{u} + (\nabla\mathbf{u})^T]\cdot \mathbf{n}$$
Actually, the first term (in the first equation) is not an issue, since it can be found in any serious book. But I'm getting lost with the second one.
I work much more in numerics than in maths, and my knowledge of the topic is very limited, so I will be very grateful for any help with the second term:
$$\mathbf{n} \cdot \mathbf{a} \nabla\mathbf{\sigma} \cdot\mathbf{n}$$
Any clue, simplification, or explanation would be welcome.
Last edited: Jun 5, 2010
# What Are Linear Combinations, Linear Dependence, and Linear Independence?
In this article you will get an idea of what linear combinations, linear dependence, and linear independence are. Linear algebra is one of the main branches of mathematics. It combines the multiplication of scalars with the theory of systems of equations, matrices, vectors, and linear spaces, and it deals especially with vector spaces and linear transformations. Its methods are applied in fields ranging from physics and modern algebra to engineering and medicine.
## Introduction
Linear algebra is one of the best-known mathematical disciplines because of its rich history and its many applications in science and technology. Solving systems of linear equations and computing matrix inverses are two examples of the problems it treats. Leibniz obtained solution formulas for linear systems in 1693, and in 1750 Cramer presented a method for solving systems of linear equations. These were the first steps in the development of linear algebra and matrix theory. With the advent of computers, matrix methods received a great deal of attention. John von Neumann and Alan Turing, famous pioneers of computer science, also did work in the field of linear algebra. Numerical linear algebra is now very popular, because the field is recognised as an essential tool in many branches of computing, for example computer graphics, robotics, and geometric modelling.
## Linear Combinations
In general, a linear combination is a sum of scalar multiples of given objects (Poole, 2010). In this sense, for example, the following is a linear combination of the functions f(x), g(x), and h(x):

$$2 f(x)+3 g(x)-4 h(x)$$
### Definition of Linear combinations
If we have a set of vectors {v1, v2, …, vk} in a vector space V, any vector of the form

$$\mathbf{v}=a_{1} \mathbf{v}_{1}+a_{2} \mathbf{v}_{2}+a_{3} \mathbf{v}_{3}+\cdots+a_{k} \mathbf{v}_{k}$$

for some scalars a1, a2, …, ak is called a linear combination of v1, v2, …, vk.
### Basic Vector Space
A few bases of vector spaces are already familiar to us, even if we do not know them by that name. For example, in three dimensions the vector i = (1, 0, 0) points along the x-axis, j = (0, 1, 0) along the y-axis, and k = (0, 0, 1) along the z-axis; together they form the standard basis, and every vector (x, y, z) is a unique linear combination of these standard basis vectors.
#### Definition
An (ordered) subset β = {b1, b2, …, bn} of a vector space V is an (ordered) basis of V if each vector v in V can be represented uniquely as a linear combination of vectors from β:

$$\mathbf{v}=v_{1} \mathbf{b}_{1}+v_{2} \mathbf{b}_{2}+v_{3} \mathbf{b}_{3}+\cdots+v_{n} \mathbf{b}_{n}$$

For an ordered basis, the coefficients in this linear combination are called the coordinates of the vector with respect to β. Later, when we study matrices in more detail, we will write the coordinates of a vector v as a column vector and give it a special notation.
For example, let

$$\mathbf{v}_{1}=\left[\begin{array}{l}1 \\ 2 \\ 1\end{array}\right], \quad \mathbf{v}_{2}=\left[\begin{array}{l}1 \\ 0 \\ 2\end{array}\right], \quad \mathbf{v}_{3}=\left[\begin{array}{l}1 \\ 1 \\ 0\end{array}\right]$$
#### The Vector

$$\mathbf{v}=\left[\begin{array}{l}2 \\ 1 \\ 5\end{array}\right]$$
is a linear combination of the vectors v1, v2, v3 if we can find real numbers a1, a2, and a3 so that

$$\mathbf{v}=a_{1} \mathbf{v}_{1}+a_{2} \mathbf{v}_{2}+a_{3} \mathbf{v}_{3}$$

Substituting, we get
$$a_{1}\left[\begin{array}{l}1 \\ 2 \\ 1\end{array}\right]+a_{2}\left[\begin{array}{l}1 \\ 0 \\ 2\end{array}\right]+a_{3}\left[\begin{array}{l}1 \\ 1 \\ 0\end{array}\right]=\left[\begin{array}{l}2 \\ 1 \\ 5\end{array}\right]$$

$$\begin{aligned}
a_{1}+a_{2}+a_{3} &=2 \\
2 a_{1}+a_{3} &=1 \\
a_{1}+2 a_{2} &=5
\end{aligned}$$
Solving these equations gives a1 = 1, a2 = 2, and a3 = −1, which means that v is a linear combination of v1, v2, and v3. Thus

$$\mathbf{v}=\mathbf{v}_{1}+2 \mathbf{v}_{2}-\mathbf{v}_{3}$$
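The weights in this linear combination can be checked numerically; a minimal sketch, assuming NumPy is available (not part of the original article):

```python
import numpy as np

# Columns are v1, v2, v3; the right-hand side is v.
A = np.array([[1, 1, 1],
              [2, 0, 1],
              [1, 2, 0]], dtype=float)
v = np.array([2, 1, 5], dtype=float)

# Solve A @ a = v for the weights a = (a1, a2, a3).
a = np.linalg.solve(A, v)
print(a)  # the weights (a1, a2, a3) ≈ (1, 2, -1)
```

This is exactly the linear system above written in matrix form, so the solver returns the same weights found by hand.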
The figure mentioned below shows this linear combination of v1, v2, and v3.
#### Linear Independence
##### Definition
The vectors v1, v2, …, vk in a vector space V are called linearly dependent if there exist constants a1, a2, …, ak, not all zero, so that

$$\sum_{j=1}^{k} a_{j} \mathbf{v}_{j}=a_{1} \mathbf{v}_{1}+a_{2} \mathbf{v}_{2}+\cdots+a_{k} \mathbf{v}_{k}=0$$

Otherwise, v1, v2, …, vk are called linearly independent. That is, v1, v2, …, vk are linearly independent if, whenever a1v1 + a2v2 + … + akvk = 0, it follows that a1 = a2 = … = ak = 0.

If S = {v1, v2, …, vk}, then we also say that the set S is linearly dependent or linearly independent if the vectors have the corresponding property.
### Example of Linear Dependence
Determine whether the given vectors are linearly independent or not.
$$\begin{gathered} \mathbf{v}_{1}=\left[\begin{array}{l} 3 \\ 2 \\ 1 \end{array}\right], \quad \mathbf{v}_{2}=\left[\begin{array}{l} 1 \\ 2 \\ 0 \end{array}\right], \quad v_{3}=\left[\begin{array}{r} -1 \\ 2 \\ -1 \end{array}\right] \\ a_{1}\left[\begin{array}{l} 3 \\ 2 \\ 1 \end{array}\right]+a_{2}\left[\begin{array}{l} 1 \\ 2 \\ 0 \end{array}\right]+a_{3}\left[\begin{array}{r} -1 \\ 2 \\ -1 \end{array}\right]=\left[\begin{array}{l} 0 \\ 0 \\ 0 \end{array}\right] \\ 3 a_{1}+a_{2}-a_{3}=0 \\ 2 a_{1}+2 a_{2}+2 a_{3}=0 \\ a_{1}-a_{3}=0 \end{gathered}$$
The desired augmented matrix is
$$\left[\begin{array}{rrr:r} 3 & 1 & -1 & 0 \\ 2 & 2 & 2 & 0 \\ 1 & 0 & -1 & 0 \end{array}\right]$$
The reduced row echelon form of the matrix is
$$\left[\begin{array}{rrr:r} 1 & 0 & -1 & 0 \\ 0 & 1 & 2 & 0 \\ 0 & 0 & 0 & 0 \end{array}\right]$$
Thus the system has the nontrivial solutions

$$\left[\begin{array}{c}k \\ -2 k \\ k\end{array}\right], \quad k \neq 0 \text { (verify), }$$

so the given vectors are linearly dependent.
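This dependence can also be confirmed numerically by checking the rank (or determinant) of the matrix whose columns are the three vectors; a sketch assuming NumPy:

```python
import numpy as np

# The three vectors from the example, as columns of a matrix.
A = np.array([[3, 1, -1],
              [2, 2,  2],
              [1, 0, -1]], dtype=float)

# Rank 2 < 3 vectors means a nontrivial combination gives zero.
print(np.linalg.matrix_rank(A))          # -> 2
print(abs(np.linalg.det(A)) < 1e-9)      # -> True: determinant is zero
```

The rank matches the two pivot columns in the reduced row echelon form shown above.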
### Example
Are the vectors

$$\mathbf{v}_{1}=\left[\begin{array}{l}1 \\ 0 \\ 1 \\ 2\end{array}\right], \quad \mathbf{v}_{2}=\left[\begin{array}{l}0 \\ 1 \\ 1 \\ 2\end{array}\right], \quad \mathbf{v}_{3}=\left[\begin{array}{l}1 \\ 1 \\ 1 \\ 3\end{array}\right]$$

linearly dependent or linearly independent? Setting $a_{1}\mathbf{v}_{1}+a_{2}\mathbf{v}_{2}+a_{3}\mathbf{v}_{3}=0$ gives the system

$$\begin{aligned}
a_{1}+a_{3} &=0 \\
a_{2}+a_{3} &=0 \\
a_{1}+a_{2}+a_{3} &=0 \\
2 a_{1}+2 a_{2}+3 a_{3} &=0
\end{aligned}$$
The desired augmented matrix is:
$$\left[\begin{array}{lll:l} 1 & 0 & 1 & 0 \\ 0 & 1 & 1 & 0 \\ 1 & 1 & 1 & 0 \\ 2 & 2 & 3 & 0 \end{array}\right]$$
The reduced row echelon form of matrix is:
$$\left[\begin{array}{lll:l} 1 & 0 & 0 & 0 \\ 0 & 1 & 0 & 0 \\ 0 & 0 & 1 & 0 \\ 0 & 0 & 0 & 0 \end{array}\right]$$
Thus the only solution is the trivial solution $\mathrm{a}_{1}=\mathrm{a}_{2}=\mathrm{a}_{3}=0$, so the vectors are linearly independent.
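The same conclusion can be double-checked with a rank computation; a sketch assuming NumPy:

```python
import numpy as np

# v1, v2, v3 from the example, as the columns of a 4x3 matrix.
A = np.array([[1, 0, 1],
              [0, 1, 1],
              [1, 1, 1],
              [2, 2, 3]], dtype=float)

# Rank 3 means the only solution of A @ a = 0 is a = 0: independent.
print(np.linalg.matrix_rank(A))  # -> 3
```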
### Testing Vectors for Linear Dependence
There are many situations where we wish to know whether a set of vectors is linearly independent, that is, whether any vector is a combination of the others. Two vectors u and v are linearly independent if the only real numbers x and y satisfying $x\mathbf{u}+y\mathbf{v}=0$ are x = y = 0. Writing
$$\begin{aligned} \vec{u} &=\left[\begin{array}{l} a \\ b \end{array}\right] \\ \vec{v} &=\left[\begin{array}{l} c \\ d \end{array}\right] \end{aligned}$$
$\mathbf{x u}+\mathrm{y} \mathbf{v}=0$ is equivalent to
$$0=x\left[\begin{array}{l} a \\ b \end{array}\right]+y\left[\begin{array}{l} c \\ d \end{array}\right]=\left[\begin{array}{ll} a & c \\ b & d \end{array}\right]\left[\begin{array}{l} x \\ y \end{array}\right]$$
If u and v are linearly independent, the only solution of this system is the trivial solution x = y = 0. For a homogeneous system this happens if and only if the determinant of the coefficient matrix is nonzero. We have therefore found a test for whether a given set of vectors is linearly independent: a set of n vectors of length n is linearly independent if the matrix with these vectors as its columns has a nonzero determinant. The set is linearly dependent if the determinant is zero.
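The determinant test can be wrapped in a small helper; a sketch assuming NumPy (the function name and tolerance are illustrative):

```python
import numpy as np

def independent(*vectors):
    """Determinant test: n vectors of length n are linearly
    independent iff the matrix with them as columns has det != 0."""
    A = np.column_stack(vectors)
    return abs(np.linalg.det(A)) > 1e-12

print(independent([1, 2], [3, 4]))  # -> True  (det = -2)
print(independent([1, 2], [2, 4]))  # -> False (second is twice the first)
```

For floating-point data a small tolerance replaces an exact zero test, since round-off rarely produces a determinant of exactly zero.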
### Conclusion
From this discussion we conclude that a set of vectors is linearly dependent if some nontrivial linear combination of the vectors equals the zero vector, and linearly independent if only the trivial linear combination equals the zero vector. A linear combination is built from just two operations, vector addition and scalar multiplication; the scalars are usually called "weights".
http://mathhelpforum.com/algebra/206134-quadratic-factors-problem-print.html
# Quadratic Factors Problem
• Oct 26th 2012, 11:06 AM
aritech
Quadratic Factors Problem
Hi all,
I was working on a quadratic factors problem, but only managed to solve B and C. Attached is my working.
Can anyone please tell me how to solve the rest (i.e. A and D).
Thanks in advance.
• Oct 26th 2012, 11:09 AM
Salahuddin559
Re: Quadratic Factors Problem
Attachment not opening. Please re-upload and test it manually before posting.
Salahuddin
Maths online
• Oct 26th 2012, 11:12 AM
aritech
Re: Quadratic Factors Problem
It should be ok now. I changed the photo from horizontal to vertical.
• Oct 26th 2012, 11:26 AM
BobP
Re: Quadratic Factors Problem
You've still to equate coefficients of $x^{3}$ and $x.$
• Oct 26th 2012, 11:37 AM
aritech
Re: Quadratic Factors Problem
but for x^3 there will be 2 unknowns (A and D), and x has no coefficient, if my working is right.
• Oct 26th 2012, 12:33 PM
BobP
Re: Quadratic Factors Problem
$x: \quad 5 = 5A.$
https://math.stackexchange.com/questions/3101044/determining-whether-two-events-are-independent-or-dependent
# Determining whether two events are independent or dependent.
I'm trying to make sure that my reasoning is correct for these problems.
Say if the following pairs of events should be modeled as independent or dependent. Explain your reasoning.
We choose a voter at random (all voters equally likely) from Bloomington and let A be the event that the voter votes to reelect the mayor and B be the event that the voter votes to reelect the police chief. (these are not mutually exclusive choices, . . . , a person could vote to reelect both, neither, etc.)
• Independent, because the first person voting on the mayor doesn't affect how the 2nd person votes on the police chief.
Two people are selected at random from Bloomington and let A be the event that the first person favors the mayor, while B is the event that the 2nd person favors the mayor.
• These two events should be modeled as independent because the people were picked at random and their decisions don't affect each other's.
Flip a coin and let A be the event that the coin is heads and B be the event that the coin is tails.
• This event is independent because, regardless of how many times you flip the coin, or whether you do the first coin flip at all, the probability will always be 50%.
A person is selected at random from Bloomington. A is the event that the person likes the movie “The Incredibles” while B is the event that the person likes “The Incredibles 2.”
• These variables share a dependent relationship due to the fact that the two items are closely related and if you liked the first one it changes how much you like the second.
These are all dependent events. However, the question asks how they should be modelled, which is not (quite) the same thing.
The fourth one is obviously dependent: what a person thinks about the first film is going to be strongly correlated with what they think about the second film. The same applies, less obviously, to the first; we don't know what the relationship might be between a person's view on the police chief and the mayor (depending on local politics, it might be positively or negatively correlated) but there undoubtedly is one.
The second one has a very small amount of dependence, and should be modelled as independent. The dependence comes from the fact that the second person is different to the first, so slightly more likely to have the opposite opinion. In a large population, this dependence is negligible.
The third one I think you have misunderstood the situation. The coin is only flipped once - A occurs if and only if B doesn't, so A and B are dependent. If there were two separate flips then they would be independent.
• only one voter is selected (why do you speak of first person and second person here?). Police chief and mayor could be one the same line on many aspects. So dependence.
• Indeed independent if you neglect that they have a common background (both come from Bloomington).
• Extreme dependence: the events are even mutually exclusive.
• Dependence.
Applying probability theory to the real world is always problematic, as there may be many external factors not mentioned in the brief problem statement. Phrased differently, you really can't "determine" abstractly whether real world events are independent or not. Lots of things are connected in ways that aren't immediately apparent (famously "ice cream consumption" is positively correlated to "homicide rates" for instance). To be sure, you can make assumptions but in that case the assumption should be clearly labeled as such. Or you can analyze the data and see statistically whether or not it supports the hypothesis of independence.
For your first case, for instance: Perhaps some voters always vote to reelect incumbents (that seems to be true, actually). In that case, knowing that your voter chose to support one incumbent would be evidence that they belonged to this cohort and thus evidence that they'd support all the incumbents. Or, perhaps, some cohort of voters always votes by party line. Or perhaps there is an external factor (such as some political ideology) which induces people to support either both or neither.
Similar for your second example. Perhaps there is some external factor compelling votes in a certain way, perhaps, say, the phrasing of the poll question encourages one vote over another. (Note: this is the only one of your examples where I'd say that assuming independence was at least fairly reasonable).
The third is the very opposite of independent as $$A=B^c$$.
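That single-flip dependence is easy to verify against the product rule $P(A \cap B) = P(A)P(B)$; a small illustrative check using exact arithmetic:

```python
from fractions import Fraction

# One fair coin flip: A = "heads", B = "tails". A occurs iff B does not.
p_A = Fraction(1, 2)
p_B = Fraction(1, 2)
p_A_and_B = Fraction(0)  # heads and tails cannot both occur on one flip

# Independence would require P(A and B) == P(A) * P(B) = 1/4.
print(p_A_and_B == p_A * p_B)  # -> False: A and B are dependent
```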
I think your analysis of the fourth is ok, though I'd phrase it differently. It's not that enjoying part I "changes how much you like" the other. Rather, it is natural to imagine that knowing that you enjoyed part $$1$$ is evidence for the assumption that you enjoy certain types of film and, as part II is a very similar movie, that becomes evidence that you will enjoy part II. To be sure, it does not have to play out this way. Perhaps we have observed that nobody who liked part I also liked part II (there are probably example of that). In that case we might assume that Part II changed critical elements in such a way as to alienate the earlier fans. Then the evidence issue works in reverse.
https://www.watchespedia.com/glossaries/barrel/?v=79cba1185463
# Barrel
A cylindrical box (the barrel) and toothed disc (wheel), protected by a cover. The barrel, which contains the mainspring, turns freely on its arbor. The mainspring is hooked to the barrel at its outer extremity and to the arbor at its inner extremity. The barrel wheel meshes with the first pinion of the geartrain. As it slowly rotates, its arc varies from one-ninth to one-sixth of a revolution per hour. A hanging barrel (also known as a standing barrel or floating barrel) is one whose arbor is supported at the upper end only, being attached to the barrel bridge with no support from the lower plate.
A plain barrel, used in fusee watches, has no teeth.
Catgut, and later a chain, is coiled round the plain barrel, connecting it to the fusee.
https://en.wikipedia.org/wiki/Gamma-ray_burst_progenitors
# Gamma-ray burst progenitors
Eta Carinae, in the constellation of Carina, one of the nearer candidates for a hypernova
Gamma-ray burst progenitors are the types of celestial objects that can emit gamma-ray bursts (GRBs). GRBs show an extraordinary degree of diversity. They can last anywhere from a fraction of a second to many minutes. Bursts could have a single profile or oscillate wildly up and down in intensity, and their spectra are highly variable unlike other objects in space. The near complete lack of observational constraint led to a profusion of theories, including evaporating black holes, magnetic flares on white dwarfs, accretion of matter onto neutron stars, antimatter accretion, supernovae, hypernovae, and rapid extraction of rotational energy from supermassive black holes, among others.[1][2]
There are at least two different types of progenitors (sources) of GRBs: one responsible for the long-duration, soft-spectrum bursts and one (or possibly more) responsible for short-duration, hard-spectrum bursts. The progenitors of long GRBs are believed to be massive, low-metallicity stars exploding due to the collapse of their cores. The progenitors of short GRBs are still unknown but mergers of neutron stars is probably the most popular model as of 2007.
## Long GRBs: massive stars
### Collapsar model
As of 2007, there is almost universal agreement in the astrophysics community that the long-duration bursts are associated with the deaths of massive stars in a specific kind of supernova-like event commonly referred to as a collapsar or hypernova.[2][3] Very massive stars are able to fuse material in their centers all the way to iron, at which point a star cannot continue to generate energy by fusion and collapses, in this case, immediately forming a black hole. Matter from the star around the core rains down towards the center and (for rapidly rotating stars) swirls into a high-density accretion disk. The infall of this material into the black hole drives a pair of jets out along the rotational axis, where the matter density is much lower than in the accretion disk, towards the poles of the star at velocities approaching the speed of light, creating a relativistic shock wave[4] at the front. If the star is not surrounded by a thick, diffuse hydrogen envelope, the jets' material can pummel all the way to the stellar surface. The leading shock actually accelerates as the density of the stellar matter it travels through decreases, and by the time it reaches the surface of the star it may be traveling with a Lorentz factor of 100 or higher (that is, a velocity of 0.9999 times the speed of light). Once it reaches the surface, the shock wave breaks out into space, with much of its energy released in the form of gamma-rays.
Three very special conditions are required for a star to evolve all the way to a gamma-ray burst under this theory: the star must be very massive (probably at least 40 Solar masses on the main sequence) to form a central black hole in the first place, the star must be rapidly rotating to develop an accretion torus capable of launching jets, and the star must have low metallicity in order to strip off its hydrogen envelope so the jets can reach the surface. As a result, gamma-ray bursts are far rarer than ordinary core-collapse supernovae, which only require that the star be massive enough to fuse all the way to iron.
### Evidence for the collapsar view
This consensus is based largely on two lines of evidence. First, long gamma-ray bursts are found without exception in systems with abundant recent star formation, such as in irregular galaxies and in the arms of spiral galaxies.[5] This is strong evidence of a link to massive stars, which evolve and die within a few hundred million years and are never found in regions where star formation has long ceased. This does not necessarily prove the collapsar model (other models also predict an association with star formation) but does provide significant support.
Second, there are now several observed cases where a supernova has immediately followed a gamma-ray burst. While most GRBs occur too far away for current instruments to have any chance of detecting the relatively faint emission from a supernova at that distance, for lower-redshift systems there are several well-documented cases where a GRB was followed within a few days by the appearance of a supernova. These supernovae that have been successfully classified are type Ib/c, a rare class of supernova caused by core collapse. Type Ib and Ic supernovae lack hydrogen absorption lines, consistent with the theoretical prediction of stars that have lost their hydrogen envelope. The GRBs with the most obvious supernova signatures include GRB 060218 (SN 2006aj),[6] GRB 030329 (SN 2003dh),[7] and GRB 980425 (SN 1998bw),[8] and a handful of more distant GRBs show supernova "bumps" in their afterglow light curves at late times.
Possible challenges to this theory emerged recently, with the discovery[9][10] of two nearby long gamma-ray bursts that lacked the signature of any type of supernova: both GRB060614 and GRB 060505 defied predictions that a supernova would emerge despite intense scrutiny from ground-based telescopes. Both events were, however, associated with actively star-forming stellar populations. One possible explanation is that during the core collapse of a very massive star a black hole can form, which then 'swallows' the entire star before the supernova blast can reach the surface.[citation needed]
## Short GRBs: degenerate binary systems?
Short gamma-ray bursts appear to be an exception. Until 2007, only a handful of these events have been localized to a definite galactic host. However, those that have been localized appear to show significant differences from the long-burst population. While at least one short burst has been found in the star-forming central region of a galaxy, several others have been associated with the outer regions and even the outer halo of large elliptical galaxies in which star formation has nearly ceased. All the hosts identified so far have also been at low redshift.[11] Furthermore, despite the relatively nearby distances and detailed follow-up study for these events, no supernova has been associated with any short GRB.
### Neutron star and neutron star/black hole mergers
While the astrophysical community has yet to settle on a single, universally favored model for the progenitors of short GRBs, the generally preferred model is the merger of two compact objects as a result of gravitational inspiral: two neutron stars,[12][13] or a neutron star and a black hole.[14] While thought to be rare in the Universe, a small number of cases of close neutron star - neutron star binaries are known in our Galaxy, and neutron star - black hole binaries are believed to exist as well. According to Einstein's theory of general relativity, systems of this nature will slowly lose energy due to gravitational radiation and the two degenerate objects will spiral closer and closer together, until in the last few moments, tidal forces rip the neutron star (or stars) apart and an immense amount of energy is liberated before the matter plunges into a single black hole. The whole process is believed to occur extremely quickly and be completely over within a few seconds, accounting for the short nature of these bursts. Unlike long-duration bursts, there is no conventional star to explode and therefore no supernova.
This model has been well-supported so far by the distribution of short GRB host galaxies, which have been observed in old galaxies with no star formation (for example, GRB050509B, the first short burst to be localized to a probable host) as well as in galaxies with star formation still occurring (such as GRB050709, the second), as even younger-looking galaxies can have significant populations of old stars. However, the picture is clouded somewhat by the observation of X-ray flaring[15] in short GRBs out to very late times (up to many days), long after the merger should have been completed, and the failure to find nearby hosts of any sort for some short GRBs.
### Magnetar giant flares
One final possible model that may describe a small subset of short GRBs are the so-called magnetar giant flares (also called megaflares or hyperflares). Early high-energy satellites discovered a small population of objects in the Galactic plane that frequently produced repeated bursts of soft gamma-rays and hard X-rays. Because these sources repeat and because the explosions have very soft (generally thermal) high-energy spectra, they were quickly realized to be a separate class of object from normal gamma-ray bursts and excluded from subsequent GRB studies. However, on rare occasions these objects, now believed to be extremely magnetized neutron stars and sometimes termed magnetars, are capable of producing extremely luminous outbursts. The most powerful such event observed to date, the giant flare of 27 December 2004, originated from the magnetar SGR 1806-20 and was bright enough to saturate the detectors of every gamma-ray satellite in orbit and significantly disrupted Earth's ionosphere.[16] While still significantly less luminous than "normal" gamma-ray bursts (short or long), such an event would be detectable to current spacecraft from galaxies as far as the Virgo cluster and, at this distance, would be difficult to distinguish from other types of short gamma-ray burst on the basis of the light curve alone. To date, three gamma-ray bursts have been associated with SGR flares in galaxies beyond the Milky Way: GRB 790503b in the Large Magellanic Cloud, GRB 051103 from M81 and GRB 070201 from M31.[17]
## Diversity in the origin of long GRBs
HETE II and Swift observations reveal that long gamma-ray bursts come with and without supernovae, and with and without pronounced X-ray afterglows. It gives a clue to a diversity in the origin of long GRBs, possibly in- and outside of star-forming regions, with otherwise a common inner engine. The timescale of tens of seconds of long GRBs hereby appears to be intrinsic to their inner engine, for example, associated with a viscous or a dissipative process.
The most powerful stellar mass transient sources are the above-mentioned progenitors (collapsars and mergers of compact objects), all producing rotating black holes surrounded by debris in the form of an accretion disk or torus. A rotating black hole carries spin-energy in angular momentum [18] as does a spinning top:
$$E_{\mathrm{spin}}={\frac {1}{2}}I\Omega _{H}^{2}$$

where $I=4M^{3}(\cos(\lambda /2)/\cos(\lambda /4))^{2}$ and $\Omega _{H}=(1/2M)\tan(\lambda /2)$ denote the moment of inertia and the angular velocity of the black hole in the trigonometric expression $\sin \lambda =a/M$ [19] for the specific angular momentum $a$ of a Kerr black hole of mass $M$. With no small parameter present, it has been well-recognized that the spin energy of a Kerr black hole can reach a substantial fraction (29%) of its total mass-energy $M$, thus holding promise to power the most remarkable transient sources in the sky. Of particular interest are mechanisms for producing non-thermal radiation by the gravitational field of rotating black holes, in the process of spin-down against their surroundings in the aforementioned scenarios.
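Evaluating these expressions at maximal spin ($a = M$, i.e. $\lambda = \pi/2$) reproduces the quoted 29% figure; a quick numerical sketch in geometrized units with $M = 1$:

```python
import math

M = 1.0            # black hole mass (geometrized units)
lam = math.pi / 2  # sin(lambda) = a/M = 1: extremal Kerr black hole

# Moment of inertia and horizon angular velocity from the text.
I = 4 * M**3 * (math.cos(lam / 2) / math.cos(lam / 4))**2
Omega_H = math.tan(lam / 2) / (2 * M)

E_spin = 0.5 * I * Omega_H**2
print(E_spin / M)  # -> 0.2928932..., i.e. about 29% of the mass-energy
```

The result equals $1 - 1/\sqrt{2}$, the familiar maximum rotational-energy fraction of an extremal Kerr black hole.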
By Mach's principle, spacetime is dragged along with mass-energy, with the distant stars on cosmological scales or with a black hole in close proximity. Thus, matter tends to spin-up around rotating black holes, for the same reason that pulsars spin down by shedding angular momentum in radiation to infinity. A major amount of spin-energy of rapidly spinning black holes can hereby be released in a process of viscous spin-down against an inner disk or torus—into various emission channels.
Spin-down of rapidly spinning stellar mass black holes in their lowest energy state takes tens of seconds against an inner disk, representing the remnant debris of the merger of two neutron stars, the break-up of a neutron star around a companion black hole or formed in core-collapse of a massive star. Forced turbulence in the inner disk stimulates the creation of magnetic fields and multipole mass-moments, thereby opening radiation channels in radio, neutrinos and, mostly, in gravitational waves with distinctive chirps shown in the diagram [20] with the creation of astronomical amounts of Bekenstein-Hawking entropy.[21][22][23]
Diagram of van Putten (2009) showing the gravitational radiation produced in binary coalescence of neutron stars with another neutron star or black hole and, post-coalescence or following core-collapse of a massive star, the expected radiation by high-density turbulent matter around stellar mass Kerr black holes. As the ISCO (ellipse) relaxes to that around a slowly rotating, nearly Schwarzschild black hole, the late-time frequency of gravitational radiation provides accurate metrology of the black hole mass.
Transparency of matter to gravitational waves offers a new probe to the inner-most workings of supernovae and GRBs. The gravitational-wave observatories LIGO and Virgo are designed to probe stellar mass transients in a frequency range of tens to about fifteen hundred Hz. The above-mentioned gravitational-wave emissions fall well within the LIGO-Virgo bandwidth of sensitivity; for long GRBs powered by "naked inner engines" produced in the binary merger of a neutron star with another neutron star or companion black hole, the above-mentioned magnetic disk winds dissipate into long-duration radio-bursts, that may be observed by the novel Low Frequency Array (LOFAR).
## References
1. ^ Ruderman, M. (1975). "Theories of gamma-ray bursts". Texas Symposium on Relativistic Astrophysics. 262 (1 Seventh Texas): 164–180. Bibcode:1975NYASA.262..164R. doi:10.1111/j.1749-6632.1975.tb31430.x.
2. ^ a b "Gamma-ray burst supports hypernova hypothesis". cerncourier.com. September 4, 2003. Retrieved 2007-10-14.
3. ^ MacFadyen, A. I.; Woosley, S. E.; Heger, A. (2001). "Supernovae, Jets, and Collapsars". Astrophysical Journal. 550 (1): 410–425. Bibcode:2001ApJ...550..410M. arXiv:. doi:10.1086/319698.
4. ^ Blandford, R.D. & McKee, C. F. (1976). "Fluid Dynamics of relativistic blast waves". Physics of Fluids. 19 (8): 1130–1138. Bibcode:1976PhFl...19.1130B. doi:10.1063/1.861619.
5. ^ Bloom, J.S.; Kulkarni, S. R. & Djorgovski, S. G. (2002). "The Observed Offset Distribution of Gamma-Ray Bursts from Their Host Galaxies: A Robust Clue to the Nature of the Progenitors". Astronomical Journal. 123 (3): 1111–1148. Bibcode:2002AJ....123.1111B. arXiv:. doi:10.1086/338893.
6. ^ Sollerman, J.; et al. (2006). "Supernova 2006aj and the associated X-Ray Flash 060218". Astronomy and Astrophysics. 454 (2): 503S. Bibcode:2006A&A...454..503S. arXiv:. doi:10.1051/0004-6361:20065226.
7. ^ Mazzali, P.; et al. (2003). "The Type Ic Hypernova SN 2003dh/GRB 030329". Astrophysical Journal. 599 (2): 95M. Bibcode:2003ApJ...599L..95M. arXiv:. doi:10.1086/381259.
8. ^ Kulkarni, S.R.; et al. (1998). "Radio emission from the unusual supernova 1998bw and its association with the gamma-ray burst of 25 April 1998". Nature. 395 (6703): 663. Bibcode:1998Natur.395..663K. doi:10.1038/27139.
9. ^ Fynbo; et al. (2006). "A new type of massive stellar death: no supernovae from two nearby long gamma-ray bursts". Nature. 444 (7122): 1047–9. Bibcode:2006Natur.444.1047F. PMID 17183316. arXiv:. doi:10.1038/nature05375.
10. ^ "New type of cosmic explosion found". astronomy.com. December 20, 2006. Retrieved 2007-09-15.
11. ^ Prochaska; et al. (2006). "The Galaxy Hosts and Large-Scale Environments of Short-Hard Gamma-Ray Bursts". Astrophysical Journal. 641 (2): 989. Bibcode:2006ApJ...642..989P. arXiv:. doi:10.1086/501160.
12. ^ Blinnikov, S.; et al. (1984). "Exploding Neutron Stars in Close Binaries". Soviet Astronomy Letters. 10: 177. Bibcode:1984SvAL...10..177B.
13. ^ Eichler, David; Livio, Mario; Piran, Tsvi; Schramm, David N. (1989). "Nucleosynthesis, neutrino bursts and gamma-rays from coalescing neutron stars". Nature. 340 (6229): 126. Bibcode:1989Natur.340..126E. doi:10.1038/340126a0.
14. ^ Lattimer, J. M. & Schramm, D. N. (1976). "The tidal disruption of neutron stars by black holes in close binaries". Astrophysical Journal. 210: 549. Bibcode:1976ApJ...210..549L. doi:10.1086/154860.
15. ^ Burrows, D. N.; et al. (2005). "Bright X-ray Flares in Gamma-Ray Burst Afterglows". Science. 309 (5742): 1833–1835. Bibcode:2005Sci...309.1833B. PMID 16109845. arXiv:. doi:10.1126/science.1116168.
16. ^ Hurley, K.; et al. (2005). "An exceptionally bright flare from SGR 1806-20 and the origins of short-duration gamma-ray bursts". Nature. 434: 1098.
17. ^ Frederiks 2008
18. ^ Kerr, R.P. (1963). "Gravitational field of a spinning mass: as an example of algebraically special metrics". Phys. Rev. Lett. 11 (5): 237. Bibcode:1963PhRvL..11..237K. doi:10.1103/PhysRevLett.11.237.
19. ^ van Putten, M.H.P.M. (1999). Science. 284: 115.
20. ^ Maurice H.P.M. van Putten (2009). "On the origin of long gamma-ray bursts". MNRAS Letters. 396 (1): L81. Bibcode:2009MNRAS.396L..81V. doi:10.1111/j.1745-3933.2009.00666.x.
21. ^ Bekenstein, J.D. (1973). "Black holes and entropy". Physical Review D. 7 (8): 2333. Bibcode:1973PhRvD...7.2333B. doi:10.1103/PhysRevD.7.2333.
22. ^ Hawking, S.W. (1974). "Black hole explosions?". Nature. 248 (5443): 30. Bibcode:1974Natur.248...30H. doi:10.1038/248030a0.
23. ^ Strominger, A.; Vafa, C. (1996). "Microscopic Origin of the Bekenstein-Hawking Entropy". Phys. Lett. B. 379 (5443): 99–104. Bibcode:1996PhLB..379...99S. arXiv:. doi:10.1016/0370-2693(96)00345-0.
https://www.kaantipurpost.com/nasa-confirmed-the-largest-comet-ever-detected-and-its-truly-gargantuan/
# NASA Confirmed The Largest Comet Ever Detected And It’s Truly Gargantuan
The largest comet ever found has been traveling towards the Sun for more than 1 million years, and its immense size shines a light on the mysterious objects that make up what may be the largest structure in our Solar System.
In a new study, astronomers used the Hubble Space Telescope to confirm that the solid center of the giant comet C/2014 UN271 (Bernardinelli-Bernstein) is the largest comet nucleus ever identified. It measures a staggering many times larger than most known comets, at nearly 140 kilometers (around 85 miles) wide.
However, that surprisingly large size, or rather the apparent strangeness of it, may say more about us and our limited conception of comets than it does about anything else.
C/2014 UN271 hails from the Oort Cloud: a vast, spherical scattering of icy objects proposed to surround the Sun at the deepest and most distant reaches of our Solar System (so far away, in fact, that it is thought to extend at least a quarter of the way towards the next closest star system, Alpha Centauri).
Sounds pretty enormous, right? It is, hypothetically speaking. However, the Oort Cloud is so far away and so hard to detect that it is essentially a huge hypothetical mystery, even though astronomers believe it to be perhaps one of the largest structures in our Solar System.
Occasionally, though, something emerges from this enigmatic mass, gravitationally lured towards the Sun from the distance of the cosmic hinterlands.
Comet nucleus size comparison. (NASA/ESA/Zena Levy, STScI)
C/2014 UN271 is one such object, and it stands to teach us much about the frozen "pristine" bodies that make up the Oort Cloud. These are thought to have formed early on in the inner Solar System, before being flung out to its far reaches by the gravitational effects of giant planets like Jupiter and Saturn.
"This comet is literally the tip of the iceberg for many thousands of comets that are too faint to see in the more distant parts of the Solar System," says astronomer David Jewitt from UCLA.
"We've always suspected this comet had to be big because it is so bright at such a large distance. Now we confirm it is."
In the new study, Jewitt and fellow researchers, led by first author Man-To Hui from the Macau University of Science and Technology, determined the size of C/2014 UN271 at the highest resolution yet. They built on previous estimates by using Hubble observations and modeling to isolate the nucleus from the comet's coma: the ice sublimating into gases in the comet's wake.
"We confirm that C/2014 UN271 is the largest long-period comet ever detected," the team writes in their new paper.
(NASA/ESA/Man-To Hui, Macau University of Science and Technology/David Jewitt, UCLA/Alyssa Pagan, STScI)
Above: Sequence showing a Hubble image of the comet, its modeled coma, and the isolated nucleus.
The discovery of C/2014 UN271 was announced last year, after it was spotted hiding in a body of observational data captured by the Dark Energy Survey between 2014 and 2018. Subsequent follow-up investigations revealed C/2014 UN271 had actually been captured as early as 2010.
Yet even that early glimpse doesn't begin to encompass the extraordinary length of the comet's journey. It follows a roughly 3-million-year-long elliptical orbit around the Sun, the shape of which means it has been slowly approaching the Sun for well over 1 million years.
It is due to reach its closest approach to the Sun, known as perihelion, in 2031, at which point Bernardinelli-Bernstein will still remain around 1 billion miles from the Sun, before arcing back outwards on its elongated trajectory.
That means we have almost a decade of improving observational opportunities ahead of us to learn more about C/2014 UN271 and its kind as the comet draws closer, before it slips away quietly into the dark.
The findings are reported in The Astrophysical Journal Letters.
Here is a transcript of the paper:
Long-period comets are considered compositionally among the most pristine leftovers from the early solar system. For most of their lifetime, they have been stored in the low-temperature environment of the Oort cloud, at the edge of the solar system (Oort 1950). Recent years saw identifications of several long-period comets active at ultralarge heliocentric distances (rH ≳ 20 au), suggesting that long-period comets may be more thermally processed than previously thought (Jewitt et al. 2017, 2021; Meech et al. 2017; Hui et al. 2018, 2019; Bernardinelli et al. 2021). Unlike most comets, which are only active within the orbit of Jupiter (rH ≲ 5 au), driven by sublimation of water ice (e.g., Whipple 1950), the cause of activity in distant comets remains unclear. Potential explanations for trans-Jovian activity include sublimation of supervolatiles like CO and CO2 (e.g., Womack et al. 2017), crystallization of amorphous ice (e.g., 1P/Halley; Prialnik & Bar-Nun 1992), and thermal memory from an earlier perihelion passage (e.g., Comet Hale-Bopp; Szabó et al. 2008). Before we can use remotely active comets to directly investigate the formation conditions of the early solar system, it is of great scientific importance to understand how their activity unfolds at extreme heliocentric distances.
The recent discovery of C/2014 UN271 (Bernardinelli-Bernstein) offers us another remarkable opportunity to study a comet far from the Sun. This long-period comet was discovered in Dark Energy Survey (DES) data at an astounding inbound heliocentric distance of rH ≈ 29 au, with additional prediscovery observations from >30 au from the Sun and an unambiguous cometary feature at rH ≳ 20 au (Bernardinelli et al. 2021; Farnham et al. 2021; Kokotanekova et al. 2021). According to the orbital solution by JPL Horizons, the current barycentric orbit of C/2014 UN271 is highly eccentric (eccentricity e = 0.9993), with a perihelion distance of q = 10.9 au and a semimajor axis of a = (1.57 ± 0.04) × 10^4 au. The size and the albedo of the cometary nucleus are often the most fundamental physical parameters among many others. Recently, Lellouch et al. (2022) reported that the nucleus of the comet, 137 ± 17 km in diameter, is the largest among all known long-period comets and has a visual geometric albedo of pV = 0.049 ± 0.011. In this paper, we present our independent investigation of the nucleus size and albedo of the comet based on an observation at a heliocentric distance of ∼20 au, detailed in Section 2. We present our analysis in Section 3 and discussion in Section 4.
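As a quick consistency check on the orbital elements quoted above (an illustrative calculation of mine, not part of the paper), the perihelion distance of an ellipse follows from q = a(1 − e):

```python
# Orbital elements quoted in the text (JPL Horizons barycentric solution).
a = 1.57e4       # semimajor axis, au
e = 0.9993       # eccentricity

# Perihelion distance of an ellipse: q = a * (1 - e).
q = a * (1 - e)

# In miles (1 au is about 92.96 million miles), this is roughly the
# "1 billion miles from the Sun at perihelion" figure cited earlier.
AU_MILES = 92.96e6
q_miles = q * AU_MILES
```

The result lands close to the quoted q = 10.9 au, so the three elements are mutually consistent.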
2. Observation
We obtained five consecutive images, each of 285 s duration, in one visit to the comet under General Observer program 16886 using the 2.4 m Hubble Space Telescope (HST) and the UVIS channel of the Wide Field Camera 3 (WFC3) on 2022 January 8. To achieve the maximum sensitivity of the facility, we exploited the F350LP filter, which has a peak system throughput of 29%, an effective wavelength of 585 nm, and a FWHM of 476 nm. For efficiency, we read out only the UVIS2-2K2C-SUB aperture, the 2047 × 2050 full-quadrant subarray on the UVIS channel with an image scale of 0.04″ pixel⁻¹ covering a field of view of 81″ × 81″ across. The telescope tracked the nonsidereal motion of the comet, resulting in trailed background sources. Image dithering was performed once between the third and fourth exposures so as to mitigate potential impacts from CCD artifacts. The observing geometry of the comet is summarized in Table 1.
Table 1. Observing Geometry of Comet C/2014 UN271 (Bernardinelli-Bernstein)

| Date and Time (UT) (a) | Filter | t_exp (s) (b) | rH (au) (c) | δ (au) (d) | α (°) (e) | ε (°) (f) | θ−⊙ (°) (g) | θ−v (°) (h) | ψ (°) (i) |
|---|---|---|---|---|---|---|---|---|---|
| 2022 Jan 08 09:24–09:56 | F350LP | 285 | 19.446 | 19.612 | 2.8 | 78.8 | 66.5 | 334.3 | 2.8 |

Notes. (a) Mid-exposure epoch. (b) Individual exposure time. (c) Heliocentric distance. (d) Comet–HST distance. (e) Phase angle (Sun–comet–HST). (f) Solar elongation (Sun–HST–comet). (g) Position angle of the projected antisolar direction. (h) Position angle of the projected negative heliocentric velocity of the comet. (i) Orbital plane angle (between HST and the orbital plane of the comet).
In the HST images, the comet has a well-defined optocenter within its bright quasicircular coma of ∼4″ in diameter, with a broad tail of ≳15″ in length directed roughly northeastwards (Figure 1).
Figure 1. HST/WFC3 F350LP image of comet C/2014 UN271 (Bernardinelli-Bernstein), median combined from the five individual exposures taken on 2022 January 8. The displayed image is scaled logarithmically and is oriented such that J2000 equatorial north is up and east is left. Also marked are the directions of the projected antisolar vector (−⊙) and the projected negative heliocentric velocity of the comet (−v). A scale bar of 5″ in length is shown.
3. Analysis
In this section, we present our photometry to constrain the nucleus of comet C/2014 UN271 based on our HST observation. Before carrying out any photometric analysis, we removed cosmic-ray hits and hot pixels with the Laplacian cosmic-ray rejection algorithm L.A.Cosmic by van Dokkum (2001) in IRAF (Tody 1986), which successfully provided us with much cleaner images of the comet while its signal was left intact. We determined the image zero-point and the associated uncertainty in the V band using solar analogs with the WFC3 UVIS Imaging Exposure Time Calculator, which fully covered the color range of long-period comets and their nuclei (Jewitt 2015, and references therein).
3.1. Direct Photometry
The presence of the bright coma (Figure 1) presents an obstacle to the direct measurement of the signal from the nucleus. We used three methods of increasing power to isolate the nu…
https://www.baryonbib.org/bib/056c832f-39b3-435f-9139-fbca02a5a9f5
PREPRINT
# $f(R)$ gravity with a broken Weyl gauge symmetry, CMB anisotropy and Integrated-Sachs-Wolfe effect
Jiwon Park, Tae Hoon Lee
arXiv:2209.02277
Submitted on 6 September 2022
## Abstract
We propose a new class of $f(R)$ theory where its Weyl gauge symmetry is broken in the primordial era of the universe. We prove that, even though the theory is transformed into the Einstein-Hilbert action with a non-minimally coupled scalar field at the non-perturbative level, there exists an additional non-minimal coupling at the perturbative level. As an important example, we study its effect on Starobinsky inflation. We show that the amplitude of the primordial gravitational waves also affects the scalar perturbation due to the presence of the non-minimal coupling, although its effect on cosmic microwave background (CMB) anisotropy is negligible in practice. Consequently, CMB observables may have distinct values depending only on the mass of the perturbed Weyl field. Moreover, we discuss the possibility of resolving the Hubble tension with this example, including an analysis of the integrated Sachs-Wolfe effect.
## Preprint
Comment: 11 pages, 7 figures
Subjects: General Relativity and Quantum Cosmology; Astrophysics - Cosmology and Nongalactic Astrophysics
https://ekamperi.github.io/machine%20learning/2021/01/07/probabilistic-regression-with-tensorflow.html
## Introduction
You probably have heard the saying, “If all you have is a hammer, everything looks like a nail”. This proverb applies to many cases, deterministic classification neural networks not being an exception. Consider, for instance, a typical neural network that classifies images from the CIFAR-10 dataset. This dataset consists of 60,000 color images, all of which belong to 10 classes: airplanes, cars, birds, cats, deer, dogs, frogs, horses, ships, and trucks. Naturally, no matter what image we feed this network, say a pencil, it will always assign it to one of the 10 known classes.
However, it would be handy if the model conveyed its uncertainty for the predictions it made. So, given a “pencil” image, it would probably label it as a bird or ship or whatever. At the same time, we’d like it to assign a large uncertainty to this prediction. To reach such an inference level, we need to rethink the traditional deterministic neural network paradigm and take a leap of faith towards probabilistic modeling. Instead of having a model parameterized by its point weights, each weight will now be sampled from a posterior distribution whose parameters will be tuned during the training process.
Left: Deterministic neural network with point estimates for weights. Right: Probabilistic neural network with weights sampled from probability distributions. Image taken from Blundell, et al. Weight Uncertainty in Neural Networks. arXiv (2015)
## Aleatoric and epistemic uncertainty
Probabilistic modeling is intimately related to the concept of uncertainty. The latter is sometimes divided into two categories, aleatoric (also known as statistical) and epistemic (also known as systematic). Aleatoric is derived from the Latin word “alea”, which means die. You might be familiar with the phrase “alea iacta est”, meaning “the die has been cast”. Hence, aleatoric uncertainty relates to the data itself and captures the inherent randomness when running the same experiment or performing the same task. For instance, if a person draws the number “4” repeatedly, its shape will be slightly different every time. Aleatoric uncertainty is irreducible in the sense that no matter how much data we collect, there will always be some noise in them. Finally, we model aleatoric uncertainty by having our model’s output be a distribution that we sample from, rather than having point estimates as output values.
Epistemic uncertainty, on the other hand, refers to a model’s uncertainty. I.e., the uncertainty regarding which parameters (out of the set of all possible parameters) accurately model the experimental data. Epistemic uncertainty is decreased as we collect more training examples. Its modeling is realized by enabling a neural network’s weights to be probabilistic rather than deterministic.
Image taken from Kendall, A. & Gal, Y. What Uncertainties Do We Need in Bayesian Deep Learning for Computer Vision? arXiv [cs.CV] (2017)
## Tensorflow example
### Summary objective
In the following example, we will generate some non-linear noisy training data, and then we will develop a probabilistic regression neural network to fit the data. To do so, we will provide appropriate prior and posterior trainable probability distributions. This blog post is inspired by a weekly assignment of the course “Probabilistic Deep Learning with TensorFlow 2” from Imperial College London. We start by importing the Python modules that we will need.
### Data generation
We generate some training data $$\mathcal{D}=\{(x_i, y_i)\}$$ using the quintic equation $$y_i = x_i^5 + 0.4 \, x_i \,\epsilon_i$$ where $$\epsilon_i \sim \mathcal{N}(0,1)$$ means that the noise $$\epsilon_i$$ is sampled from a normal distribution with zero mean and standard deviation equal to one.
Notice how the data points are squeezed near $$x=0$$, and how they diverge as $$x$$ deviates from zero. When aleatoric uncertainty (read: measurement noise) differs across the input points $$x$$, we call it heteroscedastic, from the Greek words “hetero” (different) and “scedastic” (dispersive). When it is the same for all input, we call it “homoscedastic”.
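The post's code cells were stripped during extraction; a minimal NumPy version of the data generation under the stated noise model might look like this (variable names are my own):

```python
import numpy as np

rng = np.random.default_rng(42)

# y_i = x_i^5 + 0.4 * x_i * eps_i, with eps_i ~ N(0, 1).
# The noise amplitude scales with |x|, so the data are heteroscedastic:
# tightly squeezed near x = 0 and increasingly dispersed away from it.
n_samples = 1000
x_train = np.linspace(-1.0, 1.0, n_samples)
eps = rng.standard_normal(n_samples)
y_train = x_train**5 + 0.4 * x_train * eps
```

Plotting y_train against x_train reproduces the funnel shape described above: the sample spread near the edges of the interval dwarfs the spread near zero.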
### Setup prior and posterior distributions
#### Bayes’ rule
At the core of probabilistic predictive modeling lies the Bayes’ rule. To estimate a full posterior distribution of the parameters $$\mathbf{w}$$, i.e., to estimate $$p(\mathbf{w}\mid \mathcal{D})$$, given some training data $$\mathcal{D} = \{(x_i, y_i)\}$$, the Bayes’ rule assumes the following form:
$P(\mathbf{\mathbf{w}|\mathcal{D}}) = \frac{P(\mathcal{D}|\mathbf{w})P(\mathbf{w})}{P(\mathcal{D})}$
In the following image, you see a sketch of the various probability distributions that the Bayes’ rule entangles. In plain terms, it holds that $$\text{Prior beliefs} \oplus \text{Evidence} = \text{Posterior beliefs}$$, i.e., we start with some assumptions inscribed in the prior distribution, then we observe the “Evidence”, and we update our prior beliefs accordingly, to yield the posterior distribution. Subsequently, the posterior distribution acts as the next iteration’s prior distribution, and the whole cycle is repeated.
To let all these sink, let us elaborate on the essence of the posterior distribution by marginalizing the model’s parameters. The probability of predicting $$y$$ given an input $$\mathbf{x}$$ and the training data $$\mathcal{D}$$ is:
$P(y\mid \mathbf{x},\mathcal{D})= \int P(y\mid \mathbf{x},\mathbf{w}) \, P(\mathbf{w}\mid\mathcal{D}) \mathop{\mathrm{d}\mathbf{w}}$
This is equivalent to having an ensemble of models with different parameters $$\mathbf{w}$$, and taking their average weighted by the posterior probabilities of their parameters, $$P(\mathbf{w}\mid \mathcal{D})$$. Neat?
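A toy discrete version of this marginalization (illustrative only, not from the post) makes the "weighted ensemble" reading concrete:

```python
import numpy as np

# Three candidate models y = w * x, with posterior probabilities P(w | D).
w_candidates = np.array([0.5, 1.0, 1.5])
posterior = np.array([0.2, 0.6, 0.2])   # sums to 1

def predictive_mean(x):
    """Discrete analog of the integral: the average prediction of the
    model ensemble, weighted by the posterior over parameters."""
    return float(np.sum(w_candidates * x * posterior))

# With this symmetric posterior, the ensemble prediction coincides with
# the middle model's: predictive_mean(x) equals 1.0 * x.
```

In the continuous case the sum becomes an integral over all weight configurations, which is exactly what makes the exact computation intractable.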
There are two problems with this approach, however. First, it is computationally intractable to calculate an exact solution for the posterior distribution. Second, the averaging implies that our equation is not differentiable, so we can’t use good old backpropagation to update the model’s parameters! The answer to these hindrances is variational inference, a method of formulating inference as an optimization problem. We won’t dive deep into the theoretical background, but the inquiring reader may google for the Kullback-Leibler divergence. The basic idea is to approximate the true posterior with another function by using the KL divergence as a “metric” of how much the two distributions differ. I promise to blog about all the juicy mathematical details of the KL divergence concept in a future post.
#### Prior distribution
We start by defining a prior distribution for our model’s weights. I haven’t researched the matter a lot, but in the absence of any evidence, adopting a normal distribution as a prior is a fair way to initialize a probabilistic neural network. After all, the central limit theorem asserts that a properly normalized sum of samples will approximate a normal distribution no matter the actual underlying distribution. We use the DistributionLambda() function to inject a distribution into our model, which you can think of as the “lambda function” analog for distributions. The distribution we use is a multivariate normal with a diagonal covariance matrix:
$\Sigma = \left( \begin{matrix} \sigma_1^2 & 0 & 0 & \ldots \\ 0 & \sigma_2^2 & 0 & \ldots \\ 0 & 0 & \sigma_3^2 & \ldots\\ \vdots & \vdots & \vdots & \ddots \end{matrix} \right)$
The mean values are initialized to zero and the $$\sigma_i^2$$ to one.
#### Posterior distribution
The case of the posterior distribution is a bit more complex. We again use a multivariate Gaussian distribution, but we will now allow off-diagonal elements in the covariance matrix to be non-zero. There are three ways to parameterize such a distribution. First, in terms of a positive definite covariance matrix $$\mathbf{\Sigma}$$, second via a positive definite precision matrix $$\mathbf{\Sigma}^{-1}$$, and last with a lower-triangular matrix $$\mathbf{L}\mathbf{L}^⊤$$ with positive-valued diagonal entries, such that $$\mathbf{\Sigma} = \mathbf{L}\mathbf{L}^⊤$$. This triangular matrix can be obtained via, e.g., Cholesky decomposition of the covariance matrix. In our case, we are going for the last method with MultivariateNormalTriL(), where “TriL” stands for “triangular lower”.
$\mathbf{L}= \left( \begin{matrix} L_{11} & 0 & 0 & \ldots\\ L_{21} & L_{22} & 0 & \ldots\\ L_{31} & L_{32} & L_{33} & \ldots\\ \vdots & \vdots & \vdots & \ddots \end{matrix} \right)$
The following parenthetical code shows how one can sample from a multinormal distribution, by setting $$\mathbf{z} = \mathbf{\mu} + \mathbf{L} \mathbf{x}$$, where $$\mathbf{\mu}$$ is the mean vector, and $$\mathbf{L}$$ is the lower triangular matrix derived via $$\mathbf{\Sigma} = \mathbf{L} \mathbf{L}^⊤$$ decomposition. The rationale is that sampling from a univariate normal distribution (e.g., zero mean and unit variance) is easy and fast since efficient vectorized algorithms exist for doing this. Whereas sampling directly from a multinormal distribution is not as efficient. Hence, we turn the problem of sampling a multinormal distribution into sampling a univariate normal.
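A standalone NumPy sketch of this sampling trick (the post itself uses TensorFlow Probability; the numbers here are made up):

```python
import numpy as np

rng = np.random.default_rng(0)

# Target multivariate normal: mean mu, covariance Sigma.
mu = np.array([1.0, -2.0])
Sigma = np.array([[2.0, 0.6],
                  [0.6, 1.0]])

# Lower-triangular Cholesky factor with Sigma = L @ L.T.
L = np.linalg.cholesky(Sigma)

# Sample cheap univariate standard normals, then map z = mu + L @ x.
x = rng.standard_normal((2, 100_000))
z = mu[:, None] + L @ x

# The empirical covariance of z approximates Sigma.
emp_cov = np.cov(z)
```

The affine map z = μ + Lx preserves Gaussianity while installing the desired covariance, which is why sampling a correlated multinormal reduces to sampling independent standard normals.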
So, instead of parameterizing the neural network with point estimates for weights $$\mathbf{w}$$, we will instead parameterize it with $$\mathbf{\mu}$$’s and $$\sigma$$’s. Notice that for a lower triangular matrix there are $$(n^2 - n)/2 + n = n(n+1)/2$$ non-zero elements. Adding the $$n$$ $$\mu$$’s we end up with a distribution with $$n(n+1)/2 + n = n(n+3)/2$$ parameters.
By the way, let us create some prior and posterior distributions, print the number of their trainable variables, and sample from them. Note that every time we run this cell block, we get different results for the samples.
### Define the model, loss function, and optimizer
To define probabilistic layers in a neural network, we use the DenseVariational() function, specifying the input and output shape, along with the prior and posterior distributions that we have previously defined. We use a sigmoid activation function to enable the network to model non-linear data, along with an IndependentNormal() output layer, to capture aleatoric uncertainty, with an event shape equal to 1 (since our $$y$$ is just a scalar). Regarding the kl_weight parameter, you may refer to the original paper “Weight Uncertainty in Neural Networks” for further information. For now, just take for granted that it is a scaling factor.
Let’s practice and calculate by hand the model’s parameters. The first dense variational layer has 1 input, 8 outputs and 8 biases. Therefore, there are $$1\cdot 8 + 8 = 16$$ weights. Since each weight is going to be modelled by a normal distribution, we need 16 $$\mu$$’s, and $$(16^2 - 16)/2 + 16 = 136$$ $$\sigma$$’s. The latter is just the number of elements of a lower triangular matrix $$16\times 16$$. Therefore, in total we need $$16 + 136 = 152$$ parameters.
What about the second variational layer? This one has 8 inputs (since the previous had 8 outputs), 2 outputs (the $$\mu, \sigma$$ of the independent normal distribution), and 2 biases. Therefore, it has $$8\times 2 + 2 = 18$$ weights. For 18 weights, we need 18 $$\mu$$’s and $$(18^2 - 18)/2 + 18 = 171$$ $$\sigma$$’s. Therefore, in total we need $$18 + 171 = 189$$ parameters. The tfpl.MultivariateNormalTriL.params_size(n) static function calculates the number of parameters needed to parameterize a multivariate normal distribution, so that we don’t have to bother with it.
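The hand counts above can be checked with a small helper mirroring what tfpl.MultivariateNormalTriL.params_size(n) returns for a full-covariance posterior over n weights:

```python
def triL_params_size(n):
    """Parameter count of an n-dimensional multivariate normal given by a
    mean vector (n values) plus a lower-triangular Cholesky factor
    (n * (n + 1) / 2 values), i.e. n * (n + 3) / 2 in total."""
    return n * (n + 3) // 2

# First DenseVariational layer: 1 input * 8 outputs + 8 biases = 16 weights.
layer1_params = triL_params_size(16)   # 152
# Second layer: 8 inputs * 2 outputs + 2 biases = 18 weights.
layer2_params = triL_params_size(18)   # 189
```

The quadratic growth in n explains why full-covariance posteriors become expensive quickly, and why mean-field (diagonal) posteriors are a common compromise.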
### Train the model and make predictions
We train the model for 1000 epochs and plot the loss function vs. the epoch number to confirm that the algorithm has converged.
Indeed, RMSprop has converged, and now we proceed by making some predictions:
Notice how the models’ outputs converge around $$x=0$$ where there is very little variation in the data. The following plot was generated by taking the average of 100 models:
In the following two images, you can see how going from known inputs to extrapolating to unknown inputs “forces” our model to spread out its predictions.
(Left: known input. Right: unknown input.)
https://derive-it.com/tag/right-handed-coordinate-system/
## Reasoning about Left and Right Handed Coordinate Systems
A coordinate system can be defined by three perpendicular unit vectors. If the coordinate system is Cartesian, which direction does the $+x$ axis point? To resolve this problem, I define an orientation–a coordinate system…
## What is Del?
In math, the symbol $\vec{\nabla}$ is called “del.” This symbol is defined in terms of Cartesian coordinates: $\vec{\nabla} := \vec{e}_x\frac{\partial}{\partial x} + \vec{e}_y\frac{\partial}{\partial y} + \vec{e}_z\frac{\partial}{\partial z}$ The right side…
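Since del is defined componentwise by partial derivatives, it can be checked numerically with central differences. A small sketch (the `grad` helper is ours, not from the post):

```python
def grad(f, p, h=1e-6):
    """Central-difference approximation to (del f)(p) in Cartesian coordinates."""
    x, y, z = p
    return (
        (f(x + h, y, z) - f(x - h, y, z)) / (2 * h),
        (f(x, y + h, z) - f(x, y - h, z)) / (2 * h),
        (f(x, y, z + h) - f(x, y, z - h)) / (2 * h),
    )

f = lambda x, y, z: x**2 + y**2 + z**2  # del f = (2x, 2y, 2z)
gx, gy, gz = grad(f, (1.0, 2.0, 3.0))
assert abs(gx - 2) < 1e-4 and abs(gy - 4) < 1e-4 and abs(gz - 6) < 1e-4
```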
https://cstheory.stackexchange.com/questions/32947/real-number-p-such-that-a-p-coin-makes-the-undecidable-decidable
# Real number $p$ such that a $p$-coin makes the undecidable decidable [closed]
This is an exercise from Arora & Barak, Chapter 7:
Describe a real number $p$ such that given a random coin that comes up "heads" with probability $p$, a Turing machine can decide an undecidable language in polynomial time.
This follows a discussion about the fact that if $p$ is efficiently computable, then it's no better than a $1/2$-coin. But if $p$ can be anything, we get new powers. I guess in this case the way to do is to recover the bits of $p$, but I can't think of a deterministic way to do so.
EDIT : based on comments, this question doesn't seem clear. I'll rephrase it like this, based on my interpretation of 'decide' (if I'm wrong, please let me know):
Describe a real number $p$ such that given a random coin that comes up "heads" with probability $p$, there exists an undecidable language $L$ and a Turing machine $M$ such that $M$ runs in polynomial time, and on input $x$, $M$ outputs $1$ if and only if $x \in L$.
Note that stated like this $M$ makes zero error.
• So, don't use a deterministic way. – user6973 Oct 31 '15 at 16:17
• You shouldn't be looking for a zero-error algorithm. – user6973 Oct 31 '15 at 17:08
• @Ricky: I think you're missing the point here; from the words you're using, one cannot distinguish between the possibility you're ignoring the question the OP actually asked and telling him to do so as well by thinking about something else instead and the possibility you mean to say that "decide" doesn't mean what the OP thinks it means (and, unfortunately, without providing an alternate definition). – user35629 Nov 1 '15 at 20:20
• @JWM You are right, it is said that a probabilistic Turing machine (PTM) can decide a language, allowing some error. Based on the comments, I then come to the conclusion that the authors really are looking for a PTM deciding $L$, and not a Turing machine. Thank you everyone! I guess the confusion comes from the usage of 'Turing machine' and not 'PTM' in the question statement, whereas everywhere else in the chapter the distinction is made clear. Anyway, I'll leave the question open in case someone wants to formulate a clear answer, otherwise I'll close it soon. – Manuel Lafond Nov 2 '15 at 20:01
• Well, the statement doesn’t say just “Turing machine”, it says “Turing machine given a random coin that comes up ‘heads’ with probability $p$”. The likely reason they chose this hairy wording instead of “PTM” is that they needed to make explicit that the coin does not have probability 1/2 as in the standard definition. – Emil Jeřábek Nov 3 '15 at 12:22
I was also wondering how to solve this problem. Although the comments seem to suggest that the poster of the question has already solved the problem, I will write up a solution regardless in case anyone else is curious.
Some credit goes to Sidhanth Mohanty. He showed me this question because he was also interested in the solution, and he provided some crucial insights.
As discussed in the comments, we are considering probabilistic Turing machines, so we need only output the correct answer with probability, say, at least $2/3$.
Idea: Let $L$ be some undecidable language. Maybe we can encode the answers in the binary expansion of $p$. Establish a reasonable bijection $f \colon \mathbb{N} \to \{0,1\}^*$ and make the $i$-th bit in the binary expansion $p$ one if $f(i) \in L$ and zero if $f(i) \not\in L$. Now we hope to recover the binary expansion of $p$ by flipping the coin $t$ times, counting the number of heads $s$, and computing $s/t$.
Issue 1: This won't be polynomial time. Given some input $x \in \{0,1\}^*$, the index $f^{-1}(x)$ will be exponential in $|x|$, and determining the $f^{-1}(x)$-th bit of $p$ with high enough probability will take too long.
Fix: We get to choose what $L$ is, so we can make the elements in $L$ large in length.
Issue 2: At first it might seem like in order to determine the $i$-th bit of $p$, it's enough to read off the $i$-th bit of the binary expansion of $s/t$, hoping that $\vert s/t - p \vert < 2^{-i}$. However, this is not enough to guarantee that the $i$-th bit of $s/t$ is equal to the $i$-th bit of $p$. Suppose that $p$ is $0.1_2$ and that you're trying to determine the third bit of $p$. If $s/t$ is less than $p$ by any positive amount, which would occur roughly half the time, the third bit of $s/t$ could be a 1 rather than a 0.[1] Of course, we cannot actually have $p = 0.1_2$ since we’re assuming that $p$ is uncomputable by construction, but if $p$ were close to $0.1_2$ or any other value with a binary expansion that terminates, we would have a similar issue.
Fix: Note that if the $(j+1)$-th bit of $p$ is different from the $(j+2)$-th bit of $p$, then if we have $\vert s/t - p \vert < 2^{-(j+2)}$, the $i$-th bit of $s/t$ is indeed equal to the $i$-th bit of $p$ for all $1 \leq i \leq j$. This suggests that we can fix the issue by redefining $p$ to make lots of adjacent bits different in its binary expansion.
[1] Also, in this case, you would have the additional issue that you couldn't distinguish $0.1_2$ and $0.0\overline{1}_2$.
Clean writeup:
Define $f\colon \mathbb{N} \to \{0, 1\}^*$ with $f(n)$ being $n$ written in binary with the leading $1$ removed. Let $H$ be your favorite undecidable language, e.g. let $$H = \{x \in \{0,1\}^* : \text{ x describes a Turing machine that halts on the empty string}\}.$$ Then let $L$ be the undecidable language $$L = \{1^{4^k} : f(k) \in H\}.$$ Denote the binary expansion of $p$ as $0.p_1p_2p_3p_4p_5\ldots$. For every $i \in \mathbb{N}$, we choose $$(p_{2i-1}, p_{2i}) = \begin{cases} (1, 0) & \text{if } f(i) \in H \\ (0, 1) & \text{if } f(i) \not\in H. \end{cases}$$
Our TM for deciding $L$ does the following on input $x \in \{0,1\}^*$:
1. If $x$ is not of the form $1^{4^k}$ for some $k \in \mathbb{N}$, reject. Otherwise, compute $k$.
2. Flip the coin $t = \lceil 3\cdot4^{2k+2}\ln(6) \rceil$ times, counting the number of heads $s$.
3. Compute the $(2k-1)$-th bit of the binary expansion of $s/t$. Accept if it is one and reject if it is zero.
The runtime is $O(t) = O\left(\vert x \vert ^ 2 \right)$, which is polynomial time.
To prove correctness, note that $p_{2k-1}$ is one iff $1^{4^k} \in L$. Hence it is enough to prove that the $(2k-1)$-th bit of $s/t$ is equal to $p_{2k-1}$ with probability at least $2/3$. Since $p_{2k+1} \not= p_{2k+2}$, if $\lvert s/t - p \rvert < 2^{-2k-2}$, then for all $i \leq k$ we have that the $i$-th bit of $s/t$ is equal to $p_i$. In particular, if $\lvert s/t - p \rvert < 2^{-2k-2}$, then the algorithm outputs the correct answer.
The Chernoff bound in corollary 5 of this set of notes tells us that for any $\delta \in (0,1)$, we have $$\operatorname{Pr}\left[\left\vert \frac{s}{t} - p \right\vert \geq \delta p \right] \leq 2\exp\left(-pt\delta^2/3\right).$$ Setting $\delta$ to be $1/(2^{2k+2}p)$, we get \begin{align*} \operatorname{Pr}\left[\left\vert \frac{s}{t} - p \right\vert \geq \frac{1}{2^{2k+2}} \right] &\leq 2\exp\left(-\frac{t}{3 \cdot 4^{2k+2} p}\right) \\ &\leq 2\exp\left(-\frac{3\cdot4^{2k+2}\ln(6)}{3 \cdot 4^{2k+2} p}\right) \\ &= 2\exp\left(-\ln(6)/p\right) \\ &< 2\exp\left(-\ln(6)\right) \\ &= 1/3. \end{align*} Thus $\operatorname{Pr}\left[\lvert s/t - p \rvert < 2^{-2k-2}\right] > 2/3$, so our algorithm is correct with probability at least $2/3$.
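The core of the correctness argument, namely that an estimate within $2^{-(2k+2)}$ of $p$ agrees with $p$ on bit $2k-1$ because consecutive pair-encoded bits always differ, can be verified exactly with rational arithmetic. A minimal sketch (the membership bits and helper names are illustrative, not from the answer above):

```python
from fractions import Fraction

def encode(bits_H, extra=40):
    """Build p = 0.p1p2... where the pair (p_{2i-1}, p_{2i}) is (1, 0) if
    f(i) is in H and (0, 1) otherwise; 'extra' pads further (0, 1) pairs.
    Represented exactly as a Fraction."""
    p = Fraction(0)
    for i, b in enumerate(list(bits_H) + [0] * extra):
        hi, lo = (1, 0) if b else (0, 1)
        p += Fraction(hi, 2 ** (2 * i + 1)) + Fraction(lo, 2 ** (2 * i + 2))
    return p

def bit(x, i):
    """i-th bit (1-indexed) of the binary expansion of Fraction x in [0, 1)."""
    return int(x * 2 ** i) % 2

# pretend f(1) and f(3) are in H, f(2) is not
H_bits = [1, 0, 1]
p = encode(H_bits)

# any estimate within 2^-(2k+2) of p recovers p_{2k-1}, hence membership of f(k)
for k, expected in enumerate(H_bits, start=1):
    eps = Fraction(1, 2 ** (2 * k + 3))  # |s/t - p| < 2^-(2k+2)
    for est in (p - eps, p + eps):
        assert bit(est, 2 * k - 1) == expected
```

Exact `Fraction` arithmetic avoids the floating-point rounding that would otherwise blur exactly the bit-boundary effect Issue 2 describes.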
https://www.vedantu.com/question-answer/a-man-leaves-a-town-at-8-am-on-his-bicycle-class-8-maths-cbse-5f5f7ad68f2fe24918fd00c8
# A man leaves a town at 8 a.m. on his bicycle moving at 10 km/hr. Another man leaves the same town at 9 a.m. on his scooter moving at 30 km/hr. At what time does he overtake the man on the bicycle?
A. $8:30am$ B. $9:00am$ C. $9:30am$ D. $10:00am$
Hint: We have to consider the fact that the distance traveled by the bicycle and the scooter up to the point of overtaking will be the same. The formula for distance, say $d$, is $d = s \times t$, where $s$ is the speed and $t$ is the time taken.
Let the time taken by the bicycle be $t$.
As mentioned in the question, the scooter leaves the same town 1 hour later.
Therefore, the time taken by the scooter will be $t - 1$.
Now, the distance is the product of speed and time i.e., $d = s \times t$ where s is the speed and t is the time taken.
For bicycle men, speed is $10km/hr$ and time is $t$.
Therefore, the distance traveled by the bicycle man will be
$\Rightarrow d = 10 \times t$
Now, for scooter man, speed is $30km/hr$ and time is $(t - 1)$
Therefore, the distance traveled by the scooter man will be
$\Rightarrow d = 30(t - 1)$
As we know, the distance traveled by bicycle and scooter at the point of overtaking will be the same.
$\Rightarrow d = 10t = 30(t - 1)$
$\Rightarrow 20t = 30$
$\Rightarrow t = \dfrac{3}{2}hr$
The time taken by the scooter man is $t - 1 = \dfrac{3}{2} - 1 = \dfrac{1}{2}hr$.
The scooter overtakes the bicycle half an hour after it starts, i.e., at $9:30am$.
So, the correct answer is “Option C”.
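The algebra above is easy to verify in a few lines (a sketch; the variable names are ours):

```python
# Bicycle leaves at 8 a.m. at 10 km/h; scooter at 9 a.m. at 30 km/h.
# Distances are equal at the overtake point: 10*t = 30*(t - 1).
bike_speed, scooter_speed = 10, 30  # km/h
head_start = 1                      # hours

t = scooter_speed * head_start / (scooter_speed - bike_speed)
assert t == 1.5  # bicycle has ridden 1.5 h, so overtake at 8:00 + 1.5 h = 9:30 a.m.
assert bike_speed * t == scooter_speed * (t - head_start) == 15  # same distance, 15 km
```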
Note: These types of questions come under the meeting-point question type. A typical variant: if two people travel from points A and B towards each other and meet at point T, the total distance covered by them at the meeting is AB. The time taken by both of them to meet is the same; since time is constant, the distances AT and BT are in the ratio of their speeds. Say the distance between A and B is $d$. If two people walk towards each other from A and B, then when they meet for the first time, they together cover a distance $d$, and when they meet for the second time, they together cover a distance $3d$.
https://www.semanticscholar.org/paper/Double-scaling-limit-for-the-%24O(N)%5E3%24-invariant-Bonzom-Nador/8eeebfdd13e46e4f3dc76a2a0549abc4c7792e85
Corpus ID: 237513408
# Double scaling limit for the $O(N)^3$-invariant tensor model
@inproceedings{Bonzom2021DoubleSL,
  title={Double scaling limit for the $O(N)^3$-invariant tensor model},
  year={2021}
}
• Published 15 September 2021
• Physics, Mathematics
We study the double scaling limit of the $O(N)^3$-invariant tensor model, initially introduced in Carrozza and Tanasa, Lett. Math. Phys. (2016). This model has an interacting part containing two types of quartic invariants, the tetrahedric and the pillow one. For the 2-point function, we rewrite the sum over Feynman graphs at each order in the 1/N expansion as a finite sum, where the summand is a function of the generating series of melons and chains (a.k.a. ladders). The graphs which are the most…
#### References
Showing 1-10 of 49 references
O(N) Random Tensor Models
• Mathematics, Physics
• 2015
We define in this paper a class of three-index tensor models, endowed with $O(N)^{\otimes 3}$ invariance (N being the size of the tensor). This allows to generate, via the usual QFT…
The double scaling limit of the multi-orientable tensor model
• Physics, Mathematics
• 2015
In this paper we study the double scaling limit of the multi-orientable tensor model. We prove that, contrary to the case of matrix models but similarly to the case of invariant tensor models, the…
Critical behavior of colored tensor models in the large N limit
• Physics
• 2011
Colored tensor models have been recently shown to admit a large N expansion, whose leading order encodes a sum over a class of colored triangulations of the D-sphere. The present paper investigates…
Diagrammatics of the quartic $O(N)^3$-invariant Sachdev-Ye-Kitaev-like tensor model
• Physics, Mathematics
• Journal of Mathematical Physics
• 2019
Various tensor models have been recently shown to have the same properties as the celebrated Sachdev-Ye-Kitaev (SYK) model. In this paper we study in detail the diagrammatics of two such SYK-like…
Large N limit of irreducible tensor models: O(N) rank-3 tensors with mixed permutation symmetry
Abstract: It has recently been proven that in rank three tensor models, the antisymmetric and symmetric traceless sectors both support a large N expansion dominated by melon diagrams [1]. We show how…
Uncolored random tensors, melon diagrams, and the Sachdev-Ye-Kitaev models
• Physics
• 2017
Certain models with rank-$3$ tensor degrees of freedom have been shown by Gurau and collaborators to possess a novel large $N$ limit, where $g^2 N^3$ is held fixed. In this limit the perturbative…
Diagrammatics of a colored SYK model and of an SYK-like tensor model, leading and next-to-leading orders
• Physics, Mathematics
• 2017
The Sachdev-Ye-Kitaev (SYK) model is a model of q interacting fermions. Gross and Rosenhaus have proposed a generalization of the SYK model which involves fermions with different flavors. In terms of…
Scaling violation in a field theory of closed strings in one physical dimension
• Physics
• 1990
Abstract: A one dimensional field theory of closed strings is solved exactly in a special double scaling limit, in which the string coupling 1/N goes to zero, the cosmological constant λ approaches a…
SYK-like tensor quantum mechanics with Sp(N) symmetry
• Physics, Mathematics
• Nuclear Physics B
• 2019
Abstract: We introduce a family of tensor quantum-mechanical models based on irreducible rank-3 representations of $Sp(N)$. In contrast to irreducible tensor models with $O(N)$ symmetry, the…
The 1/N Expansion of Multi-Orientable Random Tensor Models
• Mathematics, Physics
• 2014
Multi-orientable group field theory (GFT) was introduced in Tanasa (J Phys A 45:165401, 2012), as a quantum field theoretical simplification of GFT, which retains a larger class of tensor graphs than…
http://harvard.voxcharta.org/category/astro-ph/galactic-astro-ph/page/2/
## Recent Postings from Galaxies
### Testing Quasar Unification: Radiative Transfer in Clumpy Winds
Various unification schemes interpret the complex phenomenology of quasars and luminous active galactic nuclei (AGN) in terms of a simple picture involving a central black hole, an accretion disc and an associated outflow. Here, we continue our tests of this paradigm by comparing quasar spectra to synthetic spectra of biconical disc wind models, produced with our state-of-the-art Monte Carlo radiative transfer code. Previously, we have shown that we could produce synthetic spectra resembling those of observed broad absorption line (BAL) quasars, but only if the X-ray luminosity was limited to $10^{43}$ erg s$^{-1}$. Here, we introduce a simple treatment of clumping, and find that a filling factor of $\sim0.01$ moderates the ionization state sufficiently for BAL features to form in the rest-frame UV at more realistic X-ray luminosities. Our fiducial model shows good agreement with AGN X-ray properties and the wind produces strong line emission in, e.g., Ly$\alpha$ and CIV 1550 \AA\ at low inclinations. At high inclinations, the spectra possess prominent LoBAL features. Despite these successes, we cannot reproduce all emission lines seen in quasar spectra with the correct equivalent-width ratios, and we find an angular dependence of emission-line equivalent width despite the similarities in the observed emission line properties of BAL and non-BAL quasars. Overall, our work suggests that biconical winds can reproduce much of the qualitative behaviour expected from a unified model, but we cannot yet provide quantitative matches with quasar properties at all viewing angles. Whether disc winds can successfully unify quasars is therefore still an open question.
### Comparing Young Massive Clusters and their Progenitor Clouds in the Milky Way
Young massive clusters (YMCs) have central stellar mass surface densities exceeding $10^{4} M_{\odot} pc^{-2}$. It is currently unknown whether the stars formed at such high (proto)stellar densities. We compile a sample of gas clouds in the Galaxy which have sufficient gas mass within a radius of a few parsecs to form a YMC, and compare their radial gas mass distributions to the stellar mass distribution of Galactic YMCs. We find that the gas in the progenitor clouds is distributed differently than the stars in YMCs. The mass surface density profiles of the gas clouds are generally shallower than the stellar mass surface density profiles of the YMCs, which are characterised by prominent dense core regions with radii ~ 0.1 pc, followed by a power-law tail. On the scale of YMC core radii, we find that there are no known clouds with significantly more mass in their central regions when compared to Galactic YMCs. Additionally, we find that models in which stars form from very dense initial conditions require surface densities that are generally higher than those seen in the known candidate YMC progenitor clouds. Our results show that the quiescent, less evolved clouds contain less mass in their central regions than in the highly star-forming clouds. This suggests an evolutionary trend in which clouds continue to accumulate mass towards their centres after the onset of star formation. We conclude that a conveyor-belt scenario for YMC formation is consistent with the current sample of Galactic YMCs and their progenitor clouds.
### Sub-mm Emission Line Deep Fields: CO and [CII] Luminosity Functions out to z = 6
Now that ALMA is reaching its full capabilities, observations of sub-mm emission line deep fields become feasible. Deep fields are ideal to study the luminosity function of sub-mm emission lines, ultimately tracing the atomic and molecular gas properties of galaxies. We couple a semi-analytic model of galaxy formation with a radiative transfer code to make predictions for the luminosity function of CO J=1-0 up to CO J=6-5 and [CII] at redshifts z=0-6. We find that: 1) our model correctly reproduces the CO and [CII] emission of low- and high-redshift galaxies and reproduces the available constraints on the CO luminosity function at z<2.75; 2) we find that the CO and [CII] luminosity functions of galaxies increase from z = 6 to z = 4, remain relatively constant till z = 1 and rapidly decrease towards z = 0. The galaxies that are brightest in CO and [CII] are found at z~2; 3) the CO J=3-2 emission line is most favourable to study the CO luminosity and global H2 mass content of galaxies, because of its brightness and observability with currently available sub-mm and radio instruments; 4) the luminosity functions of high-J CO lines show stronger evolution than the luminosity functions of low-J CO lines; 5) our model barely reproduces the available constraints on the CO and [CII] luminosity function of galaxies at z>1.5 and the CO luminosity of individual galaxies at intermediate redshifts. We argue that this is driven by a lack of cold gas in galaxies at intermediate redshifts as predicted by cosmological simulations of galaxy formation. This may lay at the root of other problems theoretical models face at the same redshifts.
### A Slippery Slope: Systematic Uncertainties in the Baryonic Tully-Fisher Relation
The baryonic Tully-Fisher relation (BTFR) is both a valuable observational tool and a critical test of galaxy formation theory. We explore the systematic uncertainty in the slope and the scatter of the observed BTFR utilizing a homogeneously measured dataset of 930 isolated galaxies. We measure a fiducial relation of log_10 M_baryon = 3.24 log_10 V_rot + 3.21 with a scatter of 0.25 dex over the baryonic mass range of 10^7.4 to 10^11.3 M_sun. We then conservatively vary the definitions of M_baryon and V_rot, the sample definition and the linear fitting algorithm used to fit the BTFR. We obtain slopes ranging from 2.64 to 3.46 and scatter measurements ranging from 0.16 to 0.41 dex. We next compare our fiducial slope to literature measurements, where reported slopes range from 3.0 to 4.3 and scatter is either unmeasured, unmeasurable or as large as 0.4 dex. Measurements derived from unresolved HI line-widths tend to produce slopes of 3.2, while measurements derived strictly from resolved asymptotic rotation velocities produce slopes of 4.0. The largest factor affecting the BTFR slope is the definition of rotation velocity. Sample definition, mass range and linear fitting algorithm also significantly affect the measured BTFR. Galaxies with V_rot < 100 km/s are consistent with the BTFR of more massive galaxies, but these galaxies drive most of the scatter in the BTFR. This is most likely due to the diversity in rotation curve shapes of low-mass galaxies and underestimated systematic uncertainties. It is critical when comparing predictions to an observed BTFR that the rotation velocity definition, the sample selection and the fitting algorithm are similarly defined. Fitting a power-law model to the BTFR is an oversimplification and we recommend direct statistical comparisons between datasets with commensurable properties.
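As a quick illustration of the fiducial fit quoted in the abstract above, the relation can be evaluated directly. A sketch (the function name is ours, and the relation should only be applied inside the fitted mass and velocity range):

```python
import math

# Fiducial relation from the abstract: log10 M_baryon = 3.24 log10 V_rot + 3.21
def baryonic_mass(v_rot_kms):
    """Baryonic mass (in solar masses) implied by the fiducial BTFR fit."""
    return 10 ** (3.24 * math.log10(v_rot_kms) + 3.21)

m = baryonic_mass(200.0)  # a typical spiral rotation velocity
assert 4e10 < m < 5.5e10  # ~5 x 10^10 M_sun, within the fitted mass range
```

Note that the quoted 0.25 dex scatter means individual galaxies deviate from this central value by roughly a factor of two either way.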
### The CALYMHA survey: Ly$\alpha$ escape fraction and its dependence on galaxy properties at $z=2.23$
We present the first results from our CAlibrating LYMan-$\alpha$ with H$\alpha$ (CALYMHA) pilot survey at the Isaac Newton Telescope. We measure Ly$\alpha$ emission for 488 H$\alpha$ selected galaxies at $z=2.23$ from HiZELS in the COSMOS and UDS fields with a specially designed narrow-band filter ($\lambda_c$ = 3918 {\AA}, $\Delta\lambda$= 52 {\AA}). We find 17 dual H$\alpha$-Ly$\alpha$ emitters ($f_{\rm Ly\alpha} >5\times10^{-17}$ erg s$^{-1}$ cm$^{-2}$, of which 5 are X-ray AGN). For star-forming galaxies, we find a range of Ly$\alpha$ escape fractions (f$_{\rm esc}$, measured with 3$"$ apertures) from $2$\%$-30$\%. These galaxies have masses from $3\times10^8$ M$_{\odot}$ to 10$^{11}$ M$_{\odot}$ and dust attenuations E$(B-V)=0-0.5$. Using stacking, we measure a median escape fraction of $1.6\pm0.5$\% ($4.0\pm1.0$\% without correcting H$\alpha$ for dust), but show that this depends on galaxy properties. The stacked f$_{\rm esc}$ tends to decrease with increasing SFR and dust attenuation. However, at the highest masses and dust attenuations, we detect individual galaxies with f$_{\rm esc}$ much higher than the typical values from stacking, indicating significant scatter in the values of f$_{\rm esc}$. Relations between f$_{\rm esc}$ and UV slope are bimodal, with high f$_{\rm esc}$ for either the bluest or reddest galaxies. We speculate that this bimodality and large scatter in the values of f$_{\rm esc}$ is due to additional physical mechanisms such as outflows facilitating f$_{\rm esc}$ for dusty/massive systems. Ly$\alpha$ is significantly more extended than H$\alpha$ and the UV. f$_{\rm esc}$ continues to increase up to at least 20 kpc (3$\sigma$, 40 kpc [2$\sigma$]) for typical SFGs and thus the aperture is the most important predictor of f$_{\rm esc}$.
### Star Formation in Luminous Quasars at 2<z<3
We investigate the relation between star formation rates ($\dot{M}_{s}$) and AGN properties in optically selected type 1 quasars at $2<z<3$ using data from Herschel and the SDSS. We find that $\dot{\rm{M}}_s$ remains approximately constant with redshift, at $300\pm100~\rm{M}_{\odot}$yr$^{-1}$. Conversely, $\dot{\rm{M}}_s$ increases with AGN luminosity, up to a maximum of $\sim600~\rm{M}_{\odot}$yr$^{-1}$, and with CIV FWHM. In context with previous results, this is consistent with a relation between $\dot{\rm{M}}_s$ and black hole accretion rate ($\dot{\rm{M}}_{bh}$) existing in only parts of the $z-\dot{\rm{M}}_{s}-\dot{\rm{M}}_{bh}$ plane, dependent on the free gas fraction, the trigger for activity, and the processes that may quench star formation. The relations between $\dot{\rm{M}}_s$ and both AGN luminosity and CIV FWHM are consistent with star formation rates in quasars scaling with black hole mass, though we cannot rule out a separate relation with black hole accretion rate. Star formation rates are observed to decline with increasing CIV equivalent width. This decline can be partially explained via the Baldwin effect, but may have an additional contribution from one or more of three factors; $M_i$ is not a linear tracer of L$_{2500}$, the Baldwin effect changes form at high AGN luminosities, and high CIV EW values signpost a change in the relation between $\dot{\rm{M}}_s$ and $\dot{\rm{M}}_{bh}$. Finally, there is no strong relation between $\dot{\rm{M}}_s$ and Eddington ratio, or the asymmetry of the CIV line. The former suggests that star formation rates do not scale with how efficiently the black hole is accreting, while the latter is consistent with CIV asymmetries arising from orientation effects.
### Dust Destruction by the Reverse Shock in the Cassiopeia A Supernova Remnant
Core collapse supernovae (CCSNe) are important sources of interstellar dust, potentially capable of producing one solar mass of dust in their explosively expelled ejecta. However, unlike other dust sources, the dust has to survive the passage of the reverse shock, generated by the interaction of the supernova blast wave with its surrounding medium. Knowledge of the net amount of dust produced by CCSNe is crucial for understanding the origin and evolution of dust in the local and high-redshift universe. Our aim is to identify the dust destruction mechanisms in the ejecta, and derive the net amount of dust that survives the passage of the reverse shock. We use analytical models for the evolution of a supernova blast wave and of the reverse shock, with special application to the clumpy ejecta of the remnant of Cassiopeia A. We assume that the dust resides in cool oxygen-rich clumps that are uniformly distributed within the remnant and surrounded by a hot X-ray emitting plasma, and that the dust consists of silicates (MgSiO3) and amorphous carbon grains. The passage of the reverse shock through the clumps gives rise to a relative gas-grain motion and also destroys the clumps. Inside the ejecta clouds, dust is processed via kinetic sputtering, which is terminated either when the grains escape the clumps, or when the clumps are destroyed by the reverse shock. In either case, grain destruction proceeds thereafter by thermal sputtering in the hot ambient gas. We find that 11.8% and 15.9% of, respectively, the silicate and carbon dust survives the passage of the reverse shock by the time the shock has reached the center of the remnant. These fractions depend on the morphology of the ejecta and the medium into which the remnant is expanding, as well as the composition and size distribution of the grains that formed in the ejecta. Results will therefore differ for different types of supernovae.
### Dark matter fraction of low-mass cluster members probed by galaxy-scale strong lensing
We present a strong lensing system, composed of 4 multiple images of a source at z = 2.387, created by two lens galaxies, G1 and G2, belonging to the galaxy cluster MACS J1115.9+0129 at z = 0.353. We use observations taken as part of the Cluster Lensing and Supernova survey with Hubble, CLASH, and its spectroscopic follow-up programme at the Very Large Telescope, CLASH-VLT, to estimate the total mass distributions of the two galaxies and the cluster through strong gravitational lensing models. We find that the total projected mass values within the half-light radii, R_{e}, of the two lens galaxies are M_{T,G1}(< R_{e,G1}) = (3.6 +/- 0.4) x 10^{10}M_{Sun} and M_{T,G2}(< R_{e,G2}) = (4.2 +/- 1.6) x 10^{10}M_{Sun}. The effective velocity dispersion values of G1 and G2 are (122 +/- 7) km/s and (137 +/- 27) km/s, respectively. We remark that these values are relatively low when compared to those of ~200-300 km/s, typical of lens galaxies found in the field by previous surveys. By fitting the spectral energy distributions of G1 and G2, we measure projected luminous over total mass fractions within R_{e} of 0.11 +/- 0.03, for G1, and 0.73 +/- 0.32, for G2. The fact that the less massive galaxy, G1, is dark-matter dominated in its inner regions raises the question of whether the dark matter fraction in the core of early-type galaxies depends on their mass. Further investigating strong lensing systems will help us understand the influence that dark matter has on the structure and evolution of the inner regions of galaxies.
### The Time-Domain Spectroscopic Survey: Understanding the Optically Variable Sky with SEQUELS in SDSS-III
The Time-Domain Spectroscopic Survey (TDSS) is an SDSS-IV eBOSS subproject primarily aimed at obtaining identification spectra of ~220,000 optically-variable objects systematically selected from SDSS/Pan-STARRS1 multi-epoch imaging. We present a preview of the science enabled by TDSS, based on TDSS spectra taken over ~320 deg^2 of sky as part of the SEQUELS survey in SDSS-III, which is in part a pilot survey for eBOSS in SDSS-IV. Using the 15,746 TDSS-selected single-epoch spectra of photometrically variable objects in SEQUELS, we determine the demographics of our variability-selected sample, and investigate the unique spectral characteristics inherent in samples selected by variability. We show that variability-based selection of quasars complements color-based selection by selecting additional redder quasars, and mitigates redshift biases to produce a smooth quasar redshift distribution over a wide range of redshifts. The resulting quasar sample contains systematically higher fractions of blazars and broad absorption line quasars than from color-selected samples. Similarly, we show that M-dwarfs in the TDSS-selected stellar sample have systematically higher chromospheric active fractions than the underlying M-dwarf population, based on their H-alpha emission. TDSS also contains a large number of RR Lyrae and eclipsing binary stars with main-sequence colors, including a few composite-spectrum binaries. Finally, our visual inspection of TDSS spectra uncovers a significant number of peculiar spectra, and we highlight a few cases of these interesting objects. With a factor of ~15 more spectra, the main TDSS survey in SDSS-IV will leverage the lessons learned from these early results for a variety of time-domain science applications.
### Chemistry and Kinematics of Red Supergiant Stars in the Young Massive Cluster NGC 2100
We have obtained K-band Multi-Object Spectrograph (KMOS) near-IR spectroscopy for 14 red supergiant stars (RSGs) in the young massive star cluster NGC 2100 in the Large Magellanic Cloud (LMC). Stellar parameters including metallicity are estimated using the J-band analysis technique, which has been rigorously tested in the Local Universe. We find an average metallicity for NGC 2100 of [Z]=$-$0.38$\pm$0.20 dex, in good agreement with estimates from the literature for the LMC. Comparing our results in NGC 2100 with those for a Galactic cluster (at Solar-like metallicity) with a similar mass and age we find no significant difference in the location of RSGs in the Hertzsprung--Russell diagram. We combine the observed KMOS spectra to form a simulated integrated-light cluster spectrum and show that, by analysing this spectrum as a single RSG, the results are consistent with the average properties of the cluster. Radial velocities are estimated for the targets and the dynamical properties are estimated for the first time within this cluster. The data are consistent with a flat velocity dispersion profile, and with an upper limit of 3.9 km/s, at the 95% confidence level, for the velocity dispersion of the cluster. However, the intrinsic velocity dispersion is unresolved and could, therefore, be significantly smaller than the upper limit reported here. An upper limit on the dynamical mass of the cluster is derived as $M_{dyn} \le 15.2\times10^{4}M_{\odot}$ assuming virial equilibrium.
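The dynamical mass limit quoted above follows from the standard virial estimator $M_{dyn} = \eta\,\sigma^2 r_{hm}/G$. Below is a minimal sketch of that calculation; the coefficient $\eta \approx 9.75$ and the 4.4 pc half-mass radius are illustrative assumptions (only the 3.9 km/s dispersion limit comes from the abstract):

```python
# Virial mass estimate for a star cluster: M_dyn = eta * sigma^2 * r_hm / G.
# eta ~ 9.75 and r_hm = 4.4 pc are illustrative assumptions, not values
# quoted in the abstract.

G = 6.674e-11          # gravitational constant [m^3 kg^-1 s^-2]
M_SUN = 1.989e30       # solar mass [kg]
PC = 3.086e16          # parsec [m]

def dynamical_mass(sigma_kms, r_hm_pc, eta=9.75):
    """Virial mass in solar masses from a 1D velocity dispersion (km/s)
    and a half-mass radius (pc)."""
    sigma = sigma_kms * 1e3        # km/s -> m/s
    r_hm = r_hm_pc * PC            # pc -> m
    return eta * sigma**2 * r_hm / G / M_SUN

# Upper limit on the velocity dispersion from the abstract: 3.9 km/s.
m_dyn = dynamical_mass(3.9, 4.4)
print(f"M_dyn <= {m_dyn:.2e} M_sun")
```

With these assumed inputs the estimator returns a mass of order $10^{5}\,M_{\odot}$, the same order as the upper limit reported in the abstract.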
### SDSS-II Supernova Survey: An Analysis of the Largest Sample of Type Ia Supernovae and Correlations with Host-Galaxy Spectral Properties
Using the largest single-survey sample of Type Ia supernovae (SNe Ia) to date, we study the relationship between properties of SNe Ia and those of their host galaxies, focusing primarily on correlations with Hubble residuals (HR). Our sample consists of 345 photometrically-classified or spectroscopically-confirmed SNe Ia discovered as part of the SDSS-II Supernova Survey (SDSS-SNS). This analysis utilizes host-galaxy spectroscopy obtained during the SDSS-I/II spectroscopic survey and from an ancillary program on the SDSS-III Baryon Oscillation Spectroscopic Survey (BOSS) that obtained spectra for nearly all host galaxies of SDSS-II SN candidates. In addition, we use photometric host-galaxy properties from the SDSS-SNS data release (Sako et al. 2014) such as host stellar mass and star-formation rate. We confirm the well-known relation between HR and host-galaxy mass and find a 3.6$\sigma$ significance of a non-zero linear slope. We also recover correlations between HR and host-galaxy gas-phase metallicity and specific star-formation rate as they are reported in the literature. With our large dataset, we examine correlations between HR and multiple host-galaxy properties simultaneously and find no evidence of a significant correlation. We also independently analyze our spectroscopically-confirmed and photometrically-classified SNe Ia and comment on the significance of similar combined datasets for future surveys.
### The ESO UVES Advanced Data Products Quasar Sample - VI. Sub-Damped Lyman-$\alpha$ Metallicity Measurements and the Circum-Galactic Medium
The Circum-Galactic Medium (CGM) can be probed through the analysis of absorbing systems in the line-of-sight to bright background quasars. We present measurements of the metallicity of a new sample of 15 sub-damped Lyman-$\alpha$ absorbers (sub-DLAs, defined as absorbers with 19.0 < log N(H I) < 20.3) with redshift 0.584 < $\rm z_{abs}$ < 3.104 from the ESO Ultra-Violet Echelle Spectrograph (UVES) Advanced Data Products Quasar Sample (EUADP). We combine these results with other measurements from the literature to produce a compilation of metallicity measurements for 92 sub-DLAs as well as a sample of 362 DLAs. We apply a multi-element analysis to quantify the amount of dust in these two classes of systems. We find that either the element depletion patterns in these systems differ from the Galactic depletion patterns or they have a different nucleosynthetic history than our own Galaxy. We propose a new method to derive the velocity width of absorption profiles, using the modeled Voigt profile features. The correlation between the velocity width $\Delta V_{90}$ of the absorption profile and the metallicity is found to be tighter for DLAs than for sub-DLAs. We report hints of a bimodal distribution in the [Fe/H] metallicity of low redshift (z < 1.25) sub-DLAs, which is unseen at higher redshifts. This feature can be interpreted as a signature from the metal-poor, accreting gas and the metal-rich, outflowing gas, both being traced by sub-DLAs at low redshifts.
### The mosaic multiple stellar populations in $\omega$ Centauri : the Horizontal Branch and the Main Sequence
We interpret the stellar population of $\omega$ Centauri by means of a population synthesis analysis, following the most recent observational guidelines for input metallicities, helium and [(C+N+O)/Fe] contents. We deal at the same time with the main sequences, sub-giant and horizontal branch data. The reproduction of the observed colour magnitude features is very satisfying and bears interesting hints concerning the evolutionary history of this peculiar stellar ensemble. Our main results are: 1) no significant spread in age is required to fit the colour-magnitude diagram. Indeed we can use coeval isochrones for the synthetic populations, and we estimate that the ages fall within a $\sim 0.5$ Gyr time interval; in particular the most metal rich population can be coeval (in the above meaning) with the others, if its stars are very helium--rich (Y$\sim$0.37) and with the observed CNO enhancement ([(C+N+O)/Fe] = + 0.7); 2) a satisfactory fit of the whole HB is obtained, consistent with the choice of the populations providing a good reproduction of the main sequence and sub-giant data; 3) the split in magnitude observed in the red HB is well reproduced assuming the presence of two stellar populations in the two different sequences observed: a metal poor population made of stars evolving from the blue side (luminous branch) and a metal richer one whose stars are in a stage closer to the zero age HB (dimmer branch). This model also satisfactorily fits the period and the [Fe/H] distribution of the RR Lyrae stars.
### High-Resolution Imaging of Water Maser Emission in the active galaxies NGC 6240 and M51
We present the results of observations of 22 GHz H2O maser emission in NGC 6240 and M51 made with the Karl G. Jansky Very Large Array. Two major H2O maser features and several minor features are detected toward the southern nucleus of NGC 6240. These features are redshifted by about 300 km/s from the galaxy's systemic velocity and remain unresolved at the synthesized beam size. A combination of our two-epoch observations and published data reveals an apparent correlation between the strength of the maser and the 22 GHz radio continuum emission, implying that the maser excitation relates to the activity of an active galactic nucleus in the southern nucleus rather than star-forming activity. The star-forming galaxy M51 hosts H2O maser emission in the center of the galaxy; however, the origin of the maser has been an open question. We report the first detection of 22 GHz nuclear radio continuum emission in M51. The continuum emission is co-located with the maser position, which indicates that the maser arises from active galactic nucleus activity and not from star-forming activity in the galaxy.
### The white dwarf population within 40 pc of the Sun
The white dwarf luminosity function is an important tool to understand the properties of the Solar neighborhood, such as its star formation history and its age. Here we present a population synthesis study of the white dwarf population within 40 pc of the Sun, and compare the results of this study with the properties of the observed sample. We use a state-of-the-art population synthesis code based on Monte Carlo techniques, that incorporates the most recent and reliable white dwarf cooling sequences, an accurate description of the Galactic neighborhood, and a realistic treatment of all the known observational biases and selection procedures. We find a good agreement between our theoretical models and the observed data. In particular, our simulations reproduce a previously unexplained feature of the bright branch of the white dwarf luminosity function, which we argue is due to a recent episode of star formation. We also derive the age of the Solar neighborhood employing the position of the observed cut-off of the white dwarf luminosity function, obtaining ~8.9 +/- 0.2 Gyr. We conclude that a detailed description of the ensemble properties of the population of white dwarfs within 40 pc of the Sun allows us to obtain interesting constraints on the history of the Solar neighborhood.
### Metallicity gradients in Local Universe galaxies: time evolution and effects of radial migration
Our knowledge of the shape of radial metallicity gradients in disc galaxies has recently improved. Conversely, the understanding of their time evolution is more complex, since it requires analysis of stellar populations with different ages, or systematic studies of galaxies at different redshifts. In the Local Universe, HII regions and planetary nebulae (PNe) are important tools to investigate it. We present an in-depth study of all nearby spiral galaxies (M33, M31, NGC300, and M81) with direct-method nebular abundances of both populations. For the first time, we also evaluate the radial migration of PN populations. We analyse HII region and PN properties to: determine whether oxygen in PNe is a reliable tracer for past interstellar medium (ISM) composition; homogenise the published datasets; estimate the migration of the oldest stellar populations; determine the overall chemical enrichment and slope evolution. We confirm that oxygen in PNe is a reliable tracer for the past ISM metallicity. We find that PN gradients are flatter than or equal to those of HII regions. When radial motions are negligible, this result provides a direct measurement of the time evolution of the gradient. For galaxies with dominant radial motions, we provide upper limits on the gradient evolution. Finally, the total metal content increases with time in all target galaxies, with early morphological types having a larger increment Delta(O/H) than late-type galaxies. Our findings provide important constraints to discriminate among different galactic evolutionary scenarios, favouring cosmological models with enhanced feedback from supernovae. The advent of extremely large telescopes will allow us to include galaxies in a wider range of morphologies and environments, thus putting firmer constraints to galaxy formation and evolution scenarios.
### Probing the Dragonfish star-forming complex: the ionizing population of the young massive cluster Mercer 30
The Dragonfish Nebula has been recently claimed to be powered by a superluminous but elusive OB association. Instead, systematic searches in near-infrared photometric surveys have found many other cluster candidates on this sky region. Among these, the first confirmed young massive cluster was Mercer 30, where Wolf-Rayet stars were found. We perform a new characterization of Mercer 30 with unprecedented accuracy, combining NICMOS/HST and VVV photometric data with multi-epoch ISAAC/VLT H- and K-band spectra. Stellar parameters for most of the spectroscopically observed cluster members are found through precise non-LTE atmosphere modeling with the CMFGEN code. Our spectrophotometric study for this cluster yields a new, revised distance of d = (12.4 +/- 1.7) kpc and a total of Q = 6.70 x 10^50 Lyman ionizing photons. A cluster age of (4.0 +/- 0.8) Myr is found through isochrone fitting, and a total mass of (1.6 +/- 0.6) x 10^4 Msol is estimated thanks to our extensive knowledge of the post-main-sequence population. As a consequence, membership of Mercer 30 to the Dragonfish star-forming complex is confirmed, allowing us to use this cluster as a probe for the whole complex, which turns out to be extremely large (400 pc across) and located at the outer edge of the Sagittarius-Carina spiral arm (11 kpc from the Galactic Center). The Dragonfish complex hosts 19 young clusters or cluster candidates (including Mercer 30 and a new candidate presented in this work) and an estimated minimum of 9 field Wolf-Rayet stars. The sum of all these contributions accounts for, at least, 73% of the Dragonfish Nebula ionization and leaves little or no room for the alleged superluminous OB association; alternative explanations are discussed.
### What is controlling the fragmentation process in the Infrared Dark Cloud G14.225-0.506? Different levels of fragmentation in twin hubs
We present observations of the 1.3 mm continuum emission toward hub-N and hub-S of the infrared dark cloud G14.225-0.506 carried out with the Submillimeter Array, together with observations of the dust emission at 870 and 350 microns obtained with the APEX and CSO telescopes. The large scale dust emission of both hubs consists of a single peaked clump elongated in the direction of the associated filament. At small scales, the SMA images reveal that both hubs fragment into several dust condensations. The fragmentation level was assessed under the same conditions and we found that hub-N presents 4 fragments while hub-S is more fragmented, with 13 fragments identified. We studied the density structure by means of a simultaneous fit of the radial intensity profile at 870 and 350 microns and the spectral energy distribution, adopting a Plummer-like function to describe the density structure. The parameters inferred from the model are remarkably similar in both hubs, suggesting that the density structure may not be responsible for determining the fragmentation level. We estimated several physical parameters, such as the level of turbulence and the magnetic field strength, and we found no significant differences between these hubs. The Jeans analysis indicates that the observed fragmentation is more consistent with thermal Jeans fragmentation than with a scenario in which turbulent support is included. The lower fragmentation level observed in hub-N could be explained in terms of stronger UV radiation effects from a nearby HII region, evolutionary effects, and/or stronger magnetic fields at small scales, a scenario that should be further investigated.
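The thermal Jeans mass against which the observed fragment masses are compared is $M_J = (5kT/G\mu m_H)^{3/2}\,(3/4\pi\rho)^{1/2}$. A hedged sketch of this calculation with illustrative hub conditions (T = 20 K, n = 10^5 cm^-3 and mu = 2.33 are assumptions, not values from the abstract):

```python
import math

# Thermal Jeans mass M_J = (5 k T / (G mu m_H))^(3/2) * (3 / (4 pi rho))^(1/2).
# The temperature and density below are illustrative hub conditions,
# not numbers taken from the paper.

K_B = 1.381e-23    # Boltzmann constant [J/K]
G = 6.674e-11      # gravitational constant [m^3 kg^-1 s^-2]
M_H = 1.673e-27    # hydrogen atom mass [kg]
M_SUN = 1.989e30   # solar mass [kg]

def jeans_mass(T_kelvin, n_cm3, mu=2.33):
    """Thermal Jeans mass in solar masses for gas at temperature T
    and number density n (per cm^3), with mean molecular weight mu."""
    rho = mu * M_H * n_cm3 * 1e6               # mass density [kg/m^3]
    a = (5.0 * K_B * T_kelvin / (G * mu * M_H)) ** 1.5
    b = (3.0 / (4.0 * math.pi * rho)) ** 0.5
    return a * b / M_SUN

print(f"M_J ~ {jeans_mass(20.0, 1e5):.1f} M_sun")
```

For these assumed conditions the Jeans mass comes out at a few solar masses, the scale against which turbulent-support scenarios (with a much larger effective Jeans mass) are contrasted.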
### Binary Black Hole Mergers from Globular Clusters: Masses, Merger Rates, and the Impact of Stellar Evolution
Expanding upon our previous work (Rodriguez et al., 2015), we study merging binary black holes formed in globular clusters using our Monte Carlo approach to stellar dynamics. We have created a new set of 52 cluster models with different masses, metallicities, and radii to fully characterize the binary black hole merger rate. These models include all the relevant dynamical processes (such as two-body relaxation, strong encounters, and three-body binary formation) and agree well with detailed direct N-body simulations. In addition, we have enhanced our stellar evolution algorithms with updated metallicity-dependent stellar wind and supernova prescriptions, allowing us to compare our results directly to the most recent population synthesis predictions for merger rates from isolated binary evolution. We explore the relationship between a cluster's global properties and the population of binary black holes that it produces. In particular, we derive a numerically calibrated relationship between the merger times of ejected black hole binaries and a cluster's mass and radius. We explore the masses and mass ratios of these binaries as a function of redshift, and find a merger rate of ~5 Gpc$^{-3}$ yr$^{-1}$ in the local universe, with 90% of sources having total masses from $32M_{\odot}$ to $64M_{\odot}$. Under standard assumptions, approximately 1 out of every 7 binary black hole mergers in the local universe will have originated in a globular cluster, but we also explore the sensitivity of this result to different assumptions for binary stellar evolution. If black holes were born with significant natal kicks, comparable to those of neutron stars, then the merger rate of binary black holes from globular clusters would be comparable to that from the field, with approximately 1/2 of mergers originating in clusters [Abridged].
### The Science Case for ALMA Band 2 and Band 2+3
We discuss the science drivers for ALMA Band 2 which spans the frequency range from 67 to 90 GHz. The key science in this frequency range are the study of the deuterated molecules in cold, dense, quiescent gas and the study of redshifted emission from galaxies in CO and other species. However, Band 2 has a range of other applications which are also presented. The science enabled by a single receiver system which would combine ALMA Bands 2 and 3 covering the frequency range 67 to 116 GHz, as well as the possible doubling of the IF bandwidth of ALMA to 16 GHz, are also considered.
### Studying the Outflow-Core Interaction with ALMA Cycle 1 Observations of the HH 46/47 Molecular Outflow
We present ALMA Cycle 1 observations of the HH 46/47 molecular outflow using combined 12m array and 7m array observations. We use 13CO and C18O emission to correct for the 12CO optical depth, to accurately estimate the outflow mass, momentum and kinetic energy. Applying the optical depth correction increases the mass estimate by a factor of 14, the momentum by a factor of 6, and the kinetic energy by a factor of about 2. The new 13CO(1-0) and C18O(1-0) data also allow us to trace denser and slower outflow material than that traced by 12CO. These species are only detected within about 1-2 km/s from the cloud velocity. The cavity wall of the red lobe appears at very low velocities (~0.2 km/s). Combining the material traced only by 13CO and C18O, the measured total mass of the CO outflow is 1.4 Msun, the total momentum is 1.7 Msun km/s and the total energy is 4.7e43 erg, assuming Tex=15 K. The improved angular resolution and sensitivity in 12CO reveal more details of the outflow structure. Specifically, we find that the outflow cavity wall is composed of multiple shells entrained in a series of jet bow-shock events. The outflow kinetic energy distribution shows that even though the red lobe is mainly entrained by jet bow-shocks, more outflow energy is being deposited into the cloud at the base of the outflow cavity rather than around the heads of the bow shocks. The estimated outflow mass, momentum, and energy indicate that the outflow is capable of dispersing the parent core within the typical lifetime of the embedded phase of a low-mass protostar, regulating a core-to-star efficiency of 1/4-1/3. The 13CO and C18O emission also trace a circumstellar envelope with rotation and infall motions. In CS, we found possible evidence for a slowly-moving rotating outflow, which we believe is entrained not only poloidally but also toroidally by a wind launched from relatively large radii on the disk.
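One common way to apply the optical-depth correction described above is to solve for tau(12CO) from the observed 12CO/13CO brightness ratio and then boost the optically-thin mass estimate by tau/(1 - e^-tau). A sketch under assumed values (a [12CO]/[13CO] abundance ratio of 62 and an observed line ratio of 10 are illustrative numbers, not those used in the paper):

```python
import math

def tau12_from_line_ratio(R_obs, X=62.0):
    """Solve (1 - exp(-tau)) / (1 - exp(-tau/X)) = R_obs for the 12CO
    optical depth by bisection; X is the assumed [12CO]/[13CO] abundance
    ratio. The left-hand side decreases monotonically from X to 1 as tau
    grows, so the root is bracketed for 1 < R_obs < X."""
    def f(tau):
        return (1.0 - math.exp(-tau)) / (1.0 - math.exp(-tau / X)) - R_obs
    lo, hi = 1e-6, 100.0
    for _ in range(200):
        mid = 0.5 * (lo + hi)
        if f(mid) > 0.0:
            lo = mid
        else:
            hi = mid
    return 0.5 * (lo + hi)

def mass_correction(tau):
    """Factor by which the optically-thin mass estimate is boosted."""
    return tau / (1.0 - math.exp(-tau))

tau = tau12_from_line_ratio(10.0)
print(f"tau(12CO) ~ {tau:.1f}, mass correction ~ {mass_correction(tau):.1f}x")
```

For this illustrative line ratio the correction is several-fold, of the same order as the factor-of-14 mass increase reported in the abstract.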
### The off-centered Seyfert-like compact emission in the nuclear region of NGC 3621
We analyze an optical data cube of the nuclear region of NGC 3621, taken with the integral field unit of the Gemini Multi-object Spectrograph. We found that the previously detected central line emission in this galaxy actually comes from a blob, located at a projected distance of 2.14" +/- 0.08" (70.1 +/- 2.6 pc) from the stellar nucleus. Only diffuse emission was detected in the rest of the field of view, with a deficit of emission at the position of the stellar nucleus. Diagnostic diagram analysis reveals that the off-centered emitting blob has a Seyfert 2 spectrum. We propose that the line-emitting blob may be a "fossil" emission-line region or a light "echo" from an active galactic nucleus (AGN), which was significantly brighter in the past. Our estimates indicate that the bolometric luminosity of the AGN must have decreased by a factor of ~13 - 500 during the last ~230 years. A second scenario to explain the morphology of the line-emitting areas in the nuclear region of NGC 3621 involves no decrease of the AGN bolometric luminosity and establishes that the AGN is highly obscured toward the observer but not toward the line-emitting blob. The third scenario proposed here assumes that the off-centered line-emitting blob is a recoiling supermassive black hole, after the coalescence of two black holes. Finally, an additional hypothesis is that the central X-ray source is not an AGN, but an X-ray binary. This idea is consistent with all the scenarios we proposed.
### The physical and chemical structure of Sagittarius B2, I. Three-dimensional thermal dust and free-free continuum modeling on 100 au to 45 pc scales
We model the dust and free-free continuum emission in the high-mass star-forming region Sagittarius B2 in order to reconstruct the three-dimensional density and dust temperature distribution, as a crucial input to follow-up studies of the gas velocity field and molecular abundances. We employ the three-dimensional radiative transfer program RADMC-3D to calculate the dust temperature self-consistently, provided a given initial density distribution. This density distribution of the entire cloud complex is then recursively reconstructed based on available continuum maps, including both single-dish and high-resolution interferometric maps covering a wide frequency range (40 GHz - 4 THz). The model covers spatial scales from 45 pc down to 100 au, i.e. a spatial dynamic range of 10^5. We find that the density distribution of Sagittarius B2 can be reasonably well fitted by applying a superposition of spherical cores with Plummer-like density profiles. In order to reproduce the spectral energy distribution, we position Sgr B2(N) along the line of sight behind the plane containing Sgr B2(M). We find that the entire cloud complex comprises a total gas mass of 8.0 x 10^6 Msun within a diameter of 45 pc, corresponding to an averaged gas density of 170 Msun/pc^3. We estimate stellar masses of 2400 Msun and 20700 Msun and luminosities of 1.8 x 10^6 Lsun and 1.2 x 10^7 Lsun for Sgr B2(N) and Sgr B2(M), respectively. We report H_2 column densities of 2.9 x 10^24 cm^-2 for Sgr B2(N) and 2.5 x 10^24 cm^-2 for Sgr B2(M) in a 40" beam. For Sgr B2(S), we derive a stellar mass of 1100 Msun, a luminosity of 6.6 x 10^5 Lsun and a H_2 column density of 2.2 x 10^24 cm^-2 in a 40" beam. We calculate a star formation efficiency of 5% for Sgr B2(N) and 50% for Sgr B2(M), indicating that most of the gas content in Sgr B2(M) has already been converted to stars or dispersed.
### Bayesian analysis of cosmic-ray propagation: evidence against homogeneous diffusion
We present the results of the most complete scan to date of the parameter space for cosmic ray (CR) injection and propagation. We perform a Bayesian search of the main GALPROP parameters, using the MultiNest nested sampling algorithm, augmented by the BAMBI neural network machine learning package. This is the first such study to separate out low-mass isotopes ($p$, $\bar p$ and He) from the usual light elements (Be, B, C, N, O). We find that the propagation parameters that best fit $p$, $\bar p$, He data are significantly different from those that fit light elements, including the B/C and $^{10}$Be/$^9$Be secondary-to-primary ratios normally used to calibrate propagation parameters. This suggests each set of species is probing a very different interstellar medium, and that the standard approach of calibrating propagation parameters using B/C can lead to incorrect results. We present posterior distributions and best fit parameters for propagation of both sets of nuclei, as well as for the injection abundances of elements from H to Si. The input GALDEF files with these new parameters will be included in an upcoming public GALPROP update.
### The Infrared Medium-Deep Survey. V. A New Selection Strategy for Quasars at z > 5 based on Medium-Band Observation with SQUEAN
Multiple color selection techniques have been successful in identifying quasars from wide-field broad-band imaging survey data. Among the quasars that have been discovered so far, however, there is a redshift gap at $5 \lesssim {\rm z} \lesssim 5.7$ due to the limitations of filter sets in previous studies. In this work, we present a new selection technique of high redshift quasars using a sequence of medium-band filters: nine filters with central wavelengths from 625 to 1025 nm and bandwidths of 50 nm. Photometry with these medium-bands traces the spectral energy distribution (SED) of a source, similar to spectroscopy with resolution R $\sim$ 15. By conducting medium-band observations of high redshift quasars at 4.7 $\leq$ z $\leq$ 6.0 and brown dwarfs (the main contaminants in high redshift quasar selection) using the SED camera for QUasars in EArly uNiverse (SQUEAN) on the 2.1-m telescope at the McDonald Observatory, we show that these medium-band filters are superior to multi-color broad-band color selection in separating high redshift quasars from brown dwarfs. In addition, we show that redshifts of high redshift quasars can be determined to an accuracy of $\Delta{\rm z}/(1+{\rm z}) = 0.002$ -- $0.026$. The selection technique can be extended to z $\sim$ 7, suggesting that the medium-band observation can be powerful in identifying quasars even at the re-ionization epoch.
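The quoted redshift accuracy uses the standard photometric-redshift metric $\Delta{\rm z}/(1+{\rm z}) = (z_{\rm phot} - z_{\rm spec})/(1 + z_{\rm spec})$. A minimal sketch of that metric with made-up redshift pairs (the values below are illustrative, not survey data):

```python
def photoz_accuracy(z_phot, z_spec):
    """Per-object photometric-redshift errors dz/(1+z), the metric
    quoted in the abstract."""
    return [(zp - zs) / (1.0 + zs) for zp, zs in zip(z_phot, z_spec)]

# Illustrative (z_phot, z_spec) pairs in the survey's redshift range,
# not actual measurements.
z_spec = [4.7, 5.2, 5.8, 6.0]
z_phot = [4.72, 5.15, 5.83, 6.05]
errs = photoz_accuracy(z_phot, z_spec)
print([f"{e:+.4f}" for e in errs])
```

Dividing by (1 + z_spec) is what lets accuracies at very different redshifts be compared on a single scale, which is why the abstract reports a single 0.002-0.026 range across 4.7 <= z <= 6.0.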
### The start of the Sagittarius spiral arm (Sagittarius origin) and the start of the Norma spiral arm (Norma origin) - model-computed and observed arm tangents at galactic longitudes -20 degrees < l < +23 degrees
Here we fitted a 4-arm spiral structure to the more accurate data on global arm pitch angle and arm longitude tangents, to get the start of each spiral arm near the Galactic nucleus. We find that the tangent to the 'start of the Sagittarius' spiral arm (arm middle) is at l= -17 degrees +/- 0.5 degree, while the tangent to the 'start of the Norma' spiral arm (arm middle) is at l= +20 degrees +/- 0.5 degree. Earlier, we published a compilation of observations and analysis of the tangent to each spiral arm tracer, from longitudes +23 degrees to +340 degrees; here we cover the arm tracers in the remaining longitudes +340 degrees (= -20 degrees) to +23 degrees. Our model arm tangents are confirmed through the recent observed masers data (at the arm's inner edge). Observed arm tracers in the inner Galaxy show an offset from the mid-arm; this was also found elsewhere in the Milky Way disk (Vallee 2014c). In addition, we collated the observed tangents to the so-called '3-kpc-arm' features; here they are found statistically to be near l= -18 degrees +/- 2 degrees and near l= +21 degrees +/- 2 degrees, after excluding misidentified spiral arms. We find that the model-computed arm tangents in the inner Galaxy are spatially coincident with the mean longitude of the observed tangents to the '3-kpc-arm' features (same galactic longitudes, within the errors). These spatial similarities may be suggestive of a contiguous space.
### The Herschel Virgo Cluster Survey. XIX. Physical properties of low luminosity FIR sources at $z <$ 0.5
The star formation rate (SFR) is a crucial parameter for investigating galaxy evolution. At low redshift the cosmic SFR density declines smoothly, and massive active galaxies become passive, reducing their star formation activity. This implies that the bulk of the SFR density at low redshift is mainly driven by low mass objects. We investigate the properties of a sample of low luminosity Far-Infrared (FIR) sources selected at 250 microns from Pappalardo et al. (2015). We have collected data from Ultraviolet to FIR to perform a multi-wavelength analysis. The main goal is to investigate the correlation between SFR, stellar mass, and dust mass for a galaxy population with a wide range in dust content and stellar mass, including the low mass regime that most probably dominates the SFR density at low z. We define a main sample of ~800 sources with full Spectral Energy Distribution (SED) coverage between 0.15 < lambda < 500 microns and an extended sample with ~5000 sources in which we remove the constraints on the Ultraviolet and Near-Infrared bands. We analyze both samples with two different SED fitting methods: MAGPHYS and CIGALE. In the SFR versus stellar mass plane our samples occupy a region included between local spirals and higher redshift star forming galaxies. The subsample of galaxies with the highest masses (M* > 3e10 Msol) does not lie on the main sequence, but shows a small offset, as a consequence of the decreased star formation. Low mass galaxies (M* < 1e10 Msol) settle in the main sequence with SFR and stellar mass consistent with local spirals. Deep Herschel data allow the identification of a mixed galaxy population, with galaxies still in an assembly phase, or galaxies at the beginning of their passive evolution. We find that the dust luminosity is the parameter that discriminates these two galaxy populations.
### Keck/MOSFIRE Spectroscopy of z=7-8 Galaxies: Lyman-alpha Emission from a Galaxy at z=7.66
We report the results from some of the deepest Keck/MOSFIRE data yet obtained for candidate $z \gtrsim 7$ galaxies. Our data show one significant line detection with 6.5$\sigma$ significance in our combined 10 hours of integration which is independently detected on more than one night, ruling out the possibility that the detection is spurious. The asymmetric line profile and non-detection in the optical bands strongly imply that the detected line is Ly$\alpha$ emission from a galaxy at $z$(Ly$\alpha)=7.6637 \pm 0.0011$, making it the fourth spectroscopically confirmed galaxy at $z>7.5$. This galaxy is bright in the rest-frame ultraviolet (UV; $M_{\rm UV} \sim -21.2$) with a moderately blue UV slope ($\beta=-2.2^{+0.3}_{-0.2}$), and exhibits a rest-frame Ly$\alpha$ equivalent width of EW(Ly$\alpha$) $\sim 15.6^{+5.6}_{-3.6}$ \AA. The non-detection of the 11 other $z \sim$ 7--8 galaxies in our long 10 hr integration, reaching a median 5$\sigma$ sensitivity of 28 \AA\ in the rest-frame EW(Ly$\alpha$), implies a 1.3$\sigma$ deviation from the null hypothesis of a non-evolving distribution in the rest-frame EW(Ly$\alpha$) between $3<z<6$ and $z=$ 7--8. Our results are consistent with previous studies finding a decline in Ly$\alpha$ emission at $z>6.5$, which may signal the evolving neutral fraction in the intergalactic medium at the end of the reionization epoch, although our weak evidence suggests the need for a larger statistical sample to allow for a more robust conclusion.
### The low-mass end of the baryonic Tully-Fisher relation
The scaling of disk galaxy rotation velocity with baryonic mass (the "Baryonic Tully-Fisher" relation, BTF) has long confounded galaxy formation models. It is steeper than the M ~ V^3 scaling relating halo virial masses and circular velocities, and its zero point implies that galaxies comprise a very small fraction of available baryons. Such low galaxy formation efficiencies may in principle be explained by winds driven by evolving stars, but the tightness of the BTF relation argues against the substantial scatter expected from such a vigorous feedback mechanism. We use the APOSTLE/EAGLE simulations to show that the BTF relation is well reproduced in LCDM simulations that match the size and number of galaxies as a function of stellar mass. In such models, galaxy rotation velocities are proportional to halo virial velocity and the steep velocity-mass dependence results from the decline in galaxy formation efficiency with decreasing halo mass needed to reconcile the CDM halo mass function with the galaxy luminosity function. Despite the strong feedback, the scatter in the simulated BTF is smaller than observed, even when considering all simulated galaxies and not just rotationally-supported ones. The simulations predict that the BTF should become increasingly steep at the faint end, although the velocity scatter at fixed mass should remain small. Observed galaxies with rotation speeds below ~40 km/s seem to deviate from this prediction. We discuss observational biases and modeling uncertainties that may help to explain this disagreement in the context of LCDM models of dwarf galaxy formation.
### The Fornax Deep Survey with VST. I. The extended and diffuse stellar halo of NGC~1399 out to 192 kpc
[Abridged] We have started a new deep, multi-band imaging survey of the Fornax cluster, dubbed the Fornax Deep Survey (FDS), at the VLT Survey Telescope. In this paper we present the deep photometry inside two square degrees around the bright galaxy NGC1399 in the core of the cluster. We found a very extended and diffuse envelope surrounding the luminous galaxy NGC1399: we map the surface brightness out to 33 arcmin (~192 kpc) from the galaxy center and down to about 31 mag/arcsec^2 in the g band. The deep photometry allows us to detect a faint stellar bridge in the intracluster region between NGC1399 and NGC1387. By analyzing the integrated colors of this feature, we argue that it could be due to the ongoing interaction between the two galaxies, in which the outer envelope of NGC1387 on its east side is stripped away. By fitting the light profile, we find that a physical break radius exists in the total light distribution at R=10 arcmin (~58 kpc), which sets the transition region between the bright central galaxy and the outer exponential stellar halo. We discuss the main implications of this work on the build-up of the stellar halo at the center of the Fornax cluster. By comparing with numerical simulations of stellar halo formation for the most massive BCGs, we find that the observed stellar halo mass fraction is consistent with a halo formed through the multiple accretion of progenitors with stellar masses in the range 10^8 - 10^11 M_sun. This might suggest that the halo of NGC1399 has also gone through a major merging event. The absence of a significant number of luminous stellar streams and tidal tails out to 192 kpc suggests that this strong interaction dates back to an early formation epoch. Therefore, differently from the Virgo cluster, the extended stellar halo around NGC1399 is characterised by a more diffuse and well-mixed component, including the ICL.
### Accuracy requirements to test the applicability of the random cascade model to supersonic turbulence
A model, which is widely used for inertial range statistics of supersonic turbulence in the context of molecular clouds and star formation, expresses (measurable) relative scaling exponents Z_p of two-point velocity statistics as a function of two parameters, beta and Delta. The model relates them to the dimension D of the most dissipative structures, D=3-Delta/(1-beta). While this description has proved most successful for incompressible turbulence (beta=Delta=2/3, and D=1), its applicability in the highly compressible regime remains debated. For this regime, theoretical arguments suggest D=2 and Delta=2/3, or Delta=1. Best estimates based on 3D periodic box simulations of supersonic isothermal turbulence yield Delta=0.71 and D=1.9, with uncertainty ranges of Delta in [0.67, 0.78] and D in [1.60, 2.04]. With these 5-10% uncertainty ranges just marginally including the theoretical values of Delta=2/3 and D=2, doubts remain whether the model indeed applies and, if it applies, for what values of beta and Delta. We use a Monte Carlo approach to mimic actual simulation data and examine which factors are most relevant for the fit quality. We estimate that 0.1% (0.05%) accurate Z_p, with p=1...5, should allow for 2% (1%) accurate estimates of beta and Delta in the highly compressible regime, but not in the mildly compressible regime. We argue that simulation-based Z_p with such accuracy are within reach of today's computer resources. If this kind of data does not allow for the expected high-quality fit of beta and Delta, then this may indicate the inapplicability of the model to the simulation data. In fact, models other than the one we examine here have been suggested.
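The mapping between the model parameters and the dissipative-structure dimension quoted above can be made explicit; as a sketch, plugging the two limiting cases discussed in the abstract into the stated relation:

```latex
D = 3 - \frac{\Delta}{1-\beta}
\qquad\Longrightarrow\qquad
\begin{cases}
\text{incompressible: } \beta=\Delta=\tfrac{2}{3}
& \Rightarrow\; D = 3 - \dfrac{2/3}{1/3} = 1 \quad\text{(filamentary structures)},\\[6pt]
\text{highly compressible: } \Delta=\tfrac{2}{3},\ \beta=\tfrac{1}{3}
& \Rightarrow\; D = 3 - \dfrac{2/3}{2/3} = 2 \quad\text{(sheet-like structures)}.
\end{cases}
```

The second case simply inverts the relation for the theoretically suggested values D=2 and Delta=2/3, which fixes beta=1/3; the value of beta itself is not quoted in the abstract.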
### DHIGLS: DRAO H I Intermediate Galactic Latitude Survey
Observations of Galactic H I gas for seven intermediate Galactic latitude fields are presented at 1' angular resolution using data from the DRAO Synthesis Telescope (ST) and the Green Bank Telescope (GBT). The DHIGLS data are the most extensive arcminute resolution measurements of the diffuse atomic interstellar medium beyond those in the Galactic plane. The acquisition, reduction, calibration, and mosaicking of the DRAO ST data and the cross calibration and incorporation of the short-spacing information from the GBT are described. The high quality of the DHIGLS data enables a variety of new studies in directions of low Galactic column density. We find evidence for dramatic changes in the structures in channel maps over even small changes in velocity. This narrow line emission has counterparts in absorption spectra against bright background radio sources, confirming that the gas is cold and dense and can be identified as the cold neutral medium phase. We analyze the angular power spectra of maps of the integrated H I emission (column density) from the mosaics for several distinct velocity ranges. Fitting power spectrum models based on a power law, but including the effects of the synthesized beam and noise at high spatial frequencies, we find exponents ranging from -2.5 to -3.0. Power spectra of maps of the centroid velocity for these components give similar results. These exponents are interpreted as being representative of the 3D density and velocity fields of the atomic gas, respectively. Fully reduced DHIGLS H I data cubes and other data products are available at www.cita.utoronto.ca/DHIGLS.
### On the inconsistency between cosmic stellar mass density and star formation rate up to $z\sim8$
In this paper, we test the discrepancy between the stellar mass density and the instantaneous star formation rate in the redshift range $0<z<8$ using a large observational data sample. We first compile measurements of the stellar mass density up to $z\sim 8$. Comparing the observed stellar mass densities with the time-integral of the instantaneous star formation history, we find that the observed stellar mass densities are lower than those implied by the star formation history at $z<4$. We also use a Markov chain Monte Carlo method to derive the best-fitting star formation history from the observed stellar mass density data. At $0.5<z<6$, the observed star formation rate densities are larger than the best-fitting ones, especially at $z\sim2$, where the difference is a factor of about two. However, at lower ($z<0.5$) and higher redshifts ($z>6$), the derived star formation history is consistent with the observations. This is the first test of the discrepancy between the observed stellar mass density and the instantaneous star formation rate up to very high redshift ($z\approx8$) using a Markov chain Monte Carlo method and a varying recycling factor. Several possible reasons for this discrepancy are discussed, such as underestimation of the stellar mass density, the initial mass function, and cosmic metallicity evolution.
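The consistency test described above — comparing the observed stellar mass density with the time-integral of the star formation history — can be sketched numerically. The SFRD fitting form (a Madau & Dickinson 2014-style fit), the cosmological parameters, and the return fraction below are illustrative assumptions, not the paper's actual inputs:

```python
import math

# Illustrative sketch: integrate a fitted cosmic SFR density over cosmic time
# to get the implied stellar mass density, rho_*(z) = (1 - R) * int SFRD dt.
H0 = 70.0 * 3.2408e-20        # Hubble constant [1/s] (assumed 70 km/s/Mpc)
OM, OL = 0.3, 0.7             # assumed flat LCDM density parameters
R = 0.27                      # return fraction (assumed, Chabrier-like IMF)

def sfrd(z):
    """Madau & Dickinson (2014)-style SFRD fit, in Msun/yr/Mpc^3."""
    return 0.015 * (1 + z)**2.7 / (1 + ((1 + z) / 2.9)**5.6)

def dt_dz(z):
    """|dt/dz| in years, for a flat LCDM cosmology."""
    hz = H0 * math.sqrt(OM * (1 + z)**3 + OL)   # H(z) [1/s]
    return 1.0 / ((1 + z) * hz) / 3.1557e7      # seconds -> years

def stellar_mass_density(z, z_max=10.0, n=2000):
    """rho_* [Msun/Mpc^3] formed between z_max and z (trapezoidal rule)."""
    dz = (z_max - z) / n
    zs = [z + i * dz for i in range(n + 1)]
    f = [sfrd(zi) * dt_dz(zi) for zi in zs]
    integral = dz * (sum(f) - 0.5 * (f[0] + f[-1]))
    return (1 - R) * integral
```

Comparing `stellar_mass_density(z)` against directly measured stellar mass densities at each redshift is the essence of the test; the paper additionally lets the recycling factor vary and fits the SFRD itself via MCMC rather than assuming a fixed form.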
### SALT long-slit spectroscopy of HE 0435-4312: fast change in the Mg II emission line shape
The MgII emission line is visible in the optical band for intermediate redshift quasars (0.4<z<1.6) and is thus an extremely important tool for measuring the black hole mass and understanding the structure of the Broad Line Region. We aim to determine the substructure and the variability of the MgII line in order to identify which part of the line comes from a medium in Keplerian motion. Using the SALT telescope we performed ten spectroscopic observations of the quasar HE 0435-4312 (z=1.2231) over a period of 3 years (Dec 23/24, 2012 to Dec 7/8, 2015). We find that the line is well modeled by two Lorentzian components, and the relative strength of these components varies with time. The line maximum is shifted in a time-dependent way from the position of the Fe II pseudo-continuum, although the effect is not very strong, and the line asymmetry varies in time. We also note very different local conditions in the formation regions of Mg II and Fe II. The timescale for the line shape variability is of the order of the light travel time to the emitting region, therefore the changes are most likely due to varying irradiation patterns, and the presence of the two components does not imply two distinct emission regions.
### Infalling clouds on to supermassive black hole binaries - II. Binary evolution and the final parsec problem
The formation of massive black hole binaries (MBHBs) is an unavoidable outcome of galaxy evolution via successive mergers. However, the mechanism that drives their orbital evolution from parsec separations down to the gravitational wave (GW) dominated regime is poorly understood and their final fate is still unclear. If such binaries are embedded in gas-rich and turbulent environments, as observed in remnants of galaxy mergers, the interaction with gas clumps (such as molecular clouds) may efficiently drive their orbital evolution. Using numerical simulations, we test this hypothesis by studying the dynamical evolution of an equal-mass, circular MBHB accreting infalling molecular clouds. We investigate different orbital configurations, modelling a total of 13 systems to explore different possible pericentre distances and relative inclinations of the cloud-binary encounter. We show that the evolution of the binary orbit is dominated by the exchange of angular momentum through gas accretion during the first stages of the interaction for all orbital configurations. Building on these results, we construct a simple model for evolving a MBHB interacting with a sequence of clouds, which are randomly drawn from reasonable populations with different levels of anisotropy in their angular momenta distributions. We show that the binary efficiently evolves down to the GW emission regime within a few hundred million years, overcoming the 'final parsec' problem regardless of the stellar distribution.
### The cosmic evolution of massive black holes in the Horizon-AGN simulation
We analyze the demographics of black holes (BHs) in the large-volume cosmological hydrodynamical simulation Horizon-AGN. This simulation statistically models how much gas is accreted onto BHs, traces the energy deposited into their environment and, consequently, the back-reaction of the ambient medium on BH growth. The synthetic BHs reproduce a variety of observational constraints such as the redshift evolution of the BH mass density and the mass function. Yet there seem to be too many BHs with mass ~1e7 Msun at high redshift, and too few BHs with similar mass at z=0 in intermediate-mass galaxies. Strong self-regulation via AGN feedback, weak supernova feedback, and unresolved internal processes are likely to be responsible for this, and for a tight BH-galaxy mass correlation. Starting at z~2, tidal stripping creates a small population of BHs over-massive with respect to the halo. The fraction of galaxies hosting a central BH or an AGN increases with stellar mass. The AGN fraction agrees better with multi-wavelength studies than single-wavelength ones, unless obscuration is taken into account. The most massive halos present BH multiplicity, with additional BHs gained by ongoing or past mergers. In some cases, both a central and an off-center AGN shine concurrently, producing a dual AGN. This dual AGN population dwindles with decreasing redshift, as found in observations. Specific accretion rate and Eddington ratio distributions are in good agreement with observational estimates. The BH population is dominated in turn by fast, slow, and very slow accretors, with transitions occurring at z=3 and z=2, respectively.
### Supermassive Black Holes with High Accretion Rates in Active Galactic Nuclei. VI. Velocity-resolved Reverberation Mapping of H$\beta$ Line
In the sixth of the series of papers reporting on a large reverberation mapping (RM) campaign of active galactic nuclei (AGNs) with high accretion rates, we present velocity-resolved time lags of the H$\beta$ emission line for nine objects observed in the campaign during 2012$-$2013. In order to correct for the line broadening caused by seeing and instruments before the velocity-resolved RM analysis, we adopt Richardson-Lucy deconvolution to reconstruct their H$\beta$ profiles. The validity and effectiveness of the deconvolution are verified by Monte Carlo simulations. Five of the nine objects show a clear dependence of time delay on velocity. Mrk 335 and Mrk 486 show signatures of gas inflow, whereas the clouds in the broad-line regions (BLRs) of Mrk 142 and MCG +06-26-012 tend to be radially outflowing. Mrk 1044 is consistent with virialized motions. The lags of the remaining four are not velocity-resolvable. The velocity-resolved RM of super-Eddington accreting massive black holes (SEAMBHs) shows a diversity of kinematics in their BLRs. Compared with AGNs with sub-Eddington accretion rates, we do not find significant differences in the BLR kinematics of SEAMBHs.
### The ATLAS-SPT Radio Survey of Cluster Galaxies
Using a high-performance computing cluster to mosaic 4,787 pointings, we have imaged the 100 sq. deg. South Pole Telescope (SPT) deep-field at 2.1 GHz using the Australia Telescope Compact Array to an rms of 80 $\mu$Jy and a resolution of 8". Our goal is to generate an independent sample of radio-selected galaxy clusters to study how the radio properties compare with cluster properties at other wavelengths, over a wide range of redshifts, in order to construct a timeline of their evolution out to $z \sim 1.3$. A preliminary analysis of the source catalogue suggests there is no spatial correlation between the clusters identified in the SPT-SZ catalogue and our wide-angle tail galaxies.
### Ca II triplet spectroscopy of RGB stars in NGC 6822: kinematics and metallicities
We present a detailed analysis of the chemistry and kinematics of red giants in the dwarf irregular galaxy NGC 6822. Spectroscopy at 8500 Angstroms was acquired for 72 red giant stars across two fields using FORS2 at the VLT. Line of sight extinction was individually estimated for each target star to accommodate the variable reddening across NGC 6822. The mean radial velocity was found to be v_helio = (52.8 +/- 2.2) km/s with dispersion rms = 24.1 km/s, in agreement with other studies. Ca II triplet equivalent widths were converted into [Fe/H] metallicities using a V magnitude proxy for surface gravity. The average metallicity was [Fe/H] = (-0.84 +/- 0.04) with dispersion rms = 0.31 dex and interquartile range 0.48. Our assignment of individual reddening values makes our analysis more sensitive to spatial variations in metallicity than previous studies. We divide our sample into metal-rich and metal-poor stars; the former are found to cluster towards small radii with the metal-poor stars more evenly distributed across the galaxy. The velocity dispersion of the metal-poor stars is higher than that of the metal-rich stars; combined with the age-metallicity relation this indicates that older populations have either been dynamically heated or were born in a less disc-like distribution. The low ratio (v_rot/v_rms) suggests that within the inner 10', NGC 6822's stars are dynamically decoupled from the HI gas, possibly in a thick disc or spheroid.
### Solo Dwarfs I: Survey introduction and first results for the Sagittarius Dwarf Irregular Galaxy
We introduce the Solitary Local Dwarfs Survey (Solo), a wide field photometric study targeting every isolated dwarf galaxy within 3 Mpc of the Milky Way. Solo is based on (u)gi multi-band imaging from CFHT/MegaCam for northern targets, and Magellan/Megacam for southern targets. All galaxies fainter than M_V = -18 situated beyond the nominal virial radius of the Milky Way and M31 (>300 kpc) are included in this volume-limited sample, for a total of 42 targets. In addition to reviewing the survey goals and strategy, we present results for the Sagittarius Dwarf Irregular Galaxy (Sag DIG), one of the most isolated, low mass galaxies, located at the edge of the Local Group. We analyze its resolved stellar populations and their spatial distributions. We provide updated estimates of its central surface brightness and integrated luminosity, and trace its surface brightness profile to a level fainter than 30 mag/arcsec^2. Sag DIG is well described by a highly elliptical (disk-like) system following a single component Sersic model. However, a low-level distortion is present at the outer edges of the galaxy that, were Sag DIG not so isolated, would likely be attributed to some kind of previous tidal interaction. Further, we find evidence of an extremely low level, extended distribution of stars beyond 5 arcmin (>1.5 kpc) that suggests Sag DIG may be embedded in a very low density stellar halo. We compare the stellar and HI structures of Sag DIG, and discuss results for this galaxy in relation to other isolated, dwarf irregular galaxies in the Local Group.
### Dispersion of Magnetic Fields in Molecular Clouds. IV - Analysis of Interferometry Data
We expand on the dispersion analysis of polarimetry maps toward applications to interferometry data. We show how the filtering of low-spatial frequencies can be accounted for within the idealized Gaussian turbulence model, initially introduced for single-dish data analysis, to recover reliable estimates for correlation lengths of magnetized turbulence, as well as magnetic field strengths (plane-of-the-sky component) using the Davis-Chandrasekhar-Fermi method. We apply our updated technique to TADPOL/CARMA data obtained on W3(OH), W3 Main, and DR21(OH). For W3(OH) our analysis yields a turbulence correlation length $\delta\simeq19$ mpc, a ratio of turbulent-to-total magnetic energy $\left\langle B_{\mathrm{t}}^{2}\right\rangle /\left\langle B^{2}\right\rangle \simeq0.58$, and a magnetic field strength $B_{0}\sim1.1\:\mathrm{mG}$; for W3 Main $\delta\simeq22$ mpc, $\left\langle B_{\mathrm{t}}^{2}\right\rangle /\left\langle B^{2}\right\rangle \simeq0.74$, and $B_{0}\sim0.7\:\mathrm{mG}$; while for DR21(OH) $\delta\simeq12$ mpc, $\left\langle B_{\mathrm{t}}^{2}\right\rangle /\left\langle B^{2}\right\rangle \simeq0.70$, and $B_{0}\sim1.2\:\mathrm{mG}$.
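The Davis-Chandrasekhar-Fermi estimate invoked above can be illustrated with a short numerical sketch. This uses the classical DCF relation, B_pos = sqrt(4*pi*rho) * sigma_v / sigma_theta (CGS units); the density, velocity dispersion, and polarization-angle dispersion below are illustrative assumptions, not values from the paper (whose analysis additionally corrects for interferometric filtering):

```python
import math

def dcf_field_strength(n_h2_cm3, sigma_v_cms, sigma_theta_rad, mu=2.8):
    """Plane-of-sky field strength [Gauss] from the classical
    Davis-Chandrasekhar-Fermi relation (CGS units):
        B_pos = sqrt(4*pi*rho) * sigma_v / sigma_theta
    mu = 2.8 is the assumed mean molecular weight per H2 molecule."""
    m_h = 1.6735575e-24           # hydrogen atom mass [g]
    rho = mu * m_h * n_h2_cm3     # mass density [g/cm^3]
    return math.sqrt(4.0 * math.pi * rho) * sigma_v_cms / sigma_theta_rad

# Illustrative dense-core numbers (assumed, not from the paper):
# n(H2) = 1e6 cm^-3, sigma_v = 1 km/s, angle dispersion = 10 degrees.
b_gauss = dcf_field_strength(1e6, 1e5, math.radians(10.0))
```

With these assumed inputs the estimate lands at the mG level, the same order of magnitude as the field strengths quoted for W3(OH), W3 Main, and DR21(OH).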
### The clustering amplitude of X-ray selected AGN at z=0.8: Evidence for a negative dependence on accretion luminosity
The northern tile of the wide-area and shallow XMM-XXL X-ray survey field is used to estimate the average dark matter halo mass of relatively luminous X-ray selected AGN [$\rm log\, L_X (\rm 2-10\,keV)= 43.6^{+0.4}_{-0.4}\,erg/s$] in the redshift interval $z=0.5-1.2$. Spectroscopic follow-up observations of X-ray sources in the XMM-XXL field by the Sloan telescope are combined with the VIPERS spectroscopic galaxy survey to determine the cross-correlation signal between X-ray selected AGN (total of 318) and galaxies (about 20,000). We model the large scales (2-25 Mpc) of the correlation function to infer a mean dark matter halo mass of $\log M / (M_{\odot} \, h^{-1}) = 12.50 ^{+0.22} _{-0.30}$ for the X-ray selected AGN sample. This measurement is about 0.5 dex lower than estimates in the literature of the mean dark matter halo masses of moderate luminosity X-ray AGN [$L_X (\rm 2-10\,keV)\approx 10^{42} - 10^{43}\,erg/s$] at similar redshifts. Our analysis also links the mean clustering properties of moderate luminosity AGN with those of powerful UV/optically selected QSOs, which are typically found in halos with masses a few times $10^{12}\,M_{\odot}$. There is therefore evidence for a negative luminosity dependence of the AGN clustering. This is consistent with suggestions that AGN have a broad dark matter halo mass distribution with a high mass tail that becomes sub-dominant at high accretion luminosities. We further show that our results are in qualitative agreement with semi-analytic models of galaxy and AGN evolution, which attribute the wide range of dark matter halo masses among the AGN population to different triggering mechanisms and/or black hole fueling modes.
### APOGEE strings: a fossil record of the gas kinematic structure
We compare APOGEE radial velocities (RVs) of young stars in the Orion A cloud with CO line gas emission and find a correlation between the two at large scales, in agreement with previous studies. However, at smaller scales we find evidence for the presence of substructure in the stellar velocity field. Using a Friends-of-Friends approach we identify 37 stellar groups with almost identical RVs. These groups are not randomly distributed but form elongated chains or strings of stars with five or more members with low velocity dispersion, across lengths of 1-1.5 pc. The similarity between the kinematic properties of the APOGEE strings and the internal velocity field of the chains of dense cores and fibers recently identified in the dense ISM is striking, and suggests that for most of the Orion A cloud, young stars retain a memory of the parental gas substructure in which they originated.
### The cosmic assembly of stellar haloes in massive Early-Type Galaxies
Using the exquisite depth of the Hubble Ultra Deep Field (HUDF12 programme) dataset, we explore the ongoing assembly of the outermost regions of the most massive galaxies (M_{stellar} > 5x10^{10} M_{Sun}) at z < 1. The outskirts of massive objects, particularly Early-Type Galaxies (ETGs), are expected to suffer a dramatic transformation across cosmic time due to continuous accretion of small galaxies. HUDF imaging allows us to study this process at intermediate redshifts in 6 massive galaxies, exploring the individual surface brightness profiles out to 25 effective radii. We find that 10-30% of the total stellar mass for the galaxies in our sample is contained within 10 < R < 50 kpc. These values are in close agreement with numerical simulations, and at least 2-3 times higher than those reported for late-type galaxies. The fraction of stellar mass stored in the outer envelopes/haloes of massive Early-Type Galaxies increases with decreasing redshift, being 28.7% at < z > = 0.1, 22.6% at < z > = 0.65 and 3.5% at < z > = 2. The fraction of mass in diffuse features linked with ongoing minor merger events is > 1-3%, very similar to predictions based on observed close pair counts. Therefore, our results suggest that the size and mass growth of the most massive galaxies have been solely driven by minor and major merging from z = 1 to today.
### The very wide-field $gzK$ galaxy survey -- II. The relationship between star-forming galaxies at $z \sim 2$ and their host haloes based upon HOD modelling
We present the results of a halo occupation distribution (HOD) analysis of star-forming galaxies at $z \sim 2$. We obtained high-quality angular correlation functions based on a large sgzK sample, which enabled us to carry out the HOD analysis. The mean halo mass and the HOD mass parameters are found to increase monotonically with increasing $K$-band magnitude, suggesting that more luminous galaxies reside in more massive dark haloes. The luminosity dependence of the HOD mass parameters was found to be the same as in the local Universe; however, the masses were larger than in the local Universe over all ranges of magnitude. This implies that galaxies at $z \sim 2$ tend to form in more massive dark haloes than in the local Universe, a process known as downsizing. By analysing the dark halo mass evolution using the extended Press--Schechter formalism and the number evolution of satellite galaxies in a dark halo, we find that faint Lyman break galaxies at $z \sim 4$ could evolve into the faintest sgzKs $(22.0 < K \leq 23.0)$ at $z \sim 2$ and into the Milky-Way-like galaxies or elliptical galaxies in the local Universe, whereas the most luminous sgzKs $(18.0 \leq K \leq 21.0)$ could evolve into the most massive systems in the local Universe. The stellar-to-halo mass ratio (SHMR) of the sgzKs was found to be consistent with the prediction of the model, except that the SHMR of the faintest sgzKs was smaller than the prediction at $z \sim 2$. This discrepancy may be attributed to the fact that our samples are confined to star-forming galaxies.
### The very wide-field $gzK$ galaxy survey -- I. Details of the clustering properties of star-forming galaxies at $z \sim 2$
We present the results of clustering analysis on $z \sim 2$ star-forming galaxies. By combining our data with data from publicly available archives, we collect $g$-, $z$-, and $K$-band imaging data over 5.2 deg$^{2}$, which represents the largest area BzK/gzK survey. We apply colour corrections to translate our filter-set to those used in the original BzK selection for the gzK selection. Because of the wide survey area, we obtain a sample of 41,112 star-forming gzK galaxies at $z \sim 2$ (sgzKs) down to $K_{\rm AB} < 23.0$, and determine high-quality two-point angular correlation functions (ACFs). Our ACFs show an apparent excess from power-law behaviour at small angular scales $(\theta \lesssim 0.01^{\circ})$, which corresponds to the virial radius of a dark halo at $z \sim 2$ with a mass of $\sim 10^{13}\,M_{\odot}$. We find that the correlation lengths are consistent with previous estimates over the full magnitude range; however, our results are evaluated with a smaller margin of error than that in previous studies. The large amount of data enables us to determine ACFs differentially depending on the luminosity of the subset of the data. The mean halo mass of faint sgzKs $(22.0 < K \leq 23.0)$ was found to be $M_{\rm h} = (1.32^{+0.09}_{-0.12}) \times 10^{12}\,h^{-1}\,M_{\odot}$, whereas bright sgzKs $(18.0 \leq K \leq 21.0)$ were found to reside in dark haloes with a mass of $M_{\rm h} = (3.26^{+1.23}_{-1.02}) \times 10^{13}\,h^{-1}\,M_{\odot}$.
### The Launching of Cold Clouds by Galaxy Outflows II: The Role of Thermal Conduction
We explore the impact of electron thermal conduction on the evolution of radiatively-cooled cold clouds embedded in flows of hot and fast material, as occur in outflowing galaxies. Performing a parameter study of three-dimensional adaptive mesh refinement hydrodynamical simulations, we show that electron thermal conduction causes cold clouds to evaporate, but it can also extend their lifetimes by compressing them into dense filaments. We distinguish between low column-density clouds, which are disrupted on very short times, and high column-density clouds with much longer disruption times that are set by a balance between impinging thermal energy and evaporation. We provide fits to the cloud lifetimes and velocities that can be used in galaxy-scale simulations of outflows, in which the evolution of individual clouds cannot be modeled with the required resolution. Moreover, we show that the clouds are only accelerated to a small fraction of the ambient velocity because compression by evaporation causes the clouds to present a small cross-section to the ambient flow. This means that either magnetic fields must suppress thermal conduction, or that the cold clouds observed in galaxy outflows are not formed of cold material carried out from the galaxy.
### The VIMOS Ultra Deep Survey First Data Release: spectra and spectroscopic redshifts of 698 objects up to z~6 in CANDELS
This paper describes the first data release (DR1) of the VIMOS Ultra Deep Survey (VUDS). The DR1 includes all low-resolution spectroscopic data obtained in 276.9 arcmin^2 of the CANDELS-COSMOS and CANDELS-ECDFS survey areas, including accurate spectroscopic redshifts z_spec and individual spectra obtained with VIMOS on the ESO-VLT. A total of 698 objects have a measured redshift, comprising 677 galaxies, two type-I AGN, and 19 contaminating stars. The targets of the spectroscopic survey are selected primarily on the basis of their photometric redshifts to ensure a broad population coverage. About 500 galaxies have z_spec>2, 48 with z_spec>4, and the highest reliable redshifts reach beyond z_spec=6. This dataset approximately doubles the number of galaxies with spectroscopic redshifts at z>3 in these fields. We discuss the general properties of the sample in terms of the spectroscopic redshift distribution, the distribution of Lyman-alpha equivalent widths, and physical properties including stellar masses M_star and star formation rates (SFR) derived from spectral energy distribution fitting with the knowledge of z_spec. We highlight the properties of the most massive star-forming galaxies, noting the large range in spectral properties, with Lyman-alpha in emission or in absorption, and in imaging properties with compact, multi-component or pair morphologies. We present the catalogue database and data products. All data are publicly available and can be retrieved from a dedicated query-based database available at http://cesam.lam.fr/vuds.
### The extended epoch of galaxy formation: age dating of ~3600 galaxies with 2<z<6.5 in the VIMOS Ultra-Deep Survey
We aim at improving constraints on the epoch of galaxy formation by measuring the ages of 3597 galaxies with spectroscopic redshifts 2<z<6.5 in the VIMOS Ultra Deep Survey (VUDS). We derive ages and other physical parameters from the simultaneous fitting with the GOSSIP+ software of observed UV rest-frame spectra and photometric data from the u-band up to 4.5 microns using composite stellar population models. We conclude from extensive simulations that at z>2 the joint analysis of spectroscopy and photometry, combined with restricted age possibilities when taking into account the age of the Universe, substantially reduces systematic uncertainties and degeneracies in the age derivation. We find galaxy ages ranging from very young, with a few tens of millions of years, to substantially evolved, with ages up to ~1.5-2 Gyr. The formation redshifts z_f derived from the measured ages indicate that galaxies may have started forming stars as early as z_f~15. We produce the formation redshift function (FzF), the number of galaxies per unit volume formed at a redshift z_f, and compare the FzF in increasing redshift bins, finding a remarkably constant 'universal' FzF. The FzF is parametrized with $(1+z)^\zeta$, with $\zeta\simeq0.58\pm0.06$, indicating a smooth 2 dex increase from z~15 to z~2. Remarkably, this observed increase is of the same order as the observed rise in the star formation rate density (SFRD). The ratio of the SFRD to the FzF gives an average SFR per galaxy of ~7-17 Msun/yr at z~4-6, in agreement with the measured SFR for galaxies at these redshifts. From the smooth rise in the FzF we infer that the period of galaxy formation extends from the highest redshifts that we can probe at z~15 down to z~2. This indicates that galaxy formation is a continuous process over cosmic time, with a higher number of galaxies forming at the peak in SFRD at z~2 than at earlier epochs. (Abridged)
### Size evolution of star-forming galaxies with $2<z<4.5$ in the VIMOS Ultra-Deep Survey
We measure galaxy sizes on a sample of $\sim1200$ galaxies with confirmed spectroscopic redshifts $2 \leq z_{spec} \leq 4.5$ in the VIMOS Ultra Deep Survey (VUDS), representative of star-forming galaxies with $i_\mathrm{AB} \leq 25$. We first derive galaxy sizes applying a classical parametric profile fitting method using GALFIT. We then measure the total pixel area covered by a galaxy above a given surface brightness threshold, which overcomes the difficulty of measuring sizes of galaxies with irregular shapes. We then compare the results obtained for the equivalent circularized radius enclosing 100\% of the measured galaxy light $r_T^{100}$ to those obtained with the effective radius $r_{e,\mathrm{circ}}$ measured with GALFIT. We find that the sizes of galaxies computed with our non-parametric approach span a large range but remain roughly constant on average with a median value $r_T^{100}\sim2.2$ kpc for galaxies with $2<z<4.5$. This is in stark contrast with the strong downward evolution of $r_e$ with increasing redshift, down to sizes of $<1$ kpc at $z\sim4.5$. We analyze the difference and find that parametric fitting of complex, asymmetric, multi-component galaxies is severely underestimating their sizes. By comparing $r_T^{100}$ with physical parameters obtained through SED fitting we find that the star-forming galaxies that are the largest at any redshift are, on average, more massive and more star-forming. We discover that galaxies present more concentrated light profiles as we move towards higher redshifts. We interpret these results as the signature of several, possibly different, evolutionary paths of galaxies in their early stages of assembly, including major and minor merging or star-formation in multiple bright regions. (abridged)
2016-02-14 18:52:54
https://scite.ai/reports/solutions-converging-to-zero-of-kVx8v3
##### Cited by 6 publications (1 citation statement)
##### References 21 publications
“…More precisely, using some conditions on the matrices $A(t)$ and $B(t)$ and the function $f$, he studied the existence and the asymptotic behavior of the solutions of (). For some differential, difference, and related equations close to Equation () and their applications, see, for example, previous studies 25–32 and the references cited therein. For some partial difference equations, see, for example, previous studies 33–39 and the references therein.…”
Section: Introduction
confidence: 99%
2023-04-02 06:18:21
https://math.stackexchange.com/questions/670355/prove-if-x-ge-1-then-1xn-ge-1nx-every-n-ge-1/670381
# Prove: if $x \ge -1$ then $(1+x)^n \ge 1+nx$ for every $n \ge 1$
Use mathematical induction to prove this. Here is my answer, but I am stuck at a certain point.
Base case: $n=1$: $$(1+x)^1 \ge 1+x$$ True.
Induction step: for $n=k$, assume $$(1+x)^k \ge 1+kx$$ For $n=k+1$ we want: $$(1+x)^{k+1} \ge 1+(k+1)x$$ $$(1+x)^k \cdot (1+x) \ge 1+kx+x$$
Stuck!!!
• Please, fix your post. You didn't write what you want to prove. Also, you can use math-mode to write mathematics in your post, like $(1+x)^k$. – frabala Feb 10 '14 at 2:05
• how to access math mode, how can I write, I am new – hacikho Feb 10 '14 at 2:06
• Here: math.stackexchange.com/editing-help#latex . You use the dollar signs. – frabala Feb 10 '14 at 2:09
• thank for showing me how to edit my question – hacikho Feb 10 '14 at 2:25
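For reference, one standard way to finish the inductive step (it relies on the hypothesis $x \ge -1$, which gives $1+x \ge 0$, so multiplying both sides of the induction hypothesis by $(1+x)$ preserves the inequality):

```latex
(1+x)^{k+1} = (1+x)^k (1+x)
            \ge (1+kx)(1+x)   % induction hypothesis; valid since 1+x \ge 0
            = 1 + (k+1)x + kx^2
            \ge 1 + (k+1)x    % since kx^2 \ge 0
```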
2020-08-06 01:45:20
https://www.codeguru.co.in/2021/06/minimum-coin-change-problem-using.html
# Minimum coin change problem using recursion in JavaScript
You are given an integer array coins representing coins of different denominations and an integer amount representing a total amount of money.
Return the fewest number of coins that you need to make up that amount. If that amount of money cannot be made up by any combination of the coins, return -1.
You may assume that you have an infinite number of each kind of coin.
Example 1: Input: coins = [1,2], amount = 4. Output: 2. Explanation: 4 = 2 + 2, or 4 = 1 + 1 + 2.
## The recursion formula for coin change
$coinchange(j) = \begin{cases} \infty, & \text{if } j<0 \\ 0, & \text{if } j=0 \\ 1+\min_{1 \le i \le n} coinchange(j-a_i), & \text{if } j>0 \end{cases}$
where $a_1, \dots, a_n$ are the coin denominations.
const coins = [1, 2] // denominations from Example 1

function coin_change(amount) {
    // base case: the amount has been made up exactly
    if (amount == 0) return 0
    // overshoot: return Infinity so this branch never wins the min
    if (amount < 0) return Infinity
    let ans = Infinity
    for (const coin of coins)
        ans = Math.min(
            ans,
            1 + coin_change(amount - coin)
        )
    return ans
}

// Note: per the problem statement, a final result of Infinity maps to -1.
## Recursion Tree
This article only shows how to write the recursive program. I know this is not an optimized way to solve the coin change problem.
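For contrast, here is a memoized version of the same recursion, sketched in Python rather than the article's JavaScript (the name `min_coins` is illustrative, not from the article). Caching the answer for each remaining amount collapses the exponential recursion tree into one subproblem per amount:

```python
from functools import lru_cache

def min_coins(coins, amount):
    """Fewest coins from `coins` summing to `amount`, or -1 if impossible."""
    @lru_cache(maxsize=None)
    def go(remaining):
        if remaining == 0:
            return 0             # amount fully made up
        if remaining < 0:
            return float("inf")  # overshoot: this branch can never win the min
        return 1 + min(go(remaining - c) for c in coins)

    best = go(amount)
    return -1 if best == float("inf") else best

print(min_coins([1, 2], 4))  # 2  (4 = 2 + 2, as in Example 1)
print(min_coins([2], 3))     # -1 (an odd amount cannot be made from 2s)
```

The structure mirrors the recursion formula above; only the cache is new.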
2022-05-26 01:14:11
https://rpg.stackexchange.com/questions/123333/when-multiclassing-a-cleric-wizard-can-i-prepare-wizard-spells-from-the-cleric?noredirect=1
# When multiclassing a Cleric/Wizard, can I prepare wizard spells from the cleric list that the wiz could learn but didn't? [duplicate]
As a concrete example, let's say I have a multiclassed Cleric and Wizard (Theurgy), meaning that a common spell they could both have is Cure Wounds, but let's say that I didn't have the wizard learn the Cure Wounds spell. Could the character have the "prepared wizard spells" include Cure Wounds, which is a spell that as a Theurgy Wizard he could know, but doesn't?
## marked as duplicate by V2Blast♦ [dnd-5e] Jul 3 at 5:37
• @KorvinStarmast yes, although I had thought it was in Xanathars (I just double checked and it isn't) – Dan K May 28 '18 at 19:05
• I put the link to the UA into your question, and added the tag. – KorvinStarmast May 28 '18 at 19:20
• Okay, but I just picked Theurgy domain to make it a simpler example. There are plenty of other crossover spells between a wizard and a cleric – Dan K May 29 '18 at 12:50
## No
You determine what spells you know and can prepare for each class individually, as if you were a single-classed member of that class. If you are a ranger 4/wizard 3, for example, you know three 1st-level ranger spells based on your levels in the ranger class. As 3rd-level wizard, you know three wizard cantrips, and your spellbook contains ten wizard spells, two of which (the two you gained when you reached 3rd level as a wizard) can be 2nd-level spells. If your Intelligence is 16, you can prepare six wizard spells from your spellbook.
Each spell you know and prepare is associated with one of your classes, and you use the spellcasting ability of that class when you cast the spell. Similarly, a spellcasting focus, such as a holy symbol, can be used only for the spells from the class associated with that focus.
In the example above, if a spell is not in your spellbook you can't prepare it as a wizard spell. See the ability Arcane Initiate; it is designed specifically for adding cleric spells to your spellbook for this reason.
# Short Answer: As described, you can’t
### Why not?
In 5th edition there are three ways that a spellcaster class determines what spells they are able to cast during an adventuring day:
1. Known Spells (Arcane Tricksters, Bards, Eldritch Knights, Rangers, Sorcerers & Warlocks)
2. Prepare Spells from Class Spell List (Clerics, Druids & Paladins)
3. Prepare Spells from Spellbook (Wizards)
Multi-classing makes it slightly more complicated as the spells each class in the multiclass has available are determined separately.
The procedure you go through is thus:
1. Build the spells available to the character for each class in the multiclass separately.
2. You do this by taking the levels in the individual class and comparing them to the vanilla class table. So in the case of a Cleric 5/Wizard 8:
• Cleric 5: prepare a number of spells equal to your wisdom modifier + cleric level = wisdom modifier + 5. These spells can be any combination of the 1st, 2nd & 3rd level cleric spells available to your character. This will generally be determined by the cleric class spell list (Pg 207 of the PHB), but some cleric subclasses also give the cleric access to additional spells not on the list (and those spells are considered cleric spells)
• Wizard 8: your character can have as many spells as they have discovered in their spellbook (including spells at a higher level than they can cast). All of the spells in the spellbook must be wizard spells (for the purposes of the Theurgy domain, only those cleric spells added via the level-up mechanic in the Wizard class are counted as wizard spells). The Wizard 8 portion of the multiclass can prepare a number of spells equal to their Intelligence modifier + their class level = Intelligence modifier + 8. These prepared spells can be of levels 1 - 4.
• As you have described the situation, Cure Wounds is not a wizard spell for the purposes of this preparation step. It would also not be present in the wizard's spellbook
3. Determine your spell slots by calculating your “spellcaster level” (the procedure for this is given in the Multiclassing section of the Players Handbook [PHB]). For the example we are using this is done by:
• adding your Cleric level and your Wizard levels together. In our example this would be 5 + 8 = 13.
4. Use this total (13) to read off your per spell level spell slots from the multiclass spellcaster table on page 165 of the players handbook.
To cast spells you then choose a spell from your list of spells your character has for the day (that we built in step 2), and combine it with a slot appropriate to the level you wish to cast it at (so you can use a 7th level spell slot to cast Cure Wounds if you had prepared it (using your Cleric levels) for the day).
The only mechanism the Wizard portion of the multiclass has for gaining access to Cure Wounds (if they don't take it when they choose the Theurgy arcane tradition) is to add it to their spellbook on a level up in the wizard class.
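The bookkeeping above can be sketched as a tiny helper (hypothetical code, not an official rules implementation; it only covers the Cleric/Wizard case, where both classes are full casters and their levels simply add for the slot table):

```python
def multiclass_caster(cleric_level, wizard_level, wis_mod, int_mod):
    """Caster level and prepared-spell counts for a Cleric/Wizard multiclass.

    Each class prepares its spells independently, as described in step 2;
    the combined caster level indexes the multiclass spell slot table.
    """
    caster_level = cleric_level + wizard_level  # full-caster levels add directly
    prepared = {
        "cleric": max(1, wis_mod + cleric_level),   # from the cleric spell list
        "wizard": max(1, int_mod + wizard_level),   # from the wizard's spellbook
    }
    return caster_level, prepared

# The Cleric 5 / Wizard 8 example, assuming WIS +3 and INT +4:
print(multiclass_caster(5, 8, 3, 4))
```

Reading the per-level slots off the table on page 165 of the PHB is left out, since that is a straight lookup.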
2019-10-22 10:24:20
https://love2d.org/forums/viewtopic.php?t=8351&p=228170
## Gspöt - retained GUI lib
pgimeno
Party member
Posts: 2132
Joined: Sun Oct 18, 2015 2:58 pm
### Re: Gspöt - retained GUI lib
Jack Dandy wrote:
Fri Jun 07, 2019 3:13 pm
Quick question: Is there a way to change an IMAGE'S color values?
If you mean tint, you can override the Gspot.util.drawimg method. The default is:
Code: Select all
drawimg = function(this, pos)
local r, g, b, a = love.graphics.getColor()
setColor(255, 255, 255, 255)
love.graphics.draw(this.img, (pos.x + (pos.w / 2)) - (this.img:getWidth()) / 2, (pos.y + (pos.h / 2)) - (this.img:getHeight() / 2))
love.graphics.setColor(r, g, b, a)
end,
setColor is the internal version-agnostic function, but unless you need compatibility with 0.9.1+, you don't need it; you can use love.graphics.setColor instead.
Santvarg
Prole
Posts: 7
Joined: Sat May 18, 2019 12:42 pm
### Re: Gspöt - retained GUI lib
Been a while, but thanks. That worked out after some tweaking to keep it from doing the coordinate translation for the GUI when it shouldn't, and this seems to be working fine now. Menu buttons work, and the hidden buttons on NPCs work properly and are clickable even after moving the player around. Thanks a lot for this; this problem had brought me to a screeching halt
Code: Select all
function gui.getmouse(this)
if gamestate == 1 then
camera:set()
local x, y = love.graphics.inverseTransformPoint(love.mouse.getPosition())
camera:unset()
return x, y
else
local x, y = love.graphics.inverseTransformPoint(love.mouse.getPosition())
return x, y
end
end
love.mousepressed = function(x, y, button)
if gamestate==1 then
local x, y = gui.getmouse()
gui:mousepress(x, y, button)
end
gui:mousepress(x, y, button)
end
function love.draw()
...
local x, y = love.graphics.inverseTransformPoint(love.mouse.getPosition())
gui.getmouse(x, y)
...
end
Santvarg
Prole
Posts: 7
Joined: Sat May 18, 2019 12:42 pm
### Re: Gspöt - retained GUI lib
Now having a different problem related to this library:
Code: Select all
function npcspawn()
for i=1, #curmapnpc[1] do
if #npcs[i] < curmapnpc[1][i][2] then --not important for this case
for o=#npcs[i]+1, curmapnpc[1][i][2] do
...
npcbutton.i = {}
npcbutton.i.o = gui:hidden("test", {x = npcposx, y = npcs[i][o][3]-(npcimgx[i]/2), w = npcimgx[i], h = npcimgx[i]})
function npcbutton.i.o:click(this, x, y)
playertargetid=i
playertargetnpc=o
end
function npcbutton.i.o:enter(this)
love.mouse.setCursor(pointcursor)
end
function npcbutton.i.o:leave(this)
love.mouse.setCursor(defaultcursor)
end
end
end
end
end
gui:rem(npcbutton.playertargetid.playertargetnpc)
This code causes an error
Code: Select all
main.lua:525: attempt to index a nil value
Despite that, I remove the npcs from their own table using the same exact global variables
any variation in format ive tried doesnt work, tried using npcbutton[o]:click(this, x, y) and using table.remove() but some funky errors pop up on the first part
everything else works, player destroying npcs removes them from their table, they go poof and respawn function takes over some time later and spawns a new one, but i can never get the buttons to go poof too
pgimeno
Party member
Posts: 2132
Joined: Sun Oct 18, 2015 2:58 pm
### Re: Gspöt - retained GUI lib
Santvarg wrote:
Wed Jul 10, 2019 9:11 am
Now having a different problem related to this library:
Code: Select all
function npcspawn()
for i=1, #curmapnpc[1] do
if #npcs[i] < curmapnpc[1][i][2] then --not important for this case
for o=#npcs[i]+1, curmapnpc[1][i][2] do
...
npcbutton.i = {}
npcbutton.i.o = gui:hidden("test", {x = npcposx, y = npcs[i][o][3]-(npcimgx[i]/2), w = npcimgx[i], h = npcimgx[i]})
function npcbutton.i.o:click(this, x, y)
playertargetid=i
playertargetnpc=o
end
function npcbutton.i.o:enter(this)
love.mouse.setCursor(pointcursor)
end
function npcbutton.i.o:leave(this)
love.mouse.setCursor(defaultcursor)
end
end
end
end
end
gui:rem(npcbutton.playertargetid.playertargetnpc)
This code causes an error
Code: Select all
main.lua:525: attempt to index a nil value
There's too little information here to get an idea of what you're trying to do or where the problem is. For example, I don't know what line 525 is. Also, I don't know if 'npcbutton' is defined as a local or not, and what value it receives.
What seems apparent is that this is not a problem in Gspot. npcbutton.i is equivalent to npcbutton["i"], which uses the string "i" as key, not the value of the variable i. You seem to expect it to behave as if it was npcbutton[i].
Furthermore, the parameter list for the functions defined in the snippet you have pasted uses colon syntax, and additionally includes a 'this' parameter. That should not matter in this case, because there are no more parameters and you're not using it anyway, but using that construction could cause problems in some other part of the code.
Santvarg wrote:
Wed Jul 10, 2019 9:11 am
Despite that, I remove the npcs from their own table using the same exact global variables
any variation in format ive tried doesnt work, tried using npcbutton[i][o]:click(this, x, y) and using table.remove() but some funky errors pop up on the first part
If you want i and o to be indices into the npcbutton table, then that's the right syntax, so don't expect that changing things at random will solve your problem. If you get "funky" errors that way, then these should be addressed. In that little snippet, you have again a problem with the parameter list, and in this case it does have potential to cause problems. The parameter list of the function is <table>, <x>, <y>. If you use colon syntax, an implicit 'self' parameter is inserted, so in your case, npcbutton[i][o]:click(this, x, y) is equivalent to npcbutton[i][o].click(self, this, x, y) which has an extra parameter that should not be there.
Santvarg
Prole
Posts: 7
Joined: Sat May 18, 2019 12:42 pm
### Re: Gspöt - retained GUI lib
line 525 is the gui:rem line at the bottom. As for npcbutton = {}: my bad, I forgot it was in the load function and not there. This is the only function that addresses it otherwise
I thought
Code: Select all
[i][o]
was the same as .i.o , from my errors, and what you say, its pretty obvious they're not, woops lol
Is there another way to set functions for each instance of button other than using a colon ? Im just using what ive learned from the documentation. using:
Code: Select all
function npcbutton[i][o]:click(x, y)
, gives me the error:
syntax error; ( expected near [
on that line, same with a comma. Cant make sense of this
before, i tried setting it as
Code: Select all
npcbutton[i][o]:click = function(x, y)
this caused the funky errors i was talking about. Using a comma instead will run, but crashes with the error:
Gspot.lua:380: attempt to index local 'element' (a nil value)
with a traceback to this line:
Code: Select all
gui:rem(npcbutton[playertargetid][playertargetnpc])
if the destroyed npc(and therefore element) isn't the last in the table. note: the playertargetid and playertargetnpc are the i and o values for that specific npc, respectively
What im trynna do here is, as every npc is spawned, they spawn as
Code: Select all
NPCS[i][o]
, where i is the type of npc and o is the number of npc.
when they're killed, they are deleted with table.remove, which moves all other entries in that npc type down to fill the empty entry in the table. This works as intended as far as i can tell, any destroyed npcs are properly removed and the new ones spawn in the now vacant places at the end of the table
but when each npc is spawned, a hidden button is spawned with the same i and o values, but under npcbutton. a specific button with npc specific information, ie the position of the button is set as the same values as the npc's coordinates at creation (and properly edited as needed later)
what i wanna do is the same as the NPCS table, remove the button that was 'attached' to that npc, while keeping the other buttons in the corresponding positions in their table. Such that npc [1][2] has a button at npcbutton[1][2], and if that npc becomes npc[1][1] due to 1st npc death, the button also moves to npcbutton[1][1] after the original [1][1] button is removed from the table. This way the references for npc and button are interchangable between the two for purposes of selection
sorry for all the code blocks, kept setting stuff as italic
pgimeno
Party member
Posts: 2132
Joined: Sun Oct 18, 2015 2:58 pm
### Re: Gspöt - retained GUI lib
Santvarg wrote:
Thu Jul 11, 2019 1:16 am
Is there another way to set functions for each instance of button other than using a colon ?
For normal functions you use dots, not colons. You use a colon just before the last element when you want to call a method in an instance of an object, because that passes the instance as an implicit parameter. In other words, if 'a' is an object (i.e. a table):
Code: Select all
function a:b(x, y)
end
-- is the same as:
function a.b(self, x, y)
end
Colon syntax at the time of calling a function, passes whatever is before the colon to the function.
Take a look at this chapter of the PIL, it explains the difference between dot and colon syntax in the context of OOP:
https://www.lua.org/pil/16.html
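The same implicit-receiver idea exists in most object systems. As a rough analogy in Python (Python makes the receiver explicit as `self`, which is exactly the parameter Lua's colon syntax inserts for you):

```python
class Button:
    def click(self, x, y):
        # `self` plays the role of the implicit parameter that Lua's
        # colon syntax (obj:click(x, y)) passes behind the scenes
        return (self, x, y)

b = Button()
# b.click(1, 2) is sugar for Button.click(b, 1, 2),
# just as obj:click(1, 2) is sugar for obj.click(obj, 1, 2) in Lua
assert b.click(1, 2) == Button.click(b, 1, 2)
```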
Santvarg wrote:
Thu Jul 11, 2019 1:16 am
Im just using what ive learned from the documentation. using:
Code: Select all
function npcbutton[i][o]:click(x, y)
, gives me the error:
syntax error; ( expected near [
on that line, same with a comma.
That's expected, yeah. You can't add sub-indices in function declarations, i.e. you can't do this:
Code: Select all
function x[1]() ...
But as you have noted, you can do
Code: Select all
x[1] = function () ...
Santvarg wrote:
Thu Jul 11, 2019 1:16 am
Cant make sense of this
before, i tried setting it as
Code: Select all
npcbutton[i][o]:click = function(x, y)
this caused the funky errors i was talking about. Using a comma instead will run, but crashes with the error:
Gspot.lua:380: attempt to index local 'element' (a nil value)
with a traceback to this line:
Code: Select all
gui:rem(npcbutton[playertargetid][playertargetnpc])
I don't think it will let you to place a colon in the assignment statement. Anyway, try this instead:
Code: Select all
npcbutton[i][o].click = function(self, x, y)
Santvarg wrote:
Thu Jul 11, 2019 1:16 am
sorry for he lots of code blocks, kept setting stuff as italic
No need to be sorry, it's quite readable.
Santvarg
Prole
Posts: 7
Joined: Sat May 18, 2019 12:42 pm
### Re: Gspöt - retained GUI lib
Thanks for the quick replies
using
Code: Select all
npcbutton[i][o].click = function(self, x, y)
(wasnt using self before,oversight)
returns the same error:
Code: Select all
Gspot.lua:380: attempt to index local 'element' (a nil value)
--with a traceback to this line:
gui:rem(npcbutton[playertargetid][playertargetnpc])
when it's not the last npc/button. confused because none of my elements are set as locals
edit: this error happens immediately after an npc is destroyed, other than the 5th/last one; when the game tries to remove the gui element of that npc, and before the npc is removed
pgimeno
Party member
Posts: 2132
Joined: Sun Oct 18, 2015 2:58 pm
### Re: Gspöt - retained GUI lib
Santvarg wrote:
Thu Jul 11, 2019 1:57 am
Thanks for the quick replies
using
Code: Select all
npcbutton[i][o].click = function(self, x, y)
(wasnt using self before,oversight)
returns the same error:
Code: Select all
Gspot.lua:380: attempt to index local 'element' (a nil value)
--with a traceback to this line:
gui:rem(npcbutton[playertargetid][playertargetnpc])
Line 380 just tries to use the passed element parameter, so it looks like this means you're passing nil to it. Try printing npcbutton[playertargetid][playertargetnpc] right before calling gui:rem. If you get nil, you'll need to debug that.
Santvarg
Prole
Posts: 7
Joined: Sat May 18, 2019 12:42 pm
### Re: Gspöt - retained GUI lib
You're right, while playertargetid and playertargetnpc have values
npcbutton[playertargetid][playertargetnpc]
is nil
Means something's up with npcbutton's definitions
- it was the
Code: Select all
npcbutton[i] = {}
line, it was setting that every time the 'o' for loop iterated(thats alot), big bad
doesnt crash anymore, no matter which npc is destroyed
I think the remaining problem is not that much to do about the gui
every npc after the one destroyed, when clicked, addresses the new npc in that slot
ie: destroy npc 3, click the new npc 3(was npc 4), but it selects 4 instead(which was 5). clicking npc 4 (used to be 5) selects the new npc, if spawned. The new npc(5) selects itself as intended
Got a feeling its the click function not being updated, working on it now
thanks for the help!
edit:
thought i fixed it with
Code: Select all
if playertarget[4]<=0 then --if hp=0
gui:rem(npcbutton[playertargetid][playertargetnpc]) --removes button
table.remove(npcs[playertargetid], playertargetnpc) --deletes npc and moves table
for i=playertargetnpc, #npcs[playertargetid] do -- new part
npcbutton[playertargetid][i+1].click = function(self, x, y) -- redefines the click function of each button
playertarget=npcs[playertargetid][i]
playertargetnpc=i
playert=1
end
end
playercombat=0
playertarget=nil
player.fangle = player.angle
--do something spectacular, like blow up graphics
end
the problem was the click function wasnt being updated, a fault of my own, so the function:
starts at the position on the table where the npc used to be, and stops at the map's max number for that type of npc
changes the click function of the NEXT npc's button, to reference the npc that is now(after table.remove) in the spot the destroyed npc was in. and since the loop starts at the point in the table where changes occur, i can just use i to redefine all the things
zenith
Prole
Posts: 13
Joined: Sat Oct 12, 2013 5:44 pm
### Re: Gspöt - retained GUI lib
Hey! How do I update the label text on a button click? Something like this:
function love.load()
    number = 0
    text = gui:text(number)
    button = gui:button('Click me')
    function button.click(this, x, y)
        number = number + 1
    end
end
ping pong
https://www.aimsciences.org/article/doi/10.3934/dcds.2004.10.941
# American Institute of Mathematical Sciences
October 2004, 10(4): 941-964. doi: 10.3934/dcds.2004.10.941
## Evans function and blow-up methods in critical eigenvalue problems
1 Department of Mathematics, The Ohio State University, Columbus, OH 43210 2 Department of Mathematics, University of Minnesota, Minneapolis, MN 55455, United States
Received November 2002 Revised September 2003 Published March 2004
Contact defects are one of several types of defects that arise generically in oscillatory media modelled by reaction-diffusion systems. An interesting property of these defects is that the asymptotic spatial wavenumber is approached only with algebraic order $O(1/x)$ (the associated phase diverges logarithmically). The essential spectrum of the PDE linearization about a contact defect always has a branch point at the origin. We show that the Evans function can be extended across this branch point and discuss the smoothness properties of the extension. The construction utilizes blow-up techniques and is quite general in nature. We also comment on known relations between roots of the Evans function and the temporal asymptotics of Green's functions, and discuss applications to algebraically decaying solitons.
Citation: Björn Sandstede, Arnd Scheel. Evans function and blow-up methods in critical eigenvalue problems. Discrete & Continuous Dynamical Systems - A, 2004, 10 (4) : 941-964. doi: 10.3934/dcds.2004.10.941
https://www.gamedev.net/forums/topic/606619-antialiasing-without-multisampling/
# OpenGL Antialiasing without multisampling?
## Recommended Posts
Is this even possible? Or could I remove the aliasing some other way?
I know OpenGL had an option to enable antialiasing with GL_POLYGON_SMOOTH back in the day,
but it seems that nowadays most graphics cards only do antialiasing with the multisample option.
And in my case the graphics card doesn't support multisampling, so what else could one do to remove
aliasing artifacts, also [u]without[/u] the use of shaders?
I've attached an image of some cubes joined together. They seem to give some aliasing; that is, the edges of the
polygons aren't placed evenly together when they render. Is there a way to fix this?
[edit: I'm looking into the depth bias]
##### Share on other sites
Without the use of any shaders this could get quite tricky.
What kind of card are you using exactly, if it doesn't support multisampling?
There are some very decent alternatives to MSAA which use edge-detection algorithms (like MLAA), but I don't see a way to implement those without the use of shaders.
##### Share on other sites
[quote]
Without the use of any shaders this could get quite tricky.
What kind of card are you using exactly, if it doesn't support multisampling?
There are some very decent alternatives to MSAA which use edge-detection algorithms (like MLAA), but I don't see a way to implement those without the use of shaders.
[/quote]
Actually it's a strange hybrid of Intel® GMA X4500HD, I think.
Some GL extensions I can find here: [url="http://www.gamedev.net/index.php?app=core&module=attach&section=attach&attach_id=4224"]cantiga[/url]
and its GPU is integrated in a laptop: mobile esprimo v6535.
This card doesn't support very much use of shaders; only, ehm, 1.10 that I know of is 100% supported.
Shader versions above that are more or less unsupported.
(edit: I don't have much experience coding shaders, so it would be a bit of a daunting task anyway.)
##### Share on other sites
You can do supersampling by rendering to a target that is 2x or 4x in resolution and then using standard texture sampling as a means to smooth detail when downsampling.
##### Share on other sites
[quote name='arbitus' timestamp='1311089592' post='4837438']
You can do supersampling by rendering to a target that is 2x or 4x in resolution and then using standard texture sampling as a means to smooth detail when downsampling.
[/quote]
Seeing as he is using a rather weak integrated card, this could get rather expensive even for low resolutions.
##### Share on other sites
[quote name='arbitus' timestamp='1311089592' post='4837438']
You can do supersampling by rendering to a target that is 2x or 4x in resolution and then using standard texture sampling as a means to smooth detail when downsampling.
[/quote]
Hmm, yeah, good idea. I'll try that out.
By the way, what I've discovered, and didn't mention before, is that
it just doesn't seem right that there would be white jaggies on the dark parts of that image.
I could try changing the depth bias for each cube, though; maybe that would work.
So my next question is this: would it be possible to retrieve the z-buffer value for each fragment and change the z-bias per fragment
to remove those artifacts?
Actually, come to think of it, I have a hunch that antialiasing the cubes, or the whole back/front buffer, would probably not
remove the white stripes entirely, but I might be wrong. It's one thing I'll try to figure out in a while.
Thanks for the replies.
##### Share on other sites
[quote]
Seeing as he is using a rather weak integrated card, this could get rather expensive even for low resolutions.
[/quote]
Do you mean that GetFrontBufferData (or the back buffer, for that matter) is too expensive for that, per frame?
I haven't really tested it, but you might be right; it probably would not work, since I want a reasonable framerate.
##### Share on other sites
[quote name='Rudibs' timestamp='1311090924' post='4837456']
Seeing as he is using a rather weak integrated card this could get rather expensive even for low resolutions
[/quote]
Do you mean that GetFrontBufferData (or the back buffer, for that matter) is too expensive for that, per frame?
I haven't really tested it, but you might be right; it probably would not work, since I want a reasonable framerate.
[/quote]
It mostly depends on how populated your scene will be. Try to find out how many ms it takes to render one frame right now, to get an estimate of how much time it will take to render at higher resolutions.
If your scene is simple, without too many independent entities (i.e. without too many draw calls, polygons, 'complicated' shading, etc.), you should be fine.
I'd just try the suggested method and see how it works out.
https://web2.0calc.com/questions/algebra_23693
# Algebra
Find 1/(a - 1) + 1/(b - 1), where a and b are the roots of the quadratic equation 2x^2 - 9x + 2 = 0.
Jan 5, 2022
#1
Find 1/(a - 1) + 1/(b - 1).
Hello Guest!
$$2x^2 - 9x + 2 = 0\\ x\in \{a,b\}\\ x\in \{(\frac{9}{4}+\frac{\sqrt{65}}{4} ),(\frac{9}{4}-\frac{\sqrt{65}}{4})\}$$
$$\color{blue}\frac{1}{a-1}+\frac{1}{b-1}\\ =\dfrac{1}{\frac{9}{4}+\frac{\sqrt{65}}{4}-1}+\dfrac{1}{\frac{9}{4}-\frac{\sqrt{65}}{4}-1}\\ \color{blue} =-1$$
Jan 5, 2022
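The value can also be reached, and checked, without computing the roots explicitly. By Vieta's formulas for 2x^2 - 9x + 2 = 0, a + b = 9/2 and ab = 1, so 1/(a - 1) + 1/(b - 1) = (a + b - 2)/(ab - (a + b) + 1). A quick sketch in Python using exact rational arithmetic:

```python
from fractions import Fraction

# Vieta's formulas for 2x^2 - 9x + 2 = 0: a + b = 9/2, a*b = 2/2 = 1.
s = Fraction(9, 2)   # a + b
p = Fraction(2, 2)   # a * b

# 1/(a-1) + 1/(b-1) = ((a-1) + (b-1)) / ((a-1)(b-1))
#                   = (s - 2) / (p - s + 1)
value = (s - 2) / (p - s + 1)
print(value)  # -1
```

This agrees with the explicit computation above.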
https://mathoverflow.net/questions/230380/is-there-a-finite-test-for-isomorphisms-of-rigid-monoidal-abelian-categories-pa
# Is there a finite test for isomorphisms of rigid monoidal abelian categories, part II
Let $G$ be a semisimple algebraic group. (I'm already interested in the case $G=SL_2$.)
Let $\mathcal C$ be a semisimple rigid monoidal abelian category endowed with an additive tensor functor $Rep_G \to \mathcal C$. Assume every object of $\mathcal C$ is a summand of an object in the image of this functor $Rep_G \to \mathcal C$.
To check that this functor $Rep_G \to \mathcal C$ is an equivalence, it is sufficient to check that it is fully faithful. (Essential surjectivity follows from fullness and the summand condition.)
Is it sufficient to check the fully faithfulness condition on finitely many objects of $Rep_G$?
In my previous question, with an additional assumption about a fibre functor, Ehud Meier showed that $\mathcal C$ must be symmetric and thus by the Tannakian correspondence $\mathcal C$ must be the representation category of a subgroup of $G$. By some group theory (Larsen's alternative + Goursat's Lemma) I can then find a finite set of objects to check fully faithfulness on. But what if I don't have a fibre functor? Surely $\mathcal C$ need not always be symmetric? Is it still true?
• Your point being that there is at most one symmetry on $\mathcal C$ such that the functor is symmetric monoidal, but in fact there need not be any such symmetry? – Theo Johnson-Freyd Feb 6 '16 at 16:59
• @TheoJohnson-Freyd Yes, there need not be any such symmetry (as far as I know), but if it exists then it is unique. – Will Sawin Feb 6 '16 at 20:50
https://www.stata.com/stata-news/news36-4/bayesian-vector-autoregressive-models/
» Home » Stata News » Vol 36 No 4 » In the spotlight: Bayesian vector autoregressive models
## In the spotlight: Bayesian vector autoregressive models
Vector autoregressive models (VARs) have been widely used in macroeconomics to summarize data interdependencies, test generic theories, and conduct policy analyses (Canova 2007). The application of Bayesian VAR models, however, has been more limited, mostly because of the difficulties in specifying and fitting such models. The Bayesian approach to modeling multiple time series has some distinct advantages over the classical approach, some of which I illustrate in this article. Specifically, I apply Bayesian vector autoregression to model the relationships between gross domestic product (GDP), labor productivity, and CO2 emissions, and I show that the Bayesian approach in Stata is no more complicated than the classical approach.
You can read more about classical VAR models in my colleague David Schenck's blog post Vector autoregressions in Stata.
Bayesian VAR models are available in Stata 17 through the new bayes: var command as part of the Bayesian economics suite; see [BAYES] bayes: var for more details.
## CO2 emissions data
In the last couple of decades, the effect of economic activity on the environment has been a subject of ever-increasing research (Grossman and Krueger 1993). Of particular interest has been the link between economic output and CO2 emissions (Grossman and Krueger 1995).
The dataset I use, greensolow, is from chapter 5 of Environmental Econometrics Using Stata (Baum and Hurn 2021). Also there, you can find examples of standard VAR analysis.
The dataset contains quarterly observations of per capita CO2 emissions (co2) and various economic factors such as real per capita GDP (gdp), labor productivity (lp), and others. datevec is the time variable. Below is the description of the variables in the dataset.
. use greensolow
. describe
Contains data from greensolow.dta
Observations: 282
Variables: 8 28 Aug 2018 11:13
(_dta has notes)
Variable Storage Display Value
name type format label Variable label
datevec float %tq
co2 double %9.0g * quarterly per capita co2 emissions
gdp float %9.0g real per capita gdp
lp float %9.0g labour productivity
tfp float %9.0g quarterly utilization adjusted
total factor productivity
p float %9.0g relative price of investment
rc float %9.0g real personal consumption
expenditure
yields
* indicated variables have notes
Sorted by: datevec
. tsset datevec, quarterly
Time variable: datevec, 1948q1 to 2018q2
Delta: 1 quarter
We rescale the gdp and co2 variables to make them comparable in range and facilitate the interpretation of the regression results later.
. replace gdp = gdp/1000
. replace co2 = co2*1000
. summarize gdp lp co2
Variable Obs Mean Std. dev. Min Max
gdp 274 7.890994 4.510167 1.989535 16.57506
lp 274 91.17146 31.7574 40.46978 148.8792
co2 173 6.609467 .6417095 4.932328 8.125009
Let's first have a quick look at the observed three time series of interest: gdp, lp, and co2. There are no data available for CO2 emissions before the second quarter of 1973. Hence, we restrict the range of the tsline commands.
. tsline gdp if tin(1973q2,2016q1), ytitle("GDP") ylabel(none) nodraw name(tsline1)
. tsline lp if tin(1973q2,2016q1), ytitle("LP") ylabel(none) nodraw name(tsline2)
. tsline co2 if tin(1973q2,2016q1), ytitle("CO2") ylabel(none) nodraw name(tsline3)
. graph combine tsline1 tsline2 tsline3, rows(3)
It is apparent that the time series are not stationary: gdp and lp have been increasing in time trends, while co2 is distinctively nonlinear. From about 1983 to 2008, per capita CO2 emissions are relatively constant, but there are decreasing trends before 1983 and after 2008, perhaps because of changes in labor productivity. The increasing trends of gdp and lp are also disturbed for short periods around the 2008 recession.
Because VAR models are not directly applicable to nonstationary time series, we consider the first differences D.gdp, D.lp, and D.co2. The first differences are interpreted as growth rates of real GDP, labor productivity, and CO2 emissions. Their time-series graphs show no apparent trends.
. tsline D.gdp if tin(1973q2,2016q1), ytitle("D.GDP") ylabel(none) nodraw name(tsline1)
. tsline D.lp if tin(1973q2,2016q1), ytitle("D.LP") ylabel(none) nodraw name(tsline2)
. tsline D.co2 if tin(1973q2,2016q1), ytitle("D.CO2") ylabel(none) nodraw name(tsline3)
. graph combine tsline1 tsline2 tsline3, rows(3)
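The D. operator used above computes nothing more than y_t minus y_{t-1}; here is a minimal sketch of that transformation (the level series is made up for illustration):

```python
# First differences, as Stata's D. operator computes them:
# D.y[t] = y[t] - y[t-1]; the first observation is lost.
def first_diff(y):
    return [b - a for a, b in zip(y, y[1:])]

gdp = [2.0, 2.5, 3.0, 2.75]  # made-up level series
print(first_diff(gdp))  # [0.5, 0.5, -0.25]
```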
## Lag order selection
At this point, we are ready to apply a VAR model to D.gdp, D.lp, and D.co2, which, in the context of VAR, we call endogenous variables. A VAR model is nothing but a linear regression of the endogenous variables on their own lags. To specify a VAR model, we first need to choose its lag order. In classical settings, one may use the Stata command varsoc, which provides different information criteria for selecting the order. However, having different criteria complicates the choice because the most popular criteria, AIC and BIC, often do not agree.
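To make "a linear regression of the endogenous variables on their own lags" concrete, here is a toy bivariate VAR(1) recursion written out by hand; the coefficients and starting values are invented for illustration and have nothing to do with the CO2 data:

```python
# Toy bivariate VAR(1): y_t = c + A y_{t-1} + e_t, written out by hand.
# All numbers are invented for illustration.
A = [[0.5, 0.1],
     [0.2, 0.4]]          # lag-1 coefficient matrix
c = [0.1, 0.2]            # intercepts

def var1_step(y_prev, shock=(0.0, 0.0)):
    """One step of the VAR(1) recursion."""
    return [c[i] + A[i][0] * y_prev[0] + A[i][1] * y_prev[1] + shock[i]
            for i in range(2)]

y = [1.0, 1.0]
for _ in range(3):        # iterate the deterministic part a few steps
    y = var1_step(y)
print(y)
```

Fitting a VAR amounts to estimating A and c (plus the shock covariance) from data; bayes: var does this with a prior attached.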
In Bayesian settings, we compare models based on their posterior probabilities—we fit models with 1, 2, etc., up to a maximum lag order and compare them using, for example, the bayestest model command in Stata. Below, I consider models with up to four lags and compare them. All model specifications include the Bayes options mcmcsize(1000), for reducing the MCMC sample size; rseed(), for reproducibility; and saving(), for saving MCMC samples.
. quietly bayes, mcmcsize(1000) rseed(17) saving(bsim, replace): var D.gdp D.lp D.co2, lags(1/1)
. estimates store bvar1
. quietly bayes, mcmcsize(1000) rseed(17) saving(bsim, replace): var D.gdp D.lp D.co2, lags(1/2)
. estimates store bvar2
. quietly bayes, mcmcsize(1000) rseed(17) saving(bsim, replace): var D.gdp D.lp D.co2, lags(1/3)
. estimates store bvar3
. quietly bayes, mcmcsize(1000) rseed(17) saving(bsim, replace): var D.gdp D.lp D.co2, lags(1/4)
. estimates store bvar4
. bayestest model bvar1 bvar2 bvar3 bvar4
Bayesian model tests
              log(ML)     P(M)    P(M|y)
  bvar1     102.9553   0.2500    0.9709
  bvar2      99.4340   0.2500    0.0287
  bvar3      94.9101   0.2500    0.0003
  bvar4      93.4377   0.2500    0.0001
Note: Marginal likelihood (ML) is computed using
Laplace–Metropolis approximation.
The first column in the output table of bayestest model reports the log-marginal likelihoods; the second reports the prior model probabilities, which are all equal to 0.25 by default; and the third column reports the posterior model probabilities. The simplest model with one lag has a probability of 0.97, and it is overwhelmingly the best one.
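With equal prior model probabilities, the posterior probabilities follow directly from the log marginal likelihoods: they are a softmax of log(ML). A quick check against the table above, in plain Python:

```python
import math

# Log marginal likelihoods from the bayestest model output above.
log_ml = [102.9553, 99.4340, 94.9101, 93.4377]

# With equal priors P(M), the posterior model probabilities are a softmax
# of the log marginal likelihoods (shift by the max for numerical safety).
m = max(log_ml)
w = [math.exp(v - m) for v in log_ml]
post = [x / sum(w) for x in w]
print([round(p, 4) for p in post])  # [0.9709, 0.0287, 0.0003, 0.0001]
```

The rounded values reproduce the P(M|y) column reported by bayestest model.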
## Fitting the Bayes VAR model
After deciding that a VAR model of order 1 is appropriate for our data, we are ready to specify and fit the model. Recall that a Bayesian model must also include priors for all model parameters. In the case of VAR, we have regression coefficients grouped by equations (one for each endogenous variable) and the variance–covariance matrix of the error terms. The bayes: var command provides a default prior model, the so-called conjugate Minnesota prior, and this is the one we stick with. Because Bayesian estimation is essentially a sampling from the posterior distribution of the model, we add the rseed() option for reproducibility and reduce the MCMC sample size to 1,000, deeming it to be sufficient.
. bayes, mcmcsize(1000) rseed(17): var D.gdp D.lp D.co2, lags(1)
Burn-in ...
Simulation ...
Model summary
Likelihood:
D_gdp D_lp D_co2 ~ mvnormal(3,xb_D_gdp,xb_D_lp,xb_D_co2,{Sigma,m})
Priors:
{D_gdp:LD.gdp LD.lp LD.co2 _cons} (1)
{D_lp:LD.gdp LD.lp LD.co2 _cons} (2)
{D_co2:LD.gdp LD.lp LD.co2 _cons} ~ varconjugate(3,1,1,_b0,{Sigma,m},_Phi0)
(3)
{Sigma,m} ~ iwishart(3,5,_Sigma0)
(1) Parameters are elements of the linear form xb_D_gdp.
(2) Parameters are elements of the linear form xb_D_lp.
(3) Parameters are elements of the linear form xb_D_co2.
Bayesian vector autoregression MCMC iterations = 3,500
Gibbs sampling Burn-in = 2,500
MCMC sample size = 1,000
Sample: 1973q3 thru 2016q1 Number of obs = 171
Acceptance rate = 1
Efficiency: min = .8147
avg = .9769
Log marginal-likelihood = 102.95529 max = 1
Equal-tailed
Mean Std. dev. MCSE Median [95% cred. interval]
D_gdp
gdp
LD. .5796619 .0661197 .002091 .5771234 .4488638 .7079532
lp
LD. -.0174199 .0066842 .000211 -.0177438 -.0295753 -.0041474
co2
LD. .0357662 .0308744 .000976 .0363394 -.0279287 .095661
_cons .0346589 .007285 .000242 .0346453 .0210438 .0484967
D_lp
gdp
LD. -3.006862 .6266633 .019817 -3.004131 -4.301595 -1.759183
lp
LD. .5432933 .0647291 .002012 .5407274 .4190424 .6696202
co2
LD. -.0751836 .2995545 .009473 -.0688522 -.6757699 .4919293
_cons .3764 .0680885 .002153 .376942 .2442487 .5111251
D_co2
gdp
LD. .3252994 .149951 .004396 .3162642 .0197296 .6101307
lp
LD. -.0006642 .0163091 .000517 -.0003933 -.033088 .0289234
co2
LD. .2342499 .0752377 .002379 .2361941 .0854152 .3770827
_cons -.0350984 .0168797 .000572 -.0348312 -.070182 -.0021437
Sigma_1_1 .0050236 .0005201 .000014 .005003 .00405 .0060574
Sigma_2_1 .0293233 .0041989 .000127 .0290843 .0221124 .038277
Sigma_3_1 -.0003943 .0009121 .000027 -.0003756 -.0023657 .001382
Sigma_2_2 .4503552 .048543 .001535 .4466805 .3663917 .5593156
Sigma_3_2 .0018873 .0085198 .000269 .0017915 -.0143019 .0189473
Sigma_3_3 .0272382 .002926 .000103 .0270488 .0220155 .0334743
The bayes: var command uses efficient Gibbs sampling for simulation and rarely has MCMC convergence problems. In this particular run, we have a very high sampling efficiency of 0.98. The MCMC sample of size 1,000 is thus equivalent to about 980 independent draws from the posterior, which provides enough estimation precision.
The output of bayes: var is fairly long because of the large number of regression coefficients: three equations with four coefficients each. Also included is the three-by-three covariance matrix Sigma. In practice, a VAR model is interpreted not through its regression coefficients but through various impulse–response functions. However, one more step is needed before we continue with the interpretation of results.
## Checking model stability
Some postestimation VAR routines such as impulse–response functions are only meaningful for stable VAR models; see [BAYES] bayesvarstable for a precise definition and more details. To check the stability of a Bayesian VAR model, we use the bayesvarstable command. As usual, the simulation results of bayes: var need to be saved first.
. bayes, saving(bvarsim1, replace)
. bayesvarstable
```
Eigenvalue stability condition            Companion matrix size =    3
                                          MCMC sample size      = 1000

 Eigenvalue                                            Equal-tailed
    modulus        Mean  Std. dev.     MCSE    Median  [95% cred. interval]

          1    .8019066   .0520921  .001647  .8022324    .700483   .8966506
          2     .368717    .083123  .002629  .3626458   .2194205   .5469187
          3    .1898191   .0779832  .002466  .1896387   .0343819   .3393437

Pr(eigenvalues lie inside the unit circle) = 1.0000
```
The bayesvarstable command estimates the moduli of the eigenvalues of the companion matrix of the VAR model, which in this case has size 3. Similarly to the frequentist varstable command, bayesvarstable performs unit circle tests, but it accounts for the fact that, in a Bayesian context, these moduli are random quantities.
The first column in the output table shows the posterior mean estimates for the eigenvalue moduli in decreasing order: 0.80, 0.37, and 0.19. Simply comparing them with 1 is not sufficient for testing stability, though. The most informative output of the command is the posterior probability of unit circle inclusion, reported right below the eigenvalue estimation table. It is essentially 1 in our case, assuring the stability of the model. An inclusion probability substantially lower than 1 would indicate some degree of instability; fortunately, that is not an issue here.
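For intuition about what bayesvarstable computes: for a VAR(1), the companion matrix is simply the matrix of lag-1 coefficients, and stability requires all eigenvalue moduli to lie strictly inside the unit circle. A minimal sketch with a hypothetical coefficient matrix (not the estimates above):

```python
import numpy as np

# Hypothetical lag-1 coefficient matrix for (gdp, lp, co2); for a VAR(1),
# this matrix is also the companion matrix.
A = np.array([[ 0.50, 0.10,  0.00],
              [-3.00, 0.54, -0.08],
              [ 0.33, 0.00,  0.23]])

# Moduli of the eigenvalues, in decreasing order
moduli = np.sort(np.abs(np.linalg.eigvals(A)))[::-1]

# Stability: all moduli strictly inside the unit circle
stable = bool(np.all(moduli < 1))
print(moduli, stable)
```

In the Bayesian setting, bayesvarstable repeats this computation for every MCMC draw of the coefficients and summarizes the resulting distribution of moduli.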
## Impulse–response functions
Impulse–response functions (IRFs) provide the main toolbox for exploring a VAR model. They consider a shock to one variable (the impulse) and how this shock affects an endogenous variable (the response). The bayesirf create command computes a set of IRFs for each impulse–response pair and saves them in an irf dataset. Please see [BAYES] bayesirf for more details.
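For a VAR(1) y_t = c + A y_{t-1} + u_t, the simple (non-orthogonalized) IRF at horizon h is just the matrix power A^h: entry (i, j) is the response of variable i to a one-unit impulse in variable j. A minimal sketch with a hypothetical two-variable coefficient matrix (not this model's estimates):

```python
import numpy as np

# Hypothetical stable VAR(1) coefficient matrix
A = np.array([[0.5, 0.1],
              [0.2, 0.3]])

def irf(A, steps):
    """Non-orthogonalized IRFs of a VAR(1): step h is A**h."""
    out = [np.eye(A.shape[0])]
    for _ in range(steps):
        out.append(A @ out[-1])
    return np.stack(out)          # shape (steps + 1, n, n)

resp = irf(A, 20)
print(resp[2, 0, 1])              # response of var 0 at step 2, impulse in var 1
```

For a stable model, these responses shrink geometrically with h, which is why shocks wear off over time.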
Below, we compute IRFs with a length of 20 steps, equivalent to a 5-year period, and save them as a birf1 result in a birf.irf dataset.
. bayesirf create birf1, set(birf) step(20) mcmcsaving(birf1mcmc)
(file birf.irf created)
(file birf.irf now active)
file birf1mcmc.dta saved.
(file birf.irf updated)
Note that birf1 contains Bayesian results such as posterior means, standard deviations, and credible intervals. In addition, I use the mcmcsaving() option to save the MCMC draws for the IRFs in an external dataset. This will allow me to request various posterior summaries for the IRFs afterward, such as different credible-interval levels.
Let's first see the effects of shocks to D.gdp and D.lp on D.co2 in terms of regular IRFs, irf. We use bayesirf cgraph to combine the two IRFs.
. bayesirf cgraph (birf1 D.gdp D.co2 irf) (birf1 D.lp D.co2 irf)
A shock to D.gdp induces a positive response in D.co2, whereas a shock to D.lp induces a negative one. Although the results apply to the first differences (growth rates) of our original time series, they conform with our expectations of how GDP and labor productivity may affect CO2 emissions.
Orthogonal IRFs, oirf, have an advantage over the regular IRFs in that the impulses are guaranteed to be independent. Quantitative results for oirf change, but the conclusions from the previous graphs still hold.
. bayesirf cgraph (birf1 D.gdp D.co2 oirf) (birf1 D.lp D.co2 oirf)
Clearly, the effects of the shocks to D.gdp and D.lp on D.co2 wear off after 20 quarters.
We also show the cumulative orthogonal IRFs, coirf, which are convenient for displaying the accumulating long-term shock effects.
. bayesirf cgraph (birf1 D.gdp D.co2 coirf) (birf1 D.lp D.co2 coirf)
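Mechanically, orthogonal IRFs replace the unit impulse with a column of the Cholesky factor P of the error covariance Sigma (the step-h response is A^h P), and the cumulative orthogonal IRF is the running sum over steps. A sketch with hypothetical numbers (not the estimates from this model):

```python
import numpy as np

# Hypothetical stable VAR(1) coefficients and error covariance
A = np.array([[0.5, 0.1],
              [0.2, 0.3]])
Sigma = np.array([[0.010, 0.002],
                  [0.002, 0.020]])
P = np.linalg.cholesky(Sigma)     # lower triangular, Sigma = P @ P.T

steps = 20
oirf = np.stack([np.linalg.matrix_power(A, h) @ P for h in range(steps + 1)])
coirf = np.cumsum(oirf, axis=0)   # cumulative orthogonal IRFs

# For a stable VAR(1), the cumulative response converges to (I - A)^(-1) P
longrun = np.linalg.inv(np.eye(2) - A) @ P
print(coirf[-1] - longrun)        # nearly zero by step 20
```

The convergence of the cumulative response to a finite long-run limit is another face of the stability condition checked earlier.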
Alternatively, we can show the cumulative orthogonal IRFs in table format by using the bayesirf ctable command.
. bayesirf ctable (birf1 D.gdp D.co2 coirf) (birf1 D.lp D.co2 coirf)
```
                     (1)          (1)          (1)
 Step              coirf        Lower        Upper

 0              -.005524     -.031529      .019887
 1               .015962     -.020886      .054666
 2               .031921     -.015899      .083129
 3               .042553     -.014467       .10284
 4                .04996     -.012635      .117914
 5               .055443     -.010862      .130909
 6               .059693     -.010626      .141337
 7               .063087     -.010477      .148322
 8               .065849     -.010923      .155024
 9               .068122       -.0112      .162178
 10              .070008     -.011505      .169777
 11              .071581     -.011551      .175304
 12              .072899      -.01158      .179913
 13              .074008       -.0117      .183291
 14              .074944     -.011603      .186745
 15              .075737     -.011675       .19087
 16               .07641     -.011786      .194462
 17              .076983     -.011874      .198018
 18              .077473      -.01194      .200903
 19              .077892     -.011989      .203582
 20              .078252     -.012025      .205119

                     (2)          (2)          (2)
 Step              coirf        Lower        Upper

 0               .007935     -.016624      .032302
 1               .009412       -.0247      .043826
 2               .006742     -.038259      .050867
 3               .002787     -.051265       .05746
 4               .002787     -.051265       .05746
 5              -.004791     -.071762      .063447
 6              -.007885      -.07997      .064647
 7              -.010504     -.086108      .065251
 8              -.012708     -.091735      .065519
 9              -.014557     -.099397      .066014
 10              -.01611     -.104177      .066809
 11             -.017417     -.110072      .067407
 12             -.018518     -.114654      .067636
 13             -.019448     -.118082      .067759
 14             -.020235     -.121866      .067848
 15             -.020904     -.124446      .067912
 16             -.021473     -.126708      .067938
 17             -.021959     -.129002      .067936
 18             -.022374     -.130675      .067934
 19              -.02273     -.133542      .067932
 20             -.023035     -.135692       .06793

Posterior means reported.
95% equal-tailed credible lower and upper bounds reported.
(1) irfname = birf1, impulse = D.gdp, and response = D.co2.
(2) irfname = birf1, impulse = D.lp, and response = D.co2.
```
bayesirf ctable reports posterior means (first column) and the lower and upper 95% credible bounds (second and third columns). After 20 quarters, a one-unit shock to the growth rate of gdp results in a 0.08-unit increase in the growth rate of co2, as given by the posterior mean estimate, and a one-unit shock to the growth rate of lp results in a 0.02-unit decrease in the growth rate of co2. On the other hand, the wide 95% credible bounds, with significant mass on both sides of the zero line, suggest that the effects of the shocks, especially that of D.lp on D.co2, are not that strong.
We can make more specific conclusions by varying the credible-interval level. For example, the 90% credible interval for D.gdp → D.co2 and the 30% interval for D.lp → D.co2 do not include zero after 20 quarters, indicating that the effect of D.gdp on D.co2 is positive with at least 90% probability and that the effect of D.lp on D.co2 is negative with at least 30% probability. Classical analysis based on confidence intervals does not provide such direct probability interpretations.
. bayesirf cgraph (birf1 D.gdp D.co2 coirf, clevel(90))
> (birf1 D.lp D.co2 coirf, clevel(30))
Although not strongly conclusive, the IRF results suggest that an increase of the growth rate of GDP causes an increase of the growth rate of CO2 emissions, which can be partially offset by an increase in labor productivity.
## Bayesian forecasting
Finally, I want to illustrate Bayesian dynamic forecasting using the bayesfcast commands. In contrast to classical dynamic forecasts, which are based on point estimates, Bayesian forecasts consider the posterior predictive distributions at future time points.
We use bayesfcast compute to compute and save Bayesian forecasts as new variables in the dataset. For any given prefix, say, B_, the latter include posterior mean estimates B_*, posterior standard deviations B_*_sd, and lower and upper credible bounds B_*_lb and B_*_ub.
Below, we compute Bayesian forecasts starting from the first quarter of 2009 for 30 quarters into the future, thus reaching the end of the observed period.
. bayesfcast compute B_, dynamic(tq(2009q1)) step(30) rseed(17)
Then we use the bayesfcast graph command to plot the posterior mean forecasts for D.gdp and D.co2, along with their observed values.
. bayesfcast graph B_D_gdp, observed nodraw name(fcast_gdp)
. bayesfcast graph B_D_co2, observed nodraw name(fcast_co2)
. graph combine fcast_gdp fcast_co2, rows(1)
The dynamic forecasts are computed from the observed values of D.gdp, D.lp, and D.co2 only at the beginning of the forecast period (the first quarter of 2009) and the quarter before (because we fit a VAR(1)). The posterior mean estimates predict an initial increase in the growth rate of GDP and a drop in the growth rate of CO2, followed by negligible growth rates for both. Unsurprisingly, the posterior mean forecasts cannot capture the observed fluctuations during the following 30 quarters: fluctuations caused by external factors not incorporated in the model.
The default 95% credible intervals include the observed values for D.gdp and D.co2 and are in fact quite wide. Bayesian results thus suggest significant forecasting uncertainty and overall weak prediction power of our simple model.
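The mechanics of such forecasts can be sketched outside Stata: iterate the VAR(1) recursion forward while drawing error terms, then summarize the simulated paths by their mean and percentiles. In a full Bayesian forecast the coefficients would also be redrawn from the MCMC sample for each path; the sketch below fixes them at hypothetical values for brevity:

```python
import numpy as np

rng = np.random.default_rng(17)

# Hypothetical VAR(1) parameters (a full Bayesian forecast would redraw
# these from the posterior MCMC sample for every simulated path)
A = np.array([[0.5, 0.1], [0.2, 0.3]])
c = np.array([0.03, 0.01])
Sigma = np.array([[0.010, 0.002], [0.002, 0.020]])

def forecast_paths(y0, steps, ndraws):
    """Simulate dynamic forecast paths y_t = c + A y_{t-1} + u_t."""
    paths = np.empty((ndraws, steps, 2))
    for d in range(ndraws):
        y = y0.copy()
        for t in range(steps):
            y = c + A @ y + rng.multivariate_normal(np.zeros(2), Sigma)
            paths[d, t] = y
    return paths

paths = forecast_paths(np.array([0.1, 0.05]), steps=30, ndraws=500)
mean_fcast = paths.mean(axis=0)                       # posterior mean forecast
lb, ub = np.percentile(paths, [2.5, 97.5], axis=0)    # 95% credible bounds
```

The spread between lb and ub widens with the horizon, which is the forecasting uncertainty visible in the credible bands of bayesfcast graph.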
— by Nikolay Balov
Principal Statistician and Software Developer
http://www.acmerblog.com/hdu-2124-repair-the-wall-3271.html
2013-12-29
# Repair the Wall
Long time ago , Kitty lived in a small village. The air was fresh and the scenery was very beautiful. The only thing that troubled her is the typhoon.
When the typhoon came, everything is terrible. It kept blowing and raining for a long time. And what made the situation worse was that all of Kitty’s walls were made of wood.
One day, Kitty found a crack in the wall. The shape of the crack is a rectangle of size 1×L (in inches). Luckily, Kitty got N blocks and a saw from her neighbors.
The blocks were rectangles too, and every block was 1 inch wide. So, with the help of the saw, Kitty could cut some of the blocks (or, of course, use them directly without cutting) and put them in the crack, so that the wall could be repaired perfectly, without any gap.
Now, Kitty knows the size of each block and wants to use as few of the blocks as possible to repair the wall. Could you help her?
The input contains many test cases; please process to the end of file (EOF).
Each test case contains two lines.
The first line contains two integers L (0 < L < 1000000000) and N (0 <= N < 600) as described above.
The second line contains N positive integers; the ith integer Ai (0 < Ai < 1000000000) means that the ith block has the size 1×Ai (in inches).
Sample input:
```
5 3
3 2 1
5 2
2 1
```
Sample output:
```
2
impossible
```
2011-12-16 06:40:40
Note: an easy problem. Greedy: just sort.
Got WA many times and spent 40 minutes on it: I had not guarded against the array index running out of bounds in the "impossible" case. The condition i < n+1 was missing.
```c
#include <stdio.h>
#include <stdlib.h>

int a[610];

/* sort in descending order; comparing instead of subtracting avoids
   integer overflow for values close to 10^9 */
int cmp(const void *p, const void *q)
{
    int x = *(const int *)p, y = *(const int *)q;
    return (x < y) - (x > y);
}

int main(void)
{
    int i, L, n;
    while (scanf("%d%d", &L, &n) == 2) {
        for (i = 1; i <= n; i++)
            scanf("%d", a + i);
        qsort(a + 1, n, sizeof(int), cmp);
        for (i = 1; i <= n; i++) {
            a[i] += a[i - 1];          /* prefix sums of the largest blocks */
            if (a[i] >= L)
                break;
        }
        if (i <= n && a[i] >= L)       /* i <= n guards the "impossible" case */
            printf("%d\n", i);
        else
            puts("impossible");
    }
    return 0;
}
```
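For clarity, the same greedy idea in a few lines of Python (an illustrative re-implementation, not the submitted solution): sort the blocks in decreasing order and take prefix sums until the crack length is covered.

```python
def repair(L, blocks):
    """Fewest blocks whose total length covers L, or None if impossible."""
    total = 0
    for used, size in enumerate(sorted(blocks, reverse=True), start=1):
        total += size
        if total >= L:
            return used
    return None

print(repair(5, [3, 2, 1]))   # 2
print(repair(5, [2, 1]))      # None, i.e., "impossible"
```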
http://www.jupedsim.org/jpsreport_method_B.html
This method measures the mean value of velocity and density over space and time.
The spatial mean velocity and density are calculated by taking a segment $\Delta x$ in a corridor as the measurement area.
The velocity $\langle v \rangle_i$ of each person is defined as the length $\Delta x$ of the measurement area divided by the time he or she needs to cross the area:

$$\langle v \rangle_i = \frac{\Delta x}{t_\text{out} - t_\text{in}},$$

where $t_\text{in}$ and $t_\text{out}$ are the times a person enters and exits the measurement area, respectively.
The density $\rho_i$ for each person $i$ is calculated as the time average of the instantaneous density while the person crosses the area:

$$\rho_i = \frac{1}{t_\text{out} - t_\text{in}} \int_{t_\text{in}}^{t_\text{out}} \frac{N'(t)}{b_\text{cor}\,\Delta x}\, \mathrm{d}t,$$

where $b_\text{cor}$ is the width of the measurement area while $N'(t)$ is the number of persons in this area at a time $t$.
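Assuming the trajectory data has already been reduced to entry/exit times and per-frame pedestrian counts, both quantities can be computed directly. A sketch with made-up numbers (not the jpsreport API):

```python
# Method-B measurement for one pedestrian crossing a segment of length dx
# in a corridor of width b_cor (all numbers hypothetical).
dx, b_cor = 2.0, 1.8            # meters
t_in, t_out = 10.0, 14.0        # seconds
dt = 0.5                        # sampling interval, seconds
# number of pedestrians inside the measurement area at each sample time
counts = [3, 3, 4, 4, 4, 3, 3, 2]

v_i = dx / (t_out - t_in)       # mean velocity while crossing, m/s

# time average of the instantaneous density N'(t) / (b_cor * dx)
rho_i = sum(n / (b_cor * dx) for n in counts) * dt / (t_out - t_in)
print(v_i, rho_i)               # m/s and persons per m^2
```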
https://chemistry.stackexchange.com/questions/158480/what-actually-happens-to-the-equilibrium-of-solid-and-aqueous-ions-in-a-solution
# What actually happens to the equilibrium of solid and aqueous ions in a solution when concentrations are changed?
Taking as an example the equilibrium set up when $$\ce{BaSO4}$$ is added to water:
$$\ce{ BaSO4(s) <=> Ba^{2+}(aq) + SO^{2-}_4(aq) }$$
The solubility product constant is $$[\ce{Ba^{2+}}][\ce{SO^{2-}_4}]$$.
If I then add additional $$\ce{Ba^{2+}}$$ ions to the solution, what will happen to the concentrations of the ions once equilibrium is restored, given that the product of their concentrations is constant? Will $$[\ce{Ba^{2+}}]$$ remain larger than its original value before the addition of barium ions, with $$[\ce{SO^{2-}_4}]$$ becoming smaller than its original value to compensate and keep the solubility product constant?
• (+1) The answer to your final question is simply yes: the sulfate ion concentration gets reduced and the barium ion concentration is increased. Adding sulfate ions gives the opposite situation: the barium ion concentration is reduced, etc. In barium medical “milk shakes”, given for fluoroscopic imaging of the throat and esophagus, the barium sulfate suspension is in a liquid with excess sulfate ions, as per the answer by Buttonwood. Barium is toxic, but it cannot do harm if it is tied up in barium sulfate and the equilibrium keeps it tied up.
– Ed V, Oct 6, 2021 at 21:44
On the microscopic scale, the thermodynamic equilibrium is dynamic, i.e., there are constant forward and backward reactions. Here: dissociation of $$\ce{BaSO4}$$ to yield $$\ce{Ba^{2+}}$$ and $$\ce{SO^{2-}_4}$$, and association to yield $$\ce{BaSO4}$$ again. Once the thermodynamic equilibrium is established, these ongoing microscopic reactions no longer change the concentration of $$\ce{Ba^{2+}}$$, nor that of $$\ce{SO^{2-}_4}$$.
If the system is disturbed at the macroscopic level (e.g., by addition of further $$\ce{SO^{2-}_4}$$, an increase of $$[\ce{SO^{2-}_4}]$$), keeping the product $$[\ce{Ba^{2+}}][\ce{SO^{2-}_4}]$$ constant is possible only if the other factor, $$[\ce{Ba^{2+}}]$$, decreases. This is exploited in the gravimetric determination of barium in the form of $$\ce{BaSO4}$$: water-soluble $$\ce{Na2SO4}$$ is added to obtain an exhaustive precipitation.
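The common-ion effect can be put in numbers. Assuming a solubility product of about 1.1e-10 for BaSO4 at 25 °C (a commonly quoted value) and that the added barium dominates the barium balance:

```python
import math

K_sp = 1.1e-10        # assumed solubility product of BaSO4 at 25 C

# In pure water: [Ba2+] = [SO4^2-] = sqrt(Ksp)
s0 = math.sqrt(K_sp)

# After bringing [Ba2+] up to 0.01 M, the sulfate concentration must drop
# so that the ion product stays at Ksp:
ba = 1.0e-2
so4 = K_sp / ba
print(f"[SO4^2-]: {s0:.2e} M -> {so4:.2e} M")
```

So adding the common ion suppresses the sulfate concentration by roughly three orders of magnitude.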
http://openstudy.com/updates/504ae819e4b059e709a3fc19
## enticingly 2 years ago Which expression is equivalent to the sine of the angle γ for the triangle of area A shown below? a. 2A/(bc)  b. a/c  c. b/c  d. Not enough information
1. Algebraic!
that a right triangle?
2. enticingly
[drawing of the triangle]
3. enticingly
OK THIS ONE IS MORE ACCURATE :)
4. Denebel
Is that a right triangle?
5. Algebraic!
6. enticingly
No
7. enticingly
8. punnus
To solve this question, draw a perpendicular onto side b. We know that the area of a right-angled triangle is $\frac{1}{2} \times base \times height$. Now we have two triangles with respective bases $b_1$ and $b_2$. [drawing: the height h splits side b into b1 and b2] Now use the area formula: the area A of the triangle is the combined area of the two triangles, $A = \frac{1}{2} h \times (b_1 + b_2)$. Now $\sin(y)$ is $h/c$. Solve the area equation for h, substitute it into $\sin(y)$, and you will get the answer $\frac{2A}{bc}$.
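The derivation boils down to the area formula A = ½ · b · c · sin(γ) for two sides enclosing the angle γ, solved for sin(γ). A quick numeric check with arbitrary values:

```python
import math

b, c, gamma = 5.0, 7.0, math.radians(40)   # arbitrary sides enclosing gamma
A = 0.5 * b * c * math.sin(gamma)          # triangle area from two sides + angle
print(2 * A / (b * c) - math.sin(gamma))   # ~0: option (a), 2A/(bc), recovers sin
```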
https://www.physicsforums.com/threads/surface-integral-of-a-cylinder.654627/
|
# Homework Help: Surface Integral of a Cylinder!
1. Nov 25, 2012
### Syrena
1. The problem statement, all variables and given/known data
Let S denote the closed cylinder with bottom given by z = 0, top given by z = 4, and lateral surface given by the equation x^2 + y^2 = 9. Orient S with outward normals. Determine the indicated scalar and vector surface integral, ∬ x^2 i dS. (I have tried to solve this problem, but I don't think I have done it correctly, since I get the answer 0. Please, if anybody can help; this task is very important to get right.)
2. Relevant equations
$\iint_S f \, dS = \iint_D f(\mathbf{X}(s,t)) \, \lVert \mathbf{T}_s \times \mathbf{T}_t \rVert \, ds \, dt$
3. The attempt at a solution
Since this is a cylinder, (I think) we can slice it into 3 parts:
S1 (lateral cylindrical surface), S2 (bottom disk), and S3 (top disk).
I parametrized the three smooth pieces as follows:
S1 (lateral cylindrical surface): x = 3cos(t), y = 3sin(t), z = t, with boundaries 0 ≤ s ≤ 2π and 0 ≤ t ≤ 4
S2 (bottom disk): x = s·cos(t), y = s·sin(t), z = 0, with boundaries 0 ≤ s ≤ 3 and 0 ≤ t ≤ 2π
S3 (top disk): x = s·cos(t), y = s·sin(t), z = 4, with boundaries 0 ≤ s ≤ 3 and 0 ≤ t ≤ 2π
Then I found Ts × Tt for each piece, that is:
S1: Ts × Tt = (3cos(s), -3sin(s), 0)
S2: Ts × Tt = (0, 0, s·cos²(t) + s·sin²(t))
S3: Ts × Tt = (0, 0, s·cos²(t) + s·sin²(t))
Then I thought I could substitute the parametrized expressions into the x² position, dotted with Ts × Tt.
S2 and S3 will then give 0 (since their Ts × Tt has 0 in the x position).
The last one, S1:
∬ x² dS = ∬ (3²cos²(s), 0, 0) · (3cos(s), 3sin(s), 0) ds dt (boundaries 0 to 4 for t, 0 to 2π for s)
= ∬ 12 cos³(s) ds dt
= 12 ∬ cos³(s) ds dt
= 12 ∫ [-3sin(s)cos²(s)] dt, with s evaluated at 0 and 2π
This gives 12 ∫ 0 dt, which gives 0 <-- and I don't think this is the right answer. If anybody can help.
2. Nov 25, 2012
### HallsofIvy
Was this a typo? You should have x= 3Cos(s), y= 3sin(s), not "t".
It is a bit confusing to use the same letters as before for the parameters, but change their meaning.
Okay, so the "t" above was a typo. This is correct for the lateral surface.
S2: Ts × Tt = (0, 0, s·cos²(t) + s·sin²(t))
You do know that $s\cos^2(t) + s\sin^2(t) = s$, right? :tongue:
Actually, 0 is correct! This integral measures the flow of the vector quantity $x^2$i through the cylinder. Over the top and bottom, which are parallel to the xy-plane, there is no flow through the surface. For the lateral surface, the flow into the cylinder on the x < 0 side is cancelled by the flow out of the cylinder on the x > 0 side.
3. Nov 25, 2012
### Syrena
I Thank Thee!
I did find though that I had some mistakes in my calculations:
= ∬ 12 cos³(s) ds dt (boundaries 0 and 4 for dt, 0 and 2π for ds)
should be ∫ 12 · (1/12)(9sin(s) + sin(3s)) dt = ∫ (9sin(s) + sin(3s)) dt, instead of the 12 ∫ -3sin(s)cos²(s) dt that I wrote at the beginning.
But ∫ (9sin(s) + sin(3s)) dt, with s evaluated at the boundaries 0 and 2π, also gives the answer 0.
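HallsofIvy's symmetry argument is easy to check numerically: on the lateral surface, F·n dS = 27 cos³(s) ds dz, and cos³ integrates to zero over a full period. A midpoint-rule sketch (plain Python, no external dependencies):

```python
import math

# Flux of F = x^2 i through the lateral surface x^2 + y^2 = 9, 0 <= z <= 4.
# There n = (cos s, sin s, 0) and dS = 3 ds dz, so F.n dS = 27 cos^3(s) ds dz.
ns = 2000
h = 2 * math.pi / ns
flux = sum(27 * math.cos((i + 0.5) * h) ** 3 * h * 4.0 for i in range(ns))
print(flux)   # ~0: inflow on the x < 0 side cancels outflow on the x > 0 side
```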
http://mathhelpforum.com/algebra/7288-need-some-basic-algerbra-help.html
# Thread: Need some basic algebra help
1. ## Need some basic algebra help
Sorry guys for posting such a basic question, but I can't find it anywhere.
How do you recognize a quadratic equation? I know about counting the degree and such, but is there maybe a test that can be done on each equation?
For example: is 7x+4xy-2x+5y a quadratic?
Oh, and someone told me that xy=6 is also a quadratic... how is that? Because of the degree?
Thanks guys
Derg
2. From MathWorld: "A quadratic equation is a second-order polynomial equation in a single variable x, $ax^2+bx+c=0$, where $a \ne 0$."
I don't see how xy=6 is quadratic.
3. Originally Posted by Dergyll
Sorry guys for posting such basic question but I can't find it anywhere.
How do you recognize a quadratic equation? I know about counting the degree and stuff but is there maybe a test that can be done to test each equation?
Like for example: 7x+4xy-2x+5y Is this a quadratic?
Oh and someone told me that xy=6 is also a quadratic...how is that? Because of degree?
THanks guys
Derg
I suppose you could call it quadratic if you define quadratic as being a second-order equation. But I too had thought it was only a quadratic if the expression was a polynomial.
-Dan
4. I'm thinking it depends on how you count the degrees; I've never fully grasped the idea of degrees, though... How do you count degrees?
How about the equation 7x+4xy-2x+5y, is it a quadratic? My teacher specifically said that xy=6 is a quadratic, something about a circle being involved...
Any ideas guys?
Thanks a bunch
Derg
5. Got it
This is how my teacher defined it: an equation with a degree of 2 is considered a quadratic equation, so technically xy=6 can be written as
x^1 times y^1 = 6, and the degrees 1 from x and 1 from y add up to 2. But what exactly are degrees?
So I guess 7x+4xy-2x+5y is a quadratic according to this definition...
Derg
When you have a multivariable polynomial, the degree of each term is the sum of its exponents, and the degree of the polynomial is the largest of these.
Also, $xy=6$ is a rotated hyperbola (a second-degree curve).
7. Sorry.
So does this mean that xy=6 is a quadratic by definition? When I graphed it on my calculator (by dividing both sides by x and then graphing), I found that it is 2 curved parabolas rotated, one in quadrant 2 and another in 4.
Thanks for the help!
Derg
8. Originally Posted by Dergyll
Sorry.
So does this mean that xy=6 is a quadratic in definition?
In that sense, yes.
When I graphed it on my calculator (by dividing both sides by x and then graphing it) and found that it is 2 curved parabolas rotated, one in quadrant 2 and another in 4.
It is not a pair of parabolas. It is a single hyperbola.
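The "rotated hyperbola" remark above can be made concrete: rotating the axes by 45 degrees, x = (u - v)/sqrt(2) and y = (u + v)/sqrt(2), turns xy = 6 into u²/12 - v²/12 = 1, the standard form of a single hyperbola. A quick numeric check of that substitution:

```python
import math

# Rotate axes by 45 degrees: x = (u - v)/sqrt(2), y = (u + v)/sqrt(2).
# Then xy = (u^2 - v^2)/2, so xy = 6 becomes u^2/12 - v^2/12 = 1: a hyperbola.
r = math.sqrt(2)
for u, v in [(4.0, 2.0), (math.sqrt(13.0), 1.0), (4.0, -2.0)]:  # u^2 - v^2 = 12
    x, y = (u - v) / r, (u + v) / r
    assert math.isclose(x * y, 6.0)
print("points with u^2 - v^2 = 12 all satisfy xy = 6")
```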
http://tex.stackexchange.com/questions/26926/footcite-in-beamer
# \footcite in Beamer
I'm using biblatex and its \footcite command within beamer. When I want to cite two references together, as in \footcite{ref1,ref2}, I get a single footnote marker with the two references run together in one footnote.
My biblatex setup is as follows, and I am not sure what is wrong:
\usepackage[style=verbose,autocite=footnote,maxnames=10,babel=hyphen,hyperref=true,abbreviate=false,backend=biber]{biblatex}
I think you should use \footcites{key1}{key2}, or \mfootcite{key1,key2} with the mcite package option. In any case, you'll get a simple marker in the text (it's a single footnote, after all), and a semicolon separated list of the references associated to key1, key2 in the footnote text. – Gonzalo Medina Aug 30 '11 at 1:28
yes I understand this is a single footnote but these are still two different references. I was expecting a result similar to \cite{key1,key2} – pluton Aug 30 '11 at 2:24
Maybe I'm just slow-witted, but I don't get what result you are expecting. Could you add a detailed description? – lockstep Sep 17 '11 at 21:32
I would expect to superscripts with two distinct footnotes. does it make sense? – pluton Sep 20 '11 at 15:37
I want to refer to the documentation of biblatex:
\footcite[ prenote ][ postnote ]{ key }
\footcitetext[ prenote ][ postnote ]{ key }
These commands use a format similar to \cite but put the entire citation in a footnote and add a period at the end. In the footnote, they automatically capitalize the name prefix of the first name if the useprefix option is enabled, provided that there is a name prefix and the citation style prints any name at all. \footcitetext differs from \footcite in that it uses \footnotetext instead of \footnote.
You can see that this behavior is independently from beamer.
```latex
\listfiles
\documentclass{article}
\usepackage[style=verbose,autocite=footnote,maxnames=10,babel=hyphen,hyperref=true,abbreviate=false,backend=biber,mcite]{biblatex}
\begin{document}
Test\footcite{ctan,companion}
Test\footcites{ctan}{companion}
Test\mfootcite{ctan,companion}
Test\footcite{ctan}\footcite{companion}
\end{document}
```
You can create your own command. My first idea is very simple:
```latex
\newrobustcmd*\footcitesep[1]{%
  \forcsvlist{\footcite}{#1}%
}
```
http://artspot.esmonserrate.org/c8wb8al/f9b02e-application-of-differential-equation-in-economics
Let us see some real-time applications of differential equations. Within mathematics, a differential equation is an equation that relates one or more functions and their derivatives. The theory of differential equations has become an essential tool of economic analysis, particularly since computers became commonly available, and the same machinery serves physics, chemistry, biology and engineering. Modeling a situation almost always means that somebody once derived the differential equation being used.

The order of a differential equation is the order of the highest derivative that appears in it. A second-order equation, for example, involves the unknown function y, its derivatives y' and y'', and the variable x; second-order linear equations model a number of processes in physics. The degree is the power of the highest-order derivative once the equation is written in polynomial form in the derivatives; if the equation cannot be written in polynomial form, its degree is undefined.

Application 1: exponential growth. Let P(t) be a quantity whose rate of increase is proportional to P itself,

dP/dt = kP, with k > 0,

whose solution is P(t) = P(0)e^{kt}. Malthus used this principle to predict how a species would grow over time; the constant k varies with the species.

Application 2: Newton's law of cooling, used in forensics. Suppose a body is found at 11:30 p.m. with temperature 94.6 F in a 70 F room, and at 12:30 a.m. the temperature is 93.4 F. Writing T for the hours elapsed since death (when the temperature was 98.6 F) and κ for the cooling constant, we obtain 94.6 − 70 = e^{Tκ}(98.6 − 70), so e^{Tκ} = 24.6/28.6, and similarly e^{(T+1)κ} = 23.4/28.6.

Application 3: asset pricing in continuous time. One postulates that the capital price q_t follows

dq_t / q_t = μ_t^q dt + σ_t^q dZ_t,

where Z_t is a standard Brownian motion, μ is a controllable rate of capital growth, σ is the given and fixed "fundamental risk" of the economy, and the drift μ_t^q and volatility σ_t^q are unknown quantities to be found. Solow's economic growth model is another standard example; it can be modified to include various inputs, such as growth in the labor force and technological improvements.
Application 4: linear first-order dynamics. For the equation dx/dt = b − ax with initial value x_0, as t increases without bound x(t) converges to b/a if a > 0, and grows without bound if a < 0 and x_0 ≠ b/a. That is, the equilibrium is globally stable if a > 0 and unstable if a < 0; b/a is the unique equilibrium of the differential equation.

More generally, differential equations are applied across disciplines:

1) They describe exponential growth and decay, and population and species growth.
2) They describe the change in investment return over time, and help economists find optimum investment strategies and model money flow.
3) In medical science they model cancer growth and the spread of disease.
4) They describe the motion of electricity, of waves, and of spring and pendulum systems.
5) They model chemical reactions and radioactive half-life.
6) In industrial organization, firm dynamics in an oligopolistic industry can be modeled as a differential game.

Application 5: mixing problems. If Q(t) is the amount of solute in a well-stirred tank, the differential equation is centered on the change in the amount of solute per unit time:

dQ/dt = (rate in) − (rate out).

Typically the resulting differential equation is either separable or first-order linear.
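A worked instance of the rate-in minus rate-out model: a hypothetical tank holding 100 L of pure water, with brine at 0.5 kg/L flowing in at 3 L/min and the well-mixed solution draining at the same rate (all numbers invented for illustration). The equation dQ/dt = r·c_in − (r/V)·Q is first-order linear, and a sketch of its closed-form solution is:

```python
import math

V, r, c_in, q0 = 100.0, 3.0, 0.5, 0.0   # litres, L/min, kg/L, initial kg of salt

def q_exact(t: float) -> float:
    """Solution of dQ/dt = r*c_in - (r/V)*Q with Q(0) = q0."""
    return c_in * V + (q0 - c_in * V) * math.exp(-r * t / V)

# The salt content rises from 0 toward the steady state c_in*V = 50 kg.
for t in (0, 50, 200):
    print(t, round(q_exact(t), 3))
```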
Application 6: forced vibrations. Suppose that, in addition to the restoring force and the damping force, the motion of a spring is affected by an external force. Newton's second law then gives a second-order linear differential equation for the displacement. The same second-order machinery describes pendulums and waves, while the current in a series electric circuit consisting of an inductor and a resistor obeys a first-order linear equation.

In actuarial science and finance, the differential equations of Thiele and of Black and Scholes (and hybrids of the two) govern life-insurance reserves and option prices.

Modeling — the procedure of writing a differential equation to explain a physical process — forces us to make assumptions that do not portray reality precisely, but without them the problems would be beyond the scope of solution. Among the approaches to solving an ordinary differential equation, the most commonly used are the classical approach for a linear ODE and the Laplace transform approach. Whatever method produces a candidate solution, the ultimate test is always the same: does it satisfy the equation?
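The Newton's-cooling example above can be finished numerically. A short sketch, with time measured in hours from death (body at 98.6 F) in a 70 F room:

```python
import math

# Two measurements an hour apart: 94.6 F at unknown time T after death,
# 93.4 F at time T + 1, giving e^{T kappa} = 24.6/28.6 and
# e^{(T+1) kappa} = 23.4/28.6.  Dividing the two isolates kappa:
kappa = math.log(23.4 / 24.6)        # cooling constant per hour (negative)
T = math.log(24.6 / 28.6) / kappa    # hours between death and first reading
print(f"kappa = {kappa:.4f}/h, death about {T:.2f} h before the first reading")
```

So the first reading was taken roughly three hours after death.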
https://pos.sissa.it/346/052
Volume 346 - 23rd International Spin Physics Symposium (SPIN2018) - Parallel Session: Structure of the Nucleon: TMDs (A. Bacchetta, J. Drachenberg and B. Parsamyan)
Measurement of azimuthal asymmetries in SIDIS on unpolarized protons
A. Moretti* on behalf of the COMPASS Collaboration
*corresponding author
Pre-published on: August 19, 2019
Published on: August 23, 2019
Abstract
The COMPASS Collaboration is measuring the asymmetries in the azimuthal distributions of positive and negative hadrons produced in Deep Inelastic Scattering (DIS) on unpolarized protons. The data have been collected in 2016 and 2017 with a 160 GeV/$c$ muon beam scattering off a liquid hydrogen target. The amplitudes of three modulations, $A_{UU}^{\cos\phi_h}$, $A_{UU}^{\cos 2\phi_h}$ and $A_{LU}^{\sin\phi_h}$, are measured as functions of the Bjorken variable $x$, of the fraction $z$ of the virtual-photon energy carried by the hadron, and of the hadron transverse momentum $p_{T}^{h}$ with respect to the virtual photon. The relevance of azimuthal asymmetries lies in the possibility of obtaining information on the intrinsic transverse momentum of the quark as well as on the still unknown Boer-Mulders parton distribution function. The preliminary results from 2016 data shown here confirm the strong kinematic dependencies observed in previous measurements conducted by COMPASS, HERMES and CLAS.
DOI: https://doi.org/10.22323/1.346.0052
https://calculus7.org/tag/oeis/
## Relating integers by differences of reciprocals
Let’s say that two positive integers m, n are compatible if the difference of their reciprocals is the reciprocal of an integer: that is, mn/(m-n) is an integer. For example, 2 is compatible with 1, 3, 4, and 6 but not with 5. Compatibility is a symmetric relation, which we’ll denote ${m\sim n}$ even though it’s not an equivalence. Here is a chart of this relation, with red dots indicating compatibility.
### Extremes
A few natural questions arise, for any given ${n}$:
1. What is the greatest number compatible with ${n}$?
2. What is the smallest number compatible with ${n}$?
3. How many numbers are compatible with ${n}$?
Before answering them, let’s observe that ${m\sim n}$ if and only if ${|m-n|}$ is a product of two divisors of ${n}$. Indeed, for ${mn/(m-n)}$ to be an integer, we must be able to write ${|n-m|}$ as the product of a divisor of ${m}$ and a divisor of ${n}$. But a common divisor of ${m}$ and ${m-n}$ is also a divisor of ${n}$.
Of course, “a product of two divisors of ${n}$” is the same as “a divisor of ${n^2}$“. But it’s sometimes easier to think in terms of the former.
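Both forms of the criterion are easy to check by brute force; a small Python sketch (the function names are mine):

```python
def is_compatible(m: int, n: int) -> bool:
    """m ~ n iff 1/n - 1/m = (m - n)/(mn) is the reciprocal of an integer,
    i.e. m - n divides m*n."""
    return m != n and (m * n) % (m - n) == 0

def divides_square(m: int, n: int) -> bool:
    """Equivalent test: |m - n| is a divisor of n^2
    (equivalently, a product of two divisors of n)."""
    return m != n and (n * n) % abs(m - n) == 0

# As in the text, 2 is compatible with 1, 3, 4 and 6 but not with 5:
print(sorted(m for m in range(1, 10) if is_compatible(m, 2)))
```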
Question 1 is now easy to answer: the greatest number compatible with ${n}$ is ${n+n^2 = n(n+1)}$.
But there is no such easy answer to Questions 2 and 3, because of the possibility of overshooting into negative territory when subtracting a divisor of ${n^2}$ from ${n}$. The answer to Question 2 is ${n-d}$ where ${d}$ is the greatest divisor of ${n^2}$ that is less than ${n}$. This is the OEIS sequence A063428, pictured below.
The lines are numbers with few divisors: for a prime p, the smallest compatible number is p-1, while for 2p it is p, etc.
The answer to Question 3 is: the number of distinct divisors of ${n^2}$, plus the number of such divisors that are less than ${n}$. This is the OEIS sequence A146564.
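The answers to Questions 2 and 3 can be tabulated directly. A naive divisor scan, fine for small n (helper names are mine):

```python
def square_divisors(n: int) -> list[int]:
    """All divisors of n^2, by trial division."""
    sq = n * n
    return [d for d in range(1, sq + 1) if sq % d == 0]

def smallest_compatible(n: int) -> int:
    """n - d, with d the greatest divisor of n^2 below n (OEIS A063428); n >= 2."""
    return n - max(d for d in square_divisors(n) if d < n)

def compatible_count(n: int) -> int:
    """Number of divisors of n^2, plus the number of those below n (OEIS A146564)."""
    ds = square_divisors(n)
    return len(ds) + sum(1 for d in ds if d < n)

print([smallest_compatible(n) for n in (2, 10, 22)])  # [1, 5, 11]
print([compatible_count(n) for n in (1, 2, 3)])
```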
### Chains
Since consecutive integers are compatible, every number ${n}$ is a part of a compatible chain ${1\sim \cdots\sim n}$. How to build a short chain like this?
Strategy A: starting with ${n}$, take the smallest integer compatible with the previous one. This is sure to reach 1. But in general, this greedy algorithm is not optimal. For n=22 it yields 22, 11, 10, 5, 4, 2, 1 but there is a shorter chain: 22, 18, 6, 2, 1.
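Strategy A in Python (by the observation above, the smallest number compatible with n is n minus the greatest divisor of n² below n):

```python
def smallest_compatible(n):
    d = max(d for d in range(1, n) if (n * n) % d == 0)   # greatest divisor of n^2 below n
    return n - d

def greedy_chain(n):
    chain = [n]
    while chain[-1] != 1:
        chain.append(smallest_compatible(chain[-1]))
    return chain

print(greedy_chain(22))   # [22, 11, 10, 5, 4, 2, 1]; longer than the optimal 22, 18, 6, 2, 1
```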
Strategy B: starting with ${1}$, write the greatest integer compatible with the previous one (which is ${k(k+1)}$ by the above). This initially results in the OEIS sequence A007018: 1, 2, 6, 42, 1806, … which is notable, among other things, for giving a constructive proof of the infinitude of primes, even with a (very weak) lower density bound. Eventually we have to stop adding ${k^2}$ to ${k}$ every time; so instead add the greatest divisor of ${k^2}$ such that the sum does not exceed ${n}$. For 22, this yields the optimal chain 1, 2, 6, 18, 22 stated above. But for 20 strategy B yields 1, 2, 6, 18, 20 while the shortest chain is 1, 2, 4, 20. Being greedy is not optimal, in either up or down direction.
Strategy C is not optimal either, but it is explicit and provides a simple upper bound on the length of a shortest chain. It uses the expansion of n in the factorial number system which is the sum of factorials k! with coefficients less than k. For example ${67 = 2\cdot 4! + 3\cdot 3! + 0\cdot 2! + 1\cdot 1!}$, so its factorial representation is 2301.
If n is written as abcd in the factorial system, then the following is a compatible chain leading to n (possibly with repetitions), as is easy to check:
1, 10, 100, 1000, a000, ab00, abc0, abcd
In the example with 67, this chain is
1, 10, 100, 1000, 2000, 2300, 2300, 2301
in factorial notation, which converts to decimals as
1, 2, 6, 24, 48, 66, 66, 67
The repeated 66 (due to 0 in 2301) should be removed.
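Strategy C in Python, reproducing the worked example for 67:

```python
from math import factorial

def factorial_digits(n):
    """Digits of n in the factorial number system, most significant first."""
    digits, k = [], 2
    while n:
        digits.append(n % k)
        n //= k
        k += 1
    return digits[::-1]

def strategy_c_chain(n):
    digits = factorial_digits(n)
    L = len(digits)
    chain = [factorial(i) for i in range(1, L + 1)]   # 1, 10, 100, ... in factorial notation
    total = 0
    for i, d in enumerate(digits):                    # fill in digits from the top: a000, ab00, ...
        total += d * factorial(L - i)
        if total != chain[-1]:                        # drop repeats caused by zero digits
            chain.append(total)
    return chain

print(factorial_digits(67))    # [2, 3, 0, 1], i.e. 2301
print(strategy_c_chain(67))    # [1, 2, 6, 24, 48, 66, 67]
```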
Thus, for an even number s, there is a chain of length at most s leading to any integer that is less than (s/2+1)!.
As a consequence, the smallest possible length of chain leading to n is ${O(\log n/\log \log n)}$. Is this the best possible O-estimate?
All of the strategies described above produce monotone chains, but the shortest chain is generally not monotone. For example, 17 can be reached by the non-monotone chain 1, 2, 6, 18, 17 of length 5 but any monotone chain will have length at least 6.
The smallest-chain-length sequence is 1, 2, 3, 3, 4, 3, 4, 4, 4, 4, 5, 4, 5, 5, 4, 5, 5, 4, 5, 4, 5, 5, 5, 4, 5, … which is not in OEIS. Here is its plot:
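The sequence can be recomputed by breadth-first search from 1 in the compatibility graph. One caveat: the cap on intermediate values below is my assumption, since chains need not be monotone; it comfortably covers every chain mentioned above, but is not proved sufficient in general.

```python
from collections import deque

def compatible(m, k):
    return m != k and (m * k) % (m - k) == 0

def min_chain_lengths(up_to, cap=1000):
    """Shortest chain lengths 1 ~ ... ~ n via BFS from 1.

    cap bounds the intermediate values considered (an assumption).
    """
    dist = {1: 0}
    queue = deque([1])
    while queue:
        m = queue.popleft()
        for k in range(1, cap + 1):
            if k not in dist and compatible(m, k):
                dist[k] = dist[m] + 1
                queue.append(k)
    return [dist[n] + 1 for n in range(1, up_to + 1)]   # nodes = edges + 1

print(min_chain_lengths(25))
# [1, 2, 3, 3, 4, 3, 4, 4, 4, 4, 5, 4, 5, 5, 4, 5, 5, 4, 5, 4, 5, 5, 5, 4, 5]
```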
The sequence 1, 2, 3, 5, 11, 29, 67, 283, 2467,… lists the smallest numbers that require a chain of given length — for example, 11 is the first number that requires a chain of length 5, etc. Not in OEIS; the rate of growth is unclear.
## Real zeros of sine Taylor polynomials
The more terms of Taylor series ${\displaystyle \sin x = x-\frac{x^3}{3!}+ \frac{x^5}{5!}- \cdots }$ we use, the more resemblance we see between the Taylor polynomial and the sine function itself. The first-degree polynomial matches one zero of the sine, and gets the slope right. The third-degree polynomial has three zeros in about the right places.
The fifth-degree polynomial will of course have … wait a moment.
Since all four critical points are in the window, there are no real zeros outside of our view. Adding the fifth-degree term not only fails to increase the number of zeros to five, it even drops it back to the level of ${T_1(x)=x}$. How odd.
Since the sine Taylor series converges uniformly on bounded intervals, for every ${ A }$ there exists ${ n }$ such that ${\max_{[-A,A]} |\sin x-T_n(x)|<1 }$. Then ${ T_n }$ will have the same sign as ${ \sin x }$ at the maxima and minima of the latter. Consequently, it will have about ${ 2A/\pi }$ zeros on the interval ${[-A,A] }$. Indeed, the intermediate value theorem guarantees that many; and the fact that ${T_n'(x) \approx \cos x }$ on ${ [-A,A]}$ will not allow for extraneous zeros within this interval.
Using the Taylor remainder estimate and Stirling's approximation, we find ${A\approx (n!)^{1/n} \approx n/e }$. Therefore, ${ T_n }$ will have about ${ 2n/(\pi e) }$ real zeros at about the right places. What happens when ${|x| }$ is too large for Taylor remainder estimate to be effective, we can't tell.
Let's just count the zeros, then. Sage online makes it very easy:
sineroots = [[2*n-1,len(sin(x).taylor(x,0,2*n-1).roots(ring=RR))] for n in range(1,51)]
scatter_plot(sineroots)
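The same count can be reproduced without Sage; a numpy sketch (numerical root-finding, so very high degrees should be taken with a grain of salt):

```python
import numpy as np
from math import factorial

def real_zero_count(deg):
    """Count real zeros of the sine Taylor polynomial of odd degree deg."""
    coeffs = [0.0] * (deg + 1)                # coeffs[p] multiplies x^p
    for k in range((deg + 1) // 2):
        coeffs[2 * k + 1] = (-1) ** k / factorial(2 * k + 1)
    roots = np.roots(coeffs[::-1])            # numpy wants the highest degree first
    return int(sum(abs(r.imag) < 1e-8 for r in roots))

print([real_zero_count(d) for d in (1, 3, 5)])   # [1, 3, 1]: T_5 drops back to one real zero
```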
The up-and-down pattern in the number of zeros makes for a neat scatter plot. How close is this data to the predicted number ${ 2n/(\pi e) }$? Pretty close.
scatter_plot(sineroots,facecolor='#eeee66') + plot(2*n/(pi*e),(n,1,100))
The slope of the blue line is ${ 2/(\pi e) \approx 0.2342 }$; the (ir)rationality of this number is unknown. Thus, just under a quarter of the zeros of ${ T_n }$ are expected to be real when ${ n }$ is large.
The actual number of real zeros tends to exceed the prediction (by only a few) because some Taylor polynomials have real zeros in the region where they no longer follow the function. For example, ${ T_{11} }$ does this:
Richard S. Varga and Amos J. Carpenter wrote a series of papers titled Zeros of the partial sums of ${ \cos z }$ and ${\sin z }$ in which they classify real zeros into Hurwitz (which follow the corresponding trigonometric function) and spurious. They give the precise count of the Hurwitz zeros: ${1+2\lfloor n/(\pi e)\rfloor }$ for the sine and ${2\lfloor n/(\pi e)+1/2\rfloor }$ for the cosine. The total number of real roots does not appear to admit such an explicit formula. It is the sequence A012264 in the OEIS.
2018-04-24 14:32:02
https://chrisdoescoding.com/kb/shaders.html
|
See:
## GLSL
GLSL is OpenGL's shader language. WebGPU is coming up with its own shader language, but as of this writing (Oct 13, 2020), the shader language it uses is SPIR-V, which can be cross-compiled from GLSL pretty well.
### Matrices: Row-Order vs Column-Order
OpenGL and, by extension, its shading language GLSL, use a column-order mapping for its matrices, e.g., mat4.
By contrast, my game engine Grimoire utilizes row-order mapping for its matrix, Matrix44.
A matrix in column order:
1. has its translation components in the 4th column.
2. has its rotational components along the x-, y-, and z-axes in the first, second, and third columns respectively.
A matrix in row order:
1. has its translation components in the 4th row.
2. has its rotational components along the x-, y-, and z-axes in the first, second, and third rows respectively.
Luckily, when Grimoire lays out its Matrix44 sequentially in memory as a 1-dimensional array and GLSL reads that memory into its mat4 object, each component ends up exactly where mat4 needs it.
| 1D Array Index | 0 | 1 | 2 | 3 | 4 | 5 | 6 | 7 | 8 | 9 | 10 | 11 | 12 | 13 | 14 | 15 |
|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|
| Matrix44 | m00 | m01 | m02 | m03 | m10 | m11 | m12 | m13 | m20 | m21 | m22 | m23 | m30 | m31 | m32 | m33 |
| mat4 | m00 | m10 | m20 | m30 | m01 | m11 | m21 | m31 | m02 | m12 | m22 | m32 | m03 | m13 | m23 | m33 |
From the table above, you can see that the translation components of a Matrix44, located at m30, m31, and m32, will be placed in the 1D array at indices 12, 13, and 14, respectively.
When GLSL reads the 1D array, it'll take the data at indices 12, 13, and 14, and place them correctly in the 4th column, at m03, m13, and m23, respectively.
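To see the layout agreement concretely, here is a small Python sketch (Python stands in for the C++/GLSL pair; only the index arithmetic matters):

```python
# simulated Matrix44 memory: sixteen floats laid out row by row
flat = [float(i) for i in range(16)]    # flat[i] just holds the value i

# Grimoire's row-order view: element (r, c) lives at index 4*r + c
matrix44 = [[flat[4 * r + c] for c in range(4)] for r in range(4)]

# GLSL's column-order view of the SAME memory: element (r, c) is read from index r + 4*c
mat4 = [[flat[r + 4 * c] for c in range(4)] for r in range(4)]

print(matrix44[3][:3])                  # translation in the 4th row: [12.0, 13.0, 14.0]
print([mat4[r][3] for r in range(3)])   # the same data lands in mat4's 4th column
```

Reading row-major memory column-major yields the transpose, which is exactly why the translation components at indices 12, 13, 14 land in mat4's 4th column.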
### Matrices: Multiplication
In GLSL, multiplication between two mat4s works as you would expect: each entry of the product is the dot product of a row of the left-hand mat4 with a column of the right-hand mat4.
Also, as you would expect, matrix multiplication is NOT commutative. Ma * Mb != Mb * Ma.
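A quick sketch of both points in Python, using hypothetical example transforms (a translation T and a scale S, not from the article):

```python
def mat_mul(A, B):
    """Row-times-column 4x4 multiply: out[r][c] = dot(row r of A, column c of B)."""
    return [[sum(A[r][k] * B[k][c] for k in range(4)) for c in range(4)] for r in range(4)]

T = [[1, 0, 0, 1], [0, 1, 0, 2], [0, 0, 1, 3], [0, 0, 0, 1]]   # translation by (1,2,3)
S = [[2, 0, 0, 0], [0, 2, 0, 0], [0, 0, 2, 0], [0, 0, 0, 1]]   # uniform scale by 2

print(mat_mul(T, S) == mat_mul(S, T))   # False: Ma * Mb != Mb * Ma
```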
2021-06-24 22:17:51
https://cracku.in/1-if-all-the-symbols-are-dropped-from-the-arrangemen-x-ibps-clerk-2014
|
### IBPS Clerk 2014 Question 1
Instructions
Study the following arrangement carefully and answer the given questions.
W 2 X T 3 * Z b U 4 O P 9 \$ Q G D 5 # W E J 6 & 8 K @ 7 +
Question 1
# If all the symbols are dropped from the arrangement, then which will be the eleventh element from the right end of the given arrangement?
Solution
If all the symbols are dropped, the arrangement becomes:
W 2 X T 3 Z b U 4 O P 9 Q G D 5 W E J 6 8 K 7
The eleventh element from the right end is Q.
Correct option is option A
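The same elimination can be scripted; a quick Python sketch of the reasoning:

```python
arrangement = "W 2 X T 3 * Z b U 4 O P 9 $ Q G D 5 # W E J 6 & 8 K @ 7 +".split()
kept = [e for e in arrangement if e.isalnum()]   # drop the six symbols
print(kept[-11])                                 # 'Q', the eleventh element from the right
```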
2022-08-10 16:58:20
https://catherine.cloud/2014/11/
|
## Spin Manifolds
Disclaimer: This exposition is profane. It is the result of me trying to be productive during a raging headache, and ending up with comic relief.
I recommend this to be read when you are looking for a laugh, or already feeling silly.
The motivation for the field of differential geometry can be thought of as a large group of punk math students screaming:
“We won’t wait for the world to let us study change! We want to differentiate on whatever we want! Give us more interesting manifolds than just $$\mathbb{R}^n$$. Fuck the Euclidean system!”
A rough and ready definition of an $$n$$-dimensional manifold is a topological space which looks locally like what the government wants: $$\mathbb{R}^n$$. With this in mind, my punk math friends, let’s figure out how to study change.
First, how the hell do we assemble the tangent spaces at various points of a manifold into a coherent whole?
I hope we can agree that the total derivative of a $$C^{\infty}$$ function should change in a $$C^{\infty}$$ manner from point to point.
We create this differentiability for functions over our manifold M by considering behavior in the overlaps of coordinate neighborhoods.
Consider an arbitrary point $$p \in M$$, and two distinct neighborhoods $$U_\alpha$$ and $$U_\beta$$ which both contain $$p$$.
The transition map $$\tau_{\alpha, \beta}:= \psi_\alpha^{-1}\psi_\beta$$ (fuck you I’ll put composition in any order I want, it’s a free country)
has type $$\tau_{\alpha, \beta}: \mathbb{R}^n \to \mathbb{R}^n$$. This map better be $$C^\infty$$ (differentiable arbitrarily often), or you’re going to have a bad time doing calculus on that motherfucker.
There are many sorts of manifolds, with different requirements for continuity of these transition maps.
In our case, we want to do calculus without a hitch, so we’ll insist that all of our transition functions must be $$C^\infty$$, and refer to such manifolds as “smooth”.
We’re working in a category, where our
• objects = smooth manifolds
• morphisms = $$C^\infty$$-maps between these smooth motherfuckers
We call this category “Diff”, which is the category-cool version of saying “let’s do some differential topology up in this bitch.”
When you hear “Let’s do calculus” your “We’re working in Diff!” alarm bells should go off.
Alright, I’m getting off track. How do we break out of the cage imposed by the Euclidean system? Let’s start with some tangent bundles.
#### That’s right kids, it’s a fucking fiber bundle!
Our manifold $$M$$ is the base space, and the fiber (over each point $$p$$ in $$M$$) is just the tangent plane over that point, which we’ll denote as $$T_p(M)$$ because I’m lazy and don’t want to type “the tangent plane over the point $$p$$ in our manifold $$M$$” each goddamn time.
If you’re a visual thinker like me, here are some pictures to make you feel good about yourself.
Oooh, ahhhh. So shiny.
Alright, I’m going to keep being lazy and give some more shorthand names, claiming that it’s for clarity:
Let $$V^n$$ := the set of all column vectors of height $$n$$
and $$U$$ := an open subset in $$\mathbb{R}^n$$
Recall that we’re working with a fiber bundle, the tangent bundle $$T(M)$$, which is locally trivial: over a chart it looks like $$U \times V^n$$
The base space is our manifold $$M$$
the fibers are our tangent plane $$T_p(M) = p \times V^n$$
Of course, this tells you jack shit about how to actually compute the tangent plane given any point on a surface, but you can go look in any standard multivariable calc book for that stuff.
#### Motherfucking Modules
Think the tangent bundle is general as fuck? You obviously haven’t read enough Grothendieck. The tangent bundle is an example of a motherfucking module bundle.
A module bundle is exactly what you think it is:
a bundle with fibers that are motherfucking modules.
These are usually called vector bundles, but my ring-theorist-friend convinced me that I have to use modules all the time to get an intuition for them, so I’m trying that. In his words: who needs inverses anyway when we can formally append them. I’m not used to modules yet, so I’ll probably commit the sin of switching between modules and vector spaces as we go.
But I’m getting ahead of myself, talking about module bundles before defining a module. Let’s back the fuck up.
What is a motherfucking module?
a vector space – the requirement that scalars be invertible = a module
If you’re feeling like a tight-ass today, here’s the rigorous-as-fuck (RAF) definition of a module in stuff, structure, properties form (aka Baez-style).
Let $$R$$ be a ring and $$1_R$$ be its multiplicative identity.
(stuff): A left $$R$$-module $$V$$ consists of an abelian group $$(V, +)$$
(structure): and an operation $$R \times V \to V$$ ; aka we’re closed under this bitch.
(properties): such that $$\forall r,s \in R$$ and $$x, y \in V$$
distributive as fuck:
1. $$r(x+y) = rx + ry$$
2. $$(r + s)x = rx + sx$$
associative as fuck:
1. $$(rs)x = r(sx)$$
and of course, the multiplicative identity holds, because rings are not pathological little fucks (okay, fine maybe the ring of integers of $$\mathbb{Q}(\sqrt{-5})$$ are little fucks but they still obey this property)
1. $$1_Rx = x$$
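A standard sanity-check example (mine, not the post's): every abelian group is a $$\mathbb{Z}$$-module, with the ring acting by repeated addition.

```latex
n \cdot x = \underbrace{x + \cdots + x}_{n\ \text{times}},
\qquad (-n) \cdot x = -(n \cdot x),
\qquad 0 \cdot x = 0
```

Distributivity and associativity then boil down to rearranging sums, which an abelian group is happy to let you do.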
#### BUNDLES of Motherfucking Modules
We can look at these module bundles in at least 2 ways. More if you get creative, but you heard what happened in Don’t Hug Me I’m Scared.
1. We have a module space assigned to each point in a manifold in such a way that as we move smoothly over the manifold, the module twists smoothly.
2. Locally a module bundle is trivial, so what we care about are the transition functions. These must lie in the automorphism group of our module.
What properties do we need to slap on these motherfuckers to make them play nice with spin representations?
Our manifold $$M$$ must satisfy:
1. orientable
2. Riemannian
3. 2nd Stiefel-Whitney class vanishes
If all of these conditions hold, we say $$M$$ is spin, because who the fuck wants to write “manifold that is orientable, Riemannian, and whose 2nd Stiefel-Whitney class vanishes” every time they talk about slapping a spin structure on a manifold.
Go forward unabashed, even if you don’t know what these words mean. I’m about to tell you!
#### ORIENTABLE
We want our bundle to be orientable, which we can define as a continuation of our previous module bundle definitions:
1. If we can choose a frame which varies smoothly over the manifold (and is always a frame), our bundle is orientable.
2. If our transition functions all lie in the special orthogonal group, SO(n), our bundle is orientable.
#### DO YOU EVEN LIFT, SO(n)?
We need our manifold’s structure group, SO(n), to lift to spin. If it lifts to spin, we can describe the Dirac operator globally!
*confetti*
But you shouldn’t fucking believe everything I say — I hope you’re asking yourself “what the fuck does that even mean?”
#### What the fuck is a spin GROUP?
Praise the Flying Spaghetti Monster, the concept of a Spin group is hella geometric.
Below are illustrations of the $$n$$-dimensional Spin group, $$\text{Spin}(n)$$, as a subobject of their corresponding $$C\ell$$ algebra:
#### What the fuck is a spin STRUCTURE?
The structure group of any principal $$G$$-bundle $$E \to B$$ is just $$G$$.
It should come as no surprise that a spin structure on our manifold $$M$$ is composed of
• a principal $$\text{Spin}(n)$$-bundle, $$\text{Spin}(M) \to M$$
• a bundle morphism $$\psi$$ from $$\text{Spin}(M) \to M$$ to $$SO(M) \to M$$
which restricts fiberwise to the covering homomorphism $$\text{Spin}(n) \to \text{SO}(n)$$.
#### RIEMANNIAN
We study the notion of curvature and its relation to topology under the name ‘Riemannian geometry’, which is where most of the ‘geometry’ in ‘differential geometry’ comes out to play.
Don’t forget that we’re in $$\text{Diff}$$! Our manifolds are smooth motherfuckers.
Slap a Riemannian metric on that motherfucker and we get smoothly varying choices of inner product on tangent spaces.
Fuck yes! Let’s introduce a Riemannian metric on the manifold. In other words, let us pass from $$GL_n$$ to the maximal compact subgroup $$O_n$$.
orientation = reducing the structure group from $$O_n$$ to $$SO_n$$
spin structure = then lifting the structure group to the universal covering group $$Spin_n \to SO_n$$.
#### When does the structure group lift to Spin?
Recall that transition functions must satisfy the cocycle condition.
Thus, lifts of the transition functions must also satisfy this condition. When this is possible, we can attach a spin bundle by specifying that its transition functions = the lifts of the transition functions of the cotangent bundle.
Henceforth, I’ll assume that you motherfuckers are cool with homology and homotopy. I’m also hella tired right now, so the rest of this is a bit wibbly.
All I’m tryna say is: the structure group lifts to Spin if there is no obstruction.
There is no obstruction if the 1st and 2nd Stiefel-Whitney classes vanish (i.e. $$w_1 = w_2 = 0$$).
Don’t take my word for it! You should be asking: what the hell are these classes and what do they have to do with structure groups?
The conditions $$w_1$$ and $$w_2$$ can be interpreted geometrically as follows.
Let $$E$$ be a vector bundle over a manifold $$M$$. Then $$E$$ is orientable iff the restriction of $$E$$ to any circle embedded in $$M$$ is trivial.
If $$M$$ is simply-connected and $$\text{dim}(M) > 4$$, then $$E$$ is spin iff the restriction of $$E$$ to any 2-sphere embedded in $$M$$ is trivial.
Why? $$H_2(M, Z/2)$$ is generated by embedded 2-spheres (when $$\pi_1 = 0$$ and $$\text{dim}(M) > 4$$).
Let $$O_n$$ be the orthogonal group. Before I go, I want to tell you something enticing.
1. $$\pi_0(O_n) = \mathbb{Z}/2$$ — orientations and $$w_1$$ (SO, the connected cover)
2. $$\pi_1(O_n) = \mathbb{Z}/2$$ — spin structure and $$w_2$$ (Spin, the 1-connected cover)
3. $$\pi_2(O_n) = 0$$
4. $$\pi_3(O_n) = \mathbb{Z}$$ — string structure and $$p_1/2$$ (String, the 3-connected cover)
Let’s say that $$X$$ and $$Y$$ are some smooth motherfuckin’ manifolds. When is a map $$X \xrightarrow{f} Y$$ null-homotopic?
When that shit LIFTS. $$f$$ better induce a zero map between the homotopy groups.
$$H^n(X; G) \simeq [X; K(G,n)]$$
If this isomorphism isn’t up your alley, then I highly suggest you revaluate your life decisions because it is sick as fuck! If you’ve never seen it before, have some John Baez!
$$Y_1 \to B^2(\pi_2(Y_1)) = B^2\pi_2(Y)$$
$$c \in H^2(Y_1, \pi_2(Y))$$
$$Y_n$$ lifts to $$Y_{n+1}$$ if an obstruction class in $$H^{n+1}(X, \pi_{n+1}(Y))$$ vanishes.
## “I’m not good at math”
A 10 year old girl and her father sat in the back of my car as I drove them home after Thanksgiving.
During this snippet of the conversation, I felt like I was punched in the chest.
Do you like literature?
Yeah … all the time when I space out.
Do you like literature?
I love stories!
Do you like spelling?
No, I’m bad at spelling.
What makes you say that?
I don’t understand why letters go next
to each other in the order they do.
Imagine that you were taught spelling, and not shown any stories.
Do think you would like literature?
If I didn’t see the stories, how could I like literature?
What you’re learning right now via memorization,
that is to math as spelling is to literature.
What do you mean?
Do you imagine pictures when you read?
Yeah, I love stories!
Do you like patterns?
I like patterns!
Why?
They’re pretty!
I love patterns too. I spend all of my time
imagining pictures and moving shapes just like you do.
I thought you did math.
Moving, stretching, and constructing shapes is a form of math.
It’s called geometry.
Whoa! Really? I’m learning geometry in class,
but we just memorize the formulas for volume.
Memorizing formulas is like spelling practice instead of literature.
Can you be my tutor for real geometry? For 4 hours everyday!
*Her dad: No, sweetie, she’s probably very busy.*
I can give your dad the information of a few people who would be better qualified than me
to teach you the literature of shapes.
*She falls asleep*
I forgot to mention to her (wrt her worry of being bad at multiplication) that two of the greatest mathematicians of all time said they were unable to add without mistakes.
As for myself, I must confess, I am absolutely incapable even of adding without mistakes.
— Jules Henri Poincaré
I’ve always been weak in arithmetic.
— Alexander Grothendieck
## Notes on Covering Spaces as Extensions
This post assumes knowledge of fiber bundles, the group action functor, groupoids, and basic vector calculus. I am in the process of learning the topics discussed below, and I deeply appreciate constructive feedback.
#### How does a big space cover a little one?
Given a covering space $$E \to B$$ we can uniquely lift any path in the base space (once you choose a starting point) to a path in $$E$$. Conversely, given a functor $$F: \Pi_1(B) \to \text{Set}$$, we can create a covering space of $$B$$ by letting the fiber over $$b$$ be $$F(b)$$.
Formally: If $$B$$ satisfies a few topological constraints*, the functor $$\text{Fiber}$$ is an equivalence of categories
$$\text{Fiber}: \text{Cov}/B \to \text{Set}^{\Pi_1(B)}$$
between the category of covering spaces $$\text{Cov}/B$$ and the functor category $$\text{Set}^{\Pi_1(B)}$$.
*$$B$$ is locally path-connected and semi-locally simply-connected
Covering spaces $$E \to B$$ are classified by functors, $$F: \Pi_1(B) \to \text{Set}$$. The fundamental group is the group of automorphisms of the appropriate fiber functor. A fiber functor is a forgetful functor from the category of finite sets with $$G$$-action to the category of finite sets.
Sidenote: $$\infty$$-groupoids = spaces up to homotopy. Thus, we can think of $$\Pi_1$$ as capturing the information of a space up to 1-homotopy, and truncating the information about higher homotopies.
What is the analogue of a covering space for groupoids?
It’s called a discrete fibration. This is a functor $$p: E \to B$$ that satisfies the unique path lifting lemma (for any morphism $$f: x \to y$$ in $$B$$ and $$\tilde{x} \in E$$ lifting $$x$$, there’s a unique morphism $$\tilde{f}: \tilde{x} \to \tilde{y}$$).
Sheaves and fibrations are generalizations of the notion of fiber bundles. Discrete fibrations $$E \to B$$ are also classified by functors $$B \to \text{Set}$$.
#### As curious persons, we like to take things apart to see how they work.
We can convert a decomposition of a space in terms of simple pieces $$\xrightarrow{into}$$ a collection of vector spaces (or modules) and linear transformations (or homomorphisms).
Let’s take something simple apart to get an intuition we can abstract off of: our favourite trivial fiber bundle, the cylinder.
The sequence $$0 \to F \to E \to B \to 0$$ is an example of a chain complex.
#### The Extension Problem: $$0 \to F \to ? \to B \to 0$$
The extension problem is essentially the question: Given the end terms $$F$$ and $$B$$ of a short exact sequence, what possibilities exist for the middle term $$E$$?
Extensions of the group $$B$$ by the group $$F$$, that is, short exact sequences $$1 \to F \to E \to B \to 1$$, are classified by 2-functors $$B \to \text{2Aut}(F)$$.
Examples of this are the more familiar classifications of central group extensions using $$H^2$$ or $$\text{Ext}$$.
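To see why the middle term is genuinely undetermined by the ends (a standard example, not from the post): $$\mathbb{Z}/4$$ and $$\mathbb{Z}/2 \times \mathbb{Z}/2$$ are non-isomorphic, yet both fit:

```latex
1 \to \mathbb{Z}/2 \to \mathbb{Z}/4 \to \mathbb{Z}/2 \to 1,
\qquad
1 \to \mathbb{Z}/2 \to \mathbb{Z}/2 \times \mathbb{Z}/2 \to \mathbb{Z}/2 \to 1
```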
What does this category $$\text{2Aut}(F)$$ look like?
• objects: $$F$$
• morphisms: automorphisms of $$F$$
• 2-morphisms: morphisms between the automorphisms of $$F$$
For more on this, I recommend Dr. Baez’s Lectures on n-Categories and Cohomology, which explains why “…generalizing the fundamental principle of Galois theory to fibrations where everything is a group gives a beautiful classification of group extensions in terms of nonabelian cohomology.”
Excited yet? Let’s look at chain complexes a bit more carefully.
If you have vector calculus running through your veins and at your fingertips, then you’re already familiar with a differential complex:
$$\mathbb{H}^1 \xrightarrow{grad} \mathbb{H}_{curl} \xrightarrow{curl} \mathbb{H}_{div} \xrightarrow{div} \mathbb{L}_2$$
(where $$\mathbb{H}_{curl}, \mathbb{H}_{div}$$ are the domains for the curl and div operators respectively.)
You’ll note that the composition of any two consecutive maps is zero. This is a key point of the definition of a chain complex (a sequence of modules connected by homomorphisms such that the composition of any two consecutive maps is zero).
In the category of groups, this is equivalent to the question: What groups $$B$$ have $$A$$ as a normal subgroup and $$C$$ as the corresponding factor group?
In general, for a short exact sequence: $$0 \to A \xrightarrow{f} B \xrightarrow{g} C \to 0$$
$$A$$ is a subobject of $$B$$, and the corresponding quotient is isomorphic to $$C$$:
$$C \cong B/f(A)$$ (where $$f(A) = im(f)$$).
What are these 0’s for?
0 represents the zero object, such as the trivial group or a zero-dimensional vector space. The placement of the 0’s forces $$f$$ to be a monomorphism and $$g$$ to be an epimorphism.
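A concrete short exact sequence to keep in mind (a standard example): multiplication by 2 is injective, reduction mod 2 is surjective, and the image of the first map equals the kernel of the second:

```latex
0 \to \mathbb{Z} \xrightarrow{\times 2} \mathbb{Z} \xrightarrow{\bmod 2} \mathbb{Z}/2 \to 0
```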
These chain complexes generalize past where our geometric intuition can follow, but it’s nice to be grounded in examples.
“Two objects $$X$$ and $$Y$$ are connected by a map $$f$$ between them. Homological algebra studies the relation, induced by the map $$f$$, between chain complexes associated with $$X$$ and $$Y$$ and their homology. This is generalized to the case of several objects and maps connecting them.” – Source
Curious as to why we need the the composition of consequent maps to be 0 and how this fits into homology? I recommend Dr. Ghrist’s notes on Homology from Elementary Applied Topology.
#### How can a big commutative algebra define a little one?
Classifying how a big space covers a little one amounts to classifying how a little commutative algebra can “sit inside” a big one.
We can study the ways a little thing $$k$$ can sit inside a bigger thing $$K$$ ($$k \hookrightarrow K$$) by keeping track of the symmetries of $$K$$ that fix $$k$$. These “fixing symmetries” form a subgroup of the symmetries of K. $$Gal(K/k) \subseteq Aut(K)$$.
Let $$k$$ and $$K$$ be fields such that $$k \hookrightarrow K$$.
In this case, $$K$$ is called an extension field of $$k$$ (denoted $$K/k$$).
I’m going to interject here and give a bit of context:
Field extensions are fiber bundles with zero-dimensional fibers (covering spaces).
A fiber bundle with zero-dimensional fibers has a total space that “gives multiplicity” to the points of the base space.
(This means that a multiple-valued function on the base space has a good chance of being interpretable as a single-valued function on the total space.)
But now I’m getting ahead of myself, without grounding you in classical commutative algebra!
Now that you have a glimpse of the importance of studying extensions, let’s get back to those:
Let $$Aut(K/k)$$ be the set of all $$k$$-automorphisms of $$K$$
$$Aut(K/k) = \{\sigma \in Aut(K) : \sigma\vert_k = Id_k\}$$
But why is the restriction of $$\sigma$$ to $$k$$ an automorphism of $$k$$?
If $$k$$ is a splitting field of a family $$\mathcal{P}$$ of polynomials, then any action $$\sigma$$ preserves the set of roots of $$\mathcal{P}$$, and since $$k$$ is generated by these roots, the action $$\sigma$$ preserves $$k$$ (i.e. $$\sigma k = k$$).
Then $$Aut(K/k)$$ is a group, the automorphism group of $$K/k$$, or the Galois group of $$K/k$$, denoted as $$Gal(K/k)$$.
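The smallest interesting example (standard, not the post's): for $$K = \mathbb{Q}(\sqrt{2})$$ over $$k = \mathbb{Q}$$, the only $$k$$-automorphisms of $$K$$ are the identity and conjugation:

```latex
Gal\big(\mathbb{Q}(\sqrt{2})/\mathbb{Q}\big) = \{Id,\ \sigma\},
\qquad \sigma(a + b\sqrt{2}) = a - b\sqrt{2},
\qquad Gal\big(\mathbb{Q}(\sqrt{2})/\mathbb{Q}\big) \cong \mathbb{Z}/2
```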
the category $$\text{Fields}$$:
• objects: Fields, i.e. $$F_1, F_2$$
• morphisms: Field extensions, $$F_1 \xrightarrow{\phi} F_2 := F_2/F_1$$ (note that morphims are all injective)
The Fundamental Theorem of Galois: Given a polynomial, the intermediate fields of the splitting field are in one-to-one correspondence with the subgroups of the Galois group.
Suppose $$G$$ acts transitively on $$X$$. Pick any figure $$x$$ of type $$X$$ and let H be its “stabilizer”: the subgroup consisting of all guys in $$G$$ that map $$x$$ to itself. Then we get a one-to-one and onto map
$$f: X \to G/H$$
sending each figure $$gx$$ in $$X$$ to the equivalence class $$[g]$$ in $$G/H$$.
We can use this to set up a correspondence between sets on which $$G$$ acts transitively and subgroups of $$G$$. This is one of the principles lurking behind Galois theory.
— John Baez, The Tale of Groupoidification
If we’re looking for amazing applications of Galois theory within, say, arithmetic, then we might as well read Kronecker or Hecke.
If we interpret Galois theory in a very expansive way, then the Erlangen Program, and Cartanian geometry, are legitimate consequences.
Thanks to David Yang and Semon Rezchikov for helping me correct my visualization of a category of representations, and thanks to Aaron Slipper for teaching me basic Galois Theory!
Postscript: We have the beginnings of a map of things that can be extended to define other things:
Fields $$\leftrightarrow$$ Groups
Base Spaces $$\leftrightarrow$$ Covering Spaces
Groupoids $$\leftrightarrow$$ Discrete Fibrations
Principal Fiber Bundles $$\leftrightarrow$$ Structure Groups
https://www.contextgarden.net/index.php?title=MathML_code_examples&diff=1880&oldid=1554
# Difference between revisions of "MathML code examples"
MathML support in ConTeXt is very extensive, but some features are rather hidden for now.
\setupMMLappearance allows you to adjust various things. For example, the layout of presentational markup which uses mtables for alignment can be changed via \setupMMLappearance[mtable][alternative=a|b|c]. Experiment with the different alternatives to see the effects.
When embedding XML inside normal ConTeXt code, remember that \stopXMLdata gobbles up any following white-space. You'll need to explicitly put it back in if you want it (with \space).
## Current bugs and workarounds
To use UTF inside MathML currently requires this workaround:
\unprotect
\long\def\doXMLremapdata[#1]#2#3#4%
{\bgroup
\startXMLmapping[#1]%
% enable unknown elements (should be macro)
\doifsomething{#1}
{\doifdefinedelse{\@@XML#1:\s!unknown:M}
{\remapXMLunknowntrue}{\remapXMLunknownfalse}}%
%
\pushmacro\doXMLentity % needed ?
% this will change, proper split in element itself
\ifx\currentXMLnamespace\empty
\let\parseXMLelement\remapXMLelement
\else
% here we need to get rid of the namespace; we also
% have to preserve the leading / if present
\@EA\long\@EA\def\@EA\parseXMLelement\@EA
##\@EA1\currentXMLnamespace:{\remapXMLelement##1}%
\fi
%
\let\parseXMLescape \remapXMLescape
\let\parseXMLprocess\remapXMLprocess
%
\let\doXMLentity \remapXMLentity
%
\enableXML % sets entities
\enableXMLexpansion
\let\par\XMLremappedpar
\the\everyXMLremapping
%\ignorelines
\catcode\^^I=\@@space
\catcode\^^M=\@@space
\catcode\^^L=\@@space
\catcode\^^Z=\@@space
\pushmacro\unicodechar
\let\unicodechar\relax
\xdef\remappedXMLdata{#4\empty}%
\popmacro\unicodechar
\let\par\endgraf
\popmacro\doXMLentity % needed ?
\disableXMLexpansion
\catcode\{=\@@begingroup
\catcode\}=\@@endgroup
\catcode`\\=\@@escape
\iftraceXMLremapping
\ifmmode\vbox\fi\bgroup
\convertcommand\remappedXMLdata\to\ascii
\tttf\veryraggedright\ascii\par
\writestatus{xml-remap}{\ascii}%
\egroup
\fi
#2\scantokens\@EA{\remappedXMLdata\empty\empty}#3%
\stopXMLmapping
\egroup}
\protect
\def\MMLpTEXT#1#2%
{\hbox
{\tf
\getXMLarguments{mstyle}{#1}%
\doMMPpbackground{mstyle}
{\doMMPpcolor{mstyle}
{\setMMLptextstyle{mstyle}%
\ignorespaces#2\unskip\unskip}}}}
Additional space appears in front of an inline equation (tagged as an imath element). The fix (from Hans) is made in xtag-ini.tex: locate the line
\unexpanded\def\doXMLelementE
and add a % immediately following it, to produce
\unexpanded\def\doXMLelementE%
The % (comment token) prevents the spurious space.
https://study.com/learn/lesson/what-is-a-random-variable.html
# Random Variables: Discrete and Continuous
Benjamin Mayhew, Rudranath Beharrysingh
• Author
Benjamin Mayhew
Ben has tutored math at multiple levels for over three years and developed graduate-level biostatistics course materials. He holds an MS in biostatistics focusing on data science and spatial statistics and a sustainable horticulture certificate from the University of Minnesota. He received his BA in mathematics from Macalester College. He also is TEFL certified and tutors ESL students in his spare time.
• Instructor
Rudranath Beharrysingh
Rudy teaches math at a community college and has a master's degree in applied mathematics.
Understand what a random variable is and why it is used. Learn about the types of random variables and see examples of random variables from everyday life. Updated: 12/10/2021
## Random Variable Definition
A random variable, also known as a stochastic variable, is a collection of possible outcomes together with their corresponding probabilities. Intuitively, a random variable is a variable that may take on different values randomly and whose value is not known in advance.
More precisely, a random variable is defined as a set of possible outcomes, called a sample space, along with a probability distribution function that assigns specific outcomes or groups of outcomes numbers between 0 and 1 that represent probabilities.
The outcome can represent an event that will happen in the future, like the result of rolling a six-sided die. In this example, the sample space is the set of integers from 1 to 6, with each integer corresponding to one side of the die. For a fair die, the probability of each of these outcomes is 1/6.
A random variable does not necessarily need to represent something that will happen in the future. A random variable can also represent a quantity that already exists but for which the precise value is unknown. For example, in a doctor's office, the systolic blood pressure of the next patient to be treated could be seen as a random variable. Now, the patient has some particular systolic blood pressure, but it is not precisely known until measured.
### Sample Space Examples
Consider the example of rolling a six-sided die. The sample space S is a finite set of six integers:
{eq}S_\text{dice roll} = \{1,2,3,4,5,6\} {/eq}
In the blood pressure example above, the sample space is the set of nonnegative real numbers because blood pressure is measured as a single real number and cannot be negative:
{eq}S_\text{blood pressure} = \{x \in \mathbb{R} \mid x\geq 0\} {/eq}
Finally, consider flipping a coin repeatedly until it first comes up heads. The random variable representing the number of coin flips required to get heads has a sample space that is all of the positive integers (the natural numbers):
{eq}S_\text{flip coin until heads} = \{x \mid x \in \mathbb{N} \} = \{1,2,3, \ldots \} {/eq}
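To make the three cases concrete, here is an illustrative Python sketch (the variable names are ours, not the lesson's):

```python
import itertools

# Finite sample space: the six faces of a die.
S_die_roll = {1, 2, 3, 4, 5, 6}

# Uncountable sample space: the nonnegative reals cannot be enumerated,
# so we represent it by a membership test instead of a set.
def in_S_blood_pressure(x):
    return x >= 0

# Countably infinite sample space: the positive integers 1, 2, 3, ...
S_flip_until_heads = itertools.count(1)

assert 4 in S_die_roll and 7 not in S_die_roll
assert in_S_blood_pressure(120.0) and not in_S_blood_pressure(-5)
assert [next(S_flip_until_heads) for _ in range(3)] == [1, 2, 3]
```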
## Types of Random Variable
There are two types of random variables: discrete random variables and continuous random variables. Random variables are classified as discrete or continuous based on whether the sample space is countable or uncountable.
Discrete and continuous random variables are different in that, for a discrete random variable, each outcome in the sample space has an associated probability, while for a continuous random variable, each outcome instead has a probability density and probabilities are instead assigned to ranges of outcomes.
## What is a Discrete Random Variable?
A discrete random variable is defined as a random variable for which the sample space is countable. A countable sample space is one that has either a finite number of outcomes, like rolling a six-sided die, or a countably infinite number of outcomes. An infinite sample space is countably infinite when it's possible to assign a natural number (a positive integer) to each outcome.
### Discrete Random Variable Example
In the example above, where a coin is repeatedly flipped until heads comes up, the sample space of the number of flips this takes is countably infinite, and therefore this random variable is classified as discrete.
For a discrete random variable, every outcome in the sample space has an associated probability, and the random variable as a whole can be described using a probability distribution function in the form of a histogram.
The probability distribution function P gives the specific probabilities of the different outcomes. The probability that a person gets heads on the first coin flip is 1/2, so this means that P(1) = 1/2, as shown in this histogram.
The probability that it takes two coin flips to get the first heads is equal to the probability of getting tails on the first flip and heads on the second; that is, the probability is {eq}\frac{1}{2} \times \frac{1}{2} = \frac{1}{4}{/eq}. Likewise, the probability of getting the first heads on the {eq}n^{\text{th}} {/eq} coin flip is {eq}\frac{1}{2^n} {/eq}. Note that the sum of all of the probabilities in the probability distribution function is always 1.
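The flip-until-heads probabilities can be checked numerically; this short Python sketch (not part of the lesson) mirrors the calculation above:

```python
# P(n): probability that the first heads appears on flip n of a fair
# coin -- (n - 1) tails followed by one heads, each with probability 1/2.
def P(n):
    return (1 / 2) ** n

assert P(1) == 0.5   # heads on the first flip
assert P(2) == 0.25  # tails, then heads

# Over the whole countably infinite sample space the probabilities sum
# to 1; a finite partial sum is already within 2**-50 of it.
partial = sum(P(n) for n in range(1, 51))
assert abs(partial - 1.0) < 1e-12
```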
## What is a Continuous Random Variable?
A continuous random variable is defined as a random variable for which the sample space is uncountable. Usually, this means that the random variable can take on values from a range of real numbers. One example could be a person's systolic blood pressure. This is measured as a positive real number, and a typical value is approximately 120 mmHg.
#### What is random variable and its types?
A random variable is a function that associates certain outcomes or sets of outcomes with probabilities. Random variables are classified as discrete or continuous depending on the set of possible outcomes or sample space.
#### How to identify a random variable?
A variable is a random variable when it is meant to represent the outcome of some random event. Usually, it is denoted by a capital letter, like X or Y.
https://bitcointalk.org/index.php?topic=167229.3100
Bitcoin Forum
Author Topic: [ANN] cudaMiner & ccMiner CUDA based mining applications [Windows/Linux/MacOSX] (Read 3426357 times)
cbuchner1 (OP)
Hero Member
Offline
Activity: 756
Merit: 502
January 26, 2014, 08:11:22 PM (last edit: January 26, 2014, 08:22:57 PM by cbuchner1)
In most wallets you can increase it with command line options or with the wallet .conf file.
Code:
-maxconnections=<n> "Maintain at most <n> connections to peers (default: 125)"
-maxoutbound=<n> "Maintain at most <n> outbound connections to peers (default: 8)"
With more connections you can reach the whole network with less hops. And it might help finding less orphans. Not sure.
Also, with more connections you can download the blockchain faster (if you're not CPU and/or IO bottlenecked, which is generally the case).
The reason I asked is that I solo-mined YAC for days and found nothing, not even an orphan block. I just recently increased the active connection limit and was not sure whether it would have any effect on solo mining. I guess it doesn't.
Ah, so this isn't about RPC connections of miners to the wallet...
I think in all the time mining YAC (2 weeks or so) I had just a single orphan.
The YAC network doesn't seem particularly loaded with traffic, so I think 8 connections are just fine.
By the way the Windows Yacoin wallet reports 15 active connections, and my Linux wallet (which is the one I am solo mining with) reports 8 connections. Both are at v0.4.2.
Christian
cbuchner1 (OP)
Hero Member
Offline
Activity: 756
Merit: 502
January 26, 2014, 09:06:41 PM (last edit: January 26, 2014, 09:46:47 PM by cbuchner1)
-H 2 in scrypt-jane now puts keccak onto the GPU. However the CUDA kernel is still a perfect example of how NOT to do efficient CUDA code.
Meaning without further optimizations, don't expect any miracles yet!
EDIT: seems my 660Ti could do around 400 kHash/s Keccak alone (when I remove the ROMix phase of scrypt). This is about 10 times the performance I had with the CPU based Keccak (single core used). What I do not understand however is why the CPU use of that cudaminer instance still hovers around 15%... It should be idling!
Christian
Stanr010
Member
Offline
Activity: 70
Merit: 10
January 26, 2014, 09:30:53 PM
Is there any way to set up Cudaminer to have a failover / backup server if one goes down? Cgminer allows you to edit that in the config but there isn't a config file for cudaminer. Is there any way to set it up with the .bat?
Something like this from cgminer??
Quote
setx GPU_MAX_ALLOC_PERCENT 100
setx GPU_USE_SYNC_OBJECTS 1
C:\your\path\to\cgminer.exe --verbose --scrypt -o stratum+tcp://[mining_url] -u [workerusername] -p [workerpassword] --thread-concurrency 8192 -I 13 -g 2 -w 256 --gpu-fan 85 --lookup-gap 2 --gpu-engine 1080 --gpu-memclock 1500 --failover-only -o stratum+tcp://[failover_url] -u [workerusername] -p [workerpassword]
ManIkWeet
Full Member
Offline
Activity: 182
Merit: 100
January 26, 2014, 09:36:03 PM
-H 2 in scrypt-jane now puts keccak onto the GPU. However the CUDA kernel is still a perfect example of how NOT to do efficient CUDA code.
Meaning without further optimizations, don't expect any miracles yet!
Christian
Is there any benefit to using that with YAC?
Is there any way to set up Cudaminer to have a failover / backup server if one goes down? Cgminer allows you to edit that in the config but there isn't a config file for cudaminer. Is there any way to set it up with the .bat?
Something like this from cgminer??
Quote
setx GPU_MAX_ALLOC_PERCENT 100
setx GPU_USE_SYNC_OBJECTS 1
C:\your\path\to\cgminer.exe --verbose --scrypt -o stratum+tcp://[mining_url] -u [workerusername] -p [workerpassword] --thread-concurrency 8192 -I 13 -g 2 -w 256 --gpu-fan 85 --lookup-gap 2 --gpu-engine 1080 --gpu-memclock 1500 --failover-only -o stratum+tcp://[failover_url] -u [workerusername] -p [workerpassword]
I think a good way to make this possible is with a --shutdown parameter, which would make cudaminer shutdown as soon as it has a connect failure.
BTC donations: 18fw6ZjYkN7xNxfVWbsRmBvD6jBAChRQVn (thanks!)
Stanr010
Member
Offline
Activity: 70
Merit: 10
January 26, 2014, 09:40:49 PM
-H 2 in scrypt-jane now puts keccak onto the GPU. However the CUDA kernel is still a perfect example of how NOT to do efficient CUDA code.
Meaning without further optimizations, don't expect any miracles yet!
Christian
Is there any benefit to using that with YAC?
Is there any way to set up Cudaminer to have a failover / backup server if one goes down? Cgminer allows you to edit that in the config but there isn't a config file for cudaminer. Is there any way to set it up with the .bat?
Something like this from cgminer??
Quote
setx GPU_MAX_ALLOC_PERCENT 100
setx GPU_USE_SYNC_OBJECTS 1
C:\your\path\to\cgminer.exe --verbose --scrypt -o stratum+tcp://[mining_url] -u [workerusername] -p [workerpassword] --thread-concurrency 8192 -I 13 -g 2 -w 256 --gpu-fan 85 --lookup-gap 2 --gpu-engine 1080 --gpu-memclock 1500 --failover-only -o stratum+tcp://[failover_url] -u [workerusername] -p [workerpassword]
I think a good way to make this possible is with a --shutdown parameter, which would make cudaminer shutdown as soon as it has a connect failure.
Why would I want it to shut down? I just want it to switch to another server if it disconnects and is unable to reconnect. Eventually, I'd like it to go back to the original server when it's up again.
cbuchner1 (OP)
Hero Member
Offline
Activity: 756
Merit: 502
January 26, 2014, 09:43:59 PM
Stanr010, the shutdown would allow an external script or control software to manage the failover process...
Failover is on my TODO list. But right now I am completing the scrypt-jane feature.
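A minimal sketch of that external-script failover idea (illustrative only: the pool URLs below are placeholders, and cudaminer does not yet have a flag to exit on connect failure, as Christian notes):

```python
import itertools
import subprocess

# Candidate pools, tried in round-robin order (placeholder URLs).
POOLS = [
    "stratum+tcp://primary.example.com:3333",
    "stratum+tcp://backup.example.com:3333",
]

def run_with_failover(dry_run=True):
    """Run the miner against each pool in turn, looping forever."""
    for pool in itertools.cycle(POOLS):
        cmd = ["cudaminer", "-o", pool, "-u", "worker", "-p", "pass"]
        if dry_run:
            return cmd  # show the first command instead of launching it
        subprocess.call(cmd)  # blocks until the miner exits, then fails over

assert run_with_failover()[2] == POOLS[0]
```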
cbuchner1 (OP)
Hero Member
Offline
Activity: 756
Merit: 502
January 26, 2014, 09:52:03 PM
-H 2 in scrypt-jane now puts keccak onto the GPU.
Is there any benefit to using that with YAC?
Not for YAC, but maybe people can now start to try mining somewhat lower Nfactor coins.
I believe I can still make that Keccak GPU code 50 times faster. What you see is just the first working draft.
Christian
flysats
Full Member
Offline
Activity: 329
Merit: 100
Buy, sell and store real cryptocurrencies
January 26, 2014, 10:13:40 PM (last edit: January 26, 2014, 10:24:01 PM by flysats)
Not for YAC, but maybe people can now start to try mining somewhat lower Nfactor coins.
I believe I can still make that Keccak GPU code 50 times faster. What you see is just the first working draft.
Christian
microCoin work!
nFactor 7
TITAN get 520k (vs AMD 290x - 2,6m)
MEMORY used 1458MB
[2014-01-27 02:10:54] GPU #1: 2303.87 khash/s with configuration T432x8
[2014-01-27 02:10:54] GPU #1: using launch configuration T432x8
[2014-01-27 02:10:54] GPU #1: GeForce GTX TITAN, 300.51 khash/s
[2014-01-27 02:10:54] DEBUG: got new work in 5 ms
[2014-01-27 02:10:57] GPU #1: GeForce GTX TITAN, 520.20 khash/s
[2014-01-27 02:10:57] DEBUG: got new work in 2 ms
[2014-01-27 02:11:02] GPU #1: GeForce GTX TITAN, 522.15 khash/s
[2014-01-27 02:11:02] DEBUG: got new work in 0 ms
[2014-01-27 02:11:07] GPU #1: GeForce GTX TITAN, 521.11 khash/s
[2014-01-27 02:11:07] DEBUG: got new work in 1 ms
[2014-01-27 02:11:12] GPU #1: GeForce GTX TITAN, 523.82 khash/s
[2014-01-27 02:11:12] DEBUG: got new work in 1 ms
[2014-01-27 02:11:17] GPU #1: GeForce GTX TITAN, 526.02 khash/s
[2014-01-27 02:11:17] DEBUG: got new work in 1 ms
[2014-01-27 02:11:22] GPU #1: GeForce GTX TITAN, 529.54 khash/s
[2014-01-27 02:11:22] DEBUG: got new work in 1 ms
let's address your MRC - I pay 5 000 000 MRC for the excellent work!
cbuchner1 (OP)
Hero Member
Offline
Activity: 756
Merit: 502
January 26, 2014, 10:31:31 PM
TITAN get 520k (vs AMD 290x - 2,6m)
MEMORY used 1458MB
[2014-01-27 02:10:54] GPU #1: 2303.87 khash/s with configuration T432x8
let's address your MRC - I pay 5 000 000 MRC for the excellent work!
I believe it's still spending 3/4 of the GPU time in the Keccak code. Otherwise you would
be seeing the indicated 2303.87 khash/s instead of "only" 530... Maybe in a day or two I've
got the speedup you need.
Meh... wallet doesn't sync. Initially it shows 2 connections. After a minute they drop to 0.
flysats
Full Member
Offline
Activity: 329
Merit: 100
Buy, sell and store real cryptocurrencies
January 26, 2014, 10:38:24 PM (last edit: January 26, 2014, 10:52:41 PM by flysats)
Paid!
Great job!
my microCoin wallet shows 51 connections
cbuchner1 (OP)
Hero Member
Offline
Activity: 756
Merit: 502
January 26, 2014, 10:51:42 PM
Paid!
Great job!
Uh thanks. That makes me a millionaire then?
I got it to sync with this addnode list in microcoin.conf (from a recent forum posting)
Code:
and it's already on a couple of exchanges: poloniex.com crycurex.com
flysats
Full Member
Offline
Activity: 329
Merit: 100
Buy, sell and store real cryptocurrencies
January 26, 2014, 10:54:22 PM
Uh thanks. That makes me a millionaire then?
i exchange MRC to BTC rate: 5m mrc / 0.22 btc
cbuchner1 (OP)
Hero Member
Offline
Activity: 756
Merit: 502
January 26, 2014, 10:55:46 PM
Uh thanks. That makes me a millionaire then?
i exchange MRC to BTC rate 5m mrc / 0.22 btc
well... a Microcoin millionaire Thanks for the donation. It is a significant amount.
flysats
Full Member
Offline
Activity: 329
Merit: 100
Buy, sell and store real cryptocurrencies
January 26, 2014, 11:00:34 PM
well... a Microcoin millionaire Thanks for the donation. It is a significant amount.
All that remains is to accelerate Keccak and many of us become millionaires
bigjme
Sr. Member
Offline
Activity: 350
Merit: 250
January 26, 2014, 11:14:20 PM
How long did it take to get 5 million?
I'm guessing the block rewards are very high
Owner of: cudamining.co.uk
flysats
Full Member
Offline
Activity: 329
Merit: 100
Buy, sell and store real cryptocurrencies
January 26, 2014, 11:16:32 PM
How long did it take to get 5 million?
I'm guessing the block rewards are very high
i have many amd rigs - 5m may be 1 day
bigjme
Sr. Member
Offline
Activity: 350
Merit: 250
January 26, 2014, 11:17:55 PM
Very good for one day I must say. Over $200. And if they go up even a small amount it will make a big difference in their value
Owner of: cudamining.co.uk
flysats
Full Member
Offline
Activity: 329
Merit: 100
Buy, sell and store real cryptocurrencies
January 26, 2014, 11:21:23 PM
Very good for one day I must say. Over $200. And if they go up even a small amount it will make a big difference in their value
Not a small sum, agreed. But cbuchner1 is worthy of such an award - he deserved it. I think we all need to financially support such an important job.
bigjme
Sr. Member
Offline
Activity: 350
Merit: 250
January 26, 2014, 11:23:38 PM
Ooo it is definitely worth while. I have yet to sell anything I have made with cudaminer as its only 1100 yacoins so far. But when it comes to me selling, christian you have a nice chunk going your way
Owner of: cudamining.co.uk
bathrobehero
Legendary
Offline
Activity: 1988
Merit: 1039
ICO? Not even once.
January 26, 2014, 11:51:18 PM
Can we expect lookup gap support for the Y kernel later down the line?
https://wiki.math.ntnu.no/st2304/2018v/lectures
ST2304 spring 2018
Messages
May 27: If you have questions before tomorrow's exam, use the Google Groups discussion forum.
May 24: Suggested solutions for the August 2017 exam have been posted. Note that some answers in the solutions contain material beyond what the problem set specifically asks for. It will be useful to study this material.
Question day, May 25: We will be available most of the day, from 9:00 to 15:00 in auditorium S7, to answer questions, e.g. about previous exam problems.
If needed, send an email to jarle.tufto@gmail.com, bob.ohara@ntnu.no (available Friday morning) or christoffer.h.hilde@ntnu.no.
May 9: The help session on May 11 from 12:15-16:00 will take place in S1 instead of K5.
May 4: Remember the lecture today at 14:15 in S5. There is also a session (where you can get help with exercise 8) in K5 between 12:15 and 14:00 as usual.
April 25: On May 2, I will go through problem 2c from the August 2017 exam and problem 2a and b from August 2016. In addition, I will summarise the two types of generalized linear models (Poisson regression and logistic regression aka Binomial regression), hypothesis testing for glms and methods for testing and correcting for overdispersion.
April 25: In addition to today's lecture, there will be an extra lecture where I go through previous exam questions, probably in S5 on Wednesday, May 2 at 14:15 (same time and place). The exercise sessions on Fridays 14:15-16:00 continue on Friday, April 27, May 4, and possibly May 11. On Friday May 25 (before the exam on May 28), Christoffer and I will be available to answer questions between 9:00 and 15:00.
Syllabus
There are 14 lectures, topics and slides (pdf and R Markdown) listed below - with reference to the Dalgaard textbook and links to additional learning material.
Textbook: Peter Dalgaard, Introductory statistics with R (this is an ebook from Springer, that NTNU has access to - but you need to be logged in from NTNU to download the book)
As a supplement to the slides from the lectures, we also recommend chapter 3 (excluding section 3.5) in Introduction to Statistical Learning (James et al. 2013), which covers much of the same material as lectures 2-9, that is, simple and multiple linear regression, categorical covariates, and interactions, as well as details on how such models are fitted with R, all through examples. This book is used in a course in statistical learning, and also comes with video lectures by the authors (YouTube playlist for Chapter 3). The book is available as an ebook from Springer.
A nice and gentle, example-oriented introduction to logistic regression (binomial response) and Poisson regression (lectures 11-13) is chapters 2 and 3 in Faraway (2006), Extending the Linear Model with R.
Lecture 1: Basic knowledge about R
Slides: Lecture1.pdf and Lecture1.rmd
Topics: Different data types (vectors, factors, data.frames, lists), vectorised operations, getting data into R: read.csv, read.table etc. (Dalgaard, ch. 1, ch. 2 (except 2.3)). Built-in functions for dealing with different probability distributions (i.e. dnorm, pnorm, qnorm, rnorm etc.) (Dalgaard ch. 3), summary statistics and graphics (Dalgaard ch. 4).
Additional resources: Rbeginner.html and Rbeginner.Rmd. Parts of Rintermediate.html and Rintermediate.Rmd may also be useful (perhaps ignore how to plot things using the ggplot2 package, although some of you may want to learn this on your own at some stage)
Lectures 2 and 3: Statistical inference
Slides:
Topics: Maximum likelihood.
Lectures 4 to 6: Simple linear regression with normal response
Slides:
Topics: Simple linear regression (lecture 3 to 5, Dalgaard ch. 6). What are the assumptions, what is the principle behind estimation of the parameters? (the mathematical derivation of the maximum likelihood estimators is not part of the curriculum). This is also covered in ST0103.
Lecture 6 to 7: Multiple regression
Slides:
Topics: Dalgaard ch. 11
Lecture 8: Categorical covariates
Slides: Lecture8.pdf and Lecture8.rmd
Topics: Essentially a form of linear regression models since categorical covariates (called factors in R) can be represented through numerical 0/1 dummy variables (Dalgaard ch. 7 except 7.1.1, 7.1.4, 7.2 and 7.4 and ch. 12.3).
Important points: How is the model specified as a model formula in R. How is the model written in mathematical notation? What is the interpretation of the parameters (e.g. when running summary on the fitted model object in R). Specifically, you should know why we can’t estimate the intercept simultaneously with the effect of _all_ levels of a factor and that we instead typically impose the constraint that the effect of the first “reference” level of a factor is zero.
Lecture 9: Interactions
Slides: Lecture9.pdf and Lecture9.rmd
Topics: Interactions between two numerical covariates; one numerical and categorical, two categorical (Dalgaard ch. 12.5).
Lecture 10: Model selection
Slides: Lecture10.pdf and Lecture10.rmd
Topics: F-distribution (handout 1), Approximate/asymptotic chi-square distribution of 2 times difference of maximum log-likelihoods under H_0 and H_1 (handout 5, section 2.2), AIC (handout 1, section 5), BIC
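For reference, the standard definitions (our notation, with $$\hat{L}$$ the maximized likelihood, $$p$$ the number of parameters and $$n$$ the number of observations; the course's own treatment is in handout 1):

```latex
\mathrm{AIC} = 2p - 2\ln\hat{L},
\qquad
\mathrm{BIC} = p\ln n - 2\ln\hat{L}
```

Lower values indicate a better trade-off between fit and model complexity.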
Lectures 11-13: Generalised linear models
Slides:
Topics: Poisson response (log link) (Dalgaard ch. 15), Binomial response (logit, probit and cloglog link functions) (Dalgaard ch. 13), Why do we need link functions? Theoretical reasons for using different link functions, Again you should be able to write down the model in mathematical notation and interpret parameter estimates. Based on the summary output for a fitted model, you should be able to compute model predictions for models with the above link functions (so you need to know about the inverse of different link functions). Deviance, how to use change in deviance in tests between nested models. Overdispersion (what is it, how do we test if there is overdispersion, how do we correct for overdispersion).
Most of this is also covered in handout 4, in addition to the chapters in Dalgaard and the slides from the lectures.
Additional material: handout-4.pdf and ch. 2 and 3 in Faraway.
Jarle Tufto
Material not covered this year:
• the delta method (propagation of uncertainty),
• the multinomial distribution and contingency tables,
• numerical methods for maximising likelihood functions of non-standard models.
• How to obtain approximate standard errors of maximum likelihood estimates based on asymptotic theory (from the Hessian matrix, i.e. the second partial derivatives of the log-likelihood function at the MLEs) has only been covered to a small extent, in lecture 2.
This means that questions related to these topics will not be part of this year's exam.
Datasets used in Lectures/Exercises
Each of the following lines of code reads a data set from the web and returns it as a data.frame. Alternatively, click on the links and save the files to a local folder on your computer.
read.csv("https://www.math.ntnu.no/emner/ST2304/2018v/BirdEggs.csv")
read.csv("https://www.math.ntnu.no/emner/ST2304/2018v/31396_Bumpus_English_Sparrow_Data.csv")
read.csv("https://www.math.ntnu.no/emner/ST2304/2018v/Birdbrains.csv")
read.csv("https://www.math.ntnu.no/emner/ST2304/2018v/HastingsData.csv")
read.csv("https://www.math.ntnu.no/emner/ST2304/2018v/Himmicanes.csv")
read.csv("https://www.math.ntnu.no/emner/ST2304/2018v/LifeExpectancy.csv") # Healthcare data
read.csv("https://www.math.ntnu.no/emner/ST2304/2018v/Dpileatus.csv") # Pileated Woodpecker data
Previous years exams:
Parts of previous years' exams will also be relevant preparation for this year's exam (questions that are not relevant this year are specified below).
Trial exam 2011: bokmål, solution (not 1a-c, last question of 2c, 3a-e)
June 2011: english, bokmål, nynorsk, solution (not 1a-c, 3a-c)
June 2012: english, bokmål, nynorsk, solution (not 1a-d, 2e-f)
August 2012: bokmål, solution (not 1a-d, 3a-c)
June 2013: english, bokmål, nynorsk, solution (not 1a-d, 2c)
August 2013: bokmål, solution (not 1a-c, 2a-d)
May 2014: bokmål, solution (not 1a, 3c)
May 2015: english, bokmål, nynorsk, solution (not 1a-c, second question of 2c and 2e-g)
June 2016: english, bokmål, nynorsk, solution (not 1a-b, not second sentence of 2b, 2d, 4a)
August 2016: bokmål, nynorsk, solution (not 1a-c, 3b-c)
May 2017: bokmål, solution (all questions relevant except 1a-c)
August 2017: bokmål, nynorsk, solution* (all questions relevant except 1a-c)
June 2018: bokmål nynorsk english
* Feel free to change in google docs if you see any errors.
Permitted aids on the exam
Support material code C: One yellow A4 paper with your own handwritten notes (available at the 7th floor of Sentralbygg II), an approved calculator, Tabeller og formler i statistikk (Tapir forlag), Matematisk formelsamling (K. Rottmann). In addition, the exam itself may include the help page for any specific R function you may need.
http://forums.udacity.com/questions/164/bug-unit-1-final-quiz-dont-change-the-value-of-page
## [BUG] Unit-1 / Final Quiz - Don't change the value of page
I keep getting an error that says "Your submission was incorrect. Don't change the value of page."
I've not changed the value of page, and my code runs fine on a regular python interpreter.
# page = contents of a web page
page = '<div id="top_bin"> <div id="top_content" class="width960"> <div class="udacity float-left"> <a href="http://www.xkcd.com">'
start_link = page.find('<a href=')
start_url = page.find('"', start_link) + 1
end_url = page.find('"', start_url)
url = page[start_url:end_url]
how is that changing the value of page?
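A self-contained check (not part of the original post) run in a plain interpreter shows the snippet extracts the URL and never reassigns page:

```python
# The page string from the question, shortened for readability.
page = ('<div id="top_bin"> <div id="top_content" class="width960"> '
        '<div class="udacity float-left"> <a href="http://www.xkcd.com">')
original = page  # keep a reference to compare against afterwards

start_link = page.find('<a href=')
start_url = page.find('"', start_link) + 1
end_url = page.find('"', start_url)
url = page[start_url:end_url]

print(url)               # http://www.xkcd.com
print(page == original)  # True -- page was only read, never reassigned
```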
asked 21 Feb '12, 00:41
gubatron
I've updated the quiz to use a page variable without spaces. So paste in everything after start_link= and it should work.
(22 Feb '12, 17:11)
## 24 Answers:
Try your solution now. I've updated the grading code. Sorry for the trouble!
answered 21 Feb '12, 01:53
PeterUdacity ♦♦
start_url = page.find('"', start_link)
end_url = page.find('"', start_url + 1)
url = page[start_url + 1:end_url]
This yields "Good job" on the bottom alert but "Try again" on top. I say give it a couple days and try again after they have time for some ironing out.
answered 21 Feb '12, 01:03
Jake Solomon
That was a fault on my end. Fixed now!
(22 Feb '12, 17:12)
one-step method
url = page[start_link+9 : page.find('"', start_link+10)]
answered 21 Feb '12, 12:16
1linsanity
This won't help next week when you need to find all URLs
(25 Feb '12, 18:55)
@Marek Kotewicz and @KarenT: While your code does produce the correct output, we wanted you to use the "find" command in this problem.
Also, some quizzes will expect you to print something and some will not. We realize that this is an easy thing to make a mistake with and we will try to always be as clear as possible (for example, the print command will usually be pre-populated in the code if we expect a print), but if you ever make a mistake you will be told so with an error message and you can easily make the necessary change.
answered 21 Feb '12, 18:20
AndyAtUdacity ♦♦
I tried and got the result; it works.
answered 22 Feb '12, 05:00
priya
I have experienced the same issue, hope they find a solution for this quickly
answered 21 Feb '12, 01:46
I can confirm that this quiz is working correctly. jsolomon, I tried your code and it worked for me, so they possibly have fixed it since you tried it last.
answered 21 Feb '12, 02:05
Eric Solomon
Hello Peter
When I run my code it shows the correct output, but when I submit, it won't accept it.
The error says : "Your submission was incorrect. You shouldn't print anything for the submission in this quiz."
answered 21 Feb '12, 05:34
Dinesh M-2
ilikeudacity, you don't print anything in this quiz. I'm guessing you did what a lot of others, myself included, did at first as well and printed the url at the end. Remove your print statement and submit. =D
answered 21 Feb '12, 07:46
Jacob C. Ting
It is working now.
answered 21 Feb '12, 09:52
Dinesh M-2
Asked: 21 Feb '12, 00:41
Seen: 2,467 times
Last updated: 08 Apr '12, 04:37
https://physicalquantummechanics.wordpress.com/groundstate-energies/
# Groundstate Energy in Spherical Symmetry
Below you find the ground state energies produced by realQM with electrons filling a spherical shell structure homogenised to spherical symmetry. realQM then reduces to a wave equation in one radial variable $r$ in terms of a wave function $\psi (r)$ defined on a sequence of intervals/spherical shells, with a certain total charge density/number of electrons in each interval/shell. The ground state is computed iteratively by energy minimisation over each shell and update of the free boundary separating shells to reach continuity. See below for details.
The basic shell structure is given by the sequence 2, 8, 18, 32, …, the number of electrons in fully filled shells of increasing radius, that is, with $2\times n^2$ electrons in shell $n$. Here the factor 2 reflects the structure of Helium, with two electrons filling two half-spherical shells meeting at a separating plane as a Bernoulli free boundary, with subsequent regular subdivision into $n^2$ subregions. As an example, the electron shell structure for Neon then turns out to be 2+4+4, with the second full shell subdivided into two subshells of 4 electrons each.
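The counting can be sketched directly (plain Python, illustrative only; not part of realQM):

```python
# Number of electrons in fully filled shell n is 2*n^2, giving the
# sequence 2, 8, 18, 32, ...; the running totals give the electron
# counts of atoms with fully filled shells (e.g. 10 for Neon = 2+8).
shell_sizes = [2 * n**2 for n in range(1, 5)]
print(shell_sizes)  # [2, 8, 18, 32]

cumulative = []
total = 0
for size in shell_sizes:
    total += size
    cumulative.append(total)
print(cumulative)  # [2, 10, 28, 60]
```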
In general, computed energies agree with observed energies to about three digits. Here you can inspect sample results with the shell structure indicated.
For an atom/ion with kernel charge $Z$ and $N$ electrons, realQM takes the following form in spherical coordinates, with the electrons filling an expanding sequence of spherical shells and the electron charge distribution in each shell homogenised into spherical symmetry with the same total charge. Find the wave function $\psi (r)$ as a function of the distance $r>0$ to the atom kernel of the form
• $\psi (r)=\sum_{j=1}^J\psi_j(r)$
supported by a partition $\Gamma : 0=r_0<r_1<\cdots <r_J=\infty$ of the interval $(0,\infty)$ into intervals $S_j=(r_{j-1},r_j)$ with $\psi_j^2>0$ in $S_j$ and $\psi_j = 0$ outside, satisfying
• $\psi \,\,\mbox{is continuous,}\,\, \frac{\partial\psi}{\partial r}(r_j) = 0\,\mbox{for}\, j=1,...,J-1,\mbox{ and}\,\frac{\partial\psi}{\partial r}(0) = -Z\psi (0)$ (1)
and the normalization condition with $\sum_{j=1}^Jn_j=N$ and $n_j>0$ the number of electrons in shell $S_j$:
• $4\pi\int_{S_j}\psi_j^2\, r^2dr = n_j\,\mbox{ for } j=1,...,J$, (2)
which minimises the total energy
• $TE(\psi )\equiv K(\psi )+PK(\psi )+PE(\psi )$,
where
• $K(\psi ) =4\pi\,\int^\infty_0\frac{1}{2}(\frac{\partial\psi}{\partial r})^2r^2\, dr$
• $PK(\psi )=- 4\pi\int_0^\infty\frac{Z}{r}\psi^2(r)r^2\, dr$
• $PE(\psi )=4\pi\,\sum_j\int_0^\infty\sum_{k\neq j} V_k(r)\psi_j^2(r)r^2\, dr$,
with
• $V_k(r)=2\pi\,\int_0^\infty min(\frac{1}{r},\frac{1}{s})c_k(r,s)\psi_k^2(s)s^2\,ds \quad\mbox{for}\quad r>0$, (3)
where $c_k(r,s)=\frac{n_k-1}{n_k}$ for $r,s\in S_k$ is a reduction factor due to lack of self-repulsion, and $c_k(r,s)=1$ else.
Recall that the potential $P(r)$ generated by a spherically symmetric charge distribution of total charge $C(s)$ at kernel distance $s$ is given by the formula $P(r)=min(\frac{1}{r},\frac{1}{s})C(s)$, which motivates (3).
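As a sanity check on this formula, the following sketch (plain Python, illustrative only) shows that the potential of a uniform spherical shell is constant inside the shell and Coulombic outside:

```python
def shell_potential(r, s, charge):
    """Potential at distance r from the centre due to total charge
    `charge` spread uniformly over a sphere of radius s:
    constant charge/s inside (r < s), charge/r outside (r > s)."""
    return min(1.0 / r, 1.0 / s) * charge

# Inside the shell (r < s) the potential is flat:
print(shell_potential(0.5, 2.0, 1.0))  # 0.5 (= charge/s)
print(shell_potential(1.0, 2.0, 1.0))  # 0.5
# Outside (r > s) it falls off as charge/r:
print(shell_potential(4.0, 2.0, 1.0))  # 0.25
```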
The boundary condition at $r=0$ reflects balance of $\frac{1}{r}$-terms in Schrödinger’s equation in spherical coordinates.
The ground state is computed iteratively by minimisation of $TE(\psi )$ over $\psi$ satisfying (1) and (2), using a gradient method with the vanishing-gradient condition
• $-\frac{1}{2}\frac{\partial^2\psi_j}{\partial r^2}-\frac{1}{r}\frac{\partial\psi_j}{\partial r}+W\psi_j+2\sum_{k\neq j}V_k\psi_j-E_j\psi_j=0\quad\mbox{in }S_j$,
for $j=1,...,J$, where the $E_j$ are Lagrange multipliers for the charge conservation (2), combined with updates of the free boundary $\Gamma$ to reach continuity of $\psi$.
https://s27094.gridserver.com/6df23/value-math-definition-38aa64
# Value in Math: Definition, Formula and Examples

In math, value is a number signifying the result of a calculation or function; it can also refer to the number assigned to a variable or constant. The word is used in several related senses:

• Place value and face value: the value of a digit depends on the column in which it lies. In the number 45, the digit 4 is in the tens column, so its face value is 4 while its value is 40. In general, Value = Place Value × Face Value.
• Mean value: the average of a set of numbers, found by adding the numbers and dividing by how many there are. For the set {4, 5, 6, 5}, the sum is 20 and there are 4 numbers, so the mean value is 5.
• Absolute value: the distance of a number from zero on the number line, regardless of direction.
• Value as worth: the monetary worth of an object. A dime, for example, has a value of 10 cents. If diamond earrings are valued at $200 and a diamond bracelet at $500, the jewelry is valued at $700 in total; and if one pound of peas costs $2.97 while one pound of potatoes costs $1.89, the total cost of a purchase is found by multiplying each unit price by the number of pounds bought.
• Value of a function: the number obtained by substituting a given input into the function, for example evaluating f(x) = 4x^2 - 28x + 25 at a particular value of x.
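The mean-value calculation described above can be sketched in a few lines (plain Python, illustrative only):

```python
def mean(values):
    """Average: the sum of the numbers divided by how many there are."""
    return sum(values) / len(values)

numbers = [4, 5, 6, 5]
print(mean(numbers))  # 5.0
```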
https://www.richardcapener.com/water-exercising-enviroment-friendly-drinking-water-athletics-regarding-exercise-and-even-enjoyable
# Binary Option Trade Meaning In Hindi
A binary option is a financial exotic option in which the payoff is either some fixed monetary amount or nothing at all. The trader takes a yes-or-no position on the price of an asset; the deciding factor is whether or not the option is in the money at expiration, and the trader has no control over when a trade begins or ends once it has been placed. The all-or-nothing premise and high advertised payouts have drawn regulatory concern: much of the binary options market operates through Internet-based trading platforms that are not necessarily complying with applicable U.S. regulations, and Ilan Tzorya, CEO of the technology provider TRADOLOGIC, has argued that binary options are not financial products at all but a "gaming" product.
You’ll notice that there is a gap between the payout percentages and the out-of-money return rewards Binary Option Meaning In Urdu. The line between binary options trading and gambling is blurry. In these, S is the initial the bitcoin code uk stock price, T is the time to binary option trade meaning in hindi maturity, q is the dividend rate, σ {\displaystyle \sigma } is …. Jul 26, 2020 · Jones will receive a pay-off of \$2,000 A binary option, binary options meaning in hindi sometimes called a digital option, is a type of option in which the trader takes a yes or no position on the price of a stock or other asset, such iq option opções binárias as ETFs or currencies, and the binary option trading legal in india in hindi.
2021-06-20 19:24:19
https://math.stackexchange.com/questions/1944575/cw-complex-of-s1-vee-s2
|
# CW complex of $S^1 \vee S^2$
I'm just curious to make sure that I'm not missing something.
The CW complex for $S^1 \vee S^2$ consists of
one 0-cell (the fixed point),
one 1-cell ($S^1$) and
one 2-cell ($S^2$)?
Hence, if I were to compute the homology groups of $S^1 \vee S^2$, the $C_n$ chain would be given by $$0 \longrightarrow \mathbb{Z} \longrightarrow \mathbb{Z} \longrightarrow \mathbb{Z} \longrightarrow 0.$$ Is this correct?
Yes, that sequence will work; you just need to describe what each map is doing (each one should be trivial). To be more specific, though, the 1-cell and 2-cell are a 1-disk and a 2-disk, respectively, attached to a point (thus creating an $S^1$ and $S^2$). Also, you may be interested in
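To spell out why that sequence computes the homology, note that both boundary maps vanish (a sketch, using the cell structure described above):

```latex
% d_1 = 0: the 1-cell attaches both endpoints to the single 0-cell,
%          so its cellular boundary is p - p = 0.
% d_2 = 0: the 2-cell is attached by a constant map to the 0-cell,
%          a degree-0 attaching map, so d_2 = 0.
\[
0 \longrightarrow \mathbb{Z} \xrightarrow{\;d_2 = 0\;} \mathbb{Z}
  \xrightarrow{\;d_1 = 0\;} \mathbb{Z} \longrightarrow 0
\]
\[
H_0 \cong \mathbb{Z}, \qquad H_1 \cong \mathbb{Z}, \qquad H_2 \cong \mathbb{Z},
\qquad H_n = 0 \ \text{for } n \ge 3.
\]
```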
2019-11-20 18:05:31
http://www.freemathhelp.com/forum/threads/75438-Help-with-Bonds?p=310180
|
1. ## Help with Bonds
What is the purchase price of a bond that is issued on May 1,2002 for 22 years at a coupon
rate of 9.4% compounded semi-annually? The maturity date of the bond is May 11,2007 and the yield
rate is 8.5% compounded semi-annually. The bond has a face value of $5000 and it is redeemable at par.

I have FV = 5000, PMT = 235, i = 0.0425, n = 5.5

5000(1.0425^-5.5) + 235(1 - 1.0425^-5.5)/0.0425
5000(0.795393) + 235(4.814282)
3976.965000 + 1131.356353 = 5108.321353

The answer is not correct. Could someone help me find out where I went wrong?

2. Let's see...

2007 - 2002 = 5 -- Nope! That is NOT 22 years.

If this is one bond, I wonder if we can make sense of what you have written...

Pmt = 5000*0.094/2 = 235.00 -- Good.

Yield is 8.5%, so the semi-annual yield is 4.25%.

Define i = 0.0425 and v = 1/1.0425, and we have

PV coupons: Pmt*(v - v^45)/(1 - v) = Pmt*(1 - v^44)/i = 235*19.76008 = 4643.619
Add this to 5000*v^44 = 800.983
And we have a purchase price: 4643.619 + 800.983 = 5444.60

The value on May 1, 2002, given the maturity at May 11, 2007. That's two pieces.

The value on May 11, 2002 is

Pmt*(1 + v + ... + v^10) = Pmt*(1 - v^11)/(1 - v) = Pmt*(1 - v^11)/(i*v) = Pmt*(1 - v^11)/d

Increased by 5000v^11.

Two things left for you to do. Calculate these values and then tell me why I cared about 5/11/2002 instead of 5/1/2002.

3. ## Reply to Bond question

I believe you cared about 5/11/02 because it is when the bond is maturing.

V = 1/1.0425
235(1 + .959233 + X + .959233^10) = 235(1 - .959233^11)/(1 - .959233) = 235(1 - .959233^11)/d
235(1.959233 + X + .659519) = 235(.367348/.04767) = 235(7.706063) = 235(1 - .367348/.04767)/d
235(2.618752X) + 615.406720X = 1810.924691 = 235(.367348)/d
615.406720X = 1810.924691 = 86.326780/d
.339830X = 86.326780/d

Okay, this is as far as I get. The teacher said today we don't use the 22 years after the first part of the question.
She also gave us 5 possible answers, which are
a) $5413.30 b) $5431.30 c) $4531.30 d) $4533.30 e) $4513.30
I'm really not understanding this question.....I have final exam on Tuesday and my lil brain is not computing.
4. ## That is the exact question
Hello
This is the exact question taken word for word from the assignment sheet as given to our class.
First, I ignore the 22 year part. It has no bearing since the maturity date is May 11, 2007.
Because the purchase date is before the maturity date in the calendar year, I took the bond back to Dec. 11, 2001 (basically made n -11 instead of -10). So I calculated both the FV and the PMT totals using 11 periods (Dec. 11, 2001 to May 11, 2007) which gave me the bond value on Dec. 11, 2001.
Lastly, I added in the simple interest from Dec. 11, 2001 to the purchase date of May 1st, 2002 which gave me the value on May 1st, 2002.
but still not getting the right answer.
6. May 11/02 : 5-year bond purchased; maturity May 11/07;
face $5000, coupons$235, rate 4.25 s/a: PV = 5180.24
BUT purchase made 10 days earlier: May 1/02;
10 days later, a coupon amount of $235 is received: you gotta PAY for that: 5180.24 + 235 = 5415.24 : but not quite: you should get a "little discount!" since you wait 10 days...that's apparently$1.94; sooooo:
5415.24 - 1.94 = 5413.30 (your choice a).
That's my take on this messy problem....
7. ## Thanks Denis
My mistake was in the 5-year bond purchase; I had 5.5 years.
This is a very badly worded problem; messy just about covers it.
Thanks Dennis for taking the time to help out. Makes sense to me now.
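Denis's figure of 5180.24 for the bond's value on May 11/02 can be checked with a few lines of Python. This is a sketch of the standard price-equals-PV-of-coupons-plus-redemption formula with the thread's numbers; the messy 10-day adjustment discussed above is deliberately left out, and the function name is mine:

```python
# Price of a bond as the PV of its coupons (an annuity) plus the PV of
# the redemption value. Numbers from the thread: face 5000, semi-annual
# coupon 235 (= 5000 * 9.4% / 2), semi-annual yield 4.25%, and 10
# half-year periods from May 11/02 to maturity on May 11/07.

def bond_price(face, coupon, yield_rate, periods):
    v = 1 / (1 + yield_rate)                    # per-period discount factor
    annuity = (1 - v**periods) / yield_rate     # PV of $1 per period
    return coupon * annuity + face * v**periods

price = bond_price(face=5000, coupon=235, yield_rate=0.0425, periods=10)
print(round(price, 2))  # 5180.24, matching Denis's post
```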
2015-04-26 09:43:38
http://math.univ-lille1.fr/~tibar/Sing2012/program.html
|
## Program

Conference room: "Salle des Réunions", M2 building

Wednesday, May 30
- 13h30 coffee
- 14h00-14h50 Cluckers
- coffee break
- 15h20-16h10 Veys
- 16h25-17h15 Mourtada
- 17h30-18h20 Faber
- pot

Thursday, May 31
- 9h10-10h00 Denkowski
- coffee break
- 10h25-11h15 Kurdyka
- 11h30-12h20 Tanabe
- lunch break
- 14h15-15h05 Steenbrink
- coffee break
- 15h30-16h20 Teissier
- 16h30-17h20 Loeser
- 17h30-18h20 Le Quy Thuong
- cocktail

Friday, June 1
- 9h00-9h50 Pe Pereira
- 10h00-10h50 Borodzik
- coffee break
- 11h15-12h15 Colloquium talk: A'Campo
- lunch break
- 14h15-15h05 Pichon
- coffee break
- 15h30-16h20 Parameswaran
- 16h30-17h20 Siersma
## List of registered participants
Norbert A'Campo (Basel), Karim Bekka (Rennes), Ana Belen de Felipe (Paris), Maciej Borodzik (Warsaw), Arnaud Bodin (Lille), Ying Chen (Lille), Raf Cluckers (Lille), Georges Comte (Chambery), Maciej Denkowski (Cracovie), Nicolas Dutertre (Marseille), Abdelghani El Mazouni (Lens), Eleonore Faber (Wien), Goulwen Fichou (Rennes), Pedro D. Gonzalez Perez (Madrid), Youssef Hantout (Lille), Krzysztof Kurdyka (Chambery), Monique Lejeune-Jalabert (Versailles), Ann Lemahieu (Lille), François Loeser (Paris), Mohammad Moghaddam (Teheran), Hussein Mourtada (Paris), Le Quy Thuong (Paris), Parameswaran A.J. (TIFR, Mumbai), Maria Pe Pereira (Paris), Anne Pichon (Marseille), Camille Plénat (Marseille), Patrick Popescu-Pampu (Lille), Michel Raibaut (Paris), Dirk Siersma (Utrecht), Joseph Steenbrink (Nijmegen), Susumu Tanabé (Istanbul), Bernard Teissier (Paris), Mihai Tibar (Lille), David Trotman (Marseille), Wim Veys (Leuven)
Conferences
Norbert A'Campo (University of Basel) Colloquium talk: New topological and geometric tools in the study of singularities
Maciej Borodzik (University of Warsaw) Topological aspects of spectra of singularities
Abstract.
We shall present a relationship between topological invariants of links of hypersurface singularities and the spectra of the corresponding singular points. In the case of plane curve singularities, we can fully recover the spectrum from the classical link invariants. Furthermore, we show that the classical skein relations in link theory give rise, via suitably applied Morse theory, to proofs of various semicontinuity properties of spectra. We shall finish the talk by pointing out some higher-dimensional generalizations. This is a joint project with A. Nemethi.
Susumu Tanabe (Galatasaray University) Period integrals for complete intersection varieties
Abstract.
In this talk, we will discuss about concrete expression of solutions to Gauss-Manin system or Picard-Fuchs equation associated to affine complete intersection varieties. Our main interest will be focused on several cases where the concrete monodromy representation of the solutions is available. As an example of our investigations on Horn type hypergeometric functions, we show the following. Let $Y$ be a Calabi-Yau complete intersection in a weighted projective space. The space of quadratic invariants of the (reduced) hypergeometric group associated with the period integrals of the mirror CI variety X to Y is one-dimensional and spanned by the Gram matrix of a split-generator of the derived category of coherent sheaves on Y with respect to the Euler form.
Anne Pichon
(Université de la Méditérranée) The bilipschitz geometry of a normal complex surface
Abstract.
This is a joint work with Lev Birbrair and Walter Neumann. We study the geometry of a normal complex surface $X$ in a neighbourhood of a singular point $p \in X$. It is well known that for all sufficiently small $\epsilon>0$ the intersection of $X$ with the sphere $S^{2n-1}_\epsilon$ of radius $\epsilon$ about $p$ is transverse, and $X$ is therefore locally "topologically conical", i.e., homeomorphic to the cone on its link $X\cap S^{2n-1}_\epsilon$. However, as shown by Birbrair and Fernandez, $(X,p)$ need not be "metrically conical", i.e. bilipschitz equivalent to a standard metric cone, when $X$ is equipped with the Riemannian metric induced by the ambient space. In fact, it was shown by Birbrair, Fernandez and Neumann that it rather rarely is.
I will present a complete classification of the bilipschitz geometry of $(X,p)$. It starts with a decomposition of a normal complex surface singularity into its "thick" and "thin" parts. The former is essentially metrically conical, while the latter shrinks rapidly in thickness as it approaches the origin. The thin part is empty if and only if the singularity is metrically conical. The complete classification then consists of a refinement of the thin part into geometric pieces. I will describe it on an example, and I will present a list of open problems related to this new point of view on classifying complex singularities.
Eleonore Faber (Universität Wien) Splayed divisors: transversality of singular hypersurfaces
Abstract
In this talk we present a natural generalization of transversally intersecting smooth hypersurfaces in a complex manifold: hypersurfaces, whose components intersect in a transversal way but may be themselves singular. We call these hypersurfaces "splayed" divisors. A splayed divisor is characterized by a property of its Jacobian ideal. Another characterization is in terms of K.Saito's logarithmic derivations. As an application we consider the question of characterizing a normal crossing divisor by its Jacobian ideal.
Hussein Mourtada (Université Paris Diderot) Jet schemes of normal toric surfaces
Abstract
For an integer $m > 0$ we will determine the irreducible components of the m-th jet scheme of a normal toric surface S. We give formulas for the number of these components and their dimensions. When m varies, these components give rise to projective systems, to which we associate a weighted oriented graph. We prove that the data of this graph is equivalent to the data of the analytical type of S. Besides, we classify these irreducible components by an integer invariant that we call the index of speciality. We prove that for m large enough, the set of components with index of speciality 1 is in one-to-one correspondence with the set of exceptional divisors that appear on the minimal resolution of S.
François Loeser (Université Pierre et Marie Curie) Fixed points of iterates of the monodromy
Abstract
With Jan Denef we proved a formula relating arc spaces to fixed points of iterates of the monodromy. The proof was by explicir computation on a resolution. We shall present a recent work with Ehud Hrushovski that provides a new - geometric - proof using Lefschetz fixed point formula and non-archimedean geometry. If times allow we shall end by discussing how similar methods provide a new construction of the motivic Milnor fiber.
Joseph Steenbrink (Radboud Universiteit Nijmegen) Function germs on toric singularities
Abstract
We study function germs on toric varieties which are nondegenerate for their Newton diagram. We express their motivic Milnor fibre in terms of their Newton diagram. We establish an isomorphism between the cohomology of the Milnor fibre and a certain module constructed from differential forms. This module is equipped with its Newton filtration. We conjecture that its Poincare polynomial is equal to the spectrum of the function germ. We will illustrate this by several examples.
Le Quy Thuong (Université Pierre et Marie Curie) Some approaches to the Kontsevich-Soibelman integral identity conjecture
Abstract
It is well known that the integral identity conjecture is one of the key foundations in Kontsevich-Soibelman's theory of motivic Donaldson-Thomas invariants for 3-dimensional Calabi-Yau varieties. In this talk, we shall mention some approaches to the conjecture in the algebro-geometric and arithmetic-geometric points of view. Namely, we consider the regular, adic and formal versions and give some proofs under certain conditions.
Raf Cluckers (Université Lille 1) An overview on recent progress related to (sub-)analytic functions
Abstract
We will give a tour around recent developments about (sub-)analytic functions, both in a real setting and in a non-archimedean setting. The topics will include aspects of integration, parameterizations, Lipschitz continuity, and a link with number theory in analogy to Pila's work with e.g. Bombieri in which real topological techniques are used to bound the number of rational points on, say, the graph of an analytic function.
Wim Veys (Katholieke Universiteit Leuven) Bounds for log-canonical thresholds and exponential sums
Abstract
This is joint work with Raf Cluckers. For any polynomial f with rational coefficients, we study the bounds conjectured by Denef and Sperber for local exponential sums, modulo m-th powers of a prime, associated to f. We show that such bounds hold unconditionally for several small values of m, with the log-canonical threshold of f in the exponent. Key ingredients are some new bounds for log-canonical thresholds.
Maria Pe Pereira Nash problem for surfaces
Abstract
Let $\pi : (\tilde{X}, E) \to (X, O)$ be a resolution of singularities of a singularity $(X, O)$. Take the decomposition of the exceptional divisor $E = \cup_i E_i$. Given any arc $\gamma : (\mathbb{C}, 0) \to (X, \mathrm{Sing}\,X)$ one can consider the lifting $\tilde{\gamma} : (\mathbb{C}, 0) \to (\tilde{X}, E)$. Nash considered the set of arcs whose lifting $\tilde{\gamma}$ meets a fixed divisor $E_i$, that is $\tilde{\gamma}(0) \in E_i$, and proved that its closure is an irreducible set of the space of arcs. Nash's question is whether, for the essential divisors $E_i$, these subsets of arcs are in fact irreducible components of the space of arcs or not. He conjectured that the answer was yes for the case of surfaces and suggested the study in higher dimensions. Recently we solved the conjecture for the surface case in a joint work with J. Fernandez de Bobadilla. I will give an introduction to the problem and details of the proof for the normal surface case.
Maciej Denkowski (Jagellonian University Cracow) On the exceptional (central) set
Abstract
For a given subanalytic (or definable in some o-minimal structure) closed set $M\subset\mathbb{R}^n$ we are interested in the multifunction assigning to each point $x\in\mathbb{R}^n$ the compact set $m(x)\subset M$ consisting of the points $y\in M$ realizing the Euclidean distance $\mathrm{dist}(x,M)$, as well as in the structure of the exceptional set $E$ of points for which $\#m(x)>1$, i.e. there is more than one closest point to $x$. The exceptional set $E$ conveys interesting information about the singularities of $M$. This study is closely related to such notions as skeletons, central sets and conflict sets. Part of it is a joint project with Lev Birbrair.
Krzysztof Kurdyka
(Université de Savoie) Reaching generalized critical values of a polynomial
Abstract
We give an algorithm to compute the set of asymptotic (respectively generalized) critical value of a polynomial both in the complex and real case. The algorithm uses a finite dimensional space of rational arcs along which we can reach all asymptotic (respectively generalized) critical values. Joint work with Z. Jelonek.
Dirk Siersma
(Universiteit Utrecht) Projective Hypersurfaces with non-isolated singularities
Abstract
We intend to discuss the influence of singularities on the homology groups of projective hypersurfaces. We use the vanishing homology of the singular set in order to compare with the general (smooth) projective hypersurface. First we discuss what is known about isolated singularities. Next we investigate the case of a one-dimensional singular set, derive a formula for the Euler characteristic, and obtain some information about the homology groups. We treat some examples in detail. This is joint work (in progress) with Mihai Tibar.
Parameswaran A.J. (TIFR Mumbai) On the geometry of regular maps from a quasi-projective surface to a curve
Abstract
In the case of a polynomial function $P : \mathbb{C}^2 \to \mathbb{C}$, Miyanishi and Sugie proved that the global monodromy group acting on $H_1(F)$ is trivial if and only if the general fibre of $P$ is rational and $P$ is "simple". Dimca showed in the following that in this statement the monodromy group can be replaced by the monodromy at infinity (i.e. around a very large circle in $\mathbb{C}$). We explore here some consequences of the triviality of the monodromy group, in Hodge-theoretic terms. Joint with M. Tibar.
2013-12-11 05:28:41
http://matholympiad.org.bd/forum/viewtopic.php?f=42&t=487&p=7554&sid=9e9ade0f97f77bf47d376f0dc301ab6f
|
## Dhaka Higher Secondary 2011/7
Problem for Higher Secondary Group from Divisional Mathematical Olympiad will be solved here.
Forum rules
Please don't post problems (by starting a topic) in the "Higher Secondary: Solved" forum. This forum is only for showcasing the problems for the convenience of the users. You can post the problems in the main Divisional Math Olympiad forum. Later we shall move that topic with proper formatting, and post in the resource section.
BdMO
Posts: 134
Joined: Tue Jan 18, 2011 1:31 pm
### Dhaka Higher Secondary 2011/7
In a game, Arjun has to shoot an arrow at a target, and then Karna has to shoot an arrow at the target. The one who hits the target first wins. The game continues, with Karna trying after Arjun and Arjun trying after Karna, until someone wins. The probability of Arjun hitting the target with a single shot is $\frac{2}{5}$, and the probability that Arjun will win the game is the same as that of Karna. What is the probability of Karna hitting the target with a single shot?
TIUrmi
Posts: 61
Joined: Tue Dec 07, 2010 12:13 am
Contact:
### Re: Dhaka Higher Secondary 2011/7
My approach was pretty much like this:
If Arjun wins in $n + 1$ shots the probability of his winning is = $\left( \frac{3}{5} \right)^n\frac{2}{5}$
If Karna wins while Arjun fails, the probability is = $\frac{3}{5} \left( \frac{x-p}{x} \right)^n\frac{p}{x}$ (where p = probability of Karna winning)
So, $\left( \frac{3}{5} \right)^n\frac{2}{5}= \frac{3}{5}\left( \frac{x-p}{x} \right)^n\frac{p}{x}$
$\left( \frac{3}{5} \right)^{n-1}\frac{2}{5} = \left( \frac{x-p}{x} \right) ^n\frac{p}{x}$
$\frac{{3^{n-1}\times 2}}{25} =\frac{{p(x-p)}^n}{x^{n+1}}$
From here we can tell x = 5 and p = 2
The probability of Karna hitting the target once and winning is therefore $\frac{3}{5}\frac{2}{5} = \frac{6}{25}$. Though the english statement is 'the probability of Karna hitting the target with a single shot', in Bangla probably it was " ekbar teer chhurey jetar sombhabona koto" or maybe unfortunately I read the statement wrong.
BTW, I am not sure about the solution. If the statement was indeed what I thought ( by mistake) is the solution correct?
"Go down deep enough into anything and you will find mathematics." ~Dean Schlicter
abir91
Posts: 52
Joined: Sun Dec 19, 2010 11:48 am
### Re: Dhaka Higher Secondary 2011/7
I think, either way (whichever understanding of the question we assume), your solution is wrong although I believe you can fix it.
If you are really stuck on where the mistake is, here is a pointer:
To calculate the probability of Arjun winning in n + 1 shots, you must take into account that Karna will miss n shots as well.
Abir
Have you read the Forum Rules and Guidelines?
bristy1588
Posts: 92
Joined: Sun Jun 19, 2011 10:31 am
### Re: Dhaka Higher Secondary 2011/7
Abir Vai,
Bristy Sikder
sourav das
Posts: 461
Joined: Wed Dec 15, 2010 10:05 am
Location: Dhaka
Contact:
### Re: Dhaka Higher Secondary 2011/7
I think so.
My solution:
The probability of winning for Arjun in 2n+1 shots = the probability of winning for Karna in 2n+2 shots.
Now, the probability of winning for Arjun in 2n+1 shots = $(\frac{3}{5})^n * \frac{2}{5} * (1-x)^n$ [x is the probability for Karna to win in one shot]
The probability of winning for Karna in 2n+2 shots = $(\frac{3}{5})^{n+1} * x * (1-x)^n$
Which implies $x =\frac{2}{3}$
If you can find any bug, please inform me.
You spin my head right round right round,
When you go down, when you go down down......
(from "THE UGLY TRUTH")
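sourav das's closed-form answer $x = \frac{2}{3}$ can be sanity-checked by summing the geometric series exactly with Python's `fractions` module (a sketch; the variable names are mine):

```python
# Arjun shoots first and hits with probability a = 2/5; Karna hits with
# probability p. Summing over the number of full missed rounds n gives
# geometric series with ratio (1-a)(1-p), so:
#   P(Arjun wins) = a / (1 - (1-a)(1-p))
#   P(Karna wins) = (1-a) * p / (1 - (1-a)(1-p))
from fractions import Fraction

a = Fraction(2, 5)
p = Fraction(2, 3)

p_arjun = a / (1 - (1 - a) * (1 - p))
p_karna = (1 - a) * p / (1 - (1 - a) * (1 - p))

print(p_arjun, p_karna)  # 1/2 1/2 -- both players are equally likely to win
```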
sm.joty
Posts: 327
Joined: Thu Aug 18, 2011 12:42 am
Location: Dhaka
### Re: Dhaka Higher Secondary 2011/7
I didn't understand any of this.
Can someone explain it in Bangla, and break down the mathematical operations a bit?
Winning and losing will always be there,
yet we must keep moving forward.........
Without overcoming obstacles,
who has ever become great.........
2018-01-23 19:39:49
https://www.semanticscholar.org/paper/Dimension-Theory-in-Iterated-Local-Skew-Power-Rings-Woods/ae04c90ef1ff2f02afc52669f2c5048ab6ea8554
|
# Dimension Theory in Iterated Local Skew Power Series Rings
@article{Woods2018DimensionTI,
title={Dimension Theory in Iterated Local Skew Power Series Rings},
author={Billy Woods},
journal={Algebras and Representation Theory},
year={2018}
}
• Billy Woods
• Published 26 November 2018
• Mathematics
• Algebras and Representation Theory
Many well-known local rings, including soluble Iwasawa algebras and certain completed quantum algebras, arise naturally as iterated skew power series rings. We calculate their Krull and global dimensions, obtaining lower bounds to complement the upper bounds obtained by Wang. In fact, we show that many common such rings obey a stronger property, which we call triangularity, and which allows us also to calculate their classical Krull dimension (prime length). Finally, we correct an error in the…
2 Citations
• Mathematics
• 2021
In this paper, we investigate the structure of skew power series rings of the form S = R [[ x ; σ, δ ]], where R is a complete, positively filtered ring and ( σ, δ ) is a skew derivation respecting
• Mathematics
• 2023
Given a complete, positively filtered ring ( R, f ) and a compatible skew derivation ( σ, δ ), we may construct its skew power series ring R [[ x ; σ, δ ]]. Due to topological obstructions, even if δ
## References
SHOWING 1-10 OF 32 REFERENCES
• Mathematics
• 2011
In this paper, we contrast the structure of a noncommutative algebra R with that of the skew power series ring R[[y;d]]. Several of our main results examine when the rings R, Rd, and R[[y;d]] are
• Mathematics
• 2007
This paper is a natural continuation of the study of skew power series rings A initiated in [P. Schneider and O. Venjakob, On the codimension of modules over skew power series rings with applications
• Mathematics
• 2010
This paper is a natural continuation of the study of skew power series rings $A=R[[t;\sigma,\delta]]$ initiated in an earlier work. We construct skew Laurent series rings $B$ and show the existence
• Mathematics
• 2007
We study the “q-commutative” power series ring R: = kq[[x1,...,xn]], defined by the relations xixj = qijxjxi, for multiplicatively antisymmetric scalars qij in a field k. Our results provide a
• Mathematics
• 2012
1. General Theory of Primes.- 2. Maximal Orders and Primes.- 3. Extensions of Valuations to some Quantized Algebras
In this paper and a forthcoming joint one with Y. Hachimori we study Iwasawa modules over an infinite Galois extension K of a number field k whose Galois group G=G(K/k) is isomorphic to the
We study prime ideals in skew power series rings T:= R[[y; τ, δ]], for suitably conditioned complete right Noetherian rings R, automorphisms τ of R, and τ-derivations δ of R. Such rings were
• Mathematics
• 2002
We study pre-balanced dualizing complexes over noncommutative complete semilocal algebras and prove an analogue of Van den Bergh’s theorem [VdB, 6.3]. The relationship between pre-balanced dualizing
https://www.physicsforums.com/threads/integration-problem.79050/
Integration problem
1. Jun 14, 2005
Yegor
Can you help me with
$$\int\frac{\sin(2nx)}{\sin(x)}dx$$
Here n=1,2,3...
I think I should find some way to represent $$\sin(2nx)$$ as a product of $$\sin x$$ and something else, but I don't know how.
Thank you
2. Jun 14, 2005
dextercioby
Except for the integration constant,here's what Mathematica gives as an answer.
Daniel.
Attached Files:
• Integrate.gif
3. Jun 14, 2005
Yegor
Great. I have Mathematica too.
I was given a hint: $$\sin(2nx)=\sin(x)\cdot(\text{a sum of trigonometric functions})$$. I don't even see how I would have come up with such an idea.
4. Jun 14, 2005
dextercioby
What are those equal to...?
$$\sin nx =...?$$
$$\cos nx =...?$$
in terms of the powers of "sin" and "cos" of "x"...?
Daniel.
5. Jun 14, 2005
shmoe
To write it as sin(x)*(Sum of trigonometric functions) you can replace you sines with exponentials, that is $$\sin(y)=(e^{iy}-e^{-iy})/(2i)$$. Things will factor, and you should be able to pull out a sum of cosines.
6. Jun 14, 2005
Yegor
I know only
$$\sin nx =\sin x \cos[(n-1)x] + \cos x \sin[(n-1)x]$$
$$\cos nx =\cos x \cos[(n-1)x] - \sin x \sin[(n-1)x]$$
These transformations can also be made with $$\sin[(n-1)x]$$, and so on.
But how can I write that as a sum?
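For the record, the hint unwinds via the product-to-sum formula $$2\sin x\cos\big((2k-1)x\big)=\sin(2kx)-\sin\big((2k-2)x\big)$$, which telescopes:

$$2\sin x\sum_{k=1}^{n}\cos\big((2k-1)x\big)=\sum_{k=1}^{n}\Big[\sin(2kx)-\sin\big((2k-2)x\big)\Big]=\sin(2nx)$$

so that

$$\int\frac{\sin(2nx)}{\sin x}\,dx = 2\sum_{k=1}^{n}\frac{\sin\big((2k-1)x\big)}{2k-1}+C.$$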
https://study-astrophysics.com/gr1777-019/
# GRE Physics GR 1777 Problem Solution
## 019. Classical Mechanics (Work-Energy Principle)
### Solution
From the work-energy theorem,
$W = \Delta K = \frac{1}{2} mv_f^2 - \frac{1}{2} mv_i^2$
Since the work done on the box is
$W = F \cdot d = F \cdot (5\,\mathrm{m})$
and the change in kinetic energy is
$\Delta K = \frac{1}{2} (10\,\mathrm{kg}) \left[ (2\,\mathrm{m/s})^2 - (1\,\mathrm{m/s})^2 \right] = 15\,\mathrm{J},$
the force is
$F = \frac{\Delta K}{d} = \frac{15\,\mathrm{J}}{5\,\mathrm{m}} = 3\,\mathrm{N}$
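The same arithmetic can be checked numerically; this short plain-Python snippet is an illustration added here, not part of the original solution:

```python
m = 10.0             # mass of the box, kg
v_i, v_f = 1.0, 2.0  # initial and final speeds, m/s
d = 5.0              # displacement over which the force acts, m

# Work-energy theorem: W = ΔK = ½ m v_f² − ½ m v_i²
delta_K = 0.5 * m * (v_f**2 - v_i**2)   # change in kinetic energy, J
F = delta_K / d                          # constant force, N
print(delta_K, F)                        # 15.0 3.0
```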
https://doc.sagemath.org/html/en/reference/number_fields/sage/rings/number_field/S_unit_solver.html
# Solve S-unit equation x + y = 1
Inspired by work of Tzanakis–de Weger, Baker–Wustholz and Smart, we use the LLL methods in Sage to implement an algorithm that returns all S-unit solutions to the equation $$x + y = 1$$.
AUTHORS:
• Alejandra Alvarado, Angelos Koutsianas, Beth Malmskog, Christopher Rasmussen, David Roe, Christelle Vincent, Mckenzie West (2018-04-25 to 2018-11-09): original version
EXAMPLES:
sage: from sage.rings.number_field.S_unit_solver import solve_S_unit_equation, eq_up_to_order
sage: K.<xi> = NumberField(x^2+x+1)
sage: S = K.primes_above(3)
sage: expected = [((2, 1), (4, 0), xi + 2, -xi - 1),
....: ((5, -1), (4, -1), 1/3*xi + 2/3, -1/3*xi + 1/3),
....: ((5, 0), (1, 0), -xi, xi + 1),
....: ((1, 1), (2, 0), -xi + 1, xi)]
sage: sols = solve_S_unit_equation(K, S, 200)
sage: eq_up_to_order(sols, expected)
True
Todo
• Use Cython to improve timings on the sieve
sage.rings.number_field.S_unit_solver.K0_func(SUK, A, prec=106)
Return the constant $$K_0$$ from Smart’s TCDF paper, [Sma1995]
INPUT:
• SUK – a group of $$S$$-units
• A – the set of the products of the coefficients of the $$S$$-unit equation with each root of unity of K
• prec – the precision of the real field (default: 106)
OUTPUT:
The constant K0, a real number
EXAMPLES:
sage: from sage.rings.number_field.S_unit_solver import K0_func
sage: K.<xi> = NumberField(x^3-3)
sage: SUK = UnitGroup(K, S=tuple(K.primes_above(3)))
sage: A = K.roots_of_unity()
sage: K0_func(SUK, A) # abs tol 1e-29
9.475576673109275443280257946929e17
sage.rings.number_field.S_unit_solver.K1_func(SUK, v, A, prec=106)
Return the constant $$K_1$$ from Smart’s TCDF paper, [Sma1995]
INPUT:
• SUK – a group of $$S$$-units
• v – an infinite place of K (element of SUK.number_field().places(prec))
• A – a list of all products of each potential a, b in the $$S$$-unit equation ax + by + 1 = 0 with each root of unity of K
• prec – the precision of the real field (default: 106)
OUTPUT:
The constant K1, a real number
EXAMPLES:
sage: from sage.rings.number_field.S_unit_solver import K1_func
sage: K.<xi> = NumberField(x^3-3)
sage: SUK = UnitGroup(K, S=tuple(K.primes_above(3)))
sage: phi_real = K.places()[0]
sage: phi_complex = K.places()[1]
sage: A = K.roots_of_unity()
sage: K1_func(SUK, phi_real, A)
4.396386097852707394927181864635e16
sage: K1_func(SUK, phi_complex, A)
2.034870098399844430207420286581e17
sage.rings.number_field.S_unit_solver.beta_k(betas_and_ns)
Return a pair $$[\beta_k, v(\beta_k)]$$, where $$\beta_k$$ has the smallest nonzero valuation in absolute value among the list betas_and_ns
INPUT:
• betas_and_ns – a list of pairs [beta, val_v(beta)], where beta is an element of SUK.fundamental_units()
OUTPUT:
The pair [beta_k, v(beta_k)], where beta_k is an element of K and val_v(beta_k) is an integer
EXAMPLES:
sage: from sage.rings.number_field.S_unit_solver import beta_k
sage: K.<xi> = NumberField(x^3-3)
sage: SUK = UnitGroup(K, S=tuple(K.primes_above(3)))
sage: v_fin = tuple(K.primes_above(3))[0]
sage: betas = [ [beta, beta.valuation(v_fin)] for beta in SUK.fundamental_units() ]
sage: beta_k(betas)
[xi, 1]
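The selection rule is simple enough to sketch outside Sage. The following hypothetical plain-Python stand-in (the real function operates on number field elements) just picks the pair whose valuation is nonzero and smallest in absolute value:

```python
def beta_k_sketch(betas_and_ns):
    # Among pairs [beta, val] with nonzero valuation, return the one
    # whose valuation is smallest in absolute value.
    nonzero = [pair for pair in betas_and_ns if pair[1] != 0]
    return min(nonzero, key=lambda pair: abs(pair[1]))

print(beta_k_sketch([["a", -3], ["b", 1], ["c", 0]]))  # ['b', 1]
```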
sage.rings.number_field.S_unit_solver.c11_func(SUK, v, A, prec=106)
Return the constant $$c_{11}$$ from Smart’s TCDF paper, [Sma1995]
INPUT:
• SUK – a group of $$S$$-units
• v – a place of K, finite (a fractional ideal) or infinite (element of SUK.number_field().places(prec))
• A – the set of the product of the coefficients of the $$S$$-unit equation with each root of unity of K
• prec – the precision of the real field (default: 106)
OUTPUT:
The constant c11, a real number
EXAMPLES:
sage: from sage.rings.number_field.S_unit_solver import c11_func
sage: K.<xi> = NumberField(x^3-3)
sage: SUK = UnitGroup(K, S=tuple(K.primes_above(3)))
sage: phi_real = K.places()[0]
sage: phi_complex = K.places()[1]
sage: A = K.roots_of_unity()
sage: c11_func(SUK, phi_real, A) # abs tol 1e-29
3.255848343572896153455615423662
sage: c11_func(SUK, phi_complex, A) # abs tol 1e-29
6.511696687145792306911230847323
sage.rings.number_field.S_unit_solver.c13_func(SUK, v, prec=106)
Return the constant $$c_{13}$$ from Smart’s TCDF paper, [Sma1995]
INPUT:
• SUK – a group of $$S$$-units
• v – an infinite place of K (element of SUK.number_field().places(prec))
• prec – the precision of the real field (default: 106)
OUTPUT:
The constant c13, as a real number
EXAMPLES:
sage: from sage.rings.number_field.S_unit_solver import c13_func
sage: K.<xi> = NumberField(x^3-3)
sage: SUK = UnitGroup(K, S=tuple(K.primes_above(3)))
sage: phi_real = K.places()[0]
sage: phi_complex = K.places()[1]
sage: c13_func(SUK, phi_real) # abs tol 1e-29
0.4257859134798034746197327286726
sage: c13_func(SUK, phi_complex) # abs tol 1e-29
0.2128929567399017373098663643363
It is an error to input a finite place.
sage: phi_finite = K.primes_above(3)[0]
sage: c13_func(SUK, phi_finite)
Traceback (most recent call last):
...
TypeError: Place must be infinite
sage.rings.number_field.S_unit_solver.c3_func(SUK, prec=106)
Return the constant $$c_3$$ from Smart’s 1995 TCDF paper, [Sma1995]
INPUT:
• SUK – a group of $$S$$-units
• prec – the precision of the real field (default: 106)
OUTPUT:
The constant c3, as a real number
EXAMPLES:
sage: from sage.rings.number_field.S_unit_solver import c3_func
sage: K.<xi> = NumberField(x^3-3)
sage: SUK = UnitGroup(K, S=tuple(K.primes_above(3)))
sage: c3_func(SUK) # abs tol 1e-29
0.4257859134798034746197327286726
Note
The numerator should be as close to 1 as possible, especially as the rank of the $$S$$-units grows large
sage.rings.number_field.S_unit_solver.c4_func(SUK, v, A, prec=106)
Return the constant $$c_4$$ from Smart’s TCDF paper, [Sma1995]
INPUT:
• SUK – a group of $$S$$-units
• v – a place of K, finite (a fractional ideal) or infinite (element of SUK.number_field().places(prec))
• A – the set of the product of the coefficients of the S-unit equation with each root of unity of K
• prec – the precision of the real field (default: 106)
OUTPUT:
The constant c4, as a real number
EXAMPLES:
sage: from sage.rings.number_field.S_unit_solver import c4_func
sage: K.<xi> = NumberField(x^3-3)
sage: SUK = UnitGroup(K, S=tuple(K.primes_above(3)))
sage: phi_real = K.places()[0]
sage: phi_complex = K.places()[1]
sage: v_fin = tuple(K.primes_above(3))[0]
sage: A = K.roots_of_unity()
sage: c4_func(SUK,phi_real,A)
1.000000000000000000000000000000
sage: c4_func(SUK,phi_complex,A)
1.000000000000000000000000000000
sage: c4_func(SUK,v_fin,A)
1.000000000000000000000000000000
sage.rings.number_field.S_unit_solver.c8_c9_func(SUK, v, A, prec=106)
Return the constants $$c_8$$ and $$c_9$$ from Smart’s TCDF paper, [Sma1995]
INPUT:
• SUK – a group of $$S$$-units
• v – a finite place of K (a fractional ideal)
• A – the set of the product of the coefficients of the $$S$$-unit equation with each root of unity of K
• prec – the precision of the real field
OUTPUT:
The constants c8 and c9, as real numbers
EXAMPLES:
sage: from sage.rings.number_field.S_unit_solver import c8_c9_func
sage: K.<xi> = NumberField(x^3-3)
sage: SUK = UnitGroup(K, S=tuple(K.primes_above(3)))
sage: v_fin = K.primes_above(3)[0]
sage: A = K.roots_of_unity()
sage: c8_c9_func(SUK, v_fin,A) # abs tol 1e-29
(4.524941291354698258804956696127e15, 1.621521281297160786545580368612e16)
sage.rings.number_field.S_unit_solver.clean_rfv_dict(rfv_dictionary)
Given a residue field vector dictionary, removes some impossible keys and entries.
INPUT:
• rfv_dictionary – a dictionary whose keys are exponent vectors and whose values are residue field vectors
OUTPUT:
None; the input dictionary is modified in place, with some keys removed.
Note
• The keys of a residue field vector dictionary are exponent vectors modulo (q-1) for some prime q.
• The values are residue field vectors. It is known that the entries of a residue field vector which comes from a solution to the S-unit equation cannot have 1 in any entry.
EXAMPLES:
In this example, we use a truncated list generated when solving the $$S$$-unit equation in the case that $$K$$ is defined by the polynomial $$x^2+x+1$$ and $$S$$ consists of the primes above 3:
sage: from sage.rings.number_field.S_unit_solver import clean_rfv_dict
sage: rfv_dict = {(1, 3): [3, 2], (3, 0): [6, 6], (5, 4): [3, 6], (2, 1): [4, 6], (5, 1): [3, 1], (2, 5): [1, 5], (0, 3): [1, 6]}
sage: len(rfv_dict)
7
sage: clean_rfv_dict(rfv_dict)
sage: len(rfv_dict)
4
sage: rfv_dict
{(1, 3): [3, 2], (2, 1): [4, 6], (3, 0): [6, 6], (5, 4): [3, 6]}
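The Note's criterion (a residue field vector coming from a genuine solution never contains the entry 1) suggests the following plain-Python sketch of the filtering step; this is an illustration of the idea, not the Sage source:

```python
def clean_rfv_dict_sketch(rfv):
    # Remove, in place, every key whose residue field vector contains 1:
    # such a vector cannot come from a solution of the S-unit equation.
    for key in [k for k, vec in rfv.items() if 1 in vec]:
        del rfv[key]

rfv = {(1, 3): [3, 2], (3, 0): [6, 6], (5, 4): [3, 6], (2, 1): [4, 6],
       (5, 1): [3, 1], (2, 5): [1, 5], (0, 3): [1, 6]}
clean_rfv_dict_sketch(rfv)
# the three entries containing a 1 are gone, leaving four keys
```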
sage.rings.number_field.S_unit_solver.clean_sfs(sfs_list)
Given a list of S-unit equation solutions, remove trivial redundancies.
INPUT:
• sfs_list – a list of solutions to the S-unit equation
OUTPUT:
A list of solutions to the S-unit equation
Note
The function looks for cases where x + y = 1 and y + x = 1 appear as separate solutions, and removes one of them.
EXAMPLES:
The function is not dependent on the number field and removes redundancies in any list.
sage: from sage.rings.number_field.S_unit_solver import clean_sfs
sage: sols = [((1, 0, 0), (0, 0, 1), -1, 2), ((0, 0, 1), (1, 0, 0), 2, -1)]
sage: clean_sfs( sols )
[((1, 0, 0), (0, 0, 1), -1, 2)]
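The redundancy being removed is the x + y = 1 versus y + x = 1 symmetry; in plain Python the idea looks like the following (an illustrative sketch, not the Sage implementation):

```python
def clean_sfs_sketch(sols):
    # A solution is a 4-tuple (ev_x, ev_y, x, y); swapping the two halves
    # gives the same equation written as y + x = 1, so keep only one copy.
    kept = []
    for s in sols:
        if (s[1], s[0], s[3], s[2]) not in kept:
            kept.append(s)
    return kept
```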
sage.rings.number_field.S_unit_solver.column_Log(SUK, iota, U, prec=106)
Return the log vector of iota; i.e., the logs of all the valuations
INPUT:
• SUK – a group of $$S$$-units
• iota – an element of K
• U – a list of places (finite or infinite) of K
• prec – the precision of the real field (default: 106)
OUTPUT:
The log vector as a list of real numbers
EXAMPLES:
sage: from sage.rings.number_field.S_unit_solver import column_Log
sage: K.<xi> = NumberField(x^3-3)
sage: S = tuple(K.primes_above(3))
sage: SUK = UnitGroup(K, S=S)
sage: phi_complex = K.places()[1]
sage: v_fin = S[0]
sage: U = [phi_complex, v_fin]
sage: column_Log(SUK, xi^2, U) # abs tol 1e-29
[1.464816384890812968648768625966, -2.197224577336219382790490473845]
sage.rings.number_field.S_unit_solver.compatible_system_lift(compatible_system, split_primes_list)
Given a compatible system of exponent vectors and complementary exponent vectors, return a lift to the integers.
INPUT:
• compatible_system – a list of pairs [ [v0, w0], [v1, w1], .., [vk, wk] ] where [vi, wi] is a pair of complementary exponent vectors modulo qi - 1, and all pairs are compatible.
• split_primes_list – a list of primes [ q0, q1, .., qk ]
OUTPUT:
A pair of vectors [v, w] satisfying:
1. v[0] == vi[0] for all i
2. w[0] == wi[0] for all i
3. v[j] == vi[j] modulo qi - 1 for all i and all j > 0
4. w[j] == wi[j] modulo qi - 1 for all i and all j > 0
5. every entry of v and w is bounded by L/2 in absolute value, where L is the least common multiple of {qi - 1 : qi in split_primes_list }
EXAMPLES:
sage: from sage.rings.number_field.S_unit_solver import compatible_system_lift
sage: split_primes_list = [3, 7]
sage: comp_sys = [[(0, 1, 0), (0, 1, 0)], [(0, 3, 4), (0, 1, 2)]]
sage: compatible_system_lift(comp_sys, split_primes_list)
[(0, 3, -2), (0, 1, 2)]
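The lift described by these conditions is a per-coordinate Chinese-remainder computation followed by centring each entry into the interval bounded by L/2. The following brute-force plain-Python sketch (illustrative only, written for two moduli) reproduces the documented lift for the moduli 2 and 6 coming from the split primes 3 and 7:

```python
from math import lcm

def crt_pair(r1, m1, r2, m2):
    # Smallest x in [0, lcm) with x ≡ r1 (mod m1) and x ≡ r2 (mod m2);
    # assumes the residues are compatible.
    L = lcm(m1, m2)
    for x in range(L):
        if x % m1 == r1 % m1 and x % m2 == r2 % m2:
            return x
    raise ValueError("incompatible residues")

def lift_vector(v1, m1, v2, m2):
    # Coordinate 0 must agree exactly; later coordinates are lifted by CRT
    # and centred so each entry is bounded by L/2 in absolute value.
    L = lcm(m1, m2)
    out = [v1[0]]
    for a, b in zip(v1[1:], v2[1:]):
        x = crt_pair(a, m1, b, m2)
        out.append(x - L if x > L // 2 else x)
    return tuple(out)

print(lift_vector((0, 1, 0), 2, (0, 3, 4), 6))  # (0, 3, -2)
```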
sage.rings.number_field.S_unit_solver.compatible_systems(split_prime_list, complement_exp_vec_dict)
Given dictionaries of complement exponent vectors for various primes that split in K, compute all possible compatible systems.
INPUT:
• split_prime_list – a list of rational primes that split completely in $$K$$
• complement_exp_vec_dict – a dictionary of dictionaries. The keys are primes from split_prime_list.
OUTPUT:
A list of compatible systems of exponent vectors.
Note
• For any q in split_prime_list, complement_exp_vec_dict[q] is a dictionary whose keys are exponent vectors modulo q-1 and whose values are lists of exponent vectors modulo q-1 which are complementary to the key.
• an item in system_list has the form [ [v0, w0], [v1, w1], ..., [vk, wk] ], where:
- qj = split_prime_list[j]
- vj and wj are complementary exponent vectors modulo qj - 1
- the pairs are all simultaneously compatible.
• Let H = lcm( qj - 1 : qj in split_primes_list ). Then for any compatible system, there is at most one pair of integer exponent vectors [v, w] such that:
- every entry of v and w is bounded in absolute value by H
- for any qj, v and vj agree modulo (qj - 1)
- for any qj, w and wj agree modulo (qj - 1)
EXAMPLES:
sage: from sage.rings.number_field.S_unit_solver import compatible_systems
sage: split_primes_list = [3, 7]
sage: checking_dict = {3: {(0, 1, 0): [(1, 0, 0)]}, 7: {(0, 1, 0): [(1, 0, 0)]}}
sage: compatible_systems(split_primes_list, checking_dict)
[[[(0, 1, 0), (1, 0, 0)], [(0, 1, 0), (1, 0, 0)]]]
sage.rings.number_field.S_unit_solver.compatible_vectors(a, m0, m1, g)
Given an exponent vector a modulo m0, returns an iterator over the exponent vectors for the modulus m1, such that a lift to the lcm modulus exists.
INPUT:
• a – an exponent vector for the modulus m0
• m0 – a positive integer (specifying the modulus for a)
• m1 – a positive integer (specifying the alternate modulus)
• g – the gcd of m0 and m1
OUTPUT:
An iterator over exponent vectors modulo m1 which are compatible with a.
Note
• Exponent vectors must agree exactly in the 0th position in order to be compatible.
EXAMPLES:
sage: from sage.rings.number_field.S_unit_solver import compatible_vectors
sage: a = (3, 1, 8, 1)
sage: list(compatible_vectors(a, 18, 12, gcd(18,12)))
[(3, 1, 2, 1),
(3, 1, 2, 7),
(3, 1, 8, 1),
(3, 1, 8, 7),
(3, 7, 2, 1),
(3, 7, 2, 7),
(3, 7, 8, 1),
(3, 7, 8, 7)]
The order of the moduli matters.
sage: len(list(compatible_vectors(a, 18, 12, gcd(18,12))))
8
sage: len(list(compatible_vectors(a, 12, 18, gcd(18,12))))
27
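The compatibility condition (exact agreement in coordinate 0, agreement modulo g = gcd(m0, m1) everywhere else) can be sketched in plain Python; this is an illustration of the enumeration, not the Sage source:

```python
from itertools import product
from math import gcd

def compatible_vectors_sketch(a, m0, m1):
    # Coordinate 0 is copied verbatim; every later coordinate may be any
    # residue mod m1 that agrees with a's entry modulo g = gcd(m0, m1).
    g = gcd(m0, m1)
    choices = [[b for b in range(m1) if b % g == ai % g] for ai in a[1:]]
    for tail in product(*choices):
        yield (a[0],) + tail

# 8 vectors mod 12 are compatible with (3, 1, 8, 1) mod 18, but 27 vectors
# mod 18 are compatible with it read mod 12 — the order of moduli matters.
print(len(list(compatible_vectors_sketch((3, 1, 8, 1), 18, 12))))  # 8
```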
sage.rings.number_field.S_unit_solver.compatible_vectors_check(a0, a1, g, l)
Given exponent vectors with respect to two moduli, determines if they are compatible.
INPUT:
• a0 – an exponent vector modulo m0
• a1 – an exponent vector modulo m1 (must have the same length as a0)
• g – the gcd of m0 and m1
• l – the length of a0 and of a1
OUTPUT:
True if there is an integer exponent vector a satisfying
$$a[0] = a_0[0] = a_1[0], \qquad a[1:] \equiv a_0[1:] \pmod{m_0}, \qquad a[1:] \equiv a_1[1:] \pmod{m_1}$$
and False otherwise.
Note
• Exponent vectors must agree exactly in the first coordinate.
• If exponent vectors are different lengths, an error is raised.
EXAMPLES:
sage: from sage.rings.number_field.S_unit_solver import compatible_vectors_check
sage: a0 = (3, 1, 8, 11)
sage: a1 = (3, 5, 6, 13)
sage: a2 = (5, 5, 6, 13)
sage: compatible_vectors_check(a0, a1, gcd(12, 22), 4r)
True
sage: compatible_vectors_check(a0, a2, gcd(12, 22), 4r)
False
sage.rings.number_field.S_unit_solver.construct_comp_exp_vec(rfv_to_ev_dict, q)
Constructs a dictionary associating complement vectors to residue field vectors.
INPUT:
• rfv_to_ev_dict – a dictionary whose keys are residue field vectors and whose values are lists of exponent vectors with the associated residue field vector.
• q – the characteristic of the residue field
OUTPUT:
A dictionary whose typical key is an exponent vector a, and whose associated value is a list of complementary exponent vectors to a.
EXAMPLES:
In this example, we use the list generated when solving the $$S$$-unit equation in the case that $$K$$ is defined by the polynomial $$x^2+x+1$$ and $$S$$ consists of the primes above 3
sage: from sage.rings.number_field.S_unit_solver import construct_comp_exp_vec
sage: rfv_to_ev_dict = {(6, 6): [(3, 0)], (5, 6): [(1, 2)], (5, 4): [(5, 3)], (6, 2): [(5, 5)], (2, 5): [(0, 1)], (5, 5): [(3, 4)], (4, 4): [(0, 2)], (6, 3): [(1, 4)], (3, 6): [(5, 4)], (2, 2): [(0, 4)], (3, 5): [(1, 0)], (6, 4): [(1, 1)], (3, 2): [(1, 3)], (2, 6): [(4, 5)], (4, 5): [(4, 3)], (2, 3): [(2, 3)], (4, 2): [(4, 0)], (6, 5): [(5, 2)], (3, 3): [(3, 2)], (5, 3): [(5, 0)], (4, 6): [(2, 1)], (3, 4): [(3, 5)], (4, 3): [(0, 5)], (5, 2): [(3, 1)], (2, 4): [(2, 0)]}
sage: construct_comp_exp_vec(rfv_to_ev_dict, 7)
{(0, 1): [(1, 4)],
(0, 2): [(0, 2)],
(0, 4): [(3, 0)],
(0, 5): [(4, 3)],
(1, 0): [(5, 0)],
(1, 1): [(2, 0)],
(1, 2): [(1, 3)],
(1, 3): [(1, 2)],
(1, 4): [(0, 1)],
(2, 0): [(1, 1)],
(2, 1): [(4, 0)],
(2, 3): [(5, 2)],
(3, 0): [(0, 4)],
(3, 1): [(5, 4)],
(3, 2): [(3, 4)],
(3, 4): [(3, 2)],
(3, 5): [(5, 3)],
(4, 0): [(2, 1)],
(4, 3): [(0, 5)],
(4, 5): [(5, 5)],
(5, 0): [(1, 0)],
(5, 2): [(2, 3)],
(5, 3): [(3, 5)],
(5, 4): [(3, 1)],
(5, 5): [(4, 5)]}
sage.rings.number_field.S_unit_solver.construct_complement_dictionaries(split_primes_list, SUK, verbose=False)
A function to construct the complement exponent vector dictionaries.
INPUT:
• split_primes_list – a list of rational primes which split completely in the number field $$K$$
• SUK – the $$S$$-unit group for a number field $$K$$
• verbose – a boolean to provide additional feedback (default: False)
OUTPUT:
A dictionary of dictionaries. The keys coincide with the primes in split_primes_list For each q, comp_exp_vec[q] is a dictionary whose keys are exponent vectors modulo q-1, and whose values are lists of exponent vectors modulo q-1
If w is an exponent vector in comp_exp_vec[q][v], then the residue field vectors modulo q for v and w sum to [1,1,...,1]
Note
• The data of comp_exp_vec will later be lifted to $$\mathbb{Z}$$ to look for true $$S$$-Unit equation solutions.
• During construction, the various dictionaries are compared to each other several times to eliminate as many mod $$q$$ solutions as possible.
• The authors acknowledge a helpful discussion with Norman Danner which helped formulate this code.
EXAMPLES:
sage: from sage.rings.number_field.S_unit_solver import construct_complement_dictionaries
sage: f = x^2 + 5
sage: H = 10
sage: K.<xi> = NumberField(f)
sage: SUK = K.S_unit_group(S=K.primes_above(H))
sage: split_primes_list = [3, 7]
sage: actual = construct_complement_dictionaries(split_primes_list, SUK)
sage: expected = {3: {(0, 1, 0): [(1, 0, 0), (0, 1, 0)],
....: (1, 0, 0): [(1, 0, 0), (0, 1, 0)]},
....: 7: {(0, 1, 0): [(1, 0, 0), (1, 4, 4), (1, 2, 2)],
....: (0, 1, 2): [(0, 1, 2), (0, 3, 4), (0, 5, 0)],
....: (0, 3, 2): [(1, 0, 0), (1, 4, 4), (1, 2, 2)],
....: (0, 3, 4): [(0, 1, 2), (0, 3, 4), (0, 5, 0)],
....: (0, 5, 0): [(0, 1, 2), (0, 3, 4), (0, 5, 0)],
....: (0, 5, 4): [(1, 0, 0), (1, 4, 4), (1, 2, 2)],
....: (1, 0, 0): [(0, 5, 4), (0, 3, 2), (0, 1, 0)],
....: (1, 0, 2): [(1, 0, 4), (1, 4, 2), (1, 2, 0)],
....: (1, 0, 4): [(1, 2, 4), (1, 4, 0), (1, 0, 2)],
....: (1, 2, 0): [(1, 2, 4), (1, 4, 0), (1, 0, 2)],
....: (1, 2, 2): [(0, 5, 4), (0, 3, 2), (0, 1, 0)],
....: (1, 2, 4): [(1, 0, 4), (1, 4, 2), (1, 2, 0)],
....: (1, 4, 0): [(1, 0, 4), (1, 4, 2), (1, 2, 0)],
....: (1, 4, 2): [(1, 2, 4), (1, 4, 0), (1, 0, 2)],
....: (1, 4, 4): [(0, 5, 4), (0, 3, 2), (0, 1, 0)]}}
sage: all(set(actual[p][vec]) == set(expected[p][vec]) for p in [3,7] for vec in expected[p])
True
sage.rings.number_field.S_unit_solver.construct_rfv_to_ev(rfv_dictionary, q, d, verbose=False)
Return a reverse lookup dictionary, to find the exponent vectors associated to a given residue field vector.
INPUT:
• rfv_dictionary – a dictionary whose keys are exponent vectors and whose values are the associated residue field vectors
• q – a prime (assumed to split completely in the relevant number field)
• d – the number of primes in $$K$$ above the rational prime q
• verbose – a boolean flag to indicate more detailed output is desired (default: False)
OUTPUT:
A dictionary P whose keys are residue field vectors and whose values are lists of all exponent vectors which correspond to the given residue field vector.
Note
• For example, if rfv_dictionary[ e0 ] = r0, then P[ r0 ] is a list which contains e0.
• During construction, some residue field vectors can be eliminated as coming from solutions to the $$S$$-unit equation. Such vectors are dropped from the keys of the dictionary P.
EXAMPLES:
In this example, we use a truncated list generated when solving the $$S$$-unit equation in the case that $$K$$ is defined by the polynomial $$x^2+x+1$$ and $$S$$ consists of the primes above 3:
sage: from sage.rings.number_field.S_unit_solver import construct_rfv_to_ev
sage: rfv_dict = {(1, 3): [3, 2], (3, 0): [6, 6], (5, 4): [3, 6], (2, 1): [4, 6], (4, 0): [4, 2], (1, 2): [5, 6]}
sage: construct_rfv_to_ev(rfv_dict,7,2,False)
{(3, 2): [(1, 3)], (4, 2): [(4, 0)], (4, 6): [(2, 1)], (5, 6): [(1, 2)]}
sage.rings.number_field.S_unit_solver.cx_LLL_bound(SUK, A, prec=106)
Return the maximum of all of the $$K_1$$’s as they are LLL-optimized for each infinite place $$v$$
INPUT:
• SUK – a group of $$S$$-units
• A – a list of all products of each potential a, b in the $$S$$-unit equation ax + by + 1 = 0 with each root of unity of K
• prec – precision of real field (default: 106)
OUTPUT:
A bound for the exponents at the infinite place, as a real number
EXAMPLES:
sage: from sage.rings.number_field.S_unit_solver import cx_LLL_bound
sage: K.<xi> = NumberField(x^3-3)
sage: SUK = UnitGroup(K,S=tuple(K.primes_above(3)))
sage: A = K.roots_of_unity()
sage: cx_LLL_bound(SUK,A) # long time
22
sage.rings.number_field.S_unit_solver.defining_polynomial_for_Kp(prime, prec=106)
INPUT:
• prime – a prime ideal of a number field $$K$$
• prec – a positive natural number (default: 106)
OUTPUT:
A polynomial with integer coefficients that is equivalent mod p^prec to a defining polynomial for the completion of $$K$$ associated to the specified prime.
Note
$$K$$ has to be an absolute extension
EXAMPLES:
sage: from sage.rings.number_field.S_unit_solver import defining_polynomial_for_Kp
sage: p2 = K.prime_above(7); p2
Fractional ideal (-2*a + 1)
sage: defining_polynomial_for_Kp(p2, 10)
x + 266983762
sage: K.<a> = QuadraticField(-6)
sage: p2 = K.prime_above(2); p2
Fractional ideal (2, a)
sage: defining_polynomial_for_Kp(p2, 100)
x^2 + 6
sage: p5 = K.prime_above(5); p5
Fractional ideal (5, a + 2)
sage: defining_polynomial_for_Kp(p5, 100)
x + 3408332191958133385114942613351834100964285496304040728906961917542037
sage.rings.number_field.S_unit_solver.drop_vector(ev, p, q, complement_ev_dict)
Determines if the exponent vector, ev, may be removed from the complement dictionary during construction. This will occur if ev is not compatible with an exponent vector mod q-1.
INPUT:
• ev – an exponent vector modulo p - 1
• p – the prime such that ev is an exponent vector modulo p-1
• q – a prime, distinct from p, that is a key in the complement_ev_dict
• complement_ev_dict – a dictionary of dictionaries, whose keys are primes complement_ev_dict[q] is a dictionary whose keys are exponent vectors modulo q-1 and whose values are lists of complementary exponent vectors modulo q-1
OUTPUT:
Returns True if ev may be dropped from the complement exponent vector dictionary, and False if not.
Note
• If ev is not compatible with any of the vectors modulo q-1, then it can no longer correspond to a solution of the $$S$$-unit equation. It returns True to indicate that it should be removed.
EXAMPLES:
sage: from sage.rings.number_field.S_unit_solver import drop_vector
sage: drop_vector((1, 2, 5), 7, 11, {11: {(1, 1, 3): [(1, 1, 3),(2, 3, 4)]}})
True
sage: P={3: {(1, 0, 0): [(1, 0, 0), (0, 1, 0)], (0, 1, 0): [(1, 0, 0), (0, 1, 0)]}, 7: {(0, 3, 4): [(0, 1, 2), (0, 3, 4), (0, 5, 0)], (1, 2, 4): [(1, 0, 4), (1, 4, 2), (1, 2, 0)], (0, 1, 2): [(0, 1, 2), (0, 3, 4), (0, 5, 0)], (0, 5, 4): [(1, 0, 0), (1, 4, 4), (1, 2, 2)], (1, 4, 2): [(1, 2, 4), (1, 4, 0), (1, 0, 2)], (1, 0, 4): [(1, 2, 4), (1, 4, 0), (1, 0, 2)], (0, 3, 2): [(1, 0, 0), (1, 4, 4), (1, 2, 2)], (1, 0, 0): [(0, 5, 4), (0, 3, 2), (0, 1, 0)], (1, 2, 0): [(1, 2, 4), (1, 4, 0), (1, 0, 2)], (0, 1, 0): [(1, 0, 0), (1, 4, 4), (1, 2, 2)], (0, 5, 0): [(0, 1, 2), (0, 3, 4), (0, 5, 0)], (1, 2, 2): [(0, 5, 4), (0, 3, 2), (0, 1, 0)], (1, 4, 0): [(1, 0, 4), (1, 4, 2), (1, 2, 0)], (1, 0, 2): [(1, 0, 4), (1, 4, 2), (1, 2, 0)], (1, 4, 4): [(0, 5, 4), (0, 3, 2), (0, 1, 0)]}}
sage: drop_vector((0,1,0),3,7,P)
False
sage.rings.number_field.S_unit_solver.embedding_to_Kp(a, prime, prec)
INPUT:
• a – an element of a number field $$K$$
• prime – a prime ideal of $$K$$
• prec – a positive natural number
OUTPUT:
An element of $$K$$ that is equivalent to a modulo p^(prec) and the generator of $$K$$ appears with exponent less than $$e \cdot f$$, where p is the rational prime below prime and $$e,f$$ are the ramification index and residue degree, respectively.
Note
$$K$$ has to be an absolute number field
EXAMPLES:
sage: from sage.rings.number_field.S_unit_solver import embedding_to_Kp
sage: p = K.prime_above(13); p
Fractional ideal (-a + 2)
sage: embedding_to_Kp(a-3, p, 15)
-20542890112375827
sage: K.<a> = NumberField(x^4-2)
sage: p = K.prime_above(7); p
Fractional ideal (-a^2 + a - 1)
sage: embedding_to_Kp(a^3-3, p, 15)
-1261985118949117459462968282807202378
sage.rings.number_field.S_unit_solver.eq_up_to_order(A, B)
If A and B are lists of four-tuples [a0,a1,a2,a3] and [b0,b1,b2,b3], check whether, up to reordering the lists, each tuple of A matches a tuple of B either exactly (ai == bi for all i) or with the halves swapped (a0 == b1, a1 == b0, a2 == b3, a3 == b2).
The entries must be hashable.
EXAMPLES:
sage: from sage.rings.number_field.S_unit_solver import eq_up_to_order
sage: L = [(1,2,3,4),(5,6,7,8)]
sage: L1 = [L[1],L[0]]
sage: L2 = [(2,1,4,3),(6,5,8,7)]
sage: eq_up_to_order(L, L1)
True
sage: eq_up_to_order(L, L2)
True
sage: eq_up_to_order(L, [(1,2,4,3),(5,6,8,7)])
False
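The check can be phrased as canonicalising each tuple under the swap and comparing multisets. A plain-Python sketch follows (illustrative only; it assumes the entries are orderable, whereas the Sage function only requires them to be hashable):

```python
def eq_up_to_order_sketch(A, B):
    # (a0, a1, a2, a3) and (a1, a0, a3, a2) describe the same solution,
    # so map each tuple to the smaller of the two before comparing.
    def canon(t):
        return min(t, (t[1], t[0], t[3], t[2]))
    return sorted(canon(t) for t in A) == sorted(canon(t) for t in B)
```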
sage.rings.number_field.S_unit_solver.log_p(a, prime, prec)
INPUT:
• a – an element of a number field $$K$$
• prime – a prime ideal of the number field $$K$$
• prec – a positive integer
OUTPUT:
An element of $$K$$ which is congruent to the prime-adic logarithm of a with respect to prime modulo p^prec, where p is the rational prime below prime
Note
Here we take into account the other primes in $$K$$ above $$p$$ in order to get coefficients with small values
EXAMPLES:
sage: from sage.rings.number_field.S_unit_solver import log_p
sage: K.<a> = NumberField(x^2+14)
sage: p1 = K.primes_above(3)[0]
sage: p1
Fractional ideal (3, a + 1)
sage: log_p(a+2, p1, 20)
8255385638/3*a + 15567609440/3
sage: K.<a> = NumberField(x^4+14)
sage: p1 = K.primes_above(5)[0]
sage: p1
Fractional ideal (5, a + 1)
sage: log_p(1/(a^2-4), p1, 30)
-42392683853751591352946/25*a^3 - 113099841599709611260219/25*a^2 -
8496494127064033599196/5*a - 18774052619501226990432/25
sage.rings.number_field.S_unit_solver.log_p_series_part(a, prime, prec)
INPUT:
• a – an element of a number field $$K$$
• prime – a prime ideal of the number field $$K$$
• prec – a positive integer
OUTPUT:
The series part of the prime-adic logarithm of a, computed to accuracy p^prec, where p is the rational prime below prime
ALGORITHM:
The algorithm is based on the algorithm on page 30 of [Sma1998]
EXAMPLES:
sage: from sage.rings.number_field.S_unit_solver import log_p_series_part
sage: K.<a> = NumberField(x^2-5)
sage: p1 = K.primes_above(3)[0]
sage: p1
Fractional ideal (3)
sage: log_p_series_part(a^2-a+1, p1, 30)
120042736778562*a + 263389019530092
sage: K.<a> = NumberField(x^4+14)
sage: p1 = K.primes_above(5)[0]
sage: p1
Fractional ideal (5, a + 1)
sage: log_p_series_part(1/(a^2-4), p1, 30)
5628940883264585369224688048459896543498793204839654215019548600621221950915106576555819252366183605504671859902129729380543157757424169844382836287443485157589362653561119898762509175000557196963413830027960725069496503331353532893643983455103456070939403472988282153160667807627271637196608813155377280943180966078/1846595723557147156151786152499366687569722744011302407020455809280594038056223852568951718462474153951672335866715654153523843955513167531739386582686114545823305161128297234887329119860255600972561534713008376312342295724191173957260256352612807316114669486939448006523889489471912384033203125*a^2 + 2351432413692022254066438266577100183514828004415905040437326602004946930635942233146528817325416948515797296867947688356616798913401046136899081536181084767344346480810627200495531180794326634382675252631839139904967037478184840941275812058242995052383261849064340050686841429735092777331963400618255005895650200107/1846595723557147156151786152499366687569722744011302407020455809280594038056223852568951718462474153951672335866715654153523843955513167531739386582686114545823305161128297234887329119860255600972561534713008376312342295724191173957260256352612807316114669486939448006523889489471912384033203125
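The underlying series $$\log(1+x) = \sum_{k \ge 1} (-1)^{k+1} x^k / k$$ can be sketched in plain Python for the simplest setting, the p-adic logarithm over $$\mathbb{Q}$$ (log1p_padic is an illustrative helper, not the Sage implementation). Because term $$k$$ has p-adic valuation at least $$k - \log_p k$$, truncating well past prec terms suffices, and the result is checked via the identity $$\log(u^2) = 2\log(u)$$.

```python
from fractions import Fraction

def log1p_padic(x, p, prec):
    """p-adic log(1+x) for an integer x divisible by the odd prime p,
    returned as an integer modulo p^prec."""
    assert x % p == 0
    n_terms = 2 * prec + 10          # term k has valuation >= k - log_p(k)
    s = Fraction(0)
    term = Fraction(1)
    for k in range(1, n_terms + 1):
        term *= x                    # term == x^k
        s += Fraction((-1) ** (k + 1), k) * term
    modulus = p ** prec
    # s has positive p-adic valuation, so its reduced denominator
    # is a unit modulo p^prec and can be inverted there.
    return s.numerator * pow(s.denominator, -1, modulus) % modulus

# Sanity check via log(u^2) = 2 log(u) with u = 1 + 3, u^2 = 1 + 15:
p, prec = 3, 10
assert (log1p_padic(15, p, prec) - 2 * log1p_padic(3, p, prec)) % p**prec == 0
```

The same series drives log_p_series_part, which works in the completion of a number field rather than over $$\mathbb{Q}$$.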
sage.rings.number_field.S_unit_solver.minimal_vector(A, y, prec=106)
INPUT:
• A : a square n by n non-singular integer matrix whose rows generate a lattice $$\mathcal L$$
• y : a row (1 by n) vector with integer coordinates
• prec : precision of real field (default: 106)
OUTPUT:
A lower bound for the square of
$\ell (\mathcal L,\vec y) = \begin{cases} \displaystyle\min_{\vec x\in\mathcal L}\Vert\vec x-\vec y\Vert, & \vec y\not\in\mathcal L, \\ \displaystyle\min_{0\neq\vec x\in\mathcal L}\Vert\vec x\Vert, & \vec y\in\mathcal L. \end{cases}$
ALGORITHM:
The algorithm is based on V.9 and V.10 of [Sma1998]
EXAMPLES:
sage: from sage.rings.number_field.S_unit_solver import minimal_vector
sage: B = matrix(ZZ, 2, [1,1,1,0])
sage: y = vector(ZZ, [2,1])
sage: minimal_vector(B, y)
1/2
sage: B = random_matrix(ZZ, 3)
sage: B #random
[-2 -1 -1]
[ 1 1 -2]
[ 6 1 -1]
sage: y = vector([1, 2, 100])
sage: minimal_vector(B, y) #random
15/28
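The reported value is a lower bound, which can be sanity-checked against a brute-force computation of $$\ell(\mathcal L, \vec y)^2$$ in plain Python (ell_squared is an illustrative helper, not part of Sage, and enumerates only a small box of coefficient vectors):

```python
from itertools import product

def ell_squared(rows, y, box=8):
    """Exact value of ell(L, y)^2 by brute force over small integer
    coefficient vectors -- only to sanity-check the bound."""
    n = len(rows)
    pts = set()
    for coeffs in product(range(-box, box + 1), repeat=n):
        pts.add(tuple(sum(c * rows[i][j] for i, c in enumerate(coeffs))
                      for j in range(n)))
    if tuple(y) in pts:   # y in the lattice: shortest nonzero vector
        return min(sum(t * t for t in v) for v in pts if any(v))
    return min(sum((v[j] - y[j]) ** 2 for j in range(n)) for v in pts)

# For B = [[1,1],[1,0]] and y = (2,1), minimal_vector reports the lower
# bound 1/2; here y lies in the lattice and the true value of ell^2 is 1,
# consistent with 1/2 <= 1.
assert ell_squared([[1, 1], [1, 0]], (2, 1)) == 1
```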
sage.rings.number_field.S_unit_solver.mus(SUK, v)
Return a list $$[\mu]$$, for $$\mu$$ defined on pp. 824-825 of TCDF, [Sma1995]
INPUT:
• SUK – a group of $$S$$-units
• v – a finite place of K
OUTPUT:
A list [mus] where each mu is an element of K
EXAMPLES:
sage: from sage.rings.number_field.S_unit_solver import mus
sage: K.<xi> = NumberField(x^3-3)
sage: SUK = UnitGroup(K, S=tuple(K.primes_above(3)))
sage: v_fin = tuple(K.primes_above(3))[0]
sage: mus(SUK, v_fin)
[xi^2 - 2]
REFERENCES:
• [Sma1995] pp. 824-825
sage.rings.number_field.S_unit_solver.p_adic_LLL_bound(SUK, A, prec=106)
Return the maximum of all of the $$K_0$$’s as they are LLL-optimized for each finite place $$v$$
INPUT:
• SUK – a group of $$S$$-units
• A – a list of all products of each potential a, b in the $$S$$-unit equation ax + by + 1 = 0 with each root of unity of K
• prec– precision for p-adic LLL calculations (default: 106)
OUTPUT:
A bound for the maximum of the exponents in the case that the extremal place is finite (see [Sma1995]), as a real number
EXAMPLES:
sage: from sage.rings.number_field.S_unit_solver import p_adic_LLL_bound
sage: K.<xi> = NumberField(x^3-3)
sage: SUK = UnitGroup(K,S=tuple(K.primes_above(3)))
sage: A = SUK.roots_of_unity()
sage: prec = 100
sage: p_adic_LLL_bound(SUK, A, prec)
89
sage.rings.number_field.S_unit_solver.p_adic_LLL_bound_one_prime(prime, B0, M, M_logp, m0, c3, prec=106)
INPUT:
• prime – a prime ideal of a number field $$K$$
• B0 – the initial bound
• M – a list of elements of $$K$$, the $$\mu_i$$’s from Lemma IX.3 of [Sma1998]
• M_logp – the p-adic logarithm of elements in $$M$$
• m0 – an element of $$K$$, this is $$\mu_0$$ from Lemma IX.3 of [Sma1998]
• c3 – a positive real constant
• prec – the precision of the calculations (default: 106)
OUTPUT:
A pair consisting of:
1. a new upper bound, an integer
2. a boolean value, True if we have to increase precision, otherwise False
Note
The constant $$c_5$$ is the constant $$c_5$$ at the page 89 of [Sma1998] which is equal to the constant $$c_{10}$$ at the page 139 of [Sma1995]. In this function, the $$c_i$$ constants are in line with [Sma1998], but generally differ from the constants in [Sma1995] and other parts of this code.
EXAMPLES:
This example indicates a case where we must increase precision:
sage: from sage.rings.number_field.S_unit_solver import p_adic_LLL_bound_one_prime
sage: prec = 50
sage: K.<a> = NumberField(x^3-3)
sage: S = tuple(K.primes_above(3))
sage: SUK = UnitGroup(K, S=S)
sage: v = S[0]
sage: A = SUK.roots_of_unity()
sage: K0_old = 9.4755766731093e17
sage: Mus = [a^2 - 2]
sage: Log_p_Mus = [185056824593551109742400*a^2 + 1389583284398773572269676*a + 717897987691852588770249]
sage: mu0 = K(-1)
sage: c3_value = 0.42578591347980
sage: m0_Kv_new, increase_precision = p_adic_LLL_bound_one_prime(v, K0_old, Mus, Log_p_Mus, mu0, c3_value, prec)
sage: m0_Kv_new
0
sage: increase_precision
True
And now we increase the precision to make it all work:
sage: prec = 106
sage: K0_old = 9.475576673109275443280257946930e17
sage: Log_p_Mus = [1029563604390986737334686387890424583658678662701816*a^2 + 661450700156368458475507052066889190195530948403866*a]
sage: c3_value = 0.4257859134798034746197327286726
sage: m0_Kv_new, increase_precision = p_adic_LLL_bound_one_prime(v, K0_old, Mus, Log_p_Mus, mu0, c3_value, prec)
sage: m0_Kv_new
476
sage: increase_precision
False
sage.rings.number_field.S_unit_solver.possible_mu0s(SUK, v)
Return a list $$[\mu_0]$$ of all possible $$\mu_0$$ values defined on pp. 824-825 of TCDF, [Sma1995]
INPUT:
• SUK – a group of $$S$$-units
• v – a finite place of K
OUTPUT:
A list [mu0s] where each mu0 is an element of K
EXAMPLES:
sage: from sage.rings.number_field.S_unit_solver import possible_mu0s
sage: K.<xi> = NumberField(x^3-3)
sage: S = tuple(K.primes_above(3))
sage: SUK = UnitGroup(K, S=S)
sage: v_fin = S[0]
sage: possible_mu0s(SUK,v_fin)
[-1, 1]
Note
$$n_0$$ is the valuation of the coefficient $$\alpha_d$$ of the $$S$$-unit equation such that $$|\alpha_d \tau_d|_v = 1$$. We have set $$n_0 = 0$$ here since the coefficients are roots of unity. $$\alpha_0$$ is not defined in the paper; we set it to be 1.
REFERENCES:
• [Sma1995] pp. 824-825, but we modify the definition of sigma (sigma_tilde) to make it easier to code
sage.rings.number_field.S_unit_solver.reduction_step_complex_case(place, B0, G, g0, c7)
INPUT:
• place – (ring morphism) a complex place of a number field $$K$$
• B0 – the initial bound
• G – a set of generators of the free part of the group
• g0 – an element of the torsion part of the group
• c7 – a positive real number
OUTPUT:
A tuple consisting of:
1. a new upper bound, an integer
2. a boolean value, True if we have to increase precision, otherwise False
Note
The constant c7 is the constant $$c_7$$ on page 138 of the reference.
REFERENCES:
See [Sma1998].
EXAMPLES:
sage: from sage.rings.number_field.S_unit_solver import reduction_step_complex_case
sage: K.<a> = NumberField([x^3-2])
sage: SK = sum([K.primes_above(p) for p in [2,3,5]],[])
sage: G = [g for g in K.S_unit_group(S=SK).gens_values() if g.multiplicative_order()==Infinity]
sage: p1 = K.places(prec=100)[1]
sage: reduction_step_complex_case(p1, 10^5, G, -1, 2)
(17, False)
sage.rings.number_field.S_unit_solver.reduction_step_real_case(place, B0, G, c7)
INPUT:
• place – (ring morphism) a real place of a number field $$K$$
• B0 – the initial bound
• G – a set of generators of the free part of the group
• c7 – a positive real number
OUTPUT:
A tuple consisting of:
1. a new upper bound, an integer
2. a boolean value, True if we have to increase precision, otherwise False
Note
The constant c7 is the constant $$c_7$$ on page 137 of the reference.
REFERENCES:
See [Sma1998].
EXAMPLES:
sage: from sage.rings.number_field.S_unit_solver import reduction_step_real_case
sage: K.<a> = NumberField(x^3-2)
sage: SK = sum([K.primes_above(p) for p in [2,3,5]],[])
sage: G = [g for g in K.S_unit_group(S=SK).gens_values() if g.multiplicative_order()==Infinity]
sage: p1 = K.real_places(prec=300)[0]
sage: reduction_step_real_case(p1, 10**10, G, 2)
(58, False)
sage.rings.number_field.S_unit_solver.sieve_below_bound(K, S, bound=10, bump=10, split_primes_list=[], verbose=False)
Return all solutions to the S-unit equation x + y = 1 over K with exponents below the given bound.
INPUT:
• K – a number field (an absolute extension of the rationals)
• S – a list of finite primes of K
• bound – a positive integer upper bound for exponents, solutions with exponents having absolute value below this bound will be found (default: 10)
• bump – a positive integer by which the minimum LCM will be increased if not enough split primes are found in sieving step (default: 10)
• split_primes_list – a list of rational primes that split completely in the extension K/Q, used for sieving. For a complete list of solutions, the lcm of {p_i - 1} over these primes p_i should be greater than bound (default: [])
• verbose – an optional parameter allowing the user to print information during the sieving process (default: False)
OUTPUT:
A list of tuples [( A_1, B_1, x_1, y_1), (A_2, B_2, x_2, y_2), ... ( A_n, B_n, x_n, y_n)] such that:
1. The first two entries are tuples A_i = (a_0, a_1, ... , a_t) and B_i = (b_0, b_1, ... , b_t) of exponents.
2. The last two entries are S-units x_i and y_i in K with x_i + y_i = 1.
3. If the default generators for the S-units of K are (rho_0, rho_1, ... , rho_t), then these satisfy x_i = \prod(rho_i)^(a_i) and y_i = \prod(rho_i)^(b_i).
EXAMPLES:
sage: from sage.rings.number_field.S_unit_solver import sieve_below_bound, eq_up_to_order
sage: K.<xi> = NumberField(x^2+x+1)
sage: SUK = UnitGroup(K,S=tuple(K.primes_above(3)))
sage: S = SUK.primes()
sage: sols = sieve_below_bound(K, S, 10)
sage: expected = [
....: ((5, -1), (4, -1), 1/3*xi + 2/3, -1/3*xi + 1/3),
....: ((2, 1), (4, 0), xi + 2, -xi - 1),
....: ((2, 0), (1, 1), xi, -xi + 1),
....: ((5, 0), (1, 0), -xi, xi + 1)]
sage: eq_up_to_order(sols, expected)
True
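Each listed solution can be checked numerically by embedding $$\xi$$ as a primitive cube root of unity (a floating-point sanity check in plain Python, independent of Sage):

```python
import cmath

# xi is a root of x^2 + x + 1, embedded numerically as exp(2*pi*i/3)
xi = cmath.exp(2j * cmath.pi / 3)
solutions = [
    (xi / 3 + 2 / 3, -xi / 3 + 1 / 3),
    (xi + 2, -xi - 1),
    (xi, -xi + 1),
    (-xi, xi + 1),
]
for x_val, y_val in solutions:
    assert abs(x_val + y_val - 1) < 1e-12   # each pair solves x + y = 1
```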
sage.rings.number_field.S_unit_solver.sieve_ordering(SUK, q)
Return ordered data for running the sieve on the primes of $$SUK$$ over the rational prime $$q$$.
INPUT:
• SUK – the $$S$$-unit group of a number field $$K$$
• q – a rational prime number which splits completely in $$K$$
OUTPUT:
A list of tuples, [ideals_over_q, residue_fields, rho_images, product_rho_orders], where
1. ideals_over_q is a list of the $$d = [K:\mathbb{Q}]$$ ideals in $$K$$ over $$q$$
2. residue_fields[i] is the residue field of ideals_over_q[i]
3. rho_images[i] is a list of the reductions of the generators of the $$S$$-unit group, modulo ideals_over_q[i]
4. product_rho_orders[i] is the product of the multiplicative orders of the elements in rho_images[i]
Note
• The list ideals_over_q is sorted so that the product of orders is smallest for ideals_over_q[0], as this will make the later sieving steps more efficient.
• The primes of S must not lie over q.
EXAMPLES:
sage: from sage.rings.number_field.S_unit_solver import sieve_ordering
sage: K.<xi> = NumberField(x^3 - 3*x + 1)
sage: SUK = K.S_unit_group(S=3)
sage: sieve_data = list(sieve_ordering(SUK, 19))
sage: sieve_data[0]
(Fractional ideal (-2*xi^2 + 3),
Fractional ideal (xi - 3),
Fractional ideal (2*xi + 1))
sage: sieve_data[1]
(Residue field of Fractional ideal (-2*xi^2 + 3),
Residue field of Fractional ideal (xi - 3),
Residue field of Fractional ideal (2*xi + 1))
sage: sieve_data[2]
([18, 9, 16, 8], [18, 7, 10, 4], [18, 3, 12, 10])
sage: sieve_data[3]
(972, 972, 3888)
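Since $$q = 19$$ splits completely, every residue field above is $$\mathbb{F}_{19}$$, so the product_rho_orders entries can be recomputed directly from the rho_images in plain Python:

```python
def mult_order(a, p):
    """Multiplicative order of a modulo the prime p."""
    x, o = a % p, 1
    while x != 1:
        x = x * a % p
        o += 1
    return o

# Recompute each product of multiplicative orders reported above.
data = [([18, 9, 16, 8], 972), ([18, 7, 10, 4], 972), ([18, 3, 12, 10], 3888)]
for images, expected in data:
    prod = 1
    for a in images:
        prod *= mult_order(a, 19)
    assert prod == expected
```

This also makes the ordering visible: the ideal with product 3888 is placed last, since smaller products mean cheaper sieving.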
sage.rings.number_field.S_unit_solver.solutions_from_systems(SUK, bound, cs_list, split_primes_list)
Lift compatible systems to the integers and return the S-unit equation solutions that the lifts yield.
INPUT:
• SUK – the group of $$S$$-units where we search for solutions
• bound – a bound for the entries of all entries of all lifts
• cs_list – a list of compatible systems of exponent vectors modulo $$q-1$$ for various primes $$q$$
• split_primes_list – a list of primes giving the moduli of the exponent vectors in cs_list
OUTPUT:
A list of solutions to the S-unit equation. Each solution is a list:
1. an exponent vector over the integers, ev
2. an exponent vector over the integers, cv
3. the S-unit corresponding to ev, iota_exp
4. the S-unit corresponding to cv, iota_comp
Note
• Every entry of ev is less than or equal to bound in absolute value
• Every entry of cv is less than or equal to bound in absolute value
• iota_exp + iota_comp == 1
EXAMPLES:
Given a single compatible system, a solution can be found.
sage: from sage.rings.number_field.S_unit_solver import solutions_from_systems
sage: K.<xi> = NumberField(x^2-15)
sage: SUK = K.S_unit_group(S=K.primes_above(2))
sage: split_primes_list = [7, 17]
sage: a_compatible_system = [[[(0, 0, 5), (0, 0, 5)], [(0, 0, 15), (0, 0, 15)]]]
sage: solutions_from_systems( SUK, 20, a_compatible_system, split_primes_list )
[((0, 0, -1), (0, 0, -1), 1/2, 1/2)]
sage.rings.number_field.S_unit_solver.solve_S_unit_equation(K, S, prec=106, include_exponents=True, include_bound=False, proof=None, verbose=False)
Return all solutions to the S-unit equation x + y = 1 over K.
INPUT:
• K – a number field (an absolute extension of the rationals)
• S – a list of finite primes of K
• prec – precision used for computations in real, complex, and p-adic fields (default: 106)
• include_exponents – whether to include the exponent vectors in the returned value (default: True).
• include_bound – whether to return the final computed bound (default: False)
• verbose – whether to print information during the sieving step (default: False)
OUTPUT:
A list of tuples [( A_1, B_1, x_1, y_1), (A_2, B_2, x_2, y_2), ... ( A_n, B_n, x_n, y_n)] such that:
1. The first two entries are tuples A_i = (a_0, a_1, ... , a_t) and B_i = (b_0, b_1, ... , b_t) of exponents. These will be omitted if include_exponents is False.
2. The last two entries are S-units x_i and y_i in K with x_i + y_i = 1.
3. If the default generators for the S-units of K are (rho_0, rho_1, ... , rho_t), then these satisfy x_i = \prod(rho_i)^(a_i) and y_i = \prod(rho_i)^(b_i).
If include_bound, will return a pair (sols, bound) where sols is as above and bound is the bound used for the entries in the exponent vectors.
EXAMPLES:
sage: from sage.rings.number_field.S_unit_solver import solve_S_unit_equation, eq_up_to_order
sage: K.<xi> = NumberField(x^2+x+1)
sage: S = K.primes_above(3)
sage: sols = solve_S_unit_equation(K, S, 200)
sage: expected = [
....: ((2, 1), (4, 0), xi + 2, -xi - 1),
....: ((5, -1), (4, -1), 1/3*xi + 2/3, -1/3*xi + 1/3),
....: ((5, 0), (1, 0), -xi, xi + 1),
....: ((1, 1), (2, 0), -xi + 1, xi)]
sage: eq_up_to_order(sols, expected)
True
In order to see the bound as well use the optional parameter include_bound:
sage: solutions, bound = solve_S_unit_equation(K, S, 100, include_bound=True)
sage: bound
2
You can omit the exponent vectors:
sage: sols = solve_S_unit_equation(K, S, 200, include_exponents=False)
sage: expected = [(xi + 2, -xi - 1), (1/3*xi + 2/3, -1/3*xi + 1/3), (-xi, xi + 1), (-xi + 1, xi)]
sage: set(frozenset(a) for a in sols) == set(frozenset(b) for b in expected)
True
It is an error to use values in S that are not primes in K:
sage: solve_S_unit_equation(K, [3], 200)
Traceback (most recent call last):
...
ValueError: S must consist only of prime ideals, or a single element from which a prime ideal can be constructed.
We check the case that the rank is 0:
sage: K.<xi> = NumberField(x^2+x+1)
sage: solve_S_unit_equation(K, [])
[((1,), (5,), xi + 1, -xi)]
sage.rings.number_field.S_unit_solver.split_primes_large_lcm(SUK, bound)
Return a list L of rational primes $$q$$ which split completely in $$K$$ and which have desirable properties (see NOTE).
INPUT:
• SUK – the $$S$$-unit group of an absolute number field $$K$$.
• bound – a positive integer
OUTPUT:
A list $$L$$ of rational primes $$q$$, with the following properties:
• each prime $$q$$ in $$L$$ splits completely in $$K$$
• if $$Q$$ is a prime in $$S$$ and $$q$$ is the rational prime below $$Q$$, then $$q$$ is not in $$L$$
• the value lcm { q-1 : q in L } is greater than or equal to 2*bound + 1.
Note
• A series of compatible exponent vectors for the primes in $$L$$ will lift to at most one integer exponent vector whose entries $$a_i$$ satisfy $$|a_i|$$ is less than or equal to bound.
• The ordering of this set is not very intelligent for the purposes of the later sieving processes.
EXAMPLES:
sage: from sage.rings.number_field.S_unit_solver import split_primes_large_lcm
sage: K.<xi> = NumberField(x^3 - 3*x + 1)
sage: S = K.primes_above(3)
sage: SUK = UnitGroup(K,S=tuple(S))
sage: split_primes_large_lcm(SUK, 200)
[17, 19, 37, 53]
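The lcm stopping rule can be sketched in plain Python. The splitting test used below — that a rational prime $$q$$ splits completely in the field defined by $$x^3 - 3x + 1$$ exactly when $$q \equiv \pm 1 \pmod 9$$ — is an assumption made only for this illustration; the Sage function determines splitting from the field itself.

```python
from math import gcd

def is_prime(n):
    """Trial-division primality test (sufficient for small q)."""
    if n < 2:
        return False
    d = 2
    while d * d <= n:
        if n % d == 0:
            return False
        d += 1
    return True

def split_primes_large_lcm_sketch(splits, avoid, bound):
    """Collect split primes q, skipping rational primes in `avoid`
    (those below primes of S), until lcm{q - 1} >= 2*bound + 1."""
    found, current_lcm, q = [], 1, 2
    while current_lcm < 2 * bound + 1:
        q += 1
        if is_prime(q) and q not in avoid and splits(q):
            found.append(q)
            current_lcm = current_lcm * (q - 1) // gcd(current_lcm, q - 1)
    return found

primes = split_primes_large_lcm_sketch(lambda q: q % 9 in (1, 8), {3}, 200)
```

With bound 200 this reproduces the list [17, 19, 37, 53] shown above, since lcm(16, 18, 36, 52) = 1872 is the first running lcm to reach 2*200 + 1 = 401.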
With a tiny bound, Sage may ask you to increase the bound:
sage: from sage.rings.number_field.S_unit_solver import split_primes_large_lcm
sage: K.<xi> = NumberField(x^2 + 163)
sage: SUK = UnitGroup(K, S=tuple(K.primes_above(23)))
sage: split_primes_large_lcm(SUK, 8)
Traceback (most recent call last):
...
ValueError: Not enough split primes found. Increase bound.
https://ask.streamsets.com/answers/204/revisions/
Did you check the log file? There should be log lines containing stderr: and stdout: that show the standard error and output streams from your process. That should give some clue as to what went wrong in the script, such that it returned an error status.
https://artofproblemsolving.com/wiki/index.php/1997_AIME_Problems/Problem_11
# 1997 AIME Problems/Problem 11
## Problem 11
Let $x=\frac{\sum\limits_{n=1}^{44} \cos n^\circ}{\sum\limits_{n=1}^{44} \sin n^\circ}$. What is the greatest integer that does not exceed $100x$?
## Solution
### Solution 1
Note that $\frac{\sum_{n=1}^{44} \cos n}{\sum_{n=1}^{44} \sin n} = \frac {\cos 1 + \cos 2 + \dots + \cos 44}{\cos 89 + \cos 88 + \dots + \cos 46}$
Now use the sum-product formula $\cos x + \cos y = 2\cos(\frac{x+y}{2})\cos(\frac{x-y}{2})$. We want to pair up $[1, 44]$, $[2, 43]$, $[3, 42]$, etc. from the numerator and $[46, 89]$, $[47, 88]$, $[48, 87]$ etc. from the denominator. Then we get: $$\frac{\sum_{n=1}^{44} \cos n}{\sum_{n=1}^{44} \sin n} = \frac{2\cos(\frac{45}{2})[\cos(\frac{43}{2})+\cos(\frac{41}{2})+\dots+\cos(\frac{1}{2})]}{2\cos(\frac{135}{2})[\cos(\frac{43}{2})+\cos(\frac{41}{2})+\dots+\cos(\frac{1}{2})]} \Rightarrow \frac{\cos(\frac{45}{2})}{\cos(\frac{135}{2})}$$
To calculate this number, use the half angle formula. Since $\cos(\frac{x}{2}) = \pm \sqrt{\frac{\cos x + 1}{2}}$, then our number becomes: $$\frac{\sqrt{\frac{\frac{\sqrt{2}}{2} + 1}{2}}}{\sqrt{\frac{\frac{-\sqrt{2}}{2} + 1}{2}}}$$ in which we drop the negative roots (as it is clear cosine of $22.5$ and $67.5$ are positive). We can easily simplify this:
$\begin{eqnarray*} \frac{\sqrt{\frac{\frac{\sqrt{2}}{2} + 1}{2}}}{\sqrt{\frac{\frac{-\sqrt{2}}{2} + 1}{2}}} &=& \sqrt{\frac{\frac{2+\sqrt{2}}{4}}{\frac{2-\sqrt{2}}{4}}} \\ &=& \sqrt{\frac{2+\sqrt{2}}{2-\sqrt{2}}} \cdot \sqrt{\frac{2+\sqrt{2}}{2+\sqrt{2}}} \\ &=& \sqrt{\frac{(2+\sqrt{2})^2}{2}} \\ &=& \frac{2+\sqrt{2}}{\sqrt{2}} \cdot \sqrt{2} \\ &=& \sqrt{2}+1 \end{eqnarray*}$
And hence our answer is $\lfloor 100x \rfloor = \lfloor 100(1 + \sqrt {2}) \rfloor = \boxed{241}$
### Solution 2
$\begin{eqnarray*} x &=& \frac {\sum_{n = 1}^{44} \cos n^\circ}{\sum_{n = 1}^{44} \sin n^\circ} = \frac {\cos 1 + \cos 2 + \dots + \cos 44}{\sin 1 + \sin 2 + \dots + \sin 44}\\ &=& \frac {\cos (45 - 1) + \cos(45 - 2) + \dots + \cos(45 - 44)}{\sin 1 + \sin 2 + \dots + \sin 44} \end{eqnarray*}$
Using the identity $\sin a + \sin b = 2\sin \frac{a+b}2 \cos \frac{a-b}{2}$ $\Longrightarrow \sin x + \cos x$ $= \sin x + \sin (90-x)$ $= 2 \sin 45 \cos (45-x)$ $= \sqrt{2} \cos (45-x)$, that summation reduces to
$\begin{eqnarray*}x &=& \left(\frac {1}{\sqrt {2}}\right)\left(\frac {(\cos 1 + \cos2 + \dots + \cos44) + (\sin1 + \sin2 + \dots + \sin44)}{\sin1 + \sin2 + \dots + \sin44}\right)\\ &=& \left(\frac {1}{\sqrt {2}}\right)\left(1 + \frac {\cos 1 + \cos 2 + \dots + \cos 44}{\sin 1 + \sin 2 + \dots + \sin 44}\right) \end{eqnarray*}$
This fraction is equivalent to $x$. Therefore, $\begin{eqnarray*} x &=& \left(\frac {1}{\sqrt {2}}\right)\left(1 + x\right)\\ \frac {1}{\sqrt {2}} &=& x\left(\frac {\sqrt {2} - 1}{\sqrt {2}}\right)\\ x &=& \frac {1}{\sqrt {2} - 1} = 1 + \sqrt {2}\\ \lfloor 100x \rfloor &=& \lfloor 100(1 + \sqrt {2}) \rfloor = \boxed{241}\\ \end{eqnarray*}$
### Solution 3
A slight variant of the above solution, note that
$\begin{eqnarray*} \sum_{n=1}^{44} \cos n + \sum_{n=1}^{44} \sin n &=& \sum_{n=1}^{44} \sin n + \sin(90-n)\\ &=& \sqrt{2}\sum_{n=1}^{44} \cos(45-n) = \sqrt{2}\sum_{n=1}^{44} \cos n\\ \sum_{n=1}^{44} \sin n &=& (\sqrt{2}-1)\sum_{n=1}^{44} \cos n \end{eqnarray*}$
This is the ratio we are looking for. $x$ reduces to $\frac{1}{\sqrt{2} - 1} = \sqrt{2} + 1$, and $\lfloor 100(\sqrt{2} + 1)\rfloor = \boxed{241}$.
### Solution 4
Consider the sum $\sum_{n = 1}^{44} \text{cis } n^\circ$. The fraction is given by the real part divided by the imaginary part.
The sum can be written $- 1 + \sum_{n = 0}^{44} \text{cis } n^\circ = - 1 + \frac {\text{cis } 45^\circ - 1}{\text{cis } 1^\circ - 1}$ (by De Moivre's Theorem with geometric series)
$= - 1 + \frac {\frac {\sqrt {2}}{2} - 1 + \frac {i \sqrt {2}}{2}}{\text{cis } 1^\circ - 1} = - 1 + \frac {\left( \frac {\sqrt {2}}{2} - 1 + \frac {i \sqrt {2}}{2} \right) (\text{cis } ( - 1^\circ) - 1)}{(\cos 1^\circ - 1)^2 + \sin^2 1^\circ}$ (after multiplying by complex conjugate)
$= - 1 + \frac {\left( \frac {\sqrt {2}}{2} - 1 \right) (\cos 1^\circ - 1) + \frac {\sqrt {2}}{2}\sin 1^\circ + i\left( \left(1 - \frac {\sqrt {2}}{2} \right) \sin 1^\circ + \frac {\sqrt {2}}{2} (\cos 1^\circ - 1)\right)}{2(1 - \cos 1^\circ)}$
$= - \frac {1}{2} - \frac {\sqrt {2}}{4} - \frac {i\sqrt {2}}{4} + \frac {\sin 1^\circ \left( \frac {\sqrt {2}}{2} + i\left( 1 - \frac {\sqrt {2}}{2} \right) \right)}{2(1 - \cos 1^\circ)}$
Using the tangent half-angle formula, this becomes $\left( - \frac {1}{2} + \frac {\sqrt {2}}{4}[\cot (1/2^\circ) - 1] \right) + i\left( \frac {1}{2}\cot (1/2^\circ) - \frac {\sqrt {2}}{4}[\cot (1/2^\circ) + 1] \right)$.
Dividing the two parts and multiplying each part by 4, the fraction is $\frac { - 2 + \sqrt {2}[\cot (1/2^\circ) - 1]}{2\cot (1/2^\circ) - \sqrt {2}[\cot (1/2^\circ) + 1]}$.
Although an exact value for $\cot (1/2^\circ)$ in terms of radicals will be difficult, this is easily known: it is really large!
So treat it as though it were $\infty$. The fraction is approximated by $\frac {\sqrt {2}}{2 - \sqrt {2}} = \frac {\sqrt {2}(2 + \sqrt {2})}{2} = 1 + \sqrt {2}\Rightarrow \lfloor 100(1+\sqrt2)\rfloor=\boxed{241}$.
### Solution 5
Consider the sum $\sum_{n = 1}^{44} \text{cis } n^\circ$. The fraction is given by the real part divided by the imaginary part.
The sum can be written as $\sum_{n=1}^{22} (\text{cis } n^\circ + \text{cis } (45-n)^\circ)$. Consider the rhombus $OABC$ on the complex plane such that $O$ is the origin, $A$ represents $\text{cis } n^\circ$, $B$ represents $\text{cis } n^\circ + \text{cis } (45-n)^\circ$ and $C$ represents $\text{cis } (45-n)^\circ$. Simple geometry shows that $\angle BOA = (22.5-n)^\circ$, so the angle that $\text{cis } n^\circ + \text{cis } (45-n)^\circ$ makes with the real axis is simply $22.5^\circ$. So $\sum_{n=1}^{22} (\text{cis } n^\circ + \text{cis } (45-n)^\circ)$ is a sum of collinear complex numbers, so the angle the sum makes with the real axis is $22.5^\circ$. So our answer is $\lfloor 100 \cot(22.5^\circ) \rfloor = \boxed{241}$.
Note that the $\cot(22.5^\circ) = \sqrt2 + 1$ can be shown easily through half-angle formula.
### Solution 6
We write $x =\frac{\sum_{n=46}^{89} \sin n^{\circ}}{\sum_{n=1}^{44} \sin n^{\circ}}$ since $\cos x = \sin (90^{\circ}-x).$ Now we by the sine angle sum we know that $\sin (x+45^{\circ}) = \sin 45^{\circ}(\sin x + \cos x).$ So the expression simplifies to $\sin 45^{\circ}\left(\frac{\sum_{n=1}^{44} (\sin n^{\circ}+\cos n^{\circ})}{\sum_{n=1}^{44} \sin n^{\circ}}\right) = \sin 45^{\circ}\left(1+\frac{\sum_{n=1}^{44} \cos n^{\circ}}{\sum_{n=1}^{44} \sin n^{\circ}}\right)=\sin 45^{\circ}(1+x).$ Therefore we have the equation $x = \sin 45^{\circ}(1+x) \implies x = \sqrt{2}+1.$ Finishing, we have $\lfloor 100x \rfloor = \boxed{241}.$
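The closed form $x = 1 + \sqrt{2}$ from the solutions above can also be confirmed numerically (a quick floating-point check, not a proof):

```python
import math

# x = (sum of cos 1..44 degrees) / (sum of sin 1..44 degrees)
num = sum(math.cos(math.radians(n)) for n in range(1, 45))
den = sum(math.sin(math.radians(n)) for n in range(1, 45))
x = num / den

assert abs(x - (1 + math.sqrt(2))) < 1e-12   # x = 1 + sqrt(2)
assert math.floor(100 * x) == 241            # the requested greatest integer
```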
https://socratic.org/questions/what-are-the-oxidation-numbers-for-iron-oxide-fe-3o-4
# What are the oxidation numbers for iron oxide, Fe_3O_4?
This is a mixed valence compound of $Fe(II)$ and $Fe(III)$.
If we write $Fe_2O_3 \cdot FeO$ the formulation is easier to consider. $Fe_3O_4$ occurs naturally as the mineral magnetite. Iron oxide chemistry is a very broad church and many mixed oxides and oxidation states are available.
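The assignment is simple charge balance, which a one-line check confirms (taking oxygen as O(-II) by the usual convention):

```python
# One Fe(II) and two Fe(III) balance four O(-II) in Fe3O4:
iron = [+2, +3, +3]
oxygen = [-2] * 4
assert sum(iron) + sum(oxygen) == 0   # the compound is neutral
```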
https://eng.libretexts.org/Bookshelves/Civil_Engineering/Book%3A_All_Things_Flow_-_Fluid_Mechanics_for_the_Natural_Sciences_(Smyth)/05%3A_Fluid_Kinematics/5.03%3A_Rotation_and_strain-_the_relative_motion_of_two_nearby_particles
5.3: Rotation and strain: the relative motion of two nearby particles
The complexity of flow can be halved, in a sense, by thinking of it as a combination of two simpler kinds of motion: rotation and strain (Figure $$\PageIndex{1}$$), with one or the other dominating at each point. We can then learn something useful by considering idealized flows consisting of rotation alone or strain alone.
Consider the instantaneous relative motion of two nearby fluid particles separated by the vector $$\Delta \vec{x}$$ (Figure $$\PageIndex{2}$$). Their velocities are
$\frac{D}{D t} \vec{x}=\vec{u}, \quad \text { and } \quad \frac{D}{D t}(\vec{x}+\Delta \vec{x})=\vec{u}+\Delta \vec{u},$
and we can then subtract to see that
$\frac{D}{D t} \Delta \vec{x}=\Delta \vec{u}.\label{eqn:1}$
The ith component of the velocity difference $$\Delta\vec{u}$$ can be written as
$\Delta u_{i}=\frac{\partial u_{i}}{\partial x_{j}} \Delta x_{j}.\label{eqn:2}$
The velocity gradient tensor $$\partial u_i/\partial x_j$$ can be decomposed into symmetric and antisymmetric parts:
$\frac{\partial u_{i}}{\partial x_{j}}=\frac{1}{2}\left(\frac{\partial u_{i}}{\partial x_{j}}+\frac{\partial u_{j}}{\partial x_{i}}\right)+\frac{1}{2}\left(\frac{\partial u_{i}}{\partial x_{j}}-\frac{\partial u_{j}}{\partial x_{i}}\right),$
or
$\frac{\partial u_{i}}{\partial x_{j}}=e_{i j}+\frac{1}{2} r_{i j},\label{eqn:3}$
where the symmetric part
$e_{i j}=\frac{1}{2}\left(\frac{\partial u_{i}}{\partial x_{j}}+\frac{\partial u_{j}}{\partial x_{i}}\right)\label{eqn:4}$
is called the strain rate tensor and the antisymmetric part (times two) is called the rotation tensor:
$r_{i j}=\frac{\partial u_{i}}{\partial x_{j}}-\frac{\partial u_{j}}{\partial x_{i}}\label{eqn:5}$
Substituting in Equation $$\ref{eqn:2}$$, we now write the velocity differential as
$\Delta u_{i}=\frac{\partial u_{i}}{\partial x_{j}} \Delta x_{j}=e_{i j} \Delta x_{j}+\frac{1}{2} r_{i j} \Delta x_{j}\label{eqn:6}$
Any region of a flow can be characterized as strain-dominated or rotation-dominated depending on the relative magnitudes of the two terms on the right-hand side of Equation $$\ref{eqn:3}$$. More specifically, we can multiply Equation $$\ref{eqn:3}$$ by itself and get
$\left(\frac{\partial u_{i}}{\partial x_{j}}\right)^{2}=\left(e_{i j}+\frac{1}{2} r_{i j}\right)^{2}=e_{i j} e_{i j}+\frac{1}{4} r_{i j} r_{i j}\label{eqn:7}$
The cross term $$e_{ij}r_{ij}$$ is the product of a symmetric and an antisymmetric matrix and therefore vanishes identically (exercise 12). The remaining two terms are positive definite measures of the degree of strain and the degree of rotation. Vortices, not surprisingly, are rotation-dominated, e.g., the pair of corotating vortices shown in Figure $$\PageIndex{3}$$. (The stream function for this vortex pair is shown in figure 5.2.1.) The rotation term is greatest in the two vortex cores located at $$x = \pm 1$$ (Figure $$\PageIndex{3}$$a), while strain dominates in the region around the vortices, and especially between them (Figure $$\PageIndex{3}$$b, near $$x = 0$$).
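The decomposition Equation $$\ref{eqn:3}$$ and the squared-norm identity Equation $$\ref{eqn:7}$$ are easy to verify numerically. Below is a minimal NumPy sketch; the velocity gradient values are arbitrary illustrative numbers, not taken from any particular flow.

```python
import numpy as np

# Hypothetical velocity gradient tensor G_ij = du_i/dx_j at one point.
G = np.array([[0.1, 0.5, -0.2],
              [0.3, -0.4, 0.0],
              [0.2, 0.1, 0.3]])

e = 0.5 * (G + G.T)   # strain rate tensor, the symmetric part
r = G - G.T           # rotation tensor, twice the antisymmetric part

# The decomposition G = e + r/2 is exact:
assert np.allclose(G, e + 0.5 * r)

# The cross term e_ij r_ij vanishes identically (symmetric times antisymmetric):
assert np.isclose(np.sum(e * r), 0.0)

# Squared-norm identity: (du_i/dx_j)^2 = e_ij e_ij + (1/4) r_ij r_ij
assert np.isclose(np.sum(G * G), np.sum(e * e) + 0.25 * np.sum(r * r))

# The two positive-definite measures of strain and rotation:
print(np.sum(e * e), 0.25 * np.sum(r * r))
```

Comparing the two printed numbers is exactly the strain- versus rotation-dominance test plotted in Figure $$\PageIndex{3}$$.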
The example in Figure $$\PageIndex{3}$$ is highly simplified; in a real flow the strained and rotating regions are intertwined in very complex ways, but are still recognizable. Figure $$\PageIndex{4}$$ shows the evolution of turbulence in a shear layer. It begins (Figure $$\PageIndex{4}$$a) with the growth of co-rotating vortices (cf. Figure $$\PageIndex{1}$$). These become unstable (Figure $$\PageIndex{4}$$b) and break down into turbulence (Figure $$\PageIndex{4}$$c). The turbulence eventually decays, leaving a stable shear layer thickened by turbulent mixing. In the phase of vigorous turbulence, the strain magnitude $$e^2_{ij}$$ displays an intricate structure (Figure $$\PageIndex{5}$$). In the next two sections, we look more closely at the properties of rotation- and strain-dominated regions.
5.3.1 Rotation
The rotation tensor is closely related to a more familiar object: the vorticity vector $$\vec{\omega}$$:
$\underset{\sim}{r}=\left[\begin{array}{ccc} 0 & \frac{\partial u}{\partial y}-\frac{\partial v}{\partial x} & \frac{\partial u}{\partial z}-\frac{\partial w}{\partial x} \\ \frac{\partial v}{\partial x}-\frac{\partial u}{\partial y} & 0 & \frac{\partial v}{\partial z}-\frac{\partial w}{\partial y} \\ \frac{\partial w}{\partial x}-\frac{\partial u}{\partial z} & \frac{\partial w}{\partial y}-\frac{\partial v}{\partial z} & 0 \end{array}\right]=\left[\begin{array}{ccc} 0 & -\omega_{3} & \omega_{2} \\ \omega_{3} & 0 & -\omega_{1} \\ -\omega_{2} & \omega_{1} & 0 \end{array}\right]\label{eqn:8}$
Reverting to index notation, we may write this relationship in a much more compact form:
$r_{i j}=-\varepsilon_{i j k} \omega_{k}\label{eqn:9}$
The contribution of rotation to the velocity differential in Equation $$\ref{eqn:6}$$ is now
$\frac{1}{2} r_{i j} \Delta x_{j}=-\frac{1}{2} \varepsilon_{i j k} \omega_{k} \Delta x_{j}=\frac{1}{2}(\vec{\omega} \times \Delta \vec{x})_{i}\label{eqn:10}$
Thus, the change in velocity due to rotation is perpendicular to both the separation vector and the local vorticity. A consequence of this is that rotation does not change the distance $$|\Delta \vec{x}|$$ between the two particles; only strain can accomplish that. To show this explicitly, we write the equation for $$|\Delta \vec{x}|$$ (or, equivalently, $$|\Delta \vec{x}|^2/2$$) in vector form:
$\frac{D}{D t} \frac{1}{2}|\Delta \vec{x}|^{2}=\Delta \vec{x} \cdot \frac{D}{D t} \Delta \vec{x}=\Delta \vec{x} \cdot \Delta \vec{u}=\Delta \vec{x} \cdot\left(\underset{\sim}{e} \Delta \vec{x}+\frac{1}{2} \vec{\omega} \times \Delta \vec{x}\right)=\Delta \vec{x} \cdot \underset{\sim}{e} \Delta \vec{x}$
The second step above makes use of Equation $$\ref{eqn:1}$$. Thus, changes in the distance between particles are caused only by strain.
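The orthogonality of the rotational velocity to both $$\Delta\vec{x}$$ and $$\vec{\omega}$$ is easy to confirm numerically; the vorticity and separation vectors below are arbitrary examples.

```python
import numpy as np

omega = np.array([0.3, -1.0, 2.0])   # local vorticity (example values)
dx = np.array([1.0, 0.5, -0.2])      # separation vector (example values)

du_rot = 0.5 * np.cross(omega, dx)   # rotational part of Delta u, Eq. (10)

# Perpendicular to both the separation vector and the vorticity:
assert np.isclose(np.dot(du_rot, dx), 0.0)
assert np.isclose(np.dot(du_rot, omega), 0.0)
# Since dx . du_rot = 0, rotation alone leaves the distance |dx| unchanged.
```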
5.3.2 Axisymmetric vortex models
Vortex motion is often approximately axisymmetric, i.e., invariant with respect to rotation about the vortex axis. Here we examine some very simple, axisymmetric vortex models. These are also called cylindrical, or circular, vortices.
Until now, we have measured space using Cartesian coordinates, but in some situations, curvilinear coordinates simplify the math. All of the mathematical constructs derived up to now can be expressed in curvilinear coordinates, and these expressions are listed in appendix I. The study of axisymmetric vortices is simplified using cylindrical polar coordinates (Figure $$\PageIndex{6}$$). In this case, every position in space has coordinates $${r,\theta,z}$$ corresponding to the radial, azimuthal and axial directions, respectively. The corresponding velocity components are $${u_r,u_\theta ,u_z}$$. The vorticity is then given as the curl of the velocity vector:
$\vec{\nabla} \times \vec{u}=\left\{\frac{1}{r} \frac{\partial u_{z}}{\partial \theta}-\frac{\partial u_{\theta}}{\partial z}, \frac{\partial u_{r}}{\partial z}-\frac{\partial u_{z}}{\partial r}, \frac{1}{r} \frac{\partial\left(r u_{\theta}\right)}{\partial r}-\frac{1}{r} \frac{\partial u_{r}}{\partial \theta}\right\}.$
In an axisymmetric vortex, the vorticity is purely axial and depends only on the radial coordinate:
$\vec{\omega}=\omega(r) \hat{e}^{(z)} ; \quad \omega(r)=\frac{1}{r} \frac{\partial\left(r u_{\theta}\right)}{\partial r}.\label{eqn:11}$
The circulation around such a vortex at any radius $$r$$ is just $$\Gamma(r) = 2\pi ru_\theta$$ (show this). We’ll look at three kinds of vortex motion in this geometrical context.
Rigid rotation
In this case the vorticity is uniform. Solving Equation $$\ref{eqn:11}$$ gives
$u_{\theta}=\frac{\omega}{2} r.$
Note that the velocity grows without bound as $$r \to \infty$$.
An irrotational vortex
In this case the motion is circular but the vorticity is zero. Solving Equation $$\ref{eqn:11}$$ gives
$u_{\theta}=\frac{C}{r},$
where $$C$$ is a constant of integration. The circulation is constant: $$\Gamma = 2\pi C$$. This gives us a meaningful way to identify the constant:
$u_{\theta}=\frac{\Gamma}{2 \pi r}.$
This velocity distribution is unbounded at the origin.
To understand how motion can be circular but irrotational, consider the hand motions illustrated in Figure $$\PageIndex{7}$$. When you wave to someone (Figure $$\PageIndex{7}$$a), the orientation of your hand changes. When you wipe a window (Figure $$\PageIndex{7}$$b), your hand moves in a circle but its orientation doesn’t change. Likewise, an object floating in an irrotational vortex would move in a circle without changing its orientation.
The Rankine vortex
The Rankine vortex3 is a useful model for localized vortices such as tornadoes. The vorticity is uniform out to a radius $$r = R$$, and zero (irrotational) beyond that. The azimuthal velocity is sketched in Figure $$\PageIndex{8}$$. It is left as an exercise for the student to work out the mathematical expressions for $$u_\theta$$ and $$\Gamma$$.
Test your understanding by doing problems 24 and 25.
An isolated vortex
Consider the following vorticity distribution, sketched in Figure $$\PageIndex{9}$$:
$\omega=\left\{\begin{array}{cc} 2 \dot{\theta}, & 0 \leq r<R_{1} \\ -2 \dot{\theta}, & R_{1} \leq r \leq R_{2} \\ 0, & r>R_{2} \end{array}\right.\label{eqn:12}$
where the constant $$\dot{\theta}$$ is the angular velocity (i.e., the time derivative of $$\theta$$). On the $$x$$ axis, the azimuthal velocity is the same as the Cartesian velocity $$v$$, and is a maximum at $$r = R_1$$. Similarly, on the $$y$$ axis, the azimuthal velocity is $$u$$. From the signs of the derivatives of $$u$$ and $$v$$, it is easy to see that the vorticity $$\omega = v_x - u_y$$ is positive for $$r \leq R_1$$ and negative for $$R_1 < r \leq R_2$$, as in Equation $$\ref{eqn:12}$$.
A vortex is called isolated if its total circulation is zero. Here, the total circulation is
$\Gamma_{\text {total}}=2 \pi \int_{0}^{\infty} \omega\left(r^{\prime}\right) r^{\prime} d r^{\prime},$
which can be evaluated by integrating out to any $$r \geq R_2$$, since there is no vorticity (hence no change in circulation) beyond that radius. The total circulation is zero if $$R_2 = \sqrt{2}R_1$$ (check for yourself).
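The zero-circulation condition can also be checked numerically. This sketch uses arbitrary values $$\dot{\theta} = 1$$ and $$R_1 = 1$$ and integrates the vorticity profile of Equation $$\ref{eqn:12}$$ with a midpoint rule:

```python
import numpy as np

theta_dot, R1 = 1.0, 1.0
R2 = np.sqrt(2.0) * R1            # the claimed zero-circulation radius

# Midpoint-rule integration of Gamma_total = 2*pi * int omega(r) r dr.
# The integrand vanishes for r > R2, so integrating to 2*R2 suffices.
edges = np.linspace(0.0, 2.0 * R2, 1_000_001)
r = 0.5 * (edges[:-1] + edges[1:])
dr = edges[1] - edges[0]
omega = np.where(r < R1, 2.0 * theta_dot,
                 np.where(r <= R2, -2.0 * theta_dot, 0.0))

gamma_total = 2.0 * np.pi * np.sum(omega * r * dr)
print(gamma_total)   # approximately zero when R2 = sqrt(2) * R1
```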
5.3.3 Strain
The strain rate tensor is symmetric by definition:
$\underset{\sim}{e}=\left[\begin{array}{ccc} u_{x} & \frac{1}{2}\left(u_{y}+v_{x}\right) & \frac{1}{2}\left(u_{z}+w_{x}\right) \\ \frac{1}{2}\left(u_{y}+v_{x}\right) & v_{y} & \frac{1}{2}\left(v_{z}+w_{y}\right) \\ \frac{1}{2}\left(u_{z}+w_{x}\right) & \frac{1}{2}\left(v_{z}+w_{y}\right) & w_{z} \end{array}\right].\label{eqn:13}$
Here, subscripts represent partial derivatives. The diagonal elements of $$\underset{\sim}{e}$$, namely $$u_x$$, $$v_y$$ and $$w_z$$, represent normal strain. These can be either extensional or compressive depending on the sign (compare Figures $$\PageIndex{10}$$a,b). Off-diagonal components represent transverse strain, which may also be called tangential strain or shear (Figure $$\PageIndex{10}$$c).
Two points about normal strains are noteworthy.
• Imagine an irrotational straining motion in which the direction of the separation vector between two particles does not change, e.g., the normal strains shown in figures $$\PageIndex{10}$$ a and b. The separation vector must be an eigenvector of the strain rate tensor:
$\frac{D}{D t} \Delta \vec{x}=\Delta \vec{u}=\underset{\sim}{e} \Delta \vec{x}=\lambda \Delta \vec{x}.$
The solution of the above is
$\Delta \vec{x}=\Delta \vec{x}_{0} \exp (\lambda t),$
i.e., the length of the separation vector grows or decays exponentially in time, and the corresponding eigenvalue gives the rate of growth/decay.
• The sum of the normal strains is the trace of the strain rate tensor, which is also equal to the divergence of the velocity field:
$\operatorname{Tr}(e)=e_{i i}=\frac{\partial u_{i}}{\partial x_{i}}=\vec{\nabla} \cdot \vec{u}.\label{eqn:14}$
In an incompressible fluid, where $$\vec{\nabla}\cdot\vec{u}=0$$, the normal strains must add to zero, i.e., extension and compression balance.
5.3.4 The principal strains
Consider the evolution of a circular distribution of fluid particles advected by a uniformly sheared flow (Figure $$\PageIndex{11}$$), for which the strain is purely transverse (similar to Figure $$\PageIndex{10}$$c). Arrows show the velocity profile, with rightward motion in the upper half of the figure changing linearly to leftward motion in the lower half. As a result, the top of the circle moves to the right, the bottom moves to the left, and the sides don’t move. After a short time, the circle becomes an ellipse with major axis tilted 45 degrees from the horizontal. When viewed in a coordinate frame tilted at the same angle (dashed lines), the circle is being expanded in one direction and compressed in the other. In other words, the transverse strain now appears as a purely normal strain.
This raises a crucial point: the distinction between normal and transverse strains depends on the choice of coordinates. In fact, we will now show that any strain is purely normal in an appropriately chosen coordinate system. Like any second-order tensor, the strain rate tensor can be expressed in a rotated reference frame using the transformation rule Equation 3.3.5:
$e_{i j}^{\prime}=e_{k l} C_{k i} C_{l j}.\label{eqn:15}$
Recall that the columns of the rotation matrix $$\underset{\sim}{C}$$ are the basis vectors of the rotated coordinate system. Now, suppose that we transform $$\underset{\sim}{e}$$ into the special reference frame whose basis vectors are the eigenvectors of $$\underset{\sim}{e}$$. Then the $$j^{th}$$ column of $$\underset{\sim}{C}$$ is the $$j^{th}$$ eigenvector:
$C_{l j}=v_{l}^{(j)},\label{eqn:16}$
and $$\lambda^{(j)}$$ is the corresponding eigenvalue:
$\left.e_{k l} v_{l}^{(j)}=\lambda^{(j)} v_{k}^{(j)} \quad \text { (no sum on } j\right).\label{eqn:17}$
Now assume that the eigenvectors have been chosen to be orthogonal with length equal to 1 (and therefore $$v_k^{(i)}v_k^{(j)}=\delta_{ij}$$) and ordered so that $$\det\left(\underset{\sim}{C}\right)=+1$$. In other words, $$\underset{\sim}{C}$$ represents a proper rotation. We now reorder Equation $$\ref{eqn:15}$$ and substitute Equation $$\ref{eqn:16}$$ and Equation $$\ref{eqn:17}$$:
$e_{i j}^{\prime}=C_{k i} e_{k l} C_{l j}=v_{k}^{(i)} e_{k l} v_{l}^{(j)}=v_{k}^{(i)} \lambda^{(j)} v_{k}^{(j)}=\lambda^{(j)} \delta_{i j} \quad(\text { no sum on } j).$
The result is just a diagonal matrix with the eigenvalue $$\lambda^{(j)}$$ as the $$j^{th}$$ diagonal element:
$\underset{\sim}{e^{\prime}}=\left[\begin{array}{ccc} \lambda^{(1)} & 0 & 0 \\ 0 & \lambda^{(2)} & 0 \\ 0 & 0 & \lambda^{(3)} \end{array}\right].\label{eqn:18}$
This special reference frame is called the principal frame. The basis vectors (the eigenvectors of $$\underset{\sim}{e}$$) are the principal axes of strain and the normal strains appearing on the main diagonal (the eigenvalues of $$\underset{\sim}{e}$$) are the principal strains.
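As a concrete check, the pure transverse strain of Figure $$\PageIndex{11}$$ can be diagonalized with an eigendecomposition. The sketch below uses the shear $$u_y = 1$$, so $$e_{12} = e_{21} = 1/2$$; note that `numpy.linalg.eigh` returns orthonormal eigenvectors but does not guarantee $$\det C = +1$$, so a column sign flip may be needed if a proper rotation is required.

```python
import numpy as np

# Strain rate tensor for a uniform shear du/dy = 1 (trace-free, as in
# an incompressible fluid):
e = np.array([[0.0, 0.5, 0.0],
              [0.5, 0.0, 0.0],
              [0.0, 0.0, 0.0]])

lam, C = np.linalg.eigh(e)   # principal strains and principal axes

# In the principal frame the strain is purely normal (diagonal):
e_prime = C.T @ e @ C
assert np.allclose(e_prime, np.diag(lam))

print(lam)   # [-0.5, 0.0, 0.5]: compression, neutral axis, extension
```

The extensional and compressive principal axes lie at 45 degrees to the flow, matching the tilted ellipse in Figure $$\PageIndex{11}$$.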
Test your understanding by doing problem 23.
The fine print: In this discussion, we have made two implicit assumptions about the strain rate tensor. First, we have assumed that its eigenvalues and eigenvectors are all real; otherwise, the geometrical interpretation of the principal axes and strains would make no sense. Second, we have assumed that the eigenvectors can be chosen to be orthogonal. Happily, these properties are guaranteed for real, symmetric matrices, of which the strain rate tensor is one. For further details see any linear algebra text (e.g., Bronson and Costa 2009).
1Strain quantifies the net deformation of a material. The strain rate is its time derivative.
2It is only a matter of historical accident that $$e_{ij}$$ is defined with the factor 1/2 and $$r_{ij}$$ is not.
3William John Macquorn Rankine (1820-1872) was a Scottish engineer whose primary interest was the thermodynamics of steam engines.
https://plainmath.net/algebra-i/102791-how-to-simplify-3-4
|
Dominique Crosby
2023-02-22
How to simplify $|-\frac{3}{4}|$?
Reagan Johnston
Linear equations in three variables:
Step 1. Take any two of the three given equations and eliminate one variable.
Then take a different pair of equations and eliminate the same variable.
Now solve the resulting two equations in two variables, and substitute the values back into any of the three original equations.
For example,
$x-2y+3z=9\dots(1)$
$-x+3y-z=-6\dots(2)$
$2x-5y+5z=17\dots(3)$
Step 2. Add equations (1) and (2) to eliminate x:
$x-2y+3z+\left(-x+3y-z\right)=9+\left(-6\right)$
$y+2z=3\dots(4)$
Step 3. Multiply equation (1) by -2 and add it to equation (3):
$-2x+4y-6z=-18$
$\left(2x-5y+5z\right)+\left(-2x+4y-6z\right)=17-18$
$-y-z=-1\dots(5)$
Step 4. Add equations (4) and (5):
$\left(y+2z\right)+\left(-y-z\right)=3-1$
$z=2,\quad y=3-2z=-1$
Substituting into equation (1) gives $x=1$.
Thus, the solution of the given three equations is $\left(1,-1,2\right)$.
As a result, we can solve linear equations in three variables in this manner.
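The worked example can be checked by solving the same system directly; this is a small NumPy sketch of the three equations above in matrix form.

```python
import numpy as np

# A x = b for the system (1)-(3) above
A = np.array([[ 1, -2,  3],
              [-1,  3, -1],
              [ 2, -5,  5]], dtype=float)
b = np.array([9, -6, 17], dtype=float)

x, y, z = np.linalg.solve(A, b)
print(x, y, z)   # approximately (1, -1, 2)
```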
https://www.physicsoverflow.org/user/Josh+Burby/history
|
# Recent history for Josh Burby
- 6 years ago: question answered "Why is there no Q&A category for subfield X?"
- 6 years ago: received upvote on question "Casimirs of Poisson brackets obtained via Poisson reduction"
- 6 years ago
- 6 years ago: question commented on "Casimirs of Poisson brackets obtained via Poisson reduction"
- 6 years ago: question answered "Casimirs of Poisson brackets obtained via Poisson reduction"
- 6 years ago: question commented on "Casimirs of Poisson brackets obtained via Poisson reduction"
- 6 years ago: received upvote on question "Which falls faster in a plasma, a conductor or an insulator?"
- 6 years ago: question answered "Which falls faster in a plasma, a conductor or an insulator?"
- 6 years ago: edited a comment "Casimirs of Poisson brackets obtained via Poisson reduction"
- 6 years ago: posted a comment "Casimirs of Poisson brackets obtained via Poisson reduction"
- 6 years ago: posted a comment "Casimirs of Poisson brackets obtained via Poisson reduction"
- 6 years ago: question commented on "Casimirs of Poisson brackets obtained via Poisson reduction"
- 6 years ago: received upvote on question "Which falls faster in a plasma, a conductor or an insulator?"
- 6 years ago: received upvote on question "Casimirs of Poisson brackets obtained via Poisson reduction"
- 6 years ago: posted a question "Which falls faster in a plasma, a cond..."
- 6 years ago: received upvote on question "Casimirs of Poisson brackets obtained via Poisson reduction"
- 6 years ago: received upvote on question "Casimirs of Poisson brackets obtained via Poisson reduction"
- 6 years ago: received upvote on question "Casimirs of Poisson brackets obtained via Poisson reduction"
- 6 years ago: posted a question "Casimirs of Poisson brackets obtained ..."
- 6 years ago: posted a comment "Formulating a symplectic (or variational) integrator for a non-local Hamiltonian"
http://www.khronos.org/registry/vulkan/specs/1.2-extensions/man/html/VkSurfaceFullScreenExclusiveWin32InfoEXT.html
|
## C Specification
The VkSurfaceFullScreenExclusiveWin32InfoEXT structure is defined as:
typedef struct VkSurfaceFullScreenExclusiveWin32InfoEXT {
VkStructureType sType;
const void* pNext;
HMONITOR hmonitor;
} VkSurfaceFullScreenExclusiveWin32InfoEXT;
## Members
• sType is the type of this structure.
• pNext is NULL or a pointer to an extension-specific structure.
• hmonitor is the Win32 HMONITOR handle identifying the display to create the surface with.
## Description
Note If hmonitor is invalidated (e.g. the monitor is unplugged) during the lifetime of a swapchain created with this structure, operations on that swapchain will return VK_ERROR_OUT_OF_DATE_KHR.
Note It’s the responsibility of the application to change the display settings of the targeted Win32 display using the appropriate platform APIs. Such changes may alter the surface capabilities reported for the created surface.
Valid Usage
• hmonitor must be a valid HMONITOR
Valid Usage (Implicit)
• sType must be VK_STRUCTURE_TYPE_SURFACE_FULL_SCREEN_EXCLUSIVE_WIN32_INFO_EXT
## Document Notes
For more information, see the Vulkan Specification
This page is extracted from the Vulkan Specification. Fixes and changes should be made to the Specification, not directly.
Copyright (c) 2014-2020 Khronos Group. This work is licensed under a Creative Commons Attribution 4.0 International License.
http://openstudy.com/updates/4dd1f72c199b8b0b81462479
|
## anonymous 5 years ago expand (1+1)^n by using binomial theorem and then simplify
1. anonymous
I think I already answered this: $n!/n! + n!/((n-1)!(1!))+n!/((n-2!)(2!))+...+(n!)/((1!)(n-1)!)+ n!/n!$
2. anonymous
here $t_{r}={}^{n}C_{r}$ and $s_{n}=\sum_{r=0}^{n}t_{r}$; since $^{n}C_{0}+{}^{n}C_{1}+{}^{n}C_{2}+\cdots+{}^{n}C_{n}=2^n$, simplifying gives $2^n$. This is quite obvious from the above relation: $(1+1)^n=2^n$.
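The identity can be verified directly in Python for any particular n (here n = 10, an arbitrary choice):

```python
from math import comb

n = 10
total = sum(comb(n, r) for r in range(n + 1))  # expand (1+1)^n term by term
assert total == 2 ** n
print(total)  # 1024
```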
https://pintarbahasainggris.com/community/vocabularies/latihan-soal-vocabularies-1/
|
# Latihan soal vocabularies 1
Posts: 757
hermione
Topic starter
Illustrious member
Joined: 4 years ago
1. Maya ..... the match in Olympic Games. She is so sad.
2. A : Can you help me, please?
B : Yes, of course. What can I do for you?
A : Please, ..... this bag to my room.
B : Yes, Sir.
3. The teacher’s duty is to ..... the students in the school.
4. Mia : Adi, your shoes are so fit in your ..... You look gorgeous.
Adi : Thank you.
5. I am so hungry. So, I ..... a meal.
6. Lani : I want to wear my white gown to Amanda’s party. What do you think?
Dewi : I think the red one is better.
Lani : Ok. I will ..... the red gown
7. The gardener ..... the grass every Monday and Thursday.
8. I can’t hear anything since my ..... are sick.
9. Sugar is ..... but honey is sweeter than sugar.
10. My mother is a nurse. She works at Harapan Bunda Hospital. She ..... the patients.
https://www.physicsforums.com/threads/permittivity-and-conductivity.298842/
|
# Permittivity and conductivity
1. Mar 10, 2009
### thefireman
I have an relation between permittivity and conductivity as follows:
$$\epsilon(\omega) = 1 + \frac{4\pi i\sigma(\omega)}{\omega}$$
Yet I am unclear as to how it was derived. Does this relationship have a name and/or a derivation to follow through somewhere? Also, I believe it is in cgs units; what is the SI equivalent?
Thanks
2. Mar 11, 2009
### tiny-tim
Hi thefireman!
(i think you just leave out the 4π … or is it 4πε0? … to get SI units)
this looks a bit like the "complex permittivity" definition …
$$\hat{\epsilon}(\omega)\ =\ \epsilon(\omega) + \frac{i\sigma(\omega)}{\omega}$$ is the "complex permittivity"
… but with a different notation
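For reference, both quoted forms are straightforward to evaluate numerically; the sketch below just transcribes them (arbitrary example values, and no claim about which unit convention is correct):

```python
import math

def eps_gaussian(sigma, omega):
    # epsilon(omega) = 1 + 4*pi*i*sigma(omega)/omega  (cgs-Gaussian form above)
    return 1.0 + 4.0 * math.pi * 1j * sigma / omega

def eps_complex(eps, sigma, omega):
    # eps_hat(omega) = eps(omega) + i*sigma(omega)/omega  ("complex permittivity")
    return eps + 1j * sigma / omega

print(eps_gaussian(2.0, 5.0))  # 1 + (8*pi/5) i
```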
http://sagemath.org/doc/reference/combinat/sage/combinat/composition.html
# Integer compositions¶
A composition $$c$$ of a nonnegative integer $$n$$ is a list of positive integers (the parts of the composition) with total sum $$n$$.
This module provides tools for manipulating compositions and enumerated sets of compositions.
EXAMPLES:
sage: Composition([5, 3, 1, 3])
[5, 3, 1, 3]
sage: list(Compositions(4))
[[1, 1, 1, 1], [1, 1, 2], [1, 2, 1], [1, 3], [2, 1, 1], [2, 2], [3, 1], [4]]
AUTHORS:
• Mike Hansen, Nicolas M. Thiery
• Travis Scrimshaw (2013-02-03): Removed CombinatorialClass
class sage.combinat.composition.Composition(parent, lst)
Integer compositions
A composition of a nonnegative integer $$n$$ is a list $$(i_1, \ldots, i_k)$$ of positive integers with total sum $$n$$.
EXAMPLES:
The simplest way to create a composition is by specifying its entries as a list, tuple (or other iterable):
sage: Composition([3,1,2])
[3, 1, 2]
sage: Composition((3,1,2))
[3, 1, 2]
sage: Composition(i for i in range(2,5))
[2, 3, 4]
You can also create a composition from its code. The code of a composition $$(i_1, i_2, \ldots, i_k)$$ of $$n$$ is a list of length $$n$$ that consists of a $$1$$ followed by $$i_1-1$$ zeros, then a $$1$$ followed by $$i_2-1$$ zeros, and so on.
sage: Composition([4,1,2,3,5]).to_code()
[1, 0, 0, 0, 1, 1, 0, 1, 0, 0, 1, 0, 0, 0, 0]
sage: Composition(code=_)
[4, 1, 2, 3, 5]
sage: Composition([3,1,2,3,5]).to_code()
[1, 0, 0, 1, 1, 0, 1, 0, 0, 1, 0, 0, 0, 0]
sage: Composition(code=_)
[3, 1, 2, 3, 5]
You can also create the composition of $$n$$ corresponding to a subset of $$\{1, 2, \ldots, n-1\}$$ under the bijection that maps the composition $$(i_1, i_2, \ldots, i_k)$$ of $$n$$ to the subset $$\{i_1, i_1 + i_2, i_1 + i_2 + i_3, \ldots, i_1 + \cdots + i_{k-1}\}$$ (see to_subset()):
sage: Composition(from_subset=({1, 2, 4}, 5))
[1, 1, 2, 1]
sage: Composition([1, 1, 2, 1]).to_subset()
{1, 2, 4}
The following notation equivalently specifies the composition from the set $$\{i_1 - 1, i_1 + i_2 - 1, i_1 + i_2 + i_3 - 1, \dots, i_1 + \cdots + i_{k-1} - 1, n-1\}$$ or $$\{i_1 - 1, i_1 + i_2 - 1, i_1 + i_2 + i_3 - 1, \dots, i_1 + \cdots + i_{k-1} - 1\}$$ and $$n$$. This provides compatibility with Python’s $$0$$-indexing.
sage: Composition(descents=[1,0,4,8,11])
[1, 1, 3, 4, 3]
sage: Composition(descents=[0,1,3,4])
[1, 1, 2, 1]
sage: Composition(descents=([0,1,3],5))
[1, 1, 2, 1]
sage: Composition(descents=({0,1,3},5))
[1, 1, 2, 1]
complement(*args, **kwds)
Return the complement composition of self. The complement is the reverse of the conjugate composition of self.
EXAMPLES:
sage: Composition([1, 1, 3, 1, 2, 1, 3]).conjugate()
[1, 1, 3, 3, 1, 3]
sage: Composition([1, 1, 3, 1, 2, 1, 3]).complement()
[3, 1, 3, 3, 1, 1]
conjugate(*args, **kwds)
Return the conjugate of self.
EXAMPLES:
sage: Composition([1, 1, 3, 1, 2, 1, 3]).conjugate()
[1, 1, 3, 3, 1, 3]
descents(final_descent=False)
This gives one fewer than the partial sums of the composition.
This is here to maintain some sort of backward compatibility, even though the original implementation was broken (it gave the wrong answer). The same information can be found in partial_sums().
partial_sums()
INPUT:
• final_descent – (default: False) a boolean
OUTPUT:
• Returns the list of partial sums of self with each part subtracted by $$1$$. This includes the sum of all entries when final_descent is True.
EXAMPLES:
sage: c = Composition([2,1,3,2])
sage: c.descents()
[1, 2, 5]
sage: c.descents(final_descent=True)
[1, 2, 5, 7]
fatten(grouping)
Return the composition fatter than self, obtained by grouping together consecutive parts according to grouping.
INPUT:
• grouping – a composition whose sum is the length of self
EXAMPLES:
sage: c = Composition([4,5,2,7,1])
With grouping equal to $$(1, \ldots, 1)$$, $$c$$ is left unchanged:
sage: c.fatten(Composition([1,1,1,1,1]))
[4, 5, 2, 7, 1]
With grouping equal to $$(\ell)$$ where $$\ell$$ is the length of self, this yields the coarser composition above $$c$$:
sage: c.fatten(Composition([5]))
[19]
Other values for grouping yield (all the) other compositions coarser to $$c$$:
sage: c.fatten(Composition([2,1,2]))
[9, 2, 8]
sage: c.fatten(Composition([3,1,1]))
[11, 7, 1]
TESTS:
sage: Composition([]).fatten(Composition([]))
[]
sage: c.fatten(Composition([3,1,1])).__class__ == c.__class__
True
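fatten() is a straightforward grouping operation. Here is a plain-Python sketch (not the Sage implementation), assuming sum(grouping) == len(comp):

```python
def fatten(comp, grouping):
    """Group consecutive parts of comp according to grouping."""
    result, pos = [], 0
    for g in grouping:
        result.append(sum(comp[pos:pos + g]))  # merge the next g parts
        pos += g
    return result
```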
fatter()
Return the set of compositions which are fatter than self.
Complexity for generation: $$O(|c|)$$ memory, $$O(|r|)$$ time where $$|c|$$ is the size of self and $$r$$ is the result.
EXAMPLES:
sage: C = Composition([4,5,2]).fatter()
sage: C.cardinality()
4
sage: list(C)
[[4, 5, 2], [4, 7], [9, 2], [11]]
Some extreme cases:
sage: list(Composition([5]).fatter())
[[5]]
sage: list(Composition([]).fatter())
[[]]
sage: list(Composition([1,1,1,1]).fatter()) == list(Compositions(4))
True
finer()
Return the set of compositions which are finer than self.
EXAMPLES:
sage: C = Composition([3,2]).finer()
sage: C.cardinality()
8
sage: list(C)
[[1, 1, 1, 1, 1], [1, 1, 1, 2], [1, 2, 1, 1], [1, 2, 2], [2, 1, 1, 1], [2, 1, 2], [3, 1, 1], [3, 2]]
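finer() can be sketched by refining each part independently and concatenating: a composition finer than self is obtained by choosing, for every part, a composition of that part. An illustrative plain-Python version (the compositions helper enumerates compositions of $$n$$ via subsets of break points; neither function is Sage's generator):

```python
from itertools import product

def compositions(n):
    """All compositions of n, one per subset of break points {1, ..., n-1}."""
    if n == 0:
        return [[]]
    result = []
    for mask in range(2 ** (n - 1)):
        points = [i + 1 for i in range(n - 1) if mask >> i & 1] + [n]
        result.append([b - a for a, b in zip([0] + points, points)])
    return result

def finer(comp):
    """All compositions finer than comp: refine each part independently."""
    return [sum(combo, []) for combo in product(*(compositions(p) for p in comp))]
```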
is_finer(co2)
Return True if the composition self is finer than the composition co2, and False otherwise.
EXAMPLES:
sage: Composition([4,1,2]).is_finer([3,1,3])
False
sage: Composition([3,1,3]).is_finer([4,1,2])
False
sage: Composition([1,2,2,1,1,2]).is_finer([5,1,3])
True
sage: Composition([2,2,2]).is_finer([4,2])
True
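is_finer() reduces to a subset test on partial sums: $$I$$ is finer than $$J$$ exactly when both have the same size and every partial sum of $$J$$ occurs among the partial sums of $$I$$. A plain-Python sketch (illustrative, not Sage's code):

```python
from itertools import accumulate

def is_finer(comp1, comp2):
    """True iff comp1 refines comp2: same size, and every partial sum
    of comp2 is also a partial sum of comp1."""
    return (sum(comp1) == sum(comp2)
            and set(accumulate(comp2)) <= set(accumulate(comp1)))
```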
major_index()
Return the major index of self. The major index is defined as the sum of the descents.
EXAMPLES:
sage: Composition([1, 1, 3, 1, 2, 1, 3]).major_index()
31
partial_sums(final=True)
The partial sums of the sequence defined by the entries of the composition.
If $$I = (i_1, \ldots, i_m)$$ is a composition, then the partial sums of the entries of the composition are $$[i_1, i_1 + i_2, \ldots, i_1 + i_2 + \cdots + i_{m}]$$.
INPUT:
• final – (default: True) whether or not to include the final partial sum, which is always the size of the composition.
See also to_subset().
EXAMPLES:
sage: Composition([1,1,3,1,2,1,3]).partial_sums()
[1, 2, 5, 6, 8, 9, 12]
With final = False, the last partial sum is not included:
sage: Composition([1,1,3,1,2,1,3]).partial_sums(final=False)
[1, 2, 5, 6, 8, 9]
peaks()
Return a list of the peaks of the composition self. The peaks of a composition are the descents which do not immediately follow another descent.
EXAMPLES:
sage: Composition([1, 1, 3, 1, 2, 1, 3]).peaks()
[4, 7]
refinement(*args, **kwds)
Deprecated: Use refinement_splitting_lengths() instead. See trac ticket #13243 for details.
refinement_splitting(J)
Return the refinement splitting of self according to J.
INPUT:
• J – A composition such that I is finer than J
OUTPUT:
• the unique list of compositions $$(I^{(p)})_{p=1\ldots m}$$, obtained by splitting $$I$$, such that $$|I^{(p)}| = J_p$$ for all $$p = 1, \ldots, m$$.
EXAMPLES:
sage: Composition([1,2,2,1,1,2]).refinement_splitting([5,1,3])
[[1, 2, 2], [1], [1, 2]]
sage: Composition([]).refinement_splitting([])
[]
sage: Composition([3]).refinement_splitting([2])
Traceback (most recent call last):
...
ValueError: compositions self (= [3]) and J (= [2]) must be of the same size
sage: Composition([2,1]).refinement_splitting([1,2])
Traceback (most recent call last):
...
ValueError: composition J (= [2, 1]) does not refine self (= [1, 2])
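The splitting itself is a greedy walk through $$I$$, closing a block each time the running total reaches the next part of $$J$$. A plain-Python sketch (not the Sage code) raising ValueError in the same situations as the doctests above, though with simpler messages:

```python
def refinement_splitting(I, J):
    """Split I into consecutive blocks whose sums are the parts of J."""
    if sum(I) != sum(J):
        raise ValueError("I and J must be of the same size")
    blocks, it = [], iter(I)
    for j in J:
        block, total = [], 0
        while total < j:        # greedily consume parts of I
            part = next(it)
            block.append(part)
            total += part
        if total != j:          # overshot: I is not finer than J
            raise ValueError("J does not coarsen I")
        blocks.append(block)
    return blocks

def refinement_splitting_lengths(I, J):
    """Lengths of the blocks in the refinement splitting."""
    return [len(block) for block in refinement_splitting(I, J)]
```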
refinement_splitting_lengths(J)
Return the lengths of the compositions in the refinement splitting of I=self according to J.
See refinement_splitting() for the definition of refinement splitting.
EXAMPLES:
sage: Composition([1,2,2,1,1,2]).refinement_splitting_lengths([5,1,3])
[3, 1, 2]
sage: Composition([]).refinement_splitting_lengths([])
[]
sage: Composition([3]).refinement_splitting_lengths([2])
Traceback (most recent call last):
...
ValueError: compositions self (= [3]) and J (= [2]) must be of the same size
sage: Composition([2,1]).refinement_splitting_lengths([1,2])
Traceback (most recent call last):
...
ValueError: composition J (= [2, 1]) does not refine self (= [1, 2])
reversed(*args, **kwds)
Return the reverse composition of self.
EXAMPLES:
sage: Composition([1, 1, 3, 1, 2, 1, 3]).reversed()
[3, 1, 2, 1, 3, 1, 1]
shuffle_product(other, overlap=False)
The (overlapping) shuffles of self and other.
Suppose $$I = (i_1, \ldots, i_k)$$ and $$J = (j_1, \ldots, j_l)$$ are two compositions. A shuffle of $$I$$ and $$J$$ is a composition of length $$k + l$$ that contains both $$I$$ and $$J$$ as subsequences.
More generally, an overlapping shuffle of $$I$$ and $$J$$ is obtained by distributing the elements of $$I$$ and $$J$$ (preserving the relative ordering of these elements) among the positions of an empty list; an element of $$I$$ and an element of $$J$$ are permitted to share the same position, in which case they are replaced by their sum. In particular, a shuffle of $$I$$ and $$J$$ is an overlapping shuffle of $$I$$ and $$J$$.
INPUT:
• other – composition
• overlap – boolean (default: False); if True, the overlapping shuffle product is returned.
OUTPUT:
An enumerated set (allowing for multiplicities)
EXAMPLES:
The shuffle product of $$[2,2]$$ and $$[1,1,3]$$:
sage: alph = Composition([2,2])
sage: beta = Composition([1,1,3])
sage: S = alph.shuffle_product(beta); S
Shuffle product of [2, 2] and [1, 1, 3]
sage: S.list()
[[2, 2, 1, 1, 3], [2, 1, 2, 1, 3], [2, 1, 1, 2, 3], [2, 1, 1, 3, 2], [1, 2, 2, 1, 3], [1, 2, 1, 2, 3], [1, 2, 1, 3, 2], [1, 1, 2, 2, 3], [1, 1, 2, 3, 2], [1, 1, 3, 2, 2]]
The overlapping shuffle product of $$[2,2]$$ and $$[1,1,3]$$:
sage: alph = Composition([2,2])
sage: beta = Composition([1,1,3])
sage: O = alph.shuffle_product(beta, overlap=True); O
Overlapping shuffle product of [2, 2] and [1, 1, 3]
sage: O.list()
[[2, 2, 1, 1, 3], [2, 1, 2, 1, 3], [2, 1, 1, 2, 3], [2, 1, 1, 3, 2], [1, 2, 2, 1, 3], [1, 2, 1, 2, 3], [1, 2, 1, 3, 2], [1, 1, 2, 2, 3], [1, 1, 2, 3, 2], [1, 1, 3, 2, 2], [3, 2, 1, 3], [2, 3, 1, 3], [3, 1, 2, 3], [2, 1, 3, 3], [3, 1, 3, 2], [2, 1, 1, 5], [1, 3, 2, 3], [1, 2, 3, 3], [1, 3, 3, 2], [1, 2, 1, 5], [1, 1, 5, 2], [1, 1, 2, 5], [3, 3, 3], [3, 1, 5], [1, 3, 5]]
Note that the shuffle product of two compositions can include the same composition more than once since a composition can be a shuffle of two compositions in several ways. For example:
sage: S = Composition([1]).shuffle_product([1]); S
Shuffle product of [1] and [1]
sage: S.list()
[[1, 1], [1, 1]]
sage: O = Composition([1]).shuffle_product([1], overlap=True); O
Overlapping shuffle product of [1] and [1]
sage: O.list()
[[1, 1], [1, 1], [2]]
TESTS:
sage: Composition([]).shuffle_product([]).list()
[[]]
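The (overlapping) shuffle product has a compact recursive description: peel off the first part of $$I$$, of $$J$$, or (in the overlapping case) of both at once, summing the two peeled parts. An illustrative plain-Python sketch, returning a list with multiplicities like the Sage enumerated set (this is not Sage's implementation):

```python
def shuffles(I, J, overlap=False):
    """All (overlapping) shuffles of I and J, with multiplicity."""
    if not I:
        return [list(J)]
    if not J:
        return [list(I)]
    # first part comes from I, or from J, or (overlapping) from both merged
    out = [[I[0]] + s for s in shuffles(I[1:], J, overlap)]
    out += [[J[0]] + s for s in shuffles(I, J[1:], overlap)]
    if overlap:
        out += [[I[0] + J[0]] + s for s in shuffles(I[1:], J[1:], overlap)]
    return out
```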
size()
Return the size of self, that is the sum of its parts.
EXAMPLES:
sage: Composition([7,1,3]).size()
11
static sum(compositions)
Return the concatenation of the given compositions.
INPUT:
• compositions – a list (or iterable) of compositions
EXAMPLES:
sage: Composition.sum([Composition([1, 1, 3]), Composition([4, 1, 2]), Composition([3,1])])
[1, 1, 3, 4, 1, 2, 3, 1]
Any iterable can be provided as input:
sage: Composition.sum([Composition([i,i]) for i in [4,1,3]])
[4, 4, 1, 1, 3, 3]
Empty inputs are handled gracefully:
sage: Composition.sum([]) == Composition([])
True
to_code()
Return the code of the composition self. The code of a composition is a list of length self.size() of 1s and 0s such that there is a 1 wherever a new part starts.
EXAMPLES:
sage: Composition([4,1,2,3,5]).to_code()
[1, 0, 0, 0, 1, 1, 0, 1, 0, 0, 1, 0, 0, 0, 0]
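The code is immediate to compute in plain Python (an illustrative sketch, not Sage's implementation), and inverting it recovers the composition as the gaps between consecutive 1s:

```python
def to_code(comp):
    """1 at each position where a new part starts, over sum(comp) slots."""
    code = []
    for part in comp:
        code += [1] + [0] * (part - 1)
    return code

def from_code(code):
    """Recover the composition: distances between consecutive 1s."""
    starts = [i for i, b in enumerate(code) if b == 1]
    ends = starts[1:] + [len(code)]
    return [e - s for s, e in zip(starts, ends)]
```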
to_partition(*args, **kwds)
Sorts self into decreasing order and returns the corresponding partition.
EXAMPLES:
sage: Composition([2,1,3]).to_partition()
[3, 2, 1]
sage: Composition([4,2,2]).to_partition()
[4, 2, 2]
to_skew_partition(overlap=1)
Return the skew partition obtained from self. The parameter overlap indicates the number of cells that are covered by cells of the previous line.
EXAMPLES:
sage: Composition([3,4,1]).to_skew_partition()
[6, 6, 3] / [5, 2]
sage: Composition([3,4,1]).to_skew_partition(overlap=0)
[8, 7, 3] / [7, 3]
to_subset(final=False)
The subset corresponding to self under the bijection (see below) between compositions of $$n$$ and subsets of $$\{1, 2, \ldots, n-1\}$$.
The bijection maps a composition $$(i_1, \ldots, i_k)$$ of $$n$$ to $$\{i_1, i_1 + i_2, i_1 + i_2 + i_3, \ldots, i_1 + \cdots + i_{k-1}\}$$.
INPUT:
• final – (default: False) whether or not to include the final partial sum, which is always the size of the composition.
See also partial_sums().
EXAMPLES:
sage: Composition([1,1,3,1,2,1,3]).to_subset()
{1, 2, 5, 6, 8, 9}
sage: for I in Compositions(3): print I.to_subset()
{1, 2}
{1}
{2}
{}
With final=True, the sum of all the elements of the composition is included in the subset:
sage: Composition([1,1,3,1,2,1,3]).to_subset(final=True)
{1, 2, 5, 6, 8, 9, 12}
TESTS:
We verify that to_subset is indeed a bijection for compositions of size $$n = 8$$:
sage: n = 8
sage: all(Composition(from_subset=(S, n)).to_subset() == S \
... for S in Subsets(n-1))
True
sage: all(Composition(from_subset=(I.to_subset(), n)) == I \
... for I in Compositions(n))
True
class sage.combinat.composition.Compositions(is_infinite=False)
Set of integer compositions.
A composition $$c$$ of a nonnegative integer $$n$$ is a list of positive integers with total sum $$n$$.
EXAMPLES:
There are 8 compositions of 4:
sage: Compositions(4).cardinality()
8
Here is the list of them:
sage: Compositions(4).list()
[[1, 1, 1, 1], [1, 1, 2], [1, 2, 1], [1, 3], [2, 1, 1], [2, 2], [3, 1], [4]]
You can use the .first() method to get the ‘first’ composition of a number:
sage: Compositions(4).first()
[1, 1, 1, 1]
You can also calculate the ‘next’ composition given the current one:
sage: Compositions(4).next([1,1,2])
[1, 2, 1]
If $$n$$ is not specified, this returns the combinatorial class of all (non-negative) integer compositions:
sage: Compositions()
Compositions of non-negative integers
sage: [] in Compositions()
True
sage: [2,3,1] in Compositions()
True
sage: [-2,3,1] in Compositions()
False
If $$n$$ is specified, it returns the class of compositions of $$n$$:
sage: Compositions(3)
Compositions of 3
sage: list(Compositions(3))
[[1, 1, 1], [1, 2], [2, 1], [3]]
sage: Compositions(3).cardinality()
4
The following examples show how to test whether or not an object is a composition:
sage: [3,4] in Compositions()
True
sage: [3,4] in Compositions(7)
True
sage: [3,4] in Compositions(5)
False
Similarly, one can check whether or not an object is a composition which satisfies further constraints:
sage: [4,2] in Compositions(6, inner=[2,2])
True
sage: [4,2] in Compositions(6, inner=[2,3])
False
sage: [4,1] in Compositions(5, inner=[2,1], max_slope = 0)
True
Note that the given constraints should be compatible:
sage: [4,2] in Compositions(6, inner=[2,2], min_part=3)
True
The options length, min_length, and max_length can be used to set length constraints on the compositions. For example, the compositions of 4 of length equal to, at least, and at most 2 are given by:
sage: Compositions(4, length=2).list()
[[3, 1], [2, 2], [1, 3]]
sage: Compositions(4, min_length=2).list()
[[3, 1], [2, 2], [2, 1, 1], [1, 3], [1, 2, 1], [1, 1, 2], [1, 1, 1, 1]]
sage: Compositions(4, max_length=2).list()
[[4], [3, 1], [2, 2], [1, 3]]
Setting both min_length and max_length to the same value is equivalent to setting length to this value:
sage: Compositions(4, min_length=2, max_length=2).list()
[[3, 1], [2, 2], [1, 3]]
The options inner and outer can be used to set part-by-part containment constraints. The list of compositions of 4 bounded above by [3,1,2] is given by:
sage: list(Compositions(4, outer=[3,1,2]))
[[3, 1], [2, 1, 1], [1, 1, 2]]
outer sets max_length to the length of its argument. Moreover, the parts of outer may be infinite to clear the constraint on specific parts. This is the list of compositions of 4 of length at most 3 such that the first and third parts are at most 1:
sage: Compositions(4, outer=[1,oo,1]).list()
[[1, 3], [1, 2, 1]]
This is the list of compositions of 4 bounded below by [1,1,1]:
sage: Compositions(4, inner=[1,1,1]).list()
[[2, 1, 1], [1, 2, 1], [1, 1, 2], [1, 1, 1, 1]]
The options min_slope and max_slope can be used to set constraints on the slope, that is the difference p[i+1]-p[i] of two consecutive parts. The following is the list of weakly increasing compositions of 4:
sage: Compositions(4, min_slope=0).list()
[[4], [2, 2], [1, 3], [1, 1, 2], [1, 1, 1, 1]]
Here are the weakly decreasing ones:
sage: Compositions(4, max_slope=0).list()
[[4], [3, 1], [2, 2], [2, 1, 1], [1, 1, 1, 1]]
The following is the list of compositions of 4 such that two consecutive parts differ by at most one:
sage: Compositions(4, min_slope=-1, max_slope=1).list()
[[4], [2, 2], [2, 1, 1], [1, 2, 1], [1, 1, 2], [1, 1, 1, 1]]
The constraints can be combined together in all reasonable ways. This is the list of compositions of 5 of length between 2 and 4 such that the difference between consecutive parts is between -2 and 1:
sage: Compositions(5, max_slope=1, min_slope=-2, min_length=2, max_length=4).list()
[[3, 2], [3, 1, 1], [2, 3], [2, 2, 1], [2, 1, 2], [2, 1, 1, 1], [1, 2, 2], [1, 2, 1, 1], [1, 1, 2, 1], [1, 1, 1, 2]]
We can do the same thing with an outer constraint:
sage: Compositions(5, max_slope=1, min_slope=-2, min_length=2, max_length=4, outer=[2,5,2]).list()
[[2, 3], [2, 2, 1], [2, 1, 2], [1, 2, 2]]
However, providing incoherent constraints may yield strange results. It is up to the user to ensure that the inner and outer compositions themselves satisfy the parts and slope constraints.
Note that if you specify min_part=0, then the objects produced may have parts equal to zero. This violates the internal assumptions that the composition class makes. Use at your own risk, or preferably consider using IntegerVectors instead:
sage: Compositions(2, length=3, min_part=0).list()
doctest:... RuntimeWarning: Currently, setting min_part=0 produces Composition objects which violate internal assumptions. Calling methods on these objects may produce errors or WRONG results!
[[2, 0, 0], [1, 1, 0], [1, 0, 1], [0, 2, 0], [0, 1, 1], [0, 0, 2]]
sage: list(IntegerVectors(2, 3))
[[2, 0, 0], [1, 1, 0], [1, 0, 1], [0, 2, 0], [0, 1, 1], [0, 0, 2]]
The generation algorithm is constant amortized time, and handled by the generic tool IntegerListsLex.
TESTS:
sage: C = Compositions(4, length=2)
sage: C == loads(dumps(C))
True
sage: Compositions(6, min_part=2, length=3)
Compositions of the integer 6 satisfying constraints length=3, min_part=2
sage: [2, 1] in Compositions(3, length=2)
True
sage: [2,1,2] in Compositions(5, min_part=1)
True
sage: [2,1,2] in Compositions(5, min_part=2)
False
sage: Compositions(4, length=2).cardinality()
3
sage: Compositions(4, min_length=2).cardinality()
7
sage: Compositions(4, max_length=2).cardinality()
4
sage: Compositions(4, max_part=2).cardinality()
5
sage: Compositions(4, min_part=2).cardinality()
2
sage: Compositions(4, outer=[3,1,2]).cardinality()
3
sage: Compositions(4, length=2).list()
[[3, 1], [2, 2], [1, 3]]
sage: Compositions(4, min_length=2).list()
[[3, 1], [2, 2], [2, 1, 1], [1, 3], [1, 2, 1], [1, 1, 2], [1, 1, 1, 1]]
sage: Compositions(4, max_length=2).list()
[[4], [3, 1], [2, 2], [1, 3]]
sage: Compositions(4, max_part=2).list()
[[2, 2], [2, 1, 1], [1, 2, 1], [1, 1, 2], [1, 1, 1, 1]]
sage: Compositions(4, min_part=2).list()
[[4], [2, 2]]
sage: Compositions(4, outer=[3,1,2]).list()
[[3, 1], [2, 1, 1], [1, 1, 2]]
sage: Compositions(3, outer = Composition([3,2])).list()
[[3], [2, 1], [1, 2]]
sage: Compositions(4, outer=[1,oo,1]).list()
[[1, 3], [1, 2, 1]]
sage: Compositions(4, inner=[1,1,1]).list()
[[2, 1, 1], [1, 2, 1], [1, 1, 2], [1, 1, 1, 1]]
sage: Compositions(4, inner=Composition([1,2])).list()
[[2, 2], [1, 3], [1, 2, 1]]
sage: Compositions(4, min_slope=0).list()
[[4], [2, 2], [1, 3], [1, 1, 2], [1, 1, 1, 1]]
sage: Compositions(4, min_slope=-1, max_slope=1).list()
[[4], [2, 2], [2, 1, 1], [1, 2, 1], [1, 1, 2], [1, 1, 1, 1]]
sage: Compositions(5, max_slope=1, min_slope=-2, min_length=2, max_length=4).list()
[[3, 2], [3, 1, 1], [2, 3], [2, 2, 1], [2, 1, 2], [2, 1, 1, 1], [1, 2, 2], [1, 2, 1, 1], [1, 1, 2, 1], [1, 1, 1, 2]]
sage: Compositions(5, max_slope=1, min_slope=-2, min_length=2, max_length=4, outer=[2,5,2]).list()
[[2, 3], [2, 2, 1], [2, 1, 2], [1, 2, 2]]
Element
alias of Composition
from_code(code)
Return the composition from its code. The code of a composition is a list of 1s and 0s, of length equal to the size of the composition, such that there is a 1 wherever a new part starts.
EXAMPLES:
sage: Composition([4,1,2,3,5]).to_code()
[1, 0, 0, 0, 1, 1, 0, 1, 0, 0, 1, 0, 0, 0, 0]
sage: Compositions().from_code(_)
[4, 1, 2, 3, 5]
sage: Composition([3,1,2,3,5]).to_code()
[1, 0, 0, 1, 1, 0, 1, 0, 0, 1, 0, 0, 0, 0]
sage: Compositions().from_code(_)
[3, 1, 2, 3, 5]
from_descents(descents, nps=None)
Returns a composition from the list of descents.
INPUT:
• descents – an iterable
• nps – (default: None) an integer or None; if None, then nps is taken to be $$1$$ plus the maximum element of descents.
EXAMPLES:
sage: [x-1 for x in Composition([1, 1, 3, 4, 3]).to_subset()]
[0, 1, 4, 8]
sage: Compositions().from_descents([1,0,4,8],12)
[1, 1, 3, 4, 3]
sage: Compositions().from_descents([1,0,4,8,11])
[1, 1, 3, 4, 3]
from_subset(S, n)
The composition of $$n$$ corresponding to the subset S of $$\{1, 2, \ldots, n-1\}$$ under the bijection that maps the composition $$(i_1, i_2, \ldots, i_k)$$ of $$n$$ to the subset $$\{i_1, i_1 + i_2, i_1 + i_2 + i_3, \ldots, i_1 + \cdots + i_{k-1}\}$$ (see Composition.to_subset()).
INPUT:
• S – an iterable, a subset of $$\{1, 2, \ldots, n-1\}$$
• n – an integer
EXAMPLES:
sage: Compositions().from_subset([2,1,5,9], 12)
[1, 1, 3, 4, 3]
sage: Compositions().from_subset({2,1,5,9}, 12)
[1, 1, 3, 4, 3]
TESTS:
sage: Compositions().from_subset([2,1,5,9],9)
Traceback (most recent call last):
...
ValueError: S (=[1, 2, 5, 9]) is not a subset of {1, ..., 8}
class sage.combinat.composition.Compositions_all
Class of all compositions.
subset(size=None)
Return the set of compositions of the given size.
EXAMPLES:
sage: C = Compositions()
sage: C.subset(4)
Compositions of 4
sage: C.subset(size=3)
Compositions of 3
class sage.combinat.composition.Compositions_constraints(n, length=None, min_length=0, max_length=inf, floor=None, ceiling=None, min_part=0, max_part=inf, min_slope=-inf, max_slope=inf, name=None, element_constructor=None, element_class=None, global_options=None)
Initialize self.
TESTS:
sage: C = IntegerListsLex(2, length=3)
sage: C == loads(dumps(C))
True
sage: C == loads(dumps(C)) # this did fail at some point, really!
True
sage: C is loads(dumps(C)) # todo: not implemented
True
sage: C.cardinality().parent() is ZZ
True
sage: TestSuite(C).run()
class sage.combinat.composition.Compositions_n(n)
Class of compositions of a fixed $$n$$.
cardinality()
Return the number of compositions of $$n$$.
TESTS:
sage: Compositions(3).cardinality()
4
sage: Compositions(0).cardinality()
1
sage.combinat.composition.composition_from_subset(S, n)
This has been deprecated in trac ticket #14063. Use Compositions.from_subset() instead.
EXAMPLES:
sage: from sage.combinat.composition import composition_from_subset
sage: composition_from_subset([2,1,5,9], 12)
doctest:1: DeprecationWarning: composition_from_subset is deprecated. Use Compositions().from_subset instead.
See http://trac.sagemath.org/14063 for details.
[1, 1, 3, 4, 3]
sage.combinat.composition.from_code(code)
This has been deprecated in trac ticket #14063. Use Compositions.from_code() instead.
EXAMPLES:
sage: import sage.combinat.composition as composition
sage: Composition([4,1,2,3,5]).to_code()
[1, 0, 0, 0, 1, 1, 0, 1, 0, 0, 1, 0, 0, 0, 0]
sage: composition.from_code(_)
doctest:1: DeprecationWarning: from_code is deprecated. Use Compositions().from_code instead.
See http://trac.sagemath.org/14063 for details.
[4, 1, 2, 3, 5]
sage.combinat.composition.from_descents(descents, nps=None)
This has been deprecated in trac ticket #14063. Use Compositions.from_descents() instead.
EXAMPLES:
sage: [x-1 for x in Composition([1, 1, 3, 4, 3]).to_subset()]
[0, 1, 4, 8]
sage: sage.combinat.composition.from_descents([1,0,4,8],12)
doctest:1: DeprecationWarning: from_descents is deprecated. Use Compositions().from_descents instead.
See http://trac.sagemath.org/14063 for details.
[1, 1, 3, 4, 3]
http://www.sciencehq.com/chemistry/chemical-equilibrium-ch.html
Chemical Equilibrium
Equilibrium is the state of a process at which the measurable properties of the system do not change with time. The term chemical equilibrium is used for a chemical reaction when the concentrations of the reactants and the products do not change with time.
We consider the following reactions:
(i) $BaCl_2 + H_2SO_4 \to BaSO_4 \downarrow + 2HCl$
(ii) $2KClO_3 \to 2KCl + 3O_2 \uparrow$
(iii) $2HI \rightleftharpoons H_2 + I_2$ (closed vessel)
(iv) $N_2 + 3H_2 \rightleftharpoons 2NH_3$ (Haber’s process)
Reactions (i) and (ii) proceed in one direction only and go to completion with time; hence they are known as irreversible reactions. On the other hand, reactions (iii) and (iv), in which the products recombine under the same set of conditions to give back the reactants in significant quantities, are called reversible reactions. Such reactions are represented by the sign of reversibility $(\rightleftharpoons)$.
Reactions (ii) and (iii) are both carried out on heating, but (ii) is irreversible and is therefore also called thermal decomposition, whereas (iii) is reversible and is therefore also called thermal dissociation.
Some more examples of Irreversible and Reversible reactions are:
Irreversible reactions:
$2Na + 2H_2O \to 2NaOH + H_2$ [unreactive products]
$NaCl + AgNO_3 \to AgCl \downarrow + NaNO_3$ [precipitation reaction]
$SnCl_2 + 2FeCl_3 \to SnCl_4 + 2FeCl_2$ [Redox]
$I_2 + 2Na_2S_2O_3 \to 2NaI + Na_2S_4O_6$ [Redox]
• Decomposition of $CaCO_3$ in the open.
• Evaporation of $H_2O$ in the open.
Reversible Reactions
1. Decomposition of $CaCO_3$ and evaporation of $H_2O$ in a closed container.
2. $3Fe + 4H_2O \rightleftharpoons Fe_3O_4 + 4H_2$ [Reactive products]
3. $PCl_5 \rightleftharpoons PCl_3 + Cl_2$
4. $N_2 + 3H_2 \rightleftharpoons 2NH_3$
It is evident that chemical equilibrium is possible only in reversible reactions. At equilibrium the reaction does not stop but proceeds in both directions at equal rates; therefore, the equilibrium is said to be dynamic. Hence the state of chemical equilibrium may be defined as: “The state of a reversible reaction in which the concentrations of the reactants and products do not change with time is called chemical equilibrium.”
Characteristics of Chemical Equilibrium are listed below:
(i) It can be achieved from any side.
(ii) It is dynamic in nature.
(iii) The rate of forward reaction is equal to that of backward reaction.
(iv) The ratio between the concentrations of the reactants and the products in the mixture is constant.
(v) It is affected by changes in the measurable properties of the system, such as concentration, temperature, pressure, mass and volume.
https://support.bioconductor.org/p/9144489/#9144496
Creating DESeq object and choosing a contrast
Dev Raj:
I am new to this RNAseq world
I want to perform differential gene expression analysis using DESeq2. I read the DESeq2 vignette but got confused in the Contrasts and Interaction section.
My metadata consists of 22 samples of adipose tissue, 12 from high-fat-diet-fed mice (HFD) and 10 from chow-fed mice (CD), each co-cultured with tumor (T) or control (C).
I have seen that adipose tissue from HFD already differs significantly from adipose tissue from CD (without tumor), but I want to test and filter out only those differences that result from tumor co-culture in HFD compared to CD.
I used the following code to create the DESeq object
dds <- DESeqDataSetFromMatrix(countData = main_data, colData = meta_data, design = ~treatment + tumor_status + treatment:tumor_status)
dds <- DESeq(dds)
resultsNames(dds)
[1] "Intercept" "treatment_HFD_vs_CD" "tumor_status_T_vs_C" "treatmentHFD.tumor_statusT"
Is my approach to creating the DESeq object correct? Also, which contrast will give me the desired result?
@mikelove:
If you've read over the section on interactions and are still confused what these terms mean, I'd recommend consulting with a statistician or someone familiar with linear models in R. Picking and interpreting the results table is a key part of the experiment.
https://www.potfit.net/wiki/doku.php?id=interactions:tbeam
# potfit wiki
open source force-matching
# Two band EAM Potentials
## Basic Theory
The two-band EAM potentials are an extension of the regular EAM potentials. Instead of a single transfer and embedding function, two separate pairs of functions are used to model two bands, usually referred to as the d- and s-band contributions.
$$E_\text{total}=\frac{1}{2}\sum_{i<j}^N\Phi_{ij}(r_{ij})+\sum_iF_i^d(n_i^d)+\sum_iF_i^s(n_i^s)$$ where $$n^d_i=\sum_{j\neq i}\rho^d_j(r_{ij}) \qquad \text{and} \qquad n^s_i=\sum_{j\neq i}\rho^s_j(r_{ij})$$
## Number of potential functions
To describe a system with $N$ atom types you need $N(N+9)/2$ potentials.
| # atom types | $\Phi_{ij}$ | $\rho^d_j$ | $F^d_i$ | $\rho^s_j$ | $F^s_i$ | Total # potentials |
|---|---|---|---|---|---|---|
| $N$ | $N(N+1)/2$ | $N$ | $N$ | $N$ | $N$ | $N(N+9)/2$ |
| 1 | 1 | 1 | 1 | 1 | 1 | 5 |
| 2 | 3 | 2 | 2 | 2 | 2 | 11 |
| 3 | 6 | 3 | 3 | 3 | 3 | 18 |
| 4 | 10 | 4 | 4 | 4 | 4 | 26 |
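The counting formula can be sanity-checked in plain Python (illustrative only; potfit itself is not involved): the $N(N+1)/2$ symmetric pair potentials plus one $\rho^d$, $F^d$, $\rho^s$ and $F^s$ per atom type give $N(N+9)/2$ functions in total.

```python
def n_potentials(N):
    """Total number of potential functions for N atom types in TBEAM."""
    pair = N * (N + 1) // 2   # symmetric pair potentials Phi_ij
    bands = 4 * N             # rho^d, F^d, rho^s, F^s: one of each per type
    return pair + bands       # equals N*(N+9)//2
```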
## Order of potential functions
The potential table is assumed to be symmetric, i.e. the potential for the atom types 1-0 is the same as the potential 0-1.
The order of the TBEAM potentials in the potential file for N atom types is:
$\Phi_{00}, \ldots, \Phi_{0N}, \Phi_{11}, \ldots, \Phi_{1N}, \ldots, \Phi_{NN},$
$\rho^d_0, \ldots, \rho^d_N,$
$F^d_0, \ldots, F^d_N,$
$\rho^s_0, \ldots, \rho^s_N,$
$F^s_0, \ldots, F^s_N$
## Special remarks
Tabulated two band EAM potentials require the embedding function $F_i$ to be defined at a density of $1.0$. This is necessary to fix the gauge degrees of freedom.
http://aux.planetmath.org/node/41111/source
# categorical dynamics
\documentclass{article}
% this is the default PlanetMath preamble. as your knowledge
% of TeX increases, you will probably want to edit this, but
% it should be fine as is for beginners.
% almost certainly you want these
\usepackage{amssymb}
\usepackage{amsmath}
\usepackage{amsfonts}
% used for TeXing text within eps files
%\usepackage{psfrag}
% need this for including graphics (\includegraphics)
%\usepackage{graphicx}
% for neatly defining theorems and propositions
%\usepackage{amsthm}
% making logically defined graphics
%%%\usepackage{xypic}
% there are many more packages, add them here as you need them
% define commands here
\usepackage{amsmath, amssymb, amsfonts, amsthm, amscd, latexsym}
%%\usepackage{xypic}
\usepackage[mathscr]{eucal}
\setlength{\textwidth}{6.5in}
%\setlength{\textwidth}{16cm}
\setlength{\textheight}{9.0in}
%\setlength{\textheight}{24cm}
\hoffset=-.75in %%ps format
%\hoffset=-1.0in %%hp format
\voffset=-.4in
\theoremstyle{plain}
\newtheorem{lemma}{Lemma}[section]
\newtheorem{proposition}{Proposition}[section]
\newtheorem{theorem}{Theorem}[section]
\newtheorem{corollary}{Corollary}[section]
\theoremstyle{definition}
\newtheorem{definition}{Definition}[section]
\newtheorem{example}{Example}[section]
%\theoremstyle{remark}
\newtheorem{remark}{Remark}[section]
\newtheorem*{notation}{Notation}
\newtheorem*{claim}{Claim}
\renewcommand{\thefootnote}{\ensuremath{\fnsymbol{footnote}}}
\numberwithin{equation}{section}
\newcommand{\Aut}{{\rm Aut}}
\newcommand{\Cl}{{\rm Cl}}
\newcommand{\Co}{{\rm Co}}
\newcommand{\DES}{{\rm DES}}
\newcommand{\Diff}{{\rm Diff}}
\newcommand{\Dom}{{\rm Dom}}
\newcommand{\Hol}{{\rm Hol}}
\newcommand{\Mon}{{\rm Mon}}
\newcommand{\Hom}{{\rm Hom}}
\newcommand{\Ker}{{\rm Ker}}
\newcommand{\Ind}{{\rm Ind}}
\newcommand{\IM}{{\rm Im}}
\newcommand{\Is}{{\rm Is}}
\newcommand{\ID}{{\rm id}}
\newcommand{\GL}{{\rm GL}}
\newcommand{\Iso}{{\rm Iso}}
\newcommand{\Sem}{{\rm Sem}}
\newcommand{\St}{{\rm St}}
\newcommand{\Sym}{{\rm Sym}}
\newcommand{\SU}{{\rm SU}}
\newcommand{\Tor}{{\rm Tor}}
\newcommand{\U}{{\rm U}}
\newcommand{\A}{\mathcal A}
\newcommand{\Ce}{\mathcal C}
\newcommand{\D}{\mathcal D}
\newcommand{\E}{\mathcal E}
\newcommand{\F}{\mathcal F}
\newcommand{\G}{\mathcal G}
\newcommand{\Q}{\mathcal Q}
\newcommand{\R}{\mathcal R}
\newcommand{\cS}{\mathcal S}
\newcommand{\cU}{\mathcal U}
\newcommand{\W}{\mathcal W}
\newcommand{\bA}{\mathbb{A}}
\newcommand{\bB}{\mathbb{B}}
\newcommand{\bC}{\mathbb{C}}
\newcommand{\bD}{\mathbb{D}}
\newcommand{\bE}{\mathbb{E}}
\newcommand{\bF}{\mathbb{F}}
\newcommand{\bG}{\mathbb{G}}
\newcommand{\bK}{\mathbb{K}}
\newcommand{\bM}{\mathbb{M}}
\newcommand{\bN}{\mathbb{N}}
\newcommand{\bO}{\mathbb{O}}
\newcommand{\bP}{\mathbb{P}}
\newcommand{\bR}{\mathbb{R}}
\newcommand{\bV}{\mathbb{V}}
\newcommand{\bZ}{\mathbb{Z}}
\newcommand{\bfE}{\mathbf{E}}
\newcommand{\bfX}{\mathbf{X}}
\newcommand{\bfY}{\mathbf{Y}}
\newcommand{\bfZ}{\mathbf{Z}}
\renewcommand{\O}{\Omega}
\renewcommand{\o}{\omega}
\newcommand{\vp}{\varphi}
\newcommand{\vep}{\varepsilon}
\newcommand{\diag}{{\rm diag}}
\newcommand{\grp}{{\mathbb G}}
\newcommand{\dgrp}{{\mathbb D}}
\newcommand{\desp}{{\mathbb D^{\rm{es}}}}
\newcommand{\Geod}{{\rm Geod}}
\newcommand{\geod}{{\rm geod}}
\newcommand{\hgr}{{\mathbb H}}
\newcommand{\mgr}{{\mathbb M}}
\newcommand{\ob}{{\rm Ob}}
\newcommand{\obg}{{\rm Ob(\mathbb G)}}
\newcommand{\obgp}{{\rm Ob(\mathbb G')}}
\newcommand{\obh}{{\rm Ob(\mathbb H)}}
\newcommand{\Osmooth}{{\Omega^{\infty}(X,*)}}
\newcommand{\ghomotop}{{\rho_2^{\square}}}
\newcommand{\gcalp}{{\mathbb G(\mathcal P)}}
\newcommand{\rf}{{R_{\mathcal F}}}
\newcommand{\glob}{{\rm glob}}
\newcommand{\loc}{{\rm loc}}
\newcommand{\TOP}{{\rm TOP}}
\newcommand{\wti}{\widetilde}
\newcommand{\what}{\widehat}
\renewcommand{\a}{\alpha}
\newcommand{\be}{\beta}
\newcommand{\ga}{\gamma}
\newcommand{\Ga}{\Gamma}
\newcommand{\de}{\delta}
\newcommand{\del}{\partial}
\newcommand{\ka}{\kappa}
\newcommand{\si}{\sigma}
\newcommand{\ta}{\tau}
\newcommand{\med}{\medbreak}
\newcommand{\medn}{\medbreak \noindent}
\newcommand{\bign}{\bigbreak \noindent}
\newcommand{\lra}{{\longrightarrow}}
\newcommand{\ra}{{\rightarrow}}
\newcommand{\rat}{{\rightarrowtail}}
\newcommand{\oset}[1]{\overset {#1}{\ra}}
\newcommand{\osetl}[1]{\overset {#1}{\lra}}
\newcommand{\hr}{{\hookrightarrow}}
\begin{document}
\subsection{Introduction}
Categorical dynamics is a relatively recent area (dating from 1958) of applied algebraic topology/category theory and higher dimensional algebra concerned with system dynamics. It utilizes concepts such as categories, functors, natural transformations, higher dimensional categories and \PMlinkname{supercategories}{Supercategory} to study motion and dynamic processes in classical/quantum systems, as well as complex or super-complex systems (biodynamics).
A type of categorical dynamics was first introduced and studied by William F. Lawvere for classical systems. Subsequently, a complex class of categorical, dynamic $(M,R)$--systems representing the categorical dynamics involved in metabolic--replication processes in terms of categories of sets and ODEs was reported by Robert Rosen in 1970.
One can represent in square categorical diagrams the emergence of ultra-complex
dynamics from the super-complex dynamics of human organisms coupled {\em via} social interactions
in characteristic patterns represented by \PMlinkname{Rosetta biogroupoids}{RosettaGroupoids}, together with the complex--albeit inanimate--systems with `chaos'. With the emergence of the ultra-complex system of the human mind--based on the super-complex human organism--there is always an associated progression towards higher dimensional algebras from the lower dimensions of human neural network dynamics and the simple algebra of physical dynamics, as shown in the following, essentially non-commutative categorical diagram of dynamic systems and their transformations.
\subsection{Basic definitions in categorical dynamics}
\begin{definition}
An \emph{ultra-complex system, $U_{CS}$} is defined as an object representation in the following non-commutative
diagram of dynamic systems and dynamic system morphisms or dynamic transformations:
$$\xymatrix@C=5pc{[SUPER-COMPLEX] \ar [r] ^{(\textbf{Higher Dim})} \ar[d] _{\Lambda}& ~~~(U_{CS}= ULTRA-COMPLEX) \ar [d]^{onto}\\ COMPLEX& \ar [l] ^{(\textbf{Generic Map})}[SIMPLE]}$$
\end{definition}
One notes that the above diagram is indeed not `natural' (that is, it is not commutative) for reasons
related to the emergence of the higher dimensions of the super--complex
(biological/organismic) and/or ultra--complex (psychological/neural network dynamic) levels in comparison with
the low dimensions of either simple (physical/classical) or complex (chaotic) dynamic systems. Moreover,
each type of dynamic system shown in the above diagram is in its turn represented by a distinct diagram
representing its dynamics in terms of transitions occurring in a state space $S$ according to one or several
transition functions or dynamic laws, denoted by $\delta$ for either classical or chaotic physical systems and
by a class of transition functions:
$$\left\{\lambda_{\tau} \right\} _{\tau \in T},$$
where $T$ is an index class consisting of dynamic parameters $\tau$ that label the transformation stages of either a super-complex or an ultra-complex system, thus keeping track of the switches that occur between dynamic laws in highly complex dynamic systems with variable topology. Therefore, in the latter two cases, highly complex systems are in fact represented, respectively, by functor categories and supercategories of diagrams because categorical diagrams can be defined as functors. An important class of the simpler dynamic systems can be represented by algebraic categories; an example of such class of simple dynamic systems is that endowed with \emph{monadic dynamics} represented by the category of Eilenberg-Moore algebras.
%%%%%
%%%%%
\end{document}
http://tex.stackexchange.com/questions/71548/tikz-joining-points-on-a-circle
# Tikz: joining points on a circle
I have the following figure
I would like to draw portions of circles between some of the red points. More explicitly, I would like to go from ac1 to ab1 and then to ac2 following circle A, then go to bc1 following circle C and to ab2 following circle B and back to ac1 following circle A.
There is probably a solution using the arc operation, but this would require computing the angles for every portion of circle, which can get tedious. Is there a simpler way to do it?
(I was thinking maybe drawing circles with \clip, but I can't figure out how to do it)
Here is my example code
\documentclass{article}
\usepackage{tikz}
\usetikzlibrary{calc,intersections}
\begin{document}
\begin{tikzpicture}
\coordinate (a) at (0,0);
\coordinate (b) at (-1,-1);
\coordinate (c) at (1,-1);
\draw[name path=circleA] (a) circle (1.5cm);
\draw[name path=circleB] (b) circle (1.5cm);
\draw[name path=circleC] (c) circle (1.5cm);
\fill [red, name intersections={of=circleA and circleB,name=intAB}]
(intAB-1) circle (2pt) node[above left] {ab1}
(intAB-2) circle (2pt) node[below right] {ab2};
\fill [red, name intersections={of=circleA and circleC,name=intAC}]
(intAC-1) circle (2pt) node[above right] {ac1}
(intAC-2) circle (2pt) node[below left] {ac2};
\begin{scope}
\clip (a) circle (1.5cm);
\fill [red, name intersections={of=circleB and circleC,name=intBC}]
(intBC-1) circle (2pt) node[below] {bc1}
(intBC-2) circle (2pt) node {bc2};
\end{scope}
\node (A) at ($(a)+(0,1)$) {$A$};
\node (B) at ($(b)+(-1,0)$) {$B$};
\node (C) at ($(c)+(1,0)$) {$C$};
\end{tikzpicture}
\end{document}
Clips are certainly an easy way to do this, how "clean" do you want the joins to be? – Loop Space Sep 14 '12 at 15:16
In the final document I may not draw the intersection points, so it would be nice if the joints were clean enough so that the line appears continuous. – Corentin Sep 14 '12 at 15:34
In which case clipping is not the best option. That's useful to know. – Loop Space Sep 14 '12 at 15:42
@AndrewStacey With \pgfpatharcto the joints are perfectly clean, so from a practical point of view I am happy with this solution. However, if you have another method with clips, even if joints do not match so well, I would be glad to have a look at the difference and learn something new.. – Corentin Sep 14 '12 at 21:21
Always worth learning! If no-one else beats me to it (and, everyone else, please do!) then I'll add a clip solution next time I'm on a "proper" computer. – Loop Space Sep 14 '12 at 22:43
Here's a solution that uses the nodes that you have defined and the commands
\pgfpointanchor{<node>}{<anchor>}
\pgfpathmoveto{<coordinate>}
The idea is to use \pgfpointanchor to get the coordinates of one of the points of intersection. You then use \pgfpathmoveto to move there, and then use \pgfpatharcto to draw an arc to the other point of intersection (whose coordinates you find using \pgfpointanchor again). All of these commands are detailed in the pgf manual.
% new bit
\pgfsetlinewidth{2pt}
% path between ac1 and ab1
\pgfsetstrokecolor{blue}
\pgfpathmoveto{\pgfpointanchor{intAC-1}{south}}
\pgfpatharcto{1.5cm}{1.5cm}{0}{0}{1}{\pgfpointanchor{intAB-1}{south}}
\pgfusepath{stroke}
% path between ab1 and ac2
\pgfsetstrokecolor{red}
\pgfpathmoveto{\pgfpointanchor{intAB-1}{south}}
\pgfpatharcto{1.5cm}{1.5cm}{0}{0}{1}{\pgfpointanchor{intAC-2}{south}}
\pgfusepath{stroke}
% path between ac2 and bc1
\pgfsetstrokecolor{green}
\pgfpathmoveto{\pgfpointanchor{intAC-2}{south}}
\pgfpatharcto{1.5cm}{1.5cm}{0}{0}{0}{\pgfpointanchor{intBC-1}{south}}
\pgfusepath{stroke}
% path between bc1 and ab2
\pgfsetstrokecolor{yellow}
\pgfpathmoveto{\pgfpointanchor{intBC-1}{south}}
\pgfpatharcto{1.5cm}{1.5cm}{0}{0}{0}{\pgfpointanchor{intAB-2}{south}}
\pgfusepath{stroke}
% path between ab2 and ac1
\pgfsetstrokecolor{orange}
\pgfpathmoveto{\pgfpointanchor{intAB-2}{south}}
\pgfpatharcto{1.5cm}{1.5cm}{0}{0}{1}{\pgfpointanchor{intAC-1}{south}}
\pgfusepath{stroke}
Note that some of the paths are traversed clockwise, and some counter clockwise, determined by the 5th argument to \pgfpatharcto
Here's the complete MWE
\documentclass{article}
\usepackage{tikz}
\usetikzlibrary{calc,intersections}
\begin{document}
\begin{tikzpicture}
\coordinate (a) at (0,0);
\coordinate (b) at (-1,-1);
\coordinate (c) at (1,-1);
\draw[name path=circleA] (a) circle (1.5cm);
\draw[name path=circleB] (b) circle (1.5cm);
\draw[name path=circleC] (c) circle (1.5cm);
\fill [red, name intersections={of=circleA and circleB,name=intAB}]
(intAB-1) circle (2pt) node[above left] {ab1}
(intAB-2) circle (2pt) node[below right] {ab2};
\fill [red, name intersections={of=circleA and circleC,name=intAC}]
(intAC-1) circle (2pt) node[above right] {ac1}
(intAC-2) circle (2pt) node[below left] {ac2};
\begin{scope}
\clip (a) circle (1.5cm);
\fill [red, name intersections={of=circleB and circleC,name=intBC}]
(intBC-1) circle (2pt) node[below] {bc1}
(intBC-2) circle (2pt) node {bc2};
\end{scope}
% new bit
\pgfsetlinewidth{2pt}
% path between ac1 and ab1
\pgfsetstrokecolor{blue}
\pgfpathmoveto{\pgfpointanchor{intAC-1}{south}}
\pgfpatharcto{1.5cm}{1.5cm}{0}{0}{1}{\pgfpointanchor{intAB-1}{south}}
\pgfusepath{stroke}
% path between ab1 and ac2
\pgfsetstrokecolor{red}
\pgfpathmoveto{\pgfpointanchor{intAB-1}{south}}
\pgfpatharcto{1.5cm}{1.5cm}{0}{0}{1}{\pgfpointanchor{intAC-2}{south}}
\pgfusepath{stroke}
% path between ac2 and bc1
\pgfsetstrokecolor{green}
\pgfpathmoveto{\pgfpointanchor{intAC-2}{south}}
\pgfpatharcto{1.5cm}{1.5cm}{0}{0}{0}{\pgfpointanchor{intBC-1}{south}}
\pgfusepath{stroke}
% path between bc1 and ab2
\pgfsetstrokecolor{yellow}
\pgfpathmoveto{\pgfpointanchor{intBC-1}{south}}
\pgfpatharcto{1.5cm}{1.5cm}{0}{0}{0}{\pgfpointanchor{intAB-2}{south}}
\pgfusepath{stroke}
% path between ab2 and ac1
\pgfsetstrokecolor{orange}
\pgfpathmoveto{\pgfpointanchor{intAB-2}{south}}
\pgfpatharcto{1.5cm}{1.5cm}{0}{0}{1}{\pgfpointanchor{intAC-1}{south}}
\pgfusepath{stroke}
\node (A) at ($(a)+(0,1)$) {$A$};
\node (B) at ($(b)+(-1,0)$) {$B$};
\node (C) at ($(c)+(1,0)$) {$C$};
\end{tikzpicture}
\end{document}
Thanks a lot for your answer, this is what I was looking for. I wasn't aware of the command \pgfpatharcto, which is very useful indeed. – Corentin Sep 14 '12 at 21:13
Edit: a new version with better junctions...
This is not the first question that asks how to draw an arc between two points on a circle with known center. So I decided to create two new styles to meet this need. Here is an example of use:
\draw (a) to[clockwise arc centered at=c] (b);
This command draws an arc starting at a, ending at b, and centered at c (more precisely, if b does not lie on the circle through a centered at c, the arc ends on the ray from c through b).
There are two styles: clockwise arc centered at and anticlockwise arc centered at.
(Due to rounding errors, always use line join=round to get better connections between some arcs.)
\documentclass[margin=5mm]{standalone}
\usepackage{tikz}
\usetikzlibrary{calc,intersections}
\tikzset{
anticlockwise arc centered at/.style={
to path={
let \p1=(\tikztostart), \p2=(\tikztotarget), \p3=(#1) in
\pgfextra{
\pgfmathsetmacro{\anglestart}{atan2(\x1-\x3,\y1-\y3)}
\pgfmathsetmacro{\angletarget}{atan2(\x2-\x3,\y2-\y3)}
\pgfmathsetmacro{\angletarget}%
{\angletarget < \anglestart ? \angletarget+360 : \angletarget}
}
arc (\anglestart:\angletarget:{veclen(\x1-\x3,\y1-\y3)})
},
},
clockwise arc centered at/.style={
to path={
let \p1=(\tikztostart), \p2=(\tikztotarget), \p3=(#1) in
\pgfextra{
\pgfmathsetmacro{\anglestart}{atan2(\x1-\x3,\y1-\y3)}
\pgfmathsetmacro{\angletarget}{atan2(\x2-\x3,\y2-\y3)}
\pgfmathsetmacro{\angletarget}%
{\angletarget > \anglestart ? \angletarget - 360 : \angletarget}
}
arc (\anglestart:\angletarget:{veclen(\x1-\x3,\y1-\y3)})
},
},
}
\begin{document}
\begin{tikzpicture}
% 3 centers (a, b, c)
\coordinate (a) at (0,0);
\coordinate (b) at (-1,-1);
\coordinate (c) at (1,-1);
% 3 circles
\draw[name path=circleA] (a) circle (1.5cm);
\draw[name path=circleB] (b) circle (1.5cm);
\draw[name path=circleC] (c) circle (1.5cm);
% label of circles
\node (A) at ($(a)+(0,1)$) {$A$};
\node (B) at ($(b)+(-1,0)$) {$B$};
\node (C) at ($(c)+(1,0)$) {$C$};
% intersections of circles (A) and (B)
\path [name intersections={of=circleA and circleB,name=AB}];
% show them
\fill[red] (AB-1) circle (2pt) node[above left] {AB-1};
\fill[red] (AB-2) circle (2pt) node[below right] {AB-2};
% intersections of circles (A) and (C)
\path [name intersections={of=circleA and circleC,name=AC}];
% show them
\fill[red] (AC-1) circle (2pt) node[above right] {AC-1};
\fill[red] (AC-2) circle (2pt) node[below left] {AC-2};
% intersections of circles (B) and (C)
\path[name intersections={of=circleB and circleC,name=BC}];
% show them
\fill[red] (BC-1) circle (2pt) node[above] {BC-1};
\fill[red] (BC-2) circle (2pt) node[below] {BC-2};
\draw[line join=round,orange,fill=orange,fill opacity=.5,line width=1pt]
(AC-2)
to[clockwise arc centered at=a] (AB-2)
to[anticlockwise arc centered at=b] (BC-1)
to[anticlockwise arc centered at=c] (AC-2);
\end{tikzpicture}
\end{document}
I like this, nice work :) – cmhughes Sep 14 '12 at 21:59
Very elegant, thank you very much ! – Corentin Sep 14 '12 at 22:04
Even though cmhughes has already shown us his version with \pgfpatharcto I want to add a version that TikZ-ifies the \pgfpatharcto command under the new path operator arc to.
The code has been originally developed for another question on TeXwelt.de (German). The only difference is that it uses arc to instead of arc*.
With this operator, the required arc can be drawn (and filled) with
\draw (intAC-1) arc to [arc large] (intAC-2)
arc to [arc cw] (intBC-1)
arc to [arc cw] (intAB-2)
arc to [] (intAC-1) -- cycle;
The options
• arc large and arc small (<large arc flag>) as well as
• arc cw and arc ccw (<counterclockwise flag>)
correspond to the flags of \pgfpatharcto (argument #4 and #5).
The third argument is used for rotation and can be set with arc rotation (initially 0).
As the precision of \pgfpatharcto is rather bad, the joined close (-- cycle) doesn’t look so good with the default miter line join (but only at 6400 % zoom), I’d use line join=round where this imperfection disappears.
The path operator arc to misses a proper timer (the function that places nodes “along” the path), as a substitute it uses the timer of a straight line (--). The [ ] are mandatory (as can be seen at the fourth occurrence of arc to).
## Code
\documentclass[tikz,convert=false]{standalone}
\tikzset{
arc/ccw/.initial=1,
arc/large/.initial=0,
arc ccw/.style={/tikz/arc/ccw=1},
arc cw/.style={/tikz/arc/ccw=0},
arc large/.style={/tikz/arc/large=1},
arc small/.style={/tikz/arc/large=0},
arc rotation/.initial=0
}
\usetikzlibrary{intersections}
\makeatletter
\def\tikz@arcA rc{\pgfutil@ifnextchar t%
{\tikz@flush@moveto\tikz@arcB@opt}% -> our new "arc to"
{\tikz@flush@moveto\tikz@arc@cont}}% -> our old "arc"
\def\tikz@arcB@opt to#1[#2]{%
\def\tikz@arcB@options{#2}
\tikz@do@@arcB}
\def\tikz@do@@arcB{%
\pgfutil@ifnextchar n{\tikz@collect@label@onpath\tikz@do@@arcB}
{\pgfutil@ifnextchar c{\tikz@collect@coordinate@onpath\tikz@do@@arcB}
{\tikz@scan@one@point\tikz@do@arcB}}}
\def\tikz@do@arcB#1{%
\edef\tikz@timer@start{\noexpand\pgfqpoint{\the\tikz@lastx}{\the\tikz@lasty}}
\tikz@make@last@position{#1}%
\edef\tikz@timer@end{\noexpand\pgfqpoint{\the\tikz@lastx}{\the\tikz@lasty}}%
\iftikz@shapeborder
\edef\tikz@moveto@waiting{\tikz@shapeborder@name}%
\fi
\begingroup
\tikzset{every arc/.try}%
\expandafter\tikzset\expandafter{\tikz@arcB@options}%
\let\tikz@arc@x\pgfmathresult
\ifpgfmathunitsdeclared
\edef\tikz@arc@x{\tikz@arc@x pt}%
\else
\pgf@process{\pgfpointxy{\tikz@arc@x}{0}}%
\pgfmathveclen@{\pgf@x}{\pgf@y}%
\edef\tikz@arc@x{\pgfmathresult pt}%
\fi
\let\tikz@arc@y\pgfmathresult
\ifpgfmathunitsdeclared
\edef\tikz@arc@y{\tikz@arc@y pt}%
\else
\pgf@process{\pgfpointxy{0}{\tikz@arc@y}}%
\pgfmathveclen@{\pgf@x}{\pgf@y}%
\edef\tikz@arc@y{\pgfmathresult pt}%
\fi
\pgfpatharcto{\tikz@arc@x}{\tikz@arc@y}
{\pgfkeysvalueof{/tikz/arc rotation}}{\pgfkeysvalueof{/tikz/arc/large}}
{\pgfkeysvalueof{/tikz/arc/ccw}}{#1}%
\endgroup
\let\tikz@timer=\tikz@timer@line
\tikz@scan@next@command
}
\makeatother
\begin{document}
\begin{tikzpicture}
\draw[name path=circleA] ( 0, 0) coordinate (a) circle [radius=1.5cm];
\draw[name path=circleB] (-1,-1) coordinate (b) circle [radius=1.5cm];
\draw[name path=circleC] ( 1,-1) coordinate (c) circle [radius=1.5cm];
\fill [red, name intersections={of=circleA and circleB,name=intAB}]
(intAB-1) circle (2pt) node[above left] {ab1}
(intAB-2) circle (2pt) node[below right] {ab2};
\fill [red, name intersections={of=circleA and circleC,name=intAC}]
(intAC-1) circle (2pt) node[above right] {ac1}
(intAC-2) circle (2pt) node[below left] {ac2};
\fill [red, name intersections={of=circleB and circleC,name=intBC}]
(intBC-1) circle (2pt) node[above] {bc1};
\node (A) at ([shift={(0,1)}] a) {$A$};
\node (B) at ([shift={(-1,0)}]b) {$B$};
\node (C) at ([shift={(1,0)}] c) {$C$};
\draw[
thick,
line join=round,
draw=blue,
fill opacity=.5,
fill=blue!50
] (intAC-1) arc to [arc large] (intAC-2)
arc to [arc cw] (intBC-1)
arc to [arc cw] (intAB-2)
arc to [] (intAC-1) -- cycle;
\end{tikzpicture}
\end{document}
https://www.physicsforums.com/threads/integral-using-residue-theorem.349560/
# Integral using Residue Theorem
1. Oct 27, 2009
### yeahhyeahyeah
1. The problem statement, all variables and given/known data
the integral of 1/(1+x^4) from -infinity to +infinity
2. Relevant equations
Residue theorem.
3. The attempt at a solution
1/(1+z^4) so z^4 = -1
I know I should be using the residues at z = -sqrt(i) and z= i*sqrt(i)
I am getting a complex number as an answer which makes no sense
residue at z = -sqrt(i) = 1/(4*i*sqrt(i))
and at z = i*sqrt(i) = 1/(4*sqrt(i))
and therefore integral of (1/1+z^4) = 2pi*i* sum of those residues
Am I on the right track?
2. Oct 28, 2009
### Winzer
This is a tricky one. Where else does $$z^4+1 =0$$? Maybe at $$z=e^{i\frac{\pi}{4}}$$.
Can you get it from here?
Last edited: Oct 28, 2009
3. Oct 28, 2009
### lanedance
been a while since I've done these, but as a start, rather than working with sqrt i, which i don't think is unique, I would find the singularities by first letting
$$z = r e^{j \theta}$$
whilst for some integer n
$$-1 = e^{j\pi(2n+1)}$$
then the singularities can be found by
$$z^4 = r^4 e^{j 4 \theta} = e^{j\pi(2n+1)}$$
giving
$$\theta = \frac{\pi(2n+1)}{4} = \pi(n/2+1/4)$$
which I think will give 4 unique singularities for n = 0, 1, 2, 3
4. Oct 28, 2009
### yeahhyeahyeah
OOOh, I see, I used the singularities and I get pi/sqrt2 which I think is right. Thank you guys so much. I actually have another related question now though. First of all, why does sqrt(i) not work when I use it as z?
Also,
I'm now trying to do:
integral (0 to infinity) of sin(x^2) using residue theorem as well. I can't figure out how to make the substitution.
I dunno when/if you guys will answer that question but thanks so much for your help just now!
5. Oct 28, 2009
### lanedance
well first you didn't find all the singularities (2 out of 4) & sqrt(i) is not unique. There are 2 unique answers, note we try and solve
$$z^2 = i$$
so as before
$$(e^{i \theta})^2 = e^{i 2 \theta} = e^{i(2n+1)\pi}$$
so the 2 solutions in the \theta range [0, 2pi) are
$$\theta = (n+1/2)\pi = \pi/2,\ 3\pi/2$$
compare it with solving
$$z^2 = 1$$
there is a plus & minus solution
that said if i remember residues correctly, you probably only need 2 of them anyway
not too sure about your sin question, but it may be worth trying writing the sin in terms of the sum/differnce of 2 complex exponentials
6. Oct 28, 2009
### Winzer
I agree. Except when you make the semicircle, the poles n=0, 1 are the only ones on the upper half plane.
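As a sanity check on the thread's conclusion (my own addition, not part of the thread): summing the residues at the two upper-half-plane poles and multiplying by $2\pi i$ reproduces $\pi/\sqrt{2}$ numerically.

```python
import cmath
import math

# Upper-half-plane poles of 1/(1+z^4): z = e^{i*pi/4} and e^{i*3*pi/4} (n = 0, 1)
poles = [cmath.exp(1j * math.pi * (2 * n + 1) / 4) for n in (0, 1)]

# At a simple pole z0 of 1/(1+z^4), the residue is 1/(4*z0**3)
residues = [1 / (4 * z0 ** 3) for z0 in poles]

# Close the contour in the upper half-plane
integral = 2j * math.pi * sum(residues)
```

The imaginary parts of the two residue contributions cancel, leaving the real value $\pi/\sqrt{2} \approx 2.221$.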
http://en.wikipedia.org/wiki/Brent-Salamin_algorithm
# Gauss–Legendre algorithm
The Gauss–Legendre algorithm is an algorithm for computing the digits of π. It is notable for its rapid convergence: only 25 iterations produce 45 million correct digits of π. Its drawback is that it is memory-intensive, so Machin-like formulas are sometimes used instead.
The method is based on the individual work of Carl Friedrich Gauss (1777–1855) and Adrien-Marie Legendre (1752–1833) combined with modern algorithms for multiplication and square roots. It repeatedly replaces two numbers by their arithmetic and geometric mean, in order to approximate their arithmetic-geometric mean.
The version presented below is also known as the Gauss–Euler, Brent–Salamin (or Salamin–Brent) algorithm;[1] it was independently discovered in 1975 by Richard Brent and Eugene Salamin. It was used to compute the first 206,158,430,000 decimal digits of π on September 18 to 20, 1999, and the results were checked with Borwein's algorithm.
## Algorithm
1. Initial value setting:
$a_0 = 1\qquad b_0 = \frac{1}{\sqrt{2}}\qquad t_0 = \frac{1}{4}\qquad p_0 = 1.\!$
2. Repeat the following instructions until the difference of $a_n\!$ and $b_n\!$ is within the desired accuracy:
\begin{align} a_{n+1} & = \frac{a_n + b_n}{2}, \\ b_{n+1} & = \sqrt{a_n b_n}, \\ t_{n+1} & = t_n - p_n(a_{n}-a_{n+1})^2, \\ p_{n+1} & = 2p_n. \end{align}
3. π is then approximated as:
$\pi \approx \frac{(a_{n+1}+b_{n+1})^2}{4t_{n+1}}.\!$
The first three iterations give (approximations given up to and including the first incorrect digit):
$3.140\dots\!$
$3.14159264\dots\!$
$3.1415926535897932382\dots\!$
The algorithm is quadratically convergent, which means that the number of correct digits approximately doubles with each iteration.
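The three steps above are easy to run directly. The following sketch (not part of the article) uses Python's standard `decimal` module; the iteration count of 5 and the 60-digit working precision are illustrative choices.

```python
from decimal import Decimal, getcontext

def gauss_legendre_pi(iterations=5):
    """Approximate pi with the Gauss-Legendre (Brent-Salamin) iteration."""
    getcontext().prec = 60  # working precision in significant digits
    a = Decimal(1)
    b = Decimal(1) / Decimal(2).sqrt()
    t = Decimal(1) / Decimal(4)
    p = Decimal(1)
    for _ in range(iterations):
        a_next = (a + b) / 2
        b = (a * b).sqrt()
        t -= p * (a - a_next) ** 2
        a = a_next
        p *= 2
    return (a + b) ** 2 / (4 * t)

pi_approx = gauss_legendre_pi()
```

With 5 iterations the quadratic convergence already yields far more correct digits than needed at this working precision.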
## Mathematical background
### Limits of the arithmetic–geometric mean
The arithmetic–geometric mean of two numbers, a0 and b0, is found by calculating the limit of the sequences
\begin{align} a_{n+1} & = \frac{a_n+b_n}{2}, \\ b_{n+1} & = \sqrt{a_n b_n}, \end{align}
which both converge to the same limit.
If $a_0=1\!$ and $b_0=\cos\varphi\!$ then the limit is ${\pi \over 2K(\sin\varphi)}\!$ where $K(k)\!$ is the complete elliptic integral of the first kind
$K(k) = \int_0^{\pi/2} \frac{d\theta}{\sqrt{1-k^2 \sin^2\theta}}.\!$
If $c_0 = \sin\varphi\!$ and $c_{i+1} = a_i - a_{i+1}\!$, then
$\sum_{i=0}^\infty 2^{i-1} c_i^2 = 1 - {E(\sin\varphi)\over K(\sin\varphi)}\!$
where $E(k)\!$ is the complete elliptic integral of the second kind:
$E(k) = \int_0^{\pi/2}\sqrt {1-k^2 \sin^2\theta}\, d\theta.\!$
Gauss knew of both of these results.[2][3][4]
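As a quick numerical illustration of the first limit above (my own check, not from the article): the AGM iteration and a direct midpoint-rule evaluation of the defining integral of $K(\sin\varphi)$ agree closely.

```python
import math

def agm(a, b, tol=1e-15):
    """Arithmetic-geometric mean: replace (a, b) by their two means until equal."""
    while abs(a - b) > tol:
        a, b = (a + b) / 2, math.sqrt(a * b)
    return a

phi = math.radians(30)
k = math.sin(phi)

# K(sin phi) = pi / (2 * agm(1, cos phi))
K_agm = math.pi / (2 * agm(1.0, math.cos(phi)))

# Midpoint-rule evaluation of K(k) = integral_0^{pi/2} dtheta / sqrt(1 - k^2 sin^2 theta)
N = 200_000
h = (math.pi / 2) / N
K_num = h * sum(1 / math.sqrt(1 - k**2 * math.sin((i + 0.5) * h)**2)
                for i in range(N))
```

For $\varphi = 30^\circ$ (so $k = 1/2$) both values come out near 1.68575.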
### Legendre’s identity
For $\varphi\!$ and $\theta\!$ such that $\varphi+\theta={1 \over 2}\pi\!$ Legendre proved the identity:
$K(\sin \varphi) E(\sin \theta ) + K(\sin \theta ) E(\sin \varphi) - K(\sin \varphi) K(\sin \theta) = {1 \over 2}\pi.\!$[2]
### Gauss–Euler method
The values $\varphi=\theta={\pi\over 4}\!$ can be substituted into Legendre’s identity and the approximations to K, E can be found by terms in the sequences for the arithmetic geometric mean with $a_0=1\!$ and $b_0=\sin{\pi \over 4}=\frac{1}{\sqrt{2}}\!$.[5]
## References
1. ^ Brent, Richard, Old and New Algorithms for pi, Letters to the Editor, Notices of the AMS 60(1), p. 7
2. ^ a b Brent, Richard (1975), Traub, J F, ed., "Multiple-precision zero-finding methods and the complexity of elementary function evaluation", Analytic Computational Complexity (New York: Academic Press): 151–176, retrieved 8 September 2007
3. ^ Salamin, Eugene, Computation of pi, Charles Stark Draper Laboratory ISS memo 74–19, 30 January 1974, Cambridge, Massachusetts
4. ^ Salamin, Eugene (1976), "Computation of pi Using Arithmetic–Geometric Mean", Mathematics of Computation 30 (135): 565–570, ISSN 0025-5718
5. ^ Adlaj, Semjon, An eloquent formula for the perimeter of an ellipse, Notices of the AMS 59(8), p. 1096
https://math.stackexchange.com/questions/4126355/consecutive-number-divisibility
# Consecutive Number Divisibility
While I was solving a practice problem, I became interested in the following question: is it possible for both $$\frac{x+1}y$$ AND $$\frac x{y+1}$$ to be integers, and if so, how would I find such pairs? Looking at this, I was pretty sure there weren't any, but I had no concrete mathematical proof. I still don't have a conclusion, which is why I was wondering if any of you do.
• what do you mean by solutions? – kyary May 4 at 3:48
• Yeah any $x = y > 0$ will do it here. Another example is something like $x = 14, y = 2$ with $14/2 = 7, 15/3 = 5.$ – Stephen Donovan May 4 at 3:49
• I apologize I had written the question incorrectly – Smartsav10 May 4 at 3:57
• @kyary I have reworded the problem. – Smartsav10 May 4 at 4:02
• @ParclyTaxel sorry the question was incorrect before. – Smartsav10 May 4 at 4:02
For example, $$16/2$$ and $$15/3$$, i.e. $$x=15$$, $$y=2$$.
More generally, $$x = y^2-1$$ works: $$(y^2-1+1)/y = y$$ and $$(y^2-1)/(y+1) = y-1$$.
• May I ask how you came up with that? – Smartsav10 May 4 at 4:08
• I noticed $(-y-1)+1,y$ and $(-y-1),y+1$ worked and adding $y(y+1)$ to the numerator keeps it an integer. Doing it once gives my formula. More generally you can use the Chinese remainder Theorem. – Eric May 4 at 11:54
For any $$y$$ there exist infinitely many $$x$$ with the given divisibility conditions holding; they are the solutions of $$x\equiv-1 \pmod{y}, \qquad x\equiv0 \pmod{y+1}.$$ Note that $$y$$ and $$y+1$$ are coprime, so there is a unique solution modulo $$y(y+1)$$ by the Chinese remainder theorem, and that is $$y^2-1$$.
\begin{align} \dfrac{x+1}y &= m \\ \dfrac x{y+1} &= n \\ \hline x+1 &= my \\ x &= ny + n \\ \hline ny + n + 1 &= my \\ my - ny &= n+1 \\ \hline y &= \dfrac{n+1}{m-n} \\ x &= n(y+1) \\ \end{align}
So, for example, let $$n=11$$, then the possible values for $$m-11$$ are $$1,2,3,4,6,12$$, the divisors of $$n+1=12$$.
$$\begin{array}{rrr| rr | rr} m-11 & m & n & x & y & \frac{x+1}y & \frac{x}{y+1}\\ \hline 1 & 12 & 11 & 143 & 12 & 12 & 11 \\ 2 & 13 & 11 & 77 & 6 & 13 & 11 \\ 3 & 14 & 11 & 55 & 4 & 14 & 11 \\ 4 & 15 & 11 & 44 & 3 & 15 & 11 \\ 6 & 17 & 11 & 33 & 2 & 17 & 11 \\ 12 & 23 & 11 & 22 & 1 & 23 & 11 \end{array}$$
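The families above are easy to sanity-check by brute force; a quick Python sketch (the function name `works` is mine):

```python
def works(x, y):
    """True when both (x+1)/y and x/(y+1) are integers."""
    return (x + 1) % y == 0 and x % (y + 1) == 0

# The x = y^2 - 1 family from the answers above
assert all(works(y * y - 1, y) for y in range(1, 50))

# All solutions for y = 2 up to 100: exactly the x congruent to
# y^2 - 1 = 3 modulo y*(y+1) = 6, as the CRT argument predicts.
sols = [x for x in range(1, 101) if works(x, 2)]
print(sols[:4])  # -> [3, 9, 15, 21]
```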
https://uwm.edu/secu/faculty/standing/pec/
Physical Environment Committee (PEC)
The Physical Environment Committee (PEC) makes recommendations for the development of the physical environment of the campus consistent with the mission and with the present and future academic programs of the University.
Physical Environment Subcommittees
Contact the committee at: phys-mbrs@uwm.edu
PEC Presentations by Date
Committee Name               Month Presented    Year Presented    Frequency
Inclusive Restroom Report    September          2017              Yearly
https://grinpy.readthedocs.io/en/stable/reference/generated/grinpy.functions.min_degree.html
# grinpy.functions.min_degree
grinpy.functions.min_degree(G)
Return the minimum degree of G.
The minimum degree of a graph is the smallest degree of any node in the graph.
Parameters:
    G : graph
        A NetworkX graph.
Returns:
    minDegree : int
        The minimum degree of the graph.
>>> G = nx.path_graph(3) # Path on 3 nodes
>>> nx.min_degree(G)
1
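Without grinpy installed, the same quantity is a one-liner over an adjacency mapping; a minimal sketch in plain Python (not the grinpy API):

```python
def min_degree(adjacency):
    """Smallest degree over the nodes of an undirected graph,
    given as a dict mapping node -> set of neighbours."""
    return min(len(neighbours) for neighbours in adjacency.values())

# Path graph on 3 nodes, 0 - 1 - 2: the endpoints have degree 1.
path3 = {0: {1}, 1: {0, 2}, 2: {1}}
print(min_degree(path3))  # -> 1
```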
https://solvedlib.com/n/late-departure-datacitycity-2cityprintdone,15453431
# Late Departure Data City City 2 City Print Done
###### Question:
Late Departure Data City City 2 City Print Done
https://mathematica.stackexchange.com/questions/96808/how-do-i-determine-the-15th-lucas-term-in-the-3rd-order
# How do I determine the 15th Lucas term in the 3rd order?
I want to implement the Lucas n-Step Number.
This command, taken from here, doesn't seem to work. Am I using the right command?
ClearAll[LnStepN];
LnStepN[2, n_] = 0; LnStepN[1, n_] = 1; LnStepN[3, n_] = 1;
LnStepN[k_Integer, n_Integer] :=
LnStepN[k, n] = Sum[LnStepN[k - i, n], {i, 1, Min[k, n]}]
• Can you not add the code in plain text? This would make people more likely to help you. – J. M. will be back soon Oct 12 '15 at 14:25
• Its from this answer. It does work for me :) – rhermans Oct 12 '15 at 14:40
• @rhermans Notice that the code you added does NOT have the same starting conditions as the code the OP posted. I think therein lies the problem... – MarcoB Oct 12 '15 at 15:19
• @MarcoB fixed, thanks! – rhermans Oct 13 '15 at 17:35
Based on the definition from Wolfram MathWorld, the "Lucas n-Step Number" is defined by:
ClearAll[LnStepN];
LnStepN[k_, n_] := -1 /; k < 0;
LnStepN[0, n_] = n;
LnStepN[1, n_] = 1;
LnStepN[k_Integer, n_Integer] :=
LnStepN[k, n] = Sum[LnStepN[k - i, n], {i, 1, n}]
To verify the solution:
Table[
LnStepN[k, n]
, {n, 2, 7}
, {k, 1, 12}
] // TableForm
Now, the requested 15th Lucas term in the 3rd order:
LnStepN[15, 3]
9327
• thanks for helping me, you really are a pro at this :) – Charles Oct 13 '15 at 23:20
You can take a look at OEIS for more formulas: http://oeis.org/A001644
Lucas3[n_]:=Last[LinearRecurrence[{1, 1, 1}, {3, 1, 3}, n + 1]];
Lucas3[15]
(* 9327 *)
This is experimental, but seems to work:
Lucas[k_Integer, n_Integer]:=Last[LinearRecurrence[ConstantArray[1,k], Array[(2^#-1)&, k], n]];
Lucas[3, 15]
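The same recurrence is easy to cross-check outside Mathematica; a Python sketch of the MathWorld definition used above ($L_k = -1$ for $k < 0$, $L_0 = n$, $L_1 = 1$):

```python
from functools import lru_cache

def lucas_n_step(k, n):
    """Lucas n-step number L_k^(n), MathWorld convention."""
    @lru_cache(maxsize=None)
    def L(k):
        if k < 0:
            return -1
        if k == 0:
            return n
        if k == 1:
            return 1
        return sum(L(k - i) for i in range(1, n + 1))
    return L(k)

print(lucas_n_step(15, 3))  # -> 9327, matching both answers above
```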
https://zbmath.org/?q=an%3A1405.90107
zbMATH — the first resource for mathematics
Optimization of black-box problems using Smolyak grids and polynomial approximations. (English) Zbl 1405.90107
Summary: A surrogate-based optimization method is presented, which aims to locate the global optimum of box-constrained problems using input-output data. The method starts with a global search of the $$n$$-dimensional space, using a Smolyak (sparse) grid which is constructed using Chebyshev extrema in the one-dimensional space. The collected samples are used to fit polynomial interpolants, which are used as surrogates towards the search for the global optimum. The proposed algorithm adaptively refines the grid by collecting new points in promising regions, and iteratively refines the search space around the incumbent sample until the search domain reaches a minimum hyper-volume and convergence has been attained. The algorithm is tested on a large set of benchmark problems with up to thirty dimensions and its performance is compared to a recent algorithm for global optimization of grey-box problems using quadratic, kriging and radial basis functions. It is shown that the proposed algorithm has a consistently reliable performance for the vast majority of test problems, and this is attributed to the use of Chebyshev-based sparse grids and polynomial interpolants, which have not gained significant attention in surrogate-based optimization thus far.
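The one-dimensional building block mentioned in the summary, Chebyshev extrema (Chebyshev-Gauss-Lobatto points), can be sketched as follows (the generic formula, not the paper's own code):

```python
import math

def chebyshev_extrema(m):
    """Extrema of the Chebyshev polynomial T_{m-1} on [-1, 1],
    i.e. cos(pi * j / (m - 1)) for j = 0..m-1."""
    if m == 1:
        return [0.0]
    return [math.cos(math.pi * j / (m - 1)) for j in range(m)]

print([round(x, 4) for x in chebyshev_extrema(5)])  # -> [1.0, 0.7071, 0.0, -0.7071, -1.0]
```

Nesting these node sets for increasing m is what makes the Smolyak (sparse) grid construction efficient.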
MSC:
90C26 Nonconvex programming, global optimization
Software:
ARGONAUT; AQUARS; PSwarm; MultiMin; BOBYQA; EGO
https://www.zigya.com/study/book?class=11&board=bsem&subject=Physics&book=Physics+Part+I&chapter=Motion+in+A+Plane&q_type=&q_topic=Scalars+And+Vectors&q_category=&question_id=PHEN11039412
What is the angle between the vectors and ?
Let $\theta$ be the angle between the two given vectors $\vec{A}$ and $\vec{B}$.
We know the dot product gives
$$\cos\theta = \frac{\vec{A}\cdot\vec{B}}{|\vec{A}|\,|\vec{B}|}$$
where the magnitudes are $|\vec{A}|$ and $|\vec{B}|$.
Thus, the angle between the vectors is
$$\theta = \cos^{-1}\!\left(\frac{\vec{A}\cdot\vec{B}}{|\vec{A}|\,|\vec{B}|}\right)$$
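The dot-product method can be checked numerically; a Python sketch with made-up example vectors (the question's own vectors were rendered as images and are not reproduced here):

```python
import math

A = (3.0, 4.0, 0.0)   # hypothetical example vectors
B = (4.0, 4.0, 2.0)

def mag(v):
    return math.sqrt(sum(c * c for c in v))

dot = sum(a * b for a, b in zip(A, B))  # A . B = 28; |A| = 5, |B| = 6
theta = math.degrees(math.acos(dot / (mag(A) * mag(B))))
print(round(theta, 1))  # -> 21.0
```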
Give three examples of vector quantities.
Force, impulse and momentum.
What are the basic characteristics that a quantity must possess so that it may be a vector quantity?
A quantity must possess the direction and must follow the vector axioms. Any quantity that follows the vector axioms are classified as vectors.
What is a scalar quantity?
A physical quantity that requires only magnitude for its complete specification is called a scalar quantity.
Give three examples of scalar quantities.
Mass, temperature and energy
What is a vector quantity?
A physical quantity that requires direction along with magnitude, for its complete specification is called a vector quantity.
https://www.esaral.com/q/if-the-angles-of-elevation-of-the-top-of-a-tower-from-two-points-distant-a-and-b-from-43550
# If the angles of elevation of the top of a tower from two points distant a and b from the base are complementary, then the height of the tower is
Question:
If the angles of elevation of the top of a tower from two points distant a and b from the base and in the same straight line with it are complementary, then the height of the tower is
(a) $a b$
(b) $\sqrt{a b}$
(c) $\frac{a}{b}$
(d) $\sqrt{\frac{a}{b}}$
Solution:
Let $h$ be the height of the tower.
Given that the angles of elevation of the top of the tower are $\theta$ and $90^{\circ}-\theta$ (complementary).
The distances are $BC = b$ and $BD = a$.
Here, we have to find the height of the tower.
So we use trigonometric ratios.
In triangle $ABC$,
$\tan C=\frac{A B}{B C}$
$\Rightarrow \tan \left(90^{\circ}-\theta\right)=\frac{h}{b}$
$\Rightarrow \cot \theta=\frac{h}{b}$
Again, in triangle $ABD$,
$\tan D=\frac{A B}{B D}$
$\Rightarrow \tan \theta=\frac{h}{a}$
$\Rightarrow \frac{1}{\cot \theta}=\frac{h}{a}$
Putting $\cot \theta=\frac{h}{b}$,
$\Rightarrow \frac{b}{h}=\frac{h}{a}$
$\Rightarrow h^{2}=a b$
$\Rightarrow h=\sqrt{a b}$
Hence the correct option is (b).
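A quick numeric check of $h=\sqrt{ab}$: with sample distances (values of a and b chosen arbitrarily), the two elevation angles indeed come out complementary:

```python
import math

a, b = 9.0, 4.0                 # arbitrary sample distances
h = math.sqrt(a * b)            # claimed tower height, 6.0
angle_far = math.degrees(math.atan(h / a))   # elevation angle theta
angle_near = math.degrees(math.atan(h / b))  # elevation angle 90 - theta
print(h, round(angle_far + angle_near, 6))   # -> 6.0 90.0
```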
https://wordpandit.com/basic-maths-test-60/
• This is an assessment test.
• These tests focus on the basics of Maths and are meant to indicate your preparation level for the subject.
• Kindly take the tests in this series with a pre-defined schedule.
## Basic Maths: Test 60
Question 1
$\frac{\sqrt{28}+\sqrt{252}}{\sqrt{112}}$ is equal to
A $\frac{2}{\sqrt{6}}$ B $2\sqrt{6}$ C $4\sqrt{6}$ D 2
Question 1 Explanation:
\begin{align} & \frac{\sqrt{28}+\sqrt{252}}{\sqrt{112}} \\ & =\frac{\sqrt{4\times 7}+\sqrt{36\times 7}}{\sqrt{16\times 7}} \\ & =\frac{\sqrt{7}\left( 2+6 \right)}{4\sqrt{7}} \\ & =\frac{8}{4}=2 \\ \end{align}
Question 2
$\frac{\left( 100-1 \right)\,\left( 100-2 \right)\,\left( 100-3 \right)\,.....\left( 100-99 \right)}{100\times 99\times 98\times .....\times 3\times 2\times 1}$ is equal to
A $\frac{100}{99\times 98\times 97\times .....\times 3\times 2\times 1}$ B 0.01 C 0 D $-\frac{2}{99\times 98\times 97\times ....\times 3\times 2\times 1}$
Question 2 Explanation:
\begin{align} & \frac{\left( 100-1 \right)\,\left( 100-2 \right)\,\left( 100-3 \right)\,.....\left( 100-99 \right)}{100\times 99\times 98\times .....\times 3\times 2\times 1} \\ & =\frac{\left( 99 \right)\,\left( 98 \right)\,\left( 97 \right)\,.....\left( 1 \right)}{100\times 99\times 98\times .....\times 3\times 2\times 1} \\ & =\frac{1}{100} \\ & =0.01 \\ \end{align}
Question 3
$\left( 7.5\times 7.5-37.5+2.5\times 2.5 \right)$ is equal to
A 20 B 25 C 15 D 30
Question 3 Explanation:
Let 7.5 = a, 2.5 = b
\begin{align} & Expression \\ & ={{a}^{2}}-2\times a\times b+{{b}^{2}} \\ & ={{\left( a-b \right)}^{2}} \\ & \left[ a-b=7.5-2.5=5 \right] \\ & ={{5}^{2}} \\ & =25 \\ \end{align}
Question 4
$\frac{2.25\times 2.25+2.75\times 2.75-2\times 2.25\times 2.75}{2.75\times 2.75-2.25\times 2.25}$ simplifies to
A 0.4 B 0.3 C 0.1 D 0.2
Question 4 Explanation:
\begin{align} & If\,\,2.75=a\,\,and\,\,\,2.25=b,\,\,then \\ & Expression \\ & =\frac{{{a}^{2}}+{{b}^{2}}-2ab}{{{a}^{2}}-{{b}^{2}}} \\ & =\frac{{{\left( a-b \right)}^{2}}}{\left( a-b \right)\left( a+b \right)}=\frac{\left( a-b \right)}{\left( a+b \right)} \\ & =\frac{0.5}{5} \\ & =0.1 \\ \end{align}
Question 5
$\left( \frac{5}{2}+\frac{4}{2} \right)\,\left( \frac{26}{3}-\frac{11}{3}+\frac{7}{3} \right)$ is equal to
A 33 B 19 C 37 D 36
Question 5 Explanation:
\begin{align} & \left( \frac{5}{2}+\frac{4}{2} \right)\,\left( \frac{26}{3}-\frac{11}{3}+\frac{7}{3} \right) \\ & =\frac{9}{2}\times \frac{22}{3}=33 \\ \end{align}
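The arithmetic in these explanations is easy to verify mechanically; a Python sketch checking Questions 1 and 5:

```python
from fractions import Fraction
import math

# Question 1: (sqrt(28) + sqrt(252)) / sqrt(112)
q1 = (math.sqrt(28) + math.sqrt(252)) / math.sqrt(112)
print(round(q1, 10))  # -> 2.0

# Question 5: (5/2 + 4/2) * (26/3 - 11/3 + 7/3), in exact arithmetic
q5 = (Fraction(5, 2) + Fraction(4, 2)) * (Fraction(26, 3) - Fraction(11, 3) + Fraction(7, 3))
print(q5)  # -> 33
```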
https://itectec.com/ubuntu/ubuntu-low-resolution-on-lubuntu-14-04-sis/
# Ubuntu – Low resolution on Lubuntu 14.04 (Sis)
Tags: display-resolution, lubuntu, sis-graphics
The resolution of my Lubuntu desktop is stuck at 640x480 and I cannot change it; there are no other options in the display settings.
This is my (horrible) graphic card:
01:00.0 VGA compatible controller: Silicon Integrated Systems [SiS] 771/671 PCIE VGA Display Adapter (rev 10)
And this is what I got when I run the xrandr command:
xrandr: Failed to get size of gamma for output default
Screen 0: minimum 640 x 480, current 640 x 480, maximum 640 x 480
default connected primary 640x480+0+0 0mm x 0mm
640x480 73.0*
800x600_60.00 (0x198) 38.2MHz
h: width 800 start 832 end 912 total 1024 skew 0 clock 37.4KHz
v: height 600 start 603 end 607 total 624 clock 59.9Hz
2048x1536_60.00 (0x1da) 267.2MHz
h: width 2048 start 2208 end 2424 total 2800 skew 0 clock 95.4KHz
v: height 1536 start 1539 end 1543 total 1592 clock 60.0Hz
1024x728_60.00 (0x1dc) 63.5MHz
h: width 1024 start 1072 end 1176 total 1328 skew 0 clock 47.8KHz
v: height 768 start 771 end 775 total 798 clock 59.9Hz
1024x768_60.00 (0x1dd) 63.5MHz
h: width 1024 start 1072 end 1176 total 1328 skew 0 clock 47.8KHz
v: height 768 start 771 end 775 total 798 clock 59.9Hz
So, I want a 1024x768 resolution on this computer, and at the moment I can't do anything about it. I'm a newbie in Linux, and for that reason I need some help. I tried to follow the steps given in some other topics, but nothing happened.
Yes, it's not the best graphics card ever seen, but it can do 1024x768 in Lubuntu 14.04 all the same, if you force the machine to use the vesa driver.
Create a file /usr/share/X11/xorg.conf.d/use-vesa.conf with the following content:
Section "Device"
Identifier "Configured Video Device"
Driver "vesa"
EndSection
And that's all. Reboot, and you should have your resolution.
The file can be created with any text editor - if you're not comfortable with the terminal, you'd probably want to use something graphical like leafpad. However, you'd need to be root to have write access to the place where the file is needed.
So, open up a terminal (CTRL + Alt + t) and type "sudo leafpad". You need to enter your password there. I presume you're doing this as the default user so sudo should work and the editor window will open.
Then you can copy & paste the required text into the file and save it to the given location (/usr/share/X11/xorg.conf.d/). The name "use-vesa.conf" is arbitrary, you could also call it "whatever.conf", as long as the .conf bit is in the filename, and the file is saved in the right place, it will work.
Please check whether it works for you.
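If running a graphical editor as root feels awkward, the file can also be staged locally and then copied into place; a sketch using only `cat` and `cp` (same path and file name as in the answer above):

```shell
# Stage the snippet in the current directory first...
cat > use-vesa.conf <<'EOF'
Section "Device"
    Identifier "Configured Video Device"
    Driver "vesa"
EndSection
EOF

# ...then install it (needs root) and reboot:
#   sudo cp use-vesa.conf /usr/share/X11/xorg.conf.d/
echo "staged: $(pwd)/use-vesa.conf"
```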
http://www.numdam.org/item/RSMUP_1989__81__49_0/
Solutions of minimal period of a wave equation via a generalization of a Hofer's theorem
Rendiconti del Seminario Matematico della Università di Padova, Tome 81 (1989), pp. 49-63.
@article{RSMUP_1989__81__49_0,
author = {Salvatore, A.},
title = {Solutions of minimal period of a wave equation via a generalization of a {Hofer's} theorem},
journal = {Rendiconti del Seminario Matematico della Universit\`a di Padova},
pages = {49--63},
publisher = {Seminario Matematico of the University of Padua},
volume = {81},
year = {1989},
zbl = {0696.35109},
mrnumber = {1020185},
language = {en},
url = {http://www.numdam.org/item/RSMUP_1989__81__49_0/}
}
TY - JOUR
AU - Salvatore, A.
TI - Solutions of minimal period of a wave equation via a generalization of a Hofer's theorem
JO - Rendiconti del Seminario Matematico della Università di Padova
PY - 1989
DA - 1989///
SP - 49
EP - 63
VL - 81
PB - Seminario Matematico of the University of Padua
UR - http://www.numdam.org/item/RSMUP_1989__81__49_0/
UR - https://zbmath.org/?q=an%3A0696.35109
UR - https://www.ams.org/mathscinet-getitem?mr=1020185
LA - en
ID - RSMUP_1989__81__49_0
ER -
Salvatore, A. Solutions of minimal period of a wave equation via a generalization of a Hofer's theorem. Rendiconti del Seminario Matematico della Università di Padova, Tome 81 (1989), pp. 49-63. http://www.numdam.org/item/RSMUP_1989__81__49_0/
http://openstudy.com/updates/4f27641ee4b0d9cf822e356c
PDEs, help with initial value problem: $u_{tt}-u_{xx}+2u_{xy}-u_{yy}=0 \text{ with conditions } u(1,x,y)=\cos (x)+e^y$ $\text{and } u_t(1,x,y)=\sin (x)-y^2$. I've gotten the solution to the point where I've solved the general case and gotten $u=\psi(x+t,y-t)+\phi(x-t,y+t)$, where $\psi$ and $\phi$ are arbitrary functions, but attempting to apply the initial conditions has resulted in nothing but problems; any help at all would be great.
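For reference, the quoted general solution can be verified directly with the chain rule (subscripts 1, 2 denote partial derivatives in the first and second argument of $\psi$):

```latex
\text{For } u=\psi(x+t,\,y-t):\qquad
u_{tt}=\psi_{11}-2\psi_{12}+\psi_{22},\quad
u_{xx}=\psi_{11},\quad u_{xy}=\psi_{12},\quad u_{yy}=\psi_{22},
```

so $u_{tt}-u_{xx}+2u_{xy}-u_{yy}=0$ term by term; the check for $\phi(x-t,\,y+t)$ is identical with the sign of $t$ reversed.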
https://hoven.in/ncert-chem-xi-ch-2/q66-ch2-chem.html
# (Solved) Question 2.66 of NCERT Class XI Chemistry Chapter 2
Indicate the number of unpaired electrons in : (a) P, (b) Si, (c) Cr, (d) Fe and (e) Kr.
(Rev. 03-Aug-2022)
### Question 2.66NCERT Class XI Chemistry
Indicate the number of unpaired electrons in : (a) P, (b) Si, (c) Cr, (d) Fe and (e) Kr.
### Solution in Detail(video solution below this)
$\displaystyle \underline{\underline{\text{(a) P (phosphorus)}}}$
Atomic number of P is 15
Configuration $\displaystyle \text{[Ne]}3s^23p^3$
By Hund's rule (maximizing exchange-energy stabilization), the three 3p electrons occupy separate orbitals with parallel spins.
Hence $\displaystyle p_x, p_y, p_z$ each contains 1 unpaired electron
Ans: 3
$\displaystyle \underline{\underline{\text{(b) Si (silicon)}}}$
Atomic number of Si is 14
Configuration $\displaystyle \text{[Ne]}3s^23p^2$
By Hund's rule, the two 3p electrons occupy separate orbitals with parallel spins.
Hence $\displaystyle p_x, p_y$ each contains 1 unpaired electron
Ans: 2
$\displaystyle \underline{\underline{\text{(c) Cr (chromium)}}}$
Atomic number of Cr is 24
Configuration $\displaystyle \text{[Ar]}4s^13d^5$
Due to exchange-energy stabilization, the half-filled arrangement $\displaystyle 4s^13d^5$ is more stable than $\displaystyle 4s^23d^4$.
Hence each of the five 3d contains 1 unpaired electron, and 4s also has 1 un-paired
Ans: 6
$\displaystyle \underline{\underline{\text{(d) Fe (iron)}}}$
Atomic number of Fe is 26
Configuration $\displaystyle \text{[Ar]}4s^23d^{6}$
By Hund's rule, electrons pair up only after every orbital of the subshell is singly occupied;
hence four of the 3d orbitals contain 1 electron each and one 3d orbital contains a pair
Ans: 4
$\displaystyle \underline{\underline{\text{(e) Kr (krypton)}}}$
Krypton is a noble gas with fully-filled and paired valence orbitals.
Ans: 0
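The counting in all five parts follows one rule: within a subshell, electrons occupy orbitals singly before pairing. A small sketch (hypothetical helper, not part of the original solution):

```python
# Count unpaired electrons in one subshell by Hund's rule:
# electrons fill orbitals singly before any pairing occurs.
def unpaired_in_subshell(electrons, orbitals):
    """orbitals: 1 for s, 3 for p, 5 for d, 7 for f."""
    if electrons <= orbitals:
        return electrons               # all singly occupied
    return 2 * orbitals - electrons    # each extra electron pairs one up

# Valence subshells from the configurations above:
print(unpaired_in_subshell(3, 3))                               # P:  3p^3 -> 3
print(unpaired_in_subshell(2, 3))                               # Si: 3p^2 -> 2
print(unpaired_in_subshell(1, 1) + unpaired_in_subshell(5, 5))  # Cr: 4s^1 3d^5 -> 6
print(unpaired_in_subshell(6, 5))                               # Fe: 3d^6 -> 4
print(unpaired_in_subshell(2, 1) + unpaired_in_subshell(6, 3))  # Kr: 4s^2 4p^6 -> 0
```

The outputs match the five answers above: 3, 2, 6, 4 and 0.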
https://imathworks.com/tex/tex-latex-parbox-vs-minipage-differences-in-applicability/
# [Tex/LaTex] \parbox vs. minipage: Differences in applicability
Tags: boxes, minipage
Lamport, LaTeX: A document preparation system, states on p. 104:
There are two ways to make a parbox at a given point in the text: with the \parbox command and the minipage environment. They can be used to put one or more paragraphs of text inside a picture or in a table item.
\parbox and minipage share one mandatory argument (width of the parbox) and the optional argument (vertical alignment). (The second mandatory argument of \parbox "is the text to be put in the parbox" [p. 105].) Lamport recommends the use of minipages instead of parboxes in some cases (e.g. a parbox containing a tabbing or a list-making environment), but doesn't substantiate his advice (or at least I skipped that part). Finally, from Hendrik Vogt's comment to this answer, I gather that one reason to prefer minipages is that "[y]ou don't have to wait that long for the matching closing brace".
I'm aware that the \footnote command doesn't work with \parbox; by contrast, it "puts a footnote at the bottom of the parbox produced by the [minipage] environment" (Lamport, p. 105). Are there other differences in applicability between \parbox and the minipage environment?
P.S.: Kopka and Daly, A guide to LaTeX, state on p. 89:
The text in a \parbox may not contain any of the centering, list, or other environments described in Sections 4.2 through 4.5. These may, on the other hand, appear within a minipage environment.
However, I did some tests using center, itemize and tabbing environments within a \parbox, and LaTeX did not throw error messages. Are Kopka and Daly wrong, or did I miss something?
The main reason I see to use minipage over \parbox is to allow verbatim (\verb, verbatim, etc.) text inside the box (unless, of course, you also put the minipage inside a macro argument).
EDIT Here are other differences between minipage and \parbox (from the comments to Yiannis' answer and from looking at the source code of both these macros in source2e).
A first difference, as already mentioned by lockstep in his question, is in the footnote treatment: minipage handles them by putting them at the bottom of the box, while footnotes are lost in a \parbox (to avoid this, you must resort to the \footnotemark/\footnotetext trick):
\documentclass{article}
\begin{document}
\parbox[t]{3cm}{text\footnote{parbox footnote}}
\begin{minipage}[t]{3cm}text\footnote{minipage footnote}\end{minipage}
\end{document}
A second difference is in that minipage resets the \@listdepth counter, meaning that, inside a minipage, you don't have to worry about the list nesting level when using them. Here's an example which illustrates the point:
\documentclass{article}
\begin{document}
\begin{list}{}{}\item\begin{list}{}{}\item\begin{list}{}{}\item\begin{list}{}{}\item
\begin{list}{}{}\item\begin{list}{}{}
\item %\parbox{5cm}{\begin{list}{}{}\item \end{list}}% error
\item %\begin{minipage}{5cm}\begin{list}{}{}\item \end{list}\end{minipage}% no error
\end{list}\end{list}\end{list}\end{list}\end{list}\end{list}
\end{document}
A third difference is that minipage sets the boolean \@minipagefalse which in turn deactivates \addvspace if it's the first thing to occur inside a minipage. This means that minipage will have better spacing and allow better alignment compared to \parbox in some cases like the following (left is minipage, right is \parbox):
\documentclass{article}
\begin{document}
Pros: \begin{minipage}[t]{3cm}\begin{itemize}\item first \item second%
\end{itemize}\end{minipage}
Cons: \parbox[t]{3cm}{\begin{itemize}\item first \item second\end{itemize}}
\end{document}
http://physics.stackexchange.com/tags/lienard-wiechert/hot
Tag Info
2
You can only calculate electric fields or magnetic fields after fixing a reference frame, so no, you can't move P around in that formula. It is assumed in that formula that you are working in a specific frame. The formula is invariant with respect to translating both $\mathbf{r}$ and $P$ by the same displacement, but not with respect to boosting them by the ...
2
The idea is that it takes time for a signal to travel from a source to where it is being observed--so the field here and now doesn't depend on the charge distribution now, it depends on the value that the charge distribution had $t - \frac{\ell}{c}$ ago, since information cannot travel instantaneously.
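For concreteness (a standard textbook form, not part of the quoted answer), the retarded scalar potential makes this explicit:

```latex
\varphi(\mathbf{r},t)=\frac{1}{4\pi\varepsilon_0}\int
  \frac{\rho\!\left(\mathbf{r}',\,t-\tfrac{|\mathbf{r}-\mathbf{r}'|}{c}\right)}
       {|\mathbf{r}-\mathbf{r}'|}\,\mathrm{d}^3r'
```

The source density is sampled at the retarded time $t-|\mathbf{r}-\mathbf{r}'|/c$ — precisely the "$t - \frac{\ell}{c}$" of the answer above.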
2
This is the usual argument for explaining retarded time - Consider a charge moving with a constant velocity along a straight line. If the charge suddenly comes to a halt, there will be a change in the electric field due to the acceleration. But this change in the electric field isn't communicated instantaneously through the whole universe, that's ...
2
Actually, since the charge is at rest, $u_{\nu}r^{\nu} = u_0 r^0 = ct'$, where $t'$ is the retarded time, $t'=r/c$, and $r$ is the (constant) distance to the charge.
1
Radar uses the principle of retarded time to calculate distances. Since $x=ct$, $dx = c\,dt$. Define $dx = x_1-x_2$, where $x_1$ is the radar location and $x_2$ the target location; then $dt = dx/c = (x_1-x_2)/c$ is the time required for the signal to travel to the target. So the round-trip time $= 2\,dt$, which is recorded by electronic clocks. This is an example of retarded time not special ...
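The ranging arithmetic above can be sanity-checked numerically (a sketch, not from the answer — the target distance is half the round-trip light travel time multiplied by $c$):

```python
# Radar ranging: distance = c * (round-trip time) / 2.
C = 299_792_458.0  # speed of light, m/s

def radar_distance(round_trip_seconds):
    """Distance to a radar target from the measured round-trip time."""
    return C * round_trip_seconds / 2.0

print(radar_distance(1e-3))  # a 1 ms round trip -> ~150 km
```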
1
The short answer: it isn't absent, it is only absent classically, and then only for certain initial conditions. This is the old (and nowadays solved) puzzle of the electromagnetic arrow of time, which was a subject of a three-opinion paper in the early 20th century, with Einstein expressing the correct opinion, and two other people expressing other ...
1
There is no obvious inconsistency, whether we use retarded, advanced, or any other field. If we use only retarded fields, things go as follows. At the time $t=0$, we begin to exert force $\mathbf{F}$ on the charge $q$. It will move with acceleration $\mathbf{F}/m$ for the time interval $R/c$, where $R$ is the radius of the sphere. At the time $t = R/c$, ...
http://www.gamedev.net/index.php?app=forums&module=extras§ion=postHistory&pid=5079404
Posted 15 September 2013 - 01:09 AM
Note: This thread is primarily for the organization/administration of the game dev competition. The thread which will have the posted theme and to which entries should be submitted is located here. Be sure to check that thread the second the contest starts to get the theme!
Update: The theme has been announced and the competition has officially started! Check the competition thread to see it!
Update #2: The competition is over!
Hey y'all. We haven't had a competition in a while. Therefore, I would like to create one. Here's what I'm thinking:
The Week of Awesome
What?
The Week of Awesome will be a 7 day game development challenge/competition. People will have 7 days to make a game from start to finish, with prizes for the best! It should be a heck of a lot of fun, and 7 days should be enough time that you actually finish something. Please note that while I think 7 days is plenty of time to make a game, it's not enough time to make a large game (even with a team of 3 people). Dream up something awesome, but also realistic for you to complete. I want people to finish their games!
This is not a competition for programmers only. Teams of up to 3 will be allowed, and I highly encourage a diverse team (i.e. graphics, sound, and programming, or whatever your game might specifically need). Too rarely do graphics/sound people participate in game dev jams, so I'm really hoping they will form/join a team and participate in this competition.
Where?
The Internet. It'll all happen here on GameDev.net.
When?
August 19th to August 26th, 2013. Specifically, it will start Monday, August 19th at 12:00AM (00:00 in military/24-hour time) (that is, Sunday night/Monday morning) and it will end Monday, August 26th at 12:00AM (00:00 in military/24-hour time) (that is, Sunday night/Monday morning) (a full 168 hours from the start time). All times are in Eastern Daylight Time (EDT) (UTC/GMT -4).
Rules?
Rules are slowly evolving, but currently:
• There will be a theme
• The theme will be announced at midnight when the competition begins; games must incorporate the theme in some way
• Games must be made during the 7 day period of the competition (i.e. you can't submit a game you made previously or are making now)
• You may use any library, game engine, tools, art, or audio to develop the game, provided you have the appropriate license to do so (if licensing is required for what you use); note that this means you may use code, art, and audio that you or others may have made in the past (the explicit limitation is that you may not use existing game logic (i.e. you may use libraries and engines, but you still have to actually make the game during the 7 day period))
• You may target any platform (but at least 3 judges must play your game, so if you develop for something the judges don't have, you can't be accepted as an entry; I'll post a list of platforms judges have before the competition begins)
• People may work in teams of up to 3 people (a person may only be on one team, though)
• Games must work on the judges' platforms (that is, if you have a bug in your program and it works fine on your computer but not on a judge's, you must fix it for it to be judged (during the 7 day period; no work/updates/fixes may be done after the 7 days); at least 3 judges must be able to evaluate it for it to be accepted as an entry)
• If only 3 judges can run your game (and not all of the judges, if we have more than 3), you will not be punished/penalized (that is, judge scores will be averaged)
• You must develop for one of the following systems (because the judges don't have unlimited hardware/platforms to test on):
• Windows 7
• Windows 8
• iOS
• Android
• IE 9
• IE 10
Judging?
The contest will be judged by the following categories:
• Graphics: 20 pts (that is, does it look good, aesthetically, and do the graphics contribute to or detract from the game experience?) (note: it's not a contest of who has the most realistic graphics; rather, this category is about how well the graphics help the game express itself and create an engaging experience for the user)
• Audio: 20 pts (similar to the above, but with audio)
• Gameplay/fun factor: 20 pts
• First time user experience: 20 pts (that is, is it easy to install and run and start playing?) (clarification: your game doesn't need an installer; in fact, a download-and-play (without an installing process) is probably even preferable; what I meant by "easy to install" is that it includes all the necessary dependencies (or has good instructions for obtaining/installing necessary dependencies) such that a user can easily run you game without having to fight it; you'll also want to include at least some minimal instructions or a tutorial/in-game hints/tips).
• Theme: 20 pts (that is, how well the game incorporates the theme)
The categories will be summed for the total score (100 pts being the max possible score). At least 3 judges will judge each game (hopefully every judge can judge each game, though); the judges' scoring will be averaged together when computing the final score for an entry. Judges will break any ties for the top 5 entries.
There are currently 4 judges, though I wouldn't mind having a 5th:
• Alpha_ProgDes
• Cornstalks
• Gaiiden
There will also be a People's Choice award. This award will be given to one game which the GameDev.net community votes for. The voting will be done in a thread for a period of 1 week (168 hours), and will take place after the competition period has ended. If there is a tie in the People's Choice award, the judges will break the tie.
Prizes?
First place winner:
+X participation rep points (thanks GameDev.net!) (actual value not yet specified)
$75 (USD)*
2 early adopter licenses for Spriter (thanks jbadams!) (valued at $25/each) (if there is a 3rd team member on the winning team, one team member will get an extra $25 while the other two members get the Spriter licenses) (if there is only 1 person on the winning team, only 1 license will be given)
Second place winner:
$40 (USD)*
(if the first place team only has 1 team member, then the second place winning team will get the extra Spriter license; in the event that the second place team has two team members, one member will get the Spriter license and $7.50, and the other member will get $32.50; if there are 3 members, 1 will get the Spriter license, and the two other team members will get $25 each)
People's Choice winner:
$20 (USD)*
Every person who submits a working and valid game:
+50 participation rep points (thanks GameDev.net!)
If you'd like to contribute to the pool, let me know!
Interested People
I have looked through this thread and come up with a list of people who have said they are interested in participating. If you form/join a team (whether it's a 1-man team or more), please let me know so I can remove you from this list and add you to the Teams list. This "pool" might be useful for forming teams. I suggest you form and organize your team (and plan who will do what) early. If you want your name added to this list, let me know!
Teams
Here is a list of teams I have been informed of. Remember, teams are limited to 3 members. If a team has < 3 members, feel free to PM the team and see if you can join them, if you'd like (teams: let me know if you're not interested in extra/new members so I can put a note letting people know not to PM you; also please let me know if your team changes). If you form a team, please let me know so I can remove you from the "interested pool" and add you to the team list.
You may leave your team at any time (though I suggest you don't be a jerk and leave your team stranded). You may join a team at any time before the competition. Once the competition starts, you may still leave your team, but you may not switch to a new/different team. Additionally, once the competition starts, you may join or form your own team at any time, provided that you (and your members) were not previously on a team.
Q & A
Why 7 days?
I wanted something short, but long enough that most people can still finish something (if they don't dream too big). I think 7 days gives you enough time to quickly make a game, even if you're busy with work and/or school, as you have several days you can work, even if it's just a few hours each day.
What if I don't have 7 days to dedicate to a competition?
I don't expect people to dedicate all 168 hours to making a game. I specifically chose 7 days so you can develop a game while still taking care of normal, every day life things that you have to do. (If you don't dream too big, you absolutely can finish a great game in 7 days while still working or going to school).
Where will updates be posted?
This thread. I will modify this post to be up to date at all times. You can check its history if you're curious about past states.
Can I participate?
Of course you can! Unless your country/state/city says you legally can't. I'm no lawyer.
How can I help?
The biggest help would be to get excited and participate! Also, spreading the word and encouraging others to participate would be great. Prize donors would be appreciated (even if it's small; it'll grow the pool).
I want to be a judge; what should I do?
Just post here or PM me letting me know. I'll put together a list of actual judges later. I'm hoping to get ~5ish.
Can judges participate in the competition?
I'm not decided on this. If they do, they will not judge their own game. If we get a lot of participants, probably not. If we don't get a lot of participants, then I'll probably let judges compete. Let me know what you think about this.
Why are you doing this?
I think this will be a fun event that will help to grow/strengthen the GameDev.net community. I like having fun.
Any hints/tips?
I'll try to put together a good list of hints/tips, but I'd recommend picking your weapons of choices now (language, libraries/engines, target platform, etc. and making sure your development environment is all set up). I also recommend you don't have 3 programmers on a team. Get an artist and/or a sound engineer on your team. They'll help make it pretty (which yes, is important).
Do I waive any rights to any of the work I submit? Can I sell my game after the competition?
You get to retain all rights to the work that you submit, so don't fret. The only thing that is required is that the game you submit for the competition must be allowed to be downloaded by the general public (even after the competition ends). If you update or progress your game after the competition, you don't need to make it publicly available. You also get to retain all source; source code is not required for judging.
I have a question you didn't answer!
Please ask it! Here, in this thread. I'll answer it and put it in this Q & A.
*Paid via: Google Wallet (which means you'll be able to send money via Gmail/Google Wallet if you currently haven't received the necessary invite to do so), or Amazon WebPay, or an Amazon gift card for the cash amount, or a Steam gift card for the cash amount (you pick). If there are multiple team members, the amount will be divided evenly among the team members. If there is a remaining penny or two (due to not being able to perfectly divide the $ by the number of team members), the extra penny (or two) will be randomly given to a member (that is, if dividing $10 amongst 3 team members, two will get $3.33 and one will get $3.34).
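The penny-splitting rule in the footnote can be sketched as follows (hypothetical helper names, not from the post — split the prize in cents and hand leftover pennies to randomly chosen members):

```python
# Divide a prize evenly in cents; leftover pennies go to random team members.
import random

def split_prize(amount_cents, members, rng=random):
    base, extra = divmod(amount_cents, members)
    shares = [base] * members
    for i in rng.sample(range(members), extra):  # 'extra' distinct winners
        shares[i] += 1
    return shares

shares = split_prize(1000, 3)  # $10.00 among 3 members
print(sorted(shares))          # two members get 333 cents, one gets 334
```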
: Added "contest" tag for future use.
#21Cornstalks
Posted 10 September 2013 - 09:05 PM
Note: This thread is primarily for the organization/administration of the game dev competition. The thread which will have the posted theme and to which entries should be submitted to is located here. Be sure to check that thread the second the contest starts to get the theme!
Update: The theme has been announced and the competition has officially started! Check the competition thread to see it!
Update #2: The competition is over!
Hey ya'll. We haven't had a competition in awhile. Therefore, I would like to create a competition. Here's what I'm thinking:
The Week of Awesome
What?
The Week of Awesome will be a 7 day game development challenge/competition. People will have 7 days to make a game from start to finish, with prizes for the best! It should be a heck of a lot of fun, and 7 days should be enough time that you actually finish something. Please note that while I think 7 days is plenty of time to make a game, it's not enough time to make a large game (even with a team of 3 people). Dream up something awesome, but also realistic for you to complete. I want people to finish their games!
This is not a competition for programmers only. Teams of up to 3 will be allowed, and I highly encourage a diverse team (i.e. graphics, sound, and programming, or whatever your game might specifically need). Too rarely do graphics/sound people participate in game dev jams, so I'm really hoping they will form/join a team and participate in this competition.
Where?
The Internet. It'll all happen here on GameDev.net.
When?
August 19th to August 26th, 2013. Specifically, it will start Monday, August 19th at 12:00AM (00:00 in military/24-hour time) (that is, Sunday night/Monday morning) and it will end Monday, August 26th at 12:00AM (00:00 in military/24-hour time) (that is, Sunday night/Monday morning) (a full 168 hours from the start time). All times are in Eastern Daylight Time (EDT) (UTC/GMT -4).
Rules?
Rules are slowly evolving, but currently:
• There will be a theme
• The theme will be announced at midnight when the competition begins; games must incorporate the theme in some way
• Games must be made during the 7 day period of the competition (i.e. you can't submit a game you made previously or are making now)
• You may use any library, game engine, tools, art, or audio to develop the game, provided you have the appropriate license to do so (if licensing is required for what you use); note that this means you may use code, art, and audio that you or others may have made in the past (the explicit limitation is that you may not use existing game logic (i.e. you may use libraries and engines, but you still have to actually make the game during the 7 day period))
• You may target any platform (but at least 3 judges must play your game, so if you develop for something the judges don't have, you can't be accepted as an entry; I'll post a list of platforms judges have before the competition begins)
• People may work in teams of up to 3 people (a person may only be on one team, though)
• Games must work on the judges' platforms (that is, if you have a bug in your program and it works fine on your computer but not on a judge's, you must fix it for it to be judged (during the 7 day period; no work/updates/fixes may be done after the 7 days); at least 3 judges must be able to evaluate it for it to be accepted as an entry)
• If only 3 judges can run your game (and not all of the judges, if we have more than 3), you will not be penalized (that is, the scores of the judges who could run it will be averaged)
• You must develop for one of the following systems (because the judges don't have unlimited hardware/platforms to test on):
• Windows 7
• Windows 8
• iOS
• Android
• IE 9
• IE 10
Judging?
The contest will be judged by the following categories:
• Graphics: 20 pts (that is, does it look good, aesthetically, and do the graphics contribute to or detract from the game experience?) (note: it's not a contest of who has the most realistic graphics; rather, this category is about how well the graphics help the game express itself and create an engaging experience for the user)
• Audio: 20 pts (similar to the above, but with audio)
• Gameplay/fun factor: 20 pts
• First time user experience: 20 pts (that is, is it easy to install and run and start playing?) (clarification: your game doesn't need an installer; in fact, a download-and-play (without an installing process) is probably even preferable; what I meant by "easy to install" is that it includes all the necessary dependencies (or has good instructions for obtaining/installing necessary dependencies) such that a user can easily run your game without having to fight it; you'll also want to include at least some minimal instructions or a tutorial/in-game hints/tips).
• Theme: 20 pts (that is, how well the game incorporates the theme)
The categories will be summed for the total score (100 pts being the max possible score). At least 3 judges will judge each game (hopefully every judge can judge each game, though); the judges' scoring will be averaged together when computing the final score for an entry. Judges will break any ties for the top 5 entries.
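The scoring math above (five 20-point categories summed per judge, then averaged across judges) can be sketched as follows. This is just an illustration, not official tooling; the function name and the sample scores are hypothetical:

```python
# Hypothetical sketch of the judging math: each judge scores five
# 20-point categories, category scores are summed per judge, and the
# per-judge totals are averaged to get the entry's final score.

CATEGORIES = ["graphics", "audio", "gameplay", "first_time_ux", "theme"]

def final_score(judge_scores):
    """judge_scores: list of dicts mapping category -> points (0-20)."""
    totals = []
    for scores in judge_scores:
        total = sum(scores[c] for c in CATEGORIES)
        assert 0 <= total <= 100  # max possible score is 100
        totals.append(total)
    return sum(totals) / len(totals)  # average across judges

# Example: three judges evaluate one entry.
entry = [
    {"graphics": 15, "audio": 12, "gameplay": 18, "first_time_ux": 20, "theme": 16},
    {"graphics": 14, "audio": 10, "gameplay": 17, "first_time_ux": 19, "theme": 15},
    {"graphics": 16, "audio": 11, "gameplay": 19, "first_time_ux": 18, "theme": 17},
]
print(final_score(entry))  # (81 + 75 + 81) / 3 = 79.0
```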
There are currently 4 judges, though I wouldn't mind having a 5th:
• Alpha_ProgDes
• Cornstalks
• Gaiiden
There will also be a People's Choice award. This award will be given to one game which the GameDev.net community votes for. The voting will be done in a thread for a period of 1 week (168 hours), and will take place after the competition period has ended. If there is a tie in the People's Choice award, the judges will break the tie.
Prizes?
First place winner:
+X participation rep points (thanks GameDev.net!) (actual value not yet specified)
$75 (USD)*
2 early adopter licenses for Spriter (thanks jbadams!) (valued at $25 each) (if there is a 3rd team member on the winning team, one team member will get an extra $25 while the other two members get the Spriter licenses) (if there is only 1 person on the winning team, only 1 license will be given)
Second place winner:
$40 (USD)*
(if the first place team only has 1 team member, then the second place winning team will get the extra Spriter license; in the event that the second place team has two team members, one member will get the Spriter license and $7.50, and the other member will get $32.50; if there are 3 members, 1 will get the Spriter license, and the two other team members will get $25 each)
People's Choice winner:
$20 (USD)*
Every person who submits a working and valid game:
+50 participation rep points (thanks GameDev.net!)
If you'd like to contribute to the pool, let me know!
Interested People
I have looked through this thread and come up with a list of people who have said they are interested in participating. If you form/join a team (whether it's a 1-man team or more), please let me know so I can remove you from this list and add you to the Teams list. This "pool" might be useful for forming teams. I suggest you form and organize your team (and plan who will do what) early. If you want your name added to this list, let me know!
Teams
Here is a list of teams I have been informed of. Remember, teams are limited to 3 members. If a team has < 3 members, feel free to PM the team and see if you can join them, if you'd like (teams: let me know if you're not interested in extra/new members so I can put a note letting people know not to PM you; also please let me know if your team changes). If you form a team, please let me know so I can remove you from the "interested pool" and add you to the team list.
You may leave your team at any time (though I suggest you don't be a jerk and leave your team stranded). You may join a team at any time before the competition. Once the competition starts, you may still leave your team, but you may not switch to a new/different team. Additionally, once the competition starts, you may join or form your own team at any time, provided that you (and your members) were not previously on a team.
Q & A
Why 7 days?
I wanted something short, but long enough that most people can still finish something (if they don't dream too big). I think 7 days gives you enough time to quickly make a game, even if you're busy with work and/or school, as you have several days you can work, even if it's just a few hours each day.
What if I don't have 7 days to dedicate to a competition?
I don't expect people to dedicate all 168 hours to making a game. I specifically chose 7 days so you can develop a game while still taking care of the normal, everyday life things you have to do. (If you don't dream too big, you absolutely can finish a great game in 7 days while still working or going to school.)
Where will updates be posted?
This thread. I will modify this post to be up to date at all times. You can check its history if you're curious about past states.
Can I participate?
Of course you can! Unless your country/state/city says you legally can't. I'm no lawyer.
How can I help?
The biggest help would be to get excited and participate! Also, spreading the word and encouraging others to participate would be great. Prize donors would be appreciated (even if it's small; it'll grow the pool).
I want to be a judge; what should I do?
Just post here or PM me letting me know. I'll put together a list of actual judges later. I'm hoping to get ~5ish.
Can judges participate in the competition?
I'm undecided on this. If they do, they will not judge their own game. If we get a lot of participants, probably not. If we don't get a lot of participants, then I'll probably let judges compete. Let me know what you think about this.
Why are you doing this?
I think this will be a fun event that will help to grow/strengthen the GameDev.net community. I like having fun.
Any hints/tips?
I'll try to put together a good list of hints/tips, but I'd recommend picking your weapons of choice now (language, libraries/engines, target platform, etc.) and making sure your development environment is all set up. I also recommend you don't have 3 programmers on a team. Get an artist and/or a sound engineer on your team. They'll help make it pretty (which yes, is important).
Do I waive any rights to any of the work I submit? Can I sell my game after the competition?
You get to retain all rights to the work that you submit, so don't fret. The only thing that is required is that the game you submit for the competition must be allowed to be downloaded by the general public (even after the competition ends). If you update or progress your game after the competition, you don't need to make it publicly available. You also get to retain all source; source code is not required for judging.
I have a question you didn't answer!
Please ask it! Here, in this thread. I'll answer it and put it in this Q & A.
*Paid via: Google Wallet (which means you'll be able to send money via Gmail/Google Wallet if you currently haven't received the necessary invite to do so), or Amazon WebPay, or an Amazon gift card for the cash amount, or a Steam gift card for the cash amount (you pick). If there are multiple team members, the amount will be divided evenly among the team members. If there is a remaining penny or two (due to not being able to perfectly divide the $ by the number of team members), the extra penny (or two) will be randomly given to a member (that is, if dividing $10 amongst 3 team members, two will get $3.33 and one will get $3.34).
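The split-with-leftover-pennies rule in the footnote above can be sketched like this. A minimal illustration only (the function name and the seeded RNG are my own choices, not anything official): work in cents, divide evenly, and hand the remainder pennies to randomly chosen members.

```python
import random

def split_prize(amount_dollars, n_members, rng=None):
    """Split a dollar amount evenly among team members; any leftover
    pennies (from imperfect division) go to randomly chosen members."""
    rng = rng or random.Random(0)  # seeded for reproducibility in this sketch
    total_cents = round(amount_dollars * 100)
    base, extra = divmod(total_cents, n_members)
    shares = [base] * n_members
    for i in rng.sample(range(n_members), extra):  # random members get +1 cent
        shares[i] += 1
    return [s / 100 for s in shares]

# The footnote's example: $10 among 3 members -> two get $3.33, one gets $3.34.
print(sorted(split_prize(10, 3)))  # [3.33, 3.33, 3.34]
```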
#20Cornstalks
Posted 25 August 2013 - 11:07 PM
Note: This thread is primarily for the organization/administration of the game dev competition. The thread which will have the posted theme and to which entries should be submitted to is located here. Be sure to check that thread the second the contest starts to get the theme!
Update: The theme has been announced and the competition has officially started! Check the competition thread to see it!
Update #2: The competition is over!
Hey ya'll. We haven't had a competition in awhile. Therefore, I would like to create a competition. Here's what I'm thinking:
The Week of Awesome
What?
The Week of Awesome will be a 7 day game development challenge/competition. People will have 7 days to make a game from start to finish, with prizes for the best! It should be a heck of a lot of fun, and 7 days should be enough time that you actually finish something. Please note that while I think 7 days is plenty of time to make a game, it's not enough time to make a large game (even with a team of 3 people). Dream up something awesome, but also realistic for you to complete. I want people to finish their games!
This is not a competition for programmers only. Teams of up to 3 will be allowed, and I highly encourage a diverse team (i.e. graphics, sound, and programming, or whatever your game might specifically need). Too rarely do graphics/sound people participate in game dev jams, so I'm really hoping they will form/join a team and participate in this competition.
Where?
The Internet. It'll all happen here on GameDev.net.
When?
August 19th to August 26th, 2013. Specifically, it will start Monday, August 19th at 12:00AM (00:00 in military/24-hour time) (that is, Sunday night/Monday morning) and it will end Monday, August 26th at 12:00AM (00:00 in military/24-hour time) (that is, Sunday night/Monday morning) (a full 168 hours from the start time). All times are in Eastern Daylight Time (EDT) (UTC/GMT -4).
Rules?
Rules are slowly evolving, but currently:
• There will be a theme
• The theme will be announced at midnight when the competition begins; games must incorporate the theme in some way
• Games must be made during the 7 day period of the competition (i.e. you can't submit a game you made previously or are making now)
• You may use any library, game engine, tools, art, or audio to develop the game, provided you have the appropriate license to do so (if licensing is required for what you use); note that this means you may use code, art, and audio that you or others may have made in the past (the explicit limitation is that you may not use existing game logic (i.e. you may use libraries and engines, but you still have to actually make the game during the 7 day period))
• You may target any platform (but at least 3 judges must play your game, so if you develop for something the judges don't have, you can't be accepted as an entry; I'll post a list of platforms judges have before the competition begins)
• People may work in teams of up to 3 people (a person may only be on one team, though)
• Games must work on the judges' platforms (that is, if you have a bug in your program and it works fine on your computer but not on a judge's, you must fix it for it to be judged (during the 7 day period; no work/updates/fixes may be done after the 7 days); at least 3 judges must be able to evaluate it for it to be accepted as an entry)
• If only 3 judges can run your game (and not of all the judges, if we have more than 3), you will not be punished/penalized (that is, judge scores will be averaged)
• You must develop for one of the following systems (because the judges don't have unlimited hardware/platforms to test on):
• Windows 7
• Windows 8
• iOS
• Android
• IE 9
• IE 10
Judging?
The contest will be judged by the following categories:
• Graphics: 20 pts (that is, does it look good, aesthetically, and do the graphics contribute to or detract from the game experience?) (note: it's not a contest of who has the most realistic graphics; rather, this category is about how well the graphics help the game express itself and create an engaging experience for the user)
• Audio: 20 pts (similar to the above, but with audio)
• Gameplay/fun factor: 20 pts
• First time user experience: 20 pts (that is, is it easy to install and run and start playing?) (clarification: your game doesn't need an installer; in fact, a download-and-play (without an installing process) is probably even preferable; what I meant by "easy to install" is that it includes all the necessary dependencies (or has good instructions for obtaining/installing necessary dependencies) such that a user can easily run you game without having to fight it; you'll also want to include at least some minimal instructions or a tutorial/in-game hints/tips).
• Theme: 20 pts (that is, how well the game incorporates the theme)
The categories will be summed for the total score (100 pts being the max possible score). At least 3 judges will judge each game (hopefully every judge can judge each game, though); the judges' scoring will be averaged together when computing the final score for an entry. Judges will break any ties for the top 5 entries.
There are currently 4 judges, though I wouldn't mind having a 5th:
• Alpha_ProgDes
• Cornstalks
• Gaiiden
There will also be a People's Choice award. This award will be given to one game which the GameDev.net community votes for. The voting will be done in a thread for a period of 1 week (168 hours), and will take place after the competition period has ended. If there is a tie in the People's Choice award, the judges will break the tie.
Prizes?
First place winner:
+X participation rep points (thanks GameDev.net!) (actual value not yet specified)
$75 (USD)* 2 early adopter licenses for Spriter (thanks jbadams!) (valued at$25/each) (if there is a 3rd team member on the winning team, one team member will get an extra $25 while the other two members get the Spriter licenses) (if there is only 1 person on the winning team, only 1 license will be given) Second place winner:$40 (USD)*
(if the first place team only has 1 team member, then the second place winning team will get the extra Spriter license; in the event that the second place team has two team members, one member will get the Spriter license and $7.50, and the other member will get$32.50; if there are 3 members, 1 will get the Spriter license, and the two other team members will get $25 each) People's Choice winner:$20 (USD)*
Every person who submits a working and valid game:
+50 participation rep points (thanks GameDev.net!)
If you'd like to contribute to the pool, let me know!
Interested People
I have looked through this thread and come up with a list of people who have said they are interested in participating. If you form/join a team (whether it's a 1-man team or more), please let me know so I can remove you from this list and add you to the Teams list. This "pool" might be useful for forming teams. I suggest you form and organize your team (and plan who will do what) early. If you want your name added to this list, let me know!
Teams
Here is a list of teams I have been informed of. Remember, teams are limited to 3 members. If a team has < 3 members, feel free to PM the team and see if you can join them, if you'd like (teams: let me know if you're not interested in extra/new members so I can put a note letting people know not to PM you; also please let me know if your team changes). If you form a team, please let me know so I can remove you from the "interested pool" and add you to the team list.
You may leave your team at any time (though I suggest you don't be a jerk and leave your team stranded). You may join a team at any time before the competition. Once the competition starts, you may still leave your team, but you may not switch to a new/different team. Additionally, once the competition starts, you may join or form your own team at any time, provide that you (and your members) were not previously on a team.
Q & A
Why 7 days?
I wanted something short, but long enough that most people can still finish something (if they don't dream too big). I think 7 days gives you enough time to quickly make a game, even if you're busy with work and/or school, as you have several days you can work, even if it's just a few hours each day.
What if I don't have 7 days to dedicate to a competition?
I don't expect people to dedicate all 168 hours to making a game. I specifically chose 7 days so you can develop a game while still taking care of normal, every day life things that you have to do. (If you don't dream too big, you absolutely can finish a great game in 7 days while still working or going to school).
Where will updates be posted?
This thread. I will modify this post to be up to date at all times. You can check its history if you're curious about past states.
Can I participate?
Of course you can! Unless your country/state/city says you legally can't. I'm no lawyer.
How can I help?
The biggest help would be to get excited and participate! Also, spreading the word and encouraging others to participate would be great. Prize donors would be appreciated (even if it's small; it'll grow the pool).
I want to be a judge; what should I do?
Just post here or PM me letting me know. I'll put together a list of actual judges later. I'm hoping to get ~5ish.
Can judges participate in the competition?
I'm not decided on this. If they do, they will not judge their own game. If we get a lot of participants, probably not. If we don't get a lot of participants, then I'll probably let judges compete. Let me know what you think about this.
Why are you doing this?
I think this will be a fun event that will help to grow/strengthen the GameDev.net community. I like having fun.
Any hints/tips?
I'll try to put together a good list of hints/tips, but I'd recommend picking your weapons of choices now (language, libraries/engines, target platform, etc. and making sure your development environment is all set up). I also recommend you don't have 3 programmers on a team. Get an artist and/or a sound engineer on your team. They'll help make it pretty (which yes, is important).
Do I waive any rights to any of the work I submit? Can I sell my game after the competition?
You get to retain all rights to the work that you submit, so don't fret. The only thing that is required is that the game you submit for the competition must be allowed to be downloaded by the general public (even after the competition ends). If you update or progress your game after the competition, you don't need to make it publicly available. You also get to retain all source; source code is not required for judging.
I have a question you didn't answer!
Please ask it! Here, in this thread. I'll answer it and put it in this Q & A.
*Paid via: Google Wallet (which means you'll be able to send money via Gmail/Google Wallet if you currently haven't received the necessary invite to do so), or Amazon WebPay, or an Amazon gift card for the cash amount, or a Steam gift card for the cash amount (you pick). If there are multiple team members, the amount will be divided evenly among the team members. If there is a remaining penny or two (due to not being able to perfectly divide the $by the number of team members), the extra penny (or two) will be randomly given to a member (that is, if dividing$10 amongst 3 team members, two will get $3.33 and one will get$3.34).
#19Cornstalks
Posted 24 August 2013 - 10:25 PM
Note: This thread is primarily for the organization/administration of the game dev competition. The thread which will have the posted theme and to which entries should be submitted to is located here. Be sure to check that thread the second the contest starts to get the theme!
Update: The theme has been announced and the competition has officially started! Check the competition thread to see it!
Hey ya'll. We haven't had a competition in awhile. Therefore, I would like to create a competition. Here's what I'm thinking:
The Week of Awesome
What?
The Week of Awesome will be a 7 day game development challenge/competition. People will have 7 days to make a game from start to finish, with prizes for the best! It should be a heck of a lot of fun, and 7 days should be enough time that you actually finish something. Please note that while I think 7 days is plenty of time to make a game, it's not enough time to make a large game (even with a team of 3 people). Dream up something awesome, but also realistic for you to complete. I want people to finish their games!
This is not a competition for programmers only. Teams of up to 3 will be allowed, and I highly encourage a diverse team (i.e. graphics, sound, and programming, or whatever your game might specifically need). Too rarely do graphics/sound people participate in game dev jams, so I'm really hoping they will form/join a team and participate in this competition.
Where?
The Internet. It'll all happen here on GameDev.net.
When?
August 19th to August 26th, 2013. Specifically, it will start Monday, August 19th at 12:00AM (00:00 in military/24-hour time) (that is, Sunday night/Monday morning) and it will end Monday, August 26th at 12:00AM (00:00 in military/24-hour time) (that is, Sunday night/Monday morning) (a full 168 hours from the start time). All times are in Eastern Daylight Time (EDT) (UTC/GMT -4).
Rules?
Rules are slowly evolving, but currently:
• There will be a theme
• The theme will be announced at midnight when the competition begins; games must incorporate the theme in some way
• Games must be made during the 7 day period of the competition (i.e. you can't submit a game you made previously or are making now)
• You may use any library, game engine, tools, art, or audio to develop the game, provided you have the appropriate license to do so (if licensing is required for what you use); note that this means you may use code, art, and audio that you or others may have made in the past (the explicit limitation is that you may not use existing game logic (i.e. you may use libraries and engines, but you still have to actually make the game during the 7 day period))
• You may target any platform (but at least 3 judges must play your game, so if you develop for something the judges don't have, you can't be accepted as an entry; I'll post a list of platforms judges have before the competition begins)
• People may work in teams of up to 3 people (a person may only be on one team, though)
• Games must work on the judges' platforms (that is, if you have a bug in your program and it works fine on your computer but not on a judge's, you must fix it for it to be judged (during the 7 day period; no work/updates/fixes may be done after the 7 days); at least 3 judges must be able to evaluate it for it to be accepted as an entry)
• If only 3 judges can run your game (and not of all the judges, if we have more than 3), you will not be punished/penalized (that is, judge scores will be averaged)
• You must develop for one of the following systems (because the judges don't have unlimited hardware/platforms to test on):
• Windows 7
• Windows 8
• iOS
• Android
• IE 9
• IE 10
Judging?
The contest will be judged by the following categories:
• Graphics: 20 pts (that is, does it look good, aesthetically, and do the graphics contribute to or detract from the game experience?) (note: it's not a contest of who has the most realistic graphics; rather, this category is about how well the graphics help the game express itself and create an engaging experience for the user)
• Audio: 20 pts (similar to the above, but with audio)
• Gameplay/fun factor: 20 pts
• First time user experience: 20 pts (that is, is it easy to install and run and start playing?) (clarification: your game doesn't need an installer; in fact, a download-and-play (without an installing process) is probably even preferable; what I meant by "easy to install" is that it includes all the necessary dependencies (or has good instructions for obtaining/installing necessary dependencies) such that a user can easily run you game without having to fight it; you'll also want to include at least some minimal instructions or a tutorial/in-game hints/tips).
• Theme: 20 pts (that is, how well the game incorporates the theme)
The categories will be summed for the total score (100 pts being the max possible score). At least 3 judges will judge each game (hopefully every judge can judge each game, though); the judges' scoring will be averaged together when computing the final score for an entry. Judges will break any ties for the top 5 entries.
There are currently 4 judges, though I wouldn't mind having a 5th:
• Alpha_ProgDes
• Cornstalks
• Gaiiden
There will also be a People's Choice award. This award will be given to one game which the GameDev.net community votes for. The voting will be done in a thread for a period of 1 week (168 hours), and will take place after the competition period has ended. If there is a tie in the People's Choice award, the judges will break the tie.
Prizes?
First place winner:
+X participation rep points (thanks GameDev.net!) (actual value not yet specified)
$75 (USD)* 2 early adopter licenses for Spriter (thanks jbadams!) (valued at$25/each) (if there is a 3rd team member on the winning team, one team member will get an extra $25 while the other two members get the Spriter licenses) (if there is only 1 person on the winning team, only 1 license will be given) Second place winner:$40 (USD)*
(if the first place team only has 1 team member, then the second place winning team will get the extra Spriter license; in the event that the second place team has two team members, one member will get the Spriter license and $7.50, and the other member will get$32.50; if there are 3 members, 1 will get the Spriter license, and the two other team members will get $25 each) People's Choice winner:$20 (USD)*
Every person who submits a working and valid game:
+50 participation rep points (thanks GameDev.net!)
If you'd like to contribute to the pool, let me know!
Interested People
I have looked through this thread and come up with a list of people who have said they are interested in participating. If you form/join a team (whether it's a 1-man team or more), please let me know so I can remove you from this list and add you to the Teams list. This "pool" might be useful for forming teams. I suggest you form and organize your team (and plan who will do what) early. If you want your name added to this list, let me know!
Teams
Here is a list of teams I have been informed of. Remember, teams are limited to 3 members. If a team has < 3 members, feel free to PM the team and see if you can join them, if you'd like (teams: let me know if you're not interested in extra/new members so I can put a note letting people know not to PM you; also please let me know if your team changes). If you form a team, please let me know so I can remove you from the "interested pool" and add you to the team list.
You may leave your team at any time (though I suggest you don't be a jerk and leave your team stranded). You may join a team at any time before the competition. Once the competition starts, you may still leave your team, but you may not switch to a new/different team. Additionally, once the competition starts, you may join or form your own team at any time, provide that you (and your members) were not previously on a team.
Q & A
Why 7 days?
I wanted something short, but long enough that most people can still finish something (if they don't dream too big). I think 7 days gives you enough time to quickly make a game, even if you're busy with work and/or school, as you have several days you can work, even if it's just a few hours each day.
What if I don't have 7 days to dedicate to a competition?
I don't expect people to dedicate all 168 hours to making a game. I specifically chose 7 days so you can develop a game while still taking care of normal, every day life things that you have to do. (If you don't dream too big, you absolutely can finish a great game in 7 days while still working or going to school).
Where will updates be posted?
This thread. I will modify this post to be up to date at all times. You can check its history if you're curious about past states.
Can I participate?
Of course you can! Unless your country/state/city says you legally can't. I'm no lawyer.
How can I help?
The biggest help would be to get excited and participate! Also, spreading the word and encouraging others to participate would be great. Prize donors would be appreciated (even if it's small; it'll grow the pool).
I want to be a judge; what should I do?
Just post here or PM me letting me know. I'll put together a list of actual judges later. I'm hoping to get ~5ish.
Can judges participate in the competition?
I'm not decided on this. If they do, they will not judge their own game. If we get a lot of participants, probably not. If we don't get a lot of participants, then I'll probably let judges compete. Let me know what you think about this.
Why are you doing this?
I think this will be a fun event that will help to grow/strengthen the GameDev.net community. I like having fun.
Any hints/tips?
I'll try to put together a good list of hints/tips, but I'd recommend picking your weapons of choices now (language, libraries/engines, target platform, etc. and making sure your development environment is all set up). I also recommend you don't have 3 programmers on a team. Get an artist and/or a sound engineer on your team. They'll help make it pretty (which yes, is important).
Do I waive any rights to any of the work I submit? Can I sell my game after the competition?
You get to retain all rights to the work that you submit, so don't fret. The only thing that is required is that the game you submit for the competition must be allowed to be downloaded by the general public (even after the competition ends). If you update or progress your game after the competition, you don't need to make it publicly available. You also get to retain all source; source code is not required for judging.
I have a question you didn't answer!
Please ask it! Here, in this thread. I'll answer it and put it in this Q & A.
*Paid via: Google Wallet (which means you'll be able to send money via Gmail/Google Wallet if you currently haven't received the necessary invite to do so), or Amazon WebPay, or an Amazon gift card for the cash amount, or a Steam gift card for the cash amount (you pick). If there are multiple team members, the amount will be divided evenly among the team members. If there is a remaining penny or two (due to not being able to perfectly divide the $by the number of team members), the extra penny (or two) will be randomly given to a member (that is, if dividing$10 amongst 3 team members, two will get $3.33 and one will get$3.34).
#18 Cornstalks
Posted 21 August 2013 - 09:19 PM
Note: This thread is primarily for the organization/administration of the game dev competition. The thread which will have the posted theme and to which entries should be submitted to is located here. Be sure to check that thread the second the contest starts to get the theme!
Update: The theme has been announced and the competition has officially started! Check the competition thread to see it!
Hey y'all. We haven't had a competition in a while. Therefore, I would like to create a competition. Here's what I'm thinking:
The Week of Awesome
What?
The Week of Awesome will be a 7 day game development challenge/competition. People will have 7 days to make a game from start to finish, with prizes for the best! It should be a heck of a lot of fun, and 7 days should be enough time that you actually finish something. Please note that while I think 7 days is plenty of time to make a game, it's not enough time to make a large game (even with a team of 3 people). Dream up something awesome, but also realistic for you to complete. I want people to finish their games!
This is not a competition for programmers only. Teams of up to 3 will be allowed, and I highly encourage a diverse team (i.e. graphics, sound, and programming, or whatever your game might specifically need). Too rarely do graphics/sound people participate in game dev jams, so I'm really hoping they will form/join a team and participate in this competition.
Where?
The Internet. It'll all happen here on GameDev.net.
When?
August 19th to August 26th, 2013. Specifically, it will start Monday, August 19th at 12:00AM (00:00 in military/24-hour time) (that is, Sunday night/Monday morning) and it will end Monday, August 26th at 12:00AM (00:00 in military/24-hour time) (that is, Sunday night/Monday morning) (a full 168 hours from the start time). All times are in Eastern Daylight Time (EDT) (UTC/GMT -4).
Rules?
Rules are slowly evolving, but currently:
• There will be a theme
• The theme will be announced at midnight when the competition begins; games must incorporate the theme in some way
• Games must be made during the 7 day period of the competition (i.e. you can't submit a game you made previously or are making now)
• You may use any library, game engine, tools, art, or audio to develop the game, provided you have the appropriate license to do so (if licensing is required for what you use); note that this means you may use code, art, and audio that you or others may have made in the past (the explicit limitation is that you may not use existing game logic (i.e. you may use libraries and engines, but you still have to actually make the game during the 7 day period))
• You may target any platform (but at least 3 judges must play your game, so if you develop for something the judges don't have, you can't be accepted as an entry; I'll post a list of platforms judges have before the competition begins)
• People may work in teams of up to 3 people (a person may only be on one team, though)
• Games must work on the judges' platforms (that is, if you have a bug in your program and it works fine on your computer but not on a judge's, you must fix it for it to be judged (during the 7 day period; no work/updates/fixes may be done after the 7 days); at least 3 judges must be able to evaluate it for it to be accepted as an entry)
• If only 3 judges can run your game (and not all of the judges, if we have more than 3), you will not be punished/penalized (that is, judge scores will be averaged)
• You must develop for one of the following systems (because the judges don't have unlimited hardware/platforms to test on):
• Windows 7
• Windows 8
• iOS
• Android
• IE 9
• IE 10
Judging?
The contest will be judged by the following categories:
• Graphics: 20 pts (that is, does it look good, aesthetically, and do the graphics contribute to or detract from the game experience?) (note: it's not a contest of who has the most realistic graphics; rather, this category is about how well the graphics help the game express itself and create an engaging experience for the user)
• Audio: 20 pts (similar to the above, but with audio)
• Gameplay/fun factor: 20 pts
• First time user experience: 20 pts (that is, is it easy to install and run and start playing?) (clarification: your game doesn't need an installer; in fact, a download-and-play (without an installing process) is probably even preferable; what I meant by "easy to install" is that it includes all the necessary dependencies (or has good instructions for obtaining/installing necessary dependencies) such that a user can easily run your game without having to fight it; you'll also want to include at least some minimal instructions or a tutorial/in-game hints/tips).
• Theme: 20 pts (that is, how well the game incorporates the theme)
The categories will be summed for the total score (100 pts being the max possible score). At least 3 judges will judge each game (hopefully every judge can judge each game, though); the judges' scoring will be averaged together when computing the final score for an entry. Judges will break any ties for the top 5 entries.
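A quick illustration of how the scoring above combines (the entry and its scores are made up; nothing here is prescribed by the rules beyond sum-then-average):

```python
def final_score(judge_scores):
    """Sum each judge's five 20-pt categories, then average the totals across judges."""
    totals = [sum(categories) for categories in judge_scores]
    return sum(totals) / len(totals)

# hypothetical entry scored by three judges:
# [graphics, audio, gameplay, first-time UX, theme]
entry = [
    [15, 12, 18, 16, 19],
    [14, 10, 17, 18, 20],
    [16, 11, 19, 15, 18],
]
print(final_score(entry))  # mean of the judge totals 80, 79, 79
```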
There are currently 4 judges, though I wouldn't mind having a 5th:
• Alpha_ProgDes
• Cornstalks
• Gaiiden
There will also be a People's Choice award. This award will be given to one game which the GameDev.net community votes for. The voting will be done in a thread for a period of 1 week (168 hours), and will take place after the competition period has ended. If there is a tie in the People's Choice award, the judges will break the tie.
Prizes?
First place winner:
+X participation rep points (thanks GameDev.net!) (actual value not yet specified)
$75 (USD)*
2 early adopter licenses for Spriter (thanks jbadams!) (valued at $25/each) (if there is a 3rd team member on the winning team, one team member will get an extra $25 while the other two members get the Spriter licenses) (if there is only 1 person on the winning team, only 1 license will be given)
Second place winner:
$40 (USD)* (if the first place team only has 1 team member, then the second place winning team will get the extra Spriter license; in the event that the second place team has two team members, one member will get the Spriter license and $7.50, and the other member will get $32.50; if there are 3 members, 1 will get the Spriter license, and the two other team members will get $25 each)
People's Choice winner:
$20 (USD)*
Every person who submits a working and valid game:
+50 participation rep points (thanks GameDev.net!)
If you'd like to contribute to the pool, let me know!
Interested People
I have looked through this thread and come up with a list of people who have said they are interested in participating. If you form/join a team (whether it's a 1-man team or more), please let me know so I can remove you from this list and add you to the Teams list. This "pool" might be useful for forming teams. I suggest you form and organize your team (and plan who will do what) early. If you want your name added to this list, let me know!
Teams
Here is a list of teams I have been informed of. Remember, teams are limited to 3 members. If a team has < 3 members, feel free to PM the team and see if you can join them, if you'd like (teams: let me know if you're not interested in extra/new members so I can put a note letting people know not to PM you; also please let me know if your team changes). If you form a team, please let me know so I can remove you from the "interested pool" and add you to the team list.
You may leave your team at any time (though I suggest you don't be a jerk and leave your team stranded). You may join a team at any time before the competition. Once the competition starts, you may still leave your team, but you may not switch to a new/different team. Additionally, once the competition starts, you may join or form your own team at any time, provided that you (and your members) were not previously on a team.
Q & A
Why 7 days?
I wanted something short, but long enough that most people can still finish something (if they don't dream too big). I think 7 days gives you enough time to quickly make a game, even if you're busy with work and/or school, as you have several days you can work, even if it's just a few hours each day.
What if I don't have 7 days to dedicate to a competition?
I don't expect people to dedicate all 168 hours to making a game. I specifically chose 7 days so you can develop a game while still taking care of the normal, everyday life things that you have to do. (If you don't dream too big, you absolutely can finish a great game in 7 days while still working or going to school.)
Where will updates be posted?
This thread. I will modify this post to be up to date at all times. You can check its history if you're curious about past states.
Can I participate?
Of course you can! Unless your country/state/city says you legally can't. I'm no lawyer.
How can I help?
The biggest help would be to get excited and participate! Also, spreading the word and encouraging others to participate would be great. Prize donors would be appreciated (even if it's small; it'll grow the pool).
I want to be a judge; what should I do?
Just post here or PM me letting me know. I'll put together a list of actual judges later. I'm hoping to get ~5ish.
Can judges participate in the competition?
I'm not decided on this. If they do, they will not judge their own game. If we get a lot of participants, probably not. If we don't get a lot of participants, then I'll probably let judges compete. Let me know what you think about this.
Why are you doing this?
I think this will be a fun event that will help to grow/strengthen the GameDev.net community. I like having fun.
Any hints/tips?
I'll try to put together a good list of hints/tips, but I'd recommend picking your weapons of choice now (language, libraries/engines, target platform, etc.) and making sure your development environment is all set up. I also recommend you don't have 3 programmers on a team. Get an artist and/or a sound engineer on your team. They'll help make it pretty (which yes, is important).
Do I waive any rights to any of the work I submit? Can I sell my game after the competition?
You get to retain all rights to the work that you submit, so don't fret. The only thing that is required is that the game you submit for the competition must be allowed to be downloaded by the general public (even after the competition ends). If you update or progress your game after the competition, you don't need to make it publicly available. You also get to retain all source; source code is not required for judging.
I have a question you didn't answer!
Please ask it! Here, in this thread. I'll answer it and put it in this Q & A.
*Paid via: Google Wallet (which means you'll be able to send money via Gmail/Google Wallet if you currently haven't received the necessary invite to do so), or Amazon WebPay, or an Amazon gift card for the cash amount, or a Steam gift card for the cash amount (you pick). If there are multiple team members, the amount will be divided evenly among the team members. If there is a remaining penny or two (due to not being able to perfectly divide the $ by the number of team members), the extra penny (or two) will be randomly given to a member (that is, if dividing $10 amongst 3 team members, two will get $3.33 and one will get $3.34).
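A sketch of the even-split rule in the footnote, assuming we work in whole cents; the function name and the use of Python are mine, not from the thread:

```python
import random

def split_prize(total_cents, n_members, rng=None):
    """Split a cash prize evenly in whole cents; any leftover
    pennies go to randomly chosen members, per the rule above."""
    rng = rng or random.Random()
    base, leftover = divmod(total_cents, n_members)
    shares = [base] * n_members
    for i in rng.sample(range(n_members), leftover):
        shares[i] += 1
    return shares

# $10.00 among 3 members: two get $3.33, one gets $3.34
print(sorted(split_prize(1000, 3)))  # [333, 333, 334]
```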
https://socratic.org/questions/how-can-you-find-the-taylor-expansion-of-sin-x-about-x-0
# How can you find the Taylor expansion of sin x about x = 0?
Oct 6, 2015
The Taylor series formula is:
$\sum_{n=0}^{N} \frac{f^{(n)}(a)}{n!}(x-a)^n$
The Taylor series around $a = 0$ (not $x = 0$... the question is technically off) is also known as the Maclaurin series. You can write it then as:
$\sum_{n=0}^{N} \frac{f^{(n)}(0)}{n!}x^n$
$= \frac{f(0)}{0!}x^0 + \frac{f'(0)}{1!}x^1 + \frac{f''(0)}{2!}x^2 + \frac{f'''(0)}{3!}x^3 + \frac{f''''(0)}{4!}x^4 + \ldots$
So, you know you have to take some derivatives. $\sin x$ has cyclic derivatives that follow this pattern:
$\sin x = f^{(0)}(x) = f(x)$
$\frac{d}{dx}[\sin x] = \cos x = f'(x)$
$\frac{d}{dx}[\cos x] = -\sin x = f''(x)$
$\frac{d}{dx}[-\sin x] = -\cos x = f'''(x)$
$\frac{d}{dx}[-\cos x] = \sin x = f''''(x)$
Finally you can write the whole thing out, knowing that whenever $\operatorname{trig}(0) = 0$, the whole term disappears. $\sin x$ appears in every even derivative, and since $\sin(0) = 0$, the terms for $f(0)$, $f''(0)$, and every even derivative disappear.
You have only odd terms to worry about, and those are all just $1$ in the numerator and the signs alternate due to the alternating signs in front of $\cos x$.
$\Rightarrow \underbrace{\frac{f(0)}{0!}x^0}_{=\,0} + \frac{f'(0)}{1!}x^1 + \underbrace{\frac{f''(0)}{2!}x^2}_{=\,0} + \frac{f'''(0)}{3!}x^3 + \underbrace{\frac{f''''(0)}{4!}x^4}_{=\,0} + \ldots$
$= \cos(0)\,x + \frac{(-\cos(0))\,x^3}{6} + \frac{\cos(0)\,x^5}{120} + \frac{(-\cos(0))\,x^7}{5040} + \ldots$
$= x - x^3/6 + x^5/120 - x^7/5040 + \ldots$
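As a quick numerical sanity check of the four-term expansion, here is a sketch in Python comparing against `math.sin`:

```python
import math

def sin_taylor(x, terms=4):
    """Maclaurin series of sin x: sum of (-1)^k x^(2k+1) / (2k+1)!"""
    return sum((-1) ** k * x ** (2 * k + 1) / math.factorial(2 * k + 1)
               for k in range(terms))

x = 0.5
print(sin_taylor(x))  # x - x^3/6 + x^5/120 - x^7/5040
print(math.sin(x))    # near 0 the two agree to roughly 8 decimal places
```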
http://math.stackexchange.com/questions/76125/how-does-this-proof-show-h-homomorphism-induced-by-h-colon-x-a-to-y-b
# How does this proof show $h_*$ (homomorphism induced by $h\colon (X,a)\to (Y,b)$) is an isomorphism?
Claim: if $h\colon(X,a)\to(Y,b)$ is a homeomorphism of $X$ with $Y$, then $h_*\colon \pi_1(X,a)\to \pi_1(Y,b)$ is an isomorphism.
where $\pi_1$ refers to the fundamental group and $h_*$ is the induced homomorphism defined by $h_*([f]) = [h(f)]$.
I already know $h_*$ is a homomorphism since $h(f\cdot g)=h(f) \cdot h(g)$. To show $h_*$ is an isomorphism, I thought it sufficed to show it's a bijection...
Munkres' Proof. Let $k: (Y,b)\to(X,a)$ be the inverse of $h$. Then $k_*\circ h_*=(k\circ h)_* = i_*$, where $i$ is the identity map of $(X,a)$. And $h_*\circ k_*= (h\circ k)_*=j_*$, where $j$ is the identity of $(Y,b)$. Since $i_*$ and $j_*$ are the identity homomorphisms of the groups $\pi_1(X,a)$ and $\pi_1(Y,b)$, respectively, $k_*$ is the inverse of $h_*$. $\Box$
How does this show $h_*$ is a bijection? Since $h$ is a homeomorphism, I know $h_*$ is injective.
You just showed that $h_* \circ k_*$ is the identity on $\pi_1(Y,b)$ and $k_* \circ h_*$ is the identity on $\pi_1(X,a)$. This means $h_*$ and $k_*$ are two-sided inverses of each other. A function is bijective if and only if it has a two-sided inverse. – Bill Cook Oct 26 '11 at 18:40
Remember that a morphism is an isomorphism if and only if it has an inverse (that is also a morphism). For groups, a homomorphism is an isomorphism if and only if it has an inverse, if and only if it is bijective on underlying sets (that's why "it suffices" to show $h_*$ is bijective); this is not true for other settings (e.g., for topological spaces, a continuous map needs more than just being bijective in order to be an isomorphism, i.e. a homeomorphism), but for groups it is enough.
So, really, Munkres is showing that $h_*$ is an isomorphism directly, by producing its group-theoretic inverse.
But if you want to argue that $h_*$ is bijective, notice that $h_*$ and $k_*$ are both group homomorphisms, and they are inverses of each other as group homomorphisms. Any group homomorphism that has an inverse has to be bijective. (Because it is a function of the underlying set, and only bijective functions have inverses).
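To make the "two-sided inverse implies bijective" step fully explicit, the argument in each direction is one line:

```latex
% Injectivity, from k_* \circ h_* = \mathrm{id}_{\pi_1(X,a)}:
h_*([f]) = h_*([g]) \implies k_*\bigl(h_*([f])\bigr) = k_*\bigl(h_*([g])\bigr) \implies [f] = [g].
% Surjectivity, from h_* \circ k_* = \mathrm{id}_{\pi_1(Y,b)}:
\text{for any } [g] \in \pi_1(Y,b), \quad h_*\bigl(k_*([g])\bigr) = [g], \text{ so } [g] \in \operatorname{im} h_*.
```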
https://openstax.org/books/introductory-business-statistics/pages/2-2-measures-of-the-location-of-the-data
# 2.2 Measures of the Location of the Data
From *Introductory Business Statistics*
The common measures of location are quartiles and percentiles.
Quartiles are special percentiles. The first quartile, Q1, is the same as the 25th percentile, and the third quartile, Q3, is the same as the 75th percentile. The median, M, is called both the second quartile and the 50th percentile.
To calculate quartiles and percentiles, the data must be ordered from smallest to largest. Quartiles divide ordered data into quarters. Percentiles divide ordered data into hundredths. To score in the 90th percentile of an exam does not mean, necessarily, that you received 90% on a test. It means that 90% of test scores are the same or less than your score and 10% of the test scores are the same or greater than your test score.
Percentiles are useful for comparing values. For this reason, universities and colleges use percentiles extensively. One instance in which colleges and universities use percentiles is when SAT results are used to determine a minimum testing score that will be used as an acceptance factor. For example, suppose Duke accepts SAT scores at or above the 75th percentile. That translates into a score of at least 1220.
Percentiles are mostly used with very large populations. Therefore, if you were to say that 90% of the test scores are less (and not the same or less) than your score, it would be acceptable because removing one particular data value is not significant.
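As a minimal sketch of the "same or less" definition above (the data and the function name are illustrative, not from the text):

```python
def percentile_rank(scores, your_score):
    """Percent of scores that are the same as or less than your_score."""
    at_or_below = sum(1 for s in scores if s <= your_score)
    return 100 * at_or_below / len(scores)

scores = list(range(1, 101))        # 100 test scores: 1, 2, ..., 100
print(percentile_rank(scores, 90))  # 90.0 -> a score of 90 sits at the 90th percentile
```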
The median is a number that measures the "center" of the data. You can think of the median as the "middle value," but it does not actually have to be one of the observed values. It is a number that separates ordered data into halves. Half the values are the same number or smaller than the median, and half the values are the same number or larger. For example, consider the following data.
1; 11.5; 6; 7.2; 4; 8; 9; 10; 6.8; 8.3; 2; 2; 10; 1
Ordered from smallest to largest:
1; 1; 2; 2; 4; 6; 6.8; 7.2; 8; 8.3; 9; 10; 10; 11.5
Since there are 14 observations, the median is between the seventh value, 6.8, and the eighth value, 7.2. To find the median, add the two values together and divide by two.
$\frac{6.8+7.2}{2}=7$
The median is seven. Half of the values are smaller than seven and half of the values are larger than seven.
Quartiles are numbers that separate the data into quarters. Quartiles may or may not be part of the data. To find the quartiles, first find the median or second quartile. The first quartile, Q1, is the middle value of the lower half of the data, and the third quartile, Q3, is the middle value, or median, of the upper half of the data. To get the idea, consider the same data set:
1; 1; 2; 2; 4; 6; 6.8; 7.2; 8; 8.3; 9; 10; 10; 11.5
The median or second quartile is seven. The lower half of the data are 1, 1, 2, 2, 4, 6, 6.8. The middle value of the lower half is two.
1; 1; 2; 2; 4; 6; 6.8
The number two, which is part of the data, is the first quartile. One-fourth of the entire set of values is the same as or less than two, and three-fourths of the values are more than two.
The upper half of the data is 7.2, 8, 8.3, 9, 10, 10, 11.5. The middle value of the upper half is nine.
The third quartile, Q3, is nine. Three-fourths (75%) of the ordered data set are less than nine. One-fourth (25%) of the ordered data set are greater than nine. The third quartile is part of the data set in this example.
The interquartile range is a number that indicates the spread of the middle half or the middle 50% of the data. It is the difference between the third quartile (Q3) and the first quartile (Q1).
IQR = Q3 – Q1
The IQR can help to determine potential outliers. A value is suspected to be a potential outlier if it is more than (1.5)(IQR) below the first quartile or more than (1.5)(IQR) above the third quartile. Potential outliers always require further investigation.
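These definitions can be sketched in a few lines of Python, using the median-of-halves convention described above (rather than an interpolating library routine), with the example data set from this section:

```python
def median(xs):
    # Middle value of the ordered data; with an even count,
    # average the two middle values.
    xs = sorted(xs)
    n, mid = len(xs), len(xs) // 2
    return xs[mid] if n % 2 else (xs[mid - 1] + xs[mid]) / 2

def quartiles(xs):
    # Median-of-halves method: Q1 and Q3 are the medians of the
    # lower and upper halves of the ordered data (the median itself
    # is excluded from both halves when the count is odd).
    xs = sorted(xs)
    n = len(xs)
    return median(xs[:n // 2]), median(xs), median(xs[(n + 1) // 2:])

data = [1, 11.5, 6, 7.2, 4, 8, 9, 10, 6.8, 8.3, 2, 2, 10, 1]
q1, med, q3 = quartiles(data)
iqr = q3 - q1
fences = (q1 - 1.5 * iqr, q3 + 1.5 * iqr)
print(q1, med, q3, iqr)   # 2 7.0 9 7
print(fences)             # (-8.5, 19.5)
```

This reproduces the values worked out above: Q1 = 2, median = 7, Q3 = 9, and no value in this data set falls outside the outlier fences.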
### NOTE
A potential outlier is a data point that is significantly different from the other data points. These special data points may be errors or some kind of abnormality or they may be a key to understanding the data.
### Example 2.14
For the following 13 real estate prices, calculate the IQR and determine if any prices are potential outliers. Prices are in dollars.
389,950; 230,500; 158,000; 479,000; 639,000; 114,950; 5,500,000; 387,000; 659,000; 529,000; 575,000; 488,800; 1,095,000
### Example 2.15
For the two data sets in the test scores example, find the following:
1. The interquartile range. Compare the two interquartile ranges.
2. Any outliers in either set.
### Example 2.16
Fifty statistics students were asked how much sleep they get per school night (rounded to the nearest hour). The results were:
Amount of sleep per school night (hours) Frequency Relative frequency Cumulative relative frequency
4 2 0.04 0.04
5 5 0.10 0.14
6 7 0.14 0.28
7 12 0.24 0.52
8 14 0.28 0.80
9 7 0.14 0.94
10 3 0.06 1.00
Table 2.22
Find the 28th percentile. Notice the 0.28 in the "cumulative relative frequency" column. Twenty-eight percent of 50 data values is 14 values. There are 14 values less than the 28th percentile. They include the two 4s, the five 5s, and the seven 6s. The 28th percentile is between the last six and the first seven. The 28th percentile is 6.5.
Find the median. Look again at the "cumulative relative frequency" column and find 0.52. The median is the 50th percentile or the second quartile. 50% of 50 is 25. There are 25 values less than the median. They include the two 4s, the five 5s, the seven 6s, and eleven of the 7s. The median or 50th percentile is between the 25th, or seven, and 26th, or seven, values. The median is seven.
Find the third quartile. The third quartile is the same as the 75th percentile. You can "eyeball" this answer. If you look at the "cumulative relative frequency" column, you find 0.52 and 0.80. When you have all the fours, fives, sixes and sevens, you have 52% of the data. When you include all the 8s, you have 80% of the data. The 75th percentile, then, must be an eight. Another way to look at the problem is to find 75% of 50, which is 37.5, and round up to 38. The third quartile, Q3, is the 38th value, which is an eight. You can check this answer by counting the values. (There are 37 values below the third quartile and 12 values above.)
Try It 2.16
Forty bus drivers were asked how many hours they spend each day running their routes (rounded to the nearest hour). Find the 65th percentile.
Amount of time spent on route (hours) Frequency Relative frequency Cumulative relative frequency
2 12 0.30 0.30
3 14 0.35 0.65
4 10 0.25 0.90
5 4 0.10 1.00
Table 2.23
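The counting method from Example 2.16 applies to Table 2.23 the same way. As a sketch in Python (expanding the frequency table into an ordered raw-data list is just one convenient way to do the counting):

```python
# Expand Table 2.23 into an ordered data list, then locate the
# 65th percentile by counting, as in Example 2.16.
table = {2: 12, 3: 14, 4: 10, 5: 4}   # hours -> frequency (40 drivers)
data = [h for h, f in table.items() for _ in range(f)]

cutoff = 65 * len(data) // 100        # 65% of 40 = 26 values
# The 65th percentile sits between the 26th and 27th ordered values.
p65 = (data[cutoff - 1] + data[cutoff]) / 2
print(p65)  # (3 + 4) / 2 = 3.5
```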
### Example 2.17
Using Table 2.22:
1. Find the 80th percentile.
2. Find the 90th percentile.
3. Find the first quartile. What is another name for the first quartile?
### A Formula for Finding the kth Percentile
If you were to do a little research, you would find several formulas for calculating the kth percentile. Here is one of them.
k = the kth percentile. It may or may not be part of the data.
i = the index (ranking or position of a data value)
n = the total number of data points, or observations
• Order the data from smallest to largest.
• Calculate $i = \frac{k}{100}(n+1)$
• If i is an integer, then the kth percentile is the data value in the ith position in the ordered set of data.
• If i is not an integer, then round i up and round i down to the nearest integers. Average the two data values in these two positions in the ordered data set. This is easier to understand in an example.
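The steps above translate directly into a short Python sketch (checked here against the ages listed in Example 2.18):

```python
def kth_percentile(xs, k):
    # Position i = (k/100)(n+1) in the ordered data (1-based).
    xs = sorted(xs)
    i = k * (len(xs) + 1) / 100
    if i == int(i):
        return xs[int(i) - 1]
    lo, hi = int(i), int(i) + 1           # round i down and round i up
    return (xs[lo - 1] + xs[hi - 1]) / 2  # average those two data values

ages = [18, 21, 22, 25, 26, 27, 29, 30, 31, 33, 36, 37, 41, 42, 47,
        52, 55, 57, 58, 62, 64, 67, 69, 71, 72, 73, 74, 76, 77]
print(kth_percentile(ages, 70))  # i = (70/100)(30) = 21, so the 21st value: 64
```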
### Example 2.18
Listed are 29 ages for Academy Award winning best actors in order from smallest to largest.
18; 21; 22; 25; 26; 27; 29; 30; 31; 33; 36; 37; 41; 42; 47; 52; 55; 57; 58; 62; 64; 67; 69; 71; 72; 73; 74; 76; 77
1. Find the 70th percentile.
2. Find the 83rd percentile.
Try It 2.18
Listed are 29 ages for Academy Award winning best actors in order from smallest to largest.
18; 21; 22; 25; 26; 27; 29; 30; 31; 33; 36; 37; 41; 42; 47; 52; 55; 57; 58; 62; 64; 67; 69; 71; 72; 73; 74; 76; 77
Calculate the 20th percentile and the 55th percentile.
### A Formula for Finding the Percentile of a Value in a Data Set
• Order the data from smallest to largest.
• x = the number of data values counting from the bottom of the data list up to but not including the data value for which you want to find the percentile.
• y = the number of data values equal to the data value for which you want to find the percentile.
• n = the total number of data.
• Calculate $\frac{x+0.5y}{n}(100)$. Then round to the nearest integer.
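This formula, too, is a few lines of Python (again checked against the ages from Example 2.19):

```python
def percentile_of(xs, value):
    # x = number of data values below `value`; y = number equal to it.
    xs = sorted(xs)
    x = sum(1 for v in xs if v < value)
    y = xs.count(value)
    return round((x + 0.5 * y) / len(xs) * 100)

ages = [18, 21, 22, 25, 26, 27, 29, 30, 31, 33, 36, 37, 41, 42, 47,
        52, 55, 57, 58, 62, 64, 67, 69, 71, 72, 73, 74, 76, 77]
print(percentile_of(ages, 58))  # (18 + 0.5*1)/29 * 100 = 63.8, rounds to 64
```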
### Example 2.19
Listed are 29 ages for Academy Award winning best actors in order from smallest to largest.
18; 21; 22; 25; 26; 27; 29; 30; 31; 33; 36; 37; 41; 42; 47; 52; 55; 57; 58; 62; 64; 67; 69; 71; 72; 73; 74; 76; 77
1. Find the percentile for 58.
2. Find the percentile for 25.
### Interpreting Percentiles, Quartiles, and Median
A percentile indicates the relative standing of a data value when data are sorted into numerical order from smallest to largest. The pth percentile is the value such that p percent of the data values are less than or equal to it. For example, 15% of data values are less than or equal to the 15th percentile.
• Low percentiles always correspond to lower data values.
• High percentiles always correspond to higher data values.
A percentile may or may not correspond to a value judgment about whether it is "good" or "bad." The interpretation of whether a certain percentile is "good" or "bad" depends on the context of the situation to which the data applies. In some situations, a low percentile would be considered "good;" in other contexts a high percentile might be considered "good". In many situations, there is no value judgment that applies.
Understanding how to interpret percentiles properly is important not only when describing data, but also when calculating probabilities in later chapters of this text.
### NOTE
When writing the interpretation of a percentile in the context of the given data, the sentence should contain the following information.
• information about the context of the situation being considered
• the data value (value of the variable) that represents the percentile
• the percent of individuals or items with data values below the percentile
• the percent of individuals or items with data values above the percentile.
### Example 2.20
On a timed math test, the first quartile for time it took to finish the exam was 35 minutes. Interpret the first quartile in the context of this situation.
### Example 2.21
On a 20 question math test, the 70th percentile for number of correct answers was 16. Interpret the 70th percentile in the context of this situation.
Try It 2.21
On a 60 point written assignment, the 80th percentile for the number of points earned was 49. Interpret the 80th percentile in the context of this situation.
### Example 2.22
At a community college, it was found that the 30th percentile of credit units that students are enrolled for is seven units. Interpret the 30th percentile in the context of this situation.
### Example 2.23
Sharpe Middle School is applying for a grant that will be used to add fitness equipment to the gym. The principal surveyed 15 anonymous students to determine how many minutes a day the students spend exercising. The results from the 15 anonymous students are shown.
0 minutes; 40 minutes; 60 minutes; 30 minutes; 60 minutes
10 minutes; 45 minutes; 30 minutes; 300 minutes; 90 minutes;
30 minutes; 120 minutes; 60 minutes; 0 minutes; 20 minutes
Determine the following five values.
• Min = 0
• Q1 = 20
• Med = 40
• Q3 = 60
• Max = 300
If you were the principal, would you be justified in purchasing new fitness equipment? Since 75% of the students exercise for 60 minutes or less daily, and since the IQR is 40 minutes (60 – 20 = 40), we know that half of the students surveyed exercise between 20 minutes and 60 minutes daily. This seems a reasonable amount of time spent exercising, so the principal would be justified in purchasing the new equipment.
However, the principal needs to be careful. The value 300 appears to be a potential outlier.
Q3 + 1.5(IQR) = 60 + (1.5)(40) = 120.
The value 300 is greater than 120 so it is a potential outlier. If we delete it and calculate the five values, we get the following values:
• Min = 0
• Q1 = 20
• Med = 35
• Q3 = 60
• Max = 120
We still have 75% of the students exercising for 60 minutes or less daily and half of the students exercising between 20 and 60 minutes a day. However, 15 students is a small sample and the principal should survey more students to be sure of his survey results.
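The principal's outlier check can be reproduced with the same median-of-halves rule for quartiles (a Python sketch, one of several common quartile conventions):

```python
def med(xs):
    # Median of an ordered list.
    n = len(xs)
    return xs[n // 2] if n % 2 else (xs[n // 2 - 1] + xs[n // 2]) / 2

minutes = sorted([0, 40, 60, 30, 60, 10, 45, 30, 300,
                  90, 30, 120, 60, 0, 20])
n = len(minutes)
q1 = med(minutes[:n // 2])         # lower half -> 20
q3 = med(minutes[(n + 1) // 2:])   # upper half -> 60
fence = q3 + 1.5 * (q3 - q1)       # 60 + 1.5 * 40 = 120
outliers = [m for m in minutes if m > fence]
print(fence, outliers)  # 120.0 [300]
```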
# Why is KE not conserved?
1. Jan 4, 2005
### UrbanXrisis
When two isolated objects collide in an inelastic collision, why is kinetic energy not conserved?
I was doing a problem where it gave me three choices: Total Energy, Linear Momentum, and Kinetic Energy. I picked that all three were conserved. Total energy always must be conserved, and momentum is always conserved. I wasn't sure about KE. Can someone explain to me why that is?
Let's say a 2kg cart moving at 2m/s eastward collides inelastically with a 1kg cart moving at 1m/s westward.
The total KE is 4.5J
$$KE_{total}=.5m_{1}v_{1}^2+.2m_{2}v_{2}^2$$
$$KE_{total}=.5*2kg*(2m/s)^2+.2*1kg*(1kg)^2$$
$$KE_{total}=4+.5=1.5J$$
After the carts collide...
$$v_{final}=(m_{1}v_{1}+m_{2}v_{2})/(m_{1}+m_{2})$$
$$v_{final}=(2kg*2m/s+1kg*1m/s)/(2kg+1kg)$$
$$v_{final}=5/3$$
$$KE_{total}=(1/2)(3kg)(5/3)^2$$
$$KE_{total}=25/6=4.17J$$
What happened to the 0.33 joules?
2. Jan 4, 2005
### chroot
Staff Emeritus
When a collision is not elastic, some changes take place in the bodies that collide, using up some of the initial kinetic energy. Perhaps the bodies deform, or heat up a bit, or make a sound when they hit. The total energy is always conserved, but an inelastic collision converts some of the initial kinetic energy into other forms.
- Warren
3. Jan 4, 2005
### Pandaren
Kinetic energy is not conserved in an inelastic collision because energy is lost due to non-conservative forces, such as friction, or because the objects change shape.
4. Jan 4, 2005
### marlon
generally, some of the energy in these collisions is lost to thermal degrees of freedom, like heating of the two colliding objects or energy dissipation caused by friction between the two colliding surfaces... However, the example that you gave can normally be treated as an elastic collision (to a good approximation), and thus kinetic energy is conserved in this case...
regards
marlon
5. Jan 4, 2005
### UrbanXrisis
Thank you.
Why is KE in an elastic collision conserved? Wouldn't the bodies "deform" or "heat up" and lose thermal energy as well?
6. Jan 4, 2005
### chroot
Staff Emeritus
UrbanXrisis,
In the real macroscopic world, with real tennis balls and locomotives and so on, there are really no such things as purely elastic collisions -- they are an idealization. The very idea of an elastic collision is that the bodies must not deform or otherwise gain thermal energy, and no macroscopic bodies behave this way. You can buy nearly frictionless air-tracks with small carts with well-designed rubber bumpers to approximate purely elastic collisions in a laboratory, but even their collisions are not truly purely elastic.
However, you can bounce subatomic particles off each other, and they do collide elastically. Electrons cannot deform, and they do not have any mechanism by which to store thermal energy.
- Warren
7. Jan 4, 2005
### UrbanXrisis
Then why aren't inelastic collisions idealized as well?
8. Jan 4, 2005
### chroot
Staff Emeritus
They are; the "ideal," or completely inelastic collision is when the two bodies stick together or otherwise merge into one.
- Warren
9. Jan 4, 2005
### UrbanXrisis
and this "merging" deforms the objects? and an ideal deformation would cause the system to create thermal energy and there for lower the KE? Why is this not ture for elastic collisions?
10. Jan 4, 2005
### chroot
Staff Emeritus
It's just a definition. A purely elastic collision is defined as a collision in which all the kinetic energy the system starts with remains in the form of kinetic energy after the collision. A purely inelastic collision is one where all the starting kinetic energy is dissipated in other forms. Most real collisions are somewhere in between.
- Warren
11. Jan 4, 2005
### UrbanXrisis
so ideal inelastic collisions do lose KE while ideal elastic collisions retain their KE?
I feel that all ideal situations should conserve KE.
12. Jan 4, 2005
### Andrew Mason
That is not necessarily true. An inelastic collision of two isolated objects can result in both objects having the same kinetic energy at a point after the collision.
Inelastic refers to the type of 'collision' and does not require that energy be lost to the isolated system. It simply requires that kinetic energy be converted, in the collision, to some other form of energy. In an 'inelastic collision' where the objects collide and stick together and available kinetic energy is converted into potential energy (of a spring, for example), that energy can be recovered and the objects can later regain their original kinetic energy.
AM
13. Jan 5, 2005
### Andrew Mason
Perhaps this may help:
The reason kinetic energy is not conserved in a collision where the two colliding objects stick together is this: in the frame of reference of the centre of mass of the system (which in this case is a frame of reference moving at 5/3 m/s) the two objects approach each other from opposite directions with equal and opposite momenta and stop. There is non-zero KE before the collision and 0 kinetic energy after.
If KE can disappear in one frame but not in another, there would be a fundamental frame-dependent asymmetry to the laws of physics.
There are a couple of mistakes here:
$$KE_{total}=.5m_{1}v_{1}^2+.5m_{2}v_{2}^2$$
$$KE_{total}=.5*2kg*(2m/s)^2+.5*1kg*(1m/s)^2$$
$$KE_{total}=4+.5=4.5J$$
AM
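For reference, the corrected numbers from this thread can be checked with a short script (a sketch that follows the thread's convention of taking both initial speeds as positive):

```python
# Perfectly inelastic collision from the thread.
m1, v1 = 2.0, 2.0   # kg, m/s
m2, v2 = 1.0, 1.0   # kg, m/s

ke_before = 0.5 * m1 * v1**2 + 0.5 * m2 * v2**2   # 4 + 0.5 = 4.5 J
v_final = (m1 * v1 + m2 * v2) / (m1 + m2)          # 5/3 m/s
ke_after = 0.5 * (m1 + m2) * v_final**2            # 25/6 J

print(round(ke_before, 2), round(ke_after, 2), round(ke_before - ke_after, 2))
# 4.5 4.17 0.33
```

The missing 0.33 J is exactly the kinetic energy converted to other forms (deformation, heat, sound) in the collision.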
# Atoms To Mass Calculator
278 atoms of li by Guest14197653 | 10 years, 6 month(s) ago 0 LIKES Like UnLike. Atomic % = 100 * number of atoms of one component / total number of all atoms in sample, = 100 * number of moles of one component / number of moles of all components. 6775mol I x (6. Atomic mass (or older term atomic weight) of element contains 1 mol of atoms Ex: 6. MOLECULAR MASS : A number equal to the sum of the atomic masses of the atoms in a molecule. 5 x 10 25 O atoms. Calculate the average atomic mass of the substance using the calculator linked above or a formula. Lets use water, H2O. Calculate the volume of the aluminum block from the apparent change in the volume of the water in the cylinder. Copper (Cu) is a transition element used in the making of coins. 55 g/cm 3 Color: Silvery Atomic Structure. 1 atomic mass units is equal to 1. 4 g of Al 2 (SO 4 ) 3 above we simply need to first calculate the number of moles as before and then use Avagadro's number to convert the. A comprehensive reaction stoichiometry calculator that can solve problems of all situations. 994*10^-23 grams. For example, the atomic mass of magnesium (24. 2 Calculate the number of moles in 10g of Ca atoms 10g of CaCO3 4g of hydrogen atoms 4g of hydrogen molecules Calculate the mass of 2 mol of CH4 0. Say that we want to calculate the molecular weight of water. 5 x 10 25 O atoms. 1 * 1023 atoms * 196. Density: 9. The atomic number of an element never changes, meaning that the number of protons in the nucleus of every atom in an element is always the same. It automatically balances equations and finds limiting reagents. We havebulk strainkp, sanctioned in that context by pinpointing the narrow range of art and nonart reflections on powerful questions are a sort of uncertainty in this equation. So we now know we need 10. Atomic mass is the sum of the masses of the protons, neutrons, and electrons in an atom, or the average mass, in a group of atoms. 
Calculate the average atomic mass of silver using the following data: Isotope Abundance Mass 107Ag 51. 4 amu of manganese. 022 x 10 23 oxygen atoms = 28 g. To calculate percentage abundance, we must first know the fractional abundance of each isotope. Q: Calculate the change in the enthalpy when 52. 022 10 23 ⋅ Assuming that atomic polonium is a sphere, as shown above, we can calculate its atomic volume. The molar mass of Ne = 20. Following Step 4 results in the final values, in weight percent terms, of 33% Nd, 66% Fe and 1% B. 8051 * 10^+24, otherwise it wont work. Calculate the volume of the aluminum block from the apparent change in the volume of the water in the cylinder. The periodic table of the elements. 21 grams of water, 6. Calculate the molecular mass of caffeine to five significant figures (lines represent bonds between adjacent atoms). 67751mol IConvert moles I to atoms I0. Since both the aluminum block and the aluminum foil are pure elemental aluminum, we would expect the ratio of the mass to the volume to be the same for both. Our conversions provide a quick and easy way to convert between Pressure units. Once molar mass is known, the original weight of the sample is divided by the molar mass then multiplied by Avogadro's number. 0g CO 2 (12. The periodic table is an arrangment of the chemical elements ordered by atomic number so that periodic properties of the elements (chemical periodicity) are made clear. Propose a possible way to calculate the average atomic mass of 100 magnesium atoms. 85 x 10 24 O atoms; How many atoms are in a 3. The equation is fairly simple. 01 + 12 × 1. 670395 or 2. 22 23 45 4. The number of moles of oxygen atoms contained in 0. To calculate percentage abundance, we must first know the fractional abundance of each isotope. The Grams to Atoms Calculator an online tool which shows Grams to Atoms for the given input. 
The formula mass of a substance is defined as the sum of the atomic masses of constituent atoms in an ionic compound. 7 g/cubic cm. The ions produced are accelerated through a magnetic field that separates ions of different masses. 2 Calculate the number of moles in 10g of Ca atoms 10g of CaCO3 4g of hydrogen atoms 4g of hydrogen molecules Calculate the mass of 2 mol of CH4 0. 022 x 1023 atoms IConvert grams I to moles I85. Step 7 Multiply the atoms in the empirical formula by this number. This is #"Mass"# #rarr# #"Moles"#. 022 * 1023 atoms The answer I got was 3. 00) g/mol = 98. 85 atoms /cc. Click here👆to get an answer to your question ️ 1. Calculate the formula weight of sodium chloride, NaCl. Summary: Calculating Molar Mass: The molar mass of a compound is the sum of the molar mass of each element in it chemical formula multiplied by it subscript in the formula. The isotopes are in the ratio of 7 atoms of Li-6 to every 101 atoms Li-7. 09 g/mol Now, we need to take the weight fraction of each element over the total mass (which we just found) and multiply by 100 to get a percentage. Following Step 4 results in the final values, in weight percent terms, of 33% Nd, 66% Fe and 1% B. 43 angstroms; 1 angstrom= ten to the negtaive tenth meters; density of Al=2. Once you have moles, multiply by Avogadro's number to calculate the number of atoms. Q: Calculate the change in the enthalpy when 52. To calculate percentage abundance, we must first know the fractional abundance of each isotope. Isotope distributions can also be calculated using the Isotopes Calculator in the MS Interpreter tool in the NIST Mass Spectral Database. Assume that the average mass of an atom in the bacterium is ten times the mass of a hydrogen atom. 022 1024 atoms of tantalum. This chemistry video tutorial explains the conversion process of atoms to grams which is a typical step in common dimensional analysis stoichiometry problems. 023 * 1023) atoms …(3) Hence, from (2) and (3), 6. 
(1 u is equal to 1/12 the mass of one atom of carbon-12) Molar mass (molar weight) is the mass of one mole of a substance and is expressed in g/mol. 479g (or 1809 g properly rounded). According to stoichiometric. (a) Calculate the percent composition by mass of C, H, and O in cinnamic alcohol Molecular mass of cinnamic alcohol = 9*12+10*1+16*1 = 134. 022 × 10 23 atoms per. It is often shortened to RMM. Tools: Formula Weight Calculator Putting in a molecular formula of any type such as K2Cr2O7, CH3CH2COOH, KFe[Fe(CN)6]3, or Na2B4O7. Amount of O = 6*0. 02 x 10 23 at = 1. Multiply the total atoms by the average atomic mass per atom to calculate. Thus, one mole of K2HPO4 will weigh: 2 K 2molK 39. Example: Calculate the molar mass of CaCl 2 to the tenths decimal place. 55 g sample of copper? The number of atoms in each sample would be the same. Solution: This is just the ratio of the molar mass of CH 4 (16 g) to that of two moles of dioxygen (2 x 32 g) Thus (64 g) / (16 g) = 4/1 = 4. 56 moles of aspartame? d) How many molecules are in 5. 92 x 10 25 molecules of sulfur trioxide. 85 x 10 24 O atoms; How many atoms are in a 3. You can determine the composition of an atom of any element from its atomic number and its mass number. HÆ 2 atoms x 1 amu = 2 amu or 2 grams/mole + O Æ 1 atom x 16 amu = 16 amu or 16 grams/mole H 2O has a mass of 18 grams/mole 2 moles of H 2O x 18 grams of H 2O. 95 x 1024 atoms D. 008 g, 1 mole of phosphorus atoms weighs 30. Average atomic mass = Σ (mass of isotope × relative abundance). c What is the empirical formula fo is com ound? o. Yikes! Dram (dr, dr avdp) Fairly uncommon. In this section we will discuss stoichiometry (the "measurement of. 022 x 10\" atoms of copper? 4. Say that we want to calculate the molecular weight of water. First of all, you have to know the atomic composition of whatever you are measuring. To perform a stoichiometric calculation, enter an equation of a chemical reaction and press the Start button. 
a Calculate the molar mass compound. For atoms or molecules of a well-defined molar mass M (in kg/mol), the number density can sometimes be expressed in terms of their mass density ρ m (in kg/m 3) as =. (b) Calculate the percent ab undance of each of the two isotopes. 2x10^23 gold atoms calculate into mass, in grams. 02x1023) to help. 0 g of solid chrominium at 25*C and 1 atm pressure is o A: Molar mass of chromium is 52g/mol, which means 1mol of Cr = 52g of Cr. It has a molar mass of 194. Given: Number of gold atoms = 1. A Dalton is a unit of atomic mass (not weight). How many carbon atoms are in 89. 02x1023 atoms. 28 gram sample of copper is heated with sulfur to produce 1. (Hint: The mass of a hydrogen atom is on the order of $10\times 10^{-27}\textrm{ kg}$ and the mass of a bacterium is on the order of $10\times 10^{-15}\textrm{ kg}$). Set up a table with 5 columns, placing the element name in the first column. This gives you the two numbers you need to use, they're circled in red, the mass of nitrogen and the mass of ammonia. Use your Periodic Table, calculator, and Avogadro’s number (6. Read our article on how to calculate molar. 95 x 10–23 atoms B. 22 23 45 4. Answer (1 of 4): First you need to know the molecular formula for the molecule. Calculating Molecular Weight Movie Text We can calculate the molecular weight of a substance using its chemical formula and the periodic table. moles = mass molar mass Q. 0 g of ethanol (CH 3 CH 2 OH)? Again, we used the same gram to mole conversion factor, the molecular mass, and also Avogadro's number to go from moles to atoms. 28 6 23 50. The example will use both the bridge concept and conversion grids. Choices and results: GraphPad Prism. Also, explore tools to convert Atomic mass unit or gram to other weight and mass units or learn more about weight and mass conversions. 2 Calculate the number of moles in 10g of Ca atoms 10g of CaCO3 4g of hydrogen atoms 4g of hydrogen molecules Calculate the mass of 2 mol of CH4 0. 
This number, 12. c What is the empirical formula fo is com ound? o. Propose a possible way to calculate the average atomic mass of 100 magnesium atoms. 02E+23 atoms » Atoms Conversions: atoms↔mol 1 mol = 6. The older you get, the higher your BMI is allowed to be. 022 x 10 23 atoms of carbon (Avogadro's number). We can find the number of moles of water using the equation: Number of moles = mass. 0000175, thus H + concentration of 1 M acetic acid is: 1 * 0. In each molecule of ethanol there are two carbon atoms. 103 grams and 12 ounces = 1 pound! That means one avoirdupois ounce is equal to 0. One mole of calcium - 6 x 10 23 atoms - is contained in the molar mass of calcium (40 g). To find the molar mass of a compound, you have to write the chemical formula, list the number of atoms of each element, and multiply this number by the molar mass of the element. We just need to turn this fraction into a percentage. (iii) Number of molecules of ethane. Multiply the total atoms by the average atomic mass per atom to calculate. Molar mass calculations are explained and there is a JavaScript calculator to aid calculations. Or if she needs a certain numbers of moles, she can calculate how many grams that would be. This is a very primitive exact mass calculator. 01 grams / 6. 00) grams/mol = 98. 90 mol in Al 2 O 3 8. The number of moles in the sample will be calculated. divide by Avogadro's number 6. In other words, if you were to take all of the oxygen atoms in the whole universe and average their weights, the average would be 16. See full list on translatorscafe. 1 * 1023 gold atoms Gold has a mass of 196. Step 3: Finally, the conversion from grams to atoms will be displayed in the output field. 02E+14 atoms atoms↔pmol 1 pmol = 602000000000 atoms atoms↔fmol 1 fmol = 602000000 atoms ». Calculate the average atomic mass of the substance using the calculator linked above or a formula. The equation is fairly simple. Choices and results: GraphPad Prism. 
So 20 x 10^9 atoms/cm² x 1 mol / (6.022 x 10^23 atoms) = 33 fmol/cm² (femto is 10^-15).

Average atomic mass = Σ (mass of isotope × relative abundance). Atoms of different elements vary in mass, and most elements occur as a mixture of isotopes. In a mass spectrometer, a substance is first heated in a vacuum and then ionized; the charged atoms pass through a strong magnetic field and their paths bend, with the lightest atoms bending the most.

One mole of carbon-12 contains 6.022 x 10^23 atoms (Avogadro's number) and has a mass of exactly 12 g. One atomic mass unit (abbreviated amu in older works; now the unified atomic mass unit, u) is defined as 1/12 of the mass of one atom of carbon-12, about 1.66 x 10^-24 g. In computing atomic mass we disregard the electrons, because for all intents and purposes their mass (9.109 x 10^-28 g) is negligible next to that of a proton (1.672 x 10^-24 g) or a neutron (1.674 x 10^-24 g); we add up the number of nucleons and express that number in daltons. For monatomic elements, the molar mass is the same as the atomic mass in grams per mole: for example, 6.022 x 10^23 Cu atoms have a mass of 63.55 g.

Example: since the mass of carbon-12 is assigned the value 12, and the mass ratio 28Si/12C is 2.33, the mass of 28Si = 2.33 x 12 = 27.96 amu.

Example: the molecular mass of cinnamic alcohol, C9H10O, is 9*12 + 10*1 + 16*1 = 134; the percent composition by mass of C, H and O follows by turning each element's mass fraction into a percentage.

Example: to calculate the number of atoms in one-millionth of a gram of magnesium, Mg, divide the mass by the molar mass (24.305 g/mol) and multiply by Avogadro's number. The molar mass of magnesium (24.3050) also shows that the average mass of magnesium atoms is about twice the average mass of carbon atoms (12.011).

In electrolysis, the mass of the deposited copper is directly proportional to the quantity of electrical charge that passes through the electrolyte.

The relative molecular mass of a molecule (often shortened to RMM) is the sum of the relative atomic masses of the atoms in its chemical formula; the RMM is used in many sorts of calculations in chemistry.
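The weighted-average formula above is easy to check numerically. A minimal Python sketch (the `average_atomic_mass` helper is my own; the chlorine isotope values are standard textbook figures, not taken from this text):

```python
def average_atomic_mass(isotopes):
    """Average atomic mass = sum(isotope mass * fractional abundance)."""
    return sum(mass * abundance for mass, abundance in isotopes)

# Chlorine's two stable isotopes: (mass in amu, fractional abundance)
chlorine = [(34.96885, 0.7577), (36.96590, 0.2423)]
print(round(average_atomic_mass(chlorine), 2))  # -> 35.45, matching the periodic table
```

The same function solves the reverse problem too: fix the average and solve for an unknown abundance.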
In order to calculate the mass from a given number of atoms, the grams-to-atoms steps are reversed: divide the number of atoms by Avogadro's number (6.022 x 10^23 per mole) to get moles, then multiply by the molar mass to get grams. Conversely, to convert a mass of an element to a number of atoms, use the periodic table to find its molar mass, divide the given mass by the molar mass, and multiply by Avogadro's number.

Example: there are 6.023 x 10^23 atoms of sodium in 23 g of sodium, so the mass of one sodium atom = 23 / (6.023 x 10^23) g ≈ 3.818 x 10^-23 g.

The molar mass of a compound (sometimes referred to as molecular weight) is the cumulative atomic mass of all the atoms in the compound. Examples: the molar mass of NaCl is 58.44 g/mol, the sum of the atomic mass of sodium (22.99 g/mol) and of chlorine (35.45 g/mol); 1 mole of water (H2O) has a molar mass of roughly 18.02 g/mol; the mass of 1 mole of glucose, C6H12O6 = (6 × 12.01 + 12 × 1.008 + 6 × 16.00) g/mol = 180.16 g/mol, since the molecular formula shows 6 carbon atoms, 12 hydrogen atoms and 6 oxygen atoms in one molecule of glucose.

MOLECULAR MASS: a number equal to the sum of the atomic masses of the atoms in a molecule. For ionic compounds, which do not contain discrete molecules but ions as their constituent units, the corresponding quantity is the formula mass.

To perform a stoichiometric calculation, enter an equation of a chemical reaction and press the Start button; the calculator automatically balances equations and finds limiting reagents. Use uppercase for the first character in an element symbol and lowercase for the second character (examples: Fe, Au, Co, Br, C, O, N, F), and use parentheses or brackets for groups.
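The grams-to-atoms and atoms-to-grams steps above can be sketched as two one-line conversions (the function names are mine; the molar masses are standard values):

```python
AVOGADRO = 6.022e23  # atoms (or molecules) per mole

def grams_to_atoms(grams, molar_mass):
    # grams -> moles -> atoms
    return grams / molar_mass * AVOGADRO

def atoms_to_grams(atoms, molar_mass):
    # atoms -> moles -> grams
    return atoms / AVOGADRO * molar_mass

# One-millionth of a gram of magnesium (molar mass 24.305 g/mol):
print(f"{grams_to_atoms(1e-6, 24.305):.2e} atoms")   # ~2.48e+16 atoms
# Mass of one sodium atom (molar mass ~23 g/mol):
print(f"{atoms_to_grams(1, 23.0):.3e} g")            # ~3.819e-23 g
```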
Since the atomic weight of oxygen is 16, the "gram atomic weight" of oxygen is 16 grams: the mass of a mole of oxygen atoms (unsurprisingly, the "molar mass") is 16.00 g. In other words, if you were to take all of the oxygen atoms in the universe and average their weights, the average would be about 16 amu.

Example: converting moles of iodine to atoms. 1 mole of I atoms = 126.90447 g I, so a sample of 0.67751 mol I (about 85.98 g) contains 0.67751 mol × (6.022 x 10^23 atoms/mol) ≈ 4.08 x 10^23 atoms of I.

Average atomic mass is not a direct measurement of a single atom; it is the average mass per atom for a typical sample of a given element, and it takes into account the percentage abundance of all the isotopes of the element which exist. Example: europium has two stable isotopes; the heavier isotope is 153Eu, with a mass of 152.921 amu. Silver likewise has two isotopes, 107Ag (106.9051 amu) and 109Ag (about 48.2% abundant); weighting each isotopic mass by its fractional abundance gives the tabulated atomic mass. To calculate a percentage abundance from a known average atomic mass, set up the weighted-average equation and solve for the unknown fraction.

pH formula: pH = -log[H+], where [H+] is the hydrogen ion concentration in the solution. For a strong acid like HCl, which dissociates completely, the H+ concentration of 1 M HCl is also 1 M; for a weak acid such as acetic acid, [H+] depends on its Ka.

Element data for calcium: atomic mass 40.078 amu; melting point 839 °C (1542.2 °F); boiling point 1484 °C.

Law of conservation of mass (Antoine Lavoisier, late 1700s): mass cannot be created or destroyed, merely converted from one form into another. Thus the same collection of atoms is present after a reaction as before the reaction.
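Percent composition by mass, as in the cinnamic alcohol example, is just each element's mass contribution divided by the molecular mass. A short sketch (helper and names are my own, using the same integer atomic masses as the text):

```python
def percent_composition(contributions):
    """contributions: {element: count * atomic_mass}; returns mass percents."""
    total = sum(contributions.values())
    return {el: 100.0 * m / total for el, m in contributions.items()}

# Cinnamic alcohol, C9H10O: molecular mass = 9*12 + 10*1 + 1*16 = 134
pc = percent_composition({"C": 9 * 12, "H": 10 * 1, "O": 1 * 16})
for el, p in pc.items():
    print(f"{el}: {p:.1f}%")  # C: 80.6%, H: 7.5%, O: 11.9%
```

The percentages always sum to 100, which is a quick sanity check on the arithmetic.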
|
2020-09-19 22:57:14
|
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.6104427576065063, "perplexity": 1065.2475381525744}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2020-40/segments/1600400192887.19/warc/CC-MAIN-20200919204805-20200919234805-00306.warc.gz"}
|
https://proofwiki.org/wiki/Absolute_Value_Function_is_Convex/Proof_1
|
# Absolute Value Function is Convex/Proof 1
## Theorem
Let $f: \R \to \R$ be the absolute value function on the real numbers.
Then $f$ is convex.
## Proof
Let $x_1, x_2, x_3 \in \R$ such that $x_1 < x_2 < x_3$.
Consider the expressions:
$\dfrac {\map f {x_2} - \map f {x_1} } {x_2 - x_1}$
$\dfrac {\map f {x_3} - \map f {x_2} } {x_3 - x_2}$
The following cases are investigated:
$(1): \quad x_1, x_2, x_3 < 0$:
Then:
$\ds \frac {\map f {x_2} - \map f {x_1} } {x_2 - x_1} = \frac {-\paren {x_2 - x_1} } {x_2 - x_1} = -1$ by Definition of Absolute Value

$\ds \frac {\map f {x_3} - \map f {x_2} } {x_3 - x_2} = \frac {-\paren {x_3 - x_2} } {x_3 - x_2} = -1$ by Definition of Absolute Value

$\ds \leadsto \ \ \frac {\map f {x_2} - \map f {x_1} } {x_2 - x_1} \le \frac {\map f {x_3} - \map f {x_2} } {x_3 - x_2}$ as required by the Definition of Convex Real Function
$(2): \quad x_1, x_2, x_3 > 0$:
Then:
$\ds \frac {\map f {x_2} - \map f {x_1} } {x_2 - x_1} = \frac {x_2 - x_1} {x_2 - x_1} = 1$ by Definition of Absolute Value

$\ds \frac {\map f {x_3} - \map f {x_2} } {x_3 - x_2} = \frac {x_3 - x_2} {x_3 - x_2} = 1$ by Definition of Absolute Value

$\ds \leadsto \ \ \frac {\map f {x_2} - \map f {x_1} } {x_2 - x_1} \le \frac {\map f {x_3} - \map f {x_2} } {x_3 - x_2}$ as required by the Definition of Convex Real Function
$(3): \quad x_1 < 0, x_2, x_3 > 0$:
$\ds \frac {\map f {x_2} - \map f {x_1} } {x_2 - x_1} = \frac {x_2 - \paren {-x_1} } {x_2 - x_1}$ by Definition of Absolute Value

$\ds = \frac {\paren {x_2 - x_1} + 2 x_1} {x_2 - x_1} = 1 + \frac {2 x_1} {x_2 - x_1} < 1$ as $2 x_1 < 0$

$\ds \frac {\map f {x_3} - \map f {x_2} } {x_3 - x_2} = \frac {x_3 - x_2} {x_3 - x_2} = 1$ by Definition of Absolute Value

$\ds \leadsto \ \ \frac {\map f {x_2} - \map f {x_1} } {x_2 - x_1} \le \frac {\map f {x_3} - \map f {x_2} } {x_3 - x_2}$ as required by the Definition of Convex Real Function
$(4): \quad x_1, x_2 < 0, x_3 > 0$:
$\ds \frac {\map f {x_2} - \map f {x_1} } {x_2 - x_1} = \frac {-\paren {x_2 - x_1} } {x_2 - x_1} = -1$ by Definition of Absolute Value

$\ds \frac {\map f {x_3} - \map f {x_2} } {x_3 - x_2} = \frac {x_3 - \paren {-x_2} } {x_3 - x_2}$ by Definition of Absolute Value

$\ds = \frac {-\paren {x_3 - x_2} + 2 x_3} {x_3 - x_2} = -1 + \frac {2 x_3} {x_3 - x_2} > -1$ as $2 x_3 > 0$

$\ds \leadsto \ \ \frac {\map f {x_2} - \map f {x_1} } {x_2 - x_1} \le \frac {\map f {x_3} - \map f {x_2} } {x_3 - x_2}$ as required by the Definition of Convex Real Function
Thus for all cases, the condition for $f$ to be convex is fulfilled.
Hence the result.
$\blacksquare$
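For comparison, the defining inequality of a convex real function can also be verified in one line from the triangle inequality: for $0 \le t \le 1$:

$\ds \map f {t x + \paren {1 - t} y} = \size {t x + \paren {1 - t} y} \le t \size x + \paren {1 - t} \size y = t \map f x + \paren {1 - t} \map f y$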
|
2021-07-25 07:24:44
|
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 2, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.9910843968391418, "perplexity": 91.23116631409965}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 20, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2021-31/segments/1627046151638.93/warc/CC-MAIN-20210725045638-20210725075638-00424.warc.gz"}
|
https://math.tecnico.ulisboa.pt/seminars/tqft/?action=show&id=3389
|
# Topological Quantum Field Theory Seminar
### Anomalies III
We continue examining Gawedzki and Reis's paper:
WZW branes and gerbes, http://arxiv.org/abs/hep-th/0205233
We define a gerbe, and show gerbes can be "transgressed" to give line bundles over loop space. Trivial gerbes give trivial bundles on loop space, whose sections are thus mere functions. Any compact, simply connected Lie group comes with a god-given gerbe whose curvature is the canonical invariant 3-form. Restricting this gerbe to certain submanifolds, we get trivial gerbes which thus transgress to trivial line bundles, "cancelling" the anomaly of a nontrivial line bundle.
Room 3.10 is now confirmed
https://math.stackexchange.com/questions/1399406/what-is-the-number-of-invertible-n-times-n-matrices-in-operatornamegl-nf
# What is the number of invertible $n\times n$ matrices in $\operatorname{GL}_n(F)$?
$F$ is a finite field of order $q$. What is the size of $\operatorname{GL}_n(F)$?
I am reading Dummit and Foote "Abstract Algebra". The following formula is given: $(q^n - 1)(q^n - q)\cdots(q^n - q^{n-1})$. The case for $n = 1$ is trivial. I understand that for $n = 2$ the first row of the matrix can be any ordered pair of field elements except for $0,0$. and the second row can be any ordered pair of field elements that is not a multiple of the first row. So for $n = 2$ there are $(q^n - 1)(q^n - q)$ invertible matrices. For $n\geq 3$, I cannot seem to understand why the formula works. I have looked at Sloane's OEIS A002884. I have also constructed and stared at a list of all $168$ $3\times 3$ invertible matrices over $GF(2)$. I would most appreciate a concrete and detailed explanation of how say $(2^3 - 1)(2^3 - 2)(2^3 - 2^2)$ counts these $168$ matrices.
• An invertible matrix must map a basis to a basis. The number of bases of $\mathbb{F}_q^n$ is the formula you gave above. See Example 1 on p. 412 of Dummit and Foote for a derivation of this formula. – André 3000 Aug 16 '15 at 16:13
• @SpamIAm, I don't think that's actually correct. This formula counts the number of ordered bases. To make them unordered, you need to divide by $n!$ – Marcus M Aug 16 '15 at 16:18
• @MarcusM I certainly want to consider my bases as ordered, so I shouldn't divide out. I should have specified that in my comment, though. – André 3000 Aug 16 '15 at 16:20
In order for an $n \times n$ matrix to be invertible, we need the rows to be linearly independent. As you note, we have $q^n - 1$ choices for the first row; now, there are $q$ vectors in the span of the first row, so we have $q^n - q$ choices for the second row. Now, let $v_1, v_2$ be the first two rows. Then the set of vectors in the span of $v_1, v_2$ is of the form $\{c_1 v_1 + c_2 v_2 | c_1,c_2 \in F\}$. This set is of size $q^2$, as we have $q$ choices for $c_1$ and $q$ choices for $c_2$. Thus, we have $q^n - q^2$ choices for the third row. Continuing this gives the desired formula.
For $n=3$, the third row must not be in the subspace generated by the first two rows. A vector in this subspace requires $2$ coefficients ($q^2$ possibilities), so you must subtract $q^2$ vectors, whence a third factor $q^3-q^2$. And so on.
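The formula and the brute-force count of the $168$ matrices over $GF(2)$ mentioned in the question can be cross-checked with a short script (a sketch of mine, not from the thread; a determinant test mod 2 stands in for explicit linear-independence checking):

```python
from itertools import product

def gl_order(n, q):
    """(q^n - 1)(q^n - q)...(q^n - q^(n-1)), the order of GL_n(F_q)."""
    count = 1
    for k in range(n):
        count *= q**n - q**k
    return count

def invertible_3x3_gf2(m):
    """3x3 determinant by cofactor expansion, reduced mod 2."""
    (a, b, c), (d, e, f), (g, h, i) = m
    det = a * (e * i - f * h) - b * (d * i - f * g) + c * (d * h - e * g)
    return det % 2 == 1

rows = list(product((0, 1), repeat=3))   # all 8 possible rows over GF(2)
brute = sum(1 for m in product(rows, repeat=3) if invertible_3x3_gf2(m))
print(brute, gl_order(3, 2))  # both give 168
```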
http://th.nao.ac.jp/MEMBER/tomisaka/Lecture_Notes/StarFormation/6/node42.html
## Steady State Flow under an Influence of External Fields
Consider a flow under a force exerted on the gas whose strength varies spatially. Let $g = -d\Phi/dx$ represent the force working per unit mass, where $\Phi$ is the potential of the external field. Assuming the cross-section is constant,

$$\rho v = \text{const}, \tag{2.90}$$

immediately we have

$$\frac{1}{\rho}\frac{d\rho}{dx} + \frac{1}{v}\frac{dv}{dx} = 0. \tag{2.91}$$

On the other hand, the equation of motion for an isothermal gas with sound speed $c_s$ is

$$v\frac{dv}{dx} = -\frac{c_s^2}{\rho}\frac{d\rho}{dx} - \frac{d\Phi}{dx}. \tag{2.92}$$

From equations (2.91) and (2.92), we obtain

$$\left(1 - \frac{c_s^2}{v^2}\right) v \frac{dv}{dx} = -\frac{d\Phi}{dx}. \tag{2.93}$$
Consider an external field whose potential is shown in Figure 2.6 (left). (1) For subsonic flow, the factor in the parenthesis of equation (2.93) is negative. Before the potential minimum, since $d\Phi/dx < 0$, $v$ is decelerated. On the other hand, after the potential minimum, $v$ is accelerated owing to $d\Phi/dx > 0$. Using equation (2.90), this leads to a density distribution in which the density peaks near the potential minimum. (2) For supersonic flow, the factor is positive. In the region where $d\Phi/dx < 0$, $v$ is accelerated. After passing the potential minimum, $v$ is decelerated. The velocity and density distributions are shown in Figure 2.6 (right-lower panel).
The density distribution of the subsonic flow in an external potential is similar to the hydrostatic one: considering the hydrostatic state in an external potential, the gas density peaks at the potential minimum. On the other hand, the density distribution of the supersonic flow looks like that made by ballistic particles moving freely in the potential. Owing to the conservation of total energy (kinetic plus potential), the velocity peaks at the potential minimum, and mass conservation then leads to a distribution in which the density decreases near the potential minimum.
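The qualitative behaviour described above can be reproduced by integrating equation (2.93) directly. The sketch below is my own illustration, assuming a Gaussian potential well $\Phi(x) = -e^{-x^2}$ and isothermal sound speed $c_s = 1$ (neither is from the notes); it shows the subsonic case, where the velocity dips, and hence the density $\rho \propto 1/v$ peaks, near the potential minimum:

```python
import math

def dphi_dx(x):
    # gradient of the assumed potential well Phi(x) = -exp(-x^2)
    return 2.0 * x * math.exp(-x * x)

def integrate(v0, cs=1.0, x0=-5.0, x1=5.0, n=20000):
    """Forward-Euler integration of (1 - cs^2/v^2) v dv/dx = -dPhi/dx."""
    dx = (x1 - x0) / n
    v, out = v0, []
    for i in range(n):
        x = x0 + i * dx
        v += -dphi_dx(x) / ((1.0 - (cs / v) ** 2) * v) * dx
        out.append(v)
    return out

vs = integrate(0.5)   # subsonic inflow: v0 < cs
v_min = min(vs)       # reached near the potential minimum x = 0
# density rho = const / v, so rho peaks where v is smallest
```

Repeating the integration with a supersonic inflow ($v_0 > c_s$) reverses the behaviour, as described above: the velocity peaks and the density dips at the potential minimum.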
Kohji Tomisaka 2012-10-03
http://www.physicsforums.com/showthread.php?p=3798333
# Cauchy sequence proof.
by cragar
Tags: cauchy, proof, sequence
P: 2,444
1. The problem statement, all variables and given/known data
Assume $x_n$ and $y_n$ are Cauchy sequences. Give a direct argument that $x_n+y_n$ is Cauchy. That does not use the Cauchy criterion or the algebraic limit theorem. A sequence is Cauchy if for every $\epsilon>0$ there exists an $N\in \mathbb{N}$ such that whenever $m,n\geq N$ it follows that $|a_n-a_m|< \epsilon$
3. The attempt at a solution
Lets call $x_n+y_n=c_n$ now we want to show that $|c_m-c_n|< \epsilon$ Lets assume for the sake of contradiction that $c_m-c_n> \epsilon$ so we would have $|x_m+y_m-x_n-y_n|> \epsilon$ $x_m> \epsilon+y_n-y_m$ since $y_n>y_m$ and we know that $x_m< \epsilon$ so this is a contradiction and the original statement must be true.
Mentor
P: 4,499
Quote by cragar 1. The problem statement, all variables and given/known data Assume $x_n$ and $y_n$ are Cauchy sequences. Give a direct argument that $x_n+y_n$ is Cauchy. That does not use the Cauchy criterion or the algebraic limit theorem. A sequence is Cauchy if for every $\epsilon>0$ there exists an $N\in \mathbb{N}$ such that whenever $m,n\geq N$ it follows that $|a_n-a_m|< \epsilon$ 3. The attempt at a solution Lets call $x_n+y_n=c_n$ now we want to show that $|c_m-c_n|< \epsilon$
It's always good to be careful with wording. We want to show that if n and m are big enough, that this inequality holds.
since $y_n>y_m$
This is probably not true, especially since you haven't even said what n and m are besides arbitrary numbers!
and we know that $x_m< \epsilon$
This is also probably not true since there's no reason to think the limit is zero
To get the contradiction you're going to want to use the triangle inequality on |(xn-xm)+(yn-ym)|
P: 2,444
ok thanks for your response. So I take $|(x_n-x_m)+(y_n-y_m)| \leq |x_n-x_m|+|y_n-y_m|$ lets assume that $|x_n-x_m|+|y_n-y_m| > \epsilon$ Im going to rewrite it as $A+B> \epsilon$ so now we have $A> \epsilon -B$
Can I just say this since we know that $A< \epsilon$ and $B< \epsilon$ since $\epsilon$ can be any number bigger than zero, then both of these values should be less than $\frac{\epsilon}{2}$ therefore $A+B< \epsilon$
I have a feeling my last step is not ok
Mentor
P: 4,499
## Cauchy sequence proof.
Quote by cragar Can I just say this since we know that $A< \epsilon$ and $B< \epsilon$ since $\epsilon$ can be any number bigger than zero, then both of these values should be less than $\frac{\epsilon}{2}$ therefore $A+B< \epsilon$
This is the crux of the argument. It's not the whole proof of course - A and B aren't always that small. Feel free to post a full proof if you want it checked over for errors
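For reference, the $\epsilon/2$ argument the thread converges on can be written out in full (a standard textbook sketch, in my wording, not any poster's):

```latex
Let $\epsilon > 0$. Since $(x_n)$ is Cauchy, there is $N_1$ such that
$|x_n - x_m| < \tfrac{\epsilon}{2}$ for all $m, n \ge N_1$; likewise there is
$N_2$ such that $|y_n - y_m| < \tfrac{\epsilon}{2}$ for all $m, n \ge N_2$.
Put $N = \max(N_1, N_2)$. Then for all $m, n \ge N$, the triangle inequality gives
\[
|(x_n + y_n) - (x_m + y_m)| \le |x_n - x_m| + |y_n - y_m|
  < \tfrac{\epsilon}{2} + \tfrac{\epsilon}{2} = \epsilon,
\]
so $(x_n + y_n)$ is Cauchy.
```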