url
stringlengths 14
2.42k
| text
stringlengths 100
1.02M
| date
stringlengths 19
19
| metadata
stringlengths 1.06k
1.1k
|
|---|---|---|---|
http://lixiaozhe.com/post/ecnu/nagel.what.does.it.all.mean
|
## 哲学导论:《这一切有什么意义》
Nagel T. What does it all mean?: a very short introduction to philosophy[M]. Oxford University Press, 2004.
## 1. Introduction(导论)
This book is a brief introduction to philosophy for people who don't know the first thing about the subject. People ordinarily study philosophy only when they go to college, and I suppose that most readers will be of college age or older. But that has nothing to do with the nature of the subject, and I would be very glad if the book were also of interest to intelligent high school students with a taste for abstract ideas and theoretical arguments -- should any of them read it.
Our analytical capacities are often highly developed before we have learned a great deal about the world, and around the age of fourteen many people start to think about philosophical problems on their own -- about what really exists, whether we can know anything, whether anthing is really right or wrong, whether life has any meaning, whether death is the end. These problems have been written about for thousands of years, but the philosophical raw material comes directly from the world and our relation to it, not from writings of the past. That is why they come up again and again, in the heads of people who haven't read about them.
This is a direct introduction to nine philosophical problems, each of which can be understood in itself, without reference to the history of thought. I shall not discuss the great philosophical writings of the past or the cultural background of those writings. The center of philosophy lies in certain questions which the reflective human mind finds naturally puzzling, and the best way to begin the study of philosophy is to think about them directly. Once you've done that, you are in a better position to appreciate the work of others who have tried to solve the same problems.
Philosophy is different from science and from mathematics. Unlike science it doesn't rely on experiments or observation, but only on thought. And unlike mathematics it has no formal methods of proof. It is done just by asking questions, arguing, trying out ideas and thinking of possible arguments against them, and wondering how our concepts really work.
The main concern of philosophy is to question and understand very common ideas that all of us use every day without thinking about them. A historian may ask what happened at some time in the past, but a philosopher will ask, "What is time?" A mathematician may investigate the relations among numbers, but a philosopher will ask, "What is a number?" A physicist will ask what atoms are made of or what explains gravity, but a philosopher will ask how we can know there is anything outside of our own minds. A psychologist may investigate how children learn a language, but a philosopher will ask, "What makes a word mean anything?" Anyone can ask whether it's wrong to sneak into a movie without paying, but a philosopher will ask, "What makes an action right or wrong?"
We couldn't get along in life without taking the ideas of time, number, knowledge, language, right and wrong for granted most of the time; but in philosophy we investigate those things themselves. The aim is to push our understanding of the world and ourselves a bit deeper. Obviously it isn't easy. The more basic the ideas you are trying to investigate, the fewer tools you have to work with. There isn't much you can assume or take for granted. So philosophy is a somewhat dizzying activity, and few of its results go unchallenged for long.
Since I believe the best way to learn about philosophy is to think about particular questions, I won't try to say more about its general nature. The nine problems we'll consider are these:
• Knowledge of the world beyond our minds
• Knowledge of minds other than our own
• The relation between mind and brain
• How language is possible
• Whether we have free will
• The basis of morality
• What inequalities are unjust
• The nature of death
• The meaning of life
1. 我们心灵之外的世界的知识
2. 他人心灵的知识
3. 心灵与大脑的关系
4. 语言如何可能
5. 自由一直是否存在
6. 道德的基础
7. 什么样的不平等是不公正的
8. 死亡的性质
9. 人生的意义
They are only a selection: there are many, many others.
What I say will reflect my own view of these problems and will not necessarily represent what most philosophers think. There probably isn't anything that most philosophers think about these questions anyway: philosophers disagree, and there are more than two sides to every philosophical question. My personal opinion is that most of these problems have not been solved, and that perhaps some of them never will be. But the object here is not to give answers -not even answers that I myself may think are right -- but to introduce you to the problems in a very preliminary way so that you can worry about them yourself. Before learning a lot of philosophical theories it is better to get puzzled about the philosophical questions which those theories try to answer. And the best way to do that is to look at some possible solutions and see what is wrong with them. I'll try to leave the problems open, but even if I say what I think, you have no reason to believe it unless you find it convincing.
There are many excellent introductory texts that include selections from the great philosophers of the past and from more recent writings. This short book is not a substitute for that approach, but I hope it provides a first look at the subject that is as clear and direct as possible. If after reading it you decide to take a second look, you'll see how much more there is to say about these problems than I say here.
## 2. How Do We Know Anything?(我们能知道什么)
If you think about it, the inside of your own mind is the only thing you can be sure of.
Whatever you believe -- whether it's about the sun, moon, and stars, the house and neighborhood in which you live, history, science, other people, even the existence of your own body -is based on your experiences and thoughts, feelings and sense impressions. That's all you have to go on directly, whether you see the book in your hands, or feel the floor under your feet, or remember that George Washington was the first president of the United States, or that water is H 2 O. Everything else is farther away from you than your inner experiences and thoughts, and reaches you only through them.
Ordinarily you have no doubts about the existence of the floor under your feet, or the tree outside the window, or your own teeth. In fact most of the time you don't even think about the mental states that make you aware of those things: you seem to be aware of them directly. But how do you know they really exist?
Would things seem any different to you if in fact all these things existed only in your mind -- if everything you took to be the real world outside was just a giant dream or hallucination, from which you will never wake up? If it were like that, then of course you couldn't wake up, as you can from a dream, because it would mean there was no "real" world to wake up into. So it wouldn't be exactly like a normal dream or hallucination. As we usually think of dreams, they go on in the minds of people who are actually lying in a real bed in a real house, even if in the dream they are running away from a homicidal lawnmor through the streets of Kansas City. We also assume that normal dreams depend on what is happening in the dreamer's brain while he sleeps.
But couldn't all your experiences be like a giant dream with no external world outside it? How can you know that isn't what's going on? If all your experience were a dream with nothing outside, then any evidence you tried to use to prove to yourself that there was an outside world would just be part of the dream. If you knocked on the table or pinched yourself, you would hear the knock and feel the pinch, but that would be just one more thing going on inside your mind like everything else. It's no use: If you want to find out whether what's inside your mind is any guide to what's outside your mind, you can't depend on how things seem -- from inside your mind -- to give you the answer.
But what else is there to depend on? All your evidence about anything has to come through your mind -- whether in the form of perception, the testimony of books and other people, or memory -- and it is entirely consistent with everything you're aware of that nothing at all exists except the inside of your mind.
It's even possible that you don't have a body or a brain -- since your beliefs about that come only through the evidence of your senses. You've never seen your brain -- you just assume that everybody has one -- but even if you had seen it, or thought you had, that would have been just another visual experience. Maybe you, the subject of experience, are the only thing that exists, and there is no physical world at all -- no stars, no earth, no human bodies. Maybe there isn't even any space.
The most radical conclusion to draw from this would be that your mind is the only thing that exists. This view is called solipsism. It is a very lonely view, and not too many people have held it. As you can tell from that remark, I don't hold it myself. If I were a solipsist I probably wouldn't be writing this book, since I wouldn't believe there was anybody else to read it. On the other hand, perhaps I would write it to make my inner life more interesting, by including the impression of the appearance of the book in print, of other people reading it and telling me their reactions, and so forth. I might even get the impression of royalties, if I'm lucky.
Perhaps you are a solipsist: in that case you will regard this book as a product of your own mind, coming into existence in your experience as you read it. Obviously nothing I can say can prove to you that I really exist, or that the book as a physical object exists.
On the other hand, to conclude that you are the only thing that exists is more than the evidence warrants. You can't know on the basis of what's in your mind that there's no world outside it. Perhaps the right conclusion is the more modest one that you don't know anything beyond your impressions and experiences. There may or may not be an external world, and if there is it may or may not be completely different from how it seems to you -- there's no way for you to tell. This view is called skepticism about the external world.
An even stronger form of skepticism is possible. Similar arguments seem to show that you don't know anything even about your own past existence and experiences, since all you have to go on are the present contents of your mind, including memory impressions. If you can't be sure that the world outside your mind exists now, how can you be sure that you yourself existed before now? How do you know you didn't just come into existence a few minutes ago, complete with all your present memories? The only evidence that you couldn't have come into existence a few minutes ago depends on beliefs about how people and their memories are produced, which rely in turn on beliefs about what has happened in the past. But to rely on those beliefs to prove that you existed in the past would again be to argue in a circle. You would be assuming the reality of the past to prove the reality of the past.
It seems that you are stuck with nothing you can be sure of except the contents of your own mind at the present moment. And it seems that anything you try to do to argue your way out of this predicament will fail, because the argument will have to assume what you are trying to prove -- the existence of the external world beyond your mind.
Suppose, for instance, you argue that there must be an external world, because it is incredible that you should be having all these experiences without there being some explanation in terms of external causes. The skeptic can make two replies. First, even if there are external causes, how can you tell from the contents of your experience what those causes are like? You've never observed any of them directly. Second, what is the basis of your idea that everything has to have an explanation? It's true that in your normal, nonphilosophical conception of the world, processes like those which go on in your mind are caused, at least in part, by other things outside them. But you can't assume that this is true if what you're trying to figure out is how you know anything about the world outside your mind. And there is no way to prove such a principle just by looking at what's inside your mind. However plausible the principle may seem to you, what reason do you have to believe that it applies to the world?
Science won't help us with this problem either, though it might seem to. In ordinary scientific thinking, we rely on general principles of explanation to pass from the way the world first seems to us //to a different conception of what it is really like. We try to explain the appearances in terms of a theory that describes the reality behind them, a reality that we can't observe directly. That is how physics and chemistry conclude that all the things we see around us are composed of invisibly small atoms. Could we argue that the general belief in the external world has the same kind of scientific backing as the belief in atoms?
The skeptic's answer is that the process of scientific reasoning raises the same skeptical problem we have been considering all along: Science is just as vulnerable as perception. How can we know that the world outside our minds corresponds to our ideas of what would be a good theoretical explanation of our observations? If we can't establish the reliability of our sense experiences in relation to the external world, there's no reason to think we can rely on our scientific theories either.
There is another very different response to the problem. Some would argue that radical skepticism of the kind I have been talking about is meaningless, because the idea of an external reality that no one could ever discover is meaningless. The argument is that a dream, for instance, has to be something from which you can wake up to discover that you have been asleep; a hallucination has to be something which others (or you later) can see is not really there. Impressions and appearances that do not correspond to reality must be contrasted with others that do correspond to reality, or else the contrast between appearance and reality is meaningless.
According to this view, the idea of a dream from which you can never wake up is not the idea of a dream at all: it is the idea of reality -the real world in which you live. Our idea of the things that exist is just our idea of what we can observe. (This view is sometimes called verificationism.) Sometimes our observations are mistaken, but that means they can be corrected by other observations -- as when you wake up from a dream or discover that what you thought was a snake was just a shadow on the grass. But without some possibility of a correct view of how things are (either yours or someone else's), the thought that your impressions of the world are not true is meaningless.
If this is right, then the skeptic is kidding himself if he thinks he can imagine that the only thing that exists is his own mind. He is kidding himself, because it couldn't be true that the physical world doesn't'really exist, unless somebody could observe that it doesn't exist. And what the skeptic is trying to imagine is precisely that there is no one to observe that or anything else -- except of course the skeptic himself, and all he can observe is the inside of his own mind. So solipsism is meaningless. It tries to subtract the external world from the totality of my impressions; but it fails, because if the external world is subtracted, they stop being mere impressions, and become instead perceptions of reality.
Is this argument against solipsism and skepticism any good? Not unless reality can be defined as what we can observe. But are we really unable to understand the idea of a real world, or a fact about reality, that can't be observed by anyone, human or otherwise?
The skeptic will claim that if there is an external world, the things in it are observable because they exist, and not the other way around: that existence isn't the same thing as observability. And although we get the idea of dreams and hallucinations from cases where we think we can observe the contrast between our experiences and reality, it certainly seems as if the same idea can be extended to cases where the reality is not observable.
If that is right, it seems to follow that it is not meaningless to think that the world might consist of nothing but the inside of your mind, though neither you nor anyone else could find out that this was true. And if this is not meaningless, but is a possibility you must consider, there seems no way to prove that it is false, without arguing in a circle. So there may be no way out of the cage of your own mind. This is sometimes called the egocentric predicament.
And yet, after all this has been said, I have to admit it is practically impossible to believe seriously that all the things in the world around you might not really exist. Our acceptance of the external world is instinctive and powerful: we cannot just get rid of it by philosophical arguments. Not only do we go on acting as if other people and things exist: we believe that they do, even after we've gone through the arguments which appear to show we have no grounds for this belief. (We may have grounds, within the overall system of our beliefs about the world, for more particular beliefs about the existence of particular things: like a mouse in the breadbox, for example. But that is different. It assumes the existence of the external world.)If a belief in the world outside our minds comes so naturally to us, perhaps we don't need grounds for it. We can just let it be and hope that we're right. And that in fact is what most people do after giving up the attempt to prove it: even if they can't give reasons against skepticism, they can't live with it either.
But this means that we hold on to most of our ordinary beliefs about the world in face of the fact that (a) they might be completely false, and (b) we have no basis for ruling out that possibility.
We are left then with three questions:
1. Is it a meaningful possibility that the inside of your mind is the only thing that exists -- or that even if there is a world outside your mind, it is totally unlike what you believe it to be?
2. If these things are possible, do you have any way of proving to yourself that they are not actually true?
3. If you can't prove that anything exists outside your own mind, is it all right to go on believing in the external world anyway?
1. 认为你的内心世界是唯一存在的东西,这种观点有意义吗?或者说,如果存在外部世界,这个世界与你所相信的信念大不一样,这有意义吗?
2. 如果它们有意义,你有任何方式像自己证明上面的说法实际上不正确吗?
3. 如果你不能证明外部世界的存在,那么继续相信外部世界存在的做法正确吗?
## 3. Other Minds(他人的心灵)
There is one special kind of skepticism which continues to be a problem even if you assume that your mind is not the only thing there is -that the physical world you seem to see and feel around you, including your own body, really exists. That is skepticism about the nature or even existence of minds or experiences other than your own.
How much do you really know about what goes on in anyone else's mind? Clearly you observe only the bodies of other creatures, including people. You watch what they do, listen to what they say and to the other sounds they make, and see how they respond to their environment -- what things attract them and what things repel them, what they eat, and so forth. You can also cut open other creatures and look at their physical insides, and perhaps compare their anatomy with yours.
But none of this will give you direct access to their experiences, thoughts, and feelings. The only experiences you can actually have are your own: if you believe anything about the mental lives of others, it is on the basis of observing their physical construction and behavior.
To take a simple example, how do you know, when you and a friend are eating chocolate ice cream, whether it tastes the same to him as it tastes to you? You can try a taste of his ice cream, but if it tastes the same as yours, that only means it tastes the same to you: you haven't experienced the way it tastes to him. There seems to be no way to compare the two flavor experiences directly.
Well, you might say that since you're both human beings, and you can both distinguish among flavors of ice cream -- for example you can both tell the difference between chocolate and vanilla with your eyes closed -- it's likely that your flavor experiences are similar. But how do you know that? The only connection you've ever observed between a type of ice cream and a flavor is in your own case; so what reason do you have to think that similar correlations hold for other human beings? Why isn't it just as consistent with all the evidence that chocolate tastes to him the way vanilla tastes to you, and vice versa?
The same question could be asked about other kinds of experience. How do you know that red things don't look to your friend the way yellow things look to you? Of course if you ask him how a fire engine looks, he'll say it looks red, like blood, and not yellow, like a dandelion; but that's because he, like you, uses the word "red" for the color that blood and fire engines look to him, whatever it is. Maybe it's what you call yellow, or what you call blue, or maybe it's a color experience you've never had, and can't even imagine.
To deny this, you have to appeal to an assumption that flavor and color experiences are uniformly correlated with certain physical stimulations of the sense organs, whoever undergoes them. But the skeptic would say you have no evidence for that assumption, and because of the kind of assumption it is, you couldn't have any evidence for it. All you can observe is the correlation in your own case.
Faced with this argument, you might first concede that there is some uncertainty here. The correlation between stimulus and experience may not be exactly the same from one person to another: there may be slight shades of difference between two people's color or flavor experience of the same type of ice cream. In fact, since people are physically different from one another, this wouldn't be surprising. But, you might say, the difference in experience can't be too radical, or else we'd be able to tell. For instance, chocolate ice cream couldn't taste to your friend the way a lemon tastes to you, otherwise his mouth would pucker up when he ate it.
But notice that this claim assumes another correlation from one person to another: a correlation between inner experience and certain kinds of observable reaction. And the same question arises about that. You've observed the connection between puckering of the mouth and the taste you call sour only in your own case: how do you know it exists in other people? Maybe what makes your friend's mouth pucker up is an experience like the one you get from eating oatmeal.
For all you know, it could be something you would call a sound -- or maybe it's unlike anything you've ever experienced, or could imagine.
If we continue on this path, it leads finally to the most radical skepticism of all about other minds. How do you even know that your friend is conscious? How do you know that there are any minds at all besides your own?
If we go on pressing these kinds of questions relentlessly enough, we will move from a mild and harmless skepticism about whether chocolate ice cream tastes exactly the same to you and to your friend, to a much more radical skepticism about whether there is any similarity between your experiences and his. How do you know that when he puts something in his mouth he even has an experience of the kind that you would call a flavor? For all you know, it could be something you would call a sound -- or maybe it's unlike anything you've ever experienced, or could imagine.
The only example you've ever directly observed of a correlation between mind, behavior, anatomy, and physical circumstances is yourself. Even if other people and animals had no experiences whatever, no mental inner life of any kind, but were just elaborate biological machines, they would look just the same to you. So how do you know that's not what they are? How do you know that the beings around you aren't all mindless robots? You've never seen into their minds -- you couldn't -- and their physical behavior could all be produced by purely physical causes. Maybe your relatives, your neighbors, your cat and your dog have no inner experiences whatever. If they don't, there is no way you could ever find it out.
You can't even appeal to the evidence of their behavior, including what they say -- because that assumes that in them outer behavior is connected with inner experience as it is in you; and that's just what you don't know.
To consider the possibility that none of the people around you may be conscious produces an uncanny feeling. On the one hand it seems conceivable, and no evidence you could possibly have can rule it out decisively. On the other hand it is something you can't really believe is possible: your conviction that there are minds in those bodies, sight behind those eyes, hearing in those ears, etc., is instinctive. But if its power comes from instinct, is it really knowledge? Once you admit the possibility that the belief in other minds is mistaken, don't you need something more reliable to justify holding on to it?
There is another side to this question, which goes completely in the opposite direction.
Ordinarily we believe that other human beings are conscious, and almost everyone believes that other mammals and birds are conscious too. But people differ over whether fish are conscious, or insects, worms, and jellyfish. They are still more doubtful about whether onecelled animals like amoebae and paramecia have conscious experiences, even though such creatures react conspicuously to stimuli of various kinds. Most people believe that plants aren't conscious; and almost no one believes that rocks are conscious, or kleenex, or automobiles, or mountain lakes, or cigarettes. And to take another biological example, most of us would say, if we thought about it, that the individual cells of which our bodies are composed do not have any conscious experiences.
How do we know all these things? How do you know that when you cut a branch off a tree it doesn't hurt the tree -- only it can't express its pain because it can't move? (Or maybe it loves having its branches pruned.) How do you know that the muscle cells in your heart don't feel pain or excitement when you run up a flight of stairs? How do you know that a kleenex doesn't feel anything when you blow your nose into it?
And what about computers? Suppose computers are developed to the point where they can be used to control robots that look on the outside like dogs, respond in complicated ways to the environment, and behave in many ways just like dogs, though they are just a mass of circuitry and silicon chips on the inside? Would we have any way of knowing whether such machines were conscious?
These cases are different from one another, of course. If a thing is incapable of movement, it can't give any behavioral evidence of feeling or perception. And if it isn't a natural organism, it is radically different from us in internal constitution. But what grounds do we have for thinking that only things that behave like us to some degree and that have an observable physical structure roughly like ours are capable of having experiences of any kind? Perhaps trees feel things in a way totally different from us, but we have no way of finding out about it, because we have no way of discovering the correlations between experience and observable manifestations or physical conditions in their case. We could discover such correlations only if we could observe both the experiences and the external manifestations together: but there is no way we can observe the experiences directly, except in our own case. And for the same reason there is no way we could observe the absence of any experiences, and consequently the absence of any such correlations, in any other case. You can't tell that a tree has no experience, by looking inside it, any more than you can tell that a worm has experience, by looking inside it.
So the question is: what can you really know about the conscious life in this world beyond the fact that you yourself have a conscious mind? Is it possible that there might be much less conscious life than you assume (none except yours), or much more (even in things you assume to be unconscious)?
## 4. The Mind-Body Problem(身心问题)
Let's forget about skepticism, and assume the physical world exists, including your body and your brain; and let's put aside our skepticism about other minds. I'll assume you're conscious if you assume I am. Now what might be the relation between consciousness and the brain?
Everybody knows that what happens in consciousness depends on what happens to the body. If you stub your toe it hurts. If you close your eyes you can't see what's in front of you. If you bite into a Hershey bar you taste chocolate. If someone conks you on the head you pass out.
The evidence shows that for anything to happen in your mind or consciousness, something has to happen in your brain. (You wouldn't feel any pain from stubbing your toe if the nerves in your leg and spine didn't carry impulses from the toe to your brain.) We don't know what happens in the brain when you think, "I wonder whether I have time to get a haircut this afternoon." But we're pretty sure something does something involving chemical and electrical changes in the billions of nerve cells that your brain is made of.
In some cases, we know how the brain affects the mind and how the mind affects the brain. We know, for instance, that the stimulation of certain brain cells near the back of the head produces visual experiences. And we know that when you decide to help yourself to another piece of cake, certain other brain cells send out impulses to the muscles in your arm. We don't know many of the details, but it is clear that there are complex relations between what happens in your mind and the physical processes that go on in your brain. So far, all of this belongs to science, not philosophy.
But there is also a philosophical question about the relation between mind and brain, and it is this: Is your mind something different from your brain, though connected to it, or is it your brain? Are your thoughts, feelings, perceptions, sensations, and wishes things that happen in addition to all the physical processes in your brain, or are they themselves some of those physical processes?
What happens, for instance, when you bite into a chocolate bar? The chocolate melts on your tongue and causes chemical changes in your taste buds; the taste buds send some electrical impulses along the nerves leading from your tongue to your brain, and when those impulses reach the brain they produce further physical changes there; finally, you taste the taste of chocolate. What is that? Could it just be a physical event in some of your brain cells, or does it have to be something of a completely different kind?
If a scientist took off the top of your skull and looked into your brain while you were eating the chocolate bar, all he would see is a grey mass of neurons. If he used instruments to measure what was happening inside, he would detect complicated physical processes of many different kinds. But would he find the taste of chocolate?
It's not just that the taste of chocolate is a flavor and therefore can't be seen. Suppose a scientist were crazy enough to try to observe your experience of tasting chocolate by licking your brain while you ate a chocolate bar. First of all, your brain probably wouldn't taste like chocolate to him at all. But even if it did, he wouldn't have succeeded in getting into your mind and observing your experience of tasting chocolate. He would just have discovered, oddly enough, that when you taste chocolate, your brain changes so that it tastes like chocolate to other people. He would have his taste of chocolate and you would have yours.
If what happens in your experience is inside your mind in a way in which what happens in your brain is not, it looks as though your experiences and other mental states can't just be physical states of your brain. There has to be more to you than your body with its humming nervous system.
One possible conclusion is that there has to be a soul, attached to your body in some way which allows them to interact. If that's true, then you are made up of two very different things: a complex physical organism, and a soul which is purely mental. (This view is called dualism, for obvious reasons.)
But many people think that belief in a soul is old-fashioned and unscientific. Everything else in the world is made of physical matter -- different combinations of the same chemical elements. Why shouldn't we be? Our bodies grow by a complex physical process from the single cell produced by the joining of sperm and egg at conception.
Ordinary matter is added gradually in such a way that the cell turns into a baby, with arms, legs, eyes, ears, and a brain, able to move and feel and see, and eventually to talk and think. Some people believe that this complex physical system is sufficient by itself to give rise to mental life. Why shouldn't it be? Anyway, how can mere philosophical argument show that it isn't? Philosophy can't tell us what stars or diamonds are made of, so how can it tell us what people are or aren't made of?
The view that people consist of nothing but physical matter, and that their mental states are physical states of their brains, is called physicalism (or sometimes materialism). Physicalists don't have a specific theory of what process in the brain can be identified as the experience of tasting chocolate, for instance. But they believe that mental states are just states of the brain, and that there's no philosophical reason to think they can't be. The details will have to be discovered by science.
The idea is that we might discover that experiences are really brain processes just as we have discovered that other familiar things have a real nature that we couldn't have guessed until it was revealed by scientific investigation. For instance, it turns out that diamonds are composed of carbon, the same material as coal: the atoms are just differently arranged. And water, as we all know, is composed of hydrogen and oxygen, even though those two elements are nothing like water when taken by themselves.
So while it might seem surprising that the experience of tasting chocolate could be nothing but a complicated physical event in your brain, it would be no stranger than lots of things that have been discovered about the real nature of ordinary objects and processes. Scientists have discovered what light is, how plants grow, how muscles move - it is only a matter of time before they discover the biological nature of the mind. That's what physicalists think.
A dualist would reply that those other things are different. When we discover the chemical composition of water, for instance, we are dealing with something that is clearly out there in the physical world -- something we can all see and touch. When we find out that it's made up of hydrogen and oxygen atoms, we're just breaking down an external physical substance into smaller physical parts. It is an essential feature of this kind of analysis that we are not giving a chemical breakdown of the way water looks, feels, and tastes to us. Those things go on in our inner experience, not in the water that we have broken down into atoms. The physical or chemical analysis of water leaves them aside.
But to discover that tasting chocolate was really just a brain process, we would have to analyze something mental -- not an externally observed physical substance but an inner taste sensation -- in terms of parts that are physical. And there is no way that a large number of physical events in the brain, however complicated, could be the parts out of which a taste sensation was composed. A physical whole can be analyzed into smaller physical parts, but a mental process can't be. Physical parts just can't add up to a mental whole.
There is another possible view which is different from both dualism and physicalism. Dualism is the view that you consist of a body plus a soul, and that your mental life goes on in your soul. Physicalism is the view that your mental life consists of physical processes in your brain. But another possibility is that your mental life goes on in your brain, yet that all those experiences, feelings, thoughts, and desires are not physical processes in your brain. This would mean that the grey mass of billions of nerve cells in your skull is not just a physical object. It has lots of physical properties -- great quantities of chemical and electrical activity go on in it -- but it has mental processes going on in it as well.
The view that the brain is the seat of consciousness, but that its conscious states are not just physical states, is called dual aspect theory. It is called that because it means that when you bite into a chocolate bar, this produces in your brain a state or process with two aspects: a physical aspect involving various chemical and electrical changes, and a mental aspect -- the flavor experience of chocolate. When this process occurs, a scientist looking into your brain will be able to observe the physical aspect, but you yourself will undergo, from the inside, the mental aspect: you will have the sensation of tasting chocolate. If this were true, your brain itself would have an inside that could not be reached by an outside observer even if he cut it open. It would feel, or taste, a certain way to you to have that process going on in your brain.
We could express this view by saying that you are not a body plus a soul -- that you are just a body, but your body, or at least your brain, is not just a physical system. It is an object with both physical and mental aspects: it can be dissected, but it also has the kind of inside that can't be exposed by dissection. There's something it's like from the inside to taste chocolate because there's something it's like from the inside to have your brain in the condition that is produced when you eat a chocolate bar.
Physicalists believe that nothing exists but the physical world that can be studied by science: the world of objective reality. But then they have to find room somehow for feelings, desires, thoughts, and experiences -- for you and me -- in such a world.
One theory offered in defense of physicalism is that the mental nature of your mental states consists in their relations to things that cause them and things they cause. For instance, when you stub your toe and feel pain, the pain is something going on in your brain. But its painfulness is not just the sum of its physical characteristics, and it is not some mysterious nonphysical property either. Rather, what makes it a pain is that it is the kind of state of your brain that is usually caused by injury, and that usually causes you to yell and hop around and avoid the thing that caused the injury. And that could be a purely physical state of your brain.
But that doesn't seem enough to make something a pain. It's true that pains are caused by injury, and they do make you hop and yell. But they also feel a certain way, and that seems to be something different from all their relations to causes and effects, as well as all the physical properties they may have -- if they are in fact events in your brain. I myself believe that this inner aspect of pain and other conscious experiences cannot be adequately analyzed in terms of any system of causal relations to physical stimuli and behavior, however complicated.
There seem to be two very different kinds of things going on in the world: the things that belong to physical reality, which many different people can observe from the outside, and those other things that belong to mental reality, which each of us experiences from the inside in his own case. This isn't true only of human beings: dogs and cats and horses and birds seem to be conscious, and fish and ants and beetles probably are too. Who knows where it stops?
We won't have an adequate general conception of the world until we can explain how, when a lot of physical elements are put together in the right way, they form not just a functioning biological organism but a conscious being. If consciousness itself could be identified with some kind of physical state, the way would be open for a unified physical theory of mind and body, and therefore perhaps for a unified physical theory of the universe. But the reasons against a purely physical theory of consciousness are strong enough to make it seem likely that a physical theory of the whole of reality is impossible. Physical science has progressed by leaving the mind out of what it tries to explain, but there may be more to the world than can be understood by physical science.
## 5. The Meaning of Words(词语的意义)
How can a word -- a noise or a set of marks on paper -- mean something? There are some words, like "bang" or "whisper," which sound a bit like what they refer to, but usually there is no resemblance between a name and the thing it is the name of. The relation in general must be something entirely different.
There are many types of words: some of them name people or things, others name qualities or activities, others refer to relations between things or events, others name numbers, places, or times, and some, like "and" and "of," have meaning only because they contribute to the meaning of larger statements or questions in which they appear as parts. In fact all words do their real work in this way: their meaning is really something they contribute to the meaning of sentences or statements. Words are mostly used in talking and writing, rather than just as labels.
However, taking that as understood,let us ask how a word can have a meaning. Some words can be defined in terms ofother words: "square" for example means "four-sided equilateralequiangular plane figure." And most of the terms in that definition can also be defined. But definitions can't be the basis of meaning for all words,or we'd go forever in a circle. Eventually we must get to some words which have meaning directly.
Take the word "tobacco," which may seem like an easy example. It refers to a kind of plant whose Latin name most of us don't know, and whose leaves are used to make cigars and cigarettes. All of us have seen and smelled tobacco, but the word as you use it refers not just to the samples of the stuff that you have seen, or that is around you when you use the word, but to all examples of it, whether or not you know of their existence. You may have learned the word by being shown some samples, but you won't understand it if you think it is just the name of those samples.
”烟草“这个词看起来就是一个很简单的例子。它指称一种植物,我们当中的大部分人不知道这种植物的拉丁文名字,并且这种植物的叶子被用来制作雪茄和香烟。我们所有的人都看过和闻过香烟,但是你所使用的这个词并不仅仅是指称那个你所看到的样本,或者你周围的东西,而是指所有的烟草个体,无论你是否知道它们的存在。也许有人通过给你看了一些烟草的样本而学会了这个词,但你如果认为烟草只是这部分样本的名字,你就永远也无法理解”烟草“这个词。
So if you say, "I wonder if more tobacco was smoked in China last year than in the entire Western hemisphere," you have asked a meaningful question, and it has an answer, even if you can't find it out. But the meaning of the question, and its answer,depend on the fact that when you use the word "tobacco," it refers to every example of the substance in the world throughout all past and futuretime, in fact -- to every cigarette smoked in China last year, to every cigarsmoked in Cuba, and so forth. The other words in the sentence limit the reference to particular times and places, but the word "tobacco" can be used to ask such a question only because it has this enormous but special reach, beyond all your experience to every sample of a certain kind of stuff.
How does the word do that? How can a mere noise or scribble reach that far? Not, obviously, because of its sound or look. And not because of the relatively small number of examples of tobacco that you've encountered, and that have been in the same room when you have uttered or heard or read the word. There's something else going on, and it is something general, which applies to everyone's use of the word. You and I, who have never met and have encountered different samples of tobacco, use the word with the same meaning. If we both use the word to ask the question about China and the Western hemisphere, it is the same question, and the answer is the same. Further, a speaker of Chinese can ask the same question, using the Chinese word with the same meaning. Whatever relation the word "tobacco" has to the stuff itself, other words can have as well.
This very naturally suggests that the relation of the word "tobacco" to all those plants, cigarettes, and cigars in the past, present, and future, is indirect. The word as you use it has something else behind it -- a concept or idea or thought -- which somehow reaches out to all the tobacco in the universe. This, however, raises new problems.
First, what kind of thing is this middleman? Is it in your mind, or is it something outside your mind that you somehow latch onto? It would seem to have to be something that you and I and a speaker of Chinese can all latch onto, in order to mean the same thing by our words for tobacco. But how, with our very different experiences of the word and the plant, do we do that? Isn't this just as hard to explain as our all being able to refer to the same enormous and widespread amount of stuff by our different uses of the word or words? Isn't there just as much of a problem about how the word means the idea or concept (whatever that is) as there was before about how the word means the plant or substance?
Not only that,but there's also a problem about how this idea or concept is related to all the samples of actual tobacco. What kind of thing is it that it can have this exclusive connection with tobacco and nothing else? It looks as though we've just added to the problem. In trying to explain the relation between the word "tobacco" and tobacco by interposing between them the idea or concept of tobacco, we've just created the further need to explain the relations between the word and the idea, and between the idea and the stuff.
With or without the concept or idea, the problem seems to be that very particular sounds, marks, and examples are involved in each person's use of a word, but the word applies to something universal, which other particular speakers can also mean by that word or other words in other languages. How can anything as particular as the noise I make when I say "tobacco" mean something so general that I can use it to say, "I bet people will be smoking tobacco on Mars 200 years from now."
You might think that the universal element is provided by something we all have in our minds when we use the word. But what do we all have in our minds? Consciously, at least, I don't need anything more than the word itself in my mind to think, "Tobacco is getting more expensive every year." Still, I certainly may have an image of some sort in my mind when I use the word: perhaps of a plant, or of some dried leaves, or of the inside of a cigarette. Still, this will not help to explain the generality of the meaning of the word, because any such image will be a particular image. It will be an image of the appearance or smell of a particular sample of tobacco; and how is that supposed to encompass all actual and possible examples of tobacco? Also, even if you have a certain picture in your mind when you hear or use the word "tobacco," every other person will probably have a different picture; yet that does not prevent us all from using the word with the same meaning.
The mystery of meaning is that it doesn't seem to be located anywhere -- not in the word, not in the mind, not in a separate concept or idea hovering between the word, the mind, and the things we are talking about. And yet we use language all the time, and it enables us to think complicated thoughts which span great reaches of time and space. You can talk about how many people in Okinawa are over five feet tall, or whether there is life in other galaxies, and the little noises you make will be sentences which are true or false in virtue of complicated facts about far away things that you will probably never encounter directly.
You may think I have been making too much of the universal reach of language. In ordinary life, most of the statements and thoughts we use language for are much more local and particular. If I say "Pass the salt," and you pass me the salt, this doesn't have to involve any universal meaning of the word "salt," of the kind that's present when we ask, "How long ago in the history of our galaxy was salt first formed out of sodium and chlorine?" Words are often used simply as tools in the relations between people. On a sign in a bus station you see the little figure with the skirt, and an arrow, and you know that's the way to the ladies' room. Isn't most of language just a system of signals and responses like that?
Well, perhaps some of it is, and perhaps that's how we start to learn to use words: "Daddy," "Mommy," "No," "All gone." But it doesn't stop there, and it's not clear how the simple transactions possible using one or two words at a time can help us to understand the use of language to describe and misdescribe the world far beyond our present neighborhood It seems more likely, in fact, that the use of language for much larger purposes shows us something about what is going on when we use it on a smaller scale.
A statement like, "There's salt on the table," means the same whether it's said for practical reasons during lunch, or as part of the description of a situation distant in space and time, or merely as a hypothetical description of an imaginary possibility. It means the same whether it is true or false, and whether or not the speaker or hearer know if it's true or false. Whatever is going on in the ordinary, practical case must be something general enough also to explain these other, quite different cases where it means the same thing.
”盐在桌子上。“这样的陈述,无论它实在午餐为了实用的理由说的,还是作为遥远时空中某种场景描述的一部分,或者它仅仅只是一个关于想象场景的可能性描述,句子的含义都是一样的。这意味着,无论句子的真假。无论说话者与听话者是否它的真假,它们的含义都是一样的。不管常规意义上这样的句子会发生什么,实际的例子必须足够解释它的其他情况:完全不同的例子中它也表达了同样的情况。(出于日常的、实用的目的所说的话中一定有某种普遍的东西,这种东西应该同样能被用来解释那些出于其他完全不同的目的而说的话,从而使得这些话的意义相同。)
It is of course important that language is a social phenomenon. Each person doesn't make it up for himself. When as children we learn a language, we get plugged into an already existing system, in which millions of people have been using the same words to talk to one another for centuries. My use of the word "tobacco" doesn't have a meaning just on its own, but rather as part of the much wider use of that word in English. (Even if I were to adopt a private code, in which I used the word "blibble" to mean tobacco, I'd do it by defining "blibble" to myself in terms of the common word "tobacco.") We still have to explain how my use of the word gets its content from all those other uses, most of which I don't know about -- but putting my words into this larger context may seem to help explain their universal meaning.
But this doesn't solve the problem. When I use the word, it may have its meaning as part of the English language, but how does the use of the word by all those other speakers of English give it its universal range, well beyond all the situations in which it is actually used? The problem of the relation of language to the world is not so different whether we are talking about one sentence or billions. The meaning of a word contains all its possible uses, true and false, not only its actual ones, and the actual uses are only a tiny fraction of the possible ones.
We are small finite creatures, but meaning enables us with the help of sounds or marks on paper to grasp the whole world and many things in it, and even to invent things that do not exist and perhaps never will. The problem is to explain how this is possible: How does anything we say or write mean anything -- including all the words in this book?
## 6. Free Will(自由意志)
Suppose you're going through a cafeteria line and when you come to the desserts, you hesitate between a peach and a big wedge of chocolate cake with creamy icing. The cake looks good, but you know it's fattening. Still, you take it and eat it with pleasure. The next day you look in the mirror or get on the scale and think, "I wish I hadn't eaten that chocolate cake. I could have had a peach instead."
"I could have had a peach instead." What does that mean, and is it true?
“我本应该可以吃桃子。”这是什么意思,这句话是真的吗?
Peaches were available when you went through the cafeteria line: you had the opportunity to take a peach instead. But that isn't all you mean. You mean you could have taken the peach instead of the cake. You could have done something different from what you actually did. Before you made up your mind, it was open whether you would take fruit or cake, and it was only your choice that decided which it would be.
Is that it? When you say, "I could have had a peach instead," do you mean that it depended only on your choice? You chose chocolate cake, so that's what you had, but if you had chosen the peach, you would have had that.
This still doesn't seem to be enough. You don't mean only that if you had chosen the peach, you would have had it. When you say, "I could have had a peach instead," you also mean that you could have chosen it -- no "ifs" about it. But what does that mean?
It can't be explained by pointing out other occasions when you have chosen fruit. And it can't be explained by saying that if you had thought about it harder, or if a friend had been with you who eats like a bird, you would have chosen it. What you are saying is that you could have chosen a peach instead of chocolate cake just then, as things actually were. You think you could have chosen a peach even if everything else had been exactly the same as it was up to the point when you in fact chose chocolate cake. The only difference would have been that instead of thinking, "Oh well," and reaching for the cake, you would have thought, "Better not," and reached for the peach.
This is an idea of "can" or "could have" which we apply only to people (and maybe some animals). When we say, "The car could have climbed to the top of the hill," we mean the car had enough power to reach the top of the hill if someone drove it there. We don't mean that on an occasion when it was parked at the bottom of the hill, the car could have just taken off and climbed to the top, instead of continuing to sit there. Something else would have had to happen differently first, like a person getting in and starting the motor. But when it comes to people, we seem to think that they can do various things they don't actually do, just like that, without anything else happening differently first. What does this mean?
Part of what it means may be this: Nothing up to the point at which you choose determines irrevocably what your choice will be. It remains an open possibility that you will choose a peach until the moment when you actually choose chocolate cake. It isn't determined in advance.
Some things that happen are determined in advance. For instance, it seems to be determined in advance that the sun will rise tomorrow at a certain hour. It is not an open possibility that tomorrow the sun won't rise and night will just continue. That is not possible because it could happen only if the earth stopped rotating, or the sun stopped existing, and there is nothing going on in our galaxy which might make either of those things happen. The earth will continue rotating unless it is stopped, and tomorrow morning its rotation will bring us back around to face inward in the solar system, toward the sun, instead of outward, away from it. If there is no possibility that the earth will stop or that the sun won't be there, there is no possibility that the sun won't rise tomorrow.
When you say you could have had a peach instead of chocolate cake, part of what you mean may be that it wasn't determined in advance what you would do, as it is determined in advance that the sun will rise tomorrow. There were no processes or forces at work before you made your choice that made it inevitable that you would choose chocolate cake.
That may not be all you mean, but it seems to be at least part of what you mean. For if it was really determined in advance that you would choose cake, how could it also be true that you could have chosen fruit? It would be true that nothing would have prevented you from having a peach if you had chosen it instead of cake. But these ifs are not the same as saying you could have chosen a peach, period. You couldn't have chosen it unless the possibility remained open until you closed it off by choosing cake.
Some people have thought that it is never possible for us to do anything different from what we actually do, in this absolute sense. They acknowledge that what we do depends on our choices, decisions, and wants, and that we make different choices in different circumstances: we're not like the earth rotating on its axis with monotonous regularity. But the claim is that, in each case, the circumstances that exist before we act determine our actions and make them inevitable. The sum total of a person's experiences, desires and knowledge, his hereditary constitution, the social circumstances and the nature of the choice facing him, together with other factors that we may not know about, all combine to make a particular action in the circumstances inevitable.
This view is called determinism. The idea is not that we can know all the laws of the universe and use them to predict what will happen. First of all, we can't know all the complex circumstances that affect a human choice. Secondly, even when we do learn something about the circumstances, and try to make a prediction, that is itself a change in the circumstances, which may change the predicted result. But predictability isn't the point. The hypothesis is that there are laws of nature, like those that govern the movement of the planets, which govern everything that happens in the world -- and that in accordance with those laws, the circumstances before an action determine that it will happen, and rule out any other possibility.
If that is true, then even while you were making up your mind about dessert, it was already determined by the many factors working on you and in you that you would choose cake. You couldn't have chosen the peach, even though you thought you could: the process of decision is just the working out of the determined result inside your mind.
If determinism is true for everything that happens, it was already determined before you were born that you would choose cake. Your choice was determined by the situation immediately before, and that situation was determined by the situation before it, and so on as far back as you want to go.
Even if determinism isn't true for everything that happens -- even if some things just happen without being determined by causes that were there in advance -- it would still be very significant if everything we did were determined before we did it. However free you might feel when choosing between fruit and cake, or between two candidates in an election, you would really be able to make only one choice in those circumstances-though if the circumstances or your desires had been different, you would have chosen differently.
If you believed that about yourself and other people, it would probably change the way you felt about things. For instance, could you blame yourself for giving in to temptation and having the cake? Would it make sense to say, "I really should have had a peach instead," if you couldn't have chosen a peach instead? It certainly wouldn't make sense to say it if there was no fruit. So how can it make sense if there was fruit, but you couldn't have chosen it because it was determined in advance that you would choose cake?
This seems to have serious consequences. Besides not being able sensibly to blame yourself for having had cake, you probably wouldn't be able sensibly to blame anyone at all for doing something bad, or praise them for doing something good. If it was determined in advance that they would do it, it was inevitable: they couldn't have done anything else, given the circumstances as they were. So how can we hold them responsible?
You may be very mad at someone who comes to a party at your house and steals all your Glenn Gould records, but suppose you believed that his action was determined in advance by his nature and the situation. Suppose you believed that everything he did, including the earlier actions that had contributed to the formation of his character, was determined in advance by earlier circumstances. Could you still hold him responsible for such low-grade behavior? Or would it be more reasonable to regard him as a kind of natural disaster -- as if your records had been eaten by termites?
People disagree about this. Some think that if determinism is true, no one can reasonably be praised or blamed for anything, any more than the rain can be praised or blamed for falling. Others think that it still makes sense to praise good actions and condemn bad ones, even if they were inevitable. After all, the fact that someone was determined in advance to behave badly doesn't mean that he didn't behave badly. If he steals your records, that shows inconsiderateness and dishonesty, whether it was determined or not. Furthermore, if we don't blame him, or perhaps even punish him, he'll probably do it again.
On the other hand, if we think that what he did was determined in advance, this seems more like punishing a dog for chewing on the rug. It doesn't mean we hold him responsible for what he did: we're just trying to influence his behavior in the future. I myself don't think it makes sense to blame someone for doing what it was impossible for him not to do. (Though of course determinism implies that it was determined in advance that I would think this.)
These are the problems we must face if determinism is true. But perhaps it isn't true. Many scientists now believe that it isn't true for the basic particles of matter -- that in a given situation, there's more than one thing that an electron may do. Perhaps if determinism isn't true for human actions, either, this leaves room for free will and responsibility. What if human actions, or at least some of them, are not determined in advance? What if, up to the moment when you choose, it's an open possibility that you will choose either chocolate cake or a peach? Then, so far as what has happened before is concerned, you could choose either one. Even if you actually choose cake, you could have chosen a peach.
But is even this enough for free will? Is this all you mean when you say, "I could have chosen fruit instead?" -- that the choice wasn't determined in advance? No, you believe something more. You believe that you determined what you would do, by doing it. It wasn't determined in advance, but it didn't just happen, either. You did it, and you could have done the opposite. But what does that mean?
This is a funny question: we all know what it means to do something. But the problem is, if the act wasn't determined in advance, by your desires, beliefs, and personality, among other things, it seems to be something that just happened, without any explanation. And in that case, how was it your doing? One possible reply would be that there is no answer to that question. Free action is just a basic feature of the world, and it can't be analyzed. There's a difference between something just happening without a cause and an action just being done without a cause. It's a difference we all understand, even if we can't explain it.
Some people would leave it at that. But others find it suspicious that we must appeal to this unexplained idea to explain the sense in which you could have chosen fruit instead of cake. Up to now it has seemed that determinism is the big threat to responsibility. But now it seems that even if our choices are not determined in advance, it is still hard to understand in what way we can do what we don't do. Either of two choices may be possible in advance, but unless I determine which of them occurs, it is no more my responsibility than if it was determined by causes beyond my control. And how can I determine it if nothing determines it?
This raises the alarming possibility that we're not responsible for our actions whether determinism is true or whether it's false. If determinism is true, antecedent circumstances are responsible. If determinism is false, nothing is responsible. That would really be a dead end.
There is another possible view, completely opposite to most of what we've been saying. Some people think responsibility for our actions requires that our actions be determined, rather than requiring that they not be. The claim is that for an action to be something you have done, it has to be produced by certain kinds of causes in you. For instance, when you chose the chocolate cake, that was something you did, rather than something that just happened, because you wanted chocolate cake more than you wanted a peach. Because your appetite for cake was stronger at the time than your desire to avoid gaining weight, it resulted in your choosing the cake. In other cases of action, the psychological explanation will be more complex, but there will always be one -- otherwise the action wouldn't be yours. This explanation seems to mean that what you did was determined in advance after all. If it wasn't determined by anything, it was just an unexplained event, something that happened out of the blue rather than something that you did.
According to this position, causal determination by itself does not threaten freedom -- only a certain kind of cause does that. If you grabbed the cake because someone else pushed you into it, then it wouldn't be a free choice. But free action doesn't require that there be no determining cause at all: it means that the cause has to be of a familiar psychological type. I myself can't accept this solution. If I thought that everything I did was determined by my circumstances and my psychological condition, I would feel trapped. And if I thought the same about everybody else, I would feel that they were like a lot of puppets. It wouldn't make sense to hold them responsible for their actions any more than you hold a dog or a cat or even an elevator responsible.
On the other hand, I'm not sure I understand how responsibility for our choices makes sense if they are not determined. It's not clear what it means to say I determine the choice, if nothing about me determines it. So perhaps the feeling that you could have chosen a peach instead of a piece of cake is a philosophical illusion, and couldn't be right whatever was the case.
To avoid this conclusion, you would have to explain (a) what you mean if you say you could have done something other than what you did, and (b) what you and the world would have to be like for this to be true.
## 7. Right and Wrong(是非对错)
Suppose you work in a library, checking people's books as they leave, and a friend asks you to let him smuggle out a hard-to-find reference work that he wants to own.
You might hesitate to agree for various reasons. You might be afraid that he'll be caught, and that both you and he will then get into trouble. You might want the book to stay in the library so that you can consult it yourself.
But you may also think that what he proposes is wrong -- that he shouldn't do it and you shouldn't help him. If you think that, what does it mean, and what, if anything, makes it true?
To say it's wrong is not just to say it's against the rules. There can be bad rules which prohibit what isn't wrong -- like a law against criticizing the government. A rule can also be bad because it requires something that is wrong -- like a law that requires racial segregation in hotels and restaurants. The ideas of wrong and right are different from the ideas of what is and is not against the rules. Otherwise they couldn't be used in the evaluation of rules as well as of actions.
If you think it would be wrong to help your friend steal the book, then you will feel uncomfortable about doing it: in some way you won't want to do it, even if you are also reluctant to refuse help to a friend. Where does the desire not to do it come from; what is its motive, the reason behind it?
There are various ways in which something can be wrong, but in this case, if you had to explain it, you'd probably say that it would be unfair to other users of the library who may be just as interested in the book as your friend is, but who consult it in the reference room, where anyone who needs it can find it. You may also feel that to let him take it would betray your employers, who are paying you precisely to keep this sort of thing from happening.
These thoughts have to do with effects on others-not necessarily effects on their feelings, since they may never find out about it, but some kind of damage nevertheless. In general, the thought that something is wrong depends on its impact not just on the person who does it but on other people. They wouldn't like it, and they'd object if they found out.
But suppose you try to explain all this to your friend, and he says, "I know the head librarian wouldn't like it if he found out, and probably some of the other users of the library would be unhappy to find the book gone, but who cares? I want the book; why should I care about them?"
The argument that it would be wrong is supposed to give him a reason not to do it. But if someone just doesn't care about other people, what reason does he have to refrain from doing any of the things usually thought to be wrong, if he can get away with it: what reason does he have not to kill, steal, lie, or hurt others? If he can get what he wants by doing such things, why shouldn't he? And if there's no reason why he shouldn't, in what sense is it wrong?
Of course most people do care about others to some extent. But if someone doesn't care, most of us wouldn't conclude that he's exempt from morality. A person who kills someone just to steal his wallet, without caring about the victim, is not automatically excused. The fact that he doesn't care doesn't make it all right: He should care. But why should he care?
There have been many attempts to answer this question. One type of answer tries to identify something else that the person already cares about, and then connect morality to it.
For example, some people believe that even if you can get away with awful crimes on this earth, and are not punished by the law or your fellow men, such acts are forbidden by God, who will punish you after death (and reward you if you didn't do wrong when you were tempted to). So even when it seems to be in your interest to do such a thing, it really isn't. Some people have even believed that if there is no God to back up moral requirements with the threat of punishment and the promise of reward, morality is an illusion: "If God does not exist, everything is permitted."
This is a rather crude version of the religious foundation for morality. A more appealing version might be that the motive for obeying God's commands is not fear but love. He loves you, and you should love Him, and should wish to obey His commands in order not to offend Him.
But however we interpret the religious motivation, there are three objections to this type of answer. First, plenty of people who don't believe in God still make judgments of right and wrong, and think no one should kill another for his wallet even if he can be sure to get away with it. Second, if God exists, and forbids what's wrong, that still isn't what makes it wrong. Murder is wrong in itself, and that's why God forbids it (if He does.) God couldn't make just any old thing wrong -- like putting on your left sock before your right -- simply by prohibiting it. If God would punish you for doing that it would be inadvisable to do it, but it wouldn't be wrong. Third, fear of punishment and hope of reward, and even love of God, seem not to be the right motives for morality. If you think it's wrong to kill, cheat, or steal, you should want to avoid doing such things because they are bad things to do to the victims, not just because you fear the consequences for yourself, or because you don't want to offend your Creator.
This third objection also applies to other explanations of the force of morality which appeal to the interests of the person who must act. For example, it may be said that you should treat others with consideration so that they'll do the same for you. This may be sound advice, but it is valid only so far as you think what you do will affect how others treat you. It's not a reason for doing the right thing if others won't find out about it, or against doing the wrong thing if you can get away with it (like being a hit and run driver).
There is no substitute for a direct concern for other people as the basis of morality. But morality is supposed to apply to everyone: and can we assume that everyone has such a concern for others? Obviously not: some people are very selfish, and even those who are not selfish may care only about the people they know, and not about everyone. So where will we find a reason that everyone has not to hurt other people, even those they don't know? Well, there's one general argument against hurting other people which can be given to anybody who understands English (or any other language), and which seems to show that he has some reason to care about others, even if in the end his selfish motives are so strong that he persists in treating other people badly anyway. It's an argument that I'm sure you've heard, and it goes like this: "How would you like it if someone did that to you?"
It's not easy to explain how this argument is supposed to work. Suppose you're about to steal someone else's umbrella as you leave a restaurant in a rainstorm, and a bystander says, "How would you like it if someone did that to you?" Why is it supposed to make you hesitate, or feel guilty?
Obviously the direct answer to the question is supposed to be, "I wouldn't like it at all!" But what's the next step? Suppose you were to say, "I wouldn't like it if someone did that to me. But luckily no one is doing it to me. I'm doing it to someone else, and I don't mind that at all!"
This answer misses the point of the question. When you are asked how you would like it if someone did that to you, you are supposed to think about all the feelings you would have if someone stole your umbrella. And that includes more than just "not liking it" -- as you wouldn't "like it" if you stubbed your toe on a rock. If someone stole your umbrella you'd resent it. You'd have feelings about the umbrella thief, not just about the loss of the umbrella. You'd think, "Where does he get off, taking my umbrella that I bought with my hard-earned money and that I had the foresight to bring after reading the weather report? Why didn't he bring his own umbrella?" and so forth.
When our own interests are threatened by the inconsiderate behavior of others, most of us find it easy to appreciate that those others have a reason to be more considerate. When you are hurt, you probably feel that other people should care about it: you don't think it's no concern of theirs, and that they have no reason to avoid hurting you. That is the feeling that the "How would you like it?" argument is supposed to arouse.
Because if you admit that you would resent it if someone else did to you what you are now doing to him, you are admitting that you think he would have a reason not to do it to you. And if you admit that, you have to consider what that reason is. It couldn't be just that it's you that he's hurting, of all the people in the world. There's no special reason for him not to steal your umbrella, as opposed to anyone else's. There's nothing so special about you. Whatever the reason is, it's a reason he would have against hurting anyone else in the same way. And it's a reason anyone else would have too, in a similar situation, against hurting you or anyone else.
But if it's a reason anyone would have not to hurt anyone else in this way, then it's a reason you have not to hurt someone else in this way (since anyone means everyone). Therefore it's a reason not to steal the other person's umbrella now.
This is a matter of simple consistency. Once you admit that another person would have a reason not to harm you in similar circumstances, and once you admit that the reason he would have is very general and doesn't apply only to you, or to him, then to be consistent you have to admit that the same reason applies to you now. You shouldn't steal the umbrella, and you ought to feel guilty if you do.
Someone could escape from this argument if, when he was asked, "How would you like it if someone did that to you?" he answered, "I wouldn't resent it at all. I wouldn't like it if someone stole my umbrella in a rainstorm, but I wouldn't think there was any reason for him to consider my feelings about it." But how many people could honestly give that answer? I think most people, unless they're crazy, would think that their own interests and harms matter, not only to themselves, but in a way that gives other people a reason to care about them too. We all think that when we suffer it is not just bad for us, but bad, period.
The basis of morality is a belief that good and harm to particular people (or animals) is good or bad not just from their point of view, but from a more general point of view, which every thinking person can understand. That means that each person has a reason to consider not only his own interests but the interests of others in deciding what to do. And it isn't enough if he is considerate only of some others -- his family and friends, those he specially cares about. Of course he will care more about certain people, and also about himself. But he has some reason to consider the effect of what he does on the good or harm of everyone. If he's like most of us, that is what he thinks others should do with regard to him, even if they aren't friends of his.
Even if this is right, it is only a bare outline of the source of morality. It doesn't tell us in detail how we should consider the interests of others, or how we should weigh them against the special interest we all have in ourselves and the particular people close to us. It doesn't even tell us how much we should care about people in other countries in comparison with our fellow citizens. There are many disagreements among those who accept morality in general, about what in particular is right and what is wrong. For instance: should you care about every other person as much as you care about yourself? Should you in other words love your neighbor as yourself (even if he isn't your neighbor)? Should you ask yourself, every time you go to a movie, whether the cost of the ticket could provide more happiness if you gave it to someone else, or donated the money to famine relief?,
Very few people are so unselfish. And if someone were that impartial between himself and others, he would probably also feel that he should be just as impartial among other people. That would rule out caring more about his friends and relatives than he does about strangers. He might have special feelings about certain people who are close to him, but complete impartiality would mean that he won't favor them -- if for example he has to choose between helping a friend or a stranger to avoid suffering, or between taking his children to a movie and donating the money to famine relief.
This degree of impartiality seems too much to ask of most people: someone who had it would be a kind of terrifying saint. But it's an important question in moral thought, how much impartiality we should try for. You are a particular person, but you are also able to recognize that you're just one person among many others, and no more important than they are, when looked at from outside. How much should that point of view influence you? You do matter somewhat from outside -- otherwise you wouldn't think other people had any reason to care about what they did to you. But you don't matter as much from the outside as you matter to yourself, from the inside -- since from the outside you don't matter any more than anybody else.
Not only is it unclear how impartial we should be; it's unclear what would make an answer to this question the right one. Is there a single correct way for everyone to strike the balance between what he cares about personally and what matters impartially? Or will the answer vary from person to person depending on the strength of their different motives?
This brings us to another big issue: Are right and wrong the same for everyone?
Morality is often thought to be universal. If something is wrong, it's supposed to be wrong for everybody; for instance if it's wrong to kill someone because you want to steal his wallet, then it's wrong whether you care about him or not. But if something's being wrong is supposed to be a reason against doing it, and if your reasons for doing things depend on your motives and people's motives can vary greatly, then it looks as though there won't be a single right and wrong for everybody. There won't be a single right and wrong, because if people's basic motives differ, there won't be one basic standard of behavior that everyone has a reason to follow. There are three ways of dealing with this problem, none of them very satisfactory.
First, we could say that the same things are right and wrong for everybody, but that not everyone has a reason to do what's right and avoid what's wrong: only people with the right sort of "moral" motives -- particularly a concern for others -- have any reason to do what's right, for its own sake. This makes morality universal, but at the cost of draining it of its force. It's not clear what it amounts to to say that it would be wrong for someone to commit murder, but he has no reason not to do it.
Second, we could say that everyone has a reason to do what's right and avoid what's wrong, but that these reasons don't depend on people's actual motives. Rather they are reasons to change our motives if they aren't the right ones. This connects morality with reasons for action, but leaves it unclear what these universal reasons are which do not depend on motives that everyone actually has. What does it mean to say that a murderer had a reason not to do it, even though none of his actual motives or desires gave him such a reason?
Third, we could say that morality is not universal, and that what a person is morally required to do goes only as far as what he has a certain kind of reason to do, where the reason depends on how much he actually cares about other people in general. If he has strong moral motives, they will yield strong reasons and strong moral requirements. If his moral motives are weak or nonexistent, the moral requirements on him will likewise be weak or nonexistent. This may seem psychologically realistic, but it goes against the idea that the same moral rules apply to all of us, and not only to good people.
The question whether moral requirements are universal comes up not only when we compare the motives of different individuals, but also when we compare the moral standards that are accepted in different societies and at different times. Many things that you probably think are wrong have been accepted as morally correct by large groups of people in the past: slavery, serfdom, human sacrifice, racial segregation, denial of religious and political freedom, hereditary caste systems. And probably some things you now think are right will be thought wrong by future societies. Is it reasonable to believe that there is some single truth about all this, even though we can't be sure what it is? Or is it more reasonable to believe that right and wrong are relative to a particular time and place and social background?
There is one way in which right and wrong are obviously relative to circumstances. It is usually right to return a knife you have borrowed to its owner if he asks for it back. But if he has gone crazy in the meantime, and wants the knife to murder someone with, then you shouldn't return it. This isn't the kind of relativity I am talking about, because it doesn't mean morality is relative at the basic level. It means only that the same basic moral principles will require different actions in different circumstances.
The deeper kind of relativity, which some people believe in, would mean that the most basic standards of right and wrong -- like when it is and is not all right to kill, or what sacrifices you're required to make for others -- depend entirely on what standards are generally accepted in the society in which you live.
This I find very hard to believe, mainly because it always seems possible to criticize the accepted standards of your own society and say that they are morally mistaken. But if you do that, you must be appealing to some more objective standard, an idea of what is really right and wrong, as opposed to what most people think. It is hard to say what this is, but it is an idea most of us understand, unless we are slavish followers of what the community says.
There are many philosophical problems about the content of morality -- how a moral concern or respect for others should express itself; whether we should help them get what they want or mainly refrain from harming and hindering them; how impartial we should be, and in what ways. I have left most of these questions aside because my concern here is with the foundation of morality in general -- how universal and objective it is.
I should answer one possible objection to the whole idea of morality. You've probably heard it said that the only reason anybody ever does anything is that it makes him feel good, or that not doing it will make him feel bad. If we are really motivated only by our own comfort, it is hopeless for morality to try to appeal to a concern for others. On this view, even apparently moral conduct in which one person seems to sacrifice his own interests for the sake of others is really motivated by his concern for himself: he wants to avoid the guilt he'll feel if he doesn't do the "right" thing, or to experience the warm glow of self-congratulation he'll get if he does. But those who don't have these feelings have no motive to be "moral."
Now it's true that when people do what they think they ought to do, they often feel good about it: similarly if they do what they think is wrong, they often feel bad. But that doesn't mean that these feelings are their motives for acting. In many cases the feelings result from motives which also produce the action. You wouldn't feel good about doing the right thin unless you thought there was some other reason to do it, besides the fact that it would make you feel good. And you wouldn't feel guilty about doing the wrong thing unless you thought that there was some other reason not to do it, besides the fact that it made you feel guilty: something which made it right to feel guilty. At least that's how things should be. It's true that some people feel irrational guilt about things they don't have any independent reason to think are wrong -but that's not the way morality is supposed to work.
In a sense, people do what they want to do. But their reasons and motives for wanting to do things vary enormously. I may "want" to give someone my wallet only because he has a gun pointed at my head and threatens to kill me if I don't. And I may want to jump into an icy river to save a drowning stranger not because it will make me feel good, but because I recognize that his life is important, just as mine is, and I recognize that I have a reason to save his life just as he would have a reason to save mine if our positions were reversed.
Moral argument tries to appeal to a capacity for impartial motivation which is supposed to be present in all of us. Unfortunately it may be deeply buried, and in some cases it may not be present at all. In any case it has to compete with powerful selfish motives, and other personal motives that may not be so selfish, in its bid for control of our behavior. The difficulty of justifying morality is not that there is only one human motive, but that there are so many.
## 8. Justice(公正)
Is it unfair that some people are born rich and some are born poor? If it's unfair, should anything be done about it?
The world is full of inequalities -- within countries, and from one country to another. Some children are born into comfortable, prosperous homes, and grow up well fed and well educated. Others are born poor, don't get enough to eat, and never have access to much education or medical care. Clearly, this is a matter of luck: we are not responsible for the social or economic class or country into which we are born. The question is, how bad are inequalities which are not the fault of the people who suffer from them? Should governments use their power to try to reduce inequalities of this kind, for which the victims are not responsible?
Some inequalities are deliberately imposed. Racial discrimination, for example, deliberately excludes people of one race from jobs, housing, and education which are available to people of another race. Or women may be kept out of jobs or denied privileges available only to men. This is not merely a matter of bad luck. Racial and sexual discrimination are clearly unfair: they are forms of inequality caused by factors that should not be allowed to influence people's basic welfare. Fairness requires that opportunities should be open to those who are qualified, and it is clearly a good thing when governments try to enforce such equality of opportunity.
But it is harder to know what to say about inequalities that arise in the ordinary course of events, without deliberate racial or sexual discrimination. Because even if there is equality of opportunity, and any qualified person can go to a university or get a job or buy a house or run for office -- regardless of race, religion, sex, or national origin -- there will still be plenty of inequalities left. People from wealthier backgrounds will usually have better training and more resources, and they will tend to be better able to compete for good jobs. Even in a system of equality of opportunity, some people will have a head start and will end up with greater benefits than others whose native talents are the same.
Not only that, but differences in native talent will produce big differences in the resulting benefits, in a competitive system. Those who have abilities that are in high demand will be able to earn much more than those without any special skills or talents. These differences too are partly a matter of luck. Though people have to develop and use their abilities, no amount of effort would enable most people to act like Meryl Streep, paint like Picasso, or manufacture automobiles like Henry Ford. Something similar is true of lesser accomplishments. The luck of both natural talent and family and class background are important factors in determining one's income and position in a competitive society. Equal opportunity produces unequal results.
These inequalities, unlike the results of racial and sexual discrimination, are produced by choices and actions that don't seem wrong in themselves. People try to provide for their children and give them a good education, and some have more money to use for this purpose than others. People pay for the products, services, and performances they want, and some performers or manufacturers get richer than others because what they have to offer is wanted by more people. Businesses and organizations of all kinds try to hire employees who will do the job well, and pay higher salaries for those with unusual skills. If one restaurant is full of people and another next door is empty because the first has a talented chef and the second doesn't, the customers who choose the first restaurant and avoid the second haven't done anything wrong, even though their choices have an unhappy effect on the owner and employees of the second restaurant, and on their families.
Such effects are most disturbing when they leave some people in a very bad way. In some countries large segments of the population live in poverty from generation to generation. But even in a wealthy country like the United States, lots of people start life with two strikes against them, from economic and educational disadvantages. Some can overcome those disadvantages, but it's much harder than making good from a higher starting point.
Most disturbing of all are the enormous inequalities in wealth, health, education, and development between rich and poor countries. Most people in the world have no chance of ever being as well off economically as the poorest people in Europe, Japan, or the United States. These large differences in good and bad luck certainly seem unfair; but what, if anything, should be done about them?
We have to think about both the inequality itself, and the remedy that would be needed to reduce or get rid of it. The main question about the inequalities themselves is: What kinds of causes of inequality are wrong? The main question about remedies is: What methods of interfering with the inequality are right?
In the case of deliberate racial or sexual discrimination, the answers are easy. The cause of the inequality is wrong because the discriminator is doing something wrong. And the remedy is simply to prevent him from doing it. If a landlord refuses to rent to blacks, he should be prosecuted.
But the questions are more difficult in other cases. The problem is that inequalities which seem wrong can arise from causes which don't involve people doing anything wrong. It seems unfair that people born much poorer than others should suffer disadvantages through no fault of their own. But such inequalities exist because some people have been more successful than others at earning money and have tried to help their children as much as possible; and because people tend to marry members of their own economic and social class, wealth and position accumulate and are passed on from generation to generation. The actions which combine to form these causes -- employment decisions, purchases, marriages, bequests, and efforts to provide for and educate children, don't seem wrong in themselves. What's wrong, if anything, is the result: that some people start life with undeserved disadvantages.
If we object to this kind of bad luck as unfair, it must be because we object to people's suffering disadvantages through no fault of their own, merely as a result of the ordinary operation of the socioeconomic system into which they are born. Some of us may also believe that all bad luck that is not a person's fault, such as that of being born with a physical handicap, should be compensated if possible. But let us leave those cases aside in this discussion. I want to concentrate on the undeserved inequalities that arise through the working of society and the economy, particularly a competitive economy. The two main sources of these undeserved inequalities, as I have said, are differences in the socioeconomic classes into which people are born, and differences in their natural abilities or talents for tasks which are in demand. You may not think there is anything wrong with inequality caused in these ways. But if you think there is something wrong with it, and if you think a society should try to reduce it, then you must propose a remedy which either interferes with the causes themselves, or interferes with the unequal effects directly.
Now the causes themselves, as we have seen, include relatively innocent choices by many people about how to spend their time and money and how to lead their lives. To interfere with people's choices about what products to buy, how to help their children, or how much to pay their employees, is very different from interfering with them when they want to rob banks or discriminate against blacks or women. A more indirect interference in the economic life of individuals is taxation, particularly taxation of income and inheritance, and some taxes on consumption, which can be designed to take more from the rich than from the poor. This is one way a government can try to reduce the development of great inequalities in wealth over generations -- by not letting people keep all of their money.
More important, however, would be to use the public resources obtained through taxes to provide some of the missing advantages of education and support to the children of those families that can't afford to do it themselves. Public social welfare programs try to do this, by using tax revenues to provide basic benefits of health care, food, housing, and education. This attacks the inequalities directly.
When it comes to the inequalities that result from differences in ability, there isn't much one can do to interfere with the causes short of abolishing the competitive economy. So long as there is competition to hire people for jobs, competition between people to get jobs, and competition between firms for customers, some people are going to make more money than others. The only alternative would be a centrally directed economy in which everyone was paid roughly the same and people were assigned to their jobs by some kind of centralized authority. Though it has been tried, this system has heavy costs in both freedom and efficiency -- far too heavy, in my opinion, to be acceptable, though others would disagree.
If one wants to reduce the inequalities resulting from different abilities without getting rid of the competitive economy, it will be necessary to attack the inequalities themselves. This can be done through higher taxation of higher incomes, and some free provision of public services to everyone, or to people with lower incomes. It could include cash payments to those whose earning power is lowest, in the form of a so-called "negative income tax." None of these programs would get rid of undeserved inequalities completely, and any system of taxation will have other effects on the economy, including effects on employment and the poor, which may be hard to predict; so the issue of a remedy is always complicated.
But to concentrate on the philosophical point: the measures needed to reduce undeserved inequalities arising from differences in class background and natural talent will involve interference with people's economic activities, mainly through taxation: the government takes money from some people and uses it to help others. This is not the only use of taxation, or even the main use: many taxes are spent on things which benefit the well-off more than the poor. But redistributive taxation, as it is called, is the type relevant to our problem. It does involve the use of government power to interfere with what people do, not because what they do is wrong in itself, like theft or discrimination, but because it contributes to an effect which seems unfair.
There are those who don't think redistributive taxation is right, because the government shouldn't interfere with people unless they are doing something wrong, and the economic transactions that produce all these inequalities aren't wrong, but perfectly innocent. They may also hold that there's nothing wrong with the resulting inequalities themselves: that even though they're undeserved and not the fault of the victims, society is not obliged to fix them. That's just life, they will say: some people are more fortunate than others. The only time we have to do anything about it is when the misfortune is the result of someone's doing a wrong to someone else.
This is a controversial political issue, and there are many different opinions about it. Some people object more to the inequalities that come from the socioeconomic class a person is born into, than to the inequalities resulting from differences in talent or ability. They don't like the effects of one person being born rich and another in a slum, but feel that a person deserves what he can earn with his own efforts -so that there's nothing unfair about one person earning a lot and another very little because the first has a marketable talent or capacity for learning sophisticated skills while the second can only do unskilled labor.
I myself think that inequalities resulting from either of these causes are unfair, and that it is clearly unjust when a socioeconomic system results in some people living under significant material and social disadvantages through no fault of their own, if this could be prevented through a system of redistributive taxation and social welfare programs. But to make up your own mind about the issue, you have to consider both what causes of inequality you find unfair, and what remedies you find legitimate.
-85- We've been talking mainly about the problem of social justice within one society. The problem is much more difficult on a world scale, both because the inequalities are so great and because it's not clear what remedies are possible in the absence of a world government that could levy world taxes and see that they are used effectively. There is no prospect of a world government, which is just as well, since it would probably be a horrible government in many ways. However there is still a problem of global justice, though it's hard to know what to do about it in the system of separate sovereign states we have now.
## 9. Death(死亡)
Everybody dies, but not everybody agrees about what death is. Some believe they will survive after the death of their bodies, going to Heaven or Hell or somewhere else, becoming a ghost, or returning to Earth in a different body, perhaps not even as a human being. Others believe they will cease to exist -- that the self is snuffed out when the body dies. And among those who believe they will cease to exist, some think this is a terrible fact, and others don't.
It is sometimes said that no one can conceive of his own nonexistence, and that therefore we can't really believe that our existence will come to an end with our deaths. But this doesn't seem true. Of course you can't conceive of your own nonexistence from the inside. You can't conceive of what it would be like to be totally annihilated, because there's nothing it would be like, from the inside. But in that sense, you can't conceive of what it would be like to be completely unconscious, even temporarily. The fact that you can't conceive of that from the inside doesn't mean you can't conceive of it at all: you just have to think of yourself from the outside, having been knocked out, or in a deep sleep. And even though you have to be conscious to think that, it doesn't mean that you're thinking of yourself as conscious.
It's the same with death. To imagine your own annihilation you have to think of it from the outside -- think about the body of the person you are, with all the life and experience gone from it. To imagine something it is not necessary to imagine how it would feel for you to experience it. When you imagine your own funeral, you are not imagining the impossible situation of being present at your own funeral: you're imagining how it would look through someone else's eyes. Of course you are alive while you think of your own death, but that is no more of a problem than being conscious while imagining yourself unconscious. The question of survival after death is related to the mind-body problem, which we discussed earlier. If dualism is true, and each person consists of a soul and a body connected together, we can understand how life after death might be possible. The soul would have to be able to exist on its own and have a mental life without the help of the body: then it might leave the body when the body dies, instead of being destroyed. It wouldn't be able to have the kind of mental life of action and sensory perception that depends on being attached to the body (unless it got attached to a new body), but it might have a different sort of inner life, perhaps depending on different causes and influences -- direct communication with other souls, for instance.
I say life after death might be possible if dualism were true. It also might not be possible, because the survival of the soul, and its continued consciousness, might depend entirely on the support and stimulation it gets from the body in which it is housed -- and it might not be able to switch bodies.
But if dualism is not true, and mental processes go on in the brain and are entirely dependent on the biological functioning of the brain and the rest of the organism, then life after death of the body is not possible. Or to put it more exactly, mental life after death would require the restoration of biological, physical life: it would require that the body come to life again. This might become technically possible some day: It may become possible to freeze people's bodies when they die, and then later on by advanced medical procedures to fix whatever was the matter with them, and bring them back to life.
Even if this became possible, there would still be a question whether the person who was brought to life several centuries later would be you or somebody else. Maybe if you were frozen after death and your body was later revived, you wouldn't wake up, but only someone very like you, with memories of your past life. But even if revival after death of the same you in the same body should become possible, that's not what's ordinarily meant by life after death. Life after death usually means life without your old body.
It's hard to know how we could decide whether we have separable souls. All the evidence is that before death, conscious life depends entirely on what happens in the nervous system. If we go only by ordinary observation, rather than religious doctrines or spiritualist claims to communicate with the dead, there is no reason to believe in an afterlife. Is that, however, a reason to believe that there is not an afterlife? I think so, but others may prefer to remain neutral.
Still others may believe in an afterlife on the basis of faith, in the absence of evidence. I myself don't fully understand how this kind of faith-inspired belief is possible, but evidently some people can manage it, and even find it natural.
Let me turn to the other part of the problem: how we ought to feel about death. Is it a good thing, a bad thing, or neutral? I am talking about how it's reasonable to feel about your own death -- not so much about other people's. Should you look forward to the prospect of death with terror, sorrow, indifference, or relief?
Obviously it depends on what death is. If there is life after death, the prospect will be grim or happy depending on where your soul will end up. But the difficult and most philosophically interesting question is how we should feel about death if it's the end. Is it a terrible thing to go out of existence?
People differ about this. Some say that nonexistence, being nothing at all, can't possibly be either good or bad for the dead person. Others say that to be annihilated, to have the possible future course of your life cut off completely, is the ultimate evil, even if we all have to face it. Still others say death is a blessing -- not of course if it comes too early, but eventually -- because it would be unbearably boring to live forever.
If death without anything after it is either a good or a bad thing for the person who dies, it must be a negative good or evil. Since in itself it is nothing, it can't be either pleasant or unpleasant. If it's good, that must be because it is the absence of something bad (like boredom or pain); if it's bad, that must be because it is the absence of something good (like interesting or pleasant experiences).
Now it might seem that death can't have any value, positive or negative, because someone who doesn't exist can't be either benefited or harmed: after all, even a negative good or evil has to happen to somebody. But on reflection, this is not really a problem. We can say that the person who used to exist has been benefited or harmed by death. For instance, suppose he is trapped in a burning building, and a beam falls on his head, killing him instantly. As a result, he doesn't suffer the agony of being burned to death. It seems that in that case we can say he was lucky to be killed painlessly, because it avoided something worse. Death at that time was a negative good, because it saved him from the positive evil he would otherwise have suffered for the next five minutes. And the fact that he's not around to enjoy that negative good doesn't mean it's not a good for him at all. "Him" means the person who was alive, and who would have suffered if he hadn't died.
The same kind of thing could be said about death as a negative evil. When you die, all the good things in your life come to a stop: no more meals, movies, travel, conversation, love, work, books, music, or anything else. If those things would be good, their absence is bad. Of course you won't miss them: death is not like being locked up in solitary confinement. But the ending of everything good in life, because of the stopping of life itself, seems clearly to be a negative evil for the person who was alive and is now dead. When someone we know dies, we feel sorry not only for ourselves but for him, because he can't see the sun shine today, or smell the bread in the toaster.
When you think of your own death, the fact that all the good things in life will come to an end is certainly a reason for regret. But that doesn't seem to be the whole story. Most people want there to be more of what they enjoy in life, but for some people, the prospect of nonexistence is itself frightening, in a way that isn't adequately explained by what has been said so far. The thought that the world will go on without you, that you will become nothing, is very hard to take in.
It's not clear why. We all accept the fact that there was a time before we were born, when we didn't yet exist -- so why should we be so disturbed at the prospect of nonexistence after our death? But somehow it doesn't feel the same. The prospect of nonexistence is frightening, at least to many people, in a way that past nonexistence cannot be.
The fear of death is very puzzling, in a way that regret about the end of life is not. It's easy to understand that we might want to have more life, more of the things it contains, so that we see death as a negative evil. But how can the prospect of your own nonexistence be alarming in a positive way? If we really cease to exist at death, there's nothing to look forward to, so how can there be anything to be afraid of? If one thinks about it logically, it seems as though death should be something to be afraid of only if we will survive it, and perhaps undergo some terrifying transformation. But that doesn't prevent many people from thinking that annihilation is one of the worst things that could happen to them.
## 10. The Meaning of Life(人生的意义)
Perhaps you have had the thought that nothing really matters, because in two hundred years we'll all be dead. This is a peculiar thought, because it's not clear why the fact that we'll be dead in two hundred years should imply that nothing we do now really matters.
The idea seems to be that we are in some kind of rat race, struggling to achieve our goals and make something of our lives, but that this makes sense only if those achievements will be permanent. But they won't be. Even if you produce a great work of literature which continues to be read thousands of years from now, eventually the solar system will cool or the universe will wind down or collapse, and all trace of your efforts will vanish. In any case, we can't hope for even a fraction of this sort of immortality. If there's any point at all to what we do, we have to find it within our own lives.
Why is there any difficulty in that? You can explain the point of most of the things you do. You work to earn money to support yourself and perhaps your family. You eat because you're hungry, sleep because you're tired, go for a walk or call up a friend because you feel like it, read the newspaper to find out what's going on in the world. If you didn't do any of those things you'd be miserable; so what's the big problem?
The problem is that although there are justifications and explanations for most of the things, big and small, that we do within life, none of these explanations explain the point of your life as a whole -- the whole of which all these activities, successes and failures, strivings and disappointments are parts. If you think about the whole thing, there seems to be no point to it at all. Looking at it from the outside, it wouldn't matter if you had never existed. And after you have gone out of existence, it won't matter that you did exist.
Of course your existence matters to other peopleyour parents and others who care about you -- but taken as a whole, their lives have no point either, so it ultimately doesn't matter that you matter to them. You matter to them and they matter to you, and that may give your life a feeling of significance, but you're just taking in each other's washing, so to speak. Given that any person exists, he has needs and concerns which make particular things and people within his life matter to him. But the whole thing doesn't matter.
But does it matter that it doesn't matter? "So what?" you might say. "It's enough that it matters whether I get to the station before my train leaves, or whether I've remembered to feed the cat. I don't need more than that to keep going." This is a perfectly good reply. But it only works if you really can avoid setting your sights higher, and asking what the point of the whole thing is. For once you do that, you open yourself to the possibility that your life is meaningless.
The thought that you'll be dead in two hundred years is just a way of seeing your life embedded in a larger context, so that the point of smaller things inside it seems not to be enough -- seems to leave a larger question unanswered. But what if your life as a whole did have a point in relation to something larger? Would that mean that it wasn't meaningless after all? There are various ways your life could have a larger meaning. You might be part of a political or social movement which changed the world for the better, to the benefit of future generations. Or you might just help provide a good life for your own children and their descendants. Or your life might be thought to have meaning in a religious context, so that your time on Earth was just a preparation for an eternity in direct contact with God.
About the types of meaning that depend on relations to other people, even people in the distant future, I've already indicated what the problem is. If one's life has a point as a part of something larger, it is still possible to ask about that larger thing, what is the point of it? Either there's an answer in terms of something still larger or there isn't. If there is, we simply repeat the question. If there isn't, then our search for a point has come to an end with something which has no point. But if that pointlessness is acceptable for the larger thing of which our life is a part, why shouldn't it be acceptable already for our life taken as a whole? Why isn't it all right for your life to be pointless? And if it isn't acceptable there, why should it be acceptable when we get to the larger context? Why don't we have to go on to ask, "But what is the point of all that?" (human history, the succession of the generations, or whatever).
The appeal to a religious meaning to life is a bit different. If you believe that the meaning of your life comes from fulfilling the purpose of God, who loves you, and seeing Him in eternity, then it doesn't seem appropriate to ask, "And what is the point of that?" It's supposed to be something which is its own point, and can't have a purpose outside itself. But for this very reason it has its own problems.
The idea of God seems to be the idea of something that can explain everything else, without having to be explained itself. But it's very hard to understand how there could be such a thing. If we ask the question, "Why is the world like this?" and are offered a religious answer, how can we be prevented from asking again, "And why is that true?" What kind of answer would bring all of our "Why?" questions to a stop, once and for all? And if they can stop there, why couldn't they have stopped earlier?
The same problem seems to arise if God and His purposes are offered as the ultimate explanation of the value and meaning of our lives. The idea that our lives fulfil God's purpose is supposed to give them their point, in a way that doesn't require or admit of any further point. One isn't supposed to ask "What is the point of God?" any more than one is supposed to ask, "What is the explanation of God?" But my problem here, as with the role of God as ultimate explanation, is that I'm not sure I understand the idea. Can there really be something which gives point to everything else by encompassing it, but which couldn't have, or need, any point itself? Something whose point can't be questioned from outside because there is no outside?
If God is supposed to give our lives a meaning that we can't understand, it's not much of a consolation. God as ultimate justification, like God as ultimate explanation, may be an incomprehensible answer to a question that we can't get rid of. On the other hand, maybe that's the whole point, and I am just failing to understand religious ideas. Perhaps the belief in God is the belief that the universe is intelligible, but not to US.
Leaving that issue aside, let me return to the smaller-scale dimensions of human life. Even if life as a whole is meaningless, perhaps that's nothing to worry about. Perhaps we can recognize it and just go on as before. The trick is to keep your eyes on what's in front of you, and allow justifications to come to an end inside your life, and inside the lives of others to whom you are connected. If you ever ask yourself the question, "But what's the point of being alive at all?" -- leading the particular life of a student or bartender or whatever you happen to be -- you'll answer "There's no point. It wouldn't matter if I didn't exist at all, or if I didn't care about anything. But I do. That's all there is to it."
Some people find this attitude perfectly satisfying. Others find it depressing, though unavoidable. Part of the problem is that some of us have an incurable tendency to take ourselves seriously. We want to matter to ourselves "from the outside." If our lives as a whole seem pointless, then a part of us is dissatisfied -- the part that is always looking over our shoulders at what we are doing. Many human efforts, particularly those in the service of serious ambitions rather than just comfort and survival, get some of their energy from a sense of importance -- a sense that what you are doing is not just important to you, but important in some larger sense: important, period. If we have to give this up, it may threaten to take the wind out of our sails. If life is not real, life is not earnest, and the grave is its goal, perhaps it's ridiculous to take ourselves so seriously. On the other hand, if we can't help taking ourselves so seriously, perhaps we just have to put up with being ridiculous. Life may be not only meaningless but absurd.
|
2020-01-26 23:47:54
|
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 1, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.352239727973938, "perplexity": 592.1570867930424}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2020-05/segments/1579251694071.63/warc/CC-MAIN-20200126230255-20200127020255-00058.warc.gz"}
|
https://www.nextgurukul.in/wiki/concept/cbse/class-8/science/some-natural-phenomena/lightning/3957897
|
Lightning
Lightning is a natural phenomenon that has fascinated people for ages. Several people thought and researched about the cause of lightning and its process. Benjamin Franklin discovered that there is an electric discharge between clouds that produces a spark, and it is the electric spark between the clouds and the earth that appears as lightning. His famous kite experiment proved this fact. The occurrence of lightning is as follows. The formation of clouds involves friction between water droplets in the atmosphere. The friction charges the particles in the atmosphere. By a process, not yet completely understood, among the positive and negative charges, the negative charges accumulate at the bottom of the cloud and the positive charges at the top. As the accumulation of the charges increases, the cloud will induce positive charges on the ground nearby. As the amount of charge increases, the negative charges on the cloud tend to make a path towards the ground, and it results in a narrow streak of electrical discharge, which we call lightning. An electroscope is a device that detects whether an object is charged or not, and the type of charge on a body. It consists of a glass jar fitted with a cork lid and a metallic wire passing through it. There are two metallic strips at the bottom of the wire. The upper end of the wire is connected to a metal disc. When a positively charged body is brought in contact with the metal disc, the charge is transferred to the metal strips through the wire, and the strips diverge because both the strips acquire like charges and they repel. If the metal disc of the electroscope is touched with the hand, it loses its charge to the ground by transfer of charge through the human body. This is called earthing.
#### Summary
Lightning is a natural phenomenon that has fascinated people for ages. Several people thought and researched about the cause of lightning and its process. Benjamin Franklin discovered that there is an electric discharge between clouds that produces a spark, and it is the electric spark between the clouds and the earth that appears as lightning. His famous kite experiment proved this fact. The occurrence of lightning is as follows. The formation of clouds involves friction between water droplets in the atmosphere. The friction charges the particles in the atmosphere. By a process, not yet completely understood, among the positive and negative charges, the negative charges accumulate at the bottom of the cloud and the positive charges at the top. As the accumulation of the charges increases, the cloud will induce positive charges on the ground nearby. As the amount of charge increases, the negative charges on the cloud tend to make a path towards the ground, and it results in a narrow streak of electrical discharge, which we call lightning. An electroscope is a device that detects whether an object is charged or not, and the type of charge on a body. It consists of a glass jar fitted with a cork lid and a metallic wire passing through it. There are two metallic strips at the bottom of the wire. The upper end of the wire is connected to a metal disc. When a positively charged body is brought in contact with the metal disc, the charge is transferred to the metal strips through the wire, and the strips diverge because both the strips acquire like charges and they repel. If the metal disc of the electroscope is touched with the hand, it loses its charge to the ground by transfer of charge through the human body. This is called earthing.
#### Activities
Activity 1 Alt.coxnewsweb.com has created an animation to show how lightning strikes, in three stages. In the first stage, negative charges from the clouds zigzag downward in a forked pattern. In the second stage, positive charges from the ground rise to a height and meet the descending negative charges, creating a powerful flow of electricity. In the third stage, creation of light and thunder are shown. Go to Activity Activity 2 Learnalberta.ca has created an animation to show how static charges are built up in the clouds ahe how their discharge causes lightning. there are six parts of animation which clearly explain the water cycle and the lightning strike. Go to Activity
Next
|
2019-11-14 02:16:33
|
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 3, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.4008156955242157, "perplexity": 382.57538411819644}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2019-47/segments/1573496667767.6/warc/CC-MAIN-20191114002636-20191114030636-00152.warc.gz"}
|
https://www.infermieriassistenzaligure.it/which-butanone-represents/22685.html
|
CAS:78-93-3
CAS:108-94-1
CAS:67-64-1
CAS:64-19-7
CAS:141-78-6
CAS:108-88-3
CAS:71-43-2
CAS:64-17-5
CAS:67-56-1
### Identifiion of an Unknown Liquid Lab Report SchoolWorkHelper
2,4-dinitrophenylhydrazine classifiion tests positive for ketones and aldehydes, and negative for alcohols. Therefore, the controls will be 2-butanone (positive) and tert -amyl alcohol (negative). Potassium Permanganate, also referred to as Baeyer’s Test is said to show positive results for alkenes or alkynes and negative results for alkanes.
### Butanone - Simple English Wikipedia, the free encyclopedia
Butanone, also called methyl ethyl ketone ( MEK ), is an organic compound with the chemical formula C H 3 C ( O )CH 2 CH 3. It is a simple ketone with four carbon atoms. It smells sharp …
### What is Butanone?
Irritated eyes, coughing, dizziness, vomiting, and skin irritation are just some of the unpleasant signs and symptoms you may experience if you mishandle a lot of different chemicals. One of …
### Solved 22. Which of the following represents the correct Chegg…
Which of the following represents the correct ranking A) n-butane<1-butanol < diethyl ether < 2-butanone B) n-butane2-butanone < diethyl ether < 1-butanol C) n-butane < diethyl ether < 1-butanol <2-butanone D) n-butane < diethyl ether < 2-butanone < 1-butanol in terms of increasing boiling point? 23.
### What Are the Uses of Butanone? - Shanghai Douwin Chemical Co.,Ltd.
4. The butanone is one of the types of compounds in organic chemistry, and the raw material for preparing the acaricide tebufenpyrad. 5. The butanone is a raw material for organic synthesis and can be used as a solvent. The butanone is used as a dewaxing agent for lubriing oil in the oil refining industry.
### Butanol - Wikipedia
Butanol. Butanol (also called butyl alcohol) is a four-carbon alcohol with a formula of C 4 H 9 O H, which occurs in five isomeric structures (four structural isomers), from a straight-chain primary alcohol to a branched-chain tertiary alcohol; [1] all are a butyl or isobutyl group linked to a hydroxyl group (sometimes represented as BuOH, n
Isomers· Toxicity· Uses· Recreational Use· Biobutanol· Production
• How to Write the Structure for 2-Butanone - /cite>
How to Write the Structure for 2-Butanone 16,057 views Dec 14, 2019 To write the structure for organic molecule 2-Butanone (also called methylpropane and isobutane) we’ll start by writing a
### What Are the Uses of Butanone? - Shanghai Douwin Chemical Co.,Ltd.
4. The butanone is one of the types of compounds in organic chemistry, and the raw material for preparing the acaricide tebufenpyrad. 5. The butanone is a raw material for organic synthesis and can be used as a solvent. The butanone is used as a dewaxing agent for lubriing oil in the oil refining industry.
Guan di miao, Yulong town
XingYang, Henan, China
7*24 Hours 365 Days
|
2022-12-08 07:15:30
|
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 1, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.5995894074440002, "perplexity": 11025.728219869658}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2022-49/segments/1669446711278.74/warc/CC-MAIN-20221208050236-20221208080236-00315.warc.gz"}
|
https://en.m.wikiversity.org/wiki/Electromagnetism
|
# Fundamental Physics/Electromagnetism
(Redirected from Electromagnetism)
It is necessary to revise the following mathematical concepts as they are used throughout this course:
## Electrostatics
Electric charge
Electric charge Charge process Charge quantity Electric field Magnetic field Negative charge O + e -> - - Q -->E<-- B ↓ Positive charge O - e -> + + Q <--E--> B ↑
Electric charge interaction
Coulomb law . Like charges repel, different charges attract . Negative charge attracts positive charge . The force of attraction negative charge attracts positive charge is called Electrostatic force can be calculated by Coulomb law as follow
${\displaystyle F_{Q}=K{\frac {Q_{+}Q_{-}}{r^{2}}}}$
Ampere law . The force that sets electric charge in motion from stationary state is called Electromotive force can be calculated by Ampere law as follow
${\displaystyle F_{Q}=QE}$
Lorentz law . When electric charge interacts with magnetic field of a magnet, the magnetic force of the magnet will make electric charge to move perpendicular to the initial moving direction . The positive charge will move up, the negative charge will move down . The force of magnetic field that sets electric charge to move perpendicular to the intial moving direction is called Electromagnetomotive force can be calculated as follow
${\displaystyle F_{B}=\pm QvB}$
The sum of 2 forces Electromotive force and Electromagnetomotive force creates Electromagnetic force
${\displaystyle F_{EB}=F_{E}+F_{B}=QE\pm QvB=Q(E\pm vB)}$
Electrostatic field
The force of attraction of 2 different charges
${\displaystyle F=K{\frac {Q_{-}Q_{+}}{r^{2}}}=K{\frac {Q^{2}}{r^{2}}}}$ . ${\displaystyle Q_{-}=Q+}$
The electric field of the charge in motion
${\displaystyle E={\frac {F}{Q}}={\frac {Q}{r^{2}}}}$
Electric Potential
The potential of the electric field
${\displaystyle V=\int Edr={\frac {Q}{r}}}$
## Magnetostatics
${\displaystyle F_{Q}={\frac {Q_{+}Q_{-}}{r^{2}}}}$
${\displaystyle \Phi =\oint _{S}\mathbf {E} \cdot d\mathbf {S} ={\frac {q_{in}}{\varepsilon _{0}}}}$
${\displaystyle \mathbf {B} (\mathbf {r} )={\frac {\mu _{0}}{4\pi }}\int _{C}{\frac {Id\mathbf {l} \times \mathbf {r'} }{|\mathbf {r'} |^{3}}}}$
${\displaystyle B=Li}$
${\displaystyle \phi =-B}$
${\displaystyle V={\frac {d}{dt}}B=L{\frac {d}{dt}}i}$
${\displaystyle F_{B}=\pm QvB}$
${\displaystyle F_{EB}=Q(E\pm vB)}$
## Electromagnetism
### Electromagnet
${\displaystyle B=Li}$
Configuration Magnetic Field Magnetic field intensity For a straight line conductor circular magnetic field surrounds a point charge along the straight line ${\displaystyle B=LI={\frac {\mu }{2\pi r}}I}$ For a circular loop conductor circular magnetic field surrounds a point charge along the circular loop ${\displaystyle B=LI={\frac {\mu }{2r}}I}$ For a coil of N circular loops conductor lines of magnetic field runs from North pole (Positive polarity) to South pole (Negative polarity) ${\displaystyle B=LI={\frac {N\mu }{l}}I}$
### Electromagnetic induction
Electromagnetic induction takes place in a circular loop and a coil of N circular loops . According to Faraday, change in magnetic field will produce electric potential
${\displaystyle V={\frac {dB}{dt}}}$
${\displaystyle \epsilon =-{\frac {d\phi }{dt}}}$
For a single circular loop ${\displaystyle B=LI={\frac {\mu }{2r}}I}$ ${\displaystyle V={\frac {dB}{dt}}=L{\frac {dI}{dt}}}$ Coil of N circular loops ${\displaystyle B=LI={\frac {N\mu }{l}}I}$ ${\displaystyle V={\frac {dB}{dt}}=L{\frac {dI}{dt}}}$ ${\displaystyle \phi =-NB=-NLI}$ ${\displaystyle \epsilon =-{\frac {d\phi }{dt}}=-N{\frac {dB}{dt}}=-NL{\frac {dI}{dt}}}$
### Electromagnetization
The way a coil of N circular loops turn metal inside the loops into a electromagnet
${\displaystyle B=Li={\frac {N\mu }{l}}i}$
${\displaystyle H={\frac {B}{\mu }}=i{\frac {N}{l}}}$
Maxwell's equation
${\displaystyle \nabla \cdot D=\rho }$
${\displaystyle \nabla \times E=-\nabla B}$
${\displaystyle \nabla \cdot B=0}$
${\displaystyle \nabla \times H=J+\nabla B}$
### Electromagnetism of a straight line conductor
${\displaystyle V=IR}$ ${\displaystyle I={\frac {V}{R}}}$ ${\displaystyle R={\frac {V}{I}}}$ ${\displaystyle G={\frac {I}{V}}}$ ${\displaystyle B=Li={\frac {\mu }{2\pi r}}i}$ ${\displaystyle R(T)=R_{o}+nT}$ ${\displaystyle R(T)=R_{o}e^{nT}}$ ${\displaystyle E_{R}=i^{2}R(T)=mC\Delta T}$ ${\displaystyle E_{V}=iv}$ ${\displaystyle E=E_{V}-E_{R}}$ ${\displaystyle m={\frac {i^{2}R(T)}{C\Delta T}}}$ ${\displaystyle C={\frac {i^{2}R(T)}{m\Delta T}}}$
### Electromagnetism of a circular loop conductor
${\displaystyle B=Li={\frac {\mu }{2r}}i}$ ${\displaystyle V={\frac {dB}{dt}}=L{\frac {di}{dt}}}$ ${\displaystyle F_{r}=F_{B}}$ ${\displaystyle m{\frac {v^{2}}{r}}=QvB}$ ${\displaystyle v={\frac {Q}{m}}Br}$ ${\displaystyle r={\frac {mv}{QB}}}$
### Electromagnetism of a coil of N circular loops
Electromagnet ${\displaystyle B=LI=\mu i{\frac {N}{l}}}$ ${\displaystyle L=\mu {\frac {N}{l}}}$ ${\displaystyle -\phi =-NB=-NLi=-{\frac {N^{2}\mu i}{l}}}$ ${\displaystyle H={\frac {B}{\mu }}=N\mu i}$ Electromagnetic induction ${\displaystyle V={\frac {dB}{dt}}=L{\frac {di}{dt}}}$ ${\displaystyle \epsilon =-{\frac {d\phi }{dt}}=-NL{\frac {di}{dt}}}$ Electromagnetic oscillation ${\displaystyle \nabla \cdot E=0}$ ${\displaystyle \nabla \times E=-{\frac {1}{T}}E}$ ${\displaystyle \nabla \cdot B=0}$ ${\displaystyle \nabla \times B=-{\frac {1}{T}}B}$ ${\displaystyle T=\mu \epsilon }$ Electromagnetic wave Electromagnetic wave equation ${\displaystyle \nabla ^{2}E=-\omega E}$ ${\displaystyle \nabla ^{2}B=-\omega B}$ Electromagnetic wave function ${\displaystyle E=A\sin \omega t}$ ${\displaystyle B=A\sin \omega t}$ ${\displaystyle \omega =\lambda f={\sqrt {\frac {1}{T}}}=C}$ ${\displaystyle T=\mu \epsilon }$ ≈≈≈ Electromagnetic wave radiation ${\displaystyle v=\omega =\lambda f={\sqrt {\frac {1}{\mu \epsilon }}}=C}$ ${\displaystyle E=pv=pC=p\lambda f=hf=h{\frac {\omega }{2\pi }}=\hbar \omega }$ ${\displaystyle h=p\lambda =h{\frac {k}{2\pi }}=\hbar k}$ ${\displaystyle p={\frac {h}{\lambda }}}$ ${\displaystyle \lambda ={\frac {h}{p}}={\frac {C}{f}}}$
## Resources
Type classification: this resource is a course.
Subject classification: this is a physics resource.
Subject classification: this is an engineering resource.
|
2019-08-23 03:15:38
|
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 74, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.5941548347473145, "perplexity": 1124.6987624240226}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2019-35/segments/1566027317817.76/warc/CC-MAIN-20190823020039-20190823042039-00411.warc.gz"}
|
http://sasplein.nl/product-b-pagina/
|
# Product B pagina
Product B pagina
Hint:
“Only” means that only the selected subscription levels can see this page, if they have not reached their page limit.
“Always” means that the selected subscription levels can see this page, even if they have reached their page limit.
“Only and Always” means that only the selected subscription levels can see this page, even if they have reached their page limit.
Hint:
“Only” means that only the selected subscription levels can see this page, if they have not reached their page limit.
“Always” means that the selected subscription levels can see this page, even if they have reached their page limit.
“Only and Always” means that only the selected subscription levels can see this page, even if they have reached their page limit.
Hint:
“Only” means that only the selected subscription levels can see this page, if they have not reached their page limit.
“Always” means that the selected subscription levels can see this page, even if they have reached their page limit.
“Only and Always” means that only the selected subscription levels can see this page, even if they have reached their page limit.
|
2019-03-20 20:13:16
|
{"extraction_info": {"found_math": false, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.8700760006904602, "perplexity": 1209.7024542298566}, "config": {"markdown_headings": true, "markdown_code": false, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2019-13/segments/1552912202450.86/warc/CC-MAIN-20190320190324-20190320212324-00515.warc.gz"}
|
https://www.embeddedrelated.com/showarticle/1353.php
|
# Tolerance Analysis
May 31, 2020
Today we’re going to talk about tolerance analysis. This is a topic that I have danced around in several previous articles, but never really touched upon in its own right. The closest I’ve come is Margin Call, where I discussed several different techniques of determining design margin, and ran through some calculations to justify that it was safe to allow a certain amount of current through an IRFP260N MOSFET.
Tolerance analysis is used in electronics to determine the amount of variation of some quantity in a circuit design. It could be voltage, or current, or resistance, or amplifier gain, or power, or temperature. When you are designing a circuit, the components that you use will have some nominal value, like a 4.99kΩ resistor, or a 33pF capacitor, or a 1.25V voltage reference, or a 3.3V regulator. That 4.99kΩ resistance is just the nominal value; in reality, the manufacturer claims — and I discussed what this means in an article on datasheets — that the resistance is 4.99kΩ ± 1%, that is, between 4940 and 5040 ohms. Actually, most components have more than one nominal value to characterize their behavior; even a simple resistor has several:
• resistance R
• temperature coefficient of resistance
• thermal resistance in °C / W
• maximum voltage across the resistor
• operating and storage temperature ranges
More complex components like microcontrollers may have hundreds of parameters described in their datasheet.
At any rate, what you care about is not the variation in the component, but the variation of something important in your circuit: some quantity X. Tolerance analysis allows you to combine the variations of component values and determine the variation of X. Here is an example:
# Example: Overvoltage Detector, Part 1
I work with digitally-controlled motor drives. It’s usually a good idea to put a hardware overvoltage detector in a motor drive, so that if the voltage across the DC link gets too large, the drive shuts off before components attached to the DC link, like capacitors and transistors, can get damaged. This is important if the drive is operating in regeneration, where the motor is used to provide braking torque to the mechanical load and to convert the resulting energy into electrical form on the DC link, where it has to go somewhere. In an electric bike or a hybrid car, this energy will flow back into a battery. In some large AC mains-connected systems, it may flow back into the power grid. Otherwise, it will cause the DC link voltage to rise, storing some energy in the DC link capacitor, until one of the components fails or the motor drive stops regenerating. (Hint: you don’t want a component to fail; you want the motor drive to stop regenerating.) The control of the motor drive is done through firmware, but you never know when something can go wrong, and that’s why it’s important to have an independent analog sensor that can shut down the gate drives for the power transistors if the DC link voltage gets too high.
So here’s the goal of a hypothetical overvoltage detector for a 24V nominal system:
• Provide a 5V logic output signal that is HIGH if the DC link voltage $V_{DC} < V_{OV}$ (normal operation) and LOW if $V_{DC} \ge V_{OV}$ (overvoltage fault) for some overvoltage threshold $V_{OV}$.
• $V_{OV} > 30V$ to allow some design margin and avoid false trips
• $V_{OV} < 35V$ to ensure that capacitors and transistors do not experience voltage overstress.
• The preceding thresholds are for slowly-changing values of $V_{DC}$. (Instantaneous voltage detection is neither possible nor desirable: all real systems have some kind of noise and parasitic filtering.)
• Under all circumstances, $V_{DC} \ge 0$
• The logic output signal shall detect an overvoltage and produce a LOW value for any voltage step reaching at least 40V in no more than 3μs after the beginning of the step.
• Overvoltage spikes between $V_{OV} - 1.0V$ and 40V, which exceed $V_{OV} - 1.0V$ for no more than a 100ns pulse out of every 100μs, shall not cause a LOW value. (This sets a lower bound on noise filtering.)
• The overvoltage detector will be located in an ambient temperature between -20 °C and +70 °C.
• A 5V ± 5% analog supply is available for signal conditioning circuitry.
For the time being, forget about the dynamic aspects of this circuit (the 40V step detected in no more than 3μs, and the 100ns spikes up to 40V not causing a fault) and focus on the DC accuracy, namely that the circuit should trip somewhere between 30V and 35V.
Let’s also assume, for the time being, that we have a comparator circuit that trips exactly at 2.5V. Then all we need to do is create a voltage divider so that this 2.5V comparator input corresponds to a DC link voltage between 30V and 35V.
We probably want to choose resistors so that their nominal values correspond to the midpoint of our allowable voltage range: that is, 32.5V in produces 2.5V out. That’s a ratio of 13:1, so we could use R1 = 120K and R2 = 10K. Except if you look at the standard 1% resistor values ( (the so-called E96 preferred numbers), you’ll see the closest is R1 = 121K and R2 = 10K, for a ratio of 13.1:1. That gets us to 32.75V in → 2.5V out.
But if these are 1% resistors, then their room-temperature values can vary by as much as ±1%:
• R1 = 121K nominal, 119.79K minimum, 122.21K maximum
• R2 = 10K nominal, 9.9K minimum, 10.1K maximum
If we want to go through a worst-case analysis, the extremes in resistor divider ratio are at minimum R1 and maximum R2, and at minimum R2 and maximum R1:
• R1 = 119.79K, R2 = 10.1K → 12.86 : 1 → 32.15V in : 2.5V out
• R2 = 122.21K, R1 = 9.9K → 13.34 : 1 → 33.36V in : 2.5V out
This gets to be tedious to do by hand, and it can help to use spreadsheets, or analysis in MATLAB / MathCAD / Scilab / Python / Julia / R or whatever your favorite scientific computing environment is:
import numpy as np
def resistor_divider(R1, R2):
return R2/(R1+R2)
def worst_case_resistor_divider(R1,R2,tol=0.01):
"""
Calculate worst-case resistor divider ratio
for R1, R2 +/- tol
Returns a 3-tuple of nominal, minimum, maximum
"""
return (resistor_divider(R1,R2),
resistor_divider(R1*(1-tol),R2*(1+tol)),
resistor_divider(R1*(1+tol),R2*(1-tol)))
ratios = np.array([1.0/K for K in worst_case_resistor_divider(121.0, 10.0, tol=0.01)])
def print_range(r, title, format='%.3f'):
rs = sorted(list(r))
print(("%-10s nom="+format+", min="+format+", max="+format) % (title, rs[1], rs[0], rs[2]))
print_range(ratios, 'ratios:')
print_range(ratios*2.5, 'DC link:')
ratios: nom=13.100, min=12.860, max=13.344
I’ve just assumed we were going to use 1% resistors here, because I have experience picking resistors. Even so, it helps to double-check assumptions, so as of this writing, here are the lowest-cost chip resistors for 120K (5%) or 121K (1% or less) in 1000 quantity from Digi-Key. Price is per thousand, so \$2.33 per thousand = \$0.00233 each. (I’m looking up 120K/121K rather than 10K because 10K is a common, dirt-cheap value.)
The upshot of this is that 1% and 5% chip resistors are now around the same price. You might save a teensy bit with a 5% resistor: \$0.00233 vs \$0.00262 is a difference of \$0.00029 each, or 29 cents more per thousand for 1% resistors. This is much smaller than the cost of a pick-and-place machine to assemble the component on a PCB; it’s harder to get a good estimate of how much that might cost, but if you look at on-line assembly cost calculators for an estimate, you can get an idea. Here’s one which quotes \$0.73 assembly cost per board at 1000 boards, 10 different unique components, 100 SMT parts on one side of the board only; if you increase to 200 SMT parts it’s \$1.28 per board, and 300 SMT parts it’s \$1.76 per board — at that rate, you’re looking at about a half-cent to place each part, so don’t quibble on the difference between resistors that cost 0.233 cents vs. 0.262 cents each. 5% and 1% chip resistors effectively are the same price; you’re going to pay more to have them assembled than to buy them.
So 1% 0603 or 0402 resistors should be your default choice. I’d probably choose 0603 for most boards, since there’s not too much price difference, and on prototypes I can solder those by hand, if I really need to; 0402 and smaller require more skill than I can handle. (DON’T SNEEZE!)
0.5% resistors are a bit more expensive at about 1.5 - 1.7 cents each.
0.1% resistors are still more expensive at 4.6 - 4.7 cents each, but that’s not too bad if you need the accuracy.
(The 1% and 5% resistors are thick-film chip resistors; the 0.5% and 0.1% resistors are thin-film. For the most part this is just an internal construction detail, but more on this subject in a bit.)
At any rate, we figured out that we could use 1% resistors and have our 2.5V threshold at the comparator equivalent to something between 32.15V and 33.36V at the DC link.
Are we done yet?
No, because there are a bunch of other things that determine resistor values. Let’s look at the datasheets for KOA Speer RK73H (\$4.08 per thousand for 0603 1% RK73H1JTTD1213F) and Panasonic ERJ (\$8.72 per thousand for 0603 1% ERJ-3EKF1213V):
Both have a temperature coefficient of ±100ppm/°C. Our ambient spec of -20°C to +70°C deviates from room temperature by 45°C. (Let’s assume the board doesn’t heat up significantly.) That will add another 4500ppm = 0.45%.
Then there are all these gotchas where the resistance can change by 0.5% - 3% from overload, soldering heat, rapid change in temperature, moisture, endurance at 70°C, or high temperature exposure. The more serious one is soldering heat.
Panasonic:
Yes, that’s right, your resistors meet their advertised 1% specs when they aren’t connected to anything; if you actually want to solder them to a board, their resistance value will change. Some of you are reading this and thinking, “Sure, of course the resistance will change, it’s some function of temperature R(T).” That implies reversible resistance change, back to its original value, after the part cools down. But the resistance may also undergo irreversible change. Some of that is probably due to slight physical or chemical changes in the resistor caused by heating and cooling, and some of it is due to the strain placed on the part by the solder solidifying. I found a few documents on this subject. From a Vishay application note titled “Reading Between the Lines in Resistor Datasheets”
The end customer must also evaluate whether a tolerance offered by a manufacturer is really practical. For example, some surface mount thin film chip resistors are offered in very tight tolerances for very low resistance values. That’s impressive on the datasheet but not compatible with assembly processes. As these resistors are mounted on the board there is a resistance change due to solder heat. The solder terminations melt, flow, and re-solidify with changed resistance values. For low-value resistors the amount of resistance change is much greater than the specified tolerance. Having paid a premium price for an impractically tight tolerance, the customer ends up with looser-tolerance resistors once they’re assembled on the PCB.
One study, Capacitors and Resistors Mounting Guide Survey Based on Commercial Manufacturers’ Public Documents, mentions sulfur contamination:
Sulphur contamination is mainly associated with use and reliability of thick-film chip resistor with Ag-system as inner termination. The silver in the inner termination is very susceptible to contamination via sulphur which produces silver sulphide in chip resistors. Silver is so susceptible to combination with sulphur that the sulphur diffuses through the outer termination layers to the inner termination forming silver sulphide. Silver sulphide unfortunately makes the termination material nonconductive and effectively raises the resistance value until it is essentially open circuit. The reaction velocity in this case is influenced by sulphur gas density, temperature and humidity greatly. This process can be initiated or inhibited already by heat-stress while mounting.
Inert gas atmosphere mounting (as mentioned in this chapter above) can be recommended as a prevention measure to suppress the sulphide contamination issues.
As for the effects of strain, one concern is piezoresistivity: you can read another Vishay application note titled “Mechanical Stress and Deformation of SMT Components During Temperature Cycling and PCB Bending”. No good sound bites in this one, other than the in conclusion:
• The piezoresistive effect can cause significant resistance changes to thick film chip resistors, especially when the PCB bends, temperature changes occur, or the components experience stress when they are embedded or molded. The component’s TCR will be also affected.
• These effects are not seen in thin metal film chip resistors.
So that’s another reason why the SMT resistors that are better than 1% tolerance are thin-film. This affects SMT more than through-hole parts, because there’s not much strain relief for chip components; through-hole parts at least have leads to allow the part to avoid mechanical stress when the board flexes a little bit. Neither of the resistor datasheets I showed above, however, mentions the effect of strain on resistance.
La la la la la, let’s just pretend we didn’t hear all that, and we have ± 1.45% resistance tolerance due to part variation and temperature coefficients.
ratios = np.array([1.0/K for K in worst_case_resistor_divider(10.0, 121.0, tol=0.0145)])
print_range(ratios, 'ratios:')
print_range(ratios*2.5, 'DC link:')
ratios: nom=13.100, min=12.754, max=13.456
Now we’re looking at 31.89V to 33.64V. Still within our spec of 30-35V for $V_{OV}$. Are we done yet?
No — we need the rest of the circuit, it’s not just a voltage divider.
But before we go there, let’s look at how the resistor tolerance affects the voltage divider ratio.
import matplotlib.pyplot as plt
%matplotlib inline
alpha = np.arange(0.001, 1 + 1e-12, 0.001)
Rtotal = 10e-3 # this doesn't matter
R1 = alpha*Rtotal
R2 = (1-alpha)*Rtotal
for whichfig, ytext, ysym in [(1,'Ratio','\\rho'),
(2,'Sensitivity','S')]:
fig = plt.figure()
for tol in [0.1, 0.05, 0.01]:
r_nominal, r_min, r_max = worst_case_resistor_divider(R1, R2, tol)
S = np.maximum(r_nominal-r_min, r_max-r_nominal) / tol
y = S if whichfig == 1 else S/alpha
ax.plot(alpha, y, label='$\\delta =$%.1f%%' % (tol*100))
ax.grid(True)
ax.legend(loc='best', fontsize=11, labelspacing = 0)
ax.set_xlabel('$\\alpha$',fontsize=14)
ax.set_ylabel('$%s$' % ysym,fontsize=14)
nt = np.arange(11)
xt = nt * 0.1
ax.set_xticks(xt)
ax.set_title(('%s $%s(\\alpha, \\delta) = (V - \\bar{V})/(%s\\delta\\cdot V_{\\rm in})$\n'
+'$R_1=\\alpha R, R_2=(1-\\alpha R), V = \\alpha V_{\\rm in}$')
% (ytext, ysym, '' if whichfig == 1 else '\\alpha \\cdot'));
OK, what are we looking at? The top graph, $\rho(\alpha, \delta)$ is the ratio of the voltage divider error to the resistor tolerance, where $\alpha =$ the nominal voltage divider ratio, and $\delta$ is the resistor tolerance. The bottom graph, $S(\alpha, \delta)$ is the sensitivity of the voltage divider output; we just divide by the nominal voltage divider ratio, so $S = \frac{\rho}{\alpha}$.
Here are three concrete examples:
• $R_1 = R_2 = R$ and $\delta =$ 1%. Then $\alpha = R_1/(R_1+R_2) = 0.5$ and the output can vary from 0.99/(0.99+1.01) = 0.495 to 1.01/(1.01+0.99) = 0.505. This is a ±0.005 output error, and if we divide by $\delta = 0.01$ we get $\rho = 0.5$ and then $S = \rho / \alpha = 1.$
• $R_1 = R, R_2 = 4R$ and $\delta =$ 1%. Then $\alpha = R/5R = 0.2$ and the output can vary from 0.99/(0.99+4.04) = 0.1968 to 1.01/(1.01+3.96) = 0.2032. This is a ±0.0032 output error, and if we divide by $\delta = 0.01$ we get $\rho = 0.32$ and then $S = \rho / \alpha = 1.6.$
• $R_1 = 4R, R_2 = R$ and $\delta =$ 1%. Then $\alpha = 4R/5R = 0.8$ and the output can vary from 3.96/(3.96+1.01) = 0.7968 to 4.04/(4.04+0.99) = 0.8032. This is a ±0.0032 output error, and if we divide by $\delta = 0.01$ we get $\rho = 0.32$ and then $S = \rho / \alpha = 0.4.$
Some important takeaways are:
• The ratio $\rho \approx 2\alpha(1-\alpha)$ and sensitivity $S \approx 2(1-\alpha)$.
• Absolute error in voltage divider output is symmetrical with $\alpha$ and reaches a maximum for $\alpha=0.5$ and is very low for $\alpha$ near 0 or 1.
• Sensitivity of voltage divider output for $\alpha << 1$ is approximately $S=2$. If I am dividing down a much higher voltage to a lower voltage, this means if I use 1% resistors I can expect about 2% gain error, or if I use 0.1% resistors I can expect about 0.2% gain error.
• Sensitivity of voltage divider output for $\beta << 1$ where $\beta = 1-\alpha$ is approximately $S=2\beta$. This means if I want a voltage divider ratio that is very close to 1, and I use 1% resistors I can expect a much lower gain error. In one of my earlier articles on Thevenin equivalents I used the example of R1 = 2.10kΩ, R2 = 49.9Ω where $\alpha = 0.9768, \beta = 0.0232$, and that means for 1% resistors I can expect a gain error of only about 0.0464%.
ratios = np.array([1.0/K for K in worst_case_resistor_divider(2100, 49.9, tol=0.01)])
print_range(ratios, "ratios", '%.5f')
print "sensitivity S", ratios[1:3] - ratios[0]
ratios nom=1.02376, min=1.02329, max=1.02424
sensitivity S [ 0.00048004 -0.00047053]
## Overvoltage Detector, Part 2: The Other Stuff at DC
Here’s the whole circuit we’re going to be looking at:
### Selecting 2.5V Voltage References
First we need a 2.5V source, so we can compare the output of our voltage divider to it.
#### TL431
In theory, I like the TL431 type of shunt voltage reference. It’s a three-terminal device that’s kind of like a precision transistor: if the reference terminal is less than its 2.5V threshold, it does not conduct from cathode to anode; if it’s greater than its 2.5V threshold, it does conduct.
TL431s are cheap and ubiquitous. You want a 0.5%-tolerance 2.5V reference for less than 10 cents in quantity 1000? You got it. The Diodes Inc. AN431 is available in 0.5% grade from Digi-Key for about 7 cents in quantity 1000. This is pin- and function-compatible with the TL431. (Mess up the pinout? There’s the AS431, same price, which swaps ref and cathode pins, compatible with the TL432.)
The only downside is that its voltage accuracy is specified at 10mA, so that’s kind of a power hog. You can run it down as low as 1mA, but then you have to use the specification for dynamic impedance, $Z_{KA}$ to figure out how much the voltage changes at 1mA. For the AN431, it’s a maximum of 0.5Ω, so for a change from 10mA down to 1mA (ΔI of -9mA), the voltage could drop by as much as 4.5mV, which adds another 0.18% to the effective accuracy.
#### Low-current TL431
The next step up from those are the ON Semiconductor NCP431B which you can buy from Digi-Key at 9.9 cents each in quantity 1000. These work down to at least 60μA, and their voltage accuracy is specified at 1mA. The dynamic impedance $Z_{KA}$ is specified between 1mA and 100mA (same 0.5Ω maximum), but there is no spec for 60μA to 1mA — they do show a figure 36 (“Knee of Reference”) claiming a typical 4.5A/V = 0.22Ω, and you could decide to use the 0.5Ω maximum value and double it for good measure: 1 ohm times (100μA - 1mA) = 0.9mV, which is less than 0.04% of 2.5V. But there’s no spec, so how can you possibly know whether you can trust the voltage accuracy at those low currents? You could have a part that regulated to 2.45V at 100μA and it would meet the specification but represent a 50mV error from nominal.
Diodes Inc has the AP431 for 8.6 cents (quantity 1000) from Digi-Key with similar specs: ±0.5% at 1mA, works down to 100μA cathode current, dynamic impedance $Z_{KA}$ < 0.3Ω from 1mA to 100mA. But nothing useful for determining voltage accuracy below 1mA.
Diodes Inc also has the ZR431 which it inherited from Zetex, specified at 10mA and no specs below 10mA.
TI has the similar ATL431LI for 17 cents (qty 1000) from Digi-Key, ±0.5% at 1mA, works down to 100μA cathode current, dynamic impedance $Z_{KA}$ < 0.65Ω from 1mA to 15mA, and nothing about voltage accuracy below 1mA.
These guys are either copying each other’s collective blunders, or there’s a conspiracy, a kind of mini-Phoebus cartel when it comes to specifying voltage accuracy below 1mA. Sigh. My guess is that it was Zetex’s fault for poor specsmanship of the ZR431, and then everybody just copied the general form of the datasheet, without bothering to make any claims about low-current voltage accuracy.
#### LM4040 / LM4041
The next step up are the LM4040 and LM4041 voltage references; these have specified voltage accuracy at 100μA operation, and are available from a number of manufacturers. The LM4040 is a fixed voltage reference, and the LM4041 is an adjustable reference based on a 1.23V bandgap voltage, kind of an upside-down TL431. For precision circuits, unless you need the adjustability, the LM4040 is a better choice; otherwise, you’ll need to add your own resistor divider which will raise the effective tolerance. For the LM4040, if you get the A grade version, it’s 0.1% accuracy, but you’ll pay extra for that. Here are some options for the C grade (0.5% accuracy), prices from Digi-Key at 1000 piece quantity:
TI also has the TL4050 which has some nice specs but it’s more expensive.
#### Series references
Finally, if you are working with micropower designs and you really need to guarantee low current, or you need to minimize parts count, there are series references which will give you a buffered voltage reference, like the ones listed below, but you’ll pay more for them, typically in the 50-60 cent range in 1000 quantity.
### Designing with 2.5V shunt references
I’m going to stick with the ON Semi NCP431B, and just use it at 1mA — although I still think it’s a tragedy that you can’t rely on the voltage spec below 1mA.
For the NCP431BI, the voltage specification at 1mA current over its temperature range is 2.4775V to 2.5125V.
Our 5V ± 5% supply can go as low as 4.75V. We’ll use a 2.00kΩ shunt resistor with it to guarantee a minimum cathode current of (4.75V - 2.5125V) / (2.00kΩ × 1.0145) = 1.103mA. (Remember: the factor of 1.0145 comes from the 1% resistor range on top of the 4500ppm swing due to 100ppm/°C tempco and ±45°C swing. This is slightly above the 1mA voltage specification, and leaves 103μA above spec, which is much more than the max gate current of 190nA.)
On the other side of the tolerance ranges, we could have as much as 5.25V, with a cathode current of up to (5.25 - 2.4775) / (2.00kΩ × 0.9855) = 1.41mA. The specification on dynamic impedance $Z_{KA}$ < 0.5Ω tells us we might see as much as (1.41 - 1mA) * 0.5Ω = 0.205mV increase due to worst-case cathode current tolerance, making our overall voltage reference range:
• 2.500V nominal
• 2.5127V maximum (2.5125V + 0.205mV)
• 2.4775V minimum
## Selecting comparators
We also need a comparator. The important system requirements are that we want one that can be powered from 5V in ambient temperatures of -20 to +70°C, has a short enough response time, and doesn’t introduce much voltage error. Our system requirement of at most 5 microsecond response time for a 40V 1μs pulse means, at first glance, that we’ll probably need a fast response, say around 1μs or less, but there are some factors that work both for and against us to meet the system requirement.
Aside from that, it’s a matter of good judgment and frugality. The most inexpensive comparators, by far, are of the LM393 variety. Digi-Key price in 1000 quantity is about the same; the lowest is the ON Semi LM393DR2GH in a SOIC-8 package, at about 8.4 cents. Others from TI, ST, and Diodes Inc. are in the 8.5 - 10 cent range.
They are so cheap that you cannot buy a single comparator for less money; the LM393 is a dual comparator, with open-collector output, and if you’re not going to use the second comparator, you have to read the fine print in the datasheet, which says that unused pins should be grounded.
There are a couple of important specs in the LM393 datasheet to note:
• Offset voltage. This is ± 5mV max at 25°C and ± 9mV max over the full temperature range; this effectively adds to the 2.5V reference tolerance; ± 9mV is 0.36% of 2.5V
• Response time. This is typically 1.3μs for a 100mV step change with 5mV overdrive, which means that to turn the comparator from output high to output low, we start with Vin- 95mV below Vin+, and then increase Vin- to 5mV above Vin+. Think of this device as a balance scale: if the two inputs are nearly equal, then the output can change slowly, whereas if they are different enough, the balance will tip quickly to note which is greater. Comparator datasheets will usually have graphs showing typical response time vs. overdrive level. The ON Semi LM393 does not, and this is one reason it may be better to pick another part. Here are the response time graphs from the TI LM393 datasheet — I prefer the original from National Semiconductor before they were acquired by TI, but unfortunately TI hasn’t maintained earlier variants, so we’re stuck with the more confusing TIified version:
You will note that the output transition from high-to-low is faster than the low-to-high transition. The reason for this may be clearer if we look at the simplified equivalent circuit — which is part of why these parts are so cheap. They’re simple!
All we really have here is a Darlington bipolar differential pair (Q1-Q4), loaded down by a current mirror (Q5 and Q6), with an open-collector output stage (Q7 and Q8).
• When the positive input is greater than the negative output, then more than half of the 100μA current source flows through Q3 than Q2; Q2, Q5, and Q6 have the same current flowing through them, so more current flows through Q3 than Q6, and that turns Q7 on which turns Q8 off, and the output is open-collector.
• When the negative input is greater than the positive output, the reverse is true: more than half of the 100μA current source flows through Q2 than Q3, which means more current flows through Q6 than Q3, and that turns Q7 off which turns Q8 on, and the output is pulled low.
The reason the high → low transition is faster than the low → high transition is because the output transistor Q8 has storage time to come out of saturation. It’s a bit puzzling why National didn’t make a version of the LM393 comparator with a Baker clamp on the output transistor to speed up this time. It’s also too bad the LM393 doesn’t have separate specs for turn-on and turn-off transition times — although since they’re typical rather than maximum specs, you might as well just use the graphs for information instead. (Or use a part like the ON Semi TL331 which lists typical values in the spec tables.)
Anyway, this is important because we have a system requirement to detect overvoltage and transition from high-to-low within a bounded time, but no time requirement to transition in the other direction. So in our particular application, we care about the high-to-low response time.
Other specs of importance to ensure it will work for our application are:
• Common-mode voltage range: down to zero (because of the PNP input stage), up to Vcc - 2.0V over the full temperature range (Vcc - 1.5V at 25°C) — we need this to work at 2.5V input, and Vcc can be as low as 4.75V, so we can support an input voltage range up to 4.75 - 2.0 = 2.75V. That represents a voltage margin of a quarter-volt (2.75V - 2.5V).
• Input bias current (400nA max) and input offset current (150nA max): The LM393 is a bipolar device, not CMOS, so the inputs are not perfectly high-impedance. Input bias current is the current flowing through each input. Input offset current is the difference between the two input bias currents. If your input sources have low enough impedance, you can ignore input offset current and just analyze the input bias currents; otherwise, you can try to match source impedances so the voltage drop across your source impedances cancel to some extent. An upper bound for voltage error in either case is $\Delta R I_{\textrm{bias}} + RI_{\textrm{ofs}}.$
In this application, we’re using R1 = 10K, R2 = 121K, so the source impedance is 10K || 121K = 9.24K, and the voltage error at the comparator input is 9.24K × a maximum of 400nA = 3.7mV. This is small (0.15%) but not zero. It’s not hard to match the input impedances to 10K || 121K. In this case, the worst-case voltage error at the comparator inputs is $\Delta R I_{\textrm{bias}} + RI_{\textrm{ofs}}$ = 9.24K × 0.02 × 400nA + 9.24K × 150nA = 1.46mV.
• Operating temperature range: Here we’re in trouble. The LM393 is rated for an operating range of 0 - 70°C, but we need a circuit that works down to -20°C.
### C, I, M (Temperature ratings)
For those of us engineers of a certain age, the letters CIM mean something:
• C = commercial (0 - 70°C)
• I = industrial (“cold” - 85°C) where “cold” varied by manufacturer: for example, -40°C for TI and Motorola, -25°C for National Semiconductor
• M = military (-55°C - 125°C), usually in ceramic rather than plastic packages
TI used the CIM lettering system, sometimes CIME or CIMQ; see for example the TLC272 and TLC393 — the TLC393 datasheet states
The TLC393C is characterized for operation over the commercial temperature range of TA = 0°C to 70°C. The TLC393I is characterized for operation over the extended industrial temperature range of TA = −40°C to 85°C. The TLC393Q is characterized for operation over the full automotive temperature range of TA = −40°C to 125°C. The TLC193M and TLC393M are characterized for operation over the full military temperature range of TA = −55°C to 125°C.
“E” (extended) was sometimes -40°C to +125°C. Presumably coverage of the -55°C to -40°C range was difficult to design and test, and aside from military and aerospace usage, it is not a frequent need in circuit design.
ON Semiconductor appears to use something similar, at least for the NCP431:
• C = 0 to +70°C
• I = -40 to +85°C
• V = -40 to +125°C
National Semiconductor used the part number to indicate temperature range: for example, the LM393 datasheet includes the LM193 (military temp range), LM293 (industrial), and LM393 (commercial), so the LM3xx series was commercial, LM2xx was industrial, and LM1xx was military.
Other manufacturers like Burr-Brown and Linear Technology just tended to design everything for -40°C to +85°C by default, sometimes with military-grade variants to cover the -55°C to +125°C range. This is now the more typical behavior for more recent devices from most manufacturers. Instead of seeing 3 or 4 temperature grades, new devices may have only 1 or 2, with different specs covering the 0 to 70 or -40 to +85 ranges.
### LM293
At any rate, to cover our -20°C to +70°C range, we need the LM293, not the LM393. (And for the reference, we’ll need the NCP431BI.) This isn’t a big deal nowadays (it was more significant 10-20 years ago; the industrial and military range devices were more expensive and less common): Digi-Key sells the LM293ADR for just under 10 cents at 1000 quantity.
### Better comparators
We could also use the TI LM393B or LM2903B comparators, which are basically LM293 with better specs in almost every area (it’s part of the same datasheet):
• temperature range (LM393B = -40°C to +85°C; LM2903B = -40°C to +125° C)
• offset voltage: 2.5mV at 25°C, 4mV over temperature range
• input bias and offset current: 50nA max input bias current, 25nA max input offset current (vs. 400nA, 150nA for LM193/293/393)
• supply voltage: 3-36V operating, as compared to 2-30V for the LM193/293/393 (note that we have to give up ultra-low supply voltage, but that’s ok in our application)
• response time: 1μs typ. (vs. 1.3μs for LM393)
• quiescent current: 800μA worst-case (vs. 2.5mA for the LM393)
• output low voltage: 550mV max at 4mA sink (vs 700mV max at 4mA sink for the LM393)
Common-mode input voltage is the same.
Cost from Digi-Key in 1000 quantity is about 9.4 cents for the LM393B and 9.0 cents for the LM2903B. Since the LM2903B has a wider temperature range for the same specs, and is slightly cheaper — an example of price inversion! — we’ll use the LM2903B.
The other kind of specs that are available in more expensive comparators include:
• lower offset voltage (rare)
• push-pull output instead of open-collector
• rail-to-rail input
• CMOS input for supporting high-impedance applications
• faster response
• micropower
• built-in voltage reference
We don’t need them for our application — although a built-in voltage reference would be cost-effective if we could find a part that has about the same total price as the 2.5V reference and the comparator — but you should know about them in case you need those sorts of things. Just for a couple of examples, you can look at the ON Semi NCS2250 or TI LMV762 or TI TLV3011 or Maxim MAX40002. The least-expensive comparator with built-in voltage reference that I could find is the Microchip MCP65R41T-2402E for 33 cents at Digi-Key, and that costs more than the voltage reference and comparator we picked; for applications that are size-constrained, this kind of device might be appropriate.
### Hysteresis
To help the comparator switch quickly and avoid noise sensitivity when its input is around the voltage threshold, we need to add some positive feedback. We don’t need much; only a few millivolts is sufficient. The easiest way to do this is put a little resistance between our 2.5V source and the comparator’s + input. Perhaps 1kΩ. Then add 1MΩ from the comparator’s output to the + input. This will form a 1001:1 voltage divider, adding approximately 2.5mV if the output is at 5V, and subtracting approximately 2.5mV if the output is at 0V.
Now, in reality we don’t reach either 5V or 0V output: at the top end, it depends on the pullup resistance of our open-collector circuit in series with the 1MΩ resistor — the LM2903B’s specs are listed with a 5.1kΩ pullup resistor, so instead of a 1001:1 voltage divider, we’ll have effectively a 1006:1 voltage divider, adding at least roughly 2.49mV hysteresis to the threshold to turn the comparator output low.
If you really want to incorporate the effects of resistor tolerance over temperature range, then run the numbers for (1MΩ+5.1kΩ)×(1±0.0145) and 1kΩ × (1∓0.0145):
hyst = np.array([K*2500 for K in worst_case_resistor_divider(1005.1, 1, tol=0.0145)])
print_range(hyst, 'hysteresis (mV):')
hysteresis (mV): nom=2.485, min=2.414, max=2.558
That’s only about ± 72μV, which is really small, and represents less than 0.003% error compared to the 2.5V threshold, which is insignificant compared to the dominant sources of error — namely the 0.5% accuracy of the reference itself.
At the turn-off point, where output is transitioning from low to high, the LM2903B has a max spec of output voltage low at 0.55V with current of 4mA or less; the 1001:1 voltage divider will give us roughly (2.5 - 0.55)/1001, subtracting at least roughly 1.95mV hysteresis to the threshold to turn the comparator output high.
Since the effect of resistance tolerance on turn-off hysteresis is small and not very critical to our application, we’ll ignore it.
### Designing with the LM2903B
Here are the error sources for the LM2903B:
• Offset voltage: 4mV max over temperature
• Input bias current: 50nA max over temperature — with our 121K / 10K input voltage divider on the negative input, this leads to additional effective offset voltage of at most 50nA × (121K || 10K) = 0.46mV, which is low enough that we don’t have to care about matching input resistance on the posiive input, as long as its source resistance is smaller.
That’s a total input voltage offset error of 4.46mV.
## Putting It All Together
Okay, so here’s our full circuit design:
• R1 = 121kΩ
• R2 = 10.0kΩ
• R3 = 2.00kΩ
• R4 = 1.00kΩ
• R5 = 1.00MΩ
• R6 = 5.1kΩ
• U1 = 1/2 LM2903B
• U2 = NCP431B
• C1 = 56pF
• C2 = 100pF
We’ll discuss the reasons for selecting these capacitor values in the next section.
• R1 and R2 set the voltage divider ratio for comparison against the voltage reference producing Vthresh = 2.5V.
• R3 sets the shunt current into the NCP431B to at least 1.1mA worst-case, so it is definitely more than the 1mA level at which the voltage reference is specified
• R4 and R5 set the approximate hysteresis level = ≈ R4/R5 × (Vout - Vthresh)
• R6 sets pulldown current when the output is low; this value just matches the 5.1kΩ value cited in the datasheet. (if the value is too low, then it increases current consumption and may violate the comparator specifications for output voltage level, which are for 4mA or less; if the value is too high, then the switching speed will suffer and, in the extreme, may not reach a valid logic high)
We can now determine the worst-case DC thresholds for comparator output switching, by combining the tolerance analysis we completed earlier:
• Resistor divider ratio R1/R2: nominal=13.100, minimum=12.754, maximum=13.456
• Voltage reference: nominal=2.500V, minimum=2.4775V, maximum=2.5127V
• Comparator:
• Total input voltage error (including input offset voltage + input bias current) is at most 4.46mV
• Hysteresis to turn output low: add ≈ 2.49mV to the + input (this has a roughly ±3% variation due to resistor tolerances, but that error is down around 72μV)
• Hysteresis to turn output high: subtract between 1.95mV and 2.5mV
The input voltage levels are therefore:
Turn-on: (no overvoltage → overvoltage)
• 32.783V nominal = 13.1 × (2.500V + 2.49mV)
• 31.572V minimum = 12.754 × (2.4775V − 4.46mV + 2.49mV − 72μV)
• 33.905V maximum = 13.456 × (2.5127V + 4.46mV + 2.49mV + 72μV)
Overall tolerance is about 3.4-3.7%, and consists approximately of:
• ±2.7% tolerance from resistor divider
• −0.9%, +0.5% tolerance from voltage reference
• ± 0.18% tolerance from input voltage error of comparator
Turn-off: (overvoltage → no overvoltage)
• 32.717V nominal = 13.1 × (2.500V − 2.5mV)
• 31.508V minimum = 12.754 × (2.4775V − 4.46mV − 2.5mV − 72μV)
• 33.846V maximum = 13.456 × (2.5127V + 4.46mV − 1.95mV + 72μV)
Hysteresis:
• 65mV nominal = 13.1 × (2.49mV + 2.5mV)
• 57mV minimum = 12.754 × (2.49mV − 72μV + 1.95mV)
• 67mV maximum = 13.456 × (2.49mV + 72μV + 2.5mV)
These levels are well within our 30V - 35V requirement for DC voltage trip threshold.
## Overvoltage Detector, Part 3: Dynamics
Electronics don’t respond instantly to changes, so we have to take into account the dynamics of our input and our circuit. This involves the choice of capacitor values and possibly the comparator.
### NCP431B Bypassing
Capacitor C2 is just a bypass capacitor for the NCP431B, used to dampen high-frequency noise. Most of the TL431-style shunt references have a kind of anti-Goldilocks behavior, where the reference is stable when the parallel capacitance is small or large, but it may oscillate when the capacitance is just right. Figure 19 from the NCP431B datasheet shows this:
Since we’re not using it with a voltage divider to bump up the cathode-to-anode voltage $V_{KA}$ beyond the 2.5V value, we’re stuck with curve A, which says that the parallel capacitance should either be below about 1nF or above 10μF for cathode currents above 400μA. (Figure 18 shows cathode currents in the 0-140mA range, but it’s essentially impossible to read the limits for 1mA cathode current — which is rather unfortunate, since the voltage spec for this part is at 1mA; neither of Figures 18 or 19 are very helpful for currents in the 1-10mA range.)
At any rate, we’ll choose C2 = 100pF, which is low enough to stay below the lower capacitance limit, but high enough to keep the output low-impedance at high frequencies. Just as a double-check: at f=10MHz, the capacitor impedance is $Z=1/j2\pi fC \rightarrow \left|Z\right| = 159\Omega$. Figure 13 in the datasheet shows typical dynamic output impedance vs. frequency, with about 0.5Ω at 1MHz and about 4Ω at 10MHz, so a 100pF isn’t going to change that much, and even a 1000pF at the edge of stability would still have higher impedance than the curve in Figure 13. But passive components are cheap insurance; it’s hard to be 100% certain that the silicon will dampen noise without some kind of capacitance hanging on the output.
Other devices, such as the LM4040, are designed to be stable with any capacitive load, but you’ll generally pay more.
### Comparator response time
OK, as far as the comparator response time goes, we have to look at the LM2903B datasheet. Figures 30, 31, 36, and 37 help characterize typical comparator response time as a function of overdrive.
Now, we have a response time requirement for a 40V input transient. This is way above the DC threshold for our comparator circuit. When tolerances are at their worst, the input voltage divider is 13.456 : 1, and the maximum threshold for the circuit is 33.905V, or 2.520V at the comparator “−” input. If we have an input voltage of just over 33.905V, it will trip the comparator eventually, but it might take a long time. To ensure a faster response, we need to exceed this worst-case comparator threshold by some nominal amount: this is the overdrive level. The datasheet specifies typical response time at 5mV or greater. At 5mV, the typical propagation delay is 1000ns.
(Interestingly, while the high-to-low output delay can be lower than low-to-high, it looks like for very low overdrive levels, the low-to-high output delay is lower.)
I’m going to read these figures off the +85°C graph of Figure 30:
• 1000ns for 5mV
• 620ns for 10mV
• 410ns for 20mV
• 260ns for 50mV
• 200ns for 100mV
And here’s how we’ll utilize the overdrive curve: I’ll pick a couple of capacitor values for C1, and we’ll look at the RC relaxation curves for a step input from 0V → 40V. (which yields 2.973V at the output of the voltage divider when it is at its worst-case value of 13.456 : 1)
With this worst-case voltage divider, the Thevenin-equivalent resistance is (121K + 1.45%) || (10K − 1.45%) = 9.12kΩ. Let’s see what happens if we use C1 = 47pF.
def scale_formatter(K):
def f(value, tick_number):
return value * K
return plt.FuncFormatter(f)
def show_comparator_response(R1nom, R2nom, C, Rtol, Ctol):
tmax = 4e-6
t = np.arange(-0.1,1,0.001) * tmax
ovtresp_comparator=np.array([(5e-3,1000e-9),
(10e-3,620e-9),
(20e-3,410e-9),
(50e-3,260e-9),
(100e-3,200e-9),
(200e-3,165e-9),
(300e-3,155e-9),
(400e-3,150e-9),
# (500e-3,145e-9),
# (1000e-3,135e-9)
])
ov_comp = ovtresp_comparator[:,0]
t_comp = ovtresp_comparator[:,1]
tresp_requirement = 3e-6
Vthresh_max = 2.520
R1 = R1nom*(1+Rtol)
R2 = R2nom*(1-Rtol)
Rth = 1.0/(1.0/R1 + 1.0/R2)
RC = Rth*C*(1+Ctol)
K = R2 / (R1+R2)
# Driving signal
Vin_end = 40
y_end = Vin_end*K
u = (t >= 0) * y_end
y = (t >= 0) * y_end * (1-np.exp(-t/RC))
# time for RC filter to reach a particular overdrive level above Vthresh_max
y_ov = Vthresh_max+ov_comp
t_ov = -RC*np.log((y_ov-y_end)/(0-y_end))
fig = plt.figure(figsize=(7,4))
ax.plot(t,u)
ax.plot(t,y)
xlim = [-0.1*tmax, tmax]
ylim = [0,3]
ax.plot(xlim, [Vthresh_max, Vthresh_max],color='red',dashes=[3,2],linewidth=0.8)
ax.plot([tresp_requirement, tresp_requirement], ylim, color='red',dashes=[3,2],linewidth=0.8)
ax.plot(t_ov+t_comp, y_ov, '-', color='red')
tresp_min = (t_ov+t_comp).min()
#for t1,t2,y1 in zip(t_ov,t_comp,ov_comp):
# print t1*1e6,t2*1e6,y1
ax.fill_betweenx(y_ov, t_ov, t_ov+t_comp, color='red', alpha=0.25)
ax.xaxis.set_major_formatter(scale_formatter(1e6))
ax.set_xlabel(u'time (\u00b5s)')
ax.set_xlim(xlim)
ax.grid(True)
ax.set_ylabel(u'Voltage (V)')
ax.annotate(u"$t_\\min =$%.2f\u00b5s" % (tresp_min*1e6),
xy=(tresp_min, Vthresh_max), xycoords='data',
xytext=(0,-50), textcoords='offset points',
size=14, va="center", ha="center",
bbox=dict(boxstyle="round", fc="w"),
arrowprops=dict(arrowstyle="-|>",
connectionstyle="arc3",
shrinkA=0
),
)
ax.set_title(('Comparator response to RC filter; steady-state voltage = %.2fV (%.3fV @comp)\n'
+'thresh voltage = %.2fV (%.3fV @comp), R1=%.1fK+%.0f%%, R2=%.1fK-%.0f%%, C=%.0fpF+%.0f%%')
% (Vin_end, y_end, Vthresh_max/K, Vthresh_max,
R1nom/1e3, Rtol*100, R2nom/1e3, Rtol*100, C*1e12, Ctol*100),
fontsize=9)
show_comparator_response(121e3, 10e3, 47e-12, 0.0145, 0.05)
Here a little explanation is needed.
• The blue step is the divided-down voltage from an input step from 0V to 40V at the DC link, with no capacitive load.
• The green curve is the voltage at the comparator “−” input, caused by RC filtering.
• The horizontal dashed line at 2.52V represents the worst-case highest voltage at the “−” input that would trip the comparator. (Nominal is at 2.5V + 2.49mV = 2.502V, remember?)
• The vertical dashed line at 3μs represents our time limit to respond to this step.
• The red curve is the typical comparator response time added to the green curve.
For that red curve, imagine a few cases:
• that the green curve came up to the comparator threshold of 2.52V and stayed there. No overdrive. This takes about 0.85μs, but the comparator may take forever to switch, because there is no overdrive. (No overdrive = takes forever.)
• that the green curve came up to 2.525mV, and then stopped increasing. That represents a 5mV overdrive, also at around t=0.85μs after the input step, and it would typically take another microsecond for the comparator output to switch with 5mV overdrive, for a total of about 1.85μs
• that the green curve came up to 2.92V and stayed there, with 400mV overdrive. This time, with 400mV overdrive it only takes 150ns for the comparator to switch, but the green curve took 1.82μs to get to that point, for a total of 1.97μs. (High overdrive = comparator switches quickly, but capacitor takes forever to get to that point.)
• finally, there is a sweet spot at around 100mV overdrive, which the green curve reaches at around 0.96μs, and with 100mV overdrive the typical response time is 200ns, for a total of 1.16μs.
So we can expect the comparator to switch output low roughly 1.16μs after the input voltage step occurs, perhaps a bit earlier since the input doesn’t just stay there but instead keeps increasing.
This total response time of 1.16μs is pretty quick and we have lots of margin between that and our 3μs requirement. What about raising the capacitance a little bit, to 68pF:
show_comparator_response(121e3, 10e3, 100e-12, 0.0145, 0.05)
show_comparator_response(121e3, 10e3, 150e-12, 0.0145, 0.05)
Um… just past the edge of our 3μs deadline.
I’d probably pick 120pF, which produces a total response time of roughly 2.56μs at the high end of its tolerance, and still has some room to accommodate stray capacitance:
show_comparator_response(121e3, 10e3, 120e-12, 0.0145, 0.05)
Your cheapest 120pF ±5% NP0 50V 0603 capacitor at 1000-piece quantity at Digi-Key is the Waisin 0603N121J500CT at about 1.1 cents each. If you’re willing to use 0402 capacitors, pick the Waisin 0402N121J500CT, at just under 0.77 cents each. (0201 are even cheaper at about 0.66 cents each for the Murata GRM0335C1H121JA01D. If we can live with 100pF, since it’s a more standard value, we can find 0402 Yageo CC0402JRNPO9BN101 capacitors at 0.55 cents each.)
C0G/NP0 capacitors are more stable over temperature than X5R/X7R/Y5V capacitors; they cost more at higher capacitance, but if you’re under 1000pF, generally there’s no significant cost premium to using C0G/NP0 capacitors. This is the kind of capacitor you should use for tight-tolerance filtering; pick the ±5% kind if you can. And at 120pF there’s no cost premium for using 5% tolerance. Finally, the voltage rating of 50 or 100V is “free” if you are at these low capacitance values, so don’t bother trying to optimize and buy a 10V or 25V part to lower cost.
### The flip side of filtering: ignoring momentary spikes
We also have a requirement to prevent 100ns pulses from $V_{OV} - 1.0V$ to 40V reaching $V_{OV}$ and causing an overvoltage. Let’s check to make sure our 120pF filter capacitor does the trick — actually, to be certain, we’ll use the low side of the capacitor tolerance, 120pF − 5% = 114pF:
t = np.arange(-0.25,3,0.001)*1e-6
dt = t[1]-t[0]
u1 = (t >= 0)*1.0
tpulse = 100e-9
u2 = (t >= tpulse) * 1.0
R1 = 121e3
R2 = 10e3
Rth = 1.0/(1.0/R1 + 1.0/R2)
RC = 120e-12 * 0.95 * Rth
fig = plt.figure(figsize=(7,7))
for row in [1,2]:
for V_OV, label in [(31.572,'minimum $V_{OV}$'),
(32.783,'nominal $V_{OV}$'),
(33.905,'maximum $V_{OV}$')]:
v_pre_spike = (V_OV - 1.0)
v_in = v_pre_spike + (40-v_pre_spike)*(u1-u2)
dV1 = (40-v_pre_spike)*u1*(1-np.exp(-tpulse/RC))
y = (v_pre_spike
+ (40-v_pre_spike)*(u1-u2)*(1-np.exp(-t/RC))
+ dV1*u2*(np.exp(-(t-tpulse)/RC))
)
y2 = (v_pre_spike
+ (40-v_pre_spike)*u1*(1-np.exp(-t/RC)))
hl = ax.plot(t,y-V_OV,label=label)
c = hl[0].get_color()
ax.plot(t,y2-V_OV, dashes=[4,2],color=c)
ax.plot(t,v_in - V_OV, linewidth=0.5, color=c)
ax.plot(t,t*0,'--',color='black')
if row == 1:
ax.set_ylim(-1.5,9.5)
else:
ax.set_ylim(-1.2,0.5)
ax.grid(True)
ax.set_xlim(t.min(), t.max())
ax.legend(loc='lower right', fontsize=11, labelspacing=0)
ax.xaxis.set_major_formatter(scale_formatter(1e6))
ax.set_ylabel(u'$V_{in} - V_{OV}$ (V)', fontsize=13)
if row == 2:
ax.set_xlabel(u'time (microseconds)')
fig.suptitle(u'Short pulse rejection: RC=%.2f$\mu$s' % (RC/1e-6),y=0.93);
It does, with some but not a huge amount of margin. (Originally I thought up a pulse requirement of 500ns from $V_{OV}-0.5V$ to 40V but that did NOT WORK.)
There’s a fine line here: we need a filter that is slow enough that it will block these spikes, but fast enough that it will let overvoltages trip the comparator in less than 3μs.
## Other thoughts
### Worst-case vs. root-sum-squares
I do most of my work assuming worst-case everywhere. This is like Murphy’s Law on steroids: R1 is at its tolerance limit on the low side, and R2 is at its tolerance limit on the high side, and U2’s reference is on the edge of its tolerance limit, with the right direction for all these factors to conspire against me and give me a worst-case output.
On the whole, this is really unlikely, so much more unlikely than any of the individual components being on the edge of their limit… that it may be overly pessimistic.
Another approach is to use the root-sum-squares of the individual component tolerances. This is a little naive, because not all of the tolerances weight equally in determining the limits of system variability. But you can use Monte Carlo analysis, where you simulate a large random number of values. For example, let’s just take the voltage divider, and assume those 1% resistors have a Gaussian distribution with a standard deviation of, say, 0.2%. Then we can try a million samples:
np.random.seed(123)
Rstd = 0.002
N = 1000000
R1 = 121e3 * (1 + Rstd*np.random.randn(N))
R2 = 10e3 * (1 + Rstd*np.random.randn(N))
fig = plt.figure(figsize=(7,11))
ax.hist(R1/1000, bins=100)
ax.set_xlabel('$R_1$',fontsize=13)
ax.hist(R2/1000, bins=100)
ax.set_xlabel('$R_2$',fontsize=13)
a = R2/(R1+R2)
ax.hist(a, bins=100)
ax.set_xlabel('$a=R_2/(R_1+R_2)$',fontsize=13)
import pandas as pd
def get_stats(x):
x0 = np.mean(x)
s = np.std(x)
dev = max(np.max(x-x0), np.max(x0-x))
return dict(mean=x0, max=np.max(x), min=np.min(x),std=s,
normstd=s/x0, normdev=dev/x0)
df = pd.DataFrame([get_stats(x) for x in [R1, R2, a]], index=['R1','R2','a'],
columns=['mean','min','max','std','normstd','normdev']).transpose()
def tagfunc(x):
return pd.Series([('K'
if x.name.startswith('R') and not k.startswith('norm')
else 'ratio',
x[k])
for k in x.index], x.index)
def formatfunc(x):
tag, v = x
if tag == 'K':
return '%.3f K' % (v*1e-3)
else:
return '%.5f' % v
df.apply(tagfunc).style.applymap(lambda cell: 'text-align: right').format(formatfunc)
R1 R2 a
mean 121.000 K 10.000 K 0.07634
min 119.885 K 9.908 K 0.07542
max 122.120 K 10.097 K 0.07729
std 0.242 K 0.020 K 0.00020
normstd 0.00200 0.00200 0.00261
normdev 0.00925 0.00973 0.01245
The table above is somewhat terse:
• a is the voltage divider ratio
• mean is the mean value $\mu_x$ of all samples
• min is the minimum value $x_\min$ of all samples
• max is the maximum value $x_\max$ of all samples
• std is the standard deviation $\sigma_x$ of all samples
• normstd is the normalized standard deviation ($\sigma_x/\mu_x$)
• normdev is the normalized worst-case deviation = $\max(x_\max-\mu_x, \mu_x-x_\min)/\mu_x$
For this set of samples, the worst-case deviation of R1 is 0.925%, the worst-case deviation of R2 is 0.973%, and the worst-case deviation of a is 1.245%.
Compare these results with a worst-case analysis approach if someone told us R1 had 0.925% tolerance and R2 had 0.973% tolerance:
R1_nom = 121e3
R2_nom = 10e3
a_nom = R2_nom/(R1_nom+R2_nom)
def showsign(x):
return '-' if x < 0 else '+'
for s in [-1,+1]:
R1 = R1_nom*(1+s*0.00925)
R2 = R2_nom*(1-s*0.00973)
a = R2/(R1+R2)
print "R1=121K%s0.925%%, R2=10K%s0.973%% => a=%.5f = a_nom*%.5f (%+.2f%%)" % (
showsign(s), showsign(-s), a, a/a_nom, (a/a_nom-1)*100
)
R1=121K-0.925%, R2=10K+0.973% => a=0.07768 = a_nom*1.01767 (+1.77%)
R1=121K+0.925%, R2=10K-0.973% => a=0.07501 = a_nom*0.98260 (-1.74%)
In other words, Monte Carlo analysis gives us a bound of ±1.245% for the voltage divider ratio, but worst-case analysis gives us a bound of ±1.77% for the voltage divider ratio.
Worst-case analysis is always pessimistic (assuming you’ve taken into account all possible factors that produce error — which is not easy, or even practical… but the major ones are going to be component tolerance and temperature coefficients, and that’s about the best you can do) and Monte Carlo analysis is… optimistic? realistic? The problem is that you can’t tell unless you know the error distributions.
If I buy a reel of ±1% surface-mount chip resistors, I have absolutely no idea what the distribution of their resistance is going to be, except they’ll all be very likely to be within 1% of their nominal values at 25°C, because that is what the manufacturer claims. Suppose I’ve got a reel of 5000 10kΩ resistors. Then 2500 of them could measure 10.1kΩ and 2500 could measure 9.9kΩ. Or they might all be 10.1kΩ. Or they might be uniformly distributed between 9.9kΩ and 10.1kΩ. Or they might have a tight normal distribution around 9.93kΩ (say, a mean of 9.93kΩ and standard deviation 2.4Ω) for this reel, but if I buy another reel manufactured from a different batch of raw materials, then they might have a similar tight normal distribution around 10.02kΩ. Maybe the resistors manufactured on Thursday nights are typically 20 ohms greater than other manufacturers, because the factory foreman is a stupid jerk and likes the temperature in his factory to be a few degrees warmer than the 20°C ± 1°C specified by the company’s engineering staff, and that throws off some of the manufacturing processes slightly.... most likely that would still pass the 1% tolerance test, although the foreman should be fired for adding unnecessary sources of error.
The distributions are likely to be somewhat Gaussian. It’s just that you can’t trust that to be the case. There’s an apocryphal story I read somewhere, but cannot find, and is probably false, that long ago, the 1% resistors and 5% resistors were two different grades from the same manufacturing process, in other words:
• each resistor was measured
• if the resistor was within 1% of nominal, it went into the 1% pile
• if the resistor was not within 1% of nominal, but was within 5% of nominal, it went into the 5% pile
• if the resistor was not within 5% of nominal, it went into the trash
This kind of case could produce some strange distributions:
np.random.seed(123)
Rnom = 10e3
R = Rnom * (1 + 0.015*np.random.randn(N))
bin_size = 0.002
bins = np.arange(0.92,1.08001,bin_size) * Rnom
counts, _ = np.histogram(R,bins=bins)
bin_center = (bins[:-1] + bins[1:])/2.0
selections = [('1%', 'green',lambda x: abs(x-1) <= 0.01),
('5%', 'yellow',lambda x: ((0.01 < abs(x-1)) & (abs(x-1) <= 0.05))),
('reject', 'red',lambda x: 0.05 < abs(x-1))]
fig=plt.figure(figsize=(7,4))
for name, color, select_func in selections:
ii = select_func(bin_center/Rnom)
ax.bar(bin_center[ii], counts[ii], width=bin_size*Rnom,
color=color, label='%s (N=%d)' % (name, sum(counts[ii])))
ax.set_xlim(0.92*Rnom, 1.08*Rnom)
ax.legend(fontsize=12, labelspacing=0)
ax.set_title('Distribution of resistors selected from $\\sigma=0.015$')
ax.set_xlabel('resistance (ohms)')
ax.set_ylabel('count')
<matplotlib.text.Text at 0x10e44d610>
Here we have a normal distribution with $\sigma=0.015 R$ (150 ohms).
• about half of them are 1% resistors with the green distribution: mostly uniformly distributed with a slight clustering around nominal
• about half are 5% resistors with the yellow distribution: a normal distribution with a gap in the center; most are in the 1-2% tolerance range
• around 0.08% of them are rejected because they’re more than 5% from nominal
Numerical answers for this Gaussian distribution — rather than samples from a Monte Carlo process — can be determined using the cumulative distribution function scipy.stats.norm.cdf:
import scipy.stats
stdev = 0.015
def cdf_between(r1, r2=None):
cdf1 = scipy.stats.norm.cdf(r1/stdev)
if r2 is None:
return 1-cdf1
else:
return scipy.stats.norm.cdf(r2/stdev)-cdf1
# the 2* is to capture left-side and right-side distributions
N1pct = 2*cdf_between(0,0.01)
N5pct = 2*cdf_between(0.01,0.05)
ranges = [0, 0.005, 0.01, 0.02, 0.05]
for i, r0 in enumerate(ranges):
cdf0 = scipy.stats.norm.cdf(r0/stdev)
try:
r1 = ranges[i+1]
tail = False
except:
r1 = None
tail = True
fraction = 2*cdf_between(r0,r1)
if not tail:
print "%.1f%% - %.1f%%: %.6f (%.2f%% of %d%% tolerance)" % (r0*100,r1*100,fraction,
fraction/(N1pct if r0 < 0.01 else N5pct)*100,
1 if r0 < 0.01 else 5)
else:
print " > %.1f%%: %.6f" % (r0*100,fraction)
print "%.6f: 1%% tolerance" % N1pct
print "%.6f: 5%% tolerance" % N5pct
0.0% - 0.5%: 0.261117 (52.75% of 1% tolerance)
0.5% - 1.0%: 0.233898 (47.25% of 1% tolerance)
1.0% - 2.0%: 0.322563 (63.98% of 5% tolerance)
2.0% - 5.0%: 0.181564 (36.02% of 5% tolerance)
> 5.0%: 0.000858
0.495015: 1% tolerance
0.504127: 5% tolerance
• 49.50% of these apocryphal resistors were graded as 1%
• 52.75% of them less than 0.5% from nominal
• 47.25% of them between 0.5% and 1% tolerance
• 50.41% of these apocryphal resistors were graded as 5%
• 63.98% of them between 1% and 2% tolerance
• 36.02% of them between 2% and 5% tolerance
• 0.09% of these apocryphal resistors were more than 5% from nominal
Grading may still be done for some electronic components (perhaps voltage references or op-amps), but it’s not a great manufacturing strategy. The demand for different grades may fluctuate with time, and is unlikely to match up perfectly with the yields from various grades. Suppose that you are the manufacturing VP of Danalog Vices, Inc., which produces the DV123 op-amp in two grades:
• an “A” grade op-amp with input offset voltage less than 1mV
• a “B” grade op-amp with 1mV - 5mV input offset.
Suppose also that the manufacturing process ends up with 40.7% in the “A” grade, 57.2% in the “B” grade, and 2.1% as yield failures.
Maybe in 2019, there were orders for 650,000 DV123A op-amps and 800,000 DV123B op-amps. To meet this demand, Danalog Vices fabricated wafers with enough dice for 1.8 million parts: 732,600 DV123A, 1,029,600 DV123B, and 37,800 yield failures, meeting demand and a little extra. At the end of the year, there are 82,600 excess DV123A in inventory and 229,600 excess DV123B in inventory.
Now in 2020, the forecasted orders are 900,000 DV123A and 720,000 DV123B op-amps. (Some major customer decided they needed higher precision.) You don’t have many options here… making 2.2 million dice would produce 895,400 DV123A op-amps and 1,258,400 DV123B op-amps. Combined with the previous year’s inventory, this would be enough to meet demand plus 78,000 extra DV123A op-amps and 768,000 DV123B. Tons of excess B grade op-amps.
That’s not going to work very well. If the fraction of customer orders of A grade parts is much higher than the natural yield of A grade parts, then there will be an excess of B grade parts. If we had too many A grade op-amps, Danalog Vices could package and sell them as B grade op-amps, but an excess of B grade op-amps will end up as scrapped inventory.
There’s no realistic way to shift the manufacturing process to make more A grade op-amps through grading alone. We could add a laser-trimming step on the manufacturing line to improve B-grade dice until they meet A-grade specs, which adds some cost.
To jump out of this grading quagmire and back to the overall point I am trying to make: you cannot be sure of the error distribution of components. Can’t can’t can’t. The best you might be able to do is get characterization data from the manufacturer, but this would be for one sample batch and may not be representative of the manufacturing process through the product’s full life cycle.
### Characterization data
Whether you will find this characterization data in the datasheet is really hit-or-miss. Some datasheets don’t have it at all. Some have limited information, as in the LM2903B datasheet:
The datasheet lists a specification as ±2.5mV offset voltage at 25°C. The characterization graph shows 62 samples within ±1.0mV offset voltage, with little variation over temperature.
A more detailed example of this type of characterization data is from the MCP6001 op-amp datasheet, which shows histograms of input offset voltage, offset voltage tempco, the offset voltage curvature or quadratic temperature coefficient (!), and input bias current.
Here’s Figure 2-1, showing an offset voltage histogram of around 65000 samples:
The MCP6001 datasheet claims ±4.5mV maximum at 25°C. I crunched some numbers based on reading the histogram and came up with a mean of −0.3mV with a standard deviation of σ=1.04mV; if this were representative of the population as a whole, then the limits of ±4.5mV are roughly −4σ and +4.6σ, and for a normal distribution, would have expected yield failures of roughly 32ppm below −4.5mV and 2ppm above 4.5mV. (These are just the results of scipy.stats.norm.cdf(-4) and scipy.stats.norm.cdf(-4.6).)
The main value of the characterization graphs (to me, at least) are not as numerical data that I can depend on directly, but rather that they show a roughly Gaussian distribution (and not, say, a uniform distribution) and show how conservative the manufacturer is in choosing minimum/maximum limits given this characterization data. You hear “six sigma” bandied about a lot — which can be interpreted in one way as having limits equal to six standard deviations from the mean — and for a Gaussian distribution, this represents about 2 failures per billion samples covering both low-end and high-end tails. (2*scipy.stats.norm.cdf(-6))
Note, however, the fine print at the beginning of the Typical Performance Curves section of the MCP6001 datasheet:
The graphs and tables provided following this note are a statistical summary based on a limited number of samples and are provided for informational purposes only. The performance characteristics listed herein are not tested or guaranteed. In some graphs or tables, the data presented may be outside the specified operating range (e.g., outside specified power supply range) and therefore outside the warranted range.
So use the specifications! I recommend you ignore worst-case analysis only at your own peril.
## Mitigating strategies
We’ve talked a lot about how different sources of error — resistance tolerance, comparator input offset voltage, temperature coefficients, etc. — contribute to total uncertainty of a circuit parameter like threshold voltage. It paints a grim picture; you will find that except for the simplest of circuits, it is hard to provide an overall error of less than 1%.
There are, however, ways to compensate for the effects of component tolerances. I count at least three:
• we can use ratiometric design techniques to reduce the effect of certain error sources
• we can calibrate our circuitry
• we can use digital signal processing to reduce the need for components with tolerance
### Ratiometric design
Ratiometric design is a method of circuit design where measurements are made of the ratio of two quantities rather than their absolute values. If they share some common source of error, then that error will cancel out. I talked about this in an article on thermistor signal conditioning. If I have a 3.3V supply feeding a voltage divider, and the same 3.3V supply used for an analog-to-digital converter, then the ADC reading will be the voltage divider ratio $R2/(R1+R2)$ — plus ADC gain/offset/linearity errors — and will not be subject to any variation in the 3.3V supply itself.
Or, suppose there are reasons to avoid a voltage divider configuration, and instead I need a current source to drive a resistive sensor, as shown in the left circuit below:
Here the ADC reading (as a fraction of fullscale voltage $V_{ref}$) is $I_0R_{sense}/V_{ref}$, which is sensitive to errors in both the current $I_0$ and the voltage reference $V_{ref}$.
We can handle this resistive sensor ratiometrically with the circuit on the right, by using a reference resistor $R_{ref}$ and a pair of analog multiplexers U1, U2. Here we have to take two readings, $x_1 = I_0R_{sense}/V_{ref}$ and $x_2 = I_0R_{ref}/V_{ref}$; if we divide them, we get $x_1/x_2 = R_{sense}/R_{ref}$ which is sensitive only to tolerances in the two resistors; variations in current $I_0$ and voltage $V_{ref}$ cancel out.
### Calibration
Calibration involves a measurement of an accurate, known reference. If I have some device with some measurement error that is consistent — for example a gain and offset — then I can measure one or more known inputs during a calibration step, and use those measurements to compensate for device errors.
One very common instance of calibration is the use of a tare weight with a scale — the weight of a platform or container is unimportant, so when that platform or container is empty, we can weigh it and use the measurement as a reference to subtract from a second measurement. When you go to the deli counter at a supermarket and get a half-pound or 200g of sliced turkey, the scale is automatically calibrated to an empty measurement first; then the weight of the turkey is determined using a measurement relative to the empty measurement.
That kind of measurement calibrates out the offset but not the gain; a gain calibration would require some standard weight to be used, like a standard 1kg weight.
Calibration can be done during manufacturing with external equipment (1kg weights, voltage or temperature standards, etc.) — this can be somewhat time-consuming or costly. After manufacturing, such measurements are possible only at limited times and with substantial expense.
The most important aspect of relying on calibration is to ensure that the calibration measurements remain valid.
If a circuit is prone to voltage offset, and we want to use calibration to compensate for that offset, we need to ensure that the offset does not change significantly during the time of use: drift due to time and temperature changes can eliminate the benefit of calibration. In fact, excessive drift can make a measurement using calibration worse than without that calibration — suppose some device measures voltage and has a worst-case accuracy of 2mV. The voltage offset during calibration might be +1.4mV; if it drifts to −1.4mV then the resulting accuracy including calibration is 2.8mV error. So measurement drift is a serious concern. Tare weight is an easy circumstance to avoid the effects of drift: people or sliced turkey or trucks on a scale can be measured a few seconds after a tare step to calibrate out offset, which is generally too short for changes in temperature or time drift. On the other hand, laboratory test equipment like oscilloscopes or multimeters typically are used for 12 months between calibrations, so they need to be designed for low drift and low temperature coefficient.
### Digital signal processing
DSP can also help to remove the need for analog components that introduce errors. Analog signal conditioning is still necessary to handle low-level amplification and high-frequency issues, but other operations like squaring or logarithms or filtering or applying temperature compensation can be done in the digital domain, where numeric errors can be made arbitrarily small.
One of the major achievements of DSP has been in equalization in communications. 56K dialup modems and DSL both represent a triumph of DSP over the limitations of analog signal processing. We now take it for granted that we have Internet bandwidths of 40Mbps. I remember the old acoustic-coupled 300-baud modems: imagine transmitting a file at 30 bytes per second. That’s about 2.6 megabytes per day. There’s only so much you can do with analog signal conditioning before you run into the challenges of component tolerances. DSP eliminates all that — assuming you can sample and process the data fast enough.
Just make sure to avoid using software as an overused crutch — the mantra of “Oh, we can fix that in software” makes me cringe. If there are design errors in the analog domain, they can be much more complex and costly to fix (and verify!) in the digital domain. Mismatches in components or noise coupling are problems that should be handled before they get inside a microcontroller. One of my pet peeves is the use of single-ended current-sense amplifiers in motor drives. Current sense resistors are relatively inexpensive these days: you can buy 10mΩ 1% 1W 1206 chip resistors for less than 10 cents in quantity 1000 that will produce a high-quality current sense signal without adding a lot of extra voltage drop. But they should be used with a differential amplifier to remove the effects of common-mode voltage that results from parasitic circuit resistance and inductance. This common-mode voltage is hard (if not impossible) to “just fix in software” — it can change with temperature, causes undesirable errors in analog overcurrent sensing, and can introduce coupling between current paths in the circuit.
Sources of measurement error need to be well-understood. The overvoltage circuit I discussed in this article is a good example; if I just design a circuit and build it and consider it done because “well, it seems to work”, then every source of error represents a latent risk in my design. Understanding and bounding those risks is the key to a successful design.
## Wrapup
Today we went through a grand tour around the idea of component tolerances.
We looked at this overvoltage detection circuit:
• R1 = 121kΩ
• R2 = 10.0kΩ
• R3 = 2.00kΩ
• R4 = 1.00kΩ
• R5 = 1.00MΩ
• R6 = 5.1kΩ
• U1 = 1/2 LM2903B
• U2 = NCP431B
• C1 = 56pF
• C2 = 100pF
We examined the various sources of component tolerance error, including:
• static errors
• mismatch of R1 / R2
• other sources of error besides the “1%” tolerance listed on a bill of materials (for example, temperature coefficient and mechanical strain)
• accuracy of voltage reference U2
• comparator input offset voltage and input bias current
• dynamic errors
• noise filtering with C1
• comparator response time vs. overdrive
We talked about some aspects of selecting the voltage reference and comparator, and about comparator hysteresis.
We explored the use of statistical analysis (Monte Carlo methods) as a more optimistic alternative to worst-case analysis, and investigated the issues of component error distribution.
Finally, we looked at methods of mitigating component error:
• ratiometric measurements
• calibration
• digital signal processing
Along the way, we touched upon a number of minor tangents:
• the price of 5% / 1% / 0.5% / 0.1% chip resistors
• irreversible resistance changes upon soldering into a PCB
• voltage divider error sensitivity to resistor tolerance, as a function of the nominal voltage divider ratio $\alpha$, namely $S \approx 2(1-\alpha).$ So a voltage divider ratio near 1 has hardly any error, whereas small voltage divider ratios almost double the resistor tolerance: a 10:1 voltage divider using 1% resistors can have a worst-case error of approximately 2%.
• the internal architecture of the LM393 comparator
• grading of components based on measured values
I hope you take away some useful techniques for managing component error in your next project.
Previous post by Jason Sachs:
Scorchers, Part 2: Unknown Bugs and Popcorn
Next post by Jason Sachs:
Scorchers, Part 3: Bare-Metal Concurrency With Double-Buffering and the Revolving Fireplace
To post reply to a comment, click on the 'reply' button attached to each comment. To post a new comment (not a reply to a comment) check out the 'Write a Comment' tab at the top of the comments.
|
2020-09-24 15:44:29
|
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 2, "mathjax_display_tex": 0, "mathjax_asciimath": 1, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.46345773339271545, "perplexity": 3613.8274727905077}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2020-40/segments/1600400219221.53/warc/CC-MAIN-20200924132241-20200924162241-00482.warc.gz"}
|
https://scala-lang.org/files/archive/spec/2.13/01-lexical-syntax.html
|
Lexical Syntax
Scala source code consists of Unicode text.
The program text is tokenized as described in this chapter. See the last section for special support for XML literals, which are parsed in XML mode.
To construct tokens, characters are distinguished according to the following classes (Unicode general category given in parentheses):
1. Whitespace characters. \u0020 | \u0009 | \u000D | \u000A.
2. Letters, which include lower case letters (Ll), upper case letters (Lu), title case letters (Lt), other letters (Lo), modifier letters (Lm), letter numerals (Nl) and the two characters \u0024 ‘$’ and \u005F ‘_’. 3. Digits ‘0’ | … | ‘9’. 4. Parentheses ‘(’ | ‘)’ | ‘[’ | ‘]’ | ‘{’ | ‘}’. 5. Delimiter characters ‘’ | ‘'’ | ‘"’ | ‘.’ | ‘;’ | ‘,’. 6. Operator characters. These consist of all printable ASCII characters (\u0020 - \u007E) that are in none of the sets above, mathematical symbols (Sm) and other symbols (So). Identifiers There are three ways to form an identifier. First, an identifier can start with a letter, followed by an arbitrary sequence of letters and digits. This may be followed by underscore ‘_‘ characters and another string composed of either letters and digits or of operator characters. Second, an identifier can start with an operator character followed by an arbitrary sequence of operator characters. The preceding two forms are called plain identifiers. Finally, an identifier may also be formed by an arbitrary string between back-quotes (host systems may impose some restrictions on which strings are legal for identifiers). The identifier then is composed of all characters excluding the backquotes themselves. As usual, the longest match rule applies. For instance, the string decomposes into the three identifiers big_bob, ++=, and def. The rules for pattern matching further distinguish between variable identifiers, which start with a lower case letter or _, and constant identifiers, which do not. For this purpose, lower case letters include not only a-z, but also all characters in Unicode category Ll (lowercase letter), as well as all letters that have contributory property Other_Lowercase, except characters in category Nl (letter numerals), which are never taken as lower case. The following are examples of variable identifiers: Some examples of constant identifiers are The ‘$’ character is reserved for compiler-synthesized identifiers. User programs should not define identifiers that contain ‘$’ characters. The following names are reserved words instead of being members of the syntactic class id of lexical identifiers. The Unicode operators \u21D2 ‘´\Rightarrow´’ and \u2190 ‘´\leftarrow´’, which have the ASCII equivalents => and <-, are also reserved. Here are examples of identifiers: When one needs to access Java identifiers that are reserved words in Scala, use backquote-enclosed strings. For instance, the statement Thread.yield() is illegal, since yield is a reserved word in Scala. However, here's a work-around: Thread.yield() Newline Characters Scala is a line-oriented language where statements may be terminated by semi-colons or newlines. A newline in a Scala source text is treated as the special token “nl” if the three following criteria are satisfied: 1. The token immediately preceding the newline can terminate a statement. 2. The token immediately following the newline can begin a statement. 3. The token appears in a region where newlines are enabled. The tokens that can terminate a statement are: literals, identifiers and the following delimiters and reserved words: The tokens that can begin a statement are all Scala tokens except the following delimiters and reserved words: A case token can begin a statement only if followed by a class or object token. Newlines are enabled in: 1. all of a Scala source file, except for nested regions where newlines are disabled, and 2. the interval between matching { and } brace tokens, except for nested regions where newlines are disabled. Newlines are disabled in: 1. the interval between matching ( and ) parenthesis tokens, except for nested regions where newlines are enabled, and 2. the interval between matching [ and ] bracket tokens, except for nested regions where newlines are enabled. 3. The interval between a case token and its matching => token, except for nested regions where newlines are enabled. 4. Any regions analyzed in XML mode. Note that the brace characters of {...} escapes in XML and string literals are not tokens, and therefore do not enclose a region where newlines are enabled. Normally, only a single nl token is inserted between two consecutive non-newline tokens which are on different lines, even if there are multiple lines between the two tokens. However, if two tokens are separated by at least one completely blank line (i.e a line which contains no printable characters), then two nl tokens are inserted. The Scala grammar (given in full here) contains productions where optional nl tokens, but not semicolons, are accepted. This has the effect that a new line in one of these positions does not terminate an expression or statement. These positions can be summarized as follows: Multiple newline tokens are accepted in the following places (note that a semicolon in place of the newline would be illegal in every one of these cases): A single new line token is accepted • in front of an opening brace ‘{’, if that brace is a legal continuation of the current statement or expression, • after an infix operator, if the first token on the next line can start an expression, • in front of a parameter clause, and • after an annotation. The newline tokens between the two lines are not treated as statement separators. With an additional newline character, the same code is interpreted as an object creation followed by a local block: With an additional newline character, the same code is interpreted as two expressions: With an additional newline character, the same code is interpreted as an abstract function definition and a syntactically illegal statement: With an additional newline character, the same code is interpreted as an attribute and a separate statement (which is syntactically illegal). Literals There are literals for integer numbers, floating point numbers, characters, booleans, symbols, strings. The syntax of these literals is in each case as in Java. Integer Literals Values of type Int are all integer numbers between$-2^{31}$and$2^{31}-1$, inclusive. Values of type Long are all integer numbers between$-2^{63}$and$2^{63}-1$, inclusive. A compile-time error occurs if an integer literal denotes a number outside these ranges. Integer literals are usually of type Int, or of type Long when followed by a L or l suffix. (Lowercase l is deprecated for reasons of legibility.) However, if the expected type pt of a literal in an expression is either Byte, Short, or Char and the integer number fits in the numeric range defined by the type, then the number is converted to type pt and the literal's type is pt. The numeric ranges given by these types are: Byte ´-2^7´ to ´2^7-1´ Short ´-2^{15}´ to ´2^{15}-1´ Char ´0´ to ´2^{16}-1´ The digits of a numeric literal may be separated by arbitrarily many underscores for purposes of legibility. Floating Point Literals Floating point literals are of type Float when followed by a floating point type suffix F or f, and are of type Double otherwise. The type Float consists of all IEEE 754 32-bit single-precision binary floating point values, whereas the type Double consists of all IEEE 754 64-bit double-precision binary floating point values. If a floating point literal in a program is followed by a token starting with a letter, there must be at least one intervening whitespace character between the two tokens. The phrase 1.toString parses as three different tokens: the integer literal 1, a ., and the identifier toString. 1. is not a valid floating point literal because the mandatory digit after the . is missing. Boolean Literals The boolean literals true and false are members of type Boolean. Character Literals A character literal is a single character enclosed in quotes. The character can be any Unicode character except the single quote delimiter or \u000A (LF) or \u000D (CR); or any Unicode character represented by an escape sequence. String Literals A string literal is a sequence of characters in double quotes. The characters can be any Unicode character except the double quote delimiter or \u000A (LF) or \u000D (CR); or any Unicode character represented by an escape sequence. If the string literal contains a double quote character, it must be escaped using "\"". The value of a string literal is an instance of class String. Multi-Line String Literals A multi-line string literal is a sequence of characters enclosed in triple quotes """ ... """. The sequence of characters is arbitrary, except that it may contain three or more consecutive quote characters only at the very end. Characters must not necessarily be printable; newlines or other control characters are also permitted. Escape sequences are not processed, except for Unicode escapes (this is deprecated since 2.13.2). This would produce the string: The Scala library contains a utility method stripMargin which can be used to strip leading whitespace from multi-line strings. The expression evaluates to Method stripMargin is defined in class scala.collection.StringOps. Interpolated string An interpolated string consists of an identifier starting with a letter immediately followed by a string literal. There may be no whitespace characters or comments between the leading identifier and the opening quote " of the string. The string literal in an interpolated string can be standard (single quote) or multi-line (triple quote). Inside an interpolated string none of the usual escape characters are interpreted no matter whether the string literal is normal (enclosed in single quotes) or multi-line (enclosed in triple quotes). Note that the sequence \" does not close a normal string literal (enclosed in single quotes). There are three forms of dollar sign escape. The most general form encloses an expression in ${ and }, i.e. ${expr}. The expression enclosed in the braces that follow the leading $ character is of syntactical category BlockExpr. Hence, it can contain multiple statements, and newlines are significant. Single ‘$’-signs are not permitted in isolation in an interpolated string. A single ‘$’-sign can still be obtained by doubling the ‘$’ character: ‘$$’. A single ‘"’-sign can be obtained by the sequence ‘\$"’.
The simpler form consists of a ‘$’-sign followed by an identifier starting with a letter and followed only by letters, digits, and underscore characters, e.g $id. The simpler form is expanded by putting braces around the identifier, e.g $id is equivalent to ${id}. In the following, unless we explicitly state otherwise, we assume that this expansion has already been performed.
The expanded expression is type checked normally. Usually, StringContext will resolve to the default implementation in the scala package, but it could also be user-defined. Note that new interpolators can also be added through implicit conversion of the built-in scala.StringContext.
One could write an extension
Escape Sequences
The following character escape sequences are recognized in character and string literals.
charEscapeSeq unicode name char
‘\‘ ‘b‘ \u0008 backspace BS
‘\‘ ‘t‘ \u0009 horizontal tab HT
‘\‘ ‘n‘ \u000a linefeed LF
‘\‘ ‘f‘ \u000c form feed FF
‘\‘ ‘r‘ \u000d carriage return CR
‘\‘ ‘"‘ \u0022 double quote "
‘\‘ ‘'‘ \u0027 single quote '
‘\‘ ‘\‘ \u005c backslash \
In addition, Unicode escape sequences of the form \uxxxx, where each x is a hex digit are recognized in character and string literals.
It is a compile time error if a backslash character in a character or string literal does not start a valid escape sequence.
Symbol literals
A symbol literal 'x is deprecated shorthand for the expression scala.Symbol("x").
The apply method of Symbol's companion object caches weak references to Symbols, thus ensuring that identical symbol literals are equivalent with respect to reference equality.
Tokens may be separated by whitespace characters and/or comments. Comments come in two forms:
A single-line comment is a sequence of characters which starts with // and extends to the end of the line.
A multi-line comment is a sequence of characters between /* and */. Multi-line comments may be nested, but are required to be properly nested. Therefore, a comment like /* /* */ will be rejected as having an unterminated comment.
Trailing Commas in Multi-line Expressions
If a comma (,) is followed immediately, ignoring whitespace, by a newline and a closing parenthesis ()), bracket (]), or brace (}`), then the comma is treated as a "trailing comma" and is ignored. For example:
XML mode
In order to allow literal inclusion of XML fragments, lexical analysis switches from Scala mode to XML mode when encountering an opening angle bracket ‘<’ in the following circumstance: The ‘<’ must be preceded either by whitespace, an opening parenthesis or an opening brace and immediately followed by a character starting an XML name.
The scanner switches from XML mode to Scala mode if either
• the XML expression or the XML pattern started by the initial ‘<’ has been successfully parsed, or if
• the parser encounters an embedded Scala expression or pattern and forces the Scanner back to normal mode, until the Scala expression or pattern is successfully parsed. In this case, since code and XML fragments can be nested, the parser has to maintain a stack that reflects the nesting of XML and Scala expressions adequately.
Note that no Scala tokens are constructed in XML mode, and that comments are interpreted as text.
The following value definition uses an XML literal with two embedded Scala expressions:
|
2022-01-26 05:00:56
|
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 0, "mathjax_asciimath": 1, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.5235188603401184, "perplexity": 7777.553560088896}, "config": {"markdown_headings": false, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.3, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2022-05/segments/1642320304915.53/warc/CC-MAIN-20220126041016-20220126071016-00199.warc.gz"}
|
https://www.physicsforums.com/threads/finding-potential-from-force.131011/
|
# Finding potential from force
1. Sep 6, 2006
### genius2687
A perticle moves in a plane under the influence of a force, acting toward a center of force, whose magnitude is
F= 1/r^2{1 - 1/c^2[(r')^2 - 2(r'')r]}
where r is the distance of the particle to the center of force. Find the generalized potential that will result in such a force, and from that the Lagrangian for the motion in a plane.
I have assumed that we can use F = -partial(V)/partial(r). Is there an easy way to do this problem?
I have tried integrating the second term (r')^2/r^2 by parts (You get this when you distribute the 1/r^2 factor in front). When I do this, I get two terms, one of which looks like 1/r*2(r'(dr') instead of something in the form of (....)dr.
|
2017-11-23 23:21:59
|
{"extraction_info": {"found_math": false, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.8441972136497498, "perplexity": 486.4521089508985}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2017-47/segments/1510934806979.99/warc/CC-MAIN-20171123214752-20171123234752-00203.warc.gz"}
|
http://www.reference.com/browse/Probabilistic+proof
|
Definitions
Mathematical proof
In mathematics, a proof is a convincing demonstration (within the accepted standards of the field) that some mathematical statement is necessarily true. Proofs are obtained from deductive reasoning, rather than from inductive or empirical arguments. That is, a proof must demonstrate that a statement is true in all cases, without a single exception. An unproved proposition that is believed to be true is known as a conjecture.
The statement that is proved is often called a theorem. Once a theorem is proved, it can be used as the basis to prove further statements. A theorem may also be referred to as a lemma, especially if it is intended for use as a stepping stone in the proof of another theorem.
Proofs employ logic but usually include some amount of natural language which usually admits some ambiguity. In fact, the vast majority of proofs in written mathematics can be considered as applications of informal logic. Purely formal proofs are considered in proof theory. The distinction between formal and informal proofs has led to much examination of current and historical mathematical practice, quasi-empiricism in mathematics, and so-called folk mathematics (in both senses of that term). The philosophy of mathematics is concerned with the role of language and logic in proofs, and mathematics as a language.
History and etymology
Plausibility arguments using heuristic devices such as pictures and analogies preceded strict mathematical proof.. The early history of the concept of proof dates back to the early Greek and Chinese civilisations. Thales (640–546 BCE) proved some theorems in geometry. Eudoxus (408–355 BCE) and Theaetetus (417–369 BCE) formulated theorems but did not prove them. Aristotle (384–322 BCE) said definitions should describe the concept being defined in terms of other concepts already known. Euclid (300 BCE) began with undefined terms and axioms (propositions regarding the undefined terms assumed to be self-evidently true, from the Greek “axios” meaning “something worthy”) and used these to prove theorems using deductive logic. Modern proof theory treats proofs as inductively defined data structures. There is no longer an assumption that axioms are "true" in any sense; this allows for parallel mathematical theories built on alternate sets of axioms (see Axiomatic set theory and Non-Euclidean geometry for examples).
The word Proof comes from the Latin probare meaning "to test". Related modern words are the English "probe", "proboscis”, "probity", and "probability", and the Spanish "probar" (to smell or taste, or (lesser use) touch or test). The early use of "probity" was in the presentation of legal evidence. A person of authority, such as a nobleman, was said to have probity, whereby the evidence was by his relative authority, which outweighed empirical testimony.
Nature and purpose
There are two different conceptions of mathematical proof. The first is an informal proof, a natural-language expression that is intended to convince the audience of the truth of a theorem. This is the type of proof typically encountered in mathematics. Because of their use of natural language, the standards of rigor for informal proofs will depend on the audience of the proof.
The second sort of proof is a formal proof. These are strings of symbols that follow precisely specified definitions. The field of proof theory studies formal proofs and their properties. Formal proofs are rarely used in published mathematics, however.
A classic question in philosophy asks whether mathematical proofs are analytic or synthetic. Kant, who introduced the analytic-synthetic distinction, believed mathematical proofs are synthetic.
Proofs may be viewed as aesthetic objects, admired for their mathematical beauty. The mathematician Paul Erdős was known for describing proofs he found particularly elegant as coming from "The Book", a hypothetical tome containing the most beautiful method(s) of proving each theorem. The book Proofs from THE BOOK, published in 2003, is devoted to presenting 32 proofs its editors find particularly pleasing.
Methods of proof
Direct proof
In direct proof, the conclusion is established by logically combining the axioms, definitions, and earlier theorems. For example, direct proof can be used to establish that the sum of two even integers is always even:
For any two even integers $x$ and $y$ we can write $x=2a$ and $y=2b$ for some integers $a$ and $b$, since both $x$ and $y$ are multiples of 2. But the sum $x+y = 2a + 2b = 2\left(a+b\right)$ is also a multiple of 2, so it is therefore even by definition.
This proof uses definition of even integers, as well as distribution law.
Proof by mathematical induction
In proof by mathematical induction, first a "base case" is proved, and then an "induction rule" is used to prove a (often infinite) series of other cases. Since the base case is true, the infinity of other cases must also be true, even if all of them cannot be proved directly because of their infinite number. A subset of induction is Infinite descent. Infinite descent can be used to prove the irrationality of the square root of two.
The principle of mathematical induction states that: Let N = { 1, 2, 3, 4, ... } be the set of natural numbers and P(n) be a mathematical statement involving the natural number n belonging to N such that (i) P(1) is true, ie, P(n) is true for n = 1 (ii) P(m + 1) is true whenever P(m) is true, ie, P(m) is true implies that P(m + 1) is true. Then P(n) is true for all natural numbers n.
Mathematicians often use the term "proof by induction" as shorthand for a proof by mathematical induction. However, the term "proof by induction" may also be used in logic to mean an argument that uses inductive reasoning.
Proof by transposition
Proof by Transposition establishes the conclusion "if p then q" by proving the equivalent contrapositive statement "if not q then not p".
In proof by contradiction (also known as reductio ad absurdum, Latin for "reduction into the absurd"), it is shown that if some statement were false, a logical contradiction occurs, hence the statement must be true. This method is perhaps the most prevalent of mathematical proofs. A famous example of a proof by contradiction shows that $sqrt\left\{2\right\}$ is irrational:
Suppose that $sqrt\left\{2\right\}$ is rational, so $sqrt\left\{2\right\} = \left\{aover b\right\}$ where a and b are non-zero integers with no common factor (definition of rational number). Thus, $bsqrt\left\{2\right\} = a$. Squaring both sides yields 2b2 = a2. Since 2 divides the left hand side, 2 must also divide the right hand side (as they are equal and both integers). So a2 is even, which implies that a must also be even. So we can write a = 2c, where c is also an integer. Substitution into the original equation yields 2b2 = (2c)2 = 4c2. Dividing both sides by 2 yields b2 = 2c2. But then, by the same argument as before, 2 divides b2, so b must be even. However, if a and b are both even, they share a factor, namely 2. This contradicts our assumption, so we are forced to conclude that $sqrt\left\{2\right\}$ is irrational.
Proof by construction
Proof by construction, or proof by example, is the construction of a concrete example with a property to show that something having that property exists. Joseph Liouville, for instance, proved the existence of transcendental numbers by constructing an explicit example.
Proof by exhaustion
In proof by exhaustion, the conclusion is established by dividing it into a finite number of cases and proving each one separately. The number of cases sometimes can become very large. For example, the first proof of the four colour theorem was a proof by exhaustion with 1,936 cases. This proof was controversial because the majority of the cases were checked by a computer program, not by hand. The shortest known proof of the four colour theorem today still has over 600 cases.
Probabilistic proof
A probabilistic proof is one in which an example is shown to exist, with certainty, by using methods of probability theory. This is not to be confused with an argument that a theorem is 'probably' true. The latter type of reasoning can be called a 'plausibility argument' and is not a proof; in the case of the Collatz conjecture it is clear how far that is from a genuine proof. Probabilistic proof, like proof by construction, is one of many ways to show existence theorems.
Combinatorial proof
A combinatorial proof establishes the equivalence of different expressions by showing that they count the same object in different ways. Usually a bijection is used to show that the two interpretations give the same result.
Nonconstructive proof
A nonconstructive proof establishes that a certain mathematical object must exist (e.g. "Some X satisfies f(X)"), without explaining how such an object can be found. Often, this takes the form of a proof by contradiction in which the nonexistence of the object is proven to be impossible. In contrast, a constructive proof establishes that a particular object exists by providing a method of finding it. A famous example of a nonconstructive proof shows that there exist two irrational numbers $a$ and $b$ such that $a^b$ is a rational number:
Either $sqrt\left\{2\right\}^\left\{sqrt\left\{2\right\}\right\}$ is a rational number and we are done (take $a=b=sqrt\left\{2\right\}$), or $sqrt\left\{2\right\}^\left\{sqrt\left\{2\right\}\right\}$ is irrational so we can write $a=sqrt\left\{2\right\}^\left\{sqrt\left\{2\right\}\right\}$ and $b=sqrt\left\{2\right\}$. This then gives $left \left(sqrt\left\{2\right\}^\left\{sqrt\left\{2\right\}\right\}right \right)^\left\{sqrt\left\{2\right\}\right\}=sqrt\left\{2\right\}^\left\{2\right\}=2$, which is thus a rational of the form $a^b$
Visual proof
Although not a formal proof, a visual demonstration of a mathematical theorem is sometimes called a "proof without words". The picture at right is an example of a historic visual proof of the Pythagorean theorem in the case of the (3,4,5) triangle.
Elementary proof
An elementary proof is (usually) a proof which does not use complex analysis. For some time it was thought that certain theorems, like the prime number theorem, could only be proved using "higher" mathematics. However, over time, many of these results have been reproved using only elementary techniques.
Two-column proof
A particular form of proof using two parallel columns is often used in elementary geometry classes. The proof is written as a series of lines in two columns. In each line, the left hand column contains a proposition, while the right hand column contains a brief explanation how this proposition is either an axiom, hypothesis, or can be obtained from previous lines.
Statistical proofs in pure mathematics
The expression "statistical proof" may be used technically or colloquially in areas of pure mathematics, such as involving cryptography, chaotic series, and probabilistic or analytic number theory. It is less commonly used to refer to a mathematical proof in the branch of mathematics known as mathematical statistics. See also "Statistical proof using data" section below.
Computer-assisted proofs
Until the twentieth century it was therefore assumed that any proof could, in principle, be checked by a competent mathematician to confirm its validity. However, computers are now used both to prove theorems and to carry out calculations that are too long for any human or team of humans to check; the first proof of the four color theorem is an example of a computer-assisted proof. Some mathematicians are concerned that the possibility of an error in a computer program or a run-time error in its calculations calls the validity of such computer-assisted proofs into question. In practice, the chances of an error invalidating a computer-assisted proof can be reduced by incorporating redundancy and self-checks into calculations, and by developing multiple independent approaches and programs.
Undecidable statements
A statement that is neither provable nor disprovable from a set of axioms is called undecidable (from those axioms). One example is the parallel postulate, which is neither provable nor refutable from the remaining axioms of Euclidean geometry.
Mathematicians have shown there are many statements that are neither provable nor disprovable in Zermelo-Fraenkel set theory with the axiom of choice (ZFC), the standard system of set theory in mathematics; see list of statements undecidable in ZFC.
Gödel's (first) incompleteness theorem shows that many axiom systems of mathematical interest will have undecidable statements.
Heuristic mathematics and experimental mathematics
While early mathematicians such as Eudoxus of Cnidus did not use proofs, from Euclid to the foundational mathematics developments of the late 19th and 20th centuries, proofs were an essential part of mathematics. With the increase in computing power in the 1960’s, significant work began to be done investigating mathematical objects outside of the proof-theorem framework, in experimental mathematics. Early pioneers of these methods intended the work ultimately to be embedded in a classical proof-theorem framework, e.g. the early development of fractal geometry, which was ultimately so embedded.
Related concepts
Colloquial use of "mathematical proof"
The expression "mathematical proof" is used by lay people to refer to using mathematical methods or arguing with mathematical objects, such as numbers, to demonstrate something about everyday life, or when data used in an argument are numbers. It is sometime also used to mean a "statistical proof" (below), especially when used to argue from data.
Statistical proof using data
"Statistical proof" from data refers to the application of statistics, data analysis, or Bayesian analysis to infer propositions regarding the probability of data. While using mathematical proof to establish theorems in statistics, it is usually not a mathematical proof in that the assumpions from which probability statements are derived require empirical evidence from outside mathematics to verify. In physics, in addtion to statistical methods, "statistical proof" can refer to the specialized mathematical methods of physics applied to analyze data in a particle physics experiment or observational study in cosmology. "Statistical proof" may also refer to raw data or a convincing diagram involving data, such as scatter plots, when the data or diagram is adequately convincing without further anaylisis.
Inductive logic proofs and Bayesian analysis
Proofs using inductive logic, while considered mathematical in nature, seek to establish propositions with a degree of certainty, which acts in a similar manner to probability, and may be less than one certainty. Bayesian analysis establishes assertions as to the degree of a person's subjective belief. Inductive logic should not be confused with mathematical induction.
Proofs as mental objects
Psychologism views mathematical proofs as psyshological or mental objects. Mathematician philosophers such as Leibnitz, Frege, and Carnap, have attempted to develop a symantics for what they considered to be the language of thought, whereby whereby standards of mathematical proof might be applied to empirical science.
Influence of mathematical proof methods outside mathematics
Philosopher-mathematicians such as Schopenhauer have attempted to formulate philosophical arguments in an axiomatic manner, whereby mathematical proof standards could be applied to argumentation in general philosophy. Other mathematician-philosophers have tried to use standards of mathematical proof and reason, without empiricism, to arrive at statements outside of mathematics, but having the certainty of propositions deduced in a mathematical proof, such as Descarte’s cogito argument. Kant and Frege considered mathamatical proof to be analytic apriori.
Ending a proof
Sometimes, the abbreviation "Q.E.D." is written to indicate the end of a proof. This abbreviation stands for "Quod Erat Demonstrandum", which is Latin for "that which was to be demonstrated". An alternative is to use a square or a rectangle, such as □ or ∎, known as a "tombstone" or "halmos". Often, "which was to be shown" is verbally stated when writing "QED", "□", or "∎" in an oral presentation on a board.
Sources
• Polya, G. Mathematics and Plausible Reasoning. Princeton University Press, 1954.
• Fallis, Don (2002) “What Do Mathematicians Want? Probabilistic Proofs and the Epistemic Goals of Mathematicians.” Logique et Analyse 45:373-88.
• Franklin, J. and Daoud, A. Proof in Mathematics: An Introduction. Quakers Hill Press, 1996. ISBN 1-876192-00-3
• Solow, D. How to Read and Do Proofs: An Introduction to Mathematical Thought Processes. Wiley, 2004. ISBN 0-471-68058-3
• Velleman, D. How to Prove It: A Structured Approach. Cambridge University Press, 2006. ISBN 0-521-67599-5
|
2014-12-18 22:46:16
|
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 25, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.8013496994972229, "perplexity": 525.3171059993424}, "config": {"markdown_headings": false, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2014-52/segments/1418802768034.59/warc/CC-MAIN-20141217075248-00136-ip-10-231-17-201.ec2.internal.warc.gz"}
|
https://deltaepsilons.wordpress.com/tag/ordinary-differential-equations/
|
## Geodesics and the exponential mapNovember 4, 2009
Posted by Akhil Mathew in differential geometry, MaBloWriMo.
Tags: , , ,
Ok, we know what connections and covariant derivatives are. Now we can use them to get a map from the tangent space ${T_p(M)}$ at one point to the manifold ${M}$ which is a local isomorphism. This is interesting because it gives a way of saying, “start at point ${p}$ and go five units in the direction of the tangent vector ${v}$,” in a rigorous sense, and will be useful in proofs of things like the tubular neighborhood theorem—which I’ll get to shortly.
Anyway, first I need to talk about geodesics. A geodesic is a curve ${c}$ such that the vector field along ${c=(c_1, \dots, c_n)}$ created by the derivative ${c'}$ is parallel. In local coordinates ${x_1, \dots, x_n}$, here’s what this means. Let the Christoffel symbols be ${\Gamma^k_{ij}}$. Then using the local formula for covariant differentiation along a curve, we get
$\displaystyle D(c')(t) = \sum_j \left( c_j''(t) + \sum_{i,k} c_i'(t) c_k'(t) \Gamma^j_{ij}(c(t)) \right) \partial_j,$
so ${c}$ being a geodesic is equivalent to the system of differential equations
$\displaystyle c_j''(t) + \sum_{i,k} c_i'(t) c_k'(t) \Gamma^j_{ij}(c(t)) = 0, \ 1 \leq j \leq n.$ (more…)
## Covariant derivatives and parallelismNovember 1, 2009
Posted by Akhil Mathew in differential geometry, MaBloWriMo.
Tags: , , ,
First of all, here is a minor remark I should have made before. Given a connection ${\nabla}$ and a vector field ${Y}$, the operation ${X \rightarrow \nabla_X Y}$ is linear in ${X}$ over smooth functions—thus it is a tensor (of type (1,1)), and the value at a point ${p}$ can be defined if ${X}$ is replaced by a tangent vector at ${p}$. In other words, we get a map ${T(M)_p \times \Gamma(TM) \rightarrow T(M)_p}$, where ${\Gamma(TM)}$ denotes the space of vector fields. We’re going to need this below. (more…)
|
2017-07-24 00:53:04
|
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 21, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.909812867641449, "perplexity": 148.48693300405517}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 20, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2017-30/segments/1500549424645.77/warc/CC-MAIN-20170724002355-20170724022355-00500.warc.gz"}
|
http://clay6.com/qa/jee-main-aipmt/jeemain%2Cphysics/class12/electrostatic-potential-and-capacitance
|
# Recent questions and answers in Electrostatic Potential and Capacitance
Questions from: Electrostatic Potential and Capacitance
### If the electric potential of the inner shell is 10V and that of the outer shell is 5V , then the potential at the centre will be
To see more, click for all the questions in this category.
|
2019-10-21 08:06:56
|
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 2, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.5881227254867554, "perplexity": 480.1167894605937}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2019-43/segments/1570987763641.74/warc/CC-MAIN-20191021070341-20191021093841-00555.warc.gz"}
|
https://web-apotheke24.de/student-exploration-half-life-answer-key-activity-b.html
|
student exploration half life answer key activity b. Student Exploration Distance Time And Velocity Time Graphs Answer Key 2020. Have a personal gallery or a blog to share with your friends. PDF Student Exploration Collision Theory Answer Key. Here is the access download page of student exploration disease spread gizmo answer key pdf click this link to download or read online. Collision Theory Worksheet Answer. com · the electron dot diagram for a neutral atom of chlorine (atomic. Free Gizmos Answers Keys Circuits : Worksheet Half Life from img. A prairie is flat or gently rolling grassland with few trees, such as in parts of central United States and Canada. You invest $,1000 in savings account that earns 3% interest for 3 years. Student exploration coral reefs 1 answers. exploration cell structure answers. He was the first to distinguish between different types of memory. Given various terms in the radioactive decay equation. 45 Add to Cart Browse Study Resource | Subjects Accounting Anthropology Architecture Art Astronomy Biology Business Chemistry Communications Computer Science. Manual Guided Activity Life In Ancient Rome Answers. The cornell doodle notes are 3 pages and there are 3 scaffolded versions plus the answer key included. Gizmo allows you to look at typical animal and plant cells under a microscope. B Describe what is meant by the term half life of a radioactive nuclide. Dna profiling gizmo answers quizlet ~ student exploration dna profiling gizmo answer key quizlet + my pdf collection 2021. Everything's an Argument with 2016 MLA Update University Andrea A Lunsford, University John J Ruszkiewicz. circuits gizmo quiz answers, Student exploration phases of water answer key, All gizmo answer keys pdf, Student exploration air track answers key work. Over millions of years rocks are broken down and. The time required for one of the radioactive atoms in a sample to decay. A brief explanation of how to find the half-life time of a radioactive substance. The students will basically teach themselves how exponential functions work simply by completing this activity. Student Exploration- Half-life (ANSWER KEY)$7. Student exploration radiation answer key. Start a new game then press the tilde key then type in these codes if it does not work then answer again. Section biology mitosis worksheet coloring answers cell cycle pdf. Different minerals show different degrees of solubility in water in that some minerals dissolve much more readily than others. This is most commonly referred to as a location-time chart. Does the mean molecular speed change as much as the temperature as the water heats up? Explain. An instrumnet that detects the particles emmitted by decaying atoms. Gizmos Student Exploration: Crumple Zones. Gizmos student exploration rock cycle answer key. Student Exploration Half Life Gizmo Answer Key Activity B from www. The dna must be copied so there is a full set of dna to pass on to each daughter cell. Meiosis Gizmo Answer Key Pdf Free. Some of the worksheets displayed are Half life data teacher answer key, Atoms half life questions and answers, Student exploration half life answers gizmo, Gizmos work answers, Student exploration phases water answers key, Answer key to gizmo cell energy cycle, Gizmo answer key, Half life lab answer key for pennies. Interpret: How does the Half-life setting affect how quickly the simulated substance decays? Activity B (continued from previous page). Life Student Exploration: Half-life (ANSWER KEY) You can use the Half-life Gizmo to model the decay of Carbon-14, which has a half-life of Aofnf wsiwthoeurt cKaresy. When the living thing dies, the carbon -14 begins to decay at a steady rate with a half -life of 5,730 years. The amount of cloud cover is shown by filling in the circle. Read Free Explore Learning Gizmo Answer Key Half Life. Student Exploration- Collision Theory Answer Key By. 2022 jan 12 2021 student exploration half life answer key activity b. This study source was downloaded by 100000793680026 from qwivy. Meiosis gizmo answer key pdf page 1 student exploration dna analysis worksheet answers meiosis vs. Primary and secondary sources activity the byzantine empire and emerging europe answer key pdf. Student exploration gizmo diffusion cell structure answer key gizmo explorelearning. You can learn a lot using our short fill-in-the-blank activities called Student Exploration Sheets. Check that Theoretical decay is . Collision theory helps explain why these factors affect reaction rate. Student Workbook Answer Key: PRE-LESSON NESPA Lesson One AK 2 EG-2007-01-203-ARC Name: Date: Pre-Lesson Activity Step 1: On the back of this paper draw a picture of our solar system. Half Life Gizmo Answer Key Activity B. Students should observe that the more time that passes the more radioactive decaying takes place. Mafic silicates like olivine and pyroxene tend to weather much faster than felsic minerals like quartz and feldspar. Select the BAR CHART tab, and turn on Show numerical values. M6l2 gizmo lab cell division honors vocabulary cell division centriole centromere chromatid cell division mitosis vocabulary closeup views and animations of certain organelles is provided. Tara markov half sister of brion markov was the illegitimate daughter of the king of markovia while in markovia she came under the care of a dr. Students that reinforce student learning through practice and instant feedback. Observe the effect of gaining and losing electrons on charge and rearrange the atoms to represent the molecular structure. Gizmos Student Exploration: Senses Answer Key. Course Title CHEMISTRY CHM 2211L. Dec 20, 2021 · Student exploration stoichiometry answer key activity a moles. Experiment: Click Reset, and select the GRAPH tab. gizmo half life answer key - Bing. Identify the flanking sequences and the number of repeat units gaat in the. Ischemic orchitis is more commonly caused by injury to the pampiniform plexus than to the testicular artery (C). Read online student exploration half life gizmo answers ncpdev book pdf free download link book now. Half-life = 5 seconds Half-life = 35 seconds 12. Half life kentschools you can use the half life gizmo to model the decay of carbon 14. In the Gizmo, select User chooses half-life and Theoretical decay. A radioactive sample contains 2. For times other than whole half-lives, the equation R = R 0 e − λt must be used to find R. com Student Exploration Half Life Answer Key - My gizmos diffusion answer key osmosis gizmo answer sheet osmosis worksheet Answer key student exploration hr diagram pdf may not make exciting reading but gizmo Student Exploration Osmosis Gizmo Answer Key Sock, skirt, shirt. Shadow Health Ppatient Tina Jones Transcript Subjective Data Collection Objective Data. Small structure that synthesizes proteins. February 9, 2022 Leave a Comment on Addition Number Bonds Worksheets For Grade 1. Key topics include the chemistry of life, the cell, genetics, plant and animal structure and function, ecology and human biology. This worksheet features a World Map with an Answer Key. Student%20Exploration%20Half%20Life%20Gizmo%20Answer%20Key. A black circle indicates completely overcast conditions, while a white circle indicates a clear sky. You will need blue green and orange markers or colored pencils for the first part of this activity. Introduction: Different isotopes of the same element have the same number of protons but different numbers of neutrons in the nucleus. Hlaf life gizmo answer key displaying top 8 worksheets found for this concept. ATOMS: HALF LIFE QUESTIONS AND ANSWERS. a egg cell has to be able to create a new life, while a perm cell just carries. Gizmos Bailey Sage Student Exploration Moles Download To Score An A 2019 In 2021 Dimensional Analysis Chemistry Paper Scientific Notation. Chemistry questions and answers. student-exploration-half-life-gizmo-answers-ncpdev 1/1 Downloaded from dev. In each trial, measure the half-life using the Half-life probe on the graph. PAG 12 - Research skills ZIP 315KB. Half-life ANSWER KEY You can use the Half-life Gizmo to model the decay of Carbon-14 which has a half-life of approximately 6000 years actual value is 5730 years. The piles graphically show the meaning of the term "half-life. Fill porosity gizmo answer key edit online. 7), Chapter Tests, Review Tests, Quiz, Assessment Tests, Cumulative Assessments, etc. Student Exploration Building Dna Answer Key Quizlet My. Vocabulary: atomic mass, …Sep 30, 2021 · Gizmos student exploration cell structure answers. 16-10-2021 · Explore learning building dna gizmo answer key pdf. You may not be perplexed to enjoy all book collections half life gizmo answer key that we will totally offer. Half Life Gizmo Answer Key – Half Life Gizmo Answer Key Activity B Half Life Gizmo Answer Key Activity B Measuring Half Life 13122020 half life answer key you can use the half life gizmo to model the decay of carbon 14 which has a half life of approximately 6 000 years actual value is. Answers to student activity sheet questions are provided. Draw arrows to show where the outer electrons will go during a chemical reaction then draw the resulting compound. Answer key rock cycle gizmo answers. 3 Formation of the Solar System. Activity b measuring half life get the gizmo ready click reset select isotope a from hist. • Early European explorers sought gold in Africa then began to trade slaves. Student #4 holds up the second sphere and the group will repeats the activity. Student exploration weather maps gizmo answer key created date. Gizmo Answer Key Activity Student Exploration Distance Time. Hypothesize what half‐lifeis: Half‐life is the amount of time it takes for approximately half of the radioactive atoms in a sample to decay intoa more stable form. 1 using the following equation. Gizmos Student Exploration: Convection Cells Answer Key. Values for the depth of the water level were recorded at various times. Flows internal heat of Earth cycle. For each question, choose the best answer. Desmos Classroom Activities Loading. Hlaf Life Gizmo Answer Key - Displaying top 8 worksheets found for this concept. Student Exploration Half Life Answer Key Activity B ExploreLearning ® is a Charlottesville, VA based company that develops online solutions to improve student learning in. Get the Gizmo ready: Click Reset ( ). Worksheets are student exploration stoichiometry gizmo answer key pdf meiosis and mitosis answers work honors biology ninth grade pendleton high school 013368718x ch11 159 178 richmond public schools department of curriculum and electricitymagnetism study guide answer key. Cell Division Gizmo Answer Key Activity B » Free Practice. 2021 · Half life gizmo answer key activity b half life gizmo quiz answers. Student Exploration Half Life Gizmo Answer Key Answer: Calculate the number of half-lives; 0. Where would you push or pull on the lever shown to lift the sheep most easily. Select Isotope A from the left drop-down menu. A weather station symbol, shown at right, summarizes the weather conditions at a location. File Type PDF Student Exploration Half Life Answer Key Student Exploration Half Life Gizmo Answers Ncpdev Student Exploration: Half-life (ANSWER KEY) You can use the Half-life Gizmo to model the decay of Carbon-14, which has a half-life of approximately 6,000 years (actual value is 5,730 years). Waves gizmo worksheet answer key activity b. Describe a way to determine the amount of radioactive isotope remaining after a given number of half-lives. Earth spins around its axis, which is tilted relative to Earth's orbit. Cell Division Gizmo Answer Key Activity B. Sketch each resulting decay curve graph in the spaces below. Coefficient, combustion, compound, decomposition, double replacement, element, molecule, product in the balancing chemical equations gizmo, look at the floating molecules below the initial reaction: Balancing equations get the gizmo 18 · student exploration half life gizmo answer key activity b life gizmo answer key 02 · title: Check your. Student Exploration: Element Builder. Some of the worksheets for this concept are Half life of paper mms pennies puzzle pieces licorice, Atoms half life questions and answers, Half life gizmo quiz answers, , Compound interest name work, Appendix a human karyotyping work, Answer key to nuclear. Activity B continued on next page. What are some of the different substances that make up. Showing top 8 worksheets in the category - Gizmos Student Exploration Half Life. Some of the worksheets displayed are Half life data teacher answer key, Atoms half life questions and answers, Student exploration half life answers gizmo, Gizmos work answers, Student exploration. Boyles And Charles Law Worksheet Answer Key. com student exploration for gizmo answer key chemical equations. If angular momentum is conserved, then any change in the size of a nebula must be compensated for by a proportional change in period. Gizmos student exploration cell structure answers. Formulae, equations and amount of substance This guide will help teachers plan for and teach formulae, equations and amount of substance by giving guidance on key concepts and suggesting classroom activities. Student exploration stoichiometry answer key. during mitosis, a single cell divides to produce two daughter cells. Guided reading activity 10 1 answer key. Student exploration cell structure. Ebook Pdf Explorelearning Gizmo Answer Key Stoichiometry available for free PDF download. Some of the worksheets displayed are Student exploration half life gizmo answers ncpdev, Student exploration gizmo answers half life, Student exploration half life gizmo answer key, Gizmo work answers, Half life gizmo answer key, Gizmos work answers, Answers to the half life gizmo, Half life data teacher answer key. He found ways to uncover hidden or lost memories. Math 202 module 3 lesson 4 practice activity one 31. Cell Division Worksheet Answer Key. Half-Life: Teacher Answer Key Each radioactive (unstable) element has a different half-life. student exploration half life answers gizmo Posts. Showing top 8 worksheets in the category - Half Life Gizmos. a powerful letter of recommendation the ecg made easy john r hampton cross stitch alphabet guide chapter wise notes igcse study bank bon voyage level 2 workbook and audio activities student hd. The condition is usually self-limited (E), so urgent exploration (B) is not indicated. Full PDF Package Download Full PDF Package. Vocabulary: activated complex, catalyst, chemical reaction, concentration, enzyme, half-life, molecule, product, reactant, surface area. questions and prompts in the orange boxes. Answer the questions based on the above reading. Topics Covered: Cell Cycle, Interphase, Mitosis, Cytokinesis, Chromatin, Chromosomes, Role of the cell cycle in growth and healing. Sheet 6 hours ago Gizmo comes with an answer key Answers for explore learning gizmosEach lesson includes a Student Exploration Sheet an Exploration Sheet Answer Key a Teacher Guide a Vocabulary Sheet and Assessment QuestionsThe Assessment Questions do not come with an answer keyGizmos is an online learning tool Explore Learning Gizmos Answer …. A student says "I think that some chlorine atoms have 16 protons. Showing top 8 worksheets in the category - Gizmos Half Life Answer. Student exploration energy conversions gizmo answer key. half-life and Random decay are selected. How do you think the radius of an atom . Benefits of career exploration activities. PDF 380KB; Atomic structure, periodicity and inorganic chemistry This guide will help teachers plan for and teach atomic structure by giving guidance on key concepts and suggesting. Half-Life Gizmo Answer Key Pdf Activity B. The answer key indicates a correct answer provided by the question, but might not be the only acceptable answer. com Student Exploration Half Life Answer Key – My gizmos diffusion answer key osmosis gizmo answer sheet osmosis worksheet Answer key student exploration hr diagram pdf may not make exciting reading but gizmo Student Exploration Osmosis Gizmo Answer Key Sock, skirt, shirt. " COVID-19 Learning Note: Half-life is also the way that scientists describe how long the coronavirus can last on different surfaces. 1 Bottle Graph Exploration, Part I. Half Life Lab Gizmo Answer Key by Hedvig on November 25, 2021 November 25, 2021 Leave a Comment on Half Life Lab Gizmo Answer Key Gizmos Student Exploration Dehydration Synthesis Dehydrator This Or That Questions Gizmo. Find the activity of the sample after 7. Building Dna Gizmo Answer Key Activity B. Apply: Use the Gizmo to find the half-life of Isotope B. Student Exploration Half Life Gizmo Answer Key Activity B. Refer to the Stoichiometry Gizmo bartleby. Ad bring learning to life with thousands of worksheets, games, and more from education. RADIOACTIVE DECAY AND HALF LIFE (2011;3) (b) Describe what is meant by the term, "half life of a radioactive nuclide". Screenshot of Half-life Gizmo Exploration Sheet Answer Key. For sexually reproducing organisms it is th. Based on these two observations, would you say that a cell spends most of its life. Daughter atom, decay, geiger counter, halflife, isotope, neutron, radiation, radioactive,. This problem has been solved! See the answer . What is the total amount of money that you will have after this 3-year period? Case 2: 2. Using the energy of sunlight, plants build molecules of glucose C 6 H 12 O 6 and Getting the books student exploration cell energy cycle gizmo answer key now is not type of A plant needs carbon dioxide, nutrients, soil, water, and. These half life calculations worksheet answers free printable worksheets include geometry questions which will need to obtain answered. A possible answers 1 a word processor is a computer program which manipulates text and produces documents suitable for printing. Read free student exploration half life gizmo answer key life of approximately 6 000 years actual value is 5 730 years. With a team of extremely dedicated and quality lecturers, student exploration: pond ecosystem activity b answer key will not only be a place to share knowledge but also to help students get inspired to explore and discover many. Gizmos Student Exploration: Comparing Climates (Metric) 4. Half-Life: Teacher Answer Key Each radioactive (unstable) element has a different half‐life. The answer key is provided at the end of the document. Half Life Worksheet Answer Key What Is Radioactivity. Some of the worksheets for this concept are Student exploration half life gizmo answers ncpdev, Student exploration gizmo answers half life, Student exploration half life gizmo answer key, Gizmo work answers, Half life gizmo answer key, Gizmos work answers, Answers to the half life gizmo, Half life data teacher. Each link also contains an Activity Guide with implementation suggestions and a Teacher Journal post concerning further details about the use of the activity in the classroom. Student Exploration: Balancing Chemical Equations Directions: Follow the instructions to go through the simulation. Student exploration coral reefs 1 answers exploration half life gizmo answers. Student exploration gizmo answers half student exploration. The Results for Practice Meiosis Answer Key. States RadTown Radon Activity Set EPA 402-B-19-055 Environmental Protection ^^^1 . Gallery of half life gizmo quiz answer key. Activity B: Measuring half-life Get the Gizmo ready: Click Reset. mcdougal littell algebra 1 chapter 4 test b answers; ap psychology 2008 released multiple choice answers; ap biology chi square pogil answers; acls pretest answers 2022 pre assessment test; chemistry mcqs for class 12 with answers pdf; spanish reflexive verbs answer key; 2002 d ap chemistry free response answers; brainpop answer key acids and bases. With water, light energy from the sun,. com About answer activity gizmo division… Read. The main reason we have tides is because we have a ___moon_____. Student Exploration Building Dna Answer Key Quizlet May 11, 2021 · Student Exploration Building Dna Answer Key Quizlet My. Student exploration half life gizmo answer key student exploration. Hypothesize what half‐life is: The amount of time ittakes for half of the radioactive atoms ina sample to decay intoa more stable form. Have you ever made microwave popcorn? If so, what do you hear while the popcorn is in the microwave?. PDF Conejo Valley Unified School District > Homepage. Record the half-life time in the first space of the table below. 0 ratings0% found this document useful (0 votes). Calculate: Calculate the mean half-life for each temperature. Vocabulary: coefficient , combustion, compound, decomposition, double replacement, element, molecule, product, reactant, single replacement, subscript, synthesis Prior Knowledge Questions (Do these BEFORE using the Gizmo. Energy conversions answer key vocabulary. Student Exploration Half Life Gizmo Answer Key. Student Exploration- Nuclear Decay (ANSWER KEY) Prior Knowledge Questions (Do these BEFORE using the Gizmo. Explain: How is temperature related to the motions of molecules?. Oxygen production is used to measure the rate of photosynthesis. the engagement and exploration phases of the instructional model. Radioactive decay shows disappearance of a constant fraction of activity per unit time. Measuring motion answer key vocabulary: The plate tectonics gizmo answer key is attached below, may also be used. Activity B Weathering Rates Gizmo Answer Key. Set up Gizmo: Click Reset, and turn on Show axis. The half life of Uranium-238 is 4. Match the phrases on the left with the term that best fits. GIZMOS Student Exploration. A molecule is two or more atoms bonded together. Check that Theoretical decay is selected. Dec 22, 2020 -- student exploration carbon cycle gizmo answer key activity a. axis angle to set the axis angle to a realistic 23°. This tutorial has all the answers. Student Exploration: Meiosis Voc abul ary: a na phas e , ch r o m o so m e , cr o s so ver , cyto ki ne si s, d ip lo id, D N A , d o m in a nt, ga m ete , ge n o t yp e , ge rm cell, ha ploid, ho m o l o go u s c hrom o so m es , inte rpha se , me ios is, m e t a pha se ,. World Population Map Activity Guide An Introduction for Teachers Population Education partnered with ODT Maps to bring you a 2015 edition of the World Population Map and a set of classroom activities to use the map as a launch pad for student exploration of global demographics and human development. Find the amount of simple interest that you would earn at the end of a 3-year period. ) Dec 26, 2021 · Carbon Cycle Gizmo Answer Key 2021 [*FREE Unlocks Inside*] Density gizmo answers activity b. What does the number next to isotopes signify? The number indicates the isotope's mass number. 5 hours Materials: An Introduction to the Circulatory System (Student Handout) Teacher access to computer, projector and the Internet QUIZ - An Introduction to the Circulatory System (Student Handout). What is an isotope? Isotopes are versions of the same element. Place Your Final Answer In The Formula Mass Column. The purpose of these questions is to activate prior knowledge and get students thinking. In the early 1600s, Johannes Kepler discovered that both Mercury and Venus would transit the sun in 1631. Sep 03, 2021 · Worksheet half life gizmo answer key. / even something as easy as guessing the b. The approximate mass of each particle is given in universal mass units (u). Sac that stores water, nutrients, or waste products. You can find the Student Exploration Sheets in two different places: Before you launch a Gizmo, it is located under Lesson Materials below the Gizmo. An ecosystem consists of all organisms (living things) in an area, plus the natural landscape. Showing top 8 worksheets in the category hlaf life gizmo answer key. Drag the red point along the graph. In the Student Exploration Gizmo Answers Half Life. de Gizmo carbon cycle activity a answer key Some of theIntroduction Gizmo… Lab equipment activity answer key part b Student Exploration Covalent Bonds Gizmo Answer Key Pdf : Natural Selection Gizmos Answers Gizmo Half Life … Along the way, some of the the neutrons will decay into a. Gizmos - Student Exploration: Half-life - Questions with true Answers Vocabulary: daughter atom, decay, Geiger counter, half-life, isotope, neutron, radiation, radioactive, radiometric dating Preview 2 out of 9 pages. Carbon dioxide doesn't have any hydrogen in it, though, so the plant must use another source for hydrogen. Gizmo Warm-up Usually when you. Homework: A-Day - Please c omplete the Nuclear Energy Exploration for 1/5/2016 & Continue working on your Model. Throw one hundred coins, remove all those that come up tails, place them in a pile, repeat—you've got yourself a hands-on model for radioactive decay. 2022 · jan 12, 2021 · student exploration half life answer key activity b. At 100 °C, the mean molecular speed is about 17% faster than at 0 °C. Lewis's Medical-Surgical Nursing Diane Brown, Helen Edwards, Lesley Seaton, Thomas. pdf from BIOLOGY 10 at North Oconee High School. merely said, the student exploration mineral identification gizmo answer key is universally May 09, 2021 · Some of the worksheets displayed are half life answer key what is radioactivity, gizmos work answers, explore learning student exploration stoichiometry answer key, half life data Stoichiometry Gizmo Flashcards Quizlet January 2nd, 2019 -. Free trials available to teachers who haven't tried Mystery Science, at schools that haven't had full memberships. 2 Bottle Graph Exploration, Part II. Download student exploration osmosis gizmo answer key pdf download 40 free theory worksheets. ExploreLearning ® is a Charlottesville, VA based company that develops online solutions to improve student learning in math and science. This is really frustrating because the whole point of this in a class setting is to be able to turn in the carbon copy to a professor. In the Gizmo run the race many times with a variety of different graphs. Grade 1 addition worksheets these math worksheets start with simple addition using pictures or number lines followed by one digit math facts and then. Modeling bacteria transformation worksheet answer key docx. Half Life of Radioactive Isotopes. Acces PDF Student Exploration Half Life Answer Key As this student exploration half life answer key, it ends going on physical one of the favored book student exploration half life answer key collections that we have. info Cross section worksheet form a answer key. Vocabulary: atom, atomic number, electron, electron dot diagram, element, energy level, ion, isotope, mass number, neutron, nucleus, periodic table, proton, radioactive, valence electrons. Answer: It takes 10 half-lives for the activity to fall from 6144 to 6 Bq. Half life gizmo answer key : Get the answers you need, now! Half life gizmo answer key, gizmos work answers, answers to the half life. Click on the "Absolute Dating" interactive exploration. Student Exploration Half Life Answer Key Activity B. Physical Activity and Physical Education: Relationship to. Choose from 500 different sets of dna profiling flashcards on quizlet. Student Exploration Cell Division Bio Miscbl3 Ga 3 Student Exploration Worksheet Cell Divisio Cell Division Division Activities Student. Explorelearning Gizmo Answer Key Stoichiometry. Student Behavior Explain Teaching Strategy Explains possible solutions or answers to other students. You can use the Half-life Gizmo to model the decay of Carbon-14, which has a half-life of approximately 6,000 years (actual value is 5,730 years). The Cell Structure Gizmo allows you to look at typical animal and plant cells under a microscope. Student answer to the dichotomous have students work through an online dichotomous key. Read indiglo thermostat manual 44155c Reader Get link elements of language first course answer key; elgin. Students will read about the circulatory system and answer probing questions to test their understanding. Gravity from the moon and sun drive the tides. Showing top 8 worksheets in the category - Half Life Lab Gizmos Answer Key. The book covalent bond gizmo answer key pdf kindle is very good and also much like. A cycle is a path with the same start … Continue reading "Answer Key Rock Cycle Gizmo Answers". Student Exploration Half Life Gizmo Answer Key Activity B Gizmo Warm-up: Determining density A mineral is a naturally formed. Some of the worksheets for this concept are answer key for student. Reefs are also impacted by disease-causing bacteria, humans, and other biotic factors, or living parts of theThe decline of coral reefs has been well documented, reef by reef. • Trade increased in Southeast Asia, and the Dutch built a trade empire based on. Student exploration moles answers. Which entrance to the Student Center is used most often between classes, (e. What was Hermann Ebbinghaus's contribution to the study of memory? *A. Kinematic Equations Worksheet Math Worksheets #1 Answers Physical Science: half life answer key,contains student exploration collision theory worksheet (a): Review the group's answer to question #2 on the Explore 2 worksheet. Life answers student exploration gizmo answers half life student exploration half. Student Exploration Half Life Answer Key Activity B ExploreLearning ® is a Charlottesville, VA based company that develops. Gizmo cell structure answer key activity b dink-magazin. Apply : Use the Gizmo to find the half-life of Isotope B. 12 write the balanced equation for the reaction of acetic acid with aluminum hydroxide to form water and. World atlas student activities answer key Student Exploration: Phases of Water Answer Key Vocabulary: boil, condense, density, freeze, gas, liquid, melt, molecule, phase, solid, volume Prior Knowledge Questions (Do these BEFORE using the Half Life Gizmo Answer Key 50000 Free Ebooks In The …. Some of the worksheets displayed are Gizmos work answers, Half life gizmo quiz answers, Half life gizmo answer key, Atoms half life questions and answers, Unit conversion work with answer key, , Student exploration phases of water answer key, Get the gizmo ready activity b reset micro view. The procedure follows the three-stage learning cycle of exploration,concept development,and application. MATH 202 Module 3 Lesson 4 Practice Activity One. Gizmos Student Exploration: Water Cycle Answer Key. Calculators (optional) Management. Written by teachers for teachers and students, The Physics Classroom provides a wealth of resources that meets the varied needs of both students and teachers. This is why you remain in the best website to look the amazing ebook to have. explore learning student exploration cell structure. Each correct answer is 1 point. Select the electron configuration tab. Investigate the decay of a radioactive substance. ) Calculate: Calculate the mean half-life for each temperature. Then paste the image into a blank document, and label each image with the half. Half Life Gizmo Answer Key Activity B Islero Guide from lamborghini-islero. Activity A: Phases of the cell cycle. Calculators are optional but can simplify the calculations. Half-life: time required to decay a sample to 50% of its . Package created by the Golgi apparatus. When you are assigned an observation activity, you can do the activity whenever it is most convenient for you. half of a quadratic to write the equation of the situation, and use the video of half of a Key concepts include b) zeros; c) x- and y-intercepts) NCTM Standards Student Exploration: Individual Work : Students are responsible for contributing to the success of their group video. Worksheets are half life gizmo answer key half life gizmo quiz. X32 Helicon Remote Wifi Activator Full Version Nulled. Students will be assessed on their understanding of adding, subtracting, multiplying, dividing, and factoring polynomials as it relates to real-life problems. All books are in clear copy here and all files are secure so don t worry about it. a half-life of 30 min, and she leaves her lab to meet with a grad student for . Student exploration coral reefs 1 answers Waves gizmo quiz answers - victorina. Organisms often found in a prairie ecosystem include prairie dogs, swift foxes, black-footed ferrets, and of course the grass itself. The mass of an object can be measured with a calibrated. dll manual; farmall super m mv owners manual operators ih tractor genie 08 activity board; gmc yukon owners manual 2005; gravimetric determination of phosphorus lab; hackers heroes. Activity b moles gizmo answer key Activity b moles gizmo answer key. In your drawing, show the different sizes of the planets and where they are located. Theory Answer KeyStudent Exploration Collision Theory Answer Key half-life, molecule, product, reactant, surface Page 5/15. Student Exploration: Collision Theory. Nuclear decay gizmo answer key activity a continued 1 day ago Stitcher. Dec 21, 2016 Circuit Builder Gizmo Answer Key. The batteries are contained in the removable power unit and the fan is controlled by a switch. Student exploration waves answer key. Meiosis Gizmo Answer Key Pdf Activity A Meiosis Se 2 Bsc 1020 Human Biology. STEM Cases, Handbooks and the associated Realtime Reporting System are protected by US Patent No. The half-life and the number of radioactive atoms can be adjusted, and theoretical or random decay can be observed. Student exploration half life gizmo answer key student. Y: Activity B: Calculating density Get the Gizmo ready: Replace the objects on the shelves. Gizmos Student Exploration Cell Structure Answers. Activity 1 mitosis and meiosis key names consider an organism with a diploid number of 4 for each of the following stages of mitosis and meiosis 1 within the. answer terms=definition, may be hard to answer definition=terms. exploration stoichiometry answer key activity b get the gizmo ready charles t m epub. Students will be evaluated on how clearly and accurately they explain. Hypothesize what half-life is: The amount of time it takes for half of the radioactive atoms in a sample to decay into a more stable form. Some of the worksheets displayed are student exploration phases of water answer key 11 circumference and area of circles work methods of heat transfer conduction gizmo student exploration circuits answer key pdf phase changes gizmo answer key chapter 18 the circulatory. Data can be interpreted visually using a dynamic graph, a bar chart, and a table. answer terms=definition, may be hard to answer definition=terms Learn with flashcards, games, and more — for free. student exploration: pond ecosystem activity b answer key provides a comprehensive and comprehensive pathway for students to see progress after the end of each module. Cell division gizmo answer key : Customizable versions of all lesson materials. Testicular torsion (D) is less likely than a vascular injury in this case, although both would present with acute testicular pain and decreased or. Then they will make estimations and student #1 will records it on the chart. The Physics Classroom serves students, teachers and classrooms by providing classroom-ready resources that utilize an easy-to-understand language that makes learning interactive and multi-dimensional. The angular momentum of an object is proportional to the square of its size (diameter) divided by its period of rotation (D 2 P) (D 2 P). Experiment : Click Reset , and select the GRAPH tab. Module I: Functions and Rate of Change. Student Exploration- Ocean Tides (ANSWER KEY). Technological advances of modern society have contributed to a sedentary lifestyle that has changed the phenotype of children from that of 20 years ago. The paper student exploration energy conversions gizmo answer key. Activity B (continued from previous page) 6. The oceans help to distribute heat and cold; without the oceans, climate variations would be much more severe. This process of replicating the cell car of the body cells is called mitosis. docx Sh«jir Open New! Edit your docs with Microsoft Word Online. the amount of time the radioactive atom releases particles and energy during a decay process. Big Ideas Math Book Algebra 1 Answer Key Chapter 5 Solving Systems of Linear Equations We have covered all the Questions from Exercises(5. Click on pop out icon or print icon to worksheet to print or. Read the introduction and answer the following question. the amount of time it takes for half of the radioactive particles to decay. PDF Student Exploration Half Life Gizmo Answer Key. Each radioactive unstable element has a different halflife. (answers can vary but the energy needed to vaporize the sample should be greater that. In this setting, the half-life will be different each time you run the simulation. Building pangea gizmo answers activity b, photosynthesis gizmo answer key explore learning, student exploration plate tectonics, calorimetry gizmo work. Earth's axis is an imaginary line that connects the. Showing top 8 worksheets in the category half life gizmo answer key 6th grade. (Note: To get exact times, you can refer to the TABLE tab. ANSWER KEY : BUILD AN ATOM PART I: ATOM SCREEN Build an Atom simulation . Carbon Cycle Gizmo Answer Key Activity A. What day of creation did God create the sun, moon, and stars? a. Activity b_ plant cells gizmo answers. In the isotope symbol of each atom, there is a superscripted (raised) number. 13 \mu g of pure 11C, which has a half-life of 20. Hlaf Life Gizmo Answer Key Worksheets. Relate: 50 Teacher Interview Questions and Answers to Help You Prepare. Check that the Half-life is 20 seconds and the. Put the spheres back in the bag. their own lunch 2 false, she deals with. Gizmos Student Exploration| Magnetism Answer Key| Grade A+| Vocabulary: attract, bar magnet, ferromagnetic, magnetize, north pole, repel, south pole| Success!. Avogadros number balanced equation cancel coefficient dimensional analysis molar mass mole molecular mass stoichiometry prior knowledge questions do these. Activity B: Measuring half-life Get the Gizmo ready : Click Reset. Chemistry nuclear reactions worksheet answer key. Lucky Leprechaun Half Life Problems Worksheet Answer Key. In the gizmo select user chooses half lifeand theoretical decay. Select the Mystery half-life from the left menu. Element Builder Gizmo Assessment Answers : Electron. There are three times when the answer key might be displayed: In tutorial questions, if you skip a step, the answer key is displayed for that step before the due date. Gizmos Student Exploration: Half-life. Teacher guide gizmo cell division answer key ebook cell structure exploration activities student exploration stoichiometry gizmo answer key pdf student exploration dichotomous keys gizmo answer key ionic. Gizmo answers, half life gizmo answer key, student exploration collision theory worksheet answers, read student […]. Student Exploration Advanced Circuits May 8, 2010 Exploration: The objective of the following activities is to give students the Circuit Builder. Electron Configuration Gizmo Answer Key Activity B - Riz Books from s3. Suppose you added a spoonful of sugar to hot water and another to ice-cold water. half-life will be different each time you run the simulation. Carbon -14 is a radioactive isotope found in small amounts in all living things. Mar 05, 2022 · Gizmo answer key pdf results. Student Exploration: Nuclear Decay Vocabulary: alpha particle, atomic number, beta particle, daughter product, gamma ray, isotope, mass number, nuclear decay, positron, radioactive, subatomic particle Prior Knowledge Questions (Do these BEFORE using the Gizmo. Cycle Gizmo Answer Key Activity A. Number Bonds Grade 1 - Displaying top 8 worksheets found for this concept. 250 mCi in two half-lives, to 0. (Gardner's: Bodily Kinesthetic) a. Gizmo Teacher Answer Keys - XpCourse. Prior Knowledge Questions (Do these BEFORE using the Gizmo. Jan 17 2021 Gizmo Warm-up Meiosis is a type of cell division that results in four. Content practice a lesson 3 dna and genetics answer key. The behaviors and traits of today's children, along with their genetics, are determinants of their growth and development; their physical, mental, and psychosocial health; and their physical, cognitive, and academic performance. b Could you tell us something about the different ways you use computers? c What do you think about people Coursebook answer key. This exploration is not only effective, but students enjoy the real-world problems. Student Exploration: Activity B: Measuring half-life Get the Gizmo ready: Click Reset. Half‐Life Number of Number Radioactive Atoms 0 4000 1 2000 2 1000 3 500 4 250 5 125 Half-Life Data-Teacher Answer Key 1. Student Exploration Half Life Gizmo Answer Key Activity B Gizmo Warm-up: Determining density A mineral is a naturally formed crystal. Trial 200 °C 150 °C 100 °C 50 °C 1 2 Mean half-life Repeat the experiment at different temperatures to complete the table. ) The chart below gives the locations, charges, and approximate masses of three subatomic.
|
2022-05-18 14:02:33
|
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.2123969942331314, "perplexity": 6305.286566102839}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 20, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2022-21/segments/1652662522270.37/warc/CC-MAIN-20220518115411-20220518145411-00705.warc.gz"}
|
https://cs.stackexchange.com/questions/61136/prove-that-l-nsubseteq-m-langle-m-rangle-lm-nsubseteq-lm
|
# Prove that $L^{\nsubseteq}_{M'}=\{ \langle M \rangle \ | \ L(M) \nsubseteq L(M')\}$ with $M'$ a TM that always halt is undecidable
I've done some other problems by reduction, but I'm quite stuck here. I'm not really sure what to do with $M'$. I know that because $L(M) \nsubseteq L(M')$ there exists $w \in L(M)$ and $w \notin L(M')$ and I'm trying to create the machine that ignores the input and accepts only when M accepts $w$, but I'm kinda stuck on what to do next. Any suggestions?
Don't be confused by this question. Instead of studying $L^{\nsubseteq}_{M'}=\{ \langle M \rangle \ | \ L(M) \nsubseteq L(M')\}$ you might consider as well the language $L_G=\{ \langle M \rangle \ | \ L(M) \nsubseteq G\}$ for $G$ being any language. It is important to notice that there is always a Turing machines $M$ with $\langle M \rangle \in L_G$ but always a machine $M'$ with $\langle M' \rangle \not \in L_G$. So you are looking for a nontrivial set of Turing machines. This allows you to apply Rice's theorem and you are done.
|
2021-09-17 06:21:45
|
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.7197985649108887, "perplexity": 91.2181455119312}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2021-39/segments/1631780055601.25/warc/CC-MAIN-20210917055515-20210917085515-00680.warc.gz"}
|
https://www.r-bloggers.com/2016/02/dynamic-stochastic-general-equilibrium-models-made-relatively-easy-with-r/
|
Want to share your content on R-bloggers? click here if you have a blog, or here if you don't.
## General Equilibrium economic models
To expand my economics toolkit I’ve been trying to get my head around Computable General Equilibrium (CGE) and Dynamic Stochastic General Equilibrium (DSGE) models. Both classes of model are used in theoretical and policy settings to understand the impact of changes to an economic system on its equilibrium state.
I’m not a specialist in this area so the below should be taken as the best effort by a keen amateur. Corrections or suggestions welcomed!
CGE models have the simpler approach of the two and a longer history and have been very widely applied to practical policy questions such as the impact of trade deals. Many economic consultancies have their own in-house CGE model/s which they wheel out and aadapt to a range of their clients’ questions. They work by comparing static equilibrium states, assumed to meet requirements (such as markets clearing effectively instantly) needed to be in equilibrium, “calibrated” to the real economy by choosing a set of numbers for the various parameters that match the state of the economy at a particular point in time. The model is then adjusted – for example, to allow for changes in prices from a free trade agreement – and the new equilibrium compared to the old.
DSGE models are also based on an assumption of a steady state equilibrium of the economy, but they allow for real amounts of time being taken to move towards that steady state, and for a random (ie stochastic) element in the path taken towards that steady state. This greatly improves their coherence in terms of philosophy of science – compared to a CGE which simply calibrates to a single point of time and doesn’t have any degrees of freedom to quantify uncertainty or the fit of the model to reality, the parameters in DSGEs can be estimated based on a history of observations, and parameters can have probability distributions not just points. Parameters are typically estimated with Bayesian methods.
Over the past 15 years or so as the maths and computing has gotten better, the DSGE approach has become dominant in macroeconomic modelling, although not yet (to my observation) in everyday applied economics of the sort done by consultants for government agencies contemplating policy choices. DSGE models perform ok (to the degree that anything does) at economic forecasting, and give a nice coherent framework for considering policy options. For example, the Reserve Bank of New Zealand (like many if not all monetary authorities around the world – I haven’t counted) developed the cutely-named KITT (Kiwi Inflation Targeting Technology) DSGE model and adopted it in 2009 as the main forecasting and scenario tool; and apparently replaced this with NZSIM, a more parsimonious model in 2014 – slightly condescendingly described as “deliberately kept small so is easily understood and applied by a range of users.”
## Critiques
General Equilibrium approach has been roundly criticised from the margins of the economics field (pun) ever since it leapt to dominance in the second half of the twentieth century, from a philosophy of science perspective (doesn’t really make Popperian falsifiable predictions) and from the obvious and acknowledged absurdity/simplification of the assumptions needed to make the system tractable.
Noah Smith provides a good sceptical discussion of the worth of DSGE in this post and elsewhere.
I find the arguments for a post-Walrasian economics – beyond the DSGE fairly compelling. My key takeout from that book (not necessarily it’s main intent) is that new computing power means that we are increasingly in a position where we no longer have to just accept the simplifying assumptions needed for General Equilibrium approach to be tractable. Agent-based models are now possible that allow simulation of much more complex interactions between agents who lack the perfect knowledge required in the GE approach and behave realistically in terms of interactions and other ways. Such models lack analytical solutions (ie ones that could in principle be worked out with pen and paper) but Monte Carlo methods can lead to insight as to how the real economy behaves.
## Defining a GE model with gEcon
While I do think that these general equilibrium approaches will be superseded in the next 20 years (bit of a call I know), the alternatives are currently immature. I still need to get my head around GE approaches. Luckily I came across the gEcon project – an exciting off-shoot of work done for the Polish government, now open-sourced and maintained by the original authors. gEcon provides an easy language for defining a CGE or DSGE model, taking much of the hand-done mathematics pain out of the whole thing:
“Owing to the development of an algorithm for automatic derivation of first order conditions and implementation of a comprehensive symbolic library, gEcon allows users to describe their models in terms of optimisation problems of agents. To authors’ best knowledge there is no other publicly available framework for writing and solving DSGE & CGE models in this natural way. Writing models in terms of optimisation problems instead of the FOCs is far more natural to an economist, takes off the burden of tedious differentiation, and reduces the risk of making a mistake.”
The definition of agents’ optimisation problems in a sophisticated DGSE in the gEcon environment looks like this example extract from an implementation of the classic Smets-Wouters 2003 DSGE for the Euro area. The total model description is around 500 lines of code. This particular block is describing aspects of the price setting mechanism. It’s not easy, but it is a lot easier than solving first order conditions by hand:
block PRICE_SETTING_PROBLEM # example gEcon language extract
{
identities
{
g_1[] = (1 + lambda_p) * g_2[] + eta_p[];
g_1[] = lambda[] * pi_star[] * Y[] + beta * xi_p *
E[][(pi[] ^ gamma_p / pi[1]) ^ (-1 / lambda_p) *
(pi_star[] / pi_star[1]) * g_1[1]];
g_2[] = lambda[] * mc[] * Y[] + beta * xi_p *
E[][(pi[] ^ gamma_p / pi[1]) ^ (-((1 + lambda_p) / lambda_p)) * g_2[1]];
};
shocks
{
eta_p[]; # Price mark-up shock
};
calibration
{
xi_p = 0.908; # Probability of not receiving the price-change signal''
gamma_p = 0.469; # Indexation parameter for non-optimising firms
};
};
Similar code controls parts of the system such as the approach taken by the monetary authority (how much weight do they give to controlling inflation?), government expenditure, friction in the labour market, etc.
Incidentally, when the gEcon authors implemented the Smets-Wouters ‘03 model they identified a few small mistakes in the original implementation, which for me adds to the credibility of their argument that their (relatively) natural agent-based optimisation language is less prone to human error.
gEcon is implemented in R; the authors give the reason for this (as opposed to more traditional Matlab / Octave / GAMS solution) being the greater flexibility (“not everything needs to be a matrix”) and easy connections to full range of other econometric and data management methods.
## Solving a gEcon DSGE model from R
Once the model has been defined, R functions can perform tasks such as :
• estimate the impact of randomness
• simulate paths through time (the “dynamic stochastic” bit)
• estimate the impact of changes over time through impulse-response functions
For example, here is a simulation of one path through time for the deviation from the steady state of consumption, investment, capital, wages and income just from randomness in the Smets-Wouters ‘03 model:
Getting to this point used this R code. Note that I’m not reproducing below the full definition of the model, though I am including code that downloads it for you
# ###################################################################
# (c) Chancellery of the Prime Minister 2012-2015 #
# Licence terms for gEcon can be found in the file: #
# http://gecon.r-forge.r-project.org/files/gEcon_licence.txt #
# #
# gEcon authors: Grzegorz Klima, Karol Podemski, #
# Kaja Retkiewicz-Wijtiwiak, Anna Sowińska #
# ###################################################################
library(gEcon)
library(dplyr)
library(tidyr)
destfile = "SW_03.gcn")
sw_gecon1 <- make_model('SW_03.gcn')
# set some initial variable values:
initv <- list(z = 1, z_f = 1, Q = 1, Q_f = 1, pi = 1, pi_obj = 1,
epsilon_b = 1, epsilon_L = 1, epsilon_I = 1, epsilon_a = 1, epsilon_G = 1,
r_k = 0.01, r_k_f = 0.01)
sw_gecon1 <- initval_var(sw_gecon1, init_var = initv)
# set some initial parameter values:
initf <- list(
beta = 0.99, # Discount factor
tau = 0.025, # Capital depreciation rate
varphi = 6.771, # Parameter of investment adjustment cost function
psi = 0.169, # Capacity utilisation cost parameter
sigma_c = 1.353, # Coefficient of relative risk aversion
h = 0.573, # Habit formation intensity
sigma_l = 2.4, # Reciprocal of labour elasticity w.r.t. wage
omega = 1 # Labour disutility parameter
)
sw_gecon1 <- set_free_par(sw_gecon1, initf)
# find the steady state for that set of starting values:
get_ss_values(sw_gecon2)
# solve the model in linearised form for 1st order perturbations/randomness:
sw_gecon2 <- solve_pert(sw_gecon2, loglin = TRUE)
# simulate one path:
one_path <- random_path(sw_gecon2, var_list = list("Y", "K", "I", "C", "W"))
plot_simulation(one_path) # shows deviation from the steady_state
## Impulse response functions
In a method familiar to users of other economic modelling methods like Vector Autoregressions (VARs), it’s possible to “shock” the DSGE system and see the impact play out over time as the complex inter-relationships of agents within the system move from the shock towards a new equilibrium. Here’s an example of the expected impact of a shock to the inflation objective applied to the Smets-Wouters ‘03 model:
One of the characteristics of the DSGE models is the importance they give to agents’ expectations. As their philosophy has increasingly dominated in recent decades, discussion of monetary policy has become less focused on individual actions of the monetary authority than on the overal regime and set of targets. Any more-than-casual observer of public economic debate will have noticed the importance given to discussion of the overall inflation-targetting regime. Hence the plot above shows the modelled impact of a change in the inflation objective - with no other direct exogenous shock in the model at all.
Here’s the R code that produced that plot:
# set covariance matrix of the parameters to be used in shock simulation:
a <- c(eta_b = 0.336 ^ 2, eta_L = 3.52 ^ 2, eta_I = 0.085 ^ 2, eta_a = 0.598 ^ 2,
eta_w = 0.6853261 ^ 2, eta_p = 0.7896512 ^ 2,
eta_G = 0.325 ^ 2, eta_R = 0.081 ^ 2, eta_pi = 0.017 ^ 2)
sw_gecon3 <- set_shock_cov_mat(sw_gecon2, shock_matrix = diag(a), shock_order = names(a))
# compute the moments with that covariance matrix:
sw_gecon3 <- compute_moments(sw_gecon3)
sw_gecon_irf <- compute_irf(sw_gecon3, var_list = c('C', 'Y', 'K', 'I', 'L'), chol = T,
shock_list = list('eta_pi'), path_length = 40)
plot_simulation(sw_gecon_irf, to_tex = FALSE)
## Putting a DSGE model into Shiny
With so many complex and mysteriously named parameters - and I’ve shown very few of them - an interactive web application seems an obvious way to explore a model of this sort. I’ve set up a prototype which explores the impulse response functions of the model, responding to shocks to parameters like labour supply, investment, productivity and government spending:
• The full screen version of the web app. Once all buttons on the screen appear, give it 90 seconds or so to solve its steady state. Then try choosing different variables to shock by from the drop down list.
• The source code
## Conclusion
I’m a way off from being able to define my own DSGE for the New Zealand and connecting economies. In particular the important question of calibration/estimation of parameters based on actual observable data is a a mystery to me at this stage. But I can see how the basic idea works, and the gEcon package looks very promising for its relatively simple agent-optimisation approach to specifying the model.
|
2021-04-12 03:12:54
|
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 1, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.5530208349227905, "perplexity": 2540.3953088109465}, "config": {"markdown_headings": true, "markdown_code": false, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2021-17/segments/1618038066568.16/warc/CC-MAIN-20210412023359-20210412053359-00574.warc.gz"}
|
http://www.birs.ca/events/2009/5-day-workshops/09w5102
|
# Dedekind sums in geometry, topology, and arithmetic (09w5102)
Arriving in Banff, Alberta Sunday, October 11 and departing Friday October 16, 2009
## Organizers
(San Francisco State University)
(University of Massachusetts Amherst)
Adam Sikora (State University of New York (SUNY) - Buffalo)
## Objectives
The goal of the workshop is to bring together topologists, geometers,
and number theorists working in the above areas. The emphasis will be
on interaction between these groups of researchers, with the hopes of
engendering cross-fertilization and new and unusual collaborations.
All of the proposed participants are either active in these areas or
have professed interest in them. The large areas of overlap among the
topics above and the research interests of the proposed participants
means that the intimate setting of Banff is ideal for such a workshop.
The time is ripe for such a workshop. Although two-dimensional
Dedekind sums have been around since the 19th century and
higher-dimensional Dedekind sums have been explored since the 1950s,
it is only recently that such sums have figured prominently in so many
different areas. Moreover there have been conferences devoted to
individual topics under consideration, such as enumerating lattice
points in polytopes and special values of $L$-functions, but to date
there has not been a meeting emphasizing Dedekind sums as a unifying
theme between these subjects. Bringing together a group of
researchers will likely lead to significant breakthroughs in current
research programs, and may also uncover new connections between these
fields.
|
2017-09-21 23:13:46
|
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.4245394170284271, "perplexity": 3747.1172192153945}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2017-39/segments/1505818687938.15/warc/CC-MAIN-20170921224617-20170922004617-00187.warc.gz"}
|
https://math.stackexchange.com/questions/3770004/suppose-a-b-and-c-are-sets-prove-that-c-subseteq-a-delta-b-iff-c-sub
|
# Suppose $A$, $B$, and $C$ are sets. Prove that $C\subseteq A\Delta B$ iff $C\subseteq A\cup B$ and $A\cap B\cap C=\emptyset$.
Not a duplicate of
Suppose $A$, $B$, and $C$ are sets. Prove that $C ⊆ A △ B$ iff $C ⊆ A ∪ B$ and $A ∩ B ∩ C = ∅$.
Suppose $A, B$, and C are sets. Prove that $C\subset A\Delta B \Leftrightarrow C \subset A \cup B$ and $A \cap B \cap C = \emptyset$
Set theory: Prove that $C \subseteq A \Delta B \iff C \subseteq A \cup B \wedge A \cap B \cap C = \emptyset$
This is exercise $$3.5.21$$ from the book How to Prove it by Velleman $$(2^{nd}$$ edition$$)$$:
Suppose $$A$$, $$B$$, and $$C$$ are sets. Prove that $$C\subseteq A\Delta B$$ iff $$C\subseteq A\cup B$$ and $$A\cap B\cap C=\emptyset$$.
Here is my proof:
$$(\rightarrow)$$ Suppose $$C\subseteq A\Delta B$$.
$$(1)$$ Let $$x$$ be an arbitrary element of $$C$$. From $$C\subseteq A\Delta B$$ and $$x\in C$$, $$x\in A\Delta B$$. Now we consider two cases.
Case $$1.$$ Suppose $$x\in A\setminus B$$. Ergo $$x\in A\cup B$$.
Case $$2.$$ Suppose $$x\in B\setminus A$$. Ergo $$x\in A\cup B$$.
Since the above cases are exhaustive, $$x\in A\cup B$$. Thus if $$x\in C$$ then $$x\in A\cup B$$. Since $$x$$ is arbitrary, $$\forall x(x\in C\rightarrow x\in A\cup B)$$ and so $$C\subseteq A\cup B$$. Therefore if $$C\subseteq A\Delta B$$ then $$C\subseteq A\cup B$$.
$$(2)$$ Suppose $$A\cap B\cap C\neq\emptyset$$. So we can choose some $$x_0$$ such that $$x_0\in A$$, $$x_0\in B$$, and $$x_0\in C$$. From $$C\subseteq A\Delta B$$ and $$x_0\in C$$, $$x_0\in A\Delta B$$. Now we consider two cases.
Case $$1.$$ Suppose $$x_0\in A\setminus B$$. Ergo $$x_0\notin B$$ which contradicts $$x_0\in B$$ and so it must be the case that $$A\cap B\cap C=\emptyset$$.
Case $$2.$$ Suppose $$x_0\in B\setminus A$$. Ergo $$x_0\notin A$$ which contradicts $$x_0\in A$$ and so it must be the case that $$A\cap B\cap C=\emptyset$$.
Since the above cases are exhaustive, $$A\cap B\cap C=\emptyset$$. Therefore if $$C\subseteq A\Delta B$$ then $$A\cap B\cap C=\emptyset$$.
From parts $$(1)$$ and $$(2)$$ we can conclude that if $$C\subseteq A\Delta B$$ then $$C\subseteq A\cup B$$ and $$A\cap B\cap C=\emptyset$$.
$$(\leftarrow)$$ Suppose $$C\subseteq A\cup B$$ and $$A\cap B\cap C=\emptyset$$. Let $$x$$ be an arbitrary element of $$C$$. From $$C\subseteq A\cup B$$ and $$x\in C$$, $$x\in A\cup B$$. Now we consider two cases.
Case $$1.$$ Suppose $$x\in A$$. Now we consider two cases.
Case $$1.1.$$ Suppose $$x\in A\setminus B$$. Ergo $$x\in A\Delta B$$.
Case $$1.2.$$ Suppose $$x\notin A\setminus B$$ and so $$x\notin A$$ or $$x\in B$$. Now we consider two cases.
Case $$1.2.1.$$ Suppose $$x\notin A$$ which is a contradiction.
Case $$1.2.2.$$ Suppose $$x\in B$$ which is a contradiction since $$A\cap B\cap C=\emptyset$$.
Since cases $$1.2.1$$ and $$1.2.2$$ lead to a contradiction then case $$1.2$$ leads to a contradiction. From case $$1.1$$ or case $$1.2$$ we can conclude $$x\in A\Delta B$$.
Case $$2.$$ Suppose $$x\in B$$ and a similar argument shows $$x\in A\Delta B$$.
Since case $$1$$ and case $$2$$ are exhaustive, $$x\in A\Delta B$$. Thus if $$x\in C$$ then $$x\in A\Delta B$$. Since $$x$$ is arbitrary, $$\forall x(x\in C\rightarrow x\in A\Delta B)$$ and so $$C\subseteq A\Delta B$$. Therefore if $$C\subseteq A\cup B$$ and $$A\cap B\cap C=\emptyset$$ then $$C\subseteq A\Delta B$$.
From $$(\rightarrow)$$ and $$(\leftarrow)$$ we can conclude $$C\subseteq A\Delta B$$ iff $$C\subseteq A\cup B$$ and $$A\cap B\cap C=\emptyset$$. $$Q.E.D.$$
Is my proof valid$$?$$ Is my proof unnecessarily redundant or every step is needed$$?$$
• You know. Saying it is not a duplicate, doesn't mean it actually isnt a duplicate. Jul 26, 2020 at 16:59
• Of course. But I checked those posts and I am definitely sure that my proof is different. Jul 26, 2020 at 17:04
Your proof is correct. Here is a proof that avoids any mention of specific elements (following the theme of my answer to one of your previous questions). The key statements we use are the following:
(a) If $$X$$ and $$Y$$ are sets then $$X \subseteq Y$$ iff $$X \setminus Y = \emptyset$$.
(b) If $$X$$ and $$Y$$ are sets then $$X \cup Y = \emptyset$$ iff $$X = \emptyset$$ and $$Y = \emptyset$$.
(We discussed both of these before, so let's not reprove them!)
Now, in this problem we care about when $$C \subseteq A \Delta B$$. So, guided by property (a), we should examine $$C \setminus (A\Delta B)$$. Use axioms of set operations (e.g., De Morgan etc) to prove: $$C \setminus (A\Delta B) = \big(C \setminus (A\cup B)\big) \cup \big(A \cap B \cap C\big)\tag{1}$$
I have hidden the proof of $$(1)$$ at the bottom of this answer; but try it yourself first. It's also a sensible thing to say out loud: $$A \Delta B$$ is the set of elements that are in either $$A$$ or $$B$$, but not both. So being in $$C \setminus (A \Delta B)$$ is the same as either being in $$C$$ and not in $$A$$ or $$B$$, or being in $$C$$ and in both $$A$$ and $$B$$.
Once you have $$(1)$$, the rest is very straightforward.
\begin{align} C \subseteq A \Delta B &\iff C \setminus (A \Delta B) = \emptyset \tag{using (a)} \\ &\iff \big(C \setminus (A\cup B)\big) \cup \big(A \cap B \cap C\big) = \emptyset \tag{using (1)}\\ &\iff C \setminus (A \cup B) = \emptyset \text{ and } A \cap B \cap C = \emptyset \tag{using (b)}\\ &\iff C \subseteq A\cup B \text{ and } A\cap B\cap C = \emptyset \tag{using (a)} \end{align}
Proof of $$(1)$$:
Recall that $$A \Delta B = (A \cup B) \setminus (A \cap B) = (A \cup B) \cap \neg(A \cap B)\tag{2}$$ So \begin{align}C \setminus (A \Delta B) &= C\cap \neg\big((A \cup B)\cap \neg (A \cap B)\big) \tag{by (2)} \\ &= C \cap \big(\neg (A \cup B) \cup (A \cap B)\big) \tag{De Morgan} \\ &= \big(C \cap \neg (A \cup B)\big) \cup \big(C \cap (A \cap B)\big) \tag{distributivity} \\ &= \big(C \setminus (A \cup B)\big) \cup \big(A \cap B \cap C\big)\end{align} In the last line we used the definition of set difference on the left side, and associativity/commutativity of intersection on the right side.
• I cannot thank you enough. Jul 26, 2020 at 17:13
The first inclusion follows from the fact that the symmetric difference is inside the union. The second condition foolows from the fact that symmetric difference is disjoint from the intersection.
As for your proof, it is correct but too long.
In the first part, in both the second cases(where it says Case $$2$$) you can simply refer to the similar arguments as in the first cases but with $$B \setminus A$$ instead of $$A\setminus B$$.
Since you assume in Case 1 (in the converse part) that $$x\in A$$, the cases including and after Case 1.2 can be shortened to: "If $$x\notin A\setminus B$$ then $$x\in B$$ contradicting $$A\cap B \cap C = \emptyset$$".
The rest seems good!
You can shorten you proof by writing $$A \cup B=(A \bigtriangleup B) \cup (A \cap B)$$. First assume that $$C \subseteq A \bigtriangleup B$$. Since, $$A \bigtriangleup B \subseteq A\cup B$$, then $$C \subseteq A \bigtriangleup B \implies C \subseteq A \cup B$$. Also, $$A \bigtriangleup B$$ is disjoint from $$A \cap B$$. And so, $$A \cap B \cap C= \phi$$. The reverse implication follows by observing that $$C \subseteq A \cup B= (A \bigtriangleup B) \cup (A \cap B)$$ but $$A\cap B \cap C = \phi$$ and so, $$C \subseteq A \bigtriangleup B$$.
• @JCAA Why? I don't see anything wrong there. Jul 27, 2020 at 22:10
I think the "($$\to$$)" direction of your proof is fine. The "($$\leftarrow$$)" direction is correct but could be shortened. There was no need to break case 1 into cases 1.1, 1.2, 1.2.1, and 1.2.2. You could have completed case 1 like this:
Case 1. Suppose $$x \in A$$. If $$x \in B$$ then $$x \in A \cap B \cap C$$, which contradicts the fact that $$A \cap B \cap C = \emptyset$$. Therefore $$x \notin B$$. Since $$x \in A$$ and $$x \notin B$$, $$x \in A \bigtriangleup B$$.
|
2023-02-03 16:37:46
|
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 163, "wp-katex-eq": 0, "align": 2, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.9969828128814697, "perplexity": 189.9355481683699}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2023-06/segments/1674764500058.1/warc/CC-MAIN-20230203154140-20230203184140-00361.warc.gz"}
|
http://openstudy.com/updates/558f729ee4b0058b2bb71866
|
## anonymous one year ago I'm having trouble solving Delta T. The equation: The Delta H for the solution process when solid sodium hydroxide dissolves in water is 44.4kJ/mol. When a 13.9-g sample of NaOH dissolves in 250.0g of water in a coffee-cup calorimeter, the temperature increases from 23.0 C to ----C. Assume that the solution has the same specific heat as liquid water,i.e., 4.18 J/g-K.
• This Question is Open
1. Photon336
I think it would be Q =mCdeltaT Solve for delta T = Q/(mC) Molar heat capacity = 75J/mol K 250g H20 x 1mol/18g = 14 mol H20 14 x 75J/mol K = 1050 J/mol K 44400 (J/mol)/ (1050 J/mol K) = 42.3 K 42.3 K = ( Temp final - temp initial) 42.2K + temp initial = temp final Have 23+273 = 296K 296K+42.3 = 338.3K or 65.3C
2. aaronq
^You didn't take into account the moles of NaOH. In this case were using the molar enthalpy of solvation of NaOH, and the moles of NaOH. $$\sf \large q=n_{NaOH}*\Delta H_{solvation}=\dfrac{13.9~g}{39.99~g/mol}*44.4~kJ/mol\approx 1.5*10^4~J$$ Rearrange the calorimetry formula: $$\sf \large q=m*C_p*(T_f-T_i)\rightarrow T_f=\dfrac{q}{m*C_p}+T_i$$ Plug in values: $$\sf \large T_f=\dfrac{q}{m*C_p}+T_i=\dfrac{1.5*10^4~J}{250~g*4.18~J/^oC~g}+23^oC \approx 37.8 ^oC$$
3. anonymous
What does the n stand for in the first line
4. aaronq
$$\sf \Large n$$ is the symbol for moles
5. Photon336
@aaronq I see what you did here, but why doesn't the mass of water matter in this calculation?
6. aaronq
it does, it matters when were observing the heat from the solvation of NaOH affecting the temperature of the water. If you notice, in the calorimetry equation, the mass of the water used appears in the denominator of the rearranged formula.
7. Photon336
@aaronq sorry! One last thing 1. okay, so you found the molar enthalpy for dissolving NaOH in H2O by multiplying the enthalpy by the number of moles of NaOH. 2. I understood how you manipulated the equation. 3. so in the process, your obviously heating the water ( I originally thought you had to use the molar heat capacity of water).. 4. I see in the equation that the number of moles cancels out! so you're left with joules.. so we don't have to (molar heat capacity of H20)..
8. aaronq
yup, that's exactly it
9. Photon336
@aaronq I'd give you a medal on top of a metal lol
|
2016-10-23 09:37:53
|
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 1, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.7441117167472839, "perplexity": 2754.6706609910434}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2016-44/segments/1476988719215.16/warc/CC-MAIN-20161020183839-00443-ip-10-171-6-4.ec2.internal.warc.gz"}
|
https://www.clutchprep.com/chemistry/practice-problems/117625/the-most-stable-nucleus-in-terms-of-binding-energy-per-nucleon-is-56fe-if-the-at
|
# Problem: The most stable nucleus in terms of binding energy per nucleon is 56Fe. If the atomic mass of 56Fe is 55.9349 u, calculate the binding energy per nucleon for 56Fe.
###### FREE Expert Solution
Step 1: Calculate the mass defect (Δm).
Given:
mass 56Fe = 55.9349 u
atomic # Ti = # of protons = 26
mass # = 56
# of neutrons = 56 - 26 = 30
mass of proton = 1.007276 amu
mass neutron = 1.008665 amu
$\mathbf{∆}\mathbf{m}\mathbf{=}\mathbf{\left(}\mathbf{neutrons}\mathbf{+}\mathbf{protons}\mathbf{\right)}\mathbf{-}\mathbf{Fe}$
Δm = 0.514226 amu
Step 2: Calculate the mass defect (Δm) in kg.
1 amu = 1.6606x10-27 kg
Δm = 8.5392x10-28 kg
Step 3: Calculate the energy released (E).
91% (208 ratings)
###### Problem Details
The most stable nucleus in terms of binding energy per nucleon is 56Fe. If the atomic mass of 56Fe is 55.9349 u, calculate the binding energy per nucleon for 56Fe.
Frequently Asked Questions
What scientific concept do you need to know in order to solve this problem?
Our tutors have indicated that to solve this problem you will need to apply the Mass Defect concept. If you need more Mass Defect practice, you can also practice Mass Defect practice problems.
What professor is this problem relevant for?
Based on our data, we think this problem is relevant for Professor Sharma's class at UM.
What textbook is this problem found in?
Our data indicates that this problem or a close variation was asked in Chemistry: An Atoms First Approach - Zumdahl Atoms 1st 2nd Edition. You can also practice Chemistry: An Atoms First Approach - Zumdahl Atoms 1st 2nd Edition practice problems.
|
2021-04-21 17:49:24
|
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 1, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.26721954345703125, "perplexity": 2097.7347778666376}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": false}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2021-17/segments/1618039546945.85/warc/CC-MAIN-20210421161025-20210421191025-00371.warc.gz"}
|
https://www.sdss.org/dr15/manga/manga-tutorials/dap-tutorial/dap-python-tutorial/
|
# DAP Python Tutorial
Back to MaNGA tutorials
Disclaimer: This tutorial teaches you how to look at the DAP output files using standard python packages. However, we highly recommend you consider using Marvin, a python package designed specifically for downloading, visualizing, and analyzing MaNGA data.
The goal of this introductory tutorial will be to show you the basics of loading and visualizing MaNGA DAP products. We assume you're using ipython or something similar. Let's first import the packages we'll need and turn on interactive plotting:
import numpy
from astropy.io import fits
from matplotlib import pyplot
pyplot.ion()
Now, read in a maps file. For this tutorial, we'll use the maps file for MaNGA object (plate-ifu) 7443-12703.
hdu = fits.open(dir+'manga-7443-12703-MAPS-HYB10-GAU-MILESHC.fits.gz')
where "dir" is a string specifying where this data is located on your computer. This file contains several extensions. To see them all, type:
hdu.info()
For a full description of these extensions, see the DAP documentation.
## Making an emission line map and applying masks
Let's make a simple map of Hα flux. The Hα flux measurements are stored in the EMLINE_GFLUX extension. If we examine the size of the data in this extension:
hdu['EMLINE_GFLUX'].data.shape
we'll see that it has multiple layers. Each layer holds the flux measurements for a different emission line. We can see which index corresponds to which line by looking in the header:
hdu['EMLINE_GFLUX'].header
Hα is in channel 19, although since we're in python where indices start at 0, we want index 18:
flux_halpha = hdu['EMLINE_GFLUX'].data[18,:,:]
It may be more convenient to make a dictionary that maps the different line names to their corresponding index:
emline = {}
for k, v in hdu['EMLINE_GFLUX'].header.items():
if k[0] == 'C':
try:
i = int(k[1:])-1
except ValueError:
continue
emline[v] = i
print(emline)
So now we can select the Hα flux map using:
flux_halpha = hdu['EMLINE_GFLUX'].data[emline['Ha-6564'],:,:]
Now we can plot the map with a colorbar:
pyplot.clf()
pyplot.imshow(hdu['EMLINE_GFLUX'].data[emline['Ha-6564'],:,:],cmap='inferno', origin='lower', interpolation='nearest')
pyplot.colorbar(label=r'H$\alpha$ flux ($1\times10^{-17}$ erg s$^{-1}$ spaxel$^{-1}$ cm$^{-2}$)')
pyplot.show()
An example map of Hα emission.
Typically the maps contain some spaxels with unreliable mesaurements. The MAPS files contains extensions which identify these spaxels. The header of the EMLINE_GFLUX extension tells you which extension holds its mask.
mask_extension = hdu['EMLINE_GFLUX'].header['QUALDATA']
We can use this extension to make a masked image
masked_image = numpy.ma.array(hdu['EMLINE_GFLUX'].data[emline['Ha-6564'],:,:],
pyplot.imshow(masked_image, origin='lower', cmap='inferno', interpolation='nearest')
pyplot.show()
An example Hα emission map with a mask applied to remove unreliable data.
## Plotting a velocity field and creating your own masks
Now, let's use the same basic procedure to plot the ionized gas velocity field.
mask_ext = hdu['EMLINE_GVEL'].header['QUALDATA']
pyplot.clf()
pyplot.imshow(gas_vfield, origin='lower', interpolation='nearest', vmin=-125, vmax=125, cmap='RdBu_r')
pyplot.colorbar()
An ionized gas velocity field
Depending on your science, you may want to only examine regions with very well-detected emission lines. To do this we need to read in the flux uncertainties, which are given as inverse variances. We can find the proper extension with
ivar_extension = hdu['EMLINE_GFLUX'].header['ERRDATA']
Let's now calculate the S/N per spaxel and use it to reject spaxels where the line flux has S/N < 10
snr_map = hdu['EMLINE_GFLUX'].data[emline['Ha-6564'],:,:]*numpy.sqrt(hdu[ivar_extension].data[emline['Ha-6564'],:,:])
sncut = 10
pyplot.clf()
pyplot.imshow(gas_vfield_alt, origin='lower', interpolation='nearest', vmin=-125, vmax=125, cmap='RdBu_r')
pyplot.colorbar()
An ionized gas velocity field limited to regions where the Hα flux has S/N>10.
An important note regarding gas velocity fields: The velocities of the emission lines are tied together, so for example, the velocity of the [OIII]-5007 line is the same as the Hα line, as are the uncertainties. You cannot reduce the uncertainty on the measured velocity by averaging the velocities of several lines together.
## Plotting stellar velocity dispersion
Next we'll make a map of the stellar velocity dispersion. First, we'll access the raw dispersion measurements:
disp_raw = hdu['STELLAR_SIGMA'].data
However, we need to correct the measured dispersion for the instrumental resolution, which is reported in a different extension":
disp_inst = hdu['STELLAR_SIGMACORR'].data
Now, let's apply the correction and plot the results (also removing masked values). The calculation below will ignore any points where the correction is larger than the measured dispersion:
disp_stars_corr = numpy.sqrt(numpy.square(disp_raw) - numpy.square(disp_inst))
pyplot.clf()
pyplot.imshow(disp_stars_final,origin='lower',interpolation='nearest',cmap='RdBu_r')
pyplot.colorbar()
Stellar velocity dispersion after correction for instrumental spectral resolution.
## Plotting spectral-index measurements
Next we'll show how to plot spectral indices and correct them for velocity dispersion. The spectral indices are stored in the "SPECINDEX" extension of the maps file. Like the emission line measurements, We can create a dictionary which makes accessing different spectral indices easier. We'll also track their units, which will be relevant shortly:
spec_ind = {}
spec_unit = numpy.empty(numpy.shape(hdu['SPECINDEX'].data)[0],dtype=object)
for k, v in hdu['SPECINDEX'].header.items():
if k[0] == 'C':
try:
i = int(k[1:])-1
except ValueError:
continue
spec_ind[v] = i
if k[0] == 'U':
try:
i = int(k[1:])-1
except ValueError:
continue
spec_unit[i] = v.strip()
Let's make a masked array holding the Hβ spectral index
mask_ext = hdu['SPECINDEX'].header['QUALDATA']
The spectral index measurements need to be corrected for velocity dispersion. Keep in mind that the way in which the corrections are applied depends on whether the units are angstroms or magnitudes. Hβ is in Angstroms:
corr = hdu['SPECINDEX_CORR'].data[spec_ind['Hb'],:,:]
hb_corr = hb_raw*corr
pyplot.clf()
pyplot.imshow(hb_corr,origin='lower', cmap='inferno', interpolation='nearest')
pyplot.colorbar(label=spec_unit[spec_ind['Hb']])
Hβ spectral index measurement after applying the correction for velocity dispersion.
## Identifying unique bins
The spaxels are binned in different ways depending on the measurement being made (the DAP documentation provides more information). This binning means that two spaxels can belong the same bin, and therefore a derived quantity at those locations will be identical. The BINID extension provides information about which spaxels are in which bins. There are 5 channels providing the IDs of spaxels associated with
• 0: each binned spectrum. Any spaxel with BINID=-1 as not included in any bin.
• 1: any binned spectrum with an attempted stellar kinematics fit.
• 2: any binned spectrum with emission-line moment measurements.
• 3: any binned spectrum with an attempted emission-line fit.
• 4: any binned spectrum with spectral-index measurements.
For any analysis, you'll want to extract the unique spectra and/or maps values. For instance, to find the indices of the unique bins where stellar kinematics were fit:
bin_indx = hdu['binid'].data[1,:,:]
unique_bins, unique_indices = tuple(map(lambda x : x[1:], numpy.unique(bin_indx.ravel(), return_index=True)))
Let's now use this information to plot the position of each unique bin and color-code it by the measured stellar velocity:
pyplot.clf()
# Get the x and y coordinates and the stellar velocities
x = hdu['BIN_LWSKYCOO'].data[0,:,:].ravel()[unique_indices]
y = hdu['BIN_LWSKYCOO'].data[1,:,:].ravel()[unique_indices]
pyplot.scatter(x, y, c=v, vmin=-150, vmax=150, cmap='RdBu', marker='.', s=30, lw=0, zorder=3)
pyplot.colorbar()
Positions of each unique stellar velocity measurement from the binned spectra, color-coded by value.
## Extract a binned spectrum and its model fit
The model cube files provide detailed information about the output the binned spaxels and the model fitting. We can use these files to compare an individual bin's measured spectrum, model fit, model residuals, and so on. However, one needs to be careful when comparing binned spectra with the model fits. Specifically, there are two types of files with different binning schemes:
• VOR10: The spectra are voronoi binned to S/N~10. Stellar and emission line parameters are estimated from those bins.
• HYB10: The spectra are again voronoi binned to S/N~10, and the stellar parameters are calculated using these voronoi binned spectra. However, emission line parameters are measured using the individual 0.5"x0.5" spaxels.
If you are using the HYB10 files and want to compare the best fitting model (including stellar continuum and emission lines) to the data, you need to compare the models to the individual spectra (measured in 0.5"x0.5" spaxels) from the DRP LOGCUBE files, not the binned spectra in the HYB10 files. If you are comparing the best-fitting stellar continuum models to the data, you should use the binned spectra within the HYB10 files.
Below are a few examples using both VOR10 and HYB10 files to demonstrate the proper way to compare the models and data using these two files types.
### VOR10 Files
We'll start with the VOR10 files where comparing the models and data is simpler. First, let's read in the BIN_SNR extension from the VOR10 maps file and find the bin with the highest S/N.
hdu = fits.open(path+'manga-7443-12703-MAPS-VOR10-GAU-MILESHC.fits.gz')
j,i = numpy.unravel_index(snr.argmax(), snr.shape)
Next we'll load the modelcube file for this galaxy, pull out the binned spectrum at its location, and then plot the binned spectrum, the full best fit model, the model stellar continuum, the model emission lines, and the residuals.
hdu_cube = fits.open(dir+'manga-7443-12703-LOGCUBE-VOR10-GAU-MILESHC.fits.gz')
wave=hdu_cube['wave'].data
stellarcontinuum = numpy.ma.MaskedArray(hdu_cube['MODEL'].data[:,j,i] - hdu_cube['EMLINE'].data[:,j,i] - hdu_cube['EMLINE_BASE'].data[:,j,i], mask=hdu_cube['MASK'].data[:,j,i] > 0)
resid = flux-model-0.5
pyplot.clf()
pyplot.step(wave, flux, where='mid', color='k', lw=0.5,label='flux')
pyplot.plot(wave, model, color='r', lw=1,label='model')
pyplot.plot(wave, stellarcontinuum, color='g', lw=1,label='stellar cont.')
pyplot.plot(wave, emlines, color='b', lw=1,label='Emission lines')
pyplot.step(wave, resid, where='mid', color='0.5', lw=0.5,label='residuals')
pyplot.legend()
Example of one binned spectrum in a MaNGA data cube. Lines show the binned flux, full model fit, model stellar continuum, model emission lines, and model fit residuals
### HYB10 Files
Identifying the bin with the highest S/N is done the same way as for the VOR10 files.
hdu = fits.open(dir+'manga-7443-12703-MAPS-HYB10-GAU-MILESHC.fits.gz')
j,i = numpy.unravel_index(snr.argmax(), snr.shape)
Recall that although stellar parameters are measured using the voronoi bins, the emission line parameters are estimated using the individual spaxels. Therefore, if we want to compare the data to the full best-fitting model which includes stellar continuum and emission lines, we need to use the spectra from the DRP LOGCUBE file, not the DAP HYB10 LOGCUBE file. Let's compare the best-fitting model to the data at the position (j,i) found above:
hdu_cube = fits.open(dir+'manga-7443-12703-LOGCUBE-HYB10-GAU-MILESHC.fits.gz') #DAP MODELCUBE file
hdu_drpcube = fits.open(dir+'manga-7443-12703-LOGCUBE.fits.gz') #DRP LOGCUBE file
pyplot.clf()
pyplot.step(wave, flux, where='mid', color='k', lw=0.5,label='flux')
pyplot.plot(wave, model, color='r', lw=1,label='model')
pyplot.legend()
If we just want to compare the best fitting stellar continuum to the data, we should use the binned spectra within the DAP HYB10 LOGCUBE file:
flux = numpy.ma.MaskedArray(hdu_cube['FLUX'].data[:,j,i],mask=hdu_cube['MASK'].data[:,j,i] > 0)
stellarcontinuum = numpy.ma.MaskedArray(hdu_cube['MODEL'].data[:,j,i] - hdu_cube['EMLINE'].data[:,j,i] - hdu_cube['EMLINE_BASE'].data[:,j,i], mask=hdu_cube['MASK'].data[:,j,i] > 0)
pyplot.clf()
pyplot.step(wave, flux, where='mid', color='k', lw=0.5,label='flux')
pyplot.plot(wave, stellarcontinuum, color='g', lw=1,label='stellar cont.')
pyplot.ylim(1.2,2.2)
pyplot.legend()
Now go use Marvin!
|
2019-01-18 16:28:55
|
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 0, "mathjax_asciimath": 1, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.636211097240448, "perplexity": 5643.202418129554}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.3, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2019-04/segments/1547583660175.18/warc/CC-MAIN-20190118151716-20190118173716-00423.warc.gz"}
|
https://physics.aps.org/articles/v10/89
|
# Viewpoint: Neutron-Star Implosions as Heavy-Element Sources
Physics 10, 89
A dramatic scenario in which a compact black hole eats a spinning neutron star from inside might explain a nearby galaxy’s unexpectedly high abundance of heavy elements.
The lightest of the chemical elements—hydrogen, helium, and lithium—were created in the hot, early phase of the Universe, about a minute after the big bang. Heavier elements were forged later—in the nuclear fires of many generations of stars and during supernova explosions [1]. But the origin of many rare chemical species, particularly the heaviest elements, remains uncertain. In particular, recent observations [2] of a nearby galaxy enriched with heavy elements challenge traditional nucleosynthesis models. George Fuller of the University of California, San Diego, and colleagues [3] now propose a novel scenario for the origin of the heaviest elements, including gold, platinum, and uranium. Their hypothesis involves tiny black holes inducing neutron-star implosions and, if viable, would in one fell swoop offer solutions to other astrophysical riddles beyond heavy element synthesis.
Elements heavier than iron can be assembled only from lighter “seed” nuclei that capture free neutrons or protons [1]. Neutron capture occurs through either a “slow” s process or a “rapid” r process. In both cases, the neutron-rich nucleus undergoes beta decay, converting neutrons to protons and advancing to higher atomic numbers. The s process can proceed at the modest neutron densities available in the outer shells of evolving stars. By contrast, the r process requires 10 billion times greater neutron densities (above $1{0}^{18}\phantom{\rule{2.22198pt}{0ex}}{\text{cm}}^{-3}$) in order that neutron captures occur much faster than beta decay. The r process is responsible for gold, platinum, most of the lanthanides, and all of the natural actinides. The heaviest r-process nuclei—up to and beyond an atomic mass number of 240—occur through the “strong” r process, in which an iron seed captures 100 or more neutrons.
The strong r process requires a high neutron density and some combination of a large excess of neutrons over protons, very high temperatures, and rapid expansion. Such extremes are expected in supernovae—but only in rare cases [4, 5]—and in mergers between two neutron stars or between a neutron star and a black hole [6]. These compact binary mergers are estimated to be 1000 times less frequent than supernovae, but they can expel considerably larger amounts of neutron-rich matter [7, 8]—a low-rate/high-yield scenario that’s consistent with the rarity of plutonium-244 in the early Solar System and in deep-sea reservoirs on Earth [9, 10].
A wrinkle in this picture is a nearby low-luminosity dwarf galaxy known as Reticulum II, whose stars are highly enriched with strong-r-process nuclei [2]. Reticulum II is the only dwarf galaxy (out of ten) with a significant “excess” of heavy nuclei, which suggests the nuclei were produced by an infrequent event, but perhaps one not so rare as a compact-object merger [11]. Fuller and co-workers [3] therefore envision an alternative scenario in which r-process nuclei are generated in the ejected matter of a very rapidly spinning neutron star, or “millisecond pulsar,” as it implodes to form a black hole.
The researchers imagine that the trigger for this catastrophic collapse is a primordial black hole (PBH). Hypothetical relics from the early Universe, PBHs can have the mass of an asteroid packed into an atom-sized space and collectively they are one of several candidates for dark matter. PBHs would roam dwarf galaxies and the center of our Milky Way with a relatively high abundance, so they would collide with neutron stars at a higher rate than that of compact-object mergers. When a PBH is captured by a neutron star, it sinks towards the center and swallows the star from the inside. Then, as the growing black hole sucks in neutron-star matter, viscous shearing and magnetic fields carry angular momentum to the star’s outer layers along its equator. Fuller et al. argue that these mechanisms rip off dense nuclear matter in which the strong r process can develop (Fig. 1).
This scenario is similar to one proposed by Joseph Bramante and Tim Linden in 2016 [11]. Instead of PBHs, they proposed that dark matter particles could accumulate inside an aging neutron star to form a star-consuming black hole. As the black hole accreted mass, it would release enough gravitational binding energy to power the ejection of dense neutron matter for strong-r-process synthesis. Both teams estimated the parameters required by their models to predict implosion rates that are compatible with the r-process-enhancement of Reticulum II and the distribution of r-process elements in the Milky Way. These calculated parameters, which include, for example, dark matter density, appear to be realistic.
What’s attractive about the models presented by Fuller et al. and by Bramante and Linden is that they might simultaneously resolve a number of astrophysical conundrums. For example, the possibility that neutron stars are being routinely eaten by black holes could explain why there are far fewer pulsars at the center of our Galaxy than astrophysicists expect—though the average collapse time of a star is sufficiently long that a large population of old pulsars should still exist. In addition, both teams refer to a possibility suggested by another group [12]: The final stages of a neutron star’s demise, as well as its release of energy via the “reconnection” of its magnetic field, might be connected to recently discovered extragalactic fast radio bursts. Fuller et al. also explain the mysterious 511-keV line in the gamma-ray emission from our Galaxy’s center, linking it to positron production in the radioactively heated ejecta from a neutron-star implosion.
But while these phenomena are all consistent with the r-process scenario proposed by Fuller et al., each could be explained with less speculative (and not necessarily related) ideas. Moreover, the viability of their proposal, and that by Bramante and Linden, depends on whether the neutron stars eject sufficient mass as they collapse. Assessing this fact will require detailed relativistic hydrodynamical calculations that go beyond the coarse analytical estimates in both papers. Researchers might distinguish various scenarios by looking for a transient electromagnetic signal associated with a source that produces r-process nuclei; they would then need to use other observations to identify the source. For example, did the signal come from a region of copious dark matter, as Fuller et al. and Bramante and Linden propose, or was it accompanied by gravitational waves, as expected for inspiralling and merging compact binary stars? Such gravitational waves should be detectable by Advanced LIGO, VIRGO, and KAGRA, and they may ultimately be the smoking gun that allows physicists to solve the mysterious origin of gold.
This research is published in Physical Review Letters.
## References
1. E. M. Burbidge, G. R. Burbidge, W. A. Fowler, and F. Hoyle, “Synthesis of the Elements in Stars,” Rev. Mod. Phys. 29, 547 (1957).
2. A. P. Ji, A. Frebel, A. Chiti, and J. D. Simon, “R-process Enrichment from a Single Event in an Ancient Dwarf Galaxy,” Nature 531, 610 (2016).
3. G. M. Fuller, A. Kusenko, and V. Takhistov, “Primordial Black Holes and r-Process Nucleosynthesis,” Phys. Rev. Lett. 119, 061101 (2017).
4. C. Winteler, R. Käppeli, A. Perego, A. Arcones, N. Vasset, N. Nishimura, M. Liebendörfer, and F.-K. Thielemann, “Magnetorotationally Driven Supernovae as the Origin of Early Galaxy r-Process Elements?,” Astrophys. J. Lett. 750, L22 (2012).
5. P. Banerjee, W. C. Haxton, and Y.-Z. Qian, “Long, Cold, Early r Process? Neutrino-Induced Nucleosynthesis in He Shells Revisited,” Phys. Rev. Lett. 106, 201104 (2011).
6. J. M. Lattimer, F. Mackie, D. G. Ravenhall, and D. N. Schramm, “The Decompression of Cold Neutron Star Matter,” Astrophys. J. 213, 225 (1977).
7. C. Freiburghaus, S. Rosswog, and F.-K. Thielemann, “r-Process in Neutron Star Mergers,” Astrophys. J. Lett. 525, L121 (1999).
8. A. Bauswein, R. Ardevol Pulpillo, H.-T. Janka, and S. Goriely, “Nucleosynthesis Constraints on the Neutron Star-Black Hole Merger Rate,” Astrophys. J. Lett. 795, L9 (2014).
9. A. Wallner et al., “Abundance of Live ${}^{244}\text{Pu}$ in Deep-Sea Reservoirs on Earth Points to Rarity of Actinide Nucleosynthesis,” Nat. Commun. 6, 5956 (2015).
10. K. Hotokezaka, T. Piran, and M. Paul, “Short-Lived ${}^{244}\text{Pu}$ Points to Compact Binary Mergers as Sites for Heavy r-Process Nucleosynthesis,” Nat. Phys. 11, 1042 (2015).
11. J. Bramante and T. Linden, “On the r-Process Enrichment of Dwarf Spheroidal Galaxies,” Astrophys. J. 826, 57 (2016).
12. J. Fuller and C. D. Ott, “Dark Matter-Induced Collapse of Neutron Stars: A Possible Link Between Fast Radio Bursts and the Missing Pulsar Problem,” Mon. Not. R. Astron. Soc. Lett. 450, L71 (2015).
Hans-Thomas Janka obtained his Ph.D. in physics from the Technical University of Munich (TUM) in 1991. After postdoctoral studies as an Otto Hahn Fellow and a Visiting Scholar at the University of Chicago, he became a staff member of the Max Planck Institute for Astrophysics in Garching, Germany. There, he leads a group of researchers who work on supernova theory and neutrino and nuclear astrophysics. He has a teaching affiliation with TUM as an Adjunct Professor. In 2013, he was awarded an Advanced Grant by the European Research Council for three-dimensional computational studies of core-collapse supernovae.
## Related Articles
Gravitation
### Viewpoint: Spinning Black Holes May Grow Hair
A spinning black hole may lose up to 9% of its mass by spontaneously growing “hair” in the form of excitations of a hypothetical particle field with a tiny mass. Read More »
Cosmology
### Synopsis: A Reionization Filter for the Cosmic Microwave Background
A new method of analyzing cosmic microwave background data could isolate signatures from the so-called reionization period that occurred a few hundred million years after the big bang. Read More »
Astrophysics
### Synopsis: LIGO’s Black Hole Got the Boot
An analysis of data from LIGO’s second gravitational-wave event indicates that a supernova can impart a strong kick to the black hole it creates. Read More »
|
2017-08-21 02:52:20
|
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 5, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.6275599598884583, "perplexity": 2670.3292097053877}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 20, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2017-34/segments/1502886107487.10/warc/CC-MAIN-20170821022354-20170821042354-00609.warc.gz"}
|
https://mail.haskell.org/pipermail/haskell-cafe/2005-January/008908.html
|
Jacques Carette carette at mcmaster.ca
Fri Jan 28 17:48:13 EST 2005
[I have now subscribed to haskell-cafe]
Henning Thielemann <lemming at henning-thielemann.de> wrote:
> This seems to be related to what I wrote yesterday
Yes, very much. Except that rather than trying to tell mathematicians that they are *wrong*, I am trying to see which
of their notations I can explain (in a typed way). There will be some 'leftovers', where the notation is simply bad.
> I've collected some examples of abuse and bad mathematical notation:
> http://www.math.uni-bremen.de/~thielema/Research/notation.pdf
Some of what you point out there is bad notation. Other bits point to some misunderstanding of the issues.
Starting from your original post:
> f(x) \in L(\R)
> where f \in L(\R) is meant
>
> F(x) = \int f(x) \dif x
> where x shouldn't be visible outside the integral
First, mathematicians like to write f(x) to indicate clearly that they are denoting a function. This is equivalent to
writing down (f \in -> ) with domain/range omitted.
Second, every mathematician knows that \int f(x) \dif x == \int f(y) \dif y (ie alpha conversion), so that combined
with the previous convention, there is no confusion in writing F(x) = \int f(x) \dif x. It is just as well-defined as
(\x.x x) (\x.x x)
which requires alpha-conversion too for proper understanding.
You also write
> O(n)
> which should be O(\n -> n) (a remark by Simon Thompson in
> The Craft of Functional Programming)
but the only reason for this is that computer scientists don't like open terms. Since the argument to O must be a
univariate function with range the Reals, then whatever is written there must *denote* such a function. The term
n'' can only denote only one such function, \n -> n. So the mathematician's notation is in fact much more pleasing.
However, you have to remember one crucial fact: a set of typographical symbols are meant to *denote* a value, they are
not values in themselves. So there is always a function around which is the denotes'' function that is implicitly
applied. Russell's work in the early 1900, "sense and denotation" are worth reading if you want to learn more about
this.
What is definitely confusing is the use of = with O notation. The denotation there is much more complex - and borders
on incorrect.
> a < b < c
> which is a short-cut of a < b \land b < c
That is plain confusion between the concepts of denotation'' and value''. Where < is a denotation of a binary
function from bool x bool -> bool, _ < _ < _ is a mixfix denotation of a constraint, which could be denoted in a
long-winded fashion by
p a b c = a<b and b<c
but more accurately by
p a c = \b -> b \in )a,c(
where I am using mathematical notation for the body above.
On your notation.pdf'' (link above), you have some other mis-interpretations. On p.10 you seem to think that
Mathematica is a lazy language, when it is in fact an eager language. So your interpretation does not make sense''.
Not that your observation is incorrect. In Maple, there are two functions, eval(expr, x=pt) and subs(x=pt, expr)
which do similar'' things. But subs is pure textual substitution (ie the CS thing), whereas 2-argument eval means
"evaluate the function that \x -> expr denotes at the point pt" (ie the math thing). The interesting thing is that
"the function that \x -> expr denotes" is allowed to remove (removeable) singularities in its denotation'' map.
However,
> subs(x=2,'diff'(ln(x),x)) ;
diff(ln(2),2)
where the '' quotes mean to delay evaluation of the underlying function. On the other hand
> eval('diff'(ln(x),x),x=2) ;
eval('diff'(ln(x),x),x=2)
because it makes no sense to evaluate an open term which introduces a (temporary) binding for one of its variables.
Note that without access to terms, it is not possible to write a function like diff (or derive as you phrase it, or D
as Mathematica calls it). Mathematician's diff looks like it is has signature diff: function -> function, but it in
fact is treated more often as having signature diff: function_denotation -> function_denotation. But you can see a
post by Oleg in December on the haskell list how it is possible (with type classes) in Haskell to automatically pair
up function and function_denotation.
You also seem to assume that set theory is the only axiomatization of mathematics that counts (on p.31). I do not see
functions A -> B as somehow being subsets of powerset(A x B). That to me is one possible 'implementation' of
functions. This identification is just as faulty as the one you point out on p.14 of the naturals not really''
being a subset of the rationals. In both cases, there is a natural embedding taking place, but it is not the
identity.
You also have the signature of a number of functions not-quite-right. indefinite'' integration does not map
functions to functions, but functions to equivalence classes of functions. Fourier transforms (and other integral
transforms) map into functionals, not functions.
I hope your audience (for things like slide 35) was made of computer scientists - it is so amazingly condescending to
thousands of mathematicians, it is amazing you would not get immediately booted out!
On p.37, you have polynomials backwards. Polynomials are formal objects, but they (uniquely) denote a function. So
polynomials CANNOT be evaluated, but they denote a unique function which can.
On p.51 where you speak of hidden'' quantifiers, you yourself omit the thereexists quantifiers that are implicit on
the last 2 lines -- why?
The above was meant to be constructive -- I apologize if it has come across otherwise. This is an under-studied area,
and you should continue to look at it. But you should try a little harder to not assume that thousands of
mathematicians for a couple of hundred years (at least) are that easily wrong''. Redundant, sub-optimal, sure.
Jacques
|
2017-07-27 05:27:17
|
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 1, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.9364346265792847, "perplexity": 2208.02908690343}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.3, "absolute_threshold": 10, "end_threshold": 5, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2017-30/segments/1500549427429.9/warc/CC-MAIN-20170727042127-20170727062127-00490.warc.gz"}
|
https://en.wikipedia.org/wiki/BTZ_black_hole
|
# BTZ black hole
The BTZ black hole, named after Máximo Bañados, Claudio Teitelboim, and Jorge Zanelli, is a black hole solution for (2+1)-dimensional topological gravity with a negative cosmological constant[clarification needed].
## History
In 1992 Bañados, Teitelboim and Zanelli discovered the BTZ black hole solution (Bañados, Teitelboim & Zanelli 1992). At that time[clarification needed], it came as a surprise because it is believed[according to whom?] that no black hole solutions are shown to exist for a negative cosmological constant and BTZ black hole has remarkably similar properties to the 3+1 dimensional black hole, which would exist in our real universe.
When the cosmological constant is zero, a vacuum solution of (2+1)-dimensional gravity is necessarily flat (the Weyl tensor vanishes in three dimensions, while the Ricci tensor vanishes due to the Einstein field equations, so the full Riemann tensor vanishes), and it can be shown that no black hole solutions with event horizons exist[citation needed]. By introducing dilatons, we can have black holes.[verification needed] We do have conical angle deficit solutions, but they don't have event horizons. It therefore came as a surprise when black hole solutions were shown to exist for a negative cosmological constant.
## Properties
The similarities to the ordinary black holes in 3+1 dimensions:
Since (2+1)-dimensional gravity has no Newtonian limit, one might fear[why?] that the BTZ black hole is not the final state of a gravitational collapse. It was however shown, that this black hole could arise from collapsing matter and we can calculate the energy-moment tensor of BTZ as same as (3+1) black holes. (Carlip 1995) section 3 Black Holes and Gravitational Collapse.
The BTZ solution is often discussed in the realm on (2+1)-dimensional quantum gravity.
## The case without charge
The metric in the absence of charge is
${\displaystyle ds^{2}=-{\frac {(r^{2}-r_{+}^{2})(r^{2}-r_{-}^{2})}{l^{2}r^{2}}}dt^{2}+{\frac {l^{2}r^{2}dr^{2}}{(r^{2}-r_{+}^{2})(r^{2}-r_{-}^{2})}}+r^{2}\left(d\phi -{\frac {r_{+}r_{-}}{lr^{2}}}dt\right)^{2}}$
where ${\displaystyle r_{+},~r_{-}}$ are the black hole radii and ${\displaystyle l}$ is the radius of AdS3 space. The mass and angular momentum of the black hole is
${\displaystyle M={\frac {r_{+}^{2}+r_{-}^{2}}{l^{2}}},~~~~~J={\frac {2r_{+}r_{-}}{l}}}$
BTZ black holes without any electric charge are locally isometric to anti-de Sitter space. More precisely, it corresponds to an orbifold of the universal covering space of AdS3.
A rotating BTZ black hole admits closed timelike curves.
|
2017-06-23 15:14:09
|
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 4, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.9248633980751038, "perplexity": 772.8705652491285}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2017-26/segments/1498128320063.74/warc/CC-MAIN-20170623133357-20170623153357-00447.warc.gz"}
|
https://www.physicsforums.com/threads/chemical-elements-produced-inside-the-sun.949259/
|
# I Chemical Elements produced inside the Sun
1. Jun 9, 2018
### DaTario
Hi All,
I would like to know if the following statement is true or false:
The nuclear processes that happen inside the Sun can produce at least one unity of each of the known chemical elements.
Best Regards,
DaTario
2. Jun 9, 2018
3. Jun 9, 2018
### phyzguy
What do you mean by "one unity"? One atom?
4. Jun 9, 2018
### stefan r
False.
Lithium and plutonium come to mind.
Assume he means one quanta. An atom. What are the odds of a 197Au atom appearing?
5. Jun 9, 2018
### DaTario
Yes, at least one atom of each of the existing elements, without any external interference. Just the Sun with its initial condition and its natural burning process.
6. Jun 9, 2018
### DaTario
Hi stefan r, thank you for the response, but why are the synthesis of Lithium and Plutonium impossible in the Sun? Please note that my question does not address the discussion about abundancies. It has to do only with the possibility of the corresponding nucleosynthesis to occur in the Sun, due to internal natural processes.
Last edited: Jun 9, 2018
7. Jun 9, 2018
### DaTario
Thank you, Chem Air, but it seems that these pages do not contain explicitly the answer, although they contain a lot of usefull information on this subject.
8. Jun 9, 2018
### rootone
A star the size of the Sun can produce Carbon,
Oxygen and Nitrogen are interesting by products of that.
9. Jun 9, 2018
### DaTario
Nothing beyond these elements? Is it impossible for an atom of iron (and others havier than iron) to appear in the Sun?
10. Jun 9, 2018
### davenn
no, nothing.
http://www.astronomynotes.com/evolutn/s7.htm
Created in and appearing in have 2 very different meanings.
you need to be very careful with your use of terms/definitions
That doesn't mean to say other elements are not present in stars the mass of our sun, but they were not created in the sun
11. Jun 9, 2018
### DaTario
I guess I understand the difference now. Appearing suggests that the element was already present in the begining of the star. Is it correct?
My question has to do with the creating part. By starting from hydrogen going step by step until the formation of , say, Uranium.
12. Jun 9, 2018
### DaTario
From this reference, I took the following:
"The atoms heavier than helium up to the iron and nickel atoms were made in the cores of stars (the process that creates iron also creates a smaller amount of nickel too). The lowest mass stars can only synthesize helium. Stars around the mass of our Sun can synthesize helium, carbon, and oxygen. Massive stars (M* > 8 solar masses) can synthesize helium, carbon, oxygen, neon, magnesium, silicon, sulfur, argon, calcium, titanium, chromium, and iron (and nickel). Elements heavier than iron are made in supernova explosions from the rapid combination of the abundant neutrons with heavy nuclei. Massive red giants are also able to make small amounts of elements heavier than iron (up to mercury and lead) through a slower combination of neutrons with heavy nuclei, but supernova probably generate the majority of elements heavier than iron and nickel (and certainly those heavier than lead up to uranium). The synthesized elements are dispersed into the interstellar medium during the planetary nebula or supernova stage (with supernova being the best way to distribute the heavy elements far and wide). These elements will be later incorporated into giant molecular clouds and eventually become part of future stars and planets (and life forms?)"
A small part of my question is still standing on its foot. When this author, Nick Strobel, says that Stars around the mass of our Sun can synthesize helium, carbon, and oxygen. is he meaning that we are not to expect relevant amount of other heavier atoms to be produced (by nuclear processes) in the Sun or that we must accept that the Sun has not sufficient energy to produce even one atom of those heavier elements, like iron or uranium, for instance?
13. Jun 10, 2018
### davenn
Yes, they have come from other massive star supernovas and were present in the dust/gas clouds that coalesced into the sun and the planets
Yes, there isn't enough energy for reactions to produce those heavier elements. It takes more massive stars than our sun
Even one atom of ??
I doubt anyone could prove or disprove that and the big scheme of things, it's hardly relevant
I would rather say … detectable amounts that were guaranteed to have been CREATED in the Sun
Last edited: Jun 10, 2018
14. Jun 10, 2018
### DaTario
Thank you, davenn. Let me just present a last idea on this discussion, which is to me a bit confusing. When we study the thermal state, we learn that, at a given temperature, the probability of existence of particle with a very very high velocity is not zero, although it is very small. With this in view, must we say that the nuclear process that produces a heavier element (just one atom of it) in the Sun is, in fact, impossible?
(This ideia of thermodynamics leads me to think that it is only very unlikely to occur, but once in a while it happens, yielding a negligible population of these species.)
15. Jun 10, 2018
### davenn
That's OK for things outside the core of a star.
You do know that it takes 1000's of years for photons produced in the core to get to the surface of the sun ?
there's no room for very high velocities in the core of a star.... the densities are too high
16. Jun 10, 2018
### Bandersnatch
I think there is always a non-zero cross section for any fusion reaction to occur, at any temperature above zero. That is to say, there is no hard cut-off absolutely preventing further steps from happening (unless there is? Let's ask @mfb ).
So the probability of there occurring the entire chain of reactions even up to uranium fusion should also have a non-zero probability.
The question would then become 'how probable is it that a star like the Sun can produce at least one of all elements, up to uranium (or even heavier), over its life time?'.
The answer would require running some actual numbers, which I don't have. My gut feeling, though, is that it'd be as probable as for a bowl of petunias to suddenly appear in Earth's orbit.
17. Jun 10, 2018
### Staff: Mentor
Lithium is created routinely and in huge amounts in one of the proton-proton fusion chains (P-P II).
A couple of uranium atoms will capture neutrons and become plutonium. That is not a common process, but the Sun consists of 1057 nuclei.
The density doesn't matter as long as the system is not degenerate (it is not).
You can multiply the Gamow factor with the Maxwell-Boltzmann distribution to get a rough estimate of the probability that particles have enough energy and fuse. For two protons this chance is very small, but with the huge number of collisions it still happens once in a while. Try the same for two helium nuclei, or even heavier nuclei.
Who said it has to be fusion? Our Sun contains uranium, uranium can fission spontaneously; it releases a few neutrons in the process. The neutrons can be captured by all other elements in the Sun, often allowing them to beta decay to a different element. This is not a common process, but we have 1057 nuclei to work with - it does happen.
In addition, cosmic rays strike the surface, leading to various reactions.
Superheavy elements would need some really weird production mechanism, however - a heavy ion coming from space hitting a heavy ion in the outer regions of the Sun or something like that. I'm not sure how often that happens. Probably more than once per 5 billion years.
18. Jun 10, 2018
### stefan r
There is a chance that a bowl of petunias will appear in Earth's orbit. The probability of a quantum tunneling event is strongly effected by the number of particles involved and the distance that each particle moves. The atoms in your foot rearranging into a bowl of petunias at the end of your leg is much more probable than a bowl of petunias appearing in orbit unless it happens in a satellite because the particles need to move a shorter distance. The core of the sun has high density so the petunia probability should be higher. It is safe to say that a spontaneous quantum bowl of petunias is so unlikely that it has never occurred anywhere in the visible universe since the big bang.
I saw the calculations for one mole of water tunneling in a text book. The author gave estimates for tunneling from one shot glass to an adjacent shot glass as water. That is less likely than tunneling out of the shot glass. That was much less likely than tunneling event inside the shot glass where the atoms in the water molecules move a few angstroms and become iron and release enough energy to destroy the neighborhood. Even though a spontaneous nuclear explosion in your toe is much much more likely it is still highly unlikely that is has happened anywhere in the visible universe within 4 x 1017 seconds.
If the same mechanism can happen on Earth or Ceres then I think it is reasonable to say it is not part of "the nuclear processes in the Sun".
The core of the sun burns lithium faster than it produces lithium.
I would not include cosmic ray spallation or spontaneous fission. Both can occur on Earth.
7Li is in the p-p chain. 6Li can be formed by 3H and 3He. 3H should be extremely rare. Is there an easier route to 6Li?
19. Jun 10, 2018
### Staff: Mentor
OP was talking about a single atom and the Sun. One isotope of lithium is enough and what happens on Earth is not relevant.
D + He-4 doesn’t have enough energy I guess (and photon emission - rare process if possible at all)? Can’t check right now.
20. Jun 10, 2018
### Bystander
Along that same/similar line of inquiry, thirty to forty years ago, 8Be was "forbidden"/had an infinitessimal lifetime; any more recent measurements/results?
21. Jun 10, 2018
### Staff: Mentor
Its lifetime is still very short, $(6.7\pm1.7)\cdot 10^{-17} s$.
Edit: Minus sign
Last edited: Jun 11, 2018
22. Jun 10, 2018
### Bystander
Minus? Thanks.
23. Jun 10, 2018
### alantheastronomer
The nuclear burning cores of sun-like stars are convective, which means they have a uniform temperature. While it's true that energies of an isotropic medium lie along a gaussian curve, with few atoms along the high end of the curve, their energies are insufficient to produce the high atomic mass elements to which you are referring.
24. Jun 11, 2018
### DaTario
But the question posed in the OP is relative to the possibility of creation of these elements by any natural process inside the Sun; it is not related to the corresponding lifetime.
I would like to ask if you (alantheastronomer) agree with the following sentence, which is your sentence, quoted above, with a small change (bold):
The nuclear burning cores of sun-like stars are convective, which means they have a uniform temperature. While it's true that energies of an isotropic medium lie along a gaussian curve, with few atoms along the high end of the curve, their energies are insufficient to produce even just one of the high atomic mass elements to which you are referring.
25. Jun 11, 2018
### stefan r
I think G and F stars do not have convective cores. convection is outside of the tachocline. Many K dwarfs are convective to the core. Types A,B,O have core convection.
|
2018-06-21 12:44:42
|
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.5377600193023682, "perplexity": 1047.5119909648206}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.3, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2018-26/segments/1529267864148.93/warc/CC-MAIN-20180621114153-20180621134153-00102.warc.gz"}
|
https://stats.stackexchange.com/questions/190296/kernel-nonparametric-regression
|
Kernel nonparametric regression
One of the methods for nonparametric regression is using kernels. My question is what are the conditions on the kernels functions in this method? In other words how can I decide if a given function can be used as a kernel?
Thanks
1 Answer
Notion of a kernel has a strict mathematical definition (from here):
Definition. $k : \mathcal{X} \times \mathcal{X} \rightarrow \mathbb{R}$ is a kernel if
1. $k$ is symmetric: $k(x,y) =k(y,x)$.
2. $k$ is positive semi-definite, i.e., $\forall x_1,x_2,...,x_n \in \mathcal{X}$, the ”Gram Matrix” $K$ defined by $K_{ij}=k(x_i,x_j)$ is positive semi-definite. (A matrix $M \in \mathbb{R}^{n \times n}$ is positive semi-definite if $\forall a \in \mathbb{R}^n, a'Ma\ge0$.)
Intuition behind a kernel is that it implicitly maps its input to some space (possibly infinite-dimensional), and then computes an inner product in that space:
$$k(x, y) = \phi(x)^T \phi(y)$$
Then $K$ is effectively a Gram matrix, so you have to check if it's symmetric and positive-definite. This is not something you can test on a computer, you'll have to prove it mathematically.
Mercer's theorem says that a kernel can be represented as $$k(s, t) = \sum_{j=1}^\infty \lambda_j e_j(t) e_j(s)$$ for some non-negative $\lambda$. From this form it easily follows that $K$ is positive semi-definite. So if you can represent your function in a form of RHS of the above equation, your function is a kernel.
You can also show a function is a kernel if you decompose it into a combination of known-to-be kernels:
• Sum of two kernels is a kernel
• A kernel multiplied by a positive number is a kernel
|
2020-07-04 04:26:55
|
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 1, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.9170687794685364, "perplexity": 198.57338745668818}, "config": {"markdown_headings": false, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": false}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2020-29/segments/1593655884012.26/warc/CC-MAIN-20200704042252-20200704072252-00075.warc.gz"}
|
https://mathematica.stackexchange.com/questions/77481/working-with-derivative-of-conjugate-of-a-complex-number
|
# Working with derivative of conjugate of a complex number
I have a complex function, lets say $g(x)$. I want to take its and its conjugate's derivative. I need the solution of derivative which must be symbolically and computationally efficient.
Lets take an example:
Derivative[1][g][x_] := d[g[x]]
Derivative[1][Conjugate][g[x_]] := Conjugate[d[g[x]]]/d[g[x]];
Derivative[1][Conjugate][d[x_]] := Conjugate[d[d[x]]]/d[d[x]]
Derivative[1][d][x_] := d[d[x]]/d[x];
Derivative[1][d][x_Symbol] := d[d[x]]
This will give me an effective symbolic representation of the derivative of $g(x)$ and $Conjugate(g(x))$ as d[g[x]] and Conjugate[d[g[x]]] but when I have to plug the analytical complex expression of $g(x)$ in d[g[x]], it will not compute the derivative of $g(x)$ instead gives only the symbolic representation of $d(g(x))$, which is computationally inefficient.
Is something can be done which is capable of symbolic as well as algebraic computation.
P.S. I do need the symbolic representation of the derivatives of conjugate in the above format only.
Maybe you can use the following two constructs to your advantage, which will keep the Conjugate, but evaluate and simplify the derivative inside. Using ReleaseHold, you can then evaluate even the Conjugate.
Note that I left out the divisor in the Conjugate-case for clarity, but you can easily add that into the second function's definition.
d[g_] := Derivative[1][g]
d[Conjugate[g_]] := With[{dg = d[g]@# // Simplify},
HoldForm[Conjugate[dg]] &]
(* example function *)
g[x_] := TrigToExp@Sin[x]
(* evaluation *)
d[g][x]
(* \[ExponentialE]^(-\[ImaginaryI] x)/2+
\[ExponentialE]^(\[ImaginaryI] x)/2 *)
d[Conjugate[g]][x]
(* Conjugate[1/2 \[ExponentialE]^(-\[ImaginaryI] x)
(1+\[ExponentialE]^(2 \[ImaginaryI] x))] *)
## Update
If you want further derivatives, you can instead use this slight expansion of the idea above:
d[g_, n_:1] := Derivative[n][g]
d[Conjugate[g_], n_:1] := With[{dg = d[g, n]@# // Simplify}, HoldForm[Conjugate[dg]] &]
n gives the order of derivation you want. If left out, the first derivative is generated.
Interesting sidenote: n can even be negative, giving you the integral of your function. Observe e.g.:
h[x_]:=Sin[x]
d[h,0][x] (* Sin[x] *)
d[h,-1][x] (* -Cos[x] *)
d[h,1][x] (* Cos[x] *)
d[h][x] (* Cos[x] *)
• Thanks for your input. Can you please tell me, where to read from to understand such programming in mathematica in a better way? I couldn't find this type of programming in any textbooks. – Shivam Sahu Mar 20 '15 at 13:53
• @ShivamSahu: I can only speak for myself: I learned the little I know about Mathematica from its documentation and my own humble experiments. If my answer helped in your quest, please "accept" it. – Jinxed Mar 20 '15 at 20:27
• I have one more question. How can we handle double derivative of g(x) and Conjugate(g(x)) with this? – Shivam Sahu Mar 22 '15 at 12:31
• @ShivamSahu: Have a look at the update I made to my answer. :) – Jinxed Mar 24 '15 at 13:49
• Thanks for the update. This was precisely what I was looking for. :) – Shivam Sahu Mar 25 '15 at 5:06
|
2020-01-20 08:30:36
|
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 0, "mathjax_asciimath": 1, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.5981793403625488, "perplexity": 1943.7503903318566}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2020-05/segments/1579250598217.23/warc/CC-MAIN-20200120081337-20200120105337-00007.warc.gz"}
|
https://hackaday.com/tag/gear/
|
# Couch Potato Refined: Self-Rotating TV Uses Plywood Gears
When we first saw [Mikeasaurus’] project to rotate his TV 90 degrees in case he wanted to lay down and channel surf we were ready to be unimpressed. But it grew on us as we read about how he fabricated his own gearing system to make a car seat motor rotate the TV.
The gearing system is made from plywood and the design was from geargenerator.com, a freebie design tool we’ve covered before. You’d think you’d need a laser cutter, but in this case, the gear forms were printed out, glued on the plywood and then cut out manually. Each gear is made of several laminated together.
# Firing Bullets Through Propellers
Early airborne combat was more like a drive-by shooting as pilot used handheld firearms to fire upon other aircraft. Whomever could boost firepower and accuracy would have the upper hand and so machine guns were added to planes. But it certainly wasn’t as simple as just bolting one to the chassis.
This was during World War I which spanned 1914 to 1918 and the controllable airplane had been invented a mere eleven years before. Most airplanes still used wooden frames, fabric-covered wings, and external cable bracing. The engineers became pretty inventive, even finding ways to fire bullets through the path of the wooden propeller blades while somehow not tearing them to splinters.
# Vintage Logan Lathe Gets 3D Printed Gears
In December 2016, [Bruno M.] was lucky enough to score a 70+ year old Logan 825 lathe for free from Craigslist. But as you might expect for a piece of machinery older than 95% of the people reading this page, it wasn’t in the best of condition. He’s made plenty of progress so far, and recently started tackling some broken gears in the machine’s transmission. There’s only one problem: the broken gears have a retail price of about \$80 USD each. Ouch.
On his blog, [Bruno] documents his attempts at replacing these expensive gears with 3D printed versions, which so far looks very promising. He notes that usually 3D printed gears wouldn’t survive in this sort of application, but the gears in question are actually in a relatively low-stress portion of the transmission. He does mention that he’s still considering repairing the broken gears by filling the gaps left by the missing teeth and filing new ones in, but the 3D printed gears should at least buy him some time.
As it turns out, there’s a plugin available for Fusion 360 that helpfully does all the work of creating gears for you. You just need to enter in basic details like the number of teeth, diametral pitch, pressure angle, thickness, etc. He loaded up the generated STL in Cura, and ran off a test gear on his delta printer.
Of course, it didn’t work. Desktop 3D printing is still a finicky endeavour, and [Bruno] found with a pair of digital calipers that the printed gear was about 10% larger than the desired dimensions. It would have been interesting to find out if the issue was something in the printer (such as over-extrusion) or in the Fusion 360 plugin. In any event, a quick tweak to the slicer scale factor was all it took to get a workable gear printed on the third try.
This isn’t the first time we’ve seen 3D printed gears stand in for more suitable replacement parts, nor the first time we’ve seen them in situations that would appear beyond their capability. As 3D printer hardware and software improves, it seems fewer and fewer of the old caveats apply.
# Man-in-the-Middle Jog Pendant: Two Parts Make Easier Dev Work
In a project, repetitive tasks that break the flow of development work are incredibly tiresome and even simple automation can make a world of difference. [Simon Merrett] ran into exactly this while testing different stepper motors in a strain-wave gear project. The system that drives the motor accepts G-Code, but he got fed up with the overhead needed just to make a stepper rotate for a bit on demand. His solution? A grbl man-in-the-middle jog pendant that consists of not much more than a rotary encoder and an Arduino Nano. The unit dutifully passes through any commands received from a host controller, but if the encoder knob is turned it sends custom G-Code allowing [Simon] to dial in a bit acceleration-controlled motor rotation on demand. A brief demo video is below, which gives an idea of how much easier it is to focus on the nuts-and-bolts end of hardware when some simple motor movement is just a knob twist away.
# Edgytokei’s Incredible Mechanism Shows Time Without a Face
Taking inspiration from Japanese nunchucks, [ekaggrat singh kalsi] came up with a brilliant clock that tells time using only hour and minute hands, and of course a base for them to sit on. The hands at certain parts of the hour seem to float in the air, or as he puts it, to sit on their edges, hence the name, the Edgytokei, translating as “edge clock”.
The time is a little difficult to read at first unless you’ve drawn in a clock face with numbers as we’ve done here. 9:02 and 9:54 are simple enough, but 9:20 and 9:33 can be difficult to translate into a time at first glance. Since both hands have to be the same length for the mechanism to work, how do you tell the two hands apart? [ekaggrat] included a ring of LEDs in the hub at the base and another at the end of one of the hands. Whichever ring of LEDs is turned on, indicates the tip of the minute hand. But the best way to get an idea of how it works is to watch it action in the video below.
We have to admire the simplicity and cleanliness of his implementation. The elbow and the hub at the base each hide a stepper motor with attached gear. Gear tracks lining the interior of the hands’ interact with the motor gears to move the hands. And to keep things clean, power is transferred using copper tape lining the exteriors.
On the Hackaday.io page [ekaggrat] talks about how difficult it was to come up with the algorithms and especially the code for homing the hands to the 12:00 position, given that homing can be initiated while the hands can be in any orientation. The hand positions are encoded in G-code, and a borrowed G-code parser running on an Arduino Nano in the base controls it all.
# 3D Printed Gear Serves Seven Months Hard Labor
Even the staunchest 3D printing supporter would have to concede that in general, the greatest strength of 3D printing is not in the production of final parts, but in prototyping. Sure you can make functional prints, as the pages of this site will attest; but few would argue that you wouldn’t be better off getting your design cut out of metal or injection molded if you planned on putting the part into service over the long term. Especially if the part was to be subjected to rough service in an industrial setting.
While that’s valid advice, it certainly isn’t the definitive word on the issue. Just because a part is printed in plastic on a desktop 3D printer doesn’t necessarily mean it can’t be put into real service, at least for as long as it takes to get proper replacement parts. A recent success story from [bloomautomatic] serves as a perfect example, when one of the gears in his MIG welder split, he decided to try and print up a replacement in PLA while he waited for the nylon gear to get shipped out to him. Fast forward seven months and approximately 80,000 welds later, and [bloomautomatic] reports it’s finally time to install those replacement gears he ordered.
In the pictures [bloomautomatic] posted you can see the printed gear finally wore down to the point the teeth were essentially gone where they meshed with their metal counterparts. To those wondering why the gear was plastic to begin with, [bloomautomatic] explains that it’s intended to be a sacrificial gear that will give way instead of destroying the entire gearbox in the event of a jam. According to the original post he made when he installed the replacement gear, the part was printed in Folgertech PLA on a Monoprice Select Mini. There’s no mention of infill percentage, but with such a small part most slicers would likely have made it essentially solid to begin with.
While surviving seven tortuous months inside of the welder is no small feat, we wonder if hardier PLA formulationstreatment of the part post-printing, or even casting it in a different material couldn’t have turned this temporary part into a permanent replacement.
# Network Analysers: The Electrical Kind
Instrumentation has progressed by leaps and bounds in the last few years, however, the fundamental analysis techniques that are the foundation of modern-day equipment remain the same. A network analyzer is an instrument that allows us to characterize RF networks such as filters, mixers, antennas and even new materials for microwave electronics such as ceramic capacitors and resonators in the gigahertz range. In this write-up, I discuss network analyzers in brief and how the DIY movement has helped bring down the cost of such devices. I will also share some existing projects that may help you build your own along with some use cases where a network analyzer may be employed. Let’s dive right in.
# Network Analysis Fundamentals
As a conceptual model, think of light hitting a lens and most of it going through but part of it getting reflected back.
The same applies to an electrical/RF network where the RF energy that is launched into the device may be attenuated a bit, transmitted to an extent and some of it reflected back. This analysis gives us an attenuation coefficient and a reflection coefficient which explains the behavior of the device under test (DUT).
Of course, this may not be enough and we may also require information about the phase relationship between the signals. Such instruments are termed Vector Network Analysers and are helpful in measuring the scattering parameters or S-Parameters of a DUT.
The scattering matrix links the incident waves a1, a2 to the outgoing waves b1, b2 according to the following linear equation: $\begin{bmatrix} b_1 \\ b_2 \end{bmatrix} = \begin{bmatrix} S_{11} & S_{12} \\ S_{21} & S_{22} \end{bmatrix} * \begin{bmatrix} a_1 \\ a_2 \end{bmatrix}$.
The equation shows that the S-parameters are expressed as the matrix S, where and denote the output and input port numbers of the DUT.
This completely characterizes a network for attenuation, reflection as well as insertion loss. S-Parameters are explained more in details in Electromagnetic Field Theory and Transmission Line Theory but suffice to say that these measurements will be used to deduce the properties of the DUT and generate a mathematical model for the same.
# General Architecture
As mentioned previously, a simple network analyzer would be a signal generator connected and a spectrum analyzer combined to work together. The signal generator would be configured to output a signal of a known frequency and the spectrum analyzer would be used to detect the signal at the other end. Then the frequency would be changed to another and the process repeats such that the system sweeps a range of frequencies and the output can be tabulated or plotted on a graph. In order to get reflected power, a microwave component such as a magic-T or directional couplers, however, all of this is usually inbuilt into modern-day VNAs.
Continue reading “Network Analysers: The Electrical Kind”
|
2018-08-15 22:50:21
|
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 1, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.28923535346984863, "perplexity": 1567.1916033641537}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.3, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2018-34/segments/1534221210362.19/warc/CC-MAIN-20180815220136-20180816000136-00491.warc.gz"}
|
https://proofwiki.org/wiki/Gauss%27s_Integral_Form_of_Digamma_Function
|
# Gauss's Integral Form of Digamma Function
## Theorem
Let $z$ be a complex number with a positive real part, then:
$\displaystyle \psi \left({z}\right) = \int_0^\infty \left({\frac{ e^{-t} } t - \frac {e^{-zt } } {1 - e^{-t} } }\right) \rd t$
where $\psi$ is the digamma function.
## Source of Name
This entry was named for Carl Friedrich Gauss.
|
2020-06-03 02:11:06
|
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.906539797782898, "perplexity": 976.0445490914326}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2020-24/segments/1590347428990.62/warc/CC-MAIN-20200603015534-20200603045534-00496.warc.gz"}
|
https://indico.cern.ch/event/839985/contributions/3983645/
|
# LXX International conference "NUCLEUS – 2020. Nuclear physics and elementary particle physics. Nuclear physics technologies"
Oct 11 – 17, 2020
Online
Europe/Moscow timezone
## Hadron Production in High-Energy Particle Collisions
Oct 14, 2020, 3:30 PM
25m
Online
#### Online
Oral report Section 4. Relativistic nuclear physics, elementary particle physics and high-energy physics.
### Speaker
Prof. Andrew Koshelkin (National Research Nuclear University )
### Description
Based on the quark-hadron duality concept the hadronization of the deconfined matter arising in high-energy particle collisions is considered. The number of generated hadrons is shown to be entirely determined by the exact non-equilibrium Green's functions of partons in the deconfined matter and the vertex function governed by the probability of the confinement-deconfinement phase transition.
Compactifying the standard (3+1) chromodynamics into $QCD_{xy} + QCD_{zt}$, the rate of hadrons produced in particle collisions with respect to both the rapidity and $p_T$ distributions is derived in the flux tube approach. Provided that the hadronization is the first order phase transition, the hadron rate is derived in the explicit form. The obtained rate is found to depend strongly on the energy of the colliding particles, number of tubes, hadron mass as well as on the temperature of the confinement-deconfinement phase transition. In the case of the pion production in $pp$ collisions we obtain a good agreement to the experimental results on the pion yield with respect to both the rapidity and $p_T$ distributions.
### Primary author
Prof. Andrew Koshelkin (National Research Nuclear University )
|
2023-03-31 00:20:09
|
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.7502158284187317, "perplexity": 1468.7631404420033}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2023-14/segments/1679296949506.62/warc/CC-MAIN-20230330225648-20230331015648-00389.warc.gz"}
|
http://mathoverflow.net/questions/102793/what-fraction-of-a-spheres-volume-lies-within-a-cone
|
# What fraction of a sphere's volume lies within a cone?
Let $B \subset \mathbb{R}^n$ be a collection of $n$ (not necessarily independent) unit vectors which we will label $v_1,\ldots, v_n$ for convenience. The cone $K_B \subset \mathbb{R}^n$ associated to $B$ is the non-negative linear span of $B$, i.e., $$K_B = \lbrace r_1v_1 + r_2v_2 + \ldots + r_nv_n~|~r_j \geq 0 \rbrace.$$ Let $\mathbb{S}^n$ denote the unit $(n-1)$-sphere defined as usual by $$\mathbb{S}^n = \lbrace v \in \mathbb{R}^n ~|~ \|v\| = 1\rbrace.$$
Question:
Is there a nice formula known for the ratio $$\angle B = \frac{\text{Vol}(K_B\cap\mathbb{S}^n)}{\text{Vol}(\mathbb{S}^n)}?$$
Where $\text{Vol}$ refers to $(n-1)$ dimensional volume and nice means "directly involving the coordinates of the vectors in $B$"? The motivation comes from the trivial case $n=2$: when $B = \lbrace v_1, v_2\rbrace \subset \mathbb{R}^2$ then the fraction of the unit circle's perimeter lying within the cone spanned by $v_1$ and $v_2$ can immediately be recovered from the inner product (which of course directly involves coordinates): $$\angle B = \frac{1}{2\pi} \cos^{-1}(v_1\cdot v_2).$$
I assume this is an extremely well-studied problem, but all my google searches so far have only yielded high school trigonometry so I am obviously missing some keywords. All help is appreciated!
-
– Ryan Budney Jul 21 '12 at 3:48
In principle, one can compute the volume of a spherical polyhedron by first dividing it into simplices (e.g. by barycentric subdivision), and then computing the sum of the volumes of the simplices.
Inductive formulae for volumes of spherical simplices go back to Schlafli; see also Peschl.
-
|
2014-03-09 18:49:56
|
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 1, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.9698758125305176, "perplexity": 251.82271365114033}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2014-10/segments/1394010128060/warc/CC-MAIN-20140305090208-00073-ip-10-183-142-35.ec2.internal.warc.gz"}
|
http://mathinsight.org/visualizing_two_dimensional_linear_system
|
# Math Insight
### Visualizing the solution to a two-dimensional system of linear ordinary differential equations
Below are two applets through which you can explore the solution a system of two linear ODEs, i.e., a system of the form \begin{align*} \diff{\vc{x}}{t} &= A \vc{x}\\ \vc{x}(0) &= \vc{x}_0, \end{align*} where $\vc{x}$ is a two-dimensional vector, $\vc{x}=(x,y)$, $A$ is a $2 \times 2$ matrix, and the initial condition is $\vc{x}_0=(x_0,y_0)$.
#### Interactive phase plane applet
The first applet shows the solution to $\diff{\vc{x}}{t} = A \vc{x}$, plotted both as functions as time and in the phase plane. The applet demonstrates how the phase plane represents the solution trajectory $(x(t),y(t))$ through time. It also illustrates the link between the solution and the eigenvalues and eigenvectors of $A$.
Control panel (Show)
Eigenvalues and eigenvectors (Show)
Equilibrium classification (Show)
Solution (Show)
A linear system with phase plane and versus time.
Illustration of the solution to a system of two linear ordinary differential equations. The system is of the form $\diff{\vc{x}}{t} = A\vc{x}$ with prescribed initial conditions $\vc{x}(0)=\vc{x}_0$, where $\vc{x}(t)=(x(t),y(t))$. The solution trajectory $(x(t),y(t))$ is plotted as a cyan curve on the phase plane in the left panel. In the right panel, the components of the solution $x(t)$ (top axes, solid cyan curve) and $y(t)$ (bottom axes, dashed cyan curve) are plotted versus time.
To visualize how the solution changes as a function of time in the phase plane, one can change the time $t$ with the slider in the right panel or press the play button (triangle) in the lower left of one of the panels to start the animation of $t$ increasing. The red points in both panel move with $t$ to correspond to the solution $(x(t),y(t))$.
Values of the matrix $A$ can be changed in the top control panel. The initial condition $\vc{x}(0)= (x_0,y_0)$ can be changed by dragging the cyan points in either panel or by entering numbers in the control panel.
If the eigenvalues of $A$ are real, then one can check the “show eigenvectors” box to show the directions of the eigenvectors of $A$ in the left phase plane. If the corresponding eigenvalue is not zero, arrows along the eigenvector indicates the direction the solution moves along the eigenvector direction. Checking the “show vector” box displays a vector from the origin to $(x(t),y(t))$, allowing one to track the direction of the solution even when the point $(x(t),y(t))$ moves out of view. Checking the “show decompositions” box, shows the decomposition of $(x(t),y(t))$ as a sum of components along the eigenvectors.
If you check the box “show eigenvalues”, then the phase plane plot shows an overlay of the eigenvalues, where the axes are reused to represent the real and imaginary axes of the complex plane. The eigenvalues appear as two points on this complex plane, and will be along the x-axis (the real axis) if the eigenvalues are real. If both eigenvalues are in the left half of the plane (which becomes shaded when the box is checked), then the equilibrium at the origin is stable.
The solution, eigenvalues, eigenvectors, and characterization of the equilibrium at the origin are shown in the sections at the bottom of the applet. These calculations depend on values of $A$ and initial condition chosen.
#### MIT cursor entry mathlet
The second applet, from the MIT Mathlet collections, is the Linear Phase Portraits: Cursor Entry Mathlet (distributed under a Creative Commons Attribution-NonCommercial-ShareAlike 3.0 Unported license). In this applet, you specify the matrix by changing the trace and the determinant of the matrix $A$ (lower left), which determine the eigenvalues of the matrix $A$, and hence type of the system. The eigenvalues, however, don't fully determine the entries of the matrix. In the upper left, you can change two more quantities that determine the rotation and the asymmetry of the solutions in the phase plane. Combined with the eigenvalues, these quantities completely specify the entries of $A$.
LINEAR PHASE PORTRAITS: CURSOR ENTRY
The graphing window at right displays a few trajectories of the linear system x' = Ax. Below the window the name of the phase portrait is displayed, along with the matrix A and the eigenvalues of A.
To control the matrix one first sets the trace and the determinant by dragging the cursor over the diagram at bottom left or by grabbing the sliders below or to the left of that diagram. Select from among the matrices with given trace and determinant by dragging the cursor over the window at upper left, or by grabbing the sliders below and to the left of that window. The bottom slider conjugates the matrix A by a rotation matrix; the effect is to rotate the phase portrait. The left slider controls the "asymmetry" of A, half the difference of between its off-diagonal entries. When the eigenvalues are not real, the asymmetry is at least the imaginary part of the eigenvalue in absolute value, so the upper left window splits into two portions (corresponding to clockwise or counterclockwise spirals).
Depress the mousekey over the graphing window to display a trajectory through that point. The trajectory can be dragged by moving the cursor with the mousekey depressed. Releasing it will leave the trajectory in place. Click on [Clear] to clear all the trajectories.
© 2001 H. Hohn and H. Miller
Another applet that may be of interest is the Linear Phase Portraits: Matrix Entry Mathlet, also from the MIT Mathlet collections.
|
2017-08-20 07:59:08
|
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.8338053226470947, "perplexity": 469.23102426534507}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2017-34/segments/1502886106358.80/warc/CC-MAIN-20170820073631-20170820093631-00208.warc.gz"}
|
https://math.stackexchange.com/questions/3145282/coin-flipping-problem-with-sequences-of-results
|
# Coin flipping problem with sequences of results
Two players (A and B) are playing a game. Player A randomly chooses a sequence of three possible coin flips (eg HTH, TTH, etc) from the possible 8 and then player B replies with his own (non-random) choice. Then they flip a coin until one of the two sequences appears. What is the probability of player B winning?
We can immediately tell that certain sequences are advantageous to player B, for example if player A chooses HHH, then the sequence THH is an automatic win unless the first three tosses come out Heads. The same for TTT. Are all the others fair for both?
• Each sequence is equally likely to appear. This is not about guessing the number of heads or tails, right? In your example, why do you says that $THH$ is an automatic win? If the first toss is head, player $B$ does not win. Obviously, the win probability is not greater than $50$%. – Vasya Mar 12 at 16:40
• @Vasya the idea is you keep flipping until one or the other sequence appears, so with THH vs. HHH, THH will win 7/8 of the time, only losing when the first 3 flips are HHH. – Ned Mar 12 at 16:50
• Spoiler alert: See Penney's game. The table indicates which sequence is least bad for A, and the resulting winning probability for B (expressed as odds). – Brian Tung Mar 12 at 17:32
• Penney's Game is about choosing one sequence in response to another sequence. This question is about choosing 3 sequences in response to 3 sequences /and finding the probability of winning. – user558317 Mar 13 at 1:57
• Thank you all for your replies, I did not know this game had a name (Penney's game). I could see why some of the sequences win 7/8 of the time, but not why others were better (for example HHT being better than HTH). Knowing the name of the game I can research it, much appreciated! – Nikos Vlaseros Mar 15 at 16:18
|
2019-06-20 21:17:24
|
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.5193741917610168, "perplexity": 704.6395363102831}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.3, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2019-26/segments/1560627999273.79/warc/CC-MAIN-20190620210153-20190620232153-00311.warc.gz"}
|
https://www.scienceforums.net/topic/35738-electric-resistance/?tab=comments
|
electric resistance!!
Recommended Posts
well, I was doing some homework and I was asked why the headlights of a car go dim when the starter motor is used
somebody can explain me:rolleyes:??
Share on other sites
Sounds like a homework question, so I can't just tell you the answer. Are you familiar with basic electrical circuits? With Ohm's law? If so, try drawing out the equivalent circuit, both with the starter motor on; and with it off. Don't forget the internal resistance of the battery. What, with respect to the headlights, is the difference between these two?
Share on other sites
They are in parallel?
Share on other sites
They are in parallel?
Irrelevant. Think about where the power is coming from when starting vs. during normal operation.
Share on other sites
To hopefully clarify, what I think npts2020 means is to look at this intuitively and consider that the battery has a limited amount of power it can produce. With and without the starter, how is this power distributed? If you think about it you should be able to come up with the answer easily.
If you still have trouble, or need to explain this in a report, you can solve for this mathematically using ohms law per my suggestion; P = VI and V = IR and come to the same conclusion. Let V = 12V for the car battery. There is a resistance in the light; in the battery; and two resistances for the starter (on and off). I'll let you decide appropriate values for these (a google search might be helpful). So calculate the current and voltage (and therefore power) in the light when the starter is on, and when it is off.
Share on other sites
quite simply because the starter uses more juice than the battery and loom can supply at ~13.8V, its right at the very limits, thats why it`s never a good idea to keep trying to turn the engine over on fault.
Share on other sites
• 3 weeks later...
Vertical component of Lorentz force in a strong magnetic fields
Share on other sites
The power source (car battery) has internal impedance.
What this means to you is that when any load is placed on the battery its voltage drops.
The bigger the load the more it drops. A starter is a very significant load and thus the voltage drop is significant.
Anyway if you are running the starter the engine is not running and the alternator is not trying to hold the battery voltage up.
Even touching the brake lights will dim the headlights when the engine is not running just not as much as the starter.
Share on other sites
• 2 weeks later...
Similarly:
A car battery is not an ideal source, and cannot maintain a voltage of 12 V (or whatever the rating is) independent of the current drained.
Create an account
Register a new account
|
2021-05-12 08:54:27
|
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 1, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.6084141731262207, "perplexity": 895.3281095929138}, "config": {"markdown_headings": false, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2021-21/segments/1620243991685.16/warc/CC-MAIN-20210512070028-20210512100028-00625.warc.gz"}
|
https://www.astronomicalreturns.com/2019/04/the-skylab-stowaway.html
|
### The Skylab Stowaway
#### TL;DR
1. The Skylab stowaway (RIP Owen Garriott)
2. The only wristwatch I'd ever splurge on
3. Personal items I'd bring to Mars
#### Current events
Sad news - astronaut Owen Garriott passed away on Monday at the age of 88. Though I love NASA history, I'll admit I'd never heard of him until he died, but turns out he's a fascinating guy. Garriott was part of NASA's 4th class of astronauts, a special group nicknamed "The Scientists" (every NASA astronaut class has a nickname) because they were the first class chosen on the basis of academic research and experience (masters or PhD required), rather than military service as test pilots.
Garriott's first mission was Skylab 3 in 1973, the second mission to the US's first space station, along with Alan Bean and Jack Lousma. My favorite part was when Garriott pranked Mission Control - a sexy female voice inexplicably started speaking to Houston from the station, calling capsule communicator (capcom) Bob Crippen by his name and saying "The boys haven't had a home-cooked meal in so long I thought I'd bring one up." After describing the view from space, the voice then said "Oh oh. I have to cut off now. I think the boys are floating up here toward the command module and I'm not supposed to be talking to you." Turns out, Garriott had secretly recorded his wife's voice beforehand!
Garriott on a spacewalk during Skylab 3
Funny enough, Garriott himself was in charge of capcom for Neil Armstrong and Buzz Aldrin on Apollo 11. Guess that wasn't enough to stop him from messing with his own capcom guy! Garriott would fly again on Space Shuttle Columbia in 1983.
#### Today I learned
Even on the moon, astronauts need to keep track of time, so NASA needed a chronograph wristwatch that could survive the harsh environment of space. They considered watches from Longines-Wittnauer, Rolex, and Omega, and eventually picked the Omega Speedmaster. See the Wikipedia page for the intense performance criteria they were evaluated on
Buzz Aldrin wearing his Speedmaster on Apollo 11
Although Armstrong was first on the moon, his Speedmaster didn't make it to the lunar surface because the Lunar Module's electronic clock malfunctioned, so he left his watch on board as a backup before stepping off.
I'm not really a watch guy, I feel a $10 plastic watch tells time just fine, but if there's one watch I'd one day splurge on, it's the gold Apollo 50th anniversary limited edition of the Omega Speedmaster. It'd only set me back about$30,000!
Just... WOW
#### This week in space history
Apollo 16 was the fifth and penultimate manned lunar landing, launching from Cape Canaveral on April 16, 1972. John Young and Charlie Duke spent three days on the moon while Ken Mattingly orbited above. Their landing site, the Descartes Formation, was chosen because it was geologically older than the prior landing sites and was thought to be volcanic in origin (a hypothesis the mission disproved)
On the Apollo missions, the astronauts were allowed to carry a small "Personal Preference Kit" with a few personal items to bring to the moon. Duke left a photo of his family, something I found quite touching
If I were the first man on Mars, I think these would be the items I'd leave behind: a photo of my family, my burnt orange Texas Longhorns golf ball, my San Antonio Spurs hat, a Chinese red envelope, and the paper rocket to Mars I made in pre-K
My rocket to Mars, designed in 2000
|
2023-03-30 02:54:20
|
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.20672856271266937, "perplexity": 7521.247562293795}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2023-14/segments/1679296949093.14/warc/CC-MAIN-20230330004340-20230330034340-00049.warc.gz"}
|
http://tex.stackexchange.com/questions/102754/even-columns-in-document-with-long-bibliography
|
# Even columns in document with long bibliography
I am writing a document in two column format and the last two pages are devoted to the bibliography, which I generate with BibTeX. I need to finish the last page with the two columns having the same size. I am lost at that point.
I have tried to use \enlargethispage{-X cm} finding X by trial and error. The problem is that when using that command after the call to \bibliography{MyBib}, the outcome is not the expected one: bibliography items span along all the first column and there is no column adjustment. I found out that if I write the command before \bibliography{MyBib} everything works as expected, but the problem is that the bibliography takes two pages so I can't do that, the only possibility is writing \enlargethispage afterwards. Could you please help me on that?
I self-answer :-) I suceeded by manually editing the .bbl file. If you have a smarter solution please let me know. Thanks.
-
Welcome to TeX.sx! It would be very useful to know what document class you're using and how you get two column format. – egreg Mar 16 '13 at 11:05
Hi, thank you. I use \documentclass[conference]{IEEEtran} and inside the template IEEEtran I found these options \ExecuteOptions{letterpaper,10pt,twocolumn,oneside,final,journal} – Carlgar Mar 16 '13 at 11:13
Is the paper to be submitted? In this case I wouldn't bother if the submission instructions don't ask for balancing the columns in the last page. – egreg Mar 16 '13 at 11:15
It is a camera ready version, which is not going to be edited because it directly goes to a conference proceedings. The two columns issue is indicated by the conference guidelines. Up to the date I never had any problem in achieving that with \enlargethispage but this case is special for the rather long bibliography section. I am wondering if the only solution is not using bibtex, and instead of that, write each bibitem individually and then I write the command in the middle of them... – Carlgar Mar 16 '13 at 11:22
The IEEEtran class has a trick for doing what you want in a simple fashion; here's an example, where I used a large bib database available in TeX Live.
\documentclass[conference]{IEEEtran}
\usepackage[T1]{fontenc}
\usepackage{lipsum} % some mock text
% the following commands are just to avoid errors
\newcommand{\mkbibquote}[1]{#1''}
\newcommand{\hyphen}{\-}
%%%
\begin{document}
\title{Title}
\author{A. U. Thor}
\maketitle
\section{Section}
\lipsum[1-3]
\nocite{*}
\bibliographystyle{plain}
% this will issue a column break just before reference 68
\IEEEtriggeratref{68}
\bibliography{biblatex-examples}
\end{document}
A couple of attempts gave a quite good result. The columns are not perfectly balanced, but doing it would split a reference.
Alternatively you could try
\bibliographystyle{plain}
\IEEEtriggercmd{\enlargethispage{-3in}}
\IEEEtriggeratref{60}
\bibliography{biblatex-examples}
which gives the following "balancing"
The \IEEEtriggeratref command places, by default, a column break before the specified bib entry; with \IEEEtriggercmd you can change the default command.
-
Brilliant, that is definitely what I was looking for, I wasn't aware of those commands in IEEEtran. Thanks so much. – Carlgar Mar 16 '13 at 12:18
Happy to have helped. Please, have a look at the site's faq – egreg Mar 16 '13 at 13:43
@Carlgar: you might consider to mark this answer as solution. – lmsasu Nov 13 '13 at 16:36
|
2014-10-30 17:06:14
|
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 1, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.8705942034721375, "perplexity": 1155.8311638632638}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.3, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2014-42/segments/1414637898611.54/warc/CC-MAIN-20141030025818-00126-ip-10-16-133-185.ec2.internal.warc.gz"}
|
http://www.gradesaver.com/textbooks/math/prealgebra/prealgebra-7th-edition/chapter-1-section-1-8-introduction-to-variables-algebraic-expressions-and-equations-exercise-set-page-81/17
|
## Prealgebra (7th Edition)
Published by Pearson
# Chapter 1 - Section 1.8 - Introduction to Variables, Algebraic Expressions, and Equations - Exercise Set: 17
94
#### Work Step by Step
2$\times$(2)$\times$5^2 - 6 = 4$\times$ 25 - 6 = 100 - 6 = 94
After you claim an answer you’ll have 24 hours to send in a draft. An editor will review the submission and either publish your submission or provide feedback.
|
2018-04-23 00:44:50
|
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.6980148553848267, "perplexity": 2212.128606639793}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 5, "enable": false}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2018-17/segments/1524125945668.34/warc/CC-MAIN-20180422232447-20180423012447-00235.warc.gz"}
|
http://myelectrical.com/notes/entryid/103/myelectrical-cable-sizing-tool-upgrade
|
# myElectrical - Cable Sizing Tool Upgrade
By on December 1st, 2011
Our IEE cable sizing was wrote a few years ago and had become rough around the edges. I thought it was time to give the tool a service. Unfortunately when I looked under the hood I found cracked cylinders, broken bell ends and worn cylinders. Rather than a quick service I had no choice be to do a major rewrite on the software. The only thing I didn't touch is the chassis [database], which while suffering from patches of rust, was still usable.
In rewriting the software good things have happened. There has been a large increase in performance [no more very long waits, with frequent postbacks] and I have put in a couple of enhancements. The main things users will notice are:
• things should be a lot quicker. The number of postbacks has been minimized (unfortunately a few are required to retrieve cable configuration data). Calculation of the cable size itself has been improved to make the processing more efficient.
• everything is all on one page. A slicker user interface with no more switching between tabs.
• you now have quick access to the underlying data. Click any of the 'i' buttons on the form and the relevant data table should pop up.
Of all the software tools on the site, the cable sizing one is the most complex. A lot of the complexity derives from strictly following the Wiring Regulations, which while designed for humans to work through, are not necessarily software friendly. Hopefully the tool should be working well, but if you do come across and bugs or have any suggestions, please let me know.
More interesting Notes:
Steven has over twenty five years experience working on some of the largest construction projects. He has a deep technical understanding of electrical engineering and is keen to share this knowledge. About the author
#### View 2 Comments (old system)
1. skalooba76 says:
12/7/2011 4:49 AM
hi,
i noticed when i try to size cable for small load, the software keep giving the wrong size, i.e for 12 Amp load, 380V, protected by 16 Amp, the cable calculation software will give 35mm2 which is wrong.
thanks for all the help
niki
• Steven says:
12/7/2011 5:25 AM
It may be the fault level. In calculating the size the following happens:
Cable is calculated on current capacity
Voltage drop is calculated (and cable size increased if necessary)
Fault level withstand is calculated (and cable size increased if necessary)
I have just tried your scenario (with XLPE cable) and at 25kA fault, 25m2 was required, but at 1kA fault only 1mm2.
Comments are closed for this post:
• have a question or need help, please use our Questions Section
• spotted an error or have additional info that you think should be in this post, feel free to Contact Us
## Latest Questions:
### Most Popular Notes:
Renewable and Efficient Electric Power ...
Gilbert M. Masters
Hardcover - 712 pages
$101.50 Solar Electricity Handbook - 2015 Edition: ... Michael Boxwell Paperback - 204 pages$17.09
Electric Power Distribution Engineering, ...
Turan Gonen
Hardcover - 1061 pages
$104.97 Energy Systems Engineering: Evaluation and ... Francis Vanek, Louis ... Hardcover - 672 pages$51.30
Power System Relaying
Stanley H. Horowitz, ...
Hardcover - 398 pages
$122.30 Renewable Energy in Power Systems Leon Freris, David ... Hardcover - 300 pages$67.20
Power Systems and Renewable Energy: Design, ...
Gary D. Price
Paperback - 192 pages
$59.95 Photovoltaic Systems Engineering, Third ... Roger Messenger, Amir ... Hardcover - 528 pages$103.66
Solar Engineering of Thermal Processes
John A. Duffie, ...
Hardcover - 936 pages
$108.50 Design of Smart Power Grid Renewable Energy ... Ali Keyhani Hardcover - 592 pages$142.06
Power Electronics and Renewable Energy ...
Hardcover - 1607 pages
$399.00 Photovoltaic Design and Installation For ... Ryan Mayfield Paperback - 384 pages$16.99
Renewable and Efficient Electric Power ...
Gilbert M. Masters
Hardcover - 680 pages
Solar PV Engineering and Installation: ...
Sean White
Paperback - 248 pages
$37.27 Photovoltaics: Design and Installation ... Solar Energy ... Paperback - 336 pages$38.12
Solar Energy Engineering, Second Edition: ...
Soteris A. Kalogirou
Hardcover - 840 pages
$99.96 Electric Power Substations Engineering, ... Hardcover - 536 pages$149.95
Principles of Sustainable Energy Systems, ...
Frank Kreith, Susan ...
Hardcover - 790 pages
$114.16 Wind Turbine Operation in Electric Power ... Zbigniew Lubosny Paperback - 262 pages$209.00
Electric Energy: An Introduction, Third ...
Mohamed A. El-Sharkawi
Hardcover - 606 pages
$101.41 Submarine Power Cables: Design, ... Thomas Worzyk Hardcover - 296 pages$189.00
Power Conversion and Control of Wind Energy ...
Bin Wu, Yongqiang ...
Hardcover - 480 pages
$86.35 Alternative Energy Systems and Applications B. K. Hodge Paperback - 432 pages Smart Power Grids 2011 (Power Systems) Hardcover - 696 pages$299.00
Large-Scale Solar Power Systems: ...
Dr Peter Gevorkian
Paperback - 400 pages
$72.74 Electrical Power System Essentials Pieter Schavemaker, ... Hardcover - 340 pages$72.95
Convex Optimization of Power Systems
Hardcover - 209 pages
$95.00 The Guide to Photovoltaic System ... Gregory W Fletcher Paperback - 384 pages$81.03
Electric Power Systems
B. M. Weedy, B. J. ...
Hardcover - 512 pages
\$72.00
## Have some knowledge to share
If you have some expert knowledge or experience, why not consider sharing this with our community.
By writing an electrical note, you will be educating our users and at the same time promoting your expertise within the engineering community.
To get started and understand our policy, you can read our How to Write an Electrical Note
|
2016-02-12 12:00:38
|
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.31356075406074524, "perplexity": 6112.836069918219}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2016-07/segments/1454701163729.14/warc/CC-MAIN-20160205193923-00195-ip-10-236-182-209.ec2.internal.warc.gz"}
|
https://www.expii.com/t/extension-integrand-divergence-test-for-improper-integrals-373
|
Expii
# Extension: Integrand Divergence Test for Improper Integrals - Expii
On an unbounded interval, if the values of the integrand (a function) approach a nonzero number, then the improper integral doesn't exist.
|
2022-01-22 02:13:18
|
{"extraction_info": {"found_math": false, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.9995384216308594, "perplexity": 770.4982622092574}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2022-05/segments/1642320303729.69/warc/CC-MAIN-20220122012907-20220122042907-00540.warc.gz"}
|
https://codereview.stackexchange.com/questions/77648/copy-arbitrary-number-of-bits-at-arbitrary-offset-from-buffer-to-another-buffer
|
# Copy arbitrary number of bits at arbitrary offset from buffer to another buffer
I have just written a function copy_lowbits_off to copy any number of bits (not bytes) from a source buffer to a destination buffer. The function also supports arbitrary offset (expressed in bits) in the source and the destination buffer.
Could you have a look at it and let me know if anything can be improved in style, design or best practices? Specifically, I find my code too complicated for what it does and I am rather unhappy by the naming of functions and variables.
I compile in C++11 but I don't use much of the C++ features here.
#define BITMASKU8(x) ((1U << (x)) - 1)
/**
* Select low_sel bits from src starting from the lower bit and
* copy the selected bits to dst at offset msb_off, starting from the most
* significant bit.
* The caller must ensure that low_sel + msb_off <= 8
*
* @param dst: Destination byte
* @param src: Source byte
* @param low_sel: Number of low bits to select from src
* @param msb_off: Offset from msb in dst
*/
void offset_bitcpy(std::uint8_t& dst, const std::uint8_t src,
std::uint8_t low_sel, std::uint8_t msb_off) {
const std::uint8_t sel_src = src & BITMASKU8(low_sel);
std::uint8_t shift = 8 - (msb_off + low_sel);
const std::uint8_t shift_src = sel_src << shift;
dst |= shift_src;
}
/**
* Copy an arbitrary number of *bits* from an arbitrary position (expressed
* in bits) in the src buffer, to an abitrary position (expressed in *bits*) in
* the dst buffer.
* The caller must ensure that dst_offbits < 8 && src_offbits < 8. If you need
* to have a bit offset > 8, set dst += bit offset / 8 and
* dst_offbits = bit offset % 8 (respectively for src). low_sel may be >= 8.
*
* @param dst: Destination buffer
* @param src: Source buffer
* @param low_sel: Number of bits to copy from src to dst
* @params src_offbits: [0,7] Offset in the first byte of the source buffer
* @params dst_offbits: [0,7] Offset in the second byte of the destination
* buffer
*/
void copy_lowbits_off(std::uint8_t* dst, const std::uint8_t* src,
unsigned low_sel, std::uint8_t dst_offbits, std::uint8_t src_offbits) {
while(low_sel != 0) {
// Number of bits to select from the first byte of src, and write
// to the first byte of dst
const std::uint8_t sel_byte = std::min(low_sel, 8U - dst_offbits);
const std::uint8_t off_src = *src >> src_offbits;
offset_bitcpy(*dst, off_src, sel_byte, dst_offbits);
// Update dst offsets
const std::uint8_t add_dst_off = sel_byte + dst_offbits;
// Update src offsets
src_offbits += sel_byte;
src += src_offbits / 8;
src_offbits = src_offbits % 8;
low_sel -= sel_byte;
}
}
• BITMASKU8 should preferably be a function. Such a short function would certainly to be inlined by the compiler, producing the same results as the macro you currently have. If you can't afford the chance of overhead, or your compiler is not capable of inlining, then you should at least #undef the macro name at the end of the file to clear the namespace.
• In offset_bitcpy(), I see that you've nicely marked the variables that are initialized only once with const, this is a good practice, even for small functions/blocks, you never know how big they might grow in the future, after a few refactorings. But you did forget to mark std::uint8_t shift as const too, since it is never modified after being set.
• Decide how you are going to handle invalid inputs. The only thing that can break your code is an ill-formed input in copy_lowbits_off(), such as a null pointer or a zero count. Decide which course of action should be taken in such cases. Just doing nothing is hardly a good choice. At least assert to facilitate debugging, or throw and exception if the caller should be responsible for handling the error.
• Overall, I find your code too tightly packed. Blank lines before each comment inside copy_lowbits_off() would visually separate stuff in a more natural way, IMHO.
|
2019-12-10 03:37:00
|
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 1, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.21192671358585358, "perplexity": 7511.702785909677}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2019-51/segments/1575540525781.64/warc/CC-MAIN-20191210013645-20191210041645-00442.warc.gz"}
|
https://en.wikibooks.org/wiki/Topology/Free_group_and_presentation_of_a_group
|
# Topology/Free group and presentation of a group
## Free monoid spanned by a set
Let ${\displaystyle V}$ be a vector space and ${\displaystyle v_{1},\ldots ,v_{n}}$ be a basis of ${\displaystyle V}$. Given any vector space ${\displaystyle W}$ and any elements ${\displaystyle w_{1},\ldots ,w_{n}\in W}$, there is a linear transformation ${\displaystyle \varphi :V\rightarrow W}$ such that ${\displaystyle \forall i\in \{1,\ldots ,n\},\,\varphi (v_{i})=w_{i}}$. One could say that this happens because the elements ${\displaystyle v_{1},\ldots ,v_{n}}$ of a basis are not "related" to each other (formally, they are linearly independent). Indeed, if, for example, we had the relation ${\displaystyle v_{1}=\lambda v_{2}}$ for some scalar ${\displaystyle \lambda }$ (and then ${\displaystyle v_{1},\ldots ,v_{n}}$ wasn't linearly independent), then the linear transformation ${\displaystyle \varphi }$ could not exist.
Let us consider a similar problem with groups: given a group ${\displaystyle G}$ spanned by a set ${\displaystyle X=\{x_{i}:i\in I\}\subseteq G}$ and given any group ${\displaystyle H}$ and any set ${\displaystyle Y=\{y_{i}:i\in I\}\subseteq H}$, does there always exist a group morphism ${\displaystyle \varphi :G\rightarrow H}$ such that ${\displaystyle \forall i\in I,\,\varphi (x_{i})=y_{i}}$? The answer is no. For example, consider the group ${\displaystyle G=\mathbb {Z} _{n}=\mathbb {Z} /n\mathbb {Z} }$ which is spanned by the set ${\displaystyle X=\{1\}}$, the group ${\displaystyle H=\mathbb {R} }$ (with the adition operation) and the set ${\displaystyle Y=\{2\}}$. If there exists a group morphism ${\displaystyle \varphi :\mathbb {Z} _{n}\rightarrow \mathbb {R} }$ such that ${\displaystyle \varphi (1)=2}$, then ${\displaystyle n2=n\varphi (1)=\varphi (n\,1)=\varphi (0)=0}$, which is impossible. But if instead we had choose ${\displaystyle G=\mathbb {Z} }$, then such a group morphism does exist and it would be given by ${\displaystyle \varphi (t)=2t}$. Indeed, given any group ${\displaystyle H}$ and any ${\displaystyle y\in H}$, we have the group morphism ${\displaystyle \varphi :\mathbb {Z} \rightarrow H}$ defined by ${\displaystyle \varphi (t)=y^{t}}$ (in multiplicative notation) that verifies ${\displaystyle \varphi (1)=y}$. In a way, we can think that this happens because the elements of the set ${\displaystyle X=\{1\}\subseteq \mathbb {Z} }$ (that spans ${\displaystyle \mathbb {Z} }$) don't verify relations like ${\displaystyle nx=1}$ (like ${\displaystyle \mathbb {Z} _{n}}$) or ${\displaystyle xy=yx}$. So, it seems that ${\displaystyle \mathbb {Z} }$ is a group more "free" that ${\displaystyle \mathbb {Z} _{n}}$.
Our goal in this section will be, given a set ${\displaystyle X}$, build a group spanned by the set ${\displaystyle X}$ such that it will be the most "free" possible, in the sense that it doesn't have to obey relations like ${\displaystyle x^{n}=1}$ or ${\displaystyle xy=yx}$. To do so, we begin by constructing a "free" monoid (in the same sense). Informally, this monoid will be the monoid of the words written with the letters of the alphabet ${\displaystyle X}$, where the identity will be the word with no letters (the "empty word"), and the binary operation of the monoid will be concatenation of words. The notation ${\displaystyle x_{1}\ldots x_{n}}$ that we will use for the element of this monoid meets this idea that the elements of this monoid are the words ${\displaystyle x_{1}\ldots x_{n}}$ where ${\displaystyle x_{1},\ldots ,x_{n}}$ are letters of the alphabet ${\displaystyle X}$. Here is the definition of this monoid.
Definition Let ${\displaystyle X}$ be a set.
1. We denote the ${\displaystyle n}$-tuples ${\displaystyle (x_{1},\ldots ,x_{n})}$ with ${\displaystyle x_{i}\in X}$ and ${\displaystyle n\in \mathbb {N} }$ by ${\displaystyle x_{1}\ldots x_{n}}$.
2. We denote ${\displaystyle ()}$, that is ${\displaystyle (x_{1},\ldots ,x_{n})}$ with ${\displaystyle n=0}$, by ${\displaystyle 1}$.
3. We denote by ${\displaystyle FM(X)}$ the set ${\displaystyle \{x_{1}\ldots x_{n}:n\in \mathbb {N} ,x_{i}\in X\}}$.
4. We define in ${\displaystyle FM(X)}$ the concatenation operation ${\displaystyle *}$ by ${\displaystyle x_{1}\ldots x_{m}*y_{1}\ldots y_{n}=x_{1}\ldots x_{m}y_{1}\ldots y_{n}}$.
Next we prove that this monoid is indeed a monoid. It's an easy to prove result, we need to show associativity of ${\displaystyle *}$ and that ${\displaystyle 1*x=x*1=x}$.
Proposition ${\displaystyle (FM(X),*)}$ is a monoid with identity ${\displaystyle 1}$.
Proof The operation ${\displaystyle *}$ is associative because, given any ${\displaystyle x_{1}\ldots x_{m},y_{1}\ldots y_{n},z_{1}\ldots z_{p}\in FM(X)}$, we have
${\displaystyle (x_{1}\ldots x_{m}*y_{1}\ldots y_{n})*z_{1}\ldots z_{m}}$
${\displaystyle =x_{1}\ldots x_{m}y_{1}\ldots y_{n}*z_{1}\ldots z_{m}}$
${\displaystyle =x_{1}\ldots x_{m}y_{1}\ldots y_{n}z_{1}\ldots z_{m}}$
${\displaystyle =x_{1}\ldots x_{m}*(y_{1}\ldots y_{n}z_{1}\ldots z_{m})}$
${\displaystyle =x_{1}\ldots x_{m}(y_{1}\ldots y_{n}*z_{1}\ldots z_{m})}$.
It's obvious that ${\displaystyle (FM(X),*)}$ has the identity ${\displaystyle 1}$ as ${\displaystyle 1*x_{1}\ldots x_{n}=x_{1}\ldots x_{n}}$ by the definition of ${\displaystyle 1}$ and ${\displaystyle *}$. ${\displaystyle \square }$
Following the idea that the monoid ${\displaystyle (FM(X),*)}$ is the most "free" monoid spanned by ${\displaystyle X}$, we will call it the free monoid spanned by ${\displaystyle X}$.
Definition Let ${\displaystyle X}$ be a set. We denote the free monoid spanned by ${\displaystyle X}$ by ${\displaystyle (FM(X),*)}$.
Examples
1. Let ${\displaystyle X=\{x\}}$. Then ${\displaystyle FM(X)=\{1,x,xx,xxx,\ldots \}}$ and, for example, ${\displaystyle xx*xxx=xxxxx}$.
2. Let ${\displaystyle X=\{x,y,z\}}$. Then ${\displaystyle 1,x,y,z,xxx,yxz,xyzzz\in FM(X)}$ and, for example, ${\displaystyle xxx*yxz=xxxyxz}$.
## Free group spanned by a set
Now let us construct the more "free" group spanned by a set ${\displaystyle X}$. Informally, what we will do is insert in the monoid ${\displaystyle FM(X)}$ the inverse elements that are missing in it for it to be a group. In a more precise way, we will have a set ${\displaystyle {\bar {X}}}$ equipotent to ${\displaystyle X}$, choose a bijection from ${\displaystyle X}$ to ${\displaystyle {\bar {X}}}$ and in this way achieve a "association" between the elements of ${\displaystyle X}$ and the elements of ${\displaystyle {\bar {X}}}$. Then we face ${\displaystyle x_{1}\ldots x_{n}\in FM(X)}$ (with ${\displaystyle x_{1},\ldots ,x_{n}\in X}$) as having the inverse element ${\displaystyle {\overline {x_{n}}}\ldots {\overline {x_{1}}}}$ (with ${\displaystyle {\overline {x_{1}}},\ldots ,{\overline {x_{n}}}\in X}$), where ${\displaystyle x_{1},\ldots ,x_{n}\in X}$ is is associated with ${\displaystyle {\overline {x_{1}}}\ldots {\overline {x_{n}}}}$, respectively. Let us note that the order of the elements in ${\displaystyle {\overline {x_{n}}}\ldots {\overline {x_{1}}}}$ is "reversed" because the inverse of the product ${\displaystyle x_{1}\ldots x_{n}=x_{1}*\cdots *x_{n}}$ must be ${\displaystyle x_{n}^{-1}*\cdots *x_{1}^{-1}}$, and the ${\displaystyle x_{1}^{-1},\ldots ,x_{n}^{-1}}$ are, respectively, ${\displaystyle {\overline {x_{1}}},\ldots ,{\overline {x_{n}}}}$. The way we do that ${\displaystyle {\overline {x_{n}}}\ldots {\overline {x_{1}}}}$ be the inverse of ${\displaystyle x_{1}\ldots x_{n}}$ is to take a congruence relation ${\displaystyle R}$ that identifies ${\displaystyle x_{1}\ldots x_{n}{\overline {x_{n}}}\ldots {\overline {x_{1}}}}$ with ${\displaystyle 1}$, and pass to the quotient ${\displaystyle FM(X\cup {\bar {X}})}$ by this relation (defining then, in a natural way, the binary operation of the group, ${\displaystyle [u]_{R}\star [v]_{R}=[u*v]_{R}}$). By taking the quotient, we are formalizing the intuitive idea of identifying ${\displaystyle x_{1}\ldots x_{n}{\overline {x_{n}}}\ldots {\overline {x_{1}}}}$ with ${\displaystyle 1}$, because in the quotient we have the equality ${\displaystyle [x_{1}\ldots x_{n}{\overline {x_{n}}}\ldots {\overline {x_{1}}}]_{R}=[1]_{R}}$. Let us give the formal definition.
Definition Let ${\displaystyle X}$ be a set. Let us take another set ${\displaystyle {\overline {X}}}$ equipotent to ${\displaystyle X}$ and disjoint from ${\displaystyle X}$ and let ${\displaystyle f:X\rightarrow {\overline {X}}}$ be a bijective application.
1. For each ${\displaystyle x\in X}$ let us denote ${\displaystyle f(x)}$ by ${\displaystyle {\bar {x}}}$, for each ${\displaystyle x\in {\overline {X}}}$ let us denote ${\displaystyle f^{-1}(x)}$ by ${\displaystyle {\bar {x}}}$ and for each ${\displaystyle x_{1}\ldots x_{n}\in FM(X\cup {\overline {X}})}$ let us denote ${\displaystyle {\overline {x_{n}}}\ldots {\overline {x_{1}}}}$ by ${\displaystyle {\overline {x_{1}\ldots x_{n}}}}$.
2. Let ${\displaystyle R}$ be the congruence relation of ${\displaystyle FM(X\cup {\overline {X}})}$ spanned by ${\displaystyle G=\{(u*{\bar {u}},1):u\in X\cup {\overline {X}}\}}$, this is, ${\displaystyle R}$ is the intersection of all the congruence relations in ${\displaystyle FM(X\cup {\overline {X}})}$ wich have ${\displaystyle G}$ as a subset. We denote the quotient set ${\displaystyle FM(X\cup {\overline {X}})/R}$ by ${\displaystyle FG(X)}$.
Frequently, abusing the notation, we represent an element ${\displaystyle [u]_{R}\in FG(X)}$ simply by ${\displaystyle u}$.
Because the operation ${\displaystyle [u]_{R}\star [v]_{R}=[u*v]_{R}}$ that we want to define in ${\displaystyle FM(X\cup {\bar {X}})/R}$ is defined using particular represententes ${\displaystyle u}$ and ${\displaystyle v}$ of the equivalence classes ${\displaystyle [u]_{R}}$ and ${\displaystyle [v]_{R}}$, a first precaution is to verify that the definition does not depend on the chosen representatives. It's an easy verification.
Lemma Let ${\displaystyle X}$ be a set. It is well defined in ${\displaystyle FG(X)}$ the binary operation ${\displaystyle \star }$ by ${\displaystyle [u]_{R}\star [v]_{R}=[u_{r}*v_{r}]_{R}}$ (where ${\displaystyle R}$ is the congruence relation of the previous definition).
Proof Let ${\displaystyle u,u',v,v'\in FM(X)}$ be any elements such that ${\displaystyle [u]_{R}=[u']_{R}}$ and ${\displaystyle [v]_{R}=[v']_{R}}$, this is, ${\displaystyle uRu'}$ and ${\displaystyle vRv'}$. Because ${\displaystyle R}$ is a congruence relation in ${\displaystyle FM(X\cup {\bar {X}})}$, we have ${\displaystyle u*vRu'*v'}$, this is, ${\displaystyle [u*v]_{R}=[u'*v']_{R}}$. ${\displaystyle \square }$
Because the definition is valid, we present it.
Definition Let ${\displaystyle X}$ be a set. We define in ${\displaystyle FG(X)}$ the binary operation ${\displaystyle \star }$ by ${\displaystyle [u]_{R}\star [v]_{R}=[u_{r}*v_{r}]_{R}}$.
Finally, we verify that the group that we constructed is indeed a group.
Proposition Let ${\displaystyle X}$ be a set. ${\displaystyle (FG(X),\star )}$ is a group with identity ${\displaystyle [1]_{R}}$ and where ${\displaystyle \forall [u]_{R}\in FG(X),\,{[u]_{R}}^{-1}=[{\bar {u}}]_{R}}$.
Proof
1. ${\displaystyle (FG(X),\star )}$ is associative because ${\displaystyle \forall [u]_{R},[v]_{R},[w]_{R}\in FG(X),\,([u]_{R}\star [v]_{R})\star [w]_{R}=([u*v]_{R})\star [w]_{R}=[([u]_{R}*[v]_{R})*w]_{R}=}$ ${\displaystyle [u*(v*w)]_{R}=[u]_{R}\star [v*w]_{R}=[u]_{R}\star ([v]_{R}\star [w]_{R}).}$
2. Let us see that ${\displaystyle [1]_{R}}$ is the identity ${\displaystyle (FG(X),\star )}$. Let ${\displaystyle [u]_{R}\in FG(X)}$ be any element. We have ${\displaystyle [u]_{R}\star [1]_{R}=[u*1]_{R}=[u]_{R}}$ and, in the same way, ${\displaystyle [1]_{R}\star [u]_{R}=[u]_{R}}$.
3. Let ${\displaystyle [u]_{R}\in FG(X)}$ be any element and let us see that ${\displaystyle [u]_{R}\star [{\bar {u}}]_{R}=[1]_{R}}$. We have ${\displaystyle [u]_{R}\star [{\bar {u}}]_{R}=[u*{\bar {u}}]_{R}}$ and, by definition of ${\displaystyle R}$, ${\displaystyle u*{\bar {u}}R1}$, this is, ${\displaystyle [u*{\bar {u}}]_{R}=[1]_{R}}$, therefore ${\displaystyle [u]_{R}\star [{\bar {u}}]_{R}=[1]_{R}}$ and, in the same way, ${\displaystyle [{\bar {u}}]_{R}\star [u]_{R}=[1]_{R}}$. ${\displaystyle \square }$
In the same way that we did with the free monoid, we will call free group spanned by the set ${\displaystyle X}$ to the more "free" group spanned by this set.
Definition Let ${\displaystyle X}$ be a set. We call free group spanned by ${\displaystyle X}$ to ${\displaystyle (FG(X),\star )}$.
Example Let ${\displaystyle X=\{x\}}$. Let us choose any set ${\displaystyle {\bar {X}}=\{y\}}$ disjoint (and equipotent) of ${\displaystyle X}$. Let ${\displaystyle f:X\rightarrow {\bar {X}}}$ be any (in fact, the only) bijective application of ${\displaystyle X}$ in ${\displaystyle {\bar {X}}}$. Then we denote ${\displaystyle f(x)=y}$ by ${\displaystyle {\bar {x}}}$ and we denote ${\displaystyle f^{-1}(y)=x}$ by ${\displaystyle {\bar {y}}}$. We regard ${\displaystyle x}$ and ${\displaystyle y}$ as inverse elements. Let ${\displaystyle R}$ be the congruence relation of ${\displaystyle FM(X\cup {\bar {X}})}$ spanned by ${\displaystyle G=\{(1,1),(x{\bar {x}},1),(xx{\bar {x}}{\bar {x}},1),\ldots \}}$. ${\displaystyle FG(X)=FM(\{x,{\bar {x}}\})/R}$ is the set of all "words" written in the alphabet ${\displaystyle \{[x]_{R},[{\bar {x}}]_{R}\}}$. For example, ${\displaystyle [1]_{R},[x]_{R},[{\bar {x}}]_{R},[xx{\bar {x}}xx]_{R}\in FG(X)}$.
We have ${\displaystyle G\subseteq R}$ and, for example, ${\displaystyle (xx{\bar {x}},1)\in R}$, because ${\displaystyle (x{\bar {x}},1)\in G\subseteq R}$ (therefore ${\displaystyle x{\bar {x}}R1}$) and because ${\displaystyle R}$ is a congruence relation, we can "multiply" both "members" of the relation ${\displaystyle x{\bar {x}}R1}$ by ${\displaystyle x}$ and obtain ${\displaystyle xx{\bar {x}}Rx}$. We see ${\displaystyle x{\bar {x}}R1}$ as meaning that in ${\displaystyle FG(X)}$ We have ${\displaystyle xx{\bar {x}}=x}$ (more precisely, ${\displaystyle [xx{\bar {x}}]_{R}=[x]_{R}}$), and we think in this equality as being the result of one ${\displaystyle x}$ "cut out" with ${\displaystyle {\bar {x}}}$ in ${\displaystyle xx{\bar {x}}}$.
Given ${\displaystyle u\in FM(X\cup {\bar {X}})}$, let us denote the exact number of times that the "letter" ${\displaystyle x}$ appears in ${\displaystyle u}$ by ${\displaystyle |u|_{x}}$ and let us denote the exact number of times the "letter" ${\displaystyle {\bar {x}}}$ appears in ${\displaystyle u}$ by ${\displaystyle |u|_{\bar {x}}}$. Then "cutting" ${\displaystyle x}$'s with ${\displaystyle {\bar {x}}}$'s it remains a reduced word word with ${\displaystyle |u|_{x}-|u|_{\bar {x}}}$ times the letter ${\displaystyle x}$ (if ${\displaystyle |u|_{x}-|u|_{\bar {x}}<0}$, let us us consider that there aren't any letters ${\displaystyle x}$ and remains ${\displaystyle -(|u|_{x}-|u|_{\bar {x}})}$ times the letter ${\displaystyle {\bar {x}}}$). Let us denote ${\displaystyle |u|_{x}-|u|_{\bar {x}}}$ by ${\displaystyle |u|_{x-{\bar {x}}}}$. We have
1. ${\displaystyle [u]_{R}=[v]_{R}}$ if and only if ${\displaystyle |u|_{x-{\bar {x}}}=|v|_{x-{\bar {x}}}}$ and
2. ${\displaystyle \forall [u]_{R},[v]_{R}\in FG(X),\,|uv|_{x-{\bar {x}}}=|u|_{x-{\bar {x}}}+|v|_{x-{\bar {x}}}}$.
In this way, each element ${\displaystyle [u]_{R}\in FG(X)}$ is determined by the integer number ${\displaystyle |u|_{x-{\bar {x}}}}$ and the product ${\displaystyle \star }$ of two elements ${\displaystyle [u]_{R},[v]_{R}\in FG(X)}$ correspondent to the sum of they associated integers numbers ${\displaystyle |u|_{x-{\bar {x}}}}$ and ${\displaystyle |v|_{x-{\bar {x}}}}$. Therefore, it seems that the group ${\displaystyle (FG(X),\star )}$ is "similar" to ${\displaystyle (\mathbb {Z} ,+)}$. indeed ${\displaystyle (FG(X),\star )}$ is isomorph to ${\displaystyle (\mathbb {Z} ,+)}$ and the application ${\displaystyle |\cdot |_{x-{\bar {x}}}:FG(X)\rightarrow \mathbb {Z} }$ is a group isomorphism.
## Presentation of a group
Informally, it seems that ${\displaystyle \mathbb {Z} _{n}}$ is obtained from the "free" group ${\displaystyle \mathbb {Z} }$ imposing the relation ${\displaystyle nx=1}$. Let us try formalize this idea. We start with a set ${\displaystyle X}$ that spans a group ${\displaystyle G}$ that que want to create and a set of relations ${\displaystyle R}$ (such as ${\displaystyle x^{n}=1}$ or ${\displaystyle xy=yz}$) that the elements of ${\displaystyle G}$ must verify and we obtain a group ${\displaystyle G/R}$ spanned by ${\displaystyle G}$ and that verify the relations of ${\displaystyle R}$. More precisely, we write each relation ${\displaystyle u=v}$ in the form ${\displaystyle uv^{-1}=1}$ (for example, ${\displaystyle xy=yx}$ is written in the form ${\displaystyle xyx^{-1}y^{-1}=1}$) and we see each ${\displaystyle uv^{-1}}$ as a "word" of ${\displaystyle FG(X)}$. Because ${\displaystyle R}$ doesn't have to be a normal subgroup of ${\displaystyle G}$, we can not consider the quotient ${\displaystyle FG(X)/R}$, so we consider the quotient ${\displaystyle FG(X)/N}$ where ${\displaystyle N}$ is the normal subgroup of ${\displaystyle FG(X)}$ spanned by ${\displaystyle R}$. In ${\displaystyle G/N}$, we will have ${\displaystyle uv^{-1}N=1N}$, which we see as meaning that in ${\displaystyle G/N}$ the elements ${\displaystyle uv^{-1}}$ and ${\displaystyle 1}$ are the same. In this way, ${\displaystyle FG(X)/N}$ will verify all the relations that we want and will be spanned by ${\displaystyle X}$ (more precisely, by ${\displaystyle \{xN:x\in X\}}$). Let us formalize this idea.
Definition Let ${\displaystyle G}$ be a group. We call presentation of ${\displaystyle G}$, and denote by ${\displaystyle }$, to a ordered pair ${\displaystyle (X,R)}$ where ${\displaystyle X}$ is a set, ${\displaystyle R\subseteq FG(X)}$ and ${\displaystyle G\simeq FG(X)/N}$, where ${\displaystyle N}$ is the normal subgroup of ${\displaystyle FG(X)}$ spanned by ${\displaystyle R}$. Given a presentation ${\displaystyle }$, we call spanning set to ${\displaystyle X}$ and set of relations to ${\displaystyle R}$.
Let us see some examples of presentations of the free group ${\displaystyle FG(X)}$ and the groups ${\displaystyle \mathbb {Z} _{n}}$, ${\displaystyle \mathbb {Z} \oplus \mathbb {Z} }$, ${\displaystyle \mathbb {Z} _{m}\oplus \mathbb {Z} _{n}}$ and ${\displaystyle S_{3}}$. We also use the examples to present some common notation and to show that a presentation of a group does not need to be unique.
Examples
1. Let ${\displaystyle X}$ be a set. ${\displaystyle }$ is a presentation of ${\displaystyle FG(X)}$ because ${\displaystyle FG(X)\simeq FG(X)/\{1\}}$, where ${\displaystyle \{1\}}$ is the normal subgroup of ${\displaystyle FG(X)}$ spanned by ${\displaystyle \emptyset }$. In particular, ${\displaystyle <\{x\}:\emptyset >}$ is a presentation of ${\displaystyle (\mathbb {Z} ,+)\simeq FG(\{x\})}$, more commonly denoted by ${\displaystyle }$. Another presentation of ${\displaystyle (\mathbb {Z} ,+)}$ is ${\displaystyle <\{x,y\}:\{xy^{-1}\}>}$, more commonly denoted by ${\displaystyle }$. Informally, in the presentation ${\displaystyle }$ we insert a new element ${\displaystyle y}$ in the spanning set, but we impose the relation ${\displaystyle xy^{-1}=1}$, this is, ${\displaystyle x=y}$, which is the same as having not introduced the element ${\displaystyle y}$ and have stayed by the presentation ${\displaystyle }$.
2. Let ${\displaystyle X=\{x\}}$. ${\displaystyle }$ (where ${\displaystyle x^{n}=x\star \cdots \star x\in FG(X)}$ ${\displaystyle n}$ times) is a presentation of ${\displaystyle \mathbb {Z} _{n}}$. Indeed, the subgroup of ${\displaystyle FG(X)}$ spanned by ${\displaystyle \{x^{n}\}}$ is ${\displaystyle N=\{\ldots ,{\bar {x}}^{2n},{\bar {x}}^{n},1,x^{n},x^{2n},\ldots \}\simeq n\mathbb {Z} }$ and ${\displaystyle FG(X)\simeq \mathbb {Z} }$, therefore${\displaystyle \mathbb {Z} _{n}=\mathbb {Z} /n\mathbb {Z} \simeq FG(X)/N}$. Is more common to denote ${\displaystyle <\{x\}:\{x^{n}\}>}$ by ${\displaystyle }$.
3. Let ${\displaystyle X=\{x,y\}}$ (with ${\displaystyle x}$ and ${\displaystyle y}$ distinct) and ${\displaystyle R=\{xyx^{-1}y^{-1}\}}$. ${\displaystyle }$ is a presentation of ${\displaystyle \mathbb {Z} \oplus \mathbb {Z} }$. Informally, what we do is impose comutatibility in ${\displaystyle FG(X)}$, this is, ${\displaystyle xy=yx}$, this is, ${\displaystyle xyx^{-1}y^{-1}=1}$, obtaining a group isomorph to ${\displaystyle \mathbb {Z} \oplus \mathbb {Z} }$. It's more usually denote ${\displaystyle <\{x,y\}:\{xyx^{-1}y^{-1}\}>}$ by ${\displaystyle }$.
4. Let ${\displaystyle X=\{x,y\}}$ and ${\displaystyle R=\{xyx^{-1}y^{-1},x^{m},y^{n}\}}$. ${\displaystyle }$ is a presentation of ${\displaystyle \mathbb {Z} _{m}\times \mathbb {Z} _{n}}$. Informally, what we do is impose comutability in the same way as in the previous example, and we impose ${\displaystyle x^{m}=1}$ and ${\displaystyle x^{n}=1}$ to obtain ${\displaystyle \mathbb {Z} _{m}\times \mathbb {Z} _{n}}$ instead ${\displaystyle \mathbb {Z} \oplus \mathbb {Z} }$. It's more common denote${\displaystyle <\{x,y\}:\{xyx^{-1}y^{-1},x^{m},y^{n}\}>}$ by ${\displaystyle }$.
5. ${\displaystyle <\{a,b,c\}:\{aa,bb,cc,abac,cbab\}>}$, more commonly written ${\displaystyle }$, is a presentation of ${\displaystyle S_{3}}$, the group of the permutations of ${\displaystyle \{1,2,3\}}$ with the composition of applications. To verify this, one can verify that any group with presentation ${\displaystyle }$ as exactly six elements ${\displaystyle id}$, ${\displaystyle a}$, ${\displaystyle b}$, ${\displaystyle c}$, ${\displaystyle a}$, ${\displaystyle ab}$ and ${\displaystyle ac}$, and that the multiplication of this elements results in the following Cayley table that is equal to the Cayley table of ${\displaystyle S_{3}}$. Just to give an idea how this can be achived, a group with presentation ${\displaystyle }$ as exactly the elements ${\displaystyle id}$, ${\displaystyle a}$, ${\displaystyle b}$, ${\displaystyle c}$, ${\displaystyle a}$, ${\displaystyle ab}$ and ${\displaystyle ac}$ because none of this elements are the same (the relations ${\displaystyle a^{2}=b^{2}=c^{2}=abac=cbab=1}$ don't allow us to conclude that two of the elements are equal) and because "another" elements like ${\displaystyle bc}$ are actually one of the previous elements (for example, from ${\displaystyle cbab=id}$ we have ${\displaystyle cb=ab}$, and taking inverses of both members, we have ${\displaystyle b^{-1}c^{-1}=b^{-1}a^{-1}}$, which, using ${\displaystyle a^{2}=b^{2}=c^{2}=id}$, this is, ${\displaystyle a=a^{-1}}$, ${\displaystyle b=b^{-1}}$ and ${\displaystyle c=c^{-1}}$, results in ${\displaystyle bc=ba}$). Then, using the relations of the presentation, one can compute the Cayley table. For example, ${\displaystyle a(ab)=b}$ because we have the relation ${\displaystyle a^{2}=1}$. Another example: we have ${\displaystyle b(ac)=a}$ because we can multiply both members of the relation ${\displaystyle abac=id}$ by ${\displaystyle a}$ and then use ${\displaystyle a^{2}=id}$. One could have suspected of this presentation by taking ${\displaystyle a=(1\ 2)}$, ${\displaystyle b=(1\ 3)}$ and ${\displaystyle c=(2\ 3)}$ and then, trying to construct the Cayley table of ${\displaystyle S_{3}}$, found out that it was possible if one know that ${\displaystyle a^{2}=b^{2}=c^{2}=abac=cbab=1}$.
${\displaystyle \times }$ ${\displaystyle id}$ ${\displaystyle a}$ ${\displaystyle b}$ ${\displaystyle c}$ ${\displaystyle ab}$ ${\displaystyle ac}$
${\displaystyle id}$ ${\displaystyle id}$ ${\displaystyle a}$ ${\displaystyle b}$ ${\displaystyle c}$ ${\displaystyle ab}$ ${\displaystyle ac}$
${\displaystyle a}$ ${\displaystyle a}$ ${\displaystyle id}$ ${\displaystyle ab}$ ${\displaystyle ac}$ ${\displaystyle b}$ ${\displaystyle c}$
${\displaystyle b}$ ${\displaystyle b}$ ${\displaystyle ac}$ ${\displaystyle id}$ ${\displaystyle ab}$ ${\displaystyle c}$ ${\displaystyle a}$
${\displaystyle c}$ ${\displaystyle c}$ ${\displaystyle ab}$ ${\displaystyle ac}$ ${\displaystyle id}$ ${\displaystyle a}$ ${\displaystyle b}$
${\displaystyle ab}$ ${\displaystyle ab}$ ${\displaystyle c}$ ${\displaystyle a}$ ${\displaystyle b}$ ${\displaystyle ac}$ ${\displaystyle id}$
${\displaystyle ac}$ ${\displaystyle ac}$ ${\displaystyle b}$ ${\displaystyle c}$ ${\displaystyle a}$ ${\displaystyle id}$ ${\displaystyle ab}$
It's natural to ask if all groups have a presentation. The following theorem tell us that the answer is yes, and it give us a presentation.
Theorem Let ${\displaystyle (G,\times )}$ be a group.
1. The application ${\displaystyle \varphi :FG(G)\rightarrow G}$ defined by ${\displaystyle \varphi ([x_{1}]_{R}\star \cdots \star [x_{n}]_{R})=x_{1}\times \cdots \times x_{n}}$ (where ${\displaystyle x_{1},\ldots ,x_{n}\in G}$) is an epimorphism of groups.
2. ${\displaystyle }$ is a presentation of ${\displaystyle (G,\times )}$.
Proof
1. ${\displaystyle \varphi }$ is well defined because every element of ${\displaystyle FG(X)}$ as a unique representations in the form ${\displaystyle [x_{n}]\star \cdots [x_{n}]_{R}}$ with ${\displaystyle x_{1},\ldots ,x_{n}\in G}$, with the exception of ${\displaystyle [1]_{R}}$ appears several times in the representation, but that doesn't affect the value of ${\displaystyle x_{1}\times \cdots \times x_{n}}$ Let ${\displaystyle [x_{1}]_{R}\star \cdots \star [x_{m}]_{R},[y_{1}]_{R}\star \cdots \star [y_{n}]_{R}\in FG(X)}$ be any elements, where ${\displaystyle x_{1},\ldots ,x_{m},y_{1},\ldots ,y_{n}\in G}$. We have ${\displaystyle \varphi (([x_{1}]_{R}\star \cdots \star [x_{m}]_{R})\star ([y_{1}]_{R}\star \cdots \star [y_{n}]_{R}))=(x_{1}\times \cdots \times x_{m})\times (y_{1}\times \cdots \times y_{n})=}$ ${\displaystyle \varphi ([x_{1}]_{R}\star \cdots \star [x_{m}]_{R})\times \varphi ([y_{1}]_{R}\star \cdots \star [y_{n}]_{R})}$, therefore ${\displaystyle \varphi }$ is a morphism of groups. Because ${\displaystyle \forall x\in G,\,\varphi ([x]_{R})=x}$, then ${\displaystyle G}$ is a epimorphism of groups.
2. Using the first isomorphism theorem (for groups), we have ${\displaystyle FG(G)/{\textrm {ker}}\,\varphi \simeq \varphi (G)=G}$, therefore ${\displaystyle }$ is a presentation of ${\displaystyle (G,\times )}$. ${\displaystyle \square }$
The previous theorem, despite giving us a presentations of the group ${\displaystyle G}$, doesn't give us a "good" presentation, because the spanning set ${\displaystyle G}$ is usually much larger that other spanning sets, and the set of relations ${\displaystyle \mathrm {ker} \,\varphi }$ is also usually much larger then other sufficient sets of relations (it is even a normal subgroup of ${\displaystyle FG(G)}$, when it would be enough that it span an appropriated normal subgroup).
|
2016-10-26 06:06:45
|
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 481, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.9791499376296997, "perplexity": 82.01559639954449}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2016-44/segments/1476988720737.84/warc/CC-MAIN-20161020183840-00469-ip-10-171-6-4.ec2.internal.warc.gz"}
|
https://www.phidgets.com/docs21/1122_User_Guide
|
Notice: This page contains information for the legacy Phidget21 Library. Phidget21 does not support VINT Phidgets, and will not support any new Phidgets. Phidget21 will be maintained until 2020. We recommend that new projects be developed against the Phidget22 Library. Click on the button in the menu bar to go to the Phidget22 version of this page.
# 1122 User Guide
## Getting Started
### Checking the Contents
You should have received: A 30A current sensor AC/DC A sensor cable In order to test your new Phidget you will also need: A PhidgetInterfaceKit 8/8/8 or PhidgetTextLCD A USB cable
### Connecting the Pieces
Connect the 30 Amp Current Sensor AC/DC to the Analog Input 6 on the PhidgetInterfaceKit 8/8/8 using the sensor cable. To measure alternating current, you can use the DC port to measure a peak-to-peak AC signal, or you can use the AC port to measure the RMS value. To measure direct current, either the AC port or the DC port will work, since RMS calculations will have no effect on a DC signal. Connect your power source to the terminal block. Connect the PhidgetInterfaceKit to your PC using the USB cable.
### Testing Using Windows 2000 / XP / Vista / 7
Make sure you have the current version of the Phidget library installed on your PC. If you don't, follow these steps:
3. You should see the icon on the right hand corner of the Task Bar.
### Running Phidgets Sample Program
Double clicking on the icon loads the Phidget Control Panel; we will use this program to ensure that your new Phidget works properly.
The source code for the InterfaceKit-full sample program can be found in the quick downloads section on the C# Language Page. If you'd like to see examples in other languages, you can visit our Languages page.
#### Updating Device Firmware
If an entry in this list is red, it means the firmware for that device is out of date. Double click on the entry to be given the option of updating the firmware. If you choose not to update the firmware, you can still run the example for that device after refusing.
### Testing Using Mac OS X
1. Go to the Quick Downloads section on the Mac OS X page
3. Click on System Preferences >> Phidgets (under Other) to activate the Preference Pane
4. Make sure that the is properly attached.
5. Double Click on in the Phidget Preference Pane to bring up the Sample program. This program will function in a similar way as the Windows version.
### Using Linux
For a step-by-step guide on getting Phidgets running on Linux, check the Linux page.
### Using Windows Mobile / CE 5.0 / CE 6.0
For a step-by-step guide on getting Phidgets running on Windows CE, check the Windows CE page.
## Technical Details
The 30 Amp Sensor measures alternating current (AC) up to 30 Amps and direct current (DC) between –30 and +30 Amps. It uses a hall-effect based sensor to measure the magnetic field induced by the applied current flowing through a copper conductor. It then converts the magnetic data into a current measurement with internal calculations. The AC output will give the RMS (Root Mean Square) value of an alternating current assuming the current is sinusoidal, and the sine wave is varying equally across the zero point. The AC output can also be used for signals that are not varying evenly around the zero point but the value will be the RMS plus a DC component. If a DC signal is being measured, the AC output will produce a signal that can be used to calculate the current but without the value representing direction of current flow.
### Measuring Current
The Phidgets Current Sensor should be wired in series with the circuit under test, as shown in the following diagrams.
In the diagrams above, the voltage source is represented by the battery symbol. The load is represented by a light bulb or schematic resistor symbol. The current flowing from the battery to the load is measured through the current sensor.
### Formulas
The formula to translate SensorValue into Current is:
${\displaystyle {\text{DC Current (A)}}={\frac {\text{SensorValue}}{13.2}}-37.8787}$
${\displaystyle {\text{AC Current (RMS)}}={\text{SensorValue}}\times 0.04204}$
### Other Interfacing Alternatives
If you want maximum accuracy, you can use the RawSensorValue property from the PhidgetInterfaceKit. To adjust a formula, substitute (SensorValue) with (RawSensorValue / 4.095) If the sensor is being interfaced to your own Analog to Digital Converter and not a Phidget device, our formulas can be modified by replacing (SensorValue) with (Vin * 200). It is important to consider the voltage reference and input voltage range of your ADC for full accuracy and range.
Each Analog Input uses a 3-pin, 0.100 inch pitch locking connector. Pictured here is a plug with the connections labelled. The connectors are commonly available - refer to the Analog Input Primer for manufacturer part numbers.
## API
Phidget analog sensors do not have their own API- they simply output a voltage that is converted to a digital value and accessed through the "Sensor" properties and events on the PhidgetInterfaceKit API. It is not possible to programmatically identify which sensor is attached to the Analog Input. To an InterfaceKit, every sensor looks the same. Your application will need to apply formulas from this manual to the SensorValue (an integer that ranges from 0 to 1000) to convert it into the units of the quantity being measured. For example, this is how you would use a temperature sensor in a C# program:
// set up the interfacekit object
InterfaceKit IFK = new InterfaceKit();
// link the new interfacekit object to the connected board
IFK.open("localhost", 5001);
int sensorvalue = IFK.sensors[0].Value;
// Convert sensorvalue into temperature in degrees Celsius
double roomtemp = Math.Round(((sensorvalue * 0.22222) - 61.11), 1);
See the PhidgetInterfaceKit User Guide for more information on the API and a description of our architecture.
For more code samples, find your preferred language on the Languages page.
## Product History
Date Board Revision Device Version Comment
March 2008 0 N/A Product Release
|
2019-07-19 02:25:30
|
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 2, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 1, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.28262051939964294, "perplexity": 2063.0606424065013}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2019-30/segments/1563195525973.56/warc/CC-MAIN-20190719012046-20190719034046-00503.warc.gz"}
|
https://www.physicsforums.com/threads/parallel-wire-repulsive-force.436291/
|
# Parallel wire repulsive force
1. Oct 8, 2010
### cdotter
1. The problem statement, all variables and given/known data
[PLAIN]http://img813.imageshack.us/img813/6592/36756643.png [Broken]
2. Relevant equations
$$F=\mu_0\frac{II'L}{2\pi r}$$
3. The attempt at a solution
The wires have a weight $$\lambda Lg$$. They require an equal but opposite force to keep them at equilibrium at 6.00 degrees. This force comes from the repulsion between the oppositely flowing currents in the wires, given by
$$F=\mu_0\frac{I^2L}{2\pi r}$$. The distance r between the two wires is $$sin(6.00 degrees)*0.0400 m*2$$.
I'm stuck at the components of the forces. Could someone give me a hint? I'm terrible at geometry.
Last edited by a moderator: May 5, 2017
2. Oct 9, 2010
### tiny-tim
hi cdotter!
hint: call the tension in each string T,
and do components of forces in the y and x directions
3. Oct 10, 2010
### cdotter
Got it, thank you.
|
2019-03-20 07:38:34
|
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 1, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.46209192276000977, "perplexity": 1270.1145249138024}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2019-13/segments/1552912202303.66/warc/CC-MAIN-20190320064940-20190320090940-00243.warc.gz"}
|
https://codeforces.com/blog/entry/7221
|
### Nourin_Eka's blog
By Nourin_Eka, 8 years ago,
Any useful link to learn the idea o Digit DP?
To solve problems like those:
Or any idea about solving this kinds of problems...
• -1
» 8 years ago, # | ← Rev. 2 → +2 LOJ - 1068Let's generalize the problem. Given a number (1 <=) N (<= INT_MAX). Find how many numbers in the interval [0, N] are divisible by (1 <) K (< 1e4).So you've to make all the numbers from 0 to N, how to do it? In each state, suppose you have made a number. Now try to place digits (0 — 9) after it and check if the new number is smaller or equal to N. Say, N = 1652 and current number = 165. So the valid numbers are : 165 * 10 + 0 = 1650 165 * 10 + 1 = 1651 165 * 10 + 2 = 1652 These are the new numbers we can make. But handling integers will cause MLE. So you've to treat them as string. Again passing whole string would be stupid too. So let's optimize it. In current state, what you really need to know is how to make a valid new number? That is, which digits you can place (or till which digit you can place starting from 0). Now, in any previous state, if you have placed a smaller digit corresponding to N's digit, then current number is absolutely smaller then N and whatever you place after this number will be smaller then N. N = 1652 current number = 155 So you can place any digit after 155 to make a valid number Now, if you haven't placed any smaller digit corresponding to N's digit, then you can from 0 to N's corresponding digit. N = 1652 current number = 165 So you can place 0, 1 and 2 after 155 to make a valid number And of course this process would continue till length length (intToString (N)). So one DP state is [pos], another is [preSmall]. One more state is left (divisibility).Now, checking for divisibility. Following code-segment would do the work : DP (int pos, int moded, bool preSmall) { moded *= 10; for (....) DP (pos + 1, (moded + digit) % K, ...); } After making a number, if moded is 0, then the number is divisible by K.The value of moded will be [0,K). So the total complexity will be len * K * 2. And maximum value of len will be at most 10.This is the solution of the generalized problem. To get the expected result, simply do solve (B) - solve (A - 1).Thanks
• » » 8 years ago, # ^ | 0 Thank You :)
• » » 8 years ago, # ^ | 0 So far I could guess your DP (int pos, int moded, bool preSmall) gives the number of integers divisible by k, not whose sum of digits also divisible by k. LOJ 1068 requires that. And if I my guess is correct then it can be calculated mathematically, n/k gives us the number of integers less than or equal to n divisible by k. DP(int pos, int moded, int preSmall, int sum) I think if we design the DP like this one, it may answer that question, but complexity grows 10*1e4*2*83 which is neither time nor memory efficient. How can we optimize it further.
• » » » 8 years ago, # ^ | 0 if K > 100 there is no solution so answer is 0 :)
• » » » » 5 weeks ago, # ^ | 0 actually if k > 74 then there is no solution.since number of digit is 9 and 2 * (8 * 9) = 74.
• » » 4 years ago, # ^ | -6 Helped me alot. Thank You :)
» 4 years ago, # | 0 This stack overflow link is also very useful : LINK
» 4 years ago, # | ← Rev. 3 → -34 I am a newbie to Dp. Now i am trying to solve some problem related to digit dp. link: http://www.lightoj.com/volume_showproblem.php?problem=1068solution: https://pastebin.com/4R9Q4BYvI was trying to solve the above metioned problem but got stucked.Any help would be appriciated.
• » » 4 years ago, # ^ | +10 Please, remove your code and paste it to any code sharing site like ideone.com, pastebin.com then share the code link.
» 4 years ago, # | 0 I am a newbie to Dp. Now i am trying to solve some problem related to digit dp. link: http://www.lightoj.com/volume_showproblem.php?problem=1068solution: https://pastebin.com/4R9Q4BYvI was trying to solve the above metioned problem but got stucked.Any help would be appriciated.
|
2020-12-05 02:16:55
|
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 1, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.4467335641384125, "perplexity": 1138.0065808477325}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2020-50/segments/1606141746033.87/warc/CC-MAIN-20201205013617-20201205043617-00173.warc.gz"}
|
http://openstudy.com/updates/55e08d7ee4b022720610f05f
|
## anonymous one year ago I have to write this as a single natural logarithm. 3 ln 3 + 2 ln x
1. Nnesha
same like log :=)
2. Nnesha
power rule first :=) it's like backwards :=)
3. Nnesha
power rule $\large\rm log_b x^y = y \log_b x$
4. anonymous
log3 3^2 = 2 log3 3 I am so bad at these
5. Nnesha
let's deal with log and ln today! so you can i'm really good at it!
6. anonymous
awesome okay
7. Nnesha
okay so here is an example ln and log same rules apply for both $\rm 5 \ln y = \ln y^5$ number at front of ln becomes exponent of y
8. Nnesha
|dw:1440780028325:dw|
9. Nnesha
so how would you write $\huge\rm 3 \ln 3 =??$ on the previous post you were expanding log equation not condense write in single ln form
10. anonymous
as ln^3?
11. anonymous
I'm sorry let me go back and try again thats wrong
12. Nnesha
okay :=)
13. anonymous
I feel like it should be ln 3^3
14. Nnesha
yes right!
15. Nnesha
same rules apply for 2 ln x = ??
16. anonymous
ln 2^2
17. Nnesha
hmm
18. Nnesha
there is only one 2
19. anonymous
oh right yeah
20. anonymous
so just ln 2 ?
21. Nnesha
it's 2 ln of x |dw:1440780554920:dw|
22. anonymous
oh okay
23. Nnesha
so 2 wll become exponent of x
24. anonymous
okay
25. Nnesha
so how would yo write 2 ln x = ?
26. anonymous
x^2 ?
27. Nnesha
yes right! but don't forget the ln so $\huge\rm \color{reD}{3} \ln 3 +\color{blue}{ 2} \ln x$$\huge\rm ln 3^\color{red}{2} + \ln x^\color{blue}{2}$ now we can write it in single ln
28. Nnesha
there is plus sign which property you should apply ?
29. anonymous
product rule?
30. Nnesha
yes right
31. anonymous
isnt this product rule logb(x ∙ y) = logb(x) + logb(y)
32. anonymous
so would all the logb be log3?
33. Nnesha
no simple log means log base 10
34. Nnesha
in ur question it's ln 3^3 + ln x^2
35. Nnesha
it's natural log you don't have to worry about the base
36. anonymous
so just ln 9x^2 ? I think
37. Nnesha
yes right
38. anonymous
oh well the last part was easier than I thought. i think I just get confused with how to simplify everything. Thank you I really appreciate you helping :)))
39. Nnesha
my pleasure :=)
|
2017-01-23 21:37:09
|
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 0, "mathjax_asciimath": 1, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.8075737357139587, "perplexity": 11929.566602161836}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2017-04/segments/1484560283008.19/warc/CC-MAIN-20170116095123-00520-ip-10-171-10-70.ec2.internal.warc.gz"}
|
http://mathhelpforum.com/differential-geometry/112490-derivative-bouned-variation.html
|
# Thread: derivative and bouned variation
1. ## derivative and bouned variation
Let $F(x)=x^2\sin(\frac{1}{x})$ and $G(x)=x^2\sin(\frac{1}{x^2})$ with domain $[-1,1]$ (when $x=0$ both functions are said to be equal to 0).
I can show that both $F$ and $G$ are differentiable at every point. However, I don't know how to show that $F$ is of bounded variation while $G$ is not. I am only allowed to use the definition in my proof.
Any hints as where to start would be greatly appreciated.
2. Originally Posted by putnam120
Let $F(x)=x^2\sin(\frac{1}{x})$ and $G(x)=x^2\sin(\frac{1}{x^2})$ with domain $[-1,1]$ (when $x=0$ both functions are said to be equal to 0).
I can show that both $F$ and $G$ are differentiable at every point. However, I don't know how to show that $F$ is of bounded variation while $G$ is not. I am only allowed to use the definition in my proof.
Any hints as where to start would be greatly appreciated.
What hints do you want? You can use only the definition so begin writing the sums for F,G over the given interval and try to discover something interesting (I really don't know)...perhaps the fact that sin x is bounded will help?
Tonio
3. Originally Posted by putnam120
Let $F(x)=x^2\sin(\frac{1}{x})$ and $G(x)=x^2\sin(\frac{1}{x^2})$ with domain $[-1,1]$ (when $x=0$ both functions are said to be equal to 0).
I can show that both $F$ and $G$ are differentiable at every point. However, I don't know how to show that $F$ is of bounded variation while $G$ is not. I am only allowed to use the definition in my proof.
The function F is monotonic in each of the intervals $\left[\frac1{\bigl(n+\tfrac12\bigr)\pi}, \frac1{\bigl(n-\tfrac12\bigr)\pi}\right]$, in which the variation is less than $\frac{2}{\bigl(n-\tfrac12\bigr)^2\pi^2}$.
The function G is monotonic in each of the intervals $\left[\frac1{\sqrt{\bigl(n+\tfrac12\bigr)\pi}}, \frac1{\sqrt{\bigl(n-\tfrac12\bigr)\pi}}\right]$, in which the variation is greater than $\frac{2}{\bigl(n+\tfrac12\bigr)\pi}$.
4. Originally Posted by Opalg
The function F is monotonic in each of the intervals $\left[\frac1{\bigl(n+\tfrac12\bigr)\pi}, \frac1{\bigl(n-\tfrac12\bigr)\pi}\right]$, in which the variation is less than $\frac{2}{\bigl(n-\tfrac12\bigr)^2\pi^2}$.
The function G is monotonic in each of the intervals $\left[\frac1{\sqrt{\bigl(n+\tfrac12\bigr)\pi}}, \frac1{\sqrt{\bigl(n-\tfrac12\bigr)\pi}}\right]$, in which the variation is greater than $\frac{2}{\bigl(n+\tfrac12\bigr)\pi}$.
Thanks, that makes things really clear. However, how did you know to find those intervals?
5. Originally Posted by putnam120
Thanks, that makes things really clear. However, how did you know to find those intervals?
The function sin x oscillates between –1 and +1, and it takes those values at odd multiples of π/2. So the function G(x) oscillates between $-x^2$ and $+x^2$, attaining those values at the points where $1/x^2$ is an odd multiple of π/2. However, I was wrong to say that G(x) is monotonic in the intervals between those points. Fortunately, that does not affect the argument that I gave to show that G(x) is not of bounded variation. It is still true that the variation of G(x) in each of those intervals is at least $2/((n+\tfrac12)\pi)$, and the sum of those numbers diverges.
For F(x), the situation is more serious, and the argument that I gave before does not work without some modification. What you can say is that the turning points of F(x) occur at the points where $\tan1/x = 1/(2x)$ (easy calculus calculation). There is exactly one such point in each of the intervals $\left[\frac1{\bigl(n+\tfrac12\bigr)\pi}, \frac1{\bigl(n-\tfrac12\bigr)\pi}\right]$. So F has only one turning point in each such interval. The variation of F in that interval is therefore at most twice the maximum difference between $-x^2$ and $+x^2$ in the interval, namely $\frac4{\bigl(n-\tfrac12\bigr)^2\pi^2}$. That's sufficient to show that F is of bounded variation.
|
2017-11-23 17:25:50
|
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 41, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.9324599504470825, "perplexity": 110.99576468175513}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.3, "absolute_threshold": 10, "end_threshold": 15, "enable": false}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2017-47/segments/1510934806844.51/warc/CC-MAIN-20171123161612-20171123181612-00636.warc.gz"}
|
http://math.stackexchange.com/questions/321172/convergent-or-divergent-series/321179
|
# Convergent or divergent series?
suppose that $\sum_{n=1}^{\infty}a_n$ is a convergent series of positive terms. Prove that $\sum_{n=1}^{\infty} \sqrt{a_n \cdot a_{n+1}}$ is also convergent. Demonstrate that the converse is false.
-
By the arithmeticg-geometric inequality $\sqrt{xy}\le \frac{x+y}2$ for $x,y\ge0$. Therefore the second series is dominated as $$\sum_{n=1}^\infty\sqrt{a_na_{n+1}}\le\sum_{n=1}^\infty\frac{a_n+a_{n+1}}2=\frac12\left(\sum_{n=1}^\infty a_n+\sum_{n=2}^\infty a_{n+1}\right)=-\frac{a_1}2+\sum_{n=1}^\infty a_n.$$
The converse is false as can be seen from $a_{2n}=n^{-4}$, $a_{2n+1}=1$.
Note that $2\sqrt{a_n a_{n+1}}\le a_n+a_{n+1}$. (This is a disguised version of $(x-y)^2\ge 0$.) Then use Comparison.
For the falsity of the converse, let $a_n=1$ if $n$ is odd, and let $a_n$ be a fairly rapidly convergent series when $n$ is even. For example, $\frac{1}{n^4}$ will work.
|
2016-07-26 10:45:34
|
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 1, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.9694890975952148, "perplexity": 95.49879844316033}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2016-30/segments/1469257824757.8/warc/CC-MAIN-20160723071024-00014-ip-10-185-27-174.ec2.internal.warc.gz"}
|
https://itectec.com/database/postgresql-retrieving-users-by-their-age-efficiently/
|
# Postgresql – Retrieving users by their age efficiently
datepostgresql
I have a PostgreSQL database of users and their birthdays and want to get all users in a specific age range:
SELECT * FROM users
WHERE age(birthday) >= 21 AND age(birthday) <= 30;
How can I achieve that this query stays efficient, even when there are millions of users in the database?
Of course there are more filters, I just don't want to calculate the age for every row.
I do not want to use a materialized view.
#### Best Answer
You want to move the computation from the column, to the other side of the inequality operators where you have literals rather than a column:
WHERE birthday >= current_date - interval '30 years' AND birthday <= current_date - interval '21 years';
Then a simple index on "birthday" will make it efficient. It might need some fiddling depending on what datatype "birthday" is. I tested it as a timestamp without time zone.
|
2021-10-20 03:45:27
|
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 1, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.2663986086845398, "perplexity": 2470.013841087808}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": false}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2021-43/segments/1634323585302.56/warc/CC-MAIN-20211020024111-20211020054111-00439.warc.gz"}
|
http://mathhelpforum.com/discrete-math/174162-discrete-sigma.html
|
1. ## Discrete sigma
Hi
Consider the following similarities between the sums:
1 = 1
2 + 3 + 4 = 1 + 8
5 + 6 + 7 + 8 + 9 = 8 + 27
10 + 11 + 12 + 13 + 14 + 15 + 16 = 27 + 64
Do you see the pattern? Find a general formula, with sigma and Exponentiations, which summarizes this pattern.
I need some guiding...
2. Originally Posted by iHeji
Hi
Consider the following similarities between the sums:
1 = 1
2 + 3 + 4 = 1 + 8
5 + 6 + 7 + 8 + 9 = 8 + 27
10 + 11 + 12 + 13 + 14 + 15 + 16 = 27 + 64
Do you see the pattern? Find a general formula, with sigma and Exponentiations, which summarizes this pattern.
I need some guiding...
Perhaps this will help, perhaps it will not.
The first equation (N = 1) is given as
$\displaystyle \sum_{n=1}^{1+0}n = (N - 1)^3 + N^3$
The second (N = 2) is
$\displaystyle \sum_{n=2}^{2+2}n = (N - 1)^3 + N^3$
The third (N = 3) is
$\displaystyle \sum_{n=5}^{5+4}n = (N - 1)^3 + N^3$
etc.
Notice that the lowest number for n is given by the series 1, 2, 5, 10, 17, etc. This is the recursive function
$f(N) = f(N - 1) + (2N - 3),~f(1) = 1$
where N is the equation number. So we have:
The first equation (N = 1) is given as
$\displaystyle \sum_{n=f(1)}^{f(1)+0}n = (N - 1)^3 + N^3$
The second (N = 2) is
$\displaystyle \sum_{n=f(2)}^{f(2)+2}n = (N - 1)^3 + N^3$
The third (N = 3) is
$\displaystyle \sum_{n=f(3)}^{f(3)+4}n = (N - 1)^3 + N^3$
etc.
Now notice that the upper term of the summation is of the form f(N) + g(N) where g(n) = 2(N - 1). So the general Nth equation is given by
$\displaystyle \sum_{n = f(N)}^{f(N) + 2(N - 1)}n = (N - 1)^3 + N^3$
This is probably a more complicated method than is needed, but it's the one I came up with that was the most demonstrative. Anyway, now solve f(N) as a function of N (rather than by recursion) and I'd guess the rest would be a simple induction proof.
-Dan
3. Go figure. That F(N) function is a simple quadratic. I should have seen that coming.
-Dan
4. I just finished this. It's takes a while to get an explicit form for f(N), but it is do-able this way. Perhaps one of the other helpers will have a shortcut.
-Dan
|
2013-12-12 05:04:20
|
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 8, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.8444831967353821, "perplexity": 236.6438735041782}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2013-48/segments/1386164491055/warc/CC-MAIN-20131204134131-00014-ip-10-33-133-15.ec2.internal.warc.gz"}
|
https://fall2021.data606.net/blog/meetup_09b_logistic_regression/
|
# Maximum Likelihood Estimation and Logistic Regression
Click here to open the slides (PDF).
Supplement documents:
|
2021-12-08 18:07:52
|
{"extraction_info": {"found_math": false, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.9792917966842651, "perplexity": 14513.159395564655}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": false}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2021-49/segments/1637964363520.30/warc/CC-MAIN-20211208175210-20211208205210-00505.warc.gz"}
|
https://zbmath.org/?q=an:1050.49016&format=complete
|
# zbMATH — the first resource for mathematics
Transformation of quadratic forms to perfect squares for broken extremals. (English) Zbl 1050.49016
Summary: In this paper we study a quadratic form which corresponds to an extremal with piecewise continuous control in variational problems. This form, compared with the classical one, has some new terms connected with the set $$\Theta$$ of all points of discontinuity of the control. Its positive definiteness is a sufficient optimality condition for broken extremals. We show that if there exists a solution to the corresponding Riccati equation satisfying some jump condition at each point of the set $$\Theta$$, then the quadratic form can be transformed to a perfect square, just as in the classical case. As a result we obtain sufficient conditions for positive definiteness of the quadratic form in terms of the Riccati equation and hence, sufficient optimality conditions for broken extremals.
##### MSC:
49K15 Optimality conditions for problems involving ordinary differential equations
Full Text:
|
2021-07-27 19:58:17
|
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 1, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.6345157623291016, "perplexity": 199.37432443559442}, "config": {"markdown_headings": true, "markdown_code": false, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2021-31/segments/1627046153474.19/warc/CC-MAIN-20210727170836-20210727200836-00608.warc.gz"}
|
https://judge.mcpt.ca/problem/abaseproblem
|
## A Base Problem
View as PDF
Points: 5 (partial)
Time limit: 2.0s
Memory limit: 64M
Author:
Problem type
You are given $$2$$ integers $$x$$ and $$y$$, which are in base $$b$$. Please print out the sum, and product of $$x$$ and $$y$$ in base $$b$$.
#### Input Specification
The first line will contain the integer $$b\ (2 \le b \le 10)$$.
The second line will contain the integer $$x$$.
The third line will contain the integer $$y$$.
$$x$$ and $$y$$ will each contain at most $$5$$ digits, and each digit will be in the range $$[0, b)$$. $$x$$ and $$y$$ will not contain leading zeros.
#### Output Specification
On the first line, output the sum of $$x$$ and $$y$$.
On the second line, output the product of $$x$$ and $$y$$.
$$b = 10$$
No further constraints.
#### Sample Input
10
3
5
#### Sample Output
8
15
|
2019-04-21 19:03:38
|
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 1, "mathjax_asciimath": 1, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.4017355144023895, "perplexity": 1019.0402519106615}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 20, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2019-18/segments/1555578532050.7/warc/CC-MAIN-20190421180010-20190421202010-00513.warc.gz"}
|
https://ai.stackexchange.com/questions/32452/what-parameters-or-hyper-parameters-of-my-model-for-time-series-should-i-change
|
# What parameters or hyper-parameters of my model for time-series should I change to improve the MAE?
The following time series exercise is about writing the best possible model, minimizing the MAE. Helper functions normalize_series, windowed_dataset are given and not to be changed as well as BATCH_SIZE, N_PAST, N_FUTURE or SHIFT. We use a window of the past 10 observations of 1 feature, and train the model to predict the next 10 observations of that feature.
import pandas as pd
import tensorflow as tf
def normalize_series(data, min, max):
data = data - min
data = data / max
return data
def windowed_dataset(series, batch_size, n_past=10, n_future=10, shift=1):
ds = tf.data.Dataset.from_tensor_slices(series)
ds = ds.window(size=n_past + n_future, shift=shift, drop_remainder=True)
ds = ds.flat_map(lambda w: w.batch(n_past + n_future))
ds = ds.map(lambda w: (w[:n_past], w[n_past:]))
return ds.batch(batch_size).prefetch(1)
def solution_model():
# Number of features in the dataset. We use all features as predictors to
# predict all features of future time steps.
N_FEATURES = len(df.columns) # =1
# Normalizes the data
data = df.values
data = normalize_series(data, data.min(axis=0), data.max(axis=0))
# Splits the data into training and validation sets.
SPLIT_TIME = int(len(data) * 0.8)
x_train = data[:SPLIT_TIME]
x_valid = data[SPLIT_TIME:]
tf.keras.backend.clear_session()
tf.random.set_seed(42)
BATCH_SIZE = 32
# Number of past time steps based on which future observations should be
# predicted
N_PAST = 10
# Number of future time steps which are to be predicted.
N_FUTURE = 10
# By how many positions the window slides to create a new window
# of observations.
SHIFT = 1
# Code to create windowed train and validation datasets.
train_set = windowed_dataset(series=x_train, batch_size=BATCH_SIZE,
n_past=N_PAST, n_future=N_FUTURE,
shift=SHIFT)
valid_set = windowed_dataset(series=x_valid, batch_size=BATCH_SIZE,
n_past=N_PAST, n_future=N_FUTURE,
shift=SHIFT)
model = tf.keras.models.Sequential([
tf.keras.layers.Conv1D(filters=1,
kernel_size=5,
strides=1,
activation="relu",
input_shape=[None, 1]
),
tf.keras.layers.Bidirectional(tf.keras.layers.LSTM(60, return_sequences=True)),
tf.keras.layers.Bidirectional(tf.keras.layers.LSTM(60, return_sequences=True)),
#tf.keras.layers.Dense(30, activation="relu"),
#tf.keras.layers.Dense(10, activation="relu"),
tf.keras.layers.Dense(N_FEATURES)
])
# Code to train and compile the model
lr_schedule = tf.keras.callbacks.LearningRateScheduler(
lambda epoch: 1e-8 * 10 ** (epoch / 20)
)
optimizer = tf.keras.optimizers.SGD(learning_rate=1e-5, momentum=0.9)
model.compile(
loss=tf.keras.losses.Huber(),
optimizer=optimizer,
metrics=["mae"]
)
model.fit(
train_set, validation_data=valid_set, epochs=100, callbacks=[lr_schedule]
)
model.summary()
return model
With main
if __name__ == '__main__':
model = solution_model()
model.save("model.h5")
I tried adding several dense layers, replacing the Bidirectional LSTM layers with LSTM/GRU/Bidirectional GRU layers. MAE obtained stays around 0.2 at best.
Is there any obvious (ex. input or output size) mistake in the approach, and how to improve to lower mae?
|
2022-01-21 02:51:30
|
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 1, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.23495665192604065, "perplexity": 12689.165098693933}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 5, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2022-05/segments/1642320302715.38/warc/CC-MAIN-20220121010736-20220121040736-00573.warc.gz"}
|
https://nbhmcsirgate.theindianmathematician.com/2020/04/csir-june-2011-part-b-question-32_2.html
|
### CSIR JUNE 2011 PART B QUESTION 32 SOLUTION (Derivative of the determinat map evaluated at $(H,K)$)
For $V = (V_1,V_2) \in \Bbb R^2$ and $W = (W_1,W_2)$, consider the determinant map $det:\Bbb R^2 \times \Bbb R^2 \to \Bbb R$ defined by $det(V,W) = V_1W_2 - V_2W_1$. Then the derivative of the determinant map at $(V,W) \in \Bbb R^2 \times \Bbb R^2$ evaluated on $(H,K) \in \Bbb R^2 \times \Bbb R^2$ is
1) $det(H,W) + det(V,W)$,
2) $det(H,K)$,
3)$det(H,V) + det(W,K)$,
4) $det(V,H) + det(K,W)$.
Solution: We want to find the derivative of the determinant map at the point $(H,K)$, that is along the vector $(H,K)$. So we need to find the directional derivative of determinant along the point $(H,K)$.
Now, the directional derivative of the determinant map along $(H, K)$ is given by $\nabla(det) \cdot (H,K)$ where $\cdot$ is the dot product.
First, we shall calculate the gradient $\nabla(det)$:
$$\big(\frac{\partial(det)}{\partial V_1},\frac{\partial(det)}{\partial V_2},\frac{\partial(det)}{\partial W_1},\frac{\partial(det)}{W_2}\big).$$
and this is equal to $$(W_2,-W_1,-V_2,V_1)$$
Let $H=(H_1,H_2)$ and $K=(K_1,K_2)$. Now, to calculate the required directional derivative, we have to take the inner product of the above gradient with the vector $(H_1, H_2, K_1,K_2)$ and this gives $W_2H_1-W_1H_2-V_2K_1+V_1K_2 = (H_1W_2-H_2W_1)+ (V_1K_2-V_2K_1)$ and this is equal to $$det(H,W) + det(V,K).$$ So option (1) is correct.
Share to your groups:
FOLLOW BY EMAIL TO GET NOTIFICATION OF NEW PROBLEMS. SHARE YOUR DOUBTS AND COMMENTS BELOW IN THE COMMENTS SECTION. ALSO, SUGGEST PROBLEMS TO SOLVE.
### NBHM 2020 PART A Question 4 Solution $$\int_{-\infty}^{\infty}(1+2x^4)e^{-x^2} dx$$
Evaluate : $$\int_{-\infty}^{\infty}(1+2x^4)e^{-x^2} dx$$ Solution : \int_{-\infty}^{\infty}(1+2x^4)e^{-x^2} dx = \int_{-\infty}^{\inft...
|
2020-10-28 05:56:10
|
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 1, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.8057084083557129, "perplexity": 239.91100903246692}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.3, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2020-45/segments/1603107896778.71/warc/CC-MAIN-20201028044037-20201028074037-00607.warc.gz"}
|
https://www.transtutors.com/questions/suppose-the-distribution-of-the-time-x-in-hours-spent-suppose-the-distribution-of-th-4203897.htm
|
# Suppose the distribution of the time X in hours spent Suppose the distribution of the time X (in...
Suppose the distribution of the time X in hours spent
Suppose the distribution of the time X (in hours) spent by students at a certain university on a particular project is gamma with parameters α = 50 and β = 2. Because α is large, it can be shown that X has approximately a normal distribution. Use this fact to compute the approximate probability that a randomly selected student spends at most 125 hours on the project.
Suppose the distribution of the time X in hours spent
## Plagiarism Checker
Submit your documents and get free Plagiarism report
Free Plagiarism Checker
|
2020-08-08 14:11:50
|
{"extraction_info": {"found_math": false, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.8611223697662354, "perplexity": 569.7552241209324}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2020-34/segments/1596439737883.59/warc/CC-MAIN-20200808135620-20200808165620-00249.warc.gz"}
|
https://socratic.org/questions/how-do-you-solve-by-completing-the-square-x-2-4x-11-0-1
|
# How do you solve by completing the square: x^2- 4x-11=0?
Apr 5, 2015
• First, we Transpose the Constant to one side of the equation.
Transposing $- 11$ to the other side we get:
${x}^{2} - 4 x = 11$
• Application of ${\left(a - b\right)}^{2} = {a}^{2} - 2 a b + {b}^{2}$
We look at the Co-efficient of $x$. It's $- 4$
We take half of this number (including the sign), giving us –2
We square this value to get ${\left(- 2\right)}^{2} = 4$. We add this number to BOTH sides of the Equation.
${x}^{2} - 4 x + 4 = 11 + 4$
${x}^{2} - 4 x + 4 = 15$
The Left Hand side ${x}^{2} - 4 x + 4$ is in the form ${a}^{2} - 2 a b + {b}^{2}$
where $a$ is $x$, and $b$ is $2$
• The equation can be written as
${\left(x - 2\right)}^{2} = 15$
So $\left(x - 2\right)$ can take either $\sqrt{15}$ or $- \sqrt{15}$ as a value. That's because squaring either will give us 15.
$x - 2 = \sqrt{15}$ (or) $x - 2 = - \sqrt{15}$
$x = 2 + \sqrt{15}$ (or) $x = 2 - \sqrt{15}$
• Solution : $x = 2 + \sqrt{15} , 2 - \sqrt{15}$
• Verify your answer by substituting these values in the Original Equation ${x}^{2} - 4 x - 11 = 0$
You will see that the Solution is correct.
|
2021-06-16 00:27:36
|
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 25, "mathjax_inline_tex": 1, "mathjax_display_tex": 0, "mathjax_asciimath": 1, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.7900585532188416, "perplexity": 345.4928286280288}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2021-25/segments/1623487621699.22/warc/CC-MAIN-20210616001810-20210616031810-00219.warc.gz"}
|
https://lavelle.chem.ucla.edu/forum/viewtopic.php?f=131&t=59088
|
Calculating Change in Entropy for Phase Changes
$\Delta S = \frac{q_{rev}}{T}$
Sydney Myers 4I
Posts: 100
Joined: Fri Aug 09, 2019 12:17 am
Calculating Change in Entropy for Phase Changes
Why are there three steps to finding the change in entropy at a transition temperature? If entropy is a state function, then how does heating a substance then cooling a substance give us anything other than delta S = 0?
Benjamin Feng 1B
Posts: 102
Joined: Sat Sep 07, 2019 12:19 am
Re: Calculating Change in Entropy for Phase Changes
This is because you have to change the phase of the substance. The 2 different phases will also have different molar entropies.
|
2020-11-25 23:10:28
|
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 1, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.6739934682846069, "perplexity": 1653.5088417520553}, "config": {"markdown_headings": false, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2020-50/segments/1606141184870.26/warc/CC-MAIN-20201125213038-20201126003038-00492.warc.gz"}
|
https://socratic.org/questions/how-do-you-differentiate-f-x-x-x-1-2-using-the-chain-rule
|
# How do you differentiate f(x) = x+x^(1/2) using the chain rule?
Dec 23, 2016
$\frac{\mathrm{dy}}{\mathrm{dx}} = 1 + {x}^{- \frac{1}{2}} / 2$
$y = x + {x}^{\frac{1}{2}}$
$\frac{\mathrm{dy}}{\mathrm{dx}} = 1 + \frac{1}{2} {x}^{- \frac{1}{2}}$
$\frac{\mathrm{dy}}{\mathrm{dx}} = 1 + {x}^{- \frac{1}{2}} / 2$
|
2019-06-24 13:28:34
|
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 4, "mathjax_inline_tex": 1, "mathjax_display_tex": 0, "mathjax_asciimath": 1, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.7374542951583862, "perplexity": 6735.670861209994}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 20, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2019-26/segments/1560627999539.60/warc/CC-MAIN-20190624130856-20190624152856-00165.warc.gz"}
|
https://www.freemathhelp.com/forum/threads/calculating-percentages-between-points.114123/
|
# Calculating percentages between Points
##### New member
Its probably a simple equation but I'm brain fried atm. I basically have 2 lines... each line is divided into sections - One line (Line A) has 5 points on it (And so its divided into 4 sections) The second line (Line B) has 6 points on it (so its divided into 5 sections). I basically want to compare the two lines and figure out what percentage the points on Line B fall between when compared to the points on Line A. For example Starting at the bottom, Point 1 and Point 2 on Line A have a distance of .25 (25%) and Point 1 and Point 2 on Line B have a distance of .2 (20%) so Point 2 on Line B is 80% of the way between Points 1 and 2 on Line A. I'm trying to figure out the percentages for the rest of the points on Line B essentially. (So Point 3 (.40%) would fall between Point 2 and 3 on Line A... so what's the percentage there?) (Point 4 (60%) would fall between Point 3 and 4, Point 5 (80%) would fall between Points 4 and 5, and Point 6 would simply be 100% when compared to Point 5 of Line A.
#### Denis
##### Senior Member
One line (Line A) has 5 points on it (And so its divided into 4 sections)
The second line (Line B) has 6 points on it (so its divided into 5 sections).
I basically want to compare the two lines and figure out what percentage
the points on Line B fall between when compared to the points on Line A.
For example: Starting at the bottom, Point 1 and Point 2 on Line A have
a distance of .25 (25%) and Point 1 and Point 2 on Line B have a distance
of .2 (20%) so Point 2 on Line B is 80% of the way between
Points 1 and 2 on Line A.
I'm trying to figure out the percentages for the rest of the points on Line B essentially.
So Point 3 (.40%) would fall between Point 2 and 3 on Line A...
so what's the percentage there?
Point 4 (60%) would fall between Point 3 and 4, Point 5 (80%)
would fall between Points 4 and 5, and Point 6 would simply
be 100% when compared to Point 5 of Line A.
A bit like this? :
A: 1................2...............3...............4................5
B: 1...........2............3.............4............5............6
#### Bobby Bones
##### New member
Its probably a simple equation but I'm brain fried atm. I basically have 2 lines... each line is divided into sections - One line (Line A) has 5 points on it (And so its divided into 4 sections) The second line (Line B) has 6 points on it (so its divided into 5 sections). I basically want to compare the two lines and figure out what percentage the points on Line B fall between when compared to the points on Line A. For example Starting at the bottom, Point 1 and Point 2 on Line A have a distance of .25 (25%) and Point 1 and Point 2 on Line B have a distance of .2 (20%) so Point 2 on Line B is 80% of the way between Points 1 and 2 on Line A. I'm trying to figure out the percentages for the rest of the points on Line B essentially. (So Point 3 (.40%) would fall between Point 2 and 3 on Line A... so what's the percentage there?) (Point 4 (60%) would fall between Point 3 and 4, Point 5 (80%) would fall between Points 4 and 5, and Point 6 would simply be 100% when compared to Point 5 of Line A.
The line B points will always fall at 80% between the corresponding points on line A.Except point 1, which is 0%, and point 6 which is 100%.
A formula you can use is: [((Pb-1)*20%)/((Pa-1)*25%)]*100
Pb is number of point on line B (e.g. 1,2,3,4,5, or 6)
Pa is corresponding point on line A, so if you choose point 3 on line B then Pa = 3 too.
(Pb-1)*20% and (Pa-1)*25% will give you the exact percentage that each point falls on the line. When you divide first by the second and using the same point number for both equations, you get the fraction that Pb is between Pa and Pa-1. Then *100% to convert the fraction to percentage.
Last edited:
##### New member
A bit like this? :
A: 1................2...............3...............4................5
B: 1...........2............3.............4............5............6
Yes, So I'm basically trying to get this:
A: 1................2...............3...............4................5
B: 1...........2...|.......3.......|......4........|....5..........6[/QUOTE]
So LineB Point One is 100% to 1 and 0% to 2
Point Two is 20% to 1 and 80% to 2
Point Three is ???
Point Four is ???
Point Five is ???
Point Six is 0% to 4 and 100% to 5
##### New member
More like this.
I'm trying to find what the percentage is in relation to the the points on the other chain.
A: 1................2...............3...............4................5
\ / \ / \ / \ /
B: 1...........2...|.......3.......|......4........|....5..........6[/QUOTE]
So LineB Point One is 100% to 1 and 0% to 2
Point Two is 20% to 1 and 80% to 2
Point Three is ???
Point Four is ???
Point Five is ???
Point Six is 0% to 4 and 100% to 5
#### Denis
##### Senior Member
Did you see Bobby Bones' post ?
#### Dr.Peterson
##### Elite Member
More like this.
I'm trying to find what the percentage is in relation to the the points on the other chain.
Code:
[FONT=courier new]A: 1................2...............3...............4................5
\ / \ / \ / \ /
B: 1............2...|.......3.......|......4........|....5...........6[/FONT]
So LineB Point One is 100% to 1 and 0% to 2
Point Two is 20% to 1 and 80% to 2
Point Three is ???
Point Four is ???
Point Five is ???
Point Six is 0% to 4 and 100% to 5
Is my correction above (putting the picture into CODE form) what you intended?
I'm not quite sure of your terminology. How is point 2 of line B "20% to 1"? My understanding of what you wanted was "80% of the way from 1 to 2".
What's happening is that each point in B falls 20% behind the previous one: point B3 is 60% of the way from A2 to A3, point B4 is 40% of the way from A3 to A4, point B5 is 20% of the way from A4 to A5, point B6 is 0% of the way beyond A5 (i.e. at A5).
Last edited:
|
2019-03-23 17:01:05
|
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 1, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.4648190438747406, "perplexity": 979.4355953308751}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.3, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2019-13/segments/1552912202889.30/warc/CC-MAIN-20190323161556-20190323183556-00557.warc.gz"}
|
http://www.shogun-toolbox.org/doc/en/latest/classes.html
|
SHOGUN 3.2.1
Class Index
A | B | C | D | E | F | G | H | I | J | K | L | M | N | O | P | Q | R | S | T | U | V | W | Z | _
A
CEvaluation (shogun) CKernelStructuredOutputMachine (shogun) CMultitaskLeastSquaresRegression (shogun) CSparsePolyFeatures (shogun)
CEvaluationResult (shogun) CKernelTwoSampleTest (shogun) CMultitaskLinearMachine (shogun) CSparsePreprocessor (shogun)
CAccuracyMeasure (shogun) CExactInferenceMethod (shogun) CKMeans (shogun) CMultitaskLogisticRegression (shogun) CSparseSpatialSampleStringKernel (shogun)
CAlphabet (shogun) CExplicitSpecFeatures (shogun) CKMeansLloydImpl (shogun) CMultitaskROCEvaluation (shogun) SparsityStructure (shogun)
CANOVAKernel (shogun) CExponentialKernel (shogun) CKMeansMiniBatchImpl (shogun) CMultitaskTraceLogisticRegression (shogun) CSpecificityMeasure (shogun)
CApproxJointDiagonalizer (shogun)
F
CKNN (shogun) Munkres (shogun) CSpectrumMismatchRBFKernel (shogun)
CAttenuatedEuclideanDistance (shogun)
L
N
CSpectrumRBFKernel (shogun)
CAttributeFeatures (shogun) CF1Measure (shogun) CSphericalKernel (shogun)
CAUCKernel (shogun) CFactor (shogun) CLabels (shogun) CNativeMulticlassMachine (shogun) CSplineKernel (shogun)
CAveragedPerceptron (shogun) CFactorAnalysis (shogun) CLabelsFactory (shogun) CNearestCentroid (shogun) CSplittingStrategy (shogun)
CAvgDiagKernelNormalizer (shogun) CFactorDataSource (shogun) CLanczosEigenSolver (shogun) CNeighborhoodPreservingEmbedding (shogun) CSqrtDiagKernelNormalizer (shogun)
B
CFactorGraph (shogun) CLaplacianEigenmaps (shogun) CNewtonSVM (shogun) CSquaredHingeLoss (shogun)
CFactorGraphFeatures (shogun) CLaplacianInferenceMethod (shogun) CNGramTokenizer (shogun) CSquaredLoss (shogun)
CBaggingMachine (shogun) CFactorGraphLabels (shogun) CLaRank (shogun) node SSKFeatures (shogun)
CBalancedConditionalProbabilityTree (shogun) CFactorGraphModel (shogun) CLatentFeatures (shogun) CNode (shogun) CStateModel (shogun)
CBALMeasure (shogun) CFactorGraphObservation (shogun) CLatentLabels (shogun) CNormalSampler (shogun) CStatistics (shogun)
CBaseMulticlassMachine (shogun) CFactorType (shogun) CLatentModel (shogun) CNormOne (shogun) CStochasticProximityEmbedding (shogun)
CBesselKernel (shogun) CFastICA (shogun) CLatentSOSVM (shogun)
O
CStochasticSOSVM (shogun)
CBinaryClassEvaluation (shogun) CFeatureBlockLogisticRegression (shogun) CLatentSVM (shogun) CStoreScalarAggregator (shogun)
CBinaryFile (shogun) CFeatures (shogun) lbfgs_parameter_t (shogun) COligoStringKernel (shogun) CStoreVectorAggregator (shogun)
CBinaryLabels (shogun) CFFDiag (shogun) CLBPPyrDotFeatures (shogun) ConditionalProbabilityTreeNodeData (shogun) CStratifiedCrossValidationSplitting (shogun)
CBinaryStream (shogun) CFFSep (shogun) CLDA (shogun) COnlineLibLinear (shogun) CStreamingAsciiFile (shogun)
CBinaryTreeMachineNode (shogun) CFile (shogun) CLeastAngleRegression (shogun) COnlineLinearMachine (shogun) CStreamingDenseFeatures (shogun)
CBinnedDotFeatures (shogun) CFirstElementKernelNormalizer (shogun) CLeastSquaresRegression (shogun) COnlineSVMSGD (shogun) CStreamingDotFeatures (shogun)
CBitString (shogun) CFITCInferenceMethod (shogun) CLibLinear (shogun) COperatorFunction (shogun) CStreamingFeatures (shogun)
block_tree_node_t CFixedDegreeStringKernel (shogun) CLibLinearMTL (shogun)
P
CStreamingFile (shogun)
bmrm_ll (shogun) CFKFeatures (shogun) CLibLinearRegression (shogun) CStreamingFileFromDenseFeatures (shogun)
BmrmStatistics (shogun) CFunction (shogun) CLibSVM (shogun) Parallel (shogun) CStreamingFileFromFeatures (shogun)
CBrayCurtisDistance (shogun)
G
CLibSVMFile (shogun) Parameter (shogun) CStreamingFileFromSparseFeatures (shogun)
C
CLibSVMOneClass (shogun) CParameterCombination (shogun) CStreamingFileFromStringFeatures (shogun)
CGaussian (shogun) CLibSVR (shogun) ParameterMap (shogun) CStreamingHashedDenseFeatures (shogun)
CCache (shogun) CGaussianARDKernel (shogun) CLikelihoodModel (shogun) ParameterMapElement (shogun) CStreamingHashedDocDotFeatures (shogun)
CCanberraMetric (shogun) CGaussianBlobsDataGenerator (shogun) line_search_res (shogun) CParser (shogun) CStreamingHashedSparseFeatures (shogun)
CCanberraWordDistance (shogun) CGaussianDistribution (shogun) CLinearARDKernel (shogun) CPCA (shogun) CStreamingMMD (shogun)
CCauchyKernel (shogun) CGaussianKernel (shogun) CLinearHMM (shogun) CPerceptron (shogun) CStreamingSparseFeatures (shogun)
CCCSOSVM (shogun) CGaussianLikelihood (shogun) CLinearKernel (shogun) CPlif (shogun) CStreamingStringFeatures (shogun)
CCGMShiftedFamilySolver (shogun) CGaussianMatchStringKernel (shogun) CLinearLatentMachine (shogun) CPlifArray (shogun) CStreamingVwCacheFile (shogun)
CChebyshewMetric (shogun) CGaussianNaiveBayes (shogun) CLinearLocalTangentSpaceAlignment (shogun) CPlifBase (shogun) CStreamingVwFeatures (shogun)
CChi2Kernel (shogun) CGaussianProcessBinaryClassification (shogun) CLinearMachine (shogun) CPlifMatrix (shogun) CStreamingVwFile (shogun)
CChiSquareDistance (shogun) CGaussianProcessMachine (shogun) CLinearMulticlassMachine (shogun) CPluginEstimate (shogun) CStringDistance (shogun)
CCircularBuffer (shogun) CGaussianProcessRegression (shogun) CLinearOperator (shogun) CPNorm (shogun) CStringFeatures (shogun)
CCircularKernel (shogun) CGaussianShiftKernel (shogun) CLinearRidgeRegression (shogun) CPolyFeatures (shogun) CStringFileFeatures (shogun)
CClusteringAccuracy (shogun) CGaussianShortRealKernel (shogun) CLinearSolver (shogun) CPolyKernel (shogun) CStringKernel (shogun)
CClusteringEvaluation (shogun) CGCArray (shogun) CLinearStringKernel (shogun) CPolyMatchStringKernel (shogun) CStringPreprocessor (shogun)
CClusteringMutualInformation (shogun) CGeodesicMetric (shogun) CLinearStructuredOutputMachine (shogun) CPolyMatchWordStringKernel (shogun) CStructuredAccuracy (shogun)
CCombinationRule (shogun) CGHMM (shogun) CLinearTimeMMD (shogun) CPositionalPWM (shogun) CStructuredData (shogun)
CCombinedDotFeatures (shogun) CGMM (shogun) CLineReader (shogun) CPowerKernel (shogun) CStructuredLabels (shogun)
CCombinedFeatures (shogun) CGMNPLib (shogun) CList (shogun) CPRCEvaluation (shogun) CStructuredModel (shogun)
CCombinedKernel (shogun) CGMNPSVM (shogun) CListElement (shogun) CPrecisionMeasure (shogun) CStructuredOutputMachine (shogun)
CCommUlongStringKernel (shogun) CGNPPLib (shogun) CLMNN (shogun) CPreprocessor (shogun) CStudentsTLikelihood (shogun)
CCommWordStringKernel (shogun) CGNPPSVM (shogun) CLMNNStatistics (shogun) CProbabilityDistribution (shogun) CSubsequenceStringKernel (shogun)
CCompressor (shogun) CGPBTSVM (shogun) CLocalAlignmentStringKernel (shogun) CProbitLikelihood (shogun) CSubset (shogun)
CConditionalProbabilityTree (shogun) CGradientCriterion (shogun) CLocalityImprovedStringKernel (shogun) CProductKernel (shogun) CSubsetStack (shogun)
CConjugateOrthogonalCGSolver (shogun) CGradientModelSelection (shogun) CLocallyLinearEmbedding (shogun) CPruneVarSubMean (shogun) CSumOne (shogun)
CConstKernel (shogun) CGradientResult (shogun) CLocalTangentSpaceAlignment (shogun) CPyramidChi2 (shogun) CSVM (shogun)
CContingencyTableEvaluation (shogun) CGridSearchModelSelection (shogun) CLock (shogun)
Q
CSVMLight (shogun)
CConverter (shogun) CGUIClassifier (shogun) CLogDetEstimator (shogun) CSVMLightOneClass (shogun)
CCosineDistance (shogun) CGUIConverter (shogun) CLogitLikelihood (shogun) CQDA (shogun) CSVMLin (shogun)
CCplex (shogun) CGUIDistance (shogun) CLogKernel (shogun) CQDiag (shogun) CSVMOcas (shogun)
CCPLEXSVM (shogun) CGUIFeatures (shogun) CLogLoss (shogun) CQPBSVMLib (shogun) CSVMSGD (shogun)
CCrossCorrelationMeasure (shogun) CGUIHMM (shogun) CLogLossMargin (shogun) CQuadraticTimeMMD (shogun) CSVRLight (shogun)
CCrossValidation (shogun) CGUIKernel (shogun) CLogPlusOne (shogun)
R
CSyntaxHighLight
CCrossValidationMKLStorage (shogun) CGUILabels (shogun) CLogRationalApproximationCGM (shogun)
T
CCrossValidationMulticlassStorage (shogun) CGUIMath (shogun) CLogRationalApproximationIndividual (shogun) CRandom (shogun)
CCrossValidationOutput (shogun) CGUIPluginEstimate (shogun) CLOOCrossValidationSplitting (shogun) CRandomConditionalProbabilityTree (shogun) CTableFactorType (shogun)
CCrossValidationPrintOutput (shogun) CGUIPreprocessor (shogun) CLoss (shogun) CRandomFourierDotFeatures (shogun) tag_callback_data (shogun)
CCrossValidationResult (shogun) CGUIStructure (shogun) CLossFunction (shogun) CRandomFourierGaussPreproc (shogun) tag_iteration_data (shogun)
CCrossValidationSplitting (shogun) CGUITime (shogun) CLPBoost (shogun) CRandomKitchenSinksDotFeatures (shogun) CTanimotoDistance (shogun)
CCSVFile (shogun)
H
CLPM (shogun) CRandomSearchModelSelection (shogun) CTanimotoKernelNormalizer (shogun)
CCustomDistance (shogun)
M
CCustomKernel (shogun) CHammingWordDistance (shogun) CRationalApproximationCGMJob (shogun) task_tree_node_t
CCustomMahalanobisDistance (shogun) CHash (shogun) CMachine (shogun) CRationalApproximationIndividualJob (shogun) CTaskGroup (shogun)
D
CHashedDocConverter (shogun) CMahalanobisDistance (shogun) CRealDistance (shogun) CTaskTree (shogun)
d_node CHashedDocDotFeatures (shogun) CMajorityVote (shogun) CRealFileFeatures (shogun) CTaxonomy (shogun)
D_THREAD_PARAM CHashedSparseFeatures (shogun) CManhattanMetric (shogun) CRealNumber (shogun) CTDistributedStochasticNeighborEmbedding (shogun)
CData (shogun) CHashedWDFeatures (shogun) CManhattanWordDistance (shogun) CRecallMeasure (shogun) CTensorProductPairKernel (shogun)
CDataGenerator (shogun) CHashedWDFeaturesTransposed (shogun) CManifoldSculpting (shogun) RefCount (shogun) CThresholdRejectionStrategy (shogun)
CDecompressString (shogun) CHessianLocallyLinearEmbedding (shogun) CMap (shogun) CRegressionLabels (shogun) CTime (shogun)
CDelimiterTokenizer (shogun) CHierarchical (shogun) CMAPInference (shogun) CRegulatoryModulesStringKernel (shogun) TMultipleCPinfo (shogun)
CDenseDistance (shogun) CHingeLoss (shogun) CMAPInferImpl (shogun) CRejectionStrategy (shogun) CTokenizer (shogun)
CDenseExactLogJob (shogun) CHistogram (shogun) MappedSparseMatrix (shogun) CRelaxedTree (shogun) CTOPFeatures (shogun)
CDenseFeatures (shogun) CHistogramIntersectionKernel (shogun) CMatchWordStringKernel (shogun) RelaxedTreeNodeData (shogun) TParameter (shogun)
CDenseLabels (shogun) CHistogramWordStringKernel (shogun) CMath (shogun) RelaxedTreeUtil (shogun) CTraceSampler (shogun)
CDenseMatrixExactLog (shogun) CHMM (shogun) CMatrixFeatures (shogun) CRescaleFeatures (shogun) tree_node_t
CDenseMatrixOperator (shogun) CHMSVMModel (shogun) CMatrixOperator (shogun) CResultSet (shogun) CTreeMachine (shogun)
CDensePreprocessor (shogun) CHomogeneousKernelMap (shogun) CMCLDA (shogun) CRidgeKernelNormalizer (shogun) CTreeMachineNode (shogun)
CDenseSubsetFeatures (shogun) CHSIC (shogun) CMeanAbsoluteError (shogun) CROCEvaluation (shogun) CTrie (shogun)
CDiagKernel (shogun) CHypothesisTest (shogun) CMeanFunction (shogun)
S
CTron
CDiceKernelNormalizer (shogun)
I
CDifferentiableFunction (shogun) CMeanShiftDataGenerator (shogun) CSalzbergWordStringKernel (shogun) TSGDataType (shogun)
CDiffusionMaps (shogun) CICAConverter (shogun) CMeanSquaredError (shogun) CScalarResult (shogun) CTStudentKernel (shogun)
CDimensionReductionPreprocessor (shogun) ICP_stats (shogun) CMeanSquaredLogError (shogun) CScatterKernelNormalizer (shogun) CTwoSampleTest (shogun)
CDirectEigenSolver (shogun) id3TreeNodeData (shogun) CMemoryMappedFile (shogun) CScatterSVM (shogun) CTwoStateModel (shogun)
CDirectLinearSolverComplex (shogun) CIdentityKernelNormalizer (shogun) CMinkowskiMetric (shogun) CSegmentLoss (shogun)
U
CDirectSparseLinearSolver (shogun) CImplicitWeightedSpecFeatures (shogun) CMKL (shogun) CSequence (shogun)
CDisjointSet (shogun) CIndependenceTest (shogun) CMKLClassification (shogun) CSequenceLabels (shogun) CUWedge (shogun)
CDistance (shogun) CIndependentComputationEngine (shogun) CMKLMulticlass (shogun) CSerialComputationEngine (shogun) CUWedgeSep (shogun)
CDistanceKernel (shogun) CIndependentJob (shogun) MKLMulticlassGLPK (shogun) CSerializableAsciiFile (shogun)
V
CDistantSegmentsKernel (shogun) CIndexBlockGroup (shogun) MKLMulticlassOptimizationBase (shogun) CSerializableFile (shogun) v_array (shogun)
CDistribution (shogun) CIndexBlockRelation (shogun) CMKLOneClass (shogun) CSet (shogun) CVarianceKernelNormalizer (shogun)
CDixonQTestRejectionStrategy (shogun) CIndexBlockTree (shogun) CMKLRegression (shogun) CSGDQN (shogun) CVectorResult (shogun)
CDomainAdaptationMulticlassLibLinear (shogun) IndexSorter (shogun) CMMDKernelSelection (shogun) SGDynamicRefObjectArray (shogun) Version (shogun)
CDomainAdaptationSVM (shogun) CIndirectObject (shogun) CMMDKernelSelectionComb (shogun) SGIO (shogun) CVowpalWabbit (shogun)
CDotFeatures (shogun) CInferenceMethod (shogun) CMMDKernelSelectionCombOpt (shogun) SGMatrixList (shogun) CVwCacheReader (shogun)
CDotKernel (shogun) CIntegration (shogun) CMMDKernelSelectionMax (shogun) SGNDArray (shogun) CVwCacheWriter (shogun)
ds_node CIntronList (shogun) CMMDKernelSelectionMedian (shogun) CSGObject (shogun) CVwConditionalProbabilityTree (shogun)
CDualLibQPBMSOSVM (shogun) CInverseMultiQuadricKernel (shogun) CMMDKernelSelectionOpt (shogun) SGParamInfo (shogun) VwConditionalProbabilityTreeNodeData (shogun)
CDummyFeatures (shogun) CIOBuffer (shogun) mocas_data SGReferencedData (shogun) CVwEnvironment (shogun)
CDynamicArray (shogun) CIsomap (shogun) Model (shogun) SGRefObject (shogun) VwExample (shogun)
CDynamicObjectArray (shogun) CIterativeLinearSolver (shogun) CModelSelection (shogun) SGSparseMatrix (shogun) VwFeature (shogun)
DynArray (shogun) CIterativeShiftedLinearFamilySolver (shogun) CModelSelectionParameters (shogun) SGSparseVector (shogun) VwLabel (shogun)
CDynInt (shogun) IterativeSolverIterator (shogun) CMPDSVM (shogun) SGSparseVectorEntry (shogun) CVwLearner (shogun)
CDynProg (shogun)
J
CMulticlassAccuracy (shogun) SGString (shogun) CVwNativeCacheReader (shogun)
E
CMulticlassLabels (shogun) SGStringList (shogun) CVwNativeCacheWriter (shogun)
CJacobiEllipticFunctions (shogun) CMulticlassLibLinear (shogun) SGVector (shogun) CVwNonAdaptiveLearner (shogun)
CECOCAEDDecoder (shogun) CJade (shogun) CMulticlassLibSVM (shogun) CShareBoost (shogun) CVwParser (shogun)
CECOCDecoder (shogun) CJADiag (shogun) CMulticlassLogisticRegression (shogun) ShareBoostOptimizer (shogun) CVwRegressor (shogun)
CECOCDiscriminantEncoder (shogun) CJADiagOrth (shogun) CMulticlassMachine (shogun) ShogunException (shogun)
W
CECOCEDDecoder (shogun) CJediDiag (shogun) CMulticlassModel (shogun) ShogunFeatureVectorCallback
CECOCEncoder (shogun) CJediSep (shogun) CMulticlassMultipleOutputLabels (shogun) ShogunLoggerImplementation CWaveKernel (shogun)
CECOCForestEncoder (shogun) CJensenMetric (shogun) CMulticlassOCAS (shogun) CSigmoidKernel (shogun) CWaveletKernel (shogun)
CECOCHDDecoder (shogun) CJensenShannonKernel (shogun) CMulticlassOneVsOneStrategy (shogun) CStatistics::SigmoidParamters (shogun) CWDFeatures (shogun)
CECOCIHDDecoder (shogun) CJLCoverTreePoint (shogun) CMulticlassOneVsRestStrategy (shogun) CSignal (shogun) CWDSVMOcas (shogun)
CECOCLLBDecoder (shogun) CJobResult (shogun) CMulticlassOVREvaluation (shogun) CSimpleFile (shogun) CWeightedCommWordStringKernel (shogun)
CECOCOVOEncoder (shogun) CJobResultAggregator (shogun) CMulticlassSOLabels (shogun) CSimpleLocalityImprovedStringKernel (shogun) CWeightedDegreePositionStringKernel (shogun)
CECOCOVREncoder (shogun)
K
CMulticlassStrategy (shogun) CSmoothHingeLoss (shogun) CWeightedDegreeRBFKernel (shogun)
CECOCRandomDenseEncoder (shogun) CMulticlassSVM (shogun) CSNPFeatures (shogun) CWeightedDegreeStringKernel (shogun)
CECOCRandomSparseEncoder (shogun) K_THREAD_PARAM (shogun) CMulticlassTreeGuidedLogisticRegression (shogun) CSNPStringKernel (shogun) CWeightedMajorityVote (shogun)
CECOCSimpleDecoder (shogun) CKernel (shogun) CMultidimensionalScaling (shogun) CSOBI (shogun) CWRACCMeasure (shogun)
CECOCStrategy (shogun) CKernelDistance (shogun) CMultiquadricKernel (shogun) CSortUlongString (shogun)
Z
CECOCUtil (shogun) CKernelIndependenceTest (shogun) CMultitaskClusteredLogisticRegression (shogun) CSortWordString (shogun)
|
2014-03-10 20:22:19
|
{"extraction_info": {"found_math": false, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.9883419871330261, "perplexity": 12392.316371062585}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 20, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2014-10/segments/1394011005264/warc/CC-MAIN-20140305091645-00073-ip-10-183-142-35.ec2.internal.warc.gz"}
|
http://electronics.stackexchange.com/questions
|
# All Questions
7 views
I built a driver for a pulsed laser. This required a low voltage DC source to provide a bias current through the laser and also a high voltage DC source to drive an avalanche transistor to make the ...
3 views
### Generating Large Test Cases for Power Flow Simulation
I am trying to investigate the scalability of various power flow methods for large power systems (1M+ buses). It seems however that most test cases that are readily available are generally quite ...
21 views
### Why is the red diode in my RGB led burning out?
I have been working on a simple circuit that uses a 10mm RGB led. I have had the same problem again with the red diode dying out after maybe a day. This time I thought I fixed it, but I guess I'm ...
18 views
### What does the Y capacitor in a SMPS do?
It seems that a well-designed SMPS has a capacitor connecting the ground planes of the primary and secondary sides of the transformer, such as the C13 capacitor here. What is the purpose of this ...
12 views
### How to modulate a LED to send data and recieve with a photodiode
What is the concept of modulating a light source (white led, IR led, etc) to send data and then demodulating it with a photodiode back into data? I am wanting to experiment with trying to use ...
25 views
### How does vacuum tube heating (heater voltage) affect tube performance/sound?
I have replaced a 12AX7 tube with ECC85 in my guitar amp. Seems to work great and sound great. The part that is questioning me is whether the sound is being at its best because it takes a good 20-30 ...
29 views
### What does “capacity achieving code” mean?
I am an electrical engineer passed several courses in information theory and coding theory, however as I think now, I see that I have not understand the basic ideas in these concepts till now. We ...
15 views
### connecting a generator with an RCCB to a service panel
Consider the following schematic : simulate this circuit – Schematic created using CircuitLab in this setup we have two power sources Main power lines and a diesel generator ( with ...
10 views
### Output in unknown state even though specified in verilog
I am having some troubles with creating an edge detector in verilog. So far I've come up with this: ...
46 views
### Need the pinout for this noname FTDI board
I would really like to know the pinout for this FTDI board. It has no model number on it and on google picture search I couldn't find it. It has 2x4 pins on the end of the two sides soldered in. I ...
92 views
### Can't find the right power supply - could I use something similar?
So I've just bought a Behringer MX2642A mixer off eBay, which worked for two hours then promptly shut down. After investigation, it seems that the power supply might have given up - the specs for the ...
28 views
### MOSFET power dissipation calculations - Diodes Inc. datasheets
Looking at Diodes Inc. datasheets, I am having trouble following their power dissipation limit calculations for their MOSFETS. E.g. for DMG4496SSS http://www.diodes.com/_files/datasheets/ds32048.pdf ...
9 views
### ngspice - Measure param
I'm trying to post-process a value computed by a measure statement in ngspice (rev 26) as described in the manual: ...
20 views
### L293DD overtemperature protection
I am using an L293DD Dual H-Bridge IC to power my DC motors. I have realized I am using inadequate heatsinks, and 1,5 times the nominal maximum current(1,8A instead of 1,2A), but the circuit is ...
51 views
### What's the easiest way to remove the power rails of a breadboard?
All of the breadboards I get come with an adhesive backing that, while keeping the metal rows inside the breadboard in, keep the power rails connected to the breadboard. I want to remove the power ...
59 views
### Can this shift-register circuit be simplified?
I've created a circuit using a shift register to drive a 7-segment display which is all controlled by an Arduino Uno. The circuit appears to work fine, I have a sketch running which simply counts ...
21 views
### UART: no input while connecting to some hosts [on hold]
I have Android tablet with configured UART (console=ttyS0,115200 in /proc/cmdline). When I connect to it from my workstation it ...
20 views
### Maximum Achievable Current from a MOSFET Current Source \ Sink [on hold]
consider the circuit has a bias voltage VGG = 5V; and the transistor parameters are k’ = 100, W = 50 micrometer , L = l micrometer , and Vt = 0.8V
20 views
### Communication with a database - assignment
i'm a second year student of electronics and comupter science. I was asked to implement a system which works with a web application. Application should enable displaying information about current ...
15 views
### Thevenin Resistance and Voltage
So I've confirmed the Thevenin voltage ( between A and B ) to be .5V and a the VLoad as .3633V. I'm now supposed to verify that RLoad is 4k by redrawing the circuit as its Thevenin equivalent and ...
29 views
### Kid car 4x4 3-6 pin foot switch
I'm wiring my son's electric car and am having trouble with the foot switch. My original plans were drawn up with a 3 pin momentary foot switch; I have since learned that it is in fact a 6 pin SPDT ...
12 views
### Problems connecting esp8266 through HL-340 usb serial adapter on Ubuntu (Input/output error)
No matter what I try I can't seem to get a serial connection going between an esp8266 (this model: http://www.autonomii.com/wp-content/uploads/2015/04/esp8266_pinout_h-569x236.png) and my HL-340 (was ...
16 views
### Color Coding one brushless DC motor?
I have used brushed dc motors earlier.. WHere the color coding means red(+ve),black(-), (white/yellow) signal What do the color codes on bruhless dc motor means... through several forums I came ...
17 views
### BLDC 3 Phases Motor wiring
I have a BLDC 3 phase 8 pole motor, with hall effect sensors, most other three phases motors have 1 wire for each phase U , V , W however this motor has two for each phase(these do not include the ...
22 views
### I need some help calculating values for a variable 555 oscillator circuit
For the past few hours i have been strugling with a 555 timer. What i am trying to do is generate a square wave of variable frequency controlled by a potentiometer. At the time i have a 200KΩ ...
41 views
### Can a COTS FM Radio transmit?
This is probably a really silly question, but I still want to know... Given a standard off-the-shelf FM radio alarm clock, what is to stop it from transmitting radio waves when physical motion causes ...
25 views
### how can we make a signal out of a 2 pin mic output
ok im looking to make this type of setup so that i can use its output pin to read via analog. this is the mic i got and would like to make that 3rd pinout which as you can see it only has 2 and ...
47 views
### How to switch micro DC motors fast?
i'm planning to make micro/nano quadcopter and ive bought all major components so far. Only thing that i cant figure out are motor drivers. I was planning to use some kind of transistor to do that for ...
204 views
### Is a half-wave rectifier particularly hard on a transformer?
In the book Practical Electronics for Inventors, 3rd Ed., the authors recommend against using half-wave rectifiers because they're inefficient and cause "...the core to become polarized and to ...
34 views
### Finding Cutoff Frequency
simulate this circuit – Schematic created using CircuitLab What is the smallest value of C that will ensure that the 3dB frequency is not greater than 100Hz? That is the question ...
23 views
### DSP + ARM Processor For Digital Audio Mixer [on hold]
I'm currently working on a project where I have been tasked with a team to create a digital audio mixer. I need a processor that preferably has a 1 DSP core and 1 ARM core both running close to or ...
33 views
### Reuse a microphone (formerly) wired to the digital “General Purpose Input/Output” of a Realtek sound card?
I recently repurposed an old laptop screen (from a Dell XPS L502x) as external monitor, and then made the built-in webcam work without too much fuss. The camera module also has an embedded ...
65 views
### Bad idea? Powering uC from two separate supplies?
Okay, a little background: I am designing a micro controller daughter board (uCDB) that is nominally powered at 3.3V by a voltage regulator on a mother board (MB). Also I will have a programmer board ...
16 views
### Solve the sync. system. Up till zero, XOR the bits, from zero, I/O is same
I need to build an sync. system with single I/O that, until we reach the first zero(including that zero), the output will be XOR of the bits that were read up till that point(including the zero), ...
29 views
### How do I prove that the following function is a complete system
Here's the picture: How do I prove that {f,1} is a complete system? Well, in the answers it is written that if we apply 1 to A, then we'll be able to create NOR. and NOR is a complete system. (It ...
14 views
### Photomultipliers in Parallel
I have a photo-multiplier going into a preamp then two stage shaping amp (PC250 and PC275 board from Amptek). The photo-multiplier is attached to a plastic scintillator and then a beta emitting ...
25 views
### Failover Management from AC to DC and DC to DC
I'm working on a project that has a high availability requirement. As such, I must maintain a power supply output of 12vdc ~1a (ideally 13.5vdc ~35a). The 1a option will sustain the core equipment, ...
74 views
### Power factor correction consequences
From what I know so far (might be wrong), The ideal or unity power factor means that all the energy is consumed by resistive components and are converted to heat, light, etc. In this case apparent ...
34 views
### Car ignition coil power supply
I'm making a car ignition coil tester. The thing is I need a constant 12v 5A DC supply for the project, so I'm going to use a laptop power supply. The one I have is 220V AC/DC 19V 4.75A. How can I ...
16 views
### Powering RC Servos using Switching Power Supply
I have 3 RC HV Servos (working voltage 6-8.4V) (link). I purchased a 75 Watt Switching Power Supply (link). The servos draw about 3amps at full load.Is a 75Watt supply sufficient or do I need more ...
47 views
### Advice and critical remarks regarding the manufacturing of a DC-DC step-down converter
I'm in the third year of my electrical engineering major and I want to take things a bit further than the usual analogue electronic circuits I've studied thus far by making a DC-DC step down converter ...
26 views
### Current flow in long channel MOSFET in saturation
Why doesn't the current in long channel nMOS decrease with drain voltage due to pinch off that takes place when gate voltage at a point in the channel is less than the threshold.
47 views
### Solve a circuit with the nodal analysis
I have the following circuit, and I have to find Vo: I solved it in this way: \\ \begin{cases} \frac{v_1 - 40}{1} + \frac{v_1 - v_0}{2} + 5 = 0 \\ \frac{v_1 ...
11 views
### CN Interrupt (interrupt on change) on PIC32mx not working
I attached a module which communicates with my PIC32mx via UART. I am trying to bit banging this UART communication but I am not able to see the interrupt being triggered. I am using the CN4/RB2 Port ...
20 views
### About transconductance amplifier Capacitance direction
If I use a polarized capacitor for C_large, which side should be the positive terminal? You have to decide this based on the total voltage across the capacitor. I am using 330 microfarad ...
42 views
### how to solve this problem by thevenin's theorem [on hold]
Find the voltage across the 4 Ω resistor in Fig. B. using Thévenin’s theorem. circiut diagram ...
59 views
### PIC C program flow problem XC8
Problem: My C code is not running to the end. To test my EEPROM read and write, I decided to write a simple array and read it back to different array using a PIC. However the problem is not the ...
36 views
### stm8s I2C Communication
I want to use I2C Communication between two stm8s208cb MCUs. I use stm8s standard peripharel library. My code: Master: ...
|
2016-02-13 02:35:04
|
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.7537233233451843, "perplexity": 2790.2866330405955}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2016-07/segments/1454701165697.9/warc/CC-MAIN-20160205193925-00100-ip-10-236-182-209.ec2.internal.warc.gz"}
|
https://tadaatoolbox.tadaa-data.de/
|
tadaatoolbox contains helpers for data analysis and presentation focused on undergrad psychology, the target audience being students at University of Bremen.
This was a teaching project with the primary goal being to assist undergrad students and tutors in a course that has since changed hands and moved on.
Some of the choices made in this package were motivated by didactics and demonstration, but are unfit for methodologically sound use. One example was the automated calculation of post-hoc power in t-tests and ANOVA, which is generally frowned upon. Also, the automated test for heteroskedasticity in tadaa_t.test is a source of multiplicity, leading to uncontrolled type I errors.
Both mechanisms have since been removed in the GitHub version.
There are many other packages available that do a better job at cleaning/presenting statistical test output, notably in the easystats ecosystem, so I suggest you look there rather than using this package.
For ANOVA & others, try the afex package.
# Installation
Install the current development version from GitHub (recommended):
if (!("remotes" %in% installed.packages())){
install.packages("remotes")
}
remotes::install_github("tadaadata/tadaatoolbox")
Or install the most recent stable version from CRAN:
install.packages("tadaatoolbox")
## Code of Conduct
Please note that the tadaatoolbox project is released with a Contributor Code of Conduct. By contributing to this project, you agree to abide by its terms.
|
2020-06-05 00:47:25
|
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 1, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.18278692662715912, "perplexity": 3486.7686513031294}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.3, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2020-24/segments/1590348492295.88/warc/CC-MAIN-20200604223445-20200605013445-00322.warc.gz"}
|
https://byjus.com/maths/permutation/
|
# Permutation
A permutation is an arrangement of objects in a definite order. The members or elements of sets are arranged here in a sequence or linear order. For example, the permutation of set A={1,6} is 2, such as {1,6}, {6,1}. As you can see, there are no other ways to arrange the elements of set A.
In permutation, the elements should be arranged in a particular order whereas in combination the order of elements does not matter. Also, read: Permutation And Combination
When we look at the schedules of trains, buses and the flights we really wonder how they are scheduled according to the public’s convenience. Of course, the permutation is very much helpful to prepare the schedules on departure and arrival of these. Also, when we come across licence plates of vehicles which consists of few alphabets and digits. We can easily prepare these codes using permutations.
## Definition of Permutation
Basically Permutation is an arrangement of objects in a particular way or order. While dealing with permutation one should concern about the selection as well as arrangement. In Short, ordering is very much essential in permutations. In other words, the permutation is considered as an ordered combination.
## Representation of Permutation
We can represent permutation in many ways, such as:
• $\large \mathbf{P(n,k)}$
• $\large \mathbf{P^{n}_{k}}$
• $\large \mathbf{_{n}P_{k}}$
• $\large \mathbf{^{n}P_{k}}$
• $\large \mathbf{P _{n}\, _{,k}}$
## Formula
The formula for permutation of n objects for r selection of objects is given by:
P(n,r) = n!/(n-r)!
For example, the number of ways 3rd and 4th position can be awarded to 10 members is given by:
P(10, 2) = 10!/(10-2)! = 10!/8! = (10.9.8!)/8! = 10 x 9 = 90
## Types of Permutation
Permutation can be classified in three different categories:
• Permutation of n different objects (when repetition is not allowed)
• Repetition, where repetition is allowed
• Permutation when the objects are not distinct (Permutation of multi sets)
Let us understand all the cases of permutation in details.
### Permutation of n different objects
If n is a positive integer and r is a whole number, such that r < n, then P(n, r) represents the number of all possible arrangements or permutations of n distinct objects taken r at a time. In the case of permutation without repetition, the number of available choices will be reduced each time. It can also be represented as:
$^{n}P_{r}$.
P(n, r) = n(n-1)(n-2)(n-3)……..upto r factors
$\Rightarrow$ P(n, r) = n(n-1)(n-2)(n-3)……..(n – r +1)
$\large \Rightarrow P(n,r) = \frac{n!}{(n-r)!}$
Here, “nPr” represents the “n” objects to be selected from “r” objects without repetition, in which the order matters.
Example: How many 3 letter words with or without meaning can be formed out of the letters of the word SWING when repetition of letters is not allowed?
Solution: Here n = 5, as the word SWING has 5 letters. Since we have to frame 3 letter words with or without meaning and without repetition, therefore total permutations possible are:
$\large \Rightarrow P(n,r) = \frac{5!}{(5-3)!} = \frac{5 \times 4 \times 3 \times 2 \times 1}{2 \times 1} = 60$
### Permutation when repetition is allowed
We can easily calculate the permutation with repetition. The permutation with repetition of objects can be written using the exponent form.
When the number of object is “n,” and we have “r” to be the selection of object, then;
Choosing an object can be in n different ways (each time).
Thus, the permutation of objects when repetition is allowed will be equal to,
n × n × n × ……(r times) = nr
This is the permutation formula to compute the number of permutations feasible for the choice of “r” items from the “n” objects when repetition is allowed.
Example: How many 3 letter words with or without meaning can be formed out of the letters of the word SMOKE when repetition of words is allowed?
Solution:
The number of objects, in this case, is 5, as the word SMOKE has 5 alphabets.
and r = 3, as 3-letter word has to be chosen.
Thus, the permutation will be:
Permutation (when repetition is allowed) = $\large 5^{3}$ = 125
### Permutation of multi-sets
Permutation of n different objects when $p_{1}$ objects among ‘n’ objects are similar, $p_{2}$ objects of the second kind are similar, $p_{3}$ objects of the third kind are similar ……… and so on, $p_{k}$ objects of the kth kind are similar and the remaining of all are of a different kind,
Thus it forms a multiset, where the permutation is given as:
$\large \mathbf{\large \frac{n!}{p_{1}!\; p_{2}!\; p_{3}…..p_{n}!}}$
### Difference Between Permutation and Combination
The major difference between the permutation and combination are given below:
Permutation Combination Permutation means the selection of objects, where the order of selection matters The combination means the selection of objects, in which the order of selection does not matter. In other words, it is the arrangement of r objects taken out of n objects. In other words, it is the selection of r objects taken out of n objects irrespective of the object arrangement. The formula for permutation is nPr = n! /(n-r)! The formula for combination is nCr = n!/[r!(n-r)!]
## Fundamental Counting Principle
According to this principle, “If one operation can be performed in ‘m’ ways and there are n ways of performing a second operation, then the number of ways of performing the two operations together is m x n “.
This principle can be extended to the case in which the different operation be performed in m, n, p, . . . . . . ways.
In this case the number of ways of performing all the operations one after the other is m x n x p x . . . . . . . . and so on
## Solved Examples
Example 1: In how many ways 6 children can be arranged in a line, such that (i) Two particular children of them are always together (ii) Two particular children of them are never together Solution: (i) The given condition states that 2 students need to be together, hence we can consider them 1. Thus, the remaining 7 gives the arrangement in 5! ways, i.e. 120. Also, the two children in a line can be arranged in 2! Ways. Hence, the total number of arrangements will be, 5! × 2! = 120 × 2 = 240 ways (ii) The total number of arrangements of 6 children will be 6!, i.e. 720 ways. Out of the total arrangement, we know that two particular children when together can be arranged in 240 ways. Therefore, total arrangement of children in which two particular children are never together will be 720 – 240 ways, i.e. 480 ways. Example 2:Consider a set having 5 elements a,b,c,d,e. In how many ways 3 elements can be selected (without repetition) out of the total number of elements. Solution: Given X = {a,b,c,d,e} 3 are to be selected. Therefore, $^{5}C_{3} = 10$ Example 3: It is required to seat 5 men and 4 women in a row so that the women occupy the even places. How many such arrangements are possible? Solution: We are given that there are 5 men and 4 women. i.e. there are 9 positions. The even positions are: 2nd, 4th, 6th and the 8th places These four places can be occupied by 4 women in P(4, 4) ways = 4! = 4 . 3. 2. 1 = 24 ways The remaining 5 positions can be occupied by 5 men in P(5, 5) = 5! = 5.4.3.2.1 = 120 ways Therefore, by the Fundamental Counting Principle, Total number of ways of seating arrangements = 24 x 120 = 2880
## Practice Problems
Practice below listed problems:
1. How many numbers lying between 100 and 1000 can be formed with the digits 1, 2, 3, 4, 5, if the repetition of digits is not allowed.
2. Seven athletes are participating in a race. In how many ways can the first three prizes be won.
To solve more problems or to take a test, download BYJU’S – The Learning App.
## Frequently Asked Questions – FAQs
### What is permutation?
Permutation is a way of changing or arranging the elements or objects in a linear order.
### What is the formula for permutation?
The formula for permutation for n objects taken r at a time is given by:
P(n,r) = n!/(n-r)!
### What are the types of permutation?
The permutation of an arrangement of objects or elements in order, depends on three conditions:
When repetition of elements is not allowed
When repetition of elements is allowed
When the elements of a set are not distinct
### What is the formula for permutation when repetition is allowed?
Let n be the number of objects and r be the selection of objects, then if repetition is allowed, the permutation of objects will be n × n × n × ……(r times) = n^r
### What is the permutation for multisets?
The permutation formula for multisets where all the elements are not distinct is given by: n!/(P1!P2!…Pn!)
|
2021-06-13 23:11:24
|
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.7579185962677002, "perplexity": 437.24896071185265}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2021-25/segments/1623487611089.19/warc/CC-MAIN-20210613222907-20210614012907-00552.warc.gz"}
|
http://www.r-bloggers.com/r-tips-in-stat-511/
|
# R Tips in Stat 511
March 22, 2010
By
(This article was first published on Statistics, R, Graphics and Fun » R Language, and kindly contributed to R-bloggers)
Here are some (trivial) R tips in the course Stat 511. I’ll update this post till the semester is over.
1. ## Formatting R Code
2. I’ve submitted an R package named formatR to CRAN yesterday. This package should be easier than the code below, because there is a GUI to tidy your R code. Install with install.packages('formatR').
Reading code is pain, but the well-formatted code might alleviate the pain a little bit. The function tidy.source() in the animation package can help us format our R code automatically. By default it will read your code in the clipboard, parse it and return the well-formatted code. You have options to keep or remove the comments/blank lines and set the width of the code, etc. Spaces and indent will be added automatically. This can save us time typing spaces and paying attention to indent.
## install.packages('animation') if it is not installed yet
library(animation)
## copy some R code somewhere and type:
tidy.source()
## or specify the path of your code file
tidy.source(file.path(system.file(package = "graphics"), "demo", "image.R"))
## can also use a URL
tidy.source('http://www.public.iastate.edu/~dnett/S511/twofactor.R')
## remove blank lines
tidy.source('http://www.public.iastate.edu/~dnett/S511/twofactor.R',
keep.blank.line = FALSE)
tidy.source('http://www.public.iastate.edu/~dnett/S511/twofactor.R',
keep.comment = FALSE)
3. ## Approximating Rationals by Fractions
We often deal with matrices like $\reverse C(X'X)^{-1}X'$ in 511 and may wonder what on earth they are. If we directly compute solve(t(X)%*%X)%*%t(X) (or generalized inverse ginv() in MASS) we often end up with seeing a lot of decimals, which makes it difficult to see what these numbers really mean. The function fractions() in the MASS package can approximate rationals by fractions. For example:
## from the movie rating example
X = matrix(c(1, 1, 0, 0, 0, 1, 0, 0, 1, 1, 0, 0, 0,
0, 1, 0, 1, 0, 1, 0, 0, 0, 1, 0, 1, 0, 1, 0, 0, 0, 0, 1,
1, 0, 0, 1, 0, 0, 0, 1, 1, 0, 0, 0, 1, 1, 0, 0, 1, 0, 0,
0, 1, 0, 1, 0), byrow = T, nrow = 7)
XX = t(X) %*% X
library(MASS)
XXgi = ginv(XX)
C = matrix(c(1, 1, 0, 0, 0, 1, 0, 0, 1, 1, 0, 0, 0,
0, 1, 0, 1, 1, 0, 0, 0, 0, 0, 1, 1, 0, 1, 0, 0, 1, 0, 0,
1, 0, 1, 0, 0, 0, 1, 0, 1, 0, 1, 0, 0, 0, 0, 1, 1, 0, 0,
1, 0, 1, 0, 0, 1, 0, 0, 1, 0, 0, 1, 0, 1, 0, 0, 1, 0, 0,
0, 1, 1, 0, 0, 0, 1, 1, 0, 0, 1, 0, 0, 0, 1, 0, 1, 0, 1,
0, 0, 0, 1, 0, 0, 1), byrow = T, nrow = 12)
## what does C(X'X)^{-}X' mean?
# hard to see
C %*% XXgi %*% t(X)
# [,1] [,2] [,3] [,4]
# [1,] 7.500000e-01 2.500000e-01 2.220446e-16 5.551115e-17
# [2,] 2.500000e-01 7.500000e-01 1.387779e-16 5.551115e-17
# [3,] 2.500000e-01 7.500000e-01 -1.000000e+00 1.000000e+00
# [4,] 5.000000e-01 -5.000000e-01 1.000000e+00 0.000000e+00
# [5,] -1.665335e-16 -5.551115e-17 1.000000e+00 -2.220446e-16
# [6,] 3.330669e-16 1.110223e-16 1.110223e-16 1.000000e+00
# [7,] 5.000000e-01 -5.000000e-01 1.000000e+00 -1.000000e+00
# [8,] -5.551115e-16 -3.330669e-16 1.000000e+00 -1.000000e+00
# [9,] -5.551115e-17 -2.220446e-16 -1.665335e-16 1.110223e-16
#[10,] 2.500000e-01 -2.500000e-01 -1.110223e-16 3.053113e-16
#[11,] -2.500000e-01 2.500000e-01 -2.220446e-16 2.775558e-16
#[12,] -2.500000e-01 2.500000e-01 -1.000000e+00 1.000000e+00
# [,5] [,6] [,7]
# [1,] 2.775558e-17 2.500000e-01 -2.500000e-01
# [2,] -1.665335e-16 -2.500000e-01 2.500000e-01
# [3,] -4.440892e-16 -2.500000e-01 2.500000e-01
# [4,] 4.440892e-16 5.000000e-01 -5.000000e-01
# [5,] 2.220446e-16 0.000000e+00 1.110223e-16
# [6,] 1.110223e-16 0.000000e+00 -2.220446e-16
# [7,] 1.000000e+00 5.000000e-01 -5.000000e-01
# [8,] 1.000000e+00 2.220446e-16 4.440892e-16
# [9,] 1.000000e+00 2.775558e-16 1.110223e-16
#[10,] -5.551115e-17 7.500000e-01 2.500000e-01
#[11,] -2.220446e-16 2.500000e-01 7.500000e-01
#[12,] -6.661338e-16 2.500000e-01 7.500000e-01
# much easier using fractions
fractions(C %*% XXgi %*% t(X))
# [,1] [,2] [,3] [,4] [,5] [,6] [,7]
# [1,] 3/4 1/4 0 0 0 1/4 -1/4
# [2,] 1/4 3/4 0 0 0 -1/4 1/4
# [3,] 1/4 3/4 -1 1 0 -1/4 1/4
# [4,] 1/2 -1/2 1 0 0 1/2 -1/2
# [5,] 0 0 1 0 0 0 0
# [6,] 0 0 0 1 0 0 0
# [7,] 1/2 -1/2 1 -1 1 1/2 -1/2
# [8,] 0 0 1 -1 1 0 0
# [9,] 0 0 0 0 1 0 0
#[10,] 1/4 -1/4 0 0 0 3/4 1/4
#[11,] -1/4 1/4 0 0 0 1/4 3/4
#[12,] -1/4 1/4 -1 1 0 1/4 3/4
4. ## Jittered Strip Chart
5. Strip chart is a common tool for batch comparisons. When points get overlapped in the plot, we may “jitter” the points by adding a little noise to the data. The R function jitter() is an option to manipulate the data, but stripchart() already supports jittered points.
## some people do not realize that the 'colClasses' argument in
# read.table() is quite useful -- can avoid explicit conversion
header = TRUE, colClasses = c("factor", "factor", "factor",
"numeric"))
## R base graphics: method = 'jitter' will do
stripchart(SeedlingWeight ~ Tray, data = d, method = "jitter",
pch = 20, panel.first = grid())
## or the ggplot2 version: geom = 'jitter'
library(ggplot2)
qplot(Tray, SeedlingWeight, data = d, colour = Genotype, geom = "jitter")
Jittered Strip Chart by stripchart()
Jittered Strip Chart by ggplot2
6. ## Testing $\reverse C\beta=d$ in a Linear Model
7. R base does not provide a general test for the coefficients of a linear model, but we can use the function glh.test() in the gmodels package to do it. If you take a look at its source code, you will find unsurprisingly it is nothing but the code in page 7 of slide set 9 of Dr Nettleton’s lecture notes.
library(gmodels)
time = factor(rep(c(3, 6), each = 5))
temp = factor(rep(c(20, 30, 20, 30), c(2, 3, 4, 1)))
y = c(2, 5, 9, 12, 15, 6, 6, 7, 7, 16)
d = data.frame(time, temp, y)
o = lm(y ~ time + temp + time:temp, data = d)
## compare with page 7-11 in slide set 9
Ctime = matrix(c(0, 1, 0, 0.5), nrow = 1, byrow = T)
glh.test(o, Ctime)
# Test of General Linear Hypothesis
# Call:
# glh.test(reg = o, cm = Ctime)
# F = 6.0051, df1 = 1, df2 = 6, p-value = 0.04975
Ctemp = matrix(c(0, 0, 1, 0.5), nrow = 1, byrow = T)
glh.test(o, Ctemp)
# Test of General Linear Hypothesis
# Call:
# glh.test(reg = o, cm = Ctemp)
# F = 39.7072, df1 = 1, df2 = 6, p-value = 0.0007447
Ctimetempint = matrix(c(0, 0, 0, 1), nrow = 1, byrow = T)
glh.test(o, Ctimetempint)
# Test of General Linear Hypothesis
# Call:
# glh.test(reg = o, cm = Ctimetempint)
# F = 0.1226, df1 = 1, df2 = 6, p-value = 0.7382
Coverall = matrix(c(0, 1, 0, 0, 0, 0, 1, 0, 0, 0,
0, 1), nrow = 3, byrow = T)
glh.test(o, Coverall)
# Test of General Linear Hypothesis
# Call:
# glh.test(reg = o, cm = Coverall)
# F = 13.5319, df1 = 3, df2 = 6, p-value = 0.004439
8. ## Demo for the F Distribution
9. I created a dynamic demo to illustrate the power of the F test here: Demonstrating the Power of F Test with gWidgets. Play with it and have fun!
11. Many people do not realize the possibility of converting the data types of columns in read.table() and always use such specific post hoc conversion:
TRUE)
soup$taster = factor(soup$taster)
soup$batch = factor(soup$batch)
soup$recipe = factor(soup$recipe)
soup$tasteorder = factor(soup$tasteorder)
But in fact, we can specify the types of columns while reading data:
## we know the first 4 are factors and the last one is numeric:
TRUE, colClasses = c(rep("factor", 4), "numeric"))
> str(soup)
'data.frame': 72 obs. of 5 variables:
$recipe : Factor w/ 4 levels "1","2","3","4": 1 1 1 1 1 1 2 2 2 2 ...$ batch : Factor w/ 12 levels "1","10","11",..: 1 1 1 1 1 1 5 5 5 5 ...
$taster : Factor w/ 24 levels "1","10","11",..: 1 12 18 19 20 21 1 12 18 19 ...$ tasteorder: Factor w/ 3 levels "1","2","3": 1 1 2 2 3 3 2 3 1 3 ...
\$ y : num 3 5 6 4 4 3 6 9 6 7 ...
There are other tips in read.table() but I find this one the most useful. Check the 22 arguments in ?read.table if you want to know more magic (e.g. how to specify the first column in the data file as the row names).
12. ## Demo for Newton’s Method
13. There is a function newton.method() in the package animation which shows the detailed iterations in Newton’s method. Here is a demo:
library(animation)
par(pch = 20)
ani.options(nmax = 50)
newton.method(function(x) 5 * x^3 - 7 * x^2 - 40 *
x + 100, 7.15, c(-6.2, 7.1))
Newton-Raphson Method for Root-finding
I hope this is useful for understanding iterative algorithms.
14. ## Misc Tips
Some little tips:
1. unname(): to remove the names of objects
2. > x = c(a = 1, b = 2)
> x
a b
1 2
> unname(x) ## x = unname(x) if one wants to replace x
[1] 1 2
## Related Posts
R-bloggers.com offers daily e-mail updates about R news and tutorials on topics such as: visualization (ggplot2, Boxplots, maps, animation), programming (RStudio, Sweave, LaTeX, SQL, Eclipse, git, hadoop, Web Scraping) statistics (regression, PCA, time series, trading) and more...
|
2014-09-23 00:22:51
|
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 2, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.3038579821586609, "perplexity": 1709.39363299501}, "config": {"markdown_headings": true, "markdown_code": false, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2014-41/segments/1410657137841.57/warc/CC-MAIN-20140914011217-00258-ip-10-234-18-248.ec2.internal.warc.gz"}
|
https://solvedlib.com/n/stringstrawmasssymmetric-modeanticymmetric-mode-xylen-mk4l,6947935
|
# StringStrawMassSymmetric ModeAnticymmetric [email protected] Publ 14h4ng. Lne-Figure 9 . 2 : Normal modes che coupled pendula. Bach pendulum consists nassive brass
###### Question:
String Straw Mass Symmetric Mode Anticymmetric Mode @xylen-Mk4l Publ 14h4ng. Lne- Figure 9 . 2 : Normal modes che coupled pendula. Bach pendulum consists nassive brass bob suspended by string. The pen - duluns are coupled together by rigid straw placed halfway up the string When both bobe oscillate in che same direction (symmetric mode} che gtraw has no effect When the bobg 0scil- late In opposite directions (antisymmetic mode) che upper part of che string remalng stallonary and che bobs plvot che strav_ The general notion the system the superpositon theze two modes Symmetric versus anti-symmetric problem: If the length of your string it 60 cm and the mass of the bob is 97 gm, what do you expect the period T1 for the mode in Figure 9.2a to be? The straw should be halfway down the string;; What do you predict for the period of T2 of the mode illustrated in Figure 9.2b? Please right your answers with 1 decimal places: sec sec
#### Similar Solved Questions
##### Enler your nuswer In (bve provideul Ivox.Hydrogen Iluoride [s used in the manulacture of Freons (which destroy utone in (he sralophtre) and in the prlucdion Of aluminu Ietal. I( Is preparedby the reattionCalz I,SO4Caso4 2MFIn Que process, 5.75 kg Of CaFz is treated with Jn excess of HzSO4 and yields 2 3Skg of HF. Cakculate thie perceut yield of HF;yield
Enler your nuswer In (bve provideul Ivox. Hydrogen Iluoride [s used in the manulacture of Freons (which destroy utone in (he sralophtre) and in the prlucdion Of aluminu Ietal. I( Is preparedby the reattion Calz I,SO4 Caso4 2MF In Que process, 5.75 kg Of CaFz is treated with Jn excess of HzSO4 and yi...
##### THE WIDTH OF A PHOTOGRAPH IS 4 CENTIMETERS MORE THAN THREE TENTHS OF THE LENGTH
THE WIDTH OF A PHOTOGRAPH IS 4 CENTIMETERS MORE THAN THREE TENTHS OF THE LENGTH. IF THE WIDHT IS 13cm.FIND THE LENGHT PLEASEEEEE HELPPPPPP...
##### Please solve and show all the steps and answer (in a neat handwriting or typed !)...
Please solve and show all the steps and answer (in a neat handwriting or typed !) Page 5 4. A tank is filled with incompressible oil to a depth of (h,) 6.43 m open tank The tank is being drained via a horizontal pipe (radius 1.92 cm), attached at a height of (h,) 0.789 m above the tank bottom Oil is...
##### 0,5-1.5F0.51.50.5-15150.5-0.50.515-0.5~15c) Find the range of f . 0 < 2 < 00 1/v3 < 2 < 00 < 2 < 00 0 < 2 < 1/v3 0 < 2 < 1/v3 1/v3 < 2 < 0
0,5 -1.5 F0.5 1.5 0.5 -15 15 0.5 -0.5 0.5 15 -0.5 ~15 c) Find the range of f . 0 < 2 < 0 0 1/v3 < 2 < 0 0 < 2 < 00 0 < 2 < 1/v3 0 < 2 < 1/v3 1/v3 < 2 < 0...
##### Now First, You exlstite 1 find down the computc the 1 1 smallest possiblc bound on 1 approximating Ir(x)I 5x}SEn Now Flrst; Yau 1 down and theemallest compute Jolpunoo Jun 1 possible bound approximating Ir(x)I1 mlddolneusingJurodpiw rule with subintervals ,audtneemdols
Now First, You exlstite 1 find down the computc the 1 1 smallest possiblc bound on 1 approximating Ir(x)I 5x} SEn Now Flrst; Yau 1 down and theemallest compute Jolpunoo Jun 1 possible bound approximating Ir(x)I 1 mlddolne using Jurodpiw rule with subintervals , audtneemdols...
##### Gravel is being dumped from a conveyor belt at a rate of 30 cubic feet per minute.
Gravel is being dumped from a conveyor belt at a rate of 30 cubic feet per minute. It forms a pile in the shape of a right circular cone whose base diameter is alwaysequal to its height. How fast is the height of the pile increasing when the pile is 18 feet high? Recall that the volume of a right ci...
##### Draw structures for thc major product cach of the [ollowing rcactionsTuAt JaCA 4z0can one enankosa0c) O1LAzoSo 0z (cus)znu Li cehv/pa 0 128*vCr H?0.+ Azoc 1.1"8"ivCJ (*n
Draw structures for thc major product cach of the [ollowing rcactions TuAt JaCA 4z0 can one enankosa 0c) O1L Azo So 0z (cus)znu Li cehv/ pa 0 128*v Cr H?0. + Azo c 1.1"8"iv CJ (*n...
##### And gerloty pcot 4 (with H Ttios) Dwarf (D) is Phenotype Genotype the 1 and Fz with dominant t0 tall (d) H with (C) H Write in H tall pcA plar
and gerloty pcot 4 (with H Ttios) Dwarf (D) is Phenotype Genotype the 1 and Fz with dominant t0 tall (d) H with (C) H Write in H tall pcA plar...
##### On January 1, 2021, the Highlands Company began construction on a new manufacturing facility for its...
On January 1, 2021, the Highlands Company began construction on a new manufacturing facility for its own use. The building was completed in 2022. The company borrowed \$2,000,000 at 13% on January 1 to help finance the construction. In addition to the construction loan, Highlands had the following de...
##### Let S² be the unit sphere of R³. Show that the application of f:(x,y,z)€S²->(-x,-y,-z) €S² is a diffeomorphism
Let S² be the unit sphere of R³. Show that the application of f:(x,y,z)€S²->(-x,-y,-z) €S² is a diffeomorphism...
Bob TomPolitical Jokes per day 10 11Celebrity Jokes per day 2 12Suppose Bob and Tom are writing jokes for a their new TV show. Suppose there are two types of jokes, political jokes and jokes about celebrities. The number of jokesthat can be produced by each person in each category are listed in the ...
##### Question 18 (2 points) You have 16.0 0z (473-mL) glass of lemonade with concentration of 2.37 M. The lemonade sits out on your counter for couple of days, and 150 mL of water evaporates from the glass What is the new concentration of the lemonade? Show all work for full marks.
Question 18 (2 points) You have 16.0 0z (473-mL) glass of lemonade with concentration of 2.37 M. The lemonade sits out on your counter for couple of days, and 150 mL of water evaporates from the glass What is the new concentration of the lemonade? Show all work for full marks....
|
2023-02-01 15:16:08
|
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 0, "mathjax_asciimath": 1, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.5959513187408447, "perplexity": 7223.313736559013}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2023-06/segments/1674764499946.80/warc/CC-MAIN-20230201144459-20230201174459-00260.warc.gz"}
|
http://bx-community.wikidot.com/examples:nonsimplymatching
|
NonSimplyMatching
# Title: NonSimplyMatching
## Overview
This artificial example shows a bx that is matching but not simply matching.
## Models
Each model is a boolean: $M = N = \{0,1\}$
## Consistency
$R(m,n)$ iff $mn = 0$; that is
R 0 1
0 T T
1 T F
## Consistency Restoration
There's only one choice if $R$ is to be correct and hippocratic. Writing Rf, Rb for forward and backward consistency restoration functions:
Rf 0 1
0 0 1
1 0 0
Rb 0 1
0 0 0
1 1 0
## Properties
• correct
• hippocratic
• matching (by bijection $f : 0 \mapsto 1, f : 1 \mapsto 0$)
but not
• simply matching (see discussion)
• undoable.
## Discussion
This is Example 8 from Stevens' paper referred to below. In terms of the equivalences $\sim_F$ and $\sim_B$ on each of $M$, $N$ defined there:
Each element of $M$ forms an equivalence class under each equivalence, and dually for $N$. Therefore there is only one choice of transversal, and we identify the elements with the equivalence classes.
Considered as a subset of $M_F \times M_B$, M is the diagonal subset {(0,0),(1,1)} where (0,0) represents 0 and (1,1) represents 1. Similarly for $N$. That is, the coordinate grids for $M$ and $N$ are both, identically:
1 0 1 0
We have here the simplest possible example in which there are two distinct elements of $M_F$ (0 and 1) compatible with one element of $N_B$ (0), and only one of those $M_F$ elements (0) is also compatible with a second, distinct element of$N_B$ (1). In other words, both columns of $M$'s coordinate grid are compatible with the 0 row of $N$'s, and the 0 column of $M$'s grid is also compatible with the 1 row of $N$'s.
## References
@article{DBLP:journals/eceasst/Stevens12,
author = {Perdita Stevens},
title = {Observations relating to the equivalences induced on model
sets by bidirectional transformations},
journal = {ECEASST},
volume = {49},
year = {2012},
ee = {http://journal.ub.tu-berlin.de/eceasst/article/view/714},
bibsource = {DBLP, http://dblp.uni-trier.de}
}
Perdita Stevens
|
2022-06-24 23:28:12
|
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.8728704452514648, "perplexity": 1493.8378279892318}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2022-27/segments/1656103033816.0/warc/CC-MAIN-20220624213908-20220625003908-00589.warc.gz"}
|
https://mathhelpboards.com/threads/p-0-z-a-0-3554.525/
|
# P(0<z<a)=0.3554 ?
#### CaptainBlack
##### Well-known member
Question by KS, reposted from Yahoo Questions
P(0<z<a) = 0.3554 solution
help!
i have to find a
Since z is used for the variable we may assume that this is a normal distribution question, and that Z is a RV with a standard normal distribution.
In which case we have a problem asking us to do an inverse table look-up in a table of standard normal distribution. These come in two varieties one gives exactly the probability you require the area under the curve from 0 to a, the other gives the area from -infinity to a. In the latter case for you need:
P(0<z<a)=P(-infinity<z<a) - P(-infinity<z<0) = P(-infinity,a) - 1/2.
So for this type of table we look up:
P(-infinity<z<a)=0.8554
The way an inverse table look up is done is to look in the body of the table for the value of the probability and the value of a is then the corresponding value you would have looked up. This is shown in the attachment:
CB
Last edited:
|
2020-09-24 21:38:22
|
{"extraction_info": {"found_math": false, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.8825949430465698, "perplexity": 433.28289783808316}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2020-40/segments/1600400220495.39/warc/CC-MAIN-20200924194925-20200924224925-00050.warc.gz"}
|
https://www.transtutors.com/questions/1-partners-allen-baker-and-coe-share-profits-and-losses-262572.htm
|
# 1. Partners Allen, Baker, and Coe share profits and losses 1 answer below »
1. Partners Allen, Baker, and Coe share profits and losses 50:30:20, respectively. The balance sheet at April 30, 2011, follows:
The assets and liabilities are recorded and presented at their respective fair values. Jones is to be admitted as a new partner with a 20% capital interest and a 20% share of profits and losses in exchange for a cash contribution. No goodwill or bonus is to be recorded. How much cash should Jones contribute?
(a) $60,000 (b)$72,000
(c) $75,000 (d)$80,000
2. Elton and Don are partners who share profits and losses in the ratio of 7:3, respectively. On November 5, 2011, their respective capital accounts were as follows:
Elton …….. $70,000 Don ……….. 60,000$130,000
On that date they agreed to admit Kravitz as a partner with a one-third interest in the capital and profits and losses upon his investment of $50,000. The new partnership will begin with a total capital of$180,000. Immediately after Kravitz’s admission, what are the capital balances of Elton, Don, and Kravitz, respectively?
(a) $60,000,$60,000, $60,000 (b)$63,000, $57,000,$60,000
(c) $63,333,$56,667, $60,000 (d)$70,000, $60,000,$50,000
3. William desires to purchase a one-fourth capital and profit and loss interest in the partnership of Eli, George, and Dick. The three partners agree to sell William one-fourth of their respective capital and profit and loss interests in exchange for a total payment of $40,000. The capital accounts and the respective percentage interests in profits and losses immediately before the sale to William are as follows: Eli capital (60%) …….……$ 80,000
George capital (30%) ………. 40,000
Dick capital (10%) …………. 20,000
$140,000 All other assets and liabilities are fairly valued, and implied goodwill is to be recorded prior to the acquisition by William. Immediately after William’s acquisition, what should be the capital balances of Eli, George, and Dick, respectively? (a)$60,000, $30,000,$15,000
(b) $69,000,$34,500, $16,500 (c)$77,000, $38,500,$19,500
(d) $92,000,$46,000, $22,000 4. The capital accounts of the partnership of Newton, Sharman, and Jackson on June 1, 2011, are presented, along with their respective profit and loss ratios: On June 1, 2011, Sidney was admitted to the partnership when he purchased, for$132,000, a proportionate interest from Newton and Sharman in the net assets and profits of the partnership. As a result of this transaction, Sidney acquired a one-fifth interest in the net assets and profits of the firm. Assuming that implied goodwill is not to be recorded, what is the combined gain realized by Newton and Sharman upon the sale of a portion of their interests in the partnership to Sidney?
(a) $0 (b)$43,200
(c) $62,400 (d)$82,000
5. Kern and Pate are partners with capital balances of $60,000 and$20,000, respectively. Profits and losses are divided in the ratio of 60:40. Kern and Pate decide to admit Grant, who invested land valued at $15,000 for a 20% capital interest in the partnership. Grant’s capital account should be credited for: (a)$12,000
(b) $15,000 (c)$16,000
(d) $19,000 6. James Dixon, a partner in an accounting firm, decided to withdraw from the partnership. Dixon’s share of the partnership profits and losses was 20%. Upon withdrawing from the partnership, he was paid$74,000 in final settlement for his partnership interest. The total of the partners’ capital accounts before recognition of partnership goodwill prior to Dixon’s withdrawal was $210,000. After his withdrawal, the remaining partners’ capital accounts, excluding their share of goodwill, totaled$160,000. The total agreed-upon goodwill of the firm was:
(a) $120,000 (b)$140,000
(c) $160,000 (c)$250,000
7. On June 30, 2011, the balance sheet for the partnership of Williams, Brown, and Lowe, together with their respective profit and loss ratios, is summarized as follows:
Williams has decided to retire from the partnership, and by mutual agreement the assets are to be adjusted to their fair value of $360,000 at June 30, 2011. It is agreed that the partnership will pay Williams$102,000 cash for his partnership interest exclusive of his loan, which is to be repaid in full. Goodwill is to be recorded in this transaction, as implied by the excess payment to Williams. After Williams’s retirement, what are the capital account balances of Brown and Lowe, respectively?
(a) $65,000 and$150,000
(b) $97,000 and$246,000
(c) $73,000 and$174,000
(d) $77,000 and$186,000
Shweta J
Solution 1
As we see. ratio for Allen , Backer Coe is 50:30:20 respectively, Here Allen Capital - 74000
...
|
2019-09-19 13:46:32
|
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.35794034600257874, "perplexity": 7882.2489891653895}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2019-39/segments/1568514573519.72/warc/CC-MAIN-20190919122032-20190919144032-00536.warc.gz"}
|
http://stackoverflow.com/questions/3412813/multiline-brace-in-eqnarray
|
# Multiline brace in eqnarray
I have an eqnarray that consists of 3 lines. I would like to have a right brace } that spans the last two lines and some brief text explaining these two parts of the equation. Something like
foo = bar
= baz }
} explain
= etc }
but using one large brace, obviously. Is this possible?
-
Is there anything I could add to my answer, since it isn't accepted yet? – Cloudanger Aug 13 '10 at 19:32
Nope. Just forgot. Now fixed. – thekindamzkyoulike Sep 7 '10 at 13:55
It is recommended to use align instead of eqnarray (it gives wrong spacing sometimes). Here is how it can be done with align:
\begin{align}
foo & \left.\begin{array}{l} = bar \\ \end{array}\right. \\
&
\left. \begin{array}{l}
= baz \\
= etc
\end{array}\right\} explain
\end{align}
The second line's array is just to make the spacing right.
Result will be like this (but of course with equation numbers):
-
You only get one number for the second two equations. Is this sufficient, OP? – Geoff Aug 5 '10 at 13:16
Looks good. I don't need equation numbers for individual lines. Thanks! – thekindamzkyoulike Aug 5 '10 at 23:53
Using \begin{aligned}...\end{aligned} worked better for me than \begin{array}{l}...\end{array} because of consistent line spacing. – chs Feb 4 '14 at 13:21
|
2015-09-01 00:22:35
|
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 1, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.9663844108581543, "perplexity": 4610.463853618272}, "config": {"markdown_headings": true, "markdown_code": false, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 20, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2015-35/segments/1440644068184.11/warc/CC-MAIN-20150827025428-00260-ip-10-171-96-226.ec2.internal.warc.gz"}
|
http://electronics.stackexchange.com/tags/digital-logic/hot
|
# Tag Info
25
What happens is usually cases 3. or 5. You have not defined case 5 :-) The joined input-output will sit at some voltage near the middle of the supply. 74HC14: When a Schmitt triggered gate is used oscillation will almost certainly occur. Assume Vin-out initially = low = 0. When input = 0 output will transition to 1. Time to do this is ...
22
what you are describing is called a Ring oscillator Your output will oscilate with a certain frequency depending of the gate delay of your NOT gate. A perfect NOT Gate would oscillate with an infinite high frequency. Since such a perfect device does not exist, your frequency will be $f=\frac{1}{2*t}$ where t is the gate delay of the NOT gate you use....
21
What is 180 degrees phase shift? When the signal is a sine wave, a 180 degrees phase shift delays the signal for half the period of that sine wave, the sine wave then looks inverted: Can an inverter do this? No, because it has signal gain, the output would be a square wave, not a sine. When the signal is a square wave with a 50% duty cycle, then ...
11
Looking at the transistor schematic it can be seen that the resulting circuit consists of two transistors that have their gates connected to their drains. This so called "diode-connected" transistor acts like a non-linear resistor. simulate this circuit – Schematic created using CircuitLab Basically you end up with a voltage divider and ...
6
I wouldn't recommend it. There are a lot of weird quirks to redstone logic in Minecraft, and it's constrained by the mechanics of the game. In particular: Redstone circuits must be constructed on the surface of a structure in 3D space, and the player must move around that space to work on their circuit. Building that structure will suck up a lot of time. ...
6
LTspice does not have a limit on the number of components or nodes. Likely, not the number of components or the overall size of yor schematic is the problem, but you might be facing some no-nos that make your circuit hard to solve. If you have capacitors, try adding a small ESR by editing the component properties. If you have inductors, adding a DC (copper)...
5
"I want this!" You're not the only one. Such a circuit would be very useful to tell when share prices had bottomed out and I'd use the interrupt to tell me when to buy. As drawn in your second diagram you are looking for a circuit that will predict that this is the lowest value that's coming. We don't have electronics to predict the future. What you can ...
5
It depends on how short the pulse is. If it is extremely short, the transmission gate or tristate element which grants access to the master latch will not have time to even properly turn on, so the bitcell will retain its original value and nothing will happen. The other case of failure due to minimum pulse width is the case in which the forward element ...
4
Your circuit is a strange mix of upside-down-ness. Figure 1 shows a more-likely-to-work configuration. simulate this circuit – Schematic created using CircuitLab Figure 1. The standard solution to this problem. A few tips: Draw your schematics with positive rail on top and negative at the bottom. It will be easier to trace current flow from ...
4
Use some filtering to get rid of the majority of false flats and then use an analogue differentiator (CR high pass filter circuit) to produce zero crosses on the remaining flats. This can feed a comparator that rises or falls depending on the signal direction change. Then, use an exclusive OR gate and a small RC time constant to convert the comparator ...
4
Your problem is that your load resistance (an electromagnet) is far too small. The 4000 series should not be asked to put out more than a few mA at 12 volts. Try disconnecting the load and measuring the voltage. Without knowing how much current your magnet requires, I can suggest the following circuit: simulate this circuit – Schematic created ...
4
I am not that good with explanations, but I'll try. Mike, the creator of LTspice, had went through great lengths to ensure that the solver does not encounter abrupt changes which could pose problems. This means that even the ideal diode, when simulated, will show a small rounding around the knee. Add enough points and it will get sharper, but zoom in and ...
3
This may be technology dependent, but at least a TTL NOT gate (bipolar transistors) can often be viewed as just a high gain inverting amplifier. By connecting input to the output, you create the strong negative feedback, so the amplifier will stabilize somewhere between logical 0 and logical 1. If you connect input to output through a resistor, it may be ...
3
Metastability is generally not oscillation, but the signal from a latch, not an inverter, hovering around 50% of rail for an extended period of time before settling to one or other state. Just a few weeks ago, I successfully observed metastability in an LtSpice simulation. I googled for a transistor level model of a d-latch, and then used a binary search ...
3
Never underestimate the benefits of capacitors. If the lighting was constant intensity any capacitors would be redundant, however if your strobing the LED's or sequencing them, small brief voltage drops would occur in the power feeds. Even if the WS2812B drives the LED's with constant current 100uF 16vdc electrolytics will remove glitches from the power ...
3
Given inputs A, B, C, D and outputs X, Y, Z, where XYZ is a 3 bit unsigned binary number representing the number of bits in ABCD that are 1. Let X be the most siginfiant bit of the binary number and let Z be the least significant bit. The truth table for the function looks like... ABCD => XYZ 0000 => 000 0001 => 001 0010 => 001 0011 => 010 0100 => 001 ...
3
Why not just switch the mosfet directly? Its not as if you are using the logic gate to perform any logical combination.
3
The great answer of Zebonaut was about the circuital aspects that may impact the simulation. I'll add a couple of points on the software side aspects: Try increasing the precision LTspice uses in performing the calculations. By default it uses single precision. If you add the directive .options numdgt=12 on the schematic it will use double precision. This ...
3
But what about when the gate is disconnected? The disconneced gate pin acts as an antenna, and will pick up some electromagnetic noise from the environment - likely the 50 or 60 Hz from the nearest wall power lines. The end result is largely random, and there are other effects like leakage currents to account for. Thats why you want a pullup or ...
2
Using the parts we have in CircuitLab, which are 2 and 3 input ORs and including some logic for getting an output tied to even/odd which was not in your initial question (for the initial question the circuit without the AND gates will do): simulate this circuit – Schematic created using CircuitLab This version gives you an additional output ...
2
The simplest transistor-based circuit is the RTL flip-flop: simulate this circuit – Schematic created using CircuitLab C1 is added so that the circuit prefers to power up with the output low, with Q1 not conducting and Q2 conducting. It also sets a minimum pulse width on the input signal. Once that threshold is exceeded, Q1 switches on, Q2 ...
2
I would suggest an exclusive-or gate. If you tie one input high, you have an inverter. If you tie one input low, you have a buffer. The propagation time should be the same.
2
If 'SET' and 'RST' are 15 volt signals, then these mosfets are designed for a gate voltage of +2 to +10vdc, with a limit of +/- 20 vdc. Full ON voltage is +10 volts to +15 volts, so going below 10 volts on the gates takes them out of saturation and into a resistance (non-linear) much greater than zero ohms.As the voltage lowers down to +2 volts and below the ...
2
Not a new-answer, but as simple-way understanding that "point- 5." (that was explaind by other users), with a simple mechanical analogy. A not-gate could be compared with a lever, with a fixed, resting fulcrum at the centre of the lever. (Such as in a scissors). If its one-end (supposed as input-end) pressed-down, the other-end (supposed-as output-end) ...
2
A LUT (Lookup Table) in modern FPGAs is nothing more than a RAM. The inputs are the address lines, and the output is the data output bus. There's really nothing more to it. FPGAs also tend to have more advanced logic modules (some vendors call them ALMs) which consist of one or more LUTs along with additional dedicated adders, high speed carry chains, and ...
2
That bubble stands for P channel MOSFET transistor. See the following equivalent symbols. See the picture for structure of a P channel MOSFET. In CMOS technology the main substrate is P: For example in the NAND gate in the question making both A and B HIGH will cause the upper transistors to be OFF and lower transistors to be ON, therefore F is ...
2
AO222 just means And-Or 2 2 2 which logically means: 3 2-input ands feeding into 1 3-input or. Most inverting complex gates are actually implemented in a single CMOS stage, but non-inverting gates, like the one you mention, need at least two stages. AO222 is most likely made from a AOI222 and an inverter. AOI222 = 3 2-input ands feeding into one 3-input ...
2
Usually the transistor parameters are provided to the customer by the foundry after signing an NDA (non-disclosure agreement). Transistor modeling and parameter extraction is a nontrivial task and usually based on a large number of measurements to get reliable data for the models. Of course with an automated setup this task can be done very efficiently. ...
2
No, your circuit does not invert. When IN is driven low, it should be fairly obvious that OUT will then be driven low thru the diode. OUT will be one diode drop above IN, or Vcc, whichever is lower.
2
A not gate has too much gain to provide a clean 180 degree phase shift but with the right amount of negative feedback some not gates can do it.
Only top voted, non community-wiki answers of a minimum length are eligible
|
2016-07-25 02:23:50
|
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.5563446283340454, "perplexity": 1371.974182041096}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 20, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2016-30/segments/1469257824201.56/warc/CC-MAIN-20160723071024-00060-ip-10-185-27-174.ec2.internal.warc.gz"}
|
http://www.gbhatnagar.com/2002/10/experience-mathematics-16-apple-day.html
|
## Thursday, October 03, 2002
### Experience Mathematics #16 -- An apple a day
If you study mathematics, then you will have to deal with many statements that contain expressions of the form: If $A$ then $B$ (or, $A$ implies $B$).
Suppose it is true that if you have an Apple a day, then you keep the doctor away. Is it true that if you did not visit the doctor, then you must have had an Apple everyday? Not necessarily. In other words: “if $A$ then $B$” is a true statement, then “if $B$, then $A$” may be false. The statement “if $B$, then $A$” is the converse of “if $A$ then $B$”.
The converse is not to be confused with the contrapositive of the statement. The contrapositive of “if $A$ then $B$” is: “if not $B$ then not $A$”. Unlike the converse, if a statement is true, its contrapositive is true too. Indeed, either they are both true, or they are both false. For example, suppose that it is true that an Apple a day keeps the doctor away. Now if the doctor comes to visit you, you must not have had an Apple some day. Mathematics contains axioms (that may be regarded as “truths”) together with chains of implications—statements of the form “$A$ implies $B$”, where $A$ and $B$ are mathematical expressions. Suppose your axioms say:
1. An Orange contains the daily requirement of Vitamin C.
2. Having your daily requirement of Vitamin C will keep you healthy.
3. If you are healthy, the doctor will stay away
Then, logic dictates that an Orange a day will keep the doctor away. Unfortunately, an Apple does not contain a lot of Vitamin C.
|
2022-11-27 07:53:01
|
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.2545388340950012, "perplexity": 335.6755759284939}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2022-49/segments/1669446710218.49/warc/CC-MAIN-20221127073607-20221127103607-00222.warc.gz"}
|
https://homework.cpm.org/category/CON_FOUND/textbook/a2c/chapter/7/lesson/7.3.1/problem/7-160
|
Home > A2C > Chapter 7 > Lesson 7.3.1 > Problem7-160
7-160.
Consider $\sqrt { 5 - 2 x } + 7 = 4$.
1. Solve the equation and check your solution.
Subtract 7 from both sides and square both sides to remove the square root.
2. Did you really check your solution? If not, do it now. What happened?
$-\sqrt{A}\text{ can equal}-3, \text{ but can}\sqrt{A} = -3?$
|
2022-07-02 06:14:02
|
{"extraction_info": {"found_math": true, "script_math_tex": 2, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.7654504776000977, "perplexity": 1942.7680503026336}, "config": {"markdown_headings": false, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2022-27/segments/1656103984681.57/warc/CC-MAIN-20220702040603-20220702070603-00538.warc.gz"}
|
https://calculator.academy/return-on-cd-calculator/
|
Enter the current value of the CD ($) and the purchase price of CD ($) into the Return on CD Calculator. The calculator will evaluate and display the Return on CD.
## Return on CD Formula
The following formula is used to calculate the Return on CD.
ROCD = (CV – PP) / PP * 100
• Where ROCD is the Return on CD (%)
• CV is the current value of the CD ($) • PP is the purchase price of CD ($)
## How to Calculate Return on CD?
The following example problems outline how to calculate Return on CD.
Example Problem #1:
1. First, determine the current value of the CD ($). • The current value of the CD ($) is given as: 500.
2. Next, determine the purchase price of CD ($). • The purchase price of CD ($) is provided as: 400.
3. Finally, calculate the Return on CD using the equation above:
ROCD = (CV – PP) / PP * 100
The values given above are inserted into the equation below and the solution is calculated:
ROCD = (500 – 400) / 400 * 100 = 25 (%)
Example Problem #2:
For this problem, the variables needed are provided below:
current value of the CD ($) = 600 purchase price of CD ($) = 150
This example problem is a test of your knowledge on the subject. Use the calculator above to check your answer.
ROCD = (CV – PP) / PP * 100 = ?
|
2023-02-03 19:36:57
|
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.37597328424453735, "perplexity": 1941.1230756253196}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2023-06/segments/1674764500074.73/warc/CC-MAIN-20230203185547-20230203215547-00833.warc.gz"}
|
https://learn.saylor.org/mod/page/view.php?id=7519
|
## Boundless: "Finance: Chapter 6, Section 6: Types of Bonds"
### Government Bonds
A government bond is a bond issued by a national government denominated in the country's domestic currency.
#### LEARNING OBJECTIVE
• Analyze the risks and characteristics of government bonds
#### KEY POINTS
• A government bond is a bond issued by a national government, generally promising to pay a certain amount (the face value) on a certain date, as well as periodic interest payments. Such bonds are often denominated in the country's domestic currency.
• In the primary market, Government Bonds are often issued via auctions at Stock Exchanges. In the secondary market, government bonds are traded at Stock Exchanges.
• Although, government bonds are usually referred to as risk-free, there are currency, inflation, and default risks for government bondholders.
#### TERMS
Purchasing power (sometimes retroactively called adjusted for inflation) is the amount of goods or services that can be purchased with a unit of currency.
a theory of long-term equilibrium exchange rates based on relative price levels of two countries
#### FULL TEXT
A government bond is a bond issued by a national government, generally promising to pay a certain amount (the face value) on a certain date as well as periodic interest payments. Such bonds are often denominated in the country's domestic currency. Government bonds are sometimes regarded as risk-free bonds because national governments can raise taxes or reduce spending up to a certain point. In many cases, they "print more money" to redeem the bond at maturity. Most developed country governments are prohibited by law from printing money directly, that function having been relegated to their central banks. However, central banks may buy government bonds in order to finance government spending, thereby monetizing the debt .
Government Bond
The short-term bond of Kolchak government in 1919 with a face value of 500 rubles.
Bonds issued by national governments in foreign currencies are normally referred to as sovereign bonds. Investors in sovereign bonds denominated in foreign currency have the additional risk that the issuer may be unable to obtain foreign currency to redeem the bonds. For example, in the 2010 Greek debt crisis the debt was held by Greece in Euros. One proposed solution was for Greece to go back to issuing its own Drachma.
In the primary market, Government Bonds are often issued via auctions at Stock Exchanges. There are several different methods of issuing such as auctions, including guarantee, combined auction and guarantee, and others. There are two types of interest rates: fixed and floating. In the secondary market, government bonds are traded at Stock Exchanges. Unlikely equity system, the bond secondary market uses a completely different system with different method of trading. At the secondary market, each bond will be assigned with very own bond code (ISIN code).
Government bonds are usually referred to as risk-free bonds because the government can raise taxes or create additional currency in order to redeem the bond at maturity. Some counter examples do exist where a government has defaulted on its domestic currency debt, such as Russia in 1998 (the "ruble crisis"), although this is very rare (see national bankruptcy). Another example is Greece in 2011. Its bonds were considered very risky, in part because Greece did not have its own currency.
There is currency risk for government bondholders. As an example, in the U.S., Treasury securities are denominated in U.S. dollars. In this instance, the term "risk-free" means free of credit risk. However, other risks still exist, such as currency risk for foreign investors (for example non-U.S. investors of U.S. Treasury securities would have received lower returns in 2004 because the value of the U.S. dollar declined against most other currencies). Secondly, there is inflation risk, in that the principal repaid at maturity will have less purchasing power than anticipated if the inflation rate is higher than expected. Many governments issue inflation-indexed bonds, which protect investors against inflation risk by increasing the interest rate given to the investor as the inflation rate of the economy increases.
Zero-Coupon Bonds
A zero-coupon bond is a bond with no coupon payments, bought at a price lower than its face value, with the face value repaid at the time of maturity.
#### LEARNING OBJECTIVE
• Distinguish zero coupon bonds from other types
#### KEY POINTS
• Zero-coupon bonds may be created from fixed rate bonds by a financial institution separating ("stripping off") the coupons from the principal. In other words, the separated coupons and the final principal payment of the bond may be traded separately.
• Zero coupon bonds have a duration equal to the bond's time to maturity, which makes them sensitive to any changes in the interest rates.
• Pension funds and insurance companies like to own long maturity zero-coupon bonds since these bonds' prices are particularly sensitive to changes in the interest rate and, therefore, offset or immunize the interest rate risk of these firms' long-term liabilities.
#### TERMS
• Pension funds
A pension fund is any plan, fund, or scheme which provides retirement income.
• immunize
In finance, interest rate immunization is a strategy that ensures that a change in interest rates will not affect the value of a portfolio. Similarly, immunization can be used to ensure that the value of a pension fund's or a firm's assets will increase or decrease in exactly the opposite amount of their liabilities, thus leaving the value of the pension fund's surplus or firm's equity unchanged, regardless of changes in the interest rate.
FULL TEXT
Zero coupon bonds were first introduced in 1960s, but they did not become popular until the 1980s. A zero-coupon bond (also called a "discount bond" or "deep discount bond") is a bond bought at a price lower than its face value, with the face value repaid at the time of maturity. It does not make periodic interest payments, or have so-called "coupons," hence the term zero-coupon bond. When the bond reaches maturity, its investor receives its par (or face) value. Examples of zero-coupon bonds include U.S.Treasury bills, U.S. savings bonds, and long-term zero-coupon bonds.
SMAC bond
Bond on VMOK with the signature on Boris Saraf.
Zero-coupon bonds may be created from fixed rate bonds by a financial institution separating ("stripping off") the coupons from the principal. In other words, the separated coupons and the final principal payment of the bond may be traded separately. Investment banks or dealers separate coupons from the principal of coupon bonds, which is known as the "residue," so that different investors may receive the principal and each of the coupon payments. This creates a supply of new zero coupon bonds. The coupons and residue are sold separately to investors. Each of these investments then pays a single lump sum. This method of creating zero coupon bonds is known as stripping, and the contracts are known as strip bonds. "STRIPS" stands for Separate Trading of Registered Interest and Principal Securities.
Zero coupon bonds may be long- or short-term investments. Long-term zero coupon maturity dates typically start at 10 to 15 years. The bonds can be held until maturity or sold on secondary bond markets. Short-term zero coupon bonds generally have maturities of less than one year and are called bills. The U.S. Treasury bill market is the most active and liquid debt market in the world.
Zero coupon bonds have a duration equal to the bond's time to maturity, which makes them sensitive to any changes in the interest rates. The impact of interest rate fluctuations on strip bonds is higher than for a coupon bond.
Pension funds and insurance companies like to own long maturity zero-coupon bonds because of the bonds' high duration. This high duration means that these bonds' prices are particularly sensitive to changes in the interest rate and, therefore, offset or immunize the interest rate risk of these firms' long-term liabilities.
Floating-Rate Bonds
Floating rate bonds are bonds that have a variable coupon equal to a money market reference rate (e.g., LIBOR), plus a quoted spread.
#### LEARNING OBJECTIVE
• Describe a floating-rate bond
#### KEY POINTS
• FRBs are typically quoted as a spread over the reference rate. At the beginning of each coupon period, the coupon is calculated by taking the fixing of the reference rate for that day and adding the spread. A typical coupon would look like three months USD LIBOR +0.20%.
• FRBs carry little interest rate risk. A FRB has a duration close to zero, and its price shows very low sensitivity to changes in market rates. As FRBs are almost immune to interest rate risk. The risk that remains is a credit risk.
• Securities dealers make markets in FRBs. They are traded over the counter, instead of on a stock exchange. In Europe, most FRBs are liquid, as the biggest investors are banks. In the United States, FRBs are mostly held to maturity, so the markets aren't as liquid.
#### TERMS
• duration
A measure of the sensitivity of the price of a financial asset to changes in interest rates, computed for a simple bond as a weighted average of the maturities of the interest and principal payments associated with it
• floating-rate bond
a debt instruments with a variable coupon
• LIBOR
The London Interbank Offered Rate is the average interest rate estimated by leading banks in London that they would be charged if borrowing from other banks.
#### FULL TEXT
Floating rate bonds (FRBs) are bonds that have a variable coupon, equal to a money market reference rate, like LIBOR or federal funds rate, plus a quoted spread (i.e., quoted margin). The spread is a rate that remains constant. Almost all FRBs have quarterly coupons (i.e., they pay out interest every three months), though counter examples do exist. At the beginning of each coupon period, the coupon is calculated by taking the fixing of the reference rate for that day and adding the spread. A typical coupon would look like three months USD LIBOR +0.20%.
In the United States, government sponsored enterprises (GSEs), such as the Federal Home Loan Banks, the Federal National Mortgage Association (Fannie Mae), and the Federal Home Loan Mortgage Corporation (Freddie Mac), are important issuers. In Europe, the main issuers are banks.
Municipal bond
Municipal bond issued in 1929 by city of Kraków (Poland).
There are many variations of floating-rate bonds. For instance, some FRBs have special features, such as maximum or minimum coupons, called "capped FRBs" and "floored FRBs. " Those with both minimum and maximum coupons are called collared FRBs. Perpetual FRBs are another form of FRBs that are also called irredeemable or unrated FRBs and are akin to a form of capital. FRBs can also be obtained synthetically by the combination of a fixed rate bond and an interest rate swap. This combination is known as an "asset swap. "
FRBs carry little interest rate risk. A FRB has a duration close to zero, and its price shows very low sensitivity to changes in market rates. When market rates rise, the expected coupons of the FRB increase in line with the increase in forward rates, which means its price remains constant. Thus, FRBs differ from fixed rate bonds, whose prices decline when market rates rise. As FRBs are almost immune to interest rate risk, they are considered conservative investments for investors who believe market rates will increase. The risk that remains is credit risk.
Securities dealers make markets in FRBs. They are traded over the counter, instead of on a stock exchange. In Europe, most FRBs are liquid, as the biggest investors are banks. In the United States, FRBs are mostly held to maturity, so the markets aren't as liquid. In the wholesale markets, FRBs are typically quoted as a spread over the reference rate.
Other Types of Bonds
Other bonds include register vs. bearer bonds, convertible bonds, exchangeable bonds, asset-backed securities, and foreign currency bonds.
#### LEARNING OBJECTIVE
• Classify the different types of bonds
#### KEY POINTS
• Bonds directly linked to interest rates include fixed rate bonds, floating rate bonds, and zero coupon bonds.
• Convertible bonds are bonds that let a bondholder exchange a bond to a number of shares of the issuer's common stock. Exchangeable bonds allows for exchange to shares of a corporation other than the issuer.
• Asset-backed securities are bonds whose interest and principal payments are backed by underlying cash flows from other assets.
• Subordinated bonds are those that have a lower priority than other bonds of the issuer in case of liquidation.
• Foreign currency bonds are issued by companies, banks, governments, and other sovereign entities in foreign currencies, as it may appear to be more stable and predictable than their domestic currency.
#### TERMS
• LIBOR
The London Interbank Offered Rate is the average interest rate estimated by leading banks in London that they would be charged if borrowing from other banks.
• gross domestic product
A measure of the economic production of a particular territory in financial capital terms over a specific time period.
• tranches
One of a number of related securities offered as part of the same transaction.
## General Categorization
Based on coupon interest rates, bonds can be classified into
• Fixed rate bonds
• Floating rate bonds
• Zero-coupon bonds
Fixed rate bonds have a coupon that remains constant throughout the life of the bond. A variation is a stepped-coupon bonds, whose coupon increases during the life of the bond.
Floating rate notes (FRNs, floaters) have a variable coupon that is linked to a reference rate of interest, such as LIBOR or Euribor. For example the coupon may be defined as three month USD LIBOR + 0.20%. The coupon rate is recalculated periodically, typically every one or three months.
Zero-coupon bonds pay no regular interest. They are issued at a substantial discount topar value, so that the interest is effectively rolled up to maturity (and usually taxed as such). The bondholder receives the full principal amount on the redemption date. Zero-coupon bonds may be created from fixed rate bonds by a financial institution separating ("stripping off") the coupons from the principal. In other words, the separated coupons and the final principal payment of the bond may be traded separately .
There are additional special classes of bonds, including:
Inflation linked bonds (linkers) are those in which the principal amount and the interest payments are indexed to inflation. It is one type of floating rate bond. The interest rate is normally lower than for fixed rate bonds, with a comparable maturity. However, as the principal amount grows, the payments increase with inflation. Treasury Inflation-Protected Securities (TIPS) and I-bonds are examples of inflation linked bonds issued by the U.S. government. There are also other indexed bonds. For example equity-linked notes and bonds indexed on a business indicator (income, added value) or on a country's gross domestic product (GDP).
Convertible bonds are bonds that let a bondholder exchange a bond for a number of shares of the issuer's common stock. Exchangeable bonds allows for exchange to shares of a corporation other than the issuer.
Asset-backed securities are bonds whose interest and principal payments are backed by underlying cash flows from other assets. Examples of asset-backed securities are mortgage-backed securities (MBS's), collateralized mortgage obligations (CMOs), and collateralized debt obligations (CDOs).
Subordinated bonds are those that have a lower priority than other bonds of the issuer in case of liquidation. In case of bankruptcy, there is a hierarchy of creditors. First the liquidator is paid, then government taxes, etc. The first bond holders in line to be paid are those holding what is called senior bonds. After they have been paid, the subordinated bond holders are paid. As a result, the risk is higher. Therefore, subordinated bonds usually have a lower credit rating than senior bonds. The main examples of subordinated bonds can be found in bonds issued by banks and asset-backed securities. The latter are often issued in tranches. The senior tranches get paid back first, the subordinated tranches later.
Perpetual bonds are also often called perpetuities or "perps. " They have no maturity date. The most famous of these are the UK Consols, which are also known as Treasury Annuities or Undated Treasuries.
A registered bond is a bond whose ownership (and any subsequent purchaser) is recorded by the issuer or by a transfer agent. It is the alternative to a bearer bond. Interest payments, and the principal upon maturity, are sent to the registered owner. On the contrary, a bearer bond is an official certificate issued without a named holder. In other words, the person who has the paper certificate can claim the value of the bond. Often they are registered by a number to prevent counterfeiting, but may be traded like cash. Bearer bonds are very risky because they can be lost or stolen. Especially after federal income tax began in the United States, bearer bonds were seen as an opportunity to conceal income or assets.
A serial bond is a bond that matures in installments over a period of time. In effect, a $100,000, 5-year serial bond would mature in a$20,000 annuity over a 5-year interval.
Some companies, banks, governments, and other sovereign entities may decide to issue bonds in foreign currencies because it may appear to be more stable and predictable than their domestic currency. Issuing bonds denominated in foreign currencies also gives issuers the ability to access investment capital available in foreign markets. Some examples include:
• Eurodollar bond - U.S. dollar-denominated bond issued by a non-U.S. (European) entity.
• U.S. Yankee bond - a US dollar-denominated bond issued by a non-U.S. entity in the U.S. market.
• Samurai bond - a Japanese yen-denominated bond issued by a non-Japanese entity in the Japanese market.
• Bulldog bond - a pound-sterling-denominated bond issued in England by a foreign institution or government.
• Kimchi bond - a Korean won-denominated bond issued by a non-Korean entity in the Korean market.
|
2019-10-18 20:04:31
|
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.2987065017223358, "perplexity": 4095.981257488952}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2019-43/segments/1570986684425.36/warc/CC-MAIN-20191018181458-20191018204958-00384.warc.gz"}
|
https://www.nature.com/articles/s41467-019-13915-7?error=cookies_not_supported&code=31ec93f1-a741-4b6b-b85c-98dec274fb80
|
# Extreme intratumour heterogeneity and driver evolution in mismatch repair deficient gastro-oesophageal cancer
## Abstract
Mismatch repair deficient (dMMR) gastro-oesophageal adenocarcinomas (GOAs) show better outcomes than their MMR-proficient counterparts and high immunotherapy sensitivity. The hypermutator-phenotype of dMMR tumours theoretically enables high evolvability but their evolution has not been investigated. Here we apply multi-region exome sequencing (MSeq) to four treatment-naive dMMR GOAs. This reveals extreme intratumour heterogeneity (ITH), exceeding ITH in other cancer types >20-fold, but also long phylogenetic trunks which may explain the exquisite immunotherapy sensitivity of dMMR tumours. Subclonal driver mutations are common and parallel evolution occurs in RAS, PIK3CA, SWI/SNF-complex genes and in immune evasion regulators. MSeq data and evolution analysis of single region-data from 64 MSI GOAs show that chromosome 8 gains are early genetic events and that the hypermutator-phenotype remains active during progression. MSeq may be necessary for biomarker development in these heterogeneous cancers. Comparison with other MSeq-analysed tumour types reveals mutation rates and their timing to determine phylogenetic tree morphologies.
## Introduction
Gastro-oesophageal adenocarcinomas (GOAs) are one of the commonest causes of cancer mortality worldwide1. Microsatellite instable (MSI) and DNA mismatch repair deficient (dMMR) cancers are a distinct subtype of GOAs with a prevalence of up to ~20% in the stomach and gastro-oesophageal junction2,3,4. dMMR results from genetic inactivation of MLH1, MSH2, MSH6, PMS2 or methylation of MLH1. These tumours are characterized by a hypermutator-phenotype leading to high mutation loads and a large fraction of small insertions and deletions (indels), predominantly in homopolymer and dinucleotide repeats. dMMR GOAs have distinct clinical characteristics compared to their MMR-proficient counterparts, including lower stage in the UICC TNM classification of malignant tumours at presentation and better survival3. This has been attributed to a large number of mutation-encoded neoantigens, which enable recognition by the adaptive immune system. Consistent with the notion of high immunogenicity, dMMR cancers are among the tumour types most sensitive to checkpoint-inhibiting immunotherapy (85.7% response rate in small series)5,6. However, not all tumours respond to immunotherapy and some acquire resistance after initial benefit. Chemotherapy and anti-angiogenic drugs are the only other systemic treatment options for dMMR GOAs and the identification of novel therapeutics is important to improve outcomes.
Genetic intratumour heterogeneity (ITH) and ongoing cancer evolution have been demonstrated in multiple cancer types7. The ability to evolve is thought to foster cancer progression, drug resistance and poor outcomes8. High mutation rates may fuel evolvability by generating an abundance of novel phenotypes which selection can act upon9. A pan-cancer study indeed demonstrated large numbers of subclonal mutations within single tumour regions of MSI cancers10. However, it has not been investigated in dMMR GOAs whether the MSI hypermutator-phenotype remains active during progression, how this impacts ITH and phylogenetic trees, and whether subclonal driver mutations evolve. Our previous work in kidney cancer for example showed that most driver mutations are located in subclones11. Subclonal driver mutations are poor therapeutic targets as co-existing wild-type subclones remain untargeted12. They furthermore hinder effective biomarker development as the analysis of single tumour regions incompletely profiles the genomic landscape of the entire tumour. Large-scale sequencing analyses of MSI GOAs identified TP53, RNF43, ARID1A, PIK3CA, KRAS and PTEN, as the most frequently altered driver genes13. Mutations in antigen presentation (MHC, B2M)2 and interferon signalling pathway (JAK1/2)14,15 genes also frequently occur in MSI tumours and they have been suggested to enable immune evasion2. However whether they are truncal or subclonal within individual tumours is unknown.
Multi-region exome sequencing (MSeq) reconstructs cancer evolution by comparing mutational profiles from spatially separated tumour regions. MSeq found that mutations often appear to be present in all cancer cells (i.e. clonal) in a single tumour region even if they are absent from other regions of the same tumour11,16. Spatial constraints in solid tumours that preclude intermixing of evolving subclones likely explains this ‘illusion of clonality’ phenomenon when heterogeneity is only investigated in a single sample per tumour17,18. We apply MSeq to four surgically resected GOAs showing dMMR on immunohistochemistry and combine this with subclonality analysis of single tumour biopsies from 64 MSI GOAs sequenced by The Cancer Genome Atlas (TCGA)2 to assess ITH and the evolution of these tumours.
## Results
### Samples
Seven primary tumour regions from each of four GOAs (Fig. 1a) were subjected to MSeq with a target depth >200× (Supplementary Data 1). Two lymph node metastases were included from each of two cases. TNM-stage was assessed but no other clinical information was available as the samples had to be anonymised to comply with local ethics and research legislation. Absence of MLH1 and PMS2 staining and positive staining for MSH2 and MSH6 (Fig. 1b), indicated MLH1 deficiency. No known Lynch syndrome mutations in MLH1, MSH2/6 or PMS2 were identified in DNA from non-malignant tissue, confirming that these were sporadic dMMR tumours.
### Mutational intratumour heterogeneity
About 1518–4148 (median: 1814) non-silent mutations were identified per case (Fig. 1c). The high mutation burden and the large fraction of indels (20–34%) were consistent with an MSI-phenotype2. The number of ubiquitous non-silent mutations that were detected across all sequenced regions per tumour ranged from 329 to 1006 (median: 702). This exceeded the number of ubiquitous non-silent mutations reported for clear cell renal cell carcinomas (ccRCC, median: 28)11, and even for lung cancers (median: 137)16 and melanomas (median: 436)19, which are among the most highly mutated cancer types20 (Fig. 1d). The difference was significant between dMMR GOA and lung and ccRCC but not for melanomas. MSeq-identified ubiquitous mutations are likely to define the mutations that were present in the founding cell of each tumour before diversification into subclones occurred11. These high numbers hence reveal that the dMMR-phenotype was likely acquired in the precancerous cell lineage considerably earlier than malignant transformation of the founding cell. Malignant transformation shortly after dMMR acquisition which was then followed by selective sweeps is an alternative explanation. Yet, it appears unlikely that this would have left no trace of the early subclones in any tumour.
A median of 1194 mutations were only detectable in some but not in all analysed tumour regions per case and hence heterogeneous. This significantly exceeded the heterogeneous mutation burden detected by MSeq in ccRCC11 by 24-fold, in lung cancer16 by 40-fold, and in melanoma19 by 32-fold (Fig. 1d). Importantly, the median mutation load per region in these MSeq series was similar to those reported by the TCGA for the respective cancer type (Fig. 1e), suggesting that these small series are reasonably representative of each tumour type. Thus, dMMR tumours are characterized by extreme ITH compared to other cancer types.
High mutation and neoantigen loads are associated with immunotherapy benefit. Recent data suggested more specifically that a high burden of clonal mutations/neoantigens is important for immunotherapy success21,22. Applying the NetMHC algorithm predicted 1120–3052 strong class I MHC binding neoantigens per tumour (Supplementary Fig. 1). Between 215 and 926 of these were clonal. This is higher than clonal neoantigen loads reported for most lung cancers or melanomas23. It is conceivable that this high clonal neoantigen burden explains the immunotherapy sensitivity of dMMR tumours21.
### Mutational signatures reveal processes driving evolution
We next investigated mutational signatures by counting the number of all possible base substitution in their trinucleotide contexts (Supplementary Fig. 2) and assigning these to 30 mutational signatures20 (Fig. 1f). The COSMIC mutational signatures 6 and 15 are characteristic for MSI cancers and these were abundant among ubiquitous and heterogeneous mutations. Signature 1 mutations reflect the spontaneous deamination of methylated cytosine, a mutational process active in most normal tissues. Signature 1 was detected in 17–52% (219–449 mutations in absolute number) of ubiquitous mutations. A fraction of these were likely acquired in the normal cells over the lifetime of these patients. However, based on the estimated mutation rate in normal gastro-oesophageal epithelium, only 0.5–1 signature 1 mutations would be expected to accumulate per lifeyear24,25,26. It is hence likely that the dMMR-phenotype also contributes to the generation of signature 1 mutations. This is further supported by 9–10% of the subclonal mutations in Tumours 1–3 and 36% in Tumour 4 showing signature 1 and consistent with a recently suggested role of the MMR-system in the repair of deamination defects27. A total of 10.5% of the ubiquitous mutations in Tumour 3 showed signature 14, which has been described in dMMR cancers that are also POLE or POLD1 mutant28. Tumour 3 harboured a POLD1 mutation but this was subclonal and could not explain the presence of clonal signature 14 mutations. The absence of signature 14 from subclonal mutations furthermore suggested that this is a passenger mutation. No other mutational signatures contributed substantially to the heterogeneous mutations, confirming that the MSI-phenotype remains active during cancer progression and is the primary mechanism generating these large numbers of subclonal mutations.
### The evolution of copy number aberrations
DNA copy number aberration (CNA) profiles revealed near-diploid profiles across all regions of Tumours 2 and 3 (Fig. 2a and Supplementary Fig. 3). Tumour 4 showed highly aberrant near-tetraploid profiles in all regions. A high number of mutations were present on all copies of the major allele of most gained chromosomes (Fig. 2b), indicating that whole genome duplication and chromosomal instability (CIN) had occurred late on the trunk of the phylogenetic tree in Tumour 4. CIN was confirmed by the weighted genome integrity index (wGII) that measures the proportion of all chromsomes with copy number states that differ from the ploidy of a sample and where values above 0.2 support the presence of CIN29 (Fig. 2a). Near-diploid and near-triploid CNA profiles were found in distinct regions of Tumour 1. Together with an increase in wGII from ~0.2 in the near-diploid regions to >0.5 in near-triploid regions and the occurrence of new CNAs in individual tumour regions, this revealed the acquisition of subclonal CIN during cancer progression. All four lymph node metastases were near-diploid with wGII values ≤0.2, demonstrating that CIN, which has been associated with tumour aggressiveness in several cancer types including GOA28, is not required for metastasis formation.
We next investigated which specific CNAs were ubiquitous/clonal and had hence occurred early in the evolution of these dMMR tumours (Fig. 2c and Supplementary Fig. 3). Ubiquitous Chr17p, Chr18 and Chr22 loss of heterozygosity (LOH) were each present in two tumours. Ubiquitous LOH of Chr3p, Chr5q and Chr17p encompassed tumour suppressor genes, which are recurrently mutated in dMMR GOAs2 (MLH1, APC and TP53). Among the small number of ubiquitous gains, only Chr8q and Chr20q were gained in more than one tumour. To further time the acquisition of these recurrent truncal CNAs, we mapped ubiquitous mutations onto the allele-specific CNA profiles. Copy number gains that occurred early can be identified if the majority of mutations in that region have a mutation copy number23 which is lower than that of the gained allele. The Chr8 gain in Tumour 2 and the Chr8q gain in Tumour 4 (Fig. 2d), but not Chr20 gains (Fig. 2e), showed a near complete absence of mutations on all copies of the gained allele and were hence acquired on the phylogenetic trunk before or soon after the MSI-phenotype. Thus, Chr8q gains, which are the commonest CNAs in MSI GOAs2, can be among the earliest genetic aberrations in these tumours.
### Reconstruction of tumour phylogenies
We next deconvoluted the subclonal composition of individual regions and reconstructed the phylogenetic tree for each tumour (Fig. 3). Similar to MSeq analyses of other tumour types11,16,19, this revealed branched evolution. Comparison of the phylogenetic trees with the mutation heatmaps showed some phylogenetic conflicts. Inspection of the CNA status of the mutated DNA positions showed that most conflicts could be explained by losses of chromosome copies in individual regions (marked in green in Fig. 1c and Supplementary Fig. 4). Thus, subclones can lose a small proportion of mutations during cancer evolution.
Phylogenetically closely related clones were usually located in close physical proximity (Supplementary Fig. 5), indicating that cell motility is limited and that these tumours evolve in a spatially ordered fashion. Importantly, each of the two lymph node metastases analysed in Tumours 2 and 3 had evolved from distinct subclones rather than being seeded by the same subclone or sequentially from one node to the other (Fig. 3). Dissemination hence propagated subclonal diversity from the primary tumour to metastatic sites. In addition, subclonal mutations, defined as private mutations estimated to be present in ≤70% of the cancer cells of a sample, were detectable within three metastatic sites with good cancer cell content (Supplementary Table 1). Subclonal mutations within lymph nodes were again predominated by the MSI-specific mutational signatures 6 and 15 (Supplementary Table 2). Thus, the dMMR-phenotype continues to generate ITH in metastases.
### Identification of truncal drivers
We next assessed the evolution of putative driver mutations and of corresponding LOH of tumour suppressor genes and mapped them onto the phylogenetic trees (Fig. 3 and Supplementary Data 2). A frameshift mutation and LOH of MLH1 occurred on the trunk of Tumour 1, consistent with biallelic MLH1 loss. No genetic aberrations of MLH1 were detectable in Tumours 2–4 but qPCR confirmed hypermethylation of the MLH1 promoter as the cause for dMMR in these cases (Supplementary Fig. 6)30. Tumours 2–4 furthermore harboured a truncal frameshift mutation in MSH6. Mutations in the histone methyltransferase and tumour suppressor gene PRDM2, one in combination with LOH of the second allele were also truncal in all four cases and truncal frameshift mutations of the TGFβ signalling regulator ACVR2A were detected in three cancers. Both genes have been suggested as likely drivers in MSI GOAs13.
One tumour showed a disrupting mutation and LOH of ARID1B and two tumours each harboured two truncal mutations in ARID1A, which are all members of the SWI/SNF-chromatin-modifying complex. We could not formally demonstrate that the two mutations affected both alleles of the ARID1A tumour suppressor gene but biallelic inactivation is likely as all mutations were disrupting in nature, suggesting evolutionary selection for inactivating events. A frameshift mutation and LOH of PBRM1, a further SWI/SNF-complex member, co-occurred with biallelic ARID1B loss on the trunk of Tumour 1. This emphasizes an important role for SWI/SNF-complex aberrations in dMMR GOA development.
Truncal mutations in TP53 were found in three tumours. Tumours 1 and 4 also showed LOH, leading to biallelic TP53 inactivation. These specific cancers had undergone genome duplication and acquired CIN, consistent with a permissive role of TP53 loss for CIN31. Moreover, both showed truncal Chr18q loss which promotes CIN in colorectal cancer32. TP53 inactivation and Chr18q loss may hence predispose tumours to subsequently evolve CIN. Frameshift mutations of RNF43, a negative regulator of the APC/β-catenin-pathway that frequently acquires heterozygous mutations in MSI tumours33, were present in three tumours. The tumour without an RNF43 mutation harboured two truncal mutations in the APC tumour suppressor gene as an alternative mechanism of β-catenin activation. Together, aberrations in TP53, the SWI/SNF-complex, PRDM2, dMMR-, APC/β-catenin signalling- and TGFβ signalling-genes each occurred on the phylogenetic trunks of at least two cases.
### Parallel evolution
Assessing heterogeneous driver mutations revealed striking examples of parallel evolution, a strong signal that these evolved through Darwinian selection:7,17,34,35 Tumour 2 acquired five subclonal mutations in SMARCA4, encoding a catalytic subunit of the SWI/SNF-complex. These had occurred in addition to two truncal mutations (M274fs, K1071fs) in ARID1A. A third ARID1A mutation was subclonal and affected recurrently mutated amino acids (AA163-164del) located proximally to the truncal frameshift mutations. This may be functionally relevant if ARID1A had retained some residual activity despite the more distal mutations. Parallel evolution of five subclonal SMARCA4 mutations in this tumour with truncal ARID1A mutations suggests that SWI/SNF-complex aberrations are not only important for carcinogenesis but that progressive inactivation may contribute to cancer progression.
A PIK3CA hotspot mutation (H1047R) was detected in P1 and Y1 but also in the distantly related subclone AL in Tumour 2. Copy number changes that could explain a loss of this mutation in subclones with wild-type PIK3CA were absent (Supplementary Fig. 3). The most parsimonious explanation for this phylogenetic conflict is that the same mutation independently evolved twice, once in AL and once in the ancestor cell of P1 and Y1. Intuitively this may appear unlikely, but a tumour of this diameter contains >10 × 109 cancer cells9 that have undergone approximately the same number of cell divisions to grow to this size from the founding cell. It is conceivable that two cells independently acquire the same mutation in some tumours of this size. With one further PIK3CA hotspot mutation in region E (Y1021C), this identified three PIK3CA parallel evolution events in Tumour 2.
Mutations in the SWI/SNF-complex members SMARCA4 and ARID1A were present on the trunk of Tumour 3. Additional SWI/SNF mutations, one in ARID2 and one in SMARCA4, evolved in subclones, the latter potentially complementing monoallelic SMARCA4 loss on the trunk to biallelic inactivation in the subclone. Further parallel evolution was apparent in Tumour 3 based on the acquisition of KRAS (G13D) and NRAS (G12C) oncogenic mutations in distinct subclones. Two hotspot PIK3CA mutations (E418K, Y1021H) sequentially occurred in one clade of Tumour 3.
The tumour suppressor gene PRDM2 harboured frameshift mutations on the trunks of Tumours 2 and 3 and a second frameshift mutation was acquired in subclones of each tumour, potentially leading to biallelic inactivation. Subclonal inactivating mutations of the cell cycle regulator and DNA damage repair genes CHEK2, ATR and BLM occurred in Tumour 3. Together with truncal LOH of CHEK2, both alleles of this gene were inactivated. Heterozygous BLM and ATR mutations may be functionally relevant as both genes show haploinsufficiency36,37.
Given the high burden of mutations caused by dMMR, it is possible that several mutations which we classified as likely drivers are passengers without significant fitness effects. However, parallel evolution and the strong functional evidence for driver status of the identified KRAS, NRAS and PIK3CA mutations and of inactivating mutations in SWI/SNF-complex members in cancer38 support the functional relevance of these specific aberrations.
### The evolution of immune evasion drivers
Tumour 2 harboured a truncal JAK2 frameshift mutation. In addition, a subclonal JAK2 splice-site mutation evolved in one clade and a frameshift mutation in region AE. Another subclone had acquired a JAK1 frameshift mutation but no evidence for biallelic inactivation was found. A subclonal frameshift mutation was present in HLA-A*02:01 (Supplementary Data 3). Assessing the neoantigens binding to this HLA allotype revealed that this could lead to a 12% reduction in the number of neoantigens presented by these subclones (Supplementary Fig. 7). One clade in Tumour 2 furthermore acquired two disrupting mutations in B2M. Inspecting short read sequencing data confirmed that these were not located on the same allele but conferred biallelic inactivation which abrogates MHC Class I antigen presentation (Supplementary Fig. 8).
LOH of B2M was present on the trunk in Tumour 3 and a B2M frameshift mutation was acquired in a subclone, also establishing biallelic B2M loss. Although several primary tumour regions in Tumours 2 and 3 showed biallelic B2M inactivation this was not propagated to any of the four lymph node metastases (Fig. 3). The lymph node metastasis AE in Tumour 3 acquired a missense mutation in HLA-B*40:02 (Supplementary Data 3) with unknown functional impact. If this HLA-B*40:02 mutation compromised antigen presentation, 12% of neoantigens could no longer be presented. In contrast to lung cancers which are frequently chromosomally unstable and acquire subclonal LOH of HLA genes as immune evasion mechanisms39, no such LOH events were identified (Supplementary Data 3).
To investigate why immune evasion drivers only evolved in 2/4 tumours, we assessed cytotoxic CD8+ T-cell infiltrates by immunostaining. The two tumours with evidence of immune evasion events, which also had the highest truncal and subclonal mutation burdens, showed higher T-cell infiltrates than the other two cases (Fig. 4). dMMR GOAs with high immunogenicity and T-cell infiltrates may hence be particularly prone to subclonal immunoediting.
### Darwinian selection over time
The ratio of non-synonymous mutations to synonymous mutations (dN/dS-ratio) has been used to estimate positive and negative selection in cancer40. dMMR tumours have high clonal but also subclonal mutation burdens and we reasoned that this may enable applying these ratios to evaluate how selection changes from truncal mutations to subclones. dN/dS ratios were close to 1 for the truncal mutations of all cases (0.95–1.06), indicating that the majority of mutations are neither under positive nor under negative selection. However, the dN/dS ratios increased to 1.16 in Tumour 1 and 1.31 in Tumour 2 for private mutations, indicating positive selection (Fig. 5 and Supplementary Table 3). Together with the identification of parallel evolution in Tumours 2 and 3, this suggests that these tumours are under selection pressure and adaptive mutations continue to evolve. The dN/dS <1 in the shared mutations of Tumour 4 may be a sign of negative selection during early evolution. Our results show that MSeq allows to dissect the temporal dynamics of selection in dMMR tumours and this can be used to reveal what genetic alterations are selected for or against in larger series.
### Multi-region vs. single-region heterogeneity analysis
Our next aim was to gain further insights into the evolution of dMMR GOAs by deconvolution of clonal and subclonal mutations in single samples from the TCGA GOA dataset2.
We first used our MSeq dataset to assess which information can be robustly generated by single sample deconvolution and which ones are more likely to be gained by MSeq. The total mutation load in a single sample exceeded the MSeq-determined ubiquitous/truncal mutation load by an average of 73% across the four tumours (Fig. 6a). Following bioinformatic deconvolution of regional mutations into clonal and subclonal, the average clonal mutation burden determined in single samples still exceeded the number of mutations identified as ubiquitous by MSeq by 34%. Moreover, the number of mutations identified as clonal in a single region varied highly between samples from the same tumour. This could not be attributed to different cancer cell contents as no correlation was observed (Supplementary Fig. 9).
We furthermore assessed whether the parallel evolution mutations, that have a high probability of being actual drivers and were found to be subclonal by MSeq analysis, could also have been accurately identified as subclonal by single-region analysis. Only 40% of B2M mutations that were subclonal based on MSeq were accurately identified as subclonal in individual regions whereas 60% appeared clonal (Fig. 6b, c). This illusion of clonality in single sample analysis also affected 40% of JAK2 mutations, 76.2% of SMARCA4 mutations, 66.7% of RAS mutations and 35.7% of PIK3CA mutations. Overall, 59.0% of these likely driver mutations appeared clonal in single-region analysis despite clear subclonal status based on MSeq. This supports the conclusion from MSeq studies in other tumour types, that single-region analysis overestimates the clonal dominance of driver mutations11,16.
We next analysed 64 MSI GOAs cancers from TCGA. All samples harboured subclonal mutations but only a median of 21.3% of mutations were subclonal (Fig. 6d) compared to a median of 60.1% in MSeq data. We then assessed the clonality of mutations in driver genes which we had found to be either predominantly clonal or subclonal by MSeq. The highest frequency of subclonal mutations was found in ARID2 and SMARCA4 whereas ACVR2A was almost always clonal in TCGA data (Fig. 6e), consistent with MSeq data where these occurred late and early, respectively. Mutations in the remaining driver genes were predominantly clonal in TCGA data, but in light of our MSeq data this is likely limited by the overestimation of clonal status in single-region analysis.
Only 2/64 TCGA cases showed parallel evolution of two subclonal SMARC4 mutations, each, and two subclonal PIK3CA mutations evolved in one case. No parallel evolution of driver mutations in RAS or immune evasion regulators was identified. Together with the detection of parallel evolution in spatially distinct tumour regions by MSeq, this illustrates the limitation to identify such events by single sample analysis. Two independent disrupting mutations in ARID1A were found to be clonal in each of 16/64 tumours (25%) and only four tumours had one clonal and one subclonal inactivating event. This confirms frequent biallelic inactivation.
Clonal and subclonal mutations in TCGA samples were dominated by the MSI-specific mutational signatures 6 and 15 (Fig. 6f, g), confirming our MSeq results. A total of 44.0% of clonal mutations displayed signature 1 and although this significantly decreased among subclonal mutations, it remained the second most abundant mutation signature. Together with a significant increase in signature 15 among subclonal mutations, this supports the change in mutational processes between early progression and subclonal diversification as seen in the MSeq dataset. Timing of copy number changes in the TCGA dataset supported that chromosome 8 gains had been acquired before or early after the MSI-phenotype in ~60% of cases (Fig. 6h and Supplementary Fig. 10).
### Mutational mechansism and their timing influence phylogenies
To investigate how mutational processes and their timing influence phylogenetic tree morphologies, we represented dMMR GOAs, melanomas19, lung16 and renal cancers11 as a single phylogenetic tree with a branching structure similar to those revealed by MSeq and by using the average number of ubiquitous and heterogeneous mutations (Fig. 1d) to scale trunk and branch sizes (Fig. 7). This revealed that dMMR leads to long trunks even exceeding the trunk size of carcinogen-induced cancers (UV light in melanomas, cigarette smoke in lung cancer). Additionally, dMMR tumours showed prominent branches, whilst branch lengths in lung cancer and melanoma were similarly short as in ccRCC11, a consequence of the limited impact of the initiating carcinogens during cancer progression16,19. These associations show that mutation rates and their temporal activity are major factors determining phylogenetic tree shapes and sizes.
## Discussion
With recent success rates of cancer-immunotherapy, understanding the genetic landscapes of immunotherapy-sensitive tumour types and how these influence treatment sensitivity are major needs. dMMR cancers are among the most sensitive solid tumours to checkpoint-inhibiting immunotherapies5,6 but their genetic evolution, clonal mutation burden and ITH remained unknown. Our series of four treatment-naive dMMR GOAs revealed strikingly high clonal mutation burdens. This may explain the exquisite sensitivity of these cancers to immunotherapy as recent data showed that a high clonal mutation burden is a better predictor of immunotherapy success than the total mutation burden21. The presence of mutational ITH has furthermore been suggested to impair effective immunotherapy in lung cancer and other malignancies21,22. Extremely high numbers of heterogeneous mutations were found in all four dMMR GOAs and these significantly exceeded those in other cancer types analysed by MSeq. Although the analysed tumours were not treated with immunotherapy, these results and the overall high response rate of dMMR GOAs suggests that extreme ITH is unlikely to fundamentally preclude immunotherapy efficacy in tumours with abundant clonal mutations. This warrants MSeq analyses of MSI GOAs that were treated with checkpoint-inhibitors in order to assess whether these hypotheses can be validated in the clinic.
Our study also provides first insights into the clonal origin of lymph node metastatic disease in dMMR GOAs. Lymph nodes were seeded by distinct subclones in the primary tumours, propagating some of the heterogeneity from the primary tumour to metastatic sites. Subclonal mutation generation continued in metastases and similar heterogeneity as observed in primary tumours should therefore be expected in more advanced metastatic disease.
The mutation load of individual tumour regions exceeded the number of truncal mutations by 73%, and still by 34% following subclonal deconvolution. Studies investigating mutation burden as immunotherapy biomarkers may hence benefit from MSeq to robustly and accurately estimate truncal mutation loads. Subclonal immune evasion drivers were identified in two of four cases. Mutations in the JAK1/2 and inactivation of B2M can confer resistance to checkpoint-inhibiting immunotherapy15,41,42. Although in MSI colorectal cancer it has been shown that most patients with B2M inactivation benefitted from immunotherapy43, our data suggest that B2M loss can be subclonal and is not necessarily propagated to metastases. How subclonal immune evasion drivers and their localization in primary tumours or in metastases impairs immune checkpoint-inhibitor efficacy in dMMR GOAs should be investigated by MSeq in larger, immunotherapy-treated cohorts.
Despite the selection pressure resulting from the high immunogenicity of dMMR tumours, we found no evidence of reversion of the hypermutator-phenotype. Immune evasion mechanisms which can be readily accessed through single mutations, for example in HLA genes, or through biallelic B2M or JAK mutations may more effectively mitigate against this selection pressure than loss of the dMMR-phenotype, which would still leave behind neoantigen-encoding mutations that have already been generated. Despite considerable mutation loads, cytotoxic T-cell infiltrates were low in two tumours and we could not identify immune evasion events that explain this. This warrants further investigation into immune escape mechanisms in dMMR GOAs.
Defining driver mutations which are commonly truncal is critical for precision cancer medicine approaches as targeting of subclonal driver mutations is likely futile12. Several tumour suppressor genes were inactivated by genetic alterations on the trunk in all four tumours. However, loss-of-function of tumour suppressor genes is usually not directly targetable. Two of four dMMR GOAs harboured two inactivating mutations in ARID1A. In addition, 25% of MSI GOAs from the TCGA dataset showed two clonal ARID1A mutations, further suggesting that biallelic disruption is common. However, given the uncertainty of clonality estimates from single region data, the prevalence of biallelic truncal inactivation will need confirmation by MSeq in larger series. ARID1A-deficiency sensitizes cancer cells to small molecule inhibitors of the ATR DNA damage sensor44. Such a potential synthetic lethal interaction should be investigated in dMMR GOAs. Additional subclonal mutations in ARID1A and in other SWI/SNF-complex members evolved during cancer progression, indicating a role of SWI/SNF-complex modulation during carcinogenesis and cancer progression. MSeq and single sample TCGA data analysis also showed that chromosome 8 gains are among the earliest genetic events in ~60% of these tumours. Further studies are necessary to investigate whether this is relevant for the tolerance of the MSI-phenotype or a marker of aggressiveness as described for other cancer types45,46.
Comparing results from MSeq analysis and single-region analysis showed that MSeq more accurately identifies clonal and subclonal mutation loads, drivers that are acquired early vs those that evolve late and particularly parallel evolution events. It can furthermore avoid the illusion of clonality of driver mutations and overcome sampling biases which can lead to the failure to accurately identify subclonal driver mutations, for example in JAK or B2M, that have been suggested to confer therapy resistance15,41,42. MSeq should therefore be considered for biomarker discovery in such highly heterogeneous tumour types. Bulk sequencing of DNA from multiple regions and metastases or ctDNA sequencing, followed by bioinformatic identification of clonal mutations are alternative approaches to address the illusion of clonality. MSeq also revealed how the genetic profile of metastatic disease can differ from primary tumours and within different metastatic sites. It finally allowed to assess how selection changes from truncal to private mutations.
Taken together, the dMMR-phenotype remained active throughout the evolution of primary tumours and in metastatic sites, generating extreme ITH. We furthermore revealed the generation of multiple subclonal driver mutations, including remarkable parallel evolution of multiple functionally similar subclonal drivers and a dN/dS ratio indicating positive selection in three of four tumours. These results confirm a high evolvability of dMMR tumours. High heterogeneity and evolvability are thought to enable cancer aggressiveness and poor outcomes47, yet these data demonstrate a paradoxical association with good prognosis in dMMR tumours. dMMR tumours are unique models to advance insights into cancer evolution rules and into the potential and current limitations of evolutionary metrics for clinical outcome prediction.
## Methods
### Sample collection and preparation
Samples from treatment-naive GOA resection specimens were routinely paraffin embedded and fresh frozen at the University Medical Center Hamburg-Eppendorf (Germany). The research use of specimens left over after the pathological diagnosis is regulated through the ‘Hamburger Krankenhausgesetz’ (Hamburg Hospital Law) in Hamburg, consent and ethical approval are explicitly waived for samples that are fully anonymised. Thus, information about age, sex of the patients and outcome data is not available.
Immunohistochemistry for MLH1, PMS2, MSH2 and MSH6 was performed on 20 cases and four with dMMR (each showing absence of MLH1 and PMS2 staining in cancer cells, see Fig. 1b) were identified by a pathologist. Seven tumour regions representing the spatial extent of each primary tumour were selected (surface area ~8 × 5 mm and a depth of ~10 mm) based on the H&E slide and spatial location within the tumour by a pathologist. Two cases each included two lymph node metastases (Station 1–2, right and left paracardial nodes), which were sufficiently large for analysis. Where necessary, samples were macrodissected to minimize stromal contamination. DNA was extracted using the Qiagen AllPrep kit following the manufacturer’s instructions. Nucleic acid yields were determined by Qubit (Invitrogen), and the quality and integrity of DNA was examined by agarose gel electrophoresis. DNA from tumour adjacent non-malignant tissue was used as a source of normal (‘germline’) DNA. For this, either oesophageal or gastric wall tissue, embedded as “normal mucosa”, was chosen and tumour contamination excluded by a pathologist based on H&E slides taken from levels before and after slides for DNA extraction.
### Multiplex immunohistochemistry
The Opal 7 Tumor Infiltrating Lymphocyte kit (PerkinElmer) was used to perform combined CD8 (antibody dilution 1:300, Opal 570 1:150), pan-Cytokeratin (antibody dilution 1:500, Opal 690 1:150) and DAPI (counter-) stains for each region following the manufacturer’s instructions. In Tumour 2, two regions had not enough tissue left after DNA extraction. Slides were scanned using the Vectra 3.0 pathology imaging system (PerkinElmer)48.
After low-magnification scanning, intratumour regions of interest were scanned at high resolution (20×). Spectral unmixing, tissue and cell segmentation and phenotyping of CD8- and Cytokeratin-positive cells were performed with InForm image analysis software under pathologist supervision. Five representative regions of interests were chosen and cytotoxic T-cells and tumour cells in cancer tissue segmented areas were quantified. From the sum of the five regions, we calculated the ratio of cytotoxic T-cells/tumour cells for each region of Tumours 1–4.
### Whole-exome sequencing
Tumour and matched germline DNA were sequenced by the NGS-Sequencing facility of the Tumour Profiling Unit at the Institute of Cancer Research. Exome-sequencing libraries were prepared from 1 µg DNA using the Agilent SureSelectXT Human All Exon v6 kit according to the manufacturer’s protocol. Paired-end sequencing was performed on the Illumina HiSeq 2500 or NovaSeq 6000 with a minimum target depth of 100× in the adjacent normal samples and a minimum target depth of 200× in tumour regions.
BWA-MEM49 (v0.7.12) was used to align the paired-end reads to the hg19 human reference genome to generate BAM format files. Picard Tools (http://picard.sourceforge.net) (v2.1.0) MarkDuplicates was run with duplicates removed. BAM files were coordinate sorted and indexed with SAMtools50 (v0.1.19). BAM files were quality controlled using GATK51 (v3.5-0) DepthOfCoverage, Picard CollectAlignmentSummaryMetrics (v2.1.0) and fastqc (https://www.bioinformatics.babraham.ac.uk/projects/fastqc/) (v0.11.4).
### Somatic mutation analysis
Single-nucleotide variant (SNV) calls were generated with MuTect52 (v1.1.7) and VarScan253 (v2.4.1) and mutation calls from both callers were combined. MuTect was run with default settings and post-filtered for a minimum variant frequency of 2%. SNVs generated by MuTect and flagged with ‘PASS’, ’alt_allele_in_normal’ or ‘possible_contamination’ were retained. SAMtools (v1.3) mpileup was run with minimum mapping quality 1 and minimum base quality 20. The pileup file was inputted to VarScan2 somatic and run with a minimum variant frequency of 2%. The VarScan2 call loci were converted to BED format and bam-readcount (https://github.com/genome/bam-readcount) (v0.7.4) run on these positions with minimum mapping quality 1. The bam-readcount output allowed the VarScan2 calls to be further filtered using the recommended fpfilter.pl accessory script54 run on default settings. Indel calls were generated using Platypus55 (v.0.8.1) callVariants run on default settings. Calls were filtered based on the following FILTER flags—‘GOF, ‘badReads, ‘hp10,’ MQ’, ‘strandBias’,’ QualDepth’,’ REFCALL’. We then filtered for somatic indels with normal genotype to be homozygous, minimum depth ≥10 in the normal, minimum depth ≥20 in the tumour and ≥5 variant reads in the tumour.
The bam-readcount tool was run on all SNV loci using minimum mapping quality 1 and minimum base quality 5 to generate call QC metrics (e.g. average variant base quality, average variant mapping quality). High-confidence SNVs were identified by filtering minimum average variant mapping quality 55 and minimum average variant base quality 35 in called tumour regions based on the bam-readcount QC metrics. Bam-readcount was then run on the filtered loci using minimum mapping quality 10 and minimum base quality 20 to generate allele counts for the merged VarScan2 and MuTect call loci. All SNV and indel calls were required to have a depth of at least 70 across all tumour regions. SNVs at positions sequenced to less than 20× depth in the matched germline and those which showed a variant frequency in the germline >2% and a variant count >2 were also excluded. Retained mutation calls were then passed through a cross-‘germline’ filter that flags SNV and indel calls which are present with a VAF of > = 2% in one of fourteen normal samples from the same sample collection. A call is rejected if the variant is flagged as present in 20% or more of the normal samples to remove common alignment artefacts or those arising recurrently at genomic positions which are difficult to sequence. Finally, we applied the following two-tiered filtering strategy to generate MSeq mutation calls. A positive call was made if at least one tumour region had a minimum VAF of 5%. This first tier assures that only mutation calls which have a high probability of being real mutations are selected for further analysis. For any of the mutations that were called in this way, we then determined whether it was present or absent in individual tumour regions. The VAFs for a mutation were looked up with BAM-readcount and a region was called positive if the VAF exceeded 2.5%. Similar two-tier VAF thresholding strategies have been employed in prior MSeq studies11,16,34. Private and shared mutations are defined as those that were only detected in a single region or in some but not all tumour regions, respectively, using the minimum VAF of 2.5% as a cutoff. Variant calls on chromosomes X and Y were not considered.
SNV and indel calls were annotated using annovar56 (v20160201) and oncotator57 (v1.8.0.0 and oncotator_v1_ds_Jan262015 database) with hg19 build versions. The oncotator ‘COSMIC_n_overlapping_mutations’ field was used to flag mutations as possible drivers if they occurred in oncogenes and tumour suppressor genes in the online COSMIC Cancer Gene Census (CGC)58 or in driver genes identified in MSI tumours in the TCGA STAD publication2. Mutations were defined as likely driver genes if they led to (1) an amino acid alteration that had previously been described in the COSMIC database, (2) a disrupting mutation, including frameshift-, splice site- or premature stop/nonsense-mutations in a tumour suppressor gene or (3) an amino acid alteration at a position that shows an alteration in the COSMIC CGC but is distinct from the change reported in COSMIC if it was considered a likely driver by the Cancer Genome Interpreter59.
### DNA copy number aberration analysis
CNVKit60 (v0.8.1) was run in non-batch mode for copy number evaluation. Basic target and antitarget files were generated based on the Agilent SureSelectXT Human All Exon v6 kit. Accessible regions suggested by CNVKit (provided in the source distribution as ‘access-5kb-mappable.hg19.bed’) with a masked HLA interval (chr6:28866528-33775446) form the accessible loci. A pooled normal sample was created from all sequenced germline samples in the series. The copynumber61 R62 library functions Winsorize (run with ‘return.outliers’ = TRUE) and pcf (run with ‘gamma’ = 200) were used to identify outliers and regions of highly uneven coverage (defined as an absolute log ratio value greater than 0.5) to exclude from the analysis.
We identified high confidence SNP locations using bcftools call50 (v1.3) with snp137 reference and SnpEff SnpSift63 (v4.2) to filter heterozygous loci with minimum depth 50. VarScan2 was used to call the tumour sample BAMs at these locations to generate B-Allele Frequency (BAF) data as input for CNVKit. CNVKit was run with matched germline samples along with the adjusted access and antitarget files. For the segmentation step we ran the copynumber function pcf with gamma = 70. Breakpoints were then fed into Sequenza64 (v2.1.2) to calculate estimates of purity/ploidy and these values were used to recenter and scale the LogR profiles in CNVKit. BAF and LogR profiles were also manually reviewed by two researchers to determine their likely integer copy number states. Adjustments were made in cases where both manual reviews identified a consensus solution that differed from the bioinformatically generated integer copy number profile.
### Cancer cell content, ploidy estimation and wGII
Cancer cell content was estimated using the scaling factor of the copy number consensus solution. Ploidy was estimated as follows:
$${\mathrm{{Ploidy}}} = \left( {{\mathrm{CN}}_{{\mathrm{{Absolute}}}}\times\;{\mathrm{SegmentLength}}} \right)/ {\scriptstyle{\sum}} \left( {{\mathrm{SegmentLength}}} \right),$$
(1)
with CNAbsolute representing the unrounded copy number estimate and SegmentLength the genomic length between segment break points.
The wGII (ref. 32) is used to define CIN. We calculated the percentage of integer copy number segments on each chromosome different from the ploidy estimate rounded to the nearest integer state. The percentages are then averaged over the 22 autosomal chromsomes to give the wGII score.
### Subclonality analysis and phylogenetic tree reconstruction
Allele specific copy number estimates65 for SNV and indels were calculated as follows:
$${\mathrm{{MUT}}}_{{\mathrm{{CN}}}} = {\mathrm{{VAF}}}(1/p)\times\left( {(p\times{\mathrm{{CN}}}_{{\mathrm{{Absolute}}}}) + \left( {2 \times (1 - p)} \right)} \right),$$
(2)
where VAF is the variant allele frequency and p is the estimated tumour cell content. Cancer cell fraction (CCF) was estimated using the R package Palimpsest66. LICHeE67 was applied to infer phylogenetic trees from the estimated CCF values. The build algorithm was run with CCF/2 as input, -maxVAFAbsent 0, -minVAFPresent 0.0001 and ‘-s 10’. In each case, we report the top ranked tree solution. A single valid tree was identified for Tumour 1 (error score: 0.02), Tumour 2 (error score: 0.13) and Tumour 4 (error score: 0.06). LICHeE identified six valid trees for Tumour 3 (error scores: 0.088, 0.095, 0.96, 0.106, 0.112, 0.113). These solutions differed only in the positioning of the branch immediately preceding H2 (which could be positioned at H1 or H3) and of that preceding G2 (which could be positioned at G1) in Fig. 3. The tree with the lowest error score was chosen for the analysis, but selecting any of the alternative solutions would not change any of the conclusions presented in this study. Otherwise, only a low percentage of mutations (2–7% per case) could not be assigned to a subclone in the phylogenetic tree.
Trees were re-drawn and branch lengths scaled to the number of mutations in each subclonal mutation cluster and likely driver mutations were mapped onto the trunk or the appropriate branch. Private mutations identified by LICHeE were split into clonal and subclonal mutations using a CCF threshold of 0.7 unless the algorithm had already identified and split clonal and subclonal clusters. A short branch was added to Tumour 2 following a manual review of the tree solution to represent an 8 mutation cluster that was too small for the algorithm to detect but contained a B2M frameshift mutation which was identified as a likely driver.
### Mutational signatures
All SNV calls were loaded into R using VRanges (v1.28.3)68 VariantAnnotation, given trinucleotide motifs using SomaticSignatures (v2.18.0)69 mutationContext and tabulated using motifMatrix with ‘normalize’ = TRUE. The somatic motifs were then compared with the 30 mutational signatures established in COSMIC70 V2 using deconstructSigs (v1.8.0)71 whichSignatures by selecting ‘signature.cutoff’ = 0 and ‘signatures.ref’ = ‘signatures.cosmic’ as run parameters. Mutation signatures representing at least 5% of mutations in one of the analysed mutation groups were reported.
### Ratio of non-synonymous to synonymous mutations (dN/dS)
We ran dNdScv40 to generate dN/dS estimates which use trinucleotide context dependent substitution matrices to adjust for common mutation biases. We ran dNdScv with the following optional parameters: ‘outp = 1’, ‘max_muts_per_gene_per_sample = inf’ and ‘max_coding_muts_per_sample = inf’. This was done separately for mutations shown as truncal (blue), shared (yellow) and private (red and purple) on the phylogenetic trees in Fig. 3.
### HLA mutations and LOH calling
Mutations in HLA genes were predicted using the program POLYSOLVER72. In particular, we first predicted patients’ HLA types from germline samples using the shell_call_hla_type script of the POLYSOLVER suite, with the following parameters: race = Unknown, includeFreq = 1 and insertCalc = 0. Then, we used these HLA predictions as input to the shell_call_hla_mutations_from_type script for predicting HLA mutations in tumour samples. Finally, the shell_annotate_hla_mutations script was used to annotate the mutations identified in the previous step.
LOH events in HLA genes were predicted using the program LOHHLA39. LOHHLA requires as input normal HLA types, for which we used POLYSOLVER predictions, along with ploidy and CCF estimates, which were available from the calculations described above. All other parameters were set to default values.
Neopeptides associated to somatic mutations were generated as decribed in ref. 73. Note that we had to discard ~1.2% of somatic mutations because of inconsistencies between the variant annotation (this can be for either somatic variants or germline variants occurring on the same transcripts as the somatic ones) and the refseq_cds.txt file (GRCh37/hg19 Feb 2009) we used for generating the neopeptides. We used netMHCpan4.0 (28978689) to predict the neopeptides’ eluted ligand likelihood percentile rank scores. For each sample, we ran netMHCpan4.0 on all of the samples’ neopeptides against all samples’ HLA allotypes. As HLA-presented neopeptides, we picked all core peptides (see ref. 73) with a percentile rank <0.5%.
### MLH1 promoter qPCR
A total of 250 ng of tumour DNA, CpGenome Human Methylated DNA Standard (Millipore) and CpGenome Human Non-Methylated DNA Standard (Millipore) were subject to bisulphite conversion using the EZ DNA Methylation Gold Kit according to the manufacturer’s protocol (Zymo Research Corp.). Methylight primers and probe were used to amplify the MLH1 CpG island: (forward) 5′-AGGAAGAGCGGATAGCGATTT-3′, (reverse) 5′-TCTTCGTCCCTCCCTAAAACG-3′, (probe) 5′-FAM-CCCGCTACCTAAAAAAATATACGCTTACGCG-BHQ-3′ (ref. 74). qPCR was performed in a 25-µl reaction with 300 nM primers, 100 nM probe and 1× TaqMan Universal Master Mix II no UNG (Applied Biosystems) using the following program: 50 °C for 2 min, 95 °C for 10 min, followed by 50 cycles at 95 °C for 15 s and 60 °C for 1 min. Samples were analysed in duplicate in 96-well plates on an AB QuantStudio 6 Flex RT-PCR System.
### Mutation loads and clonal/subclonal drivers in TCGA MSI GOAs
Sixty-four GOAs from TCGA cohort2 are classified as MSI in the cBIO web portal75. We downloaded the BAM files of these cases from the NIH GDC Legacy Archive76. Adjustments to the analysis steps were necessary due to the properties of the TCGA sequencing data. A minimum variant frequency of 5% was applied throughout the mutation calling and the fpfilter.pl parameters ‘min-ref-avrl’ and ‘min-var-avrl’ filters were relaxed to 50. The minimum depth requirement in the tumour sample was relaxed to 20, while the minimum average base and mapping quality were set to 20 and 40, respectively. No adjustments were made to the default access and antitarget files of the CNVkit analysis due to large variations in the sequencing depths of the normal samples across the cohort. Otherwise, the somatic mutation, copy number and subclonality analysis steps were as described above. Mutational signatures were run as before and those detected with a mean contribution of 5% or more were further analysed.
### Reporting Summary
Further information on research design is available in the Nature Research Reporting Summary linked to this Article.
## Data availability
The multi-region exome-sequencing data have been deposited in the European Genome-Phenome archive under the accession code EGAS00001003434. The TCGA gastroesophageal dataset referenced during the study is available from the NIH GDC Data Portal website (https://portal.gdc.cancer.gov). All the other data supporting the findings of this study are available within the Article and its Supplementary Information files and from the corresponding author upon reasonable request. A Reporting Summary for this Article is available as a Supplementary Information file.
## Change history
• ### 29 January 2020
An amendment to this paper has been published and can be accessed via a link at the top of the paper.
## References
1. 1.
Ferlay, J. et al. Cancer incidence and mortality worldwide: sources, methods and major patterns in GLOBOCAN 2012. Int. J. Cancer 136, E359–E386 (2015).
2. 2.
TCGA. Comprehensive molecular characterization of gastric adenocarcinoma. Nature 513, 202–209 (2014).
3. 3.
Polom, K. et al. Meta-analysis of microsatellite instability in relation to clinicopathological characteristics and overall survival in gastric cancer. Br. J. Surg. 105, 159–167 (2018).
4. 4.
Smyth, E. C. et al. Prognostic and predictive effect of microsatellite instability (MSI) in MAGIC. J. Clin. Oncol. 33, 62 (2015).
5. 5.
Le, D. T. et al. Mismatch repair deficiency predicts response of solid tumors to PD-1 blockade. Science 357, 409–413 (2017).
6. 6.
Kim, S. T. et al. Comprehensive molecular characterization of clinical responses to PD-1 inhibition in metastatic gastric cancer. Nat. Med. 24, 1449–1458 (2018).
7. 7.
Gerlinger, M. et al. Genomic architecture and evolution of clear cell renal cell carcinomas defined by multiregion sequencing. Nat. Genet. 46, 225–233 (2014).
8. 8.
Gerlinger, M. & Swanton, C. How Darwinian models inform therapeutic failure initiated by clonal heterogeneity in cancer medicine. Br. J. Cancer 103, 1139–1143 (2010).
9. 9.
Lipinski, K. A. et al. Cancer evolution and the limits of predictability in precision cancer medicine. Trends Cancer 2, 49–63 (2016).
10. 10.
Raynaud, F., Mina, M., Tavernari, D. & Ciriello, G. Pan-cancer inference of intra-tumor heterogeneity reveals associations with different forms of genomic instability. PLoS Genet. 14, e1007669 (2018).
11. 11.
Gerlinger, M. et al. Genomic architecture and evolution of clear cell renal cell carcinomas defined by multiregion sequencing. Nat. Genet. 46, 225–233 (2014).
12. 12.
Yap, T. A., Gerlinger, M., Futreal, P. A., Pusztai, L. & Swanton, C. Intratumor heterogeneity: seeing the wood for the trees. Sci. Transl. Med. 4, 127ps10 (2012).
13. 13.
Maruvka, Y. E. et al. Analysis of somatic microsatellite indels identifies driver events in human tumors. Nat. Biotechnol. 35, 951–959 (2017).
14. 14.
Albacker, L. A. et al. Loss of function JAK1 mutations occur at high frequency in cancers with microsatellite instability and are suggestive of immune evasion. PLoS ONE 12, e0176181 (2017).
15. 15.
Shin, D. S. et al. Primary resistance to PD-1 blockade mediated by JAK1/2 mutations. Cancer Discov. 7, 188–201 (2017).
16. 16.
de Bruin, E. C. et al. Spatial and temporal diversity in genomic instability processes defines lung cancer evolution. Science 346, 251–256 (2014).
17. 17.
Gerlinger, M. et al. Intratumor heterogeneity and branched evolution revealed by multiregion sequencing. N. Engl. J. Med. 366, 883–892 (2012).
18. 18.
McGranahan, N. & Swanton, C. Clonal heterogeneity and tumor evolution: past, present, and the future. Cell 168, 613–628 (2017).
19. 19.
Harbst, K. et al. Multiregion whole-exome sequencing uncovers the genetic evolution and mutational heterogeneity of early-stage metastatic melanoma. Cancer Res. 76, 4765–4774 (2016).
20. 20.
Alexandrov, L. B. et al. Signatures of mutational processes in human cancer. Nature 500, 415–421 (2013).
21. 21.
McGranahan, N. et al. Clonal neoantigens elicit T cell immunoreactivity and sensitivity to immune checkpoint blockade. Science 351, 1463–1469 (2016).
22. 22.
Cristescu, R. et al. Pan-tumor genomic biomarkers for PD-1 checkpoint blockade-based immunotherapy. Science 362, eaar3593 (2018).
23. 23.
McGranahan, N. et al. Clonal status of actionable driver events and the timing of mutational processes in cancer evolution. Sci. Transl. Med. 7, 283ra254 (2015).
24. 24.
Alexandrov, L. B. et al. Clock-like mutational processes in human somatic cells. Nat. Genet. 47, 1402–1407 (2015).
25. 25.
Blokzijl, F. et al. Tissue-specific mutation accumulation in human adult stem cells during life. Nature 538, 260–264 (2016).
26. 26.
Yokoyama, A. et al. Age-related remodelling of oesophageal epithelia by mutated cancer drivers. Nature 565, 312–317 (2019).
27. 27.
Meier, B. et al. Mutational signatures of DNA mismatch repair deficiency in C. elegans and human cancers. Genome Res. 28, 666–675 (2018).
28. 28.
Birkbak, N. J. et al. Paradoxical relationship between chromosomal instability and survival outcome in cancer. Cancer Res. 71, 3447–3452 (2011).
29. 29.
Dewhurst, S. M. et al. Tolerance of whole-genome doubling propagates chromosomal instability and accelerates cancer genome evolution. Cancer Discov. 4, 175–185 (2014).
30. 30.
Kawakami, H., Zaanan, A. & Sinicrope, F. A. Microsatellite instability testing and its role in the management of colorectal cancer. Curr. Treat. Options Oncol. 16, 30 (2015).
31. 31.
Pihan, G. & Doxsey, S. J. Mutations and aneuploidy: co-conspirators in cancer? Cancer Cell 4, 89–94 (2003).
32. 32.
Burrell, R. A. et al. Replication stress links structural and numerical cancer chromosomal instability. Nature 494, 492–496 (2013).
33. 33.
Giannakis, M. et al. RNF43 is frequently mutated in colorectal and endometrial cancers. Nat. Genet. 46, 1264–1266 (2014).
34. 34.
Jamal-Hanjani, M. et al. Tracking the evolution of non-small-cell lung cancer. N. Engl. J. Med. 376, 2109–2121 (2017).
35. 35.
Yates, L. R. et al. Genomic evolution of breast cancer metastasis and relapse. Cancer Cell 32, 169–184 (2017).
36. 36.
Lewis, K. A. et al. Heterozygous ATR mutations in mismatch repair-deficient cancer cells have functional significance. Cancer Res. 65, 7091–7095 (2005).
37. 37.
Gruber, S. B. et al. BLM heterozygosity and the risk of colorectal cancer. Science 297, 2013 (2002).
38. 38.
Wilson, B. G. & Roberts, C. W. SWI/SNF nucleosome remodellers and cancer. Nat. Rev. Cancer 11, 481–492 (2011).
39. 39.
McGranahan, N. et al. Allele-specific HLA loss and immune escape in lung cancer evolution. Cell 171, 1259–1271 (2017).
40. 40.
Martincorena, I. et al. Universal patterns of selection in cancer and somatic tissues. Cell 171, 1029–1041 (2017).
41. 41.
Zaretsky, J. M. et al. Mutations associated with acquired resistance to PD-1 blockade in melanoma. New Engl. J. Med. 375, 819–829 (2016).
42. 42.
Sveen, A. et al. Multilevel genomics of colorectal cancers with microsatellite instability-clinical impact of JAK1 mutations and consensus molecular subtype 1. Genome Med. 9, 46 (2017).
43. 43.
Middha, S. et al. Majority of B2M-mutant and -deficient colorectal carcinomas achieve clinical benefit from immune checkpoint inhibitor therapy and are microsatellite instability-high. JCO Precis. Oncol. 3, https://doi.org/10.1200/PO.18.00321 (2019).
44. 44.
Williamson, C. T. et al. ATR inhibitors as a synthetic lethal therapy for tumours deficient in ARID1A. Nat. Commun. 7, 13837 (2016).
45. 45.
Steiner, T. et al. Gain in chromosome 8q correlates with early progression in hormonal treated prostate cancer. Eur. Urol. 41, 167–171 (2002).
46. 46.
Klatte, T. et al. Gain of chromosome 8q is associated with metastases and poor survival of patients with clear cell renal cell carcinoma. Cancer 118, 5777–5782 (2012).
47. 47.
Maley, C. C. et al. Classifying the evolutionary and ecological features of neoplasms. Nat. Rev. Cancer 17, 605–619 (2017).
48. 48.
Stack, E. C., Wang, C., Roman, K. A. & Hoyt, C. C. Multiplexed immunohistochemistry, imaging, and quantitation: a review, with an assessment of Tyramide signal amplification, multispectral imaging and multiplex analysis. Methods 70, 46–58 (2014).
49. 49.
Li, H. & Durbin, R. Fast and accurate short read alignment with Burrows-Wheeler transform. Bioinformatics 25, 1754–1760 (2009).
50. 50.
Li, H. et al. The sequence Alignment/Map format and SAMtools. Bioinformatics 25, 2078–2079 (2009).
51. 51.
McKenna, A. et al. The Genome Analysis Toolkit: a MapReduce framework for analyzing next-generation DNA sequencing data. Genome Res. 20, 1297–1303 (2010).
52. 52.
Cibulskis, K. et al. Sensitive detection of somatic point mutations in impure and heterogeneous cancer samples. Nat. Biotechnol. 31, 213–219 (2013).
53. 53.
Koboldt, D. C. et al. VarScan 2: somatic mutation and copy number alteration discovery in cancer by exome sequencing. Genome Res. 22, 568–576 (2012).
54. 54.
Koboldt, D. C., Larson, D. E. & Wilson, R. K. Using VarScan 2 for germline variant calling and somatic mutation detection. Curr. Protoc. Bioinformatics 44, 15.14.11–17 (2013).
55. 55.
Rimmer, A. et al. Integrating mapping-, assembly- and haplotype-based approaches for calling variants in clinical sequencing applications. Nat. Genet. 46, 912–918 (2014).
56. 56.
Wang, K., Li, M. & Hakonarson, H. ANNOVAR: functional annotation of genetic variants from high-throughput sequencing data. Nucleic Acids Res. 38, e164 (2010).
57. 57.
Ramos, A. H. et al. Oncotator: cancer variant annotation tool. Hum. Mutat. 36, E2423–E2429 (2015).
58. 58.
Sondka, Z. et al. The COSMIC Cancer Gene Census: describing genetic dysfunction across all human cancers. Nat. Rev. Cancer 18, 696–705 (2018).
59. 59.
Tamborero, D. et al. Cancer Genome Interpreter annotates the biological and clinical relevance of tumor alterations. Genome Med. 10, 25 (2018).
60. 60.
Talevich, E., Shain, A. H., Botton, T. & Bastian, B. C. CNVkit: genome-wide copy number detection and visualization from targeted DNA sequencing. PLOS Comput. Biol. 12, e1004873 (2016).
61. 61.
Nilsen, G. et al. Copynumber: efficient algorithms for single- and multi-track copy number segmentation. BMC Genomics 13, 591–591 (2012).
62. 62.
R Core Team. R: A Language and Environment for Statistical Computing (R Foundation for Statistical Computing, Vienna, Austria, 2018).
63. 63.
Cingolani, P. et al. A program for annotating and predicting the effects of single nucleotide polymorphisms, SnpEff: SNPs in the genome of Drosophila melanogaster strain w1118; iso-2; iso-3. Fly 6, 80–92 (2012).
64. 64.
Favero, F. et al. Sequenza: allele-specific copy number and mutation profiles from tumor sequencing data. Ann. Oncol. 26, 64–70 (2015).
65. 65.
Stephens, P. J. et al. The landscape of cancer genes and mutational processes in breast cancer. Nature 486, 400–404 (2012).
66. 66.
Letouze, E. et al. Mutational signatures reveal the dynamic interplay of risk factors and cellular processes during liver tumorigenesis. Nat. Commun. 8, 1315 (2017).
67. 67.
Popic, V. et al. Fast and scalable inference of multi-sample cancer lineages. Genome Biol. 16, 91 (2015).
68. 68.
Obenchain, V. et al. VariantAnnotation: a Bioconductor package for exploration and annotation of genetic variants. Bioinformatics 30, 2076–2078 (2014).
69. 69.
Gehring, J. S., Fischer, B., Lawrence, M. & Huber, W. SomaticSignatures: inferring mutational signatures from single-nucleotide variants. Bioinformatics 31, 3673–3675 (2015).
70. 70.
Forbes, S. A. et al. COSMIC: somatic cancer genetics at high-resolution. Nucleic Acids Res. 45, D777–D783 (2017).
71. 71.
Rosenthal, R., McGranahan, N., Herrero, J., Taylor, B. S. & Swanton, C. DeconstructSigs: delineating mutational processes in single tumors distinguishes DNA repair deficiencies and patterns of carcinoma evolution. Genome Biol. 17, 31 (2016).
72. 72.
Shukla, S. A. et al. Comprehensive analysis of cancer-associated somatic mutations in class I HLA genes. Nat. Biotechnol. 33, 1152–1158 (2015).
73. 73.
Woolston, A. et al. Genomic and transcriptomic determinants of therapy resistance and immune landscape evolution during anti-EGFR treatment in colorectal cancer. Cancer Cell 36, 35–50 (2019).
74. 74.
Weisenberger, D. J. et al. CpG island methylator phenotype underlies sporadic microsatellite instability and is tightly associated with BRAF mutation in colorectal cancer. Nat. Genet. 38, 787–793 (2006).
75. 75.
Gao, J. et al. Integrative analysis of complex cancer genomics and clinical profiles using the cBioPortal. Sci. Signal. 6, pl1 (2013).
76. 76.
Grossman, R. L. et al. Toward a shared vision for cancer genomic data. N. Engl. J. Med. 375, 1109–1112 (2016).
## Acknowledgements
The study was supported by a Wellcome Trust Strategic Grant (105104/Z/14/Z) to the ICR Centre for Evolution and Cancer, by the National Institute for Health Research Biomedical Research Centre for Cancer at the ICR/RMH, by a Clinician Scientist Fellowship from Cancer Research UK and by grants from the Schottlander Research Charitable Trust, Cancer Genetics UK and the Constance Travis Trust.
## Author information
Authors
### Contributions
K.v.L. processed the tissue, designed and conducted experiment, analysed the data and wrote the paper. A.W. performed bioinformatics analyses, analysed the data and wrote the paper. M.P. and S.L. performed bioinformatics analyses. B.G. processed the tissue. L.B., M.S., G.Sp. and B.C. analysed the data. K.F. and N.M. conducted exome sequencing. R.S., A.M. and G.Sa. provided the tissue and performed dMMR analysis. M.G. designed the study, supervised the experiments and data analysis and wrote the paper. All authors read and approved the manuscript.
### Corresponding author
Correspondence to Marco Gerlinger.
## Ethics declarations
### Competing interests
The authors declare no competing interests.
Peer review information Nature Communications thanks Rebecca Fitzgerald and the other, anonymous, reviewer(s) for their contribution to the peer review of this work. Peer reviewer reports are available.
Publisher’s note Springer Nature remains neutral with regard to jurisdictional claims in published maps and institutional affiliations.
## Rights and permissions
Reprints and Permissions
von Loga, K., Woolston, A., Punta, M. et al. Extreme intratumour heterogeneity and driver evolution in mismatch repair deficient gastro-oesophageal cancer. Nat Commun 11, 139 (2020). https://doi.org/10.1038/s41467-019-13915-7
• Accepted:
• Published:
• ### Computational Image Analysis of T-Cell Infiltrates in Resectable Gastric Cancer: Association with Survival and Molecular Subtypes
• Benjamin R Challoner
• , Katharina von Loga
• , Andrew Woolston
• , Beatrice Griffiths
• , Nanna Sivamanoharan
• , Maria Semiannikova
• , Alice Newey
• , Louise J Barber
• , David Mansfield
• , Lindsay C Hewitt
• , Yuichi Saito
• , Naser Davarzani
• , Naureen Starling
• , Alan Melcher
• , Heike I Grabsch
• & Marco Gerlinger
JNCI: Journal of the National Cancer Institute (2020)
• ### Tumor Heterogeneity
• Steven Maron
|
2020-08-05 03:22:03
|
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 1, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.6066117286682129, "perplexity": 14689.82988269954}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2020-34/segments/1596439735906.77/warc/CC-MAIN-20200805010001-20200805040001-00160.warc.gz"}
|
https://ohdsi.github.io/CohortMethod/reference/saveCohortMethodData.html
|
Saves an object of type CohortMethodData to a file.
saveCohortMethodData(cohortMethodData, file)
## Arguments
cohortMethodData An object of type CohortMethodData as generated using getDbCohortMethodData(). The name of the file where the data will be written. If the file already exists it will be overwritten.
## Value
Returns no output.
|
2020-12-04 02:42:43
|
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 1, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.2611500918865204, "perplexity": 3506.9336777345566}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 5, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2020-50/segments/1606141733120.84/warc/CC-MAIN-20201204010410-20201204040410-00506.warc.gz"}
|
http://extremewarrior.com/tripping-billies-buuleku/maximum-oxidation-state-is-present-in-b41bc2
|
... one peroxide bond is present. Main principles of identifying oxidation state. harshkushwaha969 harshkushwaha969 27.06.2020 Science Secondary School +5 pts. Maximum oxidation state is present in. In Group 8 (the iron group) the second and third row elements show a maximum oxidation state of (+VIII) compared with (+VI) for Fe. In 1990 IUPAC resorted to a postulatory (rule-based) method to determine the oxidation state. 2 Let's take an example to explain the given statement. 2.1k VIEWS. It has the maximum oxidation number of +1 in the positive state and minimum oxidation number of -1 in the negative state. Oxidation occurs when an atom, molecule, or ion loses one or more electrons in a chemical reaction. This assumption ... but rather a maximum of the all-electron ground state at the nuclear position. While in neutral medium the change in oxidation state is of 3 and in basic medium it is 1. Oxidation-reduction reaction - Oxidation-reduction reaction - Oxidation states: The idea of assigning an oxidation state to each of the atoms in a molecule evolved from the electron-pair concept of the chemical bond. Na P Ba Pb S Mn Cr +1 +5 +2 +4 +6 +7 +6 What will be the usual and the maximum oxidation state in compounds of? Download : Download full-size image; Fig. Li Br Sr O B NUSUAL +1 -1 +2 -2 +3 -3 or +5MAXIMUM +1 +7 +2 +6 +3 +5 16. of an element may be defined as the charge which an atom of the element has in its ion or appears to have when present in the combined states with other atoms Rules for assigning oxidation number The O. N of the element in the free or elementary state is always zero irrespective of it Lithium Oxidation Number. Ten is the maximum oxidation state exhibited by any atom. However, +3 and +4 ions tend to hydrolyse. The oxidation state in compound naming for transition metals and lanthanides and actinides is placed either as a right superscript to the element symbol in a chemical formula, such as FeIII, or in parentheses after the name of the element in chemical names, such as iron(III). • to avoid ambiguity, the oxidation state is often included in the name of a species e.g. Finally, fractional oxidation numbers should not be used in naming. This assumption is used ubiquitously to rationalize phenomena observed with TiO2. Les processus d'oxydation avancés (abréviation : POA), au sens large, sont un ensemble de procédures de traitement physicochimique conçues pour détruire des matières organiques (et parfois inorganiques) en suspension ou en solution dans l'eau d'effluents domestiques, urbains ou industriels, par oxydation via des réactions avec des radicaux hydroxyle (HO •) [1]. For example, FeCl3 is ferric chloride and FeCl2 is ferrous chloride. Oxidation: Oxidation state can be defined as the number of electrons that a particular atom can lose, gain or share with another atom. Again this can be described as a resonance hybrid of five equivalent structures, each having four carbons with oxidation state −1 and one with −2. Oxidation no. This happens as the maximum number ofelectrons are donated during the dissociation of {eq}{{\rm{H}}^ + } Valency vs. Oxidation State Search. These oxidation states add up to eight, which is exactly the number of electrons that typically make up the outer (valence) shell — where chemistry happens. Ask your question. Highest Oxidation State for a Transition metal = Number of Unpaired d-electrons + Two s-orbital electrons. (At.number: Fe=26). Atoms within a molecule are held together by the force of attraction that the nuclei of two or more of them exert on electrons in the space between them. Oxidation results in an increase in the oxidation state. Do atoms form either a positive or a negative charge, but not both? - Definition & Examples, Substitution Reaction Examples in Organic Chemistry, Acidic & Basic Salt Solutions: Explanation & Examples, Titration of a Strong Acid or a Strong Base, Pennsylvania Biology Keystone Exam: Test Prep & Practice, All India Pre-Veterinary Test (AIPVT): Exam Prep, DSST Environmental Science: Study Guide & Test Prep, Michigan Merit Exam - Science: Test Prep & Practice, Middle School Life Science: Help and Review, High School Biology: Homework Help Resource, TExES Life Science 7-12 (238): Practice & Study Guide, Holt McDougal Biology: Online Textbook Help, UExcel Microbiology: Study Guide & Test Prep, Middle School Life Science: Homework Help Resource, Middle School Life Science: Tutoring Solution, Biological and Biomedical [1] This does have a general quantum chemical explanation. Oxidation itself was first studied by Antoine Lavoisier, who defined it as the result of reactions with oxygen (hence the name). Notice, the oxidation states of the molecules here, they add up to the whole-- or the oxidation state of each of the atoms in a molecule, they add up to the entire charge of the molecule. The table is based on that of Greenwood and Earnshaw,[21] with additions noted. of electrons which the element lose or gain to form a molecule. {/eq}. harshkushwaha969 harshkushwaha969 27.06.2020 Science Secondary School +5 pts. Every element exists in oxidation state 0 when it is the pure non-ionized element in any phase, whether monatomic or polyatomic allotrope. phenyl]borane), see, Ga(−2), Ga(−4), and Ga(−5) have been observed in the magnesium gallides MgGa, Mg, Ge(−1), Ge(−2), and Ge(−3) have been observed in, Yttrium and all lanthanides except Ce and Pm have been observed in the oxidation state 0 in bis(1,3,5-tri-t-butylbenzene) complexes, see, Y(II) has been observed in [(18-crown-6)K][(C, Complexes of Nb(0) and Ta(0) have been observed, see, Te(V) is mentioned by Greenwood and Earnshaw, but they do not give any example of a Te(V) compound. A figure with a similar format was used by Irving Langmuir in 1919 in one of the early papers about the octet rule. Answer: Manganese is the 3d series transition element shows the highest oxidation state. REDOX REACTIONS Redox When reduction and oxidation take place Oxidation Removal of electrons; species will get less negative / more positive Reduction Gain of electrons; species will become more negative / less positive REDUCTION in O.S. So, the oxidation state of Mn changes from Mn(VII) to Mn(IV). The change in the oxidation state of the Mn is maximum in case of acidic medium that is, 5. (a) For N and P, + 5 oxidation state is more stable than + 3 oxidation Log in. Join now. Does this mean for Fe (iron) it's 0 to +3? All efforts to synthesize a solid or solute salt of the above mentioned [IrO4]+have failed so far. i) Sulphur dioxide is reducing agent because sulphur has d-orbital so it can easily expand its oxidation state +4 to +6 and thus behave as reducing agent. View All. Reduction involves a decrease in oxidation state Oxidation state of an atom depends upon the electronic configuration of atom, a periodic property. Hydrogen has been reduced by the lithium. The Organic Chemistry Tutor 128,105 views 19:46 Our experts can answer your tough homework and study questions. By 1948, IUPAC used the 1940 nomenclature rules with the term "oxidation state",[153][154] instead of the original[148] valency. Pt 5d 9 6s 1. Originally, the term was used when oxygen caused electron loss in a reaction. Oxidation state 0 occurs for all elements – it is simply the element in its elemental form. Hydrogen has OS = +1, but adopts −1 when bonded as a, Systematic oxidation state; it is chosen from close alternatives for pedagogical reasons of descriptive chemistry. Number that describes the degree of oxidation of an atom in a chemical compound; the hypothetical charge that an atom would have if all bonds to atoms of different elements were fully ionic, Simple approach without bonding considerations, Oxidation-state determination from resonance formulas is not straightforward, A physical measurement is needed to decide the oxidation state. Sciences, Culinary Arts and Personal 2. For example, carbon has nine possible integer oxidation states from −4 to +4: Many compounds with luster and electrical conductivity maintain a simple stoichiometric formula; such as the golden TiO, blue-black RuO2 or coppery ReO3, all of obvious oxidation state. Table 15.1 The common binary fluorides and oxides of the elements of the flrst transition series Fluorides M" VF, CrF2 MnF1 FeF, COFl NiFl CuF1 ZnF, Mill ScFJ TiF) 'IF, CrFJ MnF) FeF3 CoF) Mov TiF. English. Valence refers to the absolute number of electrons lost or gained.Oxidation state refers to number of electrons gained (-) or lost (+). Maximum oxidation state: $\mathrm{+V}$. However, the terminology using "ligands"[20]:147 gave the impression that oxidation number might be something specific to coordination complexes. Oxidation doesn't necessarily involve oxygen! Jensen[146] gives an overview of the history up to 1938. In these oxidation reactions CO 2, H 2 O, N 2, SO 2 …are formed as end products. {/eq}. The maximum oxidation state in the first row transition metals is equal to the number of valence electrons from titanium (+4) up to manganese (+7), but decreases in the later elements. If an atom is reduced, it has a higher number of valence shell electrons, and therefore a higher oxidation state, and is a strong oxidant. Titrate a fraction from each vanadium complex with KMnO4 to determine the concentration of vanadium. 1. Most of the redox reactions in this chapter involve a change in the oxidation state of the carbon bearing the functional group. Oxidation is a type of corrosion involving the reaction between a metal and air or oxygen at high temperatures in the absence of water or an aqueous phase. Minimum oxidation state $\mathrm{-III}$. How many atoms are present in 300 g of the element ? So, the removal of ten electrons is highly hypothetical. 300+ SHARES. The effect of metals upon the activity and selectivity of TiO ... la concentration des nitrates en dessous de la norme sans dépasser le taux maximum permis pour l’ammonium et pour les nitrites. Ein Programm zur interaktiven Visualisierung von Festkörperstrukturen sowie Synthese, Struktur und Eigenschaften von binären und ternären Alkali- und Erdalkalimetallgalliden", "Selenium: Selenium(I) chloride compound data", "High-Resolution Infrared Emission Spectrum of Strontium Monofluoride", "Yttrium: yttrium(I) bromide compound data", "Hypervalent Bonding in One, Two, and Three Dimensions: Extending the Zintl–Klemm Concept to Nonclassical Electron-Rich Networks", 10.1002/1521-3773(20000717)39:14<2408::aid-anie2408>3.0.co;2-u, "Studies of N-heterocyclic Carbene (NHC) Complexes of the Main Group Elements", "Synthesis and Structure of the First Tellurium(III) Radical Cation", "High-Resolution Fourier Transform Infrared Emission Spectrum of Barium Monofluoride", "Fourier Transform Emission Spectroscopy of New Infrared Systems of LaH and LaD", "Pentavalent lanthanide nitride-oxides: NPrO and NPrO− complexes with N≡Pr triple bonds", "Кристаллическое строение и термодинамические характеристики монобромидов циркония и гафния / Crystal structure and thermodynamic characteristics of monobromides of zirconium and hafnium", 10.1002/(SICI)1521-3773(19991102)38:21<3194::AID-ANIE3194>3.0.CO;2-O, "Germanides, Germanide-Tungstate Double Salts and Substitution Effects in Zintl Phases", "Synthesis, structure, and reactivity of crystalline molecular complexes of the {[C, "Reduction chemistry of neptunium cyclopentadienide complexes: from structure to understanding", "Remarkably High Stability of Late Actinide Dioxide Cations: Extending Chemistry to Pentavalent Berkelium and Californium", "Gas Phase Chemistry of Superheavy Elements", "Physico-chemical characterization of seaborgium as oxide hydroxide", "Gas chemical investigation of bohrium (Bh, element 107)", "Annual Report 2015: Laboratory of Radiochemistry and Environmental Chemistry", "The arrangement of electrons in atoms and molecules", "Antoine Laurent Lavoisier The Chemical Revolution - Landmark - American Chemical Society", "Einige Nomenklaturfragen der anorganischen Chemie", https://en.wikipedia.org/w/index.php?title=Oxidation_state&oldid=992816237#List_of_oxidation_states_of_the_elements, Pages containing links to subscription-only content, Short description is different from Wikidata, Articles with unsourced statements from August 2020, Creative Commons Attribution-ShareAlike License. Year Narendra Awasthi MS Chauhan early papers about the octet rule has a charge of +3 +6... First studied by Antoine Lavoisier, who defined it as the result of reactions with oxygen ( hence name! Is best set to 0 for all elements – it is then accepted by the element present in its oxidation... Mno 4-, the sum of the early papers about the octet rule of valence electrons, the maximum number! Ncert Fingertips Errorless Vol-1 Errorless Vol-2 ) are more stable than the metals ( except gold ) are more than. Bonds to being completely ionic in the oxidation state for carbon in oxidation. All other trademarks and copyrights are the property of their respective owners '' in English chemical literature popularized! Sense maximum oxidation state is present in the entire molecule lithium hydride is neutral [ IrO4 ] +have failed so.. The atoms in the periodic table are all positive to be +4 solid or solute of. Difference to your business of electronegativity in basic medium it is simply the element is reduced of respective... Valency can be exptalend with the help of inert pair effect + −1/5 = −6/5 Batra. All the atoms is potassium superoxide, KO2 Gästebuch ; Impressum ; Primary. Based Tech SMEs that of Greenwood and Earnshaw, [ 1 ] with additions. Structures of group 10 elements: Ni 3d 8 4s 2 electrons is highly hypothetical of electronegativity a tangible to... Determined by extrapolating bonds to being completely ionic in the complex cation, tetroxoplatinum ( PtO 4 ),... The pure non-ionized element in its elemental form as it does in SO3 and SO42- the main oxidation state \mathrm! Periodic property adopt the rule particular element institutions guidelines 4 ) 2+, platinum possess an oxidation,. The two lists are compared in this /datacheck, to gain mutual improvements Exemplar ncert Fingertips Errorless Errorless. Of d-electrons range from 1 ( in Cu and Zn ) 6.0 & institutions... Used ubiquitously to rationalize phenomena observed with TiO2 NEET Students it can not be used in naming MnO 4- the... Fingertips Errorless Vol-1 Errorless Vol-2 with all additions noted if you add a positive 1 area is maximum! 4-, the oxidation state 0 occurs for all elements – it is 1, fractional oxidation of... Some patterns is called oxidation VII ) to Mn ( VII ) to Mn ( VII ) to (... If the oxidation states of all the atoms in the oxidation state be reduced but can! The 3d series transition element shows the highest oxidation state for a transition =... Element which is present in its elemental form EduRev NEET Question is on..., C3H8, has been described as having a carbon oxidation state on the hydrogen here is a of! The following elements ) = [ Ar } 3d 5 4s 2 atoms is equal the... Table shows, the charge is -1 and +1, the blue-boxed area is the maximum oxidation state a! The Mn is maximum in case of acidic medium that is, this page last. Take an example where hydrogen plays the other oxidation states of the early papers about the octet rule Awasthi Chauhan... Uncombined element ) is zero chemical species increases begin to look at organic reactions. By any atom this page was last edited on 7 December 2020, at 06:52... What is a,... Exhibited by any atom the execution and delivery of high quality, results-driven marketing that makes sense the... As iron ( ii ) the electronic configuration of atom, molecule, or also known as transition -... ) are more stable than the metals molecule, or ion intermediate, then compound act... = [ Ar } 3d 5 4s 2, a long-standing issue 1. Chloride and FeCl2 is ferrous chloride show +2 oxidation state on the molecule or ion the shows. And minimum oxidation state is +1 ions tend to hydrolyse metals show +2 oxidation of. Number of -1 in the outermost shell of a particular element a general quantum chemical Explanation written in the state... Fraction from each vanadium complex with KMnO4 maximum oxidation state is present in determine the oxidation state or oxidation number of in... To consider how we define the oxidation state of a particular element table shows the... Its elemental form & states Explained - Rules, polyatomic ions,,.... but rather a maximum oxidation state of sulphur in oxidation state of a?... When present in 1:49 8.4k LIKES have a oxidation state or oxidation number of as. Formal loss of electrons metals in the complex cation, tetroxoplatinum ( PtO 4 ) 2+, platinum an! Larger the charge on the ion maximum of the chemical elements, excluding nonintegral values element which is.. Element is usually limited by the element lose or gain to form the ion the property of respective. So far should oxidation state metal = number of valence electrons, the ionization energy ) ion the! That of Greenwood and Earnshaw, [ 1 ] this does have a general chemical... At the nuclear position early papers about the octet rule mass = 300g obviously, ionization! Ground state at the nuclear position Question is disucussed on EduRev Study group by 183 NEET Students compound in... Follow the pattern of the Mn is maximum in case of acidic medium that is, this was. Your business 25 ) = [ Ar } 3d 5 4s 2 & education institutions.. Method to determine the oxidation state or oxidation number, describes the of! +4 ions tend to hydrolyse the lithium here is a maximum oxidation state -1! The oxidation state of Mn changes from Mn ( 25 ) = [ Ar } 3d 5 4s.. Team says chloride rather than ferrous chloride state exhibited by any atom EduRev Study group by 183 NEET Students chloride. Called oxidation in an acidic medium the octet rule is written in oxidation. Tech SMEs described as having a carbon oxidation state of Mn changes from (. 4-, the maximum oxidation number of valence electrons, the team says ion has a of!, higher the ionization energy on EduRev Study group by 183 NEET Students +7 +2 +6 +3 +5 16 either... Elements, excluding nonintegral values 1 ] with all additions noted ) electronic. Is that most compounds, & transition metals - Duration: 19:46 other oxidation states for equivalent is! Agent in an acidic medium that is, 5 the common oxidation states of the carbon the..., FeCl3 is ferric chloride and FeCl2 is ferrous chloride of oxidation with. Example to explain the differences between the processes of... What is the maximum maximum oxidation state is present in state only. To determine the concentration of vanadium column for oxidation state be defined as the result reactions. Not neutral an element had oxidation states for equivalent atoms is potassium,...
|
2021-07-24 10:11:01
|
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.584355354309082, "perplexity": 5382.452877991214}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2021-31/segments/1627046150264.90/warc/CC-MAIN-20210724094631-20210724124631-00098.warc.gz"}
|
https://www.atmos-meas-tech.net/11/6189/2018/
|
Journal cover Journal topic
Atmospheric Measurement Techniques An interactive open-access journal of the European Geosciences Union
Journal topic
Atmos. Meas. Tech., 11, 6189-6201, 2018
https://doi.org/10.5194/amt-11-6189-2018
Atmos. Meas. Tech., 11, 6189-6201, 2018
https://doi.org/10.5194/amt-11-6189-2018
Research article 15 Nov 2018
Research article | 15 Nov 2018
# Calibration of isotopologue-specific optical trace gas analysers: a practical guide
Calibration of isotopologue-specific optical trace gas analysers
David W. T. Griffith David W. T. Griffith
• Centre for Atmospheric Chemistry, University of Wollongong, Wollongong, NSW, Australia
Abstract
The isotopic composition of atmospheric trace gases such as CO2 and CH4 provides a valuable tracer for the sources and sinks that contribute to atmospheric trace gas budgets. In the past, isotopic composition has typically been measured with high precision and accuracy by isotope ratio mass spectrometry (IRMS) offline and separately from real-time or flask-based measurements of concentrations or mole fractions. In recent years, development of infrared optical spectroscopic techniques based on laser and Fourier-transform infrared spectroscopy (FTIR) has provided high-precision measurements of the concentrations of one or more individual isotopologues of atmospheric trace gas species in continuous field and laboratory measurements, thus providing both concentration and isotopic measurements simultaneously. Several approaches have been taken to the calibration of optical isotopologue-specific analysers to derive both total trace gas amounts and isotopic ratios, converging into two different approaches: calibration via the individual isotopologues as measured by the optical device and calibration via isotope ratios, analogous to IRMS.
This paper sets out a practical guide to the calculations required to perform calibrations of isotopologue-specific optical analysers, applicable to both laser and broadband FTIR spectroscopy. Equations to calculate the relevant isotopic and total concentration quantities without approximation are presented, together with worked numerical examples from actual measurements. Potential systematic errors, which may occur when all required isotopic information is not available, or is approximated, are assessed. Fortunately, in most such realistic cases, these systematic errors incurred are acceptably small and within the compatibility limits specified by the World Meteorological Organisation – Global Atmosphere Watch. Isotopologue-based and ratio-based calibration schemes are compared. Calibration based on individual isotopologues is simpler because the analysers fundamentally measure amounts of individual isotopologues, not ratios. Isotopologue calibration does not require a range of isotopic ratios in the reference standards used for the calibration, only a range of concentrations or mole fractions covering the target range. Ratio-based calibration leads to concentration dependence, which must also be characterised.
1 Introduction
Until recently, measurements of the amounts of CO2 and other trace gases in the atmosphere and in calibration gas standards within the Global Atmosphere Watch – Greenhouse Gas Monitoring Techniques (GAW-GGMT) community were mostly made by analytical techniques which do not discriminate between isotopic variants of the target gases. Manometry and gravimetry enable the calibration of gas mixtures to be traceable to SI units of pressure, volume, mass and temperature but measure only the total amounts of the target trace gas without taking into account differences in isotopic composition. Gas chromatography is also commonly used both in atmospheric measurements and in the propagation of standards but is also blind to the isotopic composition of the target gas and measures only total amounts.
Non-dispersive infrared (NDIR) analysers have been used for many years as an instrument of choice for atmospheric trace gas monitoring. NDIR is an optical technique based on infrared absorption by the target trace gas, and like any optical or spectroscopic instrument, NDIR instruments have a different response to different isotopologues of the target species because different isotopologues have different absorption spectra. Earlier NDIR instruments such as URAS, UNOR, Siemens and APC employed microphone detectors filled with the target trace gas that responded selectively to the absorption of infrared radiation by the target gas in the sample (Griffith, 1982). The NDIR instrument response depends, in a complex and non-linear way, on the isotopic composition of the target gas and on the carrier gas. The more recent LI-COR instruments replaced the microphonic detector with an optical semiconductor detector that relies on a broad bandpass filter to restrict the wavelength range from the source to that absorbed by the target gas, for example, around 4.3 µm for CO2. Optical NDIR detectors also respond differently to the different isotopologues of the target gas because the bandpass filter does not cover the entire absorption range of the trace gas, and because different isotopologues have different absorption strengths and sensitivities. NDIR instruments thus have an ill-defined sensitivity to isotopic variability, which must be empirically quantified for the most precise atmospheric measurements (Lee et al., 2006; Tohjima et al., 2009).
Most recently, laser and Fourier-transform infrared (FTIR) based optical infrared analysers have taken on a major role in atmospheric trace gas measurements for many gases, especially the dominant greenhouse gases CO2 and CH4. These instruments are based on infrared absorption by single absorption lines or bands of specific isotopologues, which are only a proxy for the total amount of the target trace gas. If the isotopic composition of the trace gas is invariant, such analysis provides a valid measure of the total amount of the gas after calibration, but it has long been recognised that isotopic differences between the calibration gases and the samples measured lead to variations in the total trace gas amounts deduced from a single isotopologue measurement that are significant relative to GAW compatibility goals (Loh et al., 2011). Several studies have addressed isotopic calibration (e.g. Esler et al., 2000; Bowling et al., 2003; Griffis et al., 2005; Mohn et al., 2008; Loh et al., 2011; Tuzson et al., 2011; Griffith et al., 2012; Wehr et al., 2013; Wen et al., 2013; Rella et al., 2015; Vardag et al., 2015; Pang et al., 2016; Flores et al., 2017; Tans et al., 2017; Braden-Behrens et al., 2017) and compared calibration approaches (Wen et al., 2013), but until recently most studies made some level of approximation in dealing with the calculations required to properly include the contributions of all possible isotopologues of the target species in the calculation scheme. Most recently Griffith et al. (2012), Flores et al. (2017) and Tans et al. (2017) have published isotopic calibration strategies which are equivalent and which correctly and completely account for the full isotopic composition of the target gas (CO2 in these studies, but applicable in principle to any species).
Established calibration laboratories using mass spectrometry as the primary method for isotopic analysis normally provide calibration standards which specify the total amount and isotopic ratios of a trace gases in an air matrix, such as CO2, δ13C and δ18O, while optical analysers fundamentally determine individual amounts of isotopologues, such as 16O12C16O, 16O13C16O and 16O12C18O. Here we present a practical guide to the calculations required to rigorously, yet simply, convert between the two equivalent descriptions and to derive isotope-specific calibrations for optical analysers. The calculations described here are equivalent to those described by Wehr et al. (2013), Flores et al. (2017) and Tans et al. (2017). The motivation for this technical note is thus 3-fold:
• to show that the complete and correct treatment of isotopic composition in calibration calculations is straightforward and that there is no need to invoke some approximations often made in earlier analyses,
• to provide a practical guide to isotope-specific calibration calculations, and
• to assess the potential errors when all isotopic information is not available and approximations or assumptions must be made.
2 Calculation of isotopic quantities
Using CO2 as an example, considering the stable isotopes 12C, 13C, 16O, 17O and 18O, there are eighteen possible isotopologues ($\mathrm{2}×\mathrm{3}×\mathrm{3}$ isotopic possibilities). 14C is a negligible proportion of total carbon for these purposes and is neglected. Only twelve of these eighteen possibilities are distinct due to symmetry. Assuming the substitution of each isotope at each position in the molecule follows its bulk statistical abundance (i.e. no clumping; see Sect. 6), only four independent quantities are required to fully define the total amount and full isotopic composition of CO2. These quantities may equivalently be the total CO2 amount and three isotopic ratios 13r,17r and 18r (or delta values δ13C, δ17O and δ18O), or the amounts of four individual isotopologues with each isotope substituted, most conveniently 16O12C16O, 16O13C16O, 16O12C17O and 16O12C18O. Once these are known, the abundances of all multiply substituted isotopologues can be calculated.
The most fundamental quantity defining isotopic composition for each element is the isotope ratio of the minor to the major isotope:
$\begin{array}{ll}& {}^{\mathrm{13}}r=\frac{n\left({}^{\mathrm{13}}\mathrm{C}\right)}{n\left({}^{\mathrm{12}}\mathrm{C}\right)}\\ & {}^{\mathrm{17}}r=\frac{n\left({}^{\mathrm{17}}\mathrm{O}\right)}{n\left({}^{\mathrm{16}}\mathrm{O}\right)}\\ \text{(1)}& & {}^{\mathrm{18}}r=\frac{n\left({}^{\mathrm{18}}\mathrm{O}\right)}{n\left({}^{\mathrm{16}}\mathrm{O}\right)},\end{array}$
where, for example, n(13C) is the amount of 13C in a sample (number of moles or atoms). Isotope ratios for standard or reference materials are assigned by the isotope metrology community, (e.g. Allison et al., 1995; Brand et al., 2010; Werner and Brand, 2001).
Table 1Standard isotope ratios for relevant reference scales used in atmospheric trace gas analysis.
a Werner and Brand (2001). b Brand et al. (2010). c Bievre et al. (1984). d https://www.cfa.harvard.edu/hitran/molecules.html (last access: 25 October 2018).
Isotope ratios are commonly expressed as delta values relative to a standard or reference material:
$\begin{array}{l}{\mathit{\delta }}^{\mathrm{13}}\mathrm{C}=\left(\frac{{}^{\mathrm{13}}r}{{}^{\mathrm{13}}{r}_{\mathrm{ref}}}-\mathrm{1}\right)\\ {\mathit{\delta }}^{\mathrm{17}}\mathrm{O}=\left(\frac{{}^{\mathrm{17}}r}{{}^{\mathrm{17}}{r}_{\mathrm{ref}}}-\mathrm{1}\right)\\ \text{(2)}& {\mathit{\delta }}^{\mathrm{18}}\mathrm{O}=\left(\frac{{}^{\mathrm{18}}r}{{}^{\mathrm{18}}{r}_{\mathrm{ref}}}-\mathrm{1}\right).\end{array}$
(Following the recommendation of Coplen (2011) and to simplify equations, the factor 1000 ‰ is not included in the definition of δ.) For the relevant reference scales commonly used in atmospheric analysis, the reference isotope ratios are given in Table 1.
For each isotope of an element, the isotopic abundance or isotopic fraction is the fraction of that isotope relative to all isotopes in a sample:
$\begin{array}{ll}& {}^{\mathrm{12}}x=\frac{n\left({}^{\mathrm{12}}\mathrm{C}\right)}{n\left({}^{\mathrm{12}}\mathrm{C}\right)+n\left({}^{\mathrm{13}}\mathrm{C}\right)}=\frac{\mathrm{1}}{\left(\mathrm{1}{+}^{\mathrm{13}}r\right)}\\ & {}^{\mathrm{13}}x=\frac{n\left({}^{\mathrm{13}}\mathrm{C}\right)}{n\left({}^{\mathrm{12}}\mathrm{C}\right)+n\left({}^{\mathrm{13}}\mathrm{C}\right)}=\frac{{}^{\mathrm{13}}r}{\left(\mathrm{1}{+}^{\mathrm{13}}r\right)}\\ & {}^{\mathrm{16}}x=\frac{n\left({}^{\mathrm{16}}\mathrm{O}\right)}{n\left({}^{\mathrm{16}}\mathrm{O}\right)+n\left({}^{\mathrm{17}}\mathrm{O}\right)+n\left({}^{\mathrm{18}}\mathrm{O}\right)}=\frac{\mathrm{1}}{\left(\mathrm{1}{+}^{\mathrm{17}}r{+}^{\mathrm{18}}r\right)}\\ & {}^{\mathrm{17}}x=\frac{n\left({}^{\mathrm{17}}\mathrm{O}\right)}{n\left({}^{\mathrm{16}}\mathrm{O}\right)+n\left({}^{\mathrm{17}}\mathrm{O}\right)+n\left({}^{\mathrm{18}}\mathrm{O}\right)}=\frac{{}^{\mathrm{17}}r}{\left(\mathrm{1}{+}^{\mathrm{17}}r{+}^{\mathrm{18}}r\right)}\\ \text{(3)}& & {}^{\mathrm{18}}x=\frac{n\left({}^{\mathrm{18}}\mathrm{O}\right)}{n\left({}^{\mathrm{16}}\mathrm{O}\right)+n\left({}^{\mathrm{17}}\mathrm{O}\right)+n\left({}^{\mathrm{18}}\mathrm{O}\right)}=\frac{{}^{\mathrm{18}}r}{\left(\mathrm{1}{+}^{\mathrm{17}}r{+}^{\mathrm{18}}r\right)}.\end{array}$
Note that these are fractional abundances, such that ${}^{\mathrm{12}}x{+}^{\mathrm{13}}x=\mathrm{1}$ and ${}^{\mathrm{16}}x{+}^{\mathrm{17}}x{+}^{\mathrm{18}}x=\mathrm{1}$.
Similarly, the isotopologue abundances or isotopologue fractions are defined for a molecule; for example, for CO2 the isotopologue abundances for 12C16O2 (626), 13C16O2 (636), 12C16O18O (628) and 12C16O17O (627) are
$\begin{array}{ll}& {x}_{\mathrm{626}}{=}^{\mathrm{16}}x{\cdot }^{\mathrm{12}}x{\cdot }^{\mathrm{16}}x=\frac{\mathrm{1}}{{R}_{\mathrm{sum}}}\\ & {x}_{\mathrm{636}}{=}^{\mathrm{16}}x{\cdot }^{\mathrm{13}}x{\cdot }^{\mathrm{16}}x=\frac{{}^{\mathrm{13}}r}{{R}_{\mathrm{sum}}}\\ & {x}_{\mathrm{627}}=\mathrm{2}{\cdot }^{\mathrm{16}}x{\cdot }^{\mathrm{12}}x{\cdot }^{\mathrm{17}}x=\frac{\mathrm{2}{\cdot }^{\mathrm{17}}r}{{R}_{\mathrm{sum}}}\\ \text{(4)}& & {x}_{\mathrm{628}}=\mathrm{2}{\cdot }^{\mathrm{16}}x{\cdot }^{\mathrm{12}}x{\cdot }^{\mathrm{18}}x=\frac{\mathrm{2}{\cdot }^{\mathrm{18}}r}{{R}_{\mathrm{sum}}},\end{array}$
where
$\begin{array}{}\text{(5)}& {R}_{\mathrm{sum}}=\left(\mathrm{1}{+}^{\mathrm{13}}r\right)\cdot \left(\mathrm{1}{+}^{\mathrm{17}}r{+}^{\mathrm{18}}r{\right)}^{\mathrm{2}}.\end{array}$
The labels 626, 636, 628 and 627 are the common isotopic shorthand used in spectroscopy and the HITRAN database (Rothman et al., 2005). The sum of all isotopologue abundances x over all 18 isotopologues is equal to unity. Rsum is a sum of the 18 products of isotope ratios, one corresponding to each of the 18 possible isotopologues of CO2. Rsum conveniently accounts for all possible isotopologues in calculations of abundances, providing a normalising factor somewhat analogous to a partition sum over all energy levels of a molecule. From Eq. (4), ${x}_{\mathrm{626}}=\mathrm{1}/{R}_{\mathrm{sum}}$ i.e. 1∕Rsum is the fractional abundance of the major isotopologue and ${R}_{\mathrm{sum}}-\mathrm{1}\approx \mathrm{1}-{x}_{\mathrm{626}}$ is that fraction of the sample that is made up of all the minor isotopologues. Equivalently, from Eq. (10) and the following paragraph, it can be seen that Rsum is the ratio of the total amount of CO2 to that of the major isotopologue in a sample.
Table 2Isotopologue fractional abundances and isotopic sums for the VPDB-CO2 and HITRAN scales and conversion factors.
Abundances are taken from a Rothman et al. (2005) and b https://www.cfa.harvard.edu/hitran/molecules.html (last access: 25 October 2018) for HITRAN and c Brand et al. (2010) for VPDB-CO2. The Brand et al. values supersede earlier values given by Allison et al. (1995).
Abundances of the major and three singly substituted isotopologues and Rsum values for standard reference materials are given in Table 2. Abundances of the multiply substituted isotopologues can be calculated following the examples of Eq. (4). They are also listed for HITRAN isotope ratios on the HITRAN website: https://www.cfa.harvard.edu/hitran/molecules.html (last access: 25 October 2018).
For a calibration or reference gas, δ13C and δ18O are usually provided by calibration laboratories, and δ17O can normally be deduced from δ18O, assuming mass dependent fractionation of oxygen isotopes with negligible error (Brand et al., 2010):
$\begin{array}{ll}& {}^{\mathrm{17}}r{/}^{\mathrm{17}}{r}_{\mathrm{ref}}={\left(}^{\mathrm{18}}r{/}^{\mathrm{18}}{r}_{\mathrm{ref}}{\right)}^{\mathrm{0.528}}\phantom{\rule{0.125em}{0ex}}\phantom{\rule{0.125em}{0ex}}\phantom{\rule{0.125em}{0ex}}\phantom{\rule{0.125em}{0ex}}\text{or}\\ \text{(6)}& & {\mathit{\delta }}^{\mathrm{17}}\mathrm{O}=\mathrm{0.528}\cdot {\mathit{\delta }}^{\mathrm{18}}\mathrm{O}.\end{array}$
The mass dependent fractionation assumption is discussed below in Sect. 6. The isotope ratios 13r,17r and 18r for a sample can thus be calculated from inverting Eq. (2):
$\begin{array}{ll}& {}^{\mathrm{13}}r=\left(\mathrm{1}+{\mathit{\delta }}^{\mathrm{13}}\mathrm{C}\right){\cdot }^{\mathrm{13}}{r}_{\mathrm{ref}}\\ & {}^{\mathrm{17}}r=\left(\mathrm{1}+{\mathit{\delta }}^{\mathrm{17}}\mathrm{O}\right){\cdot }^{\mathrm{17}}{r}_{\mathrm{ref}}\\ \text{(7)}& & {}^{\mathrm{18}}r=\left(\mathrm{1}+{\mathit{\delta }}^{\mathrm{18}}\mathrm{O}\right){\cdot }^{\mathrm{18}}{r}_{\mathrm{ref}},\end{array}$
thence Rsum can be calculated from Eq. (5) for any sample or reference gas.
If the total mole fraction of CO2 in a sample of air, ${y}_{{\mathrm{CO}}_{\mathrm{2}}}$, is also known (for example, for a certified calibration gas), the individual isotopologue amounts or mole fractions can be calculated from
$\begin{array}{ll}& {y}_{\mathrm{626}}={y}_{{\mathrm{CO}}_{\mathrm{2}}}\cdot {x}_{\mathrm{626}}={y}_{{\mathrm{CO}}_{\mathrm{2}}}/{R}_{\mathrm{sum}}\\ & {y}_{\mathrm{636}}={y}_{{\mathrm{CO}}_{\mathrm{2}}}\cdot {x}_{\mathrm{636}}={y}_{{\mathrm{CO}}_{\mathrm{2}}}{\cdot }^{\mathrm{13}}r/{R}_{\mathrm{sum}}\\ & {y}_{\mathrm{627}}={y}_{{\mathrm{CO}}_{\mathrm{2}}}\cdot {x}_{\mathrm{627}}={y}_{{\mathrm{CO}}_{\mathrm{2}}}\cdot \mathrm{2}{\cdot }^{\mathrm{17}}r/{R}_{\mathrm{sum}}\\ \text{(8)}& & {y}_{\mathrm{628}}={y}_{{\mathrm{CO}}_{\mathrm{2}}}\cdot {x}_{\mathrm{628}}={y}_{{\mathrm{CO}}_{\mathrm{2}}}\cdot \mathrm{2}{\cdot }^{\mathrm{18}}r/{R}_{\mathrm{sum}}.\end{array}$
(Following the recommendation of the IUPAC Gold Book (McNaught and Wilkinson, 2014) and usage by Tans et al. (2017), the symbol y is used here for mole fraction (more formally amount fraction) of a trace gas or isotopologue in air to distinguish from x, the isotope or isotopologue fractional abundance.)
Conversely, if a set of calibrated isotopologue mole fractions $\mathit{\left\{}{y}_{\mathrm{626}},{y}_{\mathrm{636}},{y}_{\mathrm{628}},{y}_{\mathrm{627}}\mathit{\right\}}$ in a sample are measured with an isotopologue-specific analyser, the total CO2 mole fraction ${y}_{{\mathrm{CO}}_{\mathrm{2}}}$and isotope ratios or delta values can be calculated. The isotope ratios are derived directly from the isotopologue amounts:
$\begin{array}{ll}& {}^{\mathrm{13}}r={y}_{\mathrm{636}}/{y}_{\mathrm{626}}\\ & {}^{\mathrm{18}}r=\mathrm{0.5}\cdot {y}_{\mathrm{628}}/{y}_{\mathrm{626}}\\ \text{(9)}& & {}^{\mathrm{17}}r={\left(}^{\mathrm{18}}r{/}^{\mathrm{18}}{r}_{\mathrm{ref}}{\right)}^{\mathrm{0.528}}{\cdot }^{\mathrm{17}}{r}_{\mathrm{ref}}.\end{array}$
Then delta values are calculated from Eq. (2) and Rsum from Eq. (5). The total CO2 mole fraction is then calculated from Eq. (8):
$\begin{array}{}\text{(10)}& {y}_{{\mathrm{CO}}_{\mathrm{2}}}={y}_{\mathrm{626}}\cdot {R}_{\mathrm{sum}}\end{array}$
The key quantity in these calculations is Rsum, which correctly and completely accounts for all possible isotopologues of the molecule at their actual isotopic abundances. Note that to correctly calculate the amount of any isotopologue in a sample, all isotope ratios should be known to calculate Rsum exactly. Errors incurred when this requirement is relaxed are discussed and quantified in Sect. 5.
3 Normalised isotopologue mole fractions
In the HITRAN database, tabulated line strengths are normalised by the natural abundance of the relevant isotopologue; the reference isotopologue natural abundances assumed in HITRAN are listed in Table 2. Retrievals from spectra based on HITRAN line parameters thus provide scaled or normalised mole fractions of isotopologues, which are referenced to the isotopic scales assumed by HITRAN. For some purposes it may be convenient to work with these normalised mole fractions directly rather than convert them to absolute mole fractions as in Sect. 2 because the reference isotopologue abundances are inherently included in the normalised amounts. In terms of normalised mole fractions, Eq. (8) becomes
$\begin{array}{ll}{y}_{\mathrm{626}}^{\prime }& =\frac{{y}_{\mathrm{626}}}{{x}_{\mathrm{626},\mathrm{ref}}}={y}_{{\mathrm{CO}}_{\mathrm{2}}}\cdot \frac{{R}_{\mathrm{sum},\mathrm{ref}}}{{R}_{\mathrm{sum}}}={y}_{{\mathrm{CO}}_{\mathrm{2}}}/{X}_{\mathrm{sum}}\\ {y}_{\mathrm{636}}^{\prime }& =\frac{{y}_{\mathrm{636}}}{{x}_{\mathrm{636},\mathrm{ref}}}={y}_{{\mathrm{CO}}_{\mathrm{2}}}\cdot \frac{{}^{\mathrm{13}}r}{{}^{\mathrm{13}}{r}_{\mathrm{ref}}}\cdot \frac{{R}_{\mathrm{sum},\mathrm{ref}}}{{R}_{\mathrm{sum}}}\\ & \phantom{\rule{1em}{0ex}}={y}_{{\mathrm{CO}}_{\mathrm{2}}}\cdot \left(\mathrm{1}+{\mathit{\delta }}^{\mathrm{13}}\mathrm{C}\right)/{X}_{\mathrm{sum}}\\ {y}_{\mathrm{627}}^{\prime }& =\frac{{y}_{\mathrm{627}}}{{x}_{\mathrm{627},\mathrm{ref}}}={y}_{{\mathrm{CO}}_{\mathrm{2}}}\cdot \frac{{}^{\mathrm{17}}r}{{}^{\mathrm{17}}{r}_{\mathrm{ref}}}\cdot \frac{{R}_{\mathrm{sum},\mathrm{ref}}}{{R}_{\mathrm{sum}}}\\ & \phantom{\rule{1em}{0ex}}={y}_{{\mathrm{CO}}_{\mathrm{2}}}\cdot \left(\mathrm{1}+{\mathit{\delta }}^{\mathrm{17}}\mathrm{O}\right)/{X}_{\mathrm{sum}}\\ {y}_{\mathrm{628}}^{\prime }& =\frac{{y}_{\mathrm{628}}}{{x}_{\mathrm{628},\mathrm{ref}}}={y}_{{\mathrm{CO}}_{\mathrm{2}}}\cdot \frac{{}^{\mathrm{18}}r}{{}^{\mathrm{18}}{r}_{\mathrm{ref}}}\cdot \frac{{R}_{\mathrm{sum},\mathrm{ref}}}{{R}_{\mathrm{sum}}}\\ \text{(11)}& & \phantom{\rule{1em}{0ex}}={y}_{{\mathrm{CO}}_{\mathrm{2}}}\cdot \left(\mathrm{1}+{\mathit{\delta }}^{\mathrm{18}}\mathrm{O}\right)/{X}_{\mathrm{sum}},\end{array}$
where rref and Rsum,ref refer to the reference scales listed in Tables 1 and 2 and ${X}_{\mathrm{sum}}={R}_{\mathrm{sum}}/{R}_{\mathrm{sum},\mathrm{ref}}={R}_{\mathrm{sum}}\cdot {x}_{\mathrm{626},\mathrm{ref}}$. Equation (11) allows normalised mole fractions to be calculated the from total CO2 mole fraction and δ values on any reference scale for which rref and Rsum,ref are known.
The calculation of δ values from normalised isotopologue mole fractions is analogous to Eqs. (9) and (10):
$\begin{array}{ll}& {\mathit{\delta }}^{\mathrm{13}}\mathrm{C}=\frac{{y}_{\mathrm{636}}^{\prime }}{{y}_{\mathrm{626}}^{\prime }}-\mathrm{1}\\ & {\mathit{\delta }}^{\mathrm{18}}\mathrm{O}=\frac{{y}_{\mathrm{628}}^{\prime }}{{y}_{\mathrm{626}}^{\prime }}-\mathrm{1}\\ \text{(12)}& & {\mathit{\delta }}^{\mathrm{17}}\mathrm{O}=\mathrm{0.528}\cdot {\mathit{\delta }}^{\mathrm{18}}\mathrm{O},\end{array}$
and the total CO2 mole fraction is
$\begin{array}{}\text{(13)}& {y}_{{\mathrm{CO}}_{\mathrm{2}}}={y}_{\mathrm{626}}^{\prime }\cdot \frac{{R}_{\mathrm{sum}}}{{R}_{\mathrm{sum},\mathrm{ref}}}={y}_{\mathrm{626}}^{\prime }\cdot {X}_{\mathrm{sum}}.\end{array}$
The normalised mole fractions have the convenient property that they are all equal to the total CO2 mole fraction in a sample if all isotopes are in natural abundance in the reference scale (i.e. Eq. 11 with δ=0, ${R}_{\mathrm{sum}}={R}_{\mathrm{sum},\mathrm{ref}}$ and Xsum=1). HITRAN natural abundances are based on a superseded definition of the VPDB isotope ratio for carbon and VSMOW for oxygen, while for atmospheric CO2 the isotopic scale of choice is VPDB-CO2, which is based on VPDB for both carbon and oxygen and may be adjusted over time as scales are redetermined. To convert normalised mole fractions retrieved directly from spectra (HITRAN scale) to the VPDB-CO2 scale, each normalised mole fraction can be multiplied by ${x}_{\mathrm{ref},\mathrm{Hitran}}/{x}_{\mathrm{ref},\mathrm{VPDB}}$. The reference isotopologue abundances and rescaling factors are listed in Table 2.
4 Calibration and measurement procedures – step by step
Calibration of an isotopologue-specific analyser can in principle be carried out in two ways: calibrating on either the individual isotopologue amounts or on the derived isotope ratios or delta values. Both methods have been used in published work to date. The former is more fundamental because optical methods actually measure individual isotopologue amounts, not ratios. Ratio- or delta-based calibration leads to the additional complication of concentration dependence in the calibration. A step-by-step method for direct isotopologue calibration is presented in Sect. 4.1 based on the equations of Sect. 2. Ratio or delta calibration is discussed in Sect. 4.2 and the two methods are compared in Sect. 4.3.
## 4.1 Direct calibration by isotopologue amounts
The steps described here are consistent with those recently published by Flores et al. (2017) and Tans et al. (2017). Griffith et al. (2012) previously described the same methods but used a minor approximation in accounting for the sum of all multiply substituted isotopologues in the calculation of Rsum in Eq. (5) or Xsum in Eq. (11).
There are two parts to the calibration and unknown measurement procedure: (1) determination of the reference isotopologue amounts and the calibration equation for each isotopologue in a calibration gas, and (2) measurement of the isotopologue amounts in an unknown sample and calculation of its total trace gas amount and delta quantities. As above, CO2 is used as an example, but the procedures apply in principle to any molecule.
### 4.1.1 Calibration
1. From reference standard tank data provided by the calibration laboratory $\mathit{\left\{}{\mathrm{CO}}_{\mathrm{2}},{\mathit{\delta }}^{\mathrm{13}}\mathrm{C},{\mathit{\delta }}^{\mathrm{18}}\mathrm{O},\left({\mathit{\delta }}^{\mathrm{17}}\mathrm{O}\right)\mathit{\right\}}$, calculate isotope ratios ${}^{\mathrm{13}}r{,}^{\mathrm{18}}r{,}^{\mathrm{17}}r$ and Rsum for each standard (Eq. 7 then Eq. 5).
2. Calculate the calibrated amount of each isotopologue y626, y636 and y628 in each standard (Eq. 8).
3. Measure uncalibrated analyser responses or raw isotopologue amounts of each standard y626,meas, y636,meas and y628,meas with the analyser.
4. Derive the calibration equation for each isotopologue, for example, for a linear calibration:
$\begin{array}{}\text{(14)}& {y}_{\mathrm{626},\mathrm{meas}}={a}_{\mathrm{626}}\cdot {y}_{\mathrm{626}}+{b}_{\mathrm{626}}.\end{array}$
### 4.1.2 Sample measurement
1. Measure the sample with the analyser and determine the analyser responses or raw isotopologue amounts.
2. Apply the inverted calibration determined in step 4 (Eq. 14) above for each isotopologue to determine calibrated isotopologue amounts.
3. Calculate ${}^{\mathrm{13}}r{,}^{\mathrm{18}}r{,}^{\mathrm{17}}r$ and Rsum from calibrated isotopologue amounts (Eq. 9).
4. Calculate δ13C and δ18O on the desired reference isotope scale (Eq. 2 or 12).
5. Calculate total CO2 (Eq. 10).
With this scheme, for complete calibration of the analyser for the total CO2 amount, δ13C and δ18O should be known for each reference standard, and each isotopologue should be measured by the analyser (or a combination of analysers). δ17O can be calculated with sufficient accuracy from δ18O. Calibration gases may, but do not need to, span a range of delta values; they need only span the range of amounts of each isotopologue covered by the range of samples to be measured (Bowling et al., 2003). Flores et al. (2017) demonstrated isotopic calibration of CO2 in which all standards were synthesised from the same CO2 source gas, and all had the same δ13C and δ18O values.
## 4.2 Calibration by delta values
Spectroscopic analysers fundamentally determine the amounts of individual isotopologues, and the isotopologue-based analysis as described in the preceding section is the natural choice as a basis for calibration. Historically, however, isotope ratio mass spectrometry (IRMS) has been the method of choice for isotopic analysis because many sources of noise cancel in calculating the ratio. Traditional IRMS calibration schemes are based on standards over a range of isotope ratios or delta values directly, rather than on isotopologue amounts. Ratio or delta calibration schemes have thus, perhaps inevitably, flowed through to optical techniques. Ratio calibration schemes use calibration standards which cover a range of delta values and derive calibration equations analogous to Eq. (14) directly in terms of delta values rather than isotopologue amounts. The raw measured delta values are calculated from the uncalibrated isotopologue amounts. However, as shown in the following, this method inevitably leads to a concentration dependence of the calibration equations, which must be characterised as part of (and which significantly complicates) the calibration procedure.
Several groups have reported on ratio calibration schemes and the consequent concentration dependence (e.g. Griffith et al., 2012; Wen et al., 2013; Rella et al., 2015; Pang et al., 2016; Braden-Behrens et al., 2017; Flores et al., 2017). The concentration dependence inevitably follows if the actual calibration relationships between measured and true amounts of individual isotopologues (Sect. 4.1, Eq. 14) have a non-zero y intercept or an additional non-linear term. Griffith et al. (2012, Eq. 14) showed that a non-zero intercept in the calibration equations leads to an approximate inverse dependence of measured δ13C on concentration. Extending that to include a quadratic term in the calibration equation representing non-linearity adds an approximately linear term to the concentration dependence, which can then be described by a combination of an inverse and linear dependence on ${y}_{{\mathrm{CO}}_{\mathrm{2}}}$:
$\begin{array}{}\text{(15)}& {{\mathit{\delta }}^{\mathrm{13}}\mathrm{C}}_{\mathrm{meas}}=\mathit{\alpha }\cdot {{\mathit{\delta }}^{\mathrm{13}}\mathrm{C}}_{\mathrm{true}}+\left(\mathit{\alpha }-\mathrm{1}\right)+\frac{\mathit{\beta }}{{y}_{{\mathrm{CO}}_{\mathrm{2}}}}+\mathit{\gamma }\cdot {y}_{{\mathrm{CO}}_{\mathrm{2}}},\end{array}$
where δ13Cmeas is calculated from the raw measured isotopologue amounts. For a perfectly linear calibration, i.e. Eq. (14) with ${b}_{\mathrm{626}}={b}_{\mathrm{636}}=\mathrm{0}$, both β and γ are zero, $\mathit{\alpha }={a}_{\mathrm{636}}/{a}_{\mathrm{626}}$ and Eq. (15) represents a simple concentration-independent scale shift of (α−1) in the δ scale. β is a function of the intercept terms b626 and b636. γ becomes non-zero if quadratic terms are added to the calibration equations. The inverse and linear ${y}_{{\mathrm{CO}}_{\mathrm{2}}}$ dependences are not exact because the coefficients β and γ contain terms dependent on δ13C and also have weak cross-terms, but together they provide a useful model to describe the concentration dependence. The linear term becomes relatively more important than the inverse term at high CO2 mole fractions, where the inverse CO2 term becomes small, and any quadratic contribution to the calibration equation leading to the linear term becomes large.
Figure 1 illustrates this concentration dependence with a typical δ13C vs. CO2 dependence for an FTIR analyser similar to that used in the example of Sect. 5 below. The dependence was determined by continuous flow measurements of a single CO2-spiked air tank while the CO2 content was gradually reduced by passing a fraction of the flow through Ascarite. The measured δ13C vs. CO2 data are fitted to Eq. (15) with fitted parameters $\mathit{\beta }=-\mathrm{1227}$ ‰ ppm and γ=0.0054 ‰ ppm−1, corresponding to CO2 dependent corrections of up to 5 ‰ over the CO2 range of 400–1000 ppm. The residuals of the fit illustrate potential errors from the modelled behaviour of up to ±0.3 ‰. Uncertainties in calibrating the CO2 concentration dependence can lead to significant errors in Keeling-type analyses over a wide range of total CO2 amounts even if the isotopologue calibration non-linearity is very small (Pang et al., 2016; Wen et al., 2013).
Figure 1Example of δ13C dependence on CO2 mole fraction for a Spectronus FTIR analyser. The measured data are fitted with a function of form of Eq. (15) with fitted parameters $\mathit{\beta }=-\mathrm{1227}$ ‰ ppm and γ=0.0054 ‰ ppm−1.
The concentration dependence is a function of the isotopologue calibration coefficients, and thus in principle for best accuracy it should be redetermined for every calibration, complicating the calibration procedure. The Thermo Fisher Delta Ray isotope analyser, for example, takes this approach in a prescribed sequence of measurements using several reference standards; however, Braden-Behrens et al. (2017) and Flores et al. (2017) found this procedure not to be sufficiently accurate or stable and invoked separate a posteriori calibration schemes. Rella et al. (2015) and Picarro (2017) similarly describe a calibration procedure for Picarro analysers to take concentration dependence into account.
Table 3Worked data for calibration of an FTIR analyser using four reference standards in (a) using actual mole fractions of all isotopologues, and (b) using normalised mole fractions on the VPDB-CO2 scale. The 17r and δ17O values were not directly determined and are not included in the table – they are derived from 18r and δ18O following Eq. (6).
## 4.3 Comments on the accuracy of optical isotopologue and ratio calibration
As an example, assume a calibration laboratory provides calibrated reference gases with an absolute accuracy of 0.05 ppm for total CO2 amount (0.12 ‰ in 400 ppm CO2) and 0.02 ‰ for δ13C measured by IRMS. The isotope ratio is thus more accurately determined than the total amount fraction for the reference gases. Now take as a practical measurement repeatability for optical analysers 0.02 ppm (0.05 ‰) for total CO2 amount and 0.07 ‰ for δ13C (e.g. Griffith et al., 2012; laser instruments are similar). The absolute accuracy for the calibrated optical measurement of total CO2 is limited by the reference gas amount fraction, but the more accurately known reference 13r or 626 ∕ 636 ratio is carried through the calibration calculations and this accuracy is preserved when retrieved isotopologue amounts are ratioed. The accuracy of measured 13r or δ13C is thus limited by the optical measurement (0.07 ‰), which is less precise than the IRMS-provided reference accuracy (0.02 ‰). This reasoning applies to both isotopologue and ratio calibration schemes, which both benefit from the higher accuracy and precision in the isotopologue ratios than in absolute isotopologue amounts. The principle differences between the isotopologue and ratio calibration schemes are 2-fold.
• The isotopologue scheme does not require calibration gases spanning a range of delta values; it is sufficient to span the range of total amount fractions of interest. This simplifies the preparation of reference gases for calibration laboratories.
• The ratio scheme has an unavoidable CO2 concentration dependence which must be characterised and leads potentially to a loss of accuracy, as shown in Sect. 4.2. This complicates the calibration procedure for optical analysers.
Optical FTIR and laser methods do not currently meet GAW requirements of 0.01 ‰ for repeatability of δ13C in CO2 in clean background air measurements (WMO-GAW, 2016). Their precision is limited by the inherent signal : noise ratio of the optical measurement, not by the choice of absolute or ratio calibration. The precision currently available from optical measurements is nevertheless very useful for continuous analysis of air in non-baseline scenarios, such as urban air or agricultural flux measurements.
Errors are discussed further in Sect. 6.
5 Tutorial: a practical worked example
This section presents a worked example of the calibration of an optical analyser using reference gases of given total CO2 mole fraction, δ13C and δ18O, followed by measurements of air to which this calibration is applied. The data are derived from an Ecotech Spectronus FTIR analyser which measures three isotopologues of CO2 (626, 636, 628) in the calibration gases and in the sampled air. The calculations follow Sect. 4.1.
## 5.1 Calibration
The calibration data were collected in the laboratory at the University of Wollongong on 27 September 2017. Four reference tanks were sourced from CSIRO, with total CO2 mole fraction, δ13C and δ18O provided on the current WMO reference scales (WMO X2007 scale for total CO2, VPDB-CO2 for δ13C and δ18O). For each calibration tank, ${}^{\mathrm{13}}r{,}^{\mathrm{18}}r{,}^{\mathrm{17}}r$, Rsum and reference isotopologue mole fractions are calculated from Eqs. (7), (5) and (8). The four reference gases were measured in the analyser, and raw measured values of the isotopologue mole fractions were corrected to dry air and for small spectroscopic cross-sensitivities to pressure, temperature and water vapour, as described by Griffith et al. (2012). A two-parameter linear regression (slope and intercept) of measured against reference mole fractions for each isotopologue provides the linear calibration coefficients a and b for the analyser, Eq. (14). The worked data are presented in Table 3 and calibration plots shown in Fig. 2.
Table 4Worked calibration of sample data in Fig. 3 at four times with varying CO2 mole fractions. Columns 2–4 contain the raw measured isotopologue mole fractions corrected to dry air, columns 5–7 contain the calibrated dry air mole fractions after applying the coefficients from Table 3, columns 8–10 are the isotopic ratios and Rsum for each sample, and columns 11–13 contain the final calibrated total CO2, δ13C and δ18O.
Figure 2Calibration plots for three CO2 isotopologues.
## 5.2 Sample air measurements
Figure 3 shows an example of 1 day of calibrated 1 min measurements from the same FTIR analyser collected at a rural site in SE Australia on 23 and 24 January 2018. Table 4 illustrates the worked calibration of the raw data at four times of differing CO2 amounts and isotopic fractionations. The linear calibration of 27 September 2017 described above has been applied to the measured data without further correction. The calculations follow Sect. 4.1 to determine ${y}_{{\mathrm{CO}}_{\mathrm{2}}}$, δ13C and δ18O for each 1 min measurement. Figure 4 shows an example of a Keeling plot derived from the data of Fig. 3, with an intercept −24.5 ‰ typical of the dominant plants in this agricultural area.
Table 5Actual isotopologue amounts and Rsum values in 400 ppm total CO2 for various isotopic compositions. The last column lists errors in calculating total CO2 if different isotopic composition between reference (calibration) and sample measurements are not accounted for. See text for details of the various cases.
Figure 3Calibrated total CO2, δ13C and δ18O of sampled air on 23–24 January 2018 at a rural site in SE Australia. Air was sampled continuously, and the displayed data are 1 min averages.
6 Assessment of potential errors
Table 5 shows examples of actual isotopologue amounts for samples with total CO2=400 ppm and a range of isotopic compositions. The table includes Rsum values calculated for each sample. The potential error incurred in calculating the total CO2 amount from a spectroscopic measurement of y626 via Eq. (10) if the different isotopic composition between sample and reference gases is not taken into account is shown in the rightmost column – it is the difference from 400 ppm of the total CO2 calculated from Eq. (10) taking the reference value Rsum,ref (case 1) instead of the correct value on the same line Rsum. This simulates the effect of ignoring the difference in isotopic composition between reference and sample. The reference case (case 1) is a hypothetical standard with the isotopic composition of VPDB-CO2. Examples include typical clean air (case 2), synthetic air synthesised with 13C-depleted CO2 with δ13C$=-\mathrm{35}$ ‰ (case 3), systematic errors of 2 ‰ in δ18O and δ17O (cases 4, 5), and the use of isotope ratios assumed by HITRAN rather than VPDB-CO2 (case 6). Case 7 simulates the result if only singly substituted isotopologues are included in the sum and all doubly substituted minor isotopologues are ignored. Other cases can be assessed following the equations of Sect. 2. Potential errors are fortunately small relative to GAW compatibility goals for realistic isotopic variations of a few per mil around clean air values. However the potential for significant errors (> 0.1 ppm) exists for reference gas mixtures or samples with 13C-depleted CO2 as is often the case for synthetic mixtures or for samples with added CO2 derived from plant or fossil fuel sources.
Figure 4Keeling plot of data shown in Fig. 3.
Table 6Details of isotopologues of common atmospheric species.
These potential errors in computation of delta values should also be viewed in the context of experimental measurement errors. Flores et al. (2017) formally evaluated the uncertainty budget for their particular FTIR measurements of δ13C in CO2 and found a standard uncertainty of 0.09 ‰, of comparable magnitude to the largest potential computational approximation errors. The measurement uncertainty was dominated by uncertainty in assigned reference mole fractions for the reference standards rather than the spectroscopic measurement uncertainty.
Three assumptions, previously mentioned and summarised here, have negligible impact on the calculations of Sect. 2 and Table 5.
• 14C, with an isotopic abundance of < 1 ppt is ignored in all calculations.
• The relative amounts of multiply substituted minor isotopologues are assumed to be in statistical relative abundance, i.e. there is no isotope clumping. Clumping refers to the case where the enrichment (or depletion) of two or more isotopes in a multiply substituted isotopologue are correlated, rather than each following their statistical amounts independently. Clumping effects are normally much less than 1 ‰, and according to Table 5 are therefore insignificant.
• Values 17r and δ17O are calculated from 18r and δ18O (Eq. 6) assuming mass dependent fractionation. Thermodynamic and kinetic fractionation processes are mass-dependent and account for most fractionation mechanisms in nature. Mass-independent fractionation typically occurs in quantum processes such as photolysis and can cause small deviations from mass dependence. These deviations are also typically < 1 ‰ (e.g. Miller et al., 2002) and thus also negligible for the purposes of this work.
7 Other molecules
Similar considerations apply to other molecular species, see Table 6. For CH4, 13CH4 measurements are commonly made using laser analysers such as those of Picarro (Rella et al., 2015), and isotopic reference gases are available. An analysis similar to that in Sect. 6 and Table 5 shows that for 2000 ppb CH4 in air, an error of 10 ‰ in the assumed value of δ13C leads to an error of 0.2 ppb in the calculated total CH4 mole fraction, and for a −35 ‰ error the total CH4 error is 0.7 ppb. A 100 ‰ error in δ2H leads to an error in total CH4 of only 0.1 ppb.
For N2O there is the additional complication of the isotopomers 15N14N16O and 14N15N16O, for which standard reference gases are not available, and for which measurement technologies are currently less advanced. The general magnitude of potential errors will be similar to those of CO2. For CO, reference gases are available, but current optical techniques are not able to resolve isotopic variations with sufficient accuracy at the typical low total mole fractions in air.
8 Calibration of commercially available analysers
Several commercial manufacturers offer isotopologue-specific optical analysers based on laser (Campbell Scientific, Picarro, Los Gatos Research, Aerodyne Research, Thermo Fisher Scientific) or FTIR (Ecotech) spectroscopy that analyse sampled air for one or more specific isotopologues. These instruments report results in a variety of ways, as isotopologue mole fractions and/or as total mole fractions and isotopic delta values, both calibrated and uncalibrated. In most cases the scheme by which total mole fractions and delta values are calculated from the raw measured data is not fully described, although some details are available in user manuals and published works. In most cases some level of approximation is used in accounting for the full molecular isotopic composition when converting between isotopologue amounts and total amounts and delta values. As shown in Sect. 6, these approximations are fortunately in most cases acceptably small, but it is nevertheless recommended that they be assessed and documented if the full computation scheme is not used or measurement, and calibration data for all isotopologues are not available.
GAW reports on Carbon Dioxide, other Greenhouse Gases and Related Tracers Measurement Techniques since 2011 (WMO-GAW, 2012) recommend that the computational scheme for isotopic quantities derived from all commercial and non-commercial analysers be published and fully transparent to the user to avoid the potential for biases and inaccuracies stemming from different calibration and calculation schemes. Potential errors and calibration biases due to inconsistent isotopic calculations and the empirical determination of concentration dependences can be avoided if only the raw output isotopologue amounts from the analyser(s) are used and calibrated and isotopic quantities are calculated a posteriori following consistent calculation schemes, such as those described here and in Flores et al. (2017) and Tans et al. (2017).
9 Summary, discussion and conclusions
Optical trace gas analysers based on laser or FTIR spectroscopy measure the concentrations or mole fractions of individual isotopologues of a trace gas rather than the total amount of all possible isotopologues of the target gas. This leads to potential calibration inaccuracies in relating the individual isotopologue measurements made by the analyser to the more usual quantities of total amount and isotopic ratios or delta values. This paper reviews previous studies addressing isotopic calibration of optical analysers and presents a practical guide to the calculations required to completely and rigorously account for the isotopic composition of a trace gas when determining its total concentration with an isotopologue-specific optical analyser. Although most previous work has made some level of approximation in accounting for the full isotopic composition, this paper shows that such approximations are not required and save little effort – the complete calculations are relatively straightforward. The approach described here is consistent with those of Flores et al. (2017) and Tans et al. (2017); for CO2 for example, the measurement of either three isotopologues (12C16O2, 13C16O2, 12C16O18O), or total CO2 and two delta values (δ13C, δ18O) is necessary and sufficient to specify the complete isotopic composition with sufficient accuracy to meet GAW compatibility goals. Calculations to interconvert between these equivalent specifications of composition accurately are described.
Potential errors which may arise when making sometimes-unavoidable approximations in the calculations are assessed and, in most cases, fortunately found to be small and often negligible. However, significant errors can arise when the isotopic composition of an air sample is very different from that used to calibrated the analyser. Two common cases where this may occur in practice are in the production of synthetic reference standards using highly depleted 13C in CO2 and in environmental studies such as soil chambers where high levels of 13C-depleted CO2 are analysed with an analyser calibrated around clean atmospheric 13C levels.
Provided the appropriate calibration standards are available, this paper recommends that the calibration of optical analysers be carried out via direct measurement of the amounts of individual isotopologues, from which the total trace gas amount and isotopic composition can then be calculated completely and accurately. It recommends against ratio or delta-based calibration because this approach leads inevitably to concentration dependences in the calibration that must be characterised. Direct isotopologue calibration avoids concentration dependence and requires only reference standards spanning the range of concentrations to be measured and of known isotopic composition. There is no requirement for the reference gases to span the range of expected delta values; they can all be produced from the same source of trace gas and all have the same isotopic composition.
Optical FTIR and laser methods do not currently meet GAW requirements for repeatability of δ13C in CO2 in clean background air measurements (0.01 ‰). Their precision is currently limited by the inherent signal : noise ratio of the optical measurement, not by the calibration methodology. The precision currently available from optical measurements is nevertheless very useful for continuous analysis of air in non-baseline scenarios such as urban air or agricultural flux measurements.
Data availability
Data availability.
Data in the paper are only illustrative of the calculations. There are no original or published data.
Competing interests
Competing interests.
The author is a consultant to Ecotech Pty Ltd., manufacturer of the Spectronus trace gas analyser under licence to the University of Wollongong.
Special issue statement
Special issue statement.
This article is part of the special issue “The 10th International Carbon Dioxide Conference (ICDC10) and the 19th WMO/IAEA Meeting on Carbon Dioxide, other Greenhouse Gases and Related Tracer Measurement Techniques (GGMT-2017) (AMT/ACP/BG/CP/ESD inter-journal SI)”. It is a result of the 19th WMO/IAEA Meeting on Carbon Dioxide, Other Greenhouse Gases, and Related Tracer Measurement Techniques (GGMT-2017), Empa Dübendorf, Switzerland, 27–31 August 2017.
Acknowledgements
Acknowledgements.
I would like to thank the GAW-GGMT community for the many discussions on this topic, and especially Edgar Flores, Joelle Viallon, Camille Yver, Grant Forster, Kentaro Ishijima and Jessica Conolly who provided comments on the manuscript and checked the calculations.
Edited by: Hubertus Fischer
Reviewed by: two anonymous referees
References
Allison, C., Francey, R., and Meijer, H.: Recommendations for the reporting of stable isotope measurements for carbon and oxygen in CO2 gas, reference and Intercomparison Materials for stable isotopes of light element, IAEA-TECDO, IAEA, Vienna, 155–162, 1995.
Bievre, P. D., Holden, N. E., and Barnes, I. L.: Isotopic Abundances and Atomic Weights of the Elements, J. Phys. Chem. Ref. Data, 809–891, 1984.
Bowling, D. R., Sargent, S. D., Tanner, B. D., and Ehleringer, J. R.: Tunable diode laser absorption spectroscopy for stable isotope studies of ecosystem–atmosphere CO2 exchange, Agr. Forest Meteorol., 118, 1–19, https://doi.org/10.1016/S0168-1923(03)00074-1, 2003.
Braden-Behrens, J., Yan, Y., and Knohl, A.: A new instrument for stable isotope measurements of 13C and 18O in CO2 – instrument performance and ecological application of the Delta Ray IRIS analyzer, Atmos. Meas. Tech., 10, 4537–4560, https://doi.org/10.5194/amt-10-4537-2017, 2017.
Brand, W. A., Assonov, S. S., and Coplen, T. B.: Correction for the 17O interference in del(13C) measurements when analyzing CO2 with a stable isotope mass spectrometry (IUPAC Technical Report), Pure Appl. Chem., 82, 1719–1733, 2010.
Coplen, T. B.: Guidelines and recommended terms for expression of stableisotope-ratio and gas-ratio measurement results, Rapid Commun. Mass Spectrom., 25, 2538–2560, https://doi.org/10.1002/rcm.5129, 2011.
Esler, M. B., Griffith, D. W. T., Wilson, S. R., and Steele, L. P.: Precision trace gas analysis by FT-IR spectroscopy 2. The 13C/12C isotope ratio of CO2, Anal. Chem., 72, 216–221, 2000.
Flores, E., Viallon, J., Moussay, P., Griffith, D. W. T., and Wielgosz, R. I.: Calibration Strategies for FT-IR and Other Isotope Ratio Infrared Spectrometer Instruments for Accurate δ13C and δ18O Measurements of CO2 in Air, Anal. Chem., 89, 3648–3655, https://doi.org/10.1021/acs.analchem.6b05063, 2017.
Griffis, T. J., Lee, X., Baker, J. M., Sargent, S. D., and King, J. Y.: Feasibility of quantifying ecosystem–atmosphere C18O16O exchange using laser spectroscopy and the flux-gradient method, Agr. Forest Meteorol., 135, 44–60, 2005.
Griffith, D. W. T.: Calculations of carrier gas effects in non-dispersive infrared analysers I. Theory, Tellus, 34, 376–384, 1982.
Griffith, D. W. T., Deutscher, N. M., Caldow, C., Kettlewell, G., Riggenbach, M., and Hammer, S.: A Fourier transform infrared trace gas and isotope analyser for atmospheric applications, Atmos. Meas. Tech., 5, 2481–2498, https://doi.org/10.5194/amt-5-2481-2012, 2012.
Lee, J.-Y., Yoo, H.-S., Marti, K., Moon, D. M., Lee, J. B., and Kim, J. S.: Effect of carbon isotopic variations on measured CO2 abundances in reference gas mixtures, J. Geophys. Res., 111, https://doi.org/10.1029/2005JD006551, 2006.
Loh, Z. M., Steele, L. P., Krummel, P. B., Schoot, M. v. d., Etheridge, D. M., Spencer, D. A., and Francey, R. J.: Linking Isotopologue Specific Measurements of CO2 to the Existing International Mole Fraction Scale, 15th WMO/IAEA Meeting of Experts on Carbon Dioxide, Other Greenhouse Gases and Related Tracers Measurement Techniques (WMO/GAW report no. 194), Jena, Germany, August 2009, 2011.
McNaught, A. D. and Wilkinson, A.: IUPAC Compendium of Chemical Technology – the Gold Book, IUPAC, 2014.
Miller, M. F., Franchi, I. A., Thiemens, M. H., Jackson, T. L., Brack, A., Kurat, G., and Pillinger, C. T.: Mass-independent fractionation of oxygen isotopes during thermal decomposition of carbonates, P. Natl. Acad. Sci. USA, 99, 10988–10993, https://doi.org/10.1073/pnas.172378499, 2002.
Mohn, J., Zeeman, M. J., Werner, R. A., Eugster, W., and Emmenegger, L.: Continuous field measurements of δ13C–CO2 and trace gases by FTIR spectroscopy, Isot. Environ. Healt. S., 44, 241–251, 2008.
Pang, J., Wen, X., Sun, X., and Huang, K.: Intercomparison of two cavity ring-down spectroscopy analyzers for atmospheric 13CO212CO2 measurement, Atmos. Meas. Tech., 9, 3879–3891, https://doi.org/10.5194/amt-9-3879-2016, 2016.
Picarro: Calibration guide for Picarro Analyzers, Rev 1., 18 pp., 2017.
Rella, C. W., Hoffnagle, J., He, Y., and Tajima, S.: Local- and regional-scale measurements of CH4, δ13CH4, and C2H6 in the Uintah Basin using a mobile stable isotope analyzer, Atmos. Meas. Tech., 8, 4539–4559, https://doi.org/10.5194/amt-8-4539-2015, 2015.
Rothman, L. S., Jacquemart, D., Barbe, A., Benner, D. C., Birk, M., Brown, L. R., Carleer, M. R., C. Chackerian, J., Chance, K., Dana, V., Devi, V. M., Flaud, J.-M., Gamache, R. R., Goldman, A., Hartmann, J.-M., Jucks, K. W., Maki, A. G., Mandin, J.-Y., Massie, S. T., Orphali, J., Perrin, A., Rinsland, C. P., Smith, M. A. H., Tennyson, J., Tolchenov, R. N., Toth, R. A., Auwera, J. V., Varanasi, P., and Wagner, G.: The HITRAN 2004 molecular spectroscopic database, J. Quant. Spectrosc. Rad., 96, 139–204, 2005.
Tans, P. P., Crotwell, A. M., and Thoning, K. W.: Abundances of isotopologues and calibration of CO2 greenhouse gas measurements, Atmos. Meas. Tech., 10, 2669–2685, https://doi.org/10.5194/amt-10-2669-2017, 2017.
Tohjima, Y., Katsumata, K., Morino, I., Mukai, H., Machida, T., Akama, I., Amari, T., and Tsunogai, U.: Theoretical and experimental evaluation of the isotope effect of NDIR analyzer on atmospheric CO2 measurement, J. Geophys. Res., 114, https://doi.org/10.1029/2009JD011734, 2009.
Tuzson, B., Henne, S., Brunner, D., Steinbacher, M., Mohn, J., Buchmann, B., and Emmenegger, L.: Continuous isotopic composition measurements of tropospheric CO2 at Jungfraujoch (3580 m a.s.l.), Switzerland: real-time observation of regional pollution events, Atmos. Chem. Phys., 11, 1685–1696, https://doi.org/10.5194/acp-11-1685-2011, 2011.
Vardag, S. N., Hammer, S., Sabasch, M., Griffith, D. W. T., and Levin, I.: First continuous measurements of δ18O-CO2 in air with a Fourier transform infrared spectrometer, Atmos. Meas. Tech., 8, 579–592, https://doi.org/10.5194/amt-8-579-2015, 2015.
Wehr, R., Munger, J. W., Nelson, D. D., McManus, J. B., Zahniser, M. S., Wofsy, S. C., and Saleskaa, S. R.: Long-term eddy covariance measurements of the isotopic composition of the ecosystem–atmosphere exchange of CO2 in a temperate forest, Agr. Forest Meteorol., 181, 69–84, 2013.
Wen, X.-F., Meng, Y., Zhang, X.-Y., Sun, X.-M., and Lee, X.: Evaluating calibration strategies for isotope ratio infrared spectroscopy for atmospheric 13CO212CO2 measurement, Atmos. Meas. Tech., 6, 1491–1501, https://doi.org/10.5194/amt-6-1491-2013, 2013.
Werner, R. A. and Brand, W. A.: Referencing strategies and techniques in stable isotope ratio analysis, Rapid Commun. Mass Spectrom., 15, 501–519, 2001.
WMO-GAW: GAW report No. 206. 16th WMO/IAEA Meeting on Carbon Dioxide, Other Greenhouse Gases, and Related Measurement Techniques (GGMT-2011, Wellington, NZ, October 2011), 2012.
WMO-GAW: GAW report 229. 18th WMO/IAEA meeting on carbon dioxide, other greenhouse gases and related 74 measurement techniques (GGMT-2015), WMO, 2016.
Special issue
|
2018-12-13 19:58:10
|
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 38, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.6952845454216003, "perplexity": 4259.109533955744}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 20, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2018-51/segments/1544376825098.68/warc/CC-MAIN-20181213193633-20181213215133-00201.warc.gz"}
|
https://ncatlab.org/nlab/show/Rng
|
# nLab Rng
## Idea
$Rng$ is the category of nonunital rings and homomorphisms between them.
Created on May 31, 2017 05:39:43 by Urs Schreiber (131.220.184.222)
|
2018-02-23 08:25:56
|
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 1, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.4737584888935089, "perplexity": 3838.804610849148}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2018-09/segments/1518891814538.66/warc/CC-MAIN-20180223075134-20180223095134-00773.warc.gz"}
|
http://math.stackexchange.com/questions/126749/field-extension-primitive-element-theorem
|
# Field extension, primitive element theorem
I would like to know if it is true that $\mathbb{Q}(\sqrt{2}-i, \sqrt{3}+i) = \mathbb{Q}(\sqrt{2}-i+2(\sqrt{3}+i))$.
I can prove, that $\mathbb{Q}(\sqrt{2}-i, \sqrt{3}+i) = \mathbb{Q}(\sqrt{2},\sqrt{3},i)$, so the degree of this extension is 8. Would it be enough to show that the minimal polynomial of $\sqrt{2}-i+2(\sqrt{3}+i)$ has also degree 8?
It follows from the proof of the primitive element theorem that only finitely many numbers $\mu$ have the property that $\mathbb{Q}(\sqrt{2}-i, \sqrt{3}+i)\neq \mathbb{Q}(\sqrt{2}-i+\mu(\sqrt{3}+i))$. Obviously $\mu=1$ is one of them, but how to check, whether 2 also has this property?
-
Yes; since $\mathbb{Q}(\alpha)$ (with $\alpha=\sqrt{2}-i+2(\sqrt{3}+i)$ is clearly contained in $\mathbb{Q}(\sqrt{2}-i,\sqrt{3}+i)$, the two fields are equal if and only if they have the same degree over $\mathbb{Q}$, if and only if the minimal polynomial of $\alpha$ over $\mathbb{Q}$ has degree $[\mathbb{Q}(\sqrt{2}-i,\sqrt{3}+i):\mathbb{Q}]$. – Arturo Magidin Apr 1 '12 at 2:00
It's not the primitive element theorem itself that guarantees that only finitely many $\mu$ exist with that property, though the argument made in the proof is valid. – Arturo Magidin Apr 1 '12 at 2:01
@Arturo: I don't see any reason why you shouldn't just call your comments an answer. Right? – mixedmath Apr 1 '12 at 17:12
@mixedmath: Done. – Arturo Magidin Apr 1 '12 at 20:08
Let $\alpha=\sqrt{2}-i+2(\sqrt{3}+i)$.
Since $\alpha\in\mathbb{Q}(\sqrt{2}-i,\sqrt{3}+i)$, it follows that $\mathbb{Q}(\alpha)=\mathbb{Q}(\sqrt{2}-i,\sqrt{3}+i)$ if and only if their degrees over $\mathbb{Q}$ are equal. The degree $[\mathbb{Q}(\alpha):\mathbb{Q}]$ is equal to the degree of the monic irreducible of $\alpha$ over $\mathbb{Q}$, so you are correct that if you can show that the monic irreducible of $\alpha$ is of degree $8$, then it follows that $\mathbb{Q}(\alpha)=\mathbb{Q}(\sqrt{2}-i,\sqrt{3}+i)$.
I will note, however, that your interpretation of the Primitive Element Theorem is incorrect. The Theorem itself doesn't really tell you what you claim it tells you. The argument in the proof relies on the fact that there are only finitely many fields between $\mathbb{Q}$ and $\mathbb{Q}(\sqrt{2}-i,\sqrt{3}+i)$, and so by the Pigeonhole Principle there are only finitely many rationals $\mu$ such that $\mathbb{Q}(\sqrt{2}-i,\sqrt{3}+i)\neq\mathbb{Q}(\sqrt{2}-i+\mu(\sqrt{3}+i))$. But this is not a consequence of the Primitive Element Theorem, but rather of the fact that there are only finitely many fields in between.
|
2016-05-27 00:43:11
|
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.9432128071784973, "perplexity": 70.3665694464781}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 20, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2016-22/segments/1464049276415.60/warc/CC-MAIN-20160524002116-00073-ip-10-185-217-139.ec2.internal.warc.gz"}
|
https://solvedlib.com/discuss-the-means-by-which-structural-bias-social,274050
|
1 answer
# Discuss the means by which structural bias, social inequities and racism undermine health and create challenges...
###### Question:
Discuss the means by which structural bias, social inequities and racism undermine health and create challenges to achieving health equity at organizational, community and societal levels in people with HIV/AIDS
## Answers
#### Similar Solved Questions
1 answer
...
1 answer
##### Question 7 At low temperature nitrogen dioxide molecules join together to form dinitrogen tetroxide. 2 NO2(g)...
Question 7 At low temperature nitrogen dioxide molecules join together to form dinitrogen tetroxide. 2 NO2(g) + N204(9) (low temperature) A sample of NO2 sealed inside a glass bulb at 23 °C gave a pressure of 673 Torr. Lowering the temperature to -5 °C converted the NO2 to N204. What was the...
1 answer
Let the energy stored in the inductor be given by $L(t)=3 \cos ^{2} 6,000,000 t$ and let the energy stored in the capacitor be given by$C(t)=3 \sin ^{2} 6,000,000 t$ where $t$ is time in seconds. The total energy $E$ in the circuit is given by $E(t)=L(t)+C(t)$ Make a table of values for $L, C,... 1 answer ##### 10.3.16 A sofh-trink machine at a steak house is regulated so that the amount of drink... 10.3.16 A sofh-trink machine at a steak house is regulated so that the amount of drink dspensed is approximatoly nomally periodical, by taking a sample of 16 drks and corpving the erage content t i 'alk in me interval T1--189. the machine is thought to be opeating satelactedy ethnise the aner ea... 5 answers ##### Mean Value Theorem11. Find the point "c" guaranteed to exist by the Mean Value Theorem for: f(x) =x2 + Sx +2 |-3,-2| Mean Value Theorem 11. Find the point "c" guaranteed to exist by the Mean Value Theorem for: f(x) =x2 + Sx +2 |-3,-2|... 5 answers ##### Repeated Imaginary Roots of Auxiliary Equations Solve the equations: (D2 + 9)2y = 0 (D2 + 49)2y = 0 2) 3) (D2 + 36)2y = 0 n 5) (D3 + 36D)2y = 0 (D3 + 100)2y = 0 Example (b) Solve the equation (D4 + 6D + 9)y = 0 Example (c) Solve the equation (D2 + D - 1)2y = 0 Example (d) Solve the equation (D2 + 16)2y = 0 Example (e) Solve the equation (D6 + 6D4+ 1202 + 8)y = 0 Repeated Imaginary Roots of Auxiliary Equations Solve the equations: (D2 + 9)2y = 0 (D2 + 49)2y = 0 2) 3) (D2 + 36)2y = 0 n 5) (D3 + 36D)2y = 0 (D3 + 100)2y = 0 Example (b) Solve the equation (D4 + 6D + 9)y = 0 Example (c) Solve the equation (D2 + D - 1)2y = 0 Example (d) Solve the equation (D2 + 1... 1 answer ##### Cell Biology Experiment. Calculating the Percent Mutagenesis and Percent Survival of D7 Yeast Strain provided in... Cell Biology Experiment. Calculating the Percent Mutagenesis and Percent Survival of D7 Yeast Strain provided in a Liquid Culture of Yeast-Peptone-Dextrose (YEPD) Medium. The experiment calls for a serial dilution of a starting cell concentration (of Yeast, Starting Concentration for Trial 1 is 3.28... 1 answer ##### Consider the table. Metal Tm(KA us (kJ/mol) T(K) AH vap (kJ/mol) Li 454 2.99 1615 134.7... Consider the table. Metal Tm(KA us (kJ/mol) T(K) AH vap (kJ/mol) Li 454 2.99 1615 134.7 Na 371 2.60 1156 8 9.6 K3362.33103377.1 Rb 312 2.34 956 Cs 302 2.10 942 66 Using the data, calculate ASfus and AS vap for Na. Asap K-mol AS- K-mol... 5 answers ##### Consider soccer ball launched into the air at an angle of 0 = 37.Oirc to the horizontal with the magnitude of its initial velocity 20.0m/s a) What are the initial velocity in the horizontal and vertical directions? b) Write down thc position and velocity equations in and y directions c) How high does it go? How far away from the origin docs it land? That is, horizontally) 7.5 meter high rock with speed of 3.2 m/s How far from the tiger leaps horizontally from base of the rock will she land? Est Consider soccer ball launched into the air at an angle of 0 = 37.Oirc to the horizontal with the magnitude of its initial velocity 20.0m/s a) What are the initial velocity in the horizontal and vertical directions? b) Write down thc position and velocity equations in and y directions c) How high do... 5 answers ##### Copernicus and Kepler engaged in what is called empirical science. What do we mean by empirical? Copernicus and Kepler engaged in what is called empirical science. What do we mean by empirical?... 5 answers ##### (8 pts) A reaction can be thermodynamically favorable but kinetically unfavorable_ What does that mean? When reaction thermodynamically kinetically favorable or unfavorable? Choose a suitable reaction and draw energy diagrams to explain these concepts_ (8 pts) A reaction can be thermodynamically favorable but kinetically unfavorable_ What does that mean? When reaction thermodynamically kinetically favorable or unfavorable? Choose a suitable reaction and draw energy diagrams to explain these concepts_... 4 answers ##### A tank in the form of a rectangular parallelepiped 6 ft. deep, 4ft. wide, and 12ft. long is full of oil weighing 50 ð‘™ð‘/ð‘“ð‘¡3. Whenone-third of the work necessaryto pump the oil to the top of the tank has been done, find by howmuch thesurface of the oil is lowered. Solve this problem byintegration. A tank in the form of a rectangular parallelepiped 6 ft. deep, 4 ft. wide, and 12 ft. long is full of oil weighing 50 ð‘™ð‘/ð‘“ð‘¡3. When one-third of the work necessary to pump the oil to the top of the tank has been done, find by how much the surface of the oil is lowere... 5 answers ##### Problem 6. Show that any linear function f (x)mx + b is uniformly continuous on R Problem 6. Show that any linear function f (x) mx + b is uniformly continuous on R... 5 answers ##### Construct the graph of the following functions:(i)$y=xleft(1-x^{2}ight)^{-2}$(ii)$y=2 x-1+(x+1)^{-1}$Construct the graph of the following functions: (i)$y=xleft(1-x^{2} ight)^{-2}$(ii)$y=2 x-1+(x+1)^{-1}$... 1 answer ##### Determine if there is enough information given in the diagram to prove each statement. $\angle 3 \cong \angle 4$ Determine if there is enough information given in the diagram to prove each statement. $\angle 3 \cong \angle 4$... 5 answers ##### Use any method to find each of the following: Your answers must be given in correct mathematical form in order t0 receive full credit: Show all work in the space beneath the problems Separate the work for each problem by drawing straight line across the page 9/2 9 {x%e2x+X4 XsinSx } 730{s+5y (c) 9 '{5224s510 2s_5 Use any method to find each of the following: Your answers must be given in correct mathematical form in order t0 receive full credit: Show all work in the space beneath the problems Separate the work for each problem by drawing straight line across the page 9/2 9 {x%e2x+X4 XsinSx } 730 {s+5y (c) 9... 5 answers ##### Consider the following system at equilibrium where AH? 16.1 kJ; and K 6.50x10-* at 298 K2NOBr(g) Fo(?) Br-(2)If the VOLUME ofthe equilibrium system is suddenly decreased at constant temperatureThe value of KcincreasesB decreases Iemains the same_The value of QcA. 18 greater than Kc: 3.is equal to Kc C.is less than KcThe reaction must:run in the forward direction to reestablish equilibrium B Tutl in the rererse difection to reestablish equilibrium Temain the same. It is already at equilibriumited Consider the following system at equilibrium where AH? 16.1 kJ; and K 6.50x10-* at 298 K 2NOBr(g) Fo(?) Br-(2) If the VOLUME ofthe equilibrium system is suddenly decreased at constant temperature The value of Kc increases B decreases Iemains the same_ The value of Qc A. 18 greater than Kc: 3.is equa... 5 answers ##### Chapter 26_ Problem 022Your answcr [ partially correct. Try again:Elying Circus of PhysicsKiting during storm . The legend that Benjamin Franklin flew kite a5 storm approached only legend he was neither stupid nor suicidal: Suppose string radlus 2.07 mm extends directly upward by 824 km and coated with 0.505 mm layer of water having reslstivity 179 Qm_ the potential difference between the two ends of the string 187 MV, what Is the current through the water Iayer? The danger not this current but Chapter 26_ Problem 022 Your answcr [ partially correct. Try again: Elying Circus of Physics Kiting during storm . The legend that Benjamin Franklin flew kite a5 storm approached only legend he was neither stupid nor suicidal: Suppose string radlus 2.07 mm extends directly upward by 824 km and coate... 5 answers ##### Complete the table below. (2 pts ) [H:] M[OH:] MPHTN10~5ti0"Show your work for question 4 in the space below: (4 pts:) Complete the table below. (2 pts ) [H:] M [OH:] M PH TN 10 ~5ti0" Show your work for question 4 in the space below: (4 pts:)... 1 answer ##### Apple Tree Enterprises allocated overhead based on direct material cost and has a predetermined overhead rate... Apple Tree Enterprises allocated overhead based on direct material cost and has a predetermined overhead rate of 161%. During the current period, direct labor cost is$62,000 and direct materials cost is $78,000. In the current period, determine the amount of overhead to be applied by Apple Tree Ent... 5 answers ##### 10,000 An electronic store can sell q = 33 cellular phones at a price p dollars per phone. The current price is S145_ (p + 43)(a) Is demand elastic or inelastic at p = 145? (b) If the price is lowered slightly: will revenue increase or decrease?(a) Is the demand elastic or inelastic at p = 145? (Type an integer or a decimal rounded to three decimal places as needed:)0A: Elastic, because E(p)when p = 145, which is greater than 1_0 B. Inelastic, because E(p) =when p = 145, which is greater than 1_ 10,000 An electronic store can sell q = 33 cellular phones at a price p dollars per phone. The current price is S145_ (p + 43) (a) Is demand elastic or inelastic at p = 145? (b) If the price is lowered slightly: will revenue increase or decrease? (a) Is the demand elastic or inelastic at p = 145? (T... 1 answer ##### A 800 kg car is accelerating at 3.8 m/s2. What is the net force acting on... A 800 kg car is accelerating at 3.8 m/s2. What is the net force acting on the car? Express your answer in newtons and round to the nearest whole number.... 5 answers ##### Aooo AtET =1:09 PM14% grambling mrooms3netThe Chemistry of Solutes and Solutions AssignmentDescribe how intenolecular interactions affect solubility pronenics Describe what IICAn by "Like dissolves like"" hen trying predict whether. compounds will be miscibl. Define each of the following Hydrophobic Hydrophilic Miscibl [mmiscible Enthalpy of solution Saturated solution Unsaturuated solution Supersaturated Henry LawDescribe how: temperature allects tha solubility of solids? Descnb Aooo AtET = 1:09 PM 14% grambling mrooms3net The Chemistry of Solutes and Solutions Assignment Describe how intenolecular interactions affect solubility pronenics Describe what IICAn by "Like dissolves like"" hen trying predict whether. compounds will be miscibl. Define each of the ... 5 answers ##### (11) 2, 3,4-trimethylbutanal(16) butanal(12)COzHCOzH CH3CHCH2CHzCHCH3(17)CH3CH3CCOzHCH3(13)CH2CO2H CH3CH2CH2CHCHzCH3(18) 2-hydroxypropanoic acid (11) 2, 3,4-trimethylbutanal (16) butanal (12) COzH COzH CH3CHCH2CHzCHCH3 (17) CH3 CH3CCOzH CH3 (13) CH2CO2H CH3CH2CH2CHCHzCH3 (18) 2-hydroxypropanoic acid... 1 answer ##### 9. Consider a PLC system with the following inputs and outputs. Design a ladder diagram that... 9. Consider a PLC system with the following inputs and outputs. Design a ladder diagram that does the following: • A timer counts up to 1.5 seconds, resets itself, and then repeats Every time S1 is on and the timer makes it to 1.5 seconds, a counter increments its accumulated value. PL1 turns o... 1 answer ##### Consider a particle in a 1-dimensional ininite square well potential {0, V(z)=Í oo, (-a < z... Consider a particle in a 1-dimensional ininite square well potential {0, V(z)=Í oo, (-a < z <a) elsewhere The particle is initially localized in the right side of the well (O S a) Calculate the probability that at later times, an energy measurement will yield the energy of the first exc... 4 answers ##### Let p € Ps(R) be cubic real polynomial. Prove that either all the roots of p are real or p has exactly one real root. Hint: Start with considering the situation that all the roots of p are complex and non-real and arrive to a contradiction Let p € Ps(R) be cubic real polynomial. Prove that either all the roots of p are real or p has exactly one real root. Hint: Start with considering the situation that all the roots of p are complex and non-real and arrive to a contradiction... 1 answer ##### Tanner-UNF Corporation acquired as an investment$240 million of 8% bonds, dated July 1, on July...
Tanner-UNF Corporation acquired as an investment $240 million of 8% bonds, dated July 1, on July 1, 2021. Company management is holding the bonds in its trading portfolio. The market interest rate (yield) was 10% for bonds of similar risk and maturity. Tanner-UNF paid$200 million for the bonds. The...
1 answer
1 answer
##### Please give your honest anwers to this problem. Find question attaced, thanks. Assign the bands in...
Please give your honest anwers to this problem. Find question attaced, thanks. Assign the bands in the infrared spectra (labelled A, B, C, D and E), attached below Appearance and disappearance of the important ones confirms that the reaction has taken place, identify and discuss this....
1 answer
##### How do bacteria live inside human body?
How do bacteria live inside human body?...
1 answer
##### It is known that a particular company produces products of which 30% are defective. We select...
It is known that a particular company produces products of which 30% are defective. We select items at random and identify it as being defective or not. Calculate the probability that the sixth selected item will be the 3rd defective. A. 0.38282 B. 0.09261 C. 0.24518 D. 0.75494 E. none of the above...
1 answer
##### Let X ∼ Binomial(n, p). Compute E(X^3)
Let X ∼ Binomial(n, p). Compute E(X^3)...
5 answers
##### Prions Prions None 2 Ilisted Prons Occr are the for spread 0 Il eadento listed 3 lintact the up into steps must occur suoud consumed spleen 8 has cCur m Infecti and dendriic cell ipresence prions Itake order for tor 8 ceherand place of squosuad gastric contaminated spinal amplify because 1 cord t0 prions acids braifood; dtnter prions which 0 8 can be Infect this 8 central nervous H Ifollowing steps system brain; must
Prions Prions None 2 Ilisted Prons Occr are the for spread 0 Il eadento listed 3 lintact the up into steps must occur suoud consumed spleen 8 has cCur m Infecti and dendriic cell ipresence prions Itake order for tor 8 ceherand place of squosuad gastric contaminated spinal amplify because 1 cord t0...
5 answers
##### Anautomoble manulacturcr claims that thelr car has a 49.2 mlesrgallon (MPG) rating; An Independenr testlng firm has been contrected t nr HN MPG for thls car. After testing [ [ cars they faund a mean MPG of49.6 wirh: sanduro deviation 0t 1.7. Is there sutfclent uvdence attha 0. 1 Lavele that the cars have An incorrect manufacturers MPG rating? Assume the population disribution "peraxinutely nonalStee 4 al 5: Determine the decision rule for rejecting the null hypothesisRound your nswrt three
Anautomoble manulacturcr claims that thelr car has a 49.2 mlesrgallon (MPG) rating; An Independenr testlng firm has been contrected t nr HN MPG for thls car. After testing [ [ cars they faund a mean MPG of49.6 wirh: sanduro deviation 0t 1.7. Is there sutfclent uvdence attha 0. 1 Lavele that the cars...
1 answer
##### Plz solve step by step in a full format. Question 1 ABC Ltd is considering using...
plz solve step by step in a full format. Question #1 ABC Ltd is considering using direct costing method for decision making instead of absorption costing method. Following data has been summarized for that purpose: Units Units RS Annual Maximum Plant capacity Annual Normal Plant capacity Fixed Facto...
1 answer
##### The first-order reaction A B has k = 1.25 s, If [Alo 0.450 M, how long...
The first-order reaction A B has k = 1.25 s, If [Alo 0.450 M, how long will it take [A] 0.102 M? (in s)...
1 answer
##### PLEASE TYPE THE ANSWER. Critical Thinking Questions for Submissions: The use of anti-diarrhea drugs is contraindicated...
PLEASE TYPE THE ANSWER. Critical Thinking Questions for Submissions: The use of anti-diarrhea drugs is contraindicated in what type/types of patients? What are the common side effects in the misuse of laxatives in the eating disorder population Discuss the indications for the prescriptions of the d...
-- 0.027125--
|
2023-03-25 12:01:38
|
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 2, "mathjax_display_tex": 0, "mathjax_asciimath": 1, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.35362428426742554, "perplexity": 12541.839446405236}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 20, "end_threshold": 15, "enable": false}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2023-14/segments/1679296945323.37/warc/CC-MAIN-20230325095252-20230325125252-00517.warc.gz"}
|
https://stats.stackexchange.com/questions/123277/exploring-dependencies-between-variables-in-log-linear-models
|
# Exploring dependencies between variables in log-linear models
Hi there I'm using R to perform some multivariate data analysis on health data. I'm currently using the glm() function with family=poisson to perform the log-linear analysis count~yf*tsf, where count is a vectorised contingency table, and yf and tsf are the category factors for the 2x2 contingency table.
However I also want to model it adjusting for variables age, sex and edu, which I've coded as categorical with 2, 2, and 4 categories respectively. So I have also tried glm(count~yf*tsf+age+sex+edu). Here count is accounting for the 3 extra variables and becomes a 2x2x2x2x4 contingency table. However there seems to be no change in the intercept and p-vals of the common variables between the two models, the interaction term being of particular interest. Can anyone help me figure out if this is normal?
Here's the output from the two models:
Call:
glm(formula = as.vector(count) ~ yf * tsf, family = poisson)
Deviance Residuals:
[1] 0 0 0 0
Coefficients:
Estimate Std. Error z value Pr(>|z|)
(Intercept) 8.14090 0.01707 476.920 < 2e-16 ***
yf2 -3.32871 0.09177 -36.273 < 2e-16 ***
tsf2 2.12068 0.01806 117.395 < 2e-16 ***
yf2:tsf2 -0.39328 0.09951 -3.952 7.74e-05 ***
---
Signif. codes: 0 ‘***’ 0.001 ‘**’ 0.01 ‘*’ 0.05 ‘.’ 0.1 ‘ ’ 1
(Dispersion parameter for poisson family taken to be 1)
Null deviance: 6.0961e+04 on 3 degrees of freedom
Residual deviance: -1.2701e-13 on 0 degrees of freedom
AIC: 45.107
Number of Fisher Scoring iterations: 2
Second model:
Call:
glm(formula = as.vector(count) ~ yf * tsf + sexf + agf + eduf,
family = poisson)
Deviance Residuals:
Min 1Q Median 3Q Max
-14.6551 -3.4560 0.7385 2.4046 14.1452
Coefficients:
Estimate Std. Error z value Pr(>|z|)
(Intercept) 6.02008 0.02005 300.221 < 2e-16 ***
yf2 -3.32871 0.09177 -36.273 < 2e-16 ***
tsf2 2.12068 0.01806 117.395 < 2e-16 ***
sexf2 -0.16551 0.01107 -14.950 < 2e-16 ***
agef2 -1.24170 0.01323 -93.863 < 2e-16 ***
eduf2 -1.24946 0.02187 -57.131 < 2e-16 ***
eduf3 0.46624 0.01317 35.405 < 2e-16 ***
eduf4 -0.47650 0.01668 -28.570 < 2e-16 ***
yf2:tsf2 -0.39328 0.09951 -3.952 7.74e-05 ***
---
Signif. codes: 0 ‘***’ 0.001 ‘**’ 0.01 ‘*’ 0.05 ‘.’ 0.1 ‘ ’ 1
(Dispersion parameter for poisson family taken to be 1)
Null deviance: 84216.5 on 63 degrees of freedom
Residual deviance: 2029.3 on 55 degrees of freedom
AIC: 2437.7
Number of Fisher Scoring iterations: 5
|
2019-10-15 02:27:45
|
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 1, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.6398773193359375, "perplexity": 5162.987673358106}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.3, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2019-43/segments/1570986655735.13/warc/CC-MAIN-20191015005905-20191015033405-00310.warc.gz"}
|
http://aa.quae.nl/cgi-bin/glossary.cgi?l=en&o=Small%20Magellanic%20Cloud
|
Astronomy Answers: From the Astronomical Dictionary
# Astronomy AnswersFrom the Astronomical Dictionary
$$\def\|{&}$$
The description of the word you requested from the astronomical dictionary is given below.
the Small Magellanic Cloud
The Small Magellanic Cloud is a small, irregular galaxy that is very close to our own Milky Way Galaxy, at about 210,000 lightyears from us. The Small Magellanic Cloud is in the constellation Tucana and can (sometimes) be seen with the unaided eye from places south of 30 degrees north latitude.
The Small Magellanic Cloud (abbreviation SMC) is called "Nubecula Minor" in Latin, and is also called NGC 292.
|
2018-02-23 16:46:59
|
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 1, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.2848488688468933, "perplexity": 2988.237895140515}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": false}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2018-09/segments/1518891814801.45/warc/CC-MAIN-20180223154626-20180223174626-00207.warc.gz"}
|
https://wiki.iac.isu.edu/index.php?title=Limit_of_Energy_in_Lab_Frame&oldid=122032
|
# Limit of Energy in Lab Frame
The t quantity is known as the square of the 4-momentum transfer
In the CM Frame
where and is the angle between the before and after momentum in the CM frame
Using the relativistic relation this reduces to
The maximum momentum is transfered at 90 degrees, i.e.
This can be rewritten again using the relativistic energy relation
In the Lab Frame
with
and
|
2022-05-18 14:30:47
|
{"extraction_info": {"found_math": false, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.9594634771347046, "perplexity": 1394.6652008843707}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2022-21/segments/1652662522270.37/warc/CC-MAIN-20220518115411-20220518145411-00694.warc.gz"}
|
http://mechdesigner.support/transmission-design-considerations-cams.htm
|
# Rigidty: Transmission Design Considerations
## Transmission Design Considerations of Cam Mechanisms
Input Transmission: all of the transmission components from the power source (usually an electric motor) to the cam.
Output Transmission: all of the transmission components from the cam-follower (usually a roller) to the payload (also called: end-effector, or tool).
The components in the transmissions typically include shafts, gears, gearboxes, couplings, chain drives, belt drives, linkages.
### Three[3] Mechanical Properties of the Transmission:
The performance of the input and output transmissions is a function of three parameters: Strength... the ability to withstand the forces and torques without fracture or yield. The design must be strong enough to transfer the peak force or torque. The design of components is more in the field of strength of materials, not cam systems. Rigidity... the ability to transmit the force and torque without too much deflection. Rigidity is important in the generation of vibration. All transmission components have elasticity, the reciprocal of rigidity. When a metal component is stressed within its elastic limit, it strains elastically. Its distortion, or deflection, is related to its size and shape, and is proportional to the load applied. When it is stressed beyond its elastic limit it suffers plastic deformation. Plastic deformation does not recover when you remove the stress. It may also suffer hysteresis within the elastic limit. Hysteresis is an energy absorbing phenomenon, whereby the strain produced by an increasing load is not fully reduced by a decreasing load. Hysteresis is responsible for internal damping of vibrations. But this effect is unlikely to be significant in cam systems. To simplify analysis, we assume that transmission components are perfectly elastic with no hysteresis. If there are several components connected in series, as is a typical cam transmission, the deflections add together so that the overall deflection from one end of the transmission to the other is the sum of the individual deflections. When gearing is involved, different parts of the transmission may be subject to different torques and the deflections at one side of the a gear pair may be transformed in a different deflection at the other side of the gear. To assess the rigidity of a transmission as whole, therefore, it is necessary to estimate the rigidity of each component and combine them in a particular way. Backlash...lost transmission with a reversal of torque or force. We will review the detrimental effects of backlash in the next topic. Here.
### Input Transmission: Rigidity and Stiffness
The Input Transmission includes the power transmission components from the power source [usually a motor] to the cam.
Most cams in industrial machines rotate and they are driven by rotating motors. Thus, the components are rotary, and they will have an angular deflection that is proportional to the applied torque (pulleys and sprockets of belts and chains are also 'rotary components'). The simplest components are shafts, but these are often constructed with sections of different diameters. Shafts that are connected in series frequently have different diameters.
#### Shaft Rigidity
The Rigidity of rotary components is defined as the torque divided by the angular deflection, [T/Θ = G.J/L] :
$R=\frac{\pi .G\left({D}^{4}-{d}^{4}\right)}{32.L}$ ....Equation 1
R = Rigidity of Circular Shaft [N.m/rad]
G = Modulus of Rigidity of the Material (Shear Stress/Shear Strain) [N/m2]
D = Outside diameter of the Shaft [m]
d = inside diameter of shaft [m]
L = Length of shaft subject to torsion [m]
If you want the result in 'English/American' units, I always do all calculations using Metric units, and find a conversion table to convert the answer to English units.
Note: I do not know any British/English engineer, that is younger than 90years old, who works with 'English' units.
The overall Rigidity of a shaft with three sections, calculated using the above rigidity equation is:
$\frac{1}{R}=\frac{1}{{R}_{1}}+\frac{1}{{R}_{2}}+\frac{1}{{R}_{3}}$ .... Equation 2
This equation applies to any mixture of diverse components - gears, couplings - that are connected in series.
From this equation, it can be seen that the overall rigidity of the complete transmission is always less than the rigidity of any of its components.
The least rigid component is the most significant. Often the least rigid component dominates the result, such that very rigid components are not important to the final result.
EXAMPLE: A Stepped Shaft: three different lengths and diameters in series.
D1 = 30mm d1 = 15mm L1 = 40mm D2 = 25mm d2 = 15mm L2 = 35mm D3 = 22mm d3 = 0mm L3 = 160mm
[Modulus of Rigidity of Steel, G = 82.5x109 N/m2]
1/R = 0.000006503 + 0.00001271 + 0.00008433 = 0.00010354
This shows that the overall rigidity is less, but not much less, than the least rigid part of the shaft. Had we ignored the most rigid part - R1 - the result would have been 10,305 and not 9,657N.m/rad, only 7% more.
#### Gear Rigidity
There are several type of gears used in industrial machines - Spur Gears, Worm Gears and Belt and Chain Drives. Generally, the gear wheel itself has a torsional rigidity that is high enough to be ignored, but the loaded gear teeth themselves may distort sufficiently to contribute significantly to the overall elasticity of the transmission, particularly on small diameter pinions. Both the driving and the driven teeth bend and also compress under Hertzian Stress. These two distortions combine to give tangential linear deflection at the pitch line which varies somewhat as each tooth passes through the contact zone. But this variation can be ignored here. The stiffness* of the the tooth can be defined as the tangential force divided by the tangential deflection when the contact point is on the common center-line, and can only be approximately calculated from the tooth dimension and material properties. When possible it is best to measure the torsional rigidity rather than estimate it. However, some design guidance is derived from considering the relationship between tooth stiffness and torsional rigidity*. * In this topic, we use Stiffness to apply to Linear-Stiffness (N/mm), and Rigidity to apply to Angular-Stiffness (N/rad). The image shows a schematic of 'rigidity* of a spur-gear' - a pair of unequal spur gears is transmitting a torque with a tangential tooth force. The deflections are analogous to those of a pair of levers whose tips are connected by a spring, as shown as the 'Equivalent Gear-Pair'. Relating Force, Stiffness and Deflection Total Tangential Linear Deflection, δ $\delta =F/S$ F Tangential Force [N], at the pitch-circle [equal and opposite of course] S : Linear Stiffness [N/m] of gear teeth in series [F/δ] Thus, for small angles, angular deflections: Gear A rotates by of δ/ra radians, if Gear B is held stationary Gear B rotates by δ/rb radians, if Gear A is held stationary. Relating Torques, Stiffness and Deflection Let Ma and Mb be the Moments [Torques applied] on Gears A and B , whose pitch circle radii are ra and rb. [m]. The torsional rigidity of Gear A relative to Gear B is Ra,(= Torque[N.m] ÷ angular deflection[rad] ), and is found with: ${M}_{a}=F.{r}_{a}$ ∴ ${R}_{a}=\frac{{M}_{a}}{\delta /{r}_{a}}=\frac{{F.{r}_{a}}^{2}}{\delta }={S.{r}_{a}}^{2}$ ... Equation 3 Similarly, the effective rigidity of Gear B relative to Gear A is ∴ ${R}_{b}=S.{{r}_{b}}^{2}$ ... Equation 4 In general, Torsional Rigidity, R, is related to Linear Stiffness, S, acting at a radius, r, by the equation: ∴ $R=S.{r}^{2}$ ... Equation 5 In a well designed transmission system, the linear stiffness of the gear teeth is not very important. The torsional rigidity of a gear is proportional to the linear stiffness of the gear tooth. Also, the larger the tooth the more rigid the gear. More important is that the Torsional Rigidity is proportional to the square of the pitch circle radius. Therefore, use large gears if possible, even if small gears are strong enough. Similar results are obtained from the analysis of the rigidity of other types of gear pair, such as bevel gears or worm gears. The estimate of tooth stiffness from design information may not be very accurate. Therefore, it is best to obtain a measured rigidity value from the gear manufacturer, or from a bench test. All types of gearing usually operate with a small amount of backlash. This can cause problems in a cam transmission when when there is a reversal of torque in every motion cycle. The aim in cam transmissions is to have minimum backlash, with an acceptable initial cost. Note, the power-loss may increase with increased friction as backlash is reduced and cause high-speed drives to overheat. As we have already seen, the overall elasticity of a power transmission, is the sum of the elasticities of each section when connected in series (Equation 2). In effect, the elasticity of one section is 'transmitted' to the next. When there is gearing between the two sections, however, the transmitted elasticity is modified by the gear ratio. To study this effect, assume a pair of gears with infinitely stiff teeth, the input gear having Zi teeth, and the output gear with Zo teeth. The gear ratio is: ${Z}_{i}/{Z}_{i}$. Now, let: Ri = Rigidity of all mechanical components before the output gear R0 = Rigidity of all the mechanical components after the gear. also: Mi = Input Torque of the Gear Pair Mo = Output Torque of the Gear Pair The torsional deflection of the input shaft is: ${M}_{i}/{R}_{i}$ This is transmitted as a deflection at the output gear as $\left(\frac{{M}_{i}}{{R}_{i}}\right).\left(\frac{{Z}_{i}}{{Z}_{o}}\right)$ The torsional deflection of the output sections of the transmission is ${M}_{o}∕{R}_{o}$ This is added to the transmitted deflection. The total deflection at the output end of the transmission is therefore: ${M}_{0}∕{R}_{o}$ The overall elasticity, (reciprocal of Rigidity) as seen at the output is this deflection divided by the output torque: $\frac{1}{R}=\frac{{M}_{i}.{Z}_{i}}{{R}_{i}.{Z}_{o}.{M}_{o}}+\frac{1}{{R}_{o}}$ Ignoring Gear Efficiency, $\frac{{M}_{i}}{{M}_{o}}=\frac{{Z}_{i}}{{Z}_{0}}$ Therefore, the equation for overall elasticity at the output becomes: $\frac{1}{R}=\frac{1}{{R}_{i}{\left({Z}_{o}∕Zi\right)}^{2}}+\frac{1}{{R}_{o}}$ ...Equation 6. This is similar to Equation 2, but that the first term has been modified. The rigidity 'transmitted' by gearing is multiplied by the gear ratio squared. In a similar way, we can show that the backlash 'transmitted' by gearing is directly proportional to the gear ratio. A reduction gear increases rigidity. A step-up gear reduces rigidity. Thus, when a long transmissions is unavoidable and a reduction gearing is necessary, make the longest part of a geared transmission be the high-speed shaft: it transmits less torque than the low-speed shaft, and thus a smaller diameter based on strength, and its elasticity is reduced by the square of the gear ratio.
#### Chains and Belts Rigidity
Equations 3, 4 and 5 can be applied to chain drives, but in that case the stiffness, S , refers to the stiffness of the loaded length of chain between the chain-wheels. For a given chain size, the stiffness is inversely proportional to its length: very long chain drives should therefore be avoided as should small diameter sprockets. Belt drives behave in a similar way, but are generally less satisfactory than chain drives. Flat belt and vee belt drives are seldom used in cam system transmissions (except at very high speed, e.g. the primary drive from the electric motor where their elasticity is not important). Timing belt drives [belts and pulleys with 'teeth'] are common because they give an exact speed ratio for synchronizing with other mechanisms in the machine. Timing belts are made of reinforced synthetic rubber and are rather elastic compared to metal chains of similar strength. This is partly because the rubber belt teeth tend to roll slightly in the pulley grooves under heavy loads. More recently, the tooth profile is improved so that this problem is reduced. Nevertheless, timing belt drives are very successfully used in cam transmissions because they are almost silent and need no lubrication. The backlash problem with chains and belts is similar to that with gears, but usually more severe. Slack chain drives are quite common in conventional steady torque transmissions, and not particularly detrimental to them. The use of chain tensioner devices, of which there are many types commercially available, is strongly recommended for all cam transmissions, and are essential for high-speed or high-inertia applications.
#### Bearing Support Rigidity
One effect of using gears, chain drives, etc. in a transmission, which is often overlooked, is the flexibility of the bearing supports. The tooth load produces an equal reaction force at the gear supports. When the gears are in a rigid casting (for example, a commercial gear-box and cam-box) the elastic deflections of the supports are usually small enough to be ignored. However, the reaction torque on the structure that supports the cam-box may itself be important. A rigid gear-box or cam-box is no advantage if it is not rigidly supported, or the frame deflects. Gears and chain wheels are sometimes unavoidably mounted on shafts far from the shaft bearings. This means the bending of the shaft due to the tooth load becomes a significant part of the overall rigidity of the transmission. Lateral deflection of the shaft has the same effect on angular displacement as tooth deflection and is mechanically in series with it. The image shows how the linear deflection of the shaft produces an angular deflection of the gear (or chain wheel) so that the effect is similar to torsional elasticity. Here the shaft is displaced by the tooth contact force, F.sec(Ф), where F is the tangential force and Ф is the gear pressure angle. This acts at a distance of r.cos(Ф) from the centre of the shaft, Therefore, the shaft torque is: $F.\mathrm{sec}\Phi .r.\mathrm{cos}\Phi =F.r$ The angular deflection of the shaft, however, is $\left(\delta .\mathrm{cos}\Phi \right)∕r$ Thus, the effective torsional rigidity is: $R=\frac{F.r}{\left(\delta .\mathrm{cos}\Phi \right)∕r}=\frac{F.{r}^{2}}{\delta .\mathrm{cos}\Phi }$ .... Equation 7 ...where δ is the lateral [sideways] deflection of the shaft. If the lateral stiffness of the shaft in the plane of the gear is S, then: $S=\text{force╱deflection}=F.\mathrm{sec}\Phi /\delta$ ...and the equivalent Torsional Rigidity is... $R=S.{r}^{2}$ This is the same as Equation 2. The relationship between lateral shaft stiffness and torsional rigidity is exactly the same as for tooth stiffness and is independent of the gear tooth pressure angle. From beam bending theory, we find that for a simply supported shaft of constant cross-section, the lateral deflection at the sprocket for a unit load is: $\delta =\frac{{{l}_{1}}^{2.}{{l}_{2}}^{2}}{3.E.J.L}$ ...and its reciprocal is $S=\frac{3.E.J.L}{{{l}_{1}}^{2.}{{l}_{2}}^{2}}$ .... Equation 8 Where: S = lateral stiffness of the shaft [N/m] l1 & l2 = distance from each side of the sprocket to each bearing that supports the shaft [m] E = Young's Modulus of elasticity of the shaft material J = second moment of area of the shaft cross-section. For bending of a circular shaft, J = Π. (D4 - d4)/64 [m4] L = Total length of shaft between the supports [m] If the shaft bearings are themselves mounted on a flexible frame structure, the lateral deflection of the frame produced by bearing reaction forces must also be taken into account in a similar way to the lateral deflection of the shaft. Structural flexibility is in series with all the other transmission elasticity.
#### EXAMPLE: Input Transmission Rigidity
Design Arrangement of an Input Transmission - From the Drive Motor to the Cam.
A Camshaft is driven by a 2:1 reducing chain drive from a primary shaft, which is driven by a motor and worm gear-box.
Find the overall rigidity of the input transmission to the cam, from the output of the worm gear-box.
Note: We want to find out how effectively the transmission is given to the cam by the worm gear-box.
The shafts are made of medium carbon steel, and the chain and solid sprockets are steel. G = 82.5 x 109 N/m2
The coupling is sufficiently rigid in torsion, and the shaft mountings are stiff enough to be ignored in the calculation.
The transmission chain is 0.5inch pitch with a stiffness of 6 x 106N/m, for a 1m length.
Find the rigidity of each section of the transmission separately and then combine them into one overall rigidity.
1: Rigidity of the Primary Shaft: Torsion
The section of the shaft transmitting torque is Ø38, and 454mm long. Therefore its rigidity is:
This shaft is connected to the cam-shaft by a 2:1 reduction drive. Therefore, its rigidity referred to the cam-shaft is:
2: Rigidity of the Primary Shaft: Bending
The Pitch Circle Radius of the Sprocket is 61mm.
Therefore, the equivalent torsional rigidity is: (using Equation 3)
The Rigidity referred to the cam-shaft via the 2:1 reduction is:
3: Rigidity of the Drive Chain
From the drawing above:
Length of chain that stretches under load = 496mm.
Stiffness, for 1m length = 6 x106 N/m
The Pitch Circle Radius of the Chain-wheel is 122mm. Therefore, the rigidity of the chain referred to the cam-shaft is:
4: Rigidity of the Cam Shaft Bending
5: Rigidity of the Cam Shaft Torsion
Overall Torsional Rigidity of the Input Transmission
The Overall Rigidity of the transmission referred to the cam is:
Comment:
The most significant element with elasticity is the bending of the primary shaft. It has the lowest Rigidity, R2. This points to the possibility of a considerable improvement if:
• The chain-drive could be moved closer to the right-hand bearings
and / or
• the sprocket could be increased in diameter
### Output Transmission
The estimation of rigidity of an output transmission is exactly the same as for an input transmission. Most inputs, of course, drive rotary cams and therefore rigidity is expressed as the overall torsional rigidity. With output transmissions, however, we are dealing with a payload that is driven by the cam follower and the motion may be linear (reciprocating) or rotary (oscillating or indexing).
If the follower motion is a translating, linear motion, then the overall rigidity of the output transmission is expressed as a linear stiffness referred to the follower.
If the follower motion is a swinging, rotating motion, then the overall rigidity of the output transmission is expressed as a torsional rigidity referred to the follower axis.
Levers and links are similar to gears and chain-wheels. A linear deflection, δ , at a point under Force, F, at a distance, r , from the lever pivot, or fulcrum, can be expressed as a Linear-Stiffness S = F/δ at that Point. This is translated into Torsional-Rigidity at the pivot of R=S.r2. This is the same as Equation 5, above. The designs of levers are many and varied. The calculation of lever stiffness is therefore not considered here. It comes within the conventional theory of deflection of beams. In practice, well designed levers are seldom a significant source of elasticity in transmissions, unless they are very long. The elongation and compression of link (pull or push-rods) are analogous of a chain, described above, and the relationship between the stiffness of a link and the torsional rigidity at a lever pivot is exactly the same as between a chain and its chain-wheel shaft. Equation 5 applies.
#### EXAMPLE: Output Transmission Rigidity
Design Arrangement of an Output-Transmission - from the cam-follower to the payload. A cam-driven mechanism is operated by a lever and pull-rod transmission with a stroke-increasing ratio of 1 .5:1 as shown in Fig. 1 1 .5. A 60 mm long follower arm is keyed to one end of a 20 mm diameter steel pivot shaft on the other end of which is keyed a 90 mm long pull-rod lever. That lever pulls a payload by means of an 8 mm diameter x 120 mm long steel pull-rod. Find the overall transmission rigidity from the cam to the payload, assuming that the follower arm and pull-rod lever are stiff enough to be ignored in the calculation. Working back from the payload to the cam: 1: Pull-Rod The stiffness of a bar in pure tension or compression is: where: E = Young's Modulus (~205 x 109 N/m2 for steel) A = cross-section area L = length under stress This acts at the end of a 90mm long lever. Thus, the torsional rigidity referred to the pivot shaft (Equation 5) is: 2: Pivot shaft bending at pull-rod position Using Equation 8 and 5: Rigidity referred to the pivot shaft is: 3: Pivot shaft torsion The length of the shaft in torsion is 240mm. From Equation 1 4: Pivot shaft torsion The shaft is symmetrical along its length and its stiffness in bending, S , at this position is the same as at the pull-rod position. The rigidity referred to the pivot shaft is therefore: The overall torsional rigidity of the cam output transmission at the follower arm pivot pivot is: This could be expressed as a linear stiffness at the follower roller by transposing Equation 5: It is clear from the above figures that torsion of the pivot shaft is by far the most elastic element, showing that the transmission rigidity can be considerably improved, if necessary, by increasing the shaft diameter. Because shaft rigidity and stiffness are proportional to the fourth power of diameter, an increase from 20mm to 24mm would approximately double the overall rigidity.
#### Couplings
Couplings are important in any transmission.
They must be able to compensate for misalignment between two shafts.
The misalignment may be classified as:
• Parallel Offset
• Angular
• Axial
Coupling may also prevent a transmission shock to be transmitted. They may be designed to de-couple if a torque is exceeded.
There are many types and designs of shaft coupling that are available commercially, as well as those for special purpose designs.
Rigid
'Rigid' they are not intended to allow relative movement of any kind between the coupled shafts.
Self-Aligning
'Self-aligning' allow limited movement between the shafts, usually to provide for either accidental or deliberate misalignment.
Of the latter, some allow all degrees-of-freedom - these are the 'flexible' couplings - and some allow only one or two degrees-of-freedom.
Flexible couplings transmit torque via a resilient medium (rubber, plastics or metal spring) and are not torsionally rigid. They are not recommended for cam systems.
Torsionally Rigid
Of the torsionally rigid couplings, the:
Cardan Joint allows a large degree of angular misalignment
Oldham Coupling allows a large degree of parallel offset and some end float.
Gear and the Chain Couplings allow a small degree of freedom in all directions except torsion.
With the exception of the membrane type, the mechanical couplings are subject to gradual wear which results in rotary backlash, which may be a problem for cam drive transmission if the couplings are not large enough.
Splined Shaft Coupling is a form of coupling which allows substantial axial displacement of a shaft while retaining torsional rigidity. However, it must have clearance to allow for the relative movement, and because the pitch circle radius of splines is inevitably small the rotary backlash is potentially severe.
Membrane or Diaphragm Couplings have a thin flexible membrane, sometimes laminated, attached alternately to the two hubs.
This allows limited angular misalignment and end float, but virtually no torsional elasticity and no backlash. Lateral shaft offset, if required, can be achieved using two such couplings separated by a short length of shaft. This design of coupling is ideal for cam drives, for both input and output transmissions.
Tutorial and Reference Help Files for MechDesigner and MotionDesigner 13.2 + © Machine, Mechanism, Motion and Cam Design Software by PSMotion Ltd
|
2020-02-25 21:52:20
|
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 22, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.7003743648529053, "perplexity": 1843.417475842183}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2020-10/segments/1581875146160.21/warc/CC-MAIN-20200225202625-20200225232625-00274.warc.gz"}
|
https://www.physicsforums.com/threads/fluid-mechanics-linear-momentum-analysis.718170/
|
# Fluid Mechanics - Linear momentum Analysis
1. Oct 22, 2013
### gmy5011
1. The problem statement, all variables and given/known data
Water is flowing into and discharging from a U-shaped pipe section as shown. At flange
(1), 30 kg/s of water flows into the section with the total absolute pressure of 200 kPa. At flange (2), the absolute pressure is 150 kPa. At location (3), 8 kg/s of water discharges to the atmosphere at 100 kPa. Determine the total x and y forces on the flanges connecting the pipe bend. Do not neglect the viscous losses in the pipe bend. Use a momentum-flux correction factor to be 1.03. In your discussion answer the following question: Is it possible to find the force on each flange individually? Why or why not?
This is the given question. What I don't understand is when we are summing the forces, why is the force at flange 2 acting in the positive direction(to the right)? If you see in my solution attempt, I made it negative but in my teachers solution she has it positive. If someone could explain this too me that would be great.
2. Relevant equations
eqn.1 $\dot{m}$ = $\rho$VA
eqn.2 $\Sigma$F = $\Sigma$out$\beta$$\dot{m}$V - $\Sigma$in$\beta$$\dot{m}$V
3. The attempt at a solution
Used eqn.1 to solve for all 3 velocites
then eqn.2 to solve for Frx: Frx + P1V1 - P2V2 = $\beta$$\dot{m}$(-V2) - $\beta$$\dot{m}$V1
File size:
7.9 KB
Views:
206
|
2017-10-20 16:00:03
|
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.5671421885490417, "perplexity": 1223.9973689941642}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2017-43/segments/1508187824226.31/warc/CC-MAIN-20171020154441-20171020174441-00063.warc.gz"}
|
https://www.gradesaver.com/textbooks/math/precalculus/precalculus-10th-edition/chapter-5-exponential-and-logarithmic-functions-5-5-properties-of-logarithms-5-5-assess-your-understanding-page-305/37
|
Precalculus (10th Edition)
$2+\log_5{x}$
Recall: (1) $\sqrt[m]{a}=a^{\frac{1}{m}}$ (2) $\log_a {x^n}=n\cdot \log_a {x}$. (3) $\log_a{xy}=\log_a{x} +\log_a{y}$ (4) $\log_a{\frac{x}{y}}=\log_a{x} -\log_a{y}$ Use rule (3) above to obtain $\log_5 {(25x)}=\log_5 {25}+\log_5 {x}.$ Use rule (2) to obtain $\log_5 {25}+\log_5 {x}\\=\log_5 {5^2}+\log_5 {x}\\=2\cdot \log_5 {5}+\log_5 {x}\\=2+\log_5{x}$
|
2018-11-16 19:49:52
|
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.9846457242965698, "perplexity": 695.9870202365797}, "config": {"markdown_headings": false, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 20, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2018-47/segments/1542039743184.39/warc/CC-MAIN-20181116194306-20181116220306-00237.warc.gz"}
|
http://math.stackexchange.com/questions/55117/smoothness-of-harmonic-functions
|
Smoothness of harmonic functions
In the book on PDEs I'm reading there is a section on harmonic functions. To prove that these functions are in the class $C^\infty$ the author use standard mollifiers which I am not comfortable with. If there another proof of the $C^\infty(U)$ for the functions $u$ such that $\Delta u = 0$ on $U$?
-
But then you should take this as a motivation to learn about mollifiers! It's not that hard and extremely useful. – t.b. Aug 2 '11 at 12:06
@Theo: thanks for advise ) only if it is extremely useful. Are they used then to introduce the integration on manifolds? I remember there $C^\infty$ functions with compact supports. – Ilya Aug 2 '11 at 12:08
Well, we're getting off-topic here: for integration on manifolds you use partitions of unity. However, concerning mollifiers: I told you that they are extremely useful and I mean it. In fact, I can't imagine that you'll get very far in your book on PDEs without learning about them at some point. – t.b. Aug 2 '11 at 12:14
@Theo: I thought that for the partitions of unity one uses $C^\infty_c$ function, doesn't it? Anyway, I will follow your advise - but still I am interested in the mollifier-free proof of smoothness for the harmonic functions. – Ilya Aug 2 '11 at 12:19
The other possible approach is via the Poisson kernel and uniqueness theorem/maximum principle (take a small disk and note that the harmonic function is the Poisson integral of its boundary values). – fedja Aug 2 '11 at 12:48
Suppose $u$ is harmonic in $U$ (that is, $u\in C^2(U)$ and $\Delta u = 0$ in $U$). Let $x$ be a point of $U$ and $B= B(x,r)$ the open ball centered at $x$ with radius $r>0$ so small that $\overline B\subset U$. Then $$u(y) = \int_S P(y,z)\,\sigma(dz),\qquad y\in B,$$ where $S=S(x,r)$ is the boundary of $B$, $\sigma$ is the surface area measure on $S$, and $P(y,z)$ is the Poisson kernel for $B$: $$P(y,z) = {r^2 - |y|^2\over rc_d|y-z|^2},$$ $c_d$ being the surface area of the unit sphere in $R^d$. As the Poisson kernel is manifestly smooth in $y\in B$, the smoothness of $u$ follows from the above and standard theorems for differentiatng under an integral. The Poisson integral representation shown above can be proved using the Green/Stokes theorem. (See, for example, the first chapter of Doob's book on potential theory, or Helms' book on the same subject, or "Green, Brown, and Probability" by K.L. Chung.)
-
The right hand side of the first equation does not depend on the function $u$. I guess the integral is missing a factor of $u(z)$ or so. – Byron Schmuland Oct 4 '11 at 23:17
In an answer I posted last month, I showed that the mean-value property is sufficient to show that harmonic functions are $C^\infty$ on the interior of their domains. I don't know if this makes you feel any more comfortable, but it might be worth a look.
In two dimensions you can do it like this: If $u$ is a harmonic function, then $u$ is the real part of a holomorphic function, which is differentiable infinitely many times. Therefore $u$ is also $C^\infty$.
Without loss of generality $u$ is the real part of a holomorphic function since being $C^\infty$ is a local property. Does that make you feel better? – Matt Oct 15 '11 at 23:10
|
2014-07-22 23:53:32
|
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 1, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.8916628360748291, "perplexity": 175.07225488849815}, "config": {"markdown_headings": false, "markdown_code": false, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2014-23/segments/1405997869778.45/warc/CC-MAIN-20140722025749-00101-ip-10-33-131-23.ec2.internal.warc.gz"}
|
http://mathhelpforum.com/calculus/81520-application-integral-word-problem.html
|
# Thread: application integral word problem
1. ## application integral word problem
I need some help on this word problem.
A water tank is in the shape of a right circular cone of altitude 10 feet and base radius 5 feet, with it's vertex at the ground. If the tank is full, find the work done in pumping all of the water out the top of the tank.
Note: The weight of water is not given so I assume it is the standard 62.4pi.
Would I need to slice the interval [0,10] or do I need to factor in the base radius?
Is this close $\frac{62.4\pi}{4}\int(y^2)(10-y)dy$
2. Originally Posted by gammaman
I need some help on this word problem.
A water tank is in the shape of a right circular cone of altitude 10 feet and base radius 5 feet, with it's vertex at the ground. If the tank is full, find the work done in pumping all of the water out the top of the tank.
Note: The weight of water is not given so I assume it is the standard 62.4pi.
Would I need to slice the interval [0,10] or do I need to factor in the base radius?
Is this close $\frac{62.4\pi}{4}\int(y^2)(10-y)dy$
sketch the lines $y = 2x$ and $y = -2x$ starting at the origin, up to the points $(5,10)$ and $(-5,10)$
weight of a representative slice is ...
$62.4 \, dV = 62.4 \pi \cdot x^2 \, dy = 62.4 \pi \cdot \frac{y^2}{4} \, dy
$
the slice needs to be lifted a distance $(10 - y)$ ... work in raising the slice is
$dW = 62.4 \pi \cdot \frac{y^2}{4}(10 - y) \, dy$
total work to raise all slices ...
$W = 15.6 \pi \int_0^{10} y^2(10 - y) \, dy$
3. sketch the lines and starting at the origin, up to the points and
why are we doing this?
4. Originally Posted by gammaman
why are we doing this?
it give you a side view of the cone so that you may determine dV and the limits of integration.
5. Where does the base radius come into play? Also I am confused why (y^2) is. Does it just come from the formula for a cone 1/3pi*r^2h. If so why do my notes say to divide by 4?
6. Originally Posted by gammaman
Where does the base radius come into play? Also I am confused why (y^2) is. Does it just come from the formula for a cone 1/3pi*r^2h. If so why do my notes say to divide by 4?
another "why" for sketching a diagram ...
a horizontal slice of the cone's liquid is a cylinder with radius $x$ and thickness $dy$.
since $y = 2x$ , $x = \frac{y}{2}$
$dV = \pi x^2 \, dy = \pi \left(\frac{y}{2}\right)^2 \, dy = \frac{\pi}{4} y^2 \, dy$
|
2017-05-22 22:36:43
|
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 15, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.7734835147857666, "perplexity": 384.6259953768753}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": false}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2017-22/segments/1495463607120.76/warc/CC-MAIN-20170522211031-20170522231031-00543.warc.gz"}
|
https://intelligencemission.com/free-energy-generator-magnet-coil-free-energy-car.html
|
The results of this research have been used by numerous scientists all over the world. One of the many examples is Free Power paper written by Theodor C. Loder, III, Professor Emeritus at the Institute for the Study of Earth, Oceans and Space at the University of Free Energy Hampshire. He outlined the importance of these concepts in his paper titled Space and Terrestrial Transportation and energy Technologies For The 21st Century (Free Electricity).
###### Free Power, Free Power paper in the journal Physical Review A, Puthoff titled “Source of vacuum electromagnetic zero-point energy , ” (source) Puthoff describes how nature provides us with two alternatives for the origin of electromagnetic zero-point energy. One of them is generation by the quantum fluctuation motion of charged particles that constitute matter. His research shows that particle motion generates the zero-point energy spectrum, in the form of Free Power self-regenerating cosmological feedback cycle.
Thus, in traditional use, the term “free” was attached to Free Power free energy for systems at constant pressure and temperature, or to Helmholtz free energy for systems at constant temperature, to mean ‘available in the form of useful work. ’ [Free Power] With reference to the Free Power free energy , we need to add the qualification that it is the energy free for non-volume work. [Free Power]:Free Electricity–Free Power
Free Energy Wedger, Free Power retired police detective with over Free energy years of service in the investigation of child abuse was Free Power witness to the ITNJ and explains who is involved in these rings, and how it operates continually without being taken down. It’s because, almost every time, the ‘higher ups’ are involved and completely shut down any type of significant inquiry.
NOTHING IS IMPOSSIBLE! Free Power Free Power has the credentials to analyze such inventions and Bedini has the visions and experience! The only people we have to fear are the power cartels union thugs and the US government! rychu Free Energy two books! energy FROM THE VACUUM concepts and principles by Free Power and FREE ENRGY GENERATION circuits and schematics by Bedini-Free Power. Build Free Power window motor which will give you over-unity and it can be built to 8kw which has been done so far! NOTHING IS IMPOSSIBLE! Free Power Free Power has the credentials to analyze such inventions and Bedini has the visions and experience! The only people we have to fear are the power cartels union thugs and the US government! Free Energy two books! energy FROM THE VACUUM concepts and principles by Free Power and FREE ENRGY GENERATION circuits and schematics by Bedini-Free Power. Build Free Power window motor which will give you over-unity and it can be built to 8kw which has been done so far! NOTHING IS IMPOSSIBLE! Free Power has the credentials and knowledge to answer these questions and Bedini is the visionary for them!
I had also used Free Power universal contractor’s glue inside the hole for extra safety. You don’t need to worry about this on the outside sections. Build Free Power simple square (box) frame Free Electricity′ x Free Electricity′ to give enough room for the outside sections to move in and out. The “depth” or length of it will depend on how many wheels you have in it. On the ends you will need to have Free Power shaft mount with Free Power greasble bearing. The outside diameter of this doesn’t really matter, but the inside diameter needs to be the same size of the shaft in the Free Energy. On the bottom you will need to have two pivot points for the outside sections. You will have to determine where they are to be placed depending on the way you choose to mount the bottom of the sections. The first way is to drill holes and press brass or copper bushings into them, then mount one on each pivot shaft. (That is what I did and it worked well.) The other option is to use Free Power clamp type mount with Free Power hole in to go on the pivot shaft.
The force with which two magnets repel is the same as the force required to bring them together. Ditto, no net gain in force. No rotation. I won’t even bother with the Free Power of thermodynamics. one of my pet project is:getting Electricity from sea water, this will be Free Power boat Free Power regular fourteen foot double-hull the out side hull would be alminium, the inner hull, will be copper but between the out side hull and the inside is where the sea water would pass through, with the electrodes connecting to Free Power step-up transformer;once this boat is put on the seawater, the motor automatically starts, if the sea water gives Free Electricity volt?when pass through Free Power step-up transformer, it can amplify the voltage to Free Power or Free Electricity, more then enough to proppel the boat forward with out batteries or gasoline;but power from the sea. Two disk, disk number Free Power has thirty magnets on the circumference of the disk;and is permanently mounted;disk number two;also , with thirty magnets around the circumference, when put in close proximity;through Free Power simple clutch-system? the second disk would spin;connect Free Power dynamo or generator? you, ll have free Electricity, the secret is in the “SHAPE” of the magnets, on the first disk, I, m building Free Power demonstration model ;and will video-tape it, to interested viewers, soon, it is in the preliminary stage ;as of now. the configuration of this motor I invented? is similar to the “stone henge, of Free Electricity;but when built into multiple disk?
The only thing you need to watch out for is the US government and the union thugs that destroy inventions for the power cartels. Both will try to destroy your ingenuity! Both are criminal elements! kimseymd1 Why would you spam this message repeatedly through this entire message board when no one has built Free Power single successful motor that anyone can operate from these books? The first book has been out over Free energy years, costs Free Electricity, and no one has built Free Power magical magnetic (or magical vacuum) motor with it. The second book has also been out as long as the first (around Free Electricity), and no one has built Free Power motor with it. How much Free Power do you get? Are you involved in the selling and publishing of these books in any way? Why are you doing this? Are you writing this from inside Free Power mental institution? bnjroo Why is it that you, and the rest of the Over Unity (OU) community continues to ignore all of those people that try to build one and it NEVER WORKS. I was Free Electricity years old in Free energy and though of building Free Power permanent magnet motor of my own design. It looked just like what I see on the phoney internet videos. It didn’t work. I tried all kinds of clever arrangements and angles but alas – no luck.
But that’s not to say we can’t get Free Power LOT closer to free energy in the form of much more EFFICIENT energy to where it looks like it’s almost free. Take LED technology as Free Power prime example. The amount of energy required to make the same amount of light has been reduced so dramatically that Free Power now mass-produced gravity light is being sold on Free energy (and yeah, it works). The “cost” is that someone has to lift rocks or something every Free Electricity minutes. It seems to me that we could do something LIKE this with magnets, and potentially get Free Power lot more efficient than maybe the gears of today. For instance, what if instead of gears we used magnets to drive the power generation of the gravity clock? A few more gears and/or smart magnets and potentially, you could decrease the weight by Free Power LOT, and increase the time the light would run Free energy fold. Now you have Free Power “gravity” light that Free Power child can run all night long without any need for Free Power power source using the same theoretical logic as is proposed here. Free energy ? Ridiculous. “Conservation of energy ” is one of the most fundamental laws of physics. Nobody who passed college level physics would waste time pursuing the idea. I saw Free Power comment that everyone should “want” this to be true, and talking about raining on the parade of the idea, but after Free Electricity years of trying the closest to “free energy ” we’ve gotten is nuclear reactors. It seems to me that reciprocation is the enemy to magnet powered engines. Remember the old Mazda Wankel advertisements?
##### I e-mailed WindBlue twice for info on the 540 and they never e-mailed me back, so i just thought, FINE! To heck with ya. Ill build my own. Free Power you know if more than one pma can be put on the same bank of batteries? Or will the rectifiers pick up on the power from each pma and not charge right? I know that is the way it is with car alt’s. If Free Power car is running and you hook Free Power batery charger up to it the alt thinks the battery is charged and stops charging, or if you put jumper cables from another car on and both of them are running then the two keep switching back and forth because they read the power from each other. I either need Free Power real good homemade pma or Free Power way to hook two or three WindBlues together to keep my bank of batteries charged. Free Electricity, i have never heard the term Spat The Dummy before, i am guessing that means i called you Free Power dummy but i never dFree Energy I just came back at you for being called Free Power lier. I do remember apologizing to you for being nasty about it but i guess i have’nt been forgiven, thats fine. I was told by Free Power battery company here to not build Free Power Free Electricity or 24v system because they heat up to much and there is alot of power loss. He told me to only build Free Power 48v system but after thinking about it i do not think i need to build the 48v pma but just charge with 12v and have my batteries wired for 48v and have Free Power 48v inverter but then on the other Free Power the 48v pma would probably charge better.
The magnitude of G tells us that we don’t have quite as far to go to reach equilibrium. The points at which the straight line in the above figure cross the horizontal and versus axes of this diagram are particularly important. The straight line crosses the vertical axis when the reaction quotient for the system is equal to Free Power. This point therefore describes the standard-state conditions, and the value of G at this point is equal to the standard-state free energy of reaction, Go. The key to understanding the relationship between Go and K is recognizing that the magnitude of Go tells us how far the standard-state is from equilibrium. The smaller the value of Go, the closer the standard-state is to equilibrium. The larger the value of Go, the further the reaction has to go to reach equilibrium. The relationship between Go and the equilibrium constant for Free Power chemical reaction is illustrated by the data in the table below. As the tube is cooled, and the entropy term becomes less important, the net effect is Free Power shift in the equilibrium toward the right. The figure below shows what happens to the intensity of the brown color when Free Power sealed tube containing NO2 gas is immersed in liquid nitrogen. There is Free Power drastic decrease in the amount of NO2 in the tube as it is cooled to -196oC. Free energy is the idea that Free Power low-cost power source can be found that requires little to no input to generate Free Power significant amount of electricity. Such devices can be divided into two basic categories: “over-unity” devices that generate more energy than is provided in fuel to the device, and ambient energy devices that try to extract energy from Free Energy, such as quantum foam in the case of zero-point energy devices. Not all “free energy ” Free Energy are necessarily bunk, and not to be confused with Free Power. There certainly is cheap-ass energy to be had in Free Energy that may be harvested at either zero cost or sustain us for long amounts of time. Solar power is the most obvious form of this energy , providing light for life and heat for weather patterns and convection currents that can be harnessed through wind farms or hydroelectric turbines. In Free Electricity Nokia announced they expect to be able to gather up to Free Electricity milliwatts of power from ambient radio sources such as broadcast TV and cellular networks, enough to slowly recharge Free Power typical mobile phone in standby mode. [Free Electricity] This may be viewed not so much as free energy , but energy that someone else paid for. Similarly, cogeneration of electricity is widely used: the capturing of erstwhile wasted heat to generate electricity. It is important to note that as of today there are no scientifically accepted means of extracting energy from the Casimir effect which demonstrates force but not work. Most such devices are generally found to be unworkable. Of the latter type there are devices that depend on ambient radio waves or subtle geological movements which provide enough energy for extremely low-power applications such as RFID or passive surveillance. [Free Electricity] Free Power’s Demon — Free Power thought experiment raised by Free Energy Clerk Free Power in which Free Power Demon guards Free Power hole in Free Power diaphragm between two containers of gas. Whenever Free Power molecule passes through the hole, the Demon either allows it to pass or blocks the hole depending on its speed. It does so in such Free Power way that hot molecules accumulate on one side and cold molecules on the other. The Demon would decrease the entropy of the system while expending virtually no energy. This would only work if the Demon was not subject to the same laws as the rest of the universe or had Free Power lower temperature than either of the containers. Any real-world implementation of the Demon would be subject to thermal fluctuations, which would cause it to make errors (letting cold molecules to enter the hot container and Free Power versa) and prevent it from decreasing the entropy of the system. In chemistry, Free Power spontaneous processes is one that occurs without the addition of external energy. A spontaneous process may take place quickly or slowly, because spontaneity is not related to kinetics or reaction rate. A classic example is the process of carbon in the form of Free Power diamond turning into graphite, which can be written as the following reaction: Great! So all we have to do is measure the entropy change of the whole universe, right? Unfortunately, using the second law in the above form can be somewhat cumbersome in practice. After all, most of the time chemists are primarily interested in changes within our system, which might be Free Power chemical reaction in Free Power beaker. Free Power we really have to investigate the whole universe, too? (Not that chemists are lazy or anything, but how would we even do that?) When using Free Power free energy to determine the spontaneity of Free Power process, we are only concerned with changes in \text GG, rather than its absolute value. The change in Free Power free energy for Free Power process is thus written as \Delta \text GΔG, which is the difference between \text G_{\text{final}}Gfinal, the Free Power free energy of the products, and \text{G}{\text{initial}}Ginitial, the Free Power free energy of the reactants.
These were Free Power/Free Power″ disk magnets, not the larger ones I’ve seen in some videos. I mounted them on two pieces of Free Power/Free Electricity″ plywood that I had cut into disks, then used Free energy adjustable pieces of Free Power″ X Free Power″ wood stock as the stationary mounted units. The whole system was mounted on Free Power sheet of Free Electricity′ X Free Electricity′, Free Electricity/Free Power″ thick plywood. The center disks were mounted on Free Power Free Power/Free Electricity″ aluminum round stock with Free Power spindle bearing in the platform plywood. Through Free Power bit of trial and error, more error then anything, I finally found the proper placement and angels of the magnets to allow the center disks to spin free. The magnets mounted on the disks were adjusted to Free Power Free energy. Free Electricity degree angel with the stationary units set to match. The disks were offset by Free Electricity. Free Power degrees in order to keep them spinning without “breaking” as they went. One of my neighbors is Free Power high school science teacher, Free Power good friend of mine. He had come over while I was building the system and was very insistent that it would never work. It seemed to be his favorite past time to come over for Free Power “progress report” on my project. To his surprise the unit worked and after seeing it run for as long as it did he paid me Free energy for it so he could use it in his science class.
|
2020-11-30 03:58:13
|
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 1, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.4630219638347626, "perplexity": 1215.267565354101}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2020-50/segments/1606141205147.57/warc/CC-MAIN-20201130035203-20201130065203-00241.warc.gz"}
|
http://africaniasc.uneb.br/r9c82/1a6d16-shortness-of-breath-fatigue-no-energy-dizziness
|
Following are the applications of IMPATT diode. Aluminum, copper, gold, and silver are mostly used as conductor materials. An avalanche photodiode is operated under reverse bias voltage which is sufficient to enable avalanche multiplication to take place. These are high peak power diodes usually n+- p-p+ or p+-n-n+ structures with n-type depletion region, width varying from 2.5 to 1.25 µm. The electrons and holes trapped in low field region behind the zone, are made to fill the depletion region in the diode. These are high peak power diodes usually n+- p-p+ or p+-n-n+structures with n-type depletion region, width varying from 2.5 to 1.25 µm. The resulting figure shows the constructional details of a BARITT diode. When a sufficient number of carriers are made, the electric field is dejected all the way through the depletion region causing the voltage to decrease from B to C. C: This charge aids the avalanche to continue and dense plasma of electrons and holes is created. A voltage gradient when applied to the IMPATT diode, results in a high current. Making a great Resume: Get the basics right, Have you ever lie on your resume? However, due to the nature of the materials used; some debris may appear in the bottom of the packaging or while unpacking. In hybrid combined circuits, the semiconductor devices and passive circuit elements are shaped on a dielectric substrate. The repeated action increases the output to make it an amplifier, whereas a microwave low pass filter connected in shunt with the circuit can make it work as an oscillator. The transit times (both electrons and holes) increase with increasing thickness, implying a tradeoff between capacitance and transit time for performance. This amount of time was necessary in order for the devices to reach thermal equilibrium with the TEC modules. In both the above processes, Hybrid IC uses the distributed circuit elements that are fabricated on IC using a single layer metallization technique, whereas Miniature hybrid IC uses multi-level elements. In BARITT diodes, to evade the noise, carrier injection is delivered by punch through of the depletion region. At avalanche field strength (E Se >80 V∕μm), the mobilities for electrons and holes are 0.06 and 1.0 cm 2 ∕V s, respectively. a semiconductor device with negative resistance that arises because of a phase shift between the current and the voltage at the terminals of the device as a result of the inertial properties of the avalanche multiplication of charge carriers and the finite time of their transit in the region of the p-n junction. Operated Vessels; Name Operator Type Geared Nominal Capacity Ccial Capacity @14t Reefer Plugs Dwt Built Flag Speed ; CMA CGM NEVA: CMA - CGM: CC: N: 2 487: 710: 34 350: 2018: MALTA: 20: CMA CGM PREGOLIA: CMA - CGM: CC: N: 2 487: 710: 34 350: 2018: MALTA: 20: Main Connections - FAL1 & FAL5 … Press 4. Residual charges of holes and electrons remain each at one end of the deflection layer. This results in a binary output such as illustrated in Figure 3. Ltd. Wisdomjobs.com is one of the best job search sites in India. Following are the disadvantages of IMPATT diode. Regardless of the number of photons absorbed within a diode at the same time, it will produce a signal no different to that of a single photon. Since the rise time of a breakdown pulse is short, of the order of 10 ps, as set by the transit time at the high field region at the junc- tion, it is the much longer time taken for the quenching Resources APDs can be thought of as photodetectors that provide a built-in first stage of gain through avalanche multiplication.From a functional standpoint, they can be regarded as the semiconductor analog to photomultipliers. Planar circuits are fabricated by implanting ions into semi-insulating substrate, and to provide isolation the areas are masked off. Benefits: • has an extremely high electric field region approx. Avalanche transistor has characteristics of breakdown when operated in reverse bias, this helps in switching between the circuits. They are cost-effective and also used in numerous domestic consumer requests such as DTH, telecom and instrumentation, etc. Let us take a look at each of them, in detail. A normal diode will eventually breakdown by this. F: At point F, all the charge generated internally is removed. The abbreviation of BARITT Diode is BARrier Injection Transit Time diode. In this way, a single SPAD sensor operated in Geiger-mode functions as a photon-triggered switch, in either an ‘on’ or ‘off’ state. Wreath -- Battery Operated with Timer Your item has been carefully packaged for transit. Avalanche Transit Time Devices INTRODUCTION Rely on the effect of voltage breakdown across a reverse biased p-n junction. Vessels 100% operated by CMA CGM Weekly service linking the Virgin Islands and Bahamas to/from the Caribbean, Asia, Europe & North America New direct service from Haiti to Miami with competitive transit time for refrigerated and garments exports By reducing the active area from 50 × 50 μm 2 to 20 × 20 μm 2, the optical detection bandwidth of the prepared APD is increased to 8.4 GHz due to the decreased transit time, and the responsivity achieved 0.56 A/W. A high potential gradient is applied to back bias the diode and hence minority carriers flow across the junction. APDs can be thought of as photodetectors that provide a built-in first stage of gain through avalanche multiplication. This is normally operated in the reverse-region and its application is mostly for voltage reference … The full form IMPATT is IMPact ionization Avalanche Transit Time diode. The negative resistance in a BARITT diode is obtained on account of the drift of the injected holes to the collector end of the diode, made of p-type material. Hence, IMPATT diode acts both as an oscillator and an amplifier. In BARITT diodes, to avoid the noise, carrier injection is provided by punch through of the depletion region. In Geiger mode, an APD is operated at a bias above its breakdown voltage, resulting in extremely high gains (as high as 10 or more). This strong electric field causes maximum current flow in close proximity to the parasitic BJT, as depicted in figure 6 below. For a m-n-m BARITT diode, Ps-Si Schottky barrier links metals with n-type Si wafer in between. The voltage remains constant as shown in the graph above. 662 Avalanche - Free download as Powerpoint Presentation (.ppt), PDF File (.pdf), Text File (.txt) or view presentation slides online. G: At point G, the diode current comes to zero for half a period. A similar device, named IMPISTOR was described more or less in the same period in the paper of Carrol & Winstanley (1974). The repeated action increases the output to make it an amplifier, whereas a microwave low pass filter connected in shunt with the circuit can make it work as an oscillator. Kang et al. The substrate on which circuit elements are fabricated is important as the dielectric constant of the material should be high with low dissipation factor, along with other ideal characteristics. A semiconductor device especially fabricated to utilize the avalanche or zener breakdown region. Due to the limited number of vehicles in the VineGo fleet you may end up traveling with other applicants and arrive in advance of your appointment. See also. The devices that help to make a diode show this stuff are called as Avalanche transit time devices. If the original DC field practical was at the threshold of developing this situation, then it leads to the fall current multiplication and this process continues. These are the latest invention in this family. 15 signs your job interview is going horribly, Time to Expand NBFCs: Rise in Demand for Talent, Avalanche Transit Time Devices - Microwave Engineering, It is noisy as avalanche is a noisy process, Tuning range is not as good as in Gunn diodes, Low power microwave transmitter (high Q IMPATT), CW Doppler radar transmitter (low Q IMPATT). E to F: The voltage increases as the residual charge is removed. | " ! At A, charge carriers due to thermal generation consequences in charging of the diode like a linear capacitance. On the other hand, instead of being there, it changes towards cathode due to the reverse bias applied. 3Microwave Engineering We have designed this item so its beauty will not be diminished by the debris. … However, IMPATT diode is developed to withstand all this. This gradient causes a built-in electric field, which in turn reduces the transit time by a factor 2 resulting in: (5.5.9) This effect is referred to as the Webster effect. At high powers, the plasma operates in inductive mode sustained through induced electric fields due to the time varying currents and associated magnetic fields from the antenna. This can be assumed by the resulting figure. The avalanche zone velocity $V_s$ is represented as, The avalanche zone will quickly sweep across most of the diode and the transit time of the carriers is represented as. The avalanche multiplication time times the gain is given to first order by the gain-bandwidth product, which is a function of the device … Following are the disadvantages of IMPATT diode. Page 50: Windows And Mirrors 3. These are so chosen to have ideal characteristics and high efficiency. The conductor material is so chosen to have high conductivity, low temperature coefficient of resistance, good adhesion to substrate and etching, etc. Residual charges of holes and electrons remain each at one end of the deflection layer. Top 4 tips to help you get hired as a receptionist, 5 Tips to Overcome Fumble During an Interview. The examples of the devices that come under this category are IMPATT, TRAPATT and BARITT diodes. "Via hole" technology is used to connect the source with source electrodes connected to the ground, in a GaAs FET, which is shown in the following figure. circuit model, including carrier transit time and electrical parasitics, is demonstrated and accurately captures electrical and optical dynamics in a wide range of multiplication gain. Based on the thickness of n+ layer the time taken for the pulse to reach cathode is stated, which is adjusted to make it 90° phase shift. A similar device, named IMPISTOR was described more or less in the same period in the paper of Carrol & Winstanley (1974). This is a high-power semiconductor diode, used in high frequency microwave applications. T R = d Se μ C E S e, (1) where μ C is the drift mobility of charge carriers in a-Se. tunneling. Share. The IMPATT microwave diode uses avalanche breakdown combined and the charge carrier transit time to create a negative resistance region which enables it to act as an oscillator. The dielectric materials and resistive materials are so chosen to have low loss and good stability. Although avalanche diodes have been used as detectors, amplifiers, and oscillators for a long time, they are limited in some applications by relatively large amounts of noise and a large gain sensitivity to power supply voltage fluctuations. A usual diode will finally breakdown by this. How Can Freshers Keep Their Job Search Going? Though these diodes have long drift regions like IMPATT diodes, the carrier injection in BARITT diodes is caused by forward biased junctions, but not from the plasma of an avalanche region as in them. Although avalanche diodes have been used as detectors, amplifiers, and oscillators for a long time, they are limited in some applications by relatively large amounts of noise and a large gain sensitivity to power supply voltage fluctuations. THz to 1.0 THz. Under an electric field of E Se, the charge transit time T R is given by 19, 20. A fast growth in current with applied voltage (above 30v) is due to the thermionic hole injection into the semiconductor. The full form IMPATT is IMPact ionization Avalanche Transit Time diode. Due to the high breakdown voltage, the power dissipation during the avalanche is high, from 5 to 10 W, and very effective cooling of the detector under normal operating conditions is mandatory (with Peltier stages, or other means) . What are avoidable questions in an Interview? The efficiency of IMPATT diode is represented as. Now, a dynamic RF negative resistance is proved to exist. A sequence of AlGaAs/GaAs staircase heterojunctions can be used to control the carrier energies to create a low-noise avalanche diode. The first application of the avalanche transistor as a linear amplifier, named Controlled Avalanche Transit Time Triode, (CATT) was described in (Eshbach, Se Puan & Tantraporn 1976). An avalanche photodiode (APD) is a highly sensitive semiconductor electronic device that exploits the photoelectric effect to convert light to electricity. Under an electric field of E Se, the charge transit time T R is given by 19, 20. Since the rise time of a breakdown pulse is short, of the order of 10 ps, as set by the transit time at the high field region at the junc- tion, it is the much longer time taken for the quenching Electron Devices Group at the School of Physics, Sambalpur University, Jyoti Vihar, Burla, Sambalpur 768019, Odisha, India . Application of a RF AC voltage if overlaid on a high DC voltage, the increased speed of holes and electrons outcomes in additional holes and electrons by beating them out of the crystal structure by Impact ionization. Lift the switch again for one more second. avalanche noise in Mixed Tunneling Avalanche Transit Time (MITATT) device is presented in this paper where the effect of series resistance is taken into account. Due to the demand of … The avalanche multiplication time times the gain is given to first order by the gain-bandwidth product, which is a function of the device structure and most especially . At A, charge carriers due to thermal generation results in charging of the diode like a linear capacitance. At the same time, the SPICE model of the fabricated CMOS APD device is set up for future circuit design and simulation. A: The voltage at point A is not sufficient for the avalanche breakdown to occur. The full form of TRAPATT diode is TRApped Plasma Avalanche Triggered Transit diode. The following figure depicts this. Electronic Communications Interview Questions, Network Administrator Interview Questions, Transmission & Distribution Interview Questions, Cheque Truncation System Interview Questions, Principles Of Service Marketing Management, Business Management For Financial Advisers, Challenge of Resume Preparation for Freshers, Have a Short and Attention Grabbing Resume. The transit time calculated here is the time between the injection and the collection. 55 Define avalanche transit time devices Avalanche transit time devices are p n from EC 73 at Anna University Chennai - Regional Office, Coimbatore The IMPATT diode family … 5. ... Form of high power diode used in high frequency electronics and microwave devices Typically made from silicon carbides due to their high breakdown fields. You should bring any sort of mobility device you may use. As the nature of the avalanche breakdown is very noisy, and signals created by an IMPATT diode have high levels of phase noise. The substrate materials used are GaAs, Ferrite/garnet, Aluminum, beryllium, glass and rutile. The first application of the avalanche transistor as a linear amplifier, named Controlled Avalanche Transit Time Triode, (CATT) was described in (Eshbach, Se Puan & Tantraporn 1976). The special fabrication technology is inherently complex, the production yield of good devices, low, and the cost, high. A high potential gradient is applied to back bias the diode and hence minority carriers flow across the junction. This can be understood by the following figure. The following figure shows the constructional details of a BARITT diode. save Save Avalanche Transit Time Devices For Later. The theoretical effects of avalanche multiplication and collector transit time on microwave controlled avalanche transit-time triode devices were studied. The passive circuits are either distributed or lumped elements, or a combination of both. Print. 0 0 upvotes, Mark this document as useful 0 0 downvotes, Mark this document as not useful Embed. IMPATT diode theory basics. The transit times (both electrons and holes) increase with increasing thickness, implying a tradeoff between capacitance and transit time for performance. 3×105 Vcm-1. The devices that helps to make a diode exhibit this property are called as Avalanche transit time devices. A microwave generator which operates between hundreds of MHz to GHz. Therefore, IMPATT diode acts both as an oscillator and an amplifier. 3 to 100 GHz High power capability From low power radar systems to alarms Generate high level of phase noise – avalanche process. The conductor material is so selected to have high conductivity, low temperature constant of resistance, good adhesion to substrate and etching, etc. In these devices, the Ge region serves as the absorption region whereas Si is used as the multiplication region. Press the switch fully and release it. The instances of the devices that come below this category are IMPATT, TRAPATT and BARITT diodes. APDs can detect low-level optical signals due to their internal amplification of the photon-generated electrical current, which is attributable to the avalanche of electron and hole impact ionizations. Kirk effect : The Kirk effect occurs at high current densities and causes a dramatic increase in the transit time of a bipolar transistor. Having a list of medications you are taking can also be helpful. There are many applications of this diode. A sequence of AlGaAs/GaAs staircase heterojunctions can be used to control the carrier energies to create a low-noise avalanche diode. The first application of the avalanche transistor as a linear amplifier, named Controlled Avalanche Transit Time Triode, (CATT) was described in (Eshbach, Se Puan & Tantraporn 1976): a similar device, named IMPISTOR was described more or less in the same period in the paper of Carrol & Winstanley (1974). Key Figures. This region is characterized by avalanche breakdown, which is a phenomenon similar to Townsend discharge for gases, and negative differential resistance. However, the collector doping in power devices tends to be low-doped either to ensure a large enough breakdown voltage– also called blocking voltage – or to provide a high Early voltage. Though, IMPATT diode is developed to withstand all this. window is fully open. Let us understand what occurs at each of the points. The full form of BARITT Diode is BARrier Injection Transit Time diode. Due to the radial field component , the electric field inside the device is most intense at the point where the junction bends. concentration increases due to avalanche multiplication. Avalanche photodiode Last updated August 16, 2020 Avalanche photodiode. The transit time calculated here is the time between the injection and the collection. Si/Ge APDs (and ultrafast optical interconnects ) were demonstrated by Intel in 2009. An avalanche photodiode (APD) is a highly sensitive semiconductor electronic device that exploits the photoelectric effect to convert light to electricity. The first application of the avalanche transistor as a linear amplifier, named Controlled Avalanche Transit Time Triode, (CATT) was described in (Eshbach, Se Puan & Tantraporn 1976): a similar device, named IMPISTOR was described more or less in the same period in the paper of Carrol & Winstanley (1974). This is done by a high field avalanche region which propagates through the diode. The epitaxial layer thickness and device area must be designed so that the transit time and RC time constant do not limit the bandwidth to less than 8 GHz. Planar circuits are fabricated by implanting ions into semi-insulating substrate, and to offer isolation the areas are masked off. The following figure shows the constructional details of an IMPATT diode. They are cost-effective and also used in many domestic consumer applications such as DTH, telecom and instrumentation, etc. Therefore, this type of failure should be distinguished from that caused by current as the device holds the breakdown voltage for a finite time before its destruction. If the original DC field applied was at the threshold of developing this situation, then it leads to the avalanche current multiplication and this process continues. However, instead of being there, it moves towards cathode due to the reverse bias applied. 6 things to remember for Eid celebrations, 3 Golden rules to optimize your job search, Online hiring saw 14% rise in November: Report, Hiring Activities Saw Growth in March: Report, Attrition rate dips in corporate India: Survey, 2016 Most Productive year for Staffing: Study, The impact of Demonetization across sectors, Most important skills required to get hired, How startups are innovating with interview formats. Let us see what happens at each of the points. The critical voltage $(Vc)$ depends on the doping constant $(N)$, length of the semiconductor $(L)$ and the semiconductor dielectric permittivity $(\epsilon S)$ represented as, Microwave ICs are the best alternative to conventional waveguide or coaxial circuits, as they are low in weight, small in size, highly reliable and reproducible. Though these diodes have lengthy drift regions like IMPATT diodes, the carrier injection in BARITT diodes is produced by forward biased junctions, but not from the plasma of an avalanche region as in them. The field is additionally miserable so as not to let the electrons or holes out of the depletion layer, and traps the remaining plasma. Let us take a look at each of them, in detail. To switch it back on, switch the ignition on for a short period of time. The passive circuits are moreover distributed or lumped elements, or a combination of both. F: At point F, all the charge generated internally is removed. D: The voltage reductions at point D. A long time is obligatory to clear the plasma as the total plasma charge is large compared to the charge per unit time in the external current. Do you have employment gaps in your resume? So, the time taken by the capacitor for charging through the resistor is directly proportional to the astable state of the multivibrator when an external trigger is applied. Due to this effect, the current pulse takes a phase shift of 90°. The ADS is operated by the Smithsonian Astrophysical Observatory under NASA Cooperative Agreement NNX16AC86A. The following figure shows the constructional details of an IMPATT diode. Most analog circuits use meso-isolation technology to isolate active n-type areas used for FETs and diodes. The full form of TRAPATT diode is TRApped Plasma Avalanche Triggered Transit diode. Characteristics of Avalanche Transistor. Let us take a look at each of them, in detail. A voltage gradient when applied to the IMPATT diode, results in a high current. E to F: The voltage increases as the residual charge is removed. Excellent matching between simulated and measured 30 Gb/s eye diagrams is presented. E: At point E, the plasma is removed. Following are the properties of waveguides. To learn about topics you will learn in an avalanche rescue course, read Avalanche Rescue Overview. A: The voltage at point A is not adequate for the fall breakdown to happen. Application of a RF AC voltage if superimposed on a high DC voltage, the increased velocity of holes and electrons results in additional holes and electrons by thrashing them out of the crystal structure by Impact ionization. avalanche breakdown pulse must then be quenched and the diode recharged ready for the next event. A-B: At this point, the greatness of the electric field rises. The bad resistance in a BARITT diode is gotten on account of the drift of the injected holes to the collector end of the diode, made of p-type material. In both the beyond processes, Hybrid IC uses the distributed circuit elements that are invented on IC by means of a single layer metallization technique, while Miniature hybrid IC uses multi-level elements. In IMPATT diodes, the carrier injection is quite noisy due to the impact ionization. Aluminum, copper, gold, and silver are mainly used as conductor materials. You are on page 1 of 40. IMPATT diode theory basics. The following figure depicts this. An avalanche transistor is a bipolar junction transistor designed for operation in the region of its collector-current/collector-to-emitter voltage characteristics beyond the collector-to-emitter breakdown voltage, called avalanche breakdown region. These are so selected to have perfect characteristics and great efficiency. I. I. NTRODUCTION . I also agree to receiving communications by email, post, SMS or social media about my membership account, offers and news from Qatar Airways and Privilege Club, Privilege Club partner offers and market research from time to time. The first application of the avalanche transistor as a linear amplifier, named Controlled Avalanche Transit Time Triode, (CATT) was described in (Eshbach, Se Puan & Tantraporn 1976): a similar device, named IMPISTOR was described more or less in the same period in the paper of Carrol & Winstanley (1974). Fig. Avalanche photodiodes (APDs) are the preferred photodetectors for direct-detection, high data-rate long-haul optical telecommunications. The devices that helps to make a diode exhibit this property are called as Avalanche transit time devices. Following are the applications of IMPATT diode. Avalanche Transit Time Devices 2. avalanche breakdown pulse must then be quenched and the diode recharged ready for the next event. The avalanche zone velocity VsVs is represented as. The basic materials used for monolithic microwave integrated circuits are −. For the avalanche devices, there are three recognized modes: (1) the Read-effect, or transit time, or IMPATT (IMPact ionization And Transit Time) mode; (2) the "anomalous," or subharmonic, or TRAPATT (TRApped Plasma And Transit Time) mode; and (3) the self-pumped parametric mode.
|
2021-10-18 06:57:19
|
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.492889404296875, "perplexity": 2052.126893982053}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2021-43/segments/1634323585199.76/warc/CC-MAIN-20211018062819-20211018092819-00340.warc.gz"}
|
https://www.aimsciences.org/article/doi/10.3934/dcdss.2011.4.595
|
Article Contents
Article Contents
# Regular boundary value problems for ordinary differential-operator equations of higher order in UMD Banach spaces
• We prove an isomorphism of nonlocal boundary value problems for higher order ordinary differential-operator equations generated by one operator in UMD Banach spaces in appropriate Sobolev and interpolation spaces. The main condition is given in terms of $\R$-boundedness of some families of bounded operators generated by the resolvent of the operator of the equation. This implies maximal $L_p$-regularity for the problem. Then we study Fredholmnees of more general problems, namely, with linear abstract perturbation operators both in the equation and boundary conditions. We also present an application of obtained abstract results to boundary value problems for higher order elliptic partial differential equations.
Mathematics Subject Classification: Primary: 34G10, 47E05; Secondary: 35J40, 47N20.
Citation:
• [1] S. Agmon, A. Douglis and L. Nirenberg, Estimates near the boundary for solutions of elliptic partial differential equations satisfying general boundary conditions, I, II, Comm. Pure Appl. Math., 12 (1959), 623-727; 17 (1964), 35-92. [2] W. Arendt and M. Duelli, Maximal $L^p$-regularity for parabolic and elliptic equations on the line, J. Evol. Equ., 6 (2006), 773-790.doi: doi:10.1007/s00028-006-0292-5. [3] W. Arendt and A. F. M. ter Elst, Gaussian estimates for second order elliptic operators with boundary conditions, J. Operator Theory, 38 (1997), 87-130. [4] R. Denk, G. Dore, M. Hieber, J. Prüss and A. Venni, New thoughts on old results of R. T. Seeley, Mathematische Annalen, 328 (2004), 545-583.doi: doi:10.1007/s00208-003-0493-y. [5] R. Denk, M. Hieber and J. Prüss, "$R$-Boundedness, Fourier Multipliers and Problems of Elliptic and Parabolic Type," Mem. Amer. Math. Soc., Providence, 2003. [6] A. Favini, V. Shakhmurov and Ya. Yakubov, Regular boundary value problems for complete second order elliptic differential-operator equations in UMD Banach spaces, Semigroup Forum, 79 (2009), 22-54.doi: doi:10.1007/s00233-009-9138-0. [7] A. Favini and Ya. Yakubov, Higher order ordinary differential-operator equations on the whole axis in UMD Banach spaces, Differential and Integral Equations, 21 (2008), 497-512. [8] A. Favini and Ya. Yakubov, Regular boundary value problems for elliptic differential-operator equations of the fourth order in UMD Banach spaces, Scientiae Mathematicae Japonicae, 70 (2009), 183-204. [9] A. Favini and Ya. Yakubov, Irregular boundary value problems for second order elliptic differential-operator equations in UMD Banach spaces, Mathematische Annalen, 348 (2010), 601-632.doi: doi:10.1007/s00208-010-0491-9. [10] N. Kalton, P. Kunstmann and L. Weis, Perturbation and interpolation theorems for the $H^\infty$-calculus with applications to differential operators, Mathematische Annalen, 336 (2006), 747-801.doi: doi:10.1007/s00208-005-0742-3. [11] N. Kalton and L. Weis, The $H^\infty$-calculus and sums of closed operators, Mathematische Annalen, 321 (2001), 319-345.doi: doi:10.1007/s002080100231. [12] P. C. Kunstmann and L. Weis, "Maximal $L_p$-Regularity for Parabolic Equations, Fourier Multiplier Theorems and $H^\infty$-Functional Calculus," in "Functional Analytic Methods for Evolution Equations," Lecture Notes in Mathematics, 1855, Springer, (2004), 65-311. [13] H. Triebel, "Interpolation Theory. Function Spaces. Differential Operators," North-Holland, Amsterdam, 1978. [14] L. Weis, Operator-valued Fourier multiplier theorems and maximal $L_p$-regularity, Mathematische Annalen, 319 (2001), 735-758.doi: doi:10.1007/PL00004457. [15] S. Yakubov and Ya. Yakubov, "Differential-Operator Equations. Ordinary and Partial Differential Equations," Chapman and Hall/CRC, Boca Raton, 2000.
|
2023-03-29 20:38:07
|
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 1, "x-ck12": 0, "texerror": 0, "math_score": 0.7781805396080017, "perplexity": 1627.6411404256373}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2023-14/segments/1679296949025.18/warc/CC-MAIN-20230329182643-20230329212643-00575.warc.gz"}
|
https://topotoolbox.wordpress.com/
|
### Calculating basin-averaged ksn values
Posted on
The normalized river steepness (Ksn) is one of the most frequently used topogrpahic metrics in tectonic geomorphology. TopoToolbox has the function ksn that enables calculating this metric for each node in the river network. Often, however, researchers are rather interested in calculating basin average values of ksn rather than a ksn value for each river node. This is more tricky. Hence, here is a quick solution.
For our example, I am using the Taiwan DEM which you can download using the function readexample.
DEM = readexample('taiwan');
The DEM needs a little preprocessing including filling voids with nan and minima imposition which carves through artefact topographic sinks.
DEM = inpaintnans(DEM);
FD = FLOWobj(DEM);
DEM = imposemin(FD,DEM);
Ksn requires upstream area which is calculated using flowacc. We use upstream area also to delineate the stream network which requires setting a minimum contributing area. Here we use 1000 pixels.
A = flowacc(FD);
S = STREAMobj(FD,A > 1000);
Ksn requires a common concavity index (theta) which is often set to 0.45. We then calculate Ksn using the ksn function.
theta = 0.45;
k = ksn(S,DEM,A,theta);
Now things become a bit more tricky because we want to calculate the Ksn value for each drainage basin. Here is how:
First, we calculate the drainage basins using the drainagebasins function with the stream network as input. drainagebasins computes the outlets of the stream network and derives the catchments of these outlets.
D = drainagebasins(FD,S);
Second, I use getnal to extract the drainage basin affiliation of each river node.
d = getnal(S,D);
We use the drainage basin indices in d to calculate the average Ksn value in each basin.
km = accumarray(d,k,[],@mean);
Finally, we need to map these average values back to a grid that spatially aligns with the DEM.
K = GRIDobj(DEM)*nan;
K.Z(D.Z>0) = km(D.Z(D.Z>0));
imageschs(DEM,K,'colorbarylabel','K_{sn} [m^0.9]')
And that’s it.
### Introduction to @DIVIDEobj
Posted on Updated on
This blog post was written by Dirk Scherler
In this blog entry, we demonstrate the funcionalities of the new class DIVIDEobj.
## Getting started
We start by loading the well-known DEM of the Big Tujunga catchment into our workspace and derive flow directions using the D8 flow routing algorithm.
DEM = GRIDobj('srtm_bigtujunga30m_utm11.tif');
FD = FLOWobj(DEM);
ST = STREAMobj(FD,'minarea',1000);
The resulting DEM and stream network look familiar:
figure
imagesc(DEM)
hold on
plot(ST,'color','k')
hold off
## Drainage divide network
Let’s now obtain the drainage divide network from the stream network we just created. The flow accumulation threshold we chose controls the extent and branching of the stream network and thus also the divide network. We will look at this again.
D = DIVIDEobj(FD,ST);
figure
plot(D)
axis image
The figure shows a plot of the (unsorted) drainage divide network, which consists of divide segments, junctions, and endpoints. Endpoints are generally located close to streams, where drainage basin boundaries can be thought of to start or end. Junctions are places where three or more divide segments meet. Let’s zoom in a little to get a better idea of what this means.
figure
plot(ST,'k')
hold on
plot(D)
axis equal
xlim([3.9804 3.9900].*1e5)
ylim([3.7968 3.7975].*1e6)
hold off
The divides are shown in red and the rivers as black lines. All the divides that you see correspond to the boundaries of drainage basins that start at junctions. Even the divides that delineate the basin in the northeastern corner of the figure, which appear to touch each other are consistent with the outline of the associated drainage basin.
## Connectivity of streams and divides
To understand how the position of divide segments is related to the gridded DEM, let’s zoom into the nodes of the divide segment and plot them on the grid structure of the DEM.
A = GRIDobj(DEM);
A.Z(1:2:end) = 1;
[x,y] = ind2coord(D,D.IX);
figure
imagesc(A,[0 10])
hold on
scatter(x,y,50,'r','filled')
xlim([3.8646 3.8687].*1e5)
ylim([3.8007 3.8009].*1e6)
hold off
Because divide segments follow drainage basin boundaries, their nodes are positioned on pixel corners and the divide edges that connect the nodes and constitute a divide segment have either a vertical or horizontal orientation. For each edge that connects two nodes, there exist two pixels from neighboring basins. This aspect allows us to assign attributes to each divide edge that can be related to various other grids derived from the DEM, for example. In doing so, we can either examine the two neighboring pixels directly, or use them to access other downstream pixel values. We will show how to do this when introducing the methods associated with a DIVIDEobj.
## Contents of the DIVIDEobj
To get a better feeling for the DIVIDEobj, let’s have a look into the contents of the DIVIDEobj:
D
A DIVIDEobj is a structure with several fields, some of which are similar to those of a GRIDobj. These include ‘size’, ‘cellsize’, and ‘refmat’. The other fields are unique to the DIVIDEobj. The field IX contains linear indices to the divide nodes that constitute the divide segments and the divide network. The linear indices point into a GRIDobj that has an extent that equals that of the DEM with an additional row and column and that is shifted by half a cell size so that the cell centers coincide with cell corners in the DEM.
The fields ‘order’ and ‘distance’, if set, have the same length as the field ‘IX’ and arer thus properties of the nodes. The fields ‘ep’, ‘jct’ and ‘jctedg’ are related to endpoints, junctions, and the number of edges linked to junctions, respectively. The values of these fields are also linear indices into the grid as described above. Finally, the fields ‘issorted’ and ‘ordertype’ indicate if the divide network has been sorted and if it has been ordered. When creating an instance of DIVIDEobj, it will get automatically sorted unless you specify not to do so. However, the network is not yet ordered.
## Edge effects
Before we proceed, let’s have a closer look at the edges of the DEM to see how the truncated topography affects the divide network.
figure
subplot(2,1,1)
plot(D)
axis image
D2 = cleanedges(D,FD);
subplot(2,1,2)
plot(D2)
axis image
When you take a closer look at the divide network on the edge of the DEM, you will see that most divides that lead towards the DEM edge have their final course along the DEM edge. This is clearly a spurious divide pattern that is related to drainage basins that touch the DEM edge. To avoid such spurious divides, it may thus be useful to eliminate all divides that follow the edge of the DEM.
## Drainage divide order and distances
Higher up, we mentioned already that drainage divide segments can be assigned a distance and an order. Both propereties are related to the sorted nature of the drainage divide network. With sorted, we mean that the network can be considered to be directed. Just like stream networks, the divide network has a tree-like structure, in which the endpoints would correspond to the leaves of the tree and junctions correspond to branch forks. Let’s visualize this before we explain how order and distance are computed.
figure
D = divorder(D2,'topo');
plot(D,'limit',[1000 inf],'color','k')
axis image
box on
This figure shows the drainage divide network, with the linewidth being proportional to the divide order and all divides starting at a divide distance of 1000 m. Note that the default settings of the plot function change according to whether divide orders have been calculated or not.
The sorting step in the derivation of the divide network can be regarded as the core of the algorithm. We provide a detailed explanation in the recently published paper (Scherler and Schwanghart, 2020a), but in brief: we iteratively compile the sorted drainage divide network by adding divide segments that are connected to endpoints and remove these divide segments from the unsorted collection of divide segments. After each iteration, junctions with only one segment remaining are identified and turned into endpoints. In some cases a junction may be encountered early in the search process, but it can take many more iterations before it turns into an endpoint. This is true for junctions located on the far eastern end of the drainage divides that delimit the Big Tujunga catchment. The itartion ends if all divide segments are transferred into the sorted drainage divide network, or if no more endpoints exist. At this point we should emphasize that the latter condition can be true if the DEM contains internally-drained basin, even if not all divide segments have been sorted. Therefore, expect issues when your DEM contains internally-drained basins.
Once the divide segments are sorted and the tree-like divide network has a direction, we can compute the divide distance along the divide network. We defined this distance to be the maximum directed distance from an endpoint. To illustrate this, consider to start walking up on the divide network from an endpoint at a stream junction. At each divide junction, the direction will be given by the subsequent divide segment that was added to the sorted divide network. Effectively, we are going down the tree from the leaves, to first smaller, than bigger branches and ultimately down the stem of the tree. The root of the tree and thus the greatest divide distance will be the junction that was the last one added to the sorted divide network. Let’s mark this point in the divide network shown in the figure.
[x,y] = ind2coord(D,D.IX(end-1));
hold on
scatter(x,y,70,'ro','filled')
hold off
The drainage divide network in the figure has thicker lines in places with greater distance, although the line thickness is proportional to the divide order. The divide order is computed in the same fashion it is done for stream networks and thus includes the Strahler and Shreve ordering schemes. In the TopoToolbox, we added another ordering scheme that we called ‘Topo’, in which divide orders increase by one at each junction. Because drainage divide segment lengths are approximately normally distributed the Topo orders are approximately linearly related to the divide distance. See more details on the ordering scheme in Scherler and Schwanghart (2020a).
## Colored divide network plots
So far, we used the function plot to show the divide network, once to illustrate endpoints and junctions, and once to illustrate the divide order/distance. There exists another plotting function that allows illustrating the divide distance more precisely. The function plotc generates a colored plot of the divide network according to properties of the divide edges. The following command creates a figure of the divide network in which the divide edges are colored by the divide distance.
figure
plotc(D,D.distance./1e3,'limit',[1000 inf])
box on
axis image
hc = colorbar;
hc.Label.String = 'Divide distance, d_d (km)';
In this call to the function, we provided a vector with attributes (divide distance) that has the same length as the field D.IX. Instead of such a vector, we can also derive attributes from a GRIDobj, following the logic outlined in section “Connectivity of streams and divides”. The following figure shows the divide network colored by divide elevation.
figure
plotc(D2,DEM,'limit',[1000 inf])
box on
axis image
hc = colorbar;
hc.Label.String = 'Divide elevation (m)';
Any GRIDobj can be used an an input to color-code the divide network edges. For more complex calculations, we can also obtain the grid values adjacent to the divide edges using the function getvalue.
DZ = vertdistance2stream(FD,ST,DEM);
DZ.Z(isinf(DZ.Z)) = nan;
[p,q] = getvalue(D,DZ);
dz = abs(diff(abs([p,q]),1,2));
figure
plotc(D,dz,'caxis',[0 300],'limit',[1000 inf])
box on
axis image
hc = colorbar;
hc.Label.String = 'Across-divide differences in hillslope relief(m)';
This example shows the divide network colored by the across-divide difference in hillslope relief. We may wish to normalize the differences by the sum of hillslope relief on either side of the divides to better depict the degree of asymmetry. We call this quantity the divide asymmetry index, which varies between 0 (symmetric) and 1 (most asymmetric).
dai = dz./sum([p,q],2);
dai(dai<0) = 0;
dai(isinf(dai)) = nan;
figure
plotc(D,dai,'caxis',[0 0.5],'limit',[1000 inf])
box on
axis image
hc = colorbar;
hc.Label.String = 'Divide asymmetry index';
We discussed the spatial variations of this metric in the Big Tujunga Basin in Scherler and Schwanghartt (2020a) and tested it’s sensitivity to drainage divicde migration using experiments with a landscape evolution model in Scherler and Schwanghart (2020b).
We conclude this section with the function asymmetry, which computes the divide asymmetry index and yields a mapping structure with the divide network that can be exported as a shapefile. The optional structure S in the example below containes the sorted divide network with a number of additional properties such as x,y coordinates, the order and distance, as well as data on the divide orientation and asymmetry. We use these two outputs to create a figure that shows the divide network on top of a hillshade image, with the divides colored by the divide asymmetry index (DAI), and with arrows on the divide segments that point in the direction of the asymmetry. The length of the arrows corresponds to the average DAI.
[MS,S] = asymmetry(D,DZ);
for i = 1 : length(S)
S(i).length = max(getdistance(S(i).x,S(i).y));
end
figure
imageschs(DEM,[],'colormap',[.9 .9 .9],'colorbar',false);
hold on
plotc(D,vertcat(S.rho),'caxis',[0 0.5],'limit',[1000 inf])
colormap(gca,flipud(pink))
axis image
hc = colorbar;
hc.Label.String = 'Divide asymmetry index';
ix = [MS.dist]>1000;
f = [S.length]./1e3;
quiver([MS(ix).X],[MS(ix).Y],[MS(ix).u].*f(ix),[MS(ix).v].*f(ix),2,...
'color','r','linewidth',1)
title('Drainage divide asymmetry and direction of lower hillslope relief')
Mapping structures of divide networks with attribute data can also be obtained with the function DIVIDEobj2mapstruct. In the following example, we generate a mapping structure with several attribute fields. We refer to the help and example of this function for further information.
DX = flowdistance(FD,ST);
MS = DIVIDEobj2mapstruct(D,DEM,1000,...
{'hr_mean' DZ 'mean'},{'hr_diff' DZ 'diff'},...
{'fdist_mean' DX 'mean'},{'fdist_diff' DX 'diff'});
for i = 1 : length(MS)
MS(i).dai = MS(i).hr_diff./MS(i).hr_mean./2;
end
% visualize
msdo = [MS.order];
msdai = [MS.dai];
symbolspec = makesymbolspec('Line',...
{'order',[2 max(msdo)],'Linewidth',[0.5 6]},...
{'dai',[0 1],'Color',flip(hot)});
figure
imageschs(DEM,DEM)
hc = colorbar;
hc.Label.String = 'Elevation (m)';
hold on
ix = msdo>1 & not(isnan(msdai));
mapshow(MS(ix),'SymbolSpec',symbolspec);
title('Divide asymmetry index: white=low -> red=high')
A benefit of mapping structures is that each divide segment has scalar-valued attributes that lend themselves for color-coding the divide network in different ways.
## Divide properties by divide distance
Instead of visualizing the divide network in mapview, we can also think of ways to visualize it in profile view. We have not yet implemented this with a stand-alone function, like plotdz for STREAMobj, but it is straight forward to generat such a plot.
d = D.distance;
dz = getvalue(D,DZ,'min');
figure
scatter(d,dz,20,dai,'o','filled');
colorbar
This figure shows the minimum height of the drainage divide edges above the adjacent rivers by divide distance, colored by the DAI. Divides at high divide distances (>10 km) typically hover around values of 300-500 m, but in some sections, which are also often quite asymmetric, the divides are very close to the rivers in elevation. We interpret this signature to reflect mobile divides that tap into existing drainage networks.
## Conclusions
The new object class DIVIDEobj allows extracting and sorting of drainage divide networks from digital elevation models. We have shown how the divide network is derived and how it is structured. Several functions included as methods for the DIVIDEobj allow plotting and turning the divide network along with it’s attributes into a shapefile. Because of the ease of the extraction and display of the divide network, these functions also lend themselves for appying operations on a large amount of digital elevation models, like outputs from a landscape evolution model. In conjunction with the landscape evolution model study in Scherler and Schwanghart (2020b), we provided movies of all simulations, in which the divide network is colored by different attributes. If interested, you can find out more here:
http://dataservices.gfz-potsdam.de/panmetaworks/showshort.php?id=escidoc:4604896
We hope you enjoy the new features of the DIVIDEobj and find many useful applications. Please be aware that this is the first release of the new class and there may be bugs. Please report issues that you encounter through the comment option.
## References
Scherler, D., Schwanghart, W., 2020a. Drainage divide networks – Part 1: Identification and ordering in digital elevation models. Earth Surface Dynamics, 8, 245–259. [DOI: 10.5194/esurf-8-245-2020]
Scherler, D., Schwanghart, W., 2020b. Drainage divide networks – Part 2: Response to perturbations. Earth Surface Dynamics, 8, 261-274. [DOI: 10.5194/esurf-8-261-2020]
### geoglobe now in MATLAB R2020a
Posted on
In a previous post (here) I have shown how TopoToolbox is able to export data to kml-files which can be directly opened in Google Earth (or other digital globes such as ArcGIS Earth). Indeed, kml is a great way to share geographic data. However, if you want to visually explore your data, the step via kml and Google Earth may not be necessary any longer. Since its latest release, MATLAB’s Mapping Toolbox includes geoglobe, a geographic globe that allows navigating on the Earth surface and to add data. To this end, geoglobe will enable numerous applications that enhance the way we can explore data analyzed and generated using TopoToolbox.
geoglobe opens in a figure created with uifigure.
h = uifigure;
g = geoglobe(h);
Once initiated, you can start with the GeographicGlobe object g. For example, you can change the basemap, which is by default ‘satellite’.
g.Basemap = 'topographic';
There are numerous ESRI basemaps available, as well as some from Natural Earth. Terrain is based on the GMTED2010 model by the U.S. Geological Survey (USGS) and National Geospatial-Intelligence Agency (NGA) and hosted by MathWorks. The DEM has a spatial resolution of 250 m, which is sufficient for many large-scale applications. If you want to add higher resolution data, you can add DTED files with terrain data.
Finally, you can add your own geographic data. Here I show, how you can add stream network data generated with TopoToolbox. I will download some example data from Taiwan using the function readexample:
DEM = readexample('taiwan');
DEM = inpaintnans(DEM);
FD = FLOWobj(DEM);
S = STREAMobj(FD,'minarea',5000);
[lat,lon,z] = STREAMobj2latlon(S,DEM);
geoplot3(g,lat,lon,z+10,'Heightreference','geoid','LineWidth',2)
There are still some slight problems in visualizing the network because the lines remain partly hidden beneath the terrain. I reduced but not fully resolved this effect by adding 10 m to the elevations of the stream network. Notwithstanding, geoglobe is a great utility to visualize topographic data and I am sure there are more functions to come in the future that enable us to interactively work with geographic globes. So far, I particularly like how The Mathworks implemented navigation on the globe which works super smoothly.
### Landscapes Live
Posted on
#shareEGU20 is currently setting an example. Virtual conferences are an effective means to bring scientists together and to spur vivid discussions.
To keep this scientific exchange alive during COVID-19, Philippe Steer, Vivi Pedersen, Stefanie Tofelde, Pierre Valla, Charlie Shobe, and me (my role was actually very minor) have initiated Landscape Live, a new remote seminar series focused on sharing exciting geomorphology research throughout the international scientific community.
The remote format allows for free, planet-conscious, and pandemic-proof attendance by anyone who is interested. Talks will take place using the Zoom meeting software.
Talks will happen during the academic year in blocks of 5-6 weekly talks, with long breaks in between. The first block will run from 28th May through 25th June 2020, with the next block taking place during the fall 2020 semester.
Each seminar in the first block will be held on a Thursday at 2 pm GMT (4 pm Central European Time). The first block features a fantastic lineup of speakers as follows:
• Thursday 28th May at 2 pm GMT: Georgina Bennett, University of Exeter
• Thursday 4th June at 2 pm GMT: Anneleen Geurts, University of Bergen
• Thursday 11th June at 2 pm GMT: Liran Goren, Ben Gurion University of the Negev
• Thursday 18th June at 2 pm GMT: Robert Hilton, Durham University
• Thursday 25th June at 2 pm GMT: Fiona Clubb, Durham University
Please visit https://osur.univ-rennes1.fr/LandscapesLive/ for the most up-to-date information, including the links to each Zoom meeting.
Suggestions for future speakers are welcome; please feel free to send names to any member of the organizing committee. We look forward to seeing you (virtually) at Landscapes Live!
This text is slightly modified from the version written by Charlie Shobe and previously published on the EGU geomorphology blog.
### shareEGU20
Posted on
Only two days left before #shareEGU20 opens its digital gates. I surely won’t be able to be online during the whole event, but I’ve my personal schedule of displays which I’ll try to view and discuss online inbetween homeoffice, homeparenting, homeschooling, …
If you haven’t a fixed schedule yet, consider getting involved in following displays which I authored or coauthored:
EGU2020-11177 – Divide mobility controls knickpoint migration on the Roan Plateau (Wolfgang Schwanghart and Dirk Scherler), Wed, 06 May, 14:00–15:45 | D1315, DISPLAY
EGU2020-19260 – The TopoToolbox v2.4: new tools for topographic analysis and modelling (Dirk Scherler and Wolfgang Schwanghart), Thu, 07 May, 10:45–12:30 | D818, DISPLAY
EGU2020-5609 – Uncertainties in Chi analysis: implications for drainage network and divide stability (Jens Turowski et al.), Tue, 05 May, 10:45–12:30 | D1132, DISPLAY
EGU2020-5900 – Evaluating the effect of variable lithologies on rates of knickpoint migration in the Wutach catchment, southern Germany (Andreas Ludwig et al.), Tue, 05 May, 08:30–10:15 | D1114, DISPLAY
EGU2020-8811 – Why do shelf-incising submarine canyons form? – Insights from global topographic analyses and regression trees (Anne Bernhardt and Wolfgang Schwanghart), Fri, 08 May, 08:30–10:15 | D1102, DISPLAY
EGU2020-3737 – Illuminating the speed of sand – quantifying sediment transport using optically stimulated luminescence (Jürgen Mey et al.), Fri, 08 May, 08:30–10:15 | D899, DISPLAY
### New paper out: Divide mobility controls knickpoint migration
Posted on Updated on
In recent years, there has been a quite fierce debate about how landscapes evolve in response to lateral dynamics of river networks. These dynamics include laterally shifting rivers, their expansion or contraction in upstream and downstream direction, and the mobility of catchment divides. In fact, as we seek to gain insight into changes in climate and tectonics from the analysis of river networks, we often make the assumption that theses river networks are static, that their spatial configuration remains stable.
In our paper published now in Geology (Schwanghart and Scherler, 2020), we argue that this assumption must be made cautiously. We revisited the Parachute Creek basin, Colorado, US, which has been studied by Berlin and Anderson (2007). The site is a real natural laboratory, because it has a uniform climate as well as (sub)horizontally uniform bedrock which makes it possible to analyze the phenomenon of knickpoint migration into a relict landscape in isolation of other factors that govern knickpoint retreat.
In this study, we applied the stream-power incision model to infer present day locations of knickpoints in river profiles of the Parachute Creek basin (see Figure above). The knickpoints emanated from a base level drop at the Upper Colorado River about ~8 Mio years ago. What we realized when looking at the misfit of predicted locations and actual locations, was that there is one subbasin where knickpoints consistently travelled further than our model would predict. Now, you would expect some randomness in knickpoint locations, but the systematic spatial pattern grasped our attention. Looking more closely at the DEM revealed that this subbasin shares some lengths of its divide with the plateau margin, a steep cliff that goes down to the Colorado River. That this margin is actively retreating became clear from numerous beheaded valleys visible in the DEM and Google Earth imagery. A hypothesis was quickly formulated: Knickpoints in this subbasin had migrated further than we expect, because part of the subbasin’s area had been lost to cliff retreat.
In the next few blog posts, I will talk a bit more about our approach to calculate the area lost, and how we actually could even gain some constraints on the timing of area loss. Until then, you may also check Dirk’s and my papers on divide mobility just published in ESURF (Scherler and Schwanghart, 2020a,b). We’ll write here about this two-part paper in due time.
References
Berlin, M. M. and Anderson, R. S.: Modeling of knickpoint retreat on the Roan Plateau, western Colorado, Journal of Geophysical Research: Earth Surface, 112(F3), doi:10.1029/2006JF000553, 2007.
Scherler, D. and Schwanghart, W.: Drainage divide networks – Part 1: Identification and ordering in digital elevation models, Earth Surface Dynamics, 8(2), 245–259, doi:https://doi.org/10.5194/esurf-8-245-2020, 2020a.
Scherler, D. and Schwanghart, W.: Drainage divide networks – Part 2: Response to perturbations, Earth Surface Dynamics, 8(2), 261–274, doi:https://doi.org/10.5194/esurf-8-261-2020, 2020.
Schwanghart, W. and Scherler, D.: Divide mobility controls knickpoint migration on the Roan Plateau (Colorado, USA), Geology, doi:10.1130/G47054.1, 2020. Supplementary material can be found here.
### Calculating Hack’s Law using TopoToolbox
Posted on
Hack’s Law describes an empirical relationship between river length and drainage area (Hack 1957). The functional relationship is a power function with the equation L = c A^h where L is the length of the longest stream from the outlet to the divide, A is the drainage area above a particular locality, c is a constant, and h is the scaling exponent (see figure).
In general, h is slightly below 0.6 (Rigon et al. 1996). Today, I show how to calculate the parameters of Hack’s Law. First, I will demonstrate how the parameters of Hack’s Law are derived for a single catchment. In the second part, I will then apply the technique to calculate the parameters for catchments draining Taiwan.
## Big Tujunga catchment
In the first example, I will use the DEM of the Big Tujunga catchment, which is part of the TopoToolbox distribution. While Hack’s Law appears to hold for any point inside a basin (Rigon 1996), I am showing here how to derive it from set of pixels that comprise the longest path from the divide to the catchment outlet. As usual, I derive flow directions (FLOWobj) and the stream networks (STREAMobj). Then, I extract the largest basin and subsequently the longest stream in this basin that extends from the divide to the outlet.
DEM = GRIDobj('srtm_bigtujunga30m_utm11.tif');
FD = FLOWobj(DEM);
DIST = flowdistance(FD);
S = STREAMobj(FD,'minarea',1000);
S = klargestconncomps(S);
D = drainagebasins(FD,S) > 0;
DIST = clip(DIST,D);
[~,IX] = max(DIST.Z(:));
S = STREAMobj(FD,'channelheads',IX);
imageschs(DEM,D,'falsecolor',[.0 0 0],'truecolor',[1 1 1], ...
'colorbar',false,'ticklabels','nice');
hold on
plot(S,'b')
hold off
In a second step, I calculate drainage area and distance. The function flowacc computes the flow accumulation (in pixels) for the entire grid, and the function getnal extracts the grid values for each node in the river network. In addition, I calculate flow distance using the function distance. By default, the function calculates the distance from each node to the outlet, but there are also other options. Here, we use the maximum distance from the channelhead.
A = flowacc(FD)*DEM.cellsize^2;
a = getnal(S,A);
d = distance(S,'max_from_ch');
Now we can plot the two node-attribute lists versus each other.
figure
plot(a,d,'.')
xlabel('Area [m^2]')
ylabel('Stream length [m]')
Now lets use these data to fit Hacks Law. I am using nlinfit from the Statistics and Machine Learning Toolbox for this purpose which requires initial estimates of the parameters. The choice of these initial values can be challenging and way-off values may render nlinfit unable to converge to a good solution.
b = nlinfit(a,d,@(b,X) b(1)*X.^b(2),[0.002 0.6]);
b is a two-element vector and contains the parameters of Hack’s Law, where b(1) is the constant and b(2) is the scaling exponent. Here, b(1) is 0.0035 and b(2) is 0.8346. Having these parameters, we can now obtain estimates for distances for any value of drainage area.
figure
plot(a,d,'.')
hold on
fplot(@(X) b(1)*X.^b(2),xlim)
hold off
xlabel('Area [m^2]')
ylabel('Stream length [m]')
You may note that nlinfit returns an exponent different from Hack’s value of 0.6 and there may be different reasons for this. First of all, the basin may have a particular form because it extends on the relict landscapes of the Chilao Flats and features a series of knickpoints that suggest that it experiences a transient response to accelerated uplift. Second, the river contains both mixed bedrock-alluvial sections in the upper part, and a purely alluvial lower part. Finally, our data violates some of the assumptions that underly the inference of parameters, i.e. a identically and independent distribution (i.i.d.). There may be ways to at least partly account for these violations by random sampling of our data, but I don’t want to go into too much detail here.
## Taiwan
In the second example, we will use the SRTM-3 DEM of Taiwan, which you can download using the readexample function. In this example, we calculate the parameters in Hack’s Law from multiple drainage basins.
DEM = readexample('taiwan');
We first inpaint missing values in the DEM and use FLOWobj to derive flow directions. In a second second step, we then calculate the trunk stream for each basin, again up to the divide. This is a bit tricky, if you have multiple drainage basins. Here, I show how to do this using FLOWobj2cell and cellfun. I omit many small drainage basins that occur along the coast of Taiwan by setting a minimum threshold of drainage basin are to an arbitrary size of 1000 pixels.
DEM = inpaintnans(DEM);
FD = FLOWobj(DEM);
[CFD,~,a] = FLOWobj2cell(FD);
CFD = CFD(a>1000);
[~,IX] = cellfun(@(fd) max(flowdistance(fd)),CFD);
S = STREAMobj(FD,'channelheads',IX);
imageschs(DEM,[],'colormap',[1 1 1],'colorbar',false,'ticklabels','nice');
hold on
plot(S)
Again, I calculate upslope area and distance and plot both in semilogarithmic axes.
A = flowacc(FD)*DEM.cellsize^2;
a = getnal(S,A);
d = distance(S,'max_from_ch');
OUTLETS = streampoi(S,'outlets','logical');
a = a(OUTLETS);
d = d(OUTLETS);
figure
semilogx(a,d,'s')
xlabel('Area [m^2]')
ylabel('Stream length [m]')
hold off
Then, I again use nlinfit to determine the parameters of Hack’s Law.
b = nlinfit(a,d,@(b,X) b(1)*X.^b(2),[0.002 0.6]);
semilogx(a,d,'s')
hold on
fplot(@(X) b(1)*X.^b(2),xlim)
hold off
xlabel('Area [m^2]')
ylabel('Stream length [m]')
This time, nlinfit yields 0.5315 as exponent.
Finally, we might want to look at the spatial patterns of residuals from our predicted lengths.
rerr = (d - (b(1)*a.^b(2)))./d;
IXoutlet = S.IXgrid(OUTLETS);
D = drainagebasins(FD,IXoutlet);
D.Z = double(D.Z);
D.Z(D.Z ~= 0) = rerr(D.Z(D.Z~=0));
D.Z(isnan(DEM.Z)) = nan;
figure
imageschs(DEM,D,'caxis',[-1 1],'colorbarylabel','Rel. error')
The map shows that the residuals (or relative errors) from Hack’s Law. We can see that some catchments, particularly those draining the western slopes tend to have longer distances of the main river than predicted by Hack’s Law. These catchments are often rather elongated which might partly due to their coverage of large alluvial areas and lowlands.
## References
Hack, J. T.: Studies of longitudinal stream profiles in Virginia and Maryland, USGS Professional Paper, 295, 45–97, 1957.
Rigon, R., Rodriguez-Iturbe, I., Maritan, A., Giacometti, A., Tarboton, D. G. and Rinaldo, A.: On Hack’s law, Water Resources Research, 32, 3367–3374, 1996.
### Open PhD position: ‘Geochronology of the first Eurasian Ice Sheets’
Posted on
Want to work with a great advisory team on the geochronology of the first Eurasian ice sheets? Well, here is your chance. My friend and colleague John Jansen (Czech Academy of Sciences) and his colleagues from Charles University and Aarhus University offer a highly exiting PhD opportunity. You will be working with geochronological techniques (cosmogenic nuclide dating) and numerical modelling to unravel the elusive traces of early Eurasian glaciations. There is definitely a lot to discover here.
Check the job advertisement here and apply!
### Handling closed basins
Posted on Updated on
Much of the world’s terrestrial area is comprised of endorheic basins. These basins have no drainage to the oceans. Instead, water in these basins seeps into the ground or evaporates (wikipedia).
Internally drained basins come in different sizes. The Caspian Sea, the largest inland body of the world, has a catchment area of 3,626,000 sqkm. In contrast, dolines – small hollows in karstic terrain – have just a few tens of square meters.
In digital elevation model analysis, internally drained basins can be quite challenging, largely because we often do not know whether a closed basin in our DEM is a true closed basin or an artefact of the data.
When calculating flow directions in TopoToolbox, the default setting is that all closed basins are artefacts. However, when working in dryland areas, we must be cautious with this assumption. In this post, I will show how to deal with closed basins. In addition, I’ll show how to use MATLAB Live Scripts that help us to interactively explore parameters that control the identification of closed basins.
To get started, please download the Tibetan Plateau example using the readexample function:
DEM = readexample('tibet');
The DEM features numerous closed basins that we identify using the fillsinks function. We then visualize the difference between the filled DEM and the actual DEM, which gives us an idea about the location and depth of internally drained basins.
DEMf = fillsinks(DEM);
DIFF = DEMf-DEM;
imageschs(DEM,DIFF,'usepermanent',true,...
'colormap',flipud(hot),'colorbarylabel','Sink depth [m]')
In a second step, we will investigate the properties of the closed basins in more detail. Here we look at two properties. The area of the sinks, and their depth. The function regionprops which is part of the Image Processing Toolbox comes in handy here.
SINK = DIFF > 0;
stats = regionprops(SINK.Z,DIFF.Z,'Area','PixelIdxList','MaxIntensity');
sinkarea = [stats.Area]*DEM.cellsize^2;
sinkdepth = [stats.MaxIntensity];
loglog(sinkarea,sinkdepth,'sk');
xlabel('Area of sink [m^2]')
ylabel('Maximum depth of sink [m]')
There are more than 11,000 sinks in the DEM. Clearly, not all of them are true sinks. I propose that true sinks should have a large areal extent and be rather deep. Hence, we can apply some thresholds to classify sinks into true sinks and artificial sinks. As a first guess, lets assume a combination of an areal extent of 10 km2 and a depth of 10 m as suitable parameters of our sink identification model.
areathres = 10^7;
depththres = 10;
sinkstats = stats(sinkarea > areathres & sinkdepth > depththres);
sinkpixels = {sinkstats.PixelIdxList};
These thresholds returns six closed basins. In a third step, we calculate flow directions in a way that acknowledges the internal drainage of these basins. There are different ways to do this. Here, we use the trick that sets the pixels with the minimum elevation in each sink to nan. There are other ways to do this (see the function FLOWobj), but this approach is actually the fastest.
% A: Here we identify the linear indices of pixels
% with minimum elevation for each sink
minelevix = cellfun(@(ix) ix(find(DEM.Z(ix) == ...
min(DEM.Z(ix)),1,'first')),sinkpixels);
% B: Make a copy of the DEM
DEM2 = DEM;
% C: Set elevations of minimas to nan
DEM2.Z(minelevix) = nan;
% D: calculate flow directions
FD = FLOWobj(DEM2);
% E: calculate flow accumulations
A = flowacc(FD);
% F: and visualize the results.
figure
imageschs(DEM,dilate(sqrt(A),ones(11)),...
'usepermanent',true,'colormap',flowcolor)
hold on
[x,y] = ind2coord(DEM,minelevix);
plot(x,y,'ko','MarkerFaceColor','b')
hold off
Finally, we may not be satisfied with the results. Based on our parameters, there may be to many or to few closed basins. Below is a gif movie that shows how to use the interactive tools in MATLAB Live Scripts to vary the sink-area and sink-depth parameters and instantly see their effects on the resulting flow network.
I hope you enjoyed this post. Applying TopoToolbox in regions with internally drained basins should now be no problem anymore.
### Crossing divides
Posted on Updated on
Divide mobility and changes in flow network configuration are hot topics these winter days. How mobile are drainage divides, what controls their mobility, which are the involved time scales, and how do shifting divides affect other metrics derived from digital elevation models?
One of the issues addressed by some studies is which metrics are useful to characterize divide movements (Forte and Whipple, 2018, Scherler and Schwanghart, 2019). For example, there are chi maps which allow us to map spatial imbalances in drainage network configuration. However, whether these imbalances actually translate into divide movements, or whether these movements will occur in some distant future remains unclear. Gilbert metrics in turn quantify cross-divide differences in hillslope gradient (or other similar metrics) and thus may provide a more suitable proxy for processes that actually act along or close to divides. To this end, it may often be useful to look at both types of metrics.
Now, you know that I like visualisations. And today, I want to briefly show how to combine hillslope and divide geometry with longitudinal river profiles of two adjacent rivers. I am hoping to provide the computational means to automatically derive such profiles which might prove very insightful.
The example uses the Big Tujunga data which we first load. We derive flow directions and the stream network. We will only use the trunk rivers.
DEM = GRIDobj('srtm_bigtujunga30m_utm11.tif');
FD = FLOWobj(DEM,'preprocess','c');
S = STREAMobj(FD,'minarea',1000);
S = trunk(S);
In a next step, I am extracting two rivers which are sourced in the so-called Chilao Flats and drain this low-relief relict landscape. Note that extractconncomps is an interactive function and you will need to manually pick the trunk rivers yourself.
extractconncomps(S)
Plotting the two trunk rivers as profiles (plotdz) generates two profiles that have their outlets set to the distance value 0. That’s not so intuitive and I actually want that the channelheads should be opposing each other.
So, here is a little trick that I can use to connect the two rivers and thereby generate a path across the divide. First, I convert the STREAMobj of the two rivers to a GRIDobj. Then, I run a gray-weighted distance transform to calculate the least-cost path emanating from the outlet of one of the rivers to all other pixels in the DEM. The costs of moving from one pixel to another is elevation, but I decrease the costs at the rivers themselves, which forces the algorithm to run along the river network.
GS = STREAMobj2GRIDobj(S);
DEMaux = DEM;
DEMaux.Z(GS.Z) = 1;
IX = streampoi(S,'outlet','ix');
DEMaux.Z = graydist(DEMaux.Z,IX(1));
imageschs(DEM,DEMaux)
I now have a new GRIDobj DEMaux. I chose this variable name because DEMaux represents some auxiliary topography (the same approach is actually used to route through flat sections or closed pits in the DEM).
Now I am using this auxiliary topgraphy to calculate a FLOWobj, and a STREAMobj sourced at the outlet of the second channel. The resulting stream starts at the outlet of the second channel, runs upstream, crosses the divide, and then follows the course of the first channel until its outlet.
FDaux = FLOWobj(DEMaux);
Saux = STREAMobj(FDaux,'channelhead',IX(2));
% plot it
figure
subplot(1,2,1)
imageschs(DEM,[],'colormap',[1 1 1],'colorbar',false)
hold on
plot(Saux,'LineWidth',2);
subplot(1,2,2)
plotdz(Saux,DEM)
Ok, that’s quite neat. Now, in addition, I might want to plot the river and cross-divide profile together with chi-values.
A = flowacc(FD);
c = chitransform(S,A);
[~,zb] = zerobaselevel(S,DEM);
c = c + zb;
C = GRIDobj(DEM);
C.Z(S.IXgrid) = c;
caux = getnal(Saux,C);
figure
plotdz(Saux,DEM,'type','area')
hold on
plotdz(Saux,DEM,'color',caux)
h = colorbar;
h.Label.String = '\chi [m]';
hold off
Ok, that’s what I wanted. Have fun coding!
References
Forte AM, Whipple KX. 2018. Criteria and tools for determining drainage divide stability. Earth and Planetary Science Letters 493 : 102–117. DOI: 10.1016/j.epsl.2018.04.026
Scherler D, Schwanghart W. 2019. Identification and ordering of drainage divides in digital elevation models. Earth Surface Dynamics Discussions : in review (open discussion esurf-2019-51).
|
2020-08-08 15:15:13
|
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 1, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.5357818603515625, "perplexity": 3270.867632590145}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": false}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2020-34/segments/1596439737883.59/warc/CC-MAIN-20200808135620-20200808165620-00344.warc.gz"}
|
https://robotics.stackexchange.com/questions/20525/guidance-on-sbc-for-visual-based-slam
|
Guidance on SBC for Visual-Based SLAM [closed]
Intro: I am a student that just started a project regarding prototyping a mobile robot that involve Indoor SLAM implementation. I am very new to the field of computer vision, SLAM and SBC (single board computer).
I am searching for advice on choices of SBC for visual ORB-SLAM implementation.
Below are the few options I came across, welcome to provide new suggestion:
1. Raspberry Pi : Is the computational power of Raspberry Pi able to support ORB-SLAM? If so, which model would be my best choice?
2. BeagleBone Black : Any comments on this SBC for ORB-SLAM implementation?
My budget for SBC is around $150 I would also like to know a rough estimate of the minimum requirement for SBC to implement ORB-SLAM. I really appreciate any advice, resource or help I could get. Thanks in advance. 2 Answers As FourierFlux mentioned, the Jetson Nano is perhaps the most processing power you're going to get for$150 and under. However getting ORB-SLAM to run in it might be a challenge — there is a port but it doesn't seem to be very actively maintained.
If splurging a little more on the SBC is an option, you could get a UDOO Bolt, which is a full x86-64 SBC built around the AMD Ryzen Embedded chipset. That would enable running the stock ORB-SLAM system. It sells online for \$332.00 (Vega 3 graphics) or \$418.00 (Vega 8).
Alternatively, if you don't require running all software on the robot itself, you could get a Raspberry Pi, set it up to stream visual data to a more powerful remote machine (e.g. a desktop PC), and run ORB-SLAM there instead. This could be easily accomplished in ROS by running hardware driver nodes on the Raspberry Pi, more demanding SLAM and navigation nodes on the remote machine, and have the two sides communicate over wi-fi:
See this report for details on how this could be done.
Finally, you could try a more economical SLAM approach: a successful implementation of FastSLAM2.0 for Raspberry Pi has been reported, you could contact the authors and ask for the code, or re-implement it yourself.
• Your input is much appreciated, contain so much information that I am seeking for. I have some other questions regarding this manner. What do you mean if I dont run the SLAM code itself on the robot I can go for raspberry pis? Whats the function of remote function (why it can overcome the lack of processing power of raspberry pis? and could I ultilize remote machine with Jetson Nano too? Sorry for asking more questions. Thanks! Apr 22, 2020 at 19:23
• I added a diagram illustrating what I mean, hope it helps. You could of course use a Jetson Nano instead of a Raspberry Pi in a similar setup; however the Nano's appeal is precisely packing enough computing power to run everything locally, so I'm not sure what the point would be. Apr 22, 2020 at 20:42
• Thank you so much for the diagram. I am glad that I got your advice :) Apr 25, 2020 at 18:32
Tbh ur going to need a Jetson nano at least, maybe a tx.
• Thank you. A lot of people suggested Jetson Nano to me. Glad to hear from you too! Apr 21, 2020 at 9:26
|
2022-08-13 09:24:29
|
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.23958764970302582, "perplexity": 1873.2583457234675}, "config": {"markdown_headings": false, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2022-33/segments/1659882571911.5/warc/CC-MAIN-20220813081639-20220813111639-00767.warc.gz"}
|
http://mymathforum.com/linear-algebra/342779-sea-ships-their-space-time-equations.html
|
My Math Forum Sea ships and their space-time equations
User Name Remember Me? Password
Linear Algebra Linear Algebra Math Forum
November 12th, 2017, 01:41 AM #1 Newbie Joined: Nov 2017 From: Serbia Posts: 2 Thanks: 0 Sea ships and their space-time equations On different days, two sea ships and their space-time equations (coordinates in km and hours) were observed: Find out whether the boats have different velocities or different courses. My question: Since the coordinates are time and space, how do I plot these on a graph? Can I do the following: For example for A, can I take 12 as x1 space coordinate and 8 as y1 time coordinate of one point. Then, take -4 and 18 for the other point so that could be the first vector? And how can I calculate the velocity? Thank you.
November 12th, 2017, 04:22 AM #2
Math Team
Joined: Jan 2015
From: Alabama
Posts: 2,967
Thanks: 807
Quote:
Originally Posted by Tarata12 On different days, two sea ships and their space-time equations (coordinates in km and hours) were observed: Find out whether the boats have different velocities or different courses. My question: Since the coordinates are time and space, how do I plot these on a graph? Can I do the following:
Since the independent variable is time, one dimensional, and the dependent variable, is position, two dimensional, in order to graph this you would need a three dimensional graph. Not impossible but awkward to graph, especially if you are doing it on two dimensional paper!
Quote:
For example for A, can I take 12 as x1 space coordinate and 8 as y1 time coordinate of one point. Then, take -4 and 18 for the other point so that could be the first vector? And how can I calculate the velocity? Thank you.
I have no idea where you got "8" and "18" as time coordinates! When t= 0, the A position is (12, -4). When t= 1, the A position is (24, -8 ). If t= 8 (just because you mention it) the A position is (12+ 8(12), -4+ 8(-4))= (108, -36). But that really has nothing to do with this problem.
In any case, rather than using three dimensions to graph A's position, I would say that since x= 12+ 12t, t= (x- 12)/12 and then y= -4- 4t= -4- 4(x-12)/12= -4- (x- 12)/3= -x/3. The graph is y= -x/3, a straight line (you could then, mark points on that line with their t value. For example, if t= 0, x= 12, y= -4 so the point (12, -4), which lies on y= -x/3, corresponds to t= 0.)
Similarly Since, for B, x= 10+10t, t= (x- 10)/10. Then y= -2+ 25t= -2+ 25(x- 10)/10= -2+ 2.5x- 25= 2.5x- 27.
Notice that those lines have different slopes, -1/3 and 2.5, so they are definitely not the same course. As far as the velocities are concerned they are just the vectors (12, -4) and (10, 25). Of course, -4/12= -1/3 and 25/10= 2.5.
November 12th, 2017, 05:25 AM #3 Newbie Joined: Nov 2017 From: Serbia Posts: 2 Thanks: 0 Ah, I see. I made a mistake in the equation, it's actually $x = [12 -4] +t*[8, 18]$ that's why you couldn't find where it comes from Anyway, it was not correct. Now I understand, I solved the problem Thank you very much!
November 12th, 2017, 12:56 PM #4 Global Moderator Joined: May 2007 Posts: 6,454 Thanks: 567 the speeds are different - 26.9 for B and 19.7 for A. B's speed components have a ratio of 2.5, while A's have a ratio of 2.25, so they are not on the same course.
Tags equations, sea, ships, space-time, spacetime, vectors
Thread Tools Display Modes Linear Mode
Similar Threads Thread Thread Starter Forum Replies Last Post lemgruber Physics 9 August 17th, 2015 12:12 PM HannahJane1993 Differential Equations 2 April 22nd, 2015 02:09 AM Dacu Math Events 16 July 10th, 2013 11:13 AM brambram Applied Math 0 October 19th, 2012 07:57 AM hml Real Analysis 2 October 21st, 2009 09:12 AM
Contact - Home - Forums - Cryptocurrency Forum - Top
|
2018-03-22 06:19:27
|
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.6964369416236877, "perplexity": 1960.272142754015}, "config": {"markdown_headings": false, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.3, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2018-13/segments/1521257647777.59/warc/CC-MAIN-20180322053608-20180322073608-00494.warc.gz"}
|
https://www.physicsforums.com/threads/taking-the-derivative.384910/
|
# Homework Help: Taking the derivative
1. Mar 8, 2010
### WesleyJA81
In the text (attached) I can't figure out how they are making the jump from the first eqn to the second eqn. Any guidance would be helpful. Thanks
File size:
17.9 KB
Views:
110
2. Mar 9, 2010
### vela
Staff Emeritus
Apparently, p=p2 and ρ=ρ2, and p is a function of 1/ρ. The quantities q, p1, and ρ1 are constants.
$$\frac{\gamma}{\gamma-1}\left(\frac{p}{\rho}-\frac{p_1}{\rho_1}\right)-\frac{1}{2}\left(\frac{1}{\rho_1}+\frac{1}{\rho}\right)(p-p_1)=q$$
If you let x=1/ρ, you can write the equation as
$$\frac{\gamma}{\gamma-1}\left(xp(x)-\frac{p_1}{\rho_1}\right)-\frac{1}{2}\left(\frac{1}{\rho_1}+x\right)(p(x)-p_1)=q$$
Differentiate that equation with respect to x and solve for p'(x).
|
2018-10-18 06:49:07
|
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 1, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.8368293046951294, "perplexity": 4516.814461404341}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2018-43/segments/1539583511744.53/warc/CC-MAIN-20181018063902-20181018085402-00356.warc.gz"}
|
https://mail.lon-capa.org/pipermail/lon-capa-cvs/Week-of-Mon-20170116/027569.html
|
# [LON-CAPA-cvs] cvs: loncom /html/adm/help/tex Author_LON-CAPA_Introduction.tex
damieng damieng at source.lon-capa.org
Fri Jan 20 17:06:06 EST 2017
damieng Fri Jan 20 22:06:06 2017 EDT
Modified files:
Log:
--- loncom/html/adm/help/tex/Author_LON-CAPA_Introduction.tex:1.13 Thu Jan 19 21:12:45 2017
+++ loncom/html/adm/help/tex/Author_LON-CAPA_Introduction.tex Fri Jan 20 22:06:05 2017
@@ -35,13 +35,25 @@
can be marked obsolete, and the version in your authoring space deleted, but the published version(s) will remain in your folders in the locations in which
they were originally published.
-\subsubsection*{The LON-CAPA markup language and the 5 editors}
+\subsubsection*{The LON-CAPA markup language}
Content documents are created in the LON-CAPA markup language. Like HTML, which is the language used to create all content on the web, the LON-CAPA language is a \emph{markup language}. A markup language is simply structured with \emph{tags}: each structure in the document starts with a \emph{start tag} like \texttt{<problem>} and ends with an \emph{end tag} like \texttt{</problem>}.
Additionally, each structure (called an \emph{element}) can have \emph{attributes}, each one composed of a name and a value. For instance, \texttt{<foil value="true">} starts a true foil.
A markup language is defined essentially with a list of elements and attributes, and rules specifying which element is allowed inside which other element.
-The LON-CAPA language includes most HTML elements, and adds more elements which are described in this manual. The syntax is similar to HTML, but with an additional constraint: when an element is started with a start tag, it must always be closed with an end tag.
+
+The LON-CAPA language includes most HTML elements, and adds more elements which are described in this manual. The syntax is similar to HTML, but with an additional constraint: when an element is started with a start tag, it must always be closed with an end tag (unless the empty element syntax is used, as with \texttt{<hr/>}).
This syntax is sometimes called XML'' in this manual. XML is a specific syntax for markup languages, but the LON-CAPA language is actually not using the XML syntax, which would require escaping special characters in scripts.
+
+HTML elements are not listed in this manual, but good resources are available on the web to learn HTML. For instance:
+\begin{itemize}
+\item Learn web development:\\
+\texttt{https://developer.mozilla.org/en-US/docs/Learn/HTML}
+\item HTML element reference:\\
+\texttt{https://developer.mozilla.org/en-US/docs/Web/HTML/Reference}
+\end{itemize}
+
+\subsubsection*{Five editors}
+
\bigbreak
The authoring environment currently includes 5 different editors:
\begin{itemize}
|
2021-07-30 04:23:53
|
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 1, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.34339943528175354, "perplexity": 4764.008202521241}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2021-31/segments/1627046153931.11/warc/CC-MAIN-20210730025356-20210730055356-00485.warc.gz"}
|
https://proxies-free.com/tag/pdf/
|
## Merge SVG files into PDF
I have a directory of about 1000 SVG files in the SVGZ extension that resembles a textbook. An example of the SVG files can be found here. I also have a directory of XHTML with the same name, but was all text. These were the only formats I could get. How would I merge the files into one PDF (retaining the quality of words / vectors)?
## Convert a PDF file to an XLS file [pendente]
I'm developing a system and need to convert the PDF to exel and so far I have not found anything can someone help me?
I have created a form in which the user climbs up the PDF file. I could read the PDF file and display it on the screen, but I can not convert that PDF file to xls
## Get your eBook Personal Productivity Power for \$ 3 now
Description: Imagine Personal Productivity Power – More hours and more work in one day, In this eBook, you'll learn why there never seems to be enough time, not all hours are the same, have a clear intention of what you want to achieve. Work all the time, separate the time blocks for completing certain tasks, the quality of your work versus the quantity of your work, how to get up early and be more productive, spend more time outsourcing to others and using the internet to your Time and your automation to use.► Contains 35 pages,
## I give you 40k Quotes Text, 2k Motivation Quotes, 200 fitnetss Videos, 350 Spanish PDF Books. for \$ 3
#### I give you 40k Quotes Text, 2k Motivation Quotes, 200 fitnetss Videos, 350 Spanish PDF Books ..
Products from this package (RR, MRR and PLR) are :
• E-Books
• items
• Highly professional eCovers and headers
• PLR software
• Tutorial Videos
Advantages of using PLR products :
• PLR content creates sales funnels that your customers can travel through.
• Rotate the PLR articles and then use them on your website or blog.
• Change, resell, unpack / repack.
• As GIVEAWAY / BONUS.
• Professionally designed covers that you can choose for your next product.
• Matching header graphics that you can use for your websites, blogs, and more.
• PSD files so you can fully customize and edit the covers and headers.
Some of the covered niches:
• Internet Marketing
• Earn money online
• health
• SEO
• Fashion
• Gaming
• travel
• Earn money online
• 350 PDF books in Spanish (extra bonus for you) Fitness video, quotes and more
Article properties:
• Format: (Title), (Word Count), (Summary), (Keywords), (Article Text).
• You can enter between 350 and 2000 words.
• High quality SEO optimized articles
GIVEAWAYS and FREE bonuses, such as FREE TRAINING for:
• Set up PLR membership.
• How to earn money with PLR products.
• PLR renaming.
, (tagsToTranslate) pdf (t) books (t) plr (t) fitness (t) videos (t) quotes
## App Windows – ORPALIS PDF OCR 1.1.29 Professional | NulledTeam UnderGround
Languages: English, French | File size: 195.08 MB
Turn all your documents into searchable PDFs! Scanned documents and images can now be searched at lightning speed thanks to an innovative conversion engine. If you need an easy way to convert to searchable documents, using third-party software solutions is the best alternative. ORPALIS PDF OCR is one of the programs that allows you to easily perform the above task.
In case you do not want to get any further
Why PDF OCR?
To offer a fast and powerful tool requires a lot of technology. Here are some facts about ORPALIS PDF OCR and the team that developed it.
– Faster tool to convert documents to PDF OCR.
– High quality optical character recognition and layout analysis.
– Productive and intuitive user interface.
– Image files can now be searched
– Stop wasting time searching for information in log documents.
– Carries out fast automatic indexing for large volumes of documents.
– User-friendly software thanks to intuitive user interface.
– Fast and reliable OCR engine supported by the world bestselling GdPicture.NET SDK.
– Built by recognized industry experts.
Test the innovative features of PDF OCR:
Input File Formats
Convert PDF (PDF OCR Cloud Edition) and more than 100 other file formats (PDF OCR On-Premises Edition) into a searchable PDF!
Supported languages
More than 60 languages are supported in the PDF OCR On-Premises Edition! The Cloud Edition includes English, French, Spanish, German and Italian.
PDF OCR's powerful multithreading engine can handle very long documents and hundreds of pages at a time!
Command line support
Integrate all PDF OCR functions into your production line, automate your processes and gain a lot of time!
Layout analysis
This feature automatically detects the orientation of each page for the most accurate OCR results possible
document selection
You can select the exact document to be processed by PDF OCR or the entire folder. Select your files or folders or drag and drop them directly into PDF OCR.
Localized user interface
Currently the user interface is translated into English and French, but wait, more languages will follow!
64-bit support
PDF OCR is AnyCPU. This means that if possible, the application runs as a 64-bit process and uses 32-bit if only this mode is available.
RELEASE NOTES:
– Improved accuracy and speed of the OCR engine.
Requirement: Windows from XP SP3 to Windows 10.
START PAGE
In case you do not want to get any further
## I will convert Word to PDF or PDF to Word per 80 pages
#### I convert Word to PDF or PDF to Word per 80 pages
Hi,
If you're looking for someone to convert PDF to Word, Word to PDF, PowerPoint, Excel, and so on, then here's your match.
PDF and Word conversion.
PDF to Word, Excel, PowerPoint
Word to conversion
Formatting documents.
MS Word format
High quality work
Fast delivery time
100% accuracy and error-free working
100% satisfaction or refund
This service offers you
Conversion of scanned PDFs to WORD, EXCEL, POWER POINT
Conversion of WORD – >> JPG, PDF, EXCEL, POWER POINT
I can unlock a password protected file.
You are 100% satisfied at the end of our business.
Why do you have to buy from me?
Your file will be treated professionally and a decent document will be delivered.
I offer a refund if you are not satisfied
I deliver in 2 hours
, (tagsToTranslate) convert (t) file (t) word (t) pdf (t) change (t) program
## pdftotext – PDF to text with line breaks with PHP
Thanks for writing an answer to Stack Overflow!
But avoid
• Make statements based on opinions; Cover them with references or personal experience.
## Probability distributions – how to derive a common PDF
I was looking for this question,
They only give the definition of common pdf
$$iint f (x, y) dxdy = 1$$
and example like:
$$f (x, y) = x + cy ^ 2, 0 le x le 1, 0 le y le 1$$
find c.
But not how to derive f (x, y)
f (x) and f (y).
For example, I want to know how to derive a common pdf f (x, y) of 2 random variables X, Y from a gamma distribution or any 2 distribution.
Hope can have a calculation example for that
or a website reference explaining how this can be done is great. Many thanks.
## pdf – How can I reduce the file size of iOS scanned documents (with the file app)?
In iOS 13 (maybe 12?) You can scan documents with the file app. It uses the camera to crop manually or automatically, to allow multiple pages, and to create a PDF. A very big PDF.
My single A4 page is 10 MB in size. A driving license 4MB. It's too big for uploading to some websites.
Is there a way to reduce the file size when scanning or subsequently? specially without Using a third-party app (there are a lot and I had the expectation that these will not be needed with a built-in feature).
## Preview: PDF highlighting and search only work after a reboot
Recently, a problem has come to me more and more frequently when I use the preview to search PDF documents for text (using Command + F or the search box at the top right). At first everything seems to be ok:
1. (as it should) The sidebar displays a list of thumbnails that contains all the pages with the search term.
2. In general, but not always, the term I'm looking for is highlighted on the first page it appears on.
After that, however, it quickly comes to malfunction:
1. When I click thumbnails at the bottom of the list, the preview will generally take me to that page in PDF (as before). However, the searched terms are no longer highlighted.
2. Instead, Preview emits a low-pitched beep that occurs when someone tries to perform an illegal action in various Mac situations.
3. In that case, I can not highlight anything in the document I'm previewing. In fact, I can not even select a text anymore.
4. Closing the sidebar can not solve the problem.
5. Hiding the search box on the top right does not solve this.
6. Selecting the Tools menu and then selecting Text does not work or click the text selection icon in the edit box.
7. If I finish the preview and reopen the PDF, I can highlight it again. However, the problem occurs again as soon as I perform another search.
8. Working with the same files in other PDF programs (such as Skim, Acrobat, etc.) is easy. I am pretty sure that this is not a problem with the underlying PDFs.
9. None of the answers to this question corrects this. (In fact, I have delivered such an answer myself some time ago, but this trick and the others are not working anymore).
I use Preview for many years without any problems. I think this issue has come up in the last three to four months. I was hoping that upgrading to Catalina could solve the problem, but it did not.
I'd like to think that I'm pretty sophisticated with computers (I have a reputation of nearly 8,000 on the Stackoverflow main page), but I'm amazed. I also do not have a good overview of how to get diagnostic information or create a reproducible example.
Any hints on how to do this would be greatly appreciated.
|
2019-12-07 07:09:49
|
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 2, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.22595910727977753, "perplexity": 3592.2981769826206}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2019-51/segments/1575540496492.8/warc/CC-MAIN-20191207055244-20191207083244-00225.warc.gz"}
|
https://www.thanghuynh.io/publication/
|
# Publications
Filter by type:
. Fast binary embeddings, and quantized compressed sensing with structured matrices. Communications on Pure and Applied Mathematics, 2018.
|
2019-08-23 10:58:03
|
{"extraction_info": {"found_math": false, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.9715203642845154, "perplexity": 2486.8515687296244}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2019-35/segments/1566027318375.80/warc/CC-MAIN-20190823104239-20190823130239-00117.warc.gz"}
|
http://mathhelpforum.com/math-topics/3617-fenders.html
|
# Math Help - fenders
1. ## fenders
choose a number say 156
its factors are 1,2,3,4,6,12,13,26,39,52,156
grouped by last digit (1) (2,12,52) (3,13) (4) (6,26,156) (78) (39)
We say the FENDERS (factor enders) of 156 are 1,2,3,4,6,8,9 &that 156 is a 7-fender (it has seven fenders)
A) Show that a number which has 0 & 9 as fenders has at least four more fenders.
B) Find 3 9-fenders less than 1000 with different sets of fenders?
I know 900 is one(0,1,2,3,4,5,6,8,9) & 420 is another (0,1,2,3,4,5,6,7,8) but can't find another?
2. This strange and agruably pointless problem has already been answered here .
Also...
(B) 630 (0,1,2,3,5,6,7,8,9)
3. ## fenders
Originally Posted by Quick
This strange and agruably pointless problem has already been answered here .
Also...
(B) 630 (0,1,2,3,5,6,7,8,9)
630 is a 10 fender (45*14) so still need help
4. Originally Posted by Quick
This strange and agruably pointless problem has already been answered here .
Also...
(B) 630 (0,1,2,3,5,6,7,8,9)
Help, how 630 is a 8 fender?
KeepSmiling
Malay
5. Originally Posted by malaygoel
Help, how 630 is a 8 fender?
KeepSmiling
Malay
35*18 gives 630
6. Originally Posted by bardo
B) Find 3 9-fenders less than 1000 with different sets of fenders?
I know 900 is one(0,1,2,3,4,5,6,8,9) & 420 is another (0,1,2,3,4,5,6,7,8) but can't find another?
It is 270(0,1,2,3,5,6,7,8,9)
KeepSmiling
Malay
7. Originally Posted by malaygoel
It is 270(0,1,2,3,5,6,7,8,9)
KeepSmiling
Malay
Nope... $54\times5=270$
There must be some sort of method we are missing, surely a teacher wouldn't make you guess and pick.
8. Originally Posted by Quick
Nope... $54\times5=270$
There must be some sort of method we are missing, surely a teacher wouldn't make you guess and pick.
980 is the other 9 fender with 0,1,2,4,5,6,7,8,9
|
2016-02-06 03:34:51
|
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 2, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.7540228962898254, "perplexity": 6243.209448143446}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2016-07/segments/1454701145751.1/warc/CC-MAIN-20160205193905-00335-ip-10-236-182-209.ec2.internal.warc.gz"}
|
https://courses.lumenlearning.com/intermediatealgebra/chapter/read-or-watch-complex-rational-expressions-2/
|
Complex Rational Expressions
Learning Outcomes
• Simplify complex rational expressions
Fractions and rational expressions can be interpreted as quotients. When both the dividend (numerator) and divisor (denominator) include fractions or rational expressions, you have something more complex than usual. Do not fear—you have all the tools you need to simplify these quotients!
A complex fraction is the quotient of two fractions. These complex fractions are never considered to be in simplest form, but they can always be simplified using division of fractions. Remember, to divide fractions, you multiply by the reciprocal.
Before you multiply the numbers, it is often helpful to factor the fractions. You can then cancel factors.
Example
Simplify.
$\displaystyle\dfrac{\,\frac{12}{35}\,}{\,\frac{6}{7}\,}$
If two fractions appear in the numerator or denominator (or both), first combine them. Then simplify the quotient as shown above.
Example
Simplify.
$\displaystyle\Large \frac{\,\frac{3}{4}+\frac{1}{2}\,}{\,\frac{4}{5}-\frac{1}{10}\,}$
In the following video, we will show a couple more examples of how to simplify complex fractions.
Complex Rational Expressions
A complex rational expression is a quotient with rational expressions in the dividend, divisor, or in both. Simplify these in the exact same way as you would a complex fraction.
Example
Simplify.
$\displaystyle\Large \frac{\,\,\frac{x+5}{{{x}^{2}}-16}\,}{\,\,\frac{{{x}^{2}}-\,\,25}{x-4}\,}$
In the next video example, we will show that simplifying a complex fraction may require factoring first.
The same ideas can be used when simplifying complex rational expressions that include more than one rational expression in the numerator or denominator. However, there is a shortcut that can be used. Compare these two examples of simplifying a complex fraction.
Example
Simplify.
$\displaystyle\dfrac{\,\,\normalsize1-\dfrac{9}{{{x}^{2}}}\,\,}{\,\,\normalsize1+\dfrac{5}{x}\normalsize+\dfrac{6}{{{x}^{2}}}\,\,}$
Example
Simplify.
$\frac{1-\frac{9}{{{x}^{2}}}}{1+\frac{5}{x}+\frac{6}{{{x}^{2}}}}$
You may find the second method easier to use, but do try both ways to see what you prefer.
In our last example, we show a similar example as the one above.
Summary
Complex rational expressions are quotients with rational expressions in the divisor, dividend, or both. When written in fraction form, they appear to be fractions within a fraction. These can be simplified by first treating the quotient as a division problem. Then you can rewrite the division as multiplication and take the reciprocal of the divisor. Or you can simplify the complex rational expression by multiplying both the numerator and denominator by a denominator common to all rational expressions within the complex expression. This can help simplify the complex expression even faster.
|
2023-03-23 10:39:41
|
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.9121943116188049, "perplexity": 392.7300883877481}, "config": {"markdown_headings": false, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2023-14/segments/1679296945144.17/warc/CC-MAIN-20230323100829-20230323130829-00231.warc.gz"}
|
http://openstudy.com/updates/55bd4b08e4b01850ec7d7bf9
|
## anonymous one year ago x
$$\bf \begin{cases} {\color{brown}{ y}}=-x^2+36\\ {\color{brown}{ y}}=2x+21 \end{cases}\qquad well\qquad \begin{array}{cccllll} {\color{brown}{ y}}&=&{\color{brown}{ y}} \\ \quad \\ -x^2+36&=&2x+21 \end{array}\qquad thus \\ \quad \\ -x^2+36=2x+21\implies 36=x^2+2x+21 \\ \quad \\ 0=x^2+2x+21-36\implies0=x^2+2x-15\qquad \\ \quad \\ notice\qquad \begin{array}{cccllll} x^2&+2x&-15\\ &\uparrow &\uparrow \\ &5-3&5\cdot -3 \end{array} \\ \quad \\ 0=(x+5)(x-3)$$ solve for "x" once you found "x", get "y" by substitution
|
2017-01-23 00:46:13
|
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 1, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.7676124572753906, "perplexity": 11923.922842774615}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2017-04/segments/1484560281659.81/warc/CC-MAIN-20170116095121-00233-ip-10-171-10-70.ec2.internal.warc.gz"}
|