https://rprogramminghelp.xyz/statistics-probability-questions-with-solutions-23194
# Statistics Probability Questions With Solutions
Statistics Probability Questions With Solutions Using Dynamic Analysis

Summary

We provide the tools needed to analyze event-based decision problems and answer the most complex questions in the enterprise. The question asks for a probability distribution with moving points for each user and a number of users to estimate these points. The problem is that, in the case of a daily forecast, there is no such distribution, and a probability of at most 5 with probabilities less than 5 is enough.
The Probability Distribution The number of users in the Bayesian population is proportional to the number of users. In this article we assume that the probability of saying that $n > 0$, with probability $1/n$ greater than $1/2$, is the same as that over the size of the Bayesian population; and the probability of saying that $n < 0$, with probability $1/n$ less than $1/2$, is the same as that over the size of the Bayesian population. In addition, we assume that the probability of saying that $n > 0$ is greater than the number of users. Thus, the probability of saying that $n > 0$ is $1/(n-1) = 1/(n+1)$, and we can define the probability as $20 \cdot 100/2 \cdot 1/(100 \cdot 2 \cdot 2)$. Thus, the number of $500$ users times the number of users over the size of the Bayesian population over the probability of 500 is equal to the number of users over the number of users of 1000 as a function of the number of users. We can easily calculate the Probability Distortion to the Distribution. Simulation is only helpful for solving the system of log-likelihood functions. This is for determining the parameterization to be approximated properly using the Lechner rule, as mentioned above. In addition, the model of the distribution is very simple, and it is much easier to interpret without the use of mathematical or systematic means. Related Work 1. Markov Process for Probability Equivalence Analysis Markov process is a probability measure characterized by the time evolution of the model distribution, with a mathematical interpretation in terms of the transition probabilities.
## Central Tendency
Markov processes are a type of stochastic process that is reversible. Indeed, they are well known to be a powerful family of probability measures. E. B. Markinois and S. Srinivasan presented the existence of two Markov-type processes in terms of the model distribution with reversible transition probability distributions by using time and space concepts. In this article, we show how to derive the distribution relation between those two-line time-integral time evolution models. We use Markov models to determine the time evolution of the distribution of the parameters $p(t)$, where the parameters are initialized with a Markov chain with rate $1/n$. Specifically, we consider the distribution between the parameters of the Markov chain with rate $1/n$ with probability 0.95. In this case we take the probability space between the parameters of the Markov chain as an estimation of the probability of admitting a Markov transition. The probability of this type is known as the rate of transition. We generalize this concept to the time evolution of the distribution between parameters with rate $1/n$.
## Statistic Homework Help
For the next problem, we apply the analysis of (2) to our empirical distribution. 2. Markkogari & Nae

Statistics Probability Questions With Solutions

Thank you! How beautiful that story you wrote! I can't help it, but I'm all psyched now. I love you!! Do you read this blog every day? Yes. Did you know that there are many joys that are involved with you? I promise I'm still trying, but now, after some hard researching, I was able to say that I do believe all within all, and that you need not search yourself to be able to say so, so here it goes! I have no knowledge of this because I was in no way engaged with this blog. As I have said, I am a very social person, so you can find my thoughts of you. Thank you in advance for reading. I can't wait to hear from you. How to do this. Hope it works. I would like to share my thought, and my experience with this blog post. It is so simple to say that I'm a fun blogger to be with. 10 tips to be on your way every day and online. You are a member of this blog! 1.
## Statistic Homework Solver
Avoid repeating – as it doesn’t work if you don’t follow the instructions. 2. You need only your own style. Check out my video posts on how to make a face, even one that tells you what to do. One lesson I made is to just leave it with the little comments. It is very personal for me mainly because I don’t think I can say anything good about anyone else at the moment. As seen most times I don’t know what to do and you have to try and do it in an asynchronous way. I guess you can check out some of my videos. For instance, I tried to post this one once, as a way to make a photo for the pictures I did on camera. 3. The photo is no longer available after 15:59pm, so make sure you return to your website. 4. You need to visit each page to search for your placeID! 5.
## Management Assignment Help
You need to make sure to do two things. Turnstile with your name and type ID. You are not an idiot. If you can't find your place ID, it obviously has nothing to do with you. Unless it is doing anything other than sending a message, which obviously is a tough task even though you are trying to do something such as adding a mail slot or something to your website. If you only have one place, try having more than one place. Sometimes it is only that way that you're not able to find it. If you know what you are doing, make sure that you sign it. Let me know if I am posting the wrong URL. 6. A nice layout: responsive page layout with lots of great images. Now, most web guys are into the design; the page should be responsive, and all you have to do is put all your stuff in there. The funny part about getting good articles is actually finding an article on here about the type of thing you want to post! 🙂 So that's why I was going to post here anyway and you didn't even need to fill it up.
## Assignment Help London
Even if you already have something in there. I am a great liar, I don't care about you for much, so if you do, good! I would encourage you to be right.
https://www.hsfzxjy.site/2019-03-10-correlation-matrix/

Say we have a matrix A of shape N x M, which can be viewed as a collection of N vectors of shape 1 x M. The code below gives us the correlation matrix of A:
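The code block itself did not survive extraction; here is a minimal NumPy sketch that produces such a correlation matrix (the sizes N=5, M=10 are invented for illustration):

```python
import numpy as np

rng = np.random.default_rng(0)
A = rng.standard_normal((5, 10))  # N=5 row vectors, each of shape 1 x M with M=10

# np.corrcoef treats each row as one variable, so this yields the
# N x N matrix of Pearson correlations between the N row vectors.
A_corr = np.corrcoef(A)
print(A_corr.shape)  # (5, 5)
```

This matches the "N vectors of shape 1 x M" view, and `plt.matshow(A_corr)` then visualizes the resulting N x N matrix.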
To visualize it, just use plt.matshow(A_corr).
If N is so large that the figure could not provide a clear insight, we might alternatively use histograms like this:
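The histogram code also did not survive extraction; a minimal NumPy sketch of the idea follows (the matrix sizes and bin count are invented for illustration):

```python
import numpy as np

rng = np.random.default_rng(0)
A = rng.standard_normal((500, 20))
A_corr = np.corrcoef(A)  # 500 x 500: too large to read off a matshow figure

# The diagonal is identically 1, so keep only the off-diagonal entries.
off_diag = A_corr[~np.eye(A_corr.shape[0], dtype=bool)]

# Bucket the pairwise correlations; plt.hist(off_diag, bins=50) draws the same thing.
counts, edges = np.histogram(off_diag, bins=50)
print(counts.sum())  # 249500 = 500 * 499 off-diagonal entries
```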
https://kaizen.itversity.com/shop/all-courses/building-streaming-data-pipelines-using-kafka-and-spark/?add-to-cart=2909
# Building Streaming Data Pipelines – Using Kafka and Spark
~~$54.95~~ $34.95
Let us learn how to build streaming data pipelines using technologies like Logstash, Kafka, Spark Structured Streaming, Spark legacy streaming, HBase and more. We will also cover how to set up a Kafka multi-broker cluster as part of this course.
As part of this course we will explore Kafka in detail while understanding one of the most common use cases of Kafka and Spark – building streaming data pipelines. Following are the technologies we will be using as part of this workshop.
• IDE – IntelliJ
• Programming Language – Scala
• Get messages from web server log files – Kafka Connect
• Channelize data – Kafka (it will be covered extensively)
• Consume, process and save – Spark Streaming using Scala as programming language
• Data store for processed data – HBase
• We will be using our Big Data cluster for the demo where all these technologies are pre-installed.
Here is the flow of the course
• Setup Development Environment to build streaming applications
• Set up everything on a single node (Logstash, HDFS, Spark, Kafka, etc.)
• Overview of Kafka
• Multi-broker/multi-server setup of Kafka
• Overview about Streaming technologies and Spark Streaming
• Overview of NoSQL Databases and HBase
• Development life cycle of HBase application
• Case Study: Kafka at LinkedIn
• Final Demo: Streaming Data Pipelines
http://math.stackexchange.com/questions/195840/lowest-possible-price-before-any-discount

# Lowest possible price before any discount
I am having difficulty solving the following problem
A toy store regularly sells all stock at a discount price of 20% to 40%. If an additional 25% were deducted from the discount price, what would be the lowest possible price of a toy costing $\$16$ before any discount? (ans = $\$7.20$)
How would I solve this problem and what does "If an additional 25% were deducted from the discount price" mean here?
The toy has a label price of $\$16$. Since we're going for the lowest price, we apply the $40\%$ discount. $$\$16.00 \times (1 - 0.40) = \$16.00 \times 0.60 = \$9.60$$ We then apply the additional $25\%$ discount to this intermediate price, $\$9.60$.
$$\$9.60 \times (1 - 0.25) = \$9.60 \times 0.75 = \$7.20$$
A discount price of 20% means that a toy costing \$16 would now cost \$16*(1-0.20) = \$12.80. Similarly, a 40% discount gives \$16*(1-0.40) = \$9.60. So the lowest possible price within the 20%-40% discount range would be \$9.60. This price is what they mean by the "discount price". If you deduct an additional 25 percent from this discount price of \$9.60, you get \$9.60*(1-0.25) = \$7.20.
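The chained-discount arithmetic is easy to sanity-check in a few lines of Python, using the figures from the question:

```python
label_price = 16.00
deepest_discount = 0.40  # the store's range is 20%-40%; take the largest cut
extra_discount = 0.25    # additional deduction applied to the discounted price

discounted = label_price * (1 - deepest_discount)  # 9.60
final = discounted * (1 - extra_discount)          # 7.20
print(round(final, 2))  # 7.2
```

Note that the two discounts compound multiplicatively; they are not added into a single 65% discount.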
https://dmorls.com/page/2/

# Victorian Economists
Usually after Marx (1818-1883), economics books go straight to the next great economic genius: Alfred Marshall (1842-1924) and the establishment of marginalist analysis. Of course Marshall was a noteworthy mathematical genius, but is there anyone in between? I mean, there are 42 years between the Manifesto and the «Principles of Economics» (the ruling book of economics, written by Marshall). Heilbroner noted some interesting people between Marx and Marshall and divided them into two groups: the Victorian Economists and the Underground Economists. Rather than a division in time, it is a division in style.
The Victorian Economists were characterized by their academic love of rigorous mathematical models. They wanted to describe economics just as physicists and mathematicians were describing their fields. Maybe this is the reason why, during this time, Political Economy came to be called simply Economics. Some of these Victorians were:
Johann Heinrich von Thünen (1783-1850), a landowner mathematician who came up with this pearl:
$R=Y(p-c-Fm)$
Where:
R: Rent per unit of land.
Y: Units of commodity per unit of land.
p: Market price per unit of commodity.
c: Production expenses per unit of commodity.
F: Freight rate per unit of commodity per unit of distance.
m: Distance to market.
I just put it here because it is funny to think of a German landowner writing equations. Can someone please make a caricature of this guy? (Put in some pigs, beer and a mustache.)
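To make the formula concrete, here is a small sketch with invented numbers: rent falls linearly with distance from the market and reaches zero at m = (p - c)/F, the outer edge of profitable cultivation.

```python
def thunen_rent(Y, p, c, F, m):
    """Location rent per unit of land at distance m from the market: R = Y(p - c - F*m)."""
    return Y * (p - c - F * m)

# Hypothetical numbers: yield Y=40, price p=3, production cost c=1, freight F=0.05.
# Rent shrinks linearly with distance and hits zero at m = (3 - 1) / 0.05 = 40.
for m in (0, 20, 40):
    print(m, thunen_rent(40, 3, 1, 0.05, m))
```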
Francis Ysidro Edgeworth (1845-1926), a man fascinated with political economy only because it deals with quantities, and anything that deals with quantities can be translated into mathematics. Edgeworth based his analysis on the assumption that «every man is a pleasure machine» (I only say that when I'm drunk). So, building on Jeremy Bentham's principle of human pleasure machines competing for shares of society's stock of pleasure, it could be shown that in a world of perfect competition, each pleasure machine would achieve the highest amount of pleasure that society could ever reach. Edgeworth may have been one of the first economists to treat the discipline as a universal phenomenon, just like physics or mathematics, and not as something human (can I say that this is the beginning of the end?).
Léon Walras (1834-1910), famous for proving that you could deduce the exact prices that would clear the market (supply equals demand in general equilibrium theory) if you had an equation for every single economic good on the market. Now it is up to you to solve these millions of equations. His masterpiece, Elements of Pure Economics, was published in 1877. In the future, Schumpeter would have some words for Walras:
«Walras is … greatest of all economists. His system of economic equilibrium, uniting, as it does, the quality of revolutionary creativeness with the quality of classic synthesis, is the only work by an economist that will stand comparison with the achievements of theoretical physics.»
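Walras's program (one market-clearing equation per good, solved simultaneously) can be sketched for a toy two-good economy; the linear excess-demand coefficients below are invented purely for illustration:

```python
import numpy as np

# Toy economy: excess demand for each good is linear in both prices,
# and markets clear where every excess demand is zero:
#   z1 = 10 - 2*p1 + 1*p2 = 0
#   z2 =  4 + 1*p1 - 3*p2 = 0
A = np.array([[-2.0,  1.0],
              [ 1.0, -3.0]])
b = np.array([-10.0, -4.0])

prices = np.linalg.solve(A, b)
print(prices)  # the market-clearing price vector
```

With millions of goods the system is the same shape, just vastly larger, which is exactly why Walras left the solving "up to you".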
These men were great academics, but is it necessary to be a mathematical genius to talk about economics? Isn't economics a day-to-day issue that we all have to fight? Could a simple man say something about the main economic issues?
Academic economists could and do a great job modeling economics, but is their work really improving our everyday life? Is their rigorous labor paying off? I mean, we are now around two hundred years past Adam Smith and all the great minds that followed him; all those geniuses dedicated to economics, and we still have people dying from hunger every day. So, if we consider economics aimed at the material welfare of all, we are clearly missing some points.
Some guys did believe the same. Some guys did believe that they could say and do something about economics. The academic world largely ignored them. Maybe because they were not part of it. Maybe because these new guys touched some really weak points of the development of economics (none of the academics wanted to change a model on which most of their work rested).
These guys are called The Underground Economists.
Dedicated to The Boss, the only one with the social pressure to read this.
# Owen
Maybe not that popular, but Robert Owen was really a hero in his time. He could be described as one of the first utopian socialists (with Thomas More's permission). The amazing thing about Robert Owen is that he wasn't only utopian, but practical. He transformed a little mill village (New Lanark) into something not far from a utopian society. He changed the lives of hundreds for good! Owen's main contribution to socialist thought was the view that human social behavior is not fixed or absolute, and that human beings have the free will to organize themselves into any kind of society they wish.
In Malthus and Ricardo's days it wasn't hard to understand that gloomy vision of the economy and of life in general. From Heilbroner:
«In 1828, The Lion, a radical magazine of the times, published the incredible history of Robert Blincoe, one of eighty pauper-children sent off to a factory at Lowdham. The boys and girls (they were all about ten years old) were whipped day and night, not only for the slightest fault, but to stimulate their flagging industry. And compared with a factory at Litton where Blincoe was subsequently transferred, conditions at Lowdham were rather humane. At Litton the children scrambled with the pigs for the slops in a trough; they were kicked and punched and sexually abused; and their employer, one Ellice Needham, had the chilling habit of pinching the children's ears until his nails met through the flesh. The foreman of the plant was even worse. He hung Blincoe up by his wrists over a machine so that his knees were bent and then he piled heavy weights on his shoulders. The child and his coworkers were almost naked in the cold of winter and (seemingly a purely gratuitous sadistic flourish) their teeth were filed down!»
Probably this story was exaggerated, but surely inhuman practices were accepted and were no one's business. Even these days, news about slaves appears once in a while in my own country.
Bad practices on the job were not the only problem. Technology was all the rage, and machinery meant the displacement of laboring hands by efficient machines. In 1779 a mob of 8,000 workers attacked a mill and burned it to the ground, because it was taking jobs.
Even Ricardo, who was very respected, admitted that maybe machinery did not always operate to the immediate benefit of the workman. To an observer, the working class was getting out of control, and something had to be done. Repression is the first thought, but not the only one.
In those dark times, one small light shone. That light was New Lanark. And as a good light in the dark, New Lanark was visited by over 20,000 moths who wanted to see the miracle with their own eyes. Tsar Nicholas I of Russia was one of those moths. They all came to see that horrible industrial life was not the only and inevitable social arrangement; some good practices were possible too. Some of these good practices were:
• Workers had two-room houses, and the garbage was neatly piled up awaiting disposal instead of being strewn in filthy disarray.
• Factories: over each employee hung a little cube of wood with a different color painted on each side: black, blue, yellow and white. From lightest to darkest, the colors stood for different grades of performance: white was excellent, yellow good, blue indifferent, black bad. At a glance, the factory manager could judge the performance of his workforce.
• There were no children under ten or eleven in the factories. Those that did work did so for only $10 \frac{3}{4}$ hours per day (the norm was 16). Most important, they were not punished; discipline seemed to be wielded by benignity rather than fear.
• The factory manager was available for objections to any rule or regulation, or to a bad cube rating (just like a good school or university).
• Little children, instead of being in the street on their own, played in schoolhouses. The small ones were learning the names of plants, animals and trees. Older boys were learning grammar. Regularly, children gathered to sing and dance under the young ladies' supervision. The young ladies were instructed that no child's question was ever to go unanswered, that no child was ever bad without reason, that punishment was never to be inflicted, and that children would learn faster from the power of example than from admonition.
Besides all these marvels, New Lanark was profitable. So this town was not run merely by a saint, but by a business saint: Robert Owen, the «benevolent Mr. Owen of New Lanark». A man who was born poor and made a fortune as a capitalist. From capitalist to opponent of private property. From advocating benevolence (because it pays dividends) to urging the abolition of money. So take your time if you want to classify him; you will need it.
So first Mr. Owen was an entrepreneur (a successful one), then, as a capitalist, a philanthropist. When he ran out of money, he became a social leader. Most important, he was able to build his dreamed-of society, and it did work. At least once.
The Napoleonic wars threatened a general glut. To avoid the coming misery, the Dukes of York and Kent and other respectable people formed a committee to look for solutions to the arriving glut. They called on Owen to present his views. He didn't come with just that; he came with the blueprints for a new society: Villages of Cooperation.
For Owen, the problem was that paupers became unproductive in general gluts, so the solution was to make them productive. Paupers could become producers of wealth if they were given a chance to work, and their deplorable social habits could easily be transformed into virtuous ones under the influence of a decent environment. Why would anyone believe that paupers were not able to produce wealth, given the resources? I mean, being a pauper is not an illness. Owen knew they were people, just like everybody else.
Villages of Cooperation were a structure for making people productive. Owen proposed their way of living. From Heilbroner:
The families were to live in houses grouped in parallelograms, with each family in a private apartment but sharing common sitting rooms and reading rooms and kitchens. Children over the age of three were to be boarded separately so that they could be exposed to the kind of education that would best mold their characters for later life. Around the school were gardens to be tended by slightly older children, and around them in turn would stretch out the fields where crops would be grown. In the distance, away from the living areas, would be a factory unit; in effect this would be a planned garden city, a kibbutz, a commune.
The committee thanked Mr. Owen for his plan, and his ideas were carefully ignored. Laissez faire was the beauty queen, and as for the planned economy, well, no one seemed to care. But passiveness was not an option for Owen. He sold his interests in New Lanark and set about building his own community of the future. He chose the place where dreams come true, where the grass is green and the girls are pretty: America (North America, please), Indiana. Its name: New Harmony.
New Harmony was a disaster (maybe it wasn't so easy to run a community without the strong support of a stable business, as New Lanark had with its own prosperous mill). After losing four fifths of his fortune in New Harmony, Mr. Owen went back to England to participate actively in leading a new section of the country: the working classes. Indeed, he started the English working-class movement under the name of The Grand National Moral Union of the Productive and Useful Classes. Some marketing genius changed the name to just Grand National. The Grand National gathered 500,000 members. It was huge!
The Grand National was a fiasco too. It appears that England was as prepared for a national trade union as the US was for a community paradise. Local unions could not control their members, and local strikes prospered. The Grand National only lasted two years.
So, who was Robert Owen? He was not only an economist, but an economic innovator who wanted to change the world (and he did, a bit). While others wrote, he went ahead and tried to change it.
Mr. Owen, my greatest respect to you.
# Malthus
We could consider faith in the invisible hand an optimistic view: if society acts based on individual choices, everything will be OK. But is there a pessimistic view? Of course! And it is supplied by Thomas Robert Malthus, better known as poor Malthus, the first professional economist.
Thomas Malthus was the son of Daniel Malthus, an eccentric old gentleman who enjoyed discussing utopian and optimistic views of the future. Daniel Malthus found a sparring partner in none other than his son, Thomas Robert Malthus, who stood at the opposite side of the optimistic utopian views. Let's call Thomas what he was: a party pooper.
According to the party pooper, the basic problem with society was that too many people lived in it and there was a lack of food for all of them. Even worse, things were going to get worse with time: population would grow at a geometric ratio and food only at an arithmetic one.
Thomas wrote down his ideas trying to convince his father of the not-so-bright future. Daniel was so impressed with the brightness and clarity of his son's ideas that he insisted on publishing them in an anonymous treatise called An Essay on the Principle of Population as It Affects the Future Improvement of Society. In that essay Thomas postulated that there was a tendency in nature for population to outstrip all possible means of subsistence. Instead of ascending to a higher standard of living, society was caught in a trap in which the human reproductive urge would inevitably shove humanity to the precipice of existence. Even though he wasn't the first one to notice (B. Franklin and J.S. Mill published previous essays pointing to the problem of too many people), Malthus used strong phrases and images that made him well known.
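The gap between the two ratios is easy to make concrete (the starting values and increments below are arbitrary illustrations, not Malthus's own figures):

```python
# Malthus's two ratios over six periods: population doubles each period
# (geometric), while food grows by a fixed increment (arithmetic).
periods = range(6)
population = [2**t for t in periods]  # 1, 2, 4, 8, 16, 32
food = [1 + t for t in periods]       # 1, 2, 3, 4, 5, 6

# Food per head shrinks toward zero as the gap widens.
per_head = [f / p for f, p in zip(food, population)]
print(per_head[-1])  # 0.1875, down from 1.0 at the start
```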
An example of the strong ideas: what could save us from geometric ratios of growth? Preventive and positive checks. By preventive he meant delaying parenthood (not that bad). By positive he meant war, famine and plagues (not that good either; not positive at all). In Malthus's words, there is no more evil in the world than what is absolutely necessary.
But those solutions weren't final. They were just weak forces against the giant power of reproduction. Of course moral restraints would not be enough against such an immense power.
Let's grant that his scientific interpretation of the data was right, and his eloquence admirable. What happened, then, with the doomed view of the future? I mean, the essay appeared in 1798 and we are still alive and not dying from hunger (at least, in this part of the world). I would hardly call this the precipice of existence. What did Malthus miss in his rigorous calculations? Besides poor data, he missed an important aspect (here is the key): technological improvement. Or, as I prefer to put it, the nonlinearities of human behavior.
The Industrial Revolution started, and with it, new ways to produce far more food at cheaper prices. At the beginning of the eighteenth century, European agricultural productivity was no higher than twenty centuries earlier. But from 1700 to 1800, output per worker doubled in England. In France, despite the effects of revolution and war, output grew by roughly 25% between Malthus's birth and the first edition of An Essay. Several innovations accounted for the leap, including crop rotation, seed selection, better tools, and the use of the horse instead of the ox, which reduced plowing time by nearly 50%.
With that quantity of food, why did we not explode, having more and more children? Why did a higher standard of living not lead to the Malthusian birth spiral? I believe the answer is simple: we changed too. More education and career goals persuaded us to have fewer children. So we changed, and we changed in a way that could not be foreseen from the past. The important thing is that it can happen again. It surely will.
Once in a while, we remember poor Malthus:
• 1972: Donella Meadows presented The Limits to Growth. In this book the data and trends predicted disaster within a hundred years unless preventive measures were taken. Those measures were: immediately stop economic growth, stop population expansion, and recycle resources. The authors even proposed, with hard data, that we are already living in an unsustainable way, and have been since 1980.
• 1973: Robert McNamara, president of the World Bank, compared the population explosion to the threat of nuclear war. (Malthus surely would have counted nuclear war as a positive check.)
• 1974: Robert Heilbroner published An Inquiry into the Human Prospect, in which he concluded that resources could not keep up with industrial demand.
• 1980: the State Department and the Council on Environmental Quality released the Global 2000 Report, proclaiming: if present trends continue, the world in 2000 will be more crowded, more polluted, less stable ecologically, and more vulnerable to disruption than the world we live in now.
Were those authors just the ghost of poor Malthus trying to gain popularity? Or are those threats really going to happen (if they are not already happening)? Let's pray they don't.
Maybe more important than his doom prophecy is his scientific approach (which even he failed to live up to). In Malthus's words:
The principal cause of error, and differences which prevail at present among the scientific writers on political economy, appears to me to be, a precipitate attempt to simplify and generalize…[and not to] sufficiently try their theories by a reference to that enlarged and comprehensive experience, which, on so complicated a subject, can alone establish their truth and utility.
Malthus has been wrong, for a while, and that’s good for us, for a while.
# Karl Marx
Whereas the utopians believed that people must be persuaded one person at a time to join the socialist movement, Marx believed that people would tend to act in accordance with their own economic interests. So no persuasion was needed. This belief is known as historical materialism, an argument which holds that the world is changed not by good wishes and ideas but by actual physical, material activity and practice. Thus, appealing to the working class's material interests would be the best way to mobilize them to make a revolution and change society. Sounds like a very good plan. The best thing about the plan is that it was, according to Marx, inevitable; capitalism would fall by its own weight. Moreover, not just any kind of capitalism; Marx states that even perfect capitalism (modeled capitalism) falls, and consequently, so do all others.
This was said by a man who dedicated almost 20 years of his life to going to a library to study all there was to be known about economics. If that does not impress you, knowing that four of his children died because of the poverty he lived in during that hard study should. And if passion is not enough for you, the almost 2,500 pages of cold analysis of capitalism in the four volumes of Das Kapital should at least make you respect him.
So what could be so powerful as to dedicate your life to it? I don't know, but Marx gives you a hint: «philosophers have only interpreted the world in various ways; the point is to change it». But why change society? Maybe Marx didn't even question that; there were a lot of people unhappy with the social arrangement of those times (see more in Owen). John Stuart Mill (see more in Mill) characterized the French government as «wholly without the spirit of improvement and… wrought almost exclusively through the meaner and more selfish impulses of mankind». Nicholas I (despite the Tsar's one-time visit to Robert Owen's New Lanark) was characterized by the historian Tocqueville as «the cornerstone of despotism in Europe». Industrial workers realized that for all their work, they weren't receiving enough compensation. First they were frustrated; then they became angry. Revolution was in the air. Change was no longer an option; it was the only way. 1848 was the year of terror for the old order in Europe.
Had the despair been channeled and directed, it might have become a true revolution. But it was spontaneous, undisciplined and aimless; the crowds won initial victories, and then, while they were wondering what to do next, the old order slapped them back into place. The revolutionary fervor abated and was crushed. In Paris, 10,000 people in the mobs were killed by the National Guard. In Belgium, the country decided that it was better to keep the king, and the king acknowledged it by abolishing the right of assembly.
The revolution was over, but not for a few: the Communist League, a group of communists that counted Karl Marx and Friedrich Engels among its members. For them, 1848 was only the beginning of a massive change scheduled for the future, with undoubted success. The Communist League commissioned Marx and Engels to turn its ideas into The Communist Manifesto (see more here).
Deeper into the Manifesto you find a philosophy. It even has a name: dialectical materialism.
• Dialectical because it incorporates Hegel's idea of inherent change. Change, according to Hegel, was the rule of life. Every idea, every force, irrepressibly bred its opposite, and the two merged into a «unity» that in turn produced its own contradiction. So there is nothing wrong or right, only constant struggle.
• Materialism because it grounds itself in the real world, not in ideas. As Engels put it in his work Anti-Dühring: «…starts from the principle that production, and with production the exchange of its products, is the basis of every social order; that in every society that has appeared in history the distribution of the products, and with it the division of society into classes or estates, is determined by what is produced and how it is produced, and how the product is exchanged. According to his conception, the ultimate causes of all social changes and political revolutions are to be sought, not in the minds of men, in their increasing insight into eternal truth and justice, but in changes in the mode of production and exchange; they are to be sought not in the philosophy but in the economics of the epoch concerned».
So, whatever the solution to the basic economic problem, society requires a «superstructure» of noneconomic activity and thought. This is not an independent superstructure but one deeply connected with real economic activity. Moreover, this relation means that thought and ideas are products of the environment, even when they aim to change that environment. Here is the constant struggle, the dialectical part: material life shapes ideas, and ideas shape material life in the next period. As Marx put it:
«Men make their own history, but they do not make it just as they please; they do not make it under circumstances chosen by themselves, but under circumstances directly found, given, and transmitted from the past».
The Manifesto wasn't just a cry for revolution, but a philosophy of history in which a communist revolution was not only desirable but inevitable. Unlike the utopians, who wanted to reorganize society closer to their desires, communists did not appeal to men's sympathies and desires. Marx criticized the utopian socialists, arguing that their favored small-scale socialistic communities would be bound for marginalization and poverty, and that only a large-scale change in the economic system could bring about real change.
Communists, on the other hand, appealed to a cold analysis of what the social system would inevitably become: a social system ruled by the proletariat. They had only to wait; they could not lose. And they did wait; seventy years.
Marx contemplated the possibility of Russia's bypassing the capitalist stage of development and building communism on the basis of the common ownership of land characteristic of the village mir. Was Russia what Marx had in mind? Was the U.S.S.R. a government ruled by the proletariat? Was it even a left-wing government (understanding left-wing parties as those that oppose social hierarchy)? Maybe it is not a good idea to bypass development.
The most important impact of Marx and Engels was not their revolutionary activities; those bore little fruit during their own lifetimes. Their most important impact was their vision and philosophy. For Engels, it was clear that private property was not a means for organizing society; for Marx it was even more: capitalism must finally collapse. As he saw it, pure capitalism must collapse, not by boycott but by itself. Marx didn't just believe it; he modeled it and he proved it (at least, within his model of capitalism).
Marx made a complete study of the monster of capitalism and foresaw its death. The good thing was that the giant monster wouldn't need to be killed by an armored knight; it would eventually fall by its own weight. The thing is that history tells us something different: the monster did not fall but became stronger; capitalism evolved into neoliberalism. Reforms inspired by Milton Friedman fed the monster our most precious aspects of life: health, education and pensions.
So, the monster did not fall by its own weight; it became stronger. What happened? Why did it not die, but grow stronger instead? How is it that we are making it even stronger? What are we doing to kill it? Do we really want it to die?
Karl Marx wasn't the inventor of communism, just as Adam Smith wasn't the inventor of capitalism, but they gave deep and well-structured descriptions of the two best-known kinds of economy and social order: the market economy and the planned economy. Is there something else?
Dedicated to «Sepu, el Sepulveda».
# David Ricardo
A successful stock trader who, like Malthus, devastated the optimistic idea of the market as an unfailing way to improve society (basically, Adam Smith's idea). Where Smith saw the world as a concert, Ricardo saw conflict.
While Ricardo and Malthus shared objections against the mighty market, they had different points of view (sometimes opposite ones). Indeed, odd ones considering their careers.
Ricardo, the rich trader, was interested in economic laws (a theoretician) and stood against the rich landlords.
Malthus, the modest academic, was interested in how well the economic laws fitted the real world. He even defended the wealthy landowners.
In Ricardo's view, society was not going up together on an escalator of progress: instead, the escalator worked differently for different social classes. Some advanced to the top effortlessly, while the guys who were making the escalator move did not receive the benefits of their work. Moreover, Ricardo even identified the bad guys who got to the top of the escalator effortlessly: the landlords.
Ricardo identified two groups in the market: the rising industrialists, working hard to get rich, and the landowners, already rich and not working that hard to keep up their aristocratic parties. These two teams fought hard in Parliament. The industrialists wanted cheaper food for their workforce (that meant free trade in crops); not for humanitarian reasons, but to pay the workforce as little as they could. On the other side, the landowners argued for protectionism for their business (no free trade in crops), producing expensive food.
Ricardo took one side, which is clear in his statement: «The interest of the landlords is always opposed to the interest of every other class in the community». Ricardo was in that other class of the community.
To understand Ricardo's position, we must understand Ricardo's basic vision of the economy, which was much more simplified than Smith's. It was a real model: a simplified idea of a complex reality. The main actors of the model were the workers, the industrialists and the landlords.
To Ricardo, the economy was always growing. Let's explain this in the following steps.
1. As the capitalists accumulated, they saved and invested their savings in building new shops and factories.
2. The new shops and factories required new workers (increasing the demand for workers). This boosted wages, temporarily.
3. High wages stopped when workers started to have more kids, whom they had to feed. This increased population.
4. As population expanded, there would be more mouths to feed. That meant more grain was needed, and more grain would demand more fields.
5. As landlords use their best lands first, the new fields would not be as productive as the first ones. Costs would increase (to produce the same quantity per area as a good field, the farmers would have to invest in fertilizers), and grain prices would rise.
6. As capitalists paid just enough to feed their workers, high grain prices would lead to high wages again.
These six steps lead to only one thing: tragedy. The industrialist, the man responsible for progress, gets caught in a double squeeze. First, he has to pay his workers more, since grain is more expensive. So if the new business is not doing that well, high wages will surely strangle the poor industrialist. Secondly, the landlords are earning more, thanks to their good old lands. Since they are earning more, someone is paying more to them. That someone is the industrialist!
So the only class that could come out better in this growing market is the landlords.
What about the workers? They were condemned to subsistence wages, since every time they earned more, they had more children. The industrialists saved and invested, only to find that wages were higher and profits smaller. Meanwhile, the landlord just had to sit back and watch his profits increase.
That is, up until there is a greater market with more good lands and cheaper grain.
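The dynamic in the six steps can be sketched as a toy simulation. All the numbers and functional forms below are my own stylized assumptions, not Ricardo's; the point is only to show the qualitative story: capital hires workers, more workers mean more grain from ever-worse land, so the grain price and the subsistence wage rise, rent grows, and profit is eventually squeezed.

```python
# Toy Ricardian growth model (stylized, illustrative numbers only).
capital = 100.0
OUTPUT_PER_WORKER = 10.0

profits, rents = [], []
for period in range(6):
    workers = capital / 10.0            # steps 1-2: capital hires workers
    grain_price = 1.0 + 0.2 * workers   # steps 3-5: worse land, higher cost
    wage = grain_price                  # step 6: subsistence wage tracks grain
    rent = 0.2 * workers * grain_price  # the landlords' growing share
    profit = workers * (OUTPUT_PER_WORKER - wage) - rent
    profits.append(profit)
    rents.append(rent)
    capital += profit                   # step 1 again: profit is reinvested
    print(f"t={period}: workers={workers:5.1f} grain={grain_price:4.2f} "
          f"rent={rent:5.1f} profit={profit:5.1f}")
```

Running it, rent rises every period, while profit rises at first and then collapses toward zero: the double squeeze in numbers.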
Back in that time, there were the Corn Laws (since 1815). The Corn Laws were trade laws designed to protect UK grain producers from outside producers. The laws granted a monopoly to the farmers. What's the problem with a monopoly? High crop prices. What's the problem with high crop prices? As we have seen, high wages. What's the problem with high wages? Less profit for the capitalist to invest. What's the problem with low profit for the capitalist? Less investment in machines, factories and development. Don't forget we are at the rise of the Industrial Revolution, and the thirst for development and entrepreneurship was getting into the veins of society. In particular, the Corn Laws were against Ricardo's vision of the UK as the World's Workshop. But it is not only a vision; it is a completely logical idea with two purposes in mind: growth and development.
Why should the UK become the World's Workshop? Because Ricardo believed the British were good at manufacturing, and they could do better (become a wealthier country) if they dedicated themselves to what they were good at. Easy to say, but not that intuitive.
The idea is called Ricardo's Law of Comparative Advantage. This is the diamond of Ricardo's theory. Here is a way to explain it:
Imagine him (Adam Smith) espousing his theory and insulting the French by saying, «We don’t like them. They eat frogs. And I had a tedious time in Toulouse. But if they can make wine cheaper than we can, we should toast them and drink their wine. If they cannot make wine cheaply, let’s just snicker at them across the English Channel». A logical, intuitively correct statement.
Ricardo would not snicker at the French, and he would even trade with them!
Let's say there are only two guys on an island. One is an urban Chilean, the other from the countryside (Chilean too, of course). Two tasks must be done on the island: fishing and collecting water.
The countryman can make a fish dinner in 5 hours, and get a gallon of fresh water in 2 hours. The urban guy can make a fish dinner in 10 hours, and get a gallon of fresh water in 5 hours.
Adam Smith's logic would say that the good countryman should move away from the urban guy, since the country guy outperforms the urban guy in everything. But in Ricardo's view, they should work as a team!
To see this, let's calculate how many fish dinners and gallons of fresh water they could produce on their own. Let's say they each work 60 hours per week, dedicating half their time to each activity (30 hours per week making fish dinners, and 30 hours per week collecting fresh water).
If they work separately, they can produce:
Countryman: $\frac{30}{5}=6$ fish dinners and $\frac{30}{2}=15$ gallons of fresh water.
Urban guy: $\frac{30}{10}=3$ fish dinners and $\frac{30}{5}=6$ gallons of fresh water.
So in total there are 9 dinners and 21 gallons of fresh water per week on the island. Let's call that 30 units of production per week.
What if they specialize? What if they work full time in just one task?
Let's say:
Countryman: $\frac{60}{2}=30$ gallons of fresh water.
Urban guy: $\frac{60}{10}=6$ fish dinners.
That is 36 units of production per week on the island: a 20% increase! And that is not even counting that, by specializing, they could get even better at each task.
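The island arithmetic can be checked in a few lines. The summed "units" (dinners plus gallons) follow the post's own bookkeeping; the assignment of tasks follows comparative advantage, i.e. opportunity cost:

```python
# hours[person][task] is the time each person needs to produce one unit.
hours = {
    "countryman": {"fish": 5, "water": 2},
    "urban":      {"fish": 10, "water": 5},
}
WEEK = 60  # working hours per person per week

# Working alone: each splits the week evenly between the two tasks.
alone = sum((WEEK / 2) / hours[p][t] for p in hours for t in ("fish", "water"))

# Opportunity cost of one fish dinner, in gallons of water forgone:
# countryman 5/2 = 2.5 gallons, urban guy 10/5 = 2.0 gallons. The urban guy
# gives up less water per fish, so he fishes and the countryman collects water.
specialized = WEEK / hours["countryman"]["water"] + WEEK / hours["urban"]["fish"]

print(alone)                              # 30.0 units per week
print(specialized)                        # 36.0 units per week
print(round(specialized / alone - 1, 3))  # 0.2 -> the 20% gain
```

Note that even though the countryman is absolutely better at both tasks, both gain by trading, because what matters is the ratio of costs, not their level.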
So, looking just at the rough numbers, the division of tasks has brought more production. Even more, it requires the socialization of the system: the countryman and the urban guy might not get along very well at the beginning, but as they can see, their partnership brings more wealth than working on their own and mumbling bad words at each other.
How different is this from Adam Smith's view? In Smith's model, everybody gradually became better off as the division of labor increased and made society wealthier. But for Ricardo, the only class that could possibly benefit from the progress of society was the landlords, unless their hold on grain prices was broken.
So why aren't we ruled by landlords nowadays? Industrialism saved us: it has put a brake on births and increased our ability to raise food from even very bad lands. Not to mention that free trade has ensured low prices on grain.
What is the most valuable present that Ricardo gave us, then? The powerful tool of modeling. It is to Ricardo's gift for abstraction that we owe the claim of economics to be considered a science.
One final observation: Ricardo saw the workers as a passive class. It was impossible for them to introduce changes to the system, let alone think of a new one. That is about to change, in the next post.
# Adam Smith

Adam Smith never taught a course in economics. In fact, Smith never took a course in economics, and yet he is considered the father of modern economics. His great contribution was proposing that a nation's wealth could be built on individuals' choices. There is an invisible hand acting to make everything work out (by the way, the invisible hand is mentioned no more than three times in the whole Wealth of Nations). Even more, he tells us what increases the wealth of nations: division of labor and free trade.
What I find really majestic about Smith is that everything he proposes comes with limits or advice. He does not command; he advises. For example, regarding the division of labor:
The man whose whole life is spent in performing a few simple operations, of which the effects too are, perhaps, always the same… has no occasion to exert his understanding, or to exercise his invention in finding out expedients for removing difficulties… He naturally, therefore, loses the habit of such exertion, and generally becomes as stupid and ignorant as it is possible for a human creature to become”. He even tells us how to deal with this: public education.
He advises us about abnormally high profits too. Those could persist when small groups of merchants join in pacts to keep prices high. Moreover, he proposes exceptions to free trade: infant industries should receive temporary benefits during the early years of development. So yes, government could play a role in markets. Not a fatal heresy.
Nevertheless, Smith was very clear about the government's role:
1. Provide national defense.
2. Manage justice through a court system.
3. Maintain public institutions and resources such as roads, canals, bridges, educational systems, and the dignity of the sovereign.
Smith argued that government interference in the economy is generally harmful, and that the public interest is best served by competition among private buyers and sellers. He recognized that businessmen love to use politics to help themselves.
Adam Smith did not invent the market, nor did he invent economics, but he taught economics and markets to the world for around 75 years, and even more.
Understanding morals as the way people should act to keep society working, Smith started out as a moralist. He searched for the origin of moral approval or disapproval in his first big book: The Theory of Moral Sentiments.
The Presbytery prosecuted him for spreading the following ”false and dangerous” doctrines:
1. The standard of moral good is promotion of happiness to others.
2. It is possible to know good and evil without knowing God.
To the question, how can a man who is interested chiefly in himself make moral judgements that satisfy other people?, he answered: when people confront moral choices, they imagine an ”impartial spectator” who carefully considers and advises them. Instead of simply following their self-interest, they take the imaginary observer's advice. So people decide on the basis of sympathy, not selfishness. It seems that it is not just selfishness that rules human life, but maybe a noble side too. We are assuming that this noble side does not come from selfishness.
Instead of measuring wealth on the basis of coin and precious metals, Smith believed that real wealth should be gauged by the standard of living of households. So wealth must be measured from the viewpoint of a nation's consumers. This surely comes from his French friends, the Physiocrats, who argued that:
1. Wealth arose from production, not from gold and silver as the mercantilists thought.
2. Only the agricultural enterprise produced wealth.
In An Inquiry into the Nature and Causes of the Wealth of Nations, Smith focuses on one goal: to uncover the causal laws that explain how to achieve wealth. Why? Because he was bored and started to write a book to pass the time in Europe, mainly France. So he was very good at describing his life in those days.
It is a complete change from morals. He saw men as they were, not as they should be. Is that a real difference? Or is it just that behavior had become too complicated to connect with basic rules?
Some of his remarkable hypotheses were:
• ”desire of bettering our condition, a desire which though generally calm and dispassionate, comes with us from the womb, and never leaves us till we go to the grave”.
• ”there is scarce perhaps a single instant in which any man is so perfectly and completely satisfied with his situation, as to be without any wish of alteration or improvement of any kind”.
• ”a certain propensity in human nature… to truck, barter, and exchange one thing for another… it is common to all men”.
Smith suggested that society should exploit these natural drivers: Government should not repress self-interested people, for self-interest is a rich natural resource. People would be fools and nations would be impoverished if they depended on charity and altruism.
Man almost constantly needs help from others, but he would hope in vain to receive it from their benevolence only. He will be more likely to prevail if he can shew them that it is for their own advantage. Hence the famous phrase: ”It is not from the benevolence of the butcher, the brewer, or the baker, that we expect our dinner, but from their regard to their own interest”.
Smith never suggested that people are motivated only by self-interest; he simply stated that self-interest motivates more powerfully and consistently than kindness, altruism, or martyrdom. Put succinctly: society cannot rest its future on the noblest motives, but must use the strongest motives in the best possible way.
Can a community survive without a central planning authority to decide who produces and what is produced? Yes, he argued. Not only will it survive, it will thrive more than any community with central planning.
In The Wealth of Nations, Smith saw labor as the chief engine of economic growth, accelerating when:
1. The labor supply increased.
2. Labor was subdivided.
3. Labor quality rose through new machines.
As long as new ideas for profitable investment and inventions continued to spring from imaginations and free exchange was permitted, economic growth would go forward. That means the general public could enjoy a higher standard of living. Which is very similar to the findings of the Nobel Prize-winning economist Paul Samuelson: ”inventions keep recurring… profit rates and real wage rates average out above their subsistence level”.
So that was Smith: wealth as a function of land, labor, and capital. Free will and free trade.
As weird as it may sound, I feel close to Adam Smith. That is because he started to write The Wealth of Nations just because he found himself getting bored in Europe, just as I did when I started to read about economics. In the words of old Smith: ”I Have begun to write a book in order to pass away the time”.
Moreover, he started as a moralist and ended as an economist. Is there a real difference?
# The Communist Manifesto
These are some notes that summarize the book, and a little bit more.
It starts by defining the main actors of society: «Our epoch, the epoch of the bourgeoisie, possesses, however, this distinctive feature: it has simplified the class antagonisms. Society as a whole is more and more splitting up into two great hostile camps, into two great classes directly facing each other: Bourgeoisie and Proletariat».
For the Bourgeoisie,
• It started to rise with the discovery of America, which boosted the economy with new markets; the Industrial Revolution was itself an effect of the growing demand. The Manifesto states: «bourgeoisie is itself the product of a long course of development, of a series of revolutions in the modes of production and of exchange».
• But the importance of this class lies not just in that, but in its political influence: «The executive of the modern State is but a committee for managing the common affairs of the whole bourgeoisie.» A concept that is particularly strong today, with researchers from Princeton stating that «Multivariate analysis indicates that economic elites and organized groups representing business interests have substantial independent impacts on U.S. government policy, while average citizens and mass-based interest groups have little or no independent influence. The results provide substantial support for theories of Economic Elite Domination and for theories of Biased Pluralism, but not for theories of Majoritarian Electoral Democracy or Majoritarian Pluralism» (Gilens and Page 2014).
• The great power of that little bourgeois class is not just a political problem, but also the pain that comes with it: «The bourgeoisie has stripped of its halo every occupation hitherto honored and looked up to with reverent awe. It has converted the physician, the lawyer, the priest, the poet, the man of science, into its paid wage laborers.»
• If the loss of honor doesn't touch your heart, this should: «the bourgeoisie has torn away from the family its sentimental veil, and has reduced the family relation to a mere money relation».
• But if you’re not easily impressed by drama, the economic idea: «the bourgeoisie cannot exist without constantly revolutionizing the instruments of production, and thereby the relations of production, and with them the whole relations of society«, should definitely take your attention.
• Now, quite a certain prediction from 164 years ago: «The need of a constantly expanding market for its products chases the bourgeoisie over the whole surface of the globe. It must nestle everywhere, establish connexions everywhere…. in place of the old wants, satisfied by the productions of the country, we find new wants, requiring for their satisfaction the products of distant lands and climes. In place of the old local and national seclusion and self-sufficiency, we have intercourse in every direction, universal inter-dependence of nations«. Globalization doesn’t sound that new now.
• If you are worried about the exceptions, known as economic crises, well: «It is enough to mention the commercial crises that by their periodical return put on its trial, each time more threateningly, the existence of the entire bourgeois society». So maybe economic crises are the rule rather than the exception. History might be a help to that claim too.
• And the problem gets bigger and bigger: «It compels all nations, on pain of extinction, to adopt the bourgeois mode of production; it compels them to introduce what it calls civilization into their midst, i.e., to become bourgeois themselves. In one word, it creates a world after its own image». And the problem is that «It has concentrated property in a few hands. The necessary consequence of this was political centralization». So, every time you believe that politics and economics are two very different things, remember that the guy who studied economics for around 20 years said a totally different thing.
• But crises end. How? The Manifesto states: «…how does the bourgeoisie get over these crises? On the one hand by enforced destruction of a mass of productive forces; on the other, by the conquest of new markets, and by the more thorough exploitation of the old ones». But «that is to say, by paving the way for more extensive and more destructive crises, and by diminishing the means whereby crises are prevented». So crises do not really end; they just take breaks while the bourgeoisie still exists. Therefore, to really end crises, the bourgeoisie would have to end too.
• When you’re sick and tired of too much bourgeois, Marx makes you smile: «It (bourgeoisie) is unfit to rule because it is incompetent to assure an existence to its slave within his slavery, because it cannot help letting him sink into such a state, that it has to feed him, instead of being fed by him. Society can no longer live under this bourgeoisie, in other words, its existence is no longer compatible with society«.
• If you don't trust capitalism to commit suicide, there is one more thing to help: «the weapons with which the bourgeoisie felled feudalism to the ground are now turned against the bourgeoisie itself… not only has the bourgeoisie forged the weapons that bring death to itself; it has also called into existence the men who are to wield those weapons - the modern working class - the proletarians».
For the Proletariat,
• They are «labourers, who live only so long as they find work, and who find work only so long as their labour increases capital. These labourers, who must sell themselves piece-meal, are a commodity, like every other article of commerce, and are consequently exposed to all the vicissitudes of competition, to all the fluctuations of the market». As a commodity, «the cost of production of a workman is restricted, almost entirely, to the means of subsistence that he requires for his maintenance».
• You probably don’t identify yourself with the proletariat, but with the middle class (which you probably believe you came from). But that’s just for now; «The lower strata of the middle class – the small tradespeople, shopkeepers, retired tradesmen generally, the handicraftsmen and peasants – all these sink gradually into the proletariat, partly because their diminutive capital does not suffice for the scale on which Modern Industry is carried on, and is swamped in the competition with the large capitalists, partly because their specialized skill is rendered worthless by new methods of production. Thus the proletariat is recruited from all classes of the population». So the proletariat seems only to grow in a capitalist society.
• The proletariat not only grows, but concentrates: «..with the development of industry the proletariat not only increases in number; it becomes concentrated in greater masses, its strength grows, and it feels that strength more».
• The proletariat also develops in a very curious way: admitting part of the bourgeoisie into itself; «The bourgeoisie finds itself involved in a constant battle. At first with the aristocracy; later on, with those portions of the bourgeoisie itself, whose interests have become antagonistic to the progress of industry; at all times, with the bourgeoisie of foreign countries. In all these battles it sees itself compelled to appeal to the proletariat, to ask for its help, and thus, to drag it into the political arena. The bourgeoisie itself, therefore, supplies the proletariat with its own instruments of political and general education, in other words, it furnishes the proletariat with weapons for fighting the bourgeoisie… These also supply the proletariat with fresh elements of enlightenment and progress».
• Indisputably, the proletariat is indispensable; «Of all the classes that stand face to face with the bourgeoisie today, the proletariat alone is a really revolutionary class. The other classes decay and finally disappear in the face of Modern Industry; the proletariat is its special and essential product».
• But why could the lower classes have an option to stand against the ruling classes? Why now? The Manifesto speaks: «All previous historical movements were movements of minorities, or in the interests of minorities. The proletarian movement is the self-conscious, independent movement of the immense majority, in the interests of the immense majority».
What is the relation between proletariat and bourgeoisie? Enemies; «The real fruit of their battles lies, not in the immediate result, but in the ever-expanding union of the workers. This union is helped on by the improved means of communication that are created by modern industry and that place the workers of different localities in contact with one another. It was just this contact that was needed to centralize the numerous local struggles, all of the same character, into one national struggle between classes… thanks to railways, achieved in a few years». Internet, this is your opportunity to be helpful.
What is the future of the proletariat? «..the proletariat will use its political supremacy to wrest, by degrees, all capital from the bourgeoisie, to centralize all instruments of production in the hands of the State, i.e., of the proletariat organized as the ruling class; and to increase the total of productive forces as rapidly as possible». In other words, the proletariat should get power; «Political power, properly so called, is merely the organized power of one class for oppressing another».
So, how does the proletariat get actual power (get to rule)? The Manifesto gives you the recipe:
1. Abolition of property in land and application of all rents of land to public purposes. Not all property; «The distinguishing feature of Communism is not the abolition of property generally, but the abolition of bourgeois property». It even adds: «Communism deprives no man of the power to appropriate the products of society; all that it does is to deprive him of the power to subjugate the labour of others by means of such appropriation».
2. A heavy progressive or graduated income tax.
3. Abolition of all right of inheritance.
4. Confiscation of the property of all emigrants and rebels.
5. Centralization of credit in the hands of the State, by means of a national bank with State capital and an exclusive monopoly.
6. Centralization of the means of communication and transport in the hands of the State.
7. Extension of factories and instruments of production owned by the State; the bringing into cultivation of waste-lands, and the improvement of the soil generally in accordance with a common plan.
8. Equal liability of all to labour. Establishment of industrial armies, especially for agriculture.
9. Combination of agriculture with manufacturing industries; gradual abolition of the distinction between town and country, by a more equable distribution of the population over the country.
10. Free education for all children in public schools. Abolition of children’s factory labour in its present form. Combination of education with industrial production.»
There you go with another Ten Commandments.
Is it all about power and who gets it? No, there is a light at the end: «..if, by means of revolution, it (proletariat) makes itself the ruling class, and, as such, sweeps away by force the old conditions of production, then it will, along with these conditions, have swept away the conditions for the existence of class antagonisms and of classes generally, and will thereby have abolished its own supremacy as a class.
In place of the old bourgeois society, with its classes and class antagonisms, we shall have an association, in which the free development of each is the condition for the free development of all». It is sad that Marx and Engels did not dedicate much effort in the Manifesto to explaining this objective further; why would we go to a place so poorly defined?
For the Communists,
• First, defining Communism should be helpful. The Manifesto summarizes the theory of the Communists in four words: «Abolition of private property». But «Communism deprives no man of the power to appropriate the products of society; all that it does is to deprive him of the power to subjugate the labour of others by means of such appropriation».
• I prefer to define Communists by their objectives: «The immediate aim of the Communists is the same as that of all the other proletarian parties: formation of the proletariat into a class, overthrow of bourgeois supremacy, conquest of political power by the proletariat».
• Have you noticed the particular power of women in communism? In Chile, post-Allende, the main communist figures were (and are) women. The Manifesto has warm words for them: «The bourgeois sees in his wife a mere instrument of production. He hears that the instruments of production are to be exploited in common, and, naturally, can come to no other conclusion than that the lot of being common to all will likewise fall to the women… The Communists have no need to introduce community of women; it has existed almost from time immemorial».
My favorite part of the Manifesto is:
«The essential condition for the existence, and for the sway of the bourgeois class, is the formation and augmentation of capital; the condition for capital is wage-labour. Wage-labour rests exclusively on competition between the labourers. The advance of industry, whose involuntary promoter is the bourgeoisie, replaces the isolation of the labourers, due to competition, by their revolutionary combination, due to association. The development of Modern Industry, therefore, cuts from under its feet the very foundation on which the bourgeoisie produces and appropriates products. What the bourgeoisie, therefore, produces, above all, are its own grave-diggers. Its fall and the victory of the proletariat are equally inevitable». Lovely.
«The Communists disdain to conceal their views and aims… They openly declare that their ends can be attained only by the forcible overthrow of all existing social conditions. Let the ruling classes tremble at a Communist revolution. The proletarians have nothing to lose but their chains. They have a world to win», are the last sentences of the Manifesto. The ruling classes did tremble, and they saw the threat of communism everywhere.
One of the things you realize in the Manifesto is that technological development works only for the bourgeoisie. That is what kills me; most of the work of scientists and engineers goes in favor of the bourgeoisie and not towards making a better life for the majority of people. In the Manifesto’s words: «The unceasing improvement of machinery, ever more rapidly developing, makes their (the proletariat’s) livelihood more and more precarious; the collisions between individual workmen and individual bourgeois take more and more the character of collisions between two classes».
This document is so powerful that, even though it lacks numbers, it was the bible for social changes that affected the life of quite a number of societies.
# John Stuart Mill
You will have a hard time classifying Mill along the political or economic dimension, but surely not in brilliance: he was a genius. That is not so special (almost all of the greatest economists were extremely brilliant), but few started to learn Greek at the age of three. At seven he had read most of Plato’s dialogues. At thirteen he made a complete survey of all there was to be known in political economy. So, undoubtedly, such a nerd! A nerd that went deep enough to discuss the philosophical conflicts underlying classical economics. The ethical foundations of economics and capitalism were discussed.
Usually considered a genius, he probably owes that to his father, James Mill, a close friend of Ricardo and of Jeremy Bentham (the father of utilitarianism). James was the one who pushed John into a seven-days-a-week study plan (so yes, no friends for little Mill). The miracle was not that John Stuart Mill wrote masterpieces; the miracle was that he survived childhood!
Utopian socialists are usually dismissed as dreamers, but the thoroughness of Mill’s thought is not questioned.
«A person chooses to do x just if he believes that by doing it he will gain profit». That was what I learned in my first economics class. That is the basis of the utilitarianism immortalized by Jeremy Bentham. You could see Bentham as the Newton of the moral universe, a moral scientist. Bentham’s model is based on seeing «mankind under the governance of two masters, pain and pleasure… Since all human beings like pleasure and hate pain (masochists notwithstanding, although they prefer pain only because it gives them pleasure), they choose to do that which gives them pleasure». Profit in this case is pleasure minus pain. So when it is time to choose, choose the alternative that maximizes profit (given the restrictions).
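The decision rule just described (pick whatever maximizes pleasure minus pain) can be sketched in a few lines of code. This is only a toy illustration of the rule as stated above; the alternatives and the numbers are invented for the example.

```cpp
#include <algorithm>
#include <cassert>
#include <string>
#include <vector>

// Bentham's rule as a tiny program: each alternative carries a pleasure
// and a pain score; the chosen one maximizes pleasure - pain.
struct Alternative { std::string name; double pleasure; double pain; };

std::string best_choice(const std::vector<Alternative>& options) {
    if (options.empty()) return "";
    auto it = std::max_element(options.begin(), options.end(),
        [](const Alternative& a, const Alternative& b) {
            return a.pleasure - a.pain < b.pleasure - b.pain;
        });
    return it->name;
}
```

For instance, with invented scores, `best_choice({{"work", 5, 4}, {"rest", 3, 1}, {"party", 6, 5}})` picks "rest", whose net profit (2) beats the others (1 each).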
«The greatest happiness for the greatest number» is the cry of the utilitarian movement. Under that lies the assumption that all people count equally when determining happiness, which sounds fair enough. Bentham even devised a method of quantifying pleasure and pain; it’s called the felicific calculus (which really sounds like a lot of free time). Does anything from here give us something useful? Of course!
• In politics, Bentham’s Radicals (Bentham groupies) argued for democracy and free speech. From free speech comes truth, they declared.
• They fought for reducing taxes on periodicals and assembly restrictions.
• They attacked the Corn Laws (entry barriers against foreign grain in the U.K.).
• They argued against punishment in prisons. After all, «a criminal is a person who believes that crime pays». The problem is that criminals miss the long-term pain (the costs).
To utilitarians, god was utility. The invisible hand wasn’t, even if their god usually worked through the invisible hand.
Mill was a fan of Jeremy’s. In utilitarianism he found the scientific precision he was looking for, and it gave him a vision of society. But not for long. Around twenty, Mill realized that rational thought was just not enough. It missed the ultimate goal: happiness. He arrived at a critical point in his life. In his words:
«Suppose that all your objects in life were realized; that all the changes in institutions and opinions which you are looking forward to, could be completely effected at this very instant: would this be a great joy and happiness to you? And an irrepressible self-consciousness distinctly answered, No! At this my heart sank within me: the whole foundation on which my life was constructed fell down. All my happiness was to have been found in the continual pursuit of this end. The end had ceased to charm, and how could there ever again be any interest in the means? I seemed to have nothing left to live for»
Hopeless, he found something that kept him alive: poetry. But flesh cannot live just by reading poetry. After a while he found what he was looking for: love. The nerd fell in love. Harriet Taylor was the girl. As usual, things were not that easy. There was a Mr. Taylor too, but as Disney taught us, love prevails. After twenty years of a «non-sex relationship» (he was such a nerd, so it was very plausible), they got married. Mill always noted the influence of his wife and her daughter on his masterpieces. In the age of reason, Mill longed for passion (amazingly, that is not that crazy: Hume insisted that reason always be the «slave of the passions», and even Bentham introduced reason only as a method of comparing passions, not replacing them).
«Whoever, either now or hereafter, may think of me and of the work I have done, must never forget that it is the product not of one intellect and conscience, but of three»
After that spring break of feelings, Mill returned to Benthamism, not to destroy it, but to improve it. Mill insisted that the greatest happiness depends on more than mere pleasure. For example, art is more than pleasure; it lifts the spirit. Mill enhanced utilitarianism by invoking the Platonic virtues of honor, dignity and self-development. By the way, that is the reason Mill became an advocate of public education: to allow people to enjoy more than worldly pleasures, to enjoy spirit lifters.
His first masterpiece was two long volumes titled «Principles of Political Economy». Besides being a survey of the field, it gave a new perspective that Mill believed to be of monumental importance: economics was all about production, not distribution. For distribution, something else was needed (morals?). Scarcity and the toughness of nature are real things, and the economic rules of behavior which tell us how to maximize the fruits of our labor are as impersonal and absolute as the hard sciences. So economics has nothing to do with distribution.
«Once we have produced wealth as best we can, we can do with it as we like… The distribution of wealth depends on the laws and customs of society. The rules by which it is determined are what the opinions and feelings of the ruling portion of the community make them, and are very different in different ages and countries, and might be still more different, if mankind so chose…».
So, like Robert Owen, Mill thinks that society has the power to make itself into different forms. There is not only one natural solution, then.
If society did not like the so-called «natural» results of its activities, it had only to change them. Society could tax and subsidize, it could expropriate and redistribute. It could give all its wealth to a king, or it could run a gigantic charity ward; it could give due heed to incentives, or it could, at its own risk, ignore them. But whatever it did, there was no «correct» distribution, at least none that economics had any claim to proclaim. There were no «laws» to justify how society shared its fruits: there were only men sharing their wealth as they saw fit.
But the thing is that societies arrange their modes of payment as integral parts of their modes of production: for example, feudal societies did not have «wages», any more than capitalist societies have feudal dues. So production and distribution cannot be neatly separated.
Maybe what John was trying to say is that societies would try to remedy their «natural» workings by imposing their moral values. Fixing the economy with morals. Indeed, the New Deal (by the hand of John Maynard Keynes) or German welfare are kind of Mill’s vision of a society. So the moral nerd really extended his thoughts.
Like the great people we remember (and not the military ones), Mill did not feel good about his surroundings, and thought about that. In his words:
«I am not charmed with an ideal of life held out by those who think that the normal state of human beings is that of struggling to get on; that the trampling, crushing, elbowing, and treading on each other’s heels, which form the existing type of social life, are the most desirable lot of human kind, or anything but the disagreeable symptoms of one of the phases of industrial progress»
So, based on his bad feelings regarding his surroundings, seeing them as a problem, he proposed a solution (a moral one):
«That the energies of mankind should be kept in employment by the struggle for riches, as they were formerly by the struggle for war, until the better minds succeed in educating the others into better things, is undoubtedly better than that they should rust and stagnate. While minds are coarse they require coarse stimuli, and let them have them»
Note that, like Adam Smith, John Stuart Mill saw capitalism as just a phase of human development, not a steady state nor a final solution.
If you are quick and want to paint Mill as a communist, Mill tells you about communism (not Karl Marx’s communism; Mill wasn’t aware of his existence then):
«The question is whether there would be any asylum left for individuality of character; whether public opinion would not be a tyrannical yoke; whether the absolute dependence of each on all, and the surveillance of each by all, would not grind all down into a tame uniformity of thoughts, feelings, and actions… No society in which eccentricity is a matter of reproach can be in a wholesome state.»
Rather than «equality of results», Mill urged «equality of opportunity». If some children inherit huge sums from their parents, they possess an unfair advantage over others. Those with silver spoons may rely on their parents’ wealth rather than create more; an inefficiency.
Mill also wondered how society could give relief to the poor without dissuading them from getting jobs. He proposed that recipients exchange labor for welfare payments (for the physically fit; the handicapped should always receive aid from society). Ignored for decades, the idea resurfaced in 1988, when the U.S. federal government adopted «workfare» programs in which healthy welfare recipients must accept employment or job training. Mill feared that if welfare was too easily doled out, generations of poor people would be born into families lacking a work ethic. He rejected socialist and romantic proposals for raising relief benefits or wages per se.
Where was Mill on the line running from laissez-faire to government intervention? In a good place, around the middle. The goal for government supporters is to show that greater societal happiness requires intervention: «every departure from [laissez-faire], unless required by some great good, is a certain evil».
Different from Malthus, a hopeful Mill thought that the working classes could be educated to understand their Malthusian peril, and that they would regulate their numbers voluntarily. With that pressure removed, Mill’s model took a different turn from Malthus and Ricardo: as before, the tendencies of accumulation would bid up wages, but now that people are aware of the poverty of having too many children, they wouldn’t have too many. Profits would fall and the accumulation of capital would come to an end, reaching a steady state. Now, rather than seeing the steady state as the end of capitalism and economic progress, Mill saw it as the first stage of a benign socialism (which is what Smith said too), where mankind would turn its energies to serious matters such as justice and liberty, and not economic growth per se.
• The state would prevent landlords from reaping unearned benefits.
• The state would tax away inheritances.
• Associations of workmen would displace the organization of enterprises in which men were subordinate to masters.
• By their sheer competitive advantages, the workers cooperatives would win the day.
Capitalism would gradually disappear as former masters sold out to their workingmen and retired on annuities. More than a hundred years have passed, and the steady state is not on the horizon (not here at least). Patience, then.
More than being English at the core (gradualist, optimistic, realistic, and devoid of radical overtones), he was a moralist. When Herbert Spencer, his great rival in philosophy, ran out of money to complete his project, Mill offered to finance it: «I beg that you will not consider this proposal in the light of a personal favor… But it is nothing of the kind, it is a simple proposal of cooperation for an important public purpose, for which you give your labor and have given your health».
That was John Stuart Mill, the last «political economist» and «utopian socialist». For me, a super nerd that fell in love and happily applied his genius to thinking about how to enhance the human condition. For that, he proposed greater wealth, equality, women’s rights and education. Only good things can come out of that. Thanks, Harriet Taylor and daughter, for encouraging Mill’s genius.
Dedicated to Sebastian P., a nerd that is discovering something else beside reason.
# Mechanics
I believe I could summarize my eight years in mechanical engineering (bachelor’s and master’s) in this single post.
Most of our physical world is made of solids and fluids. So, what is a good definition of solids and fluids?
This one:
• Solids: the state of matter in which stress $\sigma$ is a function of the deformation $\epsilon$. We represent this as: $\sigma = f(\epsilon)$.
• Fluids: the state of matter in which stress $\sigma$ is a function of the deformation rate $\dot{\epsilon}$. We represent this as: $\sigma = f(\dot{\epsilon})$.
Best of all, we can model both behaviors with just one equation, the damped motion equation:
$m\ddot{\epsilon}+\lambda\dot{\epsilon}+k\epsilon=0$
Re-arranging:
$m\ddot{\epsilon}=-\lambda\dot{\epsilon}-k\epsilon$
Which means that the stress balances two contributions:

Stress = a term proportional to the rate of deformation (fluid behavior) + a term proportional to the deformation (solid behavior)
So we can model a vast number of physical cases by solving this single equation. Watch me do it for underground mining here using the Discrete Element Method (DEM). By the way, DEM really rocks.
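As a quick numerical illustration of the equation above (a minimal sketch with invented parameters, not the DEM code from the video): integrating $m\ddot{\epsilon}+\lambda\dot{\epsilon}+k\epsilon=0$ with stiffness-dominated parameters gives an oscillatory, solid-like response, while damping-dominated parameters give a monotone, fluid-like relaxation.

```cpp
#include <algorithm>
#include <cassert>
#include <vector>

// Integrate m*e'' + lambda*e' + k*e = 0 with semi-implicit Euler.
// eps is the deformation, v its rate; parameters are illustrative only.
std::vector<double> integrate_strain(double m, double lambda, double k,
                                     double eps = 1.0, double v = 0.0,
                                     double dt = 1e-3, int steps = 5000) {
    std::vector<double> history;
    history.reserve(steps);
    for (int i = 0; i < steps; ++i) {
        double a = -(lambda * v + k * eps) / m; // acceleration from the ODE
        v += a * dt;    // update the rate of deformation
        eps += v * dt;  // update the deformation
        history.push_back(eps);
    }
    return history;
}
```

With `integrate_strain(1.0, 0.1, 50.0)` (stiffness-dominated) the deformation oscillates about zero; with `integrate_strain(1.0, 50.0, 0.1)` (damping-dominated) it relaxes monotonically, the two limiting behaviors the definitions above describe.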
So, the important things to be noticed are:
• Stress depends on deformation and rate of deformation.
• Stress depends on time.
• Stress is not the most fashionable way to start a year :S
https://www.r-bloggers.com/2020/05/covid-19-in-belgium-is-it-over-yet-2/
Introduction
Note 1: The present article has been written on May 22, 2020 and has been updated infrequently. The current situation regarding COVID-19 in Belgium may therefore be different to what is presented below. See my Twitter profile for more frequent updates of the plots.
Note 2: This is a joint work with Prof. Niko Speybroeck, Prof. Catherine Linard, Prof. Simon Dellicour and Angel Rosas-Aguirre.
Belgium recently started to lift the lockdown measures initially imposed to contain the spread of COVID-19. Following this decision taken by the Belgian authorities, we analyze how the situation has evolved so far.
Contrary to a previous article in which I analyzed the outbreak of the Coronavirus in Belgium using the SIR model, in this article we focus on the evolution of the number of:
• patients in hospitals
• patients in intensive care
• new confirmed cases
at the province and national level.
Data is from Sciensano and all plots were created with the {ggplot2} package.
Overall
From the figure above, we see that the rate of hospitalizations continues its decreasing trend in all provinces (and in Belgium as a whole).
Update of October 27, 2020:
The detailed situation in Brabant:
By period
Update of November 16, 2020:
In the first wave, the province of Limburg recorded on average the highest number of COVID19 hospital admissions per million inhabitants. During the second wave, Liège and Hainaut struggled with the highest rates. With two exceptions (Antwerp and Limburg), last month was worse than in March-April. In three provinces (Hainaut, Namur and Liège), the number has more than doubled.
During the period from June 14 to July 15, 2020, the number of COVID19 hospital admissions in Belgium fell to very low relative levels, but we have failed to maintain them. Now that hospital admissions are no longer increasing, we hope that the colors will lighten up again a bit as the end of the year approaches.
Patients in hospitals
Below the evolution of the number of patients in hospitals in Belgium:
We see that, as of October 28, 2020, the number of COVID19 patients in Belgian hospitals reached the peak of the first wave. So although patients stay shorter at the hospital during the second wave compared to the first wave, hospitals are still getting crowded.
Therefore, if the number of patients in hospitals follows the same path in the coming weeks, hospitals will quickly become too crowded and will not be able to accept new patients as their maximum capacity will soon be reached (if this is not already the case…).
Patients in intensive care
Below the evolution of COVID19 patients in intensive care in Belgium, with short-term projections and 99% confidence interval:
Short-term projections indicate what may have happened without the slow-down in transmission. This slow-down is positive news.
The maps show total intensive care patients by province, rescaled as if each province had the Belgian population. The map at the top shows maximum levels in March-April and the map at the bottom shows current levels. The maps indicate high intensive care use due to COVID19. In most Belgian provinces, numbers are still higher today than the March-April peak numbers.
Observations are in line with other preliminary indications, such as trends of COVID19 hospitalizations (currently relatively volatile), indicating that transmission is slowing down:
Confirmed cases
Note that the reported number of new confirmed cases is probably underestimated. This number does not take into account undiagnosed (without or with few symptoms) or untested cases. Therefore, figures with number of cases should be interpreted with extreme caution.
By age group and sex
Static
Below another visualization of the number of cases by age group and sex in Belgium, for three different periods:
This visualization shows the importance to report ages of cases and not just total number.
Moreover, we see that the distribution of cases per week by age group at the beginning of September is similar to that during the summer holidays, but the number of cases per week is higher. The distribution of cases per week by age group at the beginning of September is, however, different from the “first wave” (the period from March 1, 2020 to May 31, 2020). During the first period, the majority of cases were elderly, while at the beginning of September the majority of cases are young people. It would be interesting to see how the distribution of cases by age group evolves during winter.
The figure above may be put in relation with the structure of the Belgian population:
Dynamic
Additionally, these can be seen dynamically:
With an update of the second wave:
By age group, sex and province
Thanks for reading. We hope that these figures will evolve in the right direction. In the meantime, take care and stay safe!
If you would like to be further updated on the evolution of the COVID-19 epidemic, two options:
1. visit the blog from time to time, and
https://codereview.stackexchange.com/questions/124351/setting-and-showing-an-image-file-on-a-qgraphicsview/124357 | # Setting and showing an image file on a QGraphicsView
This code sets and shows an image file on a QGraphicsView.
I'm interested in seeing how to improve readability and reduce redundancy.
void QtReader::addPadding(const int& querybase, const int& answerbase, QString &target){
if (padding){
if (querybase >= 10){
target = "0" + QString::number(answerbase);
}
else if (querybase < 10){
target = "00" + QString::number(answerbase);
}
else{
target = QString::number(answerbase);
}
}
else{
target = QString::number(answerbase);
}
}
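One possible simplification (a hedged sketch, not a drop-in replacement for the posted class): the branches of `addPadding` all reduce to fixed-width, zero-filled number formatting, and the `else` on `querybase` can never be reached once `>= 10` and `< 10` are both handled. In Qt this would be a one-liner, `target = QString("%1").arg(answerbase, 3, 10, QChar('0'));`. The same idea in standard C++, so the snippet compiles without Qt:

```cpp
#include <cassert>
#include <cstdio>
#include <string>

// Zero-pad an integer to width 3, which is what addPadding's branches
// produce for 0-99, while also avoiding a stray leading zero for n >= 100.
std::string pad3(int n) {
    char buf[16];
    std::snprintf(buf, sizeof buf, "%03d", n); // width 3, zero-filled
    return buf;
}
```

For example, `pad3(7)` yields `"007"` and `pad3(42)` yields `"042"`, matching the two reachable branches of the original.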
void QtReader::getArchiveList(const QString& dpath){
ui->comboBox->clear();
dirpath = dpath;
QDir dir(dpath);
QStringList list = dir.entryList(QDir::Dirs|QDir::NoDotAndDotDot);
if (list.count() >= 1){
foreach (QString i, list){
QString n = QFileInfo(i).fileName();
ui->comboBox->addItem(n);
}
}
}
void QtReader::getFileList(const QString& path){
test.clear();
ext << "*.jpg" << "*.png" << "*.bmp";
QString l_path(path);
QDir *dir = new QDir(l_path);
dir->setFilter(QDir::Files|QDir::NoDotAndDotDot);
dir->setNameFilters(ext);
test = dir->entryList(QDir::Files|QDir::NoDotAndDotDot);
max = test.length();
delete dir;
}
void QtReader::setIfExtension(const QString& querybase, const QString& answerbase, QString &target){
if (QFile(querybase + JPG).exists()){
target = answerbase + JPG;
}
else if (QFile(querybase + PNG).exists()) {
target = answerbase + PNG;
}
}
void QtReader::showScene(QGraphicsScene* targetScene, QGraphicsView* target, QString file, const int page){
targetScene->addPixmap(file);
SceneVect.push_back(targetScene);
target->setScene(targetScene);
/*QGraphicsTextItem *pageText = new QGraphicsTextItem;
pageText->setPos(0,10);
if (!file.isEmpty()){
pageText->setPlainText(QString::number(page+1));
}
target->scene()->addItem(pageText);*/
}
void QtReader::determineImage(const int& page, bool mP){
vectDelete(false);
if (max > 0){
gpage = page;
QGraphicsScene* sLeft = new QGraphicsScene(this);
QGraphicsScene* sRight = new QGraphicsScene(this);
QString left, right, pl, pr;
if (mP){
gpage = max;
}
if (padding || test.contains("1.jpg") || test.contains("1.png")){
QString leftbase, rightbase;
if (!mP){
addPadding(gpage + 1, gpage + 1, left);
addPadding(gpage + 2, gpage + 2, right);
leftbase = basepath + left;
rightbase = basepath + right;
}
else if (mP){
addPadding(gpage - 1, gpage - 1, left);
addPadding(gpage, gpage, right);
leftbase = basepath + left;
rightbase = basepath + right;
}
setIfExtension(leftbase, leftbase, pl);
setIfExtension(rightbase, rightbase, pr);
}
else {
if ((page < max) && (!mP)){
pl = basepath + test[page];
if (page + 1 < max){
pr = basepath + test[page + 1];
}
else{
pr.clear();
}
}
else if (mP){
pl = basepath + test[max - 2];
pr = basepath + test[max - 1];
}
}
if (((max % 2) > 0) && mP){
pl = pr;
pr.clear();
gpage = gpage - 1;
}
if (mP || ((gpage + 2) > max)){
setWindowTitle(prefix + QString::number(max) + extension );
}
else if((gpage + 2) <= max){
setWindowTitle(prefix + QString::number(page + 2) + extension );
}
if (this->isFullScreen() && ui->mainToolBar->isVisible() && (page >= max) && (page != 0)){
ui->mainToolBar->hide();
}
showScene(sLeft, ui->vLeft, pl, gpage);
showScene(sRight, ui->vRight, pr, gpage + 1);
leftEmpty = false;
if (!pl.isEmpty()){
rightEmpty = false;
}
saveState(gpage);
ui->vLeft->fitInView(ui->vLeft->sceneRect(), Qt::KeepAspectRatio);
ui->vRight->fitInView(ui->vRight->sceneRect(), Qt::KeepAspectRatio);
}
}
Header File:
private:
Ui::QtReader *ui;
void addPadding(const int& querybase, const int& answerbase, QString &target);
void Clear();
void determineImage(const int& page, bool mP);
void getArchiveList(const QString& dpath);
void getFileList(const QString& path);
void showScene(QGraphicsScene* targetScene, QGraphicsView* target, QString file, const int page);
void setIfExtension(const QString& querybase, const QString& answerbase, QString &target);
bool leftEmpty, rightEmpty, padding;
int gpage, max, v;
QString basepath, dirpath, extension, series, volume;
QStringList ext, list, test;
QVector<QGraphicsScene*> SceneVect;
Variable Declarations:
const QString prefix = "QtReader - [";
const QString JPG = ".jpg";
const QString PNG = ".png";
int gpage = 0;
int max = 0;
int v = 0;
• Hi, could you please edit the question to add a description of what your code does? Take a look at How to Ask – jacwah Mar 31 '16 at 10:08
• Preferably you should give us enough code to compile and run the code you have so far. – Nobody Mar 31 '16 at 10:12
• And what you mean by "improve" ? Readability ? Speed ? Memory ? Security ? – Garf365 Mar 31 '16 at 10:21
• It's not a requirement that you provide everything needed to compile the code. – 200_success Mar 31 '16 at 10:34
• It would be helpful if you also added the header file for this class. Right now there are references to data members in the code without a way to see their type etc. – jacwah Mar 31 '16 at 11:17
## 1 Answer
I'm sorry for being blunt, but your code is hard to read. The main reason for this is that you consistently use arcane names like test, list, ext and v for data members, which leads to confusion as it's really hard to understand what they represent. Instead, use descriptive names, both for data and functions. Think of an outsider who hasn't written the code themselves, how would you explain each variable and function to them? On top of helping others understand, you will also help yourself a couple of months from now.
Sometimes it's okay to use shorter names for variables, like list in getArchiveList. Because the scope is very limited, it doesn't interfere with readability. The problem with list is that it shadows the data member with the same name, again making the code confusing.
Now let's talk about determineImage. I have no idea what this function does, but it's clear that the name doesn't describe it adequately. In fact, it's probably doing too much, as it's quite long. Try splitting it up into smaller functions with names that explain their purpose. In addition, give the local variables better names than pl, pr etc.
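As one sketch of what a rename-and-simplify pass could look like, the branching in addPadding appears intended to zero-pad a page number to a fixed width, which a single formatting helper can express directly. This is an assumption about the intent, shown in plain C++ for self-containment; in Qt the equivalent one-liner would be `QString::number(page).rightJustified(3, '0')`, and the helper name is invented:

```cpp
#include <iomanip>
#include <sstream>
#include <string>

// Hypothetical replacement for addPadding: zero-pad a page number to a
// fixed width instead of branching on the magnitude of the input.
std::string paddedPageName(int page, bool usePadding, int width = 3) {
    if (!usePadding) {
        return std::to_string(page);
    }
    std::ostringstream out;
    out << std::setw(width) << std::setfill('0') << page;
    return out.str();
}
```

With a name like paddedPageName, the call sites in determineImage would read as a statement of intent rather than a puzzle about querybase and answerbase.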
On another note: commented-out code. I consider leaving commented-out lines in your code without explanation a bad practice. Let me quote @nhgrif in this answer:
Source control should help you keep track of code that used to be there, so there's not a real good excuse to leave it there for any historical reason.
Arguably, you might want to leave it in if it's something you're frequently uncommenting for some sort of testing purposes, but if that's the case, perhaps leave a comment above the line, something to the effect of:
// Uncomment the following line to ...
Because you use source control, right?
• I do use source control. I was prototyping something in the function. – Akito Mar 31 '16 at 12:06
http://lists.geodynamics.org/pipermail/cig-commits/2009-March/009137.html

alessia at geodynamics.org
Thu Mar 19 03:11:33 PDT 2009
Author: alessia
Date: 2009-03-19 03:11:30 -0700 (Thu, 19 Mar 2009)
New Revision: 14389
Removed:
Modified:
Log:
Fixed manual
===================================================================
--- seismo/3D/ADJOINT_TOMO/flexwin_paper/latex/AM-allcitations.bib 2009-03-19 04:46:00 UTC (rev 14388)
+++ seismo/3D/ADJOINT_TOMO/flexwin_paper/latex/AM-allcitations.bib 2009-03-19 10:11:30 UTC (rev 14389)
@@ -4182,12 +4182,14 @@
Volume = 250,
Year = 2006}
-@article{MaggiEtal2008,
+@article{MaggiEtal2009,
Author = {Maggi, A. and Tape, C. and Chen, M. and Chao, D. and Tromp, J.},
Journal = gji,
Title = {An automated time-window selection algorithm for seismic tomography},
Volume = XX,
- Year = 2008}
+ Year = 2009,
+ note = {(in press)}
+ }
@article{MJJ99,
Author = {Mahatsente, R. and Jentzsch, G. and Jahr, T.},
===================================================================
--- seismo/3D/ADJOINT_TOMO/flexwin_paper/latex/Makefile 2009-03-19 04:46:00 UTC (rev 14388)
+++ seismo/3D/ADJOINT_TOMO/flexwin_paper/latex/Makefile 2009-03-19 10:11:30 UTC (rev 14389)
@@ -8,13 +8,11 @@
conclusion.tex \
def_base.tex \
discussion.tex \
-figures_manual.tex \
figures_paper.tex \
flexwin_manual.tex \
flexwin_paper.tex \
introduction.tex \
manual_introduction.tex \
-manual_method.tex \
manual_tuning.tex \
manual_technical.tex \
manual_other.tex \
===================================================================
--- seismo/3D/ADJOINT_TOMO/flexwin_paper/latex/acknowledgements.tex 2009-03-19 04:46:00 UTC (rev 14388)
+++ seismo/3D/ADJOINT_TOMO/flexwin_paper/latex/acknowledgements.tex 2009-03-19 10:11:30 UTC (rev 14389)
@@ -8,6 +8,6 @@
Additional global scale data were provided by the GEOSCOPE network.
We thank the Hi-net Data Center (NIED), especially Takuto Maeda and Kazushige Obara, for their help in providing the seismograms used in the Japan examples.
For the southern California examples, we used seismograms from the Southern California Seismic Network, operated by California Institute of Technology and the U.S.G.S.
-The FLEXWIN code makes use of filtering and enveloping algorithms that are part of SAC (Seismic Analysis Code, Lawerence Livermore National Laboratory) provided for free to IRIS members. We thank Brian Savage for adding interfaces to these algorithms in recent SAC distributions.
+The FLEXWIN code makes use of filtering and enveloping algorithms that are part of SAC (Seismic Analysis Code, Lawerence Livermore National Laboratory) provided for free to IRIS members. We thank Brian Savage for adding interfaces to these algorithms in recent SAC distributions. We thank Vala Hjorleifsdottir for her constructive suggestions during the development of the code.
We thank Jeroen Ritsema and an anonymous reviewer for insightful comments that
helped improve the manuscript.
===================================================================
--- seismo/3D/ADJOINT_TOMO/flexwin_paper/latex/figures_manual.tex 2009-03-19 04:46:00 UTC (rev 14388)
+++ seismo/3D/ADJOINT_TOMO/flexwin_paper/latex/figures_manual.tex 2009-03-19 10:11:30 UTC (rev 14389)
@@ -1,300 +0,0 @@
-%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%
-% Table captions
-%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%
-%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%
-% Tables
-%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%
-\newpage
-%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%
-\clearpage
-\begin{table}
-\begin{tabular}{lrrrrrl}
-\hline
-Identifier & Latitude & Longitude & Depth, km & Moment, N m & $M_w$ & Location \\
-\hline
-\multicolumn{7}{c}{Global} \\ \hline
-% CHECK THAT THE MOMENT IS LISTED IN N-M, NOT DYNE-CM
-% CARL HAS FORMULAS TO CONVERT FROM A MOMENT TENSOR TO M0 TO MW
-101895B & 28.06 & 130.18 & 18.5 & 5.68e19 & 7.1 & Ryukyu Islands \\
-050295B & -3.77 & -77.07 & 112.8 & 1.27e19 & 6.7 & Northern Peru \\
-060994A & -13.82 & -67.25 & 647.1 & 2.63e21 & 8.2 & Northern Bolivia \\
-\hline
-\multicolumn{7}{c}{Japan} \\ \hline
-051502B & 24.66 & 121.66 & 22.4 & 1.91e18 & 6.1 & Taiwan \\
-200511211536A & 30.97 & 130.31 & 155.0 & 2.13e18 & 6.2 & Kyuhu, Japan \\
-091502B & 44.77 & 130.04 & 589.4 & 4.24e18 & 6.4 & Northeastern China \\
-\hline
-\multicolumn{7}{c}{Southern California} \\ \hline
-9983429 & 35.01 & -119.14 & 13.5 & 9.19e15 & 4.6 & Wheeler Ridge, California \\
-9818433 & 33.91 & -117.78 & 9.4 & 3.89e15 & 4.3 & Yorba Linda, California \\
-\hline
-\end{tabular}
-\caption{\label{tb:events}
-Example events used in this study. The identifier refers to the CMT catalog for global events and Japan events, and refers to the Southern California Earthquake Data Center catalog for southern California events.
-}
-\end{table}
-
-
-%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%
-\clearpage
-\begin{table}
-\begin{center}
-\begin{tabular}{lccccc}
-\hline
- & Global & \multicolumn{2}{c}{Japan} & \multicolumn{2}{c}{S. California} \\
-\hline
-$T_{0,1}$ & 50, 150 & 24, 120 & 6, 30 & 6, 40 & 2, 40 \\
-$r_{P,A}$ & 3.5, 3.0 & 3.5, 3.0 & 3.5, 3.0 & 3.5, 3.0 & 3.5, 2.5 \\
-$r_0$ & 2.5 & 1.5 & 3.0 & 2.5 & 4.0 \\
-$w_E$ & 0.08 & 0.11 & 0.12 & 0.22 & 0.07 \\
-$CC_0$ & 0.85 & 0.70 & 0.70 & 0.74 & 0.85 \\
-$\Delta\tau_0$ & 15 & 12.0 & 3.0 & 3.0 & 2.0 \\
-$\Delta\ln{A}_0$& 1.0 & 1.0 & 1.0 & 1.5 & 1.0 \\
-\hline
-$c_0$ & 0.7 & 0.7 & 0.7 & 0.7 & 1.0 \\
-$c_1$ & 4.0 & 3.0 & 3.0 & 2.0 & 4.0 \\
-$c_2$ & 0.3 & 0.0 & 1.0 & 0.0 & 0.0 \\
-$c_{3a,b}$ & 1.0, 2.0 & 1.0, 2.0 & 1.0, 2.0 & 3.0, 2.0 & 4.0, 2.5 \\
-$c_{4a,b}$ & 3.0, 10.0 & 3.0, 25.0 & 3.0, 12.0 & 2.5, 12.0 & 2.0, 6.0 \\
-$w_{CC}, w_{\rm len}, w_{\rm nwin}$
- & 1, 1, 1 & 1, 1, 1 & 1, 1, 1 & 1, 0, 0 & 1, 0, 0.5 \\
-\hline
-\end{tabular}
-\caption{\label{tb:example_params}
-Values of standard and fine-tuning parameters for the three seismological
-scenarios discussed in this study.
-}
-\end{center}
-\end{table}
-
-
-
-%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%
-% Figure captions
-%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%
-%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%
-% Figures
-%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%
-\clearpage
-%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%
-\clearpage
-%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%
-%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%
-\clearpage
-%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%
-\clearpage
-%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%
-
-\clearpage
-%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%
-\clearpage
-%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%
-\clearpage
-\begin{figure}
-\center \includegraphics[width=6in]{figures/fig/examples_global.pdf}
-\caption{\label{fg:examples}
-(a)~Window selection results for event 101895B from Table~\ref{tb:events} recorded
-at LBTB ($25.01$\degS, $25.60$\degE, $\Delta=113$\deg, radial component).
-Phases contained within selected windows:
-(1)~$SKS$, (2)~$PS+SP$, (3)~$SS$, (4)~fundamental mode Rayleigh wave (5) unidentified late phase.
-(b)~Body wave ray paths corresponding to data windows in (a).
-(c)~Window selection results for event 060994A from Table~\ref{tb:events} recorded at WUS ($41.20$\degN, $79.22$\degE, $\Delta=140$\deg, transverse component).
-Phases contained within selected windows: (1)~$S_{\rm diff}$, (2)~$sS_{\rm diff}$, (3)~$SS$, (4)~$sSS$ followed by $SSS$, (5)~$sS5+S6$, (6)~$sS6+S7$ followed by $sS7$, (7)~major arc $sS4$, (8)~major arc $sS6$.
-(d)~Body wave ray paths corresponding to data windows in (c).
-}
-\end{figure}
-%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%
-\clearpage
-\begin{figure}
-\center \includegraphics[width=6in]{figures/fig/composites_global.pdf}
-\caption{\label{fg:composites}
-(a)-(c)~Summary plots of windowing results for event 101895B in Table~\ref{tb:events}.
-(a)~Global map showing great-circle paths to stations.
-(b)~Histograms of number of windows as a function of normalised cross-correlation $CC$, time-lag $\tau$ and amplitude ratio $\Delta \ln A$; these give information about systematic trends in time shift and amplitude scaling.
-(c)~Record sections of selected windows for the vertical, radial and transverse components. The filled portions of the each record in the section indicate where windows have been selected by the algorithm.
-(d)-(f)~Summary plots of windowing results for event 060994A in Table~\ref{tb:events}.
-}
-\end{figure}
-
-%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%
-% JAPAN
-%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%
-
-
-%\clearpage
-%\begin{figure}
-%%\center
-%\includegraphics[width=5.7in]{figures/japan/ERM_II_051502B}
-%\caption{\label{fg:ERM_II_051502B}
-%Window selection results for event 051502B from Table~\ref{tb:events} recorded at station ERM ($42.01$\degN, $143.16$\degE, $\Delta=24.83$\deg).
-%(a)~Event and station map: event is 051502B indicated by the beach ball with the
-%CMT focal mechanism, station ERM is marked as red triangles and all the other stations
-%which recorded this event are marked by grey triangles.
-%(b)~Results for station ERM for the period range \trange{24}{120}.
-%Vertical (Z), radial (R), and transverse (T) records of data (black, left column) and synthetics (red, left column), as well as the STA/LTA records (right column) used to produce the window picks.
-%(c)~Results for station ERM for the period range \trange{6}{30}.
-%}
-%\end{figure}
-%\clearpage
-
-
-\begin{figure}
-%\center
-\includegraphics[width=5.7in]{figures/japan/KIS_BO_091502B}
-\caption{\label{fg:KIS_BO_091502B}
-Window selection results for event 091502B from Table~\ref{tb:events} recorded at station KIS ($33.87$\degN, $135.89$\degE, $\Delta=11.79$\deg).
-(a)~Event and station map: event is 091502B indicated by the beach ball with the
-CMT focal mechanism, station KIS is marked as red triangles and all the other stations
-which recorded this event are marked by grey triangles.
-(b)~Results for station KIS for the period range \trange{24}{120}.
-Vertical (Z), radial (R), and transverse (T) records of data (black, left column) and synthetics (red, left column), as well as the STA/LTA records (right column) used to produce the window picks.
-(c)~Results for station KIS for the period range \trange{6}{30}.
-}
-\end{figure}
-
-\clearpage
-\begin{figure}
-%\center
-\includegraphics[width=5.7in]{figures/japan/SHR_BO_200511211536A}
-\caption{\label{fg:SHR_BO_200511211536A}
-Window selection results for event 20051121536A from Table~\ref{tb:events} recorded at station SHR ($44.06$\degN, $144.99$\degE, $\Delta=17.47$\deg).
-(a)~Event and station map: event 20051121536A is indicated by the beach ball with the
-CMT focal mechanism, station SHR is marked as red triangles and all the other stations
-which recorded this event are marked by grey triangles.
-(b)~Results for station SHR for the period range \trange{24}{120}.
-Vertical (Z), radial (R), and transverse (T) records of data (black, left column) and synthetics (red, left column), as well as the STA/LTA records (right column) used to produce the window picks.
-(c)~Results for station SHR for the period range \trange{6}{30}.
-Note that corresponding low-frequency band-passed filtered version (b) has longer record length (800~s).
-}
-\end{figure}
-
-\clearpage
-\begin{figure}
-%\center
-\includegraphics[width=6in]{figures/japan/200511211536A_T06_rs}
-\caption{\label{fg:200511211536A_T06_rs}
-Summary plots of windowing results for event 200511211536A in Table~\ref{tb:events}, for the period range \trange{6}{30}.
-(a)~Map showing paths to each station with at least one measurement window.
-(b)-(d)~Histograms of number of windows as a function of normalised cross-correlation $CC$, time-lag $\tau$ and amplitude ratio $\Delta \ln A$.
-(e)-(g)~Record sections of selected windows for the vertical, radial and transverse components.
-}
-\end{figure}
-
-\clearpage
-\begin{figure}
-%\center
-\includegraphics[width=6in]{figures/japan/200511211536A_T24_rs}
-\caption{\label{fg:200511211536A_T24_rs}
-Summary plots of windowing results for event 200511211536A in Table~\ref{tb:events}, for the period range \trange{24}{120}.
-}
-\end{figure}
-
-\clearpage
-\begin{figure}
-%\center
-\includegraphics[width=6in]{figures/japan/stats_T06}
-\caption{\label{fg:T06_rs}
-Summary statistics of windowing results for events 051502B, 200511211536A and 091502B in Table~\ref{tb:events}, for the period range \trange{6}{30}.
-}
-\end{figure}
-
-
-%
-%\clearpage
-%\begin{figure}
-%%\center
-%\includegraphics[width=6in]{figures/japan/051502B_T06_rs}
-%\caption{\label{fg:051502B_T06_rs}
-%Summary plots of windowing results for event 051502B in Table~\ref{tb:events}, for the period range \trange{6}{30}.
-%Same as Figure~\ref{fg:200511211536A_T06_rs}.
-%}
-%\end{figure}
-%
-%\clearpage
-%\begin{figure}
-%%\center
-%\includegraphics[width=6in]{figures/japan/091502B_T06_rs}
-%\caption{\label{fg:091502B_T06_rs}
-%Summary plots of windowing results for event 091502B in Table~\ref{tb:events}, for the period range \trange{6}{30}.
-%Same as Figure~\ref{fg:200511211536A_T06_rs}.
-%}
-%\end{figure}
-
-%\clearpage
-%\begin{figure}
-%%\center
-%\includegraphics[width=6in]{figures/japan/091502B_T06_rs}
-%\caption{\label{fg:091502B_T06_rs}
-%Summary plots of windowing results for event 051502B in Table~\ref{tb:events},
-%for the period range \trange{6}{30}. Same as Figure~\ref{fg:200511211536A_T06_rs).
-%}
-%\end{figure}
-
-%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%
-% SOUTHERN CALIFORNIA
-%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%
-\clearpage
-\begin{figure}
-%\center
-\includegraphics[width=6in]{figures/socal/9818433_CLC_window.pdf}
-\caption{\label{fg:socal_CLC}
-Window selection results for event 9818433 from Table~\ref{tb:events} recorded at station CLC.
-(a)~Source and station information for event 9818433 and station CLC.
-(b)~Paths to each station with at least one measurement window for the period range \trange{6}{40}.
-There are a total of 341 windows picked within 310 records.
-Triangle denotes station CLC.
-(c)~Paths to each station with at least one measurement window for the period range \trange{2}{40}.
-There are a total of 190 windows picked within 193 records.
-Triangle denotes station CLC.
-(d)~Results for station CLC for the period range \trange{6}{40}.
-Vertical (Z), radial (R), and transverse (T) records of data (black, left column) and synthetics (red, left column), as well as the STA:LTA records (right column) used to produce the window picks.
-(e)~Results for station CLC for the period range \trange{2}{40}.
-Note that corresponding lower-passed filtered versions are shown in (d).
-}
-\end{figure}
-
-\clearpage
-\begin{figure}
-%\center
-\includegraphics[width=6in]{figures/socal/9818433_FMP_window.pdf}
-\caption{\label{fg:socal_FMP}
-Window selection results for event 9818433 from Table~\ref{tb:events} recorded at station FMP.
-Same caption as Figure~\ref{fg:socal_CLC}, only for a different station.
-}
-\end{figure}
-
-\clearpage
-\begin{figure}
-%\center
-\includegraphics[width=6in]{figures/socal/9983429_T06_rs.pdf}
-\caption{\label{fg:socal_rs_T06}
-Summary plots of windowing results for event 9983429 in Table~\ref{tb:events}, for the period range \trange{6}{40}.
-(a)~Map showing paths to each station with at least one measurement window.
-(b)-(d)~Histograms of number of windows as a function of normalised cross-correlation $CC$, time-lag $\tau$ and amplitude ratio $\Delta \ln A$.
-(e)-(g)~Record sections of selected windows for the vertical, radial and transverse components.
-The two branches observed on the vertical and radial components correspond to the body-wave arrivals and the Rayleigh wave arrivals.
-}
-\end{figure}
-
-%\clearpage
-%\begin{figure}
-%%\center
-%\includegraphics[width=7in]{figures/socal/9983429_T02_rs.pdf}
-%\caption{\label{fg:socal_rs_T02}
-%(THIS FIGURE COULD IN THEORY BE CUT OUT, IF SPACE IS SHORT.)
-%Summary plots of windowing results for event 9983429 in Table~\ref{tb:events}, for the period range \trange{2}{40}.
-%Same as Figure~\ref{fg:socal_rs_T06}, only the windowing code has been run using a different set of parameters (Table~\ref{tb:example_params}), so that primarily only the body-wave arrivals are selected.
-%}
-%\end{figure}
-
-
-%\clearpage
-%\begin{figure}
-%%\center
-%Adjoint sources constructed based on the windows picked in Figure~\ref{fg:socal_CLC}d, with the specification of a cross-correlation traveltime measurement. The adjoint sources for this measurement are simply a weighted version of the synthetic velocity traces. The number to the left of each subplot is the $\pm$ height of the $y$-axis. The cross-correlation measurements for traveltime ($\Delta T$) and amplitude ($\Delta \ln A$) are listed above each time window.
-%}
-%\end{figure}
-
-%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%
===================================================================
(Binary files differ)
===================================================================
--- seismo/3D/ADJOINT_TOMO/flexwin_paper/latex/flexwin_manual.tex 2009-03-19 04:46:00 UTC (rev 14388)
+++ seismo/3D/ADJOINT_TOMO/flexwin_paper/latex/flexwin_manual.tex 2009-03-19 10:11:30 UTC (rev 14389)
@@ -6,6 +6,7 @@
\usepackage{setspace}
\usepackage[utf8]{inputenc}
\usepackage{amsmath}
+\usepackage{url}
\usepackage{natbib}
\usepackage[noend]{algorithmic}
@@ -17,7 +18,7 @@
\input{def_base}
\begin{document}
-\title{FLEXWIN: An automated time-window selection algorithm for seismic tomography}
+\title{FLEXWIN User Manual}
\author{Alessia Maggi}
\date{}
\maketitle
===================================================================
(Binary files differ)
===================================================================
--- seismo/3D/ADJOINT_TOMO/flexwin_paper/latex/manual_introduction.tex 2009-03-19 04:46:00 UTC (rev 14388)
+++ seismo/3D/ADJOINT_TOMO/flexwin_paper/latex/manual_introduction.tex 2009-03-19 10:11:30 UTC (rev 14389)
@@ -24,6 +24,6 @@
the algorithm was designed for use in 3D-3D adjoint tomography, its inherent
flexibility should make it useful in many data-selection applications.
-For a detailed introduction to FLEXWIN as applied to seismic tomography, please consult \cite{MaggiEtal2008} ({\tt flexwin/latex/flexwin\_paper.pdf}). If you use FLEXWIN for your own research, please cite \cite{MaggiEtal2008}.
+For a detailed introduction to FLEXWIN as applied to seismic tomography, please consult \cite{MaggiEtal2009}. If you use FLEXWIN for your own research, please cite \cite{MaggiEtal2009}.
===================================================================
--- seismo/3D/ADJOINT_TOMO/flexwin_paper/latex/manual_method.tex 2009-03-19 04:46:00 UTC (rev 14388)
+++ seismo/3D/ADJOINT_TOMO/flexwin_paper/latex/manual_method.tex 2009-03-19 10:11:30 UTC (rev 14389)
@@ -1,521 +0,0 @@
-\chapter{FLEXWIN, the algorithm\label{sec:algorithm}}
-
-FLEXWIN
-operates on pairs of
-observed and synthetic single component seismograms. There is no restriction
-on the type of simulation used to generate the synthetics, though realistic
-Earth models and more complete propagation theories yield waveforms that are more similar to the observed
-seismograms, and thereby allow the definition of measurement windows
-covering more of the available data. The input seismograms can be measures of
-displacement, velocity or acceleration, indifferently. There is no requirement
-for horizontal signals to be rotated into radial and transverse directions.
-
-The window selection process has five phases, each of which is discussed individually
-below: {\em phase 0:} pre-processing; {\em phase A:} definition of preliminary
-measurement windows; {\em phase B:} rejection of preliminary windows based on
-the content of the synthetic seismogram alone; {\em phase C:} rejection of
-preliminary windows based on the differences between observed and synthetic
-seismograms; {\em phase D:} resolution of preliminary window overlaps. The parameters that permit tuning of the
-window selection towards a specific tomographic scenario are all contained in a
-simple parameter file (see Table~\ref{tb:params}). More complexity and finer
-tuning can be obtained by rendering some of these parameters time dependent, via user defined functions that can depend on the source parameters (e.g. event location or depth).
-
-\begin{table}
-\begin{tabular}{lp{0.8\linewidth}}
-\hline
-\multicolumn{2}{l}{Standard tuning parameters:} \\[5pt]
-$T_{0,1}$ & band-pass filter corner periods \\
-$r_{P,A}$ & signal to noise ratios for whole waveform \\
-$r_0(t)$ & signal to noise ratios single windows \\
-$w_E(t)$ & water level on short-term:long-term ratio \\
-$CC_0(t)$ & acceptance level for normalized cross-correlation\\
-$\Delta\tau_0(t)$ & acceptance level for time lag \\
-$\Delta\ln{A}_0(t)$ & acceptance level for amplitude ratio \\
-\hline
-\multicolumn{2}{l}{Fine tuning parameters:} \\ [5pt]
-$c_0$ & for rejection of internal minima \\
-$c_1$ & for rejection of short windows \\
-$c_2$ & for rejection of un-prominent windows \\
-$c_{3a,b}$ & for rejection of multiple distinct arrivals \\
-$c_{4a,b}$ & for curtailing of windows with emergent starts and/or codas \\
-$w_{CC}\quad w_{\rm len}\quad w_{\rm nwin}$ & for selection of best non-overlapping window combination \\
-\hline
-\end{tabular}
-\caption{\label{tb:params}
-Overview of standard tuning parameters, and of fine
-tuning parameters. Values are defined in a parameter file, and the
-time dependence of those that depend on time is described by user-defined functions.
-}
-\end{table}
-
-
-%----------------------
-
-\pagebreak
-\section{Phase 0 -- Pre-processing \label{sec:phase0}}
-%{\em Parameters used: $T_{0,1}$.}
-The purpose of this phase is to pre-process input seismograms, to reject
-noisy records, and to set up a secondary waveform (the short-term / long-term average ratio) derived from the envelope of the synthetic seismogram. This STA:LTA waveform will be used later to define preliminary
-measurement windows.
-
-%----------------------
-
-%\subsubsection{Pre-processing}
-
-We apply minimal and identical pre-processing to both observed and synthetic
-seismograms: band-pass filtering with a non-causal Butterworth
-filter, whose
-short and long period corners we denote as $T_0$ and $T_1$ respectively.
-Values of these corner periods should reflect the information content of the data,
-the quality of the Earth model, and the accuracy of the simulation used to generate the synthetic seismogram.
-All further references to ``seismograms'' in this paper will refer to these filtered waveforms.
-
-%----------------------
-
-%\subsubsection{Seismogram rejection on the basis of noise in observed seismogram}
-
-Our next step is to reject seismograms that are dominated by noise. This rejection is based on two signal-to-noise criteria that compare the power and amplitude of the signal to those of the background noise (given by the observed waveform before the first $P$-wave arrival). The power signal-to-noise ratio is defined as
-${\rm SNR}_P = P_{\rm signal}/P_{\rm noise},$
-where the time-normalized power in the signal and noise portions of the data are defined respectively by
-\begin{align}
-P_{\rm signal} & = \frac{1}{t_E-t_A} \int_{t_A}^{t_E}d^2(t)dt, \\
-P_{\rm noise} & = \frac{1}{t_A-t_0} \int_{t_0}^{t_A}d^2(t)dt, \label{eq:noise}
-\end{align}
-where $d(t)$ denotes the observed seismogram, $t_0$ is its start time, $t_A$ is
-set to be slightly before the time of the first arrival, and $t_E$ is the end
-of the main signal (a good choice for $t_E$ is the end of the dispersed surface
-wave). The amplitude signal-to-noise ratio is defined analogously as
-${\rm SNR}_A = A_{\rm signal}/A_{\rm noise}$,
-where $A_{\rm signal}$ and $A_{\rm noise}$ are the maximum values of $|d(t)|$
-in the signal and noise time-spans respectively. The limits for these two
-signal-to-noise ratios are given by the parameters $r_P$ and $r_A$ in Table~\ref{tb:params}. We reject any record for which
-${\rm SNR}_P < r_P$ or ${\rm SNR}_A < r_A$.
-
-%----------------------
-
-%\subsubsection{Construction of STA:LTA timeseries}
-
-Detection of seismic phase arrivals is routinely performed by automated
-earthquake location algorithms. We have taken a tool used in this
-standard detection process --- the short-term long-term average ratio (STA:LTA)
---- and adapted it to the task of defining time windows around seismic phases. Given a synthetic seismogram $s(t)$, we derive its
-STA:LTA timeseries using an iterative algorithm.
-If we denote the Hilbert transform of the synthetic seismogram by
-$\mathcal{H}[s(t)]$, its envelope $e(t)$ is given by:
-$$
-e(t) = | s(t) + i \mathcal{H}[s(t)] |.
-$$
-In order to create the STA:LTA waveform $E(t)$, we discretize the envelope time
-series with timestep $\delta t$, calculate its short term average
-$S(t_i)$ and its long term average $L(t_i)$ as follows,
-\begin{align}
-S(t_i) & = C_S \; S(t_{i-1}) + e(t_i) \\
-L(t_i) & = C_L \; L(t_{i-1}) + e(t_i) ,
-\end{align}
-and obtain their ratio:
-$E(t_i) = S(t_i)/L(t_i)$.
-The constants $C_S$ and $C_L$ determine the decay of the relative
-weighting of earlier parts of the signal in the calculation of the current
-average. This decay is necessarily longer for the long term average than
-for the short term average, implying that $C_S < C_L < 1$. The choice of these
-constants determines the sensitivity of the STA:LTA timeseries.
-\citet{BaiKennett2001} used a similar timeseries to
-analyse the character of broad-band waveforms, and allowed the constants
-$C_S$ and $C_L$ to depend on the dominant period of the waveform under
-analysis. We have followed their lead in setting
-$$
-C_S = 10^{- \delta t / T_0} \qquad {\rm and} \qquad C_L = 10^{-\delta t / 12 T_0},
-$$
-where the use of $T_0$, the low-pass corner period of our band-pass filter,
-substitutes that of the dominant period.
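The envelope and recursive averages above fit in a few lines. The following Python sketch is illustrative only (FLEXWIN itself is written in Fortran 90): the envelope is computed from the FFT-based analytic signal, and $S$ and $L$ are initialized at their steady-state values for the initial envelope level, a choice we make here to suppress the startup transient.

```python
import numpy as np

def envelope(s):
    """Envelope e(t) = |s(t) + i H[s(t)]| via the analytic signal (FFT)."""
    n = len(s)
    spec = np.fft.fft(s)
    h = np.zeros(n)
    h[0] = 1.0
    if n % 2 == 0:
        h[n // 2] = 1.0
        h[1:n // 2] = 2.0
    else:
        h[1:(n + 1) // 2] = 2.0
    return np.abs(np.fft.ifft(spec * h))

def sta_lta(s, dt, T0):
    """STA:LTA timeseries E(t) from the recursive averages S and L,
    with C_S = 10^(-dt/T0) and C_L = 10^(-dt/12T0)."""
    e = envelope(s)
    CS = 10.0 ** (-dt / T0)
    CL = 10.0 ** (-dt / (12.0 * T0))
    S = np.empty_like(e)
    L = np.empty_like(e)
    # steady-state initialization for the starting envelope level (our choice)
    S[0] = e[0] / (1.0 - CS)
    L[0] = e[0] / (1.0 - CL)
    for i in range(1, len(e)):
        S[i] = CS * S[i - 1] + e[i]
        L[i] = CL * L[i - 1] + e[i]
    return S / L
```

Since $C_S < C_L$ and both averages receive the same envelope samples, $S(t_i) \leq L(t_i)$ throughout, so $E(t) \leq 1$; a wave packet arriving over a quiet background drives $E(t)$ up from its plateau, as described in the text.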
-
-An example of a synthetic seismogram and its corresponding envelope and STA:LTA timeseries $E(t)$ is
-shown in Figure~\ref{fg:stalta}. Before the first arrivals on the synthetic
-seismogram, the $E(t)$ timeseries warms up and rises to a plateau. At each
-successive seismic arrival on the synthetic, $E(t)$ rises to a
-local maximum. We can see from Figure~\ref{fg:stalta} that these local maxima
-correspond both in position and in width to the seismic phases in the
-synthetic, and that the local minima in $E(t)$ correspond to the
-transitions between one phase and the next. In the following sections we shall
-explain how we use these correspondences to define time windows.
-
-\begin{figure}
-\center \includegraphics[width=6in]{figures/050295B.050-150/ABKT_II_LHZ_seis_nowin.pdf}
-\caption{\label{fg:stalta}
-Synthetic seismogram and its corresponding envelope and STA:LTA timeseries.
-The seismogram was calculated using SPECFEM3D and the
-Earth model S20RTS \citep{RitsemaEtal2004} for the CMT catalog event
-050295B, whose details can be found in Table~\ref{tb:events}. The
-station, ABKT, is at an epicentral distance of 14100~km and at an azimuth of 44
-degrees from the event. The top panel shows the vertical component synthetic
-seismogram, filtered between periods of 50 and 150 seconds. The center panel shows its envelope, and the bottom panel
-shows the corresponding STA:LTA waveform. The dashed line overlaid on
-the STA:LTA waveform is the water level $w_E(t)$.
-}
-
-\end{figure}
-%
-
-%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%
-
-\clearpage
-\pagebreak
-\section{Phase A -- Preliminary measurement windows \label{sec:phaseA}}
-%{\em Parameters used: $w_E(t)$}.
-
-The correspondence between local maxima in the STA:LTA waveform $E(t)$ and the
-position of the seismic phases in the synthetic seismogram suggests that we
-should center time windows around these local maxima. The
-correspondence between the local minima in $E(t)$ and the transition between
-successive phases suggests the time windows should start and end at these local
-minima. In the case of complex phases, there may be several local maxima and
-minima within a short time-span. In order to correctly window these complex
-phases, we must determine rules for deciding when adjacent local maxima
-should be part of a single window. From an algorithmic point
-of view, it is simpler to create all possible combinations of adjacent windows
-and subsequently reject the unacceptable ones, than to consider expanding
-small, single-maximum windows into larger ones.
-
-We start by defining a water level on $E(t)$ via the time dependent parameter
-$w_E(t)$ in Table~\ref{tb:params}. The water level shown in
-Figure~\ref{fg:stalta} corresponds to $w_E=0.08$ for the duration of the main
-seismic signal. Typical values for $w_E$ vary between $0.05$ and $0.25$ depending on the seismological scenario and
-the desired sensitivity. Once set for typical seismograms for a given
-seismological scenario, it is not necessary to change $w_E$ for each
-seismogram. This is also true of all the other parameters in
-Table~\ref{tb:params}: once the system has been tuned,
-these parameters remain unchanged and are used for all seismic events in the
-same scenario. The functional forms of the time-dependent parameters are
-defined by the user, can depend on source and receiver parameters such as
-epicentral distance and earthquake depth, and likewise remain unchanged once
-the system has been tuned (see Appendix~\ref{ap:user_fn}).
-For the example in Figure~\ref{fg:stalta}, we have required the water level
-$w_E(t)$ to double after the end of the surface wave arrivals (as defined by
-the epicentral distance and a group velocity of $3.2$~\kmps) so as to avoid
-creating time windows after $R1$. All local maxima that lie above $w_E(t)$
-are used for the creation of candidate time windows.
-
-We take each acceptable local maximum in turn as a seed maximum, and create all
-possible candidate windows that contain it, as illustrated by
-Figure~\ref{fg:win_composite}a. Each candidate window is defined by three times: its
-start time $t_S$, its end time $t_E$ and the time of its seed maximum $t_M$.
-The start and end times correspond to local minima in $E(t)$. The time of the
-seed maximum, $t_M$, plays an important role in many of the window rejection
-criteria that follow. For $N$ local maxima that lie above $w_E(t)$, the number
-of preliminary candidate windows defined in this manner is
-$$
-N_{\rm win} = \sum_{n=1}^N \left[nN - (n-1)^2\right] \sim O(N^3).
-$$
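The window creation step can be sketched as an enumeration over seed maxima. This Python sketch is illustrative (not the Fortran implementation) and assumes that the local maxima and minima of $E(t)$ strictly alternate, so that every seed maximum has at least one minimum on each side; under that assumption the count grows as $O(N^3)$, consistent with the text.

```python
def candidate_windows(maxima, minima):
    """All candidate (t_S, t_M, t_E) triples of Phase A: for each seed
    maximum t_M, pair every local minimum before it (start time) with
    every local minimum after it (end time)."""
    windows = []
    for t_M in maxima:
        starts = [m for m in minima if m < t_M]
        ends = [m for m in minima if m > t_M]
        windows.extend((t_S, t_M, t_E) for t_S in starts for t_E in ends)
    return windows
```

For $N$ maxima interleaved with $N+1$ minima, the $n$th seed contributes $n(N+1-n)$ windows, so the total is cubic in $N$, which is why the rejection stages that follow are needed to prune the list.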
-
-\begin{figure}
-\center \includegraphics[width=6in]{figures/fig/window_composite.pdf}
-\caption{\label{fg:win_composite}
-(a)~Window creation process. The thick black line represents the STA:LTA
-waveform $E(t)$, and the thick horizontal dashed line its water level $w_E(t)$.
-Local maxima are indicated by alternating red and blue dots, windows are
-indicated by two-headed horizontal arrows. The time of the local maximum used
-as the window seed $t_M$ is denoted by the position of the dot. Only windows for the fourth local maximum are shown. (b)~Rejection of candidate windows based on the amplitude of the local minima. The two deep
-local minima indicated by the grey arrows form virtual barriers. All candidate
-windows that cross these barriers are rejected.
-(c)~Rejection of candidate
-windows based on the prominence of the seed maximum. The local maxima
-indicated by the grey arrows are too low compared to the local minima
-adjacent to them. All windows that have these local maxima as their seed are
-rejected (black crosses over the window segments below the time series).
-(d)~Shortening of long coda windows. The grey bar indicates the maximum coda
-duration $c_{4b} T_0$. Note that after the rejection based on prominence represented in (c) and before shortening of long coda windows represented in (d), the algorithm rejects candidate windows based on the separation of distinct phases, a process that is illustrated in Figure~\ref{fg:separation}.
-}
-\end{figure}
-%
-
-%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%
-
-\clearpage
-\pagebreak
-\section{Phase B -- Rejection based on the synthetic \label{sec:phaseB}}
-%{\em Parameters used: $T_0$, $w_E(t)$, $c_{0-4}$.}
-
-After having created a complete suite of candidate time windows in the manner
-described above, we start the rejection process. We reject windows based on
-two sets of criteria concerning respectively the shape of the STA:LTA waveform $E(t)$,
-and the similarity of the observed and synthetic waveforms
-$d(t)$ and $s(t)$ within each window. Here we describe the first set of
-criteria; the second set is described in the following section.
-
-The aim of shape-based window rejection is to retain the set of candidate
-time windows within which the synthetic waveform $s(t)$ contains well-developed seismic phases or groups of phases. The
-four rejection criteria described here are parameterized by the constants
-$c_{0-3}$ in Table~\ref{tb:params}, and are scaled in time by $T_0$ and in
-amplitude by $w_E(t)$. We apply these criteria sequentially.
-
-Firstly, we reject all windows that contain internal local minima of $E(t)$
-whose amplitude is less than $c_0 w_E(t)$. We have seen above that local
-minima of $E(t)$ tend to lie on the transitions between seismic phases. By
-rejecting windows that span deep local minima, we are in fact forcing partition
-of unequivocally distinct seismic phases into separate time windows (see Figure~\ref{fg:win_composite}b).
-Secondly, we reject windows whose length is less than $c_1 T_0$. By
-rejecting short windows, we are requiring that time windows be long enough to
-contain useful information.
-Thirdly, we reject windows whose seed maximum $E(t_M)$ rises by less than
-$c_2 w_E(t)$ above either of its adjacent minima. Subdued local maxima of
-this kind represent minor changes in waveform character, and should not be used
-to anchor time windows. They may, however, be considered as part of a time window with a more prominent maximum (see Figure~\ref{fg:win_composite}c).
-Lastly, we reject windows that contain at least
-one strong phase arrival that is well separated in time from $t_M$. The
-rejection is performed using the following criterion:
-$$
-h/h_M > f(\Delta T/T_0; c_{3a},c_{3b}),
-$$
-where $h_M$ is the height of the seed maximum $E(t_M)$ above the deepest
-minimum between itself and another maximum, $h$ is the height of this other
-maximum above the same minimum, and $f$ is a function of the time
-separation $\Delta T$ between the two maxima (see Figure~\ref{fg:separation}).
-The function $f(\Delta T)$ has the following form:
-$$
-f(\Delta T) =
-\begin{cases}
-c_{3a} & \Delta T/T_0 \leq c_{3b} \\
-c_{3a}\exp{[-(\Delta T/T_0-c_{3b})^2/c_{3b}^2]} & \Delta T/T_0 > c_{3b}.
-\end{cases}
-\label{eq:sep}
-$$
-If we take
-as an example $c_{3a}=1$, this criterion leads to the automatic rejection of
-windows containing a local maximum that is higher than the seed maximum; it also leads to the rejection of windows containing a local maximum that is
-lower than the seed maximum if it is also sufficiently distant in time from
-$t_M$. This criterion allows us to distinguish inseparable phase groups from
-distinct seismic phases that arrive close in time.
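The separation criterion is a direct translation of the equation above. The following Python sketch is illustrative only; the function names are ours, and the default $c_{3a}=c_{3b}=1$ matches the worked example in the text.

```python
import math

def f_sep(x, c3a=1.0, c3b=1.0):
    """Separation threshold f(Delta T / T_0) of the piecewise equation:
    constant c3a up to x = c3b, Gaussian decay beyond."""
    if x <= c3b:
        return c3a
    return c3a * math.exp(-((x - c3b) ** 2) / c3b ** 2)

def separation_reject(h, h_M, dT, T0, c3a=1.0, c3b=1.0):
    """True if the window is rejected: the other maximum (height h above
    the intervening minimum) is too prominent relative to the seed maximum
    (height h_M) given their time separation dT."""
    return h / h_M > f_sep(dT / T0, c3a, c3b)
```

With $c_{3a}=1$, a secondary maximum higher than the seed is always rejected, and a lower one is rejected once its separation pushes the threshold below $h/h_M$.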
-
-The candidate windows that remain after application of these four rejection
-criteria are almost ready to be passed on to the next stage, in which they will
-be evaluated in terms of the similarity between observed and synthetic
-waveforms within the window limits. Special precautions may have to be taken,
-however, in the case of windows that contain long coda waves: the
-details of codas are often poorly matched by synthetic seismogram calculations,
-as they are essentially caused by multiple scattering processes. In order to
-avoid rejecting a nicely fitting phase because of a poorly fitting coda or a
-poorly fitting emergent start, we introduce the $c_4$ tuning parameters, which
-permit shortening of windows starting with monotonically increasing $E(t)$
-or ending with monotonically decreasing $E(t)$.
-These windows are shortened on the left if they start earlier than $c_{4a} T_0$
-before their first local maximum, and on the right if they end later than
-$c_{4b} T_0$ after their last local maximum (see Figure~\ref{fg:win_composite}d).
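The $c_4$ curtailment step reduces to clipping the window limits against its first and last internal local maxima. A minimal Python sketch, with names of our own choosing:

```python
def curtail(t_S, t_E, t_first_max, t_last_max, c4a, c4b, T0):
    """Shorten a window with an emergent start or a long coda: the window
    may start no earlier than c4a*T0 before its first local maximum, and
    may end no later than c4b*T0 after its last local maximum."""
    return (max(t_S, t_first_max - c4a * T0),
            min(t_E, t_last_max + c4b * T0))
```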
-
-Figures~\ref{fg:win_composite} and~\ref{fg:separation} illustrate the shape based
-rejection procedure (Phase B) on a schematic $E(t)$ time series. Each
-successive criterion reduces the number of acceptable candidate windows. A
-similar reduction occurs when this procedure is applied to real $E(t)$ time series, as shown
-by the upper portion of Figure~\ref{fg:win_rej_data}.
-
-\begin{figure}
-\center \includegraphics[width=5in]{figures/fig/window_rejection_separation.pdf}
-\caption{\label{fg:separation}
-Rejection of candidate windows based on the separation of distinct phases.
-(a)~Heights of pairs of local maxima above their intervening minimum.
-(b)~The black line represents $f(\Delta T/T_0)$ from
-equation~(\ref{eq:sep}) with $c_{3a}=c_{3b}=1$. Vertical bars represent
-$h/h_M$ for each pair of maxima. Their position along the horizontal axis is
-given by the time separation $\Delta T$ between the maxima of each pair. The
-color of the bar is given by the color of the seed maximum corresponding to $h_M$. Bars whose height
-exceeds the $f(\Delta T/T_0)$ line represent windows to be rejected.
-(c)~The windows that have been rejected by this criterion are indicated by black
-crosses.
-}
-\end{figure}
-
-\begin{figure}
-\center \includegraphics[width=6in]{figures/fig/window_rejection_global_data.pdf}
-\caption{\label{fg:win_rej_data}
-Window rejection applied to real data.
-Top panel: observed (black) and synthetic (red) seismograms for the 050295B event
-recorded at ABKT (see Figure~\ref{fg:stalta}).
-Subsequent panels: candidate windows at different stages, separated into Phase B (shape based rejection) and
-Phase C (fit based rejection). Each candidate window is indicated by a black
-segment. The number of windows at each stage is shown to the left of the
-panel.
-}
-\end{figure}
-
-
-%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%
-
-\clearpage
-\pagebreak
-\section{Phase C -- Rejection based on seismogram differences \label{sec:phaseC}}
-%{\bf User parameters: $CC_0(t)$, $\Delta\tau_0(t)$, $\Delta\ln{A}_0(t)$}
-
-After having greatly reduced the number of candidate windows by rejection based
-on the shape of the STA:LTA time series $E(t)$, we are now left with a set of
-windows that contain well-developed seismic phases or
-groups of phases on the synthetic seismogram.
-The next stage is to evaluate the degree of similarity between the observed and
-synthetic seismograms within these windows, and to reject
-those that fail basic fit-based criteria. A similar kind of rejection is
-performed by most windowing schemes.
-
-The quantities we use to quantify the quality of the data within a window are
-the signal to noise ratio ${\rm SNR}_W$, the normalised cross-correlation
-value $CC$ between observed and synthetic seismograms, the cross-correlation
-time lag $\Delta \tau$, and the amplitude ratio $\Delta \ln A$. The signal
-to noise ratio for single windows is defined as an amplitude ratio,
-${\rm SNR}_W=A_{\rm window}/A_{\rm noise}$, where $A_{\rm window}$ and
-$A_{\rm noise}$ are the maximum absolute values of the observed seismogram
-$|d(t)|$ in the window and in the noise time-span respectively (see
-equation~\ref{eq:noise}). The cross-correlation value $CC$ is defined as the
-maximum value of the cross-correlation function ${\rm CC}={\rm max}[\Gamma(t^\prime)]$, where
-$$
-\Gamma(t^\prime) = \int s(t-t^\prime)d(t)dt,
-$$
-and
-quantifies the similarity in shape between the $s(t)$ and $d(t)$
-waveforms. The time lag $\Delta \tau$ is defined as the value of $t^\prime$
-at which $\Gamma$ is maximal, and quantifies the delay in time between a
-synthetic and observed phase arrival. The amplitude ratio $\Delta \ln A$ is
-defined as the amplitude ratio between observed and synthetic
-seismograms \citep{DahlenBaig2002}
-$$
-\Delta\ln{A} = \left[ \frac{\int d(t)^2 dt}{\int s(t)^2 dt} \right]^{1/2} - 1.
-\label{eq:dlnA_def}
-$$
-The limits that trigger rejection of windows based on the values of these four
-quantities are the time dependent parameters $r_0(t)$, $CC_0(t)$,
-$\Delta\tau_0(t)$ and $\Delta \ln A_0(t)$ in Table~\ref{tb:params}.
-As for the STA:LTA water level $w_E(t)$ used above, the functional form of
-these parameters is defined by the user, and can depend on source and receiver
-parameters such as epicentral distance and earthquake depth.
-Figure~\ref{fg:criteria} shows the time
-dependence of $CC_0$, $\Delta\tau_0$ and $\Delta\ln A_0$ for an example seismogram.
-
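The three fit measures defined above can be sketched compactly. This Python sketch is illustrative (not the Fortran implementation): it assumes uniform sampling with timestep `dt`, and it normalises $\Gamma$ by the norms of $d$ and $s$, which is our reading of "normalised cross-correlation" (the displayed $\Gamma$ itself is unnormalised).

```python
import numpy as np

def window_fit(d, s, dt):
    """CC, time lag, and amplitude ratio between observed d and synthetic s
    within one window. Positive lag means the observed phase arrives late."""
    gamma = np.correlate(d, s, mode="full")          # Gamma(t') on a lag grid
    norm = np.sqrt(np.dot(d, d) * np.dot(s, s))      # normalisation (assumed)
    cc = gamma.max() / norm
    lag = (int(np.argmax(gamma)) - (len(s) - 1)) * dt
    dlnA = np.sqrt(np.dot(d, d) / np.dot(s, s)) - 1.0
    return cc, lag, dlnA
```

A window would then be accepted when `cc >= CC0`, `abs(lag) <= tau0` and `abs(dlnA) <= dlnA0`, evaluated at the window's seed time $t_M$, mirroring the acceptance conditions that follow in the text.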
-We only accept candidate windows that satisfy all of the following:
-\begin{align}
-{\rm SNR}_W & \geq r_0(t_M), \label{eq:snr_win} \\
-{\rm CC} & \geq {\rm CC}_0(t_M), \label{eq:cc} \\
-|\Delta\tau| & \leq \Delta\tau_0(t_M), \label{eq:tau} \\
-|\Delta\ln{A}| & \leq \Delta\ln{A}_0(t_M), \label{eq:dlnA}
-\end{align}
-where $t_M$ is the time of the window's seed maximum. In words, we only accept
-windows in which the observed signal is above the noise level, the observed and
-synthetic signals are reasonably similar in shape, their arrival-time
-differences are small, and their amplitudes are broadly compatible. When the synthetic and observed
-seismograms are similar, the fit-based criteria of
-equations~(\ref{eq:cc})-(\ref{eq:dlnA}) reject only a few of the candidate data
-windows (see lower portion of Figure~\ref{fg:win_rej_data}). They are
-essential, however, in eliminating problems due to secondary events (natural or
-man-made), diffuse noise sources, or instrumental glitches.
-
-
-\begin{figure}
-\center \includegraphics[width=6in]{figures/050295B.050-150/ABKT_II_LHZ_criteria.pdf}
-\caption{\label{fg:criteria}
-Time dependent fit based criteria
-for the 050295B event recorded at ABKT. The time-dependence of these criteria
-is given by the formulae in Appendix~\ref{ap:user_global}. The lower limit on
-acceptable cross-correlation value, $CC_0$ (solid line), is
-0.85 for most of the duration of the seismogram; it is lowered to 0.75 during
-the approximate surface wave window defined by the group velocities 4.2\kmps\
-and 3.2\kmps, and is raised to 0.95 thereafter. The upper limit on time lag,
-$\Delta\tau_0$ (dotted line), is 21~s for the whole seismogram. The upper limit on amplitude
-ratio, $\Delta \ln A_0$ (dashed line), is 1.0 for most of the seismogram; it is reduced to
-1/3 of this value after the end of the surface waves.
-}
-\end{figure}
-%
-%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%
-\clearpage
-\pagebreak
-\section{Phase D -- Overlap resolution \label{sec:phaseD}}
-%{\em User parameters: $w_{CC}$, $w_{\rm len}$.}
-
-After having rejected candidate data windows that fail any of the shape or
-similarity based criteria described above, we are left with a small number of
-windows, each of which taken singly would be an acceptable time window for
-measurement. As can be seen from Figure~\ref{fg:win_composite}d and the last
-panel of Figure~\ref{fg:win_rej_data}, the remaining windows may
-overlap partially or totally with their neighbours. Such overlaps are
-problematic for automated measurement schemes, as they lead to multiple
-measurements of those features in the seismogram that lie within the overlapping
-portions. Resolving this overlap problem is the last step in the
-windowing process.
-
-Overlap resolution can be seen as a set of choices leading to
-the determination of an optimal set of time windows. What do we mean by
-optimal? For our purposes, an optimal set of time windows contains only windows that
-have passed all previous tests, that do not overlap with other windows in the set,
-and that cover as much of the seismogram as possible. When choosing between
-candidate windows, we favour those within which the
-observed and synthetic seismograms are most similar (high values of $CC$).
-Furthermore, should we have the choice between two short windows and a longer,
-equally well-fitting one covering the same time-span, we may wish to favour
-the longer window as this poses a stronger constraint on the tomographic inversion.
-
-The condition that optimal windows should have passed all previous tests
-removes the straightforward solution of merging overlapping windows. Indeed, given any two
-overlapping windows, we know that the window defined by their merger
-existed in the complete list of candidate windows obtained at the end of
-Phase~A, and that its absence from the current list means it was rejected
-either because of the shape of its $E(t)$ time-series (Phase~B), or because of
-an inadequate similarity between observed and synthetic waveforms (Phase~C).
-It would therefore be meaningless to re-instate such a window at this stage.
-Any modification of current candidate windows would be disallowed by similar
-considerations. We must therefore choose between overlapping
-candidates.
-
-We make this choice by constructing all possible non-overlapping subsets of
-candidate windows, and scoring each subset on three criteria: length of
-seismogram covered by the windows, average cross-correlation value for the windows,
-and total number of windows. These criteria often work against each other. For
-example, a long window may have a lower $CC$ than two shorter ones, if the two
-short ones have different time lags $\Delta\tau$. An optimal weighting of the
-three scores is necessary, and is controlled by the three weighting parameters
-$w_{CC}$, $w_{\rm len}$ and $w_{\rm nwin}$ in Table~\ref{tb:params}.
-
-As can be seen in Figure~\ref{fg:phaseD}, the generation of subsets is
-facilitated by first grouping candidate windows such that no group overlaps
-with any other group. The selection of the optimal subsets can then be
-performed independently within each group. We score each non-overlapping
-subset of windows within a group using the following three metrics:
-\begin{align}
-S_{CC} &= \sum_i^{N_{\rm set}} CC_i / N_{\rm set},\\
-S_{\rm len} &= \left[ \sum_i^{N_{\rm set}} (t^e_i - t^s_i) \right] / \left[ t^e_g - t^s_g \right], \\
-S_{\rm nwin} & = 1 - N_{\rm set}/N_{\rm group},
-\end{align}
-where $CC_i$ is the cross-correlation value of the $i$th window in
-the subset, $N_{\rm set}$ is the number of windows in the subset,
-$N_{\rm group}$ is the number of windows in the group, and $t^s_i$, $t^e_i$,
-$t^s_g$ and $t^e_g$ are respectively the start and end times of the $i$th
-candidate window in the subset, and of the group itself. The three scores
-are combined into one using the weighting parameters:
-$$
-S = \frac{w_{CC}S_{CC}+w_{\rm len}S_{\rm len}+w_{\rm nwin}S_{\rm nwin}}{w_{CC}+w_{\rm len}+w_{\rm nwin}}.
-\label{eq:score}
-$$
-The best subset of candidate windows within each group is the one with the
-highest combined score $S$. The final, optimal set of windows is
-given by concatenating the best subsets of candidate windows for each group.
-Figure~\ref{fg:res_abkt} shows an example of optimal windows selected on real
-data.
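The subset scoring within one group can be sketched by brute-force enumeration. This Python sketch is for illustration only (exhaustive enumeration is exponential in group size); windows that merely share an endpoint are treated as non-overlapping, which is our assumption, and the window/CC inputs are our own convention.

```python
from itertools import combinations

def overlaps(w1, w2):
    """True if two (t_s, t_e) windows overlap (shared endpoints allowed)."""
    return w1[0] < w2[1] and w2[0] < w1[1]

def best_subset(windows, ccs, w_cc=1.0, w_len=1.0, w_nwin=1.0):
    """Pick the non-overlapping subset of (t_s, t_e) windows in one group
    that maximizes the combined score S of the three metrics."""
    tg_s = min(w[0] for w in windows)
    tg_e = max(w[1] for w in windows)
    n_group = len(windows)
    best, best_score = (), -1.0
    for r in range(1, n_group + 1):
        for idx in combinations(range(n_group), r):
            ws = [windows[i] for i in idx]
            if any(overlaps(a, b) for a, b in combinations(ws, 2)):
                continue
            s_cc = sum(ccs[i] for i in idx) / r
            s_len = sum(w[1] - w[0] for w in ws) / (tg_e - tg_s)
            s_nwin = 1.0 - r / n_group
            score = (w_cc * s_cc + w_len * s_len + w_nwin * s_nwin) \
                    / (w_cc + w_len + w_nwin)
            if score > best_score:
                best_score, best = score, idx
    return best, best_score
```

With equal weights, a long window covering the group tends to beat two shorter, slightly better-correlated ones, reflecting the stated preference for longer windows.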
-
-\begin{figure}
-\center \includegraphics[width=5in]{figures/fig/window_overlap.pdf}
-\caption{\label{fg:phaseD}
-The selection of the best non-overlapping window
-combinations. Each grey box represents a distinct group of windows.
-Non-overlapping subsets of windows are shown on separate lines. Only one
-line from within each group will be chosen, the one corresponding to the
-highest score obtained in equation~(\ref{eq:score}). The resulting optimal set
-of data windows is shown by thick arrows.}
-\end{figure}
-%
-%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%
-\begin{figure}
-\center \includegraphics[width=6in]{figures/fig/window_results.pdf}
-\caption{\label{fg:res_abkt}
-Window selection results for event 050295B
-from Table~\ref{tb:events} recorded at ABKT ($37.93$\degN,
-$58.11$\degE, $\Delta=127$\deg, vertical
-component).
-(a)~Top: observed and synthetic seismograms (black and red
-traces); bottom: STA:LTA timeseries $E(t)$. Windows chosen by the algorithm
-are shown using light blue shading. The phases contained in these windows are:
-(1) $PP$, (2) $PS+SP$, (3) $SS$, (4) $SSS$, (5) $S5$, (6) $S6$, (7) fundamental
-mode Rayleigh wave.
-(b)~Ray paths corresponding to the body wave phases present in the data windows.
-}
-\end{figure}
-%
===================================================================
--- seismo/3D/ADJOINT_TOMO/flexwin_paper/latex/manual_other.tex 2009-03-19 04:46:00 UTC (rev 14388)
+++ seismo/3D/ADJOINT_TOMO/flexwin_paper/latex/manual_other.tex 2009-03-19 10:11:30 UTC (rev 14389)
@@ -3,12 +3,16 @@
To report bugs or suggest improvements to the code, please send an email to the CIG Computational Seismology Mailing List (cig-seismo at geodynamics.org) or Alessia Maggi (alessia at sismo.u-strasbg.fr), and/or use our online bug tracking system Roundup (www.geodynamics.org/roundup).
\section{Notes and Acknowledgments}
-[FIXME] The filtering routines used in {\tt seismo\_subs.f90} are based on the SacLib libraries constructed by Brian Savage from the original source code of SAC (developed at Lawrence Livermore). What about SAC licences??
+The main developers of the FLEXWIN source code are Alessia Maggi and Carl Tape. The following individuals (listed in alphabetical order) have also contributed to the development of the source code: Daniel Chao, Min Chen, Vala Hjorleifsdottir, Qinya Liu, Jeroen Tromp. The following individuals (listed in alphabetical order) contributed to this manual: Sue Kientz, Alessia Maggi, Carl Tape.
-The main developers of the FLEXWIN source code are Alessia Maggi and Carl Tape. The following individuals (listed in alphabetical order) have also contributed to the development of the source code: Daniel Chao, Min Chen, Jeroen Tromp. The following individuals (listed in alphabetical order) contributed to this manual: Sue Kientz, Alessia Maggi, Carl Tape \ldots
+The FLEXWIN code makes use of filtering and enveloping algorithms that are part of SAC (Seismic Analysis Code, Lawrence Livermore National Laboratory) provided for free to IRIS members. We thank Brian Savage for adding interfaces to these algorithms in recent SAC distributions.
+We acknowledge support by the National Science Foundation under grant EAR-0711177.
+
+
Any commercial use must be negotiated with the Office of Technology Transfer at
the California Institute of Technology. This software may be subject to U.S.
===================================================================
--- seismo/3D/ADJOINT_TOMO/flexwin_paper/latex/manual_technical.tex 2009-03-19 04:46:00 UTC (rev 14388)
+++ seismo/3D/ADJOINT_TOMO/flexwin_paper/latex/manual_technical.tex 2009-03-19 10:11:30 UTC (rev 14389)
@@ -12,75 +12,29 @@
\end{itemize}
FLEXWIN requires the following libraries external to the package in order to
-compile and run: libsacio.a and libSacLib.a. Eventually, both libraries
-will be distributed by IRIS as part of the SAC package (at the moment only
-libsacio is distributed this way). For the time being, you should compile
-libSacLib.a using the source code in the SacLib directory that accompanies
-flexwin.
+compile and run: {\tt libsacio.a} and {\tt libsac.a}. Both libraries
+are distributed by IRIS as part of the SAC package (version 101.2 and above).
+\url{http://www.iris.edu/software/sac/sac.request.htm}.
+(To check your version, type {\tt sac}.)
-{\bf Important note}: The SacLib directory is a temporary fix. The SAC source code
-from which the SacLib library is compiled is proprietary and should not be
-distributed by anyone other than IRIS. Brian Savage - the author of SacLib -
-is currently working on a new version of the library that will be distributed
-with future versions of SAC. The official release of flexwin will require
\section{Obtaining the code}
-[TODO] Write this better once structure of code (and packages that will be
-delivered) is finalised.
+The code is available as a gzipped tarball from CIG (Computational Infrastructure for Geodynamics, \url{http://www.geodynamics.org}). The tarball is unpacked by typing {\tt tar xvzf flexwin.tgz}.
-The code is available as a gzipped tarball from CIG (Computational Infrastructure for Geodynamics, {\tt http://www.geodynamics.org}). The tarball is unpacked by typing {\tt tar xvzf flexwin.tgz}.
-
The package contains the flexwin code and documentation, as well as a set of
test data, examples of user files for different scenarios, and a set of utility
scripts that may be useful for running flexwin on large datasets.
-The contents of the flexwin directory are as follows:
-{\small
-\begin{verbatim}
-flexwin
-|-- Makefile.in
-|-- PAR_FILE
-|-- TODOs
-|-- configure
-|-- configure.ac
-|-- distaz.f
-|-- flexwin.f90
-|-- io_subs.f90
-|-- latex
-|-- make_gfortran
-|-- make_intel
-|-- make_intel_caltech
-|-- maxima.f90
-|-- measure_windows_xcorr.f90
-|-- measurement_module.f90
-|-- scripts
-|-- seismo_subs.f90
-|-- select_windows_stalta2.f90
-|-- test_data
-|-- travel_times.f90
-|-- ttimes_mod
-|-- user_files
-|-- user_functions.f90
-|-- user_parameters.f90
--- xcorr-measure.f90
-\end{verbatim}
-}
-
\section{Compilation}
-[TODO] - Rewrite this for the official release.
-
-{\bf Note}: Do NOT use the configure script for beta test compilation. It will not
-work.
-
If your compiler of choice is gfortran, then you should be able to use the
{\tt make\_gfortran} makefiles with only minor modifications (notably you may need to
change the search path for the {\tt libsacio.a} library). If you prefer another
-compiler, you should modify the OPT and FC lines in the makefiles accordingly.
+compiler, you should modify the OPT and FC lines in the makefiles accordingly. We tested the code using gfortran version 4.1.2.
+(To check your version, type {\tt gfortran --version}.)
{\bf Important note}: All the code is compiled with the -m32 option, which makes
32bit binaries. This option is currently required to enable compatibility with
@@ -89,19 +43,11 @@
Steps to compile the flexwin package:
\begin{enumerate}
-\item Compile {\tt libSacLib.a}. In the {\tt SacLib} directory (which is outside the {\tt flexwin}
-directory) type: {\tt make -f make\_gfortran}.
-\item Compile {\tt libtau.a} and create {\tt iasp91.hed} and {\tt iasp91.tbl}. In the
-{\tt flexwin/ttimes\_mod} directory type: {\tt make -f make\_gfortran}. This will compile
-{\tt libtau.a}, and two programs, {\tt remodl} and {\tt setbrn}. The makefile will also run
-{\tt remodl} and {\tt setbrn} to create the {\tt iasp91.hed} and {\tt iasp91.tbl} files. You should
-then type {\tt make -f make\_gfortran install} to install the iasp91 files.
-\item Compile {\tt flexwin}. Edit the {\tt make\_gfortran} file in the root directory to ensure the {\tt SACLIBDIR} variable points to the location of your SAC libraries (by default {\tt /opt/sac/lib}). Then type {\tt make -f make\_gfortran}.
+\item Compile {\tt libtau.a} and create {\tt iasp91.hed} and {\tt iasp91.tbl}. In the {\tt flexwin/ttimes\_mod} directory type: {\tt make -f make\_gfortran}. This will compile {\tt libtau.a}, and two programs, {\tt remodl} and {\tt setbrn}. The makefile will also run {\tt remodl} and {\tt setbrn} to create the {\tt iasp91.hed} and {\tt iasp91.tbl} files. You should then type {\tt make -f make\_gfortran install} to install the iasp91 files.
+\item Compile flexwin. Edit the {\tt make\_gfortran} file in the flexwin root directory to ensure the {\tt SACLIBDIR} environment variable points to the location of your SAC libraries (by default {\tt \$SACHOME/lib}). Then type {\tt make -f make\_gfortran}.
\end{enumerate}
-You should end up with the {\tt flexwin} executable. The program requires the iasp91
-files (or links to them) to be present in the directory from which the code is
-launched.
+You should end up with the {\tt flexwin} executable. The program requires the {\tt iasp91.hed} and {\tt iasp91.tbl} files (or symbolic links to them) to be present in the directory from which the code is launched.
\section{Running the Test case}
@@ -118,7 +64,7 @@
file. Your result should be identical to that shown in
Figure~\ref{fg:test_data}.
\begin{figure}
-\center \includegraphics[width=4in]{../test_data/MEASURE.orig/ABKT_II_LHZ_seis.pdf}
+\center \includegraphics[width=4in]{manual_figures/ABKT_II_LHZ_seis.pdf}
\caption{\label{fg:test_data}
Windowing results for the test data set, plotted using the
{\tt ./plot\_seismos\_gmt.sh} script.
}
@@ -171,10 +117,8 @@
subroutines in {\tt io\_subs.f90}.
\section{Scripts}
-Several plotting routines ({\tt plot\_*.sh}) are provided as examples for
-plotting seismograms, measurements and adjoint sources. All plotting is
-done in gmt. These scripts will need to be modified to suit your
-particular plotting needs.
+Several plotting routines ({\tt plot\_*.sh}) are provided in the {\tt scripts} subdirectory as examples for plotting seismograms, measurements and adjoint sources. All plotting is
+done using GMT (Generic Mapping Tools). These scripts will need to be modified to suit your particular plotting needs.
The script {\tt extract\_event\_windowing\_stats.sh} extracts statistical
information on the window selection process and on the measurements.
Modified: seismo/3D/ADJOINT_TOMO/flexwin_paper/latex/manual_tuning.tex
===================================================================
--- seismo/3D/ADJOINT_TOMO/flexwin_paper/latex/manual_tuning.tex	2009-03-19 04:46:00 UTC (rev 14388)
+++ seismo/3D/ADJOINT_TOMO/flexwin_paper/latex/manual_tuning.tex	2009-03-19 10:11:30 UTC (rev 14389)
@@ -2,19 +2,21 @@
FLEXWIN is adapted to your specific problem by modifying the values of the parameters in Table~\ref{tb:params}, and the functional form of those parameters that are time-dependent. We consider the algorithm to be correctly adapted when false positives (windows around undesirable features of the seismogram) are minimized, and true positives (windows around desirable features) are maximized. The choice of what makes an adequate set of windows remains subjective, as it depends strongly on the quality of the input model, the quality of the data, and the region of the Earth the tomographic inversion aims to constrain.
-The base values of the various parameters are set in the {\tt PAR\_FILE}, which is read at run time. The functional forms of the time dependent parameters may be adjusted by modifying {\tt user\_parameters.f90}, and re-compiling the code.
+The base values of the various parameters are set in the {\tt PAR\_FILE}, which is read at run time. Examples of base parameter values for the three tomographic scenarios discussed by \cite{MaggiEtal2009} can be found in Table~\ref{tb:example_params}. The functional forms of the time dependent parameters may be adjusted by modifying {\tt user\_parameters.f90} (see next section), and re-compiling the code.
\begin{table} \begin{tabular}{lp{0.8\linewidth}} \hline \multicolumn{2}{l}{Standard tuning parameters:} \\[5pt] -$T_{0,1}$& band-pass filter corner periods \\ +$T_{0,1}$& bandpass filter corner periods \\$r_{P,A}$& signal to noise ratios for whole waveform \\$r_0(t)$& signal to noise ratios single windows \\$w_E(t)$& water level on short-term:long-term ratio \\ -$CC_0(t)$& acceptance level for normalized cross-correlation\\ +$\mathrm{CC}_0(t)$& acceptance level for normalized cross-correlation\\$\Delta\tau_0(t)$& acceptance level for time lag \\$\Delta\ln{A}_0(t)$& acceptance level for amplitude ratio \\ +$\Delta\tau_{\rm ref}$& reference time lag \\ +$\Delta\ln{A}_{\rm ref}$& reference amplitude ratio \\ \hline \multicolumn{2}{l}{Fine tuning parameters:} \\ [5pt]$c_0$& for rejection of internal minima \\ @@ -22,7 +24,7 @@$c_2$& for rejection of un-prominent windows \\$c_{3a,b}$& for rejection of multiple distinct arrivals \\$c_{4a,b}$& for curtailing of windows with emergent starts and/or codas \\ -$w_{CC}\quad w_{\rm len}\quad w_{\rm nwin}$& for selection of best non-overlapping window combination \\ +$w_{\mathrm{CC}}\quad w_{\rm len}\quad w_{\rm nwin}$& for selection of best non-overlapping window combination \\ \hline \end{tabular} \caption{\label{tb:params} @@ -32,18 +34,20 @@ } \end{table} -\section{User modifiable parameters} -The main user-modifiable parameters in the {\tt PAR\_FILE} are: +\section{User parameters} +The main user parameters in the {\tt PAR\_FILE} are: \begin{description} \item[{\tt WIN\_MIN\_PERIOD}]Corresponds to$T_0$in Table~\ref{tb:params}, and is the short wavelength cut-off for the band-pass filter applied to the raw synthetic and observed seismograms. \item[{\tt WIN\_MAX\_PERIOD}]Corresponds to$T_1$in Table~\ref{tb:params}, and is the long wavelength cut-off for the band-pass filter applied to the raw synthetic and observed seismograms. 
\item[{\tt SNR\_INTEGRATE\_BASE}]Corresponds to$r_P$in Table~\ref{tb:params}, and is the minimum signal to noise ratio on the power of the observed seismogram for windowing to continue. \item[{\tt SNR\_MAX\_BASE}]Corresponds to$r_A$in Table~\ref{tb:params}, and is the minimum signal to noise ratio on the modulus of the observed seismogram for windowing to continue. -\item[{\tt WINDOW\_AMP\_BASE}]Corresponds to$r_0$in Table~\ref{tb:params}, and is the minimum signal to noise ratio for a window on the observed seismogram to be acceptable. +\item[{\tt WINDOW\_S2N\_BASE}]Corresponds to$r_0$in Table~\ref{tb:params}, and is the minimum signal to noise ratio for a window on the observed seismogram to be acceptable. \item[{\tt STALTA\_BASE}]Corresponds to$w_E$in Table~\ref{tb:params}, and is the water level to be applied to the synthetic short-term/long-term average waveform in order to generate candidate time windows. See Figure~\ref{fg:win_composite}a. \item[{\tt CC\_BASE}]Corresponds to$CC_0$in Table~\ref{tb:params}, and is the minimum normalized cross-correlation value between synthetic and observed seismogram for a window to be acceptable. \item[{\tt TSHIFT\_BASE}]Corresponds to$\Delta\tau_0$in Table~\ref{tb:params}, and is the maximum cross-correlation lag (in seconds) between synthetic and observed seismogram for a window to be acceptable. \item[{\tt DLNA\_BASE}]Corresponds to$\Delta\ln{A}_0$in Table~\ref{tb:params}, and is the maximum amplitude ratio ($\Delta\ln{A}$or$\Delta A/A$) between synthetic and observed seismogram for a window to be acceptable. +\item[{\tt TSHIFT\_REFERENCE}]Corresponds to$\Delta\tau_{\rm ref}$in Table~\ref{tb:params}, and allows for a systematic traveltime bias in the synthetics. +\item[{\tt DLNA\_REFERENCE}]Corresponds to$\Delta\ln{A}_{\rm ref}$in Table~\ref{tb:params}, and allows for a systematic amplitude bias in the synthetics. \item[{\tt C\_0}]Corresponds to$C_0$in Table~\ref{tb:params}, and is expressed as a multiple of$w_E$.
No window may contain a local minimum in its STA:LTA waveform that falls below the local value of$C_0 w_E$. See Figure~\ref{fg:win_composite}b. \item[{\tt C\_1}]Corresponds to$C_1$in Table~\ref{tb:params}, and is expressed as a multiple of$T_0$. No window may be shorter than$C_1 T_0$. \item[{\tt C\_2}]Corresponds to$C_2$in Table~\ref{tb:params}, and is expressed as a multiple of$w_E$. A window whose seed maximum on the STA:LTA waveform rises less than$C_2 w_E$above either of its adjacent minima is rejected. See Figure~\ref{fg:win_composite}c. @@ -56,6 +60,38 @@ \item[{\tt WEIGHT\_N\_WINDOWS}]Corresponds to$w_{\rm nwin}$in Table~\ref{tb:params}, and is the weight given to the total number of windows in the process of resolving window overlaps. \end{description} +\begin{table} +\begin{center} +\begin{tabular}{lcccccc} +\hline + & Global & \multicolumn{2}{c}{Japan} & \multicolumn{3}{c}{S. California} \\ +\hline +$T_{0,1}$& 50, 150 & 24, 120 & 6, 30 & 6, 30 & 3, 30 & 2, 30 \\ +$r_{P,A}$& 3.5, 3.0 & 3.5, 3.0 & 3.5, 3.0 & 3.0, 2.5 & 2.5, 3.5 & 2.5, 3.5 \\ +$r_0$& 2.5 & 1.5 & 3.0 & 3.0 & 4.0 & 4.0 \\ +$w_E$& 0.08 & 0.11 & 0.12 & 0.18 & 0.11 & 0.07 \\ +$\mathrm{CC}_0$& 0.85 & 0.70 & 0.73 & 0.71 & 0.80 & 0.85 \\ +$\Delta\tau_0$& 15 & 12.0 & 3.0 & 8.0 & 4.0 & 3.0 \\ +$\Delta\ln{A}_0$& 1.0 & 1.0 & 1.5 & 1.5 & 1.0 & 1.0 \\ +$\Delta\tau_{\rm ref}$& 0.0 & 0.0 & 0.0 & 4.0 & 2.0 & 1.0 \\ +$\Delta\ln{A}_{\rm ref}$& 0.0 & 0.0 & 0.0 & 0.0 & 0.0 & 0.0 \\ +\hline +$c_0$& 0.7 & 0.7 & 0.7 & 0.7 & 1.3 & 1.0 \\ +$c_1$& 4.0 & 3.0 & 3.0 & 2.0 & 4.0 & 5.0 \\ +$c_2$& 0.3 & 0.0 & 0.6 & 0.0 & 0.0 & 0.0 \\ +$c_{3a,b}$& 1.0, 2.0 & 1.0, 2.0 & 1.0, 2.0 & 3.0, 2.0 & 4.0, 2.5 & 4.0, 2.5 \\ +$c_{4a,b}$& 3.0, 10.0 & 3.0, 25.0 & 3.0, 12.0 & 2.5, 12.0 & 2.0, 6.0 & 2.0, 6.0 \\ +$w_{\mathrm{CC}}, w_{\rm len}, w_{\rm nwin}$+ & 1, 1, 1 & 1, 1, 1 & 1, 1, 1 & 0.5,1.0,0.7 & 0.70,0.25,0.05 & 1,1,1 \\ +\hline +\end{tabular} +\caption{\label{tb:example_params} +Values of standard and fine-tuning parameters for the 
three seismological +scenarios discussed \cite{MaggiEtal2009}. This table is identical to Table~3 of that study. +} +\end{center} +\end{table} + \begin{figure} \center \includegraphics[width=6in]{figures/fig/window_composite.pdf} \caption{\label{fg:win_composite} @@ -105,7 +141,7 @@ \section{Time dependence of user parameters} A subset of the FLEXWIN parameters from Table~\ref{tb:params} are time-dependent (where time is measured along the seismogram). This feature enables the user to exercise fine control of the windowing algorithm. The user can modulate the time-dependence of these parameters by editing the {\tt set\_up\_criteria\_arrays} subroutine in the {\tt user\_functions.f90} file. This subroutine is called after the seismograms have been read in, and the following variables have been set: \begin{description} -\item[{\tt npts, dt, b, npts}] Number of points, time step, time of first point with respect to the reference time of both seismograms. The observed and synthetic seismograms should have identical values of these three quantities. +\item[{\tt npts, dt, b}] Number of points, time step, time of first point with respect to the reference time of both seismograms. The observed and synthetic seismograms should have identical values of these three quantities. \item[{\tt evla, evlo, evdp, stla, stlo}] Event latitude, event longitude, event depth (km), station latitude, station longitude, read from the observed seismogram. \item[{\tt azimuth, backazimuth, dist\_deg, dist\_km}] Calculated from the event and station locations above. \item[{\tt kstnm, knetwk, kcmpnm}] Station name, network name, component name, read from the observed seismogram. @@ -181,3 +217,204 @@ The above examples illustrate the power of the {\tt user\_functions.f90} file. The user can choose to include/exclude any portion of the seismogram, and to make the rejection criteria for windows more or less stringent on any other portion of the seismogram. 
All the seismogram-dependent variables whose values are known when the {\tt set\_up\_criteria\_arrays} subroutine is executed may be used to inform these choices, leading to an infinite number of windowing possibilities. The careful user will use knowledge of the properties of the observed data set, the limitations of the synthetic waveforms, and the final use to which the selected windows will be put in order to tailor the subroutine to the needs of each study. For a given set of data and synthetics, the {\tt PAR\_FILE} and {\tt user\_functions.f90} files uniquely determine the windowing results. + +\subsection{Examples of user functions\label{ap:user_fn}} + +As concrete examples of how the time dependence of the tuning parameters can be exploited, we present here the functional forms of the time dependencies used for the three example tomographic scenarios (global, Japan and southern California) described in \cite{MaggiEtal2009}. In each example we use predicted arrival times derived from 1D Earth models to help modulate certain parameters. Note, however, that the actual selection of individual windows is based on the details of the waveforms, and not on information from 1D Earth models. + +\subsubsection{Global scenario\label{ap:user_global}} + +In the following,$h$indicates earthquake depth,$t_Q$indicates the approximate start of the Love wave predicted by a group wave speed of 4.2~\kmps, and$t_R$indicates the approximate end of the Rayleigh wave predicted by a group wave speed of 3.2~\kmps. In order to reduce the number of windows picked beyond R1, and to ensure that those selected beyond R1 are a very good match to the synthetic waveform, we raise the water level on the STA:LTA waveform and impose stricter criteria on the signal-to-noise ratio and the waveform similarity after the approximate end of the surface-wave arrivals. We allow greater flexibility in cross-correlation time lag$\Delta\tau$for intermediate depth and deep earthquakes.
We lower the cross-correlation value criterion for surface-waves in order to retain windows with a slight mismatch in dispersion characteristics. + +We therefore use the following time modulations: +\begin{align} +w_E(t) & = + \begin{cases} + w_E & \text{$t \leq t_R$}, \\ + 2 w_E & \text{$t > t_R$}, + \end{cases} +\\ +r_0(t) & = + \begin{cases} + r_0 & \text{$t \leq t_R$}, \\ + 10r_0 & \text{$t > t_R$} , + \end{cases} +\\ +\mathrm{CC}_0(t) & = + \begin{cases} + \mathrm{CC}_0 & \text{$t \leq t_Q$}, \\ + 0.9 \mathrm{CC}_0 & \text{$t_Q < t \leq t_R$}, \\ + 0.95 & \text{$t > t_R$} , + \end{cases} +\\ +\Delta\tau_0(t) & = + \begin{cases} + \begin{cases} + \tau_0 & \text{$t \leq t_R$}, \\ + \tau_0/3 & \text{$t > t_R$} , + \end{cases} + & \text{$h \leq$70~km}, \\ + 1.4\tau_0 & \text{70~km$< h <$300~km}, \\ + 1.7\tau_0 & \text{$h \geq$300~km}, + \end{cases} + \\ +\Delta \ln A_0(t) & = + \begin{cases} + \Delta \ln A_0 & \text{$t \leq t_R$}, \\ + \Delta \ln A_0/3 & \text{$t > t_R$} . + \end{cases} +\end{align} + +%-------------------------- + +\subsubsection{Japan scenario\label{ap:user_japan}} In the following,$t_P$and$t_S$denote the start of the time windows for$P$- and$S$waves, as predicted by the 1-D IASPEI91 model \citep{KennettEngdahl1991}, and$t_{R1}$indicates the end of the surface-wave time window. For the \trange{24}{120} data, we consider the waveform between the start of the$P$wave and the end of the surface-wave. We therefore modulate$w_E(t)$as follows: + +% +\begin{align} +w_E(t) & = + \begin{cases} + 10 w_E & \text{$t < t_P$}, \\ + w_E & \text{$t_P \le t \leq t_{R1}$}, \\ + 10 w_E & \text{$t > t_{R1}$}. + \end{cases} +\end{align} + +For the \trange{6}{30} data, the fit between the synthetic and observed surface-waves is expected to be poor, as the 3D model used to calculate the synthetics cannot produce the required complexity.
We therefore want to concentrate on body-wave arrivals only, and avoid surface-wave windows altogether by modulating$w_E(t)$as follows: +% +\begin{align} +w_E(t) & = + \begin{cases} + 10 w_E & \text{$t < t_P$}, \\ + w_E & \text{$t_P \le t \leq t_S$}, \\ + 10 w_E & \text{$t > t_S$}. + \end{cases} +\end{align} + +We use constant values of$r_0(t)=r_0$,$\mathrm{CC}_0(t)=\mathrm{CC}_0$and$\Delta \ln A_0(t)=\Delta \ln A_0$for both period ranges. In order to allow greater flexibility in cross-correlation time lag$\Delta\tau$for intermediate depth and deep earthquakes, we use: + +\begin{align} +\Delta\tau_0(t) & = + \begin{cases} + 0.08\, t_P & \text{$h \leq$70~km}, \\ + \max(0.05\, t_P, 1.4\tau_0) & \text{70~km$< h <$300~km}, \\ + \max(0.05\, t_P, 1.7\tau_0) & \text{$h \geq$300~km}. + \end{cases} +\end{align} +%-------------------------- + +\subsubsection{Southern California scenario\label{ap:user_socal}} + +In the following,$t_P$and$t_S$denote the start of the time windows for the crustal P wave and the crustal S wave, computed from a 1D layered model appropriate to Southern California \citep{Wald95}. The start and end times for the surface-wave time window,$t_{R0}$and$t_{R1}$, as well as the criteria for the time shifts$\Delta\tau_0(t)$, are derived from formulas in \cite{KomatitschEtal2004}. The source-receiver distance (in km) is denoted by$\Delta$. + +%CHT modified + +For the \trange{6}{40} and \trange{3}{40} data, we use constant values of$r_0(t)=r_0$,$\mathrm{CC}_0(t)=\mathrm{CC}_0$,$\Delta\tau_0(t)=\Delta\tau_0$, and$\Delta \ln A_0(t)=\Delta \ln A_0$. We exclude any arrivals before the$P$wave and after the Rayleigh wave.
This is achieved by the box-car function for$w_E(t)$: +% +\begin{align} +w_E(t) & = + \begin{cases} + 10 w_E & \text{$t < t_P$}, \\ + w_E & \text{$t_P \le t \leq t_{R1}$}, \\ + 10 w_E & \text{$t > t_{R1}$}, + \end{cases} +\end{align} +%For the \trange{6}{40} data, we exclude any arrivals before the$P$wave and reduce the number of windows picked beyond R1 by modulating$w_E(t)$. We use constant values of$r_0(t)=r_0$,$\mathrm{CC}_0(t)=\mathrm{CC}_0$and$\Delta \ln A_0(t)=\Delta \ln A_0$, but modulate the cross-correlation time lag criterion so that it is less strict at larger epicentral distances and for surface-waves. We therefore use: +% +%\begin{align} +%w_E(t) & = +% \begin{cases} +% 10 w_E & \text{$t < t_P$}, \\ +% w_E & \text{$t_P \le t \leq t_{R1}$}, \\ +% 2 w_E & \text{$t > t_{R1}$}, +% \end{cases} +%\\ +%\Delta\tau_0(t) & = +% \begin{cases} +% 3.0 + \Delta/80.0 & \text{$t \le t_{R0}$}, \\ +% 3.0 + \Delta/50.0 & \text{$t > t_{R0}$}, +% \end{cases} +%\end{align} + +For the \trange{2}{40} data, we avoid selecting surface-wave arrivals as the 3D model used to calculate the synthetics cannot produce the required complexity. The water-level criterion then becomes: + +\begin{align} +w_E(t) & = + \begin{cases} + 10 w_E & \text{$t < t_P$}, \\ + w_E & \text{$t_P \le t \leq t_S$}, \\ + 10 w_E & \text{$t > t_S$}. + \end{cases} +%\\ +%\Delta\tau_0(t) & = \Delta\tau_0. +\end{align} + + +%----------------------- + + +\section{Tuning considerations} +FLEXWIN is not a black-box application, and as such cannot be applied blindly +to any given dataset or tomographic scenario. The data windowing required by +any given problem will differ depending on the inversion method, the scale of +the problem (local, regional, global), the quality of the data set and that of +the model and method used to calculate the synthetic seismograms. The user +must configure and tune the algorithm for the given problem.
Here we +shall discuss general considerations the user should bear in mind during +the tuning process. + +We suggest the following as a practical starting sequence for tuning the algorithm +(the process may need to be repeated and refined several times before +converging on the optimal set of parameters for a given problem and data-set). + +$T_{0,1}$: In setting the corner periods of the bandpass filter, the +user is deciding on the frequency content of the information to be used in the +tomographic problem. Values of these corner periods should reflect the +information content of the data, the quality of the Earth model and the +accuracy of the simulation used to generate the synthetic seismogram. The +frequency content in the data depends on the spectral characteristics of the +source, on the instrument responses, and on the attenuation +characteristics of the medium. As$T_{0,1}$depend on the source and station +characteristics, which may be heterogeneous in any given data-set, these filter +periods can be modified dynamically by constructing an appropriate user +function (e.g. {\em if station is in list of stations with instrument X then +reset T0 and T1 to new values}). + +$r_{P,A}$: In setting the signal-to-noise ratios for the entire seismogram the +user is applying a simple quality control on the data. Note that these criteria +are applied after filtering. No windows will be defined on data that fail this +quality control. + +$w_E(t)$: The short-term average long-term average ratio$E(t)$of a constant signal +converges to a constant value when +the length of the time-series is greater than the effective averaging length of +the long-term average. This value is 0.08 for the short-term average long-term average ratio used in FLEXWIN (it has a small dependence on$T_0$, which can be ignored in most applications). We suggest the user start with a constant +level for$w_E(t)$equal to this convergence value.
The time dependence of +$w_E(t)$should then be adjusted to exclude those portions of the waveform the +user is not interested in, by raising$w_E(t)$(e.g. to exclude the fundamental +mode surface-wave: {\em if t$>$fundamental mode surface-wave arrival time then set$w_E(t)=1$}). +We suggest finer adjustments to$w_E(t)$be made after$r_0(t)$, +$CC_0(t)$,$\Delta\tau_0(t)$and$\Delta \ln A_0(t)$have been configured. + +$r_0(t)$,$\mathrm{CC}_0(t)$,$\Delta \tau_{\rm ref}$,$\Delta
+\tau_0(t)$,$\Delta \ln A_{\rm ref}$and$\Delta \ln A_0(t)$: These parameters --- +window signal-to-noise ratio, normalized cross-correlation value between +observed and synthetic seismograms, cross-correlation time lag, and amplitude +ratio --- control the degree of well-behavedness of the data within accepted +windows. The user first sets constant values for these four parameters, then +adds a time dependence if required. Considerations that should be taken into +account include the quality of the Earth model used to calculate the synthetic +seismograms, the frequency range, the dispersed nature of certain arrivals (e.g. +{\em for t corresponding to the group velocities of surface-waves, reduce +$CC_0(t)$}), and {\em a priori} preferences for picking certain small-amplitude seismic phases +(e.g. {\em for t close to the expected arrival for$P_{\rm diff}$, reduce$r_0(t)$}). +$\Delta \tau_{\rm ref}$and$\Delta \ln A_{\rm ref}$should be set to zero at first, and only +reset if the synthetics contain a systematic bias in traveltimes or amplitudes. + + +$c_{0-4}$: These parameters control the process by which the suite of all possible data windows is pared down using criteria on the shape of the STA:LTA$E(t)$waveform alone. We suggest the user start by setting these values to those used in our global example (see Table~\ref{tb:example_params}). Subsequent minimal tuning should be performed by running the algorithm on a subset of the data and closely examining the lists of windows rejected at each stage to make sure the user agrees with the choices made by the algorithm. + +$w_{\mathrm{CC}}$,$w_{\rm len}$and$w_{\rm nwin}$: These parameters control the overlap resolution stage of the algorithm. Values of$w_{\mathrm{CC}}= w_{\rm len} = w_{\rm nwin} = 1\$ should be reasonable for most applications.
+
+The objective of the tuning process summarized here should be to maximize the selection of windows around desirable features in the seismogram, while minimizing the selection of undesirable features, bearing in mind that the desirability or undesirability of a given feature is subjective, and depends on how the user subsequently intends to use the information contained within the data windows.
+
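The box-car modulation of $w_E(t)$ used in the Japan and southern California scenarios can be sketched in a few lines. This is an illustration only, not FLEXWIN code (the package itself is Fortran); the function name and the example values of $w_E$, $t_P$ and $t_S$ below are assumptions made for the demonstration.

```python
def water_level(t, w_e, t_p, t_s):
    # Box-car modulation of the STA:LTA water level: keep the base
    # level w_E between the predicted arrival times t_p and t_s, and
    # raise it tenfold outside that interval so that no candidate
    # windows are seeded there.
    if t_p <= t <= t_s:
        return w_e
    return 10.0 * w_e

# Hypothetical example: base water level 0.07, with predicted arrivals
# t_P = 30 s and t_S = 80 s after the event origin time.
profile = [water_level(t, 0.07, 30.0, 80.0) for t in (0.0, 30.0, 55.0, 80.0, 120.0)]
print(profile)
```

The same shape applies to the global scenario, with the raised level starting at the approximate end of the Rayleigh wave instead of the S arrival.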
https://cs.stackexchange.com/questions/84618/time-complexity-of-searching-an-element-in-binary-search-tree/84631

# Time complexity of searching an element in Binary Search Tree [closed]
I want to know the time complexity of searching, inserting and deleting an element in (a) a balanced binary search tree and (b) an unbalanced binary search tree.
• What have you tried and where did you get stuck? Those results are utterly standard. Where did you look? – Raphael Nov 29 '17 at 10:38
• For a balanced binary search tree this is what I think: for searching (assuming the element needed is at a leaf position) we would need to search down to the last level; just like binary search, the problem gets divided each time until we reach the element, therefore O(log n). I don't know if that is correct or not. For an unbalanced tree I don't know where to start... – JobLess Nov 29 '17 at 10:50
• I suggest you pick up a text book that explains the basics and then this analysis to you. I recommend Sedgewick's books. – Raphael Nov 29 '17 at 10:54
• This is what is written in the text: "The running times of algorithms on binary search trees depend on the shapes of the trees, which, in turn, depends on the order in which keys are inserted." So if the shape of the tree skews either left or right then all the nodes will be present on one side only. So suppose that the element to be found is at the last level: N nodes need to be visited, hence it is O(n). Is that correct? – JobLess Nov 29 '17 at 12:06
Best case time complexity for search, insertion, and deletion is O(log n). This would correspond to a balanced tree.
The worst case is O(n). This would correspond to a completely unbalanced (degenerate) tree.
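To make the answer concrete, here is a small Python sketch (added for illustration, not part of the original answer): the same 15 keys inserted in sorted order give a degenerate tree with an O(n) search path, while inserting medians first gives a balanced tree with an O(log n) path.

```python
class Node:
    def __init__(self, key):
        self.key, self.left, self.right = key, None, None

def insert(root, key):
    # Standard (unbalanced) BST insertion.
    if root is None:
        return Node(key)
    if key < root.key:
        root.left = insert(root.left, key)
    elif key > root.key:
        root.right = insert(root.right, key)
    return root

def search_depth(root, key):
    # Number of nodes visited while searching for `key`.
    visited, node = 0, root
    while node is not None:
        visited += 1
        if key == node.key:
            return visited
        node = node.left if key < node.key else node.right
    return visited  # key absent

# Degenerate tree: keys inserted in sorted order form a "linked list".
skewed = None
for k in range(1, 16):
    skewed = insert(skewed, k)

# Balanced shape: insert the median of each sub-range first.
def balanced_insert(root, keys):
    if not keys:
        return root
    mid = len(keys) // 2
    root = insert(root, keys[mid])
    root = balanced_insert(root, keys[:mid])
    return balanced_insert(root, keys[mid + 1:])

balanced = balanced_insert(None, list(range(1, 16)))

print(search_depth(skewed, 15))    # visits 15 nodes: O(n)
print(search_depth(balanced, 15))  # visits 4 nodes: O(log n)
```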
• You mean $\Omega(n)$. – Raphael Nov 29 '17 at 10:39
http://www.conservapedia.com/Difference_quotient

# Difference quotient
The difference quotient is used in calculus to compute the slope of the secant line through two points on the graph of a function f(x).
$\frac{f(x+\Delta x)-f(x)}{\Delta x}\,\!$
## How the Difference quotient differs from the Derivative
The difference between the difference quotient and the derivative is that the derivative is the limit of the difference quotient as $\Delta x \to 0$, that is, as the secant lines get closer and closer to the tangent line:
$f'(x)=\lim _{\Delta x \to 0} \frac{f(x+\Delta x)-f(x)}{\Delta x}\,\!$
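A quick numerical illustration (added; not from the original article): for f(x) = x² the difference quotient at x = 3 equals ((3 + Δx)² − 9)/Δx = 6 + Δx, so it approaches the derivative f′(3) = 6 as Δx shrinks.

```python
def difference_quotient(f, x, dx):
    # Slope of the secant line through (x, f(x)) and (x + dx, f(x + dx)).
    return (f(x + dx) - f(x)) / dx

f = lambda x: x ** 2
for dx in (1.0, 0.1, 0.01, 0.001):
    # Each value is 6 + dx (up to floating-point rounding).
    print(dx, difference_quotient(f, 3.0, dx))
```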
https://questioncove.com/updates/4df8e1080b8b370c28bdd26c

Mathematics
OpenStudy (anonymous):
-x^2-7x+9 find the vertex
OpenStudy (amistre64):
the x component of the vertex is the opposite of the middle number divided by 2
OpenStudy (amistre64):
-(-7)/2 = 7/2; now use that to find the value of the function
OpenStudy (amistre64):
-(7/2)^2 -7(7/2) +9 = ?
OpenStudy (amistre64):
i get -27.75
OpenStudy (amistre64):
so vertex = ($$\frac{7}{2},\frac{-111}{4}$$) maybe?
OpenStudy (anonymous):
OpenStudy (amistre64):
hmm ..... i must miss placed a negative someplace
OpenStudy (anonymous):
here is another one -x^2-7x+7
OpenStudy (amistre64):
-x^2-7x+9; Vx = 7/-2 if i see it right; you sure the correct answer doesnt start with -7/2?
OpenStudy (anonymous):
oops the new problem is -x^2-x+7
OpenStudy (amistre64):
-(x^2 + x - 7) = y
x^2 + x - 7 = -y
x^2 + x = -y + 7
x^2 + x + (1/2)^2 = -y + 7 + (1/2)^2
(x + 1/2)^2 = -y + (29/4)
-(x + 1/2)^2 = y - (29/4)
y = -(x + (1/2))^2 + (29/4)
vertex = (1/2, 29/4) maybe?
OpenStudy (anonymous):
not correct
OpenStudy (amistre64):
either you got a broken program; or someone broke the math today :)
OpenStudy (anonymous):
OpenStudy (amistre64):
thiiiissss close; its the negative in front that is throwing me off ... bummer
OpenStudy (amistre64):
wolfram says my math was right; I just interpreted it wrong afterwards
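As a quick check of the algebra in this thread (added; not part of the original chat), the vertex of y = ax² + bx + c lies at x = −b/(2a). Exact rational arithmetic gives (−7/2, 85/4) for y = −x² − 7x + 9 and (−1/2, 29/4) for y = −x² − x + 7:

```python
from fractions import Fraction

def vertex(a, b, c):
    # Vertex of y = a*x^2 + b*x + c: x-coordinate is -b/(2a),
    # y-coordinate is the polynomial evaluated there.
    x = Fraction(-b, 2 * a)
    y = a * x ** 2 + b * x + c
    return x, y

print(vertex(-1, -7, 9))  # vertex of y = -x^2 - 7x + 9
print(vertex(-1, -1, 7))  # vertex of y = -x^2 - x + 7
```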
https://zbmath.org/?q=an:06399442

## Properly embedded minimal planar domains. (English) Zbl 1315.53008
It follows from the minimal surface equation that the plane, the catenoid and the helicoid are examples of properly embedded, minimal planar domains in $${\mathbb R}^3$$. And it is well known that those surfaces are of finite topology. A planar domain is a connected surface that embeds in the plane. Around 1860, Riemann discovered examples of properly embedded, minimal planar domains in $${\mathbb R}^3$$ with infinite topology. These examples, called the Riemann minimal examples by the authors, appear in a one-parameter family $${\mathcal R}_t, t \in (0, \infty)$$, and satisfy the property that, after a rotation, each $${\mathcal R}_t$$ intersects every horizontal plane in a circle or in a line. Moreover the $${\mathcal R}_t$$ have natural limits being a vertical catenoid as $$t \to 0$$ and a vertical helicoid as $$t \to \infty$$.
In this paper, the authors analyze the Riemann minimal examples and prove that the only connected properly embedded, minimal planar domains in $${\mathbb R}^3$$ with infinite topology are the Riemann minimal examples. From this result together with previously well-known facts, the authors complete the classification of properly embedded, minimal planar domains in $${\mathbb R}^3$$. Namely, they show that, up to scaling and rigid motion, any connected, properly embedded, minimal planar domain in $${\mathbb R}^3$$ is a plane, a helicoid, a catenoid or one of the Riemann minimal examples. In particular, for every such surface, there exists a foliation of $${\mathbb R}^3$$ by parallel planes, each of which intersects the surface transversely in a connected curve that is a circle or a line.
### MSC:
53A10 Minimal surfaces in differential geometry, surfaces with prescribed mean curvature
53C25 Special Riemannian manifolds (Einstein, Sasakian, etc.)
Ros, ”The space of properly embedded minimal surfaces with finite total curvature,” Indiana Univ. Math. J., vol. 45, iss. 1, pp. 177-204, 1996. · Zbl 0864.53008 [48] B. Riemann, Ouevres Mathématiques de Riemann, Paris: Gauthiers-Villars, 1898. [49] B. Riemann, ”Über die Fläche vom kleinsten Inhalt bei gegebener Begrenzung,” Abh. Königl, d. Wiss. Göttingen, Mathem. Cl., vol. 13, pp. 3-52, 1867. [50] R. Schoen, ”Estimates for stable minimal surfaces in three-dimensional manifolds,” in Seminar on Minimal Submanifolds, Princeton, NJ: Princeton Univ. Press, 1983, vol. 103, pp. 111-126. · Zbl 0532.53042 [51] R. M. Schoen, ”Uniqueness, symmetry, and embeddedness of minimal surfaces,” J. Differential Geom., vol. 18, iss. 4, pp. 791-809 (1984), 1983. · Zbl 0575.53037 [52] G. Segal and G. Wilson, ”Loop groups and equations of KdV type,” Inst. Hautes Études Sci. Publ. Math., vol. 61, pp. 5-65, 1985. · Zbl 0592.35112 [53] M. Shiffman, ”On surfaces of stationary area bounded by two circles, or convex curves, in parallel planes,” Ann. of Math., vol. 63, pp. 77-90, 1956. · Zbl 0070.16803 [54] M. Traizet, ”An embedded minimal surface with no symmetries,” J. Differential Geom., vol. 60, iss. 1, pp. 103-153, 2002. · Zbl 1054.53014 [55] M. Weber and M. Wolf, ”Teichmüller theory and handle addition for minimal surfaces,” Ann. of Math., vol. 156, iss. 3, pp. 713-795, 2002. · Zbl 1028.53009 [56] R. Weikard, ”On rational and periodic solutions of stationary KdV equations,” Doc. Math., vol. 4, p. 109, 1999. · Zbl 0972.35121 [57] B. White, ”The space of $$m$$-dimensional surfaces that are stationary for a parametric elliptic functional,” Indiana Univ. Math. J., vol. 36, iss. 3, pp. 567-602, 1987. · Zbl 0770.58005
This reference list is based on information provided by the publisher or from digital mathematics libraries. Its items are heuristically matched to zbMATH identifiers and may contain data conversion errors. It attempts to reflect the references listed in the original paper as accurately as possible without claiming the completeness or perfect precision of the matching. | 2023-03-21 23:38:56 | {"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 1, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.6745269894599915, "perplexity": 1790.0824320250924}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2023-14/segments/1679296943747.51/warc/CC-MAIN-20230321225117-20230322015117-00288.warc.gz"} |
https://gateoverflow.in/289765/%23set-theory-%23groups | +1 vote
Consider the set H of all 3 × 3 matrices of the type:
$\begin{bmatrix} a&f&e\\ 0&b&d\\ 0&0&c\\ \end{bmatrix}$
where a, b, c, d, e and f are real numbers and $abc \neq 0$. Under the matrix multiplication operation, the set H is:
(a) a group
(b) a monoid but not a group
(c) a semigroup but not a monoid
(d) neither a group nor a semigroup
Option b?

Answer given is C. Can you explain your choice of option?

A monoid means there is an identity element, which in this case would be the identity matrix. So I think it should be a monoid.

It should be a group: the inverse of an upper triangular matrix is always upper triangular, and it exists whenever all the diagonal entries are non-zero.
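The last comment's reasoning can be spot-checked numerically: products, the identity, and inverses of upper-triangular matrices with non-zero diagonal all stay in H (matrix multiplication is associative in any case). A quick NumPy sanity check on random samples (not a proof):

```python
import numpy as np

rng = np.random.default_rng(0)

def sample_H():
    # A random element of H: upper triangular with non-zero diagonal (abc != 0).
    M = np.triu(rng.normal(size=(3, 3)))
    M[np.diag_indices(3)] = rng.uniform(0.5, 2.0, size=3)  # keep the diagonal safely non-zero
    return M

def in_H(M, tol=1e-9):
    # Membership test: upper triangular with non-zero diagonal entries.
    return np.allclose(M, np.triu(M), atol=tol) and np.abs(np.diag(M)).min() > tol

A, B = sample_H(), sample_H()

assert in_H(A @ B)               # closure
assert in_H(np.eye(3))           # the identity matrix lies in H
assert in_H(np.linalg.inv(A))    # the inverse is again upper triangular, non-zero diagonal
print("group axioms hold on these samples")
```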
https://physweb.bgu.ac.il/COURSES/PHYSICS_ExercisesPool/22_Rigid_body/e_22_1_023.html | ### Rigid Body
A massless dancer holds two identical masses ($m$) at a distance $R$ from the body
and spins at a constant angular velocity $\omega_0$.
Suddenly the dancer moves the masses to a distance of $R/2$ from the body.
What will be the new angular velocity?
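Since the body itself is massless, all the angular momentum sits in the two point masses, and $L = 2mr^2\omega$ is conserved; halving $r$ therefore quadruples $\omega$. A minimal sketch (the numerical values are illustrative):

```python
def new_omega(omega0, r_old, r_new):
    # Two point masses on a massless body: L = 2*m*r**2*omega is conserved,
    # so 2*m*r_old**2*omega0 = 2*m*r_new**2*omega, and the mass m cancels.
    return omega0 * (r_old / r_new) ** 2

omega0 = 1.0                          # illustrative value
print(new_omega(omega0, 1.0, 0.5))    # 4.0, i.e. the new angular velocity is 4*omega0
```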
https://yetanothermathblog.com/2012/09/08/ | # Boolean functions from the graph-theoretic perspective
This is a very short introductory survey of graph-theoretic properties of Boolean functions.
I don’t know who first studied Boolean functions for their own sake. However, the study of Boolean functions from the graph-theoretic perspective originated in Anna Bernasconi‘s thesis. A more detailed presentation of the material can be found in various places: for example, Bernasconi’s thesis (see [BC]), the nice paper by P. Stanica (see [S], or his book with T. Cusick), or even my paper with Celerier, Melles and Phillips (see [CJMP], from which much of this material is literally copied).
For a given positive integer $n$, we may identify a Boolean function
$f:GF(2)^n\to GF(2),$
with its support
$\Omega_f = \{x\in GF(2)^n\ |\ f(x)=1\}.$
For each $S\subset GF(2)^n$, let $\overline{S}$ denote the set of complements $\overline{x}=x+(1,\dots,1)\in GF(2)^n$, for $x\in S$, and let $\overline{f}=f+1$ denote the complementary Boolean function. Note that
$\Omega_f^c=\Omega_{\overline{f}},$
where $S^c$ denotes the complement of $S$ in $GF(2)^n$. Let
$\omega=\omega_f=|\Omega_f|$
denote the cardinality of the support. We call a Boolean function even (resp., odd) if $\omega_f$ is even (resp., odd). We may identify a vector in $GF(2)^n$ with its support, or, if it is more convenient, with the corresponding integer in $\{0,1, \dots, 2^n-1\}.$ Let
$b:\{0,1, \dots, 2^n-1\} \to GF(2)^n$
be the binary representation ordered with least significant bit last (so that, for example, $b(1)=(0,\dots, 0, 1)\in GF(2)^n$).
Let $H_n$ denote the $2^n\times 2^n$ Hadamard matrix defined by $(H_n)_{i,j} = (-1)^{b(i)\cdot b(j)}$, for each $i,j$ such that $0\leq i,j\leq 2^n-1$. Inductively, these can be defined by
$H_1 = \left( \begin{array}{cc} 1 & 1\\ 1 & -1 \\ \end{array} \right), \ \ \ \ \ \ H_n = \left( \begin{array}{cc} H_{n-1} & H_{n-1}\\ H_{n-1} & -H_{n-1} \\ \end{array} \right), \ \ \ \ \ n>1.$
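The inductive definition translates directly into a few lines of code; a small sketch in plain Python/NumPy (rather than Sage):

```python
import numpy as np

def hadamard(n):
    # Sylvester construction: H_1 = [[1, 1], [1, -1]], H_n = [[H, H], [H, -H]].
    H = np.array([[1, 1], [1, -1]])
    for _ in range(n - 1):
        H = np.block([[H, H], [H, -H]])
    return H

H3 = hadamard(3)
assert (H3 @ H3.T == 8 * np.eye(8)).all()   # rows are orthogonal: H_n H_n^T = 2^n I
print(H3.shape)   # (8, 8)
```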
The Walsh-Hadamard transform of $f$ is defined to be the vector in ${\mathbb{R}}^{2^n}$ whose $k$th component is
$({\mathcal{H}} f)(k) = \sum_{i \in \{0,1,\ldots,2^n-1\}}(-1)^{b(i) \cdot b(k) + f(b(i))} = (H_n (-1)^f)_k,$
where we define $(-1)^f$ as the column vector where the $i$th component is
$(-1)^f_i = (-1)^{f(b(i))},$
for $i = 0,\ldots,2^n-1$.
Example
A Boolean function of three variables cannot be bent. Let $f$ be defined by:
$\begin{array}{c|cccccccc} x_2 & 0 & 0 & 0 & 0 & 1 & 1 & 1 & 1 \\ x_1 & 0 & 0 & 1 & 1 & 0 & 0 & 1 & 1 \\ x_0 & 0 & 1 & 0 & 1 & 0 & 1 & 0 & 1 \\ \hline (-1)^f & 1 & -1 & 1 & -1 & 1 & -1 & 1 & -1 \\ {\mathcal{H}}f & 0 & 8 & 0 & 0 & 0 & 0 & 0 & 0 \\ \end{array}$
This is simply the function $f(x_0,x_1,x_2)=x_0$. It is even because
$\Omega_f = \{ (0,0,1), (0,1,1), (1,0,1), (1,1,1) \},\ \mbox{ so } \ \omega = 4.$
Here is some Sage code verifying this:
sage: from sage.crypto.boolean_function import *
sage: f = BooleanFunction([0,1,0,1,0,1,0,1])
sage: f.algebraic_normal_form()
x0
sage: f.walsh_hadamard_transform()
(0, -8, 0, 0, 0, 0, 0, 0)
(The Sage method walsh_hadamard_transform is off by a sign from the definition we gave.) We will return to this example later.
Let $X=(V,E)$ be the Cayley graph of $f$:
$V = GF(2)^n,\ \ \ \ E = \{(v,w)\in V\times V\ |\ f(v+w)=1\}.$
We shall assume throughout and without further mention that $f(0)\not=1,$ so $X$ has no loops. In this case, $X$ is an $\omega$-regular graph having $r$ connected components, where
$r = |GF(2)^n/{\rm Span}(\Omega_f)|.$
For each vertex $v\in V$, the set of neighbors $N(v)$ of $v$ is given by
$N(v)=v+\Omega_f,$
where $v$ is regarded as a vector and the addition is induced by the usual vector addition in $GF(2)^n$. Let $A = (A_{ij})$ be the $2^n\times 2^n$ adjacency matrix of $X$, so
$A_{ij} = f(b(i)+b(j)), \ \ \ \ \ 0\leq i,j\leq 2^n-1.$
Example:
Returning to the previous example, we construct its Cayley graph.
First, attach afsr.sage from [C] in your Sage session.
sage: flist = [0,1,0,1,0,1,0,1]
sage: V = GF(2)^3
sage: Vlist = V.list()
sage: f = lambda x: GF(2)(flist[Vlist.index(x)])
sage: X = boolean_cayley_graph(f, 3)
sage: X.adjacency_matrix()
[0 1 0 1 0 1 0 1]
[1 0 1 0 1 0 1 0]
[0 1 0 1 0 1 0 1]
[1 0 1 0 1 0 1 0]
[0 1 0 1 0 1 0 1]
[1 0 1 0 1 0 1 0]
[0 1 0 1 0 1 0 1]
[1 0 1 0 1 0 1 0]
sage: X.spectrum()
[4, 0, 0, 0, 0, 0, 0, -4]
sage: X.show(layout="circular")
In her thesis, Bernasconi found a relationship between the spectrum of the Cayley graph $X$,
${\rm Spectrum}(X) = \{\lambda_k\ |\ 0\leq k\leq 2^n-1\},$
(the eigenvalues $\lambda_k$ of the adjacency matrix $A$) and the Walsh-Hadamard transform $\mathcal H f = H_n (-1)^f$. Note that $f$ and $(-1)^f$ are related by the equation $f=\frac 1 2 (e - (-1)^f),$ where $e=(1,1,\dots,1)$. She discovered the relationship
$\lambda_k = \frac 1 2 (H_n e - \mathcal H f)_k$
between the spectrum of the Cayley graph $X$ of a Boolean function and the values of the Walsh-Hadamard transform of the function. Therefore, the spectrum of $X$, is explicitly computable as an expression in terms of $f$.
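Bernasconi's relationship is easy to check numerically. A sketch in plain Python/NumPy (rather than Sage), using the example $f(x_0,x_1,x_2)=x_0$ from above; the transform here follows the definition in this post (so $\mathcal{H}f(1)=8$, opposite in sign to Sage's method):

```python
import numpy as np
from itertools import product

n = 3
V = [np.array(v) for v in product([0, 1], repeat=n)]  # b(0), ..., b(2^n - 1), LSB last

def f(v):
    return int(v[-1])  # f(x0, x1, x2) = x0, the example above

# Hadamard matrix (H_n)_{ij} = (-1)^{b(i) . b(j)}
H = np.array([[(-1) ** int(u @ v) for v in V] for u in V])

signs = np.array([(-1) ** f(v) for v in V])  # the column vector (-1)^f
WH = H @ signs                               # Walsh-Hadamard transform; WH[1] == 8 here

# Cayley graph adjacency matrix A_{ij} = f(b(i) + b(j))
A = np.array([[f((u + v) % 2) for v in V] for u in V], dtype=float)
spectrum = np.sort(np.linalg.eigvalsh(A))

# Bernasconi: lambda_k = (H_n e - Hf)_k / 2
predicted = np.sort((H @ np.ones(2 ** n) - WH) / 2)
assert np.allclose(spectrum, predicted)
print(spectrum)   # eigenvalues -4, 0 (six times), 4, matching the Sage spectrum
```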
References:
[BC] A. Bernasconi and B. Codenotti, Spectral analysis of Boolean functions as a graph eigenvalue problem, IEEE Trans. Computers 48(1999)345-351.
[CJMP] Charles Celerier, David Joyner, Caroline Melles, David Phillips, On the Hadamard transform of monotone Boolean functions, Tbilisi Mathematical Journal, Volume 5, Issue 2 (2012), 19-35.
[S] P. Stanica, Graph eigenvalues and Walsh spectrum of Boolean functions, Integers 7(2007)\# A32, 12 pages.
Here’s an excellent video of Pante Stanica on interesting applications of Boolean functions to cryptography (30 minutes):
http://mathoverflow.net/questions/16771/lower-bounds-on-truncated-fourier-transform-of-functions-of-constant-modulus-a | # Lower bounds on (truncated) Fourier transform of functions of constant modulus and bounded derivative
Let $f(x)=e^{i\phi(x)}$ define a function from $[0,1]$ to the complex unit circle through the real smooth function $\phi(x)$. This function is also periodic: $\phi(0)=\phi(1)=0\text{ mod }2\pi$, and it has a bounded derivative for $x\in[0,1]$: $$\vert d\phi(x)/dx\vert\le \omega.$$ Consider the (truncated) Fourier transform of $f(x)$: $$\hat{f}(k)=\int_0^1 f(x) e^{ikx}dx.$$ What is a lower bound on $E(K) := \Vert \hat{f}\Vert^2_{L^2[0,K]}$? More explicitly, what is a lower bound for $$E(K)= \int_0^K \vert \hat{f}(k)\vert^2 dk?$$
A similar question was asked and answered here where there are no restrictions on $f$, resulting in a lower bound of zero for the value of $\vert \hat{f}(k)\vert$. The case where $\phi(x)$ is a piecewise constant function is answered here, where the problem is equivalent to bounding polynomials (even with non-integer exponents) and is related to the following physics paper. From physical considerations, I expect the lower bound to depend on $\omega$.
I suspect that:
1. the lower bound scales proportionally to $1/\omega^2$ for large enough $\omega$, but I could be way off.
2. a lower bound might best be described by a different quantity other than the derivative that I have provided: another sort of complexity will have to go into $\phi(x)$.
3. This could be related to some uncertainty principle as we want to minimize the support of $\hat{f}$. Lemma 3, in Terry Tao's blog entry on Hardy's Uncertainty Principle seems to be related except that it works only for even order derivatives and the connection is vague anyways.
4. I am missing a trivial point. Maybe constant modulus, periodic, etc is unnecessary and something can be said directly using $\sup\vert f(x)\vert$ and $\sup\vert f'(x)\vert$? That would somewhat annul my next question though.
5. My physics background led me almost automatically to discretize the problem (i.e., piecewise-linear $\phi(x)$), but I was not successful, although the approach seemed promising. See 1 also.
How could I then find the functions $\phi(x)$ that saturate the lower bound?
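To get intuition for how $E(K)$ behaves, one can evaluate it numerically for a concrete admissible phase. The phase $\phi(x)=\frac{\omega}{2\pi}\sin(2\pi x)$ below is just one illustrative choice (it satisfies $\phi(0)=\phi(1)=0$ and $|\phi'|\le\omega$); a rough Riemann-sum sketch in Python/NumPy:

```python
import numpy as np

def E(K, omega, nx=2000, nk=200):
    # Riemann-sum estimate of E(K) = int_0^K |fhat(k)|^2 dk for the sample
    # phase phi(x) = (omega / (2*pi)) * sin(2*pi*x), which satisfies
    # phi(0) = phi(1) = 0 and |phi'(x)| <= omega.
    x = np.linspace(0.0, 1.0, nx, endpoint=False)
    f = np.exp(1j * (omega / (2 * np.pi)) * np.sin(2 * np.pi * x))
    ks = np.linspace(0.0, K, nk, endpoint=False)
    fhat = (f[None, :] * np.exp(1j * np.outer(ks, x))).sum(axis=1) / nx
    return float((np.abs(fhat) ** 2).sum() * (K / nk))

for omega in (5.0, 20.0, 80.0):
    print(omega, E(10.0, omega))   # for fixed K, E shrinks as omega grows past K
```

Running this at fixed $K$ shows $E(K)$ decaying as $\omega/K$ grows, consistent with the expectation that the lower bound should depend on $\omega$.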
Are you interested in large or small K? For large K, steepest descent should give an answer. Small K should just be continuity... – Helge Feb 9 '11 at 19:14
@Helge: The dimensionless parameter $\omega/K$ can be large but I am not sure if I understand what you mean by continuity. By the way, my own guess is $O[\exp[-c(\omega/K)]]$, which has an essential singularity in K. – Kaveh Khodjasteh Feb 9 '11 at 20:42
$\hat{f}$ is continuous in $k$. So $E(K) = K (|\hat{f}(0)|^2 + o(1))$ for $K$ small. – Helge Feb 10 '11 at 4:30
You can reduce o(K) by making f oscillate fast. No? – Kaveh Khodjasteh Feb 10 '11 at 16:42
More: $E(K)$ is continuous in K (all integrals are finite, all functions are smooth, etc) and its first derivative exists. However $|\hat{f}(0)|$ can be cancelled with even a simple linear $\phi(x)$. For this $\omega$ has to match $n 2\pi$ and thus if $\omega$ can't be smaller than $2\pi$. Also I am not sure that the higher derivatives of $E(K)$ behave nicely. Even if I could work out higher orders and focus on the integrals similar to $|\hat{f}(0)|$, perturbation theory will not easily give me a lower bound. – Kaveh Khodjasteh Feb 14 '11 at 16:27
It is an interesting problem which is related to some recent work of mine. The reason for why I started to work on similar problems is because connections to a problem of Ramachandra on Dirichlet polynomials, connections to the nordic school of Hardy classes of Dirichlet series (Hedenmalm, Saksman, Seip, Olsen, Olofsson, Lindqvist and others), as well as universality questions for zeta-functions and their properties on the line Re(s)=1.
While my papers are not quite finished, I have put two early preprints on my homepage, On a problem of Ramachandra and approximation of functions by Dirichlet polynomials with bounded coefficients and On generalized Hardy classes of Dirichlet series. I have talked about some of these problems at analytic number theory conferences in India. Like in your paper I have considered Dirichlet series (it should be possible to obtain something like Theorem 2.1 in your paper by my method also, although I have not stated a direct analogue in my paper).
Now your problem in the question is rather easy for small $\omega$, so we will from now on assume that $\omega>1/2$. In fact, if $\omega<1/2$, then $|\hat f(0)|>1/2$ and $\int_0^K |\hat f(t)|^2 dt \geq \min(1/10,K/10)$ (constants not chosen in an optimal way).
In my papers on Dirichlet series I have used a somewhat different method than you use in your paper, namely the Jensen inequality on the logarithmic integral in a half-plane. This method is applicable to the problem at hand. Lemma 7 in my paper “On generalized Hardy classes of Dirichlet series” can be used with $\sigma=0$ and $L(it)=\hat f(-t)$ and we obtain $$\frac D \pi \int_{-\infty}^\infty \frac {\log^- |\hat f(t)|} {D^2+t^2} dt \leq \frac D \pi \int_{-\infty}^\infty \frac {\log^+ |\hat f (t)|} {D^2+t^2} dt - \log |\hat f(iD)|.$$ For similar results see also Koosis - The logarithmic integral. (Remark Feb 16: The above inequality is an equality if the function is non-zero on a half-plane. The inequality follows from Jensen's formula on a disc by mapping the half-plane onto the disc by the standard holomorphic bijection where $iD$ goes to $0$.) The reason why we can do this is that with the definition of the Fourier transform in your question, $\hat f(z)$ will be a bounded analytic function in the half-plane Im$(z) \geq 0$.
Now in this case we also have that $\log^+ |\hat f (t)|=0$ since $|\hat f (t)| \leq 1$. Thus the inequality simplifies to $$\frac D \pi \int_{-\infty}^\infty \frac {\log^- |\hat f(t)|} {D^2+t^2} dt \leq - \log |\hat f(iD)|.$$ It is not too difficult to see that for $\omega>1/2$ $$|\hat f(i\omega)|= \left|\int_0^1 e^{i \phi(x)-\omega x} dx \right|>\frac {1} {10 \omega}.$$ (The constant $10$ not chosen optimally). Thus we can choose $D=\omega$ and it is clear that $$\int_0^K \log^- |\hat f(t)| dt < \frac \pi {\omega} \left({\omega^2+K^2} \right) \frac {\omega} \pi \int_{-\infty}^\infty \frac {\log^- |\hat f(t)|} {\omega^2+t^2} dt$$ From these estimates we see that $$\frac 1 K \int_0^K \log^- |\hat f(t)| dt< \frac {\pi(\omega^2+K^2)}{\omega K} \log (10 \omega).$$ Now we can use the Jensen inequality $$\exp\left(\frac 1 K \int_0^K \log |\hat f(t)| dt\right)< \sqrt{\frac 1 K \int_0^K |\hat f(t)|^2 dt}$$ We get the lower bound $$K \left(\frac 1 {10 \omega} \right)^{2\pi (\omega^2+K^2)/(K \omega)} \leq \int_0^K |\hat f(t)|^2 dt$$ for $\omega>1/2$. If $c>2 \pi$ and $\omega/K$ is sufficiently large this gives a lower bound $$\omega^{-c \omega/K} \leq \int_0^K |\hat f(t)|^2 dt$$ which is weaker than your expected $e^{-c \omega/K}$. At least we have an explicit lower bound.
Updated Feb 16: In the case where both $\omega$ and $K$ are large but still $\omega>K$, this can be improved by the following trick. Let $g$ be the convolution of $\hat f$ with a non-negative test function $\Phi(t/K)$, such that $\hat \Phi(0)>0$, where $\Phi$ has support on $[0,1/2]$. Then use Jensen's inequalities on the function $g$ instead of $\hat f$ as above. The advantage of this is that it then follows that $|\hat g(i\omega)| \gg K/\omega$, and thus we can get the lower bound (by using Jensen's inequality w.r.t. the $L^1$-norm instead of the $L^2$-norm) $$(\omega/K)^{-c \omega/K} \leq \frac 1 K \int_0^{K/2} |g(t)| dt$$ for some constant $c>0$. Since $$g(t)=\int_0^t \Phi((t-x)/K) \hat f(x) dx,$$ it is clear by the triangle inequality that $$\frac 1 K \int_0^{K/2} |g(t)| dt = \frac 1 K \int_0^{K/2} \left|\int_0^t \Phi((t-x)/K)\hat f(x)\, dx \right| dt \leq$$ $$\leq \frac 1 K \int_0^{K/2} |\hat f(x)| dx \int_0^{K/2} |\Phi(x/K)| dx \leq c \int_0^{K/2} |\hat f(x)| dx$$ The inequality
$$K^{-1} (\omega/K)^{-c \omega/K} \leq \int_0^{K/2} |\hat f(t)|^2 dt$$ follows by the Cauchy-Schwarz inequality for some constant $c>0$.
This formula involves just the dimensionless quantity $\omega/K$, as expected. Since the function $E(K)$ is increasing in $K$, it gives the lower bound $E(K) > C_0 K^{-1}>0$ for $1 \leq \omega \leq K$ for some absolute constant $C_0$.
Many thanks Johan. I think your answer will be marked as correct automatically before I can check its details but the functional form of the bound conforms to my expectation and makes me happy! – Kaveh Khodjasteh Feb 16 '11 at 14:16
Thanks for the update. The case where $\omega>K$ is indeed the relevant physical case. – Kaveh Khodjasteh Feb 17 '11 at 18:55
https://math.stackexchange.com/questions/2216989/finding-eigenvectors-to-eigenvalues-and-diagonalization | Finding eigenvectors to eigenvalues, and diagonalization
I just finished solving a problem on finding eigenvectors corresponding to eigenvalues, however, I'm not sure if it is correct. I was wondering if someone could check my work:
For the matrix $W = \begin{bmatrix} 1 & 2 \\ 3 & 2\\ \end{bmatrix}$, I must find the eigenvectors corresponding to the eigenvalues, as well as a diagonal matrix similar to W.
I was able to find that the eigenvalues were equal to $\lambda = 4, -1$. Then, I used the equation $(A - \lambda I)v = 0$ to solve for the vector.
When $\lambda = 4$, I set up the equation $\begin{bmatrix} 1 & 2 \\ 3 & 2\\ \end{bmatrix} - \begin{bmatrix} 4 & 0 \\ 0 & 4\\ \end{bmatrix}$ = $\begin{bmatrix} -3 & 2 \\ 3 & -2\\ \end{bmatrix}$, which gave me the eigenvector $\begin{bmatrix} 2\\ 3\\ \end{bmatrix}$.
For $\lambda = -1$, I did the exact same procedure and received the eigenvector $\begin{bmatrix} 1\\ -1\\ \end{bmatrix}$.
Did I do this part correctly? How do I find a diagonal matrix similar to $W$?
• Change the rest of the A's to W's as well! – NickD Apr 4 '17 at 2:37
• You have two distinct eigenvalues for a $2\times2$ matrix, so you can write down the similar diagonal matrix without further ado: it’s just a matrix with the eigenvalues along its diagonal. – amd Apr 4 '17 at 3:00
• thank you so much for your help. could please help me a last question I have here? math.stackexchange.com/questions/2217044/… – user400359 Apr 4 '17 at 3:26
We can use row operations to obtain a diagonal matrix similar to $W$:

$$W = \begin{bmatrix} 1 & 2 \\ 3 & 2 \end{bmatrix}.$$ Then $r_1-r_2=R_1$ gives $$\begin{bmatrix} -2 & 0 \\ 3 & 2 \end{bmatrix},$$ then $R_2=2r_2$ gives $$\begin{bmatrix} -2 & 0 \\ 6 & 4 \end{bmatrix},$$ now $R_2=r_2+3r_1$ gives $$\begin{bmatrix} -2 & 0 \\ 0 & 4 \end{bmatrix},$$ and $R_1=\frac{1}{2}r_1$ gives $$\begin{bmatrix} -1 & 0 \\ 0 & 4 \end{bmatrix},$$ which is in diagonal form, as required; as you can see, the diagonal entries are the eigenvalues you calculated.
• How do you know that the matrix you have arrived upon is similar to W? – Doug M Apr 4 '17 at 3:34
I think it is worth the exercise to verify that
$W\mathbf v = \lambda \mathbf v$
$W \begin {bmatrix} 2\\3 \end{bmatrix} = 4\begin {bmatrix} 2\\3 \end{bmatrix}$ and $W \begin {bmatrix} 1\\-1 \end{bmatrix} = -\begin {bmatrix} 1\\-1 \end{bmatrix}$
which it does...in both cases.
In which case:
$W\begin{bmatrix} \mathbf v_1&\mathbf v_2 \end{bmatrix} = \begin{bmatrix} \mathbf v_1&\mathbf v_2 \end{bmatrix}\begin{bmatrix} \lambda_1\\&\lambda_2\end{bmatrix}$
Let $P = \begin{bmatrix} \mathbf v_1&\mathbf v_2 \end{bmatrix}$ and $\Lambda = \begin{bmatrix} \lambda_1\\&\lambda_2\end{bmatrix}$
$WP = P\Lambda\\ P^{-1}WP = \Lambda$
$\Lambda$ is a diagonal matrix similar to $W$
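A quick NumPy check of $P^{-1}WP=\Lambda$ with the eigenvectors found above:

```python
import numpy as np

W = np.array([[1.0, 2.0],
              [3.0, 2.0]])
P = np.array([[2.0, 1.0],
              [3.0, -1.0]])          # columns are the eigenvectors found above
Lam = np.linalg.inv(P) @ W @ P       # P^{-1} W P

print(np.round(Lam, 10))             # diag(4, -1): the eigenvalues on the diagonal
```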
• Okay, so the part I have done is correct. How do I find a diagonal matrix similar to W ? – user400359 Apr 4 '17 at 2:35
• @stackofhay42 subtract the second row away from the first then take 3 lots of the first row away from the second? – user395952 Apr 4 '17 at 2:38
I have outlined the theory and the process. $\begin{bmatrix}0.2&0.2\\0.6&-0.4\end{bmatrix}W \begin{bmatrix}2&1\\3&-1\end{bmatrix}= \begin{bmatrix}4\\&-1\end{bmatrix}$ – Doug M Apr 4 '17 at 2:39
http://gmatclub.com/forum/equation-frac-x-2-frac-y-2-5-encloses-a-71096.html?fl=similar | Find all School-related info fast with the new School-Specific MBA Forum
It is currently 30 May 2016, 06:03
### GMAT Club Daily Prep
#### Thank you for using the timer - this advanced tool can estimate your performance and suggest more practice questions. We have subscribed you to Daily Prep Questions via email.
Customized
for You
we will pick new questions that match your level based on your Timer History
Track
every week, we’ll send you an estimated GMAT score based on your performance
Practice
Pays
we will pick new questions that match your level based on your Timer History
# Events & Promotions
###### Events & Promotions in June
Open Detailed Calendar
# Equation $|\frac{x}{2}| + |\frac{y}{2}| = 5$ encloses a certain region
Intern – 03 Oct 2008, 22:47
Equation $$|\frac{x}{2}| + |\frac{y}{2}| = 5$$ encloses a certain region on the coordinate plane. What is the area of this region?
A. 20
B. 50
C. 100
D. 200
E. 400
Manager – 03 Oct 2008, 23:11
you can ignore the absolute value sign since its distance (can't be -)
x/2+y/2 = 5
x+y = 10
if x=0 then y=10
if y=0 then x=10
xy = 10*10 = 100
Intern – 04 Oct 2008, 02:59
x can be 5 and y 5
x can be 7 and y 3
there are too many combinations of x and y.
there has to be a logical reason why you would choose 0 as one and 10 for the other.
Manager – 04 Oct 2008, 04:48
ast wrote:
[…]
in order to find the enclosed area you first have to find where {x,y} intersect the coordinate plane.
if x=0 then (0,10) and if y=0 (10,0)
then you can tell its 10*10 area.
your solution of {5,5} is true algebraically, but that isn't what you are being asked for.
Manager – 04 Oct 2008, 05:38
Greenberg wrote:
[…]
i think you all are missing something: the figure formed is symmetric about the x and y axes
so we get 4 triangles, one in each quadrant, that are congruent
area of each is 1/2(base * height)
=1/2(10*10)
=50
as we get 4 triangles total area=4(50)
=200
Manager – 04 Oct 2008, 08:54
rohit929 wrote:
[…]
I think you are correct
But the answer should be (E)
{10,-10} {-10,10} {-10,-10} {10,10}
10*10*4 = 400
Current Student – 04 Oct 2008, 08:58
the region formed is a square with sides 10*sqrt(2)...therefore area=200
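A quick numeric check of the 200 result (not part of the original thread, just an illustration): the graph of |x/2| + |y/2| = 5 meets the axes at (10,0), (0,10), (−10,0), (0,−10), and the shoelace formula gives the area of the square with those vertices.

```python
def shoelace_area(vertices):
    """Area of a simple polygon from its vertices taken in order."""
    n = len(vertices)
    total = 0.0
    for i in range(n):
        x1, y1 = vertices[i]
        x2, y2 = vertices[(i + 1) % n]
        total += x1 * y2 - x2 * y1
    return abs(total) / 2.0

print(shoelace_area([(10, 0), (0, 10), (-10, 0), (0, -10)]))  # 200.0
```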
Manager – 07 Oct 2008, 01:57
hope this will help
Attachment: triangle.JPG
Manager – 07 Oct 2008, 02:15
rohit929 wrote:
hope this will help
Wow ! thanks
Yes you are all correct - 200
Manager – 07 Oct 2008, 02:32
Greenberg wrote:
[…]
i think you went wrong because you ignored the absolute value.....
to be honest i think members like you makes this forum more interesting, thanks
https://wiki.math.uwaterloo.ca/statwiki/index.php?title=imageNet_Classification_with_Deep_Convolutional_Neural_Networks&oldid=26829

ImageNet Classification with Deep Convolutional Neural Networks
Introduction
In this paper, the authors trained a large, deep neural network to classify the 1.2 million high-resolution images in the ImageNet LSVRC-2010 contest into 1000 different classes. To learn about thousands of objects from millions of images, a convolutional neural network (CNN) is used because of its large learning capacity, its relatively few connections and parameters, and its outstanding performance on image classification.
Moreover, current GPU provides a powerful tool to facilitate the training of interestingly-large CNNs. Thus, they trained one of the largest convolutional neural networks to date on the datasets of ILSVRC-2010 and ILSVRC-2012 and achieved the best results ever reported on these datasets by the time this paper was written.
The code of their work is available here<ref> "High-performance C++/CUDA implementation of convolutional neural networks" </ref>.
Dataset
ImageNet Large-Scale Visual Recognition Challenge (ILSVRC) has roughly 1.2 million labeled high-resolution training images, 50 thousand validation images, and 150 thousand testing images over 1000 categories.
In this paper, the images in this dataset are down-sampled to a fixed resolution of 256 x 256. The only image pre-processing they used is subtracting the mean activity over the training set from each pixel.
Architecture
ReLU Nonlinearity
They use Rectified Linear Units (ReLUs)<ref> Nair V, Hinton G E. Rectified linear units improve restricted boltzmann machines. Proceedings of the 27th International Conference on Machine Learning (ICML-10). 2010: 807-814. </ref> as the nonlinearity function; these work several times faster than equivalents with standard saturating neurons. Better performance can thus be achieved by reducing the training time per epoch and by training on larger datasets to prevent overfitting. Deep convolutional neural networks with ReLUs train several times faster than their equivalents with tanh units. The following figure illustrates this: it shows the number of iterations required to reach 25% training error on the CIFAR-10 dataset for a particular four-layer convolutional network.
A four-layer convolutional neural network with ReLUs (solid line) reaches a 25% training error rate on CIFAR-10 six times faster than an equivalent network with tanh neurons (dashed line). The learning rates for each network were chosen independently to make training as fast as possible. No regularization of any kind was employed. The magnitude of the effect demonstrated here varies with network architecture, but networks with ReLUs consistently learn several times faster than equivalents with saturating neurons.
Training on Multiple GPUs
They spread the net across two GPUs by putting half of the kernels (or neurons) on each GPU and letting GPUs communicate only in certain layers. Choosing the pattern of connectivity could be a problem for cross-validation, so they tune the amount of communication precisely until it is an acceptable fraction of the amount of computation.
Local Response Normalization
ReLUs have the desirable property that they do not require input normalization to prevent them from saturating. However, they find that a local response normalization scheme after applying the ReLU nonlinearity can reduce their top-1 and top-5 error rates by 1.4% and 1.2%.
The response normalization is given by the expression
$b_{x,y}^{i}=a_{x,y}^{i}\Big/\left( k+\alpha \sum_{j=\max\left( 0,\,i-n/2 \right)}^{\min\left( N-1,\,i+n/2 \right)}\left( a_{x,y}^{j} \right)^{2} \right)^{\beta }$
where the sum runs over n “adjacent” kernel maps at the same spatial position. This response normalization implements a form of lateral inhibition inspired by the type found in real neurons, creating competition for big activities amongst neuron outputs computed using different kernels.
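As an illustration (not the paper's GPU code), the expression above can be sketched in NumPy. The function name and the (kernels, height, width) tensor layout are my own choices; the hyperparameter defaults k = 2, n = 5, α = 10⁻⁴, β = 0.75 are the values reported in the paper.

```python
import numpy as np

def local_response_norm(a, k=2.0, n=5, alpha=1e-4, beta=0.75):
    """Normalize activations a (shape: kernels x height x width)
    across n adjacent kernel maps at each spatial position."""
    N = a.shape[0]
    b = np.empty_like(a)
    for i in range(N):
        lo = max(0, i - n // 2)
        hi = min(N - 1, i + n // 2)
        denom = (k + alpha * np.sum(a[lo:hi + 1] ** 2, axis=0)) ** beta
        b[i] = a[i] / denom
    return b
```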
Overlapping Pooling
Unlike traditional non-overlapping pooling, they use overlapping pooling throughout their network, with pooling window size z = 3 and stride s = 2. This scheme reduces their top-1 and top-5 error rates by 0.4% and 0.3% and makes the network more difficult to overfit.
Overall Architecture
As shown in the figure above, the net contains eight layers with 60 million parameters; the first five are convolutional and the remaining three are fully connected layers. The output of the last layer is fed to a 1000-way softmax. Their network maximizes the average across training cases of the log-probability of the correct label under the prediction distribution.
Response-normalization layers follow the first and second convolutional layers. Max-pooling layers follow both response-normalization layers as well as the fifth convolutional layer. The ReLU non-linearity is applied to the output of every convolutional and fully-connected layer.
Reducing overfitting
Data Augmentation
The easiest and most common method to reduce overfitting on image data is to artificially enlarge the dataset using label-preserving transformations. In this paper, the transformed images are generated on CPU while GPU is training and do not need to be stored on disk.
The first form of data augmentation consists of generating image translations and horizontal reflections. They extract random 224 x 224 patches (and their horizontal reflections) from the 256 x 256 images and train the network on these extracted patches. They also perform principal component analysis (PCA) on the set of RGB pixel values. To each training image, multiples of the found principal components are added, with magnitudes proportional to the corresponding eigenvalues times a random variable drawn from a Gaussian with mean zero and standard deviation 0.1.
Therefore, to each RGB image pixel $I_{xy}=\left[ I_{xy}^{R},I_{xy}^{G},I_{xy}^{B} \right]^{T}$ the quantity $\left[ \mathbf{p}_{1},\mathbf{p}_{2},\mathbf{p}_{3} \right]\left[ \alpha_{1}\lambda_{1},\alpha_{2}\lambda_{2},\alpha_{3}\lambda_{3} \right]^{T}$ is added, where $\mathbf{p}_{i}$ and $\lambda_{i}$ are the $i$-th eigenvector and eigenvalue of the $3\times 3$ covariance matrix of RGB pixel values, and $\alpha_{i}$ is the random variable described above.
This scheme helps to capture the object identity invariant with respect to its intensity and color, which reduces the top-1 error rate by over 1%.
Dropout
The “dropout” technique is implemented in the first two fully-connected layers by setting to zero the output of each hidden neuron with probability 0.5. This scheme roughly doubles the number of iterations required to converge. However, it forces the network to learn more robust features that are useful in conjunction with many different random subsets of the other neurons.
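A minimal sketch of the idea (this uses the now-common "inverted dropout" convention of scaling by 1/(1 − p) at training time; the paper instead multiplied the outputs by 0.5 at test time, which has the same expected effect):

```python
import numpy as np

rng = np.random.default_rng(0)

def dropout(h, p=0.5, training=True):
    """Zero each activation with probability p; rescale survivors so the
    expected activation is unchanged. At test time, pass through."""
    if not training:
        return h
    mask = rng.random(h.shape) >= p
    return h * mask / (1.0 - p)
```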
Details of learning
They trained the network using stochastic gradient descent with a batch size of 128 examples, momentum of 0.9, and weight decay of 0.0005. The update rule for weight w was
$v_{i+1}:=0.9\cdot v_{i}-0.0005\cdot \epsilon \cdot w_{i}-\epsilon \cdot \left \langle \frac{\partial L}{\partial w}|_{w_{i}} \right \rangle_{D_{i}}$
$w_{i+1}:=w_{i}+v_{i+1}$
where $v$ is the momentum variable and $\epsilon$ is the learning rate, which is adjusted manually throughout training. The weights in each layer are initialized from a zero-mean Gaussian distribution with standard deviation 0.01. The biases in the second, fourth, and fifth convolutional layers and in the fully-connected hidden layers are initialized to 1, while those in the remaining layers are set to 0.
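The update rule translates directly into code. As a sketch (the function name is mine, and the batch-averaged gradient $\left \langle \partial L/\partial w \right \rangle_{D_{i}}$ is assumed to be precomputed and passed in as `grad`):

```python
import numpy as np

def sgd_momentum_step(w, v, grad, lr, momentum=0.9, weight_decay=0.0005):
    """One step of the rule above: v <- 0.9*v - 0.0005*lr*w - lr*grad,
    then w <- w + v."""
    v = momentum * v - weight_decay * lr * w - lr * grad
    return w + v, v

# one step on a toy scalar "weight"
w, v = np.array(1.0), np.array(0.0)
w, v = sgd_momentum_step(w, v, grad=np.array(0.2), lr=0.01)
```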
Results
For ILSVRC-2010 dataset, their network achieves top-1 and top-5 test set error rates of 37.5% and 17.0%, which was the state of the art at that time.
For LSVRC-2012 dataset, the CNN described in this paper achieves a top-5 error rate of 18.2%. Averaging the predictions of five similar CNNs gives an error rate of 16.4%.
Discussion
1. It is notable that the network’s performance degrades if a single convolutional layer is removed, so the depth of the network is important for achieving these results.
2. Their experiments suggest that the results can be improved simply by waiting for faster GPUs and bigger datasets to become available.
<references />
http://mathhelpforum.com/calculus/22669-integral.html

# Math Help - integral ???
1. ## integral ???
1. A company estimates that the marginal cost (in dollars per item) of producing x items is 1.93 - 0.002x. If the cost of producing one item is 558, find the cost of producing 100 items.
2. Use the Midpoint Rule with the given value of n to approximate the integral. Round the answer to four decimal places.
3. Consider the given integral.
(a) Find an approximation to the integral G using a Riemann sum with right endpoints and n = 8.
(b) If f is integrable on [a, b], the following equation is correct.
Use this to evaluate the integral G.
4.Evaluate the integral.
5. Evaluate the integral.
2. Originally Posted by calchurtsmybrain
1. A company estimates that the marginal cost (in dollars per item) of producing x items is 1.93 - 0.002x. If the cost of producing one item is 558, find the cost of producing 100 items.
The additional cost $c(m)$ of the $m$-th item produced ( $m>1$) is $1.93-0.002 m$.
Thus the total cost of producing $m$ items is:
$C(m) = \sum_{i=1}^m c(i) = c(1) + \sum_{i=2}^m (1.93-0.002 i)$
You are given that $c(1)=558$.
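A quick numeric check of this summation (not part of the original thread; the helper name is mine):

```python
# Total cost of producing m items: the first item costs 558, and each
# further item i (i = 2..m) adds the marginal cost 1.93 - 0.002*i dollars.

def total_cost(m, first_item_cost=558.0):
    return first_item_cost + sum(1.93 - 0.002 * i for i in range(2, m + 1))

print(round(total_cost(100), 2))  # 738.97
```

Note that treating the marginal cost as a continuous rate and integrating it from x = 1 to x = 100 instead gives approximately 739.07, slightly different from the discrete sum.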
RonL
https://howlingpixel.com/i-en/Claude_Cohen-Tannoudji

# Claude Cohen-Tannoudji
Claude Cohen-Tannoudji (born 1 April 1933) is a French physicist. He shared the 1997 Nobel Prize in Physics with Steven Chu and William Daniel Phillips for research in methods of laser cooling and trapping atoms. Currently he is still an active researcher, working at the École Normale Supérieure in Paris.[2]
Claude Cohen-Tannoudji
Cohen-Tannoudji in 2007
Born: 1 April 1933 (age 86)
Nationality: French
Alma mater: École Normale Supérieure; University of Paris
Spouse: Jacqueline Veyrat (m. 1958)[1]
Children: 3
Awards: Young Medal and Prize (1979); Lilienfeld Prize (1992); Matteucci Medal (1994); Harvey Prize (1996); Nobel Prize in Physics (1997)
Fields: Physics
Institutions: Collège de France; University of Paris
Doctoral students: Serge Haroche, Jean Dalibard
## Early life
Cohen-Tannoudji was born in Constantine, French Algeria, to Algerian Jewish parents Abraham Cohen-Tannoudji and Sarah Sebbah.[3][4][5][6] When describing his origins Cohen-Tannoudji said: "My family, originally from Tangier, settled in Tunisia and then in Algeria in the 16th century after having fled Spain during the Inquisition. In fact, our name, Cohen-Tannoudji, means simply the Cohen family from Tangiers. The Algerian Jews obtained the French citizenship in 1870 after Algeria became a French colony in 1830."[7]
After finishing secondary school in Algiers in 1953, Cohen-Tannoudji left for Paris to attend the École Normale Supérieure.[7] His professors included Henri Cartan, Laurent Schwartz, and Alfred Kastler.[7]
In 1958 he married Jacqueline Veyrat, a high school teacher, with whom he has three children. His studies were interrupted when he was conscripted into the army, in which he served for 28 months (longer than usual because of the Algerian War). In 1960 he resumed working toward his doctorate, which he obtained from the École Normale Supérieure under the supervision of Alfred Kastler and Jean Brossel at the end of 1962.[2]
## Career
Claude Cohen-Tannoudji in 2010
After his dissertation, he started teaching quantum mechanics at the University of Paris. From 1964-67, he was an associate professor at the university and from 1967-1973 he was a full professor.[2] His lecture notes were the basis of the popular textbook, Mécanique quantique, which he wrote with two of his colleagues. He also continued his research work on atom-photon interactions, and his research team developed the model of the dressed atom.
In 1973, he became a professor at the Collège de France.[2] In the early 1980s, he started to lecture on radiative forces on atoms in laser light fields. He also formed a laboratory there with Alain Aspect, Christophe Salomon, and Jean Dalibard to study laser cooling and trapping. He even took a statistical approach to laser cooling with the use of stable distributions.[8]
His work there eventually led to the Nobel Prize in physics in 1997 "for the development of methods to cool and trap atoms with laser light",[9] shared with Steven Chu and William Daniel Phillips. Cohen-Tannoudji was the first physics Nobel prize winner born in an Arab country.
In 2015, Cohen-Tannoudji signed the Mainau Declaration 2015 on Climate Change on the final day of the 65th Lindau Nobel Laureate Meeting. The declaration was signed by a total of 76 Nobel Laureates and handed to then-President of the French Republic, François Hollande, as part of the successful COP21 climate summit in Paris.[10]
## Awards
Claude Cohen-Tannoudji, UNESCO, 2011
## Selected works
The main works of Cohen-Tannoudji are given in his homepage.[12]
• Claude Cohen-Tannoudji, Bernard Diu, and Frank Laloë. 1973. Mécanique quantique. 2 vols. Collection Enseignement des Sciences. Paris. ISBN 2-7056-5733-9 (Quantum Mechanics. Vol. I & II, 1991. Wiley, New-York, ISBN 0-471-16433-X & ISBN 0471164356).
• Claude Cohen-Tannoudji, Gilbert Grynberg and Jacques Dupont-Roc. Introduction à l'électrodynamique quantique. (Photons and Atoms: Introduction to Quantum Electrodynamics. 1997. Wiley. ISBN 0471184330)
• Claude Cohen-Tannoudji, Gilbert Grynberg and Jacques Dupont-Roc, Processus d'interaction photons-atomes. (Atoms-Photon Interactions : Basic Processes and Applications. 1992. Wiley, New-York. ISBN 0471625566)
• Claude Cohen-Tannoudji. 2004. Atoms in Electromagnetic fields. 2nd Edition. World Scientific. Collection of his most important papers.
## References
1. ^ Notable twentieth century scientists: Supplement - Kristine M. Krapp - Google Books. Retrieved 2013-03-09 – via Google Books.
2. ^ a b c d "Claude Cohen-Tannoudji". www.phys.ens.fr. Retrieved 2017-12-21.
3. ^ "Claude Cohen-Tannoudji - French physicist". Retrieved 4 October 2018.
4. ^ "Archived copy". Archived from the original on 2015-02-13. Retrieved 2015-02-13.
5. ^ Francis Leroy (13 Mar 2003). A Century of Nobel Prize Recipients: Chemistry, Physics, and Medicine. p. 218.
6. ^ Arun Agarwal (15 Nov 2005). Nobel Prize Winners in Physics. p. 298.
7. ^ a b c Claude Cohen-Tannoudji. "Claude Cohen-Tannoudji - Autobiographical". NobelPrize.org. Retrieved 13 February 2015.
8. ^ Bardou, F., Bouchaud, J. P., Aspect, A., & Cohen-Tannoudji, C. (2001). Non-ergodic cooling: subrecoil laser cooling and Lévy statistics.
9. ^ "The Nobel Prize in Physics 1997". nobelprize.org. The Nobel Foundation. 1997. Retrieved 14 December 2014.
10. ^ "Mainau Declaration". www.mainaudeclaration.org. Retrieved 2018-01-11.
11. ^ "Honorary doctorates - Uppsala University, Sweden". www.uu.se. Retrieved 4 October 2018.
12. ^ "Claude Cohen-Tannoudji" (in French). École normale supérieure. Retrieved 14 December 2014. | 2019-04-19 01:21:10 | {"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 24, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.7515610456466675, "perplexity": 10989.903032681916}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2019-18/segments/1555578526923.39/warc/CC-MAIN-20190419001419-20190419023419-00429.warc.gz"} |
http://www.zentralblatt-math.org/zmath/en/advanced/?q=an:1044.42006
Zbl 1044.42006
Zhang, Chuanyi; Yao, Huili
Converse problems of Fourier expansion and their applications.
(English)
[J] Nonlinear Anal., Theory Methods Appl. 56, No. 5, A, 761-779 (2004). ISSN 0362-546X
Summary: Let $f \in \mathcal{C}(\mathbb{R}, H)$ have a countable frequency set $Freq(f)$ and satisfy Parseval's equality. We show that if $f$ satisfies one of the following conditions: (a) $f$ is uniformly continuous and $Freq(f)$ has a unique limit point at infinity; (b) the indefinite integral of $f$ is Lipschitz and $Freq(f)$ converges fast in some sense; (c) in the case of a Euclidean space $H$, all the coefficients are positive; then $f$ is pseudo-almost-periodic. An example is given to show that the conclusion cannot be improved. The results are applied to Riesz–Fischer theory and to optimal control theory.
MSC 2000:
*42A75 Periodic functions and generalizations
43A60 Almost periodic functions on groups, etc.
49N20 Periodic optimization
Keywords: pseudo-almost-periodic functions; Fourier series; converse problem; Riesz-Fischer theory
https://www.statease.com/blog/

## Four Tips for Graduate Student’s Research Projects
posted by Shari on May 22, 2019
Graduate students are frequently expected to use design of experiments (DOE) in their thesis project, often without much DOE background or support. This results in some classic mistakes.
1. Designs that were popular in the 1970’s-1990’s (before computers were widely available) have been replaced with more sophisticated alternatives. A common mistake – using a Plackett-Burman (PB) design for either screening purposes, or to gain process understanding for a system that is highly likely to have interactions. PB designs are badly aliased resolution III, thus any interactions present in the system will cause many of the main effect estimates to be biased. This increases the internal noise of the design and can easily cause misleading and inaccurate results. Better designs for screening are regular two-level factorials at resolution IV or minimum-run (MR) designs. For details on PB, regular and MR designs, read DOE Simplified.
2. Reducing the number of replicated points will likely result in losing important information. A common mistake – reducing the number of center points in a response surface design down to one. The replicated center points provide an estimate of pure error, which is necessary to calculate the lack of fit statistic. Perhaps even more importantly, they reduce the standard error of prediction in the middle of the design space. Eliminating the replication may mean that results in the middle of the design space (where the optimum is likely to be) have more prediction error than results at the edges of the design space!
3. If you plan to use DOE software to analyze the results, then use the same software at the start to create the design. A common mistake – designing the experiment based on traditional engineering practices, rather than on statistical best practices. The software very likely has recommended defaults that will produce a better design than what you can plan on your own.
4. Plan your experimentation budget to include confirmation runs after the DOE has been run and analyzed. A common mistake – assuming that the DOE results will be perfectly correct! In the real world, a process is not improved unless the results can be proven. It is necessary to return to the process and test the optimum settings to verify the results.
The number one thing to remember is this: Using previous student’s theses as a basis for yours, means that you may be repeating their mistakes and propagating poor practices! Don’t be afraid to forge a new path and showcase your talent for using state-of-the-art statistical designs and best practices.
## Greg's DOE Adventure: Important Statistical Concepts behind DOE
posted by Greg on May 3, 2019
If you read my previous post, you will remember that design of experiments (DOE) is a systematic method used to find cause and effect. That systematic method includes a lot of (frightening music here!) statistics.
[I’ll be honest here. I was a biology major in college. I was forced to take a statistics course or two. I didn’t really understand why I had to take it. I also didn’t understand what was being taught. I know a lot of others who didn’t understand it as well. But it’s now starting to come into focus.]
Before getting into the concepts of DOE, we must get into the basic concepts of statistics (as they relate to DOE).
Basic Statistical Concepts:
Variability
In an experiment or process, you have inputs you control, the output you measure, and uncontrollable factors that influence the process (things like humidity). These uncontrollable factors (along with other things like sampling differences and measurement error) are what lead to variation in your results.
Mean/Average
We all pretty much know what this is right? Add up all your scores, divide by the number of scores, and you have the average score.
Normal distribution
Also known as a bell curve due to its shape. The peak of the curve is the average, and then it tails off to the left and right.
Variance
Variance is a measure of the variability in a system (see above). Let’s say you have a bunch of data points for an experiment. You can find the average of those points (above). For each data point, subtract that average (so you see how far away each piece of data is from the average). Then square that. Why? That way you get rid of the negative numbers; we only want positive numbers, because the next step is to add them all up, and you want a sum of all the differences without negative numbers cancelling things out. Now divide that sum by one less than the number of data points you started with (n − 1; this gives the sample variance). You are essentially taking an average of the squares of the differences from the mean.
That is your variance. Summarized by the following equation:
$$s^2 = \frac{\Sigma(Y_i - \bar{Y})^2}{(n - 1)}$$
In this equation:
Yi is a data point
Ȳ is the average of all the data points
n is the number of data points
Standard Deviation
Take the square root of the variance. The variance is the average of the squares of the differences from the mean. Now you are taking the square root of that number to get back to the original units. One item I just found out: even though standard deviations are in the original units, you can’t add and subtract them. You have to keep everything as variances (s²), do your math, then convert back.
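The two definitions translate directly into a few lines of Python (a sketch; the function names are mine — note the n − 1 divisor from the formula above):

```python
def sample_variance(data):
    """Average of squared differences from the mean, with an n - 1 divisor."""
    n = len(data)
    mean = sum(data) / n
    return sum((y - mean) ** 2 for y in data) / (n - 1)

def sample_std(data):
    """Square root of the variance, back in the original units."""
    return sample_variance(data) ** 0.5

scores = [4, 8, 6, 5, 3, 10]
print(sample_variance(scores))  # 6.8
```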
## Greg's DOE Adventure: What is Design of Experiments (DOE)?
posted by Greg on April 19, 2019
Hi there. I’m Greg. I’m starting a trip. This is an educational journey through the concept of design of experiments (DOE). I’m doing this to better understand the company I work for (Stat-Ease), the product we create (Design-Expert® software), and the people we sell it to (industrial experimenters). I will be learning as much as I can on this topic, then I’ll write about it. So, hopefully, you can learn along with me. If you have any comments or questions, please feel free to comment at the bottom.
So, off we go. First things first.
What exactly is design of experiments (DOE)?
When I first decided to do this, I went to Wikipedia to see what they said about DOE. No help there.
“The design of experiments (DOE, DOX, or experimental design) is the design of any task that aims to describe or explain the variation of information under conditions that are hypothesized to reflect the variation.” –Wikipedia
The what now?
That’s not what I would call a clearly conveyed message. After some more research, I have compiled this ‘definition’ of DOE:
Design of experiments (DOE), at its core, is a systematic method used to find cause-and-effect relationships. So, as you are running a process, DOE determines how changes in the inputs to that process change the output.
Obviously, that works for me since I wrote it. But does it work for you?
So, conceptually I’m off and running. But why do we need ‘designed experiments’? After all, isn’t all experimentation about combining some inputs, measuring the outputs, and looking at what happened?
The key words above are ‘systematic method’. It turns out that if we stick to statistical concepts, we can get a lot more out of our experiments. That is what I’m here for: understanding these ‘concepts’ within this ‘systematic method’, and how they give us an advantage.
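As a toy illustration of finding cause-and-effect from inputs to outputs, consider a tiny two-factor, two-level “experiment” in Python (the factors, levels, and process function are all invented for illustration; this is not Design-Expert output):

```python
from itertools import product

def process(temperature, time):
    # Made-up "process": the output responds to both inputs.
    return 2.0 * temperature + 5.0 * time

# Full factorial design: run every combination of low/high settings.
levels = {"temperature": (20.0, 40.0), "time": (1.0, 3.0)}
runs = [dict(zip(levels, combo)) for combo in product(*levels.values())]
for run in runs:
    run["output"] = process(**run)

def main_effect(runs, factor, low, high):
    # Average output at the factor's high level minus at its low level.
    hi = [r["output"] for r in runs if r[factor] == high]
    lo = [r["output"] for r in runs if r[factor] == low]
    return sum(hi) / len(hi) - sum(lo) / len(lo)

print(main_effect(runs, "temperature", 20.0, 40.0))  # 40.0
print(main_effect(runs, "time", 1.0, 3.0))           # 10.0
```

Raising temperature by 20 moves the output by 40, and raising time by 2 moves it by 10 — exactly the cause-and-effect relationships baked into the toy process, recovered systematically from the runs.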
Well, off I go on my journey!
## Correlation vs. causality
posted by Greg on April 5, 2019
Recently, Stat-Ease Founding Principal, Pat Whitcomb, was interviewed to get his thoughts on design of experiments (DOE) and industrial analytics. It was very interesting, especially to this relative newbie to DOE. One passage really jumped out at me:
“Industrial analytics is all about getting meaning from data. Data is speaking and analytics is the listening device, but you need a hearing aid to distinguish correlation from causality. According to Pat Whitcomb, design of experiments (DOE) is exactly that. ‘Even though you have tons of data, you still have unanswered questions. You need to find the drivers, and then use them to advance the process in the desired direction. You need to be able to see what is truly important and what is not,’ says Pat Whitcomb, Stat-Ease founder and DOE expert. ‘Correlations between data may lead you to assume something and lead you on a wrong path. Design of experiments is about testing if a controlled change of input makes a difference in output. The method allows you to ask questions of your process and get a scientific answer. Having established a specific causality, you have a perfect point to use data, modelling and analytics to improve, secure and optimize the process.’"
It was the line ‘distinguish correlation from causality’ that got me thinking. It’s a powerful difference, one that most people don’t understand.
As I was mulling over this topic, I got into my car to drive home and played one of the podcasts I listen to regularly. It happened to be an interview with psychologist Dr. Fjola Helgadottir and her research into social media and mental health. As you may know, there has been a lot of attention paid to depression and social media use. When she brought up the concept of correlation and causality it naturally caught my attention. (And no, let’s not get into Jung’s concept of Synchronicity and whether this was a meaningful coincidence or not.)
The interesting thing that Dr. Helgadottir brought up was the correlation between social media and depression. That correlation is misunderstood by the general population as causality. She went on to say that recent research has not shown any causality between the two but has shown that people who are depressed are more likely to use social media more than other people. So there is a correlation between social media and depression, but one does not cause the other.
So, back to Pat’s comments. The data is speaking. We all need a listening device to tell us what it’s saying. For those of you in the world of industrial experimentation, experimental design can be that device that differentiates the correlations from the causality.
## Design-Expert Favorite Feature: Sharing the Magic of the Model!
posted by Shari on March 29, 2019
The situation: You have successfully run an experiment and analyzed the data. The results include a prediction equation with a high predicted R-squared that will be useful for many purposes. How can you share this with colleagues?
The solution: Design-Expert® software has a little-known but useful “Copy Equation” function that allows you to export the prediction equation to MS Excel so that others can use it for future work, without needing a copy of Design-Expert software. The advantage of using this function is that it brings in all the essential significant digits, including ones not showing on your screen. This accuracy is critical to getting correct predictive values.
1. Go to the ANOVA tab for the response. Find the Actual Equation, located in the lower right corner by default.
2. Right-click on the equation and select Copy Equation.
3. Open Excel, position your cursor where you want the equation, and use Ctrl-V to paste the formula into Excel. (Pasting with Ctrl-V keeps the spreadsheet functionality working.)
4. As shown in the figure (coloration added within Excel), the blue cells allow the user to enter actual factor settings. These values are used in the prediction equation, with the result showing in the yellow cell.
You can also view this process in this video.
https://root-forum.cern.ch/t/problem-in-entring-function-in-argument/18015 | Problem in entering function in argument
I wrote a macro, CompareAdvanced.C (attached), which draws a histogram. I am calling it from another macro, Plot.C.
The function that I am calling is
compareQuantities("genMuonPhi", "", 10, -4, 4, "", 6, "pp_WWJJ_phantom.root");
I need to replace "genMuonPhi" with a function (of branches), where genMuonPhi is a branch of the tree and this string is what goes into histo->Draw().
So, my problem is: how can I draw a function (of branches) using Draw()?
Plot.C (1.2 KB)
The first parameter of your “compareQuantities” is just a “std::string”, so you can put any function of leaves there, e.g. “SomeLeaf1 + SomeLeaf2”, or “SomeLeaf1 * SomeLeaf2”, … see the TTree::Draw for some “varexp” examples.
Does that mean I cannot use a function such as deltaPhi as the first argument of Draw()? Here deltaPhi is:
double deltaPhi(double phi1, double phi2)
{
  double deltaphi = fabs(phi1 - phi2);
  if (deltaphi > TMath::Pi()) deltaphi = 2 * TMath::Pi() - deltaphi;  // wrap into [0, pi]
  if (deltaphi > 2 * TMath::Pi()) cout << "Delta Phi = " << deltaphi << endl;  // sanity check
  return deltaphi;
}
where phi1 and phi2 are branches of the tree,
for example:
tree->Draw("deltaPhi>>histo");
If the TFormula is not sufficient for you then search for “MakeProxy” in the TTree::Draw method description.
Sorry Coyote. I tried, but I didn’t understand how to use MakeProxy. Please let me know how I can use it. Thanks.
http://ncatlab.org/nlab/show/nPOV | nLab nPOV
Wikipedia enforces its entries to adopt an NPOV – a neutral point of view . This is appropriate for an encyclopedia.
However, the nLab is not Wikipedia, nor is it an encyclopedia, although it does aspire to provide a useful reference in many areas (among its other purposes). In particular, the $n$Lab has a particular point of view, which we may call the $n$POV or the n- categorical point of view .
To some extent the $n$POV is just the observation that category theory and higher category theory (hence, in particular, homotopy theory) have a plethora of useful applications.
Idea
Around the nLab it is believed that category theory and higher category theory provide a point of view on Mathematics, Physics and Philosophy which is a valuable unifying point of view for the understanding of the concepts involved.
So at the $n$Lab, we don’t care so much about being neutral. Although we don’t want to offend people unnecessarily, we are also not ashamed about writing from this particular point of view. There are certainly other valid points of view on mathematics, but describing them and being neutral towards them is not the purpose of the $n$Lab. Rather, the $n$Lab starts from the premise that category theory and higher category theory are a true and useful point of view, and one of its aims is to expose this point of view generally and in a multitude of examples, and thereby accumulate evidence for it.
If you feel skeptical about the $n$-point of view, you may want to ignore the $n$Lab. Or you may want to take its content as a contribution to a discussion on what is behind the claim that category theory is the right language to describe the world, or at least the world of mathematical ideas.
As recalled in parts on the page on category theory, category theorists have early on, beginning in the 1960s, proclaimed the advantages of category theory over other points of view. It has been observed that this claim, or at least the way it has been put forward, has contributed to a certain alienation of category theory in parts of the mathematical community. That may be true and is understandable. But since a claim is not false just because it is put forward with possibly unpleasant boldness, all evidence for the claim deserves to be collected and exposed. We hope the $n$Lab to play a role in this effort.
In particular, there have been dramatic developments since the 1960s. Back then promoting category theory may have been as visionary as the invention of complex numbers was in the 16th century. But just as the early rejections of the complex numbers appear strangely out of place from today’s perspective, where their ubiquity proves their reality to the point that it is hard to imagine how life must have been before their conception, so developments of category theory and its applications in the last years have in many areas brought it to the point that rejecting its prevalence amounts (we believe) to rejecting the obvious and ubiquitous. But also mathematics as a whole has drastically grown since then, and while category theory has become an entirely obvious ingredient in areas such as homotopy theory, homological algebra, algebraic geometry and even fields like topological quantum field theory, its similar role to be played in many other areas has often not found wide recognition yet. But this is gradually changing.
The role of the $n$POV
Practitioners of category theory have often attempted to express the striking power of category theory (or general conceptual methods), sometimes through aphorism, sometimes through metaphor. Early on, Peter Freyd wrote
Perhaps the purpose of categorical algebra is to show that which is trivial is trivially trivial.
This can be taken to mean that one thing category theory does is help make the softer bits seem utterly natural and obvious, so as to quickly get to the heart of the matter, isolating the hard nuggets, which one may then attack with abandon. This is an invaluable service category theory performs for mathematics; therefore, category theory is plain good pragmatics.
However, it is also possible to take it a step beyond the pragmatic attitude, and see category theory (and now higher category theory) as exemplifying a style for doing even hard mathematics, as in the style for which Grothendieck is renowned. Paraphrasing from Colin McLarty’s excellent essay, let us regard the aforementioned pragmatic attitude as leading up to the hammer-and-chisel principle: if you think of a theorem to be proved as a nut to be opened, so as to reach “the nourishing flesh protected by the shell” (Grothendieck), then one thing to do is “put the cutting edge of the chisel against the shell and strike hard. If needed, begin again at many different points until the shell cracks – and you are satisfied.” Grothendieck points to Serre as a master of this technique. He then says:
I can illustrate the second approach with the same image of a nut to be opened. The first analogy which came to my mind is of immersing the nut in some softening liquid, and why not simply water? From time to time you rub so the liquid penetrates better, and otherwise you let time pass. The shell becomes more flexible through weeks and months – when the time is ripe, hand pressure is enough, the shell opens like a perfectly ripened avocado!
A different image came to me a few weeks ago. The unknown thing to be known appeared to me as some stretch of earth or hard marl, resisting penetration… the sea advances insensibly in silence, nothing seems to happen, nothing moves, the water is so far off you hardly hear it… yet it finally surrounds the resistant substance. (Translated from the French by McLarty)
This arresting metaphor of “la mer qui monte” (“the rising sea”), which over time changes the very form of the resistant substance, is very much in the style of Grothendieck himself. McLarty quotes Deligne as saying that a typical Grothendieck proof consists of a long series of trivial steps where “nothing seems to happen, and yet at the end a highly non-trivial theorem is there.”
Examples
For an (incomplete) list of examples of topics for which the $n$POV has proven to be a useful perspective see
Revised on May 1, 2014 03:59:41 by Todd Trimble (67.81.95.215)
http://docs.sunpy.org/en/stable/generated/gallery/mask_disk.html | This example shows how to mask off emission from the disk.
from __future__ import print_function, division
import numpy as np
import numpy.ma as ma
import matplotlib.pyplot as plt
import astropy.units as u
import sunpy.map
from sunpy.data.sample import AIA_171_IMAGE
We first create the Map using the sample data.
aia = sunpy.map.Map(AIA_171_IMAGE)
Next we build two arrays which include all of the x and y pixel indices. We must not forget to add the correct units, because we will next pass them into a SunPy function which requires them.
x, y = np.meshgrid(*[np.arange(v.value) for v in aia.dimensions]) * u.pixel
Now we can convert this to helioprojective coordinates and create a new array which contains the normalized radial position for each pixel.
hpc_coords = aia.pixel_to_data(x, y)
r = np.sqrt(hpc_coords.Tx ** 2 + hpc_coords.Ty ** 2) / aia.rsun_obs
Finally, we create a mask where all values at or below one solar radius (the disk) are masked. We also make a slight change to the colormap so that masked values are shown as black instead of the default white.
mask = ma.masked_less_equal(r, 1)
palette = aia.plot_settings['cmap']
palette.set_bad('black')
Now we create a new map containing our mask and plot the result using our modified colormap
scaled_map = sunpy.map.Map(aia.data, aia.meta, mask=mask.mask)
fig = plt.figure()
plt.subplot(projection=scaled_map)
scaled_map.plot(cmap=palette)
scaled_map.draw_limb()
plt.show()
Total running time of the script: ( 0 minutes 0.972 seconds)
https://doc.visionappster.com/latest/tools/EndIterateTool.html | # End iterate
Collects results produced by an iterated part of a processing graph to a matrix or a table.
## Inputs
sync
An input that synchronizes the result output. This input is required so that the tool knows when every element related to the one that triggered an iteration has been received. One accumulated matrix/table will be sent for each object in this input. Usually, this input is connected to the sync output of the corresponding iteration. It is however possible to synchronize the output to any output that precedes it in a synchronized processing pipeline, even over nested iterations. The value of the sync input is ignored.
dynamicInputCount
The number of dynamic inputs.
discardEmptyElements
If true, empty elements (tables and matrices with zero rows) will not be counted as blocks and put to the blockSize output matrices. The number of rows in the blockSize matrix may thus be different from the number of input elements. If this flag is false, the number of rows in the blockSize output will be equal to the number of input elements, but the matrix may contain zeros. Note that the result matrix will be the same in both cases.
reservedElements
The estimated number of input elements. This is an optimization parameter that makes it possible to avoid unnecessary reallocation of memory. The tool estimates the number of rows required in the output by multiplying the number of rows in the first received element by reservedElements.
outputType
The type of the result outputs.
elementX
A matrix, table, or an element that will be put into the accumulated result. X ranges from 0 to dynamicInputCount - 1. Any number of elements may be provided for a single sync object. This input must come from a part of the processing graph that is iterated.
## Outputs
resultX
The accumulated matrix or table, one for each object in the sync input. The type of the output depends on the value of the outputType parameter and on the corresponding element input: if outputType is Matrix, the output will be either an integer matrix or a real matrix depending on the type of the first element received. If outputType is Table, the output will always be a table. If no elements were received during an iteration, either an empty integer matrix or an empty table will be sent depending on outputType.
blockSizeX
A matrix that specifies the number of rows each input element occupies in the output matrix/table. This is useful if the input elements have a variable number of rows.
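To make the relationship between the result and blockSize outputs concrete, here is a small Python sketch of the accumulation semantics described above (an illustration only; this is not VisionAppster code):

```python
def accumulate(elements, discard_empty=False):
    """Stack input elements (lists of rows) and record each one's row count."""
    result, block_sizes = [], []
    for element in elements:
        if discard_empty and len(element) == 0:
            continue  # empty elements are not counted as blocks
        result.extend(element)
        block_sizes.append(len(element))
    return result, block_sizes

elements = [[[1, 2]], [], [[3, 4], [5, 6]]]   # three input elements, one empty

# discardEmptyElements = False: one blockSize row per input element.
rows, sizes = accumulate(elements)
print(sizes)   # [1, 0, 2]

# discardEmptyElements = True: the empty element is dropped from blockSize.
rows, sizes = accumulate(elements, discard_empty=True)
print(sizes)   # [1, 2]
```

Note that the accumulated rows are identical in both cases; only the blockSize bookkeeping changes, just as the documentation states.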
Output types (the enumerators accepted by the outputType parameter):
Auto
Output type will be automatically deduced from input. If the input is an integer or an integer matrix, the output type will be IntegerMatrix. If the input is a real number or a real-valued matrix, the output type will be RealMatrix. Otherwise, it will be Table. In Auto mode, different types may be written to each result output.
IntegerMatrix
Output will be an integer-valued matrix.
RealMatrix
Output will be a real-valued matrix.
Table
Output will be a VariantTable.
https://dkist.nso.edu/node/1543 | # The Advanced Technology Solar Telescope Site Survey Sky Brightness Monitor
Journal Article
### Authors:
Haosheng Lin; Matthew J. Penn
### Source:
PASP, Volume 116, Issue 821, p.652 (2004)
The Advanced Technology Solar Telescope (ATST) will be a 4 m aperture off-axis telescope with advanced high-resolution and low scattered light capabilities for the observation of the solar photosphere and corona. The site characteristics that are critical to the success of the ATST coronal observations are the sky brightness, the precipitable water vapor content, and the number and size distributions of the dust particles. Therefore, part of the ATST site survey effort is to obtain measurements of these atmospheric properties at all the potential ATST sites. The ATST site survey Sky Brightness Monitor (SBM) is a new instrument specifically developed for this task. The SBM is a modified externally occulted coronagraph capable of imaging the solar disk and sky simultaneously. The ability to image the Sun and the sky simultaneously greatly simplifies the calibration of the sky-brightness measurements. The SBM has a very simple optical configuration that makes it a compact and low-maintenance instrument. The SBM is sensitive to sky brightness below $1\times10^{-6}$ of disk-center intensity, with a field of view extending from 4 to 8 $R_{solar}$. It measures the solar disk and sky brightness at three continuum bandpasses located at 450, 530, and 890 nm. A fourth bandpass is centered at the 940 nm water vapor absorption band. With measurements of disk and sky brightness at these four wavelengths, site characteristics such as extinctions, aerosol content, and precipitable water vapor content can be derived. This paper documents the design, specifications, calibration procedures, and performance of the SBM.
https://astronomy.stackexchange.com/questions/7950/hulse-taylor-binary-pulsar-what-is-the-rate-of-mass-energy-loss-from-the-sourc | # Hulse-Taylor binary pulsar - what is the rate of mass/energy loss from the source?
Following on from an earlier question about the very interesting Hulse-Taylor binary pulsar. The high-frequency (radio) beam from the spinning pulsar sweeps across Earth about 17 times per second. The total power of the gravitational radiation (waves) emitted by the binary system is calculated from GTR to be $7.35 × 10^{24}$ watts at present (declining as the orbital period and radius diminish).
Weisberg & Taylor, 2004 report that the beam has "a flux density of about 1 mJy at 1400 MHz." (mJy = millijansky, thanks Stan Liou). Assuming that this flux is uniform across a conical beam with a cross-sectional radius of 5 arc degrees, that the source is 21,000 light years from Earth, and that it has a mass of 1.44 solar masses: how much energy is emitted (per second) from the source in the beam? Also (if it is possible to estimate a reasonable range of values), what might be the rate of steady mass loss from such a source?
• I don't know about the question being "recent" . . . But I'm assuming that the Wikipedia page on gravitational waves (and associated energy loss) was unsatisfying? Nov 12 '14 at 22:29
• @HDE226868. I have enough info on graviational wave energy loss. It is the other forms of mass/energy loss that I am seeking data on, e.g. thru the beam or other possible (steady, non-cataclysmic) processes. Nov 12 '14 at 22:33
• @HDE226868 I rewrote my previous comment having previously misread your question. Nov 12 '14 at 22:36
The observable pulsar is a weak radio source with a flux density of about $1\,\mathrm{mJy}$ at $1400\,\mathrm{MHz}$. ... Our most recent data have been gathered with the Wideband Arecibo Pulsar Processors (“WAPPs”), which for PSR B1913+16 achieve $13\,\mathrm{\mu s}$ time-of-arrival measurements in each of four $100\,\mathrm{MHz}$ bands, using $5$-minute integrations.
Those are millijanskys, aka milli flux units, so that the flux density is about $$1\,\mathrm{mJy} = 10^{-29}\,\frac{\mathrm{W}}{\mathrm{m}^2\cdot\mathrm{Hz}}\text{,}$$ and hence the detected irradiance is on the order of $10^{-27}\,\mathrm{W}/\mathrm{m}^2$. Since we're about to make rather uncertain assumptions anyway, I won't bother worrying about doing more than an order-of-magnitude calculation.
A spherical cap has surface area of $A = 2\pi Rh$, and here $R = 21\,\mathrm{kly}$. Now, I'm unclear what cross-sectional radius means if measured as an angle, but I take it to mean that the opening half-angle of the cone is $\frac{\vartheta}{2} = 5^\circ$, in which case $$A = 2\pi R^2\left(1-\cos\frac{\vartheta}{2}\right) \sim 10^{39}\,\mathrm{m}^2\text{.}$$ Thus, the power would be $P\sim 10^{12}\,\mathrm{W}$, but note that in addition to the assumptions you've just listed, we're only talking about a particular radio band.
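The order-of-magnitude arithmetic in that answer is easy to check numerically. Here is a quick Python sketch using the question's assumed distance and beam angle, and the ~1e-27 W/m^2 irradiance estimated above:

```python
import math

LY_IN_M = 9.461e15                 # metres per light year
R = 21_000 * LY_IN_M               # assumed distance to the pulsar, m
half_angle = math.radians(5.0)     # assumed opening half-angle of the beam cone
irradiance = 1e-27                 # detected irradiance at Earth, W/m^2

# Spherical-cap area swept by the beam at distance R:
# A = 2*pi*R^2*(1 - cos(theta/2)).
area = 2 * math.pi * R**2 * (1 - math.cos(half_angle))

# Power radiated into the beam in this radio band.
power = irradiance * area

print(f"A ~ 10^{round(math.log10(area))} m^2")   # A ~ 10^39 m^2
print(f"P ~ 10^{round(math.log10(power))} W")    # P ~ 10^12 W
```

Both figures land on the same orders of magnitude quoted in the answer, so the radio-beam power is roughly thirteen orders of magnitude below the gravitational-wave luminosity mentioned in the question.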
http://vladikk.com/2012/10/15/learning-from-mistakes-leaky-abstractions/ | # Learning From Mistakes: Leaky Abstractions
On the project I’m working on, I had a requirement to store files in and read files from the file system. The files also had to be accessible from the web.
Having a gut feeling that the infrastructure may change as the business will grow, I decided to hide operations on the file system behind an interface:
public interface IFilesStorage {
string StoreFile(Stream stream, string fileName);
Stream GetFile(string virtualPath);
string GetFileUrl(string virtualPath);
string GetFilePath(string virtualPath);
}
As it looks, if someday I need to switch from the file system to another storage mechanism, I’ll be able to get the job done by writing another implementation of the interface. Right? Wrong! The requirement did come in: I had to store the files in S3. And only then I realised that IFilesStorage is a leaky abstraction.
The problem lies in the last method, GetFilePath. This method leaks out the implementation detail, that each file has a path which can be used to access the file from the file system. Of course, other storage mechanisms can’t provide such functionality. This bummer makes switching storage mechanism nearly impossible.
From a SOLID point of view, this issue can be seen as a violation of the Liskov substitution principle: the file system based implementation of the interface cannot be replaced by another implementation, as it will break the correctness of the application.
The solution was to drop the problematic method and get rid of the dependency on file paths in the system.
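The post’s interface is C#, but the shape of the fix is language-agnostic. Below is a sketch of the repaired abstraction in Python (all names are invented for illustration, including the example URL): only operations that every backend can honour survive, so any implementation can be substituted for any other.

```python
import io
from abc import ABC, abstractmethod
from typing import BinaryIO

class FileStorage(ABC):
    """Only the operations every storage backend can support: no GetFilePath."""

    @abstractmethod
    def store_file(self, stream: BinaryIO, file_name: str) -> str: ...

    @abstractmethod
    def get_file(self, virtual_path: str) -> bytes: ...

    @abstractmethod
    def get_file_url(self, virtual_path: str) -> str: ...

class InMemoryStorage(FileStorage):
    """Stand-in backend: note that no file-system path is ever exposed."""

    def __init__(self) -> None:
        self._blobs = {}

    def store_file(self, stream: BinaryIO, file_name: str) -> str:
        self._blobs[file_name] = stream.read()
        return file_name

    def get_file(self, virtual_path: str) -> bytes:
        return self._blobs[virtual_path]

    def get_file_url(self, virtual_path: str) -> str:
        return f"https://files.example.invalid/{virtual_path}"

storage: FileStorage = InMemoryStorage()
path = storage.store_file(io.BytesIO(b"hello"), "greeting.txt")
print(storage.get_file_url(path))  # https://files.example.invalid/greeting.txt
```

A local-disk or S3-backed implementation of the same contract can now replace this one without breaking callers, which is exactly what the Liskov substitution principle asks for.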
Lesson learned: having abstractions will make the code more testable, but if you’re aiming for supple design, make sure your abstractions don’t leak out details about the underlying architecture.
If you liked this post, please share it with your friends and colleagues: | 2018-03-23 22:16:33 | {"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 1, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.22452427446842194, "perplexity": 1366.8258878669674}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2018-13/segments/1521257649095.35/warc/CC-MAIN-20180323220107-20180324000107-00405.warc.gz"} |
https://academy.vertabelo.com/course/ms-sql-recursive-queries/final-quiz/introduction/the-project-table | Introduction
2. The Project table
## Instruction
Good. Before we start, let's discuss the tables you're going to work with. They are used by a company to track their projects and the time employees spend on a given project.
## Exercise
Select all the information from the table Project.
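If you want to try the exercise outside the course platform, a throwaway SQLite session works; the schema follows the lesson's description, and the sample rows are made up:

```python
import sqlite3

# Build a throwaway Project table matching the lesson's description:
# each project has an Id, a ClientId, and a Name (sample data invented).
conn = sqlite3.connect(":memory:")
conn.execute(
    "CREATE TABLE Project (Id INTEGER PRIMARY KEY, ClientId INTEGER, Name TEXT)"
)
conn.executemany(
    "INSERT INTO Project (Id, ClientId, Name) VALUES (?, ?, ?)",
    [(1, 10, "Website redesign"), (2, 11, "Mobile app")],
)

# The exercise itself: select all the information from the table Project.
rows = conn.execute("SELECT * FROM Project").fetchall()
for row in rows:
    print(row)
```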
The table is quite simple: each project has an Id, a ClientId, and a Name. | 2018-12-10 02:22:46 | {"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 1, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.3128255307674408, "perplexity": 1420.6991666280387}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2018-51/segments/1544376823236.2/warc/CC-MAIN-20181210013115-20181210034615-00574.warc.gz"} |
http://physicstasks.eu/2113/electrical-resistance-of-wires-of-different-cross-sections | ## Electrical Resistance of Wires of Different Cross Sections
The electrical resistance of a wire with a circular cross-section of diameter 0.5 mm is 2 Ω. The length of the wire is 8 m. How long should a wire of the same material, but with twice the radius, be so that its resistance is also 2 Ω? Determine the resistivity of the material of which the wire is made.
• #### Hint
Look up the relation for calculating the resistance of a homogeneous wire based on its parameters (such as its length, cross-sectional area and material).
• #### Analysis – analytical solution
From the relation $$R\,=\,\rho\,\frac{l}{A}$$, where ρ is the resistivity of the material, we see that the resistance R of the wire is directly proportional to its length l and inversely proportional to its cross-sectional area A. It means that if we increase the cross-section of the wire, its resistance decreases in inverse proportion, and if we increase the length of the wire, the resistance increases in direct proportion.
It implies that in order to maintain the same resistance of the wire, we have to lengthen the wire by the same factor by which we have increased its cross-section. Because the increase of the cross-section decreases the resistance of the wire, we have to compensate for this decrease with a corresponding increase in its length.
The cross-sectional area of a wire is directly proportional to the square of its radius (A = πr²), and hence to the square of its diameter as well. Therefore, if we increase the diameter of the wire two times, its cross-sectional area will increase four times. In order for the wire made of the same material to have the same resistance as the original “narrower” wire, its length has to be four times greater than the length of the original wire.
Thus, if the original wire is 8 m long, then the “wider” wire described above will have the length of 32 m.
• #### Calculating length of the wire
Electrical resistance of a wire can be calculated by the following formula:
$R\,=\,\rho\,\frac{l}{A}\,,$
where ρ is the resistivity of the material, l length of the wire and A stands for its cross-sectional area.
For two different wires of the same resistance it holds true:
$R_1\,=\,R_2.$
Therefrom:
$\rho_1\,\frac{l_1}{A_1}\,=\,\rho_2\,\frac{l_2}{A_2}\,.$
If both wires are made of the same material, then they have the same electrical resistivity:
$\rho_1\,=\,\rho_2\,.$
Then we can express the length of one wire through the parameters of the other wire:
$\frac{l_1}{A_1}\,=\,\frac{l_2}{A_2}$ $l_2\,=\,\frac{A_2}{A_1}\,l_1\,.\tag{1}$
The cross-sectional area of the wire is the area of a circle of radius r:
$A\,=\,\pi r^2\,=\,\pi \left(\frac{d}{2}\right)^2\,,$
where d is the diameter of the wire.
So we calculate the cross-sectional area of the wires by the following formulae:
$A_1\,=\,\pi \left(\frac{d_1}{2}\right)^2\,;\hspace{15px}A_2\,=\,\pi \left(\frac{d_2}{2}\right)^2.$
After that we substitute the cross-sectional area for these expressions in relation (1):
$l_2\,=\,\frac{\pi \left(\frac{d_2}{2}\right)^2}{\pi \left(\frac{d_1}{2}\right)^2}\,l_1$ $l_2\,=\,\left(\frac{d_2}{d_1}\right)^2\,l_1.$
As we know from the task assignment, the diameter of the second wire is twice as big as the diameter of the first wire ($d_2 = 2d_1$). Thus:
$l_2\,=\,\left(\frac{2d_1}{d_1}\right)^2\,l_1$ $l_2\,=\,2^2\,l_1$ $l_2\,=\,4\,l_1.$
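As a quick numerical cross-check of both quantities asked for in the assignment, using only the relation R = ρl/A and the given values:

```python
import math

# Given values from the task assignment
d1 = 0.5e-3   # diameter of the first wire [m]
l1 = 8.0      # length of the first wire [m]
R = 2.0       # resistance of both wires [ohm]

# Length of the wider wire: l2 = (d2/d1)^2 * l1 with d2 = 2*d1
d2 = 2 * d1
l2 = (d2 / d1) ** 2 * l1

# Resistivity from R = rho * l / A  =>  rho = R * A / l
A1 = math.pi * (d1 / 2) ** 2
rho = R * A1 / l1

print(l2)   # 32.0 (metres)
print(rho)  # ~4.9e-8 (ohm-metres)
```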
• #### Resistivity – solution
First, we express the resistivity of the material from
$R\,=\,\rho\,\frac{l}{A}\,,$
where R is the resistance of a wire, l length of the wire and A stands for the cross-sectional area of the wire.
Thus
$\rho\,=\,\frac{RA}{l}\,.$
The cross-sectional area of the wire A is the area of a circle of radius r, or of diameter d = 2r:
$A\,=\,\pi r^2\,=\,\pi \left(\frac{d}{2}\right)^2.$
Then we substitute the expressed cross-sectional area into the previous relation to obtain the final formula for calculating the resistivity of the wire:
$\rho\,=\,\frac{RA}{l}\,=\,\frac{R\pi \left(\frac{d}{2}\right)^2}{l}$ $\rho\,=\,\frac{\pi R d^2}{4l}\,.$
As we know from the task assignment, the diameter of the wire is of 0.5 mm, its length is 8 m and the resistance is 2 Ω. Thus:
d = 0.5 mm = 0.5·10⁻³ m, l = 8 m, R = 2 Ω
Substitution for the given values:
$\rho\,=\,\frac{\pi\cdot2\cdot\left(0{.}5{\cdot}10^{-3}\right)^2}{4{\cdot}8}\,\mathrm{\Omega m}\,=\,\frac{\pi\cdot2{\cdot} 0{.}5^2{\cdot} 10^{-6}}{32}\,\mathrm{\Omega m}\,\dot{=}\,4{.}9{\cdot}10^{-8}\,\mathrm{\Omega m}\,.$ | 2019-09-19 21:10:35 | {"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 1, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.912442147731781, "perplexity": 219.02508357514307}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2019-39/segments/1568514573735.34/warc/CC-MAIN-20190919204548-20190919230548-00036.warc.gz"} |
http://math.stackexchange.com/questions/23545/isomorphism-in-coordinate-ring | # Isomorphism in coordinate ring
Let $x_{1},x_{2},...,x_{m}$ be elements of $\mathbb{A}^{n}$, where $\mathbb{A}^{n}$ is affine $n$-space over an algebraically closed field $k$. Now define $X=\{x_{1},x_{2},...,x_{m}\}$. Why is the coordinate ring $A(X)$ isomorphic to $\oplus_{j=1}^{m} k = k^{m}$?
The coordinate ring $A(X)$ is defined to be the quotient of $k[z_1,\ldots,z_n]$ (I'm using $z_i$ because you are using $x_i$ to denote the points of affine space) by the ideal $I(X)$. The ideal $I(X)$ is the ideal of all elements of $k[z_1,\ldots,z_n]$ that are zero at $X$.
By the Nullstellensatz, the ideal corresponding to a single point $x_i = (x_{i1},\ldots,x_{in})$ is given by $(z_1-x_{i1}, z_2-x_{i2},\ldots,z_n-x_{in})$.
The ideal of a union is the intersection of the ideals; so you are trying to mod out by $$\bigcap_{i=1}^m (z_1-x_{i1},\ldots,z_n-x_{in}).$$
But the ideals $(z_1-x_{i1},\ldots,z_n-x_{in})$ are maximal ideals, since $k[z_1,\ldots,z_n]/(z_1-x_{i1},\ldots,z_n-x_{in}) \cong k$. If they are all distinct, then they are pairwise comaximal, hence pairwise coprime. By the Chinese Remainder Theorem, we know that if $\mathfrak{p}_1,\ldots,\mathfrak{p}_k$ are pairwise coprime ideals in $R$, then $$\frac{R}{\mathfrak{p}_1\cap\cdots\cap\mathfrak{p}_k} \cong \frac{R}{\mathfrak{p}_1}\oplus\cdots\oplus\frac{R}{\mathfrak{p}_k}.$$
So we have that $$A(X) = \frac{k[z_1,\ldots,z_n]}{\cap_{i=1}^m(z_1-x_{i1},\ldots,z_n-x_{in})} \cong \mathop{\bigoplus}_{i=1}^m \frac{k[z_1,\ldots,z_n]}{(z_1-x_{i1},\ldots,z_n-x_{in})} \cong \mathop{\bigoplus}_{i=1}^m k.$$
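Concretely, the isomorphism is the evaluation map sending (the class of) a polynomial to its tuple of values at the points. Its surjectivity can be seen directly via Lagrange interpolation; here is a sketch in one variable over $\mathbb{Q}$ (a stand-in field, for illustration only):

```python
from fractions import Fraction

def lagrange_interpolate(points, values):
    """Return a function evaluating the Lagrange interpolating polynomial
    through (points[i], values[i]). This exhibits surjectivity of the
    evaluation map k[z] -> k^m for m distinct points."""
    def p(z):
        total = Fraction(0)
        for i, (xi, yi) in enumerate(zip(points, values)):
            term = Fraction(yi)
            for j, xj in enumerate(points):
                if j != i:
                    term *= Fraction(z - xj, xi - xj)
            total += term
        return total
    return p

# Three distinct points in A^1 and an arbitrary target tuple in k^3:
xs = [0, 1, 3]
target = [5, -2, 7]
p = lagrange_interpolate(xs, target)
print([int(p(x)) for x in xs])  # [5, -2, 7]
```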
Thanks a lot! What can I say? I'm new with this stuff and your explanation helps a lot to really understand this material. – user6495 Feb 24 '11 at 17:02
For each $i$, $A(\{x_i\}) = k[z_1,\ldots,z_n]/I(x_i) \cong k$, so each $I(x_i)$ is a maximal ideal of $k[z_1,\ldots,z_n]$. I assume the points $x_1,\ldots,x_m$ are distinct, from which it follows easily that the ideals $I(x_1),\ldots,I(x_m)$ are distinct maximal ideals. Thus they are pairwise comaximal and the Chinese Remainder Theorem -- see e.g. $\S 4.3$ of these notes -- applies. I leave it to you to check that it gives the conclusion you want.
Thank you very much! I will print your notes, thanks for sharing them. – user6495 Feb 24 '11 at 17:00 | 2016-02-08 06:23:09 | {"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 1, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.9590402841567993, "perplexity": 90.93069509257512}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2016-07/segments/1454701152959.66/warc/CC-MAIN-20160205193912-00246-ip-10-236-182-209.ec2.internal.warc.gz"} |
http://www.ams.org/mathscinet-getitem?mr=501370 | MathSciNet bibliographic data MR501370 14J10 (49F10) Horikawa, Eiji Algebraic surfaces of general type with small \$c\sp{2}\sb{1}\$$c\sp{2}\sb{1}$. III. Invent. Math. 47 (1978), no. 3, 209–248. Article
For users without a MathSciNet license , Relay Station allows linking from MR numbers in online mathematical literature directly to electronic journals and original articles. Subscribers receive the added value of full MathSciNet reviews. | 2016-10-25 03:31:23 | {"extraction_info": {"found_math": true, "script_math_tex": 1, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.9985554218292236, "perplexity": 5739.819584645593}, "config": {"markdown_headings": false, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2016-44/segments/1476988719877.27/warc/CC-MAIN-20161020183839-00308-ip-10-171-6-4.ec2.internal.warc.gz"} |
https://collegephysicsanswers.com/openstax-solutions/how-many-239textrmpu-nuclei-must-fission-produce-200-kt-yield-assuming-200-mev-0 |
Question
(a) How many ${}^{239}\textrm{Pu}$ nuclei must fission to produce a 20.0-kT yield, assuming 200 MeV per fission? (b) What is the mass of this much ${}^{239}\textrm{Pu}$?
1. $2.62\times 10^{24}\textrm{ nuclei}$
2. $1.04\textrm{ kg}$
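A quick numeric check of both parts, with standard constants assumed (1 kT TNT ≈ 4.184×10¹² J, 1 MeV ≈ 1.602×10⁻¹³ J, molar mass of ²³⁹Pu ≈ 239 g/mol):

```python
# Constants (assumed standard values, not taken from the problem text)
KT_TO_J = 4.184e12     # joules per kiloton of TNT
MEV_TO_J = 1.602e-13   # joules per MeV
N_A = 6.022e23         # Avogadro's number

yield_J = 20.0 * KT_TO_J              # 20.0-kT yield in joules
per_fission_J = 200.0 * MEV_TO_J      # 200 MeV released per fission

n_fissions = yield_J / per_fission_J  # part (a): ~2.6e24 nuclei
mass_kg = n_fissions / N_A * 239e-3   # part (b): moles times 239 g/mol

print(f"{n_fissions:.3g} nuclei, {mass_kg:.3g} kg")
```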
Solution Video | 2021-11-29 15:47:22 | {"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.9526471495628357, "perplexity": 5746.453625921693}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2021-49/segments/1637964358774.44/warc/CC-MAIN-20211129134323-20211129164323-00587.warc.gz"} |
https://wiki.eeros.org/getting_started/install/manually | # Real-Time Robotics Framework
## Getting the Sources Manually
Setting up EEROS and the necessary libraries manually can be quite cumbersome. Do this only if you have some experience with development on Linux, are not using the scripts mentioned before, and know what you are doing. Clone the eeros source repository:
$ git clone https://github.com/eeros-project/eeros-framework.git eeros-framework
Checkout a stable version of EEROS:
$ cd eeros-framework
$ git checkout v1.2.0
In addition to the eeros library you need to install libraries for hardware access together with the appropriate eeros wrapper libraries, see Installing Hardware Libraries.
Continue with Compile Manually. | 2021-04-21 17:13:23 | {"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.5191401839256287, "perplexity": 6223.8747364716455}, "config": {"markdown_headings": true, "markdown_code": false, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2021-17/segments/1618039546945.85/warc/CC-MAIN-20210421161025-20210421191025-00481.warc.gz"} |
https://math.stackexchange.com/questions/1204829/indexes-the-sequence-meaning-in-the-definition-of-a-subsequence | # “Indexes the sequence” meaning in the definition of a subsequence
Let $(a_n)$ be a sequence of real numbers, and let $n_1<n_2< n_3 <n_4 <n_5 <···$ be an increasing sequence of natural numbers. Then the sequence $a_{n_1},a_{n_2},a_{n_3},a_{n_4} ,···$ is called a subsequence of $(a_n)$ and is denoted by $(a_{n_j} )$, where $j ∈\mathbb{N}$ indexes the subsequence.
What exactly is the meaning of "indexes the subsequence" ?
• By that j, you mean that you are only working with every j-th number instead of all of them. – Atvin Mar 24 '15 at 19:00
• Good question (can't upvote now, casted too many votes already). It means that they are good identifiers (in the programming languages sense of this word) for the subsequence. Indexes can be the natural numbers, the real numbers, ... . You have a series of values, and the indexes ("unique names") of the values are now given by natural numbers. For instance, sometimes you have a sequence with an uncountable number of elements. Then you use $j \in \mathbb{R}$ as indexes. You could even have used unique "words" to index them. It all doesn't matter. As long as all elements get a unique name. – Pedro Mar 24 '15 at 19:09
• You use indexes to give unique IDs to things. In computer science this is something that is often done. You need to give a unique ID to every record in a database table. – Pedro Mar 24 '15 at 19:09
The original sequence is, technically, a function from $\Bbb Z^+$ to $\Bbb R$. Specifically, it’s the function that sends $n\in\Bbb Z^+$ to the term $a_n$ of the sequence; call that function $a$, so that $a(n)=a_n$. To form the subsequence we have another function, which I’ll call $\varphi$, this time from $\Bbb Z^+$ to $\Bbb Z^+$; $\varphi$ is strictly increasing, and $\varphi(j)=n_j$ for each $j\in\Bbb Z^+$. Technically speaking, $\varphi$ is the sequence $\langle n_1,n_2,n_3,\ldots\rangle$.
In these terms the subsequence $\langle a_{n_1},a_{n_2},a_{n_3},\ldots\rangle$ of $a$ is just a composite function: it’s the composition $a\circ\varphi:\Bbb Z^+\to\Bbb R$, since for each $j\in\Bbb Z^+$ we have
$$(a\circ\varphi)(j)=a\big(\varphi(j)\big)=a(n_j)=a_{n_j}\;.$$
Saying that $j\in\Bbb Z^+$ indexes this subsequence is saying that we’re treating the subsequence as a function from $\Bbb Z^+$ to $\Bbb R$ and identifying each term of the subsequence by the $j\in\Bbb Z^+$ that is sent to it by the function $a\circ\varphi$.
Suppose instead that we let $M=\{n_j:j\in\Bbb Z^+\}$ and define a function $\psi:M\to\Bbb R$ by $\psi(m)=a_m$ for each $m\in M$. Each $m\in M$ is $n_j$ for some $j\in\Bbb Z^+$, so each $\psi(m)$ is one of the terms $a_{n_j}$. If we think of $M$ in its natural order as a set of positive integers, we can see that since $M=\{n_1,n_2,n_3,\ldots\}$, $\psi$ is the same subsequence $\langle a_{n_1},a_{n_2},a_{n_3},\ldots\rangle$ of $a$, but this time indexed directly by the set $M$. That is, this time we’ve used the integers in $M$ directly instead of first indexing $M$ by the positive integers.
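The composition viewpoint is easy to sketch in code; here $a_n = 1/n$ is an arbitrary example sequence and $\varphi(j)=j^2$ the strictly increasing index map:

```python
# A sequence is a function a: Z+ -> R; a subsequence is the composition
# a ∘ φ for a strictly increasing φ: Z+ -> Z+.

def a(n):
    return 1.0 / n        # example sequence a_n = 1/n

def phi(j):
    return j * j          # strictly increasing: picks indices 1, 4, 9, 16, ...

def subsequence(j):
    return a(phi(j))      # (a ∘ φ)(j) = a_{n_j}

print([subsequence(j) for j in range(1, 5)])  # terms a_1, a_4, a_9, a_16
```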
To take a concrete example, suppose that $n_j=j^2$ for each $j\in\Bbb Z^+$, so that $M=\{j^2:j\in\Bbb Z^+\}$. If we think of the subsequence as $\langle a_1,a_4,a_9,a_{16},\ldots\rangle$, we’re indexing it directly by $M$. If instead we think of it as $\langle a_{1^2},a_{2^2},a_{3^2},a_{4^2},\ldots\rangle$, we’re thinking of it as indexed by $\Bbb Z^+$, via the function $\varphi(j)=j^2$. | 2021-03-04 03:19:46 | {"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 1, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.9115806818008423, "perplexity": 153.35049724566855}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.3, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2021-10/segments/1614178368431.60/warc/CC-MAIN-20210304021339-20210304051339-00522.warc.gz"} |
http://dynet.readthedocs.io/en/latest/unorthodox.html | # Unorthodox Design¶
There are a couple design decisions about DyNet that are different from the way things are implemented in other libraries, or different from the way you might expect things to be implemented. The items below are a list of these unorthodox design decisions, which you should read to avoid being surprised. We also try to give some justification for these decisions (although we realize that this is not the only right way to do things).
## Sparse Updates¶
By default, DyNet parameter optimizers perform sparse updates over LookupParameters. This means that if you have a LookupParameters object, use a certain subset of indices, then perform a parameter update, the optimizer will loop over the used subset, and not perform any updates over the unused values. This can improve efficiency in some cases: e.g. if you have embeddings for a vocabulary of 100,000 words and you only use 5 of them in a particular update, this will avoid doing updates over all 100,000. However, there are two things to be careful of. First, this means that some update rules such as ones using momentum such as MomentumSGDTrainer and AdamTrainer are not strictly correct (these could be made correct with some effort, but this would complicate the programming interface, which we have opted against). Also, on GPUs, because large operations are relatively cheap, it can sometimes be faster to just perform a single operation over all of the parameters, as opposed to multiple small operations. In this case, you can set the sparse_updates_enabled variable of your Trainer to false, and DyNet will perform a standard dense update, which is guaranteed to be exactly correct, and potentially faster on GPU.
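As a toy illustration of the idea (illustration only, not DyNet's actual implementation), a sparse SGD step touches only the rows of the lookup table that were used in the update:

```python
# Each row of `table` is one embedding vector.
table = [[1.0, 1.0], [2.0, 2.0], [3.0, 3.0], [4.0, 4.0]]

def sparse_sgd_update(table, grads, used_rows, lr):
    """Apply an SGD step only to the rows looked up in this step;
    all other rows are skipped entirely."""
    for r in used_rows:
        table[r] = [w - lr * g for w, g in zip(table[r], grads[r])]

# Pretend this step only looked up rows 0 and 2 out of 4:
grads = {0: [0.5, 0.5], 2: [1.0, 1.0]}
sparse_sgd_update(table, grads, used_rows=[0, 2], lr=0.1)
print(table)  # rows 1 and 3 are untouched
```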
## Weight Decay¶
As described in the Command Line Options, weight decay is implemented through the option --dynet-weight-decay. If this value is set to wd, each parameter in the model is multiplied by (1-wd) after every parameter update. This weight decay is similar to L2 regularization, and is equivalent in the case of using simple SGD (SimpleSGDTrainer), but it is not the same when using any other optimizers such as AdagradTrainer or AdamTrainer. You can still try to use weight decay with these optimizers, and it might work, but if you really want to correctly apply L2 regularization with these optimizers, you will have to directly calculate the L2 norm of each of the parameters and add it to the objective function before performing your update.
## Minibatching Implementation¶
Minibatching in DyNet is different than how it is implemented in other libraries. In other libraries, you can create minibatches by explicitly adding another dimension to each of the variables that you want to process, and managing them yourself. Instead, DyNet provides special Operations that allow you to perform input, lookup, or loss calculation over mini-batched input, then DyNet will handle the rest. The programming paradigm is a bit different from other toolkits, and may take a bit of getting used to, but is often more convenient once you’re used to it.
## LSTM Implementation¶
The implementation of LSTMs in LSTMBuilder is not the canonical implementation, but an implementation using coupled input and forget gates, as described in “LSTM: A Search Space Odyssey” (https://arxiv.org/abs/1503.04069). In other words, if the value of the input gate is i, the forget gate is 1-i. This reduces the number of parameters in the model and speeds training a little, and in many cases the accuracy is the same or better. If you want to try the standard version of the LSTM, use the VanillaLSTMBuilder class.
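A one-step sketch of the coupled-gate cell update described here (gate values are supplied directly; computing them from the input and recurrent state is omitted in this sketch):

```python
import math

def coupled_gate_cell_update(c_prev, i_gate, c_tilde, o_gate):
    """One LSTM cell/state update with coupled input and forget gates:
    the forget gate is 1 - i, so the new cell state is a convex blend
    of the old state and the candidate."""
    c = [(1.0 - i) * cp + i * ct for i, cp, ct in zip(i_gate, c_prev, c_tilde)]
    h = [o * math.tanh(cv) for o, cv in zip(o_gate, c)]
    return c, h

c, h = coupled_gate_cell_update(
    c_prev=[0.0, 1.0],
    i_gate=[0.25, 0.5],   # forget gate is implicitly [0.75, 0.5]
    c_tilde=[1.0, -1.0],
    o_gate=[1.0, 1.0],
)
print(c)  # [0.25, 0.0]
```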
## Dropout Scaling¶
When using dropout to help prevent overfitting, dropout is generally applied at training time, then at test time all the nodes in the neural net are used to make the final decision, increasing robustness. However, because there is a disconnect between the number of nodes being used in each situation, it is important to scale the values of the output to ensure that they match in both situations. There are two ways to do this:
• Vanilla Dropout: At training time, perform dropout with probability p. At test time, scale the outputs of each node by p.
• Inverted Dropout: At training time, perform dropout with probability p, and scale the outputs by 1/p. At test time, use the outputs as-is.
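A minimal sketch of the two conventions, with p the keep probability as in the bullets above; a fixed 0/1 mask stands in for the random draw so the example is deterministic:

```python
def vanilla_dropout_train(x, mask):
    return [xi * m for xi, m in zip(x, mask)]

def vanilla_dropout_test(x, p):
    return [xi * p for xi in x]          # scale outputs by p at test time

def inverted_dropout_train(x, mask, p):
    return [xi * m / p for xi, m in zip(x, mask)]  # scale by 1/p now

def inverted_dropout_test(x):
    return list(x)                       # use outputs as-is

x = [2.0, 4.0, 6.0, 8.0]
mask = [1, 0, 1, 1]   # kept, dropped, kept, kept
p = 0.75              # keep probability

print(vanilla_dropout_train(x, mask))      # [2.0, 0.0, 6.0, 8.0]
print(inverted_dropout_train(x, mask, p))  # kept units scaled up by 1/0.75
```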
The first is perhaps more common, but the second is convenient, because we only need to think about dropout at training time, and thus DyNet opts to use the latter. See here for more details on these two methods. | 2017-12-14 18:42:58 | {"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 1, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.6184055805206299, "perplexity": 597.402424105468}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2017-51/segments/1512948550199.46/warc/CC-MAIN-20171214183234-20171214203234-00015.warc.gz"} |
https://www.kl800.com/read/d41ed5be00ff1a6fd879dce3.html | kl800.com
# Multigrid and multilevel methods for nonconforming rotated Q1 elements
MULTIGRID AND MULTILEVEL METHODS FOR NONCONFORMING ROTATED Q1 ELEMENTS
ZHANGXIN CHEN AND PETER OSWALD
Abstract. In this paper we systematically study multigrid algorithms and multilevel preconditioners for discretizations of second-order elliptic problems using nonconforming rotated Q1 finite elements. We first derive optimal results for the W-cycle and variable V-cycle multigrid algorithms; we prove that the W-cycle algorithm with a sufficiently large number of smoothing steps converges in the energy norm at a rate which is independent of the number of grid levels, and that the variable V-cycle algorithm provides a preconditioner with a condition number which is bounded independently of the number of grid levels. In the case of constant coefficients, the optimal convergence property of the W-cycle algorithm is shown with any number of smoothing steps. Then we obtain suboptimal results for multilevel additive and multiplicative Schwarz methods and their related V-cycle multigrid algorithms; we show that these methods generate preconditioners with a condition number which can be bounded at least by the number of grid levels. Also, we consider the problem of switching the present discretizations to spectrally equivalent discretizations for which optimal preconditioners already exist. Finally, the numerical experiments carried out here complement these theories.
1. INTRODUCTION
In recent years there have been analyses and applications of the nonconforming rotated (NR) Q1 finite elements for the numerical solution of partial differential problems. These nonconforming rectangular elements were first proposed and analyzed in [23] for numerically solving the Stokes problem; they are the simplest divergence-free nonconforming elements on rectangles (respectively, rectangular parallelepipeds). Then they were used to simulate the deformation of martensitic crystals with microstructure [17] due to their simplicity. Conforming finite element methods can be used to approximate the microstructure with layers which are oriented with respect to meshes, while nonconforming finite element methods allow the microstructure to be approximated on meshes which are not aligned with the microstructure (see, e.g., [17] for the references).
1991 Mathematics Subject Classification. Primary 65N30, 65N22, 65F10. Key words and phrases. Finite elements, mixed methods, nonconforming methods, multigrid methods, multilevel preconditioners, differential problems.
Independently, the NR Q1 elements have been derived within the framework of mixed finite element methods [11, 1]. It has been shown that the nonconforming method using these elements is equivalent to the mixed method exploiting the lowest-order Raviart-Thomas mixed elements on rectangles (respectively, rectangular parallelepipeds) [24]. Based on this equivalence theory, both the NR Q1 and the Raviart-Thomas mixed methods have been applied to model semiconductor devices [11]; they have been effectively employed to compute the electric potential equation with a doping profile which has a sharp junction. Error estimates of the NR Q1 elements can be derived by the classical finite element analysis [23, 16]. They can also be obtained from the known results on the mixed method, based on the equivalence between these two methods [1]. It has been shown that the so-called "nonparametric" rotated Q1 elements produce optimal-order error estimates. As a special case of the nonparametric families, the optimal-order errors can be obtained for partitions into rectangles (respectively, rectangular parallelepipeds) oriented along the coordinate axes. Finally, in the case of cubic triangulations, superconvergence results can be obtained [1, 16]. Unlike the simplest triangular nonconforming elements, i.e., the nonconforming P1 elements, the NR Q1 elements do not have any reasonable conforming subspace. Consequently, there are differences between these two types of nonconforming elements. The NR Q1 elements can be defined on rectangles (respectively, rectangular parallelepipeds) with degrees of freedom given by the values at the midpoints of edges of the rectangles (respectively, the centers of faces of the rectangular parallelepipeds), or by the averages over the edges of the rectangles (respectively, the faces of the rectangular parallelepipeds).
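For orientation, the standard local construction behind these elements (well-known background, stated here in its usual form rather than quoted from this paper): on the reference square the rotated Q1 space and the two degree-of-freedom variants read

```latex
% Local space of the rotated Q1 element on the reference square \hat K:
\hat{Q}(\hat K) = \operatorname{span}\{\, 1,\; x,\; y,\; x^2 - y^2 \,\},
% First version: point values at the edge midpoints m_e,
\ell_e(v) = v(m_e) \quad \text{for each edge } e \text{ of } \hat K,
% Second version (the one used in this paper's analysis): edge averages,
\ell_e(v) = \frac{1}{|e|} \int_e v \, ds .
```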
While these two versions lead to the same definition for the nonconforming P1 elements, they can produce very different results in terms of implementation for the NR Q1 elements. With the second version of the NR Q1 elements, we are able to prove all the theoretical results for the multigrid algorithms and multilevel additive and multiplicative Schwarz methods considered in this paper. However, we are unable to obtain these results with their first version. In particular, as numerical tests in [22] indicate, the energy norm of the iterates of the usual intergrid transfer operators, which enters both upper and lower bounds for the condition number of preconditioned systems, deteriorates with the number of grid levels for the first version. But it is bounded independently of the number of grid levels for the second version, as shown here. The other major difference between the nonconforming P1 and the NR Q1 elements is that the former contains the conforming P1 elements, while the latter does not contain any reasonable conforming subspace, as mentioned above. As a result of this, the convergence of the standard V-cycle algorithm for the nonconforming P1 elements can be shown when the coarse-grid correction steps of this algorithm are established on the conforming P1 spaces [18, 12]. But this is not the case for the NR Q1 elements. On the other hand, within the context of the nonconforming methods, i.e., when the coarse-grid correction steps are defined on the nonconforming P1 spaces themselves, the convergence of the V-cycle algorithm has not been shown, and the W-cycle algorithm has been proven to converge only under the assumption that the number of smoothing steps is sufficiently large [7, 8, 3, 4, 25, 1, 12, 14]. However, we are here able to show the convergence of the W-cycle algorithm with any number of smoothing steps for the Laplace equation using the NR Q1 elements. This optimal property cannot be proven for the nonconforming P1 elements using the present techniques.
MULTIGRID AND MULTILEVEL METHODS FOR NONCONFORMING ELEMENTS
The multigrid algorithms for the NR Q1 elements were first developed and analyzed in [1], and further discussed in [12] and [9]. The second version of these elements was used in [1] and [12], while their first version was exploited in [9]. Moreover, the analysis in [9] was given for elliptic boundary value problems which are not required to have full elliptic regularity. However, in all three of these papers, only the W-cycle algorithm with a sufficiently large number of smoothing steps was shown to converge, using the standard proof of convergence of multigrid algorithms for conforming finite element methods [2]. We finally mention that a study of the NR Q1 elements in the context of domain decomposition methods has been given in [13]. In this paper we systematically study multigrid algorithms and multilevel preconditioners for discretizations of second-order elliptic problems using the NR Q1 elements. We first consider the convergence of the W-cycle and variable V-cycle algorithms for these nonconforming elements. We prove that the W-cycle algorithm with a sufficiently large number of smoothing steps converges in the energy norm at a rate which is independent of the number of grid levels, and that the variable V-cycle algorithm provides a preconditioner with a condition number which is bounded independently of the number of grid levels. A main observation of this paper is that the optimal convergence property for the W-cycle algorithm holds with any number of smoothing steps when the coefficient of the differential problem is constant. Explicit bounds for the convergence rate and condition number are given. The NR Q1 elements are so far the first type of nonconforming elements shown to possess this feature for the W-cycle algorithm with any number of smoothing steps. We then study multilevel preconditioners of hierarchical basis and BPX type [5] for the NR Q1 elements.
We develop a convergence theory for the multilevel additive and multiplicative Schwarz methods and their related V-cycle algorithms. We follow the general theory introduced in [22], where the analysis of the hierarchical basis and BPX type for nonconforming discretizations of partial differential equations was carried out. A key ingredient in the analysis is to control the energy norm growth of the iterated coarse-to-fine grid operators, which enters both upper and lower bounds for the condition number of preconditioned systems, as outlined above. So far, the energy norm of the iterated intergrid transfer operators has been shown to be bounded independently of grid levels solely for the nonconforming P1 elements [19]. In this paper we prove this property for the NR Q1 elements. Based on the present theory, we derive a suboptimal result for the multilevel preconditioners of hierarchical basis and BPX type for the NR Q1 elements. Finally, we study the problem of switching the NR Q1 discretization system to a spectrally equivalent discretization system for which optimal preconditioners are already available. This switching strategy has been used in the setting of the multilevel additive Schwarz method; see [21] for references. After we find a spectrally equivalent reference discretization for the NR Q1 system, we are able to obtain optimal preconditioner results for the NR Q1 elements. Thanks to the equivalence between the rotated Q1 nonconforming method and the lowest-order Raviart-Thomas mixed rectangular method, all the results derived here carry over directly to the latter method [1, 12]. For technical reasons, all the results in this paper are shown for partitions into uniform squares (respectively, cubes). They can be extended to triangulations in which the finest triangulation can be mapped to a square (respectively, cubic)
ZHANGXIN CHEN AND PETER OSWALD
triangulation in an affine-invariant fashion. Also, the analysis is given for a two-dimensional domain; an extension to three space dimensions is straightforward for most of the results below. The rest of the paper is organized as follows. In the next section we prove some preliminary results for the intergrid transfer operators. Then the multigrid algorithms and multilevel preconditioners are discussed in §3 and §4, respectively. The problem of switching to a spectrally equivalent discretization is considered in §5. Finally, the numerical results presented in §6 complement the present theories.
2. PRELIMINARY RESULTS
For expositional convenience, let $\Omega = (0,1)^2$ be the unit square, and let $H^s(\Omega)$ and $L^2(\Omega) = H^0(\Omega)$ be the usual Sobolev spaces with the norm

$\|v\|_s = \Big( \sum_{|\alpha| \le s} \int_\Omega |D^\alpha v|^2 \, dx \Big)^{1/2},$

where $s$ is a nonnegative integer. Also, let $(\cdot,\cdot)$ denote the $L^2(\Omega)$ or $(L^2(\Omega))^2$ inner product, as appropriate. The $L^2(\Omega)$ norm is indicated by $\|\cdot\|$. Finally, $H_0^1(\Omega) = \{ v \in H^1(\Omega) : v|_\Gamma = 0 \}$, where $\Gamma = \partial\Omega$.

Let $h_1$ and $E_{h_1} = E_1$ be given, where $E_{h_1}$ is a partition of $\Omega$ into uniform squares with side length $h_1$, oriented along the coordinate axes. For each integer $2 \le k \le K$, let $h_k = 2^{1-k} h_1$ and let $E_{h_k} = E_k$ be constructed by connecting the midpoints of the edges of the squares in $E_{k-1}$; let $E_h = E_K$ be the finest grid. Also, let $\partial E_k$ be the set of all interior edges in $E_k$. In this and the following sections, we replace the subscript $h_k$ simply by the subscript $k$. For each $k$, we introduce the rotated Q1 nonconforming space

$V_k = \big\{ v \in L^2(\Omega) :\ v|_E = a_1^E + a_2^E x + a_3^E y + a_4^E (x^2 - y^2),\ a_i^E \in \mathbb{R},\ \forall E \in E_k;$
$\quad \text{if } E_1 \text{ and } E_2 \text{ share an edge } e, \text{ then } \int_e v|_{\partial E_1}\, ds = \int_e v|_{\partial E_2}\, ds; \text{ and } \int_{\partial E \cap \Gamma} v|_{\partial E}\, ds = 0 \big\}.$

Note that $V_k \not\subset H_0^1(\Omega)$ and $V_{k-1} \not\subset V_k$, $k \ge 2$. We introduce the space

$\hat V_k = \sum_{l=1}^k V_l \supset V_k,$

the discrete energy scalar product on $\hat V_k + H_0^1(\Omega)$ by

$(v, w)_{E,k} = \sum_{E \in E_k} (\nabla v, \nabla w)_E, \qquad v, w \in \hat V_k + H_0^1(\Omega),$

and the discrete norm on $\hat V_k + H_0^1(\Omega)$ by

$\|v\|_{E,k} = \sqrt{(v, v)_{E,k}}, \qquad v \in \hat V_k + H_0^1(\Omega).$
We introduce two sets of intergrid transfer operators $I_k : V_{k-1} \to V_k$ and $P_{k-1} : V_k \to V_{k-1}$ as follows. Following [1, 12], if $v \in V_{k-1}$ and $e$ is an edge of a square in $E_k$, then $I_k v \in V_k$ is defined by

$\frac{1}{|e|}\int_e I_k v \, ds = \begin{cases} 0 & \text{if } e \subset \partial\Omega, \\ \frac{1}{|e|}\int_e v \, ds & \text{if } e \not\subset \partial E \text{ for any } E \in E_{k-1}, \\ \frac{1}{2|e|}\int_e (v|_{E_1} + v|_{E_2}) \, ds & \text{if } e \subset \partial E_1 \cap \partial E_2 \text{ for some } E_1, E_2 \in E_{k-1}. \end{cases}$

If $v \in V_k$ and $e$ is an edge of an element in $\partial E_{k-1}$, then $P_{k-1} v \in V_{k-1}$ is given by

$\frac{1}{|e|}\int_e P_{k-1} v \, ds = \frac12 \Big( \frac{1}{|e_1|}\int_{e_1} v \, ds + \frac{1}{|e_2|}\int_{e_2} v \, ds \Big),$

where $e_1$ and $e_2$ in $\partial E_k$ form the edge $e \in \partial E_{k-1}$. Note that the definition of $P_{k-1}$ automatically preserves the zero average values on boundary edges. Also, it can be seen that

(2.1) $\quad P_{k-1} I_k v = v, \qquad v \in V_{k-1},\ k > 1.$

That is, $P_{k-1} I_k$ is the identity operator $\mathrm{Id}_{k-1}$ on $V_{k-1}$. This relation is not satisfied when the NR Q1 elements are defined with degrees of freedom given by the values at the midpoints of edges of elements. We also define the iterates of $I_k$ and $P_{k-1}$ by

$R_k^K = I_K \cdots I_{k+1} : V_k \to V_K, \qquad Q_k^K = P_k \cdots P_{K-1} : V_K \to V_k.$

Finally, we make the convention for the discrete energy scalar product on the space $\hat V_K$:

$(v, w)_E = (v, w)_{E,K}, \qquad v, w \in \hat V_K.$

Obviously, we have the inverse inequality

(2.2) $\quad \|v\|_E \le C\, 2^k \|v\|, \qquad v \in \hat V_k,\ 1 \le k \le K$

(here and later, by $C$, $c$, ... we denote generic positive constants which are independent of $k$). In this section we collect some basic properties of the intergrid transfer operators $P_{k-1}$ (respectively, $I_k$) and their iterates $Q_k^K$ (respectively, $R_k^K$). The crucial results are the boundedness of the operators $I_k$ with constant $\sqrt2$ and the uniform boundedness of the operators $R_k^K$ with respect to the discrete energy norm $\|\cdot\|_E$.

Lemma 2.1. It holds that $P_{k-1}$ ($2 \le k \le K$) is an orthogonal projection with respect to the energy scalar product; i.e., for any $v \in V_k$,

(2.3) $\quad (v - P_{k-1}v, w)_E = 0,\ \forall w \in V_{k-1}; \qquad \|v\|_E^2 = \|v - P_{k-1}v\|_E^2 + \|P_{k-1}v\|_E^2.$

Moreover, there are constants $C$ and $c$, independent of $v$, such that the difference $\hat v = v - P_{k-1}v \in \hat V_k$ satisfies

(2.4) $\quad c\, 2^k \|\hat v\| \le \|\hat v\|_E \le C\, 2^k \|\hat v\|.$
Proof. For any $E \in E_{k-1}$ with the four subsquares $E_i \in E_k$ ($i = 1,\dots,4$; see Figure 1), an application of Green's formula implies that

(2.5) $\quad (\nabla[v - P_{k-1}v], \nabla w)_E = \sum_{i=1}^4 (\nabla[v - P_{k-1}v], \nabla w)_{E_i} = \sum_{i=1}^4 \sum_{j=1}^4 \int_{e_{E_i}^j} \frac{\partial w}{\partial \nu_{E_i}^j}\, (v - P_{k-1}v)|_{E_i}\, ds,$

where $e_{E_i}^j$ are the four edges of $E_i$ with the outer unit normals $\nu_{E_i}^j$, $i = 1,\dots,4$. Note that in (2.5) the line integrals over edges interior to $E \in E_{k-1}$ cancel by continuity of $P_{k-1}v$ in the interior of $E$. Also, if $e_{E_i}^j$ and $e_{\hat E_i}^{\hat j}$ form an edge of $E$, it follows from the definition of $P_{k-1}$ that

$\int_{e_{E_i}^j} (v - P_{k-1}v)|_{E_i}\, ds + \int_{e_{\hat E_i}^{\hat j}} (v - P_{k-1}v)|_{\hat E_i}\, ds = 0,$

and that

$\frac{\partial w}{\partial \nu_{E_i}^j}\Big|_{e_{E_i}^j} = \frac{\partial w}{\partial \nu_{\hat E_i}^{\hat j}}\Big|_{e_{\hat E_i}^{\hat j}}$

is constant. Then, by (2.5), we see that $(\nabla[v - P_{k-1}v], \nabla w)_E = 0$. Now, sum over all $E \in E_{k-1}$ to derive the orthogonality relations in (2.3). The upper estimate in (2.4) directly follows from (2.2). The lower bound can be easily obtained from a direct calculation of the energy norms of $v - P_{k-1}v$ on all $E \in E_{k-1}$. This completes the proof. #

Figure 1. Edges and subsquares of $E \in E_{k-1}$.
Before we start with the investigation of the prolongations $I_k$, it will be useful to collect some formulas. For $E \in E_{k-1}$ and any $v \in V_{k-1}$, define

$b^i_E = \frac{1}{|e^i_E|} \int_{e^i_E} v \, ds, \qquad i = 1,\dots,4$

(see Figure 1 for the notation), and set

$s_E = b^1_E + b^2_E + b^3_E + b^4_E, \qquad \delta^0_E = b^1_E + b^3_E - b^2_E - b^4_E, \qquad \Delta^1_E = b^3_E - b^1_E, \qquad \Delta^2_E = b^4_E - b^2_E.$
Then, with the subscript $E$ omitted, we have the next lemma.

Lemma 2.2. It holds that

(2.6) $\quad \|v\|_{L^2(E)}^2 = h_{k-1}^2 \Big( \tfrac{1}{16} s^2 + \tfrac{1}{12}\{(\Delta^1)^2 + (\Delta^2)^2\} + \tfrac{1}{40} (\delta^0)^2 \Big), \qquad \|\nabla v\|_{L^2(E)}^2 = (\Delta^1)^2 + (\Delta^2)^2 + \tfrac32 (\delta^0)^2,$

and

(2.7) $\quad \tfrac{h_{k-1}^2}{10}\big( (b^1)^2 + (b^2)^2 + (b^3)^2 + (b^4)^2 \big) \le \|v\|_{L^2(E)}^2 \le \tfrac{h_{k-1}^2}{4}\big( (b^1)^2 + (b^2)^2 + (b^3)^2 + (b^4)^2 \big).$

Proof. Using the affine invariance of the local interpolation problem connecting $v$ with its edge averages $b^i$, it suffices to prove (2.6) and (2.7) for the master square $E = (-1,1)^2$. A straightforward calculation gives

(2.8) $\quad v = v(x,y) = \tfrac14 s + \tfrac12 \Delta^2 x + \tfrac12 \Delta^1 y - \tfrac38 \delta^0 (x^2 - y^2).$

Now direct integration yields the desired results in (2.6). Also, (2.7) follows from the first equation of (2.6) by computing the eigenvalues of the symmetric $4 \times 4$ matrix $T^t D T$, where $D = \mathrm{diag}(1/16, 1/12, 1/12, 1/40)$, $T$ stands for the transformation matrix from the vector $(b^1, b^3, b^2, b^4)$ to $(s, \Delta^1, \Delta^2, \delta^0)$, and $T^t$ is the transpose of $T$. These eigenvalues are $1/10$, $1/6$, $1/6$, and $1/4$, which implies (2.7). #

Lemma 2.2 is the basis for computing all the discrete energy and $L^2$ norms needed in the sequel. The formula (2.8), valid for the master square, can be used to derive explicit expressions for the edge averages of $I_k v$ and $I_k v - v$. Toward this end, we first compute the corresponding values for the master square, and then use the invariance of the local interpolation problem for $v$ under affine transformations (for the square triangulations under consideration, these transformations are just dilations and translations) to return to the notation on each $E \in E_{k-1}$.
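As an aside (not part of the paper), the master-square identities can be checked numerically. The sketch below assumes the edge labeling $b^1$ bottom, $b^2$ left, $b^3$ top, $b^4$ right, which is one consistent reading of Figure 1; since all integrands are polynomials of degree at most four, Gauss-Legendre quadrature evaluates the integrals exactly up to round-off.

```python
# Numerical check of (2.6) and (2.8) on the master square E = (-1,1)^2.
# Assumed edge labeling: b1 bottom, b2 left, b3 top, b4 right.
import numpy as np

rng = np.random.default_rng(42)
b1, b2, b3, b4 = rng.standard_normal(4)    # arbitrary edge averages
s = b1 + b2 + b3 + b4                      # s_E
d0 = b1 + b3 - b2 - b4                     # delta^0
D1, D2 = b3 - b1, b4 - b2                  # Delta^1, Delta^2

def v(x, y):                               # the representation (2.8)
    return s/4 + D2/2*x + D1/2*y - 3/8*d0*(x**2 - y**2)

def vx(x, y): return D2/2 - 3/4*d0*x       # dv/dx
def vy(x, y): return D1/2 + 3/4*d0*y       # dv/dy

gx, gw = np.polynomial.legendre.leggauss(4)   # exact for degree <= 7
X, Y = np.meshgrid(gx, gx)
W = np.outer(gw, gw)

# v reproduces the prescribed edge averages (bottom and right edges)
assert abs(np.sum(gw * v(gx, -1.0)) / 2 - b1) < 1e-12
assert abs(np.sum(gw * v(1.0, gx)) / 2 - b4) < 1e-12

# the two identities in (2.6); on the master square h_{k-1} = 2
L2 = np.sum(W * v(X, Y)**2)
assert abs(L2 - 4*(s**2/16 + (D1**2 + D2**2)/12 + d0**2/40)) < 1e-12
H1 = np.sum(W * (vx(X, Y)**2 + vy(X, Y)**2))
assert abs(H1 - (D1**2 + D2**2 + 1.5*d0**2)) < 1e-12
print("identities (2.6) and (2.8) verified")
```

The gradient identity is independent of $h_{k-1}$, reflecting the scale invariance of the energy norm in two dimensions.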
Figure 2. An illustration for Lemma 2.3.
Note that, by the definition of the triangulation $E_{k-1}$, to each $E \in E_{k-1}$ is uniquely assigned a multi-index $\alpha = (\alpha_1, \alpha_2)$ such that $0 \le \alpha_1, \alpha_2 \le 2^{k-1} - 1$. For notational convenience, let $b^1_\alpha$ and $b^2_\alpha$ denote the averages of $v \in V_{k-1}$ over the horizontal and vertical edges $e^1_\alpha$ and $e^2_\alpha$, respectively, in $\partial E_{k-1}$ (see Figure 2, where $2\alpha$, $2\alpha + e_2$, $2\alpha + e_1 + e_2$, $2\alpha + e_1 + e_2 - e_1 \in \partial E_k$, $e_1 = (1,0)$, and $e_2 = (0,1)$). The corresponding quantities for $I_k v \in V_k$ are indicated by $a^j_\beta$, $j = 1, 2$. Now, introduce the notation

$\hat b^1_\alpha = b^1_\alpha + b^1_{\alpha-e_1} - b^1_{\alpha+e_2} - b^1_{\alpha+e_2-e_1}, \qquad \hat b^2_\alpha = b^2_\alpha + b^2_{\alpha-e_2} - b^2_{\alpha+e_1} - b^2_{\alpha+e_1-e_2}.$
With this notation, it follows from the definition of $I_k$ that the edge averages of $I_k v$ can be written as follows:

(2.9) $\quad a^1_{2\alpha} = b^1_\alpha + \tfrac18 \hat b^2_\alpha, \qquad a^1_{2\alpha+e_1} = b^1_\alpha - \tfrac18 \hat b^2_\alpha,$
$\quad\quad a^1_{2\alpha+e_2} = \tfrac58 b^2_\alpha + \tfrac18 b^2_{\alpha+e_1} + \tfrac18 b^1_\alpha + \tfrac18 b^1_{\alpha+e_2}, \qquad a^1_{2\alpha+e_2+e_1} = \tfrac58 b^2_{\alpha+e_1} + \tfrac18 b^2_\alpha + \tfrac18 b^1_\alpha + \tfrac18 b^1_{\alpha+e_2},$

and

(2.10) $\quad a^2_{2\alpha} = b^2_\alpha + \tfrac18 \hat b^1_\alpha, \qquad a^2_{2\alpha+e_2} = b^2_\alpha - \tfrac18 \hat b^1_\alpha,$
$\quad\quad a^2_{2\alpha+e_1} = \tfrac58 b^1_\alpha + \tfrac18 b^1_{\alpha+e_2} + \tfrac18 b^2_\alpha + \tfrac18 b^2_{\alpha+e_1}, \qquad a^2_{2\alpha+e_2+e_1} = \tfrac58 b^1_{\alpha+e_2} + \tfrac18 b^1_\alpha + \tfrac18 b^2_\alpha + \tfrac18 b^2_{\alpha+e_1},$

when the edge average $a^j_\beta$ is associated with an interior edge in $\partial E_k$; for boundary edges, this value is set to zero.

Note that $I_k$ (as well as $P_{k-1}$) can be extended to the larger spaces $\hat V_k$ in a natural way. In order to define the extension $\hat I_k : \hat V_k \to \hat V_k$, observe that any $v \in \hat V_k$ coincides on each $E \in E_k$ with a polynomial from $V_k|_E$, so the form of the previous definition for $I_k$ remains the same for $\hat I_k$. Clearly, $\hat I_k|_{V_{k-1}} = I_k$ and $\hat I_k|_{V_k} = \mathrm{Id}_k$.

To express the edge averages of $I_k v - v$, set

$\delta^1_\alpha = b^1_{\alpha+e_2} - b^1_\alpha + b^1_{\alpha+e_2-e_1} - b^1_{\alpha-e_1}, \qquad \delta^2_\alpha = b^2_{\alpha+e_1} - b^2_\alpha + b^2_{\alpha+e_1-e_2} - b^2_{\alpha-e_2},$

if $e^1_\alpha$ and $e^2_\alpha$ are interior edges in $\partial E_{k-1}$. For boundary edges, they need to be modified to give the correct expressions for $I_k v - v$. If $e^1_\alpha$ is a boundary edge, for example, we define

$\delta^2_\alpha = 2(b^2_{\alpha+e_1} - b^2_\alpha).$

With these, we see that

(2.11) $\quad \int_{e^1_{2\alpha+e_2-e_1}} (I_k v - v)|_{\alpha-e_1}\, ds = \int_{e^1_{2\alpha+e_2}} (I_k v - v)|_\alpha\, ds = 0,$
$\quad\quad \frac{1}{|e^2_{2\alpha}|}\int_{e^2_{2\alpha}} (I_k v - v)|_{\alpha-e_1}\, ds = \frac{1}{|e^2_{2\alpha+e_2}|}\int_{e^2_{2\alpha+e_2}} (I_k v - v)|_\alpha\, ds = -\tfrac18 \delta^1_\alpha,$
$\quad\quad \frac{1}{|e^2_{2\alpha}|}\int_{e^2_{2\alpha}} (I_k v - v)|_\alpha\, ds = \frac{1}{|e^2_{2\alpha+e_2}|}\int_{e^2_{2\alpha+e_2}} (I_k v - v)|_{\alpha-e_1}\, ds = \tfrac18 \delta^1_\alpha,$

where $(I_k v - v)|_\alpha$ denotes the restriction of $I_k v - v$ to the element associated with $\alpha$. The averages of $I_k v - v$ on other edges are given similarly. Then, by Green's formula and (2.11), we see that

(2.12) $\quad \|I_k v\|_E^2 - \|v\|_E^2 = \|I_k v - v\|_E^2, \qquad v \in V_{k-1}.$

From (2.9)-(2.12) and Lemma 2.2, we immediately have the next lemma. Below, the notation $\asymp$ stands for two-sided inequalities with constants independent of $k$.

Lemma 2.3. It holds that

(2.13) $\quad \|\hat I_k v\| \le \sqrt{\tfrac52}\, \|v\|, \ \forall v \in \hat V_k; \qquad \|I_k v\|_E \le \sqrt2\, \|v\|_E, \ \forall v \in V_{k-1},$

and

(2.14) $\quad 2^k \|I_k v - v\| \asymp \|I_k v - v\|_E \asymp \sqrt{\sum_\alpha \{(\delta^1_\alpha)^2 + (\delta^2_\alpha)^2\}}, \qquad \forall v \in V_{k-1}.$
We now prove the following property of the iterated coarse-to-fine intergrid transfer operators $R_k^K$.

Lemma 2.4. It holds that

(2.15) $\quad \|R_k^K v\|_E \le C \|v\|_E, \qquad \forall v \in V_k,\ 1 \le k \le K.$

Proof. The proof is technical; it follows the idea of the proof of an analogous statement for the P1 nonconforming elements [19]. First, we consider the case of $\Omega = \mathbb{R}^2$. That is, we assume that all our definitions are extended to infinite square partitions of $\mathbb{R}^2$; due to the local character of all constructions, this is easy to do. We keep the same notation for the extended partitions $E_k$, edges $e^j_\alpha \in \partial E_k$, squares $E \in E_k$, etc. In order to guarantee the finiteness of all norm expressions, we restrict our attention to functions $v \in V_k$ with finite support. By the construction of $I_k$, this property is preserved when applying the operators $I_k$ and $R_k^K$.

After the extension to the shift-invariant setting of $\mathbb{R}^2$, it is clear that it suffices to consider the case of $k = 1$. Set, for simplicity, $\tilde R_k = R_1^k$, $k = 1,\dots,K$. Our main observation from numerical experiments [21] was that the sequence $\{\|\tilde R_k v - \tilde R_{k-1} v\|_E^2,\ k = 2,\dots,K\}$ decays geometrically. What we want to prove next is the mathematical counterpart of this observation. To formulate the technical result, introduce

$\sigma_j = \sum_{\alpha \in \mathbb{Z}^2} (\delta^j_\alpha)^2, \qquad j = 0, 1, 2,$

where the quantities $\delta^j_\alpha$ are determined from the edge averages of $v \in V_1$ by the same formulas as above. The corresponding quantities computed for $\tilde v = I_2 v \in V_2$ are denoted by $\tilde\delta^j_\alpha$ and $\tilde\sigma_j$, $j = 0, 1, 2$. From (2.14) in Lemma 2.3, we see that $\sigma_1 + \sigma_2 \asymp \|\tilde R_2 v - v\|_E^2$ and $\tilde\sigma_1 + \tilde\sigma_2 \asymp \|\tilde R_3 v - \tilde R_2 v\|_E^2$; moreover, we can iterate this construction. Thus, if we can prove that

(2.16) $\quad \tilde\sigma_1 + \tilde\sigma_2 + c\,\tilde\sigma_0 \le \gamma\, (\sigma_1 + \sigma_2 + c\,\sigma_0),$

where $0 < \gamma < 1$ and $c > 0$ are constants independent of $v$, then, by Lemmas 2.2 and 2.3,

(2.17) $\quad \|R_1^K v\|_E \le \|v\|_E + \sum_{k=2}^K \|\tilde R_k v - \tilde R_{k-1} v\|_E \le \|v\|_E + C \sum_{k=1}^{K-1} (\sqrt\gamma)^k \|v\|_E \le C \|v\|_E.$

Since this gives the desired boundedness of $R_k^K$ (for $\mathbb{R}^2$) via dilation, we concentrate on (2.16).
From (2.9) and (2.10) we find the following formulas for the $\tilde\delta^j_\beta$:

$\tilde\delta^0_{2\alpha} = -\tfrac18 \delta^1_\alpha + \tfrac18 \delta^2_\alpha + \tfrac14 \delta^0_\alpha, \qquad \tilde\delta^0_{2\alpha+e_1} = \tfrac18 \delta^1_{\alpha+e_1} - \tfrac18 \delta^2_\alpha + \tfrac14 \delta^0_\alpha,$
$\tilde\delta^0_{2\alpha+e_2} = \tfrac18 \delta^1_\alpha - \tfrac18 \delta^2_{\alpha+e_2} + \tfrac14 \delta^0_\alpha, \qquad \tilde\delta^0_{2\alpha+e_1+e_2} = -\tfrac18 \delta^1_{\alpha+e_1} + \tfrac18 \delta^2_{\alpha+e_2} + \tfrac14 \delta^0_\alpha;$

$\tilde\delta^1_{2\alpha} = \tfrac12 \delta^1_\alpha - \tfrac18 (\delta^2_\alpha + \delta^2_{\alpha-e_1}) - \tfrac38 (\delta^0_\alpha - \delta^0_{\alpha-e_1}), \qquad \tilde\delta^1_{2\alpha+e_1} = \tfrac14 \delta^2_\alpha,$
$\tilde\delta^1_{2\alpha+e_2} = \tfrac12 \delta^1_\alpha - \tfrac18 (\delta^2_{\alpha+e_2} + \delta^2_{\alpha-e_1+e_2}) + \tfrac38 (\delta^0_\alpha - \delta^0_{\alpha-e_1}), \qquad \tilde\delta^1_{2\alpha+e_1+e_2} = -\tfrac14 \delta^2_{\alpha+e_2};$

$\tilde\delta^2_{2\alpha} = \tfrac12 \delta^2_\alpha - \tfrac18 (\delta^1_\alpha + \delta^1_{\alpha-e_2}) + \tfrac38 (\delta^0_\alpha - \delta^0_{\alpha-e_2}), \qquad \tilde\delta^2_{2\alpha+e_2} = \tfrac14 \delta^1_\alpha,$
$\tilde\delta^2_{2\alpha+e_1} = \tfrac12 \delta^2_\alpha - \tfrac18 (\delta^1_{\alpha+e_1} + \delta^1_{\alpha-e_2+e_1}) - \tfrac38 (\delta^0_\alpha - \delta^0_{\alpha-e_2}), \qquad \tilde\delta^2_{2\alpha+e_1+e_2} = -\tfrac14 \delta^1_{\alpha+e_1}.$

These formulas are used to compute the quantities $\tilde\sigma_j$. In order to write them in reasonably short form, we introduce the notation

$\sigma_j^\tau = \sum_{\alpha \in \mathbb{Z}^2} \delta^j_\alpha \delta^j_{\alpha+\tau}, \qquad \sigma_{jl}^\tau = \sum_{\alpha \in \mathbb{Z}^2} \delta^j_\alpha \delta^l_{\alpha+\tau}, \qquad j, l = 0, 1, 2 \ (j \ne l);$

if $\tau \in \mathbb{Z}^2$ is the null vector, it is omitted in this notation. Carefully evaluating all squares in

$\tilde\sigma_j = \sum_{\alpha} \big\{ (\tilde\delta^j_{2\alpha})^2 + (\tilde\delta^j_{2\alpha+e_1})^2 + (\tilde\delta^j_{2\alpha+e_2})^2 + (\tilde\delta^j_{2\alpha+e_1+e_2})^2 \big\},$

and introducing $A = \sigma_1 + \sigma_2$ and $\tilde A = \tilde\sigma_1 + \tilde\sigma_2$, one arrives after a lengthy but elementary computation (the intermediate expressions, recorded in the original as (2.18)-(2.20), involve only the correlations $\sigma_j^\tau$ and $\sigma_{jl}^\tau$ for shifts $\tau \in \{\pm e_1, \pm e_2, e_1 + e_2\}$) at the estimates

$\tilde\sigma_0 \le \tfrac12 \sigma_0 + \tfrac1{16} A, \qquad \tilde A \le \tfrac{11}4 \sigma_0 + \tfrac58 A,$

where we have used the fact that $|\sigma_j^\tau| \le \sigma_j$, $j = 0, 1, 2$, which is valid for arbitrary $\tau$ by the Cauchy-Schwarz inequality.

Now, set $B = c\,\sigma_0$ and $\tilde B = c\,\tilde\sigma_0$. Then it follows from the above estimates that

$\tilde A \le \tfrac58 A + \tfrac{11}{4c} B, \qquad \tilde B \le \tfrac{c}{16} A + \tfrac12 B,$

and

$\tilde A + \tilde B \le \max\Big\{ \tfrac58 + \tfrac{c}{16},\ \tfrac{11}{4c} + \tfrac12 \Big\} (A + B).$

Let $c = 3\sqrt5 - 1$, which balances the two terms in the maximum, so that (2.16) holds with

$\gamma = \tfrac58 + \tfrac{c}{16} = \tfrac{11}{4c} + \tfrac12 = \tfrac{3\sqrt5 + 9}{16} < 1.$
It remains to reduce the assertion of Lemma 2.4 to the shift-invariant situation just considered. To this end, starting with any $v \in V_k$ on the unit square, we repeatedly use an odd extension. Namely, set $\hat v = v$ on $[0,1]^2$ and

$\hat v(x,y) = -\hat v(-x,y), \qquad (x,y) \in [-1,0) \times [0,1];$

after this, define

$\hat v(x,y) = -\hat v(x,-y), \qquad (x,y) \in [-1,1] \times [-1,0);$

and continue this extension process with the unit square replaced by $[-1,1]^2$, such that after the next two steps $\hat v$ is defined on $[-1,3]^2$. Outside this larger square we continue by zero. Clearly, $\|\hat v\|_E^2 = 16 \|v\|_E^2$, where the norms for $\hat v$ and $v$ are taken with respect to $\mathbb{R}^2$ and the unit square, respectively.

It is not difficult to check by induction that on $[0,1]^2$ the functions $R_k^K \hat v$ (obtained by the repeated application of the prolongations defined on $\mathbb{R}^2$) and $R_k^K v$ (as defined above with respect to $[0,1]^2$) coincide. Also, the values of $I_{k+1} \hat v$ on $[-2^{-(k+1)}, 1 + 2^{-(k+1)}]^2$ depend solely on the values of $\hat v$ on the square $[-2^{-k}, 1 + 2^{-k}]^2$, and on this enlarged square $I_{k+1} \hat v$ coincides with its odd extension from $[0,1]^2$. Finally, the zero edge averages are automatically reproduced along the boundary of $[0,1]^2$ by the above extension procedure. Therefore, by (2.17) and the dilation argument, we obtain

$\|R_k^K v\|_E^2 \le \|R_k^K \hat v\|_E^2 \le C \|\hat v\|_E^2 = 16\, C \|v\|_E^2,$

which finishes the proof of Lemma 2.4. #

The second inequality in (2.13) is critical for the convergence results of the multigrid algorithms developed in the next section, while (2.15) is crucial for the multilevel preconditioner results in §4.
3. MULTIGRID ALGORITHMS
In this section and the next we consider multigrid algorithms and multilevel preconditioners for the numerical solution of the second-order elliptic problem

(3.1) $\quad -\nabla\cdot(A \nabla u) = f \ \text{in } \Omega, \qquad u = 0 \ \text{on } \Gamma,$

where $\Omega \subset \mathbb{R}^2$ is a simply connected bounded polygonal domain with boundary $\Gamma$, $f \in L^2(\Omega)$, and the coefficient $A \in (L^\infty(\Omega))^{2\times2}$ satisfies

(3.2) $\quad \beta_0\, \xi^t \xi \le \xi^t A(x,y)\, \xi \le \beta_1\, \xi^t \xi, \qquad (x,y) \in \Omega,\ \xi \in \mathbb{R}^2,$

with fixed constants $\beta_1 \ge \beta_0 > 0$. The condition number of the preconditioned linear systems to be analyzed later depends on the ratio $\beta_1/\beta_0$. Problem (3.1) is recast in weak form as follows. The bilinear form $a(\cdot,\cdot)$ is defined by

$a(v, w) = (A \nabla v, \nabla w), \qquad v, w \in H^1(\Omega).$

Then the weak form of (3.1) for the solution $u \in H_0^1(\Omega)$ is

(3.3) $\quad a(u, v) = (f, v), \qquad \forall v \in H_0^1(\Omega).$

Associated with each $V_k$, we introduce a bilinear form on $V_k + H_0^1(\Omega)$ by

$a_k(v, w) = \sum_{E \in E_k} (A \nabla v, \nabla w)_E, \qquad v, w \in V_k + H_0^1(\Omega).$
The NR Q1 finite element discretization of (3.1) is to find $u_K \in V_K$ such that

(3.4) $\quad a_K(u_K, v) = (f, v), \qquad \forall v \in V_K.$

Let $A_k : V_k \to V_k$ be the discretization operator on level $k$ given by

(3.5) $\quad (A_k v, w) = a_k(v, w), \qquad \forall w \in V_k.$

The operator $A_k$ is clearly symmetric (in both the $a_k(\cdot,\cdot)$ and $(\cdot,\cdot)$ inner products) and positive definite. Also, we define the operators $R_{k-1} : V_k \to V_{k-1}$ and $R^0_{k-1} : V_k \to V_{k-1}$ by

$a_{k-1}(R_{k-1} v, w) = a_k(v, I_k w), \qquad \forall w \in V_{k-1},$

and

$(R^0_{k-1} v, w) = (v, I_k w), \qquad \forall w \in V_{k-1}.$

It is easy to see that $I_k R_{k-1}$ is a symmetric operator with respect to the $a_k$ form. Note that neither $R_{k-1}$ nor $R^0_{k-1}$ is a projection in the nonconforming case. Finally, let $\lambda_k$ dominate the spectral radius of $A_k$. The multigrid processes below result in a linear iterative scheme with a reduction operator equal to $I - B_K A_K$, where $B_K : V_K \to V_K$ is the multigrid operator to be defined below.

Multigrid Algorithm 3.1. Let $2 \le k \le K$ and let $p$ be a positive integer. Set $B_1 = A_1^{-1}$. Assume that $B_{k-1}$ has been defined, and define $B_k g$ for $g \in V_k$ as follows:
1. Set $x^0 = 0$ and $q^0 = 0$.
2. Define $x^l$ for $l = 1,\dots,m(k)$ by $x^l = x^{l-1} + S_k(g - A_k x^{l-1})$.
3. Define $y^{m(k)} = x^{m(k)} + I_k q^p$, where $q^i$ for $i = 1,\dots,p$ is defined by $q^i = q^{i-1} + B_{k-1}\big[ R^0_{k-1}\big(g - A_k x^{m(k)}\big) - A_{k-1} q^{i-1} \big]$.
4. Define $y^l$ for $l = m(k)+1,\dots,2m(k)$ by $y^l = y^{l-1} + S_k(g - A_k y^{l-1})$.
5. Set $B_k g = y^{2m(k)}$.

In Algorithm 3.1, $m(k)$ gives the number of pre- and post-smoothing iterations and can vary as a function of $k$. In this section, we set $S_k = (\lambda_k)^{-1} \mathrm{Id}_k$ in the pre- and post-smoothing steps. If $p = 1$, we have a V-cycle multigrid algorithm. If $p = 2$, we have a W-cycle algorithm. A variable V-cycle algorithm is one in which the number of smoothings $m(k)$ increases exponentially as $k$ decreases (i.e., $p = 1$ and $m(k) = 2^{K-k}$). We now follow the methodology developed in [6] to state convergence results for Algorithm 3.1.
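To make the control flow of Algorithm 3.1 concrete, the following sketch (an illustration, not from the paper) implements the recursion on a toy one-dimensional Poisson hierarchy. The intergrid operators here are standard linear interpolation and its transpose, not the NR Q1 operators $I_k$, $R^0_{k-1}$ of the paper; only the recursive structure (pre-smoothing, $p$ coarse-grid corrections, post-smoothing, with $S_k = \lambda_k^{-1}\mathrm{Id}_k$) follows the algorithm.

```python
# Sketch of Multigrid Algorithm 3.1 on a toy 1-D Poisson hierarchy.
# Intergrid operators: standard linear interpolation (NOT the NR Q1
# transfers of the paper); only the recursion mirrors Algorithm 3.1.
import numpy as np

def poisson_matrix(n):
    """1-D Dirichlet Laplacian on n interior points of (0,1)."""
    h = 1.0 / (n + 1)
    return (2*np.eye(n) - np.eye(n, k=1) - np.eye(n, k=-1)) / h**2

def prolongation(nc):
    """Linear interpolation from nc to 2*nc + 1 interior points."""
    P = np.zeros((2*nc + 1, nc))
    for j in range(nc):
        P[2*j, j] += 0.5
        P[2*j + 1, j] = 1.0
        P[2*j + 2, j] += 0.5
    return P

def B(k, g, A, P, m, p):
    """B_k g of Algorithm 3.1 with smoother S_k = lambda_k^{-1} Id."""
    if k == 1:
        return np.linalg.solve(A[1], g)          # B_1 = A_1^{-1}
    lam = np.linalg.eigvalsh(A[k])[-1]
    x = np.zeros_like(g)
    for _ in range(m(k)):                        # pre-smoothing
        x = x + (g - A[k] @ x) / lam
    r0 = P[k].T @ (g - A[k] @ x)                 # plays the role of R^0_{k-1}
    q = np.zeros(A[k - 1].shape[0])
    for _ in range(p):                           # p coarse-grid corrections
        q = q + B(k - 1, r0 - A[k - 1] @ q, A, P, m, p)
    y = x + P[k] @ q
    for _ in range(m(k)):                        # post-smoothing
        y = y + (g - A[k] @ y) / lam
    return y

# three-level hierarchy: 3 -> 7 -> 15 interior points, Galerkin coarsening
n = {1: 3, 2: 7, 3: 15}
P = {k: prolongation(n[k - 1]) for k in (2, 3)}
A = {3: poisson_matrix(15)}
for k in (3, 2):
    A[k - 1] = P[k].T @ A[k] @ P[k]

# W-cycle (p = 2) iteration x <- x + B_K(f - A_K x)
rng = np.random.default_rng(0)
f = rng.standard_normal(n[3])
u = np.linalg.solve(A[3], f)
x = np.zeros(n[3])
for _ in range(100):
    x = x + B(3, f - A[3] @ x, A, P, m=lambda k: 2, p=2)
print(np.linalg.norm(u - x))   # error after 100 W-cycles (very small)
```

Setting `p=1` and `m=lambda k: 2**(3 - k)` instead gives the variable V-cycle of the text.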
The two ingredients in their analysis are the regularity and approximation property

(3.6) $\quad |a_k(v - I_k R_{k-1} v, v)| \le C\, \frac{\|A_k v\|}{\sqrt{\lambda_k}}\, \sqrt{a_k(v, v)}, \qquad \forall v \in V_k,$

and the boundedness of the intergrid transfer operator

(3.7) $\quad a_k(I_k v, I_k v) \le C\, a_{k-1}(v, v), \qquad \forall v \in V_{k-1},$

for $k = 2,\dots,K$, where $\lambda_k$ is the largest eigenvalue of $A_k$. The proof of (3.6) is standard; see the proof of a similar result for the P1 nonconforming elements in [14]. Inequality (3.7) has been shown in [1] using the approximation property of the operator $I_k$. However, here we see that if $A = \beta_0 I$ is a scalar multiple of the two-by-two identity matrix $I$, then, by the second inequality in (2.13) in Lemma 2.3, we actually have

(3.8) $\quad a_k(I_k v, I_k v) \le 2\, a_{k-1}(v, v), \qquad \forall v \in V_{k-1}.$

This leads to the following main result of this section. Let the convergence rate for Algorithm 3.1 on the $k$th level be measured by the convergence factor $\gamma_k$ satisfying

$|a_k(v - B_k A_k v, v)| \le \gamma_k\, a_k(v, v), \qquad \forall v \in V_k.$
Theorem 3.1. (i) Define $B_k$ by $p = 1$ and $m(k) = 2^{K-k}$ for $k = 2,\dots,K$ in Algorithm 3.1. Then there are $\eta_0, \eta_1 > 0$, independent of $k$, such that

$\eta_0\, a_k(v, v) \le a_k(B_k A_k v, v) \le \eta_1\, a_k(v, v), \qquad \forall v \in V_k,$

with $\eta_0 \ge \sqrt{m(k)}/(C + \sqrt{m(k)})$ and $\eta_1 \le (C + \sqrt{m(k)})/\sqrt{m(k)}$.

(ii) Define $B_k$ by $p = 2$ and $m(k) = m$ for all $k$ in Algorithm 3.1. Then, if $A = \beta_0 I$ is constant, there exists $C > 0$, independent of $k$, such that

$\gamma_k \le \frac{C}{C + \sqrt m}.$

The same conclusion holds if the assumption that $A = \beta_0 I$ is replaced by requiring that $m \ge m_0$, where $m_0$ is sufficiently large but independent of $k$.

The proof of this theorem follows from (3.6)-(3.8) and Theorems 6 and 7 in [6]. From Theorem 3.1, we have an optimal convergence property of the W-cycle and a uniform condition number estimate for the variable V-cycle preconditioner.
4. MULTILEVEL PRECONDITIONERS
In this section we discuss multilevel preconditioners of hierarchical basis and BPX [5] type for (3.4). More precisely, we derive the condition numbers of the additive subspace splittings

(4.1) $\quad \{V_K; (\cdot,\cdot)_E\} = R_1^K \{V_1; (\cdot,\cdot)_E\} + \sum_{k=2}^K R_k^K \{V_k; 2^{2k}(\cdot,\cdot)\}$

and

(4.2) $\quad \{V_K; (\cdot,\cdot)_E\} = R_1^K \{V_1; (\cdot,\cdot)_E\} + \sum_{k=2}^K R_k^K \{(\mathrm{Id}_k - I_k P_{k-1}) V_k; 2^{2k}(\cdot,\cdot)\}.$

The condition number of (4.1) is given by [20]

(4.3) $\quad \kappa = \frac{\lambda_{\max}}{\lambda_{\min}}, \qquad \lambda_{\max} = \sup_{v \in V_K} \frac{\|v\|_E^2}{|||v|||^2}, \qquad \lambda_{\min} = \inf_{v \in V_K} \frac{\|v\|_E^2}{|||v|||^2},$

where

$|||v|||^2 = \inf_{v_k \in V_k:\ v = \sum_k R_k^K v_k} \Big\{ \|v_1\|_E^2 + \sum_{k=2}^K 2^{2k} \|v_k\|^2 \Big\}.$

A similar definition can be given for (4.2).
Theorem 4.1. There are positive constants $c$ and $C$, independent of $K$, such that

(4.4) $\quad c \le \frac{\|v\|_E^2}{|||v|||^2} \le CK, \qquad \forall v \in V_K,$

and

(4.5) $\quad c \le \frac{\|v\|_E^2}{|||v|||_*^2} \le CK, \qquad \forall v \in V_K,$

where

$|||v|||_*^2 = \|Q_1^K v\|_E^2 + \sum_{k=2}^K 2^{2k} \|(\mathrm{Id}_k - I_k P_{k-1}) Q_k^K v\|^2.$

That is, the condition numbers of the additive subspace splittings (4.1) and (4.2) are bounded by $O(K)$ as $K \to \infty$.

Proof. For $k = 2,\dots,K$, it follows from the definitions of $I_k$, $\hat I_k$, and $Q_k^K$, (2.4), and the first inequality of (2.13) that

$2^{2k} \|(\mathrm{Id}_k - I_k P_{k-1}) Q_k^K v\|^2 = 2^{2k} \|\hat I_k (\mathrm{Id}_k - P_{k-1}) Q_k^K v\|^2 \le \tfrac52\, 2^{2k} \|(\mathrm{Id}_k - P_{k-1}) Q_k^K v\|^2 \le C \|(\mathrm{Id}_k - P_{k-1}) Q_k^K v\|_E^2 = C \|Q_k^K v - Q_{k-1}^K v\|_E^2.$

Summing over $k$ and using the orthogonality relations in (2.3), we see that

$\inf_{v_k \in V_k:\ v = \sum_k R_k^K v_k} \Big\{ \|v_1\|_E^2 + \sum_{k=2}^K 2^{2k} \|v_k\|^2 \Big\} \le \|Q_1^K v\|_E^2 + \sum_{k=2}^K 2^{2k} \|(\mathrm{Id}_k - I_k P_{k-1}) Q_k^K v\|^2 \le C \|v\|_E^2,$

which implies the lower bounds in (4.4) and (4.5).

For the upper bounds, we consider an arbitrary decomposition $v = \sum_{k=1}^K R_k^K v_k$ with $v_k \in V_k$. Then we see, by Lemma 2.4, that

$\|v\|_E^2 \le \Big( \sum_{k=1}^K \|R_k^K v_k\|_E \Big)^2 \le K \sum_{k=1}^K \|R_k^K v_k\|_E^2 \le CK \sum_{k=1}^K \|v_k\|_E^2.$

Consequently, by (2.2), we have

$\|v\|_E^2 \le CK \Big( \|v_1\|_E^2 + \sum_{k=2}^K 2^{2k} \|v_k\|^2 \Big).$

Now, taking the infimum with respect to all decompositions, we obtain

$\|v\|_E^2 \le CK \inf_{v_k \in V_k:\ v = \sum_k R_k^K v_k} \Big\{ \|v_1\|_E^2 + \sum_{k=2}^K 2^{2k} \|v_k\|^2 \Big\} \le CK \Big( \|Q_1^K v\|_E^2 + \sum_{k=2}^K 2^{2k} \|(\mathrm{Id}_k - I_k P_{k-1}) Q_k^K v\|^2 \Big),$

which finishes the proof of the theorem. #

We now discuss the algorithmic consequences for the splittings (4.1) and (4.2). Theoretically, Theorem 4.1 already produces suitable preconditioners for the matrix $A_K$ using (4.1) and (4.2). However, they are still complicated, since they involve $L^2$-projections onto $V_k$, $1 < k < K$, which means solving large linear systems within each preconditioning step. To get more practicable algorithms, we replace
the $L^2$ norms in $V_k$ and $W_k = (\mathrm{Id}_k - I_k P_{k-1}) V_k \subset V_k$, $k = 2,\dots,K$, by suitable discrete counterparts. We first consider the splitting (4.1); (4.2) will be discussed later.

Let $\{\phi^j_{\alpha,k}\}$ be the basis functions of $V_k$ such that the edge average of $\phi^j_{\alpha,k}$ equals one at $e^j_{\alpha,k}$ and zero at all other edges. Then each $v \in V_k$ has the representation

$v = \sum_{j=1}^2 \sum_\alpha a^j_\alpha\, \phi^j_{\alpha,k}.$

Thus, by the uniform $L^2$-stability of the bases, which follows from (2.7) in Lemma 2.2, we see that

(4.6) $\quad \tfrac15\, 2^{-2k} \sum_{j=1}^2 \sum_\alpha (a^j_\alpha)^2 \le \|v\|^2 \le \tfrac12\, 2^{-2k} \sum_{j=1}^2 \sum_\alpha (a^j_\alpha)^2.$

Note that (by the same argument as in Lemma 2.2)

(4.7) $\quad 2^{2k} \|\phi^j_{\alpha,k}\|^2 = \tfrac{41}{120}, \qquad a_k(\phi^j_{\alpha,k}, \phi^j_{\alpha,k}) \asymp \|\phi^j_{\alpha,k}\|_E^2 = 5,$

so (4.6) can be interpreted as the two-sided inequality associated with the stability of any of the splittings

(4.8) $\quad \{V_k; 2^{2k}(\cdot,\cdot)\} = \sum_{j=1}^2 \sum_\alpha \{V^j_{\alpha,k}; 2^{2k}(\cdot,\cdot)\},$

(4.9) $\quad \{V_k; 2^{2k}(\cdot,\cdot)\} = \sum_{j=1}^2 \sum_\alpha \{V^j_{\alpha,k}; (\cdot,\cdot)_E\},$

and

(4.10) $\quad \{V_k; 2^{2k}(\cdot,\cdot)\} = \sum_{j=1}^2 \sum_\alpha \{V^j_{\alpha,k}; a_k(\cdot,\cdot)\},$

into the direct sum of one-dimensional subspaces $V^j_{\alpha,k}$ spanned by the basis functions $\phi^j_{\alpha,k}$. Any of the splittings (4.8)-(4.10) can be used to refine (4.1). As we will see below, the difference is just a diagonal scaling (i.e., a multiplication by a diagonal matrix) in the final algorithms. As an example, we consider the splitting (4.10) in detail; the other two cases can be analyzed in the same fashion. With (4.1) and (4.10), we have the splitting

(4.11) $\quad \{V_K; a_K(\cdot,\cdot)\} = R_1^K \{V_1; a_1(\cdot,\cdot)\} + \sum_{k=2}^K \sum_{j=1}^2 \sum_\alpha R_k^K \{V^j_{\alpha,k}; a_k(\cdot,\cdot)\}.$

It follows from (4.4), (4.6), and (4.7) that the condition number for (4.11) still behaves like $O(K)$. Now, associated with this splitting, we can explicitly state the additive Schwarz operator

(4.12) $\quad P_K = R_1^K T_1 + \sum_{k=2}^K \sum_{j=1}^2 \sum_\alpha R_k^K T^j_{\alpha,k},$
where

$T^j_{\alpha,k} v = \frac{a_K(v, R_k^K \phi^j_{\alpha,k})}{a_k(\phi^j_{\alpha,k}, \phi^j_{\alpha,k})}\, \phi^j_{\alpha,k},$

and $T_1 v \in V_1$ solves the elliptic problem

$a_1(T_1 v, w) = a_K(v, R_1^K w), \qquad \forall w \in V_1.$

Thus the matrix representations of all operators with respect to the bases of the respective $V_k$ are

$T_k = \sum_{j=1}^2 \sum_\alpha T^j_{\alpha,k} = S_k (R_k^K)^t A_K, \qquad S_k = \mathrm{diag}\big( a_k(\phi^j_{\alpha,k}, \phi^j_{\alpha,k})^{-1} \big),$

for $2 \le k \le K$, and

$T_1 = A_1^{-1} (R_1^K)^t A_K,$

where for convenience the same notation is used for operators and matrices. Hence it follows from (4.12) that

$P_K = \Big( R_1^K A_1^{-1} (R_1^K)^t + \sum_{k=2}^K R_k^K S_k (R_k^K)^t \Big) A_K \equiv C_K A_K,$

which, together with the definition of $R_k^K = I_K \cdots I_{k+1}$, leads to the typical recursive structure for the preconditioner $C_K$:

(4.13) $\quad C_k = I_k C_{k-1} I_k^t + S_k, \qquad k = K,\dots,2; \qquad C_1 = S_1 = A_1^{-1}.$

Note that with these choices for $S_k$, the multiplication of a vector by $C_K$ is formally a special case of Algorithm 3.1 if one sets $m(k) = 1$, $p = 1$, removes the post-smoothing step, and replaces $A_k$ by a zero matrix for all $k \ge 2$. From (4.13) and the definitions of $I_k$ and $S_k$, we see that a multiplication by $C_K$ involves only $O(n_K + \dots + n_2 + n_1) = O(n_K)$ arithmetic operations, where $n_k \asymp 2^{2k}$ is the dimension of $V_k$. This, together with (4.4), yields suboptimal work estimates for a preconditioned conjugate gradient method for (3.4) with the preconditioner $C_K$: an error reduction by a factor $\varepsilon$ in the preconditioned conjugate gradient algorithm can be achieved with $O(n_K \sqrt{\log n_K}\, \log(\varepsilon^{-1}))$ operations.

We now turn to the discussion of the algorithmic consequences for the splitting (4.2). To do this, we need to construct basis functions in $W_k$, $k = 2,\dots,K$. Starting with the bases $\{\phi^j_{\beta,k}\}$ in $V_k$, to each interior edge $e^j_{\alpha,k-1} \in \partial E_{k-1}$ we replace the two associated basis functions $\phi^j_{2\alpha,k}$, $\phi^j_{2\alpha+e_j,k}$ with their linear combinations

$\psi^j_{2\alpha,k} = \phi^j_{2\alpha,k} + \phi^j_{2\alpha+e_j,k}, \qquad \psi^j_{2\alpha+e_j,k} = \phi^j_{2\alpha,k} - \phi^j_{2\alpha+e_j,k}, \qquad j = 1, 2,$

where $e^j_{2\alpha,k}$ and $e^j_{2\alpha+e_j,k} \in \partial E_k$ form the edge $e^j_{\alpha,k-1}$. For all other interior edges $e^j_{\beta,k}$, which do not belong to any edge in $\partial E_{k-1}$, we set $\psi^j_{\beta,k} = \phi^j_{\beta,k}$. The new bases $\{\psi^j_{\beta,k}\}$ in $V_k$ are still $L^2$-stable; i.e., they satisfy an inequality analogous to (4.6). Moreover, if

$v = \sum_{j=1}^2 \sum_\beta b^j_\beta\, \psi^j_{\beta,k},$
18
ZHANGXIN CHEN AND PETER OSWALD 2 XX
we have
Pk?1 v =
and
j since 2 ;k j
j =1
bj j ;k?1 ; 2 cj j ;k ;
(Idk ? Ik Pk?1 )v =
2 XX
and similar relations hold for j = 2. Hence any function from Wk has a unique representation by linear combinations of f j ;k : 6= 2 g, and this basis system is L2-stable. With this basis system, as in (4.11), we have the corresponding splitting (4:14)
K fVK ; aK ( ; )g = R1 fV1 ; a1 ( ; )g + K 2 XX X k=2 j =1 6=2 K Rk fW j ;k ; ak ( ; )g
? Ik ;k?1 can be completely expressed by the functions l ;k with 6= 2 only. More precisely, we have 1 c1 +e1 = b1 +e1 ? 8 (b2 + b2 ?e2 ) ? b2 +e1 ) ? b2 +e1 ?e2 ) ); 2 2 2 2( 2( 2( 1 1 2 = b1 2 ? 1 (5b2 + b1 + b2 c2 +e 2 2( +e1 ) + b2( +e2 ) ); 2 +e 8 2 1 1 2 = b1 2 ? 1 (5b2 1 2 1 c2 +e +e 2 +e 8 2( +e1 ) + b2 + b2 + b2( +e2 ) );
j =1 6=2
into a direct sum of $R_1^K V_1$ and the one-dimensional spaces $R_k^K W^j_{\nu,k}$ spanned by the basis functions $\psi^j_{\nu,k}$. Then, with the same argument as for (4.13), we derive an additive preconditioner $\hat{C}_K$ for $A_K$, recursively defined by
$$ (4.15)\qquad \hat{C}_k = I_k \hat{C}_{k-1} I_k^t + \hat{I}_k \hat{S}_k \hat{I}_k^t, \quad k = K, \ldots, 2; \qquad \hat{C}_1 = \hat{S}_1 = A_1^{-1}, $$
where
$$ \hat{S}_k = \mathrm{diag}\big\{ a_k(\psi^j_{\nu,k}, \psi^j_{\nu,k})^{-1} : \nu \neq 2\mu,\ j = 1, 2 \big\} $$
are diagonal matrices and $\hat{I}_k$ is the rectangular matrix corresponding to the natural embedding $W_k \subset V_k$ with respect to the bases $\{\psi^j_{\nu,k}\}$ in $W_k$ and $\{\phi^j_{\nu,k}\}$ in $V_k$ (one may use the bases $\{\psi^j_{\nu,k}\}$ for all $V_k$, which would change the $I_k$ representations, but keep $\hat{I}_k$ maximally simple). (4.15) has the same arithmetical complexity as before. We now summarize the results in Theorem 4.1 and the above discussion in the next theorem.
Theorem 4.2. The symmetric preconditioners $C_K$ and $\hat{C}_K$ defined in (4.13) and (4.15) and associated with the multilevel splittings (4.11) and (4.14), respectively, have an $O(n_K)$ operation count per matrix-vector multiplication and produce the following condition numbers:
$$ (4.16)\qquad \kappa(C_K A_K) \le C K, \qquad \kappa(\hat{C}_K A_K) \le C K, \qquad K \ge 1. $$
The splitting (4.11) can be viewed as the nodal basis preconditioner of BPX type [5], while the splitting (4.14) is analogous to the hierarchical basis preconditioner.
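The recursions (4.13) and (4.15) share the same structure, so the O(n_K) cost claim is easy to see in code. Below is a minimal sketch in plain Python, not the paper's implementation: the dictionaries `I` (prolongation matrices) and `S` (diagonal smoother entries) are illustrative assumptions, and level 1 is taken one-dimensional so that A_1^{-1} reduces to the scalar stored in `S[1]`.

```python
# Apply C_k x = S_k x + I_k C_{k-1} I_k^t x recursively, as in (4.13).
# Matrices are nested lists; S[k] holds the diagonal of S_k.

def matvec(A, x):
    return [sum(a * xi for a, xi in zip(row, x)) for row in A]

def transpose(A):
    return [list(col) for col in zip(*A)]

def apply_C(k, x, I, S):
    """Apply the level-k additive preconditioner to the vector x."""
    y = [s * xi for s, xi in zip(S[k], x)]        # S_k x
    if k > 1:
        r = matvec(transpose(I[k]), x)            # restriction I_k^t x
        c = apply_C(k - 1, r, I, S)               # C_{k-1} on the coarse level
        p = matvec(I[k], c)                       # prolongation back to level k
        y = [yi + pi for yi, pi in zip(y, p)]
    return y
```

Each level contributes work proportional to its dimension, so the total cost is O(n_K + n_{K-1} + ... + n_1) = O(n_K), matching the operation count stated above.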
MULTIGRID AND MULTILEVEL METHODS FOR NONCONFORMING ELEMENTS
We now consider multiplicative algorithms for (3.4). One iteration step of a multiplicative algorithm corresponding to the splitting (4.11) takes the form
$$ (4.17)\qquad y^0 = x^j; \quad y^{l+1} = y^l - \omega R_{K-l}^K S_{K-l} (R_{K-l}^K)^t (A_K y^l - f_K), \quad l = 0, \ldots, K-1; \quad x^{j+1} = y^K, $$
where $\omega$ is a suitable relaxation parameter (the range of relaxation parameters for which the algorithm in (4.17) converges is determined mainly by the constant in the inverse inequality (2.2) [27, 26, 15]). The method (4.17) corresponds to a V-cycle algorithm in Algorithm 3.1 with $A_k$ replaced by $\tilde{A}_k = (R_k^K)^t A_K R_k^K$, one pre-smoothing and no post-smoothing steps. The iteration matrix $M_{K,\omega}$ in (4.17) is given by
$$ M_{K,\omega} = (\mathrm{Id}_K - \omega E_1) \cdots (\mathrm{Id}_K - \omega E_{K-1})(\mathrm{Id}_K - \omega E_K), \qquad E_k \equiv R_k^K S_k (R_k^K)^t A_K. $$
An analogous multiplicative algorithm for (3.4) corresponding to the splitting (4.14) can be defined. From the general theory on multiplicative algorithms [27] and by the same argument as for Theorem 4.2, we can show the following result.
Theorem 4.3. For a properly chosen relaxation parameter $\omega$ the multiplicative schemes corresponding to the splittings (4.11) and (4.14) possess the following upper bounds for the convergence rate:
$$ (4.18)\qquad \inf_\omega \|M_{K,\omega}\|_E \le 1 - \frac{C}{K}, \qquad \inf_\omega \|\hat{M}_{K,\omega}\|_E \le 1 - \frac{C}{K}, \qquad K \ge 1, $$
where $M_{K,\omega}$ and $\hat{M}_{K,\omega}$ denote the iteration matrices associated with (4.11) and (4.14), respectively.
We end with two remarks. First, one example for the choice of $\omega$ is $\omega \asymp K^{-1}$, which leads to the upper bounds in (4.18). Second, the diagonal matrices $S_k$ and $\hat{S}_k$ in (4.13) and (4.15) can be replaced by any other spectrally equivalent symmetric matrices of their respective dimension.
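One sweep of the scheme (4.17) can be sketched as follows. This is a toy dense-matrix version for illustration only, not the paper's implementation; `R[k]` stands for the matrix of R_k^K and `S[k]` for the diagonal of S_k.

```python
# One multiplicative sweep: y <- y - omega * R_k S_k R_k^t (A y - f),
# correcting levels K, K-1, ..., 1 in turn, as in (4.17).

def matvec(A, x):
    return [sum(a * xi for a, xi in zip(row, x)) for row in A]

def transpose(A):
    return [list(col) for col in zip(*A)]

def multiplicative_sweep(x, A, f, R, S, omega):
    y = list(x)
    K = max(R)                                     # number of levels
    for k in range(K, 0, -1):
        res = [ai - fi for ai, fi in zip(matvec(A, y), f)]   # A_K y - f_K
        r = matvec(transpose(R[k]), res)           # restrict the residual
        s = [si * ri for si, ri in zip(S[k], r)]   # smooth with S_k
        corr = matvec(R[k], s)                     # transfer back to level K
        y = [yi - omega * ci for yi, ci in zip(y, corr)]
    return y
```

With one level, `R[1]` the identity, and `S[1]` the exact inverse of the diagonal, a single sweep with omega = 1 already solves a 1x1 system exactly; in general several sweeps (or the symmetrized version used in Section 6) are applied.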
5. EQUIVALENT DISCRETIZATIONS
To improve the estimates in Theorems 4.2 and 4.3, we now consider the problem of switching from the NR Q1 discretization system (3.4) to a spectrally equivalent discretization system for which optimal preconditioners are already available. This switching strategy, as mentioned in the introduction, has been used in the context of the multilevel additive Schwarz method; see [21] for the references. The most natural candidate for a switching procedure is the space of conforming bilinear elements
$$ U_K = \big\{ \phi \in C^0(\Omega) : \phi|_E \in Q_1(E)\ \forall E \in E_K \ \text{and}\ \phi|_\Gamma = 0 \big\} $$
on the same partition. We introduce two linear operators $Y_K : U_K \to V_K$ and $\hat{Y}_K : V_K \to U_K$ as follows. If $\phi \in U_K$ and $e$ is an edge of an element in $E_K$, then $Y_K \phi \in V_K$ is given by
$$ (5.1)\qquad \int_e Y_K \phi \, ds = \int_e \phi \, ds, $$
which preserves the zero average values on the boundary edges. If $v \in V_K$, we define $\hat{Y}_K v \in U_K$ by
$$ (5.2)\qquad (\hat{Y}_K v)(z) = 0 \ \text{for all boundary vertices } z \text{ in } E_K; \qquad (\hat{Y}_K v)(z) = \text{average of } v_j(z) \ \text{for all internal vertices } z \text{ in } E_K, $$
where $v_j = v|_{E_j}$ and $E_j \in E_K$ contains $z$ as a vertex. Another choice for $U_K$ is the space of conforming P1 elements
$$ \tilde{U}_K = \big\{ \phi \in C^0(\Omega) : \phi|_E \in P_1(E)\ \forall E \in \tilde{E}_K \ \text{and}\ \phi|_\Gamma = 0 \big\}, $$
where $\tilde{E}_K$ is the triangulation of $\Omega$ generated by connecting the two opposite vertices of the squares in $E_K$. The two linear operators $Y_K : \tilde{U}_K \to V_K$ and $\hat{Y}_K : V_K \to \tilde{U}_K$ are defined as in (5.1) and (5.2), respectively. Moreover, for both the conforming bilinear elements and the conforming P1 elements, it can be easily shown that there is a constant $C$, independent of $K$, such that
$$ (5.3)\qquad 2^K \|\phi - Y_K \phi\| \le C \|\phi\|_E, \ \forall \phi \in U_K; \qquad 2^K \|v - \hat{Y}_K v\| \le C \|v\|_E, \ \forall v \in V_K. $$
Since optimal preconditioners exist for the discretization system $\bar{A}_K$ generated by the conforming bilinear elements (respectively, the conforming P1 elements), the next result follows from (5.3) and the general switching theory in [21].
Theorem 5.1. Let $\bar{C}_K$ be any optimal symmetric preconditioner for $\bar{A}_K$; i.e., we assume that a matrix-vector multiplication by $\bar{C}_K$ can be performed in $O(n_K)$ arithmetical operations, and that $\kappa(\bar{C}_K \bar{A}_K) \le C$, with constant independent of $K$. Then
$$ (5.4)\qquad C_K = S_K + Y_K \bar{C}_K (Y_K)^t $$
is an optimal symmetric preconditioner for $A_K$.
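Applying the switched preconditioner (5.4) only needs products with S_K, Y_K, (Y_K)^t, and one call to the conforming preconditioner. A sketch, under the assumptions that S_K is diagonal and that `apply_Cbar` stands in for any optimal preconditioner for the conforming system (e.g., BPX):

```python
# C_K x = S_K x + Y_K ( Cbar applied to (Y_K)^t x ), as in (5.4).

def matvec(A, x):
    return [sum(a * xi for a, xi in zip(row, x)) for row in A]

def transpose(A):
    return [list(col) for col in zip(*A)]

def apply_switched(x, S_diag, Y, apply_Cbar):
    """Apply the switched preconditioner of Theorem 5.1 to x."""
    smoothed = [s * xi for s, xi in zip(S_diag, x)]     # S_K x
    conforming = apply_Cbar(matvec(transpose(Y), x))    # Cbar (Y_K)^t x
    transferred = matvec(Y, conforming)                 # Y_K (...)
    return [a + b for a, b in zip(smoothed, transferred)]
```

Since Y_K and S_K are sparse, the cost is dominated by `apply_Cbar`, which by assumption is O(n_K), so the combined preconditioner stays optimal.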
6. NUMERICAL EXPERIMENTS
In this section we present the results of numerical examples to illustrate the theory developed in the earlier sections. These numerical examples deal with the Laplace equation on the unit square:
$$ (6.1)\qquad -\Delta u = f \ \text{in } \Omega = (0,1)^2; \qquad u = 0 \ \text{on } \Gamma, $$
where $f \in L^2(\Omega)$. The NR Q1 finite element method (3.4) is used to solve (6.1) with $\{E_k\}_{k=1}^{K}$ being a sequence of dyadically, uniformly refined partitions of $\Omega$ into squares. The coarsest grid is of size $h_1 = 1/2$. The first test concerns the convergence of Algorithm 3.1. The analysis of the third section guarantees the convergence of the W-cycle algorithm with any number of smoothing steps and the uniform condition number property for the variable V-cycle algorithm, but does not give any indication of the convergence of the standard V-cycle algorithm, i.e., Algorithm 3.1 with p = 1 and m(k) = 1 for all k. The first two rows of Table 1 show the results for levels K = 3, ..., 7 for this symmetric V-cycle, where $(\kappa_v, \rho_v)$ denote the condition number for the system $B_K A_K$ and the reduction factor for the system $\mathrm{Id}_K - B_K A_K$ as a function of the mesh size on the finest grid $h_K$. While there is no complete theory for this V-cycle algorithm, it is of practical interest that the condition numbers for this cycle remain relatively small.
1/h_K     8      16     32     64     128
κ_v       1.54   1.70   1.84   1.96   2.06
ρ_v       0.23   0.27   0.32   0.33   0.35
κ_m       1.75   1.81   1.84   1.85   1.85
Table 1. Numerical results for the multiplicative V-cycles.
For comparison, we run the same example with a symmetrized multilevel multiplicative Schwarz method corresponding to (4.17). One step of the symmetric version consists of two substeps, the first coinciding with (4.17) and the second repeating (4.17) in reverse order. The condition numbers $\kappa_m$ for $\tilde{M}_{K,\omega} A_K$ with $\omega \asymp K^{-1}$ are presented in the third row of Table 1, where $\tilde{M}_{K,\omega} = M^t_{K,\omega} M_{K,\omega}$ is now symmetric. The results are better than expected from the upper bounds of Theorem 4.3, which seem to be only suboptimal.
In the second test we treat the above multigrid algorithm and the symmetrized multilevel multiplicative method as preconditioners for the conjugate gradient method. In this test the problem (6.1) is assumed to have the exact solution $u(x,y) = x(1-x)y(1-y)e^{xy}$. Table 2 shows the number of iterations required to achieve the error reduction $10^{-6}$, where the starting vector for the iteration is zero. The iteration numbers $(\mathrm{iter}_v, \mathrm{iter}_m)$ correspond to Algorithm 3.1 with p = 1 and m(k) = 1 for all k and the symmetrized multiplicative algorithm (4.17), respectively. Note that iter_v and iter_m remain almost constant as the mesh is refined.
1/h_K     8     16    32    64    128
iter_v    8      8     9    10     10
iter_m    9      9     9     9     10
Table 2. Iteration numbers for the pcg-iteration.
In the final test we report analogous numerical results (condition numbers and pcg-iteration counts) for the additive preconditioner $C_K$ associated with the splitting (4.11) (subscript a), and the switched preconditioner (5.4) (subscript s), which uses the switch from the system arising from (3.4) to the spectrally equivalent system generated by the conforming bilinear elements via the operators in (5.1) and (5.2). We have implemented the standard BPX-preconditioner [5], with diagonal scaling, as $\bar{C}_K$.
These results are shown in Table 3. The numbers show the slight growth which is typical for most of the additive preconditioners and level numbers K < 10. The condition numbers $\kappa_s$ for the switching procedure are practically identical to the condition numbers for $\bar{C}_K \bar{A}_K$ characterizing the BPX-preconditioner [5] in the conforming bilinear case. The switching procedure is clearly favorable, as can be expected from the theoretical bounds of Theorems 4.2 and 5.1; however, the computations do not indicate whether the upper bound (4.16) is sharp or could be further improved.
1/h_K     8      16     32     64     128    256    512
κ_a       9.6    12.3   14.4   16.1   17.4   18.3   19.3
iter_a    18     22     24     26     27     28     28
κ_s       3.37   3.87   4.24   4.54   4.80   5.05   -
iter_s    10     11     13     13     14     15     -
Table 3. Results for the preconditioners $C_K$ and $\bar{C}_K$.
References
[1] T. Arbogast and Zhangxin Chen, On the implementation of mixed methods as nonconforming methods for second order elliptic problems, Math. Comp. 64 (1995), 943–972.
[2] R. Bank and T. Dupont, An optimal order process for solving finite element equations, Math. Comp. 36 (1981), 35–51.
[3] D. Braess and R. Verfürth, Multigrid methods for nonconforming finite element methods, SIAM J. Numer. Anal. 27 (1990), 979–986.
[4] J. Bramble, Multigrid Methods, Pitman Research Notes in Math., vol. 294, Longman, London, 1993.
[5] J. Bramble, J. Pasciak, and J. Xu, Parallel multilevel preconditioners, Math. Comp. 55 (1990), 1–22.
[6] J. Bramble, J. Pasciak, and J. Xu, The analysis of multigrid algorithms with non-nested spaces or non-inherited quadratic forms, Math. Comp. 56 (1991), 1–34.
[7] S. Brenner, An optimal-order multigrid method for P1 nonconforming finite elements, Math. Comp. 52 (1989), 1–15.
[8] S. Brenner, Multigrid methods for nonconforming finite elements, Proceedings of the Fourth Copper Mountain Conference on Multigrid Methods, J. Mandel et al., eds., SIAM, Philadelphia, 1989, pp. 54–65.
[9] S. Brenner, Convergence of nonconforming multigrid methods without full elliptic regularity, Preprint, 1995.
[10] Zhangxin Chen, Analysis of mixed methods using conforming and nonconforming finite element methods, RAIRO Modél. Math. Anal. Numér. 27 (1993), 9–34.
[11] Zhangxin Chen, Projection finite element methods for semiconductor device equations, Computers Math. Applic. 25 (1993), 81–88.
[12] Zhangxin Chen, Equivalence between multigrid algorithms for nonconforming and mixed methods for second order elliptic problems, IMA Preprint Series #1218, 1994; East-West J. Numer. Math. 4 (1996), to appear.
[13] Zhangxin Chen, R. Ewing, and R. Lazarov, Domain decomposition algorithms for mixed methods for second order elliptic problems, Math. Comp. 65 (1996), to appear.
[14] Zhangxin Chen, D. Y. Kwak, and Y. J. Yon, Multigrid algorithms for nonconforming and mixed methods for symmetric and nonsymmetric problems, IMA Preprint Series #1277, 1994.
[15] M. Griebel and P. Oswald, On the abstract theory of additive and multiplicative Schwarz algorithms, Numer. Math. 70 (1995), 163–180.
[16] P. Klouček, B. Li, and M. Luskin, Analysis of a class of nonconforming finite elements for crystalline microstructures, Math. Comp. (1996), to appear.
[17] P. Klouček and M. Luskin, The computation of the dynamics of martensitic microstructure, Continuum Mech. Thermodyn. 6 (1994), 209–240.
[18] C. Lee, A nonconforming multigrid method using conforming subspaces, Proceedings of the Sixth Copper Mountain Conference on Multigrid Methods, N. Melson et al., eds., NASA Conference Publication, vol. 3224, 1993, pp. 317–330.
[19] P. Oswald, On a hierarchical basis multilevel method with nonconforming P1 elements, Numer. Math. 62 (1992), 189–212.
[20] P. Oswald, Multilevel Finite Element Approximation: Theory and Application, Teubner Skripten zur Numerik, Teubner, Stuttgart, 1994.
[21] P. Oswald, Preconditioners for nonconforming elements, Math. Comp. (1996), to appear.
[22] P. Oswald, Intergrid transfer operators and multilevel preconditioners for nonconforming discretizations, Preprint, 1995.
[23] R. Rannacher and S. Turek, Simple nonconforming quadrilateral Stokes element, Numer. Meth. Partial Diff. Equations 8 (1992), 97–111.
[24] P. Raviart and J. Thomas, A mixed finite element method for second order elliptic problems, Mathematical Aspects of the FEM, Lecture Notes in Mathematics, vol. 606, Springer-Verlag, Berlin & New York, 1977, pp. 292–315.
[25] M. Wang, The W-cycle multigrid method for finite elements with nonnested spaces, Adv. in Math. 23 (1994), 238–250.
[26] H. Yserentant, Old and new convergence proofs for multigrid methods, Acta Numerica, Cambridge Univ. Press, Cambridge, 1993, pp. 285–326.
[27] J. Xu, Iterative methods by space decomposition and subspace correction, SIAM Review 34 (1992), 581–613.
Department of Mathematics, Box 156, Southern Methodist University, Dallas, Texas 75275{0156. | 2018-08-16 21:17:38 | {"extraction_info": {"found_math": false, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.8396210074424744, "perplexity": 2658.5510925848907}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 20, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2018-34/segments/1534221211185.57/warc/CC-MAIN-20180816211126-20180816231126-00273.warc.gz"} |
https://gis.stackexchange.com/questions/306595/qgis-3-4-couldnt-load-sip-module-python-support-will-be-disabled-on-windows-10 | # QGIS 3.4 Couldn't load SIP module. Python support will be disabled on Windows 10
I upgraded from QGIS 3.2 to 3.4 and bad things happened. On start-up I get the error.
Couldn't load SIP module. Python support will be disabled.
Traceback (most recent call last): File "", line 1, in File "C:/PROGRA~1/QGIS3~1.4/apps/qgis/./python\qgis__init__.py", line 80, in import qgis.gui File "C:/PROGRA~1/QGIS3~1.4/apps/qgis/./python\qgis\gui__init__.py", line 27, in from qgis._gui import * ValueError: PyCapsule_GetPointer called with incorrect name
Python version: 3.7.0 (v3.7.0:1bf9cc5093, Jun 27 2018, 04:59:51) [MSC v.1914 64 bit (AMD64)]
Python path: ['C:/PROGRA~1/QGIS3~1.4/apps/qgis/./python', 'C:/Users/Cary/AppData/Roaming/QGIS/QGIS3\profiles\default/python', 'C:/Users/Cary/AppData/Roaming/QGIS/QGIS3\profiles\default/python/plugins', 'C:/PROGRA~1/QGIS3~1.4/apps/qgis/./python/plugins', 'C:\Program Files\QGIS 3.4\bin\python37.zip', 'C:\PROGRA~1\QGIS3~1.4\apps\Python37\DLLs', 'C:\PROGRA~1\QGIS3~1.4\apps\Python37\lib', 'C:\Program Files\QGIS 3.4\bin', 'C:\Users\Cary\AppData\Roaming\Python\Python37\site-packages', 'C:\PROGRA~1\QGIS3~1.4\apps\Python37', 'C:\PROGRA~1\QGIS3~1.4\apps\Python37\lib\site-packages', 'C:\PROGRA~1\QGIS3~1.4\apps\Python37\lib\site-packages\win32', 'C:\PROGRA~1\QGIS3~1.4\apps\Python37\lib\site-packages\win32\lib', 'C:\PROGRA~1\QGIS3~1.4\apps\Python37\lib\site-packages\Pythonwin']
I feel that there is an issue with my Python Path but I've tried several things to fix it and I am not there yet.
Does anyone know a fix for this?
• ** From bad to worse after an uninstall and reinstall. ** An error occurred during execution of following code: qgis.utils.initInterface(1797760070112) Traceback (most recent call last): File "", line 1, in File "C:/PROGRA~1/QGIS3~1.4/apps/qgis/./python\qgis\utils.py", line 219, in initInterface iface = wrapinstance(pointer, QgisInterface) TypeError: wrapinstance() argument 2 must be sip.wrappertype, not sip.wrappertype Python version: 3.7.0 (v3.7.0:1bf9cc5093, Jun 27 2018, 04:59:51) [MSC v.1914 64 bit (AMD64)] QGIS version: 3.4.2-Madeira 'Madeira', 22034aa070 – Cary H Dec 20 '18 at 13:41
• I am in a vicious circle of uninstalling / installing QGIS 3.4.3 to be able to run plugins and installing PyQt5. When I uninstall both of them and reinstall QGIS i cannot build with pb_tool due to something wrong with pyrcc5 and I have to pip install PyQt5. Then I get "QGIS 3.4 Couldn't load SIP module. Python support will be disabled on Windows 10". Is there some magic version of PyQt5 that makes this all work? – Cary H Dec 28 '18 at 19:04
I encountered this problem on Ubuntu 18.04 with QGIS-Madeira 3.4.13 after installing PyQt5. I didn't put it in a virtual environment because I didn't anticipate there would be any issues; it must have overwritten PyQt4 and the older version of sip, as there is a special version associated with PyQt5. I was able to fix the issues with QGIS by uninstalling PyQt5's sip (and PyQt5, which could always be re-installed properly within a virtual environment), then re-installing sip system-wide (in the environment in which QGIS looks for it).
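The "Python path" dump in the question hints at the same cause on Windows: a per-user site-packages directory (`C:\Users\Cary\AppData\Roaming\Python\Python37\site-packages`) sits on the path alongside the Python that QGIS bundles, so a pip-installed PyQt5/sip can shadow the bundled one. A small illustrative helper (not part of QGIS; the marker substrings are assumptions) for spotting such entries in a `sys.path`-style list:

```python
# Flag sys.path entries that point at per-user site-packages, which can
# shadow the PyQt/sip shipped inside the QGIS install directory.

def shadowing_paths(path_entries):
    """Return the entries that look like per-user site-packages."""
    markers = ("AppData\\Roaming\\Python",   # Windows per-user packages
               ".local/lib/python")          # Linux per-user packages
    return [p for p in path_entries if any(m in p for m in markers)]
```

Uninstalling the conflicting packages from those locations (as in the answer above) or removing the directory from the path restores the bundled sip.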
sudo -H pip3 uninstall PyQt5-sip | 2020-10-23 22:04:54 | {"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 1, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.3579806685447693, "perplexity": 11483.432710977719}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 20, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2020-45/segments/1603107865665.7/warc/CC-MAIN-20201023204939-20201023234939-00350.warc.gz"} |
http://www.researchgate.net/publication/6242540_Three-body_contribution_to_the_helium_interaction_potential | Article
# Three-Body Contribution to the Helium Interaction Potential †
Department of Physics and Astronomy, University of Delaware, Newark, Delaware, United States
(Impact Factor: 2.69). 12/2007; 111(44):11311-9. DOI: 10.1021/jp072106n
Source: PubMed
ABSTRACT
Two nonadditive three-body analytic potentials for helium were obtained: one based on three-body symmetry-adapted perturbation theory (SAPT) and the other one on supermolecular coupled-cluster theory with single, double, and noniterative triple excitations [CCSD(T)]. Large basis sets were used, up to the quintuple-zeta doubly augmented size. The fitting functions contain an exponentially decaying component describing the short-range interactions and damped inverse powers expansions for the third- and fourth-order dispersion contributions. The SAPT and CCSD(T) potentials are very close to each other. The largest uncertainty of the potentials comes from the truncation of the level of theory and can be estimated to be about 10 mK or 10% at trimer's minimum configuration. The relative uncertainties for other configurations are also expected to be about 10% except for regions where the nonadditive contribution crosses zero. Such uncertainties are of the same order of magnitude as the current uncertainties of the two-body part of the potential.
• ##### Article: Theoretical Study of Triatomic Systems Involving Helium Atoms
ABSTRACT: The triatomic 4He system and its isotopic species ${^4{\rm He}_2^3{\rm He}}$ are theoretically investigated. By adopting the best empirical helium interaction potentials, we calculate the bound state energy levels as well as the rates for the three-body recombination processes: 4He + 4He + 4He → 4 He2 + 4He and 4He + 4He + 3He → 4He2 + 3He. We consider not only zero total angular momentum J = 0 states, but also J > 0 states. We also extend our study to mixed helium-alkali triatomic systems, that is 4He2X with X = 7Li, 23Na, 39K, 85 Rb, and 133Cs. The energy levels of all the J ≥ 0 bound states for these species are calculated as well as the rates for three-body recombination processes such as 4He + 4He + 7Li → 4 He2 + 7Li and 4He + 4He + 7Li → 4 He7Li + 4He. In our calculations, the adiabatic hyperspherical representation is employed but we also obtain preliminary results using the Gaussian expansion method.
Few-Body Systems 08/2013; 54(7-10). DOI:10.1007/s00601-013-0708-z · 0.77 Impact Factor
• ##### Article: Interplay between theory and experiment in investigations of molecules embedded in superfluid helium nanodroplets
ABSTRACT: Helium is the only substance that has been observed on macroscopic scale to form the fourth state of matter, the superfluid state. However, until recently superfluid helium had not found any practical applications, mainly because it expels all other atoms or molecules. Only in the 1990s was it discovered that it is possible to mix in other substances with superfluid helium if helium is prepared as small droplets, called nanodroplets, containing only a few thousand atoms. This discovery led to the development of a new and very powerful experimental technique, called helium-nanodroplet spectroscopy. Superfluid helium creates a gentle matrix around the impurities and - due to superfluidity and to very weak interactions of helium atoms with other atoms or molecules - allows measurements of the spectra with precision not much lower than in the gas phase. Consequently, helium-nanodroplet spectroscopy enables very accurate probing of molecules or clusters which cannot be investigated in the gas phase due to their instability. This category includes `fragile' molecules, isomers, radicals, and clusters in secondary minima. The major experimental developments will be described, emphasizing their importance for understanding basic principles of physics and new insights into chemically relevant processes. The experiments have been assisted by theoretical work on impurity-Hen clusters. Most such work involves first-principles quantum simulations. Although the number of helium atoms that can be included in such simulations is significantly smaller than in a typical nanodroplet, theory explains most of the observed trends reasonably well. Theoretical results can also be compared directly and much more precisely than in the case of the droplets with the results of molecular beam experiments on clusters of controllable size, with the number of helium atoms ranging from 1 to almost 100. 
Most of the simulations published to date will be discussed and the level of agreement with experiment will be critically evaluated. The results of the simulations are very sensitive to details of the He-He and impurity-He interaction potentials used, and most of the current discrepancies between theory and experiment can be traced down to the uncertainties of the potentials. Thus, an important component of this review will be an analysis of various sources of errors in potential energy surfaces.
International Reviews in Physical Chemistry 04/2008; 27(2):273-316. DOI:10.1080/01442350801933485 · 7.03 Impact Factor
##### Article: Adiabatic hyperspherical study of triatomic helium systems
ABSTRACT: The 4He3 system is studied using the adiabatic hyperspherical representation. We adopt the current state-of-the-art helium interaction potential including retardation and the nonadditive three-body term, to calculate all low-energy properties of the triatomic 4He system. The bound state energies of the 4He trimer are computed as well as the 4He+4He2 elastic scattering cross sections, the three-body recombination and collision induced dissociation rates at finite temperatures. We also treat the system that consists of two 4He and one 3He atoms, and compute the spectrum of the isotopic trimer 4He2 3He, the 3He+4He2 elastic scattering cross sections, the rates for three-body recombination and the collision induced dissociation rate at finite temperatures. The effects of retardation and the nonadditive three-body term are investigated. Retardation is found to be significant in some cases, while the three-body term plays only a minor role for these systems. Comment: 24 pages 6 figures Submitted to Physical Review A
Physical Review A 09/2008; 78(6). DOI:10.1103/PhysRevA.78.062701 · 2.81 Impact Factor | 2015-12-01 09:18:31 | {"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.7055956721305847, "perplexity": 1599.313854010059}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2015-48/segments/1448398466178.24/warc/CC-MAIN-20151124205426-00340-ip-10-71-132-137.ec2.internal.warc.gz"} |
https://chemistry.stackexchange.com/questions/58661/what-does-a-number-in-brackets-after-another-number-mean-i-e-211cm-1?noredirect=1 | # What does a number in brackets after another number mean? I.e. 21(1)cm-1
What does the number in brackets mean in these two examples?
$$21(1)\ \mathrm{cm^{-1}}$$
and
$$1.0(3)\times10^{-7}$$
The results of measurements and other numerical values of quantities are often given with an associated standard uncertainty. A numerical value and the associated uncertainty may be expressed as shown in the question:
\begin{align} y&=21(1)\ \mathrm{cm^{-1}}\\[6pt] &=a(b)\ \mathrm{cm^{-1}} \end{align}
where
$$y$$ is the estimate of the measurand (e.g. the result of a measurement) expressed in the unit $$\mathrm{cm^{-1}}$$,
$$a$$ is the numerical value, and
$$b$$ denotes a standard uncertainty expressed in terms of the least significant digit(s) in $$a$$.
It is important to note that the given uncertainty refers to the least significant digits of the given numerical value. For example, in the expression
$$l=23.4782(32)\ \mathrm m$$
the $$(32)$$ represents a standard uncertainty equal to
$$u(l)=0.0032\ \mathrm m$$
Many physical constants are also given in this form. For example, the previously recommended value for the molar gas constant $$R$$ as given by NIST from 25 June 2015 until the new value became available on 20 May 2019:
$$R=8.3144598(48)\ \mathrm{J\ mol^{-1}\ K^{-1}}$$
where the $$(48)$$ represents a standard uncertainty of
$$u(R)=0.0000048\ \mathrm{J\ mol^{-1}\ K^{-1}}$$
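The concise notation is easy to handle programmatically: the parenthesized digits scale with the number of decimal places in the value. Here is a small helper (illustrative only, not from any standard library) that converts the parenthesis form into a value and a standard uncertainty:

```python
import re

def parse_concise(s):
    """Parse concise uncertainty notation, e.g. '23.4782(32)' -> (23.4782, 0.0032)."""
    m = re.fullmatch(r"([+-]?\d+(?:\.(\d+))?)\((\d+)\)", s)
    if not m:
        raise ValueError(f"not in concise notation: {s!r}")
    value = float(m.group(1))
    decimals = len(m.group(2) or "")            # digits after the decimal point
    uncertainty = int(m.group(3)) * 10.0 ** (-decimals)
    return value, uncertainty
```

For example, `parse_concise("21(1)")` gives a value of 21 with standard uncertainty 1. Extending the sketch to forms with a power of ten, like $1.0(3)\times10^{-7}$, is left out for brevity.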
This form is in accordance with various current standards, in particular the GUM and the ISO 80000 series quoted below.
The Guide to the Expression of Uncertainty in Measurement (GUM) also shows other permissible forms:
7.2.2 When the measure of uncertainty is $$u_\mathrm c(y)$$, it is preferable to state the numerical result of the measurement in one of the following four ways in order to prevent misunderstanding. (The quantity whose value is being reported is assumed to be a nominally 100 g standard of mass $$m_\mathrm S$$; the words in parentheses may be omitted for brevity if $$u_\mathrm c$$ is defined elsewhere in the document reporting the result.)
1) “$$m_\mathrm S=100{,}021\,47\ \mathrm g$$ with (a combined standard uncertainty) $$u_\mathrm c = 0{,}35\ \mathrm{mg}$$.”
2) “$$m_\mathrm S=100{,}021\,47(35)\ \mathrm g$$, where the number in parentheses is the numerical value of (the combined standard uncertainty) $$u_\mathrm c$$ referred to the corresponding last digits of the quoted result.”
3) “$$m_\mathrm S=100{,}021\,47(0{,}000\,35)\ \mathrm g$$, where the number in parentheses is the numerical value of (the combined standard uncertainty) $$u_\mathrm c$$ expressed in the unit of the quoted result.”
4) “$$m_\mathrm S=(100{,}021\,47\pm0{,}000\,35)\ \mathrm g$$, where the number following the symbol $$\pm$$ is the numerical value of (the combined standard uncertainty) $$u_\mathrm c$$ and not a confidence interval.”
Note that item 2) corresponds to the form given in the question.
Concerning item 4), however, the GUM notes
The ± format should be avoided whenever possible because it has traditionally been used to indicate an interval corresponding to a high level of confidence and thus may be confused with expanded uncertainty (…).
Furthermore, ISO 80000 notes
Uncertainties are often expressed in the following manner: $$(23{,}478\,2\pm0{,}003\,2)\ \mathrm m$$. This is, however, wrong from a mathematical point of view. $$23{,}478\,2\pm0{,}003\,2$$ means $$23{,}481\,4$$ or $$23{,}475\,0$$, but not all values between these two values. (…)
From Wikipedia:
In metrology, physics, and engineering, the uncertainty or margin of error of a measurement, when explicitly stated, is given by a range of values likely to enclose the true value. This may be denoted by error bars on a graph, or by the following notations:
• measured value ± uncertainty
• measured value $^{+uncertainty}_{−uncertainty}$
• measured value (uncertainty) | 2020-07-16 03:05:36 | {"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 1, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 29, "wp-katex-eq": 0, "align": 1, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.862979531288147, "perplexity": 539.0375001409473}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.3, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2020-29/segments/1593657181335.85/warc/CC-MAIN-20200716021527-20200716051527-00132.warc.gz"} |
https://spmchemistry.blog.onlinetuition.com.my/2012/03/molecular-formula.html | Molecular Formula
1. The molecular formula of a substance is the chemical formula that gives the actual number of atoms of each element in the substance.
2. A molecular formula is the same as or a multiple of the empirical formula.
3. For example, the empirical formula of carbon dioxide is CO2 and the molecular formula is also CO2.
4. In contrast, the empirical formula of ethane is CH3, while its molecular formula is C2H6.
Finding Molecular Formula
Example
Given that the empirical formula of benzene is CH and its relative molecular mass is 78, find the molecular formula of benzene. [Relative Atomic Mass: Carbon: 12; Hydrogen: 1]
Let’s say the molecular formula of benzene is CnHn.
The relative molecular mass of CnHn
= n(12) + n(1)
= 13n
13n = 78
n = 78/13 = 6
Therefore, the molecular formula of benzene is C6H6.
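The calculation above can be sketched in code (Python; the dictionary of atomic masses simply restates the values given in the problem):

```python
# Find the molecular formula (CH)n of benzene from its empirical formula
# and relative molecular mass, as in the worked example.
relative_atomic_mass = {"C": 12, "H": 1}   # values given in the problem

empirical_mass = relative_atomic_mass["C"] + relative_atomic_mass["H"]  # CH -> 13
relative_molecular_mass = 78

n = relative_molecular_mass // empirical_mass
print(f"n = {n}, molecular formula: C{n}H{n}")   # n = 6 -> C6H6
```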
Example:
What is the mass of metal X that can combine with 14.4 g of oxygen to form an oxide of X with molecular formula X2O3? (RAM: O = 16; X = 56)
Number of moles of oxygen
= 14.4/16
= 0.9 mol
From the molecular formula, we learn that the ratio of element X to oxygen X:O = 2:3
Therefore, the number of moles of X = 0.9 × 2/3 = 0.6 mol
Number of mole,
n = mass/Molar mass
0.6 = mass/56*
mass = 33.6g
The mass of element X = 33.6g
*The molar mass of a substance (in g/mol) is numerically equal to its relative atomic (or molecular) mass.
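The mole-ratio method of the second example can be sketched the same way (Python; the `ram` dictionary restates the relative atomic masses given in the problem):

```python
# Mass of metal X combining with 14.4 g of oxygen to form X2O3,
# following the mole-ratio method of the worked example.
ram = {"O": 16, "X": 56}           # relative atomic masses given in the problem

moles_O = 14.4 / ram["O"]          # 0.9 mol
moles_X = moles_O * 2 / 3          # X:O ratio in X2O3 is 2:3 -> 0.6 mol
mass_X = moles_X * ram["X"]        # n = m / M  ->  m = n * M
print(f"mass of X = {mass_X:.1f} g")   # 33.6 g
```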
https://www.kouzloterapie.cz/ball/Aug-3470/
# how to calculate price of crushing stone
### STONE CALCULATOR [How Much Stone do I Need] Construction
Jun 12, 2019 · If you have the price of the stone per unit mass (e.g. cost per pound) or price per volume (e.g. cost per cubic foot) you can work out the total cost. $$Cost = Weight \times Price\,per\,unit\,mass$$ or $$Cost = Volume \times Price\,per\,unit\,volume$$
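A minimal sketch of that cost formula (Python; the sample weight and price are made-up illustrative numbers):

```python
# Total cost of crushed stone from weight and unit price,
# per the formula Cost = Weight x Price-per-unit-mass.
def stone_cost(weight, price_per_unit_mass):
    return weight * price_per_unit_mass

# e.g. 2.5 tons at $45/ton (illustrative numbers only)
print(stone_cost(2.5, 45.0))  # 112.5
```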
### How to Calculate How Much Crushed Stone I Need Home
Aug 23, 2019· To get to this figure, you must know how the landscape stone calculator or crushed concrete calculator works. It's a matter of doing the math. Multiply 12 by 12 to get 144 square feet.
### How Much Crushed Stone Do You Need? A Sure-Fire Formula
Crushed stone is produced by passing stones through a crushing machine at a quarry. Various types of stone are used in this operation, such as granite and limestone. At the bottom of the crushing machine lies a screen that traps the crushed stone product (the finer material that passes through the screen is also kept and sold -- as stone dust).
### how to calculate the price of stone crushing plant
How much the Stone Crushing Plant Price. The stone crushing plant is consisted of jaw crusher, impact crusher, cone crusher, vibrating screen, vibrating feeder, belt conveyor, etc. With the development of infrastructure construction, the demand of construction materials is increasing. More and more businessmen start to invest in the stone
### Stone Tonnage Calculator Rohrer’s
Stone Tonnage Calculator rohrers-admin 2018-06-06T15:51:08-04:00 Tonnage calculations are based on averages and should be used as estimates. Actual amounts needed may vary.
https://par.nsf.gov/biblio/10230251-electron-ptychography-achieves-atomic-resolution-limits-set-lattice-vibrations | Electron ptychography achieves atomic-resolution limits set by lattice vibrations
Transmission electron microscopes use electrons with wavelengths of a few picometers, potentially capable of imaging individual atoms in solids at a resolution ultimately set by the intrinsic size of an atom. However, owing to lens aberrations and multiple scattering of electrons in the sample, the image resolution is reduced by a factor of 3 to 10. By inversely solving the multiple scattering problem and overcoming the electron-probe aberrations using electron ptychography, we demonstrate an instrumental blurring of less than 20 picometers and a linear phase response in thick samples. The measured widths of atomic columns are limited by thermal fluctuations of the atoms. Our method is also capable of locating embedded atomic dopant atoms in all three dimensions with subnanometer precision from only a single projection measurement.
Published in: Science, Volume 372, Issue 6544, pp. 826-831. ISSN 0036-8075. Publisher: American Association for the Advancement of Science (AAAS). NSF-PAR ID: 10230251.
Abstract: The Electron Loss and Fields Investigation with a Spatio-Temporal Ambiguity-Resolving option (ELFIN-STAR, or heretoforth simply: ELFIN) mission comprises two identical 3-Unit (3U) CubeSats on a polar (∼93° inclination), nearly circular, low-Earth (∼450 km altitude) orbit. Launched on September 15, 2018, ELFIN is expected to have a >2.5 year lifetime. Its primary science objective is to resolve the mechanism of storm-time relativistic electron precipitation, for which electromagnetic ion cyclotron (EMIC) waves are a prime candidate. From its ionospheric vantage point, ELFIN uses its unique pitch-angle-resolving capability to determine whether measured relativistic electron pitch-angle and energy spectra within the loss cone bear the characteristic signatures of scattering by EMIC waves or whether such scattering may be due to other processes. Pairing identical ELFIN satellites with slowly-variable along-track separation allows disambiguation of spatial and temporal evolution of the precipitation over minutes-to-tens-of-minutes timescales, faster than the orbit period of a single low-altitude satellite (T_orbit ∼ 90 min).
Each satellite carries an energetic particle detector for electrons (EPDE) that measures 50 keV to 5 MeV electrons with ΔE/E < 40% and a fluxgate magnetometer (FGM) on a ∼72 cm boom that measures magnetic field waves (e.g., EMIC waves) in the range from DC to (…)
https://www.prepanywhere.com/prep/textbooks/calculus-and-vectors-mcgraw-hill/chapters/chapter-7-cartesian-vectors/materials/7-7-chapter-review | 7.7 Chapter Review
Chapter 7, Section 7.7 (31 solution videos)
Consider the vector \displaystyle \vec{v}=[-6,3] .
Write \displaystyle \vec{v} in terms of \displaystyle \vec{i} and \displaystyle \vec{j} .
Q1a
Given \displaystyle \vec{u}=[5,-2] and \displaystyle \vec{v}=[8,5] , evaluate each of the following.
\displaystyle -5 \vec{u}
Q2a
Given \displaystyle \vec{u}=[5,-2] and \displaystyle \vec{v}=[8,5] , evaluate each of the following. \displaystyle \vec{u}+\vec{v}
Q2b
\displaystyle 4 \vec{u}+2 \vec{v}
Q2c
An airplane is flying at an airspeed of \displaystyle 345 \mathrm{~km} / \mathrm{h} on a heading of \displaystyle 040^{\circ} . The wind is blowing at \displaystyle 18 \mathrm{~km} / \mathrm{h} from a bearing of \displaystyle 087^{\circ} . Determine the ground velocity of the airplane. Include a diagram in your solution.
Q3
Calculate the dot product of each pair of vectors. Round your answers to two decimal places.
Q4a
Calculate the dot product of each pair of vectors. Round your answers to two decimal places.
Q4b
Calculate the dot product of each pair of vectors.
\displaystyle \vec{u}=[5,2], \vec{v}=[-6,7]
Q5a
Calculate the dot product of each pair of vectors.
\displaystyle \vec{u}=-3 \vec{i}+2 \vec{j}, \vec{v}=3 \vec{i}+7 \vec{j}
Q5b
Calculate the dot product of each pair of vectors.
\displaystyle \vec{u}=[3,2], \vec{v}=[4,-6]
Q5c
Which vectors from below are orthogonal? Explain.
a) \displaystyle \vec{u}=[5,2], \vec{v}=[-6,7]
b) \displaystyle \vec{u}=-3 \vec{i}+2 \vec{j}, \vec{v}=3 \vec{i}+7 \vec{j}
c) \displaystyle \vec{u}=[3,2], \vec{v}=[4,-6]
Q6
Two vectors have magnitudes of \displaystyle 5.2 and \displaystyle 7.3 . The dot product of the vectors is \displaystyle 20 . What is the angle between the vectors? Round your answer to the nearest degree.
Q7
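Q7 can be checked numerically (Python; a direct application of u · v = |u||v| cos θ):

```python
# Angle between two vectors from |u|, |v| and the dot product:
# u . v = |u| |v| cos(theta)  ->  theta = arccos( (u.v) / (|u| |v|) )
import math

mag_u, mag_v, dot = 5.2, 7.3, 20
theta = math.degrees(math.acos(dot / (mag_u * mag_v)))
print(round(theta))  # ~58 degrees
```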
Calculate the angle between the vectors in each pair. Illustrate geometrically.
\displaystyle \vec{a}=[6,-5], \vec{b}=[7,2]
Q8a
Calculate the angle between the vectors in each pair. Illustrate geometrically.
\displaystyle \vec{p}=[-9,-4], \vec{q}=[7,-3]
Q8b
Determine the projection of \displaystyle \vec{u} on \displaystyle \vec{v} .
\displaystyle |\vec{u}|=56,|\vec{v}|=100 , angle \displaystyle \theta between \displaystyle \vec{u} and \displaystyle \vec{v} is \displaystyle 125^{\circ}
Q9a
Determine the projection of \vec{u} on \vec{v}.
\displaystyle \vec{u}=[7,1], \vec{v}=[9,-3]
Q9b
Determine the work done by each force, \displaystyle \vec{F} , in newtons, for an object moving along the
vector \displaystyle \vec{d} , in metres. \displaystyle \vec{F}=[16,12], \vec{d}=[3,9]
Q10a
Determine the work done by each force, \displaystyle \vec{F} , in newtons, for an object moving along the
vector \displaystyle \vec{d} , in metres.
\displaystyle \vec{F}=[200,2000], \vec{d}=[3,45]
Q10b
An electronics store sells 40-GB digital music players for \$229 and 80-GB players for \$329. Last month, the store sold 125 of the 40-GB players and 70 of the 80-GB players.
a) Represent the total revenue from sales of the players using the dot product.
b) Find the total revenue in part a).
Q11
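Part b) of Q11 can be checked numerically (Python; revenue as the dot product of a price vector and a quantity vector):

```python
# Q11: total revenue as a dot product of price and quantity vectors.
prices = [229, 329]     # $ per 40-GB and 80-GB player
units  = [125, 70]      # units sold last month

revenue = sum(p * q for p, q in zip(prices, units))
print(revenue)  # 51655
```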
Determine the exact magnitude of each vector.
\displaystyle \overrightarrow{\mathrm{AB}}
joining \displaystyle \mathrm{A}(2,7,8)\\\mathrm{B}(-5,9,-1)
Q12a
Determine the exact magnitude of each vector.
\displaystyle \overrightarrow{\mathrm{PQ}}
joining
\displaystyle \mathrm{P}(0,3,6)\\\mathrm{Q}(4,-9,7)
Q12b
Given the vectors \vec{a}=[3,-7,8], \vec{b}=[-6,3,4] , and \vec{c}=[2,5,7] , evaluate each expression.
\displaystyle 5 \vec{a}-4 \vec{b}+3 \vec{c}
Q13a
Given the vectors \vec{a}=[3,-7,8], \vec{b}=[-6,3,4] , and \vec{c}=[2,5,7] , evaluate each expression.
\displaystyle -5 \vec{a} \cdot \vec{c}
Q13b
Given the vectors \vec{a}=[3,-7,8], \vec{b}=[-6,3,4] , and \vec{c}=[2,5,7] , evaluate each expression.
\displaystyle \vec{b} \cdot(\vec{c}-\vec{a})
Q13c
If \vec{u}=[6,1,8] is orthogonal to \vec{v}=[k,-4,5] , determine the value(s) of k .
Q14
Determine \vec{u} \times \vec{v} for each pair of vectors.
Q15a
Determine \vec{u} \times \vec{v} for each pair of vectors.
\displaystyle \vec{u}=[4,1,-3], \vec{v}=[3,7,8]
Q15b
Determine the area of the parallelogram defined by the vectors \vec{u}=[6,8,9] and
\displaystyle \vec{v}=[3,-1,2]
Q16
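Q16 can be checked numerically (Python; the cross-product components follow the standard determinant expansion, and the area equals |u × v|):

```python
# Q16: area of the parallelogram defined by u and v equals |u x v|.
import math

u = [6, 8, 9]
v = [3, -1, 2]

cross = [u[1]*v[2] - u[2]*v[1],
         u[2]*v[0] - u[0]*v[2],
         u[0]*v[1] - u[1]*v[0]]
area = math.sqrt(sum(c * c for c in cross))
print(cross, round(area, 2))  # [25, 15, -30], ~41.83
```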
Use an example to verify that
\displaystyle \vec{a} \times(\vec{b}+\vec{c})=\vec{a} \times \vec{b}+\vec{a} \times \vec{c}
Q17
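Q17 literally asks for verification by example; one concrete check (Python; the three sample vectors are arbitrary choices of mine):

```python
# Q17: verify a x (b + c) = a x b + a x c with a concrete example.
def cross(u, v):
    return [u[1]*v[2] - u[2]*v[1],
            u[2]*v[0] - u[0]*v[2],
            u[0]*v[1] - u[1]*v[0]]

def add(u, v):
    return [x + y for x, y in zip(u, v)]

a, b, c = [1, 2, 3], [4, -5, 6], [-7, 8, 9]   # arbitrary sample vectors
lhs = cross(a, add(b, c))
rhs = add(cross(a, b), cross(a, c))
print(lhs, rhs)   # both sides agree
```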
A force of 200 \mathrm{~N} is applied to a wrench in a clockwise direction at 80^{\circ} to the handle, 10 \mathrm{~cm} from the centre of the bolt.
a) Calculate the magnitude of the torque.
b) In what direction does the torque vector point?
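Part a) can be checked numerically (Python; this assumes the intended method is the standard scalar formula τ = rF sin θ, with the 10 cm lever arm converted to metres):

```python
# Torque magnitude: tau = r * F * sin(theta).
import math

r, F, theta_deg = 0.10, 200, 80   # 10 cm lever arm, 200 N force, 80 degrees
tau = r * F * math.sin(math.radians(theta_deg))
print(round(tau, 2), "N*m")       # ~19.7 N*m
```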
Determine the projection of \vec{u} on \vec{v}, and its magnitude,
\displaystyle \vec{u}=[-2,5,3]\\\vec{v}=[4,-8,9]
https://crypto.stackexchange.com/questions/78331/exclusion-proof-cost | Exclusion proof cost
I am trying to understand what is the cost of non-membership verification for a universal accumulator?
More specifically, how can I compute it?
Is using an accumulator more efficient than a Merkle hash tree?
• Does this answer your question? Nonmembership witness in universal accumulator – AleksanderRas Mar 22 at 17:24
• @AleksanderRas I wonder about the verification costs. Of the two graphs above, one is constant and one is linear; which one is correct? Is the verification size O(1)? – jhdm Mar 23 at 11:09
• Where are those graphs taken from? – Maeher Mar 25 at 9:58
• @Maeher It is taken from "Real-World Performance of Cryptographic Accumulators" study – jhdm Mar 26 at 13:34
• You seem to be misunderstanding what those graphs are showing. The first one shows how verification time scales with growing size of the accumulated set when run on 8 cores. The second shows how much additional cores help to speed up verification for a set of size 10.000. It turns out they don't really help at all. – Maeher Mar 26 at 13:57
The answer depends on what universal accumulator scheme you're considering:
1. Sorted Merkle Tree If the Merkle tree is sorted, then one can just send the two neighboring Merkle paths that bracket the queried element, showing its non-inclusion. This gives a logarithmic non-membership proof and also a logarithmic verification cost.
2. Merkle Tree Following the idea of Micali, Rabin and Kilian you could create two trees. One for the elements in the set and another tree for the so-called frontier set. Frontier is the set of ancestors to all values that are not in the tree (note that this is about the same size as the size of the set itself). Then, in order to prove that a value is contained in the set you use the first tree, and to prove that it is not you use the second tree. See a related answer here by Yehuda Lindell and the paper here.
3. RSA-accumulator Let $$A$$ denote the accumulator's current value and $$g$$ a generator in the RSA group, so that $$A=g^u$$ (quick reminder that in an RSA-accumulator you can "only" accumulate primes!). $$exclusionProof(A,x):$$ Since $$x$$ is not accumulated, $$gcd(u,x)=1$$; therefore one can compute $$a,b$$, so-called Bezout coefficients, such that $$ax+bu=gcd(x,u)=1$$. Hence $$\pi=(d,b)$$ with $$d=g^a$$. Verifying the proof is done by checking $$d^x\cdot A^b=g$$. Therefore, both the proof size and the verification cost are constant. However, you would need a trusted setup for an RSA accumulator. Batching exclusion proofs is also possible. See this recent result.
4. Pairing-based accumulator Damgard et al. creates non-membership proofs for pairing-based accumulators. The verification cost of a non-membership proof is a single pairing-check, however computing such a proof is polynomial in the accumulated set size.
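The RSA-accumulator exclusion proof above can be sketched with toy numbers (Python; the tiny modulus and generator are insecure placeholders chosen for illustration only, and a real deployment needs a trusted-setup modulus of 2048+ bits):

```python
# Toy RSA-accumulator exclusion proof (illustrative only; insecure parameters).
from math import gcd

def egcd(a, b):
    # Extended Euclid: returns (g, s, t) with s*a + t*b == g == gcd(a, b).
    if b == 0:
        return a, 1, 0
    g, s, t = egcd(b, a % b)
    return g, t, s - (a // b) * t

n = 3233            # toy modulus (61 * 53); real schemes use 2048+ bit moduli
g_base = 2          # assumed generator of a large subgroup

primes_in_set = [3, 5, 11]      # accumulated primes
u = 1
for p in primes_in_set:
    u *= p
A = pow(g_base, u, n)           # accumulator value A = g^u

x = 7                           # a prime NOT in the set
assert gcd(u, x) == 1
_, a, b = egcd(x, u)            # Bezout coefficients: a*x + b*u == 1
proof = (pow(g_base, a, n), b)  # pi = (g^a, b); negative exponents work in Python 3.8+

# Verify: (g^a)^x * A^b == g^(a*x + b*u) == g^1 == g
d, b_coeff = proof
lhs = (pow(d, x, n) * pow(A, b_coeff, n)) % n
print("exclusion proof verified:", lhs == g_base)
```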
• Thank you. So can I also say the space complexity for a Merkle tree is logarithmic, or is it O(N)? I don't understand how to compute it. – jhdm Mar 21 at 11:45
• If there are N elements in your set, then you will have N leaves in your Merkle-tree. Hence total number of nodes in the Merkle tree will be N+(N/2)+(N/4)+...4+2+1= 2N -1. Namely, your space complexity is O(N). – István András Seres Mar 21 at 12:55
• @István András Seres you said verification cost is constant, but "Batch verify n proofs faster than verifying a single proof n times" is written at gakonst.com/deep-dive-rsa-accumulators. So is the verification cost O(n) or O(1)? I don't understand it. – jhdm Mar 22 at 21:49
• You could naively verify n non-membership proofs one by one. However, luckily, RSA-accumulators admit batching of non-membership proofs: essentially you can batch $n$ non-membership proofs and verify them at the cost of a single non-membership proof verification, although the prover time is still linear in the number of batched proofs. Hence the verification cost is O(1), but creating the batched proof is O(n). Please refer to the linked blog post or to the Boneh et al. paper. – István András Seres Mar 23 at 7:54
• @ István András Seres Thank you! – jhdm Mar 23 at 8:38
In section 5.3 of this paper you can find a comparison between an accumulator and MHT. The complexity totally depends on the scheme.
In general, an exclusion proof in a plain (unsorted) MHT is linear in the size of the set rather than in the depth of the tree, because the verifier has to check all the possible paths to be convinced a leaf is not in the tree.
• So a sorted Merkle tree needs logarithmic verification cost, and an RSA dynamic accumulator needs O(1) verification cost. Is that true? – jhdm Apr 1 at 9:44
• I couldn't understand this article. What are the sorted Merkle tree and RSA accumulator verification costs? Which one is better for revocation? – jhdm Apr 20 at 7:31
https://wordpress.discretization.de/ddg2016/2016/05/20/the-3-sphere/ | # The 3-Sphere
So far we have seen geometries in 2D and 3D, which are the dimensions we are familiar with. But mathematicians have found interesting geometries in higher dimensions, and it would be great if we could visualize them. In particular, a huge collection of interesting surfaces comes from the 3-sphere $${\Bbb S}^3$$ in $${\Bbb R}^4$$. Though the developers of Houdini probably did not aim at creating an environment beyond 3D Euclidean space, it comes as a surprise that Houdini is a natural tool to explore the 3-sphere.
Definition
The 3-sphere $${\Bbb S}^3$$ is defined as a subset of unit vectors in $${\Bbb R}^4$$
${\Bbb S}^3=\Big\{(x,y,z,w)\in{\Bbb R}^4\,\Big|\,x^2+y^2+z^2+w^2=1\Big\}.$
Just like the unit spheres in other dimensions, the tangent vectors $$v = (\dot x,\dot y,\dot z,\dot w)\in{\Bbb R}^4$$ at a point $$p_0 = (x_0,y_0,z_0,w_0)\in{\Bbb S}^3$$ satisfy $$\langle v,p_0\rangle_{\Bbb R^4}=0$$; that is, the tangent space at $$p_0$$ is a 3-dimensional hyperplane with normal vector $$p_0$$. The inner product of tangent vectors (also known as the metric) on the 3-sphere is inherited from the $${\Bbb R^4}$$ inner product, which gives us the notion of measures such as length, angle, area, and volume, and therefore defines geodesics (shortest paths), polygons (with edges being geodesics) and Riemannian curvatures (deviation of the sum of exterior angles from $$2\pi$$ per unit area of a polygon oriented in a particular direction).
Stereographic Projection
By the stereographic projection one has an identification between $${\Bbb S}^3$$ and $${\Bbb R}^3\cup\{\infty\}$$:
$\texttt{S3toR3}((x,y,z,w)) = \left({x\over 1-w},{y\over 1-w},{z\over 1-w}\right)$
and its inverse
$\texttt{R3toS3}(P=(x,y,z))=\left({2x\over 1+|P|^2},{2y\over 1+|P|^2},{2z\over 1+|P|^2},{-1+|P|^2\over 1+|P|^2}\right).$
Each point in $${\Bbb R}^3\cup\{\infty\}$$ represents a unique point in $${\Bbb S}^3$$ and vice versa, hence we visualize geometries in $${\Bbb S}^3$$ by mapping them in $${\Bbb R}^3\cup\{\infty\}$$ through the stereographic projection. The stereographic projection is particularly nice because it is conformal, hence the angles we see after projection are the same as they would look in 4D!
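A quick numerical sketch of the two maps (Python; the function names follow the text, and the sample point is an arbitrary choice) confirming that they are mutually inverse:

```python
# Stereographic projection S3 -> R3 and its inverse, as defined above.
def S3toR3(p):
    x, y, z, w = p
    return (x / (1 - w), y / (1 - w), z / (1 - w))

def R3toS3(P):
    x, y, z = P
    s = 1 + x*x + y*y + z*z                           # 1 + |P|^2
    return (2*x / s, 2*y / s, 2*z / s, (s - 2) / s)   # last entry: (-1 + |P|^2)/(1 + |P|^2)

# Round trip: R3toS3 lands on the unit 3-sphere, S3toR3 recovers the point.
P = (0.3, -1.2, 2.0)
q = R3toS3(P)
print(q, sum(c*c for c in q))   # a point on S^3 (squared norm 1)
back = S3toR3(q)
print(back)                     # recovers P
```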
Another remarkable fact about $$\texttt{S3toR3}$$ is that it maps minimal surfaces (soap films extremizing area) in $${\Bbb S}^3$$ to Willmore surfaces in $${\Bbb R}^3$$ (shapes of elastic surfaces extremizing bending energy). [J.L. Weiner 1978]
Nevertheless, length is not preserved under stereographic projection; objects in $${\Bbb S}^3$$ closer to $$(0,0,0,1)$$ will look much larger after projection. Put differently, the seemingly infinitely large space $${\Bbb R}^3\cup\{\infty\}$$ is actually not that large after $$\texttt{R3toS3}$$. In fact $${\Bbb S^3}$$ is compact. In topology $$\texttt{R3toS3}:{\Bbb R}^3\to{\Bbb S}^3\setminus\{(0,0,0,1)\}$$ is called Alexandroff’s one-point compactification.
One-point compactification is just the abstract way of convincing oneself $${\Bbb R}^3\cup\{\infty\}$$ can be viewed as a closed and bounded set, with stereographic projection the concrete way of doing so. This brings a nice picture that $${\Bbb S}^3$$ is in fact the union of 2 solid tori with their boundary torus surfaces glued together. (The complement of a solid torus in $${\Bbb R}^3$$ is another solid torus after one-point compactification.)
$${\Bbb S}^3$$ is the set of unit quaternions
Elements in $${\Bbb S}^3$$ are naturally viewed as quaternions with unit length. This makes $${\Bbb S^3}$$ a (non-abelian) group with quaternionic multiplication (multiplications of unit quaternions are unit quaternions). This group is in fact a double cover of 3D rotation group $$SO(3)$$ because each unit quaternion $$q$$ represents a 3D rotation $$v\mapsto qv\overline q$$ and that $$q$$ and $$-q$$ represent the same 3D rotation.
Rotations in 4D
To explore a 2-spherical globe in 3D, you apply 3D rotation to the sphere. To explore around a 3-sphere, you apply rotations in 4D.
4D rotations ($$SO(4)$$) have 6 degrees of freedom. They can be represented by a pair of unit quaternions: given $$(q_1,q_2)\in{\Bbb S^3}\times{\Bbb S^3}$$, the map $$\psi\mapsto q_1\psi\overline q_2$$ from $${\Bbb H}\to{\Bbb H}$$ is a 4D rotation. The representation $${\Bbb S^3}\times{\Bbb S^3}\to SO(4)$$ is a double cover that $$(q_1,q_2)$$ rotation is the same as $$(-q_1,-q_2)$$ rotation. Composition of 4D rotation is implemented in $${\Bbb S^3}\times{\Bbb S}^3$$ as $$(q_1,q_2)\circ (q_3,q_4) = (q_1,q_2)\cdot(q_3,q_4) = (q_1q_3,q_2q_4)$$.
Let’s look at some special subgroups of 4D rotations. One example is $$\{(q_1,q_2)\in{\Bbb S^3}\times{\Bbb S^3}\,|\, q_1=q_2\}$$. It rotates 4-vectors as $$\psi\mapsto q\psi \overline q$$. When $$\psi\in{\Bbb S^3}$$ is visualized via $$\texttt{S3toR3}$$, these become just the 3D rotations.
Another example is $$\{(q_1,q_2)\in{\Bbb S^3}\times{\Bbb S^3}\,|\, q_1=\overline{q_2}\}$$. In the $$\texttt{S3toR3}$$ visualization it behaves as a sphere inversion in the direction of $$\pm{\rm Im}(q_1)$$.
Another interesting subgroup is $$\{(e^{i\theta},1)\,|\, \theta\in[0,2\pi]\}$$. The trajectory $$e^{i\theta}\psi$$ of a given $$\psi$$ becomes a circle that winds both “along” and “around” a torus. The trajectory of $$\psi e^{i\theta}$$, coming from the other subgroup $$\{(1,e^{i\theta})\}$$, is similar but with the other orientation. Note that given any random $$\psi_1$$, $$\psi_2\in{\Bbb S^3}$$, the circles $$\{e^{i\theta}\psi_1\,|\,\theta\in[0,2\pi]\}$$ and $$\{e^{i\theta}\psi_2\,|\,\theta\in[0,2\pi]\}$$ are always interlinked (if they are not the same circle).
Hopf fibration
Hopf fibration says that $${\Bbb S}^3$$ is in fact the disjoint union of circles, and the set of these circles is a 2-sphere. That is, there is a smooth map $$\pi:{\Bbb S}^3\to{\Bbb S}^2$$, called the Hopf map, and the preimage of each point in $${\Bbb S}^2$$ is a circle in $${\Bbb S}^3$$.
A concrete example of a Hopf map is $$\psi\overset{\pi}\mapsto\overline\psi i \psi$$ for $$\psi\in{\Bbb S^3}$$. The result is just a rotation of the unit vector $$i$$ in 3D by $$\overline\psi\in{\Bbb S}^3$$ so the result lies in $${\Bbb S}^2$$. One can check that $$\pi$$ is onto (covers the whole 2-sphere) and for each $$s=\overline\psi i \psi\in{\Bbb S}^2$$, the preimage $$\pi^{-1}(s)=\{e^{i\theta}\psi\,|\,\theta\in [0,2\pi]\}$$. These circles are also called the Hopf fibers.
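The Hopf map can be checked numerically with quaternion arithmetic (Python; `qmul` and `qconj` are helper names of mine, and the sample quaternion is arbitrary): the result of $$\overline\psi i \psi$$ should be a purely imaginary unit quaternion, i.e. a point on $${\Bbb S}^2$$.

```python
# Hopf map pi(psi) = conj(psi) * i * psi for a unit quaternion psi = (w, x, y, z).
import math

def qmul(a, b):   # Hamilton product of quaternions
    aw, ax, ay, az = a
    bw, bx, by, bz = b
    return (aw*bw - ax*bx - ay*by - az*bz,
            aw*bx + ax*bw + ay*bz - az*by,
            aw*by - ax*bz + ay*bw + az*bx,
            aw*bz + ax*by - ay*bx + az*bw)

def qconj(a):
    w, x, y, z = a
    return (w, -x, -y, -z)

# A unit quaternion psi (normalized from arbitrary coefficients).
raw = (1.0, 2.0, -0.5, 0.25)
norm = math.sqrt(sum(c*c for c in raw))
psi = tuple(c / norm for c in raw)

i = (0.0, 1.0, 0.0, 0.0)
s = qmul(qconj(psi), qmul(i, psi))   # conj(psi) * i * psi

# Result is purely imaginary and of unit length: a point on S^2.
print(s)
```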
https://www.gradesaver.com/textbooks/math/algebra/algebra-a-combined-approach-4th-edition/chapter-1-section-1-2-symbols-and-sets-of-numbers-exercise-set-page-17/52 | ## Algebra: A Combined Approach (4th Edition)
Published by Pearson
# Chapter 1 - Section 1.2 - Symbols and Sets of Numbers - Exercise Set - Page 17: 52
#### Answer
$\sqrt 3$ is a: real number and an irrational number
#### Work Step by Step
*Real numbers are all numbers that correspond to points on a number line.
*Irrational numbers are real numbers that cannot be expressed as a ratio of integers.
http://mathhelpforum.com/math-topics/160589-evalute-value-extension.html | # Thread: Evalute the value of extension
1. ## Evalute the value of extension
evaluate the value of the extension (x) which elongates under the action of a load (w) of 6 N? [Use your results obtained from test 1]
The experiment was as follows:
First, we put the spring on the board with a paper under the spring, to measure the distance between each point, from the beginning to the end point of the extension, for each weight.
Then we increased the load to (0.2 N); the measurement from the starting point to the end point of the extension was 51 mm ((same steps for all loads)).
Then we decreased the weights (one at a time) from the end (0.5) to (0.1) and got the same measurements.
Then we calculated the average by taking the increase and decrease readings and dividing by 2.
For information, there is a positive relationship between weight and extension.
Please help me
2. First of all you have to find spring constant. For that use readings 5 and 3, and 4 and 2.
$k = \frac{(0.5 - 0.3)}{(135 - 78)10^{-3}}$
Similarly calculate k for the readings 4 and 2. Then using F = -kx find x for 6 N.
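In code, the suggested two-step recipe is just Hooke's law applied twice (a Python sketch; treating the quoted readings 0.5 and 0.3 as forces in newtons, and 135 mm and 78 mm as the corresponding extensions, is an assumption based on the numbers above):

```python
# Estimate the spring constant k from two (force, extension) readings,
# then invert Hooke's law F = k*x to get the extension for a 6 N load.
force_hi, force_lo = 0.5, 0.3        # N (assumed units)
ext_hi, ext_lo = 135e-3, 78e-3       # m (135 mm and 78 mm)

k = (force_hi - force_lo) / (ext_hi - ext_lo)  # N/m, slope of the F-x line
x = 6.0 / k                                    # extension under 6 N, in m

print(k, x)  # roughly 3.51 N/m and 1.71 m
```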
3. please, I don't understand
see test (1)
4. k = mg/x
0.04/1 = 6/x
x = 6.6×10^-6 cm
plese help me | 2017-09-25 23:11:03 | {"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 1, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.6713255047798157, "perplexity": 1827.9651141585425}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2017-39/segments/1505818693459.95/warc/CC-MAIN-20170925220350-20170926000350-00672.warc.gz"} |
https://www.physicsforums.com/threads/why-is-the-induced-emf-inversely-proportional-to-time-faradays-law-of-induction.588832/ | # Why is the induced EMF inversely proportional to time (Faraday's law of induction)
1. Mar 20, 2012
### Em713
Hi, I am doing coursework on Faraday's law of induction. My assignment was to carry out experiments which confirm Faraday's law and also to explain the physics of how Faraday's law works... My experiments all worked perfectly, producing straight-line graphs showing that:
$\epsilon \propto N$
$\epsilon \propto B$
$\epsilon\propto\frac{1}{t}$
I did not do any experiments to test $\epsilon \propto A$ ...
I have explained the physics behind the first two results, but for the life of me I can't justify WHY the EMF is inversely proportional to time... I have thought about F = BIl = $\frac{BQl}{t}$, therefore the force on the delocalised electrons is inversely proportional to time... But it's all dead ends after that...
Any help would be GREATLY appreciated!!
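One concrete way to see the 1/t behaviour (a Python sketch, not from the thread, under the usual simplifying assumption that a flux B·A through N turns collapses to zero linearly over a time t, so the magnitude of the emf is NBA/t):

```python
# Faraday's law: emf = -N * dPhi/dt.  If the flux B*A is removed
# linearly over a time t, |emf| = N*B*A/t, i.e. emf is proportional
# to 1/t for fixed N, B and A.
def emf_magnitude(N, B, A, t):
    return N * B * A / t

N, B, A = 100, 0.5, 0.01  # illustrative values: turns, tesla, m^2
assert emf_magnitude(N, B, A, 2.0) == 2 * emf_magnitude(N, B, A, 4.0)
```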
2. Mar 20, 2012
### rock.freak667
Well if your EMF didn't vary with time, it would mean that your magnetic field would remain constant. Without it changing you cannot get an induced emf (well something has to change with time essentially). | 2017-10-23 14:32:04 | {"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.492160439491272, "perplexity": 620.5199513367098}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2017-43/segments/1508187826049.46/warc/CC-MAIN-20171023130351-20171023150351-00297.warc.gz"} |
https://wiki.zcubes.com/index.php?title=Manuals/calci/NORMDIST&mobileaction=toggle_view_mobile | # Manuals/calci/NORMDIST
NORMDIST(Number, Mean, StandardDeviation, Cumulative, accuracy)
• Number is the value.
• Mean is the mean.
• StandardDeviation is the standard deviation.
• Cumulative is the logical value, TRUE or FALSE.
• accuracy is the number of correct decimal places for the result.
• NORMDIST() returns the normal cumulative distribution.
## Description
• This function gives the Normal Distribution for the particular Mean and Standard Deviation.
• Normal Distribution is the function that represents the distribution of many random variables as a symmetrical bell-shaped graph.
• This distribution is a Continuous Probability Distribution. It is also called the Gaussian Distribution.
• In NORMDIST(Number, Mean, StandardDeviation, Cumulative), Number is the value at which the function is evaluated, Mean is the Arithmetic Mean of the distribution, StandardDeviation is the Standard Deviation of the distribution, and Cumulative is the Logical Value indicating the form of the function.
• If Cumulative is TRUE, this function gives the Cumulative Distribution Function; if it is FALSE, it gives the Probability Density Function.
• The equation for the Normal Distribution is:
$$f(x) = \frac{1}{\sigma \sqrt{2\pi}} \, e^{-\frac{(x - \mu)^2}{2\sigma^2}}$$
where $\mu$ is the Mean of the distribution and $\sigma$ is the Standard Deviation of the distribution.
• In this formula, if $\mu = 0$ and $\sigma = 1$, the distribution is called the Standard Normal Distribution or the Unit Normal Distribution.
This function will return an error when any one of the arguments is non-numeric, or when StandardDeviation $\le 0$.
• When Cumulative is TRUE, the result is the integral of this formula from $-\infty$ to Number; when it is FALSE, the density formula itself is used.
## Examples
1. =NORMDIST(37,29,2.1,FALSE) = 0.000134075
2. =NORMDIST(37,29,2.1,TRUE) = 0.99993041384
3. =NORMDIST(10.75,17.4,3.2,TRUE) = 0.01884908749
4. =NORMDIST(10.75,17.4,3.2,FALSE) = 0.014387563
NORMDIST | 2022-01-25 05:08:21 | {"extraction_info": {"found_math": false, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.8949099183082581, "perplexity": 1456.6748773817405}, "config": {"markdown_headings": true, "markdown_code": false, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2022-05/segments/1642320304760.30/warc/CC-MAIN-20220125035839-20220125065839-00273.warc.gz"} |
https://kievitplein.be/2019-01-23+top-use-of-fused-calcium-chloride-in-reaction-of.html | # top use of fused calcium chloride in reaction of
### Calcium Chloride (CAS 10043-52-4) Manufacturers
Calcium Chloride[Dihydrate] Calcium chloride, CaCl2, is a salt of calcium and chlorine. It behaves as a typical ionic halide, and is solid at room temperature. Common applications include brine for refrigeration plants, ice and dust control on roads, and desiccation.
### AP rxn & misc_
2011-4-2 · air. e) Calcium carbide reacts with water. f) Electrolysis of fused magnesium bromide occurs. g) Hydrogenation of propylene occurs. Aqueous solutions of calcium chloride
### Common Uses of Calcium Chloride | Hunker
Like sodium chloride, calcium chloride lowers the melting point of ice, so one of its most common uses is for road deicing. It works at much lower temperatures — minus 20°F vs. 20°F for rock salt — because it actually releases heat in an exothermic reaction when it dissolves.
### Production of metal powders by reduction of metal …
1989-4-11 · Production of metal powders by reduction of metal salts in a fused bath. The operation was performed at the temperature of the fused calcium chloride, namely 830° C. Calcium metal in the form of beads 0.5 to 1 mm in diameter was injected into the top of the reaction vessel at the same time as the sodium chloride.
### Impacts of sodium chlorite combined with calcium …
Microbial activity and browning were minimized and fresh-cut rose apple quality was maintained using sodium chlorite (SC) combined with calcium chloride (CC) and calcium ascorbate (CaAs) and by investigating the optimal concentration and dipping time of SC for …
### Magnesium - Essential Chemical Industry
2019-4-17 · Magnesium is the lightest structural metal used today, some 30% lighter than aluminium, and is generally used in alloys. Pure magnesium burns vigorously once molten, but magnesium alloys have higher melting points and are widely used in the automotive and aircraft industries.
### Can I use calcium chloride dihydrate instead of …
Can I use calcium chloride dihydrate instead of calcium chloride for the preparation of buffer? calcium chloride is probably the Kent's Turbo Calcium or Peladow Calcium Chloride. Whereas the
### Pharmaceutical Chemicals - Calcium Chloride …
Avail from us a huge spectrum of Pharmaceutical Chemicals such as Boric Acid, Sodium Sulphate, Calcium Chloride, Magnesium Sulphate Heptahydrate, Zinc Oxide and Light Magnesium Carbonate. In order to ensure the quality of chemicals, we have hired a team of professionals who carefully process these chemicals using modern machinery and equipment.
### Use of Conventional Drying Agents - UCLA
2016-5-5 · Use of Conventional Drying Agents Commonly used drying agents in organic laboratories are calcium chloride (CaCl 2), sodium sulfate (Na 2 SO 4) calcium sulfate (CaSO 4, also known as Drierite) and magnesium sulfate (MgSO 4), all in their anhydrous form. How do they work? All four of them readily form hydrates at low temperatures according to
### CHEMICAL REACTIONS OF CARBIDES, NITRIDES, - EDGE
-Apparatus for use with fused potassium hydroxide. Carried out at 23.4° C (room temperature) and at about 100° C by the use of a preheated water bath. Because of the danger of explosion when dry Cu2C2 contacts strong oxidizing acid media, first the Cu2C2 was added to water (100 ml) in the reaction …
### How to use chloride in a sentence
How to use chloride in a sentence Looking for sentences and phrases with the word chloride? Here are some examples. Sentence Examples. and redistilling it from fused chloride of calcium. The distillate is then saturated with fused chloride of calcium, and redistilled.
### Does nail rust in anhydrous calcium chloride - …
The difference between calcium chloride and fused calcium chloride is the bond acting on their molecules. The fused calcium chloride is fused while the calcium chloride is not fused.
### Industrial Chemicals Manufacturer,Fine Chemicals …
PARMAR CHEMICALS - Manufacturer, Supplier, Trader of Industrial Chemicals, Fine Chemicals, Potassium Permanganate, Battery Sulfuric Acid and Hydrated Lime Powder, etc based in …
### calcium chloride flakes Exporters, Suppliers, …
calcium chloride flakes product offers from exporters, manufacturers, suppliers, wholesalers and distributors globally by price, quantity, order, delivery and shipping terms, country - Page 1
### Borax Powder Manufacturer | Titanium Dioxide (TiO2
Manufacturer exporter Supplier of Borax Powder in India - Bhavani Chemicals is well established Manufacturer exporter & Supplier of Titanium Dioxide (TiO2) in Ahmedabad Gujarat.
### Determination of Calcium by Titration with EDTA
2011-3-1 · determination of calcium by titration with EDTA.pdf Put your unknown in the oven at 150 °C for at least 30 minutes, while you prepare your EDTA solution and do your standardization titrations. Preparation of EDTA: 1. Add 500 mL of distilled water to your largest beaker (at least 600 mL). 2. Place the beaker on a magnetic stirrer and add 0.05 g
### Calcium chloride | 10043-52-4
Calcium chloride Chemical Properties,Uses,Production Uses Anhydrous calcium chloride is commonly used in industrial production and laboratories as a dehydrating and drying agent; it is mainly used to dry gases (nitrogen, oxygen, hydrogen, hydrogen chloride, sulfur dioxide, etc., but not ammonia, hydrogen sulfide and alcohol), petroleum, organic solvents, etc.
### What is the reaction between CaCl2 + H2O? - Quora
Calcium chloride is a chemical compound made up of calcium ions and chlorine ions. The ions are held together by an ionic, or weak salt, bond. Mixing calcium chloride with water is an exothermic reaction, which means that the combination of the two substances releases heat. Thus, when you add calcium chloride to water, the solution heats.
### Calcium Chloride Dihydrate | AMERICAN ELEMENTS
2019-4-19 · Calcium Chloride Dihydrate is an excellent water soluble crystalline Calcium source for uses compatible with chlorides. Chloride compounds can conduct electricity when fused or dissolved in water. Chloride materials can be decomposed by electrolysis to chlorine gas and the metal.
### Downs cell - Wikipedia
2019-4-14 · The Downs cell uses a carbon anode and an iron cathode. The electrolyte is sodium chloride that has been heated to the liquid state. Although solid sodium chloride is a poor conductor of electricity, when molten the sodium and chloride ions are mobilized, becoming charge carriers that allow conduction of electric current. Some calcium chloride and/or chlorides of barium and strontium, and
### inorganic chemistry - What is fused calcium chloride
2019-4-7 · What is fused calcium chloride? As far as I have researched, fused calcium chloride is the hydrated form of calcium chloride. If that is true, then why is it called 'fused', why not simply hydrated?
### Structural Biochemistry/Organic Chemistry/Method of
2019-4-7 · Structural Biochemistry/Organic Chemistry/Method of Fischer Esterification. From Wikibooks, open books for an open world. calcium chloride. In an esterification reaction, it is essential to use a drying tube because one of the byproducts is water. According to Le Chatelier's principle, if water is added to the ester after during the
### What is the reaction after mixing calcium chloride and
What is the reaction after mixing calcium chloride and silica gel? Update Cancel. Answer Wiki. 1 Answer. Use a curated talent platform. What is fused Calcium Chloride? What is produced when sodium hydroxide reacts with calcium chloride?
### Anodic and Cathodic Reactions in Molten Calcium …
2013-7-18 · Calcium chloride is a very interesting electrolyte in that it is available, virtually free, in high purity form as a waste product from the chemical industry. It has a very large solubility for oxide ions, far greater than many alkali halides and other divalent halides and has the same toxicity as
### Calcium dichloride | CaCl2 - PubChem
Calcium Chloride is a crystalline, white substance, soluble in water, Calcium Chloride is the chloride salt of calcium, a bivalent metallic element with many crucial biological roles. Calcium is a major constituent of the skeleton but plays many roles as an intracellular and plasma ion as well. In medicine, calcium chloride is also used as a 10% solution in injection, for calcium replenishment. | 2020-07-13 14:23:22 | {"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.5601821541786194, "perplexity": 9203.031489303738}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2020-29/segments/1593657145436.64/warc/CC-MAIN-20200713131310-20200713161310-00489.warc.gz"} |
http://codeforces.com/blog/light42 |
### light42's blog
By light42, history, 4 years ago, ,
Can anyone give me recommendation of codeforces' past contest? What I need is some sort of contest where high mathematical analysis is required, even the easiest problem need some sort of understanding which people without higher mathematical experience will have a hard time to grasp it. Btw this is for my self-training.
• 0
By light42, history, 5 years ago, ,
At first sight, it seems that it can simply be solved using a disjoint-set data structure. But the problem is more complicated because we handle two kinds of sets at once. Can anyone give a hint for this problem? (PS: I've already tried to write and prove my solution for 3 days but still get WA. I suspect I didn't cover all the possible cases in the problem. And sorry if I'm not explaining my solution; it's too messed up, I don't want to use it anymore.)
• -6
By light42, history, 5 years ago, ,
It's been roughly 2 years since I started competing on Codeforces and even 4 years since I first dived into competitive programming. My problem-solving skills have indeed improved, but not significantly. I tried my best to solve as many problems as possible, but those problems are quite hard to solve with my current skills. And yet there are 5 months left before the next ICPC regional, and there'll be new contestants from my country stronger than last year's. I really want to go to WF, even if it's only once, before I quit and focus on becoming a developer... But it's really hard. I'm tired...
• +17
By light42, 5 years ago, ,
My method to solve this problem is to generate the N^2 possible switch combinations and, for each combination, simply check whether it is an optimal solution. I picked this method because it's the most understandable way in the analysis, and some coders used it and got accepted. But my code is still too slow (it didn't pass the 8-minute limit). I tried to optimize the code but it's still too slow :(. Is there any way to optimize my code further?
PS: Sorry if this post is bad. This is the first time I'm posting code (and sorry if my English is bad).
#include <cstdio>
#include <iostream>
#include <vector>
#include <climits>
#include <algorithm>
#include <set>
using namespace std;
#define LOOP(i,n) for(int i = 0; i < n; i++) // shorthand loop macro
int T,N,Leng;
string outlet[200];
string devices[200];
int main()
{
cin >> T;
LOOP(i,T)
{
set<pair<string,int> > flipSwitch;
set<string> dv;
string dummy;
int best = INT_MAX;
cin >> N >> Leng;
LOOP(j,N)
{
cin >> outlet[j];
}
LOOP(j,N)
{
cin >> devices[j];
dv.insert(devices[j]);
}
// build all N*N candidate switch masks: XOR of outlet j with device k
LOOP(j,N)
{
LOOP(k,N)
{
string curr = "";
int change = 0;
LOOP(l,Leng)
{
if(outlet[j][l] == devices[k][l])curr += '0';
else{curr += '1';change++;}
}
flipSwitch.insert(make_pair(curr, change));
}
}
// try each candidate mask: flip every outlet by it and compare with the device set
for(set<pair<string,int> >::iterator it = flipSwitch.begin(); it != flipSwitch.end(); it++)
{
string flipIt = (*it).first;
set<string> ot;
LOOP(j,N)
{
string x = "";
LOOP(k,Leng)
{
if(flipIt[k] == '0') x += outlet[j][k];
else
{
if(outlet[j][k] == '1')x += '0';
else x += '1';
}
}
ot.insert(x);
}
if(ot == dv)best = min(best,(*it).second);
}
printf("Case #%d: ",i+1);
if(best == INT_MAX)printf("Not Possible\n");
else printf("%d\n",best);
}
return 0;
}
• -3
By light42, 5 years ago, ,
It's been roughly one week since two unusual rounds (Round 294 and Round 295) were held. But no new round has been announced yet. Could anyone tell me what happened? I'm a little bit impatient about this XD.
• +34
By light42, 6 years ago, ,
Hi, I want to know how anyone feel right now about competitive programming. Are you feel lonely, happy, content, or desperate? Since I dive myself to the world of competitive programming the desperate feelings of not get any better always haunts me. I too sometimes feel lonely because I compete on my own. But despite of that, I really can't make myself quit. Because the competitive programming is so attractive. When I'm facing a problem there's a desire to solve it. And what a glad feeling when I already passed the problem. I want to be the very best in competitive programming, and I'll will not giving up on it. So what about you? Do you walk the same path as me? :) | 2020-02-25 16:25:42 | {"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 1, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.27797049283981323, "perplexity": 2016.2055989983867}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2020-10/segments/1581875146123.78/warc/CC-MAIN-20200225141345-20200225171345-00493.warc.gz"} |
https://mathhelpboards.com/threads/favorite-old-threads-best-math-thread-3.475/ | ### Welcome to our community
#### Ackbach
##### Indicium Physicus
Staff member
Originally posted by Mazerakham, August 9th, 2010.
This problem is a doozy:
A circle is in the plane with center at O_a and some radius r_a. Another circle, not touching the first circle anywhere, has center at O_b and some radius r_b. Points A and B are (arbitrary) points on the first and second circles respectively. Finally, Point C is in such a location that ABC is an equilateral triangle. Suddenly, points A and B begin rotating in the same direction (say, counterclockwise) around their respective circles with the same angular speed (say, w) about their centers. During this process, point C moves so that ABC remains an equilateral triangle.
Prove that point C is moving in a circle with same direction (counterclockwise) and angular speed (w) about some center O_c somewhere in the plane.
My solution:
Vectors are the way to go. Vectors and rotation matrices. One of the key facts about rotation matrices is that in 2 dimensions, anyway, two rotation matrices commute.
Let $\hat{x}=\begin{bmatrix}1\\0\end{bmatrix}$ be the unit vector in the $x$ direction, and let
$$R_{\theta}=\begin{bmatrix}\cos(\theta) &-\sin(\theta)\\ \sin(\theta) &\cos(\theta)\end{bmatrix}$$ be the rotation matrix through $\theta$ radians.
Fact:
$$R_{\varphi}R_{\theta}=R_{\varphi+\theta}=R_{ \theta+\varphi}=R_{\theta}R_{\varphi}.$$
Without loss of generality, we may let the vector from the origin to point A be
$$\vec{A}=R_{\omega t}\,\hat{x},$$ and the vector from the origin to point B be
$$\vec{B}=\vec{O}_{b}+R_{\omega t+\theta}\,\hat{x}.$$
We want to show that
$$\vec{C}=\vec{O}_{c}+R_{\omega t}\,\vec{y},$$ for some constant vector $\vec{y}.$
Note that
$$\vec{AB}=\vec{B}-\vec{A}=\vec{O}_{b}+R_{\omega t+\theta}\,\hat{x}-R_{\omega t}\,\hat{x}.$$
Also note that
$$\vec{AC}=R_{\pi/3}\,\vec{AB},$$ by virtue of ABC being an equilateral triangle. Finally, we see that
$$\vec{C}=\vec{A}+\vec{AC}.$$ Thus, we compute:
$$\vec{C}=R_{\omega t}\,\hat{x}+R_{\pi/3}\left(\vec{O}_{b}+R_{\omega t+\theta}\,\hat{x}-R_{\omega t}\,\hat{x}\right)$$
$$=R_{\pi/3}\,\vec{O}_{b}+\left(R_{\omega t}+R_{\pi/3}R_{\omega t}R_{\theta}-R_{\pi/3}R_{\omega t}\right)\hat{x}$$
$$=R_{\pi/3}\vec{O}_{b}+R_{\omega t}\left(I+R_{\pi/3}R_{\theta}-R_{\pi/3}\right)\hat{x}.$$
So, let
$$\vec{O}_{c}=R_{\pi/3}\vec{O}_{b}$$ and
$$\vec{y}=\left(I+R_{\pi/3}R_{\theta}-R_{\pi/3}\right)\hat{x},$$
and you're done. QED.
Second Post:
Whoops, I've got a slight (fixable) error. You need to put in multipliers for the radii times the unit vector. But if you carry that through, you'll still be able to factor out the critical rotation matrix. Your vector $\vec{y}$ will be different.
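The commutation fact the proof leans on is easy to check numerically (an illustrative Python sketch, not part of the original thread):

```python
import math

def rot(theta):
    """2x2 rotation matrix through theta radians, as nested tuples."""
    c, s = math.cos(theta), math.sin(theta)
    return ((c, -s), (s, c))

def matmul(a, b):
    return tuple(
        tuple(sum(a[i][k] * b[k][j] for k in range(2)) for j in range(2))
        for i in range(2)
    )

phi, theta = 0.7, 1.9
lhs = matmul(rot(phi), rot(theta))
rhs = rot(phi + theta)
# R_phi R_theta = R_{phi+theta} = R_theta R_phi, up to floating-point error
assert all(abs(lhs[i][j] - rhs[i][j]) < 1e-12 for i in range(2) for j in range(2))
assert all(abs(matmul(rot(theta), rot(phi))[i][j] - rhs[i][j]) < 1e-12
           for i in range(2) for j in range(2))
```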
Last edited: | 2020-10-31 10:40:06 | {"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 1, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.7968040108680725, "perplexity": 702.630418309821}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 20, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2020-45/segments/1603107917390.91/warc/CC-MAIN-20201031092246-20201031122246-00048.warc.gz"} |
http://www.refactorium.com/web/tech/Injecting-time-making-time-dependent-applications-testable/ | # Injecting time - making time dependent applications testable
## October 30, 2013
Time dependent applications are all around us
Time dependent logic in our applications is quite natural, some examples could be:
• A BI solution - today's reports have to contain the data from the days since the last business day
• An Enterprise Risk Engine - operational risk depends on the age of an exception
• A Social network software - the current age of a message should be displayed, and messages should be displayed in reverse order
From time to time (pun intended) I see implementations where the business logic explicitly uses SYSDATE() in Oracle, new Date() in Java/Scala, or date in shell scripts. These make the system very, very hard to test, can harbour insidious bugs, and also make people very angry.
Bad: Hard coded dependency on the current day/time
Time and speed dependent, flickering tests
Let me show you an example why hardcoded new Date() can be a problem.
On my machine this test fails 2 times out of 10. THE WORST kind of test is the flickering test!
So, why is this happening? The time difference between the new Date() in the Spec and the one in the Wall code is probably nanoseconds, but equality is checked at the level of milliseconds ("The class Date represents a specific instant in time, with millisecond precision."). Thus, if we are "lucky", the two Dates will be in the same millisecond and our test passes, but if the Wall code is just past the border of the next millisecond, the test will fail.
Timezone dependency
Working on a global project, I had the pleasure of meeting unit tests written on a London machine in a time-dependent manner, which resulted in a strange situation: all the builds were passing in London (including on the build server), but when I moved to New York I found that they were failing on my local machine.
Time dependent feed file processing
I found that whenever batch processing of feed files depends on the date, it is virtually impossible to reuse exactly the same feed files from the past: I had to copy them and rename them to today's date, and, even worse, sometimes modify the feed files so that the data would be "up to date", and only then run the system.
How much easier would it be if I could play time machine and tell the system that I want to run it for a past (or a future!) date?
How do I do it well? Dependency injection!
Version 1: Clock Injection
Philosophical sidenote (the impatient can safely skip this): Some people say that Time is an actor. Until I know better I find it a bit confusing, especially since Actors act, while time arises from the fact that things change: the Earth revolves around the Sun, crystals pulsate, and we transform the aggregated side effect of these changes with change-to-time converter tools (i.e. clocks!) into the man-made scale we call time.
I like to think about time as a constantly changing value, which is provided by a service called Clock. If we embrace this and introduce a Clock as a collaborator then we'll be able to mock it out, inject stubbed versions, i.e. have control over time.
so...
After all this, an example to control the time in a mocked Clock would be
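The post's code samples did not survive extraction, so here is a minimal sketch of the idea in Python (the names Clock, RealClock and Wall come from the post; the method names and the FixedClock stub are assumptions):

```python
import datetime

class Clock:
    """Collaborator that provides the current time."""
    def get_time(self):
        raise NotImplementedError

class RealClock(Clock):
    """The only place with a hard dependency on wall-clock time."""
    def get_time(self):
        return datetime.datetime.now()

class FixedClock(Clock):
    """Stubbed clock for tests: time is whatever we set it to."""
    def __init__(self, instant):
        self.instant = instant
    def get_time(self):
        return self.instant

class Wall:
    """Time-dependent business logic receives its clock via injection."""
    def __init__(self, clock):
        self.clock = clock
    def message(self, text):
        return (self.clock.get_time(), text)

# In a test we inject a FixedClock, so the timestamp is deterministic:
t = datetime.datetime(2013, 10, 30, 12, 0, 0)
assert Wall(FixedClock(t)).message("hi") == (t, "hi")
```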
Testing the Real clock itself
Of course, you can't get away from the problem totally, there is always a part in the system which will have to have hard dependency on time. But similar to any other third party library dependency, you want to contain it, wrap it up, and test it as you can to ensure your wrapper does what it's supposed to do.
In our case, this is the RealClock class, which returns new Date() in its getTime() method implementation.
A good enough way to test it is to ensure that the returned date is between the time of the commands ran before and after getTime.
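That bracketing check might look like this in Python (a stand-in sketch; the post's actual test was presumably written in Java/Scala):

```python
import datetime

class RealClock:
    def get_time(self):
        return datetime.datetime.now()

before = datetime.datetime.now()
observed = RealClock().get_time()
after = datetime.datetime.now()

# The wrapper is good enough if its reading falls between timestamps
# taken immediately before and after the call.
assert before <= observed <= after
```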
Version 2: Time Injection
If injecting the Clock service is too much, or it is just impossible in your case, then there is another way, but that will modify your API: injecting time!
This would mean modifying the signature of your method (script, stored procedure, report, etc.), which depends on time, for example in our case, the Wall.message function would have another version with another argument called currentTimeStamp.
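In Python terms, the signature-level version might look like the following sketch (the currentTimeStamp argument name comes from the post; defaulting it to the wall clock, so that old call sites keep working, is an assumption):

```python
import datetime

def message(text, current_timestamp=None):
    # Tests (and replays of past dates) inject the time explicitly;
    # only the default falls back to the wall clock.
    if current_timestamp is None:
        current_timestamp = datetime.datetime.now()
    return (current_timestamp, text)

t = datetime.datetime(2013, 10, 30)
assert message("hi", t) == (t, "hi")                    # deterministic in tests
assert isinstance(message("hi")[0], datetime.datetime)  # production default
```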
In Summary
1. Avoid hardcoded dependency on new Date(), SYSDATE, CURRENT_TIMESTAMP, etc.
2. You can choose between two versions of dependency injection to avoid hardcoded time dependency:
1. injecting Clock service (collaborator level injection)
2. injecting time (method/script signature level injection)
### Jepsen and InfluxDB, Chapter II. Where is InfluxDB on the CAP scale?
This is a continuation of [the hacking to get Jepsen working with InfluxDB](/distributed_systems/Hacking-up-a-testing-environment-for-jepsen...…
#### Hacking up a test environment for InfluxDB and Jepsen
Published on December 23, 2015
#### How to prove that your parallel code works?
Published on December 13, 2015 | 2019-08-20 16:04:08 | {"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 1, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.32770827412605286, "perplexity": 2010.6924477137547}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2019-35/segments/1566027315551.61/warc/CC-MAIN-20190820154633-20190820180633-00224.warc.gz"} |
# 7.2. Convex Sets#
Throughout this section, we assume that $$\VV$$ is a real vector space. Wherever necessary, it is equipped with a norm $$\| \cdot \| : \VV \to \RR$$ or a real inner product $$\langle \cdot, \cdot \rangle : \VV \times \VV \to \RR$$. It is also equipped with a metric $$d(\bx, \by) = \| \bx - \by \|$$.
## 7.2.1. Line Segments#
Definition 7.5 (Line segment)
Let $$\bx_1$$ and $$\bx_2$$ be two points in $$\VV$$. Points of the form
$\by = (1 - \theta) \bx_1 + \theta \bx_2 \text{ where } 0 \leq \theta \leq 1$
form a (closed) line-segment between $$\bx_1$$ and $$\bx_2$$. The closed line segment is denoted by $$[\bx_1, \bx_2]$$.
$[\bx_1, \bx_2] \triangleq \{ (1 - \theta) \bx_1 + \theta \bx_2 \ST 0 \leq \theta \leq 1 \}.$
Similarly, we define an open line segment as:
$(\bx_1, \bx_2) \triangleq \{ (1 - \theta) \bx_1 + \theta \bx_2 \ST 0 < \theta < 1 \}.$
The half-open segment $$(\bx_1, \bx_2]$$ is defined as:
$(\bx_1, \bx_2] \triangleq \{ (1 - \theta) \bx_1 + \theta \bx_2 \ST 0 < \theta \leq 1 \}.$
The half-open segment $$[\bx_1, \bx_2)$$ is defined as:
$[\bx_1, \bx_2) \triangleq \{ (1 - \theta) \bx_1 + \theta \bx_2 \ST 0 \leq \theta < 1 \}.$
## 7.2.2. Convex Sets#
Definition 7.6 (Convex set)
A set $$C \subseteq \VV$$ is convex if the line segment between any two points in $$C$$ lies in $$C$$; i.e.,
$\theta \bx_1 + (1 - \theta) \bx_2 \in C \Forall \bx_1, \bx_2 \in C \text{ and } 0 \leq \theta \leq 1.$
The empty set is vacuously convex. The entire vector space $$\VV$$ is convex since it contains all the line segments between any pair of points in the space.
Observation 7.2
Since a convex set contains the line segment between any two points, any line segment is convex by definition.
Geometrically, a convex set has no holes, hollows, pits or dimples. A set is convex if from each point in the set, it is possible to see every other point without having the line of sight pass outside the set.
Example 7.1 (Real line)
On the real line $$\RR$$, the empty set, singletons, intervals (closed, open, half open), half lines, and the entire real line are convex sets. No other convex subsets of $$\RR$$ exist.
Theorem 7.7
Any linear subspace is convex.
Proof. Let $$\EE$$ be a linear subspace of $$\VV$$. Then $$\EE$$ is closed under addition and scalar multiplication. Thus, for any $$\bx, \by \in \EE$$ and $$0 \leq t \leq 1$$,
$t \bx + (1-t)\by \in \EE.$
Thus, $$\EE$$ is convex.
Theorem 7.8
Any affine set is convex.
Proof. Let $$C \subseteq \VV$$ be an affine set. By definition, for any $$\bx, \by \in C$$ and any $$t \in \RR$$, $$t \bx + (1-t)\by \in C$$. It is valid in particular for $$0 \leq t \leq 1$$. Thus, $$C$$ is convex.
Theorem 7.9
Any hyperplane is convex since it is affine.
Theorem 7.10
Half spaces are convex.
Proof. Consider $$H_+$$ defined as:
$H_+ = \{ \bx \ST \langle \ba, \bx \rangle \geq b \}$
Let $$\bx, \by \in H_+$$. Then:
$\langle \ba, \bx \rangle \geq b \text{ and } \langle \ba, \by \rangle \geq b.$
For any $$0 \leq t \leq 1$$:
$\langle \ba, t \bx + (1 - t) \by \rangle = t \langle \ba, \bx \rangle + (1 - t)\langle \ba, \by \rangle \geq t b + (1 -t )b = b.$
Thus, $$t \bx + (1 - t) \by \in H_+$$. Analogous proofs apply for other types of half spaces.
Theorem 7.11 (Convex set as convex combination of itself)
Let $$C$$ be a nonempty subset of $$\VV$$. If $$C$$ is convex then for every $$t_1, t_2 \geq 0$$, we have
$(t_1 + t_2) C = t_1 C + t_2 C.$
In particular, if $$t_1, t_2 \geq 0$$ such that $$t_1 + t_2 = 1$$, then
$C = t_1 C + t_2 C.$
Proof. The statement $$(t_1 + t_2) C \subseteq t_1 C + t_2 C$$ is valid even for sets which are not convex.
1. Let $$\bx \in (t_1 + t_2) C$$.
2. Then there exists $$\by \in C$$ such that $$\bx = t_1 \by + t_2 \by$$.
3. Hence $$t_1 \by \in t_1 C$$ and $$t_2 \by \in t_2 C$$.
4. Hence $$\bx = t_1 \by + t_2 \by \in t_1 C + t_2 C$$.
We now show the converse.
1. Let $$\bx \in t_1 C + t_2 C$$.
2. Then there exist $$\bx_1, \bx_2 \in C$$ such that
$\bx = t_1 \bx_1 + t_2 \bx_2.$
3. If $$t_1 = t_2 = 0$$, then $$\bx = 0$$ and $$\bx \in (0 + 0) C$$.
4. Now assume that $$t_1 + t_2 > 0$$.
5. By the convexity of $$C$$,
$\by = \frac{t_1}{t_1 + t_2} \bx_1 + \frac{t_2}{t_1 + t_2} \bx_2 \in C.$
6. Hence $$\bx = (t_1 + t_2) \by \in (t_1 + t_2) C$$.
7. Hence $$t_1 C + t_2 C \subseteq (t_1 + t_2) C$$.
Together, we have
$(t_1 + t_2) C = t_1 C + t_2 C.$
Theorem 7.12 (Convex set as union of line segments)
Let $$C$$ be a convex subset of $$\VV$$. Then $$C$$ is the union of all the closed line segments connecting the points of the set. In other words
$C = \bigcup_{\bx, \by \in C} [\bx, \by].$
Proof. Let $$D = \bigcup_{\bx, \by \in C} [\bx, \by]$$.
If $$C$$ is empty, then $$D$$ is also empty. Hence there is nothing to prove. If $$C = \{ \bx \}$$ is a singleton, then $$D$$ consists of the single line segment
$[\bx, \bx] = \{ \bx \}.$
So $$C = D$$. We now consider the case where $$C$$ consists of more than one point.
We first show that $$C \subseteq D$$.
1. Let $$\bx \in C$$.
2. Then $$[\bx, \bx] = \{ \bx \}$$ is a line segment of $$C$$.
3. Hence $$\bx \in [\bx, \bx] \subseteq D$$.
4. Hence $$C \subseteq D$$.
We now show the converse.
1. Let $$\bz \in D$$.
2. Then there exist $$\bx, \by \in C$$ such that $$\bz \in [\bx, \by]$$.
3. Then by convexity of $$C$$, $$[\bx, \by] \subseteq C$$.
4. Hence $$\bz \in [\bx, \by] \subseteq C$$.
5. Hence $$D \subseteq C$$.
Together, $$C = D$$.
## 7.2.3. Rays#
Definition 7.7 (Ray)
A ray $$R$$ is defined as
$R \triangleq \{ \bx_0 + t \bv \ST t \geq 0 \}$
where $$\bv \neq \bzero$$ indicates the direction of ray and $$\bx_0$$ is the base or origin of ray.
Theorem 7.13
A ray is convex.
Proof. Let a ray be given as:
$R = \{ \bx_0 + t \bv \ST t \geq 0 \}.$
Let $$\bp, \bq \in R$$. Thus, there are $$t_p, t_q \geq 0$$ such that:
$\bp = \bx_0 + t_p \bv \text{ and } \bq = \bx_0 + t_q \bv.$
Now, for any $$0 \leq r \leq 1$$,
$\begin{split} r \bp + (1 - r) \bq &= r (\bx_0 + t_p \bv) + (1 - r) (\bx_0 + t_q \bv)\\ &= \bx_0 + (r t_p + (1 - r) t_q) \bv. \end{split}$
Since $$r t_p + (1 - r) t_q \geq 0$$, hence $$r \bp + (1 - r) \bq \in R$$.
## 7.2.4. Balls#
Theorem 7.14
An open ball $$B(\ba, r)$$ is convex for any norm $$\| \cdot \| : \VV \to \RR$$.
Proof. Recall that an open ball in a normed linear space is defined as:
$B(\ba,r) = \{ \bx \in \VV \ST \| \bx - \ba \| < r \}.$
Let $$\bx, \by \in B(\ba,r)$$ and let $$0 \leq t \leq 1$$. Then,
$t \bx + (1-t)\by - \ba = t (\bx - \ba) + (1-t) (\by - \ba).$
By triangle inequality:
$\begin{split} \| t (\bx - \ba) + (1-t) (\by - \ba) \| &\leq \| t (\bx - \ba) \| + \| (1-t) (\by - \ba) \|\\ &= t \| \bx - \ba \| + (1 -t) \| \by - \ba \|\\ &< t r + (1 - t)r = r. \end{split}$
Thus,
$\| t \bx + (1-t)\by - \ba \| < r \implies t \bx + (1-t)\by \in B(\ba,r).$
Theorem 7.15
A closed ball $$B[\ba, r]$$ is convex for any norm $$\| \cdot \| : \VV \to \RR$$.
Proof. Recall that a closed ball in a normed linear space is defined as:
$B[\ba,r] = \{ \bx \in \VV \ST \| \bx - \ba \| \leq r \}.$
Let $$\bx, \by \in B[\ba,r]$$ and let $$0 \leq t \leq 1$$. Then,
$t \bx + (1-t)\by - \ba = t (\bx - \ba) + (1-t) (\by - \ba).$
By triangle inequality:
$\begin{split} \| t (\bx - \ba) + (1-t) (\by - \ba) \| &\leq \| t (\bx - \ba) \| + \| (1-t) (\by - \ba) \|\\ &= t \| \bx - \ba \| + (1 -t) \| \by - \ba \|\\ &\leq t r + (1 - t)r = r. \end{split}$
Thus,
$\| t \bx + (1-t)\by - \ba \| \leq r \implies t \bx + (1-t)\by \in B[\ba,r].$
## 7.2.5. Convex Combinations#
Definition 7.8 (Convex combination)
We call a point of the form $$\theta_1 \bx_1 + \dots + \theta_k \bx_k$$, where $$\theta_1 + \dots + \theta_k = 1$$ and $$\theta_i \geq 0, i=1,\dots,k$$, a convex combination of the points $$\bx_1, \dots, \bx_k$$.
It is like a weighted average of the points $$\bx_i$$. The weights $$\theta_i$$ in a convex combination can be interpreted as probabilities or proportions.
Example 7.2 (Center of mass)
Consider a system of particles $$p_i$$, $$i=1,\dots,n$$ each with mass $$m_i$$ and location in space as $$\bx_i$$. The center of mass $$\bx$$ satisfies the equation:
$\sum_{i=1}^n m_i (\bx_i - \bx) = 0.$
Solving this equation gives us:
$\bx = \sum_{i=1}^n \frac{m_i}{m} \bx_i$
where $$m = \sum_{i=1}^n m_i$$.
If we assign $$\theta_i = \frac{m_i}{m}$$, we notice that $$\theta_i \geq 0$$ and $$\sum_{i=1}^n \theta_i = 1$$. We can now write the center of mass as:
$\bx = \sum_{i=1}^n \theta_i \bx_i$
which is a convex combination of the locations $$\bx_i$$ where $$\theta_i$$ gives the proportion of contribution of each particle according to its mass.
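Numerically, this computation is a one-liner; the masses and positions below are illustrative values, not from the text:

```python
import numpy as np

# Illustrative particle system: masses m_i and positions x_i in R^2.
m = np.array([2.0, 1.0, 1.0])
x = np.array([[0.0, 0.0],
              [4.0, 0.0],
              [0.0, 4.0]])

theta = m / m.sum()     # theta_i = m_i / m: nonnegative, summing to 1
center = theta @ x      # center of mass as a convex combination

assert np.all(theta >= 0) and np.isclose(theta.sum(), 1.0)
print(center)           # -> [1. 1.]
```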
Remark 7.2 (Convex combinations and unit simplex)
We recall that the unit simplex in $$\RR^n$$ is given by
$\Delta_n = \{ \bt \in \RR^n \ST \langle \bt, \bone \rangle = 1, \bt \succeq \bzero \} = \{\bt \in \RR^n \ST t_1 + \dots + t_n = 1, t_1, \dots, t_n \geq 0 \}.$
Thus, the coefficients for convex combinations of $$n$$ points are drawn from $$\Delta_n$$.
Theorem 7.16 (Closure under convex combinations)
A set is convex if and only if it contains all convex combinations of its points.
Let $$\VV$$ be a real vector space and $$C$$ be a subset of $$\VV$$. Then, $$C$$ is convex if and only if for any $$m \in \Nat$$, for any $$\bx_1, \dots, \bx_m \in C$$, and for every $$\bt \in \Delta_m$$, $$t_1 \bx_1 + \dots + t_m \bx_m \in C$$.
Proof. We know that saying a set $$C$$ is convex is equivalent to saying that it contains all 2 point convex combinations; i.e., for any $$\bx_1, \bx_2 \in C$$, $$t_1, t_2 \geq 0$$ and $$t_1 + t_2 = 1$$,
$t_1 \bx_1 + t_2 \bx_2 \in C.$
We first show that if $$C$$ is convex, it contains all its (finite) convex combination by induction.
1. By definition $$C$$ contains all its 2 point convex combinations.
2. As induction hypothesis, assume that $$C$$ contains all convex combinations of $$m-1$$ or fewer points where $$m > 2$$.
3. Consider a convex combination $$\bx$$ of $$m$$ points
$\bx = \sum_{i=1}^m t_i \bx_i$
where $$\bx_i \in C$$, $$t_i \geq 0$$, $$\sum t_i = 1$$.
4. Since $$\sum t_i = 1$$ and $$m > 2$$, hence at least one $$t_i < 1$$.
5. Without loss of generality, assume $$t_m < 1$$.
6. Note that $$t_m < 1$$ means that $$1 - t_m > 0$$.
7. Define $$\by = \sum_{i=1}^{m-1} t'_i \bx_i$$ where $$t'_i = \frac{t_i}{1 - t_m}$$.
8. Note that $$t'_i \geq 0$$. Also, $$\sum_{i=1}^{m-1} t'_i = 1$$ since $$\sum_{i=1}^{m-1} t_i = 1 - t_m$$.
9. Thus, $$\by$$ is an $$m-1$$ point convex combination of $$C$$.
10. By induction hypothesis, $$\by \in C$$.
11. Now, $$(1-t_m) \by = \sum_{i=1}^{m-1} t_i \bx_i$$.
12. Hence, $$\bx = (1 - t_m) \by + t_m \bx_m$$.
13. It is a 2 point convex combination of $$\by$$ and $$\bx_m$$.
14. Since both $$\by, \bx_m \in C$$, hence $$\bx \in C$$.
15. Thus, $$C$$ contains all its $$m$$ point convex combinations.
For the converse, note that if $$C$$ contains all its convex combinations, then it contains, in particular, all its two point convex combinations. Hence, $$C$$ is convex.
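The induction in this proof is constructive: an $$m$$-point convex combination can be evaluated with nothing but 2-point combinations. A sketch in Python (numpy assumed; the helper name is ours):

```python
import numpy as np

def convex_combination_by_pairs(points, weights):
    """Evaluate sum_i t_i x_i using only 2-point combinations,
    mirroring the induction step of the proof."""
    points = [np.asarray(p, dtype=float) for p in points]
    weights = list(weights)
    if len(points) == 1:
        return points[0]
    t_m = weights[-1]
    if t_m == 1.0:
        return points[-1]          # degenerate: all weight on the last point
    # y is the (m-1)-point combination with renormalized weights t_i / (1 - t_m)
    inner = [t / (1.0 - t_m) for t in weights[:-1]]
    y = convex_combination_by_pairs(points[:-1], inner)
    # final 2-point combination of y and x_m
    return (1.0 - t_m) * y + t_m * points[-1]

pts = [[0, 0], [1, 0], [0, 1], [1, 1]]
wts = [0.1, 0.2, 0.3, 0.4]
direct = sum(w * np.asarray(p, dtype=float) for w, p in zip(wts, pts))
paired = convex_combination_by_pairs(pts, wts)
assert np.allclose(direct, paired)   # both equal [0.6, 0.7]
```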
Theorem 7.17
A convex combination of convex combinations is a convex combination.
Proof. Let $$S \subseteq \VV$$. Note that $$S$$ is arbitrary (no convexity assumed).
1. Consider $$n$$ points $$\by_i$$, $$i=1,\dots, n$$ described as below.
2. Let $$\by_i = \sum_{j=1}^{m_i}t_{i,j} \bx_{i,j}$$ be convex combinations of $$m_i$$ points:
• $$\bx_{i,1}, \dots, \bx_{i,m_i} \in S$$.
• $$t_{i,j} \geq 0$$.
• $$\sum_{j=1}^{m_i} t_{i, j} = 1$$.
3. Consider the convex combination $$\by = \sum_{i=1}^n r_i \by_i$$.
• $$r_i \geq 0$$.
• $$\sum r_i = 1$$.
4. We need to show that $$\by$$ is a convex combination of points of $$S$$.
Towards this:
$\begin{split} \by &= \sum_{i=1}^n r_i \by_i\\ &= \sum_{i=1}^n r_i \sum_{j=1}^{m_i}t_{i,j} \bx_{i,j}\\ &= \sum_{i=1}^n \sum_{j=1}^{m_i} r_i t_{i,j} \bx_{i,j}. \end{split}$
Consider the terms:
$s_{i, j} = r_i t_{i,j}.$
Since $$r_i \geq 0$$ and $$t_{i, j} \geq 0$$, hence $$s_{i, j } \geq 0$$.
Now, consider their sum:
$\begin{split} \sum_{i=1}^n \sum_{j=1}^{m_i} s_{i, j} &= \sum_{i=1}^n \sum_{j=1}^{m_i} r_i t_{i,j} \\ &= \sum_{i=1}^n r_i \sum_{j=1}^{m_i} t_{i,j}\\ &= \sum_{i=1}^n r_i = 1\\ \end{split}$
Thus, $$\sum_{i,j} s_{i, j} = 1$$.
Hence,
$\by = \sum_{i,j} s_{i, j} x_{i, j}$
is a convex combination of points of $$S$$.
## 7.2.6. Convex Hull#
Definition 7.9 (Convex hull)
The convex hull of an arbitrary set $$S \subseteq \VV$$ denoted as $$\ConvexHull(S)$$, is the set of all convex combinations of points in $$S$$.
$\ConvexHull(S) = \{ \theta_1 \bx_1 + \dots + \theta_k \bx_k \ST \bx_i \in S, \theta_i \geq 0, i = 1,\dots, k, \theta_1 + \dots + \theta_k = 1\}.$
Property 7.1 (Convexity of convex hull)
The convex hull $$\ConvexHull(S)$$ of a set $$S$$ is always convex.
Proof. Let $$\bx, \by \in \ConvexHull(S)$$, $$t \in [0,1]$$ and the point $$\bz = t \bx + (1-t) \by$$. We need to show that $$\bz \in \ConvexHull(S)$$.
1. $$\bx, \by$$ are convex combinations of points of $$S$$.
2. $$\bz$$ is a convex combination of $$\bx$$ and $$\by$$.
3. Hence, $$\bz$$ is a convex combination of convex combinations of points in $$S$$.
4. By Theorem 7.17, a convex combination of convex combinations is a convex combination.
5. Thus, $$\bz$$ is a convex combination of points of $$S$$.
6. But $$\ConvexHull(S)$$ contains all convex combinations of points of $$S$$ by definition.
7. Hence, $$\bz \in \ConvexHull(S)$$.
8. Thus, $$\ConvexHull(S)$$ is convex.
Property 7.2 (Affine hull of convex hull)
Let $$\VV$$ be a finite dimensional real vector space. Let $$S$$ be a nonempty subset of $$\VV$$. Let $$C = \ConvexHull(S)$$. Then
$\affine S = \affine C.$
In other words, the affine hull of a set and its convex hull are identical.
Proof. By using a translation argument if necessary, without loss of generality, we assume that $$\bzero \in S$$.
1. Then both $$\affine S$$ and $$\affine C$$ are linear subspaces.
2. Since $$S \subseteq C$$, hence $$\affine S \subseteq \affine C$$.
3. For the converse, let $$m = \dim \affine C$$.
4. Let $$\bx_1, \dots, \bx_m \in C$$ be $$m$$ linearly independent vectors spanning $$\affine C$$.
5. Then for every $$\bx \in \affine C$$, there exist scalars $$t_1, \dots, t_m$$ so that
$\bx = \sum_{i=1}^m t_i \bx_i.$
6. By definition of convex hull, each $$\bx_i \in C$$ is a convex combination of points in $$S$$.
7. Hence every $$\bx \in \affine C$$ is a linear combination of points in $$S$$.
8. Hence $$\affine C \subseteq \affine S$$.
Theorem 7.18
The convex hull of a set $$S$$ is the smallest convex set containing it. In other words, let $$C$$ be any convex set such that $$S \subseteq C$$. Then $$\ConvexHull(S) \subseteq C$$.
Proof. Let $$C$$ be a convex set such that $$S \subseteq C$$.
1. Let $$\bx \in \ConvexHull(S)$$.
2. Then, $$\bx$$ is a convex combination of points of $$S$$.
3. $$C$$ is convex and $$S \subseteq C$$.
4. Hence, $$C$$ contains every convex combination of points of $$S$$.
5. Thus, in particular $$\bx \in C$$.
6. Since $$\bx \in \ConvexHull(S)$$ was arbitrary, hence $$\ConvexHull(S) \subseteq C$$.
We could have started by defining the convex hull of $$S$$ as the smallest convex set containing $$S$$ and arrived at the conclusion that $$\ConvexHull(S)$$ contains all convex combinations of points of $$S$$. Some authors indeed prefer that definition. Both definitions are equivalent.
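Theorem 7.16 gives a practical membership test: for a finite set $$S$$, $$\bx \in \ConvexHull(S)$$ if and only if the linear feasibility problem $$\bt \succeq \bzero$$, $$\bone^T \bt = 1$$, $$\sum t_i \bx_i = \bx$$ has a solution. A sketch using scipy's `linprog` (assuming scipy is available; the helper name is ours):

```python
import numpy as np
from scipy.optimize import linprog

def in_convex_hull(x, points):
    """Check whether x is a convex combination of the given points by
    solving the feasibility problem: t >= 0, sum t_i = 1, sum t_i p_i = x."""
    points = np.asarray(points, dtype=float)
    k, n = points.shape
    A_eq = np.vstack([points.T, np.ones(k)])    # coordinate rows plus simplex row
    b_eq = np.concatenate([np.asarray(x, dtype=float), [1.0]])
    res = linprog(c=np.zeros(k), A_eq=A_eq, b_eq=b_eq, bounds=(0, None))
    return res.success

square = [[0, 0], [2, 0], [0, 2], [2, 2]]
print(in_convex_hull([1.0, 1.0], square))   # True: inside the square
print(in_convex_hull([3.0, 3.0], square))   # False: outside
```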
### 7.2.6.1. Carathéodory Theorem#
Theorem 7.19 (Carathéodory theorem)
Let $$\VV$$ be an $$n$$-dimensional real vector space. Let $$S \subseteq \VV$$. Let $$\bx \in \ConvexHull(S)$$.
Then, there exists a set of $$n+1$$ points $$\bx_0, \dots, \bx_n \in S$$ such that
$\bx \in \ConvexHull (\{ \bx_0, \dots, \bx_n\});$
i.e., there exists a $$\bt \in \Delta_{n+1}$$ such that
$\bx = \sum_{i=0}^n t_i \bx_i.$
Proof. We note that $$\bx \in \ConvexHull(S)$$.
1. Thus, for some $$k \geq 0$$, there exists a set of $$k+1$$ points in $$S$$ and $$\bt \in \Delta_{k+1}$$ such that
$\bx = \sum_{i=0}^k t_i \bx_i.$
2. We can assume $$t_i > 0$$ for all $$i=0, \dots, k$$ since otherwise, we can drop the vectors corresponding to the zero coefficients from the convex combination.
3. If $$k \leq n$$, there is nothing to prove.
4. Hence, consider the case where $$k > n$$.
5. We now describe a process which can reduce the number of points in the convex combination by one.
6. The $$k$$ vectors $$\bx_1 - \bx_0, \dots, \bx_k - \bx_0$$ are linearly dependent as $$k > n$$ and $$\VV$$ is $$n$$-dimensional.
7. Thus, there is a nontrivial linear combination of these vectors
$r_1 (\bx_1 - \bx_0) + \dots + r_k (\bx_k - \bx_0) = \bzero.$
8. Let $$r_0 = -r_1 - \dots - r_k$$. Then, we have a nontrivial combination
$\sum_{i=0}^k r_i \bx_i = \bzero$
with $$\sum r_i = 0$$.
9. Since the $$r_i$$ are not all zero and sum to zero, there exists at least one index $$j$$ for which $$r_j < 0$$.
10. Let $$\alpha \geq 0$$.
11. Then,
$\bx = \sum_{i=0}^k t_i \bx_i + \alpha \sum_{i=0}^k r_i \bx_i = \sum_{i=0}^k (t_i + \alpha r_i) \bx_i$
with $$\sum (t_i + \alpha r_i) = \sum t_i + \alpha \sum r_i = 1$$.
12. Thus, it is a convex combination for $$\bx$$ if $$t_i + \alpha r_i \geq 0$$ for every $$i=0, \dots, k$$.
13. Let
$\alpha = \underset{i \ST r_i < 0}{\min}\left \{ - \frac{t_i}{r_i} \right \}.$
14. $$\alpha$$ is well defined since there is at least one $$r_j < 0$$. Let $$j$$ be the index for which $$\alpha$$ was obtained.
15. Then, $$t_j + \alpha r_j = 0$$.
16. Also, we can see that $$t_i + \alpha r_i \geq 0$$ for all $$i=0,\dots,k$$.
17. Thus, we have found a convex combination for $$\bx$$ where the coefficient for $$\bx_j$$ is 0.
18. Thus, we have obtained a convex combination for $$\bx$$ with $$k-1$$ points.
19. Repeating this process at most $$k-n$$ times, we obtain a convex combination for $$\bx$$ consisting of at most $$n+1$$ points.
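The proof above is effectively an algorithm. The following numpy sketch implements the reduction step, using an SVD to obtain the required null combination (all names are ours; the tolerances are illustrative):

```python
import numpy as np

def caratheodory_step(points, t):
    """One reduction step from the proof: find a null combination of the
    differences x_i - x_0, shift the weights along it until one hits zero,
    and drop that point."""
    k = len(points) - 1                            # points are x_0, ..., x_k
    diffs = (points[1:] - points[0]).T             # n x k matrix
    _, _, vt = np.linalg.svd(diffs)                # last row spans the null space (k > n)
    r = np.concatenate([[-vt[-1].sum()], vt[-1]])  # r_0 = -(r_1 + ... + r_k), so sum r_i = 0
    if not np.any(r < 0):
        r = -r                                     # ensure some r_i < 0
    ratios = np.full(k + 1, np.inf)
    neg = r < 0
    ratios[neg] = -t[neg] / r[neg]
    j = np.argmin(ratios)                          # alpha = min over r_i < 0 of -t_i / r_i
    t_new = t + ratios[j] * r
    t_new[j] = 0.0                                 # coefficient driven exactly to zero
    keep = t_new > 0
    return points[keep], t_new[keep] / t_new[keep].sum()

def caratheodory_reduce(points, t):
    """Repeat the step until at most n + 1 points remain."""
    points = np.asarray(points, dtype=float)
    t = np.asarray(t, dtype=float)
    n = points.shape[1]
    while len(points) > n + 1:
        points, t = caratheodory_step(points, t)
    return points, t

# x is a convex combination of 5 points in R^2; reduce to at most 3 points.
pts = np.array([[0.0, 0.0], [2.0, 0.0], [0.0, 2.0], [2.0, 2.0], [1.0, 1.0]])
wts = np.full(5, 0.2)
x = wts @ pts                                      # the represented point
reduced_pts, reduced_t = caratheodory_reduce(pts, wts)
assert len(reduced_pts) <= 3
assert np.allclose(reduced_t @ reduced_pts, x)     # same point, fewer vertices
```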
## 7.2.7. Dimension#
Definition 7.10 (Dimension of a convex set)
The dimension of a convex set is defined to be the dimension of its affine hull.
If $$C$$ is a convex set, then:
$\dim C = \dim \affine C.$
Recall that the dimension of an affine set is equal to the dimension of the linear subspace associated with it (Definition 4.122).
1. A disk (a filled circle) has a dimension of 2 even if it sits inside $$\RR^3$$.
2. A ball (a solid sphere) has a dimension of 3.
## 7.2.8. Simplices#
Theorem 7.20 (Convex hull of a finite set of points)
Let $$S = \{ \bx_0, \dots, \bx_m \}$$ be a finite set of points of $$\VV$$. Then, $$\ConvexHull(S)$$ consists of all the points of the form
$t_0 \bx_0 + \dots + t_m \bx_m, \quad t_0 \geq 0, \dots, t_m \geq 0, \sum_{i=0}^m t_i = 1.$
In $$\RR^n$$, this is known as a polytope.
A simplex is the convex hull of a finite set of affine independent points. The simplex provides a powerful coordinate system for the points within it in terms of barycentric coordinates.
Definition 7.11 ($$k$$-simplex)
Let $$k+1$$ points $$\bv_0, \dots, \bv_k \in \VV$$ be affine independent.
The simplex determined by them is given by
$C = \ConvexHull \{ \bv_0, \dots, \bv_k\} = \{ t_0 \bv_0 + \dots + t_k \bv_k \ST \bt \succeq 0, \bone^T \bt = 1\}$
where $$\bt = [t_0, \dots, t_k]^T$$ and $$\bone$$ denotes a vector of appropriate size $$(k+1)$$ with all entries one.
In other words, $$C$$ is the convex hull of the set $$\{\bv_0, \dots, \bv_k\}$$.
A simplex is a convex set since it is a convex hull of its vertices. $$k$$ stands for the dimension of the simplex. Recall that the dimension of a convex set is the dimension of its affine hull.
Example 7.3 (Simplex examples)
In $$\RR^n$$:
• A 0-simplex is a point.
• A 1-simplex is a line segment (2 points).
• A 2-simplex is a triangle (3 points).
• A 3-simplex is a tetrahedron (4 points).
• A 4-simplex is a 5-cell (5 points).
Theorem 7.21 (Barycentric coordinates)
Each point of a $$k$$-simplex is uniquely expressible as a convex combination of its vertices.
Proof. Let $$C = \ConvexHull\{\bv_0, \bv_1, \dots, \bv_k \}$$.
1. Let $$\bv \in C$$.
2. Then, $$\bv = \sum_{i=0}^k t_i \bv_i$$ with $$t_i \geq 0$$ and $$\sum t_i = 1$$.
3. For contradiction, assume there was another representation: $$\bv = \sum_{i=0}^k r_i \bv_i$$ with $$r_i \geq 0$$ and $$\sum r_i = 1$$.
4. Then,
$\sum_{i=0}^k t_i \bv_i = \sum_{i=0}^k r_i \bv_i \implies \sum_{i=0}^k (t_i - r_i) \bv_i = \bzero.$
5. But $$\{\bv_0, \bv_1, \dots, \bv_k \}$$ are affine independent and $$\sum_{i=0}^k (t_i - r_i) = 0$$.
6. Hence, $$t_i = r_i$$ for every $$i$$.
7. Thus, the representation is unique.
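Concretely, the barycentric coordinates solve the linear system $$\sum_{i=0}^k t_i \bv_i = \bv$$ together with $$\sum_{i=0}^k t_i = 1$$. A Python sketch for a triangle in $$\RR^2$$ (the vertices are illustrative):

```python
import numpy as np

# Vertices of a 2-simplex (triangle) in R^2; they are affine independent.
V = np.array([[0.0, 0.0],
              [4.0, 0.0],
              [0.0, 4.0]])

def barycentric(v, vertices):
    """Solve for the unique coefficients t with sum t_i v_i = v, sum t_i = 1."""
    k1, n = vertices.shape
    A = np.vstack([vertices.T, np.ones(k1)])   # coordinate rows plus the sum-to-1 row
    b = np.concatenate([v, [1.0]])
    t, *_ = np.linalg.lstsq(A, b, rcond=None)
    return t

t = barycentric(np.array([1.0, 1.0]), V)
assert np.all(t >= 0) and np.isclose(t.sum(), 1.0)   # a convex combination
```

For the point $$(1,1)$$ this yields $$t = (0.5, 0.25, 0.25)$$; the uniqueness of these coefficients is exactly Theorem 7.21.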
Definition 7.12 (Simplex midpoint)
The point $$\sum_{i=0}^k \frac{1}{k+1} \bv_i$$ in a simplex $$C = \ConvexHull\{\bv_0, \dots, \bv_k \}$$ is known as its midpoint or barycenter.
Theorem 7.22
The dimension of a convex set $$C$$ is the maximum of the dimensions of the various simplices included in $$C$$.
Proof. We need to show that there is a simplex $$S \subset C$$ such that $$\dim S = \dim C$$.
1. Let $$A$$ be any finite affine independent subset of $$C$$.
2. Since $$C$$ is convex, hence $$A \subseteq C \implies \ConvexHull(A) \subseteq C$$.
3. Thus, $$C$$ contains the simplices constructed from any set of finite affine independent points in $$C$$.
4. Thus, if $$A = \{\bv_0, \dots, \bv_k\}$$ is a set of $$k+1$$ affine independent points of $$C$$, then $$\ConvexHull(A) \subseteq C$$ implies that $$k \leq \dim C$$.
5. Thus, if $$S$$ is a $$k$$-simplex such that $$S \subseteq C$$, then $$\dim S = k \leq \dim C$$.
6. Let $$m$$ be the maximum of the dimensions of the various simplices contained in $$C$$.
7. Then, there exist affine independent points $$\bv_0, \dots, \bv_m \in C$$ such that the simplex $$S = \ConvexHull\{ \bv_0, \dots, \bv_m\} \subseteq C$$.
8. Let $$M$$ be the affine hull of $$S$$; i.e. $$M = \affine S$$.
9. Then, $$\dim M = m$$ and $$M \subseteq \affine C$$.
10. If $$C \setminus M$$ were nonempty, then there would be an element $$\bv \in C \setminus M$$ which would be affine independent of $$\{ \bv_0, \dots, \bv_m\}$$.
11. That would lead to a set of $$m+2$$ affine independent points in $$C$$. That would mean that $$C$$ contains a simplex of dimension $$m+1$$. A contradiction.
12. Hence, $$C \setminus M = \EmptySet$$.
13. Thus, $$C \subseteq M$$.
14. Since $$\affine C$$ is the smallest affine set that contains $$C$$, hence $$\affine C = M$$.
15. Thus, $$\dim C = m$$.
## 7.2.9. Symmetric Reflections#
The symmetric reflection of a convex set is convex since convexity is preserved under scalar multiplication. See Theorem 7.30 below.
If a symmetric convex set contains a nonzero vector $$\bx$$, then it contains the entire line segment between $$-\bx$$ and $$\bx$$.
## 7.2.10. Infinite Convex Combinations#
We can generalize convex combinations to include infinite sums.
Theorem 7.23
Let $$\theta_1, \theta_2, \dots$$ satisfy
$\theta_i \geq 0, i = 1,2,\dots, \quad \sum_{i=1}^{\infty} \theta_i = 1,$
and let $$\bx_1, \bx_2, \dots \in C$$, where $$C \subseteq \VV$$ is convex. Then
$\sum_{i=1}^{\infty} \theta_i \bx_i \in C,$
if the series converges.
We can generalize it further to density functions.
Theorem 7.24
Let $$p : \VV \to \RR$$ satisfy $$p(x) \geq 0$$ for all $$x \in C$$ and
$\int_{C} p(x) d x = 1$
Then
$\int_{C} p(x) x d x \in C$
provided the integral exists.
Note that $$p$$ above can be treated as a probability density function if we define $$p(x) = 0 \Forall x \in \VV \setminus C$$.
## 7.2.11. Convexity Preserving Operations#
In the following, we will discuss several operations which transform a convex set into another convex set, and thus preserve convexity.
Understanding these operations is useful for determining the convexity of a wide variety of sets.
Usually, it is easier to prove that a set is convex by showing that it is obtained from a convex set through a convexity preserving operation than by directly verifying the defining property, i.e.,
$t \bx_1 + (1 - t) \bx_2 \in C \Forall \bx_1, \bx_2 \in C, t \in [0,1].$
### 7.2.11.1. Intersection and Union#
Theorem 7.25 (Intersection of convex sets)
If $$S_1$$ and $$S_2$$ are convex sets then $$S_1 \cap S_2$$ is convex.
Proof. Let $$\bx_1, \bx_2 \in S_1 \cap S_2$$. We have to show that
$t \bx_1 + (1 - t) \bx_2 \in S_1 \cap S_2, \Forall t \in [0,1].$
Since $$S_1$$ is convex and $$\bx_1, \bx_2 \in S_1$$, hence
$t \bx_1 + (1 - t) \bx_2 \in S_1, \Forall t \in [0,1].$
Similarly
$t \bx_1 + (1 - t) \bx_2 \in S_2, \Forall t \in [0,1].$
Thus
$t \bx_1 + (1 - t) \bx_2 \in S_1 \cap S_2, \Forall t \in [0,1].$
which completes the proof.
We can generalize it further.
Theorem 7.26 (Intersection of arbitrary collection of convex sets)
Let $$\{ A_i\}_{i \in I}$$ be a family of sets such that $$A_i$$ is convex for all $$i \in I$$. Then $$\cap_{i \in I} A_i$$ is convex.
Proof. Let $$\bx_1, \bx_2$$ be any two arbitrary elements in $$\cap_{i \in I} A_i$$.
$\begin{split} &\bx_1, \bx_2 \in \cap_{i \in I} A_i\\ \implies & \bx_1, \bx_2 \in A_i \Forall i \in I\\ \implies &t \bx_1 + (1 - t) \bx_2 \in A_i \Forall t \in [0,1] \Forall i \in I \text{ since } A_i \text{ is convex}\\ \implies &t \bx_1 + (1 - t) \bx_2 \in \cap_{i \in I} A_i. \end{split}$
Hence $$\cap_{i \in I} A_i$$ is convex.
Corollary 7.1 (Arbitrary intersection of closed half spaces)
Let $$I$$ be an index set. Let $$\ba_i \in \VV$$ and $$b_i \in \RR$$ for every $$i \in I$$. Then, the set:
$C = \{ \bx \in \VV \ST \langle \bx, \ba_i \rangle \leq b_i \Forall i \in I\}$
is convex.
Proof. Since each of the half spaces is convex, hence so is their intersection.
This result is applicable for open half spaces and hyperplanes too. It also applies for a mixture of hyperplanes and half-spaces.
Corollary 7.2
The solution set of a system of linear equations and inequalities in $$\RR^n$$ is convex.
Proof. We proceed as follows:
• The solution set of each linear equation is a hyperplane.
• The solution set of each linear inequality is a half-space (closed or open).
• The solution set of a system of linear equations and inequalities is the intersection of these hyperplanes and half-spaces.
• Each hyperplane and each half-space is convex.
• Hence, their intersection is convex.
Theorem 7.27 (Intersection and union of two sets)
Let $$C_1$$ and $$C_2$$ be convex in $$\VV$$. Then, the largest convex set contained in both is $$C_1 \cap C_2$$. And, the smallest convex set containing both is $$\ConvexHull (C_1 \cup C_2)$$.
Proof. Let $$C$$ be a convex set contained in both $$C_1$$ and $$C_2$$. Then, $$C \subseteq C_1 \cap C_2$$. But $$C_1 \cap C_2$$ is convex (Theorem 7.25). Hence, $$C_1 \cap C_2$$ is the largest convex set contained in both $$C_1$$ and $$C_2$$.
Let $$C$$ be a convex set which contains both $$C_1$$ and $$C_2$$. Then, $$C_1 \cup C_2 \subseteq C$$. The smallest convex set containing $$C_1 \cup C_2$$ is its convex hull given by $$\ConvexHull(C_1 \cup C_2)$$ (Theorem 7.18).
Theorem 7.28 (Intersection and union of arbitrary sets)
Let $$I$$ be an index set and $$\FFF = \{ C_i \}_{i \in I}$$ be a family of convex sets in $$\VV$$. Then, the largest convex set contained in every set of $$\FFF$$ is:
$\bigcap_{i \in I}C_i.$
And, the smallest convex set containing every set of $$\FFF$$ is
$\ConvexHull \left (\bigcup_{i \in I} C_i \right ).$
Proof. Let $$C$$ be a convex set contained in every set of $$\FFF$$. Then, $$C \subseteq \bigcap_{i \in I}C_i$$. But $$\bigcap_{i \in I}C_i$$ is convex (Theorem 7.26). Hence, $$\bigcap_{i \in I}C_i$$ is the largest convex set contained in every set of $$\FFF$$.
Let $$C$$ be a convex set which contains every set of $$\FFF$$. Then, $$\bigcup_{i \in I} C_i \subseteq C$$. The smallest convex set containing every set of $$\FFF$$ is its convex hull given by $$\ConvexHull(\bigcup_{i \in I} C_i)$$ (Theorem 7.18).
### 7.2.11.2. Affine Functions#
Let us start with some simple results.
Theorem 7.29 (Convexity and translation)
Convexity is preserved under translation.
$$C$$ (a subset of $$\VV$$) is convex if and only if $$C + \ba$$ is convex for every $$\ba \in \VV$$.
Proof. Let $$C \subseteq \VV$$.
1. Assume $$C$$ is convex.
2. Then, for every $$\bx, \by \in C$$ and every $$t \in [0,1]$$, $$t \bx + (1-t) \by \in C$$.
3. Let $$\ba \in \VV$$.
4. Let $$\bu, \bv \in C + \ba$$.
5. Then, $$\bu = \bx + \ba$$ and $$\bv = \by + \ba$$ for some $$\bx, \by \in C$$.
6. Then,
$\begin{split} t \bu + (1-t) \bv &= t (\bx + \ba) + (1-t ) (\by + \ba)\\ &= t \bx + (1-t) \by + \ba. \end{split}$
7. But $$t \bx + (1-t) \by \in C$$ since $$C$$ is convex.
8. Then, $$t \bx + (1-t) \by + \ba \in C + \ba$$.
9. Thus, $$t \bu + (1-t) \bv \in C + \ba$$.
10. Thus, $$C + \ba$$ is convex.
We can follow the same argument in the opposite direction to establish that $$C + \ba$$ is convex implies $$C$$ is convex.
Theorem 7.30 (Convexity and scalar multiplication)
Convexity is preserved under scalar multiplication.
$$C$$ (a subset of $$\VV$$) is convex if and only if $$\alpha C$$ is convex for every $$\alpha \in \RR$$.
Proof. Let $$C \subseteq \VV$$.
1. Assume $$C$$ is convex.
2. Let $$\alpha \in \RR$$.
3. Let $$\bu, \bv \in \alpha C$$.
4. Then, $$\bu = \alpha \bx$$ and $$\bv = \alpha \by$$ for some $$\bx, \by \in C$$.
5. Let $$t \in [0,1]$$.
6. $$t\bu + (1-t)\bv = \alpha (t \bx + (1-t)\by)$$.
7. But $$t \bx + (1-t)\by \in C$$ since $$C$$ is convex.
8. Hence, $$\alpha (t \bx + (1-t)\by) \in \alpha C$$.
9. Hence, $$t\bu + (1-t)\bv \in \alpha C$$.
10. Thus, $$\alpha C$$ is convex.
A similar argument in the opposite direction establishes that if $$\alpha C$$ is convex then $$C$$ is convex.
Recall that an affine function $$f : \VV \to \EE$$ from a real vector space $$\VV$$ to another real vector space $$\EE$$ is a function which satisfies
$f(t \bx + (1-t)\by) = tf(\bx) + (1 -t) f(\by)$
for every $$\bx, \by \in \VV$$ and every $$t \in \RR$$.
Recall from Theorem 4.161 that an affine function can be written as a linear transformation followed by a translation:
$f(\bx) = T (\bx) + \bb$
where $$T$$ is a linear operator.
Example 7.4
An affine function $$f : \RR^n \to \RR^m$$ takes the form of a matrix multiplication plus a vector addition:
$f(\bx) = \bA \bx + \bb$
where $$\bA \in \RR^{m \times n}$$ and $$\bb \in \RR^m$$.
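The defining identity $$f(t \bx + (1-t)\by) = t f(\bx) + (1-t) f(\by)$$ is easy to verify numerically for this matrix form (the particular $$\bA$$ and $$\bb$$ below are arbitrary illustrative values):

```python
import numpy as np

A = np.array([[1.0, 2.0],
              [0.0, 1.0],
              [3.0, -1.0]])        # linear part, R^2 -> R^3
b = np.array([1.0, -1.0, 0.5])     # translation part

def f(x):
    return A @ x + b

x = np.array([1.0, 2.0])
y = np.array([-3.0, 0.5])
for t in np.linspace(0.0, 1.0, 11):
    # f(t x + (1-t) y) = t f(x) + (1-t) f(y): the b terms recombine as t b + (1-t) b = b
    assert np.allclose(f(t * x + (1 - t) * y), t * f(x) + (1 - t) * f(y))
```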
Theorem 7.31 (Image under affine function)
Let $$S \subseteq \VV$$ be convex and $$f : \VV \to \EE$$ be an affine function. Then the image of $$S$$ under $$f$$ given by
$f(S) = \{ f(\bx) | \bx \in S\}$
is a convex set.
Proof. We proceed as follows:
1. Let $$\bu, \bv \in f(S)$$.
2. Then, $$\bu = f(\bx)$$ and $$\bv = f(\by)$$ for some $$\bx, \by \in S$$.
3. Let $$0 \leq t \leq 1$$.
4. Then, $$\bz = t \bx + (1-t)\by \in S$$ since $$S$$ is convex.
5. Since $$f$$ is affine, hence
$f(\bz) = f(t \bx + (1-t)\by ) = t f(\bx) + (1-t)f(\by) = t \bu + (1-t)\bv.$
6. Since $$\bz \in S$$, hence $$f(\bz) = t \bu + (1-t)\bv \in f(S)$$.
7. We have shown that for any $$\bu, \bv \in f(S)$$ and any $$0 \leq t \leq 1$$, $$t \bu + (1-t)\bv \in f(S)$$.
8. Thus, $$f(S)$$ is convex.
It applies in the reverse direction also.
Theorem 7.32 (Inverse image under affine function)
Let $$f : \VV \to \EE$$ be affine and $$S \subseteq \EE$$ be convex. Then the inverse image of $$S$$ under $$f$$ given by
$f^{-1}(S) = \{ \bx \in \VV \ST f(\bx) \in S\}$
is convex.
Proof. Denote $$R = f^{-1}(S)$$. We need to show that if $$S$$ is convex then $$R$$ is convex too.
We proceed as follows:
1. Let $$\bx, \by \in R$$.
2. Let $$\bu = f(\bx)$$ and $$\bv = f(\by)$$.
3. $$\bu, \bv \in S$$.
4. Let $$0 \leq t \leq 1$$.
5. Then, $$\bw = t \bu + (1-t)\bv \in S$$ since $$S$$ is convex.
6. Let $$\bz = t \bx + (1-t) \by$$.
7. Since $$f$$ is affine, hence
$\bw = t \bu + (1-t)\bv = t f(\bx) + (1-t) f(\by) = f(t \bx + (1-t) \by) = f(\bz).$
8. Since $$\bw \in S$$, hence $$\bz \in R$$ as $$\bw = f(\bz)$$.
9. We have shown that for any $$\bx, \by \in R$$ and any $$0 \leq t \leq 1$$, $$t \bx + (1-t)\by \in R$$.
10. Thus, $$R$$ is convex.
Example 7.5 (Affine functions preserving convexity)
Let $$S \subseteq \RR^n$$ be convex.
• For some $$\alpha \in \RR$$, $$\alpha S$$ is convex. This is the scaling operation.
• For some $$\ba \in \RR^n$$, $$S + \ba$$ is convex. This is the translation operation.
• Let $$n = m + k$$. Then, let $$\RR^n = \RR^m \times \RR^k$$. A vector $$\bx \in S$$ can be written as $$\bx = (\bx_1, \bx_2)$$ where $$\bx_1 \in \RR^m$$ and $$\bx_2 \in \RR^k$$. Then
$T = \{ \bx_1 \in \RR^m \ST (\bx_1, \bx_2) \in S \text{ for some } \bx_2 \in \RR^k\}$
is convex. This is the projection operation. It projects vectors from $$\RR^n$$ to $$\RR^m$$ by dropping last $$k$$ entries.
Example 7.6 (System of linear equations)
Consider the system of linear equations $$\bA \bx = \by$$ where $$\bA \in \RR^{m \times n}$$.
If $$\by$$ ranges over a convex set, then the corresponding set of solutions also ranges over a convex set due to Theorem 7.32.
The nonnegative orthant $$\RR^n_+$$ is a convex set.
Let $$Y = \RR^m_+ + \ba$$ for some $$\ba \in \RR^m$$; i.e.,
$Y = \{ \by \ST \by \succeq \ba \}.$
Then, $$\bA^{-1} Y$$ is the set of vectors satisfying the inequality
$\bA \bx \succeq \ba.$
Thus, the solution set of a system of linear inequalities of the form $$\bA \bx \succeq \ba$$ is convex.
Now, if $$X = \RR^n_+$$, then $$\bA X$$ is the set of vectors $$\by \in \RR^m$$ such that the equation $$\bA \bx = \by$$ has a nonnegative solution ($$\bx \succeq \bzero$$). Since $$X$$ is convex, so is $$\bA X$$.
Theorem 7.33 (Orthogonal projection of convex set)
The orthogonal projection of a convex set $$C$$ on a subspace $$V$$ is another convex set.
Proof. We recall that orthogonal projection is a linear mapping and thus an affine function. By Theorem 7.31, the image of a convex set under an affine function is convex. Hence proved.
### 7.2.11.3. Set Addition#
Theorem 7.34 (Convexity and set addition)
Let $$C_1$$ and $$C_2$$ be two convex subsets of $$\VV$$. Then $$C_1 + C_2$$ is convex.
Proof. We proceed as follows:
1. Let $$\bx, \by \in C_1 + C_2$$.
2. Then, $$\bx = \bx_1 + \bx_2$$ for some $$\bx_1 \in C_1$$ and some $$\bx_2 \in C_2$$.
3. Similarly, $$\by = \by_1 + \by_2$$ for some $$\by_1 \in C_1$$ and some $$\by_2 \in C_2$$.
4. Let $$0 \leq t \leq 1$$.
5. Then:
$t \bx + (1 - t) \by = t (\bx_1 + \bx_2) + (1-t)(\by_1 + \by_2) = t \bx_1 + (1 - t) \by_1 + t \bx_2 + (1 - t) \by_2.$
6. But, $$\bz_1 = t \bx_1 + (1 - t) \by_1 \in C_1$$ since $$C_1$$ is convex.
7. Similarly, $$\bz_2 = t \bx_2 + (1 - t) \by_2 \in C_2$$ since $$C_2$$ is convex.
8. Hence, $$t \bx + (1 - t) \by = \bz_1 + \bz_2 \in C_1 + C_2$$.
9. Thus, $$C_1 + C_2$$ is convex.
One way to think geometrically about set addition is as the union of all translates of $$C_1$$ given by $$C_1 + \bx$$ as $$\bx$$ varies over $$C_2$$.
Theorem 7.35
A set $$C$$ is convex if and only if
$(1-t) C + t C = C \Forall t \in [0,1].$
Proof. Assume $$C$$ is convex:
1. $$(1-t) C + t C = \{ (1-t) \bx + t \by \ST \bx, \by \in C \}$$.
2. Thus, $$(1-t) C + t C \subseteq C$$.
3. For every $$\bx \in C$$, $$(1-t)\bx \in (1-t)C$$ and $$t \bx \in t C$$.
4. Thus, $$(1-t)\bx + t \bx = \bx \in (1-t) C + t C$$.
5. Thus, $$C \subseteq (1-t) C + t C$$.
6. Combining, we get $$(1-t) C + t C = C$$.
Assume $$(1-t) C + t C = C$$ for every $$t \in [0,1]$$.
1. Let $$\bx, \by \in C$$ and $$t\in [0,1]$$.
2. Then, $$(1-t)\bx \in (1-t)C$$ and $$t \by \in t C$$.
3. Hence, $$(1-t)\bx + t \by \in (1-t) C + t C = C$$.
4. Thus, $$C$$ is convex.
Theorem 7.36 (Convexity and linear combination)
Convexity is preserved under linear combinations.
Let $$C_1, \dots, C_k$$ be convex. Let $$t_1, \dots, t_k \in \RR$$. Then, their linear combination:
$C = t_1 C_1 + \dots + t_k C_k$
is convex.
Proof. Due to Theorem 7.30, $$t_i C_i$$ are convex for $$i=1,\dots,k$$.
By (finite) repeated application of Theorem 7.34, their sum is also convex.
Theorem 7.37 (Nonnegative scalar multiplication distributive law)
Let $$C$$ be convex and $$t_1, t_2 \geq 0$$. Then
$(t_1 + t_2)C = t_1 C + t_2 C.$
Proof. From Theorem 4.18, we know that:
$(t_1 + t_2)C \subseteq t_1 C + t_2 C.$
We now show that $$t_1 C + t_2 C \subseteq (t_1 + t_2)C$$.
1. If both $$t_1 = t_2 = 0$$, then we have trivial equality.
2. If either of $$t_1$$ or $$t_2$$ is 0, then also we have trivial equality.
3. Now, consider the case $$t_1, t_2 > 0$$.
4. Define $$t = t_1 + t_2 > 0$$ and $$r = \frac{t_1}{t}$$.
5. Then, $$1-r = \frac{t_2}{t}$$.
6. Then, since $$C$$ is convex, hence $$r C + (1-r) C \subseteq C$$.
7. Multiplying by $$t$$ on both sides, we get: $$t_1 C + t_2C \subseteq (t_1 + t_2) C$$.
For the special case of $$t_1 = r$$ and $$t_2 = 1 - r$$ with $$r \in [0,1]$$, we get:
$C = r C + (1- r)C.$
In particular, if $$C$$ is convex, then $$C + C = 2C$$, $$C + C + C = 3C$$, and so forth.
Theorem 7.38 (Convex combinations over arbitrary unions)
Let $$I$$ be an index set and $$\FFF = \{C_i \}_{i \in I}$$ be a family of convex sets in $$\VV$$. Let $$C$$ be given as:
$C = \ConvexHull\left (\bigcup_{i \in I} C_i \right);$
i.e., $$C$$ is the convex hull of the union of the family of sets $$\FFF$$. Then,
$C = \bigcup \left \{\sum_{i \in I} t_i C_i \right \}$
where the union is taken over all finite convex combinations (i.e. over all nonnegative choices of $$t_i$$ such that only finitely many $$t_i$$ are nonzero and they add up to 1).
Proof. We proceed as follows:
1. Let $$\bx \in C$$.
2. Then, $$\bx$$ is a convex combination of elements in $$\bigcup_{i \in I} C_i$$.
3. Thus, $$\bx = \sum_{i=1}^m a_i \by_i$$ where $$\by_i \in \bigcup_{i \in I} C_i$$, $$a_i \geq 0$$ and $$\sum a_i = 1$$.
4. Drop all the terms from $$\bx$$ where $$a_i = 0$$.
5. If $$\by_i, \by_j$$ belong to some same $$C_k$$, then, we can replace $$a_i \by_i + a_j \by_j$$ with some $$a \by = a_i \by_i + a_j \by_j$$ where $$a = a_i + a_j$$. Note that, with these assumptions, $$\by = \frac{a_i}{a_i + a_j} \by_i + \frac{a_j}{a_i + a_j} \by_j$$ is a convex combination of $$\by_i$$ and $$\by_j$$, hence $$\by \in C_k$$ since $$C_k$$ is convex.
6. Thus, the terms from a single $$C_k$$ in the representation of $$\bx$$ can be merged into a single term.
7. Thus, we can simplify $$\bx$$ such that
$\bx = \sum_{j=1}^p b_j \bx_j$
such that each $$\bx_j$$ belongs to a different $$C_{i_j}$$ with $$i_j \in I$$, $$b_j > 0$$ and $$\sum b_j = 1$$.
8. Thus, $$C$$ is a union of finite convex combinations of the form
$b_1 C_{i_1} + \dots + b_p C_{i_p}$
where $$i_1, \dots, i_p \in I$$ are different indices and $$b_j > 0$$, $$\sum b_j = 1$$.
9. This is the same set as $$\bigcup \left \{\sum_{i \in I} t_i C_i \right \}$$ except for notational differences.
Corollary 7.3 (Convex hull of union)
Let $$C_1$$ and $$C_2$$ be convex sets. Let $$C$$ be the convex hull of their union:
$C = \ConvexHull (C_1 \cup C_2).$
Then,
$C = \bigcup_{t \in [0,1]} \left [ (1 - t) C_1 + t C_2 \right ].$
### 7.2.11.4. Partial Addition#
Recall the notion of direct sum of two vector spaces $$\VV$$ and $$\WW$$ over $$\RR$$ given by $$\VV \oplus \WW$$.
Theorem 7.39 (Partial addition on convex sets)
Let $$\VV$$ and $$\WW$$ be real vector spaces. Let $$\VV \oplus \WW$$ be their direct sum. Let $$C_1$$ and $$C_2$$ be convex sets in $$\VV \oplus \WW$$. Let $$C$$ be the set of vectors $$\bx = (\by, \bz)$$ such that there exist vectors $$\bz_1, \bz_2 \in \WW$$ with $$(\by, \bz_1) \in C_1$$, $$(\by, \bz_2) \in C_2$$ and $$\bz_1 + \bz_2 = \bz$$. Then, $$C$$ is a convex set in $$\VV \oplus \WW$$.
Proof. Let $$(\by, \bz) \in C$$ such that there exist vectors $$\bz_1, \bz_2 \in \WW$$ with $$(\by, \bz_1) \in C_1$$, $$(\by, \bz_2) \in C_2$$ and $$\bz_1 + \bz_2 = \bz$$.
Let $$(\by', \bz') \in C$$ such that there exist vectors $$\bz'_1, \bz'_2 \in \WW$$ with $$(\by', \bz'_1) \in C_1$$, $$(\by', \bz'_2) \in C_2$$ and $$\bz'_1 + \bz'_2 = \bz'$$.
Let $$t \in [0,1]$$. Consider the vector $$(\by'', \bz'') = t(\by, \bz) + (1-t)(\by', \bz')$$.
1. $$\by'' = t \by + (1-t) \by'$$.
2. $$\bz'' = t \bz + (1-t) \bz' = t(\bz_1 + \bz_2) + (1-t)(\bz'_1 + \bz'_2)$$.
3. Let $$\bz_1'' = t \bz_1 + (1-t) \bz_1'$$.
4. Let $$\bz_2'' = t \bz_2 + (1-t) \bz_2'$$.
5. Since $$(\by, \bz_1), (\by', \bz'_1) \in C_1$$ and $$C_1$$ is convex, hence $$(\by'', \bz''_1) \in C_1$$.
6. Since $$(\by, \bz_2), (\by', \bz'_2) \in C_2$$ and $$C_2$$ is convex, hence $$(\by'', \bz''_2) \in C_2$$.
7. But then, we note that $$\bz'' = \bz''_1 + \bz''_2$$.
8. Thus, $$(\by'', \bz''_1) \in C_1$$ and $$(\by'', \bz''_2) \in C_2$$ implies that $$(\by'', \bz'') \in C$$.
9. Thus, $$C$$ is convex.
We can write a version of the theorem above for $$\RR^n$$.
Corollary 7.4 (Partial addition on convex sets in Euclidean space)
Let $$C_1$$ and $$C_2$$ be convex sets in $$\RR^{n + m}$$. Let $$C$$ be the set of vectors $$\bx = (\by, \bz)$$ such that there exist vectors $$\bz_1, \bz_2 \in \RR^m$$ with $$(\by, \bz_1) \in C_1$$, $$(\by, \bz_2) \in C_2$$ and $$\bz_1 + \bz_2 = \bz$$. Then, $$C$$ is a convex set in $$\RR^{n+m}$$.
The relationship between $$C$$ and $$C_1,C_2$$ is known as partial addition.
• When $$\VV = \{ \bzero\}$$ we are left with the result that $$C_1, C_2$$ convex implies $$C_1 + C_2$$ is convex.
• When $$\WW = \{\bzero\}$$, we are left with the result that $$C_1, C_2$$ convex implies $$C_1 \cap C_2$$ is convex.
• In between, we have a spectrum of results where for a vector in $$C$$, part of the representation must be common between $$C_1$$ and $$C_2$$ while the remaining part must be the sum of corresponding parts of vectors in $$C_1$$ and $$C_2$$.
• In other words, if a vector space can be decomposed as a direct sum of two subspaces, then we have intersection or representation in one subspace while addition in the other.
• This partial addition (binary) operation is commutative as well as associative.
Partial additions appear naturally in convex cones in $$\VV \oplus \RR$$ generated by a convex set in $$\VV$$. See Observation 7.3 and discussion thereafter.
### 7.2.11.5. Cartesian Product/Direct Sum#
Theorem 7.40 (Direct sum of convex sets)
Let $$\VV$$ and $$\WW$$ be real vector spaces. Let $$C \subseteq \VV$$ and $$D \subseteq \WW$$ be convex subsets of $$\VV$$ and $$\WW$$ respectively. Then, $$C \oplus D$$ is a convex subset of $$\VV \oplus \WW$$.
More generally, if $$\VV_1, \dots, \VV_k$$ are real vector spaces and $$C_i \subseteq \VV_i$$ are convex subsets for $$i=1,\dots,k$$, then $$C = C_1 \oplus \dots \oplus C_k$$ is convex in the direct sum of vector spaces $$\VV_1 \oplus \dots \oplus \VV_k$$.
Proof. If either $$C$$ or $$D$$ is empty, then $$C \oplus D$$ is empty, hence convex. We shall thus assume that both $$C$$ and $$D$$ are nonempty.
1. Let $$\bz_1, \bz_2 \in C \oplus D$$ and $$t \in (0, 1)$$.
2. Then, $$\bz_1 = (\bx_1, \by_1)$$ and $$\bz_2 = (\bx_2, \by_2)$$ such that $$\bx_1, \bx_2 \in C$$ and $$\by_1, \by_2 \in D$$.
3. Since $$C$$ and $$D$$ are convex, hence $$\bx = t \bx_1 + (1-t) \bx_2 \in C$$ and $$\by = t \by_1 + (1- t) \by_2 \in D$$.
4. Now,
$\begin{split} \bz &= t \bz_1 + (1 - t) \bz_2\\ &= t(\bx_1, \by_1) + (1-t)(\bx_2, \by_2)\\ &= (t \bx_1 + (1-t) \bx_2, t \by_1 + (1- t) \by_2)\\ &= (\bx, \by). \end{split}$
5. Since $$\bx \in C$$ and $$\by \in D$$, hence $$\bz = (\bx, \by)\in C \oplus D$$.
6. Thus, $$C \oplus D$$ is closed under convex combination.
7. Thus, $$C \oplus D$$ is convex.
The generalization for multiple real vector spaces is easily verifiable through induction.
### 7.2.11.6. Projection#
Theorem 7.41 (Projection of a direct sum)
Let $$\VV$$ and $$\WW$$ be real vector spaces. Let $$C \subseteq \VV$$ and $$D \subseteq \WW$$. Assume that $$C \oplus D$$ is a convex subset of $$\VV \oplus \WW$$. Then, $$C$$ and $$D$$ are convex subsets of $$\VV$$ and $$\WW$$ respectively.
More generally, if $$\VV_1, \dots, \VV_k$$ are real vector spaces and $$C_i \subseteq \VV_i$$ are subsets for $$i=1,\dots,k$$, such that $$C = C_1 \oplus \dots \oplus C_k$$ is convex in the direct sum of vector spaces $$\VV_1 \oplus \dots \oplus \VV_k$$; then $$C_i$$ are convex subsets of $$\VV_i$$ for $$i=1,\dots,k$$.
Proof. Consider the case of two vector spaces $$\VV$$ and $$\WW$$.
1. Let $$\bx_1, \bx_2 \in C$$ and $$t \in (0,1)$$.
2. Pick any $$\by \in D$$.
3. Then, $$(\bx_1, \by), (\bx_2, \by) \in C \oplus D$$.
4. Since $$C \oplus D$$ is convex, hence
$t (\bx_1, \by) + (1-t) (\bx_2, \by) = (t \bx_1 + (1-t) \bx_2, \by) \in C \oplus D.$
5. Thus, $$t \bx_1 + (1-t) \bx_2 \in C$$.
6. Thus, $$C$$ is convex.
7. Similarly $$D$$ is also convex.
The argument can be extended by mathematical induction for multiple vector spaces.
Theorem 7.42 (Projection of a convex set)
Let $$\VV$$ and $$\WW$$ be real vector spaces. Let $$C \subseteq \VV \oplus \WW$$ be a convex set of $$\VV \oplus \WW$$.
For every $$\bx \in \VV$$, define
$D_{\bx} = \{ \by \in \WW \ST (\bx, \by) \in C \}.$
Then $$D_{\bx}$$ is convex for every $$\bx \in \VV$$.
Similarly, if for every $$\by \in \WW$$, we define
$E_{\by} = \{ \bx \in \VV \ST (\bx, \by) \in C \}$
then $$E_{\by}$$ is convex for every $$\by \in \WW$$.
Proof. If $$D_{\bx}$$ is empty then it is convex vacuously. Hence assume that $$D_{\bx}$$ is nonempty.
1. Then for every $$\by \in D_{\bx}$$, we have $$(\bx, \by) \in C$$.
2. Let $$\bu, \bv \in D_{\bx}$$ and $$t \in [0,1]$$.
3. Then $$(\bx, \bu) \in C$$ and $$(\bx, \bv) \in C$$.
4. Since $$C$$ is convex hence $$t(\bx, \bu) + (1-t)(\bx, \bv) \in C$$.
5. i.e., $$(\bx, t\bu + (1-t)\bv) \in C$$.
6. This implies that $$t\bu + (1-t)\bv \in D_{\bx}$$.
7. Hence $$D_{\bx}$$ is convex.
The argument for the convexity of $$E_{\by}$$ is identical.
## 7.2.12. Extreme Points#
Definition 7.13 (Extreme points of convex sets)
Let $$\VV$$ be a real vector space and let $$S$$ be a subset of $$\VV$$.
A point $$\bx \in S$$ is called an extreme point of $$S$$ if there do not exist $$\bx_1, \bx_2 \in S$$ with $$\bx_1 \neq \bx_2$$ and $$t \in (0, 1)$$ such that
$\bx = t \bx_1 + (1-t) \bx_2.$
In other words, $$\bx$$ cannot be expressed as a nontrivial convex combination of two different points in $$S$$.
The set of extreme points of a set $$S$$ is denoted by $$\extreme S$$.
Example 7.7 (Extreme points)
1. Let $$C = [0,1] \subseteq \RR$$. Then, $$0$$ and $$1$$ are extreme points of $$C$$.
2. Let $$C = (0, 1) \subseteq \RR$$. Then $$C$$ has no extreme points.
3. In a triangle, the three vertices are extreme points.
4. In a convex polytope, all the vertices are extreme points.
A more intricate example of the set of extreme points for the set $$P = \{ \bx \in \RR^n \ST \bA \bx = \bb, \bx \succeq \bzero \}$$ is discussed in Theorem 7.59.
http://mathcs.chapman.edu/~jipsen/structures/doku.php/bck-join-semilattices

## BCK-join-semilattices
Abbreviation: BCKJSlat
### Definition
A BCK-join-semilattice is a structure $\mathbf{A}=\langle A,\vee,\rightarrow,1\rangle$ of type $\langle 2,2,0\rangle$ such that
(1): $(x\rightarrow y)\rightarrow ((y\rightarrow z)\rightarrow (x\rightarrow z)) = 1$
(2): $1\rightarrow x = x$
(3): $x\rightarrow 1 = 1$
(4): $x\rightarrow (x\vee y) = 1$
(5): $x\vee((x\rightarrow y)\rightarrow y) = ((x\rightarrow y)\rightarrow y)$
$\vee$ is idempotent: $x\vee x = x$
$\vee$ is commutative: $x\vee y = y\vee x$
$\vee$ is associative: $(x\vee y)\vee z = x\vee (y\vee z)$
Remark: $x\le y \iff x\rightarrow y=1$ is a partial order, with $1$ as greatest element, and $\vee$ is a join for this order. 1)
##### Morphisms
Let $\mathbf{A}$ and $\mathbf{B}$ be BCK-join-semilattices. A morphism from $\mathbf{A}$ to $\mathbf{B}$ is a function $h:A\rightarrow B$ that is a homomorphism:
$h(x\vee y)=h(x)\vee h(y)$, $h(x\rightarrow y)=h(x)\rightarrow h(y)$ and $h(1)=1$
Example 1:
### Properties
Classtype: variety
### Finite members
$\begin{array}{lr} f(1)= &1\\ f(2)= &\\ f(3)= &\\ f(4)= &\\ f(5)= &\\ f(6)= &\\ \end{array}$
### References
1) Pawel M. Idziak, Lattice operation in BCK-algebras, Math. Japon., 29, 1984, 839–846
https://socratic.org/questions/how-do-you-solve-the-system-x-y-z-3-3x-y-z-13-and-3x-y-2z-18

# How do you solve the system x+y+z=-3, 3x+y-z=13, and 3x+y-2z=18?
Apr 25, 2017
The augmented matrix looks something like this:
$\left(\begin{matrix}1 & 1 & 1 \\ 3 & 1 & - 1 \\ 3 & 1 & - 2\end{matrix}\right) \left(\begin{matrix}- 3 \\ 13 \\ 18\end{matrix}\right)$
Then: $R 2 \to R 2 - 3 R 1 , R 3 \to R 3 - 3 R 1$
$\left(\begin{matrix}1 & 1 & 1 \\ 0 & - 2 & - 4 \\ 0 & - 2 & - 5\end{matrix}\right) \left(\begin{matrix}- 3 \\ 22 \\ 27\end{matrix}\right)$
Then: $R 3 \to R 3 - R 2$
$\left(\begin{matrix}1 & 1 & 1 \\ 0 & - 2 & - 4 \\ 0 & 0 & - 1\end{matrix}\right) \left(\begin{matrix}- 3 \\ 22 \\ 5\end{matrix}\right)$
So now we back substitute:
$- z = 5 \implies z = - 5$
$- 2 y - 4 z = 22 \implies y = - 1$
$x + y + z = - 3 \implies x = 3$
https://math.stackexchange.com/questions/2530063/stokes-theorem-help

Stokes' Theorem Help
Evaluate the flux integral $$\iint_S\operatorname{curl}(\vec{F}) \cdot d\vec{S}$$ for the vector field $\vec{F}(x,y,z) = \langle(x^9 + y^7)z^5, x, y \rangle$, where $$S: \frac{x^2 + y^2}{16} + z^8 = 1, \ z\ge 0$$ and is oriented upwards.
So, we're going to use Stokes' theorem here. The first step is to parametrize the boundary curve as $\vec{r}(t) = \langle \cos(t), 4\sin(t), 0 \rangle , 0\le t \le 2\pi$.
However, after that, I have NO clue whatsoever. Can someone walk me through it?
• Looks like you have a missing parameterization. – jdods Nov 21 '17 at 0:51
• Have you tried writing down the line integral that you need to evaluate? – Zach Boyd Nov 21 '17 at 1:11
Guide:
• Compute $F(r(t))$.
• Compute $r'(t)$.
• Compute the inner product and use the following:
$$\iint_S \operatorname{curl}(F) \cdot dS = \int_0^{2\pi} F(r(t)) \cdot r'(t) \, dt$$
The boundary of your curve will be when $z=0$. Plugging in $z=0$ to the equation of your surface gives a circle of radius $4$ which can be parameterized as $$\vec r(t)=\langle 4\cos(t), 4\sin(t), 0\rangle$$ for $0\leq t\leq 2\pi$.
Thus we get $\vec F(\vec r(t))=\langle 0,4\cos(t), 4\sin(t)\rangle$. Note that the tangent vector to the curve also lies in the $xy$-plane and thus, the dot product with zero out the $z$ component totally.
On the curve we get $\vec r'(t)=\langle -4\sin(t), 4\cos(t), 0\rangle$. So $\vec F(\vec r(t)) \cdot \vec r \ '(t)=16\cos^2(t)$.
So we have by Stokes' Theorem that $\iint_S \text{curl}\ \vec F \cdot d\vec S=\int_C\vec F \cdot d\vec r$. The right side becomes
\begin{aligned} \int_0^{2\pi}16\cos^2(t)dt&=8\int_0^{2\pi}(1+\cos{2t})dt\\ &=16\pi \end{aligned}
Note that we can ignore the $\cos(2t)$ portion of the integral since it is integrated over complete periods and therefore contributes zero.
https://fasma-diele.owlstown.net/publications/2707-on-some-inverse-eigenvalue-problems-with-toeplitz-related-structure

# On some inverse eigenvalue problems with Toeplitz-related structure
### Journal article
Fasma Diele, Teresa Laudadio, Nicola Mastronardi
SIAM journal on matrix analysis and applications, vol. 26, SIAM, 2004, pp. 285--294
### Cite
APA
Diele, F., Laudadio, T., & Mastronardi, N. (2004). On some inverse eigenvalue problems with Toeplitz-related structure. SIAM Journal on Matrix Analysis and Applications, 26, 285–294.
Chicago/Turabian
Diele, Fasma, Teresa Laudadio, and Nicola Mastronardi. “On Some Inverse Eigenvalue Problems with Toeplitz-Related Structure.” SIAM journal on matrix analysis and applications 26 (2004): 285–294.
MLA
Diele, Fasma, et al. “On Some Inverse Eigenvalue Problems with Toeplitz-Related Structure.” SIAM Journal on Matrix Analysis and Applications, vol. 26, SIAM, 2004, pp. 285–94.
https://eccc.weizmann.ac.il/keyword/16389/

Under the auspices of the Computational Complexity Foundation (CCF)
Reports tagged with logarithmic form:
TR09-005 | 7th December 2008
Emanuele Viola
#### Bit-Probe Lower Bounds for Succinct Data Structures
We prove lower bounds on the redundancy necessary to
represent a set $S$ of objects using a number of bits
close to the information-theoretic minimum $\log_2 |S|$,
while answering various queries by probing few bits. Our
main results are:
\begin{itemize}
\item To represent $n$ ternary values $t \in \zot^n$ in ... more >>>
ISSN 1433-8092
https://www.physicsforums.com/threads/partial-fraction-expansion.610803/

# Partial fraction expansion
1. Jun 2, 2012
### Runei
Now this is a pretty straight forward question. And I just want to make sure that Im not doing anything stupid.
But when doing partial fraction expansions of the type
$\frac{K}{s^{2}+2\zeta\omega_{n}s+\omega_{n}^{2}}$ Shouldnt I always be able to factor the denominator into the following:
$\left(s-s_{1}\right)\left(s-s_{2}\right)$
where
$s_{1} = -\zeta\omega_{n}+\omega_{n}\sqrt{\zeta^{2}-1}$ and
$s_{2} = -\zeta\omega_{n}-\omega_{n}\sqrt{\zeta^{2}-1}$
And thus being able to make the following expansion:
$\frac{A}{s-s_{1}}+\frac{B}{s-s_{2}} = \frac{K}{\left(s-s_{1}\right)\left(s-s_{2}\right)}$
Since s1 and s2 are the roots of the polynomial?
These roots may ofcourse either be real and distinct, repeated or complex conjugates.
Rune
2. Jun 2, 2012
### Runei
The reason for the question is that I am reading for an exam in Control Systems, and I am using Laplace transforms to solve the differential equations.
To get back to the time domain I'm using partial fraction expansions, and for example right now I'm trying to do the partial fraction expansion of
$\frac{K}{s\cdot\left(s^{2}+2\zeta\omega_{n}s + \omega_{n}^{2} \right)}$
And I'm trying to determine whether I am actually doing it wrong when factoring, or whether I can actually solve the problem by expanding to the following:
$\frac{A}{s}+\frac{B}{s-s_{1}}+\frac{C}{s-s_{2}} = \frac{K}{s\cdot\left(s-s_{1}\right)\cdot\left(s-s_{2}\right)}$
3. Jun 2, 2012
### Staff: Mentor
This factorization is fine, as long as there are no repeated real roots in the quadratic.
Consider the case where the denominator is s(s2 + 4s + 4). Here is the partial fraction decomposition:
$$\frac{K}{s(s^2 + 4s + 4)} = \frac{A}{s} + \frac{B}{s + 2} + \frac{C}{(s + 2)^2}$$
4. Jun 2, 2012
### Runei
Thank you Mark!
I actually found an error further back in my work, which was why I didn't get the correct result. But thank you for clarifying and reassuring me :)
https://search.r-project.org/CRAN/refmans/ANTs/html/assoc.gfi.html

assoc.gfi {ANTs} R Documentation
## Generalized affiliation index
### Description
Computes generalized affiliation indices based on a matrix of interactions or associations and a confounding factor.
### Usage
assoc.gfi(M1, M2, fr = TRUE, sym = FALSE, erase.diag = TRUE, index = "sri")
### Arguments
M1: a square adjacency matrix representing individual interactions or associations. In the latter case, associations must be in the form of a gbi.
M2: a square adjacency matrix representing individual values of confounding factors.
fr: if true, it considers the argument M1 as an adjacency matrix representing interaction frequencies between individuals. Otherwise, it considers the argument M1 as an adjacency matrix representing associations between individuals.
sym: if true, it considers the argument M1 as an adjacency matrix representing symmetric interactions/associations.
erase.diag: if true, it omits the diagonal of the matrix.
index: a string indicating the association index to compute:
• 'sri' for the Simple ratio index: x/(x+yAB+yA+yB)
• 'hw' for the Half-weight index: x/(x+yAB+(1/2)(yA+yB))
• 'sr' for the Square root index: x/sqrt((x+yAB+yA)(x+yAB+yB))
### Details
Generalized affiliation indices allow to control for individual associations by a given confounding factor (such as temporal or spatial overlaps, gregariousness, social unit membership, kinship...). The principle is to perform a Generalized Linear Regression (GLR) on both matrices (one representing individual interactions/associations and the other one representing the confounding factor) and to use GLR residuals as association indices. For an adjacency matrix representing individual interactions, the GLR belongs to the Poisson family. For an adjacency matrix representing individual associations, the GLR belongs to the Binomial family. High positive values suggest strong associations between two individuals and negative values suggest avoidance between two individuals.
### Value
a square adjacency matrix representing the generalized affiliation index between individuals.
### Author(s)
Sebastian Sosa, Ivan Puga-Gonzalez.
### References
Whitehead, H., & James, R. (2015). Generalized affiliation indices extract affiliations from social network data. Methods in Ecology and Evolution, 6(7), 836-844.
### Examples
assoc.gfi(sim.gbi,sim.gbi.att, fr = FALSE)
[Package ANTs version 0.0.16 Index]
Recent Postings from Cosmology and Extragalactic (http://harvard.voxcharta.org/category/astro-ph/cosmology-extragalactic-astro-ph/)
Reconciling Induced-Gravity Inflation in Supergravity With The BICEP2 Results [Cross-Listing]
We generalize the embedding of induced-gravity inflation beyond the no-scale Supergravity presented in arXiv:1403.5486 employing two gauge singlet chiral superfields, a superpotential uniquely determined by applying a continuous R and a discrete Zn symmetries, and a logarithmic Kahler potential including all the allowed terms up to fourth order in powers of the various fields. We show that, increasing slightly the prefactor (-3) encountered in the adopted Kahler potential, an efficient enhancement of the resulting tensor-to-scalar ratio can be achieved rendering the predictions of the model consistent with the recent BICEP2 results, even with subplanckian excursions of the original inflaton field. The remaining inflationary observables can become compatible with the data by mildly tuning the coefficient involved in the fourth order term of the Kahler potential which mixes the inflaton with the accompanying non-inflaton field. The inflaton mass is predicted to be close to 10^14 GeV.
Power spectrum tomography of dark matter annihilation with local galaxy distribution
Cross-correlating the gamma-ray background with local galaxy catalogs potentially gives stringent constraints on dark matter annihilation. We provide updated theoretical estimates of sensitivities to the annihilation cross section from gamma-ray data with the Fermi telescope and 2MASS galaxy catalogs, by elaborating the galaxy power spectrum and astrophysical backgrounds, and adopting Markov-Chain Monte Carlo simulations. In particular, we show that taking a tomographic approach by dividing the galaxy catalogs into more than one redshift slice will improve the sensitivity by a factor of a few to several. If dark matter halos contain many bright substructures, yielding a large annihilation boost, then one may be able to probe the canonical annihilation cross section for the thermal production mechanism up to masses of $\sim$700 GeV. Even with a modest substructure boost, on the other hand, the sensitivities could still reach cross sections a factor of three larger than the canonical one for dark matter masses of tens to a few hundred GeV.
Note on Adiabatic Modes and Ward Identities In A Closed Universe
As statements regarding the soft limit of cosmological correlation functions, consistency relations are known to exist in any flat FRW universe. In this letter we explore the possibility of finding such relations in a spatially closed universe, where the soft limit $\textbf{q}\rightarrow 0$ does not exist in any rigorous sense. Despite the absence of spatial infinity of the spatial slices, we find the adiabatic modes and their associated consistency relations in a toy universe with background topology $R\times S^2$. Flat FRW universe adiabatic modes are recovered via taking the large radius limit $R\gg \mathcal{H}^{-1}$, for which we are living in a small local patch of Hubble size on the sphere. It is shown that both dilation and translation adiabatic modes in the local patch are recovered by a global dilation on the sphere, acting at different places.
Theoretical deduction of the Hubble law beginning with a MoND theory in context of the ${\Lambda}$FRW-Cosmology
We deduced the Hubble law and the age of the Universe through the introduction of the Inverse Yukawa Field (IYF), a non-local additive complement to Newtonian gravitation (Modified Newtonian Dynamics). As a result, we connected the dynamics of astronomical objects at large scale with the Friedmann-Robertson-Walker ($\Lambda$FRW) model. From the corresponding formalism, the Hubble law can be expressed as $v = (4\pi [G]/c)r$, which was derived by evaluating the IYF force at distances much greater than 50 Mpc, giving a maximum value for the expansion rate of the universe of $H_0 = 86.31$, consistent with the observational data of 392 astronomical objects from the NASA/IPAC Extragalactic Database (NED). This additional field (IYF) provides a simple interpretation of dark energy as the action at large scale of baryonic matter. Additionally, we calculated the age of the universe as 11 Gyr, in agreement with recent measurements of the age of the white dwarfs in the solar neighborhood.
Resilience of the standard predictions for primordial tensor modes
We show that the prediction for the primordial tensor power spectrum cannot be modified at leading order in derivatives. Indeed, one can always set to unity the speed of propagation of gravitational waves during inflation by a suitable disformal transformation of the metric, while a conformal one can make the Planck mass time-independent. Therefore, the tensor-to-scalar ratio unambiguously fixes the energy scale of inflation. Using the Effective Field Theory of Inflation, we check that predictions are independent of the choice of frame, as expected. The first corrections to the standard prediction come from two parity violating operators with three derivatives. Also the correlator $\langle\gamma\gamma\gamma\rangle$ is standard and only receives higher derivative corrections. These results hold also in multifield models of inflation and in alternatives to inflation and make the connection between a (quasi) scale-invariant tensor spectrum and inflation completely robust.
Covariant holography of a tachyonic accelerating universe [Cross-Listing]
We apply the holographic principle to a flat dark energy dominated Friedmann-Robertson-Walker spacetime filled with a tachyon scalar field with constant equation of state $w=p/\rho$, both for $w>-1$ and $w<-1$. By using a geometrical covariant procedure, which allows the construction of holographic hypersurfaces, we have obtained for each case the position of the preferred screen and have then compared these with those obtained by using the holographic dark energy model with the future event horizon as the infrared cutoff. In the phantom scenario, one of the two obtained holographic screens is placed on the big rip hypersurface, both for the covariant holographic formalism and the holographic phantom model. It is also analysed whether the existence of these preferred screens allows a mathematically consistent formulation of fundamental theories based on the existence of an S matrix at infinite distances.
Nonlinear growing neutrino cosmology
The energy scale of Dark Energy, $\sim 2 \times 10^{-3}$ eV, is a long way off compared to all known fundamental scales – except for the neutrino masses. If Dark Energy is dynamical and couples to neutrinos, this is no longer a coincidence. The time at which Dark Energy starts to behave as an effective cosmological constant can be linked to the time at which the cosmic neutrinos become nonrelativistic. This naturally places the onset of the Universe’s accelerated expansion in recent cosmic history, addressing the why-now problem of Dark Energy. We show that these mechanisms indeed work in the Growing Neutrino Quintessence model – even if the fully nonlinear structure formation and backreaction are taken into account, which were previously suspected of spoiling the cosmological evolution. The attractive force between neutrinos arising from their coupling to Dark Energy grows as large as $10^6$ times the gravitational strength. This induces very rapid dynamics of neutrino fluctuations which are nonlinear at redshift $z \approx 2$. Nevertheless, a nonlinear stabilization phenomenon ensures only mildly nonlinear oscillating neutrino overdensities with a large-scale gravitational potential substantially smaller than that of cold dark matter perturbations. Depending on model parameters, the signals of large-scale neutrino lumps may render the cosmic neutrino background observable.
Neutrino constraints: what large-scale structure and CMB data are telling us?
(Abridged) We discuss the reliability of neutrino mass constraints, either active or sterile, from the combination of WMAP 9-year or Planck CMB data with BAO measurements from BOSS DR11, galaxy shear measurements from CFHTLenS, SDSS Ly-$\alpha$ forest constraints and the galaxy cluster mass function from Chandra observations. To avoid model dependence of the constraints we perform a full likelihood analysis for all the datasets employed. As for the cluster data analysis, we rely on the most recent calibration of massive neutrino effects in the halo mass function and we explore the impact on cosmological parameters of the uncertainty in the mass bias and of the re-calibration of the halo mass function due to baryonic feedback processes. We find that none of the low redshift probes alone provides evidence for massive neutrinos in combination with CMB measurements, while a larger than $2\sigma$ detection of non-zero neutrino mass, either active or sterile, is achieved combining cluster or shear data with CMB and BAO measurements. The preference for massive neutrinos is larger in the sterile neutrino scenario, and for the combination of Planck, BAO, shear and cluster datasets we find that the vanilla $\Lambda$CDM model is rejected at more than $3\sigma$ and a sterile neutrino mass as motivated by the accelerator anomaly is within the $2\sigma$ errors. Finally, results from the full data combination reflect the tension between the $\sigma_8$ constraints obtained from cluster and shear data and those inferred from Ly-$\alpha$ forest measurements; in the active neutrino scenario, for both CMB datasets employed, the full data combination yields only an upper limit on $\sum m_\nu$, while assuming an extra sterile neutrino we still find a preference for non-vanishing mass, $m_s^{\rm eff}=0.26^{+0.22}_{-0.24}$ eV, and a dark contribution to the radiation content, $\Delta N_{\rm eff}=0.82\pm0.55$.
The ESO UVES Advanced Data Products Quasar Sample - III. Evidence of Bimodality in the [N/alpha] Distribution
We report here a study of nitrogen and $\alpha$-capture element (O, S, and Si) abundances in 18 Damped Ly$\alpha$ Absorbers (DLAs) and sub-DLAs drawn from the ESO-UVES Advanced Data Products (EUADP) database. We report 9 new measurements, 5 upper and 4 lower limits of nitrogen that, when compiled with available nitrogen measurements from the literature, make a sample of 108 systems. The extended sample presented here confirms the [N/$\alpha$] bimodal behaviour suggested in previous studies. Three-quarters of the systems show $\langle$[N/$\alpha$]$\rangle=-0.85$ ($\pm$0.20 dex) and one-quarter have ratios clustered at $\langle$[N/$\alpha$]$\rangle= -1.41$ ($\pm$0.14 dex). The high [N/$\alpha$] plateau is consistent with the HII regions of dwarf irregular and blue compact dwarf galaxies, although extended to lower metallicities, and could be interpreted as the result of primary nitrogen production by intermediate mass stars. The low [N/$\alpha$] values are the lowest ever observed in any astrophysical site. In spite of this fact, even lower values could be measured with the present instrumentation, but we do not find them below [N/$\alpha$] $\approx$ $-1.7$. This suggests the presence of a floor in [N/$\alpha$] abundances, which along with the lockstep increase of N and Si may indicate primary nitrogen production from fast rotating, massive stars in relatively young or unevolved systems.
Higher derivatives and power spectrum in effective single field inflation [Cross-Listing]
We study next-to-leading corrections to the effective action of the curvature perturbation obtained by integrating out the coupled heavy isocurvature perturbation. These corrections result from applying higher order derivative operators of the effective theory expansion with respect to the mass scale of the heavy modes. We find that the correction terms are suppressed by the ratio of the Hubble parameter to the heavy mass scale. The corresponding corrections to the power spectrum of the curvature perturbation are presented for a simple illustrative example.
Primordial quantum nonequilibrium and large-scale cosmic anomalies
We study incomplete relaxation to quantum equilibrium at long wavelengths, during a pre-inflationary phase, as a possible explanation for the reported large-scale anomalies in the cosmic microwave background (CMB). Our scenario makes use of the de Broglie-Bohm pilot-wave formulation of quantum theory, in which the Born probability rule has a dynamical origin. The large-scale power deficit could arise from incomplete relaxation for the amplitudes of the primordial perturbations. We show, by numerical simulations for a spectator scalar field, that if the pre-inflationary era is radiation dominated then the deficit in the emerging power spectrum will have a characteristic shape (an inverse-tangent dependence on wavenumber k, with oscillations). It is found that our scenario is able to produce a power deficit in the observed region and of the observed (approximate) magnitude for an appropriate choice of cosmological parameters. We also discuss the large-scale anisotropy, which could arise from incomplete relaxation for the phases of the primordial perturbations. We present numerical simulations for phase relaxation, and we show how to define characteristic scales for amplitude and phase nonequilibrium. The extent to which the data might support our scenario is left as a question for future work. Our results suggest that we have a potentially viable model that might explain two apparently independent cosmic anomalies by means of a single mechanism.
On the Clustering of Compact Galaxy Pairs in Dark Matter Haloes
We analyze the clustering of photometrically selected galaxy pairs by using the halo-occupation distribution (HOD) model. We measure the angular two-point auto-correlation function, $\omega(\theta)$, for galaxies and galaxy pairs in three volume-limited samples and develop an HOD to model their clustering. Our results are successfully fit by these HOD models, and we see the separation of "1-halo" and "2-halo" clustering terms for both single galaxies and galaxy pairs. Our clustering measurements and HOD model fits for the single galaxy samples are consistent with previous results. We find that the galaxy pairs generally have larger clustering amplitudes than single galaxies, and the quantities computed during the HOD fitting, e.g., effective halo mass, $M_{eff}$, and linear bias, $b_{g}$, are also larger for galaxy pairs. We find that the central fractions for galaxy pairs are significantly higher than single galaxies, which confirms that galaxy pairs are formed at the center of more massive dark matter haloes. We also model the clustering dependence of the galaxy pair correlation function on redshift, galaxy type, and luminosity. We find early-early pairs (bright galaxy pairs) cluster more strongly than late-late pairs (dim galaxy pairs), and that the clustering does not depend on the luminosity contrast between the two galaxies in the compact group.
The evolution of galaxy star formation activity in massive halos
There is now a large consensus that the current epoch of the Cosmic Star Formation History (CSFH) is dominated by low mass galaxies while the most active phase at 1<z<2 is dominated by more massive galaxies, which undergo a faster evolution. Massive galaxies tend to inhabit very massive halos such as galaxy groups and clusters. We aim to understand whether the observed "galaxy downsizing" could be interpreted as a "halo downsizing", whereby the most massive halos, and their galaxy populations, evolve more rapidly than halos of lower mass. Thus, we study the contribution to the CSFH of galaxies inhabiting group-sized halos. This is done through the study of the evolution of the Infra-Red (IR) luminosity function of group galaxies from redshift 0 to ~1.6. We use a sample of 39 X-ray selected groups in the Extended Chandra Deep Field South (ECDFS), the Chandra Deep Field North (CDFN), and the COSMOS field, where the deepest available mid- and far-IR surveys have been conducted with Spitzer MIPS and Herschel PACS. Groups at low redshift lack the brightest, rarest, and most star forming IR-emitting galaxies observed in the field. Their IR-emitting galaxies contribute <10% of the comoving volume density of the whole IR galaxy population in the local Universe. At redshift >~1, the most IR-luminous galaxies (LIRGs and ULIRGs) are preferentially located in groups, and this is consistent with a reversal of the star-formation rate vs. density anti-correlation observed in the nearby Universe. At these redshifts, group galaxies contribute 60-80% of the CSFH, i.e. much more than at lower redshifts. Below z~1, the comoving number and SFR densities of IR-emitting galaxies in groups decline significantly faster than those of all IR-emitting galaxies. Our results are consistent with a "halo downsizing" scenario and highlight the significant role of "environment" quenching in shaping the CSFH.
Inflationary tensor fossils in large-scale structure
Inflation models make specific predictions for a tensor-scalar-scalar three-point correlation, or bispectrum, between one gravitational-wave (tensor) mode and two density-perturbation (scalar) modes. This tensor-scalar-scalar correlation leads to a local power quadrupole, an apparent departure from statistical isotropy in our Universe, as well as characteristic four-point correlations in the current mass distribution in the Universe. So far, the predictions for these observables have been worked out only for single-clock models in which certain consistency conditions between the tensor-scalar-scalar correlation and tensor and scalar power spectra are satisfied. Here we review the requirements on inflation models for these consistency conditions to be satisfied. We then consider several examples of inflation models, such as non-attractor and solid inflation models, in which these conditions are put to the test. In solid inflation the simplest consistency conditions are already violated whilst in the non-attractor model we find that, contrary to the standard scenario, the tensor-scalar-scalar correlator probes directly relevant model-dependent information. We work out the predictions for observables in these models. For non-attractor inflation we find an apparent local quadrupolar departure from statistical isotropy in large-scale structure but that this power quadrupole decreases very rapidly at smaller scales. The consistency of the CMB quadrupole with statistical isotropy then constrains the distance scale that corresponds to the transition from the non-attractor to attractor phase of inflation to be larger than the currently observable horizon. Solid inflation predicts clustering fossils signatures in the current galaxy distribution that may be large enough to be detectable with forthcoming, and possibly even current, galaxy surveys.
Testing primordial non-Gaussianities on galactic scales at high redshift
The simplest inflationary models predict a very nearly Gaussian distribution of density fluctuations. Primordial non-Gaussianities therefore provide an important test of inflationary models. Although the Planck CMB experiment has produced strong limits on non-Gaussianity on scales of clusters, there is still room for considerable non-Gaussianity on galactic scales. We have tested the effect of local non-Gaussianity on the high redshift galaxy population by running five cosmological N-body simulations down to z=6.5. For these simulations, we adopt the same initial phases, and either Gaussian or scale-dependent non-Gaussian primordial fluctuations, all consistent with the constraints set by Planck on clusters scales. We then assign stellar masses to each halo using the halo – stellar mass empirical relation of Behroozi et al. (2013). Our simulations with non-Gaussian initial conditions produce halo mass functions that show clear departures from those obtained from the analogous simulations with Gaussian initial conditions at z>~10. We observe a >0.3 dex boosting of the low-end of the halo mass function, which leads to a similar effect on the galaxy stellar mass function, which should be testable with future galaxy surveys at z>10. As cosmic reionization is thought to be driven by dwarf galaxies at high redshift, our findings may have implications for the reionization history of the Universe.
Nonlinear evolution of dark matter subhalos and applications to warm dark matter
We describe the methodology to include nonlinear evolution, including tidal effects, in the computation of subhalo distribution properties in both cold (CDM) and warm (WDM) dark matter universes. Using semi-analytic modeling, we include effects from dynamical friction, tidal stripping, and tidal heating, allowing us to dynamically evolve the subhalo distribution. We calibrate our nonlinear evolution scheme to the CDM subhalo mass function in the Aquarius N-body simulation, producing a subhalo mass function within the range of simulations. We find tidal effects to be the dominant mechanism of nonlinear evolution in the subhalo population. Finally, we compute the subhalo mass function for $m_\chi=1.5$ keV WDM including the effects of nonlinear evolution, and compare radial number densities and mass density profiles of subhalos in CDM and WDM models. We show that all three signatures differ between the two dark matter models, suggesting that probes of substructure may be able to differentiate between them.
Are Scalar and Tensor Deviations Related in Modified Gravity?
Modified gravity theories on cosmic scales have three key deviations from general relativity. They can cause cosmic acceleration without a physical, highly negative pressure fluid, can cause a gravitational slip between the two metric potentials, and can cause gravitational waves to propagate differently, e.g. with a speed different from the speed of light. We examine whether the deviations in the metric potentials as observable through modified Poisson equations for scalar density perturbations are related to or independent from deviations in the tensor gravitational waves. We show analytically they are independent instantaneously in covariant Galileon gravity — e.g. at some time one of them can have the general relativity value while the other deviates — though related globally — if one deviates over a finite period, the other at some point shows a deviation. We present expressions for the early time and late time de Sitter limits, and numerically illustrate their full evolution. This in(ter)dependence of the scalar and tensor deviations highlights complementarity between cosmic structure surveys and future gravitational wave measurements.
X-ray bright active galactic nuclei in massive galaxy clusters III: New insights into the triggering mechanisms of cluster AGN
We present the results of a new analysis of the X-ray selected Active Galactic Nuclei (AGN) population in the vicinity of 135 of the most massive galaxy clusters in the redshift range 0.2 < z < 0.9 observed with Chandra. With a sample of more than 11,000 X-ray point sources, we are able to measure, for the first time, evidence for evolution in the cluster AGN population beyond the expected evolution of field AGN. Our analysis shows that the overall number density of cluster AGN scales with the cluster mass as $\sim M_{500}^{-1.2}$. There is no evidence for the overall number density of cluster member X-ray AGN depending on the cluster redshift in a manner different from field AGN, nor is there any evidence that the spatial distribution of cluster AGN (given in units of the cluster overdensity radius r_500) strongly depends on the cluster mass or redshift. The $M^{-1.2 \pm 0.7}$ scaling relation we measure is consistent with theoretical predictions of the galaxy merger rate in clusters, which is expected to scale with the cluster velocity dispersion, $\sigma$, as $\sim \sigma^{-3}$ or $\sim M^{-1}$. This consistency suggests that AGN in clusters may be predominantly triggered by galaxy mergers, a result that is further corroborated by visual inspection of Hubble images for 23 spectroscopically confirmed cluster member AGN in our sample. A merger-driven scenario for the triggering of X-ray AGN is not strongly favored by studies of field galaxies, however, suggesting that different mechanisms may be primarily responsible for the triggering of cluster and field X-ray AGN.
Constraints on cosmological parameters from Planck and BICEP2 data [Cross-Listing]
We show that the tension introduced by the detection of large amplitude gravitational wave power by the BICEP2 experiment with temperature anisotropy measurements by the Planck mission is alleviated in models where extra light species contribute to the effective number of relativistic degrees of freedom. We also show that inflationary models based on S-dual potentials are in agreement with Planck and BICEP2 data.
How well is our universe described by an FLRW model? [Cross-Listing]
Extremely well! The spacetime metric, $g_{ab}$, of our universe is approximated by an FLRW metric, $g_{ab}^{(0)}$, to about 1 part in $10^4$ or better on both large and small scales, except in the immediate vicinity of very strong field objects, such as black holes. However, derivatives of $g_{ab}$ are not close to derivatives of $g_{ab}^{(0)}$, so there can be significant differences in the behavior of geodesics and huge differences in curvature. Consequently, observable quantities in the actual universe may differ significantly from the corresponding observables in the FLRW model. Nevertheless, as we shall review here, we have proven general results showing that the large matter inhomogeneities that occur on small scales cannot produce significant backreaction effects on large scales, so $g_{ab}^{(0)}$ satisfies Einstein’s equation with the averaged stress-energy tensor of matter as its source. We discuss the flaws in some other approaches that have suggested that large backreaction effects may occur. As we also will review here, with a suitable "dictionary," Newtonian cosmologies provide excellent approximations to cosmological solutions to Einstein’s equation (with dust and a cosmological constant) on all scales.
The Renormalizable Three-Term Polynomial Inflation with Large Tensor-to-Scalar Ratio
We systematically study the renormalizable three-term polynomial inflation in the supersymmetric and non-supersymmetric models. The supersymmetric inflaton potentials can be realized in supergravity theory, and only have two independent parameters. We show that the general renormalizable supergravity model is equivalent to one kind of our supersymmetric models. We find that the spectral index and tensor-to-scalar ratio can be consistent with the Planck and BICEP2 results, but the running of spectral index is always out of the $2\sigma$ range. If we do not consider the BICEP2 experiment, these inflationary models can be highly consistent with the Planck observations and saturate its upper bound on the tensor-to-scalar ratio ($r \le 0.11$). Thus, our models can be tested at the future Planck and QUBIC experiments.
The expected anisotropy in solid inflation [Cross-Listing]
Solid inflation is an effective field theory of inflation in which isotropy and homogeneity are accomplished via a specific combination of anisotropic sources (three scalar fields that individually break isotropy). This results in specific observational signatures that are not found in standard models of inflation: a non-trivial angular dependence for the squeezed bispectrum, and a possibly long period of anisotropic inflation (to drive inflation, the "solid" must be very insensitive to any deformation, and thus background anisotropies are very slowly erased). In this paper we compute the expected level of statistical anisotropy in the power spectrum of the curvature perturbations of this model. To do so, we account for the classical background values of the three scalar fields that are generated on large (superhorizon) scales during inflation via a random walk sum, as the perturbation modes leave the horizon. Such an anisotropy is unavoidably generated, even starting from perfectly isotropic classical initial conditions. The expected level of anisotropy is related to the duration of inflation and to the amplitude of the squeezed bispectrum. If this amplitude is close to its current observational limit (so that one of the most interesting predictions of the model can be observed in the near future), we find that a level of statistical anisotropy $\gtrsim 3\%$ in the power spectrum is to be expected, if inflation lasted $\gtrsim 20-30$ e-folds more than the final $50-60$ e-folds required to generate the CMB modes. We also point out various similarities between solid inflation and models of inflation where a suitable coupling of the inflaton to a vector kinetic term $F^{2}$ gives frozen and scale invariant vector perturbations on superhorizon scales.
Power spectra and spectral indices of $k$-inflation: high-order corrections [Cross-Listing]
$k$-inflation represents the most general single-field inflation, in which the perturbations usually obey an equation of motion with a time-dependent sound speed. In this paper, we study the observational predictions of $k$-inflation using the high-order uniform asymptotic approximation method. We calculate explicitly the slow-roll expressions of the power spectra, spectral indices, and running of the spectral indices for both the scalar and tensor perturbations. These expressions are all written in terms of the Hubble and sound speed flow parameters. It is shown that the high-order corrections significantly improve the previous results obtained with the first-order approximation. Furthermore, we also check our results by comparing them with the ones obtained by other approximation methods, including the Green's function method, WKB approximation, and improved WKB approximation, and quantify the relative errors.
Feedback, scatter and structure in the core of the PKS 0745-191 galaxy cluster
We present Chandra X-ray Observatory observations of the core of the galaxy cluster PKS 0745-191. Its centre shows X-ray cavities caused by AGN feedback and cold fronts with an associated spiral structure. The cavity energetics imply they are powerful enough to compensate for cooling. Despite the evidence for AGN feedback, the Chandra and XMM-RGS X-ray spectra are consistent with a few hundred solar masses per year cooling out of the X-ray phase, sufficient to power the emission line nebula. The coolest X-ray emitting gas and brightest nebula emission is offset by around 5 kpc from the radio and X-ray nucleus. Although the cluster has a regular appearance, its core shows density, temperature and pressure deviations over the inner 100 kpc, likely associated with the cold fronts. After correcting for ellipticity and projection effects, we estimate density fluctuations of ~4 per cent, while temperature, pressure and entropy have variations of 10-12 per cent. We describe a new code, MBPROJ, able to accurately obtain thermodynamical cluster profiles, under the assumptions of hydrostatic equilibrium and spherical symmetry. The forward-fitting code compares model to observed profiles using Markov Chain Monte Carlo and is applicable to surveys, operating on 1000 or fewer counts. In PKS0745 a very low gravitational acceleration is preferred within 40 kpc radius from the core, indicating a lack of hydrostatic equilibrium, deviations from spherical symmetry or non-thermal sources of pressure.
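The hydrostatic-equilibrium assumption behind MBPROJ can be written out explicitly; the expression below is the standard textbook form (not quoted from the paper) for the gravitating mass in terms of the gas profiles:

```latex
% Hydrostatic equilibrium for a spherically symmetric ideal gas,
% dP/dr = -rho_gas G M(<r) / r^2 with P = n k_B T, gives
\[
  M(<r) \;=\; -\,\frac{k_{\rm B}\, T(r)\, r}{G\, \mu m_{\rm p}}
  \left( \frac{\mathrm{d}\ln n}{\mathrm{d}\ln r}
       + \frac{\mathrm{d}\ln T}{\mathrm{d}\ln r} \right),
\]
% so a very low inferred gravitational acceleration within 40 kpc
% corresponds to unusually flat density and temperature gradients there.
```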
Large-Scale Structure Observables in General Relativity [Replacement]
We review recent studies that rigorously define several key observables of the large-scale structure of the Universe in a general relativistic context. Specifically, we consider i) redshift perturbation of cosmic clock events; ii) distortion of cosmic rulers, including weak lensing shear and magnification; iii) observed number density of tracers of the large-scale structure. We provide covariant and gauge-invariant expressions of these observables. Our expressions are given for a linearly perturbed flat Friedmann-Robertson-Walker metric including scalar, vector, and tensor metric perturbations. While we restrict ourselves to linear order in perturbation theory, the approach can be straightforwardly generalized to higher order.
Scaling Laws for Dark Matter Halos in Late-Type and Dwarf Spheroidal Galaxies
Dark matter (DM) halos of Sc-Im galaxies satisfy scaling laws analogous to the fundamental plane relations for elliptical galaxies. Halos in less luminous galaxies have smaller core radii, higher central densities, and smaller central velocity dispersions. If dwarf spheroidal (dSph) and dwarf Magellanic irregular (dIm) galaxies lie on the extrapolations of these correlations, then we can estimate their baryon loss relative to that of brighter Sc-Im galaxies. We find that, if there had been no such enhanced baryon loss, then typical dSph and dIm galaxies would be brighter in absolute magnitude by 4 and 3.5 mag, respectively. Instead, these galaxies lost or retained as gas (in dIm galaxies) baryons that could have formed stars. Also, typical dSph and dIm galaxies have DM halos that are more massive than we thought, with velocity dispersions of about 30 km/s or circular-orbit rotation velocities of V_circ ~ 42 km/s. Comparison of DM and visible matter correlations confirms that, at V-band absolute magnitudes fainter than -18, dSph and dIm galaxies form a sequence of decreasing baryon-to-DM mass ratios in smaller dwarfs. We show explicitly that galaxy baryon content goes to (almost) zero at halo V_circ = 42 +- 4 km/s, in agreement with what we found from our estimate of baryon depletion. Our results suggest that there may be a large population of DM halos that are essentially dark and undiscovered. This helps to solve the problem that the fluctuation spectrum of cold DM predicts more dwarfs than we observe.
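The two quoted velocities are consistent under the usual isothermal-sphere conversion; a quick arithmetic check (assuming $V_{\rm circ} = \sqrt{2}\,\sigma$, which appears to be the convention used):

```latex
\[
  V_{\rm circ} \;=\; \sqrt{2}\,\sigma
  \;\approx\; 1.414 \times 30~\mathrm{km\,s^{-1}}
  \;\approx\; 42~\mathrm{km\,s^{-1}}.
\]
```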
CoMaLit - II. The scaling relation between mass and Sunyaev-Zel'dovich signal for Planck selected galaxy clusters
We discuss the scaling relation between mass and the integrated Compton parameter for a sample of galaxy clusters from the all-sky Planck Sunyaev-Zel'dovich catalogue. Masses were measured either with weak lensing, with caustic techniques, or by assuming hydrostatic equilibrium. Depending on the calibration sample, the slope of the $M_{500}$-$Y_{500}$ relation is 1.2-1.6, shallower than self-similar predictions, with an intrinsic scatter of 20+-10 per cent. The regression method employed accounts for intrinsic scatter in the mass measurements too. The absolute calibration of the relation is difficult to ascertain due to systematic differences of ~ 20-40 per cent in mass estimates reported by separate groups. We find that Planck cluster mass estimates suffer from a mass-dependent bias.
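For context, the self-similar slope that the measured 1.2-1.6 falls short of follows from the integrated Compton parameter tracing the gas thermal energy; a sketch (assuming $Y \propto M_{\rm gas} T$ with the virial temperature scaling $T \propto M^{2/3}$, the standard self-similar ingredients):

```latex
\[
  Y_{500} \;\propto\; M_{\rm gas}\, T
  \;\propto\; M_{500}\, M_{500}^{2/3}
  \;=\; M_{500}^{5/3},
\]
% i.e. a self-similar slope of 5/3 ~ 1.67, steeper than the measured 1.2-1.6.
```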
Comparing Masses in Literature (CoMaLit)-I. Bias and scatter in weak lensing and X-ray mass estimates of clusters
The first building block for using galaxy clusters in astrophysics and cosmology is an accurate determination of their mass, which can be estimated with weak lensing (WL) determinations or X-ray analyses assuming hydrostatic equilibrium (HE). By comparing the two mass proxies in well observed samples of rich clusters, we determined the intrinsic scatters, sigma_{WL}~15 per cent for WL masses and sigma_{HE}~25 per cent for HE masses. The certain assessment of the bias is hampered by differences as large as ~40 per cent in either WL or HE mass estimates reported by different groups. If the scatter in the mass proxy is not considered, the slope of any scaling relation `mass–observable’ is biased towards shallower values, whereas the intrinsic scatter of the scaling is over-estimated.
CLASH-VLT: Insights on the mass substructures in the Frontier Fields Cluster MACS J0416.1-2403 through accurate strong lens modeling
We present a detailed mass reconstruction and a novel study on the substructure properties in the core of the CLASH and Frontier Fields galaxy cluster MACS J0416.1-2403. We show and employ our extensive spectroscopic data set taken with the VIMOS instrument as part of our CLASH-VLT program, to confirm spectroscopically 10 strong lensing systems and to select a sample of 175 plausible cluster members to a limiting stellar mass of log(M_*/M_Sun) ~ 8.6. We reproduce the measured positions of 30 multiple images with a remarkable median offset of only 0.3" by means of a comprehensive strong lensing model comprised of 2 cluster dark-matter halos, represented by cored elliptical pseudo-isothermal mass distributions, and the cluster member components. The latter have total mass-to-light ratios increasing with the galaxy HST/WFC3 near-IR (F160W) luminosities. The measurement of the total enclosed mass within the Einstein radius is accurate to ~5%, including systematic uncertainties. We emphasize that the use of multiple-image systems with spectroscopic redshifts and knowledge of cluster membership based on extensive spectroscopic information is key to constructing robust high-resolution mass maps. We also produce magnification maps over the central area that is covered with HST observations. We investigate the galaxy contribution, both in terms of total and stellar mass, to the total mass budget of the cluster. When compared with the outcomes of cosmological $N$-body simulations, our results point to a lack of massive subhalos in the inner regions of simulated clusters with total masses similar to that of MACS J0416.1-2403. Our findings of the location and shape of the cluster dark-matter halo density profiles and on the cluster substructures provide intriguing tests of the assumed collisionless, cold nature of dark matter and of the role played by baryons in the process of structure formation.
How chameleons core dwarfs with cusps
The presence of a scalar field that couples nonminimally and universally to matter can enhance gravitational forces on cosmological scales while restoring general relativity in the Solar neighborhood. In the intermediate regime, kinematically inferred masses experience an additional radial dependence with respect to the underlying distribution of matter, which is caused by the increment of gravitational forces with increasing distance from the Milky Way center. The same effect can influence the internal kinematics of subhalos and cause cuspy matter distributions to appear core-like. Specializing to the chameleon model as a worked example, we demonstrate this effect by tracing the scalar field from the outskirts of the Milky Way halo to its interior, simultaneously fitting observed velocity dispersions of chemo-dynamically discriminated red giant populations in the Fornax and Sculptor dwarf spheroidals. Whereas in standard gravity these observations suggest that the matter distribution of the dwarfs is cored, we find that in the presence of a chameleon field the assumption of a cuspy Navarro-Frenk-White profile becomes perfectly compatible with the data. Importantly, chameleon models also predict the existence of slopes between two stellar subcomponents that in Newtonian gravity would be interpreted as a depletion of matter in the dwarf center. Hence, an observation of such an apparently pathological scenario may serve as a smoking gun for the presence of a chameleon field or a similar modification of gravity, independent of baryonic feedback effects. In general, measuring the dynamic mass profiles of the Milky Way dwarfs provides stronger constraints than those inferred from the screening scale of the Solar System since these are located at greater distances from the halo center.
Clustering-based Redshift Estimation: Comparison to Spectroscopic Redshifts
We investigate the potential and accuracy of clustering-based redshift estimation using the method proposed by Ménard et al. (2013). This technique enables the inference of redshift distributions from measurements of the spatial clustering of arbitrary sources, using a set of reference objects for which redshifts are known. We apply it to a sample of spectroscopic galaxies from the Sloan Digital Sky Survey and show that, after carefully controlling the sampling efficiency over the sky, we can estimate redshift distributions with high accuracy. Probing the full colour space of the SDSS galaxies, we show that we can recover the corresponding mean redshifts with an accuracy ranging from $\delta z = 0.001$ to $0.01$. We indicate that this mapping can be used to infer the redshift probability distribution of a single galaxy. We show how the lack of information on the galaxy bias limits the accuracy of the inference and show comparisons between clustering redshifts and photometric redshifts for this dataset. This analysis demonstrates, using real data, that clustering-based redshift inference provides a powerful data-driven technique to explore the redshift distribution of arbitrary datasets, without any prior knowledge on the spectral energy distribution of the sources.
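The core of the clustering-based estimate can be sketched in a few lines. The snippet below is a toy illustration, not the authors' pipeline: it assumes constant, known biases for both samples, and the cross-correlation amplitudes `w_ur` are made up for demonstration.

```python
import numpy as np

# Toy clustering-redshift sketch: the unknown sample's dN/dz is proportional
# to its angular cross-correlation amplitude with reference galaxies in each
# redshift slice, divided by the (here assumed constant) galaxy biases.
z = np.linspace(0.1, 0.9, 9)                          # reference slice centres
w_ur = np.array([0.002, 0.010, 0.035, 0.060,          # made-up measured
                 0.045, 0.020, 0.008, 0.003, 0.001])  # cross-correlations
b_u = b_r = 1.0  # assumed biases; unknown bias evolution limits the accuracy

dn_dz = w_ur / (b_u * b_r)       # shape of the redshift distribution
dn_dz /= np.trapz(dn_dz, z)      # normalise to unit integral

z_mean = np.trapz(z * dn_dz, z)  # recovered mean redshift of the sample
print(round(z_mean, 2))
```

In practice the reference sample is binned finely in redshift and the bias evolution of both samples must be modelled or marginalised over, which is exactly the limitation the abstract highlights.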
Reproducing the Kinematics of Damped Lyman-alpha Systems
We examine the kinematic structure of Damped Lyman-alpha Systems (DLAs) in a series of cosmological hydrodynamic simulations using the AREPO code. We are able to match the distribution of velocity widths of associated low ionisation metal absorbers substantially better than earlier work. Our simulations produce a population of DLAs dominated by halos with virial velocities around 70 km/s, consistent with a picture of relatively small, faint objects. In addition, we reproduce the observed correlation between velocity width and metallicity and the equivalent width distribution of SiII. Some discrepancies of moderate statistical significance remain; too many of our spectra show absorption concentrated at the edge of the profile and there are slight differences in the exact shape of the velocity width distribution. We show that the improvement over previous work is mostly due to our strong feedback from star formation and our detailed modelling of the metal ionisation state.
Building Late-Type Spiral Galaxies by In-Situ and Ex-Situ Star Formation
We analyze the formation and evolution of the stellar components in "Eris", a 120 pc-resolution cosmological hydrodynamic simulation of a late-type spiral galaxy. The simulation includes the effects of a uniform UV background, a delayed-radiative-cooling scheme for supernova feedback, and a star formation recipe based on a high gas density threshold. It allows a detailed study of the relative contributions of "in-situ" (within the main host) and "ex-situ" (within satellite galaxies) star formation to each major Galactic component in a close Milky Way analog. We investigate these two star-formation channels as a function of galactocentric distance, along different lines of sight above and along the disk plane, and as a function of cosmic time. We find that: 1) approximately 70 percent of today’s stars formed in-situ; 2) more than two thirds of the ex-situ stars formed within satellites after infall; 3) the majority of ex-situ stars are found today in the disk and in the bulge; 4) the stellar halo is dominated by ex-situ stars, whereas in-situ stars dominate the mass profile at distances < 5 kpc from the center at high latitudes; and 5) approximately 25% of the inner, r < 20 kpc, halo is composed of in-situ stars that have been displaced from their original birth sites during Eris’ early assembly history. 
https://wiki.brown.edu/confluence/display/CHEM/Wavelength | # Wavelength
| Stress Pattern | Number of Syllables | Primary Stress |
|---|---|---|
| WAVE length | 2 | first |

| Statement | Instructor's Question | Student's Question | Non-Chemistry Usage |
|---|---|---|---|
| The wavelength of light is inversely proportional to its energy or frequency. | Tell me quickly. What is the wavelength of visible red light? | How is wavelength related to energy? | My friend and I are usually on the same wavelength - we understand each other most of the time. |
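The statement and the instructor's question can be checked numerically; a minimal sketch (the ~430 THz figure for red light is a typical textbook value, not from this page):

```python
# Wavelength is inversely proportional to frequency (lambda = c/f) and to
# photon energy (lambda = h*c/E), so higher energy means shorter wavelength.
c = 2.998e8    # speed of light, m/s
h = 6.626e-34  # Planck constant, J*s

f_red = 4.3e14  # typical frequency of visible red light, Hz
lam = c / f_red       # wavelength in metres (~7e-7 m)
E = h * c / lam       # photon energy recovered from the wavelength

print(round(lam * 1e9))  # wavelength in nanometres
```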
https://web.ti.bfh.ch/~blk2/Events/LI2007/zypen.html | This is joint work with Maria Luisa Colasante, Universidad de los Andes,
Venezuela.
Let $X$ be a topological space. The topological closure of the diagonal $\Delta = \{(x,x) : x \in X\}$ is a symmetric relation on $X$. Our starting point is the well-known proposition that a topological space is $T_2$ if and only if $\mathrm{cl}(\Delta) = \Delta$. We investigate the closure of the diagonal on $T_1$ spaces and characterise those equivalence relations that arise as the closure of the diagonal of some $T_1$ space.
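The proposition invoked above has a short standard proof; the following sketch is supplied for context and is not part of the abstract itself:

```latex
\begin{itemize}
  \item[($\Rightarrow$)] If $X$ is $T_2$ and $(x,y)\notin\Delta$, choose disjoint open
        sets $U\ni x$ and $V\ni y$. Then $U\times V$ is a neighbourhood of $(x,y)$
        disjoint from $\Delta$, hence $(x,y)\notin \mathrm{cl}(\Delta)$.
  \item[($\Leftarrow$)] If $\mathrm{cl}(\Delta)=\Delta$ and $x\neq y$, then
        $(x,y)\notin\mathrm{cl}(\Delta)$, so some basic open box $U\times V$ around
        $(x,y)$ misses $\Delta$; $U$ and $V$ are then disjoint open sets
        separating $x$ and $y$.
\end{itemize}
```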
https://www.physicsforums.com/threads/horizontal-spring-oscillations.657734/ | # Horizontal spring oscillations
Hey everyone. I have 2 relatively basic questions about horizontal springs. I feel like the questions are actually very simple (it's just high-school physics) but I think I'm approaching them the wrong way. Any help would be greatly appreciated.
1. I'm supposed to find the spring constant when a 5.5 kg mass is vibrating at the end of a horizontal spring. It reaches a maximum speed of 7.2 m/s and has a maximum displacement of 0.23 m. Ignore friction.
2. I'm supposed to find the acceleration of a 4.97 kg mass when the displacement of the mass is 2.56 m to the left. It is oscillating on the end of a horizontal spring with a frequency of 0.467 Hz.
1.
I thought I should first find the acceleration, then the force (F=ma) and then solve for the spring constant (k=-F/x). To find the acceleration, I used a=V^2/(2d). However, that gave me almost 113 m/s^2. Surely that isn't correct... I feel like I should be using energy equations somewhere, but I'm not sure which ones.
2.
I tried a similar thing here, using a=2d/t^2. I assumed that I should divide the time by 2, because that's the time for a complete oscillation, and I'm only calculating half (2.56 m out and back to equilibrium). However, this gives me an acceleration of almost 94 m/s^2, which also doesn't feel right.
ehild
Homework Helper
> I thought I should first find the acceleration, then force (F=ma) and then solve for the spring constant (k=-F/x). To find the acceleration, I used a=V^2/(2d).

a=V^2/(2d) is a relation valid for motion with constant acceleration. The problem is about a vibrating body performing simple harmonic motion. The displacement is x = A sin(ωt), where A is the maximum displacement and ω is 2π times the frequency. You have certainly learnt the corresponding formulas for the velocity and the acceleration as well.

> I tried a similar thing here, using a=2d/t^2.

The same problem again: it is simple harmonic motion, so use the appropriate formulas.
ehild
Thanks, although I'm still somewhat confused. Part of that may be because we have not been taught this yet. Unfortunately, we're still expected to do the homework.
I see your equation, but I don't quite understand how it fits into my problems.
I looked up formulas for acceleration and found a=-xω^2. However, when I use that formula for number 2, I get 463 m/s^2. That's an even larger number than before.
I tried something different for number 1. I used E_t=(1/2)mv_max^2 and E_t=(1/2)kA^2 to get (1/2)mv_max^2=(1/2)kA^2. I rearranged to solve for k and got 5.4E3 N/m. Does that work?
Don't just use equations, understand why you are using them.
What you did here is saying that the potential energy of the spring at maximum displacement is equal to the energy of the body at maximum velocity. Why is that true?
Regarding the formulas you are using for #2
Since you need to find the acceleration, think about what laws you can use to find out the acceleration, and then figure out what is missing from the equation you will have for the acceleration in order to solve it, using what you were given.
A small hint - try to think how you can use the frequency of oscillation to find k.
Have fun.
ehild
Homework Helper
> I looked up formulas for acceleration and found a=-xω^2. However, when I use that formula for number 2, I get 463 m/s^2.
The equation a = -xω^2 is correct; it comes from the formula for the spring force, F = -kx = ma, and from the equation for the angular frequency, ω^2 = k/m. Here x = 2.56 m, the frequency is f = 0.467 1/s, and the angular frequency is ω = 2πf. Just plug in.
> I rearranged to solve for k and got 5.4E3 N/m. Does that work?

That is OK.
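To check the numbers in this thread, here is a quick sketch using the standard SHM formulas discussed above (the script itself is not from the thread):

```python
import math

# Question 1: energy equality (1/2) m v_max^2 = (1/2) k A^2  =>  k = m v_max^2 / A^2
m1, v_max, A = 5.5, 7.2, 0.23          # kg, m/s, m
k = m1 * v_max**2 / A**2
print(f"k = {k:.0f} N/m")              # k = 5390 N/m, i.e. ~5.4E3 N/m as found above

# Question 2: SHM acceleration a = -omega^2 x, with omega = 2*pi*f
x, f = 2.56, 0.467                     # m, Hz
omega = 2 * math.pi * f
a = -omega**2 * x
print(f"a = {a:.1f} m/s^2")            # a = -22.0 m/s^2, directed back toward equilibrium
```

The minus sign just records that the restoring acceleration points opposite to the displacement.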
https://www.groundai.com/project/the-puzzle-of-the-cluster-forming-core-mass-radius-relation-and-why-it-matters/ | # The puzzle of the cluster-forming core mass-radius relation and why it matters
Geneviève Parmentier and Pavel Kroupa
Argelander-Institut für Astronomie, Bonn Universität, Auf dem Hügel 71, D-53121 Bonn, Germany
Humboldt Fellow - E-mail: gparm@astro.uni-bonn.de
###### Abstract
We highlight how the mass-radius relation of cluster-forming cores, combined with an external tidal field, can influence the infant weight-loss and disruption likelihood of clusters at the end of their violent relaxation, namely, when their dynamical response to the expulsion of their residual star-forming gas is over. Specifically, building on the cluster $N$-body model grid of Baumgardt & Kroupa (2007), we study how the relation between the cluster-forming core mass and the fraction of stars staying bound to clusters at the end of violent relaxation is affected by the slope and normalization of the core mass-radius relation. Assuming a mass-independent star formation efficiency and gas-expulsion time-scale and a given external tidal field, it is found that constant surface density cores and constant radius cores have the potential to lead to the preferential removal of high- and low-mass clusters, respectively. In contrast, constant volume density cores result in mass-independent cluster infant weight-loss, as suggested by some observations. These trends result from how core volume density and core mass scale with each other. Infant weight-loss is quantified for cluster-forming cores of constant number density, constant surface density, or constant radius. Our modelling includes predictions about the evolution of high-mass cluster-forming cores, a regime not yet covered by the observations. We show how, for a given external tidal field, the core mass-radius diagram constitutes a straightforward diagnostic tool to assess whether the tidal field influences the fate of clusters after gas expulsion.
An overview of various issues directly affected by the nature of the core mass-radius relation is presented. In relation to the tidal field impact, these are the evolution of the cluster mass function at young ages, and our ability to reconstruct the star formation history of galaxies from their cluster age distributions. Independently of the tidal field impact, the slope and/or normalization of the cluster-forming core mass-radius relation also influences the mass-metallicity relation of old globular clusters predicted by self-enrichment models, and the duration of cluster violent relaxation.
Finally, we emphasize that observational mass-radius data-sets of dense gas regions must be handled with caution as they may be the imprint of the molecular tracer used to map them, rather than reflecting cluster formation conditions.
###### keywords:
stars: formation — galaxies: star clusters: general — galaxies: evolution — stars: kinematics and dynamics
## 1 Introduction
Modelling the early evolution of star cluster systems provides crucial insights into cluster formation physics through a comparison between predicted and observed correlations and distribution functions of individual cluster properties. Over the past 30 years, considerable efforts have been dedicated to modelling cluster violent relaxation, i.e. cluster evolution after residual star-forming gas expulsion (e.g. Tutukov, 1978; Hills, 1980; Mathieu, 1983; Lada, Margulis & Dearborne, 1984; Kroupa, Aarseth & Hurley, 2001; Geyer & Burkert, 2001; Goodwin & Bastian, 2006; Baumgardt & Kroupa, 2007; Proszkow & Adams, 2009). In particular, cluster $N$-body model grids (e.g. Baumgardt & Kroupa, 2007) make it possible to model entire star cluster systems while browsing the parameter space extensively. Initial conditions of star cluster systems constitute crucial ingredients of their time-evolution modelling, and the mass-radius relation of cluster-forming cores is therefore an issue at the forefront of the physics of both cluster formation and cluster evolution.
In an influential study of Galactic molecular clouds and of the density enhancements they contain, Larson (1981) finds that molecular clouds and their cores are in approximate virial equilibrium and have a near-constant surface density. These properties were then included by Harris & Pudritz (1994) in their model of the formation of old globular clusters, whose birth sites are identified as the dense cores of 'supergiant molecular clouds' at the early protogalactic epoch. Constant surface density cluster-forming cores show a strong mass-radius relation, $r_{\rm core} \propto m_{\rm core}^{1/2}$, which contrasts with the absence of a clear mass-radius relation for gas-free star clusters, regardless of their age (Zepf et al., 1999; Larsen, 2004; Scheepmaker et al., 2007). Kroupa (2005) therefore suggests that cluster-forming cores themselves have uncorrelated masses and radii. Such a hypothesis has a bearing on the time-evolution of the mass function of clusters from the embedded phase to the end of violent relaxation. Owing to their deeper potential well, massive cores undergo more adiabatic gas expulsion and, therefore, retain a greater fraction of their stars. Kroupa & Boily (2002) show that this can account for the formation of a turnover in the cluster mass function at young ages. Baumgardt et al. (2008) and Parmentier et al. (2008) build on that hypothesis and show that cluster-forming cores with constant radii indeed produce features (a flattening or a turnover) in the cluster mass function, provided that the star formation efficiency (SFE), assumed to be mass-independent, is not higher than 30 per cent. These studies mostly aim at explaining the prominent and universal turnover characterizing the mass function of old globular clusters (see Ashman & Zepf, 1998, and references therein).
The mass function of young star clusters in the present-day Universe is reported to be a featureless power law of spectral index of about $-2$ (e.g. Zhang & Fall, 1999; Lada & Lada, 2003), irrespective of the cluster age (say, 1, 10 or 100 Myr). This implies that cluster infant weight-loss (i.e. the gas-expulsion-driven cluster star-loss) is independent of the embedded-cluster mass. Recently, Fall, Krumholz & Matzner (2010) have estimated that the SFE required for a cluster-forming core to expel its gas via stellar feedback is mass-independent if cluster-forming cores have a constant surface density. Their model assumes that the gas expulsion time-scale in units of a core crossing time, $\tau_{\rm GExp}/\tau_{\rm cross}$, is constant. While preserving the shape of the cluster mass function at young ages, such a finding leaves unanswered the question of why young star clusters are deprived of a significant mass-radius relation if the mass-radius relation of their progenitors scales as $r_{\rm core} \propto m_{\rm core}^{1/2}$.
In this contribution we add one more piece to this intriguing puzzle. Previous studies (Kroupa & Boily, 2002; Parmentier et al., 2008; Fall, Krumholz & Matzner, 2010) have ignored the influence that an external tidal field may exert upon clusters experiencing violent relaxation. Most cluster stars venturing beyond the cluster tidal radius become unbound field stars. (Stars on highly eccentric orbits may experience transient passages beyond the tidal radius and re-integrate into the cluster thereafter; however, such stars are expected to be rare.) Therefore, as a cluster expands following gas expulsion, its infant weight-loss and its likelihood of disruption are partly governed by how deeply the embedded cluster sits within its limiting tidal radius, that is, how severe tidal overflow due to cluster expansion is. Goodwin (1997) performs $N$-body simulations highlighting this effect: infant weight-loss of otherwise identical model clusters is stronger closer to the Galactic centre by virtue of the stronger tidal field and hence smaller cluster tidal radius (his fig. 3). Baumgardt & Kroupa (2007) quantify this effect by the ratio $r_h/r_t$ of the half-mass radius to the tidal radius of the embedded cluster. If $r_h/r_t \ll 1$, the cluster has much space in which to expand following gas expulsion, the tidal field impact is low, and cluster infant weight-loss is solely driven by the SFE and the gas expulsion time-scale $\tau_{\rm GExp}/\tau_{\rm cross}$. In contrast, if $r_h/r_t$ is large, Baumgardt & Kroupa (2007) find that protoclusters are mostly disrupted. We refer to $r_h/r_t$ as the tidal field impact. As we shall see in Section 2, not only is it related to the external tidal field, it also depends on the embedded cluster mass and size and, therefore, on the cluster-forming core mass-radius relation. It is therefore crucial to quantify to what extent the mass-radius relation of cluster-forming cores, combined with an external tidal field, influences the early evolution of star cluster systems.
In this introductory paper, we focus our attention on the fraction of stars which remains bound to their parent clusters at the end of violent relaxation.
The outline of the paper is as follows. Section 2 investigates how different cluster-forming core mass-radius relations (constant radius, constant volume density and constant surface density) influence the cluster bound fraction as a function of core mass. We also show how the mass-radius diagram of cluster-forming cores can be used to estimate whether an external tidal field influences cluster violent relaxation. The extent to which mass-size data-sets of dense molecular gas regions can help us constrain the cluster-forming core mass-radius relation is the topic of Section 3. In Section 4, we comment on the importance of the core mass-radius relation for crucial issues such as cluster infant mortality/weight-loss as a function of cluster mass, and the reconstruction of the star formation history of galaxies based on their surviving clusters. We conclude in Section 5.
## 2 Core mass-radius relation and external tidal field
At the end of its violent relaxation (see Section 4.3), the mass of a star cluster is
$m_{\rm cl} = F_{\rm bound} \cdot {\rm SFE} \cdot m_{\rm core}\,,$  (1)
with $m_{\rm core}$ the mass of the cluster progenitor core and $F_{\rm bound}$ the fraction of stars remaining bound to the cluster at the end of violent relaxation. SFE is the 'local' star formation efficiency, namely, the mass fraction of gas turned into stars at the onset of gas expulsion.
Cluster infant weight-loss, $1 - F_{\rm bound}$, is a sensitive function of the SFE at the onset of gas expulsion and of the gas expulsion time-scale expressed in units of a cluster-forming core crossing-time, $\tau_{\rm GExp}/\tau_{\rm cross}$ (e.g. Hills, 1980; Mathieu, 1983; Lada, Margulis & Dearborne, 1984; Geyer & Burkert, 2001). The higher the SFE and the slower the gas expulsion, the higher the bound fraction $F_{\rm bound}({\rm SFE}, \tau_{\rm GExp}/\tau_{\rm cross})$.
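To make Eq. 1 concrete, here is a minimal numerical sketch; the core mass, SFE and bound fraction below are illustrative placeholders, not values taken from the paper:

```python
def final_cluster_mass(m_core, sfe, f_bound):
    """Eq. 1: stellar mass bound to the cluster at the end of violent relaxation."""
    return f_bound * sfe * m_core

# Placeholder numbers for illustration only:
m_core = 1.0e5            # core mass [Msun]
sfe = 0.33                # local star formation efficiency
f_bound = 0.3             # bound fraction after gas expulsion
m_cl = final_cluster_mass(m_core, sfe, f_bound)
print(f"m_cl = {m_cl:.2e} Msun")   # ~1e4 Msun: an order of magnitude below m_core
```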
Formally, the bound fraction depends on the 'effective' star formation efficiency (eSFE; Verschueren, 1990; Goodwin & Bastian, 2006; Goodwin, 2009) rather than on the local SFE, that is, $F_{\rm bound} = F_{\rm bound}({\rm eSFE})$. The eSFE incorporates how far from virial equilibrium the cluster is at the onset of gas expulsion. If the stars and gas are in virial equilibrium at the onset of gas expulsion as a result of, for instance, a several-crossing-time time-span between star formation and gas expulsion, the eSFE is simply the 'local' SFE. This is the approach we adopt in this contribution, since the assumption of virial equilibrium underpins the $N$-body model grid of Baumgardt & Kroupa (2007) (and the vast majority of other studies dedicated to cluster gas expulsion). However, if the dynamical state of the newly-formed stars at the onset of gas expulsion is 'cold', then the eSFE is higher than the local SFE, and the bound fraction becomes larger than shown in Figs. 3, 5, 6 and 7 (and the opposite if the stars are in a 'hot' dynamical state). Cluster models in which stars are not in virial equilibrium at gas expulsion have been investigated by Lada, Margulis & Dearborne (1984), Verschueren (1990) and Goodwin (1997). Note, however, that if the cold collapse at gas expulsion stems from stars forming out of a contracting pre-cluster core, the star-formation activity in the pre-cluster cloud core would need to be synchronised to occur within a time shorter than the core crossing-time, which appears unlikely (see Kroupa, 2008, for a discussion).
In addition to the SFE and gas expulsion time-scale, the bound fraction may also depend on the tidal field impact $r_h/r_t$, i.e.:
$F_{\rm bound} = F_{\rm bound}({\rm SFE},\, \tau_{\rm GExp}/\tau_{\rm cross},\, r_h/r_t)\,,$  (2)
an effect mapped by Baumgardt & Kroupa (2007) by means of cluster $N$-body modelling. A stronger tidal field impact lowers the bound fraction $F_{\rm bound}$. To provide a clear understanding of how the cluster-forming core mass-radius relation affects the bound fraction through the tidal field impact, in what follows each simulation is assigned a given local SFE, a given gas expulsion time-scale and a given external tidal field. That way, any variation of $F_{\rm bound}$ necessarily results from varying the core mass-radius relation.
### 2.1 Fiducial model: SFE=0.33, τGExp≃τcross
We adopt ${\rm SFE} = 0.33$ (Lada & Lada, 2003) and $\tau_{\rm GExp} \simeq \tau_{\rm cross}$ (Krumholz & Matzner, 2009, see also Section 4.1) as the local SFE and gas expulsion time-scale of our fiducial model. The parameter space in terms of SFE and $\tau_{\rm GExp}/\tau_{\rm cross}$ is explored more widely later in this section.
At this stage, we note that Baumgardt & Kroupa (2007) model cluster gas expulsion as an exponential decrease with time of the cluster gas mass (see their eq. 3):
$m_{\rm gas}(t) = m_{\rm gas}(0)\, e^{-t/\tau_M}\,.$  (3)
Therefore $\tau_M$, which they define as the gas-expulsion time-scale, is actually the e-folding time of the gas expulsion process: it corresponds to the time when the residual gas mass is $1/e$ of its initial value. In all our models, we define the gas-expulsion time-scale instead as the time-scale over which the cluster expels the entirety of its residual gas. Prior to using the $N$-body model grid of Baumgardt & Kroupa (2007), we therefore define $\tau_{\rm GExp} = 3\tau_M$ (i.e. we multiply the 'gas expulsion time-scale' of Baumgardt & Kroupa (2007) by a factor of 3), so that $t = \tau_{\rm GExp}$ corresponds to a residual gas mass fraction of $e^{-3} \simeq 0.05$, i.e. the cluster is practically devoid of gas.
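A minimal check of the e-folding bookkeeping in Eq. 3, assuming the factor-of-3 rescaling described above:

```python
import math

def gas_fraction(t, tau_M):
    """Eq. 3: residual gas mass fraction m_gas(t)/m_gas(0) for exponential expulsion."""
    return math.exp(-t / tau_M)

tau_M = 1.0                              # e-folding time (arbitrary units)
print(gas_fraction(tau_M, tau_M))        # ~0.368: 1/e of the gas is left at t = tau_M
print(gas_fraction(3 * tau_M, tau_M))    # ~0.0498: practically gas-free at t = 3 tau_M
```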
We subject all cluster-forming cores to the same external tidal field, that of an isothermal potential with a circular velocity $V_c = 220\,{\rm km\,s^{-1}}$, at either of two galactocentric distances $D_{\rm gal}$. This will allow us to assess how the strength of the external tidal field influences modelling outputs. The embedded-cluster tidal radius obeys:
$r_t = D_{\rm gal} \left( \frac{m_{\rm ecl}}{2\, m_{\rm gal}} \right)^{1/3}\,,$  (4)
where $m_{\rm ecl}$ is the embedded cluster stellar mass and $m_{\rm gal} = V_c^2 D_{\rm gal}/G$ is the host galaxy mass enclosed within $D_{\rm gal}$ (Binney & Tremaine, 1994), with $G$ the gravitational constant.
Tidal overflow sets in if the embedded-cluster radius is larger than the tidal radius. Substituting the embedded-cluster radius for $r_t$ in Eq. 4, it follows that this equates with a mean volume density smaller than:
$\rho_{\rm lim} = \frac{3\, V_c^2}{2 \pi G D_{\rm gal}^2}\,.$  (5)
At each of the two galactocentric distances considered, Eq. 5 gives a limiting mass density, along with the equivalent ${\rm H_2}$ number density.
Cluster gaseous progenitors are denser than these limits by several orders of magnitude. Figure 1 shows mass-radius diagrams of molecular cores mapped with different tracers. The top panel shows radii and masses of cores mapped with the CO emission line, some of them displaying signs of star formation (see Section 3 for a detailed discussion). In contrast, the middle and bottom panels present mass-radius diagrams of molecular cores selected for their star formation activity, then mapped with higher-density tracers: molecular line and dust continuum emission. We discuss these observations in greater detail in Section 3. For now, suffice it to say that, owing to their systematic star formation activity, the molecular cores of the middle and bottom panels constitute a better proxy for cluster gaseous progenitors than the cores of the top panel. Each panel also shows lines of constant volume number density (dashed black lines) and of constant surface density (dotted black lines) fitting the data. The mean number density increases from the CO cores (top panel) to the cores selected for their star formation activity (middle and bottom panels). Observed molecular cores are thus denser than the tidal limit defined by Eq. 5 by about 4 orders of magnitude, i.e. they are 'immune' to galactic tides.
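The claim that observed cores sit far above the tidal density limit can be checked by evaluating Eq. 5 and converting to a number density with the Eq. 9 normalization; $V_c = 220\,{\rm km\,s^{-1}}$ and $D_{\rm gal} = 10$ kpc below are illustrative values, not the distances adopted in the paper:

```python
import math

G = 6.674e-11        # m^3 kg^-1 s^-2
MSUN = 1.989e30      # kg
PC = 3.086e16        # m

def rho_lim_msun_pc3(v_c_kms, d_gal_kpc):
    """Eq. 5: limiting mean density below which a cluster overflows its tidal radius."""
    v_c = v_c_kms * 1e3
    d_gal = d_gal_kpc * 1e3 * PC
    rho = 3 * v_c**2 / (2 * math.pi * G * d_gal**2)   # kg m^-3
    return rho / (MSUN / PC**3)                        # Msun pc^-3

rho = rho_lim_msun_pc3(220.0, 10.0)
# Eq. 9, r[pc] = 1.5 (m/n)^(1/3), gives n[cm^-3] = 3.375 m/r^3, while the mean
# density is (3/(4 pi)) m/r^3 [Msun pc^-3]; hence n ~ 14 * rho[Msun pc^-3]:
n_lim = (3.375 / (3.0 / (4.0 * math.pi))) * rho
print(f"rho_lim ~ {rho:.3f} Msun/pc^3, n_lim ~ {n_lim:.1f} cm^-3")
# Cores with n ~ 1e4-1e5 cm^-3 are indeed ~4+ orders of magnitude denser.
```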
Following gas expulsion, however, gas loss, infant weight-loss and spatial expansion all decrease the density of clusters compared to that of their parent cores. The key point our simulations aim to address is: in terms of cluster-forming core mass-radius relations, which conditions lead to tidal overflow for the expanded clusters and, therefore, to an enhancement of infant weight-loss/mortality compared to what would be obtained for isolated clusters (i.e. no tidal field)?
We test six different mass-radius relations: constant core surface density, constant core volume density, and constant core radius, each with two different normalizations. We parametrize the mass-radius relation by its slope $\delta$ and normalization $\chi$:
$\frac{r_{\rm core}}{1\,{\rm pc}} = \chi \left( \frac{m_{\rm core}}{1\,M_\odot} \right)^{\delta}\,.$  (6)
Table 1 shows the adopted $\delta$ and $\chi$ values, along with the corresponding surface densities, volume densities and radii. Models with constant core surface density ($\delta = 1/2$), constant core volume density ($\delta = 1/3$) and constant core radius ($\delta = 0$) are labelled accordingly. For each slope $\delta$, we consider two normalizations $\chi$, referred to as the 'compact' and 'loose' models. The 'loose' constant-surface-density and constant-volume-density models are fits to the CO data with the slope imposed (dotted and dashed black lines in the top panel of Fig. 1). The 'compact' constant-surface-density and constant-volume-density relations describe the data of molecular cores selected for their star formation activity (dotted and dashed blue lines with filled circles in the middle and bottom panels, respectively); their densities are at the logarithmic mid-points between the data-fits of the middle and bottom panels of Fig. 1. The adopted constant-radius models are shown as the blue ('compact' model, middle panel) and black ('loose' model, top panel) solid lines in Fig. 1. In all forthcoming figures, the constant-surface-density, constant-volume-density and constant-radius models are depicted by dotted, dashed and solid lines, respectively.
The half-mass radius of the embedded star cluster is related to the radius of its parent core by
$r_h = \kappa\, r_{\rm core}\,.$  (7)
We adopt a fixed $\kappa$. Molecular cores have power-law density profiles $\rho \propto r^{-p}$, with density index $p$ (Müller et al., 2002; Beuther et al., 2002); $p = 2$ corresponds to a truncated isothermal sphere. Shallower molecular cores (i.e. smaller density indices $p$) lead to larger $\kappa$ values, since they have a greater fraction of their mass in their outer layers. Larger $\kappa$ values in turn lead to larger tidal field impacts and to cores more sensitive to tidal overflow.
Combining Eqs. 4, 6 and 7 provides the tidal field impact $r_h/r_t$:
$\frac{r_h}{r_t} = \frac{\kappa\, \chi\, {\rm SFE}^{-1/3}}{0.36} \left( \frac{m_{\rm core}}{1\,M_\odot} \right)^{\delta - 1/3} \left( \frac{D_{\rm gal}}{1\,{\rm kpc}} \right)^{-2/3} \left( \frac{V_c}{220\,{\rm km\,s^{-1}}} \right)^{2/3}\,,$  (8)
thereby highlighting the influence of both the slope and the normalization of the core mass-radius relation. Figure 2 depicts Eq. 8 for the various parameters ($\delta$, $\chi$) of Table 1 at the two galactocentric distances (top and bottom panels). For constant core radii ($\delta = 0$), more massive clusters sit more deeply within their tidal radii ($r_h/r_t \propto m_{\rm core}^{-1/3}$) and are thus more resilient to the external tidal field. Conversely, constant surface density cores ($\delta = 1/2$) are conducive to more massive clusters being more prone to tidal overflow ($r_h/r_t \propto m_{\rm core}^{1/6}$). In the case of constant volume density ($\delta = 1/3$), the tidal field impact is independent of the embedded-cluster mass. Equation 8 can actually be rewritten as a function of the core number density as the sole core parameter. The core radius, mass and number density are related by:
$r_{\rm core}[{\rm pc}] = 1.5 \left( \frac{m_{\rm core}[M_\odot]}{n_{\rm H_2}[{\rm cm^{-3}}]} \right)^{1/3}\,.$  (9)
Inserting Eq. 6 and Eq. 9 in Eq. 8, we obtain:
$\frac{r_h}{r_t} = 4.2\, \kappa\, {\rm SFE}^{-1/3}\, n_{\rm H_2}^{-1/3} \left( \frac{D_{\rm gal}}{1\,{\rm kpc}} \right)^{-2/3} \left( \frac{V_c}{220\,{\rm km\,s^{-1}}} \right)^{2/3}\,.$  (10)
Higher $r_h/r_t$ ratios, and thus greater vulnerability to the tidal field, result either from a lower core density $n_{\rm H_2}$ (see loose vs. compact models in the top panel of Fig. 2), or from a stronger tidal field, equivalent here to a smaller galactocentric distance (see the compact models in the top and bottom panels of Fig. 2).
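Equation 10 is straightforward to evaluate; the values below ($\kappa = 1$, SFE = 0.33, $n_{\rm H_2} = 10^4\,{\rm cm^{-3}}$, $D_{\rm gal} = 10$ and 1 kpc) are illustrative assumptions, not the paper's adopted parameters:

```python
def tidal_impact(kappa, sfe, n_h2, d_gal_kpc, v_c_kms=220.0):
    """Eq. 10: tidal field impact r_h/r_t of an embedded cluster."""
    return (4.2 * kappa * sfe**(-1.0 / 3.0) * n_h2**(-1.0 / 3.0)
            * d_gal_kpc**(-2.0 / 3.0) * (v_c_kms / 220.0)**(2.0 / 3.0))

# A dense core far out in the disc: weak tidal impact (r_h/r_t ~ 0.06).
r_far = tidal_impact(kappa=1.0, sfe=0.33, n_h2=1e4, d_gal_kpc=10.0)
print(r_far)
# The same core ten times closer in: impact larger by a factor 10^(2/3) ~ 4.6.
r_near = tidal_impact(kappa=1.0, sfe=0.33, n_h2=1e4, d_gal_kpc=1.0)
print(r_near)
```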
Building on Fig. 2 and on the Baumgardt & Kroupa (2007) $N$-body model grid of clusters, which provides the fraction of stars bound to a cluster at the end of violent relaxation as a function of SFE, $\tau_{\rm GExp}/\tau_{\rm cross}$ and $r_h/r_t$, we obtain in Fig. 3 the relation between $F_{\rm bound}$ and $m_{\rm core}$. Model parameters are identical to those in Fig. 2 and the gas-expulsion time-scale is $\tau_{\rm GExp} \simeq \tau_{\rm cross}$. Note the correlation between a low $r_h/r_t$ in Fig. 2 and a high $F_{\rm bound}$ in Fig. 3. Lower normalizations $\chi$ or larger galactocentric distances result in larger bound fractions through a smaller tidal field impact. The bound fraction as a function of mass is constant when $\delta = 1/3$, increases when $\delta = 0$ and decreases when $\delta = 1/2$. The latter illustrates, for the first time, a case where violent relaxation preferentially destroys high-mass clusters.
As noted earlier in this section, the compact mass-radius relations constitute a better proxy for cluster initial conditions than their loose counterparts, since they fit the data of molecular cores selected for their star formation activity (see Section 3 for details). From now on, we therefore focus most of our attention on the compact models. At the larger galactocentric distance, the core mass-radius relation and the tidal field impact are essentially two disconnected issues up to high core masses: regardless of the adopted compact model, the tidal field impact is weak and exposed clusters respond to the loss of their residual star-forming gas essentially as if there were no external tidal field (Baumgardt & Kroupa, 2007). As a result, the bound fraction is almost constant and independent of the adopted core mass-radius relation (top panel of Fig. 3). We remind the reader that the constancy of $F_{\rm bound}$ in this regime also stems from our hypotheses of constant SFE and constant $\tau_{\rm GExp}/\tau_{\rm cross}$ (see Eq. 2). At higher masses, however, the constant-surface-density model on the one hand, and the constant-volume-density and constant-radius models on the other hand, show very different behaviours, with $r_h/r_t$ ratios and bound fractions increasingly different as the core mass increases. A smaller galactocentric distance further increases the contrast between the models at high mass.
That the constant-surface-density and constant-radius models show such contrasting behaviours in the high-mass regime in a strong tidal field, i.e. close to the galactic centre, demonstrates the importance of distinguishing between these two mass-radius relations. Actually, many spiral galaxies show a transition from being predominantly atomic in their outer regions to being predominantly molecular at their centres (Wong & Blitz, 2002). One may thus expect that, closer to the galactic centre, the amount of dense molecular gas available for star formation is larger, implying that the cluster-forming core mass function is sampled up to a higher mass (a size-of-sample effect; see also Weidner, Kroupa & Larsen, 2004). This in turn would lead to the formation of more massive embedded clusters in stronger tidal-field environments, that is, in the regime where these two models lead to highly different final bound fractions of stars.
Figure 4 is the mass-radius diagram of the molecular cores from the middle and bottom panels of Fig. 1. We superimpose onto these data the adopted 'compact' mass-radius relations (dotted, dashed and solid blue lines). We also show lines of constant $r_h/r_t$ (black dash-dotted lines) for the two galactocentric distances (top and bottom panels). Note that the iso-$r_h/r_t$ lines are vertically shifted in the bottom panel compared to the top one, since $r_h/r_t \propto D_{\rm gal}^{-2/3}$ (see Eq. 8). Note also that iso-$r_h/r_t$ lines are lines of constant volume density, as shown by Eq. 10.
For the galactocentric distances considered, the vast majority of the observed molecular cores are tidal-field resilient, i.e. their $r_h/r_t$ is small. The observational data, however, occupy a limited mass range, with only a few cores at the high-mass end. They therefore fail to probe the high-mass regime where we predict the constant-surface-density and constant-radius models to respond very differently to gas expulsion through the tidal field.
Figure 4 allows us to understand Figs. 2 and 3 from another perspective. While the compact model has irrespective of core mass, a model is characterised by a decreasing mean volume density with increasing core mass. This equates with a greater tidal field impact for more massive cores (Eq. 10 and Fig. 4) and, thus, a lower final bound fraction of stars (Fig. 3). Conversely, cores with mass-independent radii ( model) increase their volume density with their mass, rendering less massive cores more prone to tidal overflow through a larger ratio.
One may argue that the reason why cluster infant weight-loss/mortality for the model is so prominently mass-dependent in Fig. 3 partly stems from the high adopted upper limit on the core mass range, i.e. . One should keep in mind, however, that such a large mass of dense molecular gas is needed to form a cluster of a few million solar masses, that is, with a mass comparable to that of the most massive old globular clusters and star clusters formed in galaxy mergers. For the model in Fig. 3, and lead to a cluster mass at the end of violent relaxation an order of magnitude lower than its progenitor core mass (see Eq. 1). As for the ‘compact’ model at , it prevents the formation of massive clusters since cores more massive than fail to form bound clusters (=0), and cores give rise to bound clusters in mass only ( and ).
### 2.2 Exploring a wider parameter space
Figure 5 explores more widely the parameter space (SFE, ). Its top and middle panels are the counterparts of Fig. 3, with longer gas expulsion time-scales: . For slower gas expulsion, clusters retain a higher fraction of their stars because they are better able to adjust to the new gas-depleted potential they sit in. Besides, slower gas expulsion is conducive to smaller spatial expansion of the exposed cluster (see fig. 3 in Geyer & Burkert, 2001), thus to a higher mean density at the end of violent relaxation and greater resilience to the external tidal field. This is another channel through which the bound fraction of stars is increased compared to quicker gas expulsion. Compared to Fig. 3, the bound fractions in top and middle panels of Fig. 5 are increased by factors . At a galactocentric distance (top panel), this strongly dampens any dependence of the bound fraction on the core mass for the model (). But it also results in cluster infant weight-loss hardly compatible with observations since infant weight-loss is reported to range from 70 % (Bastian et al., 2005) to 90 % (Lada & Lada, 2003), that is, . The bottom panel of Fig. 5 illustrates the vs. relation for a lower SFE, namely, and the same gas-expulsion time-scale. In that case, cluster survival () requires . We will further discuss the consequences of these plots for the cluster mass function in Section 4.1. Note that the models of the middle and bottom panels behave almost similarly, that is, the combination of the weaker tidal field and lower SFE in the bottom panel compared to the middle one leads to a model degeneracy.
Adiabatic gas expulsion () allows an analytic analysis which illuminates these results further. The top and middle panels of Fig. 6 depict the evolution of with the core mass for the compact and loose models, respectively, for kpc, and (i.e. the longest gas expulsion time-scale in the model grid of Baumgardt & Kroupa, 2007). Line- and symbol-codings are as in Figs. 3 and 5. For isolated clusters (), so long a gas expulsion time-scale when is conducive to , namely, no cluster infant weight-loss. In fact, in a tidal-field-free environment, adiabatic gas expulsion implies (Mathieu, 1983). If there is a strong enough tidal field, however, stars driven beyond the cluster tidal radius by its spatial expansion will get unbound and . In the framework of the adiabatic approximation, we can estimate what minimum number density the cluster progenitor core must have to prevent tidal overflow.
In the case of adiabatic gas expulsion, the radius multiplied by the mass is an adiabatic invariant: the cluster expands by a factor SFE after gas expulsion while its mass gets lower than the core mass by a factor SFE (Hills, 1980; Mathieu, 1983). The gas-free cluster density thus follows:
$$\rho_{\rm cl} \;=\; \frac{3}{4\pi}\,\frac{m_{\rm cl}}{r_{\rm cl}^{3}} \;=\; \frac{3}{4\pi}\,\frac{{\rm SFE}\cdot m_{\rm core}}{\left({\rm SFE}^{-1}\cdot r_{\rm core}\right)^{3}} \;=\; {\rm SFE}^{4}\cdot\rho_{\rm core}\,. \quad (11)$$
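Equation 11 is easy to check numerically. The sketch below (illustrative units and SFE value, not taken from the text) confirms that the post-expulsion cluster density is SFE⁴ times the core density:

```python
# Sketch of Eq. (11): under adiabatic gas expulsion the cluster keeps a
# fraction SFE of the core mass and expands by a factor 1/SFE, so its mean
# density drops to SFE**4 times the core density. Units are arbitrary.

def cluster_density(rho_core, sfe):
    """Mean density of the gas-free cluster after adiabatic gas expulsion."""
    m_cl = sfe * 1.0              # cluster mass, in units of the core mass
    r_cl = 1.0 / sfe              # cluster radius, in units of the core radius
    return rho_core * m_cl / r_cl**3   # = SFE**4 * rho_core

rho_core = 1.0e4   # illustrative core density (arbitrary units)
sfe = 0.25
rho_cl = cluster_density(rho_core, sfe)
print(rho_cl / rho_core)   # -> 0.00390625, i.e. SFE**4 = 1/256
```

For SFE = 0.25 the dilution factor is 1/256, which is why even a dense core can leave behind a tidally fragile cluster.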
Therefore, for SFE=0.25 and adiabatic gas expulsion, an expanded cluster at experiences tidally-driven mass-loss (, Eq.5) if its parent core has , or . In contrast, clusters formed out of cores with are not or little affected by tides.
This density limit is shown as the thick dash-dotted (black) line in the vs. diagram of Fig. 6 (bottom panel). In this diagram, the intersections of the density limit with lines of constant give the core mass above which constant surface density cores give rise to clusters significantly affected by tides. Similarly, the intersections with lines of constant give the core mass below which clusters formed out of constant radius cores experience tidal overflow. These intersections are highlighted by upside-down open triangles in the bottom panel of Fig. 6.
Let us consider the loose model . Its intersection in the diagram with the density limit renders . This matches the core mass regime over which drops significantly for that particular core mass-radius relation (open triangles in the middle panel of Fig. 6), as indicated by the vertical dotted double-head arrow. corresponds to for which we expect little or no infant weight-loss. The middle panel of Fig. 6 indeed shows that over that mass regime. In contrast, leads to , for which we expect expanded clusters to be severely affected by tides. The middle panel of Fig. 6 indeed predicts for . A stronger tidal field (i.e. closer to the galactic centre) would lower the mass-limit at which decreases. Similarly, the intersection between the constant radius loose model pc and yields , where is sharply increasing from 0 to 0.75, as indicated by the solid double-head arrow. As for the compact models, which better describe star cluster initial conditions, those are more resilient to tidally-driven mass-loss. The diagram confirms that must strongly decrease at for (see dotted upward arrows in top and bottom panels of Fig. 6).
We underline that the above-described effects take place even if all cluster-forming cores are located within a galaxy region over which the external tidal field does not vary markedly. In these models, the variations are solely driven by the core mass-radius relation.
Although unbound stars located beyond the cluster tidal radius linger around the cluster and may result in observed clusters with estimated radii larger than their tidal limit, observations of young clusters in the Small Magellanic Cloud show that this effect fades away by an age of about 30 Myr (Glatt et al., 2010).
## 3 What information can we extract from the observations of dense molecular gas regions?
In Fig. 1, we compile masses and radii of dense gas regions from the literature. The top panel shows results of mapping of dense gas regions in neighbouring giant molecular clouds (GMC). The middle and bottom panels are mass-radius diagrams of molecular cores selected for their star formation activity.
Three Galactic regions are encompassed by the top panel: Orion B (Aoyama et al., 2001), the GMC toward HII regions S35 and 37 (referred to as S35/37 in what follows) (Saito et al., 1999) and the Carinae GMC (Yonekura et al., 2005), at assumed distances of 400 pc, 1.8 kpc and 2.5 kpc, respectively. All observations were performed at the same resolution with the NANTEN telescope. The lower bound on resolved-core radii for each data-set as imposed by the NANTEN half-power beam width () is shown by dotted horizontal arrows. That CO cores in the Carinae GMC are larger than in Orion B is thus a purely resolution-driven effect. CO emission traces molecular gas with number densities a few - , and the volume density range covered by these observations is thus an imprint of the CO tracer.
The and loose models of Section 2 are fits to these CO data. As we cautioned in Section 2, cores do not systematically host signs of star formation activity and, therefore, loose models may not trace actual cluster formation conditions. In Fig. 1, CO cores associated to one or more sources identified as a Young Stellar Object (YSO) candidate by Saito et al. (1999), Aoyama et al. (2001) or Yonekura et al. (2005) are circled. Open squares depict Carinae cores showing other signs of active star formation (e.g. YSO candidate from the point-source catalog or bipolar outflows).
CO cores were also observed by Aoyama et al. (2001) and Yonekura et al. (2005) in HCO emission. HCO emission traces molecular gas at , i.e. an order of magnitude higher than CO. They note a tight correlation between the presence of high-density HCO clumps and star formation activity. This suggests that star formation requires number densities of order () and hence that only the densest, presumably most inner, regions of CO cores form stars. Loose and models have therefore too large a normalization to emulate realistic cluster formation conditions. That is why we insisted in Section 2 that those models should be considered only for illustrative purposes, e.g. to show how model outputs respond to variations of the normalization of the core mass-radius relation. A related point worth noting here is that SFEs measured over the whole volume of cores (e.g. Higuchi et al., 2009, their table 3) are global SFEs and, as such, are not indicative of how an embedded cluster dynamically responds to gas expulsion. It is the local SFE, namely the SFE estimated over the volume of gas forming the cluster, that matters when modelling cluster violent relaxation. Global SFEs averaged over whole cores constitute lower limits to their local counterparts. A low global SFE (say, 10 %) may be misleading in prompting us to conclude that a cluster will not survive its violent relaxation, even though the local SFE may be high enough for the cluster to retain a fraction of its stars. We will come back to this point in a forthcoming paper (Parmentier, in prep.).
To better constrain cluster formation conditions, we gather in the middle and bottom panels of Fig. 1 masses and radii of dense molecular cores selected for their star formation activity (either sources or water masers). Mapping of star-forming cores in the CS J emission line has been performed by Shirley et al. (2003, columns 3 and 5 of their table 5; filled squares in the bottom panel of Fig. 1). Mapping of star-forming cores in dust-continuum emission has been performed by Faundez et al. (2004, their table 1), Fontani et al. (2005, radii and masses from their tables A.5 and A.6, respectively) and Müller et al. (2002, columns 2 and 3 of their table 4). They are depicted as the (blue) -symbols and asterisks and (black) open circles in the middle and bottom panels of Fig. 1. The radius of cores is defined as that of the contour at half-maximum of the CS or dust-continuum emission. The core mass is the mass enclosed within that radius. These cores have (volume and surface) densities significantly higher than those inferred from the data, as indicated by the lines of constant and constant in middle and bottom panels. Müller et al. (2002) provide an alternative definition of the masses and radii of their surveyed cores (columns 4 and 5 of their table 4), which we show as the (black) filled circles in the middle panel of Fig. 1. This sequence neatly defines a line of constant volume density, with a mean number density .
It is important to realize that this result stems from how core masses and radii are estimated, however. From the radial density profiles of the star-forming regions they observe, Müller et al. (2002) obtain the radius where . Their core mass is the gas mass enclosed within that radius. Defining core masses and radii that way necessarily results in a sequence of constant volume density. As such, this core mass-radius relation may constitute a measurement-imprint rather than a genuine imprint of the cluster-formation physics.
These various examples show that to infer the mass-radius relation of cluster-forming cores observationally is not a straightforward task. Results heavily depend on the tracer and/or the method used to map them. In Sections 2 and 4 realistic cluster initial conditions are described by mass-radius relations representative of the dense molecular cores selected for their star formation activity. We refer to them as the ‘compact’ , and models. They are shown as the (blue) dotted, solid and dashed lines with filled-circles in the middle and bottom panels of Fig. 1. Their constant surface density, radius and volume number density are , pc and , respectively. These volume and surface densities are at the logarithmic midpoints of the fits to the data in Fig. 1 middle and bottom panels (black dashed and dotted lines).
We emphasize that our core mass-radius relations (Eq. 6 and Table 1) relate the total mass of cores to their outer radius. In that sense, our relations are not directly comparable to the dust-continuum and CS emission data of Shirley et al. (2003), Faundez et al. (2004) and Fontani et al. (2005), who define the core as the region enclosed within the FWHM contour. Assuming an isothermal sphere density profile for the cores, the volume and surface densities within the half-mass radius are 4 and 2 times, respectively, higher than the volume and surface densities averaged over the whole core (since for an isothermal sphere). That is, the mean volume and surface densities within the half-mass radius of our compact and models are fully comparable to the mean densities within the FWHM contour of the data of Fig. 1 bottom panel. Note also that the adopted number density is not significantly different from the number density characterizing HCO-traced molecular gas () which is closely associated to star formation activity in cores (Aoyama et al., 2001; Yonekura et al., 2005).
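The factors of 4 and 2 quoted above follow directly from the mass profile of a singular isothermal sphere, for which density goes as r⁻², enclosed mass grows linearly with radius, and the half-mass radius is therefore half the outer radius. A quick sketch (illustrative units; the r⁻² profile is the standard isothermal form implied by the text):

```python
# For a singular isothermal sphere, rho ∝ r**-2 implies M(<r) ∝ r, so the
# half-mass radius is R/2. Compare mean densities inside r_h with the means
# over the whole core (constant prefactors 3/(4*pi) and 1/pi cancel in the
# ratios, so they are omitted).

R, M = 1.0, 1.0          # total radius and mass (arbitrary units)
r_h = 0.5 * R            # half-mass radius, since M(<r) ∝ r

mean_rho_core = M / R**3             # mean volume density over the whole core
mean_rho_half = (M / 2) / r_h**3     # mean volume density inside r_h
print(mean_rho_half / mean_rho_core)     # -> 4.0

mean_sigma_core = M / R**2           # mean surface density over the whole core
mean_sigma_half = (M / 2) / r_h**2   # mean surface density inside r_h
print(mean_sigma_half / mean_sigma_core)  # -> 2.0
```

This recovers the quoted factors: the mean volume density inside the half-mass radius is 4 times, and the mean surface density 2 times, the core-wide averages.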
Physically, a mean density of --) for cluster formation may result from the associated efficient decay of turbulence, leading such dense gas cores to undergo gravitational collapse and form star clusters (Klessen, 2003). We also note that leads to , implying that all cluster stars due to become unbound owing to gas expulsion have crossed the tidal radius boundary by an age of at most 15 Myr (see fig. 4 in Parmentier, 2009, see also Section 4.3).
## 4 From cluster early evolution to galaxy star formation histories: consequences
Section 2 shows that the combination of the cluster-forming core mass-radius relation with an external tidal field can contribute to determining how much infant weight-loss clusters experience and whether infant weight-loss is mass-independent or not. In this section, we survey a few topics which are directly influenced by the core mass-radius relation, either in relation to the tidal field impact (Sections 4.1 and 4.2), or independently of it (Sections 4.3 and 4.4).
### 4.1 The shape of the young cluster mass function
Most observational evidence gathered so far shows that the shape of the post-violent relaxation cluster mass function mirrors that of the embedded cluster mass function (Kennicutt et al., 1989; McKee & Williams, 1997; Lada & Lada, 2003; Zhang & Fall, 1999; Oey et al., 2004; Dowell et al., 2008, but see Anders et al. (2007) for the case of a bell-shaped young cluster luminosity function). That is, cluster infant weight-loss appears to be mass-independent. As Figs. 3 and 5 show, under the assumption of constant, hence mass-independent, SFE and gas expulsion time-scale , cluster-forming cores characterised by a constant volume density () constitute the most robust way of achieving mass-independent cluster infant weight-loss. In contrast, in the case of constant surface density, clusters formed out of massive cores may be preferentially destroyed (see Fig. 3), and the shapes of the post-violent-relaxation cluster mass function and core mass function may differ substantially (see fig. 4 in Parmentier, 2010).
Our result is at odds with that derived by Fall, Krumholz & Matzner (2010), according to which mass-independent infant weight-loss requires near-constant surface density cores (). In the case of constant volume density cores, they find that the needed to clean the cluster of its residual star-forming gas is an increasing function of the core mass. This is conducive to less-massive clusters experiencing greater infant weight-loss and, thus, to a cluster mass function shallower than the embedded-cluster mass function. Our result and theirs stem from two utterly different approaches, however. Our model rests on how the tidal field impact varies with the core mass , under the assumptions of constant SFE and constant gas expulsion time-scale . Their model rests on the amount of stellar feedback required to clear an embedded cluster of its residual gas, neglecting the tidal field impact and assuming constant .
To illustrate that both approaches are not irreconcilable, let us first consider the case of energy-driven feedback of Fall, Krumholz & Matzner (2010). The rate at which massive stars deposit energy in the cluster-forming core gas is proportional to the core stellar mass, that is, with a proportionality coefficient. The energy input accumulated over the gas expulsion time-scale is thus . Fall, Krumholz & Matzner (2010) derive the core SFE by equating the total energy input to the critical value needed to expel the intra-cluster gas, that is, the gas binding energy . Introducing the core crossing-time , it thus follows:
$$E_{\rm tot} \;=\; k_{E}\cdot {\rm SFE}\cdot m_{\rm core}\cdot \frac{\tau_{\rm GExp}}{\tau_{\rm cross}}\cdot \tau_{\rm cross} \;=\; E_{\rm crit} \;=\; \frac{G\,(1-{\rm SFE})\,m_{\rm core}^{2}}{r_{\rm core}}\,. \quad (12)$$
Since $\tau_{\rm cross} = k_{\tau}\left(r_{\rm core}^{3}/(G\,m_{\rm core})\right)^{1/2}$, with $k_{\tau}$ a unit-dependent proportionality constant:
$$\left(k_{E}\,k_{\tau}\,G^{-3/2}\right)\cdot \frac{\rm SFE}{1-{\rm SFE}}\cdot \frac{\tau_{\rm GExp}}{\tau_{\rm cross}} \;=\; m_{\rm core}^{3/2}\cdot r_{\rm core}^{-5/2}\,. \quad (13)$$
Finally, introducing the core mass-radius relation (Eq. 6):
$$\left(\chi^{5/2}\,k_{E}\,k_{\tau}\,G^{-3/2}\right)\cdot \frac{\rm SFE}{1-{\rm SFE}}\cdot \frac{\tau_{\rm GExp}}{\tau_{\rm cross}} \;=\; m_{\rm core}^{(3-5\delta)/2}\,. \quad (14)$$
Neglecting the coefficient which matters little as long as , we see that, depending on the slope of the core mass-radius relation, the product increases with the core mass as (), (), and (). The greater dependence of on the core mass as decreases stems from the depth of the core potential well being itself a steeper function of the core mass for shallower core mass-radius relations.
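Assuming a core mass-radius relation of the form r_core ∝ m_core^δ, with δ = 0, 1/3 and 1/2 for the constant radius, constant volume density and constant surface density models respectively (the standard values implied by the text), the mass exponent in Eq. 14 can be tabulated with a short sketch:

```python
# Exponent (3 - 5*delta)/2 from Eq. (14): the power of the core mass with
# which the product SFE/(1-SFE) * tau_GExp/tau_cross must grow, for the
# three mass-radius relations r_core ∝ m_core**delta discussed in the text.

def feedback_exponent(delta):
    return (3.0 - 5.0 * delta) / 2.0

for name, delta in [("constant radius", 0.0),
                    ("constant volume density", 1.0 / 3.0),
                    ("constant surface density", 0.5)]:
    print(f"{name:25s} delta = {delta:.3f}  exponent = {feedback_exponent(delta):.3f}")
# constant radius          -> 3/2
# constant volume density  -> 2/3
# constant surface density -> 1/4
```

The 2/3 exponent for constant volume density cores is the one that reappears in Eq. 18 below; constant surface density cores give the weakest mass dependence (1/4) and constant radius cores the strongest (3/2).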
Equation 14 can now be applied to two limiting cases: either a mass-independent SFE, or a mass-independent gas expulsion time-scale in units of a core crossing-time. A constant SFE is the approach adopted by Baumgardt et al. (2008), which we come back to below.
If, on the other hand, is core-mass-independent, Eq. 14 leads to
$${\rm SFE} \;\propto\; m_{\rm core}^{(3-5\delta)/2} \quad (15)$$
and we have recovered eq. (1a) of Fall, Krumholz & Matzner (2010). Their result suggests that mass-independent cluster infant weight-loss demands . That is, compared to cores with constant volume density and constant radius, constant surface density cores introduce the smallest mass-dependence for cluster-infant weight-loss. Yet, Eq. 2 shows that the bound fraction of stars at the end of violent relaxation does not depend on SFE and only. It also depends on the tidal field impact which, as we demonstrate in Section 2, can introduce a strong core-mass-dependence for . A mass-independent tidal field impact requires constant volume density cores, but those are characterised by a mass-dependent SFE to expel residual star-forming gas: (Eq. 15). In that case and for constant gas expulsion time-scale (as assumed in Eq. 15), is a sharply increasing function of the SFE. Figure 1 in Baumgardt & Kroupa (2007) shows that when and when , with a threshold value dependent on and . For instance, explosive gas expulsion () and no tidal field impact () renders . The transition from to as the SFE increases beyond the threshold is conducive to the formation of features in the cluster mass function (flattening and turnover), in conflict with most observations of young star clusters in the present-day Universe.
As a brief summary before heading further: under the assumption , results in a mass-independent SFE (Eq. 15), but a mass-dependent tidal field impact (Eq. 8). Conversely, results in a mass-independent tidal field impact , but a mass-dependent SFE. These modelling results are to be contrasted with the observations of power-law mass functions for young star clusters which demand that all 3 parameters – SFE, and – weakly depend on the core mass to ensure that the bound fraction does not depend significantly on (Eq. 2).
The approach adopted by Baumgardt et al. (2008) can help us solve this intriguing conundrum. Instead of assuming a mass-independent in Eq. 14, Baumgardt et al. (2008) adopt a mass-independent SFE. Equation 14 thus becomes:
$$\frac{\tau_{\rm GExp}}{\tau_{\rm cross}} \;\propto\; m_{\rm core}^{(3-5\delta)/2}\,. \quad (16)$$
While in Eq. 15, the larger energy-input required to clear the gas out of more massive cores arises from a higher SFE, in Eq. 16, it is obtained by integrating the energy input over a longer gas expulsion time-scale . Results obtained for based on Eq. 16 are at first glance similar to those obtained for the SFE based on Eq. 15. A mass-independent gas expulsion time-scale requires , which leads to mass-dependent tidal field impact. Constant volume density cores (), needed to reproduce a mass-independent tidal field impact, are conducive to mass-dependent . There is a major difference with the SFE-varying approach of Eq. 15, however. Whether the mass-varying of Eq. 16 induces a mass-dependent depends very much on the range of involved. Actually, the N-body simulations of Baumgardt & Kroupa (2007) show that for , the bound fraction of stars stays about constant. That is, it is possible to obtain even though is an increasing function of the core mass, as long as .
To assess this issue in more detail, let us derive the normalizing factor in Eq. 16. Building on models of deposition of stellar feedback energy, Baumgardt et al. (2008) derive the gas expulsion time-scale (in units of Myr) as a function of the core half-mass radius, core mass and SFE (their eq. 14 which we reproduce below for the sake of clarity):
$$\tau_{\rm GExp} \;=\; 7.1\times 10^{-8}\;\frac{1-{\rm SFE}}{\rm SFE}\;\frac{m_{\rm core}}{M_{\odot}}\;\left(\frac{r_{h}}{\rm pc}\right)^{-1}\;{\rm Myr}\,. \quad (17)$$
Equation 17 can be combined with the core crossing-time and with a core mass-radius relation to infer as a function of the sole core mass . Using eq. 6 in Baumgardt & Kroupa (2007) for the core crossing-time and the core mass-radius relation of our compact model (Eq. 6 with and ), we obtain:
$$\frac{\tau_{\rm GExp}}{\tau_{\rm cross}} \;=\; 3.4\times 10^{-5}\;\frac{1-{\rm SFE}}{\rm SFE}\;\left(\frac{m_{\rm core}}{M_{\odot}}\right)^{2/3}\,. \quad (18)$$
This equation is valid for constant volume density cores with the normalization .
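As an illustration, Eq. 18 can be evaluated directly. The SFE and core masses below are arbitrary choices made for the sketch, not values quoted in the text:

```python
# Sketch evaluating Eq. (18) for constant volume density cores:
# tau_GExp / tau_cross = 3.4e-5 * (1-SFE)/SFE * (m_core / M_sun)**(2/3).
# SFE and the core masses are illustrative assumptions.

def gas_expulsion_ratio(m_core_msun, sfe):
    return 3.4e-5 * (1.0 - sfe) / sfe * m_core_msun ** (2.0 / 3.0)

for m in (1e3, 1e5, 1e7):
    print(f"m_core = {m:8.0e} Msun -> tau_GExp/tau_cross = "
          f"{gas_expulsion_ratio(m, sfe=1.0 / 3.0):.4f}")
```

The 2/3 power means the ratio grows by a factor of about 21.5 per two decades of core mass, so only the most massive cores approach the slow (adiabatic-like) gas expulsion regime.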
Figure 7 presents the bound fraction as a function of the gas expulsion time-scale (bottom x-axis, based on the N-body model grid of Baumgardt & Kroupa, 2007) and of the core mass (top x-axis, based on Eq. 18). The adopted tidal field impact is weak, namely, , as we find for the compact model in Fig. 2. One can see that SFE of - leads to a constant bound fraction up to , and to an increase by a factor of over the high mass range
https://www.shaalaa.com/question-bank-solutions/the-angles-depression-top-bottom-tower-seen-top-60-sqrt-3-m-high-cliff-are-45-60-respectively-find-height-tower-heights-distances_44103 | # The Angles of Depression of the Top and Bottom of a Tower as Seen from the Top of a 60 Sqrt(3) M High Cliff Are 45° and 60° Respectively. Find the Height of the Tower. - Mathematics
The angles of depression of the top and bottom of a tower as seen from the top of a 60 sqrt(3) m high cliff are 45° and 60° respectively. Find the height of the tower.
#### Solution
Let AD be the tower and BC be the cliff.
We have,
BC = 60 sqrt(3) , ∠ CDE = 45° and ∠BAC = 60°
⇒ BE = AD = h
⇒ CE = BC - BE= 60 sqrt(3) - h
In ΔCDE,
tan 45° = (CE)/(DE)
⇒ 1 = (60 sqrt(3) -h)/(DE)
⇒ DE = 60 sqrt(3) - h
⇒ AB = DE = 60 sqrt(3) - h ............(1)
Now, in ΔABC
tan 60° = (BC)/(AB)
⇒ sqrt(3)= (60 sqrt(3) )/ (60 sqrt(3) -h) [ Using (1)]
⇒ 180 - h sqrt(3) = 60 sqrt(3)
⇒ h sqrt(3) = 180- 60 sqrt(3)
⇒ h = (180 - 60 sqrt(3))/sqrt(3) xx sqrt(3)/sqrt(3)
⇒ h = ( 180 sqrt(3)-180) /3
⇒ h = 180(sqrt(3) - 1)/3
∴ h = 60 ( sqrt(3)-1)
= 60 (1.732 -1)
= 60 (0.732)
∴ h = 43.92 m
So, the height of the tower is 43. 92 m.
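The result can be double-checked numerically from the two tangent relations used in the solution:

```python
# Verify the tower height: cliff BC = 60*sqrt(3) m, angle of depression
# 60 deg to the tower's base gives the horizontal distance DE, and the
# 45 deg angle to the tower's top then gives the height h = BC - DE*tan(45).
import math

cliff = 60 * math.sqrt(3)                          # BC
distance = cliff / math.tan(math.radians(60))      # AB = DE, from tan 60 = BC/AB
h = cliff - distance * math.tan(math.radians(45))  # tower height AD

print(round(h, 2))                         # -> 43.92
print(round(60 * (math.sqrt(3) - 1), 2))   # -> 43.92, matching the closed form
```

Both the direct trigonometric computation and the closed form 60(√3 − 1) give 43.92 m.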
Concept: Heights and Distances
#### APPEARS IN
RS Aggarwal Secondary School Class 10 Maths
Chapter 14 Height and Distance
Exercises | Q 22
https://asmedigitalcollection.asme.org/vibrationacoustics/article/128/4/542/469462/Erratum-Scaling-Laws-for-Ultra-Short-Hydrostatic | • 1
The exponents of the clearance-to-radius term $(C/R)$ in the scaling law for the damping ratio $\zeta$ in Eqs. (19) and (20) were inadvertently switched between these two equations. Equations (19) and (20) should read:
$$\zeta \;\propto\; \left(\frac{C}{R}\right)^{-3}\left(\frac{\Delta p}{p_o}\right)^{-1/2}\left(\frac{\rho_o}{\rho_d}\right)^{-1/2}\Lambda\,Re_C\,\frac{\mu L^{2}}{\rho_d C^{3}(\Omega R)} \quad (19)$$
$$\zeta \;\propto\; \left(\frac{L}{D}\right)^{2}\left(\frac{C}{R}\right)^{-1}\left(\frac{\Delta p}{p_o}\right)^{-1/2}\left(\frac{\rho_o}{\rho_d}\right)^{1/2}\Lambda^{1/2}\,Re_C^{-1/2}. \quad (20)$$
• 2
This correction also pertains to Table 1, where the second term in the last row, with the damping ratio $\zeta$, should read $-1$.
• 3
Consistently, the second sentence of the fifth paragraph on p. 259, in the section “Design Implications,” should read: “Since the damping ratio $\zeta$ scales with $(L/D)^{2}$, $(C/R)^{-1}$, and …”.
https://www.cliffsnotes.com/study-guides/differential-equations/second-order-equations/variation-of-parameters | Variation of Parameters
For the differential equation
the method of undetermined coefficients works only when the coefficients ab, and c are constants and the right‐hand term dx) is of a special form. If these restrictions do not apply to a given nonhomogeneous linear differential equation, then a more powerful method of determining a particular solution is needed: the method known as variation of parameters.
The first step is to obtain the general solution of the corresponding homogeneous equation, which will have the form

$$y_h = c_1 y_1 + c_2 y_2$$

where y1 and y2 are known functions. The next step is to vary the parameters; that is, to replace the constants c1 and c2 by (as yet unknown) functions v1(x) and v2(x) to obtain the form of a particular solution y of the given nonhomogeneous equation:

$$y = v_1(x)\,y_1 + v_2(x)\,y_2$$
The goal is to determine these functions v1 and v2. Then, since the functions y1 and y2 are already known, the expression above for y yields a particular solution of the nonhomogeneous equation. Combining y with y_h then gives the general solution of the nonhomogeneous differential equation, as guaranteed by Theorem B.
Since there are two unknowns to be determined, v1 and v2, two equations or conditions are required to obtain a solution. One of these conditions will naturally be satisfying the given differential equation. But another condition will be imposed first. Since y will be substituted into equation (*), its derivatives must be evaluated. The first derivative of y is

$$y' = (v_1 y_1' + v_2 y_2') + (v_1' y_1 + v_2' y_2)$$
Now, to simplify the rest of the process—and to produce the first condition on v1 and v2—set

$$v_1' y_1 + v_2' y_2 = 0$$
This will always be the first condition in determining v1 and v2; the second condition will be the satisfaction of the given differential equation (*).
Example 1: Give the general solution of the differential equation y″ + y = tan x.
Since the nonhomogeneous right-hand term, d = tan x, is not of the special form the method of undetermined coefficients can handle, variation of parameters is required. The first step is to obtain the general solution of the corresponding homogeneous equation, y″ + y = 0. The auxiliary polynomial equation is $m^2 + 1 = 0$, whose roots are the distinct conjugate complex numbers m = ±i = 0 ± 1i. The general solution of the homogeneous equation is therefore

$$y_h = c_1 \cos x + c_2 \sin x$$
Now, vary the parameters c1 and c2 to obtain

$$y = v_1(x)\cos x + v_2(x)\sin x$$
Differentiation yields

$$y' = (-v_1\sin x + v_2\cos x) + (v_1'\cos x + v_2'\sin x)$$
Next, remember the first condition to be imposed on v1 and v2:

$$v_1' y_1 + v_2' y_2 = 0$$

that is,

$$v_1'\cos x + v_2'\sin x = 0$$
This reduces the expression for y′ to

$$y' = -v_1\sin x + v_2\cos x$$

so, then,

$$y'' = (-v_1\cos x - v_2\sin x) + (-v_1'\sin x + v_2'\cos x)$$
Substitution into the given nonhomogeneous equation y″ + y = tan x yields

$$-v_1'\sin x + v_2'\cos x = \tan x$$
Therefore, the two conditions on v1 and v2 are

$$v_1'\cos x + v_2'\sin x = 0 \qquad (1)$$

$$-v_1'\sin x + v_2'\cos x = \tan x \qquad (2)$$
To solve these two equations for v1′ and v2′, first multiply the first equation by sin x; then multiply the second equation by cos x:

$$v_1'\sin x\cos x + v_2'\sin^2 x = 0$$

$$-v_1'\sin x\cos x + v_2'\cos^2 x = \sin x$$

Adding these equations yields $v_2' = \sin x$. Substituting $v_2' = \sin x$ back into equation (1) [or equation (2)] then gives

$$v_1' = -\frac{\sin^2 x}{\cos x} = \cos x - \sec x$$
Now, integrate to find v1 and v2 (and ignore the constant of integration in each case):

$$v_1 = \int(\cos x - \sec x)\,dx = \sin x - \ln|\sec x + \tan x|$$

and

$$v_2 = \int \sin x\,dx = -\cos x$$
Therefore, a particular solution of the given nonhomogeneous differential equation is

$$y = v_1 y_1 + v_2 y_2 = (\sin x - \ln|\sec x + \tan x|)\cos x + (-\cos x)\sin x = -\cos x\,\ln|\sec x + \tan x|$$
Combining this with the general solution of the corresponding homogeneous equation gives the general solution of the nonhomogeneous equation:

$$y = c_1\cos x + c_2\sin x - \cos x\,\ln|\sec x + \tan x|$$
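The particular solution found in Example 1 can be spot-checked with sympy by evaluating the residual y″ + y − tan x at an arbitrary point (a numerical check rather than a full symbolic proof):

```python
# Numerical spot-check that y_p = -cos(x)*ln|sec x + tan x| satisfies
# y'' + y = tan x. The residual should vanish up to rounding error.
import sympy as sp

x = sp.symbols('x')
y_p = -sp.cos(x) * sp.log(sp.sec(x) + sp.tan(x))

residual = sp.diff(y_p, x, 2) + y_p - sp.tan(x)
# Evaluate at an arbitrary point inside (0, pi/2).
print(sp.Abs(sp.N(residual.subs(x, sp.Rational(7, 10)))) < 1e-10)  # -> True
```

The same check at any other point in the interval of validity gives the same result, since the residual is identically zero.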
In general, when the method of variation of parameters is applied to the second-order nonhomogeneous linear differential equation

$$a\,y'' + b\,y' + c\,y = d(x)$$

with $y = v_1(x)y_1 + v_2(x)y_2$ (where $y_h = c_1 y_1 + c_2 y_2$ is the general solution of the corresponding homogeneous equation), the two conditions on v1 and v2 will always be

$$v_1' y_1 + v_2' y_2 = 0 \qquad (1)$$

$$v_1' y_1' + v_2' y_2' = \frac{d(x)}{a} \qquad (2)$$
So after obtaining the general solution of the corresponding homogeneous equation ($y_h = c_1 y_1 + c_2 y_2$) and varying the parameters by writing $y = v_1 y_1 + v_2 y_2$, go directly to equations (1) and (2) above and solve for v1′ and v2′.
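Solving conditions (1) and (2) by Cramer's rule gives the closed forms v1′ = −y2·d/(aW) and v2′ = y1·d/(aW), where W = y1·y2′ − y2·y1′ is the Wronskian. The sketch below implements this recipe with sympy and applies it to Example 1; the function name is ours, not part of the original article:

```python
# Variation of parameters via Cramer's rule on conditions (1) and (2).
import sympy as sp

x = sp.symbols('x')

def particular_solution(y1, y2, a, d):
    """Return a particular solution of a*y'' + b*y' + c*y = d(x),
    given a fundamental pair (y1, y2) of the homogeneous equation."""
    W = sp.simplify(y1 * sp.diff(y2, x) - y2 * sp.diff(y1, x))  # Wronskian
    v1 = sp.integrate(-y2 * d / (a * W), x)   # from v1' = -y2*d/(a*W)
    v2 = sp.integrate(y1 * d / (a * W), x)    # from v2' =  y1*d/(a*W)
    return sp.simplify(v1 * y1 + v2 * y2)

# Example 1: y'' + y = tan x, with y1 = cos x, y2 = sin x, a = 1.
y_p = particular_solution(sp.cos(x), sp.sin(x), 1, sp.tan(x))
print(y_p)  # equivalent to -cos(x)*log(sec(x)+tan(x)) up to homogeneous terms
```

Different antiderivative branches chosen by sympy differ from the hand-derived answer only by multiples of cos x and sin x, i.e. by homogeneous solutions, so the result still satisfies the equation.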
Example 2: Give the general solution of the differential equation
Because of the ln x term, the right‐hand side is not one of the special forms that the method of undetermined coefficients can handle; variation of parameters is required. The first step requires obtaining the general solution of the corresponding homogeneous equation, y″ – 2 y′ + y = 0:
Varying the parameters gives the particular solution
and the system of equations (1) and (2) becomes
Cancel out the common factor of e x in both equations; then subtract the resulting equations to obtain
Substituting this back into either equation (1) or (2) determines
Now, integrate (by parts, in both these cases) to obtain v 1 and v 2 from v 1′ and v 2′:
Therefore, a particular solution is
Consequently, the general solution of the given nonhomogeneous equation is
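The display equations for this example were lost in extraction; the surviving clues (a ln x on the right side, a common factor of e^x in the system, integration by parts) are consistent with the classic problem y″ − 2 y′ + y = e^x ln x, whose particular solution works out to y_p = e^x ((x^2/2) ln x − 3x^2/4). Assuming that reading, a finite-difference check:

```python
import math

def yp(x):
    # candidate particular solution of y'' - 2y' + y = e^x * ln x  (assumed reading)
    return math.exp(x) * (0.5 * x**2 * math.log(x) - 0.75 * x**2)

h = 1e-4
for x in (0.8, 1.5, 2.5):
    d1 = (yp(x + h) - yp(x - h)) / (2 * h)            # central 1st difference
    d2 = (yp(x + h) - 2 * yp(x) + yp(x - h)) / h**2   # central 2nd difference
    residual = d2 - 2 * d1 + yp(x) - math.exp(x) * math.log(x)
    assert abs(residual) < 1e-4, residual
```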
Example 3: Give the general solution of the following differential equation, given that y 1 = x and y 2 = x 3 are solutions of its corresponding homogeneous equation:
Since the functions y 1 = x and y 2 = x 3 are linearly independent, Theorem A says that the general solution of the corresponding homogeneous equation is
Varying the parameters c 1 and c 2 gives the form of a particular solution of the given nonhomogeneous equation:
where the functions v 1 and v 2 are as yet undetermined. The two conditions on v 1 and v 2 which follow from the method of variation of parameters are
which in this case ( y 1 = x, y 2 = x 3, a = x 2, d = 12 x 4) become
Solving this system for v 1′ and v 2′ yields
from which follow
Therefore, the particular solution obtained is
and the general solution of the given nonhomogeneous equation is
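The equation itself was lost in extraction, but the data quoted above (y 1 = x, y 2 = x 3, a = x 2, d = 12 x 4) pin it down to x^2 y″ − 3x y′ + 3y = 12 x^4, for which the method yields v 1′ = −6x^2, v 2′ = 6, and hence y_p = −2x^3·x + 6x·x^3 = 4x^4. An exact check of that candidate (an assumed reconstruction, not from the original text):

```python
# Check x^2*y'' - 3x*y' + 3y = 12x^4 for the candidate y_p = 4x^4.
def lhs(x):
    y = 4 * x**4
    dy = 16 * x**3    # first derivative of 4x^4
    ddy = 48 * x**2   # second derivative
    return x**2 * ddy - 3 * x * dy + 3 * y

for x in (1, 2, 3, 5):
    assert lhs(x) == 12 * x**4   # 48x^4 - 48x^4 + 12x^4
```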
https://solvedlib.com/n/t-0-lt-t-lt-2-4-t-2-lt-t-lt-4-0-4-lt-t-lt-0f-t,15063403 | # f(t) = t, 0 < t < 2; 4 − t, 2 < t < 4; 0, 4 < t < 6
###### Question:
f(t) = { t, 0 < t < 2; 4 − t, 2 < t < 4; 0, 4 < t < 6 }
#### Similar Solved Questions
##### What are the auditor’s responsibility on subsequent events in the period after the date of auditor’s...
What are the auditor’s responsibility on subsequent events in the period after the date of auditor’s report? Auditing...
##### Give the name and formula of a compound that could be combined with each one of the following compounds to produce a buffer solution: $\mathrm{HNO}_{2}, \mathrm{NaCN}, \left(\mathrm{NH}_{4}\right)_{2}\mathrm{SO}_{4}$, $\mathrm{NH}_{3}, \mathrm{HCN}, \left(\mathrm{CH}_{3}\right)_{3}\mathrm{N}$
Give the name and formula of a compound that could be combined with each one of the following compounds to produce a buffer solution: $\mathrm{HNO}_{2}, \mathrm{NaCN}, \left(\mathrm{NH}_{4}\right)_{2}\mathrm{SO}_{4}$, $\mathrm{NH}_{3}, \mathrm{HCN}, \left(\mathrm{CH}_{3}\right)_{3}\mathrm{N}$...
Problem Value: 4 points. Problem Score: 25%. Attempts Remaining: 2 attempts. (4 points) Speeding on the I-5. Suppose the distribution of passenger vehicle speeds traveling on the Interstate 5 Freeway (I-5) in California is nearly normal with a mean of 73 miles/hour and a standard deviation of 4.28 ...
Problem Value: 4 points. Problem Score: 25%. Attempts Remaining: 2 attempts. (4 points) Speeding on the I-5. Suppose the distribution of passenger vehicle speeds traveling on the Interstate 5 Freeway (I-5) in California is nearly normal with a mean of 73 miles/hour and a standard deviation of 4.28 ...
##### [-/1 Points] DETAILS SCALC8 11.4.007. MY NOTES Determine whether the series with term 5 + 10^n converges or diverges. The series converges by the Comparison Test: each term is less than that of a convergent geometric series. The series converges by the Comparison Test: each term is less than that of a convergent p-series. The series diverges by the Comparison Test: each term is greater than that of a divergent p-series. The series diverges by the Comparison Test: each term is greater than that of a divergent harmonic series.
[-/1 Points] DETAILS SCALC8 11.4.007. MY NOTES Determine whether the series converges or diverges. The series converges by the Comparison Test: each term is less than that of a convergent geometric series. The series converges by the Comparison Test: each term is less than that of a convergent p-s...
##### Please answer 4. [3 marks] Using a truth table, determine whether p → (q → r) and (p → q) → r are equivalent.
please answer 4. [3 marks] Using a truth table, determine whether p → (q → r) and (p → q) → r are equivalent....
Please help solve 9. 9) You are asked to find the index of refraction of a chunk of plastic with one planar side. Luckily, the laser can be seen inside the plastic due to scattering. You aim the laser at the plane surface at several angles and collect the data shown. Create an appropriate graph ...
##### Use systems of three equations in three variables to solve each problem. The sum of the angles of a triangle is $180^{\circ} .$ In a certain triangle, the largest angle is $20^{\circ}$ greater than the sum of the other two and is $10^{\circ}$ greater than 3 times the smallest. How large is each angle?
Use systems of three equations in three variables to solve each problem. The sum of the angles of a triangle is $180^{\circ} .$ In a certain triangle, the largest angle is $20^{\circ}$ greater than the sum of the other two and is $10^{\circ}$ greater than 3 times the smallest. How large is each angl...
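For the triangle problem just quoted, the system (with x the smallest and z the largest angle; variable names are my own) is x + y + z = 180, z = (x + y) + 20, z = 3x + 10, and it can be solved by substitution:

```python
from fractions import Fraction as F

# x + y + z = 180, z = (x + y) + 20, z = 3x + 10
# Substituting x + y = z - 20 into the first equation gives 2z - 20 = 180.
z = F(180 + 20, 2)   # largest angle
x = (z - 10) / 3     # smallest angle
y = 180 - x - z
assert (x, y, z) == (30, 50, 100)
assert z == x + y + 20 and z == 3 * x + 10
```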
##### A , = 2, r=4, n =8
a , = 2, r=4, n =8...
##### Find the domain of each function. $f(x)=\sqrt{24-2 x}$
Find the domain of each function. $f(x)=\sqrt{24-2 x}$...
##### Why is introducing disease often ineffective as a biological control method?
Why is introducing disease often ineffective as a biological control method?...
##### HIT 114 Chapter 14 Case Study You are the only health information management professional at a...
HIT 114 Chapter 14 Case Study You are the only health information management professional at a local veterinary clinic. On your first day of work, you quickly determine that there is a desperate need for organization of the health records for the animals. Currently, there is one paper-based health r...
##### Note that ~ ] is an eigenvalue ofWhat is Its geometric multiplicity?Answer:Check
Note that ~ ] is an eigenvalue of What is Its geometric multiplicity? Answer: Check...
##### Similar rxn with a stronger base sodium amide occurs at low temperatures Three isomeric products are...
Similar rxn with a stronger base sodium amide occurs at low temperatures Three isomeric products are formed when the halogen is in meta position to the present substituent Existence of benzyne as intermediate can be shown in a Diels-Alder adduct...
##### Consider the following peptide:Gly-Ile-Glu-Trp-Thr-Pro-Tyr-Gln-Phe-Arg-LysWhat amino acids and peptides are produced when the above peptide is treated with each of thefollowing reagents?1. Carboxypeptidase2. Chymotrypsin3. Trypsin4. DNFB
Consider the following peptide: Gly-Ile-Glu-Trp-Thr-Pro-Tyr-Gln-Phe-Arg-Lys What amino acids and peptides are produced when the above peptide is treated with each of the following reagents? 1. Carboxypeptidase 2. Chymotrypsin 3. Trypsin 4. DNFB...
##### 3. Evaluate the integral S Tz2+9)(x-2) -7x-12dx.
3. Evaluate the integral S Tz2+9)(x-2) -7x-12dx....
##### Suppose that a case-sensitive six character password (using uppercase and lowercase letters and the digits 2 through 9) is going to be created. Show the factors that you would multiply together to arrive at the total number of passwords that could be formed with the following restrictions: the first two characters must be a letter (upper or lower case) , the third character must be a digit oran uppercase letter; and the last two characters must be digits. Use the to indicate multiplication betwe
Suppose that a case-sensitive six character password (using uppercase and lowercase letters and the digits 2 through 9) is going to be created. Show the factors that you would multiply together to arrive at the total number of passwords that could be formed with the following restrictions: the first...
##### The circuit shown in the figure below contains three resistors (R1, R2, and R) and three...
The circuit shown in the figure below contains three resistors (R1, R2, and R) and three batteries and V. The resistor values are: Ry-2 Ohms, Ry-Ry 4 Ohms, and the battery voltages are VA-25V, V -15V, and Vc-20 V. When the circuit is connected, what will be the power dissipated by Ry? VC R VA V. R2 ...
##### Use the References to access important values. According to the following reaction, how many moles of sulfuric acid are necessary to form 0.665 moles of aluminum sulfate? Al2O3(s) + 3H2SO4(aq) → Al2(SO4)3(aq) + 3H2O(ℓ) ___ mol sulfuric acid
Use the References to access important values. According to the following reaction, how many moles of sulfuric acid are necessary to form 0.665 moles of aluminum sulfate? Al2O3(s) + 3H2SO4(aq) → Al2(SO4)3(aq) + 3H2O(ℓ) ___ mol sulfuric acid...
##### 1. a)Calculate the pH of a 0.30M formic acid solution (Ka=1.8*10^-4)Weak monoprotic acid. b)Calculate the Ka...
1. a) Calculate the pH of a 0.30 M formic acid solution (Ka = 1.8*10^-4), a weak monoprotic acid. b) Calculate the Ka for a 0.050 M solution of HA (a weak acid) if the pH = 4.65. c) What is the pH of the solution which results from mixing 50.0 mL of 0.30 M HF (aq) and 50.0 mL of 0.30 M NaOH (aq) at 25 C? (Kb of F- = 1.4*1...
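For part (a) above, the equilibrium x^2/(C − x) = Ka can be solved exactly with the quadratic formula instead of the usual √(Ka·C) shortcut; a sketch using the values from the question:

```python
import math

Ka, C = 1.8e-4, 0.30
# x^2 + Ka*x - Ka*C = 0  ->  x = (-Ka + sqrt(Ka^2 + 4*Ka*C)) / 2
x = (-Ka + math.sqrt(Ka**2 + 4 * Ka * C)) / 2
pH = -math.log10(x)
assert 2.1 < pH < 2.2   # approximately 2.14
```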
##### The following list includes selected permanent accounts and all of the temporary accounts from the December...
The following list includes selected permanent accounts and all of the temporary accounts from the December 31 unadjusted trial balance of Emiko Co., a business owned by Kumi Emiko. Emiko Co. uses a perpetual inventory system. Credit Debit $33,000 6,200 39,000$553,000 Merchandise inventory Prepaid...
##### Dividing Partnership Net Income
Required: Steve Queen and Chelsy Bernard formed a partnership, dividing income as follows: 1. Annual salary allowance to Queen of $88,920. 2. Interest of 7% on each partner's capital bal...

##### Determine whether the statement is true or false. Justify each answer or provide a counterexample when appropriate. (a) Adding a constant to every entry of a row of a matrix leaves the determinant of the matrix unchanged. (b) Performing two successive row interchanges on a matrix leaves the determinant of the matrix unchanged. (c) If the determinant of a matrix is 0, then one row is a multiple of another row or one column is a multiple of another column. (d) The determinant of any matrix is the product of the e...

##### If I forget to fill the point of the burette before starting the titration, how would that affect the experimental value obtained for the percentage of CH3COOH in the vinegar?

##### Determine whether the statements use the word function in ways that are mathematically correct. Explain your reasoning. (a) The amount in your savings account is a function of your salary. (b) The speed at which a free-falling baseball strikes the ground is a function of the height from which it was dropped.

##### QUESTION 6: Find the length of the curve from x = ... to x = ...

##### Recall the function in question (6): g(t) is periodic and is defined as follows: g(t) = 2cos t if -3 < t < 0, g(t) = 0 if 0 < t < 3. At t = 6, the Fourier series of g(t) converges to ...
http://www.gradesaver.com/textbooks/math/algebra/college-algebra-7th-edition/chapter-p-prerequisites-section-p-9-modeling-with-equations-p-9-exercises-page-70/5 | ## College Algebra 7th Edition
A painter paints a wall in $x$ hours, so the fraction of the wall that she paints in 1 hour is 1 wall / $x$ hours $= 1/x$.
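As a concrete instance of the rate reasoning (the numbers are illustrative, not from the exercise): if x = 4, the painter covers 1/4 of the wall per hour, and after t hours has covered t/x of it:

```python
from fractions import Fraction as F

x = 4           # hours to paint the whole wall (illustrative)
rate = F(1, x)  # fraction of wall painted per hour
assert rate == F(1, 4)
assert 3 * rate == F(3, 4)  # after 3 hours
assert x * rate == 1        # the whole wall after x hours
```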
https://math.stackexchange.com/questions/2344931/if-a-b-are-roots-of-3x22x1-then-find-the-value-of-an-expression | # If $a,b$ are roots of $3x^2+2x+1$ then find the value of an expression
It is given that $a,b$ are roots of $3x^2+2x+1$ then find the value of: $$\left(\dfrac{1-a}{1+a}\right)^3+\left(\dfrac{1-b}{1+b}\right)^3$$
I thought to proceed in this manner:
We know $a+b=\frac{-2}{3}$ and $ab=\frac{1}{3}$. Using this I tried to convert everything to sum and product of roots form, but this way is too complicated!
• That is not a problem! – akhmeteni Jul 3 '17 at 11:45
$$3a^2+2a+1=0 \to 3a^2+3a+1=a\\3b^2+2b+1=0\to 3b^2+3b+1=b\\a-1=3a(a+1)\\b-1=3b(b+1)$$ so $$\left(\dfrac{1-a}{1+a}\right)^3+\left(\dfrac{1-b}{1+b}\right)^3=\\ \left(\dfrac{-3a(a+1)}{1+a}\right)^3+\left(\dfrac{-3b(b+1)}{1+b}\right)^3\\=-27(a^3+b^3)=-27(s^3-3ps)\\=-27\left(\left(\frac{-2}{3}\right)^3-3\left(\frac{1}{3}\times\left(\frac{-2}{3}\right)\right)\right)\\=+8-18\\=-10$$where $$s=a+b\\p=ab$$
• Thank you this is much simpler! – akhmeteni Jul 3 '17 at 11:50
Plug $x=\frac{1-y}{1+y}$ in the given equation. We get: $$\frac{3 (1-y)^2}{(y+1)^2}+\frac{2 (1-y)}{y+1}+1=0$$ Expanding and collecting, we have: $$y^2-2 y+3=0$$ whose solutions are $$y_1=\frac{1-a}{1+a};\;y_2=\frac{1-b}{1+b}$$ We also know that sum of roots is $s=y_1+y_2=2$ and product is $p=y_1y_2=3$.
The sum of cubes can be written as follows $$y_1^3+y_2^3=\left(y_1+y_2\right)^3-3y_1y_2(y_1+y_2)=s^3-3ps=8-18=-10$$ so we have $$\left(\dfrac{1-a}{1+a}\right)^3+\left(\dfrac{1-b}{1+b}\right)^3=-10$$
• This is my way(+1) – lab bhattacharjee Jul 5 '17 at 13:06
The answer is $-10$. Find the common denominator: $$\left(\dfrac{1-a}{1+a}\right)^3+\left(\dfrac{1-b}{1+b}\right)^3=\frac{(1-ab-(a-b))^3+(1-ab+(a-b))^3}{(1+ab+(a+b))^3}=$$ $$\frac{2\cdot\left(\frac23\right)^3+2\cdot 3 \cdot\left(\frac23\right)\cdot (a-b)^2}{(\frac{2}{3})^3}=\frac{2\cdot\left(\frac23\right)^3+4\cdot ((a+b)^2-4ab)}{(\frac{2}{3})^3}=\frac{-10(\frac{2}{3})^3}{(\frac23)^3}=-10.$$
• Wolfram Alpha (wolframalpha.com/input/…) says that the answer is $-10$, as suggested by @khosrotash. Please check your calculations again. – Toby Mak Jul 3 '17 at 11:57
• @Toby Mak, thank you. I fixed it. – farruhota Jul 3 '17 at 12:26
From sum and product, we have:$$a+b+ab+ab=0$$ $$a(b+1)=-b(a+1)$$$$\left(\dfrac{1-a}{1+a}\right)^3+\left(\dfrac{1-b}{1+b}\right)^3=\left(1-\dfrac{2a}{1+a}\right)^3+\left(1-\dfrac{2b}{1+b}\right)^3=\left(1-\dfrac{2ab}{b(1+a)}\right)^3+\left(1-\dfrac{2b}{1+b}\right)^3=\left(1+\dfrac{2b}{1+b}\right)^3+\left(1-\dfrac{2b}{1+b}\right)^3$$ Letting $x=\tfrac{2b}{b+1}$, we have $(1+x)^3+(1-x)^3=2+6x^2=2+6\left(\dfrac{2b}{1+b}\right)^2$.
As, $3b^2+2b+1=0$, so $(b+1)^2=-2b^2$, and finally $2+6\left(\dfrac{2b}{1+b}\right)^2=2+6\cdot(-2)=-10.$
Although three years old, this is a good question with some terrific answers. I thought I'd add mine to the collection...
Let $$x=\left(\frac{1-a}{1+a}\right), y=\left(\frac{1-b}{1+b}\right)$$ Then, $$x+y=\left(\frac{1-a}{1+a}\right)+\left(\frac{1-b}{1+b}\right)=\frac{2(1-ab)}{1+(a+b)+ab}=\frac{2\left(1-\left(\frac{1}{3}\right)\right)}{1+\left(\frac{-2}{3}\right)+\left(\frac{1}{3}\right)}=2$$ and $$xy=\left(\frac{1-a}{1+a}\right)\left(\frac{1-b}{1+b}\right)=\frac{1-(a+b)+ab}{1+(a+b)+ab}=\frac{1-\left(\frac{-2}{3}\right)+\left(\frac{1}{3}\right)}{1+\left(\frac{-2}{3}\right)+\left(\frac{1}{3}\right)}=3$$ From the Binomial Theorem, $$(x+y)^3=x^3+3x^2y+3xy^2+y^3$$ we get, $$x^3+y^3=(x+y)^3-3xy(x+y)$$ $$\left(\frac{1-a}{1+a}\right)^3+\left(\frac{1-b}{1+b}\right)^3=2^3-3\times 2\times3=8-18=-10$$
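All of the derivations above can be confirmed numerically: the roots of 3x^2 + 2x + 1 are complex, but the symmetric expression in them is real. A check with the standard library:

```python
import cmath

# roots of 3x^2 + 2x + 1 via the quadratic formula
disc = cmath.sqrt(2**2 - 4 * 3 * 1)
a = (-2 + disc) / 6
b = (-2 - disc) / 6

val = ((1 - a) / (1 + a))**3 + ((1 - b) / (1 + b))**3
assert abs(val - (-10)) < 1e-12   # the expression equals -10
assert abs(val.imag) < 1e-12      # and is real
```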
https://ncatlab.org/nlab/show/quasi-symmetric%20function | Quasi-symmetric functions
Idea
Quasi-symmetric functions are a generalisation of symmetric functions and are closely related to noncommutative symmetric functions.
Definition
Definition
Let $X$ be a totally ordered set of indeterminates. Let $R$ be a ring. A polynomial in $R[X]$ or a power series in $R[ [X] ]$ is said to be quasi-symmetric if whenever $X_1 \lt X_2 \lt \dots \lt X_n$ and $Y_1 \lt Y_2 \lt \dots \lt Y_n$ are finite sets of indeterminates then the coefficients of $X_1^{i_1} X_2^{i_2} \cdots X_n^{i_n}$ and $Y_1^{i_1} Y_2^{i_2} \cdots Y_n^{i_n}$ are the same.
Definition
The ring $\QSymm^{\hat{}}$ is defined as the ring of quasi-symmetric power series over $\mathbb{Z}$ in countably many variables. Its subring $\QSymm$ is defined as the ring of quasi-symmetric polynomials (meaning, power series of bounded degree).
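The defining condition can be checked mechanically for a concrete polynomial: represent a polynomial in $n$ ordered variables as a dict from exponent tuples to coefficients, and compare the coefficients across all strictly increasing placements of each composition of exponents. The monomial quasi-symmetric function $M_{(1,2)} = \sum_{i \lt j} x_i x_j^2$ passes, while a lone monomial $x_1 x_2^2$ does not. (Illustrative code, not from the article.)

```python
from itertools import combinations

def is_quasi_symmetric(poly, n):
    """poly: dict mapping exponent tuples of length n to coefficients."""
    def coeff(positions, exps):
        mono = [0] * n
        for p, e in zip(positions, exps):
            mono[p] = e
        return poly.get(tuple(mono), 0)
    # The composition of each monomial = its nonzero exponents, left to right.
    comps = set()
    for mono in poly:
        support = tuple(i for i, e in enumerate(mono) if e)
        comps.add(tuple(mono[i] for i in support))
    # Quasi-symmetric: every placement of a composition has the same coefficient.
    for exps in comps:
        vals = {coeff(pos, exps) for pos in combinations(range(n), len(exps))}
        if len(vals) > 1:
            return False
    return True

# M_(1,2) in 3 variables: x1*x2^2 + x1*x3^2 + x2*x3^2
M12 = {(1, 2, 0): 1, (1, 0, 2): 1, (0, 1, 2): 1}
assert is_quasi_symmetric(M12, 3)
assert not is_quasi_symmetric({(1, 2, 0): 1}, 3)  # a lone monomial fails
# M_(1,2) is quasi- but not fully symmetric: the coefficient of x2*x1^2 is 0.
assert M12.get((2, 1, 0), 0) == 0
```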
References
(Copied from noncommutative symmetric function as the two concepts are often studied together.)
Research articles
• G. Duchamp, F. Hivert, J.-Y. Thibon, Noncommutative symmetric functions VI: free quasi-symmetric functions and related algebras, Internat. J. Alg. Comput. 12 (2002), 671–717.
• I. M. Gelfand, D. Krob, A. Lascoux, B. Leclerc, V. S. Retakh, J.-Y. Thibon, Noncommutative symmetric functions, Adv. in Math. 112 (1995), 218–348, hep-th/9407124
• Jean-Christophe Novelli, Jean-Yves Thibon, Noncommutative symmetric functions and Lagrange inversion, math.CO/0512570; Noncommutative symmetric functions and an amazing matrix arxiv/1109.1184
• Lenny Tevlin, Noncommutative Monomial Symmetric Functions, Formal Power Series and Algebraic Combinatorics Nankai University, Tianjin, China, 2007, proceedings pdf
• D. Krob, J.-Y. Thibon, Noncommutative symmetric functions IV: Quantum linear groups and Hecke algebras at $q = 0$, pdf
• Christos A. Athanasiadis, Power sum expansion of chromatic quasisymmetric functions, arxiv/1409.2595
Long surveys and lecture notes
• Michael Hazewinkel, Symmetric functions, noncommutative symmetric functions and quasisymmetric functions, pdf
• V. Retakh and R. Wilson, Advanced Course on Quasideterminants and Universal Localization: pdf (see the part Factorization of Noncommutative Polynomials and Noncommutative Symmetric Functions)
Expositions/short summaries
• Mike Zabrocki, Non-commutative symmetric functions II: Combinatorics and coinvariants, slides from a talk pdf, III: A representation theoretical approach pdf
• Lenny Tevlin, Introduction to quasisymmetric and noncommutative symmetric functions, slides, Fields Institute 2010 pdf
category: combinatorics
Last revised on August 23, 2015 at 02:46:50. See the history of this page for a list of all contributions to it.
https://testbook.com/question-answer/which-one-of-the-following-parameters-can-be-used--5cf8bf9cfdb8bb181da047cf | # Which one of the following parameters can be used to estimate the angle of friction of sandy soil?
This question was previously asked in
BPSC AE: Paper 5 (Civil Engineering) 2018 Official Paper
1. Particle size
2. Roughness of particle
3. Density Index
4. Particle size distribution
Option 3 : Density Index
## Detailed Solution
The angle of friction between soil particles depends upon the shape, size, roughness, grading, and packing of the particles.
For sand, particle size and shape remain roughly the same from sample to sample, so they cannot be used to estimate the angle of friction of sandy soil.
However, the angle of internal friction will be high if the sand is densely packed, which is indicated by the value of the density index.
Density index is the ratio of the difference between the void ratios of a cohesionless soil in its loosest state and existing natural state to the difference between its void ratio in the loosest and densest states.
$$\text{Density Index (Relative Density)} = \frac{e_{max} - e_{natural}}{e_{max} - e_{min}}$$
A higher value of the density index indicates a higher angle of internal friction.
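A worked instance with illustrative void ratios (not from the question): e_max = 0.90, e_natural = 0.50, e_min = 0.40 gives I_D = 0.8, i.e. a dense sand and therefore a high angle of internal friction.

```python
def density_index(e_max, e_nat, e_min):
    # Density index (relative density) = (e_max - e_nat) / (e_max - e_min)
    return (e_max - e_nat) / (e_max - e_min)

ID = density_index(0.90, 0.50, 0.40)
assert abs(ID - 0.8) < 1e-12
```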
https://me.gateoverflow.in/149/gate-mechanical-2014-set-2-question-18 | GATE Mechanical 2014 Set 2 | Question: 18
If there are $m$ sources and $n$ destinations in a transportation matrix, the total number of basic variables in a basic feasible solution is
1. $m + n$
2. $m + n + 1$
3. $m + n − 1$
4. $m$
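The answer m + n − 1 can be seen by building a basic feasible solution with the northwest-corner rule: each allocation exhausts a row or a column, and the final allocation exhausts both, giving exactly m + n − 1 basic cells in the non-degenerate case. A sketch with illustrative data:

```python
def northwest_corner(supply, demand):
    # Allocate greedily from the top-left cell; each step closes a row or column.
    supply, demand = supply[:], demand[:]
    i = j = 0
    cells = []
    while i < len(supply) and j < len(demand):
        q = min(supply[i], demand[j])
        cells.append((i, j, q))
        supply[i] -= q
        demand[j] -= q
        if supply[i] == 0 and i < len(supply) - 1:
            i += 1
        else:
            j += 1
    return cells

cells = northwest_corner([30, 70], [20, 50, 30])
m, n = 2, 3
assert len(cells) == m + n - 1   # 4 basic variables
```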
Related questions
The transformation matrix for mirroring a point in $x – y$ plane about the line $y=x$ is given by $\begin{bmatrix} 1 & 0 \\ 0 & -1 \end{bmatrix} \\$ $\begin{bmatrix} -1 & 0 \\ 0 & 1 \end{bmatrix} \\$ $\begin{bmatrix} 0 & 1 \\ 1 & 0 \end{bmatrix} \\$ $\begin{bmatrix} 0 & -1 \\ -1 & 0 \end{bmatrix}$
Which one of the following equations is a correct identity for arbitrary $3 \times 3$ real matrices $P$, $Q$ and $R$? $P(Q+R)=PQ+RP$ $(P-Q)^2 = P^2 -2PQ -Q^2$ $\text{det } (P+Q)= \text{det } P+ \text{det } Q$ $(P+Q)^2=P^2+PQ+QP+Q^2$
The matrix form of the linear system $\dfrac{dx}{dt}=3x-5y$ and $\dfrac{dy}{dt}=4x+8y$ is $\dfrac{d}{dt}\begin{Bmatrix} x\\y \end{Bmatrix}=\begin{bmatrix} 3 & -5\\ 4& 8 \end{bmatrix}\begin{Bmatrix} x\\y \end{Bmatrix} \\$ ... $\dfrac{d}{dt}\begin{Bmatrix} x\\y \end{Bmatrix}=\begin{bmatrix} 4 & 8\\ 3& -5 \end{bmatrix}\begin{Bmatrix} x\\y \end{Bmatrix}$
Let the superscript $\text{T}$ represent the transpose operation. Consider the function $f(x)=\frac{1}{2}x^TQx-r^Tx$, where $x$ and $r$ are $n \times 1$ vectors and $\text{Q}$ is a symmetric $n \times n$ matrix. The stationary point of $f(x)$ is $Q^{T}r$ $Q^{-1}r$ $\frac{r}{r^{T}r}$ $r$
One of the eigenvectors of the matrix $\begin{bmatrix} -5 & 2\\ -9 & 6 \end{bmatrix}$ is $\begin{Bmatrix} -1\\ 1 \end{Bmatrix} \\$ $\begin{Bmatrix} -2\\ 9 \end{Bmatrix} \\$ $\begin{Bmatrix} 2\\ -1 \end{Bmatrix} \\$ $\begin{Bmatrix} 1\\ 1 \end{Bmatrix}$
https://www.gradesaver.com/textbooks/math/calculus/calculus-3rd-edition/chapter-14-calculus-of-vector-valued-functions-14-2-calculus-of-vector-valued-functions-exercises-page-720/41 | # Chapter 14 - Calculus of Vector-Valued Functions - 14.2 Calculus of Vector-Valued Functions - Exercises - Page 720: 41
$\left\langle 0,0\right\rangle.$
#### Work Step by Step
We can write \begin{align} \int_{-2}^{2}\left\langle u^3,u^5\right\rangle du&=\left\langle \frac{1}{4}u^4, \frac{1}{6}u^6\right\rangle|_{-2}^{2}\\ &=\left\langle4, \frac{64}{6}\right\rangle-\left\langle 4, \frac{64}{6}\right\rangle\\ &= \left\langle 0,0\right\rangle. \end{align}
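The result also follows because $u^3$ and $u^5$ are odd functions integrated over the symmetric interval $[-2,2]$; a midpoint-rule check (illustrative code):

```python
def midpoint(f, a, b, n=4000):
    # Composite midpoint rule on n equal subintervals.
    h = (b - a) / n
    return sum(f(a + (k + 0.5) * h) for k in range(n)) * h

I = (midpoint(lambda u: u**3, -2, 2), midpoint(lambda u: u**5, -2, 2))
assert abs(I[0]) < 1e-9 and abs(I[1]) < 1e-9   # both components vanish
```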
https://wiki.math.ucr.edu/index.php?title=Math_22_Higher-Order_Derivative&oldid=2278 | # Math 22 Higher-Order Derivative
## Higher-Order Derivatives
``` The "standard" derivative ${\displaystyle f'(x)}$ is called the first derivative of ${\displaystyle f(x)}$. The derivative of ${\displaystyle f'(x)}$ is the second derivative of${\displaystyle f(x)}$, denoted by ${\displaystyle f''(x).}$
By continuing this process, we obtain higher-order derivative of ${\displaystyle f(x)}$.
```
Note: The 3rd derivative of ${\displaystyle f(x)}$ is ${\displaystyle f'''(x)}$. However, we simply denote the ${\displaystyle n^{th}}$ derivative as ${\displaystyle f^{(n)}(x)}$ for ${\displaystyle n\geq 4}$.
Example: Find the first four derivatives of
1) ${\displaystyle f(x)=x^{4}+5x^{3}-2x^{2}+6}$
Solution:
${\displaystyle f'(x)=4x^{3}+15x^{2}-4x}$
${\displaystyle f''(x)=12x^{2}+30x-4}$
${\displaystyle f'''(x)=24x+30}$
${\displaystyle f^{(4)}(x)=24}$
2) ${\displaystyle f(x)=(x^{3}+1)(x^{2}+3)}$
Solution:
It is better to rewrite ${\displaystyle f(x)=(x^{3}+1)(x^{2}+3)=x^{5}+3x^{3}+x^{2}+3}$
Then, ${\displaystyle f'(x)=5x^{4}+9x^{2}+2x}$
${\displaystyle f''(x)=20x^{3}+18x+2}$
${\displaystyle f'''(x)=60x^{2}+18}$
${\displaystyle f^{(4)}(x)=120x}$
## Notes
If ${\displaystyle f(x)}$ is the position function, then ${\displaystyle f'(x)}$ is the velocity function and ${\displaystyle f''(x)}$ is the acceleration function. | 2022-01-26 05:39:44 | {"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 25, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.9724794626235962, "perplexity": 1054.8928542480492}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2022-05/segments/1642320304915.53/warc/CC-MAIN-20220126041016-20220126071016-00327.warc.gz"} |
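The worked examples above can be checked mechanically. The sketch below is not part of the wiki page; it represents a polynomial by its ascending list of coefficients and differentiates it repeatedly, here on the Example 2 polynomial f(x) = x^5 + 3x^3 + x^2 + 3:

```python
def derive(coeffs):
    """Differentiate a polynomial given as ascending coefficients [c0, c1, c2, ...]."""
    return [i * c for i, c in enumerate(coeffs)][1:] or [0]

# f(x) = x^5 + 3x^3 + x^2 + 3, written as ascending coefficients
f = [3, 0, 1, 3, 0, 1]
for order in range(1, 5):
    f = derive(f)
    print(order, f)
# 1 [0, 2, 9, 0, 5]   -> f'(x)   = 5x^4 + 9x^2 + 2x
# 2 [2, 18, 0, 20]    -> f''(x)  = 20x^3 + 18x + 2
# 3 [18, 0, 60]       -> f'''(x) = 60x^2 + 18
# 4 [0, 120]          -> f^(4)(x) = 120x
```

Each printed list gives the coefficients c0, c1, ... of the next derivative.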
https://www.shaalaa.com/question-bank-solutions/state-true-or-false-7-5-1-7-1-5-linear-inequations-in-one-variable_25823 | # State True Or False 7 > 5 => 1/7 < 1/5 - Mathematics
MCQ
True or False
State true or false
7 > 5 => 1/7 < 1/5
• True
• False
#### Solution
7 > 5 => 1/7 < 1/5
The given statement is true: for positive numbers, taking reciprocals reverses the inequality, so 7 > 5 implies 1/7 < 1/5.
#### APPEARS IN
Selina Concise Maths Class 10 ICSE
Chapter 4 Linear Inequations (In one variable)
Exercise 4 (A) | Q 1.4 | Page 44 | 2021-02-25 11:25:56 | {"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 1, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.2329605221748352, "perplexity": 5211.111101424997}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2021-10/segments/1614178350942.3/warc/CC-MAIN-20210225095141-20210225125141-00287.warc.gz"} |
http://math.stackexchange.com/questions/626681/question-about-trigonometric-identity | I’m doing some trig questions from a book to brush up my trig knowledge. And I have come across two questions that I can’t seem to find solutions
First Question
If $A$ and $B$ are acute angles, find $A+B$, given:
(a) $\tan A = 1/4$, $\tan B = 3/5$. Hint: $\tan (A + B) = 1$
(b) $\tan A =5/3$, $\tan B = 4$.
According to the above hint I know $A+B$ must be $45$ degrees. But other than that I don't know how $\tan A= 1/4$, $\tan B=3/5$ come into the picture. I would appreciate it if anyone can help me understand how to solve this kind of problem.
Second question is
Find the values of $\sin 2A$, $\cos 2A$, and $\tan 2A$, given that $\tan A = u$, in quadrant one
I know how to find $\tan2A$ using identities and answer for that is $2u/(1-u^2)$
But for $\cos2A$ and $\sin2A$, I can’t get the answers given in the book, The answers given in the book are $$\sin2A = \frac{2u}{1+u^2},\qquad \cos2A = \frac{1-u^2}{1+u^2}$$
Again highly appreciate if anyone can help me out on these.
Thank you
-
Use $\tan(x+y) = \frac{\sin(x+y)}{\cos(x+y)}$, and the sum of angles formula. – chubakueno Jan 4 '14 at 7:01
cud u mabe fiks yur speling punctuashin an capitalizashun? its reel destractin – dfeuer Jan 4 '14 at 7:06
There are nice solutions using complex numbers. – lhf Jan 4 '14 at 12:13
First use $\displaystyle \tan(A+B)=\frac{\tan A+\tan B}{1-\tan A\tan B}$
We know if $\displaystyle \tan x=\tan \alpha$
The general value of $x$ is $n\pi+\alpha$ where $n$ is an integer
Here $\alpha=\frac\pi4$
Then use this to find the principal value of $A+B$
For the second, use Weierstrass substitution
-
And in the second one, OP is just supposed to manipulate the double agle formulas in a clever way. – chubakueno Jan 4 '14 at 7:05
@chubakueno,please find the edited version. Also, "quadrant one" in the second ques, right? – lab bhattacharjee Jan 4 '14 at 7:06
Yes, I think so. – chubakueno Jan 4 '14 at 7:13
Thank you for your reply. For 1st question what i don't understand is how do you use tanA=1/4, tanB=3/5 to arrive at the answer for (a) and tanA = 5/3 and tanB=4 for (b). the answers given in the book are (a) 45 degrees (b) 135 degrees – user119020 Jan 4 '14 at 10:40
Just put the values of $\tan A,\tan B$ in the formula I mentioned in the first line of the answer – lab bhattacharjee Jan 4 '14 at 10:42 | 2015-04-21 03:22:41 | {"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 1, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.924730658531189, "perplexity": 448.6720900556015}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2015-18/segments/1429246640001.64/warc/CC-MAIN-20150417045720-00289-ip-10-235-10-82.ec2.internal.warc.gz"} |
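Both questions in this thread can be sanity-checked numerically. The sketch below is not part of the thread; it uses Python's `math.atan` to confirm the book's answers of 45° and 135°, and spot-checks the double-angle formulas at an arbitrary first-quadrant value of u:

```python
import math

# Part (a): tan A = 1/4, tan B = 3/5, both angles acute.
a = math.atan(1/4) + math.atan(3/5)
print(round(math.degrees(a), 6))   # 45.0

# Part (b): tan A = 5/3, tan B = 4, both angles acute.
b = math.atan(5/3) + math.atan(4)
print(round(math.degrees(b), 6))   # 135.0

# Second question: the book's sin 2A and cos 2A in terms of u = tan A,
# spot-checked at an arbitrary first-quadrant value of u.
u = 0.7
A = math.atan(u)
assert abs(math.sin(2 * A) - 2 * u / (1 + u * u)) < 1e-12
assert abs(math.cos(2 * A) - (1 - u * u) / (1 + u * u)) < 1e-12
```

Note how (b) lands at 135°: the tangent of the sum is −1, and since both angles are acute the sum lies in (0°, 180°).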
https://stats.stackexchange.com/questions/281623/knn-outperforms-cnn | KNN outperforms CNN
Disclaimer: I am a programmer by trade, not a statistician, so please cater to my ignorance when explaining things and I apologize now if I make any incorrect assumptions
I am currently attempting to build an OCR platform for printed characters moving at speed in a video stream. I am able to detect and segment the images like so:
These are labeled using a standard one-hot format, e.g. [0,0,1,0,0,0,0,0,0,0].
I first attempted to build a convolution neural network using keras for performing the task of recognition with the following architecture:
# First convolution layer
model = Sequential()
model.add(Convolution2D(20, 15, 15, border_mode="same",input_shape=(height, width, depth)))
# Second convolution layer
# Third convolution layer
# Fully connected layer
# Classifier
opt = SGD(lr=0.01)
model.compile(loss="categorical_crossentropy", optimizer=opt,metrics=["accuracy"])
history = model.fit(trainingData, trainingLabels, batch_size=128, epochs=150,verbose=1)
However it would appear the network converges after only a few epochs with an awful accuracy level, then stays at that level indefinitely.
I have attempted tweaking the learning rate, amount of layers, size/amount of filters but still have the same results.
At first I assumed it was down to the validity of my training data, however after training a KNN classifier on the same data it achieves 94.87% accuracy.
I originally followed this fantastic tutorial for building the architecture as it solves a similar problem (MNIST dataset)
I was hoping to use a CNN as a learning exercise into why CNN's work so well for this kind of problem, any assistance in understanding why my CNN didn't work would be greatly appreciated.
• The very first question to ask is how much data you have. Neural nets are good, but they're not magic -- different dataset sizes, image resolutions and model architectures may result in completely different outcomes. Could you give some more information regarding how much data you have and what other classifiers and neural net architectures you have tried? May 25, 2017 at 11:09
• Hey Pedro, Thanks for your time. I have a relatively small dataset, currently using 1572 images for training and 510 for testing. Images are all 81 x 127 greyscale. I have only tried the convolutional network defined above and a KNN May 25, 2017 at 11:50
• Minor adjustments to the architecture above include having 2-4 layers rather than the three above, as well as varying the size and number of filters on each layer. May 25, 2017 at 11:52
• Your dataset is way too small to fit a CNN easily. To take the MNIST dataset as an example, it has 30 times more samples and the images are 13 times smaller, which is a big difference. If that answers your question, I'd say that "not enough data" is the main reason why your CNN doesn't work. I can go through a more detailed discussion and post it as an answer if you want. May 25, 2017 at 12:35
• As a separate note, I have to ask: do you actually need more accuracy? If you're building this classifier for a specific application then the 95% of the kNN might be good enough. If you're just doing this for fun that's another matter, of course. May 25, 2017 at 12:38
Almost certainly the low performance of your CNN is due to insufficient data.
A quick double-check in Keras using model.count_params() says your network has more than 10 million parameters -- which is not too much by modern standards but is quite a big bunch if you only have 1.5k images. Conventional wisdom in ML says that you should have at least a handful of thousands of images per class if you want to consider deep learning -- although in my experience I'd say it has to be quite a bit more unless you're willing to spend a long while fine tuning your model.
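The "more than 10 million parameters" figure can be reproduced with back-of-envelope arithmetic. In the sketch below only the 81×127 input size and the 20 first-layer filters come from the thread; the dense-layer width is a hypothetical stand-in:

```python
# Illustrative (assumed) dimensions: "same"-padded conv output with no pooling,
# flattened into a hypothetical 64-unit fully connected layer.
h, w = 81, 127                 # input resolution reported by the poster
filters = 20                   # first conv layer's filter count from the post
dense_units = 64               # hypothetical dense width (assumption)

flattened = h * w * filters                      # units entering the dense layer
params = flattened * dense_units + dense_units   # weights + biases
print(flattened)   # 205740
print(params)      # 13167424 -> ~13 million parameters in a single layer
```

Even a modest dense layer on top of un-pooled conv features lands in the tens of millions of weights, which is why the answers recommend pooling down before any fully connected layer.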
If you want to go the neural net way, I would suggest you to make your network smaller and add some strong regularisation, potentially through heavy dropout or L2 regularisation. If you're serious about this you can even consider doing some data augmentation or transfer learning (potentially from MNIST).
If you're just hacking around some ML for fun, I would recommend you to look into other classifiers that are more likely to work in your scenario. A couple of examples are Support Vector Machines and Random Forests.
Indeed, as Pedro suggested your network is too big for the data, but there are also problems with the data itself:
1. The fully connected layer alone is ~10 M parameters for 16 M data points, which is guaranteed to brutally overfit; I'd guess no more than 15% accuracy for this. For such small datasets, avoid large fully connected layers. Better go all-convolutional, i.e. conv-pool-conv-pool-... until you have few×few×N_channels (few < 4). Then you can have a small FC layer before the softmax.
2. The overfitting is especially likely as the pixels in your images are far from independent; your numbers are visibly pixelated. You can easily bin them 4x4 to 20x32 pixels without losing relevant information. So bin your data and switch to 3x3 or 5x5 convolutions with pooling.
3. You can have decent results even with such small data, so don't lose hope. Just minimize the number of parameters.
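The 4×4 binning suggested in point 2 takes only a couple of NumPy lines. This sketch is not from the thread; it first crops to multiples of 4, so an 81×127 image becomes 20×31 (one column short of the 20×32 quoted above):

```python
import numpy as np

img = np.arange(81 * 127, dtype=float).reshape(81, 127)   # stand-in grayscale image
h = (img.shape[0] // 4) * 4    # crop height to a multiple of 4
w = (img.shape[1] // 4) * 4    # crop width to a multiple of 4
binned = img[:h, :w].reshape(h // 4, 4, w // 4, 4).mean(axis=(1, 3))
print(binned.shape)            # (20, 31)
```

Each output pixel is the mean of a 4×4 block, which is exactly the lossless-in-practice downsampling the answer describes.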
• Hi Imoha, thanks very much for this! I will take your comments on board and try to amend the CNN when I return to work. May 29, 2017 at 19:28 | 2022-05-19 09:42:46 | {"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 1, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.45259708166122437, "perplexity": 779.0219410026925}, "config": {"markdown_headings": false, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2022-21/segments/1652662526009.35/warc/CC-MAIN-20220519074217-20220519104217-00099.warc.gz"} |
http://blog.milrr.com/2009/10/tech-set-gmail-to-handle-mailto-links.html | Friday, October 30, 2009
Tech: Set Gmail to handle MailTo links in Opera
For some reason unknown to me, Opera does not come with the ability to select Gmail as the default mail provider. However, you can fix this with a small configuration-file change. After installing Opera, open up C:\Program Files (x86)\Opera\defaults\webmailproviders.ini (if you are not running a 64-bit system, the (x86) won't be present). Open the file and add the following text to the end:
[Gmail]
ID=8 | 2019-03-19 00:40:04 | {"extraction_info": {"found_math": false, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.821423351764679, "perplexity": 4291.890851932904}, "config": {"markdown_headings": false, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2019-13/segments/1552912201812.2/warc/CC-MAIN-20190318232014-20190319014014-00134.warc.gz"} |
https://www.physicsforums.com/threads/different-complex-numbers.662310/ | # Different complex numbers
1. Jan 4, 2013
### redount2k9
We have a,b,c different complex numbers so
(a+b)^3 = (b+c)^3 = (c+a)^3
Show that a^3 = b^3 = c^3
From the first equality I reached a^3 - c^3 + 3b(a-c)(a+b+c) = 0. Now a is different from c => a-c is different from 0.
How do I show that a^3 - c^3 = 0?
2. Jan 4, 2013
### MrWarlock616
If you can prove that 3b(a-c)(a+b+c)=0, you're done.
a, b, c are complex numbers.
3. Jan 4, 2013
### redount2k9
But how to prove that?
4. Jan 4, 2013
### MrWarlock616
Well, you can try assigning a value to the complex numbers and put them in the expression.
5. Jan 4, 2013
### micromass
Huh? How can that ever be a good proof??
6. Jan 4, 2013
### MrWarlock616
lol i think it can't be..but you can do it if you take the time..
edit: expand the expression first.
Last edited: Jan 4, 2013
7. Jan 4, 2013
### haruspex
That's a good start. (It will turn out that a+b+c=0.) If you take out the factor a-c, and also write down a corresponding equation with a/b/c rotated around, then add the two, what do you get?
8. Jan 4, 2013
### haruspex
Doh! There's a much easier way.
We know (a+b) = (b+c)ω, where ω³=1. If a≠c then ω≠1. Writing the corresponding eqn for c+a versus b+c, and assuming b≠a and b≠c, we have (b+c) = (c+a)ω. (If (b+c) = (c+a)ω² then b=c.) It's not hard from there.
9. Jan 5, 2013
### MrWarlock616
$x^3-y^3=(x-y)(x^2+xy+y^2)$
You will need to prove 3b(a-c)(a+b+c)=0 using this formula, as I said earlier. Or you can put $a=x_1+iy_1, b=x_2+iy_2, c=x_3+iy_3$, but this will become lengthy.
10. Jan 5, 2013
### Curious3141
11. Jan 5, 2013
### MrWarlock616
12. Jan 5, 2013
### Joffan
I don't know if the original poster is still with us...
We know a, b, and c are distinct.
Therefore (a+b), (b+c) and (c+a) are also distinct
Therefore , as per the question, they are the three distinct cube roots of some number z.
And therefore what is the sum ((a+b) + (b+c) + (c+a))?
13. Jan 5, 2013
### MrWarlock616
2(a+b+c)
14. Jan 5, 2013
### Joffan
More specifically, what is the sum of the three distinct cube roots of some complex number z?
15. Jan 5, 2013
### MrWarlock616
That's 0 ..there you go. This can be locked. :P
16. Jan 5, 2013
### micromass
Anyway. Use that if
$$(a+b)^3=(b+c)^3$$
then
$$a+b=e^{2ik\pi/3}(b+c)$$
for $k\in \mathbb{Z}$. And the same thing for the equality $(a+b)^3=(a+c)^3$.
Now solve the equations
$$\left\{\begin{array}{l} a+b=e^{2ik\pi/3}(b+c)\\ a+b=e^{2ik^\prime\pi/3}(a+c) \end{array}\right.$$
17. Jan 5, 2013
### micromass
Thread is open again. Let's actually wait for the OP to reply before helping any further :tongue2:
18. Jan 6, 2013
### redount2k9
What should I reply? All of you have different opinions... it's hard for me to understand something.
19. Jan 6, 2013
### MrWarlock616
hahahaha use any of the methods..
20. Jan 6, 2013
### redount2k9
Hahahaha why not to use the best method? (if I would know which it is) | 2018-02-23 09:19:20 | {"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 2, "mathjax_display_tex": 1, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.5862239599227905, "perplexity": 4466.954867671129}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2018-09/segments/1518891814538.66/warc/CC-MAIN-20180223075134-20180223095134-00682.warc.gz"} |
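Joffan's observation — the three distinct cube roots of any nonzero z sum to zero — and the conclusion a³ = b³ = c³ can both be checked numerically. The sketch below is not from the thread; the choice z = 3 + 4i is an arbitrary illustration:

```python
import cmath

z = complex(3, 4)                       # arbitrary nonzero complex number
r, theta = cmath.polar(z)
s = [r ** (1/3) * cmath.exp(1j * (theta + 2 * cmath.pi * k) / 3) for k in range(3)]

# The three distinct cube roots of z sum to zero.
print(abs(sum(s)) < 1e-9)               # True

# Construct a, b, c with a+b = s[0], b+c = s[1], c+a = s[2]:
# since s[0]+s[1]+s[2] = 0 we get a+b+c = 0, so a = -s[1], b = -s[2], c = -s[0].
a, b, c = -s[1], -s[2], -s[0]
for pair in (a + b, b + c, c + a):
    assert abs(pair ** 3 - z) < 1e-9    # the common cube is z
assert abs(a ** 3 - b ** 3) < 1e-9 and abs(b ** 3 - c ** 3) < 1e-9
print("a^3 = b^3 = c^3 verified")
```

This mirrors the thread's argument: a+b+c = 0 forces a+b = -c and so on, which is why the equal cubes of the sums force equal cubes of a, b, c.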
https://stats.stackexchange.com/questions/414981/application-of-law-of-total-probability-for-continuous-random-variables | # Application of law of total probability for continuous random variables
Consider 3 random variables $$Y,V,T$$, with supports $$\mathcal{Y},\mathcal{V},\mathcal{T}$$, respectively.
Let
• $$P_{Y,V}$$ denote the probability distribution of $$(Y,V)$$
• $$P_{V}$$ denote the probability distribution of $$V$$
• $$P_{T|v}$$ denote the probability distribution of $$T$$ conditional on $$V=v$$
• $$P_{Y|v,t}$$ denote the probability distribution of $$Y$$ conditional on $$V=v, T=t$$
Suppose that all the supports are finite sets. Then, we know that, by the law of total probability:
$$P_{Y,V}(y,v)=P_{V}(v)\sum_{t\in \mathcal{T}}P_{T|v}(t) P_{Y|v,t}(y)$$
Now, I want to rewrite the same expression when $$V$$ is a continuous random variable. I don't want to introduce densities. If necessary, I can work with cumulative distribution functions. Could you help to provide a notationally precise statement?
$$F_X(x) = \int \limits_\mathscr{Y} F_{X|Y}(x|y) \ dF_Y(y).$$
• Thanks. I'm struggling to map this with my question. When $\mathcal{T}$ is not finite, is it $$F_{Y,V}(y,v)=F_{Y|V}(y|\{i\in \mathcal{V}: i\leq v\}) \times F_V(v)= \int_{\mathcal{T}} F_{Y|V,T}(y|\{i\in \mathcal{V}: i\leq v\}, t)dF_{T|V}(t|\{i\in \mathcal{V}: i\leq v\}) \times F_V(v)$$? – user3285148 Jun 27 at 13:22
• But what if $\mathcal{T}$ is finite? – user3285148 Jun 27 at 13:23
• The conditional CDF in my answer is still conditional on a single point, so $F_{X|Y}(x|y) \equiv \mathbb{P}(X \leqslant x | Y=y)$. In the Riemann-Stieltjes integral, if $Y$ is countable (including finite) then the integral reduces to a sum taken over the mass function of $Y$. – Ben Jun 27 at 13:27
• But I think that the CDF conditional on a single point is not what I need in my specific case, because everything start from the joint of $Y$ and $V$. – user3285148 Jun 27 at 13:29 | 2019-10-18 04:02:50 | {"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 1, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 15, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.9395816326141357, "perplexity": 210.68505937555167}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 20, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2019-43/segments/1570986677884.28/warc/CC-MAIN-20191018032611-20191018060111-00226.warc.gz"} |
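The finite-support identity stated in the question is easy to sanity-check numerically. The joint pmf below is an arbitrary randomly generated illustration, not part of the question:

```python
import itertools, random

random.seed(0)
Y, V, T = [0, 1], [0, 1], [0, 1, 2]

# Arbitrary strictly positive joint pmf over (Y, V, T) -- illustrative numbers.
weights = {k: random.random() + 0.01 for k in itertools.product(Y, V, T)}
total = sum(weights.values())
p = {k: wt / total for k, wt in weights.items()}          # P(Y=y, V=v, T=t)

P_V = lambda v: sum(p[(y, v, t)] for y in Y for t in T)
P_T_given_v = lambda t, v: sum(p[(y, v, t)] for y in Y) / P_V(v)
P_Y_given_vt = lambda y, v, t: p[(y, v, t)] / sum(p[(yy, v, t)] for yy in Y)
P_YV = lambda y, v: sum(p[(y, v, t)] for t in T)

for y, v in itertools.product(Y, V):
    lhs = P_YV(y, v)
    rhs = P_V(v) * sum(P_T_given_v(t, v) * P_Y_given_vt(y, v, t) for t in T)
    assert abs(lhs - rhs) < 1e-12
print("finite-support identity verified")
```

The check is exact up to floating-point error because the sum over t simply reassembles the joint pmf by the chain rule.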
http://robinsadvising.com/equus-cast-dtp/73d1f3-factorial-can-only-be-computed-recursively | # factorial can only be computed recursively
9 Dec Uncategorized
= n! The value of 5! We can also write above recursive program in a single line as shown below –, Iterative Program to Find Factorial of a Number. recursively. The factorial of a non-negative integer n is the product of all positive integers less than or equal to n. It is denoted by n!. n! If efficiency is not a concern, computing factorials is trivial from an algorithmic point of view: successively multiplying a variable initialized to 1 by the integers up to n (if any) will compute n!, provided the result fits in the variable. Recursive Factorial Example Program. The function accepts the number as an argument. There is a single positive integer T on the first line of input (equal to about 100000). Factorial of a non-negative integer n is the product of all the positive integers that are less than or equal to n. For example: The factorial of 4 is 24. There are n! 1. The calculation of factorial can be achieved using recursion in python. Initially, multiplyNumbers() is called from main() with 6 passed as an argument. Python Exercises, Practice and Solution: Write a Python function to calculate the factorial of a number (a non-negative integer). = 5 * 4 * 3 * 2 * 1 = 120 Factorial can be computed recursively as follows 0! = 5 * 4 * 3 * 2 * 1 = 120 ⦠= 1 x 2 x 3 x 4 x 5 = 120 = 1! The factorial of a non-negative integer n is the product of all positive integers less than or equal to n. It is denoted by n!. The best answer I can give you right now is that, like I've mentioned in my answer, $\Gamma$ was not defined to generalize factorials. The factorial and gamma function both have some interesting properties in common. Factorial does not have a closed form It can only be computed by expanding the 5! = 1 and, for all n > 0, n ... as each value requires two previous values, it can be computed by single recursion by passing two successive values as parameters. One way is to use a calculator to find both 100! 
factorial = 1 ELSE factorial = n * factorial (n-1) END IF END FUNCTION Commodore BASIC . Non-extendability to negative integers . The factorial of an integer n (i.e., n!) To Find Factorial Of A Number Using C Program. x 3 = 6 is 120 as Challenge: Recursive factorial. All numbers in Commodore BASIC are stored as floating-point with a 32-bit mantissa. 5! To compute one factorial, we computed zero factorial then we multiplied that result by one and that was our answer. A code snippet which demonstrates this is as follows: How to write recursive Python Function to find factorial? = 1 n! x 5 = 120 If the integer entered is negative then appropriate message is displayed. A number is taken as an input from the user and its factorial is displayed in the console. = 1 if n = 0 or n = 1 The relation n! Factorial program in Java using recursion. = n × (n â 1)! Factorial program in c using function. Recursive Solution: Factorial can be calculated using following recursive formula. = (n+1) \times n!$The gamma function also has this property Terminating condition(n <= 0 here;) is a must for a recursive program. Factorial program in Java without using recursion. is the product of all integers from 1 up to n. The factorial is meaningless for negative numbers. = (1 x 2 x 3 x 4 x 5) x 6 = 5! * (step+1) for step > 0; With this simple definition you can calculate the factorial of every number. represents n factorial.The notation n! Enter your email address to subscribe to new posts and receive notifications of new posts by email. Otherwise the program enters into an infinite loop. Java Program for Recursive Insertion Sort, Java Program for Binary Search (Recursive). = n < (n-1)! This is demonstrated below in C++, Java and Python: The time complexity of above solution is O(n) and auxiliary space used by the program is O(n) for call stack. Challenge: Recursive powers. 5! 4! = n * (n-1)! Below are the pros and cons of using recursion in C++. 
In functional languages, the recursive definition is often implemented directly to illustrate recursive functions. Definition. When n is less than 1, the factorial() function ultimately returns the output. = 9.33262154 x 10 157. The code uses this recursive definition.      | 1                            if n = 0 = \frac{1}{0!} recursively. Question: The Factorial Value For N Can Be Computed Recursively As Follows. C Program to Find Factorial of a Number using Recursion. x 4 = 24 = 8.68331762 × 10 36, but only keeps 32 bits of precision. 9.1.2 Factorial Notation. Internally the following calls are made to compute factorial of 3 (3 recursive calls are shaded with three different colors) â Factorial of 3 (which calls factorial of 2(which calls factorial of 1 and returns 1) which returns 2*1 ie. Recursion in c++ Factorial Program. The definition of the factorial function can also be extended to non-integer arguments, while retaining its most important properties; this involves more advanced mathematics, notably techniques from mathematical analysis. In fact, $$e^x = 1 + x + \frac{x^2}{2!} or recursively defined by x 2 = 2 Computing powers of a number. \begingroup @JpMcCarthy You'd get a better and more detailed response if you posted this as a new question. We can use recursion to calculate factorial of a number because factorial calculation obeys recursive. To Write C program that would find factorial of number using Recursion. As we can see, the factorial() function is calling itself. (The expression 10 157 is a scientific notation that means that we multiply by 1 followed by 157 zeros.) The for loop is executed for positiv⦠is 1 The problem can be recursively ⦠The factorial can be expressed recursively, where n! Recursively. Here, a function factorial is defined which is a recursive function that takes a number as an argument and returns n if n is equal to 1 or returns n times factorial of n-1. 
The factorial of a non-negative integer n, written n!, is the product of all positive integers less than or equal to n. For example, the factorial of 6 is 6 x 5 x 4 x 3 x 2 x 1 = 720, and 5! = 1 x 2 x 3 x 4 x 5 = 120. The value of 0! is 1: a set with zero elements has one permutation (there is one way of assigning zero elements to zero buckets), just as a set with one element has one permutation. In general, there are n! permutations of a set of n elements. (In factorial experimental designs, by contrast, the number of levels in each independent variable (IV) is the number we use for that IV.)

These observations lead to a recursive definition. The factorial function can be defined recursively by the equations 0! = 1 and (n+1)! = (n+1) · n!, or equivalently n! = n · (n − 1)!, which allows one to compute the factorial for an integer given the factorial for a smaller integer. For example, 4! = (1 x 2 x 3) x 4 = 3! x 4, and 5! = (1 x 2 x 3 x 4) x 5 = 4! x 5. To compute two factorial, we compute one factorial, multiply that result by two, and that is our answer. When we define a sequence recursively by specifying how terms of the sequence are found from previous terms, we can use induction to prove results about the sequence; note that a sequence is basically a function on N.

In code, a recursive method fact(n) calculates the factorial of a number n: if n is less than or equal to 1, it returns 1 (the base case, for which the function can be evaluated without recursion); otherwise it recursively calls itself and returns n * fact(n − 1). During each call, the value of n decreases by 1, so the recursion terminates and the result is ultimately returned to the main() function. The same logic can be written with an if-else statement: if the base-case condition is true, the base value is returned; otherwise control transfers to the else branch, which performs the recursive call. A typical run prints: "The factorial of 6 is: 720" and "The factorial of 0 is: 1".

Beware of overflow: if, for instance, an unsigned long is 32 bits long, the largest factorial that can be computed is 12! = 479,001,600, since 13! already exceeds 2^32. A floating-point format whose maximum representable value is 1.70141183 × 10^38 can handle factorials up to 33!, and some calculators cannot handle expressions as large as 100!. One property survives any such growth: the number of trailing zeros of n! can only increase with n, because we can never "lose" a trailing zero by multiplying by a positive integer.

The factorial function is formally extended beyond the integers by the gamma function, $$n! = \Gamma(n+1) = \int^{\infty}_0 e^{-t} \cdot t^{n} dt$$. Factorials also appear in the exponential series $$e^x = 1 + x + \frac{x^2}{2!} + \frac{x^3}{3!} + \cdots$$, which illustrates the important property that $$\frac{d}{dx}e^x = e^x$$. For higher precision, more coefficients can be computed by a rational QD scheme (Rutishauser's QD algorithm).

Exercises: (1) Efficiently print the factorial series in a given range. (2) Write Factorial.java containing a main method that reads in an integer value for n using the Scanner class and calls an internal method factorial(int n) to compute n!. (3) Write a recursive C/C++, Java, or Python program to calculate the factorial of a given positive number.
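The recursive definition above maps directly onto code; here is a minimal Python sketch (the same logic carries over to C or Java), together with one way to approach the range exercise by reusing intermediate products. The name `factorial_range` is chosen here for illustration:

```python
def factorial(n):
    """Recursive factorial: n! = n * (n-1)!, with base case 0! = 1! = 1."""
    if n <= 1:                      # base case: no further recursion needed
        return 1
    return n * factorial(n - 1)     # each call decreases n by 1

def factorial_range(lo, hi):
    """Print-the-series exercise: reuse the running product instead of
    recomputing each factorial from scratch."""
    value, out = factorial(lo), []
    for n in range(lo, hi + 1):
        out.append((n, value))
        value *= n + 1
    return out

print(factorial(5))            # 120
print(factorial_range(3, 6))   # [(3, 6), (4, 24), (5, 120), (6, 720)]
```

Python integers do not overflow, but the 32-bit limit quoted above is easy to confirm: 12! = 479,001,600 fits below 2^32, while 13! does not.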
https://mathtutoringonline.com/tag/math-definition/ | ## Distributive Property of Multiplication

Difference Definition In math, a difference is the result of subtracting numbers from each other. It comes from the Latin word “differentia” which means “carrying away” or “differing”. The first number in a difference is called the minuend and the second number is called the subtrahend. A difference can be an unsimplified expression like the […]

## Factor

What is a sum? A sum is the result of numbers or expressions being added together. It comes from the Latin word “summus” which means “total”. The first number in a sum is called the augend and the remaining numbers are called the addends. A sum can be an unsimplified expression like the left side […]

## Product

## Quotient
## Augend
What is an augend? An augend is a number that other numbers are added to. It comes from the Latin word “augendus” which means “to be increased”. I just learned this word today as I was doing research for this page about addends. So, if you’ve never heard of it, don’t feel bad. I’ve been […]
## Minuend

What is a minuend? A minuend is a number that other numbers are subtracted from. It comes from the Latin word “minuendus” which means “to be diminished”. The number that is subtracted is called the subtrahend. Expressions that have subtrahends subtracted from minuends are called differences. The answer to a subtraction problem is also called […]

## Sum

## Divisor

Divisor Definition A divisor is a number that other numbers are divided by. It comes from the French word “diviseur” which means “divider”. In a division problem, it represents the number of groups that the original amount is equally divided between. The “original amount” that is divided by the divisor is called the dividend. The […]

## Difference

## Addend
Addend Definition An addend is a number that is added to another number. It comes from the Latin word “addendus” which means “to be added”. The number it is added to is called the augend. However, this word is not very well-known. So, most people just call the augend an “addend”. Both words will work, […]
http://math.stackexchange.com/questions/98019/continuity-of-a-function | # Continuity of a function
I was trying to do an exercise: proving that $\frac{x^2}{1-x^2}$ is continuous on $(0,1)$. I did it but I want to be sure that it's right, could you tell me if my argument is wrong?
$\frac{x^2}{1-x^2}-\frac{a^2}{1-a^2}=\frac{(x+a)(x-a)}{(1-x^2)(1-a^2)}$, now $x+a\leq 1+a$. $1-x^2=1-x^2+a^2-a^2=1-a^2-(x^2-a^2)=1-a^2-(x-a)(x+a)\geq 1-a^2-(x-a)a\geq$ $1-a^2+\delta a$. So $\frac{(x+a)(x-a)}{(1-x^2)(1-a^2)}\leq \frac{(1+a)\delta}{(1-a^2+\delta a)(1-a^2)}\leq\varepsilon$ and so we can just take $\delta\leq\frac{(1-a^2)^2}{1+a-a\varepsilon}$. Is that right?
On first glance, you're forgetting to take the absolute value. – Alex Becker Jan 11 '12 at 1:32
Also $x-a$ can be positive or negative, so $1-a^2 - (x-a)(x+a)$ cannot be directly compared to $1-a^2 + \delta a$ like you did. // Are you specifically asked to use the epsilon-delta definition of continuity? This problem is simpler using the standard properties of continuous functions. – Srivatsan Jan 11 '12 at 1:35
The denominator $1-x^2$ is never zero in $(0,1)$ and so the function is continuous because it's the quotient of two continuous functions. – lhf Jan 11 '12 at 1:46
@Srivatsan Yeah, I'm asked to do it with the epsilon-delta definition – John Jan 11 '12 at 2:10
@Srivatsan: I don't understand, if $|x-a|<\delta$ then $-\delta< x-a<\delta$, so $x-a>-\delta$, right? But now that I think about it, if I put the absolute values I have a problem...could you help me to solve this problem, please? – John Jan 11 '12 at 2:50
Here is the definition of continuity in terms of the epsilon-delta definition: $f$ is continuous at $a$ if and only if for any $\epsilon>0$, there exists $\delta>0$ such that if $|x-a|<\delta$, then $|f(x)-f(a)|<\epsilon$.
Now we have $f(x)=\displaystyle\frac{x^2}{1-x^2}$. Then for any $a\in(0,1)$, we have (as you have calculated) $$\tag{1}\left|\frac{x^2}{1-x^2}-\frac{a^2}{1-a^2}\right|=\left|\frac{(x+a)(x-a)}{(1-x^2)(1-a^2)}\right|=\frac{|x+a|\cdot|x-a|}{|(1-x^2)(1-a^2)|}\leq \frac{2|x-a|}{[1-(\frac{1+a}{2})^2](1-a^2)}$$ if $x\in(\displaystyle\frac{a}{2},\frac{1+a}{2})$. Therefore, for any $\epsilon>0$, there exists $\delta=\min\{\displaystyle\frac{\epsilon}{2}[1-(\frac{1+a}{2})^2](1-a^2),\frac{a}{2},\frac{1-a}{2}\}>0$ such that if $|x-a|<\delta$, then $$-\delta<x-a,\mbox{ or equivalently }, x>a-\delta>a-\frac{a}{2}=\frac{a}{2}$$ and $$x-a<\delta,\mbox{ or equivalently }, x<a+\delta<a+\frac{1-a}{2}=\frac{1+a}{2}.$$ That is $$\tag{2} x\in(\frac{a}{2},\frac{1+a}{2}).$$ Hence, using $(1)$ and $(2)$, we have $$|f(x)-f(a)|=\left|\frac{x^2}{1-x^2}-\frac{a^2}{1-a^2}\right|<\frac{2\delta}{[1-(\frac{1+a}{2})^2](1-a^2)}\leq\epsilon.$$
why $\frac{|x+a||x-a|}{|(1-x^2)(1-a^2)|}\leq2|x-a|$? – John Jan 11 '12 at 3:21
Oh yes, that's a mistake. Originally I thought $\frac{1}{1-x^2}\leq 1$. See my edited answer. – Paul Jan 11 '12 at 4:44
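As a side note, the δ given in the accepted answer can be sanity-checked numerically — this is an illustrative sketch, not a substitute for the proof, and `delta_for`/`check` are names chosen here:

```python
def f(x):
    return x**2 / (1 - x**2)

def delta_for(a, eps):
    # delta from the answer: min(eps/2 * C, a/2, (1-a)/2),
    # where C = [1 - ((1+a)/2)^2] * (1 - a^2)
    C = (1 - ((1 + a) / 2) ** 2) * (1 - a**2)
    return min(eps / 2 * C, a / 2, (1 - a) / 2)

def check(a, eps, samples=1000):
    # sample x strictly inside (a - delta, a + delta); every sample
    # must satisfy |f(x) - f(a)| < eps
    d = delta_for(a, eps)
    return all(
        abs(f(a + d * (k / samples)) - f(a)) < eps
        for k in range(-samples + 1, samples)
    )

print(all(check(a, eps) for a in (0.1, 0.5, 0.9)
                        for eps in (1.0, 0.1, 0.001)))  # prints True
```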
https://www.physicsforums.com/threads/hyperfine-hamiltonian.314673/ | Hyperfine Hamiltonian
1. May 17, 2009
TFM
1. The problem statement, all variables and given/known data
Derive the hyperfine Hamiltonian starting from $$\hat{H}_{HF} = -\hat{\mu}_N \cdot \hat{B}_L$$, where $$\hat{\mu}_N$$ is the magnetic moment of the nucleus and
$$\hat{B}_L$$ is the magnetic field created by the pion’s motion around the nucleon. Write down the Hamiltonian in the form $$\hat{H}_{HF} = \ldots \ \vec{I} \cdot \vec{L}$$.
2. Relevant equations
$$\hat{B}_L = \frac{\mu_0 e}{4\pi r^3}\,\vec{r} \times \vec{v}$$
3. The attempt at a solution
Okay, I have tried putting everything together, and so far I currently have:
$$\hat{H}_{hf} = g_n \mu_n \frac{\vec{I}}{\hbar}\cdot \frac{-\mu_0 e}{4\pi r^3}\, \vec{r} \times \vec{v}$$
but I am not sure where to go from here. Any suggestions?
TFM
Last edited: May 18, 2009
2. May 17, 2009
nickjer
3. May 18, 2009
TFM
I was looking through my notes as suggested in the script, and they have a different version; my notes have $$\hat{H}_{HF} = -\hat{\mu}_N\cdot\hat{B}_j$$
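For completeness, one possible route from the given starting point to the requested $$\vec{I} \cdot \vec{L}$$ form — a sketch only, since signs and g-factor conventions vary between texts — is to treat the pion (mass $$m_\pi$$) as an orbiting charge, so that $$\vec{L} = m_\pi\, \vec{r} \times \vec{v}$$ and

$$\hat{B}_L = \frac{\mu_0 e}{4\pi r^3}\, \vec{r} \times \vec{v} = \frac{\mu_0 e}{4\pi m_\pi r^3}\, \hat{\vec{L}}$$

With $$\hat{\mu}_N = g_n \mu_n \hat{\vec{I}}/\hbar$$ this gives

$$\hat{H}_{HF} = -\hat{\mu}_N \cdot \hat{B}_L = -\frac{\mu_0 e\, g_n \mu_n}{4\pi m_\pi \hbar\, r^3}\; \hat{\vec{I}} \cdot \hat{\vec{L}}$$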
http://mindstudy.in/electrical-engineering/find-distribution-power | Question:
Published on: 10 August, 2022
A multi-hole directional coupler is fed with a signal power of 2.8 mW at 10 GHz. The coupling factor is 3 dB and the directivity is better than 40 dB over the X-band range. Find the distribution of power at all other ports.
The expression for coupling factor is
$$C=10\log{\left(\frac{P_1}{P_3}\right)}=3$$
$$\log{\frac{P_1}{P_3}}=0.3$$
$$\frac{P_1}{P_3}=2$$
$$P_3=\frac{2.8\times{10}^{-3}}{2}=1.4\ mW$$
The expression of directivity is
$$D=10\log{\left(\frac{P_3}{P_4}\right)}=40$$
$$\frac{P_3}{P_4}=10000$$
$$P_4=\frac{1.4\times{10}^{-3}}{10000}=0.14\ µW$$
Thus the power at port 2 is
P2 = Input power − (Power in coupling port + Power in isolated port)

$$P_2=2.8\times{10}^{-3}-\left(1.4\times{10}^{-3}+0.14\times{10}^{-6}\right)\approx1.4\ mW$$
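As a cross-check, the same port powers can be computed with exact (unrounded) dB ratios — a small illustrative Python sketch; `coupler_ports` is a name chosen here, not part of the original solution:

```python
def coupler_ports(p_in, coupling_db, directivity_db):
    """Port powers of a directional coupler (lossless assumption).

    p_in           -- input power at port 1, in watts
    coupling_db    -- C = 10*log10(P1/P3)
    directivity_db -- D = 10*log10(P3/P4)
    Returns (p2, p3, p4) in watts.
    """
    p3 = p_in / 10 ** (coupling_db / 10)    # coupled port
    p4 = p3 / 10 ** (directivity_db / 10)   # isolated port
    p2 = p_in - p3 - p4                     # through (output) port
    return p2, p3, p4

p2, p3, p4 = coupler_ports(2.8e-3, 3, 40)
# Using the exact ratio 10**0.3 ≈ 1.995 (the solution rounds it to 2):
# p3 ≈ 1.40 mW, p4 ≈ 0.14 µW, p2 ≈ 1.40 mW
```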
http://www.sciencemadness.org/talk/viewthread.php?tid=126865 | Not logged in [Login - Register]
Sciencemadness Discussion Board » Fundamentals » Reagents and Apparatus Acquisition » Making heating mantle from dried CaSO4 and or mix Select A Forum Fundamentals » Chemistry in General » Organic Chemistry » Reagents and Apparatus Acquisition » Beginnings » Miscellaneous » The Wiki Special topics » Technochemistry » Energetic Materials » Biochemistry » Radiochemistry » Computational Models and Techniques » Prepublication Non-chemistry » Forum Matters » Legal and Societal Issues » Detritus » Test Forum
Author: Subject: Making heating mantle from dried CaSO4 and or mix
RogueRose
International Hazard
Posts: 1180
Registered: 16-6-2014
Member Is Offline
Making heating mantle from dried CaSO4 and or mix
I remembered seeing someone make a shaped heating mantle out of, IIRC, CaSO4 and it was a perfectly shaped heating mantle for their RBF. I don't remember if it was on this board or not or if it was on Instructables or not but I'm wondering if anyone remembers this and if the mantle held up.
I have a few RBF's that have a broken neck, so I though I might try to make one using these, so if something goes wrong, I don't care if it breaks.
I plan to use some nichrome or similar wire and implant a thermocouple or 2 and control it with either a PWM or a variac.
I thought about doing the same making a flat hotplate top and use either cartridge heaters or resistance wire (maybe leaving space in center for stirring). I could cast the heating block on top of a stainless steel, aluminum or ceramic top, so when it dries, the top is firmly attached and it has good contact. IDK if it would be better to use something like thermal paste for contact or not.
On another note, I've found that I can find old hot plates that don't work (for whatever reason, often no heating but stirring and power supply is fine) and the replacement elements are very expensive, compared to the $5-10 for a broken unit or $100 for a used one, and thought making a replacement element like I described above would be adequate and I can't see how it would be "vulnerable" while it was under the heating top (Al, SS, ceramic, etc). Any thoughts on this?
I was thinking of trying a different mix than just CaSO4 such as a mix of dry clay (kaolin), CaSO4 and maybe Na2SiO3 (or sodium silicate)
I also have SiO2 and Al2O3 (which is the main composition of kaolin clay) but IDK if this has to go through a very high heating process (with water) to make it "clay". Could these be added separately to the CaSO4 and get a similar result, or mixed with the sodium silicate?
It'd be nice to come up with a workable formula for high temp refractory, I know this doesn't need to be very high temp, but I have use for the very high temp refractory in other apps, and this seems like a good project to try it out.
happyfooddance
National Hazard
Posts: 425
Registered: 9-11-2017
Location: Los Angeles, Ca.
Member Is Offline
Mood: No Mood
I would imagine that CaSO4 would effloresce a lot, avoiding that may be a challenge.
Ubya
National Hazard
Posts: 383
Registered: 23-11-2017
Location: Rome-Italy
Member Is Offline
https://youtu.be/QKFC0ke_DOU
is this what you are referring to?
---------------------------------------------------------------------
feel free to correct my grammar, or any mistakes i make
---------------------------------------------------------------------
wg48temp9
Harmless
Posts: 19
Registered: 30-12-2018
Member Is Offline
I don't understand why anyone who knows that a hydrated compound that contains water of crystallization and can be dehydrated at temperatures just over 100C, would think it could be used as insulation significantly above 100C ???
Most compounds that contain water of crystallization when dehydrated fall apart and the resultant dehydrated material usually has a smaller volume (cracking and shrinkage)
The setting (hardening) of new plaster (dehydrated calcium sulfate) is caused by hydrated crystals growing from the mix of water and anhydrous calcium sulfate. The growing crystals form an interlocking matrix ie set plaster. That matrix will be destroyed when its is heated past its dehydration temperature.
Anhydrous copper sulfate would behave similarly.
In addition after dehydration and the when cooled down it will absorb water from the atmosphere which will have to be driven off next time its heated even it remains sufficiently coherent to use it.
If the insulation is contained in a earthed metal can there will be a good chance the earth leakage trip will be triggered as the driven off moisture condenses on the cooler parts form a leakage path between the can and the heating element or connections to it. It could also be a shock hazard if the damp outer insulation is touched.
The only legitimate reason I can think of for using copper sulfate would be in small amounts (<1%) as a sintering aid in a real (not a hydrated salt) ceramic. I guess that small amounts of plaster could also be used as a sintering aid also. But how are you going to heat the whole mantel to about 1,000C for the aid to help sinter the ceramic precursors.
One more point copper connections to the heating element that are heating much above 300C will gradually oxidize over time and eventually fail. The usual method of connection is to use a nickle wire welded to the heating wire or double up (preferably triple up) the element wire (fold back a several inches at the ends and twist together which is then connected to the copper wires outside of the insulation in a cool location.
From https://www.escholar.manchester.ac.uk/api/datastream?publica...
"shown in Figure 2.7, the results suggest that strength and stiffness of gypsum reduce to zero by 120°C."
On a different point aluminium oxide and silicon dioxide do chemically combine (no water required) to form a high temperature ceramic but > 1,000C is needed even with a sintering aid.
If you want your ceramic insulation to last do not make it from a hydrated anything that includes cement, plaster or any sulphate, wood, plastic or dry road kill LOL
i am wg48 but not on my usual pc hence the temp handle.
https://mathemerize.com/obtain-all-the-zeroes-of-3x4-6x3-2x2-10x-5-if-two-of-its-zeroes-are-sqrt5over-3-and-sqrt5over-3/ | # Obtain all the zeroes of $$3x^4 + 6x^3 – 2x^2 – 10x – 5$$, if two of its zeroes are $$\sqrt{5\over 3}$$ and -$$\sqrt{5\over 3}$$.
## Solution :
Since two zeroes are $$\sqrt{5\over 3}$$ and -$$\sqrt{5\over 3}$$,
x = $$\sqrt{5\over 3}$$ and x = -$$\sqrt{5\over 3}$$
$$\implies$$ (x - $$\sqrt{5\over 3}$$)(x + $$\sqrt{5\over 3}$$) = $$x^2 - {5\over 3}$$, so $$3x^2 - 5$$ is a factor of the given polynomial. Now, we apply the division algorithm to the given polynomial and $$3x^2 - 5$$.
First term of quotient is $$3x^4\over 3x^2$$ = $$x^2$$
Second term of quotient is $$6x^3\over 3x^2$$ = 2x
Third term of the quotient is $$3x^2\over 3x^2$$ = 1
So, $$3x^4 + 6x^3 - 2x^2 - 10x - 5$$ = ($$3x^2 - 5$$)($$x^2 + 2x + 1$$) + 0 = ($$3x^2 - 5$$)$${(x + 1)}^2$$
Quotient = $$x^2 + 2x + 1$$ = $${(x + 1)}^2$$
Zeroes of $${(x + 1)}^2$$ are -1 and -1.
Hence, all its zeroes are $$\sqrt{5\over 3}$$, -$$\sqrt{5\over 3}$$, -1, -1.
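The factorization and zeroes can be verified numerically — an illustrative Python check, not part of the original solution:

```python
import math

def poly_mul(p, q):
    """Multiply two polynomials given as coefficient lists, highest degree first."""
    r = [0] * (len(p) + len(q) - 1)
    for i, a in enumerate(p):
        for j, b in enumerate(q):
            r[i + j] += a * b
    return r

def poly_eval(p, x):
    """Evaluate a polynomial (highest degree first) via Horner's rule."""
    v = 0.0
    for c in p:
        v = v * x + c
    return v

quartic  = [3, 6, -2, -10, -5]   # 3x^4 + 6x^3 - 2x^2 - 10x - 5
factor   = [3, 0, -5]            # 3x^2 - 5
quotient = [1, 2, 1]             # x^2 + 2x + 1 = (x + 1)^2

print(poly_mul(factor, quotient))          # [3, 6, -2, -10, -5]
for z in (math.sqrt(5 / 3), -math.sqrt(5 / 3), -1.0):
    print(abs(poly_eval(quartic, z)) < 1e-9)   # True for all four zeroes
```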
https://www.nature.com/articles/s41467-020-17469-x?error=cookies_not_supported&code=742b5d49-351b-4768-a43f-7aed4204867c | ## Introduction
There is an urgent technological demand for broadband infrared (IR) phosphors to build phosphor-converted IR light-emitting diodes (pc-LEDs) with a broad spectral output. Broadband near-IR LEDs are a promising energy-efficient alternative for incandescent lamps for applications that rely on near-IR spectroscopy1,2. A large market that is presently explored is IR LEDs for the use in smart devices (e.g., mobile phones) to analyze food: information about its freshness, caloric value, or allergen content can be derived from the built-in spectrometer and IR LED3. Proof of concepts are demonstrated, but current state-of-the-art materials do not reach the desired energy efficiency, mostly due to the low absorption strength of the parity-forbidden 3d–3d transitions of the Cr3+ or Mn2+ ions that are used4,5,6,7,8,9,10. Alternatively, near-IR parity-allowed Eu2+ 5d–4f emission has been recently proposed11. Higher external quantum efficiencies can thus be achieved; however, at the cost of a smaller IR bandwidth. Material technologies and design strategies to obtain broadband IR phosphors that have a high absorption strength in the visible are hence highly desired to get this technology off the ground.
Other applications follow from the transparency of biological tissue for near-IR light and are hence situated in medicine12. This comprises imaging without the use of radioisotopes13,14,15, analyzing tissue during biopsy or endoscopy16, and IR therapy17. Also here, activators such as Cr3+ or Nd3+ feature substandard low excitation efficiencies due to the forbidden character of the used 3d–3d or 4f–4f transitions13,15. Circumventing this limitation can be done by incorporating Eu2+ in a suitable host, where the allowed 5d–4f emission is shifted towards longer wavelengths. A promising host for Eu2+ is CaS, where emission ~650 nm was shown in nanoparticles18,19. Upon co-doping with Dy3+, trapping can be induced upon ex situ UV excitation, after which red emission can be obtained upon in situ IR stimulation. As a downside, the red Eu2+ emission in this compound lies only partly in the first optical window of human skin tissue12, and would hence benefit from a further shift to the IR.
Large efforts have already been undertaken to optimize IR luminescent materials for the above-mentioned applications, starting from the well-known luminescence transitions such as 3d–3d, 4f–4f, or 5d–4f4,5,6,7,8,9,10,13,20. On the contrary, charge transfer (CT) states have remained below the radar because they often cause luminescence quenching, rather than generating luminescence themselves. Nonetheless, the so-called anomalous emission in several phosphors has recently been attributed to intervalence CT (IVCT) transitions21,22,23,24,25,26, that is, electron transfers between two lanthanide dopants that differ only in oxidation state. CT between two different lanthanide elements, that is, metal-to-metal CT (MMCT) states, has not been reported as such. The only exception is the tentative assignment of a direct lanthanide-to-lanthanide CT absorption by Poolton et al.27 and concerns a Ce3+ + Sm3+ → Ce4+ + Sm2+ MMCT in YPO4.
CT states between dopants remain often unnoticed because their absorption bands are very weak, yet they are important28. The multivalent nature of lanthanide and transition metal ions induces numerous CT states at low energy, intercalating with the excited states that are typically responsible for the luminescence of the individual ions. Therefore, they are expected to significantly alter the excited state dynamics by quenching existing or by generating new luminescent levels.
Here, broadband IR emission is reported upon addition of Tb to the red phosphor CaS:Eu2+. This broadband IR emission can be efficiently pumped with long-wavelength visible light, substantiating an effective improvement of existing IR phosphors for pc-LED applications and for in vivo biomedicine without the need for prior ex situ charging nor expensive detection in the second or third optical window.
CaS:Eu2+,Tb3+ and SrS:Eu2+,Tb3+ are investigated in detail in a combined experimental–theoretical study that evidences that the observed IR emission is due to the radiative decay of MMCT states of Eu2+–Tb3+ pairs, a type of luminescence that was hitherto not described to the best of our knowledge. The ab initio multiconfigurational calculations explain why the MMCT luminescence is found for CaS:Eu2+,Tb3+, but not for SrS:Eu2+,Tb3+. This knowledge is utilized to show that MMCT emission can only be obtained when several conditions in terms of the electronic structure of the lanthanide pair, and the structural rigidity of the host crystal are fulfilled. This goes beyond the mere location of energy levels, but comprises vibrational frequencies and lanthanide–ligand bond lengths as well. The reported IR phosphor is finally applied to construct a broadband near-IR LED with a radiant output that surpasses the current state of the art.
## Results and discussion
### Photoluminescence spectra
The photoluminescence (PL) emission spectra for various MS:Eu,Tb (M = Ca,Sr) singly and codoped powders are shown in Fig. 1. Singly Eu-doped MS shows the characteristic broadband emission, peaking ~650 and 615 nm for CaS and SrS, respectively. This band is attributed to the radiative de-excitation of the 4f65d1 states of Eu2+ towards the 4f7(8S) ground state29,30,31. The associated excitation spectrum as shown in Fig. 2 consists of a very broad band, ranging from 410 to 610 nm, composed of numerous transitions towards the dense $$4{f}^{6}5d{t}_{{\rm{2}}g}^{1}$$ manifold31,32. No trace of Eu3+ line emission is found. This does however not exclude its presence, because IVCT states are known to quench the Eu3+ emission in case it pairs with Eu2+ ions28. However, high-energy-resolution fluorescence-detected X-ray absorption near-edge structure (HERFD-XANES33,34) spectra show that no Eu3+ is present in the prepared samples within the detection limits (see Supplementary Fig. 2).
Singly Tb-doped MS shows characteristic Tb3 + 5D4 → 7FJ line emission across the visible range (green curve in Fig. 1), most notably in the green (545 nm, J = 5). This intraconfigurational 4f8 emission can be excited by a relatively high-lying 4f8 → 4f75d1 excitation band in the near-UV (Fig. 2)35,36,37. In case of CaS and SrS, the fundamental absorption of the host lies in the same energy range, and the near-UV absorption and excitation bands are likely the result of a mixture of host- and dopant-related transitions30,31.
When Eu and Tb are combined in the same sample, an additional IR emission band emerges in case of CaS, but not in case of SrS. This band is highlighted in Fig. 1. It peaks at 810 nm (12,345 cm−1) and is very broad, with a full-width at half-maximum of 195 nm (2960 cm−1). On the high-energy side, it overlaps with the Eu2+4f65d1 → 4f7 luminescence around 650 nm and extends up to 1100–1200 nm. Its excitation spectrum (dashed curve in Fig. 2) is very similar as for the Eu2+ luminescence, where the $$4{f}^{7}\to 4{f}^{6}5d{t}_{{\rm{2}}g}^{1}$$ band can be identified, with a small redshift of 20 nm, corresponding to about 500 cm−1. In addition to this band, some excitation intensity can be found in the region around 370 nm where no allowed transitions for Eu2+ are found31, suggesting the presence of additional excited states when Eu and Tb are codoped. These features are also visible at low temperature, as well as in the diffuse reflectance spectra (see Supplementary Figs. 6 and 7).
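As an aside, the quoted wavelength/wavenumber pairs are consistent with the standard conversion ν̃ [cm⁻¹] = 10⁷ / λ [nm] — a quick illustrative check (the function name is ours, not from the paper):

```python
def nm_to_wavenumber(lam_nm):
    """Convert a vacuum wavelength in nm to a wavenumber in cm^-1."""
    return 1e7 / lam_nm

peak = nm_to_wavenumber(810)   # ≈ 12,346 cm^-1 (quoted: 12,345 cm^-1)
# A 195 nm FWHM centred at 810 nm spans roughly
fwhm = nm_to_wavenumber(810 - 97.5) - nm_to_wavenumber(810 + 97.5)  # ≈ 3016 cm^-1
# and a 20 nm redshift of excitation bands near 630 nm corresponds to
shift = nm_to_wavenumber(620) - nm_to_wavenumber(640)  # ≈ 504 cm^-1 (~500 cm^-1)
```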
Lifetime measurements (see Supplementary Fig. 9 and Supplementary Table 1) indicate that the IR emission bands feature a very similar decay behavior to the Eu2+ 5d → 4f emission, with time constants around 500 ns at room temperature38.
To exclude the possibility that this previously unseen emission band is due to the precursors used, codoped powders were prepared from different precursor batches, using fluorides, oxides, and sulfides as lanthanide precursors39,40,41,42. All syntheses in which both Eu and Tb were present as dopants resulted in the same IR emission band, while this IR band was always absent when only one dopant was used (see Supplementary Fig. 4). The similarity of the IR emission regardless of the synthesis conditions suggests that the details of the charge compensation mechanism for Tb3+ do not directly affect the IR emission. Explicit compensation of Tb3+ by adding a monovalent codopant such as Na+ is shown to be detrimental to the luminescence properties, leading to an efficiency drop by a factor of 10 (see Supplementary Fig. 8). The reason is that, when Tb3+ is extrinsically compensated, the formation of Eu3+ will also be favored29, generating undesired Eu2+–Eu3+(–Na+) centers in addition to the intended Eu2+–Tb3+(–Na+) centers. The former centers are responsible for the luminescence efficiency drop by IVCT quenching28,43.
This check, along with the demonstrated phase purity (from X-ray diffraction (XRD), see Supplementary Fig. 1) and the presence of only one oxidation state for each dopant in the absence of extrinsic charge compensation, that is, Eu2+ and Tb3+ (from HERFD-XANES, see Supplementary Fig. 2), strongly suggests that this IR emission band is a physical effect that emerges due to an interaction between the Eu2+ and Tb3+ centers in the calcium sulfide crystal.
To acquire more information about this peculiar luminescence, its properties are investigated as a function of the Eu and Tb doping concentrations. The PL emission spectra indicate that the Tb3+ intraconfigurational 4f8 emission is strongly diminished upon the addition of Eu2+ ions until it completely vanishes (Fig. 1). This is not surprising given the large overlap between the Tb3+ emission and Eu2+ excitation spectra, which enables an efficient energy transfer in which the Tb3+ ion sensitizes the Eu2+ luminescence44,45. When a small amount of Eu2+ is added to a Tb3+-doped sample (CaS:Eu0.001Tb0.01), or vice versa (CaS:Eu0.01Tb0.001), the IR emission shows up, but with a limited intensity. The relative intensities are comparable in both cases (see Fig. 1). For higher concentrations, 1% for both dopants (CaS:Eu0.01Tb0.01), the IR emission stands out, featuring a larger integrated intensity than the Eu2+ 5d → 4f emission. Upon increasing the doping concentrations further (CaS:Eu0.03Tb0.03), the IR emission dominates the entire emission spectrum.
The emergence of the IR emission upon Tb addition to CaS:Eu is accompanied by a decrease in PL quantum efficiency (QE), from 35% for CaS:Eu0.01 to 10% for CaS:Eu0.01Tb0.01 (see Supplementary Fig. 8). This relatively low internal QE is partly compensated by the efficient excitability of the IR emission (see Fig. 2), where >90% of the incident visible (400–600 nm) light is absorbed (see Supplementary Fig. 7). It is hence clear that practical applications require a trade-off between conversion efficiency and the fraction of IR in the emission spectrum (see below)46.
### Concentration dependence
The concentration-dependent PL study indicates that the intensity of the IR emission scales with the product of the concentrations of both dopants; however, quantitative conclusions are hindered by the limited number of concentrations that can be prepared, and such a macroscopic analysis does not account for uncontrollable microscopic concentration differences47,48,49,50. To get a more detailed picture, a microscopic study is performed.
For this, two grains with extreme doping inhomogeneity are explored51,52,53. It should be stressed that these grains were selected for this purpose and that they do not represent the global doping homogeneity of the phosphors, which is much better. As shown in Supplementary Fig. 3 and the accompanying discussion, variations in local concentrations are limited to less than a percent, corresponding to a decent doping homogeneity.
The microscopic study of the inhomogeneous grains is shown in Fig. 3. One grain exhibits predominantly red emission (lower left), while the other grain shows a strong IR emission (upper right). From the elemental analysis by energy-dispersive X-ray spectroscopy (EDX), it is clear that the doping is indeed inhomogeneous and that the local Eu and Tb concentrations range from 0 to roughly 4%54. As a direct consequence, the cathodoluminescence (CL) spectrum shows strong variations across the sample because the intensity of the IR emission depends strongly on the Eu and Tb concentrations. As an illustration, five local spectra are shown in Fig. 3c. It is clear that the smaller grain on the bottom of the image shows negligible IR emission (Fig. 3c, d), which is compatible with the elemental analysis that suggests that Eu and Tb are well separated in this grain, indicated by a limited amount of yellow in Fig. 3b. For the larger grain, Eu and Tb clearly congregate (there is more yellow in Fig. 3b) and intense IR emission is found (Fig. 3c, d).
From the PL study, it is clear that both Eu and Tb are required to induce the IR emission; therefore, the product of both local concentrations, [Eu]·[Tb], determined per pixel in the scanning electron microscope (SEM)-EDX map, is used as the independent variable to correlate the luminescence properties with the CL spectrum, measured for the same pixel49,55. The red and IR contributions to the local CL spectrum are integrated and subsequently analyzed as a function of [Eu]·[Tb]. Figure 3e was obtained by averaging data points along the abscissa. At low [Eu]·[Tb] values, the IR emission increases linearly as a function of [Eu]·[Tb], while the red Eu2+ emission decreases accordingly. After the linear increase/decrease, the relative intensities stabilize and the spectrum does not change appreciably upon increasing the product of the doping concentrations above roughly 3 × 10−4, which corresponds to a symmetric doping concentration, $$\sqrt{[{\rm{Eu}}]\cdot [{\rm{Tb}}]}$$, of 1.7%. No spot is found where the red Eu2+ emission completely disappears, not in the area displayed in Fig. 3, nor in any other grain that was investigated, nor in the emission spectra of phosphors with doping concentrations >5% (see Supplementary Fig. 5). This equilibrium between the red and IR emissions implies that the IR emission is presumably not the result of an energy transfer from Eu2+ to another emitting center, because in that case the Eu2+ emission would be expected to vanish completely if the concentrations were increased sufficiently44,56. Yet, the Eu2+ absorption bands are clearly present in the excitation spectrum of the IR band. This means that the IR emission stems directly from a Eu2+-containing defect cluster.
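The per-pixel correlation procedure can be sketched as follows. This is a minimal illustration on synthetic maps: the concentration range and the ~3 × 10−4 saturation are taken from the text, while the intensity model and noise level are assumptions, so it mimics the analysis rather than reproducing the measured EDX/CL data:

```python
import numpy as np

# Synthetic per-pixel maps standing in for the SEM-EDX / CL data.
rng = np.random.default_rng(0)
n_pix = 10_000
eu = rng.uniform(0.0, 0.04, n_pix)   # local Eu fraction per pixel (0-4%)
tb = rng.uniform(0.0, 0.04, n_pix)   # local Tb fraction per pixel (0-4%)
product = eu * tb                    # [Eu]*[Tb], the correlation variable

# Assumed toy intensities that saturate around 3e-4, as observed in Fig. 3e.
ir = product / (product + 3e-4) + 0.02 * rng.standard_normal(n_pix)
red = 1.0 - product / (product + 3e-4) + 0.02 * rng.standard_normal(n_pix)

# Average the per-pixel intensities in bins of [Eu]*[Tb] along the abscissa.
bins = np.linspace(0.0, product.max(), 30)
idx = np.digitize(product, bins)
occupied = [i for i in range(1, len(bins)) if np.any(idx == i)]
ir_mean = np.array([ir[idx == i].mean() for i in occupied])
red_mean = np.array([red[idx == i].mean() for i in occupied])
# ir_mean rises and then plateaus with [Eu]*[Tb]; red_mean mirrors it.
```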
### Temperature dependence
Measurement of the PL intensities as a function of temperature, that is, a thermal quenching (TQ) experiment, can reveal more information about the equilibrium between the red and IR emissions. The result is shown in Fig. 4. The red Eu2+ emission follows rather standard behavior, which resembles the shape of a single-barrier model57,
$${I}_{{\rm{red}}}(T)=\frac{{I}_{0}}{1+A\exp \frac{-\Delta {E}_{T,{\rm{red}}}}{{k}_{{\rm{B}}}T}}.$$
(1)
Using this phenomenological model to fit the data yields a barrier height of ΔET,red = 1484 cm−1 (A = 1.07 × 103). This TQ performance is comparable to that of singly doped CaS:Eu0.01 phosphors, which contain sufficient Eu for concentration quenching to be noticeable29,58,59.
In contrast to the red emission, which shows the expected TQ behavior, the IR emission behaves in a more complicated way as a function of temperature, with an increase in intensity between 100 and 225 K. This indicates that the IR emission is to some extent thermally activated. However, a substantial fraction of the IR emission, ~77% of the maximal output at 225 K, is also emitted at low temperature. This is reminiscent of the temperature dependence of internal conversion (IC) and inter-system crossing (ISC) in molecular chromophores, where rate constants are typically written as the sum of a temperature-dependent and a temperature-independent term60,61. The IR TQ curve can hence be modeled by combining the ISC rate constant with a single-barrier model for the TQ behavior,
$${I}_{{\rm{IR}}}(T)=\frac{{I}_{1}+{I}_{2}\exp \frac{-\Delta {E}_{{\rm{ISC}}}}{{k}_{{\rm{B}}}T}}{1+A\exp \frac{-\Delta {E}_{T,{\rm{IR}}}}{{k}_{{\rm{B}}}T}},$$
(2)
where I1 and I2 represent the temperature-independent and temperature-dependent rate constants, respectively, and ΔEISC is the associated barrier height for the latter. Fitting Eq. (2) to the TQ profile of the IR emission yields ΔEISC = 476 cm−1 and ΔET,IR = 1500 cm−1 as barriers (I2/I1 = 9.5 and A = 1.61 × 103). The TQ at higher temperature is roughly the same as for the red emission, as indicated by the similar energy barriers for quenching.
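Equations (1) and (2) are straightforward to evaluate. The sketch below uses the fitted values quoted in the text, taking the rate-constant ratio with the temperature-dependent term dominating (I2/I1 = 9.5, an interpretation chosen here because it reproduces both the intensity maximum near 225 K and the ~77% low-temperature fraction):

```python
import numpy as np

KB = 0.695035  # Boltzmann constant in cm^-1 per K

# Empirical TQ models of Eqs. (1) and (2); T in K, energies in cm^-1.
def i_red(T, I0=1.0, A=1.07e3, dE_T=1484.0):          # Eq. (1), for reference
    return I0 / (1.0 + A * np.exp(-dE_T / (KB * np.asarray(T, float))))

def i_ir(T, I1=1.0, I2=9.5, dE_isc=476.0, A=1.61e3, dE_T=1500.0):  # Eq. (2)
    T = np.asarray(T, float)
    return (I1 + I2 * np.exp(-dE_isc / (KB * T))) / (1.0 + A * np.exp(-dE_T / (KB * T)))

T = np.linspace(80.0, 500.0, 2000)
ir = i_ir(T)
T_max = float(T[np.argmax(ir)])                 # thermally activated maximum
low_T_fraction = float(i_ir(80.0) / ir.max())   # fraction already emitted at 80 K
print(f"IR maximum near {T_max:.0f} K; {100 * low_T_fraction:.0f}% of it emitted at 80 K")
```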
The intensities of the red and IR emissions of the Eu,Tb-codoped phosphor decrease at lower temperatures than the red emission of singly Eu-doped CaS (see gray curve in Fig. 4), which features ΔET,red = 1989 cm−1 (A = 2.04 × 103, Eq. (1)). This indicates that the addition of Tb opens an additional non-radiative decay channel that is active around room temperature. At higher temperatures, the TQ curves of the singly Eu-doped and Eu,Tb-codoped phosphors coincide, reaching 50% of the initial intensity at T0.5 ≈ 375 K. This value agrees with prior studies58,59,62, even though values of 475 K have been reported for single crystals58, suggesting that efficiency gains are still feasible by optimizing the synthesis.
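The quoted T0.5 follows directly from the fitted single-barrier parameters of Eq. (1); a minimal check (kB in cm−1 per K):

```python
import math

# Half-quenching temperature implied by the single-barrier model of Eq. (1):
# I(T0.5) = I0/2 requires A * exp(-dE/(kB*T0.5)) = 1, hence
#   T0.5 = dE / (kB * ln A).
# Parameters are the fitted values quoted in the text.
KB = 0.695035  # Boltzmann constant, cm^-1 per K

def t_half(dE_cm1: float, A: float) -> float:
    return dE_cm1 / (KB * math.log(A))

t50_single = t_half(1989.0, 2.04e3)    # singly Eu-doped CaS: ~375 K
t50_codoped = t_half(1484.0, 1.07e3)   # codoped red emission quenches earlier
print(f"{t50_single:.0f} K vs {t50_codoped:.0f} K")
```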
As shown by a model calculation by Struck and Fonger and subsequent surveys of experimental literature by various authors, the physical meaning of the above-determined energy barriers is rather limited due to tunneling effects and the importance of the details of the electron-vibrational structure on the non-radiative transition probabilities63,64,65,66. For that reason, Eqs. (1) and (2) should be regarded as strictly empirical prescriptions.
Qualitative interpretation of the above analysis suggests that an excited Eu2+ ion in the Eu2+, Tb3+-codoped material has two radiative decay possibilities, the standard red luminescence, and the IR luminescence which is achieved after some kind of internal transition towards another energy level. If both emissions would originate from the same initial level and differ in final level, the temperature-induced intensification of the IR emission would not be expected.
### MMCT model
The experimental findings suggest that MMCT states might be involved in the complicated luminescence of this material. Given the oxidation states of the dopants in this compound, Eu2+ and Tb3+, the most probable scenario to investigate is that of the Eu3+–Tb2+ MMCT states, that is, those where an electron is transferred from Eu2+ to Tb3+. To this end, ab initio embedded cluster calculations are employed (see Fig. 5).
Diabatic potential energy surfaces and configurational diagrams of Eu2+-Tb3+ pairs are obtained from the results of independent embedded cluster calculations23,67. This approach has proven its reliability by explaining the anomalous emission of several Ce- and Yb-doped phosphors23,25,26,68, predicting the existence of absorption bands due to IVCT states in Eu-doped phosphors28 and by showing the role of MMCT states at quenching luminescent levels of Pr3+69.
The configurational coordinate diagrams along the breathing mode for Eu2+, Tb3+, Eu3+, and Tb2+ are the ingredients for the electron transfer diagrams that describe the Eu-to-Tb MMCT transitions according to the recipe in refs. 22,23. Calculations are performed for CaS and SrS hosts. Details, intermediate and final results are collected in Supplementary Tables 2–7 and in Supplementary Figs. 10–15.
The excited state landscape of Eu2+ was previously discussed in detail in ref. 31 and is practically defined by a single ground state level, 4f7(8S7/2), separated from a very dense 4f65d1 manifold by ~16,000–17,000 cm−1. Eu3+ and Tb3+ feature conjugate ground state configurations with 4f6(7F0) and 4f8(7F6) multiplets, respectively. The higher-lying 4f6(5D0) and 4f8(5D4) states are the main 4f–4f emitting levels. The 4fN − 15d1 configurations feature high excitation energies for these trivalent lanthanides and are hence unnecessary for the current calculation. For Tb2+, the 4f85d1 and 4f9 configurations are close in energy, the former constituting the ground state in CaS (1Γ7g), while a reversed order is found for SrS (1Γ7u). More details about the electronic structures of Tb3+ and Tb2+ in CaS and SrS are given in the Supplementary Discussion, in particular regarding the relative energy of the 4f85d1 and 4f9 manifolds of Tb2+. The equilibrium Eu–S and Tb–S bond lengths and breathing mode vibrational frequencies are summarized in Table 1. Prior comparisons with experimental results for Eu2+ and Eu3+ proved an excellent quantitative agreement of <300 cm−1 for excitation energies and at most a few % for equilibrium bond lengths and vibrational frequencies31.
Within the diabatic approximation, the energy level scheme of a lanthanide pair can be constructed by combining all the levels of the individual ions; the resulting energy is the sum of the individual energies and the Coulomb and exchange energies between the two lanthanides. The last two contributions are assumed to be state independent67. The resulting energy levels are uniquely labeled by combining both individual labels. As an example, the ground state of a Eu2+–Tb3+ pair is denoted as 4f7(8S7/2)–4f8(1A1g). Diabatic potential energy curves have proven their use by successfully explaining qualitative trends of CT processes upon chemical substitutions in host compounds22,23,28,43.
For every energy level of the Eu–Tb pair, a two-dimensional potential energy surface is obtained, spanned by the breathing modes of the EuS6 and TbS6 moieties. Every point in this two-dimensional space hence corresponds to a unique pair (dTb−S, dEu−S). The equilibrium point of the Eu2+–Tb3+ states corresponds to a relatively large dEu−S value (divalent ion) and a relatively small dTb−S value (trivalent ion). The Eu2+–Tb3+ 4f65d1(1Γ8g)–4f8(1A1g) potential energy surface is represented in the top panels of Fig. 6 for CaS and SrS (green contours), along with the lowest MMCT level, Eu3+–Tb2+ 4f6(1A1g)–4f85d1(1Γ7g) (black contours), for which dEu−S has decreased (trivalent ion) and dTb−S has increased (divalent ion). The intersection of both potential energy surfaces is a curved line (dashed red line in the contour plots of Fig. 6).
The lower panels of Fig. 6 display the configurational coordinate diagrams for Eu2+–Tb3+ pairs in both host compounds along the electron transfer reaction coordinate, Qet. This coordinate is shown in the contour plots and is defined here as the piecewise straight line that connects the equilibria of the Eu2+–Tb3+4f65d1(1Γ8g)−4f8(1A1g) and Eu3+–Tb2+4f6(1A1g)−4f85d1(1Γ7g) potential energy surfaces in the two-dimensional (dEu−SdTb−S) configurational space with their saddle point, that is, the minimum of the intersection of both surfaces. This one-dimensional diagram is a simplification of the two-dimensional space that is probed for convenient visualization. Reported data such as location of minima, crossing points, barriers, and transition energies are however obtained from the two-dimensional surfaces.
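The saddle-point construction just described, the minimum of the intersection seam of two diabatic surfaces, can be sketched numerically as a constrained minimization. All numerical values below (force constant, minima, vertical offset) are illustrative placeholders chosen for a well-conditioned example, with the MMCT minimum a few thousand cm−1 above the 5d minimum as in the text; they are not the computed values of Table 1:

```python
import numpy as np
from scipy.optimize import minimize

# Two equal-curvature diabatic paraboloids in the (d_Eu-S, d_Tb-S) plane.
K = 2.0e5                            # cm^-1 per Angstrom^2, placeholder
EXC_MIN = np.array([2.95, 2.72])     # Eu2+-Tb3+ 4f6 5d1 minimum (placeholder)
MMCT_MIN = np.array([2.73, 2.94])    # Eu3+-Tb2+ minimum: swapped valences
DE = 3.5e3                           # MMCT minimum above the 5d minimum, cm^-1

def e_exc(q):
    return 0.5 * K * np.sum((q - EXC_MIN) ** 2)

def e_mmct(q):
    return DE + 0.5 * K * np.sum((q - MMCT_MIN) ** 2)

# Saddle point = minimum of e_exc on the intersection seam e_exc == e_mmct.
res = minimize(e_exc, x0=0.5 * (EXC_MIN + MMCT_MIN), method="SLSQP",
               constraints=[{"type": "eq", "fun": lambda q: e_exc(q) - e_mmct(q)}])
saddle = res.x
barrier = e_exc(saddle) - e_exc(EXC_MIN)  # diabatic barrier from the 5d minimum
```

With equal curvatures the seam is a straight line and the problem is convex, so the solver recovers the analytic answer; the real surfaces differ in curvature, which changes the seam shape but not the procedure.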
The configurational coordinate diagrams show that the Eu3+–Tb2+ MMCT configuration gives rise to low-lying states, around 20,000 cm−1 above the ground state (Table 1). The presence of these MMCT states alters the excited state dynamics after 4f7 → 4f65d1 excitation of the Eu2+ ion. The excited electron can be non-radiatively transferred to the Tb3+ ion, forming a transient Eu3+–Tb2+ pair. Shortly thereafter, the Tb2+ 5d electron is transferred back to the Eu3+ 4f orbital, leading to a decay of the MMCT state.
Different decay channels are found for Eu,Tb-codoped CaS and SrS, caused by their structural differences. In case of CaS, the minimum of the lowest MMCT state (4f6(1A1g)−4f85d1(1Γ7g)) is metastable. Therefore, radiative decay of this state can be expected towards the structurally stressed 4f7(8S7/2)−4f8(6FJ) ground state (red arrows in Fig. 6c). In SrS, the minimum of the lowest MMCT state is crossed by the branches of the stressed 4f7(8S7/2)−4f8(6FJ) states, enabling efficient non-radiative decay through fast bottom crossover (red arrows in Fig. 6d) that impedes any radiative decay. The associated diabatic energy barrier is 94 cm−1 and will likely disappear or be of negligible size in an adiabatic calculation. This is indeed the behavior that is experimentally found for Eu,Tb-codoped CaS and SrS.
Due to the large horizontal offset between ground and MMCT states, the resulting emission band in case of CaS:Eu,Tb is expected to be broad. The MMCT emission is predicted to start around 11,000 cm−1 (900 nm), which is at slightly lower energy than the experimental IR emission, which starts ~14,000–15,000 cm−1. This quantitative discrepancy between experimental and computed transition energies is in line with what can be expected from the diabatic approximation23,67.
The low-lying MMCT states not only cause an additional emission band. Vertical excitation from the ground state towards the MMCT states is possible starting from about 28,000 cm−1 (360 nm), which coincides with the energy range where the low-spin $$4{f}^{6}5d{t}_{{\rm{2}}g}^{1}$$ levels are found. The latter are spectroscopically invisible by direct excitation from the 4f7(8S7/2) ground state because of the spin selection rule31. The presence of the Eu–Tb pairs and the MMCT states induces transition probability in this otherwise forbidden energy region, as evidenced by the excitation band ~370 nm in the experimental spectrum (Fig. 2). An estimated spectral shape is highlighted as a guide to the eye. This MMCT absorption is also visible in the excitation spectrum of the regular Eu2+ red emission at 650 nm for the CaS:Eu0.01,Tb0.01 sample, indicating that a substantial fraction of the dopants can already interact in the studied concentration range.
The PL excitation spectrum of the MMCT emission (dashed line in Fig. 2) indicates that it can be excited at the same wavelengths as the regular Eu2+ emission, even when the photon energy of the excitation light is insufficient to reach an MMCT branch by vertical excitation from the ground state. This can be explained by considering the double-well shape of the potential energy surface that is formed by the Eu2+(1Γ8g)–Tb3+(1A1g) red-emitting state and the Eu3+(1A1g)–Tb2+(1Γ7g) IR-emitting MMCT state. Even at low temperature, when the barrier cannot be thermally overcome, quantum mechanical tunneling will partially populate the MMCT state. This not only explains why both emissions always appear together, but also why the IR emission is intensified when sufficient thermal energy is available to overcome the barrier and why the luminescent lifetimes for both emission bands are comparable.
The broadband IR MMCT emission is now applied to construct an IR pc-LED that can directly be used for the numerous above-mentioned spectroscopic applications. Because of the high absorption strength of the parity-allowed 4f−5d transitions of Eu2+, higher external quantum efficiencies can be achieved than with the current state-of-the-art, Cr3+-based phosphors4,5,6,8,9,10. Furthermore, the extremely broad MMCT emission extends the covered spectral range by several hundreds of nanometers to the IR compared to red/near-IR single Eu2+5d−4f emission11.
Figure 7 displays the spectrum, expressed in mW nm−1, of the obtained LED, where a blue 450 nm pumping LED was used. The LED emission features a width of 430 nm in the IR, ranging from 620 to 1050 nm. The operation of the IR LED is illustrated by the pictures in Fig. 7b–e. Here, a long-pass filter with a 780 nm cut-off is used to filter out the transmitted blue pumping light in order to appreciate the brightness of the IR emission. The pictures were taken with a camera that is also sensitive to near-IR light, as the emission is barely visible to the naked eye. The total radiant flux of the IR part of the emission amounts to 38 mW. This value surpasses the current Cr3+-based state of the art, with IR radiant fluxes in the range of 20–25 mW4,5,6,8,9,10, by roughly 50%.
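The quoted radiant flux is the integral of the spectral radiant flux over the IR range. In the sketch below, the Gaussian spectrum is a stand-in for the measured curve of Fig. 7, scaled so the IR integral lands near 38 mW; it is not the measured data:

```python
import numpy as np

# Integrate a spectral radiant flux (mW nm^-1) over the 620-1050 nm IR range.
wl = np.linspace(400.0, 1100.0, 1401)  # wavelength grid, nm (0.5 nm spacing)
spectral_flux = 0.16 * np.exp(-0.5 * ((wl - 810.0) / 95.0) ** 2)  # mW/nm, stand-in

mask = (wl >= 620.0) & (wl <= 1050.0)
x, y = wl[mask], spectral_flux[mask]
ir_flux = float(np.sum(0.5 * (y[1:] + y[:-1]) * np.diff(x)))  # trapezoidal rule, mW
print(f"IR radiant flux: {ir_flux:.1f} mW")
```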
It is clear that this broadband IR MMCT emission has a huge application potential. The next step comprises further optimization of its luminescence to arrive at higher efficiencies and to enable some spectral tuning to optimally meet the requirements of the different applications. As the luminescence mechanism in CaS:Eu,Tb was resolved in detail by our ab initio calculations, some prospects and further insights can be given based on these findings.
As shown, SrS is not a suitable host to obtain a luminescent MMCT state. There are two main differences between CaS and SrS that are crucial in this regard. First, the vibrational frequency is smaller by a few tens of cm−1 in case of SrS (242 cm−1, compared to 292 cm−1 for CaS, see Table 1), which causes a slight opening of the branches that cross the MMCT state. This effect is small because the vibrational frequencies of CaS and SrS differ only modestly. Second, the offset in equilibrium geometry between the ground and MMCT states is larger for SrS (Qet = 0.548 Å, compared to 0.428 Å for CaS). This is the direct consequence of the larger bond length difference between divalent and trivalent lanthanides with respect to CaS (see Table 1) and causes the MMCT minimum to be pushed into the 4f7(8S7/2)−4f8(6FJ) branches, quenching the MMCT state. The latter parameter is the dominant one for the different behavior of CaS:Eu,Tb and SrS:Eu,Tb.
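The two effects can be condensed into a one-dimensional sketch. The Qet offsets below are taken from the text, while the force constant and vertical offset are illustrative placeholders (only the frequency ratio between the hosts is physical), so the numbers demonstrate the mechanism rather than reproduce the computed surfaces:

```python
# 1D two-parabola version of the argument above: with equal curvatures,
#   E_exc(Q) = (k/2) Q^2   and   E_mmct(Q) = E0 + (k/2) (Q - Q0)^2
# cross at Q_x = (E0 + k Q0^2 / 2) / (k Q0). The crossing lies beyond the
# MMCT minimum (radiative decay possible) only if E0 > k Q0^2 / 2.
# Q0 values are the Qet offsets from the text; k and E0 are placeholders,
# with k scaled between hosts by the squared frequency ratio (242/292)^2.

E0 = 20_000.0                   # cm^-1, order of magnitude from the text
K_CAS = 2.0e5                   # cm^-1 per Angstrom^2, placeholder
hosts = {"CaS": (K_CAS, 0.428), "SrS": (K_CAS * (242 / 292) ** 2, 0.548)}

emissive = {}
for host, (k, q0) in hosts.items():
    qx = (E0 + 0.5 * k * q0 ** 2) / (k * q0)
    emissive[host] = qx > q0    # True: MMCT minimum survives -> IR emission
    print(host, "emissive" if emissive[host] else "quenched")
```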
The above analysis can be expanded to devise guidelines for finding other phosphors that exhibit MMCT luminescence. To achieve this, the ground state potential energy surface should cross the emitting MMCT level at a sufficiently large Qet value. For this, small curvatures are required, which translates into small vibrational frequencies, a property that is not only typical for sulfide hosts30,70, but also for selenides30, nitrides71, chlorides72, bromides72, and iodides73. Additionally, the crossing between the ground state and the MMCT level can be shifted away from the MMCT minimum by decreasing the horizontal offset between both parabolas, that is, by decreasing the equilibrium Qet value. This value is proportional to the change in lanthanide–ligand bond length upon CT and to the square root of the coordination number of the lanthanide67. MMCT luminescence will hence be more probable in hosts with a small site for the lanthanide as these experience smaller bond length changes. Ca-based hosts, preferably with a low coordination number, are hence more desirable than Sr- or Ba-based hosts.
When the chemical composition of host compounds is engineered to accomplish MMCT luminescence, the vertical offset, and hence the MMCT emission energy, is also expected to be affected. Indeed, the vertical offset is given by the difference between the ionization potential (IP) of the donor and the electron affinity (EA) of the acceptor, supplemented with the Coulomb and exchange interactions between both ions67, and these parameters are host dependent. A more radical manipulation of the vertical offset can be achieved by substituting the lanthanide ions. A rough idea of the effect of host modification and lanthanide substitution on the vertical offset can be obtained by consulting the extensive empirical data on IPs and EAs of lanthanide ions and their systematic behavior74,75.
In summary, the presence or absence of MMCT luminescence, and the emission energy is affected by three parameters: the local vibrational frequency, bond length change, and the selected lanthanide pair. A perfect balance between these parameters is required to achieve MMCT luminescence as in the case of CaS:Eu2+,Tb3+.
In this combined experimental–theoretical study, broadband IR emission in CaS:Eu2+,Tb3+ is reported, characterized, and explained. The emission spectrum overlaps with the regular red Eu2+5d−4f luminescence and ranges up to 1200 nm. Importantly, it can be efficiently pumped with long-wavelength visible light.
Concentration-dependent and microscopy experiments showed that the IR emission is caused by a cooperative effect between Eu and Tb. Ab initio multiconfigurational calculations support that the IR-emitting state is a Eu3+–Tb2+ MMCT state whose local structure differs significantly from the Eu2+–Tb3+ ground state.
The type of host has a critical influence on the properties of the MMCT luminescence, as shown by the fact that the IR emission is quenched in the similar compound SrS. This behavior was explained by the ab initio calculations, which show that the location of the MMCT states, and hence their luminescence properties, can be fine-tuned by tweaking a few parameters: the local vibrational frequencies and the structural rearrangement upon MMCT, as well as the IPs and electron affinities of the dopants. These are experimentally accessible by altering the anions and cations in the host, or the lanthanide pair.
The CaS:Eu,Tb phosphor was used to construct a broadband IR pc-LED for spectroscopic applications in smart electronics, food safety, and medicine. The IR emission of the LED covers a 430-nm-wide spectral range in the red and near-IR. Moreover, this is achieved with an IR output radiant flux of 38 mW, surpassing current state of the art.
## Methods
### Experimental method
Powders of CaS and SrS, doped with Eu and/or Tb, were prepared by a solid-state synthesis, using high-purity CaS (Alfa Aesar, 99.9%), SrS (Alfa Aesar, 99.9%), EuF3 (Alfa Aesar, 99.95%), and TbF3 (Alfa Aesar, 99.9%) as precursors. Stoichiometric quantities were weighed, mixed, and subsequently heat treated in a tube furnace for 2 h at 1000 °C under a constant flow of H2S gas. After the heat treatment, the samples were allowed to cool naturally. Finally, the samples were lightly ground and stored in an inert atmosphere. All samples were phase pure, as verified by powder XRD (see Supplementary Fig. 1). Reported doping concentrations are molar concentrations with respect to the cation; for example, CaS:Eu0.010Tb0.001 is used for Ca0.989Eu0.010Tb0.001S.
PL emission and excitation spectra were measured on an Edinburgh FS920, using a 450 W xenon arc lamp as excitation source and equipped with a Hamamatsu R928P red-sensitive photomultiplier (wavelength range from 200 to 850 nm) and a Ge IR detector (700–1600 nm). Temperature-dependent PL was measured with the same spectrometer, equipped with a cryostat (Oxford Instruments Optistat CF).
The microscopy results were obtained with a Hitachi S-3400N SEM. A Thermo Scientific Noran System 7 EDX was used for chemical analysis and an optical fiber to collect the CL, which was subsequently analyzed by an Acton SP2300 monochromator and detected by a ProEM 1600 EMCCD (both Princeton Instruments). All shown spectra were properly calibrated for the spectral sensitivity of the various detectors.
An IR pc-LED was constructed using a blue pumping LED. For this a Xicato XTM LED module was used, operated at a constant current of 130 mA, corresponding to a voltage of 16.7 V. Its spectral radiant flux was obtained using a Thorlabs S401C thermal power meter.
### Computational method
Diabatic potential energy surfaces and derived configurational coordinate diagrams were calculated for metal-to-metal electron transfer states of Eu2+/Tb3+ mixed valence pairs in CaS and SrS, using the results of independent embedded cluster calculations as proposed in refs. 22,23.
The electronic structures of the electron donor (Eu) and acceptor (Tb) octahedral embedded clusters (LnS6M6)2+ and (LnS6M6)3+ (M = Ca, Sr) were obtained with the suite of programs MOLCAS,76 using D2h symmetry, in two-step spin–orbit coupling state-average restricted-active-space self-consistent-field (SA-RASSCF)/multi-state second-order perturbation theory (MS-RASPT2)/restricted-active-space state-interaction spin–orbit (RASSI-SO) DKH calculations. In a first step, the spin–orbit-free many-electron relativistic second-order DKH Hamiltonian77,78 was used to perform all-electron calculations using the same type of basis sets as in ref. 31: Gaussian atomic natural orbital relativistic basis sets ANO-RCC for S79, Eu, and Tb80, with respective contractions (17s12p5d)/[6s5p3d] (quadruple-zeta with polarization without f-functions quality) and (25s22p15d11f4g2h)/[9s8p5d4f3g2h] (quadruple-zeta with polarization quality). In addition, the six-electron valence was explicitly added to the six alkaline earth metal ions next to the sulfur ligands in the [100] directions using adapted ANO-RCC basis sets, Ca(20s16p6d)/[3s4p1d] and Sr(23s19p12d)/[3s4p1d]. The inner shells of these ions were frozen, using a [Mg] and [Zn] core for Ca and Sr, respectively.
First, SA-RASSCF81,82,83 calculations were performed, allowing all possible occupations in the Ln 4f shells and up to four electrons in the Ln 5d, 6s, and 5f shells, in order to account for the so-called double-shell effect84. The following states were obtained: those of the 4f6 and 4f8 configurations of Eu3+ and Tb3+ for which 2S + 1 = 7, 5; those of the 4f7, 4f65d1, and 4f85d1 configurations of Eu2+ and Tb2+ for which 2S + 1 = 8, 6; and those of the Tb2+ 4f9 configuration for which 2S + 1 = 6. States with lower multiplicities were not considered because they are not expected to influence the lowest part of the energy spectrum that is of interest. Subsequently, MS-RASPT285,86,87,88 calculations were used to correlate all cluster valence electrons, except the 4d electrons of the lanthanides. A standard IPEA value (0.25 a.u.)89 and an imaginary shift of 0.15 a.u. (Eu) or 0.50 a.u. (Tb) were used. Second, the AMFI approximation of the DKH spin–orbit coupling operator was added to the Hamiltonian90 and RASSI-SO91,92 calculations were performed. Here, all states of a given cluster computed in the first step were allowed to interact.
In all calculations, the clusters were embedded in ab initio model potentials (AIMPs)93 that include Coulomb, exchange, and Pauli repulsion interactions from the CaS and SrS host lattices obtained in ref. 31 from self-consistent embedded ions94 Hartree–Fock calculations76. Figure 5 shows how the small cluster is embedded in AIMPs and point charges.
# Second order linear ODE with mixed boundary condition
Consider the following second order linear ODE with mixed boundary condition: $$\frac{d^2f}{dt^2}+a(t)f(t)=0,~\frac{df}{dt}(0)=u,~f(1)=0,$$ where $u\in R$ and $a\in C[0,1]$ are fixed.
Is the solution to this equation unique? If so, how to prove it? Thanks!
Before someone tells you some answers, it is rather instructive to work out examples yourself. If you let $a$ be a constant (the sign matters!), you can find the general solution to the ODE (without the boundary conditions) and figure out when the boundary conditions can be met or not. After you've done that, look up "Sturm-Liouville theory" for self-adjoint second-order ODE's. – Deane Yang May 6 '11 at 21:15
I don't think this is quite MO-level, though I'm sure there could be a very interesting discussion to be had around your question. My advice would be to assume that the solution is not unique in general, because boundary conditions seldom lead to uniqueness. So I would look for a counterexample. A much trickier question will be: are there easy conditions to check for which we do have uniqueness. This might be what Deane has in mind, but I can't be sure. – Thierry Zell May 7 '11 at 1:19
Generically, both existence and uniqueness will be satisfied (the solution space is 2-dimensional, and there are two constraints on it). However, if $u\ne 0$, a solution may fail to exist at all if every solution with $f(1)=0$ automatically satisfies $df/dt(0)=0$. On the other hand, if $u=0$, existence is trivial ($f(t)=0$), but uniqueness is not guaranteed. – Igor Khavkine May 7 '11 at 8:56
Each of the above cases can be explored with $a$ constant, as per Deane's suggestion. For arbitrary $u$ and $a(t)$, it is a non-trivial problem to figure out which of the cases you are in, and has to be tackled separately for each case (using for instance exact solutions, qualitative estimates, or numerics). BTW, a nice modern reference is Zettl, Sturm-Liouville Theory. amazon.com/dp/0821839055 – Igor Khavkine May 7 '11 at 8:59
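For the constant-coefficient case suggested in the comments, the case analysis can be carried out in closed form. A sketch for $a = k^2 > 0$, so the general solution is $f = A\cos kt + B\sin kt$ (the function name and tolerance are my own):

```python
import math

def solve_mixed_bvp_const(k, u, tol=1e-12):
    """Solve f'' + k^2 f = 0, f'(0)=u, f(1)=0 with f = A cos(kt) + B sin(kt).
    Returns (A, B) if unique, 'none' if no solution, 'many' if non-unique."""
    B = u / k                      # from f'(0) = B*k = u
    if abs(math.cos(k)) > tol:     # f(1) = A cos k + B sin k = 0 solvable for A
        return (-B * math.tan(k), B)
    # cos k = 0: f(1) = B sin k with sin k = +-1, so we need B = 0, i.e. u = 0
    return 'many' if abs(u) < tol else 'none'

A, B = solve_mixed_bvp_const(1.0, 1.0)
f1 = A * math.cos(1.0) + B * math.sin(1.0)
print(abs(f1) < 1e-12)                           # True: f(1)=0 holds
print(solve_mixed_bvp_const(math.pi / 2, 1.0))   # no solution
print(solve_mixed_bvp_const(math.pi / 2, 0.0))   # infinitely many
```

This reproduces the three cases described above: generic existence and uniqueness, non-existence for $u\ne 0$ when $\cos k = 0$, and non-uniqueness for $u=0$ in that same degenerate case.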
I would like to thank all of you for your thinking on my somewhat stupid question. For the case when $a(t)$ is a constant function, it is easy to discuss and, as Deane said to me, for the general case Sturm–Liouville theory can help. Well, if you do think this question is not suitable for MO, we could delete it. Thanks! – ProbLe May 8 '11 at 9:39
I'm not sure whether this is really appropriate for MathOverflow or not. Still, let me say a little more: As I've mentioned above, you can work out completely the case where $a$ is constant using explicit solutions. If $a$ is not constant, I'm not aware of a definitive answer but you can get separate necessary conditions and sufficient conditions, involving upper or lower bounds on $a$ using the Sturm comparison theorem. Last, I believe that it is possible to find an integral condition on $a$ that is sufficient for there to be a unique solution to the boundary value problem.
Deane, thank you very much for all your thinking on my question, your answer is quite instructive! If you find this is not an MO question, I will try to delete it. Before turning to MO for help, I had already considered the case when $a(t)$ is a constant, which corresponds to my originally conceived application to locally symmetric Riemannian manifolds, and it seems that everything goes well thanks to the invariance of the sectional curvature by parallel transport along a geodesic. But for a general Riemannian manifold, $a(t)$ is not constant and I was confused... – ProbLe May 8 '11 at 9:57
Thanks for the two references. It seems that, generally, the geodesics are studied by considering the Jacobi fields with initial conditions on $J(0),J^{\prime}(0)$ or boundary conditions on $J(0), J(1)$. But the mixed boundary conditions on $J(0),J^{\prime}(1)$ are much less used. It is easily seen that $J(0)=J^{\prime}(1)=0$ does not imply that $J$ vanishes identically even if there are no cut points on the geodesic. I think that the mixed boundary condition is related to the degeneracy of the Hessian (restricted to the orthogonal subspace of the geodesic direction) of the distance function. – ProbLe May 8 '11 at 16:07
https://math.libretexts.org/Courses/Monroe_Community_College/MTH_104_Intermediate_Algebra/4%3A_Systems_of_Linear_Equations/4.3%3A_Solve_Applications_with_Systems_of_Equations/4.3E%3A_Exercises |
# 4.2E: Exercises
## Practice Makes Perfect
Direct Translation Applications
In the following exercises, translate to a system of equations and solve.
1. The sum of two numbers is 15. One number is 3 less than the other. Find the numbers.
2. The sum of two numbers is 30. One number is 4 less than the other. Find the numbers.
13 and 17
3. The sum of two numbers is −16. One number is 20 less than the other. Find the numbers.
4. The sum of two numbers is $$−26$$. One number is 12 less than the other. Find the numbers.
$$−7$$ and $$−19$$
5. The sum of two numbers is 65. Their difference is 25. Find the numbers.
6. The sum of two numbers is 37. Their difference is 9. Find the numbers.
$$14$$ and $$23$$
7. The sum of two numbers is $$−27$$. Their difference is $$−59$$. Find the numbers.
8. The sum of two numbers is $$−45$$. Their difference is $$−89$$. Find the numbers.
$$22$$ and $$−67$$
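The sum/difference exercises above all reduce to the same 2×2 system: adding the two equations gives one number, subtracting gives the other. A quick check against two of the stated answers (the function name is mine):

```python
def two_numbers(total, difference):
    """Solve x + y = total, x - y = difference by adding/subtracting equations."""
    x = (total + difference) / 2
    y = (total - difference) / 2
    return x, y

print(two_numbers(37, 9))    # exercise 6: (23.0, 14.0)
print(two_numbers(-45, 89))  # exercise 8, taking x - y = 89: (22.0, -67.0)
```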
9. Maxim has been offered positions by two car companies. The first company pays a salary of $10,000 plus a commission of $1,000 for each car sold. The second pays a salary of $20,000 plus a commission of $500 for each car sold. How many cars would need to be sold to make the total pay the same?
10. Jackie has been offered positions by two cable companies. The first company pays a salary of $14,000 plus a commission of $100 for each cable package sold. The second pays a salary of $20,000 plus a commission of $25 for each cable package sold. How many cable packages would need to be sold to make the total pay the same?
Eighty cable packages would need to be sold to make the total pay the same.
11. Amara currently sells televisions for company A at a salary of $17,000 plus a $100 commission for each television she sells. Company B offers her a position with a salary of $29,000 plus a $20 commission for each television she sells. How many televisions would Amara need to sell for the options to be equal?
12. Mitchell currently sells stoves for company A at a salary of $12,000 plus a $150 commission for each stove he sells. Company B offers him a position with a salary of $24,000 plus a $50 commission for each stove he sells. How many stoves would Mitchell need to sell for the options to be equal?
Mitchell would need to sell 120 stoves for the companies to be equal.
13. Two containers of gasoline hold a total of fifty gallons. The big container can hold ten gallons less than twice the small container. How many gallons does each container hold?
14. June needs 48 gallons of punch for a party and has two different coolers to carry it in. The bigger cooler is five times as large as the smaller cooler. How many gallons can each cooler hold?
8 and 40 gallons
15. Shelly spent 10 minutes jogging and 20 minutes cycling and burned 300 calories. The next day, Shelly swapped times, doing 20 minutes of jogging and 10 minutes of cycling and burned the same number of calories. How many calories were burned for each minute of jogging and how many for each minute of cycling?
16. Drew burned 1800 calories Friday playing one hour of basketball and canoeing for two hours. Saturday he spent two hours playing basketball and three hours canoeing and burned 3200 calories. How many calories did he burn per hour when playing basketball? How many calories did he burn per hour when canoeing?
1000 calories playing basketball and 400 calories canoeing
17. Troy and Lisa were shopping for school supplies. Each purchased different quantities of the same notebook and thumb drive. Troy bought four notebooks and five thumb drives for $116. Lisa bought two notebooks and three thumb drives for $68. Find the cost of each notebook and each thumb drive.
18. Nancy bought seven pounds of oranges and three pounds of bananas for $17. Her husband later bought three pounds of oranges and six pounds of bananas for $12. What was the cost per pound of the oranges and the bananas?
Oranges cost $2 per pound and bananas cost $1 per pound.
19. Andrea is buying some new shirts and sweaters. She is able to buy 3 shirts and 2 sweaters for $114 or she is able to buy 2 shirts and 4 sweaters for $164. How much does a shirt cost? How much does a sweater cost?
20. Peter is buying office supplies. He is able to buy 3 packages of paper and 4 staplers for $40 or he is able to buy 5 packages of paper and 6 staplers for $62. How much does a package of paper cost? How much does a stapler cost?
Package of paper $4, stapler $7
21. The total amount of sodium in 2 hot dogs and 3 cups of cottage cheese is 4720 mg. The total amount of sodium in 5 hot dogs and 2 cups of cottage cheese is 6300 mg. How much sodium is in a hot dog? How much sodium is in a cup of cottage cheese?
22. The total number of calories in 2 hot dogs and 3 cups of cottage cheese is 960 calories. The total number of calories in 5 hot dogs and 2 cups of cottage cheese is 1190 calories. How many calories are in a hot dog? How many calories are in a cup of cottage cheese?
Hot dog 150 calories, cup of cottage cheese 220 calories
23. Molly is making strawberry infused water. For each ounce of strawberry juice, she uses three times as many ounces of water as juice. How many ounces of strawberry juice and how many ounces of water does she need to make 64 ounces of strawberry infused water?
24. Owen is making lemonade from concentrate. The number of quarts of water he needs is 4 times the number of quarts of concentrate. How many quarts of water and how many quarts of concentrate does Owen need to make 100 quarts of lemonade?
Owen will need 80 quarts of water and 20 quarts of concentrate to make 100 quarts of lemonade.
Solve Geometry Applications
In the following exercises, translate to a system of equations and solve.
25. The difference of two complementary angles is 55 degrees. Find the measures of the angles.
26. The difference of two complementary angles is 17 degrees. Find the measures of the angles.
$$53.5$$ degrees and $$36.5$$ degree.
27. Two angles are complementary. The measure of the larger angle is twelve less than twice the measure of the smaller angle. Find the measures of both angles.
28. Two angles are complementary. The measure of the larger angle is ten more than four times the measure of the smaller angle. Find the measures of both angles.
16 degrees and 74 degrees
29. The difference of two supplementary angles is 8 degrees. Find the measures of the angles.
30. The difference of two supplementary angles is 88 degrees. Find the measures of the angles.
134 degrees and 46 degrees
31. Two angles are supplementary. The measure of the larger angle is four more than three times the measure of the smaller angle. Find the measures of both angles.
32. Two angles are supplementary. The measure of the larger angle is five less than four times the measure of the smaller angle. Find the measures of both angles.
37 degrees and 143 degrees
33. The measure of one of the small angles of a right triangle is 14 more than 3 times the measure of the other small angle. Find the measure of both angles.
34. The measure of one of the small angles of a right triangle is 26 more than 3 times the measure of the other small angle. Find the measure of both angles.
$$16°$$ and $$74°$$
35. The measure of one of the small angles of a right triangle is 15 less than twice the measure of the other small angle. Find the measure of both angles.
36. The measure of one of the small angles of a right triangle is 45 less than twice the measure of the other small angle. Find the measure of both angles.
$$45°$$ and $$45°$$
37. Wayne is hanging a string of lights 45 feet long around the three sides of his patio, which is adjacent to his house. The length of his patio, the side along the house, is five feet longer than twice its width. Find the length and width of the patio.
38. Darrin is hanging 200 feet of Christmas garland on the three sides of fencing that enclose his front yard. The length is five feet less than three times the width. Find the length and width of the fencing.
Width is 41 feet and length is 118 feet.
39. A frame around a family portrait has a perimeter of 90 inches. The length is fifteen less than twice the width. Find the length and width of the frame.
40. The perimeter of a toddler play area is 100 feet. The length is ten more than three times the width. Find the length and width of the play area.
Width is 10 feet and length is 40 feet.
Solve Uniform Motion Applications
In the following exercises, translate to a system of equations and solve.
41. Sarah left Minneapolis heading east on the interstate at a speed of 60 mph. Her sister followed her on the same route, leaving two hours later and driving at a rate of 70 mph. How long will it take for Sarah’s sister to catch up to Sarah?
42. College roommates John and David were driving home to the same town for the holidays. John drove 55 mph, and David, who left an hour later, drove 60 mph. How long will it take for David to catch up to John?
11 hours
43. At the end of spring break, Lucy left the beach and drove back towards home, driving at a rate of 40 mph. Lucy’s friend left the beach for home 30 minutes (half an hour) later, and drove 50 mph. How long did it take Lucy’s friend to catch up to Lucy?
44. Felecia left her home to visit her daughter driving 45 mph. Her husband waited for the dog sitter to arrive and left home twenty minutes (1/3 hour) later. He drove 55 mph to catch up to Felecia. How long before he reaches her?
$$1.5$$ hour
45. The Jones family took a 12-mile canoe ride down the Indian River in two hours. After lunch, the return trip back up the river took three hours. Find the rate of the canoe in still water and the rate of the current.
46. A motor boat travels 60 miles down a river in three hours but takes five hours to return upstream. Find the rate of the boat in still water and the rate of the current.
Boat rate is 16 mph and current rate is 4 mph.
47. A motor boat traveled 18 miles down a river in two hours but going back upstream, it took 4.5 hours due to the current. Find the rate of the motor boat in still water and the rate of the current. (Round to the nearest hundredth.)
48. A river cruise boat sailed 80 miles down the Mississippi River for four hours. It took five hours to return. Find the rate of the cruise boat in still water and the rate of the current.
Boat rate is 18 mph and current rate is 2 mph.
49. A small jet can fly 1072 miles in 4 hours with a tailwind but only 848 miles in 4 hours into a headwind. Find the speed of the jet in still air and the speed of the wind.
50. A small jet can fly 1,435 miles in 5 hours with a tailwind but only 1,215 miles in 5 hours into a headwind. Find the speed of the jet in still air and the speed of the wind.
Jet rate is 265 mph and wind speed is 22 mph.
51. A commercial jet can fly 868 miles in 2 hours with a tailwind but only 792 miles in 2 hours into a headwind. Find the speed of the jet in still air and the speed of the wind.
52. A commercial jet can fly 1,320 miles in 3 hours with a tailwind but only 1,170 miles in 3 hours into a headwind. Find the speed of the jet in still air and the speed of the wind.
Jet rate is 415 mph and wind speed is 25 mph.
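All the current and wind exercises above share one pattern: the still-water (or still-air) rate is the average of the downstream and upstream speeds, and the current (or wind) is half their difference. A sketch checking two of the stated answers (the function name is mine):

```python
def rate_and_current(dist_down, t_down, dist_up, t_up):
    """Solve b + c = dist_down/t_down, b - c = dist_up/t_up."""
    down, up = dist_down / t_down, dist_up / t_up
    return (down + up) / 2, (down - up) / 2

print(rate_and_current(60, 3, 60, 5))      # exercise 46: (16.0, 4.0)
print(rate_and_current(1320, 3, 1170, 3))  # exercise 52: (415.0, 25.0)
```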
## Writing Exercises
53. Write an application problem similar to Example. Then translate to a system of equations and solve it.
54. Write a uniform motion problem similar to Example that relates to where you live with your friends or family members. Then translate to a system of equations and solve it.
http://crypto.stackexchange.com/tags/block-cipher/hot?filter=all | # Tag Info
43
There are a variety of reasons why AES is more widely used: AES is a standard. AES has been vetted by cryptanalysts more extensively than Camellia. As a result, we can have greater confidence in the security of AES than in Camellia. Therefore, on the merits, there may be good reasons to choose AES over Camellia. AES is a government standard (FIPS). ...
42
The initial and final permutation have no influence on security (they are unkeyed and can be undone by anybody). The usual explanation is that they make implementation easier in some contexts, namely a hardware circuit which receives data over a 8-bit bus: it can accumulate the bits into eight shift registers, which is more efficient (in terms of circuit ...
42
The difference between the PKCS#5 and PKCS#7 padding mechanisms is the block size; PKCS#5 padding is defined for 8-byte block sizes, PKCS#7 padding would work for any block size from 1 to 255 bytes. This is the definition of PKCS#5 padding (6.2) as defined in the RFC: The padding string PS shall consist of 8 - (||M|| mod 8) octets all having value 8 - ...
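As a sketch of the rule quoted above (n padding octets, each of value n, where n = block size minus the message length mod block size), here is a minimal PKCS#7 pad/unpad pair; note that a full block of padding is added when the message is already block-aligned:

```python
def pkcs7_pad(data: bytes, block_size: int = 16) -> bytes:
    # n bytes of value n; n is in 1..block_size, so aligned input gets a full block.
    n = block_size - len(data) % block_size
    return data + bytes([n]) * n

def pkcs7_unpad(padded: bytes) -> bytes:
    n = padded[-1]
    if n == 0 or n > len(padded) or padded[-n:] != bytes([n]) * n:
        raise ValueError("invalid PKCS#7 padding")
    return padded[:-n]

msg = b"YELLOW SUBMARINE"            # 16 bytes, block size 20 -> 4 padding bytes
padded = pkcs7_pad(msg, 20)
print(padded)                        # b'YELLOW SUBMARINE\x04\x04\x04\x04'
print(pkcs7_unpad(padded) == msg)    # True
```

Since the padding byte value must fit in one octet, this scheme works for block sizes up to 255, as the answer notes.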
41
For practical purposes, 128-bit keys are sufficient to ensure security. The larger key sizes exist mostly to satisfy some US military regulations which call for the existence of several distinct "security levels", regardless of whether breaking the lowest level is already far beyond existing technology. The larger key sizes imply some CPU overhead (+20% for ...
33
Why shouldn't I use ECB encryption? The main reason not to use ECB mode encryption is that it's not semantically secure — that is, merely observing ECB-encrypted ciphertext can leak information about the plaintext (even beyond its length, which all encryption schemes accepting arbitrarily long plaintexts will leak to some extent). Specifically, the ...
28
As a bonus feature, AES has hardware support in Intel processors which implement the AES instruction set, with AMD support coming soon in their Bulldozer-based processors. The AES instruction set consists of six instructions. Four instructions, namely AESENC, AESENCLAST, AESDEC, AESDECLAST, are provided for data encryption and decryption (the ...
25
The actual encryption algorithm is almost the same between all variants of AES. They all take a 128-bit block and apply a sequence of identical "rounds", each of which consists of some linear and non-linear shuffling steps. Between the rounds, a round key is applied (by XOR), also before the first and after the last round. The differences are: The longer ...
25
Assume that 1 evaluation of {DES, AES} takes 10 operations, and we can perform $10^{15}$ operations per second. Trivially, that means we can evaluate $10^{14}$, or about $2^{46.5}$ {DES, AES} encryptions per second. This is a simplistic view: we are ignoring here the cost of testing whether we found the correct key, and the key schedule cost. So on our ...
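The arithmetic in this estimate is easy to reproduce; note that the 10^14 encryptions per second figure is the answer's simplifying assumption, not a measured rate:

```python
rate = 10**14                 # assumed encryptions per second (about 2^46.5)
seconds_des = 2**56 / rate    # exhausting a 56-bit DES keyspace
seconds_aes = 2**128 / rate   # exhausting a 128-bit AES keyspace
print(f"DES: {seconds_des:.0f} s (about 12 minutes)")
print(f"AES: {seconds_aes:.2e} s (about 1e17 years)")
```

The 72-bit gap between the two keyspaces is what turns minutes into many times the age of the universe.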
24
A known-plaintext attack (i.e. knowing a pair of corresponding plaintext and ciphertext) always allows a brute-force attack on a cipher: Simply try all keys, decrypt the ciphertext and see if it matches the plaintext. This always works for every cipher, and will give you the matching key. (For very short plaintext-ciphertext pairs, you might get multiple ...
24
Applied Cryptography is book which is becoming, say, not-so-recent. NSA has quite a lot of budget, but not an infinite amount, and there are other organization, in particular big private corporation, which also have impressive means. Google or Apple, for instance, are companies with R&D activity in the area of cryptography, and who are able to ...
23
Many cryptographic algorithms are expressed as iterative algorithms. E.g., when encrypting a message with a block cipher in CBC mode, each message "block" is first XORed with the previous encrypted block, and the result of the XOR is then encrypted. The first block has no "previous block" hence we must supply a conventional alternate "zero-th block" which we ...
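The CBC chaining described here is easy to sketch. The toy_encrypt_block below is a plain XOR standing in for a real block cipher, so the scheme is NOT secure; it only shows how the IV seeds the chain and how each ciphertext block whitens the next plaintext block:

```python
def toy_encrypt_block(block, key):      # stand-in for a real block cipher; NOT secure
    return bytes(b ^ k for b, k in zip(block, key))

toy_decrypt_block = toy_encrypt_block   # XOR is its own inverse

def xor(a, b):
    return bytes(x ^ y for x, y in zip(a, b))

def cbc_encrypt(blocks, key, iv):
    prev, out = iv, []
    for block in blocks:
        prev = toy_encrypt_block(xor(block, prev), key)  # XOR with previous ciphertext
        out.append(prev)
    return out

def cbc_decrypt(cblocks, key, iv):
    prev, out = iv, []
    for c in cblocks:
        out.append(xor(toy_decrypt_block(c, key), prev))
        prev = c
    return out

key, iv = b"\x13" * 8, b"\xa5" * 8
pt = [b"sameblok", b"sameblok"]         # identical plaintext blocks...
ct = cbc_encrypt(pt, key, iv)
print(ct[0] != ct[1])                   # ...still yield distinct ciphertext: True
print(cbc_decrypt(ct, key, iv) == pt)   # True
```

The IV plays exactly the "zero-th block" role the answer describes: it randomizes the first block so the same message encrypts differently under a fresh IV.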
18
I'm just curious to know why the 128-bit version become the standard[.] That question is easy to respond. In the section Minimum Acceptability Requirements of Request for Candidate Algorithm Nominations for the AES, it says: The candidate algorithm shall be capable of supporting key-block combinations with sizes of 128-128, 192-128, and 256-128 ...
17
Basically it's analysis of a cryptographic cipher by means of finding a relationship between the difference in the input data and the output data. Ideally, the slightest difference in input data (cleartext), even a single bit, should produce a completely different ciphertext. However, if the cipher is not well-designed, a correlation between the two ...
17
Well, the exact reason for an IV varies a bit between different modes that use IV. At a high level, what the IV does is act as a randomizer, so that each encrypted message appears to be encrypted to a random pattern, even if those messages are similar. In general, IVs disguise when you encrypt the same message twice (and more generally, when two messages ...
17
People found MARS to be clunky and overly complex, leading to more effort for implementation and optimization, and also a less clear overall security picture. Assessments of "security" are, in fact, extremely subjective, because they rely on speculations about unknown future cryptanalytic attack, empiric traditions (e.g. "more rounds" = "more security"), ...
16
Not only we can turn block ciphers into hash functions, but we do. The usual hash functions (MD5, SHA-1, SHA-256...) use the Merkle-Damgård construction which relies on a block cipher E. A running state r is initialized to a conventional value. Then the input data is split into a number of chunks, each chunk being used as key for the block cipher: r is ...
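A minimal sketch of that construction: each message chunk is used as the key of a block cipher E, and a Davies–Meyer-style feed-forward (r = E_m(r) XOR r) makes each step hard to invert. Here toy_E is an invertible but cryptographically worthless mixer of my own invention; a real design uses a real block cipher:

```python
MASK = (1 << 64) - 1

def toy_E(key, x):   # toy 64-bit "block cipher": a permutation per key, NOT secure
    return ((x ^ key) * 0x9E3779B97F4A7C15) & MASK   # odd multiplier => invertible

def md_hash(message: bytes, r0=0x0123456789ABCDEF):
    """Merkle-Damgard over 8-byte chunks; compression r = E_m(r) ^ r."""
    msg = message + b"\x80" + b"\x00" * (-(len(message) + 1) % 8)  # crude padding
    r = r0
    for i in range(0, len(msg), 8):
        m = int.from_bytes(msg[i:i+8], "big")    # the message chunk is the *key*
        r = toy_E(m, r) ^ r                      # feed-forward
    return r

a, b = md_hash(b"hello world"), md_hash(b"hello worle")
print(a != b)                          # small input change -> different digest: True
print(md_hash(b"hello world") == a)    # deterministic: True
```

Real Merkle–Damgård hashes also encode the message length in the padding (Merkle–Damgård strengthening), which this crude padding omits.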
16
You should not use ECB mode because it will encrypt identical message blocks (i.e., the amount of data encrypted in each invocation of the block-cipher) to identical ciphertext blocks. This is a problem because it will reveal if the same messages blocks are encrypted multiple times. Wikipedia has a very nice illustration of this problem.
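The leak is a property of any deterministic per-block encryption, so even a toy cipher shows it. Here the "cipher" is a plain XOR, insecure and for illustration only:

```python
def toy_encrypt_block(block, key):   # stand-in for a real block cipher; NOT secure
    return bytes(b ^ k for b, k in zip(block, key))

def ecb_encrypt(blocks, key):
    return [toy_encrypt_block(b, key) for b in blocks]   # no chaining, no IV

key = b"\x42" * 8
pt = [b"ATTACK!!", b"HOLD....", b"ATTACK!!"]
ct = ecb_encrypt(pt, key)
print(ct[0] == ct[2])   # repeated plaintext block is visible in ciphertext: True
print(ct[0] != ct[1])   # True
```

An eavesdropper who never learns the key still learns that blocks 1 and 3 carry the same message, which is exactly the structure leak the Wikipedia illustration shows.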
15
If a block cipher is linear with respect to some field, then, given a few known plaintext-ciphertext pairs, it is possible to recover the key using a simple Gaussian elimination. This clearly contradicts the security properties one expects from a secure block cipher.
15
ECB and CBC are only about encryption. Most situations which call for encryption also need, at some point, integrity checks (ignoring the threat of active attackers is a common mistake). There are combined modes which do encryption and integrity simultaneously; see EAX and GCM (see also OCB, but this one has a few lingering patent issues; assuming that ...
15
The security of that approach is equivalent to that of normal CBC. Your scheme with first plaintext block $IV^\prime$ is clearly identical to normal CBC with $IV=AES(IV^\prime)$. Since a block cipher is a permutation over a block, a uniformly random first plaintext block will lead to a uniformly random IV for normal CBC. A ciphertext produced with your ...
15
The reason why you see that is because Camellia is the highest-preference cipher in NSS (Chrome and Firefox). Servers that support Camellia and use the client-preferred cipher suite will use Camellia. NSS's rationale for this ordering is: National ciphers such as Camellia are listed before international ciphers such as AES and RC4 to allow servers ...
14
Well, to start off with, IVs have different security properties than keys. With keys (as you are well aware), you need to hide them from anyone in the middle; if someone did learn your keys, then he could read all your traffic. IVs are not like this; instead, we don't mind if someone in the middle learns what the IV is; as long as he doesn't know the key, ...
14
A block cipher is an invertible transformation that maps an $n$ bit block of bits to an $n$ bit block of bits, under the control of a key (and where $n=128$ in the case of AES) Now, we most often need to do things other than mapping blocks of $n$ bits; how we do that is using the block cipher within a Mode of Operation. A mode of operation is just a way to ...
14
Never use ECB! It is insecure. I recommend an authenticated encryption mode, like EAX or GCM. If you can't use authenticated encryption, use CBC or CTR mode encryption, and then apply a MAC (e.g., AES-CMAC or SHA1-HMAC) to the resulting ciphertext.
14
What you're looking for can be done using existing schemes for format preserving encryption (FPE). In general, FPE schemes convert an existing strong algorithm like AES into a block cipher that operates on a set of any size. For instance, FPE can encrypt 15 digit integers to other 15 digit integers (eg for credit card numbers, one of the common reasons for ...
14
Let a "block cipher" be defined with a fixed S-box $S$ (i.e. a permutation of some space) and a key $K$ (same size as a block), such that the encryption of a block $P$ is $C = S[P\oplus K]$. Everybody knows $S$ and can apply and invert it (that's an "S-box", not a "key" -- if the S-box is "key dependent" then the S-box is itself a block cipher in its own ...
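The point of this construction is that a public S-box adds no secrecy: one known plaintext/ciphertext pair recovers the key as K = S^{-1}[C] XOR P. A byte-wise sketch, with the RNG seeded so the run is reproducible:

```python
import random

rng = random.Random(7)
S = list(range(256)); rng.shuffle(S)        # public byte permutation ("S-box")
S_inv = [0] * 256
for i, v in enumerate(S):
    S_inv[v] = i

def encrypt(p, k):                           # C = S[P xor K], applied per byte
    return bytes(S[b ^ kb] for b, kb in zip(p, k))

key = bytes(rng.randrange(256) for _ in range(8))
p = b"knownpls"
c = encrypt(p, key)

# One known plaintext/ciphertext pair recovers the key:
recovered = bytes(S_inv[cb] ^ pb for cb, pb in zip(c, p))
print(recovered == key)    # True
```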
14
In complete honesty: if you have to ask this question, it's overwhelmingly unlikely that you have actually succeeded in breaking the security of AES. At best, you may have discovered a well-known attack against misuse of particular block cipher modes; for instance, plaintext recovery with a chosen-ciphertext attack against ECB, or blind manipulation of the ...
13
This approach, at a high level, is actually fairly common; many stream ciphers operate on this very principle. For instance, Salsa20 uses what is effectively a hash function (a PRF) to convert a secret input (that includes a counter) into the keystream which is XORed with the plaintext. However, this kind of function can be much faster than a secure ...
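The shape of such a stream cipher is easy to sketch: a PRF applied to (key, nonce, counter) produces keystream blocks that are XORed with the plaintext. Here SHA-256 stands in for the fast PRF, purely for illustration; this is not Salsa20:

```python
import hashlib

def keystream(key: bytes, nonce: bytes, length: int) -> bytes:
    """Concatenate PRF(key || nonce || counter) blocks, CTR-style."""
    out, counter = b"", 0
    while len(out) < length:
        out += hashlib.sha256(key + nonce + counter.to_bytes(8, "big")).digest()
        counter += 1
    return out[:length]

def stream_xor(key, nonce, data):
    return bytes(d ^ k for d, k in zip(data, keystream(key, nonce, len(data))))

key, nonce = b"k" * 32, b"n" * 8
ct = stream_xor(key, nonce, b"attack at dawn")
print(stream_xor(key, nonce, ct) == b"attack at dawn")   # XOR twice decrypts: True
```

Because the keystream depends only on (key, nonce, counter), encryption and decryption are the same operation, but a (key, nonce) pair must never be reused.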
http://libros.duhnnae.com/2017/jun4/149698804582-Upcrossing-inequalities-for-stationary-sequences-and-applications-Michael-Hochman.php | # Upcrossing inequalities for stationary sequences and applications
For arrays $\{S_{i,j}\}_{1\leq i\leq j}$ of random variables that are stationary in an appropriate sense, we show that the fluctuations of the process $\{S_{1,n}\}_{n=1}^{\infty}$ can be bounded in terms of a measure of the mean subadditivity of the process $\{S_{i,j}\}_{1\leq i\leq j}$. We derive universal upcrossing inequalities with exponential decay for Kingman's subadditive ergodic theorem, the Shannon–McMillan–Breiman theorem and for the convergence of the Kolmogorov complexity of a stationary sample.
Author: Michael Hochman
Source: https://archive.org/
https://brilliant.org/problems/shrinking-squaresan-empirical-exploration/ | # Shrinking Squares.An empirical exploration
Start with a sequence $$S=(a,b,c,d)$$ of positive integers and find the derived sequence $$S_{1}=T(S)=(|a-b|,|b-c|,|c-d|,|d-a|)$$. Define a sequence $$S,S_{1},S_{2}=T(S_{1}),S_{3}=T(S_{2}),\ldots$$.
Let $$S_{i}$$ be the first term of this sequence that minimizes the sum of the four elements. What is the sum of the four elements of $$S_{i}$$?
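The map T is the classical Ducci map, and for 4-tuples of integers every Ducci sequence is known to reach (0, 0, 0, 0), so the minimal achievable sum is 0. A sketch that iterates and tracks the smallest sum seen:

```python
def T(s):
    a, b, c, d = s
    return (abs(a - b), abs(b - c), abs(c - d), abs(d - a))

def iterate_to_min_sum(s, max_steps=100):
    best = s
    for _ in range(max_steps):
        s = T(s)
        if sum(s) < sum(best):
            best = s
        if s == (0, 0, 0, 0):
            break
    return best

print(iterate_to_min_sum((1, 5, 17, 42)))   # (0, 0, 0, 0)
```

The starting tuple (1, 5, 17, 42) is my arbitrary choice; any 4-tuple of positive integers works.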
http://lambda-the-ultimate.org/taxonomy/term/1 | ## Refining Structured Type System
This is my first more serious paper on Structured Type System.
[Abstract]
...
As every theory needs some syntax to express its elements, a road to a theory about theories leads through syntax-defining land, so structured type system, in the first place, provides a flexible generalized text parser that builds internal abstract syntax trees (ASTs) from input data. The other aspect of a theory about theories inevitably covers the meaning of input data. This is called semantics, and this is the point where structured type system provides a possibility to define deeper connections between the syntactic elements of ASTs. For this purpose, structured type system uses a kind of function known from the functional programming paradigm. These functions are able to process any data corpus, whether natural or artificial language, which in turn is just enough for running analysis tasks of any complexity on existing data and for calculating new data from an input.
...
In short, we use BNF-ish grammars as types for function parameters and function results. Some nice constructions can be made by combining grammars and functions. One of the most important properties of structured type system is its ability to additionally extend grammars outside the grammars definitions, all based on function result types. It is fairly simple: where a certain type of expression is expected, there a grammar that results with the same type can be used, and there goes syntax extensibility. Conveniently, we can combine grammar definitions and their inputs in the same source code file.
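As a toy rendering of the "grammars as types" idea (my own illustration, not the paper's system; the decorator, `grammar` helper, and regex stand-in for BNF are all assumptions), a function can declare its parameter type as a grammar, so that applying the function first parses the input against it:

```python
# Toy sketch: a "grammar" acts as the type of a function parameter.
import re

def grammar(pattern):
    """A grammar as a checking function (regex stands in for BNF here)."""
    rx = re.compile(pattern)
    def check(text):
        if rx.fullmatch(text) is None:
            raise TypeError(f"input does not match grammar {pattern!r}")
        return text
    return check

addition = grammar(r"\d+ \+ \d+")

def typed(g):
    """Decorator: the wrapped function's parameter type is the grammar g."""
    def wrap(f):
        return lambda text: f(g(text))
    return wrap

@typed(addition)
def evaluate(expr):
    a, b = expr.split(" + ")
    return int(a) + int(b)

print(evaluate("2 + 40"))   # input is parsed, then computed on
```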
I was hoping to get some feedback and critique from this community before attempting to get more publicity for the paper. This is an important milestone for me and I want to thank you all for being such an inspirational community during my research.
## Cool stuff from recent conferences
I heard of some good stuff. How about someone who was there post the headline worthy papers?
## Céu: Structured Synchronous Reactive Programming (SSRP)
Céu is an Esterel-based synchronous language:
It appeared on LtU in the past in an announcement of the "SPLASH: Future of Programming Workshop" program.
In this new public version, we are trying to reach beyond the academic fences with more polished work (docs, build, etc.).
In summary:
• Reactive: code executes in reactions to events
• Synchronous: reactions run to completion in discrete logical units of time (there's no implicit preemption nor real parallelism)
• Structured: programs use structured/imperative control mechanisms, such as "await" and "par" (to combine multiple awaiting lines of execution)
Structured programming avoids deep nesting of callbacks, letting programmers write code in a direct/sequential/imperative style. In addition, when a line of execution is aborted, all allocated resources are safely released.
The synchronous model leads to deterministic execution and simpler reasoning, since it does not demand explicit synchronization from the programmer (e.g., locks and queues). It is also lightweight to fit constrained embedded systems.
We promote SSRP as a complement to classical structured/imperative programming like FRP is now to functional programming.
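The synchronous run-to-completion discipline can be sketched in a few lines of Python (my own illustration of the model, not Céu's implementation): coroutines "await" events by yielding them, and each reaction runs to completion within one discrete logical tick, so handlers never need locks or queues.

```python
# Sketch of synchronous reactive semantics: reactions to an event run
# to completion before the next event is processed.

class SyncRuntime:
    def __init__(self):
        self.waiting = {}   # event name -> awaiting coroutines

    def spawn(self, coro):
        self._resume(coro, None)

    def _resume(self, coro, value):
        try:
            event = coro.send(value)      # coroutine yields the event it awaits
        except StopIteration:
            return
        self.waiting.setdefault(event, []).append(coro)

    def emit(self, event, value=None):
        # one logical unit of time: run every awaiting line of
        # execution up to its next await point
        for coro in self.waiting.pop(event, []):
            self._resume(coro, value)

def blinker(log):
    while True:
        yield "tick"                      # await "tick"
        log.append("on")
        yield "tick"
        log.append("off")

log = []
rt = SyncRuntime()
rt.spawn(blinker(log))
rt.emit("tick"); rt.emit("tick"); rt.emit("tick")
print(log)   # reactions interleave deterministically, tick by tick
```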
## Archaeological dig to find the first Lisp example of the Y-combinator
I'm trying to find the first Lisp examples of the Y-combinator. Beyond that I am also trying to find the first time the Y-combinator was demonstrated using the factorial function and the mutually recursive definition of odd/even.
What works should I be looking at? The first Scheme paper references fixed-point combinators at page 16 and also shows the familiar LISP definition of the factorial function. But, it does not express the factorial function using a fixed-point operator.
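For reference, here is the construction the post is hunting for, rendered in Python rather than Lisp: the applicative-order fixed-point combinator (often called Z, since a strict language needs the eta-expanded form), demonstrated with factorial.

```python
# Applicative-order Y combinator (Z combinator).
Z = lambda f: (lambda x: f(lambda v: x(x)(v)))(lambda x: f(lambda v: x(x)(v)))

# Factorial as the fixed point of a non-recursive functional.
fact = Z(lambda rec: lambda n: 1 if n == 0 else n * rec(n - 1))

# even defined through its own fixed point (evenness flips each step).
even = Z(lambda rec: lambda n: True if n == 0 else not rec(n - 1))

print(fact(5), even(10), even(7))
```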
## What would a modern imperative language look like? All the love here is functional only..
After reading a lot about compilers/languages, I see that most research, if not all, is about functional languages and complex type systems.
Now that I'm toying with building one, I see that I'm biasing the language towards being functional, yet the truth is that I'm more of an imperative guy.
So, I wonder what is new/forgotten in the world of imperative or non-functional languages, languages that are more "mainstream". Are Go/Rust/Swift all there is?
If I wanted to build a (more mainstream, imperative, etc.) language with the wisdom of today, what would it look like? Has it already been made? Maybe Ada or similar?
I would probably switch it to "const by default, variable optional", and use ADTs and the match clause, but I haven't thought about what else...
## Inference of Polymorphic Recursion
In the following (Haskell) example, the type annotation on f is required:
f :: a -> (Int, a)
f x = (g True, x)
g True = 0
g False = fst (f 'a') + fst (f 0)
main = do
  print (fst (f True))
I can understand why in general, but I wonder if we could just decide to generalize arbitrarily in the order that declarations appear so that in this case the type of f would be inferred but if you switched the definition order you'd get a type error. When f is generalized, g would be constrained Bool -> b where b would be unified after generalization. Is this something that might work (but isn't done because it's arbitrary and makes definition order matter) or are there hard cases I need to consider?
Thanks
Kitten has ad-hoc static polymorphism in the form of traits. You can declare a trait with a polymorphic type signature, then define instances with specialisations of that signature:
// Semigroup operation
trait + <T> (T, T -> T)
instance + (Int32, Int32 -> Int32) {
}
instance + (Int64, Int64 -> Int64) {
…
}
…
This is checked with the standard “generic instance” subtyping relation, in which (Int32, Int32 -> Int32) is a generic instance of <T> (T, T -> T). But the current compiler assumes that specialisations are fully saturated: if it infers that a particular call to + has type Int32, Int32 -> Int32, then it emits a direct call to the (mangled) name of the instance. I’d like to remove that assumption and allow instances to be generic, that is, partially specialised:
// List concatenation
instance + <T> (List<T>, List<T> -> List<T>) {
_::kitten::cat
}
// #1: Map union
instance + <K, V> (Map<K, V>, Map<K, V> -> Map<K, V>) {
…
}
// #2: A more efficient implementation when the keys are strings
instance + <V> (Map<Text, V>, Map<Text, V> -> Map<Text, V>) {
…
}
But this raises a problem: I want to select the most specific instance that matches a given inferred type. How exactly do you determine that?
That is, for Map<Text, Int32>, #1 and #2 are both valid, but #2 should be preferred because it’s more specific. There are also circumstances in which neither of two types is more specific: if we added an instance #3 for <K> (Map<K, Int32>, Map<K, Int32> -> Map<K, Int32>), then #2 and #3 would be equally good matches, so the programmer would have to resolve the ambiguity with a type signature.
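One standard answer (a sketch of the usual approach, not Kitten's actual algorithm) is one-way matching: instance A is strictly more specific than B if B's type variables can be instantiated to yield A, but not vice versa; if neither direction matches, the instances are incomparable and the call is ambiguous.

```python
# Types: lowercase strings are type variables; tuples are constructors
# applied to arguments, e.g. ("Map", k, v); other strings are atoms.

def match(pattern, ty, subst):
    """One-way match: bind variables in `pattern` so it equals `ty`."""
    if isinstance(pattern, str) and pattern.islower():      # variable
        if pattern not in subst:
            subst[pattern] = ty
            return True
        return subst[pattern] == ty
    if isinstance(pattern, tuple) and isinstance(ty, tuple):
        return (len(pattern) == len(ty)
                and all(match(p, t, subst) for p, t in zip(pattern, ty)))
    return pattern == ty                                    # atom

def matches(pattern, ty):
    return match(pattern, ty, {})

def more_specific(a, b):
    """a is strictly more specific than b."""
    return matches(b, a) and not matches(a, b)

map_kv    = ("Map", "k", "v")        # instance #1: Map<K, V>
map_text  = ("Map", "Text", "v2")    # instance #2: Map<Text, V>
map_k_int = ("Map", "k3", "Int32")   # instance #3: Map<K, Int32>

print(more_specific(map_text, map_kv))      # #2 beats #1
print(more_specific(map_text, map_k_int))   # #2 vs #3: incomparable
```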
## Unsoundness
Hi, I wonder if someone can help resolve the conflict described below.
My system is an Algol like language which supports both functional and procedural code.
Now, I hate the way C handles lvalues: it doesn't extend well, one has to make horrible rules and list all possible l-contexts, and attempts to do this in C++ are an abysmal failure.
My solution is elegant! Consider a record:
(x=1,y=42.1)
which has type
(x:int, y:double)
then this is a first class value, and the field names are projections:
x (x=1,y=42.1)
Since I have overloading there's no conflict with the same field name in some other record,
the field name is just an overloaded function name.
To make it look nicer you can use reverse application:
(x=1,y=42.1) . x
Now, to get rid of lvalues we introduce pointer types and variables so that in
var xy = (x=1,y=42.1);
&xy <- (x=2,y=43.1);
This is really cool because & is not an operator, just a way to write the value which is the address of a variable.
We use a procedure, written as an infix left arrow, which takes a pointer to T as its first argument and a T as its second, and stores the value at the specified address. So it's all values.
To assign to a component, we introduce a second overload for each projection that takes a pointer argument and returns a pointer:
&xy . x <- 3;
This works for other products as well (tuples, arrays and structs).
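The two projection overloads can be modeled in Python (an illustration of the scheme, not the author's language: `Ptr`, `var`, and the field functions are my stand-ins). Each field name gives a value projection and a pointer projection that maps a pointer-to-record to a pointer-to-field:

```python
class Ptr:
    """A pointer: a place we can read and store through."""
    def __init__(self, get, set):
        self.get, self.set = get, set

def var(value):
    """A variable; the Ptr itself models &xy."""
    cell = {"v": value}
    return Ptr(lambda: cell["v"],
               lambda new: cell.__setitem__("v", new))

# value projection:     x (x=1, y=42.1)
def x_val(record):
    return record["x"]

# pointer projection:   &xy . x  -- pointer to the x component
def x_ptr(p):
    return Ptr(lambda: p.get()["x"],
               lambda new: p.set({**p.get(), "x": new}))

xy = var({"x": 1, "y": 42.1})
x_ptr(xy).set(3)                 # &xy . x <- 3
print(xy.get())                  # the whole record value is rewritten
```

Note that storing through the field pointer rewrites the whole record value, which is consistent with a purely value-based semantics.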
So, we have a purely value based semantics and a sane type system.
In particular we have a very nice rule that relates values and objects.
So far so good, but now I have another feature called "compact linear types", which are packed encodings of any type defined by the rule: unit is compact linear, and a sum, product, or exponential of compact linear types is compact linear.
A small compact linear type is one that fits in a 64-bit machine word.
So for example the type 3 * 4 is a single 64 bit value which is a subrange of integer 0 thru 11.
Compact linear types with integer coercions used as array indices give polyadic arrays
(rank independent array programming).
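A small sketch of the packing (one plausible encoding; the author's actual layout may differ): a value of type 3 * 4 is a single integer in the subrange 0..11, with the components recovered by div/mod, and the packed value can index a flattened rank-2 array directly.

```python
# A compact linear product 3 * 4 packed into one machine word.

def pack3x4(i, j):
    assert 0 <= i < 3 and 0 <= j < 4
    return i * 4 + j            # subrange of int, 0 thru 11

def unpack3x4(n):
    return divmod(n, 4)         # the two projections, (i, j)

# Packed values index a rank-2 array viewed as a flat one:
flat = list(range(12))
print(pack3x4(2, 3), flat[pack3x4(1, 2)])
```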
The problem is .. compact linear type components are not addressable.
Projection functions work fine, but there are no overloads for pointer projections.
And so the conflict: polymorphic pointer projections are unsound:
proc f[T,U] (r: &(T,U)) (v:T) { r.0 <- v; }
will not work if r is a pointer to a compact linear object. I can think of three solutions:
(1) invent a new pointer type (destroys uniform pointer representation property)
(2) have distinct product, sum, and exponential operators for compact linear types
(3) use the same operators but introduce pack and unpack operations
## Domain specific language for playing games
Writing computer games for people to play, even quite simple ones, is a surprisingly challenging task. Here I'm thinking of turn-based games like card and board games, puzzles, block-pushing and perhaps simple arcade games like Space Invaders. It would seem fairly obvious that large parts of the heavy lifting will be common from one to another and that differences in game play might well be encapsulated in a DSL.
Others have had the same thought. Here are links to some good reviews: https://chessprogramming.wikispaces.com/General+Game+Playing, https://en.wikipedia.org/wiki/Domain-specific_entertainment_language, https://en.wikipedia.org/wiki/General_game_playing. The GGP language GDL is a Datalog derivative, Zillions uses a Lisp dialect and Axiom is a kind of Forth. There are several others, including PuzzleScript, CGL and VGDL. GGP in particular is the focus of a lot of AI work, not so much the UI. Several of these projects appear dormant.
Considering the prevalence of games in the community and the number of people involved in writing them, there seems to be surprisingly little effort in this direction. I was wondering whether anyone is aware of, or involved in, any active work in this area, particularly on the language side, before I go and invent my own.
## Process Network for Effects, Monad Alternative
Monads are an awkward effects model in the context of concurrency. We get incidental complexity in the form of futures or forking threads with shared memory. The running program becomes entangled with the environment, which hinders persistence, mobility, and debugging. So I sought alternatives in the literature.
Kahn Process Networks (KPNs) seem like a very good alternative. From an external perspective, they share a lot of similarities to monads, except we get more than one input (continuation) and output (effect) port and thus can model concurrent operations without relying on effects or environment support. Internally, KPNs have a lot of similarities to free monads: we can compose KPNs to handle effects internally, translate them, etc.. Use of KPNs as first class values allows for dynamic structure and mobile processes.
The main feature missing from KPNs is the ability to work with asynchronous inputs. But it is not difficult to add time to the model, and thus support asynchronous messaging and merges in a style similar to functional-reactive or flow-based programming (and somewhere between the two in terms of expressiveness). I doubt this is a new idea.
I've written about these ideas in more detail on my blog:
Reactive KPNs with open ports or channels also make a better FRP than most, having a far more direct API for pushing inputs and pulling outputs deep within a network.
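A minimal KPN can be sketched in Python (my illustration of the model, not the blog's code): processes are generators that may only block on reads from FIFO channels, and writes never block; that restriction is what makes the network's behaviour deterministic regardless of scheduling.

```python
from collections import deque

def run(processes):
    """Round-robin scheduler for a toy KPN. Processes yield
    ('read', ch) to block on a FIFO, or ('write', ch, v) to append.
    Channels are plain deques."""
    ready = deque(processes)     # runnable procs, or (proc, read value)
    blocked = []                 # (proc, channel) waiting on data
    while True:
        still = []
        for proc, ch in blocked:             # wake readers with data
            if ch:
                ready.append((proc, ch.popleft()))
            else:
                still.append((proc, ch))
        blocked = still
        if not ready:
            break                            # all finished or deadlocked
        item = ready.popleft()
        proc, value = item if isinstance(item, tuple) else (item, None)
        try:
            op = proc.send(value)
        except StopIteration:
            continue
        if op[0] == "read":
            blocked.append((proc, op[1]))
        else:                                # ('write', ch, v): never blocks
            op[1].append(op[2])
            ready.append(proc)

a, b = deque(), deque()

def producer():
    for i in range(3):
        yield ("write", a, i)

def doubler():
    while True:
        v = yield ("read", a)
        yield ("write", b, 2 * v)

run([producer(), doubler()])
print(list(b))
```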