https://stats.stackexchange.com/questions/7698/output-layer-of-artificial-neural-networks-when-learning-non-linear-functions-wi
# Output layer of artificial neural networks when learning non-linear functions with limited value range

I want to approximate a non-linear function with a limited value range using an artificial neural network (feed-forward, backpropagation). Most tools and literature available suggest linear functions for the output neurons when doing regression. However, I know a priori that my target function has a limited range, so is it reasonable to use a function with a limited value range for the output neurons, too? To be more explicit: my target function's values lie in the range between 0 and 1, but the neural net occasionally predicts values that exceed this range (e.g. -1.3). Can I prevent the net from doing so, and is it reasonable? I am opposed to cutting values off, since this leads to a non-differentiable transfer function, and a gradient-based training algorithm might break down.

The sigmoid function at the output layer is fine: $\sigma(x) = \frac{1}{1 + e^{-x}}$. It will squash any output to lie within $(0, 1)$, so you can get arbitrarily close to the targets. However, if you use the squared error you will lose the property of a "matching loss function": when using linear outputs with a squared error, the derivative of the error reduces to $y - t$, where $y$ is the output and $t$ the corresponding target value. So you have to check your gradients. I have personally had good results with sigmoids as outputs when I have targets in that range, even when using the sum-of-squares error anyway.

If you use a logistic activation function in the output layer, it will restrict the output to the range 0-1 as you require. However, if you have a regression problem with a restricted output range, the sum-of-squares error metric may not be ideal, and a beta noise model might be more appropriate (cf. beta regression, which IIRC is implemented in an R package, but I have never used it myself).

• I replaced the linear function with a siglog function. Strangely, the neural net now predicts 0.5 for whatever data it is given. May this be a consequence of still using the sum-of-squares metric? Feb 28 '11 at 14:02
• The use of the sum-of-squares error metric shouldn't be a problem; asymptotically it will give a network that predicts the conditional mean of the data regardless of the distribution and output layer activation function. Can you give some more information about the data, size of the network, training algorithm etc.? What is the distribution of target values like? Are you using regularisation? Feb 28 '11 at 14:10
• Training algorithm is backpropagation: 500 training cycles, learning rate 0.3, momentum 0.2, and 1 hidden layer with 3 neurons. The data consists of 3 attributes and one target attribute, with 20000 samples. The target attribute is very heavily(!!!) skewed within the range of 0 and 1; the mean is 0.077, SD 0.185. Feb 28 '11 at 14:29
• In that case, it might be worth transforming the target to have a less skewed distribution before training. Is it a problem with many values exactly at zero (like rainfall, for example), or are there no special values? Feb 28 '11 at 15:48
• Try a smaller learning rate (say 0.001) and a higher momentum (say 0.99). This will make training take longer, but it will also be more robust. Mar 1 '11 at 14:34

If you know an absolute range for the output, but there is no reason to expect it to have the non-linear characteristic of the typical logistic activation function (i.e. a value in the middle is just as likely as a value near 0 or 1), then you can simply transform the output by dividing by the absolute maximum. If the minimum were not 0, you could subtract the absolute minimum before dividing by (maximum - minimum). So basically, don't train the neural network on the raw value; train it on the rescaled value (0 for the minimum, 1 for the maximum).

• That will not prevent the predictions of the network from going outside the range 0-1. If it is necessary to constrain the outputs of the network to lie within a particular range, a non-linear activation function is unavoidable. Feb 28 '11 at 13:53
• If you mean it won't go even to 1.01, then you're right. However, if you train on values that are all between 0 and 1, then eventually the network will learn to always give values close to that range. You can clip the output to 0-1 if you want, while leaving the region in between linear, but in most cases an occasional output of 1.01 or 1.02 is not an issue. Mar 1 '11 at 13:34

"Would it work to use the linear function and simply cut all values below 0 to 0, and values above 1 to 1?" I believe in many cases the cut-off value should be the percentage split of the training data. E.g. if your training data has 13% 0's and 87% 1's, then the cut-off would be 0.13: anything 0.13 and below on the output is 0, and anything 0.14 and above is 1. Obviously there is more uncertainty the closer the output is to the cut-off. It may also help to adjust the cut-off limits, especially where the cost of a misclassification is high. This link may help a little: http://timmanns.blogspot.com/2009/11/building-neural-networks-on-unbalanced.html
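To make the two suggestions above concrete (a sigmoid output unit, plus min-max rescaling of the targets), here is a minimal NumPy sketch. The data, network size and hyperparameters are made up for illustration; this is not the original poster's setup:

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy data: 3 input attributes, one bounded target (stand-ins for the poster's data).
X = rng.normal(size=(1000, 3))
y_raw = np.abs(np.sin(X[:, 0]) * np.cos(X[:, 1]))

# Min-max rescaling, as the last answer suggests: train on (y - min) / (max - min).
lo, hi = y_raw.min(), y_raw.max()
y = ((y_raw - lo) / (hi - lo)).reshape(-1, 1)

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

# One small hidden layer; a sigmoid output keeps every prediction inside (0, 1).
W1 = rng.normal(scale=0.5, size=(3, 5)); b1 = np.zeros(5)
W2 = rng.normal(scale=0.5, size=(5, 1)); b2 = np.zeros(1)
lr = 0.5

for epoch in range(2000):
    h = np.tanh(X @ W1 + b1)            # forward pass
    out = sigmoid(h @ W2 + b2)

    # Squared error with a sigmoid output is not a "matching loss": the
    # output delta carries an extra out*(1-out) factor, not just (out - y).
    d_out = (out - y) * out * (1.0 - out)
    d_h = (d_out @ W2.T) * (1.0 - h**2)

    W2 -= lr * (h.T @ d_out) / len(X); b2 -= lr * d_out.mean(axis=0)
    W1 -= lr * (X.T @ d_h) / len(X);   b1 -= lr * d_h.mean(axis=0)

# Undo the rescaling to report predictions on the original scale.
pred = lo + (hi - lo) * sigmoid(np.tanh(X @ W1 + b1) @ W2 + b2).ravel()
print("MSE on original scale:", float(np.mean((pred - y_raw) ** 2)))
```

Note the extra out*(1-out) factor in the output delta: with a sigmoid output the squared error is no longer a matching loss, so the gradient shrinks near 0 and 1. Switching to a cross-entropy loss would restore the plain (out - y) delta, which is the matching-loss property the first answer mentions.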
https://gm.dualjuridik.org/8479-genes-and-intelligence.html
# Genes and Intelligence

Assuming that intelligence has a genetic component,
• do we know which genes contribute to it? and, if so,
• can we predict intelligence from genomic analysis?

### A few words on genomic prediction

No complex trait is 100% heritable, hence no prediction based entirely on DNA will ever be perfect. With that said, predictive genomics is progressing at an amazing rate right now. So while predictions are nowhere near perfect, it is becoming possible to make DNA predictions that correlate substantially with observed values. The gene variants that correlate with phenotypic differences are found using genome-wide association studies (GWAS). The total effects are then aggregated into what is called a polygenic score.

### A success story -- Height

Height is highly heritable, meaning that much of the phenotypic variance (although not 100%) is due to genetic differences. Height is also highly polygenic -- it is a trait influenced by many genes, each of small effect. Many variants that correlate with height have now been found. Polygenic scores of height correlate at roughly 0.60 or higher with observed height. DNA predictions are generally within a few centimeters of true height (Lello, Avery, Tellier, Vazquez, de los Campos, Hsu, 2018). According to an interview with Stephen Hsu, one of the authors of this paper and co-founder of the company Genomic Prediction, they can correctly predict the height ordering of siblings within the same family 80-90% of the time (source). So based purely on DNA, can we say which of two siblings is going to be taller than the other? Not perfectly, but with decent reliability, yes. This is about as good as predictions of complex traits can be with current methods. There are a few principal reasons that correlations are not higher (even though this is quite impressive already in its own right). First, as I said, complex traits are not 100% heritable, and therefore predictions are not expected to ever be perfect. Second, current GWAS are typically based on SNPs, which are common gene variants. Rarer gene variants are expected to contribute to the variance, and their effects still need to be uncovered. Third, only additive effects are taken into account, not gene-by-gene interactions.

### Genetic IQ Prediction?

IQ, like height, is a heritable trait and is highly polygenic. There are several approaches to finding gene variants correlated with IQ, but they can broadly be categorized into two. The most obvious approach is to simply give people intelligence tests and get their DNA. The problem with this is that it is difficult to get large samples with good intelligence tests. When this approach is used, usually a very short (say, 2 min) intelligence test is used. The second approach is to use a proxy phenotype. With this approach, the variable years of educational attainment has been used to good effect and has been shown to have a high genetic correlation with intelligence ($$r_g \approx 0.7$$). Many variants associated with intelligence or educational attainment have been found; see e.g. Lee et al. (2018) and Savage et al. (2018). While many variants are known, IQ is not as well understood as height, and current SNP predictors correlate at about ~0.3 with observed IQ (see e.g. Allegrini et al., 2018).
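For intuition about what such a predictor actually computes: a polygenic score, as described above, is just a weighted sum of allele counts. The sketch below uses made-up effect sizes and genotypes; real scores use GWAS-estimated weights over up to millions of SNPs.

```python
import numpy as np

rng = np.random.default_rng(1)

# Toy sizes; real studies use far more SNPs and far larger cohorts.
n_people, n_snps = 5, 1000
dosages = rng.integers(0, 3, size=(n_people, n_snps))  # 0/1/2 copies of the effect allele
betas = rng.normal(scale=0.01, size=n_snps)            # hypothetical per-SNP GWAS effect sizes

# The polygenic score is the per-person weighted sum: effect size times allele count.
pgs = dosages @ betas
print(np.round(pgs, 3))
```

Because the score only adds up per-SNP effects, it captures additive variance only, which is one of the limitations listed in the height section above.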
This ~0.3 correlation is bound to increase in the coming few years.

### Further Information

If you want a simple introduction to what can be read from the genome so far, there is a TED talk on this subject, TED2016: How to read a genome and build a human being, although major improvements have already been made in the years since 2016. For anyone more interested in the mathematical techniques used for prediction and the underlying theory, I recommend this talk by Stephen Hsu on genomic prediction of complex traits. I also recommend reading this review by Robert Plomin and Sophie von Stumm for an easily understandable overview of the current state of knowledge on the subject.

### References

Lello, Avery, Tellier, Vazquez, de los Campos, Hsu (2018). Accurate Genomic Prediction of Human Height. DOI: https://doi.org/10.1534/genetics.118.301267
Marty Nemko (2018). The Future of In-Vitro Fertilization and Gene Editing. Psychology Today. Link.
Lee,… , Cesarini (2018). Gene discovery and polygenic prediction from a genome-wide association study of educational attainment in 1.1 million individuals. DOI: https://doi.org/10.1038/s41588-018-0147-3
Savage,… , Posthuma (2018). Genome-wide association meta-analysis in 269,867 individuals identifies new genetic and functional links to intelligence. DOI: https://doi.org/10.1038/s41588-018-0152-6
Allegrini, Selzam, Rimfeld, von Stumm, Pingault, Plomin (2018). Genomic prediction of cognitive traits in childhood and adolescence. DOI: https://doi.org/10.1101/418210
Sabatini (2016). How to read a genome and build a human being. TED2016. Link to Talk.
Plomin, von Stumm (2018). The new genetics of intelligence. DOI: https://doi.org/10.1038/nrg.2017.104

No, because the trait you describe does not exist

Your question betrays a common misunderstanding of how genetics and the environment interact to produce complex phenotypes. In fact, every biological trait is 100% genetic and 100% environmental. Don't believe me? Try teaching algebra to your cat, or see what height someone is after you've dropped them into the sun. The only sense in which you have an "IQ you were born with" is the measurable IQ of a newborn child, which I'm guessing comes out as 0. Instead, your IQ is the result of continuous interaction between genes and environment. You can say that at the end of the process, across the population, 70% (say) of observed variation is explicable by genetic variation, but this does not mean that you got 70 points of IQ from your genes and 30 points from the environment. The whole score is attributable to an interaction between the two. There is no "null" environment in which you would observe pure genetics, nor does it follow that increasing the range of environments encountered across your sample will increase the proportion of variation explicable by the environment. A gene that has no effect on IQ in one environment may have a marked effect in a different environment, and you'd only discover this by varying the environment so both are encountered. So there is no trait "genetic IQ" to predict from someone's DNA, even if we had perfect knowledge of the link between genes and intelligence. Which we don't.

IQ typically has a relatively high heritability (see this post). Please have a look at this post to understand the concept of heritability.
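For reference, the heritability these posts refer to is a population-level variance ratio, not a per-person split. This is the standard quantitative-genetics definition, restated here for convenience; the variance notation is ours:

$$h^2 = \frac{\sigma^2_A}{\sigma^2_P}, \qquad \sigma^2_P = \sigma^2_G + \sigma^2_E,$$

where $$\sigma^2_A$$ is the additive genetic variance, $$\sigma^2_G$$ the total genetic variance and $$\sigma^2_P$$ the phenotypic variance in the population studied. A heritability of 0.7 therefore describes variation between people in a given population; it does not say that 70% of any individual's IQ "comes from" genes, which is exactly the point the answer above makes.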
In order to know which loci (QTL, as the trait of interest is quantitative) are associated with a specific trait of interest, one must perform a GWAS. The studies listed in the first posts linked above do not involve GWAS but only some kind of parent-offspring regression or twin studies to estimate heritability. From these studies, one cannot infer anything about a person's IQ from their genome. GWAS studies on IQ do exist, though (reviewed in Pfiffer 2015). From one of these, it would be possible to make predictions of someone's intelligence based on their genome, and to compute confidence intervals as well, yes. I doubt anyone has ever written such an algorithm, though. Without having it in front of one's eyes, it is impossible to tell whether the confidence intervals would be very wide or very narrow.

IQ is pretty much like muscle strength, in the sense that you may have the right combination of genes that would give you extra strong muscles, but environmental factors play an essential role as well. Nutrition, training etc. will have a profound effect on the development of your muscles. You cannot predict the effect a priori. For IQ, it is the same thing. On one extreme you may have genetically linked developmental diseases, physically impairing normal brain functions, i.e. negatively affecting your IQ. In most cases, however, the effects of genetics on IQ are overshadowed by the actual "brain training" you get during your life, by nutrition, and by other environmental factors. So, yes, IQ is influenced by the underlying genetics. Yes, in some extreme cases you can predict a very strong detrimental effect of some genes/mutations on IQ. For most cases, it is very hard to predict, and genetics alone would not be enough to do so.

## The biology of intelligence?

The idea that there is a biological basis for intelligence in people has been translated into two beliefs about IQ tests: first, that measured IQ is genetically determined and that differences in IQ between different groups are partly or largely caused by genetic differences between them; second, that IQ reflects some presumed fundamental property of the brain, such as efficiency or speed of neural transmission, that can be measured by recording evoked potentials or by the speed with which a person performs some very simple task. Measured IQ may well have a significant heritable component, but there is very little evidence that average differences in IQ between, say, different ethnic groups are genetic in origin. In Great Britain, moreover, differences in IQ between white, West Indian, Indian or Pakistani children are closely correlated with differences in their social circumstances. It should be stressed, however, that there is equally little evidence that IQ tests significantly underestimate the academic attainments of children from ethnic minorities or that such children fall progressively further behind as they go through school. The search for simple physiological or behavioural correlates of IQ has met with no more than modest success. Recent reports of highly significant correlations between IQ and measures of evoked potentials have not always been replicated, and the past history of failures of replication in this general area counsels caution. Reports of very high (greater than 0.60) correlations between IQ and measures of timed performance have not been substantiated and can usually be attributed to the inclusion of disproportionate numbers of retarded subjects in the samples.
There is quite good evidence of moderate correlations (in the range 0.20 to 0.40) with those measures, but it is not even clear how they should be interpreted. One possibility is that they reflect differences in concentration or sustained attention. If we want higher correlations with IQ, we will probably need to look to more sophisticated tasks.

## Large study uncovers genes linked to intelligence

Exactly what constitutes intelligence, and to what extent it is genetic, are some of the most controversial questions in science. But now a new study of nearly 80,000 people, published in Nature Genetics, has managed to identify a number of genes that seem to be involved in intelligence. According to a dictionary definition, intelligence is "the ability to learn, understand or deal with new situations" or "the ability to apply knowledge to manipulate one's environment or to think abstractly". This is obviously quite broad. Indeed, even animals display a number of different forms of intelligence, typically critical for survival. These range from reaching or gathering sources of food and escaping predators to the sharing of duties within a group (such as in ant communities). Elephants and monkeys also possess forms of empathy and care, which strengthen their relationships and chances of survival. Human intelligence started out as "reactive", enabling us to find solutions to the challenges of nature. But it later became "proactive", so that we could use the resources of nature to develop preventive measures aimed at solving problems. Ultimately, what makes human intelligence different from that of other animals is our ability to shape the environment, for example through farming. This became possible as we developed communities and started delegating tasks on the basis of talents. Once the acute problem of survival was under control, we could dedicate our intelligence to the development of arts and other higher skills. There are many factors that enable us to shape and nurture our intelligence, ranging from access to resources and information to skills acquired through experience and repetition. But, as with most human traits, there is also a genetic basis. The method used to measure intelligence in the new study was the so-called "g-factor", a measure of analytical intelligence. Although it might appear reductive to catalogue all types of intelligence through a single test, the g-factor is often used in scientific research as being among the most unbiased methods. The authors looked at such scores in 78,000 people of European descent to search for genetic factors and genes that potentially influence human intelligence. They carried out a genome-wide association study (GWAS). This assesses connections between a trait and a multitude of DNA markers called single-nucleotide polymorphisms, or SNPs, which might determine an individual's likelihood of developing a specific trait. The test enabled the researchers to identify 336 significant SNPs. Generally, the vast majority of significant SNPs found this way fall in non-coding regions of the DNA. In other words, they indicate portions of the DNA that may regulate gene expression even though the actual regulated gene is unknown. This makes the SNPs from GWAS hard to interpret.
So the authors then complemented their analysis with a so-called genome-wide gene association analysis (GWGAS), which calculates the effect of multiple SNPs within genes and can identify actual associated genes. They then combined both kinds of study to strengthen their confidence in naming the genes associated with intelligence. This work led to isolating 52 candidate genes linked to intelligence. Although 12 of these had been previously associated with intelligence, the findings still need to be replicated in future studies. The researchers discovered that the genes most strongly linked to intelligence are ones involved in pathways that play a part in the regulation of nervous system development and apoptosis (a normal form of cell death that is needed in development). The most significant SNP was found within FOXO3, a gene involved in insulin signalling that might trigger apoptosis. The most strongly associated gene was CSE1L, a gene involved in apoptosis and cell proliferation. Does this all mean that intelligence in humans depends on the molecular mechanisms that support the development and preservation of the nervous system throughout a person's lifespan? It's possible. And is it possible to explain intelligence through genetics? This paper suggests it is. Nevertheless, it might be warranted to consider that intelligence is a very complex trait, and even if genetics does play a role, environmental factors such as education, healthy living, access to higher education and exposure to stimulating circumstances or environments might play an equal or even stronger role in nurturing and shaping intelligence. It is also worth considering that the meaning of "intelligence" rather falls within a grey area. There might be different types of intelligence, or intelligence might be interpreted differently: in which category would, for example, a genius physicist unable to remember their way home (Albert Einstein) fall? Selective intelligence? Mozart nearly failed his admission tests to the Philharmonic Academy in Bologna because his genius was too wide and innovative to be assessed by rigid tests. Is that another form of selective intelligence? And if so, what's the genetic basis of this kind of intelligence? Studies like this are extremely interesting, and they do show we are starting to scratch the surface of what the biological basis of intelligence really is.

## Biological Intelligence

Biological intelligence is a new concept that's nearly four billion years old. How does your body create and use knowledge? Biological intelligence teaches your body to teach itself. Just like you educate your brain, you can teach your body. Lots of people have heard about artificial intelligence, or AI. But why haven't you heard about biological intelligence, or BI? Because most of biological intelligence is quiet and unconscious. You see your hair grow. You don't see your body learn. Take immunity, your ability to fight infection and cancer. Your ability to fight off the flu is not something you know about—unless you get sick. Look at cancer. We probably form dozens of tiny cancers each and every day. But we destroy almost all of them so effectively we never know. Biological intelligence engages all of immunity's remarkable tasks. One of the future tasks of biological intelligence is to teach the body to correctly recognize cancers that have been missed, then make the immune system go after them. Yet biological intelligence is much more than destroying viruses and tumors.
Biological intelligence is the basic stuff that keeps you going. It teaches your body to do what it needs to do. It's so big, so important, it's useful to compare it with artificial intelligence, the stuff transforming our lives and economy:

1. Biological intelligence engages all the conscious and unconscious knowledge of a human being. That immense field stretches from genetics to culture to society and psychology. Much of it is hardly understood. Your mother's arm that holds you in an embrace, the lover's hand that gently touches your cheek, and the little gestures that tell you you're loved will prove hard work for robots.

2. Biological intelligence is connected to everything inside you—every information system you use. You have an immune system, a cardiovascular system, a hormonal system, a muscular system, dozens of interconnected systems. Unlike most robots, the body doesn't do one thing at a time. It coordinates all the different information systems at the same moment. Can you presently conceptualize the systems that would let a robot laugh, cry, sing and dance, all while gauging the audience and telling a joke? Comedians do that. The size of your body's information systems dwarfs the complexity of the entire Internet. Our medical attempts to make ourselves not ill are generally far less impressive than the actions biological intelligence engages in every moment to keep us healthy.

3. Biological intelligence has different goals than artificial intelligence. What is it for? In our case, the survival of our species. Biological intelligence is built to keep humanity going. Normally that includes you, and me, and everyone we know—but not always. Having genes for sickle cell may help a population survive malaria, but can really hurt you in places without it. Diabetes genes may keep us alive during famines, but otherwise can really mess up your life. Biological intelligence wants the species to survive—not just us.

4. Unlike the artificial intelligence you experience in glitching software, biological intelligence has survived almost everything thrown at it. It survived the asteroids that wiped out the dinosaurs. It survived volcanoes that scorched and burnt the earth for millions of years. It survived plagues and pestilence. Most of the species on the planet are gone. Billions of species have disappeared. We're still here. Why? Because biological intelligence built us to survive.

5. So how does biological intelligence work? Here's the real trick: biological intelligence is built on contingency and chance. Stephen Jay Gould and others pointed that out long ago. Not only do we have genetic information systems that survived asteroids. We're built to survive comets, earthquakes, cataclysms and catastrophes that we've never seen—and that may never happen. Biological intelligence provides us with genes and physiology built to survive stresses that do not yet exist and may never exist. Chance rules the world, and we are built to survive all that chance can throw at us. Think of new illnesses, like AIDS. When AIDS first hit, it was terrifying. Yet many of us had inbuilt systems to hold it off, even before one effective drug was produced.

6. Unlike artificial intelligence, biological intelligence does not operate just within us, but over a huge ecosystem. That ecosystem is you. There are at least 40 trillion bacteria in your gut. They not only digest food, but now appear to change your mood, your ability to fight off infections, and how cancer drugs work.
There are at least 10 times more non-human cells in your body than human ones. Biological intelligence rules them all.

So biological intelligence is big. It does amazing things. Because we haven't thought of the body as intelligent and perpetually learning, we don't even know what many of those capacities are. That's the power of a system built on chance, a system created through billions of years of failures and triumphs. But here's the real take-home: knowing that your body is intelligent can make you more intelligent. Because what you do is what you become. Every moment of life is a teaching moment. Every moment potentially gives you chances to make your body more capable, more intelligent, smarter. Everything you do teaches your body something new. If you want to get smart, you need education. In school we teach our brains. Now we need to teach our bodies. Biological intelligence makes you more resilient, more capable, more able to produce and create, better able to avoid and fight off disease. We need to teach our bodies to teach themselves. The robots are advancing, at work and play. Artificial intelligence is getting smarter every day. So should you.

## Genes and Intelligence - Biology

By Diane Swanbrow, News Service

Environmental conditions are much more powerful than genetic influences in determining intelligence, social psychologist Richard Nisbett says. Recent research in psychology, genetics and neuroscience, and new studies on the effectiveness of educational interventions, have shown that intelligence is strongly affected by environmental factors that have nothing to do with genes, Nisbett says. In new research, Nisbett analyzes a large number of such studies, showing how environment influences not just IQ as measured by standardized tests but also actual achievement. "Believing that intelligence is under your control — and having parents who demand achievement — can do wonders," Nisbett writes in "Intelligence and How to Get It: Why Schools and Cultures Count," published Feb. 2 by W.W. Norton & Company Inc. For example, the high academic and occupational attainment of Asians and Jews is not due to higher IQs, but to family values that emphasize accomplishment and intellectual attainment, and to cultures that emphasize hard work and persistence. Likewise, genes play no role in race differences in IQ between blacks and Caucasians, Nisbett says. Class and race differences starting in early infancy combine with neighborhood, cultural and educational differences that widen this gap. "We need intensive early childhood education for the poor, and home visits to teach parents how to encourage intellectual development," Nisbett writes. "Such efforts can produce huge immediate gains in IQ and enormous long-term gains in academic achievement and occupational attainment. "Highly ambitious elementary, junior high and high school programs also can produce massive gains in academic achievement. And a variety of simple, cost-free interventions, including, most notably, simply convincing students that their intelligence is under their control to a substantial extent, can make a big difference to academic achievement." The United States has fallen behind most of the developed world in its level of educational achievement, Nisbett points out, attributing this deficit to the large and widening gaps between socioeconomic classes in this country. Being poor, he says, is linked with many environmental factors of a biological and social nature that lower IQ and academic achievement.
These factors include poor nutrition, inferior medical care, a low rate of breast-feeding and parenting styles that are much less warm and supportive than those of higher-socioeconomic-status parents. Not only are many U.S. blacks afflicted with these problems, they also struggle with stereotypes and prejudice that further depress performance. Nisbett singles out several educational intervention programs that have been shown to be effective in closing the racial and socioeconomic gap in school achievement. He also debunks the claims of success of other programs and techniques, including the No Child Left Behind Act.

## Hundreds of new genes may underlie intelligence—but also autism and depression

Being smart is a double-edged sword. Intelligent people appear to live longer, but many of the genes behind brilliance can also lead to autism, anxiety, and depression, according to two new massive genetic studies. The work is also one of the first to identify the specific cell types and genetic pathways tied to intelligence and mental health, potentially paving the way for new ways to improve education, or therapies to treat neurotic behavior. The studies provide some of the first "hard evidence of the many genes and pathways" that work together in complex ways to build smart brains and keep them in balance, says geneticist Peter Visscher of the Queensland Brain Institute at The University of Queensland in Brisbane, Australia, who was not involved in the work. Researchers have long known that people often inherit intelligence and some personality disorders from their parents. (Environmental factors such as education and stress also profoundly shape intelligence and mental health.) But geneticists have had trouble identifying more than a handful of genes associated with intelligence. Last year, researchers used new statistical methods that can detect strong associations between genes and specific traits to analyze health and genetic records in huge data sets. This led to the discovery of 52 genes linked to intelligence in 80,000 people. Now, the same team has added almost 1000 genes to that list. Researchers led by geneticist Danielle Posthuma of Vrije University in Amsterdam scoured 14 databases of health and genetic records to identify 939 new genes associated with intelligence in 250,000 individuals. (The data sets measured intelligence with scores on tests of abilities such as mathematics, synonyms, and logic.) Many variants of genes associated with higher intelligence turned up in people who also lived longer and did not have Alzheimer's disease, attention-deficit hyperactivity disorder, or schizophrenia, the team reports today in Nature Genetics, suggesting that intelligence protects against these disorders. On the downside, genes associated with intelligence correlated with a higher risk for autism. In a separate study also published today in Nature Genetics, Posthuma and her colleagues identified 500 genes associated with neurotic traits, such as anxiety and depression, by searching the health and genetic records of 449,400 individuals in large databases, such as the UK Biobank, a repository of information on the genetics, health and wellbeing of 500,000 British volunteers, and 23andMe, a personal genomics company in Mountain View, California, with genetic and health data on 5 million customers.
They also found that people who worried a lot had inherited different genes than those who were more likely to be depressed, suggesting that there are different underlying genetic pathways for those conditions. In both studies, the researchers used a new statistical method called MAGMA to quickly search genetic data to identify specific types of cells and tissues where the genes were expressed. Many genes for intelligence were expressed in the "medium spiny neurons", which are part of the basal ganglia, clusters of neurons deep in the brain involved in learning, cognition, and emotion. The researchers also identified many potential targets for developing new pharmaceutical drugs. "If you can understand the mechanisms at the cell level, you can also look at candidates for medication," Posthuma says. The same is true for genes for intelligence, she says, which could offer clues to new ways to protect against Alzheimer's and other disorders.

## New Theory: How Intelligence Works

Like memory, human intelligence is probably not confined to a single area in the brain, but is instead the result of multiple brain areas working in concert, a new review of research suggests. The review by Richard Haier of the University of California, Irvine, and Rex Jung of the University of New Mexico proposes a new theory that identifies areas in the brain that work together to determine a person's intelligence. "Genetic research has demonstrated that intelligence levels can be inherited, and since genes work through biology, there must be a biological basis for intelligence," Haier said. The review of 37 imaging studies, detailed online in the journal Behavioral and Brain Sciences, suggests that intelligence is related not so much to brain size or a particular brain structure, but to how efficiently information travels through the brain. "Our review of imaging studies identifies the stations along the routes intelligence information processing takes," Haier said. "Once we know where the stations are, we can study how they relate to intelligence." The new theory might eventually lead to treatments for low IQ, the researchers say, or to ways of boosting the IQ of people with normal intelligence. In their review, Haier and Jung compiled a list of all the brain areas previous neuroimaging studies had found to be related to intelligence, placing greater emphasis on those areas that appeared multiple times. The list they came up with suggests that most of the brain areas thought to play a role in intelligence are clustered in the frontal and parietal lobes. Furthermore, some of these areas are also related to attention and memory and to more complex functions such as language. The pair does not think this is a coincidence. In their Parieto-Frontal Integration Theory (P-FIT), they suggest that intelligence levels are based on how efficiently these brain areas communicate with one another. Haier says the new theory sidesteps the sticky question of what intelligence is, something that scientists have yet to agree on. "In every single study that we reviewed, there was a different measure of intelligence," Haier said. "There's controversy about what is the best measure of intelligence. There's controversy over how broad or narrow the definition of intelligence should be. Our work really goes beyond those questions and basically says that irrespective of the definition of intelligence you use in neuroimaging studies, you find a similar result."
Earl Hunt, a neuroscientist at the University of Washington who was not involved in the research, said the P-FIT model highlights the progress scientists have made in recent years toward understanding the biological basis of intelligence. "Twenty-five years ago researchers in the field were engaged in an unedifying discussion of the relation between skull sizes and intelligence test scores," Hunt said. Building upon previous work, Haier and Jung were also behind other important intelligence-related studies. In 2004, they found that regions related to general intelligence are scattered throughout the brain and that the existence of a single "intelligence center" was unlikely. And in a 2005 study, they found that while there is essentially no difference in general intelligence between the sexes, women have more white matter and men more gray matter. Gray matter represents information processing centers in the brain, and white matter links the centers together. The finding suggested that no single structure in the brain determines general intelligence and that different types of brain designs can produce equivalent intellectual performance. Knowing what determines intelligence might lead to treatments for diseases of intelligence like mental retardation, Haier said. "It would be important to know how intelligence works to determine if there's any way to treat low IQ," Haier told LiveScience. "If you can treat low IQ in mental retardation because you identify something wrong in the brain that's affecting intelligence, then that raises the question of whether you can raise IQ in people that don't necessarily have the brain injuries."

## Do genes affect our intelligence? The debate 'is over'

Researchers are now becoming confident enough to claim that the information available from sequencing a person's genome – the instructions encoded in our DNA that influence our physical and behavioural traits – can be used to make predictions about their potential to achieve academic success. All too often genes are read as destiny. But in truth there's rather little in your genetic make-up that fixes traits or behaviour with any clarity. There are some genetic diseases that particular gene mutations will give you if you're unlucky enough to inherit them. But most traits (including diseases) that are influenced by genes manifest only as tendencies. Partly this is because a lot of traits are influenced by many genes, interacting and correlating with one another in complex ways that are hard, perhaps impossible, to anticipate. But it's also because genes are themselves influenced by environmental factors, which can cause them to be activated or suppressed. The data both from twin studies and DNA analysis are unambiguous: intelligence is strongly heritable. Typically around 50 per cent of variation in intelligence between individuals can be ascribed to genes, although these gene-induced differences become markedly more apparent as we age. As [psychologist Stuart] Ritchie says: like it or not, the debate about whether genes affect intelligence is over.

## Genome-wide association meta-analysis of 78,308 individuals identifies new loci and genes influencing human intelligence

Intelligence is associated with important economic and health-related life outcomes. Despite intelligence having substantial heritability (0.54) and a confirmed polygenic nature, initial genetic studies were mostly underpowered. Here we report a meta-analysis for intelligence of 78,308 individuals.
We identify 336 associated SNPs (METAL P < 5 × 10⁻⁸) in 18 genomic loci, of which 15 are new. Around half of the SNPs are located inside a gene, implicating 22 genes, of which 11 are new findings. Gene-based analyses identified an additional 30 genes (MAGMA P < 2.73 × 10⁻⁶), of which all but one had not been implicated previously. We show that the identified genes are predominantly expressed in brain tissue, and pathway analysis indicates the involvement of genes regulating cell development (MAGMA competitive P = 3.5 × 10⁻⁶). Despite the well-known difference in twin-based heritability for intelligence in childhood (0.45) and adulthood (0.80), we show substantial genetic correlation (rg = 0.89, LD score regression P = 5.4 × 10⁻²⁹). These findings provide new insight into the genetic architecture of intelligence.

### Conflict of interest statement

The other authors declare no competing financial interests.

### Figures

Fig. 1. Regional association and linkage disequilibrium plots for 18 genome-wide significant loci.
Fig. 2. Results of SNP-based meta-analysis for intelligence based on 78,308 individuals.
Fig. 3. Gene-based genome-wide analysis for intelligence and genetic overlap with other traits.
http://stockgame1.iktogo.com/post/what-is-an-ico-in-cryptocurrency/
# What Is An ICO In Cryptocurrency?

ICO is short for Initial Coin Offering. When launching a new cryptocurrency or crypto-token, the developers offer investors a limited number of units in exchange for other major cryptocurrencies such as Bitcoin or Ethereum. ICOs are wonderful tools for quickly raising development funds to support new cryptocurrencies. Tokens sold in an ICO can be transferred and traded on cryptocurrency exchanges, assuming there is enough demand for them. The Ethereum ICO is one of the most notable successes, and the popularity of Initial Coin Offerings is growing as we speak.

A brief history of ICOs

Ripple is likely the first cryptocurrency distributed via an ICO. At the start of 2013, Ripple Labs began to develop the Ripple payment system and created about 100 billion XRP tokens. These were sold through an ICO to fund Ripple's platform development. Mastercoin is another cryptocurrency that sold a few million tokens for Bitcoin during an ICO, also in 2013. Mastercoin aimed to tokenize Bitcoin transactions and execute smart contracts by creating a new layer on top of the existing Bitcoin code. Of course, there are other cryptocurrencies that have been successfully funded through ICOs. Back in 2016, Lisk gathered approximately $5 million during its Initial Coin Offering. Nevertheless, Ethereum's ICO, which took place in 2014, is probably the most prominent one so far. During its ICO, the Ethereum Foundation sold ETH for 0.0005 Bitcoin each, raising almost $20 million. With Ethereum harnessing the power of smart contracts, it paved the way for the next generation of Initial Coin Offerings.

Ethereum's ICO, a recipe for success

Ethereum's smart contract system implements the ERC20 protocol standard, which sets the core rules for creating compliant tokens that can be transacted on Ethereum's blockchain. This allowed others to create their own tokens, compliant with the ERC20 standard, which can be traded for ETH directly on Ethereum's network. The DAO is a notable example of successfully using Ethereum's smart contracts. The investment organization raised $100 million worth of ETH, and the investors received DAO tokens in exchange, allowing them to take part in the governance of the platform. Regrettably, the DAO failed after it was hacked. Ethereum's ICO and its ERC20 protocol have defined the latest generation of crowdfunded blockchain-based projects via Initial Coin Offerings. It also made it very easy to invest in other ERC20 tokens: you simply transfer ETH, paste the contract address into your wallet, and the new tokens will show up in your account so you can use them however you please. Obviously, not all cryptocurrencies have ERC20 tokens living on Ethereum's network, but pretty much any new blockchain-based project can launch an Initial Coin Offering.

The legal status of ICOs

When it comes to the legality of ICOs, it's a bit of a jungle out there. In theory, tokens are sold as digital goods, not financial assets.
Most jurisdictions haven't regulated ICOs yet, so assuming the founders have a seasoned lawyer on their team, the whole process should be paperless. Even so, some jurisdictions have become aware of ICOs and are already working on regulating them in a manner similar to sales of shares and securities. Back in December 2017, the U.S. Securities and Exchange Commission (SEC) classified ICO tokens as securities. In other words, the SEC was preparing to halt ICOs it considers to be misleading to investors.
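As a footnote to the ERC20 discussion above: the standard is essentially a small interface (balanceOf, transfer, approve, transferFrom, and a few others) that every compliant token contract implements. The toy Python class below mimics that bookkeeping off-chain just to show the rules involved; real tokens are contracts running on Ethereum, and this class is purely illustrative.

```python
class ToyERC20:
    """Off-chain mock of the core ERC20 bookkeeping rules (illustrative only)."""

    def __init__(self, supply: int, creator: str):
        self.balances = {creator: supply}   # the creator starts with the whole supply
        self.allowances = {}                # (owner, spender) -> approved amount

    def balance_of(self, owner: str) -> int:
        return self.balances.get(owner, 0)

    def transfer(self, sender: str, to: str, amount: int) -> None:
        if self.balance_of(sender) < amount:
            raise ValueError("insufficient balance")
        self.balances[sender] -= amount
        self.balances[to] = self.balance_of(to) + amount

    def approve(self, owner: str, spender: str, amount: int) -> None:
        # e.g. let an exchange move a fixed amount of your tokens on your behalf
        self.allowances[(owner, spender)] = amount

    def transfer_from(self, spender: str, owner: str, to: str, amount: int) -> None:
        if self.allowances.get((owner, spender), 0) < amount:
            raise ValueError("allowance exceeded")
        self.allowances[(owner, spender)] -= amount
        self.transfer(owner, to, amount)

# A mock ICO: the project sells tokens to two hypothetical buyers.
token = ToyERC20(supply=1_000_000, creator="project")
token.transfer("project", "alice", 500)
token.transfer("project", "bob", 250)
print(token.balance_of("alice"), token.balance_of("bob"))  # 500 250
```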
https://www.atmos-chem-phys.net/20/6725/2020/
Atmos. Chem. Phys., 20, 6725–6747, 2020 https://doi.org/10.5194/acp-20-6725-2020 Research article | 08 Jun 2020

# Multiphase MCM–CAPRAM modeling of the formation and processing of secondary aerosol constituents observed during the Mt. Tai summer campaign in 2014

Yanhong Zhu1,2,a, Andreas Tilgner2, Erik Hans Hoffmann2, Hartmut Herrmann2,3, Kimitaka Kawamura4,b, Lingxiao Yang1, Likun Xue1, and Wenxing Wang1

• 1Environment Research Institute, Shandong University, 266237 Qingdao, China
• 2Leibniz Institute for Tropospheric Research (TROPOS), Atmospheric Chemistry Department (ACD), 04318 Leipzig, Germany
• 3School of Environmental Science and Engineering, Shandong University, 266237 Qingdao, China
• 4Institute of Low Temperature Science, Hokkaido University, Sapporo 060-0819, Japan
• a now at: Department of Atmospheric Sciences, School of Earth Sciences, Zhejiang University, 310012 Hangzhou, China
• b now at: Chubu Institute for Advanced Studies, Chubu University, Kasugai 487-8501, Japan

Correspondence: Likun Xue (xuelikun@sdu.edu.cn) and Hartmut Herrmann (herrmann@tropos.de)

Abstract

Despite the high abundance of secondary aerosols in the atmosphere, their formation mechanisms remain poorly understood. In this study, the Master Chemical Mechanism (MCM) and the Chemical Aqueous-Phase Radical Mechanism (CAPRAM) are used to investigate the multiphase formation and processing of secondary aerosol constituents during the advection of air masses towards the measurement site of Mt. Tai in northern China. Trajectories with and without chemistry–cloud interaction are modeled. Modeled radical and non-radical concentrations demonstrate that the summit of Mt. Tai, at an altitude of ∼1.5 km a.m.s.l., is characterized by a suburban oxidant budget. The modeled maximum gas-phase concentrations of the OH radical are 3.2×10⁶ and 3.5×10⁶ molec. cm⁻³ in simulations with and without cloud passages in the air parcel, respectively. In contrast with previous studies at Mt. Tai, this study has modeled in detail the chemical formation processes of secondary aerosol constituents under day vs. night and cloud vs. non-cloud cases along the trajectories towards Mt. Tai. The model studies show that sulfate is mainly produced in simulations where the air parcel is influenced by cloud chemistry. Under the simulated conditions, the aqueous reaction of $\mathrm{HSO}_3^{-}$ with H2O2 is the major contributor to sulfate formation, contributing 67 % and 60 % in the simulations with cloud and non-cloud passages, respectively. The modeled nitrate formation is higher at nighttime than during daytime. The major pathway is aqueous-phase N2O5 hydrolysis, with a contribution of 72 % when cloud passages are considered and 70 % when they are not. Secondary organic aerosol (SOA) compounds, e.g., glyoxylic, oxalic, pyruvic and malonic acid, are found to be mostly produced from the aqueous oxidations of hydrated glyoxal, hydrated glyoxylic acid, nitro-2-oxopropanoate and hydrated 3-oxopropanoic acid, respectively. Sensitivity studies reveal that gaseous volatile organic compound (VOC) emissions have a huge impact on the concentrations of modeled secondary aerosol compounds.
Increasing the VOC emissions by a factor of 2 leads to linearly increased concentrations of the corresponding SOA compounds. Studies using the relative incremental reactivity (RIR) method have identified isoprene, 1,3-butadiene and toluene as the key precursors for glyoxylic and oxalic acid, but only isoprene is found to be a key precursor for pyruvic acid. Additionally, the model investigations demonstrate that an increased aerosol partitioning of glyoxal can play an important role in the aqueous-phase formation of glyoxylic and oxalic acid. Overall, the present study is the first to provide more detailed insights into the formation pathways of secondary aerosol constituents at Mt. Tai, and it clearly emphasizes the importance of aqueous-phase chemical processes for the production of multifunctional carboxylic acids.

1 Introduction

Secondary aerosols are more abundant than primary aerosols (Volkamer et al., 2006). Their constituents are formed on a regional scale and transported over long distances and thus have a direct impact on the air quality of a wider area (Kim et al., 2007; Matsui et al., 2009; DeCarlo et al., 2010). Secondary aerosols are usually divided into two classes: secondary inorganic aerosol (SIA) and secondary organic aerosol (SOA). A number of studies have been conducted that aimed to investigate their formation mechanisms (Yao et al., 2002; Duan et al., 2006; Wang et al., 2006; Guo et al., 2010; Zhao et al., 2013). The SIA components, including sulfate, nitrate and ammonium, are important contributors to fine particulate matter (PM2.5) and play an important role in haze formation (Volkamer et al., 2006; Sun et al., 2014; Wang et al., 2014; Zhang et al., 2014). The SIA formation processes are relatively well understood, but some uncertainty still remains, for example concerning multiphase formation, particularly under highly polluted conditions such as those in China (Wang et al., 2014; G. Wang et al., 2016). SOA is also a key component of PM2.5 and is linked to adverse health effects, visibility reduction and climate change (Tabazadeh, 2005; Seagrave et al., 2006; De Gouw and Jimenez, 2009; Shrivastava et al., 2017). However, its formation mechanisms are still not well understood (Huang et al., 2014). Dicarboxylic acids and related compounds (oxo-carboxylic acids and α-dicarbonyls) (DCRCs) are ubiquitous water-soluble components of SOA (Kawamura and Sakaguchi, 1999; Kawamura and Yasui, 2005; Pavuluri et al., 2010). They are mainly produced by secondary processes from precursors via gas-phase and subsequent aqueous-phase reactions (Glasius et al., 2000; Legrand et al., 2007; Kundu et al., 2010; Tilgner and Herrmann, 2010). A detailed knowledge of the formation processes of DCRCs is helpful to better understand the fate of SOA in the troposphere. A number of studies have proposed that aromatic hydrocarbons, isoprene and ethene are important precursors for DCRCs (Warneck, 2003; Ervens et al., 2004; Bikkina et al., 2014; Tilgner and Herrmann, 2010). However, modeling studies that focus on understanding DCRC formation pathways based on field measurements are limited. Additionally, model studies show growing evidence that substantial amounts of DCRCs are formed by aqueous-phase reactions within aerosol particles, clouds and fog droplets (Sorooshian et al., 2006; Carlton et al., 2007, 2009; Ervens et al., 2008, 2011; Ervens, 2015; Tilgner and Herrmann, 2010; Tilgner et al., 2013; Mouchel-Vallon et al., 2017).
Nevertheless, the applied mechanisms are still incomplete, and the formation processes are therefore not completely understood. Hence, in this study, a near-explicit multiphase model is applied to investigate the chemical processing of DCRCs in both the gas and aqueous phases in order to understand the formation processes and the fate of DCRCs in the atmosphere. The present study focuses on the multiphase formation mechanism of key secondary aerosol constituents measured in June 2014 at Mt. Tai, which is the highest mountain on the North China Plain (NCP). Mt. Tai is located in Shandong province on the NCP, between the Bohai Rim (BHR) and the Yangtze River Delta (YRD) regions. Together, the BHR and YRD regions had a population of more than 450 million in 2018 (China Statistical Yearbook, 2019). In summer, clouds frequently occur over the summit of Mt. Tai. Despite a small amount of emissions from temples and small restaurants at Mt. Tai's peak, the sampling site on top of the Air Force Hotel, Houshiwu, was typically not much influenced by tourists and temples (Sun et al., 2016). The special altitude and geographical location of Mt. Tai make it a suitable site to measure regional secondary aerosol constituents and to investigate their formation pathways along the advection to the measurement site. The detailed objectives of the present study are as follows: (i) characterization of modeled radical and non-radical oxidant concentrations; (ii) assessment of modeled concentrations and formation processes of key secondary inorganic compounds; (iii) study of modeled concentrations of DCRCs and a comparison with field observations to assess the model predictions; (iv) investigation of source and sink pathways of selected DCRCs; (v) examination of the impact of emission data on modeled secondary aerosol concentrations; (vi) identification of the key precursors of selected DCRCs; and (vii) assessment of the impact of higher glyoxal (Gly) partitioning constants on the modeled concentrations of Gly, glyoxylic acid (ωC2) and oxalic acid (C2).

2 Multiphase modeling and model setup

Detailed descriptions of the sampling site, the sampling instruments and the analysis methods can be found in a previous publication (Zhu et al., 2018). Campaign observation data, meteorological conditions and corresponding findings are also given there. The sampling period was from 4 June to 4 July 2014. The meteorological data during the campaign were as follows: temperatures ranged from 10 to 25 °C, with an average of 17±6.2 °C; relative humidity (RH) ranged from 58 % to 100 %, with an average of 87±13 %; the prevailing wind direction was northwest; and wind speeds ranged from 1 to 7 m s⁻¹. The weather conditions were mostly cloudy and occasionally foggy. Source identification indicated that DCRCs were mainly derived from anthropogenic activities followed by photochemical aging. Secondary sources, fuel combustion, photooxidation of unsaturated fatty acids and waste burning were also significant sources. DCRCs, e.g., C2, pyruvic acid (Pyr) and ωC2, exhibited mostly similar concentrations during daytime and nighttime (Zhu et al., 2018). These results differed from previous studies at urban (New Delhi, India) (Miyazaki et al., 2009), suburban (the northern vicinity of Beijing) (He et al., 2013) and mountain sites (central Himalayas) (Hegde and Kawamura, 2012), where diurnal variations were observed.
Therefore, this study focuses on the formation processes of these compounds to better understand their similar daytime and nighttime concentrations under summer conditions at Mt. Tai.

## 2.1 Model and mechanism description

In this study, we applied the air parcel model SPACCIM (SPectral Aerosol Cloud Chemistry Interaction Model, Wolke et al., 2005) to simulate multiphase chemistry along the main trajectories during the simulated campaign period. SPACCIM combines a multiphase chemical model with a cloud microphysical model, simulating aqueous-phase chemistry in deliquesced particles and cloud droplets. The cloud microphysical model applied in SPACCIM is based on the work of Simmel and Wurzler (2006) and Simmel et al. (2005). Droplet formation, evolution and evaporation are realized by a one-dimensional sectional microphysics scheme considering deliquesced particles and cloud droplets. In the present study, the moving-bin version of SPACCIM has been applied. The model considers the growth and shrinkage of aerosol particles by water vapor diffusion and nucleation as well as the growth and evaporation of cloud droplets. The dynamic growth rate in the condensation and evaporation process and the droplet activation are based on Köhler theory. Due to the emphasis on complex multiphase chemistry, other microphysical processes, such as the impaction of aerosol particles, the collision and coalescence of droplets and thus precipitation, were not considered in the present study. Nevertheless, the complex model enables detailed investigations of the multiphase chemical processing of gases, deliquesced particles and cloud droplets. More detailed descriptions of SPACCIM can be found in Wolke et al. (2005), Sehili et al. (2005) and Tilgner et al. (2013). However, SPACCIM cannot reflect the complexity of (i) the tropospheric mixing processes along the transport, (ii) further aerosol particle microphysical processes (e.g., nucleation, aggregation, etc.) or (iii) the effects of nonideal solutions on the occurring multiphase chemistry. These limitations have to be kept in mind when studying deliquesced particles and comparing predicted and observed concentrations at Mt. Tai. The potential limitations of an ideal solution assumption compared to a nonideal treatment are discussed in a recent paper by Rusumdar et al. (2020). The applied multiphase chemistry mechanism comprises the Master Chemical Mechanism (MCM3.2 scheme with 13 927 reactions, http://mcm.leeds.ac.uk/MCM/, last access: 16 August 2019; Jenkin et al., 2003; Saunders et al., 2003) and the Chemical Aqueous-Phase Radical Mechanism (CAPRAM4.0 scheme with 7129 reactions; Bräuer et al., 2019). MCM3.2 is a near-explicit gas-phase chemistry mechanism, while CAPRAM4.0 explicitly describes the aqueous-phase chemical and phase transfer processes of inorganic compounds and of organic compounds with up to four carbon atoms. Phase exchange processes (275 in total) are implemented based on the kinetic resistance model of Schwartz (1986), in which the mass accommodation coefficient, the gas-phase diffusion coefficient and the Henry's law constant are considered.
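To illustrate the resistance formulation of Schwartz (1986) named above, the following minimal Python sketch computes the first-order mass-transfer coefficient from the droplet radius, the gas-phase diffusion coefficient and the mass accommodation coefficient. The function names and the example parameter values (H2O2-like molar mass, diffusivity and accommodation coefficient) are illustrative assumptions and do not reproduce the CAPRAM implementation.

```python
import math

R = 8.314  # universal gas constant (J mol-1 K-1)

def mean_molecular_speed(T, M):
    """Mean thermal speed (m s-1) of a gas with molar mass M (kg mol-1)."""
    return math.sqrt(8.0 * R * T / (math.pi * M))

def schwartz_kmt(r, D_g, alpha, T, M):
    """First-order mass-transfer coefficient kmt (s-1) of the kinetic
    resistance model of Schwartz (1986):
        kmt = ( r^2 / (3 D_g) + 4 r / (3 v alpha) )^-1
    r     : droplet radius (m)
    D_g   : gas-phase diffusion coefficient (m2 s-1)
    alpha : mass accommodation coefficient (dimensionless)
    """
    v = mean_molecular_speed(T, M)
    return 1.0 / (r**2 / (3.0 * D_g) + 4.0 * r / (3.0 * v * alpha))

# Illustrative H2O2-like values for a 10 um cloud droplet at 288 K
kmt = schwartz_kmt(r=10e-6, D_g=1.46e-5, alpha=0.05, T=288.0, M=0.034)
print(kmt)  # on the order of 1e5 s-1
```

In this formulation, the first term represents the gas-phase diffusion resistance and the second the interfacial (accommodation) resistance; together with the Henry's law constant, the resulting coefficient controls how fast a trace gas equilibrates between the gas phase and the droplet phase.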
## 2.2 Trajectories and performed model simulations

Zhu et al. (2018) have shown that, during the sampling period (4 June–4 July 2014), air masses arriving at Mt. Tai mainly came from the north (named cluster 2) and the south (named cluster 4) (Fig. S1 in the Supplement). The two clusters accounted for 79 % of the total trajectories. Moreover, the sum of the DCRC concentrations in clusters 2 and 4 amounted to 73 % of the total DCRC concentration during the sampling period. Therefore, in this study, we selected clusters 2 and 4 to simulate and investigate the formation processes and the fate of DCRCs. Additionally, Zhu et al. (2018) have clearly shown that biomass burning was an important source only during the first half of the sampling period (4–19 June). The aim of the study was to investigate the secondary formation of aerosol constituents along the trajectories towards Mt. Tai. However, biomass burning can be an important primary source of compounds that are often of secondary origin. Therefore, in this study, we focused on the period that was less impacted by biomass burning. In addition, both clusters 2 and 4 exhibited a rather stable transport above the mixing layer to the Mt. Tai site. A total simulation time of 96 h (4 d) is chosen, representing a typical aerosol lifespan (Williams et al., 2002). The first 24 h are considered a model initialization day. Thus, only the model results from 24 to 96 h are presented in this study. With the help of the measured RH at Mt. Tai, the meteorological values of clusters 2 and 4 obtained by HYSPLIT4.9 (Draxler and Rolph, 2003) and MODIS satellite pictures (Li et al., 2005), we have ascertained that clouds most likely occurred at the summit of Mt. Tai and were advected towards Mt. Tai at the altitude of the trajectories (Zhu et al., 2018). Radiosonde data (http://weather.uwyo.edu/upperair/sounding.html, last access: 28 August 2019) also show that clouds could occur along the trajectories of clusters 2 and 4. Cloud interactions are considered on the basis of the measured RH values at Mt. Tai and the RH values in the radiosonde data at about 1500 m. The fractions of RH values above 90 % are calculated and used as a proxy for the time a trajectory likely spent inside clouds. For cluster 2, three 1.28 h daytime and three 1.92 h nighttime cloud cycles are repeated every 24 h. For cluster 4, the daytime and nighttime cloud durations are 1.28 and 1.68 h, respectively, and are also repeated every 24 h. In order to better identify the impact of cloud droplet chemistry, we have also considered a model scenario without clouds, in which RH is set to 70 % from 24 to 96 h. Acronyms for the different model simulations performed in this study can be found in Table 1. We have also carried out sensitivity runs, investigating the following three aspects: (i) the impact of the considered emission data on modeled secondary aerosol concentrations; (ii) the identification of the key precursors of C2, Pyr and ωC2 by means of the relative incremental reactivity (RIR) (Xue et al., 2014); and (iii) the assessment of the impact of increased Gly aerosol partitioning on the concentrations of Gly, ωC2 and C2. The RIR is defined as the ratio of the decrease in the maximum concentration of a DCRC to the decrease in the emissions of the corresponding precursor (a 20 % emission reduction is adopted in this study).
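As a minimal sketch of this definition (assuming, in the style of Xue et al., 2014, that both changes are expressed as relative changes), the RIR of a single precursor could be computed as follows; the function name and the example numbers are hypothetical:

```python
def rir(c_max_base, c_max_reduced, emission_cut=0.20):
    """Relative incremental reactivity (RIR) of a single precursor.

    c_max_base    : maximum DCRC concentration in the base run
    c_max_reduced : maximum DCRC concentration in the run with the
                    precursor emissions reduced by `emission_cut`
    A negative RIR means the emission cut enhances DCRC formation.
    """
    relative_conc_decrease = (c_max_base - c_max_reduced) / c_max_base
    return relative_conc_decrease / emission_cut

# Hypothetical example: a 20 % isoprene emission cut lowers the
# oxalic acid maximum from 1.00 to 0.94 (arbitrary units) -> RIR = 0.3
print(rir(1.00, 0.94))
```

One run per precursor group is thus sufficient to rank the precursors, which is how the RIR values in Sect. 3.5.2 are obtained.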
Table 1. Acronyms of the performed model simulations.

## 2.3 Chemical and physical model initialization

Zhu et al. (2018) have reported that the pollutant concentrations during the campaign at Mt. Tai were largely controlled by long-range transport. The formation processes of secondary aerosols during long-range transport strongly depend on the emissions of precursors. Therefore, emission data for the areas passed over by the cluster 2 and 4 trajectories are implemented in the model. Biogenic emission data (isoprene, α- and β-pinenes) are obtained from the Emissions of atmospheric Compounds and Compilation of Ancillary Data (ECCAD) MEGAN-MACC dataset (https://eccad.aeris-data.fr/, last access: 19 September 2019), while the other input emission data, i.e., volatile organic compounds (VOCs), including alkanes, alkenes, aromatics, aldehydes and alcohols; CO; CO2; NH3; NO; NO2; and SO2, are obtained from a new anthropogenic emission inventory in Asia (http://www.meicmodel.org/dataset-mix.html, last access: 15 September 2019) (M. Li et al., 2017). The emission data applied in the model can be found in Table S1 in the Supplement. The deposition velocities used in SPACCIM were taken from Ganzeveld et al. (1998) and are presented in Table S2. Additionally, the multiphase chemistry mechanism is constrained by the initial concentrations of key species in the corresponding areas of origin. The initial chemical data include gas-phase concentrations of inorganic gases (NO, NO2, O3, SO2, HNO3, NH3, H2O2) and VOCs (including alkanes, alkenes, aromatics, aldehydes, alcohols and ketones) (Barletta et al., 2005; Duan et al., 2008; An et al., 2009; Liu et al., 2009; He et al., 2010; Ianniello et al., 2011; Meng et al., 2011; Li et al., 2011; Liu et al., 2012b; P. S. Zhao et al., 2013; Y. H. Wang et al., 2014; J. Li et al., 2015; L. Y. Li et al., 2015; Rao et al., 2016; Y. Wang et al., 2016) as well as particle-phase data (Hu et al., 2015; Wang et al., 2015; Sun et al., 2013, 2015; Liu et al., 2014). In cases with missing values, values are taken from the CAPRAM urban scenario (http://projects.tropos.de/capram, last access: 15 July 2019, Herrmann et al., 2005). The initial model data and the aerosol parameters are given in Tables S3 and S4, respectively. The meteorological scenarios are illustrated in Fig. S2.

3 Model results and discussion

## 3.1 Modeled concentrations of important oxidants

Due to the key role of radical and non-radical oxidants in the formation processes of secondary aerosol constituents, the variations in their concentrations and the underlying reasons are investigated. Several publications have already focused on the oxidant budget in China. Kanaya et al. (2009) modeled gas-phase concentrations of the oxidants OH, HO2 and O3 at Mt. Tai using the Regional Atmospheric Chemistry Mechanism (RACM), but they did not consider the effects of aqueous-phase conditions such as clouds. Kanaya et al. (2013) only observed gas-phase O3 concentrations at Mt. Tai and modeled the photochemical O3 production processes with RACM. Suhail et al. (2019) only observed gas-phase NO3 concentrations at the Wangdu site on the NCP. Ren et al. (2009) and Y. Wang et al. (2016) reported measurements of gas-phase peroxides at Mt. Tai and the Wangdu site on the NCP, respectively, but no other radical or non-radical oxidants. Liu et al. (2012a) modeled gas-phase concentrations of OH, HO2 and RO2 and investigated ROx budgets and O3 formation in Beijing on the NCP using a 1-D photochemical model. These studies investigated the radical and non-radical oxidants only in part and lacked an investigation of aqueous-phase reactions. To our knowledge, this is the first detailed multiphase chemical modeling study examining radical and non-radical oxidant concentrations along the trajectory to Mt. Tai under day vs. night and cloud vs. non-cloud conditions.
Figure 1. Time series of the modeled gas-phase (a, b, c: molec. cm−3) and aqueous-phase (d, e, f: mol L−1) radical oxidant concentrations of the C2w and C2wo cases, respectively (light blue column: cloud; shadow: night; a: aqueous phase). For acronyms, see Table 1.

Figure 1 shows the modeled gas- and aqueous-phase concentrations of important radical oxidants in the C2w and C2wo cases. The gas- and aqueous-phase OH, HO2 and NO3 radical concentrations and the corresponding time profiles in the C4w and C4wo cases are quite similar to those in the C2w and C2wo cases. Therefore, the corresponding plots of the C4w and C4wo cases are presented in Fig. S3. Due to photochemistry, the gas-phase OH and HO2 oxidants show a distinct diurnal profile, but they are significantly influenced by cloud occurrences. Under daytime cloud droplet conditions, OH and HO2 concentrations in the gas phase decrease by about 90 % and 98 %, respectively. The reduction of the gas-phase OH radical concentration in daytime clouds is mainly caused by the weakened gas-phase formation via the HO2+NO reaction. Due to its high water solubility, the direct phase transfer of HO2 into cloud droplets contributes significantly to its gas-phase concentration reduction. The NO3 radical exhibits higher gas-phase concentrations during the night than during the day. Its levels are also substantially lower under in-cloud conditions: under nighttime cloud droplet conditions, gas-phase NO3 concentrations decrease by about 72 %. The decreased NO3 radical concentration mainly results from the increased N2O5 uptake under in-cloud conditions. Unfortunately, we did not perform measurements of key radicals during the campaign. However, the simulated maxima of the gas-phase concentrations of OH (C2w: 3.2×106; C2wo: 3.5×106 molec. cm−3) and HO2 (C2w: 2.9×108; C2wo: 3.8×108 molec. cm−3) for Mt. Tai are discussed here in comparison to both measurements and previous modeling studies. Compared with the maximum OH (6.0×106 molec. cm−3) and HO2 (7.0×108 molec. cm−3) concentrations modeled for Mt. Tai in June 2006 using a photochemical box model based on the Regional Atmospheric Chemistry Mechanism (RACM) (Kanaya et al., 2009), the OH and HO2 concentrations reported here are only slightly lower. Moreover, the modeled OH and HO2 concentrations in this study are lower than the results simulated for the Chinese megacity of Beijing in August 2007 using a 1-D photochemical model (Regional chEmical and trAnsport Model, REAM-1D) driven by the standard GEOS-Chem gas-phase chemistry mechanism (OH: 9×106; HO2: 6.8×108 molec. cm−3) (Liu et al., 2012a), and much lower than the data measured by laser-induced fluorescence (LIF) at a rural site downwind of the megacity of Guangzhou, China, on 3–30 July 2006 (OH: 15–26×106; HO2: 3–25×108 molec. cm−3) (Lu et al., 2012). Additionally, the simulated NO3 radical maxima (C2w: 1.0×108; C2wo: 1.5×108 molec. cm−3) are much lower than those observed at the urban site of Shanghai, China, by differential optical absorption spectroscopy (DOAS) from 15 August to 7 October 2011 (2.5×109 molec. cm−3) (S. Wang et al., 2013). These comparisons suggest that the predicted model data represent a suburban oxidant budget along the trajectory above the boundary layer to Mt. Tai. Similar to the gas phase, the aqueous-phase concentrations of OH and HO2 also display a distinct diurnal profile with peak concentrations around noon and are largely impacted by cloud interactions.
Under daytime cloud droplet conditions, aqueous-phase OH concentrations are increased by a factor of 3, mainly due to the increased direct transfer of OH from the gas phase. Compared with OH concentrations determined for remote clouds in laboratory studies (average: 7.2×10−15 mol L−1; Arakaki et al., 2013), the modeled average aqueous-phase OH concentration in daytime clouds (9.6×10−14 mol L−1) is much higher. The difference between measured and modeled OH concentrations is comprehensively discussed in Tilgner and Herrmann (2018). The chapter outlined that both model results and laboratory investigations of field samples are biased. On the one hand, a more comprehensive aqueous-phase mechanism tends to lower the OH predictions due to the higher number of possible OH sinks. On the other hand, laboratory investigations of field samples most likely tend to underestimate the OH sources due to the limitations of present offline methods. For instance, on the way from the measurement site to the laboratory, OH can still be consumed by oxidation processes that cannot be resolved by the laboratory protocol, and OH sources related to the uptake of OH precursors (H2O2, ROOHs, etc.) are also excluded. Therefore, an adequate comparison is rather difficult at present. In contrast to OH, the HO2 aqueous-phase concentrations are decreased by a factor of 0.9 due to aqueous-phase reactions of HO2 with Cu2+ or Cu+. However, higher HO2 aqueous-phase concentrations appear at the beginning of the daytime cloud. This is caused by the increased aqueous H2O2 concentrations due to uptake at the beginning of the daytime cloud period. In the aqueous phase, reactions of H2O2 with Mn3+ or FeO2+ lead to the formation of HO2.

Table 2. Average aqueous-phase concentrations (mol L−1) of modeled radical and non-radical oxidants in the different simulations.

The NO3 radical presents higher aqueous-phase concentrations during the night, with a minor cloud impact. Under nighttime in-cloud conditions, the NO3 radical concentration decreases by about 13 %. In Table 2, the average aqueous-phase concentrations of OH, HO2 and NO3 under the different conditions are presented. Aqueous-phase NO3 concentrations are much higher than those of aqueous-phase OH. Considering the normally lower reaction rate constants of the aqueous NO3 radical compared to aqueous OH (Herrmann et al., 2010, 2015; Ng et al., 2017), together with its much higher concentrations, both OH and NO3 radicals are able to initiate aqueous-phase oxidations under the simulated conditions, contributing to the chemical processing of SOA constituents.

Figure 2. Time series of modeled gas-phase (a, b: ppbv) and aqueous-phase (c, d: mol L−1) non-radical oxidant concentrations in the C2w and C2wo cases. Light blue column: cloud; shadow: night; a: aqueous phase; green triangle: the maximum (top), average (middle) and minimum (bottom) values of the measured concentrations at Mt. Tai.

Figure 2 depicts the modeled gas- and aqueous-phase concentrations of H2O2 and O3 for the C2w and C2wo cases. The corresponding plots for the C4w and C4wo cases can be found in the Supplement (Fig. S4) because of their similar concentration levels and patterns. Figure 2 shows that, due to active photochemistry, the gas-phase concentrations of H2O2 and O3 mainly increase during the day and decrease during the night.
During cloud periods, H2O2 gas-phase concentrations are decreased by 96 % due to the direct phase transfer and the corresponding aqueous-phase chemistry. The predicted cloud impact is minor for gas-phase O3 concentrations, which is consistent with former studies (Tilgner et al., 2013). During daytime cloud periods, O3 gas-phase concentrations are decreased by about 4 %. In the C2wo case, H2O2 concentrations are substantially higher than those in C2w because of the missing cloud-phase sinks (e.g., hydrogensulfite (HSO3−) oxidation). However, the O3 concentrations in C2wo and C2w are similar because of the minor cloud impact on O3 in C2w. For comparison, the measured gas-phase O3 concentrations at Mt. Tai ranged from 78.6 to 108.3 ppbv (Fig. 2), which is typical of a Chinese suburban regime (Y. Wang et al., 2013). Remarkably, these concentrations are reached even at the high altitude of Mt. Tai. Additionally, the simulated maximum gas-phase H2O2 concentrations (C2w: 1.0; C2wo: 2.3 ppbv) are lower than those observed at a rural site downwind of the more polluted area of Hebei, China (11.3 ppbv) (Y. Wang et al., 2016). The simulated O3 maxima (C2w: 94.2; C2wo: 105.1 ppbv) are slightly lower than those observed in the Nanjing urban area in China (133.9 ppbv) (An et al., 2015). The aqueous-phase H2O2 presents a concentration pattern similar to that of the gas-phase H2O2. The higher H2O2 aqueous-phase concentrations at the beginning of the daytime cloud are caused by the fast phase transfer. The subsequent sharp decrease during the first minutes is connected to the efficient hydrogensulfite oxidation. In the C2wo case, aqueous-phase O3 concentrations increase during the day and decrease throughout the night. In both daytime and nighttime clouds, O3 aqueous-phase concentrations are increased by about 18 %. The average aqueous-phase concentrations of H2O2 and O3 in the different cases can be found in Table 2.

Figure 3. Time series of the modeled aerosol mass concentrations (µg m−3) of key secondary inorganic aerosol constituents (sulfate, nitrate and ammonium) in the C2w and C2wo cases. Light blue column: cloud; shadow: night; green triangle: the maximum (top), average (middle) and minimum (bottom) values of the measured concentrations at Mt. Tai.

## 3.2 Modeled concentrations and formation of key secondary inorganic aerosol constituents

In Fig. 3, the modeled concentrations of the most important SIA constituents are plotted, including (i) sulfate (sum of all sulfur compounds with oxidation state +6, mainly SO42−/HSO4−), (ii) nitrate (sum of all nitrogen compounds with oxidation state +5, mainly NO3−) and (iii) ammonium (sum of all nitrogen compounds with oxidation state −3, mainly NH4+). The corresponding plots for the C4w and C4wo cases are presented in Fig. S5.

### 3.2.1 Sulfate

Field observations, together with sulfur oxidation rates estimated by a tracer method in previous studies at Mt. Tai, have suggested that sulfate formation is closely related to cloud chemistry (Zhou et al., 2009; Shen et al., 2012; Guo et al., 2012). However, these studies were not able to comprehensively quantify the impact of cloud chemistry on the sulfate concentration and did not investigate the chemical formation pathways of sulfate during the transport to Mt. Tai in detail. In this study, we primarily present the modeled sulfate concentrations and discuss the differences between the day vs. night and cloud vs. non-cloud cases using a multiphase chemistry model.
Moreover, findings on the chemical source and sink reactions of sulfate are presented for the different model cases.

Figure 4. Modeled multiphase (gas + aqueous phase) source and sink fluxes of sulfate and nitrate (light blue column: cloud; shadow: night). Percentages are given for the third model day.

Figure 3 shows that sulfate concentrations mainly increase under in-cloud conditions throughout the whole simulation due to active in-cloud chemical sulfur oxidation pathways. Although the in-cloud residence time is slightly longer during the night, sulfate concentrations increase more in the daytime clouds (35 %) than in the nighttime clouds (15 %) because of the enhanced aqueous-phase reaction of HSO3− with H2O2 in daytime cloud droplets (Fig. 4). As shown in Fig. 4, about 97 % of the sulfate formation fluxes are related to in-cloud conditions, and they occur mostly in daytime clouds. The aqueous-phase reaction of HSO3− with H2O2 represents the most important sulfate source in daytime clouds, with a contribution of 78 %. In nighttime clouds, the aqueous-phase reactions of HSO3− with H2O2 (42 %) and with O3 (28 %) are the dominant pathways of sulfate formation. In the C2wo case, sulfate concentrations gradually increase over time (Fig. 3). The highest increase occurs during the day as a consequence of the gas-phase SO2 oxidation by OH (Fig. S6). However, the formed sulfate mass in the C2wo case is substantially lower than in the C2w case. At the end of the simulation, the modeled sulfate concentrations are 76.7 and 24.7 µg m−3 in the C2w and C2wo cases, respectively. Compared to the maximum (40.0 µg m−3), average (32.0 µg m−3) and minimum (18.8 µg m−3) values of the measured sulfate concentrations at Mt. Tai (Fig. 3), SPACCIM overestimates the measured sulfate concentrations in the C2w case but slightly underestimates the mean sulfate concentration in the C2wo case.
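The dominance of the H2O2 pathway in daytime clouds can be illustrated with the classical acid-catalyzed rate law for the aqueous oxidation of S(IV) by H2O2. The sketch below is illustrative only: the rate constants are common literature values, not necessarily the kinetics evaluated in CAPRAM4.0.

```python
def sulfate_formation_rate(h2o2, hso3, h_plus, k=7.45e7, K=13.0):
    """Aqueous-phase S(IV) oxidation rate by H2O2 (mol L-1 s-1).

    Classical acid-catalyzed rate expression,
        R = k [H+][H2O2][HSO3-] / (1 + K [H+]),
    with illustrative literature values k ~ 7.45e7 M-2 s-1 and
    K ~ 13 M-1. All concentrations are in mol L-1.
    """
    return k * h_plus * h2o2 * hso3 / (1.0 + K * h_plus)

# Example: a pH 4.5 cloud droplet with 1e-5 M H2O2 and 1e-6 M HSO3-
rate = sulfate_formation_rate(h2o2=1e-5, hso3=1e-6, h_plus=10**-4.5)
print(rate)  # roughly 2e-8 mol L-1 s-1
```

Because the HSO3− concentration rises with pH roughly as fast as the [H+] factor in the numerator falls, the overall H2O2 pathway is nearly pH independent, which helps explain why it outcompetes the strongly pH-dependent O3 pathway in moderately acidic daytime cloud droplets.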
### 3.2.2 Nitrate

Studies at Mt. Tai focusing on nitrate have suggested that the photochemical formation of HNO3 contributes importantly to nitrate formation (Zhou et al., 2009; Guo et al., 2012). Wen et al. (2018) found the partitioning of HNO3 from the gas to the particulate phase and the hydrolysis of N2O5 to be the predominant daytime and nighttime formation pathways of nitrate at Mt. Tai, respectively. However, these studies did not consider nitrate formation processes related to in-cloud conditions. In this study, we not only focus on the concentration time profiles of nitrate under day vs. night conditions but also compare the cloud vs. non-cloud cases. Furthermore, we have analyzed the nitrate source and sink reaction rates and their contributions in the different cases. As can be seen in Fig. 3, nitrate concentrations increase throughout the simulation. Under in-cloud conditions, nitrate concentrations increase by about 10 % during the day and 24 % during the night. The concentration time profiles in the C2w and C2wo cases show only small differences, indicating that most of the nitrate formation occurs during non-cloud periods. Therefore, the end concentrations of C2w and C2wo do not differ significantly. An analysis of the chemical sources and sinks in the C2w case (Fig. 4) has revealed that nitrate is mainly produced by aqueous-phase N2O5 hydrolysis during the night. During the day, nitrate is mainly produced by the aqueous-phase reaction of HNO4 with HSO3−, the gas-phase reaction of OH with NO2 and aqueous-phase N2O5 hydrolysis. A comparison of the daytime and nighttime fluxes in the C2wo case has revealed that 31 % and 69 % of the nitrate formation fluxes occur during the day and night, respectively. In the C2wo case, nighttime nitrate is mainly produced by aqueous-phase N2O5 hydrolysis, with a contribution of 92 % (Fig. S6), which agrees with the results of Wen et al. (2018). The gas-phase reaction of OH with NO2 forming gaseous HNO3, followed by partitioning into the aerosol, is the most important source of daytime nitrate, with a contribution of 56 % (Fig. S6). Aqueous-phase N2O5 hydrolysis is also significant for daytime nitrate formation, with a contribution of 30 %. In comparison, Wen et al. (2018) have suggested the partitioning of HNO3 from the gas to the particulate phase to be the most important formation pathway of daytime nitrate, with a contribution of 94 %, and identified N2O5 hydrolysis as the second most important pathway, with a contribution of 4 %–6 %. The modeled nitrate concentrations at 96 h are 69.5 and 65.3 µg m−3 in the C2w and C2wo cases, respectively. Compared to the maximum (25.0 µg m−3), average (14.0 µg m−3) and minimum (6.8 µg m−3) values of the measured nitrate concentrations at Mt. Tai (Fig. 3), the model simulations overestimate the measured concentrations in both cases. Potential reasons are discussed in Sect. 3.3.2.

### 3.2.3 Ammonium

Measured ammonium concentrations at Mt. Tai can be strongly impacted by acidification and cloud chemistry (Guo et al., 2012; J. Li et al., 2017). Still, a detailed analysis of the occurring processes has been missing. Therefore, we provide detailed insights into the trends of the ammonium concentration and the impact of acidification and cloud processing along the simulated trajectories towards Mt. Tai. Similar to sulfate and nitrate, ammonium concentrations also gradually increase throughout the simulation due to the included emission rates and the subsequent uptake of gaseous NH3 into the aqueous phase. Ammonium concentrations rise in cloud droplets both during the day and night because of the more efficient uptake into the larger cloud water volume. After cloud evaporation, ammonium aerosol concentrations are increased due to the stronger partitioning into the more acidified deliquesced particles. In the C2wo case, ammonium concentrations also increase during both day and night. However, the modeled aerosol mass of ammonium in the C2wo case is lower than that in the simulation with cloud interaction (C2w case). In the C2wo case, less sulfate is formed; consequently, the deliquesced aerosol particles are less acidified, and a larger fraction of ammonium stays in the gas phase as gaseous NH3. The simulated ammonium concentrations after 96 h are 42.7 and 25.9 µg m−3 in the C2w and C2wo cases, respectively, which are higher than the measured concentrations (maximum: 18.0, average: 15.6, minimum: 7.9 µg m−3). The differences between the modeled and measured concentrations of sulfate, nitrate and ammonium can be attributed to several issues, such as (i) the uncertainties in the input emission data, (ii) the initial concentrations, (iii) the missing entrainment–detrainment processes and (iv) the heating of the inlet during the sampling of wet aerosol (see Sect. 3.3.2 for further details).
## 3.3 Modeled concentrations of DCRCs and comparison with field data

In recent years, a number of field observations of DCRCs have been conducted on the NCP. For example, He et al. (2013), Ho et al. (2015), Zhao et al. (2018) and Yu et al. (2019) observed DCRCs in Beijing, and Wang et al. (2009), Kawamura et al. (2013), Meng et al. (2018) and Zhao et al. (2019) measured DCRCs at Mt. Tai. Our field observations of DCRCs at Mt. Tai have been reported in Zhu et al. (2018). However, these studies focused on DCRC concentrations, molecular compositions, temporal variations, size distributions, source implications and stable carbon isotopic compositions. They have not investigated the chemical formation of DCRCs along the trajectory or the impact of in-cloud and non-cloud conditions on DCRC concentrations. To our knowledge, a multiphase chemical model study investigating the DCRC concentration variations and their chemical processing along the trajectory to Mt. Tai considering day vs. night and cloud vs. non-cloud cases has not yet been reported.

Figure 5. Time series of the modeled aerosol mass concentrations of selected DCRCs (a, b, c: Gly, ωC2 and C2; d, e, f: MGly, Pyr and C3) in the C2w and C2wo cases. Light blue column: cloud; shadow: night; green triangle: the maximum (top), average (middle) and minimum (bottom) values of the measured concentrations at Mt. Tai.

### 3.3.1 Modeled concentrations of DCRCs

Figure 5 shows the modeled aerosol mass concentrations of Gly, ωC2, C2, methylglyoxal (MGly), Pyr and malonic acid (C3) in the C2w and C2wo cases together with the values measured at Mt. Tai. In the C4w and C4wo cases, the modeled Gly, ωC2, C2, MGly, Pyr and C3 concentrations show patterns similar to those in the C2w and C2wo cases, respectively (Fig. S8).

### Dicarbonyl compounds

In the C2w case, the Gly and MGly concentration patterns show a substantial uptake into cloud droplets. Gly concentrations decrease during the daytime and nighttime cloud droplet periods due to in-cloud oxidation processes. On the other hand, MGly concentrations display a decrease in daytime cloud droplets but an increase under nighttime in-cloud conditions. This might be caused by the fact that the aqueous oxidation fluxes under nighttime in-cloud conditions are lower than those under daytime conditions because of the much lower OH radical concentrations under nighttime in-cloud conditions (Fig. S3). In the C2wo case, Gly and MGly concentrations are very low due to the low partitioning towards aqueous particles predicted by the model. The effect of a potentially higher partitioning constant of Gly (Volkamer et al., 2009; Ip et al., 2009) is investigated in Sect. 3.5.3. It is worth noting that Gly and MGly have similar concentrations at the end of the simulation with or without cloud chemistry.

### C2 carboxylic acids

In the C2w case, the aqueous-phase concentrations of ωC2 increase under both daytime and nighttime in-cloud conditions as well as early in the night during non-cloud periods. On the other hand, ωC2 concentrations are lowered during the day and later in the night under non-cloud conditions. In the C2wo case, ωC2 decreases during the morning periods but increases in the late afternoon and at night.
Compared with the C2wo case, the C2w case shows higher ωC2 concentrations, which suggests that cloud processes play an important role in ωC2 formation and oxidation. In the C2w case, the modeled aqueous-phase concentrations of C2 increase under daytime in-cloud and daytime aqueous particle conditions but are lowered during nighttime cloud periods and under nighttime aqueous particle conditions. In the C2wo case, C2 increases during the day but decreases during the night.

### C3 carboxylic acids

In the C2w case, Pyr concentrations rise under daytime and nighttime in-cloud conditions as well as in the late mornings of the non-cloud periods. Pyr concentrations decrease under early morning, afternoon and nighttime non-cloud conditions (caused by the efficient degradation via the reaction with aqueous-phase H2O2). In the C2wo case, the aqueous-phase concentration profile of Pyr is characterized by an increase during the morning and early afternoon and by a strong decrease during the late afternoon and night. Pyr shows a high correlation with aqueous-phase H2O2 in the C2wo case due to its efficient decay by H2O2 (Fig. S7). The aqueous-phase C3 concentrations increase during cloud formation due to the uptake of gaseous C3 into the cloud droplets. Moreover, C3 concentrations also increase during non-cloud periods. In the C2wo case, C3 concentrations only increase slightly during the night and even less so during the day. In comparison to the C2wo case, the C2w case shows higher C3 concentrations, indicating that cloud oxidation processes are very important for the aqueous-phase formation of C3 under the simulated conditions. The production of C3 is 27 % higher in the C2w case than in the C2wo case.

### 3.3.2 Comparison to field observations

The ratios of the average modeled to measured DCRC concentrations can be found in Table 3. The results show that the model predictions are higher than the measured concentrations of C3, Pyr and ωC2 in both the C2w and C2wo cases. Moreover, the concentration ratios of Pyr and ωC2 in the C2w case are much higher than in the C2wo case. On the other hand, the model results underpredict the C2, Gly and MGly concentrations in both cases. SPACCIM overestimates the measured ωC2 concentrations but underestimates the measured C2 concentrations, suggesting that the conversion of ωC2 into C2 is implemented less efficiently in CAPRAM than occurs in the field. The phase partitioning between the gas and aqueous phase, a key process for the modeled MGly concentration, may not be represented well enough to predict the measured MGly aerosol concentrations in the field because of model simplifications. Kroll et al. (2017) have found that a possible hydration of MGly in the gas phase might lead to an enhanced uptake into aqueous particles. Thus, the MGly uptake may be underestimated. Additionally, other important processes that can effectively enhance the partitioning of MGly, such as reversible oligomerizations (Ervens and Volkamer, 2010), are not yet considered. As a result, the modeled aqueous-phase MGly concentration is rather low, about 3 orders of magnitude lower than the measured data. Based on this finding, a sensitivity study has been performed (see Sect. 3.5 for details).

Table 3. Ratios of the modeled to measured DCRC concentrations in the different model trajectories at Mt. Tai.

The overestimation and underestimation of the measured concentrations of inorganic and organic aerosol constituents could have the following reasons.
1. Input data: uncertainties in the emission data and the initial concentrations. The emission data are obtained from a new anthropogenic emission inventory in Asia, which provides monthly emissions for 2010 by sector at a 0.25°×0.25° resolution. However, the model simulation period in this study is in 2014, and the spatial resolution is less than 5°×2°. Therefore, errors in the conversion of the emission data can occur. The height of Mt. Tai (about 1500 m) also makes its emission input uncertain. Furthermore, the initial concentrations in the areas of origin are obtained from related references rather than from field measurements, which adds further uncertainty.

2. Field measurements: only PM2.5 samples are available, so a possible contribution of larger particles might have been missed. Moreover, the heating of the inlet during the sampling of wet aerosol has certainly lowered the measured concentrations of more volatile compounds such as ammonium nitrate.

3. MCM: some crucial gaseous precursors of the considered species are destroyed too efficiently by the MCM. For example, it is recommended that the gas-phase oxidation of acetic acid by OH proceed via H atom abstraction from both the OH group and the CH3 group (Sun et al., 2009). Nonetheless, only the H abstraction from the OH group is implemented in the MCM, and this oxidation scheme is applied to all carboxylic acids. This shortcoming of the MCM probably leads to an underestimation of the functionalized acids and needs to be resolved in more detail. However, the development of an improved gas-phase acid oxidation scheme for the MCM goes beyond the scope of this study.

4. CAPRAM: missing sources of selected DCRCs from higher organic chemistry, such as the aqueous-phase chemistry of aromatic compounds (Hoffmann et al., 2018).

5. SPACCIM: the model neglects nonideal solution effects and does not consider organic salt formation. These factors possibly result in overestimated or underestimated concentrations. Recent studies by Rusumdar et al. (2020) have demonstrated that a nonideal chemistry treatment leads to much higher concentrations of C2 and ωC2.

Apart from MGly, the concentration ratios of the modeled and measured species range from 0.1 to 8.3. Interestingly, the ratio of C3 is close to 1, which might be related to a good representation of the concentrations of the C3 precursors. The comparison indicates that the formation pathways of DCRCs implemented in CAPRAM4.0 are realistic but highly dependent on the input data of the precursors. SPACCIM, with MCM3.2 and CAPRAM4.0 implemented, is a numerical tool that can help us better understand the complexity of the multiphase processing of DCRCs. However, the present study also demonstrates that there are still large gaps in the knowledge about the formation and degradation of secondary aerosol constituents. Hence, further laboratory investigations and modeling work are necessary.

## 3.4 Model investigations of chemical sources and sinks of selected DCRCs

Although field observations have speculated about several potential formation pathways of some DCRC species on the basis of correlation or ratio analyses (Hegde and Kawamura, 2012; Kawamura et al., 2013; Zhao et al., 2019), the detailed pathways of DCRCs still need to be studied. Multiphase model simulations are a suitable tool to investigate DCRC formation processes. In recent years, DCRC formation processes have been examined by several model studies.
For example, Tilgner and Herrmann (2010) have modeled the gas- and aqueous-phase processing of C2–C4 carbonyl compounds and carboxylic acids with SPACCIM; Ervens et al. (2004) have discussed the formation pathways of Pyr and C2–C6 dicarboxylic acids in the gas and aqueous phases; Lim et al. (2005) have reported the formation pathways of Gly, MGly, Pyr and C2 by isoprene oxidation in the gas and aqueous phases using a photochemical box model; and Huisman et al. (2011) have investigated the driving processes of Gly chemistry using the Master Chemical Mechanism (MCM, v3.1). These studies have suggested that DCRC formation is related to the oxidation of anthropogenic (e.g., toluene and ethylene) and biogenic (e.g., isoprene) gas-phase VOC precursors. The emissions of these anthropogenic and biogenic VOCs in China are much higher than those in the regions considered in the above references (Sindelarova et al., 2014; Bauwens et al., 2016). However, multiphase model simulations have rarely been used to study DCRC formation in China. Therefore, the present study aims at a detailed investigation of the formation pathways of selected DCRCs under day vs. night and cloud vs. non-cloud conditions along the trajectories approaching Mt. Tai. Due to the similar concentration levels and corresponding variation trends of ωC2, C2, Pyr and C3 in clusters 2 and 4, the source and sink flux analyses are only presented and discussed for the cluster 2 (C2w) case. Additionally, the corresponding plots of the four compounds in the C2wo case are presented in the Supplement (Fig. S9).

Figure 6. Modeled multiphase (gas phase + aqueous phase) source and sink fluxes of ωC2 (a), C2 (b), Pyr (c) and C3 (d) along the trajectory of the third model day (light blue column: cloud; shadow: night). Percentages are given for the third model day.

### 3.4.1 Glyoxylic acid (ωC2)

In Fig. 6, the multiphase source and sink fluxes of ωC2 (C2w case) on the third model day are plotted. The results reveal a net formation flux that mainly occurs under in-cloud conditions as well as a net degradation that mainly occurs during non-cloud periods. About 94 % of the net formation flux of ωC2 is simulated under in-cloud conditions. In contrast, non-cloud conditions account for 99 % of the net sink flux of ωC2. Under daytime and nighttime in-cloud conditions, the major formation pathways of ωC2 are the aqueous-phase reactions of hydrated Gly with the OH and SOx− radicals (contribution of 60 % during the day and 86 % during the night), which is similar to the results in Tilgner and Herrmann (2010) and Tilgner et al. (2013). The aqueous-phase oxidation of glycolate by OH is also significant under daytime in-cloud conditions, with a contribution of 18 %. Under daytime and nighttime non-cloud conditions, the aqueous-phase reactions of hydrated Gly (day: 20 %; night: 20 %) and the gas-phase decay of 3-methyl-6-nitrocatechol (day: 14 %; night: 24 %) are significant for ωC2 formation. Other reactions contribute less than 5 % to the overall source flux. Under daytime clouds, the ωC2 sink is dominated by the aqueous-phase reaction of hydrated glyoxylate with OH (contribution of 88 %), which is consistent with Ervens et al. (2004). During nighttime clouds, however, the aqueous-phase reactions of hydrated glyoxylate with NO3 (45 %) and OH (28 %) are the most important sinks. In contrast to the in-cloud conditions, gas-phase ωC2 photolysis (57 %) and the gas-phase reaction of ωC2 with OH (18 %) act as the key sinks of ωC2 under daytime non-cloud conditions.
Under nighttime non-cloud conditions, the sink fluxes of ωC2 are low and therefore unimportant.

Figure 7. Concentration variations in modeled sulfate, nitrate, ammonium, Gly, ωC2, C2, MGly, Pyr and C3 when doubling the emission data (light blue column: cloud; shadow: night).

### 3.4.2 Oxalic acid (C2)

Figure 6 also depicts the source and sink fluxes of C2 in the C2w case. The model simulates a net formation flux during the non-cloud periods and a net degradation in the early morning hours, when no clouds are present but the photolysis of iron oxalate complexes is efficient. About 94 % of the net C2 formation is simulated under non-cloud conditions. About 74 % of the net C2 sink fluxes are predicted during the early morning non-cloud period, and 26 % are related to the cloud oxidation fluxes. The most important aqueous-phase source of C2 under in-cloud conditions is the oxidation of hydrated glyoxylate by the OH radical, which agrees with other model results (Tilgner and Herrmann, 2010; Ervens et al., 2004). This pathway contributes about 72 % to the C2 formation during the day and 87 % during the night. In contrast to the in-cloud conditions, the most important C2 formation pathway during daytime and nighttime non-cloud conditions is the aqueous-phase oxidation of hydrated ωC2 by the OH radical (day: 39 %; night: 52 %). Field observations have also suggested the aqueous-phase oxidation of ωC2 to be the most important formation pathway of C2 (Kundu et al., 2010; Kawamura et al., 2013; He and Kawamura, 2010), but they were not able to quantify the contributions and the specific oxidation pathways responsible. Other reactions contribute less than 5 % to the overall source flux. The most important sink of C2 under daytime in-cloud conditions is the photolysis of iron oxalate complexes, with a contribution of about 93 %. This result is similar to reported laboratory findings (Pavuluri and Kawamura, 2012) and aqueous model simulations (Tilgner and Herrmann, 2010). On the other hand, the aqueous-phase reaction of oxalate with NO3 (80 %) is the most important sink in the nighttime cloud case. Under nighttime non-cloud conditions, the C2 sink is dominated by the reaction of the oxalate monoanion with NO3 (81 %). Under daytime non-cloud conditions, the only significant C2 sink is the photolysis of iron oxalate complexes. However, the photolysis of iron oxalate complexes under aqueous particle conditions is most likely overestimated in the present SPACCIM studies. A recent study by Rusumdar et al. (2020), using an advanced SPACCIM version with a nonideal treatment but a more reduced CAPRAM scheme, revealed that the formation and consequently the photolysis of iron oxalate complexes are inhibited under aqueous particle conditions. The possible overestimation of the photolytic decay leads to significantly lower C2 concentrations and thus might be one of the reasons for the underestimated C2 concentration.

### 3.4.3 Pyruvic acid (Pyr)

The modeled source and sink fluxes of Pyr in the C2w case on the third model day can be found in Fig. 6. A net formation flux is modeled mainly under in-cloud conditions, especially during the day, along with a net degradation during the non-cloud periods. About 72 % of the net Pyr formation flux occurs in clouds, whereas 28 % is formed under non-cloud conditions. However, 100 % of the multiphase Pyr net sink fluxes are related to non-cloud oxidation.
Under in-cloud conditions, the dominant source of Pyr is the hydrolysis of the aqueous-phase oxidation product of nitro-2-oxopropanoate, with a contribution of 89 % during the day and 70 % during the night. This result differs from former model studies, e.g., Ervens et al. (2004), Lim et al. (2005), Tilgner and Herrmann (2010) and Tilgner et al. (2013), which modeled the aqueous-phase oxidation of MGly as the major formation pathway of Pyr. However, these model studies considered different environmental conditions with much lower anthropogenic pollution, including lower NOx and NO3 radical budgets compared to the Chinese conditions. Furthermore, the former studies used rather simple gas-phase mechanisms and lacked potential production pathways from higher organic compounds. Similarly, the aqueous-phase oxidation of nitro-2-oxopropanoate is identified as a major source under non-cloud conditions, with a contribution of 87 % during the day and 74 % during the night. The key sinks of Pyr under daytime in-cloud conditions are the aqueous-phase reactions of pyruvate with OH (58 %) and H2O2 (29 %). This finding is consistent with results from laboratory experiments (Carlton et al., 2006). Under nighttime in-cloud conditions, the sink fluxes are very low and can therefore be neglected. Under daytime and nighttime non-cloud conditions, the dominant sinks are the aqueous-phase reactions of pyruvate with H2O2 (day: 57 %; night: 72 %) and of free Pyr with H2O2 (day: 13 %; night: 15 %). Additionally, gas-phase Pyr photolysis (15 %) is also important under daytime non-cloud conditions.

### 3.4.4 Malonic acid (C3)

In Fig. 6, the modeled source and sink fluxes of C3 in the C2w case are presented for the third model day. A net formation flux can be seen under daytime in-cloud conditions and under both daytime and nighttime non-cloud conditions. A net degradation is only found under nighttime in-cloud conditions. About 82 % of the net C3 formation flux occurs under non-cloud conditions and 18 % under daytime in-cloud conditions. The major modeled sources of C3 under daytime and nighttime in-cloud conditions are the aqueous-phase oxidation reactions of hydrated 3-oxopropanoic acid (day: 48 %; night: 50 %) and hydrated 3-oxopropanoate (day: 45 %; night: 47 %). Under non-cloud conditions, however, the aqueous-phase oxidation of hydrated 3-oxopropanoic acid is dominant, with a contribution of 79 % during the day and 88 % during the night. The modeled sink fluxes differ between in-cloud and non-cloud conditions. The C3 sinks under daytime clouds are dominated by the aqueous-phase reaction of the malonate monoanion with OH, which contributes 70 % to the total sink flux. In contrast, the aqueous-phase reaction of the malonate monoanion with NO3 is the most important sink under nighttime in-cloud conditions, with a contribution of 75 %. The predominant sink pathway of C3 in daytime clouds is consistent with Ervens et al. (2004), but the sink pathway in nighttime clouds is different due to the missing NO3 radical pathways in their mechanism. The modeled C3 sinks under non-cloud conditions are much lower than the sinks under in-cloud conditions and are thus unimportant.

## 3.5 Sensitivity studies

Due to the similarity between clusters 2 and 4, as mentioned above, the sensitivity tests are only performed under cluster 2 conditions.
The present study investigates (i) the impact of emissions on the modeled compound concentrations, (ii) the key precursors of DCRCs and (iii) the impact of increased Gly aerosol partitioning on Gly, ωC2 and C2.

### 3.5.1 Impact of emissions

First, sensitivity tests are performed to evaluate the effect of different emission strengths on the concentrations of key secondary inorganic compounds and selected DCRCs during the transport. The emission sensitivities of sulfate, nitrate, ammonium, Gly, ωC2, C2, MGly, Pyr and C3 are investigated by doubling the applied emission fluxes of the base case. The results of the sensitivity tests can be found in Fig. 7. The modeled concentrations of sulfate, nitrate, ammonium, Gly, ωC2, C2, MGly, Pyr and C3 increase by a factor of about 2 when the emissions are doubled, which suggests an almost linear dependency. The results indicate that the modeled secondary aerosol concentrations are very sensitive to the emissions in the model.

### 3.5.2 Identification of the key precursors of C2, Pyr and ωC2

Further sensitivity tests are conducted to identify the key primary precursors of DCRCs during the atmospheric transport. We have adopted the relative incremental reactivity (RIR) method of Carter and Atkinson (1989) for these sensitivity tests. A positive or negative RIR value indicates that reducing the precursor emissions would weaken or enhance DCRC formation, respectively. The RIR method has already been applied in a former study to investigate the precursors of peroxyacetyl nitrate in an urban plume in Beijing (Xue et al., 2014).

Figure 8. The calculated RIRs for C2, Pyr and ωC2 in both the C2w (green bars) and C2wo (blue bars) cases at Mt. Tai (columns: RIR values; red dots: emission data).

Figure 9. Correlations between the decrease ratios of the radical oxidants and the C2-RIRs (a, b, c) and ωC2-RIRs (d, e, f) under C2w and C2wo conditions, respectively.

Figure 10. Concentration variations in modeled Gly, ωC2 and C2 when increasing the Gly Henry's law constant by 2 orders of magnitude (shadow: night; KH: Henry's law constant).

As can be seen in Fig. 8, C2 formation in the C2w case is mostly sensitive to aromatics and alkenes. Among the aromatic compounds, toluene is the most important one for C2 formation. However, other aromatic species (such as xylene, ethylbenzene and isopropylbenzene) present negative RIRs. Among the alkenes, isoprene and 1,3-butadiene are dominant, but ethene shows a negative RIR. The alkane RIRs are all negative. The positive and negative RIRs probably depend on the oxidant variations. As shown in Fig. S10, the important sources of C2 in the C2w case are the oxidation of hydrated ωC2 by the OH radical and by sulfur-containing radicals (SOx−). The relative decrease in the OH or SOx− concentrations is calculated for the runs with a 20 % decrease in the emission data and then correlated with the RIR values. Figure 9 shows that the OH and SOx− radicals have high and moderate correlations with the C2-RIRs in the C2w case, respectively, suggesting that the concentration variations in the OH and SOx− radicals are the reason for the positive and negative C2-RIRs in the C2w case. In the C2wo case, the alkenes account for the highest RIR; their RIR is more than 2 times higher than that of the second highest group (aromatic compounds). Among the alkenes, the dominant compound is isoprene. Contrary to the C2w case, 1,3-butadiene reveals a very low RIR under C2wo conditions.
In the C2wo case, ethene exhibits a positive but low RIR. Among the aromatic compounds, toluene shows the highest RIR; xylene, ethylbenzene and isopropylbenzene also present significantly positive RIR values in the C2wo case. The alkanes again have negative RIRs. As shown in Fig. S10, the reactions of dissolved ωC2 with OH radicals represent the major pathways in the C2wo case. Strong correlations between the C2-RIRs and the OH radical in the C2wo case (Fig. 9) indicate that the calculated positive and negative RIRs in this case are due to OH radical concentration variations. For Pyr, in both the C2w and C2wo cases, the alkenes are the dominant precursor group with the largest RIRs, and the major compound is isoprene. The absolute RIR values of the other selected species are less than 0.05. These results indicate that Pyr formation during the atmospheric transport is highly sensitive to isoprene. In the C2w case, aromatic compounds are the most significant precursors of ωC2, with a high positive RIR. However, the individual aromatic species listed in Fig. 8 are characterized by negative RIRs. The alkanes and alkenes as groups show negative values; however, isoprene and 1,3-butadiene have positive RIRs, and their high values suggest that they are key species controlling ωC2 formation under the modeled summer conditions. As shown in Fig. S10, the oxidation of dissolved Gly by OH and SOx− radicals is the most important source of ωC2 in the C2w case. The high correlations of the OH and SOx− radicals with the ωC2-RIR values (Fig. 9) suggest that the positive and negative ωC2-RIRs in the C2w case are a result of the variations in these two oxidants. Figure 8 shows that the aromatic compounds account for the highest RIR under C2wo conditions and that toluene is a major contributor; ethylbenzene and isopropylbenzene also make significant contributions. The alkene RIRs are the next highest, with isoprene as the dominant compound for ωC2 formation. As in the cases of C2 and Pyr, the alkanes also have negative ωC2-RIRs. ωC2 production mainly depends on the oxidation of dissolved Gly by the OH radical in the C2wo case, and the variation trends of the OH radical are the reason for the positive and negative ωC2-RIRs (see Figs. S10 and 9).

### 3.5.3 Identifying the impact of increased Gly aerosol partitioning

The phase partitioning between the gas and aqueous phase in a multiphase model can be affected, e.g., by salting-in and salting-out effects and other reversible accretion reactions (Herrmann et al., 2015). For example, Ip et al. (2009) and Kampf et al. (2013) have reported that SO42− and ammonium sulfate can have a significant effect on the uptake of Gly into an aqueous solution. Therefore, a sensitivity study considering increased Gly aerosol partitioning has been performed to evaluate the changes in the Gly, ωC2 and C2 concentrations. This has been realized by increasing the Gly Henry's law constant of CAPRAM (1.4 mol L−1 atm−1, Betterton and Hoffmann, 1988) by 2 orders of magnitude, which brings it close to the value reported by Volkamer et al. (2009). As can be seen in Fig. 10, compared to the base C2wo case performed without an increased Gly Henry's law constant, the modeled ωC2 and C2 aerosol concentrations increase by factors of 3 and 2, respectively. This result suggests that an increased Gly aerosol partitioning might play an important role in the aqueous-phase formation of ωC2 and C2.
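For orientation, the effect of the Henry's law constant on equilibrium phase partitioning can be sketched with the dimensionless aqueous fraction that follows from Henry's law for a given liquid water content. The numbers below (the liquid water contents assumed for deliquesced aerosol and cloud, and the 100-fold increased constant) are illustrative assumptions, and the model result additionally reflects the subsequent aqueous-phase chemistry rather than pure equilibrium.

```python
R_ATM = 0.08206  # gas constant (L atm mol-1 K-1)

def aqueous_fraction(H, lwc, T=288.0):
    """Equilibrium fraction of a trace gas residing in the aqueous phase.

    H   : Henry's law constant (mol L-1 atm-1)
    lwc : liquid water content as a volume ratio (L water per L air)
    """
    x = H * R_ATM * T * lwc
    return x / (1.0 + x)

# Gly with the CAPRAM constant vs. a 100-fold increased one, for
# assumed aqueous aerosol (~5e-11) and cloud (~3e-7) water contents
for H in (1.4, 140.0):
    print(H, aqueous_fraction(H, lwc=5e-11), aqueous_fraction(H, lwc=3e-7))
```

Even the increased constant keeps the equilibrium aqueous fraction small, which suggests that the enhanced ωC2 and C2 formation seen in Fig. 10 is sustained by the continuous aqueous-phase conversion of the dissolved Gly (and, in reality, by hydration and salting effects that raise the effective constant further) rather than by reservoir partitioning alone.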
4 Conclusions

The present study focuses on the formation processes of secondary aerosol constituents along trajectories towards Mt. Tai using the multiphase chemistry air parcel model SPACCIM. The modeled radical and non-radical oxidant concentrations (e.g., gas-phase OH concentrations of 3.2×106 molec. cm−3 in C2w and 3.5×106 molec. cm−3 in C2wo) suggest that the atmospheric environment of Mt. Tai (∼1.5 km a.m.s.l.) is still characterized by a suburban oxidant budget. Compared to previous studies at Mt. Tai, this study is the first to investigate in detail the formation processes of secondary aerosol constituents along different trajectories towards Mt. Tai under day vs. night and cloud vs. non-cloud conditions. The aqueous-phase reaction of HSO3− with H2O2 has been identified as the major contributor to SO42− formation (contribution of 67 % in C2w and 60 % in C2wo). NO3− formation is higher during the night than during the day; the major pathways are aqueous-phase N2O5 hydrolysis (contribution of 72 % in C2w and 70 % in C2wo) and the gas-phase reaction of OH with NO2 (contribution of 18 % in C2w and 21 % in C2wo). The aqueous-phase reactions of hydrated Gly, hydrated ωC2, nitro-2-oxopropanoate and hydrated 3-oxopropanoic acid are the dominant formation pathways of ωC2, C2, Pyr and C3, respectively. Sensitivity tests indicate that isoprene, 1,3-butadiene and toluene are the key precursors of ωC2 and C2, and the model data analyses show that isoprene is the predominant precursor of Pyr. When the emissions are doubled, the modeled SOA compound concentrations increase by a factor of about 2, suggesting that gaseous VOC emissions are a driving factor for the modeled SOA compound concentrations. The results underline the importance of further emission reduction efforts to achieve better air quality in this part of China. Additionally, the simulations show that an increased Gly aerosol partitioning plays an important role in the aqueous-phase formation of ωC2 and C2. Finally, the present study reveals that comprehensive aerosol and cloud field studies, advanced mechanistic laboratory studies and further chemical process model studies are necessary in order to better understand the presence, formation, chemical fate and phase partitioning of DCRCs in the troposphere. For field investigations, advanced measurements characterizing the chemical gas and aerosol compositions with a high time resolution are needed to enable better comparisons with and evaluations of present multiphase models. On the other hand, more advanced models are also required for future model comparisons. These should include a more detailed chemistry description, a detailed treatment of nonideal solution effects and an improved treatment of the phase partitioning of organic compounds, e.g., considering salting-in and salting-out effects and other reversible accretion reactions. In this way, the observed differences between modeled data and measurements could be reduced.

Data availability. The input data used in this paper are given in the Supplement. The output data of all figures given in this study are publicly available at https://github.com/Yanhong-Zhu/acp-2019-982-data (Zhu, 2020).

Supplement.

Author contributions. YZ, AT and HH designed the SPACCIM modeling work. YZ, AT and EHH performed the different SPACCIM simulations. YZ, AT, EHH and HH analyzed the SPACCIM simulation results.
YZ and LX performed and interpreted the RIR analysis. YZ, AT, KK, LY and WW compared the model results with field data. YZ, AT, EHH, HH and LX wrote the paper and prepared the manuscript material with contributions from all the co-authors.

Competing interests. The authors declare that they have no conflict of interest.

Acknowledgements. We thank the European Commission for its support of the MARSU project (contract no. 69089). The authors acknowledge the Emissions of atmospheric Compounds and Compilation of Ancillary Data (ECCAD) MEGAN-MACC dataset. The authors also acknowledge the China Scholarship Council for supporting Yanhong Zhu's studies within the project at the Atmospheric Chemistry Department (ACD) of the Leibniz Institute for Tropospheric Research (TROPOS), Germany.

Financial support. This research has been supported by the National Key Research and Development Program of China (grant no. 2016YFC0200500), the National Natural Science Foundation of China (grant nos. 21577079 and 41922051), the Japan Society for the Promotion of Science through a Grant-in-Aid (grant no. 24221001), and the European Commission MARSU project (grant no. 69089).

Review statement. This paper was edited by Holger Tost and reviewed by two anonymous referees.

References

An, J., Zou, J., Wang, J., Lin, X., and Zhu, B.: Differences in ozone photochemical characteristics between the megacity Nanjing and its suburban surroundings, Yangtze River Delta, China, Environ. Sci. Pollut. Res., 22, 19607–19617, https://doi.org/10.1007/s11356-015-5177-0, 2015. An, J. L., Zhang, W., and Qu, Y.: Impacts of a strong cold front on concentrations of HONO, HCHO, O3, and NO2 in the heavy traffic urban area of Beijing, Atmos. Environ., 43, 3454–3459, https://doi.org/10.1016/j.atmosenv.2009.04.052, 2009. Arakaki, T., Anastasio, C., Kuroki, Y., Nakajima, H., Okada, K., Kotani, Y., Handa, D., Azechi, S., Kimura, T., Tsuhako, A., and Miyagi, Y.: A General Scavenging Rate Constant for Reaction of Hydroxyl Radical with Organic Carbon in Atmospheric Waters, Environ. Sci. Technol., 47, 8196–8203, https://doi.org/10.1021/es401927b, 2013. Barletta, B., Meinardi, S., Rowland, F. S., Chan, C. Y., Wang, X. M., Zou, S. C., Chan, L. Y., and Blake, D. R.: Volatile organic compounds in 43 Chinese cities, Atmos. Environ., 39, 5979–5990, https://doi.org/10.1016/j.atmosenv.2005.06.029, 2005. Bauwens, M., Stavrakou, T., Müller, J.-F., De Smedt, I., Van Roozendael, M., van der Werf, G. R., Wiedinmyer, C., Kaiser, J. W., Sindelarova, K., and Guenther, A.: Nine years of global hydrocarbon emissions based on source inversion of OMI formaldehyde observations, Atmos. Chem. Phys., 16, 10133–10158, https://doi.org/10.5194/acp-16-10133-2016, 2016. Betterton, E. A. and Hoffmann, M. R.: Henry's law constants of some environmentally important aldehydes, Environ. Sci. Technol., 22, 1415–1418, https://doi.org/10.1021/es00177a004, 1988. Bikkina, S., Kawamura, K., Miyazaki, Y., and Fu, P.: High abundances of oxalic, azelaic, and glyoxylic acids and methylglyoxal in the open ocean with high biological activity: Implication for secondary OA formation from isoprene, Geophys. Res. Lett., 41, 3649–3657, https://doi.org/10.1002/2014GL059913, 2014.
Bräuer, P., Mouchel-Vallon, C., Tilgner, A., Mutzel, A., Böge, O., Rodigast, M., Poulain, L., van Pinxteren, D., Wolke, R., Aumont, B., and Herrmann, H.: Development of a protocol for the auto-generation of explicit aqueous-phase oxidation schemes of organic compounds, Atmos. Chem. Phys., 19, 9209–9239, https://doi.org/10.5194/acp-19-9209-2019, 2019. Carlton, A. G., Turpin, B. J., Lim, H. J., Altieri, K. E., and Seitzinger, S.: Link between isoprene and secondary organic aerosol (SOA): Pyruvic acid oxidation yields low volatility organic acids in clouds, Geophys. Res. Lett., 33, L06822, https://doi.org/10.1029/2005GL025374, 2006. Carlton, A. G., Turpin, B. J., Altieri, K. E., Seitzinger, S., Reff, A., Lim, H. J., and Ervens, B.: Atmospheric oxalic acid and SOA production from glyoxal: Results of aqueous photooxidation experiments, Atmos. Environ., 41, 7588–7602, https://doi.org/10.1016/j.atmosenv.2007.05.035, 2007. Carlton, A. G., Wiedinmyer, C., and Kroll, J. H.: A review of Secondary Organic Aerosol (SOA) formation from isoprene, Atmos. Chem. Phys., 9, 4987–5005, https://doi.org/10.5194/acp-9-4987-2009, 2009. Carter, W. P. and Atkinson, R.: Computer modeling study of incremental hydrocarbon reactivity, Environ. Sci. Technol., 23, 864–880, https://doi.org/10.1021/es00065a017, 1989. DeCarlo, P. F., Ulbrich, I. M., Crounse, J., de Foy, B., Dunlea, E. J., Aiken, A. C., Knapp, D., Weinheimer, A. J., Campos, T., Wennberg, P. O., and Jimenez, J. L.: Investigation of the sources and processing of organic aerosol over the Central Mexican Plateau from aircraft measurements during MILAGRO, Atmos. Chem. Phys., 10, 5257–5280, https://doi.org/10.5194/acp-10-5257-2010, 2010. De Gouw, J. and Jimenez, J. L.: Organic aerosols in the Earth's atmosphere, Environ. Sci. Technol., 43, 7614–7618, https://doi.org/10.1021/es9006004, 2009. Draxler, R. and Rolph, G.: HYSPLIT (HYbrid Single-Particle Lagrangian Integrated Trajectory) model access via NOAA ARL READY, Silver Spring, Md: NOAA Air Resources Laboratory, available at: http://www.arl.noaa.gov/ready/hysplit4.html (last access: 20 August 2019), 2003. Duan, F., He, K., Ma, Y., Yang, F., Yu, X., Cadle, S., Chan, T., and Mulawa, P.: Concentration and chemical characteristics of PM2.5 in Beijing, China: 2001–2002, Sci. Total Environ., 355, 264–275, https://doi.org/10.1016/j.scitotenv.2005.03.001, 2006. Duan, J. C., Tan, J. H., Yang, L., Wu, S., and Hao, J. M.: Concentration, sources and ozone formation potential of volatile organic compounds (VOCs) during ozone episode in Beijing, Atmos. Res., 88, 25–35, https://doi.org/10.1016/j.atmosres.2007.09.004, 2008. Ervens, B.: Modeling the processing of aerosol and trace gases in clouds and fogs, Chem. Rev., 115, 4157–4198, https://doi.org/10.1021/cr5005887, 2015. Ervens, B. and Volkamer, R.: Glyoxal processing by aerosol multiphase chemistry: towards a kinetic modeling framework of secondary organic aerosol formation in aqueous particles, Atmos. Chem. Phys., 10, 8219–8244, https://doi.org/10.5194/acp-10-8219-2010, 2010. Ervens, B., Feingold, G., Frost, G. J., and Kreidenweis, S. M.: A modeling study of aqueous production of dicarboxylic acids: 1. Chemical pathways and speciated organic mass production, J. Geophys. Res.-Atmos., 109, D15205, https://doi.org/10.1029/2003JD004387, 2004. Ervens, B., Carlton, A. G., Turpin, B. J., Altieri, K. E., Kreidenweis, S. M., and Feingold, G.: Secondary organic aerosol yields from cloud-processing of isoprene oxidation products, Geophys. Res. 
Lett., 35, L02816, https://doi.org/10.1029/2007GL031828, 2008. Ervens, B., Turpin, B. J., and Weber, R. J.: Secondary organic aerosol formation in cloud droplets and aqueous particles (aqSOA): a review of laboratory, field and model studies, Atmos. Chem. Phys., 11, 11069–11102, https://doi.org/10.5194/acp-11-11069-2011, 2011. Ganzeveld, L., Lelieveld, J., and Roelofs, G.: A dry deposition parameterization for sulfur oxides in a chemistry and general circulation model, J. Geophys. Res.-Atmos., 103, 5679–5694, https://doi.org/10.1029/97JD03077, 1998. Glasius, M., Lahaniati, M., Calogirou, A., Di Bella, D., Jensen, N. R., Hjorth, J., Kotzias, D., and Larsen, B. R.: Carboxylic acids in secondary aerosols from oxidation of cyclic monoterpenes by ozone, Environ. Sci. Technol., 34, 1001–1010, https://doi.org/10.1021/es990445r, 2000. Guo, J., Wang, Y., Shen, X. H., Wang, Z., Lee, T., Wang, X. F., Li, P. H., Sun, M. H., Collett Jr., J. L., Wang, W. X., and Wang, T.: Characterization of cloud water chemistry at Mount Tai, China: Seasonal variation, anthropogenic impact, and cloud processing, Atmos. Environ., 60, 467–476, https://doi.org/10.1016/j.atmosenv.2012.07.016, 2012. Guo, S., Hu, M., Wang, Z. B., Slanina, J., and Zhao, Y. L.: Size-resolved aerosol water-soluble ionic compositions in the summer of Beijing: implication of regional secondary formation, Atmos. Chem. Phys., 10, 947–959, https://doi.org/10.5194/acp-10-947-2010, 2010. He, N. and Kawamura, K.: Distributions and diurnal changes of low molecular weight organic acids and α-dicarbonyls in suburban aerosols collected at Mangshan, North China, Geochem. J., 44, 17–22, https://doi.org/10.2343/geochemj.1.0093, 2010. He, N., Kawamura, K., Okuzawa, K., Kanaya, Y., and Wang, Z. F.: Diurnal variations of total carbon, dicarboxylic acids, ketoacids and α-dicarbonyls in aerosols in the northern vicinity of Beijing, Atmos. Chem. Phys. Discuss., 13, 16699–16731, https://doi.org/10.5194/acpd-13-16699-2013, 2013. He, S. Z., Chen, Z. M., Zhang, X., Zhao, Y., Huang, D. M., Zhao, J. N., Zhu, T., Hu, M., and Zeng, L. M.: Measurement of atmospheric hydrogen peroxide and organic peroxides in Beijing before and during the 2008 Olympic Games: Chemical and physical factors influencing their concentrations, J. Geophys. Res., 115, D17307, https://doi.org/10.1029/2009JD013544, 2010. Hegde, P. and Kawamura, K.: Seasonal variations of water-soluble organic carbon, dicarboxylic acids, ketocarboxylic acids, and α-dicarbonyls in Central Himalayan aerosols, Atmos. Chem. Phys., 12, 6645–6665, https://doi.org/10.5194/acp-12-6645-2012, 2012. Herrmann, H., Tilgner, A., Barzaghi, P., Majdik, Z., Gligorovski, S., Poulain, L., and Monod, A.: Towards a more detailed description of tropospheric aqueous phase organic chemistry: CAPRAM 3.0, Atmos. Environ., 39, 4351–4363, https://doi.org/10.1016/j.atmosenv.2005.02.016, 2005. Herrmann, H., Hoffmann, D., Schaefer, T., Bräuer, P., and Tilgner, A.: Tropospheric aqueous-phase free-radical chemistry: Radical sources, spectra, reaction kinetics and prediction tools, Chem. Phys. Chem., 11, 3796–3822, https://doi.org/10.1002/cphc.201000533, 2010. Herrmann, H., Schaefer, T., Tilgner, A., Styler, S. A., Weller, C., Teich, M., and Otto, T.: Tropospheric aqueous-phase chemistry: kinetics, mechanisms, and its coupling to a changing gas phase, Chem. Rev., 115, 4259–4334, https://doi.org/10.1021/cr500447k, 2015. Ho, K. F., Huang, R.-J., Kawamura, K., Tachibana, E., Lee, S. C., Ho, S. S. 
H., Zhu, T., and Tian, L.: Dicarboxylic acids, ketocarboxylic acids, α-dicarbonyls, fatty acids and benzoic acid in PM2.5 aerosol collected during CAREBeijing-2007: an effect of traffic restriction on air quality, Atmos. Chem. Phys., 15, 3111–3123, https://doi.org/10.5194/acp-15-3111-2015, 2015. Hoffmann, E. H., Tilgner, A., Wolke, R., Böge, O., Walter, A., and Herrmann, H.: Oxidation of substituted aromatic hydrocarbons in the tropospheric aqueous phase: kinetic mechanism development and modelling, Phys. Chem. Chem. Phys., 20, 10960–10977, https://doi.org/10.1039/C7CP08576A, 2018. Hu, G. Y., Sun, J. Y., Zhang, Y. M., Shen, X. J., and Yang, Y.: Chemical Composition of PM2.5 Based on Two-Year Measurements at an Urban Site in Beijing, Aerosol Air Qual. Res., 15, 1748–1759, https://doi.org/10.4209/aaqr.2014.11.0284, 2015. Huang, R. J., Zhang, Y., Bozzetti, C., Ho, K. F., Cao, J. J., Han, Y., Daellenbach, K. R., Slowik, J. G., Platt, S. M., and Canonaco, F.: High secondary aerosol contribution to particulate pollution during haze events in China, Nature, 514, 218–222, https://doi.org/10.1038/nature13774, 2014. Huisman, A. J., Hottle, J. R., Galloway, M. M., DiGangi, J. P., Coens, K. L., Choi, W., Faloona, I. C., Gilman, J. B., Kuster, W. C., de Gouw, J., Bouvier-Brown, N. C., Goldstein, A. H., LaFranchi, B. W., Cohen, R. C., Wolfe, G. M., Thornton, J. A., Docherty, K. S., Farmer, D. K., Cubison, M. J., Jimenez, J. L., Mao, J., Brune, W. H., and Keutsch, F. N.: Photochemical modeling of glyoxal at a rural site: observations and analysis from BEARPEX 2007, Atmos. Chem. Phys., 11, 8883–8897, https://doi.org/10.5194/acp-11-8883-2011, 2011. Ianniello, A., Spataro, F., Esposito, G., Allegrini, I., Hu, M., and Zhu, T.: Chemical characteristics of inorganic ammonium salts in PM2.5 in the atmosphere of Beijing (China), Atmos. Chem. Phys., 11, 10803–10822, https://doi.org/10.5194/acp-11-10803-2011, 2011. Ip, H., Huang, X., and Yu, J. Z.: Effective Henry's law constants of glyoxal, glyoxylic acid, and glycolic acid, Geophys. Res. Lett., 36, L01802, https://doi.org/10.1029/2008GL036212, 2009. Jenkin, M. E., Saunders, S. M., Wagner, V., and Pilling, M. J.: Protocol for the development of the Master Chemical Mechanism, MCM v3 (Part B): tropospheric degradation of aromatic volatile organic compounds, Atmos. Chem. Phys., 3, 181–193, https://doi.org/10.5194/acp-3-181-2003, 2003. Kampf, C. J., Waxman, E. M., Slowik, J. G., Dommen, J., Pfaffenberger, L., Praplan, A. P., Prevot, A. S., Baltensperger, U., Hoffmann, T., and Volkamer, R.: Effective Henry's law partitioning and the salting constant of glyoxal in aerosols containing sulfate, Environ. Sci. Technol., 47, 4236–4244, https://doi.org/10.1021/es400083d, 2013. Kanaya, Y., Pochanart, P., Liu, Y., Li, J., Tanimoto, H., Kato, S., Suthawaree, J., Inomata, S., Taketani, F., Okuzawa, K., Kawamura, K., Akimoto, H., and Wang, Z. F.: Rates and regimes of photochemical ozone production over Central East China in June 2006: a box model analysis using comprehensive measurements of ozone precursors, Atmos. Chem. Phys., 9, 7711–7723, https://doi.org/10.5194/acp-9-7711-2009, 2009. Kanaya, Y., Akimoto, H., Wang, Z.-F., Pochanart, P., Kawamura, K., Liu, Y., Li, J., Komazaki, Y., Irie, H., Pan, X.-L., Taketani, F., Yamaji, K., Tanimoto, H., Inomata, S., Kato, S., Suthawaree, J., Okuzawa, K., Wang, G., Aggarwal, S. G., Fu, P. 
Q., Wang, T., Gao, J., Wang, Y., and Zhuang, G.: Overview of the Mount Tai Experiment (MTX2006) in central East China in June 2006: studies of significant regional air pollution, Atmos. Chem. Phys., 13, 8265–8283, https://doi.org/10.5194/acp-13-8265-2013, 2013. Kawamura, K. and Sakaguchi, F.: Molecular distributions of water soluble dicarboxylic acids in marine aerosols over the Pacific Ocean including tropics, J. Geophys. Res.-Atmos., 104, 3501–3509, https://doi.org/10.1029/1998JD100041, 1999. Kawamura, K. and Yasui, O.: Diurnal changes in the distribution of dicarboxylic acids, ketocarboxylic acids and dicarbonyls in the urban Tokyo atmosphere, Atmos. Environ., 39, 1945–1960, https://doi.org/10.1016/j.atmosenv.2004.12.014, 2005. Kawamura, K., Tachibana, E., Okuzawa, K., Aggarwal, S. G., Kanaya, Y., and Wang, Z. F.: High abundances of water-soluble dicarboxylic acids, ketocarboxylic acids and α-dicarbonyls in the mountaintop aerosols over the North China Plain during wheat burning season, Atmos. Chem. Phys., 13, 8285–8302, https://doi.org/10.5194/acp-13-8285-2013, 2013. Kim, H. S., Huh, J. B., Hopke, P. K., Holsen, T. M., and Yi, S. M.: Characteristics of the major chemical constituents of PM2.5 and smog events in Seoul, Korea in 2003 and 2004, Atmos. Environ., 41, 6762–6770, https://doi.org/10.1016/j.atmosenv.2007.04.060, 2007. Kroll, J. A., Hansen, A. S., Møller, K. H., Axson, J. L., Kjaergaard, H. G., and Vaida, V.: Ultraviolet Spectroscopy of the Gas Phase Hydration of Methylglyoxal, ACS Earth Space Chem., 1, 345–352, https://doi.org/10.1021/acsearthspacechem.7b00054, 2017. Kundu, S., Kawamura, K., and Lee, M.: Seasonal variations of diacids, ketoacids, and α-dicarbonyls in aerosols at Gosan, Jeju Island, South Korea: Implications for sources, formation, and degradation during long-range transport, J. Geophys. Res.-Atmos., 115, D19307, https://doi.org/10.1029/2010JD013973, 2010. Legrand, M., Preunkert, S., Oliveira, T., Pio, C. A., Hammer, S., Gelencsér, A., Kasper-Giebl, A., and Laj, P.: Origin of C2-C5 dicarboxylic acids in the European atmosphere inferred from year-round aerosol study conducted at a west-east transect, J. Geophys. Res.-Atmos., 112, D23S07, https://doi.org/10.1029/2006JD008019, 2007. Li, C. C., Mao, J. T., Alexis, K. H. L., Yuan, Z. B., Wang, M. H., and Liu, X. Y.: Application of MODIS satellite products to the air pollution research in Beijing, Sci. China Ser. D, 48, 209–219, 2005. Li, J., Xie, S. D., Zeng, L. M., Li, L. Y., Li, Y. Q., and Wu, R. R.: Characterization of ambient volatile organic compounds and their sources in Beijing, before, during, and after Asia-Pacific Economic Cooperation China 2014, Atmos. Chem. Phys., 15, 7945–7959, https://doi.org/10.5194/acp-15-7945-2015, 2015. Li, J., Wang, X., Chen, J., Zhu, C., Li, W., Li, C., Liu, L., Xu, C., Wen, L., Xue, L., Wang, W., Ding, A., and Herrmann, H.: Chemical composition and droplet size distribution of cloud at the summit of Mount Tai, China, Atmos. Chem. Phys., 17, 9885–9896, https://doi.org/10.5194/acp-17-9885-2017, 2017. Li, L. Y., Xie, S. D., Zeng, L. M., Wu, R. R., and Li, J.: Characteristics of volatile organic compounds and their role in ground-level ozone formation in the Beijing-Tianjin-Hebei region, China, Atmos. Environ., 113, 247–254, https://doi.org/10.1016/j.atmosenv.2015.05.021, 2015. Li, M., Zhang, Q., Kurokawa, J.-I., Woo, J.-H., He, K., Lu, Z., Ohara, T., Song, Y., Streets, D. G., Carmichael, G.
R., Cheng, Y., Hong, C., Huo, H., Jiang, X., Kang, S., Liu, F., Su, H., and Zheng, B.: MIX: a mosaic Asian anthropogenic emission inventory under the international collaboration framework of the MICS-Asia and HTAP, Atmos. Chem. Phys., 17, 935–963, https://doi.org/10.5194/acp-17-935-2017, 2017. Li, Y., An, J. L., Min, M., Zhang, W., Wang, F., and Xie, P. H.: Impacts of HONO sources on the air quality in Beijing, Tianjin and Hebei Province of China, Atmos. Environ., 45, 4735–4744, https://doi.org/10.1016/j.atmosenv.2011.04.086, 2011. Lim, H. J., Carlton, A. G., and Turpin, B. J.: Isoprene forms secondary organic aerosol through cloud processing: model simulations, Environ. Sci. Technol., 39, 4441–4446, https://doi.org/10.1021/es048039h, 2005. Liu, Y., Shao, M., Kuster, W. C., Goldan, P. D., Li, X. H., Lu, S. H., and De Gouw, J. A.: Source Identification of Reactive Hydrocarbons and Oxygenated VOCs in the Summertime in Beijing, Environ. Sci. Technol., 43, 75–81, https://doi.org/10.1021/es801716n, 2009. Liu, Y. J., Zhang, T. T., Liu, Q. Y., Zhang, R. J., Sun, Z. Q., and Zhang, M. G.: Seasonal Variation of Physical and Chemical Properties in TSP, PM10 and PM2.5 at a Roadside Site in Beijing and Their Influence on Atmospheric Visibility, Aerosol Air Qual. Res., 14, 954–969, https://doi.org/10.4209/aaqr.2013.01.0023, 2014. Liu, Z., Wang, Y., Gu, D., Zhao, C., Huey, L. G., Stickel, R., Liao, J., Shao, M., Zhu, T., Zeng, L., Amoroso, A., Costabile, F., Chang, C.-C., and Liu, S.-C.: Summertime photochemistry during CAREBeijing-2007: ROx budgets and O3 formation, Atmos. Chem. Phys., 12, 7737–7752, https://doi.org/10.5194/acp-12-7737-2012, 2012a. Liu, Z., Wang, Y. H., Vrekoussis, M., Richter, A., Wittrock, F., Burrows, J. P., Shao, M., Chang, C. C., Liu, S. C., Wang, H. L., and Chen, C. H.: Exploring the missing source of glyoxal (CHOCHO) over China, Geophys. Res. Lett., 39, L10812, https://doi.org/10.1029/2012GL051645, 2012b. Lu, K. D., Rohrer, F., Holland, F., Fuchs, H., Bohn, B., Brauers, T., Chang, C. C., Häseler, R., Hu, M., Kita, K., Kondo, Y., Li, X., Lou, S. R., Nehr, S., Shao, M., Zeng, L. M., Wahner, A., Zhang, Y. H., and Hofzumahaus, A.: Observation and modelling of OH and HO2 concentrations in the Pearl River Delta 2006: a missing OH source in a VOC rich atmosphere, Atmos. Chem. Phys., 12, 1541–1569, https://doi.org/10.5194/acp-12-1541-2012, 2012. Matsui, H., Koike, M., Kondo, Y., Takegawa, N., Kita, K., Miyazaki, Y., Hu, M., Chang, S. Y., Blake, D., and Fast, J. D.: Spatial and temporal variations of aerosols around Beijing in summer 2006: Model evaluation and source apportionment, J. Geophys. Res.-Atmos., 114, D00G13, https://doi.org/10.1029/2008JD010906, 2009. Meng, J., Wang, G., Hou, Z., Liu, X., Wei, B., Wu, C., Cao, C., Wang, J., Li, J., Cao, J., Zhang, E., Dong, J., Liu, J., Ge, S., and Xie, Y.: Molecular distribution and stable carbon isotopic compositions of dicarboxylic acids and related SOA from biogenic sources in the summertime atmosphere of Mt. Tai in the North China Plain, Atmos. Chem. Phys., 18, 15069–15086, https://doi.org/10.5194/acp-18-15069-2018, 2018. Meng, Z. Y., Lin, W. L., Jiang, X. M., Yan, P., Wang, Y., Zhang, Y. M., Jia, X. F., and Yu, X. L.: Characteristics of atmospheric ammonia over Beijing, China, Atmos. Chem. Phys., 11, 6139–6151, https://doi.org/10.5194/acp-11-6139-2011, 2011. Miyazaki, Y., Aggarwal, S. G., Singh, K., Gupta, P. 
K., and Kawamura, K.: Dicarboxylic acids and water-soluble organic carbon in aerosols in New Delhi, India, in winter: Characteristics and formation processes, J. Geophys. Res.-Atmos., 114, D19206, https://doi.org/10.1029/2009JD011790, 2009. Mouchel-Vallon, C., Deguillaume, L., Monod, A., Perroux, H., Rose, C., Ghigo, G., Long, Y., Leriche, M., Aumont, B., Patryl, L., Armand, P., and Chaumerliac, N.: CLEPS 1.0: A new protocol for cloud aqueous phase oxidation of VOC mechanisms, Geosci. Model Dev., 10, 1339–1362, https://doi.org/10.5194/gmd-10-1339-2017, 2017. Ng, N. L., Brown, S. S., Archibald, A. T., Atlas, E., Cohen, R. C., Crowley, J. N., Day, D. A., Donahue, N. M., Fry, J. L., Fuchs, H., Griffin, R. J., Guzman, M. I., Herrmann, H., Hodzic, A., Iinuma, Y., Jimenez, J. L., Kiendler-Scharr, A., Lee, B. H., Luecken, D. J., Mao, J., McLaren, R., Mutzel, A., Osthoff, H. D., Ouyang, B., Picquet-Varrault, B., Platt, U., Pye, H. O. T., Rudich, Y., Schwantes, R. H., Shiraiwa, M., Stutz, J., Thornton, J. A., Tilgner, A., Williams, B. J., and Zaveri, R. A.: Nitrate radicals and biogenic volatile organic compounds: oxidation, mechanisms, and organic aerosol, Atmos. Chem. Phys., 17, 2103–2162, https://doi.org/10.5194/acp-17-2103-2017, 2017. Pavuluri, C. M. and Kawamura, K.: Evidence for 13-carbon enrichment in oxalic acid via iron catalyzed photolysis in aqueous phase, Geophys. Res. Lett., 39, L03802, https://doi.org/10.1029/2011GL050398, 2012. Pavuluri, C. M., Kawamura, K., and Swaminathan, T.: Water-soluble organic carbon, dicarboxylic acids, ketoacids, and α-dicarbonyls in the tropical Indian aerosols, J. Geophys. Res.-Atmos., 115, D11302, https://doi.org/10.1029/2009JD012661, 2010. Rao, Z. H., Chen, Z. M., Liang, H., Huang, L. B., and Huang, D.: Carbonyl compounds over urban Beijing: Concentrations on haze and non-haze days and effects on radical chemistry, Atmos. Environ., 124, 207–216, https://doi.org/10.1016/j.atmosenv.2015.06.050, 2016. Ren, Y., Ding, A. J., Wang, T., Shen, X. H., Guo, J., Zhang, J. M., Wang, Y., Xu, P. J., Wang, X. F., Gao, J., and Collett Jr., J. L.: Measurement of gas-phase total peroxides at the summit of Mount Tai in China, Atmos. Environ., 43, 1702–1711, https://doi.org/10.1016/j.atmosenv.2008.12.020, 2009. Rusumdar, A. J., Tilgner, A., Wolke, R., and Herrmann, H.: Treatment of non-ideality in the multiphase model SPACCIM-Part2: Impacts on the multiphase chemical processing in deliquesced aerosol particles, Atmos. Chem. Phys. Discuss., https://doi.org/10.5194/acp-2019-819, in review, 2019. Saunders, S. M., Jenkin, M. E., Derwent, R. G., and Pilling, M. J.: Protocol for the development of the Master Chemical Mechanism, MCM v3 (Part A): tropospheric degradation of non-aromatic volatile organic compounds, Atmos. Chem. Phys., 3, 161–180, https://doi.org/10.5194/acp-3-161-2003, 2003. Schwartz, S. E.: Mass-transport considerations pertinent to aqueous phase reactions of gases in liquid-water clouds, in: Chemistry of multiphase atmospheric systems, Springer, 415–471, https://doi.org/10.1007/978-3-642-70627-1_16, 1986. Seagrave, J., McDonald, J. D., Bedrick, E., Edgerton, E. S., Gigliotti, A. P., Jansen, J. J., Ke, L., Naeher, L. P., Seilkop, S. K., and Zheng, M.: Lung toxicity of ambient particulate matter from southeastern US sites with different contributing sources: relationships between composition and effects, Environ. Health Persp., 114, 1387–1393, https://doi.org/10.1289/ehp.9234, 2006. 
Sehili, A., Wolke, R., Knoth, O., Simmel, M., Tilgner, A., and Herrmann, H.: Comparison of different model approaches for the simulation of multiphase processes, Atmos. Environ., 39, 4403–4417, https://doi.org/10.1016/j.atmosenv.2005.02.039, 2005. Shen, X. H., Lee, T., Guo, J., Wang, X. F., Li, P. H., Xu, P. J., Wang, Y., Ren, Y., Wang, W. X., Wang, T., Li, Y., Carn, S. A., and Collett Jr., J. L.: Aqueous phase sulfate production in clouds in eastern China, Atmos. Environ., 62, 502–511, https://doi.org/10.1016/j.atmosenv.2012.07.079, 2012. Shrivastava, M., Cappa, C. D., Fan, J. W., Goldstein, A. H., Guenther, A. B., Jimenez, J. L., Kuang, C., Laskin, A., Martin, S. T., Ng, N. L., Petaja, T., Pierce, J. R., Rasch, P. J., Roldin, P., Seinfeld, J. H., Shilling, J., Smith, J. N., Thornton, J. A., Volkamer, R., Wang, J., Worsnop, D. R., Zaveri, R. A., Zelenyuk, A., and Zhang, Q.: Recent advances in understanding secondary organic aerosol: Implications for global climate forcing, Rev. Geophys., 55, 509–559, https://doi.org/10.1002/2016RG000540, 2017. Simmel, M. and Wurzler, S.: Condensation and activation in sectional cloud microphysical models, Atmos. Res., 80, 218–236, https://doi.org/10.1016/j.atmosres.2005.08.002, 2006. Simmel, M., Diehl, K., and Wurzler, S.: Numerical simulation of the microphysics of an orographic cloud: comparison with measurements and sensitivity studies, Atmos. Environ., 39, 4365–4373, https://doi.org/10.1016/j.atmosenv.2005.02.017, 2005. Sindelarova, K., Granier, C., Bouarar, I., Guenther, A., Tilmes, S., Stavrakou, T., Müller, J.-F., Kuhn, U., Stefani, P., and Knorr, W.: Global data set of biogenic VOC emissions calculated by the MEGAN model over the last 30 years, Atmos. Chem. Phys., 14, 9317–9341, https://doi.org/10.5194/acp-14-9317-2014, 2014. Sorooshian, A., Varutbangkul, V., Brechtel, F. J., Ervens, B., Feingold, G., Bahreini, R., Murphy, S. M., Holloway, J. S., Atlas, E. L., and Buzorius, G.: Oxalic acid in clear and cloudy atmospheres: Analysis of data from International Consortium for Atmospheric Research on Transport and Transformation 2004, J. Geophys. Res.-Atmos., 111, D23S45, https://doi.org/10.1029/2005JD006880, 2006. Suhail, K., George, M., Chandran, S., Varma, R., Venables, D. S., Wang, M., and Chen, J.: Open path incoherent broadband cavity-enhanced measurements of NO3 radical and aerosol extinction in the North China Plain, Spectrochim. Acta A, 208, 24–31, https://doi.org/10.1016/j.saa.2018.09.023, 2019. Sun, L., Xue, L., Wang, T., Gao, J., Ding, A., Cooper, O. R., Lin, M., Xu, P., Wang, Z., Wang, X., Wen, L., Zhu, Y., Chen, T., Yang, L., Wang, Y., Chen, J., and Wang, W.: Significant increase of summertime ozone at Mount Tai in Central Eastern China, Atmos. Chem. Phys., 16, 10637–10650, https://doi.org/10.5194/acp-16-10637-2016, 2016. Sun, W. J., Yang, L. M., Yu, L. Y., and Saeys, M.: Ab Initio Reaction Path Analysis for the Initial Hydrogen Abstraction from Organic Acids by Hydroxyl Radicals, J. Phys. Chem. A., 113, 7852–7860, https://doi.org/10.1021/jp8090792, 2009. Sun, Y., Jiang, Q., Wang, Z., Fu, P., Li, J., Yang, T., and Yin, Y.: Investigation of the sources and evolution processes of severe haze pollution in Beijing in January 2013, J. Geophys. Res.-Atmos., 119, 4380–4398, https://doi.org/10.1002/2014JD021641, 2014. Sun, Y. L., Wang, Z. F., Fu, P. Q., Yang, T., Jiang, Q., Dong, H. B., Li, J., and Jia, J. J.: Aerosol composition, sources and processes during wintertime in Beijing, China, Atmos. Chem.
Phys., 13, 4577–4592, https://doi.org/10.5194/acp-13-4577-2013, 2013. Sun, Y. L., Wang, Z. F., Du, W., Zhang, Q., Wang, Q. Q., Fu, P. Q., Pan, X. L., Li, J., Jayne, J., and Worsnop, D. R.: Long-term real-time measurements of aerosol particle composition in Beijing, China: seasonal variations, meteorological effects, and source analysis, Atmos. Chem. Phys., 15, 10149–10165, https://doi.org/10.5194/acp-15-10149-2015, 2015. Tabazadeh, A.: Organic aggregate formation in aerosols and its impact on the physicochemical properties of atmospheric particles, Atmos. Environ., 39, 5472–5480, https://doi.org/10.1016/j.atmosenv.2005.05.045, 2005. Tilgner, A. and Herrmann, H.: Radical-driven carbonyl-to-acid conversion and acid degradation in tropospheric aqueous systems studied by CAPRAM, Atmos. Environ., 44, 5415–5422, https://doi.org/10.1016/j.atmosenv.2010.07.050, 2010. Tilgner, A. and Herrmann, H.: Tropospheric Aqueous-Phase OH Oxidation Chemistry: Current Understanding, Uptake of Highly Oxidized Organics and Its Effects, Multiphase Environ. Chem. Atmos., 1299, 49–85, https://doi.org/10.1021/bk-2018-1299.ch004, 2018. Tilgner, A., Bräuer, P., Wolke, R., and Herrmann, H.: Modelling multiphase chemistry in deliquescent aerosols and clouds using CAPRAM3.0i, J. Atmos. Chem., 70, 221–256, https://doi.org/10.1007/s10874-013-9267-4, 2013. Volkamer, R., Jimenez, J. L., San Martini, F., Dzepina, K., Zhang, Q., Salcedo, D., Molina, L. T., Worsnop, D. R., and Molina, M. J.: Secondary organic aerosol formation from anthropogenic air pollution: Rapid and higher than expected, Geophys. Res. Lett., 33, L17811, https://doi.org/10.1029/2006GL026899, 2006. Volkamer, R., Ziemann, P. J., and Molina, M. J.: Secondary Organic Aerosol Formation from Acetylene (C2H2): seed effect on SOA yields due to organic photochemistry in the aerosol aqueous phase, Atmos. Chem. Phys., 9, 1907–1928, https://doi.org/10.5194/acp-9-1907-2009, 2009. Wang, G., Kawamura, K., Umemoto, N., Xie, M. J., Hu, S. Y., and Wang, Z. F.: Water-soluble organic compounds in PM2.5 and size-segregated aerosols over Mount Tai in North China Plain, J. Geophys. Res.-Atmos., 114, D19208, https://doi.org/10.1029/2008JD011390, 2009. Wang, G., Zhang, R., Gomez, M. E., Yang, L., Zamora, M. L., Hu, M., Lin, Y., Peng, J., Guo, S., and Meng, J.: Persistent sulfate formation from London Fog to Chinese haze, P. Natl. Acad. Sci. USA, 113, 13630–13635, https://doi.org/10.1073/pnas.1616540113, 2016. Wang, H. B., Tian, M., Li, X. H., Chang, Q., Cao, J. J., Yang, F. M., Ma, Y. L., and He, K. B.: Chemical Composition and Light Extinction Contribution of PM2.5 in Urban Beijing for a 1-Year Period, Aerosol Air Qual. Res., 15, 2200–2211, https://doi.org/10.4209/aaqr.2015.04.0257, 2015. Wang, S., Shi, C., Zhou, B., Zhao, H., Wang, Z., Yang, S., and Chen, L.: Observation of NO3 radicals over Shanghai, China, Atmos. Environ., 70, 401–409, https://doi.org/10.1016/j.atmosenv.2013.01.022, 2013. Wang, Y., Zhuang, G., Sun, Y., and An, Z.: The variation of characteristics and formation mechanisms of aerosols in dust, haze, and clear days in Beijing, Atmos. Environ., 40, 6579–6591, https://doi.org/10.1016/j.atmosenv.2006.05.066, 2006. Wang, Y., Shen, L., Wu, S., Mickley, L., He, J., and Hao, J.: Sensitivity of surface ozone over China to 2000–2050 global changes of climate and emissions, Atmos. Environ., 75, 374–382, https://doi.org/10.1016/j.atmosenv.2013.04.045, 2013. 
Wang, Y., Yao, L., Wang, L., Liu, Z., Ji, D., Tang, G., Zhang, J., Sun, Y., Hu, B., and Xin, J.: Mechanism for the formation of the January 2013 heavy haze pollution episode over central and eastern China, Sci. China Earth Sci., 57, 14–25, https://doi.org/10.1007/s11430-013-4773-4, 2014. Wang, Y., Chen, Z., Wu, Q., Liang, H., Huang, L., Li, H., Lu, K., Wu, Y., Dong, H., Zeng, L., and Zhang, Y.: Observation of atmospheric peroxides during Wangdu Campaign 2014 at a rural site in the North China Plain, Atmos. Chem. Phys., 16, 10985–11000, https://doi.org/10.5194/acp-16-10985-2016, 2016. Warneck, P.: In-cloud chemistry opens pathway to the formation of oxalic acid in the marine atmosphere, Atmos. Environ., 37, 2423–2427, https://doi.org/10.1016/S1352-2310(03)00136-5, 2003. Wang, Y. H., Hu, B., Ji, D. S., Liu, Z. R., Tang, G. Q., Xin, J. Y., Zhang, H. X., Song, T., Wang, L. L., Gao, W. K., Wang, X. K., and Wang, Y. S.: Ozone weekend effects in the Beijing–Tianjin–Hebei metropolitan area, China, Atmos. Chem. Phys., 14, 2419–2429, https://doi.org/10.5194/acp-14-2419-2014, 2014. Wen, L., Xue, L., Wang, X., Xu, C., Chen, T., Yang, L., Wang, T., Zhang, Q., and Wang, W.: Summertime fine particulate nitrate pollution in the North China Plain: increasing trends, formation mechanisms and implications for control policy, Atmos. Chem. Phys., 18, 11261–11275, https://doi.org/10.5194/acp-18-11261-2018, 2018. Williams, J., de Reus, M., Krejci, R., Fischer, H., and Ström, J.: Application of the variability-size relationship to atmospheric aerosol studies: estimating aerosol lifetimes and ages, Atmos. Chem. Phys., 2, 133–145, https://doi.org/10.5194/acp-2-133-2002, 2002. Wolke, R., Sehili, A., Simmel, M., Knoth, O., Tilgner, A., and Herrmann, H.: SPACCIM: A parcel model with detailed microphysics and complex multiphase chemistry, Atmos. Environ., 39, 4375–4388, https://doi.org/10.1016/j.atmosenv.2005.02.038, 2005. Xue, L., Wang, T., Wang, X., Blake, D. R., Gao, J., Nie, W., Gao, R., Gao, X., Xu, Z., and Ding, A.: On the use of an explicit chemical mechanism to dissect peroxy acetyl nitrate formation, Environ. Pollut., 195, 39–47, https://doi.org/10.1016/j.envpol.2014.08.005, 2014. Yao, X., Chan, C. K., Fang, M., Cadle, S., Chan, T., Mulawa, P., He, K., and Ye, B.: The water-soluble ionic composition of PM2.5 in Shanghai and Beijing, China, Atmos. Environ., 36, 4223–4234, https://doi.org/10.1016/S1352-2310(02)00342-4, 2002. Yu, Q., Chen, J., Qin, W. H., Cheng, S. M., Zhang, Y. P., Ahmad, M., and Ouyang, W.: Characteristics and secondary formation of water-soluble organic acids in PM1, PM2.5 and PM10 in Beijing during haze episodes, Sci. Total Environ., 669, 175–184, https://doi.org/10.1016/j.scitotenv.2019.03.131, 2019. Zhang, J., Chen, J., Yang, L., Sui, X., Yao, L., Zheng, L., Wen, L., Xu, C., and Wang, W.: Indoor PM2.5 and its chemical composition during a heavy haze–fog episode at Jinan, China, Atmos. Environ., 99, 641–649, https://doi.org/10.1016/j.atmosenv.2014.10.026, 2014. Zhao, P. S., Dong, F., He, D., Zhao, X. J., Zhang, X. L., Zhang, W. Z., Yao, Q., and Liu, H. Y.: Characteristics of concentrations and chemical compositions for PM2.5 in the region of Beijing, Tianjin, and Hebei, China, Atmos. Chem. Phys., 13, 4631–4644, https://doi.org/10.5194/acp-13-4631-2013, 2013. 
Zhao, W., Kawamura, K., Yue, S., Wei, L., Ren, H., Yan, Y., Kang, M., Li, L., Ren, L., Lai, S., Li, J., Sun, Y., Wang, Z., and Fu, P.: Molecular distribution and compound-specific stable carbon isotopic composition of dicarboxylic acids, oxocarboxylic acids and α-dicarbonyls in PM2.5 from Beijing, China, Atmos. Chem. Phys., 18, 2749–2767, https://doi.org/10.5194/acp-18-2749-2018, 2018. Zhao, W., Fu, P., Yue, S., Li, L., Xie, Q., Zhu, C., Wei, L., Ren, H., Li, P., Li, W., Sun, Y., Wang, Z., Kawamura, K., and Chen, J.: Excitation-emission matrix fluorescence, molecular characterization and compound-specific stable carbon isotopic composition of dissolved organic matter in cloud water over Mt. Tai, Atmos. Environ., 213, 608–619, https://doi.org/10.1016/j.atmosenv.2019.06.034, 2019. Zhao, X. J., Zhao, P. S., Xu, J., Meng, W., Pu, W. W., Dong, F., He, D., and Shi, Q. F.: Analysis of a winter regional haze event and its formation mechanism in the North China Plain, Atmos. Chem. Phys., 13, 5685–5696, https://doi.org/10.5194/acp-13-5685-2013, 2013. Zhou, Y., Wang, T., Gao, X. M., Xue, L. K., Wang, X. F., Wang, Z., Gao, J., Zhang, Q. Z., and Wang, W. X.: Continuous observations of water-soluble ions in PM2.5 at Mount Tai (1534 m a.s.l.) in central-eastern China, J. Atmos. Chem., 64, 107–127, https://doi.org/10.1007/s10874-010-9172-z, 2009. Zhu, Y.: acp-2019-982-data, available at: https://github.com/Yanhong-Zhu/acp-2019-982-data, last access: 15 May 2020. Zhu, Y., Yang, L., Chen, J., Kawamura, K., Sato, M., Tilgner, A., van Pinxteren, D., Chen, Y., Xue, L., Wang, X., Simpson, I. J., Herrmann, H., Blake, D. R., and Wang, W.: Molecular distributions of dicarboxylic acids, oxocarboxylic acids and α-dicarbonyls in PM2.5 collected at the top of Mt. Tai, North China, during the wheat burning season of 2014, Atmos. Chem. Phys., 18, 10741–10758, https://doi.org/10.5194/acp-18-10741-2018, 2018.
https://chemistry.stackexchange.com/questions/110124/action-of-h2s-and-oh-on-an-alkyl-halide
# Action of H2S and OH- on an alkyl halide?

What would be the action of $$\ce{OH-}$$ and $$\ce{H2S}$$ on 2,6-dichloroheptane? I came across this question in my organic chemistry workbook, but I have simply never seen a question of this type before. I think a cyclic compound may be formed, but I am not sure which one.

After thinking it over, with help from the comments under my question, I came up with a solution. However, my book says that there should be only one sulphur atom in the cyclic compound. How does this make sense when there are two halogens in the original haloalkane? Is my mechanism incorrect?

• It rather depends on the reaction conditions, but the formation of 5- and 6-membered rings is a favourable process and likely to occur, particularly with sulfide as a nucleophile – Waylander Feb 26 '19 at 19:04
• I assume you mean $\ce{OH-}$ (as in alkaline $\ce{H2S}$)? – Mathew Mahindaratne Feb 27 '19 at 3:14
• @Waylander Thank you! But why would there be a sulphide? – viktor nikiforov Feb 27 '19 at 11:21
• @Mathew Mahindaratne Yes. – viktor nikiforov Feb 27 '19 at 11:22
• H2S plus OH- gives HS-, which will react readily with the haloalkane to give RSH; this will be further deprotonated by OH- to give RS-, which will react internally to give the cyclic species. – Waylander Feb 27 '19 at 12:03
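Spelling out Waylander's last comment as a step-by-step scheme (the name of the cyclic product is an inference from the 2,6-substitution pattern, so check it against your book):

1. $\ce{H2S + OH- -> HS- + H2O}$ (hydroxide generates the much better nucleophile, hydrosulfide)
2. $\ce{HS-}$ displaces one chloride, say at C2, in an $\mathrm{S_N2}$ substitution, giving a secondary thiol
3. $\ce{R-SH + OH- -> R-S- + H2O}$ (the thiol, being more acidic than water, is deprotonated)
4. The thiolate then displaces the remaining chloride at C6 intramolecularly, closing a favourable six-membered ring: 2,6-dimethylthiane, which indeed contains only one sulphur atom, as the book states.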
http://clay6.com/qa/39255/find-the-coordinates-of-the-point-which-divides-the-line-segment-joining-th
# Find the coordinates of the point which divides the line segment joining the points $(-2,3,5)$ and $(1,-4,6)$ in the ratio $2:3$ internally.

$\begin{array}{1 1}\big(-\large\frac{4}{5},\frac{1}{5},\frac{27}{5}\big) \\(-8,17,3) \\ \big(\large\frac{8}{5},\frac{17}{5},\frac{27}{5}\big) \\ \big(-\large\frac{8}{5},\frac{1}{5},\frac{27}{5}\big)\end{array}$

Toolbox:
• Section formula: the coordinates of the point $C$ that divides the segment joining the points $A(x_1,y_1,z_1)$ and $B(x_2,y_2,z_2)$ in the ratio $l:m$ internally are given by $C\big(\large\frac{lx_2+mx_1}{l+m},\frac{ly_2+my_1}{l+m},\frac{lz_2+mz_1}{l+m}\big)$

The given points are $A(-2,3,5)$ and $B(1,-4,6)$. By the section formula, the point $C$ that divides $AB$ in the ratio $2:3$ internally is $C\big(\large\frac{2\cdot 1+3\cdot(-2)}{2+3},\frac{2\cdot(-4)+3\cdot 3}{2+3},\frac{2\cdot 6+3\cdot 5}{2+3}\big)$ $=C\big(-\large\frac{4}{5},\frac{1}{5},\frac{27}{5}\big)$
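A quick way to double-check such a computation is to code the section formula directly. The sketch below (Python, with exact rational arithmetic; the function name is ours) reproduces the result above:

```python
from fractions import Fraction

def section_point(A, B, l, m):
    """Point dividing segment AB internally in the ratio l:m:
    ((l*x2 + m*x1)/(l+m), ...), applied coordinate-wise."""
    return tuple(Fraction(l * b + m * a, l + m) for a, b in zip(A, B))

# The worked example: A(-2, 3, 5), B(1, -4, 6), ratio 2:3.
print(section_point((-2, 3, 5), (1, -4, 6), 2, 3))
# (Fraction(-4, 5), Fraction(1, 5), Fraction(27, 5))
```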
https://www.semanticscholar.org/paper/CaloGAN%3A-Simulating-3D-High-Energy-Particle-Showers-Paganini-Oliveira/263cd1c2e7a0bffe69eb34f3fd6abec6e6ad7c7a
# CaloGAN: Simulating 3D High Energy Particle Showers in Multi-Layer Electromagnetic Calorimeters with Generative Adversarial Networks

@article{Paganini2017CaloGANS3, title={CaloGAN: Simulating 3D High Energy Particle Showers in Multi-Layer Electromagnetic Calorimeters with Generative Adversarial Networks}, author={Michela Paganini and Luke de Oliveira and Benjamin P. Nachman}, journal={ArXiv}, year={2017}, volume={abs/1712.10321} }

• Published 2017 • Physics, Computer Science, Mathematics • ArXiv

The precise modeling of subatomic particle interactions and propagation through matter is paramount for the advancement of nuclear and particle physics searches and precision measurements. The most computationally expensive step in the simulation pipeline of a typical experiment at the Large Hadron Collider (LHC) is the detailed modeling of the full complexity of physics processes that govern the motion and evolution of particle showers inside calorimeters. We introduce CaloGAN, a new fast…

156 Citations

Precise Simulation of Electromagnetic Calorimeter Showers Using a Wasserstein Generative Adversarial Network • Physics • Computing and Software for Big Science • 2019
Simulations of particle showers in calorimeters are computationally time-consuming, as they have to reproduce both energy depositions and their considerable fluctuations. A new approach to ultra-fast…

Fast and Accurate Electromagnetic and Hadronic Showers from Generative Models
Generative machine learning models offer a promising way to efficiently amplify classical Monte Carlo generators' statistics for event simulation and generation in particle physics. Given the already…

Validation of Deep Convolutional Generative Adversarial Networks for High Energy Physics Calorimeter Simulations • Physics, Computer Science • AAAI Spring Symposium: MLPS • 2021
It is proved that the new neural network architecture reaches a higher level of accuracy with respect to the 3D convolutional GAN while reducing the necessary computational resources.

Three Dimensional Energy Parametrized Generative Adversarial Networks for Electromagnetic Shower Simulation • Computer Science • 2018 25th IEEE International Conference on Image Processing (ICIP) • 2018
This work investigates the use of Generative Adversarial Networks (GANs) as a fast alternative for simulation of HEP detector response by treating the energy deposited by a particle inside a calorimeter detector as a three-dimensional image.

Getting High: High Fidelity Simulation of High Granularity Calorimeters with High Speed
This work investigates the use of a new architecture, the Bounded Information Bottleneck Autoencoder, for modelling electromagnetic showers in the central region of the Silicon-Tungsten calorimeter of the proposed International Large Detector and achieves an accurate simulation of differential distributions including for the first time the shape of the minimum-ionizing-particle peak.

Physics Validation of Novel Convolutional 2D Architectures for Speeding Up High Energy Physics Simulations • Physics, Computer Science • ArXiv • 2021
This work applies Generative Adversarial Networks (GANs), a deep learning technique, to replace the calorimeter detector simulations, speeding up the simulation time by orders of magnitude, following a previous approach which used three-dimensional convolutional neural networks.

Generative Models for High Energy Physics Calorimeters
Existing methods for simulating particle collision experiments for the planned International Linear Collider are time-consuming, slow, and expensive. In these experiments, two high-energy beams of…

A data-driven event generator for Hadron Colliders using Wasserstein Generative Adversarial Network • Physics • 2021
Highly reliable Monte-Carlo event generators and detector simulation programs are important for the precision measurement in the high energy physics. Huge amounts of computing resources are required…

Conditional Wasserstein Generative Adversarial Networks for Fast Detector Simulation • Computer Science • EPJ Web of Conferences • 2021
This work investigates the use of conditional Wasserstein Generative Adversarial Networks to simulate both hadronization and the detector response to jets, and demonstrates that the model produces accurate conditional reconstructed jet transverse momentum (pT) distributions over a wide range of pT for the input parton jet.

Deep generative models for fast shower simulation in ATLAS
This feasibility study demonstrates the potential of using Variational Auto-Encoders and Generative Adversarial Networks for fast calorimeter simulation for the ATLAS experiment in the future and opens the possibility to complement current simulation techniques.

#### References (showing 1–10 of 69)

Learning Particle Physics by Example: Location-Aware Generative Adversarial Networks for Physics Synthesis • Physics, Mathematics • 2017
We provide a bridge between generative modeling in the Machine Learning community and simulated physical processes in high energy particle physics by applying a novel Generative Adversarial Network…

Electron efficiency measurements with the ATLAS detector using 2012 LHC proton–proton collision data • M. Aaboud, +2,849 authors L. Zwalinski • Medicine, Physics • The European physical journal. C, Particles and fields • 2017
The efficiency to reconstruct and identify electrons at the ATLAS experiment varies from 65 to 95%, depending on the transverse momentum of the electron and background rejection, which is measured in data and evaluated in simulated samples.

Geant4 - A simulation toolkit
Geant4 is a toolkit for simulating the passage of particles through matter. It includes a complete range of functionality including tracking, geometry, physics models and hits. The physics processes…

Generative Adversarial Networks recover features in astrophysical images of galaxies beyond the deconvolution limit • Computer Science, Physics • ArXiv • 2017
The ability to better recover detailed features from low-signal-to-noise and low angular resolution imaging data significantly increases the ability to study existing data sets of astrophysical objects as well as future observations with observatories such as the Large Synoptic Sky Telescope and the Hubble and James Webb space telescopes.

Fast Simulation of Electromagnetic Showers in the ATLAS Calorimeter: Frozen Showers
One of the most time consuming process simulating pp interactions in the ATLAS detector at LHC is the simulation of electromagnetic showers in the calorimeter. In order to speed up the event…

Electron reconstruction and identification efficiency measurements with the ATLAS detector using the 2011 LHC proton–proton collision data • G. Aad, +2,889 authors L. Zwalinski • Physics, Medicine • The European physical journal. C, Particles and fields • 2014
The electron reconstruction and identification efficiencies of the ATLAS detector at the LHC have been evaluated using proton–proton collision data collected in 2011 and determined with an accuracy at the few per mil level for electron transverse energy greater than 30 GeV.

Electron performance measurements with the ATLAS detector using the 2010 LHC proton-proton collision data
Detailed measurements of the electron performance of the ATLAS detector at the LHC are reported, using decays of the Z, W and J/ψ particles. Data collected in 2010 at $\sqrt{s}=7\mbox{~TeV}$ are…

Reconstruction of three-dimensional porous media using generative adversarial neural networks • Computer Science, Physics • Physical review. E • 2017
Results show that generative adversarial networks can be used to reconstruct high-resolution three-dimensional images of porous media at different scales that are representative of the morphology of the images used to train the neural network.

Final state interactions in the (nuclear) FRITIOF string interaction scenario • Physics • 1996
We consider the final state reinteraction of the produced hadrons in a scenario with the initial high energy nuclear interaction provided by the FRITIOF Model. The basic idea is that any produced…

The ATLAS Simulation Infrastructure
The simulation software for the ATLAS Experiment at the Large Hadron Collider is being used for large-scale production of events on the LHC Computing Grid. This simulation requires many components,…
https://jeffe.cs.illinois.edu/teaching/comptop/2023/notes/14-planar-separators.html
# Planar Separators

Let $$\Sigma$$ be an arbitrary planar map, with non-negative weights on its vertices, edges, and/or faces that sum to $$W$$. A simple cycle $$C$$ in a planar map $$\Sigma$$ is a balanced cycle separator if the total weight of all vertices, edges, and faces on either side of $$C$$ is at most $$3W/4$$. As long as each vertex, edge, or face of $$\Sigma$$ has weight at most $$W/4$$, there is a balanced cycle separator with at most $$O(\sqrt{n})$$ vertices; moreover, we can compute such a cycle in $$O(n)$$ time.

## Tree separators

Before we consider separators in planar graphs, let's consider the simpler case of trees. Here a balanced separator is a single edge that splits the tree into two subtrees of roughly equal weight. Tree separators were first studied by Camille Jordan in 1869.

Let $$T = (V, E)$$ be an unrooted tree in which every vertex has degree at most $$3$$. Intuitively, $$T$$ is a "binary" tree, but without a root and without a distinction between left and right children. (This bounded-degree assumption is necessary.) Assign each vertex $$v$$ a non-negative weight $$w(v)$$, and let $$W := \sum_v w(v)$$.

Tree-separator lemma: If every vertex has weight at most $$W/4$$, there is an edge $$e$$ in $$T$$ such that the total weight in either component of $$T\setminus e$$ is at most $$3W/4$$.

Proof: Pick an arbitrary leaf $$r$$ of $$T$$ as the root, and direct all edges away from $$r$$, so every vertex in $$T$$ has at most two children. By attaching leaves with weight zero, we can assume without loss of generality that every non-leaf vertex has exactly two children. For any vertex $$v$$, let $$W(v)$$ denote the total weight of $$v$$ and its descendants; for example, $$W(r) = W$$. For any non-leaf vertex $$v$$, label its children $$\textsf{heft}(v)$$ and $$\textsf{lite}(v)$$ so that $$W(\textsf{heft}(v)) \ge W(\textsf{lite}(v))$$ (breaking ties arbitrarily). Starting at the root $$r$$, follow $$\textsf{heft}$$ pointers down to the first vertex $$x$$ such that $$W(\textsf{heft}(x)) \le W/4$$. Because $$x$$ is either the root or the heavy child of a vertex whose heavy subtree weighs more than $$W/4$$, we immediately have \begin{aligned} W/4 &< W(x) \\ &= W(\textsf{heft}(x)) + W(\textsf{lite}(x)) + w(x) \\ &\le 2\cdot W(\textsf{heft}(x)) + w(x) \\ &\le 3W/4. \end{aligned} Let $$e$$ be the edge between $$x$$ and its parent. The two components of $$T\setminus e$$ have total weight $$W(x) \le 3W/4$$ and $$W - W(x) < 3W/4$$. $$\qquad\square$$

It's easy to see that the upper bounds on vertex degree and vertex weight are both necessary. This separator lemma has several variants; I'll mention just a few without proof:

Unweighted tree-separator lemma: For any $$n$$-vertex tree $$T$$ with maximum degree $$3$$, there is an edge $$e$$ such that each component of $$T\setminus e$$ has at most $$2n/3$$ vertices.

Edge-weight tree-separator lemma: For any tree $$T$$ with maximum degree $$3$$ and any weights on the edges of $$T$$ that sum to $$W$$, there is an edge $$e$$ such that both components of $$T\setminus e$$ have total edge weight at most $$2W/3$$.

Vertex tree-separator lemma: For any tree $$T$$ and any weights on the vertices of $$T$$ that sum to $$W$$, there is a vertex $$v$$ such that every component of $$T\setminus v$$ has total weight at most $$W/2$$.

## Fundamental cycle separators

Now let $$\Sigma$$ be a planar triangulation. Assign each face $$f$$ a non-negative weight $$w(f) \le W/4$$, where $$W := \sum_f w(f)$$. (Again, the upper bounds on face degree and face weight are both necessary.)
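The proof of the tree-separator lemma above is constructive, and it is worth recording as code before we continue: the same heavy-child walk, run on the dual spanning tree with face weights, is exactly what drives the fundamental-cycle lemma below. The following Python sketch is ours, not from any reference implementation; it assumes nodes are dicts with a precomputed subtree weight "sub" and a "children" list.

```python
def tree_separator_edge(root, W):
    """Heavy-child walk from the proof of the tree-separator lemma.
    Nodes are dicts with a "children" list (at most two entries) and a
    precomputed subtree weight "sub" (= W(v)).  Assumes the lemma's
    hypotheses -- every individual weight at most W/4 -- which guarantee
    that the walk stops strictly below the root, at a non-leaf vertex."""
    parent, x = None, root
    while True:
        heft = max(x["children"], key=lambda c: c["sub"])
        if heft["sub"] <= W / 4:
            return (parent, x)  # cut the edge between x and its parent
        parent, x = x, heft

# Toy example: a balanced tree with four unit-weight leaves (W = 4).
leaf = lambda: {"sub": 1, "children": []}
u = {"sub": 2, "children": [leaf(), leaf()]}
v = {"sub": 2, "children": [leaf(), leaf()]}
t = {"sub": 4, "children": [u, v]}
assert tree_separator_edge(t, 4) == (t, u)  # each side weighs 2 <= 3W/4
```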
A cycle $$C$$ in $$\Sigma$$ is a balanced separator if the total weight on either side of $$C$$ is at most $$3W/4$$.

Let $$T$$ be an arbitrary spanning tree of $$\Sigma$$. For any non-tree edge $$e$$, the fundamental cycle $$\textsf{cycle}(T, e)$$ is the unique cycle in $$T+e$$, consisting of $$e$$ and the unique path in $$T$$ between the endpoints of $$e$$.

Lemma: At least one fundamental cycle $$\textsf{cycle}(T, e)$$ is a balanced separator for $$\Sigma$$.

Proof: Let $$C^*$$ be the spanning tree of $$\Sigma^*$$ complementary to $$T$$. Because $$\Sigma$$ is a triangulation, every vertex of $$C^*$$ has degree at most $$3$$. Suppose each vertex of $$C^*$$ inherits its weight from the corresponding face of $$\Sigma$$. The tree-separator lemma implies that there is some edge $$e$$ such that each component of $$C^*\setminus e^*$$ has at most $$3/4$$ of the total weight of the vertices of $$C^*$$. The faces on either side of $$\textsf{cycle}(T, e)$$ are exactly the vertices of the two components of $$C^*\setminus e^*$$, so $$\textsf{cycle}(T, e)$$ is a balanced separator. $$\qquad\square$$

We can extend this lemma to the setting where vertices and edges also have weights, in addition to faces. Let $$w\colon V\cup E\cup F \to \mathbb{R}_+$$ be the given weight function. Define a new face-weight function $$w'\colon F\to\mathbb{R}_+$$ by moving the weight of each vertex and edge to some incident face.

Unfortunately, fundamental cycles can be quite long. For any particular map $$\Sigma$$, we can minimize the maximum length of all fundamental cycles by using a breadth-first search tree from an appropriate root vertex as our spanning tree $$T$$, but in the worst case, every balanced fundamental-cycle separator has length $$\Omega(n)$$. For most applications of balanced separators, breadth-first fundamental cycles are usually the best choice in practice; see the detailed experimental analysis by Fox-Epstein et al. [1].

A second easy method for computing separators is to consider the levels of a breadth-first search tree. For the moment, let's assume that the vertices of $$\Sigma$$ are weighted. For each integer $$\ell$$, let $$V_\ell$$ denote the set of vertices $$\ell$$ steps away from the root vertex of $$T$$. By computing a weighted median, we can find a level $$V_m$$ such that the total vertex weight in any component of $$\Sigma\setminus V_m$$ is at most $$W/2$$.

There are two obvious problems with this separator construction. The less serious problem is that the median level $$V_m$$ is not a cycle; it's just a cloud of vertices. Many applications of planar separators don't actually require cycle separators, but most of the applications we'll see in this class do. The more serious problem is size; in the worst case, the set $$V_m$$ could contain a constant fraction of the vertices.

When Richard Lipton and Robert Tarjan introduced planar separators in 1979, they did not consider cycle separators. Rather, they proved that there is always a subset $$S$$ of $$O(\sqrt{n})$$ vertices such that any component of $$\Sigma\setminus S$$ has at most $$2n/3$$ vertices. Lipton and Tarjan's construction combines fundamental cycle separators and BFS-level separators. I will not describe their construction in detail, partly because we really do need cycles, and partly because most of their ideas show up in the next section.

## Cycle separators

Gary Miller was the first to prove that small balanced cycle separators exist, in 1986. The following refinement of Miller's algorithm is based on later proofs by Philip Klein, Shay Mozes, and Christian Sommer (2013) and by Sariel Har-Peled and Amir Nayyeri (2018).
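Before diving into Miller's construction, here is a minimal sketch of the one computational primitive used so far: extracting the fundamental cycle $$\textsf{cycle}(T, xy)$$ from parent pointers by walking both endpoints up to their least common ancestor (the vertex called $$r$$ below). The representation via parent and depth maps is an assumption of ours, not part of the original notes.

```python
def fundamental_cycle(parent, depth, x, y):
    """Vertices of cycle(T, xy), in order: the tree path from x up to
    lca(x, y), then back down to y.  The cycle is closed by the
    non-tree edge xy itself."""
    up_x, up_y = [x], [y]
    while depth[x] > depth[y]:          # balance the depths first ...
        x = parent[x]; up_x.append(x)
    while depth[y] > depth[x]:
        y = parent[y]; up_y.append(y)
    while x != y:                       # ... then climb in lockstep
        x = parent[x]; up_x.append(x)
        y = parent[y]; up_y.append(y)
    return up_x + up_y[-2::-1]          # drop the duplicate LCA from up_y
```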
## Cycle separators

Gary Miller was the first to prove that small balanced cycle separators exist, in 1986. The following refinement of Miller’s algorithm is based on later proofs by Philip Klein, Shay Mozes, and Christian Sommer (2013) and by Sariel Har-Peled and Amir Nayyeri (2018).

Miller’s key idea was to generalize our notion of “level” from vertices to faces.1 As in our earlier setup, let $$\Sigma$$ be a simple planar triangulation with weighted faces, where each face has weight at most $$W/4$$. Let $$T_0$$ be a breadth-first search tree, and suppose the fundamental cycle $$\textsf{cycle}(T_0, xy)$$ is a balanced separator. If this cycle has length $$O(\sqrt{n})$$, we are done, so assume otherwise.

Let $$r$$ denote the least common ancestor of $$x$$ and $$y$$, and let $$T$$ be a breadth-first search tree rooted at $$r$$. The cycle $$\textsf{cycle}(T, xy) = \textsf{cycle}(T_0, xy)$$ is still a balanced separator. For any vertex $$v$$, let $$\textsf{level}(v)$$ denote the breadth-first distance from $$r$$ to $$v$$. Without loss of generality, assume $$\textsf{level}(x) \le \textsf{level}(y)$$. For any face $$f$$, let $$\textsf{level}(f)$$ denote the maximum level among the three vertices of $$f$$. A face at level $$\ell$$ has vertices only at levels $$\ell$$ and $$\ell-1$$. Let $$o$$ denote the outer face of $$\Sigma$$, and without loss of generality, assume that $$L = \textsf{level}(o) = \max_f \textsf{level}(f)$$.

For any integer $$\ell$$, let $$U_{\le\ell}$$ denote the union of all faces with level at most $$\ell$$, and let $$C_\ell$$ be the outer boundary of $$U_{\le\ell}$$. Trivially $$U_{\le 0} = \varnothing$$ and therefore $$C_0 = \varnothing$$. Similarly, for any $$\ell\ge L$$, we have $$U_{\le \ell} = \mathbb{R}^2$$ and therefore $$C_\ell = \varnothing$$.

Lemma:

(a) Every vertex in $$C_\ell$$ has level $$\ell$$.

(b) Every non-empty subgraph $$C_\ell$$ is a simple cycle.

(c) The cycles $$C_\ell$$ are pairwise vertex-disjoint.

(d) The fundamental cycle $$\textsf{cycle}(T, xy)$$ intersects each cycle $$C_\ell$$ in at most two vertices.

Proof: Part (a) follows directly from the definitions. By construction $$C_\ell$$ consists of one or more simple cycles, any two of which share at most one vertex. Let $$C$$ be the simple cycle in $$C_\ell$$ that contains $$r$$ in its interior, and let $$v$$ be any vertex of $$C_\ell\setminus C$$. Let $$u$$ be the second-to-last vertex on the shortest path from $$r$$ to $$v$$. Vertex $$u$$ has level $$\ell-1$$ and therefore does not lie on $$C$$; moreover, because $$v\not\in C$$, vertex $$u$$ cannot lie in the interior of $$C$$. The Jordan curve theorem implies that the shortest path from $$u$$ to $$r$$ crosses $$C$$, but this is impossible, because levels decrease monotonically along that path. We conclude that $$C_\ell = C$$, proving part (b). Part (c) follows immediately from part (a). Finally, the vertices of $$\textsf{cycle}(T, xy)$$ lie on two shortest paths from $$r$$, one to $$x$$ and the other to $$y$$. Levels increase monotonically along any shortest path from $$r$$. Thus, by part (a), the shortest paths from $$r$$ to $$x$$ and $$y$$ each share at most one vertex with $$C_\ell$$, proving part (d). $$\qquad\square$$

Let $$m$$ be the largest integer such that the total weight of all faces inside $$C_m$$ is at most $$W/2$$. Then the total weight of the faces outside $$C_{m+1}$$ is also at most $$W/2$$. If either of these cycles is a balanced cycle separator of length $$O(\sqrt{n})$$, we are done, so assume otherwise. We choose two level cycles $$C^-$$ and $$C^+$$ as follows.2

• Consider the set of cycles $$\mathcal{C}^- = \{C_\ell \mid m-\sqrt{n} < \ell \le m\}$$. These $$\sqrt{n}$$ cycles contain at most $$n$$ vertices in total, and therefore some cycle $$C^-$$ in this set must have length less than $$\sqrt{n}$$.
By construction, the total weight of all faces inside $$C^-$$ is at most $$W/2$$.

• Similarly, consider the set $$\mathcal{C}^+ = \{C_\ell \mid m < \ell \le m + \sqrt{n}\}$$. These $$\sqrt{n}$$ cycles contain at most $$n$$ vertices in total, and therefore some cycle $$C^+$$ in this set must have length less than $$\sqrt{n}$$. By construction, the total weight of all faces outside $$C^+$$ is at most $$W/2$$.

Say $$C^- = C_{\ell^-}$$ and $$C^+ = C_{\ell^+}$$. Let $$\pi_x$$ denote the portion of the shortest path from $$r$$ to $$x$$ with levels between $$\ell^-$$ and $$\ell^+$$, and define $$\pi_y$$ similarly. By construction, each of these paths has length at most $$2\sqrt{n}$$. Let $$\Theta$$ denote the graph $$C^- \cup C^+ \cup \pi_x \cup \pi_y$$, as shown in the figure below. The graph $$\Theta$$ has $$O(\sqrt{n})$$ vertices and edges. We label the four faces of $$\Theta$$ as follows:

• $$A$$ is the interior of $$C^-$$.

• $$B$$ is the exterior of $$C^+$$.

• $$C$$ is the region between $$C^+$$ and $$C^-$$ and outside $$\textsf{cycle}(T, xy)$$.

• $$D$$ is the region between $$C^+$$ and $$C^-$$ and inside $$\textsf{cycle}(T, xy)$$.

Let $$W(S)$$ denote the total weight of the set of faces $$S$$. By construction we have \begin{aligned} W(A) & \le W/2, &&& W(B) & \le W/2, &&& W(C) & \le 3W/4, &&& W(D) & \le 3W/4. \end{aligned} At least one of these four regions contains total weight at least $$W/4$$; the boundary of that region is a balanced cycle separator of length $$O(\sqrt{n})$$.

Most divide-and-conquer algorithms that use cycle separators do not delete the separator vertices to obtain smaller subgraphs. Rather, the algorithms slice the planar map along the cycle separator to obtain smaller maps, called pieces of the original map, one containing the faces inside the cycle and the other containing the faces outside. Both pieces contain a copy of the $$O(\sqrt{n})$$ vertices and edges of the separator. Thus, the total size of all subproblems is larger at deeper levels of the recursion tree, but because that increase is sublinear, we can ignore it when solving the resulting divide-and-conquer recurrences.

## Good $$r$$-divisions and subdivision hierarchies

An $$r$$-division is a decomposition of a planar map into $$n/r$$ pieces, each of which has $$O(r)$$ vertices and $$O(\sqrt{r})$$ boundary vertices (shared with other pieces). An $$r$$-division is good if each piece is a disk with $$O(1)$$ holes. For any $$r$$, we can construct a good $$r$$-division by recursively slicing the input triangulation along balanced cycle separators. In fact, this subdivision strategy computes a subdivision hierarchy that includes good $$r$$-divisions for arbitrary values of $$r$$.

In each recursive call, we are given a region $$R$$, which is a connected subcomplex of the original triangulation $$\Sigma$$. Any face of the region $$R$$ that is not a face of $$\Sigma$$ is called a hole; any vertex of $$R$$ that is incident to a hole is a boundary vertex of $$R$$. To split $$R$$ into two smaller regions, we first triangulate $$R$$ by inserting an artificial vertex $$v_h$$ inside each hole $$h$$, along with artificial edges connecting $$v_h$$ to each corner of $$h$$. We then compute a cycle separator in the resulting triangulation $$R’$$, splitting it into two smaller triangulated regions $$R’_0$$ and $$R’_1$$. Finally, we delete the artificial vertices and edges from $$R’_0$$ and $$R’_1$$ to get the final regions $$R_0$$ and $$R_1$$, as sketched below.
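Here is a schematic sketch of this recursion (mine; the region interface and the `cycle_separator_split` helper are hypothetical stand-ins for the machinery above, not real library calls). The depth-dependent choice of vertex weights that it uses is explained in the next few paragraphs.

    LEAF_CONSTANT = 4   # stands in for the O(r) leaf-size constant

    def build_r_division(region, r, depth=0):
        # region: a connected subcomplex, with hypothetical helper methods.
        if region.num_vertices() <= LEAF_CONSTANT * r:
            return [region]
        phase = depth % 3
        if phase == 0:      # split natural vertices evenly
            w = lambda v: 0.0 if v.artificial else 1.0
        elif phase == 1:    # split boundary vertices evenly
            w = lambda v: 1.0 if v.on_boundary else 0.0
        else:               # split holes (their artificial vertices) evenly
            w = lambda v: 1.0 if v.artificial else 0.0
        # Triangulate each hole with an artificial vertex, compute a balanced
        # cycle separator for the weights w, and slice the region along it.
        left, right = cycle_separator_split(region, w)
        return (build_r_division(left, r, depth + 1)
                + build_r_division(right, r, depth + 1))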
To simultaneously bound the number of vertices, the number of boundary vertices, and the number of holes in the final regions, we cycle through three different vertex weights at different levels of recursion. Specifically, at recursion depth $$l$$, we weight the vertices as follows:

• If $$l\bmod 3 = 0$$, we give natural vertices weight $$1$$ and artificial vertices weight $$0$$, so that the separator splits natural vertices evenly.

• If $$l\bmod 3 = 1$$, we give boundary vertices weight $$1$$ and all other vertices weight $$0$$, so that the separator splits boundary vertices evenly.

• If $$l\bmod 3 = 2$$, we give artificial vertices weight $$1$$ and natural vertices weight $$0$$, so that the separator splits holes evenly.

Let $$T_r(n, b, h)$$ denote the time to compute a good $$r$$-division for a region with $$n$$ vertices, $$b$$ boundary vertices, and $$h$$ holes. Expanding out three levels of recursion, we have $T_r(n, b, h) = O(n + h) + \sum_{i=1}^8 T_r(n_i, b_i, h_i),$ where \begin{aligned} \sum_{i=1}^8 n_i &\le n + O(\sqrt{n}) & \sum_{i=1}^8 b_i &\le b + O(\sqrt{n}) & \sum_{i=1}^8 h_i &\le h + O(1) \\ \max_i n_i &\le 3n/4 + O(\sqrt{n}) & \max_i b_i &\le 3b/4 + O(\sqrt{n}) & \max_i h_i &\le 3h/4 + O(1) \end{aligned} for suitable absolute big-Oh constants. The recursion stops when the number of vertices in each piece is $$O(r)$$. Every leaf in the recursion tree has depth at most $$O(\log (n/r))$$, and there are at most $$O(n/r)$$ such leaves. One can prove by induction that every recursive subproblem with $$n'$$ vertices has at most $$O(\sqrt{n'})$$ boundary vertices and at most $$O(1)$$ holes; in particular, each leaf piece has $$O(r)$$ vertices and $$O(\sqrt{r})$$ boundary vertices, so we end with a good $$r$$-division. We perform $$O(n)$$ work at every level of recursion, so the overall running time of the algorithm is $$T_r(n, 0, 0) = O(n \log(n/r))$$. In particular, if $$r = O(1)$$, the entire algorithm runs in $$O(n\log n)$$ time.

Theorem: Given a planar triangulation $$\Sigma$$ with $$n$$ vertices, we can compute a recursive subdivision of $$\Sigma$$, containing good $$r$$-divisions of $$\Sigma$$ for every $$r \ge r_0$$, in $$O(n \log (n/r_0))$$ time.

Corollary: Given a planar triangulation $$\Sigma$$ with $$n$$ vertices and an integer $$r$$, we can compute a good $$r$$-division of $$\Sigma$$ in $$O(n \log (n/r))$$ time.

Some applications of separators actually require a nested sequence of good $$r$$-divisions with exponentially decreasing values of $$r$$. For any vector $$\vec{r} = (r_1, r_2, \dots, r_t)$$ where $$r_i < r_{i-1}/\alpha$$ for some suitable constant $$\alpha$$, a good $$\vec{r}$$-division of a planar map $$\Sigma$$ consists of a good $$r_1$$-division $$\mathcal{R}_1$$ of $$\Sigma$$ and (unless $$t=1$$) a good $$(r_2, \dots, r_t)$$-division of each piece of $$\mathcal{R}_1$$. We can easily extract a good $$\vec{r}$$-division from any good subdivision hierarchy in $$O(n)$$ time.

Corollary: Given a planar triangulation $$\Sigma$$ with $$n$$ vertices, and any exponentially decreasing vector $$\vec{r} = (r_1, r_2, \dots, r_t)$$, we can construct a good $$\vec{r}$$-division of $$\Sigma$$ in $$O(n \log (n/r_t))$$ time.

Greg Frederickson introduced $$r$$-divisions (based on non-cycle separators) in 1987 [4]. Sergio Cabello proposed a weaker definition of good $$r$$-division in 2006 that only requires the average number of holes per piece to be constant, along with a similar two-phase divide-and-conquer algorithm to construct them.
The current definition of good $$r$$-division and the three-phase algorithm I’ve just described were proposed by Philip Klein, Shay Mozes, and Christian Sommer in 2013. This is not the fastest algorithm known for computing good $$r$$-divisions or subdivision hierarchies. In 1995, Michael Goodrich described an $$O(n)$$-time algorithm to construct subdivision hierarchies based on Lipton-Tarjan separators; Klein, Mozes, and Sommer described a similar algorithm to compute a good subdivision hierarchy in $$O(n)$$ time, in the same 2013 paper. Both algorithms use dynamic forest data structures (to maintain tree-cotree decompositions of the pieces, identify fundamental cycle separators, compute least common ancestors, and compute the weight enclosed by short cycles), along with several other data structures.

In the next lecture we’ll see how to use good $$r$$-divisions to compute shortest paths quickly.

## References

1. Sergio Cabello. Many distances in planar graphs. Proc. 17th Ann. ACM-SIAM Symp. Discrete Algorithms, 1213–1220, 2006.
2. Sergio Cabello. Many distances in planar graphs. Algorithmica 62(1–2):361–381, 2010.
3. Eli Fox-Epstein, Shay Mozes, Phitchaya Mangpo Phothilimthana, and Christian Sommer. Short and simple cycle separators in planar graphs. ACM J. Exp. Algorithmics 21(1):2.2:1–2.2:24, 2016.
4. Greg N. Frederickson. Fast algorithms for shortest paths in planar graphs with applications. SIAM J. Comput. 16(6):1004–1022, 1987.
5. Michael T. Goodrich. Planar separators and parallel polygon triangulation. J. Comput. Syst. Sci. 51(3):374–389, 1995.
6. Sariel Har-Peled and Amir Nayyeri. A simple algorithm for computing a cycle separator. Preprint, September 2017. arXiv:1709.08122.
7. Camille Jordan. Sur les assemblages de lignes. J. Reine Angew. Math. 70:185–190, 1869.
8. Philip N. Klein, Shay Mozes, and Christian Sommer. Structured recursive separator decompositions for planar graphs in linear time. Proc. 45th Ann. ACM Symp. Theory Comput., 505–514, 2013. arXiv:1208.2223.
9. Richard J. Lipton and Robert E. Tarjan. A separator theorem for planar graphs. SIAM J. Applied Math. 36(2):177–189, 1979.
10. Richard J. Lipton and Robert E. Tarjan. Applications of a planar separator theorem. SIAM J. Comput. 9:615–627, 1980.
11. Gary L. Miller. Finding small simple cycle separators for 2-connected planar graphs. J. Comput. System Sci. 32(3):265–279, 1986.

## Aptly Named Sir Not

• Cycle separators via Koebe-Andreev circle packing.
• Nested dissection to compute Tutte’s spring embedding in $$O(n^{3/2})$$ time.
• Details of $$r$$-divisions (and recursive $$r$$-divisions) in $$O(n)$$ time.

1. Fox-Epstein et al. [3] describe an arguably simpler algorithm that uses a dual breadth-first search tree rooted at the outer face to define face levels, instead of a primal breadth-first search tree.↩︎

2. I am ignoring two extreme cases. First, if $$m < \sqrt{n}$$, we define $$C^- = \varnothing$$; similarly, if $$m > \textsf{level}(y) - \sqrt{n}$$, we define $$C^+ = \varnothing$$. Handling these special cases in the rest of the construction is straightforward.↩︎
2023-03-21 17:55:56
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 1, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.9436249136924744, "perplexity": 324.48535853966865}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2023-14/segments/1679296943704.21/warc/CC-MAIN-20230321162614-20230321192614-00061.warc.gz"}
https://math.stackexchange.com/questions/880980/why-these-two-series-are-convergent-or-divergent
# Why these two series are convergent or divergent?

I do not understand why $$\sum^{\infty}_{k=1} z_k = \sum^{\infty}_{k=1} \frac1k$$ is divergent but the other series $$\sum^{\infty}_{k=1} z_k = \sum^{\infty}_{k=1} \frac{(-1)^{k+1}}k$$ is convergent. In both cases $\displaystyle\lim_{n \to +\infty} z_{n} = 0$. Could you explain, please? I would prefer to come up with an all-inclusive test of convergence that accepts the second but rejects the first. Thanks.

• It diverges because of this. The fact that $\lim z_n = 0$ is only useful in that we can't immediately conclude divergence in either case. – user61527 Jul 28, 2014 at 22:04
• Yes, to keep generality, let's assume we're in the complex plane. Jul 28, 2014 at 22:06
• Complex plane or real, it doesn't matter: the first diverges because the little bits after a long while add up to a lot, while cancellation keeps the partial sums small in the second series. Jul 28, 2014 at 22:11
• For the second series, use the Leibniz criterion. Jul 28, 2014 at 22:11

## 4 Answers

If the series $\sum_{n=1}^{\infty} \frac{1}{n}$ converged, then by the Cauchy criterion there would be an $n_0 \in \mathbb{N}$ such that $\forall m>n \geq n_0$: $$\frac{1}{n+1}+\frac{1}{n+2}+ \dots +\frac{1}{m}<\frac{1}{2}$$ Specifically: $$\frac{1}{n+1}+\frac{1}{n+2}+\dots+\frac{1}{n+n}<\frac{1}{2}$$ But: $$\frac{1}{n+1}+\frac{1}{n+2}+\dots+\frac{1}{n+n} \geq \frac{1}{n+n}+\frac{1}{n+n}+\dots+\frac{1}{n+n} =\frac{n}{2n}=\frac{1}{2}$$ Therefore, the series diverges.

EDIT: To conclude that the series $\sum_{n=1}^{\infty} \frac{(-1)^{n+1}}{n}$ converges, we use the Dirichlet criterion: Let $a_n, b_n$ be two sequences of real numbers such that

1. The sequence of the partial sums of the series $\sum_{n=1}^{\infty} a_n$ is bounded
2. $b_n$ is decreasing and converges to $0$

Then the series $\sum_{n=1}^{\infty} a_nb_n$ converges. In your case, take: $$b_n=\frac{1}{n} \text{ and } a_n=(-1)^{n+1}$$

The approach I would use would be to expand the series and see whether or not a lower or upper limit can be placed. For $\frac1k$, consider that: $$\frac12 + \frac13 + \frac14 + \frac15 + \frac16 + \frac17 + \frac18 + \frac19 + \frac1{10} + \frac1{11} + \frac1{12} + ...$$ ...is certainly a larger sum than: $$\frac12 + \frac14 + \frac14 + \frac18 + \frac18 + \frac18 + \frac18 + \frac1{16} + \frac1{16} + \frac1{16} + \frac1{16} + ...$$ ...or: $$\frac12 + \frac12 + \frac12 + ...$$ So think of it this way. No matter how many $\frac12$'s you want to add together, you'll eventually get there. If you want $n$ $\frac12$'s to add together, you will get there by (around) the $2^n$-th term. So any number you come up with can be topped by going further into the series, which means the series diverges to infinity: you can't place any finite upper bound on it that won't be surpassed.

For $\pm\frac1k$ with the signs alternating, again we look at the partially expanded series: $$1 - \frac12 + \frac13 - \frac14 + \frac15 - \frac16 + \frac17 - \frac18 + \frac19 - \frac1{10} + \frac1{11} - \frac1{12} + ...$$ If we combine each pair of terms, we get the series: $$\frac12 + \frac1{12} + \frac1{30} + \frac1{56} + ...$$ This is clearly bounded below because every term is positive. To show convergence we have to find some finite upper limit this can't ever exceed. Doing this directly is tricky, but we can reduce it to a common problem we already know the answer to.
The series is equivalent to: $$\frac1{1\cdot 2} + \frac1{3\cdot 4} + \frac1{5\cdot 6} + \frac1{7\cdot 8} + ...$$ ...which is obviously less than: $$\frac1{1^2} + \frac1{2^2} + \frac1{3^2} + \frac1{4^2} + ...$$ ...which, you probably already know, converges to a finite value.

The moral of all this writing I'm doing is that you should try some basic mathematical logic first to see where it can take you. None of what I did above required any sort of formula (with the exception of the very last step, which relies on a common proof you can look up). It just required me to start expanding the series, group terms, and see what can happen. You have to be careful when doing this with an infinite series, but the logic holds for the above examples. The best part about being able to do this logically as opposed to formulaically is that logic can be applied to any problem of this sort, while formulas don't always work or cover what you need--and plus, why use a formula if you don't understand why that formula works in the first place? This is common in computer science in the analysis of recursive running times: using the "master method" formula won't work in all cases, but expanding out the series and using logic will always work.

• For information about typesetting mathematics here, please refer to this page. Jul 29, 2014 at 5:43
• @ArthurFischer Thank you! Jul 29, 2014 at 11:34
• I believe it is fixed, but hopefully I will get feedback in the future in the event I'm not formatting the math correctly. Thanks again for the link. Jul 29, 2014 at 12:16

OP wants a single test that can distinguish convergence and divergence of the 2 aforementioned series. So far all the answers used Cauchy's convergence test for the first series and are essentially correct, albeit the presentations are questionable. Actually Cauchy's test can also be applied to the second series with some extra handling. Please refer to the Wikipedia article for the variables used here.

We need to prove that for an arbitrary positive real number $\epsilon$ there exists a natural number $N$ so that for any $n \gt N$ and $p \ge 1$, $$-\epsilon < \sum\limits_{n+1}^{n+p} \frac{(-1)^{k+1}}{k} < \epsilon$$ We pick $N=\lceil\frac{1}{\epsilon}\rceil$ (the bracket means ceiling), so that $\frac{1}{n+1}\le\frac{1}{N+1}<\epsilon$ for all $n \ge N$. Four cases need to be considered, depending on the parity of $n$ and $p$.

1. $n$ even, $p$ even: (implying $n+1$ is odd and $n+p$ is even) With different groupings of the terms, $$\sum\nolimits = (\frac{1}{n+1}-\frac{1}{n+2}) + {...} + (\frac{1}{n+p-1}-\frac{1}{n+p}) > 0 > -\epsilon$$ $$\sum\nolimits = \frac{1}{n+1} - (\frac{1}{n+2}-\frac{1}{n+3}) - {...} -\frac{1}{n+p} < \frac{1}{n+1} < \epsilon$$

2. $n$ even, $p$ odd: (implying both $n+1$ and $n+p$ are odd) $$\sum\nolimits = (\frac{1}{n+1}-\frac{1}{n+2}) + {...} + (\frac{1}{n+p-2}-\frac{1}{n+p-1})+\frac{1}{n+p} > 0 > -\epsilon$$ $$\sum\nolimits = \frac{1}{n+1} - (\frac{1}{n+2}-\frac{1}{n+3}) - {...} - (\frac{1}{n+p-1}-\frac{1}{n+p}) < \frac{1}{n+1} < \epsilon$$

Other cases are left as an exercise to the OP; the presentation is becoming a bit tedious here.
We do not answer the specific question, but instead look at two related series, $$1+\frac{1}{2}+\frac{1}{2}+\frac{1}{4}+\frac{1}{4}+\frac{1}{4}+\frac{1}{4}+\frac{1}{8}+\frac{1}{8}+\frac{1}{8}+\frac{1}{8}+\frac{1}{8}+\frac{1}{8}+\frac{1}{8}+\frac{1}{8}+\frac{1}{16}+\cdots\tag{1}$$ and $$1-\frac{1}{2}+\frac{1}{2}-\frac{1}{4}+\frac{1}{4}-\frac{1}{4}+\frac{1}{4}-\frac{1}{8}+\frac{1}{8}-\frac{1}{8}+\frac{1}{8}-\frac{1}{8}+\frac{1}{8}-\frac{1}{8}+\frac{1}{8}-\frac{1}{16}+\cdots\tag{2}$$ Series (1) diverges. For the sum of the first $3$ terms is $2$, the sum of the first $7$ is $3$, the sum of the first $15$ is $4$, the sum of the first $31$ is $5$, and so on. The partial sums "blow up," albeit with a great deal of reluctance. It takes an awful lot of terms to get a sum of $100$. Series (2) converges. The partial sums are $1, \frac{1}{2}, 1, \frac{3}{4}, 1, \frac{3}{4}, 1,\frac{7}{8}, 1,\frac{7}{8}$, and so on. It is clear that the partial sums approach $1$.
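As a quick numerical illustration of the contrast (a script of mine, not part of the thread): partial sums of $\sum 1/k$ drift upward like $\log N$, while the alternating sums settle near $\log 2 \approx 0.6931$.

    import math

    for N in (10, 100, 1000, 10000, 100000):
        harmonic = sum(1.0 / k for k in range(1, N + 1))
        alternating = sum((-1) ** (k + 1) / k for k in range(1, N + 1))
        print(f"N={N:>6}  harmonic={harmonic:8.4f}  alternating={alternating:.6f}")

    # The harmonic column keeps growing (roughly log N + 0.5772...),
    # while the alternating column converges to log(2).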
2022-06-29 10:47:28
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 1, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.8625534176826477, "perplexity": 258.2424459872029}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": false}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2022-27/segments/1656103626162.35/warc/CC-MAIN-20220629084939-20220629114939-00134.warc.gz"}
http://dlmf.nist.gov/10.61
# §10.61(i) Definitions

Throughout §§10.61–10.71 it is assumed that $x\geq 0$, $\nu\in\mathbb{R}$, and $n$ is a nonnegative integer.

10.61.1 $\mathrm{ber}_{\nu}x+i\,\mathrm{bei}_{\nu}x = J_{\nu}\!\left(xe^{3\pi i/4}\right) = e^{\nu\pi i}J_{\nu}\!\left(xe^{-\pi i/4}\right) = e^{\nu\pi i/2}I_{\nu}\!\left(xe^{\pi i/4}\right) = e^{3\nu\pi i/2}I_{\nu}\!\left(xe^{-3\pi i/4}\right),$

where $J_{\nu}$ is the Bessel function of the first kind and $I_{\nu}$ is the modified Bessel function; this defines the Kelvin functions $\mathrm{ber}_{\nu}(x)$ and $\mathrm{bei}_{\nu}(x)$.

10.61.2 $\mathrm{ker}_{\nu}x+i\,\mathrm{kei}_{\nu}x = e^{-\nu\pi i/2}K_{\nu}\!\left(xe^{\pi i/4}\right) = \tfrac{1}{2}\pi i\,H^{(1)}_{\nu}\!\left(xe^{3\pi i/4}\right) = -\tfrac{1}{2}\pi i\,e^{-\nu\pi i}H^{(2)}_{\nu}\!\left(xe^{-\pi i/4}\right),$

where $K_{\nu}$ is the modified Bessel function and $H^{(1)}_{\nu}$, $H^{(2)}_{\nu}$ are the Bessel functions of the third kind (Hankel functions); this defines the Kelvin functions $\mathrm{ker}_{\nu}(x)$ and $\mathrm{kei}_{\nu}(x)$.

When $\nu=0$, suffixes on $\mathrm{ber}$, $\mathrm{bei}$, $\mathrm{ker}$, and $\mathrm{kei}$ are usually suppressed. Most properties of $\mathrm{ber}_{\nu}x$, $\mathrm{bei}_{\nu}x$, $\mathrm{ker}_{\nu}x$, and $\mathrm{kei}_{\nu}x$ follow straightforwardly from the above definitions and results given in preceding sections of this chapter.

# §10.61(ii) Differential Equations

10.61.3 $x^{2}\frac{d^{2}w}{dx^{2}}+x\frac{dw}{dx}-(ix^{2}+\nu^{2})w=0,$ with solutions $w=\mathrm{ber}_{\nu}x+i\,\mathrm{bei}_{\nu}x$, $\mathrm{ber}_{-\nu}x+i\,\mathrm{bei}_{-\nu}x$, $\mathrm{ker}_{\nu}x+i\,\mathrm{kei}_{\nu}x$, and $\mathrm{ker}_{-\nu}x+i\,\mathrm{kei}_{-\nu}x$.

10.61.4 $x^{4}\frac{d^{4}w}{dx^{4}}+2x^{3}\frac{d^{3}w}{dx^{3}}-(1+2\nu^{2})\left(x^{2}\frac{d^{2}w}{dx^{2}}-x\frac{dw}{dx}\right)+(\nu^{4}-4\nu^{2}+x^{4})w=0,$ with solutions $w=\mathrm{ber}_{\pm\nu}x$, $\mathrm{bei}_{\pm\nu}x$, $\mathrm{ker}_{\pm\nu}x$, $\mathrm{kei}_{\pm\nu}x$.

# §10.61(iii) Reflection Formulas for Arguments

In general, Kelvin functions have a branch point at $x=0$, and functions with arguments $xe^{\pm\pi i}$ are complex. The branch point is absent, however, in the case of $\mathrm{ber}_{\nu}$ and $\mathrm{bei}_{\nu}$ when $\nu$ is an integer. In particular,

10.61.5 $\mathrm{ber}_{n}(-x)=(-1)^{n}\mathrm{ber}_{n}x,\qquad \mathrm{bei}_{n}(-x)=(-1)^{n}\mathrm{bei}_{n}x.$

# §10.61(iv) Reflection Formulas for Orders

10.61.6 $\mathrm{ber}_{-\nu}x=\cos(\nu\pi)\,\mathrm{ber}_{\nu}x+\sin(\nu\pi)\,\mathrm{bei}_{\nu}x+(2/\pi)\sin(\nu\pi)\,\mathrm{ker}_{\nu}x,$ $\mathrm{bei}_{-\nu}x=-\sin(\nu\pi)\,\mathrm{ber}_{\nu}x+\cos(\nu\pi)\,\mathrm{bei}_{\nu}x+(2/\pi)\sin(\nu\pi)\,\mathrm{kei}_{\nu}x.$

10.61.7 $\mathrm{ker}_{-\nu}x=\cos(\nu\pi)\,\mathrm{ker}_{\nu}x-\sin(\nu\pi)\,\mathrm{kei}_{\nu}x,$ $\mathrm{kei}_{-\nu}x=\sin(\nu\pi)\,\mathrm{ker}_{\nu}x+\cos(\nu\pi)\,\mathrm{kei}_{\nu}x.$

10.61.8 $\mathrm{ber}_{-n}x=(-1)^{n}\mathrm{ber}_{n}x,\quad \mathrm{bei}_{-n}x=(-1)^{n}\mathrm{bei}_{n}x,\quad \mathrm{ker}_{-n}x=(-1)^{n}\mathrm{ker}_{n}x,\quad \mathrm{kei}_{-n}x=(-1)^{n}\mathrm{kei}_{n}x.$

# §10.61(v) Orders $\pm\tfrac{1}{2}$

10.61.9 $\mathrm{ber}_{\frac{1}{2}}\!\left(x\sqrt{2}\right)=\frac{2^{-3/4}}{\sqrt{\pi x}}\left(e^{x}\cos\!\left(x+\frac{\pi}{8}\right)-e^{-x}\cos\!\left(x-\frac{\pi}{8}\right)\right),\qquad \mathrm{bei}_{\frac{1}{2}}\!\left(x\sqrt{2}\right)=\frac{2^{-3/4}}{\sqrt{\pi x}}\left(e^{x}\sin\!\left(x+\frac{\pi}{8}\right)+e^{-x}\sin\!\left(x-\frac{\pi}{8}\right)\right).$

10.61.10 $\mathrm{ber}_{-\frac{1}{2}}\!\left(x\sqrt{2}\right)=\frac{2^{-3/4}}{\sqrt{\pi x}}\left(e^{x}\sin\!\left(x+\frac{\pi}{8}\right)-e^{-x}\sin\!\left(x-\frac{\pi}{8}\right)\right),\qquad \mathrm{bei}_{-\frac{1}{2}}\!\left(x\sqrt{2}\right)=-\frac{2^{-3/4}}{\sqrt{\pi x}}\left(e^{x}\cos\!\left(x+\frac{\pi}{8}\right)+e^{-x}\cos\!\left(x-\frac{\pi}{8}\right)\right).$

10.61.11 $\mathrm{ker}_{\frac{1}{2}}\!\left(x\sqrt{2}\right)=\mathrm{kei}_{-\frac{1}{2}}\!\left(x\sqrt{2}\right)=-2^{-3/4}\sqrt{\frac{\pi}{x}}\,e^{-x}\sin\!\left(x-\frac{\pi}{8}\right),$

10.61.12 $\mathrm{kei}_{\frac{1}{2}}\!\left(x\sqrt{2}\right)=-\mathrm{ker}_{-\frac{1}{2}}\!\left(x\sqrt{2}\right)=-2^{-3/4}\sqrt{\frac{\pi}{x}}\,e^{-x}\cos\!\left(x-\frac{\pi}{8}\right).$
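As a numerical spot-check of definition 10.61.1 (my addition, not part of the DLMF text), mpmath provides Kelvin functions of general order as ber(n, x), bei(n, x), and so on, alongside Bessel functions of complex argument:

    from mpmath import mp, ber, bei, besselj, exp, pi, mpc

    mp.dps = 25
    nu, x = 1.5, 2.3
    lhs = mpc(ber(nu, x), bei(nu, x))        # ber_nu(x) + i * bei_nu(x)
    rhs = besselj(nu, x * exp(3j * pi / 4))  # J_nu(x * e^{3*pi*i/4})
    print(lhs)
    print(rhs)  # agrees with lhs to working precision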
2014-10-02 12:31:06
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 137, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.9307718276977539, "perplexity": 2658.1264007712916}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2014-41/segments/1412037663754.0/warc/CC-MAIN-20140930004103-00295-ip-10-234-18-248.ec2.internal.warc.gz"}
https://math.stackexchange.com/questions/3054626/any-finite-dimensional-normed-linear-space-over-a-complete-field-is-complete?noredirect=1
# Any finite dimensional normed linear space over a complete field is complete.

Prove that any finite dimensional normed linear space over a complete field is complete.

After several comments and corrections, I present the correct proof in the answer section.

• Looks fine to me. I would probably emphasize why we can choose $N$ uniformly in $i$ (after the line when you say $\{\alpha_{i,r}\}^{n}_{i=1,r\in \Bbb{N}}$ is Cauchy) because this is the key place where the finite dimensionality is required. – Pratyush Sarkar Dec 28 '18 at 6:41
• @Pratyush Sarkar: Thanks a lot for your comment! Can you please elaborate on the use of finite dimension? – Omojola Micheal Dec 28 '18 at 6:43
• You are using the fact that $\alpha_{i,r}\to \alpha_{i}\in \Bbb{R}\;\text{or}\;\Bbb{C},\;\text{as}\;r\to \infty$. But there are multiple sequences here -- one for each $i$. For any fixed $i$ and $\epsilon'$, you have a corresponding $N$ (depending on $\epsilon'$ and $i$). But a priori these $N$ may be different for each $i$ (we can write $N_i$ to make the dependence explicit). You want to choose $N = \max\{N_i: i = 1, 2, \dotsc, n\}$ so that the same $N$ works for all $i$. Does that make sense? This also illuminates why things can go wrong in infinite dimensions. – Pratyush Sarkar Dec 28 '18 at 6:57
• @Pratyush Sarkar: Oh, now I get you! That makes sense! Is it fine, now? I made some edits! I believe it should be. – Omojola Micheal Dec 28 '18 at 7:41
• I removed some unnecessary parts which you forgot to delete. Looks fine now. – Pratyush Sarkar Dec 29 '18 at 1:01

Let $$E$$ be any finite dimensional normed linear space over a complete field, $$\Bbb{R}$$ or $$\Bbb{C},$$ say. Suppose $$\dim E=n\geq 1,$$ and let $$\{e_i\}^{n}_{i=1}$$ be a basis for $$E.$$ Then, for each $$x\in E$$ there exist scalars $$\{\alpha_i\}^{n}_{i=1}$$ such that \begin{align} x= \sum^{n}_{i=1} \alpha_i e_i .\end{align} Suppose $$\{x_r\}_{r\in \Bbb{N}}$$ is Cauchy in $$E$$ with respect to the norm $$\|\cdot\|$$, say $$x_r=\sum^{n}_{i=1}\alpha_{i,r}e_i$$, and let $$\epsilon'>0.$$ Then, there exists $$N$$ such that for all $$s\geq r\geq N,$$ \begin{align} \|x_r-x_s\|<\epsilon'.\end{align} Now, $$\|\cdot\|_1$$ defined by $$\|x\|_1=\sum^{n}_{i=1} |\alpha_i|$$ is a norm on $$E$$, and since all norms on a finite dimensional linear space are equivalent, we have $$\|\cdot\|_1\sim \|\cdot\|$$; i.e., there exist $$\gamma,\beta>0$$ such that \begin{align} \gamma\|x\|_1\leq\|x\|\leq \beta\|x\|_1,\;\forall\,x\in E.\end{align} Then, \begin{align}\gamma|\alpha_{i,r}-\alpha_{i,s}|\leq\gamma\sum^{n}_{i=1}|\alpha_{i,r}-\alpha_{i,s}|=\gamma\|x_r-x_s\|_1\leq\|x_r-x_s\|<\epsilon',\;s\geq r\geq N,\end{align} and so, for each fixed $$i$$, the sequence $$\{\alpha_{i,r}\}_{r\in \Bbb{N}}$$ is Cauchy in $$\Bbb{R}$$ or $$\Bbb{C}$$. By completeness, $$\alpha_{i,r}\to \alpha_{i}\in \Bbb{R}\;\text{or}\;\Bbb{C}$$ as $$r\to \infty$$.
This implies that for each $$i\in\{1,2,\cdots,n\},$$ there exists $$N_i:=N(i,\epsilon')$$ such that \begin{align}|\alpha_{i,r}-\alpha_{i}|<\epsilon',\;r\geq N_i.\end{align} Taking $$M=\max\{N_i:1\leq i\leq n\},$$ we have \begin{align}|\alpha_{i,r}-\alpha_{i}|<\epsilon',\;r\geq M.\end{align} Now set $$x:=\sum^{n}_{i=1}\alpha_i e_i\in E.$$ Let $$\epsilon>0$$; then for $$\epsilon'=\dfrac{\epsilon}{n\beta},$$ there exists $$M$$ such that \begin{align}|\alpha_{i}-\alpha_{i,s}|\leq\dfrac{\epsilon}{n\beta},\;s\geq M.\end{align} Taking sums, we have \begin{align}\|x-x_s\|\leq\beta\|x-x_s\|_1=\beta\sum^{n}_{i=1}|\alpha_{i}-\alpha_{i,s}|\leq\sum^{n}_{i=1}\dfrac{\epsilon}{n}=\epsilon,\;s\geq M.\end{align} Hence, \begin{align}\|x_s-x\|\leq\epsilon,\;s\geq M,\end{align} so $$x_s\to x\in E$$. Every Cauchy sequence in $$E$$ therefore converges in $$E$$, which means $$E$$ is a complete finite dimensional normed linear space, and we are done!
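A small numerical illustration of the argument (my sketch, not part of the proof): for a norm on $$\Bbb{R}^3$$ induced by an invertible matrix, coordinatewise error and norm error shrink together, exactly as the equivalence $$\gamma\|\cdot\|_1 \le \|\cdot\| \le \beta\|\cdot\|_1$$ predicts.

    import numpy as np

    A = np.array([[2.0, 1.0, 0.0],
                  [0.0, 3.0, 1.0],
                  [1.0, 0.0, 2.0]])   # invertible, so ||x|| := |Ax|_2 is a norm
    norm = lambda x: np.linalg.norm(A @ x)

    x_limit = np.array([1.0, -2.0, 0.5])
    rng = np.random.default_rng(0)
    for r in (1, 5, 20, 60):
        x_r = x_limit + 0.5 ** r * rng.normal(size=3)   # sequence converging to x_limit
        coord_err = np.abs(x_r - x_limit).max()
        print(r, coord_err, norm(x_r - x_limit))
    # Both error columns shrink together: the coordinates converge
    # exactly when the sequence converges in the norm.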
2021-03-04 21:26:07
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 41, "wp-katex-eq": 0, "align": 9, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.9528864622116089, "perplexity": 236.40819872580428}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 20, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2021-10/segments/1614178369523.73/warc/CC-MAIN-20210304205238-20210304235238-00050.warc.gz"}
https://help.altair.com/winprop/topics/winprop/getting_started_guide/GS3/example3_folder_output_results_dpm_winprop_t.htm
# Specifying the Output Folder for the Prediction Results Specify the folder for the prediction results to be computed using the dominant path model. 1. Click the Propagation tab. 2. Specify the output folder for the results to be computed using the dominant path model. 1. Under Output Directory / Basic Filename for Propagation Results, in the Directory field, change the default PropName to Results\Prop03_DPM. Note: Keep the dialog open to define additional project parameters.
2023-03-20 16:33:44
{"extraction_info": {"found_math": false, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.8544129729270935, "perplexity": 4278.793036508163}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2023-14/segments/1679296943484.34/warc/CC-MAIN-20230320144934-20230320174934-00158.warc.gz"}
http://sites.mathdoc.fr/cgi-bin/sps?kwd=Domination+inequalities&kwd_op=contains
2 matches found

XIV: 04, 26-48, LNM 784 (1980) LENGLART, Érik; LÉPINGLE, Dominique; PRATELLI, Maurizio Présentation unifiée de certaines inégalités de la théorie des martingales (Martingale theory) This paper is a synthesis of many years of work on martingale inequalities, and certainly one of the most influential among the papers which appeared in these volumes. It is shown how all main inequalities can be reduced to simple principles: 1) basic distribution inequalities between pairs of random variables (“Doob”, “domination”, “good lambda” and “Garsia-Neveu”), and 2) simple lemmas from the general theory of processes. Comment: This paper has been rewritten as Chapter XXIII of Dellacherie-Meyer, Probabilités et Potentiel E; see also 1621. A striking example of the power of these methods is Barlow-Yor, J. Funct. Anal. 49, 1982. Keywords: Moderate convex functions, Inequalities, Martingale inequalities, Burkholder inequalities, Good lambda inequalities, Domination inequalities Nature: Original Retrieve article from Numdam

XVI: 19, 221-233, LNM 920 (1982) YOR, Marc Application de la relation de domination à certains renforcements des inégalités de martingales (Martingale theory) The domination relation (Lenglart 1977) between a positive, right-continuous process $X$ and a previsible increasing process $A$ holds whenever $E[X_T]\le E[A_T]$ at stopping times. It plays an important role in the paper 1404 of Lenglart-Lépingle-Pratelli on martingale inequalities. Here it is shown to imply a general inequality involving $X^\ast_{\infty}$ and $1/A_{\infty}$, from which follow a number of inequalities for a continuous local martingale $M$. Among them, estimates on the ratios of the three quantities $M^\ast_{\infty}$, $\langle M\rangle_{\infty}$, $\sup_{a,t} L^a_t$. One can also recover the stronger version of Doob's inequality, proved by Pitman 1517. Comment: See an earlier paper of the author on this subject, Stochastics 3, 1979. The author mentions that part of the results were discovered slightly earlier by R. Gundy. Keywords: Martingale inequalities, Domination inequalities Nature: Original Retrieve article from Numdam
2023-04-01 03:50:47
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 0, "mathjax_asciimath": 1, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.8166515231132507, "perplexity": 2586.097788357725}, "config": {"markdown_headings": false, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 5, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2023-14/segments/1679296949701.0/warc/CC-MAIN-20230401032604-20230401062604-00600.warc.gz"}
https://newslibrarynexa.web.app/orbital-mechanics-for-engineering-students-curtis-pdf-download-808.html
Download the Book: Encyclopedia Of Aerospace Engineering PDF For Free, Preface: Discover Orbital Mechanics for Engineering Students by Howard Curtis.

Mar 31, 2011 These are notes for a course in orbital mechanics catalogued as Aerospace. Students who complete the course successfully will be prepared to participate in basic celestial mechanics, two fields not usually included in an engineering curriculum. http://www.oc.nps.navy.mil/˜garfield/ellipse app2.pdf.

## Orbit determination

(Armellin et al., 2016; Curtis, 2013) is germane to one of the branches of astronomy, which Orbital mechanics for engineering students.

Editorial Reviews. About the Author. Professor Curtis is former professor and department chair (Aerospace Engineering) - Kindle edition by Howard D. Curtis. Download it once and read it on your Kindle device, PC, phones or tablets. Orbital Mechanics for Engineering Students is an aerospace engineering textbook by Howard D. Curtis, in its fourth edition as of 2019. The book provides an

Interplanetary Trajectories. Howard D. Curtis, in Orbital Mechanics for Engineering Students (Second Edition), 2010. Section 8.7. 8.9. An earth orbit has a

Oct 26, 2010 (1) Download the zipped folder from the ASTER Labs web store after your [2] Curtis, H.D., "Orbital Mechanics for Engineering Students,".

## 978-0-08-100194-3 Aerodynamics for Engineering Students (Seventh Edition) 2015 978-0-08-096905-3 Aircraft Structures for Engineering Students (Fifth Edition) 2012 978-0-08-100914-7 Aircraft Structures for Engineering Students (Sixth Edition…

## NASA planned even more complex missions for the mid-1970s that would require

For many years economical interplanetary travel meant using the Hohmann transfer orbit. Hohmann demonstrated that the lowest energy route between any two orbits is an elliptical "orbit" which forms a tangent to the starting and destination… ^ Curtis, Howard D. (2014). Orbital Mechanics for Engineering Students (3rd Edition). Oxford, UK: Elsevier. pp. 383–387. ISBN 9780080977478.
In astrodynamics or celestial mechanics, an elliptic orbit or elliptical orbit is a Kepler orbit with an eccentricity of less than 1; this includes the special case of a circular orbit, with eccentricity equal to 0. Orbit determination is the estimation of orbits of objects such as moons, planets, and spacecraft. One major application is to allow tracking of newly observed asteroids and verifying that they have not been previously discovered. At the beginning of its journey, the spacecraft will already have a certain velocity and kinetic energy associated with its orbit around Earth. In astrodynamics, orbit phasing is the adjustment of the time-position of a spacecraft along its orbit, usually described as adjusting the orbiting spacecraft's true anomaly. Orbital phasing is primarily used in scenarios where a spacecraft… Topics covered by the text include a review of kinematics and Newtonian dynamics, the two-body problem, Kepler's laws of planetary motion, orbit determination, orbital maneuvers, relative motion and rendezvous, and interplanetary… Orbital mechanics is a core discipline within space-mission design and control. In astrodynamics and celestial dynamics, the orbital state vectors (sometimes state vectors) of an orbit are Cartesian vectors of position ($\mathbf{r}$) and velocity ($\mathbf{v}$) that together with… of teaching an introductory course in orbital mechanics for aerospace engineering students. These undergraduate students had no prior formal experience in the… Orbital Mechanics for Engineering Students, Taylor & Francis, 1975. [3] B.-G. Park and M.-J. Thak, “Three-dimensional trajectory optimization of soft lunar landing from the parking orbit with considerations of the landing site,” International Journal of Control, Automation and… In orbital mechanics (subfield of celestial mechanics), Gauss's method is used for preliminary orbit determination from at least three observations (more observations increase the accuracy of the determined orbit) of the orbiting body of… The frame is centered at the focus of the orbit, i.e. the celestial body about which the orbit is centered. The unit vectors $\mathbf{\hat{p}}$ and $\mathbf{\hat{q}}$ lie in the plane of the orbit.
Converse (1862–1940), a former Chairman and Secretary of the Chicago Section of the society and named for… For a circular orbit, the velocity is entirely transverse, and since $h = rv_\bot$ and $h = \sqrt{\mu r}$ for a circular orbit, combining the two gives $$v_\bot = \sqrt{\mu/r}.$$ Curtis, Howard D. "Orbital Mechanics for Engineering Students", 3rd ed. also has an appendix D with code samples at the publisher site -- Under "D.40 Calculation of a gravity-turn trajectory" there is a matlab script which integrates an…
2022-05-25 03:21:54
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 1, "x-ck12": 0, "texerror": 0, "math_score": 0.6902986764907837, "perplexity": 3548.2463268351253}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2022-21/segments/1652662578939.73/warc/CC-MAIN-20220525023952-20220525053952-00471.warc.gz"}
http://ptsymmetry.net/?p=1771
## One-Way Optical Transition based on Causality in Momentum Space

Sunkyu Yu, Xianji Piao, KyungWan Yoo, Jonghwa Shin, Namkyoo Park

The concept of parity-time (PT) symmetry has been used to identify a novel route to nonreciprocal dynamics in optical momentum space, imposing directionality on the flow of light. Whereas PT-symmetric potentials have been implemented under the requirement $$V(x) = V^*(-x)$$, this precondition has only been interpreted within the mathematical frame of the symmetry of Hamiltonians and has not been directly linked to nonreciprocity. Here, within the context of light-matter interactions, we develop an alternative route to nonreciprocity in momentum space by employing the concept of causality. We demonstrate that potentials with real and causal momentum spectra produce unidirectional transitions of optical states inside the k-continuum, which corresponds to an exceptional point on the degree of PT-symmetry. Our analysis reveals a critical link between non-Hermitian problems and spectral theory and enables the multi-dimensional manipulation of optical states, in contrast to the one-dimensional control available from the use of a Schrödinger-like equation in previous PT-symmetric optics.

http://arxiv.org/abs/1409.7031 Optics (physics.optics)
2017-08-23 05:59:36
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 1, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.4877612888813019, "perplexity": 1896.457160222016}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2017-34/segments/1502886117874.26/warc/CC-MAIN-20170823055231-20170823075231-00333.warc.gz"}
http://mathhelpforum.com/differential-geometry/133706-limits.html
1. ## Limits

Determine the following limits if they exist. If the limit exists, establish the convergence. $(a)\ \lim_{n\rightarrow\infty} \left(\sqrt{(n+a)(n+b)} - n\right)$ where $a,b > 0$. $(b)\ \lim_{n\rightarrow\infty} (n!)^{\frac{1}{n^2}}$

2. Originally Posted by yusukered07 Determine the following limits if they exist. If the limit exists, establish the convergence. $(a)\ \lim_{n\rightarrow\infty} \left(\sqrt{(n+a)(n+b)} - n\right)$ where $a,b > 0$. $(b)\ \lim_{n\rightarrow\infty} (n!)^{\frac{1}{n^2}}$ Let $L=\lim_{n\to\infty}\left(n!\right)^{\frac{1}{n^2}}=\lim_{n\to\infty}\left(\left(n!\right)^{\frac{1}{n}}\right)^{\frac{1}{n}}$, applying the connection between the root and ratio tests we see that this is equal to $L=\lim_{n\to\infty}\frac{((n+1)!)^{\frac{1}{n+1}}}{(n!)^{\frac{1}{n}}}=\lim_{n\to\infty}\frac{(n!)^{\frac{1}{n+1}}}{(n!)^{\frac{1}{n}}}\cdot (n+1)^{\frac{1}{n+1}}=\lim_{n\to\infty}(n!)^{\frac{-1}{n(n+1)}}=\lim_{n\to\infty}\frac{1}{(n!)^{\frac{1}{n^2}}}=\frac{1}{L}$. Thus, $L=\pm 1$, but since it is clearly positive we may conclude that $L=1$.

3. Is... $\sqrt{(n+a)\cdot (n+b)} - n = \frac{(n+a)\cdot (n+b) - n^{2}}{\sqrt{(n+a)\cdot (n+b)} + n} = \frac{n\cdot (a+b) + ab}{\sqrt{(n+a)\cdot (n+b)} + n} = \frac{a + b + \frac{ab}{n}}{1 + \sqrt{1 + \frac{a+b}{n} + \frac{ab}{n^{2}}}}$ (1) ... so that is... $\lim_{n \rightarrow \infty} \left(\sqrt{(n+a)\cdot (n+b)} - n\right) = \frac{a+b}{2}$ (2) Kind regards $\chi$ $\sigma$

4. Originally Posted by Drexel28 Let $L=\lim_{n\to\infty}\left(n!\right)^{\frac{1}{n^2}}=\lim_{n\to\infty}\left(\left(n!\right)^{\frac{1}{n}}\right)^{\frac{1}{n}}$, applying the connection between the root and ratio tests we see that this is equal to $L=\lim_{n\to\infty}\frac{((n+1)!)^{\frac{1}{n+1}}}{(n!)^{\frac{1}{n}}}=\lim_{n\to\infty}\frac{(n!)^{\frac{1}{n+1}}}{(n!)^{\frac{1}{n}}}\cdot (n+1)^{\frac{1}{n+1}}=\lim_{n\to\infty}(n!)^{\frac{-1}{n(n+1)}}=\lim_{n\to\infty}\frac{1}{(n!)^{\frac{1}{n^2}}}=\frac{1}{L}$. Thus, $L=\pm 1$, but since it is clearly positive we may conclude that $L=1$. I am not sure this is fully rigorous: for instance, don't you need to prove first that the limit exists? (and a few steps would need justifications, like $\lim_{n\to\infty}(n!)^{\frac{-1}{n(n+1)}}=\lim_{n\to\infty}\frac{1}{(n!)^{\frac{1}{n^2}}}$, or the previous one) I would rather say: readily, $1\leq n!=n(n-1)\cdots 1\leq n^n$, hence $1\leq (n!)^{\frac{1}{n^2}}=\exp(\frac{\log (n!)}{n^2})\leq \exp(\frac{\log n}{n})$. Since $\frac{\log n}{n}\to 0$ (and $\exp$ is continuous at $0$), the limit indeed exists and equals $1$.

5. Originally Posted by Laurent I am not sure this is fully rigorous: for instance, don't you need to prove first that the limit exists? (and a few steps would need justifications, like $\lim_{n\to\infty}(n!)^{\frac{-1}{n(n+1)}}=\lim_{n\to\infty}\frac{1}{(n!)^{\frac{1}{n^2}}}$, or the previous one) I would rather say: readily, $1\leq n!=n(n-1)\cdots 1\leq n^n$, hence $1\leq (n!)^{\frac{1}{n^2}}=\exp(\frac{\log (n!)}{n^2})\leq \exp(\frac{\log n}{n})$. Since $\frac{\log n}{n}\to 0$ (and $\exp$ is continuous at $0$), the limit indeed exists and equals $1$. I don't think that the point of the exercise was to be fully rigorous. Besides, I don't think I made any extraordinary leaps.

6. Originally Posted by Drexel28 I don't think that the point of the exercise was to be fully rigorous. Besides, I don't think I made any extraordinary leaps.
I was actually a bit euphemistic in my answer: assuming the existence of the limit in the first place and playing with it is really, really not a good habit to get into (no harm intended!). Especially when, like here, we are asked "if the limit exists, establish the convergence"; but certainly not only in this case. It is not because the only "possible" limit is 1 that the sequence converges to 1. I'm just saying...

7. Originally Posted by Drexel28 I don't think that the point of the exercise was to be fully rigorous. Besides, I don't think I made any extraordinary leaps. If you act this way when you are a mathematician, you'll be in some kind of trouble

8. Originally Posted by Moo If you act this way when you are a mathematician, you'll be in some kind of trouble Haha! I agree. That said... I struggle with complete rigor and not giving the OP the full answer. My response was meant to be a nudge for which they would fill in the details.
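For anyone wanting a numerical sanity check of both limits (my addition, not from the thread itself), working with logarithms via math.lgamma keeps $n!$ from overflowing:

    import math

    a, b = 2.0, 5.0
    for n in (10, 1000, 100000):
        limit_a = math.sqrt((n + a) * (n + b)) - n
        limit_b = math.exp(math.lgamma(n + 1) / n**2)   # (n!)^(1/n^2)
        print(n, limit_a, limit_b)
    # limit_a -> (a + b)/2 = 3.5 and limit_b -> 1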
2017-05-29 13:29:13
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 31, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.9799233675003052, "perplexity": 396.14225844837284}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2017-22/segments/1495463612327.8/warc/CC-MAIN-20170529130450-20170529150450-00416.warc.gz"}
https://answers.opencv.org/questions/98539/revisions/
# Revision history [back]

### Undistort image before estimating pose using solvePnP

I have to estimate the pose of the camera using a known marker. The camera is calibrated and I have all the calibration coefficients. The current version of the algorithm extracts 4 co-planar points from the frame and uses them to estimate the pose using the solvePnP function. The algorithm seems to work fine, but I have a doubt. Since solvePnP also takes the calibration coefficients as input, do I need to undistort the image before looking for the 4 points? In the following code, are the initUndistortRectifyMap/remap functions necessary?

    while (1) {
        frame = camera->getFrame();
        imshow("frame", frame);

        // Estimation of Map1 and Map2 for image rectification
        // (to be done only on the first iteration)
        if (initRectifyMap_flag) {
            // Rectify the image. The initialization of the rectification
            // parameters is carried out only at the first frame.
            initUndistortRectifyMap(cameraMatrix, distCoeffs, Mat(),
                getOptimalNewCameraMatrix(cameraMatrix, distCoeffs, frame.size(), 1, frame.size(), 0),
                frame.size(), CV_16SC2, map1, map2);
            initRectifyMap_flag = false;
        }

        // Remapping of the current frame
        remap(frame, src, map1, map2, INTER_LINEAR, BORDER_TRANSPARENT, Scalar::all(255));

        // Here is the code for the extraction of the 4 points based on the
        // content of the variable src
        ...
        ...

        // Pose estimation
        solvePnP(Mat(refMarkerPoint), Mat(markerPoints), cameraMatrix, distCoeffs, rvec, tvec, false);
    }
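For reference, a sketch of the two consistent setups (an illustrative addition in Python with the cv2 bindings, not the poster's code; the point arrays and intrinsics below are placeholders). solvePnP models lens distortion through its distCoeffs argument, so the key is not to correct for distortion twice: either detect the points in the original frame and pass distCoeffs, or detect them in the remapped frame and pass the new camera matrix with no distortion.

    import cv2
    import numpy as np

    # Placeholder data (illustration only): four known co-planar 3D marker
    # corners and their detected 2D pixel locations.
    ref_marker_points = np.array([[0, 0, 0], [1, 0, 0], [1, 1, 0], [0, 1, 0]],
                                 dtype=np.float64)
    marker_points = np.array([[320, 240], [400, 242], [398, 318], [322, 316]],
                             dtype=np.float64)
    camera_matrix = np.array([[800, 0, 320], [0, 800, 240], [0, 0, 1]],
                             dtype=np.float64)
    dist_coeffs = np.array([0.1, -0.05, 0.001, 0.001, 0.0])  # example values

    # Option 1: points detected on the ORIGINAL (distorted) frame.
    # solvePnP applies the distortion model itself, so no remap is needed.
    ok, rvec, tvec = cv2.solvePnP(ref_marker_points, marker_points,
                                  camera_matrix, dist_coeffs)

    # Option 2: points detected on a frame already rectified with
    # initUndistortRectifyMap/remap. The distortion is already removed,
    # so pass the new camera matrix and no distortion coefficients.
    new_camera_matrix, _ = cv2.getOptimalNewCameraMatrix(
        camera_matrix, dist_coeffs, (640, 480), 1, (640, 480), 0)
    ok, rvec, tvec = cv2.solvePnP(ref_marker_points, marker_points,
                                  new_camera_matrix, None)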
2021-02-25 23:29:57
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 1, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.38605549931526184, "perplexity": 5939.695996001318}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2021-10/segments/1614178355937.26/warc/CC-MAIN-20210225211435-20210226001435-00159.warc.gz"}
https://www.studysmarter.us/textbooks/physics/college-physics-urone-1st-edition/work-energy-and-energy-resources/q25-pe-a-how-high-a-hill-can-a-car-coast-up-engine-disengage/
Q25 PE Expert-verified. Found in: Page 261

### College Physics (Urone)

Book edition: 1st Edition
Author(s): Paul Peter Urone
Pages: 1272 pages
ISBN: 9781938168000

# (a) How high a hill can a car coast up (engine disengaged) if work done by friction is negligible and its initial speed is 110 km/h? (b) If, in actuality, a 750-kg car with an initial speed of 110 km/h is observed to coast up a hill to a height 22.0 m above its starting point, how much thermal energy was generated by friction? (c) What is the average force of friction if the hill has a slope 2.5° above the horizontal?

(a) The car can go up to a height of $47.63\ \text{m}$.

(b) The thermal energy generated by friction is $188415.74\ \text{J}$.

(c) The average force of friction is $373.57\ \text{N}$.

## Conservation of energy

When both conservative and nonconservative forces act on a body, conservation of energy is expressed as

$\Delta KE = W_f - \Delta PE$

$\frac{1}{2}mv_f^2 - \frac{1}{2}mv_i^2 = W_f - (mgh_f - mgh_i) \qquad (1.1)$

Here, $m$ is the mass of the car, $v_f$ is the final velocity of the car ($v_f = 0$, as the car stops), $v_i$ is the initial velocity of the car ($v_i = 110\ \text{km/h} = 30.56\ \text{m/s}$), $W_f$ is the work done by the nonconservative force, i.e. the work done by friction, $g$ is the acceleration due to gravity ($9.8\ \text{m/s}^2$), $h_f$ is the final height, and $h_i$ is the initial height ($h_i = 0$, as the car starts from the ground).

## Maximum height attained by the car

(a) When there is no frictional force, the work done by friction is zero, i.e. $W_f = 0$, and equation (1.1) with $v_f = 0$ and $h_i = 0$ gives the maximum height:

$-\frac{1}{2}mv_i^2 = -mgh_f$

$h_f = \frac{v_i^2}{2g} = \frac{(30.56\ \text{m/s})^2}{2 \times (9.8\ \text{m/s}^2)} = 47.63\ \text{m}$

Therefore, the car can go up to a height of $47.63\ \text{m}$.

## The thermal energy generated

(b) When friction acts, the car reaches at most $h_f = 22.0\ \text{m}$. The work done by the nonconservative (frictional) force can be calculated using equation (1.1).
Putting in the known values with $v_f = 0$ and $h_i = 0$,

$W_f = mgh_f - \frac{1}{2}mv_i^2 = (750\ \text{kg}) \times (9.8\ \text{m/s}^2) \times (22.0\ \text{m}) - \frac{1}{2} \times (750\ \text{kg}) \times (30.56\ \text{m/s})^2 = -188415.74\ \text{J}$

Since the work done by the frictional force is liberated as thermal energy, the thermal energy generated is $188415.74\ \text{J}$.

## The average frictional force

(c) The work done by the frictional force is

$W_f = -Fd \qquad (1.2)$

Here, $F$ is the average frictional force and $d$ is the distance travelled along the slope:

$d = \frac{h}{\sin(2.5°)} = \frac{22.0\ \text{m}}{\sin(2.5°)} = 504.36\ \text{m}$

Rearranging equation (1.2) gives the average frictional force:

$F = -\frac{W_f}{d} = -\frac{(-188415.74\ \text{J})}{504.36\ \text{m}} = 373.57\ \text{N}$

Therefore, the required average frictional force is $373.57\ \text{N}$.
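As a quick cross-check of the arithmetic (an addition, not part of the textbook solution; Python):

    import math

    m = 750.0              # kg
    v = 110 / 3.6          # 110 km/h in m/s (about 30.56 m/s)
    g = 9.8                # m/s^2
    h = 22.0               # m, height reached with friction
    theta = math.radians(2.5)

    h_max = v**2 / (2 * g)              # (a) frictionless coasting height
    W_f = m * g * h - 0.5 * m * v**2    # (b) work done by friction (negative)
    d = h / math.sin(theta)             # slope distance travelled
    F = -W_f / d                        # (c) average friction force

    print(h_max)   # ~47.6 m
    print(-W_f)    # ~1.88e5 J of thermal energy
    print(F)       # ~374 N

Running this reproduces the three boxed answers from the given data.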
2023-04-01 03:37:10
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 25, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.8858304619789124, "perplexity": 560.008102388091}, "config": {"markdown_headings": true, "markdown_code": false, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 20, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2023-14/segments/1679296949701.0/warc/CC-MAIN-20230401032604-20230401062604-00764.warc.gz"}
https://castlevania.fandom.com/wiki/Innocent_Devil
An Innocent Devil is a creature that can only be created by a Devil Forgemaster, and is said to be created through even the mere "wisp of conjured matter". While they are beings born of darkness, their name stems from the fact that they are largely unaware of their wicked origins and intended malevolent existence: they act ambivalently, without selfish or conceited motivations, while being driven to serve and protect their creator, to whom they have unwavering and absolute loyalty.

## Gameplay

### General

The Innocent Devils can be found in special rooms in specific dungeon areas. These rooms house the initial stage of a specific type of Innocent Devil. It should also be noted that these Innocent Devils are in a somewhat petrified state before Hector releases them.

Hector can summon Innocent Devils in Castlevania: Curse of Darkness, and as an extension of the contrast between Hector and Isaac, Isaac's own Innocent Devils are fought during the battles with him. They play a major role in the game by helping Hector fight Dracula's minions and resolve environmental riddles or puzzles, consequently gaining access to otherwise unreachable areas. They also boost Hector's stats. However, they are completely optional in battle, and the player may still choose not to use them.

There are six known types of Innocent Devils: the Fairy-Type, Battle-Type, Bird-Type, Mage-Type, Devil-Type and the Pumpkin-Type.

The usage of their abilities consumes Hearts, which also serve as their health bar. Innocent Devils do not literally die: once an Innocent Devil's Heart meter drops to zero, its sparking "soul" hovers around Hector, unable to do anything. From this state, Hector must collect at least 1 Heart for the Innocent Devil to revert to its normal state.

### Statistic bonuses

Aside from being an ally in combat, Innocent Devils also passively increase Hector's statistics. The additional statistic is based on the type of the summon, which starts with an initial value and also increases with the specific Innocent Devil's level.

• Fairy-Type: LCK +1 (initial), LCK +14 (growth rate)
• Battle-Type: STR +10 (initial), STR +40 (growth rate)
• Bird-Type: CON +4 (initial), CON +12 (growth rate)
• Mage-Type: STR +5, CON +2 (initial), STR +20, CON +6 (growth rate)
• Devil-Type: STR +4, CON +2, LCK +1 (initial), STR +14, CON +4, LCK +5 (growth rate)
• Pumpkin-Type: STR +10, CON +4, LCK +5 (initial), STR +60, CON +18, LCK +50 (growth rate)

A formula for computing the total enhancement of statistics is shown below[1]

$\text{Total Enhancement} = \text{Initial} + \frac{\text{Growth Rate} \times (\text{Level} - 1)}{98}$

In other words, the growth-rate values of each I.D. type are the maximum bonus statistic, which is added in full to the initial value once the Innocent Devil reaches level 99, the maximum level of the game.

## Health

The Innocent Devils' health is represented by Hearts. When the Innocent Devils are attacked by enemies or when Hector commands them to use an ability (unless using the Mobius Brooch), the Heart gauge decreases. Hector can't use spells when the Innocent Devils' life is too low. If their life drops to zero, they will turn into a sparking orb and Hector will have to find Hearts to revive them. The left side of their health bar also shows a face that signifies the Innocent Devil's state, ranging from idle to hurt to aggressive.

## Evolution

### Level-up

Just like Hector, Innocent Devils earn XP and level up whenever an enemy is killed.
They can learn new attacks and get stat boosts.

### Evo Crystals

Main article: Evo Crystal

Aside from levels, Innocent Devils can evolve with Evolution Crystals that drop from enemies. With enough Evolution Crystals, an Innocent Devil can evolve and take a new form. Innocent Devils can evolve up to three times, though some species have shorter evolution lines. Depending on what weapon you use, your Innocent Devils will evolve differently: the color of the crystals dropped depends on the type of weapon Hector uses to kill his enemies.

### Development line

A development line (or chart, or map) functions as a guide for determining the current evolution stage of the Innocent Devil. It also shows the color and amount of Evo Crystals required for the next stage of evolution. A small version can be seen in the Innocent Devil's stats. Likewise, a "Chart" button allows the player to view a larger, more detailed version, including the abilities for each evolution.

WARNING: If you evolve an Innocent Devil before it has obtained all of the abilities of its current form, it will no longer be able to obtain them, and you can't go back. For example, if the Battle-Type in Golem form evolves to a different form without obtaining Hip Press Lv.2, then that skill is skipped and remains as ?????? in the abilities chart screen.

### Devil Shards

Main article: Devil Shard

Innocent Devils will also randomly drop Devil Shards, which can be used to create new Level 1 Innocent Devils with the help of Julia. Those Innocent Devils will have higher stats compared to their "parents", being a new generation. Newly created I.D.s have 10% of their parent's statistics added to their initial stats, depending on the level of the parent I.D. when the Devil Shard was produced. The higher the generation, the higher the base stats of the newborn Innocent Devil at Level 1.

## Different types

There are six different types of Innocent Devils, the first five of which are mandatory in order to progress, while the sixth is completely optional.

### Fairy-Type

The first Innocent Devil is summoned in the Abandoned Castle. Fairy Devils don't attack (except for a few evolutions that possess attacks, but they're quite harmless). They can cast spells that heal Hector or cure him of Poison or Curse. They can also open chests, and certain higher evolutions of Fairy-Types can translate messages. Enemies can't attack Fairy Innocent Devils.

### Battle-Type

The second Innocent Devil is summoned in the Baljhet Mountains. As the name suggests, Battle Devils are pure killing machines, with great melee abilities created to attack enemies and protect Hector. They can open metal doors that bar Hector's passage (for example, the one blocking the Mortvia Aqueduct in the mountains) using Brute Force, or destroy cracked ground spots using Hip Press. Hector can perform Chain Attacks with them. This is one of three types also seen used by Isaac, who uses an Iytei and a Rasetz.

### Bird-Type

The third Innocent Devil is summoned in the Mortvia Aqueduct. Bird Devils are good in air combat and can help Hector cross large gaps that cannot ordinarily be passed by lifting him and flying for a short time, an ability which can be upgraded to access the Tower of Evermore. They can juggle light enemies, allowing Hector to perform air combos easily. This is one of three types also seen used by Isaac, who uses a Crimson.

### Mage-Type

The fourth Innocent Devil is summoned in Cordova Town.
Mage Devils are physically weak, being mediocre at fighting enemies, but can cast powerful spells which inflict heavy area-of-effect damage. They can also help Hector by using the Time Stop spell, which, for a limited duration, freezes time for everything but Hector and bosses, who can immediately break the spell.

### Devil-Type

The final mandatory Innocent Devil is summoned in the Eneomaos Machine Tower. Devil-Type Innocent Devils are considered a balance between Battle- and Bird-Types: they are good at melee and air battle, with high mobility and attack power, and are very aggressive whenever an enemy is spotted. They can also affect Hector with a spell that converts both him and the Devil into a magic circle, allowing the player to access rooms by crossing small gaps in some walls. There is only one evolutionary path for this Devil Type, and its final form can only be created by collecting Evo Crystals using the Chauve-souris spear. Additionally, Hector can perform Chain Attacks with them. This is one of three types used by Isaac, who summons a The End, named Abel.

### Pumpkin-Type

An optional Innocent Devil that Hector can summon in Dracula's Castle. In order to actually unlock it, it is necessary to forge a weapon called the Pumpkin Mace. Pumpkin Devils are really bad at fighting enemies and have a low Heart gauge. However, they give high stat boosts to Hector, increasing his strength, constitution and luck. They're often considered the "joke" Innocent Devil, or just an added challenge for players who like to solo content as Hector while benefiting from the stat bonuses.

## Trivia

• All except the Pumpkin-Type Innocent Devils have a pink crystal protruding from a part of their body.
2020-02-25 10:17:08
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.29767662286758423, "perplexity": 8234.415851426162}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2020-10/segments/1581875146064.76/warc/CC-MAIN-20200225080028-20200225110028-00084.warc.gz"}
http://www.mzan.com/article/49873158-mirror-hourglass-with-least-necessary-for-loops.shtml
how to mirror an hourglass with no more than four for loops?

I would like to find out how to mirror one half hourglass... with not more than 4 loops. I do not want to use recursion or arrays, just simple for loops

    Scanner sc = new Scanner(System.in);
    System.out.println(" Enter odd number above 0 here: ");
    int h = sc.nextInt();
    char x = '#';
    if (h % 2 != 0) {
        for (int i = 1; i <= h; i++) {
            // loop for first part of hourglass
            for (int j = 1; j <= i; j++) {
                System.out.print(x);
            }
            // create white space
            for (int j = h - i; j >= 1; j--) {
                System.out.print(" ");
            }
            // create mirror
            for (int k = i; k >= 1; k--) {
                System.out.print(x);
            }
            System.out.println();
        }
    } else {
        System.out.println(" Not an odd number. Try again: ");
    }
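One possible approach, sketched here in Python rather than the question's Java (an illustrative addition, not the original poster's code): print the mirrored half first by running the row index downward, then reuse the same row logic running back upward. String repetition stands in for the three inner loops, leaving only two explicit for loops.

    h = 5  # any odd number above 0, as in the original prompt

    def row(i):
        # i '#' characters, (h - i) spaces, then i '#' characters:
        # exactly one line of the poster's half hourglass
        return "#" * i + " " * (h - i) + "#" * i

    # Mirrored half: widest row down to the waist...
    for i in range(h, 0, -1):
        print(row(i))
    # ...then the original half from just below the waist back out,
    # so the waist row is not printed twice.
    for i in range(2, h + 1):
        print(row(i))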
2018-04-25 00:36:29
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 1, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.22448889911174774, "perplexity": 836.2837440261134}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2018-17/segments/1524125947654.26/warc/CC-MAIN-20180425001823-20180425021823-00544.warc.gz"}
https://search.datacite.org/works/10.6092/ISSN.1973-2201/881
### On the definition and some justification of the Likelihood Principle B. V. Frosini Two points have been raised about the Likelihood Principle (LP). The first relates to the definition of LP: it has been shown that one of the two available definitions is hardly acceptable outside a strictly Bayesian context. The cue for the second issue is taken from Birnbaum's proof (and other similar proofs) that the LP - in its strong version - is implied by other more fundamental principles, which most statisticians tend to accept; some...
2017-09-26 10:49:53
{"extraction_info": {"found_math": false, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.8258700370788574, "perplexity": 946.4508630042177}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2017-39/segments/1505818695439.96/warc/CC-MAIN-20170926103944-20170926123944-00255.warc.gz"}
https://people.maths.bris.ac.uk/~matyd/GroupNames/154/C2xD4.D5.html
Copied to clipboard ## G = C2×D4.D5order 160 = 25·5 ### Direct product of C2 and D4.D5 Series: Derived Chief Lower central Upper central Derived series C1 — C20 — C2×D4.D5 Chief series C1 — C5 — C10 — C20 — Dic10 — C2×Dic10 — C2×D4.D5 Lower central C5 — C10 — C20 — C2×D4.D5 Upper central C1 — C22 — C2×C4 — C2×D4 Generators and relations for C2×D4.D5 G = < a,b,c,d,e | a2=b4=c2=d5=1, e2=b2, ab=ba, ac=ca, ad=da, ae=ea, cbc=ebe-1=b-1, bd=db, cd=dc, ece-1=bc, ede-1=d-1 > Subgroups: 184 in 68 conjugacy classes, 33 normal (17 characteristic) C1, C2, C2 [×2], C2 [×2], C4 [×2], C4 [×2], C22, C22 [×4], C5, C8 [×2], C2×C4, C2×C4, D4 [×2], D4, Q8 [×3], C23, C10, C10 [×2], C10 [×2], C2×C8, SD16 [×4], C2×D4, C2×Q8, Dic5 [×2], C20 [×2], C2×C10, C2×C10 [×4], C2×SD16, C52C8 [×2], Dic10 [×2], Dic10, C2×Dic5, C2×C20, C5×D4 [×2], C5×D4, C22×C10, C2×C52C8, D4.D5 [×4], C2×Dic10, D4×C10, C2×D4.D5 Quotients: C1, C2 [×7], C22 [×7], D4 [×2], C23, D5, SD16 [×2], C2×D4, D10 [×3], C2×SD16, C5⋊D4 [×2], C22×D5, D4.D5 [×2], C2×C5⋊D4, C2×D4.D5 Smallest permutation representation of C2×D4.D5 On 80 points Generators in S80 (1 21)(2 22)(3 23)(4 24)(5 25)(6 26)(7 27)(8 28)(9 29)(10 30)(11 36)(12 37)(13 38)(14 39)(15 40)(16 31)(17 32)(18 33)(19 34)(20 35)(41 61)(42 62)(43 63)(44 64)(45 65)(46 66)(47 67)(48 68)(49 69)(50 70)(51 76)(52 77)(53 78)(54 79)(55 80)(56 71)(57 72)(58 73)(59 74)(60 75) (1 16 6 11)(2 17 7 12)(3 18 8 13)(4 19 9 14)(5 20 10 15)(21 31 26 36)(22 32 27 37)(23 33 28 38)(24 34 29 39)(25 35 30 40)(41 51 46 56)(42 52 47 57)(43 53 48 58)(44 54 49 59)(45 55 50 60)(61 76 66 71)(62 77 67 72)(63 78 68 73)(64 79 69 74)(65 80 70 75) (1 11)(2 12)(3 13)(4 14)(5 15)(6 16)(7 17)(8 18)(9 19)(10 20)(21 36)(22 37)(23 38)(24 39)(25 40)(26 31)(27 32)(28 33)(29 34)(30 35)(41 46)(42 47)(43 48)(44 49)(45 50)(61 66)(62 67)(63 68)(64 69)(65 70) (1 2 3 4 5)(6 7 8 9 10)(11 12 13 14 15)(16 17 18 19 20)(21 22 23 24 25)(26 27 28 29 30)(31 32 33 34 35)(36 37 38 39 40)(41 42 43 44 45)(46 47 48 49 50)(51 52 53 54 55)(56 57 58 59 60)(61 62 63 64 65)(66 67 68 69 70)(71 72 73 74 75)(76 77 78 79 80) (1 68 6 63)(2 67 7 62)(3 66 8 61)(4 70 9 65)(5 69 10 64)(11 73 16 78)(12 72 17 77)(13 71 18 76)(14 75 19 80)(15 74 20 79)(21 48 26 43)(22 47 27 42)(23 46 28 41)(24 50 29 45)(25 49 30 44)(31 53 36 58)(32 52 37 57)(33 51 38 56)(34 55 39 60)(35 54 40 59) G:=sub<Sym(80)| (1,21)(2,22)(3,23)(4,24)(5,25)(6,26)(7,27)(8,28)(9,29)(10,30)(11,36)(12,37)(13,38)(14,39)(15,40)(16,31)(17,32)(18,33)(19,34)(20,35)(41,61)(42,62)(43,63)(44,64)(45,65)(46,66)(47,67)(48,68)(49,69)(50,70)(51,76)(52,77)(53,78)(54,79)(55,80)(56,71)(57,72)(58,73)(59,74)(60,75), (1,16,6,11)(2,17,7,12)(3,18,8,13)(4,19,9,14)(5,20,10,15)(21,31,26,36)(22,32,27,37)(23,33,28,38)(24,34,29,39)(25,35,30,40)(41,51,46,56)(42,52,47,57)(43,53,48,58)(44,54,49,59)(45,55,50,60)(61,76,66,71)(62,77,67,72)(63,78,68,73)(64,79,69,74)(65,80,70,75), (1,11)(2,12)(3,13)(4,14)(5,15)(6,16)(7,17)(8,18)(9,19)(10,20)(21,36)(22,37)(23,38)(24,39)(25,40)(26,31)(27,32)(28,33)(29,34)(30,35)(41,46)(42,47)(43,48)(44,49)(45,50)(61,66)(62,67)(63,68)(64,69)(65,70), (1,2,3,4,5)(6,7,8,9,10)(11,12,13,14,15)(16,17,18,19,20)(21,22,23,24,25)(26,27,28,29,30)(31,32,33,34,35)(36,37,38,39,40)(41,42,43,44,45)(46,47,48,49,50)(51,52,53,54,55)(56,57,58,59,60)(61,62,63,64,65)(66,67,68,69,70)(71,72,73,74,75)(76,77,78,79,80), 
(1,68,6,63)(2,67,7,62)(3,66,8,61)(4,70,9,65)(5,69,10,64)(11,73,16,78)(12,72,17,77)(13,71,18,76)(14,75,19,80)(15,74,20,79)(21,48,26,43)(22,47,27,42)(23,46,28,41)(24,50,29,45)(25,49,30,44)(31,53,36,58)(32,52,37,57)(33,51,38,56)(34,55,39,60)(35,54,40,59)>; G:=Group( (1,21)(2,22)(3,23)(4,24)(5,25)(6,26)(7,27)(8,28)(9,29)(10,30)(11,36)(12,37)(13,38)(14,39)(15,40)(16,31)(17,32)(18,33)(19,34)(20,35)(41,61)(42,62)(43,63)(44,64)(45,65)(46,66)(47,67)(48,68)(49,69)(50,70)(51,76)(52,77)(53,78)(54,79)(55,80)(56,71)(57,72)(58,73)(59,74)(60,75), (1,16,6,11)(2,17,7,12)(3,18,8,13)(4,19,9,14)(5,20,10,15)(21,31,26,36)(22,32,27,37)(23,33,28,38)(24,34,29,39)(25,35,30,40)(41,51,46,56)(42,52,47,57)(43,53,48,58)(44,54,49,59)(45,55,50,60)(61,76,66,71)(62,77,67,72)(63,78,68,73)(64,79,69,74)(65,80,70,75), (1,11)(2,12)(3,13)(4,14)(5,15)(6,16)(7,17)(8,18)(9,19)(10,20)(21,36)(22,37)(23,38)(24,39)(25,40)(26,31)(27,32)(28,33)(29,34)(30,35)(41,46)(42,47)(43,48)(44,49)(45,50)(61,66)(62,67)(63,68)(64,69)(65,70), (1,2,3,4,5)(6,7,8,9,10)(11,12,13,14,15)(16,17,18,19,20)(21,22,23,24,25)(26,27,28,29,30)(31,32,33,34,35)(36,37,38,39,40)(41,42,43,44,45)(46,47,48,49,50)(51,52,53,54,55)(56,57,58,59,60)(61,62,63,64,65)(66,67,68,69,70)(71,72,73,74,75)(76,77,78,79,80), (1,68,6,63)(2,67,7,62)(3,66,8,61)(4,70,9,65)(5,69,10,64)(11,73,16,78)(12,72,17,77)(13,71,18,76)(14,75,19,80)(15,74,20,79)(21,48,26,43)(22,47,27,42)(23,46,28,41)(24,50,29,45)(25,49,30,44)(31,53,36,58)(32,52,37,57)(33,51,38,56)(34,55,39,60)(35,54,40,59) ); G=PermutationGroup([(1,21),(2,22),(3,23),(4,24),(5,25),(6,26),(7,27),(8,28),(9,29),(10,30),(11,36),(12,37),(13,38),(14,39),(15,40),(16,31),(17,32),(18,33),(19,34),(20,35),(41,61),(42,62),(43,63),(44,64),(45,65),(46,66),(47,67),(48,68),(49,69),(50,70),(51,76),(52,77),(53,78),(54,79),(55,80),(56,71),(57,72),(58,73),(59,74),(60,75)], [(1,16,6,11),(2,17,7,12),(3,18,8,13),(4,19,9,14),(5,20,10,15),(21,31,26,36),(22,32,27,37),(23,33,28,38),(24,34,29,39),(25,35,30,40),(41,51,46,56),(42,52,47,57),(43,53,48,58),(44,54,49,59),(45,55,50,60),(61,76,66,71),(62,77,67,72),(63,78,68,73),(64,79,69,74),(65,80,70,75)], [(1,11),(2,12),(3,13),(4,14),(5,15),(6,16),(7,17),(8,18),(9,19),(10,20),(21,36),(22,37),(23,38),(24,39),(25,40),(26,31),(27,32),(28,33),(29,34),(30,35),(41,46),(42,47),(43,48),(44,49),(45,50),(61,66),(62,67),(63,68),(64,69),(65,70)], [(1,2,3,4,5),(6,7,8,9,10),(11,12,13,14,15),(16,17,18,19,20),(21,22,23,24,25),(26,27,28,29,30),(31,32,33,34,35),(36,37,38,39,40),(41,42,43,44,45),(46,47,48,49,50),(51,52,53,54,55),(56,57,58,59,60),(61,62,63,64,65),(66,67,68,69,70),(71,72,73,74,75),(76,77,78,79,80)], [(1,68,6,63),(2,67,7,62),(3,66,8,61),(4,70,9,65),(5,69,10,64),(11,73,16,78),(12,72,17,77),(13,71,18,76),(14,75,19,80),(15,74,20,79),(21,48,26,43),(22,47,27,42),(23,46,28,41),(24,50,29,45),(25,49,30,44),(31,53,36,58),(32,52,37,57),(33,51,38,56),(34,55,39,60),(35,54,40,59)]) 34 conjugacy classes class 1 2A 2B 2C 2D 2E 4A 4B 4C 4D 5A 5B 8A 8B 8C 8D 10A ··· 10F 10G ··· 10N 20A 20B 20C 20D order 1 2 2 2 2 2 4 4 4 4 5 5 8 8 8 8 10 ··· 10 10 ··· 10 20 20 20 20 size 1 1 1 1 4 4 2 2 20 20 2 2 10 10 10 10 2 ··· 2 4 ··· 4 4 4 4 4 34 irreducible representations dim 1 1 1 1 1 2 2 2 2 2 2 2 2 4 type + + + + + + + + + + - image C1 C2 C2 C2 C2 D4 D4 D5 SD16 D10 D10 C5⋊D4 C5⋊D4 D4.D5 kernel C2×D4.D5 C2×C5⋊2C8 D4.D5 C2×Dic10 D4×C10 C20 C2×C10 C2×D4 C10 C2×C4 D4 C4 C22 C2 # reps 1 1 4 1 1 1 1 2 4 2 4 4 4 4 Matrix representation of C2×D4.D5 in GL6(𝔽41) 1 0 0 0 0 0 0 1 0 0 0 0 0 0 40 0 0 0 0 0 0 40 0 0 0 0 0 0 1 0 0 0 0 0 0 1 , 0 1 0 0 0 0 40 0 0 0 0 0 0 0 
0 40 0 0 0 0 1 0 0 0 0 0 0 0 1 0 0 0 0 0 0 1 , 0 1 0 0 0 0 1 0 0 0 0 0 0 0 0 40 0 0 0 0 40 0 0 0 0 0 0 0 1 0 0 0 0 0 0 1 , 1 0 0 0 0 0 0 1 0 0 0 0 0 0 1 0 0 0 0 0 0 1 0 0 0 0 0 0 16 0 0 0 0 0 22 18 , 15 26 0 0 0 0 26 26 0 0 0 0 0 0 15 15 0 0 0 0 15 26 0 0 0 0 0 0 7 23 0 0 0 0 30 34

G:=sub<GL(6,GF(41))| [1,0,0,0,0,0,0,1,0,0,0,0,0,0,40,0,0,0,0,0,0,40,0,0,0,0,0,0,1,0,0,0,0,0,0,1],[0,40,0,0,0,0,1,0,0,0,0,0,0,0,0,1,0,0,0,0,40,0,0,0,0,0,0,0,1,0,0,0,0,0,0,1],[0,1,0,0,0,0,1,0,0,0,0,0,0,0,0,40,0,0,0,0,40,0,0,0,0,0,0,0,1,0,0,0,0,0,0,1],[1,0,0,0,0,0,0,1,0,0,0,0,0,0,1,0,0,0,0,0,0,1,0,0,0,0,0,0,16,22,0,0,0,0,0,18],[15,26,0,0,0,0,26,26,0,0,0,0,0,0,15,15,0,0,0,0,15,26,0,0,0,0,0,0,7,30,0,0,0,0,23,34] >;

C2×D4.D5 in GAP, Magma, Sage, TeX

C_2\times D_4.D_5 % in TeX

G:=Group("C2xD4.D5"); // GroupNames label
G:=SmallGroup(160,154); // by ID
G=gap.SmallGroup(160,154); # by ID
G:=PCGroup([6,-2,-2,-2,-2,-2,-5,96,218,579,159,69,4613]); // Polycyclic
G:=Group<a,b,c,d,e|a^2=b^4=c^2=d^5=1,e^2=b^2,a*b=b*a,a*c=c*a,a*d=d*a,a*e=e*a,c*b*c=e*b*e^-1=b^-1,b*d=d*b,c*d=d*c,e*c*e^-1=b*c,e*d*e^-1=d^-1>; // generators/relations
2021-01-27 12:33:43
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 1, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.99974125623703, "perplexity": 2078.3742941247406}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2021-04/segments/1610704824728.92/warc/CC-MAIN-20210127121330-20210127151330-00769.warc.gz"}
https://community.babycentre.co.uk/post/a26059223/-
# :( Had scan today as I've had bleeding, I've lost the baby :(

• So sorry to hear that xxxx
• Aw hugs (((()))) been there twice and it's awful xx
• Sorry you're going through this, my prayers are with you xx
• I'm so sorry xx
• So sorry hun. Thinking of you xx
• My minds so blank this would of been my third :( xxx
• So sorry :(
• Oh no sorry this has happened if it's ur 3rd mc I hope they get u a referral for recurrent mc clinic so u can get the help/treatment u might need xx
• No would of been my third child, this is my first mc xxxx
• Oh sorry I had my first mc in feb it's not nice but does get easier look after urself and rest xx
2021-06-22 10:20:50
{"extraction_info": {"found_math": false, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.9359859824180603, "perplexity": 13164.5685097704}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2021-25/segments/1623488517048.78/warc/CC-MAIN-20210622093910-20210622123910-00338.warc.gz"}
http://math.stackexchange.com/questions/34888/classification-of-general-fibre-bundles
Classification of general fibre bundles

For principal $G$-bundles with $G$ a Lie group there exists a principal $G$-bundle $EG \to BG$ such that we have a bijection $$[X,BG] \leftrightarrow \text{(principal G-bundles over X)}$$ $$f \mapsto f^* EG$$ where $[X,BG]$ is the set of homotopy classes of maps from $X$ to $BG$. As a result of this, homotopic maps induce the same pullback maps of bundles. My question is the following: for what class of spaces $F$ does there exist $F \to EF \to BF$ giving a correspondence as above? I am also interested in knowing for what type of $F$ homotopic maps induce the same pullback. Let's also assume all spaces are (countable) CW complexes.

- My understanding is that homotopic maps induce the same pullback for any fiber bundles at all. I don't have a clue as to the first question though. – Jason DeVito Apr 24 '11 at 17:28

- I'm not entirely sure what the exact question is. My guess is this: we know for principal G-bundles that homotopic maps induce isomorphisms on the pullback bundles; to what extent can we remove the "principal G-bundle" aspect and just replace it with an arbitrary fiber bundle? I'm pretty sure this is always true (for reasonable X). I've seen a proof for vector bundles on manifolds, but I don't think it used the "vector" part or the "manifold" part. – Matt Apr 24 '11 at 17:31

- @Matt: in the rank $k$ vector bundle case the base space is the Grassmannian of $k$-planes in $\mathbb R^\infty$. Does this really generalize in the proof you saw to give the base space when the fiber is arbitrary? – Eric O. Korman Apr 24 '11 at 18:18

- I just looked it up, and the proof only requires that $A$ be paracompact for homotopic maps $f, g : A\to B$ to pull a bundle back to isomorphic bundles. The fiber bundle over $B$ that you are pulling back can be anything. – Matt Apr 24 '11 at 20:57

Pullback gives the bijection between $[X,BG]$ and isomorphism classes of bundles on X with structure group G. In particular, all bundles on X with fiber F are the same thing as [X,BAut(F)] (where Aut is the group of self-homeomorphisms of F).

I should have mentioned in my question that I know what I said holds true for $G$ a Lie group. But does this still hold if $G$ is any topological group? Also, if $F$ is a CW complex, is $Aut(F)$ also? – Eric O. Korman Apr 24 '11 at 18:17

- @Eric: As for $Aut(F)$, I think these are pretty poorly understood. Of course by saying "automorphism" we're not really saying what we mean -- homeomorphism, or diffeomorphism, or biholomorphism? Anyways, I don't think it's at all obvious that the group of homeomorphisms of a CW complex should be another CW complex. – Aaron Mazel-Gee Apr 25 '11 at 5:57

This seems to be the only answer I ever give these days... Peter May wrote a memoir called Classifying Spaces and Fibrations that might be what you are looking for. He writes down spaces that play the role of $BG$ but for "arbitrary" fibrations! ("Arbitrary" is in quotes because there are topological restrictions, which I think you will be fine accepting; I believe that if the homotopy fiber has the homotopy type of a non-degenerately based CW complex, the results go through.) You don't need local triviality or anything.
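For reference, the correspondence stated in the comments above can be written out in the notation of the question (a restatement, assuming $F$ is such that the classifying space $B\mathrm{Aut}(F)$ exists):

$$[X, B\mathrm{Aut}(F)] \;\longleftrightarrow\; \{\text{fibre bundles over } X \text{ with fibre } F\}/\text{iso}$$

where $\mathrm{Aut}(F)$ denotes the group of self-homeomorphisms of $F$.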
2013-06-20 05:05:28
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 1, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.8858569264411926, "perplexity": 195.5792697912242}, "config": {"markdown_headings": false, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 20, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2013-20/segments/1368710299158/warc/CC-MAIN-20130516131819-00082-ip-10-60-113-184.ec2.internal.warc.gz"}
http://www.statemaster.com/encyclopedia/Light
Light

A beam of white light (entering upwards from the right) is dispersed into its constituent colors by its passage through a prism. The fainter beam of white light exiting to the upper right has been reflected (without dispersion) off the first surface of the prism.

Light is electromagnetic radiation of a wavelength (~400–700 nm) that is visible to the human eye (visible light). In a scientific context, the word light is sometimes used to refer to the entire electromagnetic spectrum.[1] Light is composed of an elementary particle called a photon.

Three primary properties of light are intensity (brightness), frequency (or wavelength), and polarization. Light can exhibit properties of both waves and particles; this property is referred to as wave–particle duality. The study of light, known as optics, is an important research area in modern physics.
## Speed of light

Main article: Speed of light

A line showing the speed of light on a scale model of Earth and the Moon; light covers the distance in about 1.2 seconds.

Different physicists have attempted to measure the speed of light throughout history. Galileo attempted to measure the speed of light in the seventeenth century. A good early experiment to measure the speed of light was conducted by Ole Rømer, a Danish physicist, in 1676. Using a telescope, Rømer observed the motions of Jupiter and one of its moons, Io. Noting discrepancies in the apparent period of Io's orbit, he calculated that light takes about 18 minutes to traverse the diameter of Earth's orbit. Unfortunately, that diameter was not known at the time; had it been, Rømer would have arrived at a speed of 227,000,000 m/s.

Another, more accurate, measurement of the speed of light was performed in Europe by Hippolyte Fizeau in 1849. Fizeau directed a beam of light at a mirror several kilometers away. A rotating cog wheel was placed in the path of the light beam as it traveled from the source, to the mirror and then returned to its origin. Fizeau found that at a certain rate of rotation, the beam would pass through one gap in the wheel on the way out and the next gap on the way back. Knowing the distance to the mirror, the number of teeth on the wheel, and the rate of rotation, Fizeau was able to calculate the speed of light as 313,000,000 m/s.

## Refraction

Main article: Refraction

Light in a vacuum propagates at a maximum finite speed, defined above and denoted by the symbol c. While passing through any other transparent medium, the speed of light slows to some fraction of c. The reduction of the speed of light traveling in a transparent medium is indicated by the refractive index, n, which is defined as:

$n = \frac{c}{v}$

where v denotes the speed at which light travels in the transparent medium. Note that n = 1 in a vacuum and n > 1 in a transparent medium.
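As a worked illustration of this definition (an added example; the indices used are the commonly quoted approximate values for water and glass):

    c = 299_792_458  # speed of light in vacuum, in m/s

    def speed_in_medium(n):
        """Return v from the definition n = c / v."""
        return c / n

    print(speed_in_medium(1.00))  # vacuum: 299,792,458 m/s
    print(speed_in_medium(1.33))  # water: roughly 2.25e8 m/s
    print(speed_in_medium(1.50))  # typical glass: roughly 2.0e8 m/s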
When a beam of light crosses the boundary between a vacuum and another medium, or between two different mediums, the wavelength of the light changes, but the frequency remains constant. If the beam of light is not orthogonal to the boundary, the change in wavelength results in a change in the direction of the beam. This change of direction is known as refraction.

The refraction quality of lenses is frequently used to manipulate light in order to change the apparent size of images. Magnifying glasses, spectacles, contact lenses, microscopes and refracting telescopes are all examples of this manipulation.

## Optics

Main article: Optics

The study of light and the interaction of light and matter is termed optics. The observation and study of optical phenomena such as rainbows and the Aurora Borealis offer many clues as to the nature of light as well as much enjoyment.

## Light sources

Mist illuminated by sunlight. A cloud illuminated by sunlight.

Atoms emit and absorb light at characteristic energies. This produces "emission lines" in the spectrum of each atom. Emission can be spontaneous, as in light-emitting diodes, gas discharge lamps (such as neon lamps and neon signs, mercury-vapor lamps, etc.), and flames (light from the hot gas itself; so, for example, sodium in a gas flame emits characteristic yellow light). Emission can also be stimulated, as in a laser or a microwave maser.
Certain chemicals produce visible radiation by chemoluminescence. In living things, this process is called bioluminescence. For example, fireflies produce light by this means, and boats moving through water can disturb plankton which produce a glowing wake.

Certain substances produce light when they are illuminated by more energetic radiation, a process known as fluorescence. This is used in fluorescent lights. Some substances emit light slowly after excitation by more energetic radiation. This is known as phosphorescence.

Phosphorescent materials can also be excited by bombarding them with subatomic particles. Cathodoluminescence is one example of this. This mechanism is used in cathode ray tube televisions.

Certain other mechanisms can produce light, including scintillation, electroluminescence, sonoluminescence, triboluminescence, and Cherenkov radiation.

When the concept of light is intended to include very-high-energy photons (gamma rays), additional generation mechanisms include radioactive decay and particle–antiparticle annihilation.
## Theories about light

### Indian theories

In ancient India, the philosophical schools of Samkhya and Vaisheshika, from around the 6th-5th century BC, developed theories on light. According to the Samkhya school, light is one of the five fundamental "subtle" elements (tanmatra) out of which emerge the gross elements. The atomicity of these elements is not specifically mentioned, and it appears that they were actually taken to be continuous.

On the other hand, the Vaisheshika school gives an atomic theory of the physical world on the non-atomic ground of ether, space and time (see Indian atomism). The basic atoms are those of earth (prthivi), water (apas), fire (tejas), and air (vayu), which should not be confused with the ordinary meaning of these terms. These atoms are taken to form binary molecules that combine further to form larger molecules. Motion is defined in terms of the movement of the physical atoms, and it appears that it is taken to be non-instantaneous. Light rays are taken to be a high-velocity stream of tejas (fire) atoms. The particles of light can exhibit different characteristics depending on the speed and the arrangements of the tejas atoms. Around the first century BC, the Vishnu Purana correctly refers to sunlight as "the seven rays of the sun".
The Indian Buddhists, such as Dignāga in the 5th century and Dharmakirti in the 7th century, developed a type of atomism that is a philosophy about reality being composed of atomic entities that are momentary flashes of light or energy. They viewed light as being an atomic entity equivalent to energy, similar to the modern concept of photons, though they also viewed all matter as being composed of these light/energy particles.

### Greek and Hellenistic theories

In the fifth century BC, Empedocles postulated that everything was composed of four elements: fire, air, earth and water. He believed that Aphrodite made the human eye out of the four elements and that she lit the fire in the eye which shone out from the eye, making sight possible. If this were true, then one could see during the night just as well as during the day, so Empedocles postulated an interaction between rays from the eyes and rays from a source such as the sun.

In about 300 BC, Euclid wrote Optica, in which he studied the properties of light. Euclid postulated that light travelled in straight lines, and he described the laws of reflection and studied them mathematically. He questioned the idea that sight is the result of a beam from the eye, asking how one sees the stars immediately upon closing one's eyes and then opening them at night. Of course, if the beam from the eye travels infinitely fast this is not a problem.

In 55 BC, Lucretius, a Roman who carried on the ideas of earlier Greek atomists, wrote:
"The light and heat of the sun; these are composed of minute atoms which, when they are shoved off, lose no time in shooting right across the interspace of air in the direction imparted by the shove." - On the nature of the Universe Despite being similar to later particle theories, Lucretius's views were not generally accepted and light was still theorized as emanating from the eye. Ptolemy (c. 2nd century) wrote about the refraction of light, and developed a theory of vision that objects are seen by rays of light emanating from the eyes. This article is about the geographer, mathematician and astronomer Ptolemy. ... The 2nd century is the period from 101 - 200 in accordance with the Julian calendar in the Christian Era. ... For the property of metals, see refraction (metallurgy). ... ### Optical theory Main article: Book of Optics He also carried out the first experiments on the dispersion of light into its constituent colors. His major work Kitab al-Manazir was translated into Latin in the Middle Ages, as well his book dealing with the colors of sunset. He dealt at length with the theory of various physical phenomena like shadows, eclipses, the rainbow. He also attempted to explain binocular vision, and gave a correct explanation of the apparent increase in size of the sun and the moon when near the horizon. Because of his extensive research on optics, Al-Haytham is considered the father of modern optics. For other uses, see Latin (disambiguation). ... The Middle Ages formed the middle period in a traditional schematic division of European history into three ages: the classical civilization of Antiquity, the Middle Ages, and modern times, beginning with the Renaissance. ... For the book by Sir Isaac Newton, see Opticks. ... Al-Haytham also correctly argued that we see objects because the sun's rays of light, which he believed to be streams of tiny particles travelling in straight lines, are reflected from objects into our eyes. He understood that light must travel at a large but finite velocity, and that refraction is caused by the velocity being different in different substances. He also studied spherical and parabolic mirrors, and understood how refraction by a lens will allow images to be focused and magnification to take place. He understood mathematically why a spherical mirror produces aberration. ### The 'plenum' René Descartes (1596-1650) held that light was a disturbance of the plenum, the continuous substance of which the universe was composed. In 1637 he published a theory of the refraction of light that assumed, incorrectly, that light travelled faster in a denser medium than in a less dense medium. Descartes arrived at this conclusion by analogy with the behaviour of sound waves. Although Descartes was incorrect about the relative speeds, he was correct in assuming that light behaved like a wave and in concluding that refraction could be explained by the speed of light in different media. As a result, Descartes' theory is often regarded as the forerunner of the wave theory of light. Descartes redirects here. ... For the property of metals, see refraction (metallurgy). ... This article is about audible acoustic waves. ... ### Particle theory Pierre Gassendi (1592-1655), an atomist, proposed a particle theory of light which was published posthumously in the 1660s. Isaac Newton studied Gassendi's work at an early age, and preferred his view to Descartes' theory of the plenum. 
Newton stated in his Hypothesis of Light of 1675 that light was composed of corpuscles (particles of matter) which were emitted in all directions from a source. One of Newton's arguments against the wave nature of light was that waves were known to bend around obstacles, while light travelled only in straight lines. He did, however, explain the phenomenon of the diffraction of light (which had been observed by Francesco Grimaldi) by allowing that a light particle could create a localised wave in the aether.

Newton's theory could be used to predict the reflection of light, but could only explain refraction by incorrectly assuming that light accelerated upon entering a denser medium because the gravitational pull was greater. Newton published the final version of his theory in his Opticks of 1704. His reputation helped the particle theory of light to hold sway during the 18th century.

### Wave theory

In the 1660s, Robert Hooke published a wave theory of light. Christiaan Huygens worked out his own wave theory of light in 1678, and published it in his Treatise on Light in 1690. He proposed that light was emitted in all directions as a series of waves in a medium called the luminiferous ether. As waves are not affected by gravity, it was assumed that they slowed down upon entering a denser medium.
[Figure: Thomas Young's sketch of the two-slit experiment, showing the diffraction of light. Young's experiments supported the theory that light consists of waves.]

Another supporter of the wave theory was Leonhard Euler. He argued in Nova theoria lucis et colorum (1746) that diffraction could more easily be explained by a wave theory.

Later, Augustin-Jean Fresnel independently worked out his own wave theory of light, and presented it to the Académie des Sciences in 1817. Siméon Denis Poisson added to Fresnel's mathematical work to produce a convincing argument in favour of the wave theory, helping to overturn Newton's corpuscular theory.

The weakness of the wave theory was that light waves, like sound waves, would need a medium for transmission. A hypothetical substance called the luminiferous aether was proposed, but its existence was cast into strong doubt in the late nineteenth century by the Michelson-Morley experiment.
Newton's corpuscular theory implied that light would travel faster in a denser medium, while the wave theory of Huygens and others implied the opposite. At that time, the speed of light could not be measured accurately enough to decide which theory was correct. The first to make a sufficiently accurate measurement was Léon Foucault, in 1850. His result supported the wave theory, and the classical particle theory was finally abandoned.

### Electromagnetic theory

[Figure: A linearly-polarized light wave frozen in time, showing the two oscillating components of light: an electric field and a magnetic field perpendicular to each other and to the direction of motion (a transverse wave).]

In 1845, Michael Faraday discovered that the angle of polarization of a beam of light as it passed through a polarizing material could be altered by a magnetic field, an effect now known as Faraday rotation. This was the first evidence that light was related to electromagnetism. Faraday proposed in 1847 that light was a high-frequency electromagnetic vibration, which could propagate even in the absence of a medium such as the ether.

Faraday's work inspired James Clerk Maxwell to study electromagnetic radiation and light. Maxwell discovered that self-propagating electromagnetic waves would travel through space at a constant speed, which happened to be equal to the previously measured speed of light.
From this, Maxwell concluded that light was a form of electromagnetic radiation: he first stated this result in 1862 in On Physical Lines of Force. In 1873, he published A Treatise on Electricity and Magnetism, which contained a full mathematical description of the behaviour of electric and magnetic fields, still known as Maxwell's equations. Soon after, Heinrich Hertz confirmed Maxwell's theory experimentally by generating and detecting radio waves in the laboratory, and demonstrating that these waves behaved exactly like visible light, exhibiting properties such as reflection, refraction, diffraction, and interference. Maxwell's theory and Hertz's experiments led directly to the development of modern radio, radar, television, electromagnetic imaging, and wireless communications.

### The special theory of relativity

The wave theory was wildly successful in explaining nearly all optical and electromagnetic phenomena, and was a great triumph of nineteenth century physics. By the late nineteenth century, however, a handful of experimental anomalies remained that could not be explained by, or were in direct conflict with, the wave theory. One of these anomalies involved a controversy over the speed of light. The constant speed of light predicted by Maxwell's equations and confirmed by the Michelson-Morley experiment contradicted the mechanical laws of motion that had been unchallenged since the time of Galileo, which stated that all speeds were relative to the speed of the observer. In 1905, Albert Einstein resolved this paradox by revising the Galilean model of space and time to account for the constancy of the speed of light. Einstein formulated his ideas in his special theory of relativity, which radically altered humankind's understanding of space and time. Einstein also demonstrated a previously unknown fundamental equivalence between energy and mass with his famous equation

$$E = mc^2$$

where E is energy, m is mass, and c is the speed of light.
### Particle theory revisited

Another experimental anomaly was the photoelectric effect, by which light striking a metal surface ejected electrons from the surface, causing an electric current to flow across an applied voltage. Experimental measurements demonstrated that the energy of individual ejected electrons was proportional to the frequency, rather than the intensity, of the light. Furthermore, below a certain minimum frequency, which depended on the particular metal, no current would flow regardless of the intensity. These observations clearly contradicted the wave theory, and for years physicists tried in vain to find an explanation. In 1905, Einstein solved this puzzle as well, this time by resurrecting the particle theory of light to explain the observed effect. Because of the preponderance of evidence in favor of the wave theory, however, Einstein's ideas were met initially by great skepticism among established physicists. But eventually Einstein's explanation of the photoelectric effect would triumph, and it ultimately formed the basis for wave–particle duality and much of quantum mechanics.

### Quantum theory

A third anomaly that arose in the late 19th century involved a contradiction between the wave theory of light and measurements of the electromagnetic spectrum emitted by thermal radiators, or so-called black bodies. Physicists struggled with this problem, which later became known as the ultraviolet catastrophe, unsuccessfully for many years. In 1900, Max Planck developed a new theory of black-body radiation that explained the observed spectrum correctly. Planck's theory was based on the idea that black bodies emit light (and other electromagnetic radiation) only as discrete bundles or packets of energy. These packets were called quanta, and the particle of light was given the name photon, to correspond with other particles being described around this time, such as the electron and proton. A photon has an energy, E, proportional to its frequency, f, by
$$E = hf = \frac{hc}{\lambda}$$

where h is Planck's constant, λ is the wavelength and c is the speed of light. (For green light with λ ≈ 550 nm, for example, this gives E ≈ 3.6 × 10⁻¹⁹ J, or about 2.3 eV.) Likewise, the momentum p of a photon is also proportional to its frequency and inversely proportional to its wavelength:

$$p = \frac{E}{c} = \frac{hf}{c} = \frac{h}{\lambda}.$$

As it originally stood, this theory did not explain the simultaneous wave-like and particle-like natures of light, though Planck would later work on theories that did. In 1918, Planck received the Nobel Prize in Physics for his part in the founding of quantum theory.

### Wave–particle duality

The modern theory that explains the nature of light includes the notion of wave–particle duality, described by Albert Einstein in the early 1900s, based on his study of the photoelectric effect and Planck's results. Einstein asserted that the energy of a photon is proportional to its frequency. More generally, the theory states that everything has both a particle nature and a wave nature, and various experiments can be done to bring out one or the other. The particle nature is more easily discerned if an object has a large mass, so it took a bold proposition by Louis de Broglie in 1924 to realise that electrons also exhibit wave–particle duality. The wave nature of electrons was experimentally demonstrated by Davisson and Germer in 1927. Einstein received the Nobel Prize in 1921 for his work on the wave–particle duality of photons (especially explaining the photoelectric effect thereby), and de Broglie followed in 1929 for his extension to other particles.

### Quantum electrodynamics

The quantum mechanical theory of light and electromagnetic radiation continued to evolve through the 1920s and 1930s, and culminated with the development during the 1940s of the theory of quantum electrodynamics, or QED. This so-called quantum field theory is among the most comprehensive and experimentally successful theories ever formulated to explain a set of natural phenomena. QED was developed primarily by the physicists Richard Feynman, Freeman Dyson, Julian Schwinger, and Sin-Itiro Tomonaga.
Feynman, Schwinger, and Tomonaga shared the 1965 Nobel Prize in Physics for their contributions.

## Light pressure

Main article: Radiation pressure

Light pushes on objects in its way, just as the wind would do. This pressure is most easily explained in particle theory: photons hit and transfer their momentum. (For normally incident light that is completely absorbed, the pressure equals the intensity divided by the speed of light, $P = I/c$; a perfectly reflecting surface feels twice that.) Light pressure can cause asteroids to spin faster,[2] acting on their irregular shapes as on the vanes of a windmill. The possibility of making solar sails that would accelerate spaceships in space is also under investigation.[citation needed]

Although the motion of the Crookes radiometer was originally attributed to light pressure, this interpretation is incorrect; the characteristic Crookes rotation is the result of a partial vacuum.[3] This should not be confused with the Nichols radiometer, in which the motion is directly caused by light pressure.[4]

## Spirituality

The sensory perception of light plays a central role in spirituality (vision, enlightenment, darshan, Tabor Light), and the presence of light as opposed to its absence (darkness) is a universal metaphor of good and evil, knowledge and ignorance, and similar concepts.
## References

1. ^ What Is a Light Source?
2. ^ Kathy A. (02.05.2004). "Asteroids Get Spun By the Sun". Discover Magazine.
3. ^ Lebedev, P. (1901). "Untersuchungen über die Druckkräfte des Lichtes". Ann. Phys. 6, 433.
4. ^ Nichols, E.F. & Hull, G.F. (1903). "The Pressure due to Radiation". The Astrophysical Journal, Vol. 17, No. 5, pp. 315-351.
2019-07-22 18:10:17
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 4, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.5085421204566956, "perplexity": 1544.0823891769078}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.3, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2019-30/segments/1563195528208.76/warc/CC-MAIN-20190722180254-20190722202254-00424.warc.gz"}
https://math.stackexchange.com/questions/822572/integrate-this-how-int-frac-yx-1-yx2yx3-dx
# Integrate this how? $\int \frac{y'(x)\,(1-y(x)^2)}{y(x)^3}\, dx$

I'm trying to solve a differential equation but I get to the point in which I have to solve this: $$\int \frac{y'(x)\,(1-y(x)^2)}{y(x)^3}\, dx$$ I don't understand how to do it. I think it's related to some property I don't know, but the $y'(x)$ is throwing me off. I tried looking at Wolfram's step by step but I don't understand it. Thanks for any help. Context: the original equation to solve is $(v^3)\, du + (u^3-uv^2)\, dv = 0$. I used a variable change: $y = \dfrac uv$. I'm not sure if it's correct or anything though.

• What is the differential equation you're trying to solve? How does this integral come up? If you're using separation of variables there shouldn't be a $y'$ in any integral. – DanZimm Jun 6 '14 at 6:26
• Hint: use the chain rule. – David H Jun 6 '14 at 6:27
• Also welcome to MSE! – DanZimm Jun 6 '14 at 6:27
• @DavidH Oh, I always forget the chain rule for integrals. Thanks! – Paula Jun 6 '14 at 6:32
• And @DanZimm I used first a change of variables, edited the question for context if you're curious :). And thanks for the welcome! – Paula Jun 6 '14 at 6:33

Using the obvious change of variables $y=y(x) \implies dy=y'(x)\,dx$, your integral becomes: $$\int \frac{y'(x)\,(1-y(x)^2)}{y(x)^3}\, dx=\int \frac{1-y^2}{y^3}\, dy\\ =\int y^{-3}\,dy-\int y^{-1}\,dy\\ =-\frac{1}{2} y^{-2}-\log{y}+\text{constant}$$

The differential equation simplifies a lot if you use $$u(v)=\frac{v}{y(v)}$$ (which is basically the reciprocal of what you chose). After replacement and simplifications (there are several), you should arrive at $$1-v\, y(v)\, \frac{dy(v)}{dv}=0$$ for which the solution is simply $$y(v)= \sqrt{c_1+2\log (v)}$$ from which $$u(v)=\frac{v}{\sqrt{c_1+2 \log (v)}}$$
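As a quick check on the antiderivative (an added verification, not part of the original thread), differentiating with respect to $y$ recovers the integrand:

$$\frac{d}{dy}\left(-\frac{1}{2}y^{-2}-\log y\right) = y^{-3}-y^{-1} = \frac{1-y^2}{y^3}.$$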
2019-07-19 08:41:40
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 1, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.9001393914222717, "perplexity": 353.9057574165191}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2019-30/segments/1563195526153.35/warc/CC-MAIN-20190719074137-20190719100137-00021.warc.gz"}
https://www.cnblogs.com/ioriwellings/p/6222208.html
# Amd64 and Va_arg

OCT 3RD, 2010

A while back, I was poking around LLVM bugs, and discovered, to my surprise, that LLVM doesn't support the va_arg intrinsic, used by functions to accept multiple arguments, at all on amd64. It turns out that clang and llvm-gcc, the compilers that backend to LLVM, have their own implementations in the frontend, so this isn't as big a deal as it might sound, but it was still a surprise to me.

Figuring that this might just be something no one got around to, and couldn't actually be that hard, I pulled out my copy of the amd64 ABI specification, figuring that maybe I could throw together a patch and fix this issue. Maybe half an hour of reading later, I stopped in terror and gave up, repenting of my foolish ways to go work on something else. va_arg on amd64 is a hairy, hairy beast, and probably not something I was going to hack together in an evening. And so instead I decided to blog about it.

## The problem: Argument passing on amd64

On i386, because of the dearth of general-purpose registers, the calling convention passes all arguments on the stack. This makes the va_arg implementation easy – a va_list is simply a pointer into the stack, and va_arg just adds the size of the type to be retrieved to the va_list, and returns the old value. In fact, the i386 ABI reference simply specifies va_arg in terms of a single line of code:

```c
#define va_arg(list, mode) ((mode *)(list = (char *)list + sizeof(mode)))[-1]
```

On amd64, the problem is much more complicated. To start, amd64 specifies that up to 6 integer arguments and up to 8 floating-point arguments are passed to functions in registers, to take advantage of amd64's larger number of registers. So, for a start, va_arg will have to deal with the fact that some arguments may have been passed in registers, and some on the stack. (One could imagine simplifying the problem by stipulating a different calling convention for variadic functions, but unfortunately, for historical reasons and otherwise, C requires that code be able to call functions even if their prototype is not visible, which means the compiler doesn't necessarily know if it's calling a variadic function at any given call site. [edited to add: caf points out in the comments that C99 actually explicitly does not require this property. But I speculate that the ABI designers wanted to preserve this property from i386 because it has historically worked, and so existing code depended on it]).

That's not all, however. Not only can integer arguments be passed by registers, but small structs (16 bytes or fewer) can also be passed in registers. A sufficiently small struct, for the purposes of the calling convention, is essentially broken up into its component members, which are passed as though they were separate arguments – unless only some of them would fit into registers, in which case the whole struct is passed on the stack. So va_arg, given a struct as an argument, has to be able to figure out whether it was passed in registers or on the stack, and possibly even re-assemble it into temporary space.

## The implementation

Given all those constraints, the required implementation is fairly straightforward, but incredibly complex compared to any other platform I know of. To start, any function that is known to use va_start is required to, at the start of the function, save all registers that may have been used to pass arguments onto the stack, into the "register save area", for future access by va_start and va_arg.
This is an obvious step, and I believe pretty standard on any platform with a register calling convention. The registers are saved as integer registers followed by floating point registers. As an optimization, during a function call, %rax is required to hold the number of SSE registers used to hold arguments, to allow a varargs caller to avoid touching the FPU at all if there are no floating point arguments.

va_list, instead of being a pointer, is a structure that keeps track of four different things:

```c
typedef struct {
    unsigned int gp_offset;
    unsigned int fp_offset;
    void *overflow_arg_area;
    void *reg_save_area;
} va_list[1];
```

reg_save_area points at the base of the register save area initialized at the start of the function. fp_offset and gp_offset are offsets into that register save area, indicating the next unused floating point and general-purpose register, respectively. Finally, overflow_arg_area points at the next stack-passed argument to the function, for arguments that didn't fit into registers.

Here's an ASCII art diagram of the stack frame during the execution of a varargs function, after the register save area has been established. Note that the spec allows functions to put the register save area anywhere in its frame it wants, so I've shown potential storage both above and below it.

```
| ...            |  [high addresses]
+----------------+
| argument       |
| passed         |
| on stack (2)   |
+----------------+ <---- overflow_arg_area
| argument       |
| passed         |
| on stack (1)   |
+----------------+
| ...            |  (possible local storage for func)
+----------------+
| %xmm15         | \
+----------------+ |
| %xmm14         | |
+----------------+ |
| ...            | |  register save area
+----------------+ |  (fp_offset is the offset from
| %xmm0          | |   reg_save_area to the next
+----------------+ |   unused SSE slot; gp_offset
| %r9            | |   likewise for the next unused
+----------------+ |   general-purpose slot)
| %r8            | |
+----------------+ |
| ...            | |
+----------------+ |
| %rsi           | |
+----------------+ |
| %rdi           | /
+----------------+ <---- reg_save_area
| ...            |  (potentially more storage)
+----------------+ <---- %rsp
```

Because va_arg must determine whether the requested type was passed in registers, it needs compiler support, and can't be implemented as a simple macro like on i386. The amd64 ABI reference specifies va_arg using a list of eleven different steps that the macro must perform. I'll try to summarize them here.

First off, va_arg determines whether the requested type could be passed in registers. If not, va_arg behaves much like it does on i386, using the overflow_arg_area member of the va_list (plus some complexity to deal with alignment values).

Next, assuming the argument can be passed in registers, va_arg determines how many floating-point and general-purpose registers would be used to pass the requested type. It compares those values with the gp_offset and fp_offset fields in the va_list. If the additional registers would cause either value to overflow the number of registers used for parameter-passing for that type, then the argument was passed on the stack, and va_arg bails out and uses overflow_arg_area.

If we've made it this far, the argument was passed in registers. va_arg fetches the argument using reg_save_area and the appropriate offsets, and then updates gp_offset and fp_offset as appropriate. Note that if the argument was passed in a mix of floating-point and general-purpose registers, or requires a large alignment, this means that va_arg must copy it out of the register save area onto temporary space in order to assemble the value.
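To make those steps concrete, here is a rough C rendering of what a compiler might conceptually expand va_arg(ap, int) into on amd64. This is only a sketch of the simple integer case (the helper name va_arg_int is mine; real compilers emit this logic inline, and must additionally handle SSE classes, alignment, and the multi-register aggregates described above):

```c
int va_arg_int(va_list ap)  /* va_list is an array type, so `ap`
                               decays to a pointer to the struct */
{
    int result;

    /* 6 general-purpose registers * 8 bytes each = 48-byte GP region */
    if (ap->gp_offset < 48) {
        /* passed in a register: fetch it from the register save area */
        result = *(int *)((char *)ap->reg_save_area + ap->gp_offset);
        ap->gp_offset += 8;
    } else {
        /* passed on the stack: fetch it from the overflow area */
        result = *(int *)ap->overflow_arg_area;
        ap->overflow_arg_area = (char *)ap->overflow_arg_area + 8;
    }
    return result;
}
```

The same shape of check, using fp_offset and 16-byte save slots, would handle a double; a small struct classified into both register files needs both checks and the copy-out described above.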
So, in the worst case, va_arg on a type that embeds both a floating-point and an integer type must do two comparisons, a conditional branch, and then update two fields in the va_list and copy multiple values out of the register save area into a temporary object to return. That's quite a lot more work than the i386 version does. Note that I don't mean to suggest this is a performance concern – I don't have any benchmarks to back this up, but I would be shocked if this is measurable in any reasonable code. But I was surprised by how complex this operation is.
2018-12-14 16:11:47
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 1, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.26033681631088257, "perplexity": 1598.236442301284}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 20, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2018-51/segments/1544376825916.52/warc/CC-MAIN-20181214140721-20181214162221-00496.warc.gz"}
https://www.vedantu.com/question-answer/given-sbf5-reacts-with-xef4-and-xef6-to-form-class-11-chemistry-cbse-5f610c017d7dc34d3b8ddce2
# Given, $Sb{{F}_{5}}$ reacts with $Xe{{F}_{4}}$ and $Xe{{F}_{6}}$ to form ionic compounds $[Sb{{F}_{6}}^{-}][Xe{{F}_{3}}^{+}]$ and $[Sb{{F}_{6}}^{-}][Xe{{F}_{5}}^{+}]$ respectively. The geometry of the $Xe{{F}_{3}}^{+}$ ion and the $Xe{{F}_{5}}^{+}$ ion respectively is:

A. square pyramidal, T-shaped
B. bent T-shaped, square pyramidal
C. see-saw, square pyramidal
D. square pyramidal, see-saw

Hint: Take note of the oxidation state of the $Xe$ atom and then consider the electronic configuration. From this, calculate the number of lone pairs and bond pairs. This will give you an idea about the hybridization as well as the geometry of the molecule.

Complete solution: First, let us consider the electronic configuration of $Xe$ in its ground state. The atomic number of xenon is 54, and its electronic configuration is $[Kr]4{{d}^{10}}5{{s}^{2}}5{{p}^{6}}$. Usually it is inert, but it can form bonds with different atoms by promoting its electrons to the $5d$ orbital and hybridizing. Both the ions that need to be considered here have a +1 charge; since fluorine cannot have lost one of its electrons, let us assume that it is xenon that has lost one. So the configuration will be $[Kr]4{{d}^{10}}5{{s}^{2}}5{{p}^{5}}$. While working out the configurations, let us ignore the $4d$ orbital and only consider the $5s$, $5p$, and $5d$ orbitals for the time being.

- For $Xe{{F}_{3}}^{+}$: To form bonds with 3 fluorine atoms, xenon requires 2 more unpaired electrons, so it promotes one of the electrons from the $5p$ orbital to the $5d$ orbital and carries out hybridization to form 5 $s{{p}^{3}}d$ orbitals. The three unpaired electrons then form bonds with the 3 fluorine atoms, and the electron geometry of the ion is trigonal bipyramidal. There are 2 lone pairs and 3 bond pairs present; the lone pairs always occupy the equatorial positions in the trigonal bipyramidal geometry, so the atoms form a bent T-shape.

- For $Xe{{F}_{5}}^{+}$: The formation of this ion is similar to that of the $Xe{{F}_{3}}^{+}$ ion. Since xenon has to form bonds with 5 fluorine atoms, it needs 5 unpaired electrons, so it promotes 2 electrons from the $5p$ orbital to the $5d$ orbital and carries out hybridization to form 6 $s{{p}^{3}}{{d}^{2}}$ orbitals. The unpaired electrons pair with the 5 fluorine atoms, and the electron geometry of the ion is octahedral. There are 5 bond pairs and 1 lone pair present, so the atoms form a square pyramid.

The geometries of the $Xe{{F}_{3}}^{+}$ ion and the $Xe{{F}_{5}}^{+}$ ion are therefore bent T-shaped and square pyramidal, respectively. Hence, the correct answer is 'B. bent T-shaped, square pyramidal'.

Note: Always consider the charge that is present on the ions; if the charge is ignored, then it will not be possible to draw the hybridization diagram correctly. When xenon has to bond with an odd number of atoms, it always has to carry a charge; otherwise, one unpaired electron would remain and make the molecule unstable.
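A quicker cross-check (an alternative route to the same answer, not part of the original solution) is direct VSEPR electron counting. For $Xe{{F}_{3}}^{+}$: xenon contributes $8-1=7$ valence electrons; 3 of them go into the three $Xe-F$ bonds, leaving 4 electrons, i.e. 2 lone pairs, so the steric number is $3+2=5$ (trigonal bipyramidal arrangement, hence a bent T-shape). For $Xe{{F}_{5}}^{+}$: $8-1=7$ electrons; 5 go into bonds, leaving 2 electrons, i.e. 1 lone pair, so the steric number is $5+1=6$ (octahedral arrangement, hence a square pyramidal shape).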
2022-01-27 13:55:57
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.6316001415252686, "perplexity": 857.8290045232661}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2022-05/segments/1642320305266.34/warc/CC-MAIN-20220127133107-20220127163107-00398.warc.gz"}
https://www.gradesaver.com/textbooks/science/physics/CLONE-afaf42be-9820-4186-8d76-e738423175bc/chapter-3-exercises-and-problems-page-51/65
## Essential University Physics: Volume 1 (4th Edition) Clone

The water falls from rest through a height $h = 1.6-0.93 = 0.67\,m$, so $h=\frac{1}{2}gt^2$ gives the fall time $t=\sqrt{\frac{2h}{g}}$ $t=\sqrt{\frac{2(1.6-0.93)}{9.8}}=0.3698\,s$ Now we can find the horizontal speed of the water as $v=\frac{x}{t}$ We plug in the known values to obtain: $v=\frac{2.1}{0.3698}=5.7\,\frac{m}{s}$
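The two steps can also be combined into a single expression (added here as a check on the arithmetic): $$v = x\sqrt{\frac{g}{2h}} = 2.1\sqrt{\frac{9.8}{2(0.67)}} \approx 5.7\,\frac{m}{s}$$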
2019-10-15 02:06:40
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.8757482171058655, "perplexity": 276.47951355228065}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2019-43/segments/1570986655735.13/warc/CC-MAIN-20191015005905-20191015033405-00424.warc.gz"}
http://openstudy.com/updates/500a7292e4b0549a892ee9a4
## anonymous 4 years ago

This exercise is in a section about using trig substitution to integrate. I can use trig sub, but I'm just not seeing what initial substitution to use so I can apply trig sub. Can anyone give me a hint? I don't need (don't want) it solved to the end. At least not yet.

1. anonymous: See attached
2. anonymous: (attachment)
3. anonymous: The trick is, how do we get this integral into the form $\frac{dx}{\sqrt{u^2 +a^2}}$
4. anonymous: sorry, du on top
5. anonymous: Note: $16+x^6 = (4)^2+(x^3)^2$, or $(x^3)^2+(4)^2$.
6. anonymous: What will be the $a$? And what will be the $u$?
7. anonymous: Then, get the derivative of $u$.
8. anonymous: Thanks, that is just what I needed.
9. anonymous: most welcome :))
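For reference (an editorial completion, since the thread deliberately stops at the hint): once an integral is in the standard form, the trig substitution $u = a\tan\theta$ finishes it in general, $$\int\frac{du}{\sqrt{u^2+a^2}} = \int\sec\theta\,d\theta = \ln\left|\sec\theta+\tan\theta\right|+C = \ln\left|u+\sqrt{u^2+a^2}\right|+C'.$$ The hints above point to $u=x^3$ and $a=4$, with $du=3x^2\,dx$ supplying the remaining factor.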
2016-08-26 17:30:06
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.9377613663673401, "perplexity": 1823.986439922781}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2016-36/segments/1471982296020.34/warc/CC-MAIN-20160823195816-00124-ip-10-153-172-175.ec2.internal.warc.gz"}
https://www.sparrho.com/item/division-algebras-of-gelfand-kirillov-transcendence-degree-2/124ecb7/
# Division algebras of Gelfand-Kirillov transcendence degree 2

Research paper by Jason P. Bell

Indexed on: 10 Jul '09. Published on: 10 Jul '09. Published in: Israel Journal of Mathematics.

#### Abstract

Let A be a finitely generated K-algebra that is a domain of GK dimension less than 3, and let Q(A) denote the quotient division algebra of A. We show that if D is a division subalgebra of Q(A) of GK dimension at least 2, then Q(A) is finite dimensional as a left D-vector space. We use this to show that if A is a finitely generated domain of GK dimension less than 3 over an algebraically closed field K, then any division subalgebra D of Q(A) is either a finitely generated field extension of K of transcendence degree at most one, or Q(A) is finite dimensional as a left D-vector space.
2021-01-18 13:57:14
{"extraction_info": {"found_math": false, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.9232823848724365, "perplexity": 590.2793283140752}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2021-04/segments/1610703514796.13/warc/CC-MAIN-20210118123320-20210118153320-00146.warc.gz"}
https://chemistry.stackexchange.com/questions/9866/how-to-memorize-7-crystal-systems
How to memorize 7 crystal systems? Is there any trick to memorize the angles and sides of the 7 crystal systems of crystalline solids?

1. Cubic
2. Tetragonal
3. Orthorhombic
4. Monoclinic
5. Hexagonal
6. Rhombohedral or Trigonal
7. Triclinic

The first step is understanding what you have. The seven crystal systems are derived by decreasing symmetry (lifting degeneracies) of the cubic system, altering lengths and angles. Example: the cube's volume is $a^3$. The parallelepiped volume with all symmetries explicit is $V = abc\sqrt{1-\cos^2\alpha-\cos^2\beta-\cos^2\gamma+2\cos\alpha\cos\beta\cos\gamma}$ (see http://www.rockhounds.com/rockshop/xtal/index.shtml).

Crystallographers often translate the unit cell within the lattice to make its contents "nice." The rhombohedral system is particularly nasty for choosing the unit cell itself.

65 of the 230 space groups (the Sohncke groups) can contain chiral formula units. The racemic screw axes $2_1$, $4_2$, $6_3$ make that interesting. Within the 65, the 11 enantiomorphic pairs of space groups - themselves chiral without regard to unit cell contents - allow the same substance to crystallize in two mirror-image space groups.
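For quick memorization, the standard cell constraints can be tabulated (a reference summary added here, not part of the original answers); reading down the rows, each system simply relaxes constraints of the cube:

| System | Edge lengths | Angles |
|---|---|---|
| Cubic | $a=b=c$ | $\alpha=\beta=\gamma=90^\circ$ |
| Tetragonal | $a=b\neq c$ | $\alpha=\beta=\gamma=90^\circ$ |
| Orthorhombic | $a\neq b\neq c$ | $\alpha=\beta=\gamma=90^\circ$ |
| Monoclinic | $a\neq b\neq c$ | $\alpha=\gamma=90^\circ,\ \beta\neq 90^\circ$ |
| Hexagonal | $a=b\neq c$ | $\alpha=\beta=90^\circ,\ \gamma=120^\circ$ |
| Rhombohedral (Trigonal) | $a=b=c$ | $\alpha=\beta=\gamma\neq 90^\circ$ |
| Triclinic | $a\neq b\neq c$ | $\alpha\neq\beta\neq\gamma$ |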
http://docs.physiozoo.com/en/latest/mhrv/sections/mhrv.hrv.html
# mhrv.hrv

## hrv_time()

mhrv.hrv.hrv_time(nni, varargin)

Calculates time-domain HRV metrics from NN intervals.

Parameters

• nni – Vector of NN-interval durations (in seconds)
• varargin – Pass in name-value pairs to configure advanced options:
  • pnn_thresh_ms: Optional. Threshold NN interval time difference in milliseconds (for the pNNx HRV measure).
  • plot: true/false whether to generate a plot. Defaults to true if no output arguments were specified.

Returns

Table containing the following HRV metrics:

• AVNN: Average NN interval duration.
• SDNN: Standard deviation of NN interval durations.
• RMSSD: Square root of the mean of the squared differences between successive NN intervals.
• pNNx: The percentage of NN intervals which differ by at least x ms (default 50) from their preceding interval. The value of x in milliseconds can be set with the optional parameter pnn_thresh_ms.
• SEM: Standard error of the mean NN interval length.

## hrv_freq()

mhrv.hrv.hrv_freq(nni, varargin)

NN interval spectrum and frequency-domain HRV metrics.

This function estimates the PSD (power spectral density) of a given NN-interval sequence, and calculates the power in various frequency bands.

Parameters

• nni – RR/NN intervals, in seconds.
• varargin – Pass in name-value pairs to configure advanced options:
  • methods: A cell array of strings containing names of methods to use to estimate the spectrum. Supported methods are:
    • lomb: Lomb-Scargle periodogram.
    • ar: Yule-Walker autoregressive model. Data will be resampled. No windowing will be performed for this method.
    • welch: Welch's method (overlapping windows).
    • fft: Simple FFT-based periodogram, no overlap (also known as Bartlett's method).
    In all cases, a window will be applied to the samples according to the win_func parameter. Data will be resampled for all methods except lomb. Default value is {'lomb', 'ar', 'welch'}.
  • time_intervals: Specify the time interval vector tnn. If it is not specified it will be computed from the nni time series.
  • power_methods: The method(s) to use for calculating the power in each band. A cell array where each element can be any one of the methods given in methods. This also determines the spectrum that will be returned from this function (pxx). Default: first value in methods.
  • norm_method: A string, either total or lf_hf. If total, the power in each band will be normalized by the total power in the entire frequency spectrum. If lf_hf, then only for the LF and HF bands, the normalization will be performed by the (LF+HF) power. This is the standard method used in many papers to normalize these bands. In either case, VLF and user-defined custom bands are not affected by this parameter.
  • band_factor: A factor that will be applied to the frequency bands. Useful for shifting them linearly to adapt to non-human data. Default: 1.0 (no shift).
  • vlf_band: 2-element vector of frequencies in Hz defining the VLF band. Default: [0.003, 0.04].
  • lf_band: 2-element vector of frequencies in Hz defining the LF band. Default: [0.04, 0.15].
  • hf_band: 2-element vector of frequencies in Hz defining the HF band. Default: [0.15, 0.4].
  • extra_bands: A cell array of frequency pairs, for example {[f_start, f_end], ...}. Each pair defines a custom band for which the power and normalized power will be calculated.
  • window_minutes: Split intervals into windows of this length, calculate the spectrum in each window, and average them. A window function will also be applied to each window after breaking the intervals into windows. Set to [] to disable windowing. Default: 5 minutes.
  • detrend_order: Order of polynomial to fit to the data for detrending. Default: 1 (i.e. linear detrending).
  • ar_order: Order of the autoregressive model to use if the ar method is specified. Default: 24.
  • welch_overlap: Percentage of overlap between windows when using Welch's method. Default: 50 percent.
  • win_func: The window function to apply to each segment. Should be a function that accepts one parameter (length in samples) and returns a window of that length. Default: @hamming.
  • plot: true/false whether to generate plots. Defaults to true if no output arguments were specified.

Returns

• hrv_fd: Table containing the following HRV metrics:
  • TOTAL_POWER: Total power in all three bands combined.
  • VLF_POWER: Power in the VLF band.
  • LF_POWER: Power in the LF band.
  • HF_POWER: Power in the HF band.
  • VLF_NORM: 100 * ratio between VLF power and total power.
  • LF_NORM: 100 * ratio between LF power and either total power or the sum of LF and HF power (see norm_method).
  • HF_NORM: 100 * ratio between HF power and either total power or the sum of LF and HF power (see norm_method).
  • LF_TO_HF: Ratio between LF and HF power.
  • LF_PEAK: Frequency of the highest peak in the LF band.
  • HF_PEAK: Frequency of the highest peak in the HF band.
  • BETA: Slope of the log-log frequency plot in the VLF band.
  Note that each of the above metrics will be calculated for each value given in power_methods, and their names will be suffixed with the method name (e.g. LF_PEAK_LOMB).
• pxx: Power spectrum. Its type is determined by the first value in power_methods.
• f_axis: Frequencies, in Hz, at which pxx was calculated.

## hrv_nonlinear()

mhrv.hrv.hrv_nonlinear(nni, varargin)

Calculates non-linear HRV metrics based on Poincaré plots, detrended fluctuation analysis (DFA) [2] and Multiscale Entropy (MSE) [3].

Parameters

• nni – RR/NN intervals, in seconds.
• varargin – Pass in name-value pairs to configure advanced options:
  • mse_max_scale: Maximal scale value that the MSE will be calculated up to.
  • mse_metrics: Whether to output MSE at each scale as a separate metric.
  • sampen_r: r value used to calculate Sample Entropy.
  • sampen_m: m value used to calculate Sample Entropy.
  • plot: true/false whether to generate plots. Defaults to true if no output arguments were specified.

Returns

• hrv_nl: Table containing the following HRV metrics:
  • SD1: Poincaré plot SD1 descriptor (std. dev. of intervals along the line perpendicular to the line of identity).
  • SD2: Poincaré plot SD2 descriptor (std. dev. of intervals along the line of identity).
  • alpha1: Log-log slope of DFA in the low-scale region.
  • alpha2: Log-log slope of DFA in the high-scale region.
  • SampEn: The sample entropy.

[2] Peng, C.-K., Hausdorff, J. M. and Goldberger, A. L. (2000) 'Fractal mechanisms in neuronal control: human heartbeat and gait dynamics in health and disease', in Self-Organized Biological Dynamics and Nonlinear Control. Cambridge: Cambridge University Press.

[3] Costa, M. D., Goldberger, A. L. and Peng, C.-K. (2005) 'Multiscale entropy analysis of biological signals', Physical Review E, 71(2), pp. 1–18.

## hrv_fragmentation()

mhrv.hrv.hrv_fragmentation(nni, varargin)

Computes HRV fragmentation indices [1] of an NN interval time series.

Parameters

nni – Vector of NN-interval durations (in seconds)

Returns

Table containing the following fragmentation metrics:

• PIP: Percentage of inflection points.
• IALS: Inverse average length of segments.
• PSS: Percentage of NN intervals that are in short segments.
• PAS: Percentage of NN intervals that are in alternation segments of at least 4 intervals.

[1] Costa, M. D., Davis, R. B. and Goldberger, A. L. (2017) 'Heart Rate Fragmentation: A New Approach to the Analysis of Cardiac Interbeat Interval Dynamics', Frontiers in Physiology, 8(May), pp. 1–13.
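To make the metric definitions above concrete, here is a minimal NumPy sketch of the time-domain metrics and the Poincaré descriptors, written from the definitions in this documentation rather than taken from the mhrv source, so treat it as an illustration, not the toolbox's implementation:

```python
import numpy as np

def hrv_time_metrics(nni, pnn_thresh_ms=50.0):
    """Time-domain HRV metrics from NN intervals given in seconds."""
    nni = np.asarray(nni, dtype=float)
    diffs = np.diff(nni)                       # successive NN differences, in seconds
    return {
        "AVNN":  nni.mean(),                                    # average NN interval
        "SDNN":  nni.std(ddof=1),                               # std. dev. of NN intervals
        "RMSSD": np.sqrt(np.mean(diffs ** 2)),                  # RMS of successive differences
        "pNNx":  100.0 * np.mean(np.abs(diffs) > pnn_thresh_ms / 1000.0),
        "SEM":   nni.std(ddof=1) / np.sqrt(len(nni)),           # std. error of the mean
    }

def poincare_sd1_sd2(nni):
    """Poincaré descriptors: SD1 is the spread perpendicular to the line of
    identity, SD2 the spread along it."""
    nni = np.asarray(nni, dtype=float)
    x, y = nni[:-1], nni[1:]
    sd1 = np.std((y - x) / np.sqrt(2.0), ddof=1)
    sd2 = np.std((y + x) / np.sqrt(2.0), ddof=1)
    return sd1, sd2

# Example with a synthetic, slightly jittered 60 bpm rhythm:
rng = np.random.default_rng(0)
nni = 1.0 + 0.02 * rng.standard_normal(300)
print(hrv_time_metrics(nni))
print(poincare_sd1_sd2(nni))
```

The toolbox functions additionally handle windowing, detrending, plotting, and table output, all of which are omitted here.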
https://mathematica.stackexchange.com/questions/123270/changing-center-of-rotation-of-sphere
# Changing center-of-rotation of sphere

I'm hoping to be able to change the "center-of-rotation" (CoR) about a 3D globe which is given by the code below. Currently the CoR is about the center of the sphere itself; however, I'd love to be able to, for example, change the CoR to be about New York city, or any surface location for that matter. That is, I'd like to be able to move the CoR, which is by default set to (x,y,z)=(0,0,0), to a point on the surface of the sphere, a radial distance R from the sphere's center. Is such an option possible?

EarthTexture = Import["http://naturalearth.springercarto.com/ne3_data/8192/textures/2_no_clouds_8k.jpg"];

EarthSphere = ParametricPlot3D[{Cos[u] Sin[v], Sin[u] Sin[v], Cos[v]}, {u, 0, 2 Pi}, {v, 0, Pi},
  Mesh -> None, PlotPoints -> 100,
  TextureCoordinateFunction -> ({#4, 1 - #5} &), Boxed -> False,
  PlotStyle -> Texture[Show[EarthTexture]], Lighting -> "Neutral",
  Axes -> False, RotationAction -> "Clip",
  ViewPoint -> {-2.026774, 2.07922, 1.73753418}, ImageSize -> 800];

Show[EarthSphere, PlotRange -> Automatic]

• Have you seen ViewCenter? – Michael E2 Aug 6 '16 at 2:07
• ViewCenter looks to work nicely, but I'm struggling to figure out how to change the center-of-rotation to a specific point on Earth. Would it be possible, given latitude, longitude and the radius of Earth, to accurately set ViewCenter? – InquisitiveInquirer Aug 6 '16 at 20:14

The documentation says that

The setting for ViewCenter is given in scaled coordinates, which run from 0 to 1 across each dimension of the bounding box.

so it should just be a case of converting from lat/long to graphics coordinates and then to scaled coordinates. E.g.

earth = ParametricPlot3D[{Cos[u] Sin[v], Sin[u] Sin[v], Cos[v]}, {u, 0, 2 Pi}, {v, 0, Pi},
  Mesh -> None, PlotPoints -> 30, Boxed -> False, Axes -> False,
  TextureCoordinateFunction -> ({#4, 1 - #5} &),
  PlotStyle -> Texture[EarthTexture], Lighting -> "Neutral"];

pt[city_] := Module[{v, u},
  {v, u} = LatitudeLongitude@GeoPosition@city;
  {-Cos[u] Cos[v], -Sin[u] Cos[v], Sin[v]}]

cities = {
  Entity["City", {"London", "GreaterLondon", "UnitedKingdom"}],
  Entity["City", {"NewYork", "NewYork", "UnitedStates"}],
  Entity["City", {"Tokyo", "Tokyo", "Japan"}]};

Manipulate[
  Show[earth, Graphics3D[{Red, Arrow@Tube[{1.3 pt[c], pt[c]}]}],
   ViewCenter -> Dynamic[0.5 + 0.5 pt[c]]],
  {c, cities}]

• Hi Simon, thanks very much for the reply, but I seem to be getting errors when I run your code regarding lists not being of the same shape. The errors are of the form "Lists {v$1518, u$1518} and {51.5, -0.116667} Degree are not the same shape." – InquisitiveInquirer Aug 7 '16 at 10:24
• @AlexR that's odd, what do you get from FullForm[LatitudeLongitude@GeoPosition@cities[[1]]] ? – Simon Woods Aug 7 '16 at 10:59
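The coordinate conversion at the heart of the answer is independent of Mathematica. Here is a minimal Python sketch (the function names are my own) of the same mapping from latitude/longitude to scaled ViewCenter coordinates:

```python
import math

def latlon_to_xyz(lat_deg, lon_deg):
    """Unit-sphere point for a latitude/longitude pair (degrees), using the
    same sign convention as pt[city_] in the answer above."""
    lat, lon = math.radians(lat_deg), math.radians(lon_deg)
    return (-math.cos(lon) * math.cos(lat),
            -math.sin(lon) * math.cos(lat),
            math.sin(lat))

def view_center(lat_deg, lon_deg):
    """ViewCenter in scaled coordinates: the unit sphere's bounding box spans
    [-1, 1] in each dimension, so scaled = 0.5 + 0.5 * xyz."""
    return tuple(0.5 + 0.5 * c for c in latlon_to_xyz(lat_deg, lon_deg))

# Roughly New York City: 40.7 N, 74.0 W
print(view_center(40.7, -74.0))
```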
https://chemistry.stackexchange.com/questions/21855/why-do-some-solid-chemicals-dissolve-better-in-colder-water
# Why do some solid chemicals dissolve better in colder water?

I just discovered that some solid chemicals, such as $\ce{Na2SO4}$, dissolve better in cold water than hot water (from here), and would like to know if there is any particular reason why. Is it their structure, their charge, etc.?

As with every process, dissolving can be described by the thermodynamic equation:

$$\Delta G = \Delta H - T \Delta S$$

with the process being spontaneous if $\Delta G$ is negative. We have two components here, enthalpic and entropic. The enthalpic component $\Delta H$ is a constant of the salt in question. The entropic component $\Delta S$ can usually be considered positive, because a highly ordered crystal lattice is broken to give dissolved, much less ordered fragments; hence most salts dissolve more easily in hot water. If the solubility decreases with higher temperatures, then $\Delta S$ must be negative. The salt will then only dissolve if $\Delta H$ is negative enough to give an overall negative result. This seems to be the case for $\ce{Na2SO4}$.

It is impossible to predict $\Delta S$ and $\Delta H$ a priori without quantum-chemical computations. One has to resort to experiments to determine whether a salt will dissolve better or worse in the heat.

While I have used the term 'salt' most of the time, the same is true for molecules. Take sucrose: I was asked in a physical chemistry lab to guess, with my colleague, whether sugar would dissolve endothermically or exothermically. We reasoned that the degree of order should be a lot higher in dissolved sugar, due to the large amount of water surrounding it rather tightly, so that the entropy must decrease, meaning the enthalpy would have to be negative and sugar would have to dissolve exothermically. Hence sugar should dissolve more easily in cold water, too. According to a comment by Nicolau Saker Neto, our reasoning was false and sucrose displays increasing solubility at higher temperatures. Apparently the testing faculty member approved of it, though.
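A complementary way to see the temperature dependence (not part of the original answer, but standard equilibrium thermodynamics) is the van 't Hoff equation for the dissolution equilibrium constant:

$$\frac{\mathrm{d}\ln K}{\mathrm{d}T} = \frac{\Delta H^\circ}{RT^2}$$

Solubility falls as temperature rises exactly when the dissolution enthalpy $\Delta H^\circ$ is negative (exothermic dissolution), consistent with the sign analysis above.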
https://www.unqualified-reservations.org/2007/11/musharrafs-rebellion-or-how-to-read/
## Musharraf’s rebellion, or: how to read a newspaper

Sometimes history just serves you up really juicy examples. Days when the papers are this easy to read are rare. They are not complex and must be drunk young, like a Beaujolais. In Campagna they say a mozzarella di bufala is over the hill after sunset on the day it’s made—and some say afternoon. I suspect this post will be obsolete at most within the next month. But it might be tomorrow. But with that caveat: today there are a couple of cute articles about Pakistan in the WSJ. These articles don’t actually tell us what’s going on in Pakistan. In fact, they are active, if quite unconscious, attempts to mislead us about what’s going on in Pakistan. But this is just the art of reading a newspaper. The newspaper’s adaptive goal is to persuade its readers, who include you, to adopt some perspective on the subject of Pakistan. Without in any way adopting, or even considering, this perspective—frankly, why should it be worth considering?—we can understand Pakistan by understanding the effect that the text is designed to have on its unwary reader. Of course we also need to know who wrote it, and why. And first, we need some background about what’s going on in Pakistan. There are three factions fighting for power in Pakistan today: the Islamists, the civilists, and the army. The Islamists are too well-known to describe. The civilists are basically the Soros people, the “civil society” types, lawyers, judges, journalists and politicians. The army is the Pakistani military: Musharraf and his people. The basic plot of the story is that Musharraf is rebelling against Washcorp. His motivation for taking this step is that he believes that if he doesn’t, he will end up either exiled, dead or in prison. In my opinion, this perception is accurate. These events are taking place now because of the weakening of the Pentagon and White House faction, the neoconservative defense hawks, who have controlled the US military since 2001. This weakening is a natural consequence of the fact that the Bush administration is timing out, a normal structural phenomenon in Washcorp power politics. Neocons are really best described as retro-Universalists. Their great dream is to try to restore a kind of faded 1950s vision of Universalism. There is no possibility of success in this effort. But they certainly can keep themselves employed by trying. As for their neo-Universalist adversaries, the Polygon proper, there must be some avian mascot that fits the bill. But it is certainly no member of the pigeon family. I have a pair of semi-tame ravens that come regularly to my deck for peanuts, which I’ve taught them to catch in the air. I think the Polygon’s bird has to be some kind of corvid—the only real question is whether it’s a bluejay, a crow, or a raven. For now I’ll stick with the last. When the hawks were strong, they could afford to protect Musharraf. Weak, they were forced to sacrifice him. Thus the neoconservative near-unanimity on the subject, with only a few dissenters. Many neocons can still stomach ol’ Mushy, but politically they cannot afford not to attach their John Hancock to a demand for elections within three months. While this is a ludicrous demand, it creates a point of bipartisan unanimity within Washcorp, and all major players in the postwar period automatically defer to any unanimous demand of Washcorp—whose internal structure they understand far better than the average Plainlander.
(For example, if you read the Tiananmen Papers, which narrate the decisions of the leaders of China during the Tiananmen Square incident—there is some debate over the authenticity of these documents, but if they are not real they are a very convincing fiction—the Chinese Politburo and Elders receive and read not a daily summary of State Department communiques, not of Pentagon press releases, not of White House statements, but of the Western press. And access, as they always say, is power. The modern official press is a coordination signal orders of magnitude more reliable than any other diplomatic channel.) In any case, the Pakistani army is primarily aligned with the Washcorp hawks, and the Pakistani civilists are primarily aligned with the Washcorp ravens. The Islamists, of course, have no alliance with any Washcorp faction. At least, no direct alliance. Therefore, the events in Pakistan follow the usual pattern of Western colonial proxy wars. The two factions struggling for power within Washcorp nurture and support corresponding Pakistani factions. The provincial struggle is often a bit more rowdy. But it exists solely because of the invisible power struggles within the Beltway. We can therefore use events in Pakistan as a sort of amplifier to help us observe the delicate game in Washington. (Think of it as a sort of Beltwology.) In terms of Pakistani politics alone, however, the structure of the conflict is simple. The army is side A, presently dominant. The civilists and Islamists are side B, presently subordinate. (Yes, I am aware that this is not the conflict as we normally hear it described. That’s kind of the point. As usual, the only way to test an alternate analysis is to simply adopt it, at least rhetorically, work within its context for a while and see how generally true it rings.) If side A loses, it can only lose decisively. After all, it’s the army. The resulting fight will be between the civilists and Islamists. The Islamists will win with a first-round knockout, the civilists and the top rung of the officer corps will end up exiled, dead or in prison, and the rest of the army will be subordinated. The Islamist-army alliance of the ’80s will be re-established, probably in a more virulent form, and Pakistan will become an open ally of Iran. If side A wins decisively, the Islamists and civilists will end up exiled, dead, politically irrelevant or in prison. If side A wins weakly, the outcome is effectively a draw (as the IRA used to say, “we only have to be lucky once”), the conflict will continue in its present state indefinitely, and Pakistan will remain unstable. The army has been forced to roll the dice, however, because of its weakening as the result of trends within Washcorp. Obviously, the outcome I prefer is the middle: decisive victory for side A. If you disagree with this result, either you do not agree with the decision analysis, or your position is objectively hostile to Pakistan. (Which is totally fine, by the way. Not everyone has to be friends.) Note that there are no pleasant outcomes for the civilists in this decision tree. This is normal for those who accept the role of shills, puppets and collaborators. Quislings can always be found. The ugly fact is that the civilist movement in Pakistan is basically a criminal mafia. Or, more precisely, a consortium of several criminal mafias. It is fundamentally corrupt and utterly irredeemable. I’m quite confident in saying that nothing good will ever come of it. Now isn’t that interesting? 
Who does that make you think of? Well, obviously, one name looms large: Michael Corleone. But for anyone who’s seen The Departed, as I just did (I’m afraid this is what I’m supposed to say, but Infernal Affairs really was much better), there is another figure: Whitey Bulger. Jack Nicholson’s character in The Departed is an obvious impression of the notorious Mr. Bulger. But wait—who is Jemima Khan? How do you get a name like “Jemima Khan,” anyway? For those too busy to follow the links, Jemima Khan is a British socialite who happens to be married to cricket star and Pakistani politician Imran Khan. Who happens to have been arrested the other day. One suspects Jemima does not approve. But one also suspects that her motivation for informing Telegraph readers about the true nature of Ms. Bhutto and her “party” is slightly less than altruistic. It’s not a rose garden out there, kids. We are now prepared to read the first piece in the WSJ—an op-ed by one Husain Haqqani. It’s important to note that the back two pages of the A section of the WSJ are composed by an entirely separate organization from the rest of the paper. Call them WSJr and WSJl. WSJr is a reliable indicator of official neoconservative doctrine, inasmuch as any such thing exists. WSJl is one of the most orthodox Universalist newsrooms in Washcorp. The fact that the two are sending more or less the same message makes the Pakistan situation unusually easy to understand. Which is why it’s so like a good Beaujolais. Anyway, Mr. Haqqani is, as a little note at the end of the piece informs us: director of Boston University’s Center for International Relations and the author of “Pakistan: Between Mosque and Military” (Carnegie Endowment for International Peace, 2005). He also has served as adviser to several Pakistani prime ministers, including Ms. Bhutto. This is a ripe, rich turd in a suit. A mob lawyer, an abettor of thieves, a peddler of lies. In a decent world, this man would be in prison. Here are his words, in WSJr, 11/8/2007. The whole piece is priceless, and I have quoted it all. Pakistanis Say No By HUSAIN HAQQANI November 8, 2007; Page A23 When Gen. Pervez Musharraf suspended Pakistan’s Constitution, declared a state of emergency and put the nation once again under martial law, he expected limited civilian resistance and only ritual international condemnation, in view of his role in the war against terrorism. On both counts, Mr. Musharraf appears to have badly miscalculated. Translation: You’ve fucked with the wrong people. Now, we’re going to fuck you. (Note also how Haqqani declines to use General Musharraf’s title. I’m not sure of the Pakistani military etiquette on this. Perhaps it’s not quite as serious as pissing on his mother’s grave.) More Haqqani: Pakistan’s burgeoning civil society, led by lawyers and encouraged by judges ousted from the Supreme Court, is refusing to be cowed. Translation: Your time is over, you little Pentagon poodle. Quit while you still can. Haqqani: Protests are spreading despite thousands of arrests and the use of tear gas and batons against peaceful demonstrators. Translation: Your pathetic “policemen” will never dare to resist our vast rent-a-mobs. They don’t even have the guts to shoot—let alone keep shooting. Haqqani: More than 1,700 attorneys have been jailed but still more are taking to the streets. University students have joined the lawyers, and former Prime Minister Benazir Bhutto has vowed to violate a ban on public meetings by leading a rally on Friday. 
Translation: Look—Rupert Murdoch has donated the top of his op-ed page to help us whip our mobs into a frenzy of lawless street violence. You have no chance, buster. None. Haqqani: There are a number of important reasons why Pakistan’s attorneys are leading the protests against Mr. Musharraf. They have a long tradition of activism for rule of law and human-rights issues. In 1968–69, the lawyers started the campaign that resulted in the ouster of Pakistan’s first military ruler, Field Marshal Ayub Khan. They also were at the forefront of the campaign against Mr. Zia-ul-Haq, whose 11-year military rule ended when he died in a 1988 plane crash. Translation: Our trained lawyers have been scheming for power since you were in short pants. They’re pretty good at it now. Isn’t it time you clicked over to Orbitz.com? Flights out of Islamabad are pretty crowded this time of year. Haqqani: The legal fraternity has another advantage, in that they can afford to confront the government without fearing starvation for their families. Some 65 million of Pakistan’s 160 million people subsist on less than $1 a day, while another 65 million survive just above the poverty line. The poor are willing to participate in organized rallies, such as the one that welcomed Ms. Bhutto back to Pakistan on Oct. 18 (and was targeted by a suicide terrorist), but they generally avoid protest demonstrations where getting arrested and missing work is almost inevitable. Translation: Our lawyers are already fat with graft. They can do this all day, every day, for as long as you’re willing to sit on the pot. But if we start to get bored, maybe we’ll call out the real mobs. Haqqani: That could change in the days and weeks to come. Although Mr. Musharraf has taken all private and international television channels off the air, images of the protests are being seen all over Pakistan through the Internet and with satellite dishes. Middle-class Pakistanis, and increasingly the poor, are making it clear that they want political freedom, along with an improvement in their economic prospects, and do not consider prosperity and democracy to be mutually exclusive. Translation: And we don’t want to have to go that far, now, do we? Try reading my lips, bro. I’m not sure we’re really connecting here. Haqqani: The international community has also responded more strongly than Mr. Musharraf expected. The Netherlands has suspended aid, and several donors are reviewing their policy on military and economic assistance. The Bush administration is hoping to defuse the situation through assertive diplomacy. But withdrawal of aid, supported by several congressional leaders, remains a possibility. Translation: The New York mob is 100% with us. The Washington people are leaning our way. Fuck with the bull—get a horn in the ass. Haqqani: Since 9/11, Mr. Musharraf has positioned himself as the key Western ally in the global war against terrorism. Translation: This part of my message is for the Pentagon. Haqqani: But in recent months, he has been too distracted with domestic politics to play an effective role. Translation: We can neutralize Musharraf completely. You will never get anywhere with him. Haqqani: The more he has to repress critics and political opponents, the less Pakistan will be able to fight terrorism. After all, when troops have to be deployed to detain Supreme Court judges, journalists, lawyers and politicians, there are fewer troops available to fight terrorists. 
Pakistan’s intelligence services can either spy on dissenting Pakistani civilians or focus their energies on finding Osama bin Laden and his ever increasing number of deputies and operatives around Pakistan. Translation: We’re so tight with State, we can piss on your boots and tell you it’s raining. Haqqani: But Pakistan needs to fight terrorism for Pakistan’s sake. Mr. Musharraf cannot endlessly blackmail Washington by hinting that he would withdraw antiterror cooperation if the U.S. pressures him on other issues, including democracy and human-rights violations. Translation: Besides—you think you’re using Musharraf. But it’s the other way around. Haqqani: One thing is clear: Mr. Musharraf’s authoritarianism is being challenged by diverse elements in Pakistani society. Translation: We have two Mexicans, a spade, and a tranny who calls herself “Marquetta.” She can shoot the asshole out of a sparrow at fifty meters. But she says she likes you. Don’t make her have to change her mind. Haqqani: His self-cultivated image as a benign dictator is a thing of the past, and his recent harsh measures have failed to frighten Pakistan’s civil society and political opposition into submission. Translation: Have I mentioned yet that you’re DOOMED? Haqqani: The defiance of the judiciary and the media might not immediately topple Mr. Musharraf, but it could render him ineffective to a point where the military rethinks its options. The army will soon recognize that the only thing keeping the general and his civilian cronies in power is the army’s support. It risks further alienating the Pakistani people and losing their respect as long as it continues to act solely in the interests of Mr. Musharraf and his small band of political allies. At some point, the professional soldiers will wonder whether they should risk their institution’s position to keep him in power. The army is Mr. Musharraf’s support base. It is a major beneficiary of U.S. security assistance, having received $17 billion since 1954, with equipment worth several hundred million dollars currently in the pipeline. Since 2002, the U.S. has subsidized the Pakistani army to the tune of $150 million per month. The army is also a stakeholder in Pakistan’s growing economy, which benefits from international aid and investment. If Mr. Musharraf’s autocratic policies threaten Pakistan’s prosperity, the army is likely to be less unanimous in its support of its commander. Translation: Perhaps you’re not ready to hear it yet. But maybe your people are. Are you sure they’re all still loyal? Anyone can read the writing on the wall, old man. Haqqani: Already, there are signs of economic fallout from the political turmoil. Rumors of an anti-Musharraf military coup on Monday caused the biggest one-day decline in 16 months on the Karachi Stock Exchange, resulting in losses of an estimated $1.3 billion. Pakistan’s credit rating has been revised downward in anticipation of further civic unrest and international sanctions. Translation: Our guys are lawyers—they can always find work. We don’t care if we have to destroy the economy. But perhaps someone on your side does. Are you counting on him? I wouldn’t be so fast, old chap. Haqqani: Pakistanis are used to coups d’état where the army takes the helm of government. Things are different this time. In the past, generals have suspended the constitution to remove from power unpopular rulers, usually weakened civilians rightly or wrongly accused of corruption (as was the case when Mr.
Musharraf ousted Prime Minister Nawaz Sharif in October 1999). Translation: I’m too smart to say whether Nawaz Sharif was corrupt or not. After all, you never know. The future can hold anything. Haqqani: This is the first time an unpopular military ruler has suspended the constitution to preserve his own rule. In doing so, Mr. Musharraf has clearly overplayed his hand. Translation: You started it. But now, we’re going to end this crap one way or another. Haqqani: Mr. Musharraf cannot blame a civilian predecessor for bringing the country to the brink. If there is internal chaos in Pakistan today, it is of the general’s making. After all, it was his arbitrary decision to remove Supreme Court Chief Justice Iftikhar Mohammed Chaudhry in March that initiated the political crisis which has led to the current “state of emergency.” Translation: And your boots are still getting wet. How dare you resist us? How dare you? Haqqani: Justice Chaudhry, on the other hand, has become a symbol of resistance to arbitrary rule—the man who refused to roll over and disappear, unlike earlier judges who cooperated with military rulers or simply went home when their conscience dictated otherwise. Justice Chaudhry’s call upon the legal fraternity to “Go to every corner of Pakistan and give the message that this is the time to sacrifice” for the supremacy of Pakistan’s Constitution has drawn elements disillusioned with existing political leaders to anti-Musharraf protests. Translation: Article 37 of the Pakistani Constitution specifically states that all disputes shall be resolved by mob violence. Cut it out with this martial-law shit. If you have people, let’s see ’em. If not, why the fuck are you still here? Haqqani: Among Pakistani political leaders, Ms. Bhutto has emerged as the viable civilian alternative to Mr. Musharraf, with public support at home and acceptance abroad. As the only politician in Pakistan to publicly describe Islamist extremism and terrorism as the principal threat to the nation, Ms. Bhutto was initially measured in her response to Mr. Musharraf’s reckless actions. She demanded that he restore the constitution and call elections as scheduled. Translation: Look—Pinkie has been very patient with you. Haqqani: She demanded that he restore the constitution and call elections as scheduled. She hoped to change his attitude with the threat of putting hundreds of thousands of supporters in the streets, without actually doing so. But Mr. Musharraf’s stubbornness is changing that position. Translation: But she’s starting to get a little fed up. Haqqani: Like many in the U.S., Ms. Bhutto appears worried about directing attention away from fighting terrorism and destabilizing Pakistan further. But leaving the anti-Musharraf campaign leaderless is not an option. She has positioned herself as an opposition leader who represents the sentiment of the people, but is also willing to accept a negotiated settlement that restores the constitution, ends persecution, and results in free and fair elections leading to full civilian rule. Translation: As you see, Pinkie is prepared to be quite reasonable. Haqqani: So far Mr. Musharraf has shown no inclination to negotiate in good faith with Ms. Bhutto or the international community. With each passing day, the Bush administration’s hopes—that with its help there could be a transition to democracy in Pakistan with a continuing role for Mr. Musharraf—are diminishing. Unless Mr. Musharraf changes course quickly, the U.S. 
will be compelled to start looking beyond him to a more legitimate leader. Translation: Perhaps you should be reasonable as well. Haqqani: Mr. Musharraf seems determined to put his own political survival before the rule of law—actions that warrant the label dictator. Pakistan’s attorneys, and increasingly the rest of its citizenry, seem equally determined to prevent this from happening. Translation: Thanks, I’m all done here. A big hand to Rupert for helping make this message possible. Mr. Murdoch, you’ve come a long way in your efforts to avoid the fate of Lord Black. And to all the good folks at Washcorp: remember, we’re on your side. And that’s the entire article. Anyway. I don’t mean to be too flippant here. This is obviously a serious business. But if Marx was right about anything, he was right about history and farce. Now, here’s another article. Same day, same paper, but this one is on the bottom of the front page—WSJl, as it were. This is hard news. Failed Courtship of Warlord Trips Up U.S. in Afghanistan Eager for Allies, Army Tries Turning Insurgents; Chaos Embroils Pakistan By JAY SOLOMON November 8, 2007; Page A1 I have no idea who “Jay Solomon” is. But does it matter? Not at all. Perhaps you have seen All the President’s Men and you think the life of the elite Washington journalist is all about diving through dumpsters and making secret rendezvous with anonymous informants in scruffy phonebooths. I’m afraid this is not how it is. If you are someone who can get his articles on the front page of the WSJ, as many prewritten stories as you could possibly ask for will show up in your email every day. These are not even press releases. They are messages directly to you. But if you don’t print them or if you screw them up in some way, they will stop coming and you will fall off the front page. The task, however, is basically the normal journalist’s task of rewriting official information dumps, to make them seem as if they were written by an intelligent person with judgment and character. This one, as we’ll see, is obviously from the State Department. The U.S. is struggling to find tribal allies in Afghanistan and Pakistan as it tries to beat back the resurgence of al Qaeda and the Taliban. I hate to break it to you, kids. But when you’re winning, the allies struggle to find you. In alienating a powerful warlord named Jalaluddin Haqqani a few years ago, however, some U.S. and Afghan officials argue the Americans may have shot themselves in the foot. Okay, here is the money. First, note the sourcing. Clearly “officials” means State and/or CIA. In case you’ve been in a cave for the last 30 years, these two are like this these days. Second, note the reason Mr. Haqqani (I’m sure the coincidence of name is, um, coincidental) is fighting. He is fighting for emotional reasons. He is obviously a deeply troubled young man who only needs time and peace to heal. This is absolutely typical of the rhetoric of these stupid little wars. To the hawks, our enemies fight because they hate us. They will always hate us, so they must be destroyed. To the ravens, our enemies fight because they hate us. Their hearts have been hardened by our callous and cruel treatment, but with enough hugs and candy they can be made to love us again. Do I need to tell you how insane both these attitudes are? They are both perfect examples of Conquest’s three laws. 
And they are exactly the reason I support a complete shutdown of US foreign policy, with no exceptions at all, dissolving State completely and folding Defense into Homeland Security. Perhaps we can rename it “National Security.” Anyway, more: Mr. Haqqani is now one of the major rebel leaders roiling Afghanistan. But back in autumn 2002, he secretly sent word that he could ally with the new U.S.-friendly Afghan government. The warlord had once been a partner of the Central Intelligence Agency, and later closely collaborated with Osama bin Laden and the ruling Taliban. CIA officers held talks with his brother, Ibrahim, and made plans to meet with Mr. Haqqani, who was leading some of the Taliban’s troops. But U.S. military forces operating separately from the CIA arrested Ibrahim—cutting off the talks and entrenching his brother as a nemesis. Mr. Haqqani is still fighting U.S. troops along the Pakistan border. “We blew our chance,” contends one of the CIA officers involved who had worked with Mr. Haqqani in the 1980s. “I truly believe he could have been on our side.” Just like Uncle Ho! I’m telling you, man. History as farce. Other senior officials in the CIA and Pentagon are less certain. We’ll quote anyone. But the ledes go only to our real friends. But Washington’s aborted courtship of Mr. Haqqani epitomizes the conflicts and calculations that are complicating U.S. involvement in the region. Sometimes I like to just say nothing at all. The war in Afghanistan is a major factor in the chaos unfolding in neighboring Pakistan. A spreading Islamic insurgency inside Pakistan is one reason Gen. Pervez Musharraf cited Saturday when he declared emergency rule, though the opposition contends the move was more about extending his stay in power. Militants in Pakistan’s tribal belt are suspected of fighting in both countries, dramatically widening the conflict from the days that it was largely confined to Afghanistan. Meanwhile, some officials report that bears are shitting in the woods. With U.S. intelligence officials concerned that al Qaeda is using Pakistan as a base to plot new attacks in Afghanistan and elsewhere, winning back tribal leaders like Mr. Haqqani—or eradicating those who refuse to be wooed—has climbed to the top of Washington’s strategic agenda. The State Department recently pledged $750 million in new aid to Pakistan’s border regions, hoping to use economic development and education to peel local leaders away from al Qaeda and militants such as Mr. Haqqani. Whoa! Okay, let’s stop right here. I don’t think we need to quote any more of this article. Question: “Who controls Pakistan’s border regions?” Answer: “The Taliban.” Question: “So when State sends$750 million to Pakistan’s border regions, who are they sending $750 million to?” Answer: “What are you, anyway? Some kind of a neo-McCarthyist?” In other words, here is what Washcorp is up to in Pakistan. This is its favorite trick. It does this all the time. It just needs to make sure you don’t see the aces in its sleeves. Washcorp is fighting a war against itself. Through one arm, it is funding the Taliban. Through another arm, American soldiers are fighting the Taliban. If you think this is the first time this sort of thing has happened, perhaps you need to think about switching your history provider. The delusional belief that allows Washington to fight a war against itself, without State and Defense actually coming to actual fisticuffs in the National Security Council, is that you can pay people to love you. 
The folks over at State genuinely believe that the Pashtun tribes can be bought off. If not $750 million, how about a cool 1.5 bil? What’s 1.5 billion when we’re talking about peace? Besides, it’s not really our money, anyway. Meanwhile, up in Peshawar, they’re not exactly stupid. They’re perfectly aware that they are being paid to fight the Pentagon, just as they were paid to fight the Soviets. New century, new evil empire, same difference. “After all, if we stopped fighting, wouldn’t State just stop paying? The more we fight, the more we seem to get paid. Funny how that works. Why, it’s almost like having an actual job!” The same exact thing is going on with Bhutto and her cronies. And even with Musharraf. Après moi, le déluge! If Musharraf actually destroys the Islamists, his cash pipeline from Arlington will dry up. “Sorry, old chap, Ben’s having a little trouble with the printer. He’s out in his helicopter today, anyway. Something about ‘jumbo loans?’ But we’ll call you just as soon as he gets back. Cheerio!” But at least Musharraf is actually capable of fighting the Islamists. Whereas Bhutto’s only solution is to pay them, pay them and pay them again. She will smother those poor, broken, mistreated men with her warm, wise Cambridge-educated love. And surely they will love her, and us, etc., etc., etc. If there are any remaining disputes, perhaps the United Nations can settle it. Wouldn’t it be nice if the United Nations could trust America again? This is the entire pattern of Washcorp’s foreign relations for the last 65 years. At least. It’s a sort of MySpace diplomacy, with buckets of cash. The entire point is to pay people, typically extremely sordid and nasty people, to let us be their “friend.” And look! How many friends America has. Conquest’s third law, dear reader, I rest my case. Meanwhile, the poor bastards in the US military are fighting against suicide bombers whose wallets are stuffed to the gills with their own tax dollars. We’re raining so much money on northern Pakistan, you probably can’t get a latte in Peshawar for less than $20. (Couldn’t we at least mark the bills, so we can see where it’s going? I mean, when you send $750M to the North-West Frontier Provinces, how do you do it? Do you send in Ben Bernanke, in a heavily armored Apache, and have him dumping bales from the tailgunner’s seat? Or do you just write a check to Mullah Omar? If so, where does he bank?) Of course, Mullah Omar is happy to take cash. Even dollars! And he’s certainly not afraid of all the lawyers in Pakistan. What are they going to do, sue him? I am not an expert in the Quran, but somehow I don’t think it says anything about “batons and tear gas.” So it’s fairly clear what we can expect if Musharraf loses and Bhutto wins: Khomeini 2.0, with nukes and ICBMs. Hey, it worked for Carter. Perhaps Condoleezza Rice will get the Nobel. Or Rice and al-Zawahiri? Could they be meeting already? Anyway. Enough of this mockery. The bottom line, in my deeply humble and quite sincere opinion, is that it’s time for an independent Pakistan. In case you’re not familiar with this word “independent,” let me go through its etymology. It starts with “in,” which oddly enough is a kind of Latin word for “not.” Then the second part is this “dependent” bit. I’m not quite sure what that means. But the whole construct would seem to imply that Pakistan, or the Pakistani government, or someone, is not, in some way, dependent. On, I don’t know, anyone else. Like, as in, it can do whatever it wants.
And nobody will cut its allowance. And nobody will raise its allowance. Because it has no allowance. And if it decides that the best way to handle a mob of lawyers is a couple of bored sergeants and a Dushka, nobody who’s not actually within at most a thousand kilometers of Rawalpindi has any reason to care. Not even the Wall Street Journal. Unless it’s, like, a slow news day or something. (Thanks to reader ZK, who may or may not endorse the result, for much useful background.)
http://sumidiot.blogspot.com/2008/02/p-adics.html
## Saturday, February 9, 2008

Yesterday I said I'd post something about p-adics this weekend. I had originally planned on typing up the notes I used for my talk, but may have decided not to do that, saving my time for other things. But I still wanted to post some highlights.

Fix a prime $p$. For integers $x$, let $v_p(x)$ denote the largest power of $p$ dividing $x$ (to save some time typing this, I'll drop the $p$ from the notation, and just write $v(x)$). Note that $v(xy) = v(x) + v(y)$, which looks logarithmic (as well it should, we're talking about powers...). So let's extend the definition to include rationals, via $v(a/b) = v(a) - v(b)$ (and check that this is well defined). I should comment that we set $v(0) = \infty$, with the usual conventions for working with $\infty$.

Anyway, our $v$ has another interesting property: $v(x+y) \geq \min(v(x), v(y))$, with equality if $v(x) \neq v(y)$. To see this, let $a = v(x)$ and $b = v(y)$, and we can then write $x = p^a m$, $y = p^b n$ where $p$ doesn't divide $m$ or $n$ (since $p$ is prime, we could say this as "$p$ doesn't divide $mn$"). Now $v(x+y) = v(p^a m + p^b n)$, so we factor out the least power of $p$, and we see that $v(x+y)$ is at least that power. Furthermore, suppose $a < b$. Then $x + y = p^a(m + p^{b-a} n)$, and since $p$ doesn't divide $m$, $p$ also doesn't divide $m + p^{b-a} n$ (here we use $b - a \neq 0$), so $v(x+y)$ is exactly the min of $v(x)$ and $v(y)$, as claimed.

We now create a new absolute value on the rationals, via $|x|_p = p^{-v(x)}$ (I will, again, drop the $p$ from the notation and just write $|x|$). Translating the two properties we now know about $v(x)$, we see that $|xy| = |x| \cdot |y|$ and $|x+y| \leq \max(|x|, |y|)$ (which is, in turn, no larger than $|x| + |y|$). This new absolute value is stronger than the normal absolute value, in the sense that it satisfies the strong triangle inequality (the second property above). This new absolute value is an example of a "non-Archimedean" norm. A norm is Archimedean if it satisfies the following property: if $x$ is smaller than $y$, there is a positive integer $n$ such that $nx$ is bigger than $y$ (stick enough small line segments together to make as large a number as you want). Notice that our new absolute value cannot possibly have this property, because for integers $n$, $v(n) \geq 0$, so $|n| \leq 1$, and then the multiplicative property tells us $|nx| \leq |x|$, so we've actually made (potentially) our length smaller by taking more copies of it.

Given an absolute value, the thing to do is define a new metric. So let $d(x,y) = |x - y|$. Keep in mind this is our new absolute value, so I should write $d_p(x,y) = |x - y|_p$ (but it takes longer). Anyway, this metric satisfies the "ultrametric" property: for any $z$, $d(x,y) \leq \max(d(x,z), d(z,y))$ (this is stronger than the normal triangle inequality). What's more: if $d(x,z)$ is different from $d(z,y)$, then $d(x,y) = \max(d(x,z), d(z,y))$. This can be interpreted as saying that "all triangles are isosceles".

With a metric you get the associated topology. Let $b(a,r)$ denote the set of all points whose distance from $a$ is less than $r$, and $B(a,r)$ be those points whose distance is less than or equal to $r$. These balls (as they are called) have some interesting properties, different from balls in the normal topology on the reals. I think I'll leave it for you to prove that if $x$ is in $b(a,r)$, then $b(a,r) = b(x,r)$ (I think about this as saying that every point is the center). Using this, we can see that if $b(a,r)$ intersects $b(s,t)$, then one is contained in the other: if $x$ is in the intersection, then $b(a,r) = b(x,r)$ and $b(s,t) = b(x,t)$, and the ball with larger radius contains (possibly with equality) the ball of smaller radius.
Now every ball is open in the topology (that's how the topology is defined), but what is new with this topology is that all the balls $b(a,r)$ are also closed. I'd like to leave you with one last fact about these balls: $B(0,1)$ is the disjoint union of $b(0,1), b(1,1), \ldots, b(p-1,1)$. The proof I'll summarize here is from Gouvea's book: Suppose $a/b$ is in $B(0,1)$. Taking $a/b$ to be in lowest terms, we may assume $p$ doesn't divide $b$ (since its distance from 0 is no more than 1...). Consider the $p$ integers $a, a-b, a-2b, \ldots, a-(p-1)b$. It is not too hard to see that these are all distinct mod $p$, and since there are $p$ of them, exactly 1 of them, say $a - ib$, is equivalent to 0 mod $p$. Now check that $a/b$ is in $b(i,1)$. Anyway, that's a start. Perhaps you'd rather think about it algebraically. If I tell you $B(0,1)$ is a subring of the rationals with maximal ideal $b(0,1)$, can you guess what the $b(i,1)$ are?

So anyway, those are the low-hanging fruits of the new metric. Fun things to see, I hope. If you want to learn more, the books by Gouvea and Koblitz were the main ones I looked at when preparing my talk.
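If you want to play with these definitions, here is a minimal Python sketch of the valuation $v_p$ and the absolute value $|\cdot|_p$ described above (my own illustration; the function names are made up):

```python
from fractions import Fraction

def vp(x, p):
    """p-adic valuation v_p(x) of a rational x, with v_p(0) = infinity."""
    x = Fraction(x)
    if x == 0:
        return float("inf")
    v = 0
    num, den = x.numerator, x.denominator
    while num % p == 0:      # count powers of p in the numerator
        num //= p
        v += 1
    while den % p == 0:      # subtract powers of p in the denominator
        den //= p
        v -= 1
    return v

def p_abs(x, p):
    """p-adic absolute value |x|_p = p^(-v_p(x))."""
    v = vp(x, p)
    return 0.0 if v == float("inf") else float(p) ** (-v)

# The strong triangle inequality in action: |5 + 20|_5 = 1/25, which is
# strictly less than max(|5|_5, |20|_5) = 1/5, since the valuations are equal.
print(p_abs(5, 5), p_abs(20, 5), p_abs(25, 5))  # 0.2 0.2 0.04
```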
https://www.alexirpan.com/2020/08/18/ai-timelines.html
For this post, I’m going to take artificial general intelligence (AGI) to mean an AI system that matches or exceeds humans at almost all (95%+) economically valuable work. I prefer this definition because it focuses on what causes the most societal change, rather than how we get there. In 2015, I made the following forecasts about when AGI could happen. • 10% chance by 2045 • 50% chance by 2050 • 90% chance by 2070 Now that it’s 2020, I’m updating my forecast to: • 10% chance by 2035 • 50% chance by 2045 • 90% chance by 2070 I’m keeping the 90% line the same, but shifting everything else to be faster. Now, if you’re looking for an argument of why I picked these particular years, and why I shifted by 10 years instead of 5 or 15, you’re going to be disappointed. Both are driven by a gut feeling. What’s important is why parts of my thinking have changed - you can choose your own timeline adjustment based on that. ## I Should Have Been More Uncertain It would be incredibly weird if I was never surprised by machine learning (ML) research. Historically, it’s very hard to predict the trajectory a research field will take, and if I were never surprised, I’d take that as a personal failing to not consider large enough ideas. At the same time, when I think back on the past 5 years, I believe I was surprised more often than average. It wasn’t all in a positive direction. Unsupervised learning got better way faster than I expected. Deep reinforcement learning got better a little faster than I expected. Transfer learning has been slower than expected. Combined, I’ve decided I should widen the distribution of outcomes, so now I’m allocating 35 years to the 10%-90% interval instead of 25 years. I also noticed that my 2015 prediction placed 10% to 50% in a 5 year range, and 50% to 90% in a 20 year range. AGI is a long-tailed event, and there’s a real possibility it’s never viable, but a 5-20 split is absurdly skewed. I’m adjusting accordingly. Now we’re at the hard part. Why did I choose to shift the 10% and 50% lines closer to present day? ## I Didn’t Account for Better Tools Three years ago, I was talking to someone who mentioned that there was no fire alarm for AGI. I told them I knew Eliezer Yudkowsky had written another post about AGI, and I’d seen it shared among Facebook friends, but I hadn’t gotten around to reading it. They summarized it as, “It will never be obvious when AGI is going to occur. Even a few years before it happens, it will be possible to argue AGI is far away. By the time it’s common knowledge that AI safety is the most important problem in the world, it’ll be too late.” And my reaction was, “Okay, that matches what I’ve gotten from my Facebook timeline. I already know the story of Fermi predicting a nuclear chain reaction was very likely to be impossible, only a few years before he worked on the Manhattan Project. More recently, we had Rémi Coulom state that superhuman Go was about 10 years away, one year before the first signs it could happen, and two years before AlphaGo made it official. I also already know the common knowledge arguments for AI safety.” I decided it wasn’t worth my time to read it. (If you haven’t heard the common knowledge arguments, here’s the quick version: it’s possible for the majority to believe AI safety is worthwhile, even if no one says so publicly, because each individual could be afraid everyone else will call them crazy if they argue for drastic action. This can happen even if literally everyone agrees, because they don’t know that everyone agrees.) 
I read the post several years later out of boredom, and I now need to retroactively complain to all my Facebook friends who only shared the historical events and common knowledge arguments. Although that post summary is correct, the ideas I found useful were all outside that summary. I trusted you, filter bubble! How could you let me down like this?

Part of the fire alarm post proposes hypotheses for why people claim AGI is impossible. One of the hypotheses is that researchers pay too much attention to the difficulty of getting something working with their current tools, extrapolate that difficulty to the future, and conclude we could never create AGI because the available tools aren't good enough. This is a bad argument, because your extrapolation needs to account for research tools also improving over time.

What "tool" means is a bit fuzzy. One clear example is our coding libraries. People used to write neural nets in Caffe, MATLAB, and Theano. Now it's mostly TensorFlow and PyTorch.

A less obvious example is feature engineering for computer vision. When was the last time anyone talked about SIFT features for computer vision? Ages ago; they're obsolete. But feature engineering didn't disappear, it just turned into convolutional neural net architecture tuning instead. For a computer vision researcher, SIFT features were the old tool, convolutional neural nets are the new tool, and computer vision is the application that's been supercharged by the better tool.

Whereas for me, I'm not a computer vision person. I think ML for control is a much more interesting problem. However, you have to do computer vision to do control in image-based environments, and if you want to handle the real world, image-based inputs are the way to go. So for me, computer vision is the tool, robotics is the application, and the improvements in computer vision have driven many promising robot learning results.

(Filters automatically learned by AlexNet, which has itself been obsoleted by the better tool, ResNets.)

I'm a big advocate for research tools. I think on average, people underestimate their impact. So after reading the hypothesis that people don't forecast tool improvement properly, I thought for a bit, and decided I hadn't properly accounted for it either. That deserved shaving off a few years.

In the more empirical sides of ML, the obvious components of progress are your ideas and computational budget, but there are less obvious ones too, like your coding and debugging skills, and your ability to utilize your compute. It doesn't matter how many processors you have per machine if your code doesn't use all the processors available. There are a surprising number of ML applications where the main value-add comes from better data management and data summarizing, because those tools free up decision making time for everything else.

In general, everyone's research tools are deficient in some way. Research is about doing something new, which naturally leads to discovering new problems, and it's highly unlikely someone's already made the perfect tool for a problem that didn't exist three months ago. So, your current research tools will always feel janky, and you shouldn't be using that to argue anything about timelines. The research stack has lots of parts, improvements continually happen across that entire stack, and most of these improvements have multiplicative benefits. Multiplicative factors can be very powerful.

One simple example is that to get 10x better results, you can either make one thing 10x better with a paradigm shift, or you can make ten different things 1.26x better, and they'll combine to a 10x total improvement. The latter is just as transformative, but can be much easier, especially if you get 10 experts with different skill sets to work together on a common goal. This is how corporations become a thing.

(From JamesClear.com)
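To make the compounding arithmetic concrete, here's a minimal sketch in plain Python (the component count and target gain are just the numbers from the example above):

```python
# Ten independent improvements of ~1.26x each compound to a 10x gain,
# because 10^(1/10) ~= 1.2589.
n_components = 10
target_gain = 10.0

per_component = target_gain ** (1.0 / n_components)
combined = per_component ** n_components

print(f"per-component gain: {per_component:.4f}x")  # ~1.2589x
print(f"combined gain:      {combined:.1f}x")       # 10.0x
```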
## Semi-Supervised and Unsupervised Learning are Getting Better

Historically, unsupervised learning has been in this weird position where it is obviously the right way to do learning, and also a complete waste of time if you want something to work ASAP.

On the one hand, humans don't have labels for most things they learn, so ML systems shouldn't need labels either. On the other hand, the deep learning boom of 2015 was mostly powered by supervised learning on large, labeled datasets. Richard Socher made a notable tweet at the time, to the effect that rather than spending a month wrestling with an unsupervised learning problem, you could just label some data for a week and train a classifier.

I wouldn't say unsupervised learning has always been useless. In 2010, it was common wisdom that deep networks should go through an unsupervised pre-training step before starting supervised learning. See (Erhan et al, JMLR 2010). In 2015, self-supervised word vectors like GloVe and word2vec were automatically learning interesting relationships between words.

As someone who started ML around 2015, these unsupervised successes felt like exceptions to the rule. Most other applications relied on labels. Pretrained ImageNet features were the closest thing to general behavior, and those features were learned from scratch through only supervised learning.

I've long agreed that unsupervised learning is the future, and the right way to do things, as soon as we figure out how to do so. But man, we have spent a long time trying to do so. That's made me pretty impressed with the semi-supervised and unsupervised learning papers from the past few months. Momentum Contrast from (He et al, CVPR 2020) was quite nice, SimCLR from (Chen et al, ICML 2020) improved on that, and Bootstrap Your Own Latent (Grill, Strub, Altché, Tallec, Richemond et al, 2020) has improved on that. And then there's GPT-3, but I'll get to that later.

When I was thinking through what made ML hard, the trend lines were pointing to larger models and larger labeled datasets. They're still pointing that way now. I concluded that future ML progress would be bottlenecked by labeling requirements. Defining a 10x bigger model is easy. Training a 10x bigger model is harder, but it doesn't need 10x as many people to work on it. Getting 10x as many labels does.

Yes, data labeling tools are getting better, Amazon Mechanical Turk is very popular, and there are even startups whose missions are to provide fast data labeling as a service. But labels are fundamentally a question about human preferences, and that makes it hard to escape human labor.

Reward functions in reinforcement learning have a similar issue. In principle, the model figures out a solution after you define what success looks like. In practice, you need a human to check the model isn't hacking the reward, or your reward function is implicitly defined by human raters, which just turns into the same labeling problem.

Large labeled datasets don't appear out of nowhere. They take deliberate, sustained effort to generate. There's a reason ImageNet won the Test of Time award at CVPR 2019 - the authors of that paper went out and did the work.
If ML needed ever larger labeled datasets to push performance, and models kept growing by orders of magnitude, then you'd hit a point where the amount of human supervision needed to make progress would be insane.

(This isn't even getting into the problem of labels being imperfect. We've found that many labeled datasets used in popular benchmarks contain lots of bias. That isn't surprising, but now that it's closer to common knowledge, building a large dataset with a laissez-faire labeling system isn't going to fly anymore.)

Okay. Well, if 10x labels is a problem, are there ways around that problem? One way is if you don't need 10x as many labels to train a 10x larger model. The messaging on that is mixed. One scaling law paper, (Hestness et al, 2017), recommends a model size that grows sublinearly with dataset size.

We expect that number of model parameters to fit a data set should follow $$s(m) \propto \alpha m^{\beta_p}$$, where $$s(m)$$ is the required model size to fit a training set of size $$m$$. (From Section 2.2)

Different problem settings have different coefficients. Image classification followed a $$\beta_p = 0.573$$ power law, while language modeling followed a $$\beta_p \approx 0.72$$ line.

(Trend lines for image classification (left) and language modeling (right), from (Hestness et al, 2017).)

Inverting this suggests dataset size should grow superlinearly with model size - a 10x larger image classification model should use $$10^{1/0.573} = 55.6$$x as much data! That's awful news!

But, the (Kaplan and McCandlish, 2020) paper suggests the inverse relationship - that dataset size should grow sublinearly with model size. They only examine language modeling, but state in Section 6.3 that

To keep overfitting under control, the results of Section 4 imply we should scale the dataset size as $$D \propto N^{0.74}$$, [where $$D$$ is dataset size and $$N$$ is model size].

This is strange when compared to the Hestness result of $$D \propto N^{1/0.72}$$. Should the dataset grow faster or slower than the model? The difference between the two numbers happens because the Kaplan result is derived assuming a fixed computational budget. One of the key results they found was that it was more efficient to train a very large model for a short amount of time, rather than train a smaller model to convergence. Meanwhile, as far as I could tell, the Hestness results always use models trained to convergence.

(Figure 2 of (Kaplan and McCandlish, 2020).)

That was a bit of a digression, but after plugging the numbers in, we get that every 10x increase in model size should require between a roughly 5.5x and 55.6x increase in dataset size. Let's assume the 5.5x side to be generous. A 5.5x factor for label needs is definitely way better than a 10x factor, but it's still a lot.
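To make the arithmetic explicit, here's a small sketch that plugs the exponents quoted above into both relationships (the exponent values are the ones from the two papers; the rest is just arithmetic):

```python
# Implied growth in dataset size when the model gets 10x larger.
model_growth = 10.0

# Hestness et al. 2017 fit model size as s(m) ~ m^beta_p for models
# trained to convergence, so data must grow as s^(1/beta_p).
for task, beta_p in [("image classification", 0.573),
                     ("language modeling", 0.72)]:
    print(f"Hestness, {task}: {model_growth ** (1.0 / beta_p):5.1f}x more data")

# Kaplan and McCandlish 2020 (fixed compute budget): D ~ N^0.74,
# so data grows sublinearly with model size.
print(f"Kaplan/McCandlish, language modeling: {model_growth ** 0.74:5.1f}x more data")
# -> 55.6x, 24.5x, and 5.5x respectively
```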
Enter unsupervised learning. These methods are getting better, and what "label" means is shifting towards something easier to obtain. GPT-3 is trained on a bunch of web crawling data, and although some input processing was required, it didn't need a human to verify every sentence of text before it went into model training. At sufficient scale, it's looking like it's okay for your labels to be noisy and your data to be messy.

There's a lot of potential here. If you have $$N$$ unsupervised examples, then yes, $$N$$ labeled examples will be better, but remember that labels take effort. The size of your labeled dataset is limited by the supervision you can afford, and you can get much more unlabeled data for the same amount of effort.

A lot of Big Data hype was driven by plots showing data was getting created faster than Moore's Law. Much of the hype fizzled out because uninformed executives didn't understand that having data is not the same as having useful data for machine learning. The true amount of usable data was much smaller. The research community had a big laugh, but the joke will be on us if unsupervised learning gets better and even junk data becomes marginally useful.

Is unsupervised learning already good enough? Definitely not. 100% not. It is closer than I expected it to be. I expect to see more papers use data sources that aren't relevant to their target task, and more "ImageNet moments" where applications are built by standing on the shoulders of someone else's GPU time.

## GPT-3 Results are Qualitatively Better than I Expected

I had already updated my timeline estimates before people started toying with GPT-3, but GPT-3 was what motivated me to write this blog post explaining why.

What we're seeing with GPT-3 is that language is an incredibly flexible input space. People have known this for a while. I know an NLP professor who said language understanding is an AI-complete task, because a hypothetical machine that perfectly understands and replies to all questions might as well be the same as a person. People have also argued that compression is a proxy for intelligence. As argued on the Hutter Prize website, to compress data, you must recognize patterns in that data, and if you view pattern recognition as a key component of intelligence, then better compressors should be more intelligent.

To clarify: these are nowhere near universal NLP opinions! There's lively debate over what language understanding even means. I mention them because these opinions are held by serious people, and the GPT-3 results support them.

GPT-3 is many things, but its core is a system that uses lots of training time to compress a very large corpus of text into a smaller set of Transformer weights. The end result demonstrates a surprisingly wide breadth of knowledge that can be narrowed into many different tasks, as long as you can turn that task into a prompt of text to seed the model's output. It has flaws, but the breadth of tech demos is kind of absurd. It's also remarkable that most of this behavior is emergent from getting good at predicting the next token of text.

This success is a concrete example of the previous section (better unsupervised learning), and it's a sign of the first section (better tooling).

Although there's a lot of fun stuff in story generation, I'm most interested in the code generation demonstrations. They look like early signs of a "Do What I Mean" programming interface. If the existing tech demos could be made 5x better, I wouldn't be surprised if they turned into critical productivity boosters for nuts-and-bolts programming. Systems design, code verification, and debugging will likely stick to humans for now, but a lot of programming is just coloring inside the lines. Even low levels of capability could be a game changer, in the same way as pre-2000 search engines. AltaVista was the 11th most visited website in 1998, and it's certainly worse than what Google/Bing/DuckDuckGo can do now.

One specific way I could see code generation being useful is for ML-for-ML efforts, like neural architecture search and black-box hyperparameter optimization.
One of the common arguments around AGI is intelligence explosion, and that class of black-box methods has been viewed as a potential intelligence explosion mechanism. However, they've long had a key limitation: even if you assume infinite compute, someone has to implement the code that provides a clean API from experiment parameters to final performance. The explorable search space is fundamentally limited by what dimensions of the search space humans think of. If you don't envision part of the search space, machine learning can't explore it.

Domain randomization in robot learning has the same problem. This was my main criticism of the OpenAI Rubik's Cube result. The paper read like a year-long discovery of the Rubik's Cube domain randomization search space, rather than any generalizable robot learning lesson. The end result is based on a model learning to generalize from lots of random simulations, but that model only got there because of the human effort spent determining which randomizations were worth implementing.

Now imagine that whenever you discovered a new unknown unknown in your simulator, you could very quickly implement the code changes that add it to your domain randomization search space. Well, those methods sure look more promising!

There are certainly problems with GPT-3. It has a fixed attention window. It doesn't have a way to learn anything it hasn't already learned from trying to predict the next character of text. Determining what it does know requires learning how to prompt GPT-3 to give the outputs you want, and not all simple prompts work. Finally, it has no notion of intent or agency. It's a next-word predictor. That's all it is, and I'd guess that trying to change its training loss to add intent or agency would be much, much more difficult than it sounds. (And it already sounds quite difficult to me! Never underestimate the inertia of a working ML research project.)

But, again, this reminds me a lot of early search engines. As a kid, I was taught ways to structure my search queries to make good results appear more often. Avoid short words, place important key words first, don't enter full sentences. We dealt with it because the gains were worth it. GPT-3 could be similar. I don't know where this leads, but there's something here.

## I Now Expect Compute to Play a Larger Role, and See Room for Models to Grow

For reasons I don't want to get into in this post, I don't like arguments where people make up a compute estimate of the human brain, take a Moore's Law curve, extrapolate the two out, and declare that AGI will happen when the two lines intersect. I believe they oversimplify the discussion. However, it's undeniable that compute plays a role in ML progress.

But how much are AI capabilities driven by better hardware letting us scale existing models, and how much is driven by new ML ideas? This is a complicated question, especially because the two are not independent. New ideas enable better usage of hardware, and more hardware lets you try more ideas. My 2015 answer to this horrid simplification was that 50% of AGI progress would come from compute, and 50% would come from better algorithms. There were several things missing between the models of 2015 and anything that could put the "general" in artificial general intelligence, and I was not convinced more compute would fix that. Since then, there have been many successes powered by scaling up models, and I now think the balance is more like 65% compute, 35% algorithms.
I suspect that many human-like learning behaviors could just be emergent properties of larger models. I also suspect that many things humans view as "intelligent" or "intentional" are neither. We just want to think we're intelligent and intentional. We're not, and the bar ML models need to cross is not as high as we think.

If compute plays a larger role, that speeds up timelines. ML ideas are bottlenecked by the size and growth of the ML community, whereas faster hardware is powered by worldwide consumer demand for hardware. The latter is a much stronger force.

Let's go back to GPT-3 for a moment. GPT-3 is not the largest Transformer you could build, and there are reasons to build a larger one. If the performance of large Transformers scaled for 2 orders of magnitude (1.5B params for GPT-2, 175B params for GPT-3), then it wouldn't be too weird if they scaled for another 2 orders of magnitude. Of course, it might not. The (Kaplan et al, 2020) scaling laws are supposed to start contradicting each other around $$10^{12}$$ parameters, which is less than 1 order of magnitude away from GPT-3. That doesn't mean the model will stop improving, though. It just means it'll improve at a different rate. I don't see a good argument why we should be confident a 100x model would not be qualitatively different.

This is especially true if you move towards multi-modal learning. Focusing on GPT-3's text generation is missing the main plot thread. If you believe the rumors, OpenAI has been working towards incorporating audio and visual data into their large models. So far, their research output is consistent with that. MuseNet was a generative model for audio, based on large Transformers. The recent Image GPT was a generative model for images, also based on large Transformers.

Was MuseNet state-of-the-art at audio synthesis when it came out? No. Is Image GPT state-of-the-art for image generation? Also no. Model architectures designed specifically for audio and image generation do better than both MuseNet and Image GPT. Focusing on that is missing the point OpenAI is making: a large enough Transformer is not state-of-the-art, but it does well enough on these very different data formats. There are better things than MuseNet, but it's still good enough to power some silly yet maybe useful audio completions.

If you've got proof that a large Transformer can handle audio, image, and text in isolation, why not try doing so on all three simultaneously? Presumably this multi-modal learning will be easier if all the modalities go through a similar neural net architecture, and their research implies Transformers do a good-enough job to be that architecture. It helps that OpenAI can leverage any intuition they already have about very large Transformers.

Once you add in other data streams, there should definitely be enough data to train much larger unsupervised models. Sure, you could use just text, but you could also use all that web text and all the videos and all the audio. There shouldn't be a trade-off, as long as you can scale large enough.

Are large Transformers the last model architecture we'll use? No, probably not; some of their current weaknesses seem hard to address. But I do see room for them to do more than they've done so far. Model architectures are only going to get better, so the capabilities of scaling up current models must be a lower bound on what could be possible 10 or 20 years from now, with scaled-up versions of stronger model architectures.
What's possible right now is already interesting and slightly worrying.

## The Big Picture

In "You and Your Research", Richard Hamming has a famous piece of advice: "What are the important problems in your field, and why aren't you working on them?" Surely AGI is one of the most important problems for machine learning. So, for machine learning, the natural version of this question is, "What problems need to be solved to get to artificial general intelligence?" What waypoints do you expect the field to hit on the road to get there, and how much uncertainty is there about the path between those waypoints?

I feel like more of those waypoints are coming into focus. If you asked 2015-me how we'd build AGI, I'd tell you I have no earthly idea. I didn't feel like we had meaningful inroads on any of the challenges I'd associate with human-level intelligence. If you ask 2020-me how we'd build AGI, I still see a lot of gaps, but I have some idea how it could happen, assuming you get lucky. That's been the biggest shift for me.

There have always been disagreements over what large-scale statistical ML means for AI. The deep learning detractors can't deny that large statistical ML models have been very useful, but deep learning advocates can't deny they've been very expensive. There's a grand tradition of pointing out how much compute goes into state-of-the-art models. See this image that made the rounds on Twitter during the Lee Se-dol match:

(By @samim)

Arguments like this are good at driving discussion to places where models fall short compared to humans, and at poking at ways our existing models may be fundamentally flawed, but I feel these arguments are too human-centered. Our understanding of how humans learn is still incomplete, but we still took over the planet. Similarly, we don't need to have fine-grained agreement on what "understanding" or "knowledge" means for AI systems to have far-reaching impacts on the world. We also don't have to build AI systems that learn like humans do. If they're capable of doing most human-level tasks, economics is going to do the rest, whether or not those systems are made in our own image.

## Trying Hard To Say No

The AGI debate is always a bit of a mess, because people have wildly divergent beliefs over what matters. One useful exercise is to assume AGI is possible in the short term, determine what could be true in that hypothetical future, then evaluate whether it sounds reasonable. Crucially, this is very different from coming up with reasons why AGI can't happen, because there are tons of arguments why it can't happen. There are also tons of arguments why it can happen. This exercise is about putting more effort into the latter, and seeing how hard it is to say "no" to all of them. It helps you focus on the arguments that are actually important.

Let me take a shot at it. If AGI is possible soon, how might that happen? Well, it would require not needing many more new ideas. It would likely be based on scaling existing models, because I don't think there's much time for the field to do a full-scale paradigm shift. And, it's going to need lots of funding, because it needs to be based on scaling, and scaling needs funding.

Perhaps someone develops an app or tool, using a model of GPT-3's size or larger, that's a huge productivity multiplier. Imagine the first computers, Lotus Notes, or Microsoft Excel taking over the business world. Remember, tools drive progress! If you code 2x faster, that's probably 1.5x as much research output.
Shift up or down depending on how often you're bottlenecked by implementation. If that productivity boost is valuable enough to make the economics work out, and you can earn net profit once you account for inference and training costs, then you're in business - literally. Big businesses pay for your tool. Paying customers drive more funding and investment, which pays for more hardware, which enables even larger training runs. In cloud computing, you buy excess hardware to anticipate spikes in consumer demand, then sell access to the extra hardware to earn money. In this scenario, you buy excess hardware to anticipate spikes in consumer inference needs, then give excess compute capacity to research to see what they come up with.

This mechanism is already playing out. Consider the first TPU. As explained in a Google blog post,

Although Google considered building an Application-Specific Integrated Circuit (ASIC) for neural networks as early as 2006, the situation became urgent in 2013. That's when we realized that the fast-growing computational demands of neural networks could require us to double the number of data centers we operate.

Google needed to run more neural nets in production. This drove more hardware investment. A few years later, we're now on TPUv3, with rumors that Facebook is hiring hardware people to build custom silicon for AR technology. So the story for hardware demand seems not just plausible, but likely to be true. If scale lets you do something impractical today, that sparks the research and demand that make it practical tomorrow.

On top of this, let's assume cross-modality learning turns out to be easier than expected at scale. Similar emergent properties as GPT-3 show up. Object tracking and intuitive physics turn out to be naturally occurring phenomena that are learnable just from images, without direct environment interaction or embodiment. With more tweaks, even larger models, and even more data, you end up with a rich feature space for images, text, and audio. It quickly becomes unthinkable to train anything from scratch. Why would you?

Much of the prior work in several fields gets obsoleted, going the way of SIFT features for vision, parse trees for machine translation, and phoneme decoding steps for speech recognition. Deep learning has already killed these methods. People who don't know any of those techniques are working on neural nets that achieve state-of-the-art results in all three domains. That's faintly sad, because some of the obsolete ideas are really cool decompositions of how we understand language and speech, but it is what it is.

As models grow larger, and continue to demonstrate improved performance, research coalesces around a small pool of methods that have been shown to scale with compute. Again, that happened and is still happening with deep learning. When lots of fields use the same set of techniques, you get more knowledge sharing, and that drives better research. CNNs have heavy priors towards considering nearby values. They were first useful for image recognition, but now have implications for genomics (Nature Genetics, 2019), as well as music generation (van den Oord et al, 2016). Transformers are a sequence model first used for language modeling. They were later applied to video understanding (Sun et al, 2019). This trend is likely to continue.
Machine learning has hit a point where describing something as "deep learning" is practically meaningless, since multilayer perceptrons have integrated with enough of the field that you're no longer specifying anything. Maybe five years from now, we'll have a new buzzword that takes deep learning's place.

If this model is good at language, speech, and visual data, what sensor inputs do humans have that this doesn't? It's just the sensors tied to physical embodiment, like taste and touch. Can we claim intelligence is bottlenecked on those stimuli? Sure, but I don't think it is. You arguably only need text to pretend to be human.

A lot has to go right in the scenario above. Multi-modal learning has to work. Behaviors need to continue to emerge out of scaling, because researcher time is mostly going into ideas that help you scale, rather than into inductive priors. Hardware efficiency has to match pace, which includes clean energy generation and maintaining your ever-increasing hardware fleet. Overall, the number of things that have to go right makes me think it's unlikely, but still a possibility worth taking seriously.

The most likely problem I see with my story is that unsupervised learning could be way harder for anything outside of language. Remember, in 2015, unsupervised learning gave us word vectors for language, and nothing great for images. One reasonable hypothesis is that the compositional properties of language make it well suited to unsupervised learning, in a way that isn't true for other input modalities. If that's true, I could be overestimating research progress by paying too much attention to the successes.

It's for those reasons that I'm only adjusting my estimates by a few years. I don't think GPT-3, by itself, is a reason to radically adjust what I believe to be possible. I think transfer learning being harder than anticipated is also a damper on things. But on net, I've mostly seen reasons to speed up my estimates, rather than slow them down.

Thanks to all the people who gave feedback on earlier drafts, including: Michael Andregg, James Bradbury, Ethan Caballero, Ajeya Cotra, William Fedus, Nolan Kent, David Krueger, Simon Ramstedt, and Alex Ray.
https://igfae.usc.es/igfae/
### Research Areas

##### THE STANDARD MODEL TO THE LIMITS

The Standard Model (SM) is our best construction of the microscopic reality at the most fundamental level. The degree of consistency with experiment is astonishingly good. Our Institute has played a fundamental role in this broad field of research.

##### COSMIC PARTICLES AND FUNDAMENTAL PHYSICS

Astroparticle Physics, as well as the connections between Particle Physics and Cosmology, are experiencing a new golden age. Our Institute has been one of the pioneers in the Pierre Auger Observatory, a giant air shower array in Argentina.

##### NUCLEAR PHYSICS FROM THE LAB TO IMPROVE PEOPLE'S HEALTH

The participation of IGFAE in new international facilities such as FAIR offers unprecedented opportunities to contribute to understanding the nature of the strong interaction in the non-perturbative regime, the structure of the nuclear many-body systems and their astrophysical implications.
http://research.endlessfernweh.com/2016/08/
# Observational Astronomy Primer Exercises: 3.) Reducing SPIRE Data, Aperture Photometry

## Tasks/exercises for SPIRE data in HIPE

• Try to download the Hi-Gal data that has already been calibrated and mapped. Check the data quality to determine if you need to re-reduce it or make the map again (for example, if there is some residual striping from the cross-scans).
• If you decide you need to re-reduce it, you can follow the links I've listed below.
• You will need the OBSID – you can look for this with the HSA Tool at http://www.cosmos.esa.int/web/herschel/science-archive –> Input the object name or coordinates and it will return a list of observations and IDs. There will be many observations with PACS/SPIRE/HIFI; choose the ones you want to see. The SPIRE large map obs for G332.4+0.1 were OBSID 1342192055.
• MAKE SURE you are using a recent version of HIPE if you plan to reprocess the data. You will need LOTS of RAM to reduce Herschel data – aim for a machine with at least 32GB.

### Data reduction pipeline in HIPE

==> See tutorials on NASA/IPAC's site: https://nhscsci.ipac.caltech.edu/sc/index.php/Spire/PhotDataAnalysis

The tutorial there for SPIRE large map pipeline reprocessing is quite good – much better than what was available when I started learning how to reduce SPIRE data, with many picture examples of things to look out for in your final images.

They have also made available a large number of web video tutorials over the past few years:
https://nhscsci.ipac.caltech.edu/sc/index.php/Spire/VideoTutorials
https://nhscsci.ipac.caltech.edu/sc/index.php/Spire/Webinars

### SPIRE photometry 'recipe'

Also see the HIPE Data Analysis Guide for a description of basically any analysis task you would care to do in HIPE: http://herschel.esac.esa.int/hcss-doc-15.0/

A minimal end-to-end recipe (HIPE jython; the region sizes are made up, so choose values appropriate for your source):

```python
obsid = 1342192055   # Observation ID from the HSA (Kes 32)
alpha = 2            # Source spectral index: S(nu) proportional to nu^alpha
                     # (the SPIRE default pipeline assumes alpha = -1)
array = "PSW"        # Run for an individual SPIRE band: "PSW", "PMW", "PLW"

obs = getObservation(obsid, useHsa=True, instrument='SPIRE')   # Load the observation from the HSA
# obs = getObservation(obsid, poolName='mypool', instrument='SPIRE')   # Alternative: load from your own local pool

# Extract the extended-source (MJy/sr) calibrated maps from the Observation Context
mapExtd = obs.level2.refs["extd"+array].product

cal = spireCal()   # Load the calibration tree
# --> if that doesn't work: cal = spireCal(calTree="spire_cal", saveTree=1)
beamCorrTable  = cal.phot.refs["ColorCorrBeam"].product
kCorrExtdTable = cal.phot.colorCorrKList.refs[0].product
beamArea  = beamCorrTable.meta["beamPipeline"+array.title()+"Arc"].double
kCorrExtd = kCorrExtdTable.getAlphaCorrection(alpha, array)

mapExtended = convertImageUnit(image=mapExtd, newUnit='Jy/pixel')           # Convert map from MJy/sr to Jy/pixel
mapExtendedCorrected = imageMultiply(image1=mapExtended, scalar=kCorrExtd)  # Colour correction

ra  = '244.252833'   # Target RA:  16h 17m 00.68s
dec = '-50.799300'   # Target Dec: -50d 47' 57.48''
photrad = 200.0      # Photometry source radius, in arcsec (made up - choose an appropriate region)
phot_ann_in = 300.0  # Photometry annulus inner radius, in arcsec
phot_ann_out = 400.0 # Photometry annulus outer radius, in arcsec (must be larger than the inner radius)

# Carry out circular/annulus aperture photometry (radius arguments follow
# the HIPE aperture photometry task; run with no arguments instead to
# define the photometry regions manually)
annularPSW = annularSkyAperturePhotometry(image=mapExtendedCorrected, \
    centerRA=ra, centerDec=dec, fractional=1, \
    radiusArcsec=photrad, innerArcsec=phot_ann_in, outerArcsec=phot_ann_out)

flux = annularPSW.getTargetTotal()   # Background-subtracted target flux, in Jy
print 'PSW flux = %5.3f Jy'%(flux)
```

Run the same block with array = "PMW" and "PLW" to get the other two bands.
# Observational Astronomy Primer Exercises: 2.) Python plotting / fitting / etc.

You may find some of these tasks useful as we move forward doing photometry and running HIPE. This is just some useful material that you can work on over the next few weeks or months.

You can write & run python scripts in a few different ways, and the best way for you is really just up to your preference. I like to write my code in gedit (Pluma in Ubuntu Mate), and interactively test/run in ipython in the terminal. But there are also several IDEs such as Spyder, Eclipse, Geany, PyCharm (similar to those used by Matlab or IDL).

There are many great tutorials on the web for learning python. I started to make a few very simple tutorials elsewhere on this site ( http://research.endlessfernweh.com/tutorials/ ) but there are a variety of better tutorials out there:

Practical Python For Astronomers https://python4astronomers.github.io/ is a great comprehensive one tailored to astronomy, but I find that it's a little bit hard to jump right in.

Here is a page with powerpoint-style tutorials on many things in python (again, targeted at astronomers): https://astro.uni-bonn.de/~rschaaf/Python2008/

Astrobetter is a blog that often has very useful posts about coding and other important things in astronomy. Here is one post on python: http://www.astrobetter.com/blog/2014/07/07/astropy-tutorials-learn-how-to-do-common-astro-tasks-with-astropy-and-python/

For plotting (see below as well), I recommend using the pywcsgrid2 module, but you can also use APLpy and kapteyn to make great images. Here are a couple pages for using pywcsgrid2:
http://leejjoon.github.io/pywcsgrid2/users/overview.html
http://leejjoon.github.io/matplotlib_astronomy_gallery/tutorial/tutorial.html#tutorial-index

Kapteyn has many other useful tools for reprojection, image smoothing, etc. https://www.astro.rug.nl/software/kapteyn/

Here is a list of useful goals & concepts to work on (a minimal fitting sketch follows this list):

• importing modules, syntax
• Create some functions (try some useful ones for astronomy, such as a Gaussian, blackbody, modified BB)
• Load tables from text files, or DS9 regions
• Save data to text files
• Fit function parameters – see my web tutorial for a good overview
• Plot a single function and 2D data (heatmap, maybe try a 2D Gaussian?) – see my web tutorial
  • include labels – title, axis labels (modify size, color, etc.)
  • include legend (1D), colorbar (2D)
• Plot a FITS image – can use APLpy, pywcsgrid2, kapteyn… whatever package you like
  • include labels
  • include colorbar & units
  • play with colormaps, intensity limits, scaling
  • maybe include size scale bar, compass, beam size…
  • draw a region (e.g., ellipse)
• Plot two images side by side (say, Kepler infrared and radio; follow the pywcsgrid2 tutorial)
  • match coordinates
  • make contours
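Here's a minimal sketch of the function-definition and fitting goals above, using numpy/scipy/matplotlib (the Gaussian parameters and the fake noisy data are made up for illustration):

```python
import numpy as np
from scipy.optimize import curve_fit
import matplotlib.pyplot as plt

def gaussian(x, amp, mu, sigma):
    """Simple 1D Gaussian."""
    return amp * np.exp(-(x - mu)**2 / (2 * sigma**2))

# Generate some fake noisy data to fit
rng = np.random.default_rng(42)
x = np.linspace(-5, 5, 100)
y = gaussian(x, amp=2.0, mu=0.5, sigma=1.2) + rng.normal(0, 0.1, x.size)

# Fit the function parameters, starting from a rough initial guess
popt, pcov = curve_fit(gaussian, x, y, p0=[1.0, 0.0, 1.0])
perr = np.sqrt(np.diag(pcov))  # 1-sigma parameter uncertainties
print("amp, mu, sigma =", popt, "+/-", perr)

# Plot the data and best fit, with labels and a legend
plt.plot(x, y, "k.", label="data")
plt.plot(x, gaussian(x, *popt), "r-", label="best fit")
plt.xlabel("x")
plt.ylabel("y")
plt.legend()
plt.show()
```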
# Observational Astronomy Primer Exercises: 1.) Viewing FITS Images

To help you become familiar with astronomical data files – .FITS format – I would like to give you a small task: please experiment with some supernova images in DS9. Here is what I'm thinking for some small goals:

• Download data for one astronomical target from two different telescopes and display them together, side-by-side.
• Stretch the image scaling to appropriate levels and experiment with color maps.
• Match the coordinates.
• Make labels.
• Create regions.
• Output/print an image (png, jpg, etc.) of your results & save a 'backup' file.

I discuss photometry in another post & how to do simple flux extraction with DS9 and HIPE later.

Try Kepler's SNR (G4.5+6.8) in IR http://chandra.harvard.edu/photo/openFITS/multiwavelength_data.html and try to find one more image from another band that is not listed on that page. I will discuss some good online databases tomorrow, but one good option would be a radio image from the VLA – you can download VLA data from the NRAO image archive retrieval tool https://archive.nrao.edu/archive/archiveimage.html

A good .fits file will include the necessary WCS (pointing) and brightness units, and beam size information if needed.

#### Units:

The main header 'card' to look for is 'BUNIT', which should be something like Jy/beam, MJy/sr, W/m2/pixel, etc. Sometimes images you download (such as HST, GALEX…) will still be in units of counts/sec or another system that will need to be converted to physical units before you can integrate fluxes. If you need to perform these conversions, try looking at the instrument webpages or papers in the literature to get the conversion factors; otherwise you can try to contact the PI who owns the data.

To convert the beam area – that is, to calculate the beam size as pixels/beam or sr/beam etc. – our job is easy if the PSF can be approximated by a 2D Gaussian. Properly-formed radio, sub-mm and IR headers should have the beam parameters listed as BMAJ, BMIN, and BPA – the beam major axis FWHM, minor axis FWHM, and position angle, respectively. All 3 are normally in units of degrees. The PA in the FITS standard is defined as counter-clockwise from 'image north' or the upper y-axis, as opposed to the mathematical standard of measuring from the right x-axis assumed by python and other programming languages.

#### WCS (World Coordinate System)

The information about which pixel corresponds to what RA/DEC on the sky is given in the header as well. Here is the NASA dictionary of standard header cards: http://heasarc.gsfc.nasa.gov/docs/fcg/standard_dict.html

Basically, all the header cards beginning with C (CTYPE, CDELT, …) have to do with the axis and pointing information. Here is a brief description of some:

• CTYPE gives the projection type – SIN, TAN, …
• CRPIX gives the reference pixel number (starting from 1)
• CRVAL gives the pointing coordinate of the reference pixel
• CUNIT gives the units of the reference coordinate values (default is degrees)
• CDELT gives the width of each pixel in sky coordinates
• CROTA gives the image rotation angle, if applicable

The number after the main keyword denotes which image axis – 1 is for the x-axis (RA), 2 is for the y-axis (DEC), 3 is for the z-axis (velocity/frequency/wavelength) in a 3D data cube, 4 is for the Stokes axis. So, CRPIX1 is the reference pixel for the x-axis, CDELT2 is the pixel width in the y-axis, CUNIT3='Hz' means that the spectral axis channels are stepped in frequency, etc.

Instead of CDELT/CROTA etc. you may find your image using an alternate grid reference system based on the cards CD1_1, CD2_1, CD1_2 and CD2_2. These four parameters define a matrix grid of the pixel scale and rotation.
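As a quick sanity check on these keywords, here's a minimal astropy sketch ('kepler.fits' is a placeholder filename – point it at any 2D image with valid WCS keywords):

```python
from astropy.io import fits
from astropy.wcs import WCS
from astropy.wcs.utils import proj_plane_pixel_scales

# 'kepler.fits' is a placeholder - use any image with valid WCS keywords
hdu = fits.open("kepler.fits")[0]
w = WCS(hdu.header)

# Sky coordinate of the reference pixel. CRPIX is 1-indexed in FITS,
# while astropy pixel coordinates are 0-indexed, hence the '- 1'.
x_ref = hdu.header["CRPIX1"] - 1
y_ref = hdu.header["CRPIX2"] - 1
print(w.pixel_to_world(x_ref, y_ref))  # should match CRVAL1/CRVAL2

# Per-axis pixel scale in degrees, regardless of whether the header
# uses CDELT/CROTA or a CD matrix.
print(proj_plane_pixel_scales(w))
```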
# Photometry

(Note, I'm primarily concerned with relatively 'intermediate distance' astronomical sources, such as low-redshift galaxies, supernova remnants, etc., instead of high-redshift galaxies and other point sources. This discussion is therefore biased in that direction.)

First, some resources.

PhotUtils is a python module for photometry that you may find useful: http://photutils.readthedocs.io/en/latest/photutils/aperture.html

I recently came across the APT GUI tool, and while I haven't used it, it seems to be quite useful: http://www.aperturephotometry.org/

Here is some Herschel documentation for aperture photometry in HIPE: http://herschel.esac.esa.int/hcss-doc-15.0/load/dag/html/Dag.ImageAnalysis.HowTo.AperturePhotometry.html

Of course there are many papers that detail how the authors did their photometry and uncertainty estimation. A few recommendations are the H-ATLAS data release paper (Valiante+2016), which I was involved with, and Remy-Ruyer+13, which I liked for its careful treatment of the PACS PSF contribution to the background regions, etc.

### Basic points to consider

• Create an appropriate region – you may need to check the image with extreme scaling in DS9 to make sure you're getting all the target features you want / avoiding those you don't want
• Convert to desired physical units
• Estimate the background – from a nearby aperture (preferably not far away, due to the beam power pattern and inherent sky variability)
• Apply correction factors: aperture correction, color correction, K4 correction (IR)
• Uncertainty estimation (you can do a 'formal' or theoretical estimation, but an empirical approach will be more realistic)

A general 'formula' for photometry will look something like this:

$\mathrm{Raw} = \sum_i \mathrm{Flux}_{\mathrm{pix},i}$ over all pixels in the aperture
$\mathrm{BGsub} = \mathrm{Raw} - \mathrm{Median}_\mathrm{BGreg} \times N_\mathrm{ap}$ (i.e., subtract the typical per-pixel background from each aperture pixel)
$\mathrm{Corrected} = \mathrm{BGsub} \cdot \mathrm{correction}_1 \cdot \mathrm{correction}_2 \cdots$

The goal of photometry is to determine a certain target's integrated emission in some wavelength band, usually for the purpose of calculating flux ratios or constructing an SED. To accurately and precisely determine the true source flux, you need a properly calibrated image and usually some knowledge of the instrument that obtained it, for various correction factors.

### Region creation and flux extraction

The most obvious step is to sum up the individual pixel intensities in your target region of choice. However, there are many things to consider in this step (a sketch combining them follows this list).

1. Be sure to decide on an appropriate aperture (or 'region' in DS9 parlance) for your science goal. Of course, if you are trying to target a small feature in a crowded map, you might make a little circle or ellipse close around that target to make sure you aren't including any flux from nearby features. But take care that your region isn't too small – in particular it should always be larger than the limiting beam size. Furthermore, if you are interested in a target with a complicated brightness profile – perhaps an entire galaxy with an exponential brightness decrease from the center – be sure to make your aperture large enough to include all the necessary emission. One good way to ensure you are doing this (for your image, at least) is to play with the data scaling in DS9 to see what the very faint extended emission is doing, etc. –> I'll try to bring up an image to demonstrate some good and problematic choices of apertures for various features of interest.
2. You should make sure you understand the units of the image. If they are something like Jy/beam, you will need to know what the beam area is to convert to flux density; and if you want to convert to something like luminosity, you will also need to know how to properly convert from the "per Hz" – if your image is ν·Fν you can just use the band's characteristic frequency, but if it's from a collapsed data cube you will need to know the channel width, and for some instruments you may need to know the overall imager bandwidth, etc.

3. If you intend to compare the integrated fluxes between images at different spatial resolutions, you will need to convolve the sharper images down to the limiting beam size (sometimes called "smoothing" the images). This ensures that your flux integrator is viewing the images in the same way – that no flux is being spread out by a PSF in a different way. Basically all astronomy data reduction/analysis packages have tools for this: CASA has imsmooth(), HIPE has imageConvolution(), and python has several excellent tools such as the astropy.convolution package.

4. If your region is relatively small, or alternatively if your pixels are relatively large, you may want to employ sub-pixel precision in your sum. It just depends on the level of precision you need, but if changing your desired region by a small increment changes the resulting flux by a few percent or more, you are probably in the regime of caring about this. One simple way to get good payoff for relatively little work is to divide each pixel into something like 4, 9, 16, or 25 etc. subpixels for the flux integration. (Make sure to properly conserve flux if you do this!)
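Here's a minimal sketch of the region + background workflow using the PhotUtils module listed in the resources above (the file name, source position, and radii are all made up):

```python
import numpy as np
from astropy.io import fits
from photutils.aperture import (ApertureStats, CircularAnnulus,
                                CircularAperture, aperture_photometry)

# Placeholder file: an image already converted to Jy/pixel
data = fits.getdata("map_Jy_per_pixel.fits")

# Source aperture and background annulus (position and radii are made up)
position = (251.3, 198.7)  # (x, y) pixel coordinates
src = CircularAperture(position, r=20.0)
annulus = CircularAnnulus(position, r_in=30.0, r_out=45.0)

# 'Typical' per-pixel background from the annulus; the median is more
# robust than the mean if faint sources contaminate the annulus
bg_per_pixel = ApertureStats(data, annulus).median

# method='exact' computes the exact aperture/pixel overlap, which takes
# care of the sub-pixel precision issue discussed above
raw = aperture_photometry(data, src, method="exact")["aperture_sum"][0]
flux = raw - bg_per_pixel * src.area

print(f"Background-subtracted flux: {flux:.3f} Jy (correction factors still needed)")
```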
### Background estimation

• The 'standard practice' is to determine the background level from an annular region just outside of the target region, but this isn't always possible. If, for example, you have bright foreground/background stars in the annulus just around your target, you may wish to blank them out or use a slightly further-out annulus. Or if there are other galaxies nearby in the field, or if the PSF throws a noticeable portion of the source flux into the nearby background region (more on that below), you may have to use non-annulus regions even further out. Be sure not to use regions of sky that are obviously biased, such as a large negative or positive bowl – remember, you want to estimate the typical background. Each situation is a little bit different, and you will need to tailor your method to the data available to obtain the most unbiased estimate of the background.
• Once you have determined the best region to use for your background pixels, you will need to decide on an appropriate way to characterize the 'typical' background level. A simple average may be acceptable, or the intensity distribution may make the median more appropriate. Perhaps you may find that sigma-clipping will give a more robust value. Looking at the histogram of background pixel intensities can be useful for determining the best course of action. Ultimately, as long as you have good justification for your choice, you should be fine.
• Does the PSF make your BG region contain a contribution from the source? For example, in PACS images, the beam has a measurable contribution out to ~1000 arcsec (though of course it's quite small that far out). Typical small PACS maps are a few arcminutes across, so in principle every pixel in those maps will have a small contribution from the target source (and indeed from every other pixel, for that matter). The effect may be small for your science goals, but it may be something that you decide you need to correct for. See, e.g., Remy-Ruyer+2013 for one example of how to deal with this.

### Correction factors – discussed in the previous post

• Aperture correction – This is a correction for the amount of flux that the beam has scattered outside of your chosen aperture. This will be a significant amount for small regions around the size of the beam (say, tens of percent), but will be small or negligible for very large apertures. One of the best ways to calculate this is to directly sum the fraction of the total power enclosed in your chosen aperture – by placing your region on a FITS file of the beam profile, for example. But if you know the instrument's beam profile and you are using a standard aperture like a circle, you can construct an 'Enclosed Energy Function' (EEF) in advance by calculating the enclosed energy percentage for a wide range of radii. Then your estimated aperture correction would be 1/EEF at the region's radius. HIPE has a function for calculating this, or you can download the EEF tables yourself to create your own interpolation routines (a small interpolation sketch follows this list).
• Color correction (IR) – This is a correction to account for the fact that the data reduction assumed a flat spectrum response, while in reality your source spectrum is something like a blackbody, with a different power-law spectral index. Again, HIPE has a tool to calculate this for you, or you can retrieve the tables manually and calculate it yourself. See the SPIRE Handbook: http://herschel.esac.esa.int/Docs/SPIRE/html/spire_om.html#x1-1000046
• K4 correction (IR, for extended objects) – This is a small correction to apply for extended sources to convert the RSRF-weighted flux density to monochromatic surface brightness. Use the HIPE tool or download the tables manually: http://herschel.esac.esa.int/Docs/SPIRE/html/spire_om.html#x1-830072
• Beam area – if you need to convert from Jy/beam to Jy. Again, because of the assumed flat spectrum in the data reduction, your beam area will be slightly different (~ a few %) for the source's true spectral index. Use the HIPE function or download the tables yourself: http://herschel.esac.esa.int/Docs/SPIRE/html/spire_om.html#x1-1000024
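As a sketch of the EEF interpolation approach to aperture corrections (the EEF table values here are made up – use the real tables from the instrument team):

```python
import numpy as np
from scipy.interpolate import interp1d

# Made-up EEF table: radius (arcsec) vs. fraction of total beam power
# enclosed. Real values come from the instrument team / HIPE.
radius_arcsec = np.array([10.0, 20.0, 40.0, 60.0, 100.0, 200.0, 600.0])
eef = np.array([0.55, 0.75, 0.87, 0.91, 0.95, 0.98, 1.00])

eef_at = interp1d(radius_arcsec, eef, kind="cubic")

aperture_radius = 35.0  # arcsec
aperture_correction = 1.0 / eef_at(aperture_radius)
print(f"Multiply the aperture flux by {aperture_correction:.3f}")
```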
### Uncertainty estimation

Some sources of uncertainty in your measurement:

• Uncertainty in the background level – if the BG pixel intensities are roughly Gaussian-distributed, you can estimate this from the standard deviation of the BG pixels. Since the uncertainty on the mean decreases as the square root of the number of measurements, $\sigma_\mathrm{BG} \sim \mathrm{stddev}(BG)/\sqrt{N_\mathrm{BG}} \times N_\mathrm{ap}$
• Uncertainty on individual pixel intensities (usually for stacked or drizzled images). The total pixel error is generally calculated as the quadratic sum of all $\sigma_{\mathrm{pix},i}$
• Confusion noise – this is particularly important for catalog measurements of small faint sources. This is basically the random fluctuation in the background sky brightness; below this level you can't detect a source, even if you integrate for more time. Causes are foreground Galactic cirrus, the cosmic background, and background galaxies. Confusion noise depends on wavelength, position on the sky, and beam profile. See the Herschel docs for a more complete discussion: http://herschel.esac.esa.int/Docs/Herschel/html/ch04s03.html
• One of the most fundamental is the uncertainty on the source flux sum. Some authors choose to use a 'theoretical' estimate of this error: if you assume that the variation in the aperture pixels follows the variation in the background pixels, then $\sigma_\mathrm{source} = \sqrt{N_\mathrm{ap}} \cdot \mathrm{stddev}(BG)$. But this on its own doesn't address the inherent variation due to the shape/location of your chosen aperture. If you were to take your source region and move it around the map, the enclosed flux would vary by some amount – even if you are only moving it around 'empty' sky. This is a complex combination of many effects, including the relation between the region shape and the beam size, the confusion noise, inherent variation across the map, etc. This can often be the dominant source of uncertainty in your measurement, probably because it includes so many effects. You can estimate it empirically with Monte Carlo methods (see the sketch after this list). The idea is quite simple – place the source aperture at many places around your map that are empty of 'real' emission from the galaxies or whatever you're interested in, then integrate the flux in each of these test regions. The standard deviation of this array of test-aperture values should serve as a good estimate of the inherent variability your chosen region shape should produce in your map.
• Systematic/calibration uncertainties – These include the beam area uncertainty (4% for SPIRE) and the absolute flux calibration uncertainty (5.5% for SPIRE). Spectroscopy and polarization measurements will of course also have their own systematic uncertainties.

Once you have determined all the individual uncertainties you need to include, if they are all independent of each other, you can calculate the total uncertainty as the quadratic sum of the individual components. For example: $\sigma_\mathrm{tot} = \sqrt{ \sigma^2_\mathrm{BG} + \sigma^2_\mathrm{source} + \sigma^2_\mathrm{pix} + \sigma^2_\mathrm{cal} + \dots }$
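Here's a minimal sketch of that Monte Carlo approach, again with PhotUtils (it assumes you pass in a map that is mostly empty sky, or that you have masked the real sources first):

```python
import numpy as np
from photutils.aperture import CircularAperture, aperture_photometry

def mc_aperture_uncertainty(data, radius, n_trials=200, seed=None):
    """Drop the source aperture at random positions in a map of 'empty'
    sky and return the standard deviation of the aperture sums."""
    rng = np.random.default_rng(seed)
    ny, nx = data.shape
    sums = []
    for _ in range(n_trials):
        # Random aperture center, kept fully inside the map
        x = rng.uniform(radius, nx - radius)
        y = rng.uniform(radius, ny - radius)
        ap = CircularAperture((x, y), r=radius)
        sums.append(aperture_photometry(data, ap)["aperture_sum"][0])
    return np.std(sums)

# Usage: sigma_source = mc_aperture_uncertainty(empty_sky_map, radius=20.0)
```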
– Spectrometers will also observe a blank patch of sky separately for background subtraction.

## FIR observing

The atmosphere is opaque to incoming IR light across many bands, due to absorption by water vapor.  You need very high-elevation, dry sites for the open windows in the NIR and submm – or, ideally, airborne/space observatories like SOFIA & Herschel.

– The dark/bias current is usually much more of a problem, because the thermal emission of the science target can be very similar to the thermal emission at the actual physical temperature of the detector/telescope/surroundings.  So these are typically cryogenically cooled, but we still need to be very careful with our calibration.
– The sky/background is quite high & variable in the IR, and atmospheric conditions can change quickly.
– Account for these facts by observing in a "chop-nod" pattern: chopping to remove the sky background, nodding to remove the telescope background.
– The secondary mirror moves from on-source to off-source (a few arcmins away), usually at rates of a few Hz.  This is called a 'chop'.  One chop contains the source plus the background, the other chop contains only background emission.  So when you subtract, you get the source-only signal.  (You will still need to do a separate background subtraction in your photometry, though!)
– The telescope itself is still contributing a lot of signal to your readouts – it can actually be many hundreds of times the astronomical signal.  To remove this effect in single pointed observations, the telescope is physically slewed to an offset but nearby position a few times per minute – this is called the "nod".  The chopped observations are repeated at the other nod to get a second background-subtracted image.  Then these are combined to give a final clean image that should only contain astronomical signal.
– The SOFIA FORCAST website has a really nice description and animations of chop/nod observing: https://www.sofia.usra.edu/science/proposing-and-observing/proposal-calls/cycle-4/cycle-4-phase-ii/why-chop-and-nod-forcast
– For mapping large areas of the sky, cross-scan mapping can be employed (e.g. SPIRE 'large map' mode).

For IR telescopes, the telescope staff will have determined the flux calibration scale by repeated observations of 'standard' objects of infrared emission, such as planets and asteroids in our solar system.  Then in the data reduction process you can convert from instrument readout units like Volts to physical units like Watts and Jansky. Note: 1 Jansky = 10^−26 W/m²/Hz.

## (Spatial) Resolution

Imagine looking up into the sky on a dark night and spotting a particularly bright point somewhere out there. Typical human eyesight has about 1 arcminute resolution [source], so if the bright spot on the sky is smaller than 1′, you obviously can't make out any detail below that scale, and it's effectively a point source for your detector (eye).  What does the true source look like?  It could be a single bright star far away, or a broader faint star, or even a small cluster of stars, a galaxy, or something else entirely. Which of these possible sources is the true source you're looking at? You will need a telescope to find out.

To determine what your bright point on the sky contains – that is, to increase the spatial resolution – you will of course observe with a fancy telescope of some sort.  There are many subtle effects of the optics and detector you must be aware of if you want to understand the precise quantitative and qualitative properties of your signal.
One of these, which is very important when you get down to the lowest spatial scales in your image, is the effect of the Point Spread Function (often called the 'resolving beam' in longer-wavelength astronomy).

## Beam/PSF

When we observe light with any detector (whether that's our eye, a telescope, or a common camera, etc.), we are observing through a certain aperture.  Optics theory (and empirical demonstration, for that matter) tells us that the pattern of light we detect through an aperture is a convolution of the original emission incident on the aperture and the Fourier transform of the aperture shape/profile.  Remember the simple single-slit laser experiment from your undergraduate physics labs (Fraunhofer diffraction) as a 1-dimensional example.  There, the incident light is basically uniform over the slit.  But you don't see a uniform laser dot on the wall – you see a sinc² function.  The aperture profile is basically a 'top-hat' (rectangle) function, whose Fourier transform is a sinc function.  The convolution of those profiles gives the detected amplitude; the intensity goes as the amplitude times its complex conjugate, hence sinc² on the wall.

The point-spread function (also called a "beam" in radio/submm) is the pattern a single small source of light (smaller than the limiting 1.22 λ/D) will make on the detector due to the optics of the aperture.  An idealized circular (2D) aperture produces an Airy-disk PSF pattern, and the central part of that can be approximated as a 2D Gaussian.  For a more complicated real-world telescope, the PSF can be quite complex due to things like imperfections in the reflector surface, occultation from support struts, etc.  What this means for us in practice, when we are trying to measure the actual flux of an object, is that even if you observe a tiny point source in the sky, a (hopefully) small amount of that power coming through the aperture will be spread out over the detector in the pattern of that instrument's PSF.  You need to correct for the fraction of emission that the PSF spreads outside of your integration region during aperture photometry.  For very large regions, the effect gets much smaller, but even for Herschel images with ~20″ resolution, the PSF can make a difference of a few percent in regions out to a couple arcminutes.

For a given optics system, all else being equal, increasing the size of your dish will mean more collecting area to help with sensitivity – you can detect fainter things.  But a larger diameter also directly reduces the theoretical smallest scale (1.22 λ/D) you can resolve.  Another way to say this is that a larger dish means a smaller beam or PSF in our idealized telescope.  At some point, building a larger dish will become prohibitive because of cost, land, or engineering constraints.  Interferometers such as the VLA and ALMA dramatically increase the maximum resolving baseline by combining signals from multiple separate dishes that can be many kilometers apart (or even the whole Earth, for VLBI).

In the following example images of our funny source, the beam size is gradually increasing to the right – as would happen if you observed with progressively smaller telescopes.  Or, conversely, if the same telescope observed the imaginary source at smaller and smaller angular sizes.  Not only does the image become more 'blurry' as the beam gets larger, but the flux from the source gets spread out more to the other pixels in the image.  Be aware that more pixels does not necessarily mean better spatial resolution.
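To make the 1.22 λ/D scaling concrete, here is a minimal Python sketch; the 3.5 m dish and 250 μm wavelength are illustrative Herschel-like assumptions, not values taken from this post:

```python
import math

def diffraction_limit_arcsec(wavelength_m: float, dish_diameter_m: float) -> float:
    """Rayleigh criterion: smallest resolvable angle, theta = 1.22 * lambda / D."""
    theta_rad = 1.22 * wavelength_m / dish_diameter_m
    return math.degrees(theta_rad) * 3600.0  # radians -> arcseconds

# Herschel-like example: a 3.5 m primary mirror observing at 250 microns
print(f"{diffraction_limit_arcsec(250e-6, 3.5):.1f} arcsec")  # ~18 arcsec
```

The result (~18″) is consistent with the ~20″ Herschel resolution quoted above, and it shows directly why a larger dish (bigger D) or a shorter wavelength shrinks the beam.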
See the tutorial on convolution and regridding for some more discussion on these points.

## Color Corrections

Herschel/SPIRE imaging bands are not limited to exactly 250, 350, 500 μm – they actually extend about 50 μm to either side.  In addition to that, the filter transmission is not uniform across all wavelengths.  The exact profile of the wavelength-dependent transmission in a given imaging band is called the Relative Source Response Function (RSRF).  The incoming emission spectrum can have a variety of shapes, such as a power law.  The flux density recorded in the bolometer is the integral of the source's spectrum weighted by the RSRF.  The data pipeline doesn't know what shape the source spectrum has, however, so it just assumes the spectrum is flat – that is, νSν = constant.  It also assumes that the source is a point source.  (This calibration is done at the level-1/detector-timeline data stage.)  The monochromatic flux densities you get out of SPIRE – Sν0 for ν0 = 250, 350, 500 μm – are produced with these assumptions, and weighted by the RSRF.  In reality, though, your source is possibly not a point source, and almost certainly doesn't have a flat or uniform spectrum.  For example, blackbody emission (planets, asteroids) in the SPIRE wavelengths (roughly the Rayleigh-Jeans limit) will follow a power law with α ~ 2, while cold (~20 K) dust will more typically have α ~ 3 or 4, depending on the dust properties.  To get a usable 'monochromatic' flux (for example, a '250 micron flux'), you need to correct for these two assumptions.

The correction for the shape of the spectrum is called a 'color correction'.  The assumed functional form for the source spectrum around the monochromatic frequency ν0 is:  S(ν) = S(ν0)·(ν/ν0)^α.  Again, α = −1 (flat νSν) is assumed by the pipeline, but your true source spectrum can be something like α = 2 (blackbody), in which case you apply the appropriate correction factor.  See the SPIRE Handbook Section 5.2.6 "Colour correction for power-law spectra" (and 5.2.7 for greybodies) for a better explanation and the pertinent equations.  HIPE contains a table of color correction factors for various values of α, as well as for modified blackbodies as functions of temperature and β.

Herschel/HIPE assumes your data is a point source for reduction purposes.  If your source is actually extended, you need to make a small gain correction (called a "K4" correction) to the bolometer signals.  This is because point source observations care primarily about the peak signal in the bolometers, while extended source observations care about the consistency of the integrated signal between individual bolometers, which reduces striping in maps.  (See the SPIRE Handbook, Section 5.2.5 for a better explanation.)  This K4 correction factor also depends on the assumed shape of your source spectrum, and you can find the table of corrections for various indices α in the SPIRE Handbook (possibly out of date) or within HIPE.

When converting from Jy/beam units to MJy/sr – that is, using the effective beam/PSF area – note that the beam size also depends on the assumed spectral index.  Choose the appropriate beam area from the tables in HIPE.
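Conceptually, the color correction is just a ratio of RSRF-weighted band integrals between the pipeline-assumed spectrum (α = −1) and your true source spectrum.  Below is a schematic Python sketch of that idea; the top-hat RSRF and band edges are placeholder assumptions, and the exact SPIRE convention includes additional terms, so use the HIPE tables or the SPIRE Handbook equations for real work:

```python
import numpy as np

def color_correction(nu, rsrf, nu0, alpha_true, alpha_pipeline=-1.0):
    """
    Schematic color-correction factor: ratio of band-integrated,
    RSRF-weighted power-law spectra for the pipeline-assumed index
    (alpha = -1, i.e. nu*S_nu = const) vs. the true source index.
    Multiply the pipeline monochromatic flux density by this factor.
    """
    pipeline = np.trapz(rsrf * (nu / nu0) ** alpha_pipeline, nu)
    true_src = np.trapz(rsrf * (nu / nu0) ** alpha_true, nu)
    return pipeline / true_src

# Toy example: a flat (top-hat) RSRF over a SPIRE-like 250-um band,
# for a Rayleigh-Jeans-like source (alpha = 2)
c = 2.998e8                                      # speed of light, m/s
nu = np.linspace(c / 300e-6, c / 200e-6, 500)    # band edges ~200-300 um (assumed)
rsrf = np.ones_like(nu)                          # placeholder transmission curve
print(color_correction(nu, rsrf, nu0=c / 250e-6, alpha_true=2.0))
```

The point of the sketch is the structure of the calculation, not the numbers: a realistic RSRF curve (downloadable with the HIPE tables) slots in directly for the placeholder array.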
2018-02-19 11:46:27
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 0, "mathjax_asciimath": 1, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.5893699526786804, "perplexity": 2075.6579111057363}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2018-09/segments/1518891812584.40/warc/CC-MAIN-20180219111908-20180219131908-00490.warc.gz"}
https://www.hpmuseum.org/forum/showthread.php?mode=threaded&tid=9342&pid=82137
(HP65) Factorial and Gamma Function

10-26-2017, 06:59 AM (This post was last modified: 10-26-2017 05:42 PM by Dieter.)
Post: #10

Dieter – Senior Member – Posts: 2,398 – Joined: Dec 2013

RE: (HP65) Factorial and Gamma Function

(10-25-2017 04:06 PM) peacecalc Wrote: thank you for your answers. Twenty-five years ago I wrote a "turbo-pascal" program for the gamma-fct with real arguments.

Ah, yes, Turbo Pascal – I loved it.

(10-25-2017 04:06 PM) peacecalc Wrote: I remember this, I also used for large arguments the stirling approx (x>10) as a example for coprozesser programming. But for smaller arguments I used the method described above (divsion by integer values). For negative number I used the formula: (...)

Great. Here is an HP-67/97 version that applies the same formula, modified for x! instead of Gamma. Also, the sin(pi*x) part is calculated in a special way to avoid roundoff errors for multiples of pi, especially if x is large.

Edit: the code has been replaced with a slightly improved version.

Code:
LBL e
8 EEX 4 1/x CHS STO 1
. 0 0 2 6 9 6 CHS STO 2
2 8 8 1/x STO 3
1 2 1/x STO 4
CLX RTN
LBL E
CF 2 1 STO 0 R↓
x≠0? x>0? GTO 0
SF 2 CHS ENTER ENTER FRAC 1 CHS COS-1 * SIN
1 CHS R↑ INT Y^X * PI X<>Y / STO 0 R↓ 1 -
LBL 0
6 X<>Y
LBL 1
x>y? GTO 2 1 + STO*0 GTO 1
LBL 2
ENTER ENTER ENTER 1 CHS e^x * X<>Y Y^X RCL 0 / X<>Y
2 * PI * SQRT *
RCL 1 R↑ / RCL 2 + R↑ / RCL 3 + R↑ / RCL 4 + R↑ / 1 + *
F2? 1/x RTN

Initialize with f [e].
–3,6 [E] => –0,888685714
–4,6 [E] => 0,246857143

Edit: If you don't mind one more second of execution time, here is a version with the constants directly in the code. Except for R0, no other data registers are used, and an initialisation routine is not required either.

Code:
LBL E
CF 2 1 STO 0 R↓
x≠0? x>0? GTO 0
SF 2 CHS ENTER ENTER FRAC 1 CHS COS-1 * SIN
1 CHS RUP INT Y^X * PI X<>Y / STO 0 R↓ 1 -
LBL 0
6 X<>Y
LBL 1
x>y? GTO 2 1 + STO*0 GTO 1
LBL 2
ENTER ENTER ENTER 1 CHS e^x * X<>Y Y^X RCL 0 / X<>Y
2 * PI * SQRT *
8 EEX 4 1/x CHS R↑ /
. 0 0 2 6 9 6 - R↑ /
2 8 8 1/x + R↑ /
1 2 1/x + R↑ /
1 + *
F2? 1/x RTN

Dieter
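For readers who would rather test the math off-calculator: the listing above evaluates a Stirling-series approximation of x!, with the correction polynomial 1 + 1/(12x) + 1/(288x²) − 0.002696/x³ − (1/80000)/x⁴ (the stored register constants), after shifting the argument up to x ≥ 6 via the recurrence and using the reflection formula z!·(−z)! = πz/sin(πz) for negative arguments.  A rough Python transcription of that logic (a sketch, not a keystroke-for-keystroke port of the RPN program) might look like:

```python
import math

# Stirling-series coefficients as stored in the calculator's registers R4..R1
C = [1 / 12, 1 / 288, -0.002696, -1 / 80000]

def stirling_factorial(x: float) -> float:
    """x! for real x >= 6 via the truncated Stirling series used above."""
    series = 1 + C[0] / x + C[1] / x**2 + C[2] / x**3 + C[3] / x**4
    return math.sqrt(2 * math.pi * x) * (x / math.e) ** x * series

def factorial(x: float) -> float:
    """Real-argument factorial, mirroring the program's overall logic."""
    if x < 0:
        # Reflection formula: z! * (-z)! = pi*z / sin(pi*z)
        z = -x
        return math.pi * z / (math.sin(math.pi * z) * factorial(z))
    # Shift the argument up to >= 6, dividing out the recurrence factors
    shift = 1.0
    while x < 6:
        x += 1
        shift *= x
    return stirling_factorial(x) / shift

print(factorial(-3.6))  # -0.8886857... (matches the post's test value)
print(factorial(-4.6))  #  0.2468571...
```

Note that this plain transcription does not reproduce Dieter's careful sin(πx) evaluation, so for large negative arguments the calculator version is the more accurate one.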
2019-06-16 20:53:55
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 1, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.5369468331336975, "perplexity": 11434.477008894117}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2019-26/segments/1560627998298.91/warc/CC-MAIN-20190616202813-20190616224813-00420.warc.gz"}
https://cs.stackexchange.com/questions/106450/how-does-a-predictive-coding-aid-in-lossless-compression
# How does predictive coding aid in lossless compression?

I'm working on a lab where we need to apply lossless predictive coding to an image before compressing it (with Huffman, or some other lossless compression algorithm). From the example seen below, it's pretty clear that by pre-processing the image with predictive coding, we've modified its histogram and concentrated all of its grey levels around 0. But why exactly does this aid compression? Is there maybe a formula to determine the compression rate of Huffman, knowing the standard deviation and entropy of the original image? Otherwise, why would the compression ratio be any different? It's not like the range of values has changed between the original image and the pre-processed image.

Huffman coding, as usually applied, only considers the distribution of singletons. If $$X$$ is the distribution of a random singleton, then Huffman coding uses between $$H(X)$$ and $$H(X)+1$$ bits per singleton, where $$H(\cdot)$$ is the (log 2) entropy function. In contrast, predictive coding can take into account correlations across data points. As a simple example, consider the following sequence: $$0,1,2,\ldots,255,0,1,2,\ldots,255,\ldots$$ Huffman coding would use 8 bits per unit of data, whereas with predictive coding we could potentially get to $$O(\log n)$$ bits for the entire sequence.
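To see the answer numerically, here is a small self-contained Python experiment (the synthetic "scanline" is an illustrative assumption standing in for correlated image data): delta coding concentrates the residual histogram near 0, which lowers the zeroth-order entropy $$H(X)$$ that bounds Huffman's per-symbol rate.

```python
import math
from collections import Counter

def entropy_bits(symbols) -> float:
    """Zeroth-order (singleton) entropy H(X): a lower bound on Huffman's rate."""
    counts = Counter(symbols)
    n = len(symbols)
    return -sum(c / n * math.log2(c / n) for c in counts.values())

# A smooth "scanline": neighboring pixels are highly correlated
row = [100 + (i * 7) % 20 - 10 for i in range(10000)]

# Predictive (delta) coding: transmit pixel - prediction, prediction = previous pixel
residuals = [row[0]] + [(row[i] - row[i - 1]) % 256 for i in range(1, len(row))]

print(f"raw      : H = {entropy_bits(row):.2f} bits/symbol")       # ~4.3 bits
print(f"residuals: H = {entropy_bits(residuals):.2f} bits/symbol")  # <1 bit
# The residual alphabet collapses to a few values near 0, so a Huffman code
# built on the residuals uses far fewer bits per pixel than one on raw values,
# even though both alphabets nominally span the same 0..255 range.
```

The range of values is indeed unchanged; what changes is the shape of the distribution, and Huffman's rate tracks the entropy of that distribution, not the range.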
2019-07-17 01:00:25
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 6, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.5005741119384766, "perplexity": 713.2977877094999}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2019-30/segments/1563195525004.24/warc/CC-MAIN-20190717001433-20190717023433-00090.warc.gz"}
https://www.qb365.in/materials/stateboard/12th-chemistry-chemical-kinetics-model-question-paper-4145.html
" /> --> #### Chemical Kinetics Model Question Paper 12th Standard EM Reg.No. : • • • • • • Chemistry Time : 01:00:00 Hrs Total Marks : 50 12 x 1 = 12 1. The decomposition of phosphine (PH3) on tungsten at low pressure is a first order reaction. It is because the (NEET) (a) rate is proportional to the surface coverage (b) rate is inversely proportional to the surface coverage (c) rate is independent of the surface coverage (d) rate of decomposition is slow 2. For a reaction Rate = kacetone[]32 then unit of rate constant and rate of reaction respectively is (a) (mol L-1 S-1),(mol1/2L1/2S-1) (b) (mol-1/2L1/2s-1),(mol L-1s-1) (c) (mol1/2L1/2s-1),(molL-1s-1) (d) (molLs-1),(mol1/2L1/2s) 3. In a first order reaction $x\longrightarrow y$ if k is the rate constant and the initial concentration of the reactant x is 0.1M, then, the half life is (a) $\left( \cfrac { log2 }{ k } \right)$ (b) $\left( \cfrac { 0.693 }{ (0.1)k } \right)$ (c) $\left( \cfrac { In2 }{ k } \right)$ (d) none of these 4. Predict the rate law of the following reaction based on the data given below $2A+B\longrightarrow C+3D$ Reaction number [A] (min) [B] (min) Initial rate (M s-1) 1 0.1 0.1 x 2 0.2 0.1 2x 3 0.1 0.2 4x 4 0.2 0.2 8x (a) rate=k[A]2[B] (b) rate=k[A][B]2 (c) rate=k[A][B] (d) rate=k[A]1/2[B]1/2 5. During the decomposition of H2O2 to give dioxygen, 48 g O2 is formed per minute at certain point of time. The rate of formation of water at this point is (a) 0.75 mol min−1 (b) 1.5 mol min−1 (c) 2.25 mol min−1 (d) 3.0 mol min−1 6. A+B $\longrightarrow$ C; ∆H = 60 kJ mol-1 Eaf = 150 kJ. What is the activation energy of the backward reaction? (a) 210 kJ (b) 105 kJ (c) 90 kJ (d) 145 kJ 7. (a) It is a 2 steps reaction, step 1 is slower than step 2 (b) It is a 2 steps reaction, step 2 is slower than step 1. (c) Single step reaction where B is a activated complex (d) Single step reaction in which B is a reaction intermediate. 8. Which order reaction obeys the expression ${ t }_{ \frac { 1 }{ 2 } }\alpha \frac { 1 }{ \left[ A \right] } ?$ (a) First (b) Second (c) Third (d) Zero 9. The addition of a catalyst during a chemical reaction alters which of the following quantities? (a) Activation energy (b) Entropy (c) Internal energy (d) Enthalpy 10. Rate law cannot be determined from balanced chemical equation if _____________ (a) Reverse reactions is not involved (b) It is an elementary reaction (c) It is a sequence of elementary reactions (d) All of the reactants is in excess. Rate law can be determined from balanced chemical equation if it is an elementary reaction. 11. The minimum energy that all colliding molecules must possess so as to make the collisions more effective and successful is_____________ (a) activation energy (b) colliding energy (c) threshold energy (d) kinetic energy 12. Activation energy of a reactant is reduced by ___________________ (a) increased temperature (b) reduced temperature (c) increased pressure (d) reduced pressure 13. 1 x 1 = 1 14. Two reactions A ⟶ Band C ⟶ D has the energy of activation 40 kJ and 60 kJ respectively. Which of the following statement is correct? a) Comparison of rate cannot be determined b) The reaction A ⟶ B proceeds at a faster rate compared to the reaction C ⟶ D c) The reaction A ⟶ B proceeds at a slower rate compared to the reaction C ⟶ D. d) Comparison of rate cannot be determined. () b) The reaction A ⟶ B proceeds at a faster rate compared to the reaction C ⟶ D 15. 3 x 2 = 6 16. Assertion: A positive catalyst increases the rate of reaction. 
Reason: A positive catalyst alters the reaction mechanism and decreases the activation energy.
a) Both assertion and reason are true and the reason is the correct explanation of the assertion.
b) Both assertion and reason are true but the reason is not the correct explanation of the assertion.
c) Assertion is true but reason is false.
d) Both assertion and reason are false.

17. Assertion: Order and molecularity are the same.
Reason: Order is determined experimentally and molecularity is the sum of the stoichiometric coefficients of the rate-determining elementary step.
a) Both assertion and reason are true and the reason is the correct explanation of the assertion.
b) Both assertion and reason are true but the reason is not the correct explanation of the assertion.
c) Assertion is true but reason is false.
d) Both assertion and reason are false.

18. Assertion: A catalyst is a substance which alters the rate of a reaction.
Reason: In the presence of a catalyst, the energy of activation is increased.
a) Both assertion and reason are true and the reason is the correct explanation of the assertion.
b) Both assertion and reason are true but the reason is not the correct explanation of the assertion.
c) Assertion is true but reason is false.
d) Both assertion and reason are false.

1 x 2 = 2

20. Consider the following statements and identify the incorrect statement(s).
(i) Decomposition of H2O2 is a second order reaction.
(ii) $t_{1/2}$ is independent of the initial concentration of a reactant.
(iii) Fractional order reactions are observed depending on their rates.
(iv) Rate = k[A]$^p$[B]$^q$; p + q = order.
a) only (ii) b) both (ii) and (iii) c) only (i) d) None of these

4 x 2 = 8

22. Define half life of a reaction. Show that for a first order reaction the half life is independent of the initial concentration.

23. Describe the graphical representation of a first order reaction.

24. For a chemical reaction, the variation in concentration, ln[A] vs. time in seconds, is given as
(i) What is the order of the reaction?
(ii) What is the unit of the rate constant k?
(iii) Give the relationship between k and $t_{1/2}$.

25. If the rate of a reaction doubles as the temperature is increased from 27 °C to 37 °C, find the activation energy of the reaction.

2 x 3 = 6

27. A reaction is of second order in A and first order in B.
(i) Write the differential rate equation.
(ii) How is the rate affected on increasing the concentration of A three times?
(iii) How is the rate affected when the concentrations of both A and B are doubled?

28. From the rate expressions for the following reactions, determine their order of reaction and the dimensions of the rate constants.
(i) 3NO(g) ⟶ N2O(g); Rate = k[NO]²
(ii) H2O2(aq) + 3I⁻(aq) + 2H⁺ ⟶ 2H2O(l) + I3⁻; Rate = k[H2O2][I⁻]
(iii) CH3CHO(g) ⟶ CH4(g) + CO(g); Rate = k[CH3CHO]$^{3/2}$
(iv) C2H5Cl(g) ⟶ C2H4(g) + HCl(g); Rate = k[C2H5Cl]²

3 x 5 = 15

30. The time for half change in a first order decomposition of a substance A is 60 seconds. Calculate the rate constant. How much of A will be left after 180 seconds?

31. Benzene diazonium chloride in aqueous solution decomposes according to the equation ${ C }_{ 6 }{ H }_{ 5 }{ N }_{ 2 }Cl\longrightarrow { C }_{ 6 }{ H }_{ 5 }Cl+{ N }_{ 2 }$. Starting with an initial concentration of 101 g L$^{-1}$, the volume of N2 gas obtained at 50 °C at different intervals of time was found to be as under:

t (min)         | 6    | 12   | 18   | 24   | 30   | $\infty$
Vol. of N2 (mL) | 19.3 | 32.6 | 41.3 | 46.5 | 50.4 | 58.3

Show that the above reaction follows first order kinetics. What is the value of the rate constant?

32.
The initial rate of a first order reaction is 5.2 × 10$^{-6}$ mol L$^{-1}$ s$^{-1}$ at 298 K. When the initial concentration of the reactant is 2.6 × 10$^{-3}$ mol L$^{-1}$, calculate the first order rate constant of the reaction at the same temperature.
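As a quick numerical check of two of the problems above, here is a short Python sketch; it uses the usual textbook conventions (27 °C → 300 K, R = 8.314 J mol⁻¹ K⁻¹), which are standard but not stated in the paper itself:

```python
import math

R = 8.314  # gas constant, J mol^-1 K^-1

# Q25: rate doubles between 27 C and 37 C -> Arrhenius activation energy:
# ln(k2/k1) = (Ea/R) * (1/T1 - 1/T2)
T1, T2 = 300.0, 310.0
Ea = R * math.log(2) / (1 / T1 - 1 / T2)
print(f"Ea = {Ea / 1000:.1f} kJ/mol")            # ~53.6 kJ/mol

# Q30: first-order decomposition with t1/2 = 60 s
k = math.log(2) / 60                             # k = 0.693 / t_half
frac_left = math.exp(-k * 180)                   # 180 s = 3 half-lives
print(f"k = {k:.5f} s^-1, fraction of A left = {frac_left:.3f}")  # 1/8 = 12.5%
```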
2020-10-20 09:07:09
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.6614444851875305, "perplexity": 1564.3099730284011}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2020-45/segments/1603107871231.19/warc/CC-MAIN-20201020080044-20201020110044-00623.warc.gz"}
https://www.physicsforums.com/threads/rotational-motion-problem.70102/
# Rotational motion problem

1. Apr 5, 2005

### imnotsmart

A 50.0 kg child stands at the rim of a merry-go-round of radius 1.90 m rotating with an angular speed of 3.00 rad/s.
(a) What is the child's centripetal acceleration?
(b) What is the minimum force between her feet and the floor of the merry-go-round that is required to keep her in the circular path?
(c) What minimum coefficient of static friction is required?
Need some help getting started. Don't really know where to go here.

Last edited: Apr 5, 2005

2. Apr 5, 2005

### whozum

a) $$v_{linear} = \omega r$$ That should give you a velocity; along with the radius you can find the centripetal acceleration.
b) You know that the only force keeping her in circular motion is the centripetal force, and the only forces acting are friction and gravity. Does gravity alone help her stay in circular motion? If not, how does friction do the job?
c) This is just number crunching after you figure out the equation in (b).
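A quick numerical check following whozum's hints, in Python (taking g ≈ 9.8 m/s², which is the standard value but not stated in the thread):

```python
# m = 50.0 kg, r = 1.90 m, omega = 3.00 rad/s
m, r, omega = 50.0, 1.90, 3.00
g = 9.8  # m/s^2

v = omega * r          # linear speed, v = omega * r
a_c = v**2 / r         # (a) centripetal acceleration (= omega^2 * r) -> 17.1 m/s^2
F = m * a_c            # (b) friction must supply the centripetal force -> 855 N
mu_min = F / (m * g)   # (c) mu * m * g >= m * a_c                      -> ~1.74

print(a_c, F, mu_min)
```

Note that μ ≈ 1.74 exceeds 1, an unusually large requirement for ordinary surfaces, which is part of the point of the exercise.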
2017-01-22 18:59:16
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 1, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.3474781811237335, "perplexity": 1094.6485790138643}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2017-04/segments/1484560281492.97/warc/CC-MAIN-20170116095121-00121-ip-10-171-10-70.ec2.internal.warc.gz"}
https://jharkhandboardsolution.com/jac-class-9-maths-solutions-chapter-13-ex-13-3/
JAC Class 9 Maths Solutions Chapter 13 Surface Areas and Volumes Ex 13.3

Jharkhand Board JAC Class 9 Maths Solutions Chapter 13 Surface Areas and Volumes Ex 13.3 Textbook Exercise Questions and Answers.

Page-121

Question 1. Diameter of the base of a cone is 10.5 cm and its slant height is 10 cm. Find its curved surface area.
Answer:
Radius (r) = $$\frac{10.5}{2}$$ cm = 5.25 cm
Slant height (l) = 10 cm
Curved surface area of the cone = πrl = ($$\frac{22}{7}$$ × 5.25 × 10) cm² = 165 cm²

Question 2. Find the total surface area of a cone, if its slant height is 21 m and the diameter of its base is 24 m.
Answer:
Radius (r) = $$\frac{24}{2}$$ m = 12 m
Slant height (l) = 21 m
Total surface area of the cone = πr(l + r) = $$\frac{22}{7}$$ × 12 × (21 + 12) m² = $$\frac{22}{7}$$ × 12 × 33 m² = 1244.57 m²

Question 3. Curved surface area of a cone is 308 cm² and its slant height is 14 cm. Find (i) the radius of the base and (ii) the total surface area of the cone.
Answer:
(i) Curved surface area of the cone = 308 cm²; slant height (l) = 14 cm. Let r be the radius of the base.
πrl = 308 ⇒ $$\frac{22}{7}$$ × r × 14 = 308 ⇒ 44r = 308 ⇒ r = $$\frac{308}{44}$$ = 7 cm
(ii) TSA of the cone = πr(l + r) = $$\frac{22}{7}$$ × 7 × (14 + 7) cm² = (22 × 21) cm² = 462 cm²

Question 4. A conical tent is 10 m high and the radius of its base is 24 m. Find: (i) the slant height of the tent; (ii) the cost of the canvas required to make the tent, if the cost of 1 m² canvas is ₹ 70.
Answer:
(i) Radius of the base (r) = 24 m; height of the conical tent (h) = 10 m. Let l be the slant height of the cone.
l² = h² + r² ⇒ l = $$\sqrt{\mathrm{h}^2+\mathrm{r}^2}$$ = $$\sqrt{10^2+24^2}$$ = $$\sqrt{676}$$ = 26 m
(ii) Canvas required to make the conical tent = curved surface area of the cone = πrl = $$\frac{22}{7}$$ × 24 × 26 m² = $$\frac{13728}{7}$$ m²
Cost of 1 m² canvas = ₹ 70
∴ Cost of canvas = ₹ ($$\frac{13728}{7}$$ × 70) = ₹ 137280

Question 5. What length of tarpaulin 3 m wide will be required to make a conical tent of height 8 m and base radius 6 m? Assume that the extra length of material required for stitching margins and wastage in cutting is approximately 20 cm. (Use π = 3.14)
Answer:
Radius of the base (r) = 6 m; height of the conical tent (h) = 8 m. Let l be the slant height of the cone.
l = $$\sqrt{\mathrm{h}^2+\mathrm{r}^2}$$ = $$\sqrt{8^2+6^2}$$ = $$\sqrt{100}$$ = 10 m
CSA of conical tent = πrl = (3.14 × 6 × 10) m² = 188.4 m²
Breadth of tarpaulin = 3 m. Let the length of tarpaulin sheet required be x. Since 20 cm will be wasted in cutting, the usable length will be (x − 0.2) m.
Area of sheet = CSA of tent ⇒ (x − 0.2) × 3 = 188.4 ⇒ x − 0.2 = 62.8 ⇒ x = 63 m
∴ Length of tarpaulin sheet required = 63 m

Question 6. The slant height and base diameter of a conical tomb are 25 m and 14 m respectively. Find the cost of white-washing its curved surface at the rate of ₹ 210 per 100 m².
Answer:
Radius (r) = $$\frac{14}{2}$$ m = 7 m; slant height of the tomb (l) = 25 m
Curved surface area = πrl = $$\frac{22}{7}$$ × 7 × 25 m² = 550 m²
Rate of white-washing = ₹ 210 per 100 m²
Total cost of white-washing the tomb = ₹ (550 × $$\frac{210}{100}$$) = ₹ 1155

Question 7. A joker's cap is in the form of a right circular cone of base radius 7 cm and height 24 cm. Find the area of the sheet required to make 10 such caps.
Answer:
Radius of the cone (r) = 7 cm; height of the cone (h) = 24 cm. Let l be the slant height.
l = $$\sqrt{\mathrm{h}^2+\mathrm{r}^2}$$ = $$\sqrt{24^2+7^2}$$ = $$\sqrt{625}$$ = 25 cm
Sheet required for one cap = curved surface area of the cone = πrl = $$\frac{22}{7}$$ × 7 × 25 cm² = 550 cm²
Sheet required for 10 caps = 550 × 10 cm² = 5500 cm²

Question 8. A bus stop is barricaded from the remaining part of the road by using 50 hollow cones made of recycled cardboard. Each cone has a base diameter of 40 cm and height 1 m. If the outer side of each of the cones is to be painted and the cost of painting is ₹ 12 per m², what will be the cost of painting all these cones? (Use π = 3.14 and take $$\sqrt{1.04}$$ = 1.02)
Answer:
Radius of the cone (r) = $$\frac{40}{2}$$ cm = 20 cm = 0.2 m; height (h) = 1 m
l = $$\sqrt{\mathrm{h}^2+\mathrm{r}^2}$$ = $$\sqrt{1^2+0.2^2}$$ = $$\sqrt{1.04}$$ = 1.02 m
CSA of one cone = πrl = (3.14 × 0.2 × 1.02) m² = 0.64056 m²
CSA of 50 cones = (50 × 0.64056) m² = 32.028 m²
Cost of painting = ₹ (32.028 × 12) = ₹ 384.34 (approximately)
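The whole exercise reduces to three formulas (l² = h² + r², CSA = πrl, TSA = πr(l + r)). A small Python helper, with π = 22/7 as the textbook assumes, reproduces the answers above:

```python
import math

def cone_metrics(r, h=None, l=None, pi=22 / 7):
    """Slant height, curved surface area, and total surface area of a cone."""
    if l is None:
        l = math.sqrt(h**2 + r**2)   # l^2 = h^2 + r^2
    csa = pi * r * l                 # curved surface area
    tsa = pi * r * (l + r)           # total surface area
    return l, csa, tsa

# Question 1: d = 10.5 cm -> r = 5.25 cm, l = 10 cm  => CSA = 165 cm^2
print(cone_metrics(5.25, l=10)[1])
# Question 7: r = 7 cm, h = 24 cm  => l = 25 cm, CSA = 550 cm^2 per cap
print(cone_metrics(7, h=24)[:2])
```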
2022-12-05 15:00:53
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 1, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.6941893100738525, "perplexity": 2314.6943475968906}, "config": {"markdown_headings": false, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2022-49/segments/1669446711017.45/warc/CC-MAIN-20221205132617-20221205162617-00801.warc.gz"}
https://www.nature.com/articles/s41598-017-17510-y?error=cookies_not_supported&code=04ae89c7-56b1-43fa-922a-7387cc4d7ffb
# RNAStructuromeDB: A genome-wide database for RNA structural inference

## Abstract

RNA plays important roles in almost every aspect of biology, and every aspect of RNA biology is influenced by its folding. This is a particularly important consideration in the era of high-throughput sequencing, when the discovery of novel transcripts far outpaces our knowledge of their functions. To gain a comprehensive picture of biology requires a structural framework for making functional inferences on RNA. To this end we have developed the RNA Structurome Database (https://structurome.bb.iastate.edu), a comprehensive repository of RNA secondary structural information that spans the entire human genome. Here, we compile folding information for every base pair of the genome that may be transcribed: coding, noncoding, and intergenic regions, as well as repetitive elements, telomeres, etc. This was done by fragmenting the GRCh38 reference genome into 154,414,320 overlapping sequence fragments and, for each fragment, calculating a set of metrics based on the sequence's folding properties. These data will facilitate a wide array of investigations: e.g. discovery of structured regulatory elements in differential gene expression data or noncoding RNA discovery, as well as allow genome-scale analyses of RNA folding.

## Introduction

Once thought to be solely an intermediary between the genome and proteome, RNA is now known to be a key player in the biology of all living things (as well as viruses, viroids and transposable elements). In addition to carrying the genetic information needed to generate proteins, RNA can also act as a catalyst1,2, encode signals for subcellular localization3,4,5, and regulate gene expression6. RNA's gene regulatory roles can occur in-cis, via sequence/structure elements embedded within messenger (m)RNAs: e.g. regulatory elements in untranslated regions (UTRs) and introns can affect translation5,7, alternative splicing8, and mRNA lifetime/abundance9. Additionally, RNA can regulate gene expression in-trans via intermolecular base pairing between mRNAs and noncoding (nc)RNAs such as micro (mi)RNAs10,11, as well as through interactions that mediate the epigenetic control of gene expression: e.g. the Xist long (l)ncRNA that associates with one of the two X chromosomes in mammalian females and leads to chromosomal condensation and inactivation.

A great deal of work has gone into identifying, understanding, and archiving cis- and trans-regulatory sites on human mRNAs. For example, the regulatory (reg)RNA12 web server contains a large repository of data on various motifs, such as splicing regulatory motifs, polyadenylation signals, and mRNA degradation elements, drawing from additional databases of cis-regulatory elements: e.g. ERPIN13, fRNAdb14, and Rfam15. The RBPMap web tool allows users to deduce primary sequence binding motifs on RNAs for a wide array of regulatory proteins16.

The list of new ncRNAs and novel functions of ncRNAs grows daily; however, this is likely only the "tip of the iceberg". The current release of the human genome reference sequence (GRCh38.p10) is 3,088,269,832 base pairs (bp) long (Genome Reference Consortium). Approximately 90% of our genome is transcribed into RNA17, yet only 1.5% encodes protein. This results in a staggering amount of potentially functional RNA to be characterized.
Although some of this pervasively transcribed RNA may be "junk", many of these sequences are differentially expressed (compiled within the NRED database18) in diseases such as cancer19,20,21, or under conditions of cellular stress22. Considerable effort is underway to identify and elucidate the functions of ncRNAs. A number of labs have been recording and annotating sequences recovered from high-throughput sequencing and more traditional molecular and cell biology approaches. Collections of ncRNA sequences are being built into databases such as Rfam15,23, lncRNAdb24,25, LNCipedia26,27, miRBase28,29,30,31,32 and RNAcentral33,34,35. These important projects are compiling well-annotated and, in many cases, functionally validated ncRNAs alongside other valuable data. The RNA families (Rfam) database, for example, contains entries for families of ncRNAs linked by homology. Rfam entries contain information describing ncRNA biosynthesis, localization, phylogenetic distribution and functional roles, as well as evolutionary conservation of primary sequence and, importantly, secondary structure.

Conservation of secondary structure is a defining feature of ncRNAs and is used in essentially all ncRNA prediction programs36. A powerful, and popular, program for genome-wide ncRNA prediction is RNAz37,38,39,40. This program is based on a support vector machine (SVM) that is trained on data from known ncRNAs contained within Rfam. Two primary training parameters are used for ncRNA classification: a structure conservation index (SCI), which measures conservation of secondary structure, and a thermodynamic z-score, which measures the propensity of a particular sequence to form a defined and energetically stable structure. Typical ncRNAs have structures with high conservation and a high propensity to form structure. RNAz was previously used to scan through whole genome alignments comparing human vs. animal genomes (ranging in similarity down to zebrafish) to identify putative ncRNAs. This yielded >30,000 high-confidence predictions, with ~1,000 that were conserved throughout all vertebrates41. The identification of so many deeply conserved structured RNAs highlights their likely ubiquity and importance.

In both coding and noncoding RNAs, secondary structure plays key roles throughout their functions. The diversity of RNA functions and potential for physiological impact (much like with proteins) is made possible by the ability of RNA to fold into unique functional structures. Functional RNA structures form thermodynamically stable base pairs that have been selected for by evolution42. This is the key principle behind the thermodynamic z-score implemented in RNAz: functional RNAs have a more stable folding energy than randomized sequences. Specific folds can be recognized by regulatory proteins43, occlude/present functional motifs44, or alter the distance between functional sites45. The impact of RNA structure on alternative splicing, for example, has been particularly well-studied46,47. Additionally, awareness of the importance of non-specific RNA structure (e.g. regions that do not form particular folds or adopt dynamic structures) is growing. For example, thermodynamically stable regions within open reading frames are proposed to modulate the speed of translation and thus affect protein folding48,49. RNA intramolecular thermodynamic stability is important in mediating its accessibility for intermolecular interactions: e.g.
stable regions in UTRs are less accessible to miRNA binding, thus affecting miRNA-mediated gene silencing50. Dynamic RNA structure also has significance to disease: single nucleotide polymorphisms (SNPs) can affect RNA folding in ways that impede healthy function, by disrupting specific motifs or altering conformational equilibria51,52.

Advances in sequencing technology will continue to massively expand the list of interesting RNA sequences and, excitingly, also provide information on secondary structure. Several in-vivo RNA structure probing techniques have been developed to acquire transcriptome-wide folding information53. Snapshots of the human "RNA structurome" (ranging across tissue/cell types, disease states, and treatments with drugs) will become more common in the near future, informing our knowledge of human biology and advancing our understanding of pathogenesis. For this reason, and the reasons discussed above, it is critical to have a knowledge framework in place to understand the roles of RNA structure in human biology. This is the motivation for the creation of the RNAStructuromeDB. Here, we have compiled computed RNA folding information across the entire human genome, irrespective of whether or not it is known to be transcribed. The RNAStructuromeDB is a web-accessible (https://structurome.bb.iastate.edu) repository for investigators to obtain structural metrics for any RNA sequence originating from the human genome. To further aid investigators, we have put the data into context by incorporating comprehensive Gencode annotations54 using the biological database schema Chado55. This allows the rapid comparison of differential gene expression data (e.g. to identify regulatory RNA structures) or transcriptome-wide RNA biochemical probing data against the RNAStructuromeDB, facilitating these and other types of studies.

## Results and Discussion

The RNAStructuromeDB holds the results of a genome-wide computational analysis in which we folded the entire human genome. The results of this analysis are comprised of folding metrics which indicate every region of the genome's propensity to generate structured RNA molecules. Tools have been built by which users can quickly download these metrics or use them to search for novel RNA structural elements or to assess the global folding properties of a transcript. Here we present the metrics that will aid in the investigation of RNA structure/function, and the tools which have been developed to view these metrics effectively.

### Data types

To analyze a genome as large as the human genome requires fragmenting the sequence data into smaller pieces that are both computationally tractable and biologically meaningful. The fragmentation approach taken in the construction of the RNAStructuromeDB is similar to those that were successfully used for the analysis of RNA structure in the human and other genomes39,40,41,56,57,58, as well as in specific lncRNAs59. Here we consider discrete, overlapping windows whose size was selected based on several considerations. Due to the kinetics of RNA folding, RNAs typically fold into structures composed of smaller motifs; for example, lncRNAs like HOTAIR (2,421 nt) are composed of locally folding domains (4 domains), each containing many local substructures (e.g. RNA hairpins60). Additionally, the majority of known RNA sequences/structures archived in Rfam are short (<200 nt long).
Incidentally, a window size spanning 100 to 150 nt was shown to be most accurate at predicting base pairs in known cis-regulatory structures of large mRNA molecules (>85,000 nt long)61. Finally, folding algorithms scale O(N³) in time and O(N²) in memory, where N is the sequence length62; thus, shorter sequences, in addition to yielding more accurate predictions, can be folded more quickly and efficiently. With these factors in mind, a window size of 120 nt was selected to maximize the chance of encapsulating structured elements, while also optimizing prediction accuracy and computational resources. A step size of 40 nt was selected to ensure the entire genome is canvassed with multiple frames, increasing the likelihood of capturing all structured elements. This window and step size resulted in the generation of 154,414,320 windows, which were analyzed in both strand orientations.

For each window, five folding metrics were calculated and archived. The Gibbs minimum free energy (MFE) of folding (ΔG) estimates the thermodynamic stability of the most stable 2D conformation of an RNA given Turner nearest neighbor energy parameters63,64 (a set of experimentally measured values taken from the analysis of many small RNA motifs). The ΔG was calculated using the program RNAfold, which is a component of the ViennaRNA package63; predictions were made at 37 °C (human body temperature) and values are reported in kcal/mol. 2D structures associated with window MFE predictions were captured and archived in "dot-bracket" notation; here, paired nt are represented by matched brackets "()" and single-stranded nt by dots ".".

To determine if the MFE ΔG depends on the nt order or composition, we compared the native sequence ΔG vs. in silico randomized sequences with the same nt composition. In general, structured ncRNA molecules have lower (more stable) MFE values than random sequences of the same nucleotide composition65; a property which can be exploited for structured RNA motif discovery66. For each window, we report the thermodynamic z-score. The z-score is calculated by taking the difference between the native MFE ($$\Delta G_{native}$$) and the mean MFE of random sequences ($$\overline{\Delta G_{random}}$$), then normalizing by the standard deviation, as described in equation (1) in the Materials & Methods section. The z-score sign indicates if the native MFE is either lower (negative) or higher (positive) than $$\overline{\Delta G_{random}}$$, and its magnitude indicates how many standard deviations $$\Delta G_{native}$$ is from $$\overline{\Delta G_{random}}$$. A z-score less than −1, for example, indicates a $$\Delta G_{native}$$ which is one standard deviation lower than random. A negative z-score suggests that a sequence's order was selected by evolution to fold into a stable structure42: e.g. if the order is important, shuffling nt will lead to less stable folds by disrupting native (evolved) pairing contacts that give the RNA thermodynamic stability. Positive z-scores are harder to interpret; however, they could suggest regions of RNAs that are evolved to be more accessible/unstructured59. The z-score can also be qualified using the p-value, which is calculated as the fraction of $$\Delta G_{random}$$ values with more thermodynamic stability than $$\Delta G_{native}$$. This provides a measure of the z-score quality, where p-values close to zero indicate higher prediction confidence: few or no randomized sequences are more stable than the native sequence; this also indicates if the randomization number is high enough to yield reliable z-scores.
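As an illustration of how such a z-score can be computed, here is a minimal Python sketch using the ViennaRNA package's Python bindings (assumed installed; `RNA.fold` returns a (dot-bracket, MFE) pair). This is a simplified stand-in for the paper's actual pipeline: it uses a plain mononucleotide shuffle and a modest randomization number, and the window/step values simply mirror the 120 nt / 40 nt scheme described above.

```python
import random
import statistics

import RNA  # ViennaRNA Python bindings (assumed installed)

def windows(seq: str, size: int = 120, step: int = 40):
    """Overlapping windows matching the database's 120 nt / 40 nt scheme."""
    for i in range(0, len(seq) - size + 1, step):
        yield i, seq[i:i + size]

def thermo_zscore(seq: str, n_shuffles: int = 100, seed: int = 0):
    """
    Thermodynamic z-score and empirical p-value for one window:
    z = (dG_native - mean(dG_random)) / stdev(dG_random),
    p = fraction of shuffled sequences more stable than native.
    """
    random.seed(seed)
    _, dg_native = RNA.fold(seq)          # (dot-bracket string, MFE in kcal/mol)
    dg_random = []
    for _ in range(n_shuffles):
        nts = list(seq)
        random.shuffle(nts)               # preserves nucleotide composition
        dg_random.append(RNA.fold("".join(nts))[1])
    z = (dg_native - statistics.mean(dg_random)) / statistics.stdev(dg_random)
    p = sum(dg < dg_native for dg in dg_random) / n_shuffles
    return z, p
```

A strongly negative z with p near zero flags a window whose stability depends on its nucleotide order, exactly the signature the database metrics are designed to capture.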
Two values are recorded which measure statistical properties of the RNA 2D folding ensemble: the ensemble diversity (ED) and the frequency of the MFE (fMFE) metrics. Structured RNAs have rough folding landscapes (with many "suboptimal" folds that are near in energy to the native MFE fold) and, in some cases, are capable of occupying several conformations. The probability of finding any of these structures within the Boltzmann ensemble is mathematically described by the partition function67, which was calculated using RNAfold63. From this partition function, the ED is calculated by taking the average base-pair "distance" between all structures in the ensemble, where distance is defined as the number of base pairs differing between structures65. The ED, then, is a metric to mathematically signify the variety of folding structures within the ensemble (a low ED implies a small number of similar structures are present, while a high ED suggests several alternative folds or a lack of overall structure in the ensemble68). The fMFE metric is the probability of finding the MFE structure within the Boltzmann distribution of structures, where a high probability suggests the MFE structure is more likely to be the dominant fold.

The five archived metrics each suggest regions of the human genome that may generate RNAs with interesting folding properties. The MFE ΔG defines the thermodynamic stability, while the z-score suggests whether or not that stability is unusual (given the sequence composition). A region can have a very stable ΔG but a mediocre (or positive) z-score because the stability derives from composition (e.g. a high GC content) rather than from the nucleotide order. The p-value estimates the quality of the z-score: if almost every randomized sequence is less stable than native, then the likelihood of the order being significant is higher. The ED and fMFE indicate the diversity of the RNA folding ensemble in a region and how well-represented the MFE prediction is in the 2D structural ensemble, respectively. These can suggest which regions have well-defined folds (e.g. with low ED and high fMFE) or where folding may be diverse (e.g. dynamic regions or regulatory structural switches that have higher ED and low fMFE). Taken together, these metrics can also suggest which regions may be harboring functional structures. Overlapping windows with interesting folding metrics can be concatenated to define larger regions of interest, which can then be used for comparative sequence/structure modeling69 or ncRNA prediction (e.g. using approaches such as RNAz37,38,39,40). Examples of how the metrics contained in the RNAStructuromeDB can be used are discussed below in the "Examples" subsections.

### Data accessibility

The JBrowse genome browser provides significant insights into the structural landscape of each chromosome via a customizable visual interface. Within the JBrowse interface, folding metrics have been adapted to allow for rapid structural inferences to be made: they are displayed as bar graphs canvassing each chromosome sequence. Each folding metric (MFE, z-score, p-value, ED, and fMFE) has been split into separate graphical tracks that can be displayed parallel to genomic annotations. A user interested in a specific transcript will be able to observe the folding metrics of its constitutive features (exons, introns, and UTRs are all displayed with unique visualizations), while at the same time being able to see the folding metrics of the entire gene within its greater genomic context. The human eye is very good at identifying patterns (e.g.
within the structure of the data represented in the JBrowse tracks), and visualizing the genome, transcriptome and predicted RNA structurome simultaneously can facilitate discovery. Examples are given below.

### Example 1: The MALAT1 lncRNA

Currently, thousands of lncRNAs have been discovered in the human transcriptome, and a growing number have been assigned functions and are implicated in diseases such as cancer71,72,73; however, most lncRNAs and their functions remain uncharacterized73. One aspect of lncRNA function that remains controversial is the role played by RNA secondary structure. Many reports describe extensive local or global folding for lncRNAs59,60,74; yet statistical evidence of structure conservation remains tenuous75. It is worth noting that, even in the absence of high structural conservation, RNA folding can play functional roles76. Analyzing lncRNA folding landscapes and building 2D structural models of these molecules will aid in understanding their mechanisms of action (e.g. in detecting regulatory structural motifs) and, possibly, in developing therapeutic strategies77 to modulate function. lncRNAs tend to be modular in structure, consisting of multiple structured domains78. Sliding window approaches for RNA structure detection are able to roughly define the extent of these domains59. The RNAStructuromeDB suggests these domains for all potential human lncRNAs, with the folding metrics and local structural models necessary to begin to decipher the structure and function of these transcripts.

For example, MALAT1 (metastasis-associated lung adenocarcinoma transcript 1) is a highly conserved (throughout 20 mammalian species, including mouse and human79) lncRNA, which is involved in numerous cellular processes (e.g. transcriptional regulation80, alternative splicing81, and cellular localization82) and implicated in disease states such as cancer83 and diabetes84. Here, the RNAStructuromeDB data table interface is used to define the structured domains of MALAT1 (summarized in Fig. 1). Upon inputting the MALAT1 target (specified by genomic coordinates, gene symbol, or Ensembl ID), all overlapping folding metric windows are extracted from the DB. These can be browsed on the website or downloaded as a single CSV file. With these data, investigators can use any method to define structured regions; in this example, regions with overlapping windows having z-scores 1σ more negative (< −1.70) than the average MALAT1 z-score (−0.51) were concatenated into domains, similar to previous work on the Xist lncRNA59 (a minimal code sketch of this merging heuristic is given after the Conclusion below). This resulted in seven domains (labeled I–VII in Table 1) likely to generate structured RNAs, and 11 individual windows with z-scores less than −1.70. Concatenated domains range from 160 to 280 nt (comprising two to six overlapping windows). Interestingly, in addition to their low z-scores, the defined domains also have lower than average $$\Delta G_{native}$$ (in all but domain II) and ED scores (Table 1), suggesting stable folding with one (or few) dominant conformations in the structural ensemble. Structure models for individual windows can be analyzed directly from the RNAStructuromeDB. For example, domain VII contains a window (positions chr11:65,506,081–65,506,200) that contains a known structured element important to MALAT1 maturation and that also has independent functions in the cell: the MALAT1-associated small cytoplasmic (masc)RNA85. The terminal window in this domain contains the predicted model of the mascRNA (highlighted in Fig.
1d), which correctly predicts the mascRNA tRNA-like folding that is essential to its function. Sequences corresponding to each longer domain can be used for structure modeling (e.g. using RNAfold locally or through the RNAfold web server86), be used for BLAST87 searches to identify homologs, be aligned to related sequences, and be used for consensus folding, comparative sequence/structure analyses and ncRNA prediction (e.g. using the RNAz server38,39,40). Links to all of these tools can be found on the RNAStructuromeDB website.

### Example 2: The VEGFA gene

In 2008, the first human riboswitch was discovered in the mRNA of vascular endothelial growth factor-A (VEGFA)44. Within this mRNA is a region of the 3′ UTR which can adopt two structural conformations, each of which leads to different translational levels of VEGFA. The adoption of either conformation is mediated by protein binding, which occurs as a result of hypoxic signaling. When expressed in high abundance (under hypoxic conditions), the regulatory protein hnRNP L binds to the VEGFA riboswitch, causing two sequences, known as the GAIT (gamma-interferon-activated inhibitor of translation) element and the stem stability sequence (Fig. 2d), to anneal to each other: inhibiting association with the GAIT complex and stimulating VEGFA expression. When hnRNP L is not highly expressed (under normoxic conditions), these elements form their own discrete hairpins (Fig. 2d) and the GAIT element is able to bind the GAIT complex, repressing VEGFA expression.

Figure 2 shows the basic pipeline involved in using the genome browser. Upon opening the JBrowse tool, a user is able to input genomic coordinates or an Ensembl ID and navigate to their region of interest. In the case of VEGFA, the Ensembl ID (ENSG00000112715.21) was entered to navigate to the genome coordinates. Graphical representations of folding metrics are displayed as tracks, which flank the VEGFA transcript model (Fig. 2a). Transcript models display exons (yellow), introns (black line), and UTRs (blue) as separate entities, allowing for quick visual analyses. VEGFA has a multitude of low z-score windows (197 windows below average), the majority of which (85%) lie within introns and UTRs. The lowest z-score window in the VEGFA gene resides in the 3′ UTR; excitingly, this window fully encompasses the VEGFA riboswitch, highlighting the utility of the metrics archived in the RNAStructuromeDB as well as the ease of picking out regions of interest visually using JBrowse. Users can highlight this region to "zoom in" and investigate that particular window (Fig. 2b). Individual windows can be selected to view a "pop-up" window (Fig. 2c) containing: folding metrics, the RNA sequence, and a dot-bracket structure model (dot-bracket structures can be conveniently represented in 2D using the VARNA Java applet88). Using VARNA, we annotated the 2D image with the key riboswitch elements (Fig. 2d). The model MFE structure archived in the RNAStructuromeDB recapitulates the translation permissive (TP) conformation that dominates under hypoxia, where the GAIT element and stem stability sequence are annealed to each other. The differences between the MFE model and the TP conformation described in the literature occur primarily at the hnRNP L binding site. The literature model is based on enzymatic probing of an in vitro generated construct that terminates 5 nt upstream of the hnRNP L interaction site. The MFE model, however, includes 13 additional upstream nt because of the window size used.
Using VARNA, we annotated the 2D image with the key riboswitch elements (Fig. 2d). The model MFE structure archived in the RNAStructuromeDB recapitulates the translation permissive (TP) conformation that dominates under hypoxia, where the GAIT element and stem stability sequence are annealed to each other. The differences between the MFE model and the TP conformation described in the literature occur primarily at the hnRNP L binding site. The literature model is based on enzymatic probing of an in vitro generated construct that terminates 5 nt upstream of the hnRNP L interaction site. The MFE model, however, includes 13 additional upstream nt because of the window size used. These additional nt allow the formation of a short hairpin that sequesters part of the hnRNP L binding site in its loop (Fig. 2d). It is worth noting that the in vitro model for the TP conformer is poorly defined in this region: high reactivity spans the 5′ end and a strong cleavage site occurs within one of the proposed helices in this model44. The MFE model posits novel interactions of the hnRNP L binding nt that may be biologically significant.

### Example 3: Hyperedited regions

In addition to the most well-known RNA post-transcriptional modifications (splicing, capping, polyadenylation, tRNA base modifications, etc.), RNA molecules can undergo endogenous enzyme-mediated chemical modifications, a process known as RNA editing89. The most common editing event in the human cell is that performed by the adenosine deaminase acting on RNA (ADAR) enzyme family: adenosine-to-inosine (A-to-I) deamination. These editing events are ubiquitous throughout the body90 and likely play a role in regulating gene expression91 and establishing disease states92. Inosine is recognized by most cellular machinery (as well as the enzymes used in RNA sequencing) as guanosine; indeed, it is the A-to-G transitions observed as variants in sequencing reads that allow one to quantify the percent A-to-I editing at a nt93. Efforts towards creating an “inosinome Atlas” (by conducting whole genome and transcriptome sequencing of human cells to detect A-to-I editing events) resulted in the identification of ~3 million A-to-I editing sites90. Given their quantity, their likely roles in regulating gene expression, and their implication in disease states, it is important to gain insight into the structural features of A-to-I “hot spots”.

ADAR enzymes have a strong preference for double-stranded (ds)RNA regions94. ADAR editing was first described in helical regions of very long hairpins found in 3′ UTRs95,96. Subsequently, many editing sites were found, for example within regions containing Alu elements97, whose inverted repeat structure facilitates hairpin formation. This structural preference of ADAR makes the folding metrics within the RNAStructuromeDB particularly useful for interpreting patterns of A-to-I editing data.

We utilized the customizability of JBrowse to visualize A-to-I editing sites alongside folding metrics in order to examine the structural landscape corresponding to edited regions. We prepared tracks using data from a study of RNA editing in human B cells91, in which several transcripts were discovered to be hyperedited. For example, the formin binding protein 1 (FNBP1) transcript was found to have the most editing events (291) of any gene, and when viewed alongside folding metrics it was clear that predicted structured regions aligned well with editing sites (Fig. 3). A particularly striking example is the window with metrics shown in Fig. 3b (as well as the genome browser tracks shown in Fig. 3c and d); this window has not only a highly negative z-score (almost five standard deviations more stable than random), but also low (favorable) MFE and ED values and a high fMFE. This window overlaps a cluster of five highly edited sites, which are annotated on the predicted structure shown in Fig. 3e.
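Since inosine reads as guanosine, the percent editing at a genomically encoded adenosine reduces to the fraction of G-containing reads at that position. A minimal sketch (plain Python; the read counts are invented purely for illustration):

```python
def percent_editing(a_reads: int, g_reads: int) -> float:
    """A-to-I editing level at one adenosine, inferred from A-to-G variants.

    Inosine is read as guanosine during sequencing, so the G count at a
    genomically encoded A approximates the edited fraction of transcripts.
    """
    total = a_reads + g_reads
    return 100.0 * g_reads / total if total else 0.0

# Made-up counts for illustration: 37 of 120 reads show G at an encoded A.
print(f"{percent_editing(a_reads=83, g_reads=37):.1f}% edited")  # 30.8% edited
```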
Interestingly, in addition to the highly stable hairpin in the individual window, concatenating adjacent windows that overlap editing sites (or lie within 40 nt and have a lower-than-average z-score) defines a larger hairpin structure (677 nt long) in which this particular window forms the terminal stem-loop (Figure S1). Nine of the 15 windows overlapping this region had z-scores lower than the transcript average (Supplementary Table 1), showing how the sliding window approach can still be used to define domains larger than the window size used (120 nt in this case). In this particular region there have been two inverted Alu element insertions (Fig. 3d), which provide the complementarity for forming such a large stem structure. As this whole region is transcribed as part of an intron, it is possible for it to fold as predicted into the long hairpin, which is similar in structure and length to the long 3′ UTR hairpins described in earlier studies of ADAR editing95,96.

In addition to helping define the extent of structured domains recognized by ADAR (which could facilitate the design of assays to study editing, by removing/adding domains to reporters, etc.), the information in the RNAStructuromeDB can also offer insights into the detailed structural contexts of A-to-I editing sites. The primary sequence context of these editing sites is being investigated, with insights into flanking nucleotide preferences90, and tools have been created that predict editing sites based on these primary sequence rules98; the RNAStructuromeDB may enhance such investigations by providing secondary structure context. For example, we can compare the percent editing at an individual site to its structural context. In Fig. 3e, three editing sites occur in AU pairs (two flanked by Watson-Crick pairs, and the other at the end of a helix upstream of a 3 nt bulge loop) and two occur in internal loops opposite cytosine residues. In all cases the inosine substitution would be expected to minimally disrupt folding energy (AU to IU pairs) or enhance stability (AC mismatch to IC pair). The flanking nearest-neighbor nt can also affect stability. Thus, the folding information in the RNAStructuromeDB might help discern sequence/structural features of editing hot spots, which in turn could help predict sites of editing as well as the effect of editing on RNA folding.

## Conclusion

The RNAStructuromeDB is a repository of useful RNA folding metrics and a powerful vehicle for exploring the human genome via RNA structure. It allows users to browse, access, and retrieve the data quickly and flexibly, which will facilitate a wide array of research. We presented three examples of how this database can be used: to generate a map of RNA folding throughout the MALAT1 lncRNA, the VEGFA gene, and the FNBP1 transcript. In each example, global properties of folding (e.g. the enrichment for stable folding in intronic or UTR regions), as well as functional motifs (e.g. the VEGFA riboswitch and the stable FNBP1 hairpin, both identified and modeled here), could be quickly deduced. We anticipate that this tool will have many applications both in basic research and in the therapeutic targeting of disease-associated human RNAs.
## Materials and Methods

### Overview

Our database is housed on a Red Hat Enterprise Linux (RHEL7) server (provided by the Research IT group at Iowa State University, http://researchit.las.iastate.edu) running Postgres 9.2, with all computational and genomic data stored within the Chado schema. Folding metrics were calculated on Iowa State University’s high-performance computing cluster using RNAfold (version 2.2.10) and Perl (version 5). The website pages were built using Iowa State University’s Luggage platform (http://luggagedocs.info/), which is constructed on an underlying Drupal 7 framework. Tripal99,100 (version 2.1) was used to upload all data into the Chado schema and populate page “views” by later pulling relevant data from the Chado schema. The JBrowse API (https://github.com/isubit/tripal_jbrowse_api) was used to generate JBrowse tracks directly from the Chado schema. An overview of this procedure is shown in Fig. 4.

### Genomic sequence and annotation files

This analytical approach utilizes FASTA chromosomal DNA sequence files as input. For the human genome, the standardized reference sequence is generated by the Human Genome Reference Consortium. The latest major release of this human genome reference sequence (GRCh38/hg38) was acquired from the Ensembl FTP server (ftp://ftp.ensembl.org/pub/release-90/fasta/homo_sapiens/dna/) as 24 separate chromosomal FASTA files (consisting of unmasked DNA sequence). Regions of sequence ambiguity and/or difficult-to-sequence nucleotides are depicted as the character “N” and were not considered in any calculations, but remain as placeholders within corresponding windows. Comprehensive Gencode genome annotations54 (release 26) were acquired from Gencode directly in the form of GFF3 files (ftp://ftp.sanger.ac.uk/pub/gencode/Gencode_human/release_26/gencode.v26.annotation.gff3.gz).

### RNA structure, partition function and z-score calculations

Each chromosomal FASTA file supplied the underlying sequence for metric calculations, and was analyzed every 40 nt using 120 nt windows. Each window fragment is run through RNAfold to calculate its MFE (value and fold) and partition function at 37 °C. To account for the reverse strand, each fragment is also converted to its reverse complement and run through the same process in a parallel script. Partition function calculations output an ensemble diversity (ED) score and the frequency of the MFE structure in the ensemble (fMFE) for each window. Each “native” window sequence is then scrambled to produce 30 randomized versions, which are run through RNAfold to calculate their MFE values as well. The MFE values of the native sequences are then compared to the MFE values of the random sequences to generate a thermodynamic z-score, calculated with a method adapted from Clote et al.66 as shown in the following equation:

$$z\text{-}score=\frac{\Delta G_{native}-\overline{\Delta G_{random}}}{\sigma}\qquad(1)$$

The number of randomizations was optimized to be the minimal number that yields converged z-score values. As a measure of z-score quality, we also calculate the fraction of random MFE values that were lower than the native MFE (the p-value) using Perl operations.
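As a rough illustration of the windowing and z-score step, a sketch only (it assumes the ViennaRNA Python bindings, in which `RNA.fold` returns a (structure, MFE) tuple, and uses a simple mononucleotide shuffle in place of the pipeline's actual scrambling procedure):

```python
import random
import statistics
import RNA  # ViennaRNA Python bindings (assumed available)

def window_zscore(seq: str, n_rand: int = 30):
    """Thermodynamic z-score (Eq. 1) and p-value for one window sequence."""
    _, mfe_native = RNA.fold(seq)
    rand_mfes = []
    for _ in range(n_rand):
        shuffled = "".join(random.sample(seq, len(seq)))  # mononucleotide shuffle
        rand_mfes.append(RNA.fold(shuffled)[1])
    z = (mfe_native - statistics.mean(rand_mfes)) / statistics.stdev(rand_mfes)
    p = sum(m < mfe_native for m in rand_mfes) / n_rand
    return z, p

def windows(chrom: str, size: int = 120, step: int = 40):
    """Slide 120 nt windows every 40 nt along a chromosome sequence."""
    for i in range(0, len(chrom) - size + 1, step):
        w = chrom[i:i + size]
        if "N" not in w:  # simplification: the real pipeline keeps N windows as placeholders
            yield i, w
```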
### MALAT1 Example

MALAT1 metrics were obtained from https://structurome.bb.iastate.edu/folding-metric-feature-search by filtering on the Ensembl ID (ENSG00000251562.7) and downloading via the CSV link on the webpage. Statistical analyses were performed, and graphs/tables were generated, in Excel.

### VEGFA Example

VEGFA was located at https://structurome.bb.iastate.edu/jbrowse/ by inputting its Ensembl ID (ENSG00000112715.21). All data were gathered directly from the JBrowse interface. All structures were generated using VARNA and annotated as described in Ray et al.44.

### FNBP1 Example

A-to-I editing sites were obtained from supplemental table five of ref. 91. Genomic coordinates were converted from human genome reference version hg18 to version hg38 using the UCSC reference conversion tool (https://genome.ucsc.edu/cgi-bin/hgLiftOver). Converted coordinates were then used to create bigWig tracks for each nucleotide site. These tracks were uploaded directly to JBrowse from the user interface for visualization along with the folding metrics.

### Data availability

Publisher's note: Springer Nature remains neutral with regard to jurisdictional claims in published maps and institutional affiliations.

## References

1. Serganov, A. & Patel, D. J. Ribozymes, riboswitches and beyond: regulation of gene expression without proteins. Nat Rev Genet 8, 776–790, https://doi.org/10.1038/nrg2172 (2007).
2. Kruger, K. et al. Self-splicing RNA: autoexcision and autocyclization of the ribosomal RNA intervening sequence of Tetrahymena. Cell 31, 147–157 (1982).
3. Kloc, M., Zearfoss, N. R. & Etkin, L. D. Mechanisms of subcellular mRNA localization. Cell 108, 533–544 (2002).
4. Belmont, B. J. & Niles, J. C. Inducible control of subcellular RNA localization using a synthetic protein-RNA aptamer interaction. PLoS One 7, e46868, https://doi.org/10.1371/journal.pone.0046868 (2012).
5. Ossenbuhl, F. & Nickelsen, J. cis- and trans-Acting determinants for translation of psbD mRNA in Chlamydomonas reinhardtii. Mol Cell Biol 20, 8134–8142 (2000).
6. Lee, N., Yario, T. A., Gao, J. S. & Steitz, J. A. EBV noncoding RNA EBER2 interacts with host RNA-binding proteins to regulate viral gene expression. Proc Natl Acad Sci USA 113, 3221–3226, https://doi.org/10.1073/pnas.1601773113 (2016).
7. Matoulkova, E., Michalova, E., Vojtesek, B. & Hrstka, R. The role of the 3′ untranslated region in post-transcriptional regulation of protein expression in mammalian cells. RNA Biol 9, 563–576, https://doi.org/10.4161/rna.20231 (2012).
8. Graveley, B. R. Mutually exclusive splicing of the insect Dscam pre-mRNA directed by competing intronic RNA secondary structures. Cell 123, 65–73, https://doi.org/10.1016/j.cell.2005.07.028 (2005).
9. Brown, J. T. & Johnson, A. W. A cis-acting element known to block 3′ mRNA degradation enhances expression of polyA-minus mRNA in wild-type yeast cells and phenocopies a ski mutant. RNA 7, 1566–1577 (2001).
10. Farh, K. K. et al. The widespread impact of mammalian MicroRNAs on mRNA repression and evolution. Science 310, 1817–1821, https://doi.org/10.1126/science.1121158 (2005).
11. Georges, M., Coppieters, W. & Charlier, C. Polymorphic miRNA-mediated gene regulation: contribution to phenotypic variation and disease. Curr Opin Genet Dev 17, 166–176, https://doi.org/10.1016/j.gde.2007.04.005 (2007).
12. Chang, T. H. et al. An enhanced computational platform for investigating the roles of regulatory RNA and for identifying functional RNA motifs. BMC Bioinformatics 14(Suppl 2), S4, https://doi.org/10.1186/1471-2105-14-S2-S4 (2013).
13. Lambert, A. et al. The ERPIN server: an interface to profile-based RNA motif identification. Nucleic Acids Res 32, W160–165, https://doi.org/10.1093/nar/gkh418 (2004).
14. Kin, T. et al. fRNAdb: a platform for mining/annotating functional RNA candidates from non-coding RNA sequences. Nucleic Acids Res 35, D145–148, https://doi.org/10.1093/nar/gkl837 (2007).
15. Griffiths-Jones, S., Bateman, A., Marshall, M., Khanna, A. & Eddy, S. R. Rfam: an RNA family database. Nucleic Acids Res 31, 439–441 (2003).
16. Paz, I., Kosti, I., Ares, M. Jr., Cline, M. & Mandel-Gutfreund, Y. RBPmap: a web server for mapping binding sites of RNA-binding proteins. Nucleic Acids Res 42, W361–367, https://doi.org/10.1093/nar/gku406 (2014).
17. Dinger, M. E., Amaral, P. P., Mercer, T. R. & Mattick, J. S. Pervasive transcription of the eukaryotic genome: functional indices and conceptual implications. Brief Funct Genomic Proteomic 8, 407–423, https://doi.org/10.1093/bfgp/elp038 (2009).
18. Dinger, M. E. et al. NRED: a database of long noncoding RNA expression. Nucleic Acids Res 37, D122–126, https://doi.org/10.1093/nar/gkn617 (2009).
19. Zhang, J., Fan, D., Jian, Z., Chen, G. G. & Lai, P. B. Cancer Specific Long Noncoding RNAs Show Differential Expression Patterns and Competing Endogenous RNA Potential in Hepatocellular Carcinoma. PLoS One 10, e0141042, https://doi.org/10.1371/journal.pone.0141042 (2015).
20. Li, C. et al. Differential expression profiles of long non-coding RNAs reveal potential biomarkers for identification of human gastric cancer. Oncol Rep 35, 1529–1540, https://doi.org/10.3892/or.2015.4531 (2016).
21. Peng, F. D. et al. Differential expression analysis at the individual level reveals a lncRNA prognostic signature for lung adenocarcinoma. Molecular Cancer 16, Article 98, https://doi.org/10.1186/s12943-017-0666-z (2017).
22. Vilborg, A., Passarelli, M. C., Yario, T. A., Tycowski, K. T. & Steitz, J. A. Widespread Inducible Transcription Downstream of Human Genes. Mol Cell 59, 449–461, https://doi.org/10.1016/j.molcel.2015.06.016 (2015).
23. Nawrocki, E. P. et al. Rfam 12.0: updates to the RNA families database. Nucleic Acids Res 43, D130–137, https://doi.org/10.1093/nar/gku1063 (2015).
24. Amaral, P. P., Clark, M. B., Gascoigne, D. K., Dinger, M. E. & Mattick, J. S. lncRNAdb: a reference database for long noncoding RNAs. Nucleic Acids Res 39, D146–151, https://doi.org/10.1093/nar/gkq1138 (2011).
25. Quek, X. C. et al. lncRNAdb v2.0: expanding the reference database for functional long noncoding RNAs. Nucleic Acids Research 43, D168–D173, https://doi.org/10.1093/nar/gku988 (2015).
26. Volders, P. J. et al. An update on LNCipedia: a database for annotated human lncRNA sequences. Nucleic Acids Res 43, 4363–4364, https://doi.org/10.1093/nar/gkv295 (2015).
27. Volders, P. J. et al. LNCipedia: a database for annotated human lncRNA transcript sequences and structures. Nucleic Acids Res 41, D246–251, https://doi.org/10.1093/nar/gks915 (2013).
28. Kozomara, A. & Griffiths-Jones, S. miRBase: integrating microRNA annotation and deep-sequencing data. Nucleic Acids Res 39, D152–157, https://doi.org/10.1093/nar/gkq1027 (2011).
29. Griffiths-Jones, S., Saini, H. K., van Dongen, S. & Enright, A. J. miRBase: tools for microRNA genomics. Nucleic Acids Res 36, D154–158, https://doi.org/10.1093/nar/gkm952 (2008).
30. Griffiths-Jones, S., Grocock, R. J., van Dongen, S., Bateman, A. & Enright, A. J. miRBase: microRNA sequences, targets and gene nomenclature. Nucleic Acids Res 34, D140–144, https://doi.org/10.1093/nar/gkj112 (2006).
31. Kozomara, A. & Griffiths-Jones, S. miRBase: annotating high confidence microRNAs using deep sequencing data. Nucleic Acids Research 42, D68–D73, https://doi.org/10.1093/nar/gkt1181 (2014).
32. Griffiths-Jones, S. The microRNA Registry. Nucleic Acids Res 32, D109–111, https://doi.org/10.1093/nar/gkh023 (2004).
33. The RNAcentral Consortium. RNAcentral: a comprehensive database of non-coding RNA sequences. Nucleic Acids Res 45, D128–D134, https://doi.org/10.1093/nar/gkw1008 (2017).
34. The RNAcentral Consortium. RNAcentral: an international database of ncRNA sequences. Nucleic Acids Res 43, D123–129, https://doi.org/10.1093/nar/gku991 (2015).
35. Bateman, A. et al. RNAcentral: A vision for an international database of RNA sequences. RNA 17, 1941–1946, https://doi.org/10.1261/rna.2750811 (2011).
36. Panwar, B., Arora, A. & Raghava, G. P. S. Prediction and classification of ncRNAs using structural information. BMC Genomics 15, Article 127, https://doi.org/10.1186/1471-2164-15-127 (2014).
37. Gruber, A. R., Neuboeck, R., Hofacker, I. L. & Washietl, S. The RNAz web server: prediction of thermodynamically stable and evolutionarily conserved RNA structures. Nucleic Acids Research 35, W335–W338, https://doi.org/10.1093/nar/gkm222 (2007).
38. Gruber, A. R., Findeiss, S., Washietl, S., Hofacker, I. L. & Stadler, P. F. RNAz 2.0: improved noncoding RNA detection. Pac Symp Biocomput, 69–79 (2010).
39. Washietl, S. & Hofacker, I. L. Identifying structural noncoding RNAs using RNAz. Curr Protoc Bioinformatics Chapter 12, Unit 12.17, https://doi.org/10.1002/0471250953.bi1207s19 (2007).
40. Washietl, S. Prediction of structural noncoding RNAs with RNAz. Methods Mol Biol 395, 503–526 (2007).
41. Washietl, S. et al. Structured RNAs in the ENCODE selected regions of the human genome. Genome Res 17, 852–864, https://doi.org/10.1101/gr.5650707 (2007).
42. Qu, Z. & Adelson, D. L. Evolutionary conservation and functional roles of ncRNA. Front Genet 3, 205, https://doi.org/10.3389/fgene.2012.00205 (2012).
43. Battle, D. J. & Doudna, J. A. The stem-loop binding protein forms a highly stable and specific complex with the 3′ stem-loop of histone mRNAs (vol 7, pg 123, 2001). RNA 7, 642–643 (2001).
44. Ray, P. S. et al. A stress-responsive RNA switch regulates VEGFA expression. Nature 457, 915–919, https://doi.org/10.1038/nature07598 (2009).
45. Buratti, E. & Baralle, F. E. Influence of RNA secondary structure on the pre-mRNA splicing process. Molecular and Cellular Biology 24, 10505–10514, https://doi.org/10.1128/Mcb.24.24.10505-10514.2004 (2004).
46. Lin, C. L., Taggart, A. J. & Fairbrother, W. G. RNA structure in splicing: An evolutionary perspective. RNA Biol 13, 766–771, https://doi.org/10.1080/15476286.2016.1208893 (2016).
47. Plass, M., Codony-Servat, C., Ferreira, P. G., Vilardell, J. & Eyras, E. RNA secondary structure mediates alternative 3′ss selection in Saccharomyces cerevisiae. RNA 18, 1103–1115, https://doi.org/10.1261/rna.030767.111 (2012).
48. Faure, G., Ogurtsov, A. Y., Shabalina, S. A. & Koonin, E. V. Adaptation of mRNA structure to control protein folding. RNA Biol, https://doi.org/10.1080/15476286.2017.1349047 (2017).
49. Faure, G., Ogurtsov, A. Y., Shabalina, S. A. & Koonin, E. V. Role of mRNA structure in the control of protein folding. Nucleic Acids Res 44, 10898–10911, https://doi.org/10.1093/nar/gkw671 (2016).
50. Minguzzi, S., Selcuklu, S. D., Spillane, C. & Parle-McDermott, A. An NTD-associated polymorphism in the 3′ UTR of MTHFD1L can affect disease risk by altering miRNA binding. Hum Mutat 35, 96–104, https://doi.org/10.1002/humu.22459 (2014).
51. Salari, R., Kimchi-Sarfaty, C., Gottesman, M. M. & Przytycka, T. M. Sensitive measurement of single-nucleotide polymorphism-induced changes of RNA conformation: application to disease studies. Nucleic Acids Res 41, 44–53, https://doi.org/10.1093/nar/gks1009 (2013).
52. Halvorsen, M., Martin, J. S., Broadaway, S. & Laederach, A. Disease-associated mutations that alter the RNA structural ensemble. PLoS Genet 6, e1001074, https://doi.org/10.1371/journal.pgen.1001074 (2010).
53. Bevilacqua, P. C., Ritchey, L. E., Su, Z. & Assmann, S. M. Genome-Wide Analysis of RNA Secondary Structure. Annu Rev Genet 50, 235–266, https://doi.org/10.1146/annurev-genet-120215-035034 (2016).
54. Harrow, J. et al. GENCODE: the reference human genome annotation for The ENCODE Project. Genome Res 22, 1760–1774, https://doi.org/10.1101/gr.135350.111 (2012).
55. Mungall, C. J. & Emmert, D. B. A Chado case study: an ontology-based modular schema for representing genome-associated biological information. Bioinformatics 23, I337–I346, https://doi.org/10.1093/bioinformatics/btm189 (2007).
56. Moss, W. N., Priore, S. F. & Turner, D. H. Identification of potential conserved RNA secondary structure throughout influenza A coding regions. RNA 17, 991–1011, https://doi.org/10.1261/rna.2619511 (2011).
57. Moss, W. N. & Steitz, J. A. Genome-wide analyses of Epstein-Barr virus reveal conserved RNA structures and a novel stable intronic sequence RNA. BMC Genomics 14, 543, https://doi.org/10.1186/1471-2164-14-543 (2013).
58. Kavanaugh, L. A. & Dietrich, F. S. Non-coding RNA prediction and verification in Saccharomyces cerevisiae. PLoS Genet 5, e1000321, https://doi.org/10.1371/journal.pgen.1000321 (2009).
59. Fang, R., Moss, W. N., Rutenberg-Schoenberg, M. & Simon, M. D. Probing Xist RNA Structure in Cells Using Targeted Structure-Seq. PLoS Genet 11, e1005668, https://doi.org/10.1371/journal.pgen.1005668 (2015).
60. Somarowthu, S. et al. HOTAIR forms an intricate and modular secondary structure. Mol Cell 58, 353–361, https://doi.org/10.1016/j.molcel.2015.03.006 (2015).
61. Lange, S. J. et al. Global or local? Predicting secondary structure and accessibility in mRNAs. Nucleic Acids Res 40, 5215–5226, https://doi.org/10.1093/nar/gks181 (2012).
62. Venkatachalam, B., Gusfield, D. & Frid, Y. Faster algorithms for RNA-folding using the Four-Russians method. Algorithm Mol Biol 9, Article 5, https://doi.org/10.1186/1748-7188-9-5 (2014).
63. Lorenz, R. et al. ViennaRNA Package 2.0. Algorithms Mol Biol 6, 26, https://doi.org/10.1186/1748-7188-6-26 (2011).
64. Mathews, D. H., Sabina, J., Zuker, M. & Turner, D. H. Expanded sequence dependence of thermodynamic parameters improves prediction of RNA secondary structure. J Mol Biol 288, 911–940, https://doi.org/10.1006/jmbi.1999.2700 (1999).
65. Freyhult, E., Gardner, P. P. & Moulton, V. A comparison of RNA folding measures. BMC Bioinformatics 6, 241, https://doi.org/10.1186/1471-2105-6-241 (2005).
66. Clote, P., Ferre, F., Kranakis, E. & Krizanc, D. Structural RNA has lower folding energy than random RNA of the same dinucleotide frequency. RNA 11, 578–591, https://doi.org/10.1261/rna.7220505 (2005).
67. McCaskill, J. S. The equilibrium partition function and base pair binding probabilities for RNA secondary structure. Biopolymers 29, 1105–1119, https://doi.org/10.1002/bip.360290621 (1990).
68. Martin, J. S. Describing the Structural Diversity within an RNA’s Ensemble. Entropy 16, 1331–1348, https://doi.org/10.3390/e16031331 (2014).
69. Moss, W. N. Computational prediction of RNA secondary structure. Methods Enzymol 530, 3–65, https://doi.org/10.1016/B978-0-12-420037-1.00001-4 (2013).
70. Eilbeck, K. et al. The Sequence Ontology: a tool for the unification of genome annotations. Genome Biol 6, R44, https://doi.org/10.1186/gb-2005-6-5-r44 (2005).
71. Qiu, M. T., Hu, J. W., Yin, R. & Xu, L. Long noncoding RNA: an emerging paradigm of cancer research. Tumour Biol 34, 613–620, https://doi.org/10.1007/s13277-013-0658-6 (2013).
72. Batista, P. J. & Chang, H. Y. Long noncoding RNAs: cellular address codes in development and disease. Cell 152, 1298–1307, https://doi.org/10.1016/j.cell.2013.02.012 (2013).
73. Iyer, M. K. et al. The landscape of long noncoding RNAs in the human transcriptome. Nat Genet 47, 199–208, https://doi.org/10.1038/ng.3192 (2015).
74. Novikova, I. V., Hennelly, S. P. & Sanbonmatsu, K. Y. Structural architecture of the human long non-coding RNA, steroid receptor RNA activator. Nucleic Acids Res 40, 5034–5051, https://doi.org/10.1093/nar/gks071 (2012).
75. Rivas, E., Clements, J. & Eddy, S. R. A statistical test for conserved RNA structure shows lack of evidence for structure in lncRNAs. Nat Methods 14, 45–48, https://doi.org/10.1038/nmeth.4066 (2017).
76. Mao, Y. H., Li, Q., Wang, W. T., Liang, P. Q. & Tao, S. H. Number Variation of High Stability Regions Is Correlated with Gene Functions. Genome Biology and Evolution 5, 484–493, https://doi.org/10.1093/gbe/evt020 (2013).
77. Parasramka, M. A., Maji, S., Matsuda, A., Yan, I. K. & Patel, T. Long non-coding RNAs as novel targets for therapy in hepatocellular carcinoma. Pharmacol Therapeut 161, 67–78, https://doi.org/10.1016/j.pharmthera.2016.03.004 (2016).
78. Mercer, T. R. & Mattick, J. S. Structure and function of long noncoding RNAs in epigenetic regulation. Nat Struct Mol Biol 20, 300–307, https://doi.org/10.1038/nsmb.2480 (2013).
79. Ma, X. Y. et al. Malat1 as an evolutionarily conserved lncRNA, plays a positive role in regulating proliferation and maintaining undifferentiated status of early-stage hematopoietic cells. BMC Genomics 16, Article 676, https://doi.org/10.1186/s12864-015-1881-x (2015).
80. Engreitz, J. M. et al. RNA-RNA interactions enable specific targeting of noncoding RNAs to nascent Pre-mRNAs and chromatin sites. Cell 159, 188–199, https://doi.org/10.1016/j.cell.2014.08.018 (2014).
81. Tripathi, V. et al. The nuclear-retained noncoding RNA MALAT1 regulates alternative splicing by modulating SR splicing factor phosphorylation. Mol Cell 39, 925–938, https://doi.org/10.1016/j.molcel.2010.08.011 (2010).
82. Yang, L. Q. et al. ncRNA- and Pc2 Methylation-Dependent Gene Relocation between Nuclear Structures Mediates Gene Activation Programs. Cell 147, 773–788, https://doi.org/10.1016/j.cell.2011.08.054 (2011).
83. Ji, P. et al. MALAT-1, a novel noncoding RNA, and thymosin beta4 predict metastasis and survival in early-stage non-small cell lung cancer. Oncogene 22, 8031–8041, https://doi.org/10.1038/sj.onc.1206928 (2003).
84. Wu, Y., Huang, C., Meng, X. & Li, J. Long Noncoding RNA MALAT1: Insights into its Biogenesis and Implications in Human Disease. Curr Pharm Des 21, 5017–5028 (2015).
85. Wilusz, J. E., Freier, S. M. & Spector, D. L. 3′ end processing of a long nuclear-retained noncoding RNA yields a tRNA-like cytoplasmic RNA. Cell 135, 919–932, https://doi.org/10.1016/j.cell.2008.10.012 (2008).
86. Gruber, A. R., Bernhart, S. H. & Lorenz, R. The ViennaRNA web services. Methods Mol Biol 1269, 307–326, https://doi.org/10.1007/978-1-4939-2291-8_19 (2015).
87. Altschul, S. F., Gish, W., Miller, W., Myers, E. W. & Lipman, D. J. Basic local alignment search tool. J Mol Biol 215, 403–410, https://doi.org/10.1016/S0022-2836(05)80360-2 (1990).
88. Darty, K., Denise, A. & Ponty, Y. VARNA: Interactive drawing and editing of the RNA secondary structure. Bioinformatics 25, 1974–1975, https://doi.org/10.1093/bioinformatics/btp250 (2009).
89. Gott, J. M. & Emeson, R. B. Functions and mechanisms of RNA editing. Annu Rev Genet 34, 499–531, https://doi.org/10.1146/annurev.genet.34.1.499 (2000).
90. Picardi, E. et al. Profiling RNA editing in human tissues: towards the inosinome Atlas. Sci Rep 5, 14941, https://doi.org/10.1038/srep14941 (2015).
91. Wang, I. X. et al. ADAR regulates RNA editing, transcript stability, and gene expression. Cell Rep 5, 849–860, https://doi.org/10.1016/j.celrep.2013.10.002 (2013).
92. Slotkin, W. & Nishikura, K. Adenosine-to-inosine RNA editing and human disease. Genome Med 5, 105, https://doi.org/10.1186/gm508 (2013).
93. Ramaswami, G. & Li, J. B. RADAR: a rigorously annotated database of A-to-I RNA editing. Nucleic Acids Res 42, D109–113, https://doi.org/10.1093/nar/gkt996 (2014).
94. Yi-Brunozzi, H. Y., Easterwood, L. M., Kamilar, G. M. & Beal, P. A. Synthetic substrate analogs for the RNA-editing adenosine deaminase ADAR-2. Nucleic Acids Res 27, 2912–2917 (1999).
95. Morse, D. P., Aruscavage, P. J. & Bass, B. L. RNA hairpins in noncoding regions of human brain and Caenorhabditis elegans mRNA are edited by adenosine deaminases that act on RNA. Proc Natl Acad Sci USA 99, 7906–7911, https://doi.org/10.1073/pnas.112704299 (2002).
96. Morse, D. P. & Bass, B. L. Long RNA hairpins that contain inosine are present in Caenorhabditis elegans poly(A)+ RNA. Proc Natl Acad Sci USA 96, 6048–6053, https://doi.org/10.1073/pnas.96.11.6048 (1999).
97. Athanasiadis, A., Rich, A. & Maas, S. Widespread A-to-I RNA editing of Alu-containing mRNAs in the human transcriptome. PLoS Biol 2, e391, https://doi.org/10.1371/journal.pbio.0020391 (2004).
98. Eggington, J. M., Greene, T. & Bass, B. L. Predicting sites of ADAR editing in double-stranded RNA. Nat Commun 2, 319, https://doi.org/10.1038/ncomms1324 (2011).
99. Sanderson, L. A. et al. Tripal v1.1: a standards-based toolkit for construction of online genetic and genomic databases. Database (Oxford) 2013, bat075, https://doi.org/10.1093/database/bat075 (2013).
100. Ficklin, S. P. et al. Tripal: a construction toolkit for online genome databases. Database (Oxford) 2011, bar044, https://doi.org/10.1093/database/bar044 (2011).
101. Kerpedjiev, P., Hammer, S. & Hofacker, I. L. Forna (force-directed RNA): Simple and effective online RNA secondary structure diagrams. Bioinformatics 31, 3374–3376, https://doi.org/10.1093/bioinformatics/btv372 (2015).

## Acknowledgements

We would like to thank the Research IT department and the College of Liberal Arts and Sciences for their help in constructing and housing the RNAStructuromeDB. This work was supported by startup funds from the Iowa State University College of Agriculture and Life Sciences and the Roy J. Carver Charitable Trust, as well as grant 4R00GM112877-02 from the NIH/NIGMS.
## Author information

### Affiliations

1. Roy J. Carver Department of Biochemistry, Biophysics, and Molecular Biology, Iowa State University, 2437 Pammel Drive, Ames, IA, 50011, USA
• Ryan J. Andrews & Walter N. Moss
• Levi Baber

### Contributions

W.N.M. conceived of the project. R.J.A. generated and curated folding metric data. L.B. was involved in web development. W.N.M. and R.J.A. wrote the manuscript.

### Competing Interests

The authors declare that they have no competing interests.

### Corresponding author

Correspondence to Walter N. Moss.
https://physicscup.ee/pc-2021-problem-1/
# Physics Cup – TalTech 2021 – Problem 1

by Jaan Kalda (TalTech).

Two hollow spheres of radius $R=10\,\mathrm{cm}$ are made of copper and connected with a cylindrical copper rod of length $L=1\,\mathrm{m}$ and radius $r=1\,\mathrm{mm}$. The total mass of this dumbbell-like system is $m=150\,\mathrm{g}$. The entire system is in a very strong magnetic field $B=2\,\mathrm{MT}$ which is perpendicular to the rod. This “dumbbell” is given an initial velocity $v=1\,\mathrm{m/s}$ which is parallel to the direction of the rod. However, this orientation is unstable: the “dumbbell” will start oscillating; eventually, these oscillations decay so that the “dumbbell” takes a stable orientation and a new final speed $u$. Find an expression for this terminal speed $u$ analytically and evaluate it numerically. The “dumbbell” moves in weightlessness in a total vacuum.

Please submit the solution of this problem via e-mail to physics.cup@gmail.com. For full regulations, see the “Participate” tab.
https://dsp.stackexchange.com/questions/63914/adaptive-equalization-vs-inverse-of-transfer-function
# Adaptive equalization vs inverse of transfer function

I have the following equalization problem, as shown in the figure below:

Now I can compute the coefficients of my adaptive FIR filter c (dim(c) = N) as follows:

$$\mathbf{c_{opt}} = (\mathbf{H}^T\mathbf{H})^{-1}\mathbf{H}^T\,\mathbf{h_{ideal}}$$

where $\mathbf{H}$ is a convolution matrix built from shifted versions of $\mathbf{h}$, and $\mathbf{h_{ideal}}$ is chosen such that $x[n]=d[n]$ (a delay-free equalizer). The channel impulse response is given as

$$\mathbf{h} = [1, 0.5]^T \Rightarrow H(z) = 1+0.5 z^{-1}$$

so the inverse of the system would be IIR:

$$1/H(z) = \frac{z}{z+0.5}$$

Now the question is the following: what is the difference between the LS solution with an adaptive filter and direct inversion of the system? Is it just that one filter is FIR and the other IIR, so that with the FIR filter we cannot reach full equalization and a residual error remains?

• Yes: the main issue is that the inverse of a FIR is IIR. – MBaz Feb 15 at 16:57
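A quick numerical sketch (Python/NumPy; the equalizer length N = 8 and the zero-delay target are arbitrary illustrative choices) makes the difference concrete: the LS FIR taps approximate the truncated impulse response of the IIR inverse, $(-0.5)^k$, and the residual error is nonzero for any finite N:

```python
import numpy as np

h = np.array([1.0, 0.5])   # channel impulse response, H(z) = 1 + 0.5 z^-1
N = 8                      # FIR equalizer length (illustrative choice)

# Convolution matrix: column k is h delayed by k samples.
M = len(h) + N - 1
H = np.zeros((M, N))
for k in range(N):
    H[k:k + len(h), k] = h

# Delay-free target: the equalized cascade should be a unit impulse at lag 0.
h_ideal = np.zeros(M)
h_ideal[0] = 1.0

# LS solution c = (H^T H)^{-1} H^T h_ideal (lstsq for numerical stability).
c, *_ = np.linalg.lstsq(H, h_ideal, rcond=None)

# Truncated IIR inverse: 1/H(z) = sum_k (-0.5)^k z^-k (geometric series).
c_iir = (-0.5) ** np.arange(N)

print("LS FIR taps :", np.round(c, 4))
print("IIR inverse :", np.round(c_iir, 4))
print("residual ||H c - h_ideal|| =", np.linalg.norm(H @ c - h_ideal))
```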
https://idontknowbut.blogspot.com/2013/
## Sunday, December 29, 2013

### Could there be mental clinics too? Nah.

A little discussion over at AVI's got me wondering: what would the mental health equivalent of a clinic be? I like "what if" thought experiments. I often leave out something important, though.

I'll define a clinic as where you go for regular checkups and minor ailments.

For checkups: A few quick measurements can tell if something is wrong now, and putting them in context of your history can tell if something is starting to go wrong. And that's before the quick talk to see if there's anything obvious going on. If you can't remember anything apropos, you're good to go for another couple of years (barring slips on the ice). If something is amiss, chances are you notice and complain about something related to it. You may not know why your hands are swelling, but you're pretty sure they didn't do that last year. You have a construct for how your body is supposed to work and you can recognize deviations.

Mental health seems intrinsically more complicated. I gather there are some simple things that let experienced psychiatrists spot some kinds of major problems, but I suspect that if they were as widely recognized as high blood pressure is, a lot of patients would hide them. They'd be "symptoms" relative to a culture, too, and not universal like mm of Hg. But let's stipulate that some well-advanced problems are readily detectable, and others can be determined with some effort using interviews with the patient and with friends/family. What about the rest?

Over the years a number of acquaintances have surprised me when they suddenly left a spouse or quit a job. I do not say friends but acquaintances, because the mental clinician will be in pretty much the same situation. He is not a close friend of the patient, does not get to see him react to everyday stress, and only knows what the patient chooses to tell him.

What will the patient tell him? Maybe the patient will be worried: "My hands are raw because I keep washing them." Or "I can't seem to make friends." If the mental problem is one of perception, he may think the problem is with someone else. Or that there is no problem; what he is doing is quite reasonable under the circumstances. Only talking with family and friends will tell the clinician that something is wrong; our hero won't know. And if he knows and is ashamed he may not want to tell anybody. When your perspective is distorted, your model of what is normal operations doesn't do you as much good. And, of course, our hero may not have any local family and no friends either. Which might be a warning sign. Or not.(*)

Some people will come to the clinic but many, likely some of the ones who need it most, will not. You can't compel ordinary people to go for a mental checkup. Or perhaps more accurately, I don't see any good ways to do that which aren't open to abuse. If Joe has a history of problems, yes. Otherwise, MYOB.

If the services are subsidized, the clinic will have a waiting list of lonely people who just want to talk. I've never manned a suicide hotline or a late-night DJ slot, but the received wisdom is that both get a lot of phone calls from lonely people.

I have a little suspicion, from dealing with some counselors over the years, that much of the "treatment" for many of the "clinic-level" problems involves simple advice consistently applied. The consistency is the hard part; somebody has to be there to observe and remind the patient (who will need to be very patient with the reminders!).
When we're all urged to leave home and have our own apartments, that's problematic and pricey.

All in all, it doesn't look like there's a close analog of the physical medicine clinic in the mental health field. However, that doesn't mean there might not be resources. Maybe not so many for Americans, living alone and not talking to the deacons much. But if you talk to your grandmother regularly, and to your cousin's friends when they come over, and look for some spiritual direction--I'd think some of the smaller problems could be dealt with and large ones spotted. Not that they'd know what to do with big problems; just know that something's wrong.

UPDATE: It was pointed out in a private communication that there are clinics that serve existing patients, presumably relatively inexpensively.

Not everybody who sits like a lump at the party has some mental disorder.

And then suddenly the goblets leapt and rattled on the board and the great table shook, for the friar had brought down his huge fist like a club of stone, with a crash that startled everyone like an explosion; and had cried out in a strong voice, but like a man in the grip of a dream, "And that will settle the Manichees!"

### Review problem

No doubt it proves I'm déclassé, but I enjoy Dave Barry's year in review columns. I like running into lines like: "Also stepping down is Hillary Rodham Clinton, who, after decades of public service, resigns as secretary of state so she will finally have a chance to spend some personal quality time with her team of campaign advisers."

He likes to take a target (e.g. Lance Armstrong), expand his list of misdemeanors to include something absurd, and add follow-up adventures based on that absurdity. Sometimes the result is chuckle-worthy and sometimes not; I won't spoil your joy of discovering which is which.

Except that he seems nonplussed by Pope Francis. "College of Cardinals, apparently seeking to move the church in a new direction, chooses, as the first non-European pope in over a thousand years, a retired New Jersey tax accountant named Harvey Schwartz. Appearing before a massive crowd gathered in St. Peter’s Square, the new pontiff vows to, quote, “give it a shot.”" The follow-up references are just as off target. I wonder if he was afraid that he couldn't exaggerate enough. If Dave wrote that Francis "sold the Vatican and donated the proceeds to the Salvation Army," that would be over-the-top enough for Dave's style, an amusing extrapolation of Francis' attitudes, and impossible enough that it wouldn't happen, but pretty much anything less might happen. Sell off some art to fund catholic charities? He might. Tell priests to go live with the poor? He did already.

Dave "gets" plenty of other subjects well enough to parody them. I wonder if 1) he doesn't "get" Francis, or 2) his editors told him to back off, or 3) he couldn't work him in without putting in too much reality and jarring the rest of the parody. I'm leaning toward 3).

## Tuesday, December 24, 2013

### Suffering and union

One time her saddle slipped, and she found herself head down under the belly of a donkey as she crossed a stream. Complaining to the Lord of her treatment, she heard him reply, “Teresa, whom the Lord loves, he chastises.
This is how I treat all my friends.” She replied tartly, “No wonder you have so few!”(*)

"Blessed are you among women" to have an out-of-wedlock pregnancy in a land where such things mattered, to be in labor in a stable, to have to leave your homeland because the king wants to kill your son, to have a "sword pierce your own soul also." Joseph gets to raise somebody else's son, be regarded as not quite self-controlled, run to Egypt (so much for getting rich off that gold and spices--they go to pay expenses), and never see the son become the Messiah.

If we're going to be like God, do we have to suffer as He does? Even the Nativity has some Calvary in it. Childbirth seems to be hard on the baby too, and that new baby you see--cells are growing, but others are dying. Even a newborn is dying a little. To go from being the author of life to being a contingent being that is always dying has to be a painful change--and we're supposed to want to unite with God.

There's no escaping pain anyway (Gautama to the contrary), so it might as well have a meaning and a purpose. But it doesn't make "take up your cross daily" any easier.

(*) There are several versions of the story.

### Wii conductor

Suppose you recorded an orchestra, carefully separating the instrument classes: brasses, strings, etc., and then used a Wii (together with a fairly nice sound system) to mix the streams using instrumented gloves as the control. You give the beat, and the sound streams run fast or slow according to your beat. Gesture at the image of the brass section for a little more brass, and that gets amplified in the mix. Stretch out a phrase... if there's a standardized gesture for it you could program it. Pausing and restarting from some defined time points have to be integrated into the system somehow, and you need a set of different symphonies to play with, but it would seem possible that a Wii could be used for "Conductor Hero." It might not have the horsepower itself, and a bolt-on sound mixing processor might be a bit pricy--I don't have the intimate knowledge of the hardware needed to guess.

This is similar to, but a little more complex than, Wii Maestro, which I just learned about after googling to see if anyone was already doing this. It doesn't seem to do any mixing, just speed and volume control, though that's quite a bit. Maybe the next release...

## Monday, December 23, 2013

### Computer/RFID implants

With the prospect of instrumenting people with electronic implants or tattoos, the usual suspects emerge to extol the prospect of unlocking your car by waving your hand (and dream of communicating with your computer by thinking), or alternatively to warn of the number of the beast.

Suppose you have such an RFID tattoo to identify you. You could unlock your car, walk through airport security, purchase without having to bring a wallet--all the things that make life worth living, right? But if you slip on the stairs and slice through one of the wires, it may not work so well; similarly if you gain a lot of weight and change the shape of the antenna. And when somebody compromises the system you've got to get a new tattoo. And don't even dream of getting an MRI unless you are fond of burns.

Implants have mostly the same problems (barring the getting fat issue), plus the more invasive surgery, the chance of your body reacting to the implant, and the possibility that your body may migrate it out over time. The implant security is only as good as the security the firms keep over the key codes they put in the devices.
I predict that "master keys" would quickly appear in certain highly placed hands (Hi, NSA!), and after a few years and a few bribed techs, devices to fake the implant signals would become fairly common. Time for new algorithms, new implants, new car locks...

The "communicating with computers by thinking" seems even more unlikely. Computers like nice crisp commands, and most of us are fairly fuzzy thinkers who need the crutch of writing our thoughts down to be sure we know what we are saying. For simple stuff such as Google search, you can live with "fuzzy," but if you want the machine to do something you need to frame it precisely--save this file as draftlettertotheboss--and don't think about a hippopotamus.

## Sunday, December 22, 2013

### Puck's voice

The Met broadcast Britten's Midsummer Night's Dream yesterday. Puck never does sound quite right to me--nor Ariel. They sound too human. (The animated Shakespeare version of The Tempest tinkered with Ariel's voice but didn't do a good job.) It would probably annoy an opera singer to digitally manipulate his voice(*), but for some special characters the effect might be worth it. Suppose you distort Puck's voice in the direction of pure tones--but not too much or you lose the syllables. It would, if not overindulged, make it sound more alien but still recognizable. Puck and Ariel are dangerous. You might want to add a fainter parallel version of the speech dropped a couple/three octaves (faint enough to add flavor without interfering or distracting).

Why yes, I think opera lends itself very well to the movie screen, if the sound system is good enough. When the music is continuous you can't split the scene into multiple takes, but everybody can get a good rest in before the next section, the viewer can be in the action much more deeply, and you can do more thorough miking. Plus you won't giggle at Siegfried's dragon.

(*) Pop singers must not care so much, if I can take the widespread use of autotuning as an indicator.

## Saturday, December 21, 2013

### John Birch

If you don't read the Friday history posts over at ChicagoBoyz you're missing some fun. Trent has been doing a series on WWII-in-the-Pacific history that got left out of the official histories, and Sgt. Mom one on Texas. Trent's latest is on the secret intelligence network set up in China to monitor Japanese movements--doubly secret because it had to be kept secret not just from the Japanese but also from Gen. Stilwell, who had forbidden it. (Sometimes internal service politics took precedence over defeating the enemy. Kenneth Roberts described the same me-first attitude during the American Revolution, and it is visible to this day in DC.)

And who should turn up in Chennault's network but a fellow named John Birch. I'm not sure why I never bothered to look him up before. Probably it was prejudice; all right-thinking people ignore the John Birch Society, therefore his name does not come to mind: crimestop. (Funny how the anti-fluoridation movement is coming back again, from the other direction this time.) That was silly of me--they could just as easily pick a genuine hero as an ideological hero, and apparently they did.

For me he's a slightly ambiguous hero, though. He was a Baptist missionary in China who fled the Japanese invasion and worked establishing missions well in the interior. After he rescued some downed Americans he was recruited to spy on the Japanese, which he agreed to do provided he could continue his mission work.
He did both, and apparently was courageous enough with the missions work to worry his supervisors. In his travels (he was skilled enough to pass through Japanese lines as an ordinary farmer) he relied heavily on a network of fellow Christians, and recruited many to monitor Japanese troop, air, and other movements. At war's end the Japanese had to surrender control to the Nationalists, even in areas where the Communists had effective control, and as he went to receive the surrender of a Japanese post for the Nationalists, he was intercepted and killed by Communists.

What sounds a little iffy to me is the use of Christian groups for military ends. It was probably inevitable (who could he trust?), and the Maoists already hated religions so it wouldn't have made any difference in how Christians were treated afterwards. But it makes me a little itchy. I know there is precedent for Christian fighters and I am not pacifist myself, but still... Would I have done the same? If I had the courage, I think yes--who else could I trust? And if I cared at all about the people I was working with I'd want them free from the enslaving aliens.

Go look him up.

## Friday, December 20, 2013

### Decline of writing?

At Forbes, Leef complains that K-12 fails to teach reading/writing very well, and universities don't do it at all. I have no reason to believe he is wrong, but several things complicate my view.

• It is no secret that lots of people graduate from college who wouldn't have been admitted had they been born 60 years ago. And plenty of people graduate from high school who'd have dropped out in years past. So making sure you are comparing apples to apples is tougher than it seems.
• I'm not a teacher. If something is badly written, I generally don't have to read it. I can simply marvel at the editor's incompetence and move on to the next story. I generally only get mad when the school principal's letter to the parents is ungrammatical. Government forms are a different kind of nightmare--idiosyncratic jargon trips me up.
• I'm a forgiving sort. If I can understand what you're trying to say, I'll try to concentrate on that. I work with quite a few people for whom English is not the first language. I've helped my Better Half raise several kids. Being the grammar policeman squelches otherwise good conversations.
• Most of the people I work with have excellent math and very good verbal skills. I don't run into much bad grammar or spelling at work, except from people whose English is far better than my German or (non-existent) Swedish.
• I sometimes write sloppily myself, as a careful inspection of this blog will show.

On the other hand, social interactions with college kids and recent graduates (excluding the physics students) often leave me wondering what other gaps in their education one might find. And whatever benefits there may be to electronic entertainment, sharpening writing skills cannot be one of them. I offer Facebook in evidence. Twitter may be good at training headline writers, but I don't use it and can't say for sure.

On the third hand, I find unexpected gaps in my own knowledge. Yet aside from verb disagreements (usually from incomplete sentence revisions), overuse of parentheses, and innovative use of semicolons, I generally write competently. So my observations of conversations may not be relevant to whether the youngsters can write.

## Wednesday, December 18, 2013

### Remakes

Freely stealing material:

An unfortunate man broke the law and went to prison.
He was feeling very scared his first night alone in his cell. After the lights went out he heard one of the other inmates in a cell quite a long way off shout out "32". Then all the prisoners burst out laughing. When the laughter subsided he heard another inmate call out "66". Once again followed by a burst of laughter. This went on for some time before they all fell to sleep.

The man was intrigued by this behavior. The next morning during breakfast in the eating area the man gathered up his courage and spoke to one of the older prisoners and asked him what was going on. The older inmate said, "Many of us have been in here for a long time. There's not much you can do when the lights go out. So, to amuse ourselves we tell jokes. But after a while we all seemed to know all the jokes, so it became easier to just give the jokes a number and just shout out the number rather than taking all that time to tell the joke."

Ahhh. Now it all made sense. So, for the next few weeks, the man listened to the numbers and found out what joke corresponded to what number and which numbers got the biggest laughs. Finally, one night he decided to join in. After about five or six jokes had been told "by the numbers", he shouted out "22!" Nothing happened. Dead silence. He thought that maybe the others didn't hear him. So, he waited till a few more jokes were numbered and shouted out, as loud as he could, "66!" Again, just silence. This happened to him about five times.

The next morning he just had to find out why no one laughed at his numbers. He went to the old man again and asked him, "Why does no one laugh at my jokes?"

The old man replied, "Ahhh, it's the way you tell them."

I noticed that somebody is thinking of making a Gilligan’s Island movie. No doubt they are short of ideas, but never mind that for now. I got to wondering why it seems like such a bad idea.

I’ve never tried to cast a play, and suspect one might do a better job by having the candidates draw straws. But I’ve been in groups that just seemed to click—the conversations were lively and kept everybody involved. Replace a person, and it wasn’t quite the same—maybe nothing happened at all. Some people seem to have a knack for figuring out what sorts of groups will work (or maybe that’s just selection bias—we only remember the ones that did work).

I doubt the original show would get any traction today (that’s a whole other topic), but at the time the combination of the mannerisms of the cast and the writers writing to match the team gave a particular and very popular flavor to the humor. With a different Ginger and her slightly different tone, MaryAnn has to react a little differently, and so on. The flavor has to be different. And it might or might not catch the public fancy the same way. That’s not a bad thing, of course. But I suspect that trying to recapture a flavor is a fool’s errand.

In one Newton’s Apple episode the host was offered an ice cream cone, and gagged a little to discover it was filled with mashed potato instead of ice cream. He probably liked potatoes just fine, but his palate was set for something sweet.

I don’t watch enough movies to be able to say for sure, but from the reviews I read I gather that sequels sometimes hit the same sort of problem when the cast has to change: the chemistry is a little different, and the flavor isn’t quite the same. I can think of a few exceptions in the serial/sequel line.
The Avengers had several different female sidekicks, and the series seemed to go on anyway; though not for long after Diana Rigg left. Dr. Who seems to have survived a dozen actors.(*) I suppose the producers wouldn’t come up with remakes if they didn’t make at least some money—nostalgia viewers if nothing else. Still, on the whole it seems like a bad idea to try to remake a show. Inspired-by, maybe. Remake—please don’t. (*) I saw Dr. Who in the dorms at Fermilab in WTTW’s weekly BBC-fest. It exasperates the younger fans in my household that I still think Tom Baker is Dr. Who. ## Monday, December 16, 2013 ### Ouch I began using the arm too soon. There seems to be a little conflict between acquiring range of motion and healing. Or something. So Youngest Son and Youngest Daughter were sawing the wood for the turtle basking ramp,(*) and doing most of the drilling and screwing. And I'm not driving anymore, and I'm practicing doing lots of things left-handed. Such is the price of impatience. The land is going to have its Sabbaths one way or another... (*)The turtle is now in an aquarium significantly deeper than the horse trough. But the glass sides seem to be weirding him out. ## Sunday, December 15, 2013 ### "Intense world" The idea that some autistic behavior is due to hypersensitivity is a little older than the story suggests. Youngest Daughter had "Auditory Training", which seemed to help for a while, but then after a couple of weeks there was no further improvement. (The music in the headphones had random glitches, with the intention of desensitizing the hearer to noise.) I've met more people on the autism spectrum than average, and am comfortable with the notion that there are many causes--there are certainly many presentations. The causes might hit the same general machinery, though. If the thesis above is correct, and autistic children have sensory overload with consequent fear and defensive behaviors, then it might, as they also suggest, be possible to reduce its severity with very early intervention. Except that another study suggests that it is hard to distinguish it before about 3 months. A couple of things don't seem to quite fit their model, though. One is the deficit in "cognitive empathy" (not "affective"--autistic children do like people): In a now famous experiment, children watched two puppets, “Sally” and “Anne.” Sally has a marble, which she places in a basket and then leaves. While she’s gone, Anne moves Sally’s marble into a box. By age four or five, normal children can predict that Sally will look for the marble in the basket first because she doesn’t know that Anne moved it. But until they are much older, most autistic children say that Sally will look in the box because they know it’s there. While typical children automatically adopt Sally’s point of view and know she was out of the room when Anne hid the marble, autistic children have much more difficulty thinking this way. The other is: if life is so overwhelming, how can they maintain focus so well? I suppose if it is life or death to learn to focus, you do. ### Lyric meanings then and now At the semester's end recital, a young fellow without quite enough breath support sang "For the Longest Time." I didn't grow up with this one; we already had our eldest when it was released. But it will do as a stand-in for the ones I did hear. It almost seemed written in a foreign language. "For the longest" time (never mind that it seems funny coming from a 16-year-old) almost doesn't seem to apply. 
The time before my wife isn't so much "a long time" ago as a different life entirely. "But I've come this far and it's more than I hoped for" seems apropos, but I'd have to qualify it with "so much different than I hoped for." Much of my early hope was short-sighted, with no glimmering of all that a life together would mean. So even "hope" doesn't mean the same thing anymore; it is a less focused thing. "Now I know the woman that you are": hardly, you're just beginning to. And you don't know the woman she will grow to be, together with you--not yet. The knowing and growing go together. "I don't care what consequence it brings" is almost right, but not quite. I do care, but was willing to take the bad with the good, so long as we could avoid the evil. And I'm slowly learning what grace can come with the troubles. So in one understanding (which I have not achieved) I don't have to worry. "Romance" doesn't seem to mean the same thing: for the singer it is the two together, but after a few years of growing together and links to places and new family and new friends and even new links to old family, it seems to be a bigger concept than at first. "I intend to hold you for the longest time." Intermittently. It turns out to interfere with fixing the car and cooking and getting the kids ready for school. "I have been a fool for lesser things." OK, that's still true. I wasn't a fool to marry her, though.

### Nyarlathotep

Lovecraft would have enjoyed oobleck in space.

## Saturday, December 14, 2013

### Who you know

Of the stories I've read about the sign-language interpreter at the funeral, nobody seems to have touched on what looks like the obvious conclusion. That he was unqualified, with a history of violence, and was close to the most carefully guarded man in the world, everybody notices, but there's something else too.

Look at him.
Look at Zuma.

Now ask why he was hired, and why his criminal record is so hard to find, and why the complaints about earlier signing work don't seem to have mattered. Just from looking at them, I'd bet the two are the same tribe--maybe not close family, but the same tribe. And though it didn't stand front and center in the Mandela funeral coverage, South Africa has huge issues with corruption and tribal conflict. I wonder if the proximity issue will prompt the American media to follow up, or if they'll worry that digging into this would dirty Mandela's memory. I'll try to keep my ears open on this, and see if I'm right.

## Friday, December 13, 2013

### Protein origami master?

This story puzzles me. Allegedly the protein Hsp90 keeps other proteins from mis-folding even when there are minor defects in the protein due to minor mutations. Too little Hsp90 and the little changes tend to be expressed (almost always as defective systems) rather than suppressed. That's a nice method for achieving largish jumps in characteristics. Stress a population of organisms, reducing the Hsp90 or demanding so much protein creation that you outrun your supply (I wonder about radiation environments?), and you get lots more previously hidden mutations expressed, almost all of which are damaging (but we take it on faith that some can be pathways to beneficial changes). The only real example of a successful change is fish whose eyes change size (smaller or larger) under Hsp90-blocking, which might emulate the stress in the slightly too non-conductive water of the dark cave some of them live in. If you are already halfway there you should show a more rapid tendency to lose the genes for eyes.
Except--how does this wonder protein know how each protein is supposed to fold? If there were a general folding pattern common to all proteins, then the normally suppressed deviants would always be abnormal and not the wave of the future. For if some deviant folding were a new pattern for future proteins, then from then on there would not be a general pattern common to all proteins. If there are no general folding patterns, then how in the world would Hsp90 do its hypothesized job? Interesting work, but I think something is wrong with the model of what Hsp90 does.

## Thursday, December 12, 2013

### Blinders

Somebody was playing "Cross of Gold" and I got to thinking about Bryan. Some things don't seem to change much. Faced with powerful rent-seekers, the populist answer was to steal from everybody else using inflation: in Bryan's day by expanding the money supply with an arbitrary linkage to silver, in ours by printing money with no revenue to back it up. These days, because we've got the strongest economy in the world, we've been successful at exporting some of the inflation--which means exporting the theft into poorer countries. So the price of cooking fuel in Africa goes up because our politicians don't like to make hard choices. I'm trying to understand how this readily predictable side effect fits in with straight-faced progressive talk of "economic justice". One obvious answer is that only the people in their field of view are real, and those out of focus--well, that group includes some of the villains(*), so maybe it is easier to discount them. And their favored tools cannot possibly have bad side effects--you must be an evil person for suggesting that The Law of Unintended Consequences trumps their good intentions (an attitude common between groups that bitterly disagree on the tools). And the retired couple down the street whose savings are eaten away--we'll devise plans that help them so they don't need to worry, just have faith in our good intentions. Something similar appeared in some of the defenders of the Second Iraq campaign(**). There could not possibly be any serious consequences, because the cause was right. Well yes, but... What makes us expect perfection from "least-bad" choices?

(*) I am perfectly willing to stipulate that some of the accused villains actually are predatory. Not all, though. And I notice that some of those genuinely predatory sorts (one such is Soros) are quite cozy with the populist leaders. And often with the not-so-populist leaders too.

(**) I'm a defender too, though I think I had a better idea of the downsides than some of the commentariat, and believe we blundered away and threw away all the strategic possibilities and arguably left the situation worse than before.

### Healthy eating

The Mediterranean diet is all the rage these days, and it doesn't look too bad--though I'm more of a fish-in-moderation sort. Beans and nuts are some of the basics of the diet. I'm not sure which category Theobroma cacao falls in, but it has to be one or the other--and thus should be a staple.

## Wednesday, December 11, 2013

### "Does not commute"

I sometimes talk about science and models of reality. The math may be as precise as you like, but if the application of it isn't right it doesn't help much. One thing that surprises people from time to time is that the math that best describes the real world doesn't always have A+B=B+A.
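Here is a minimal numeric check of that claim, assuming Python with numpy (my own sketch, not part of the original post): the A, B, C generators below match the matrices written out later in this post, and the flip demo mirrors the dollar-bill experiment described next.

```python
import numpy as np

# The A, B, C generators from this post (a = b = c = 1): tiny rotations
# about the z, y, and x axes respectively.
A = np.array([[0, -1, 0],
              [1,  0, 0],
              [0,  0, 0]])
B = np.array([[ 0, 0, 1],
              [ 0, 0, 0],
              [-1, 0, 0]])
C = np.array([[0, 0,  0],
              [0, 0, -1],
              [0, 1,  0]])

def comm(X, Y):
    """Commutator XY - YX; it is zero exactly when X and Y commute."""
    return X @ Y - Y @ X

assert np.array_equal(comm(A, B), -C)  # AB - BA = -C
assert np.array_equal(comm(B, C), -A)  # BC - CB = -A
assert np.array_equal(comm(C, A), -B)  # CA - AC = -B

# The dollar-bill demo: a quarter turn in the plane and a far-to-near flip
# give different results depending on which you do first.
q = np.pi / 2
turn = np.array([[np.cos(q), -np.sin(q), 0],
                 [np.sin(q),  np.cos(q), 0],
                 [0,          0,         1]])
flip = np.diag([1.0, -1.0, -1.0])  # far end to near, near end to far
print(np.allclose(turn @ flip, flip @ turn))  # False: order matters
```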
I like to remind them that putting on your socks and shoes isn't the same as putting on your shoes and socks--the order makes a little difference. Typically they are unconvinced. So if they are rich enough to wave about two dollar bills, I suggest the standard example. Lay two dollar bills in front of you face up, each with George's head oriented up as he sadly contemplates the state of the nation. You will apply the same kinds of rotations to both. On the left, rotate the bill 90 degrees clockwise. Then flip it over far end to near and near end to far. Take the other bill and flip it far end to near and near end to far. Then rotate it 90 degrees clockwise. Notice the difference. What happens if you rotate by 180 degrees instead of 90?

In a closely related example, suppose you start at Indianapolis and travel 100 miles due south, ignoring roads and construction. Then travel 100 miles due east, then 100 miles due north, then 100 miles due west. As we all know, you are not back where you started from; you overshoot because the Earth is round. In fact the effect is fairly common. Do A, then do B, then do the opposite of A, then do the opposite of B. It isn't always the same as doing nothing. Sometimes you are not back where you started. The difference between where you started and where you wind up tells you something about the kind of space you are in (which needn't be a space with the usual distance dimensions--it can be a space of momenta or something else).

If you have points in a plane represented by (x,y) you can rotate clockwise by θ about the origin with a simple matrix (quick reminder about matrix multiplication):

( cos(θ)  -sin(θ) )
( sin(θ)   cos(θ) )

Suppose θ is vanishingly small. Then cos(θ) ≈ 1 and sin(θ) ≈ θ. So you can write the tiny rotation as the identity plus the first order rotation plus higher order terms:

( 1  0 )   ( 0  -θ )
( 0  1 ) + ( θ   0 ) + higher order terms

So far this is nothing particularly startling. A small rotation mostly leaves the situation the same (that's the identity matrix with 1's in the diagonal), but there's a tiny set of first order changes and even tinier higher order. (If you think .001 is small, how about .001 squared: .000001?)

In 3D, with rotations about the x, y, and z axes, when you play the same game you have 3 tiny rotation matrices, which I'll call A, B, and C, with extremely tiny angles a, b, and c. You start with the identity matrix (1's down the diagonal) and

A = ( 0  -a   0 )      B = (  0   0   b )      C = ( 0   0   0 )
    ( a   0   0 )          (  0   0   0 )          ( 0   0  -c )
    ( 0   0   0 )          ( -b   0   0 )          ( 0   c   0 )

Notice that AB is not BA, and AB-BA is not a zero matrix. In fact, if we just look at the unscaled base matrices, setting a=b=c=1, we see the following:

• AB-BA=-C
• BC-CB=-A
• CA-AC=-B

It isn't as simple as 1+2=2+1, but the structure gives you some interesting symmetries you wouldn't have seen otherwise. Or to put it in layman's terms, that's kind of cool. And if you have a mathematical bone anywhere in your body you'll ask: "what happens if..." (if rotations are in 4 dimensions, if you have 4 similar cycling equations in ABCD, if AB-BA=-C+A, etc, etc).

## Tuesday, December 10, 2013

### Hard problems

The headline says "Researchers Reveal How an Expanding Universe Can Emerge Without a 'Big Bang'". The team figured out a way of having an emergent expanding universe without a singularity. They start with a flat spacetime that is rotating, and small phase transitions can occur which act a little like expanding bubbles. OK, cool. But in the fine print:

In a first step, a spacetime with only two spatial dimensions was considered.
“But there is no reason why the same should not be true for a universe with three spatial dimensions”, says Grumiller. Our own universe does not seem to have come into existence this way. The phase-transition model is not meant to replace the theory of the Big Bang. Or in other words, the model they show is a 2D+time model instead of the 3D+time world we live in. So you'd think that this is not a huge deal, call back when they get to 3 dimensions. Fair enough. The linked article tries to explain why this is interesting--the short version is that there's been an influential theory that links quantum field theory to gravity in a "holographic" way. One example given to explain the conjecture was that if it were true, then a solution of the equations on the boundary would define everything inside; sort of like a hologram (2D) that reconstructs to something 3D. Which seems to kind of flatten us out a bit, but it isn't as bad as it sounds. But: problems that are OK in 2D can be fiendishly hard in 3D. I've been fiddling with a little problem and wondered if hyperdeterminants would help (and if you understand that link on the first read-through you're doing better than I did). There is a closed form expression for the 2x2x2 tensor, but 3x3x3? To quote a paper from arxiv: "The classical case p = 1 is much easier than the case p>=2 mainly because there are only finitely many orbits for the action of GL(V0) x GL(V1)". (I'm calling this a dead end: using their notation I have p=2.) Or the 2-body problem. The Earth and Moon, in isolation, form a very easy dynamical system; closed form solution, all is well. Including the effects of tides muddies it a lot. Add a third body and not only is there no closed form solution, the system may or may not be stable depending on small changes in the initial conditions--good old chaos. So I'm not as sanguine about possible solutions as Dr. Grumiller. There are too many surprises. Of course, once they have a solution, then they have to compare the predictions of that with what we actually see. This has been done (with resulting retuning) for the standard cosmological models, and it has taken many man-years to do the computations and comparisons. Even the unsatisfactory current theories satisfy a lot of constraints that a new model will also have to prove itself against. If we had the accurate Theory of Everything handed to us tomorrow morning, it would still take years--maybe even decades--before we could be sure that it was even as good as what we already have. ## Monday, December 09, 2013 ### Gaming on company time And getting paid for it. The NSA (and apparently other agencies) found that persons of possible interest (e.g. embassy drivers) played things like World of Warcraft, and so in their search for terrorist communication channels the agency sucked up huge swaths of chat information--which is probably not that informative without an intimate knowledge of the games. I gather that some poor souls had the crushing burden of having to get deeply involved in the games. WoW et al seem roughly as interesting as watching paint dry, but if that's your enthusiasm perhaps you should consider a career change. "Study Shows Oxytocin Improves Brain Function in Children with Autism". Giving oxytocin to ASD children 8-16 seems to change what lights up in the brain. OK, so it does--maybe: 17 is a small sample. But if you read the full study you find this problem: Overall, behavioral accuracy and reaction times on the RMET did not differ for OT (oxytocin) vs. 
placebo visits (Fig. S3). This is consistent with the empirical record, which shows improvements only on some items from the RMET, according to difficulty level, but not overall (11, 22)

So the brain lights up differently, but there was no change in ability to guess someone's mood looking at just the eyes. "Our results are particularly important considering the urgent need for treatments to target social dysfunction in autism spectrum disorders," Gordon added. I think that translates as "we know something else that doesn't work, but it was worth a try." But who knows, maybe high doses with early intervention might have some effect, though it'd be a while before we found out. Be nice if there were a silver bullet or two.

## Sunday, December 08, 2013

### St. Isaac

The writings of the second St Isaac of Syria don't seem to be online (at least not in English), but there are quotations. He was an ascetic, and some of his suggestions illustrate that focus.

• Ease and idleness are the destruction of the soul and they can injure her more than the demons.
• A small but persistent discipline is a great force; for a soft drop falling persistently hollows out hard rock.
• Dispassion does not mean that a man feels no passions, but that he does not accept any of them.
• The key to Divine gifts is given to the heart by love of neighbor, and, in proportion to the heart's freedom from the bonds of the flesh, the door of knowledge begins to open before it.
• Be persecuted, rather than be a persecutor. Be crucified, rather than be a crucifier. Be treated unjustly, rather than treat anyone unjustly. Be oppressed, rather than zealous. Lay hold of goodness, rather than justice.
• Before you stumble, call out and plead; before you make a vow, have ready what things you promise, for they are your provisions afterwards.
• If you compel your body when it is weak to labors that exceed its strength, you will instill darkness upon darkness into your soul and bring greater confusion upon her.
• Mercy and legality in one soul is like a man who worships God and the idols in one house.
• If you cannot be merciful, at least speak as though you are a sinner. If you are not a peacemaker, at least do not be a troublemaker. If you cannot be assiduous, at least in your thought be like a sluggard (?typo for "not like"?). If you are not (typo?) victorious, do not exalt yourself over the vanquished. If you cannot close the mouth of a man who disparages his companion, at least refrain from joining him in this.
• A gift free of trials is a disaster to those who receive it.
• Not every quiet man is humble, but every humble man is quiet.
• Flee from discussions of dogma as from an unruly lion; and never embark upon them yourself, either with those raised in the Church, or with strangers.
• The power of love works in two ways: it torments sinners, even as happens here when a friend suffers from a friend; but it becomes a source of joy for those who have observed its duties. Thus I say that this is the torment of Gehenna: bitter regret. But love inebriates the souls of the sons of Heaven by its charm.

I am quite far from being an ascetic, as a slight pudge quickly testifies and 33 years of marriage would strongly suggest. I did not imbibe any appreciation for monastic life, nor was it obvious why asceticism would be attractive enough that brothers and sisters would drop everything and head for the desert. But I think I'm starting to get a glimmering.
Imagine an era swarming with luxuries that turn tasteless with indulgence, with entertainments that alternate between the cruel and the lascivious, with the public sphere monopolized by the power hungry and greedy, with intrusive bureaucracies, and seemingly everything designed to manipulate you to utterly worldly ends. Now imagine yourself swimming in this, and starting to realize that these temptations and evils already have a home in your own soul. You too sometimes hunger to chase the banner, or to buy the toy that your neighbors esteem, or collect the hoard that will protect you from the rapacious, or try a taste of your neighbor's mistress. You hear of a man who was able to put all these things aside with self-disciplines no worse than those imposed on wrong-doers. The life in the world is so obviously vile that this alternative, for souls hungering for nobility, looks wise and right. And, in that era, maybe it is. UPDATE: Here's a favorite example of the manipulation mentioned: ## Saturday, December 07, 2013 ### Mental illness in the village One of many things I don't know is how mental medicine works in simpler cultures. Liberian native physical medical treatments, according to Dr. George Harley, could be roughly divided into 3 groups: "it helps," "it does nothing" (a lot of magical stuff was here), and "it causes damage." I don't have the book any more (I read it 40 years ago), but I vaguely recall that he thought the number of treatments in the 3 groups were roughly equal. The snake society (to which most hunters belonged) had a few rituals and a magic preparation that involved powdered snake heads rubbed into scratches. Harley didn't attempt to test this, but as the link suggests it isn't obviously wrong. We have quite a few people searching through primitive pharmacopeia for useful drugs. Mental illness is a bit harder. I see things like "mental illness was thought to be caused by evil spirits," but that doesn't tell me much about treatments. "Magic(*)" OK, but suppose instead of trying to lump treatments we get more specific. Did they distinguish different classes of problems and do their categories correspond to ours at all? What exactly did they do to try to help, and did it help? I assume they had the pragmatic triage categories: "You are a witch and dangerous to the rest of us. Die.", "We cannot live with you. Go away.", and "Nobody minds Ben Gunn." We, with more resources, institutionalize the first, expel the second (modulo homeless shelters), and try to help (sometimes stupidly) the third group. Were there any other distinctions? What did the witch doctors do to help the third group (and those in the other groups prior to the tribe's judgment)? How did they decide on treatments? And most importantly--did the treatments ever help? Figuring that out takes a lot longer than grabbing a handful of bark and flying back to the lab, unfortunately. It means somebody has to spend a long time in the village. Dalrymple says that unhappiness is being medicalized into "depression," with the obvious consequences that unhappiness is not properly addressed and depression is trivialized. But the description from India of How to Treat Depression When Psychiatrists Are Scarce leaves me wondering if palliation for both can be similar (I am not a psychiatrist!), at least for mild depression. See AVI comment below I can easily imagine treatments that only work within the psychological environment of the village with its network of obligations and meanings. 
Even something as simple as a cola nut has a set of uses in a Liberian village having to do with relaxation, welcome, and worship, that would not translate well to anything I can think of in the US--and I only know a little from the outside. So a treatment that works there might not work here. But if something does work, it would be good to know how and why. Probably there's already a treatise or three on the subject and I just haven't found it. (*) Our categories are also apt to square-peg round-holers. Practitioners can give examples, but any framework trying to describe the borderlands of personal responsibility is going to have misfeatures. ~ ## Thursday, December 05, 2013 ### Mandela To be the right man at the right time is more than most of us can expect. Mandela, one-time commander of the armed wing of the ANC, wasn't exactly a saint, and some of the things he said in recent years were sad to hear. But he and his brave colleague de Klerk sponsored a transformation I didn't believe was possible: a peaceful power transition after oppression. And the Truth and Reconciliation committees worked wonders--though the model has failed in other countries. There are still huge divisions and poisons at work in South Africa, notably the totalitarian wing of the ANC, but just to have solved the apartheid problem was a huge accomplishment. Other knots were for the later generations (who have unfortunately not distinguished themselves as well). He had the endurance and commitment and the drive to peaceful resolution they needed. ## Wednesday, December 04, 2013 ### Pneumonia is a drag I'd have been back to work a week ago if not for this stupid bug. Time to go lie down again... ### Imitating a style A year and a half ago Ray Bradbury died. I grew up with his wonderful stories, though I lost track of his newer work about 20 years ago. He could conjure deeply memorable scenes, and his style was clear and easy. Easy to read, that is. About a month after he died I thought I should try to write something in his style, as a kind of private homage. I had a setting and characters that seemed in keeping with his style, and an offbeat ghost problem that I think he'd have enjoyed playing with. I immersed myself in his stories for a while, and then ... getting the balance right is hard. He varied his style depending on the mood, so maybe that was OK. But I couldn't make the prose go the direction I wanted--as though the style had a mind of its own. Which is probably another way of saying my handwriting looks much the same in small and on the blackboard, even though the muscles are different. Better not to push it too hard. I don't know how many years of practice it would take to imitate his style well--probably far more than just to get my own to be good. "Imitate the best" was the advice, but it was for learning the details of the craft, not for turning into an imitation Hemingway. It's better for a story to gather electronic dust than turn out like this Wodehouse imitation, which sounds so obviously bad I wonder how it found a publisher. ## Tuesday, December 03, 2013 ### Proverbs While mulling over some questions about what society owes the knives in the drawer who aren't as sharp, I started chasing down proverbs. One handy collection (*) ranges from Solomon to Kipling but includes a lot of what we think of as traditional rustic proverbs. Many deal with working hard and hopefully, using resources wisely, and the effects of companions. 
In other words, a lot of simple rule-of-thumb advice that even someone a little slow on the uptake can internalize. (Though I remember several folk-tales about simple lads who took things too literally.) Just for comparison, have a look at this (shorter) list from China. (At least one of the proverbs is the same: Teach a man to fish...) This is much more heavily flavored with respect for study and suggests a somewhat more involved social structure ("Do not employ handsome servants"). This list feels more rustic, and includes some pretty obscure admonition. I have to include some from Liberia. Liberian proverbs are a little different; much more fluid and often requiring explanation for outsiders. From that last link: African proverbs usually have two meanings: the literal or primary meaning, and the deeper or real meaning. The real meaning of African proverbs is not always apparent. This is precisely why they are called proverbs. For instance, the Ghanaian Akan, Dangme and Ga expressions for "to cite a proverb," bu abe, means "to bend," "curve," or "twist words," to make them complicated (Yankah 1986). Similarly, the Lugbara (Uganda) term that is used to designate proverbs, e'yo obeza, literally means "mixed words," "twisted speech" or "indirect talk" (Dalfovo 1997). The meaning of a proverb is not fixed, and so it can be modified. The user is free to reconstruct a proverb in order to make it appropriate in the particular context in which it is being used. To modify a proverb, one may delete, paraphrase, elaborate or transfer elements in it. The hearer must be witty to interpret and grasp the meaning of a proverb. Compare the flavors yourself. I can't describe the differences in single phrases. All represent a useful body of wisdom--including the contradictions that describe life ("Look before you leap", "He who hesitates is lost"). I don't hear these much, though perhaps I don't travel in the right circles, and I suspect we suffer for it. (*)It has odd comments sometimes, almost as though the author didn't realize that the Bible was known the whole time English was developing as a language. ## Monday, December 02, 2013 ### Hellenized Jews I was taught early on that Alexander the Great wanted to spread Greek culture to everybody, and that as a result Greek language and Greek amphitheaters and Greek sports spread all over the Middle East. I learned later that some Greeks had colonized Italy, so that explained Greek influence there. That's one of the perks of being the conqueror, I guess. Except that the conquered lands were split among warring dynasties, so it isn't quite so easy to see how Greek-ism would be easy to push on the conquered peoples--if you annoy them too much they might defect to the other guys. In your own heartland strongholds, sure, but in the middle of it all? (Antiochus IV was a nutcase.) So what was so enticing that there would be a significant body of Hellenized Jews? The previous post may explain some of the enticement. This wasn't an empty "wear the robes of the rulers" fashion trend. The Hellenist culture was high-tech too. The tech was maybe not affordable for everybody, but there were real secrets to be learned, with practical uses as in agriculture and medicine that could help everybody. Some of the rest of Greek culture would come along for the ride, so to speak, and for the Jews that could be problematic: idolatry and immorality. 
There are some seriously problematic aspects to modern Western culture too, and some of them seem to spread along with the more technical goodies. That may give a parallel with the past. Sheer, unmitigated speculation by an amateur. Maybe no secret to the pros.

## Sunday, December 01, 2013

### The Forgotten Revolution by Lucio Russo

The history of science in the West, as those few of us who remember it tell it, has the ancient Egyptians inventing some primitive technology, the ancient Greeks inventing math and science but scorning technology as not fit for truly noble thinkers, Romans who developed civil engineering and sanitation, and then a lot of barbarians who wrecked everything until math started being rediscovered and science and technology re-invented and went on to exceed anything mankind had ever seen before. Name an ancient Greek scientist? Archimedes, Aristarchus, Aristotle, Ptolemy? (Aristotle does not really count, btw.) Check the dates. That's quite a range of time. It turns out that there's quite a bit missing, and some of what we know isn't quite true. We tend to telescope everything from 1000 BC to 400 AD as "ancient," giving the illusion that everything happened more or less at once. The Forgotten Revolution goes in depth to try to find out what happened: how high the Greek science and math rose, and why things started falling apart. The answer is very high indeed. Pick those famous "epicycles" that Ptolemy introduced to explain the motion of planets in the sky.

• Ptolemy didn't invent them; some other Hellenists had done so centuries before. (He died in 168 AD.)
• Hellenist math was constructive, and required straight-edge and compass. For anything not perfectly circular, epicycles were inevitable.
• Look at this picture and tell me what you see. These are dates of astronomical measurements that went into the Almagest. There's a nice spike at times close to Ptolemy, and a long list before--with a 2 century-long gap in the middle. Ptolemy wasn't developing something new, he was trying to retrieve something old. And since little details like known heliocentric theories were left out of the Almagest, it wasn't a thorough review. Hint: Ptolemy was from the Imperial Roman era.

What sorts of things did the Hellenists have? Lenses, steam engines, gears and (one infers) metal screws, first-principles shipbuilding design (so much for a disdain for technology), hydrostatics for pipe design, gigantic lighthouses (probably with a parabolic mirror--hard to be certain), timing of systolic and diastolic pulses of the blood, watermills, and of course differential gearing in the Antikythera mechanism. And mechanical puppets, and something that sounds like silent movies.

## Astronomy

Russo is a physicist, and also versed in classical literature. This is rare. Most classicists don't know much math or science, and most scientists aren't familiar with old forms of Greek. So things like this slip by (p. 106):

The common idea is that Aristarchus was too far ahead of his time to have had a lasting influence on the course of science, and support for it is generally found in the accusation of impiety supposedly leveled against him because of his heliocentrism. The belief that Aristarchus was accused of impiety originates with the seventeenth-century philologist Gilles Menage who … changed a passage in Plutarch by amending an accusative into a nominative and vice versa.
Russo points out that planetaria were well known, and that Cicero's description of Archimedes' version is consistent with a heliocentric mechanism--not so much with a geocentric one. (Another advantage of having somebody with a technical background peering at the language details.)

More astronomy: Geminus (about 50 BC) compiles a list of star positions, "warning the reader not to suppose it ('sphere of fixed stars') to have a physical existence, since the stars are at different distances from us." And "According to Pliny, Hipparchus (died 120 BC) compiled his catalog of stars precisely so that later generations might deduce from it the displacements of stars and the possible existence of novae." (Talk about a long-term research program! Halley (1718 AD) recorded differences with Sirius, Arcturus, and Aldebaran.)

Eratosthenes famously measured the circumference of the Earth. From textual evidence and some references to the verticality of the sun in a band of about 200 miles, Russo strongly suspects that Eratosthenes (head of the library of Alexandria too) had a team dig the famous well in a more accurate spot than merely a handy town, and went on to define his own version of the stadia, linked to the Earth's circumference. This suggests a little funding... (The arithmetic of the measurement is sketched below.)

## Technology, travel, engineering

How about military technology? The torsion catapult (about the middle of the 3rd century BC) was more powerful and faster to use than the trebuchet of western medieval times more than a millennium later. There's some evidence that a repeating catapult was used, and according to Philo of Byzantium, air piston catapults (thanks to Ctesibius, died 222 BC).

Travel? Sailors ran direct to India from the Gulf of Aden--no sticking solely to the coastline; that's high seas navigation. Pytheas (late 4th century BC) got far enough north that the sun stayed up all summer, and even saw the polar pack ice. Herodotus preserves (and disdains) a report suggesting a circumnavigation of Africa.

Civil engineering? Canals, a blossoming of irrigation methods used from that day to this, aqueducts, and so on. Remember the Roman aqueduct system?

Vitruvius' regard for the role of applied science is the greatest of any Latin author. He enumerates the fields of knowledge required by a good architect … writing, drawing, geometry, arithmetic, history, philosophy, music, medicine, law, and astronomy. But consider the ensuing explanation of the uses of this knowledge: astronomy is regarded as necessary, in essence, for determining the four points of the compass, and geometry for understanding the uses of squares and levels…. Vitruvius' work represents the highest level achieved by a Roman technical treatise. As for the rest, Frontinus, the author of the main Latin work on aqueducts, systematically mixes up the flow rate of a pipe with its cross section, thus ignoring, in particular, the role of the slope. The high technological level of Roman aqueducts seems hard to reconcile with such incompetence, but we should not forget that Frontinus was not an engineer but the bureaucrat in charge of Rome's water supply, … whereas the actual designers, builders and maintainers of the aqueducts were slaves, who of course were not in a position to write books. In the same way we find that, for all productive activities with technological content, Rome had to import either finished goods or workers from the East.
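As flagged above, here is the back-of-the-envelope version of Eratosthenes' measurement (a sketch of my own in Python; the 7.2 degrees and 5000 stadia are the traditional textbook values, not figures from Russo's reconstruction):

```python
# Traditional textbook numbers (assumed values, not from the post): at the
# summer solstice the sun is vertical at Syene (the famous well) while a
# gnomon at Alexandria casts a shadow at about 7.2 degrees, and the two
# cities are roughly 5000 stadia apart along a north-south line.
shadow_angle_deg = 7.2
baseline_stadia = 5000

# The shadow angle equals the arc between the cities, so the full circle
# is 360 / 7.2 = 50 such arcs.
circumference_stadia = baseline_stadia * 360 / shadow_angle_deg
print(circumference_stadia)  # 250000.0 stadia

# Taking one reconstruction of the stade, ~157.5 m, gives ~39,375 km --
# within a few percent of the modern polar circumference (~40,008 km).
print(circumference_stadia * 157.5 / 1000, "km")  # 39375.0 km
```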
Lighthouses:

The installation of the Pharos was considered so useful that other pharoi were erected at every important port of the Hellenized Mediterranean. But Greek sources contain no overall description or a single technological detail relative to the Pharos, even though it was regarded in its own time as one of the seven wonders of the world. This confirms how reticent our sources are about technological products, in a case where the product itself is not in doubt. Because the only extant descriptions of the Pharos are by Arab historians, who visited it long after it had ceased to function, we know very little of its technology. Yet some conjectures can be made on the basis of its purpose and contemporary knowledge. First, we can imagine that the reflector consisted, as it would today, of a parabolic mirror, all the more so because the relevant theory arose precisely around the time of the construction of the Pharos.

At this point Russo starts to stretch things a bit for my tastes--but maybe he is right:

It should be stressed that modern steam engines are not at all, as is often implicitly assumed, an invention independent of the Hellenist engines; there is a continuous line of descent. Heron's expositions were studied carefully by Leonardo da Vinci, among others.

Apparently Leonardo drew pictures based on the old books, including a picture or two that aren't found in any of the known old books. Heron (1st century AD) seems to have been a compiler rather than a scientist or engineer himself. The dioptra Heron describes used small screws, but Heron's description of how to make screws only works well for big ones, and in his Automata he never uses gears in the descriptions at all, only friction devices. But gears would have worked better in some cases, and we know that centuries before there were precision gears. This implies that by Heron's time there had been a significant technology loss. There was still a lot to see. Heron says that an early automatic play "merely showed, by way of motion, a face with blinking eyes. … Heron also says that with 'still' automata, one can either show a character in motion, or a character appearing and disappearing." There is some suggestion of the use of acids in mining, and we know they had pumps to raise water 30 meters out of an underground mine; some of the hardware is still there.

## Medicine

Apparently Galen was not the pinnacle of "ancient" medicine. Herophilus, for example, had "a water 'stopwatch' built that could be adjusted for the age of the patient." He described some mental illnesses, studied the circulatory system (with terms still used), studied the eye with great care, discovered the nerves (sensory and motor both!), and "For some diseases, such as the cholera, it is recorded that Herophilus handed down no treatment: this is perhaps the best proof of how serious he was in his medicine."

## Science as such

Russo goes to considerable pains to define a scientific attitude as abstracting details from reality to create models of more or less validity--and making clear that the models are not the reality. It seems fairly clear that this attitude was gone by Imperial Roman times. He argues for its presence in Hellenist times. This is a little tricky, since most of the Hellenist scientists are known only by later references; not much is left. Russo also complains that a reliance on "homogeneity" undermines science, but another word for that in context is "dimensional analysis."
Things like x^2+x^3 were allegedly not dealt with by Greek mathematicians (not clear that this is true, BTW). If x is a distance, the expression as it stands does not seem to make any sense. Given how important constructive techniques were for the Greeks (they didn't have computers, OK? And you can try mapping the measurements from a small construction to a big one by hand, but I'll bet you make mistakes), it isn't too surprising that these wouldn't have been high priority.

Did Newton base his work on gravitation on old Greek manuscripts? That seems likely to Russo and extremely unlikely to me. Some extraordinarily muddled Roman descriptions of Greek astronomy have triangles emanating from the Sun to move the planets. This, to Russo, looks a lot like Newton's geometric explanation of step-by-step motions of the planets under the Sun's gravitation, and Russo's explanation of how the language could have been garbled is quite plausible. However, untangling that rubbish is harder than figuring it out in the first place, and I think Newton's work was independent. Newton acknowledges his debt in optics to the Archbishop de Dominis, who used Greek terminology and borrowed from Arabs. It seems that the development of optics was plagued by amazing bad luck:

The "Ancients" knew how to make good lenses but did not know what to make of them and kept them as baubles; later intellectuals--not just Leonardo and Fracastoro, but also Roger Bacon and Grosseteste centuries earlier--knew many uses for them, yet could not build them and had never seen such things. Some medieval manuscripts even show astronomers looking at the sky through long tubes; the incongruity has been addressed by postulating that these were empty sighting-tubes! Consider that Bacon, in the fifth book of the Opus maius, waxed enthusiastic about the Ancients' ability to enlarge small objects and to bring faraway ones close, using appropriate configurations of lenses and mirrors.

## What happened?

Disasters piled on limited knowledge pools seem the likely culprit. Quite a bit of this is my interpretation of his data. He doesn't devote as much time as I'd like to seeing what went wrong. Thebes had the death penalty for spying on their shipyards. No technology spread if they could help it. I don't know if I'm reading between the lines, but the complete absence of details about lighthouse construction sounds as though it was better for your health not to be too specific. If you have many small pools of knowledge trading finished goods, if not the technology proper, with a big pool of talent at Alexandria, you can keep developing sciences and skills and reverse engineering things for quite a while provided there's peace. There wasn't. Rome was expanding. Archimedes was killed by a Roman soldier, and Syracuse wasn't the last place; the Romans kept on. Euergetes II (Ptolemy VIII) perhaps decided that the Romans would like him better if he moved against Greeks, so he persecuted the Greek population of Alexandria--including the ones from the library. Almost nobody was left, and some of the survivors from the library seem to have headed East far out of Roman reach; some suggest all the way to India. Do I need to say more? Even if the books survive, explaining exactly what the most advanced treatises mean demands some humans who still understand. Lose continuity, and the most advanced material is now unreadable--or if readable, not understood.
Tell me what the average news reporter would make of the phrase fiber bundle, and compare it to the link. And any complicated technology demands the work of different kinds of experts. Lose any of them, and you can't repair it anymore. So the Roman predilection for encyclopedic works rather than deep analysis makes sense--they didn't understand the hard stuff. Nobody did anymore. And you can only coast so long on an encyclopedia, especially when it gets corrupted and larded with superstition.

Can it happen again? Of course. The barbarians are always with us, both outside and inside. We spread our knowledge far more widely today, so the catastrophe would have to be much greater to hit it all, but the same problem of transmission of the hard details remains. Don't count on Wikipedia being there after a disaster; think in terms of what books the local library kept from the great starving seventy years ago when the burners went through.

### Word problems

On another blog I mentioned a method of trying to parse word problems I'd used with several of our kids, and from the dead silence that ensued I conclude that people are too embarrassed to ask what it was. Or something. Blogger is not terribly friendly when I want to use and re-use many small images, so I stashed the whole thing on another web site. The idea is quite simple: look for key words and keep careful track of your units.

### Resisting "Columnist Disease"

'But I must think it one or the other.'
'By my father's soul, you must not--until you have some evidence. Can you not remain in doubt?'
'I don't know that I have ever tried.'
'You must learn to, if you are to come far with me. It is not hard to do it. In Eschropolis, indeed, it is impossible, for the people who live there have to give an opinion once a week or once per day, or else Mr. Mammon would soon cut off their food. But out here in the country you can walk all day and all the next day with an unanswered question in your head: you need never speak until you have made up your mind.'

The Pilgrim's Regress by C.S. Lewis, Book 4, Chapter 3

## Saturday, November 30, 2013

### Reminders

AVI recently posted on the spiritual gift of "Reminding". He noticed that he rarely learned anything new from Bible reading and studies, and thought it a good thing: reminders are not news but we need them, even when we're not enthusiastic about them. Someone learning new things may applaud the teacher; someone hearing the also-necessary reminders may just grump. The job of Reminder is definitely more like foot-washing... I'm still learning aspects of the faith emphasized by other branches--finally getting around to it--which is very interesting and edifying and not always very attached to how I live. On the other hand, my familiar daily readings that cycle through the gospels bring me to Matthew 25 regularly. Every time it seems to dig a little deeper: "Pay attention, dummy!" If we hunger and thirst after new things, the old familiar may seem, and perhaps eventually become, mere rote. (Merton said he knew monks dead from the neck up.) There must be a trick to being thankful for reminders. Maybe experience? The oil level in the van hasn't been a problem since we bought it, but we still need to check it--one day it won't be OK. Hone up the kitchen knives. Sweep the floors, call your mother, mow the grass (come Spring): do the ordinary maintenance that, if we keep it up, isn't so ordinary. Sometimes I find myself so far afield that I wonder if I need a flapper.
Now that I think of it, most of the prophets were doing reminders (linked to an "otherwise here's what's coming" or "this happened because"). And they were highly honored for it (usually after they were dead): I think AVI's right; it's "one of the higher gifts we should aspire to."

## Thursday, November 28, 2013

### Catacombs

Going to Italy is not in our budget anytime soon. But Google Maps has gotten there already, and no stress on my knee!

## Wednesday, November 27, 2013

### Distributed detection of chemical weapons

Cell phones for chemical weapons detection? The idea seems reasonable enough--if the technology is cheap enough and the drain on batteries low enough, you could put environmental testing into a cell phone and have it relay back the GPS coordinates when something untoward is detected. One reading isn't any use. I doubt they can get the false positive rate down to anything near 0, and in a city with a hundred thousand cell phones somebody's phone will always be setting off a warning. So it isn't very good for detecting as-yet-unused munitions--they need to be leaking enough so that more than one phone in the area is affected. That doesn't mean the technology is of no use. If you have a dozen flags up all at once, civil defense can send out warnings, try to isolate traffic, get people indoors--the usual stuff. But a couple of questions come to mind.

1. Are there chemicals that will give false positives? If so, the usual suspects will make it happen.
2. Can the control be turned off by an app? A mole app, if popular enough, could generate false positive patterns, or disable the system entirely so civil defense has fewer active systems than they think. If their models lead them to expect 100 and they only have 30, what does that mean?

You could make a simple compromise radiation detector that just looks for gammas. Most alpha emitters will include beta and gamma in the decay chain somewhere (tritium is tough, though). Pity the poor guy who got radio-isotope therapy. He'd have to stay indoors for a week.

### Be sure to wear your helmet when playing in the driveway

Son hit by meteorites? It could be, though I'd want things checked a bit more closely. How hard did the objects hit the boy? We can estimate how fast rocks that size should be traveling at terminal velocity and see if the injury is consistent. Is it really airplane debris? If it turns out to be just bits from an old satellite that should stir a bit of a fuss. If it is real, I wonder what the boy's life will be like? Everywhere he goes he'll be the meteorite boy. Sort of like being a celebrity's kid--you have to duck the fame or try something to be known on your own. I wear hats to keep off the Sun's UV. I don't think I'll upgrade. If a meteorite wants to clobber me I guess I'll just have to worry along somehow.

## Tuesday, November 26, 2013

### Imagining a world without antibiotics

I'd be long dead without antibiotics: several bouts of pneumonia plus other kid stuff I've forgotten about. The bugs have been getting more and more resistant to the drugs, as predicted. McKenna warns about regularly mixing antibiotics in animal feed--I gather it helps the critters grow more quickly. But it obviously speeds up the rate at which bacteria become immune. There are details...

However, the companies which sell antibiotics report that only 13 percent of all product sold were sold under the "performance enhancement," or growth promotion, label. The rest is for animals who need medicine to prevent and treat illness!
Therefore, looking at the total volume of product sold is not meaningful to this debate. Although everyone continues to harp on this 80 percent number, it is important to understand that there are many more livestock in the U.S. than there are people; most are larger and need a larger dose. Also critical to this discussion, the types of antibiotics used in humans are much different than those used in animals. My main take-away from this (interested) party is the claim that the antibiotics used in humans aren't the same as in animals. It isn't clear if he means people get one penicillin derivative and animals get another: in fact one of the comments claims "In 2010, 70 percent of antibiotics sold for use in food animals were ionophores or tetracyclines." Another says "Drug use is rarely effective to'compensate' for poor practices. A large percent of ontimicrobials, has been often stated, are ionophores. Ths effect of this class is to push the rumen microbial fermentation over to a more favorable volatile fatty acid ratio, improving the efficiency. Also, some work out of Minnesota suggests that the growth promotion effect in hogs with other antibiotics may be in fact a similar mode of action. Which seems to also be a common thread in the press lately about obseity and gut microbes in humans." Which is related to the "grow more quickly" that I thought common knowledge. So things are a little more complicated than met the eye. Butchering isn't a precision job, and some cross-contamination occurs with the gut bacteria. Even if the antibiotic residue is below the allowed value, the bacteria are still there and itching for a little sloppy meat handling. Or fertilizer handling--veggies fertilized with nice organic manure... The companies' report of 13% may or may not be very useful. For oral antibiotics, what's to keep you from mixing an old batch into the big feed bin after the sick cow gets well? As long as the residues die away by the time you ship, who's to know? I've thought that "provide for the common defence" applied just as well to epidemics as it does to merely human enemies. (No, the ordinary cold and flu season doesn't qualify; you're on your own. Think Spanish flu or bubonic plague.) Systematically building up antibiotic resistance seems like an activity designed to undermine the security of the state. That said, I suspect that the next big plagues won't come out of American farms, but out of megacities. People share diseases pretty easily, and in a lot of the world the antibiotics can be a little old (I've heard of donated drugs arriving pre-expired), or will be taken only until the patient feels better. Another recipe for new plagues. ## Friday, November 22, 2013 ### IceCube No, my name is not on the paper. Halzen and Karle and Yeck assembled a good team; sharp and pleasant to work with: people who like to solve problems and find things out. And there were a lot of problems. It turned out that the clearest ice in the world had a tilted dust layer in it thanks to some distant volcano, and studying the properties of that took man-years (and is using lots of GPUs even now). And the background rates from cosmic ray showers in our atmosphere depend on "forward-going" charm quark production rates that even the LHC has trouble measuring. Congratulations to all. ## Thursday, November 21, 2013 ### Process theology The subject came up recently, and I've been trying to learn a little about it. 
If I understand process theology's principles correctly, one foundation is that existence means to be in relation. When this is applied to God, it seems to contradict classical theology with its "aseity" although (forgive the proverb) the devil is in the details. The Christian Trinity exists both in Himself and in relation with Himself, and the source of relationship is within the trinity, but you still have contradiction for simpler monotheisms. Some of the P.T. proponents seem to be claiming that God is compelled to create as a requirement of His own existence--in order to exist in relation. In addition, some assert that God does not know the future. IIUC, this is not the common predestinarian confusions that mix human experience with eternal views of time. Instead they are asking what are the implications of co-creation. I think this simply shifts the paradox to a new corner--which of the infinite set of possible histories do we agree to make. Their pop expression "does not know the future" is not helpful. Why would we want to try to redefine the ground of existence as relationship? The first thing that comes to mind is the amazing success this sort of thing has had in mathematics. Pick up any advanced math reference, and it starts out describing mappings (morphisms). Category Theory is a hot topic, and a very powerful tool. Spending the effort to make a description of a physics law independent of the coordinate system you pick makes some symmetries much clearer. Analyzing relationships is what much of modern math does. Of course we exist in relation, but that's not the same as saying everything must. Which is why I thought of math first. I think the Creator/creation divide is so great that the same terms in the language don't mean the same things when applied to each side (I notice that the Chalcedonian debates were followed by a rise in apophatic theology: it seems fitting that after arguments over fine distinctions there be an admission that sometimes words mislead.) So I'm wary, even though I've written about God as a suffering servant myself. The process theology field spreads over a moderately wide range, and the theologians I ran into first may not be typical. They seem to elevate the human view of the relationship to be a peer to God's view, which of course makes the current political and social fashion the next step in the revelation of the relationship between God and man. ## Wednesday, November 20, 2013 ### Typing one-handed is very error-prone. I remember a suggestion that the superior quality of older letters was partly due to having extra time to think every time you re-dipped the quill. We all know there are several biases to the sampling involved in that evaluation, and if the principle is true my output should be of nearly Shakespearean quality for awhile. Rotator cuffs are amazingly complex machines, but they have several unpleasant failure modes, and I gather they don't heal as well as simpler systems. ## Tuesday, November 19, 2013 ### Oswald In commemoration of the Kennedy assassination, BBC dug up stories of Oswald's life in Russia. Oswald basked in the attention of being one of Minsk's few foreigners, and its only American. He regularly made social calls to a girls' dormitory, near his flat. "He would come without warning and knock at someone's door and say, 'Hello, here I am,'" says Inna Markava, an English-language translator who was a student at the time. "And that's it - spend two or three hours. He thought that he was the centre of the group," she says. 
"I remember that we were in the room, sitting, and if he thought we had forgotten about him, he would immediately remind everybody, that he was there, that he should not be forgotten." The article's author says: Still it was somewhat unnerving to hear so many good things about a person whose name is associated with one of the most infamous acts of our era. I think David Stern and I may have different ideas about what constitutes "good things" about a person. "Weird" is the word I'd choose. I gather one is supposed to try to remember where you were that day. I'm afraid I don't remember much at all. Maybe I've forgotten, or maybe I just didn't pay a lot of attention at the time. A friend of mine was vexed that his favorite program (Fireball XL5) was preempted by funeral programming and for some reason blamed the Kennedys. (We didn't have a TV, and my oxen were ungored.) ## Saturday, November 16, 2013 ### Round world When did people learn that the world was round? The Greeks figured it out and taught the West and the Arabs, and I presume the Indians as well. The Chinese seemed to consider it flat until Ricci et al, though perhaps that was due to the representation of the Earth (a grid) in the language, taken mystically (Yin/Yang) or at face value by the unlearned. Though apparently it took a while for Chinese astronomers to buy into Ricci's notions. The Chinese weren't lacking in good astronomers, but astronomical knowledge about the stars and planets doesn't demand that you understand the shape of the Earth. I read claims that the Maya knew the world was round, but I haven't yet found an explanation, just bald assertions. Once again, astronomy doesn't have to tell you the shape of the Earth unless you ask the right question (Earth's shadow on the Moon). The lies of Draper and White have been dreadfully long-lasting, and clutter up google search badly enough to make finding information tough. ### Supertime speculation I referred to the science fiction notion of humanity as a psychically linked collective organism in a previous post. If that were the case, we'd be one unhappy organism. When cells in your body decide to go "every man for himself" we call it cancer. And if the super-organism carried our individualist attitudes up to the next level... The world=The flesh=The devil and the church is emergency surgery. ## Thursday, November 14, 2013 ### Technological decline Via Not Even Rocket Science: some interesting notes on Tasmanian technological decline. Summary of the situation: after rising seas separated Tasmania from Australia, they lost, according to the archeological record, the skills required to make clothing, fire, composite weapons (i.e. putting a stone tip on a spear), fishing boats--in fact they stopped eating fish. Wrapped in a skin and smeared with grease to keep off the cold, they used clubs and sharp sticks. Yet their ancestors back in 6000BC knew better; even in 1000BC they still had bone tools. The theory proposed is that the populations were small enough that there were not enough apt pupils to maintain the more complex technologies. Knapping takes either instruction or a lot of trial and error. In their models teaching is lossy, and only a few pupils better their master. I'm not sure there's always a best way to attach a spear-point--more likely several good ways that you adapt depending on details of the wood and the point. 
Henrich explicitly assumes that human intelligence averages are universal, which is known not to be the case, but it may not matter much for this argument. The information loss model is easily illustrated in the loss of native technologies and stories when European technologies arrived in the Americas. The reason differed, but the operation was the same--not enough people to learn the old ways, and sometimes even the language dies out.

The Forgotten Revolution (I'm still reading it) describes how the Hellenists reached heights of scientific understanding not regained for two thousand years--but when political purges scattered the Alexandrian Greeks the knowledge died out. Without a large enough pool of scholars the chances of getting someone with the intellectual horsepower to understand Eratosthenes, much less carry on his work, are poor. You're lucky to be able to teach a little pre-algebra.

I assume everybody has read A Canticle for Leibowitz. That may be the solution, but I wonder if that's all. You can imagine a Tasmanian PETA that persuades the rest that fish are bad. After a while you forget why--but in the meantime you don't need fish tridents or nets, and you forget how to make those too. Disease or poor nutrition (famine) will drop the average IQ of a cohort of youngsters. Their kids may be OK, but in the meantime information gets lost.

It isn't pleasant to think that the things I've worked with will evaporate without a trace--knowledge is supposed to be cumulative--but it has happened before, and can easily happen again. I suppose we need to be "faithful in his generation", even if those who come after lose it all.

## Monday, November 11, 2013

### Pehn-Pehn Ban

Recently Liberia decided to ban the motorcycle taxis that have been serving Monrovia. They are cheap, often overloaded, and, if you can believe it, even less careful of traffic rules than the taxis and money-buses. The last straw seems to have been when an accident ended with a bus being burned by motorcyclists in retaliation. As you can guess, there is more than one side to the problem. (The official exchange rate is 85 LD to the USD, but I suspect the unofficial rate is twice that.)

Garloe, who claimed he has been operating a motorcycle since 2012, said the leadership has always devised schemes to collect money from them. "We are often confronted by our leaders to purchase various items such as stickers, daily tickets, identification cards from the Union." "Think about this, we pay L$600 for stickers, L$50 for daily tickets that are not even monitored, along with L$500 for identification card which many of us have paid for and are yet to receive; we do not know where the money is going," the frustrated motorcyclist said.

Quizzed as to what was responsible for the latest restrictions coming from government, he said "maybe, the union's refusal to remit money and fines collected from motorcyclists and the inability of the union to adequately monitor and supervise the sector are some of the reasons they have been banned from plying the main streets."

Adding his voice, motorcyclist Samuel Zeiguah mirrored the issue from a very different perspective, saying that the ban will not only affect them as riders but will also affect police officers and officials of the Union who are in the habit of clamping down on poor motorcyclists. "Police officers along with the officers of the five different unions will find it increasingly hard to regularly pay their 'susu', because of the ban.
Policeman is going to seriously and surely feel it," he asserted.

I like Samuel's little sarcastic touch. Turns out there's even more at stake here:

Mr. Bernard feared that this policy could increase the high criminal rate in the country, as many of the 'pehn-pehn' riders may revert to their ugly past, stressing: "Some of these bike riders you see here today are former combatants, and some of them are former armed robbers, who saw bike riding as a means of making livelihood. So, if you do something that will keep them less busy, you give room to rise in the criminal rate."

Unintended consequences all over the place. (Source for the quotes is AllAfrica.)

## Sunday, November 10, 2013

### All Things Considered

The whole is here, but for your edification:

Real pain, as in the case of being burnt at Smithfield or having a toothache, is a positive thing; it can be supported, but scarcely enjoyed. But, after all, our toothaches are the exception, and as for being burnt at Smithfield, it only happens to us at the very longest intervals. And most of the inconveniences that make men swear or women cry are really sentimental or imaginative inconveniences—things altogether of the mind. For instance, we often hear grown-up people complaining of having to hang about a railway station and wait for a train. Did you ever hear a small boy complain of having to hang about a railway station and wait for a train? No; for to him to be inside a railway station is to be inside a cavern of wonder and a palace of poetical pleasures. Because to him the red light and the green light on the signal are like a new sun and a new moon. Because to him when the wooden arm of the signal falls down suddenly, it is as if a great king had thrown down his staff as a signal and started a shrieking tournament of trains. I myself am of little boys' habit in this matter. They also serve who only stand and wait for the two fifteen.

and

For instance, there is a current impression that it is unpleasant to have to run after one's hat. Why should it be unpleasant to the well-ordered and pious mind? Not merely because it is running, and running exhausts one. The same people run much faster in games and sports. The same people run much more eagerly after an uninteresting little leather ball than they will after a nice silk hat. There is an idea that it is humiliating to run after one's hat; and when people say it is humiliating they mean that it is comic. It certainly is comic; but man is a very comic creature, and most of the things he does are comic—eating, for instance. And the most comic things of all are exactly the things that are most worth doing—such as making love. A man running after a hat is not half so ridiculous as a man running after a wife. Now a man could, if he felt rightly in the matter, run after his hat with the manliest ardour and the most sacred joy. He might regard himself as a jolly huntsman pursuing a wild animal, for certainly no animal could be wilder. In fact, I am inclined to believe that hat-hunting on windy days will be the sport of the upper classes in the future. There will be a meet of ladies and gentlemen on some high ground on a gusty morning. They will be told that the professional attendants have started a hat in such-and-such a thicket, or whatever be the technical term. Notice that this employment will in the fullest degree combine sport with humanitarianism. The hunters would feel that they were not inflicting pain.
Nay, they would feel that they were inflicting pleasure, rich, almost riotous pleasure, upon the people who were looking on. When last I saw an old gentleman running after his hat in Hyde Park, I told him that a heart so benevolent as his ought to be filled with peace and thanks at the thought of how much unaffected pleasure his every gesture and bodily attitude were at that moment giving to the crowd. The same principle can be applied to every other typical domestic worry. A gentleman trying to get a fly out of the milk or a piece of cork out of his glass of wine often imagines himself to be irritated. Let him think for a moment of the patience of anglers sitting by dark pools, and let his soul be immediately irradiated with gratification and repose. Again, I have known some people of very modern views driven by their distress to the use of theological terms to which they attached no doctrinal significance, merely because a drawer was jammed tight and they could not pull it out.

I can testify to the elusiveness of the free-range hat.
https://discuss.codechef.com/questions/47881/spoj-problem
# spoj problem

http://www.spoj.com/problems/ABA12D/

How do I solve this question? I need to find whether the sum of all divisors of a number is a prime number or not. I am not able to think of an optimised solution keeping the constraints in mind. — asked 19 Jul '14, aman2192

@aman2192: There is a pattern for the solution. Write out the first few numbers that satisfy the given condition and you'll be good to go. (19 Jul '14)

Thanks. — aman2192

@michelangelo: I observed that if the number is a perfect square the answer is yes, the only exception being the number 2. But it gives a wrong answer. — aman2192 (19 Jul '14)

36, 49, 81, and many more perfect squares do not satisfy that property. The trick is to reduce the solution space to perfect squares and then apply the given conditions to get the final solution. Of course, the exception being 2. (19 Jul '14)

But if I store all perfect squares and start finding their sums of divisors and then check whether each sum is prime, it will give TLE, as the sum of divisors for numbers up to 10^6 would be quite big, and I'll have to check whether that number is prime, which again will involve looping. — aman2192
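The thread never posts code, so here is one straightforward precomputation approach that fits the constraints the thread mentions (assuming queries for n up to 10^6): sieve all divisor sums once in O(N log N), then answer each query with a cheap primality test.

```python
# Sketch, assuming n <= 10**6 as the thread suggests. sigma[m] accumulates
# the sum of all divisors of m; sigma(n) stays below a few million here,
# so trial division is cheap per query.
LIMIT = 10**6
sigma = [0] * (LIMIT + 1)
for d in range(1, LIMIT + 1):
    for m in range(d, LIMIT + 1, d):   # d divides m, so it contributes to sigma(m)
        sigma[m] += d

def is_prime(x):
    if x < 2:
        return False
    if x % 2 == 0:
        return x == 2
    f = 3
    while f * f <= x:
        if x % f == 0:
            return False
        f += 2
    return True

def answer(n):
    return "YES" if is_prime(sigma[n]) else "NO"

print([n for n in range(1, 30) if is_prime(sigma[n])])   # [2, 4, 9, 16, 25]
```

The printed list matches the pattern discussed above: apart from 2, the small solutions are perfect squares (though, as noted, not every perfect square qualifies).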
http://tutornotes.com.hk/DSE%20Core%20Math/More%20about%20equations_Examples_II.html
### More about equations: Examples

• It is given that $2 + \frac{1}{2 + \frac{1}{2 + \frac{1}{2 + \cdots}}}$ is a fixed value; let $x$ be that value.

(a) Prove that $x^2 - 2x - 1 = 0$.

(b) Hence find the value of $2 + \frac{1}{2 + \frac{1}{2 + \frac{1}{2 + \cdots}}}$.

(a) $x = 2 + \dfrac{1}{2 + \frac{1}{2 + \cdots}} = 2 + \dfrac{1}{x}$, so $x^2 = 2x + 1$, i.e. $x^2 - 2x - 1 = 0$.

(b) From $x^2 - 2x - 1 = 0$,
$$x = \frac{-(-2) \pm \sqrt{(-2)^2 - 4(1)(-1)}}{2(1)} = \frac{2 \pm \sqrt{8}}{2} = \frac{2 \pm 2\sqrt{2}}{2} = 1 \pm \sqrt{2}.$$
∵ the continued fraction is a positive number (indeed greater than 2). ∴ $2 + \frac{1}{2 + \frac{1}{2 + \cdots}} = \underline{\underline{1 + \sqrt{2}}}$.

• Consider $\frac{\cos\theta}{\sin^2\theta} = \frac{1}{3}$.

(a) Rewrite the equation in the form $a\cos^2\theta + b\cos\theta + c = 0$, where a, b and c are integers.

(b) Hence solve $\frac{\cos\theta}{\sin^2\theta} = \frac{1}{3}$, where $0° \le \theta \le 360°$. (Give your answers correct to 1 decimal place if necessary.)

(a) $3\cos\theta = \sin^2\theta = 1 - \cos^2\theta$, so $\cos^2\theta + 3\cos\theta - 1 = 0$.

(b) From $\cos^2\theta + 3\cos\theta - 1 = 0$,
$$\cos\theta = \frac{-3 \pm \sqrt{3^2 - 4(1)(-1)}}{2(1)} = \frac{-3 \pm \sqrt{13}}{2}.$$
$\cos\theta = \frac{-3 - \sqrt{13}}{2}$ is rejected (it lies outside $[-1, 1]$), so $\cos\theta = \frac{-3 + \sqrt{13}}{2}$, giving $\theta = 72.4°$ or $360° - 72.4°$ (corr. to 1 d.p.). ∴ $\theta = \underline{\underline{72.4° \text{ or } 287.6°}}$.

• A group of students have to share an expense of $2,640 equally for a Christmas party. Now 3 of them cannot join the party, and each of the remaining students needs to pay $8 more.

(a) If the final number of students joining the party is n, express the original amount payable by each student in terms of n.

(b) How many students will join the party finally?

(a) ∵ $(n + 3)$ students would join the party originally. ∴ Original amount payable by each student $= \underline{\underline{\$\frac{2640}{n+3}}}$.

(b) Final amount payable by each student $= \$\frac{2640}{n}$, so
$$\frac{2640}{n} - \frac{2640}{n+3} = 8.$$
Multiplying through by $n(n+3)$: $2640(n+3) - 2640n = 8n(n+3)$, so $7920 = 8n^2 + 24n$, i.e. $n^2 + 3n - 990 = 0$, which factors as $(n - 30)(n + 33) = 0$. $n = 30$ or $n = -33$ (rejected). ∴ 30 students will join the party finally.
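A quick numerical check of the first example (my addition, not part of the worked solutions): iterating $x \mapsto 2 + 1/x$ converges to the fixed point $1 + \sqrt{2}$.

```python
# Iterate the continued-fraction recurrence and compare with 1 + sqrt(2).
import math

x = 2.0
for _ in range(30):
    x = 2 + 1 / x
print(x, 1 + math.sqrt(2))   # both ~2.414213562...
```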
http://mathhelpforum.com/advanced-statistics/206763-conditional-expectation-poisson-processes.html
# Thread: Conditional Expectation of Poisson Processes

1. ## Conditional Expectation of Poisson Processes

Suppose that Xt is a Poisson process with parameter λ = 1. Find E(X1 | X2) and E(X2 | X1). I think they should be equal, but I'm not sure how to write the value of either expectation.

2. ## Re: Conditional Expectation of Poisson Processes

Hey bjnovak. Can you calculate the conditional distribution P(X1 = x | X2 = y) and find its expectation? You also have to tell us what kind of expectation you want to find, since (X1, X2) is a multi-dimensional random variable: the expectation can be of any general function of X1 and X2, i.e. E[f(X1, X2)], and if you don't specify this it won't make sense.

3. ## Re: Conditional Expectation of Poisson Processes

I believe the expectation I want is of that form. The equation from my book is E(X2 | X1 = x) = (sum over x2 of x2 * f(x, x2)) / f_X1(x), where f is the joint mass function and f_X1 the marginal of X1. I'm not sure how the Poisson process applies to this.

4. ## Re: Conditional Expectation of Poisson Processes

That means that in this particular case f(X1, X2) = X2. If you find the conditional distribution (derived from the bivariate one) and compute E[X2] under that distribution, then you can show what you need to show.
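The thread stops short of the closed forms. For a rate-1 Poisson process, independent increments give E(X2 | X1) = X1 + 1, while conditioning the other way uses binomial thinning (X1 | X2 = n ~ Binomial(n, 1/2)), giving E(X1 | X2) = X2/2 — so the two conditional expectations are not equal. A quick Monte Carlo check of both (my own, not from the thread):

```python
# Simulate (X1, X2) for a rate-1 Poisson process via independent increments.
import numpy as np

rng = np.random.default_rng(0)
x1 = rng.poisson(1.0, size=1_000_000)          # X1 ~ Poisson(1)
x2 = x1 + rng.poisson(1.0, size=1_000_000)     # X2 = X1 + independent Poisson(1)

print(x2[x1 == 2].mean())   # ~3.0, matching E(X2 | X1 = 2) = 2 + 1
print(x1[x2 == 4].mean())   # ~2.0, matching E(X1 | X2 = 4) = 4 / 2
```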
https://www.physicsforums.com/threads/probablity-problem-using-series.748615/
# Homework Help: Probability problem using series

1. Apr 13, 2014

### toothpaste666

1. The problem statement, all variables and given/known data

Professors Bob and Fred cannot decide who should buy a new kite. In order to decide, they play a game. They take it in turns to throw a (standard, six-sided) die, with Professor Bob going first. The winner of the game is the first one to throw a 4. For example, Professor Bob wins if he throws a 4 immediately; or if the results are non-4 for Bob, non-4 for Fred, 4 for Bob; or non-4 for Bob, non-4 for Fred, non-4 for Bob, non-4 for Fred, 4 for Bob; and so on. Find the probability that Professor Bob wins. (Hint: the calculation requires a certain type of series.)

2. Relevant equations

$\frac{a}{1-r}$

3. The attempt at a solution

On Bob's first turn he has a 1/6 chance of rolling a 4. If it goes to his second turn, that means that he did not throw a 4 the first turn and also that Fred did not throw a 4 on his turn, so his chance of winning on the second turn is 5/6 * 5/6 * 1/6. The chance of Bob rolling a 4 within his first 3 turns is

$\frac{1}{6} + \left(\frac{5}{6} \cdot \frac{5}{6} \cdot \frac{1}{6}\right) + \left(\frac{5}{6} \cdot \frac{5}{6} \cdot \frac{5}{6} \cdot \frac{5}{6} \cdot \frac{1}{6}\right) = \frac{1}{6} + \left(\frac{5}{6}\right)^2 \frac{1}{6} + \left(\frac{5}{6}\right)^4 \frac{1}{6}$

Continuing the pattern gives a geometric series whose first term is 1/6 and whose common ratio is (5/6)^2. Plugging this into a/(1-r) we have:

$\frac{\frac{1}{6}}{\frac{36}{36} - \frac{25}{36}} = \frac{1}{6} \cdot \frac{36}{11} = \frac{6}{11}$

So Bob's chance of winning would be 6/11. This seems to make sense because they would have just about the same chances of winning, except Bob's is slightly higher because he goes first, which leaves the chance of him winning without Fred even having a turn. Is this the correct answer/approach to this problem?

2. Apr 13, 2014

### Ray Vickson

Your answer and approach are correct. There is another way that is perhaps more intuitive and more "probabilistic". Let b = probability that Bob wins, given that Bob goes first, and f = probability that Bob wins, given that Fred goes first. We have
$$b = \frac{1}{6} + \frac{5}{6} f \; \longleftarrow \text{do you see why?}\\ f = \frac{5}{6} b$$
The second equation above follows from the fact that if Bob is to win when Fred goes first, Fred's first toss must not be a '4', and then after that, everything is the same, but now with Bob going first. Solving the two equations gives $b = 6/11, f = 5/11$.

3. Apr 13, 2014

### toothpaste666

Can you explain that a little bit more? It seems a lot more efficient than what I did, but I don't fully understand it.

4. Apr 14, 2014

### Ray Vickson

If Bob starts he wins immediately if he gets '4'; the probability of that is 1/6. Otherwise, Bob wins when Fred goes first following Bob's initial non-4, and the probability of that sequence is (5/6)*f. Those are the two ways Bob can win going first, so b = 1/6 + (5/6)*f. For Bob to win when Fred goes first, two things must happen in sequence: first, Fred has to get a non-4 (probability = 5/6) and then Bob wins eventually while going first (for the remainder of the game). The probability of that sequence is (5/6)*b, so f = (5/6)*b.
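A quick simulation (my own check, not part of the thread) agrees with the 6/11 ≈ 0.5454 answer:

```python
# Play the game many times; count the fraction Bob wins.
import random

random.seed(1)
wins, trials = 0, 200_000
for _ in range(trials):
    while True:
        if random.randint(1, 6) == 4:   # Bob throws first
            wins += 1
            break
        if random.randint(1, 6) == 4:   # then Fred
            break
print(wins / trials, 6 / 11)            # both ~0.545
```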
http://specialfunctionswiki.org/index.php/Pythagorean_identity_for_sinh_and_cosh
# Pythagorean identity for sinh and cosh

## Theorem

The following formula holds:
$$\cosh^2(z)-\sinh^2(z)=1,$$
where $\cosh$ denotes the hyperbolic cosine and $\sinh$ denotes the hyperbolic sine.

## Proof

From the definitions
$$\cosh(z)=\dfrac{e^{z}+e^{-z}}{2}$$
and
$$\sinh(z)=\dfrac{e^{z}-e^{-z}}{2},$$
we see
$$\begin{array}{ll} \cosh^2(z) - \sinh^2(z) &= \left( \dfrac{e^{z}+e^{-z}}{2} \right)^2 - \left( \dfrac{e^{z}-e^{-z}}{2} \right)^2 \\ &= \dfrac{1}{4} \left( e^{2z}+2+e^{-2z}-e^{2z}+2-e^{-2z} \right) \\ &= 1, \end{array}$$
as was to be shown. █
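A quick numerical spot-check of the identity (my addition, not part of the wiki entry):

```python
# cosh^2(z) - sinh^2(z) should be 1 for any real z, up to rounding.
import math

z = 1.2345
print(math.cosh(z) ** 2 - math.sinh(z) ** 2)   # 1.0 up to floating-point error
```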
http://pyro.ai/examples/ekf.html
# Kalman Filter

Kalman filters are linear models for state estimation of dynamic systems [1]. They have been the de facto standard in many robotics and tracking/prediction applications because they are well suited for systems with uncertainty about an observable dynamic process. They use an "observe, predict, correct" paradigm to extract information from an otherwise noisy signal. In Pyro, we can build differentiable Kalman filters with learnable parameters using the pyro.contrib.tracking library.

## Dynamic process

To start, consider this simple motion model:

$$X_{k+1} = FX_k + \mathbf{W}_k$$
$$\mathbf{Z}_k = HX_k + \mathbf{V}_k$$

where $k$ is the timestep index, $X_k$ is the state, $\mathbf{Z}_k$ is the observed value at timestep $k$, and $\mathbf{W}_k$ and $\mathbf{V}_k$ are independent noise processes (i.e. $\mathbb{E}[\mathbf{W}_k \mathbf{V}_j^T] = 0$ for all $j, k$) which we'll approximate as Gaussians. Note that the state transitions are linear.

## Kalman update

At each time step, we perform a prediction for the mean and covariance:

$$\hat{X}_k = F\hat{X}_{k-1}$$
$$\hat{P}_k = FP_{k-1}F^T + Q$$

and a correction for the measurement:

$$K_k = \hat{P}_k H^T(H\hat{P}_k H^T + R)^{-1}$$
$$X_k = \hat{X}_k + K_k(z_k - H\hat{X}_k)$$
$$P_k = (I-K_k H)\hat{P}_k$$

where $X$ is the state estimate, $P$ is the covariance matrix, $K$ is the Kalman gain, and $Q$ and $R$ are the process and measurement noise covariance matrices. For an in-depth derivation, see [2].

## Nonlinear estimation: extended Kalman filter

What if our system is non-linear, e.g. in GPS navigation? Consider the following non-linear system:

$$X_{k+1} = \mathbf{f}(X_k) + \mathbf{W}_k$$
$$\mathbf{Z}_k = \mathbf{h}(X_k) + \mathbf{V}_k$$

Notice that $\mathbf{f}$ and $\mathbf{h}$ are now (smooth) non-linear functions. The extended Kalman filter (EKF) attacks this problem by using a local linearization of the Kalman filter via a Taylor series expansion,

$$\mathbf{f}(X_k, k) \approx \mathbf{f}(x_k^R, k) + \mathbf{H}_k(X_k - x_k^R) + \cdots$$

where $\mathbf{H}_k$ is the Jacobian matrix at time $k$, $x_k^R$ is the previous optimal estimate, and we ignore the higher order terms. At each time step, we compute a Jacobian conditioned on the previous predictions (this computation is handled by Pyro under the hood), and use the result to perform a prediction and update. Omitting the derivations, the modified predictions are now:

$$\hat{X}_k \approx \mathbf{f}(X_{k-1}^R)$$
$$\hat{P}_k = \mathbf{H}_\mathbf{f}(X_{k-1})P_{k-1}\mathbf{H}_\mathbf{f}^T(X_{k-1}) + Q$$

and the updates are now:

$$X_k \approx \hat{X}_k + K_k\big(z_k - \mathbf{h}(\hat{X}_k)\big)$$
$$K_k = \hat{P}_k \mathbf{H}_\mathbf{h}^T(\hat{X}_k) \Big(\mathbf{H}_\mathbf{h}(\hat{X}_k)\hat{P}_k \mathbf{H}_\mathbf{h}^T(\hat{X}_k) + R_k\Big)^{-1}$$
$$P_k = \big(I - K_k \mathbf{H}_\mathbf{h}(\hat{X}_k)\big)\hat{P}_k$$

In Pyro, all we need to do is create an EKFState object and use its predict and update methods. Pyro will do exact inference to compute the innovations and we will use SVI to learn a MAP estimate of the position and measurement covariances. As an example, let's look at an object moving at near-constant velocity in 2-D in a discrete time space over 100 time steps.
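To make the predict/correct equations concrete before turning to Pyro, here is a plain-NumPy sketch of a single linear Kalman step, with names matching the equations above (an illustration of my own, not part of the tutorial):

```python
import numpy as np

def kalman_step(x, P, z, F, H, Q, R):
    """One linear predict/correct cycle in the notation used above."""
    # predict
    x_hat = F @ x
    P_hat = F @ P @ F.T + Q
    # correct
    K = P_hat @ H.T @ np.linalg.inv(H @ P_hat @ H.T + R)
    x_new = x_hat + K @ (z - H @ x_hat)
    P_new = (np.eye(len(x)) - K @ H) @ P_hat
    return x_new, P_new
```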
[ ]:

import os
import math

import torch
import pyro
import pyro.distributions as dist
from pyro.infer.autoguide import AutoDelta
from pyro.optim import Adam
from pyro.infer import SVI, Trace_ELBO, config_enumerate
from pyro.contrib.tracking.extended_kalman_filter import EKFState
from pyro.contrib.tracking.distributions import EKFDistribution
from pyro.contrib.tracking.dynamic_models import NcvContinuous
from pyro.contrib.tracking.measurements import PositionMeasurement

smoke_test = ('CI' in os.environ)
assert pyro.__version__.startswith('1.8.4')

[ ]:

dt = 1e-2
num_frames = 10
dim = 4

# Continuous model
ncv = NcvContinuous(dim, 2.0)

# Truth trajectory
xs_truth = torch.zeros(num_frames, dim)
# initial direction
theta0_truth = 0.0
# initial state
xs_truth[0, :] = torch.tensor([0.0, 0.0,
                               math.cos(theta0_truth), math.sin(theta0_truth)])
for frame_num in range(1, num_frames):
    # sample independent process noise
    dx = pyro.sample('process_noise_{}'.format(frame_num),
                     ncv.process_noise_dist(dt))
    xs_truth[frame_num, :] = ncv(xs_truth[frame_num - 1, :], dt=dt) + dx

Next, let's specify the measurements. Notice that we only measure the positions of the particle.

[ ]:

# Measurements
measurements = []
mean = torch.zeros(2)
# no correlations
cov = 1e-5 * torch.eye(2)

# sample independent measurement noise
dzs = pyro.sample('dzs', dist.MultivariateNormal(mean, cov).expand((num_frames,)))
# compute measurement means
zs = xs_truth[:, :2] + dzs

We'll use a Delta autoguide to learn MAP estimates of the position and measurement covariances. The EKFDistribution computes the joint log density of all of the EKF states given a tensor of sequential measurements.

[ ]:

def model(data):
    # a HalfNormal can be used here as well
    R = pyro.sample('pv_cov', dist.HalfCauchy(2e-6)) * torch.eye(4)
    Q = pyro.sample('measurement_cov', dist.HalfCauchy(1e-6)) * torch.eye(2)
    # observe all measurements jointly through the EKF distribution
    pyro.sample('track', EKFDistribution(xs_truth[0], R, ncv, Q,
                                         time_steps=num_frames),
                obs=data)

guide = AutoDelta(model)  # MAP estimation

[ ]:

optim = pyro.optim.Adam({'lr': 2e-2})
svi = SVI(model, guide, optim, loss=Trace_ELBO(retain_graph=True))

pyro.set_rng_seed(0)
pyro.clear_param_store()

for i in range(250 if not smoke_test else 2):
    loss = svi.step(zs)
    if not i % 10:
        print('loss: ', loss)

[ ]:

# retrieve states for visualization
R = guide()['pv_cov'] * torch.eye(4)
Q = guide()['measurement_cov'] * torch.eye(2)
ekf_dist = EKFDistribution(xs_truth[0], R, ncv, Q, time_steps=num_frames)
states = ekf_dist.filter_states(zs)

## References

[1] Kalman, R. E. A New Approach to Linear Filtering and Prediction Problems. 1960.

[2] Welch, Greg, and Bishop, Gary. An Introduction to the Kalman Filter. 2006.
http://openstudy.com/updates/513041fee4b0ea4eb1416131
1. Jaweria: $\int(5x^{3}-4x+1)\,dx$

2. Jaweria: Evaluate the following indefinite integral.

3. campbell_st: Well, do you know the basics of integration? $\int x^n \,dx = \frac{1}{n + 1} x^{n + 1} + C$, where $C$ is a constant. You need to apply this rule to each term in your indefinite integral.

4. arjont: This is the same as taking the integral of each term in the expression.

5. Jaweria: OK — do you mind going into a little more detail?

6. arjont: Apply the exponent rule campbell_st wrote to each term.

7. campbell_st: So you can have $\int ax^n \,dx = a \times \frac{1}{n + 1} x^{n + 1} + C$, so you can do your question in 3 parts: $\int 5x^3 \,dx - \int 4x \,dx + \int 1 \,dx$.
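Applying that power rule to each part gives $\frac{5}{4}x^4 - 2x^2 + x + C$, which the thread never states explicitly. A quick check with SymPy (assuming it is installed):

```python
# SymPy returns the antiderivative without the constant of integration.
import sympy as sp

x = sp.symbols('x')
print(sp.integrate(5 * x**3 - 4 * x + 1, x))   # 5*x**4/4 - 2*x**2 + x  (+ C)
```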
https://archive.lib.msu.edu/crcmath/math/math/l/l060.htm
## Lamé Curve

A curve with Cartesian equation

$$\left(\frac{x}{a}\right)^n + \left(\frac{y}{b}\right)^n = 1,$$

first discussed in 1818 by Lamé. If $n$ is rational, then the curve is algebraic. However, for irrational $n$, the curve is transcendental. For Even Integers $n$, the curve becomes closer to a rectangle as $n$ increases. For Odd Integer values of $n$, the curve looks like the Even case in the Positive quadrant but goes to infinity in both the second and fourth quadrants (MacTutor Archive).

See also: Astroid, Evolute of an Ellipse, Superellipse, Witch of Agnesi.

MacTutor History of Mathematics Archive. "Lamé Curves." http://www-groups.dcs.st-and.ac.uk/~history/Curves/Lame.html.
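Points on the curve are easy to generate with the standard superellipse parametrization $x = a\,\mathrm{sgn}(\cos t)\lvert\cos t\rvert^{2/n}$, $y = b\,\mathrm{sgn}(\sin t)\lvert\sin t\rvert^{2/n}$ (a small sketch of my own; the helper name is hypothetical):

```python
# Sample a point on the Lame curve |x/a|^n + |y/b|^n = 1.
import math

def lame_point(t, a=1.0, b=1.0, n=4.0):
    c, s = math.cos(t), math.sin(t)
    x = a * math.copysign(abs(c) ** (2.0 / n), c)
    y = b * math.copysign(abs(s) ** (2.0 / n), s)
    return x, y

x, y = lame_point(0.7, n=4.0)
print(abs(x) ** 4 + abs(y) ** 4)   # ~1.0: the point lies on the n = 4 curve
```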
http://brollopsfotografering.com/e88m6v/tikhonov-regularization-python-56cd53
Tikhonov regularization — known in statistics and machine learning as ridge regression — stabilizes linear least squares problems by penalizing the size of the solution. Using a Lagrange multiplier, the ridge estimate can be written as the minimizer

$$\hat{\theta}_{ridge} = \underset{\theta \in \mathbb{R}^n}{\operatorname{argmin}} \; \sum_{i=1}^m (y_i - \mathbf{x}_i^T \theta)^2 + \alpha \sum_{j=1}^n \theta_j^2,$$

that is, linear least squares with L2 regularization. The scalar $\alpha > 0$ (written $\mu$ or $\lambda$ in parts of the literature) is known as the regularization parameter: it shrinks the coefficients toward zero, improves the conditioning of the problem, and reduces the variance of the estimates. In the general Tikhonov formulation the penalty is $\|\Gamma\theta\|_2^2$ for some matrix $\Gamma$, so the effect of regularization may be varied via the scale and structure of $\Gamma$; with a smoothing choice of $\Gamma$, a consequence is that the solution will tend to have smoother transitions. The same idea carries over to signals on graphs, where graph Tikhonov regularization uses a smooth (differentiable) quadratic regularizer. One classical recipe for choosing the parameter is the NCP criterion: choose $\alpha$ as the minimizer of $d(\alpha) = \|c(r_\alpha) - c_{\text{white}}\|_2$, the distance between the normalized cumulative periodogram of the residual and that of white noise.

In scikit-learn this model is the Ridge class (also known as ridge regression or Tikhonov regularization). Its hyperparameter alpha — sometimes called "lambda" — controls the weighting of the penalty in the loss function; a default value of 1.0 fully weights the penalty, and a value of 0 excludes it. Alpha corresponds to 1 / (2C) in other linear models such as LogisticRegression or LinearSVC. Related estimators include RidgeCV (ridge regression with built-in cross-validation), kernel ridge regression (which combines ridge regression with the kernel trick), and Elastic Net, a regularization technique that combines Lasso and Ridge penalties. For scale: the best possible R² score is 1.0, and a constant model that always predicts the expected value of y, disregarding the input features, would get an R² score of 0.0.

A question that comes up in practice: is there a way to add Tikhonov regularization to the non-negative least squares (NNLS) implementation in scipy (scipy.optimize.nnls)? There is, by augmenting the system, as in the sketch below.
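The standard trick (a sketch of my own; the helper name is hypothetical): minimizing $\|Ax - b\|_2^2 + \alpha\|x\|_2^2$ over $x \ge 0$ is an ordinary NNLS problem on the stacked matrix $\begin{bmatrix} A \\ \sqrt{\alpha}\, I \end{bmatrix}$ with right-hand side $\begin{bmatrix} b \\ 0 \end{bmatrix}$.

```python
# Tikhonov-regularized NNLS via stack augmentation.
import numpy as np
from scipy.optimize import nnls

def nnls_tikhonov(A, b, alpha):
    """Solve min ||A x - b||^2 + alpha * ||x||^2  subject to x >= 0."""
    n = A.shape[1]
    A_aug = np.vstack([A, np.sqrt(alpha) * np.eye(n)])   # [A; sqrt(alpha) I]
    b_aug = np.concatenate([b, np.zeros(n)])             # [b; 0]
    x, _ = nnls(A_aug, b_aug)
    return x
```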
As for the penalty itself: considering no bias parameter, the behavior of this type of regularization can be studied through the gradient of the regularized objective. The penalty $\alpha\|\theta\|_2^2$ adds $2\alpha\theta$ to the gradient, so every update shrinks the weights toward zero ("weight decay") while the data term pulls them toward the least squares fit. Because the penalty is quadratic, the problem keeps an analytic solution: any positive $\alpha$ makes $X^TX + \alpha I$ a full-rank matrix, and $\hat{\theta} = (X^TX + \alpha I)^{-1}X^Ty$. This is why Tikhonov regularization is a standard tool for solving ill-posed and non-unique inverse problems, from numerical inversion of the Laplace transform to image deblurring, where the regularized fit restores the image while preserving edge information. (For the connection to total least squares and stable methods of computation, see Golub, G. H., Hansen, P. C., and O'Leary, D. P., "Tikhonov Regularization and Total Least Squares," SIAM J. Matrix Anal. Appl., 1999.)

In scikit-learn terms, Ridge solves a regression model where the loss function is the linear least squares function and regularization is given by the l2-norm. The 'cholesky' solver uses the standard scipy.linalg.solve function to obtain a closed-form solution, while iterative solvers such as 'sag' accept a maximum number of iterations (as does the conjugate gradient solver) and are often faster when both n_samples and n_features are large. The estimator has built-in support for multi-variate regression (i.e., when y is a 2d-array of shape (n_samples, n_targets)), so no MultiOutputRegressor wrapper is needed. On the housing dataset used below, a Ridge model evaluated with repeated cross-validation achieves a mean absolute error (MAE) of about 3.382; your specific results may vary given the stochastic nature of the learning algorithm. A final model can then be fitted on all available data and used by calling the predict() function, passing in a new row of data.

How do we know that the default hyperparameter alpha=1.0 is appropriate for our dataset? In general we don't; the parameter should be chosen from the data. The classical rule from the inverse problems literature is the discrepancy principle: choose $\alpha = \alpha_{DP}$ such that $\|Ax_\alpha - b\|_2 = \tau_{DP}\|e\|_2$, i.e. so that the residual norm matches the noise level. The machine learning analogue is a grid search over alpha with cross-validation.
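The closed form above is short enough to write out directly (a minimal sketch of my own; prefer scikit-learn's Ridge for real use):

```python
# Closed-form Tikhonov/ridge fit: theta = (X^T X + alpha * I)^(-1) X^T y.
import numpy as np

def ridge_fit(X, y, alpha):
    n_features = X.shape[1]
    return np.linalg.solve(X.T @ X + alpha * np.eye(n_features), X.T @ y)

rng = np.random.default_rng(0)
X = rng.normal(size=(50, 3))
y = X @ np.array([1.0, -2.0, 0.5]) + 0.1 * rng.normal(size=50)
print(ridge_fit(X, y, alpha=0.1))   # close to [1, -2, 0.5]
```

The original post applies the same stacked-system idea with a general regularization matrix L, sweeping the weight λ: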
classifiers = [100, 50, 15, 5, 1, 0.1]
r_squared = []
# Tfwd, n, A, L, x and Tt come from the post's earlier setup (the noisy data,
# problem size, forward operator, regularization matrix, grid and true
# solution); rsqr is its R^2 helper.
C = np.concatenate([Tfwd, np.zeros(n)])
fig, ax = plt.subplots(2, 1, figsize=(7.5, 10))
for tikhonov in classifiers:
    # stack the scaled regularization matrix under the forward operator
    B = np.concatenate([A, tikhonov * L])
    T_lstqs = np.linalg.lstsq(B, C, rcond=None)
    ax[0].scatter(x, T_lstqs[0], label="$\lambda=$" + str(tikhonov))
    r_squared.append(rsqr(Tt, T_lstqs[0]))
    ax[1].scatter(tikhonov, r_squared[-1], label="$\lambda=$" + str(tikhonov))

This loop sweeps the regularization weight $\lambda$, solves the stacked least squares system for each value, and plots both the reconstructions and their $R^2$ scores against the true solution. Another approach would be to test values between 0.0 and 1.0 with a grid separation of 0.01 and keep the best-scoring one. (Section 2 of the total least squares paper cited above discusses regularization by the TSVD and Tikhonov methods and introduces a new regularization matrix.) Whatever the setting, the role of the quadratic penalty is the same: a term of the form $\alpha\|f\|_2^2$ in a least squares problem — non-negative or otherwise — limits the degrees of freedom of the fit, shrinking coefficients toward zero and minimizing their impact on the model so as to avoid the risk of overfitting.
In scikit-learn, solver='auto' chooses the solver automatically based on the type of data, and a grid search discovers the hyperparameters that give the best results under cross-validation. Penalized linear models of this family have become popular partly because the quadratic penalty is cheap to compute and easy to differentiate, which makes the same idea usable in models trained by stochastic gradient descent, such as neural networks, and in generalized linear models (GLMs) with advanced regularization options.

On the inverse problems side, the parameter-choice rules can be stated compactly. The discrepancy principle was given above. Generalized cross-validation (GCV) instead chooses $\alpha$ as the minimizer of the GCV function, one common form of which is

$$G(\alpha) = \frac{\|Ax_\alpha - b\|_2^2}{\big(m - \sum_{i=1}^n f_i\big)^2},$$

where the $f_i$ are the Tikhonov filter factors. The NCP criterion, stated earlier, compares the residual's periodogram with that of white noise. The truncation level in a truncated SVD plays a role similar to these choices of $\alpha$ in the Tikhonov scheme. A strictly positive regularization term always makes the normal equations nonsingular, and retaining only those features genuinely needed to fit the data is exactly the behavior the penalty encourages.
The alpha hyperparameter, then, controls the weighting of the $L_2$ penalty against the data fit in the minimization problem

$$\min_f \; \|g - Af\|_2^2 + \alpha^2 \|f\|_2^2,$$

and a model which uses this L2 penalty is called ridge regression. To reproduce the worked example you will need the housing dataset, which the tutorial downloads automatically; if you wish to standardize the inputs, use sklearn.preprocessing.StandardScaler before calling fit. The same smooth, differentiable quadratic regularizer also carries over to kernelized ridge regression.
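A condensed sketch of the grid-search workflow with RidgeCV on the housing data follows. The dataset URL is the one commonly used for this housing CSV; treat it, and the exact numbers, as assumptions for illustration.

```python
# Tune Ridge's alpha by repeated k-fold cross-validation.
import numpy as np
from pandas import read_csv
from sklearn.linear_model import RidgeCV
from sklearn.model_selection import RepeatedKFold

# assumed location of the housing CSV (13 numerical inputs + 1 target)
url = 'https://raw.githubusercontent.com/jbrownlee/Datasets/master/housing.csv'
data = read_csv(url, header=None).values
X, y = data[:, :-1], data[:, -1]

cv = RepeatedKFold(n_splits=10, n_repeats=3, random_state=1)
model = RidgeCV(alphas=np.arange(0.01, 1.0, 0.01), cv=cv,
                scoring='neg_mean_absolute_error')
model.fit(X, y)
print('alpha: %.2f' % model.alpha_)   # best-scoring penalty weight
print(model.predict(X[:1]))           # prediction for one row of data
```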
https://www.acemap.info/field/2015690902
# Phase (waves)

In physics and mathematics, the phase of a periodic function $F$ of some real variable $t$ is the relative value of that variable within the span of each full period. The phase is typically expressed as an angle $\phi(t)$, in such a scale that it varies by one full turn as the variable $t$ goes through each period (and $F(t)$ goes through each complete cycle). Thus, if the phase is expressed in degrees, it will increase by 360° as $t$ increases by one period. If it is expressed in radians, the same increase in $t$ will increase the phase by $2\pi$. This convention is especially appropriate for a sinusoidal function, since its value at any argument $t$ then can be expressed as the sine of the phase $\phi(t)$, multiplied by some factor (the amplitude of the sinusoid). (The cosine may be used instead of sine, depending on where one considers each period to start.)

Usually, whole turns are ignored when expressing the phase, so that $\phi(t)$ is also a periodic function, with the same period as $F$, that repeatedly scans the same range of angles as $t$ goes through each period. Then, $F$ is said to be "at the same phase" at two argument values $t_1$ and $t_2$ (that is, $\phi(t_1) = \phi(t_2)$) if the difference between them is a whole number of periods.

The numeric value of the phase $\phi(t)$ depends on the arbitrary choice of the start of each period, and on the interval of angles that each period is to be mapped to.

The term "phase" is also used when comparing a periodic function $F$ with a shifted version $G$ of it. If the shift in $t$ is expressed as a fraction of the period, and then scaled to an angle $\varphi$ spanning a whole turn, one gets the phase shift, phase offset, or phase difference of $G$ relative to $F$. If $F$ is a "canonical" function for a class of signals, like $\sin(t)$ is for all sinusoidal signals, then $\varphi$ is called the initial phase of $G$.

Let $F$ be a periodic signal (that is, a function of one real variable), and $T$ be its period (that is, the smallest positive real number such that $F(t + T) = F(t)$ for all $t$). Then the phase of $F$ at any argument $t$ is

$$\phi(t) = 2\pi \left[\!\left[ \frac{t - t_0}{T} \right]\!\right].$$

Here $[\![\cdot]\!]$ denotes the fractional part of a real number, discarding its integer part; that is, $[\![x]\!] = x - \lfloor x \rfloor$; and $t_0$ is an arbitrary "origin" value of the argument, which one considers to be the beginning of a cycle.

This concept can be visualized by imagining a clock with a hand that turns at constant speed, making a full turn every $T$ seconds, and is pointing straight up at time $t_0$.
The phase ϕ ( t ) {displaystyle phi (t)} is then the angle from the 12:00 position to the current position of the hand, at time t {displaystyle t} , measured clockwise. Child Topic No Parent Topic
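The clock analogy translates directly into code. Below is a minimal sketch (my own illustration, not part of the original page; the function name and test values are made up) computing the phase from the fractional-part definition:

```python
import math

def phase(t, T, t0=0.0):
    """Phase of a T-periodic signal at time t, as an angle in [0, 2*pi).

    Implements phi(t) = 2*pi * [[(t - t0) / T]], where [[.]] is the
    fractional part, so whole turns (whole periods) are discarded.
    """
    frac = ((t - t0) / T) % 1.0
    return 2 * math.pi * frac

# Two arguments a whole number of periods apart are "at the same phase":
T = 0.5
assert math.isclose(phase(1.3, T), phase(1.3 + 3 * T, T))
```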
2023-03-27 20:00:32
{"extraction_info": {"found_math": false, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.9081506729125977, "perplexity": 1465.2587293099514}, "config": {"markdown_headings": true, "markdown_code": false, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": false}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2023-14/segments/1679296948684.19/warc/CC-MAIN-20230327185741-20230327215741-00248.warc.gz"}
https://en.neurochispas.com/algebra/applications-of-logarithms/
# Applications of Logarithms

Logarithms have a large number of applications in real life. Logarithms are especially used to create measurement scales that are more manageable. Some examples of the applications of logarithms include the Richter scale for measuring earthquakes, the decibel scale for measuring sound, orders of magnitude, and applications in data analysis. Here, we will look at these applications in more detail.

## Orders of magnitude

When we express something like "a 6 digit salary", we are describing numbers by how many powers of 10 they have (they are in the tens, hundreds, thousands, etc.). Adding a digit means multiplying by 10. For example, 1 has one digit and 100,000 has six digits. Logarithms count the number of multiplications by 10 needed to obtain a number. Therefore, starting with 1 (a single digit), multiplying by 10 five times ($latex {{10}^5}$) gives 100,000, a six-digit number.

Logarithms help us represent numbers using more manageable scales. It is easier to talk about something that has 6 digits than to mention that we have a hundred thousand.

In computers, where everything is counted with bits (1 or 0), each additional bit has a doubling effect (×2 rather than ×10). Thus, if we go from 8 bits to 16 bits, this is 8 orders of magnitude (in base 2), or $latex {{2}^8}=256$ times larger. Changing from 16 bits to 32 bits represents a change of 16 orders of magnitude, or $latex {{2}^{16}}=65536$ times larger.

## Richter scale

The Richter scale is a base-10 logarithmic scale. This scale defines the magnitude of an earthquake as the logarithm of the ratio of the amplitude of seismic waves to an arbitrary standard amplitude:

$latex M = \log (\frac{A}{S})$

where A is the amplitude of the earthquake measured with a seismometer approximately 100 km from the epicenter of the earthquake, and S is the standard amplitude of an earthquake, which is defined as approximately 1 micrometer. Since the Richter scale is a base-10 logarithmic scale, each increment of one on the Richter scale indicates an intensity ten times stronger than the previous number on the scale.

## Decibel scale

Sound carries energy, and its intensity is defined as:

$latex I =\frac{P}{A}$

where P is the power (the rate at which energy flows) and A is the area, perpendicular to the direction in which the sound wave travels, through which that energy flows.

Sound intensity is perceived as loudness, which is measured on a logarithmic scale. The sound intensity level is therefore defined as:

$latex \beta = (10dB) \log (\frac{I}{I_{0}})$

In this definition, dB represents decibels, which are equal to one-tenth of a bel (B). I is the intensity of the sound and $latex I_{0}$ is the standard intensity. With decibels, we can represent sound intensities that vary greatly in magnitude on the same scale.

## Applications in data analysis

Logarithms are widely used in data analysis, which in turn is used in data science and machine learning. The logit plays a very important role in logistic regression: every probability can be easily converted to a logit. Logarithmic transformations are also important for making patterns in data easier to see. Using logarithmic transformations, exponential relationships become linear and therefore easier to read and understand.

Since logarithms can model a wide variety of phenomena, they are extremely useful in data science. Much of data science is modeling real-life situations, so logarithmic scales are vital. Google gives every page on the web a score (PageRank), which is roughly a measure of the website's authority and the importance of the page. This is a logarithmic scale, which means that PageRank counts the number of digits in the score. For example, a site with a PageRank of 2 (2 digits) is 10 times more popular than a site with a PageRank of 1. CNN's PageRank is 9, so there is a difference of 4 orders of magnitude ($latex {{10}^4}=10000$) compared to a page with a PageRank of 5.
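Both scales above are one-liners in code. Here is a small sketch of my own (not from the article; the reference values S ≈ 1 micrometer for the Richter scale and I₀ = 10⁻¹² W/m² for the threshold of hearing are the conventional standards, supplied as assumptions):

```python
import math

def richter_magnitude(amplitude_m, standard_amplitude_m=1e-6):
    """M = log10(A / S), with S the ~1 micrometer standard amplitude."""
    return math.log10(amplitude_m / standard_amplitude_m)

def sound_level_db(intensity, reference_intensity=1e-12):
    """beta = 10 dB * log10(I / I0), with I0 the threshold of hearing."""
    return 10 * math.log10(intensity / reference_intensity)

# Each +1 on the Richter scale is a 10x larger amplitude:
print(richter_magnitude(1.0), richter_magnitude(10.0))   # 6.0 7.0
# A 100x jump in intensity is +20 dB:
print(sound_level_db(1e-4) - sound_level_db(1e-6))        # 20.0
```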
2023-03-21 01:21:16
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.7049870491027832, "perplexity": 488.7648039075978}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2023-14/segments/1679296943589.10/warc/CC-MAIN-20230321002050-20230321032050-00607.warc.gz"}
http://mathhelpforum.com/trigonometry/119048-find-period.html
# Math Help - Find the period

1. ## Find the period

Find the period of the function:
$\forall x\in \mathbb{R} : f(x+1)=\frac{1}{1-f(x)}$

2. $f(x+1)=\frac{1}{1-f(x)}$

$f(x+2)=\frac{1}{1-f(x+1)} = 1 - \frac{1}{f(x)}$

$f(x+3)=\frac{1}{1-f(x+2)} = \frac{1}{1 - \left(1-\frac{1}{f(x)}\right)} = f(x)$

so the period is 3
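As a sanity check, the recurrence can be iterated numerically. A quick sketch of my own (the starting value is arbitrary, avoiding the singular values 0 and 1):

```python
def step(y):
    # f(x+1) = 1 / (1 - f(x))
    return 1 / (1 - y)

y = 0.3
orbit = [y]
for _ in range(6):
    orbit.append(step(orbit[-1]))

# Period 3: f(x), f(x+3), f(x+6) all agree (up to rounding)
print(orbit[0], orbit[3], orbit[6])  # 0.3 0.3 0.3
```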
2016-07-25 12:21:00
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 4, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.9071727395057678, "perplexity": 4538.2186849384825}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2016-30/segments/1469257824226.79/warc/CC-MAIN-20160723071024-00178-ip-10-185-27-174.ec2.internal.warc.gz"}
https://math.stackexchange.com/questions/2573821/measurability-of-diagonal-implies-measurability-of-individual-points/2574102
Measurability of diagonal implies measurability of individual points

Kingman (1973) writes on Page 11:

Suppose that $S$ is a measurable space. ... by making the weak assumption that the diagonal $D = \{(x, y); x = y\}$ is measurable in the product space $S\times S$. This automatically implies that every singleton set $\{x\}$ in $S$ is measurable.

Formally, why is the measurability of individual points in $S$ automatically implied by the measurability of the diagonal in $S\times S$?

• Fubini-Tonelli would give that $\{ x \}$ is measurable for a.e. $x$... – Daniel Schepler Dec 19 '17 at 23:55
• @DanielSchepler I guess Fubini-Tonelli might be an overkill here... – p-value Dec 20 '17 at 4:46
• @DanielSchepler, if $\mu$ were a $\sigma$-finite measure on $S$, then Fubini-Tonelli would imply $\{x\}$ is $\mu$-measurable (i.e. in a possibly larger $\sigma$-algebra than the original). – fourierwho Dec 20 '17 at 6:01

For definiteness, let $(S,\mathcal{A})$ be a measurable space and let $\mathcal{A} \otimes \mathcal{A}$ be the corresponding product $\sigma$-algebra on $S \times S$. Recall that if $E \in \mathcal{A} \otimes \mathcal{A}$ and $y \in S$, then the set $E_{y} \subseteq S$ defined by $$E_{y} = \{x \in S \, \mid \, (x,y) \in E\}$$ satisfies $E_{y} \in \mathcal{A}$. Indeed, if we let $\mathcal{G}$ denote the family $$\mathcal{G} = \{E \subseteq S \times S \, \mid \, E_{y} \in \mathcal{A}\},$$ then $\mathcal{G}$ is a $\sigma$-algebra on $S \times S$ containing the collection of measurable rectangles $\{A \times B \, \mid \, A,B \in \mathcal{A}\}$. Therefore, $\mathcal{A} \otimes \mathcal{A} \subseteq \mathcal{G}$, proving the claim.

Now if $\Delta \subseteq S \times S$ is the diagonal, then $\Delta_{y} = \{y\}$. Therefore, $$\forall y \in S \quad \{y\} = \Delta_{y} \in \mathcal{A}.$$
2021-04-10 19:52:09
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 1, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.985157310962677, "perplexity": 202.87158483232594}, "config": {"markdown_headings": false, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.3, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2021-17/segments/1618038057476.6/warc/CC-MAIN-20210410181215-20210410211215-00561.warc.gz"}
https://dml.cz/handle/10338.dmlcz/107674
# Article

Keywords: difference equation; asymptotic behaviour

Summary: Asymptotic properties of the solutions of the second order nonlinear difference equation (with perturbed arguments) of the form $\Delta ^2 x_n = a_n \varphi (x_{n+k})$ are studied.

References:
[1] Agarwal, R. P.: Difference Equations and Inequalities. Marcel Dekker, New York, 1992. MR 1155840 | Zbl 0952.39001
[2] Drozdowicz, A., Popenda, J.: Asymptotic behaviour of the solutions of the second order difference equation. Proc. Amer. Math. Soc. 99(1) (1987), 135-140. MR 0866443
[3] Drozdowicz, A., Popenda, J.: Asymptotic behaviour of solutions of difference equations of second order. J. of Com. and Appl. Math. 47 (1993), 141-149. MR 1237310
[4] Hooker, J. W., Patula, W. T.: A second order nonlinear difference equation: oscillation and asymptotic behaviour. J. Math. Anal. Appl. 91 (1983), 9-29. MR 0688528
[5] Migda, J.: Asymptotic properties of solutions of higher order difference equations. (in preparation). Zbl 0702.39002
[6] Musielak, R., Popenda, J.: The periodic solutions of the second order nonlinear difference equation. Publ. Math. 32 (1988), 49-56. MR 0939768
[7] Szafrański, Z., Szmanda, B.: Oscillatory properties of solutions of some difference systems. Radovi Mat. 6 (1990), 205-214. MR 1096703
2017-12-15 08:31:13
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.7533067464828491, "perplexity": 2085.323905696841}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 5, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2017-51/segments/1512948567785.59/warc/CC-MAIN-20171215075536-20171215095536-00079.warc.gz"}
http://sangho.ai/paper/review/Youtube/
# deep neural networks for youtube recommendations

Published:

This is a summary of, and what I learned from, the paper "Deep neural networks for YouTube recommendations".

### 0. Introduction

As YouTube recommendations are responsible for helping more than a billion users discover personalized content, this paper covers how deep learning has affected the YouTube video recommendation system. The problem is challenging from 3 major perspectives: Scale, from the massive user base and corpus; Freshness, since many hours of video are uploaded per second; and Noise, from sparsity and a variety of unobservable external factors. In the following sections, the paper deals with the system overview and the model description, with experimental results.

### 1. System Overview

The system is comprised of 2 neural networks: one for candidate generation, one for ranking. The candidate generation network takes the user's activity history as input and provides broad personalization. The ranking network takes a rich set of features describing the video and user as input, then presents a few "best" recommendations in a list. This 2-stage approach has some advantages: one is producing a small number of personalized recommendations appearing on the device from a very large corpus of videos, and the second is enabling the blending of candidates generated from different sources.

For iterative improvements, the authors used offline metrics like precision, recall and ranking loss. For the final determination of model effectiveness, they used live A/B testing to measure subtle changes in CTR, watch time, etc. Let's start with the candidate generation network.

### 2. Candidate generation

With the candidate generation network, the YouTube corpus is winnowed down to hundreds of user-relevant videos.

### 2-1) Recommendation as Classification

Recommendation is posed as extreme multiclass classification, which gives the formula below for the probability of watching video class $i$ at time $t$ ($w_t$) given user $U$ and context $C$, where $V$ is the corpus:

$P(w_t=i|U,C)=\frac{e^{v_i u}}{\sum_{j\in V} e^{v_j u}}$

Here, $u\in\mathbb{R}^n$ represents a high-dimensional embedding of the (user, context) pair and $v_j\in\mathbb{R}^n$ represents the embedding of each candidate video. The task of the DNN is to learn $u$ as a function of the user's history and context. To train efficiently with millions of classes, the authors used "candidate sampling", which samples negative classes and corrects this sampling with importance weighting. At serving time, the system must choose the top N videos to recommend to the user, so millions of items must be scored within tens of milliseconds.

### 2-2) Model architecture

High-dimensional embeddings for each video in a fixed vocabulary are fed into the neural network. A user's watch history is represented by a variable-length sequence of sparse video IDs, which is mapped to a dense vector representation via these high-dimensional embeddings. The average of the embeddings performed best when used as input to the network; the embeddings are learned through normal gradient descent backpropagation updates. As in the figure above, features are concatenated into a wide first layer, followed by several layers of ReLUs. As a key advantage of DNNs, arbitrary continuous and categorical features can be easily added to the model. Similarly to watch history, search history is tokenized, embedded, and averaged to represent a dense search history. In the case of new users, demographic features are important for reasonable recommendations, so the user's geographic region and device are concatenated (the purple part of the figure above). And as the yellow part of the figure shows, simple features like gender, logged-in state and age are input directly into the network.

As machine learning systems are trained to predict the future from historical examples, they exhibit a bias towards past videos. To correct this bias, the age of the training example is fed as a feature during training; at serving time, it is set to zero.

Recommendation often solves a surrogate problem and transfers the result to a particular context. For example, predicting ratings of movies can lead to effective movie recommendations. Experiments revealed the following:

• Episodic series are usually watched sequentially
• Users often start discovering with popular videos, then focus on smaller niches.

So the authors found much better performance predicting the user's next watch, rather than predicting a randomly held-out watch.

### 2-3) Label, Context selection

Many systems choose the labels and context by holding out a random item and predicting it from the user's history. But this leaks future information and ignores the asymmetric consumption patterns described above. So instead, roll back a user's history by choosing a random watch and only inputting the actions the user took before the held-out label watch. In experiments, adding features and depth improves precision.

### 3-1) Network overview

The ranking layer specializes and calibrates predictions for the particular user interface, such as the thumbnail. The ranking network scores only a few hundred videos, while millions are scored in candidate generation, so it has access to many more features describing the video and the user-video relationship. The ranking network is a DNN similar to candidate generation that scores each video impression using logistic regression. The final ranking is constantly adjusted based on A/B testing.

### 3-2) Feature engineering

There are several classifications of features:

1. Continuous / Categorical
2. Univalent (e.g. a video ID) / Multivalent (e.g. a list of video IDs the user watched)
3. Impression (describe properties of the item, computed for each item scored) / Query (describe properties of the user/context, computed once)

And there are 3 important points in feature engineering:

1. Features describing past user actions on related items are powerful. For example, a feature like "When was the last time the user watched a video on this topic?" describes the user's actions well.
2. Propagating information from candidate generation into ranking in the form of features is important, e.g. "which sources nominated this video candidate?"
3. Features describing the frequency of past impressions are important for introducing "churn" in recommendations. For example, if a user was recommended a video but didn't watch it, then the model will degrade the priority of this video.

Also, as in candidate generation, embeddings are used to map categorical features to dense representations. Each unique ID space, a so-called "vocabulary", has separate learned embeddings, and these vocabularies are simple look-up tables. When a vocabulary has very large cardinality (like video IDs), it just includes the top N entries after sorting. Categorical features in the same ID space (such as the video ID of the impression, the last video ID watched by the user, etc.) share underlying embeddings. Even though each feature is fed separately so that specialized representations can be learned, sharing can speed up training and reduce memory requirements.

Proper normalization of continuous features was critical for convergence. A feature $x$ with distribution $f$ is transformed to $\widetilde{x}$ by scaling so that it is distributed in $[0,1)$. Powers of this ($\widetilde{x}^2$) also improved offline accuracy.

### 3-3) Objective and experiments

The goal is to predict expected watch time given a training set of positive (the video impression was clicked) and negative examples. To predict this time, the authors used weighted logistic regression, in which positive impressions are weighted by watch time and negative impressions get unit weight. Then, the odds learned by logistic regression are $\frac{\sum T_i}{N-k}$ ($T_i$: watch time of the $i$th positive impression, $k$: number of positive impressions, $N$: number of training examples). If the number of positive examples is small, then the odds are approximately $E[T](1+P)$, where $E[T]$ is the expected watch time and $P$ is the click probability. And since $P$ is small, the odds are close to $E[T]$.

In experiments with different hidden layer configurations, increasing the width of hidden layers improves results, as does increasing the depth.

### 4. Conclusion

1. Using the age of the training example as an input feature removes the bias towards past videos.
2. Deep learning adapted to ranking has outperformed linear and tree-based methods.
3. Logistic regression was modified by weighting training examples: positive examples by watch time, negative examples by unit weight. With this approach, watch-time weighted ranking evaluation is better than predicting CTR directly.
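To make the candidate-generation softmax concrete, here is a toy sketch (my own illustration, not code from the paper; the corpus size, embedding dimension and variable names are invented):

```python
import numpy as np

rng = np.random.default_rng(0)
n_videos, dim = 50_000, 64

video_embeddings = rng.normal(size=(n_videos, dim))  # v_j, learned during training
u = rng.normal(size=dim)                             # user/context embedding from the DNN

# P(w_t = i | U, C) = exp(v_i . u) / sum_j exp(v_j . u)
scores = video_embeddings @ u
scores -= scores.max()                               # numerical stability
probs = np.exp(scores) / np.exp(scores).sum()

# At serving time only the top-N candidates are needed, which amounts to a
# nearest-neighbor search in dot-product space rather than a full softmax:
top_n = np.argsort(scores)[-10:][::-1]
print(top_n, probs[top_n])
```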
2022-10-05 05:07:22
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 2, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.4324204623699188, "perplexity": 3641.3496610522375}, "config": {"markdown_headings": true, "markdown_code": false, "boilerplate_config": {"ratio_threshold": 0.3, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2022-40/segments/1664030337537.25/warc/CC-MAIN-20221005042446-20221005072446-00328.warc.gz"}
https://zbmath.org/?q=an%3A1060.34018
New Kamenev type oscillation criteria for linear matrix Hamiltonian systems. (English) Zbl 1060.34018

Summary: Some new Kamenev-type criteria are obtained for the oscillation of the linear matrix Hamiltonian system $X'=A(t)X+ B(t)Y,\quad Y'= C(t)X- A^*(t)Y,$ under the hypothesis: $$A(t)$$, $$B(t)= B^*(t)> 0$$ and $$C(t)= C^*(t)$$ are real continuous $$n\times n$$-matrix functions on the interval $$[t_0,\infty)$$, $$t_0>-\infty$$. Our results are different from most known ones in the sense that they are given in the form of $$\limsup_{t\to\infty} g[\cdot]>\text{const.}$$, rather than in the form of $$\limsup_{t\to\infty} \lambda_1[\cdot]= \infty$$, where $$g$$ is a positive linear functional on the linear space of $$n\times n$$-matrices with real entries. Consequently, our results improve some previous results to some extent, which can be seen by the examples given at the end of this paper.

##### MSC:

34C10 Oscillation theory, zeros, disconjugacy and comparison theory for ordinary differential equations
37J99 Dynamical aspects of finite-dimensional Hamiltonian and Lagrangian systems

##### References:

[1] Butler, G.J.; Erbe, L.H., Oscillation results for second order differential systems, SIAM J. math. anal, 17, 19-29, (1986) · Zbl 0583.34027
[2] Butler, G.J.; Erbe, L.H., Oscillation results for self-adjoint differential systems, J. math. anal. appl, 115, 470-481, (1986) · Zbl 0588.34025
[3] Butler, G.J.; Erbe, L.H.; Mingarelli, A.B., Riccati techniques and variational principles in oscillation theory for linear systems, Trans. amer. math. soc, 303, 263-282, (1987) · Zbl 0648.34031
[4] Byers, R.; Harris, B.J.; Kwong, M.K., Weighted means and oscillation conditions for second order matrix differential equations, J. different. eqs, 61, 164-177, (1986) · Zbl 0609.34042
[5] Coppel, W.A., Disconjugacy, Lecture notes in mathematics, vol. 220, (1971), Springer Berlin · Zbl 0224.34003
[6] Erbe, L.H.; Kong, Q.; Ruan, S., Kamenev type theorems for second order matrix differential systems, Proc. amer. math. soc, 117, 957-962, (1993) · Zbl 0777.34024
[7] Hartman, P., Self-adjoint, non-oscillatory systems of ordinary second order, linear differential equations, Duke math. J, 24, 25-36, (1957) · Zbl 0077.08701
[8] Hinton, D.B.; Lewis, R.T., Oscillation theory for generalized second order differential equations, Rocky mountain J. math, 10, 751-766, (1980) · Zbl 0485.34021
[9] Sowjaya Kumari, I.; Umanaheswaram, S., Oscillation criteria for linear matrix Hamiltonian systems, J. different. eqs, 165, 174-198, (2000) · Zbl 0970.34025
[10] Kwong, M.K.; Kaper, H.G., Oscillation of two-dimensional linear second order differential systems, J. different. eqs, 56, 195-205, (1985) · Zbl 0571.34024
[11] Kwong, M.K.; Kaper, H.G.; Akiyama, K.; Mingarelli, A.B., Oscillation of linear second-order differential systems, Proc. amer. math. soc, 91, 85-91, (1984) · Zbl 0556.34026
[12] Meng, F.; Wang, J.; Zheng, Z., A note on kamenev type theorems for second order matrix differential systems, Proc. amer. math. soc, 126, 391-395, (1998) · Zbl 0891.34037
[13] Meng, F.W.; Sun, Y.G., Oscillation of linear Hamiltonian systems, Comp. math. appl, 44, 1467-1477, (2002) · Zbl 1047.34030
[14] Sun, Y.G., New oscillation criteria for linear matrix Hamiltonian systems, J. math. anal. appl, 279, 651-658, (2003) · Zbl 1032.34032
[15] Mingarelli, A.B., On a conjecture for oscillation of second order ordinary differential systems, Proc. amer. math.
soc, 82, 593-598, (1981) · Zbl 0487.34030 [16] Rickart, C.E., Banach algebras, (1960), Van Nostrand New York · Zbl 0051.09106 This reference list is based on information provided by the publisher or from digital mathematics libraries. Its items are heuristically matched to zbMATH identifiers and may contain data conversion errors. It attempts to reflect the references listed in the original paper as accurately as possible without claiming the completeness or perfect precision of the matching.
2021-07-28 16:40:03
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 1, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.8283330798149109, "perplexity": 3567.6691365104375}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2021-31/segments/1627046153739.28/warc/CC-MAIN-20210728154442-20210728184442-00645.warc.gz"}
https://tex.stackexchange.com/questions/334252/underbrace-without-pushing-equation
# Underbrace without pushing equation

I'm using \underbrace{}_\text{} to explain the meaning of a variable. As you can see in the image, underbrace pushes the equations aside to make space for the text. I would like to get something like this instead:

Question: Is it possible to remove this pushing behavior to get the desired result? Thanks for helping! :D

Yes, there is a way: you can use \makebox[0pt] and the relevant text size (\footnotesize or \scriptsize), or \clap, defined in mathtools. Using some code in mathtools, we can define an \EV and a \given command for the expected value of a random variable, which has variable-sized delimiters:

\documentclass{article}
\usepackage[utf8]{inputenc}
\usepackage{mathtools}

\providecommand\given{}
\DeclarePairedDelimiterXPP\EV[1]{E}[]{}{%
  \renewcommand\given{\nonscript\:\delimsize\vert\nonscript\:}%
  #1}

\begin{document}

$\EV{R_{i, t}\given I_{t-1}} = \gamma_{0,t-1} + \underbrace{\gamma_{0,t-1}}_{\makebox[0pt]{\footnotesize conditional market risk premium}}\beta_{i, t-1}$

\bigskip

$\EV[\big]{R_{i, t}\given I_{t-1}} = \gamma_{0,t-1} + \underbrace{\gamma_{0,t-1}}_{\clap{\scriptsize conditional market risk premium}}\beta_{i, t-1}$

\end{document}

• Works perfectly! For future reference: \footnotesize makes the text have the same size as footnotes; if you remove it you get bigger text underneath the equation. \makebox creates a box to contain the text. – Guilherme Salomé Oct 15 '16 at 17:56
• I don't think a bigger size is advisable here. Actually, maths in an underbrace is typeset in scriptstyle, so the text might even be typed in \scriptsize. – Bernard Oct 15 '16 at 18:24
2019-11-21 08:28:05
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 0, "mathjax_asciimath": 1, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.8539120554924011, "perplexity": 1365.7347994054867}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2019-47/segments/1573496670743.44/warc/CC-MAIN-20191121074016-20191121102016-00398.warc.gz"}
https://www.gamedev.net/forums/topic/501941-programming-a-deck-of-cards/
# Programming a deck of cards

## Recommended Posts

I posted a while back looking for scenarios I could make up for teaching 11-16 year olds programming, and I've thought about the concept of programming a deck of cards. All I'm after is some thoughts, as many different methods as you can if possible; I'm not looking for a full solution as I want to get the concepts over first. This will be an end task problem which will cover classes, inheritance and types.

So what I'm looking at is a program that randomly selects cards and can be used for a game of blackjack (ignore any betting and concentrate on making a good card and deck class). Some of the things I would like them to consider are:

- Card values - ints (or possibly an enum)
- Card suits - enum
- an array - should this be a vector or a fixed array?
- some method of keeping track of what cards have been drawn from the deck (a class)

I have an idea of how I would do it, but I'm looking to gather some different methods so I can select the easiest to get across to some 11 year olds. Most of this will be given to the kids and they add little bits, but the 16 year olds may be given a little more to do.

With the ability of the group I believe they can get most of this done. So far we have been using C++ to create a welcome USER program that logs some details about a person, and a Pythagoras and percentage calculator. In these lessons we have covered data types, functions, if statements and a switch statement. Most of this is going quite well; as expected, for some people it's a bit heavy going, but they aren't giving up and I won't let them down.

Anyone care to suggest some methods for the card game above? Many thanks. I am documenting all of this, so if gamedev would like a set of beginner-to-moderate starter programming resources I will gladly hand them over with teacher-to-student plans and activities. There has been quite a lot of work done already on this and it's looking really good for a beginner programmer.

##### Share on other sites

DrPepperCorn,

One possible solution would be using a stack for your deck of cards. One thing they would have to consider is populating the stack at the beginning with the cards in a random order. One nice thing is that using the deck "stack" of cards after it's created would be fairly easy with pops etc. Another option would be using an array to hold the deck of cards. Each method is possible to use. I was trying to put myself in the mindset of that classroom and was trying to think what stuff I might have learned by that point! Good luck with your lessons!

##### Share on other sites

Well, if the deck of cards needs to be shuffled, then using a vector of card classes and swapping elements randomly would allow you to perform a shuffle. Of course, if you didn't need to shuffle the cards then a stack would work well for just drawing cards off the top ;) . The card class could contain the value of the card and the suit of the card in integer format (this just makes it easier to work with later than, say, using strings). You could also use an enumeration to make it easy to, say, compare the current card to see if it is an Ace or not. Also, if you wanted, you could overload the output operator and print the value and suit of the card to the screen. If I were making something like this then I would have a Deck class with an STL vector of Card classes.
My Deck class would have my shuffle method, and a method for drawing a card from the vector. If you really wanted to get fancy then you could define a method in the deck class that returns a deck based on how many cards you wanted to draw from the current deck, like:

Deck* myHand = myDeck->DrawCards(STARTING_CARDS);

whereas a single card could be:

Card* curCard = myDeck->DrawCard();

Using a Deck class would support the idea of using an Abstract Data Type (ADT), where the underlying data structure is not important and we just focus on what methods we expose to the user (i.e. the methods for shuffling and drawing cards). Good luck :) .

##### Share on other sites

I wrote a poker console project a few years back. The idea was to determine the probability of winning a hand of Texas Hold 'Em against a number of opponents given a fixed initial hand of two cards and disregarding bluffing (basically I wanted to improve my real-life game and I thought it was an interesting project). I created a card collection class that had methods to shuffle, sort cards by suit or value, and deal or receive a card to or from another collection (in which case ownership of the individual card objects would be transferred). That way the full deck can use the same collection object as a single hand. I forget now what internal structure I used for its collection (probably something silly - I've learned a bit since then!) Mostly I recall running into some logical difficulties in determining winning hands efficiently. Aces counting high or low depending on the circumstance is one particular gotcha.

##### Share on other sites

Good responses, all appreciated. I'm trying to avoid pointers at every turn, as the examples I've used so far are very trivial and it's hard to show the advantages of pointers, and of course to explain that they point to memory. I'm trying to give real tangible examples, not something they have to see as a programmer would. They can get their head around initialisation: "we make a variable and we have to state what this is, int myNumber = 10;" - they can see that and get their head around it. Telling them that 10 is really at 0014f9b9 . . and this variable points to its address . . . hmmm. At the moment I'd sooner they were able to visualise the code (make a couple of sloppy errors . . don't worry, I've already drummed into their heads NEVER to use a goto), be able to comment it, and do a post mortem on the code they have written or added. This gives me a chance to pick up errors and misconceptions and develop better ways of understanding it.

So far I've been looking for them to see a card as a datatype, creating a class like this:

class cCard
{
private: // see lesson 8: private, protected and public for this keyword
    int nCard;                 // a class variable which we will need to assign a card
    string rgCardValues[13];   // something like {"2", "Three", ..., "J", "Queen"}
    string rgCardSuit[4];      // something like {"D", "Spades", "Hts", "CL"}
    // Use what you feel is best. I would use a format like "AC", "5D", "KD" etc.

public: // see lesson 8: private, protected and public for this keyword
    cCard(); // constructor
    // (view lesson 7c for construction and destruction of classes and why we do it)

    string getSuit();  // a (class) function to return the suit
    string getValue(); // a (class) function to return the value of the card
};

Thanks for the replies so far, it's all good.
This is really a great community.

##### Share on other sites

My card object had an enum for the suit and an int for the value. I initially defined a collection of all 52 cards just by basically iterating over them and populating a 'deck' which could then be passed around to individual card hands or the cards on the table (that was another initial difficulty with my project - I don't know if you know the rules of Hold 'Em, but basically some cards are shared between everyone's hands, so it's not just a case of dealing everyone individual cards; some need to be in all hands.) What are the string arrays you are using? I'd steer clear of using strings for your internal card/hand data; it's easier to write further code (hand scoring etc.) based on numeric types, i.e. ints. If you need to write card details to the console, or use a string representation for any other purpose, you could write a cCard::ToString() method that returns a human-readable representation. Reading your original post, I see this is intended as a tutorial for 11 year olds. In that case (not meaning to patronise 11 year olds here!) I'd advise sticking to the most basic representation you can. If they can grasp the program flow required to implement an enum for suits and an int for value, and include those in a class, then I'd say they're getting along pretty well compared to what my school taught me at that age!

##### Share on other sites

Indeed (last poster). I've taken a lot from the Accelerated C++ book and the way they launch into it, rather than go the "learn C++ in 21 days" / "C++ for idiots" approach. At the moment even the kids who are having difficulty get this string thing, and a large part of the scenario is designing, putting these elements into an array, and sorting them. Here I can introduce them to fixed and variable arrays. In many ways I've taken it too far and it is a little advanced. One of the kids is really good at this, as his dad is a professional PHP developer, and is coming at me with templates (very good kid, only 13), so I have to keep the pressure on him . . may have to start giving him a primer in DirectX so we can actually see some cards.

Thanks for the replies, this is really good, and like I said, if all this needs to be compiled as a resource I will do so. So far we have a 90% understanding and development rate with the 11 year olds and almost 100% with the 13+; statistically what I'm doing is working, and they are getting some programs running. They can see the effort programmers have to put in to get Mario to jump, hit a coin, get a score, and generally do something that looks really easy. Thanks again guys.

##### Share on other sites

Well, what I did is an array: cards[312]. Why 312? Because 6 decks of 52. Then I initialized it all to 1. From there I just randomly selected an index, card_chosen = rand() % 312, and checked to make sure its value was 1. If not, I checked again. While not really a shuffle, it did the trick.

##### Share on other sites

To shuffle a deck of cards, you can use std::random_shuffle.
Example with integer numbers:

#include <cstdlib>
#include <ctime>
#include <algorithm>
#include <iostream>
#include <iterator>
#include <vector>

const int n_cards = 52;

int main()
{
    // seed random number generator once
    srand(time(0));

    std::vector<int> deck;
    deck.reserve(n_cards);

    // fill deck
    for (int i = 0; i < n_cards; ++i)
        deck.push_back(i);

    // shuffle deck
    std::random_shuffle(deck.begin(), deck.end());

    // output
    std::copy(deck.begin(), deck.end(),
              std::ostream_iterator<int>(std::cout, " "));
}

##### Share on other sites

A simple implementation, in C++, would be:

#include <algorithm>
#include <cassert>
#include <vector>

namespace cards
{
    // The cards
    struct card
    {
        enum suit_type { Clubs, Hearts, Diamonds, Spades };

        suit_type suit;   // non-const so cards are assignable, as required
        unsigned value;   // by std::vector and std::random_shuffle

        card(suit_type suit, unsigned value) : suit(suit), value(value)
        {
            assert(value > 0 && value < 14); // 1 (ace) up to 13 (king)
        }
    };

    bool operator==(const card &c1, const card &c2)
    {
        return c1.suit == c2.suit && c1.value == c2.value;
    }

    bool operator!=(const card &c1, const card &c2)
    {
        return !(c1 == c2);
    }

    bool operator<(const card &c1, const card &c2)
    {
        return c1.suit < c2.suit
            || (c1.suit == c2.suit && c1.value < c2.value);
    }

    // The decks
    typedef std::vector<card> deck;

    inline deck fresh_deck()
    {
        deck d;
        d.reserve(52);
        // 13 values in each of the 4 suits = 52 cards
        for (unsigned val = 1; val <= 13; ++val)
        {
            d.push_back(card(card::Clubs, val));
            d.push_back(card(card::Hearts, val));
            d.push_back(card(card::Diamonds, val));
            d.push_back(card(card::Spades, val));
        }
        return d;
    }

    inline void shuffle(deck &d)
    {
        std::random_shuffle(d.begin(), d.end());
    }

    // deal 'cards_each' cards from the back of the deck into each hand
    inline void deal(deck &d, std::vector<deck> &hands, unsigned cards_each)
    {
        unsigned total = hands.size() * cards_each;
        assert(total <= d.size());
        for (unsigned i = 0; i < total; ++i)
        {
            hands[i % hands.size()].push_back(d.back());
            d.pop_back();
        }
    }
}
2018-12-13 13:14:20
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 1, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.21215024590492249, "perplexity": 1915.9070663741918}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2018-51/segments/1544376824822.41/warc/CC-MAIN-20181213123823-20181213145323-00242.warc.gz"}
https://ohhaskme.com/5051/how-do-i-solve-this-function-question
How do I solve this function question?

y is a function of x if every value of x in the domain maps to exactly one value of y in the range. If you can find a single value of x that maps to two or more values of y, then y is not a function of x.

To figure out if this is possible, suppose that there are two different values of y (which we'll call y₁ and y₂) that result in the same x. In other words, assume you have y₁ and y₂ such that:

> y₁ ≠ y₂ ; and
> (3y₁ + 5)/(7y₁ - 1) = (3y₂ + 5)/(7y₂ - 1)

Solve this equation, and see whether it forces y₁ = y₂. If it does, then this contradicts the assumption that y₁ ≠ y₂, and you can conclude that y *is* a function of x. If instead the equation can hold with y₁ ≠ y₂, then y is *not* a function of x.

(3y₁ + 5)/(7y₁ - 1) = (3y₂ + 5)/(7y₂ - 1)
> (3y₁ + 5)(7y₂ - 1) = (3y₂ + 5)(7y₁ - 1)
> 21y₁y₂ - 3y₁ + 35y₂ - 5 = 21y₁y₂ - 3y₂ + 35y₁ - 5
> 21y₁y₂ - 21y₁y₂ - 5 + 5 = -3y₂ + 3y₁ + 35y₁ - 35y₂
> 0 = -3y₂ + 3y₁ + 35y₁ - 35y₂
> 35y₂ + 3y₂ = 35y₁ + 3y₁
> 38y₂ = 38y₁
> y₂ = y₁

This contradicts the assumption, so y is a function of x.

For each x, is there a unique y satisfying the equation? Hint: try solving for y.

If you have learned the parts, then you know what it's asking. It's asking if y is a function. You've learned the vertical line test? Of course, you have to know what y is, so solve for it.
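A quick symbolic check of the same conclusion (my own sketch using sympy; the relation x = (3y + 5)/(7y - 1) is the one implicit in the working above):

```python
from sympy import symbols, Eq, solve

x, y = symbols("x y")

# Solve x = (3y + 5) / (7y - 1) for y
solutions = solve(Eq(x, (3*y + 5) / (7*y - 1)), y)
print(solutions)  # [(x + 5)/(7*x - 3)] -- exactly one y per admissible x
```

Since there is a single solution for y (defined whenever x ≠ 3/7), y is indeed a function of x.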
2023-02-04 02:04:07
{"extraction_info": {"found_math": false, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.8538288474082947, "perplexity": 1547.0442715250706}, "config": {"markdown_headings": false, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2023-06/segments/1674764500080.82/warc/CC-MAIN-20230204012622-20230204042622-00325.warc.gz"}
http://mathoverflow.net/revisions/67290/list
I am interested in the Banach space $\mathcal{K}=\mathcal{K}(\ell^2)$ of compact operators on $\ell^2$; however, my questions can be stated for any $\mathcal{K}(E)$, where $E$ is an arbitrary Banach space. I think that everyone who tries to study "classical" operator spaces like $\mathcal{K}$, $p$-Schatten class operators etc. immediately discovers the similarity with "commutative" counterparts, i.e. $c_0$ and $\ell^p$. This phenomenon is visible when one uses (generalised) singular numbers for certain classes of operators. Again, I have got plenty of questions concerning this stuff; let me list at least two of them:

1) What are the complemented subspaces of $\mathcal{K}$? Is $\mathcal{K}$ complemented in $\mathcal{B}(\ell^2)$? Recently, Haydon and Argyros constructed an HI-space $E$ such that $\mathcal{K}(E)$ has codimension 1 in $\mathcal{B}(E)$, and is thus complemented.

2) Is every bounded operator from the $p$-Schatten class to $\mathcal{K}$ compact?

What other properties does $\mathcal{K}$ share with $c_0$?
2013-05-25 04:03:24
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.9445369243621826, "perplexity": 378.17004851404346}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2013-20/segments/1368705407338/warc/CC-MAIN-20130516115647-00069-ip-10-60-113-184.ec2.internal.warc.gz"}
https://mathoverflow.net/questions/272787/extra-euler-factors-in-one-definition-of-the-l-function-of-a-twist-of-a-modula
# “Extra Euler factors” in one definition of the L-function of a twist of a modular form Let $(\rho_{f,\lambda})_\lambda$ be the system of Deligne's $\ell$-adic representations attached to a modular newform $f$ (where $\lambda$ runs over the finite places of the number field $K$ generated by the Fourier coefficients of $f$), and take a character $\chi\colon\mathrm G_{\mathbb Q}\rightarrow\overline{\mathbb Q}^\times$ of finite order (assume for simplicity it takes values in $K$). Fix embeddings of $\overline{\mathbb Q}$ into each $\overline{\mathbb Q}_\ell$. Then we can look at the system of $\lambda$-adic realizations $(\rho_{f,\lambda}\otimes\chi)_\lambda$. Let me call a system $(\rho_\lambda)_\lambda$ of $\ell$-adic representations compatible if for each prime $p$ the polynomial $$P_p(T)=\begin{cases}\det(1-\mathrm{Frob_p}T,\rho_\lambda^{I_p}), & \lambda\not\mid p,\\ \det(1-\varphi T,\mathrm D_{\mathrm{cris}}(\rho_\lambda)), & \lambda\mid p\end{cases}$$ does not depend on the choice of a place $\lambda$ and has coefficients in $\overline{\mathbb Q}$. It is known that the system $(\rho_{f,\lambda})_\lambda$ is compatible in this sense (using results of Scholl and Saito), and it seems to be well-known that this holds also for the system $(\rho_{f,\lambda}\otimes\chi)_\lambda$. Probably this can be seen somehow using Weil-Deligne representations, but I haven't yet worked out the details (any hints or references on this are welcome!). But my actual question is the following. If we define the $L$-function of such a compatible system as usual as the Euler product $$L((\rho_\lambda)_\lambda,s)=\!\!\!\prod_{p\text{ any prime}}\!\!\! P_p(p^{-s}),$$ then the $L$-function associated to $(\rho_{f,\lambda})_\lambda$ is just the $L$-function of $f$. But what is the $L$-function of $(\rho_{f,\lambda}\otimes\chi)_\lambda$? If we view $\chi$ as a Dirichlet character of $(\mathbb{Z}/N)^\times$ for the minimal possible $N$ via class field theory, then one often considers the $L$-function defined by $$L(f,\chi,s)=\sum_{n=1}^\infty \chi(n)a_nn^{-s},$$ where the $a_n$ are the Fourier coefficients. But in general the $L$-function of the system $(\rho_{f,\lambda}\otimes\chi)_\lambda$ differs from this one. Indeed, D. Loeffler's answer to my question How large is Dcris of certain twists of modular forms? shows that we get at least an extra Euler factor at $p$ if $f$ is $p$-ordinary and the $p$-part of $\chi$ cancels the $p$-part of the nebentype of $f$. Can we in general tell which additional Euler factors (compared to $L(f,\chi,s)$) this $L$-function has? Let me translate this into a problem purely about automorphic forms: Take a newform $f \in \mathcal{S}_k^{\ast}(q,\chi)$, and a primitive Dirichlet character $\psi$ modulo $q'$. Then there exists a newform $f \otimes \psi$ of weight $k$, level dividing $q {q'}^2$, and nebentypus induced by the primitive character inducing $\chi \psi^2$, such that whenever $(n,q') = 1$, the $n$-th Hecke eigenvalue $\lambda_{f \otimes \psi}(n)$ of $f \otimes \psi$ is $\lambda_f(n) \psi(n)$. Since $f \otimes \psi$ is a newform, it has an $L$-function $L(s,f \otimes \psi) = \prod_p L_p(s,f \otimes \psi),$ where for $p \nmid (q,q')$, $L_p(s,f \otimes \psi) = \frac{1}{1 - \lambda_f(p) \psi(p) p^{-s} + \chi(p) \psi^2(p) p^{-2s}},$ while for $p \mid (q,q')$, this may be something more complicated. 
On the other hand, we may define the "naïve" $L$-function associated to $f \otimes \psi$ as the analytic continuation of the Dirichlet series $\sum_{n = 1}^{\infty} \frac{\lambda_f(n) \psi(n)}{n^s} = \prod_p \frac{1}{1 - \lambda_f(p) \psi(p) p^{-s} + \chi(p) \psi^2(p) p^{-2s}},$ where the Euler product identity holds by multiplicativity. So I believe the question you are asking is when $L_p(s,f \otimes \psi)$ is not equal to $(1 - \lambda_f(p) \psi(p) p^{-s} + \chi(p) \psi^2(p) p^{-2s})^{-1}$ (or equivalently when $\lambda_{f \otimes \psi}(p)$ is not equal to $\lambda_f(p) \psi(p)$). This can only occur when $p \mid (q,q')$, in which case $(1 - \lambda_f(p) \psi(p) p^{-s} + \chi(p) \psi^2(p) p^{-2s})^{-1} = 1$, and the answer depends sensitively on the local component $\pi_{f,p}$ of $f$ (which is a ramified representation of $\mathrm{GL}_2(\mathbb{Q}_p)$) as well as the local components $\chi_p,\psi_p$ of $\chi,\psi$ (which are characters of $\mathbb{Q}_p^{\times}$, the latter of which is ramified). This can be dealt with via a case-by-case approach. • If $\pi_{f,p}$ is supercuspidal, then $L_p(s,f \otimes \psi) = 1.$ This is also true if $\pi_{f,p} = \omega_p \mathrm{St}_p$ is a special representation (so that the central character of $\pi_{f,p}$ is $\chi_p = \omega_p^2$) with $\omega_p$ and $\omega_p \psi_p$ both ramified, or if $\omega_p$ is unramified. Note that $\pi_{f,p} \otimes \psi_p = \omega_p \psi_p \mathrm{St}_p$ for special representations. • If $\pi_{f,p} = \omega_p \mathrm{St}_p$ is a special representation with $\omega_p$ ramified but $\omega_p \psi_p$ unramified (so that $\omega_p \psi_p(p) \in \{\pm 1\}$), then $L_p(s,f \otimes \psi) = \frac{1}{1 - \omega_p \psi_p(p) p^{-s-1/2}} \neq 1.$ • Finally, if $\pi_{f,p} = \omega_{1,p} \boxplus \omega_{2,p}$ is a ramified principal series representation (so that the central character of $\pi_{f,p}$ is $\chi_p = \omega_{1,p} \omega_{2,p}$), then $\pi_{f,p} \otimes \psi_p = \omega_{1,p} \psi_p \boxplus \omega_{2,p} \psi_p$, and $L_p(s,f \otimes \psi) = \frac{1}{(1 - \omega_{1,p} \psi_p(p) p^{-s})(1 - \omega_{2,p} \psi_p(p) p^{-s})}.$ If $\omega_{1,p} \psi_p$ and $\omega_{2,p} \psi_p$ are both ramified, then $\omega_{1,p} \psi_p(p) = \omega_{2,p} \psi_p(p) = 0$, and so $L_p(s,f \otimes \psi) = 1$. However, if either one is unramified, then one of $\omega_{1,p} \psi_p(p), \omega_{2,p} \psi_p(p)$ is nonzero, and so $L_p(s,f \otimes \psi) \neq 1$. If $f(z)=\sum_{n \geq 1} a_n q^n$ is a newform of level $\Gamma_1(N)$ and $\chi$ is a Dirichlet character modulo $m$, then the naïve twist of $f$ by $\chi$ is the modular form $f_\chi(z) = \sum_{n \geq 1} a_n \chi(n) q^n$. As was already pointed out $f_\chi$ is not always a newform, but there is a unique newform $f \otimes \chi$ sharing the same Hecke eigenvalues at primes $p$ not dividing $m$. If $N$ and $m$ are coprime then $f_\chi = f \otimes \chi$ is a newform, but this is not always the case in general. A criterion for $f_\chi$ being a newform (equivalently $f_\chi = f \otimes \chi$) has been worked out by Atkin--Li in their article Twists of newforms and pseudo-eigenvalues of $W$-operators (see Corollary 3.1). The question of determining the Euler factor of $f \otimes \chi$ at $p$ is clearly a local one, so we may assume that $\chi$ is a primitive Dirichlet character of conductor $p^\alpha$ with $\alpha \geq 1$, and that $p$ divides $N$. 
In general the Euler factor of $f \otimes \chi$ at $p$ can be determined from the local automorphic representation associated to $f$, as explained by Peter Humphries. There is however a special case which is easy, namely when $f$ is $p$-primitive, meaning that $f$ has minimal level among its twists by characters of $p$-power conductor. If $f$ is $p$-primitive and $a_p \neq 0$ then we have the formula $$L_p(f \otimes \chi,s)^{-1} = 1- \bar{a}_p \cdot (\psi \chi)_0(p) p^{-s}$$ where $\psi$ is the Nebentypus character of $f$, and $(\psi \chi)_0$ is the primitive Dirichlet character associated to $\psi \chi$. This is explained in Merel's article Symboles de Manin et valeurs de fonctions $L$ (Section 2.6). The L-series of $\rho_{f, \lambda} \otimes \chi$ is the $L$-series of $f \otimes \chi$, where $f \otimes \chi$ is the unique newform such that $a_\ell(f \otimes \chi) = \chi(\ell) a_\ell(f)$ for all but finitely many $\ell$. So your Galois-theoretic question reduces to a purely automorphic one, namely determining the Hecke eigenvalues of $f \otimes \chi$ at the bad primes (the ones dividing the conductor of $\chi$). In general, this is a bit fiddly to do algorithmically, if your starting point is just the $q$-expansion of $f$; but of course if you know the local factors of the automorphic representation associated to $\chi$ then you can read off the corresponding data for $f \otimes \chi$ immediately.
https://brilliant.org/problems/number-base-is-very-complex/
# Number base is very complex

Number Theory Level 5

In base $$2$$, $$2016_{10}=111 1110 0000_2$$. In base $$3$$, $$2016_{10}=220 2200_3$$. In base $$4$$, $$2016_{10}=13 3200_4$$. In base $$5$$, $$2016_{10}=3 1031_5$$.

Find the smallest number (in base $$10$$) bigger than $$1$$ that has only $$0$$s and $$1$$s in its representation in every base from base $$2$$ to base $$5$$.
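One hedged way to approach the problem is a brute-force search; the sketch below (my own, not part of the original problem page) relies on the observation that any integer's base-2 digits are automatically $$0$$s and $$1$$s, so only bases $$3$$, $$4$$, and $$5$$ need checking. The helpers `digits` and `only_zeros_and_ones` are my own names.

```python
def digits(n: int, base: int) -> list:
    # Digits of n in the given base, least significant first.
    ds = []
    while n > 0:
        ds.append(n % base)
        n //= base
    return ds

def only_zeros_and_ones(n: int) -> bool:
    # Base 2 is automatic; check bases 3 through 5.
    return all(set(digits(n, b)) <= {0, 1} for b in (3, 4, 5))

n = 2
while not only_zeros_and_ones(n):
    n += 1
print(n)  # prints the answer to the puzzle
```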
http://mail-archives.apache.org/mod_mbox/forrest-dev/200402.mbox/%3C1076163070.1827.21.camel@verndeb%3E
# forrest-dev mailing list archives

From: Stephan Michels <step...@apache.org>
Subject: Re: How do I proceed (WAS Re: Forrest and Mathematics)
Date: Sat, 07 Feb 2004 14:11:10 GMT

On Friday, 06 Feb 2004 at 16:57, Paul Smith wrote:
> > > Does this solution handle *all* TeX markup?
> >
> > Of course not all markup. This is impossible and unnecessary.
> > But most of core and the math environment work.
>
> [edited necessary to unnecessary as I believe you meant]

Oops, I meant unnecessary, of course.

> What kind of things are missing? Why are they unnecessary?

There are thousands of commands. In the real world, you only use a subset of them.

> > > If it does (and you are
> > > willing to let us have the necessary configs) then the problem is
> >
> > Which problem? If I get it right, you want Forrest to render MathML
> > for you, or do you want to use Forrest with TeX as input?!
>
> I want to use Forrest to display 'Maths'. I personally don't care
> whether it uses MathML or TeX, or somehow uses a kind of stylesheet to
> transform one into the other - I just want to be able to put equations
> in my Forrest docs to be able to type up my lecture notes and have them
> available on the web.

If you write more than just one equation, you will notice a difference, for sure ;-) If you have a document with dozens of equations, MathML makes it unmaintainable.

> The reason I don't care whether it's MathML or TeX is that I haven't
> learnt how to write either yet - I'm going to have to learn one of them
> though! I'm willing to learn whichever Forrest will support.

XML is a very bad format for editors; we know this from experience (we have thousands of documents with equations in our project). But this is a different chapter...

> > BTW, you can also use the math environment within Wiki, for example:
> >
> > -----------------------
> > !Math equations
> >
> > $\frac{2 \cdot x}{x^2} = y$
> >
> > And the result is ....
> > -----------------------
>
> Is this in your own setup or already possible in Forrest?

Not now, but it can easily be done by adding some additional transformers to the cwiki:

cwiki --> xdoc + mathml ---> html + gif images (or mathml)

But first things first. You should first start to get MathML running. I can help if problems occur...

Stephan.
https://socratic.org/questions/58227fd211ef6b527f8c1b56
# Question c1b56

Nov 13, 2016

$9.74 \cdot 10^{-2}\ \text{nm}$

#### Explanation:

The idea here is that matter can also behave like a wave, as described by the de Broglie hypothesis. In your case, a molecule of oxygen, $\text{O}_2$, will exhibit wave-like behavior and have a wavelength, called the de Broglie wavelength, associated with it. The de Broglie wavelength depends on the momentum of the particle, $p$, which in turn depends on the mass of the particle, $m$, and its velocity, $v$. You will thus have

$p = m \cdot v \to$ describes the momentum of the molecule

and

$\lambda = \frac{h}{p} \to$ the de Broglie wavelength

Here

$\lambda$ - the wavelength of the molecule
$h$ - Planck's constant, equal to $6.626 \cdot 10^{-34}\ \text{J s}$

Now, you know that the molecule has a speed of $128\ \text{m s}^{-1}$, which can be used here instead of velocity. In order to find the mass of a single molecule of oxygen, use the molar mass of oxygen gas and Avogadro's constant. Oxygen gas has a molar mass of approximately $32.0\ \text{g mol}^{-1}$, which means that $1$ mole of oxygen molecules has a mass of $32.0\ \text{g}$. This means that a single molecule of oxygen gas will have a mass of

$1\ \text{molecule O}_2 \cdot \frac{1\ \text{mole O}_2}{6.022 \cdot 10^{23}\ \text{molecules O}_2} \cdot \frac{32.0\ \text{g}}{1\ \text{mole O}_2} = 5.314 \cdot 10^{-23}\ \text{g} = 5.314 \cdot 10^{-26}\ \text{kg}$

This means that the momentum of the molecule will be

$p = 5.314 \cdot 10^{-26}\ \text{kg} \cdot 128\ \text{m s}^{-1} = 6.802 \cdot 10^{-24}\ \text{kg m s}^{-1}$

Now, you need to be mindful of units here. Notice that Planck's constant is given in joule-seconds, $\text{J s}$. As you know, $1\ \text{J}$ is equal to

$1\ \text{J} = 1\ \text{kg m}^2\ \text{s}^{-2}$

This means that Planck's constant can also be written as

$h = 6.626 \cdot 10^{-34}\ \text{kg m}^2\ \text{s}^{-2} \cdot \text{s} = 6.626 \cdot 10^{-34}\ \text{kg m}^2\ \text{s}^{-1}$

You can now say that the de Broglie wavelength associated with this molecule is

$\lambda = \frac{6.626 \cdot 10^{-34}\ \text{kg m}^2\ \text{s}^{-1}}{6.802 \cdot 10^{-24}\ \text{kg m s}^{-1}} = 9.74 \cdot 10^{-11}\ \text{m}$

Expressed in nanometers, the answer will be

$9.74 \cdot 10^{-11}\ \text{m} \cdot \frac{10^9\ \text{nm}}{1\ \text{m}} = 9.74 \cdot 10^{-2}\ \text{nm}$

The answer is rounded to three sig figs.
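The arithmetic is easy to verify numerically; a few lines of Python (my own sketch, not part of the original answer) reproduce the value:

```python
h = 6.626e-34     # Planck's constant, J s = kg m^2 s^-1
N_A = 6.022e23    # Avogadro's constant, mol^-1
M = 32.0e-3       # molar mass of O2, kg mol^-1
v = 128.0         # speed, m s^-1

m = M / N_A       # mass of one O2 molecule, kg (~5.314e-26)
p = m * v         # momentum, kg m s^-1 (~6.802e-24)
lam = h / p       # de Broglie wavelength, m
print(lam * 1e9)  # ~0.0974 nm
```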
https://www.jobilize.com/physics1/course/10-8-work-and-power-for-rotational-motion-by-openstax?qcr=www.quizover.com
# 10.8 Work and power for rotational motion

By the end of this section, you will be able to:

• Use the work-energy theorem to analyze rotation to find the work done on a system when it is rotated about a fixed axis for a finite angular displacement
• Solve for the angular velocity of a rotating rigid body using the work-energy theorem
• Find the power delivered to a rotating rigid body given the applied torque and angular velocity
• Summarize the rotational variables and equations and relate them to their translational counterparts

Thus far in the chapter, we have extensively addressed kinematics and dynamics for rotating rigid bodies around a fixed axis. In this final section, we define work and power within the context of rotation about a fixed axis, which has applications to both physics and engineering. The discussion of work and power makes our treatment of rotational motion almost complete, with the exception of rolling motion and angular momentum, which are discussed in Angular Momentum. We begin this section with a treatment of the work-energy theorem for rotation.

## Work for rotational motion

Now that we have determined how to calculate kinetic energy for rotating rigid bodies, we can proceed with a discussion of the work done on a rigid body rotating about a fixed axis. [link] shows a rigid body that has rotated through an angle $d\theta$ from A to B while under the influence of a force $\vec{F}$. The external force $\vec{F}$ is applied to point P, whose position is $\vec{r}$, and the rigid body is constrained to rotate about a fixed axis that is perpendicular to the page and passes through O. The rotational axis is fixed, so the vector $\vec{r}$ moves in a circle of radius r, and the vector $d\vec{s}$ is perpendicular to $\vec{r}$.

From [link], we have $\vec{s} = \vec{\theta} \times \vec{r}.$ Thus, $d\vec{s} = d(\vec{\theta} \times \vec{r}) = d\vec{\theta} \times \vec{r} + d\vec{r} \times \vec{\theta} = d\vec{\theta} \times \vec{r}.$ Note that $d\vec{r}$ is zero because $\vec{r}$ is fixed on the rigid body from the origin O to point P. Using the definition of work, we obtain $W = \int \sum \vec{F} \cdot d\vec{s} = \int \sum \vec{F} \cdot (d\vec{\theta} \times \vec{r}) = \int d\vec{\theta} \cdot \left(\vec{r} \times \sum \vec{F}\right),$ where we used the identity $\vec{a} \cdot (\vec{b} \times \vec{c}) = \vec{b} \cdot (\vec{c} \times \vec{a})$.
Noting that $\left(\vec{r} \times \sum \vec{F}\right) = \sum \vec{\tau}$, we arrive at the expression for the rotational work done on a rigid body: $W = \int \sum \vec{\tau} \cdot d\vec{\theta}.$ The total work done on a rigid body is the sum of the torques integrated over the angle through which the body rotates. The incremental work is $dW = \left(\sum_{i}\tau_i\right) d\theta$ where we have taken the dot product in [link], leaving only torques along the axis of rotation. In a rigid body, all particles rotate through the same angle; thus the work of every external force is equal to the torque times the common incremental angle $d\theta$. The quantity $\left(\sum_{i}\tau_i\right)$ is the net torque on the body due to external forces. Similarly, we found the kinetic energy of a rigid body rotating around a fixed axis by summing the kinetic energy of each particle that makes up the rigid body. Since the work-energy theorem $W_i = \Delta K_i$ is valid for each particle, it is valid for the sum of the particles and the entire body.

## Work-energy theorem for rotation

The work-energy theorem for a rigid body rotating around a fixed axis is $W_{AB} = K_B - K_A$ where $K = \frac{1}{2}I\omega^2$ and the rotational work done by a net force rotating a body from point A to point B is $W_{AB} = \int_{\theta_A}^{\theta_B} \left(\sum_{i}\tau_i\right) d\theta.$
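As a quick numerical illustration (my own example, not from the original page): suppose a flywheel with moment of inertia $I = 0.5\ \text{kg m}^2$ starts from rest and a constant net torque of $2.0\ \text{N m}$ acts over one full revolution. The work-energy theorem then gives its final angular velocity:

```python
import math

I = 0.5                       # moment of inertia, kg m^2
tau = 2.0                     # constant net torque, N m
dtheta = 2 * math.pi          # angular displacement, rad (one revolution)

W = tau * dtheta              # W_AB = integral of tau dtheta (tau constant)
omega = math.sqrt(2 * W / I)  # from W_AB = (1/2) I omega^2 - 0
print(W, omega)               # ~12.57 J, ~7.09 rad/s
```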
https://introtcs.org/public/lec_10_efficient_alg.html
# Efficient computation

• Describe at a high level some interesting computational problems.
• The difference between polynomial and exponential time.
• Examples of techniques for obtaining efficient algorithms
• Examples of how seemingly small differences in problems can make (at least apparent) huge differences in their computational complexity.

"The problem of distinguishing prime numbers from composite and of resolving the latter into their prime factors is … one of the most important and useful in arithmetic … Nevertheless we must confess that all methods … are either restricted to very special cases or are so laborious … they try the patience of even the practiced calculator … and do not apply at all to larger numbers.", Carl Friedrich Gauss, 1798

"For practical purposes, the difference between algebraic and exponential order is often more crucial than the difference between finite and non-finite.", Jack Edmonds, "Paths, Trees, and Flowers", 1963

"What is the most efficient way to sort a million 32-bit integers?", Eric Schmidt to Barack Obama, 2008

"I think the bubble sort would be the wrong way to go.", Barack Obama.

So far we have been concerned with which functions are computable and which ones are not. But now we return to quantitative considerations and study the time that it takes to compute functions mapping strings to strings, as a function of the input length. This is of course extremely important in the practice of computing, and the reason why we often care so much about the difference between an $$O(n \log n)$$-time algorithm and an $$O(n^2)$$-time one. In contexts such as introduction to programming courses, coding interviews, and actual algorithm design, terms such as "$$O(n)$$ running time" are often used in an informal way. That is, people don't have a precise definition of what a linear-time algorithm is, but rather assume that "they'll know it when they see it". However, in this course we will make precise definitions, using our mathematical models of computation. This will allow us to ask (and sometimes answer) questions such as:

• "Is there a function that can be computed in $$O(n^2)$$ time but not in $$O(n)$$ time?"
• "Are there natural problems for which the best algorithm (and not just the best known) requires $$2^{\Omega(n)}$$ time?"

In this chapter we will survey some examples of computational problems, for some of which we know efficient (e.g., $$n^c$$-time for a small constant $$c$$) algorithms, and for others the best known algorithms are exponential. We want to get a feel for the kinds of problems that lie on each side of this divide and also see how some seemingly minor changes in formulation can make the (known) complexity of a problem "jump" from polynomial to exponential. We will not formally define the notion of running time in this chapter, and so will use the same "I know it when I see it" notion of an $$O(n)$$ or $$O(n^2)$$ time algorithm as the one you've seen in introduction to computer science courses. In Chapter 12, we will define this notion precisely, using our NAND-TM and NAND-RAM programming languages. One of the nice things about the theory of computation is that it turns out that, like in the context of computability, the details of the precise computational model or programming language don't matter that much.
Specifically, in this course, we will often not be as concerned with the difference between $$O(n)$$ and $$O(n^2)$$, as much as the difference between polynomial and exponential running time. One of the interesting phenomena of computing is that there is often a kind of a "threshold phenomenon" or "zero-one law" for running time, where many natural problems can either be solved in polynomial running time with a not-too-large exponent (e.g., something like $$O(n^2)$$ or $$O(n^3)$$), or require exponential (e.g., at least $$2^{\Omega(n)}$$ or $$2^{\Omega(\sqrt{n})}$$) time to solve. The reasons for this phenomenon are still not fully understood, but some light on this is shed by the concept of NP completeness, which we will encounter later. As we will see, questions about polynomial versus exponential time are often insensitive to the choice of the particular computational model, just like we saw that the question of whether a function $$F$$ is computable is insensitive to whether you use NAND-TM, $$\lambda$$-calculus, Turing machines, or Javascript as your model of computation.

## Problems on graphs

We now present a few examples of computational problems that people are interested in solving. Many of the problems will involve graphs. We have already encountered graphs in the context of Boolean circuits, but let us now quickly recall the basic notation. A graph $$G$$ consists of a set of vertices $$V$$ and edges $$E$$ where each edge is a pair of vertices. In a directed graph, an edge is an ordered pair $$(u,v)$$, which we sometimes denote as $$\overrightarrow{u\;v}$$. In an undirected graph, an edge is an unordered pair (or simply a set) $$\{ u,v \}$$ which we sometimes denote as $$\overline{u\; v}$$ or $$u \sim v$$. We will assume graphs are undirected and simple (i.e., containing no parallel edges or self-loops) unless stated otherwise. We typically will think of the vertices in a graph as simply the set $$[n]$$ of the numbers from $$0$$ till $$n-1$$. Graphs can be represented either in the adjacency list representation, which is a list of $$n$$ lists, with the $$i^{th}$$ list corresponding to the neighbors of the $$i^{th}$$ vertex, or the adjacency matrix representation, which is an $$n\times n$$ matrix $$A$$ with $$A_{i,j}$$ equaling $$1$$ if there is an edge between vertex $$i$$ and vertex $$j$$ and equaling $$0$$ otherwise. We can transform between these two representations using $$O(n^2)$$ operations, and hence for our purposes we will mostly consider them as equivalent. We will sometimes consider labeled or weighted graphs, where we assign a label or a number to the edges or vertices of the graph, but mostly we will try to keep things simple and stick to the basic notion of an unlabeled, unweighted, simple undirected graph. There is a reason that graphs are so ubiquitous in computer science and other sciences. They can be used to model a great many kinds of data that we encounter. These are not just the "obvious" networks such as the road network (which can be thought of as a graph whose vertices are locations with edges corresponding to road segments), or the web (which can be thought of as a graph whose vertices are web pages with edges corresponding to links), or social networks (which can be thought of as a graph whose vertices are people and the edges correspond to the friend relation).
Graphs can also denote correlations in data (e.g., a graph of observations of features with edges corresponding to features that tend to appear together), causal relations (e.g., gene regulatory networks, where a gene is connected to gene products it derives), or the state space of a system (e.g., a graph of configurations of a physical system, with edges corresponding to states that can be reached from one another in one step). We now give some examples of computational problems on graphs. As mentioned above, to keep things simple, we will restrict our attention to undirected simple graphs. In all cases the input graph $$G=(V,E)$$ will have $$n$$ vertices and $$m$$ edges.

### Finding the shortest path in a graph

The shortest path problem is the task of, given a graph $$G=(V,E)$$ and two vertices $$s,t \in V$$, finding the length of the shortest path between $$s$$ and $$t$$ (if such a path exists). That is, we want to find the smallest number $$k$$ such that there are vertices $$v_0,v_1,\ldots,v_k$$ with $$v_0=s$$, $$v_k=t$$ and for every $$i\in\{0,\ldots,k-1\}$$ an edge between $$v_i$$ and $$v_{i+1}$$. Formally, we define $$\mathit{MINPATH}:\{0,1\}^* \rightarrow \{0,1\}^*$$ to be the function that on input a triple $$(G,s,t)$$ (represented as a string) outputs the number $$k$$ which is the length of the shortest path in $$G$$ between $$s$$ and $$t$$, or a string representing no path if no such path exists. (In practice people often want to also find the actual path and not just its length; it turns out that the algorithms to compute the length of the path often yield the actual path itself as a byproduct, and so everything we say about the task of computing the length also applies to the task of finding the path.) If each vertex has at least two neighbors then there can be an exponential number of paths from $$s$$ to $$t$$, but fortunately we do not have to enumerate them all to find the shortest path. We can do so by performing a breadth first search (BFS), enumerating $$s$$'s neighbors, and then neighbors' neighbors, etc., in order. If we maintain the neighbors in a list we can perform a BFS in $$O(n^2)$$ time, while using a queue we can do this in $$O(m)$$ time. More formally, the algorithm for computing the function $$\mathit{MINPATH}$$ can be described as follows:

Algorithm BFSPATH:

• Input: Graph $$G=(V,E)$$, vertices $$s,t$$
• Goal: Find the length $$k$$ of the shortest path $$v_0,v_1,\ldots,v_k$$ such that $$v_0=s$$, $$v_k=t$$ and $$\{ v_i,v_{i+1} \} \in E$$ for every $$i\in [k]$$, if such a path exists.
• Operation:
1. We maintain a queue $$Q$$ of vertices; initially $$Q$$ contains only the vertex $$s$$.
2. We maintain a dictionary $$D$$ keyed by the vertices; for every vertex $$v$$, $$D[v]$$ is either equal to a natural number or to $$\infty$$. Initially we set $$D[s]=0$$ and $$D[v]=\infty$$ for every $$v\in V \setminus \{s \}$$.
3. While $$Q$$ is not empty do the following:
1. Pop a vertex $$v$$ from the top of the queue.
2. If $$v=t$$ then halt and output $$D[v]$$.
3. Otherwise, for every neighbor $$w$$ of $$v$$ such that $$D[w]=\infty$$, set $$D[w]=D[v]+1$$ and add $$w$$ to the queue.
4. Output "no path"

Since we only add to the queue vertices $$w$$ with $$D[w]=\infty$$ (and then immediately set $$D[w]$$ to an actual number), we never push to the queue a vertex more than once, and hence the algorithm takes $$n$$ "push" and "pop" operations.
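A direct Python transcription of BFSPATH might look like the following sketch (my own; the graph is given as an adjacency-list dictionary):

```python
from collections import deque
import math

def bfs_path(adj, s, t):
    """Length of the shortest s-t path in the graph given by the
    adjacency-list dict adj, or None if no such path exists."""
    D = {v: math.inf for v in adj}   # tentative distances
    D[s] = 0
    Q = deque([s])
    while Q:
        v = Q.popleft()
        if v == t:
            return D[v]
        for w in adj[v]:
            if D[w] == math.inf:     # w not seen before
                D[w] = D[v] + 1
                Q.append(w)
    return None                      # "no path"

# Example: a path 0-1-2-3 plus a chord 0-2; shortest 0-to-3 path has length 2.
adj = {0: [1, 2], 1: [0, 2], 2: [0, 1, 3], 3: [2]}
print(bfs_path(adj, 0, 3))  # 2
```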
The algorithm returns the correct answer because we add the vertices to the queue in the order of their distance from $$s$$, and hence we will reach $$t$$ after we have explored all the vertices that are closer to $$s$$ than $$t$$. Hence algorithm BFSPATH computes $$\mathit{MINPATH}$$.

If you've ever taken an algorithms course, you have probably encountered many data structures such as lists, arrays, queues, stacks, heaps, search trees, hash tables and many more. Data structures are extremely important in computer science, and each one of those offers different tradeoffs between overhead in storage, operations supported, cost in time for each operation, and more. For example, if we store $$n$$ items in a list, we will need a linear (i.e., $$O(n)$$ time) scan to retrieve one of them, while we achieve the same operation in $$O(1)$$ time if we use a hash table. However, when we only care about polynomial-time algorithms, such factors of $$O(n)$$ in the running time will not make much difference. Similarly, if we don't care about the difference between $$O(n)$$ and $$O(n^2)$$, then it doesn't matter if we represent graphs as adjacency lists or adjacency matrices. Hence we will often describe our algorithms at a very high level, without specifying the particular data structures that are used to implement them. It should however always be clear that there exists some data structure that will be sufficient for our purposes.

### Finding the longest path in a graph

The longest path problem is the task of, given a graph $$G=(V,E)$$ and two vertices $$s,t \in V$$, finding the length of the longest simple (i.e., non-intersecting) path between $$s$$ and $$t$$. If the graph is a road network, then the longest path might seem less motivated than the shortest path, but of course graphs can be and are used to model a variety of phenomena, and in many such cases the longest path (and some of its variants) are highly motivated. In particular, finding the longest path is a generalization of the famous Hamiltonian path problem which asks for a maximally long simple path (i.e., a path that visits all $$n$$ vertices once) between $$s$$ and $$t$$, as well as the notorious traveling salesman problem (TSP) of finding (in a weighted graph) a path visiting all vertices of cost at most $$w$$. TSP is a classical optimization problem, with applications ranging from planning and logistics to DNA sequencing and astronomy. A priori it is not clear that finding the longest path should be harder than finding the shortest path, but this turns out to be the case. While we know how to find the shortest path in $$O(n)$$ time, for the longest path problem we have not been able to significantly improve upon the trivial brute force algorithm that tries all paths. Specifically, in a graph of degree at most $$d$$, we can enumerate over all paths of length $$k$$ by going over the (at most $$d$$) neighbors of each vertex. This would take about $$O(d^k)$$ steps, and since the longest simple path can't have length more than the number of vertices, this means that the brute force algorithm runs in $$O(d^n)$$ time (which we can bound by $$O(n^n)$$ since the maximum degree is at most $$n$$). The best algorithm for the longest path improves on this, but not by much: it takes $$\Omega(c^n)$$ time for some constant $$c>1$$.

### Finding the minimum cut in a graph

Given a graph $$G=(V,E)$$, a cut is a subset $$S$$ of $$V$$ such that $$S$$ is neither empty nor is it all of $$V$$.
The edges cut by $$S$$ are those edges where one of their endpoints is in $$S$$ and the other is in $$\overline{S} = V \setminus S$$. We denote this set of edges by $$E(S,\overline{S})$$. If $$s,t \in V$$ then an $$s,t$$ cut is a cut such that $$s\in S$$ and $$t\in \overline{S}$$. (See Figure 11.3.) The minimum $$s,t$$ cut problem is the task of finding, given $$s$$ and $$t$$, the minimum number $$k$$ such that there is an $$s,t$$ cut cutting $$k$$ edges (once again, the problem is also sometimes phrased as finding the set that achieves this minimum; it turns out that algorithms to compute the number often yield the set as well). Formally, we define $$\mathit{MINCUT}:\{0,1\}^* \rightarrow \{0,1\}^*$$ to be the function that on input a triple $$(G,s,t)$$ of a graph and two vertices (represented as a string), outputs the minimum number $$k$$ such that there exists a set $$S$$ containing $$s$$ and not $$t$$ with exactly $$k$$ edges that touch $$S$$ and its complement. The minimum $$s,t$$ cut problem appears in many applications. Minimum cuts often correspond to bottlenecks. For example, in a communication network the minimum cut between $$s$$ and $$t$$ corresponds to the smallest number of edges that, if dropped, will disconnect $$s$$ from $$t$$. Similar applications arise in scheduling and planning. In the setting of image segmentation, one can define a graph whose vertices are pixels and whose edges correspond to neighboring pixels of distinct colors. If we want to separate the foreground from the background then we can pick (or guess) a foreground pixel $$s$$ and background pixel $$t$$ and ask for a minimum cut between them. Here is an algorithm to compute $$\mathit{MINCUT}$$:

Algorithm MINCUTNAIVE:

• Input: Graph $$G=(V,E)$$ and two distinct vertices $$s,t \in V$$
• Goal: Return $$k = \min_{S \subseteq V, s\in S, t\not\in S} |E(S,\overline{S})|$$
• Operation:
1. Let $$k_0 \leftarrow |E|+1$$
2. For every set $$S \subseteq V$$ such that $$s\in S$$ and $$t\not\in S$$ do:
1. Set $$k=0$$.
2. For every edge $$\{u,v\} \in E$$, if $$u\in S$$ and $$v\not\in S$$ then set $$k \leftarrow k+1$$.
3. If $$k < k_0$$ then let $$k_0 \leftarrow k$$
3. Return $$k_0$$

It is an excellent exercise for you to pause at this point and verify: (i) that you understand what this algorithm does, (ii) that you understand why this algorithm will in fact return the value of the minimum cut in the graph, and (iii) that you can analyze the running time of this algorithm. The precise running time of algorithm MINCUTNAIVE will depend on the data structures we use to store the graph and the sets, but even if we had the best data structures, the running time of MINCUTNAIVE would be terrible. Indeed, if a graph has $$n$$ vertices, then for every pair $$s,t$$ of distinct vertices, there are $$2^{n-2}$$ sets $$S$$ that contain $$s$$ but don't contain $$t$$. (Can you see why?) Since we are enumerating over all of those in Step 2, even if we could compute for each such set $$S$$ the value $$|E(S,\overline{S})|$$ in constant time, our running time would still be exponential. Since minimum cut is a problem we want to solve, this seems like bad news. After all, MINCUTNAIVE is the most natural algorithm to solve the problem, and if it takes exponential time, then perhaps the problem can't be solved efficiently at all. However, this turns out not to be the case.
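For concreteness, here is a direct Python transcription of MINCUTNAIVE (my own sketch; usable only for very small graphs, precisely because of the exponential enumeration just discussed):

```python
from itertools import combinations

def mincut_naive(V, E, s, t):
    """Value of the minimum s,t cut, by brute-force enumeration."""
    others = [v for v in V if v not in (s, t)]
    k0 = len(E) + 1
    # Every S with s in S and t not in S is {s} plus a subset of the rest.
    for r in range(len(others) + 1):
        for extra in combinations(others, r):
            S = {s} | set(extra)
            # Count edges with exactly one endpoint in S.
            k = sum(1 for (u, v) in E if (u in S) != (v in S))
            k0 = min(k0, k)
    return k0

# Example: a 4-cycle 0-1-2-3-0; the minimum cut between 0 and 2 is 2.
V = [0, 1, 2, 3]
E = [(0, 1), (1, 2), (2, 3), (3, 0)]
print(mincut_naive(V, E, 0, 2))  # 2
```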
As we’ve seen in this course time and again, there is a difference between the function $$\ensuremath{\mathit{MINCUT}}$$ and the algorithm MINCUTNAIVE to solve it. There can be more than one algorithm to compute the same function, and some of those algorithms might be more efficient than others. Luckily this is one of those cases. There do exist much faster algorithms that compute $$\ensuremath{\mathit{MINCUT}}$$ in polynomial time (which, as mentioned in the mathematical background lecture, we denote by $$poly(n)$$). There are several algorithms to do so, but many of them rely on the Max-Flow Min-Cut Theorem that says that the minimum cut between $$s$$ and $$t$$ equals the maximum amount of flow we can send from $$s$$ to $$t$$, if every edge has unit capacity. Specifically, imagine that every edge of the graph corresponded to a pipe that could carry one unit of water per one unit of time (say 1 liter of water per second). Now suppose we want to send a maximum amount of water per time unit from our source $$s$$ to the sink $$t$$. If there is an $$s,t$$-cut of at most $$k$$ edges, then this maximum will be at most $$k$$. Indeed, such a cut $$S$$ will be a “bottleneck” since at most $$k$$ units can flow from $$S$$ to its complement $$\overline{S}$$. The above reasoning can be used to show that the maximum flow from $$s$$ to $$t$$ is at most the value of the minimum $$s,t$$-cut. The surprising and non-trivial content of the Max-Flow Min-Cut Theorem is that the maximum flow is also at leat the value of the minimum cut, and hence computing the cut is the same as computing the flow. A flow on a graph $$G$$ of $$m$$ edges can be thought of as a vector $$x\in \R^m$$ where for every edge $$e$$, $$x_e$$ corresponds to the amount of water per time-unit that flows on $$e$$. We think of an edge $$e$$ an an ordered pair $$(u,v)$$ (we can choose the order arbitrarily) and let $$x_e$$ be the amount of flow that goes from $$u$$ to $$v$$. (If the flow is in the other directoin then we make $$x_e$$ negative.) Since every edge has capacity one, we know that $$-1 \leq x_e \leq 1$$ for every edge $$e$$. A valid flow has the property that the amount of water leaving the source $$s$$ is the same as the amount entering the sink $$t$$, and that for every other vertex $$v$$, the amount of water entering and leaving $$v$$ is the same. Mathematically, we can write these conditions as follows: \begin{aligned} \sum_{e \ni s} x_e + \sum_{e\ni t} x_e &=0 && \\ \sum_{e\ni v} x_e &=0 \; &&\forall_{v \in V \setminus \{s,t\}} \\ -1 \leq x_e \leq 1 & \; &&\forall_{e\in E} \end{aligned} \;\;(11.1) where for every vertex $$v$$, summing over $$e \ni v$$ means summing over all the edges that touch $$v$$. The maximum flow problem can be thought of as the task of maximizing $$\sum_{e \ni s} x_e$$ over all the vectors $$x\in\R^m$$ that satisfy the above conditions Equation 11.1. This is a special case of a very general task known as linear programming, where one wants to find the maximum of $$f(x)$$ over $$x \in \R^m$$ that satisfies certain linear inequalities where $$f:\R^m \rightarrow \R$$ is a linear function. Luckily, there are polynomial-time algorithms for solving linear programming, and hence we can solve the maximum flow (and so, equivalently, minimum cut) problem in polynomial time. 
In fact, there are much better algorithms for maximum-flow/minimum-cut, even for weighted directed graphs, with the current record standing at $$O(\min\{ m^{10/7}, m\sqrt{n}\})$$.

### Finding the maximum cut in a graph

We can also define the maximum cut problem of finding, given a graph $$G=(V,E)$$, the subset $$S\subseteq V$$ that maximizes the number of edges cut by $$S$$. Like its cousin the minimum cut problem, the maximum cut problem is also very well motivated. For example, it arises in VLSI design, and also has some surprising relation to analyzing the Ising model in statistical physics. Once again, a priori it might not be clear that the maximum cut problem should be harder than minimum cut, but this turns out to be the case. We do not know of an algorithm that solves this problem much faster than the trivial "brute force" algorithm that tries all $$2^n$$ possibilities for the set $$S$$.

### A note on convexity

There is an underlying reason for the sometimes radical difference between the difficulty of maximizing and minimizing a function over a domain. If $$D \subseteq \R^n$$, then a function $$f:D \rightarrow \R$$ is convex if for every $$x,y \in D$$ and $$p\in [0,1]$$, $$f(px+(1-p)y) \leq pf(x) + (1-p)f(y)$$. That is, $$f$$ applied to the $$p$$-weighted midpoint between $$x$$ and $$y$$ is smaller than the $$p$$-weighted average value of $$f$$. If $$D$$ itself is convex (which means that if $$x,y$$ are in $$D$$ then so is the line segment between them), then this means that if $$x$$ is a local minimum of $$f$$ then it is also a global minimum. The reason is that if $$f(y)<f(x)$$ then every point $$z=px+(1-p)y$$ on the line segment between $$x$$ and $$y$$ will satisfy $$f(z) \leq p f(x) + (1-p)f(y) < f(x)$$ and hence in particular $$x$$ cannot be a local minimum. Intuitively, local minima of functions are much easier to find than global ones: after all, any "local search" algorithm that keeps finding a nearby point on which the value is lower will eventually arrive at a local minimum. Indeed, under certain technical conditions, we can often efficiently find the minimum of convex functions, and this underlies the reason problems such as minimum cut and shortest path are easy to solve. On the other hand, maximizing a convex function (or equivalently, minimizing a concave function) can often be a hard computational task. A linear function is both convex and concave, which is the reason both the maximization and minimization problems for linear functions can be done efficiently. The minimum cut problem is not a priori a convex minimization task, because the set of potential cuts is discrete. However, it turns out that we can embed it in a continuous and convex set via the (linear) maximum flow problem. The "max flow min cut" theorem ensures that this embedding is "tight" in the sense that the minimum "fractional cut" that we obtain through the maximum-flow linear program will be the same as the true minimum cut. Unfortunately, we don't know of such a tight embedding in the setting of the maximum cut problem. The issue of convexity arises time and again in the context of computation. For example, one of the basic tasks in machine learning is empirical risk minimization.
That is, given a set of labeled examples $$(x_1,y_1),\ldots,(x_m,y_m)$$, where each $$x_i \in \{0,1\}^n$$ and $$y_i \in \{0,1\}$$, we want to find the function $$h:\{0,1\}^n \rightarrow \{0,1\}$$ from some class $$H$$ that minimizes the error in the sense of minimizing the number of $$i$$'s such that $$h(x_i) \neq y_i$$. Like in the minimum cut problem, to make this a better behaved computational problem, we often embed it in a continuous domain, including functions that could output a real number and replacing the condition $$h(x_i) \neq y_i$$ with minimizing some continuous loss function $$\ell(h(x_i),y_i)$$. When this embedding is convex then we are guaranteed that the global minimizer is unique and can be found in polynomial time. When the embedding is non-convex, we have no such guarantee and in general there can be many global or local minima. That said, even if we don't find the global (or even a local) minimum, this continuous embedding can still help us. In particular, when running a local improvement algorithm such as Gradient Descent, we might still find a function $$h$$ that is "useful" in the sense of having a small error on future examples from the same distribution.

## Beyond graphs

Not all computational problems arise from graphs. We now list some other examples of computational problems that are of great interest.

### The 2SAT problem

A propositional formula $$\varphi$$ involves $$n$$ variables $$x_1,\ldots,x_n$$ and the logical operators AND ($$\wedge$$), OR ($$\vee$$), and NOT ($$\neg$$, also denoted as $$\overline{\cdot}$$). We say that such a formula is in conjunctive normal form (CNF for short) if it is an AND of ORs of variables or their negations (we call a term of the form $$x_i$$ or $$\overline{x}_i$$ a literal). For example, this is a CNF formula $(x_7 \vee \overline{x}_{22} \vee x_{15} ) \wedge (x_{37} \vee x_{22}) \wedge (x_{55} \vee \overline{x}_7)$ We say that a formula is a $$k$$-CNF if it is an AND of ORs where each OR involves exactly $$k$$ literals. The 2SAT problem is to find out, given a $$2$$-CNF formula $$\varphi$$, whether there is an assignment $$x\in \{0,1\}^n$$ that satisfies $$\varphi$$, in the sense that it makes it evaluate to $$1$$ or "True". Determining the satisfiability of Boolean formulas arises in many applications and in particular in software and hardware verification, as well as scheduling problems. The trivial, brute-force, algorithm for 2SAT will enumerate all the $$2^n$$ assignments $$x\in \{0,1\}^n$$, but fortunately we can do much better. The key is that we can think of every constraint of the form $$\ell_i \vee \ell_j$$ (where $$\ell_i,\ell_j$$ are literals, corresponding to variables or their negations) as an implication $$\overline{\ell}_i \Rightarrow \ell_j$$, since it corresponds to the constraint that if the literal $$\ell'_i = \overline{\ell}_i$$ is true then it must be the case that $$\ell_j$$ is true as well. Hence we can think of $$\varphi$$ as a directed graph between the $$2n$$ literals, with an edge from $$\ell_i$$ to $$\ell_j$$ corresponding to an implication from the former to the latter. It can be shown that $$\varphi$$ is unsatisfiable if and only if there is a variable $$x_i$$ such that there is a directed path from $$x_i$$ to $$\overline{x}_i$$ as well as a directed path from $$\overline{x}_i$$ to $$x_i$$ (see Exercise 11.2). This reduces 2SAT to the (efficiently solvable) problem of determining connectivity in directed graphs.
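A short Python sketch of this reduction (my own; it checks reachability with a DFS per literal, which is enough to illustrate the idea, though real solvers use a single strongly-connected-components pass):

```python
def sat2(n, clauses):
    """Decide satisfiability of a 2-CNF over variables 1..n.
    A clause is a pair of nonzero ints; -i stands for the negation
    of variable i (e.g. (1, -2) means x1 OR NOT x2)."""
    edges = {}
    def add_edge(a, b):
        edges.setdefault(a, []).append(b)
    for (a, b) in clauses:
        add_edge(-a, b)  # NOT a implies b
        add_edge(-b, a)  # NOT b implies a

    def reaches(src, dst):
        # Iterative DFS over the implication graph.
        seen, stack = {src}, [src]
        while stack:
            u = stack.pop()
            if u == dst:
                return True
            for w in edges.get(u, []):
                if w not in seen:
                    seen.add(w)
                    stack.append(w)
        return False

    # Unsatisfiable iff some variable and its negation reach each other.
    return not any(reaches(i, -i) and reaches(-i, i)
                   for i in range(1, n + 1))

print(sat2(2, [(1, 2), (-1, 2), (1, -2), (-1, -2)]))  # False (all assignments ruled out)
print(sat2(2, [(1, 2), (-1, 2)]))                     # True (e.g. x2 = 1)
```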
### The 3SAT problem

The 3SAT problem is the task of determining satisfiability for 3CNFs. One might think that changing from two to three would not make that much of a difference for complexity. One would be wrong. Despite much effort, we do not know of a significantly better than brute force algorithm for 3SAT (the best known algorithms take roughly $$1.3^n$$ steps). Interestingly, a similar issue arises time and again in computation, where the difference between two and three often corresponds to the difference between tractable and intractable. We do not fully understand the reasons for this phenomenon, though the notion of $$\mathbf{NP}$$ completeness that we will see later does offer a partial explanation. It may be related to the fact that optimizing a polynomial often amounts to equations on its derivative. The derivative of a quadratic polynomial is linear, while the derivative of a cubic is quadratic, and, as we will see, the difference between solving linear and quadratic equations can be quite profound.

### Solving linear equations

One of the most useful problems that people have been solving time and again is solving $$n$$ linear equations in $$n$$ variables. That is, we want to solve equations of the form \begin{aligned} a_{0,0}x_0 &+ a_{0,1}x_1 &&+ \cdots &&+ a_{0,{n-1}}x_{n-1} &&= b_0 \\ a_{1,0}x_0 &+ a_{1,1}x_1 &&+ \cdots &&+ a_{1,{n-1}}x_{n-1} &&= b_1 \\ \vdots &+ \vdots &&+ \vdots &&+ \vdots &&= \vdots \\ a_{n-1,0}x_0 &+ a_{n-1,1}x_1 &&+ \cdots &&+ a_{n-1,{n-1}}x_{n-1} &&= b_{n-1} \end{aligned} where $$\{ a_{i,j} \}_{i,j \in [n]}$$ and $$\{ b_i \}_{i\in [n]}$$ are real (or rational) numbers. More compactly, we can write this as the equation $$Ax = b$$ where $$A$$ is an $$n\times n$$ matrix, and we think of $$x,b$$ as column vectors in $$\R^n$$. The standard Gaussian elimination algorithm can be used to solve such equations in polynomial time (i.e., determine if they have a solution, and if so, to find it). As we discussed above, if we are willing to allow some loss in precision, we even have algorithms that handle linear inequalities, also known as linear programming. In contrast, if we insist on integer solutions, the task of solving for linear equalities or inequalities is known as integer programming, and the best known algorithms are exponential time in the worst case.

Whenever we discuss problems whose inputs correspond to numbers, the input length corresponds to how many bits are needed to describe the number (or, as is equivalent up to a constant factor, the number of digits in base 10, 16 or any other constant). The difference between the length of the input and the magnitude of the number itself can of course be quite profound. For example, most people would agree that there is a huge difference between having a billion (i.e., $$10^9$$) dollars and having nine dollars. Similarly, there is a huge difference between an algorithm that takes $$n$$ steps on an $$n$$-bit number and an algorithm that takes $$2^n$$ steps. One example is the problem (discussed below) of finding the prime factors of a given integer $$N$$. The natural algorithm is to search for such a factor by trying all numbers from $$1$$ to $$N$$, but that would take $$N$$ steps, which is exponential in the input length, i.e., the number of bits needed to describe $$N$$. It is an important and long open question whether there is such an algorithm that runs in time polynomial in the input length (i.e., polynomial in $$\log N$$).
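To illustrate the linear case (my own example; numpy's solver is a standard LU/Gaussian-elimination routine), a couple of lines suffice:

```python
import numpy as np

# Solve the 2x2 system:  x0 + 2*x1 = 5,  3*x0 + 4*x1 = 11.
A = np.array([[1.0, 2.0],
              [3.0, 4.0]])
b = np.array([5.0, 11.0])

x = np.linalg.solve(A, b)  # raises LinAlgError if A is singular
print(x)                   # [1. 2.]
```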
### Solving quadratic equations

Suppose that we want to solve not just linear but also equations involving quadratic terms of the form $$a_{i,j,k}x_jx_k$$. That is, suppose that we are given a set of quadratic polynomials $$p_1,\ldots,p_m$$ and consider the equations $$\{ p_i(x) = 0 \}$$. To avoid issues with bit representations, we will always assume that the equations contain the constraints $$\{ x_i^2 - x_i = 0 \}_{i\in [n]}$$. Since only $$0$$ and $$1$$ satisfy the equation $$a^2-a=0$$, this assumption means that we can restrict attention to solutions in $$\{0,1\}^n$$. Solving quadratic equations in several variables is a classical and extremely well motivated problem. This is the generalization of the classical case of single-variable quadratic equations that generations of high school students grapple with. It also generalizes the quadratic assignment problem, introduced in the 1950's as a way to optimize assignment of economic activities. Once again, we do not know a much better algorithm for this problem than the one that enumerates over all the $$2^n$$ possibilities.

We now list a few more examples of interesting problems that are a little more advanced but are of significant interest in areas such as physics, economics, number theory, and cryptography.

### Determinant of a matrix

The determinant of an $$n\times n$$ matrix $$A$$, denoted by $$\mathrm{det}(A)$$, is an extremely important quantity in linear algebra. For example, it is known that $$\mathrm{det}(A) \neq 0$$ if and only if $$A$$ is nonsingular, which means that it has an inverse $$A^{-1}$$, and hence we can always uniquely solve equations of the form $$Ax = b$$ where $$x$$ and $$b$$ are $$n$$-dimensional vectors. More generally, the determinant can be thought of as a quantitative measure of the extent to which $$A$$ is far from being singular. If the rows of $$A$$ are "almost" linearly dependent (for example, if the third row is very close to being a linear combination of the first two rows) then the determinant will be small, while if they are far from it (for example, if they are orthogonal to one another) then the determinant will be large. In particular, for every matrix $$A$$, the absolute value of the determinant of $$A$$ is at most the product of the norms (i.e., square roots of sums of squares of entries) of the rows, with equality if and only if the rows are orthogonal to one another.

The determinant can be defined in several ways. For example, it is known that $$\mathrm{det}$$ is the only function that satisfies the following conditions:

1. $$\mathrm{det}(AB) = \mathrm{det}(A)\mathrm{det}(B)$$ for all square matrices $$A,B$$.
2. For every $$n\times n$$ triangular matrix $$T$$ with diagonal entries $$d_0,\ldots, d_{n-1}$$, $$\mathrm{det}(T)=\prod_{i=0}^{n-1} d_i$$. In particular $$\mathrm{det}(I)=1$$ where $$I$$ is the identity matrix.
3. $$\mathrm{det}(S)=-1$$ where $$S$$ is a "swap matrix" that corresponds to swapping two rows or two columns of $$I$$. That is, there are two coordinates $$a,b$$ such that for every $$i,j$$, $$S_{i,j} = \begin{cases}1 & i=j\;, i \not\in \{a,b \} \\ 1 & \{i,j\}=\{a,b\} \\ 0 & \text{otherwise}\end{cases}$$.

Note that conditions 1 and 2 together imply that $$\mathrm{det}(A^{-1}) = \mathrm{det}(A)^{-1}$$ for every invertible matrix $$A$$. Using these rules and the Gaussian elimination algorithm, it is possible to tell whether $$A$$ is singular or not, and in the latter case, decompose $$A$$ as a product of a polynomial number of swap matrices and triangular matrices.
(Indeed one can verify that the row operations in Gaussian elimination correspond to either multiplying by a swap matrix or by a triangular matrix.) Hence we can compute the determinant of an $$n\times n$$ matrix using a polynomial number of arithmetic operations.

### The permanent (mod 2) problem

Given an $$n\times n$$ matrix $$A$$, the permanent of $$A$$ is the sum over all permutations $$\pi$$ (i.e., $$\pi$$ is a member of the set $$S_n$$ of one-to-one and onto functions from $$[n]$$ to $$[n]$$) of the product $$\prod_{i=0}^{n-1}A_{i,\pi(i)}$$. The permanent of a matrix is a natural quantity, and has been studied in several contexts including combinatorics and graph theory. It also arises in physics where it can be used to describe the quantum state of multiple boson particles. If the entries of $$A$$ are integers, then we can also define a Boolean function $$perm_2(A)$$ which will output the result of the permanent modulo $$2$$. A priori computing this would seem to require enumerating over all $$n!$$ possibilities. However, it turns out we can compute $$perm_2(A)$$ in polynomial time! The key is that modulo $$2$$, $$-x$$ and $$+x$$ are the same quantity, and hence the permanent modulo $$2$$ is the same as taking the following quantity modulo $$2$$: $\sum_{\pi \in S_n} sign(\pi)\prod_{i=0}^{n-1}A_{i,\pi(i)} \;\;(11.4)$ where the sign of a permutation $$\pi$$ is a number in $$\{+1,-1\}$$ which can be defined in several ways, one of which is that $$sign(\pi)$$ equals $$+1$$ if the number of swaps that "Bubble" sort performs starting from an array sorted according to $$\pi$$ is even, and it equals $$-1$$ if this number is odd. At first look, Equation 11.4 does not seem like much progress. After all, all we did is replace one formula involving a sum over $$n!$$ terms with an even more complicated formula involving a sum over $$n!$$ terms. But fortunately Equation 11.4 also has an alternative description: it is yet another way to describe the determinant of the matrix $$A$$, which as mentioned can be computed using a process similar to Gaussian elimination.

### The permanent (mod 3) problem

Emboldened by our good fortune above, we might hope to be able to compute the permanent modulo any prime $$p$$ and perhaps in full generality. Alas, we have no such luck. In a similar "two to three" type of phenomenon, we do not know of a much better than brute force algorithm to even compute the permanent modulo $$3$$.

### Finding a zero-sum equilibrium

A zero sum game is a game between two players where the payoff for one is the same as the penalty for the other. That is, whatever the first player gains, the second player loses. As much as we want to avoid them, zero sum games do arise in life, and the one good thing about them is that at least we can compute the optimal strategy. A zero sum game can be specified by an $$n\times n$$ matrix $$A$$, where if player 1 chooses action $$i$$ and player 2 chooses action $$j$$ then player 1 gets $$A_{i,j}$$ and player 2 loses the same amount. The famous Min Max Theorem by John von Neumann states that if we allow probabilistic or "mixed" strategies (where a player does not choose a single action but rather a distribution over actions) then it does not matter who plays first and the end result will be the same.
Mathematically, the min-max theorem states that if we let $$\Delta_n$$ be the set of probability distributions over $$[n]$$ (i.e., non-negative column vectors in $$\R^n$$ whose entries sum to $$1$$) then

$\max_{p \in \Delta_n} \min_{q\in \Delta_n} p^\top A q = \min_{q \in \Delta_n} \max_{p\in \Delta_n} p^\top A q \;\;(11.5)$

The min-max theorem turns out to be a corollary of linear programming duality, and indeed the value of Equation 11.5 can be computed efficiently by a linear program.

### Finding a Nash equilibrium

Fortunately, not all real-world games are zero sum, and we do have more general games, where the payoff of one player does not necessarily equal the loss of the other. John Nash won the Nobel prize for showing that there is a notion of equilibrium for such games as well. In many economic texts it is taken as an article of faith that when actual agents are involved in such a game then they reach a Nash equilibrium. However, unlike zero sum games, we do not know of an efficient algorithm for finding a Nash equilibrium given the description of a general (non zero sum) game. In particular this means that, despite economists' intuitions, there are games for which natural strategies will take an exponential number of steps to converge to an equilibrium.

### Primality testing

Another classical computational problem, which has been of interest since the ancient Greeks, is to determine whether a given number $$N$$ is prime or composite. Clearly we can do so by trying to divide it by all the numbers in $$2,\ldots,N-1$$, but this would take at least $$N$$ steps, which is exponential in its bit complexity $$n = \log N$$. We can reduce these $$N$$ steps to $$\sqrt{N}$$ by observing that if $$N$$ is a composite of the form $$N=PQ$$ then either $$P$$ or $$Q$$ is smaller than $$\sqrt{N}$$. But this is still quite terrible. If $$N$$ is a $$1024$$ bit integer, $$\sqrt{N}$$ is about $$2^{512}$$, and so running this algorithm on such an input would take much more than the lifetime of the universe.

Luckily, it turns out we can do radically better. In the 1970's, Rabin and Miller gave probabilistic algorithms to determine whether a given number $$N$$ is prime or composite in time $$poly(n)$$ for $$n=\log N$$. We will discuss the probabilistic model of computation later in this course. In 2002, Agrawal, Kayal, and Saxena found a deterministic $$poly(n)$$ time algorithm for this problem. This is surely a development that mathematicians from Archimedes till Gauss would have found exciting.

### Integer factoring

Given that we can efficiently determine whether a number $$N$$ is prime or composite, we could expect that in the latter case we could also efficiently find the factorization of $$N$$. Alas, no such algorithm is known. In a surprising and exciting turn of events, the non-existence of such an algorithm has been used as a basis for encryption, and indeed it underlies much of the security of the world wide web. We will return to the factoring problem later in this course. We remark that we do know much better than brute force algorithms for this problem.
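The claim that the value of Equation 11.5 is computable by a linear program can be made concrete with a small sketch. The following Python code (my illustration, not part of the text; it assumes NumPy and SciPy are available) encodes the row player's problem, "maximize $$v$$ subject to $$(A^\top p)_j \geq v$$ for every column $$j$$ and $$p$$ in the simplex", in the form that scipy.optimize.linprog expects:

```python
import numpy as np
from scipy.optimize import linprog

def zero_sum_value(A):
    """Value and optimal row strategy of the zero-sum game with payoff A.
    Decision variables: p_0..p_{n-1} (mixed strategy) and v (game value)."""
    A = np.asarray(A, dtype=float)
    n, m = A.shape
    c = np.concatenate([np.zeros(n), [-1.0]])       # minimize -v == maximize v
    # For each column j: v - (A^T p)_j <= 0.
    A_ub = np.hstack([-A.T, np.ones((m, 1))])
    b_ub = np.zeros(m)
    A_eq = np.concatenate([np.ones(n), [0.0]]).reshape(1, -1)   # sum(p) = 1
    b_eq = np.array([1.0])
    bounds = [(0, None)] * n + [(None, None)]       # p >= 0, v unbounded
    res = linprog(c, A_ub=A_ub, b_ub=b_ub, A_eq=A_eq, b_eq=b_eq, bounds=bounds)
    return res.x[:n], res.x[n]

# Matching pennies: the value is 0 with the uniform strategy (1/2, 1/2).
p, v = zero_sum_value([[1, -1], [-1, 1]])
print(p, v)
```

Running it on the matching-pennies matrix returns the uniform strategy and value 0, exactly as the min-max theorem predicts.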
While the brute force algorithms would require $$2^{\Omega(n)}$$ time to factor an $$n$$-bit integer, there are known algorithms running in time roughly $$2^{O(\sqrt{n})}$$ and also algorithms that are widely believed (though not fully rigorously analyzed) to run in time roughly $$2^{O(n^{1/3})}$$.17

## Our current knowledge

The difference between an exponential and polynomial time algorithm might seem merely "quantitative" but it is in fact extremely significant. As we've already seen, the brute force exponential time algorithm runs out of steam very very fast, and as Edmonds says, in practice there might not be much difference between a problem where the best algorithm is exponential and a problem that is not solvable at all. Thus the efficient algorithms we mention above are widely used and power many computer science applications. Moreover, a polynomial-time algorithm often arises out of significant insight into the problem at hand, whether it is the "max-flow min-cut" result, the efficient computability of the determinant, or the group theoretic structure that enables primality testing. Such insight can be useful regardless of its computational implications.

At the moment we do not know whether the "hard" problems are truly hard, or whether it is merely because we haven't yet found the right algorithms for them. However, we will now see that there are problems that do inherently require exponential time. We just don't know if any of the examples above fall into that category.

## Lecture summary

• There are many natural problems that have polynomial-time algorithms, and other natural problems that we'd love to solve, but for which the best known algorithms are exponential.
• Often a polynomial time algorithm relies on discovering some hidden structure in the problem, or finding a surprising equivalent formulation for it.
• There are many interesting problems where there is an exponential gap between the best known algorithm and the best algorithm that we can rule out. Closing this gap is one of the main open questions of theoretical computer science.

## Exercises

Most of the exercises have been written in the summer of 2018 and haven't yet been fully debugged. While I would prefer people do not post online solutions to the exercises, I would greatly appreciate if you let me know of any bugs. You can do so by posting a GitHub issue about the exercise, and optionally complement this with an email to me with more details about the attempted solution.

The naive algorithm for computing the longest path in a given graph could take more than $$n!$$ steps. Give a $$poly(n)2^n$$ time algorithm for the longest path problem in $$n$$ vertex graphs.18 (A sketch implementing the hint of footnote 18 appears right after the footnotes below.)

For every 2CNF $$\varphi$$, define the graph $$G_\varphi$$ on $$2n$$ vertices corresponding to the literals $$x_1,\ldots,x_n,\overline{x}_1,\ldots,\overline{x}_n$$, such that there is an edge $$\overrightarrow{\ell_i\; \ell_j}$$ iff the constraint $$\overline{\ell}_i \vee \ell_j$$ is in $$\varphi$$. Prove that $$\varphi$$ is unsatisfiable if and only if there is some $$i$$ such that there is a path from $$x_i$$ to $$\overline{x}_i$$ and from $$\overline{x}_i$$ to $$x_i$$ in $$G_\varphi$$. Show how to use this to solve 2SAT in polynomial time.19

## Further explorations

Some topics related to this chapter that might be accessible to advanced students include: (to be completed)

## Acknowledgements
1. An equivalent viewpoint is that an undirected graph is like a directed graph with the property that whenever the edge $$\overrightarrow{u\; v}$$ is present then so is the edge $$\overrightarrow{v\; u}$$.
2. In an undirected graph, the adjacency matrix $$A$$ is symmetric, in the sense that it satisfies $$A_{i,j}=A_{j,i}$$.
3. A queue stores a list of elements in "First In First Out (FIFO)" order, and so each "pop" operation removes an element from the queue in the order in which it was "pushed" into it; see the Wikipedia page. Since we assume $$m \geq n-1$$, $$O(m)$$ is the same as $$O(n+m)$$. Dijkstra's algorithm is a well-known generalization of BFS to weighted graphs.
4. A dictionary or associative array data structure $$D$$ allows one to associate with every key $$v$$ (which can be thought of as a string) a value $$D[v]$$.
5. At the moment the best record is $$c \sim 1.65$$ or so. Even obtaining an $$O(2^n)$$ time bound is not that simple, see Exercise 11.1.
6. One can also define the problem of finding the global minimum cut (i.e., the non-empty and non-everything set $$S$$ that minimizes the number of edges cut). A polynomial time algorithm for the minimum $$s,t$$ cut can be used to solve the global minimum cut in polynomial time as well (can you see why?).
8. We can also consider the variant where one is given $$s,t$$ and looks for the $$s,t$$-cut that maximizes the number of edges cut. The two variants are equivalent up to $$O(n^2)$$ factors in the running time, but we use the global max cut formulation since it is more common in the literature.
9. One example of such a local search algorithm is gradient descent, which takes a small step in the direction that most reduces the value, based on the current derivative. There are also algorithms that take advantage of the second derivative (hence known as second order methods) to potentially converge faster.
10. We also sometimes replace or enhance the condition that $$h$$ is in the class $$H$$ by adding a regularizing term of the form $$R(h)$$ to the minimization problem, where $$R:H \rightarrow \R$$ is some measure of the "complexity" of $$h$$. As a general rule, the larger or more "complex" functions $$h$$ we allow, the easier it is to fit the data, but the more danger we have of "overfitting".
11. In machine learning parlance, this task is known as supervised learning. The set of examples $$(x_1,y_1),\ldots,(x_m,y_m)$$ is known as the training set, and the error on additional samples from the same distribution is known as the generalization error, which can be measured by checking $$h$$ against a test set that was not used in training it.
12. To analyze this fully we need to ensure that the bit complexity of the numbers involved does not grow too much, but fortunately we can indeed ensure this using Cramer's rule. Also, as is usually the case when talking about real numbers, we do not care much for the distinction between solving equations exactly and solving them to arbitrarily good precision.
13. The running time of this algorithm can be easily improved to roughly $$\sqrt{N}$$, but this is still exponential (i.e., $$2^{n/2}$$) in the number $$n$$ of bits to describe $$N$$.
14. A triangular matrix is one in which either all entries below the diagonal, or all entries above the diagonal, are zero.
15. The cost of performing each arithmetic operation depends on the number of bits needed to represent each entry, and accounting for this can sometimes be subtle, though ultimately doable.
16. It turns out that this definition is independent of the sorting algorithm, and for example if $$sign(\pi)=-1$$ then one cannot sort an array ordered according to $$\pi$$ using an even number of swaps.
17. The "roughly" adjective above refers to neglecting factors that are polylogarithmic in $$n$$.
18. Hint: Use dynamic programming to compute for every $$s,t \in [n]$$ and $$S \subseteq [n]$$ the value $$P(s,t,S)$$ which equals $$1$$ if there is a simple path from $$s$$ to $$t$$ that uses exactly the vertices in $$S$$. Do this iteratively for $$S$$'s of growing sizes.
19. TODO: add reference to best algorithm for longest path - probably the Bjorklund algorithm
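Following the hint in footnote 18, a minimal Python sketch of the $$poly(n)2^n$$ dynamic program for the longest-path exercise might look as follows (an illustration, not an official solution; the graph encoding is an arbitrary choice):

```python
from itertools import combinations

def longest_path(n, edges):
    """Length (in edges) of the longest simple path in an n-vertex graph.
    P[(s, t, S)] is True iff some simple path from s to t uses exactly
    the vertex set S (a frozenset containing both s and t)."""
    adj = {v: set() for v in range(n)}
    for u, v in edges:
        adj[u].add(v)
        adj[v].add(u)
    P = {}
    for v in range(n):                        # single-vertex "paths"
        P[(v, v, frozenset([v]))] = True
    best = 0
    # Grow the vertex set S one vertex at a time, over all 2^n subsets.
    for size in range(2, n + 1):
        for S in map(frozenset, combinations(range(n), size)):
            for s in S:
                for t in S:
                    if s == t:
                        continue
                    # A path s..t over S exists iff it ends with an edge u-t
                    # where s..u is a path over S - {t}.
                    if any(P.get((s, u, S - {t}), False) for u in adj[t] & S):
                        P[(s, t, S)] = True
                        best = max(best, size - 1)
    return best

# Path graph 0-1-2-3: the longest path has 3 edges.
print(longest_path(4, [(0, 1), (1, 2), (2, 3)]))   # prints 3
```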
https://www.scienceforums.net/topic/4101-my-latest-security-invention/page/4/?tab=comments#comment-182797
# My Latest Security Invention :)

## Recommended Posts

"I'm at Warwick uni, you Coventry person you. What year you in?"

I'm off to Warwick after my gap year. What are you studying (some kind of engineering I presume)? I'm surprised you put your faith in the British justice system; my brother got GBHed and had to have a metal plate in his head. He gave the name of the kid who did it to the police and nothing came of it. Can bolt cutters cut through a Yale lock thing? Also, does your phone have normal screws on it to get at the circuits? Mine has crazy nuts with stars instead of a slit; where can I get tools to unscrew them?

Mine had the same types of security screws as yours does. I have a set of bits to undo security screws, but before I had them, I'd make my own by modifying jewelers' screwdrivers with a tiny grinder and a file. Once they were out, I'd then replace them with ordinary screws of the same size.

My security system involves two malamutes and an M-4, muhahaha.

YT, I love that idea! I thought you had pictures for us though? Of the land or the device? In any case, I'd like to see both, if you wouldn't mind showing.

Ya, some pictures would be awesome. Why use a phone though? Couldn't you just rig up a really simple switch on the door (to turn on when the door opens) that is connected to a 12-volt buzzer in your house (or even your doorbell)? That is what I would do. I'm sure it would be much cheaper, probably serve the purpose better, and you wouldn't have to worry about the thieves breaking the system (they could, but it is only a switch). I'm sure the few hundred feet of cheap 24 gauge wire wouldn't be expensive at all either. To me it just seems like it could be much simpler than using a cellular phone.

Sorry, no pics for a while until I get the cam server working again or fill the mem card and get them copied to CD. As for using a cell fone, that's because the shed is over a mile away from where I live, and even if it wasn't, I'm on the 12th floor anyway.

"I'm sure the few hundred feet of cheap 24 gauge wire wouldn't be expensive at all either. To me it just seems like it could be much simpler than using a cellular phone."

What about the labor involved in burying a "few hundred feet" of wire (or a mile)? Get real. YT, you've come up with a great innovation here; I don't see why you keep getting people who want to criticize your system. If it's not people suggesting illegal death traps, it's people suggesting the obvious. Very resourceful. Did they ever let you hook up the community room?

Burying a few hundred feet of wire is not difficult. It only needs to be buried an inch deep. And I'm sorry, I did not know he lived on the 12th floor and a mile away from his shed.

"Burying a few hundred feet of wire is not difficult. It only needs to be buried an inch deep."

Remember, the reason most people have sheds on this land is for gardening. Bury that line an inch deep and everybody's going to be digging it up. Plus an inch deep means any foot traffic will quickly displace the soft soil, even if it was only a few hundred feet.

"Burying a few hundred feet of wire is not difficult. It only needs to be buried an inch deep. And I'm sorry, I did not know he lived on the 12th floor and a mile away from his shed."
You're still ENTIRELY missing the point. All that's great if I never left my home, but since the cell fone is set to call MY mobile fone, I can be ANYWHERE and still know about it. So if I'm out shopping or at a party or whatever, all I need do is dial 999 as soon as my mobile says "Shed Alarm", and the cops are there even if I'm a thousand miles away. THAT is the point!

Wow, I was missing the point. Ingenious.

OK, while I find this interesting, I have a few questions, so that I may better understand:
- If you are using a cell phone as your alarm, don't you need some sort of plan to keep the number active? If that is the case, then technically you are paying for your alarm?
- How did you get it to say "Shed Alarm"? I know there are ways, but I don't know them. Thanks.

The SIM card I use is Vodafone PAYT; it's on the "smart plan", so all I need to do is buy some credit on the fone and just forget about it. It only takes credit off the SIM when I actually make a call; other than that it's free. As for "Shed Alarm", I just add the user "Shed Alarm" in my directory along with the shed's number, and whenever I get a call from that number it displays as "Shed Alarm".

Ah, I see. Very clever yet again, YT.

It took me a good 20 minutes to read the whole thread but it was worth it. YT, you are resourceful. I am really surprised you could file the screwdrivers to fit into the security screws. Wow, and the phone is very clever.

• 1 month later...

Moved here for a good reason. Basically I'm in negotiations with Nokia about marketing it. Sooo for now, I'll keep it in here in case they google for "YT2095" and mobile fones; I don't want them reading it.

Best of luck, YT! Have they signed any non-disclosure agreements? What is the process?

No, not yet, I'm still keeping it vague. All further transactions will be done via post on receipt of my details. I have a non-disclosure form of my own (it's only a standard type); basically I'm letting them do all the work, I can't afford to patent etc... I've told them that the "idea" works on other fones too, namely Sagem and Motorola, but I'm giving them 1st opportunity as their fones are easier to adapt. All I've asked for is naming rights on the 1st model and a percentage of the sales. I'll just have to wait and see now.

Hmmm.. just got mail from them...

"In a couple of weeks you will be sent information of our policy regarding unsolicited ideas."

I'm not quite sure what to make of that in all honesty? BOT response maybe?

"I can't afford to patent etc..."

Design patents are practically worthless, and I don't think you have anything proprietary enough for a utility patent. By naming rights do you mean recognition or do you want to name the device?

To be honest, atm I don't have enough cash to patent my own name! :) As for naming, something with YT or 2095 in it (or both would be nice). Personally I'd just like to see the thing out there and in use; as a device it's a "Nasty Ba$tard" if you're on the wrong side of the law, so the more the merrier as far as I'm concerned, and I KNOW they could sell Millions of them! It would be nice to have a little say in the name and a share of the profits too, but I dunno, maybe it's the pessimist in me, but somehow I don't think much is going to happen about it.
It was a chat with my long-time friend of old that suggested I try to approach someone with the idea; it kinda got me "fired up" and inspired a little more faith in the idea as a Good One than I had. So what the hey, I'll give it a shot; the worst they can say is No, right!?

I'm not sure if you ever said in the thread, but what does it cost you for the phone service on the "SHED" phone per month?

Nothing at all for service; there's no contract either. It's PAYT: I top up my SIM card with say £5 and that will last me forever. All incoming is free; I only pay when I make a call, and if I make no calls, it costs nothing. It's on the Vodafone smart price plan (or something like that). I've pointed out to them that they'll have to negotiate with a service provider, as the unit sales will go through the roof if this sucker hits the market, so a deal needs to be done with someone when they lock these mobiles to a specific provider.

Are you taking the angle that it's a great way to use up all the old outdated phones they must have lying around?

Actually I envisaged an entirely NEW system altogether. Most of the "guts" in the fone are redundant (WAP, PDA features, games, ringtones, even the entire receiver block etc...). There need only be a single-line display, enough to see the number you elect to call on break-in, and the current credit status. The rest can go in the trash can, even the earpiece. The only improvements required would be an extra sensitive microphone, the back-up internal battery, and charging adaptors to accommodate car batteries and household mains electricity, possibly with a digital interface for ease of hookup to a standard house alarm panel on the switched negative (to compensate for SABMs). Other than that (and it's really nothing radical added) the rest is an exercise in redundancy deletion.
https://east-centricarch.eu/en/the-total-cost-of-renting-a-car-is-3000-for-each-day-the-car-is-rented-plus-28-12-for-each-mil.18187.html
# The total cost of renting a car is \$30.00 for each day the car is rented plus 28 1/2 ¢ for each mile the car is driven. What is the total cost of renting the car for 5 days and driving 350 miles? (Note: No sales tax is involved.)

$5 \times \$30 = \$150$ for the five days

$350 \times 28\tfrac{1}{2}\text{¢} = 9975\text{¢} = \$99.75$ for the 350 miles

$\$150 + \$99.75 = \$249.75$ in total
http://blog.strobaek.org/2010/10/
### Archive for October, 2010

## Mike Cohn on Leading a Self-Organizing Team

Wednesday 8 December I will be attending a "go home meeting" featuring the Scrum expert Mike Cohn. Really looking forward to it, as Mr. Cohn is one of the big names in this racket. From his own description:

One of the challenges of agile is coming to grips with the role of leaders and managers of self-organizing teams. Many go to the extreme of refusing to exert any influence on their teams at all. Others retain too much of a command-and-control style. Leading a self-organizing team can be a fine line. In this session you will learn the proper ways to influence the path taken by a team to solving the problems given to it. You will learn how to become comfortable in this role. You'll understand why influencing a self-organizing team is neither sneaky nor inappropriate but is necessary.

Categories: Conference Tags:

## Pythagorean proof

I recently wrote a blog post on beautiful mathematics. Mathematics can be beautiful or pleasing in many ways, one of them being if the proof is elegant. Apparently one of the theorems for which the greatest number of proofs exists is the Pythagorean theorem. The theorem states that

In any right triangle, the area of the square whose side is the hypotenuse (the side opposite the right angle) is equal to the sum of the areas of the squares whose sides are the two legs (the two sides that meet at a right angle).

or

$\displaystyle{a^2 + b^2 = c^2}$

One of the more beautiful or elegant proofs is the one below.

Categories: Writing Tags:

## Personal Backup Solution

Just read a nice blog post by @4nd3rs (in Danish) about his considerations about online backups: what to use, and local vs. remote backup. In the following I will try to describe my own setup. It has been a work in progress, but right now I feel it meets the requirements of myself and the family.

Like any other modern family we have a number of computers running both Windows and OSX. When we moved into our house, I decided I did not want to run any cables anywhere, or rather my wife decided we did not want to do this, so all devices are connected using wireless to the internet and the NAS.

The main storage and backup device is a Drobo FS with 3 TB of storage. Data Robotics has their own concept of RAID, called BeyondRaid, and having three disks in the NAS enables it to protect against a single disk failure. The Mac is using the Drobo for backup via Time Machine. Apart from shared documents the main content of the Drobo is pictures and music, as the device also serves as storage for our Sonos music system. Whatever requirements the Windows machines have for backup, they are handled remotely.

For remote backup we recently moved from using JungleDisk and Amazon S3 to iDrive. For iDrive I have a Pro account that the Drobo is backed up to, and each other family member has their own Basic account. The professional account is $5/month for one PC and 150GB. They have a family pack for 5 PCs and 500GB, which is $15/month; I guess we will move there when the need arises. The main reason for moving away from Amazon was cost. It is dead cheap if you only have a small amount of data, but can get quite expensive if you have a lot, especially to extract the data if required.

We also use DropBox and each of us has an account. Personally I can't remember when I last used a USB stick. It is just great for "moving" files back and forth, e.g. between work and home, and also for file sharing.
Categories: Miscellaneous Tags:

## Playing with LaTeX

Just learned that the plug-in I use to format $\LaTeX$ has an option that lets you control the size of the formula when using displaystyle. Have to try it.

$\displaystyle{f = \frac{1}{2}a_0 + \sum_{n=1}^{\infty}\left\{a_n\cos nt + b_n \sin nt \right\}}$

with the coefficients

$\displaystyle{a_n = \frac{1}{\pi}\int_{-\pi}^\pi f(t) \cos nt \,\mathrm{d}t, \quad b_n = \frac{1}{\pi}\int_{-\pi}^\pi f(t) \sin nt \,\mathrm{d}t}$

Beautiful.

Categories: Writing Tags:

## Laying on those kilometers

I finally did it. It has been too long, and right now, I am sore as hell. I am talking about the distance of my run today. Ever since "losing" the front ligament in my left knee last December I have had a hard time running more than 10 km (the exception was the 3 x 5 km I did at the DFH relay back in August). Today I did 17. I am tired and sore, but in the muscles and not the knee(s), which is really, really good. The time was horrible (around 4:45/km on average), but it was good to be out more than an hour. I hope I can keep this up, with one long stretch, a shorter one and one or two days with intervals each week. 3 times every week would be nice, 4 times great, but going into the dark season I do not have much hope. Will try though.

Categories: Tags:

## Beautiful Mathematics

While reading an article on the General Unified Theory, I came across a reference to what the author considered beautiful mathematical equations. It got me thinking: what would I consider beautiful math?

I must confess right away that it has been many years since I have done anything remotely advanced in the area of mathematics, and not since my master's thesis on the mathematical properties of artificial neural networks – drop me a line if you are unable to fall asleep, and I will send you a copy – have I really done any heavy math. However, the seed was laid in my mind and during my run early this morning I began to consider if I could actually remember any mathematical equations, beautiful or not.

The first one that came to mind was Archimedes' Recurrence Formula. Being a diver, I have always had a soft spot for old Archi. Let $a_n$ and $b_n$ be the perimeters of the circumscribed and inscribed n-gon and $a_{2n}$ and $b_{2n}$ the perimeters of the circumscribed and inscribed 2n-gon. Then

$\displaystyle{a_{2n} = \frac{2a_nb_n}{a_n + b_n}, \quad b_{2n} = \sqrt{a_{2n}b_n}, \quad a_\infty = b_\infty}$

Of course the cool thing is that successive application gives the Archimedes algorithm, which can be used to provide successive approximations to $\pi$.

The next equation that came to my mind was Euler's Formula or Euler's Identity, which states that

$\displaystyle{e^{ix} = \cos x + i \sin x}$

where $i$ is the imaginary unit. The beautiful equation arises when setting $x = \pi$,

$\displaystyle{e^{i\pi} + 1 = 0}$

The term Mandelbrot set is used to refer both to a general class of fractal sets and to a particular instance of such a set. In general, a Mandelbrot set marks the set of points in the complex plane such that the corresponding Julia set is connected and not computable. "The" Mandelbrot set is the set obtained from the quadratic recurrence equation

$\displaystyle{z_{n+1} = z_n^2 + C, \quad z_0 = C}$

where points $C$ in the complex plane for which the orbit of $z_n$ does not tend to infinity are in the set. The image below displays the famous and well known Valley of the Sea Horses.

The next one that came to my mind was the Riemann Hypothesis.
The Riemann hypothesis is a deep mathematical conjecture which states that the non-trivial Riemann zeta function zeros, i.e., the values of $s$ other than $-2, -4, -6, \ldots$ such that $\zeta(s) = 0$ (where $\zeta(s)$ is the Riemann zeta function), all lie on the "critical line" $\sigma = \operatorname{Re}(s) = 1/2$ (where $\operatorname{Re}(s)$ denotes the real part of $s$). The Riemann zeta function $\zeta(s)$ is the function of a complex variable $s$ initially defined by the following infinite series:

$\displaystyle{\zeta(s) = \sum_{n=1}^\infty \frac{1}{n^s}}$

The Riemann hypothesis can be stated as:

$\displaystyle{\zeta(\alpha + i\beta) = 0, \; \beta \not= 0 \Rightarrow \alpha = \frac{1}{2}}$

One could go on and on, but I will finish off with another of my favorites, namely the Gaussian Integral. The Gaussian integral, also called the probability integral, is the integral of the one-dimensional Gaussian function over $(-\infty, \infty)$

$\displaystyle{\int_{-\infty}^\infty e^{-x^2}dx = \sqrt{\pi}}$

Categories: Writing Tags:
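As a small postscript to the Archimedes' Recurrence Formula mentioned earlier in this post, here is a tiny Python sketch (my addition, not from the original post) that iterates the recurrence for a circle of unit diameter; both perimeters converge to $\pi$:

```python
def archimedes_pi(iterations=20):
    """Archimedes' recurrence for a circle of unit diameter, starting from
    hexagons: a, b are the perimeters of the circumscribed and inscribed
    n-gons; both converge to pi as n doubles."""
    a = 2 * 3 ** 0.5   # circumscribed hexagon perimeter, 2*sqrt(3)
    b = 3.0            # inscribed hexagon perimeter
    for _ in range(iterations):
        a = 2 * a * b / (a + b)   # a_{2n} = 2 a_n b_n / (a_n + b_n)
        b = (a * b) ** 0.5        # b_{2n} = sqrt(a_{2n} * b_n)
    return a, b

print(archimedes_pi())   # both values approach 3.14159...
```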
http://www.fuzzylite.com/forums/topic/much-lighter-version-of-fuzzylite/
Reading/Writing FIS files using FuzzyLite

This topic contains 10 replies, has 3 voices, and was last updated by CF Kwong 2 years, 10 months ago.

#830 CF Kwong Participant

Hello again Juan,

I'm working on a fuzzy logic problem that requires me to integrate the original code into the Network Simulator 3, more commonly known as NS-3. Previously I've modified the original C code from MATLAB's Fuzzy Logic toolbox written by Roger Jang (i.e. "fismain.c"). The file "fis.c" is left unmodified, and it works perfectly. However, the problem is that NS-3 is a C++ platform, which requires a different coding style that I am not entirely familiar with. I just need 3 inputs to produce a single output using a Mamdani or Sugeno type fuzzy engine. Does FuzzyLite have a simpler version, just like "fis.c" and "fismain.c"?

CF Kwong

The modified code from "fismain.c" is as follows (declarations added here for readability; types as in "fis.c"):

    #include <fis.c>
    #include <stdio.h>

    FILE *FuzzyDataFile;
    DOUBLE **dataMatrix, **fisMatrix, **outputMatrix;
    FIS *fis;
    int data_row_n, data_col_n, fis_row_n, fis_col_n, i, j;
    double results;

    double SS = 0.5;
    double NC = 0.5;
    double BER = 0.5;

    /* Writing a data matrix file */
    FuzzyDataFile = fopen("Fuzzy_Logic.dat", "w+");
    fprintf(FuzzyDataFile, "%f %f %f\n", SS, NC, BER);
    fclose(FuzzyDataFile);

    dataMatrix = returnDataMatrix("Fuzzy_Logic.dat", &data_row_n, &data_col_n);

    /* Fuzzy Inference System (FIS) file */
    fisMatrix = returnFismatrix("Handover_Decision.fis", &fis_row_n, &fis_col_n);

    /* Build FIS data structure */
    fis = (FIS *)fisCalloc(1, sizeof(FIS));
    fisBuildFisNode(fis, fisMatrix, fis_col_n, MF_POINT_N);

    /* Error checking */
    if (data_col_n < fis->in_n) {
        PRINTF("Given FIS is a %d-input %d-output system.\n", fis->in_n, fis->out_n);
        PRINTF("Given data file does not have enough input entries.\n");
        fisFreeMatrix((void **)dataMatrix, data_row_n);
        fisFreeMatrix((void **)fisMatrix, fis_row_n);
        fisFreeFisNode(fis);
    }

    /* Create output matrix */
    outputMatrix = (DOUBLE **)fisCreateMatrix(data_row_n, fis->out_n, sizeof(DOUBLE));

    /* Evaluate FIS on each input vector */
    for (i = 0; i < data_row_n; i++)
        getFisOutput(dataMatrix[i], fis, outputMatrix[i]);

    /* Output vector */
    for (i = 0; i < data_row_n; i++) {
        for (j = 0; j < fis->out_n; j++) {
            results = outputMatrix[i][j]; /* the output after defuzzification */
        }
    }

    /* Clean up FIS memory */
    fisFreeFisNode(fis);
    fisFreeMatrix((void **)dataMatrix, data_row_n);
    fisFreeMatrix((void **)fisMatrix, fis_row_n);
    fisFreeMatrix((void **)outputMatrix, data_row_n);

#833 Keymaster

Hi,

I do not know what the goal is for fis.c or fismain.c. It seems to me that you want to have the data in FLD format stored in a two-dimensional matrix. You can do that if you parse the results obtained with the FldExporter. However, if you want custom input vectors, you can have two matrices, one for inputs and the other for outputs. Then, you can easily create both matrices yourself using fuzzylite. You do not need the fis.c or fismain.c to do it for you; roughly, you can do it as follows.

    for every input vector v:
        for every input variable i:
            i.setInputValue(v[i]);
        engine.process();
        for every output variable o:
            w[o] = o.defuzzify();

where v is a row of the input matrix and w is a row of the output matrix.

#841 CF Kwong Participant

Thank you, much appreciated. Here's another question: let's say I have a string of data stored inside, say, InputData.txt with the three-column format "SS BER QOS", where SS, BER and QOS are the crisp data inputs before the fuzzifier, and I want to feed those data into fuzzylite.
The output will be stored inside a file, say OutputData.txt. Can fuzzylite do this, or do I need to make some minor modifications to the code? I am using MATLAB-generated FIS data, i.e. HD.fis. Hope to hear from you soon. Cheers!

CF Kwong

#843 Keymaster

Hi, fuzzylite does not deal with files, and I think I will keep it that way for the foreseeable future. However, I will consider using custom vectors for the next version. You need to investigate how to read and write files from your computer using C++. I have done reading files using std::ifstream and std::getline. For every line read, you can use std::istringstream as a tokenizer that reads the line and gets each of the values in the line and stores them in a matrix. Afterwards:

1) traverse the matrix using the values to prepare the input variables (using InputVariable::setInputValue)
2) call method Engine::process
3) defuzzify the values of each output variable and store them in the matrix
4) write to file the matrix or part of the matrix

Also, I have changed the title of the post to better reflect its contents. Cheers.

#870 Keymaster

Hi CF, Another person has asked for the same fis.c and fismain.c file, but now I understand what you want to do! I will definitely add that feature to fuzzylite soonish and let you know. Cheers,

#872 Keymaster

What you require is not too hard and you do not need to build fuzzylite from source to achieve it. Roughly, in SomeFile.cpp:

    int main(int argc, char** argv) {
        fl::Engine* engine = FisImporter().fromString(fisEngine);
        for (std::size_t i = 0; i < inputs.size(); ++i) {
            for (std::size_t x = 0; x < inputs.at(i).size(); ++x) {
                engine->getInputVariable(x)->setInputValue(inputs.at(i).at(x));
            }
            engine->process();
            for (int o = 0; o < engine->numberOfOutputVariables(); ++o) {
                fl::scalar output = engine->getOutputVariable(o)->defuzzify();
                writeToSomeFile(output);
                writeToSomeFile(" ");
            }
            writeToSomeFile("\n");
        }
    }

You have to implement yourself the read and write functions and fix some details.

g++ SomeFile.cpp -I/path/to/fuzzylite/fuzzylite -L/path/to/fuzzylite/fuzzylite/bin -lfuzzylite

#1029 hacmachdien Participant

Hi Juan, I have noticed that you've included the "import from Fuzzy Inference System (FIS)" feature in the fuzzylite 4.0 GUI, but it seems that no explicit instruction on how to use the feature is available. Could you provide an example to clarify the matter?

#1030 Keymaster

Hi hacmachdien, If you have a fuzzy engine in "fis" format (check examples), then you can copy and paste the contents of the "fis" file into the import window. For example, if you copy and paste the following block into the import window, you will have a fuzzy engine ready to use.

    [System]
    Name='simple-dimmer'
    Type='mamdani'
    NumInputs=1
    NumOutputs=1
    NumRules=3
    AndMethod=''
    OrMethod=''
    ImpMethod='min'
    AggMethod='max'
    DefuzzMethod='centroid'

    [Input1]
    Enabled=1
    Name='Ambient'
    Range=[0.000 1.000]
    NumMFs=3
    MF1='DARK':'trimf',[0.000 0.250 0.500]
    MF2='MEDIUM':'trimf',[0.250 0.500 0.750]
    MF3='BRIGHT':'trimf',[0.500 0.750 1.000]

    [Output1]
    Enabled=1
    Name='Power'
    Range=[0.000 1.000]
    Default=nan
    LockValid=0
    LockRange=0
    NumMFs=3
    MF1='LOW':'trimf',[0.000 0.250 0.500]
    MF2='MEDIUM':'trimf',[0.250 0.500 0.750]
    MF3='HIGH':'trimf',[0.500 0.750 1.000]

    [Rules]
    1.000 , 3.000 (1.000) : 1
    2.000 , 2.000 (1.000) : 1
    3.000 , 1.000 (1.000) : 1

#1036 hacmachdien Participant

Thanks a lot, Juan. The reply was a great help to me. Just another curious notion about the fuzzylite 4.0 C++ code: is it easily applicable in hardware implementation?
I am using your fuzzy GUI and C++ code generator to create a real-time application in an FPGA environment. I guess it would take me tons of time to convert the C++ code to synthesizable fixed-point (if the conversion is even possible).

#1037 Keymaster

I am not sure about hardware implementation, but later this year I expect to export to pure C, which should be compatible with hardware implementation.

#1061 CF Kwong Participant

Hi Juan, Just saw your reply here. Many thanks for your reply. I will see if the code works in my situation or not. Cheers!

CF Kwong
http://mathhelpforum.com/algebra/21364-basic-porblem.html
1. ## Basic Problem

A basic but very annoying question: the series 5, 9, 14, 20 etc. What is the formula for the nth term?

2. Any ideas on this one? Two hints: the next number in the sequence is 27, and it might be worthwhile asking what the difference between the first and second term is, then between the second and third term (etc.). Let us know how you go.

3. ## Still problematic

Thanks. The difference between t1 and t2 is 4, and between t2 and t3 is 5, etc., increasing in increments of 1. I understand that, but for some reason I can't seem to find the equation for the nth term. It was a question on my sister's test and I just can't get it. Any ideas?

4. Right, so as n increases, we add more on. This is our first hint, that the n'th term will depend on n somehow. Let's look at $t_2$; you pointed out that it's 4 more than the first term, that is $t_2 = t_1 + 4.$ Now I want to work n into it somehow, since we noticed it depends on n. So maybe something like $t_2 = t_1 + 1 + 3.$ How's that?

5. Is this the equation you're thinking of: $a_n = a_1 r^{n-1}$, where $a_1$ = initial value, $r$ = common ratio for a geometric series?

6. Originally Posted by Pom

Basic but very annoying question: the series 5, 9, 14, 20 etc. What is the formula for the nth term?

First differences are 4, 5, 6, the second differences are 1, 1, 1, so this is a quadratic in $n$, so we put:

$a_n = u n^2 + v n + w$

$a_1=5$, so $u+v+w=5$
$a_2=9$, so $4u+2v+w=9$
$a_3=14$, so $9u+3v+w=14$

Now solve for $u,\ v,\ \mbox{ and } w$ to get the answer. Now I make the solution $u=1/2,\ v=5/2,\ w=2$, so:

$a_n=\frac{n^2+5n+4}{2}=\frac{(n+4)(n+1)}{2}$

RonL
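A quick numerical check of CaptainBlack's closed form (a small sketch; any calculator or CAS would do the same):

```python
def a(n):
    # (n^2 + 5n + 4) is always even, so integer division is exact.
    return (n * n + 5 * n + 4) // 2   # equivalently (n+4)(n+1)/2

print([a(n) for n in range(1, 6)])    # [5, 9, 14, 20, 27]
```

The output reproduces the given terms 5, 9, 14, 20 and the hinted next term 27.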
http://physics.stackexchange.com/questions/96569/double-connectivity-of-so3-group-manifold
# Double connectivity of $SO(3)$ group manifold

Is there any physical significance of the fact that the group manifold (parameter space) of $SO(3)$ is doubly connected?

EDIT 1: Let me clarify my question. It was too vague. There exist two equivalence classes of paths in the group manifold of SO(3), or in other words, $\pi_1(SO(3))=\mathbb{Z}_2$. This space is therefore doubly connected. There are paths which come back to initial configurations after a rotation of $2\pi$ and others after a rotation of $4\pi$, with proper parametrization of angles. Using this fact, is it possible to show that such a topology admits the existence of half-integer spins and integer spins?

I understand spinors as objects whose wavefunctions pick up a negative sign after a rotation of $2\pi$, and come back to themselves after a rotation of $4\pi$. Right? But from the topological argument given above, it is not clear to me how this leads to two kinds of wavefunctions, spinor-type $(j=\frac{1}{2},\frac{3}{2},\frac{5}{2}...)$ and tensor-type $(j=0,1,2,...)$. It is not explicitly clear how these two types of paths in the SO(3) group manifold lead to such transformation properties of "the wavefunctions".

- Didn't you just ask the question last week? – Isidore Seville Feb 3 '14 at 14:18
- @IsidoreSeville- No. – SRS Feb 3 '14 at 14:28
- Comment to the question (v1): You mean apart from the fact that the Lie group $SO(3)$ doesn't have even-dimensional (=half-integer spin) irreps? – Qmechanic Feb 3 '14 at 14:58
- @IsidoreSeville Roopam Sinha's excellent questions and their excellent answers have established that there are integer and half integer spins and nothing else, essentially and ultimately because a simply connected topological space has no nontrivial coverings (i.e. not homeomorphic to the original space) (see the proof in Massey, "Algebraic Topology", for example). So now he's seeking help to understand the spin statistics theorem, which is quite a distinct next step from that fact and this is not a theorem I feel (as a non QFT specialist) I understand well enough to answer. – WetSavannaAnimal aka Rod Vance Feb 5 '14 at 0:43

Just in view of the double universal covering provided by $SU(2)$, $SO(3)$ must be a quotient of $SU(2)$ with respect to a central discrete normal subgroup with two elements. This is a consequence of a general property of universal covering Lie groups:

If $\pi: \tilde{G} \to G$ is the universal covering Lie-group homomorphism, the kernel $H$ of $\pi$ is a discrete normal central subgroup of the universal covering $\tilde{G}$ of $G= \tilde{G}/H$, and $H$ is isomorphic to the fundamental group of $G$, i.e. $\pi_1(G)$ (which, for Lie groups, is Abelian).

One element of that subgroup must be $I$ (since a group includes the neutral element). The other, $J$, must satisfy $JJ=I$ and thus $J=J^{-1}= J^\dagger$. By direct inspection one sees that in $SU(2)$ this is only possible for $J= -I$. So $SO(3) = SU(2)/\{I,-I\}$.

Notice that $\{I,-I\} = \{e^{i4\pi \vec{n}\cdot \vec{\sigma}/2 }, e^{i2\pi \vec{n}\cdot \vec{\sigma}/2 }\}$ lies in the center of $SU(2)$, namely the elements of this subgroup commute with all of the elements of $SU(2)$. Moreover $\{I,-I\}=: \mathbb Z_2$ is just the first homotopy group of $SO(3)$, as it must be in view of the general statement quoted above.

A unitary representation of $SO(3)$ is also a representation of $SU(2)$ through the projection Lie group homomorphism $\pi: SU(2) \to SU(2)/\{I,-I\} = SO(3)$. So, studying unitary reps of $SU(2)$ covers the whole class of unitary reps of $SO(3)$.
Let us study those reps. Consider a unitary representation $U$ of $SU(2)$ in the Hilbert space $H$. The central subgroup $\{I,-I\}$ must be represented by $U(I)= I_H$ and $U(-I)= J_H$, but $J_HJ_H= I_H$, so, as before, $J_H= J_H^{-1}= J_H^\dagger$. As $J_H$ is unitary and self-adjoint simultaneously, its spectrum has to be included in $\mathbb R \cap \{\lambda \in \mathbb C \:|\: |\lambda|=1\}$. So (a) it is made of $\pm 1$ at most, and (b) the spectrum is a pure point spectrum and so only proper eigenspaces arise in its spectral decomposition.

If $-1$ is not present in the spectrum, the only eigenvalue is $1$ and thus $U(-I)= I_H$. If only the eigenvalue $-1$ is present, instead, $U(-I)= -I_H$. If the representation is irreducible, $\pm 1$ cannot simultaneously be eigenvalues. Otherwise $H$ would split into the orthogonal direct sum of eigenspaces $H_{+1}\oplus H_{-1}$. As $U(-I)=J_H$ commutes with all $U(g)$ (because $-I$ is in the center of $SU(2)$ and $U$ is a representation), $H_{+1}$ and $H_{-1}$ would be invariant subspaces for the whole representation, which is forbidden as $U$ is irreducible.

We conclude that, if $U$ is an irreducible unitary representation of $SU(2)$, the discrete normal subgroup $\{I,-I\}$ can only be represented by either $\{I_H\}$ or $\{I_H, -I_H\}$. Moreover:

Since $SO(3) = SU(2)/\{I,-I\}$, in the former case $U$ is also a representation of $SO(3)$. It means that $I = e^{i 4\pi \vec{n}\cdot \vec{\sigma}/2 }$ and $e^{i 2\pi \vec{n}\cdot \vec{\sigma}/2 } = -I$ are both transformed into $I_H$ by $U$.

In the latter case, instead, $U$ is not a true representation of $SO(3)$, just in view of a sign appearing after $2\pi$, because $e^{i 2\pi \vec{n}\cdot \vec{\sigma}/2 } = -I$ is transformed into $-I_H$ and only $I = e^{i 4\pi \vec{n}\cdot \vec{\sigma}/2 }$ is transformed into $I_H$ by $U$.

- @V.Moretti- Amazing. This answer is very insightful. But I still wonder where does the topology of the SO(3) group manifold (which I mentioned in the question) enter this business? It appears to me that the above answer uses the fact that there is a 2-to-1 homomorphism between SU(2) and SO(3). Or am I missing the connection? – SRS Feb 5 '14 at 11:03
- SO(3) is a Lie group with fundamental homotopy group $\mathbb Z_2$. Therefore its universal covering group is a simply connected Lie group, and $SO(3)$ is obtained by taking the quotient of the latter and a discrete normal subgroup with two elements. That two is the "topological" information: it remembers the structure of the first homotopy group of the manifold $SO(3)$. Knowing that the universal covering is $SU(2)$, one has to look for a discrete subgroup containing only two elements... – Valter Moretti Feb 5 '14 at 11:05
- @V.Moretti- Okay, I understand. Now it is absolutely clear. Thanks. – SRS Feb 5 '14 at 11:11
- I made precise the relevant topological relation in the text just expanding a comment. – Valter Moretti Feb 5 '14 at 11:35
- @V.Moretti I'm sure you meant to say this in your comment above but in general you have to look for a discrete normal subgroup containing only two elements. These by Schreier's theorem must then be subgroups of the centre. Only being pedantic because the OP's clearly very interested in the details, I wouldn't have mentioned it otherwise. – WetSavannaAnimal aka Rod Vance Feb 13 '14 at 10:31
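As a numerical aside (my addition, not part of the original answer; it assumes NumPy and SciPy and follows the sign convention used above), one can check the $2\pi$ versus $4\pi$ behavior directly on the $SU(2)$ elements $e^{i\theta\,\vec{n}\cdot\vec{\sigma}/2}$:

```python
import numpy as np
from scipy.linalg import expm

# Pauli matrices
sx = np.array([[0, 1], [1, 0]], dtype=complex)
sy = np.array([[0, -1j], [1j, 0]])
sz = np.array([[1, 0], [0, -1]], dtype=complex)

def su2_element(theta, n):
    """The SU(2) element exp(i*theta*(n.sigma)/2) for a unit axis n."""
    n = np.asarray(n, dtype=float)
    n_sigma = n[0] * sx + n[1] * sy + n[2] * sz
    return expm(1j * theta / 2 * n_sigma)

axis = [0, 0, 1]
print(np.round(su2_element(2 * np.pi, axis), 10))   # -I: the spinor sign flip
print(np.round(su2_element(4 * np.pi, axis), 10))   #  I: back to the identity
```

A rotation by $2\pi$ yields $-I$ while $4\pi$ yields $I$, which is exactly the two-element kernel $\{I,-I\}$ discussed in the answer.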
https://math.stackexchange.com/questions/631756/some-questions-about-deep-implications-of-g%C3%B6dels-completeness-theorem-if-any
# Some questions about “deep” implications of Gödel's Completeness Theorem (if any)

I'm trying to refresh my knowledge about mathematical logic and I'm still unsatisfied with my insight into Gödel's Completeness Theorem. I've studied Henkin's version and I think I've mastered it.

Some textbooks (e.g. Ian Chiswell & Wilfrid Hodges, Mathematical Logic (2007)) introduce Henkin's construction with sentential logic, I think in order to "introduce" the student to the construction by "testing" it in a simplified environment.

1) Does it make sense to try to adapt Gödel's original proof to the sentential calculus in order to understand the specific details necessary to prove it for f-o logic?

One of the peculiarities of Henkin's proof is that the "completeness" aspect (i.e. if $A$ is valid it is provable) is somewhat of a by-product of model existence. We have "extreme" cases, like Boolos & Burgess & Jeffrey, Computability and Logic (5th ed - 2007), where Model Existence gives Compactness in advance of the introduction of any proof systems, and the Completeness Theorem is "fairly missing". In this way, the non-constructive aspects of the theorem are "maximized".

Gödel's proof uses (natural) numbers. This is obvious (with hindsight) today, now that we know about Gödel's philosophical realism. Henkin's construction avoids numbers but uses the "syntactical stuff" to build the model. But this (according to my understanding) is not really different; in order to "run" the construction we need countably many symbols, and symbols are "abstract entities" (like numbers). I think that we really need them: we cannot replace them with "physical" tally marks.

So my question:

2) In what sense can we minimize the "ontological" import of the theorem?

In a previous effort I asked a distinguished scholar for some clarifications and I received this answer: "About the completeness proof, it is a theorem of orthodox mathematics, and does not pretend to be nominalistic."

I've not studied intuitionistic logic, but I know that there are semantics for it.

3) Are there "completeness" theorems (linking semantics to proof systems) for f-o intuitionistic logic? What about their non-constructive aspects (if any)? What is the opinion of constructivist mathematicians (not necessarily intuitionists) about Gödel's or Henkin's proof, and what about corresponding results (if any) for intuitionistic f-o logic?
https://www.riverpublishers.com/journal_read_html_article.php?j=JMM/15/1/1
## Journal of Mobile Multimedia

Vol: 15    Issue: Combined Issue 1 & 2    Published In: January 2019

### A Testing Framework for Multi-Sensor Mobile Applications

Article No: 1    Page: 1-28    doi: https://doi.org/10.13052/jmm1550-4646.15121

Darian Frajberg, Piero Fraternali, Rocio Nahime Torres, Carlo Bernaschina and Roman Fedorov

Dipartimento di Elettronica, Informazione e Bioingegneria, Politecnico di Milano, Piazza Leonardo da Vinci, 32, Milan, Italy

E-mail: darian.frajberg@polimi.it, piero.fraternali@polimi.it, rocionahime.torres@polimi.it, carlo.bernaschina@polimi.it, roman.fedorov@polimi.it

Received 26 November 2018; Accepted 11 April 2019; Publication 06 May 2019

## Abstract

Outdoor mobile applications are becoming popular in many fields, such as gaming, tourism and environment monitoring. They rely on the input of multiple, possibly noisy sensors, such as the camera, Global Positioning System (GPS), compass, accelerometer and gyroscope. Testing such applications requires the reproduction of the real conditions in which the application works, which are hard to recreate without automated support. This paper presents a capture & replay framework that automates the testing of mobile outdoor applications; the framework records in real-time data streams from multiple sensors acquired in field conditions, stores them, and lets developers replay recorded test sequences in lab conditions, also computing quality metrics that help trace soft errors.

Keywords: Mobile sensing, Regression testing, Mobile Augmented Reality, Context simulation.

## 1 Introduction

Outdoor mobile applications support the activity of users in field conditions, where the task at hand requires the processing of inputs from multiple sensors. Exemplary applications are maps (e.g., Google Maps1), touristic guides (e.g., mTrip2), activity tracking (e.g., Garmin Connect3) and augmented reality (e.g., Pokemon Go4). The distinctive characteristics of such applications are their dependency on multiple, heterogeneous, and often noisy sensors, in addition to the need to process sensor data streams in real-time to deliver a proper user experience. For example, a sightseeing Augmented Reality (AR) app, such as PeakLens5 or PeakAR6, must support mountaineers during their trips by processing data streams from the Global Positioning System (GPS), compass, accelerometer, gyroscope, and even from the camera, if the application supports the identification and overlay of objects of interest (e.g., mountain peaks) onto the camera view.

Testing an outdoor mobile application requires verifying its behavior, in terms of failures, soft errors, or performance, in working conditions. Such working conditions, also called context in [13], comprise the input values of all the sensors on which the application relies for its functioning. Building a test set that reproduces working conditions faithfully is challenging because most sensors are extremely noisy and their accuracy varies greatly [4], not only on different mobile devices, but also on the same device in different operating conditions (e.g., GPS positioning can be affected by meteorological conditions, compass orientation by the proximity of an electrical source).
Furthermore, it is also necessary to take into account the temporal correlation of multiple sensor data streams; for example, in an application that overlays information on the screen based on what the user is looking at, the usage context is composed of the sequence of positions from the GPS sensor, the sequence of orientations of the device from the compass sensor, the sequence of pitch and roll values of the device from the accelerometer and the gyroscope, and the sequence of view frames from the camera. Such sequences are correlated, because the content of the camera frame at a given time depends on the position, orientation, pitch and roll data. In particular, Mobile Augmented Reality (MAR) applications with sophisticated Computer Vision modules, such as [5–8], exploit many of the previously mentioned sensors and may benefit from a mechanism to assess their performance realistically.

The goal of testing can be the identification of hard errors, which cause the application to fail, the quantification of performance properties, or the verification of soft errors, i.e., the occurrence of bugs that do not cause the application to fail, but nonetheless degrade its behavior with respect to some desirable characteristic that impacts the user’s acceptance. Investigating soft errors requires defining the property to observe, formalizing quality metrics for its evaluation, extracting the values of the target property from application runs, and comparing the extracted values with some reference, which acts as a gold standard (i.e., a representation of what is ideal for the user).

Figure 1 portrays the process of testing a multi-sensor mobile application for soft errors. The Acquisition and correlation step collects input data from multiple data sources and manages their dependencies and temporal correlation; it outputs a test sequence, which is a temporal series of values, one per type of input. The Feeding step prepares the ground for executing a test run of the application on a test sequence: it encodes the test sequence in the format required by the execution environment and submits it for processing. The Execution step actually runs the test session, by executing the application with the test sequence as input; in the case of a mobile application, the execution can be performed on board the device or in a simulator. The Property extraction step observes an execution run to fetch the values of the property under examination. This can be normal termination, if the testing goal is to uncover failures; resource usage or execution time, if the testing goal is to analyze performance; or an application-dependent property, if soft errors are the target.

Figure 1 Testing process of a multi-sensor mobile application.

The Evaluation step concludes the process by reporting the outcome of execution runs. To assess performance and soft errors, the evaluation must characterize the (un)desired behavior by metrics. Such metrics can be the deviation of a directly observable variable from a target value (e.g., the response time exceeding a threshold) or may require comparing some output of the application with an example providing a quality bound (e.g., evaluating the error in tracking the user’s location during motion can be done by comparison with a correct sequence of positions; evaluating the misplacement of information over objects on the screen can be done by comparison with a correct sequence of 2D screen object coordinates).
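To fix the terminology of the five steps before discussing their challenges, the following sketch renders the process of Figure 1 as a set of Java interfaces. This is purely our own illustration: all type and method names are hypothetical and are not part of the framework described in the paper.

```java
import java.util.List;

// Hypothetical names that mirror the steps of Figure 1.
interface TestSequence {                    // output of Acquisition and correlation
    List<SensorSnapshot> snapshots();       // time-ordered, correlated readings
}

interface SensorSnapshot {
    long timestampMillis();                 // acquisition time recorded in the field
    Object valueOf(String sensorId);        // e.g. "gps", "compass", "camera"
}

interface Feeder {                          // Feeding step
    void submit(TestSequence sequence);     // encode and supply to the runtime
}

interface PropertyObserver<P> {             // Property extraction step
    List<P> observe();                      // e.g. on-screen peak positions per frame
}

interface Evaluator<P> {                    // Evaluation step
    double score(List<P> observed, List<P> goldStandard); // quality metric
}
```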
The implementation of the testing process of Figure 1 poses several challenges. The creation of test sequences in the Acquisition and correlation step must cope with the heterogeneity and dependencies of input data. Although model-driven data generators and databases of traces exist for several classes of sensors (e.g., network connectivity [9], position [10], motion [11]), the construction of multi-sensor test sequences by temporal sampling of independent data streams for the different sensors is inadequate for testing applications with interdependent input values and for evaluating properties for which sensor interdependency cannot be ignored; for example, assessing the presence of soft errors in the screen position of geo-referenced information during the user’s motion requires considering the interdependency between the camera content and the position, orientation, pitch and roll data. In such a situation, a multi-sensor data capture approach, enabling the simultaneous recording of sensor values in field conditions, may be the only viable solution to obtain realistic test sequences and correctly reproduce the usage context for testing purposes. However, the cost of building a multi-sensor data capture tool may be nearly equivalent to that of building the application itself.

The Feeding step must be able to supply the Execution step with the test sequence in a way that faithfully reflects the reading of sensors in field conditions. If execution is performed in the device, this requires interfacing the component that implements the Feeding step to the sensor management services of the mobile operating system; if execution is emulated, the challenge is ensuring that the emulator can be made to supply values to the application at the same rate that would be experienced in the real device [1].

The implementation of the Property extraction step distinguishes the case in which the observed property can be computed without access to the internal structure of the application and the case in which such access is required. Whereas failures and performance issues can be detected without access to the source code, soft errors, being application-dependent, may not be observable unless the source code is instrumented to export the application status from which the target property can be observed and the metrics computed.

Finally, the challenge of Evaluation is the encoding of the testing goal into a computable metric, whose evaluation may require the construction of a gold standard. The gold standard is a mapping between each element of the test sequence and the corresponding value of the observed property that represents a correct or user-acceptable output, given that context. The creation of the gold standard is typically a manual procedure, either because it involves human judgment (deciding what is acceptable) or because an algorithmic solution would have the same complexity as the system under test (and would potentially suffer from the same defects).

This paper extends the work presented in [12] by discussing the experience of building a testing framework for mobile multi-sensor applications, evaluated on a considerable number of sequences for a case study consisting of an outdoor Augmented Reality application for real-time mountain peak identification and on-screen labeling; the interface of such an application is shown in Figure 2. Figure 3 gives an example of a soft error in the positioning of the peaks.
For the same sensor input (position, orientation, pitch, roll and camera frame), a defect in the application configuration causes the peak labels to be placed with a horizontal offset with respect to the correct screen coordinates.

Figure 2 User interface of the multi-sensor mobile application under test (www.peaklens.com); the application uses multi-sensor input data to compute the position of visible peaks and label them. The testing goal is to uncover soft errors that cause peak labels to be misplaced.

Figure 3 A soft error that causes peak labels to be misplaced.

In the reported case study, the illustrated testing process and framework are characterized by the following aspects:

• A test sequence for the application consists of a multi-sensor temporal series, comprising the correlated values of the GPS, compass, accelerometer, gyroscope (if available) and camera sensors.

• Test sequence acquisition and correlation are performed by a Capture Module, implemented in the mobile device, which records test sequences in field conditions.

• The feeding of the test sequence is implemented by a Replay Module, which serves (replays) the elements of a test sequence, reproducing the temporal layout and data correlation captured in the field.

• Application execution can be performed both in the mobile device and on an emulator (in the case study, Android devices are used).

• The extracted property for evaluating soft errors is the mountain peak position; a peak position is defined as the pair of 2D screen coordinates of the camera frame at which the summit of a mountain appears and is used to label the peak (as visible in Figure 2). This property can be computed in two ways: by instrumenting the source code, or without instrumentation, by capturing and analyzing the application’s screen.

• Evaluation is performed by means of metrics that compare the peak positions extracted from the application with the “correct” peak positions. The metrics employ a gold standard data set created with a crowdsourcing system that lets crowd workers manually specify the position of visible peaks in a series of mountain images.

The contributions of the paper can be summarized as follows:

• We introduce the problem of multi-sensor mobile application testing, along the lines of previous works such as [13]; unlike prior works, which focused on the fidelity of replaying composite sensor sequences in emulated environments [1], on scalability of testing [3], or on the simulation of usage context at different levels [2], we concentrate on the specific scenario of assessing soft errors in multi-sensor mobile applications, exploiting context traces captured in the field.

• We introduce the architecture of a framework for testing multi-sensor mobile applications according to the process illustrated in Figure 1 and discuss the essential design decisions and rationale.

• We report the evaluation of the proposed testing framework in a case study addressing the evaluation of soft errors in a real-world Augmented Reality multi-sensor mobile application.

• We highlight the limitations of the realized framework and the possible paths to its generalization.
The paper is organized as follows: Section 2 surveys the related work on mobile application testing, context simulation, and multi-sensor mobile application testing; Section 3 presents the architecture and components of the proposed multi-sensor mobile testing framework; Section 4 reports on its application to a case study of a mountain peak identification mobile application; Section 5 discusses the framework limitations and how to generalize the proposed approach; finally, Section 6 draws the conclusions and provides an outlook on future work.

## 2 Related Work

Developing and testing applications that operate in complex working conditions has become a prominent research task, fueled by the widespread adoption of mobile applications that employ multiple sensors [13–15]. In the software engineering literature, the general conditions in which an application operates are abstracted into the concept of context [16, 17], defined as the information that characterizes any entity relevant to the interaction between the user and an application. Context-aware development has been specifically studied in the case of mobile applications [18], which provide a particularly rich notion of context that embraces the user’s state, the device capability, the sensed environment, and the network connectivity state.

Testing context-aware applications is a special sub-topic of context-aware software development, which recasts the classical methods of conventional application testing to the specific case in which the system under test requires the supply of context information. The recent work [1] focuses on the generation of context information for the purpose of testing mobile applications in an emulated environment. The authors model the context as a set of modalities, each of which corresponds to a facet of the contextual information, such as network connectivity, position, motion sensors, and camera. They illustrate the design of a tool, called ContextMonkey, which fetches data for each context modality from heterogeneous sources, builds an integrated context stream, and feeds such a stream to the emulation environment, where it is exploited for running a test session. ContextMonkey is evaluated primarily with respect to its capacity of supplying the context information to an application inside the emulator with fidelity, i.e., at the same rate as in the real working conditions. An interesting collateral finding of the assessment is that the synthetic, model-driven construction of multi-sensor context streams, evaluated in a mobility use case, could not fully reproduce the semantic complexity of the real context streams recorded in the field; this observation is one of the motivations of our capture-based approach. Our work shares with [1] the focus on multi-sensor application testing; however, differently from ContextMonkey, our focus is not the fidelity of the replay of context streams during emulation, but the use of multi-sensor usage traces recorded in the field for the discovery of soft errors.

The VanarSena tool [19] instruments the binary code of the application to perform testing in a way that achieves both coverage and speed. The tool runs in the cloud and lets developers upload the application binary code and run multiple test sessions in parallel to emulate user behavior, network conditions, and sensor data, returning a failure report.

The dynamic testing of (non-multi-sensor) mobile applications via controlled execution has also been pursued in a number of works. For example, Machiry et al.
[20] describe a system, called Dynodroid, whereby developers can observe, select, and execute Graphical User Interface (GUI) and system events in a mobile device emulator, so as to drive black-box analysis of test runs. Other related studies mostly focused on capture and replay at the level of GUI input-output events, without considering the specificity of mobile devices [21, 22]. Conversely, Gomez et al. [23] present an approach specifically conceived for mobile devices, in which they record and replay Android app usage traces by replicating GUI gestures and sensor readings. However, their tool cannot replay services such as the camera preview and GPS location, which are critical signals for sensor- and location-based applications. Our approach is similarly based on the observation of application runs, but focuses on capturing and replaying multi-sensor data; it could be extended with system and GUI event capture, as in [20] and [23], to create test sessions that span all categories of input events: sensor, UI and system.

The use of a capture and replay approach for testing mobile applications is reported in [24]; the authors present a tool for the dynamic analysis of executions, the debugging of deployed applications, and regression testing. A relevant finding is that the effectiveness of regression testing highly depends on how well the tool reproduces the way the program is used in the field. The accomplishment of such an objective in a multi-sensor mobile application requires a non-trivial capture and replay architecture, which is a main contribution of our work.

## 3 Architecture of the Testing Framework

In this section, we illustrate the architecture of the framework that implements the testing process of Figure 1. In the next section, we discuss its application to the testing of PeakLens [7], a multi-sensor Augmented Reality mobile application that identifies mountain peaks and overlays them in real-time on the camera view.

### 3.1 Framework Architecture

The proposed architecture relies on a capture and replay framework, which enables the collection of correlated multi-sensor traces in field conditions and produces test sequences that can be used for the controlled execution of the system under test both in the mobile device and in an emulator. Figure 4 shows the general system organization.

The Capture Module executes in the mobile device and orchestrates the acquisition of multiple sensor data streams. It interfaces to the sensor Application Programming Interface (API) of the mobile device: the Data Acquisition Manager sub-module handles the parallel execution of the data acquisition threads, one per sensor, and the buffering of the sensed values. The Synchronization Manager is responsible for the temporal alignment of the sensor readings. One sensor is registered as the master, and its callback determines the synchronous reading of the other ones from the buffers. This approach takes into account the fact that the camera sensor is normally the bottleneck in sensor data acquisition. If the camera sensor is registered as the master, the acquisition of each camera frame triggers the reading of the remaining sensor values from the buffers at the time of the callback. The Storage Manager formats the multi-sensor readings in the form of a test sequence, encoded in JavaScript Object Notation (JSON) format and archived on the local storage of the device.
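To make the master-sensor policy concrete, here is a minimal sketch of how a camera-driven capture could be organized: listener threads keep per-sensor buffers up to date, and the camera callback snapshots all of them, so each frame is stored together with temporally aligned sensor values. Class names and the buffering strategy are our own illustration, not the paper's actual implementation.

```java
import java.util.Map;
import java.util.concurrent.ConcurrentHashMap;

// Listener threads write the latest reading of each sensor into a buffer;
// the camera (master) callback snapshots all buffers, pairing every frame
// with the sensor values valid at the time of the callback.
class CaptureBuffer {
    private final Map<String, float[]> latest = new ConcurrentHashMap<>();

    void onSensorReading(String sensorId, float[] values) {
        latest.put(sensorId, values.clone());   // one acquisition thread per sensor
    }

    // Invoked from the camera preview callback (the master sensor).
    CapturedRecord onCameraFrame(byte[] frame, long timestampMillis) {
        Map<String, float[]> snapshot = new ConcurrentHashMap<>(latest);
        return new CapturedRecord(frame, timestampMillis, snapshot);
    }
}

class CapturedRecord {
    final byte[] frame;
    final long timestampMillis;
    final Map<String, float[]> sensorValues;

    CapturedRecord(byte[] frame, long ts, Map<String, float[]> values) {
        this.frame = frame;
        this.timestampMillis = ts;
        this.sensorValues = values;
    }
}
```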
It is worth mentioning that the Capture Module is executed independently of, and not in parallel with, the Application: running them together would lower the performance of both.

Figure 4 Architecture of the capture and replay framework.

The Replay Module can be executed in the mobile device or on a workstation, in conjunction with a mobile emulator. It comprises a Sequence Manager, which de-serializes an input test sequence into main memory, and a Sequence Streamer, which feeds the sensor data to the (emulated) sensor API of the execution environment. The Sequence Streamer runs in a single thread and handles the feeding of multiple sensor values; it synchronizes on the timestamp of the master (i.e., the slowest) sensor: it fetches the next master sensor reading, gets the correlated values of the other sensors, and submits them to the execution environment. The submission rate of the Sequence Streamer is dictated by the acquisition timestamps recorded in the field by the Capture Module. To reproduce the context as faithfully as possible, the Sequence Streamer replays the sensor data series as it is, i.e., without checking the ready status of the application. This mimics the fact that in slow devices the processing rate of sensor values (typically the rate at which camera frames can be analyzed) may be lower than the acquisition rate; this causes the loss of some sensor readings in live conditions, a situation that must be reproduced also in the testing session.

As usual in context-based approaches that include the reproduction of sensed values, the testing environment must support the replacement of the real sensor APIs with mocked-up interfaces that can serve predetermined data. In the case of emulated execution, the emulator makes the supply of archived sensor data transparent to the application. Conversely, execution in the mobile device requires the installation of a sensor emulation library, which exposes its own interface. Therefore, the execution within the testing environment requires an alternate version of the application, in which the native sensor API calls are replaced with calls to the emulated APIs.

Figure 4 also shows the components for extracting the properties necessary for the assessment of soft errors from the test sessions (Logger and Property Observer). These modules are application-specific and are discussed in the next section.
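The pacing behavior of the Sequence Streamer can be sketched as a single-threaded loop that submits each record when the wall clock reaches its recorded offset, without waiting for the application. This is a simplified illustration under our own naming assumptions (it reuses the CapturedRecord sketch above); the real Sequence Streamer is more elaborate.

```java
import java.util.List;

// Simplified pacing loop: each record is submitted at the offset at which it
// was captured, so a slow application drops readings exactly as in the field.
class SequenceStreamer {
    void replay(List<CapturedRecord> sequence, SensorSink sink)
            throws InterruptedException {
        if (sequence.isEmpty()) return;
        long firstCapture = sequence.get(0).timestampMillis;
        long replayStart = System.currentTimeMillis();
        for (CapturedRecord record : sequence) {
            long due = replayStart + (record.timestampMillis - firstCapture);
            long wait = due - System.currentTimeMillis();
            if (wait > 0) Thread.sleep(wait);   // honor the recorded temporal layout
            sink.submit(record);                // push to the (emulated) sensor APIs
        }
    }
}

interface SensorSink {
    void submit(CapturedRecord record);
}
```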
### 3.2 Implementation

The architecture of Figure 4 was implemented in Java and supports the testing of Android applications (version 4.0 Ice Cream Sandwich and above). The Capture Module has been interfaced with the following sensor APIs: the Android Camera API7, the Google Location Services8, and the Android Sensor API9. The latter provides callbacks for different sensors, including the gyroscope, accelerometer, and compass; such readings can be combined to obtain the resulting rotation matrix and orientation vector.

The Storage Manager serializes sensor data into test sequences represented in the JSON format illustrated by the following fragment:

    {"imageName": "20170430_115643_b52b96d9_1.jpg",
     "rotation": 1,
     "sensorAccuracy": 3,
     "orVector": "[-2.373061, -0.20468707, 3.1223032]",
     "rotMatrix": "[0.15508807, 0.71608853, 0.6805881, 0.0, 0.13268146, -0.6978047, 0.70394254, 0.0, 0.97896814, -0.018886, -0.20326078, 0.0, 0.0, 0.0, 0.0, 1.0]",
     "timestamp": 1493546203647}

The values of the camera sensor (frames) are stored externally as files, so that the captured frames can be reused more easily for other purposes (e.g., to build the gold standard data set, see Section 4).

The Sequence Streamer of the Replay Module can be interfaced with the Android Studio Emulator. For execution in the testing environment, the alternate version of the application under test must replace the calls to the Android native APIs with calls to the corresponding emulation library APIs. The addition of another sensor API requires the following steps (a sketch of steps 1 and 2 follows the list):

1) the implementation of a SensorEventListener class that listens for changes in the sensor, computes the values, and notifies this event to the application;
2) the registration of the new SensorEventListener with the Data Acquisition Manager of the Capture Module;
3) the addition of the sensor value representation to the JSON format of the test sequence;
4) the implementation of a sensor play-out class and its registration in the Sequence Streamer of the Replay Module.
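As an example of steps 1 and 2, the listener below uses the standard Android Sensor API calls SensorManager.getRotationMatrix and SensorManager.getOrientation to derive the rotation matrix and orientation vector stored in the test sequence. The hand-off to the Data Acquisition Manager (commented out) is hypothetical, standing in for the framework's actual registration mechanism.

```java
import android.hardware.Sensor;
import android.hardware.SensorEvent;
import android.hardware.SensorEventListener;
import android.hardware.SensorManager;

// Sketch of steps 1-2: fuse accelerometer and magnetometer readings into the
// rotation matrix and orientation vector serialized in the JSON fragment above.
class OrientationListener implements SensorEventListener {
    private final float[] gravity = new float[3];
    private final float[] geomagnetic = new float[3];
    private final float[] rotMatrix = new float[16];  // 4x4, as in the JSON fragment
    private final float[] orVector = new float[3];

    @Override
    public void onSensorChanged(SensorEvent event) {
        if (event.sensor.getType() == Sensor.TYPE_ACCELEROMETER) {
            System.arraycopy(event.values, 0, gravity, 0, 3);
        } else if (event.sensor.getType() == Sensor.TYPE_MAGNETIC_FIELD) {
            System.arraycopy(event.values, 0, geomagnetic, 0, 3);
        }
        if (SensorManager.getRotationMatrix(rotMatrix, null, gravity, geomagnetic)) {
            SensorManager.getOrientation(rotMatrix, orVector);
            // Step 2 (hypothetical): hand the correlated values to the capture buffers.
            // dataAcquisitionManager.onSensorReading("orientation", orVector);
        }
    }

    @Override
    public void onAccuracyChanged(Sensor sensor, int accuracy) {
        // accuracy is recorded as "sensorAccuracy" in the test sequence
    }
}
```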
## 4 Evaluation in the Case Study

The testing framework described in Section 3 has been applied to the PeakLens mobile application [7], whose interface is shown in Figure 2. The application is designed to work in outdoor conditions; it acquires the user’s location, the orientation of the device from the compass sensor, the motion of the device from the gyroscope and accelerometer, and the current view from the camera frames. It analyzes the incoming camera frames and, for each frame, detects the mountain skyline, marks the peaks visible on the skyline with an icon, and labels each identified peak with relevant metadata (name, altitude and/or distance from the viewer).

The skyline extraction relies on a computer vision component that analyzes the camera frame and detects the pixels in the image that correspond to the skyline (Figure 5 shows an example of the skyline detection). The peak identification and labeling function exploits a Digital Elevation Model (DEM) of the Earth and a repository of peak metadata, and matches the mountain summits of the DEM to the skyline peaks extracted from the camera frame, to compute the correct 2D screen coordinates of the visible peaks. The matching procedure is the core of the application: the user’s location, the device orientation values, and the camera field of view are exploited to generate a bidimensional virtual panorama from the DEM point cloud (Figure 6 shows an example of the virtual panorama generated from the DEM). Then, the DEM and the peak metadata repository are queried to determine the list of visible peaks, given the position and orientation of the device; hidden peaks masked by the terrain configuration are excluded; the artificial skyline from the virtual panorama (shown in Figure 6) is aligned with the skyline extracted from the frame (shown in Figure 5), and the visible peaks are projected from the 3D space to the 2D space, obtaining the screen coordinates.

Based on the 2D coordinates, the visible peaks are ranked by a visual relevance criterion, which is applied in the cases in which more peaks are visible than can be displayed on the device's small screen. Finally, a GUI component selects the peaks to show, based on the ranking and the size of the screen, and overlays the peak positions and metadata, producing the visualization shown in Figure 2. The motion sensors are used to trigger the recomputation of the 2D peak positions when the user moves the device. For offline usage, the DEM and the peak metadata repository have been segmented and compressed, and can be downloaded and queried in the mobile device in the absence of Internet connectivity.

Figure 5 The skyline extracted by the computer vision module from the frame of Figure 2.

Figure 6 The virtual panorama computed from the DEM, queried with the same location and orientation of the device that produced the screen image of Figure 2.

The essential factor that impacts the quality of the user's experience is the accuracy of labeling the peaks framed by the camera. As Figure 3 shows, an error in the computation of the screen coordinates of one or more peaks deeply compromises the utility of the application, as clearly revealed by the users’ reviews. In the ideal situation, the application must be able to precisely identify the screen coordinates of the mountain summits that appear in the framed scene and visualize the metadata in the correct places. Soft errors in the computation of the coordinates of peaks can be revealed by comparing the screens produced by the application with a sequence of artificial screens created by a user who manually labels images, as explained in Section 4.1. Such a set of manually annotated images constitutes a gold standard, which can be used to compute the value of metrics that quantify the quality of the user’s experience, as explained in Section 4.2.

### 4.1 Test Sequence Acquisition and Gold Standard Creation

The gold standard for assessing the application is defined as a sequence of camera frames, in which each frame portrays an outdoor scene with a mountain skyline and is associated with the set of 2D screen coordinates of (some of) the visible peaks on the skyline. An effective way to build such a gold standard sequence is to employ the same Capture Module that is used to record the multi-sensor test sequences; from such a sequence it is possible to extract the individual camera frames and manually annotate them with the 2D coordinates of visible peaks. In the case study, the Capture Module has been employed by a panel of beta testers to gather sequences in diverse mountainous areas around the world.

To support the manual annotation of the peak coordinates in the camera frames, the crowdsourcing Web application (called Peak Annotator) shown in Figure 7 has been created. This Web interface allows a crowd worker to upload a new sequence of multiple frames or to annotate an already existing one. When a sequence is opened, its first frame is displayed, as shown in Figure 7. To accelerate the work and allow non-experts to annotate frames as well, a suggestion about which peaks should be visible is computed and displayed.
To this end, the DEM is queried with the position and orientation extracted from the test sequence, and a virtual panorama with the candidate visible peaks is displayed in the background of the current frame, as illustrated in Figure 7; the worker can simply drag and drop the suggested peak icons to position them in the correct place in the camera frame.

Since the default size of a captured sequence is 500 frames, the manual creation of the gold sequences with the Peak Annotator may be a labor-intensive task. To speed up the process, the Peak Annotator contains a Next button, which shows the successive frame in the sequence with the peaks already pre-positioned on the skyline. The pre-positioning of peaks is performed by exploiting the screen coordinates of the preceding frame and applying a correction based on the projection of the current sensor orientation (a sketch of this correction is given at the end of this subsection). Since the frames in the sequence are recorded at a high rate (typically close to 30 per second) and device movements during the capture are slow and continuous, such a simple peak pre-positioning procedure is extremely effective in placing peaks by default; with this technique, the number of drag and drop interactions needed to annotate a full sequence is dramatically reduced: fewer than 10% of the peak positions need to be corrected after the first frame has been annotated. As a further aid to evaluate the quality of the gold standard, the Peak Annotator contains a Play button, whereby the user can play back the annotated sequence.

Figure 7 Crowdsourcing user interface for manually annotating the positions of peaks in a sequence of frames. The user can: 1) drag candidate peaks suggested by the system into the correct position (shown in green); 2) mark candidate peaks as not visible in the frame (shown in red).

In the case study, 56 sequences have been captured and manually annotated, each comprising from 100 to 500 frames. They were taken with different devices, under very diverse conditions and locations around the world, and comprise some extremely challenging scenarios. The annotation time of a sequence ranges from less than 5 minutes to around 25 minutes, the longest time being necessary for 500-frame sequences with a lot of fast and irregular device motion during the capture. Afterwards, a cross-validation task was performed to verify the correctness of the annotations, preserving as a result 50 correctly annotated sequences.

Furthermore, it is worth mentioning that applying the presented testing framework to other use cases would require instantiating a customized tool for the construction of the corresponding gold standard. The gold standard definition highly depends on the underlying problem and cannot be completely abstracted and generalized. Nonetheless, the components of the interface developed for PeakLens can be taken as a baseline and adapted for other applications without considerable effort.
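The pre-positioning correction can be sketched as shifting the previous frame's peak coordinates by the change in device azimuth, converted from degrees to pixels through the camera's horizontal field of view. This mirrors the degrees-per-pixel factor f/w used by the angular error metric of Section 4.2; the actual correction applied by the Peak Annotator may differ, so the sketch below is only a plausible reconstruction.

```java
// Hypothetical sketch of peak pre-positioning between consecutive frames:
// shift the previous frame's x coordinate by the azimuth change, in pixels.
class PeakPrePositioner {
    private final float fovDegrees;   // horizontal field of view f (degrees)
    private final int widthPixels;    // frame width w (pixels)

    PeakPrePositioner(float fovDegrees, int widthPixels) {
        this.fovDegrees = fovDegrees;
        this.widthPixels = widthPixels;
    }

    // previousX: screen x of a peak in the previous frame;
    // azimuth values in degrees, from the orientation vector of each frame.
    float propose(float previousX, float previousAzimuth, float currentAzimuth) {
        float pixelsPerDegree = widthPixels / fovDegrees;
        // Rotating the device to the right moves on-screen objects to the left.
        float deltaX = (previousAzimuth - currentAzimuth) * pixelsPerDegree;
        return previousX + deltaX;    // the annotator only corrects residual error
    }
}
```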
### 4.2 Defects and Output Quality Metrics

In complex multi-sensor outdoor applications, the success of the application depends primarily on non-functional features, such as the accuracy of the outputs, while other functions, such as the user interface, storage and network connectivity management, are comparatively simpler to implement and converge to stability more easily. In the case study, the following metrics have been defined to quantify the defects in peak positioning that may lower the accuracy of the application. Most of them are rather generic, and they may suit other use cases focused on the augmentation of elements of interest other than mountain peaks.

The Accuracy metric measures the fraction of peaks correctly handled, which takes into consideration both peaks visible in the gold standard that are projected in the frame and peaks not visible in the gold standard that are not projected in the frame. The sequence accuracy is the average over its frames.

The Precision indicator measures the fraction of peaks positioned in a certain frame of a sequence that are relevant (i.e., that appear also in the same frame of the gold sequence). The overall sequence precision is the average of the precision of every frame. It measures the quantity of false peak positions generated by the application.

The Recall metric measures the fraction of peaks present in the frame of the gold sequence that appear also in the corresponding frame of the tested sequence. It evaluates the erroneous omission of peaks from a frame in which they should appear. The sequence recall is the average over its frames.

The Average Angular Error (AAE) metric quantifies the positioning errors of all the peaks with respect to their position in the gold sequence. Given a frame, for each visible peak $i = 1,\ldots,n$, let $(x_i, y_i)$ be the on-screen coordinates computed by the application under test, and $(\hat{x}_i, \hat{y}_i)$ the coordinates stored in the gold sequence. The angular error in the position of the $i$-th peak is defined as

$\epsilon_i = \sqrt{d_x(\hat{x}_i, x_i)^2 + d_y(\hat{y}_i, y_i)^2},$

where $d_x(\hat{x}, x) = \min\left(360 - \frac{f}{w}|\hat{x} - x|,\ \frac{f}{w}|\hat{x} - x|\right)$ is the angular distance (in degrees) between the tested and gold coordinates along the azimuth axis, given the circular symmetry; $f$ is the horizontal Field Of View (in degrees) of the camera and $w$ is the width (in pixels) of the image. The definition of the angular distance $d_y(\hat{y}, y)$ along the roll axis is similar. The angular error of a whole sequence is defined as the average error over all its frames.

Finally, the Perceived Quality (PQ) metric measures the percentage of the frames of a sequence that are “good enough”. This indicator can be regarded as the fraction of the entire sequence time during which the user experience was satisfactory. The definition of “good” is based on the other metrics: a frame is good if its average angular error is lower, and its peak precision and recall are higher, than given thresholds. In the case study, after several experiments, the thresholds have been fixed at 3°, 0.75 and 0.75 for the three indicators, respectively. In general, Perceived Quality is the most representative metric at first sight, because it summarizes all the other ones. However, low values of the other indicators may be effective in directing the search for a defect.

Frames that do not contain annotated peaks were not considered for the evaluation; otherwise, metrics such as the Average Angular Error would be computed as 0 for them, which would skew the metric computation and make it less realistic.
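A minimal sketch of the per-frame AAE and of the "good frame" predicate used by Perceived Quality follows, written directly from the definitions above; the thresholds are those reported for the case study (3°, 0.75, 0.75), and all class and field names are our own.

```java
// Per-frame Average Angular Error and the "good frame" predicate of the
// Perceived Quality metric, following the definitions in Section 4.2.
class FrameMetrics {
    final float fovXDeg, fovYDeg;     // horizontal/vertical field of view (degrees)
    final int widthPx, heightPx;      // frame size w, h (pixels)

    FrameMetrics(float fovXDeg, float fovYDeg, int widthPx, int heightPx) {
        this.fovXDeg = fovXDeg; this.fovYDeg = fovYDeg;
        this.widthPx = widthPx; this.heightPx = heightPx;
    }

    // Angular distance along the azimuth axis, with circular wrap-around.
    double dx(double goldX, double testX) {
        double deg = fovXDeg / widthPx * Math.abs(goldX - testX);
        return Math.min(360 - deg, deg);
    }

    // Defined "similarly" along the roll axis; the wrap-around term never
    // triggers for realistic vertical fields of view.
    double dy(double goldY, double testY) {
        double deg = fovYDeg / heightPx * Math.abs(goldY - testY);
        return Math.min(360 - deg, deg);
    }

    // AAE of one frame: mean angular error over its n visible peaks,
    // given gold[i] = {x̂_i, ŷ_i} and test[i] = {x_i, y_i}.
    double averageAngularError(double[][] gold, double[][] test) {
        double sum = 0;
        for (int i = 0; i < gold.length; i++) {
            double ex = dx(gold[i][0], test[i][0]);
            double ey = dy(gold[i][1], test[i][1]);
            sum += Math.sqrt(ex * ex + ey * ey);
        }
        return sum / gold.length;
    }

    // PQ counts the frames passing the case-study thresholds: 3 deg, 0.75, 0.75.
    boolean isGoodFrame(double aae, double precision, double recall) {
        return aae < 3.0 && precision > 0.75 && recall > 0.75;
    }
}
```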
### 4.3 Evaluation Testing Results

The testing consists of applying the framework described in Section 3 to the gold sequences built as explained in Section 4.1, in order to evaluate the application quality. The detection of low values of the indicators signals the emergence of defects, and the worsening of a value after a software update highlights potential regression errors. Note that regression errors are particularly relevant, because the computer vision module at the base of the peak positioning methods contains various complex heuristics and a machine learning submodule, which can be retrained with new data to try to achieve better accuracy, and is configured with multiple parameters, which trade accuracy against memory footprint and execution speed. Often a software update aiming at one objective may harm a conflicting one.

We discuss the evaluation of PeakLens for 50 gold sequences and 3 application releases, which has been executed on a Google Pixel device. Furthermore, taking into account that a replay is non-deterministic and that intensive usage of a phone may affect its performance, we opted for executing the replays with small pauses programmed in between. The summarized evaluation results are displayed in Table 1, while the complete evaluation results are reported in Table A.1 in Appendix A. Due to particularly high sensor noise detected at the beginning of the sequences, the first 25 frames of each of them were not considered for the evaluation.

The first application release (SENSOR) represents our baseline: it does not include any intelligent computer vision module analyzing the frames captured by the camera, but just projects the peaks based on the orientation sensor values of the device. The resulting mean and median values of the Average Angular Error and Perceived Quality are rather poor (high angular error, low Perceived Quality), which would probably imply a non-satisfactory user experience.

The second application release displayed in the tables, RELEASE A, already features the sophisticated computer vision module, which significantly improves the performance of the application, achieving an increase in all the overall metrics. Nonetheless, there are a few sequences in which the baseline SENSOR version had a more acceptable performance. By inspecting such sequences, we were able to determine that many of these cases are due to flat terrains and uncertain alignments between the frame skyline and the terrain. Such cases could be improved by introducing specific heuristics able to detect them and to fall back to using just the orientation sensors.

Table 1 Summarized evaluation testing results. Columns are grouped by release (SENSOR, RELEASE A, RELEASE B); Acc = Accuracy (%), Prec = Precision (%), Rec = Recall (%), AAE = Average Angular Error (°), PQ = Perceived Quality (%).

|  | Acc | Prec | Rec | AAE | PQ | Acc | Prec | Rec | AAE | PQ | Acc | Prec | Rec | AAE | PQ |
| --- | --- | --- | --- | --- | --- | --- | --- | --- | --- | --- | --- | --- | --- | --- | --- |
| Mean | 90,53 | 81,27 | 85,07 | 10,65 | 21,90 | 92,79 | 84,41 | 88,80 | 6,97 | 55,65 | 93,48 | 86,12 | 88,82 | 5,48 | 58,43 |
| Median | 94,14 | 88,83 | 94,75 | 6,00 | 0,55 | 95,54 | 93,06 | 97,78 | 2,15 | 71,17 | 96,65 | 94,09 | 97,88 | 1,95 | 72,75 |

Finally, the last group of columns in the tables refers to RELEASE B, a version that introduced some modifications to the machine learning submodule that detects the mountain skyline in the frames, followed by a different post-processing step that is subsequently aligned with respect to the terrain. Overall, the testing framework gave effective feedback on the new version; the Perceived Quality was not negatively affected with respect to the previous version, and therefore the performance of the release was considered acceptable. It is worth mentioning that the batch replay evaluations can be immediately aborted, to save time, upon detecting the emergence of obvious defects in the first iterations.
In the past, we experienced such situations when dealing with bugs due to scale factor issues, incorrect vertical offset projections, and the manifestation of diverse problems with the computer vision module. PQ decreased strongly in such cases, with a noticeable increase in angular error and a loss of both precision and recall. Sequence replay permitted us to locate the wrongly positioned peaks and to remove the defect. The overall results obtained by the testing framework are significantly informative, but so can be the visual inspection of the simulations, in order to identify and correct specific bugs that may appear under specific scenarios or conditions.

## 5 Discussion

In this section, we discuss the limits to the generalization of the proposed framework by covering the most challenging issues to tackle, and we also assess the fidelity of the multi-sensor context simulation by experimenting with a set of mobile devices.

### 5.1 Limits to Generalization

The testing framework illustrated in Section 3 has been implemented both with the general aim of supporting multi-sensor application testing and with the specific objective of putting it to work in the development and maintenance of a specific application. Retrospectively, the resulting architecture exhibits dependencies on the mobile operating system, on the emulation environment, and on the application under test.

The Capture Module of Figure 4 is the most general component, depending only on the native sensor APIs of the Operating System. It can be extended to new types of sensors (e.g., temperature) simply by following the steps presented in Section 3.2. The temporal correlation of the multiple sensor streams is achieved by synchronizing on a master sensor. This policy is normally applied to synchronize on the slowest sensor; however, it is also possible to elect any sensor as the master and synchronize the other streams on its callbacks; for example, one may define the GPS position sensor as the master and read from the other sensors only when an update of the location occurs.

The Replay Module has a dependency on the virtual sensor APIs, both in the emulated and in-device execution environments. The Sequence Streamer is coupled to the sensor emulation libraries.

### 5.2 Limits to Fidelity

As in the work of Rege et al. [1], we have evaluated the limits in the reconstruction of realistic working conditions during the testing process. Also in our case study, the most significant challenge is the faithful reproduction of the functioning of the camera sensor, which is the most difficult to simulate. Since the camera sensor is the slowest one, on which the readings of the remaining sensors depend, a prominent goal of the testing process is to achieve a frame replay rate in the testing framework as close as possible to the one observed during the execution of the application in field conditions. This entails that the Capture and Replay modules should be able to acquire frames and replay them at the same rate as the real application.

Table 2 reports the camera frame acquisition rates observed in a set of mobile devices, chosen to cover a representative range going from medium-end models (LG G5) to high-end models (Google Pixel). The results may vary depending on the adopted frame size; the size considered in the case study is 640x480 pixels, which is the one normally used for the image analysis in mobile devices.
In general, the Capture module achieves a slightly higher frame rate than the camera preview in the normal execution of the application. This is due to the fact that the application execution requires more system resources for running the computer vision algorithms. Conversely, the Replay module achieves a lower frame rate. The reason is that camera frames are read from secondary storage, which is slower than accessing the frames from the camera sensor. Despite these differences, the Replay module executed in the mobile device still represents a sufficiently good approximation of the real functioning of the application.

The execution in the Android Emulator with the default configuration parameters yielded an extremely low frame rate for the camera preview (6 fps at maximum). The Android Emulator can be configured to exploit hardware acceleration using the Graphics Processing Unit (GPU) of the host workstation, thus achieving a frame rate close to the one observed in field conditions. However, in a data-intensive application such as the one presented in the case study, such acceleration alters the computation power of the emulated device, and thus results in a far less realistic simulation. To the best of our knowledge, Android Emulators are still unable to replicate realistically the performance of both the camera sensors and the processors of real devices, which makes them not yet ideal for testing multi-sensor mobile applications with real-time data processing requirements over camera frame data. For this reason, we did not proceed with the implementation of the Replay module on top of the Emulator environment.

Table 2 Comparison of the frame processing rates in the application, in the Capture module, and in the Replay module executed in the mobile device

| Phone Model | App (fps) | Capture (fps) | Replay (fps) |
| --- | --- | --- | --- |
| Google Pixel | 30 | 30 | 27 |
| Motorola Nexus 6 | 28 | 29 | 22 |
| LG Nexus 5x | 27 | 28 | 20 |
| LG G5 SE | 24 | 30 | 17 |

## 6 Conclusions

We have presented a capture and replay framework for the testing of mobile applications that depend on multiple correlated sensor streams. We have reported a case study in which the framework has supported the development of an outdoor mountain peak identification mobile application, where the input stream is heterogeneous and contains noisy sensor data, and the output is the sequence of 2D coordinates of relevant objects in the camera frames.

Future work will concentrate on generalizing the framework by instantiating it for other use cases; on incorporating the capture and replay of GUI events as well, so as to achieve the automated testing of application usage sequences including the user’s gestures; on the integration of the framework with cloud-enabled mobile execution services, such as Amazon AWS Mobile Farm12; on experimentation with mobile emulation platforms to achieve a more realistic reproduction of field conditions; and on the construction of a web version of the testing framework whereby developers could execute the entire testing process completely online.

## Appendix A
Table A.1 Complete evaluation testing results. Columns are grouped by release (SENSOR, RELEASE A, RELEASE B); Acc = Accuracy (%), Prec = Precision (%), Rec = Recall (%), AAE = Average Angular Error (°), PQ = Perceived Quality (%).

| # | Acc | Prec | Rec | AAE | PQ | Acc | Prec | Rec | AAE | PQ | Acc | Prec | Rec | AAE | PQ |
| --- | --- | --- | --- | --- | --- | --- | --- | --- | --- | --- | --- | --- | --- | --- | --- |
| 1 | 96,50 | 100,00 | 87,16 | 15,12 | 0,00 | 99,62 | 99,68 | 99,02 | 2,06 | 77,05 | 96,67 | 99,26 | 88,77 | 6,50 | 32,00 |
| 2 | 86,09 | 80,62 | 73,30 | 14,60 | 0,00 | 97,53 | 95,56 | 98,58 | 2,13 | 81,18 | 96,93 | 94,31 | 97,42 | 2,18 | 81,18 |
| 3 | 89,46 | 88,14 | 74,89 | 15,39 | 0,00 | 97,52 | 98,11 | 93,95 | 1,09 | 99,79 | 98,87 | 98,67 | 97,79 | 1,42 | 96,63 |
| 4 | 88,62 | 84,26 | 73,00 | 9,28 | 10,65 | 85,79 | 80,99 | 72,52 | 10,07 | 12,11 | 80,87 | 74,09 | 59,81 | 8,76 | 18,64 |
| 5 | 91,68 | 92,70 | 87,56 | 8,68 | 0,00 | 97,87 | 98,45 | 96,60 | 1,34 | 98,52 | 97,45 | 97,33 | 96,66 | 1,47 | 95,57 |
| 6 | 96,33 | 91,95 | 97,05 | 3,70 | 56,84 | 96,44 | 90,91 | 98,02 | 2,45 | 78,32 | 96,67 | 93,07 | 96,02 | 2,60 | 81,68 |
| 7 | 94,89 | 87,99 | 97,82 | 4,08 | 27,22 | 97,11 | 91,88 | 99,76 | 1,20 | 87,50 | 97,64 | 93,92 | 99,76 | 1,14 | 89,44 |
| 8 | 97,80 | 95,93 | 96,24 | 2,81 | 53,47 | 99,33 | 98,34 | 99,19 | 1,25 | 96,21 | 99,27 | 98,19 | 99,20 | 1,31 | 97,05 |
| 9 | 94,98 | 88,93 | 92,00 | 7,85 | 15,58 | 97,65 | 92,58 | 99,32 | 1,04 | 100,00 | 99,53 | 98,96 | 99,39 | 1,18 | 96,84 |
| 10 | 98,56 | 97,25 | 94,86 | 4,94 | 0,68 | 99,22 | 97,71 | 98,19 | 2,09 | 82,25 | 99,57 | 99,01 | 98,84 | 1,60 | 98,63 |
| 11 | 100,00 | 100,00 | 100,00 | 1,92 | 100,00 | 93,75 | 100,00 | 50,00 | 0,51 | 0,00 | 94,19 | 100,00 | 53,51 | 0,71 | 7,02 |
| 12 | 46,29 | 6,39 | 6,65 | 59,09 | 0,00 | 46,93 | 10,25 | 12,34 | 58,38 | 0,00 | 56,69 | 24,68 | 25,00 | 39,94 | 0,00 |
| 13 | 66,78 | 31,53 | 29,71 | 42,82 | 0,00 | 62,04 | 19,48 | 19,68 | 47,33 | 0,00 | 64,47 | 26,44 | 26,16 | 45,13 | 0,00 |
| 14 | 94,54 | 68,22 | 96,88 | 6,09 | 28,97 | 93,95 | 66,24 | 92,60 | 6,33 | 25,70 | 94,22 | 67,33 | 92,99 | 6,87 | 25,70 |
| 15 | 70,69 | 67,30 | 68,26 | 20,64 | 0,00 | 90,73 | 90,36 | 90,35 | 6,38 | 67,94 | 95,87 | 92,58 | 99,16 | 1,72 | 86,12 |
| 16 | 94,51 | 86,66 | 100,00 | 2,14 | 93,97 | 93,46 | 84,31 | 100,00 | 2,04 | 96,12 | 93,46 | 84,74 | 99,25 | 1,59 | 99,14 |
| 17 | 96,62 | 100,00 | 91,54 | 6,62 | 0,00 | 94,75 | 100,00 | 85,31 | 9,25 | 0,00 | 97,36 | 100,00 | 93,40 | 7,16 | 7,42 |
| 18 | 98,13 | 100,00 | 94,76 | 3,02 | 55,86 | 99,06 | 98,07 | 99,31 | 1,91 | 96,55 | 99,26 | 98,07 | 99,86 | 1,68 | 97,93 |
| 19 | 92,90 | 81,02 | 94,83 | 5,82 | 0,42 | 95,92 | 86,19 | 99,74 | 1,34 | 82,95 | 95,56 | 85,45 | 99,38 | 1,39 | 80,63 |
| 20 | 89,32 | 70,20 | 94,25 | 5,69 | 0,31 | 95,13 | 81,73 | 99,03 | 1,56 | 70,22 | 93,77 | 79,20 | 97,60 | 3,12 | 52,98 |
| 21 | 97,91 | 95,30 | 97,64 | 3,45 | 36,42 | 98,79 | 97,16 | 98,96 | 1,43 | 98,95 | 98,72 | 97,14 | 98,79 | 1,58 | 97,89 |
| 22 | 94,20 | 92,74 | 88,33 | 7,66 | 0,00 | 97,49 | 93,55 | 98,58 | 1,73 | 73,63 | 97,57 | 94,27 | 98,10 | 1,92 | 69,62 |
| 23 | 99,52 | 96,89 | 100,00 | 1,60 | 84,86 | 99,63 | 99,59 | 97,97 | 4,21 | 42,70 | 98,73 | 98,78 | 92,97 | 5,54 | 39,73 |
| 24 | 96,77 | 88,73 | 99,92 | 2,48 | 89,95 | 99,11 | 96,87 | 99,92 | 2,39 | 80,90 | 96,63 | 91,98 | 94,29 | 2,08 | 93,47 |
| 25 | 97,25 | 81,16 | 98,37 | 2,94 | 46,29 | 97,21 | 80,69 | 100,00 | 5,41 | 29,97 | 97,61 | 82,49 | 99,70 | 5,57 | 35,31 |
| 26 | 97,27 | 96,52 | 88,90 | 6,14 | 0,00 | 97,56 | 96,10 | 90,22 | 8,28 | 18,39 | 91,17 | 90,99 | 48,32 | 4,33 | 5,04 |
| 27 | 94,41 | 86,50 | 93,19 | 3,08 | 37,37 | 92,96 | 89,69 | 87,15 | 9,82 | 18,79 | 81,07 | 85,30 | 47,97 | 8,32 | 0,00 |
| 28 | 99,65 | 99,95 | 98,92 | 2,11 | 100,00 | 98,27 | 97,76 | 95,99 | 1,25 | 100,00 | 99,15 | 98,55 | 98,49 | 1,13 | 100,00 |
| 29 | 76,34 | 57,30 | 56,42 | 22,09 | 0,00 | 77,91 | 60,38 | 58,46 | 20,55 | 0,00 | 94,25 | 83,82 | 98,53 | 1,50 | 98,04 |
| 30 | 93,61 | 70,42 | 96,51 | 5,45 | 9,42 | 94,47 | 69,55 | 97,03 | 1,93 | 23,04 | 93,61 | 68,85 | 94,42 | 4,45 | 2,62 |
| 31 | 95,46 | 92,82 | 95,76 | 4,48 | 29,18 | 97,64 | 95,79 | 98,64 | 1,51 | 82,56 | 96,53 | 95,20 | 96,56 | 1,50 | 76,16 |
| 32 | 91,37 | 77,24 | 94,74 | 5,32 | 24,63 | 90,48 | 74,82 | 93,22 | 5,21 | 37,68 | 90,48 | 75,20 | 93,73 | 5,59 | 34,11 |
| 33 | 93,04 | 100,00 | 76,03 | 3,88 | 45,88 | 95,15 | 100,00 | 83,53 | 2,17 | 76,47 | 93,87 | 94,85 | 86,52 | 2,97 | 75,88 |
| 34 | 93,75 | 66,67 | 100,00 | 5,97 | 0,00 | 93,75 | 66,67 | 100,00 | 9,35 | 0,00 | 99,68 | 98,31 | 100,00 | 11,77 | 0,00 |
| 35 | 94,07 | 72,08 | 100,00 | 6,40 | 0,00 | 94,52 | 76,06 | 97,59 | 0,79 | 23,10 | 95,23 | 77,50 | 100,00 | 1,48 | 28,93 |
| 36 | 92,87 | 100,00 | 78,60 | 23,82 | 0,00 | 100,00 | 100,00 | 100,00 | 1,66 | 99,56 | 100,00 | 100,00 | 100,00 | 1,14 | 100,00 |
| 37 | 94,07 | 82,14 | 99,83 | 2,44 | 66,84 | 92,69 | 80,70 | 94,64 | 1,52 | 64,29 | 93,28 | 81,63 | 95,58 | 1,45 | 64,80 |
| 38 | 86,54 | 50,48 | 100,96 | 2,20 | 0,00 | 86,95 | 51,44 | 100,96 | 6,03 | 0,00 | 86,54 | 50,48 | 100,96 | 5,39 | 0,00 |
| 39 | 96,36 | 94,36 | 95,08 | 4,21 | 6,37 | 98,32 | 97,73 | 97,46 | 2,48 | 85,14 | 96,83 | 97,26 | 93,06 | 1,63 | 93,84 |
| 40 | 96,36 | 89,68 | 96,09 | 4,94 | 2,95 | 99,43 | 98,77 | 99,00 | 1,33 | 97,26 | 99,19 | 98,21 | 98,74 | 1,36 | 96,63 |
| 41 | 85,50 | 63,54 | 88,60 | 11,39 | 0,00 | 90,24 | 69,33 | 99,58 | 1,42 | 21,47 | 91,21 | 73,58 | 99,72 | 1,83 | 26,11 |
| 42 | 85,99 | 100,00 | 60,53 | 17,95 | 0,00 | 92,60 | 99,58 | 82,07 | 8,02 | 50,74 | 99,61 | 99,09 | 99,72 | 1,54 | 96,21 |
| 43 | 93,09 | 89,89 | 83,47 | 11,48 | 0,00 | 98,32 | 98,00 | 96,95 | 2,66 | 79,58 | 99,19 | 98,95 | 98,63 | 1,34 | 93,89 |
| 44 | 98,35 | 96,12 | 96,42 | 6,03 | 0,00 | 99,38 | 98,02 | 99,25 | 1,84 | 95,58 | 98,76 | 94,30 | 99,49 | 1,55 | 88,21 |
| 45 | 92,06 | 89,40 | 77,16 | 19,98 | 0,00 | 94,48 | 89,65 | 88,53 | 16,93 | 0,00 | 93,05 | 90,14 | 80,74 | 19,83 | 0,00 |
| 46 | 75,58 | 52,08 | 74,39 | 17,05 | 0,00 | 92,91 | 79,07 | 100,00 | 3,70 | 66,74 | 93,16 | 80,41 | 99,37 | 2,62 | 68,63 |
| 47 | 57,05 | 2,24 | 2,24 | 59,49 | 0,00 | 59,29 | 12,18 | 12,18 | 42,21 | 0,00 | 69,55 | 31,41 | 31,41 | 16,79 | 23,72 |
| 48 | 76,42 | 60,14 | 69,65 | 18,92 | 1,68 | 82,18 | 71,05 | 76,81 | 17,85 | 17,89 | 83,75 | 72,46 | 79,09 | 15,80 | 17,26 |
| 49 | 98,46 | 100,00 | 97,06 | 6,66 | 0,00 | 98,91 | 99,57 | 98,08 | 2,94 | 72,12 | 98,91 | 99,79 | 98,40 | 1,99 | 81,09 |
| 50 | 98,37 | 100,00 | 98,16 | 2,34 | 69,05 | 94,95 | 99,86 | 93,51 | 2,04 | 73,68 | 98,21 | 99,74 | 97,96 | 2,41 | 69,47 |
| Mean | 90,53 | 81,27 | 85,07 | 10,65 | 21,90 | 92,79 | 84,41 | 88,80 | 6,97 | 55,65 | 93,48 | 86,12 | 88,82 | 5,48 | 58,43 |
| Median | 94,14 | 88,83 | 94,75 | 6,00 | 0,55 | 95,54 | 93,06 | 97,78 | 2,15 | 71,17 | 96,65 | 94,09 | 97,88 | 1,95 | 72,75 |

## References

[1] Manoj R. Rege, Vlado Handziski, and Adam Wolisz. Realistic context generation for mobile app testing and performance evaluation. In Pervasive Computing and Communications (PerCom), 2017 IEEE International Conference on, pages 297–308. IEEE, 2017.

[2] Vaninha Vieira, Konstantin Holl, and Michael Hassel. A context simulator as testing support for mobile apps. In Roger L. Wainwright, Juan Manuel Corchado, Alessio Bechini, and Jiman Hong, editors, Proceedings of the 30th Annual ACM Symposium on Applied Computing, Salamanca, Spain, April 13-17, 2015, pages 535–541. ACM, 2015.

[3] Chieh-Jan Mike Liang, Nicholas D. Lane, Niels Brouwers, Li Zhang, Börje Karlsson, Hao Liu, Yan Liu, Jun Tang, Xiang Shan, Ranveer Chandra, and Feng Zhao. Caiipa: Automated large-scale mobile app testing through contextual fuzzing. In Sung-Ju Lee, Ashutosh Sabharwal, and Prasun Sinha, editors, The 20th Annual International Conference on Mobile Computing and Networking, MobiCom’14, Maui, HI, USA, September 7-11, 2014, pages 519–530. ACM, 2014.

[4] Jeffrey R. Blum, Daniel G. Greencorn, and Jeremy R. Cooperstock. Smartphone sensor reliability for augmented reality applications. In International Conference on Mobile and Ubiquitous Systems: Computing, Networking, and Services, pages 127–138. Springer, 2012.

[5] Gustavo Magalhães Moura and Rodrigo Luis De Souza Da Silva. Analysis and evaluation of feature detection and tracking techniques using OpenCV with focus on markerless augmented reality applications. J. Mobile Multimedia, 12(3&4):291–302, 2017.

[6] Carlos G. R. Santos, Tiago Araújo, Paulo R. Chagas, Nelson Neto, and Bianchi S. Meiguins. Recognizing and exploring azulejos on historic buildings’ facades by combining computer vision and geolocation in mobile augmented reality applications. Journal of Mobile Multimedia, 13(1-2):57–74, 2017.

[7] Roman Fedorov, Darian Frajberg, and Piero Fraternali. A framework for outdoor mobile augmented reality and its application to mountain peak detection. In International Conference on Augmented Reality, Virtual Reality and Computer Graphics, pages 281–301. Springer, 2016.

[8] Giovanni Taverriti, Stefano Lombini, Lorenzo Seidenari, Marco Bertini, and Alberto Del Bimbo. Real-time wearable computer vision system for improved museum experience. In Proceedings of the 2016 ACM on Multimedia Conference, pages 703–704. ACM, 2016.
[9] Roberta Calegari, Mirco Musolesi, Franco Raimondi, and Cecilia Mascolo. CTG: A connectivity trace generator for testing the performance of opportunistic mobile systems. In Proceedings of the 6th Joint Meeting of the European Software Engineering Conference and the ACM SIGSOFT Symposium on the Foundations of Software Engineering, pages 415–424. ACM, 2007.

[10] Arvind Thiagarajan, Lenin Ravindranath, Katrina LaCurts, Samuel Madden, Hari Balakrishnan, Sivan Toledo, and Jakob Eriksson. VTrack: Accurate, energy-aware road traffic delay estimation using mobile phones. In Proceedings of the 7th ACM Conference on Embedded Networked Sensor Systems, pages 85–98. ACM, 2009.

[11] Nils Aschenbruck, Raphael Ernst, Elmar Gerhards-Padilla, and Matthias Schwamborn. BonnMotion: A mobility scenario generation and analysis tool. In Proceedings of the 3rd International ICST Conference on Simulation Tools and Techniques, SIMUTools ’10, pages 51:1–51:10, Brussels, Belgium, 2010. ICST (Institute for Computer Sciences, Social-Informatics and Telecommunications Engineering).

[12] Carlo Bernaschina, Roman Fedorov, Darian Frajberg, and Piero Fraternali. A framework for regression testing of outdoor mobile applications. In Proceedings of the 4th International Conference on Mobile Software Engineering and Systems, pages 179–181. IEEE Press, 2017.

[13] Henry Muccini, Antonio Di Francesco, and Patrizio Esposito. Software testing of mobile applications: Challenges and future research directions. In Proceedings of the 7th International Workshop on Automation of Software Test, pages 29–35. IEEE Press, 2012.

[14] Xing Su, Hanghang Tong, and Ping Ji. Activity recognition with smartphone sensors. Tsinghua Science and Technology, 19(3):235–249, 2014.

[15] Ana Javornik. Augmented reality: Research agenda for studying the impact of its media characteristics on consumer behaviour. Journal of Retailing and Consumer Services, 30:252–261, 2016.

[16] Gregory D. Abowd, Anind K. Dey, Peter J. Brown, Nigel Davies, Mark Smith, and Pete Steggles. Towards a better understanding of context and context-awareness. In Hans-Werner Gellersen, editor, Handheld and Ubiquitous Computing, First International Symposium, HUC’99, Karlsruhe, Germany, September 27-29, 1999, Proceedings, volume 1707 of Lecture Notes in Computer Science, pages 304–307. Springer, 1999.

[17] Claudio Bettini, Oliver Brdiczka, Karen Henricksen, Jadwiga Indulska, Daniela Nicklas, Anand Ranganathan, and Daniele Riboni. A survey of context modelling and reasoning techniques. Pervasive and Mobile Computing, 6(2):161–180, 2010.

[18] Guanling Chen and David Kotz. A survey of context-aware mobile computing research. Technical report, Hanover, NH, USA, 2000.

[19] Lenin Ravindranath, Suman Nath, Jitendra Padhye, and Hari Balakrishnan. Automatic and scalable fault detection for mobile applications. In Proceedings of the 12th Annual International Conference on Mobile Systems, Applications, and Services, MobiSys ’14, pages 190–203, New York, NY, USA, 2014. ACM.

[20] Aravind Machiry, Rohan Tahiliani, and Mayur Naik. Dynodroid: An input generation system for Android apps. In Proceedings of the 2013 9th Joint Meeting on Foundations of Software Engineering, ESEC/FSE 2013, pages 224–234, New York, NY, USA, 2013. ACM.

[21] Lee J. White. Regression testing of GUI event interactions. In Software Maintenance 1996, Proceedings., International Conference on, pages 350–358. IEEE, 1996.

[22] Omar El Ariss, Dianxiang Xu, Santosh Dandey, Brad Vender, Phil McClean, and Brian Slator.
A systematic capture and replay strategy for testing complex GUI-based Java applications. In Information Technology: New Generations (ITNG), 2010 Seventh International Conference on, pages 1038–1043. IEEE, 2010.

[23] Lorenzo Gomez, Iulian Neamtiu, Tanzirul Azim, and Todd Millstein. RERAN: Timing- and touch-sensitive record and replay for Android. In 2013 35th International Conference on Software Engineering (ICSE), pages 72–81. IEEE, 2013.

[24] Shrinivas Joshi and Alessandro Orso. SCARPE: A technique and tool for selective capture and replay of program executions. In 23rd IEEE International Conference on Software Maintenance (ICSM 2007), October 2-5, 2007, Paris, France, pages 234–243. IEEE, 2007.

## Biographies

Darian Frajberg is a Ph.D. Candidate in Information Technology at DEIB, Politecnico di Milano, Italy. His main research interests concern Artificial Intelligence, Computer Vision and Augmented Reality, with a focus on the creation and evaluation of novel entertainment solutions for outdoor Augmented Reality applications efficiently executable on low-power mobile devices.

Piero Fraternali is full professor of Web Technologies at DEIB, Politecnico di Milano, Italy. His main research interests concern software engineering, methodologies and tools for Web and mobile application development, with a focus on code generation from software models, multimedia content processing, and augmented reality mobile applications.

Rocio Nahime Torres is a Ph.D. Candidate in Information Technology at DEIB, Politecnico di Milano, Italy. Her main research interests concern Artificial Intelligence, Computer Vision and Volunteer Geographic Information Systems, with a focus on novel methods for open source geographic data enrichment.

Carlo Bernaschina is a Ph.D. Candidate in Web and Mobile Technologies at DEIB, Politecnico di Milano, Italy. His main research interests concern software engineering, methodologies and tools for Mobile and IoT application development, with a focus on code generation from software models and agile methodologies.

Roman Fedorov received the Ph.D. (cum laude) degree in Information Technology from Politecnico di Milano, Italy, in 2017 at DEIB. His research interests are in the areas of collective intelligence extraction from user-generated content and social data mining and analysis.

2 http://www.mtrip.com
3 http://connect.garmin.com
4 http://www.pokemongo.com
5 http://www.peaklens.com
6 http://peakar.salzburgresearch.at
7 https://developer.android.com/guide/topics/media/camera.html [accessed 10 April 2019]
2020-09-18 22:50:32
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 4, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 1, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.4717296361923218, "perplexity": 1208.0298530865464}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2020-40/segments/1600400189264.5/warc/CC-MAIN-20200918221856-20200919011856-00498.warc.gz"}
http://stackoverflow.com/users/239657/beni-cherniavsky-paskin?tab=activity&sort=comments
- **8m** comment on *Implementing graphs in python: are lists of connected nodes preferable over dictionaries?*: Style tip: the function docstring should be inside the function. That way it's accessible as `make_link.__doc__`, used among others by `help(make_link)`.
- **Jan 10** comment on *How to find the sum of an array of numbers*: Note: most answers here essentially compute `a[0] + a[1] + ...`, which can turn into string concatenation if the array has non-number elements. E.g. `['foo', 42].reduce((a,b)=>a+b, 0) === "0foo42"`.
- **Jan 10** comment on *Sum of values in an array using jQuery*: Wasn't supported by IE8. It will also raise an error if the array is empty; better to use `.reduce(function(a,b){ return a+b; }, 0)`, which safely returns 0 on an empty array.
- **Jan 10** comment on *How to find the sum of an array of numbers*: Correctness purposes. I see now the OP's question technically is about a fixed array `[1, 2, 3, 4]`, but I assume that's just an example; the interesting question is "how do I sum any array of numbers that I got from an unrelated source, e.g. as a parameter to my function". Empty arrays can easily appear from user input, searches, filtering etc. and most of the time don't need special casing — the sum of `[]` is a perfectly well-defined question, whose answer is 0. Even when you know a particular array can't be empty, saving the 3 characters `, 0` is not worth the potential future bug...
- **Jan 10** comment on *How to find the sum of an array of numbers*: jQuery's `$.each()` and friends were a win (if you're OK with the dependency, of course) before JS arrays gained the builtin `.reduce()`, `.forEach()` etc. Nowadays the builtin `reduce` is clearly the one idiomatic way to write it; if you still want to support IE8, which lacks `.reduce` (and don't want a polyfill), I'd say fall back to a `for` loop.
- **Jan 5** comment on *How to find the sum of an array of numbers*: While clever, I'd find code declaring `sum` outside the loop much more readable.
- **Jan 5** comment on *How to find the sum of an array of numbers*: "If the array is empty and no initialValue was provided, TypeError would be thrown" — so it's safer to write `[1, 2, 3].reduce((a,b) => a+b, 0)`.
- **Dec 19** comment on *regex for triple quote*: Note that weak repetition (`*?`) is reliable for matching just a triple-quoted string but gets risky if you build a bigger pattern from it. E.g. `("""[\s\S]*?""")` does NOT mean one triple-quoted string in parentheses — it will also match `("""1st string""" whatever... """2nd string""")`.
- **Dec 9** comment on *Why people continue to use "text/css"?*: @DannyGibas that only talks about the Content-Type sent by the server, nothing about the `type` attribute on the tag (and in the attack vector mentioned there, the containing site is the malicious one, so checking that wouldn't help).
- **Dec 9** comment on *Which is the difference between srcdoc="…" and src="data:text/html,…" in an `<iframe>`?*
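The recurring advice in these comments condenses to one small pattern. Here is a brief sketch of it (my own illustration, not from the original page; the array values are made up):

```typescript
// Summing an array of numbers: always pass 0 as the initial value.
const nums: number[] = [1, 2, 3, 4];

// With the initial value, the sum of an empty array is 0 (no TypeError),
// and the accumulator starts out as a number, avoiding string concatenation.
const total = nums.reduce((acc, x) => acc + x, 0); // 10

const empty: number[] = [];
const safe = empty.reduce((acc, x) => acc + x, 0); // 0, no error

// Without the initial value, the failure modes from the comments appear:
//   [].reduce((a, b) => a + b)                     -> throws TypeError
//   (['foo', 42] as any[]).reduce((a, b) => a + b, 0) === '0foo42'
```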
2016-02-09 22:57:16
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 1, "mathjax_asciimath": 1, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.33221855759620667, "perplexity": 2214.3562703254265}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2016-07/segments/1454701158130.6/warc/CC-MAIN-20160205193918-00294-ip-10-236-182-209.ec2.internal.warc.gz"}
http://instagrauto.com/liv7s/0e6de7-cmb-redshift-z-%3D-1100
These notes concern the cosmic microwave background (CMB): photons from the last scattering surface at a redshift of z ≈ 1100, when the Universe recombined. The CMB is a "photo" of the Universe when it was about 300,000–370,000 years old, and a map of the CMB is basically a map of the z = 1100 last scattering surface, which appears on the sky as a spherical surface at that redshift. The fluctuations imprinted on it record how the primordial density field at z ≈ 1100 evolved to form all of the structure in the universe, from superclusters on down; any successful theory of dark matter, whether it invokes particles or alternative theories of gravity, must properly explain how the density field at z ≈ 1100 evolves into the one at z ≈ 0.

Before recombination, the universe consisted of an ionized plasma of mostly hydrogen. The important characteristic of this plasma is that photons were scattered so much through collisions with charged particles that the universe was opaque. As the universe expanded and cooled to a temperature near 3000 K, it became energetically favourable for the free electrons to bind to protons (hydrogen nuclei) and form neutral hydrogen atoms. This is called recombination, although the electrons and protons had never actually been joined before, so it should really be called "combination". The equilibrium ionization fraction is described by the Saha equation; defining recombination as the moment the ionization fraction falls to Xe = 0.1 gives a recombination temperature TR ≈ 0.3 eV. The physically important parameter affecting recombination is the density of protons and electrons compared to photons, which is set by the baryon asymmetry: the excess of baryons over antibaryons in the early universe. The time of recombination is the last time that photons collide with matter; afterwards the universe is transparent, filled with a background of freely propagating photons with a blackbody distribution of frequencies, and a CMB photon travels until it finally collides with an object, which might be an astronomer's detector. The existence of this relic radiation, discovered at Bell Labs in the 1960s, was decisive evidence for the Big Bang model of cosmology over the Steady State model.

Since last scattering, the expansion of the universe has stretched the photon wavelengths: the CMB has gone from high-energy photons at roughly 3000 K to low-energy microwave photons today, the blackbody spectrum becoming "less peaky" and shifting toward longer wavelengths. The observed CMB is well described by a blackbody with present temperature T = 2.7255 K. If you remember the CMB redshift z = 1100 and the present-day CMB temperature, you can multiply to get the temperature back when the photons began their journey. (Question: if recombination occurred at T = 3000 K, at what redshift did it happen?) For an easy example, at z = 999 the temperature was 1000 × 2.7255 K ≈ 2726 K. The background temperature is also epoch-dependent: astronomers living 6.5 billion years ago would have measured a hotter cosmic background, about 6 K, and at z = 1 the universe's volume was 1/8 of its present volume. A rough history of hydrogen (adapted from A. Loeb, 2006, Scientific American, 295) runs from recombination at z ≈ 1100, through the first luminous sources at z ~ 20–30, to the end of reionization at z ~ 6; the measured optical depth requires reionization at z ≈ 11. For comparison, UDFy-38135539 is a galaxy with a high redshift of z = 8.555; the greatest redshift known corresponds to the CMB at z ≈ 1100, although that redshift is obtained theoretically rather than observationally.

Although a signal of a few kelvin is very strong by radio-astronomy standards, the CMB is not easy to study because it is very nearly isotropic: if your eyes were sensitive to microwave radiation, you would see microwaves radiated almost uniformly from all directions. The largest departure from isotropy is the dipole anisotropy, a Doppler shift caused by the earth's motion (about 600 km/s, roughly in the direction of the constellation Leo) relative to the "cosmological" reference frame: the sky is about 0.0033 K hotter in the direction of motion. The intrinsic, or primordial, anisotropy is about two orders of magnitude smaller than the dipole. COBE's DMR (Differential Microwave Radiometer; Smoot et al.) mapped this anisotropy with a resolution of about 7 degrees; the Boomerang experiment (flown in 1999 on a balloon in Antarctica, in front of Mnt Erebus) mapped a smaller part of the sky with much finer resolution, and later full-sky satellite maps reached still higher resolution. The near-perfect isotropy is itself a puzzle: the horizon problem (also known as the homogeneity problem) is a cosmological fine-tuning problem within the Big Bang model, arising from the difficulty of explaining the observed homogeneity of causally disconnected regions of space in the absence of a mechanism that sets the same initial conditions everywhere.

The anisotropy is analyzed by taking pairs of points in the data set and comparing the difference in temperature for different angular separations; one way to describe the anisotropy of the CMB sky is as a harmonic expansion. The resulting plot (called a power spectrum) shows the relative number of spots as a function of their angular size; the typical angular size observed is about 1 degree. The physical size of the spots can be computed, so they provide a standard yardstick placed very far away, at redshift z = 1100 when the universe was 370,000 years old; the apparent angle over which a region of given physical size is spread depends on the geometry of the universe (in a hyperbolic geometry, like a saddle, the average apparent size of the spots differs from the flat case). The observed peak near 1 degree matches the theoretical prediction for a flat geometry. On a diagram in which two cosmological parameters (such as the ratio of the matter density to the critical density) are plotted against each other, each point corresponds to a different type of Big Bang model, and a flat geometry corresponds to a diagonal line; the supernova observations (discussed in the previous lecture) only allow models in the blue region of the diagram, so the models consistent with both data sets lie in the overlapping green region. In this way the CMB yields a wealth of detailed information about the fundamental parameters of the universe. The linear theory of CMB fluctuations was first worked out by Ma and Bertschinger in 1995. The CMB can also be used as a tool to probe other physics: it constrains the fine-structure constant α(z_obs) at the redshift of the last scattering surface, z_obs ≈ 1100, and the physics of inflation, while CMB lensing gives access to both metric potentials and, combined with large-scale structure, offers redshift information (though the lensing signal is projected, so it carries no tomographic information).

Finally, the hot and cold spots trace the primordial density fluctuations: light from regions slightly denser than average is gravitationally redshifted, and those overdense regions later collapse under gravity to form galaxies and galaxy clusters, while underdense regions evolve into voids.
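A worked version of the temperature scaling described above (the arithmetic is mine; the inputs, z ≈ 1100 and T₀ = 2.725 K, are from the text):

$$T_{\rm rec} = T_0\,(1+z) \approx 2.725\ {\rm K}\times(1+1100) \approx 3\times 10^{3}\ {\rm K},$$

consistent with recombination occurring at a temperature near 3000 K; conversely, $1+z = T_{\rm rec}/T_0 \approx 3000/2.725 \approx 1100$.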
2022-08-18 04:41:12
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 0, "mathjax_asciimath": 1, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.7530419230461121, "perplexity": 1321.831197639071}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2022-33/segments/1659882573163.7/warc/CC-MAIN-20220818033705-20220818063705-00523.warc.gz"}
http://yaoyao.codes/machine-learning/2014/09/25/isl-classification
# ISL: Classification

Yao Yao on September 25, 2014

The process of predicting qualitative responses is known as classification. Predicting a qualitative response for an observation can also be referred to as classifying that observation. Classification techniques are also known as classifiers. In this chapter we discuss three of the most widely used classifiers:

- logistic regression
- linear discriminant analysis
- K-nearest neighbors

P128

## 2. Why Not Linear Regression?

P129

Different codings of the response would produce fundamentally different linear models that would ultimately lead to different sets of predictions on test observations, and the numerical differences between such codings do not make any sense. Curiously, it turns out that the classifications we get if we use linear regression to predict a binary response are the same as for the linear discriminant analysis (LDA) procedure.

## 3. Logistic Regression

Rather than modeling the response $Y$ directly, logistic regression models the probability that $Y$ belongs to a particular category.

### 3.1 The Logistic Model

If we use a linear regression model to represent these probabilities, the main problem is that the predicted probability may fall outside the range $[0,1]$. To avoid this problem, we must model $p(X)$ using a function that gives outputs between 0 and 1 for all values of $X$. Many functions meet this description. In logistic regression, we use the logistic function,

$$p(X)=\frac{e^{\beta_0+\beta_1 X}}{1+e^{\beta_0+\beta_1 X}} \label{eq3.2}\tag{3.2}$$

To fit the model $(\ref{eq3.2})$, we use a method called maximum likelihood, which we will discuss later. After a bit of manipulation of $(\ref{eq3.2})$, we find that

$$\frac{p(X)}{1-p(X)}=e^{\beta_0+\beta_1 X} \label{eq3.3}\tag{3.3}$$

The quantity $p(X)/[1-p(X)]$ is called the odds, and can take on any value between 0 and $\infty$.

- $odds \to 0$ means extremely low probability
- $odds \to \infty$ means extremely high probability

By taking the logarithm of both sides of $(\ref{eq3.3})$, we arrive at

$$\log\left(\frac{p(X)}{1-p(X)}\right)=\beta_0+\beta_1 X \label{eq3.4}\tag{3.4}$$

The left-hand side is called the log-odds or logit. We see that the logistic regression model $(\ref{eq3.2})$ has a logit that is linear in $X$. Therefore increasing $X$ by one unit changes the log odds by $\beta_1$, or equivalently it multiplies the odds by $e^{\beta_1}$.

- If $\beta_1$ is positive then increasing $X$ will be associated with increasing $p(X)$.
- If $\beta_1$ is negative then increasing $X$ will be associated with decreasing $p(X)$.

### 3.2 Estimating the Regression Coefficients

Although we could use (non-linear) least squares to fit the model $(\ref{eq3.4})$, the more general method of maximum likelihood is preferred, since it has better statistical properties. The basic intuition behind using maximum likelihood to fit a logistic regression model is as follows: we seek estimates for $\beta_0$ and $\beta_1$ such that the predicted probability $\hat{p}(x_i)$, using $(\ref{eq3.2})$, corresponds as closely as possible to the $y_i$. In other words, we try to find $\hat{\beta}_0$ and $\hat{\beta}_1$ such that plugging these estimates into the model for $p(X)$, given in $(\ref{eq3.2})$, yields a number close to 1 for all $x_i$ whose $y_i = yes$, and a number close to 0 for all $x_j$ whose $y_j = no$. This intuition can be formalized using a mathematical equation called a likelihood function:

$$\ell(\beta_0,\beta_1)=\prod_{i:\,y_i=1}p(x_i)\prod_{i':\,y_{i'}=0}\bigl(1-p(x_{i'})\bigr)$$

The estimates $\hat{\beta}_0$ and $\hat{\beta}_1$ are chosen to maximize this likelihood function. Maximum likelihood is a very general approach that is used to fit many non-linear models.
In the linear regression setting, the least squares approach is in fact a special case of maximum likelihood.

We use z-statistics to perform the hypothesis tests on the coefficients. Take $\beta_1$ as an example:

$$z=\frac{\hat{\beta}_1}{SE(\hat{\beta}_1)}$$

Then a large absolute value of the z-statistic, together with a p-value that is virtually 0, indicates evidence to reject the null hypothesis $H_0 : \beta_1 = 0$.

P134

### 3.4 Multiple Logistic Regression

By analogy with the extension from simple to multiple linear regression, we can generalize $(\ref{eq3.4})$ as follows:

$$\log\left(\frac{p(X)}{1-p(X)}\right)=\beta_0+\beta_1 X_1+\cdots+\beta_p X_p \label{eq3.6}\tag{3.6}$$

Equation $(\ref{eq3.6})$ can be rewritten as

$$p(X)=\frac{e^{\beta_0+\beta_1 X_1+\cdots+\beta_p X_p}}{1+e^{\beta_0+\beta_1 X_1+\cdots+\beta_p X_p}}\tag{3.7}$$

Still we use the maximum likelihood method to estimate $\beta_0, \beta_1, \cdots, \beta_p$.

As in the linear regression setting, the results obtained using one predictor may be quite different from those obtained using multiple predictors, especially when there is correlation among the predictors. In general, the phenomenon is known as confounding. See P136 for the details; the example and the explanation there are both quite good.

### 3.5 Logistic Regression for >2 Response Classes

The two-class logistic regression models discussed in the previous sections have multiple-class extensions, but in practice they tend not to be used all that often. One of the reasons is that the method we discuss in the next section, discriminant analysis, is popular for multiple-class classification. So we just stop here. Simply note that such an approach is possible and is available in R.

## 4. Linear Discriminant Analysis

Logistic regression estimates $Pr(Y=k \vert X=x)$ directly (it models the conditional distribution of the response $Y$, given the predictor(s) $X$), whereas LDA first estimates $Pr(X=x \vert Y=k)$ and then uses Bayes' theorem to convert it into $Pr(Y=k \vert X=x)$. Its advantage over logistic regression is stability.

### 4.1 Using Bayes' Theorem for Classification

Suppose that we wish to classify an observation into one of $K$ classes, where $K \geq 2$. Let $\pi_k$ represent the overall or prior probability that a randomly chosen observation comes from the $k^{\text{th}}$ class, i.e. $\pi_k = Pr(Y=k)$. Let $f_k(X) \equiv Pr(X = x \vert Y = k)$ denote the density function of $X$ for an observation that comes from the $k^{\text{th}}$ class. In other words, $f_k(X)$ is relatively large if there is a high probability that an observation in the $k^{\text{th}}$ class has $X \approx x$, and $f_k(X)$ is small if it is very unlikely that an observation in the $k^{\text{th}}$ class has $X \approx x$. Then Bayes' theorem states that

$$Pr(Y=k \vert X=x)=\frac{\pi_k f_k(x)}{\sum_{l=1}^{K}\pi_l f_l(x)}$$

In accordance with our earlier notation, we will use the abbreviation $p_k(X) = Pr(Y = k \vert X)$. We refer to $p_k(X)$ as the posterior probability that an observation $X = x$ belongs to the $k^{\text{th}}$ class. That is, it is the probability that the observation belongs to the $k^{\text{th}}$ class, given the predictor value for that observation.

In general, estimating $\pi_k$ is easy if we have a random sample of $Y$s from the population: we simply compute the fraction of the training observations that belong to the $k^{\text{th}}$ class. However, estimating $f_k(X)$ tends to be more challenging, unless we assume some simple forms for these densities. If we can find a way to estimate $f_k(X)$, then we can develop a classifier that approximates the Bayes classifier. Such an approach is the topic of the following sections.

### 4.2 Linear Discriminant Analysis for $p = 1$

P139–142. Copying all those formulas over here would break my hand… The key assumptions are:

1. Assume that $f_k(x)$ is normal or Gaussian.
2. Let $\sigma_k^2$ be the variance parameter for the $k^{\text{th}}$ class,
and then assume that $\sigma_1^2 = \cdots = \sigma_K^2$, i.e. all $K$ classes share a common variance.

### 4.3 Linear Discriminant Analysis for $p > 1$

We now extend the LDA classifier to the case of multiple predictors. To do this, we will assume that $X = (X_1, X_2, \cdots, X_p)$ is drawn from a multivariate Gaussian (or multivariate normal) distribution, with a class-specific mean vector and a common covariance matrix. The multivariate Gaussian distribution assumes that each individual predictor follows a one-dimensional normal distribution, with some correlation between each pair of predictors.

Starting at P142, the book first introduces what a multivariate Gaussian distribution is, and then plugs estimates into formula after formula in place of the parameters…

From P145 on it is the usual True Positive / Sensitivity material, so I won't repeat it here.

The end of P145 explains why LDA may sometimes have a low sensitivity: LDA is trying to approximate the Bayes classifier, which has the lowest total error rate out of all classifiers (if the Gaussian model is correct). That is, the Bayes classifier will yield the smallest possible total number of misclassified observations, irrespective of which class the errors come from. That is, some misclassifications will result from incorrectly assigning a customer who does not default to the default class, and others will result from incorrectly assigning a customer who defaults to the non-default class.

If we lower the classification threshold:

- the number of $+$ predictions goes up
- the number of $-$ predictions goes down
- $P(+ \vert D) = $ Sensitivity goes up
- $P(- \vert D^c) = $ Specificity goes down

The ROC curve is a popular graphic for simultaneously displaying the TP and FP rate for all possible thresholds. The name "ROC" is historic, and comes from communications theory. It is an acronym for receiver operating characteristics.

- FP (false positive) rate, i.e. 1 − Specificity, is the x-axis of the ROC
- TP (true positive) rate, i.e. Sensitivity, is the y-axis of the ROC
- If you have forgotten these concepts, go and review Conditional Probability

The overall performance of a classifier, summarized over all possible thresholds, is given by the area under the (ROC) curve (AUC).

### 4.4 Quadratic Discriminant Analysis

LDA assumes that the observations within each class are drawn from a multivariate Gaussian distribution with a class-specific mean vector and a covariance matrix that is common to all $K$ classes. Like LDA, the QDA classifier results from assuming that the observations from each class are drawn from a Gaussian distribution, and plugging estimates for the parameters into Bayes' theorem in order to perform prediction. However, unlike LDA, QDA assumes that each class has its own covariance matrix.

P149 has a small amount of mathematics.

Roughly speaking, LDA tends to be a better bet than QDA if there are relatively few training observations and so reducing variance is crucial. In contrast, QDA is recommended if the training set is very large, so that the variance of the classifier is not a major concern, or if the assumption of a common covariance matrix for the $K$ classes is clearly untenable.

## 5. A Comparison of Classification Methods

Logistic regression vs LDA:

- Both produce linear decision boundaries.
- The only difference between the two approaches lies in the fact that logistic regression performs estimation using maximum likelihood, whereas LDA uses the estimated mean and variance from a normal distribution.
- Since logistic regression and LDA differ only in their fitting procedures, one might expect the two approaches to give similar results. The performance fluctuates basically according to whether the Gaussian assumptions are met or not.

KNN:

- a completely non-parametric approach
- no assumptions are made about the shape of the decision boundary
- We can expect this approach to dominate LDA and logistic regression when the decision boundary is highly non-linear.
- On the other hand, KNN does not tell us which predictors are important; we don't get a table of coefficients out of KNN.

QDA:

- QDA serves as a compromise between the non-parametric KNN method and the linear LDA and logistic regression approaches.
- Since QDA assumes a quadratic decision boundary, it can accurately model a wider range of problems than can the linear methods.
- Though not as flexible as KNN, QDA can perform better in the presence of a limited number of training observations because it does make some assumptions about the form of the decision boundary.

P153–154 design six scenarios to test the performance of these methods.

- When the true decision boundaries are linear, then the LDA and logistic regression approaches will tend to perform well.
- When the boundaries are moderately non-linear, QDA may give better results.
- Finally, for much more complicated decision boundaries, a non-parametric approach such as KNN can be superior.
- But the level of smoothness for a non-parametric approach must be chosen carefully.

## 6. Lab: Logistic Regression, LDA, QDA, and KNN

### 6.2 Logistic Regression

```r
> library(ISLR)
> names(Smarket)
[1] "Year"      "Lag1"      "Lag2"      "Lag3"      "Lag4"
[6] "Lag5"      "Volume"    "Today"     "Direction"
> dim(Smarket)
[1] 1250 9
> summary(Smarket)
> cor(Smarket[,-9])  ## matrix of pairwise correlations, except the qualitative one
```

Next, we will fit a logistic regression model in order to predict Direction using Lag1 through Lag5 and Volume. The glm() function fits generalized linear models, a class of models that includes logistic regression. The syntax of the glm() function is similar to that of lm(), except that we must pass in the argument family=binomial in order to tell R to run a logistic regression rather than some other type of generalized linear model.

```r
> glm.fit = glm(Direction~Lag1+Lag2+Lag3+Lag4+Lag5+Volume, data=Smarket, family=binomial)
> summary(glm.fit)
> coef(glm.fit)
> summary(glm.fit)$coef
```

The predict() function can be used to predict the probability that the market will go up, given values of the predictors. The type="response" option tells R to output probabilities of the form $P(Y = 1 \vert X)$, as opposed to other information such as the logit. If no data set is supplied to the predict() function, then the probabilities are computed for the training data that was used to fit the logistic regression model.

```r
> glm.probs = predict(glm.fit, type="response")
> glm.probs[1:10]
    1     2     3     4     5     6     7     8     9    10
0.507 0.481 0.481 0.515 0.511 0.507 0.493 0.509 0.518 0.489
```

We know that these values correspond to the probability of the market going up, rather than down, because the contrasts() function indicates that R has created a dummy variable with a 1 for Up.

```r
> contrasts(Direction)
     Up
Down  0
Up    1
```

In order to make a prediction, we must convert these predicted probabilities into class labels, Up or Down.

```r
> glm.pred = rep("Down", 1250)  ## n = 1250
> glm.pred[glm.probs > .5] = "Up"
```

Given these predictions, the table() function can be used to produce a confusion matrix.

```r
> table(glm.pred, Smarket$Direction)
glm.pred Down  Up
    Down  145 141
    Up    457 507
> (507+145)/1250
[1] 0.5216
> mean(glm.pred == Smarket$Direction)
[1] 0.5216
```

-> ~~~~~~~~~~ 2015.11.09 P.S. Start ~~~~~~~~~~ <-

You can also use the confusionMatrix(prediction, reference) function in the caret package, e.g.
```r
> library(caret)
> lvs <- c("normal", "abnormal")
> truth <- factor(rep(lvs, times = c(86, 258)), levels = rev(lvs))
> pred <- factor(c(rep(lvs, times = c(54, 32)), rep(lvs, times = c(27, 231))), levels = rev(lvs))
> xtab <- table(pred, truth)
> confusionMatrix(xtab)
Confusion Matrix and Statistics

          truth
pred       abnormal normal
  abnormal      231     32
  normal         27     54

               Accuracy : 0.8285
               ......
> confusionMatrix(pred, truth)  # ditto
```

See confusionMatrix {caret} for more.

-> ~~~~~~~~~~ 2015.11.09 P.S. End ~~~~~~~~~~ <-

From P159 on, the book discusses splitting off a training set; the only thing to note is the usage of the subset argument of glm():

```r
> train = (Smarket$Year < 2005)
> glm.fit = glm(Direction~Lag1+Lag2+Lag3+Lag4+Lag5+Volume, data=Smarket, family=binomial, subset=train)
```

### 6.3 Linear Discriminant Analysis

We fit a LDA model using the lda() function, which is part of the MASS library. Notice that the syntax for the lda() function is identical to that of lm().

```r
> library(MASS)
> lda.fit = lda(Direction~Lag1+Lag2, data=Smarket, subset=train)
> lda.fit
Call:
lda(Direction ~ Lag1 + Lag2, data = Smarket, subset = train)

Prior probabilities of groups:
 Down    Up
0.492 0.508

Group means:
        Lag1    Lag2
Down  0.0428  0.0339
Up   -0.0395 -0.0313

Coefficients of linear discriminants:
        LD1
Lag1 -0.642
Lag2 -0.514
> plot(lda.fit)
```

The LDA output indicates that $\hat{\pi}_1 = 0.492$ and $\hat{\pi}_2 = 0.508$; in other words, 49.2% of the training observations correspond to days during which the market went down. It also provides the group means; these are the average of each predictor within each class, and are used by LDA as estimates of $\mu_k$. The coefficients of linear discriminants output provides the linear combination of Lag1 and Lag2 that is used to form the LDA decision rule, i.e. $(-0.642 \times Lag1) + (-0.514 \times Lag2)$. The plot() function produces plots of the linear discriminants, obtained by computing $(-0.642 \times Lag1) + (-0.514 \times Lag2)$ for each of the training observations.

The predict() function returns a list with three elements:

- class, contains LDA's predictions.
- posterior, is a matrix whose $k^{\text{th}}$ column contains the posterior probability that the corresponding observation belongs to the $k^{\text{th}}$ class, i.e. $p_k(X) = Pr(Y = k \vert X = x)$.
- x, contains the linear discriminants.

```r
> train = (Smarket$Year < 2005)
> Smarket.2005 = Smarket[!train,]
> Direction.2005 = Smarket$Direction[!train]
> lda.pred = predict(lda.fit, Smarket.2005)
> names(lda.pred)
[1] "class"     "posterior" "x"
> lda.class = lda.pred$class
> table(lda.class, Direction.2005)
         Direction.2005
lda.class Down  Up
     Down   35  35
     Up     76 106
> mean(lda.class == Direction.2005)
[1] 0.56
> sum(lda.pred$posterior[,1] >= .5)
[1] 70
> sum(lda.pred$posterior[,1] < .5)
[1] 182
```

Notice that the posterior probability output by the model corresponds to the probability of Down. So you'd better take a peek before performing further tasks.

```r
> lda.pred$posterior[1:20,1]
> lda.class[1:20]
```

### 6.4 Quadratic Discriminant Analysis

QDA is implemented in R using the qda() function, which is also part of the MASS library. The syntax is identical to that of lda().

```r
> qda.fit = qda(Direction~Lag1+Lag2, data=Smarket, subset=train)
```

The predict() function works in exactly the same fashion as for LDA.

```r
> qda.class = predict(qda.fit, Smarket.2005)$class
> table(qda.class, Direction.2005)
         Direction.2005
qda.class Down  Up
     Down   30  20
     Up     81 121
> mean(qda.class == Direction.2005)
[1] 0.599
```

### 6.5 K-Nearest Neighbors

The knn() function is part of the class library.
Rather than a two-step approach in which we first fit the model and then use it to make predictions, knn() forms predictions using a single command. The function requires four inputs:

- A matrix of training $X$
- A matrix of testing $X$
- A vector of training $Y$
- A value for $K$, the number of nearest neighbors to be used by the classifier

```r
> library(class)
> train.X = cbind(Smarket$Lag1, Smarket$Lag2)[train,]
> test.X = cbind(Smarket$Lag1, Smarket$Lag2)[!train,]
> train.Direction = Smarket$Direction[train]
```

We set a random seed before we apply knn() because if several observations are tied as nearest neighbors, then R will randomly break the tie. Therefore, a seed must be set in order to ensure reproducibility of results.

```r
> set.seed(1)
> knn.pred = knn(train.X, test.X, train.Direction, k=1)
> table(knn.pred, Direction.2005)
        Direction.2005
knn.pred Down Up
    Down   43 58
    Up     68 83
> (83+43)/252
[1] 0.5
```

The results using $K = 1$ are not very good, since only 50% of the observations are correctly predicted. We repeat the analysis using $K = 2, 3, \cdots$ and look for improvements.

### 6.6 An Application to Caravan Insurance Data

P165: a concrete example whose business analysis is worth a read. One technical point to note:

The scale() function standardizes the data so that all variables are given a mean of zero and a standard deviation of one.

```r
## exclude column 86 because that is the qualitative Purchase variable
> standardized.X = scale(Caravan[,-86])
> var(Caravan[,1])
[1] 165
> var(Caravan[,2])
[1] 0.165
> var(standardized.X[,1])
[1] 1
> var(standardized.X[,2])
[1] 1
```

Now every column of standardized.X has a standard deviation of one and a mean of zero.
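For reference, the transformation scale() applies here is the usual z-score standardization (a standard formula, stated for completeness; it is not spelled out in the post):

$$z_{ij} = \frac{x_{ij} - \bar{x}_j}{s_j}$$

where $\bar{x}_j$ and $s_j$ are the sample mean and sample standard deviation of column $j$, which is exactly why each standardized column ends up with mean 0 and variance 1.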
2020-11-25 13:30:54
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 0, "mathjax_asciimath": 1, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 9, "x-ck12": 0, "texerror": 0, "math_score": 0.8861756920814514, "perplexity": 1064.8307436079715}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2020-50/segments/1606141182794.28/warc/CC-MAIN-20201125125427-20201125155427-00624.warc.gz"}
https://brilliant.org/problems/if-sharygin-olympiad/
Geometry Level 4

In space, five points are marked. It is known that these points are the centers of five spheres, four of which are pairwise externally tangent, and all these four are internally tangent to the fifth one. It turns out, however, that it is impossible to determine which of the marked points is the center of the fifth (the largest) sphere. If the ratio of the greatest and the smallest radii of the spheres can be written as $$\large\ \frac{a + \sqrt{b}}{c}$$ where $$a, b, c$$ are positive integers with $$b$$ square-free, find $$a+b+c$$.
2017-01-16 21:49:45
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 1, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.61185622215271, "perplexity": 233.60627107325985}, "config": {"markdown_headings": false, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2017-04/segments/1484560279368.44/warc/CC-MAIN-20170116095119-00221-ip-10-171-10-70.ec2.internal.warc.gz"}
https://tex.stackexchange.com/questions/144454/how-to-plot-x1-3
# How to plot x^(1/3)?

I tried to export my Geogebra graphs into TikZ code. In general it works fine, but plotting x^(1/3) doesn't work:

```latex
\documentclass[10pt]{article}
\usepackage{pgf,tikz}
\usetikzlibrary{arrows}
\pagestyle{empty}
\begin{document}
\definecolor{ccqqqq}{rgb}{0.8,0,0}
\definecolor{qqttcc}{rgb}{0,0.2,0.8}
\definecolor{cqcqcq}{rgb}{0.75,0.75,0.75}
\begin{tikzpicture}[line cap=round,line join=round,>=triangle 45,x=1.0cm,y=1.0cm]
\draw [color=cqcqcq,dash pattern=on 1pt off 1pt, xstep=1.0cm,ystep=1.0cm] (-4,-4) grid (4,4);
\draw[->,color=black] (-4,0) -- (4,0);
\foreach \x in {-4,-3,-2,-1,1,2,3}
  \draw[shift={(\x,0)},color=black] (0pt,2pt) -- (0pt,-2pt) node[below] {\footnotesize $\x$};
\draw[->,color=black] (0,-4) -- (0,4);
\foreach \y in {-4,-3,-2,-1,1,2,3}
  \draw[shift={(0,\y)},color=black] (2pt,0pt) -- (-2pt,0pt) node[left] {\footnotesize $\y$};
\draw[color=black] (0pt,-10pt) node[right] {\footnotesize $0$};
\clip(-4,-4) rectangle (4,4);
\draw[color=qqttcc, smooth,samples=100,domain=-4.0:4.0] plot(\x,{(\x)*(\x)*(\x)});
\draw[color=ccqqqq, smooth,samples=100,domain=-4.0:4.0] plot(\x,{((\x))^(1/(3))});
\draw [color=qqttcc](2.06,4.18) node[anchor=north west] {$f(x)=x^3$};
\draw [color=ccqqqq](2.06,1.18) node[anchor=north west] {$f(x)=\sqrt[3]{x}$};
\begin{scriptsize}
\draw[color=qqttcc] (-1.74,-7.16) node {$f$};
\end{scriptsize}
\end{tikzpicture}
\end{document}
```

Error in line 21: `! Missing number, treated as zero.` So plot has problems with negative x, I suppose, but I don't know. Any help would be great! I googled for nearly an hour, but found no real answers.

As suggested, I rotated and mirrored the graph instead; that works great:

```latex
\documentclass[10pt]{article}
\usepackage[utf8]{inputenc}
\usepackage{pgf,tikz}
\usetikzlibrary{arrows}
\pagestyle{empty}
\begin{document}
\definecolor{ccqqqq}{rgb}{0.8,0,0}
\definecolor{qqttcc}{rgb}{0,0.2,0.8}
\definecolor{cqcqcq}{rgb}{0.75,0.75,0.75}
\begin{tikzpicture}[line cap=round,line join=round,>=triangle 45,x=1.0cm,y=1.0cm]
\draw [color=cqcqcq,dash pattern=on 1pt off 1pt, xstep=1.0cm,ystep=1.0cm] (-4,-4) grid (4,4);
\draw[->,color=black] (-4,0) -- (4,0);
\foreach \x in {-4,-3,-2,-1,1,2,3}
  \draw[shift={(\x,0)},color=black] (0pt,2pt) -- (0pt,-2pt) node[below] {\footnotesize \x};
\draw[->,color=black] (0,-4) -- (0,4);
\foreach \y in {-4,-3,-2,-1,1,2,3}
  \draw[shift={(0,\y)},color=black] (2pt,0pt) -- (-2pt,0pt) node[left] {\footnotesize \y};
\draw[color=black] (0pt,-10pt) node[right] {\footnotesize 0};
\clip(-4,-4) rectangle (4,4);
\draw[color=qqttcc, smooth,samples=100,domain=-4.0:4.0] plot(\x,{(\x)*(\x)*(\x)});
\begin{scope}[yscale=-1,xscale=1]
  \draw[rotate=90, color=ccqqqq, smooth,samples=100,domain=-4.0:4.0] plot(\x,{(\x)*(\x)*(\x)});
\end{scope}
\draw [color=qqttcc](2.06,4.00) node[anchor=north west] {$f(x)=x^3$};
\draw [color=ccqqqq](2.06,1.00) node[anchor=north west] {$f(x)=\sqrt[3]{x}$};
\begin{scriptsize}
\draw[color=qqttcc] (-1.74,-7.16) node {f};
\end{scriptsize}
\end{tikzpicture}
\end{document}
```

- Try this question. Nov 11 '13 at 18:24
- The elegant way would be to plot $y=x^3$ but reflected across the line $y=x$. That way you never need to compute a cube root, so computations are faster and, more importantly, more accurate. Nov 11 '13 at 18:26
- And how do I do that? – jojo Nov 11 '13 at 18:33
- @BenjaminMcKay I tried `\draw[color=ccqqqq, smooth,samples=50,domain=0.0001:4.0] plot(\x, {(\x)^(1/3)});` with the same code as jojo, but the shape of the curve is completely wrong for values less than 1.0.
  – user4686 Nov 11 '13 at 18:39
- Ah OK, I understand: with `\usepackage{fp}`, `\usetikzlibrary{fixedpointarithmetic}` and the key `fixed point arithmetic` added to the picture, it now works. The failure of naked pgfmath on this one is spectacular! – user4686 Nov 11 '13 at 18:48

## 6 Answers

I'm sure we've answered this before, but here's a solution using pgfplots. When plotting cube root functions it is useful to know that many programs (including the wonderful pgfplots package) use logarithms to plot them. As such, you have to be careful with the domain. In the code below, I have plotted the function x/abs(x)*abs(x)^(1/3), which ensures that the function is plotted for the entire domain.

```latex
% arara: pdflatex
\documentclass{standalone}
\usepackage{pgfplots}
% set the arrows as stealth fighters
\tikzset{>=stealth}
\begin{document}
\begin{tikzpicture}
  \begin{axis}[
      xmin=-10,xmax=10,
      ymin=-10,ymax=10,
      axis lines=center,
      axis line style=<->]
    \addplot[<->] expression[domain=-10:10,samples=100]{x/abs(x)*abs(x)^(1/3)};
  \end{axis}
\end{tikzpicture}
\end{document}
```

- x/abs(x)*abs(x)^(1/3) is a genius expression that is continuous but not differentiable. +1! Nov 11 '13 at 19:33
- @Marienplatz thanks! I'm afraid I can't take any credit for it, though :) It's a trick I got from somewhere else. Nov 11 '13 at 20:01
- Interestingly it also works for samples=11, i.e. for x=0. Nov 11 '13 at 20:46

With PSTricks.

## Option 1 (with Postfix Notation)

```latex
\documentclass[pstricks,border=12pt]{standalone}
\usepackage{pst-plot}
\begin{document}
\begin{pspicture}(-3.5,-2.5)(4,3)
  \psaxes{->}(0,0)(-3.5,-2.5)(3.5,2.5)[$x$,0][$y$,90]
  \pstVerb{/power 1 3 div def}
  \psplot[plotpoints=1000,linecolor=blue]{-3}{3}{x dup 0 lt {neg power exp neg} {power exp} ifelse}
\end{pspicture}
\end{document}
```

## Option 2 (with Infix Notation)

```latex
\documentclass[pstricks,border=12pt]{standalone}
\usepackage{pst-plot}
\begin{document}
\begin{pspicture}(-3.5,-2.5)(4,3)
  \psaxes{->}(0,0)(-3.5,-2.5)(3.5,2.5)[$x$,0][$y$,90]
  \psplot[plotpoints=1000,linecolor=blue,algebraic]{-3}{3}{IfTE(x<0,-(-x)^(1/3),x^(1/3))}
\end{pspicture}
\end{document}
```

## Option 3 (with Rotation)

This is the last resort, only for children. The right leaf is rotated 180 degrees about the origin to get the left leaf.

```latex
\documentclass[pstricks,border=12pt]{standalone}
\usepackage{pst-plot}
\psset{plotpoints=1000}
\begin{document}
\multido{\i=0+10}{19}{%
\begin{pspicture}(-3.5,-2.5)(4,3)
  \psaxes{->}(0,0)(-3.5,-2.5)(3.5,2.5)[$x$,0][$y$,90]
  \def\right{\psplot[linecolor=blue,algebraic]{0}{3}{x^(1/3)}}%
  \rput{\i}{\right}\right
  %\def\right{\psplot[linecolor=red]{0}{3}{x 1 3 div exp}}%
  %\rput{180}{\right}\right
\end{pspicture}}
\end{document}
```

## Option 4 (with Parametric Plot)

Please see Herbert's answer.

## Option 5 (with Implicit Plot)

With an extra bug.

```latex
\documentclass[pstricks,border=12pt]{standalone}
\usepackage{pst-func}
\begin{document}
\begin{pspicture}(-3.5,-2.5)(4,3)
  \psaxes{->}(0,0)(-3.5,-2.5)(3.5,2.5)[$x$,0][$y$,90]
  \psplotImp[linecolor=red,stepFactor=0.2,algebraic](-3,-1.5)(3,1.5){y^3-x}
\end{pspicture}
\end{document}
```

The bug can be hidden by clipping the unwanted curves.
```latex
\documentclass[pstricks,border=12pt]{standalone}
\usepackage{pst-func}
\begin{document}
\begin{pspicture}(-3.5,-2.5)(4,3)
  \psaxes{->}(0,0)(-3.5,-2.5)(3.5,2.5)[$x$,0][$y$,90]
  \psclip{\psframe[linestyle=none,dimen=monkey](!-3 3 1 3 div exp neg)(!3 3 1 3 div exp)}
    \psplotImp[linecolor=red,stepFactor=0.1,algebraic](-4,-3)(5,4){y^3-x}
  \endpsclip
\end{pspicture}
\end{document}
```

## Option 6 (with swapaxes)

```latex
\documentclass[pstricks,border=12pt]{standalone}
\usepackage{pst-plot}
\begin{document}
\begin{pspicture}(-3.5,-2.5)(4,3)
  \psaxes{->}(0,0)(-3.5,-2.5)(3.5,2.5)[$x$,0][$y$,90]
  \psplot[linecolor=red,algebraic,plotpoints=1000,swapaxes]{3 1 3 div exp neg}{3 1 3 div exp}{x^3}
\end{pspicture}
\end{document}
```

Use the parametric form: x = t³ and y = t:

```latex
\documentclass[pstricks,border=12pt]{standalone}
\usepackage{pst-plot}
\begin{document}
\begin{pspicture}(-3.5,-2.5)(4,3)
  \psaxes[labelFontSize=\scriptstyle]{->}(0,0)(-3.5,-2.5)(3.5,2.5)[$x$,0][$y$,90]
  \psparametricplot[linecolor=blue,algebraic]{-1.5}{1.5}{t^3|t}
\end{pspicture}
\end{document}
```

or the implicit form y³ − x = 0:

```latex
\documentclass[pstricks,border=12pt]{standalone}
\usepackage{pst-func}
\begin{document}
\begin{pspicture*}(-3.5,-2.5)(4,3)
  \psaxes[labelFontSize=\scriptstyle,ticksize=0 4pt]{->}(0,0)(-3.5,-2.5)(3.5,2.5)[$x$,0][$y$,90]
  \psplotImp[linecolor=red,linewidth=2pt,algebraic](-4,-3)(4,3){y^3-x}
\end{pspicture*}
\end{document}
```

IEEE floating-point math usually gives NaN (not a number) for x^a when x ≤ 0 and a is not an integer. You can draw x^{1/3} for x in [0.00001:4] and -(-x)^{1/3} for x in [-4:-0.00001].

- Why do you refer to IEEE math? Nov 11 '13 at 18:33
- Isn't it what underlies pgf math? Nov 11 '13 at 18:34

Another approach would be to use gnuplot to compute the 1/3 floating-point expressions, with -shell-escape enabled and gnuplot installed. The idea is to evaluate 1/3 first, by making at least one of the numerator or denominator a floating-point value, like 1./3 or 1.0/3. Note: sgn(x) is the sign function (−1 if x < 0; 0 if x = 0; 1 if x > 0). One might need more sample points to get a refined plot. Here are some examples with packages that use gnuplot.

With tkz-fct:

```latex
\documentclass[preview=true,12pt]{standalone}
\usepackage{tkz-fct}
\begin{document}
\begin{tikzpicture}[scale=2]
  \tkzInit[xmin=-2,xmax=2,ymin=-2,ymax=2]
  \tkzGrid
  \tkzAxeXY
  \tkzFct[color=red]{sgn(x)*(abs(x)**(1./3))}
\end{tikzpicture}
\end{document}
```

With gnuplottex:

```latex
\documentclass[preview=true,12pt]{standalone}
\usepackage{gnuplottex}
\begin{document}
\begin{gnuplot}[terminal=epslatex,terminaloptions=color]
set grid
set samples 1000
set xlabel '$x$'
set ylabel '$x^{\frac{1}{3}}$'
plot [-2:2] [-2:2] sgn(x)*(abs(x)**(1./3)) title '$x^{\frac{1}{3}}$' linetype 1 linewidth 3
\end{gnuplot}
\end{document}
```

With pgfplots (and even pure TikZ can be used):

```latex
\documentclass[preview=true,border=2pt,12pt]{standalone}
\usepackage{pgfplots}
\pgfplotsset{compat=1.9}
\begin{document}
\begin{tikzpicture}
  \begin{axis}[xlabel=$x$,ylabel=$x^{\frac{1}{3}}$,grid=major,enlargelimits=false]
    \addplot [domain=-2:2,samples=1000,red,no markers] gnuplot[id=poly]{sgn(x)*(abs(x)**(1./3))};
  \end{axis}
\end{tikzpicture}
\end{document}
```

```latex
\begin{tikzpicture}[scale=0.5]
\draw[dashed,gray!30](-3,-3)grid(3,3); %this creates the grid you see;
  %dashed gives the dashed lines,
  %gray!30 gives a gray color dimmed by a factor of 30
\draw[<->](-3,0)--(3,0)node[below right]{$x$}; %this is the x-axis
\draw[<->](0,-3)--(0,3)node[above right]{$y$}; %this is the y-axis
\foreach \x in {-2,-1,1,2} %this starts a foreach loop with \x taking the values -2,-1,1,2
  \draw(\x,2pt)--(\x,-2pt)node[below]{\footnotesize{$\x$}};
  %this is what I want the foreach loop to do until it reaches the semicolon;
  %it draws the tick marks and numbers on the axis
\foreach \x in {-2,-1,1,2} %same as before but for the y-axis
  \draw(2pt,\x)--(-2pt,\x)node[left]{\footnotesize{$\x$}};
\draw[<->,domain=-1.442249:1.442249,samples=100]plot({\x*\x*\x},\x);
  %where x^3=3 when x=1.442249'ish
\end{tikzpicture}
```
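For reference, the identity that the sgn/abs constructions in the answers above rely on is the standard real-cube-root fact (added here for clarity, not taken from any single answer): for real $x$,

$$\sqrt[3]{x}=\operatorname{sgn}(x)\,\lvert x\rvert^{1/3},$$

so the plotting engine only ever raises a non-negative number to the fractional power $1/3$, avoiding the NaN/complex-power problem for $x<0$.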
2021-09-26 13:17:33
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 0, "mathjax_asciimath": 1, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.826578676700592, "perplexity": 4240.956260739799}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": false}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2021-39/segments/1631780057861.0/warc/CC-MAIN-20210926114012-20210926144012-00217.warc.gz"}
http://math.stackexchange.com/questions/471257/find-the-centre-and-the-radius-of-the-circle
# Find the centre and the radius of the circle

Find the centre and the radius of the circle. $$4x^2+4y^2-4x-8y-11=0$$ Thank you.

- Show us what you have tried so we can help you. – Alizter Aug 19 '13 at 14:32
- $$(4x^2+4y^2-4x-8y-11=0) \cdot \left(\frac{1}{4}\right)$$ $$x^2+y^2-x-2y-\frac{11}{4}=0$$ $$\text{compared to: } x^2+y^2+2gx+2fy-c=0$$ $$2g=-1 \to g=-\frac{1}{2}$$ $$2f=-2 \to f=-1$$ $$(-g,-f)=\left(\frac{1}{2},1\right)$$ $$\text{This is where I get my answer wrong: } r^2 = g^2+f^2-c \to r = -\sqrt{\frac{3}{2}}$$ $$\text{and thank you, Sami!}$$ – Audrey G Aug 19 '13 at 15:05
- You can simply complete the squares after multiplying by $\frac14$: you get $$\left(x-\frac12\right)^2+(y-1)^2-\frac{11}4=\frac14+1\;,$$ so $$\left(x-\frac12\right)^2+(y-1)^2=4\;.$$ This immediately puts the centre at $\left\langle\frac12,1\right\rangle$ and the radius at $2$. Note, though, that the radius must be positive: that rules out the negative square root. – Brian M. Scott Aug 19 '13 at 18:29

By completing the square $$4x^2+4y^2-4x-8y-11=0\iff (2x-1)^2+(2y-2)^2=4^2\iff\left(x-\frac{1}{2}\right)^2+(y-1)^2=4$$
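For completeness, the general-form bookkeeping the asker was attempting uses the standard result (not specific to this thread): for a circle written as $x^2+y^2+2gx+2fy+c=0$,

$$\text{centre}=(-g,-f),\qquad r=\sqrt{g^2+f^2-c}.$$

Here $g=-\tfrac12$, $f=-1$ and $c=-\tfrac{11}{4}$, so $r=\sqrt{\tfrac14+1+\tfrac{11}{4}}=\sqrt{4}=2$, in agreement with the completed-square form.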
2014-03-11 12:22:02
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 1, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.7982194423675537, "perplexity": 1061.8147923480274}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2014-10/segments/1394011190529/warc/CC-MAIN-20140305091950-00076-ip-10-183-142-35.ec2.internal.warc.gz"}
http://orbit.dtu.dk/en/publications/bilinear-relative-equilibria-of-identical-point-vortices(0fd644eb-a474-4344-87c9-d1f7fea3cae5).html
## Bilinear Relative Equilibria of Identical Point Vortices

Publication: peer-reviewed journal article. Annual report year: 2012.

A new class of bilinear relative equilibria of identical point vortices, in which the vortices are constrained to lie on two perpendicular lines, conveniently taken to be the x- and y-axes of a Cartesian coordinate system, is introduced and studied. In the general problem we have m vortices on the y-axis and n on the x-axis. We define generating polynomials q(z) and p(z), respectively, for each set of vortices. A second-order, linear ODE for p(z) given q(z) is derived. Several results relating the general solution of the ODE to relative equilibrium configurations are established. Our strongest result, obtained using Sturm's comparison theorem, is that if p(z) satisfies the ODE for a given q(z) with its imaginary zeros symmetric relative to the x-axis, then it must have at least n−m+2 simple, real zeros. For m=2 this provides a complete characterization of all zeros, and we study this case in some detail. In particular, we show that, given $q(z)=z^2+\eta^2$, where $\eta$ is real, there is a unique p(z) of degree n, and a unique value of $\eta^2=A_n$, such that the zeros of q(z) and p(z) form a relative equilibrium of n+2 point vortices. We show that $A_{n} \approx \frac{2}{3}n + \frac{1}{2}$ as n→∞, where the coefficient of n is determined analytically, the next-order term numerically. The paper includes extensive numerical documentation on this family of relative equilibria.

Original language: English. Journal of Nonlinear Science, 22(5), 849–885. ISSN 0938-8974. DOI: http://dx.doi.org/10.1007/s00332-012-9129-2. Published 2012. Web of Science® Times Cited: 3.
2016-02-11 14:11:02
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.749660849571228, "perplexity": 964.58710980588}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2016-07/segments/1454701161946.96/warc/CC-MAIN-20160205193921-00048-ip-10-236-182-209.ec2.internal.warc.gz"}
http://planetmath.org/MetalinearLanguage
# metalinear language

Recall that a linear grammar is a formal grammar $G=(\Sigma,N,P,\sigma)$ whose productions are of the form $A\to x$, where $A$ is a non-terminal symbol, and $x$ is a word over $\Sigma$, with at most one occurrence of a non-terminal symbol. The concept of a linear grammar can be generalized: define a $k$-linear grammar as a formal grammar $G=(\Sigma,N,P,\sigma)$ such that every production in $P$ has one of the three following forms:

• $A\to u$,

• $A\to uBv$,

• $\sigma\to W$,

where $A,B$ are non-terminal symbols, $u,v$ are terminal words, and $W$ is a word over $\Sigma$ with no more than $k$ occurrences of non-terminal symbols, none of which is the start symbol $\sigma$. Any $k$-linear grammar is context-free. A language is said to be $k$-linear if it can be generated by a $k$-linear grammar. Note that a language is $1$-linear iff it is linear. A language is said to be metalinear if it is $k$-linear for some positive integer $k$. In other words, if $\mathscr{L}(k)$ denotes the family of $k$-linear languages, then the family $\mathscr{L}(\infty)$ of metalinear languages is $\mathscr{L}(\infty)=\bigcup\{\mathscr{L}(k)\mid k\geq 1\}.$ It is easy to see that we have the following inclusions $\mathscr{R}\subseteq\mathscr{L}(1)\subseteq\cdots\subseteq\mathscr{L}(k)\subseteq\cdots\subseteq\mathscr{L}(\infty)\subseteq\mathscr{F}$ where $\mathscr{R}$ and $\mathscr{F}$ are the families of regular and context-free languages respectively. In fact, it can be shown that all of the inclusions above are strict, providing us with an infinite chain of families of languages between the regular languages and the context-free languages.

Title: metalinear language
Canonical name: MetalinearLanguage
Date of creation: 2013-03-22 18:57:09
Last modified: 2013-03-22 18:57:09
Owner: CWoo (3771)
Last modified by: CWoo (3771)
Numerical id: 8
Entry type: Definition
Classification: msc 68Q45, msc 68Q70
Related topics: LinearGrammar, LinearLanguage
Defines: $k$-linear language
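As an illustration (not part of the PlanetMath entry): the language $\{a^mb^mc^nd^n\mid m,n\geq 1\}$ is a standard example of a metalinear (indeed $2$-linear) language that is not linear. A minimal Python recognizer, with the generating $2$-linear grammar sketched in the comments:

```python
# 2-linear grammar for { a^m b^m c^n d^n : m, n >= 1 }:
#   sigma -> A B        (start rule with two non-terminals, so k = 2)
#   A -> a A b | a b    (linear rules for the a^m b^m block)
#   B -> c B d | c d    (linear rules for the c^n d^n block)
def in_language(w: str) -> bool:
    """Recognize a^m b^m c^n d^n with m, n >= 1."""
    for split in range(2, len(w) - 1):
        u, v = w[:split], w[split:]
        if (len(u) % 2 == 0 and len(v) % 2 == 0 and
                u == 'a' * (len(u) // 2) + 'b' * (len(u) // 2) and
                v == 'c' * (len(v) // 2) + 'd' * (len(v) // 2)):
            return True
    return False

print(in_language('abcd'))      # True  (m = n = 1)
print(in_language('aabbccdd'))  # True  (m = n = 2)
print(in_language('abab'))      # False
```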
2018-03-18 23:29:17
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 31, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.8856016397476196, "perplexity": 450.75033350795263}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 20, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2018-13/segments/1521257646178.24/warc/CC-MAIN-20180318224057-20180319004057-00490.warc.gz"}
https://joiningupthemaths.wordpress.com/2013/10/22/thinking-about-indices/comment-page-1/
Okay, maybe I’m a bit slow but I said and thought something today that I’ve never thought about before.

We’re looking at indices & surds in my Year 11 class. Today we were looking at the rules of indices: multiplying and dividing indices was “insultingly easy”. Negative indices seemed to push it over the edge … some very confused faces. We struggled with it and I think most people managed to get an instrumental understanding but not a relational one. We’re working on that!

I was trying to explain that $6^{-3}$ is equal to $\frac{1}{6^3}$. Now, in explaining this I said it was 1 divided by 6 three times. Seems fine, but where does the one come from? We talked about reciprocals, we talked about the pattern that follows as we look at $6^3, 6^2, 6^1, 6^0, 6^{-1}$ … and we talked about a few more bits and pieces.

I then thought that $6^3$ could be thought of as 1 x 6 x 6 x 6. This adds a bit of consistency to the system and made me think of an inaccuracy that I sometimes hear myself making or confirming.

“What does $6^3$ mean?”

“6 times by itself 3 times”

But does it? 6 times by itself three times could be 6 x (itself three times) or 6 x (6 x 6 x 6). I suddenly thought surely this could be 1 times by 6 three times. This helps with consistency in two ways.

Firstly, with our negative powers – we can say that it is 1 divided by 6 three times. This has internal consistency and I like it.

Secondly, raising something to the power of 0 makes sense – 1 times by 6 zero times is obviously 1.

I’ve never seen or heard this anywhere before – am I wrong? Does this break down somewhere that I’m not aware of? Do you think this could be a helpful way to see it?

Also, is there a richer way to introduce this apart from pattern spotting? Any ideas gladly received.

## 3 thoughts on “Thinking about indices …”

1. The ‘official’ explanation of powers is 1 × the base, repeated as many times as the exponent says. To get them thinking about negative powers: a negative exponent refers to the reciprocal of the number raised to the corresponding positive integer. Thus, a negative exponent should immediately suggest that the number in question has a value of less than one (for bases greater than 1). For example, 2^-3 = 1 / (2 * 2 * 2) = 1/8 = 0.125. Working with powers of 10: 10^-5 = 1/10^5 = 0.00001 and 10^5 = 1/10^-5 = 100000. (This is a good example because you can tie it back to scientific notation.)

• Kevin Cunningham: I like that a lot … we talked about inverses and reciprocals but there was still a sense of confusion here. We’re coming back to it and I’m sure they’ll get there. I’ll make sure I make the link back to standard form (scientific notation) tomorrow … good link to make!

2. I like your new way of explaining it Kevin. I too would say that 6^3 is 6 times by itself 3 times and it’s never sat comfortably with me, but I couldn’t think of a succinct way to say it better. And they seemed to understand (although it surprises me how many times they still assume that 6^3, for example, is 18. No … “that would be 6+6+6, not 6x6x6”, I tell them. Any suggestions on how to beat that one?)
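To see the “1 times by 6 three times” idea in action, here is a tiny illustrative snippet (Python; the helper function is hypothetical, just mirroring the argument above):

```python
def power(base, exp):
    """Start from 1, then multiply (or divide) by `base` |exp| times."""
    result = 1
    for _ in range(abs(exp)):
        result = result * base if exp >= 0 else result / base
    return result

print(power(6, 3))   # 216: 1 x 6 x 6 x 6
print(power(6, 0))   # 1: multiply by 6 zero times, leaving the 1
print(power(6, -3))  # 0.00462962...: 1 / 6 / 6 / 6 = 1/216
```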
2017-12-17 17:46:45
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 5, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.6153964996337891, "perplexity": 722.3012283673365}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2017-51/segments/1512948597295.74/warc/CC-MAIN-20171217171653-20171217193653-00771.warc.gz"}
https://chemistryrack.com/avogadros-law/
# Avogadro’s Law Definition Graph Formula Application

## Avogadro’s law definition and explanation

Avogadro’s law states that equal volumes of different gases, under the same conditions of temperature and pressure, contain the same number of molecules.

Avogadro’s law is a special case of a more general law, the ideal gas law, which is briefly explained below.

#### The law of “ideal gases” reads as follows:

The amount of gas matter (the “number of moles”) is

• proportional to the volume of the gas,

• proportional to the gas pressure,

• inversely proportional to the absolute temperature,

• and independent of the nature of the gas:

n = PV / (RT)

### SI units:

n (mol), P (Pa), V (m³), T (K)

#### Where R is:

R is the ideal gas constant, independent of the nature of the gas: R = 8.31 J·K⁻¹·mol⁻¹.

### When a gas is considered ideal:

A gas is considered “ideal” if it strictly follows the previous law. No real gas is truly ideal, but a gas comes closer to this model the further it is from its liquefaction conditions.

One mole of gas occupies approximately 22.4 liters at normal temperature and pressure conditions (CNTP), which correspond to a pressure of 1 atmosphere (1013 hPa) and a temperature of 0 °C.

### Under usual conditions:

Under the usual conditions of temperature and pressure (in the laboratory, for example), about 1 bar and 20 °C, one mole of gas occupies approximately 24.4 liters.
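A minimal numerical sketch of the relation n = PV/(RT) in SI units (Python; the function and test values are illustrative, not from the source):

```python
R = 8.314  # ideal gas constant, J K^-1 mol^-1

def moles(p_pa, v_m3, t_k):
    """Amount of gas, n = PV / (RT), with P in Pa, V in m^3, T in K."""
    return p_pa * v_m3 / (R * t_k)

# One mole at CNTP (1 atm = 101325 Pa, 0 degC) occupies ~22.4 L:
print(moles(101325, 22.4e-3, 273.15))  # ~1.00 mol
# One mole at ~1 bar and 20 degC occupies ~24.4 L:
print(moles(1e5, 24.4e-3, 293.15))     # ~1.00 mol
```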
2022-12-05 07:30:59
{"extraction_info": {"found_math": false, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.8968216180801392, "perplexity": 1766.4518646014867}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.3, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2022-49/segments/1669446711013.11/warc/CC-MAIN-20221205064509-20221205094509-00197.warc.gz"}
https://www.physicsforums.com/threads/dipole-anisotropy-expansion.321149/
# Dipole anisotropy expansion 1. Jun 20, 2009 ### nicksauce In Weinberg's cosmology book, section 2.4 we have $$T' = \frac{T}{\gamma(1+\beta\cos{\theta})}$$ He then claims, "Expanding in powers of beta, the temperature shift can be expressed as a sum of Legendre polynomials" $$\Delta T = T' -T = T\left(-\frac{\beta^2}{6} - \beta P_1(\cos{\theta}) + \frac{2\beta^2}{3}P_2(\cos{\theta}) + ...\right)$$ Can someone help me fill in the algebra here? I really am having a hard time seeing where this is coming from. 2. Jun 21, 2009 ### nicksauce Alright, so I am able to get it to work, if I expand the original function as a Taylor series, then go back and write it in terms of the Legendre Polynomials (up to second order). But I am hoping/wondering if there is a more elegant way to achieve the final result. 3. Jun 21, 2009 ### Chalnoth I'm not sure. I mean, the Taylor series expansion is exceedingly simple for this function, so it may be possible to make use of one of the recurrence relations of the Legendre polynomials to transform the Taylor expansion into an expansion in Legendre polynomials. But that would seem to be a fair bit of work.
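Not from the original thread, but for anyone who wants the algebra checked by machine: a minimal sympy sketch that expands $T'-T$ to second order in $\beta$ and projects onto the Legendre polynomials via their orthogonality on $[-1,1]$:

```python
import sympy as sp

beta, x = sp.symbols('beta x')        # x stands for cos(theta)
gamma = 1 / sp.sqrt(1 - beta**2)
dT = 1 / (gamma * (1 + beta*x)) - 1   # (T' - T)/T, setting T = 1

expansion = sp.series(dT, beta, 0, 3).removeO().expand()
for n in range(3):
    # a_n = (2n+1)/2 * integral of dT * P_n(x) over x in [-1, 1]
    a_n = sp.Rational(2*n + 1, 2) * sp.integrate(
        expansion * sp.legendre(n, x), (x, -1, 1))
    print(n, sp.simplify(a_n))
# prints: 0 -beta**2/6, 1 -beta, 2 2*beta**2/3, matching Weinberg's result
```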
2017-12-11 21:51:17
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 1, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.8180338740348816, "perplexity": 268.69518910028745}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.3, "absolute_threshold": 20, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2017-51/segments/1512948514051.18/warc/CC-MAIN-20171211203107-20171211223107-00497.warc.gz"}
https://www.nature.com/articles/s41598-017-13263-w?error=cookies_not_supported&code=db88f27c-3f4a-4c77-8588-9f91e803306f
# Methane fluxes from coastal sediments are enhanced by macrofauna

## Abstract

Methane and nitrous oxide are potent greenhouse gases (GHGs) that contribute to climate change. Coastal sediments are important GHG producers, but the contribution of macrofauna (benthic invertebrates larger than 1 mm) inhabiting them is currently unknown. Through a combination of trace gas, isotope, and molecular analyses, we studied the direct and indirect contribution of two macrofaunal groups, polychaetes and bivalves, to methane and nitrous oxide fluxes from coastal sediments. Our results indicate that macrofauna increases benthic methane efflux by a factor of up to eight, potentially accounting for an estimated 9.5% of total emissions from the Baltic Sea. Polychaetes indirectly enhance methane efflux through bioturbation, while bivalves have a direct effect on methane release. Bivalves host archaeal methanogenic symbionts carrying out preferentially hydrogenotrophic methanogenesis, as suggested by analysis of methane isotopes. Low temperatures (8 °C) also stimulate production of nitrous oxide, which is consumed by benthic denitrifying bacteria before it reaches the water column. We show that macrofauna contributes to GHG production and that the extent is dependent on lineage. Thus, macrofauna may play an important, but overlooked role in regulating GHG production and exchange in coastal sediment ecosystems.

## Introduction

Methane (CH4) and nitrous oxide (N2O) in the atmosphere constitute a severe threat to Earth’s climate, with up to 28 and 265 times greater warming potential than carbon dioxide (CO2), respectively1. Human activities such as industrial production, intensive agriculture, and livestock farming have substantially increased the levels of greenhouse gas (GHG) emissions. Recent estimates report a 150% and 20% growth of atmospheric CH4 and N2O levels, respectively, since 1750, which is unprecedented over the last 800,000 years1. Anthropogenic pressures have also strongly altered aquatic ecosystems because agricultural expansion and increased use of synthetic fertilizers have caused extensive nutrient enrichment in near-coastal water2. This condition, also known as eutrophication, has been recognized to be the principal driver for the enhanced GHG flux from aquatic environments3. According to recent budgets, shallow aquatic systems may contribute ~10% of global N2O emissions4. There is no clear consensus on the contribution of these environments to the global CH4 emission because source magnitude and variability remain highly uncertain5. However, up to 30–40% of the methane emissions may be due to methane produced in sediments of aquatic ecosystems4. The largest part of the metazoan biomass in coastal sediments is contributed by macrofauna, i.e., invertebrates with body dimension exceeding 1 mm6. Through reworking and bioirrigation, macrofaunal activities profoundly impact biogeochemical processes and microbial diversity7,8,9. In recent years, a debate has arisen whether benthic invertebrates would be effective in counteracting human pressures10,11.
Supporters of this paradigm have proposed that bivalve activity may alleviate nutrient loading because of high turnover rates of nutrients by incorporation into shellfish, which are subsequently removed from the ecosystem for human consumption12,13. Apart from these potential ecological benefits, bivalve farming would be expected to resolve social and economic issues worldwide, as mollusk production has accounted for more than 70% of all mariculture since 197014. Critics, however, argue that macrofauna would increase rather than reduce internal nutrient loading because of high ammonium regeneration associated with invertebrate excretion and the stimulation of bacteria carrying out dissimilatory nitrate reduction to ammonium15,16,17,18. In this discussion, the impact of macrobenthos on GHG release is much less understood. The role of coastal benthic macrofauna in mediating gas release is still amply debated since the mechanisms regulating production and transport of gases by invertebrates are largely unknown. Recently, however, it was demonstrated that most of these organisms produce N2O in their digestive tracts19. Thus, bivalves isolated from coastal sediments were shown to be strong emitters of N2O20,21. However, it is not clear from these studies whether the N2O produced by bivalves reaches the water column or is reduced to dinitrogen by denitrifying bacteria living in the sediment. Recent investigations suggest that chironomid larvae significantly stimulate the sedimentary release of N2O to the water column22, while N2O and CH4 release does not significantly increase along with tubificid oligochaete abundance23. However, a recent study assessing urban wetlands showed that CH4 and CO2 fluxes correlated with tubificid abundance24. Experimental work with manipulated Baltic Sea sediment suggested that bivalves may induce a seven- to ten-fold increase in CH4 efflux compared to sediment without macrofauna16, but no systematic studies have been conducted to investigate direct CH4 production by benthic fauna and to quantify their impact on benthic GHG release. Here, we report on direct and indirect GHG release from two ubiquitous macrofaunal organisms, the bivalve Limecola balthica (formerly named Macoma balthica) and the polychaete Marenzelleria arctia, representatives of two of the most common macrofaunal groups inhabiting Baltic Sea sediments. The specific aims of the study were to: (1) test if and by how much macrofauna alters sediment-water fluxes of CH4 and N2O; (2) quantify direct macrofaunal release of these GHGs; (3) estimate the carbon source of methanogenesis associated with the bivalve microbiome; (4) quantify methanogenic symbionts associated with macrofauna. Our study provides, to our knowledge, the first information to date on direct CH4 and N2O release by macrofaunal organisms characterized by different functional traits, and their regulation of methanogenic activity in coastal marine sediments.

## Results

### Sediment core experiment for fluxes of methane and nitrous oxide

A sediment core incubation experiment was carried out to quantify macrofauna alteration of sediment-water fluxes of CH4 and N2O one day (day 1) and ten days (day 10) after macrofauna addition. The sediment used for incubations (0–15 cm layer) had an average carbon content of 5.5% (Volker Brüchert, unpubl. data). Analysis of water samples from the incubation tank revealed that the oxygen concentrations were constant during the ten days of the experiment as they were 348 µM at day 1 and 350 µM at day 10.
Ammonium concentrations were always low (<1.2 µM). Concentrations of nitrate changed significantly during the course of the experiment (P = 0.003): they were 2.2 ± 0.1 µM at day 1 and increased to 6.2 ± 0.2 µM at day 10. Macrofauna survival after sediment core incubations was 100% for L. balthica and 92% for M. arctia, which indicates good ecological conditions for these two species during our experiment. Fluxes of CH4 were always directed from the sediment to the water column (Fig. 1a), and fluxes in the treatments with macrofauna were significantly higher than those in the control sediments without macrofauna (Table 1). The increase in CH4 flux caused by polychaetes was more pronounced at day 1 than at day 10, while bivalves stimulated the CH4 flux more at day 10 than at day 1 (Fig. 1a). However, results from the Scheirer-Ray-Hare test showed that the differences in CH4 effluxes between day 1 and day 10 were not significant (Table 1). Nitrous oxide fluxes were directed from the water column into the sediment at day 1 (= uptake), while their direction was reversed at day 10 (= efflux) (Fig. 1b). There were no differences in fluxes of N2O between treatments, but these fluxes were significantly different between day 1 and day 10 (Table 1).

### Methane and nitrous oxide production by macrofauna

Direct quantification of CH4 and N2O in serum bottles incubated with filtered bottom water and with macrofaunal specimens showed that gas production was detectable and linear over time (i.e., showing no lag phase) (Supplementary Fig. 1). The control treatments of filtered bottom water only (treatments C) did not result in a significant increase of CH4 and N2O concentrations over time (data not shown). CH4 production differed significantly among animals (Table 2). The highest values were measured in bivalve incubations under oxic conditions (B ox) (Fig. 2a). These production rates were significantly higher than in both treatments with polychaetes (P anox and P ox), while they were not significantly different from B anox (Table 2). Production of N2O was also significantly different among animal treatments (Table 2). Treatments with bivalves produced significantly more N2O than treatments with polychaetes (Table 2). Although the animals produced more N2O under anoxic than oxic conditions (Fig. 2b), the difference was not statistically significant (Table 2). It is thus clear that L. balthica individuals, or the microbes associated with the bivalves, produced more CH4 and N2O compared to M. arctia individuals.

### Methane carbon isotope composition and mcrA genes in L. balthica

The δ13C-CH4 in the control bottles (average ± standard error), representing the composition of bottom water, was −50.1 ± 2.1‰, while the δ13C-CH4 in the bottles containing CH4 associated with bivalve activity was −55.1 ± 0.6‰. There was a significant decrease in the δ13C-CH4 signal from the bottom water without bivalves to water with methanogenesis associated with bivalves (Table 2). Animals kept in anoxic conditions led to δ13C signatures that were more negative (−56.3 ± 0.2‰) than those associated with animals incubated in oxic conditions (−53.4 ± 0.4‰). Abundances of mcrA genes were 1.4 × 10¹⁰ ± 3.5 × 10⁹ g⁻¹ wet weight (average ± standard error; n = 10) in the body of starved bivalves and 8.3 × 10⁹ ± 2.3 × 10⁹ g⁻¹ wet weight (average ± standard error; n = 10) in the body of satiated bivalves (Supplementary Fig. 2).
The results of ANOVA tests show that there was no statistical difference in mcrA gene abundances between starved and satiated bivalves (Table 2).

## Discussion

This study shows that the efflux of methane from coastal marine sediments is enhanced after colonization by macrofauna. Polychaetes of the genus Marenzelleria can colonize the deep sediment layers down to 7–15 cm depth15,25, which coincides with the zone of sedimentary methane production in these low saline coastal sediments26,27. Our results indicate that polychaetes mobilize pore-water methane and increase the methane efflux from the sediment to the water column right after their colonization by a factor of eight compared to bare sediments. This pore-water flushing is also seen when polychaetes recolonize sediments together with bivalves because the methane efflux in these treatments was seven times higher than in sediments without macrofauna. These findings substantiate those from a previous study reporting a positive correlation between benthic methane flux and polychaete biomass in the sediment15 and suggest that the flush-out effect described for porewater nutrients28 is also effective for methane. The bivalve L. balthica is normally active in the upper 2–5 cm of sediment in the Baltic Sea16,29, where pore-water methane concentrations are usually low26,27. Thus, it is likely that bivalves do not induce the same immediate effect on methane efflux after colonization as polychaetes and that the increase in the methane flux (up to a factor of eight compared to bare sediments) may be due to methanogenesis directly associated with the bivalve16. This hypothesis is further corroborated by the results from the in vitro animal incubations that show considerable production of methane by the bivalve L. balthica, and to a lesser extent by the polychaete M. arctia (Fig. 2A). These results, together with the high abundances of mcrA genes quantified in both starved and satiated bivalves, indicate that the bivalve body, i.e., the anoxic intestine, is colonized by active methanogens (Supplementary Fig. 2). Thus, both macrofaunal groups contribute to methane release, but the extent and mechanisms behind the stimulation are dependent on the different functional traits. Our study presents evidence for direct macrofauna GHG release from in vitro incubations at low in situ temperatures, contrary to previous studies in which animals were incubated at room temperature19,20. Our investigations also confirm that coastal marine invertebrates release nitrous oxide, which may be produced by microorganisms that are assimilated with the ingested food19,20. Production rates of nitrous oxide by L. balthica were about half those reported in a previous study investigating the same organism20, and this difference may be due to the lower temperature selected for our experiment (8 °C) compared to the former assessment (21 °C). Recent studies revealed that nitrous oxide production is associated with the activity of denitrifying bacteria present in the anoxic gut of aquatic invertebrates, and depends on the nitrate concentration in the gut22,30,31. Nitrate concentrations in the gut of nitrous oxide-producing macrofauna typically range from a few to several hundred µM30,32. We did not quantify the gut nitrate concentration in this study, but based on the analysis of the incubation water, nitrate was always present at concentrations of 8–9 µM, which was sufficient to sustain denitrification in the anoxic gut of the invertebrates.
Nitrification can also lead to nitrous oxide production, but this process requires oxygen and has so far only been described in shell biofilms33,34. Hence, we cannot exclude that part of the nitrous oxide produced by bivalves was due to nitrification. Body weight has been suggested to be the main factor correlating with nitrous oxide production, as larger animals have larger guts and ingest more microbes than smaller organisms20. Since the invertebrates in our study had similar body weights this factor seems improbable, and nitrous oxide production was more likely mediated by the invertebrate feeding mode. L. balthica feeds mainly on deposited phytodetritus, suspended particles, and microorganisms present in the top centimeters of the sediment, where denitrifiers are abundant. M. arctia is a strict deposit feeder and lives buried in deeper, often sulfidic sediment layers15, which suggests that its diet may be poor in denitrifiers. The results from the core experiments show that the initial benthic uptake of nitrous oxide reversed to a benthic efflux after ten days (Fig. 1b). High nitrate concentrations lead to high and almost exclusive nitrous oxide yields in nature35,36. The nitrous oxide uptake after one day might thus be due to the low nitrate concentrations (2.2 µM) in the overlying water. As macrofauna produced nitrous oxide in our investigations, it is likely that it was consumed by bacterial denitrification before reaching the overlying water, as denitrification (but not nitrification) can be a sink for nitrous oxide37. Higher nitrate concentrations (6.2 µM) in the overlying water after the ten-day period were associated with a nitrous oxide efflux in all treatments. We interpret the higher concentration to be due to an increase in nitrification activity associated with a larger surface area by newly created burrow walls38. Both the higher concentrations of nitrate and the increase in nitrification activity may have contributed to the net benthic release of nitrous oxide with time. Although this appears to be the most plausible explanation for our results with the available data, the reversal of benthic nitrous oxide flux with acclimatization of the animals to experimental conditions should be addressed in depth in future investigations as this phenomenon comes along with important environmental implications. Biogenic methane is produced by archaea under anoxic conditions primarily by carbon dioxide reduction coupled to hydrogen oxidation (hydrogenotrophic methanogenesis) resulting in strongly 13C-depleted methane (more negative δ13C) or by acetate fermentation (acetotrophic methanogenesis) resulting in relatively less 13C-depleted methane (less negative δ13C)39. In lacustrine sediments, acetotrophy is more important than hydrogenotrophy39, while in the Baltic hydrogenotrophic methanogenesis rates can be up to one order of magnitude higher than those of acetate methanogenesis40. In the case of ruminants and digestive tracts of humans and insects hydrogenotrophy also prevails (cf. Fig. 3 in ref.41). The isotopic signature of the methane produced by bivalves was significantly more depleted in 13C than its surrounding water, which may imply a stronger contribution of hydrogenotrophic methanogenesis in marine invertebrates than in the surrounding sediment. We cannot exclude that methane oxidation co-occurred during incubation, which would make the residual methane enriched in 13C42. 
In incubations done in anoxic, helium-purged waters, all the residual CH4 was produced by the invertebrates. Values from these anoxic incubations were slightly more negative (−56.3‰) than the others that were oxic from the start (−53.4‰). The isotopic signal from oxic incubations was almost exclusively (>90%) due to CH4 produced by the invertebrates, indicating that the difference (2.9‰) could be attributed to methane oxidation activity. Analysis of the mcrA gene has successfully been applied to monitor methanogens in soil invertebrate guts43,44. Detection of mcrA genes in starved bivalves suggests that the methane was not produced by ingested microbes. However, locating the methanogens would only have been possible with, for example, FISH45. Regardless, our results show similar abundances of mcrA genes between starved and satiated bivalves, indicating that methanogens were not associated with ingested food, but were rather symbionts colonizing the inner, anoxic digestive system (i.e., gut) as for other invertebrates43,44,46. The high variations in abundances of mcrA genes between different specimens (Supplementary Fig. 2) may explain the high variation in methane produced by bivalves in oxic conditions (Supplementary Fig. 1). Thus, methane producers can be found in 30–50% of human intestines47. Because of this trait, about half of humanity has the capacity to produce methane. Based on the high gene and rate variations, we cannot exclude that also invertebrate guts may either have or not have the capacity to produce conspicuous amounts of methane. A symbiosis between methane-cycling microorganisms and bivalves was previously reported48, but the microbes were dependent on, rather than producing, methane. In our study, the symbiosis might be based on hydrogen gas produced in the intestinal tract by fermentation, also in light of the fact that hydrogenotrophy prevailed when animals were present. As in the case of rumen symbiosis, the animals might benefit from the microbiota converting refractory polymers into more labile compounds, which can be digested and absorbed by the animal49. Methanogens are strict anaerobes but the average methane production by bivalves was higher in oxic than in anoxic conditions (Fig. 2a), although this difference was not significant because of the high variation in methane production in oxic conditions (Supplementary Fig. 1). Anoxia is expected to inhibit bivalve activity because the proliferation of fast-growing sulfate-reducing bacteria (SRB) in the incubation medium would damage the animal tissues50. SRB outcompete methanogens for the mutual substrate hydrogen both in intestines51 and in sediments52. These observations suggest that the symbiosis was less beneficial under strictly anoxic conditions when bivalve performance was likely reduced and the archaeal symbionts were outcompeted by SRB50. Sediments underlying oxic water in the Baltic Sea are dominated by a macrofaunal community characterized by Marenzelleria spp. and Limecola balthica 53,54. By scaling up our benthic methane flux from sediments inhabited by the Marenzelleria/Limecola community (12.2 µmol C m−2 d−1) to the area of the Baltic Sea with oxygenated bottom water (349,133 km2)55, the extrapolation results in a total benthic flux of 18.7 Gg C year−1 from sediments inhabited by macrofauna (Fig. 3). 
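A back-of-envelope check of this upscaling (a minimal sketch assuming 365 days per year and 12 g C per mol C, values not stated explicitly in the text):

```python
flux_mol = 12.2e-6        # mol C m^-2 d^-1, benthic CH4 efflux with macrofauna
area_m2 = 349_133 * 1e6   # 349,133 km^2 of oxygenated Baltic Sea bottom

gg_c_per_year = flux_mol * 12 * 365 * area_m2 / 1e9   # grams -> Gg C yr^-1
print(round(gg_c_per_year, 1))                        # ~18.7

# With 85-95% of the methane oxidized in the water column, 5-15% escapes:
print(round(0.05 * gg_c_per_year, 1),                 # ~0.9
      round(0.15 * gg_c_per_year, 1))                 # ~2.8
```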
This study did not quantify rates of methane consumption in the water column, but literature suggests that methane concentrations are reduced, due to efficient methane oxidation, by 85–95% in the Baltic sea coastal area (20–25 m)56,57. Considering that the average depth of the oxic Baltic Sea is well within this depth range of 20–25 m, we conclude that macrofauna-inhabited sediment may be a source of atmospheric methane ranging between 0.9 and 2.8 Gg C year−1. If the Baltic sediments were devoid of macrofauna these would only contribute to emissions ranging between 0.1 and 0.4 Gg C year−1 (Fig. 3). Thus, with the caveats that field-based methane oxidation experiments are needed and that macrofaunal communities are more complex than we assumed here, our results indicate that this contribution should be taken into account in biogeochemical models. In the case of the Baltic Sea, the emission caused by macrofauna would represent an average of 9.5% of the overall methane emission, which was estimated to be 0.02 Tg C year−158. Our experimental data together with these extrapolations provide evidence that sediment macrofauna contributes significantly to GHG effluxes from coastal marine sediments. We propose that the macrofaunal GHG contribution may be of particular importance in shallow-water environments, where the gas is emitted to the atmosphere from oversaturated waters e.g.27,58,59. More systematic studies should be carried out to investigate the impacts of different invertebrate traits and activities on biogeochemical processes. These experiments should analyze the production of climate-important metabolic products, such as methane, nitrous oxide, carbon dioxide by invertebrates, but also include overlooked climate stressors such as nitric oxide, when it comes to deciding whether to use, for example, bivalve farming as a nutrient reduction measure in the marine environment. ## Methods ### Sampling Sampling was performed at two coastal sites in the Baltic Sea: sediment and bottom water were collected in Tvären Bay (50 m depth; 58°46′N 17°25′E), while macrofaunal specimens of Limecola balthica and Marenzelleria arctia were collected in Uttervik (28 m depth; 58°50′N 17°31′E). Sediment was collected with a boxcorer, bottom water (salinity 6.8; temperature 8.0 °C) with a Niskin bottle and macrofauna with a benthic sledge. Tvären sediments are naturally poor in macrofauna because the basin undergoes seasonal hypoxia in late summer and autumn. However, in spring, the bottom water was fully saturated with oxygen and the sediment was oxidized down to 3–4 cm depth. Immediately after collection, samples were transported to the Stockholm University Marine Research Centre at Askö (Stockholm archipelago), where they were placed in a climate-controlled room at in situ temperature. All experiments were conducted at 8 °C in the laboratories of Stockholm University. ### Bioturbation effect on methane and nitrous oxide fluxes Two incubations of sediment cores with and without macrofauna specimens were carried out to test the effect of macrofauna bioturbation on gas fluxes one day after animal addition and after ten days of acclimation. Sediment cores (n = 28; 4.6 cm inner diameter and 30 cm length) were placed in a sediment core incubator filled with bottom water. Stirring mechanisms and water pumps were added to keep the water oxygenated. Macrofauna specimens were added to the sediment cores, which were intact and naturally devoid of macrofauna. 
These experimental units were assigned to four different treatments (n = 7 per treatment): (1) microcosms with addition of six polychaetes (M. arctia) in each sediment core (P); (2) microcosms with addition of three polychaetes and three bivalves (L. balthica) (P + B); (3) microcosms with addition of six bivalves (B); (4) microcosms consisting of intact, non-manipulated sediment cores (C). The manipulated macrofaunal abundance was 3612 ind. m−2, which lies in the range of the abundances recently reported for the Baltic Sea53,54,60. The average weights of M. arctia and L. balthica specimens were not significantly different (p > 0.05; ANOVA) and were 34.7 ± 6.9 and 33.3 ± 8.7 mg wet weight (WW) ind.−1, corresponding to 1.7 ± 0.3 and 2.4 ± 0.4 mg dry weight (DW) ind.−1, respectively. This resulted in biomasses of M. arctia (21 g WW m−2) and L. balthica (20 g WW m−2) that were similar to those reported for the oxic Baltic Sea basins53,54. After one day of acclimation (day 1), fluxes of methane (CH4) and nitrous oxide (N2O) between the sediment and the water column were experimentally determined following the procedure described in Bonaglia et al.16. Briefly, each microcosm was capped with rubber stoppers, while avoiding trapping gas bubbles, and its overlying water was stirred with magnetic stirrers. Water samples for CH4 and N2O concentrations were taken at the beginning and the end of the incubation, transferred to 12 mL Exetainer vials (Labco Scientific), and biological activity was stopped by adding 100 μL ZnCl2 (7 M). Oxygen (O2) concentrations were monitored before and after the incubation in each microcosm using a calibrated mini sensor (OX-500, Unisense, Denmark). The incubations always consumed ≤27% (on average 18%) of the initial O2 saturation value. The incubation was terminated after 8 h, when the microcosms were left uncapped while stirring to return to O2 saturation. After ten days of acclimation (day 10), a second incubation experiment was carried out observing the same conditions as day 1, but the incubation time was increased to 10 h. Ammonium (NH4+) and nitrate (NO3−) samples (n = 5) were collected from the water tank at day 1 and at day 10 and immediately filtered using 0.2 μm polyethersulfone (PES) filters. Concentrations of CH4 and N2O in the water samples were analyzed by headspace analysis on a gas chromatograph (SRI 8610 C) equipped with a flame ionization detector for CH4 and an electron capture detector for N2O, using dinitrogen (N2) as carrier gas59. Precision was ±1 nM and ±0.2 nM for CH4 and N2O concentrations, respectively. Net fluxes across the sediment–water interface were calculated from the difference in concentrations in the water column through the incubation period16. Concentrations of NH4+ and NO3− were determined colorimetrically on a segmented flow nutrient analyzer system (OI Analytical, Flow Solution IV).

### Direct gas production associated with macrofauna

Glass bottles (n = 30; 50 mL volume) with oxic and anoxic filtered in situ water were incubated with and without the addition of animals to check if macrofauna symbionts were producing CH4 and N2O. Specimens of L. balthica and of M. arctia were carefully washed three times with 0.2 µm-filtered bottom water.
The incubation bottles were assigned to six treatments (n = 5 replicates per treatment): (1) vials with five bivalves in oxic water (B ox); (2) vials with five polychaetes in oxic water (P ox); (3) control vials with oxic water (C ox); (4) vials with five bivalves in anoxic water (B anox); (5) vials with five polychaetes in anoxic water (P anox); (6) control vials with anoxic water (C anox). All bottles were prepared with 5 mL glass beads (1 mm Ø) that served as digging substratum for infauna, and received either 15 mL of 0.2 µm-filtered oxic water (treatments B, P and C ox) or 15 mL of filtered anoxic water (treatments B, P and C anox), which was prepared by bubbling bottom water with a mixture of He (99.9%) and CO2 (0.1%) for 20 min and did not alter the in situ pH and alkalinity significantly. Glass bead sizes <1 mm were avoided because they may be ingested and retained by benthic macrofauna61. After the bottles were capped with butyl septa, water in the anoxic treatments was additionally bubbled for 10 min in order to avoid any potential O2 contamination59. The bottles were incubated in a temperature-controlled water bath at 8.0 ± 0.1 °C. Headspace samples were retrieved four times from each bottle during an incubation period of 16 h. Samples were directly injected for CH4 and N2O concentrations following the procedure described above. Concentrations of CH4 and N2O in the bottle headspace were used to calculate the animal production over time and were standardized per g WW. Average animal biomasses are reported in the section above.

### Methane isotopes to discriminate CH4 production pathways in L. balthica

Seven random bottles from the previous experiment that contained bivalves were further processed for stable carbon isotope composition of CH4 to differentiate between different pathways of CH4 production (i.e., the carbon source) associated with the bivalve microbiome. Seven bottles filled with in situ bottom water were considered as controls. Because of low CH4 concentrations in bottles with polychaetes and limited financial resources, these were not analyzed for their carbon isotope composition. The analysis was made using continuous-flow gas chromatograph isotope-ratio mass spectrometry at the Stable Isotope Laboratory at Stockholm University, following the protocol described in Wik62. Briefly, a Trace GC Ultra gas chromatograph was connected via the Conflo IV to a DeltaV plus mass spectrometer (Thermo Scientific). The Trace GC was equipped with a Split/Splitless injector, a 25 m PoraPlot Q capillary column (Varian, Inc.), and a combustion oven that oxidizes CH4 to CO2. Injections were made manually using a 100 µL glass gas-tight syringe (Hamilton, USA) and a sample split on the injector. The injection volume varied from 10 to 80 µL depending on sample CH4 concentration. Injections of 5 µL standard (100% CH4) were made before the first and after the last sample. The carbon isotopic signature of CH4 was calculated in standard delta (δ) notation relative to Vienna Pee Dee Belemnite (VPDB) using the following equation: $$\delta^{13}\mathrm{C}=\left(\frac{R}{R_{\mathrm{VPDB}}}-1\right)\times 1000\,\text{‰}$$ where R is the 13C/12C ratio in the samples and RVPDB is the ratio in the VPDB standard. The δ values are expressed in per mil (‰). The analytical precision was 0.3‰.
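The delta computation itself is a one-liner; a minimal sketch with an illustrative isotope ratio (the VPDB reference ratio below is the commonly used literature value, not a number from this paper):

```python
R_VPDB = 0.0112372   # 13C/12C of the VPDB standard (commonly cited value)

def delta13C(r_sample):
    """delta 13C in per mil relative to VPDB."""
    return (r_sample / R_VPDB - 1) * 1000

print(round(delta13C(0.0106181), 1))  # ~ -55.1 per mil, cf. bivalve incubations
```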
### Detection of methanogens in L. balthica

To detect and quantify methanogenic symbionts inside macrofaunal guts or other tissues, entire specimens were processed for quantitative Polymerase Chain Reaction (qPCR) analysis. Since bivalves produced more CH4 than polychaetes, and because of limited financial resources, this analysis was carried out for bivalves only. Briefly, bivalve specimens (n = 20) were carefully washed five times in 0.2 µm-filtered bottom seawater. Ten specimens were directly frozen at −80 °C, while ten other specimens were placed in clean water, i.e., in a 0.2 µm-filtered bottom-water bath in a climate-controlled room at in situ temperature (8 °C) for 15 h to clean their gut. Water was replaced twice during the gut-cleaning process to avoid feces. After cleaning and three rinsing steps in 0.2 µm-filtered water, the animals were frozen at −80 °C. In the laboratory, the soft parts of the animals were carefully removed from the shells and washed in sterile seawater. The DNA was extracted using the DNeasy Blood and Tissues Kit (QIAGEN), following the manufacturer’s instructions. The detection and quantification of the key gene for methanogenesis, mcrA (encoding the alpha subunit of the methyl-coenzyme M reductase), was performed using the specific primers ME3MF (ATGTCNGGTGHGTMGGSTTYAC) and ME2r′ (TCATBGCRTAGTTDGGRTAGT)63,64 in 35 cycles at an annealing temperature of 60 °C. qPCR conditions were: 500 nM of each primer, 5 µL of DNA template, 12.5 µL of SsoAdvanced™ Universal SYBR® Green Supermix (Bio-Rad) and, following the manufacturer’s recommendations, nuclease-free, sterile deionized water added to a final volume of 25 µL. The standard curve was calibrated in ten-fold dilutions ranging from 10⁰ to 10⁻⁵ using DNA from Methanoculleus marisnigri (DSMZ 1498). All reactions were run in 96-well qPCR plates using a CFX96 Touch™ Real-Time PCR Detection System (C1000 Touch™ Thermal Cycler, Bio-Rad) and its software. qPCR quantifications of mcrA genes in sample extracts and in standard series were performed in triplicate alongside negative controls to rule out laboratory contamination. The total gene copy numbers per gram of animal were calculated from the triplicate sample averages as previously described65, and by assuming one copy of the mcrA gene per genome. The qPCR amplification efficiency was 96.3% and the slope was −3.41.

### Data analysis

Statistical tests were performed in order to detect differences in the investigated parameters among treatments. Homogeneity of variance of the dataset was checked using Cochran’s test. When ANOVA assumptions were met, one-way analysis of variance tests were performed. When the variance was found to be heterogeneous, non-parametric tests were used (Kruskal-Wallis test). Pairwise post hoc comparisons among treatments were performed by Tukey HSD test. A two-way non-parametric ANOVA (Scheirer-Ray-Hare test) with incubation day and treatment as factors was performed for testing differences in gas fluxes from sediment core incubations. The Mann-Whitney Rank Sum test was performed to assess differences in nitrate concentrations between day 1 and day 10. Statistical analyses were performed with SigmaPlot 13.0 (Systat Software, CA, USA). If not stated otherwise in the text, measurements are reported as average ± standard error (s.e.m.).

### Data availability

Additional supporting data to the article can be found in the supplementary material. All raw data are available upon request by email to the corresponding author.

## References

1. Pachauri, R. K.
et al. Climate change 2014: synthesis Report. Contribution of working groups I, II and III to the fifth assessment report of the intergovernmental panel on climate change. (IPCC, 2014). 2. Carpenter, S. R. et al. Nonpoint pollution of surface waters with phosphorus and nitrogen. Ecol. Appl. 8, 559–568 (1998). 3. Davidson, T. A. et al. Eutrophication effects on greenhouse gas fluxes from shallow‐lake mesocosms override those of climate warming. Global Change Biol. 21, 4449–4463 (2015). 4. Bakker, D. C. E. et al. In Ocean-Atmosphere Interactions of Gases and Particles (eds Peter S. Liss & Martin T. Johnson) 113–169 (Springer Berlin Heidelberg, 2014). 5. Saunois, M. et al. The global methane budget 2000–2012. Earth Syst. Sci. Data 8, 697–751 (2016). 6. Rex, M. A. et al. Global bathymetric patterns of standing stock and body size in the deep-sea benthos. Mar. Ecol. Prog. Ser. 317, 1–8 (2006). 7. Kristensen, E. & Kostka, J. E. In Interactions Between Macro- and Microorganisms in Marine Sediments (eds E. Kristensen, R. R. Haese, & J. E. Kostka) 125–158 (AGU, 2005). 8. Aller, R. C. Quantifying solute distributions in the bioturbated zone of marine sediments by defining an average microenvironment. Geochim. Cosmochim. Acta 44, 1955–1965 (1980). 9. Laverock, B. et al. Bioturbating shrimp alter the structure and diversity of bacterial communities in coastal marine sediments. ISME J 4, 1531–1544 (2010). 10. Kellogg, M. L. et al. Use of oysters to mitigate eutrophication in coastal waters. Estuar. Coast. Shelf Sci. 151, 156–168 (2014). 11. Stadmark, J. & Conley, D. J. Mussel farming as a nutrient reduction measure in the Baltic Sea: Consideration of nutrient biogeochemical cycles. Mar. Pollut. Bull. 62, 1385–1388 (2011). 12. Petersen, J. K. et al. Mussels as a tool for mitigation of nutrients in the marine environment. Mar. Pollut. Bull. 82, 137–143 (2014). 13. Rose, J. M., Bricker, S. B., Tedesco, M. A. & Wikfors, G. H. A Role for Shellfish Aquaculture in Coastal Nitrogen Management. Environ. Sci. Technol. 48, 2519–2525 (2014). 14. Campbell, B. & Pauly, D. Mariculture: A global analysis of production trends since 1950. Mar. Policy 39, 94–100 (2013). 15. Bonaglia, S. et al. Effect of reoxygenation and Marenzelleria spp. bioturbation on Baltic Sea sediment metabolism. Mar. Ecol. Prog. Ser. 482, 43–55 (2013). 16. Bonaglia, S., Nascimento, F. J. A., Bartoli, M., Klawonn, I. & Bruchert, V. Meiofauna increases bacterial denitrification in marine sediments. Nat. Commun. 5, 5133 (2014). 17. Murphy, A. E., Anderson, I. C., Smyth, A. R., Song, B. & Luckenbach, M. W. Microbial nitrogen processing in hard clam (Mercenaria mercenaria) aquaculture sediments: the relative importance of denitrification and dissimilatory nitrate reduction to ammonium (DNRA). Limnol. Oceanogr. 61, 1589–1604 (2016). 18. Nizzoli, D., Welsh, D. T., Fano, E. A. & Viaroli, P. Impact of clam and mussel farming on benthic metabolism and nitrogen cycling, with emphasis on nitrate reduction pathways. Mar. Ecol. Prog. Ser. 315, 151–165 (2006). 19. Stief, P., Poulsen, M., Nielsen, L. P., Brix, H. & Schramm, A. Nitrous oxide emission by aquatic macrofauna. Proc. Natl. Acad. Sci. USA 106, 4296–4300 (2009). 20. Heisterkamp, I. M., Schramm, A., de Beer, D. & Stief, P. Nitrous oxide production associated with coastal marine invertebrates. Mar. Ecol. Prog. Ser. 415, 1–9 (2010). 21. Welsh, D. T., Nizzoli, D., Fano, E. A. & Viaroli, P. 
Direct contribution of clams (Ruditapes philippinarum) to benthic fluxes, nitrification, denitrification and nitrous oxide emission in a farmed sediment. Estuar. Coast. Shelf Sci. 154, 84–93 (2015). 22. Poulsen, M., Kofoed, M. V. W., Larsen, L. H., Schramm, A. & Stief, P. Chironomus plumosus larvae increase fluxes of denitrification products and diversity of nitrate-reducing bacteria in freshwater sediment. Syst. Appl. Microbiol. 37, 51–59 (2014). 23. Nogaro, G. & Burgin, A. J. Influence of bioturbation on denitrification and dissimilatory nitrate reduction to ammonium (DNRA) in freshwater sediments. Biogeochemistry 120, 279–294 (2014). 24. Mehring, A. S., Cook, P. L. M., Evrard, V., Grant, S. B. & Levin, L. A. Pollution‐tolerant invertebrates enhance greenhouse gas flux in urban wetlands. Ecol. Appl. (2017). 25. Renz, J. R. & Forster, S. Are similar worms different? A comparative tracer study on bioturbation in the three sibling species Marenzelleria arctia, M. viridis, and M. neglecta from the Baltic Sea. Limnol. Oceanogr. 58, 2046–2058 (2013). 26. Thang, N. et al. The impact of sediment and carbon fluxes on the biogeochemistry of methane and sulfur in littoral Baltic Sea sediments (Himmerfjärden, Sweden). Estuaries and Coasts 36, 98–115 (2013). 27. Sawicka, J. E. & Brüchert, V. Annual variability and regulation of methane and sulfate fluxes in Baltic Sea estuarine sediments. Biogeosciences 14, 325–339 (2017). 28. Quintana, C. O., Kristensen, E. & Valdemarsen, T. Impact of the invasive polychaete Marenzelleria viridis on the biogeochemistry of sandy marine sediments. Biogeochemistry 115, 95–109 (2013). 29. Karlson, K., Hulth, S., Ringdahl, K. & Rosenberg, R. Experimental recolonisation of Baltic Sea reduced sediments: survival of benthic macrofauna and effects on nutrient cycling. Mar. Ecol. Prog. Ser. 294, 35–49 (2005). 30. Stief, P., Polerecky, L., Poulsen, M. & Schramm, A. Control of nitrous oxide emission from Chironomus plumosus larvae by nitrate and temperature. Limnol. Oceanogr. 55, 872–884 (2010). 31. Heisterkamp, I. M., Schramm, A., de Beer, D. & Stief, P. Direct nitrous oxide emission from the aquacultured Pacific white shrimp Litopenaeus vannamei. Appl. Environ. Microbiol. (2016). 32. Stief, P. & Schramm, A. Regulation of nitrous oxide emission associated with benthic invertebrates. Freshwat. Biol. 55, 1647–1657 (2010). 33. Svenningsen, N. B. et al. Shell biofilm nitrification and gut denitrification contribute to emission of nitrous oxide by the invasive freshwater mussel Dreissena polymorpha (zebra mussel). Appl. Environ. Microbiol. 78, 4505–4509 (2012). 34. Heisterkamp, I. M. et al. Shell biofilm‐associated nitrous oxide production in marine molluscs: processes, precursors and relative importance. Environ. Microbiol. 15, 1943–1955 (2013). 35. Weier, K. L., Doran, J. W., Power, J. F. & Walters, D. T. Denitrification and the dinitrogen/nitrous oxide ratio as affected by soil water, available carbon, and nitrate. Soil Sci. Soc. Am. J. 57, 66–72 (1993). 36. Blackmer, A. M. & Bremner, J. M. Inhibitory effect of nitrate on reduction of N2O to N2 by soil microorganisms. Soil Biol. Biochem. 10, 187–191 (1978). 37. Canfield, D. E., Glazer, A. N. & Falkowski, P. G. The evolution and future of earth’s nitrogen cycle. Science 330, 192–196 (2010). 38. Stief, P. Stimulation of microbial nitrogen cycling in aquatic ecosystems by benthic macrofauna: mechanisms and environmental implications. Biogeosciences 10, 7829–7846 (2013). 39. Whiticar, M. J., Faber, E. & Schoell, M. 
Biogenic methane formation in marine and freshwater environments: CO2 reduction vs. acetate fermentation—Isotope evidence. Geochim. Cosmochim. Acta 50, 693–709 (1986). 40. Parkes, R. J. et al. Biogeochemistry and biodiversity of methane cycling in subsurface marine sediments (Skagerrak, Denmark). Environ. Microbiol. 9, 1146–1161 (2007). 41. Garcia, J.-L., Patel, B. K. C. & Ollivier, B. Taxonomic, Phylogenetic, and Ecological Diversity of Methanogenic Archaea. Anaerobe 6, 205–226 (2000). 42. Barker, J. F. & Fritz, P. Carbon isotope fractionation during microbial methane oxidation. Nature 293, 289–291 (1981). 43. Ohkuma, M., Noda, S., Horikoshi, K. & Kudo, T. Phylogeny of symbiotic methanogens in the gut of the termite Reticulitermes speratus. FEMS Microbiol. Lett. 134, 45–50 (1995). 44. Šustr, V., Chroňáková, A., Semanová, S., Tajovský, K. & Šimek, M. Methane production and methanogenic archaea in the digestive tracts of millipedes (diplopoda). PloS one 9, e102659 (2014). 45. Kubota, K. CARD-FISH for environmental microorganisms: technical advancement and future applications. Microbes Environ 28, 3–12 (2013). 46. Brauman, A., Kane, M. D., Labat, M. & Breznak, J. A. Genesis of acetate and methane by gut bacteria of nutritionally diverse termites. Science 257, 1384–1387 (1992). 47. Crutzen, P. J., Aselmann, I. & Seiler, W. Methane production by domestic animals, wild ruminants, other herbivorous fauna, and humans. Tellus B 38B, 271–284 (1986). 48. Childress, J. J. et al. A methanotrophic marine molluscan (Bivalvia, Mytilidae) symbiosis: Mussels fueled by gas. Science 233, 1306–1308 (1986). 49. Mizrahi, I. In The Prokaryotes: Prokaryotic Biology and Symbiotic Associations (eds Eugene Rosenberg et al.) 533–544 (Springer Berlin Heidelberg, 2013). 50. de Zwaan, A. & Babarro, J. M. F. Studies on the causes of mortality of the estuarine bivalve Macoma balthica under conditions of (near) anoxia. Mar. Biol. 138, 1021–1028 (2001). 51. Gibson, G. R., Cummings, J. H. & Macfarlane, G. T. Competition for hydrogen between sulphate-reducing bacteria and methanogenic bacteria from the human large intestine. J. Appl. Bacteriol. 65, 241–247 (1988). 52. Lovley, D. R., Dwyer, D. F. & Klug, M. J. Kinetic Analysis of Competition Between Sulfate Reducers and Methanogens for Hydrogen in Sediments. Appl. Environ. Microbiol. 43, 1373–1379 (1982). 53. Rousi, H. et al. Long-term changes in coastal zoobenthos in the northern Baltic Sea: the role of abiotic environmental factors. ICES J. Mar. Sci. 70, 440–451 (2013). 54. Gogina, M. et al. The Baltic Sea scale inventory of benthic faunal communities. ICES J. Mar. Sci. 73, 1196–1213 (2016). 55. Carstensen, J., Andersen, J. H., Gustafsson, B. G. & Conley, D. J. Deoxygenation of the Baltic Sea during the last century. Proc. Natl. Acad. Sci. USA 111, 5628–5633 (2014). 56. Bange, H. W. et al. Dissolved methane during hypoxic events at the Boknis Eck time series station (Eckernförde Bay, SW Baltic Sea). Biogeosciences (BG) 7, 1279–1284 (2010). 57. Bussmann, I. & Suess, E. Groundwater seepage in Eckernförde Bay (Western Baltic Sea): Effect on methane and salinity distribution of the water column. Cont. Shelf Res. 18, 1795–1806 (1998). 58. Bange, H. W. Nitrous oxide and methane in European coastal waters. Estuar. Coast. Shelf Sci. 70, 361–374 (2006). 59. Bonaglia, S. et al. Denitrification and DNRA at the Baltic Sea oxic–anoxic interface: Substrate spectrum and kinetics. Limnol. Oceanogr. 61, 1900–1915 (2016). 60. Kauppi, L., Norkko, A. & Norkko, J. 
## Acknowledgements

This work was supported financially by the Swedish Research Council FORMAS to F.J.A.N. (2013–1322), the Baltic Ecosystem Adaptive Management (BEAM) program (funding to S.B. and V.B.), and the Erasmus Placement program (funding to A.V.). Molecular work at Stockholm University was partly supported by the European Research Council (grant No. 336092). We acknowledge Marco Bartoli and Giuseppe Castaldelli for inspiring discussions during the experimental phase and the writing process, Nils Ekeroth for practical help during sediment sampling, and Martin Wik for assistance during methane isotope analysis.

## Author information

### Contributions

S.B., A.V. and F.J.A.N. performed sediment sampling; S.B., N.C., E.C.F. and F.J.A.N. performed invertebrate sampling; S.B., A.V. and F.J.A.N. carried out the incubation experiments; S.B. and A.V. carried out the mass spectrometric and gas chromatographic analyses; N.C. and S.B. performed molecular analysis. The research was designed by S.B. and F.J.A.N.; S.B. led the data analysis and wrote the manuscript. All authors contributed to the discussion and interpretation of the data, and to the writing of the manuscript.

### Corresponding author

Correspondence to Stefano Bonaglia.

## Ethics declarations

### Competing Interests

The authors declare that they have no competing interests.

Bonaglia, S., Brüchert, V., Callac, N. et al. Methane fluxes from coastal sediments are enhanced by macrofauna. Sci Rep 7, 13145 (2017). https://doi.org/10.1038/s41598-017-13263-w
2022-08-13 23:08:52
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 1, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.5764673948287964, "perplexity": 13917.977046290118}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2022-33/segments/1659882571987.60/warc/CC-MAIN-20220813202507-20220813232507-00051.warc.gz"}
http://mathoverflow.net/questions/83514/algorithm-for-checking-existance-of-real-roots-for-polynomials-in-more-than-one-v/83515
# Algorithm for checking existence of real roots for polynomials in more than one variable

Is there a way to determine exactly (without the use of approximation methods) whether $p\in \mathbb{R}[x_1,\dots,x_n]$ has real roots? Algorithms based on Sturm's theorem seem to be applicable to univariate polynomials only.

There are also multivariate versions of Sturm's theorem. –  J.C. Ottem Dec 15 '11 at 12:58

Tarski's theorem on the decidability of the theory of real-closed fields provides a general algorithm that decides any question expressible in the first-order language of real-closed fields. His algorithm can therefore determine, for any statement, whether it is true in the structure $\langle\mathbb{R},+,\cdot,0,1,\lt\rangle$. Thus, not only are the purely existential assertions (solvability of systems of equations) decidable in this context, but also more complex assertions involving iterated quantifiers, which would not seem to be decidable even by approximation without this result.

The way Tarski's argument proceeds is by elimination of quantifiers: every assertion in this language is equivalent to a quantifier-free assertion. In particular, the existence of a solution to $p(\vec x)=0$ is equivalent by Tarski's reduction to a quantifier-free assertion about the coefficients of the polynomial. That is, the algorithm reduces the question to a mere calculation involving the coefficients.

But if you are interested in actually using the algorithm in specific instances, rather than in the theoretical question of whether in principle there is such an algorithm, then Tarski's algorithm may not actually be helpful. Although it has been implemented on computers, the algorithm takes something like a tower-of-exponentials time in the size of the input, and evidently it has been proved that every quantifier-elimination algorithm for this theory must take at least double-exponential time.

Thank you. My question was motivated by developing collision-avoidance strategies. I am interested in an algorithm checking whether two (possibly n-dimensional) ellipsoids overlap. Using a polynomial representation of ellipsoids, one could check whether at least one real-valued solution exists in order to prove a collision. For a proper performance comparison with other algorithms (which might be based on numeric methods) I was interested in how the computational complexity behaves as the dimension of the problem grows. –  ostap bender Dec 15 '11 at 14:27

I would expect that, for the specific problem of checking whether two ellipsoids overlap, there are more efficient algorithms than the relevant special case of Tarski's general algorithm. Unfortunately, this is far from my expertise, so I don't actually know any such algorithms. For conceptual purposes (if not for algorithmic ones), it might help to arrange, by an affine transformation, that one of your two ellipsoids is the unit ball. –  Andreas Blass Dec 15 '11 at 15:17

Yes, there are indeed more efficient algorithms. As far as I know, all of them use numerical approximation (in higher dimensions), either to calculate roots or to calculate a minimum/maximum. So I was curious whether there is an exact algorithm with a runtime bounded in the dimension (ideally with polynomial complexity). As far as I understood the discussion at mathoverflow.net/questions/43979/…, there is no such algorithm yet (at least not for the more general case of counting roots of fewnomials). –  ostap bender Dec 20 '11 at 12:43

This problem is solved in so-called semi-algebraic geometry.
Here are some books:

- Basu S. Algorithms in Semi-algebraic Geometry
- Basu S., Pollack R., Roy M.-F. Algorithms in Real Algebraic Geometry
- Bochnak J., Coste M., Roy M.-F. Real Algebraic Geometry
- Coste M. An Introduction to Semialgebraic Geometry
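As a concrete, hedged illustration of the discussion above (my addition, not part of the original thread): modern SMT solvers such as Z3 implement a complete decision procedure (nlsat) for the existential fragment of nonlinear real arithmetic, which covers both the root-existence question and the ellipsoid-overlap check from the comments. The polynomials and ellipse parameters below are toy examples of my own choosing; only the z3-solver package is assumed.

```python
# Deciding existence of real roots exactly (no numerical approximation)
# via Z3's decision procedure for nonlinear real arithmetic.
from z3 import Reals, Solver, sat

x, y = Reals("x y")

# Does p(x, y) = 0 have a real solution?
# p = (x^2 - y^2)^2 + 2(xy - 1)^2, so it vanishes exactly at (1,1), (-1,-1).
p = x**4 + y**4 - 4*x*y + 2
s = Solver()
s.add(p == 0)
print("p has a real root:", s.check() == sat)   # True

# Do two ellipses overlap?  Overlap <=> some point lies inside both,
# which is again a purely existential sentence over the reals.
e1 = x**2 / 4 + y**2 <= 1                # ellipse centered at the origin
e2 = (x - 2)**2 + y**2 / 9 <= 1          # ellipse centered at (2, 0)
s = Solver()
s.add(e1, e2)
print("ellipses overlap:", s.check() == sat)    # True
```

This does not escape the worst-case complexity bounds mentioned in the accepted answer, but for small fixed dimension such solvers are often practical.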
2014-10-24 12:15:50
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.810606062412262, "perplexity": 368.9944816614976}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2014-42/segments/1414119645898.42/warc/CC-MAIN-20141024030045-00165-ip-10-16-133-185.ec2.internal.warc.gz"}
https://dsp.stackexchange.com/questions/52826/spectrum-of-cosine-in-complex-form/52828
# Spectrum of Cosine in Complex Form

The complex exponential form of cosine is

$$\cos(k \omega t) = \tfrac{1}{2} e^{i k \omega t} + \tfrac{1}{2} e^{-i k \omega t}$$

The trigonometric spectrum of $$\cos(k \omega t)$$ is a single amplitude at the single frequency $$k$$ on the real axis, using cosine as the basis function. The complex exponential spectrum of $$\cos(k \omega t)$$ has two amplitudes of 1/2, one at $$k$$ and one at $$-k$$. I am confused about what this x-axis represents: amplitudes of what? What's the basis function? I'm guessing amplitudes of exponentials, but these complex exponentials contain both a cosine and a sine; how can they sit on one axis if they are made of cosines and sines, when cosine and sine are orthogonal and orthogonal components belong on different axes?

The "x" axis you refer to is the frequency axis. To answer your second question first: the frequency axis is typically given as $$\omega$$, and waveforms as functions of $$\omega$$ can be represented as real and imaginary components given by the cosines and sines, or more concisely as magnitude and phase given by the complex exponential, since

$$Ae^{j\omega t} = A\cos(\omega t) + j A\sin(\omega t) = I(t) + j Q(t)$$

Either is a representation of a complex function having real and imaginary components. To represent the frequency spectrum of a time-domain function, we first map the function to the frequency domain with the Fourier Transform, which correlates the function of interest against these basis functions (either cosines and sines, or, much more simply, the complex exponential, in either case with magnitude 1). Once in the frequency domain, the result will be complex, so two plots are typically used, since it takes two real numbers to represent one complex number: plots of magnitude vs. frequency and phase vs. frequency, or alternatively plots of the real component (I) vs. frequency and the imaginary component (Q) vs. frequency.

When functions are observed in the time domain (meaning how the function changes versus time instead of versus frequency as above), we can represent them, as you suggest, in terms of sine and cosine components in quadrature. This specifically is a phasor representation; for more detail see the end of this answer, where I use the phasor representation to help explain negative frequencies. (There is no reason we can't use phasors to show functions in the frequency domain as well; it would be a two-dimensional plot showing the magnitude and phase versus frequency instead of versus time. How best to plot really comes down to what conveys the information most clearly for the function at hand.)

The basis function for the complex exponential spectrum is generally given as $$e^{j\omega t}$$, as used by the Fourier Transform (which is a correlation against this basis function at any given $$\omega$$, since correlation in general is multiply-and-integrate). Generally the Fourier Transform is given as:

$$FT\{g(t)\} = G(\omega) = \int_{-\infty}^{+\infty}g(t)e^{-j\omega t}dt$$

Note that since $$\omega$$ is a continuous variable in this context (we are solving for $$G$$ as a function of $$\omega$$), there is an infinite family of basis functions, one for each value of $$\omega$$.
## Direct solution: k as the independent variable

If we apply that in this case, with $$\omega$$ and $$k$$ as independent variables, we get the following result:

$$FT\bigg\{\frac{1}{2}e^{jk\omega t} + \frac{1}{2}e^{-jk\omega t}\bigg\} = G(\omega)$$

$$= \frac{1}{2}\int_{-\infty}^{+\infty}e^{jk\omega t}e^{-j\omega t}dt + \frac{1}{2}\int_{-\infty}^{+\infty}e^{-jk\omega t}e^{-j\omega t}dt$$

The first integral converges to an impulse (infinite height, area = 1, so area 1/2 after the multiplication) when $$k = +1$$, as does the second when $$k = -1$$. For all other $$k$$ the result is zero. The amplitudes represent the areas of these impulses, and show that the Fourier Transform can be given as a function of $$k$$ alone in this case (since there is no dependence on $$\omega$$):

$$G(k) = \frac{1}{2}\delta(k+1) + \frac{1}{2}\delta(k-1)$$

We could show this as a surface plot of the magnitude over all values of $$k$$ and $$\omega$$, with a dashed line representing infinity, which is where an impulse occurs versus $$k$$ for any fixed value of $$\omega$$. *(Figure: surface plot of the magnitude over the (k, ω) plane.)* Thus for every $$\omega$$ we obtain the same magnitude spectrum as a function of $$k$$ alone. *(Figure: magnitude spectrum versus k, with impulses of area 1/2 at k = ±1.)*

## The product kω as the independent variable

If we decide to make the frequency axis the product $$k\omega$$ (since the frequency of the cosine function is given by that product), we can simply scale the result given above by $$\omega$$, which places the impulses at $$k\omega = \pm\omega$$. *(Figure: magnitude spectrum versus kω.)* We can similarly choose to make $$\omega$$ the independent variable (holding $$k$$ fixed), which results in impulses at the two values of $$\omega$$ for which $$k\omega$$ equals plus or minus the transform frequency. All of this is in agreement: it doesn't matter what $$\omega$$ is; in every case the result is an impulse in the "k domain" at $$k = \pm 1$$.

## Further background on the FT of the cosine function

The following may provide further intuition for those familiar with the Fourier Transform of a cosine function. Generally we describe the Fourier Transform of $$\cos(\omega t)$$ as two impulses in the frequency domain $$\omega$$, but this applies to a specific value of $$\omega$$, so the time-domain function of interest is given as $$\cos(\omega_o t)$$, where $$\omega_o$$ represents a constant value. The Fourier Transform of $$\cos(\omega_o t)$$ converges to two impulses in the frequency domain, but the Fourier Transform of $$\cos(\omega t)$$ with $$\omega$$ left free does not converge: it is infinite for all frequencies $$\omega$$!

What the amplitudes represent is clearest in the case where the frequency term $$k\omega$$ is held constant and we define another variable such as $$\Omega$$ to represent all possible values of $$k\omega$$, since in that case the Fourier Transform (FT) of a cosine function is simply two impulses in frequency, one at a positive frequency and the other at a negative frequency. To show this in general, I will use capital omega $$\Omega$$ (to avoid confusion with the $$\omega$$ in your formula) to represent any angular frequency ($$\Omega = 2\pi f$$), and $$\Omega_o$$ to represent a specific frequency.
$$FT\{\cos(\Omega_o t)\} = G(\Omega) = \frac{1}{2}\delta(\Omega-\Omega_o) + \frac{1}{2}\delta(\Omega+\Omega_o)$$

And we can see how this relates directly to the general relationship from Euler's identity expressing the real sinusoidal cosine in terms of two complex exponential frequencies:

$$\cos(\Omega_o t)= \frac{1}{2}e^{j\Omega_o t} + \frac{1}{2}e^{-j\Omega_o t}$$

With that we see how, when we use the FT to correlate against $$e^{j\Omega t}$$, the result is two impulses: when we set the frequency to either of the values $$\pm \Omega_o$$, the integral given by the FT above goes to infinity, since $$e^{j\Omega_o t}e^{-j\Omega_o t}=1$$, but it integrates to zero anywhere else. As a plot versus frequency this appears as two impulses, one at the positive and one at the negative frequency $$\pm\Omega_o$$. This is a magnitude plot; to be complete, a phase plot would typically also be shown (phase versus $$\Omega$$), but in this case it is trivial since the phase is 0 at both impulses. *(Figure: magnitude spectrum with impulses at ±Ω₀.)*

With the $$k\omega$$ variables, the same magnitude spectrum results when both $$k$$ and $$\omega$$ are constants. *(Figure: magnitude spectrum versus kω.)*

This relationship between the frequency and time waveforms as I have shown them applies when $$\Omega_o$$ is a constant; otherwise we would be dealing with arbitrary modulation functions and would need to know the modulation waveform specifically (how the independent variable changes with time) to determine the frequency spectrum.

## Negative frequencies?

To further understand the meaning of a negative frequency, which is applicable to angular frequencies by definition, note that a negative frequency represents a single exponential frequency given as $$e^{-j\omega_o t}$$, with $$\omega_o$$ in this case any positive number. It may be helpful to visualize this with reference to the relationship I gave in the first paragraph, showing $$e^{j\omega t}$$ in terms of I and Q components. To give negative frequencies a physical interpretation, we can plot the function $$e^{j\omega t}$$ in terms of its real and imaginary components versus time, resulting in a phasor representation: specifically, the plot maps out a unit circle and rotates around the origin at rate $$\omega$$. With the angle increasing in a positive direction with time, this represents a positive frequency; a negative frequency would be depicted by having the phasor rotate in the clockwise direction instead. *(Figure: unit-circle phasor rotating counterclockwise at rate ω.)*

If this is still not clear, it may help to know that the form $$Ae^{j\theta}$$ is identical to $$A\angle \theta$$.

If you take the Fourier Transform of a specific exponential frequency with frequency term $$-\omega_o$$, given as $$e^{-j\omega_o t}$$, the result is a single impulse at that frequency: $$\delta(\omega+\omega_o)$$. (As shown above, the cosine function has two exponential frequencies: a positive and a negative one.) The Fourier Transform, when presented on a graph with a positive and negative axis, represents frequencies in the exponential form, so each impulse shown in the frequency domain is an $$Ae^{j\omega t}$$ in the time domain.
When (and only when) the plot gives positive and negative frequencies where the negative frequencies are the complex conjugate of the positive frequencies, we can also represent the same spectrum with just a positive frequency axis, as that is the only way the time-domain signal can be real (using a basis function of cosine as you described). If the negative and positive frequencies are not related by complex conjugation, then the time-domain signal must be complex, and a positive and negative frequency axis is required to represent the spectrum.

This is clear from the plot below showing both $$Ae^{j\omega t}$$ and $$Ae^{-j\omega t}$$ as complex phasors rotating around the origin with time (magnitude $$A$$ and angle linearly proportional to time as $$\omega t$$, which is by definition a constant frequency since $$\omega = d\phi/dt$$). The dotted line shows the result of summing those two phasors at any instantaneous time $$t$$: as long as the two phasors are related by a complex conjugate (equal magnitude, opposite phase), the sum stays on the real axis and is thus a real function with no imaginary components. Each spinning phasor is $$e^{j\omega t}$$ or $$e^{-j\omega t}$$, and their summation, which stays on the real axis, is a (real) cosine function. *(Figure: conjugate phasors whose sum traces out the real cosine.)*

## Dilation property of the Fourier Transform

Not specific to your question, but worth mentioning since it occurs here, is the dilation property of the Fourier Transform (scaling the independent variable by a constant term $$k$$), which has the following general form:

$$F\{g(kt)\} = \frac{1}{|k|}G\bigg(\frac{f}{k}\bigg)$$

We could apply this property directly by transforming the known Fourier Transform of $$\cos(\omega_o t)$$ as another approach to arriving at a solution, although in this case it would be trivial to do the Fourier Transform on the $$\cos(k\omega_o t)$$ function directly.

• Negative frequencies don't exist. The domain is k times the fundamental frequency, where k goes from + to - infinity and the fundamental frequency omega is constant. I am confused about what is being represented even more... – Natalie Johnson Oct 24 '18 at 19:44
• @NatalieJohnson I know the point is a bit philosophical, but negative frequencies do exist... think of them as a way to tell a normal clock (with hands that rotate clockwise) from a clock whose hands rotate counterclockwise. If you just look at the rotation rate (once per hour/once per 12 hours) then you can't tell them apart. – MBaz Oct 24 '18 at 20:57
• Don't think of frequencies in terms of sines and cosines but in terms of a wheel spinning (or a clock, as MBaz relates)... if the wheel is spinning counterclockwise, representing a positive phase change vs. time on the complex (polar) plot I gave, that is a positive frequency. If the wheel is spinning clockwise, representing a negative phase change vs. time, that is a negative frequency. – Dan Boschen Oct 24 '18 at 21:04
• So in your formula the frequency is given as $k\omega$ with $\omega$ constant; since k can be any number on the real axis, the frequency can be positive or negative. Sorry that I used k differently in my answer; I will change that to something else. Know that the general form $Ae^{j\phi}$ is a vector (or phasor) on the complex plane with magnitude A and angle $\phi$. If the angle term changes with time, then that phasor will rotate around the origin. The direction of that rotation defines a positive or negative frequency.
– Dan Boschen Oct 24 '18 at 21:09 • @NatalieJohnson I updated it to be more pointed toward your question; I hope I made it clearer! – Dan Boschen Oct 25 '18 at 4:19
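As a quick numerical check of the two-sided spectrum derived in the answer (my addition, not part of the original thread): the FFT of a sampled cosine shows the two half-amplitude lines at $$\pm f_0$$. The sample rate, frequency, and record length below are arbitrary choices made so the cosine lands exactly on FFT bins.

```python
import numpy as np

fs = 100.0                  # sample rate in Hz (arbitrary choice)
f0 = 5.0                    # cosine frequency in Hz (arbitrary choice)
N = 1000                    # 10 s of data -> an integer number of cycles
t = np.arange(N) / fs
x = np.cos(2 * np.pi * f0 * t)

X = np.fft.fft(x) / N       # normalize so line amplitudes match the math
freqs = np.fft.fftfreq(N, d=1/fs)

# Report every spectral line with non-negligible amplitude.
for i in np.flatnonzero(np.abs(X) > 0.1):
    print(f"{freqs[i]:+6.1f} Hz : amplitude {np.abs(X[i]):.3f}")
# Prints two lines: +5.0 Hz and -5.0 Hz, each with amplitude 0.500
```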
2021-06-14 00:21:29
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 79, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.9013241529464722, "perplexity": 248.6346980220134}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2021-25/segments/1623487611089.19/warc/CC-MAIN-20210613222907-20210614012907-00033.warc.gz"}
https://en.wikipedia.org/wiki/Reinhardt_cardinal
# Reinhardt cardinal

In mathematical set theory, a Reinhardt cardinal is a large cardinal κ in a model of ZF, Zermelo–Fraenkel set theory without the axiom of choice (Reinhardt cardinals are not compatible with the axiom of choice in ZFC). They were suggested by William Nelson Reinhardt (1967, 1974).

## Definition

A Reinhardt cardinal is the critical point of a non-trivial elementary embedding j of V into itself. A minor technical problem is that this property cannot be formulated in the usual set theory ZFC: the embedding j is a class, which in ZFC means something of the form ${\displaystyle \{x\mid\phi (x,a)\}}$ for some set a and formula φ, but in the language of set theory it is not possible to quantify over all classes or define the truth of formulas. There are several ways to get around this. One way is to add a new function symbol j to the language of ZFC, together with axioms stating that j is an elementary embedding of V (and of course adding separation and replacement axioms for formulas involving j). Another way is to use a class theory such as NBG or KM. A third way would be to treat Kunen's theorem as a countably infinite collection of theorems, one for each formula φ, but that would trivialize the theorem. (It is possible to have nontrivial elementary embeddings of transitive models of ZFC into themselves assuming a mild large cardinal hypothesis, but these elementary embeddings are not classes of the model.)

## Kunen's theorem

Kunen (1971) proved Kunen's inconsistency theorem, showing that the existence of such an embedding contradicts NBG with the axiom of choice (and ZFC extended by j), but it is consistent with weaker class theories. His proof uses the axiom of choice, and it is still an open question whether such an embedding is consistent with NBG without the axiom of choice (or with ZF plus the extra symbol j and its attendant axioms).

## Stronger cardinal axioms

There are some variations of Reinhardt cardinals. In ZF, there is a hierarchy of hypotheses asserting the existence of elementary embeddings V→V:

- J3: There is a nontrivial elementary embedding j: V→V.
- J2: There is a nontrivial elementary embedding j: V→V, and DCλ holds, where λ is the least fixed point above the critical point.
- J1: There is a cardinal κ such that for every α, there is an elementary embedding j: V→V with j(κ)>α and cp(j) = κ.

J2 implies J3, and J1 implies J3 and also implies the consistency of J2. By adding a generic well-ordering of V to a model of J1, one gets ZFC plus a nontrivial elementary embedding of HOD into itself.

Berkeley cardinals are stronger large cardinals suggested by Woodin.

## References

- Jensen, Ronald (1995), "Inner models and large cardinals", The Bulletin of Symbolic Logic 1 (4): 393–407, doi:10.2307/421129, JSTOR 421129
- Kanamori, Akihiro (2003), The Higher Infinite: Large Cardinals in Set Theory from Their Beginnings (2nd ed.), Springer, ISBN 3-540-00384-3
- Kunen, Kenneth (1971), "Elementary embeddings and infinitary combinatorics", The Journal of Symbolic Logic 36 (3): 407–413, doi:10.2307/2269948, JSTOR 2269948, MR 0311478
- Reinhardt, W. N. (1967), Topics in the metamathematics of set theory, Doctoral dissertation, University of California, Berkeley
- Reinhardt, W. N. (1974), "Remarks on reflection principles, large cardinals, and elementary embeddings", Axiomatic set theory, Proc. Sympos. Pure Math., XIII, Part II, Providence, R.I.: Amer. Math. Soc., pp. 189–205, MR 0401475
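To make the first workaround mentioned under Definition concrete (a standard formulation added here for illustration, not text from the article): after adding the function symbol j, the elementarity of j is expressed by an axiom schema with one instance for each formula φ of the language of set theory, together with a nontriviality axiom:

```latex
% One axiom per formula \varphi(x_1,\dots,x_n) of the \in-language,
% plus nontriviality; the separation and replacement schemas are also
% extended to formulas that mention j.
\forall x_1 \cdots \forall x_n \,
  \bigl( \varphi(x_1,\dots,x_n) \leftrightarrow
         \varphi(j(x_1),\dots,j(x_n)) \bigr),
\qquad
\exists x \, \bigl( j(x) \neq x \bigr).
```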
2016-08-25 06:46:33
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 1, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.9123215675354004, "perplexity": 551.1255574925807}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2016-36/segments/1471982292944.18/warc/CC-MAIN-20160823195812-00296-ip-10-153-172-175.ec2.internal.warc.gz"}
https://msp.org/ant/2020/14-7/p02.xhtml
#### Vol. 14, No. 7, 2020

Pro-unipotent harmonic actions and dynamical properties of $p$-adic cyclotomic multiple zeta values

### David Jarossay

Vol. 14 (2020), No. 7, 1711–1746

##### Abstract

$p$-adic cyclotomic multiple zeta values depend on the choice of a number of iterations of the crystalline Frobenius of the pro-unipotent fundamental groupoid of $\mathbb{P}^1 \setminus \{0, \mu_N, \infty\}$. In this paper we study how the iterated Frobenius depends on the number of iterations, in relation with the computation of $p$-adic cyclotomic multiple zeta values in terms of cyclotomic multiple harmonic sums. This provides new results on that computation and the definition of a new pro-unipotent harmonic action.

##### Keywords

$p$-adic cyclotomic multiple zeta values, cyclotomic multiple harmonic sums, pro-unipotent harmonic actions, projective line minus three points, pro-unipotent fundamental groupoid, crystalline Frobenius

##### Mathematical Subject Classification

Primary: 11G99
2022-01-28 12:17:38
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 3, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.7256245017051697, "perplexity": 1406.185267244444}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2022-05/segments/1642320305494.6/warc/CC-MAIN-20220128104113-20220128134113-00033.warc.gz"}
https://mattbraddock.com/blog/factoring.html
# Factoring Trinomials

## Abstract

An important topic and essential skill taught in Algebra is factoring trinomials with integer coefficients. It is used frequently in Algebra and courses beyond, and thus it is beneficial for students to learn a single method that is quick to complete. The forthcoming method also treats a leading coefficient of 1 as a special case, rather than presenting students with two separate methods based on the leading coefficient.

### How It Works

Consider the trinomial $$2x^{2}-7x-4$$. In order to factor this, we must find two numbers that multiply to -8 ($$a\cdot c$$) and add to -7 ($$b$$). Using the diamond/X method to factor, we'd set up the diamond/X with -8 on top and -7 on the bottom. The two numbers that satisfy these conditions are -8 and 1. First, put those two numbers on the sides, then divide each of them by 2 ($$a$$) and reduce the fractions: -8/2 reduces to -4/1, and 1/2 is already reduced. To determine the factors, use both the numerator and denominator of each fraction: the denominator is the coefficient of $$x$$, and the numerator is the constant.

$$2x^{2}-7x-4=(x-4)(2x+1)$$

### Why It Works

If a trinomial $$ax^{2}+bx+c$$ can be factored, the product of factors can be written as $$(px+q)(rx+s)$$ (provided that $$\gcd(a,b,c)=1$$). The product expanded gives us $$(pr)x^{2}+(ps+qr)x+qs$$, which means $$a=pr,\,b=ps+qr,\,c=qs$$. We can set up the diamond/X with $$ac$$ on top and $$b$$ on the bottom. Conveniently, the two numbers we seek are $$ps$$ and $$qr$$: their product is $$(ps)(qr)=(pr)(qs)=ac$$ and their sum is $$b$$. We put them on the sides, divide by $$a=pr$$, and reduce: $$\frac{qr}{pr}$$ reduces to $$\frac{q}{p}$$ and $$\frac{ps}{pr}$$ reduces to $$\frac{s}{r}$$. Thus, the fraction $$\frac{q}{p}$$ corresponds to $$px+q$$ and the fraction $$\frac{s}{r}$$ corresponds to $$rx+s$$.

### Why Use It

This method is fast (provided students can find the two numbers that satisfy $$ac$$ as the product and $$b$$ as the sum with relative ease), short (the majority of the work is contained within the diamond/X), and consistent (it works for all integer values of $$a$$, including 1). In fact, starting with the more complicated trinomials where $$a\neq1$$ is preferable, as it is easier to then explain how work can be reduced and omitted when $$a=1$$ than to build up to more steps (which ultimately feels like two different methods, rather than one method with a simpler/shorter version).

### Disclaimer

Please do not misinterpret this article as a suggestion that all other methods of factoring are inadequate and should not be shown to students. In fact, students should have the ability to learn about and use multiple methods, so they can determine what works best for their individual use.
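As a small illustration of the method (my own sketch, not from the post; the function name and brute-force search are illustrative choices), the whole diamond/X procedure mechanizes cleanly: find the two integers, divide each by $$a$$, reduce, and read off the factors. The sketch assumes integer coefficients with $$a > 0$$ and $$\gcd(a,b,c)=1$$.

```python
from math import gcd

def factor_trinomial(a, b, c):
    """Return ((p, q), (r, s)) with ax^2 + bx + c = (px + q)(rx + s),
    or None if no such integer factorization exists."""
    ac = a * c
    # Diamond/X step 1: find two integers that multiply to ac and add to b.
    for m in range(-abs(ac), abs(ac) + 1):
        if m != 0 and ac % m == 0 and m + ac // m == b:
            n = ac // m
            # Diamond/X step 2: divide each number by a and reduce.
            factors = []
            for num in (m, n):
                g = gcd(abs(num), a)
                q, p = num // g, a // g   # reduced fraction num/a = q/p
                factors.append((p, q))    # denominator -> x-coefficient,
            return tuple(factors)         # numerator  -> constant term
    return None

print(factor_trinomial(2, -7, -4))   # ((1, -4), (2, 1)), i.e. (x-4)(2x+1)
```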
2020-10-21 07:32:52
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 2, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.8324674963951111, "perplexity": 321.710671515227}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2020-45/segments/1603107876136.24/warc/CC-MAIN-20201021064154-20201021094154-00321.warc.gz"}