https://economics.stackexchange.com/tags/marxism/hot
# Tag Info

17 Marx addresses this about two-thirds of the way through Section 1 of the Manifesto. In the standard English edition of 1888, it reads: The lower strata of the middle class - the small tradespeople, shopkeepers, and retired tradesmen generally, the handicraftsmen and peasants - all these sink gradually into the proletariat, partly because their ...

16 The Labor Theory of Value has been replaced by the theory of Marginal Utility, which was already accepted in Marx's time. In fact he acknowledged: "nothing can have value, without being an object of utility" - Wikipedia: Marginal Utility - The Marginal Revolution and Marxism. Marginal Utility addresses the diamond-water paradox by explaining that the more ...

14 Among economic historians, Marx is often considered the most important economist of the 19th century. His attempt to provide a systematic explanation of the functioning of capitalism was on a far grander scale than that of anyone who preceded him, and in that sense, he reset the bar very high for what would henceforth be considered comprehensive economic theory, ...

8 (I am not certain whether I use established English terminology here.) The answer is no, because in a communist state (as described by Marx), there are no markets, especially markets for production factors. And in none of the countries that implemented a socialist socio-economic system was labor "directly socialized"; it continued to be considered as ...

8 You seem to be looking for the phrase 'wage share' or 'share of labour compensation'. Wage share: The wage share (or labor share) is the ratio between compensation of employees (according to the system of National Accounts) and one of the following variables: gross domestic product at market prices, gross domestic product at factor cost, net ...

7 From reading a selection of writings by Marx, I have come to understand the following three as the core elements of Marxian economics. The labour theory of value. It asserts that the exchange value of a commodity $x$ (the quantities of $x$ that can be exchanged for another commodity) is determined by the labour time which is socially necessary to produce ...

6 Marx is indeed an influential classical economist; however, he added almost nothing to economics as a discipline. His theories of labor value, exploitation and modes of production were all articulated before him, which he acknowledged. What he did do was systematize and popularize these theories as scientific. Marx was closer to Malthus than to Smith in terms ...

5 "I have read that the Labour Theory of Value holds that the value of a good or service is determined by the total amount of labour involved in its production." That would be Adam Smith's version of the LTV, which according to him only holds in pre-capitalist societies. Karl Marx's version is thus: the value of a good or service is determined by the total ...

5 Material capital is any durable good that is used as a factor of production and, by virtue of being durable, is gradually consumed in production over a maximum possible duration whose length is determined by (i) how much a unit of capital is used and (ii) the depreciation rate of a unit of capital. Capital is formed by labor and savings (which is in ...

5 Here are some comments on Marx from people who can certainly not be called Marxists. If Karl Marx and V. I. Lenin were alive today, they would be leading contenders for the Nobel Prize in economics. Marx predicted the growing misery of working people, and Lenin foresaw the subordination of the production of goods to financial capital's accumulation ...

5 Two main changes happened when capitalism came into existence, these being the ability to buy and sell property among the peasantry. Under the feudal system each household was endowed with land which was used to produce food/agricultural products for the household and for the lord who provided the land. When feudalism was abolished, the possibilities of production ...

4 A Marxian view of the Diamond-Water Paradox would be that diamonds are scarce and expensive BECAUSE they require a lot of labor to produce (at the margin), while water is cheap because it can be produced with relatively little labor (anyone can go down to the river and draw a bucket of water).

4 There is a major disciplinary specification problem here: "who is an economist?" At the time Marx was active as an author (including posthumously with Engels) the field of knowledge was known as "political economy," so as to distinguish it from the domestic economy of household management - both from the Greek oikos. Political economy was, and still can be, ...

4 There is a very interesting paper by John Roemer, published in Econometrica in 1980, that presents a mathematical general equilibrium Marxian model. I am not sure anyone has ever estimated it, though. Find it here (subscription required). General equilibrium models (oftentimes idiosyncratic) were also developed in the USSR. Check here for an account of them. ...

4 Your question is complex. First of all, what is science? Not even methodologists have settled this question. Falsifiability, testability, a method? (Great read here.) Second, what do you mean by Marxism? Marxism is around 150 years old, and has evolved in the process. Karl Popper argued that Marxism became a pseudoscience from the moment that their ...

4 Quantities of products are not increased by transportation. Nor, with a few exceptions, is the possible alteration of their natural qualities, brought about by transportation, an intentional useful effect; it is rather an unavoidable evil. But the use-value of things is materialised only in their consumption, and their consumption may necessitate a ...

4 Paul Samuelson wrote a series of papers formalizing Marxian economics, e.g. https://www.sciencedirect.com/science/article/pii/B9780123567505500177

4 The example you've picked is slightly complicated, because I think there are two possible reasons for the difference in price. Both could contribute at the same time, depending on your view of how wood production works in your example. 1. The labour value invested in growing, harvesting and processing better quality wood (prior to furniture manufacture). ...

3 This is how I view it. At the very core of economics is each and every human on this earth making decisions he or she thinks are in his or her best interest. In real life, we do not calculate utility before making a decision or compute any expected values; we do what feels right at the moment. Taking the cross-section of human decisions, economists try to ...

3 Contemporary Marxist economist Richard D. Wolff says on the issue (abridged): The labor theory of value is not a theory of prices. The prices of things are determined by what's going on among the people buying and what's going on among those selling. Marx was not so silly... Notice this is not called the labor theory of price, so we need to ...

3 I'm not a Marx expert, but I read Capital some years ago, and as far as I can remember Marx said that only the workers produce value. All others' wages (and the workers' wages too!) and other costs are covered by the value produced by the workers, so the others decrease the "profit", which could otherwise remain with the workers. Therefore Marx ...

3 There is a lot of misunderstanding here. First, the LTV only operates in capitalist societies. In capitalist society, production is guided by the market, without anyone coordinating between the various branches of production from above. Wealth takes the form of commodities, which have different prices relative to each other. For marginal utility, prices do not ...

3 I think an excellent example of communist societies would be small tribal societies before the industrial revolution. Means of production were generally shared and workers generally worked to benefit the societal unit, which worked because individual societies were generally small and tribal. In modern-day industrialized society I think the answer is probably ...

3 No. Importantly, the conditions for the creation of a Marxist communist state have never occurred, namely the identification of workers along class lines over nationalist lines. Soviet totalitarian communism insisted upon socialist 'brotherhood', but this was mandated rather than organically occurring.

3 It's amusing to me that everyone gets taught that the LTV was all Marx. Here is Adam Smith resolving that paradox for you. From Wikipedia: Value "in use" is the usefulness of this commodity, its utility. A classical paradox often comes up when considering this type of value. In the words of Adam Smith: The word value, it is to be observed, ...

3 One (perhaps flippant) answer is that Marxists have a lot of ideas about how prices "should" work or how the value of labor "should" be rewarded that aren't in line with what we observe, because the central planning and/or mass subsidies that Marxist economies like end up being grossly inefficient. :P Another (perhaps more interesting) answer is that there is ...

3 Labour power is a commodity like any other. Its value is consequently a function of the labour embodied in it. Taking your example: product A (a webpage) takes 10 hours; you need special skills (those of a web designer) that are hard to learn; only 14% of the population already have those skills to build that product. Versus product B (a chair): takes 10 hours ...

3 It has nothing to do with new products in the market, nor with confusion of use-value and exchange-value. To make it a bit simpler: 1) Commodities in capitalism have a two-fold character: the use-value (they are useful products for the buyer to consume) and the exchange-value (they can be exchanged for other products in a given way). 2) The exchange-...

3 Though I'm certainly no expert in Marxist economic theory, I think he's presenting it as: if you can figure out a way of speeding workers up, then you can generate additional surplus from their efforts. Effectively, this is just saying "say you're Ford, and have teams of 7 working on assembly lines that generate 4 cars per hour. Suppose there is some way ..."

Only top voted, non community-wiki answers of a minimum length are eligible.
2021-12-09 07:38:58
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.46085071563720703, "perplexity": 2253.098802007859}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2021-49/segments/1637964363689.56/warc/CC-MAIN-20211209061259-20211209091259-00386.warc.gz"}
https://sts-math.com/post_13715.html
Simplify 10/15. Find the greatest common factor of the numerator and the denominator and divide both by that number. To find the simplest form of a fraction, you have to find the greatest common factor (GCF). To do so, list the factors of the numerator (10) and the denominator (15) and find the biggest number common to both. Factors of 10: 1, 2, 5, 10. Factors of 15: 1, 3, 5, 15. Out of those factors, we can see that 1 and 5 are the common factors, but 5 is the greatest, which makes it our GCF. Now we can divide the numerator and the denominator by the GCF to obtain the simplest form. 10 ÷ 5 = 2 and 15 ÷ 5 = 3. Rewrite the fraction with the new numerator (2) and the new denominator (3). The simplest form of 10/15 is: 2/3.
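For readers who want to automate this, here is a minimal R sketch (not part of the original page) that computes the GCF with Euclid's algorithm and simplifies a fraction:

gcf <- function(a, b) if (b == 0) a else gcf(b, a %% b)   # Euclid's algorithm
simplify <- function(num, den) {
  g <- gcf(num, den)                                      # greatest common factor
  c(numerator = num / g, denominator = den / g)
}
simplify(10, 15)                                          # numerator 2, denominator 3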
2022-01-23 12:36:08
{"extraction_info": {"found_math": false, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.8550980091094971, "perplexity": 431.02256697911776}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2022-05/segments/1642320304261.85/warc/CC-MAIN-20220123111431-20220123141431-00710.warc.gz"}
https://speech-rule-engine.github.io/sre-tests/output/en/InferenceEnglish.html
## English Mathspeak Inference rules. Locale: en, Style: Verbose.

0 $\begin{array}{c}\begin{array}{c}\phantom{\rule{.5ex}{0ex}}A\phantom{\rule{.5ex}{0ex}}\end{array}\\ \phantom{\rule{.5ex}{0ex}}X\phantom{\rule{.5ex}{0ex}}\end{array}$ inference rule with conclusion upper X and 1 premise

1 $\begin{array}{c}\begin{array}{c}\phantom{\rule{.5ex}{0ex}}A\phantom{\rule{.5ex}{0ex}}\end{array}\\ \phantom{\rule{.5ex}{0ex}}X\phantom{\rule{.5ex}{0ex}}\end{array}$ inference rule with conclusion upper X and 1 premise

2 $\begin{array}{c}\begin{array}{c}\end{array}\\ \phantom{\rule{.5ex}{0ex}}X\phantom{\rule{.5ex}{0ex}}\end{array}$ inference rule with conclusion upper X and 1 premise

3 $\begin{array}{c}\begin{array}{c}\end{array}\\ \phantom{\rule{.5ex}{0ex}}X\phantom{\rule{.5ex}{0ex}}\end{array}\text{N}$ inference rule label upper N with conclusion upper X and 1 premise

4 $\phantom{\rule{.5ex}{0ex}}A\phantom{\rule{.5ex}{0ex}}$ axiom upper A

5 $\phantom{\rule{.5ex}{0ex}}\phantom{\rule{.5ex}{0ex}}$ axiom

6 $\begin{array}{c}\begin{array}{c}\phantom{\rule{.5ex}{0ex}}A\phantom{\rule{.5ex}{0ex}}\end{array}\\ \phantom{\rule{.5ex}{0ex}}X\phantom{\rule{.5ex}{0ex}}\end{array}\text{N}$ inference rule label upper N with conclusion upper X and 1 premise

7 $\begin{array}{c}\begin{array}{ccc}\phantom{\rule{.5ex}{0ex}}A\phantom{\rule{.5ex}{0ex}}& & \phantom{\rule{.5ex}{0ex}}B\phantom{\rule{.5ex}{0ex}}\end{array}\\ \phantom{\rule{.5ex}{0ex}}X\phantom{\rule{.5ex}{0ex}}\end{array}\text{N}$ inference rule label upper N with conclusion upper X and 2 premises

8 $\begin{array}{c}\begin{array}{ccccc}\phantom{\rule{.5ex}{0ex}}A\phantom{\rule{.5ex}{0ex}}& & \phantom{\rule{.5ex}{0ex}}B\phantom{\rule{.5ex}{0ex}}& & \phantom{\rule{.5ex}{0ex}}C\phantom{\rule{.5ex}{0ex}}\end{array}\\ \phantom{\rule{.5ex}{0ex}}X\phantom{\rule{.5ex}{0ex}}\end{array}\text{N}$ inference rule label upper N with conclusion upper X and 3 premises

9 $\text{N}\begin{array}{c}\begin{array}{c}\phantom{\rule{.5ex}{0ex}}A\phantom{\rule{.5ex}{0ex}}\end{array}\\ \phantom{\rule{.5ex}{0ex}}X\phantom{\rule{.5ex}{0ex}}\end{array}$ inference rule label upper N with conclusion upper X and 1 premise

10 $\text{N}\begin{array}{c}\begin{array}{ccc}\phantom{\rule{.5ex}{0ex}}A\phantom{\rule{.5ex}{0ex}}& & \phantom{\rule{.5ex}{0ex}}B\phantom{\rule{.5ex}{0ex}}\end{array}\\ \phantom{\rule{.5ex}{0ex}}X\phantom{\rule{.5ex}{0ex}}\end{array}$ inference rule label upper N with conclusion upper X and 2 premises

11 $\text{N}\begin{array}{c}\begin{array}{ccccc}\phantom{\rule{.5ex}{0ex}}A\phantom{\rule{.5ex}{0ex}}& & \phantom{\rule{.5ex}{0ex}}B\phantom{\rule{.5ex}{0ex}}& & \phantom{\rule{.5ex}{0ex}}C\phantom{\rule{.5ex}{0ex}}\end{array}\\ \phantom{\rule{.5ex}{0ex}}X\phantom{\rule{.5ex}{0ex}}\end{array}$ inference rule label upper N with conclusion upper X and 3 premises
2021-06-22 01:05:11
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 24, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.5770934224128723, "perplexity": 4236.503424148246}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2021-25/segments/1623488504969.64/warc/CC-MAIN-20210622002655-20210622032655-00513.warc.gz"}
http://egeoscien.neigae.ac.cn/CN/10.1007/s11769-019-1044-0
• Article •

### Spatio-temporal Change and Carrying Capacity Evaluation of Human Coastal Utilization in Liaodong Bay, China from 1993 to 2015

XU Jingping, LI Fang, SUO Anning, ZHAO Jianhua, SU Xiu

1. National Marine Environmental Monitoring Center, Dalian 116023, China

• Received: 2018-09-26; Online: 2019-06-27; Published: 2019-05-06
• Contact: XU Jingping. E-mail: xjp.pp@126.com
• Supported by: Marine Public Welfare Project (No. 201005011)

Abstract: In China, promoting the development of coastal areas has been included in a series of national strategic development plans. At the same time, many marine environmental problems have been associated with the rapid development of coastal sea use. In order to quantify the impact of human activities on the coast, the characteristics of coastlines and near-shore sea use of Liaodong Bay, Northeast China, were first classified from multi-source, remotely sensed imagery using automatic or semi-automatic extraction methods for five periods between 1993 and 2015. Sea use dynamics and coastline dynamics resulting from human activities were analyzed. Results showed a significant trend of continuous growth in sea use and a progressive increase in the total length of artificial coastline, but a noticeable loss of natural coastline during the five periods. Reclaimed land and enclosed areas were the main types of sea use. Most coastal human activities were distributed in the northern part of the bay. In recent years, rapid industrialization and urbanization in China's coastal areas have promoted large-scale land reclamation. Accordingly, the observed coastline changes during each period had a close relationship with coastal development and sea area utilization. Based on marine functional zoning (MFZ), the sea use carrying capacity was evaluated by means of indexes describing human exploitation of the marine and coastal environments in the bay. This showed that the intensity of coastal utilization in Liaodong Bay has increased year-on-year. Sea use carrying capacity reached a ‘critically loaded’ state by 2008 and was ‘overloaded’ by 2015.
2021-10-24 19:42:15
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 1, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.18950335681438446, "perplexity": 8108.354478177233}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2021-43/segments/1634323587593.0/warc/CC-MAIN-20211024173743-20211024203743-00611.warc.gz"}
https://bt.gateoverflow.in/40/gate2018-40
# GATE2018-40

$5'$ capping of mRNA transcripts in eukaryotes involves the following events:

P. Addition of GMP on the $5'$ end
Q. Removal of the $\gamma$-phosphate of the triphosphate on the first base at the $5'$ end
R. $5'-5'$ linkage between GMP and the first base at the $5'$ end
S. Addition of a methyl group to the $N7$ position of guanine

Which one of the following is the correct sequence of events?

1. P, Q, R, S
2. P, R, Q, S
3. Q, P, R, S
4. Q, P, S, R

## Related questions

1. Determine the correctness or otherwise of the following Assertion [a] and the Reason [r]: Assertion: The association constant in water for the G-C base pair is three times lower than that for the A-T base pair. Reason: There are three hydrogen bonds in the G-C base pair and two in the A-T ... [r] is true; Both [a] and [r] are false; Both [a] and [r] are true and [r] is not the correct reason for [a]
2. Which one of the following is INCORRECT about protein structures? A protein fold is stabilized by favorable non-covalent interactions; All parts of a fold can be classified as helices, strands or turns; Two non-covalent atoms cannot be closer than the sum of their van der Waals radii; The peptide bond is nearly planar
3. If a segment of a sense strand of DNA is $5'-\text{ATGGACCAGA}-3'$, then the resulting RNA sequence after transcription is: $5'-\text{AGACCAGGTA}-3'$; $5'-\text{UCUGGUCCAU}-3'$; $5'-\text{UACCUGGUCU}-3'$; $5'-\text{AUGGACCAGA}-3'$
4. The repeat sequence of telomere in humans is: $5'-\text{TATAAT}-3'$; $5'-\text{TTAGGG}-3'$; $5'-\text{GGGCCC}-3'$; $5'-\text{AAAAAA}-3'$
2022-01-17 10:59:41
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.7979324460029602, "perplexity": 2343.5258834712063}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2022-05/segments/1642320300533.72/warc/CC-MAIN-20220117091246-20220117121246-00155.warc.gz"}
https://yurichev.com/blog/modinv/
## Yet another explanation of modulo inverse

Let's imagine we work on a 4-bit CPU: it has 4-bit registers, each of which can hold a value in the 0..15 range. Now we want to divide by 3 using multiplication. Let's find the modular inverse of 3 using Wolfram Mathematica:

In[]:= PowerMod[3, -1, 16]
Out[]= 11

This is in fact a solution of the equation $3m=16k+1$ (where $16 = 2^4$):

In[]:= FindInstance[3 m == 16 k + 1, {m, k}, Integers]
Out[]= {{m -> 11, k -> 2}}

The "magic number" for division by 3 is 11. Multiply by 11 instead of dividing by 3 and you'll get the result (quotient). This works: let's divide 6 by 3. We can do this by multiplying 6 by 11, which is 66=0x42, but in a 4-bit register only 0x2 will be left ($0x42 \equiv 2 \mod 2^4$). Yes, 2 is the correct answer, 6/3=2.

Let's divide 3, 6 and 9 by 3, by multiplying by 11 (m):

          |123456789abcdef0|123456789abcdef0|123456789abcdef0|123456789abcdef0|123456789abcdef0|123456789abcdef0|123456789abcdef0|
m=11      |***********     |                |                |                |                |                |                |
3/3 3m=33 |****************|****************|*               |                |                |                |                |
6/3 6m=66 |****************|****************|****************|****************|**              |                |                |
9/3 9m=99 |****************|****************|****************|****************|****************|****************|***             |

The "protruding" asterisk(s) (*) in the last non-empty chunk are what will be left in the 4-bit register: 1 in the case of 33, 2 for 66, 3 for 99. In fact, this "protrusion" is defined by the 1 in the equation we've solved. Let's replace 1 with 2:

In[]:= FindInstance[3 m == 16 k + 2, {m, k}, Integers]
Out[]= {{m -> 6, k -> 1}}

Now the new "magic number" is 6. Let's divide 3 by 3: 3*6=18=0x12, so 2 will be left in the 4-bit register. This is incorrect: we have 2 instead of 1, and 2 asterisks are "protruding". Let's divide 6 by 3: 6*6=36=0x24, so 4 will be left in the register. This is also incorrect: we now have 4 "protruding" asterisks instead of the correct 2. Replace the 1 in the equation by 0, and nothing will "protrude".

Now the problem: this only works for dividends of the form 3x, i.e., those which can be divided by 3 with no remainder. Try to divide 4 by 3: 4*11=44=0x2c, so 12 will be left in the register, which is incorrect. The correct quotient is 1. We can also notice that the number of times the 4-bit register "overflows" during the multiplication (i.e., the high 4 bits of the product) is roughly twice the correct quotient, while the low 4 bits hold the incorrect result. So here is what we can do: use only the high 4 bits, drop the low 4 bits, and halve. 4*11=0x2c and the high 4 bits are 2. Divide 2 by 2: this is 1, the correct quotient. Let's "divide" 8 by 3: 8*11=88=0x58; 5/2=2 (rounding down), the correct answer again.

So this is the formula we can use on our 4-bit CPU to divide numbers by 3: "x*11 >> 4 / 2", or simply "x*11 >> 5". This is the same thing almost all modern compilers do instead of integer division, but they do it for 32-bit and 64-bit registers.
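As a quick cross-check (not from the original post), the following R sketch brute-forces the modular inverse and verifies both tricks for all 4-bit values:

# Modular inverse of 3 mod 16 (what PowerMod[3, -1, 16] returns):
inv <- which((3 * (0:15)) %% 16 == 1) - 1     # brute force; yields 11

# Exact division for multiples of 3: multiply by 11, keep the low 4 bits.
stopifnot(all((c(3, 6, 9, 12, 15) * inv) %% 16 == c(1, 2, 3, 4, 5)))

# General quotients: multiply by 11, then shift right by 5,
# i.e. x %/% 3 == (x * 11) %/% 32 for every x in 0..15.
x <- 0:15
stopifnot(all((x * inv) %/% 32 == x %/% 3))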
2017-12-17 02:05:12
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 0, "mathjax_asciimath": 1, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.7831657528877258, "perplexity": 1745.4838667906777}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2017-51/segments/1512948592846.98/warc/CC-MAIN-20171217015850-20171217041850-00662.warc.gz"}
https://zbmath.org/?q=an:1131.68076
## Characterizations of finite and infinite episturmian words via lexicographic orderings. (English) Zbl 1131.68076

Summary: We characterize by lexicographic order all finite Sturmian and episturmian words, i.e., all (finite) factors of such infinite words. Consequently, we obtain a characterization of infinite episturmian words in a wide sense (episturmian and episkew infinite words). That is, we characterize the set of all infinite words whose factors are (finite) episturmian. Similarly, we characterize by lexicographic order all balanced infinite words over a 2-letter alphabet; in other words, all Sturmian and skew infinite words, the factors of which are (finite) Sturmian.

### MSC:

68R15 Combinatorics on words

### Keywords:

skew infinite words
2022-05-21 12:26:48
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 1, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.8497713208198547, "perplexity": 9430.45728757482}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2022-21/segments/1652662539101.40/warc/CC-MAIN-20220521112022-20220521142022-00120.warc.gz"}
https://solvedlib.com/n/use-a-graphing-utility-to-find-the-inverse-if-it-exists,13032859
# Use a graphing utility to find the inverse, if it exists, of each matrix. Round answers to two decimal places.

$\left[\begin{array}{rrrr} 44 & 21 & 18 & 6 \\ -2 & 10 & 15 & 5 \\ 21 & 12 & -12 & 4 \\ -8 & -16 & 4 & 9 \end{array}\right]$
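R can play the role of the graphing utility here; a hedged sketch (not part of the original solution) would be:

m <- matrix(c(44,  21,  18, 6,
              -2,  10,  15, 5,
              21,  12, -12, 4,
              -8, -16,   4, 9),
            nrow = 4, byrow = TRUE)
round(solve(m), 2)   # solve() errors if the matrix is singular, i.e. if no inverse exists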
2022-09-27 05:30:42
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 3, "mathjax_display_tex": 0, "mathjax_asciimath": 1, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.6375424861907959, "perplexity": 9793.781875362916}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 20, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2022-40/segments/1664030334987.39/warc/CC-MAIN-20220927033539-20220927063539-00599.warc.gz"}
https://joachim-gassen.github.io/ExPanDaR/dev/articles/use_ExPanD.html
ExPanD is a shiny-based app building on the functions of the ExPanDaR package. Its purpose is to make panel data exploration fun and easy. Using ExPanD you can

• quickly explore panel data, regardless of its origin,
• prototype simple test designs and verify them out-of-sample, and
• provide users the opportunity to assess the robustness of your findings without providing them access to the underlying data.

This vignette will guide you through the process of using ExPanD by discussing three use cases. While the first two use macro-economic data to explore the association of gross domestic product (GDP) per capita with life expectancy at birth, the last use case explores the association between financial accounting performance measures and concurrent stock returns. If you do not use R you can still use the ExPanD app to explore panel data! In this case, access the hosted variant of the ExPanD app here and follow the advice below on how to upload a suitable panel data file for online exploration. No worries: your data won't be stored on the server and will be deleted from memory once the connection to the server is closed.

## Starting ExPanD to upload a local file containing panel data

The easiest way to start using ExPanD is with a local data file containing panel data. ExPanD supports Stata, SAS, CSV, Excel and R file formats. To use ExPanD from within R, you have to install the ExPanDaR package and start ExPanD.

devtools::install_github("joachim-gassen/ExPanDaR")
library(ExPanDaR)
ExPanD()

Alternatively, you can simply access the online hosted version of the ExPanD app here (no R required). After starting ExPanD, you will be greeted with a bare-bones file upload dialog. Now you need a file to explore. Feel free to use whatever you want, but for our first use case I will use the well-known gapminder dataset provided by the gapminder package (click here if you do not know the gapminder initiative).

library(gapminder)
# write.csv(gapminder, file = "gapminder.csv", row.names = FALSE)

#> # A tibble: 10 x 6
#> country continent year lifeExp pop gdpPercap
#> <fct> <fct> <int> <dbl> <int> <dbl>
#> 1 Afghanistan Asia 1952 28.8 8425333 779.
#> 2 Afghanistan Asia 1957 30.3 9240934 821.
#> 3 Afghanistan Asia 1962 32.0 10267083 853.
#> 4 Afghanistan Asia 1967 34.0 11537966 836.
#> 5 Afghanistan Asia 1972 36.1 13079460 740.
#> 6 Afghanistan Asia 1977 38.4 14880372 786.
#> 7 Afghanistan Asia 1982 39.9 12881816 978.
#> 8 Afghanistan Asia 1987 40.8 13867957 852.
#> 9 Afghanistan Asia 1992 41.7 16317921 649.
#> 10 Afghanistan Asia 1997 41.8 22227415 635.

To use ExPanD, you need the following:

• a panel dataset in long format with at least two numerical variables and without duplicate observations (as identified by the cross-sectional and time-series dimension),
• a variable or a vector of variables within this dataset that identifies the cross-sectional dimension, and
• a variable that is coercible to an ordered factor and that identifies (and sorts) the time dimension of your panel.

As you can see, the gapminder file contains country-year data. It is organized in long format, using country as the cross-sectional identifier and year as the time-series identifier. Each of the additional variables is stored in a separate column. It has one factor (continent) and three numerical variables (lifeExp, pop and gdpPercap). So it complies with the above requirements, assuming that it has no duplicates:

any(duplicated(gapminder[,c("country", "year")]))
#> [1] FALSE

OK.
Use the commented-out write.csv() function call above to save the CSV file to your system and use the file dialog to load it into ExPanD (if you are not using R, you can download the CSV file here). After uploading the file, two dialog boxes will appear, asking you to select the cross-sectional identifier(s) and the time-series identifier. Select country as the cross-sectional identifier and year as the time-series identifier. ExPanD will now process the data and display it so that you can start exploring.

## Starting ExPanD with a data frame containing panel data

Alternatively, if you are using R, you can bypass the file upload dialog by specifying a data frame and its cross-sectional as well as time-series identifiers.

devtools::install_github("joachim-gassen/ExPanDaR")
library(ExPanDaR)
library(gapminder)
ExPanD(df = gapminder, cs_id = "country", ts_id = "year")

## Exploring data

Regardless of whether you uploaded the gapminder data via the file dialog or specified the data frame in the ExPanD() function call, the ExPanD shiny app will start up and look like this. As can be seen from the bar chart, the gapminder dataset provides a balanced panel of 142 countries with 12 observations per country. The missing values graph shows no missing data across all variables. When you scroll down, you will see that the dataset contains three numerical variables. Play around with the histogram and the extreme observations table to learn more about them. The time trend graph and the quantile time trend communicate good news: life expectancy is increasing over time world-wide. You can verify that the same holds true for the population of the sample countries and for their GDP per capita. You will also notice that for the latter two the cross-country distribution widens over time. The gapminder dataset is often used to document the strong positive association between GDP per capita and life expectancy. You can see this association in the correlation plot. The blue ellipsoid above (below) the diagonal visualizes the positive Pearson (Spearman) correlation of the two variables. If you are interested in the exact correlation values, hover over the ellipsoid with your mouse. The scatter plot and the regression analysis section allow you to explore this association in a bit more detail. Below you will see a screenshot where I prepared a "Hans Rosling" scatter plot (click here if you do not know the name). In addition, I estimated a by-region OLS model with country fixed effects and standard errors clustered by country to verify that the association is not just driven by unobservable time-constant country heterogeneity. Looking at the scatter plot, you will notice that there are some observations with extremely high GDP per capita that cause the LOESS smoother line to take on a negative slope. If you hover over the dots with your mouse, you will see that these are observations from Kuwait. To what extent are our regression results affected by these extreme observations? To figure this out, scroll up and select to winsorize your data at the 1 % level. After doing this, the figure from above looks like this. The association has become more robust across regions and the scatter plot now shows a positive association across the complete range of winsorized GDP per capita. Continue to play around with your data. Let's assume that at some point you find something that you consider worth preserving, so that next time you start ExPanD with the gapminder dataset, it starts directly into the view that you currently have.
No problem! Just scroll down to the bottom of the page. There you will find a save dialog (and a load dialog as well, just in case). Save your ExPanD choices to a place that you will remember. The file that will be stored is a plain list, saved as an RDS file. Assuming that you named the file "ExPanD_config.RDS" and stored it in your current working directory, you can now start ExPanD right into your favorite analysis by reading and providing this list.

ExPanD_config <- readRDS("ExPanD_config.RDS")
ExPanD(df = gapminder, cs_id = "country", ts_id = "year", config_list = ExPanD_config)

The gapminder dataset contains only three numerical variables. You might wonder what the association between GDP per capita and life expectancy would look like if you included additional test or control variables. In addition, GDP per capita, as a metric affected by growth processes, is far from normally distributed. Does the association with life expectancy hold when you log-transform it? Time for our second use case, which re-examines the association presented above using data provided by the World Bank.

The questions of the last paragraph are typical for exploratory data analysis workflows, and ExPanD is equipped to handle them. When started in its "advanced mode", it provides two samples: a base sample and an analysis sample. You can then define additional variables based on the base sample interactively. When you call ExPanD without options, it will start into the advanced mode, generating an analysis sample that is identical to the sample that you uploaded. When you start ExPanD by providing it with a data frame at the command line, you decide whether you want to use the "simple" or "advanced" mode. When you provide a data frame containing variable definitions via the var_def parameter, ExPanD will start in the advanced mode. A variable definition data frame has to contain at least three character columns: var_name, var_def and type. In addition, it can contain a logical column can_be_na. Let's take a look at the variable definition data frame for the worldbank dataset provided by the ExPanDaR package.

#> var_name var_def type can_be_na
#> 1 country country cs_id 0
#> 2 region region factor 0
#> 3 income income factor 0
#> 4 year year ts_id 0
#> 5 time as.numeric(as.character(year))-1960 numeric 0
#> 6 gdp NY.GDP.MKTP.KD numeric 0
#> 7 population SP.POP.TOTL numeric 0
#> 8 gdp_capita NY.GDP.PCAP.KD numeric 0
#> 9 extdebt_gni DT.DOD.DECT.GN.ZS numeric 1
#> 10 debtservice_gni DT.TDS.DPPG.GN.ZS numeric 1

var_name contains variable names for the analysis sample and var_def contains the definitions for these variables. The definitions refer to variables contained in the worldbank dataset (which conform to the naming convention of the World Bank). Most definitions are just simple 1:1 transformations of the worldbank dataset but, as you can see from the definition of time, you can also use standard R expressions within the scope of the worldbank data frame. For the R experts: your definition will be evaluated within a dplyr::mutate() call on the base data frame, grouped by the cross-sectional and ordered by the time-series identifier, so that, for example, lead() and lag() work as expected. With the type variable you specify the nature of the variable that you just defined. Possible values are cs_id, ts_id, numeric, logical, and factor. They identify cross-sectional identifier(s), the time-series identifier, numerical variables, Boolean (TRUE/FALSE) variables, and variables to be treated as grouping factors. Note that the data does not have to have the according class, but it has to be coercible to it. The can_be_na column can be omitted; if you do not provide it, it will be set to TRUE for all variables besides the cross-sectional and time-series identifiers.
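To build your own definition data frame, a hedged sketch may help; the base data frame my_base_df and its item my_gdp_item are invented for illustration:

my_var_def <- data.frame(
  var_name  = c("country", "year", "log_gdp"),
  var_def   = c("country", "year", "log(my_gdp_item)"),   # standard R expressions are allowed
  type      = c("cs_id", "ts_id", "numeric"),
  can_be_na = c(FALSE, FALSE, FALSE)
)
ExPanD(df = my_base_df, var_def = my_var_def)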
In the worldbank_var_def data frame, can_be_na is set to FALSE for the variables time, gdp, population, and gdp_capita, meaning that only observations with non-missing values for these variables will be included in the analysis dataset. By customizing the data frame that you provide to ExPanD() via the var_def parameter, you can design the analysis sample as you wish. An alternative, more interactive approach is to define variables on the fly while ExPanD() is running. Let's try. Run the following code to start ExPanD with the worldbank base data in advanced mode.

library(ExPanDaR)
ExPanD(df = worldbank, df_def = worldbank_data_def, var_def = worldbank_var_def, config_list = ExPanD_config_worldbank)

What you will see is an analysis similar to the gapminder analysis of the first use case, but with a more extensive dataset. The scatter plot and the regression analysis are displayed below. They show a positive association of GDP per capita with life expectancy after controlling for public spending on health and income inequality (which happens to be negatively associated with life expectancy). As you can see from the table, the number of observations is 1,068. How does this reconcile with the roughly 8,500 observations that the World Bank sample has data for? A quick look at the missing values graph below helps to understand the issue. While gdp_capita is available for all observations (remember the can_be_na column in the data definition data frame?) and life_expectancy has good coverage for all but the most recent years, both pubspend_health_gdp and giniindex are only available for later years in the sample. giniindex is also only available for a subset of countries. Taken together, this drastically reduces the sample size of the regression model. Explore whether this has an effect on the documented associations by excluding and including the test variables one by one. You will see that the associations are reasonably robust. Now let's see whether the distributional properties of the main independent variable of interest have an impact on the association. The screenshot below displays the histogram of gdp_capita. This looks like a log-normally distributed variable, so a log transformation should yield a more normally distributed variable. To calculate a logged variant of gdp_capita, we first need to find out which World Bank data item gdp_capita is based on. Hovering with your mouse over the variable name in the descriptive statistics table, you will see that it is based on the data item NY.GDP.PCAP.KD. When you switch the tab of the descriptive statistics to the base sample, you can see all 72 base data items that the worldbank dataset contains. Use the dialog above the descriptive statistics as shown below to calculate a log-transformed measure of GDP per capita. You will see a message window confirming that your variable was successfully generated. What does its histogram look like? Better. Now let's see how this new variable is associated with life expectancy. First, a quick look at the scatter plot. This looks different from the gapminder plot in the section above, as it exhibits a more linear association. Let's see how our regression model looks when we use log_gdp_capita instead of gdp_capita.
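Conceptually, the interactive definition amounts to something like the following hedged dplyr sketch (not the package's literal internals; it assumes the worldbank base data carries the raw World Bank item NY.GDP.PCAP.KD as a column):

library(dplyr)
analysis_df <- worldbank %>%
  group_by(country) %>%                            # grouped by the cross-sectional identifier
  arrange(year, .by_group = TRUE) %>%              # ordered by the time-series identifier
  mutate(log_gdp_capita = log(NY.GDP.PCAP.KD)) %>%
  ungroup()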
The logged version of GDP per capita remains robustly positively associated with life expectancy, but income inequality (as measured by giniindex) is now only marginally associated with life expectancy.

Another thing that one can notice from the scatter plot above is that each country appears to be on its own “trajectory” in terms of life expectancy development. We also know that in most countries and periods GDP per capita increases over time. Can we be sure, then, that the association of GDP per capita with life expectancy is distinct from a general time trend in the data? Below, you will find a scatter plot that uses time as the independent variable. You can see that for most countries, life expectancy seems to follow a robust and similar linear time trend. There are some exceptions to this rule (China, Mali, Rwanda, Sierra Leone). To infer whether our association “survives” a control for time-induced variance in life expectancy that is stable across countries, we estimate a regression model that includes country and year fixed effects. See below.

All associations are gone. Fun fact: when you replace log_gdp_capita with gdp_capita, you will see that its coefficient even turns significantly negative, meaning that increasing GDP per capita is associated with a decrease in life expectancy! Please keep in mind that the above is not meant to challenge common beliefs in health economics and epidemiology; it is merely presented as a use case for interactive data exploration to assess the robustness of statistical inference. A scripted sketch of these last steps follows.
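If you would rather script these last steps than click through the app, here is a rough sketch of how the log-transformed variable and the two-way fixed effects model could be produced. The object names my_var_def and smp are hypothetical, and the felm() call is just one possible way to estimate such a model; ExPanD's own internal estimation code may differ:

library(ExPanDaR)
library(lfe)

# Add a log-transformed GDP per capita to the variable definitions
# (same column layout as worldbank_var_def shown above)
my_var_def <- rbind(
  worldbank_var_def,
  data.frame(var_name  = "log_gdp_capita",
             var_def   = "log(NY.GDP.PCAP.KD)",
             type      = "numeric",
             can_be_na = TRUE)
)
ExPanD(df = worldbank, df_def = worldbank_data_def, var_def = my_var_def)

# A country and year fixed effects model, assuming you have prepared an
# analysis sample 'smp' containing the variables used above
mod <- felm(life_expectancy ~ log_gdp_capita + pubspend_health_gdp + giniindex |
              country + year, data = smp)
summary(mod)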
## Using ExPanD with multiple samples

There are instances where you might want to explore several samples simultaneously. Two examples:

• You are analyzing observational data that is available from alternative data sources. You are interested in learning whether the different data sources generate the same insights.
• You want to use exploratory data analysis to develop a predictive model and need to test this model out of sample. For this, you split your original sample into a training and a test dataset.

Our third and final use case builds on the second motivation. For it, we will explore and test an association that is a key finding in the area where I do most of my research: financial accounting and capital markets. I do not want to bore you with the details, but in essence the topic that we will explore is the concurrent association between financial reporting measures of corporate performance and stock market returns. Prior research has documented that financial reporting performance measures, most prominently net income, have a robust but overall weak association with concurrent stock market returns. Net income can be broken up into two components: cash flow from operations and total accruals. While the former essentially captures the net cash receipts that a company realizes over the year as an outcome of its operating business activities, the latter reflects the financial accounting adjustments for timing disparities between economic activities and cash collection. Two examples of total accruals:

• A company sells goods to a customer in period 1 but does not collect the cash revenue until period 2. This results in a positive accrual in period 1 and a negative accrual in period 2.
• A company buys and pays for inventory in period 1, uses it in production in period 2, and sells the goods for cash in period 3. Here, we would have a positive accrual (adjusting the negative cash flow from operations) in period 1 and a negative accrual in period 3.

Generously glossing over many important details, the accounting literature has documented three key findings around this notion:

• Both cash flows from operations and total accruals are associated with concurrent stock returns.
• Cash flows from operations have a stronger association with returns than total accruals.
• The stock market seems to misprice accruals, meaning that it puts too much weight on accruals when incorporating net income news into stock prices.

We will revisit the second statement. To do so, we use the dataset russell_3000 that is included with the ExPanDaR package. Most capital market-based research uses data from commercial data vendors, requiring researchers to obtain a costly license. To circumvent this barrier to open science, we collect data from publicly available APIs (Yahoo and Google Finance) using the tidyquant package. The sample comprises available data for U.S. listed firms that were members of the Russell 3000 index in 2017. The data are more or less as provided by tidyquant and are used here for illustrative purposes only.

To explore the data and to test a model on it, we split the russell_3000 data into two equally sized, randomly selected samples: a “training sample” and a “test sample”. The idea is that we will explore the training sample but will infer the significance of our association test from the test sample. Run the following to generate the two samples and to start ExPanD with them.

library(ExPanDaR)
set.seed(42)
training_sample <- sample(nrow(russell_3000), round(.5*nrow(russell_3000)))
test_sample <- setdiff(1:nrow(russell_3000), training_sample)
ExPanD(df = list(russell_3000[training_sample, ], russell_3000[test_sample, ]),
       df_def = russell_3000_data_def,
       df_name = c("Training sample", "Test sample"))

ExPanD starts by displaying the training sample. As you can infer from the bar chart and the descriptive table, the russell_3000 dataset contains a short unbalanced panel of four years and 2,289 firms. Also, you have varying amounts of missing data across the variables of the dataset, with some variables containing no values for the first fiscal year of the sample. When you hover with your mouse over the variable names in the descriptive table, a tooltip will present you with hopefully informative variable definitions. Where do the variable definitions come from? When you take a look at the ExPanD() function call, you will notice the df_def parameter. It points to a data frame provided by the ExPanDaR package that contains the variable definitions.

The variables that we are interested in are return (the annual stock market return and our dependent variable), nioa (net income, deflated by average total assets), cfoa (cash flow from operations, deflated by average total assets) and accoa (total accruals, deflated by average total assets). To explore their level of association, I suggest that you start by analyzing the scatter plot of cash flow and returns. You will notice that, again, extreme observations are relatively influential. Limit their influence by winsorizing at the 1st and 99th percentiles. After doing that, you will get an image that looks like the screenshot below. There seems to be the predicted positive association, although it is relatively weak and mostly confined to positive cash flows. Let’s see how the association looks for accruals in the picture below.
No robust association is visible here. To test whether the two associations indeed differ, we set up a regression model using the fact that $$nioa = cfoa + accoa$$. Because of this identity, regressing returns on net income and accruals is equivalent to regressing returns on cash flows and accruals, with the coefficient on accruals now capturing the incremental association of accruals relative to cash flows. If the coefficient for accruals turns out to be significantly negative, we have found evidence that the association of accruals with returns is significantly weaker than the association of cash flows with returns. In order to control for unobserved time-constant factors that drive stock market returns and vary at the firm level, we include firm fixed effects in the analysis. The figure below shows our findings for the training sample.

We find a marginally significant coefficient for $$accoa$$ but, as discussed above, we should not base our inferences on the training sample since we used this sample to explore the data. For example, our (admittedly ad hoc) decision to winsorize the data was based on a visual inspection of the scatter plot. Strictly speaking, this disqualifies the training data for testing. So, scroll up, switch the sample to the test sample and see what you find.

As the screenshot shows, we find no significant coefficient for $$accoa$$. This indicates that the predicted difference in associations is not statistically significant at conventional levels. While this might be driven by the relatively low power of the test (a short panel with cross-sectional fixed effects), I encourage you to use ExPanD to explore this finding further. You will quickly notice that, depending on how exactly you specify your test (fixed effect structure, standard error clustering, etc.) and on whether and how you split your training and test samples, you can generate findings that are or are not “statistically significant at conventional levels”. So, again, this use case of ExPanD demonstrates how the app can be used to assess the robustness of statistical inference. A scripted version of this final test is sketched below.
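If you want to replicate this final test in a script rather than in the app, a compact sketch follows. The identifier name coid, the percentile argument of treat_outliers(), and the felm() specification are my assumptions; the app's internal implementation may differ:

library(ExPanDaR)
library(lfe)

# Winsorize the regression variables at the 1st and 99th percentiles
# (treat_outliers() is ExPanDaR's winsorization helper)
vars <- c("return", "nioa", "accoa")
win  <- treat_outliers(russell_3000[, vars], percentile = 0.01)
smp  <- cbind(russell_3000[, "coid", drop = FALSE], win)  # keep the firm identifier

# Since nioa = cfoa + accoa, the coefficient on accoa measures how much
# weaker (or stronger) the accrual association is relative to cash flows
mod <- felm(return ~ nioa + accoa | coid, data = smp)
summary(mod)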
https://byjus.com/gate/how-many-times-gate-exam-is-conducted-in-a-year/
How many times a GATE exam is conducted in a year? The Graduate Aptitude Test in Engineering (GATE) is a competitive exam conducted by IISc and the seven IITs. Qualifying in GATE is mandatory for admission to ME/M.Tech programmes and for specific PSU jobs. GATE scores are also used for availing financial assistance (such as scholarships). Now, the question that needs to be addressed in this article is: how many times is the GATE exam conducted in a year?

How Many Times In a Year is the GATE Exam Conducted? Will GATE be conducted twice or more in a year? The answer is no. GATE exams are conducted only once a year, generally during the first or second week of February. Hence, the answer to the question is that this computer-based national exam is organised only once a year.

Meanwhile, GATE scores are valid for three years, and there is no limit on how many times a candidate can appear for the GATE exams over the years. However, the requirements of PSUs or post-graduate courses may differ across institutions; for most PSUs, GATE scores are said to be valid for only one year. And while there is no cap on the total number of attempts, a candidate can appear for GATE only once within a given year, simply because the exam is conducted only once a year.
https://www.yuyakaneta.page/publication/lmu-ltau/
# On the possibility of a search for the $L_\mu - L_\tau$ gauge boson at Belle-II and neutrino beam experiments ### Abstract We study the possibilities of a search for the light and weakly interacting gauge boson in the gauged $L_\mu - L_\tau$ model. Introducing the kinetic mixing at the tree level, the allowed parameter regions for the gauge coupling and kinetic mixing parameter are presented. Then, we analyze one photon plus missing event within the allowed region and show that a search for the light gauge boson will be possible at the Belle-II experiment. We also analyze the neutrino trident production process in neutrino beam experiments. Publication In Progress of Theoretical and Experimental Physics (PTEP) ##### Yuya Kaneta/金田佑哉 ###### Data Scientist My research was a particle physics phenomenology. Of course, I am interested in data science and particle physics, but recently I also interest in the use of data in the space exploration industry.
https://www.investopedia.com/ask/answers/041415/how-capital-asset-pricing-model-capm-represented-security-market-line-sml.asp
The capital asset pricing model (CAPM) and the security market line (SML) are used to gauge the expected returns of securities given levels of risk. The concepts were introduced in the early 1960s and built on earlier work on diversification and modern portfolio theory. Investors sometimes use CAPM and SML to evaluate a security—in terms of whether it offers a favorable return profile against its level of risk—before including the security within a larger portfolio.

## Capital Asset Pricing Model

The capital asset pricing model (CAPM) is a formula that describes the relationship between the systematic risk of a security or a portfolio and its expected return. It can also help measure the volatility, or beta, of a security relative to others and compared to the overall market.

### Key Takeaways

• Any investment can be viewed in terms of risk and return.
• The CAPM is a formula that yields expected return.
• Beta is an input into the CAPM and measures the volatility of a security relative to the overall market.
• The SML is a graphical depiction of the CAPM and plots risks relative to expected returns.
• A security plotted above the security market line is considered undervalued, and one that is below the SML is overvalued.

Mathematically, the CAPM formula is the risk-free rate of return added to the beta of the security or portfolio multiplied by the expected market return minus the risk-free rate of return:

$$
\begin{aligned}
&\text{Required Return} = \text{RFR} + \beta_\text{stock/portfolio} \times ( \text{R}_\text{market} - \text{RFR} ) \\
&\textbf{where:} \\
&\text{RFR} = \text{Risk-free rate of return} \\
&\beta_\text{stock/portfolio} = \text{Beta coefficient for the stock or portfolio} \\
&\text{R}_\text{market} = \text{Return expected from the market} \\
\end{aligned}
$$

The CAPM formula yields the expected return of the security. The beta of a security measures its systematic risk and its sensitivity relative to changes in the market. A security with a beta of 1.0 has a perfect positive correlation with its market: when the market increases or decreases, the security should increase or decrease by the same percentage amount. A security with a beta higher than 1.0 carries greater systematic risk and volatility than the overall market, and a security with a beta less than 1.0 has less systematic risk and volatility than the market.

## Security Market Line

The security market line (SML) displays the expected return of a security or portfolio. It is a graphical representation of the CAPM formula and plots the relationship between the expected return and beta, or systematic risk, associated with a security. The expected return of securities is plotted on the y-axis of the graph and the beta of securities is plotted on the x-axis. The slope of the plotted relationship is known as the market risk premium (the difference between the expected return of the market and the risk-free rate of return), and it represents the risk-return tradeoff of a security or portfolio.

## CAPM, SML, and Valuations

Together, the SML and CAPM formulas are useful in determining whether a security being considered for investment offers a reasonable expected return for the amount of risk taken on. If a security’s expected return versus its beta is plotted above the security market line, it is considered undervalued, given the risk-return tradeoff.
Conversely, if a security’s expected return versus its systematic risk is plotted below the SML, it is overvalued, because the investor would be accepting a smaller return for the amount of systematic risk assumed. The SML can be used to compare two similar investment securities that have approximately the same return, to determine which of the two carries the least inherent risk relative to the expected return. It can also compare securities with equal risk to determine whether one offers a higher expected return.

While the CAPM and the SML offer important insights and are widely used in equity valuation and comparison, they are not standalone tools. There are additional factors—other than the expected return of an investment over the risk-free rate of return—that should be considered when making investment choices.
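As a quick illustration of how the CAPM formula and the SML comparison fit together, here is a small sketch in R. The numbers are made up for illustration and are not from the article:

# CAPM: required return for a given beta (a point on the SML)
capm <- function(rfr, beta, r_market) rfr + beta * (r_market - rfr)

rfr      <- 0.03   # risk-free rate of return
r_market <- 0.08   # expected market return

required <- capm(rfr, beta = 1.2, r_market = r_market)
required           # 0.09, i.e. a 9% required return at beta = 1.2

expected <- 0.11   # an analyst's expected return for the security
# Above the SML -> undervalued; below the SML -> overvalued
if (expected > required) "undervalued" else "overvalued"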
http://mathoverflow.net/questions/128183/existence-of-limit-measure
# Existence of limit measure

Let $X$ be a separable metric space, $\mu_{n}$ a sequence of Borel probability measures, and $\mathcal{C}$ a family of sets that is closed under finite unions and intersections and that contains all the balls. If $\mu_{n}(A)$ converges for every $A\in\mathcal{C}$, does there exist a Borel measure $\mu_{\infty}$ such that $\mu_{\infty}(E)=\lim\mu_{n}(E)$ for every $E\in\mathcal{C}$?

From Theorem 4.3 in this paper we can get this result when $X$ is locally compact. Here Sion constructs an outer measure and then shows that open sets are measurable. His proof crucially uses local compactness. Does anybody know if the result is true when $X$ is not necessarily locally compact?

Comments:

The referenced Theorem 4.3 does not imply the condition in the first paragraph, at least not as stated. The theorem is in terms of weak$^*$ convergence, i.e. $\int f \, d\mu_n \to \int f\, d\mu_\infty$ for all $f \in C_c(X)$ (continuous with compact support). This does not imply that $\mu_n(E) \to \mu_\infty(E)$ for $E\not\subset K$ for any $K$ compact (hence Andreas' counterexample below). The statement is true if you assume that all elements of $\mathcal C$ are relatively compact. – D. Kelleher Apr 20 '13 at 19:33

Thanks for your comments. Theorem 4.3 talks about two kinds of limits. The other limit is the outer measure constructed in section 3. Theorem 3.3 mentions that the outer measure is Radon (hence every Borel set is measurable) if every open set is the countable union of compact sets. – FelipeG Apr 21 '13 at 17:18

Answer (Andreas):

Why isn't the following a (locally compact) counterexample? Let $X$ be the set of natural numbers, with the metric where the distance between every two distinct points is 1. So the topology is discrete, and the only balls are the singletons and the whole space. Let $\mathcal C$ consist of the finite sets and the whole space. Let $\mu_n$ be the probability measure concentrated at the point $n$. Then for each finite set $A$ we have $\lim_n\mu_n(A)=0$, while for the whole space $X$ we have $\lim_n\mu_n(X)=1$. These limits are not the values of any countably additive measure $\mu_\infty$: countable additivity would force $\mu_\infty(X)=\sum_k \mu_\infty(\{k\})=0$, contradicting $\mu_\infty(X)=1$.

Answer:

The answer is no, and, without more conditions, I think that the best one can hope for is that $\mu_\infty$ is a finitely additive measure. The idea from the paper of Maurice Sion that was cited is that finite measures are dual to $C_0(X)$, the continuous functions vanishing at infinity. This technique doesn't work if $X$ is not locally compact: if $x_0\in X$ is such that there is no relatively compact neighborhood of $x_0$, then $f(x_0)=0$ for all $f \in C_0(X)$. So a point mass at $x_0$ is equivalent to the zero measure as an element of the dual of $C_0$! As a workaround, one could consider $L^\infty(X)$ (in particular, one would need an appropriate Borel reference measure). The dual of $L^\infty$ can be identified with the finitely additive measures. The condition $\mu_n(A)\to\mu_\infty(A)$ for all $A$ in a basis of $X$ means that the induced functionals converge on simple functions, and the limit is bounded because the measures were taken to be probabilities. So the limit should be a finitely additive measure. I'm being a little fast and loose, so there may be things I'm forgetting to assume (regularity of the measures, maybe?).
https://socratic.org/questions/how-much-boiling-water-would-you-need-to-raise-the-bath-to-body-temperature-abou
# How much boiling water would you need to raise the bath to body temperature (about 37 °C)? Assume that no heat is transferred to the surrounding environment. Express your answer to two significant figures and include the appropriate units.

## You fill your bathtub with 25 kg of room-temperature water (about 25 °C). You figure that you can boil water on the stove and pour it into the bath to raise the temperature.

Jan 5, 2018

4.8 kg

#### Explanation:

The idea here is that the heat given off by the boiling water will be equal to the heat absorbed by the room-temperature sample:

$$q_\text{absorbed} = -q_\text{given off} \quad (*)$$

The minus sign is used here because, by convention, heat given off carries a minus sign. Another assumption that you have to make is that the specific heat of liquid water is constant regardless of the temperature of the liquid water. In other words, you need to have

$$c_\text{liquid water at $25^\circ$C} = c_\text{liquid water at $100^\circ$C}$$

Now, your tool of choice here will be the equation

$$q = m \cdot c_\text{liquid water} \cdot \Delta T$$

Here

• $q$ is the heat absorbed or given off
• $m$ is the mass of the sample
• $c_\text{liquid water}$ is the specific heat of liquid water
• $\Delta T$ is the change in temperature, calculated as the difference between the final temperature and the initial temperature of the sample

So, you know that you have

$$q_\text{absorbed} = m_1 \cdot c_\text{liquid water} \cdot \Delta T_\text{warming}$$

for the room-temperature water, which has

$$\Delta T_\text{warming} = 37^\circ\text{C} - 25^\circ\text{C} = 12^\circ\text{C}$$

Similarly, you have

$$q_\text{given off} = m_2 \cdot c_\text{liquid water} \cdot \Delta T_\text{cooling}$$

for the boiling water, which has

$$\Delta T_\text{cooling} = 37^\circ\text{C} - 100^\circ\text{C} = -63^\circ\text{C}$$

Use equation $(*)$ to get

$$m_1 \cdot c_\text{liquid water} \cdot \Delta T_\text{warming} = - m_2 \cdot c_\text{liquid water} \cdot \Delta T_\text{cooling}$$

The specific heats cancel:

$$m_1 \cdot \Delta T_\text{warming} = - m_2 \cdot \Delta T_\text{cooling}$$

This is equivalent to

$$m_2 = \frac{\Delta T_\text{warming}}{-\Delta T_\text{cooling}} \cdot m_1$$

Plug in your values to find

$$m_2 = \frac{12^\circ\text{C}}{-(-63^\circ\text{C})} \cdot 25\ \text{kg} = 4.8\ \text{kg}$$

Notice that you need the minus sign to cancel out the minus sign coming from the change in temperature. The answer is rounded to two sig figs.

So, if you add 4.8 kg of liquid water at $100^\circ$C to 25 kg of water at $25^\circ$C, you will end up with a mixture that has a final temperature of $37^\circ$C.
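As a quick sanity check, the final arithmetic can be reproduced in a few lines of R (the values are taken from the worked solution above):

m1      <- 25         # kg of water at 25 °C
dT_warm <- 37 - 25    # +12 °C for the room-temperature water
dT_cool <- 37 - 100   # -63 °C for the boiling water
m2      <- dT_warm / (-dT_cool) * m1
m2                    # 4.7619..., which rounds to 4.8 kg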
http://www.stata.com/support/faqs/graphics/join-paired-points/
Note: This FAQ is relevant for users of releases prior to Stata 8.

### How do I join paired points on a graph?

Title: Joining paired points on a graph
Author: Nicholas J. Cox, Durham University, UK
Date: November 1997

Note: This FAQ concerns some special graphs of paired observations. If you have multiple measurements per observation and want to generate more standard high-low type charts, these can be created with the connect(||) and connect(II) graph options; see [G] connect.

Suppose you have data reflecting paired measurements, taken before and after an intervention, and you want to draw a graph that joins before and after values, observation by observation. This can be read in two ways.

Specifically: you have data before and after and want a scatter plot in which before and after are responses on the y-axis and the variable on the x-axis is set to a low or a high constant for before and after, respectively. before should be joined to after for each observation.

|
|     *
|     *
|     *
| ********
|     *
|     *
|     *
|     *
+-------------
  before  after

More generally: suppose that we have data as variables x1 y1 (= before) and x2 y2 (= after). We want to join pairs of points (x1, y1) and (x2, y2). I will show how to do this more general case first.

1. Save the data if important, because the stack command will overwrite it.

2. Generate an identifier if it does not exist:

. gen id = _n

3. Stata does not support graphs with more than one variable on the x-axis, so we must stack the data so that the two x variables are put into one longer variable X:

. stack x1 y1 id x2 y2 id, into(X Y ID) clear

4. We have to make sure that the data are in the right order so that only the right pairs of points are joined:

. egen Xmin = min(X), by(ID)
. gsort -Xmin ID X
. gen Y1 = Y if _stack == 1
. gen Y2 = Y if _stack == 2
. graph Y Y1 Y2 X, c(L..) sy(iop)

5. The last command ensures that you get distinguishing symbols for y1 and y2. The crucial option is connect(L), which joins points for Y if and only if X is increasing. The gsort command previously put the data points in the correct order.

Here is an example of this type of graph. In this case, we are assuming that both X and Y receive some stochastic effect from the regime change.

The more specific case is, not surprisingly, easier.

1. Save the data if important.

2. Generate an identifier if it does not exist.

. gen id = _n

3. Stack the data and plot:

. stack y1 id y2 id, into(Y ID) clear
. sort ID _stack
. graph Y _stack, c(L)

In this second type of graph, we are not considering the x variable at all. Using the same Y-data from the prior graph, we obtain the following:

This kind of scatterplot and other plots for this problem are discussed in McNeil (1992).

### Reference

McNeil, D. 1992. On graphing paired data. American Statistician 46: 307–311.
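As an aside, for readers working outside Stata, the same paired-points display can be sketched in a few lines of base R. The data here are made up purely for illustration:

# Join each observation's (x1, y1) to its (x2, y2)
set.seed(1)
df <- data.frame(x1 = rnorm(10), y1 = rnorm(10),
                 x2 = rnorm(10, mean = 2), y2 = rnorm(10, mean = 2))
plot(range(df$x1, df$x2), range(df$y1, df$y2), type = "n",
     xlab = "X", ylab = "Y")
segments(df$x1, df$y1, df$x2, df$y2)  # join the pairs
points(df$x1, df$y1, pch = 1)         # before: hollow circles
points(df$x2, df$y2, pch = 16)        # after: filled circles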
https://ask.sagemath.org/answers/33867/revisions/
If I correctly understand the Macaulay2 command you provide, you can mimic the same behavior in SageMath as follows:

sage: T = TermOrder("wdeglex", (1,2))
sage: R = PolynomialRing(QQ, 'x,y', order=T)
sage: R
Multivariate Polynomial Ring in x, y over Rational Field
sage: x,y = R.gens()
sage: (x*y).degree()
3

You can find more information on term orders in the documentation [1]. Several weighted term orders are available.

[1] http://doc.sagemath.org/html/en/reference/polynomial_rings/sage/rings/polynomial/term_order.html
https://www.tt-forums.net/viewtopic.php?t=6594
Latest guidance (last update 31/05/2004)

Archived discussions related to Transport Empire. Read-only access only.
Moderator: Transport Empire Moderators

ChrisCF (Transport Empire Developer) wrote:

So far, the information here is entirely provisional. Comments, suggestions, anything that's left out: check below to see if it's already been mentioned. This is to be a clean topic for potential contributors to find up-to-date information on procedures, standards, etc. This information will eventually be duplicated on the SourceForge project page once accepted. Maybe these will end up on the SF site with links from here instead, who knows.

Prerequisites:

Regular contributors SHOULD (or MUST?) register accounts with SourceForge such that they might be added to the project and get CVS access.

Patches:

* Can be generated with GNU diff, available on most UNIX systems, and for Windows as part of Cygwin or possibly MinGW.
* SHOULD be generated by individuals unable to commit files to CVS, or to fix problems in releases (i.e. non-CVS stuff).
* Unless it conflicts with the reason for the patch, SHOULD be taken against the most recent CVS version of the file.
* SHOULD be in the "unified" format. (-u option for GNU diff)
* MUST ignore whitespace in the source files. (-b option for GNU diff)
* SHOULD be case-sensitive. (NOT using -i option for GNU diff)
* For multiple files, you SHOULD send one diff per file, unless you know how to do directory diffs properly (or I find a good way of explaining them).

Code:

Someone should draft some conventions for these, though they will probably appear somewhere else. Examples:

* Source MUST be indented with each new level of scope
* Opening braces MUST be on the same line as the thing requiring it, and the closing braces MUST be on the line after the block (example to follow)
* The beginning of each file SHOULD contain a list of people that worked on the file
* All source files MUST carry the project boilerplate (once it's been written)
* Use of the usual tags is encouraged (TODO, FIXME, HACK, etc. - more needed)

Releases:

* Official code releases MUST be done based on the contents of CVS. You SHOULD NOT bundle up a package based on your own modifications; instead post the diffs, or commit to CVS.
* Binary releases MUST be done based on CVS. Anonymous CVS runs on a time-delay, so this MUST be done by someone with project access.
* Anyone may release bundled versions; however, if they are not taken from current CVS, they MUST be identified as a third-party release.

Meetings:

For meetings on IRC:

* A bare minimum of 96 hours' notice SHOULD be provided for the sake of catching a decent audience
* Times should be given in UTC to avoid timezone difficulties
* Saturday evenings might be preferable. Most people work in the week, but one man's evening might be another's lunch break.
* Meetings typically last more than an hour - take this into account when deciding a start time - most people interested in the project are between UTC-8 and UTC+2, which by itself is a large gap

Comments below please - pretty much anything listed above can be changed at this early stage, so suggest away.

Last edited by ChrisCF on 31 May 2004 22:26, edited 2 times in total.

Hellfire (Transport Empire Developer) wrote:

Patches:
* Should include the CVS revision number.
Code:
* Each source file should include a header (todo!) containing some essential information:
  * $Id$ tag (containing revision number, date and last committer)
  * Perhaps (at most) three log items, for quick reference ($Log$ tag)

Releases:
* Create tags in the CVS at every release so we can always find certain snapshots of the repository.

Hyronymus wrote:

Perhaps a good idea to agree on a style of coding. I know some people aren't fond of the indentation style used in C++, but it does create clarity. If one doesn't do that, someone who's used to it might be searching for something for hours.

oliver (Engineer) wrote:

Coding style draft

This is what I currently have, and it is flexible enough to be adapted. Also, it has doxygen-style tags, something that I demand if you guys want me : )

Code: Select all

/*! \file <filename>.h
 *
 * \section generic <generic description>
 *
 * \section project Project information.
 * Project: TEmpire\n
 * \author <Original Author>
 * \date yyyymmdd
 * \version <version number>
 *
 * GPL
 *
 * \section history Change history
 * yyyymmdd <Firstname Lastname of changee>\n Initial version
 *
 ********************************************************************/

#ifndef _<FILENAME>_H
#define _<FILENAME>_H

/*! \def EXAMPLE(x)
 * \brief Brief description of EXAMPLE.
 *
 * Detailed description about EXAMPLE, its use
 * and further information here.
 *
 */
#define EXAMPLE(x)

/*! \defgroup BOOLEANS
 * \brief document #define block
 *
 * This block sets the TRUE and FALSE values to 0 or not 0.
 *
 */
/*@{*/
#define FALSE 0       /*!< FALSE is 0 */
#define TRUE (!FALSE) /*!< TRUE is everything but FALSE */
/*@}*/

#else
#error "ERROR file _<FILENAME>.h multiple times included"
#endif /* --- _<FILENAME>_H --- */

===============================================

/*! \file <filename>.c
 *
 * \section generic <generic description>
 *
 * \section project Project information.
 * Project: TEmpire\n
 * \author <original author>\n
 * \date <yyyymmdd>
 * \version <version>
 *
 * GPL.
 *
 * \section history Change history
 * yyyymmdd <Firstname Lastname of Changee>\n Initial version
 *
 ********************************************************************/

/* System Includes */
#include <stdio.h>
#include <stdlib.h> /* for EXIT_FAILURE */

/* Library Includes */

/* Local Includes */
#include "<filename>.h" /* prototypes for this source file */

/* Local defines */

/*! \fn int example(type param, type param, ...)
 * \brief this is an example function
 *
 * \param param this parameter is an example parameter
 *
 * \return if successful, the function returns something, otherwise not.
 * \retval int
 */
int example(type param, type param, ...)
{
    int retval; /* temporary var to store the return value */

    retval = EXIT_FAILURE; /* set return value to EXIT_FAILURE */

    return retval; /* return the return value */
} /* --- example() --- */

This, however, is still work in progress... I'm not content with the 'blocks' etc. However, this works, and it works well.

edit: things look a lot nicer with 'code' : )

Last edited by oliver on 08 Jul 2004 03:55, edited 1 time in total.

Hellfire wrote:

You could use CVS keywords to fill in some details automagically. For example: $Revision$ will fill in the revision number, e.g.
$Revision: 1.23$

And changelogs can be generated from the CVS logs with:

Code: Select all

/*! some comments
 *
 * $Log$
 */

which will turn into:

Code: Select all

/*! some comments
 *
 * $Log$
 * Some version info
 * The comments on this commit
 */

My point here: your doxygen-style commenting will be preserved by the $Log$ keyword.

oliver wrote:

I'm sure there's something in doxygen to combine both of these. I like it. But I haven't used CVS all that much. Although I'm a big fan of CVS, I've heard even better things about Subversion; it's kind of a CVS 2, in a way. I think it is definitely worth a read. We could keep Subversion's 'main' branch in sync with the CVS on SourceForge and all, but use Subversion for our own intermediate development or something. Granted, we MUST have online file versioning. Hellfire, could you check the doxygen pages about CVS support, whether it uses it or something? I'd do it, but I've got a plane to catch early tomorrow : D Vacation! Finally!!

Topic locked, information is outdated (14112006).
https://brilliant.org/problems/down-the-road-to-infinity/
# Down the Road to Infinity

Calculus Level 5

[Figure: the region bounded by the three curves below]

$$\begin{cases} y=0 \\ y=(2-x)+\sqrt{x^2-4} \\ y=(2+x)-\sqrt{x^2+4} \end{cases}$$

Let the area bounded by the curves above be $$A$$. Find $$\lfloor 100A\rfloor$$.
https://www.lmfdb.org/L/rational/8/1232%5E4/1.1/c1e4-0
Label                 | α    | A        | d | N                 | χ   | ν                  | w | ε | r | First zero | Origin
8-1232e4-1.1-c1e4-0-0 | 3.13 | 9.36×10³ | 8 | 2^16 · 7^4 · 11^4 | 1.1 | 1.0, 1.0, 1.0, 1.0 | 1 | 1 | 0 | 0.0476991  | Modular form 1232.2.q.f
8-1232e4-1.1-c1e4-0-1 | 3.13 | 9.36×10³ | 8 | 2^16 · 7^4 · 11^4 | 1.1 | 1.0, 1.0, 1.0, 1.0 | 1 | 1 | 0 | 0.0802057  | Modular form 1232.2.e.d
8-1232e4-1.1-c1e4-0-2 | 3.13 | 9.36×10³ | 8 | 2^16 · 7^4 · 11^4 | 1.1 | 1.0, 1.0, 1.0, 1.0 | 1 | 1 | 0 | 0.152416   | Modular form 1232.2.q.g
8-1232e4-1.1-c1e4-0-3 | 3.13 | 9.36×10³ | 8 | 2^16 · 7^4 · 11^4 | 1.1 | 1.0, 1.0, 1.0, 1.0 | 1 | 1 | 0 | 0.455823   | Modular form 1232.2.e.c
8-1232e4-1.1-c1e4-0-4 | 3.13 | 9.36×10³ | 8 | 2^16 · 7^4 · 11^4 | 1.1 | 1.0, 1.0, 1.0, 1.0 | 1 | 1 | 0 | 0.678678   | Modular form 1232.2.a.s
8-1232e4-1.1-c1e4-0-5 | 3.13 | 9.36×10³ | 8 | 2^16 · 7^4 · 11^4 | 1.1 | 1.0, 1.0, 1.0, 1.0 | 1 | 1 | 0 | 0.739125   | Modular form 1232.2.q.h
8-1232e4-1.1-c1e4-0-6 | 3.13 | 9.36×10³ | 8 | 2^16 · 7^4 · 11^4 | 1.1 | 1.0, 1.0, 1.0, 1.0 | 1 | 1 | 0 | 0.762268   | Modular form 1232.2.e.b
https://itecnotes.com/electrical/electronic-is-there-no-phase-shift-in-cc-and-cb-configuration/
# Electronic – Why is there no Phase Shift in CC and CB Configuration

phase shift, transistors

I am curious why in common base and common collector configurations there is no phase shift between output and input (at low frequencies, where stray capacitance doesn't "kick in" yet)?

I understand why there is a phase shift in the common emitter configuration. If the signal at the base (input) is increased, then the collector current through the collector resistor also increases, which produces a larger voltage drop across it. From the formula $V_{CE} = V_{CC} - I_C R_C$ it can easily be seen that $V_{CE}$ decreases as $V_{R_C}$ increases, hence the 180° phase shift.

In the case of common base, as the signal at the emitter (input) increases, $V_C$ also increases, since $V_C = V_{CE} + V_{RE}$. $V_{CE}$ increases as the input signal is increased. But shouldn't it decrease, just like it does in the CE configuration? Why does the collector current increase with an increase of the emitter current anyway?

The CC configuration is more logical to me (regarding phase shift), since the output is taken from the emitter, which simply follows the base (input).

I am not certain whether I understand the phase shift of any of the above configurations at all. Sources always say that there is or there isn't a phase shift in a certain configuration, but they never say why it is the way it is: the explanation behind that fact. And that is what I would like to know.

## Common Collector

The common collector configuration is also sometimes called an emitter follower. You find it almost anywhere there is a need to boost current compliance. For example, perhaps the most common use is for a very simple voltage regulator:

[Schematic: a zener-referenced emitter follower used as a simple voltage regulator]

Here, $R_1$ and zener $D_1$ set up a voltage that is about one diode-drop above the desired regulated voltage. $Q_1$'s emitter simply follows the base voltage, except one diode-drop below it. It draws current from its collector, as needed, and substantially boosts the compliance current range of the regulated output voltage. Note that if the zener voltage were higher, then the emitter output would also be higher. The emitter follows its base and is "in phase" with it. If the base voltage were to vary up and down, the emitter would simply follow it. So the output voltage, which is at the emitter, is in phase (not out of phase) with its base voltage. It simply follows it. I expect this isn't confusing. But since you are discussing everything except the common-emitter situation, perhaps this is worth discussing here.

## Common Base

The common base configuration isn't used often "at low frequencies, where stray capacitance doesn't 'kick in' yet." It is more commonly used at RF. But when it is used at low frequencies, this is usually in circumstances such as a cascode, or as a low-impedance input where that is appropriate (current signaling instead of voltage signaling). I'm not going to discuss every possible way of looking at it. But here's a simplified version of one I've discussed elsewhere:

[Schematic: a sink/source output driver built around a common base stage]

This is a sink/source output driver that converts an I/O pin's output voltage (which is assumed to be based upon a 5 V power supply rail and drives IN) into a 150 V output capable of sinking and sourcing current. The output does invert the input signal. But it does so using a common base BJT ($Q_3$) that does things "in phase" with the input. (It's not a crafted design.
It has a number of problems that would require additional components in any realistic use. Its purpose is to stay close to the minimum requirements for illustration purposes.)

Take a close look. When the I/O pin goes LOW, this pulls down on $R_2$ and $R_3$. Pulling down on $R_3$ turns off $Q_2$, of course. But it creates an emitter current in $Q_3$, via $R_2$. This current is passed along to the collector, which causes a voltage drop across $R_1$. So when the signal "pulls down" on the emitter, via $R_2$, this causes the collector to also be "driven downward." Note here that the input signal moved "downward" and that the collector of $Q_3$ also goes "downward" in response. This is "in phase" and not "out of phase."

With this emitter current being translated into collector current via $Q_3$, there is now current to feed the base of $Q_1$, which becomes active and pulls the output to its supply rail of 150 V. Similarly, when the I/O pin goes HIGH (or close to 5 V), then $Q_3$ is off (the base voltage and the emitter voltage are nearly the same) and $R_1$ pulls $Q_3$'s collector towards the 150 V rail, causing $Q_1$ to go off. But, of course, this also supplies base current into $Q_2$ (via $R_3$) and turns it on, pulling the output LOW.

## Above Common Base Example Explored

The above common base schematic doesn't have part values and I didn't show it "in action." I just discussed it a little. And perhaps I wasn't concrete enough to make the point well. So I'll expand that discussion by providing a simulation using LTspice and then I'll point to the generated output results to clarify my earlier points.

First, the schematic I used for simulation purposes. (I'm using BJTs with a high breakdown voltage.) Note that I've added a small-valued resistor as the "series resistance" of some MCU I/O pin. (It doesn't affect the results. But it adds a tiny bit of realism.)

That said, the above schematic again is NOT a practical one. It is pretty obvious that I didn't do much thinking about the resistor values, for example. And that's only the beginning of its problems. So please don't imagine that this is useful. Its only purpose is for illustration.

Let's now look at the emitter voltage and the collector voltage of $Q_3$:

It is obvious, on its face, that these two signals (the emitter voltage and the collector voltage of $Q_3$) are in phase with each other. It would be difficult to argue otherwise. In case you are curious, let's also take a look at the input voltage and the output voltage of the entire circuit:

You can see that the output inverts the sense of the input. But this is the entire circuit, which just happens to use a common base BJT within it. The common base BJT itself? In the earlier graphic you can see that the input signal at its emitter is "in phase" with the output signal at its collector. No question of it.

## Summary

These are just two examples to make the case. You can compose many others. The results will be similar.

You need to be flexible in how you see things. In the common-emitter case, for example, where you seem to already understand that the signal is inverted, there is another way to "see it." When the signal at the base of a CE amplifier is pulled up, this also pulls up on the emitter. The emitter follows the base. So does that mean the CE arrangement is really just an "emitter follower?" No. Not really.
Yes, the emitter does follow the base (that is always the case when you apply a signal to the base.) But here, the output signal is taken from the collector. And when the emitter follows the base (up or down), this generates an emitter current that is "in phase" with the base signal. However, since this current is then applied to the collector load resistor and the output is taken directly from the collector, it follows that the collector load resistor drops more voltage with an increasing emitter current. So the collector is 180° out of phase, as the increasing emitter current causes a decreasing collector voltage because of the increasing collector resistor voltage drop.

In the common base case, the signal is applied to the emitter and the collector is the output. I've provided a sample schematic to show what happens where. Pulling down on the emitter (via a resistor) causes the emitter current to increase and thereby causes the collector voltage to decrease (just as in the CE arrangement.) But here, since the input signal goes downward to create an increasing voltage drop across the collector load, causing the collector voltage itself to go downward too, this is a case where the collector voltage "follows" the emitter voltage. So it's "in phase."

In the common collector case, the emitter simply follows the base in a still more obvious way. Clearly, in this case the signal at the base is replicated at the emitter, but with increased current compliance. But again, "in phase."

Note that even the CE case has an "emitter follower" perspective. The point here is that the BJT is just a BJT. It does what it always does. The reason for the different "perspectives" is mostly so that you can pick out the primary (1st order) behavior that's important. In reality? The BJT has no clue by itself. It's just a BJT in a circuit. It has no clue what's going on. It just "behaves" like a BJT. It doesn't know anything about where the "common" lead is. So you can take a circuit described as common-emitter and look at it from the perspective of common-collector (or emitter follower) for a moment, examining the emitter behavior before looking at the collector behavior. It's all a matter of perspective.

Neil, I see, wrote a comment to you that makes sense to me. He wrote:

Get a transistor, power supply, resistors, DMM and find out what happens.

How true that is. This leads me to a bit of a long discourse that most are going to want to ignore. But here it is, anyway:

## tl;dr Postscript

I've always had great difficulty in just remembering arbitrary rules. It was hard for me to just memorize mathematical conclusions, for example. I had to be able to see how they worked in order to retain them well. Not just be told that they worked. I don't learn by parroting back what people say. I learn by thinking about what they say and making sense of it in my own mind, where possible.

It's one of the reasons I strongly supported the International Astronomical Union's decision in 2006 to reclassify Pluto as a dwarf planet. The trigger of this change was Eris, discovered in 2005, which was 25% more massive than Pluto. But the real problem had been around for a long time before. Scientists didn't actually have a widely accepted or clear definition of what a planet actually is or should be. And the prior 15 years' improvements in solar system models, plus the discovery of Eris, made this lack of clarity manifest.
Stern & Levison 2002, Mohanty & Jayawardhana 2006, and Basri & Brown 2006 provided theoretical and empirical results which let us take note that nature itself provided a clear separation of nearly six orders of magnitude between the other eight planets in our system and Pluto. Pluto really belonged in a class that included Eris and did not include Mercury or Jupiter. When nature shows you a distinction like that, you don't hold onto old, muddled ideas which confuse rather than enlighten.

There's another great story to be found in Jagdish Mehra's biography of Richard Feynman, "The Beat of a Different Drum." In the summer the Feynmans would take their vacations in the Catskill mountains. There would be a large group of people there, but the fathers would all go back to New York to work during the week and only come back again over the weekend. 'On weekends, when my father came,' recalled Richard, 'he would take me for walks in the woods. When the other mothers saw this, they thought it was wonderful and that the other fathers should take their sons for walks. They tried to work on them but they did not get anywhere, at first. They wanted my father to take all the kids, but he didn't want to because he had a special relationship with me. So it ended up that the other fathers had to take their children for walks the next weekend.

'The next Monday, when the fathers were back at work, we kids were playing in a field. One kid said to me, "See that bird? What kind of bird is that?" I said, "I haven't the slightest idea what kind of bird it is." He says, "It's a brown-throated thrush. Your father does not teach you anything!"

'But it was the opposite. He had already taught me: "See that bird? It's a Spencer's warbler." (I knew he didn't know the real name.) "Well, in Italian it's Chutto Lapittida. In Portuguese, it is Bom da Peida. In Chinese, it's Chung-long-tah, and in Japanese it is Katano Tekeda. You can know the name of that bird in all the languages in the world, but when you are finished, you'll know absolutely nothing whatever about the world. You'll know about the humans in different places, and what they call the bird. So let's look at the bird and see what it is doing -- that's what counts." I learned very early from my father the difference between knowing the name of something and knowing something.'

Feynman explained, 'My father understood that knowledge was different from the names of things. The names of things are only a convention that human beings use to discuss things, and of course that is important. But when he would tell me about looking at the birds, it was not just to look at them but to see what they were doing. As an example, he said, "Look, see the birds walking around there. They seem to be pecking their feathers all the time. Why do you think they do that?" And I said, "Well, I don't know." I was a kid of ten or eleven. I said, "Maybe their feathers get ruffled when they are flying." I made an attempt at an explanation. He then said, "If that were the case, they would peck more when they just landed after they flew. And after they got straightened out, walking around, they wouldn't peck so much. So let's see, watch those that land and then see how long they go on pecking and whether or not they peck in their feathers at the same rate." After a while we discovered that indeed they did. So it was not due to a need to straighten out their feathers just after flying. You see, he had made a little experiment, learning how to observe and discuss.'
The point of all this is: "Don't memorize and parrot conclusions others reached." Listen, but then observe and learn and see how things work. Propose for yourself new ways to think about why those things act that way and test out those new ways, to see if they continue to work well. Learn to think for yourself. Anyone can parrot. Even a 3-year-old can. Neil's advice is good. Sit down, do, observe, think. Don't take rules handed down from on high as gospel. Work out the details on your own. Think about them. Make your own sense of the world.

A BJT is just a BJT. We humans like to imagine that it is in a CE, CB, or CC arrangement. It helps us to orient ourselves when examining a circuit. But the BJT itself? It just sits there and responds. It doesn't know what surrounds it. It just is. The terms are there for you and me and for others to help us communicate. But don't get mired in words. Learn behavior through observation. Make up your own ideas about what you observe. You can later learn to associate behaviors with the words people assign to them.
https://findscholars.unh.edu/display/publication123305
# Similarity Degree of a Class of C$^*$-Algebras

### Abstract

• Suppose that $\mathcal M$ is a countably decomposable type II$_1$ von Neumann algebra and $\mathcal A$ is a separable, non-nuclear, unital C$^*$-algebra. We show that, if $\mathcal M$ has Property $\Gamma$, then the similarity degree of $\mathcal M$ is less than or equal to $5$. If $\mathcal A$ has Property c$^*$-$\Gamma$, then the similarity degree of $\mathcal A$ is equal to $3$. In particular, the similarity degree of a $\mathcal Z$-stable, separable, non-nuclear, unital C$^*$-algebra is equal to $3$.
• Qian, Wenhua
• Shen, Junhao
• January 2016

### Keywords

• Property Gamma
• Similarity degree
• Similarity problem
https://fplab.bitbucket.io/posts/2010-05-13-using-reflection-to-impro.html
# Nottingham FP Lab Blog

## Using reflection to improve automatization

by gallais on May 13, 2010. Tagged as: Lunches.

Last week I talked about a solver for propositional logic that uses reflection. This work is an opportunity to present how one might develop a solver using reflection.

### Reflection

The purpose of reflection is to be able to manipulate terms of the language inside the language itself. It allows you to design certified solvers, whereas the use of a MetaLanguage (Ltac for example) doesn't guarantee anything. Since AIM XI, the latest version of Agda has a couple of new features. One of them is the possibility for the user to have access to the current goal [1]. From now on, you can use:

• A datatype Term that represents the terms in Agda
• A command quoteGoal t in e, which has the typing rule: e[t := quoted T] : T ⊢ quoteGoal t in e : T (inside e, the variable t is bound to the Term representation of the goal type T)
• A command quote which gives you the internal representation of an identifier

A solver will be designed in three steps. Let's say that the type MyType will represent the set of goals that you want to deal with and that MyTerm will be the representation of the inhabitants of MyType. We need to:

• Add a proper quoting function taking a Term and outputting a MyType element (preferably a non-provable one if the Term does not have the right shape)
• Design the solver taking a MyType term and outputting a MyTerm element
• Give the semantics of our datatypes and prove the soundness of our solver

### A solver for propositional logic

Proving a formula of propositional logic is the same (thanks to the Curry–Howard isomorphism) as finding a lambda term which is an inhabitant of the corresponding type. Our work is based on the (said to be "structural" but not in Agda's sense) deduction rules presented in a paper by Roy Dyckhoff and Sara Negri [2].

#### Implementation

The "MyType" datatype:

data Type : ℕ → Set where
  atom : ∀ {n} → Fin n → Type n
  ⊥ : ∀ {n} → Type n
  _∩_ _⊃_ _∪_ : ∀ {m} → Type m → Type m → Type m

The "MyTerm" datatype is more verbose but pretty straightforward so I won't include it here. It contains all the basic constructors for this simply-typed lambda calculus with sum and product types (var, lam, app, inj1, inj2, case, proj1, proj2, and). The only tricky thing is the lift function that lifts all the free variables of a given term, because it has to deal with modifications of the environment when going under a lambda.

#### Interface

The issue of partiality (the formula may not be provable) is solved by using dependent types: the solver requires an argument that will either have the type ⊤ if the proposition is provable (the placeholder tt is then inferred by Agda) or have the same type as the goal if it is not provable.

Example of the use of the solver:

Ex : ∀ {A B C D : Set} → ((A → B) × (C → A)) → (A ⊎ C) → B × (((A → D) ⊎ D) → D)
Ex {A} {B} {C} {D} = quoteGoal t in solve 4 t (A ∷ B ∷ C ∷ D ∷ []) _

### References

[1] See Agda/test/succeed/Reflection.agda and Agda/doc/release-notes/2-2-8.txt

[2] Roy Dyckhoff and Sara Negri, Admissibility of Structural Rules for Contraction-Free Systems of Intuitionistic Logic, http://www.jstor.org/stable/2695061

You can get the source code from the following darcs repository: darcs get http://patch-tag.com/r/gallais/agda
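The post's actual solver proves goals intuitionistically via the Dyckhoff–Negri rules, which is too much for a short snippet. Still, the general "represent formulas as data, then run a solver over them" pipeline can be sketched outside Agda. Below is a purely illustrative Python analogue of the solver step (the formula encoding and function names are my own): it checks classical validity by truth tables, which is only a necessary condition for intuitionistic provability.

```python
from itertools import product

# Formulas as tuples: ('atom', i), ('bot',), or (op, left, right)
# with op in {'and', 'or', 'imp'}; a stand-in for the post's MyType.
def eval_formula(f, env):
    tag = f[0]
    if tag == 'atom':
        return env[f[1]]
    if tag == 'bot':
        return False
    left, right = eval_formula(f[1], env), eval_formula(f[2], env)
    return {'and': left and right,
            'or': left or right,
            'imp': (not left) or right}[tag]

def classically_valid(f, n_atoms):
    """Check f under all 2^n truth assignments."""
    return all(eval_formula(f, env)
               for env in product([False, True], repeat=n_atoms))

# (A and B) imp A is valid:
f = ('imp', ('and', ('atom', 0), ('atom', 1)), ('atom', 0))
print(classically_valid(f, 2))  # True
```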
http://math.stackexchange.com/questions/565977/prove-that-this-sequence-diverges-to-infinity
# Prove that this sequence diverges to infinity.

$$\lim_{n\to \infty} \left(1 +\frac{1}{n}\right)^{n^2} = \infty$$

I don't know how to tackle this one. Knowing that it diverges to infinity and thus does not have an upper bound, should I try to find an unbounded subsequence, and if so, how? Is that sufficient to show that $\infty$ is the limit? Any help would be appreciated.

- "Converges to infinity" is quite the oxymoron. – Ron Gordon Nov 13 '13 at 22:06
- English is not my first language and I actually couldn't think of an appropriate way to put it. What's the right way? – KitKat Nov 13 '13 at 22:09
- You could say diverges to infinity. For that matter, you could say converges to infinity in the extended reals. Or you could be very informal and just say that it blows up. – Brian M. Scott Nov 13 '13 at 22:10
- "Diverges." Sorry for the sarcasm. – Ron Gordon Nov 13 '13 at 22:10
- Thanks. No problem, didn't even catch it before you mentioned it. – KitKat Nov 13 '13 at 22:11

Hint: $$\left(1+\frac{1}{n}\right)^{n^2}\geq 1+n^2\left(\frac{1}{n}\right)$$

- Can't believe I didn't see Bernoulli coming. Thanks! – KitKat Nov 13 '13 at 22:17

$$\lim_{n\to \infty} a_n = + \infty$$ A sequence diverges to $+ \infty$ if $\forall k \in \mathbb{R}$, $\exists N_k \gt 0$ such that $a_n \gt k$, $\forall n > N_k$. Take any $k \in \mathbb{R}$. Using (from above) $$\left(1+\frac{1}{n}\right)^{n^2}\geq 1+n^2\left(\frac{1}{n}\right) = 1 + n > n,$$ we have $a_n > k$ whenever $n > k$. Take $N_k = k$ and you are done.

- How does this answer the question? – Lord Soth Nov 13 '13 at 22:23
- Show that $\forall k \in \mathbb{R}$ $\exists N_k > 0$ such that $n > k$ $\forall n > N_k$. Take any $k \in \mathbb{R}$. Then if $N_k = n$ you are done. – Zhoe Nov 13 '13 at 22:31
- Great, can you then show how it is done for this particular problem? – Lord Soth Nov 13 '13 at 22:33
- Not 100% certain of correctness, but that's how I'd approach it. – Zhoe Nov 13 '13 at 23:21

An approach would be using the fact that: $$\lim_{n \to \infty} \left(1+\frac{1}{n}\right)^n = e$$

- How would this approach work ? – Amr Nov 13 '13 at 22:12
- @user2896626 this lacks rigour. – Amr Nov 13 '13 at 22:14
- @user2896626 As a counterexample to the fact that you seem to assume: Set $b_n=1+\frac{1}{n}$. Then $b_n$ converges to $1$. However: $e=\lim_{n\rightarrow \infty} (1+\frac{1}{n})^n=\lim_{n\rightarrow \infty} b_n^n\not=\lim_{n\rightarrow\infty} B^n=\lim_{n\rightarrow\infty} 1^n =1$ – Amr Nov 13 '13 at 22:22
- @user2896626 The fact that $1^n$ results in an indeterminate form does not mean that its limit does not exist. In fact, $\lim_{n\rightarrow\infty} 1^n = 1$. – Lord Soth Nov 13 '13 at 23:04
- While I agree with @Amr 's objection that, in general, if $b_n\rightarrow B$, this does not guarantee that $\lim_{n\rightarrow \infty} b_n^n = \lim_{n\rightarrow \infty} B^n$, in this particular case, it can be made to work. Namely, since $(1+\frac{1}{n})^n\rightarrow e$, for large enough $n$, $(1+\frac{1}{n})^n > 2$. Then, for large enough $n$, $(1+\frac{1}{n})^{n^2} = [(1+\frac{1}{n})^n]^n > 2^n$, so it diverges. – Jason DeVito Nov 14 '13 at 0:19
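(Added for illustration, not part of the original thread.) A quick numeric check of why the limit is $\infty$: taking logarithms, $n^2\log(1+\frac{1}{n}) = n - \frac{1}{2} + O(\frac{1}{n})$, so the sequence grows roughly like $e^{n-1/2}$. A few lines of Python using only the standard library:

```python
import math

# log of a_n = (1 + 1/n)^(n^2); log1p(x) = log(1 + x) avoids rounding error.
for n in [10, 100, 1000, 10000]:
    log_a = n**2 * math.log1p(1.0 / n)
    print(n, log_a)   # close to n - 0.5 each time, so a_n blows up
```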
https://mathematica.stackexchange.com/questions/159031/generate-random-matrix-where-the-entries-in-each-column-are-drawn-from-a-differe
# Generate random matrix where the entries in each column are drawn from a different range I know that you can generate an $m\times n$ matrix of random numbers by RandomReal[range, {m, n}], where e.g. range = {0, 1}. Is there a way to generate an $m\times n$ matrix of random numbers and have each column entry be drawn from a different range? My question is, if there is something analogous to RandomReal[{range1,range2,...,rangen},{m,n}] (which obviously does not evaluate because it is not supported). My current solution to this problem is using Map; i.e. Transpose[ Map[ RandomReal[#,m]&, {range1,range2,...,rangen} ] ] where m is the desired number of $n$-tuples of random numbers from $n$ different ranges that I need. Is there a better alternative to this? The simplest way is to use UniformDistribution[] in RandomVariate[]: BlockRandom[SeedRandom[42]; (* for reproducibility *) RandomVariate[UniformDistribution[{{3, 4}, {5, 7}}], 4]] {{3.42591, 6.11193}, {3.39102, 5.57834}, {3.34707, 5.5937}, {3.45374, 5.41282}} Alternatively, you can use RescalingTransform[] on the results of RandomReal[]: scaledRandomReal[ranges_?MatrixQ, n_Integer] := With[{m = Length[ranges]}, RescalingTransform[ConstantArray[{0, 1}, m], ranges][RandomReal[1, {n, m}]]] BlockRandom[SeedRandom[42]; scaledRandomReal[{{3, 4}, {5, 7}}, 4]] {{3.42591, 5.78205}, {3.34707, 5.90748}, {3.55596, 5.57834}, {3.29685, 5.41282}} • Are there advantages to the first form (RandomVariate[UniformDistribution[...) over Transpose[RandomReal[#,4]&/@{{3,4},{5,7}}]? – orome Nov 1 '17 at 12:55 • Yes, that works too. I just didn't want to use slots. – J. M. will be back soon Nov 1 '17 at 13:14 • I think I'll use UniformDistribution because it's easier to wrap my head around it; RescalingTransform is really interesting and it's kind of embarrassing not knowing anything about it even though it's a 10 years old feature; +1 for the people at Wolfram – user42582 Nov 2 '17 at 8:08
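For readers working outside Mathematica: the same per-column trick exists in, for example, NumPy, where the low and high arguments broadcast against the last axis. A small sketch (mine, not from the thread):

```python
import numpy as np

rng = np.random.default_rng(42)

# Column 0 is drawn from [3, 4), column 1 from [5, 7):
m = rng.uniform(low=[3, 5], high=[4, 7], size=(4, 2))
print(m)
```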
https://opendsa-server.cs.vt.edu/ODSA/Books/CS3/html/Diskdrive.html
# 9.3. Disk Drives

## 9.3.1. Disk Drives

A programmer typically views a random access file stored on disk as a contiguous series of bytes, with those bytes possibly combining to form data records. This is called the logical file. The physical file actually stored on disk is usually not a contiguous series of bytes. It could well be in pieces spread all over the disk. The file manager, a part of the operating system, is responsible for taking requests for data from a logical file and mapping those requests to the physical location of the data on disk. Likewise, when writing to a particular logical byte position with respect to the beginning of the file, this position must be converted by the file manager into the corresponding physical location on the disk. To gain some appreciation for the approximate time costs for these operations, you need to understand the physical structure and basic workings of a disk drive.

Disk drives are often referred to as direct access storage devices. This means that it takes roughly equal time to access any record in the file. This is in contrast to sequential access storage devices such as tape drives, which require the tape reader to process data from the beginning of the tape until the desired position has been reached. As you will see, the disk drive is only approximately direct access: At any given time, some records are more quickly accessible than others.

### 9.3.1.1. Disk Drive Architecture

A hard disk drive is composed of one or more round platters, stacked one on top of another and attached to a central spindle. Platters spin continuously at a constant rate. Each usable surface of each platter is assigned a read/write head or I/O head through which data are read or written, somewhat like the arrangement of a phonograph player's arm "reading" sound from a phonograph record. Unlike a phonograph needle, the disk read/write head does not actually touch the surface of a hard disk. Instead, it remains slightly above the surface, and any contact during normal operation would damage the disk. This distance is very small, much smaller than the height of a dust particle. It can be likened to a 5000-kilometer airplane trip across the United States, with the plane flying at a height of one meter!

A hard disk drive typically has several platters and several read/write heads, as shown in Figure 9.3.1 (a). Each head is attached to an arm, which connects to the boom [1]. The boom moves all of the heads in or out together. When the heads are in some position over the platters, there are data on each platter directly accessible to each head. The data on a single platter that are accessible to any one position of the head for that platter are collectively called a track, that is, all data on a platter that are a fixed distance from the spindle, as shown in Figure 9.3.1 (b). The collection of all tracks that are a fixed distance from the spindle is called a cylinder. Thus, a cylinder is all of the data that can be read when the arms are in a particular position.

Figure 9.3.1: Disk drive schematic. (a) A typical disk drive arranged as a stack of platters. (b) One track on a disk drive platter.

Each track is subdivided into sectors. Between each sector there are inter-sector gaps in which no data are stored. These gaps allow the read head to recognize the end of a sector. Note that each sector contains the same amount of data. Because the outer tracks have greater length, they contain fewer bits per inch than do the inner tracks.
Thus, about half of the potential storage space is wasted, because only the innermost tracks are stored at the highest possible data density. This arrangement is illustrated by Figure 9.3.2 (a). Disk drives today actually group tracks into zones such that the tracks in the innermost zone adjust their data density going out to maintain the same radial data density, then the tracks of the next zone reset the data density to make better use of their storage ability, and so on. This arrangement is shown in Figure 9.3.2 (b).

Figure 9.3.2: The organization of a disk platter. Dots indicate density of information. (a) Nominal arrangement of tracks showing decreasing data density when moving outward from the center of the disk. (b) A "zoned" arrangement with the sector size and density periodically reset in tracks further away from the center.

In contrast to the physical layout of a hard disk, a CD-ROM consists of a single spiral track. Bits of information along the track are equally spaced, so the information density is the same at both the outer and inner portions of the track. To keep the information flow at a constant rate along the spiral, the drive must speed up the rate of disk spin as the I/O head moves toward the center of the disk. This makes for a more complicated and slower mechanism.

In general, it is desirable to keep all sectors for a file together on as few tracks as possible. This desire stems from two assumptions:

1. Seek time is slow (it is typically the most expensive part of an I/O operation), and
2. If one sector of the file is read, the next sector will probably soon be read.

Assumption (2) is called locality of reference, a concept that comes up frequently in computer applications.

Contiguous sectors are often grouped to form a cluster. A cluster is the smallest unit of allocation for a file, so all files are a multiple of the cluster size. The cluster size is determined by the operating system. The file manager keeps track of which clusters make up each file. In Microsoft Windows systems, there is a designated portion of the disk called the File Allocation Table, which stores information about which sectors belong to which file. In contrast, Unix does not use clusters. The smallest unit of file allocation and the smallest unit that can be read/written is a sector, which in Unix terminology is called a block. Unix maintains information about file organization in certain disk blocks called inodes.

A group of physically contiguous clusters from the same file is called an extent. Ideally, all clusters making up a file will be contiguous on the disk (i.e., the file will consist of one extent), so as to minimize seek time required to access different portions of the file. If the disk is nearly full when a file is created, there might not be an extent available that is large enough to hold the new file. Furthermore, if a file grows, there might not be free space physically adjacent. Thus, a file might consist of several extents widely spaced on the disk. The fuller the disk, and the more that files on the disk change, the worse this file fragmentation (and the resulting seek time) becomes. File fragmentation leads to a noticeable degradation in performance as additional seeks are required to access data.

Another type of problem arises when the file's logical record size does not match the sector size. If the sector size is not a multiple of the record size (or vice versa), records will not fit evenly within a sector.
For example, a sector might be 2048 bytes long, and a logical record 100 bytes. This leaves room to store 20 records with 48 bytes left over. Either the extra space is wasted, or else records are allowed to cross sector boundaries. If a record crosses a sector boundary, two disk accesses might be required to read it. If the space is left empty instead, such wasted space is called internal fragmentation.

A second example of internal fragmentation occurs at cluster boundaries. Files whose size is not an even multiple of the cluster size must waste some space at the end of the last cluster. The worst case will occur when file size modulo cluster size is one (for example, a file of 4097 bytes and a cluster of 4096 bytes). Thus, cluster size is a tradeoff between large files processed sequentially (where a large cluster size is desirable to minimize seeks) and small files (where small clusters are desirable to minimize wasted storage).

Every disk drive organization requires that some disk space be used to organize the sectors, clusters, and so forth. The layout of sectors within a track is illustrated by Figure 9.3.3. Typical information that must be stored on the disk itself includes the File Allocation Table, sector headers that contain address marks and information about the condition (whether usable or not) for each sector, and gaps between sectors. The sector header also contains error detection codes to help verify that the data have not been corrupted. This is why most disk drives have a "nominal" size that is greater than the actual amount of user data that can be stored on the drive. The difference is the amount of space required to organize the information on the disk. Even more space will be lost due to fragmentation.

Figure 9.3.3: An illustration of sector gaps within a track. Each sector begins with a sector header containing the sector address and an error detection code for the contents of that sector. The sector header is followed by a small intra-sector gap, followed in turn by the sector data. Each sector is separated from the next sector by a larger inter-sector gap.

### 9.3.1.2. Disk Access Costs

When a seek is required, it is usually the primary cost when accessing information on disk. This assumes of course that a seek is necessary. When reading a file in sequential order (if the sectors comprising the file are contiguous on disk), little seeking is necessary. However, when accessing a random disk sector, seek time becomes the dominant cost for the data access. While the actual seek time is highly variable, depending on the distance between the track where the I/O head currently is and the track where the head is moving to, we will consider only two numbers. One is the track-to-track cost, or the minimum time necessary to move from a track to an adjacent track. This is appropriate when you want to analyze access times for files that are well placed on the disk. The second number is the average seek time for a random access. These two numbers are often provided by disk manufacturers. A typical example is the Western Digital Caviar serial ATA drive. The manufacturer's specifications indicate that the track-to-track time is 2.0 ms and the average seek time is 9.0 ms. In 2008 a typical drive in this line might be 120GB in size. In 2011, that same line of drives had sizes of up to 2 or 3TB. In both years, the advertised track-to-track and average seek times were identical.

For many years, typical rotation speed for disk drives was 3600 rpm, or one rotation every 16.7 ms.
Most disk drives in 2011 had a rotation speed of 7200 rpm, or 8.3 ms per rotation. When reading a sector at random, you can expect that the disk will need to rotate halfway around to bring the desired sector under the I/O head, or 4.2 ms for a 7200-rpm disk drive. Once under the I/O head, a sector of data can be transferred as fast as that sector rotates under the head. If an entire track is to be read, then it will require one rotation (8.3 ms at 7200 rpm) to move the full track under the head. If only part of the track is to be read, then proportionately less time will be required. For example, if there are 16,000 sectors on the track and one sector is to be read, this will require a trivial amount of time (1/16,000 of a rotation). Example 9.3.1 Assume that an older disk drive has a total (nominal) capacity of 16.8GB spread among 10 platters, yielding 1.68GB/platter. Each platter contains 13,085 tracks and each track contains (after formatting) 256 sectors of 512 bytes/sector. Track-to-track seek time is 2.2 ms and average seek time for random access is 9.5 ms. Assume the operating system maintains a cluster size of 8 sectors per cluster (4KB), yielding 32 clusters per track. The disk rotation rate is 5400 rpm (11.1 ms per rotation). Based on this information we can estimate the cost for various file processing operations. How much time is required to read the track? On average, it will require half a rotation to bring the first sector of the track under the I/O head, and then one complete rotation to read the track. How long will it take to read a file of 1MB divided into 2048 sector-sized (512 byte) records? This file will be stored in 256 clusters, because each cluster holds 8 sectors. The answer to the question depends largely on how the file is stored on the disk, that is, whether it is all together or broken into multiple extents. We will calculate both cases to see how much difference this makes. If the file is stored so as to fill all of the sectors of eight adjacent tracks, then the cost to read the first sector will be the time to seek to the first track (assuming this requires a random seek), then a wait for the initial rotational delay, and then the time to read (which is the same as the time to rotate the disk again). This requires $9.5\mathrm{ms.} + 11.1\mathrm{ms.} \times 1.5 = 26.2 \mathrm{ms.}$ In this equation, 9.5ms. is the average seek time for a (random) track on the disk. 11.1ms. is the time for one rotation of a disk spinning at 5400RPM. Since we need to wait for rotational delay (one half rotation) and then read all of the contents of the track (one full rotation), we multiply 11.1ms. by 1.5. Thus, the total time to read a random track from the disk is 26.2ms. After reading the first track, we can then assume that the next seven tracks require only a track-to-track seek because they are adjacent. Therefore, each requires $2.2\mathrm{ms.} + 11.1\mathrm{ms.} \times 1.5 = 18.9 \mathrm{ms.}$ Here, 2.2ms. is the time to seek to an adjacent track. Again we must wait for rotational delay (one half rotation) followed by a full rotation to read the track, so we multiply the rotation time (11.1ms.) times 1.5 for the disk rotation. Thus, we get a total of 18.9ms. to read the data from an adjacent track. The total time required to read all 8 adjacent tracks is therefore $26.2 \mathrm{ms} + 7 \times 18.9 \mathrm{ms} = 158.5 \mathrm{ms}.$ In contrast, what would the time be if the file’s clusters are spread randomly across the disk? 
Then we must perform a seek for each cluster, followed by the time for rotational delay. Once the first sector of the cluster comes under the I/O head, very little time is needed to read the cluster because only 8/256 of the track needs to rotate under the head, for a total time of about 5.9 ms for latency and read time. Thus, the total time required is about $256 \times (9.5\mathrm{ms.} + 5.9\mathrm{ms.}) \approx 3942 \mathrm{ms}$ or close to 4 seconds. This is much longer than the time required when the file is all together on disk! That is, 256 times we must perform a seek to a random track (9.5ms.). Then we wait on average one half of a disk rotation, followed by reading the actual data, which requires a further 8/256 of a rotation, for a total of 5.9ms.

This example illustrates why it is important to keep disk files from becoming fragmented, and why so-called "disk defragmenters" can speed up file processing time. File fragmentation happens most commonly when the disk is nearly full and the file manager must search for free space whenever a file is created or changed.

### 9.3.1.3. Notes

[1] This arrangement, while typical, is not necessarily true for all disk drives. Nearly everything said here about the physical arrangement of disk drives represents a typical engineering compromise, not a fundamental design principle. There are many ways to design disk drives, and the engineering compromises change over time. In addition, most of the description given here for disk drives is a simplified version of the reality. But this is a useful working model to understand what is going on. To complicate matters further, Solid State Drives (SSD) work rather differently.
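As a quick sanity check on Example 9.3.1, here is a short sketch (my own addition, using only the example's stated figures) that recomputes the sequential and random-layout costs:

```python
# All times in ms; figures from Example 9.3.1.
rotation   = 60_000 / 5400        # ~11.1 ms per rotation at 5400 rpm
seek_rand  = 9.5                  # average (random) seek
seek_track = 2.2                  # track-to-track seek

first_track = seek_rand + 1.5 * rotation    # ~26.2 ms
adj_track   = seek_track + 1.5 * rotation   # ~18.9 ms
sequential  = first_track + 7 * adj_track   # ~158.5 ms for 8 adjacent tracks

# Random layout: 256 clusters, each needing a random seek, half a rotation
# of latency, then 8/256 of a rotation to read the cluster itself.
per_cluster = seek_rand + 0.5 * rotation + (8 / 256) * rotation  # ~15.4 ms
random_file = 256 * per_cluster                                  # ~3943 ms

print(f"sequential: {sequential:.1f} ms, random: {random_file:.0f} ms")
```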
https://indico.cern.ch/event/686555/contributions/2962550/
ICHEP2018 SEOUL Jul 4 – 11, 2018 COEX, SEOUL Asia/Seoul timezone Status and prospects of the AWAKE experiment Jul 6, 2018, 11:44 AM 23m 105 (COEX, Seoul) 105 COEX, Seoul Parallel Accelerator: Physics, Performance, and R&D for Future Facilities Speaker Mr Fearghus Keeble (University College London) Description AWAKE is a plasma wakefield acceleration experiment at CERN, using the $400~\mathrm{GeV}$ proton bunch of the SPS to drive an accelerating gradient in the GV m$^{-1}$ range. AWAKE aims to inject 15–20 MeV electrons into this plasma wakefield and accelerate them to GeV energies over 10 metres. An introduction to AWAKE and its physics will be presented, as well as an overview of the experimental apparatus and the most recent results. Longer term plans, including the future of the AWAKE facility and possible applications of the technology to HEP, will be discussed. Primary author Mr Fearghus Keeble (University College London)
https://global-sci.org/intro/article_detail/jcm/9265.html
Volume 13, Issue 3

High-Accuracy P-Stable Methods with Minimal Phase-Lag for $y''=f(t,y)$

J. Comp. Math., 13 (1995), pp. 232-242

Published online: 1995-06

• Abstract

In this paper, we develop a one-parameter family of P-stable sixth-order and eighth-order two-step methods with minimal phase-lag errors for the numerical integration of second-order periodic initial value problems: $$y''=f(t,y), \quad y(t_0)=y_0, \quad y'(t_0)=y'_0.$$ We determine the parameters so that the phase-lag (frequency distortion) of these methods is minimal. The resulting methods are P-stable methods with minimal phase-lag errors. The superiority of our present P-stable methods over the P-stable methods in [1--4] is shown by a comparative study of the phase-lag errors and is illustrated with numerical examples.
https://www.ssccglapex.com/hi/in-an-engineering-college-the-average-salary-of-all-engineering-graduates-from-mechanical-trade-is-rs-2-45-lacs-per-annum-and-that-of-the-engineering-graduates-from-electronics-trade-is-rs-3-56-lacs/
### In an engineering college the average salary of all engineering graduates from Mechanical trade is Rs. 2.45 lacs per annum and that of the engineering graduates from Electronics trade is Rs. 3.56 lacs per annum. The average salary of all Mechanical and Electronics graduates is Rs. 3.12 lacs per annum. Find the least number of Electronics graduates passing out from this institute.

A. 43
B. 59
C. 67
D. Cannot be determined

Let the number of Mechanical Engineering graduates be M and the number of Electronics Engineering graduates be E. Then,

$$\begin{array}{l}\Rightarrow 2.45M+3.56E=3.12(M+E)\\ \Rightarrow 2.45M+3.56E=3.12M+3.12E\\ \Rightarrow 0.44E=0.67M\\ \Rightarrow \frac{M}{E}=\frac{0.44}{0.67}=\frac{44}{67}\end{array}$$

Since the ratio 44 : 67 is in its simplest form, the least number of Electronics graduates = 67.
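A one-line cross-check of the ratio with exact arithmetic (illustrative, not part of the original solution):

```python
from fractions import Fraction

mech, elec, avg = Fraction('2.45'), Fraction('3.56'), Fraction('3.12')
ratio_M_to_E = (elec - avg) / (avg - mech)  # from 2.45M + 3.56E = 3.12(M + E)
print(ratio_M_to_E)  # 44/67, so the least number of Electronics graduates is 67
```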
https://socratic.org/questions/how-do-you-solve-the-following-system-4x-5y-1-2x-6y-18
# How do you solve the following system: 4x-5y= -1, -2x=6y+18 ?

Dec 31, 2015

The solution for the system of equations is $\textcolor{blue}{x=-\frac{96}{34},\ y=-\frac{35}{17}}$

#### Explanation:

$\textcolor{blue}{4x}-5y=-1$ ........ equation $1$

$-2x-6y=18$; multiplying by $2$:

$\textcolor{blue}{-4x}-12y=36$ ........ equation $2$

Solving by elimination, adding equations $1$ and $2$:

$\textcolor{blue}{\cancel{4x}}-5y=-1$

$\textcolor{blue}{\cancel{-4x}}-12y=36$

$-17y=35$

$\textcolor{blue}{y=\frac{35}{-17}}$

Finding $x$ from equation $2$:

$-2x-6y=18$

$-2x=18+6y$

$-2x=18+6\times\left(-\frac{35}{17}\right)$

$-2x=18-\frac{210}{17}$

$-2x=\frac{306}{17}-\frac{210}{17}$

$-2x=\frac{96}{17}$

$x=\frac{96}{17\times(-2)}$

$\textcolor{blue}{x=\frac{96}{-34}}$
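A quick numeric cross-check of the elimination above (my addition, not part of the original answer):

```python
import numpy as np

# 4x - 5y = -1 and -2x - 6y = 18, written as A @ [x, y] = b
A = np.array([[ 4.0, -5.0],
              [-2.0, -6.0]])
b = np.array([-1.0, 18.0])
x, y = np.linalg.solve(A, b)
print(x, y)  # about -2.8235 (= -96/34) and -2.0588 (= -35/17)
```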
https://www.gamedev.net/forums/topic/452032-yet-another-how-to-find-normal-of-a-triangle-thread/
# Yet another How to find Normal of a triangle thread!

My mind has been overheating for days... I've read numerous articles. Is there a readily available function that returns a normal vector, taking in 3 vertices?

void vertnormal(float v1x,float v1y,float v1z,float v2x,float v2y,float v2z,float v3x,float v3y,float v3z,float *nx,float *ny,float *nz)
{
//Here goes the code that does all the calculations, which I do not know! :<
*nx = valuex;
*ny = valuey;
*nz = valuez;
}

Could someone post such a function? Thanks

Three vertices form two vectors, right? The cross product of two vectors forms the normal. So,

vector 1 = <v2x-v1x, v2y-v1y, v2z-v1z>

And, vector 2 = <v2x-v3x, v2y-v3y, v2z-v3z>

Then, take the cross product

| i j k |
| v2x-v1x v2y-v1y v2z-v1z |
| v2x-v3x v2y-v3y v2z-v3z |

= ((v2y-v1y)*(v2z-v3z)-(v2z-v1z)*(v2y-v3y))i - ((v2x-v1x)*(v2z-v3z)-(v2z-v1z)*(v2x-v3x))j + ((v2x-v1x)*(v2y-v3y)-(v2y-v1y)*(v2x-v3x))k

So the x portion of the vector is: ((v2y-v1y)*(v2z-v3z)-(v2z-v1z)*(v2y-v3y))

The y portion is: -((v2x-v1x)*(v2z-v3z)-(v2z-v1z)*(v2x-v3x))

The z portion is: ((v2x-v1x)*(v2y-v3y)-(v2y-v1y)*(v2x-v3x))

Of course, the issue is that the vector can go two ways (positive and negative direction), so make sure you know which way your triangle is facing (as in, which way is 'out'). Also, if you want to 'normalize' (make of size 1) the vector, divide each part of the vector by the size of the vector, which is: sqrt(x*x + y*y + z*z)

I do not do OpenGL or DirectX programming... I just need a plain C function. Moreover, I am not so good with vectors; I do not know what a cross product is. Hope someone helps.

Quote: (paraphrased from visage)

x = (v2y-v1y)*(v2z-v3z)-(v2z-v1z)*(v2y-v3y);
y = -(v2x-v1x)*(v2z-v3z)+(v2z-v1z)*(v2x-v3x);
z = (v2x-v1x)*(v2y-v3y)-(v2y-v1y)*(v2x-v3x);
r = sqrt(x*x + y*y + z*z);
x /= r;
y /= r;
z /= r;

Quote: Original post by OmniscientN00b: Hope someone helps.

Somebody did.

Quote: Original post by erissian: (the code above)

So those are the components of the normal vector, right? Excellent, I will try those.
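For reference, the same computation in a compact NumPy sketch (mine, not from the thread; it uses the edge vectors v2-v1 and v3-v1, so the sign/winding convention differs from visage's v2-v1 and v2-v3 by a factor of -1):

```python
import numpy as np

def triangle_normal(v1, v2, v3):
    """Unit normal of triangle (v1, v2, v3); direction follows the
    right-hand rule for the winding order v1 -> v2 -> v3."""
    v1, v2, v3 = (np.asarray(v, dtype=float) for v in (v1, v2, v3))
    n = np.cross(v2 - v1, v3 - v1)   # cross product of two edge vectors
    length = np.linalg.norm(n)
    if length == 0.0:
        raise ValueError("degenerate triangle")
    return n / length

# A triangle in the xy-plane, wound counter-clockwise, has normal +z:
print(triangle_normal([0, 0, 0], [1, 0, 0], [0, 1, 0]))  # [0. 0. 1.]
```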
https://math.stackexchange.com/questions/1991212/partial-sum-of-the-harmonic-series-between-two-consecutive-fibonacci-numbers
# Partial sum of the harmonic series between two consecutive Fibonacci numbers

I was playing around with some calculations and I noticed that the partial sum of the harmonic series: $$s_n=\sum_{k=F_n}^{F_{n+1}}\frac 1 k$$ where $F_n$ and $F_{n+1}$ are two consecutive Fibonacci numbers, has some interesting properties. It is close to $\frac 1 2$ for small values of $n$ and it seems to converge to a value less than $0.5$ for large $n$. This is what I've got so far: $$\lim_{n\to\infty} s_n\approx 0.481212$$ I googled a bit to see if there are theorems or resources for this, and found nothing. I suspect that the sequence might converge to a smaller number and that I may have reached some computational limitations which led to the conclusion that the limit is close to $\frac 1 2$. So my questions are:

1. Can we show that the sequence converges to a non-zero value?
2. In case the first answer is yes, can the limit be expressed in a closed form?

In terms of the harmonic numbers $H_n$, your sequence is $$s_n = H_{F_{n+1}} - H_{F_n-1}$$ As $n \to \infty$ it's known that $H_n = \log n + \gamma + o(1)$, so \begin{align} s_n &= \log F_{n+1} + \gamma + o(1) - \log(F_n-1) - \gamma - o(1) \\ &= \log F_{n+1} - \log(F_n-1) + o(1). \end{align} Now $F_m \sim \varphi^m/\sqrt{5}$, where $\varphi$ is the golden ratio, so using the fact that $a \sim b \implies \log a = \log b + o(1)$ we have \begin{align} s_n &= \log(\varphi^{n+1}/\sqrt{5}) - \log(\varphi^{n}/\sqrt{5}) + o(1) \\ &= \log \varphi + o(1). \end{align} In other words, $$\lim_{n \to \infty} \sum_{k=F_n}^{F_{n+1}} \frac{1}{k} = \log \varphi.$$

• It might be better to use $s_n\approx ...$ instead of $s_n=...$ – polfosol Oct 30 '16 at 8:14
• @polfosol, no I disagree. Everything in my answer is rigorous following the definitions of $\sim$ and little-o notation. – Antonio Vargas Oct 30 '16 at 8:14
• @polfosol, see, for example, here for the definitions. – Antonio Vargas Oct 30 '16 at 8:15
• I didn't notice that. Fair enough – polfosol Oct 30 '16 at 8:15
• I will add a comment that ought to have been done: thanks for your very precise and nice answer. – Jean Marie Oct 30 '16 at 8:35

The Fibonacci numbers increase as $\phi^n$ (where $\phi$ is the golden mean $\frac{1+\sqrt{5}}{2}$), and harmonic numbers increase as $\log n$ (i.e., the natural log). Therefore, the difference between the harmonic numbers for successive Fibonacci numbers will approach $\log\phi \approx 0.481211825...$

To expand a bit, the Fibonacci numbers can be expressed as $\frac{\phi^n - (1-\phi)^n}{\sqrt{5}}$. (Try it! The fact that the equation $f(x+2) - f(x+1) - f(x) = 0$ requires a sum of powers of $\phi$ and $1-\phi$ follows from the fact that these are the solutions to the equation $x^2 - x - 1 = 0$, and the coefficients come from f(1) = f(2) = 1.) The second term vanishes, so large Fibonacci numbers can be approximated quite well as $\frac{\phi^n}{\sqrt{5}}$.

Since one definition of the natural logarithm is the integral from 1 to the parameter of the function $t^{-1}$, the harmonic numbers can be approximated as the natural logarithm, and in fact the difference approaches a constant (called $\gamma$, about 0.577). If you're not familiar with integrals, the fact that the harmonic numbers increase as a logarithm is suggested by Oresme's proof that the harmonic series diverges...
$$1 + \frac{1}{2} + \frac{1}{3} + \frac{1}{4} + \frac{1}{5} + \frac{1}{6} + \frac{1}{7} + \frac{1}{8} + \frac{1}{9} + \cdots > 1 + \frac{1}{2} + \frac{1}{4} + \frac{1}{4} + \frac{1}{8} + \frac{1}{8} + \frac{1}{8} + \frac{1}{8} + \frac{1}{16} + \cdots$$ ...and it just so happens that that logarithm is the natural logarithm. So if you accept that for very large n, the harmonic numbers approach $\log n$, and that the Fibonacci numbers approach $\frac{\phi^n}{\sqrt{5}}$, then you get for two successive... $$\log\left(\frac{\phi^{n+1}}{\sqrt{5}}\right) - \log\left(\frac{\phi^n}{\sqrt{5}}\right) = \log\left(\frac{\phi^{n+1}}{\phi^n}\right) = \log\phi$$ ($\log x - \log y = \log \frac{x}{y}$ is a natural inverse of $\frac{e^x}{e^y} = e^{x-y}$.) • I will mark this as accepted if you add some more details ;) – polfosol Oct 30 '16 at 8:10 • ...har-r-r-rumph. – user361424 Oct 30 '16 at 8:36 • @user361424 very nice answer, a compliment that ought to have been done by the proposer before asking for "more details" ! – Jean Marie Oct 30 '16 at 8:42 • @JeanMarie It seems I am stuck with this – polfosol Oct 30 '16 at 8:44 • I think this might be the inverse fastest gun in the west problem... (this answer was originally just the first paragraph, and without the parentheticals explaining the golden mean and clarifying the natural log). – user361424 Oct 30 '16 at 8:45
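(Added for illustration; the indexing of $F_n$ below is a convention choice, not from the original posts.) A few lines of Python reproduce the question's numerics and show the sums settling toward $\log\varphi \approx 0.481212$:

```python
import math

phi = (1 + math.sqrt(5)) / 2
F = [1, 1]
while len(F) < 30:
    F.append(F[-1] + F[-2])

for n in (10, 20, 28):
    s = sum(1.0 / k for k in range(F[n], F[n + 1] + 1))  # inclusive bounds
    print(n, round(s, 6), round(math.log(phi), 6))
```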
https://www.toppr.com/guides/chemistry/chemical-bonding-and-molecular-structure/methane/
# Methane

Methane is an organic compound and the first member of the alkane series. The alkanes are saturated hydrocarbons and occur abundantly in nature. Methane is the simplest member of this series. It is a gas at normal temperature and pressure and is a good source of energy as fuel. Methane is highly flammable, so one should be very cautious while handling methane containers/cylinders. Chemically, two types of atoms form methane: carbon and hydrogen. One carbon atom combines with four hydrogen atoms to form one molecule of methane. Therefore, the formula of methane is $$CH_{4}$$. Methane is produced both biologically and geologically in nature. Geologically, it occurs below the ground and the seafloor. Another source of methane is crude oil.

## Sources of Methane

Methane is abundantly available in nature. In biological systems, methane gas is produced in marshy/swampy lands by anaerobic bacterial decomposition of vegetable and biological wastes. Wetlands are therefore major sources of methane, and because of this, methane is also known as marsh gas. Other sources include volcanoes, termites (methane produced during digestive processes), vents in the ocean floor, etc. Methane is the major constituent of liquefied natural gas (LNG), which contains 50 to 90% methane, and of the biogas produced in biogas plants and used as cooking fuel in rural areas. In nature, methane is produced through a biogenic process known as methanogenesis. The final step of methanogenesis is: $$CO_{2} + 4H_{2}\rightarrow CH_{4} + 2 H_{2}O$$ Chemically, methane can be produced by heating hydrogen gas and carbon monoxide gas at 573 K in the presence of a nickel catalyst. Fractional distillation of natural gas gives the pure form of methane.

### Uses of Methane

1. Methane is a greenhouse gas that affects ambient temperature. In liquefied form, it is useful as a fuel for rockets.
2. Methane has an advantage over kerosene as a rocket fuel because of its lower molecular weight and higher production of heat per unit mass.
3. It also produces small exhaust molecules, which increases the chance of re-use of the booster.
4. It is useful for heating ovens, homes, water heaters, chemical reactors, kilns, etc.
5. As the major constituent of natural gas, methane is used for the generation of electricity: water is boiled using methane as fuel, and the resulting steam runs turbines that generate the electricity. As compared to other hydrocarbons, methane is a clean fuel, as it produces less carbon dioxide for each unit of heat released.

### Physical Properties of Methane

1) Methane is the simplest hydrocarbon and is a colourless and odourless gas at room temperature and pressure. Methane is also known as marsh gas or methyl hydride. For storage in cylinders for usage and transportation, methane is mixed with an odorant, tert-butylthiol, as a safety measure so that any leakage can be detected. Under prolonged exposure to heat, these containers can explode and cause damage. Therefore, cylinders or containers containing methane should be kept in a dark and cool place.

2) Methane is formed by four covalent sigma bonds between carbon and hydrogen (C-H), and it has a tetrahedral molecular structure.
Selective oxidation: The purpose of the selective oxidation of methane is to produce methanol. The reaction occurs in the presence of catalysts, but the reaction of methane with oxygen is difficult to control: even with an insufficient amount of oxygen, methane ultimately reacts to give carbon dioxide and water as end products. To produce methanol in practice, natural gas (containing 50 to 87% methane) is reformed, the resulting synthesis gas is converted, and the product is distilled to obtain pure methanol.

Steam reforming: Steam reforming of methane is used when hydrogen has to be produced from hydrocarbons. Natural gas is used for this purpose, and the reaction that occurs is:

$$CH_{4}+H_{2}O\rightleftharpoons CO+3H_{2}$$

Halogenation: Methane reacts with halogens in the presence of ultraviolet light to form halomethanes. Under UV light, methane forms methyl radicals and the halogen forms halide radicals. The reaction between these radicals is so fast that, left uncontrolled, all the hydrogen atoms are replaced by halogen atoms. If the reaction is controlled, a series of products is formed: halomethane, dihalomethane, trihalomethane and, ultimately, tetrahalomethane. The general propagation steps of this radical chain are:

$$X\cdot +CH_{4}\rightarrow HX+CH_{3}\cdot$$

$$CH_{3}\cdot +X_{2}\rightarrow CH_{3}X+X\cdot$$
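As a concrete instance of the controlled reaction just described, with chlorine the successive substitution products are:

$$CH_{4}\xrightarrow{Cl_{2},\,h\nu}CH_{3}Cl\xrightarrow{Cl_{2},\,h\nu}CH_{2}Cl_{2}\xrightarrow{Cl_{2},\,h\nu}CHCl_{3}\xrightarrow{Cl_{2},\,h\nu}CCl_{4}$$

with one molecule of HCl released at each step.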
## FAQs on Methane

Q.1: What is methane?

Answer: Methane is the first member of the series of saturated straight-chain hydrocarbons, which are also known as paraffins. It is the major part of natural gas and is abundant in nature. Methane is a colourless, odourless gas at room temperature and pressure. It is soluble in organic solvents such as ethanol, methanol, benzene and toluene, but only slightly soluble in water. The chemical formula of methane is $$CH_{4}$$.

Q.2: How dangerous is methane?

Answer: Methane can affect the human body when inhaled in excess. A very high level of methane in a closed area lowers the level of oxygen, which causes suffocation, headache, dizziness, vomiting, loss of coordination and judgement, nausea and loss of consciousness. If the concentration of methane in air rises to roughly 5 to 15% by volume, the mixture becomes explosive. These types of explosions are frequent in coal mines and collieries; therefore, before anyone enters a mine, fresh air is passed through it to lower the concentration of methane.

Q.3: Does methane exist in our solar system apart from the Earth?

Answer: Yes. Methane occurs on other planets of our solar system and on some of their large moons. With regard to Mars, methane has been proposed as a rocket propellant for future Mars missions, since the gas can be extracted and synthesized on the planet for in-situ use. Methane can be produced by a non-biological process called serpentinization, so synthesizing methane on Mars could provide the fuel for the return to Earth.
2021-08-03 20:58:37
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 1, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.4140571653842926, "perplexity": 2536.9098368995265}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 20, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2021-31/segments/1627046154471.78/warc/CC-MAIN-20210803191307-20210803221307-00436.warc.gz"}
https://usgs-r.github.io/EGRETci/reference/runGroupsBoot.html
This function does the uncertainty analysis for determining the change between two groups of years. The process is virtually identical to the one used by runPairsBoot, which looks at the change between a pair of years.

runGroupsBoot(eList, groupResults, nBoot = 100, startSeed = 494817, blockLength = 200, jitterOn = FALSE, V = 0.2)

## Arguments

• eList: named list with at least the Daily, Sample, and INFO dataframes
• groupResults: data frame returned from runGroups
• nBoot: the maximum number of bootstrap replicates to be used, typically 100
• startSeed: setSeed value. Defaults to 494817. This is used to make the output repeatable.
• blockLength: block length in days; typically 200 is a good choice
• jitterOn: logical; if TRUE, adds "jitter" to the data in an attempt to avoid some numerical problems. Default = FALSE. See Details below.
• V: numeric; a multiplier for the addition of jitter to the data, default = 0.2

## Value

eBoot, a named list with bootOut, wordsOut, xConc, xFlux, pConc, pFlux values.

• bootOut is a data frame with the results of the bootstrap test.
• wordsOut is a character vector describing the results.
• xConc and xFlux are vectors of length iBoot, of the change in flow-normalized concentration and flow-normalized flux computed from each of the bootstrap replicates.
• pConc and pFlux are vectors of length iBoot, of the change in flow-normalized concentration or flow-normalized flux computed from each of the bootstrap replicates, expressed as % change.

## Details

In some situations numerical problems are encountered in the bootstrap process, resulting in highly unreasonable spikes in the confidence intervals. The use of "jitter" can often prevent these problems, but it should only be used when it is clearly needed. It adds a small amount of random "jitter" to the explanatory variables of the WRTDS model. The V parameter sets the scale of variation in the log discharge values: the standard deviation of the added jitter is V * the standard deviation of log Q. The default for V is 0.2. Larger values should generally be avoided, and smaller values may be sufficient.

## See also

runPairsBoot, runGroups

## Examples

library(EGRET)
eList <- Choptank_eList
if (FALSE) {
groupResults <- runGroups(eList,
                          group1firstYear = 1995, group1lastYear = 2004,
                          group2firstYear = 2005, group2lastYear = 2014,
                          windowSide = 7, wall = TRUE,
                          sample1EndDate = "2004-10-30",
                          paStart = 4, paLong = 2,
                          verbose = FALSE)
boot_group_out <- runGroupsBoot(eList, groupResults)
plotHistogramTrend(eList, boot_group_out, caseSetUp = NA)
}
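To make the jitter scheme in Details concrete, here is a minimal plain-R sketch of the idea; it illustrates the formula only and is not the package's internal code, and the log-discharge values below are made up:

set.seed(494817)
logQ <- rnorm(100, mean = 2, sd = 0.8)   # made-up stand-in for log discharge (log Q)
V <- 0.2                                 # jitter multiplier, as in the V argument
jitter <- rnorm(length(logQ), mean = 0, sd = V * sd(logQ))
logQ_jittered <- logQ + jitter           # sd of the added noise = V * sd(logQ)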
2022-01-21 07:37:05
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 1, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.35449278354644775, "perplexity": 3555.2312817515467}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.3, "absolute_threshold": 20, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2022-05/segments/1642320302740.94/warc/CC-MAIN-20220121071203-20220121101203-00067.warc.gz"}
https://socratic.org/questions/how-do-you-simplify-frac-5-1-4-1-8
# How do you simplify $\frac{5(1-4)}{1-8}$?

##### 2 Answers

Oct 17, 2017

The answer is $\frac{15}{7}$.

#### Explanation:

Simplify within the parentheses.

$\frac{5(-3)}{1-8}$

Simplify the denominator.

$\frac{5(-3)}{-7}$

Simplify the numerator.

$\frac{-15}{-7}$

Cancel the signs by dividing the numerator and the denominator by $-1$.

The answer is $\frac{15}{7}$.

Oct 17, 2017

Simplification.

#### Explanation:

You can subtract $4$ from $1$ to get $-3$:

$\frac{5(-3)}{1-8}$

Then you can subtract $8$ from $1$ to get $-7$:

$\frac{5(-3)}{-7}$

Then multiply $5$ and $-3$ to get $-15$:

$\frac{-15}{-7}$

Since both the numerator and the denominator have a negative sign, we can cancel it out:

$\frac{15}{7}$

There's your fractional answer. If you need a decimal answer, just do some long division, and your answer should be $2.\overline{142857}$.
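For a quick numerical check of the result, a one-line sketch in R:

5 * (1 - 4) / (1 - 8)   # 2.142857..., the decimal value of 15/7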
2020-01-17 19:23:38
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 20, "mathjax_inline_tex": 1, "mathjax_display_tex": 0, "mathjax_asciimath": 1, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.9857955574989319, "perplexity": 856.6770032064059}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": false}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2020-05/segments/1579250590107.3/warc/CC-MAIN-20200117180950-20200117204950-00420.warc.gz"}
https://www.acmicpc.net/problem/3325
Time limit: 1 second | Memory limit: 128 MB | Submissions: 1 | Accepted: 1 | Solvers: 1 | Acceptance ratio: 100.000%

## Problem

A secret intelligence agency (which is even too secret to mention its name here) controls agents around the world. From time to time the headquarters need to send out a message to all agents. For obvious reasons, the transmission must be as secure as possible. The agency's executives mistrust electronic communication and therefore transfer their messages by contact persons (in short: contacts). They organized agents and contacts into a large network; each contact is responsible for transporting information from exactly one agent to another, and only in this one direction. Nonetheless there might be more than one contact to transport information between two agents.

When the headquarters send out a message, their "message officer" uses some of his own contacts to transport it to a number of field agents. Those agents use their contacts to forward the message to other agents, and so on until it eventually reaches every single agent. However, in order to reduce risk, the number of times a message is transported by contacts should be minimized (i.e. no agent should receive the same message twice). Therefore an agent doesn't forward a message using all of his contacts but obeys a "transmission scheme" for this message. A transmission scheme contains information on how a message is to be forwarded by the agents.

Recently, the agency found out that some contacts misused confidential information. For this reason, they decided to split each message into two parts which are both useless if not read together. They now send out the two parts but without using the same contact twice. Therefore no contact will see the full message. Nonetheless it is important that every agent eventually receives both parts of the message. The agency now wonders how to create valid transmission schemes for each part that satisfy the above conditions.

Write a program that calculates valid transmission schemes for each part of the message, given the agency's network of agents and contacts. It might be the case that no such two schemes exist.

## Input

The first line of the input contains two integers N (2 ≤ N ≤ 2 000), the number of agents, and M (1 ≤ M ≤ 1 000 000), the number of contact persons. The message officer in the headquarters has the number 1, the other agents are numbered from 2 to N; contacts are numbered from 1 to M. The i-th of the next M lines contains two integers vi and wi ≠ vi, describing the fact that contact i knows how to deliver a message from agent vi to agent wi.

## Output

If no two valid transmission schemes exist, the output consists of a single line with the string NONE. Otherwise, the output consists of two lines. Each line describes a valid transmission scheme for one part of the message by giving the list of contacts used to transmit it; the first line for the first part, the second line for the second part. If there is more than one solution, output any of them.

## Sample Input 1

4 6
1 2
1 3
2 3
3 2
2 4
2 4

## Sample Output 1

1 3 5
2 4 6

## Hint

The first part is transmitted using the contacts 1, 3 and 5, i.e. from the headquarters to agent 2, from agent 2 to agent 3 and from agent 2 also to agent 4. The second part is transmitted using the contact persons 2, 4 and 6, i.e. from the headquarters to agent 3, from agent 3 to agent 2 and from agent 2 to agent 4.
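A candidate answer like the one in the hint can be checked mechanically. Below is an illustrative checker in R (the helper name is_valid_scheme is made up for this sketch, and this verifies proposed schemes rather than solving the problem): a scheme is valid if agent 1 receives nothing, every other agent receives the message exactly once, and every agent is reachable from agent 1 along the chosen contacts; the two schemes must additionally use disjoint contact sets.

# edges: M x 2 matrix; edges[i, ] = c(v_i, w_i) for contact i
# scheme: vector of contact indices claimed to form a valid scheme
is_valid_scheme <- function(edges, scheme, n) {
  used <- edges[scheme, , drop = FALSE]
  # every agent except agent 1 must receive the message exactly once
  indeg <- tabulate(used[, 2], nbins = n)
  if (indeg[1] != 0 || any(indeg[-1] != 1)) return(FALSE)
  # every agent must be reachable from agent 1 along the used contacts (BFS)
  reached <- rep(FALSE, n); reached[1] <- TRUE
  queue <- 1
  while (length(queue) > 0) {
    v <- queue[1]; queue <- queue[-1]
    for (w in used[used[, 1] == v, 2]) {
      if (!reached[w]) { reached[w] <- TRUE; queue <- c(queue, w) }
    }
  }
  all(reached)
}

# Check the sample: schemes {1,3,5} and {2,4,6} on the 4-agent network
edges <- matrix(c(1,2, 1,3, 2,3, 3,2, 2,4, 2,4), ncol = 2, byrow = TRUE)
s1 <- c(1, 3, 5); s2 <- c(2, 4, 6)
is_valid_scheme(edges, s1, 4) && is_valid_scheme(edges, s2, 4) &&
  length(intersect(s1, s2)) == 0   # TRUE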
2023-04-01 05:04:28
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 1, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.37783747911453247, "perplexity": 979.0556435050516}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2023-14/segments/1679296949701.0/warc/CC-MAIN-20230401032604-20230401062604-00418.warc.gz"}
https://compdemocracy.org/opinion-groups/
Opinion Groups

A conversation can often be thought of as being composed of opinion groups (subsets of the participant body which tend to agree with each other). These opinion groups are central to how we think about and get value out of a conversation. As a tool, Polis' goal is to:

• reflect back to participants an understanding of themselves in relation to the opinion landscape
• surface points of common ground or rough consensus

Opinion groups allow participants to:

• frame an understanding of their position in the opinion landscape relative to the opinion group they align with
• understand where people who think differently from them tend to fall on the issues, by reviewing the representative comments for other opinion groups
• surface points of common ground by looking at group-informed consensus

Generally speaking, opinion groups are able to accomplish this (and more) by serving as a lower-dimensional representation of the full set of information in the conversation. Instead of thinking about thousands of participants and comments, we can think about a handful of opinion groups and the key comments which help us understand them. This allows us to weave a more coherent story about how these groups relate to each other, which would not be possible otherwise.

Clustering

There is no one "right way" to detect opinion groups within a conversation. Many algorithms exist for doing this, and they are generally considered clustering algorithms. Groups within the conversation are typically identified with (or as) individual clusters.

K-means

Polis uses the k-means clustering algorithm to group participants into clusters based on similarity of responses. K-means is a very old and simple method, and in comparison with newer techniques it is limited in the kinds of patterns it can find. While this restricts the kinds of patterns the resulting opinion groups can reflect, it has the benefit of being relatively predictable and easy to interpret.

Selecting a number of clusters

Because the k-means algorithm depends on a fixed choice of K (the number of clusters to be selected in the data set), Polis uses the silhouette coefficient to select an optimal number of clusters.

There are numerous other clustering algorithms that we are researching and trialing for downstream analysis. However, we're unlikely to adopt any of these as part of the core real-time engine without careful consideration of the trade-offs. It is one of our number-one goals as an organization to research and develop a well-defined computational and sociological framework for evaluating these trade-offs.
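To illustrate the cluster-count selection described above, here is a self-contained R sketch that picks K by average silhouette width, using base R's kmeans and the cluster package; the vote matrix is simulated, and this mirrors the idea rather than Polis's actual engine code:

library(cluster)

# Simulated votes: 100 participants x 20 comments, entries in {-1, 0, 1}
set.seed(1)
vote_matrix <- matrix(sample(c(-1, 0, 1), 100 * 20, replace = TRUE), nrow = 100)

# For each candidate K, run k-means and compute the mean silhouette width
ks <- 2:6
avg_sil <- sapply(ks, function(k) {
  km <- kmeans(vote_matrix, centers = k, nstart = 10)
  mean(silhouette(km$cluster, dist(vote_matrix))[, "sil_width"])
})
best_k <- ks[which.max(avg_sil)]   # K with the highest mean silhouette width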
2022-01-25 18:02:22
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 1, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.3732128441333771, "perplexity": 1225.44301166675}, "config": {"markdown_headings": false, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 20, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2022-05/segments/1642320304859.70/warc/CC-MAIN-20220125160159-20220125190159-00189.warc.gz"}
https://chat.stackexchange.com/transcript/36/2019/11/20
12:00 AM LOL, that sounds like a language problem. In some languages, "please" and "you're welcome" are the same, for example. wait did i paste that? that's so weird Apparently so. 6 I'd vote for singular values/eigenvalues/eigenvectors over determinants and adjugates for the way to approach this. TLDR: standard errors increase as the eigenvalues of $X^TX$ get increasingly small and this corresponds to the formation of valleys in the loss surface representing our increasing ... Here we go I'm confused what it means to "approach" a non-trivial nullspace I thought a matrix either has a nontrivial one or it doesn't. What does it mean to "approach"? or should I just keep studying and ask this question after doing more problems best that comes to mind is if I had vectors e1,e2, and e1+t*e3 in RR^3 They're talking about varying $X$, aren't they? 12:02 AM yes Think about a continuous parametrized family $X_t$ of matrices. So it would be like different matrices and the sequence of them approaches it ok So if $X_0$ has trivial nullspace, then for all small $t$ $X_t$ also does. But you can't take the converse. that makes sense. great. that was what i was confused by :) super So they're imagining some nonsingular matrices approaching a singular one, I guess. 12:04 AM small meaning "within some neighborhood", to be clear? Yes, obviously depending on the particular family, etc. right cool Let $X,Y, Z \sim \mathbb{U}(0,1)$ what is $P(X + Y > Z)$ As an example I'm fond of, consider $$M(x,y,z)=\begin{pmatrix} 1 & x & y \\ x & 1 & z \\ y & z & 1\end{pmatrix}$$ The condition for it to be PSD is $(x,y,z)\in [-1,1]^3$ and $\det M=1-x^2-y^2-z^2+2xyz\geq 0$ 12:09 AM I am trying to think about the whole space as the unit cube and there must be some prism in which $X+ Y > Z$ and the probability will then be the volume of that prism over the volume of the cube The set of (x,y,z) in [-1,1]^3 which fulfills that is shown here: images.app.goo.gl/efbcFwqfSVW6QfjL7 @Semiclassical interesting I know how to do this problem with integration but a peer told me the prism will actually be a triangular pyramid. However, I do not see it If you're in the cube but outside that figure, the matrix has at least one negative eigenvalue. If you're inside the figure, you have all positive eigenvalues. So you get a surface of singular matrices with nonnegative eigenvalues. If you ignore the nonnegative part and ask for what (x,y,z) the matrix is singular, you get this instead: (well, that figure rotated appropriately) Lots of fun stuff there. @genescuba I don't agree that P(X+Y>Z) will be the volume of a triangular pyramid. However, P(X+Y<Z) would be. hmm, why is tthat Simplest way to see that is to note that Z and 1-Z occur with equal frequency, so P(X+Y<Z) = P(X+Y<1-Z)=P(X+Y+Z<1). 12:19 AM intuitively X+Y > Z seems like that 2 sides of a triangle must add up to be greater than 3rd At which point one notes that the surface x+y+z=1 intersects with the vertices (1,0,0), (0,1,0), (0,0,1) of the cube. so you get a tetrahedron with those as vertices along with (0,0,0) and that's a triangular pyramid volume works out to be 1/6 I should think? Yes. P(X+ Y > Z) = 5/6 is the solution you could also pull off this logic with the original version: the plane x+y=z intersects the cube at (1,0,1), (0,1,1), and (0,0,0) ah I see. with the point (1,1,0) being the other vertex I like symmetry, so P(X+Y+Z<1)=1/6 is neat 12:24 AM how is this point (1,1,0) on the x+y = z plane ? it's not.
But it is a vertex of the cube and it lies in the region x+y>z, so it's the other vertex of the tetrahedron in that case ah makes sense, yup those will be the vertices of the region x + y > z right. clever solution that saves integration ! yeah. of course, you have to remember how to get the volume of a cube. which is easy to do with integration but I can never remember how to figure that out without calculus 12:30 AM Hey guys! Could somebody please give me a hand with the following question, please? I am not really sure how to start. Here's the question: If f is periodic with a period of $2a$ for some $a>0$, then $f(x)=f(x+2a)$ for all $x∈R$. Show that if f is continous, there exists some $c∈[0,a]$ such that $f(c)=f(c+a)$. oh boy with regards to the starred comment: I meant to say "volume of a cone". i certainly know how to justify the volume of a cube :S I'm aware of the fact that the Intermediate Value Theorem should be used here, but I'm not sure how to use it in this instance. 12:50 AM @Abwatts can you justify it using words? @LeakyNun The question makes sense to me since I know that the y values in periodic functions repeat at some point, but I'm not sure what to do since the proof should work for all real numbers.. Am I supposed to use induction here? If there is an exact sequence of $A$-modules: 0->M->N->N->0 with N finitely generated, do we have $M=0$? I have the following calculation for an isometric embedding (into Minkowski space). Although I understand the calculation steps, I don't know why they are done. Could anybody clarify those steps? In particular: understand the first two lines (how to equate the two metrics/ line elements), but then... (i) why do we need those (two) requirements/ where are they coming from, and (ii) why does it make sense to propose this solution ... and (iii) why for $t>=-1$? Here is the link to the calculation: https://www.dropbox.com/s/qmcefg3idmxyom6/Photo%20Nov%2019%2C%2016%2019%2017.jpg?dl=0 I am trying to show that for a surjective endomorphism M->M of a finitely generated A-module $M$(say by n elements), the kernel of the induced endomorphism A^n->A^n is the same as the kernel of M->M. The above exact sequence is what I get after applying snake lemma. @Abwatts: Here's a big hint. How can you rephrase the equality of two numbers? 1:03 AM @Abwatts I think you misunderstood the question. They don't just need to "repeat at some point" surely they repeat at some point, since e.g. f(0) = f(2a) The question is why it repeats distance $a$ apart SOMEWHERE. Still, you should answer my hint. @TedShifrin When two numbers are pointing to the same value and thus, they explicitly said to be equal? Pointing? I'm asking you to algebraically rearrange what it means to say $u=v$. Or difference of two functions perhaps? Aha. $u=v$ if and only if $u-v=0$. So what function should you look at for your problem? 1:10 AM $g(x)=f(x+a)-f(x)$? I guess you looked at the answer someone posted to your question? But yes. On what interval? Re @Eric Yes :P.. [0, a] where a > 0? or simply all positive real numbers? What interval do you want $c$ to be in? $[0,a]$ for $a > 0$? You don't need to repeat that about $a$. It is in the original statement. OK, so that's a good interval to use. Now figure out how the intermediate value theorem gives you what you want. 
1:15 AM @TedShifrin Let $X$ be a smooth projective curve over $k=\Bbb F_q$ and $K$ be the function field and $\Omega_K$ be the field of meromorphic $1$-forms; why $\Omega_K \otimes_K K_v = k_v((u)) \ \mathrm du$ for a point $v$ on $X$? Can I pick both $0$, and $a$ since they're in the interval c is in, and show where the function $g(x)$ is both above and below in that interval? I thought about splitting the proof into 3 cases where the first 2 clearly prove the question right away, and the third one where we have to do some rearrangements to apply the IVT thm. Would that be a valid approach for this kind of question? If you have that $X\sim\text{Gamma}(n,p)$ then how would you find the distribution of $Y=(X,\text{log}(X))^T$? I wouldn't (Note: The end goal here is to find the moment generating function of $Y$, but you have to calculate the Expectation of an exponential function in order to get that, which requires knowledge of the distribution, unless there is some other method I am unaware of) you can use LOTUS 1:29 AM I suppose. But that $\text{log}(X)$ in $E(e^{t_1X+t_2\text{log}(X)})$ is making me wary LOTUS is LOTUS. Also, doesn't, $\int_0^\infty e^{t_1x+t_2x}f_{(X,\text{log}(X))}(x,\text{log}(x))dx$ still require me to first find the distribution of $Y$? $\displaystyle E\left(e^{t_1 X + t_2 \log(X)}\right) = \int_0^\infty e^{t_1 x + t_2 \log(x)} f_X(x) \ \mathrm dx$ Aaaaah, I'm dumb That looks gross 1:35 AM I failed to notice that $e^{t_1X+t_2\text{log}(X)}$ is just a function of $X$ In multivariable functions when we do a change of variables, the scaling factor is the absolute value of the determinant of the jacobian why in single variable functions for change of variables (u-sub) do we not take the absolute value? we do though I guess the Gamma distribution has nice expectation values for both X and log(X), so I guess that integral could be tractable @Semiclassical what that's just another gamma integral I guess. I haven't done a lot of gamma integrals. 1:37 AM you can literally absorb t1 and t2 inside if you remember the gamma pdf oh, right. $e^{t_2 \log(X)}=X^{t_2}$ fair enough Yeah, you distribute the $e^{\text{junk}}$. It actually is simplifying fast @Rithaniel maybe looking for a kernel will help here after expanding I think you might be able to manipulate the expression to get the gamma kernel 1:51 AM End result looks like: $\frac{\lambda^n\Gamma(n+t_2+1)}{(\lambda-t_1)^{n+1}\Gamma(n)}$ Unless I made some mistake somewhere What expectation value is that supposed to be? though I guess I should also ask which parametrization you're using for Gamma That's the moment generating function of $(X,\text{log}(X))$ where $X\sim\text{Gamma}(n,\lambda)$ and $f_X(x)=\frac{\lambda^n}{\Gamma(n)}x^{n-1}e^{-\lambda x}$ Good evening good friends. In Wikipedia is stated that in d demensional Vectorspace of finite field with n elements there is exactly $\prod\limits_{k=1}^{n-1} (q^n-q^k)$ so much basis. Can you please redirect me to a topic/video/page where this is discussed and showed how one reaches such result? My love and thanks. Opsy dayzy that's not quite correct let me rephrase. $\prod\limits_{k=1}^{d-1} (q^d-q^k)$ for d dimensions and q elements. 2:09 AM Hi! (sorry for the repost) I have the following calculation (see image) for an isometric embedding (into Minkowski space). Although I understand the calculation steps, I don't know why they are done. Could anybody clarify those steps? In particular: I understand the first two lines (i.e. 
how to equate the two metrics/ line elements), but then... (i) why do we need those (two) requirements (line 3-4)/ where are they coming from, and (ii) why does it make sense to propose this solution (line 5-6)... and 2:58 AM @Rithaniel actually, that can't be quite right: If that's the moment generating function, then you should recover E[1]=1 when t1=t2=0 whereas it looks to me like you don't quite cancel properly to get that Ah, I see my mistake. I assumed my integral was the expectation, but it was actually just an integral over the whole domain of the function $\frac{\lambda^n\Gamma(n+t_2)}{(\lambda-t_1)^{n}\Gamma(n)}$ Let $M \in \operatorname{GL}_2(\Bbb Q)^+$ (positive determinant) and let $\Gamma \subset \operatorname{SL}_2(\Bbb Z)$ be a subgroup of finite index. Suppose $M\Gamma M^{-1} \subset \operatorname{SL}_2(\Bbb Z)$ also. Then $M\Gamma M^{-1}$ obviously has finite index in $\operatorname{SL}_2(\Bbb Z)$. Consider the left coset spaces $\Gamma \setminus \operatorname{SL}_2(\Bbb Z)$ and $M\Gamma M^{-1}\setminus \operatorname{SL}_2(\Bbb Z)$. The map sending $\Gamma\cdot A \mapsto M\Gamma M^{-1}\cdot A$ for all $A \in \operatorname{SL}_2(\Bbb Z)$ is a bijection I think.. it's obviously surjective.. I just need to show that the kernel of that guy is empty and I think this follows from $\Gamma \cap M\Gamma M^{-1} = \varnothing$, does that sound reasonable? lol Also hi :) @Rithaniel nice that's still only a necessary condition, of course: it's correct at t1=t2=0 Well, I suppose I should add some domain limitations. Like, $t_1<\lambda$ Hmm well actually $\Bbb I_2 \in \Gamma \cap M\Gamma M^{-1}$ 3:12 AM yeah. valid in some neighborhood of t1=t2=0 $\frac{\lambda^n\Gamma(n+t_2)}{(t_1-\lambda)^{n}\Gamma(n)}$ where $t_1>\lambda$ Or $\frac{\lambda^n\Gamma(n+t_2)}{(\vert\lambda-t_1\vert)^{n}\Gamma(n)}$ in general Wait, no, that doesn't work The $e^{(-\lambda+t_1)x}$ is fixed, and so $\lambda-t_1$ has to be positive Yeah, the answer is $\frac{\lambda^n\Gamma(n+t_2)}{(\lambda-t_1)^{n}\Gamma(n)}$ @Rithaniel One more sanity check: If you set $t_1=0$ in that, then it's not a function of $\lambda$ anymore Does it make sense that E[e^(t2 log X)]=E[X^t2] would be independent of $\lambda$? Hmmmmm, that's an interesting question. I'm not sure 3:31 AM So you'd have, for instance, $\displaystyle E[X]=\int_0^\infty x\frac{\lambda^n}{\Gamma(n)}x^{n-1}e^{-\lambda x}\,dx$ be a function of $n$ alone Easiest way to see if that makes sense is to consider the substitution $u=\lambda x$ Well, $E(X^t)=\int_0^\infty \frac{\lambda^n}{\Gamma(n)}x^{n+t-1}e^{-\lambda x}dx=\frac{\Gamma(n+t)}{\Gamma(n)}\int_0^\infty \frac{\lambda^n}{\Gamma(n+t)}x^{n+t-1}e^{-\lambda x}dx=\frac{\Gamma(n+t)}{\Gamma(n)}$ Wouldn't that last integral need to be $\int_0^\infty \frac{\lambda^{n+t}}{\Gamma(n+t)}x^{n+t-1}e^{-\lambda x}\,dx=1$? Ah, shoot, yeah. Right. So you endn up with $E(X^t) = \frac{\Gamma(n+t)}{\Gamma(n)}\lambda^{-t}$ which makes sense if only for dimensional reasons: If $x$ and $\lambda$ have units, then $\lambda x$ should be dimensionless (otherwise $e^{-\lambda x}$ is rather nonsense) so $E((\lambda X)^t)$ should be dimensionless, but not $E(X^t)$ Newest form: $\frac{\lambda^n\Gamma(n+t_2)}{(\lambda-t_1)^{n+t_2}\Gamma(n)}$ 3:38 AM Okay. That one matches what I had in Mathematica :) Excellent use of technology to head me off at the pass :P yep. 
but you'll note that the sanity checks didn't really involve any tech Indeed, but you gotta know to look for them definitely So that's the lesson: If you have something (and aren't pressed for time) examine it with a fine-toothed comb 3:42 AM eh, the lesson for me is to consider simple limiting cases also, when I see a distribution like $f_X(x)\,dx=\frac{\lambda^{n}}{\Gamma(n)} x^{n-1}\,dx$, my immediate impulse is to consider the substitution $u=\lambda x$ that gives $f_X(x)\,dx = \frac{1}{\Gamma(n)} u^{n-1}\,du$, which is $\lambda$-independent So $$E[e^{t_1 X}X^{t_2}] = \int_0^\infty e^{t_1 x}x^{t_2}f_X(x)\,dx = \lambda^{-t_2}\int_0^\infty e^{(t_1/\lambda)u}u^{t_2} \frac{1}{\Gamma(n)}u^{n-1}\,du$$ oh, bleh. I missed something obvious in there $f_X(x)\,dx = \frac{\lambda^n}{\Gamma(n)}x^{n-1}e^{-\lambda x}\,dx=\frac{1}{\Gamma(n)}u^{n-1}e^{-u}\,du$ Main thing is that the integral is only a function of $\lambda$ via $t_1/\lambda$ (which, comparing with your earlier form, corresponds to writing the result as $\lambda^{-t_2}(1-t_1/\lambda)^{-n-t_2}\frac{\Gamma(n+2)}{\Gamma(n)}$ 4:06 AM I assume that $\Gamma(n+t_2)$ in the numerator? woops, yeah Alright, then woo, it checks out So, here's something I puzzled out earlier today: If $N,M$ are random variables with joint distribution $n^m$ where $M$ is discrete with support across the natural numbers (including 0) and $N$ is continuous with support on the interval $(x,1-x)$, then what is $x$? well, one should presumably have $\sum_{m=0}^\infty \int_x^{1-x}n^m\,dn=1$ So $\sum_{m=0}^\infty n^m = \frac{1}{1-n}$ and thus $1=\int_x^{1-x}\frac{dn}{1-n}=\left[-\ln(1-n)\right]_x^{1-x}=\ln(1-x)-\ln x=\ln(1/x-1)$ so $1/x-1=e\implies x=1/(1+e)$ So that's neat Yeah, that's what I got. Also, if it's support is $(0,x)$ then $x=1-\frac{1}{e}$. It's relatively easy for people at this level of math, but what do you think about it for an undergrad assignment? eh. I think it's fine: The first thing you should know about distributions is that they're normalized. 4:22 AM Also, in the $(0,x)$ case, we have that $E(N)=\frac{1}{e}$ and $E(M)=e-2$ in the $(x,1-x)$ case, $E(N)=\frac{2}{1+e}$ and $E(M)=e-\frac{1}{e}-1$ nice 1 hour later… 5:43 AM Hello 1 hour later… 6:55 AM @JoeShmo Ah yes, I forgot lol 1 hour later… 8:25 AM Can every compact set in a manifold be written as a finite union of compact sets each of which lie in a chart? Here is my approach: let $\{U_\alpha\}$ cover $K$, where $U_\alpha$'s are charts. Then some $U_1, \dots, U_n$ cover $K$. Let $K_i = \overline{K\cap U_i}$. Then clearly $K = K_1 \cup \dots \cup K_n$. But how do I show that $K_i$ is in some chart? @feynhat the whole manifold if it's compact Or a non simply connected one, like a circle on the torus ? Are you giving counter-examples? @AlessandroCodenotti 8:59 AM Yes 9:11 AM @AlessandroCodenotti I don't see how circle is a counter example. 9:43 AM Not any circle, but one of the nontrivial ones, like turning around the torus So the red and magenta circles in the image at the top It's not contained in a single chart because they are homeomorphisms with $\Bbb R^2$, which is contractible 1 hour ago, by feynhat Can every compact set in a manifold be written as a finite union of compact sets each of which lie in a chart? union Wait, I thought you wanted the union to be disjoint No. I just want each $K_i$ to be inside a chart. Oh ok then it looks doable 10:45 AM @AlessandroCodenotti This answer seems to work. But I am not so sure about what he means by balls. 
Are those balls taken in some local coordinate about x? ...or is he thinking of $M$ as being embedded in some $\mathbb{R}^l$ and taking balls there? Every topological manifold is metrizable (if you ask topological manifolds to be second countable) 11:11 AM @AlessandroCodenotti I don't think I have sufficient knowledge of metrization theorems. Can you point me to the theorem which says that second countable => metrizability. 11:22 AM Urysohn metrization theorem Hausdorff+regular+second countable implies metrizable To show that a manifold is regular you can use that locally euclidean implies locally compact and that Hausdorff+locally compact implies completely regular @BaiduryaMathaddict But if that is the case, i.e., if lim_{x->c} f(x)^{g(x)} = lim_{x->c} f(x)^{lim_{x->c} g(x)} when why do I often see the 'exponential-log technique' used to evaluate these limits? That is, we evaluate lim_{x->c} f(x)^{g(x)} using lim_{x->c} Exp[ g(x) Log(f(x)) ]. The first technique is simpler so why does this 'exponential-log technique' get used? @AlessandroCodenotti Thanks a lot. But, I think for each $K_i$ to lie in a chart, we should take the balls in local coordinates. What I mean we cover $K$ with $B(x, d/3)$ where each ball is taken in local coordinates about $x$. Then we can cover $K$ with finitely many of these say $B_i$. And then, set $K_i = \overline{K \cap B_i}$. Clearly, $K = \cup K_i$, and $K_i \subset B(x, 2d/3)$ which is in a chart. 12:26 PM Sets from top to bottom: 1. Countably infinite/Countable (included here for completeness) 2. omega finite 3. Stackel finite 4. Tarski finite 5. Bounded, Streamless or Noetherian (From intuitionist set theories) 6. Amorphous 7. Δ2 8. Δ3, which includes Motowoski finite 9. Δ4 10. Δ5 11. Kuratowski finite 12. Hyperset (from non well founded set theories) 13. Fuzzy set (NB I knew they exists, but I am too lazy to find the axioms that knocks out the Deltas one by one) I wonder if there is a way to prove, that omega finite is the smallest possible finite in the hierarchy of finite definitions... 12:44 PM @feynhat Yes 3 hours later… 3:24 PM Where's the center of mass of a hemisphere (just the 2D surface, not the 3D solid) Is it just halfway up? Nic Cage voice ♫ Look at this Wait Was that not Nick Cage Apparently it was not look at this graph 3:32 PM It definitely was Nick Cage It reminds me of G-bach's C Apparently it's Chad Kroeger, whoever that is hes the singer for Nickleback Chad Kroeger is Nick Cage's evil twin but it's from Nickelback which makes me feel like, at least in some weird sense, I was close Not OK: If you plug "If you plug $x$ into itself, the resulting sentence is false" into itself, the resulting sentence is false OK: If you substitite "If you substitute $x$ into itself, the resulting string of symbols cannot be obtained from the axioms" into itself, the resulting string of symbols cannot be obtained from the axioms 3:40 PM I maintain that that's gross @AkivaWeinberger Yes, because surface area of a zone of a sphere is dependent only on the "height" of the zone. The latter is how Gödel's First Incompleteness Theorem is proven @TedShifrin Ah, yeah, 'cause of the area-preserving map to the cylinder Here's something I'm seeing in a question and I feel like I should know of DogAteMy, ayup. First, a familiar observation: Consider the space curve $x\mapsto (x,x^1,x^2,\cdots, x^{d})$ in RR^{d}$. Then any$(d-1)$-plane intersects this moment curve in at most$d$points. 
Does that still work if we replace$x^k$with$f_k(x)$where$\{f_k(x)\}$is a sequence of orthogonal polynomials? (The specific question is for the case of Chebyshev polynomials of the first kind.) 3:48 PM The original is based on the Vandermonde determinant, but that can be rephrased as a linear independence statement in function space, so probably yes. Right. Oh, I'm being dumb. It's just the fundamental theorem of algebra when you pull back the equation of the plane. Ah. That seems to match the answer so far: math.stackexchange.com/a/3443587/137524 Though I agree with Omnomnomnom that the answer hasn't yet verified that the determinant is nonzero anywhere. (I'm sure it is, but their last comment seems to miss the point.) > First, a familiar observation: Consider the space curve$x\mapsto (x,x^1,x^2,\cdots, x^{d})$in$\mathbb{R}^{d}$. Then any$(d-1)$-plane intersects this moment curve in at most$d$points. Missing$ sigh this is the danger in being too lazy to put mathjax on The answer's last comment shows that the product $\prod_{j<k}(y_j-y_k)$ is nonzero. But the question is whether the determinant is nonzero, and I don't see how their comment addresses that. 3:53 PM Because that's the Vandermonde determinant. But their matrix isn't the Vandermonde matrix. Oh, they're just arguing by the root-factor theorem for polynomials. If $f$ is a polynomial and $f(x)=f(y)$, then $x-y$ is a factor of $f$. Right. But that only shows $p(x)=A q(x)$. could still have $p(x)=0$. You just have to show it's nonzero somewhere. Yes, I agree. I just don't see where they've done that in their answer. 3:56 PM You're right, there's a gap in the argument. Okay. Now I'm gonna eat breakfast :P lol I mean, I would very much doubt that the determinant vanishes identically. But just because I find it implausible for X to be false, doesn't mean I know how to prove it's true :P Oh, but in the last comment, the answerer addressed it. Is he wrong? It was shortly after your objection. What I see is him addressing that $\prod_{j<k}(y_j-y_k)$ is nonzero. But the point to fill in is whether the determinant itself is nonzero. 3:58 PM No, look at the final comment. Oh, actually, I don't know where that comes from. Yeah. Yeah, I think he's addressing the original determinant. Hmm. He's telling you which $x_i$ give a specific nonzero result. Well, he's telling me what $x_i$ -should- give a nonzero result. 4:00 PM Well, if you verify his product formula for the $\sin x_i$ ... Anyhow, bye for now. later. My own inclination would be to observe that the leading term of T_n(y) is (2y)^n/2 (for n>0) and argue that the leading term of the determinat is therefore blah blah blah 4:17 PM Hello, I got a question How can we convert a proper fraction transfer function to state space say $G(s) = \frac{s^3+s^2 + 3}{2s^3 + 4s + 7}$ This will cause $\frac{Y(s)}{U(s)} = G(s) = \frac{a_1 s^2 + a_2 s + a_3}{2s^3 + 4 s + 7} + D$ but now how to proceed wikipedia seems to have an entry on this subject: en.wikipedia.org/wiki/Realization_(systems) @Semiclassical It has for case of strictly proper ah. I'm guessing this isn't because there's no quartic term in the denominator? yes, i mean I can do it when power of s on numerator is less than that on denominator, but if it is equal then its a problem for me One does have $$G(s)=\frac{s^3+s^2+3}{2s^3+4s+7}=\frac{1}{2}+\frac{2s^2-4s-1}{4s^3+8s+14}$$ So if one can get rid of that constant $1/2$, then one is back to a proper transfer function? 4:25 PM Yes! 
but how is it to be done, i mean getting rid of that 1/2 yeah I want to say it's just "take D=1/2" so that $y(t) = Cx(t)+\frac12 u(t)$ I'm not conversant with the conventions, though. So it's not clear when D is a matrix vs. a scalar when I read that WP entry looking at the article: A is n-by-n, B is 4-by-1, C is 1-by-4 and D is 1-by-1 so taking D=1/2 seems reasonable enough I mean if I only had $\frac{Y(s)}{U(s)} = \frac{2s^2-4s-1}{4s^3+8s+14}$ then I get $4y''' + 8y'+14y = 2u''-4u'-u$ right but here due to extra constant 0.5 how can we directly conclude your statement, I am not able to understand that 4:46 PM @Semiclassical according to some people, the volume of $[0,1]^n$ is $1/n!$ volume a cube is a tricky thing lol i meant to say volume of a cone :P (and how to get it without calculus) @jeea looking at wikipedia, I'm tempted to suggest changing variables from $y$ to $z=y-u/2$ there's a convention in which the volume element is $n!\,dx^1\wedge\cdots\wedge dx^n$ awful! simply because, if $y(t)=Cx(t)+Du(t)$ with $D=1/2$, then $z(t) = Cx(t)$ i.e. $D=0$ I think it really does just come down to $D\neq 0$. 4:57 PM @semiclassical thanks a lot I got it! neat 5:35 PM @RyanUnger Blah. amusingly, 1/n! is the volume of the cone I was having in mind :P I did manage to convince myself how you'd dissect a cube to get the usual result, though. @Semiclassic: You can do a synthetic geometric argument that three pyramids fit in a cube. And then it's standard to approximate cones by pyramids. Hmm. I managed to get 6 pyramids into a cube, though I could just as easily have done 3 pyramids into a triangular prism. also, I'm being sloppy and equating cone with pyramid. Oh, oops, if I'm doing triangular pyramids, I guess I'm wrong. I guess we could look at Euclid and see what he did for this. hmm Book XII, Proposition 7: Any prism with a triangular base is divided into three pyramids equal to one another with triangular bases. 5:40 PM Oh, you beat me. I was looking at an old-fashioned paper book. he actually has it in more generality than I did: I only tried to do the case of a right triangular prism whose cross-section was a 45-45-90 triangle you have officially snatched the pebble from the master's hand grasshopper you may leave the temple :P Oh, look, it's skull. eh. except that Euclid beat us all to it :P hi, all 5:43 PM I wonder if that dissection works in higher dimensions. I know that there's Dehn stuff going on there that is: Can I always dissect the n-cube into n! congruent polyhedra? In the 3-D case, you have to change one of the pyramids. Are they all actually congruent? I just think they have equal volume. Hmm. @Semiclassical wouldn't that make Euclid the grand master :P I think they're all the same in 3D up to reflection can't say I'm totally sure about that tho
2020-12-05 03:24:29
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 1, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.857643723487854, "perplexity": 820.7105294044301}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2020-50/segments/1606141746033.87/warc/CC-MAIN-20201205013617-20201205043617-00094.warc.gz"}
https://www.gamedev.net/topic/687302-game-industry-is-it-worth-trying-to-do-something-new/
# Game Industry: Is it worth trying to do something new?

6 replies to this topic

### #1 Flyverse Members Posted 18 March 2017 - 06:08 PM

EDIT: I just noticed I posted this into the wrong sub-forum, I'm so sorry!! Might there be a possibility of moving this thread instead of deleting/closing it? Thank you very much!

Heya! This question I have might turn out to be a little weird. First, let me give you a little background about myself so the context can be understood more easily (I know this will be boring, but I believe my question would be far too unclear without it):

I'm turning 18 soon and am currently in the process of applying to different universities, mostly in Germany. I'm interested in a lot of very different things, and game development happens to be one of these interests. Now, game development is most certainly not going to be what I study. I've thought about going for either a mixed informatics/business or a mixed physics/business degree (focusing on the more entrepreneurial side of things). Not because I don't want to eventually get into the gaming industry (why would I ask the question here if I didn't?), but because I believe I'll get the most out of my studies that way. In terms of programming, I started around 7 years ago and am now "OK" at it in general, I guess.

I'll now try to explain what I would ideally like to do later on and what problems I believe stand in the way of it, and then, well... I guess I'd like to hear your opinion about it.

If I could just snap my fingers and wish for an optimal future, I'd absolutely love to put the few game ideas I have into reality, not under the restrictions of being an employed programmer, but by coordinating whole projects myself (ideally in a well-doing start-up). Before anyone points out how utopian this is, I just want to say: I know that. I know that ideas alone are worthless. Without proper funding, you'll get nowhere. Without a properly organized team, you'll get nowhere. Without X, you'll get nowhere, X being literally any one of so many things. Especially since what I'd ideally like to do wouldn't just be to release a snake or pong game.

That alone isn't the only problem, however. Not only do I have no capital whatsoever, but I've also never managed to actually finish one of the countless projects I've started. In terms of real project management, my experience is near zero. Even in terms of programming, I only have the experience I acquired through seven years of trying and failing and reading articles. (I haven't had anyone review my code yet, but I really don't think the outcome would be anything less than eye cancer. I'm exaggerating, but I've felt as if I did not progress at all these past 1-2 years.)
I have bits and pieces of knowledge of this and that, but nothing concrete; in other words, to the industry I'd probably be completely worthless. No one in his right mind would join someone without any capital or concrete, profound knowledge of specific topics. And this still isn't everything. The thing is: I don't actually want to be a programmer. As I said above, I'd like to put my own ideas into practice, all the while being able to manage the project on the one hand and maybe still contribute something on the technical side on the other. Again: yes, this sounds utopian, and yes, it absolutely is.

However, I still have to ask: what do you think? What do you think of the combination of my kind of "dream job" and path of study? Ideally, I'd obviously like to use the time and lack of constraints I have during my studies to make progress on such projects. I know that everything I've said is far-fetched, but the thing is that I'm actually not quite sure how far-fetched it is. Is it far-fetched in the sense of hard but attainable? If that were the case, I think I'd go for it. If it is far-fetched in the sense of "you're wasting your life", I might choose to go down another path; I have quite a few project ideas in completely different disciplines as well, some of which may arguably be easier to attain, especially by going to a research-driven university.

TL;DR: I'm not sure what to do. I have project ideas for completely different disciplines, some of which are related to game development. However, those related to game development sound like the "I don't know programming and am alone and want to code an MMORPG in 2 months" threads (no, this is not my case; yes, this is an extreme example).

Any opinions? Anything is welcome! Thanks a lot.

Edited by Flyverse, 18 March 2017 - 06:12 PM.

### #2 Tom Sloper Moderators Posted 18 March 2017 - 08:04 PM

I just noticed I posted this into the wrong sub-forum

I don't think so. You're in the Job Advice forum, and you're seeking advice on your career goal.

I'm turning 18 soon and am currently in the process of applying to different universities.

Perfect! Get a bachelor's degree in any subject you like. Then follow it up with an MBA. You'll need a business master's degree to achieve your career goal.

If I could just snap my fingers and wish for an optimal future, I'd absolutely love to put the few game ideas I have into reality, not under the restrictions of being an employed programmer, but by coordinating whole projects myself (ideally in a well-doing start-up)

This is totally feasible. It's just going to take time: 6 years for the education, and then maybe 6 or more years until you can start your own company.

I don't actually want to be a programmer.

You don't have to be.

Yes, this sounds utopian, and yes, it absolutely is. ... I'm actually not quite sure how far-fetched it is.

It's not far-fetched at all. Get those degrees, then take any job in the game industry except programmer, and work in the industry and save your money and build a network of contacts. I wrote an article on how to achieve this goal (it is not a far-fetched or uncommon goal), at http://www.sloperama.com/advice/lesson29.htm

-- Tom Sloper Sloperama Productions Making games fun and getting them done. www.sloperama.com Please do not PM me. My email address is easy to find, but note that I do not give private advice.

### #3 Tom Sloper Moderators Posted 18 March 2017 - 08:31 PM

By the way, Flyverse.
I just noticed a total mismatch between your post and your subject line, "Is it worth trying to do something new?" It's not clear what "something new" you want to try to do. Is there something you want to add, something you forgot to ask?

-- Tom Sloper Sloperama Productions Making games fun and getting them done. www.sloperama.com Please do not PM me. My email address is easy to find, but note that I do not give private advice.

### #4 Flyverse Members Posted 19 March 2017 - 04:08 PM

I don't think so. You're in the Job Advice forum, and you're seeking advice on your career goal.

Okay, great!

Perfect! Get a bachelor's degree in any subject you like. Then follow it up with an MBA. You'll need a business master's degree to achieve your career goal.

What do you think about a joint science/business degree, or joint degrees in general?

This is totally feasible. It's just going to take time: 6 years for the education, and then maybe 6 or more years until you can start your own company.

Those 6 years after my studies, what exactly will I be using them for? Gaining experience in already established studios, I assume? (I know that such an experience-gaining step is obviously compulsory, but I'd preferably like to cut back that step as much as I possibly can, in order to gain time. The older I get, the more responsibilities I will have (maybe a family, maybe this, maybe that, etc.), and thus the less risk I'll be able to take.)

What do you think about using my free time while studying to already build up my idea in more detail, including a full game design, business plan and some thoughts about how to implement everything? Basically, for every single one of my projects (not only those related to game development) I've thought about efficiently using my free time in order to be able to (a) cut expenses later on (by getting some work done in advance) and (b) get closer to the product I imagined. (The biggest problem here is that up to now I've pretty much used all of my time either dreaming or procrastinating, so "using my time" is pretty abstract here.)

It's not far-fetched at all. Get those degrees, then take any job in the game industry except programmer, and work in the industry and save your money and build a network of contacts. I wrote an article on how to achieve this goal (it is not a far-fetched or uncommon goal), at http://www.sloperama.com/advice/lesson29.htm

Great! Thanks for the link, I'll read it all! The articles seem really awesome. In fact, I remember reading a few there already a few years back.

It's not clear what "something new" you want to try to do. Is there something you want to add, something you forgot to ask?

Actually, yeah, I did forget to add something in there! Unfortunately, I feel as if I've forgotten most of it by now. It was planned to be a bigger part of my post, but, oh well, I'll post it when I remember. One part of it had to do with the fact that you don't often see new, technically challenging, bigger games released by developers other than the usual big ones, and that the chances of doing so are thus quite low. Something in that direction; I'll post it when I remember correctly!

In any case, thank you very much for the help!

PS: Does anyone happen to know how I could get some of my current code reviewed? I know that I'm far from being a good programmer, but I think that I'd be able to progress much faster if I knew what was lacking in the first place.
### #5 Tom Sloper (Moderators)

Posted 19 March 2017 - 05:14 PM

> What do you think about a joint science/business degree, or joint degrees in general?

I think if that's what you like, you should do it.

> The older I get, the more responsibilities I will have (maybe a family, maybe this, maybe that, etc.), and thus the less risk I'll be able to take.

Yes, I'm sure it's only single twentysomethings who start businesses.

> PS: Does anyone happen to know how I could get some of my current code reviewed?

The For Beginners forum, here on GameDev.net. But I thought you said you don't want to be a programmer.

Edited by Tom Sloper, 19 March 2017 - 05:16 PM.

-- Tom Sloper, Sloperama Productions. Making games fun and getting them done. www.sloperama.com. Please do not PM me. My email address is easy to find, but note that I do not give private advice.

### #6 Kylotan (Moderators)

Posted 20 March 2017 - 04:08 AM

> Those 6 years after my studies, what exactly will I be using them for? Gaining experience in already established studios, I assume?

Yes. Without that, you won't have the contacts, respect, or experience necessary to succeed in what you want to do.

> The older I get, the more responsibilities I will have (maybe a family, maybe this, maybe that, etc.)

These are your choice. Families don't appear by accident.

> What do you think about using my free time while studying to already build up my idea in more detail, including a full game design, a business plan and some thoughts about how to implement everything?

Your problem is that nobody cares about your idea, and you're not experienced enough to make a reasonable business plan. Your choices are pretty much either (a) start making the game, or (b) start down the route of getting other people to make the game. Tom's link looks pretty accurate to me. I don't see a way you can cut corners unless you have hundreds of thousands of $/€/£ in the bank.

> PS: Does anyone happen to know how I could get some of my current code reviewed? I know that I'm far from being a good programmer, but I think that I'd be able to progress much faster if I knew what was lacking in the first place.

You can ask about your code in the For Beginners forum, but be warned that few people want to take the time to read through hundreds of lines of someone else's code. Critique on short snippets is common, however. Besides, there's no good alternative to taking proper programming courses.

### #7 Flyverse (Members)

Posted 20 March 2017 - 03:57 PM

> I think if that's what you like, you should do it.

Okay, great!

> Yes, I'm sure it's only single twentysomethings who start businesses.

No no no, don't get me wrong, that's not what I meant at all! I may have phrased it a bit poorly. I've just had quite a few people I spoke with tell me that they'd like to take the risk of starting their own company or something similar, but can't take that risk anymore because of the many dependencies they have accumulated over time. What I basically meant was that I'd like to try gaining experience as fast as possible. But yeah, I know that this is kind of like saying "give me free stuff", so it doesn't really work that way. Anyway.

> The For Beginners forum, here on GameDev.net. But I thought you said you don't want to be a programmer.

Okay, great! And yeah, I can see how my statements seem contradictory - however, what I meant is that I don't want to be a programmer professionally, but instead keep programming as the hobby it is now. It's a skill I'm trying to get better at over time, and to put to its best use without losing it.
Basically: I want to do as much as I can by myself, and I do think that programming is quite interesting. Actually getting my skill to a level that could be considered for professional work would take way too much time for something that I don't really want to do as a job. Oh god, I think I made this sound even more confusing than before.

> These are your choice. Families don't appear by accident.

That's true, haha.

> Your problem is that nobody cares about your idea, and you're not experienced enough to make a reasonable business plan. Your choices are pretty much either (a) start making the game, or (b) start down the route of getting other people to make the game. Tom's link looks pretty accurate to me. I don't see a way you can cut corners unless you have hundreds of thousands of $/€/£ in the bank.

Yeah, I know. That's also the reason I'm working on a small platformer game with a friend of mine right now - I'm trying to gain some more experience with multiple aspects of game development, from coding to art to releasing and advertising it. Everything's obviously very beginner-like and it won't amount to much either, but I do hope to get some experience that'll at least help me understand what kinds of things I could try to learn in my free time while studying to progress faster.

> You can ask about your code in the For Beginners forum, but be warned that few people want to take the time to read through hundreds of lines of someone else's code. Critique on short snippets is common, however. Besides, there's no good alternative to taking proper programming courses.

I'll start a topic in there soon, then! Have a nice evening.
2017-05-27 12:01:40
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.3628907799720764, "perplexity": 550.0087073833497}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 5, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2017-22/segments/1495463608953.88/warc/CC-MAIN-20170527113807-20170527133807-00125.warc.gz"}
https://www.physicsforums.com/threads/time-independent-schroedinger-equation.265656/
# Time-independent Schrödinger Equation

1. Oct 20, 2008

### soul

Hi everyone, I have been studying a quantum mechanics course for one month and our subject for now is the time-independent Schrödinger equation. What I couldn't figure out is whether $$\Psi(x,\,0) = \Psi(x)$$, since $$\Psi(x,\,0)$$ doesn't contain any time dependence and $$\Psi(x)$$ doesn't either. Can someone explain to me whether that expression is true?

2. Oct 20, 2008

### olgranpappy

Your LaTeX isn't showing up for me... but it looks like you are asking whether psi(x,0) is equal to psi(x). In which case, what do you mean by psi(x,0) and psi(x)?

3. Oct 20, 2008

### soul

I am new to quantum mechanics and there could be some lack of terminology in my question. I mean: are the solution of the Schrödinger equation at t = 0, which is written as psi(x,0), and the time-independent wave function psi(x) the same in the square well and in some other cases?

4. Oct 20, 2008

### Avodyne

In general, no. Consider a potential like the square well that has only bound-state solutions. Then there is a discrete set of allowed energies, the energy eigenvalues, $E_n$, and corresponding eigenfunctions, $\psi_n(x)$, $n=1,2,\ldots$; these are the solutions of the time-independent Schrodinger equation. Then, the most general solution of the time-dependent Schrodinger equation is $$\psi(x,t)=\sum_{n=1}^\infty c_n e^{-iE_nt/\hbar}\psi_n(x),$$ where the $c_n$'s are arbitrary coefficients.

EDIT: something seems wrong with the TeX processing on the new server ...

5. Oct 20, 2008

### Dr Transport

To fix your LaTeX issues, you need to close with [/itex] or [/tex].....

6. Oct 20, 2008
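A quick check of the point above, obtained just by setting $t=0$ in Avodyne's general solution (nothing here goes beyond that formula):

$$\psi(x,0)=\sum_{n=1}^\infty c_n e^{-iE_n\cdot 0/\hbar}\,\psi_n(x)=\sum_{n=1}^\infty c_n\,\psi_n(x).$$

So $\psi(x,0)$ coincides with a single time-independent eigenfunction $\psi_n(x)$ only in the special case where one $c_n$ equals 1 and all the other coefficients vanish, i.e. when the system is prepared in a single stationary state.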
2017-10-23 16:28:57
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 1, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.6887698173522949, "perplexity": 720.852239310583}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2017-43/segments/1508187826114.69/warc/CC-MAIN-20171023145244-20171023165244-00390.warc.gz"}
https://samrat.me/posts/2016-11-23-misc-updates/
# Samrat Man Singh

Email: mail@samrat.me

Hi! I'm Samrat, a programmer from Nepal.

### 2016-11-23

I've been reading Neal Stephenson's Quicksilver. This is the second time I've started to read the book, and I must say that I'm having a lot of trouble getting into it. That said, I haven't abandoned it yet - I've enjoyed all of the other Stephenson novels I've read so far. Also, the last book I read, Gore Vidal's Creation, got me excited about the genre of historical fiction. I'm still hoping Quicksilver gets better eventually.

The other book I've been working through is Compiler Construction by Niklaus Wirth. This is a concise and, in my opinion, great introduction to compilers. I have also been building a small compiler written in C. It doesn't have much yet, but here's the test program, which it can successfully compile:

```
a := 11;
b := a+12;
if a > b then c := b*5
elsif a > 20 then c := b + 5
elsif 0 > 1 then c := 2*b
else c := 101
end;
repeat
  a := a - 1;
  b := b + 3
until a < 1;
d := 14
```

No procedures, arrays or types yet (and of course plenty of other things are missing too; these three just happen to be on my immediate backlog). But hey, baby steps!
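The post doesn't show any of the compiler's internals (the author's implementation is in C), so purely as an illustration of the recursive-descent style Wirth's book teaches, here is a minimal Python sketch that accepts the `repeat ... until` form from the test program above. Every name in it (`tokenize`, `Parser`, and so on) is invented for this sketch and is not from the author's code:

```python
import re

def tokenize(src):
    # Crude tokenizer: identifiers, integer literals, ":=", and single-char operators.
    return re.findall(r"[A-Za-z]+|\d+|:=|[<>;+\-*]", src)

class Parser:
    def __init__(self, tokens):
        self.toks, self.pos = tokens, 0

    def peek(self):
        return self.toks[self.pos] if self.pos < len(self.toks) else None

    def expect(self, tok):
        assert self.peek() == tok, f"expected {tok!r}, got {self.peek()!r}"
        self.pos += 1

    def take(self):
        tok = self.peek()
        self.pos += 1
        return tok

    def assignment(self):
        # name ":=" value  (a single term only, to keep the sketch short)
        name = self.take()
        self.expect(":=")
        return ("assign", name, self.take())

    def repeat_statement(self):
        # "repeat" assignment {";" assignment} "until" condition
        self.expect("repeat")
        body = [self.assignment()]
        while self.peek() == ";":
            self.expect(";")
            body.append(self.assignment())
        self.expect("until")
        condition = (self.take(), self.take(), self.take())  # e.g. ("a", "<", "1")
        return ("repeat", body, condition)

# Parses the loop from the test program into a small syntax tree.
print(Parser(tokenize("repeat a := 1; b := 3 until a < 1")).repeat_statement())
```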
2021-05-12 07:10:36
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 1, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.2562149167060852, "perplexity": 2227.9196016952033}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2021-21/segments/1620243991685.16/warc/CC-MAIN-20210512070028-20210512100028-00600.warc.gz"}
https://math.paperswithcode.com/paper/some-properties-of-non-linear-fractional
## Some properties of non-linear fractional stochastic heat equations on bounded domains

16 Dec 2016 · Foondun Mohammud, Guerngar Ngartelbaye, Nane Erkan

Consider the following stochastic partial differential equation, \begin{equation*} \partial_t u_t(x)= \mathcal{L}u_t(x)+ \xi\sigma (u_t(x)) \dot F(t,x), \end{equation*} where $\xi$ is a positive parameter and $\sigma$ is a globally Lipschitz continuous function. The stochastic forcing term $\dot F(t,x)$ is white in time but possibly colored in space. The operator $\mathcal{L}$ is a non-local operator. We study the behaviour of the solution with respect to the parameter $\xi$, extending the results in \cite{FoonNual} and \cite{Bin}.

Categories: Probability, Mathematical Physics, Analysis of PDEs
2021-08-03 04:29:46
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.8420336246490479, "perplexity": 2760.7210369418804}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2021-31/segments/1627046154420.77/warc/CC-MAIN-20210803030201-20210803060201-00319.warc.gz"}
https://guitarknights.com/guitar-humidifier-guitar-notes-on-staff.html
A guitar strap is a strip of material with an attachment mechanism on each end, made to hold a guitar via the shoulders at an adjustable length. Guitars have varying accommodations for attaching a strap. The most common are strap buttons, also called strap pins, which are flanged steel posts anchored to the guitar with screws. Two strap buttons come pre-attached to virtually all electric guitars, and many steel-string acoustic guitars. Strap buttons are sometimes replaced with "strap locks", which connect the guitar to the strap more securely.

Guitar chords are dramatically simplified by the class of alternative tunings called regular tunings. In each regular tuning, the musical intervals are the same for each pair of consecutive strings. Regular tunings include major-thirds (M3), all-fourths, augmented-fourths, and all-fifths tunings. For each regular tuning, chord patterns may be diagonally shifted down the fretboard, a property that simplifies beginners' learning of chords and that simplifies advanced players' improvisation.[70][71][72] The diagonal shifting of a C major chord in M3 tuning appears in a diagram.

Getting to grips with how chords are formed gives you a basic introduction to music theory and helps you understand the ways you can alter them to create more interesting sounds. All chords are built from certain notes in scales. The C major scale is the easiest, because it just runs C, D, E, F, G, A and B. These notes are numbered (usually using Roman numerals) in that order, from one (I) to seven (VII).

Through School of Rock's private guitar lessons and group rehearsals, children learn to play the guitar and eventually perform the songs they love in a fun, supportive and comfortable atmosphere. Based on the student's age and skill level, guitar lessons for kids are part of every School of Rock music program, including Rookies, Rock 101, Performance, House Band, and AllStars.

I strongly recommend that beginner guitar players use the Uberchord app (click for free download) for practicing chord progressions and chord changes, and use the real-time feedback to improve your playing skills. Meanwhile, I'll help you expedite the process of grabbing chords confidently on the neck and get you on your way to playing along expertly with your favourite band, or better yet, running a band of your own.

While the free guitar lessons here will help you get started, we always recommend that committed students invest in their guitar skills by starting a Guitareo membership. That's where you'll get a more comprehensive library of step-by-step video lessons so you always know exactly what to learn next, play-along songs so you can apply your skills to real music, and community support so you'll get all of your questions answered. Click here to learn more about Guitareo.

Chords are the backbone of most guitar music. As a beginner, mastering the most common chords allows you to play along to popular songs and even start writing your own. Technically speaking, a chord is a group of three or more notes played in one smooth strumming motion. Chords are classified according to the overall effect they produce. Major and minor chords, which create happy and sad sounds, respectively, are the most basic chords you'll need to play beginner-friendly songs.

As previously stated, a dominant seventh is a four-note chord combining a major chord and a minor seventh. For example, the C7 dominant seventh chord adds B♭ to the C major chord (C, E, G). A small code sketch of this interval arithmetic follows below.
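Since the passage above builds chords by stacking scale notes, a small illustration may help. This snippet is not from the original article, just a sketch of the interval arithmetic it describes (semitone offsets 0-4-7 for a major triad, plus 10 for the minor seventh):

```python
NOTES = ["C", "C#", "D", "D#", "E", "F", "F#", "G", "G#", "A", "A#", "B"]

def chord(root, intervals):
    # Walk the chromatic scale in semitone steps from the root note.
    start = NOTES.index(root)
    return [NOTES[(start + step) % 12] for step in intervals]

print(chord("C", [0, 4, 7]))      # C major triad: C, E, G
print(chord("C", [0, 4, 7, 10]))  # C7 adds the minor seventh: A# (written B♭ above)
```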
The naive chord (C,E,G,B♭) spans six frets from fret 3 to fret 8;[49] such seventh chords "contain some pretty serious stretches in the left hand".[46] An illustration shows a naive C7 chord, which would be extremely difficult to play,[49] besides the open-position C7 chord that is conventional in standard tuning.[49][50] The standard-tuning implementation of a C7 chord is a second-inversion C7 drop 2 chord, in which the second-highest note in a second inversion of the C7 chord is lowered by an octave.[49][51][52] Drop-two chords are used for seventh chords besides the major-minor seventh with dominant function,[53] which are discussed in the section on intermediate chords, below. Drop-two chords are used particularly in jazz guitar.[54] Drop-two second-inversions are examples of openly voiced chords, which are typical of standard tuning and other popular guitar tunings.[55]

With that in mind, the inverse to this rule isn't always true. During the Folk Boom of the 1950s and 60s, there were actually quite a few musicians who put nylon strings on steel-string acoustics. This gave the guitar a very warm and relaxed tone, though should you choose to do this, be aware that you're going to get a lot less volume and a reduced response across the entire frequency range.

Classical guitars, also known as "Spanish" guitars,[11] are typically strung with nylon strings, plucked with the fingers, played in a seated position and are used to play a diversity of musical styles including classical music. The classical guitar's wide, flat neck allows the musician to play scales, arpeggios, and certain chord forms more easily and with less adjacent string interference than on other styles of guitar. Flamenco guitars are very similar in construction, but they are associated with a more percussive tone. In Portugal, the same instrument is often used with steel strings, particularly in its role within fado music. The guitar is called viola, or violão in Brazil, where it is often used with an extra seventh string by choro musicians to provide extra bass support.

The main purpose of the bridge on an acoustic guitar is to transfer the vibration from the strings to the soundboard, which vibrates the air inside of the guitar, thereby amplifying the sound produced by the strings. On all electric, acoustic and original guitars, the bridge holds the strings in place on the body. There are many varied bridge designs. There may be some mechanism for raising or lowering the bridge saddles to adjust the distance between the strings and the fretboard (action), or fine-tuning the intonation of the instrument. Some are spring-loaded and feature a "whammy bar", a removable arm that lets the player modulate the pitch by changing the tension on the strings. The whammy bar is sometimes also called a "tremolo bar". (The effect of rapidly changing pitch is properly called "vibrato". See Tremolo for further discussion of this term.) Some bridges also allow for alternate tunings at the touch of a button.

UPDATE: SEPTEMBER 3rd, 2010 -- Good evening, and hi everybody! I get requests to add tabs once in a while, and for years one of the most common requests has been 'Psychic Hearts', and more recently 'Trees Outside the Academy'. I resisted for years, but boredom and the need to please have a funny way of making things happen, so I'm proud to bring you tabs for the entirety of "Psychic Hearts" and its related tracks, as well as the majority of "Trees Outside the Academy".
I was originally planning on providing bass tabs as well as the Mascis solos, but I decided I wasn't that desperate for accolades. With all this attention on Thurston, I felt bad for Lee, so I've updated my outdated tab for his excellent solo acoustic piece "Here" (located under "Other Tabs") with the proper tuning, which also happens to be the tuning for the equally excellent "Lee #2", so I've updated that one too!

Justin Sandercoe has thought long and hard about how to teach people to play the guitar, and how to do this over the internet. He has come up with a well-designed series of courses that will take you from nowhere to proficiency. I tried to learn how to play years ago, using books, and got nowhere. I've been using Justin's site for just over a year and I feel I've made real progress. What's more, Justin offers his lessons for free - a boon for any young player who has the urge to play, but whose pockets are empty. I've seen and used other sites for learners: none of them offer as clearly marked a road as Justin does.

Are you stuck in a musical rut? New tunings and tricks can help you keep learning guitar in fresh, fun ways. Try one of these great tips from guitar teacher Samuel B. to breathe new life into your guitar playing... One of the first things I tell any new student is that I don't specialize in a formal discipline. If jazz or classical training is your objective, then I'm not your guy. Instead, I specialize primarily in American roots music (that which we tend to casually lump together as "folk").

YellowBrickCinema composes Sleep Music, Study Music and Focus Music, Relaxing Music, Meditation Music (including Tibetan Music and Shamanic Music), Healing Music, Reiki Music, Zen Music, Spa Music and Massage Music, Instrumental Music (including Piano Music, Guitar Music and Flute Music) and Yoga Music. We also produce music videos with Classical Music from composers such as Mozart, Beethoven and Bach.

#### YellowBrickCinema's Sleep Music is the perfect relaxing music to help you go to sleep, and enjoy deep sleep. Our music for sleeping is the best music for stress relief, to reduce insomnia, and encourage dreaming. Our calm music for sleeping uses Delta Waves and soft instrumental music to help you achieve deep relaxation, and fall asleep. Our relaxing sleep music can be used as background music, meditation music, relaxation music, peaceful music and sleep music. Let our soothing music and calming music help you enjoy relaxing deep sleep.

This is why it's important to have someone who holds you accountable for prolonging your learning and practicing, regardless of whether it's a musical instrument, or a sport, or any other after-school matter. Someone to keep you motivated. And if you decide to make your passion a profession, someone to guide you along the way on how to find the right opportunities for paid gigs, or even a full-time career in playing the guitar.

The ratio of the spacing of two consecutive frets is $\sqrt[12]{2}$ (the twelfth root of two). In practice, luthiers determine fret positions using the constant 17.817, an approximation to $1/(1-1/\sqrt[12]{2})$. If the nth fret is a distance x from the bridge, then the distance from the (n+1)th fret to the bridge is $x - (x/17.817)$.[15] Frets are available in several different gauges and can be fitted according to player preference. Among these are "jumbo" frets, which have a much thicker gauge, allowing for use of a slight vibrato technique from pushing the string down harder and softer. A numeric check of the fret-spacing rule appears in the sketch below.
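As a numeric check of the fret-spacing rule just described (an illustration only, not part of the original article; the 648 mm scale length is an assumed example value), a few lines of Python compute the same positions both ways:

```python
scale = 648.0                   # assumed example scale length in mm
k = 1 / (1 - 2 ** (-1 / 12))    # ~17.817, the luthier's constant described above

x = scale                       # distance from the bridge to the nut (fret 0)
for fret in range(1, 6):
    x = x - x / k                        # the x - (x/17.817) recurrence
    exact = scale / 2 ** (fret / 12)     # directly via the twelfth root of two
    print(f"fret {fret}: recurrence = {x:.2f} mm, exact = {exact:.2f} mm")
```

Both columns agree, since $1 - 1/k$ is exactly $2^{-1/12}$.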
"Scalloped" fretboards, where the wood of the fretboard itself is "scooped out" between the frets, allow a dramatic vibrato effect. Fine frets, much flatter, allow a very low string-action, but require that other conditions, such as curvature of the neck, be well-maintained to prevent buzz. The previously discussed I-IV-V chord progressions of major triads is a subsequence of the circle progression, which ascends by perfect fourths and descends by perfect fifths: Perfect fifths and perfect fourths are inverse intervals, because one reaches the same pitch class by either ascending by a perfect fourth (five semitones) or descending by a perfect fifth (seven semitones). For example, the jazz standard Autumn Leaves contains the iv7-VII7-VIM7-iiø7-i circle-of-fifths chord-progression;[80] its sevenths occur in the tertian harmonization in sevenths of the minor scale.[81] Other subsequences of the fifths-circle chord-progression are used in music. In particular, the ii-V-I progression is the most important chord progression in jazz music. Learning to play other people's guitar solos is a great way to begin learning to write your own! Guitar teacher Nils B. shares his tips to learning four classic rock solos so you can develop your technique... An essential part of every musician's development is to imitate those who have already mastered their instrument. After settling on a song, give it a couple of close listens (preferably on headphones or a decent stereo), pick up a good transcription, then learn the rhythm parts, while an Most electric guitar bodies are made of wood and include a plastic pick guard. Boards wide enough to use as a solid body are very expensive due to the worldwide depletion of hardwood stock since the 1970s, so the wood is rarely one solid piece. Most bodies are made from two pieces of wood with some of them including a seam running down the center line of the body. The most common woods used for electric guitar body construction include maple, basswood, ash, poplar, alder, and mahogany. Many bodies consist of good-sounding, but inexpensive woods, like ash, with a "top", or thin layer of another, more attractive wood (such as maple with a natural "flame" pattern) glued to the top of the basic wood. Guitars constructed like this are often called "flame tops". The body is usually carved or routed to accept the other elements, such as the bridge, pickup, neck, and other electronic components. Most electrics have a polyurethane or nitrocellulose lacquer finish. Other alternative materials to wood are used in guitar body construction. Some of these include carbon composites, plastic material, such as polycarbonate, and aluminum alloys. On the other hand, some chords are more difficult to play in a regular tuning than in standard tuning. It can be difficult to play conventional chords especially in augmented-fourths tuning and all-fifths tuning,[20] in which the large spacings require hand stretching. Some chords, which are conventional in folk music, are difficult to play even in all-fourths and major-thirds tunings, which do not require more hand-stretching than standard tuning.[21] The electric guitar, developed for popular music in the United States in the 1930s, usually has a solid, nonresonant body. The sound of its strings is both amplified and manipulated electronically by the performer. American musician and inventor Les Paul developed prototypes for the solidbody electric guitar and popularized the instrument beginning in the 1940s.
2019-07-18 19:20:51
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 1, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.18539884686470032, "perplexity": 3608.3160250380242}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.3, "absolute_threshold": 20, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2019-30/segments/1563195525793.19/warc/CC-MAIN-20190718190635-20190718212635-00452.warc.gz"}
http://assert.pub/arxiv/cs/cs.lg/
### Top 10 Arxiv Papers Today in Machine Learning

##### #1. Fourier Transform Approach to Machine Learning III: Fourier Classification

###### Soheil Mehrabkhani

We propose a Fourier-based learning algorithm for highly nonlinear multiclass classification. The algorithm is based on a smoothing technique to calculate the probability distribution of all classes. To obtain the probability distribution, the density distribution of each class is smoothed by a low-pass filter separately. The advantage of the Fourier representation is capturing the nonlinearities of the data distribution without defining any kernel function. Furthermore, contrary to support vector machines, it makes a probabilistic explanation of the classification possible. Moreover, it can treat overlapped classes as well. Compared to logistic regression, it does not require feature engineering. In general, its computational performance is also very good for large data sets, and in contrast to other algorithms, the typical overfitting problem does not happen at all. The capability of the algorithm is demonstrated for multiclass classification with overlapped classes and very high nonlinearity of the class distributions. more | pdf | html

###### Tweets

arxivml: "Fourier Transform Approach to Machine Learning III: Fourier Classification", Soheil Mehrabkhani https://t.co/PJICyanoKx

Memoirs: Fourier Transform Approach to Machine Learning III: Fourier Classification. https://t.co/CBrD70EAv4

###### Other stats

Sample Sizes: None. Authors: 1. Total Words: 2815. Unique Words: 916.

##### #2. Finding the Sparsest Vectors in a Subspace: Theory, Algorithms, and Applications

###### Qing Qu, Zhihui Zhu, Xiao Li, Manolis C. Tsakiris, John Wright, René Vidal

The problem of finding the sparsest vector (direction) in a low dimensional subspace can be considered as a homogeneous variant of the sparse recovery problem, which finds applications in robust subspace recovery, dictionary learning, sparse blind deconvolution, and many other problems in signal processing and machine learning. However, in contrast to the classical sparse recovery problem, the most natural formulation for finding the sparsest vector in a subspace is usually nonconvex. In this paper, we overview recent advances in global nonconvex optimization theory for solving this problem, ranging from geometric analysis of its optimization landscapes, to efficient optimization algorithms for solving the associated nonconvex optimization problem, to applications in machine intelligence, representation learning, and imaging sciences. Finally, we conclude this review by pointing out several interesting open problems for future research. more | pdf | html

###### Tweets

arxivml: "Finding the Sparsest Vectors in a Subspace: Theory, Algorithms, and Applications", Qing Qu, Zhihui Zhu, Xiao Li, M… https://t.co/NI7zMWjft0

StatsPapers: Finding the Sparsest Vectors in a Subspace: Theory, Algorithms, and Applications.
https://t.co/VyLo5gam4f

###### Other stats

Sample Sizes: None. Authors: 6. Total Words: 0. Unique Words: 0.

##### #3. FixMatch: Simplifying Semi-Supervised Learning with Consistency and Confidence

###### Kihyuk Sohn, David Berthelot, Chun-Liang Li, Zizhao Zhang, Nicholas Carlini, Ekin D. Cubuk, Alex Kurakin, Han Zhang, Colin Raffel

Semi-supervised learning (SSL) provides an effective means of leveraging unlabeled data to improve a model's performance. In this paper, we demonstrate the power of a simple combination of two common SSL methods: consistency regularization and pseudo-labeling. Our algorithm, FixMatch, first generates pseudo-labels using the model's predictions on weakly-augmented unlabeled images. For a given image, the pseudo-label is only retained if the model produces a high-confidence prediction. The model is then trained to predict the pseudo-label when fed a strongly-augmented version of the same image. Despite its simplicity, we show that FixMatch achieves state-of-the-art performance across a variety of standard semi-supervised learning benchmarks, including 94.93% accuracy on CIFAR-10 with 250 labels and 88.61% accuracy with 40 -- just 4 labels per class. Since FixMatch bears many similarities to existing SSL methods that achieve worse performance, we carry out an extensive ablation study to tease apart the experimental factors that are... more | pdf | html

###### Tweets

D_Berthelot_ML: FixMatch: focusing on simplicity for semi-supervised learning and improving state of the art (CIFAR 94.9% with 250 labels, 88.6% with 40). https://t.co/QuP6oN7iCS Collaboration with Kihyuk Sohn, @chunliang_tw @ZizhaoZhang Nicholas Carlini @ekindogus @Han_Zhang_ @colinraffel https://t.co/BmeYvpEHzX

arxivml: "FixMatch: Simplifying Semi-Supervised Learning with Consistency and Confidence", Kihyuk Sohn, David Berthelot, Chu… https://t.co/bXM2Sjlwyq

phalanxXxXxX: FixMatch: Simplifying Semi-Supervised Learning with Consistency and Confidence paper: https://t.co/bJq2a2D0dG code: https://t.co/mRpHGSoIv8 https://t.co/9htNCgAnlN

hereticreader: FixMatch: Simplifying Semi-Supervised Learning with Consistency and Confidence - https://t.co/9D1COPFWPY https://t.co/TszkmFaMKr

arxiv_cs_cv_pr: FixMatch: Simplifying Semi-Supervised Learning with Consistency and Confidence. Kihyuk Sohn, David Berthelot, Chun-Liang Li, Zizhao Zhang, Nicholas Carlini, Ekin D. Cubuk, Alex Kurakin, Han Zhang, and Colin Raffel https://t.co/2yOM5S9jz8

StatsPapers: FixMatch: Simplifying Semi-Supervised Learning with Consistency and Confidence. https://t.co/cfgsxX3XLx

###### Other stats

Sample Sizes: None. Authors: 9. Total Words: 0. Unique Words: 0.

##### #4. Mobility Inference on Long-Tailed Sparse Trajectory

###### Lei Shi

Analyzing the urban trajectory in cities has become an important topic in data mining. How can we model the human mobility consisting of stay and travel from the raw trajectory data?
How can we infer such a mobility model from the single trajectory information? How can we further generalize the mobility inference to accommodate real-world trajectory data that is sparsely sampled over time? In this paper, based on formal and rigid definitions of the stay/travel mobility, we propose a single trajectory inference algorithm that utilizes a generic long-tailed sparsity pattern in the large-scale trajectory data. The algorithm guarantees 100% precision in the stay/travel inference with a provable lower bound on the recall. Furthermore, we introduce an encoder-decoder learning architecture that admits multiple trajectories as inputs. The architecture is optimized for the mobility inference problem through customized embedding and learning mechanisms. Evaluations with three trajectory data sets of 40 million urban users validate... more | pdf | html

###### Tweets

arxivml: "Mobility Inference on Long-Tailed Sparse Trajectory", Lei Shi https://t.co/cFQlHR0F2M

StatsPapers: Mobility Inference on Long-Tailed Sparse Trajectory. https://t.co/iW63jLjHRX

###### Other stats

Sample Sizes: None. Authors: 1. Total Words: 0. Unique Words: 0.

##### #5. Generate High-Resolution Adversarial Samples by Identifying Effective Features

###### Sizhe Chen, Peidong Zhang, Chengjin Sun, Jia Cai, Xiaolin Huang

With the prevalence of deep learning in computer vision, adversarial samples that weaken the neural networks emerge in large numbers, revealing their deep-rooted defects. Most adversarial attacks calculate an imperceptible perturbation in image space to fool the DNNs. In this strategy, the perturbation looks like noise and thus could be mitigated. Attacks in feature space produce semantic perturbation, but they can only deal with low-resolution samples. The reason lies in the great number of coupled features needed to express a high-resolution image. In this paper, we propose Attack by Identifying Effective Features (AIEF), which learns different weights for features to attack. Effective features, those with great weights, influence the victim model much but distort the image little, and thus are more effective for attack. By attacking mostly on these, AIEF produces high-resolution adversarial samples with acceptable distortions. We demonstrate the effectiveness of AIEF by attacking on different tasks with different generative models. more | pdf | html

###### Tweets

arxivml: "Generate High-Resolution Adversarial Samples by Identifying Effective Features", Sizhe Chen, Peidong Zhang, Chengj… https://t.co/GGFDXbG93S

arxiv_cs_cv_pr: Generate High-Resolution Adversarial Samples by Identifying Effective Features. Sizhe Chen, Peidong Zhang, Chengjin Sun, Jia Cai, and Xiaolin Huang https://t.co/ya7ejWg32g

StatsPapers: Generate High-Resolution Adversarial Samples by Identifying Effective Features. https://t.co/UtaTvpEQ0V

###### Other stats

Sample Sizes: None. Authors: 5. Total Words: 5361. Unique Words: 1751.

##### #6. batchboost: regularization for stabilizing training with resistance to underfitting & overfitting

###### Maciej A. Czyzewski

Overfitting & underfitting and stable training are important challenges in machine learning. Current approaches for these issues are mixup, SamplePairing and BC learning. In our work, we state the hypothesis that mixing many images together can be more effective than mixing just two. The batchboost pipeline has three stages: (a) pairing: a method of selecting two samples; (b) mixing: how to create a new sample from two samples;
(c) feeding: combining mixed samples with new ones from the dataset into a batch (with ratio $\gamma$). Note that a sample that appears in our batch propagates through subsequent iterations with less and less importance until the end of training. The pairing stage calculates the error per sample, sorts the samples and pairs them with the strategy: hardest with easiest one; then the mixing stage merges two samples using mixup, $\lambda x_1 + (1-\lambda)x_2$ (see the sketch after this listing). Finally, the feeding stage combines new samples with mixed ones at a ratio of 1:1. Batchboost has 0.5-3% better accuracy than the current state-of-the-art mixup regularization on CIFAR-10 & Fashion-MNIST. Our method... more | pdf | html

###### Tweets

StatsPapers: batchboost: regularization for stabilizing training with resistance to underfitting & overfitting. https://t.co/hq6sEOuouo

arxivml: "batchboost: regularization for stabilizing training with resistance to underfitting & overfitting", Maciej A. Czyz… https://t.co/Zns2ZLjPxA

arxiv_cs_cv_pr: batchboost: regularization for stabilizing training with resistance to underfitting & overfitting. Maciej A. Czyzewski https://t.co/tAdif09ufj

###### Other stats

Sample Sizes: None. Authors: 1. Total Words: 0. Unique Words: 0.

##### #7. Simple and Effective Graph Autoencoders with One-Hop Linear Models

###### Guillaume Salha, Romain Hennequin, Michalis Vazirgiannis

Graph autoencoders (AE) and variational autoencoders (VAE) recently emerged as powerful node embedding methods, with promising performance on challenging tasks such as link prediction and node clustering. Graph AE, VAE and most of their extensions rely on graph convolutional network (GCN) encoders to learn vector space representations of nodes. In this paper, we propose to replace the GCN encoder by a significantly simpler linear model w.r.t. the direct neighborhood (one-hop) adjacency matrix of the graph. For the two aforementioned tasks, we show that this approach consistently reaches competitive performance w.r.t. GCN-based models for numerous real-world graphs, including all benchmark datasets commonly used to evaluate graph AE and VAE. We question the relevance of repeatedly using these datasets to compare complex graph AE and VAE. We also emphasize the effectiveness of the proposed encoding scheme, which appears to be a simpler and faster alternative to GCN encoders for many real-world applications. more | pdf | html

###### Tweets

BrundageBot: Simple and Effective Graph Autoencoders with One-Hop Linear Models. Guillaume Salha, Romain Hennequin, and Michalis Vazirgiannis https://t.co/gwZdV2H7OD

arxivml: "Simple and Effective Graph Autoencoders with One-Hop Linear Models", Guillaume Salha, Romain Hennequin, Michalis V… https://t.co/KwMKt2nKtD

StatsPapers: Simple and Effective Graph Autoencoders with One-Hop Linear Models. https://t.co/Flv14FtrEg

###### Other stats

Sample Sizes: None. Authors: 3. Total Words: 0. Unique Words: 0.

##### #8. Cut-Based Graph Learning Networks to Discover Compositional Structure of Sequential Video Data

###### Kyoung-Woon On, Eun-Sol Kim, Yu-Jung Heo, Byoung-Tak Zhang

Conventional sequential learning methods such as Recurrent Neural Networks (RNNs) focus on interactions between consecutive inputs, i.e. first-order Markovian dependency. However, most sequential data, as seen with videos, have complex dependency structures that imply variable-length semantic flows and their compositions, and those are hard to capture with conventional methods.
Here, we propose Cut-Based Graph Learning Networks (CB-GLNs) for learning video data by discovering these complex structures of the video. The CB-GLNs represent video data as a graph, with nodes and edges corresponding to frames of the video and their dependencies respectively. The CB-GLNs find compositional dependencies of the data in multilevel graph forms via a parameterized kernel with graph-cut and a message passing framework. We evaluate the proposed method on two different tasks for video understanding: video theme classification (YouTube-8M dataset) and video question answering (TVQA dataset). The experimental results show that... more | pdf | html

###### Tweets

arxivml: "Cut-Based Graph Learning Networks to Discover Compositional Structure of Sequential Video Data", Kyoung-Woon On, E… https://t.co/fE9biROPCc

StatsPapers: Cut-Based Graph Learning Networks to Discover Compositional Structure of Sequential Video Data. https://t.co/MOiPKOokKC

###### Other stats

Sample Sizes: None. Authors: 4. Total Words: 0. Unique Words: 0.

##### #9. Understanding the Limitations of Network Online Learning

Studies of networked phenomena, such as interactions in online social media, often rely on incomplete data, either because these phenomena are partially observed, or because the data is too large or expensive to acquire all at once. Analysis of incomplete data leads to skewed or misleading results. In this paper, we investigate limitations of learning to complete partially observed networks via node querying. Concretely, we study the following problem: given (i) a partially observed network, (ii) the ability to query nodes for their connections (e.g., by accessing an API), and (iii) a budget on the number of such queries, sequentially learn which nodes to query in order to maximally increase observability. We call this querying process Network Online Learning and present a family of algorithms called NOL*. These algorithms learn to choose which partially observed node to query next based on a parameterized model that is trained online through a process of exploration and exploitation. Extensive experiments on both synthetic and... more | pdf | html

###### Tweets

arxivml: "Understanding the Limitations of Network Online Learning", Timothy LaRock, Timothy Sakharov, Sahely Bhadra, Tina E… https://t.co/0GGEqjEsUC

StatsPapers: Understanding the Limitations of Network Online Learning. https://t.co/tZ3uypWayV

###### Other stats

Sample Sizes: None. Authors: 4. Total Words: 0. Unique Words: 0.

##### #10. Data-Driven Permanent Magnet Temperature Estimation in Synchronous Motors with Supervised Machine Learning

###### Wilhelm Kirchgässner, Oliver Wallscheid, Joachim Böcker

Monitoring the magnet temperature in permanent magnet synchronous motors (PMSMs) for automotive applications has been a challenging task for several decades now, as signal injection or sensor-based methods still prove unfeasible in a commercial context. Overheating results in severe motor deterioration and is thus of high concern for the machine's control strategy and its design. Lack of precise temperature estimations leads to lower device utilization and higher material cost. In this work, several machine learning (ML) models are empirically evaluated on their estimation accuracy for the task of predicting latent high-dynamic magnet temperature profiles.
The range of selected algorithms covers approaches as diverse as possible, with ordinary and weighted least squares, support vector regression, $k$-nearest neighbors, randomized trees and neural networks. Having test bench data available, it is shown that ML approaches relying merely on collected data meet the estimation performance of classical thermal models built on thermodynamic... more | pdf | html

###### Tweets

arxivml: "Data-Driven Permanent Magnet Temperature Estimation in Synchronous Motors with Supervised Machine Learning", Wilhe… https://t.co/IaYmkBLmXv

arxiv_cs_LG: Data-Driven Permanent Magnet Temperature Estimation in Synchronous Motors with Supervised Machine Learning. Wilhelm Kirchgässner, Oliver Wallscheid, and Joachim Böcker https://t.co/CERh88LPOj

Memoirs: Data-Driven Permanent Magnet Temperature Estimation in Synchronous Motors with Supervised Machine Learning. https://t.co/i7m9wshKUy

###### Other stats

Sample Sizes: None. Authors: 3. Total Words: 6429. Unique Words: 2498.

Assert is a website where the best academic papers on arXiv (computer science, math, physics), bioRxiv (biology), BITSS (reproducibility), EarthArXiv (earth science), engrXiv (engineering), LawArXiv (law), PsyArXiv (psychology), SocArXiv (social science), and SportRxiv (sport research) bubble to the top each day. Papers are scored (in real-time) based on how verifiable they are (as determined by their Github repos) and how interesting they are (based on Twitter). To see top papers, follow us on twitter @assertpub_ (arXiv), @assert_pub (bioRxiv), and @assertpub_dev (everything else). To see beautiful figures extracted from papers, follow us on Instagram. Tracking 256,574 papers.
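One formula in the listing above is concrete enough to sketch: batchboost's mixing stage (#6) uses the standard mixup blend $\lambda x_1 + (1-\lambda)x_2$. The snippet below is only an illustration of that one step, not the paper's released code; the function name and its arguments are invented here:

```python
import numpy as np

def mixup_pair(x1, x2, alpha=1.0, rng=None):
    """Blend two samples as lam * x1 + (1 - lam) * x2,
    with lam drawn from a Beta(alpha, alpha) distribution."""
    rng = rng or np.random.default_rng()
    lam = rng.beta(alpha, alpha)
    return lam * x1 + (1 - lam) * x2, lam

# Toy usage: mixing an all-ones "image" with an all-zeros one.
mixed, lam = mixup_pair(np.ones(4), np.zeros(4))
print(lam, mixed)  # every entry of `mixed` equals lam
```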
2020-01-22 19:42:42
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.5337926745414734, "perplexity": 6377.978997609008}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2020-05/segments/1579250607407.48/warc/CC-MAIN-20200122191620-20200122220620-00401.warc.gz"}
https://physics.stackexchange.com/questions/459544/proving-that-gravitational-potential-is-work-done-by-the-object-against-gravity
# Proving that gravitational potential is work done by the object against gravity while KE increases and PE reduces

There's a person in my class who thinks that the formula for gravitational potential (-GM/r) represents the work done by gravity to move an object from infinity to any point in the gravitational field. I know it's actually the work done by the object as it goes towards the earth. That's consistent with the formula W=Fd (gravity would be doing positive work, hence negative work by the object itself) and the fact that the object would be gaining kinetic energy as it moves towards the earth, so it's doing negative work (since work represents the transfer of energy). None of these explanations seems to fly for the dude, and one problem he has is this: if the object is gaining kinetic energy as it moves towards the earth, it is also losing gravitational potential energy (since potential reduces as the object goes towards the earth). So, the net change in energy would be 0. Hence, the formula -GM/r can only represent the work done by gravity, not the object itself. Otherwise, gravity isn't doing any work in the first place. Ergo, gravity is doing negative work as it is attracting an object from infinity to any point r.

So you are both wrong, but the other person is off by a sign while you seem to have some larger misconceptions.

> I know it's actually the work done by the object as it goes towards the earth. That's consistent with the formula W=Fd (gravity would be doing positive work, hence negative work by the object itself)

This doesn't make sense. As the object moves in the gravitational field, it isn't doing any work at all. You can only talk about the work done by forces, and the only force present here is gravity. The confusion is understandable. We usually talk fast and loose like this. For example, if I push on a box, you would probably hear discussion about "the work I do on the box". But really the more precise language is to talk about the work the force I apply does.

Another misconception you seem to have is that there needs to be no net work done. This is only the case if the ball is moving at a constant speed$$^*$$. But since it is "falling" in this field, this is not the case.

Really, the potential energy is the negative of the work done by the conservative force. Technically the more general statement is $$W=-\Delta U$$ but since you are asking about starting at infinity, where $$U=0$$ in this case, we can say for this specific process $$W=-U$$

But this isn't needed at all to think about the sign of the work done by gravity. The force always acts in the direction of the displacement of the ball, so the work done by gravity is always positive.

> So in this case the net work isn't zero? Isn't work the transfer of energy? If that's the case, shouldn't net work be zero all the time (since energy has to come from somewhere)? Isn't the net energy change for the object falling into the gravitational field positive?

Saying work is the transfer of energy is kind of misleading. Really what you should be thinking is that the work done on an object changes its kinetic energy. You might be familiar with this: $$W_{net}=\Delta K$$

So when gravity does work on our object as it falls, it gains kinetic energy. We can stop right here and never even think about potential energy. If we do this, we are treating gravity as an external force. All we see is our object and a force acting on it. This force changes its kinetic energy.
However, there are special forces called conservative forces which we can associate a scalar potential energy with. This means (as I somewhat explained above) that instead of directly determining how much work the conservative force does, we can just look at the change in this potential energy. If we go this route, we don't worry about the work done by gravity anymore. We instead look at the total mechanical energy ($$E=K+U$$) and we see that it does not change during this process. $$\Delta E=0$$ does not mean there is no net work being done. $$\Delta K=0$$ means there is no net work being done. What $$\Delta E=0$$ means is that (assuming we have taken into account all conservative forces) there are no other external forces acting on our ball.

Long story short: gravity (the only force acting on our object) does positive work on our ball, which increases its kinetic energy ($$W=\Delta K>0$$). If we decide to work with potential energy as well, we can say that $$\Delta E = \Delta K+\Delta U=0$$, or $$\Delta K = -\Delta U = W_{grav}$$

This is the usual statement of "conservation of energy" (without external/non-conservative forces). Notice how work can still be done by conservative forces and $$\Delta E$$ is still going to be $$0$$.

$$^*$$This might also be where your confusion lies. Typically you hear people say the potential energy is "the work done to move a mass from infinity to that point." But this statement doesn't state its assumptions. What it is considering is if I were to also be applying a force to the ball equal to the gravitational force but in the opposite direction as it moves to the point of interest from infinity. Therefore the net work done on the ball is in fact $$0$$. Therefore I can say the work my force does is the negative of the work done by gravity, i.e. $$W_{me}=-W_{grav}=-(-\Delta U)=\Delta U$$ And this is the work that particular statement is referring to.

• So in this case the net work isn't zero? Isn't work the transfer of energy? If that's the case, shouldn't net work be zero all the time (since energy has to come from somewhere)? Isn't the net energy change for the object falling into the gravitational field positive? – Main Man Andy Feb 8 at 13:33
• @MainManAndy The only force acting on the ball is gravity, and it is doing positive work. When I have time I can add more detail to the answer. But essentially you are getting confused trying to consider the work done by gravity and potential energy at the same time, when really they are two sides of the same thing depending on what you consider to be part of your system. – Aaron Stevens Feb 8 at 13:50
• 🤔I see. Eager to see your extra details if you add 'em. – Main Man Andy Feb 8 at 14:01
• @MainManAndy I have added more to address your concerns about what net work really means in terms of the energies we are talking about (which was mentioned in your main question as well). Please let me know if something still doesn't make sense. This is something I had to struggle through as well when learning all of this. These are great questions, and wrestling through them is the right path to take to understanding these things at a deeper level. – Aaron Stevens Feb 8 at 15:56

My answer is in two parts. The first part tries to explain, in terms of energy and work done, what is going on when a mass falls towards the Earth, and the second part is a commentary on the statements made by the OP in the question. There are two things that you should be clear about.
In such a discussion you must define the system under consideration. In this case, is it the object, or the object and the Earth? This is important because you need to be able to identify internal forces (internal to the system and coming in Newton's third law pairs) and external forces.

There is a world of difference between the gravitational potential at a point and the potential energy of a system of objects. The gravitational potential at a point is the work done by an external force in taking unit mass from a position of zero potential to the point. The gravitational potential energy of a system of objects is the work done by external forces in taking the objects from old positions where the potential energy is zero to their new positions.

Consider an object of mass $$m$$ as the system, situated in the Earth's gravitational field, which for the present we assume to be uniform with strength $$g$$. There is only one external force on the mass, which is the gravitational attraction of the Earth, of magnitude $$mg$$ and directed downwards. If the mass starts from rest and falls a distance $$h$$ to reach the surface of the Earth, then the work done by the gravitational field (external force) on the mass is $$+mgh$$. It is a positive quantity because the external force and the direction of travel of the mass are both in the same direction. The work-energy theorem tells you that this external work done on the mass results in a change (increase) in the kinetic energy of the mass. Note that potential energy and potential have not been mentioned.

If the gravitational field is not constant, then the initial magnitude of the force on the mass $$m$$ is $$\dfrac{GMm}{(R+h)^2}$$ where $$M$$ is the mass of the Earth and $$R$$ its radius. The final magnitude of the force is $$\dfrac{GMm}{R^2}$$, so to evaluate the work done one must do an integration. The work done by the external force on the mass is $$GMm\left[ \dfrac 1 R -\dfrac {1}{R+h} \right] = \dfrac {GMm}{R}\left[ 1 - \left( 1+\frac hR \right )^{-1} \right] \approx m\, g\, h$$ if $$R\gg h$$ and the gravitational field strength $$g =\dfrac{GM}{R^2}$$.

Now this could have been done by using the idea that the mass $$m$$ finds itself in a gravitational field due to the Earth, and at a distance $$r$$ from the centre of the Earth the potential energy is $$- \dfrac{GMm}{r}$$, having taken the zero of potential to be when $$r$$ is infinity. The potential energy of the mass $$m$$ changes from $$- \dfrac{GMm}{R+h}$$ to $$- \dfrac{GMm}{R}$$, and so the decrease in potential energy of the mass is $$GMm\left[ \dfrac 1 R -\dfrac {1}{R+h} \right]$$, the same value as the work done by the "external" force acting on the mass $$m$$. However, we now have a system of two masses, the Earth and the mass, and the gravitational forces of attraction (there are two: the force on the mass due to the Earth and the force on the Earth due to the mass) are internal forces; but because $$M\gg m$$ the Earth does not move, and only the work done by the force on the mass due to the Earth is considered. It is the mass and Earth system that has gravitational potential energy. The statement "the work done by the internal gravitational force on the mass due to the Earth" can be put another way: "the mass (and Earth) system loses gravitational potential energy".

Note that I have added words and symbols to some of the statements as indicated by [square brackets].
If the object is gaining kinetic energy as it moves towards the earth, it is also losing gravitational potential [energy] (since potential [due to the Earth] reduces as the object goes towards the earth). This statement is correct.

So, the net change in energy would be 0. This statement is correct if by energy it is meant the sum of the kinetic energy and the gravitational potential energy.

Hence, the formula -GM[m]/r can only represent the work done by gravity, not the object itself. This statement is "correct" if by gravity it is meant "the force on the mass due to the gravitational attraction of the Earth", as the gravitational field of the mass cannot attract the mass itself; but the negative sign should not be there, as the gravitational attractive force on the mass due to the Earth is in the same direction as the movement of the mass.

Otherwise, gravity isn't doing any work in the first place. Ergo, gravity is doing negative work as it is attracting an object from infinity to any point r. This statement is not correct, as the gravitational force on the mass due to the Earth is downwards and the mass is moving downwards, so the work done by this gravitational force must be positive. One problem with what is written is the interpretation of the word gravity. Is it a force or a field?

There's a person in my class who thinks that the formula for gravitational potential (-GM/r) represents the work done by gravity to move an object from infinity to any point in the gravitational field. -GM/r is the gravitational potential at a distance $$r$$ from the centre of the Earth and is the work done by an external force in moving unit mass from infinity (zero of potential) to a distance $$r$$ from the centre of the Earth. The work done by the gravitational attraction on the unit mass due to the Earth (gravity?) is positive, as the force and the movement of the mass are in the same direction.

I know it's actually the work done by the object as it goes towards the earth. That's consistent with the formula W=Fd (gravity would be doing positive work, hence negative work by the object itself) and the fact that the object would be gaining kinetic energy as it moves towards the earth, so it's doing negative work (since work represents the transfer of energy). I found this statement very difficult to unravel.

I know it's actually the work done by the object as it goes towards the earth. It would have been better to add a word to make it . . . . actually the [negative] work done by the object . . . .

That's consistent with the formula W=Fd (gravity would be doing positive work, hence negative work by the object itself) This concept of negative work done by an object is not really needed in this case.

and the fact that the object would be gaining kinetic energy as it moves towards the earth, so it's doing negative work (since work represents the transfer of energy). Here this idea of negative work done by an object is continued, to explain the increase in the kinetic energy of the object.

You can see the gravitational potential as the work done by the field to move a unit point mass from infinity to the point r. The energy inside the gravitational field (because it is a conservative field) is conserved, so that E=U+K is a constant: $$E=-GMm/r+ \frac{1}{2}mv^2$$ where U(r) is the gravitational potential energy: $$U(r) =\frac{-GMm}{r}$$ For $$r \to \infty$$, $$U(r) \to 0$$, so that as $$m$$ approaches $$M$$, its kinetic energy increases while its potential energy decreases.
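As a concrete check of $$\Delta K = -\Delta U = W_{grav}$$, here is a minimal numerical sketch (not part of the original answers; the mass, field strength, and drop height are made-up illustrative values) that drops a ball from rest in a uniform field and verifies that the work done by gravity equals the kinetic energy gained and the potential energy lost:

#include <cmath>
#include <cstdio>

int main() {
    const double g = 9.81;   // field strength (m/s^2), illustrative value
    const double m = 2.0;    // mass of the ball (kg), illustrative value
    const double h = 5.0;    // drop height (m), illustrative value

    // Work done by gravity (force and displacement both point down).
    double W_grav = m * g * h;

    // Kinetic energy gained, using v^2 = 2gh for a drop from rest.
    double dK = 0.5 * m * (2.0 * g * h);

    // Change in potential energy, U = mgh measured from the ground.
    double dU = -m * g * h;

    std::printf("W_grav = %.3f J, dK = %.3f J, dU = %.3f J\n", W_grav, dK, dU);
    std::printf("dK + dU = %.3f (should be 0)\n", dK + dU);
}

Running it prints equal magnitudes for all three quantities and a zero sum for $$\Delta K + \Delta U$$, which is exactly the $$\Delta E = 0$$ statement above.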
2019-08-19 13:53:40
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 48, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.6809596419334412, "perplexity": 150.91235616802743}, "config": {"markdown_headings": true, "markdown_code": false, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2019-35/segments/1566027314752.21/warc/CC-MAIN-20190819134354-20190819160354-00165.warc.gz"}
http://mathhelpforum.com/math-topics/217042-i-need-logic-math-help-print.html
# I need logic/math help.

• Apr 8th 2013, 06:00 PM indieExe
I need logic/math help.
RC = 9 // A number
NB = 2 // A number base
NBR = RC / NB = 9/2 = 4.5 // Amount NB fits in RC
In this case the 0.5 in 4.5 represents the first fragment needed in order for 0.5 * x to equal 1. If 0.5 * x = 1, then that would represent that the value has reached the number base. If 0.5 * x = 0.5, then that would indicate that (0.5 * x) represents NB/1, and as x increases it would go on: NB/2, NB/3... until NB/y = 1. Do you understand what I am trying to say? Because I need help to solve this for any number base, and I do not carry much math knowledge, so I would appreciate it if you could help me. The method needs to work for any number base. If you don't understand what I am trying to say, please leave a comment so that I can try to explain it further.
• Apr 8th 2013, 07:16 PM indieExe
Re: I need logic/math help.
Wait, I solved it... I just do 1/NB to get the first fragment...
• Apr 8th 2013, 07:20 PM
Re: I need logic/math help.
No, I don't understand what you are asking. What is $x$?
• Apr 9th 2013, 08:30 AM indieExe
Re: I need logic/math help.
I needed 1/NB to find this: (NBR - int NBR) / (1/NB) = the number of 1/NB in the fragment.
• Apr 9th 2013, 09:53 AM
Re: I need logic/math help.
To avoid confusing myself, I changed the variables as follows: a = RC, b = NB. Let $a, b \in \mathbb{Z}_{\geq 0}$. Solve for $x$ when $\left[ \frac{a}{b} - \text{int}\left(\frac{a}{b}\right) \right] x = 1$: $x = \frac{1}{ \left[ \frac{a}{b} - \text{int}\left(\frac{a}{b}\right) \right] }$. Now let's examine the expression $\left[ \frac{a}{b} - \text{int}\left(\frac{a}{b}\right) \right]$. There exists a unique $q \in \mathbb{Z}_{\geq 0}$ and $r \in [0,b)$ such that $\frac{a}{b} = q + \frac{r}{b}$. It follows that $\text{int}\left( \frac{a}{b} \right) = q$. Therefore $\left[ \frac{a}{b} - \text{int}\left(\frac{a}{b}\right) \right] = \left(q + \frac{r}{b}\right) - (q) = \frac{r}{b}$. So finally $x = \frac{1}{ \left( \frac{r}{b} \right) } = \frac{b}{r}$, where $r$ is the remainder of $a \div b$. For example, if a = 5, b = 3: $5 \div 3 = 1 + \frac{2}{3}$. You need to multiply $\frac{2}{3}$ (the fractional part) by $\frac{3}{2}$ (b = 3, r = 2) to get $1$, i.e. $\frac{2}{3} \times \frac{3}{2} = 1$. Note: in the case of base 2, the remainder of non-reducible fractions will always be 1; that might be why you got 1/b. [What was the point of all this BTW?]
• Apr 9th 2013, 11:36 AM indieExe
Re: I need logic/math help.
I didn't quite understand this: $q \in \mathbb{Z}_{\geq 0}$ and $r \in [0,b)$. I'm using it like this: Say NB = 3, RC = 5, NBR = 5/3 = 1.66666, 1/NB = 1/3 = 0.33333. (NBR - int NBR) / (1/NB) = 2 // Each 1/NB represents 1. This method appears to work for all number bases: NBR = 422 / 382 = RC / NB = 1.1047..., 1/382 = 1/NB = 0.002617..., (NBR - int NBR) / (1/NB) = 40.007... = 40; 422 - 382 = RC - NB = 40. I am 40 over, which I just realized I could also compute directly in the case RC > NB. I am trying to write an algorithm to generate fixed ASCII combinations that follow the rule of this algorithm, such that RC 1 would always equal the same combination of ASCII letters. RC = Requested Combination. R1 = ASCII value start. R2 = ASCII value end. NB = (R2 - R1)+1. NBR = The amount NB repeats in order for it to reach RC. I have this ARRAY of characters.
NBR = 4.5, NB = 2, RC = 9, R1 = 1, R2 = 2;
NBR 1 represents NB*1, ARRAY[1]++, ARRAY[0] = R1;
NBR 2 represents NB*2, ARRAY[1]++, ARRAY[0] = R1;
NBR 3 represents NB*3, ARRAY[2]++, ARRAY[1] = R1;
NBR 4 represents NB*4, ARRAY[1]++, ARRAY[0] = R1;
NBR - int NBR > 0 and NBR - int NBR < 1;
ARRAY[0] = ((NBR - int NBR) / (1/NB)) = 1
ARRAY[] = 121, the 9th combination in RC when NB = 2
The algorithm follows the same rule that our way of counting numbers does. X = Position in array. (This method is very slow because it goes through each combination that is required to get RC.)
START
If ARRAY[x] < NB then ARRAY[x] = itself + 1
If ARRAY[x] == NB then while ARRAY[x=x+1] == NB; when ARRAY[x] != NB then ARRAY[x] = itself + 1, and while x=x-1 >= 0 then ARRAY[x] = R1
REPEAT
1 2 11 12 21 22 111 112 121 = 9 different combinations.
I managed to create this algorithm some time ago, but it was too slow. It would take the rest of my life to get to a combination, so therefore I must do it purely mathematically; then I can go to any combination of letters at lightning speed, because arithmetic is faster than constantly accessing variables, when you might have to access the same variables maybe a billion times, depending on what combination you request. With this new algorithm: RC, NBR, NB, MP, 1/NB, R1, R2. RC = User Defined, NB = Number Base = (R2 - R1) + 1, NBR = Number Base Repeated = RC / NB, MP = Max Positions in ARRAY (I have yet to find this mathematically). 1/NB can be used to find the leftovers from NBR and directly put the value in the ARRAY. I must do it purely mathematically, which I am close to now. The only things remaining are applying the values to ARRAY and figuring out MP mathematically. The thing that confuses my mind is NBR, and how I apply the values I want based on it.
NBR == 1 would mean ARRAY[1] = R1 AND ARRAY[0] = R1
NBR == 2 would mean ARRAY[1] = R1+1 AND ARRAY[0] = R1
So on until NBR = NB, THEN we must create a new position: ARRAY[2] = R1 AND ARRAY[1] = R1 AND ARRAY[0] = R1
The first 1/NB in NBR creates position 0
The first 1 in NBR creates position 1
The first NB'th in NBR creates position 2
The first NB^2'th in NBR creates position 3
And it continues like that until it has reached MP and ARRAY[MP] = NB AND all preceding characters in ARRAY[] = NB, so until the total of ARRAY[] = NB*MP.
What I aim for is: while (MP >= 0) { ARRAY[MP] = Expression; MP -= 1; } Maybe you could solve the expression required?
• Apr 9th 2013, 07:25 PM
Re: I need logic/math help.
I'll finish reading your response when I get home, but for now: $a,b \in \mathbb{Z}_{\geq 0}$ means $a$ and $b$ are non-negative integers (i.e. integers greater than or equal to zero). $\mathbb{Z}$ is the set of integers. $x \in \mathbb{Z}$ means $x$ is contained in the set of integers (i.e. $x$ is an integer, not, for example, any old number). $[0,b)$ means the closed-open (sometimes known as a "half-closed") interval from 0 (inclusive) to b (exclusive). $x \in [0,b)$ is the same as saying $0 \leq x < b$.
• Apr 9th 2013, 10:32 PM
Re: I need logic/math help.
I'm not exactly sure what you are trying to do. Are you looking for a map (a function) that takes natural numbers to unique ASCII sequences? (Sort of like an inverse hashing function?) array[0] = "1" array[1] = "2" array[2] = "11" ...? I don't quite understand what you are asking for. By the way, I do program. What language is this in? And could you supply the part of the old code that did what you want? (Maybe then I could follow your logic.)
• Apr 10th 2013, 12:47 AM indieExe
Re: I need logic/math help.
The old program is about half a year or so old, and is written in C++. It is super slow, and not very well written. Do you know how to compile C++? If so I can send you the _U8CSTRING.cpp file so that you can compile it and test the program to see visually what it does. If not, I can only compile it for Windows 64-bit, GNU/Linux 32-bit, or GNU/Linux 64-bit. The program doesn't include mathematics at all, as you can see, only some addition and subtraction. And it will access variables ((R2-R1)+1)^P times... And of course printing all the combinations also consumes processing power. What I was trying to explain in my later posts was doing this without accessing variables ((R2-R1)+1)^P times and without printing every combination, but instead going to a specific combination with almost pure mathematics and obtaining only that combination. I don't know if I have made it obvious, but the algorithm can generate any possible combination of letters, meaning that everything that can be represented with characters exists within that algorithm. In my earlier post I should probably have called ARRAY[] CHARACTER[]; ARRAY[] is an array of characters.
Code:
#include <iostream> //standard Input/output
#include "_U8CSTRING.cpp" // A string CLASS. _pushnput increases the string at a given position; cam is its length.
using namespace std;
const unsigned long long R1 = 48, //First ASCII value
                        R2 = 57, //Last ASCII value
                        P = 2; //Max Positions in string
_U8CS _C(P); //String
//Are all characters in _C equal to R2?
bool _C_Done_Question(unsigned long long _Beg, unsigned long long _End){
 while(_Beg < _End)
 switch(_C.str[_Beg++]){
  case R2: break;
  default: return false;
 };
 return true;
}
//This is the algorithm.
void generate(){
 for(register unsigned long long x = 0;x<P;++x){_C._pushnput(0,R1);}
 for (unsigned long long x = (_C.cam-1);_C_Done_Question(0,_C.cam)==false;){
  for(;_C.str[x]-1!=R2;++_C.str[x]){cout<<_C.str<<endl;}
  --_C.str[x];
  if (_C.str[x-1]==R2){
    while(_C.str[x]==R2&&x>0)--x;
    if(_C.str[x]<R2){
      ++_C.str[x];
      for(++x;x<(_C.cam);++x){_C.str[x]=R1;}
      --x;
    }
    cout<<_C.str<<endl;
  }
  else if (_C.str[x-1]<R2){
    ++_C.str[x-1];
    for(;x<(_C.cam);++x){_C.str[x]=R1;}
    --x;
    cout<<_C.str<<endl;
    ++_C.str[x];
    cout<<_C.str<<endl;
  }
 }
}
int main(){ generate(); return 0; }
• Apr 10th 2013, 07:42 AM
Re: I need logic/math help.
Yes, I know how to compile C++ code. (Nerd)
• Apr 10th 2013, 08:11 AM
Re: I need logic/math help.
I'm looking through this file (changed the indentation and spacing so I could read it), but I'm not sure what some of these _U8CS functions do: e.g. _U8CS::_pushnput or _U8CS::cam. _U8CS::str I assume holds a C-string representation of your _U8CS (by the way, what's up with these identifier choices? jk, but not rly). Or it might even just be the data your _U8CS object is working with. I'll keep looking through the code and see if I can't better understand what you're trying to do. [In short, could I see the header file?
"_U8CSTRING.cpp"] Here's my edited version of the code, just in case anyone else was having trouble reading it: Code: /*  * Original Author: indieExe on mathhelpforum.com  * Downloaded from: http://mathhelpforum.com/math-topics/217042-i-need-logic-math-help.html#post781213  * Downloaded on: April 4, 2013  *  * Edited by: Christopher D'Angelo, mathhead200.com,  *            Mathhead200 on mathhelpforum.com  */   #include <iostream> //standard Input/output #include "_U8CSTRING.cpp" // A string CLASS. _pushnput increases the string at given position, cam is it's lenght. using namespace std; const unsigned long long R1 = 48, //First ASCII value                         R2 = 57, //Last ASCII value                         P = 2; //Max Positions in string _U8CS _C(P); //String //Is all characters in _C equal R2? bool _C_Done_Question(unsigned long long _Beg, unsigned long long _End) {         while( _Beg < _End )                 switch( _C.str[_Beg++] ) {                 case R2:                         break;                 default:                         return false;                 };         return true; } //This is the algorithm. void generate() {         for( register unsigned long long x = 0; x < P; ++x ) {                 _C._pushnput(0, R1);         }         for( unsigned long long x = (_C.cam - 1); _C_Done_Question(0, _C.cam) == false; ) {                 for( ; _C.str[x] - 1 != R2; ++_C.str[x] ) {                         cout << _C.str << endl;                 }                 --_C.str[x];                 if ( _C.str[x - 1] == R2) {                         while( _C.str[x] == R2 && x > 0 )                                 --x;                         if( _C.str[x] < R2 ) {                                 ++_C.str[x];                                 for( ++x; x < (_C.cam); ++x ){                                         _C.str[x] = R1;                                 }                                 --x;                         }                         cout << _C.str << endl;                 } else if ( _C.str[x - 1] < R2 ) {                         ++_C.str[x - 1];                         for( ; x < (_C.cam); ++x ) {                                 _C.str[x] = R1;                         }                         --x;                         cout << _C.str << endl;                         ++_C.str[x];                         cout << _C.str << endl;                 }         }         } int main() {         generate();         return 0; } • Apr 10th 2013, 10:36 AM indieExe Re: I need logic/math help. I use alot of abbreviations.. _U8CS = unsigned 8bit character string. tam = total allocated amount, cam = current 'pseudo' amount, eam = extra bytes to allocate whenever an allocation is needed, str = a pointer to unsigned char type which is used as a string, _pushnput(), literally does what the identifier says to the values in the string. Code: //_U8CSTRING.cpp //This is an incomplete string class. #pragma once class _U8CS{ public: _U8CS(unsigned extraBytes = 20){tam=0,cam=0,eam=extraBytes;alloc();} ~_U8CS(){dealloc();} unsigned tam,cam,eam; unsigned char *str; void alloc(){ str=new unsigned char [tam=eam]; str[cam = 0]='\0'; } void realloc(){  unsigned char *_new = new unsigned char [tam+=eam];   for(unsigned x = 0;x<=cam;_new[x]=str[x],++x){}   delete[]str;   str = _new; } void dealloc(){if(tam!=0&&str!=0){ delete[]str;tam=0,cam=0;str=0;}} //This method simply moves the the character values fromn this->str[position] to this->str[cam], lenght times for each character. 
 //In order to make room for the string at arg2.
 void _pushnput(unsigned position,unsigned char *str,unsigned lenght){
  if( position <= cam && position >= 0 ){
   if (lenght+cam >= tam) realloc();
   unsigned x = cam+1, y = 0;
   while (x>position)
     this->str[x+lenght-1] = this->str[--x];
   for(x=position;x<position+lenght;)
     this->str[x++]=str[y++];
   cam+=lenght;
  }
 }
 //The same concept, except it only makes room for a single character.
 void _pushnput(unsigned position, unsigned char c){
  if (!(position <=cam&&position>=0) && !cam) return;
  unsigned x = ++cam;
  for (;x>position;str[x]=str[--x]){}
  str[x] = c;
 }
 void _remov(unsigned position){
  if (! (position<= cam && position >= 0) && !cam ) return;
  unsigned x = position;
  for (;x<cam;str[x] = str [++x]);
  --cam;
 }
};
• Apr 10th 2013, 12:12 PM indieExe
Re: I need logic/math help.
//Once you have read and understood what the other program does: if you run this program, you will discover that it is ultra slow. This is because NB = (R2-R1) + 1, and the total number of times a variable is accessed = NB ^ P (P = length of string). In the case NB = 26 and P = 400, the number of variable accesses is 26^400, which is a ridiculously long number, and it will probably not even fit into an unsigned long long. What needs to be gotten rid of: 1. The fixed length for _C (the string containing the combination). 2. Accessing variables NB^P times. What remains is only the concept of emulating this algorithm through pure mathematics (the concept of generating a combination that will always equal that combination, no randomness involved). What I came up with when I tried this (and I am bad at mathematics; I only know 1st- to 2nd-year secondary school equations):
Abbreviations: RC = Requested Combination (user defined). NB = Number Base. NBR = Number Base Repeated. NBRFR = Number Base Repeated Fragment Repeated (the leftover from NBR translated into 1's instead of NBRFF's). NBRFF = Number Base Repeated First Fragment (first fragment representing NB/1; this value is required to determine NBRFR). R1 = Range1 (user defined). R2 = Range2 (user defined).
//----------
Definitions: NB = (R2 - R1)+1 (such that, say, 49 - 48 = 2, and not 1). NBR = RC/NB. NBRFF = NB^-1 = 1/NB. NBRFR = (NBR - int NBR) / NBRFF.
//----------
LABEL 1. //I might have to refer to this text later.
The thing about NBR (at least what my logic tells me): you know that for each first NB^-1, NB^0, NB^1, NB^2, NB^3, NB^4, et cetera, in NBR there's a new position in _C.str (the string containing the ultimate combination). According to that logic, every NB^-1, NB^0, NB^1, NB^2, NB^3, NB^4, et cetera, that comes after the first (what I said in the previous sentence) will not create a new position in _C.str (because it has already been created), but rather increase the value of _C.str[Current_Position+1] and reset the values of _C.str[Current_Position] to R1 until Current_Position-- == _WHATEVER_THE_END_IS. IF NBRFR is not equal to ZERO, this indicates that there is NB/NBRFR left that has not yet been added to _C.str. (I am currently unsure of where in _C.str to put it. I have a hunch it is in _C.str[0].) What the mathematical challenge is: LABEL 1 -> (determining the number of positions in _C.str required, and applying the required values to the correct positions in _C.str). Sort of like: while (!LASTPOSITIONIN_C) /*Apply value to _C.str[Current_Position] using a magical formula. ++Current_Position (go to the next position in _C.str)*/; //You may have replied while I wrote this.
Please come with any corrections that you perceive, or questions. Or maybe you are able to discover another method that does this?
• Apr 10th 2013, 03:00 PM
Re: I need logic/math help.
This method is broken:
Code:
void _pushnput(unsigned position, unsigned char c) {
    if( !(position <= cam && position >= 0) && !cam )
        return;
    unsigned x = ++cam;
    //--- What is this loop doing? ---
    for( ; x > position; str[x] = str[--x] ) {
    }
    str[x] = c;
}
The loop just decrements x, then copies the character from str[x] back into str[x]...? As you can see in the following example, the first time this happens, it just deletes the '\0' at str[0]:
Code:
#include <iostream>
#include "_U8CSTRING.cpp"
using namespace std;
int main() {
    char x = 'L'; //here to fill the stack, so you can see the '\0' missing
    _U8CS s(2);
    char y = 'R'; //^ same here ^
    cout << s.str << '\n'; //prints "" (empty string)
    s._pushnput(0, 'H');
    cout << s.str << '\n'; //prints "H??" (for me)
    s._pushnput(0, 'w');
    cout << s.str << '\n'; //prints "w??" (for me)
    return 0;
}
I'm still not entirely sure what it is you're trying to do... Are you trying to list all the permutations of a set of characters? E.g. { 'a', 'b', 'c' } 0 |-> "abc" 1 |-> "acb" 2 |-> "bac" 3 |-> "bca" 4 |-> "cab" 5 |-> "cba" (This is a guess. I'll keep looking through your code and try to figure out what you are trying to accomplish, as running the program printed some pretty weird results. I'm not giving up on you yet. But I'd like to teach you how to write neater code... (Nerd))
• Apr 10th 2013, 03:40 PM indieExe
Re: I need logic/math help.
Quote: The loop just decrements x, then copies the character from str[x] back into str[x]...?
No, it copies the value from str[x-1] to str[x]. (++identifier is evaluated before an expression in C++.) Try to put the (--x) and (++cam) within parentheses. What compiler are you using? (The only fault in this method is that it doesn't check to see if reallocation is required.) Put this:
Code: if (cam>=tam) realloc();
between the for loop and
Code: unsigned x = ++cam
The program above (the generation program) will iterate every combination that can happen within P positions when each position has a max value of NB. (It is the same concept that we use for counting numbers.)
Code:
void _pushnput(unsigned position, unsigned char c){
 if (!(position <=cam&&position>=0) && !cam) return;
 unsigned x = ++cam; //++cam is the position above the '\0' (++cam is evaluated before x)
 for (;x>position;str[x/*x is above '\0'*/]=str[--x/*this is the character '\0'*/]){}
 //Therefore in the case where cam is 0, str[1] = str[0] = '\0'.
 str[x] = c; //All characters have been moved 1 to the right, str[position] is set to c.
}
using namespace std;
int main (){
 _U8CS s(0);
 s._pushnput(0,'E');
 s._pushnput(0,'C');
 s._pushnput(0,(unsigned char*)"HELLO",5);
 cout<<"\""<<s.str<<"\""<<endl; //Prints "HELLOCE".
 return 0;
}
Here is the output of the generation program, R1 = 48, R2 = 49, P = 4: 0000 0001 0010 0011 0100 0101 0110 0111 1000 1001 1010 1011 1100 1101 1110 1111: ((R2-R1)+1)^P = 2^4 = 16. (Notice there is a bug in the program which outputs the same number twice; I removed those occurrences in this example.) As you can see, it follows the same algorithm we use for counting numbers. BTW, what is a permutation?
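[A note added to this record, not part of the thread: the sequence the poster describes — 1, 2, 11, 12, 21, 22, 111, ... — is exactly bijective base-NB numeration (digits run from 1 to NB, with no zero digit), so the RC-th combination can be computed directly with repeated division and remainder, without visiting any earlier combination. Below is a minimal sketch of that idea; the function name and the use of the poster's abbreviations RC, NB, R1 are the editor's, not from the thread.]
Code:
#include <iostream>
#include <string>

// Bijective base-NB numeration: combination number RC (starting from 1) maps
// to a string over the NB characters [R1, R1 + NB - 1], with no zero digit.
// This reproduces the thread's sequence 1, 2, 11, 12, 21, 22, 111, ...
std::string combination(unsigned long long RC, unsigned long long NB, char R1) {
    std::string out;
    while (RC > 0) {
        unsigned long long r = RC % NB;
        if (r == 0) {            // a remainder of 0 stands for the digit NB
            r = NB;
            RC = RC / NB - 1;
        } else {
            RC = RC / NB;
        }
        out.insert(out.begin(), static_cast<char>(R1 + r - 1));
    }
    return out;
}

int main() {
    // RC = 9, NB = 2, digits '1'..'2' reproduces the poster's example "121".
    std::cout << combination(9, 2, '1') << '\n';
}
This jumps straight to any requested combination in O(log RC) steps instead of enumerating all NB^P predecessors.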
2016-09-30 03:17:01
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 1, "img_math": 0, "codecogs_latex": 29, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.5212253928184509, "perplexity": 10488.042710872982}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2016-40/segments/1474738662018.69/warc/CC-MAIN-20160924173742-00161-ip-10-143-35-109.ec2.internal.warc.gz"}
https://discuss.codechef.com/t/dquery-editorial/103297
# DQUERY - Editorial

Setter: rag_hav13 Testers: tabr

# PREREQUISITES:

Factorisation, Prefix Sums, Binary Search

# PROBLEM:

You are given a list of N integers A_1, A_2, \cdots, A_n, along with Q queries. Each query consists of two integers p and k. Suppose you are allowed to reorder only those elements of A which are divisible by p. You have to output the maximum possible value of \sum_{i=1}^k A_i over all such reorderings.

# EXPLANATION:

Our reordering strategy is trivial. Let S = \{A_i \mid i \in [N], A_i \text{ is divisible by } p\}. We would like to place the largest element from S at the smallest index, the second largest element at the second smallest index, and so on. What would be the value of \sum_{i=1}^k A_i in such a reordering? Define T = \{i \in [k] \mid A_i \text{ is divisible by } p\}. Then, in our reordering, we would place the largest |T| elements from S into the indices in T, and we would place the elements that are already at those T indices somewhere else (possibly outside the range [1, k]). Therefore, the answer we get would be \sum_{i=1}^k A_i + (\text{sum of the largest } |T| \text{ elements of } S) - \sum_{i \in T} A_i

We can precompute the factorisation of all numbers in the range [1, 1e5], and use this to prime factorise every element of A. Note that each element of A can have at most \log 1e5 < 17 prime factors. Now, for each prime number \in [1, 1e5], we can precompute S. We can precompute suffix sums of S (when sorted by value) and prefix sums of S (when sorted by index). Finally, we can precompute prefix sums of A. Now we describe how to answer each query. We can find |T| by doing a simple binary search for k in S; this takes O(\log n). We can compute \sum_{i=1}^k A_i in O(1) using the prefix sums we precomputed. We can compute (\text{sum of the largest } |T| \text{ elements of } S) in O(1) using the suffix sums we precomputed. And finally, we can compute \sum_{i \in T} A_i in O(1) using the prefix sums of S, which we also precomputed.

# TIME COMPLEXITY:

Approximately O(M\log M + 1e5\log 1e5) precomputation, and then O(\log N) per query, where M = N \log 1e5 bounds the total number of prime factors across all elements.
# SOLUTION:

Editorialist's Solution

#include <bits/stdc++.h>
#include <ext/pb_ds/assoc_container.hpp>
#include <ext/pb_ds/tree_policy.hpp>
//#include <sys/resource.h>
#define double long double
#define int long long
#define initrand mt19937 mt_rand(time(0));
#define rand mt_rand()
#define MOD 1000000007
#define INF 1000000000
#define mid(l, u) ((l+u)/2)
#define rchild(i) (i*2 + 2)
#define lchild(i) (i*2 + 1)
#define mp(a, b) make_pair(a, b)
#define lz lazup(l, u, i);
#define ordered_set tree<pair<int, int>, null_type,less<pair<int, int>>, rb_tree_tag,tree_order_statistics_node_update>
using namespace std;
using namespace __gnu_pbds;
bool notPrime[100001];
vector<int> pfac[100001];
signed main(){
    ios_base::sync_with_stdio(false);
    cin.tie(NULL);
    cout.tie(NULL);
    // Sieve: record the prime factors of every number up to 1e5.
    for(int i = 2;i<=100000;i++){
        if(notPrime[i]) continue;
        for(int j = i;j<=1e5;j+=i){
            notPrime[j] = true;
            pfac[j].push_back(i);
        }
    }
    int t;
    cin>>t;
    while(t--) {
        int n;
        cin >> n;
        int m[n];
        for(int i = 0;i<n;i++) cin>>m[i];
        // Prefix sums of the array as given.
        int pref[n];
        pref[0] = m[0];
        for(int i = 1;i<n;i++) pref[i] = m[i] + pref[i-1];
        // li[p]: indices of elements divisible by prime p, in index order.
        vector<int> li[100001];
        for(int i = 0;i<n;i++){
            for(int j: pfac[m[i]]){
                li[j].push_back(i);
            }
        }
        // pf[p]: prefix sums of S sorted by index; sf[p]: prefix sums of S sorted by value, descending.
        vector<int> pf[100001], sf[100001];
        for(int i = 0;i<=100000;i++){
            if(li[i].size() == 0) continue;
            pf[i].push_back(m[li[i][0]]);
            for(int j = 1;j<li[i].size();j++){
                pf[i].push_back(m[li[i][j]] + pf[i][j-1]);
            }
            vector<int> temp;
            for(int j: li[i]) temp.push_back(m[j]);
            sort(temp.begin(), temp.end());
            sf[i].push_back(temp[temp.size() - 1]);
            for(int j = 1;j<temp.size();j++){
                sf[i].push_back(temp[temp.size() - 1 - j] + sf[i][j-1]);
            }
        }
        int q;
        cin>>q;
        while(q--){
            int p, k;
            cin>>p>>k;
            int ans = pref[k-1];
            // |T| = number of indices < k holding elements divisible by p.
            int cnt = lower_bound(li[p].begin(), li[p].end(), k) - li[p].begin();
            if(cnt > 0){
                ans -= pf[p][cnt - 1];  // remove what currently sits at those indices
                ans += sf[p][cnt - 1];  // add the |T| largest divisible elements
            }
            cout<<ans<<endl;
        }
    }
}

1 Like

Can someone explain how the precomputing part works?

Let me explain with an example. Let's assume the menu is [1,2,4,5,8,7]. The prefix sums for the above array: [1,3,7,12,20,27]. Let's say p=2 and k=4. The optimal way of reordering would be [1,8,4,5,2,7]. For p=2, [2,4,8] are his favourites, as 2 divides them. Here we must reorder items in such a way that deliciousness is maximized, hence we'll bring the items with maximum value to the front. The array sorted by values: [8,4,2]; prefix sums: [8,12,14]. But we also have to remove the sum of the items present in those places before the reorder, hence we also need to sort them by indices: [2,4,8]; prefix sum array: [2,6,14]. Now let's calculate the deliciousness. Firstly, consider the prefix sum of the first k items before reordering = 12. For accessing the prefix-sum arrays of favourite items we need the number of favourite items present in [1,k], which can be obtained for each query with the technique discussed below. Let's remove the sum of those items which can be replaced with higher values = 2+4 = 6. Let's add the sum of those items which will be brought to the front = 8+4 = 12. Hence the answer can be calculated by 12-6+12 = 18.

Technique to get the number of fav_items occurring up to k: we can store the list of indices of fav_items and obtain the lower bound of k in the list. Hope this helps.

3 Likes

Ahh...!! Very explanatory and concise editorial.

Thank you for the explanation, it cleared up many things.

1 Like
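As a quick sanity check of the worked example above (an added sketch, not part of the editorial), the snippet below applies the formula prefix(k) − (sum of favourite items at indices ≤ k) + (sum of the largest such items) to the array [1,2,4,5,8,7] with p=2, k=4:

#include <algorithm>
#include <iostream>
#include <vector>

int main() {
    std::vector<long long> a = {1, 2, 4, 5, 8, 7};
    long long p = 2, k = 4;

    long long prefix = 0;                 // sum of the first k items as given
    for (int i = 0; i < k; ++i) prefix += a[i];

    std::vector<long long> fav, favInRange;
    for (int i = 0; i < (int)a.size(); ++i)
        if (a[i] % p == 0) {
            fav.push_back(a[i]);          // S: all items divisible by p
            if (i < k) favInRange.push_back(a[i]);   // T: those inside [1..k]
        }

    // Replace the |T| favourite items inside [1..k] with the |T| largest ones.
    std::sort(fav.rbegin(), fav.rend());
    long long removed = 0, added = 0;
    for (long long v : favInRange) removed += v;                    // 2 + 4 = 6
    for (size_t i = 0; i < favInRange.size(); ++i) added += fav[i]; // 8 + 4 = 12

    std::cout << prefix - removed + added << '\n';   // prints 18
}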
2022-09-30 16:36:46
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 1, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.8087234497070312, "perplexity": 6558.712456119514}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2022-40/segments/1664030335491.4/warc/CC-MAIN-20220930145518-20220930175518-00231.warc.gz"}
http://libai.math.ncu.edu.tw/webclass/statistics/probability/notes/ch2_sec3_p1/index.html
Some Simple Propositions

Proposition 1
$P(E^c) = 1 - P(E)$

Proposition 2
If $E \subset F$, then $P(E) \leq P(F)$.

Proof: Since $E \subset F$, it follows that we can express $F$ as $F = E \cup (E^c F)$. Hence, as $E$ and $E^c F$ are mutually exclusive, we obtain from Axiom 3 that $P(F) = P(E) + P(E^c F)$, which proves the result, since $P(E^c F) \geq 0$.

Proposition 3
$P(E \cup F) = P(E) + P(F) - P(EF)$

Proof: Writing $E \cup F$ as the union of the disjoint events $E$ and $E^c F$ gives $P(E \cup F) = P(E) + P(E^c F)$; writing $F$ as the union of the disjoint events $EF$ and $E^c F$ gives $P(E^c F) = P(F) - P(EF)$, and the result follows.

Proposition 4
We may also calculate the probability that any one of the three events $E$ or $F$ or $G$ occurs: $P(E \cup F \cup G) = P((E \cup F) \cup G)$, which by Proposition 3 equals $P(E \cup F) + P(G) - P((E \cup F)G)$. Now it follows from the distributive law that the events $(E \cup F)G$ and $EG \cup FG$ are equivalent, and hence we obtain from the preceding equations that
$P(E \cup F \cup G) = P(E) + P(F) + P(G) - P(EF) - P(EG) - P(FG) + P(EFG)$
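To make Proposition 3 concrete, here is a small exhaustive check (an illustrative addition, not part of the notes) on a uniform sample space of 12 outcomes, with the two events chosen arbitrarily as the multiples of 2 and of 3:

#include <iostream>

// Verify P(E ∪ F) = P(E) + P(F) - P(EF) on the uniform space {0, ..., 11},
// with E = multiples of 2 and F = multiples of 3 (arbitrary example events).
int main() {
    const int N = 12;
    int e = 0, f = 0, ef = 0, un = 0;
    for (int w = 0; w < N; ++w) {
        bool inE = (w % 2 == 0), inF = (w % 3 == 0);
        e += inE; f += inF; ef += (inE && inF); un += (inE || inF);
    }
    double pE = double(e) / N, pF = double(f) / N;
    double pEF = double(ef) / N, pU = double(un) / N;
    std::cout << pU << " == " << pE + pF - pEF << '\n'; // both print 0.666667
}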
2019-02-19 14:54:51
{"extraction_info": {"found_math": false, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.9188480377197266, "perplexity": 634.8848183790798}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2019-09/segments/1550247490225.49/warc/CC-MAIN-20190219142524-20190219164524-00517.warc.gz"}
https://www.e-csd.org/journal/view.php?number=281
ORIGINAL ARTICLE Commun Sci Disord. 2006;11(1): 108-120. Understanding of the Liaison Rule in the Hangul Reading among 5- to 9-Year-Old Children Eun-Seon Lee and Dong-Il Seok Copyright ©2006 The Korean Academy of Speech-Language Pathology and Audiology 이은선(Eun-Seon Lee) | 석동일(Dong-Il Seok) ABSTRACT This study was carried out to determine the level of understanding of the liaison rule by giving reading tasks to 120 children aged 5 years to 9 years and 11 months. We chose this age range because subjectional vowels are important variables in the liaison rule of reading. This study focused on investigating the correlation between liaison rule development and age, postposition, number of syllables, and phoneme. The study procedure examined 64 sentences with the liaison rule. The data analysis was based on the Pronunciation Law, clauses 13 and 14. The understanding of the liaison rule by children aged 5 was significantly lower than that by children aged 6-9. The results also showed significant variation between individuals within the same age group. In addition, children commonly made errors with words containing the /ㅅ/ phoneme, which is out of harmony with the pure Korean phonemes. Children aged 7-9 did not show significant differences, which led us to believe that the development of the liaison rule is most active from the age of 7. These results on the development of the liaison rule according to age suggest that the rules of the Korean alphabet and phonemes should be taught before the liaison rule. In conclusion, we believe that the liaison rule should be taught at age 5, when children begin to read, and after a complete understanding of phonemes has been developed; i.e., the rules of the Korean alphabet and phonemes should be taught first. Keywords: 연음규칙 | 읽기과제 | 종속적 모음 | liaison rule | reading tasks | subjectional vowels
2023-02-07 14:07:22
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 1, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.3475937247276306, "perplexity": 3323.5673894417064}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2023-06/segments/1674764500619.96/warc/CC-MAIN-20230207134453-20230207164453-00299.warc.gz"}
https://or.stackexchange.com/tags/traveling-salesman/new
# Tag Info I would like to add an answer by @Misha Lavrov: The MTZ constraints really need to know the direction of an edge in order to work. I have called them "timing constraints" when teaching, because they are an inequality representation of the condition: if $x_{ij} = 1$ (if we go from vertex $i$ to vertex $j$), then $t_j$ (the MTZ variable that ...
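For context (an addition to this record, not part of the quoted answer): a common textbook form of the Miller–Tucker–Zemlin subtour-elimination constraints, with order variables $u_i$ and $n$ cities, encodes exactly that implication:

$u_i - u_j + n\,x_{ij} \le n - 1$ for all $2 \le i \ne j \le n$, with $1 \le u_i \le n - 1$ for $2 \le i \le n$.

When $x_{ij} = 1$ the first inequality reduces to $u_j \ge u_i + 1$, so the "time" along a tour strictly increases and no directed cycle can avoid the depot (vertex 1); when $x_{ij} = 0$ the constraint is slack.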
2021-11-28 15:17:45
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.6974549293518066, "perplexity": 598.2518478359166}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2021-49/segments/1637964358560.75/warc/CC-MAIN-20211128134516-20211128164516-00622.warc.gz"}
http://gaselectricity.in/joule-heating-wikipedia-gas-efficient-cars-2015
Joule heating

A voltage difference between two points of a conductor creates an electric field that accelerates charge carriers in the direction of the electric field, giving them kinetic energy. When the charged particles collide with ions in the conductor, the particles are scattered; their direction of motion becomes random rather than aligned with the electric field, which constitutes thermal motion. Thus, energy from the electrical field is converted into thermal energy. [3]

Power loss and noise

Joule heating is referred to as ohmic heating or resistive heating because of its relationship to Ohm's Law. It forms the basis for a large number of practical applications involving electric heating. However, in applications where heating is an unwanted by-product of current use (e.g., load losses in electrical transformers) the diversion of energy is often referred to as resistive loss. The use of high voltages in electric power transmission systems is specifically designed to reduce such losses in cabling by operating with commensurately lower currents. The ring circuits, or ring mains, used in UK homes are another example, where power is delivered to outlets at lower currents, thus reducing Joule heating in the wires. Joule heating does not occur in superconducting materials, as these materials have zero electrical resistance in the superconducting state.

The heat produced is $Q = \int_0^t P \,\mathrm{d}t'$ (equation reconstructed from the surrounding definitions), where $t$ is time and $P$ is the instantaneous power being converted from electrical energy to heat. Far more often, the average power is of more interest than the instantaneous power:

$P_{avg} = U_{\text{rms}} I_{\text{rms}} = I_{\text{rms}}^2 R = U_{\text{rms}}^2 / R$

These formulas are valid for an ideal resistor, with zero reactance. If the reactance is nonzero, the formulas are modified:

$P_{avg} = U_{\text{rms}} I_{\text{rms}} \cos\phi = I_{\text{rms}}^2 \operatorname{Re}(Z) = U_{\text{rms}}^2 \operatorname{Re}(Y^*)$

In plasma physics, the Joule heating often needs to be calculated at a particular location in space. The differential form of the Joule heating equation gives the power per unit volume:

$\mathrm{d}P/\mathrm{d}V = \mathbf{J} \cdot \mathbf{E}$

Here, $\mathbf{J}$ is the current density, and $\mathbf{E}$ is the electric field. For a neutral plasma not in a magnetic field and with conductivity $\sigma$, $\mathbf{J} = \sigma \mathbf{E}$, and therefore

$\mathrm{d}P/\mathrm{d}V = \mathbf{J} \cdot \mathbf{E} = \mathbf{J} \cdot \mathbf{J}/\sigma = J^2 \rho$

• Some food processing equipment may make use of Joule heating: running current through food material (which behaves as an electrical resistor) causes heat release inside the food. [6] The alternating electrical current coupled with the resistance of the food causes the generation of heat. [7] A higher resistance increases the heat generated. Ohmic heating allows for fast and uniform heating of food products, which maintains high quality in foods. Products with particulates heat up faster in ohmic heating (as compared to conventional heat processing) due to their higher resistance.
[8] Joule heating (ohmic heating) is a flash pasteurization (also called "high-temperature short-time" (HTST)) aseptic process that runs an alternating current of 50–60 Hz through food. [9] Heat is generated through the electrical resistance of the food. [9] As the product heats up, electrical conductivity increases linearly. [7] A higher electrical current frequency is best, as it reduces oxidation and metallic contamination. [9] This heating method is best for foods that contain particulates suspended in a weak salt-containing medium, due to their high resistance properties. [8] Ohmic heating allows for maintained quality of foods due to the uniform heating that decreases deterioration and over-processing of food. [9]

Benefits

Ohmic heating has benefits similar to other rapid heating methods. This method can destroy microorganisms, achieving sterility through electroporation of cell membranes, membrane rupture from the voltage drop across cell membranes, and cell lysis. [9]

Ideal Food Products

In particulate foods, the particles heat up faster than the liquid matrix due to their higher resistance to electricity. [9] This prevents overheating of the liquid matrix while the particles receive sufficient heat processing. [9] Below are some examples of different electrical conductivity values of certain foods, showing how composition and salt concentration affect electrical conductivity.

[Table: Electrical conductivity of selected foods [10] — values lost in extraction.]

Ohmic heating is highly influenced by the electrical conductivity of the product. [9] Depending on the position of the food relative to the electrodes, there can be areas of overprocessing and underprocessing. [9] Fats, oils, alcohols, bone, and crystalline structures cannot be heated directly by ohmic heating due to their low electrical conductivity values. [9] Similarly, it is difficult to obtain uniform heating in non-homogeneous food particulates, making it difficult to assure sterility. [7]
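To illustrate the average-power formulas above (an added sketch, not from the article; the RMS voltage, current, and phase angle are made-up values):

#include <cmath>
#include <cstdio>

int main() {
    // Illustrative RMS values for a resistive-plus-reactive load (assumed).
    double U_rms = 230.0;  // volts
    double I_rms = 2.0;    // amperes
    double phi   = 0.3;    // phase angle in radians

    // Ideal resistor (zero reactance): P_avg = U_rms * I_rms.
    double P_resistive = U_rms * I_rms;

    // Nonzero reactance: only the in-phase component dissipates heat.
    double P_avg = U_rms * I_rms * std::cos(phi);

    std::printf("resistive: %.1f W; with power factor cos(phi)=%.3f: %.1f W\n",
                P_resistive, std::cos(phi), P_avg);
}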
2019-08-18 13:09:59
{"extraction_info": {"found_math": false, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.91867595911026, "perplexity": 2129.3032870738484}, "config": {"markdown_headings": false, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 5, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2019-35/segments/1566027313889.29/warc/CC-MAIN-20190818124516-20190818150516-00502.warc.gz"}
https://www.physicsforums.com/threads/looking-for-a-little-history-on-the-hyperbolic-functions.191889/
# Looking For a little History on the Hyperbolic Functions

1. Oct 17, 2007
I was just browsing through my textbook in the section on hyperbolic trig functions. It defines sinh x to be $$\frac{e^x-e^{-x}}{2}$$, which comes from breaking the function $$f(x)=e^x$$ into two functions, the other of which forms cosh x. Oddly enough, this is one of the only sections in the text that does not include a brief history of the topic at hand. I came across one site that said that a Lambert discovered (or created, I don't know which) the hyperbolic functions. Does anyone know of any good sources where I could get the rundown on the history of these things? I am just curious as to why someone would have wanted to break $$e^x$$ into parts in the first place. I know that the hyperbolic functions serve some purposes in integration, but I would assume that that was not their original intent. Any insight would be appreciated, Casey

2. Oct 17, 2007
### Hurkyl Staff Emeritus
As their name suggests, they are useful for hyperbolic trigonometry. For example, the unit hyperbola defined by x^2 - y^2 = 1 is parametrized by $$(x, y) = (\pm \cosh u, \sinh u)$$. I don't remember the details, but this is very closely related to hyperbolic geometry -- the non-Euclidean geometry that Lambert was studying. (Of course, he was trying to find a contradiction, but still, he laid the foundations for this particular subject)

3. Oct 17, 2007
Thanks Hurkyl. Then, I guess I was looking for something more along the lines of what is hyperbolic trig. What made somebody say to themselves, "Hey, I think I'll break the function e^x up into two ridiculous looking fractions that when summed equal just e^x again after breakfast today..." Know what I mean? Seems like it probably had an application or some purpose..... Casey

4. Oct 17, 2007
### Hurkyl Staff Emeritus
Oh phooey, I confused Lambert with Saccheri. (But I think Lambert did some work along those lines too) Anyways, as I was trying to imply, hyperbolic trig functions are to (rectangular) hyperbolas as circular trig functions are to circles. So anytime a hyperbola can be made interesting to study, the hyperbolic trig functions will probably come into play. One major application is in Minkowski geometry (the space-time of special relativity); squared distances in the Minkowski plane are given by $\Delta(ct)^2 - \Delta x^2$, so the hyperbola plays the same role in Minkowski geometry as the circle does in Euclidean geometry. (it's the locus of all points a fixed distance from a given point)

5. Oct 17, 2007
Word. I'll search this Minkowski geometry a little. So I guess my question really should have been, why study hyperbolas. And this you have answered. Thanks Hurkyl, Casey

6. Oct 17, 2007
### Hurkyl Staff Emeritus
Hyperbolas are one of the conic sections: after lines, they are the simplest of all shapes, and were known even to Euclid. Because of their simplicity, they tend to crop up frequently, just like their cousins: circles, ellipses, and parabolas. In fact, in projective geometry, circles, ellipses, parabolas, and hyperbolas are all the same thing. Their apparent difference is an artifact of perspective: a hyperbola has two points at infinity, a parabola 1, and an ellipse none. Last edited: Oct 17, 2007

7. Oct 17, 2007
### neutrino
This is usually a good site on the history of mathematics, but in the case of hyperbolic trig functions, it just seems to have two sentences, one in each article (one on trig.
functions and the other, the biography of Lambert), something along the lines of "Lambert made important discoveries...".

8. Oct 19, 2007
Thanks neutrino. I had run into that site earlier from a Google search. I just was not sure what kind of website it was. Is it a school? Casey

9. Oct 19, 2007
### LukeD
As you're probably aware though, cosine and sine come from breaking up e^x in a different way, using complex numbers, which is that cos(x) = (e^(ix)+e^(-ix))/2 and sin(x) = (e^(ix)-e^(-ix))/(2i). From that you actually find that cosh(x) = cos(ix) and sinh(x) = -i*sin(ix).

10. Oct 20, 2007
### neutrino
It's a site on the history of mathematics maintained by the maths dept. (or the school of maths and stat.) of the Univ. of St Andrews, Scotland.

11. Oct 20, 2007
### eeuler
Hyperbolic functions stem back to ancient times, I believe. Hypatia, a female mathematician/astronomer from Alexandria, contributed to the development, mainly through conic sections, which I think she elaborated on. Well, I do know that her contributions led to the development of hyperbolic functions.

12. Oct 20, 2007
### neutrino
You could ask her if you want to. She is a regular, here, at PF. :tongue2:

13. Oct 20, 2007
### eeuler
Oh wow, really!;) hehe what a coincidence I mentioned her and she ends up being a member here:)

14. Oct 21, 2007
### Gib Z
I would imagine that when Euler found his identity $e^{ix} = \cos x + i \sin x$ and rearranged the series of cos and sin to derive it, he would have had to check whether he was allowed to rearrange the series in the way he did. To be able to arrange the terms as he did, he would have to prove that the series for sin and cos, which have alternating signs, also converge absolutely. As you should know, the absolute-value series for sin and cos are the series for sinh and cosh respectively.

15. Oct 21, 2007
### neutrino
Lots of links regarding Euler's publication of this identity are available here.
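As a quick check of the identities in post #9 (added to this record for completeness, not from the thread), substituting $$ix$$ into the exponential forms recovers the hyperbolic functions:

$$\cos(ix) = \frac{e^{i(ix)}+e^{-i(ix)}}{2} = \frac{e^{-x}+e^{x}}{2} = \cosh(x)$$

$$-i\sin(ix) = -i\,\frac{e^{i(ix)}-e^{-i(ix)}}{2i} = \frac{e^{x}-e^{-x}}{2} = \sinh(x)$$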
2018-10-19 04:18:43
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 1, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.6632923483848572, "perplexity": 1020.6931744940146}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2018-43/segments/1539583512323.79/warc/CC-MAIN-20181019041222-20181019062722-00112.warc.gz"}
http://www.gamedev.net/topic/636488-is-anyone-else-having-doubts-about-the-raspberry-pi/
# Is anyone else having doubts about the Raspberry Pi?

### #1 Shaquil  Members  -  Reputation: 798

Posted 28 December 2012 - 09:48 PM

One thing I got for Christmas was the Raspberry Pi. At this point, I wish I had put the money toward something else. The Pi Foundation claims that these machines are for kids to learn with, but I just can't see it. I'm a kid, I'm new to Linux, I'm new to working with hardware, and using the Pi thus far has been a complete pain. The $35 price point was a lie, much like the price of nearly any small/"portable" piece of hardware. You pretty much have to buy a case if you want this thing to last, and that's at the very least $10. Then there's the micro USB power supply, which requires 5 V at about 500 mA. I dunno what makes anyone think that a charger like that would be lying around. I have a micro USB phone charger, but it maxes out at 250 mA. So that was another $10. Then there's the SD card, which ranges between 10 and 30 dollars. We'll call it $15 to be fair. Altogether that's around $70 for this "computer" that's completely painful to use from the start. And then there's the 2-week to 1-month or longer wait. Worse, if you look at the website's main blog (http://www.raspberrypi.org/), all you see is posts highlighting projects that are utterly out of the reach of beginners, done by people with years of hardware, software, and Linux experience who are using the Pi to do things they, for the most part, already had an idea how to do. In what way is this helpful to newcomers, other than to lure them in with projects that seem feasible? I just can't see it. I'm sure that only a very tiny few people on this forum actually have a Pi, but I'd love to get some feedback. Mine is pretty much sitting there. It'll be nice to have as a Linux computer I can turn on and practice with through PuTTY so I don't have to dual boot over to Ubuntu, but other than that, I can't see myself using it for a while. It's not beginner friendly in the least.

### #2 Luckless  Crossbones+  -  Reputation: 1417

Posted 29 December 2012 - 12:00 AM

5 V 500 mA micro USB? Isn't that the standard USB spec power output for powered ports on a PC, using cords that many households (at least those geeky enough to house someone wanting a Pi) would have half a dozen or more of? I have SD cards lying around, and CF cards too for that matter. This really doesn't look like something that someone without any computing experience would have any interest in, and a prior interest in computers comes with a prior collection of goodies. It may not be simple or overly friendly, but it isn't exactly impossible to work with. Also, aren't most of those projects fairly open? Want to do something like what someone else did? Copy them and make your improvements. Not sure how they did it? Ask. It isn't a magic device that you pull out of a box and things just happen. What kind of projects do you want them to feature and focus on? The web is full of blogs detailing simple little projects from people toying with them.

Old Username: Talroth
If your signature on a web forum takes up more space than your average post, then you are doing things wrong.
### #3 way2lazy2care  Members  -  Reputation: 778

Posted 29 December 2012 - 12:56 AM

I have slightly biased opinions on it, but I do wish they sold kits that came with cases/power supplies/etc. I don't think the intention was necessarily for kids to learn about things on their own. It seemed more to be about having a system that was simple enough for a kid to understand when being taught, rather than about having a system that is easy enough for a child to use with no previous experience. We live in a world where computers are increasingly perceived as a black box inside which stuff happens, rather than a series of components that work together. I think the Raspberry Pi succeeds in breaking that mold.

### #4 Dinner  Members  -  Reputation: 263

Posted 29 December 2012 - 01:55 AM

I think the device is for experimenting. I had mine for ages, with projects in mind: build an OS, slowly gather parts for a robot (I have nearly all the movement and vision parts, but it needs a brain). I sold mine so I am able to buy the upgraded version (got a good deal, the same price that I paid for it). The other person wanted it for a business client: they set up solar panels at schools, and record the information to give to the teachers to teach the students about solar panels. They wanted to use the small form factor of the Pi as a client to download the information from the panels and FTP it to a web server to display the results.

### #5 Cornstalks  Crossbones+  -  Reputation: 6866

Posted 29 December 2012 - 03:57 AM

The $35 price point was a lie, much like the price of nearly any small/"portable" piece of hardware. I thought they were pretty clear about what the $35 gives you.

You pretty much have to buy a case if you want this thing to last, and that's at the very least $10. Depends on how you treat it. If you don't abuse it you really don't need a case.

Then there's the micro USB power supply which requires 5 V at about 500 mA. I dunno what makes anyone think that a charger like that would be lying around. I have a micro USB phone charger, but it maxes out at 250 mA. So that was another $10. Really? I can power it through my micro USB phone cable just fine... Besides, in USB 2.0 a unit can draw 500 mA of power just fine, at least as specified by USB 2.0 (and assuming Wikipedia isn't lying). If you just use a standard USB 2.0 connection you're fine, and I'd say most people have such a connection readily available.

Then there's the SD card, which ranges between 10 and 30 dollars. We'll call it $15 to be fair. I've got spare SD cards lying around. Sure, you may not, but many people do, and if you don't you can get a dirt cheap 4GB one for under $10.

Altogether that's around $70 for this "computer" that's completely painful to use from the start. I'm not sure how to say this, but if you're expecting something fancy for $70, it's not gonna happen. I think people get their hopes up too high for the Pi. It wasn't ever meant to be a "here's a pretty tutorial on how to get into Linux and computer stuff;" it's more of a "here's a cheap little thing you can tinker the heck out of." Additionally, as has been mentioned, a lot of the beginners that are targeted are beginners in a classroom, with an instructor to guide them. I understand it may not have been what you expected (and maybe it was marketed to you in a less-than-ideal way), which is unfortunate. I don't know what doubts I'd have about the Pi. What exactly are you doubting? Frustrations I can understand, but I'm not sure about doubts.
From the FAQ: "We want to see it being used by kids all over the world to learn programming." From the Raspberry Pi User Guide: "A big kick up the backside came a few years ago, when we were moving quite slowly on the Raspberry Pi project. ... I was talking to a neighbour's nephew about the subjects he was taking for his GCSE. ... computer games were a passion for him, but his schooling had skirted around any programming. This is the sort of situation I want to see the back of, where potential enthusiasm is squandered to no purpose." So wait, this is for kids who are enthusiastic about learning things like programming, hardware, and linux? Where are the detailed tutorials? Where's the very patient, helpful community? I think they've just gotten themselves into something they didn't understand fully. If you want to help kids get into hardware and understanding computers under all the GUI's and abstractions, then you've got a road ahead of you. Especially if most of your supported operating systems are linux-based. You're going to have to provide something the Linux/Unix community still hasn't done: A welcoming, down-to-earth community for kids and beginners, the two interests groups most likely to completely give up and go somewhere else when things get tougher than they are fun. At the moment, we're talking about people who don't even know what "pwd" does, and they're asking questions like "How can I get audio over HDMI?" and being told "post your edid dump." Oh, is that all? Thanks. You think my little sister is going to put down her kindle fire to enjoy the subtle pleasantries of googling for hours to solve a problem she doesn't understand? The idea is great, but the execution is not. This isn't the way to get kids or beginners into computing. Edited by Shaquil, 29 December 2012 - 12:43 PM. ### #9way2lazy2care Members - Reputation: 778 Posted 29 December 2012 - 01:06 PM At the moment, we're talking about people who don't even know what "pwd" does, and they're asking questions like "How can I get audio over HDMI?" and being told "post your edid dump." Oh, is that all? Thanks. You think my little sister is going to put down her kindle fire to enjoy the subtle pleasantries of googling for hours to solve a problem she doesn't understand? The idea is great, but the execution is not. This isn't the way to get kids or beginners into computing. What's wrong with that? "post your edid dump." "What is an edid dump and how do I get it?" "Check this out. http://en.wikipedia.org/wiki/Extended_display_identification_data" "Oh cool I learned something new about my raspberry pi and computing in general! :D" ### #10Shaquil Members - Reputation: 798 Posted 29 December 2012 - 01:19 PM "post your edid dump." "What is an edid dump and how do I get it?" "Check this out. http://en.wikipedia.org/wiki/Extended_display_identification_data" "Oh cool I learned something new about my raspberry pi and computing in general! :D" Now I may be wrong, and you can try to find it if you want, but nowhere on that page does it tell you to type tvservice -d [filename] . So how is it helpful, again...? The most help I got on the forums was "type tvservice -d". No one told me I had to specify a file. I only realized it after a little reading, and applying what miniscule unix experience I have already. Had I seen it a month ago I'd have been like "What? It doesn't even work." Even now, I've got it writing to a file, but I've no idea how to parse or read the file. No help on that, either. 
If that's the best that can be done now, I'm just gonna put this thing away for a while. Thanks for the help anyway.

### #11 Servant of the Lord · Posted 29 December 2012 - 02:00 PM

Yeah, I guess, when you live in a nice area where there's a Best Buy and a Radio Shack and another hardware store around the block. Funny, but it turns out there are some places on earth that aren't like that either.

I just don't like the "$35 computer" thing, when you know it costs more than $35 to use this thing. I also have the USB cables (at least 2) to power the thing. I don't have a smartphone, just the free (or almost free) cellphone the phone company gave me. Not everyone will have those! But many will, so why charge them extra for something they already have? I also have a MicroSD card (three, but I lost one and gave one away). I got them each for less than $10. That's just happenstance though; most people won't have them. I don't have a collection of geeky technology sitting around my desk, so the fact that I had a MicroSD card is a coincidence.

The computer part costs $35, and they are trying to get it cheaper still. That doesn't mean it won't cost more to use the device. Total cost to program with the Raspberry Pi:

- $35 - The Pi itself
- $10 - The cables
- $10 - The MicroSD card
- $0 - The case isn't actually needed
- $350 - A monitor
- $700 - A computer to code on
- Recurring $40 monthly internet fee to access the documentation

Should they ship all this with a Raspberry Pi? No. If their descriptions and stated goals are too enthusiastic, that's just a very small company of people who are very passionate. Ideally, when they start having third world countries using this device, the schools will have a computer and a monitor and cables set up, and the kids will each individually have only the Pi and the MicroSD card. Hopefully, when they reach that point, the Pi and the MicroSD cards will together cost $12 or so, and will be purchased in bulk by their government, just like India was planning on doing with the OLPC, before the OLPC turned out more expensive than predicted and India decided to research making $10 computers to hook up to school monitors.

### #12 markr · Posted 29 December 2012 - 05:23 PM

I think the main problem is that the media frenzy surrounding the Pi has resulted in expectations in some parts of the community which vastly exceeded anything the Pi Foundation ever promised or even suggested.

Yes, many people need to buy additional bits (especially PSUs and leads) to use their Pi. As for $700 for "a computer to code on": this is absolutely false. The Pi was MEANT to be a self-hosting system. If you need to cross-compile to it, then you're Doing It Wrong. That was never the intention (and it's certainly not how I use mine!)
### #13 frob · Posted 30 December 2012 - 04:12 AM

> I think the main problem is that the media frenzy surrounding the Pi has resulted in expectations in some parts of the community which vastly exceeded anything the Pi Foundation ever promised or even suggested.

That's what I think whenever I see the posts "Raspberry Pi is teh aw3s0m3". Or when I look at the frequent Slashdot articles on it, which are very similar.

The Foundation has always been pretty clear about what they want: a cheap SoC that anyone can use (which people tend to notice) and materials focused around education and the Python programming language and tools like PyGame (but that second part people tend to overlook).

The device can do more, certainly. And people have done quite a lot with it. It is a cool little device, no question about that. But relative to what most people have available it is slow and clunky. The Foundation wanted people to do cool stuff with it, but that is beyond their initial goals. If you have a computer system and you can install PyGame on it... well then, you've got it and that's it. That's all the device was designed for.

### #14 Shaquil · Posted 30 December 2012 - 08:18 AM

> The Foundation has always been pretty clear about what they want: a cheap SoC that anyone can use (which people tend to notice) and materials focused around education and the Python programming language and tools like PyGame (but that second part people tend to overlook). [...]

No, the worst thing is that not a single person who has posted in this topic can seem to exactly agree on what the hell the Pi is for. You're right that the media has been playing things up in a different direction than what the foundation first wanted, but let's be honest: the foundation is eating it up. Like I said before, just look at their blog. For the most part, it's just hobbyist projects that are in no way beneficial to someone who doesn't know Linux, Python, or how to work with hardware. There are some things that are slightly beginner friendly, but then that's completely offset by posts like this, where they point out projects being done by grown men who admittedly have years of experience working with hardware. It plays right into that "It's a $35 computer you can do anything with!" idea.

I just don't know what they're trying to do. I'm sure there's an appropriate place for them to point out stuff like that, but why the main site that everyone goes to? What is the message supposed to be?

Edited by Shaquil, 30 December 2012 - 08:19 AM.

### #15 Madhed · Posted 30 December 2012 - 08:37 AM

The Pi is what it is... It never claimed to be more. Originally the Raspberry was designed as an educational tool, to enable classes in poorer countries to get a bunch of cheap computers to teach their students.
Then later the geek/nerd crowd jumped on it and hyped it up like it was... t3h l33test sh!t evar!!1 This is also the reason for the shipping delays. They were just not prepared for the amount of orders. I really don't see the problem here. If it didn't meet your expectations, you have only yourself to blame.

### #16 MrDaaark · Posted 30 December 2012 - 08:48 AM

Your complaints don't make sense. Your post just reads like someone who went into Home Depot and bought a ton of wood, then complains that no one has told you what the wood was for.

The Pi is not supposed to be "FOR" anything. It's a system on a chip, and left at that. What it's "FOR" is up to you. You have 100% freedom. If it were anything more than a system on a chip, it would be useless for its intended purpose. It's a system that anyone with the relevant skills can take and transform into another device without having to design their own chipset and OS.

### #17 SymLinked · Posted 30 December 2012 - 09:26 AM

> I'm a kid, I'm new to Linux, I'm new to working with hardware, and using the Pi thus far has been a complete pain.

To start with, Linux isn't easy when you're new to it. The Pi isn't making it easier, either. No one said it was going to be easy. I got what I expected for $70.

I bought one unit in August and played around with OpenCV and sensors/motors, but it got difficult for me as I'm not used to working with anything other than Visual Studio/Eclipse. I was busy with other projects, so I lost interest a little bit and just figured I'd slap XBMC on it. Worth every penny anyway; it's $70 for crying out loud. The possibilities are there, definitely. I will buy another one to experiment with when I get the spare time.

### #18 superman3275 · Posted 30 December 2012 - 11:14 AM

> No, the worst thing is that not a single person who has posted in this topic can seem to exactly agree on what the hell the Pi is for. [...] It plays right into that "It's a $35 computer you can do anything with!" idea.
> I just don't know what they're trying to do. I'm sure there's an appropriate place for them to point out stuff like that, but why the main site that everyone goes to? What is the message supposed to be?

I don't see the problem here. They have a blog where they post cool projects that people have done with the Raspberry Pi. Did they ever say they would have tutorials on the blog? What did you expect?

### #19 Shaquil · Posted 30 December 2012 - 12:11 PM

> Your complaints don't make sense. Your post just reads like someone who went into Home Depot and bought a ton of wood, then complains that no one has told you what the wood was for. The Pi is not supposed to be "FOR" anything. It's a system on a chip, and left at that. [...]

I'll just repost this for emphasis, from the FAQ and User Guide.

From the FAQ: "We want to see it being used by kids all over the world to learn programming."

From the Raspberry Pi User Guide: "A big kick up the backside came a few years ago, when we were moving quite slowly on the Raspberry Pi project. ... I was talking to a neighbour's nephew about the subjects he was taking for his GCSE. ... computer games were a passion for him, but his schooling had skirted around any programming. This is the sort of situation I want to see the back of, where potential enthusiasm is squandered to no purpose."

I'm sorry, but it is FOR something. There's actually a large, growing thread in the forums about this very issue. It seems I'm not the only one: http://www.raspberrypi.org/phpBB3/viewtopic.php?f=24&t=25501

I'd love to have a true discussion about this, but there's nothing more boring than when someone tries to downplay my opinion or right to speak by saying "You're not makin no sense! What you talkin bout!?" I'm done here. At least we agree on one thing: I was wrong about the Pi, its usefulness as a learning tool, and certainly its community of users.

### #20 Recantha2 · Posted 30 December 2012 - 01:39 PM

I believe someone on the RPi forum said it succinctly: "It does exactly what it says on the tin. It runs Linux or RISC OS. You can program it. You can learn how right from the lowest level. What you can't do is jump in the deep end and magically swim. You need guidance, from parents or teachers or scout leaders or books or the net. If you throw a kid in the pool you'll have drowned kids. If you let them learn with guidance you may get an Olympic swimmer. The educational material will be coming."

Now, of course, this "educational material" is a bit of a mythical beast at the moment, and in my opinion a lot of info has been too long coming. The important questions I have for Shaquil are: What do you want to do with the Pi? How much do you know so far? Where are you stuck?
If you could let us know what you're looking for, in terms of help, I know of several people from the meetup I go to who would be glad to help.

-- Mike
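An editorial aside on the EDID discussion in post #10: below is a minimal sketch of the dump-and-inspect workflow being described, run from Python on the Pi. The tvservice command and its -d flag come from the thread itself; the file name is just an example, and the only EDID fact assumed is the fixed 8-byte header that every valid dump begins with.

```python
import subprocess

# Dump the attached display's EDID block to a file, as post #10 arrived at.
# "edid.dat" is an arbitrary example file name.
subprocess.run(["tvservice", "-d", "edid.dat"], check=True)

# EDID is a small binary blob; a valid block begins with the fixed
# 8-byte header 00 FF FF FF FF FF FF 00.
with open("edid.dat", "rb") as f:
    edid = f.read()

print(len(edid), "bytes read")
print("first 8 bytes:", edid[:8].hex())
if edid[:8] == bytes.fromhex("00ffffffffffff00"):
    print("looks like a valid EDID block")
else:
    print("unexpected header; the dump may be incomplete")
```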
2014-03-07 10:52:54
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.18213213980197906, "perplexity": 1357.888578824085}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2014-10/segments/1393999642168/warc/CC-MAIN-20140305060722-00002-ip-10-183-142-35.ec2.internal.warc.gz"}
https://www.lmfdb.org/LocalNumberField/?p=3&n=4
| Label | Polynomial | $p$ | $e$ | $f$ | $c$ | Galois group | Slope content |
|-------|------------|-----|-----|-----|-----|--------------|---------------|
| 3.4.0.1 | $x^4 - x + 2$ | 3 | 1 | 4 | 0 | $C_4$ (as 4T1) | $[\ ]^{4}$ |
| 3.4.2.1 | $x^4 + 9x^2 + 36$ | 3 | 2 | 2 | 2 | $C_2^2$ (as 4T2) | $[\ ]_{2}^{2}$ |
| 3.4.2.2 | $x^4 - 3x^2 + 18$ | 3 | 2 | 2 | 2 | $C_4$ (as 4T1) | $[\ ]_{2}^{2}$ |
| 3.4.3.1 | $x^4 + 3$ | 3 | 4 | 1 | 3 | $D_{4}$ (as 4T3) | $[\ ]_{4}^{2}$ |
| 3.4.3.2 | $x^4 - 3$ | 3 | 4 | 1 | 3 | $D_{4}$ (as 4T3) | $[\ ]_{4}^{2}$ |
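A quick sanity check on the table's columns (an editorial illustration, relying only on the standard fact that a degree-$n$ extension of $\mathbb{Q}_p$ satisfies $e \cdot f = n$, where $e$ is the ramification index and $f$ the residue field degree; the degree is the second component of each LMFDB label):

```python
# (label, e, f, c) transcribed from the table above.
fields = [
    ("3.4.0.1", 1, 4, 0),
    ("3.4.2.1", 2, 2, 2),
    ("3.4.2.2", 2, 2, 2),
    ("3.4.3.1", 4, 1, 3),
    ("3.4.3.2", 4, 1, 3),
]

# For a degree-n p-adic field, e * f must equal n.
for label, e, f, c in fields:
    n = int(label.split(".")[1])
    assert e * f == n, f"e*f != n for {label}"
print("e * f = n holds for every row")
```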
2020-03-31 10:43:49
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.3017014265060425, "perplexity": 1175.80086767923}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2020-16/segments/1585370500426.22/warc/CC-MAIN-20200331084941-20200331114941-00162.warc.gz"}
https://www.geeksforgeeks.org/higher-order-derivatives/?ref=rp
# Higher Order Derivatives

The derivative of a function f(x) tells us how the value of the function will change when we change x. This quantity gives us an idea of both the size and the direction of the rate of change of the function. For example, a positive derivative indicates an increase in the value of the function, while a negative value indicates a decrease. Derivatives are essential for predicting limits, the direction of change, and system behavior given some input.

### Derivatives and Higher-Order Derivatives

The derivative of a real function tells us about the rate of change of the function. Derivatives are defined using limits, and for the function f(x) the derivative is denoted by f'(x). Its definition in terms of a limit is:

f'(x) = lim_{h → 0} [f(x + h) - f(x)] / h

To calculate derivatives of different functions, we usually use the following two properties.

Multiplication Rule for Differentiation: Let's say we have a complicated function f(x) which is the product of two simpler functions h(x) and g(x), i.e. f(x) = h(x)g(x). In that case, we use the product rule:

f'(x) = h'(x)g(x) + h(x)g'(x)

Division Rule for Differentiation: In another case, let's say our complicated function f(x) is the quotient of two different functions, f(x) = h(x)/g(x). Then we use the quotient rule:

f'(x) = [h'(x)g(x) - h(x)g'(x)] / g(x)^2

### Second Order Derivatives

Just as the derivative tells us the rate of change of a function, higher-order derivatives tell us the rate of change of the previous derivative. For example, a second-order derivative tells us about the rate of change of the first derivative. Let's say we have a function y = f(x). If f'(x) is differentiable, we can differentiate it again to get the second-order derivative, denoted d²y/dx² and also written f''(x). Let's see some problems with second-order derivatives.

### Sample Problems

Question 1: Given f(x) = x^3, find f''(x).

Solution: We first find the derivative: f'(x) = 3x^2. Differentiating again gives the second-order derivative: f''(x) = 6x.

Question 2: Given f(x) = e^x + sin(x), find f''(x).

Solution: The first derivative is f'(x) = e^x + cos(x). Differentiating again, f''(x) = e^x - sin(x).

Question 3: Given f(x) = e^x · sin(x), find the value of f''(x) at x = 0.

Solution: Since this is a product of two functions, we use the product rule:
f'(x) = e^x sin(x) + e^x cos(x) = e^x (sin(x) + cos(x))
f''(x) = e^x (sin(x) + cos(x)) + e^x (cos(x) - sin(x)) = 2e^x cos(x)
At x = 0, f''(0) = 2.

Question 5: Given y = 3e^(2x) + 2e^(3x), prove that y'' - 5y' + 6y = 0.

Solution:
y = 3e^(2x) + 2e^(3x)
y' = 6e^(2x) + 6e^(3x)
y'' = 12e^(2x) + 18e^(3x)
Substituting these values into the equation:
y'' - 5y' + 6y = 12e^(2x) + 18e^(3x) - 5(6e^(2x) + 6e^(3x)) + 6(3e^(2x) + 2e^(3x))
= (12 - 30 + 18)e^(2x) + (18 - 30 + 12)e^(3x)
= 0
Hence, proved.

Question 6: Given y = e^x (x + 1), find the value of the second derivative at x = 1.

Solution: Since this function is a product of two functions, we use the product rule:
y' = e^x (x + 1) + e^x = e^x (x + 2)
Now we can differentiate again to get the second derivative.
Again using the product rule:
y'' = e^x (x + 2) + e^x = e^x (x + 3)
At x = 1, y'' = 4e.

Question 7: Given y as a quotient of two functions, find the value of the second derivative at x = 1.

Solution: Apply the quotient rule to obtain y', then differentiate again using the product rule and evaluate; the result is y'' = 0 at x = 1. (The specific expressions in this problem were rendered as images and did not survive extraction.)
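To double-check the worked answers above, here is a small sketch using sympy (the choice of library is mine; the original article shows no code):

```python
import sympy as sp

x = sp.symbols('x')

# Question 1: f(x) = x^3  ->  f''(x) = 6x
print(sp.diff(x**3, x, 2))                    # 6*x

# Question 3: f(x) = e^x sin(x) -> f''(x) = 2 e^x cos(x), so f''(0) = 2
f2 = sp.diff(sp.exp(x) * sp.sin(x), x, 2)
print(sp.simplify(f2))                        # 2*exp(x)*cos(x)
print(f2.subs(x, 0))                          # 2

# Question 5: y = 3e^(2x) + 2e^(3x) satisfies y'' - 5y' + 6y = 0
y = 3 * sp.exp(2 * x) + 2 * sp.exp(3 * x)
print(sp.simplify(sp.diff(y, x, 2) - 5 * sp.diff(y, x) + 6 * y))   # 0

# Question 6: y = e^x (x + 1) -> y'' = e^x (x + 3), so y''(1) = 4e
y6 = sp.exp(x) * (x + 1)
print(sp.factor(sp.diff(y6, x, 2)))           # (x + 3)*exp(x)
print(sp.diff(y6, x, 2).subs(x, 1))           # 4*E
```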
2022-06-26 23:36:00
{"extraction_info": {"found_math": false, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.8907997608184814, "perplexity": 1784.6931093582666}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2022-27/segments/1656103322581.16/warc/CC-MAIN-20220626222503-20220627012503-00274.warc.gz"}
https://www.statsmodels.org/v0.10.2/datasets/generated/ccard.html
Bill Greene's credit scoring data

Description

More information on this data can be found on the homepage for Greene's Econometric Analysis. See source.

Notes

Number of observations: 72
Number of variables: 5
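A minimal sketch of pulling this dataset into Python through the statsmodels datasets interface (the accessor name ccard matches this page's URL; treat the exact call as an assumption if your statsmodels version differs):

```python
import statsmodels.api as sm

# Load Bill Greene's credit card expenditure data as pandas objects.
dataset = sm.datasets.ccard.load_pandas()
df = dataset.data

print(df.shape)              # expect (72, 5), matching the notes above
print(df.columns.tolist())   # variable names
print(df.head())
```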
2022-01-17 23:06:58
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 1, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.38727498054504395, "perplexity": 5490.157760356162}, "config": {"markdown_headings": false, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2022-05/segments/1642320300624.10/warc/CC-MAIN-20220117212242-20220118002242-00123.warc.gz"}
https://ncatlab.org/nlab/show/quiver+representation
# nLab: quiver representation

## Idea

Given a quiver $Q$, a linear representation of $Q$ (over some ground field $\mathbb{K}$) is:

1. a $\mathbb{K}$-vector space $V_v$ for each vertex $v \in Q_0$,
2. a linear map $V_v \xrightarrow{\rho(e)} V_{v'}$ for each edge $e \in Q_1$.

Notice that there is no further compatibility condition; in particular, if there are edges forming a triangle, then the associated linear maps are not required to be related under composition.

A homomorphism between two quiver representations $\rho$, $\rho'$ is a linear map $V_v \xrightarrow{\phi_v} V'_v$ for each vertex, such that for each edge $v \xrightarrow{e} v'$ the evident square commutes in Vect${}_{\mathbb{K}}$, i.e.

$$\phi_{v'} \circ \rho(e) \;=\; \rho'(e) \circ \phi_v \,.$$

This makes a category of quiver representations of $Q$ over $\mathbb{K}$, typically denoted $Rep_{\mathbb{K}}(Q)$ or similar.

In other words, if one regards $Q$ as the directed graph that it is, and considers its free category $FrCat(Q)$, then

1. a quiver representation is a functor $FrCat(Q) \xrightarrow{\rho} Vect_{\mathbb{K}}$,
2. a morphism of quiver representations is a natural transformation $\phi \colon \rho \Rightarrow \rho'$,
3. and the category of quiver representations is equivalently the functor category from the free category of the quiver to the category of vector spaces:

$$Rep_{\mathbb{K}}(Q) \;\simeq\; Func\big(FrCat(Q),\, Vect_{\mathbb{K}}\big) \,.$$
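To make the two conditions concrete, here is a small numerical sketch (an editorial illustration, not from the nLab page): a representation of the one-edge quiver with vertices v and w is a pair of dimensions plus a matrix, and a morphism is a pair of matrices making the square commute.

```python
import numpy as np

# Quiver Q with two vertices v, w and one edge e: v -> w.
# A representation over K = R assigns V_v = R^2, V_w = R^3 and a
# linear map rho(e), i.e. a 3x2 matrix.
rho_e = np.array([[1.0, 0.0],
                  [0.0, 1.0],
                  [2.0, 3.0]])

# Components of a would-be morphism phi: rho -> rho'.
phi_v = 2.0 * np.eye(2)    # linear map V_v -> V'_v
phi_w = 2.0 * np.eye(3)    # linear map V_w -> V'_w

# Define rho'(e) so that the naturality square is forced to commute:
# phi_w . rho(e) = rho'(e) . phi_v.
rho2_e = phi_w @ rho_e @ np.linalg.inv(phi_v)

assert np.allclose(phi_w @ rho_e, rho2_e @ phi_v)
print("phi is a morphism of quiver representations")
```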
2023-04-01 17:38:30
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 27, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.9335113167762756, "perplexity": 544.2343999722419}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 5, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2023-14/segments/1679296950110.72/warc/CC-MAIN-20230401160259-20230401190259-00308.warc.gz"}
http://nrich.maths.org/public/leg.php?code=5039&cl=3&cldcmpid=8098
Search by Topic

Resources tagged with Interactivities similar to Constructing Triangles (155 results).

- **Diamond Mine** (Stage 3): Practise your diamond mining skills and your x,y coordination in this homage to Pacman.
- **Drips** (Stages 2 and 3): An animation that helps you understand the game of Nim.
- **Nine Colours** (Stage 3): Can you use small coloured cubes to make a 3 by 3 by 3 cube so that each face of the bigger cube contains one of each colour?
- **Conway's Chequerboard Army** (Stage 3): Here is a solitaire type environment for you to experiment with. Which targets can you reach?
- **You Owe Me Five Farthings, Say the Bells of St Martin's** (Stage 3): Use the interactivity to listen to the bells ringing a pattern. Now it's your turn! Play one of the bells yourself. How do you know when it is your turn to ring?
- **When Will You Pay Me? Say the Bells of Old Bailey** (Stage 3): Use the interactivity to play two of the bells in a pattern. How do you know when it is your turn to ring, and how do you know which bell to ring?
- **Cogs** (Stage 3): A and B are two interlocking cogwheels having p teeth and q teeth respectively. One tooth on B is painted red. Find the values of p and q for which the red tooth on B contacts every gap on the...
- **Muggles Magic** (Stage 3): You can move the 4 pieces of the jigsaw and fit them into both outlines. Explain what has happened to the missing one unit of area.
- **Right Angles** (Stage 3): Can you make a right-angled triangle on this peg-board by joining up three points round the edge?
- **Square Coordinates** (Stage 3): A tilted square is a square with no horizontal sides. Can you devise a general instruction for the construction of a square when you are given just one of its sides?
- **Semi-regular Tessellations** (Stage 3): Semi-regular tessellations combine two or more different regular polygons to fill the plane. Can you find all the semi-regular tessellations?
- **Multiplication Tables - Matching Cards** (Stages 1, 2 and 3): Interactive game. Set your own level of challenge, practise your table skills and beat your previous best score.
- **Balancing 2** (Stage 3): Meg and Mo still need to hang their marbles so that they balance, but this time the constraints are different. Use the interactivity to experiment and find out what they need to do.
- **Lost** (Stage 3): Can you locate the lost giraffe? Input coordinates to help you search and find the giraffe in the fewest guesses.
- **Isosceles Triangles** (Stage 3): Draw some isosceles triangles with an area of 9 cm² and a vertex at (20,20). If all the vertices must have whole number coordinates, how many is it possible to draw?
- **Subtended Angles** (Stage 3): What is the relationship between the angle at the centre and the angles at the circumference, for angles which stand on the same arc? Can you prove it?
- **Got It** (Stages 2 and 3): A game for two people, or play online. Given a target number, say 23, and a range of numbers to choose from, say 1-4, players take it in turns to add to the running total to hit their target.
- **Number Pyramids** (Stage 3): Try entering different sets of numbers in the number pyramids. How does the total at the top change?
- **Picturing Triangle Numbers** (Stage 3): Triangle numbers can be represented by a triangular array of squares. What do you notice about the sum of identical triangle numbers?
- **More Number Pyramids** (Stage 3): When number pyramids have a sequence on the bottom layer, some interesting patterns emerge...
- **An Unhappy End** (Stage 3): Two engines, at opposite ends of a single track railway line, set off towards one another just as a fly, sitting on the front of one of the engines, sets off flying along the railway line...
- **Balancing 1** (Stage 3): Meg and Mo need to hang their marbles so that they balance. Use the interactivity to experiment and find out what they need to do.
- **Shear Magic** (Stage 3): What are the areas of these triangles? What do you notice? Can you generalise to other "families" of triangles?
- **Rolling Around** (Stage 3): A circle rolls around the outside edge of a square so that its circumference always touches the edge of the square. Can you describe the locus of the centre of the circle?
- **Khun Phaen Escapes to Freedom** (Stage 3): Slide the pieces to move Khun Phaen past all the guards into the position on the right from which he can escape to freedom.
- **Fifteen** (Stages 2 and 3): Can you spot the similarities between this game and other games you know? The aim is to choose 3 numbers that total 15.
- **Icosian Game** (Stage 3): This problem is about investigating whether it is possible to start at one vertex of a platonic solid and visit every other vertex once only, returning to the vertex you started at.
- **Bow Tie** (Stage 3): Show how this pentagonal tile can be used to tile the plane and describe the transformations which map this pentagon to its images in the tiling.
- **Diagonal Dodge** (Stages 2 and 3): A game for 2 players. Can be played online. One player has 1 red counter, the other has 4 blue. The red counter needs to reach the other side, and the blue needs to trap the red.
- **Shuffles Tutorials** (Stage 3): Learn how to use the Shuffles interactivity by running through these tutorial demonstrations.
- **Triangles in Circles** (Stage 3): Can you find triangles on a 9-point circle? Can you work out their angles?
- **Balancing 3** (Stage 3): Mo has left, but Meg is still experimenting. Use the interactivity to help you find out how she can alter her pouch of marbles and still keep the two pouches balanced.
- **Cosy Corner** (Stage 3): Six balls of various colours are randomly shaken into a triangular arrangement. What is the probability of having at least one red in the corner?
- **Top Coach** (Stage 3): Carry out some time trials and gather some data to help you decide on the best training regime for your rowing crew.
- **Archery** (Stage 3): Imagine picking up a bow and some arrows and attempting to hit the target a few times. Can you work out the settings for the sight that give you the best chance of gaining a high score?
- **See the Light** (Stages 2 and 3): Work out how to light up the single light. What's the rule?
- **Flip Flop - Matching Cards** (Stages 1, 2 and 3): A game for 1 person to play on screen. Practise your number bonds whilst improving your memory.
- **First Connect Three** (Stages 2 and 3): The idea of this game is to add or subtract the two numbers on the dice and cover the result on the grid, trying to get a line of three. Are there some numbers that are good to aim for?
- **A Tilted Square** (Stage 4): The opposite vertices of a square have coordinates (a,b) and (c,d). What are the coordinates of the other vertices?
- **Attractive Tablecloths** (Stage 4): Charlie likes tablecloths that use as many colours as possible, but insists that his tablecloths have some symmetry. Can you work out how many colours he needs for different tablecloth designs?
- **Estimating Angles** (Stages 2, 3 and 4): How good are you at estimating angles?
- **Volume of a Pyramid and a Cone** (Stage 3): These formulae are often quoted, but rarely proved. In this article, we derive the formulae for the volumes of a square-based pyramid and a cone, using relatively simple mathematical concepts.
- **Partitioning Revisited** (Stage 3): We can show that (x + 1)² = x² + 2x + 1 by considering the area of an (x + 1) by (x + 1) square. Show in a similar way that (x + 2)² = x² + 4x + 4.
- **Poly-puzzle** (Stage 3): This rectangle is cut into five pieces which fit exactly into a triangular outline and also into a square outline where the triangle, the rectangle and the square have equal areas.
- **Disappearing Square** (Stage 3): Do you know how to find the area of a triangle? You can count the squares. What happens if we turn the triangle on end? Press the button and see. Try counting the number of units in the triangle now...
- **Two's Company** (Stage 3): 7 balls are shaken in a container. You win if the two blue balls touch. What is the probability of winning?
- **Inside Out** (Stage 4): There are 27 small cubes in a 3 x 3 x 3 cube, 54 faces being visible at any one time. Is it possible to reorganise these cubes so that by dipping the large cube into a pot of paint three times you...
- **Nim-interactive** (Stages 3 and 4): Start with any number of counters in any number of piles. 2 players take it in turns to remove any number of counters from a single pile. The winner is the player to take the last counter.
- **Sliding Puzzle** (Stages 1, 2, 3 and 4): The aim of the game is to slide the green square from the top right hand corner to the bottom left hand corner in the least number of moves.
- **Overlap** (Stage 3): A red square and a blue square overlap so that the corner of the red square rests on the centre of the blue square. Show that, whatever the orientation of the red square, it covers a quarter of the...
2017-07-21 14:46:57
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.27366703748703003, "perplexity": 1537.2080721314367}, "config": {"markdown_headings": false, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2017-30/segments/1500549423785.29/warc/CC-MAIN-20170721142410-20170721162410-00310.warc.gz"}
http://math.stackexchange.com/questions/186145/a-fiber-bundle-over-euclidean-space-is-trivial
# A fiber bundle over Euclidean space is trivial.

What's the easiest way to see this? The only thing I could think to do was try to patch together trivializations. I couldn't find a way to make that work. Thank you!

Edit: For the record, here's why I asked about this special case of the more general result about fiber bundles over contractible spaces. In the much beloved book by Bott and Tu, it's claimed that the Leray-Hirsch theorem can be proved in the same way the Künneth theorem is proved: induct on the size of a finite good cover for the base space, applying the Mayer-Vietoris sequence and the Poincaré lemma for the induction step. It's assumed that there exists a finite good cover for the base space, but it's not assumed that this cover is a refinement of the cover of the base space which gives the local trivializations of the fiber bundle. Therefore, to apply the Poincaré lemma in the induction step, it seems that you need to know that the result I asked about is true. Since fiber bundles had just been introduced in the text, I thought maybe there was a short, elementary proof that the authors had taken for granted.

- Let me know the reason for the downvote so I don't do it again. Is it because I didn't tell you what I've tried? – Leray Hirsch, Aug 24 '12 at 3:06
- I did not downvote, but allow me to hazard a guess. (If I'm wrong, perhaps the downvoters will explain their thinking.) My guess is that basically every resource on fiber bundles contains a proof of the fact that "a fiber bundle over a contractible space is trivial", and you are trying to prove a special case of this. They, perhaps, think you should have done more of your own research before asking here. – Jason DeVito, Aug 24 '12 at 4:02
- Thanks, Jason. I guess that's fair? I did see on Wikipedia that the bundle is trivial if the base space is a contractible CW complex. I guess I asked because I thought perhaps the proof of the special case might be easier. – Leray Hirsch, Aug 24 '12 at 4:09
- @JasonDeVito There are usually several different correct answers to a mathematical question. Discouraging people from posting a question on this site just because it is easy to find an answer on the internet may not be a good idea. There might be a very good answer which is not well known. – Makoto Kato, Aug 24 '12 at 4:14
- Put a connection on the fiber bundle; then parallel transport from the zero of your vector space along a straight line in the Euclidean space gives you your trivialization. – Ryan Budney, Aug 24 '12 at 6:28
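Spelling out the final comment in symbols (a sketch only, and it adds assumptions beyond the original question, namely a vector bundle $E \to \mathbb{R}^n$ equipped with a linear connection):

```latex
% Parallel-transport trivialization sketched in the last comment.
% Assume E -> R^n is a vector bundle carrying a linear connection.
% For x in R^n, let gamma_x(t) = t x be the straight line from the origin,
% and let P_x : E_0 -> E_x denote parallel transport along gamma_x.
\[
  \Phi \colon \mathbb{R}^n \times E_0 \longrightarrow E,
  \qquad
  \Phi(x, v) = P_x(v).
\]
% Solutions of the parallel-transport ODE depend smoothly on x, so Phi is
% a smooth fiberwise-linear bijection, i.e. a global trivialization of E.
```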
2014-12-20 11:39:48
{"extraction_info": {"found_math": false, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.8601148724555969, "perplexity": 196.5825597580698}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2014-52/segments/1418802769709.84/warc/CC-MAIN-20141217075249-00043-ip-10-231-17-201.ec2.internal.warc.gz"}
https://blog.herbert.top/problemset/sliding-window-median/readme_en/
# 480. Sliding Window Median

## Description

The median is the middle value in an ordered integer list. If the size of the list is even, there is no single middle value, so the median is the mean of the two middle values.

Examples:
- [2,3,4], the median is 3
- [2,3], the median is (2 + 3) / 2 = 2.5

Given an array nums, there is a sliding window of size k which is moving from the very left of the array to the very right. You can only see the k numbers in the window. Each time the sliding window moves right by one position. Your job is to output the median array for each window in the original array.

For example, given nums = [1,3,-1,-3,5,3,6,7] and k = 3:

```
Window position                Median
---------------                ------
[1  3  -1] -3  5  3  6  7        1
 1 [3  -1  -3] 5  3  6  7       -1
 1  3 [-1  -3  5] 3  6  7       -1
 1  3  -1 [-3  5  3] 6  7        3
 1  3  -1  -3 [5  3  6] 7        5
 1  3  -1  -3  5 [3  6  7]       6
```

Therefore, return the median sliding window as [1,-1,-1,3,5,6].

Note: You may assume k is always valid, i.e. k is always smaller than the input array's size for a non-empty array. Answers within 10^-5 of the actual value will be accepted as correct.
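A straightforward reference sketch (my own, not an official solution): keep the current window in sorted order with the bisect module. Each step costs O(k), giving O(n·k) overall, which is enough to illustrate the mechanics.

```python
import bisect

def median_sliding_window(nums, k):
    """Return the median of every length-k window of nums."""
    window = sorted(nums[:k])              # current window, kept sorted

    def median():
        mid = k // 2
        if k % 2:
            return float(window[mid])
        return (window[mid - 1] + window[mid]) / 2.0

    result = [median()]
    for i in range(k, len(nums)):
        # Remove the element leaving the window and insert the new one;
        # both operations preserve sorted order.
        window.pop(bisect.bisect_left(window, nums[i - k]))
        bisect.insort(window, nums[i])
        result.append(median())
    return result

print(median_sliding_window([1, 3, -1, -3, 5, 3, 6, 7], 3))
# [1.0, -1.0, -1.0, 3.0, 5.0, 6.0]
```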
2021-07-24 20:58:11
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 1, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.22458507120609283, "perplexity": 590.438816393404}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2021-31/segments/1627046150308.48/warc/CC-MAIN-20210724191957-20210724221957-00094.warc.gz"}
https://www.physicsforums.com/threads/remdesivir-a-possible-treatment-for-covid-19.990773/
# Remdesivir - a possible treatment for COVID-19?

Homework Helper

## Main Question or Discussion Point

For anyone following remdesivir as a treatment for COVID-19: remdesivir is "intracellularly metabolized to an analogue of adenosine triphosphate that inhibits viral RNA polymerases". I am not sure whether this stops the RNA polymerase from functioning or whether it causes the RNA polymerase to produce a defective viral mRNA transcript (e.g. by inserting a few of these modified adenosine molecules instead of normal adenosine during transcription). In any event, it prevents replication of the virus by inhibiting RNA polymerase function.

This is not my area, but it seems to me that to be really effective, such a drug has to be able to enter cells selectively, i.e. enter only cells infected by the virus. Otherwise, the drug would enter healthy cells, interfere with normal RNA transcription, and damage or kill them. If remdesivir could be modified to somehow identify cells infected by SARS-CoV-2 and enter only those cells, there could be huge potential for this drug. I would appreciate hearing from others who have a background in molecular biology.

AM

Ygggdrasil

SARS-CoV-2, the virus that causes COVID-19, has an RNA genome. Therefore, to copy its RNA genome in order to make new copies of the virus, the virus requires an RNA-dependent RNA polymerase (RdRP), that is, an enzyme that makes RNA by reading off of an RNA template. The RdRP enzyme is encoded by the viral genome, so remdesivir targets a protein present only in virally infected cells, not an enzyme present in all human cells. Furthermore, the viral RdRP enzyme is very different from the DNA-dependent RNA polymerases that are present inside normal human cells and involved in transcription (copying the genetic information from DNA to mRNA so that the information can be translated into protein by the ribosome). AFAIK, there are no functional RNA-dependent RNA polymerases encoded in the human genome.

Homework Helper

So does that mean that the intracellular adenosine analogue that remdesivir delivers is taken up only by the viral RNA-dependent polymerase (RdRP), and not by the host cell's RNA polymerase nor incorporated into its RNA transcripts? If so, how would that occur?

AM

Ygggdrasil
Even though the SARS-CoV-2 RdRP and cellular RNA polymerases perform similar chemical reactions, the structures of the active sites are slightly different, and chemists can exploit these differences to design drugs that can bind to the RdRP but not to cellular RNA polymerases. This ability to discriminate between similar types of active sites enables a number of important drugs, such as the nucleoside analogs used as reverse transcriptase inhibitors in anti-HIV therapy (which can bind to the active site of HIV reverse transcriptase, but not to similar cellular DNA polymerases) or kinase inhibitor drugs like Gleevec used in anti-cancer therapy (which can selectively bind to the active sites of specific cellular protein kinases without binding to all of the various protein kinases in the body).

Here is a recent paper with a lot more information about how remdesivir binds to and inhibits the viral RdRP enzyme: https://science.sciencemag.org/content/368/6498/1499

Homework Helper

Thanks so much for your very clear response and link. The article appears to have been published today, so it is about as up to date as possible. It appears that the remdesivir ATP analogue (RTP), with its modified adenosine, attaches to the RNA primer strand at the first base pair, which terminates further RNA transcription. So that answers my initial question.

Since remdesivir was designed as a general antiviral and seemed to work well on SARS-CoV, which has a slightly differently shaped RNA polymerase than SARS-CoV-2, a bit of tweaking of the drug's shape/binding sites might be all that is needed for a really effective treatment of COVID-19.

I will take some time to go through the article over the weekend. This approach to drug "engineering" is really fascinating stuff.

AM

Ygggdrasil

> The article appears to have been published today, so it is about as up to date as possible.

For such a rapidly moving field as COVID-19 research, the published scientific literature is actually a bit out of date. For example, the Science paper that was just published today was first released as a (non-peer-reviewed) pre-print on April 9. So, even for research that is quite timely, the published scientific literature can be months behind (more typically, the peer review process takes 0.5-1+ years, inserting further delays between when a research finding is first made and when it is formally published).
> Since remdesivir was designed as a general antiviral and seemed to work well on SARS-CoV, which has a slightly differently shaped RNA polymerase than SARS-CoV-2, a bit of tweaking of the drug's shape/binding sites might be all that is needed for a really effective treatment of COVID-19.

Yes, it is likely that remdesivir could be modified to have better activity against SARS-CoV-2 (IIRC, remdesivir was originally designed against the Ebola virus). However, it would likely take quite a long time to optimize the drug and for the drug to go through clinical trials before it could be approved for widespread use in patients. Such a drug would likely not be able to help with the current outbreak, but would help if vaccination cannot fully eradicate the disease or if we encounter a new zoonotic coronavirus in the future (with three new coronaviruses emerging in the past 20 years, we are almost certain to see other new coronaviruses in the future).

Homework Helper

> Here is a recent paper with a lot more information about how remdesivir binds to and inhibits the viral RdRP enzyme: https://science.sciencemag.org/content/368/6498/1499

That paper examines the structure of the RdRP molecule and the way that remdesivir interferes with the RdRP's function in replicating the viral RNA genome. The authors mention another similar drug, EIDD-2801, that shows even greater effectiveness in blocking viral RNA replication in SARS-CoV-2:

"In particular, EIDD-2801 has been shown to be 3 to 10 times as potent as remdesivir in blocking SARS-CoV-2 replication (36). The N4 hydroxyl group off the cytidine ring forms an extra hydrogen bond with the side chain of K545, and the cytidine base also forms an extra hydrogen bond with the guanine base from the template strand. These two extra hydrogen bonds may explain the apparent higher potency of EIDD-2801 in inhibiting SARS-CoV-2 replication."

EIDD-2801 is just entering Phase 2 trials.

AM

Homework Helper

One possible problem with the remdesivir approach is that it functions only after the virus has infected the cell. Since lung epithelial cells express multiple ACE2 receptors, a cell can be attacked by several viruses, and unless the drug is 100% effective in stopping replication, the virus may still proliferate and cause a lot of damage. However,
Ygggdrasil: At least for some viruses, nucleoside analogues can function prophylactically to prevent infections. For example, Truvada, a mixture of the nucleotide analogue tenofovir and the nucleoside emtricitabine, is FDA approved as a pre-exposure prophylaxis (PrEP) medicine to prevent HIV infection. Of course, retroviruses are different than coronaviruses, so the situation could be very different. However, there is data from monkeys that prophylactic administration of remdesivir can prevent disease from the MERS coronavirus (https://www.pnas.org/content/117/12/6771):

Abstract: The continued emergence of Middle East Respiratory Syndrome (MERS) cases with a high case fatality rate stresses the need for the availability of effective antiviral treatments. Remdesivir (GS-5734) effectively inhibited MERS coronavirus (MERS-CoV) replication in vitro, and showed efficacy against Severe Acute Respiratory Syndrome (SARS)-CoV in a mouse model. Here, we tested the efficacy of prophylactic and therapeutic remdesivir treatment in a nonhuman primate model of MERS-CoV infection, the rhesus macaque. Prophylactic remdesivir treatment initiated 24 h prior to inoculation completely prevented MERS-CoV−induced clinical disease, strongly inhibited MERS-CoV replication in respiratory tissues, and prevented the formation of lung lesions. Therapeutic remdesivir treatment initiated 12 h postinoculation also provided a clear clinical benefit, with a reduction in clinical signs, reduced virus replication in the lungs, and decreased presence and severity of lung lesions. The data presented here support testing of the efficacy of remdesivir treatment in the context of a MERS clinical trial. It may also be considered for a wider range of coronaviruses, including the currently emerging novel coronavirus 2019-nCoV.

Andrew Mason: It seems, though, that the prophylactic effect is not due to remdesivir preventing the virus getting into the cell but is the result of getting a head start on the virus by getting the remdesivir nucleotide analogue RTP into the cells, so that when the virus enters the cells the RTP is already there and able to insert itself into the viral RNA polymerase, stopping viral replication from the outset of infection. If remdesivir has no serious side effects it may be able to function as a prophylactic, but at $2,000+ per dose, and given that it has to be taken intravenously, that may be somewhat impractical. The HIV drug cocktail approach is to attack the HIV virus at several stages in its replication cycle, including impeding its ability to enter cells. Although, as you say, HIV is a retrovirus that does not use an RNA transcriptase, the use of a multi-stage approach to attacking the virus has been very effective in controlling HIV and preventing AIDS.
It seems to be a reasonable way of approaching SARS-CoV-2, which has at least 4 stages where small molecule drug intervention could be effective, as shown in the diagram I posted. This article, just published yesterday, seems to suggest that approach might work with SARS-CoV-2. Some of these drugs that seem to work to stop SARS-CoV-2 replication are already approved for HIV and other viruses, so it may be relatively quick to get FDA approval. I wonder if anyone has looked at COVID-19 stats for people taking anti-HIV medication... AM

Ygggdrasil: Thanks for the link. Agreed, an IV drug is not very practical as a prophylactic. In general, HIV drug cocktails are designed to target multiple stages of the HIV replication cycle (such as those that combine reverse transcriptase inhibitors with protease inhibitors). However, the prophylactic drug Truvada combines two different reverse transcriptase inhibitors, so it is only targeting one step of the HIV life cycle (which occurs after viral entry). Note that the figure you posted is based on the assumed mechanism of action of drugs under investigation to treat COVID-19. Of the drugs listed, only one (remdesivir) has strong evidence of efficacy. Others, in particular hydroxychloroquine and the protease inhibitors (lopinavir and danoprevir), have had various studies conclude that they are not effective at treating COVID-19. Indeed, the CDC recommends against the use of hydroxychloroquine as well as lopinavir and other HIV protease inhibitors for COVID-19: https://www.covid19treatmentguidelines.nih.gov/antiviral-therapy/

Andrew Mason: All good points. My purpose in posting the diagram was to just show the points at which drug intervention could occur, not to suggest the drugs to be used.
APN001, which delivers an extra-cellular recombinant human ACE2 receptor, might assist in step 1, for example, in reducing the rate of entry of the virus into cells. When combined with some of the viral RNA polymerase inhibitors (remdesivir, EIDD-2801 or the five drugs listed in the Science Daily article published June 30), the combination might be much more effective than any individual drug. AM

Andrew Mason: It looks like Merck/Ridgebackbio is taking on Gilead's remdesivir with EIDD-2801. My bet is that EIDD-2801 will become the preferred treatment for two reasons: 1. it appears to be 3 to 10 times as potent as remdesivir in blocking SARS-CoV-2 (see the second-last paragraph of this paper); 2. EIDD-2801 is taken orally while remdesivir is injected. [Note: It appears that EIDD-2801 has just been renamed MK-4482. See also this Wikipedia article]
2020-08-08 18:31:29
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.2833835780620575, "perplexity": 4499.36926132523}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2020-34/segments/1596439738015.38/warc/CC-MAIN-20200808165417-20200808195417-00048.warc.gz"}
https://forum.snap.berkeley.edu/t/add-scripts-to-a-sprite/5257/26
# Add scripts to a sprite

haha lol Good work! This block doesn't work with accent letters (like é, from a French Canadian keyboard) (if I add é to the list of letters in the function, it works...). And a little bug with "apostrophe" (in combination with "shift"): if I write an apostrophe (shift+apostrophe), the apostrophe is considered pressed even if this is not the case during subsequent detections. If I write an apostrophe again, everything comes back in order...

use JS keys pressed then!

that's a sticky key, so there's no way to fix that

@bh can you split mine and @loucheman's posts to that topic, please?

Sadly, though, you can't really directly delete them, because a) a sprite can have multiple of the same script, and b) there would be no way to label and find them without changing the way scripts work altogether. But you can do this.scripts.children = [] which deletes all of the scripts at once (included in the project). Also, the scripts don't appear in the scripting area right away, but since the purpose of this is to work programmatically, that doesn't really matter. The script is still present in the scripting area, even if you can't see it. Finally, you need to use @ego-lay_atman-bay's Hat blocks in grey rings for the script to function.

the clear scripts block is buggy

how so?

the first time you run it, nothing happens, but when you click it works. maybe a "delete last made script" block?

No, it does work immediately, but the canvas doesn't update properly (which, as stated earlier, shouldn't matter because this is supposed to be used programmatically). Try it: put a when hat block and then clear scripts.

After (about a minute of) experimenting, I found out that the scripts are ordered by first = least recently dropped, so that could be possible! hang on edit: aaand done! again, it doesn't update until you click, but that shouldn't really matter

cool! also, the add script block is broken, you need to do this:

oh yeah i forgot to add that edit: fixed

Imagine this library combined with the script builder library.

huh?

You can use the script builder library to create a script, then use this library to put the newly made script in the sprite.

I see. You could also use attach script to mouse (which is neater and not as buggy imo)
2022-09-28 19:01:46
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 1, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.4944930076599121, "perplexity": 2910.6409166988733}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2022-40/segments/1664030335276.85/warc/CC-MAIN-20220928180732-20220928210732-00275.warc.gz"}
https://www.physicsforums.com/threads/sketching-loci-in-the-complex-plane.659726/
# Homework Help: Sketching loci in the complex plane

1. Dec 18, 2012

### jj364

1. The problem statement, all variables and given/known data

Make a sketch of the complex plane showing a typical pair of complex numbers z1 and z2. Describe the geometrical figure whose vertices are z1, z2 and z0 = a + i0.

2. Relevant equations

z2 − z1 = (z1 − a)e^(i2π/3)
a − z2 = (z2 − z1)i2π/3

where a is a real positive constant.

3. The attempt at a solution

I really am not sure what to do on this question; my initial thoughts were that the solution would look like 3 lines in the complex plane all 2π/3 apart, so that it would look like the solution to a roots of unity question. I tried to rearrange to give z2 in terms of a, which yielded z2(1+e^(2πi/3) − e^(2πi/3)/(1+e^(2πi/3)) = a(1+e^(4πi/3)/(1+e^(2πi/3))). But to be honest I really don't know where I am going with this!

2. Dec 18, 2012

### Michael Redei

I think you're missing an "e" in your second "relevant equation" and that you mean a − z2 = (z2 − z1)e^(i2π/3) instead. If that's right, you might try to interpret geometrically what multiplying a complex number by e^(i2π/3) means.

3. Dec 18, 2012

### jj364

e^(2πi/3) = −1/2 + i√3/2, so multiplying by it would change the real and imaginary components accordingly. So would it be best to split into real and imaginary components, so z1 = x1 + iy1 and z2 = x2 + iy2, then substitute these into the equations? Which I think gives z2 = 1/2(x1+a) + i3y1√3/2.

4. Dec 18, 2012

### Michael Redei

That doesn't really say a lot. What does such a multiplication f(z) = z·e^(2πi/3) look like geometrically? If you sketch 1 and f(1) in the complex plane, how could you describe the geometrical operation that takes you from 1 to f(1)? How about 1/2 and f(1/2)? How about 1+i and f(1+i)? If you can discover some similarity, you can apply this knowledge to your original problem.

5. Jan 7, 2013

### jj364

Ok, so does it rotate them by 2π/3, keeping the same magnitude? But I'm still struggling to work out my problem from this. Do I need to just think about it, or can I actually solve the problem using the equations? Because I've tried eliminating to no avail.

6. Jan 7, 2013

### jj364

Actually I think I might have worked it out. I think it is just the solutions to z^3 = 1, so 1, e^(2πi/3), e^(4πi/3). I tried it for the equations and it worked; is this right? They are all rotations of 2π/3 of each other so it does make sense.
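For what it's worth, here is a compact way to finish the geometric description the hints are driving at (my own summary, assuming the corrected second relation a − z2 = (z2 − z1)e^(i2π/3)):

$$|z_2 - z_1| = |e^{i2\pi/3}|\,|z_1 - a| = |z_1 - a|, \qquad |a - z_2| = |e^{i2\pi/3}|\,|z_2 - z_1| = |z_2 - z_1|,$$

so the three sides joining $a$, $z_1$ and $z_2$ all have the same length, and each side is the previous one rotated by $2\pi/3$. The three points are therefore the vertices of an equilateral triangle, which is consistent with the observation that the cube roots of unity (a special case) form one.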
2018-09-26 08:28:07
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.4731176793575287, "perplexity": 997.2482918814081}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.3, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2018-39/segments/1537267164469.99/warc/CC-MAIN-20180926081614-20180926102014-00120.warc.gz"}
https://stats.stackexchange.com/questions/255918/boltzmann-machines-learning-algorithm
# Boltzmann machines: learning algorithm

I'm trying to study Boltzmann machines, but I don't understand this recurring formula for the training stage of the weights $w$: $\Delta w_{ij} = E_{data} (v_i h_j ) − E_{model} (v_i h_j )$. All references state that $E_{data}$ is the expectation observed in the training set while $E_{model}$ is "that same expectation under the distribution defined by the model". I don't understand what this "expectation of the model" is and why it is intractable; is there a clear reference for understanding this concept, which is still unclear to me?

The expectation of the model, which refers to the one arising in the negative phase of learning where you Gibbs sample freely across all neurons, is intractable because the partition function is intractable. It is intractable because you need the expectation over hidden AND visible units (the model), which requires an exponential sum over both. Because it is intractable you have to estimate the maximum likelihood gradient with Monte Carlo methods. So you just take the values once the Markov chain is burnt in as an estimate of that intractable computation.

• It is unclear what you are talking about and how it relates to the question. – Michael R. Chernick Apr 23 '17 at 19:08
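To make the two phases concrete, here is a minimal numerical sketch (my own illustration, not from the original posters) of how the model expectation is approximated by sampling. It uses a restricted Boltzmann machine with binary units, omits bias terms for brevity, and uses a single Gibbs step (the CD-1 shortcut) instead of a fully burnt-in chain:

```python
# Minimal CD-1 sketch for a restricted Boltzmann machine (binary units,
# biases omitted for brevity). pos approximates E_data[v_i h_j]; neg
# approximates the intractable E_model[v_i h_j] with one Gibbs step.
import numpy as np

rng = np.random.default_rng(0)

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

def cd1_gradient(W, v_data):
    # Positive phase: hidden probabilities with visibles clamped to data.
    p_h_data = sigmoid(v_data @ W)                 # P(h_j = 1 | v_data)
    pos = v_data.T @ p_h_data                      # ~ E_data[v_i h_j]
    # Negative phase: one Gibbs step stands in for a model sample.
    h = (rng.random(p_h_data.shape) < p_h_data).astype(float)
    p_v = sigmoid(h @ W.T)                         # P(v_i = 1 | h)
    v_model = (rng.random(p_v.shape) < p_v).astype(float)
    p_h_model = sigmoid(v_model @ W)
    neg = v_model.T @ p_h_model                    # ~ E_model[v_i h_j]
    return (pos - neg) / v_data.shape[0]

# Toy usage: 6 visible units, 3 hidden units, a batch of 8 binary vectors.
W = 0.01 * rng.standard_normal((6, 3))
batch = (rng.random((8, 6)) < 0.5).astype(float)
for _ in range(100):
    W += 0.1 * cd1_gradient(W, batch)              # Delta w = E_data - E_model
```

With a longer chain (more alternating Gibbs steps before reading off neg) the estimate approaches the true model expectation, which is exactly the quantity that the exponential sum over all visible and hidden configurations makes intractable to compute directly.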
2019-12-14 12:26:49
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.8763535022735596, "perplexity": 511.7282232791629}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 5, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2019-51/segments/1575541157498.50/warc/CC-MAIN-20191214122253-20191214150253-00142.warc.gz"}
http://math.stackexchange.com/questions/112258/bipartite-graph-non-isomorphic-to-a-subgraph-of-any-k-cube
# Bipartite graph non-isomorphic to a subgraph of any k-cube

Find a bipartite graph that is not isomorphic to a subgraph of any k-cube.

- Welcome to MSE. Wording your questions in a Polite Language will be appreciated and welcome on this forum. Show us what you have done. Also, tell us if it's homework and if so, add the (homework) tag. – user21436 Feb 23 '12 at 0:54
- In addition, showing what you already know (or think) about the problem will allow the community to help you more effectively. – Austin Mohr Feb 23 '12 at 1:13

Note that the $k$-cube can be represented with nodes being the length-$k$ strings of $0$s and $1$s, where two nodes are adjacent if they differ in exactly one coordinate. Consider the points $(1,0, \dots ,0)$ and $(0,1, 0, \dots ,0)$. How many points in any $k$-cube are adjacent to both of those points?
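A brute-force check of where the hint leads (my own code, not part of the original answer): in any $k$-cube, two distinct vertices share at most 2 neighbors, so the complete bipartite graph $K_{2,3}$, whose two-vertex side would need 3 common neighbors, can serve as the sought example.

```python
# Verify by exhaustion that two distinct vertices of the k-cube have at
# most 2 common neighbors (here for k = 2..5; the pattern is general).
from itertools import product

def max_common_neighbors(k):
    verts = list(product((0, 1), repeat=k))
    # Neighbors differ in exactly one coordinate (Hamming distance 1).
    nbrs = {v: {w for w in verts
                if sum(a != b for a, b in zip(v, w)) == 1}
            for v in verts}
    return max(len(nbrs[u] & nbrs[v])
               for i, u in enumerate(verts) for v in verts[i + 1:])

for k in range(2, 6):
    print(k, max_common_neighbors(k))   # prints 2 for every k here
```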
2015-07-07 09:28:08
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.40403860807418823, "perplexity": 356.4825480022695}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2015-27/segments/1435375099105.15/warc/CC-MAIN-20150627031819-00267-ip-10-179-60-89.ec2.internal.warc.gz"}
https://www.physicsforums.com/threads/spin-wave-approximation-bosonic-operator-question.459883/
# Spin-wave approximation - bosonic operator question

Can someone explain the attached image for me please? I do not understand how $$2\delta_{k, k'}a_{k'}^{\dagger}a_{k}$$ becomes $$a_{k}^{\dagger}a_{k} + a_{-k}^{\dagger}a_{-k}$$ To me it should just be $$2a_{k}^{\dagger}a_{k}$$ and also I do not understand how $$e^{-ik}a_{-k}a_{k} + e^{ik}a_{-k}^{\dagger}a_{k}^{\dagger} = \cos(k) a_{-k}a_{k} + \cos(k) a_{-k}^{\dagger}a_{k}^{\dagger}$$

You have to remember that this occurs under the summation over the BZ - as the sum includes -k for every k, you can take, for example, $\sum_{k \in BZ} a^\dagger_k a_k \to \sum_{k \in BZ} a^\dagger_{-k} a_{-k}$ with impunity.

That is excellent, thanks theZ. However, it's still not clear to me why $$e^{-ik}a_{-k}a_{k} + e^{ik}a_{-k}^{\dagger}a_{k}^{\dagger} = \cos(k) a_{-k}a_{k} + \cos(k) a_{-k}^{\dagger}a_{k}^{\dagger}$$ Can you explain that? To me it implies that $$a_{-k}a_{k} = a_{-k}^{\dagger}a_{k}^{\dagger}$$ and I don't see why that should be true. Thanks again.

As I said, you must understand the equality after summing together k, -k. Look at the creation and annihilation terms separately. For, say, the creation part, call the term to be summed, as initially written, f(k). Call the term to be summed, as the text has rewritten it, g(k). f(k) + f(-k) = g(k) + g(-k) by the definition of cosine. If the operators were fermionic, you would get i sin(k).

Excellent, I finally got it! Thank you theZ. It makes perfect sense now. Do you also happen to know about the BCS Hamiltonian? For instance, when the BCS Hamiltonian contains the summation $$\sum_{\vec{k} \sigma} c_{\vec{k}\sigma}^{\dagger}c_{\vec{k}\sigma}$$ does this also imply summation over -k and -sigma?
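Spelled out, theZ's pairing argument for the annihilation part looks like this (my own write-up; it assumes the bosonic operators at distinct momenta commute, so $a_{k}a_{-k} = a_{-k}a_{k}$):

$$\sum_{k} e^{-ik} a_{-k} a_{k} = \frac{1}{2}\sum_{k}\Bigl(e^{-ik} a_{-k} a_{k} + e^{+ik} a_{k} a_{-k}\Bigr) = \sum_{k} \frac{e^{-ik}+e^{+ik}}{2}\, a_{-k} a_{k} = \sum_{k} \cos(k)\, a_{-k} a_{k},$$

where the first equality just averages the sum with its relabeled copy $k \to -k$. The creation part works the same way and yields the $\cos(k)\,a^{\dagger}_{-k}a^{\dagger}_{k}$ term; for fermionic operators the anticommutation flips a sign and a $\sin(k)$ appears instead, as theZ notes.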
2021-01-21 15:09:35
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 1, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.9170135259628296, "perplexity": 872.2808710229833}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2021-04/segments/1610703524858.74/warc/CC-MAIN-20210121132407-20210121162407-00568.warc.gz"}
https://asmedigitalcollection.asme.org/offshoremechanics/article-abstract/122/4/289/445827/Numerical-Prediction-of-the-Hydrodynamic-Loads-and?redirectedFrom=fulltext
An incompressible Navier-Stokes flow algorithm is coupled with an elastic body structural response to numerically investigate the hydrodynamics of several relevant offshore applications. These applications include the effects of surface roughness on a bare cylinder and the study of vortex-induced vibrations (VIV) for a cylinder at high Reynolds numbers. The Reynolds number for the roughness cases was $Re = 4\times10^6$, while the Reynolds number for the VIV cases ranged from $2.25\times10^5 \leq Re \leq 4.75\times10^5$. Additional VIV cases were also performed for two common suppression devices: strakes and fairings. The results from both the roughness and bare cylinder VIV applications were compared to experimental data in order to further validate the numerical scheme and illustrate the effectiveness of applying Navier-Stokes technologies to offshore applications. [S0892-7219(00)00604-X]
2019-08-26 09:45:21
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 2, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.5520716309547424, "perplexity": 13203.051060462663}, "config": {"markdown_headings": false, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2019-35/segments/1566027331485.43/warc/CC-MAIN-20190826085356-20190826111356-00446.warc.gz"}
https://economics.stackexchange.com/questions/9245/why-gdp-positive-growth-does-not-decrease-inflation
# Why does positive GDP growth not decrease inflation?

My understanding is: if production increases, the amount of goods increases for the same amount of money available in the market. Because of this, we should see prices of goods decrease as supply gets bigger. Since we say inflation increases with GDP growth, where am I wrong in thinking it should be the other way around?

• Nice question! I think none of the current answers really get it right. My intuition would be to distinguish between the sources of growth. If you have growth due to technological (supply side) improvements we should indeed see falling prices if the money supply remains the same. However, if technology is unchanged but there is demand driven growth then for a fixed money supply we should see higher inflation. One would need to spell this out in a general equilibrium model (with sticky prices?) to make this a proper answer, which is why I posted this as a comment only. – HRSE Nov 19 '15 at 10:21

GDP growth would lead to deflation if the money supply remained unchanged. The government usually increases the money supply as the economy grows to avoid deflation. Controlling the money supply is not an easy task, though, because the economy responds with a lag to every policy decision. But ideally, the government aims at keeping inflation low (1-4%) and avoiding deflation. Zero inflation is also undesirable because if you know your money will be worth just as much a year from now, you have less incentive to invest.

Inflation: $$\frac{P_1-P_0}{P_0}$$ where $P_t$ is the price level at time $t$. This formula is just the percentage change in price.

GDP Growth: From Investopedia: "A measure of economic growth from one period to another expressed as a percentage and adjusted for inflation (i.e. expressed in real as opposed to nominal terms). The real economic growth rate is a measure of the rate of change that a nation's gross domestic product (GDP) experiences from one year to another."

So inflation concerns itself with the changes in prices over time. Price can be measured by something like the CPI (consumer price index). GDP, on the other hand, focuses on what can be produced. If you look at the definition, you can see that in calculating real GDP, the inflation rate is essential. Real GDP is GDP controlled for inflation, which is useful when comparing different years. An increase in inflation can be a driver of increased nominal GDP but not of increased real GDP. This site provides a pretty solid background and isn't too hard to follow.
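To see the first answer's mechanism in numbers, here is a toy sketch (my own illustration) built on the quantity-of-money identity $MV = PY$, which the answer does not state explicitly but which captures its logic: with money supply and velocity fixed, real growth alone pushes the price level down, and inflation only appears when money grows alongside output.

```python
# Toy sketch of the answer's point via the quantity identity M*V = P*Y
# (the identity is my addition; the answer only argues informally).
def price_level(money_supply, velocity, real_gdp):
    return money_supply * velocity / real_gdp

# Fixed money supply and velocity, 3% real growth -> prices fall (deflation).
p0 = price_level(1000, 2.0, 500)
p1 = price_level(1000, 2.0, 515)
print((p1 - p0) / p0)   # -0.029..., i.e. ~3% deflation

# Central bank grows money 5% alongside 3% real growth -> mild inflation.
p2 = price_level(1050, 2.0, 515)
print((p2 - p0) / p0)   # 0.019..., i.e. ~2% inflation
```

The same arithmetic shows why matching money growth to real growth (here 3% each) would leave the price level roughly unchanged, which is the balancing act the answer describes.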
2020-10-31 14:39:53
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 1, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.8010796904563904, "perplexity": 867.7422983918983}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2020-45/segments/1603107918164.98/warc/CC-MAIN-20201031121940-20201031151940-00630.warc.gz"}
http://www.ams.org/mathscinet-getitem?mr=293375
MathSciNet bibliographic data MR293375 46B15. Schonefeld, Steven. Schauder bases in the Banach spaces $C^k(\mathbf{T}^q)$. Trans. Amer. Math. Soc. 165 (1972), 309–318.
2016-06-26 03:18:03
{"extraction_info": {"found_math": true, "script_math_tex": 1, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.9987356066703796, "perplexity": 6222.328592929919}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.3, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2016-26/segments/1466783394605.61/warc/CC-MAIN-20160624154954-00118-ip-10-164-35-72.ec2.internal.warc.gz"}
https://math.stackexchange.com/questions/3611053/the-function-field-of-an-affine-part-of-a-projective-variety
# The function field of an affine part of a projective variety Let $$\phi:\mathbb{A}^n \to U_0\subseteq \mathbb{P}^n$$ be given by $$\phi(a_1,\ldots,a_n)=(1:a_1:\ldots:a_n).$$ Let $$X\subseteq \mathbb{P}^n$$ be an irreducible Zariski-closed subspace (I call this a projective variety) such that $$X\cap U_0\neq \emptyset.$$ Then $$Y:=\phi^{-1}(X\cap U_0)\subseteq \mathbb{A}^n$$ is an irreducible Zariski-closed subspace (I call this an affine variety). Let $$\theta:k[y_1,\ldots,y_n] \to k(X)$$ be the $$k$$-algebra homomorphism such that $$\theta(y_i)=x_i/x_0$$ for $$i=1,\ldots,n.$$ PROBLEM. I'm struggling to show (in an algebraic fashion) that $$\ker\theta$$ is the vanishing ideal of $$Y.$$ Any hints or help greatly appreciated! ATTEMPT. Recall that $$k(X)$$ consists of formal fractions $$g/h$$ where 1. $$g,h \in k[x_0,\ldots,x_n]$$ are homogeneous of the same degree, 2. $$h$$ does not vanish on $$X$$ i.e. $$h\notin I(X),$$ 3. we identify two fractions $$g/h$$ and $$g'/h'$$ if and only if $$gh'-g'h \in I(X).$$ Note that, for any $$f \in k[y_1,\ldots,y_n],$$ we have $$\theta(f)=\frac{F(x_0,x_1,\ldots,x_n)}{x_0^{\deg f}}$$ where $$F$$ is the homogenisation of $$f$$ at $$x_0.$$ It follows that $$f \in \ker \theta$$ if and only if $$F \in I(X).$$ Clearly, if $$F \in I(X),$$ then $$f=F(1,y_1,\ldots,y_n) \in I(Y).$$ Conversely, if $$f \in I(Y),$$ then $$F \in I(X\cap U_0).$$ Hence, since $$X\cap U_0$$ is dense in $$X,$$ it follows that $$F \in I(X).$$ This proves the claim (right?). • One slight nitpick: the kernel should be the vanishing ideal of $Y$. Anyways, this ought to be fairly direct from the definition of $k(X)$. Please add your definition and any attempts you've made to your post. – KReiser Apr 6 '20 at 1:50 • Oh yes, sorry, I've changed to "vanishing ideal of Y". I think I've managed to give the proof now - does it look right to you? Thanks! – user350031 Apr 6 '20 at 7:27
2021-04-16 08:09:29
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 34, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.9615409970283508, "perplexity": 205.01792594681905}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2021-17/segments/1618038088731.42/warc/CC-MAIN-20210416065116-20210416095116-00423.warc.gz"}
https://math.stackexchange.com/questions/1751502/law-of-large-numbers-for-martingales
# Law of Large Numbers for Martingales

The following question has me stumped: Let $X_n$ be a square integrable martingale with $E((X_n)^2)\leq n$ for all $n$. Prove that $X_n/n$ tends to $0$ almost surely. (This is in a sense a law of large numbers, generalizing the case where $X_n$ is a sum of $n$ iid zero mean random variables.) Any ideas?

We assume $X_0=0$ without loss of generality. Define $$Y_n:=\sum_{i=1}^n\frac{X_i-X_{i-1}}i, n\geqslant 1, \quad Y_0:=0.$$ Then $\left(Y_n\right)_{n\geqslant 1}$ is a martingale (for the same filtration as $\left(X_n\right)_{n\geqslant 1}$) and using the fact that $\left(X_i-X_{i-1}\right)_{i\geqslant 1}$ is a martingale differences sequence, we have \begin{align} \mathbb E\left[Y_n^2\right]=\sum_{i=1}^n\frac 1{i^2}\left(\mathbb E\left[X_i^2\right]-\mathbb E\left[X_{i-1}^2\right]\right). \end{align} Now, using Abel's transformation and the assumption on $\mathbb E\left[X_i^2\right]$, we derive boundedness of the sequence $\left(\mathbb E\left[Y_n^2\right]\right)_{n\geqslant 1}$. Using the martingale convergence theorem, we get that the sequence $\left(Y_n\right)_{n\geqslant 1}$ converges almost surely to some random variable $Y$. Now, we have (accounting $X_0=0$) \begin{align} \frac{X_n}n&=\frac 1n\sum_{l=1}^n\left(X_l-X_{l-1}\right)\\ &=\frac 1n\sum_{l=1}^n\frac{X_l-X_{l-1}}l\cdot l\\ &=\frac 1n\sum_{l=1}^n\left(Y_l-Y_{l-1}\right)\cdot l\\ &=\frac 1n\sum_{k=1}^nY_k\cdot k-\frac 1n\sum_{k=0}^{n-1}Y_k\cdot (k+1)\\ &=\frac 1n\sum_{k=1}^nY_k\cdot k-\frac 1n\sum_{k=1}^{n-1}Y_k\cdot (k+1)\\ &=Y_n-\frac 1n\sum_{k=1}^nY_k, \end{align} from which it follows that $X_n/n\to 0$ almost surely.

The claim in question is a corollary of a standard SLLN for martingale difference sequences (MDS).

SLLN for MDS

The statement of the SLLN for MDS is as follows. If $$N_t$$ is a martingale difference sequence (MDS) such that $$\sum\limits_{t=1}^{\infty} \frac{E[N_t^2]}{t^2} < \infty,$$ then $$\frac{1}{n} \sum_{t=1}^n N_t \rightarrow 0 \;\;a.s.$$

(In this case, the martingale difference sequence $$N_t$$ is given by differencing the martingale $$X_t$$: $$N_t = X_t - X_{t-1}$$. Then summation by parts gives \begin{align*} \sum_{t=1}^n \frac{E[N_t^2]}{t^2} &= \sum_{t=1}^n \frac{E[X_t^2] - E[X_{t-1}^2]}{t^2} \\ &= \frac{E[X_n^2]}{n^2} - \sum_{t = 1}^{n} E[X_{t-1}^2] \left( \frac{1}{t^2} - \frac{1}{(t-1)^2} \right). \end{align*} The assumption that $$E[X_{t}^2] = O(t)$$ implies that $$E[X_{t-1}^2] ( \frac{1}{(t-1)^2} - \frac{1}{t^2} ) = O(\frac{1}{t^2}).$$ Therefore $$\sum\limits_{t=1}^{\infty} \frac{E[N_t^2]}{t^2} < \infty$$.)

In turn, the SLLN for MDS can be shown via two arguments. Both are standard devices for results of this type, one via the martingale convergence theorem and another via Kolmogorov's martingale maximal inequality.

Via Martingale Convergence Theorem

(The previous answer is a variation of this argument.) If $$\sum\limits_{t=1}^{\infty} \frac{E[N_t^2]}{t^2} < \infty$$, the martingale $$Y_n = \sum\limits_{t = 1}^n \frac{N_t}{t}$$, $$n \geq 1$$, is bounded in $$L^2$$, therefore converges almost surely (and in $$L^2$$). Therefore, by Kronecker's lemma, $$\frac{1}{n}\sum_{t = 1}^n N_t \stackrel{a.s.}{\rightarrow} 0$$ as $$n \rightarrow \infty$$.

Via Maximal Inequality

Consider again the $$L^2$$-martingale $$Y_n = \sum\limits_{t = 1}^n \frac{N_t}{t}$$, $$n \geq 1$$. Let $$\sigma^2_t = \frac{E[ N_t^2 ]}{t^2}$$.
By the maximal inequality, for all $$n > 0$$ and for all $$\epsilon > 0$$, $$P( \sup_{m \geq n} | Y_m - Y_n | \geq \epsilon ) \leq \frac{K}{\epsilon^2} \sum_{t \geq n} \sigma^2_t$$ for some constant $$K$$ independent of $$n$$. Therefore $$P( \inf_n \sup_{m \geq n} | Y_m - Y_n | \geq \epsilon ) = 0$$ for all $$\epsilon > 0$$. In other words, the sequence $$Y_n$$, $$n \geq 1$$, is Cauchy, therefore converges, with probability $$1$$. Again by Kronecker's lemma, $$\frac{1}{n}\sum_{t = 1}^n N_t$$ converges to zero as $$n \rightarrow \infty$$ with probability $$1$$.

• I just looked up my intro probability script because I could not believe it: The strong law of large numbers was proved in 2 pages using 4th moments. You proved a more general statement in 3 lines and I can not find an issue with it. wtf Jul 28 at 20:11
• ah I did not notice the change from 1/t to 1/n at first. I guess there is some of the difficulty hidden in kronecker's lemma Jul 28 at 20:21
• The martingale theorems (martingale convergence theorem and maximal inequality) are kind of big hammers. (Correct me if otherwise, but I don't believe the argument you refer to exploits the martingale structure.) Aug 7 at 2:44
• @FelixB. No, not "handwaved." It's the Martingale Convergence Theorem---as stated. Aug 9 at 12:19
• hm, no I made a mistake we do not have monotonicity since we do not care about $\sum_{t=1}^n \frac{N_t^2}{t^2}$ but rather $Y_n^2=\left(\sum_{t=1}^n\frac{N_t}{t}\right)^2$ and I don't think the statement "bounded in $L^2$ implies almost sure convergence" is true: math.stackexchange.com/a/138054/445105 Aug 11 at 15:03
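Since both arguments close by citing Kronecker's lemma, it may help to record the statement being invoked (a standard fact, added here for reference):

$$\text{If } 0 < b_1 \leq b_2 \leq \cdots \text{ with } b_n \uparrow \infty \text{ and } \sum_{t=1}^{\infty} \frac{x_t}{b_t} \text{ converges, then } \frac{1}{b_n} \sum_{t=1}^{n} x_t \rightarrow 0.$$

Here it is applied pathwise with $b_t = t$ and $x_t = N_t(\omega)$, on the probability-one event where $\sum_t N_t/t$ converges.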
2021-10-26 05:07:30
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 1, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 34, "wp-katex-eq": 0, "align": 2, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.9953634738922119, "perplexity": 312.1075501992292}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2021-43/segments/1634323587799.46/warc/CC-MAIN-20211026042101-20211026072101-00596.warc.gz"}
http://physics.stackexchange.com/questions/24969/how-to-convert-a-fits-file-to-xls-excel-file
# How to convert a FITS file to .xls Excel file?

We are trying to determine the isophotes in elliptical galaxies in order to check de Vaucouleurs' law. To do so, we want to convert the data from a FITS file to Excel and analyze it using Excel math capabilities. Does someone know how to make such a conversion?

Yes, NASA's FTools software contains a program that will do this for you. Go to the FTools website and download a copy of the HEATOOLS. You want to specify that you want the Fimage package on the download page. Since you're running Windows, you'll probably need to download the PC-Cygwin package and install Cygwin as well, as there is no native Windows version. Alternately you can try the FTools through NASA's online interface called WebHera. In either case the tool you want to use is called fimgdump; this will dump an image into an ASCII text file that you can then import into Excel. Of course if you want a whole suite of image analysis software, I strongly recommend looking at IRAF. It is an old, but well tested and still widely used, astronomical data analysis package.

- Good answer, and I'll repeat your IRAF suggestion - it's going to be much better for most analysis. – spencer nelson Jun 9 '11 at 15:42

If you only have a few files to convert, and you don't want to install software yourself, the CDF group at GSFC offers a web service that will convert from FITS to ASCII, which can then be imported in Excel. If you have more to convert, and you're willing to do a little programming, there's a WSDL description to generate a SOAP client. They also have already compiled applications to do some of the conversions, but you'd have to take the FITS -> CDF -> ASCII route to use those.

Thanks guys. I'll be checking those two. I was combing the web and found a few additional sites. The next one offers a list of "FITS I/O Libraries": http://fits.gsfc.nasa.gov/fits_libraries.html#java_grosbol I already tried opening a FITS file with MATLAB. It created a new int16 array which I used to create 3D meshes of NGC5921. You can view them here: https://picasaweb.google.com/108572494054451266909/NGC5921#

- Alon, did you know you can edit your original question body? This allows you to keep it up to date with any progressions / further information you might come across; this post is an excellent candidate for a question update and it would be great if you would take the time to edit it in. – Grant Thomas Jul 8 '11 at 19:56
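If a scriptable route is acceptable, a modern alternative (my suggestion, not from the original answers) is to dump the FITS image to CSV with Python's astropy and numpy, and open the CSV in Excel; the file names here are placeholders:

```python
# Dump the primary-HDU image of a FITS file to CSV for Excel.
# Assumes a 2-D image; "galaxy.fits" is a hypothetical file name.
import numpy as np
from astropy.io import fits

data = fits.getdata("galaxy.fits")
np.savetxt("galaxy.csv", data, delimiter=",", fmt="%g")
```

One caveat: spreadsheet column limits (16,384 columns in modern Excel) can be exceeded by large images, so cropping to the region around the galaxy before export may be necessary.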
2015-09-04 14:36:37
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 1, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.20417501032352448, "perplexity": 448.84415988271246}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2015-35/segments/1440645353863.81/warc/CC-MAIN-20150827031553-00055-ip-10-171-96-226.ec2.internal.warc.gz"}
https://www.gradesaver.com/textbooks/math/calculus/thomas-calculus-13th-edition/chapter-16-integrals-and-vector-fields-section-16-8-the-divergence-theorem-and-a-unified-theory-exercises-16-8-page-1025/7
## Thomas' Calculus 13th Edition

$$-8 \pi$$

For $F = A\,i + B\,j + C\,k$, we know that $div\, F=\dfrac{\partial A}{\partial x}+\dfrac{\partial B}{\partial y}+\dfrac{\partial C}{\partial z}$. From the given equation we have $\nabla \cdot F = x-1$, so, converting to cylindrical coordinates (where $x = r\cos\theta$ and the volume element is $r \, dz \, dr \, d\theta$), $$Flux =\iiint_{D}(x-1) \, dz \, dy \, dx =\int_{0}^{2\pi}\int_{0}^{2}\int_{0}^{r^2} (r \cos \theta-1)\, r \space dz \space dr \space d\theta=\int_{0}^{2 \pi}\left(\dfrac{32}{5} \cos \theta -4\right) d\theta = -8 \pi$$
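As a sanity check, the integral can be reproduced symbolically (my own verification script using SymPy, not anything from the text):

```python
# Quick symbolic check of the flux: integrate the divergence x - 1 over
# the region in cylindrical coordinates, including the Jacobian r.
import sympy as sp

r, theta, z = sp.symbols('r theta z', positive=True)
integrand = (r * sp.cos(theta) - 1) * r          # (x - 1) dV, x = r cos(theta)
flux = sp.integrate(integrand,
                    (z, 0, r**2), (r, 0, 2), (theta, 0, 2 * sp.pi))
print(flux)   # -8*pi
```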
2023-03-21 10:51:54
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 1, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.9969631433486938, "perplexity": 254.2778232750154}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2023-14/segments/1679296943695.23/warc/CC-MAIN-20230321095704-20230321125704-00648.warc.gz"}
https://codereview.meta.stackexchange.com/questions/8953/give-jison-a-higher-tag-priority-than-javascript?noredirect=1
# Give [jison] a higher tag priority than [javascript]

Recently I asked a question and created the new tag [jison]. I also tagged the question with [javascript] because that is what the language compiles to. However, I noticed that the title in the browser was prefixed with javascript - which I believe should be changed to jison - as the primary code is written in the language Jison. Excuse me if tag priority cannot be changed, but I believe this is a bug and should be fixed.

• IIRC it takes the most popular tag, rather than a 'language' tag. – Peilonrayz Aug 24 '18 at 14:51
• @Peilonrayz Then in this case, should I not tag the question with javascript? – FreezePhoenix Aug 24 '18 at 14:53
• I wouldn't, as the question doesn't seem to have anything to do with JavaScript. And following your train of thought we should tag all C/C++ questions with assembly... But then it'd use a different tag rather than JavaScript so still wouldn't solve this problem. – Peilonrayz Aug 24 '18 at 14:55
• – brug Aug 24 '18 at 15:07

I looked and I don't think that we can change the hierarchy of the tags because they are based on SO's tag system. My guess is that if you posted on their meta about it for their site, you may be able to get it changed. But I'm not sure about the whole process. Something else that should be discussed is the syntax highlighting. If it is close enough to JavaScript I could set that for the tag, but I think it would only apply on Code Review.

• The syntax for Jison is very close to that of YACC as opposed to JS – FreezePhoenix Aug 24 '18 at 14:59
• I don't think SO have a pre-defined 'what's a language' list; look at the question now, it says 'compiler' is the language as it's the most popular tag on the question. – Peilonrayz Aug 24 '18 at 14:59
• If the syntax highlighter for CR uses vim, I think I could get a vim file for Jison – FreezePhoenix Aug 24 '18 at 15:00
• if I remember right they use google prettify or something like that, and just reference it. – Malachi Aug 24 '18 at 15:00
• @Peilonrayz Odd behavior. It's also the first tag listed - could that be what it is? – FreezePhoenix Aug 24 '18 at 15:01
• It looks like it's using the correct one anyhow. Somehow. – FreezePhoenix Aug 24 '18 at 15:02
• stackoverflow.com/editing-help#syntax-highlighting here is the SO highlighting – Malachi Aug 24 '18 at 15:03
• And here's a meta about the tag title selection – Peilonrayz Aug 24 '18 at 15:06

To make [jison] be in the title it has to be the most popular tag on the question. Since it has one question you'd have to remove all the other tags, which I'd advise against. As Malachi has mentioned syntax highlighting, this mechanism works separately from the tag title selection. In short, it's bound to a tag. They may both interact if two tags with syntax highlighting are used, and I'd assume it's the most popular tag that it'd use. But you can set a different default in the question if you want: <!-- language-all: lang -->

• No, if you have two tags with conflicting syntax-highlighting-preference, it simply uses default instead of either. – Deduplicator Aug 30 '18 at 23:17
2020-07-04 10:20:12
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 1, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.1888507753610611, "perplexity": 1299.9564708403766}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2020-29/segments/1593655886095.7/warc/CC-MAIN-20200704073244-20200704103244-00436.warc.gz"}
https://physics.stackexchange.com/questions/518236/whats-the-energy-of-all-the-light-electromagnetic-radiation-in-our-galaxy
# What's the energy of all the light/electromagnetic radiation in our galaxy?

I came upon this question while watching a pop-sci video on YouTube about Dark Matter and thinking about all the things that could be contributing gravitational influence to a galaxy. From relativity we know that mass and energy are more or less the same, and both bend spacetime (i.e. cause gravity). And given how much energy stars give off and how big galaxies are, there is a lot of light, a lot of photons whizzing around, and altogether that adds up to a sizeable chunk of energy. Relatively speaking maybe negligible next to ordinary or dark matter, but it should be a big number. Some bounded volume would have to be defined, but I have no idea if physicists have a definition for the boundary of a galaxy or what it is.

We can easily see without a calculation that this mass-energy is negligible compared to the mass-energy of the stars. The galaxy is somewhere on the order of $$10^4$$ light years in size. That means that a star's light spends $$\sim10^4$$ years inside the galaxy before it's gone. So the ratio of the mass-energy of the light in our galaxy to the mass-energy of its stars is on the order of the fraction of the sun's mass that it loses by radiation over $$\sim10^4$$ years. This is a negligible fraction. (A calculation shows that it's $$\sim10^{-9}$$.) There was an era when the universe's gravity was radiation-dominated, but that was in the very early universe.

• That was a much simpler and understandable answer than I anticipated. Thanks. – martixy Dec 9 '19 at 15:40
• The answer by @KeithMcCary includes photons from all stars and galaxies within the visible universe, but only yields a photon density about twice the value that Ben Crowell's gives. It's still "a negligible fraction" of the average mass density of our galaxy. – S. McGrew Dec 9 '19 at 16:28

You can calculate the flux from summing up the contribution from the blackbody spectrum. The answer is that there are about 400 CMB photons in every cubic centimeter of the Universe, all moving at the speed of light, and representing a flux of $$3.14\times 10^{-6}\ \mathrm{W/m^2}$$ (at the surface of the Earth, and everywhere else!). ... In terms of energy flux, the CMB is fairly similar to starlight within our Galaxy. http://www.astro.ubc.ca/people/scott/faq_email.html

CMB photon energy is about $$6.626 \times 10^{-4}$$ eV. Volume of the Milky Way = $$6.7 \times 10^{51} km^3$$. Taking "similar" to mean equal, the energy is (CMB photon density)$$\times$$(CMB photon energy)$$\times$$(Volume of the Milky Way) $$\approxeq 2 \times 10^{65}$$ eV $$\approxeq 3 \times 10^{46}$$ J. By $$E=mc^2$$ this corresponds to $$\approxeq 3 \times 10^{29}$$ kg. Dividing by the mass of the MWG ($$\approxeq 6 \times 10^{42}$$ kg) gives $$\approxeq 5 \times 10^{-14}$$, five orders of magnitude less than Ben Crowell's estimate. The discrepancy could be due to: 1) My estimate of starlight seems to be for our location in the MWG. It is much higher in the center. 2) The thickness (1,000 ly) of the MWG might be a more appropriate estimate of size in Ben's calculation.

• @martixy Oops, CMB photon energy is $10^{-4}$, not $10^{4}$ (I knew that) so that reduces it by 8 orders of magnitude. – Keith McClary Dec 9 '19 at 17:49
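Ben Crowell's order-of-magnitude claim is easy to check numerically (my own back-of-envelope script; the solar luminosity and mass are standard round values I'm supplying, not numbers from the answer):

```python
# Fraction of the Sun's mass radiated away over ~1e4 years, which bounds
# the ratio of the galaxy's starlight mass-energy to its stellar mass.
L_sun = 3.8e26        # W, solar luminosity
M_sun = 2.0e30        # kg, solar mass
c = 3.0e8             # m/s
t = 1e4 * 3.15e7      # ~1e4 years in seconds

mass_radiated = L_sun * t / c**2
print(mass_radiated / M_sun)   # ~7e-10, i.e. on the order of 1e-9
```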
2021-04-23 09:21:46
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 14, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.6837859153747559, "perplexity": 603.7251247161291}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2021-17/segments/1618039568689.89/warc/CC-MAIN-20210423070953-20210423100953-00058.warc.gz"}
https://merlinwz.com/topic/87822-how-medieval-is-your-medieval-name/
# How medieval is "Your Medieval Name"?

## How medieval is “Your Medieval Name”?

So, how medieval is “Your Medieval Name”? Actually, pretty medieval! The feminine names are almost all good solid choices for late medieval England or France:

• Milicent – Yes, medieval!
• Alianor – Yes, medieval!
• Ellyn – Yes, medieval!
• Sybbyl – Yes, medieval!
• Jacquelyn – Yes, medieval!
• Catherine – Yes, medieval!
• Elizabeth – Yes, medieval!
• Thea – Possibly medieval but we’ve not found any evidence for it yet.
• Lucilla – Sort of medieval: R.G. Collingwood and R.P. Wright, The Roman Inscriptions of Britain I: Inscriptions on Stone — Epigraphic Indexes (Gloucester: Alan Sutton, 1983), RIB 1288 and 1271, note one Iulia Lucilla in a first- to fourth-century British inscription (in this name, Lucilla appears as a cognomen), and another Romano-British inscription mentioning a woman known only as [L]ucilla.
• Mary – Yes, medieval!
• Arabella – Yes, medieval: E.G. Withycombe, The Oxford Dictionary of English Christian Names, 3rd ed. (Oxford: Oxford University Press, 1988), s.n. Arabel(la), has a 13th C Latin example of the name.
• Muriel – Yes, medieval: A variety of forms can be found in P.H. Reaney & R.M. Wilson, A Dictionary of English Surnames (London: Routledge, 1991).
• Isabel – Yes, medieval!
• Angmar – Um, no.
• Isolde – Yes, medieval!
• Eleanor – Yes, medieval!
• Josselyn – Yes, medieval, but not as a feminine name.
• Margaret – Yes, medieval!
• Luanda – Um, no.
• Ariana – Not medieval: It’s a modern Italian form of the Greek name Ariadne, found in mythology, and in the Greek and Byzantine empires.
• Clarice – Yes, medieval!
• Idla – Possibly medieval. It appears that at least one googlebook has a Polish example of the name, but we have not been able to get more than a snippet view, to be able to confirm the date and context.
• Claire – Yes, medieval!
• Rya – Um, no.
• Joan – Yes, medieval!
• Clemence – Yes, medieval!
• Morgaine – Yes, medieval, but only used in literature, and not by real people.
• Edith – Yes, medieval!
• Nerida – Definitely not.
• Ysmay – Yes, medieval: Withycombe (op. cit.) has an example of this spelling.

The masculine names don’t fare quite so well.

• Ulric – Yes, medieval!
• Baird – Yes, medieval, but only as a surname, not as a given name. It is derived from Old French baiard ‘bay-colored’.
• Henry – Yes, medieval!
• Oliver – Yes, medieval!
• Fraden – Possibly medieval, but only as a surname, not as a given name.
• John – Yes, medieval!
• Geoffrey – Yes, medieval!
• Francis – Yes, medieval!
• Simon – Yes, medieval!
• Fendel – Not medieval to my knowledge, either as a given name or a surname.
• Frederick – Yes, medieval!
• Thomas – Yes, medieval!
• Arthur – Yes, medieval!
• Cassius – More Roman than medieval.
• Richard – Yes, medieval!
• Matthew – Yes, medieval!
• Charles – Yes, medieval!
• Reynard – Yes, medieval!
• Favian – Sort of medieval, if you take it as a variant of Fabian.
• Philip – Yes, medieval!
• Zoricus – Not medieval to our knowledge, but it could possibly turn up at some point in future research.
• Carac – Not medieval.
• Alistair – Medieval, but not as the nominative form of the name, only as the genitive.
• Caine – Yes, medieval, but only as a surname, not as a given name.
• Gawain – Yes, medieval!
• Godfrey – Yes, medieval!
• Mericus – More Roman than medieval.
• Rowley – Yes, medieval, but only as a surname, not as a given name.
• Brom – Yes, medieval, but only as a surname, not as a given name.
• Cornell – Yes, medieval, but only as a surname, not as a given name.

All the surnames are fine for 14th-16th C English, except these:

• Cabrera – This is Spanish, and would only have been used by women; the masculine form is Cabrero.
• Coastillon – Not quite sure what this is but it looks like a misspelling of some French place name.

That's a good one

Never a doubt

• 1 month later...

• Chamberlain Cassius Archer LOL

• Curator Mericus de Biville

• 5 months later...

On 1/22/2021 at 11:17 PM, Gethin said: [quotes the full list from the original post above]

Alistair predates Medieval times it's the anglicised version Alasdair (Gàidhlig) Alistair, Like Baird and Stewart they are Scottish names yet you say the names are from England & France (Englonde & Fraunc)

Edited by Makara

• 6 months later...

You got: Maharana Pratap

The greatest of all warriors, Maharana Pratap, was an Indian king who ruled Mewar, a region in north-western India in the present-day state of Rajasthan. You should feel lucky. He is known to screw Mughal invaders to ground all alone by himself without the help of other Rajput states.

I prefer to play the quiz 😂😂

`https://www.proprofs.com/quiz-school/story.php?title=what-was-your-medieval-name`

On 8/26/2021 at 3:13 PM, Makara said: [quotes Makara's comment above]

The latter is most likely a Scottish Gaelic corruption of the Norman French Alexandre or Latin Alexander, which was incorporated into English in the same form as Alexander. The deepest etymology is the Greek Ἀλέξανδρος (man-repeller): ἀλέξω (repel) + ἀνήρ (man), "the one who repels men", a warrior name. Another, not nearly so common, Anglicization of Alasdair is Allaster.
2023-02-03 23:52:24
{"extraction_info": {"found_math": false, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.8291630744934082, "perplexity": 13669.689278409396}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2023-06/segments/1674764500076.87/warc/CC-MAIN-20230203221113-20230204011113-00248.warc.gz"}
https://huggingface.co/Hate-speech-CNERG/indic-abusive-allInOne-MuRIL
Hate-speech-CNERG/indic-abusive-allInOne-MuRIL

This model is used for detecting abusive speech in Bengali, Devanagari Hindi, code-mixed Hindi, code-mixed Kannada, code-mixed Malayalam, Marathi, code-mixed Tamil, Urdu, code-mixed Urdu, and English. The allInOne in the name refers to the joint training/cross-lingual training, where the model is trained using the data of all the languages. It is fine-tuned from the MuRIL model. The model is trained with a learning rate of 2e-5. Training code can be found at this url.

LABEL_0 :-> Normal
LABEL_1 :-> Abusive

For more details, see our paper: Mithun Das, Somnath Banerjee and Animesh Mukherjee. "Data Bootstrapping Approaches to Improve Low Resource Abusive Language Detection for Indic Languages". Accepted at ACM HT 2022. Please cite our paper in any published work that uses any of these resources.

@article{das2022data,
  title={Data Bootstrapping Approaches to Improve Low Resource Abusive Language Detection for Indic Languages},
  author={Das, Mithun and Banerjee, Somnath and Mukherjee, Animesh},
  journal={arXiv preprint arXiv:2204.12543},
  year={2022}
}
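The card does not include a usage snippet; a minimal sketch with the standard transformers sequence-classification API (our example — only the model id and the label mapping come from the card) would be:

```python
import torch
from transformers import AutoTokenizer, AutoModelForSequenceClassification

name = "Hate-speech-CNERG/indic-abusive-allInOne-MuRIL"
tokenizer = AutoTokenizer.from_pretrained(name)
model = AutoModelForSequenceClassification.from_pretrained(name)

text = "some user-generated text"  # any of the supported languages
inputs = tokenizer(text, return_tensors="pt", truncation=True)
with torch.no_grad():
    logits = model(**inputs).logits
print(["Normal", "Abusive"][logits.argmax(dim=-1).item()])  # label mapping per the card
```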
2023-01-31 10:46:21
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 1, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.187908336520195, "perplexity": 12439.820434379588}, "config": {"markdown_headings": false, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2023-06/segments/1674764499857.57/warc/CC-MAIN-20230131091122-20230131121122-00663.warc.gz"}
https://scipost.org/SciPostPhys.9.1.006
## Hall viscosity and conductivity of two-dimensional chiral superconductors

Félix Rose, Omri Golan, Sergej Moroz

SciPost Phys. 9, 006 (2020) · published 14 July 2020

### Abstract

We compute the Hall viscosity and conductivity of non-relativistic two-dimensional chiral superconductors, where fermions pair due to a short-range attractive potential, e.g. $p+\mathrm{i}p$ pairing, and interact via a long-range repulsive Coulomb force. For a logarithmic Coulomb potential, the Hall viscosity tensor contains a contribution that is singular at low momentum, which encodes corrections to pressure induced by an external shear strain. Due to this contribution, the Hall viscosity cannot be extracted from the Hall conductivity in spite of Galilean symmetry. For mixed-dimensional chiral superconductors, where the Coulomb potential decays as inverse distance, we find an intermediate behavior between intrinsic two-dimensional superconductors and superfluids. These results are obtained by means of both effective and microscopic field theory.
2022-10-04 21:05:14
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.6964307427406311, "perplexity": 2352.8154461397316}, "config": {"markdown_headings": true, "markdown_code": false, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2022-40/segments/1664030337524.47/warc/CC-MAIN-20221004184523-20221004214523-00529.warc.gz"}
https://mathoverflow.net/questions/349996/characters-on-hopf-algebras
Characters on Hopf algebras

For any algebra $$A$$, a character for $$A$$ is a non-zero algebra map $$c:A \to \mathbb{C}$$. For $$H$$ a Hopf algebra, one character is given by the counit $$\epsilon:H \to \mathbb{C}$$ of $$H$$. I am looking for other examples of (co-semi-simple) Hopf algebras with characters distinct from the counit. I am really interested in noncommutative, noncocommutative examples.

• Would you consider Larson's character? Jan 8, 2020 at 21:51
• What is Larson's character? Jan 8, 2020 at 22:35

I think that a general example is the so-called Larson's character, which in a sense ties together the trace and determinant functions. To make the long story short:

Let $$C$$ be a cocommutative bialgebra, $$V$$ a vector space and $$EV$$ the exterior algebra. Then, it has been shown that: If $$C\otimes V\rightarrow V$$ is an action which makes $$V$$ a $$C$$-module, then there is a unique measuring $$C\otimes EV\rightarrow EV$$, extending the action on $$V$$. In this sense, $$EV$$ becomes a $$C$$-module, with $$C\cdot E^kV\subset E^kV$$.

If we furthermore assume that $$\dim V=n$$ then $$E^nV$$ is 1-dimensional. Let it be spanned by $$\{z\}$$. For any $$c\in C$$, let $$\chi(c)$$ be defined by $$c\cdot z=\chi(c)z$$. In this way, a linear map $$\chi:C\rightarrow k$$ is defined. It can be easily shown that this is an algebra map. It is called Larson's character.

It can furthermore be shown that, if $$g$$ is a grouplike element of $$C$$ then $$\chi(g)=\det T_g$$, where $$T_g:V\rightarrow V$$ is explicitly given by $$v\mapsto g\cdot v$$; and that if $$g$$ is a primitive element then $$\chi(g)=\operatorname{Trace}(T_g)$$.

For a detailed presentation of the above, you can see ch. VII, sect. 7.1, p. 146-153, of Sweedler's book on Hopf algebras. Furthermore, you can also take a look at Larson's paper on Characters of Hopf algebras. However, the presentation there looks quite different: Larson adopts a dual point of view (to the usual notion of characters in group/algebra representation theory) and develops a theory of characters based on comodules of Hopf algebras. He actually considers characters as elements of the Hopf algebra (instead of functionals on it) which are associated with comodules over the Hopf algebra rather than modules over the Hopf algebra. Furthermore, for the case of cosemisimple Hopf algebras, an orthogonality relation for characters is proved.

Edit: Although I have not studied Larson's paper in detail, from what I can understand, I think that his approach is more general than Sweedler's approach (in the sense that it is not limited to the cocommutative case). In the cocommutative case, I think it is essentially equivalent to the one followed in Sweedler's book; Sweedler's presentation can be recovered if we adopt Larson's approach and start from comodules of the finite dual $$C^{\circ}$$ Hopf algebra.

• What does 'measuring' mean in "there is a unique measuring $E V \otimes V \to E V$"? Jan 10, 2020 at 4:44
• @LSpice, if $A$ is an algebra and $H$ is a bialgebra, and we have a bilinear map (not necessarily an $H$-action), $\triangleright: H \times A \to A$ satisfying $h \triangleright(ac) = (h_{1} \triangleright a)(h_{2}\triangleright c)$ and $h\triangleright 1_A=\varepsilon(h)1_A$, then we say that the bilinear map $\triangleright$ is a measuring or that $(\triangleright,H)$ measures $A$ to $A$. Jan 10, 2020 at 21:10
• [Sorry for what may be an ignorant question but...] Is there any relation between Larson's character and the Fredholm determinant?
I know that the latter has a trace and determinant connection (e.g. on MSE); but I don't know if there is anything more to say in this direction. Jan 10, 2020 at 23:17
• @Benjamin Dickman, to tell the truth, I do not know. In fact I am not very familiar with the Fredholm determinant. However, your remark seems very interesting to me. I will try to study a little and to think about it. Meanwhile, maybe it would be interesting to post this as a question (either here or on MSE). Jan 10, 2020 at 23:30
• I don't think I have the necessary background to parse an answer around their connection; so, I am not intending to post such a question. But please ping me if you ask any such thing on either site - now or in the future! Jan 11, 2020 at 4:17

Take an example of a finite dimensional Hopf algebra $$A$$, presented by generators and relations, generated by grouplikes and primitives. There are a lot of non-commutative non-cocommutative examples in the literature. Compute the group $$G(A)$$ of grouplike elements (from the presentation this should be very easy). Now the example is $$H=A^*$$: the grouplikes in $$A=A^{**}$$ are the characters of $$H$$.

• Trying to "rephrase" your description: if $A$ is a finite dimensional $k$-algebra and $A^*$ its dual $k$-coalgebra, then $G(A^*)=\mathcal{A}lg(A,k)$, i.e. the grouplikes of the dual coalgebra are exactly the algebra maps from $A$ to $k$, that is the characters of $A$. Jan 10, 2020 at 3:17
• And in the case that $A$ is a fin dim cocommutative bialgebra then Larson's character (mentioned in my post) is one of them. Jan 11, 2020 at 3:24
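To make the grouplike case concrete, here is a small worked example (our own, not taken from Sweedler or Larson): let $$C=k[G]$$ be the group algebra of $$G=\langle g \mid g^2=1\rangle$$, acting on $$V=k^2$$ by swapping the two basis vectors $$v_1, v_2$$. Since $$\Delta g = g\otimes g$$, the measuring on $$EV$$ makes $$g$$ act multiplicatively, so on the top exterior power

$$g\cdot(v_1\wedge v_2)=(g\cdot v_1)\wedge(g\cdot v_2)=v_2\wedge v_1=-\,v_1\wedge v_2,$$

and therefore $$\chi(g)=-1=\det T_g$$, as the formula for grouplikes predicts; of course $$\chi(1)=1$$.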
2022-06-26 11:20:29
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 37, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.8225017189979553, "perplexity": 282.5630256402611}, "config": {"markdown_headings": false, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2022-27/segments/1656103205617.12/warc/CC-MAIN-20220626101442-20220626131442-00775.warc.gz"}
https://millerlp.github.io/oceanwaves/reference/prCorr.html
Bottom-mounted pressure transducers suffer from pressure signal attenuation when attempting to estimate surface wave heights. This function corrects water surface height time series based on the depth of the water column and height of the sensor above the bottom.

prCorr(pt, Fs, zpt, M = 512, CorrLim = c(0.05, 0.33), plot = FALSE)

Arguments

pt: A vector of sea surface elevations (units of meters).
Fs: Sampling frequency (units of Hz). Normally 4 Hz for an OWHL logger.
zpt: Height of the pressure sensor above the seabed (units of meters).
M: Length of time series segments that will be used in the detrending and attenuation correction operations. 512 samples is the default; should be an even number.
CorrLim: [min max] frequency for attenuation correction (Hz, optional, default [0.05 0.33], which translates to periods of 20 sec to 3 sec).
plot: Logical value TRUE or FALSE. Displays a plot of the original and corrected time series.

Value

A vector of the depth-corrected surface heights (units of meters usually). Any original trend in the input data (such as tide change) is present in the output data. The returned surface height fluctuations will typically be more extreme than the raw input surface heights.

Details

Each segment of pt will be linearly detrended, corrected for attenuation, and the linear trend will be added back to the returned data.

References

Based on an original MATLAB function developed by Travis Mason, M. Lecouturier & Urs Neumeier: http://neumeier.perso.ch/matlab/waves.html

Examples

data(wavedata)
corrected = prCorr(wavedata$swPressure.mbar, Fs = 4, zpt = 0.1)
# Plot the results
corrected = prCorr(wavedata$swPressure.mbar, Fs = 4, zpt = 0.1, plot=TRUE)
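For readers who want to see the mechanics, here is a rough sketch (in Python/numpy; our own illustration, not a port of prCorr) of the core correction step: detrend a segment, transform it, divide each spectral component inside CorrLim by the linear-wave-theory pressure response factor, and transform back. The water depth h is assumed here to be known to the caller, and the real function additionally processes the record in M-sample segments.

```python
import numpy as np

def wavenumber(f, h, g=9.81):
    """Solve the linear dispersion relation (2*pi*f)**2 = g*k*tanh(k*h) by Newton."""
    w = 2.0 * np.pi * f
    k = max(w**2 / g, 1e-8)                    # deep-water initial guess
    for _ in range(50):
        t = np.tanh(k * h)
        k -= (g * k * t - w**2) / (g * t + g * k * h * (1.0 - t**2))
    return k

def correct_attenuation(pt, fs, zpt, h, corr_lim=(0.05, 0.33)):
    """Single-segment sketch of a bottom-pressure attenuation correction."""
    n = len(pt)
    i = np.arange(n)
    trend = np.polyval(np.polyfit(i, pt, 1), i)       # linear trend, e.g. tide
    spec = np.fft.rfft(pt - trend)
    for j, f in enumerate(np.fft.rfftfreq(n, d=1.0 / fs)):
        if corr_lim[0] <= f <= corr_lim[1]:
            k = wavenumber(f, h)
            kp = np.cosh(k * zpt) / np.cosh(k * h)    # pressure response factor <= 1
            spec[j] /= kp                             # boost the attenuated band
    return np.fft.irfft(spec, n) + trend
```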
2021-10-18 19:39:12
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 0, "mathjax_asciimath": 1, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.649351179599762, "perplexity": 4226.5215078737365}, "config": {"markdown_headings": false, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2021-43/segments/1634323585209.43/warc/CC-MAIN-20211018190451-20211018220451-00611.warc.gz"}
http://motls.blogspot.com/2012/02/wsj-publishes-collective-letter.html
## Wednesday, February 01, 2012 ... /////

### WSJ publishes a collective letter disagreeing with Lindzen et al.

Comparison of dentists, climate scientists, astrologers

The Wall Street Journal just published a letter to the editor, Check With Climate Scientists for Views on Climate, which was signed by a few dozen climate alarmists. It is meant as a reply to an op-ed by 16 scientists that WSJ previously published.

Ms Katharine Hayhoe, a typical religiously obsessed woman with IQ around 80, is an important co-author of the letter in the Wall Street Journal. Her chapter called "How It Is Crucial for the Survival of the Planet for the Future U.S. Presidents To Sleep With Nancy Pelosi On Al Gore's Couch" has been removed from Newt Gingrich's future book. For years, the media would paint a surrealistic picture in which Ms Hayhoe and similar "experts" beat folks like Richard Lindzen.

The new published letter is somewhat less hysterical than the responses by the alarmist blogosphere; but it is arguably even more pretentious than the comments by the alarmist bloggers. Why? Let's look at the letter in some detail. It opens by asking whether you would consult your dentist about your heart condition.

To some extent, I do. Or at least sometimes I am forced to. Last time I visited my dentist, she gave me a pretty long lecture about the weakening of the cardiovascular system that may be induced by bacteria living in the teeth or canals or abscesses. At the end, I decided to say "No" to a proposed procedure but that's not important here. My more important point is that experts often have to study adjacent disciplines in quite some detail.

After all, even the climate alarmists sometimes claim to be able to do some statistical calculations although they are not specialized statisticians – a fact that is often self-evident to the readers of the climate articles; Michael Mann's statistical methodology behind the hockey stick graph is just the most notorious example. There are lots of others. But some other people who are better scientists do learn the methods of the adjacent disciplines pretty much perfectly. I would argue that the experimental particle physicists at CERN do know statistics – at least those members of the team who are responsible for this portion of their scientific research. They may be much more reliable and comprehensively trained experts in statistics than many people who are "just statisticians".

Climate alarmism and astrology

However, I want to make one more important point. The specialization may be a good thing but too specialized disciplines run a much higher risk that they could be totally wrong: the whole discipline could be based on a misconception. What do I mean? What I mean is that the comment "We're just like the heart surgeons and you shouldn't ask anyone else" may also be exploited by the astrologers, if I pick a specific example of a discipline that is almost generally accepted as a pseudoscience.

An astrologer could tell you: "I am the only expert in astrology. I have been doing horoscopes for 40 years and earned millions of dollars by doing so. The astronomers and biologists who wrote an article that disagreed with me aren't really certified experts in astrology. You should better listen to astrologers when they're talking about the impact of planets and about the horoscopes; everyone else is a layman."

Now, is this argument valid? It could be valid in some sense; the astrologers may have written down many more predictions of how the planets influence the human fates than anyone else.
However, there's a more important problem here: everything they have done is scientifically indefensible. It's just a pile of crap.

Climate alarmism is analogous. You may be an expert specialist in diverse kinds of threats that human activity creates for the climate and for the ecosystem (via the climate). The only problem is that the very basic foundation of your expertise, the idea that humans are significantly changing the climate so that they really matter for the thermodynamics of the atmosphere and things influenced by the atmosphere as strongly as other factors (or more so), is invalid.

So you may be a great expert because you have written more papers about the threats that the climate is facing, about all the catastrophes that have already taken place because of the climate modified by mankind and about the future ones that await us, and so on. The only problem is that all this stuff is just rubbish. The more stuff of this kind you write, the more rubbish you produce. You may become an increasingly potent expert if you write many more papers like that, but you're just an expert in a wrong subdiscipline. Much like in the case of astrology, it's an expertise that makes you a clown whose clown status is growing with the number of papers you write about the dangerous man-made climate change.

Getting back to the broader Earth sciences

What is the systematic solution in the case of astrology? How does science catch disciplines such as astrology that are totally wrong and that are developing a class of "experts" who are completely ludicrous from a scientific viewpoint? How is the institutionalized science protected against the growth of disciplines such as astrology that could be inviting an ever increasing number of new astrologers and strengthening because it is so cool?

The answer is that the claims made by astrology are actually being evaluated by other scientists whose research focuses on very similar claims – by astronomers (when you care about the actual rules that govern the motion of the celestial bodies), by physicists (who figure out whether the bodies do or don't exert forces of various types at a distance), psychologists (who determine how people react and which of the human reactions may be placebo effects etc.), biologists and physicians (who study what are the actual factors that determine your health), economists (who study what decides the balance of your banking account, and whether Jupiter or the balance of supply and demand is a major factor), and so on.

The punch line is that those other scientists will tell you that astrology doesn't work. They may surround it and see (and tell you) that there's no legitimate room for a big new discipline at the "interdisciplinary point" where microeconomics or medicine meets astronomy.

Needless to say, this is exactly the treatment that must apply to another controversial scientific discipline, the research of man-made climate change, as well. Much like astrology, this whole discipline stands on the assumption that there's something very interesting to study about the sufficiently significant and observable (by assumption) effects of the human activity (analogy of the planets) on the Earth's climate and ecosystem (analogy of the human fates in astrology).
It's totally necessary for people with backgrounds similar to the 16 scientists who wrote the op-ed in the WSJ to independently evaluate the question of whether the discipline studying "man-made climate change" is a legitimate one, or whether it is analogous to astrology and also tries to defend a predetermined conclusion that there exists a significant effect of AB on XY. Make no mistake about it, the scientific assessment that you may actually get these days is that the man-made climate change science is analogous to astrology, indeed. So you shouldn't view the authors' achievements in the research of dangerous man-made climate change as scientific achievements. If they actually want to talk about natural science, they should better leave it to experts – i.e. to scientists, a group that they don't belong to.

Alarmists compare themselves to scientists

The climate alarmists try to compare themselves to the legitimate scientists in various fields while the climate skeptics are being presented as counterparts of those who don't believe in the HIV-AIDS relationship and many other crazy things. However, they don't have any evidence whatsoever that these ad hominem attacks and comparisons are the right ones. They don't seem to care. They believe that the readers of the Wall Street Journal are gullible enough that they will just accept whatever is written in the daily.

The search for a right analogy is a problem that can't be solved just by vague comparisons and by counting the number of "experts". At least 97% of astrologers will also agree that the planets have a significant impact on the human fate. Does it prove something? Does it prove that astrology is right? We could double or triple or quadruple the number of astrologers, much like we did with the climate doomsayers in recent decades. Would it make the case for astrology as a real science any stronger? You do understand why it wouldn't, don't you? (BTW, 97% depends on how you count them. Using another methodology, one may find out that only 2.38% of climate scientists subscribe to the alarmist proclamations.)

I am already tired of crackpots comparing themselves to Galileo Galilei or the best heart surgeons, so let me terminate this part of the article at this point.

Katharine Hayhoe and the other "climate scientists" offer us several of the basic slogans of the kind "global warming is real and it is man-made", something that the readers are probably expected to memorize and parrot. But what they are missing is that many of the readers – and most of the important ones in the "debate" – actually know much more about the climate than the science that the letter written by the alarmists offers. For example, millions of people who actively participate in the debate have studied the actual global temperature data and the trends. It is not really rocket science. Hayhoe et al. write the following, among similar statements:

"Climate experts know that the long-term warming trend has not abated in the past decade. In fact, it was the warmest decade on record."

The sentences are constructed in such a way that the writers implicitly believe that the last decade's being the warmest one on record (which is true e.g. for the HadCRUT3 record since 1850) implies the first sentence, namely that the warming trend hasn't abated in the last decade, which probably means that the trend is either positive or even greater than the trend in 1991-2000. But this implication is clearly logically invalid.
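Incidentally, the logical gap can be checked with a toy calculation. The numbers below are synthetic, chosen only to exhibit the structure of the argument (they are not HadCRUT3 data): a series whose last decade is the warmest on record while the trend within that decade is negative.

```python
import numpy as np

# Synthetic annual "anomalies": rising through 2000, drifting down afterwards.
years = np.arange(1991, 2011)
temps = np.concatenate([
    np.linspace(0.20, 0.38, 10),   # 1991-2000
    np.linspace(0.50, 0.41, 10),   # 2001-2010: warmest decade, falling trend
])

d1, d2 = temps[:10], temps[10:]
print("1991-2000 mean:", d1.mean())            # 0.29
print("2001-2010 mean:", d2.mean())            # 0.455 -> warmest decade on record
slope = np.polyfit(years[10:], d2, 1)[0]
print("2001-2010 trend (deg/yr):", slope)      # -0.01 -> negative trend
```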
The fact that the period 2001-2010 was the warmest such ten-year period on record doesn't imply that there was a positive global warming trend since 2001. And indeed, it's straightforward to calculate that the HadCRUT3 warming trend was negative in the last 10 years. It was negative in the last 11 years, too. In the last 12 years or 13 years, the trend gets a positive sign, but since 1998, it gets negative again.

Quite generally, the trends you get from those 10-15 years of data are so small relative to the noise that they're "statistically insignificant", which is a technical way of saying that you shouldn't attribute them any importance and you should treat them as numbers that are zero for all practical purposes. They only differ from zero by small amounts that may be interpreted as noise or error.

In some approximation, you could say that the statement "the last decade was the warmest one" depends on the comparison of the temperatures in 2001-2010 with those in 1991-2000 or, approximately, on the trend in 1991-2010, a 20-year period, which was slightly positive. However, the claim that the temperature trend was negative in the last decade depends roughly on the comparison of the periods 2001-2005 and 2006-2010. And the latter, most recent 5-year period of this type, simply wasn't warmer than the previous one. That's pretty much the reason why you don't get a positive warming trend in the last 10 years. Note that for each question, we are comparing different periods or we are computing the linear trends in different intervals. Hayhoe et al. don't distinguish these things.

This kind of sloppiness is probably self-evident to many readers. It's not the only elementary technical problem in the alarmists' letter. Such points make millions of readers understand that the authors are really not competent in discussing time series or the evolution of the global mean temperature; or they're being deliberately dishonest. At any rate, there exists no sensible reason to take their other words seriously. Even a moderately intelligent reader is able to find out that the temperature trend was negative in the last 10 years; Hayhoe et al. obviously underestimate the number of people who are capable of finding such elementary numerical results. The people who can calculate the negative sign of the trend in the last 10 years simply know that Hayhoe et al. are either incompetent or dishonest – and that's really enough not to trust them when it comes to much more complex (and politically far-reaching) questions than a simple linear regression applied to 10 annual temperature figures!

Comment section and reactions by the readers

The discussion in the comment section of the Wall Street Journal also makes it clear that a vast majority of the readers actually understand that the climate alarmist memes have been spreading primarily through the same intimidation that is at the heart of this very letter to the editor, too. For decades, climate alarmists – including many of the very authors of the letter – have been harassing their colleagues, other scientists, and laymen. They were bullying them and screaming that everyone has to agree because someone else has already agreed and everyone must join a majority. However, aside from bullying, harassment, intimidation, and propaganda, there has never been any convincing scientific argument that we are facing any dangerous change of the climate that would deviate from the changes that mankind has experienced in past centuries, apparently for natural reasons.
What they're missing is the scientific beef. Snobs and scientific secretaries in various universities and scientific institutions (and some researchers) who depend on a big inflow of money into the climate and related research and who love to improve their status in an extreme left-wing environment of the Academia may have reacted in the way that the climate alarmists wanted. They just joined the bandwagon. No doubt about that, a huge number of those folks did.

But the impartial experts outside these morally contaminated structures don't have these motivations and biases. Combined with the fact that the climate fearmongering turns out to be scientifically incorrect, this is why most of the people in the broader scientific and technological community (and even most of the WSJ readers) simply don't buy into this propaganda. They have figured out that it is scientific rubbish, a counterpart of astrology. Ms Hayhoe and her fellow alarmist cultists should finally take notice.

And that's the memo.

#### snail feedback (3) :

So Katy thinks that melting Antarctic ice threatens polar bears. Obviously not a geography or biology major, definitely deserves the title of climastrologist!
2016-12-08 02:00:57
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 1, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.40690526366233826, "perplexity": 1262.257488511748}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2016-50/segments/1480698542323.80/warc/CC-MAIN-20161202170902-00050-ip-10-31-129-80.ec2.internal.warc.gz"}
https://minilatex.com/2018/02/15/
# Using Macros in MiniLatex

User defined macros in math mode are totally legit in MiniLatex. One way to insert them is like this:

$$
\newcommand{\bra}{\lbracket}
\newcommand{\ket}{\rbracket}
$$

That is, enclose the definitions in double dollar signs. Do this in the body of the text. You can now use these macros in the usual way, e.g. $\bra a | b \ket$.

If you are using www.knode.io, there is another way. Make a plain text document titled, say, "TeX Macros." The actual title is irrelevant. Put the macro definitions in this document. Take note of the document ID number (it is displayed in the footer). Let's suppose that the ID is 453. Then, in the keywords field of the document that is to use the macros, put the text "texmacros:453", as in the figure below. That's all there is to it!

You can try this out using the Demo App.
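As a complete toy example (our own macro, not one from the MiniLatex documentation), the following could appear verbatim in the body of a document:

$$
\newcommand{\half}{\frac{1}{2}}
$$

The kinetic energy can then be written as $\half m v^2$, using the macro defined just above.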
2022-05-27 14:37:38
{"extraction_info": {"found_math": false, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.9787681102752686, "perplexity": 1629.6210927606828}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2022-21/segments/1652662658761.95/warc/CC-MAIN-20220527142854-20220527172854-00087.warc.gz"}
https://codereview.stackexchange.com/questions/103786/plugin-pattern-for-generic-python-application
# Plugin Pattern for Generic Python Application

## Summary

I am experimenting with a plugin pattern for a generic Python application (not necessarily web, desktop, or console) which would allow packages dropped into a plugin folder to be used according to the contract they would need to follow. In my case, this contract is simply to have a function called do_plugin_stuff(). I'd like the pattern to make sense for a system that sells plugins like the plugin store in Wordpress.

Minimal Python plugin mechanism is a decent question (despite being 4 years old) with some very good discussion about Django (which I haven't used) and how it allows for a plugin to be installed anywhere via pip. I'd see that as a phase two, because it seems like a pip-based plugin pattern is (sweeping generalization, probably not always true) most valuable for a purely free (as in money) plugin store. If free (as in open source) plugins are sold for money in a store, it seems that pip would be a poor choice for installation: while people might pay for something even though they're about to get the source code for it and could use / redistribute it freely, they might be unlikely to pay / donate for something they've already installed.

## Code

It's also on GitHub under my same username (PaluMacil) and I made a release tag of v1.0.0 to freeze the repo at the code shown below.

app/plugins/blog/__init__.py

```python
def do_plugin_stuff():
    print("I'm a blog!")
```

app/plugins/toaster/__init__.py

```python
def do_plugin_stuff():
    print("I'm a toaster!")
```

app/plugins/__init__.py (empty)

app/__init__.py

```python
from importlib import import_module
from os import path, listdir

def create_app():
    app = Application()
    plugin_dir = path.join(path.dirname(__file__), 'plugins')
    import_string_list = [''.join(['.plugins.', d])
                          for d in listdir(plugin_dir)
                          if path.isdir(path.join(plugin_dir, d)) and not d.startswith('__')]
    print(str(len(import_string_list)) + " imports to do...")
    for import_string in import_string_list:
        module = import_module(import_string, __package__)
        app.plugins.update({module.__name__.split('.')[2]: module})
    print(str(len(app.plugins)) + " plugins in the app")
    return app

class Application:
    def __init__(self):
        self.plugins = {}
```

The line not d.startswith('__') eliminated my __pycache__ dir from PyCharm.

run.py

```python
from app import create_app
from pprint import PrettyPrinter

app = create_app()
app.plugins['toaster'].do_plugin_stuff()
printer = PrettyPrinter(indent=4)
printer.pprint(app.plugins.__repr__())
```

## Points for Review

I'm new enough to Python (very new but coming from a decent C# background, and I read PEP8 before attempting this) that I've never written a Python 2 application. I think my method of importing requires Python 3.3 or 3.4, though I'm not certain. Commentary on this might be nice. Ways of making this code accessible to earlier versions of Python seem to be messy; they involve conditional imports and such, which are verbose and ugly. If there is a trick or two that would make my code better for different versions of Python with minimal cruft, that would be great to see.

Am I missing anything that makes my code much more verbose than it should be? For instance, I'm iterating twice through the directories--once to make a list of packages, and again to make my dictionary. Would it be cleaner to make both parts one loop?
The one-loop alternative seems verbose, but there could be further improvements, perhaps:

```python
# Alternative to current code which uses a single loop:
for d in listdir(plugin_dir):
    if path.isdir(path.join(plugin_dir, d)) and not d.startswith('__'):
        module = import_module(''.join(['.plugins.', d]), __package__)
        app.plugins.update({module.__name__.split('.')[2]: module})
```

Is module.__name__.split('.')[2] a fragile way to get the value for my plugin dictionary? Would [-1] be a better index to use on the result of the split?

I'm having trouble understanding why I might choose to use pkgutil.iter_modules instead of my approach, but I'm wondering if there might be some benefit. It seems to be based on importlib since Python 3.3 (PEP 302). Would the only difference be that I wouldn't pull in a folder that doesn't have an __init__.py inside it to make it a package?

• Your import syntax looks fine to me and the imports seem to run in Python 2.7. Why are you concerned they might be version specific? – SuperBiasedMan Sep 4 '15 at 14:22
• Thanks for the 2.7 run, @SuperBiasedMan. I think I came to that incorrect conclusion based upon importlib.reload being new in Python 3.4. I had been considering expanding this code to include a way of reloading plugins using that. I decided to wait on that because importlib.invalidate_caches() (new in 3.3) seems to be a package I would need to understand and play with first, which could take a while. In short, I think my intention to use those two libraries is the culprit of my compatibility question, and I should have installed 2.7 to check myself before posting that part of the question. – Palu Macil Sep 4 '15 at 14:29
• Ah I see, you meant the stuff you were doing with importlib. I'm not as familiar with that. I can tell you that Python 2.7 does have it and can import modules that way but I don't know enough to say it's as compatible as you need. – SuperBiasedMan Sep 4 '15 at 14:31

You shouldn't call str on the int returned from len; instead use str.format.

```python
"{} plugins in the app".format(len(app.plugins))
```

Also you're calling repr backwards. The whole point of an object having a __repr__ function is that it allows an object to be passed to repr(). So you could change app.plugins.__repr__() to repr(app.plugins).
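On the pkgutil question: a minimal sketch of the same discovery loop built on pkgutil.iter_modules (same app/plugins layout as above) might look like this. Besides skipping non-package folders automatically, it also spares you the listdir/path bookkeeping:

```python
import pkgutil
from importlib import import_module
from os import path

def create_app():
    app = Application()
    plugin_dir = path.join(path.dirname(__file__), 'plugins')
    # iter_modules yields (finder, name, ispkg) for real modules/packages only,
    # so __pycache__ and folders without __init__.py never show up.
    for _finder, name, is_pkg in pkgutil.iter_modules([plugin_dir]):
        if is_pkg:
            app.plugins[name] = import_module('.plugins.' + name, __package__)
    return app
```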
2020-04-08 19:10:29
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 1, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.21138815581798553, "perplexity": 2611.98071201378}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.3, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2020-16/segments/1585371821680.80/warc/CC-MAIN-20200408170717-20200408201217-00126.warc.gz"}
https://www.groundai.com/project/functional-aggregate-queries-with-additive-inequalities/
Functional Aggregate Queries with Additive Inequalities

# Functional Aggregate Queries with Additive Inequalities

## Abstract

Motivated by fundamental applications in databases and relational machine learning, we formulate and study the problem of answering functional aggregate queries (FAQ) in which some of the input factors are defined by a collection of additive inequalities between variables. We refer to these queries as FAQ-AI for short. To answer FAQ-AI in the Boolean semiring, we define relaxed tree decompositions and relaxed submodular and fractional hypertree width parameters. We show that an extension of the InsideOut algorithm using Chazelle's geometric data structure for solving the semigroup range search problem can answer Boolean FAQ-AI in time given by these new width parameters. This new algorithm achieves lower complexity than known solutions for FAQ-AI. It also recovers some known results in database query answering. Our second contribution is a relaxation of the set of polymatroids that gives rise to the counting version of the submodular width, denoted by #subw. This new width is sandwiched between the submodular and the fractional hypertree widths. Any FAQ and FAQ-AI over one semiring can be answered in time proportional to #subw and respectively to the relaxed version of #subw. We present three applications of our FAQ-AI framework to relational machine learning: $k$-means clustering, training linear support vector machines, and training models using non-polynomial loss. These optimization problems can be solved over a database asymptotically faster than computing the join of the database relations.

## 1 Introduction

We consider the problem of computing functional aggregate queries with inequality joins, or FAQ-AI queries for short. This is a fundamental computational problem that goes beyond databases: core computation for supervised and unsupervised machine learning can be formulated in FAQ-AI. Inequalities occur naturally in scenarios involving temporal and spatial relationships between objects in databases. In a retail scenario (e.g., TPC-H), we would like to compute the revenue generated by a customer's orders whose dates closely precede the ship dates of their lineitems. In streaming scenarios, we would like to detect patterns of events whose time stamps follow a particular order [12]. In spatial data management scenarios, we would like to retrieve objects whose coordinates are within a multi-dimensional range or in close proximity of other objects [27]. The evaluation of Core XPath queries over XML documents amounts to the evaluation of conjunctive queries with inequalities expressing tree relationships in the pre/post plane [16].

### 1.1 Motivating examples

###### Example 1.1.

The $k$-means algorithm divides the input dataset into $k$ clusters of similar data points [20]. Each cluster $G_i$ has a mean $\mu_i$, which is chosen according to the following optimization (similarity is defined here with respect to the $\ell_2$ norm):

$$\min_{(G_1,\ldots,G_k)} \sum_{i=1}^{k} \sum_{x \in G_i} \|x - \mu_i\|_2^2. \qquad (1)$$

Let $\mu_{i,\ell}$ be the $\ell$'th component of mean vector $\mu_i$. For a data point $x$, the function $c_{ij}(x)$ computes the difference between the squares of the $\ell_2$-distances from $x$ to $\mu_i$ and from $x$ to $\mu_j$:

$$c_{ij}(x) = \|x-\mu_i\|_2^2 - \|x-\mu_j\|_2^2 = \sum_{\ell \in [n]} \left[\mu_{i,\ell}^2 - 2x_\ell(\mu_{i,\ell} - \mu_{j,\ell}) - \mu_{j,\ell}^2\right].$$

A data point $x$ is closest to mean $\mu_i$ from the set of $k$ means iff $c_{ij}(x) \le 0$ for all $j \in [k]$. To compute the mean vector $\mu_i$, we need to compute the sum of the $x_\ell$ values for each dimension $\ell$ over the data points closest to $\mu_i$.
If the dataset is the join of database relations $R_p$ over schemas $S_p \subseteq [n]$, $p \in [m]$, we can formulate this sum computation as a datalog-like query with aggregates [17]:

$$Q_1^{(i,\ell)}\Big(\sum x_\ell\Big) \leftarrow \Big(\bigwedge_{p\in[m]} R_p(x_{S_p})\Big) \wedge \Big(\bigwedge_{j\in[k]} c_{ij}(x) \le 0\Big).$$

The above notation means that the answer to query $Q_1^{(i,\ell)}$ is the sum of $x_\ell$ over all tuples $x$ satisfying the conjunction on the right-hand side. Section 4 gives further queries necessary to compute the $k$-means. As we show in this article, such queries with aggregates and inequalities can be computed asymptotically faster than the join defining the dataset. ∎

Simple queries can already highlight the limitations of state-of-the-art evaluation techniques, as shown next.

###### Example 1.2.

State-of-the-art techniques take time $O(N^2)$ to compute the following query over relations of size $N$:

$$Q_2() \leftarrow R(a,b) \wedge S(b,c) \wedge T(c,d) \wedge a \le d.$$

Examples 3.9 and 3.19 show how to compute $Q_2$ and its counting version in time $\widetilde{O}(N)$ using the techniques introduced in this article. ∎

### 1.2 The FAQ-AI problem

One way to answer the above queries is to view them as functional aggregate queries (FAQ) [4] formulated in sum-product form over some semiring. We therefore briefly introduce FAQ over a single semiring. We first establish notation. For any positive integer $n$, let $[n] = \{1,\ldots,n\}$. For $i \in [n]$, let $X_i$ denote a variable/attribute, and $x_i$ denote a value in the discrete domain $\mathsf{Dom}(X_i)$ of the variable. For any $S \subseteq [n]$, define $X_S = (X_i)_{i \in S}$ and $x_S = (x_i)_{i \in S}$. That is, $X_S$ is a tuple of variables and $x_S$ is a tuple of values for these variables. Let $(\mathbf{D}, \oplus, \otimes)$ be a semiring and $\mathcal{H} = (\mathcal{V} = [n], \mathcal{E})$ a multi-hypergraph. To each edge $K \in \mathcal{E}$ we associate a function $\psi_K : \prod_{i \in K} \mathsf{Dom}(X_i) \to \mathbf{D}$ called a factor. An FAQ query over one semiring with free variables $F \subseteq [n]$ has the form:

$$Q(x_F) = \bigoplus_{x_{[n]\setminus F}} \bigotimes_{K \in \mathcal{E}} \psi_K(x_K). \qquad (2)$$

Under the Boolean semiring $(\{\mathsf{true},\mathsf{false}\}, \vee, \wedge)$, the query (2) becomes a conjunctive query: the factors represent input relations $R_K$, where $\psi_K(x_K) = \mathsf{true}$ iff $x_K \in R_K$, with some notational overloading. Under the sum-product semiring, the query (2) counts the number of tuples in the join result for each tuple $x_F$, where the factors are indicator functions $\psi_K(x_K) = \mathbb{1}_{x_K \in R_K}$. (The notation $\mathbb{1}_P$ denotes the indicator function of the event $P$ in the semiring $\mathbf{D}$: $\mathbb{1}_P = \mathbf{1}$ if $P$ holds, and $\mathbb{1}_P = \mathbf{0}$ otherwise.) To aggregate over some input variable, say $X_i$, we can designate an identity factor $\psi_{\{i\}}(x_i) = x_i$.

Throughout the article, we assume the query size to be a constant and state runtimes in data complexity. It is known [4] that over an arbitrary semiring, the query (2) can be answered in time $\widetilde{O}(N^{\mathsf{fhtw}})$, where $N$ is the size of the largest relation, fhtw denotes the fractional hypertree width of the query, and the query has no free variables [15]. If the query has free variables, fhtw-width becomes FAQ-width instead [4]. Here $N$ is the size of the largest factor. Over the Boolean semiring, the time can be lowered to $\widetilde{O}(N^{\mathsf{subw}})$ [6], where subw is the submodular width [28] and $\widetilde{O}$ hides a polylogarithmic factor in $N$.

Motivated by the examples in Section 1.1, we formulate a class of FAQ queries called FAQ-AI:

###### Definition 1.3 (FAQ-AI).

Given a hyperedge multiset $\mathcal{E}$ that is partitioned into two multisets $\mathcal{E} = \mathcal{E}_s \cup \mathcal{E}_\ell$, where $s$ stands for "skeleton" and $\ell$ stands for "ligament", the input to a query from the FAQ-AI class is the following:

1. To each hyperedge $K \in \mathcal{E}_s$, there corresponds a function $R_K$, as in the FAQ case.
2. To each hyperedge $S \in \mathcal{E}_\ell$, there correspond functions $\theta^S_v : \mathsf{Dom}(X_v) \to \mathbb{R}$, one for every variable $v \in S$.

The output to the FAQ-AI query is the following:

$$Q(x_F) = \bigoplus_{x_{\mathcal{V}\setminus F}} \Big(\bigotimes_{K \in \mathcal{E}_s} R_K(x_K)\Big) \otimes \Big(\bigotimes_{S \in \mathcal{E}_\ell} \mathbb{1}_{\sum_{v \in S} \theta^S_v(x_v) \le 0}\Big). \qquad (3)$$

The summation is over tuples $x_{\mathcal{V}\setminus F}$. The (uni-variate) functions $\theta^S_v$ can be user-defined functions, e.g., $\theta^S_v(x_v) = x_v$, or binary predicates with one key in $\mathsf{Dom}(X_v)$ and a numeric value, e.g., a table salary(employee_id, salary_value) where employee_id is a key. The only requirement we impose is that, given $x_v$, the value $\theta^S_v(x_v)$ can be accessed/computed in $O(1)$-time.
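As a concrete illustration of why ligaments like the one in $Q_2$ from Example 1.2 do not force a quadratic join, here is a small self-contained sketch (our framing, not the paper's algorithm, which generalizes this idea via relaxed tree decompositions and range-search structures): for the Boolean $Q_2$ it suffices to propagate, for each join key, the minimum $a$ reachable through $R$ and $S$, and compare it against $d$ in $T$.

```python
# Boolean Q2() <- R(a,b), S(b,c), T(c,d), a <= d in O(N) time after hashing:
# Q2 is true iff some (c,d) in T can reach, through S and R, a value a <= d,
# which holds iff min(reachable a) <= d.

def q2(R, S, T):
    min_a = {}                      # b -> smallest a with (a, b) in R
    for a, b in R:
        min_a[b] = min(a, min_a.get(b, a))
    min_a_c = {}                    # c -> smallest a reachable via some b
    for b, c in S:
        if b in min_a:
            m = min_a[b]
            min_a_c[c] = min(m, min_a_c.get(c, m))
    return any(c in min_a_c and min_a_c[c] <= d for c, d in T)

print(q2({(2, 1)}, {(1, 7)}, {(7, 3)}))   # True:  smallest a is 2 <= d = 3
print(q2({(5, 1)}, {(1, 7)}, {(7, 3)}))   # False: smallest a is 5 >  d = 3
```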
If $\mathcal{E}_\ell=\emptyset$, then we get back the FAQ formulation (2).

###### Example 1.4.

The queries in Section 1.1 are instances of (3):

$$Q_1^{(i,\ell)}() \;=\; \bigoplus_{x_{[n]}} x_\ell\otimes\Big(\bigotimes_{p\in[m]}R_p(x_{S_p})\Big)\otimes\Big(\bigotimes_{j\in[k]}\mathbb{1}_{c_{ij}(x)\le 0}\Big), \qquad (4)$$

$$Q_2() \;=\; \bigoplus_{x_{[4]}} R(x_1,x_2)\otimes S(x_2,x_3)\otimes T(x_3,x_4)\otimes\mathbb{1}_{x_1-x_4\le 0}.$$

$Q_1^{(i,\ell)}$ is over the sum-product semiring. $Q_2$ can be over any semiring: Example 3.9 discusses the case of the Boolean semiring, while Example 3.19 discusses the sum-product semiring. ∎

### 1.3 Our contributions

To answer FAQ queries of the form (2), currently there are two dominant width parameters: the fractional hypertree width (fhtw [15]) and the submodular width (subw [28]).¹ It is known that $\mathsf{subw}(\mathcal{H})\le\mathsf{fhtw}(\mathcal{H})$ for any query, and in the Boolean semiring we can answer (2) in $\tilde O(N^{\mathsf{subw}})$-time [6, 28]. For non-Boolean semirings, the best known algorithm, called InsideOut [4, 5], evaluates (2) in time $O(N^{\mathsf{fhtw}}\log N)$. For queries with free variables, fhtw is replaced by the more general notion of FAQ-width (faqw) [4]; however, for brevity we discuss the non-free-variable case here.

Following [5], both width parameters subw and fhtw can be defined via two constraint sets: the first is the set TD of all tree decompositions of the query hypergraph $\mathcal{H}$, and the second is the set $\Gamma_n$ of polymatroids on the vertices of $\mathcal{H}$. The widths subw and fhtw are then defined as maximin and respectively minimax optimization problems on the domain pair TD and $\Gamma_n$, subject to "edge domination" constraints on $\Gamma_n$. Section 2 presents these notions and other related preliminary concepts in detail.

Our contributions include the following:

**Answering FAQ-AI over the Boolean semiring.** On the Boolean semiring, one way to answer query (3) is to apply the PANDA algorithm [6], using edge domination constraints on $\Gamma_n$ and the set TD of all tree decompositions of $\mathcal{H}$. However, we can do better. In Section 3.2 we define a new notion of tree decomposition: the relaxed tree decomposition, in which the hyperedges in $\mathcal{E}_\ell$ only have to be covered by unions of adjacent TD bags. Then, we present a variant of the InsideOut algorithm running on these relaxed TDs using Chazelle's classic geometric data structure [9] for solving the semigroup range search problem. We show that our InsideOut variant meets the "relaxed fhtw" runtime, which is the analog of fhtw on relaxed TDs. The PANDA algorithm can use the InsideOut variant as a blackbox to meet the "relaxed subw" runtime. The relaxed widths are smaller than their non-relaxed counterparts, and are strictly smaller for some classes of queries, which means our algorithms yield asymptotic improvements over existing ones.

**Answering FAQ over an arbitrary semiring.** Next, to prepare the stage for answering FAQ-AI over an arbitrary semiring, in Section 3.3 we revisit FAQ over a non-Boolean semiring, where no known algorithm can achieve the subw-runtime. Here, we relax the set of polymatroids to a superset of relaxed polymatroids. Then, by adapting the subw definition to relaxed polymatroids, we obtain a new width parameter called the "sharp submodular width" (#subw). We show how a variant of PANDA, called #PANDA, can achieve a runtime of $\tilde O(N^{\#\mathsf{subw}})$ for evaluating FAQ over an arbitrary semiring. We prove that $\mathsf{subw}\le\#\mathsf{subw}\le\mathsf{fhtw}$, and that there are classes of queries for which #subw is unboundedly smaller than fhtw.

**Answering FAQ-AI over an arbitrary semiring.** Getting back to FAQ-AI, we apply the #subw result under both relaxations (relaxed TDs and relaxed polymatroids) to obtain a new width parameter called the relaxed #subw. We show that the new variants of PANDA and InsideOut can achieve the relaxed #subw runtime.
We also show that there are queries for which the relaxed #subw is essentially the best we can hope for, modulo $k$-sum-hardness.

**Applications to relational machine learning.** Equipped with the algorithms for answering FAQ-AI, in Section 4 we return to relational machine learning applications over training datasets defined by feature extraction queries over relational databases. We show how one can train linear SVMs, $k$-means, and ML models using Huber/hinge loss functions without completely materializing the output of the feature extraction queries. In particular, this shows that for these important classes of ML models, one can sometimes train models in time sub-linear in the size of the training dataset.

### 1.4 Related work

Appendix A revisits two prior results on the evaluation of queries with inequalities through FAQ-AI lenses: Core XPath queries over XML documents [14] and inequality joins over tuple-independent probabilistic databases [32]. Throughout the article, we contrast our new width notions with fhtw and subw, and our new algorithm #PANDA with the state-of-the-art algorithms PANDA and InsideOut for FAQ and FAQ-AI queries.

Prior seminal work considers the containment and minimization problem for queries with inequalities [23]. The efficient evaluation of such queries continues to receive considerable attention in the database community [22].

There is a bulk of work on queries with disequalities (not-equal), which are at times referred to as inequalities. Queries with disequalities are a proper subclass of FAQ-AI (since $x\neq y$ can be represented as $(x<y)\vee(y<x)$, i.e., by two additive inequalities). Prior works [24, 3] present several results for this proper subclass that are stronger than our general results for FAQ-AI in this work. In particular, for queries with disequalities it suffices to consider tree decompositions only for "skeleton" edges (ignoring "ligament" edges, which in this case are the disequalities) [24, 3], whereas for the more general FAQ-AI we need to consider "relaxed" tree decompositions (see Def. 3.3). Section 4 reviews relevant works on machine learning.

## 2 Preliminaries

We assume without loss of generality that the semiring operations $\oplus$ and $\otimes$ can be performed in $O(1)$-time. (When the assumption does not hold, for the set semiring for instance, we can multiply the claimed runtime by the real operations' runtime.)

### 2.1 Tree decompositions and polymatroids

We briefly define tree decompositions and the fhtw and subw parameters. We refer the reader to the recent survey by Gottlob et al. [13] for more details and historical context. In what follows, the hypergraph $\mathcal{H}$ should be thought of as the hypergraph of the input query, although the notions of tree decomposition and width parameters are defined independently of queries.

A tree decomposition of a hypergraph $\mathcal{H}=(\mathcal{V},\mathcal{E})$ is a pair $(T,\chi)$, where $T$ is a tree and $\chi:V(T)\to 2^{\mathcal{V}}$ maps each node $t$ of the tree to a subset $\chi(t)$ of vertices, such that:

1. every hyperedge $K\in\mathcal{E}$ is a subset of some $\chi(t)$, $t\in V(T)$ (i.e., every edge is covered by some bag);
2. for every vertex $v\in\mathcal{V}$, the set $\{t\ |\ v\in\chi(t)\}$ is a non-empty (connected) sub-tree of $T$. This is called the running intersection property.

The sets $\chi(t)$ are called the bags of the tree decomposition. Let $\mathsf{TD}(\mathcal{H})$ denote the set of all tree decompositions of $\mathcal{H}$. When $\mathcal{H}$ is clear from context, we use TD for brevity.

To define width parameters, we use the polymatroid characterization from Abo Khamis et al. [6]. A function $h:2^{\mathcal{V}}\to\mathbb{R}_+$ is called a (non-negative) set function on $\mathcal{V}$. A set function $h$ on $\mathcal{V}$ is modular if $h(S)=\sum_{v\in S}h(\{v\})$ for all $S\subseteq\mathcal{V}$, monotone if $h(X)\le h(Y)$ whenever $X\subseteq Y$, and submodular if $h(X\cup Y)+h(X\cap Y)\le h(X)+h(Y)$ for all $X,Y\subseteq\mathcal{V}$. A monotone, submodular set function $h$ with $h(\emptyset)=0$ is called a polymatroid.
Let $\Gamma_n$ denote the set of all polymatroids on $\mathcal{V}$, where $n=|\mathcal{V}|$. Given $\mathcal{H}$, define the set of edge-dominated set functions:

$$\mathsf{ED} := \{h\ |\ h:2^{\mathcal{V}}\to\mathbb{R}_+,\ h(S)\le 1\ \forall S\in\mathcal{E}\}. \qquad (5)$$

We next define the submodular width and the fractional hypertree width of a given hypergraph $\mathcal{H}$:

$$\mathsf{fhtw}(\mathcal{H}) := \min_{(T,\chi)\in\mathsf{TD}}\ \max_{h\in\mathsf{ED}\cap\Gamma_n}\ \max_{t\in V(T)} h(\chi(t)), \qquad (6)$$

$$\mathsf{subw}(\mathcal{H}) := \max_{h\in\mathsf{ED}\cap\Gamma_n}\ \min_{(T,\chi)\in\mathsf{TD}}\ \max_{t\in V(T)} h(\chi(t)). \qquad (7)$$

It is known [28] that $\mathsf{subw}(\mathcal{H})\le\mathsf{fhtw}(\mathcal{H})$, and there are classes of hypergraphs with bounded subw and unbounded fhtw. Furthermore, fhtw is strictly less than other width notions such as (generalized) hypertree width and tree width.

###### Remark 2.1.

Prior to Abo Khamis et al. [6], the commonly used definition of fhtw is [15]

$$\mathsf{fhtw}(\mathcal{H}):=\min_{(T,\chi)\in\mathsf{TD}}\ \max_{t\in V(T)}\rho^*_{\mathcal{E}}(\chi(t)),$$

where $\rho^*_{\mathcal{E}}(B)$ is the fractional edge cover number of a vertex set $B$ using the hyperedge set $\mathcal{E}$. It is straightforward to show, using linear programming duality [6], that

$$\max_{t\in V(T)}\ \max_{h\in\mathsf{ED}\cap\Gamma_n}h(\chi(t))\;=\;\max_{t\in V(T)}\rho^*_{\mathcal{E}}(\chi(t)), \qquad (8)$$

proving the equivalence of the two definitions. However, the characterization (6) has two primary advantages: (i) it exposes the minimax/maximin duality between fhtw and subw, and, more importantly, (ii) it makes it completely straightforward to relax the definitions by replacing the constraints with other applicable constraints, as shall be shown in later sections. ∎

###### Definition 2.2 (F-connex tree decomposition [7, 35]).

Given a hypergraph $\mathcal{H}=(\mathcal{V},\mathcal{E})$ and a set $F\subseteq\mathcal{V}$, a tree decomposition $(T,\chi)$ of $\mathcal{H}$ is $F$-connex if there is a subset $V'\subseteq V(T)$ that forms a connected subtree of $T$ and satisfies $\bigcup_{t\in V'}\chi(t)=F$. (Note that $V'$ could be empty.) We use $\mathsf{TD}_F$ to denote the set of all $F$-connex tree decompositions of $\mathcal{H}$. (Note that when $F=\emptyset$, $\mathsf{TD}_F=\mathsf{TD}$.)

### 2.2 InsideOut and PANDA

To answer the FAQ query (2), we need a model for the representation of the input factors $R_K$. The support of the function $R_K$ is the set of tuples $x_K$ such that $R_K(x_K)\neq\mathbf{0}$. We use $|R_K|$ to denote the size of its support. For example, if $R_K$ represents an input relation, then $|R_K|$ is the number of tuples in $R_K$. In practice, there often are factors with infinite support, e.g., when $R_K$ represents a built-in function in a database, an arithmetic operator, or a comparison operator as in (3). To deal with this more general setting, the edge set $\mathcal{E}$ is partitioned into two sets $\mathcal{E}=\mathcal{E}_{\not\infty}\uplus\mathcal{E}_\infty$, where $|R_K|$ is finite for all $K\in\mathcal{E}_{\not\infty}$ and infinite for all $K\in\mathcal{E}_\infty$. For simplicity, we often state runtimes of algorithms in terms of the "input size" $N:=\max_{K\in\mathcal{E}_{\not\infty}}|R_K|$. Moreover, we use $\|Q\|$ to denote the output size of $Q$. We always assume that every free variable occurs in some hyperedge of $\mathcal{E}_{\not\infty}$; otherwise the output size could be infinite.

**InsideOut [4, 5].** To answer (2), the InsideOut algorithm works by eliminating variables, along with an idea called the "indicator projection". Its runtime is described by the FAQ-width of the query, a slight generalization of fhtw. For one semiring, we can define faqw by applying Definition (6) over a restricted set of tree decompositions and edge-dominated polymatroids. In particular, let $F$ denote the set of free variables in (2), and recall $\mathsf{TD}_F$ from Definition 2.2. Then,

$$\mathsf{ED}_{\not\infty} := \{h\ |\ h:2^{\mathcal{V}}\to\mathbb{R}_+,\ h(S)\le 1\ \forall S\in\mathcal{E}_{\not\infty}\}, \qquad (9)$$

$$\mathsf{faqw}(Q) := \min_{(T,\chi)\in\mathsf{TD}_F}\ \max_{h\in\mathsf{ED}_{\not\infty}\cap\Gamma_n}\ \max_{t\in V(T)} h(\chi(t)) \qquad (10)$$

$$\hphantom{\mathsf{faqw}(Q):}= \min_{(T,\chi)\in\mathsf{TD}_F}\ \max_{t\in V(T)}\rho^*_{\mathcal{E}_{\not\infty}}(\chi(t)) \qquad \text{(by Remark 2.1)} \qquad (11)$$

Note that $\mathsf{faqw}(Q)=\mathsf{fhtw}(\mathcal{H})$ when $F=\emptyset$ and $\mathcal{E}_\infty=\emptyset$ (i.e., $\mathcal{E}_{\not\infty}=\mathcal{E}$). A simple result from Abo Khamis et al. [4] is the following. (Recall that throughout the article we assume the query size to be a constant and state runtimes in data complexity.)

###### Proposition 2.3 ([4]).

InsideOut answers query (2) in time $O(N^{\mathsf{faqw}(Q)}\log N+\|Q\|)$.

To solve the FAQ-AI (3), we can apply Proposition 2.3 with $\mathcal{E}_\infty=\mathcal{E}_\ell$, since all ligament factors are infinite. But this is suboptimal: later, we show a new InsideOut variant that is polynomially better.
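Remark 2.1 reduces the width of a single bag to a small linear program. As a concrete, non-authoritative illustration (our own example, not from the article), the fractional edge cover number $\rho^*$ used in (11) can be computed with an off-the-shelf LP solver, assuming scipy is available:

```python
from scipy.optimize import linprog

def fractional_edge_cover(vertices, edges):
    """rho*(vertices) = min sum_e x_e  s.t.  sum_{e containing v} x_e >= 1
    for every vertex v, and x >= 0."""
    c = [1.0] * len(edges)
    # linprog uses A_ub @ x <= b_ub, so negate the covering constraints.
    A = [[-1.0 if v in e else 0.0 for e in edges] for v in vertices]
    b = [-1.0] * len(vertices)
    return linprog(c, A_ub=A, b_ub=b, bounds=(0, None)).fun

# a bag {a,b,c,d} covered by the 4-cycle R(a,b), S(b,c), T(c,d), W(a,d):
print(fractional_edge_cover("abcd",
      [{"a","b"}, {"b","c"}, {"c","d"}, {"a","d"}]))  # -> 2.0
```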
**PANDA [6].** For the Boolean semiring, i.e., when the FAQ query (2) is of the form

$$Q(x_F) \;=\; \bigvee_{x_{\mathcal{V}\setminus F}\in\prod_{i\in\mathcal{V}\setminus F}\mathsf{Dom}(X_i)}\ \bigwedge_{K\in\mathcal{E}}R_K(x_K), \qquad (12)$$

we can do much better than Proposition 2.3. When $F=\emptyset$, Marx [28] showed that (12) can be answered in time $N^{O(\mathsf{subw}(\mathcal{H}))}$. The PANDA algorithm [6] generalizes Marx's result to deal with general degree constraints, and to meet precisely the $\tilde O(N^{\mathsf{subw}})$-runtime. In fact, PANDA works with queries such as (12) with free variables as well. In the context of this article, we can define the following notion of submodular FAQ-width in a natural way:

$$\mathsf{smfw}(Q) := \max_{h\in\mathsf{ED}_{\not\infty}\cap\Gamma_n}\ \min_{(T,\chi)\in\mathsf{TD}_F}\ \max_{t\in V(T)} h(\chi(t)). \qquad (13)$$

Then, the results from Abo Khamis et al. [6] imply:

###### Proposition 2.4 ([6]).

PANDA answers query (12) in time $\tilde O(N^{\mathsf{smfw}(Q)}+\|Q\|)$.

These results only work for the Boolean semiring. Section 3 introduces a variant of PANDA, called #PANDA, that also works for non-Boolean semirings.

### 2.3 Semigroup range searching

Orthogonal range counting (and searching) is a classic and ubiquitous problem in computational geometry [11]: given a set of $N$ points in a $d$-dimensional space, build a data structure that, given any $d$-dimensional rectangle, can efficiently return the number of enclosed points. More generally, there is the semigroup range searching problem [9], where each point $p$ of the $N$ input points also has a weight $w(p)\in G$, where $(G,\oplus)$ is a semigroup.² The problem is: given a $d$-dimensional rectangle $q$, compute $\bigoplus_{p\in q}w(p)$. Classic results by Chazelle [9] show that there are data structures for semigroup range searching which can be constructed in time $O(N\,\mathrm{polylog}\,N)$, and answer rectangular queries in $O(\mathrm{polylog}\,N)$-time. Also, this is almost the best we can hope for [10]. There are more recent improvements to Chazelle's result (see, e.g., Chan et al. [8]), but they are minor (at most a logarithmic factor), as the original results were already very close to matching the lower bound.

Most of these range search/counting problems can be reduced to the dominance range searching problem (on semigroups), where the query is represented by a point $q$, and the objective is to return $\bigoplus_{p:\,q\preceq p}w(p)$. Here, $\preceq$ denotes the "dominance" relation (coordinate-wise $\le$). We can think of $q$ as the lower corner of an infinite rectangle query.

## 3 Relaxed tree decompositions and relaxed polymatroids

### 3.1 Connection to semigroup range searching

We always assume that the free variables are covered by the skeleton edges; otherwise the output size could be infinite. We start with a special case of (3) in which the skeleton part contains only two hyperedges $U$ and $L$. Consider the aggregate query of the form

$$Q(x_F)=\bigoplus_{x_{\mathcal{V}\setminus F}}\Phi_1(x_U)\otimes\Phi_2(x_L)\otimes\Big(\bigotimes_{S\in\mathcal{E}_\ell}\mathbb{1}_{\sum_{v\in S}\theta^S_v(x_v)\le 0}\Big), \qquad (14)$$

where $\Phi_1$ and $\Phi_2$ are two input functions/relations over the variable sets $U$ and $L$, respectively. We prove the following simple but important lemma:

###### Lemma 3.1.

Let $d:=|\mathcal{E}_\ell|$, and $N:=|\Phi_1|+|\Phi_2|$. For $F\subseteq U$, query (14) can be answered in time $O(N\log^d N)$.

###### Proof.

If there is a hyperedge $S\in\mathcal{E}_\ell$ for which $S\subseteq U$, then in an $O(|\Phi_1|)$-time pre-processing step we can "absorb" the factor $\mathbb{1}_{\theta^S_S(x_S)\le 0}$ into the factor $\Phi_1$, by replacing $\Phi_1(x_U)$ with $\Phi_1(x_U)\otimes\mathbb{1}_{\theta^S_S(x_S)\le 0}$. A similar absorption can be done with $\Phi_2$. Hence, without loss of generality we can assume that $S\not\subseteq U$ and $S\not\subseteq L$ for all $S\in\mathcal{E}_\ell$. Furthermore, we only need to show that we can compute (14) for $F=U$, because after $Q(x_U)$ is computed, we can marginalize away the variables in $U\setminus F$ in $O(|\Phi_1|\log|\Phi_1|)$-time.

Abusing notation somewhat, for each $S\in\mathcal{E}_\ell$ and each $T\subseteq\mathcal{V}$, define the function $\theta^S_T$ by

$$\theta^S_T(x_T) := \sum_{v\in T}\theta^S_v(x_v). \qquad (15)$$

Fix a tuple $x_U$ such that $\Phi_1(x_U)\neq\mathbf{0}$. A tuple $x_L$ is said to be $x_U$-adjacent if $x_L$ and $x_U$ agree on their common variables $U\cap L$. We show how to compute the following sum in poly-logarithmic time:

$$\bigoplus_{x_{L\setminus U}}\Phi_1(x_U)\otimes\Phi_2(x_L)\otimes\Big(\bigotimes_{S\in\mathcal{E}_\ell}\mathbb{1}_{\sum_{v\in S}\theta^S_v(x_v)\le 0}\Big) \qquad (16)$$

$$=\;\Phi_1(x_U)\otimes\bigoplus_{x_{L\setminus U}}\Phi_2(x_L)\otimes\Big(\bigotimes_{S\in\mathcal{E}_\ell}\mathbb{1}_{\theta^S_{S\cap U}(x_{S\cap U})\le-\theta^S_{S\setminus U}(x_{S\setminus U})}\Big), \qquad (17)$$

where the inner sum ranges only over tuples $x_L$ which are $x_U$-adjacent; non-adjacent tuples contribute $\mathbf{0}$.
Now, for the fixed $x_U$ and for each $x_L$, define the following $d$-dimensional points:

$$q(x_U) = (q_S(x_U))_{S\in\mathcal{E}_\ell}\quad\text{where}\quad q_S(x_U):=\theta^S_{S\cap U}(x_{S\cap U}),$$

$$p(x_L) = (p_S(x_L))_{S\in\mathcal{E}_\ell}\quad\text{where}\quad p_S(x_L):=-\theta^S_{S\setminus U}(x_{S\setminus U}).$$

We write $q(x_U)\preceq p(x_L)$ to say that $q(x_U)$ is dominated by $p(x_L)$ coordinate-wise: $q_S(x_U)\le p_S(x_L)$ for all $S\in\mathcal{E}_\ell$. Assign to each point $p(x_L)$ a "weight" of $\Phi_2(x_L)$. Now, the inner sum of (17) becomes

$$\bigoplus_{x_{L\setminus U}}\Big(\bigotimes_{S\in\mathcal{E}_\ell}\mathbb{1}_{q_S(x_U)\le p_S(x_L)}\Big)\otimes\Phi_2(x_L) \qquad (18)$$

$$=\;\bigoplus_{x_{L\setminus U}}\mathbb{1}_{q(x_U)\preceq p(x_L)}\otimes\Phi_2(x_L). \qquad (19)$$

The expression thus computes, for a given "query point" $q(x_U)$, the weighted sum over all points $p(x_L)$ that dominate the query point. This is precisely the dominance range counting problem, which, modulo a one-off preprocessing step quasi-linear in $|\Phi_2|$, can be solved in poly-logarithmic time per query point [9], as reviewed in Section 2.3. To conclude the proof, note that (14) can be written as (assuming $F\subseteq U$, as is the case in Lemma 3.1)

$$Q(x_F) \;=\; \bigoplus_{x_{U\setminus F}}\bigoplus_{x_{L\setminus U}}\Phi_1(x_U)\otimes\Phi_2(x_L)\otimes\Big(\bigotimes_{S\in\mathcal{E}_\ell}\mathbb{1}_{\sum_{v\in S}\theta^S_v(x_v)\le 0}\Big),$$

whose inner sum is the same as (16), and where the outer sum ranges over tuples $x_U$ in the support of $\Phi_1$. ∎

###### Example 3.2.

Let $R(A,B)$ be a binary relation. Suppose we want to count the number of pairs of tuples $(a,b),(a',b')\in R$ satisfying two additive inequalities, say $a\le a'$ and $b'\le b$. By setting $\Phi_1=\Phi_2=R$ (over disjoint copies of the variables) and encoding the two inequalities via the $\theta$-functions, the problem can be reduced to the form (14) with $d=2$ and $U\cap L=\emptyset$. We can thus compute this count in time $O(|R|\log^2|R|)$. ∎

### 3.2 Relaxed tree decompositions

Equipped with this basic case, we can now proceed to solve the general setting of (3). To this end, we define a new width parameter.

###### Definition 3.3 (Relaxed tree decomposition).

Let $\mathcal{H}=(\mathcal{V},\mathcal{E})$ denote a multi-hypergraph whose edge multiset is partitioned into $\mathcal{E}=\mathcal{E}_s\uplus\mathcal{E}_\ell$. A relaxed tree decomposition of $\mathcal{H}$ (with respect to the partition $\mathcal{E}=\mathcal{E}_s\uplus\mathcal{E}_\ell$) is a pair $(T,\chi)$, where $T$ is a tree and $\chi:V(T)\to 2^{\mathcal{V}}$ satisfies the following properties:

• The running intersection property holds: for each vertex $v\in\mathcal{V}$, the set $\{t\ |\ v\in\chi(t)\}$ is a connected subtree in $T$.
• Every "skeleton" edge $K\in\mathcal{E}_s$ is covered by some bag: $K\subseteq\chi(t)$ for some $t\in V(T)$.
• Every "ligament" edge $S\in\mathcal{E}_\ell$ is covered by the union of two adjacent bags $\chi(t_1)$ and $\chi(t_2)$, i.e., $S\subseteq\chi(t_1)\cup\chi(t_2)$, where $(t_1,t_2)$ is an edge of $T$.

Let $\mathsf{TD}^\ell(\mathcal{H})$ denote the set of all relaxed tree decompositions of $\mathcal{H}$ (with respect to the skeleton-ligament partition). When $\mathcal{H}$ is clear from context, we use $\mathsf{TD}^\ell$ for the sake of brevity. Given $F\subseteq\mathcal{V}$, let $\mathsf{TD}^\ell_F$ denote the set of all relaxed $F$-connex tree decompositions of $\mathcal{H}$.

#### FAQ-AI on a general semiring

We use relaxed TDs in conjunction with Lemma 3.1 to answer FAQ-AI with a relaxed notion of faqw. In particular, the relaxed width parameters are defined in exactly the same way as the usual width parameters of Section 2, except we allow the TDs to range over relaxed ones.

###### Definition 3.4 (Relaxed faqw).

Let $Q$ be an FAQ-AI query (3), and $\mathcal{H}$ be its hypergraph. Furthermore, let $\mathcal{E}_{\not\infty}$ denote the set of hyperedges $K$ for which $|R_K|$ is finite. Then, the relaxed FAQ-width of $Q$ is defined by

$$\mathsf{faqw}^\ell(Q) := \min_{(T,\chi)\in\mathsf{TD}^\ell_F}\ \max_{h\in\mathsf{ED}_{\not\infty}\cap\Gamma_n}\ \max_{t\in V(T)} h(\chi(t)). \qquad (20)$$

When $F=\emptyset$, $\mathsf{faqw}^\ell(Q)$ collapses to $\mathsf{fhtw}^\ell(Q)$, which is the relaxed fhtw for FAQ-AI without free variables:

$$\mathsf{fhtw}^\ell(Q) := \min_{(T,\chi)\in\mathsf{TD}^\ell_\emptyset}\ \max_{h\in\mathsf{ED}_{\not\infty}\cap\Gamma_n}\ \max_{t\in V(T)} h(\chi(t)). \qquad (21)$$

A relaxed tree decomposition $(T,\chi)$ of $\mathcal{H}$ is optimal if its width is equal to $\mathsf{faqw}^\ell(Q)$, i.e.,

$$\mathsf{faqw}^\ell(Q)=\max_{h\in\mathsf{ED}_{\not\infty}\cap\Gamma_n}\ \max_{t\in V(T)}h(\chi(t)).$$

###### Theorem 3.5.

Any FAQ-AI query of the form (3) on any semiring can be answered in time $O(N^{\mathsf{faqw}^\ell(Q)}\log^d N+\|Q\|)$, where $d$ is the maximum number of additive inequalities covered by a pair of adjacent bags in an optimal relaxed tree decomposition.³

###### Proof.

We first consider the case of no free variables, because this case captures the key idea. Fix an optimal relaxed tree decomposition $(T,\chi)$. We first compute, for each bag $\chi(t)$ of the tree decomposition, a factor $\Phi_t$ such that

$$Q() \;=\; \bigoplus_{x_{\mathcal{V}}}\Big(\bigotimes_{K\in\mathcal{E}_s}R_K(x_K)\Big)\otimes\Big(\bigotimes_{S\in\mathcal{E}_\ell}\mathbb{1}_{\sum_{v\in S}\theta^S_v(x_v)\le 0}\Big) \qquad (22)$$

$$=\;\bigoplus_{x_{\mathcal{V}}}\Big(\bigotimes_{t\in V(T)}\Phi_t(x_{\chi(t)})\Big)\otimes\Big(\bigotimes_{S\in\mathcal{E}_\ell}\mathbb{1}_{\sum_{v\in S}\theta^S_v(x_v)\le 0}\Big). \qquad (23)$$

To define the factors $\Phi_t$, we need the notion of indicator projection [5, 4].
For a given $t\in V(T)$ and $K\in\mathcal{E}_s$ such that $K\cap\chi(t)\neq\emptyset$, the indicator projection of $R_K$ onto the bag $\chi(t)$ is the function $\pi_{t,K}$ over $J:=K\cap\chi(t)$ defined by

$$\pi_{t,K}(x_J) := \begin{cases}\mathbf{1} & \exists x_{K\setminus J}\ \text{s.t.}\ R_K((x_J,x_{K\setminus J}))\neq\mathbf{0},\\ \mathbf{0} & \text{otherwise.}\end{cases} \qquad (24)$$

Recall from Definition 3.3 that every $K\in\mathcal{E}_s$ is covered by at least one bag $\chi(t)$, $t\in V(T)$. Fix an arbitrary coverage assignment $\alpha:\mathcal{E}_s\to V(T)$, where $K$ is covered by the bag $\chi(\alpha(K))$. Then, the factors $\Phi_t$ are defined by:

$$\Phi_t(x_{\chi(t)}) := \bigotimes_{K\in\alpha^{-1}(t)}R_K(x_K)\;\otimes\;\bigotimes_{\substack{K\in\mathcal{E}_s\\K\cap\chi(t)\neq\emptyset}}\pi_{t,K}(x_{K\cap\chi(t)}). \qquad (25)$$

It is easy to verify that (23) holds. Using a worst-case optimal join algorithm [30, 31, 39] we can compute (25) in time

$$O\big(N^{\rho^*_{\mathcal{E}_{\not\infty}}(\chi(t))}\cdot\log N\big)=O\big(N^{\max_{h\in\mathsf{ED}_{\not\infty}\cap\Gamma_n}h(\chi(t))}\cdot\log N\big). \qquad (26)$$

Over all $t\in V(T)$, our runtime is bounded by $O(N^w\log N)$, where

$$w=\max_{t\in V(T)}\ \max_{h\in\mathsf{ED}_{\not\infty}\cap\Gamma_n}h(\chi(t)). \qquad (27)$$

The support of each factor $\Phi_t$ has size bounded by $N^w$. Next we compute (23) in time $O(N^w\log^d N)$. We will make use of the fact that $(T,\chi)$ is a relaxed TD. Fix an arbitrary root of the tree decomposition $T$; following InsideOut, we compute (23) by eliminating variables from the leaves of $T$ up to the root. Without loss of generality, we assume that the tree decomposition is non-redundant, i.e., no bag is a subset of another in the tree decomposition (otherwise the contained bag's factor can be "absorbed" into the containing bag's factor). Let $t_1$ be any leaf of $T$ and $t_2$ its parent, where $L:=\chi(t_1)$ and $U:=\chi(t_2)$. Now write (23) as follows:

$$\bigoplus_{x_{\mathcal{V}}}\Big(\bigotimes_{t\in V(T)}\Phi_t(x_{\chi(t)})\Big)\otimes\Big(\bigotimes_{S\in\mathcal{E}_\ell}\mathbb{1}_{\sum_{v\in S}\theta^S_v(x_v)\le 0}\Big)$$

$$=\bigoplus_{x_{\mathcal{V}\setminus(L\setminus U)}}\bigoplus_{x_{L\setminus U}}\Big(\bigotimes_{t\in V(T)}\Phi_t(x_{\chi(t)})\Big)\otimes\Big(\bigotimes_{S\in\mathcal{E}_\ell}\mathbb{1}_{\sum_{v\in S}\theta^S_v(x_v)\le 0}\Big)$$

$$=\bigoplus_{x_{\mathcal{V}\setminus(L\setminus U)}}\Big(\bigotimes_{t\in V(T)\setminus\{t_1,t_2\}}\Phi_t(x_{\chi(t)})\Big)\otimes\Big(\bigotimes_{\substack{S\in\mathcal{E}_\ell\\S\cap(L\setminus U)=\emptyset}}\mathbb{1}_{\sum_{v\in S}\theta^S_v(x_v)\le 0}\Big)\otimes\underbrace{\Big[\bigoplus_{x_{L\setminus U}}\Phi_{t_1}(x_L)\otimes\Phi_{t_2}(x_U)\otimes\Big(\bigotimes_{\substack{S\in\mathcal{E}_\ell\\S\cap(L\setminus U)\neq\emptyset}}\mathbb{1}_{\sum_{v\in S}\theta^S_v(x_v)\le 0}\Big)\Big]}_{\text{a sub-query }\varphi_U(x_U)\text{ of the form (14) with free vars }U}. \qquad (28)$$

The third equality uses the semiring's distributive law. (Note that $S\cap(L\setminus U)\neq\emptyset$ implies that $S\subseteq L\cup U$, thanks to Definition 3.3 and the fact that $t_2$ is the only neighbor of $t_1$.) Lemma 3.1 implies that we can compute the sub-query $\varphi_U(x_U)$ from (28) in the allotted time. The above step eliminates all variables in $L\setminus U$. Repeatedly applying the above step yields the desired output $Q()$. When the query has free variables, the algorithm proceeds similarly to the case of an FAQ with free variables [4]. ∎

###### Example 3.6.

Given three binary relations $R$, $S$, and $T$, consider a query $Q_3$ that counts the number of tuples $(a,b,c,d)$ that satisfy:

$$R(a,b)\wedge S(b,c)\wedge T(c,d)\wedge(a\le c)\wedge(c\le b)\wedge(d\le b). \qquad (29)$$

The query has $\mathcal{E}_s=\{\{a,b\},\{b,c\},\{c,d\}\}$ and $\mathcal{E}_\ell=\{\{a,c\},\{b,c\},\{b,d\}\}$. Let $N=\max(|R|,|S|,|T|)$. Note that $\mathsf{fhtw}(Q_3)=2$. In fact, any of the previously known algorithms, e.g. [4, 5], would take $O(N^2)$ time to answer $Q_3$. However, this query has $\mathsf{fhtw}^\ell(Q_3)=1$, and by Theorem 3.5 it can be answered in time $O(N\log^2 N)$. (Note that here $d=2$.) An optimal relaxed tree decomposition is shown in Figure 1. ∎

We next give a couple of simple lower and upper bounds related to $\mathsf{fhtw}^\ell$. Proposition 3.7 shows that, effectively, the $\mathsf{fhtw}^\ell$-based runtime is the best we can hope for if the FAQ-AI query is arbitrary. Proposition 3.8 shows that, while the relaxed tree decomposition idea can improve the runtime by a polynomial factor, it cannot improve the runtime over straightforwardly applying InsideOut (over non-relaxed tree decompositions) by more than a polynomial factor.

###### Proposition 3.7.

For any positive integer $k$, there exists an FAQ-AI query of the form (3) for which $\mathsf{fhtw}^\ell=\lceil k/2\rceil$, and which cannot be answered in time $O(N^{\lceil k/2\rceil-\epsilon})$ for any $\epsilon>0$, modulo $k$-sum-hardness.

###### Proof.

It is widely assumed [33, 26] that $O(N^{\lceil k/2\rceil})$ is essentially the best runtime for $k$-sum, which is the following problem: given $k$ number sets $R_1,\ldots,R_k$ of maximum size $N$, determine whether there is a tuple $(x_1,\ldots,x_k)\in R_1\times\cdots\times R_k$ such that $x_1+\cdots+x_k=0$. We can reduce $k$-sum to our problem: consider the query $Q$ over the Boolean semiring:

$$Q()\leftarrow\Big(\bigwedge_{i\in[k]}R_i(x_i)\Big)\wedge\Big(\sum_{i\in[k]}x_i\le 0\Big)\wedge\Big(\sum_{i\in[k]}x_i\ge 0\Big). \qquad (30)$$

The answer to $Q$ is true iff there is a tuple $(x_1,\ldots,x_k)\in R_1\times\cdots\times R_k$ such that $\sum_{i\in[k]}x_i=0$.
The reduction shows that our query (30) is $k$-sum-hard. For this query, $\mathsf{fhtw}^\ell=\lceil k/2\rceil$. ∎

###### Proposition 3.8.

For any FAQ-AI query of the form (3), we have $\mathsf{faqw}(Q)\le 2\cdot\mathsf{faqw}^\ell(Q)$; in particular, when $Q$ has no free variables, $\mathsf{fhtw}(Q)\le 2\cdot\mathsf{fhtw}^\ell(Q)$.

###### Proof.

Let $(T',\chi')$ denote a relaxed tree decomposition of $\mathcal{H}$ with fractional hypertree width $w$. Construct a new (non-relaxed) tree decomposition $(T,\chi)$ for $\mathcal{H}$ as follows. Each vertex $t$ in $T'$ is also a vertex in $T$, with $\chi(t):=\chi'(t)$. Moreover, to each edge $(t_1,t_2)$ of $T'$ there corresponds an additional vertex $t_{12}$ in $T$ whose bag is $\chi(t_{12}):=\chi'(t_1)\cup\chi'(t_2)$. As for the edge set of $T$: for each edge $(t_1,t_2)$ of $T'$, there are two corresponding edges in $T$, namely $(t_1,t_{12})$ and $(t_{12},t_2)$. We can verify that $(T,\chi)$ is a valid (non-relaxed) tree decomposition of $\mathcal{H}$ whose width is at most $2w$, since $\rho^*(\chi'(t_1)\cup\chi'(t_2))\le\rho^*(\chi'(t_1))+\rho^*(\chi'(t_2))\le 2w$. ∎
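To make the semigroup range search primitive behind Lemma 3.1 concrete, here is a minimal one-dimensional ($d=1$) sketch (our own illustration with sorted arrays and suffix sums, not the article's Chazelle-based implementation):

```python
import bisect

def build(points_with_weights):
    """1-D semigroup 'dominance' structure: sort the points and precompute
    suffix sums, so the total weight of all points p >= q is O(log N) per query."""
    pts = sorted(points_with_weights)
    keys = [p for p, _ in pts]
    suffix = [0.0] * (len(pts) + 1)
    for i in range(len(pts) - 1, -1, -1):
        suffix[i] = suffix[i + 1] + pts[i][1]
    return keys, suffix

def dominating_weight(keys, suffix, q):
    return suffix[bisect.bisect_left(keys, q)]

# count pairs (x, y) with x in A, y in B, x <= y (each B-point has weight 1)
A, B = [3, 1, 4], [2, 3, 5]
keys, suf = build([(y, 1.0) for y in B])
print(sum(dominating_weight(keys, suf, x) for x in A))  # -> 6
```

For $d>1$, one would layer such structures into a range tree, picking up roughly one logarithmic factor per additional inequality, which is where the $\log^d N$ in Lemma 3.1 comes from.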
https://mathswithdavid.com/15-november/
# 15 November

👉🏼 Simplify $5(x + 3) + 2x - 4$

👈🏼 Calculate $8 \frac{1}{3} \div \frac{4}{7}$

👆🏼 Find x: (the accompanying diagram did not survive extraction)

👇🏼 Factorise completely: $x^3 - 25x$

🖐🏼 In the following Venn diagram (also missing from the extracted page), if we know that A is true, find the probability of B (we can say "Find the probability of B given A", or "Find P(B|A)"):
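For reference, worked answers to the three self-contained items (our own working, not from the page; the two figure-based items cannot be answered without their diagrams):

$5(x+3) + 2x - 4 = 5x + 15 + 2x - 4 = 7x + 11$

$8\frac{1}{3} \div \frac{4}{7} = \frac{25}{3}\times\frac{7}{4} = \frac{175}{12} = 14\frac{7}{12}$

$x^3 - 25x = x(x^2 - 25) = x(x-5)(x+5)$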
https://xianblog.wordpress.com/2010/11/07/exchange-algorithm/
## Exchange algorithm

Following a comment by Mark Johnson on the ABC lectures, I read Murray, Ghahramani and MacKay's "Doubly-intractable distributions" paper. As I already wrote in a reply to this comment, the link to the paper is quite relevant. First, because those doubly-intractable distributions are a perfect setting for ABC. Second, because the solution of Moller, Pettit, Berthelsen and Reeves (2004, Biometrika) is a close alternative to ABC. Indeed, the core of the Moller et al. method is to simulate pseudo-data, as in ABC, in order to cancel the intractable part of the likelihood. If one uses as target density on the auxiliary pseudo-data the indicator function used in ABC (assuming this results in a density on the pseudo-data), then we get rather close to ABC-MCMC! Of course, there still are differences, in that (a) the auxiliary variable method of Moller et al. (2004) and Murray et al. (2006) still requires the (functional) part of the likelihood function to be available; (b) the A in ABC-MCMC stands for approximative; (c) the connection only works when considering a distance between the data and the pseudo-data, not when using summary statistics. It would nonetheless be interesting to see a comparison between both approaches, for instance in a Potts model.

To be more precise, equation (9) in Murray et al. (2006) is very similar to ABC if $p(x|\theta,y)$ is replaced by the indicator function of the proximity to $y$ (assuming a uniform distribution is available). In that case (9) becomes

$\dfrac{f(y|\theta')\pi(\theta')}{f(y|\theta)\pi(\theta)}\dfrac{q(\theta|\theta',y)}{q(\theta'|\theta,y)}\dfrac{f(x|\theta)}{f(x'|\theta')}\mathbb{I}_{d(x,y)<\epsilon}$

I also found the exchange algorithm interesting because it uses a straightforward importance sampling estimator of the ratio of normalising constants,

$\widehat{\dfrac{\mathcal{Z}(\theta)}{\mathcal{Z}(\theta')}}=\dfrac{f(x|\theta)}{f(x|\theta')},\qquad x\sim f(x|\theta'),$

which, once plugged into the Metropolis-Hastings ratio together with an auxiliary draw $w\sim f(w|\theta')$, yields the acceptance ratio

$\dfrac{f(y|\theta')\pi(\theta')}{f(y|\theta)\pi(\theta)}\dfrac{q(\theta|\theta',y)}{q(\theta'|\theta,y)}\dfrac{f(w|\theta)}{f(w|\theta')}$
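To see how the normalising constants cancel in practice, here is a hedged toy sketch of one exchange-algorithm step (our own illustration, not code from the post): the model is a deliberately simple exponential family $f(x|\theta)\propto\exp(\theta x-x^2/2)$, whose $\theta$-dependent normalising constant is never evaluated, and exact simulation stands in for the perfect sampling of the auxiliary variable; the proposal is symmetric so $q$ cancels.

```python
import numpy as np

rng = np.random.default_rng(0)

def log_f_unnorm(x, theta):       # unnormalised log-likelihood, Z(theta) never computed
    return theta * x - 0.5 * x * x

def sample_data(theta):           # exact sampler playing the role of w ~ f(.|theta')
    return rng.normal(theta, 1.0)

def exchange_step(theta, y, log_prior, prop_sd=0.5):
    theta_new = theta + prop_sd * rng.normal()    # symmetric random-walk proposal
    w = sample_data(theta_new)                    # auxiliary draw
    log_ratio = (log_f_unnorm(y, theta_new) - log_f_unnorm(y, theta)
                 + log_prior(theta_new) - log_prior(theta)
                 + log_f_unnorm(w, theta) - log_f_unnorm(w, theta_new))
    return theta_new if np.log(rng.uniform()) < log_ratio else theta

# e.g. a short chain under a standard normal prior (hypothetical choices)
theta = 0.0
for _ in range(1000):
    theta = exchange_step(theta, y=1.2, log_prior=lambda th: -0.5 * th * th)
```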
https://math.stackexchange.com/questions/574749/simplify-sum-of-products-abc-abc-abc
# Simplify Sum of Products: $\;A'B'C' + A'B'C + ABC'$

How would you simplify the following sum-of-products expression using algebraic manipulations in boolean algebra? $$A'B'C' + A'B'C + ABC'$$

• You have to say a bit about what each letter represents. Matrices? – rcorty Nov 20 '13 at 15:37
• It's boolean algebra – user550 Nov 20 '13 at 15:38
• wolframalpha.com/input/… – user550 Nov 20 '13 at 15:50

Essentially, all that's involved here is using the distributive law (DL), once.

Distributive law, multiplication over addition: $$PQ + PR = P(Q + R)\tag{DL}$$

In your expression, in the first two terms, put $P = A'B'$. We also use the identity $$\;P + P' = 1\tag{+ID}$$

\begin{align} A'B'C' + A'B'C + ABC' & = A'B'(C' + C) + ABC' \tag{DL}\\ \\ &= A'B'(1) + ABC' \tag{+ID}\\ \\ & = A'B' + ABC'\end{align}

Hint: the first two terms are the same except for the $C'$ or $C$. Put those two terms together.

• So A'B' + ABC' ? – user550 Nov 20 '13 at 15:44
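A quick mechanical check of the accepted simplification, enumerating all eight truth assignments (our own addition, not part of the original thread):

```python
from itertools import product

def f(a, b, c):   # A'B'C' + A'B'C + ABC'
    return ((not a and not b and not c) or (not a and not b and c)
            or (a and b and not c))

def g(a, b, c):   # simplified form: A'B' + ABC'
    return (not a and not b) or (a and b and not c)

assert all(f(*v) == g(*v) for v in product([False, True], repeat=3))
print("equivalent on all 8 assignments")
```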
https://alfven.princeton.edu/research/disprel/
## Characterization of Dispersion Relations ### Motivation Numerical and experimental tools for plasma dispersion relation studies in plasma propulsion devices are an important resource for researchers investigating thruster physics. The plasma wave modes described by the dispersion relation can affect physical processes such as particle transport or grow into instabilities that disrupt the discharge. For example, the experimentally observed anomalous electron transport in the channel of Hall thrusters might be partly explained by electrostatic plasma waves influencing transport processes. A core aspect of such plasma wave studies involves characterizing the plasma dispersion relation, which requires the development of versatile numerical and experimental tools. ### The Dispersion Relation The dispersion relation $$\mathcal{D}$$ of a plasma is a function which satisfies the condition $\mathcal{D}(\omega, \mathbf{k}; p_1, p_2, \dots ) = 0,$ where $$\omega$$ is the wave frequency, $$\mathbf{k}$$ is the wavenumber vector, and the $$p_i$$ are plasma parameters such as electron density or background magnetic field. The exact functional form of $$\mathcal{D}$$ depends on the particular plasma model and governing equations from which it is derived. The plasma wave modes that may arise in a discharge are described by the complex zeros of $$\mathcal{D}$$, expressed as functions of the form $$\omega = \omega(\mathbf{k};p_1, p_2, \dots)$$ derived by solving the above equation. For example, the perturbative analysis for high-frequency electrostatic wave solutions to the governing equations of a warm unmagnetized plasma leads to the dispersion relation for warm Langmuir waves $\omega^2 - \omega_{p,\text{e}}^2 - 3k^2v_{t, \text{e}}^2 = 0,$ where $$\omega_{p,\text{e}}$$ is the electron plasma frequency (which is a function of electron density) and $$v_{t,\text{e}}$$ is the electron thermal velocity (which is a function of electron temperature). While this dispersion relation has a simple analytical expression for $$\omega(k)$$, the zeros of more general dispersion relations cannot be solved in closed form due to the mathematical complexity of $$\mathcal{D}$$. Characterizing the dispersion relation and resulting plasma wave modes in these cases requires more robust techniques. ### Plasma Rocket Instability Characterizer We developed the Plasma Rocket Instability Characterizer (PRINCE), a prototype software tool that numerically solves for the zeros of a user-specified $$\mathcal{D}$$ using geometry and plasma parameter data input through a graphical interface (Figures 1 and 2). PRINCE autonomously locates and tracks the zeros of the dispersion relation chosen by the user by applying numerical algorithms based on Cauchy’s Argument Principle and Newton-Raphson’s method. Information about the instabilities found is presented through various data visualization options. PRINCE finds the zeros (green dots) of $$\mathcal{D}$$ using a root-finding algorithm which applies Cauchy’s Argument Principle on grid cells (blue) in the complex plane. If multiple zeros are contained in a cell, a recursive application of the algorithm to subdivided cells resolves the zeros (Figure 3). A separate root-tracking algorithm characterizes the zeros as functions of $$\omega$$ or $$\mathbf{k}$$. See this paper for full details. 
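As a rough illustration of the root-counting idea behind PRINCE (a minimal sketch under our own assumptions, not PRINCE's actual code), Cauchy's Argument Principle says that the number of zeros of an analytic $$\mathcal{D}$$ inside a rectangle equals the winding number of $$\mathcal{D}$$ around the rectangle's boundary; cells containing multiple zeros can then be subdivided recursively, exactly as described above:

```python
import numpy as np

def winding_number(D, corners, n=4000):
    """Zeros (minus poles) of analytic D inside a rectangle, via the argument
    principle: total change of arg D(z) around the boundary, divided by 2*pi."""
    (x0, y0), (x1, y1) = corners
    t = np.linspace(0.0, 1.0, n, endpoint=False)
    edges = [x0 + 1j*y0 + t*(x1 - x0),        # bottom
             x1 + 1j*y0 + 1j*t*(y1 - y0),     # right
             x1 + 1j*y1 - t*(x1 - x0),        # top
             x0 + 1j*y1 - 1j*t*(y1 - y0)]     # left
    z = np.concatenate(edges)
    phases = np.angle(D(z))
    dphi = np.diff(np.concatenate([phases, phases[:1]]))
    dphi = (dphi + np.pi) % (2*np.pi) - np.pi  # unwrap the phase jumps
    return int(round(dphi.sum() / (2*np.pi)))

# toy "dispersion relation" with zeros at w = +/-2 (Langmuir-like w^2 = 4)
D = lambda w: w**2 - 4.0
print(winding_number(D, [(-3.0, -1.0), (3.0, 1.0)]))  # -> 2
```

A Newton-Raphson refinement step would then polish each located zero, mirroring PRINCE's two-stage locate-and-track design.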
### Active Wave Packet Injection Diagnostic

To complement PRINCE's numerical capabilities, we developed an experimental diagnostic which measures the dispersion relation of a plasma by actively injecting wave packets into the discharge and recording the plasma's dielectric response. In contrast to techniques like passive probe interferometry or laser-induced fluorescence, the active wave packet injection (AWPI) methodology provides the ability to tailor the harmonic content of the input signal in addition to control over the signal-to-noise ratio of the measurements. The linear dispersion relation can then be measured simultaneously at multiple frequencies.

As shown in Figure 4, an emitter probe or antenna injects a wave which travels downstream to receiver probes which record time-dependent ion-saturation-current traces. Frequency-domain analysis involving Welch's method to estimate power and coherence spectra of these fluctuating currents, which correspond to plasma density compressions and rarefactions induced by the plasma waves, yields the plasma dispersion relation measurement (a toy version of this estimate is sketched at the end of this page).

We developed the AWPI diagnostic schematically shown in Figure 5 and photographed in Figure 6. Two molybdenum plates constitute the diagnostic's antenna while a set of three Langmuir probes serve as receiver probes for measurements of the wavenumber in the directions parallel and perpendicular to the background magnetic field. We used the diagnostic in an experimental study of electrostatic ion-cyclotron (EIC) waves conducted in the 13.56 MHz magnetized RF argon discharge pictured in Figure 7. We measured the dispersion relation of the plasma and found it to agree with the fluid theory predictions.

### Contact

Currently at Princeton:
• None

Former students:
• Sebastián Rojas Mata
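The two-probe estimate mentioned above can be sketched as follows (a sanity-check toy with made-up sampling rate, probe spacing, and wave parameters; not the lab's analysis code): the cross-spectral phase between two receiver signals, divided by the probe separation, gives the wavenumber at each frequency where the coherence is high.

```python
import numpy as np
from scipy.signal import csd, coherence

fs, dx = 1e6, 5e-3           # sample rate [Hz] and probe separation [m] (made up)
t = np.arange(200_000) / fs
k_true, f0 = 200.0, 50e3     # synthetic wave: k = 200 rad/m at 50 kHz
s1 = np.sin(2*np.pi*f0*t) + 0.5*np.random.randn(t.size)
s2 = np.sin(2*np.pi*f0*t - k_true*dx) + 0.5*np.random.randn(t.size)

f, Pxy = csd(s1, s2, fs=fs, nperseg=4096)         # Welch cross-spectrum
_, Cxy = coherence(s1, s2, fs=fs, nperseg=4096)   # keep only coherent bins
k_est = -np.angle(Pxy) / dx                       # phase delay -> wavenumber
i = np.argmin(abs(f - f0))
print(k_est[i], Cxy[i])                           # ~200 rad/m where coherence is high
```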
https://typeset.io/topics/banach-space-2ivxofxe
Topic

# Banach space

About: Banach space is a research topic. Over the lifetime, 29605 publications have been published within this topic, receiving 480128 citations.

##### Papers

Book, 01 Jan 1972
TL;DR: In this paper, the authors consider the problem of finding solutions to elliptic boundary value problems in spaces of analytic functions and of class $M_k$, with generalizations in the case of distributions and ultra-distributions.
Abstract (table of contents): 7 Scalar and Vector Ultra-Distributions.- 1. Scalar-Valued Functions of Class $M_k$.- 1.1 The Sequences $\{M_k\}$.- 1.2 The Space $D_{M_k}(H)$.- 1.3 The Spaces $D_{M_k}(H)$ and $\varepsilon_{M_k}(H)$.- 2. Scalar-Valued Ultra-Distributions of Class $M_k$; Generalizations.- 2.1 The Space $D'_{M_k}(\Omega)$.- 2.2 Non-Symmetric Spaces of Class $M_k$.- 2.3 Scalar Ultra-Distributions of Beurling-Type.- 3. Spaces of Analytic Functions and of Analytic Functionals.- 3.1 The Spaces $H(H)$ and $H'(H)$.- 3.2 The Spaces $H(?)$ and $H(?)$.- 4. Vector-Valued Functions of Class $M_k$.- 4.1 The Space $D_{M_k}(\phi\,F)$.- 4.2 The Spaces $D_{M_k}(H,F)$ and $E_{M_k}(\phi\,F)$.- 4.3 The Spaces $D_{\pm,M_k}(\phi\,F)$.- 4.4 Remarks on the Topological Properties of the Spaces $D_{M_k}(\phi\,F)$, $E_{M_k}(\phi\,F)$, $D_{\pm,M_k}(\phi\,F)$.- 5. Vector-Valued Ultra-Distributions of Class $M_k$; Generalizations.- 5.1 Recapitulation on Vector-Valued Distributions.- 5.2 The Space $D'_{M_k}(\phi\,F)$.- 5.3 The Space $D'_{\pm,M_k}(\phi\,F)$.- 5.4 Vector-Valued Ultra-Distributions of Beurling-Type.- 5.5 The Particular Case: F = Banach Space.- 6. Comments.- 8 Elliptic Boundary Value Problems in Spaces of Distributions and Ultra-Distributions.- 1. Regularity of Solutions of Elliptic Boundary Value Problems in Spaces of Analytic Functions and of Class $M_k$; Statement of the Problems and Results.- 1.1 Recapitulation on Elliptic Boundary Value Problems.- 1.2 Statement of the $M_k$-Regularity Results.- 1.3 Reduction of the Problem to the Case of the Half-Ball.- 2. The Theorem on "Elliptic Iterates": Proof.- 2.1 Some Lemmas.- 2.2 The Preliminary Estimate.- 2.3 Bounds for the Tangential Derivatives.- 2.4 Bounds for the Normal Derivatives.- 2.5 Proof of Theorem 1.3.- 2.6 Complements and Remarks.- 3. Application of Transposition; Existence of Solutions in the Space $D'(\Omega)$ of Distributions.- 3.1 Generalities.- 3.2 Choice of the Form L; the Space $\Xi(\Omega)$ and its Dual.- 3.3 Final Choice of the Form L; the Space Y.- 3.4 Density Theorem.- 3.5 Trace Theorem and Green's Formula in Y.- 3.6 The Existence of Solutions in the Space Y.- 3.7 Continuity of Traces on Surfaces Neighbouring ?.- 4. Existence of Solutions in the Space $D'_{M_k}(\Omega)$ of Ultra-Distributions.- 4.1 Generalities.- 4.2 The Space $\Xi_{M_k}(\Omega)$ and its Dual.- 4.3 The Space $Y_{M_k}$ and the Existence of Solutions in $Y_{M_k}$.- 4.4 Application to the Regularity in the Interior of Ultra-Distribution Solutions of the Equation Au = f.- 5. Comments.- 6. Problems.- 9 Evolution Equations in Spaces of Distributions and Ultra-Distributions.- 1. Regularity Results.
Equations of the First Order in t.- 1.1 Orientation and Notation.- 1.2 Regularity in the Spaces $D_+$.- 1.3 Regularity in the Spaces $D_{+,M_k}$.- 1.4 Regularity in Beurling Spaces.- 1.5 First Applications.- 2. Equations of the Second Order in t.- 2.1 Statement of the Main Results.- 2.2 Proof of Theorem 2.1.- 2.3 Proof of Theorem 2.2.- 3. Singular Equations of the Second Order in t.- 3.1 Statement of the Main Results.- 3.2 Proof of Theorem 3.1.- 4. Schroedinger-Type Equations.- 4.1 Statement of the Main Results.- 4.2 Proof of Theorem 4.1.- 4.3 Proof of Theorem 4.2.- 5. Stability Results in $M_k$-Classes.- 5.1 Parabolic Regularization.- 5.2 Approximation by Systems of Cauchy-Kowaleska Type (I).- 5.3 Approximation by Systems of Cauchy-Kowaleska Type (II).- 6. Transposition.- 6.1 Orientation.- 6.2 The Parabolic Case.- 6.3 The Second Order in t Case and the Schroedinger Case.- 7. Semi-Groups.- 7.1 Orientation.- 7.2 The Space of Vectors of Class $M_k$.- 7.3 The Semi-Group G in the Spaces $D(A?\ M_k)$. Applications.- 7.4 The Transposed Settings. Applications.- 7.5 Another $M_k$-Regularity Result.- 8. $M_k$-Classes and Laplace Transformation.- 8.1 Orientation; Hypotheses.- 8.2 $M_k$-Regularity Result.- 8.3 Transposition.- 9. General Operator Equations.- 9.1 General Results.- 9.2 Application: Periodic Problems.- 9.3 Transposition.- 10. The Case of a Finite Interval ]0, T[.- 10.1 Orientation; General Problems.- 10.2 Space Described by v(0) as v Describes X.- 10.3 The Space $\Xi_{M_k}$.- 10.4 Choice of L.- 10.5 The Space Y and Trace Theorems.- 10.6 Non-Homogeneous Problems.- 11. Distribution and Ultra-Distribution Semi-Groups.- 11.1 Distribution Semi-Groups.- 11.2 Ultra-Distribution Semi-Groups.- 12. A General Local Existence Result.- 12.1 Statement of the Result.- 12.2 Examples.- 13. Comments.- 14. Problems.- 10 Parabolic Boundary Value Problems in Spaces of Ultra-Distributions.- 1. Regularity in the Interior of Solutions of Parabolic Equations.- 1.1 The Hypoellipticity of Parabolic Equations.- 1.2 The Regularity in the Interior in Gevrey Spaces.- 2. The Regularity at the Boundary of Solutions of Parabolic Boundary Value Problems.- 2.1 The Regularity in the Space $D(\bar Q)$.- 2.2 The Regularity in Gevrey Spaces.- 3. Application of Transposition: The Finite Cylinder Case.- 3.1 The Existence of Solutions in the Space $D'(Q)$: Generalities, the Spaces X and Y.- 3.2 Space Described by ?v as v Describes X.- 3.3 Trace and Existence Theorems in the Space Y.- 3.4 The Existence of Solutions in the Spaces $D'_{s,r}(Q)$ of Gevrey Ultra-Distributions, with $r>1$, $s\ge 2m$.- 4. Application of Transposition: The Infinite Cylinder Case.- 4.1 The Existence of Solutions in the Space $D'_-(\mathbb{R};\,D'(?))$: The Space $X_-$.- 4.2 The Existence of Solutions in the Space $D'_+(\mathbb{R};\,D'(?))$: The Space $Y_+$ and the Trace and Existence Theorems.- 4.3 The Existence of Solutions in the Spaces $D'_{+,s}(\mathbb{R};\,D'_r(?))$, with $r>1$, $s\ge 2m$.- 4.4 Remarks on the Existence of Solutions and the Trace Theorems in other Spaces of Ultra-Distributions.- 5. Comments.- 6. Problems.- 11 Evolution Equations of the Second Order in t and of Schroedinger Type.- 1. Equations of the Second Order in t; Regularity of the Solutions of Boundary Value Problems.- 1.1 The Regularity in the Space $D(\bar Q)$.- 1.2 The Regularity in Gevrey Spaces.- 2.
Equations of the Second Order in t; Application of Transposition and Existence of Solutions in Spaces of Distributions.- 2.1 Generalities.- 2.2 The Space $D_{-,\gamma}([0,T];\,D_\gamma(\bar\Omega))$ and its Dual.- 2.3 The Spaces X and Y.- 2.4 Study of the Operator ?.- 2.5 Trace and Existence Theorems in the Space Y.- 2.6 Complements on the Trace Theorems.- 2.7 The Infinite Cylinder Case.- 3. Equations of the Second Order in t; Application of Transposition and Existence of Solutions in Spaces of Ultra-Distributions.- 3.1 The Difficulties in the Finite Cylinder Case.- 3.2 The Infinite Cylinder Case for m > 1.- 4. Schroedinger Equations; Complements for Parabolic Equations.- 4.1 Regularity Results for the Schroedinger Equation.- 4.2 The Non-Homogeneous Boundary Value Problems for the Schroedinger Equation.- 4.3 Remarks on Parabolic Equations.- 5. Comments.- 6. Problems.- Appendix. Calculus of Variations in Gevrey-Type Spaces.
5,794 citations

Journal ArticleDOI, Antonio Ambrosetti
TL;DR: In this paper, general existence theorems for critical points of a continuously differentiable functional I on a real Banach space are given for the case in which I is even.
Abstract: This paper contains some general existence theorems for critical points of a continuously differentiable functional I on a real Banach space. The strongest results are for the case in which I is even. Applications are given to partial differential and integral equations.
3,777 citations

Book, 01 May 1980
TL;DR: In this article, spectral perturbation of spectral families and applications to self-adjoint eigenvalue problems are discussed, as well as the Trotter-Kato theorem and related topics.
Abstract: Distributions and Sobolev spaces.- Operators in Banach spaces.- Examples of boundary value problems.- Semigroups and Laplace transform.- Homogenization of second order equations.- Homogenization in elasticity and electromagnetism.- Fluid flow in porous media.- Vibration of mixtures of solids and fluids.- Examples of perturbations for elliptic problems.- The Trotter-Kato theorem and related topics.- Spectral perturbation: case of isolated eigenvalues.- Perturbation of spectral families and applications to selfadjoint eigenvalue problems.- Stiff problems in constant and variable domains.- Averaging and two-scale methods.- Generalities and potential method.- Functional methods.- Scattering problems depending on a parameter.
3,326 citations

Journal ArticleDOI
TL;DR: In this paper, a characterization of compact sets in $L^p(0,T;B)$ is given, where $1\le p\le\infty$ and B is a Banach space.
Abstract: A characterization of compact sets in $L^p(0,T;B)$ is given, where $1\le p\le\infty$ and B is a Banach space. For the existence of solutions in nonlinear boundary value problems by the compactness method, the point is to obtain compactness in a space $L^p(0,T;B)$ from estimates with values in some spaces X, Y or B where $X\subset B\subset Y$ with compact imbedding $X\to B$. Using the present characterization for this kind of situation, sufficient conditions for compactness are given with optimal parameters. As an example, it is proved that if $\{f_n\}$ is bounded in $L^q(0,T;B)$ and in $L^1_{loc}(0,T;X)$ and if $\{\partial f_n/\partial t\}$ is bounded in $L^1_{loc}(0,T;Y)$, then $\{f_n\}$ is relatively compact in $L^p(0,T;B)$ for all $p<q$.
3,291 citations

Journal ArticleDOI
TL;DR: In this article, the authors consider linear equations y = Φx where y is a given vector in $\mathbb{R}^n$ and Φ is a given n × m matrix with n < m, and show there is ρ > 0 so that for large n and for all Φ's except a negligible fraction, the solution $x_1$ of the $\ell_1$-minimization problem is unique and equal to $x_0$.
Abstract: We consider linear equations y = Φx where y is a given vector in $\mathbb{R}^n$ and Φ is a given n × m matrix with n < m. We show there is ρ > 0 so that for large n and for all Φ's except a negligible fraction, the following property holds: for every y having a representation y = Φx0 by a coefficient vector $x_0\in\mathbb{R}^m$ with fewer than ρ · n nonzeros, the solution $x_1$ of the $\ell_1$-minimization problem is unique and equal to $x_0$. In contrast, heuristic attempts to sparsely solve such systems (greedy algorithms and thresholding) perform poorly in this challenging setting. The techniques include the use of random proportional embeddings and almost-spherical sections in Banach space theory, and deviation bounds for the eigenvalues of random Wishart matrices. © 2006 Wiley Periodicals, Inc.
2,580 citations

##### Network Information

###### Related Topics (5)

Operator theory: 18.2K papers, 441.4K citations, 94% related
Sobolev space: 19.8K papers, 426.2K citations, 92% related
Uniqueness: 40.1K papers, 670K citations, 89% related
Hilbert space: 29.7K papers, 637K citations, 88% related
Stochastic partial differential equation: 21.1K papers, 707.2K citations, 88% related

##### Performance

###### Metrics

No. of papers in the topic in previous years:

| Year | Papers |
|------|--------|
| 2022 | 36 |
| 2021 | 1,164 |
| 2020 | 1,253 |
| 2019 | 1,181 |
| 2018 | 1,124 |
| 2017 | 1,132 |
https://gamedev.stackexchange.com/questions/102214/pathfinding-with-inertia
# Pathfinding with inertia

I'm currently working on pathfinding for a game where units are moving, but they have inertia. Most typical pathfinding algorithms (A*, Dijkstra, etc.) are simply designed to minimize the length of the path. However, these techniques do not apply, as far as I know, to instances where the unit has inertia. If the unit has inertia, then there is a significant difference in the cost to leave a tile in a particular direction based on the direction you want to go. For example, the cost of leaving a tile proceeding North is significantly higher if you entered the tile from the East than if you entered from the South. (In the former example, you would have to slow down to halt your East-West velocity, while in the latter, you could go straight through.) The fact that the system has inertia means that in order to make a turn, you may have to slow down well in advance of making the turn.

My best thought to date is that you calculate the additional time it would take to slow down, and then add it to the heuristic cost of moving. However, this would seem to imply that you could never add a tile to the closed list, as entering from another direction could fundamentally change the cost of moving. In addition, the concept of using a grid is an abstraction anyway, because both position and velocity are floating-point concepts.

Is there some algorithm that could handle pathfinding on an open plane with inertia better than A*, or what modifications could I make to a pre-existing algorithm to make it suitable to this kind of motion?

• Couldn't you try to find a way to give a value to the cost incurred by inertia and add it to your pathfinding algorithms? From what I recall, they are based on cost from traversing a node graph, so inertia could serve as weight? – Vaillancourt Jun 9 '15 at 21:51

The only new constraint that inertia places on path-finding is continuity, which means no sudden breaks in velocity. Start by generating an A* path, but with a big twist. The reason A* by itself is not appropriate is because it violates continuity, so let's make a new one. A* chooses the best path as the shortest path, but with inertia the shortest path is no longer the fastest path. The fastest path is going to be the one that gives the shortest path without breaking continuity. Normal A* allows each iteration to "jump" to an adjacent tile, starting with the lowest cost. Instead of allowing only adjacent tiles, we will allow any tile that is within our range of motion for the next turn. This means the only tiles we can choose next are the tiles that would require us to change momentum by an amount the object physically can.

TLDR: we shift the choices of A* by our momentum, and each move updates that momentum.

• Wouldn't that mean that we could never stop travelling in the direction that we started out going in? For example, if you started accelerating left, and you continued that for several tiles, you couldn't stop in just 1 tile. You would have to continually slow down over the course of several tiles. – Stack Tracer Jun 9 '15 at 22:10
• @StackTracer If inertia is implemented then yes, slow down must be continuous unless some other form of slowdown is added. – newton1212 Jun 9 '15 at 22:13

Pathfinding algorithms like A* can deal with inertia (or any other dimension you can throw at it) just fine. The key is to treat them as an additional dimension, and create a higher-dimensional search graph to search in (a minimal sketch of this idea appears at the end of this answer).
To keep things simple, let's suppose we have only two speeds: slow and fast, and this path:

A --(sharp turn)-- B ----------- C --(ravine)-- D

To execute the sharp turn AB, we need to be slow; to jump the ravine, we need to be fast, and we can only change speeds on paths. Here's the resulting search graph:

```
(fast): A       B ----> C ----> D
                 \     ^
                  \   /
                   \ /
                    X
                   / \
                  /   \
                 v     \
(slow): A ----> B ----> C       D
```

So you can see, the only path from A to D in this case is by A to B slow, B to C speeding up to fast, and C to D fast.

Path cost is also easy: it depends on the speed. So if we arbitrarily decide that the cost of fast-fast is 1, fast-slow or slow-fast is 2, and slow-slow is 3, the cost of A->D is 3 + 2 + 1 = 6.

The problem, as you may have guessed, is that A* operates on graphs, and not on continuous ranges. That is, you need to come up with discrete speeds like I did with slow/fast, and each additional speed level will multiply the size of your search graph. The more physically demanding your game, the more speed levels you'll need, and the more costly pathfinding will be; at worst it will be too expensive for games. If it is, then you have some other options:

• Make your AI cheat so it can fudge some paths, for example being able to make a turn even if it's going slightly too fast. This means you can get away with fewer speed levels in your A* search graph.
• Similar to racing games, pre-calculate ideal path curves, and have your AI simply navigate to the best node on that curve and continue along it.
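Here is the promised sketch of the state-space idea, with velocity folded into the A* node (our own illustration of the answers above; it discretizes velocity, allows acceleration of at most one cell per turn per axis, and, as a simplification, checks passability only at the destination cell):

```python
import heapq

def astar_with_inertia(start, goal, passable, vmax=2):
    """A* where a state is (pos, vel); each step changes velocity by at most
    1 per axis (the continuity constraint), then moves by the new velocity."""
    def h(p):  # admissible: at best we close 2*vmax Manhattan cells per step
        return (abs(p[0] - goal[0]) + abs(p[1] - goal[1])) / (2 * vmax)

    start_state = (start, (0, 0))
    frontier, g = [(h(start), start_state)], {start_state: 0}
    while frontier:
        _, (pos, vel) = heapq.heappop(frontier)
        if pos == goal:                      # stopping velocity not enforced here
            return g[(pos, vel)]
        for dvx in (-1, 0, 1):
            for dvy in (-1, 0, 1):
                nv = (max(-vmax, min(vmax, vel[0] + dvx)),
                      max(-vmax, min(vmax, vel[1] + dvy)))
                npos = (pos[0] + nv[0], pos[1] + nv[1])
                if not passable(npos):
                    continue
                s = (npos, nv)
                if g[(pos, vel)] + 1 < g.get(s, float("inf")):
                    g[s] = g[(pos, vel)] + 1
                    heapq.heappush(frontier, (g[s] + h(npos), s))
    return None

# e.g. on an open 20x20 grid:
# astar_with_inertia((0, 0), (19, 19), lambda p: 0 <= p[0] < 20 and 0 <= p[1] < 20)
```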
http://pi-virtualworld.blogspot.in/2014/04/markov-chain-implementation-in-c-using.html
# Markov Chains

#### Introduction

In this article we will look at Markov models and their application to the classification of discrete sequential data.

• Markov processes are examples of stochastic processes that generate random sequences of outcomes or states according to certain probabilities.
• Markov chains can be considered mathematical descriptions of Markov models with a discrete set of states.
• Markov chains are integer-time processes $\{X_n, n\ge 0\}$ for which each random variable $X_n$ is integer valued and depends on the past only through the most recent random variable $X_{n-1}$, for all integers $n\geq 1$.
• $\{X_n, n\in\mathbb{N}\}$ is a discrete Markov chain on the state space $S=\{1,\ldots,M\}$.
• At each time instant $t$, the system changes state and makes a transition.
• Markov chains satisfy the Markov and stationarity properties.
• For a first-order Markov chain, the Markov property states that the state of the system at time $t+1$ depends only on the state of the system at time $t$. The Markov chain is also said to be memoryless due to this property.
\begin{eqnarray*}
Pr(X_{t+1} = x_{t+1} \,|\, X_1 = x_1, \ldots, X_t = x_t) = Pr(X_{t+1} = x_{t+1} \,|\, X_t = x_t)
\end{eqnarray*}
• A stationarity assumption is also made, which implies that the Markov property is independent of time:
\begin{eqnarray*}
Pr(X_{t+1} = x_j \,|\, X_t = x_i) = P_{i,j} \quad \text{for all $t$ and all $i,j \in \{1,\ldots,M\}$}
\end{eqnarray*}
• Thus we are looking at processes whose sample functions are sequences of integers between $1$ and $M$.
• A Markov process is therefore parameterized by the transition probabilities $P_{ij}$ and the initial probabilities $\pi_i$.
• Markov chains can be represented by directed graphs, where each state is represented by a node and a directed arc represents a non-zero transition probability.
• If a Markov chain has $M$ states, then the transition probabilities can be represented by an $M\times M$ matrix:
\begin{eqnarray*}
&T =\begin{bmatrix} P_{11} & P_{12} & \ldots & P_{1M} \\ P_{21} & P_{22} & \ldots & P_{2M} \\ \vdots & & & \vdots \\ P_{M1} & P_{M2} & \ldots & P_{MM} \\ \end{bmatrix} \\
&\sum_{j} P_{ij} = 1
\end{eqnarray*}
• The matrix $T$ is a stochastic matrix: the elements in each row sum to 1.
• This expresses that a transition must occur from the present state to one of the $M$ states.
• The probability of a sequence being generated by the Markov chain is given by
\begin{eqnarray*}
P(X) = \pi(x_0)\prod_{t=1}^T p(x_t \,|\, x_{t-1}),
\end{eqnarray*}
where $p(x_t\,|\,x_{t-1})$ is the probability of observing $x_t$ at time instant $t$ given that the state at time $t-1$ was $x_{t-1}$.
• Let us consider two models with the following initial probability vectors and transition matrices:
\begin{eqnarray*}
& \pi_1 = \begin{bmatrix} 1 & 0 & 0 \end{bmatrix},
& T_1 = \begin{bmatrix} 0.6 & 0.4 & 0 \\ 0.3 & 0.3 & 0.4 \\ 0.4 & 0.1 & 0.5 \end{bmatrix},
\qquad
& \pi_2 = \begin{bmatrix} 0.1 & 0.5 & 0.4 \end{bmatrix},
& T_2 = \begin{bmatrix} 0.9 & 0.05 & 0.05 \\ 0.3 & 0.1 & 0.6 \\ 0.3 & 0.5 & 0.2 \end{bmatrix}
\end{eqnarray*}

#### Sequence Classification

• The sequences generated from these two Markov chains (each a single length-10 sequence, written over two rows):
\begin{eqnarray*}
& S_1= \begin{bmatrix} 1 & 2 & 1 & 2 & 3 \\ 3 & 3 & 2 & 1 & 1 \end{bmatrix},
\qquad
& S_2= \begin{bmatrix} 3 & 2 & 1 & 1 & 1 \\ 1 & 1 & 1 & 1 & 1 \end{bmatrix}
\end{eqnarray*}
• In sequence 1, since $P_{13}=0$ in the first model, we never observe a transition from state 1 to state 3.
• In sequence 2, since the dominant transition of the second model is $1\to 1$ ($P_{11}=0.9$), we observe a long run of 1's.
• We can also compute the probability that a sequence was generated by a given Markov process.
• Sequence 1 has probability $8.64 \times 10^{-5}$ under the first model and $2.43 \times 10^{-7}$ under the second model.
• Sequence 2 has probability 0 under the first model (its initial state 3 has $\pi_1(3)=0$) and approximately $0.0287$ under the second model.
• Thus, if we have a sequence and know it was generated by one of the two models, we can always predict which model generated it by choosing the model that assigns the sequence the maximum probability.
• Thus we can use Markov chains for sequence modelling and classification.

#### Generating a Sequence

• The idea behind generating a sequence from a Markov process is to use a uniform random number generator.
• Given a row of the initial probability vector or the transition matrix, a state is selected by comparing a uniform random value against the cumulative probabilities of that row.
• For example, if the row contains the values $[0.6, 0.4, 0]$:
• If the uniform random value falls between 0 and 0.6, the first state is returned.
• If the uniform random value falls between 0.6 and 1, the second state is returned.
• The first step is to use the above method to select an initial state, by passing the initial probability vector as input.
• The next state is then selected the same way from the row of the transition probability matrix corresponding to the current state.

#### Code

The code can be found at https://github.com/pi19404/OpenVision in the files ImgML/markovchain.cpp and ImgML/markovchain.hpp. A simplified standalone sketch follows below.
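To complement the repository code, here is a minimal self-contained sketch (not the OpenVision implementation; it uses 0-based state indices and plain standard-library containers rather than OpenCV/Eigen types) that scores the first example sequence under both models and generates a fresh sequence by the cumulative-probability method described above:

```cpp
// Minimal standalone sketch (not the OpenVision implementation): score a
// sequence under a Markov chain and generate a sequence by inverse-CDF
// sampling. States are 0-based here; the article's states 1..3 map to 0..2.
#include <cmath>
#include <cstdio>
#include <random>
#include <vector>

// Draw an index from a discrete distribution with one uniform sample by
// walking the cumulative probabilities of the row.
int sampleIndex(const std::vector<double>& probs, std::mt19937& rng) {
    std::uniform_real_distribution<double> unif(0.0, 1.0);
    double u = unif(rng), cum = 0.0;
    for (std::size_t i = 0; i < probs.size(); ++i) {
        cum += probs[i];
        if (u < cum) return static_cast<int>(i);
    }
    return static_cast<int>(probs.size()) - 1;  // guard against rounding error
}

// Log-probability of a state sequence under initial vector pi and matrix T.
double logProb(const std::vector<int>& seq, const std::vector<double>& pi,
               const std::vector<std::vector<double>>& T) {
    double lp = std::log(pi[seq[0]]);
    for (std::size_t t = 1; t < seq.size(); ++t)
        lp += std::log(T[seq[t - 1]][seq[t]]);
    return lp;
}

int main() {
    std::vector<double> pi1 = {1.0, 0.0, 0.0};
    std::vector<std::vector<double>> T1 = {
        {0.6, 0.4, 0.0}, {0.3, 0.3, 0.4}, {0.4, 0.1, 0.5}};
    std::vector<double> pi2 = {0.1, 0.5, 0.4};
    std::vector<std::vector<double>> T2 = {
        {0.9, 0.05, 0.05}, {0.3, 0.1, 0.6}, {0.3, 0.5, 0.2}};

    // Sequence S1 from the article, shifted to 0-based states.
    std::vector<int> s1 = {0, 1, 0, 1, 2, 2, 2, 1, 0, 0};
    std::printf("P(S1 | model 1) = %g\n", std::exp(logProb(s1, pi1, T1)));
    std::printf("P(S1 | model 2) = %g\n", std::exp(logProb(s1, pi2, T2)));

    // Generate a fresh length-10 sequence from model 2.
    std::mt19937 rng(42);
    int state = sampleIndex(pi2, rng);
    std::printf("generated:");
    for (int t = 0; t < 10; ++t) {
        std::printf(" %d", state + 1);  // print 1-based, as in the article
        state = sampleIndex(T2[state], rng);
    }
    std::printf("\n");
    return 0;
}
```

It should print $P(S_1 \mid \text{model 1}) = 8.64\times10^{-5}$ and $P(S_1 \mid \text{model 2}) = 2.43\times10^{-7}$, matching the values above; the classifier simply picks the model with the larger (log-)probability.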
2017-10-22 04:20:10
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.9997871518135071, "perplexity": 361.38807799730586}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2017-43/segments/1508187825141.95/warc/CC-MAIN-20171022041437-20171022061437-00076.warc.gz"}
https://grantmcdermott.com/2012/10/08/climate-economy-and-ladies/
### Grant McDermott
Assistant Professor
Dept. of Economics
University of Oregon

# Climate, economy and ladies

As a postscript to the previous entry, here's a quick story about a newspaper interview that I had last week. It was with one of the major broadsheets of the region and related to the launch of our new website.

The interview itself went pretty well, I thought. The journalist was mostly interested in discussing our aims, as well as how we perceive the public's general understanding of environmental issues from an economic perspective. At one point, he asked the inevitable question of how I ended up in Scandinavia all the way from Cape Town. I told him that it was mostly down to my interests in these very issues. You'd be hard pressed to find a country that has a better track record of managing its natural resources than Norway. It didn't hurt that I was also lucky enough to receive some generous funding offers.[*]

However, I went on to tell him a joke that I had heard from another Southern Hemisphere expat upon arrival, which is that people like us usually find ourselves in Norway for one of two reasons: oil or women. It was a throwaway line of course (and quite obviously a jape), and I didn't think much more of it... I suppose it reflects my media naiveté then that I was surprised[**] by the headline that ran above my interview the next day: "Climate, economy and ladies".

___
[*] E.g. For those of you thinking about doing a PhD - but can't bear the thought of scraping by on a measly tuition stipend for four/five years - consider this: Doing a PhD in Norway is treated as a job and you are paid accordingly. That is, your salary has to be somewhat comparable with what a Master's graduate could typically earn outside of academia. Accepted PhD candidates are thus awarded a "research scholarship" which currently amounts to around US$71,000 per annum...

[**] Mind you, probably not as surprised as my (non-Norwegian) girlfriend.
2019-08-18 11:16:55
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 1, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.35687455534935, "perplexity": 1818.5785524028058}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.3, "absolute_threshold": 20, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2019-35/segments/1566027313803.9/warc/CC-MAIN-20190818104019-20190818130019-00094.warc.gz"}
http://www.chegg.com/homework-help/questions-and-answers/consider-the-following-information-rate-of-return-if-state-occurs-state-of-probability-of--q3394182
## 11.3

Consider the following information.

| State of Economy | Probability of State of Economy | Rate of Return if State Occurs: Stock A | Rate of Return if State Occurs: Stock B |
|---|---|---|---|
| Recession | .20 | .01 | –.25 |
| Normal | .55 | .09 | .15 |
| Boom | .25 | .14 | .38 |

Requirement 1: Calculate the expected return for the two stocks. (Do not include the percent signs (%). Round your answers to 2 decimal places (e.g., 32.16).)

Expected return: E(RA) ____ %, E(RB) ____ %

Requirement 2: Calculate the standard deviation for the two stocks. (Do not include the percent signs (%). Round your answers to 2 decimal places (e.g., 32.16).)

Standard deviation: σA ____ %, σB ____ %
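A sketch of the computation the question asks for, taking the probabilities and returns from the table above: the expected return is $E(R) = \sum_s p_s r_s$ and the standard deviation is $\sigma = \sqrt{\sum_s p_s (r_s - E(R))^2}$.

```cpp
// Expected return and standard deviation of each stock over the three
// economic states (probabilities and returns copied from the table above).
#include <cmath>
#include <cstdio>

int main() {
    const double p[3]  = {0.20, 0.55, 0.25};   // recession, normal, boom
    const double rA[3] = {0.01, 0.09, 0.14};
    const double rB[3] = {-0.25, 0.15, 0.38};

    double eA = 0, eB = 0;                     // expected returns
    for (int s = 0; s < 3; ++s) { eA += p[s] * rA[s]; eB += p[s] * rB[s]; }

    double vA = 0, vB = 0;                     // probability-weighted variances
    for (int s = 0; s < 3; ++s) {
        vA += p[s] * (rA[s] - eA) * (rA[s] - eA);
        vB += p[s] * (rB[s] - eB) * (rB[s] - eB);
    }
    std::printf("E(RA) = %.2f%%  E(RB) = %.2f%%\n", 100 * eA, 100 * eB);
    std::printf("sigmaA = %.2f%%  sigmaB = %.2f%%\n",
                100 * std::sqrt(vA), 100 * std::sqrt(vB));
    return 0;
}
```

This prints E(RA) = 8.65%, E(RB) = 12.75%, σA ≈ 4.35%, and σB ≈ 21.15%.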
2013-05-26 01:26:08
{"extraction_info": {"found_math": false, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.8904939293861389, "perplexity": 2522.1660856256267}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2013-20/segments/1368706477730/warc/CC-MAIN-20130516121437-00050-ip-10-60-113-184.ec2.internal.warc.gz"}
https://www.transtutors.com/questions/renee-manufactured-and-sold-a-gadget-a-specialized-asset-used-by-auto-manufacturers--2568251.htm
Renee manufactured and sold a "gadget," a specialized asset used by auto manufacturers that qualifies for the domestic production activities deduction. Renee incurred $18,300 in direct expenses in the project, which includes $4,000 of wages Renee paid to employees in the manufacturing of the gadget. What is Renee's domestic production activities deduction for the gadget in each of the following alternative scenarios?

a. Renee sold the gadget for $25,700 and she reported AGI of $93,700 before considering the manufacturing deduction.

b. Renee sold the gadget for $31,000 and she reported AGI of $6,200 before considering the manufacturing deduction.

c. Renee sold the gadget for $48,200 and she reported AGI of $65,500 before considering the manufacturing deduction.
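A sketch of the mechanics this question appears to assume under the former Section 199 domestic production activities deduction: 9% of the lesser of qualified production activities income (QPAI) or AGI, capped at 50% of allocable W-2 wages. This is my reading of the rule, not verified tax guidance; check the rules for the relevant tax year.

```cpp
// Hedged sketch of the assumed Section 199 DPAD mechanics:
// deduction = min(9% * min(QPAI, AGI), 50% * W-2 wages).
#include <algorithm>
#include <cstdio>

double dpad(double revenue, double expenses, double wages, double agi) {
    double qpai = revenue - expenses;               // qualified production activities income
    double tentative = 0.09 * std::min(qpai, agi);  // 9% of lesser of QPAI or AGI
    return std::max(0.0, std::min(tentative, 0.5 * wages));  // W-2 wage limit
}

int main() {
    std::printf("a: %.0f\n", dpad(25700, 18300, 4000, 93700));  // 9%% of 7,400 = 666
    std::printf("b: %.0f\n", dpad(31000, 18300, 4000, 6200));   // AGI-limited: 9%% of 6,200 = 558
    std::printf("c: %.0f\n", dpad(48200, 18300, 4000, 65500));  // wage-limited: 50%% of 4,000 = 2,000
    return 0;
}
```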
2018-06-21 12:18:22
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.3259047865867615, "perplexity": 10910.179777122163}, "config": {"markdown_headings": false, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2018-26/segments/1529267864148.93/warc/CC-MAIN-20180621114153-20180621134153-00379.warc.gz"}
http://bib-pubdb1.desy.de/collection/JournalArticle?ln=en
Journal Article

2019-08-22 11:16 [PUBDB-2019-03067] Journal Article
et al
Quantitative comparison of microfabric and magnetic fabric in black shales from the Appalachian plateau (western Pennsylvania, U.S.A.)
Tectonophysics 765, 161 - 171 (2019) [10.1016/j.tecto.2019.04.013]

2019-08-22 09:45 [PUBDB-2019-03065] Journal Article
et al
Correlation of surface pressure and hue of planarizable push–pull chromophores at the air/water interface
Beilstein journal of organic chemistry 13, 1099 - 1105 (2017) [10.3762/bjoc.13.109]
It is currently not possible to directly measure the lateral pressure of a biomembrane. Mechanoresponsive fluorescent probes are an elegant solution to this problem, but it first requires the establishment of a direct correlation between the membrane surface pressure and the induced color change of the probe [...]

2019-08-21 17:32 [PUBDB-2019-03062] Journal Article
Brezesinski, G.
The Influence of Calcium Traces in Ultrapure Water on the Lateral Organization in Tetramyristoyl Cardiolipin Monolayers
ChemPhysChem 20(11), 1521 - 1526 (2019) [10.1002/cphc.201900126]
Cardiolipin (CL) plays an important role in administering the structural organization of biological membranes and therefore helps maintain membrane functionality. CL has a dimeric structure consisting of four acyl chains and two phosphate groups [...]

2019-08-21 16:58 [PUBDB-2019-03061] Journal Article
et al
In situ formation of electronically coupled superlattices of $\mathrm{Cu_{1.1}S}$ nanodiscs at the liquid/air interface
Chemical communications 55(33), 4805 - 4808 (2019) [10.1039/C9CC01758E]
We report on the in situ monitoring of the formation of conductive superlattices of $\mathrm{Cu_{1.1}S}$ nanodiscs via cross-linking with semiconducting cobalt 4,4′,4′′,4′′′-tetraaminophthalocyanine (CoTAPc) molecules at the liquid/air interface by real-time grazing incidence small angle X-ray scattering (GISAXS). We determine the structure, symmetry and lattice parameters of the superlattices, formed during solvent evaporation and ligand exchange on the self-assembled nanodiscs [...]

2019-08-21 15:55 [PUBDB-2019-03060] Report/Journal Article
et al
Revealing Hidden Facts of Li Anode in Cycled Lithium–Oxygen Batteries through X-ray and Neutron Tomography [I-20170061]
ACS energy letters 4(1), 306 - 316 (2019) [10.1021/acsenergylett.8b02242]
The gap between the successful application and perspective promise of lithium–oxygen battery (LOB) technology should be filled by an in-depth and comprehensive understanding of the underlying working and degradation mechanisms. Herein, the correlation between the morphological evolution of the Li anode and the overall cell electrochemical performance of cycled LOBs has been revealed for the first time by complementary X-ray and neutron tomography, together with further postmortem scanning electron microscopy, X-ray diffraction, and Fourier transform infrared spectroscopy characterizations. [...]

2019-08-21 12:57 [PUBDB-2019-03054] Journal Article
et al
Signatures of structural differences in Pt–P- and Pd–P-based bulk glass-forming liquids
Communications Physics 2(1), 83 (2019) [10.1038/s42005-019-0180-2]
The structural differences between the compositionally related Pt–P- and Pd–P-based bulk glass-forming liquids are investigated in synchrotron X-ray scattering experiments. Although Pt and Pd are considered to be topologically equivalent in structural models, we show that drastic changes in the total structure factor and in the reduced pair distribution function are observed upon gradual substitution [...]

2019-08-21 10:48 [PUBDB-2019-03051] Report/Journal Article
et al
Search for pair production of Higgs bosons in the $b\bar{b}b\bar{b}$ final state using proton-proton collisions at $\sqrt{s} = 13$ TeV with the ATLAS detector [CERN-EP-2018-029; arXiv:1804.06174]
Journal of high energy physics 1901(01), 030 (2019) [10.1007/JHEP01(2019)030]
A search for Higgs boson pair production in the $b\overline{b}b\overline{b}$ final state is carried out with up to 36.1 fb$^{−1}$ of LHC proton-proton collision data collected at $\sqrt{s}=13$ TeV with the ATLAS detector in 2015 and 2016. Three benchmark signals are studied: a spin-2 graviton decaying into a Higgs boson pair, a scalar resonance decaying into a Higgs boson pair, and Standard Model non-resonant Higgs boson pair production. [...]

2019-08-21 08:47 [PUBDB-2019-03049] Journal Article
et al
Corrosion behavior of metal–composite hybrid joints: Influence of precipitation state and bonding zones
Corrosion science 158, 108075 (2019) [10.1016/j.corsci.2019.07.002]
The corrosion behavior of AA2024-T3/carbon-fiber-reinforced polyphenylene sulfide joints was investigated. The joints were exposed to salt spray from one to six weeks. [...]

2019-08-21 08:09 [PUBDB-2019-03048] Journal Article
et al
Structural analysis of ligand-bound states of the Salmonella type III secretion system ATPase InvC
Protein science n/a, n/a - n/a (2019) [10.1002/pro.3704]

2019-08-19 12:30 [PUBDB-2019-03034] Report/Journal Article
et al
Search for light resonances decaying to boosted quark pairs and produced in association with a photon or a jet in proton-proton collisions at $\sqrt{s}=13$ TeV with the ATLAS detector [arXiv:1801.08769; CERN-EP-2017-280]
Physics letters / B 788, 316 - 335 (2019) [10.1016/j.physletb.2018.09.062]
This Letter presents a search for new light resonances decaying to pairs of quarks and produced in association with a high-$p_T$ photon or jet. The dataset consists of proton–proton collisions with an integrated luminosity of 36.1 fb$^{−1}$ at a centre-of-mass energy of $\sqrt{s}=13$ TeV recorded by the ATLAS detector at the Large Hadron Collider. [...]
2019-08-25 12:06:27
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.6966632008552551, "perplexity": 6706.382133405903}, "config": {"markdown_headings": false, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2019-35/segments/1566027323328.16/warc/CC-MAIN-20190825105643-20190825131643-00121.warc.gz"}
https://www.albert.io/ie/ap-physics-1-and-2/impulse-momentum-theorem-final-speed-from-rest
# Impulse-Momentum Theorem: Final Speed from Rest

APPH12-BNU6FZ

A force acts on a $5.00\text{ kg}$ cart that starts at rest as shown in the graph below. What is the best estimate of the final speed of the cart after $10\text{ seconds}$ have passed?

[Force-versus-time graph not preserved in the extracted text.]

A $5.0\text{ m/s}$

B $7.5\text{ m/s}$

C $12.5\text{ m/s}$

D $37.5\text{ m/s}$
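Since the force-time graph did not survive extraction, the intended answer cannot be confirmed here, but the method the question tests is the impulse-momentum theorem. A hedged sketch of the reasoning, where the impulse $J$ stands in for the area under the missing graph:

$$J = \int_0^{10\,\text{s}} F\,dt = \Delta p = m\,v_f \quad\Longrightarrow\quad v_f = \frac{J}{m}$$

For example, if the area under the curve were a hypothetical $37.5\ \text{N·s}$, the final speed would be $37.5/5.00 = 7.5\ \text{m/s}$, matching choice B; the actual choice depends on the graph.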
2017-01-18 06:15:33
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.4488674998283386, "perplexity": 868.2402394861153}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2017-04/segments/1484560280239.54/warc/CC-MAIN-20170116095120-00060-ip-10-171-10-70.ec2.internal.warc.gz"}
https://www.physicsforums.com/threads/entangled-photon-polarization-correlation.564571/
# Entangled photon polarization correlation

1. Jan 2, 2012

### gespex

Hello everyone, I've got a quick question... Given two entangled photons going through polarization filters with relative angle a, what is the correlation between the two "answers" (whether each photon is blocked or let through)? I believe it's either cos(a) or cos^2(a), but I'm not sure which of the two.

If we do the test in sequence, with the same photon, then the correlation is cos^2(a), is that correct?

2. Jan 3, 2012

### Edgardo

If analyzer A is set at an angle $\alpha$ and analyzer B at an angle $\beta$, then the probability that both photons (of the entangled pair) pass the analyzers is:

$$P_{AB}(\alpha,\beta) = \frac{1}{2}\text{cos}^2(\alpha-\beta)$$

A derivation is given in Gregor Weihs' dissertation, see page 26, Eq (1.40) and (1.41). The setup is on page 25.

I'm not sure though what you mean with the second question.

3. Jan 3, 2012

### gespex

My second question was more a confirmation, as I'm quite sure about it. Let's say we have a photon that went through a polarization filter at 0 degrees, and we have a second polarization filter at $\alpha$ degrees; the chance it goes through the second polarization filter is:

$$\text{cos}^2 \alpha$$

Right? (I like those tex tags!)

4. Jan 3, 2012

### Edgardo

Yes, correct. This is Malus's law.

5. Jan 3, 2012

6. Jan 3, 2012

### StevieTNZ

I am confused by this whole entanglement thing - in one breath, when measurement occurs, both photons assume a definite polarisation and from this point on are no longer entangled. Yet, if we measure one photon before sending the other through a polariser oriented at a certain angle (rather than vertical or horizontal), we find the results are still correlated.

Or is it that, if the photon going through the 2nd polariser had a definite polarisation, even if it were not entangled, the pass/fail rate would still be the same as if it were entangled?

7. Jan 4, 2012

### DrChinese

We don't know the moment or mechanism by which entanglement ends. We can make these statements:

a) It is "as if" both take on a definite polarization when one takes on a definite polarization. There is no sense in which the ordering of that collapse matters.

b) When collapse occurs, it is not necessary that *all* entanglement ends, just on the related bases for the measurement. For example, they could remain frequency/momentum entangled even though they are no longer polarization entangled.
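To make the two formulas in this thread concrete, here is a small numeric check (a sketch; the angles and loop are illustrative, not taken from Weihs' dissertation) that tabulates the coincidence probability for an entangled pair alongside Malus's law for a single prepared photon:

```cpp
// Evaluate P_AB(alpha, beta) = 1/2 cos^2(alpha - beta) for entangled pairs
// and Malus's law cos^2(delta) for a single photon, over a few angle gaps.
#include <cmath>
#include <cstdio>

int main() {
    const double PI = std::acos(-1.0);
    const double deg = PI / 180.0;
    for (double d = 0.0; d <= 90.0; d += 22.5) {
        double c = std::cos(d * deg);
        double pab = 0.5 * c * c;   // coincidence probability, entangled pair
        double malus = c * c;       // single-photon transmission probability
        std::printf("delta = %5.1f deg   P_AB = %.4f   Malus = %.4f\n",
                    d, pab, malus);
    }
    return 0;
}
```

At delta = 0 the pair coincides with probability 1/2 (both pass the same setting half the time), while a single photon already aligned with the filter passes with probability 1, which is why the two formulas differ by the factor 1/2.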
2017-09-20 20:18:18
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 1, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.7819488048553467, "perplexity": 1071.5318243221272}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2017-39/segments/1505818687447.54/warc/CC-MAIN-20170920194628-20170920214628-00608.warc.gz"}
http://support.sas.com/documentation/cdl/en/statug/68162/HTML/default/statug_intromod_sect046.htm
# Introduction to Statistical Modeling with SAS/STAT Software

#### Residual Analysis

The model errors are unobservable. Yet important features of the statistical model are connected to them, such as the distribution of the data, the correlation among observations, and the constancy of variance. It is customary to diagnose and investigate features of the model errors through the fitted residuals

$$\hat{\epsilon} = \mathbf{y} - \widehat{\mathbf{y}} = (\mathbf{I} - \mathbf{H})\mathbf{y}$$

where $\mathbf{H} = \mathbf{X}(\mathbf{X}'\mathbf{X})^{-}\mathbf{X}'$ is the "hat" matrix. These residuals are projections of the data onto the orthogonal complement of the column space of $\mathbf{X}$ and are also referred to as the "raw" residuals to contrast them with other forms of residuals that are transformations of $\hat{\epsilon}$. For the classical linear model, the statistical properties of $\hat{\epsilon}$ are affected by the features of that projection and can be summarized as follows:

$$E[\hat{\epsilon}] = \mathbf{0}, \qquad \text{Var}[\hat{\epsilon}] = \sigma^2(\mathbf{I} - \mathbf{H})$$

Furthermore, if $\boldsymbol{\epsilon} \sim N(\mathbf{0}, \sigma^2\mathbf{I})$, then $\hat{\epsilon} \sim N(\mathbf{0}, \sigma^2(\mathbf{I} - \mathbf{H}))$. Because $\widehat{\mathbf{y}} = \mathbf{H}\mathbf{y}$, and the "hat" matrix determines how strongly each observation pulls on its own fitted value, the hat matrix is also the leverage matrix of the model. If $h_{ii}$ denotes the $i$th diagonal element of $\mathbf{H}$ (the leverage of observation $i$), then the leverages are bounded in a model with intercept, $1/n \le h_{ii} \le 1$. Consequently, the variance of a raw residual is less than that of an observation: $\text{Var}[\hat{\epsilon}_i] = \sigma^2(1 - h_{ii}) < \sigma^2$.

In applications where the variability of the data is estimated from fitted residuals, the estimate is invariably biased low. An example is the computation of an empirical semivariogram based on fitted (detrended) residuals. More important, the diagonal entries of $\text{Var}[\hat{\epsilon}]$ are not necessarily identical; the residuals are heteroscedastic. The "hat" matrix is also not a diagonal matrix; the residuals are correlated. In summary, the only property that the fitted residuals share with the model errors is a zero mean. It is thus commonplace to use transformations of the fitted residuals for diagnostic purposes.

##### Raw and Studentized Residuals

A standardized residual is a raw residual that is divided by its standard deviation:

$$\hat{\epsilon}_i^{\,std} = \frac{\hat{\epsilon}_i}{\sigma\sqrt{1 - h_{ii}}}$$

Because $\sigma$ is unknown, residual standardization is usually not practical. A studentized residual is a raw residual that is divided by its estimated standard deviation. If the estimate of the standard deviation is based on the same data that were used in fitting the model, the residual is also called an internally studentized residual:

$$r_i = \frac{\hat{\epsilon}_i}{\hat{\sigma}\sqrt{1 - h_{ii}}}$$

If the estimate of the residual's variance does not involve the $i$th observation, it is called an externally studentized residual. Suppose that $\hat{\sigma}^2_{(i)}$ denotes the estimate of the residual variance obtained without the $i$th observation; then the externally studentized residual is

$$t_i = \frac{\hat{\epsilon}_i}{\hat{\sigma}_{(i)}\sqrt{1 - h_{ii}}}$$

##### Scaled Residuals

A scaled residual is simply a raw residual divided by a scalar quantity that is not an estimate of the variance of the residual. For example, residuals divided by the standard deviation of the response variable are scaled and referred to as Pearson or Pearson-type residuals:

$$\hat{\epsilon}_i^{\,P} = \frac{\hat{\epsilon}_i}{\sqrt{\text{Var}[Y_i]}}$$

In generalized linear models, where the variance of an observation is a function of the mean $\mu$ and possibly of an extra scale parameter, $\text{Var}[Y] = \phi\, V(\mu)$, the Pearson residual is

$$r_i^{\,P} = \frac{y_i - \hat{\mu}_i}{\sqrt{V(\hat{\mu}_i)}}$$

because the sum of the squared Pearson residuals equals the Pearson statistic:

$$X^2 = \sum_i \left(r_i^{\,P}\right)^2$$

When the scale parameter participates in the scaling, the residual is also referred to as a Pearson-type residual:

$$r_i^{\,P*} = \frac{y_i - \hat{\mu}_i}{\sqrt{\phi\, V(\hat{\mu}_i)}}$$

##### Other Residuals

You might encounter other residuals in SAS/STAT software. A "leave-one-out" residual is the difference between the observed value and the residual obtained from fitting a model in which the observation in question did not participate. If $\hat{y}_i$ is the predicted value of the $i$th observation and $\hat{y}_{(i)}$ is the predicted value if the $i$th observation is removed from the analysis, then the "leave-one-out" residual is

$$\hat{\epsilon}_{-i} = y_i - \hat{y}_{(i)}$$

Since the sum of the squared "leave-one-out" residuals is the PRESS statistic (prediction sum of squares; Allen 1974), $\text{PRESS} = \sum_i \hat{\epsilon}_{-i}^2$, $\hat{\epsilon}_{-i}$ is also called the PRESS residual. The concept of the PRESS residual can be generalized so that the deletion residual is based on the removal of sets of observations. In the classical linear model, the PRESS residual for case deletion has a particularly simple form:

$$\hat{\epsilon}_{-i} = \frac{\hat{\epsilon}_i}{1 - h_{ii}}$$

That is, the PRESS residual is simply a scaled form of the raw residual, where the scaling factor is a function of the leverage of the observation.

When data are correlated, $\text{Var}[\mathbf{Y}] = \mathbf{V}$, you can scale the vector of residuals rather than scale each residual separately. This takes the covariances among the observations into account. This form of scaling is accomplished by forming the Cholesky root $\mathbf{C}$, where $\mathbf{V} = \mathbf{C}\mathbf{C}'$ and $\mathbf{C}$ is a lower-triangular matrix. Then $\mathbf{C}^{-1}(\mathbf{Y} - E[\mathbf{Y}])$ is a vector of uncorrelated variables with unit variance. The Cholesky residuals in the model $\mathbf{Y} = \mathbf{X}\boldsymbol{\beta} + \boldsymbol{\epsilon}$ are

$$\hat{\epsilon}_c = \mathbf{C}^{-1}\left(\mathbf{y} - \mathbf{X}\widehat{\boldsymbol{\beta}}\right)$$

In generalized linear models, the fit of a model can be measured by the scaled deviance statistic $D^*$. It measures the difference between the log likelihood under the model and the maximum log likelihood that is achievable:

$$D^* = 2\left(l(\mathbf{y};\mathbf{y}) - l(\widehat{\boldsymbol{\mu}};\mathbf{y})\right)$$

In models with a scale parameter $\phi$, the deviance is $D = \phi\, D^*$. The deviance residuals are the signed square roots of the contributions to the deviance statistic:

$$r_i^{\,D} = \text{sign}(y_i - \hat{\mu}_i)\sqrt{d_i}, \qquad D = \sum_i d_i$$
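To illustrate the relationships above, here is a small sketch (in C++ rather than SAS, with made-up data) that computes raw, internally studentized, and PRESS residuals for a simple linear regression, using the closed-form leverage $h_{ii} = 1/n + (x_i - \bar{x})^2/S_{xx}$ that holds for a one-regressor model with intercept:

```cpp
// Raw, internally studentized, and PRESS residuals for simple linear
// regression y = b0 + b1*x, using closed-form leverages. Data are
// illustrative only.
#include <cmath>
#include <cstdio>

int main() {
    const int n = 5;
    double x[n] = {1, 2, 3, 4, 5};
    double y[n] = {2.1, 3.9, 6.2, 8.1, 9.8};

    double xbar = 0, ybar = 0;
    for (int i = 0; i < n; ++i) { xbar += x[i] / n; ybar += y[i] / n; }

    double sxx = 0, sxy = 0;
    for (int i = 0; i < n; ++i) {
        sxx += (x[i] - xbar) * (x[i] - xbar);
        sxy += (x[i] - xbar) * (y[i] - ybar);
    }
    double b1 = sxy / sxx, b0 = ybar - b1 * xbar;   // least-squares fit

    double sse = 0;
    for (int i = 0; i < n; ++i) {
        double e = y[i] - (b0 + b1 * x[i]);
        sse += e * e;
    }
    double s2 = sse / (n - 2);                      // sigma-hat^2 (2 parameters)

    for (int i = 0; i < n; ++i) {
        double h = 1.0 / n + (x[i] - xbar) * (x[i] - xbar) / sxx;  // leverage
        double e = y[i] - (b0 + b1 * x[i]);                        // raw residual
        double r = e / std::sqrt(s2 * (1.0 - h));   // internally studentized
        double press = e / (1.0 - h);               // PRESS (leave-one-out)
        std::printf("i=%d  e=%+.4f  r=%+.4f  press=%+.4f  h=%.3f\n",
                    i, e, r, press, h);
    }
    return 0;
}
```

Note how the end points carry the largest leverages, so their PRESS residuals are inflated the most relative to the raw residuals, exactly as the $\hat{\epsilon}_i/(1-h_{ii})$ formula predicts.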
2018-09-24 10:01:51
{"extraction_info": {"found_math": false, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.8972822427749634, "perplexity": 653.8248003028382}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 20, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2018-39/segments/1537267160337.78/warc/CC-MAIN-20180924090455-20180924110855-00327.warc.gz"}
https://omarcafini.info/relationship/weakest-correlation-relationship-examples.php
# Weakest correlation relationship examples

### Correlation Coefficient

As another example, these variables could also have a weak negative correlation. A coefficient of … means that for every unit change in variable B, variable A … Generally, the correlation coefficient of a sample is denoted by r, and the weakest linear relationship is indicated by a correlation coefficient equal to 0. We plot two variables to examine the relationship between the two, i.e., to see if they are correlated. Examples of negative, no and positive correlation, from "very weak" upward, are as follows.
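To make weak versus strong correlation concrete, here is a minimal sketch computing the sample correlation coefficient r for two illustrative data sets (the data are made up for demonstration):

```cpp
// Sample Pearson correlation coefficient r: values near +/-1 indicate a
// strong linear relationship, values near 0 a weak one.
#include <cmath>
#include <cstdio>

double pearson(const double* x, const double* y, int n) {
    double mx = 0, my = 0;
    for (int i = 0; i < n; ++i) { mx += x[i] / n; my += y[i] / n; }
    double sxy = 0, sxx = 0, syy = 0;
    for (int i = 0; i < n; ++i) {
        sxy += (x[i] - mx) * (y[i] - my);
        sxx += (x[i] - mx) * (x[i] - mx);
        syy += (y[i] - my) * (y[i] - my);
    }
    return sxy / std::sqrt(sxx * syy);
}

int main() {
    double x[]      = {1, 2, 3, 4, 5};
    double strong[] = {2.0, 4.1, 5.9, 8.2, 9.9};  // nearly linear: r close to +1
    double weak[]   = {3.0, 1.0, 4.0, 1.5, 3.5};  // little pattern: r near 0
    std::printf("strong: r = %.3f\n", pearson(x, strong, 5));
    std::printf("weak:   r = %.3f\n", pearson(x, weak, 5));
    return 0;
}
```

The first pair prints r ≈ 0.999 (very strong positive) and the second r ≈ 0.18, which would usually be labelled "very weak".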
2019-12-10 16:49:53
{"extraction_info": {"found_math": false, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.8098407983779907, "perplexity": 3404.7933596480575}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2019-51/segments/1575540528457.66/warc/CC-MAIN-20191210152154-20191210180154-00421.warc.gz"}
https://www.ademcetinkaya.com/2023/03/absttsx-absolute-software-corporation.html
Outlook: Absolute Software Corporation is assigned a short-term Ba1 & long-term Ba1 estimated rating.
Dominant Strategy: Sell
Time series to forecast n: 10 Mar 2023 for (n+3 month)
Methodology: Active Learning (ML)

## Abstract

The Absolute Software Corporation prediction model is evaluated with Active Learning (ML) and Logistic Regression [1,2,3,4] and it is concluded that the ABST:TSX stock is predictable in the short/long term. According to price forecasts for the (n+3 month) period, the dominant strategy among the neural networks is: Sell

## Key Points

1. Which neural network is best for prediction?
2. Short/Long Term Stocks
3. What is the use of the Markov decision process?

## ABST:TSX Target Price Prediction Modeling Methodology

We consider the Absolute Software Corporation decision process with Active Learning (ML), where $A$ is the set of discrete actions of ABST:TSX stock holders, $F$ is the set of discrete states, $P : S \times F \times S \to R$ is the transition probability distribution, $R : S \times F \to R$ is the reaction function, and $\gamma \in [0, 1]$ is a move factor for expectation. [1,2,3,4]

$$F(\text{Logistic Regression}) [5,6,7] = \begin{bmatrix} p_{11} & p_{12} & \dots & p_{1n} \\ & \vdots \\ p_{j1} & p_{j2} & \dots & p_{jn} \\ & \vdots \\ p_{k1} & p_{k2} & \dots & p_{kn} \\ & \vdots \\ p_{n1} & p_{n2} & \dots & p_{nn} \end{bmatrix} \times R(\text{Active Learning (ML)}) \times S(n) \to (n+3\text{ month})$$

$$\vec{R} = (r_1, r_2, r_3)$$

n: Time series to forecast
p: Price signals of ABST:TSX stock
j: Nash equilibria (Neural Network)
k: Dominated move
a: Best response for target price

For further technical information as per how our model works, we invite you to visit the article below:

How do AC Investment Research machine learning (predictive) algorithms actually work?

## ABST:TSX Stock Forecast (Buy or Sell) for (n+3 month)

Sample Set: Neural Network
Stock/Index: ABST:TSX Absolute Software Corporation
Time series to forecast n: 10 Mar 2023 for (n+3 month)

According to price forecasts for the (n+3 month) period, the dominant strategy among the neural networks is: Sell

X axis: *Likelihood% (The higher the percentage value, the more likely the event will occur.)
Y axis: *Potential Impact% (The higher the percentage value, the more likely the price will deviate.)
Z axis (Grey to Black): *Technical Analysis%

## IFRS Reconciliation Adjustments for Absolute Software Corporation

1. Leverage is a contractual cash flow characteristic of some financial assets. Leverage increases the variability of the contractual cash flows with the result that they do not have the economic characteristics of interest. Stand-alone option, forward and swap contracts are examples of financial assets that include such leverage. Thus, such contracts do not meet the condition in paragraphs 4.1.2(b) and 4.1.2A(b) and cannot be subsequently measured at amortised cost or fair value through other comprehensive income.
2. Amounts presented in other comprehensive income shall not be subsequently transferred to profit or loss. However, the entity may transfer the cumulative gain or loss within equity.
3. The fair value of a financial instrument at initial recognition is normally the transaction price (ie the fair value of the consideration given or received, see also paragraph B5.1.2A and IFRS 13). However, if part of the consideration given or received is for something other than the financial instrument, an entity shall measure the fair value of the financial instrument.
For example, the fair value of a long-term loan or receivable that carries no interest can be measured as the present value of all future cash receipts discounted using the prevailing market rate(s) of interest for a similar instrument (similar as to currency, term, type of interest rate and other factors) with a similar credit rating. Any additional amount lent is an expense or a reduction of income unless it qualifies for recognition as some other type of asset. 4. A net position is eligible for hedge accounting only if an entity hedges on a net basis for risk management purposes. Whether an entity hedges in this way is a matter of fact (not merely of assertion or documentation). Hence, an entity cannot apply hedge accounting on a net basis solely to achieve a particular accounting outcome if that would not reflect its risk management approach. Net position hedging must form part of an established risk management strategy. Normally this would be approved by key management personnel as defined in IAS 24. *International Financial Reporting Standards (IFRS) adjustment process involves reviewing the company's financial statements and identifying any differences between the company's current accounting practices and the requirements of the IFRS. If there are any such differences, neural network makes adjustments to financial statements to bring them into compliance with the IFRS. ## Conclusions Absolute Software Corporation is assigned short-term Ba1 & long-term Ba1 estimated rating. Absolute Software Corporation prediction model is evaluated with Active Learning (ML) and Logistic Regression1,2,3,4 and it is concluded that the ABST:TSX stock is predictable in the short/long term. According to price forecasts for (n+3 month) period, the dominant strategy among neural network is: Sell ### ABST:TSX Absolute Software Corporation Financial Analysis* Rating Short-Term Long-Term Senior Outlook*Ba1Ba1 Income StatementB3B1 Balance SheetBaa2Baa2 Leverage RatiosBa2Ba3 Cash FlowBaa2Baa2 Rates of Return and ProfitabilityB2Baa2 *Financial analysis is the process of evaluating a company's financial performance and position by neural network. It involves reviewing the company's financial statements, including the balance sheet, income statement, and cash flow statement, as well as other financial reports and documents. How does neural network examine financial reports and understand financial state of the company? ### Prediction Confidence Score Trust metric by Neural Network: 78 out of 100 with 682 signals. ## References 1. Chernozhukov V, Chetverikov D, Demirer M, Duflo E, Hansen C, Newey W. 2017. Double/debiased/ Neyman machine learning of treatment effects. Am. Econ. Rev. 107:261–65 2. Wu X, Kumar V, Quinlan JR, Ghosh J, Yang Q, et al. 2008. Top 10 algorithms in data mining. Knowl. Inform. Syst. 14:1–37 3. Bamler R, Mandt S. 2017. Dynamic word embeddings via skip-gram filtering. In Proceedings of the 34th Inter- national Conference on Machine Learning, pp. 380–89. La Jolla, CA: Int. Mach. Learn. Soc. 4. Rumelhart DE, Hinton GE, Williams RJ. 1986. Learning representations by back-propagating errors. Nature 323:533–36 5. Hastie T, Tibshirani R, Friedman J. 2009. The Elements of Statistical Learning. Berlin: Springer 6. Athey S, Tibshirani J, Wager S. 2016b. Generalized random forests. arXiv:1610.01271 [stat.ME] 7. S. Bhatnagar, R. Sutton, M. Ghavamzadeh, and M. Lee. Natural actor-critic algorithms. 
Automatica, 45(11): 2471-2482, 2009

## Frequently Asked Questions

Q: What is the prediction methodology for ABST:TSX stock?
A: ABST:TSX stock prediction methodology: We evaluate the prediction models Active Learning (ML) and Logistic Regression.

Q: Is ABST:TSX stock a buy or sell?
A: The dominant strategy among the neural networks is to Sell ABST:TSX stock.

Q: Is Absolute Software Corporation stock a good investment?
A: The consensus rating for Absolute Software Corporation is Sell and it is assigned a short-term Ba1 & long-term Ba1 estimated rating.

Q: What is the consensus rating of ABST:TSX stock?
A: The consensus rating for ABST:TSX is Sell.

Q: What is the prediction period for ABST:TSX stock?
A: The prediction period for ABST:TSX is (n+3 month).
2023-03-20 21:43:35
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 2, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.7118340730667114, "perplexity": 8180.243877391044}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2023-14/segments/1679296943562.70/warc/CC-MAIN-20230320211022-20230321001022-00364.warc.gz"}
https://www.physicsforums.com/threads/a-limit.69588/
# A Limit

1. Apr 1, 2005

### Spectre5

I know that the limit of cos(t) as t goes to infinity is undefined because cosine oscillates between plus and minus one. Now I have this limit to compute:

Limit of [ (t * cos(t)) / (e^(t)) ] as t goes to infinity

I know that the answer is 0 and I intuitively know why (because the exponential grows far quicker than just t). But how do I go about proving this... the top is undefined, and using L'Hospital's rule gets nowhere because a (t * sin(t)) will still be in the numerator.

So how do I go about doing this? Can I just ignore the effects of the cos and just use L'Hospital's rule for t/e^t?

Thanks for any help

2. Apr 1, 2005

### whozum

The limit of the product is the product of the limits. Try separating your limit into things you can work with, and go from there.

3. Apr 1, 2005

### Jameson

You could use the formal Delta-Epsilon definition of a limit, but as you said earlier, you can intuitively look at the limit. Sine will oscillate between -1 and 1 forever, but x will continue to grow infinitely large. So the result will be that the y-values of the function will get increasingly smaller for both positive and negative numbers. It will jump between very small positive and very small negative numbers, but both numbers are going to 0.

Last edited: Apr 1, 2005

4. Apr 1, 2005

### dextercioby

Use this

$$-\frac{t}{e^{t}}\leq \frac{t\cos t}{e^{t}}\leq +\frac{t}{e^{t}}$$

and then the "sqeeze theorem".

Daniel.

5. Apr 1, 2005

### Spectre5

thanks for the help everyone

dextercioby: ahhhh forgot about the sqeeze therom :) thanks

6. Apr 1, 2005

### dextercioby

That is "squeeze" :tongue2: Rats!! I hate mis-spelling :yuck:

Daniel.

7. Apr 1, 2005

### Spectre5

Ya... and mine is not "sqeeze therom" but "squeeze theorem"... I did both wrong haha...
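For readers who want to see the squeeze numerically, here is a quick sketch (the sampled values of t are illustrative only) that tabulates t·cos(t)/e^t against the bound t/e^t from the inequality above:

```cpp
// Numeric sanity check of the limit: |t*cos(t)/e^t| <= t/e^t, and the
// bound t/e^t shrinks rapidly as t grows, squeezing the function to 0.
#include <cmath>
#include <cstdio>

int main() {
    for (double t = 1.0; t <= 40.0; t *= 2.0) {
        double f = t * std::cos(t) / std::exp(t);
        double bound = t / std::exp(t);
        std::printf("t = %5.1f   f(t) = % .3e   bound = %.3e\n", t, f, bound);
    }
    return 0;
}
```

By t = 32 the bound is already on the order of 10^-13, which is the squeeze theorem's argument in numbers: both ±t/e^t go to 0, so the oscillating function trapped between them must as well.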
2016-10-25 03:22:59
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 1, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.7593123912811279, "perplexity": 2767.9775578845934}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2016-44/segments/1476988719877.27/warc/CC-MAIN-20161020183839-00115-ip-10-171-6-4.ec2.internal.warc.gz"}